• chalupapocalypse@lemmy.world · 12 hours ago

    They would have to hire a shitload of people to police it all, along with the rest of the questionable shit on there, like jailbait or whatever else they turned a blind eye to until it showed up on the news.

    Not saying it’s right, but from a business standpoint it makes sense.

    • brucethemoose@lemmy.world · 12 hours ago

      Don’t they flag stuff automatically?

      Not sure what they’re using on the backend, but open-source LLMs that take image inputs are good now. They can read garbled text from a meme and interpret it in context, easily. And this is apparently a field that’s been refined for years anyway, because of the legal need for CSAM detection.
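
      Something like the sketch below is roughly all the glue that takes. Totally untested, and the endpoint, model name, and prompt are placeholders; it assumes a self-hosted OpenAI-compatible server (vLLM, llama.cpp, etc.) serving some open vision model, not anything Reddit actually runs:

      ```python
      # Hypothetical flagging call against a self-hosted, OpenAI-compatible
      # endpoint serving an open-weights vision model. The URL, model name,
      # and prompt are placeholders for illustration only.
      import base64
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

      def describe_and_flag(path: str) -> str:
          # Encode the image as a base64 data URL, the format the chat API expects.
          with open(path, "rb") as f:
              b64 = base64.b64encode(f.read()).decode()

          resp = client.chat.completions.create(
              model="some-open-vision-model",  # placeholder model name
              messages=[{
                  "role": "user",
                  "content": [
                      {"type": "text",
                       "text": "Transcribe any text in this image and say "
                               "whether it appears to violate content policy."},
                      {"type": "image_url",
                       "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                  ],
              }],
          )
          return resp.choices[0].message.content
      ```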

      • T156@lemmy.world · 10 hours ago

        They do, but they’d still need someone to go through the flags and check them. Reddit gets away with it the same way Facebook groups do: by offloading moderation onto users, with the admins only roped in for ostensibly big things like ban evasion and site-wide bans, or, lately, if the moderators don’t toe the company line exactly.

        I doubt they would use an LLM for that. It’s very expensive and slow, especially at the volume of images they would need to process. Existing CSAM detectors aren’t as expensive and are faster: they basically compute a hash of the image and compare it against a database of known CSAM hashes.
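
        For reference, the hash matching is conceptually just this. Toy example using the open imagehash library rather than the production tooling (PhotoDNA and the like), and the known-hash set and distance threshold are made up:

        ```python
        # Toy perceptual-hash lookup: compute a 64-bit pHash of the image and
        # check whether it's within a few bits of anything in a known-hash set.
        # The hash value and threshold below are placeholders for illustration.
        from PIL import Image
        import imagehash

        KNOWN_HASHES = {imagehash.hex_to_hash("fedcba9876543210")}  # placeholder
        MAX_DISTANCE = 5  # bits allowed to differ and still count as a match

        def matches_known(path: str) -> bool:
            h = imagehash.phash(Image.open(path))  # robust to resizing/re-encoding
            return any(h - known <= MAX_DISTANCE for known in KNOWN_HASHES)
        ```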