Today, the child safety organization Thorn, in partnership with the cloud-based AI solutions provider Hive, announced the release of an AI model designed to flag unknown child sexual abuse material (CSAM) at upload. It's among the first AI technologies aiming to expose unreported CSAM at scale.

  • AwesomeLowlander@sh.itjust.works · 1 month ago

    “detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster.”

    False positives don’t matter if they stick to the stated purpose of making manual CSAM detection easier and faster; see the sketch below.
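
    To make that concrete, here’s a minimal sketch of what score-based triage looks like. The `score_upload` function and the threshold are assumptions for illustration, not Thorn’s or Hive’s actual interface:

    ```python
    # Hypothetical sketch of risk-score triage; score_upload and the
    # threshold value are illustrative assumptions, not a real API.

    REVIEW_THRESHOLD = 0.7  # assumed cutoff; a real deployment would calibrate this


    def triage(upload: bytes, score_upload) -> str:
        """Route an upload based on its model-assigned risk score."""
        score = score_upload(upload)  # hypothetical classifier call, returns 0.0-1.0
        if score >= REVIEW_THRESHOLD:
            return "human_review"  # a person still makes the final call
        return "no_action"


    # With a stand-in scorer: a high score queues the upload for a reviewer
    # rather than triggering automated action, so a false positive costs
    # reviewer time, not a user account.
    if __name__ == "__main__":
        fake_scorer = lambda upload: 0.91
        print(triage(b"...", fake_scorer))  # -> human_review
    ```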

    • Voroxpete@sh.itjust.works · 1 month ago (edited)

      The problem is that they won’t.

      Yes, AI tools, in the hands of skilled people, can be very helpful.

      But “AI” under capitalism doesn’t mean “more effective workers”; it means “fewer workers.” The issue isn’t technological so much as cultural. You fundamentally cannot convince an MBA not to try to automate away jobs.

      (It’s not even a money thing; it’s about getting rid of all those pesky “workers’ rights” that workers like to bring with us)

      • AwesomeLowlander@sh.itjust.works · 1 month ago

        Here’s the thing: this is unequivocally one of the applications where AI can be genuinely useful. It can potentially do a lot of good. Yes, MBAs could screw it up, like they screw up everything else in society. That doesn’t mean we shouldn’t be happy that we’ve created this new tech.