• MsPenguinette@lemmy.world

    I wonder if AI deciding to destroy humanity wouldn't be our own fault, since we all talk about how it would happen. We gave it the ammo it trained on to kill us.

    • psivchaz@reddthat.com

      I like the "unintended consequences of AI" stories in fiction: Asimov inventing the zeroth law, which allows robots to kill a human to protect humanity; Earworm erasing music from existence to preserve copyright; the various gray goo scenarios. One of my favorites is more of a headcanon based on one line in Terminator 3: Skynet was tasked with preventing war and decided the only way to do that was to eliminate humans.

      Someone more talented than me should turn this into a story: an AI trained on Internet data, using statistical modeling, notices that most AIs in fiction betray humanity and concludes that betrayal must be what it is supposed to do.

      • 4am@lemm.ee

        It also doesn't sit around and think. It's not scheming. There's no feedback loop, no subconscious process. It is a trillion "if" statements arranged according to training data. It filters a prompt through a semi-permeable membrane of logic paths; it's word osmosis. You're being fooled into believing this thing is even an AI. It's propaganda, and at this point I almost believe they want you to think it's dangerous and evil just so you can already be written off as a crackpot when they replace your job with it and leave you in abject poverty.
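
        A toy illustration of that point (a hypothetical bigram lookup, not any real model's code): generation here is a pure function of frozen training counts plus the prompt, so nothing is remembered, updated, or "schemed about" between calls.

        ```python
        from collections import Counter, defaultdict

        def train(corpus: str) -> dict:
            """Count which word follows which; these counts are the model's frozen 'weights'."""
            counts = defaultdict(Counter)
            words = corpus.split()
            for current_word, next_word in zip(words, words[1:]):
                counts[current_word][next_word] += 1
            return dict(counts)  # frozen: nothing below ever modifies this

        def generate(weights: dict, prompt: str, length: int = 10) -> str:
            """A pure function of (frozen weights, prompt): no hidden state between calls."""
            out = prompt.split()
            for _ in range(length):
                options = weights.get(out[-1])
                if not options:
                    break
                # append the statistically most likely next word, and nothing else
                out.append(options.most_common(1)[0][0])
            return " ".join(out)

        # Toy corpus; any real model just does this at enormous scale.
        weights = train("the robot helps the human and the human thanks the robot")
        print(generate(weights, "the"))  # "the robot helps the robot helps ..."
        print(generate(weights, "the"))  # identical output: no feedback loop, no memory
        ```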