  • I totally agree that both terms imply intent, but IMHO “hallucinating” suggests not only more agency than an LLM has, but also less culpability. Like, “Aw, it’s sick and hallucinating, otherwise it would tell us the truth.”

    Whereas calling it a bullshit machine still implies more intentionality than an LLM is capable of, but at least it skews the perception of that intention more in the direction of “It’s making stuff up,” which seems closer to the mechanisms behind an LLM to me.

    I also love that the researchers actually took the time not only to provide the technical definition of bullshit but also to sub-categorize it, lol.

  • Honestly, a search engine companion is probably its least offensive use case, you’re correct. Mostly, it makes me so mad because they are polluting our entire collective knowledge base: there is no way to watermark anything as AI-generated (especially text, as opposed to images), which means that every search you make from here on out returns worse results. It’s like being forced to share the road with self-driving Teslas because the self-driving car companies (especially Tesla) have made us all involuntarily part of their beta test.

    The “screw everyone else trying to use the same public resource” mentality is out of control.