Given how Reddit now makes money by selling its data to AI companies, I was wondering what the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is about many different servers interacting with each other.
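For reference, a minimal robots.txt along these lines might look something like the sketch below. GPTBot is OpenAI's crawler token, CCBot is Common Crawl's, and Google-Extended is Google's AI-training opt-out token; there are more, and a site has to list each one it wants to block:

```
# Opt out of some known AI training crawlers.
# Note this only covers THIS domain, and only crawlers that honor robots.txt.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```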
So if my kbin/lemmy or Mastodon server blocks OpenAI's crawler via robots.txt, what does that even mean when people on other servers that don't block this crawler boost me on Mastodon, or when I reply to their posts? I suspect that unless every server I interact with blocks the same AI crawlers, I can't prevent my posts from being used as AI training data?
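To make the "every server would have to block it" problem concrete, here's a rough sketch using Python's stdlib robots.txt parser that checks whether a given crawler is blocked on each instance your posts might federate to. The instance list is just a made-up example; the point is that a single permissive instance is enough for your federated copies to be crawlable:

```python
import urllib.robotparser

# Hypothetical list of instances where copies of your posts end up.
instances = ["https://mastodon.social", "https://lemmy.world"]

for base in instances:
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{base}/robots.txt")
    try:
        rp.read()  # fetches and parses the instance's robots.txt
    except OSError:
        print(f"{base}: could not fetch robots.txt")
        continue
    allowed = rp.can_fetch("GPTBot", f"{base}/")
    print(f"{base}: GPTBot {'allowed' if allowed else 'blocked'}")
```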
Yup. There are dumps of Reddit's entire archive of comments and posts available via torrent; I suspect the only reason Reddit is getting paid for that stuff right now is that it's comparatively cheap legal ass-covering. Anyone who's a little daring could use those dumps to train an LLM, and if they prepped the data well enough it would be hard to even notice.