- cross-posted to:
- technology@lemmy.world
Right now, on Stack Overflow, Luigi Mangione’s account has been renamed. Despite having fruitfully contributed to the network, he has been stripped of his name, and his account is now known as “user4616250”.
This appears to violate the Creative Commons license under which Stack Overflow content is posted.
When the author asked about this:
As of yet, Stack Exchange has not replied to the above post, but they did promptly, within hours, give me a year-long ban for merely raising the question. Of course, they drafted a letter crediting the action to other events from weeks earlier, where I merely upvoted contributions from Luigi and bountied a few of his questions.
No, they can only take from things in their models.
Moreover, all of them use statistics, typically Bayesian, to get their results. What you get from an LLM is essentially an average of the model data. This is why feeding LLM output back into a model is so toxic: it’s already the average.
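A toy illustration of that “feeding the average back in” problem (my own sketch, not from the thread): if each new generation of data is built from averages of the previous one, the spread of the data collapses within a few rounds.

```python
import random
import statistics

random.seed(0)
# start with 1000 points from a standard normal distribution
data = [random.gauss(0, 1) for _ in range(1000)]

variances = []
for generation in range(5):
    variances.append(statistics.pvariance(data))
    # each new point is an average of a handful of old points,
    # mimicking model output that tends toward the mean of its training data
    data = [statistics.mean(random.sample(data, 8)) for _ in range(1000)]

print(variances)
```

Each generation’s variance is roughly an eighth of the previous one, so after only a few rounds almost all the original diversity is gone.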
They only draw from the statistical distributions of words in the context of the preceding words (which is why they never say “the the”, and why the grammar is nearly always correct). But that doesn’t mean whole sentences are lifted from the source material. There are near-infinite paths through those word distributions, many of which have never been produced by humans, so LLMs do produce sentences that have never been uttered before.
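A minimal sketch of the idea (my illustration, with a made-up toy corpus): even a simple bigram model samples the next word only from transitions it has actually seen, so it never emits an unseen pair like “the the”, yet it can chain observed pairs into sentences that never appeared in the training text.

```python
import random
from collections import defaultdict

# tiny toy corpus; real LLMs condition on far longer contexts
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# count word -> next-word transitions
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(start, steps, rng):
    out = [start]
    for _ in range(steps):
        nxt = counts[out[-1]]
        words, weights = zip(*nxt.items())
        # sample the next word in proportion to how often it followed
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(" ".join(generate("the", 8, random.Random(1))))
```

Mixing transitions from both sentences (e.g. “the cat sat on the rug”) yields text absent from the corpus, while an unseen pair is impossible.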
They couldn’t produce new conceptual spaces the way humans sometimes can, but they can produce new combinations within existing ones.
Except they are. You ask about Discworld characters and it gives you direct full quotes from the books.