Right now, on Stack Overflow, Luigi Mangione’s account has been renamed. Despite having contributed fruitfully to the network, he has been stripped of his name, and his account is now known as “user4616250”.

This appears to violate the Creative Commons license (CC BY-SA, which requires attribution) under which Stack Overflow content is posted.

When the author asked about this:

As of yet, Stack Exchange has not replied to the above post, but they did promptly give me a year-long ban, within hours, for merely raising the question. Of course, they drafted a letter crediting the action to events from weeks earlier, in which I had merely upvoted Luigi’s contributions and placed bounties on a few of his questions.

  • FaceDeer@fedia.io · +200/-3 · 4 days ago

    Stack Overflow has been toxic for a long time already. It’s one of the things that a lot of people seem pleased to see AI devour.

    • oce 🐆@jlai.lu · +64/-1 · 4 days ago

      I’ve read it’s still highly valued, because people will keep asking questions there when LLMs can’t answer them, so it remains a precious source of curated, post-LLM Q&A.

        • Goun@lemmy.ml · +40 · 4 days ago

          Until it’s just AIs answering questions asked by other AIs while human admins block human accounts…

        • FaceDeer@fedia.io · +2/-8 · 4 days ago

          Yeah, AI has become good enough at this point that you can provide it with a large blob of context material - such as API documentation, source code, etc. - and then have it come up with its own questions and answers about it to create a corpus of “synthetic data” to train on. And you can fine-tune the synthetic data to fit the format and style that you want, such as telling it not to be snarky or passive-aggressive or whatever.
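As a hedged sketch of what such a pipeline can look like (every name here is illustrative, not a real API; the `ask_model` stub stands in for whatever LLM client you actually use):

```python
# Illustrative sketch of a synthetic Q&A pipeline over a blob of context
# material (API docs, source code, etc.). Not a real library API.
STYLE = "Answer plainly; no snark or passive-aggressiveness."

def ask_model(prompt: str) -> str:
    # Stub so the sketch runs; a real pipeline would call an LLM here.
    return f"stub response ({len(prompt)} chars of prompt)"

def synthesize_qa(context: str, n_pairs: int = 3) -> list[dict]:
    """Generate question/answer pairs grounded in the given context,
    for use as synthetic training data in a chosen format and style."""
    pairs = []
    for i in range(n_pairs):
        q = ask_model(f"{STYLE}\nGiven this material, write question #{i + 1} "
                      f"a developer might ask:\n{context}")
        a = ask_model(f"{STYLE}\nUsing only this material, answer:\n"
                      f"{context}\n\nQ: {q}")
        pairs.append({"question": q, "answer": a})
    return pairs

print(len(synthesize_qa("def add(a, b): return a + b")))  # → 3
```

The style constraint lives in the prompt, which is how the "don't be snarky" fine-tuning of the synthetic corpus would be steered.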

            • naught101@lemmy.world · +10/-3 · 4 days ago

              That’s not really true though… They come up with brand new sentences all the time.

              • justOnePersistentKbinPlease@fedia.io · +1 · 2 days ago

                No, they can only take from things in their models.

                Moreover, they all rely on statistics to produce their results. What you get from an LLM is essentially an average* of the model’s data. This is why feeding LLM output back into a model is so toxic: it’s already the average.

                • Yes, I know it’s not really the average, but for laymen it’s a good-enough comparison.
                • naught101@lemmy.world · +2 · 1 day ago

                  They only take from the statistical distributions of words given the preceding words (which is why they never say “the the”, and why the grammar is nearly always correct). But that doesn’t mean whole sentences are lifted from the source material. There are near-infinite paths through those word distributions, and many have never been produced by humans, so LLMs do produce sentences that have never been uttered before.

                  They can’t produce new conceptual spaces the way humans sometimes can, but they can produce new combinations within existing ones.
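A toy bigram model makes the point concrete (a deliberately tiny sketch, nothing like a real LLM): random walks through next-word distributions learned from a handful of sentences can yield word sequences that appear nowhere in the training text, while every adjacent word pair still comes from the data.

```python
import random
from collections import defaultdict

# Tiny training corpus; the "model" is just bigram counts.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased the dog",
]

bigrams = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)  # next-word choices observed after word a

def sample_sentence(start="the", length=6, seed=0):
    """Random walk through the learned next-word distributions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = bigrams.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(sample_sentence())  # may or may not match any training sentence
```

Grammar-level regularities (no “the the”) are baked into the pair statistics, but whole outputs are new combinations, which is the distinction being drawn above.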

            • FaceDeer@fedia.io · +7/-12 · 4 days ago

              And yet the synthetic training data works, and models trained on it keep scoring higher on the benchmarks than ones trained on raw Internet data. Claim what you want about it; the results speak louder.

              • JcbAzPx@lemmy.world · +1 · 4 days ago

                This is the peak, though. They require new data to get better, but most of the available new data is adulterated with AI slop. Once they start eating themselves, it’s over.

                • FaceDeer@fedia.io · +2 · 4 days ago

                  You are speaking of “model collapse”, I take it? That doesn’t happen in the real world with properly generated and curated synthetic data. Model collapse has only been demonstrated in highly artificial circumstances where many generations of models were “bred” exclusively on the outputs of previous generations, without the curation and blend of additional new data that real-world models are trained with.

                  There is no sign that we are at “the peak” of AI development yet.
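For what it’s worth, the artificial setup is easy to caricature in a few lines (a Wright–Fisher-style resampling toy, not a real training run): when each “generation” is fit only to a finite sample of the previous one, the number of distinct tokens can never increase.

```python
import random

def resample(population, n, rng):
    # "Train" on the previous generation's output: the empirical sample
    # becomes the entire next model. No fresh data is ever mixed in.
    return [rng.choice(population) for _ in range(n)]

rng = random.Random(0)
population = list(range(50))   # generation 0: 50 distinct "tokens"

history = [len(set(population))]
for _ in range(40):
    population = resample(population, n=30, rng=rng)
    history.append(len(set(population)))

print(history[0], "->", history[-1])  # distinct tokens only ever shrink
```

Mixing a fraction of fresh, curated real data back in at each generation, as real pipelines do, breaks this monotone loss of diversity, which is exactly the distinction the comment above is drawing.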

                    • JcbAzPx@lemmy.world · +1 · 4 days ago

                    We’re already seeing signs of incestuous data input causing damage. The more that AI takes over, the less capable it will be.

                • Ledivin@lemmy.world · +2/-1 · 4 days ago

                  Nah, I’m still giving that one to the blockchain. LLMs are going to be useful for a while, but Ethereum still hasn’t figured out a real use, and they’re the only ones that haven’t given up and moved fully into coin gambling.

      • jawa21@lemmy.sdf.org · +18 · 4 days ago

        That might be true if any human could reasonably ask a question there now. Ask a question, and you’re likely to see it removed for any of a variety of reasons.

        • oce 🐆@jlai.lu · +3 · edited · 4 days ago

          To be honest, I had a bad experience a few years ago when I wanted to try contributing, and I never tried again. Still, it’s really hard to strike a balance between freedom and constraints in organically curated Q&A, so I try not to be too quick to judge them, considering the service they have indubitably provided to millions of people.