The narrative now being pushed by OpenAI, Microsoft, and freshly minted White House “AI czar” David Sacks to explain why DeepSeek was able to create a large language model that outpaces OpenAI’s while spending orders of magnitude less money and using older chips is this: DeepSeek used OpenAI’s data unfairly and without compensation. Sound familiar?

Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained the R1 model that is taking the AI world by storm on the outputs of OpenAI models.

It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those from whom it has been taking, is now complaining about the very practices by which it built its company.

OpenAI is currently being sued by the New York Times for training on its articles, and its argument is that this is perfectly fine under copyright law fair use protections.

“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,” OpenAI wrote in a blog post. In its motion to dismiss in court, OpenAI wrote “it has long been clear that the non-consumptive use of copyrighted material (like large language model training) is protected by fair use.”

If OpenAI argues that it is legal for the company to train on whatever it wants for whatever reason it wants, then it stands to reason that it doesn’t have much of a leg to stand on when competitors use common machine learning strategies to make their own models.

  • maplebar@lemmy.world · 7 days ago

    If these guys thought they could out-bootleg the fucking Chinese then I have an unlicensed t-shirt of Nicky Mouse with their name on it.

    • sunzu2@thebrainbin.org · 7 days ago

      The thing is, the Chinese did not just bootleg… they took what was out there and made it better.

      Their shit is now likely objectively “better” (TBD tho, we need some time)… American parasites in shambles asking Daddy Sam to intervene after they already blocked Nvidia GPUs and shit.

      Still got cucked and now crying about it to the world. Pathetic.

      • atrielienz@lemmy.world · 7 days ago

        They also already rolled back the Biden admin’s order for AI protections, so they don’t even have the benefit of those. There’s supposedly a Trump admin AI order now in place, but it doesn’t have the same scope at all. So Altman and pals may just be SOL. There’s no regulatory body to tell except the courts, and China literally doesn’t care about those.

  • Rooty@lemmy.world · 7 days ago

    I love how die hard free market defenders turn into fuming protectionists the second their hegemony is threatened.

  • mArc@lemmy.sdf.org · 7 days ago

    the Chinese realised OpenAI forgot to open source their model and methodology so they just open sourced it for them 😂

  • fallowseed@lemmy.world · 7 days ago

    Everyone concerned about their privacy going to China: look at how easy it is to get it from the hands of our overlord spymasters, who’ve already snatched it from us.

  • mechoman444@lemmy.world · 7 days ago

    I can’t believe we’re still on this nonsense about AI stealing data for training.

    I’ve had this argument so many times before. Y’all need to figure out which data you want free and which data you want to pay for, because you can’t have it both ways.

    Either the data is free or it’s paid for. For everyone, including individuals and corporations.

    You can’t have data be free for some people and paid for for others. It doesn’t work that way; we don’t have the infrastructure to support this kind of thing.

    For example, Wikipedia can’t make its data available for AI training at a price and free for everyone else. You can just go to wikipedia.com and read all the data that you want. It’s available for free: there’s no paywall, no subscription, no account to make, no password to put in, no username to think of.

    Either all data is free or it’s all paid for.

    • Omega_Jimes@lemmy.ca · 7 days ago

      I mean, sure, but the issue is that the rules aren’t being applied on the same level. The data in question isn’t free for you, it’s not free for me, but it’s free for OpenAI. They don’t face any legal consequences, whereas humans in the USA are prosecuted, with an average fine of $266,000 and an average prison sentence of 25 months.

      OpenAI has pirated, violated copyright on, and distributed more copyrighted material than any individual human is reasonably capable of, and faces no consequences.

      https://www.splaw.us/blog/2021/02/looking-into-statistics-on-copyright-violations/

      https://www.patronus.ai/blog/introducing-copyright-catcher

      My use of the term “human” is awkward, but US law considers corporations people, so I tried to differentiate.

      I’m in favour of free and open data, but I’m also of the opinion that the rules should apply to everyone.

    • Lifter@discuss.tchncs.de · 7 days ago

      Many licences have different rules for redistribution, which I think is fair. The site is free to use but it’s not fair to copy all the data and make a competitive site.

      Of course wikipedia could make such a license. I don’t think they have though.

      How is the lack of infrastructure an argument for allowing something morally incorrect? We can take that argument ad absurdum by saying there are more people with guns than there are cops, therefore killing must be morally correct.

      • mechoman444@lemmy.world · 6 days ago

        The core infrastructure issue is distinguishing between queries made by individuals and those made by programs scraping the internet for AI training data. The answer is that you can’t. The way data is presented online makes such differentiation impossible.

        Either all data must be placed behind a paywall, or none of it should be. Selective restriction is impractical. Copyright is not the central issue, as AI models do not claim ownership of the data they train on.

        If information is freely accessible to everyone, then by definition, it is free to be viewed, queried, and utilized by any application. The copyrighted material used in AI training is not being stored verbatim—it is being learned.

        In the same way, an artist drawing inspiration from Michelangelo or Raphael does not need to compensate their estates. They are not copying the work but rather learning from it and creating something new.

        • Lifter@discuss.tchncs.de · 4 days ago

          I disagree. Machines aren’t “learning”. You are anthropomorphising them. They are storing the original works, just in a very convoluted way that makes it hard to know which works were used when generating a new one.

          I tend to see it as they used “all the works” they trained on.

          For the sake of argument, assume I could make an “AI” mesh together images but then only train it on two famous works of art. It would spit out a split screen of half the first one to the left and half of the other to the right. This would clearly be recognized as copying the original works but it would be a “new piece of art”, right?

          What if we add more images? At some point it would just be a jumbled mess, but still consist wholly of copies of original art. It would just be harder to demonstrate.

          Morally - not practically - is the sophistication of the AI in jumbling the images together really what should constitute fair use?
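The two-image thought experiment above can be sketched in a few lines. This is a deliberately naive toy (the function and variable names are invented for illustration, and real generative models do not work this way); the point is only that the output is a “new” image in which every pixel is copied verbatim from one of the originals:

```python
# Toy version of the "mesh two works together" thought experiment:
# the result looks like a new piece, yet every pixel traces back
# to exactly one of the two source works.

def mesh(image_a, image_b):
    """Return the left half of image_a glued to the right half of image_b."""
    meshed = []
    for row_a, row_b in zip(image_a, image_b):
        mid = len(row_a) // 2
        meshed.append(row_a[:mid] + row_b[mid:])
    return meshed

# Two tiny "famous works" as 2x4 pixel grids (1s and 2s standing in for colors).
work_1 = [[1, 1, 1, 1],
          [1, 1, 1, 1]]
work_2 = [[2, 2, 2, 2],
          [2, 2, 2, 2]]

combined = mesh(work_1, work_2)  # left half all 1s, right half all 2s
```

With only two sources the copying is obvious; the argument is that adding thousands more sources only makes it harder to demonstrate, not categorically different.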

          • mechoman444@lemmy.world · 4 days ago

            That’s literally not remotely what llms are doing.

            And they most certainly do learn in the common sense of the term. They even use neural nets which mimic the way neurons function in the brain.

            • Lifter@discuss.tchncs.de · 3 days ago

              Mimicking, perhaps, or “inspired by”, but neural nets in machine learning don’t work at all like real neural nets. They are just variables in a huge matrix multiplication.

              FYI, I do have a Master’s degree in Machine Learning.

              • mechoman444@lemmy.world · 3 days ago

                Yes I also have a master’s and a PhD in machine learning as well which automatically qualifies me as an authority figure.

                And I can clearly say that you are wrong.

    • LengAwaits@lemmy.world · 7 days ago

      I tend to think that information should be free, generally, so I would probably be fine with “OpenAI the non-profit” taking copyrighted data under fair-use, but I don’t extend that thinking to “OpenAI the for-profit company”.

  • Roflmasterbigpimp@lemmy.world · 7 days ago

    I knew something was wrong with this. I was wrong about what it was in the end, but I knew something was up. But noooo, I’m just a China-hater and USA-fanboy -.-

  • Nightwatch Admin@feddit.nl · 7 days ago

    It is effing hilarious. First, OpenAI & friends steal creative works to “train” their LLMs. Then they are insanely hyped for what amounts to glorified statistics, get “valued” at insane amounts while burning money faster than a Californian forest fire. Then, a competitor appears that has the same evil energy but slightly better statistics… bam. A trillion of “value” just evaporates as if it never existed.
    And then suddenly people are complaining that DeepSuck is “not privacy friendly” and stealing from OpenAI. Hahaha. Fuck this timeline.

      • Ulrich@feddit.org · 7 days ago

        That’s why “value” is in quotes. It’s not that it didn’t exist; it’s just that it’s purely speculative.

        Hell Nvidia’s stock plummeted as well, which makes no sense at all, considering Deepseek needs the same hardware as ChatGPT.

        Stock investing is just gambling on whatever is public opinion, which is notoriously difficult because people are largely dumb and irrational.

        • 3dmvr@lemm.ee · 7 days ago

          they need less powerful hardware, and less hardware in general, tho; they acted like they needed more

          • humanspiral@lemmy.ca · 7 days ago

            Chinese GPUs are not far behind in gflops. Nvidia’s advantage is CUDA, drivers, and interconnection clusters.

            AFAIU, DeepSeek did use CUDA.

            In general, computing advances have rarely resulted in using half the computers, though I could be wrong at the datacenter/hosting level at the maturity stage.

        • Alph4d0g@discuss.tchncs.de · 7 days ago

          “Valuation,” I suppose. The “value” that we project onto something, whether or not that something has truly earned it.

        • Pasta Dental@sh.itjust.works · 7 days ago

          Hell Nvidia’s stock plummeted as well, which makes no sense at all, considering Deepseek needs the same hardware as ChatGPT.

          It’s the same hardware; the problem for them is that DeepSeek found a way to train their AI much more cheaply, using far fewer than the hundreds of thousands of Nvidia GPUs that OpenAI, Meta, xAI, Anthropic, etc. use.

          • Ulrich@feddit.org · 7 days ago

            The way they found to train their AI more cheaply isn’t novel; they just stole it from OpenAI (not that I care). They still need GPUs to process the prompts and generate the responses.

        • cygnus@lemmy.ca · 7 days ago

          Hell Nvidia’s stock plummeted as well, which makes no sense at all, considering Deepseek needs the same hardware as ChatGPT.

          Common wisdom said that these models need CUDA to run properly, and DeepSeek doesn’t.

    • Asafum@feddit.nl · 7 days ago

      You can also just run DeepSeek locally if you are really concerned about privacy. I did it on my 4070 Ti with the 14b distillation last night. There’s a reddit thread floating around that described how to do it with ollama and a chatbot program.
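For anyone curious, the ollama route boils down to two commands. This is a sketch assuming ollama is already installed; `deepseek-r1:14b` is the tag ollama currently lists for the 14B distillation, but check the ollama model library, as tags can change:

```shell
# Pull the ~9GB 14b distilled model, then chat with it locally.
# No account, no API key; the model runs entirely on your own hardware.
ollama pull deepseek-r1:14b
ollama run deepseek-r1:14b
```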

      • Nightwatch Admin@feddit.nl · 7 days ago

        That is true, and running locally is better in that respect. My point was more that privacy was hardly ever an issue until suddenly now.

        • Asafum@feddit.nl · 7 days ago

          Absolutely! I was just expanding on what you said for others who come across the thread :)

      • NielsBohron@lemmy.world · 7 days ago

        I’m an AI/comp-sci novice, so forgive me if this is a dumb question, but does running the program locally allow you to better control the information that it trains on? I’m a college chemistry instructor who has to write lots of curriculum, assignments and lab protocols; if I ran DeepSeek locally and fed it all my chemistry textbooks and previous syllabi and assignments, would I get better results when asking it to write a lab procedure? And could I then train it to cite specific sources when it does so?

        • WhyJiffie@sh.itjust.works · 7 days ago

          but does running the program locally allow you to better control the information that it trains on?

          in a sense: if you don’t let it connect to the internet, it won’t be able to send your data back to the creators

        • Asafum@feddit.nl · 7 days ago

          I’m not all that knowledgeable either lol. It is my understanding, though, that what you download, the “model,” is the result of their training. You would need some other way to train it further, and I’m not sure how you would go about doing that. The model is essentially the “product” created from the training.

        • Asafum@feddit.nl · 7 days ago

          If you’re running it on your own system it isn’t connected to their server or sharing any data. You download the model and run it on your own hardware.

          From the thread I was reading, people tracked outgoing packets and the traffic seemed to just be coming from the chatbot program as analytics, not anything going to DeepSeek.

          • ddplf@szmer.info · 7 days ago

            How do you know it isn’t communicating with their servers? Obviously it needs internet connection to work, so what’s stopping it from sending your data?

              • ddplf@szmer.info · 7 days ago

                How else does it figure out what to say if it doesn’t have access to the internet? Genuine question; I don’t imagine you’re downloading the entire dataset with the model.

                • Takumidesh@lemmy.world · 7 days ago

                  I’ll just say, it’s ok to not know, but saying ‘obviously’ when you in fact have no clue is a bad look. I think it’s a good moment to reflect on how over confident we can be on the internet, especially about incredibly complex topics that cross into multiple disciplines and touch multiple fields.

                  To answer your question: the model is in fact run entirely locally. But the model doesn’t have all of the data. The model is the output of the processed training data, kind of like how the math expression 1 + 2 contains more data than its output “3”; the resulting model is orders of magnitude smaller.

                  The model consists of a bunch of variables, like knobs on a panel, and the training process is turning the knobs. The knobs themselves are not that big, but they require a lot of information to know where to be turned to.

                  Not having access to the dataset is ok from a privacy standpoint, even if you don’t know how the data was used or where it was obtained from, the important aspect here is that your prompts are not being transmitted anywhere, because the model is being used locally.

                  In short using the model and training the model are very different tasks.

                  Edit: additionally, it’s actually very very easy to know if a piece of software running on hardware you own, is contacting specific servers. The packet has to leave your computer and your router has to tell it to go somewhere, you can just watch it. I advise you check out a piece of software called Wireshark.
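The “knobs” picture can be made concrete with a deliberately tiny model (the names here are invented for illustration; a real LLM has billions of knobs, but the principle is the same): training boils thousands of examples down to a handful of learned parameters, and inference afterwards uses only those parameters, never the dataset.

```python
# Training collapses a large dataset into a tiny set of learned "knobs";
# inference afterwards never touches the training data.

def train(points):
    """Ordinary least squares fit of y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b  # the entire "model": two numbers

# 10,000 training examples drawn from the line y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10_000)]
model = train(data)  # roughly (2.0, 1.0)

def predict(model, x):
    """Inference: uses only the learned knobs, not the dataset."""
    a, b = model
    return a * x + b
```

Here 10,000 data points compress into two floats; analogously, a multi-gigabyte model file is vastly smaller than the training corpus it was distilled from, which is why downloading the model does not mean downloading the data.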

                • Asafum@feddit.nl · 7 days ago

                  To add a tiny bit to what was already explained by Takumidesh: you do actually download quite a bit of data to run it locally. The “smaller” 14b model I used was a 9GB download. The 32b one is 20GB and being all “text”, that’s a lot of information.