• sunbeam60@lemmy.one · 5 months ago

    But the content has already been absorbed. I wouldn’t be surprised if they’ve sucked up all of it (many would argue illegally) and stored it as a corpus to keep iterating on. It’s not like they crawl the entire web from scratch every time they train a new version of their model.

      • QuaternionsRock@lemmy.world · 5 months ago

        One of the craziest facts about GPT (to me) is that it was trained on 570GB of text data. That’s obviously a lot of text, but it’s bewildering to me that I could theoretically store their entire training dataset on my laptop.
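
        For scale, here’s a quick back-of-envelope sketch. The ~6 bytes per English word and the 1 TB drive size are rough assumptions of mine, not figures from the GPT-3 paper:

        ```python
        # Rough scale check for a ~570 GB text corpus (assumed averages, not official numbers)
        dataset_bytes = 570 * 10**9      # the ~570 GB figure quoted above
        bytes_per_word = 6               # rough average for English text (assumption)
        laptop_ssd_bytes = 10**12        # a common 1 TB laptop SSD (assumption)

        words = dataset_bytes / bytes_per_word
        print(f"~{words / 1e9:.0f} billion words")                            # ~95 billion
        print(f"fills {dataset_bytes / laptop_ssd_bytes:.0%} of a 1 TB SSD")  # ~57%
        ```

        So on the order of a hundred billion words, and it still takes up only about half of an ordinary laptop drive.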