• AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    When OpenAI’s DALL-E 2 debuted on April 6, 2022, the idea that a computer could create relatively photorealistic images on demand from text descriptions alone caught a lot of people off guard.

    But for a tight-knit group of artists and tech enthusiasts who were there at the start of DALL-E 2, the service’s sunset marks the bittersweet end of a period when AI technology briefly felt like a magical portal to boundless creativity.

    As early as the 1960s, artists like Vera Molnar, Georg Nees, and Manfred Mohr let computers do the drawing, generatively creating artwork using algorithms.

    Despite these precursors, DALL-E 2 arguably marked the mainstream breakout point for text-to-image generation, allowing each user to type a description of what they wanted to see and have a matching image appear before their eyes.

    When OpenAI first announced DALL-E 2 in April 2022, certain corners of Twitter quickly filled with examples of surrealistic artworks it generated, such as teddy bears as mad scientists and astronauts on horseback.

    When OpenAI began handing out beta-testing invitations, a shared sense of discovery quickly gave rise to a small community of artists who felt like pioneers exploring the new technology together.


    The original article contains 716 words; the summary contains 195 words. Saved 73%. I’m a bot and I’m open source!