• guitarsarereal@sh.itjust.works
    1 year ago

    There’s a thing I read somewhere – computer science has a way of understating both the long-term potential impact of a new technology and the timeline required to get there. People are told what’s eventually possible, then they look around and see that the current state of the art – even the top-secret, best-in-category stuff – is ELIZA with a calculator, and they see a mismatch.

    Thing is, though, it’s entirely possible to recognize that the technology is in very early stages yet also recognize it still has long-term potential. Almost as soon as the Internet was invented (late 1960s), people were talking about how one day you could browse a mail-order catalogue from your TV and place orders from the comfort of your couch. But until the late 1990s it was a fantasy, and probably nobody outside the field had a good reason to take it seriously. Now we laugh at how limited the imaginations of people in the 1960s were. Hop in a time machine and tell futurists from that era that our phones would be our TVs, that we’d do all our ordering and product research on them by tapping the screen instead of calling in orders, and oh yeah, there’s no landline – they’d probably look at you like you were nuts.

    Anyway, considering the amount of interest in AI software even at its current level, I think there’s a clear pathway from “here” to “there.” Just don’t breathlessly follow the hype, because AI will likely follow a trajectory similar to the original computer revolution: about 20-30 years of massive investment and constant incremental R&D before there was anything worth showing the public, and even more time after that before it penetrated every corner of society.