Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
It can do arithmetic now, instead of making up numbers out of thin air? That’s the big secret Q* project? k
A major criticism people had of generative AI is that it was incapable of doing stuff like math, clearly showing it doesn’t have any intelligence. Now it can do it, and it’s still not impressive?
Show that AI to people 20 years ago and they would be amazed this is even possible. It keeps getting more advanced and people keep just dismissing it, possibly not realizing how impressive this shit and recent developments actually are?
Sure, it probably still doesn’t have real intelligence… but how will people be able to tell when something like this does? When it can reason in a similar way to us? It can already imitate reasoning plenty well… and what is the difference? Is a 3-year-old more intelligent? What about a 5-year-old? If a 5-year-old fails at reasoning in the same way an AI does, do we say it’s not intelligent?
I feel like we are nearing the point where these generative AIs are getting more intelligent than the least intelligent humans, and what then? Will we dismiss the AI, or the humans?
I agree with you. Your statement made me remember this comic:
HAL was dreamt up after the first generation of AI researchers made audacious claims that AGI was really close. For example, Herbert Simon said “machines will be capable, within twenty years, of doing any work a man can do.”
The issue isn’t whether we can or can’t do it; we aren’t even sure what “it” is or how to test for it yet.
There’s a thing I read somewhere – computer science has a way of overstating the potential impact of a new technology while understating the timelines required to get there. People are being told about what’s eventually possible, and they look around and see that the top-secret best in category at this moment is ELIZA with a calculator, and they see a mismatch.
Thing is, though, it’s entirely possible to recognize that the technology is in very early stages, yet also recognize it still has long-term potential. Almost as soon as the Internet was invented (late 1960s) people were talking about how one day you could browse a mail-order catalogue from your TV and place orders from the comfort of your couch. But until the late 1990s, it was a fantasy, and probably nobody outside the field had a good reason to take it seriously. Now, we laugh at how limited the imaginations of people in the 1960s were. Hop in a time machine and tell futurists from that era that our phones would be our TVs and we’d actually do all our ordering and product research on them, but by tapping the screen instead of calling in orders, and oh yeah, there’s no landline, and they’d probably look at you like you were nuts.
Anyways, considering the amount of interest in AI software even at its current level, I think there’s a clear pathway from “here” to “there.” Just don’t breathlessly follow the hype because it’ll likely follow a similar trajectory to the original computer revolution, which required about 20-30 years of massive investment and constant incremental R&D to create anything worth actually looking at by members of the public, and even further time from there to actually penetrate into every corner of society.
When AI is capable of choosing between good and bad, then it will be real AI for me.
If the thing has developed its own approach to generalized symbolic reasoning that could actually be a pretty big deal.
That’s basically what I read out of it, but it’s probably a much bigger breakthrough than the article suggests.
Current AI isn’t really intelligent at all. It’s essentially just a search engine combined with that robot voice from TikTok videos. Of course it’s more complicated than that, but it helps to illustrate the point, which is that the AIs you’ve interacted with thus far don’t know whether they’re right about what they tell you. They’re just hoping the answer they found was correct and stating it in an authoritative way that can confuse people who don’t know the real answer.
Actual AI will be able to reason out correct answers from incomplete information and solve complex mathematical equations very quickly. Being able to solve basic math problems without just searching its database for the correct answer is an important step towards real intelligence. It means we’re no longer dealing with a hard drive attached to an answering machine; we’re dealing with something that can process information in basically the same way we do, which opens up all sorts of awkward moral and philosophical questions.
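For what it’s worth, the distinction being drawn here fits in a few lines of Python (toy names, purely illustrative): a lookup can only hand back what was stored, while actual computation generalizes to inputs it has never seen.

```python
# Toy contrast between memorized lookup and actual computation.
memorized = {"2+2": "4", "3+5": "8"}  # a lookup table only "knows" what it stored

def retrieve(question: str) -> str:
    # Returns a stored answer, or a confident-sounding guess.
    return memorized.get(question, "some confident-sounding guess")

def compute(a: int, b: int) -> int:
    # Actually performs the operation, so it works on unseen inputs.
    return a + b

print(retrieve("17+25"))  # -> "some confident-sounding guess"
print(compute(17, 25))    # -> 42
```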
In 10 years’ time…
The good news: a superintelligent AI has cracked faster-than-light travel that allows humans to travel across the galaxy in minutes.
The bad news: that AI uses its new-found ability to yeet all of us off to some barren rock far away and leaves us to die there with no resources, because humanity is such a crazy, destructive pain in the arse.
And all of the humans that survived the landing will be like, "Holy shit it’s clean fucking air!
There’s fucking water without PFAs here!
The AI removed all the microplastics in my brain!
Holy fucking shit, somebody find all the billionaires and kill those fuckers!"
Not sure about this upcoming development, but the math part was already solved via a Wolfram Alpha plugin that integrated with ChatGPT. As you may already know, Wolfram can already solve complex math problems from just a natural-language input, so this isn’t anything revolutionary.
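For context, here’s roughly what that kind of delegation looks like from the outside; a minimal sketch using Wolfram|Alpha’s public Short Answers API, where the App ID is a placeholder you’d get from their developer portal:

```python
import requests

def ask_wolfram(question: str, app_id: str) -> str:
    """Send a natural-language math question to Wolfram|Alpha's
    Short Answers API and return its plain-text answer."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": app_id, "i": question},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

# ask_wolfram("integrate x^2 dx", "YOUR_APP_ID")  # "YOUR_APP_ID" is a placeholder
```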
What would be revolutionary, though, is if it applied that same sort of logic beyond math, to language (and visual) outputs, and could fact-check, or at the very least not contradict itself or hallucinate like it sometimes does.
It’s not a terribly complicated idea, really. You can train the model to output formatted calculations when presented with a problem, then something in the middle watches for those and inserts the solution behind the scenes. You might even trigger another generation pass so the final answer reads more smoothly when presented to the user. Something like the sketch below.
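A minimal sketch of that middle layer; the `<<calc: …>>` marker format is made up for illustration, the point is just that the arithmetic happens outside the model:

```python
import re

# Hypothetical marker the model is trained to emit instead of guessing digits,
# e.g. "The total is <<calc: 12 * 37>>."
CALC_PATTERN = re.compile(r"<<calc:\s*([0-9+\-*/(). ]+)>>")

def resolve_calculations(model_output: str) -> str:
    """Replace each calculation marker with its actual result,
    computed by ordinary code rather than by the model."""
    def evaluate(match):
        expr = match.group(1)
        # The regex above restricts input to digits and arithmetic operators;
        # a real system would use a proper expression parser instead of eval.
        return str(eval(expr, {"__builtins__": {}}, {}))
    return CALC_PATTERN.sub(evaluate, model_output)

print(resolve_calculations("The total is <<calc: 12 * 37>>."))
# -> "The total is 444."
```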
I totally get the skepticism. It’s not surprising given how abused the term “AI” is, but these models are a much bigger deal than “a search engine with that robot voice”. In fact, that’s exactly the thing they are definitely NOT: they’re terrible at recall and terrible at prioritizing reference information.
That said, GPT models are the first, and possibly most important, piece of an AGI. They’re a proof of concept that basic conceptual and linguistic understanding can be drawn from an enormous amount of data and shockingly little instruction. There’s no real reason to think they should be as good as they are at correctly interpreting written content, but here we are.
People make a big deal out of GPT because they think it will enable rapid improvement, and personally I don’t think that’s a foregone conclusion. It’s probably appropriate to compare it to the first rudimentary computers: by themselves they weren’t particularly groundbreaking, but taken to their maximum they had revolutionary potential. Every additional step from here is likely as big as, or bigger than, the one from GPT-2 to GPT-3 to GPT-4.
This is a great comment. I first learned ML at Google in Boulder in 2017, using TensorFlow. The Google Images team introduced us to it by having us re-create Uber’s fare-estimation algorithm using 25+ years of New York City taxi data: GPS locations, fares, times of day, routes, etc. As expected, given gradient descent and how differently people chose their parameters, everyone had very different models by the end. That’s how ML has worked for years (GPT included, just with massive models), but something that can process and learn on the fly is something else entirely, and is pretty exciting for the future. Philosophical questions abound.
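For anyone curious what that exercise looks like in code, here’s a minimal Keras sketch of a fare regressor trained by gradient descent. The four-feature input layout and the hyperparameters are assumptions for illustration, not the actual class setup:

```python
import tensorflow as tf

# Tiny fare-estimation regressor: predict fare from trip features.
# Assumed feature vector: [pickup_lat, pickup_lon, trip_distance, hour_of_day].
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted fare in dollars
])

# Mean squared error minimized by gradient descent (the Adam variant).
# Different learning rates, layers, and feature choices give different
# models, which is exactly why everyone's results diverged.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

# model.fit(trip_features, fares, epochs=10)  # trip_features: (N, 4), fares: (N,)
```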