This is an automated archive.
The original was posted on /r/singularity by /u/Anenome5 on 2024-01-18 00:22:39+00:00.
Altman believes future AI products will need to allow "quite a lot of individual customization," and "that's going to make a lot of people uncomfortable."
Why?
Because AI will give different answers to different users, based on their values, their preferences, and possibly the country they reside in.
That's not much different from how Google bows to the whims of various countries and removes content they object to, such as references to Taiwan as an independent country, which China hates. But I'm not sure how they would control for this exactly. Doesn't that imply highly variable alignment, or are we talking about separate models being maintained for different territories?
However, that's less important than what he says next:
Soon, "you might just be able to say 'what are my most important emails today,'" and have AI summarize them. Altman says AI advances will "help vastly accelerate the rate of scientific discovery." He doesn't expect that to happen in 2024, "but when it happens, it's a big, big deal."
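As a toy illustration of what a "most important emails" query might reduce to, here is a hedged sketch. A real assistant would delegate the scoring to a language model; the keyword/sender heuristic, field names, and weights below are all hypothetical stand-ins:

```python
# Hypothetical sketch: ranking a day's emails by "importance".
# A real AI assistant would use an LLM to judge each message;
# this substitutes a crude sender/keyword heuristic.

def score_email(email, vip_senders, urgent_words=("urgent", "deadline", "invoice")):
    """Assign a rough importance score to one email dict."""
    score = 0
    if email["sender"] in vip_senders:
        score += 2  # mail from key contacts outranks everything else
    subject = email["subject"].lower()
    score += sum(1 for word in urgent_words if word in subject)
    return score

def top_emails(emails, vip_senders, k=3):
    """Return the k highest-scoring emails, most important first."""
    return sorted(emails, key=lambda e: score_email(e, vip_senders), reverse=True)[:k]

inbox = [
    {"sender": "boss@example.com", "subject": "Deadline moved to Friday"},
    {"sender": "newsletter@example.com", "subject": "Weekly digest"},
    {"sender": "client@example.com", "subject": "Urgent: invoice question"},
]
best = top_emails(inbox, vip_senders={"boss@example.com"}, k=2)
```

The point of the sketch is just that "summarize my important email" decomposes into rank-then-summarize, and the ranking half is the part current models already do well.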
We like to think of this in the abstract, but let's consider a specific case. Imagine a single researcher working on a problem on the scale of Moderna's mRNA technology, which is itself the product of about four good inventions put together into one working technology. It took decades to arrive at a working product, because it took time, and entirely separate teams, to solve the problems the first team ran up against.
But with AGI in the mix, it's entirely possible to imagine a single researcher starting on a problem of the same scale as the mRNA tech of the 90s and, using AGI as a co-developer, making rapid progress. They could run labs in an automated fashion and quickly hit the difficulty walls that might stymie a human team for years. The AGI could then brainstorm 50+ million different ways the wall might be overcome, narrow those down to the most promising candidates, and begin targeted physical and chemical testing that would quickly surface the best approaches. Those could then be developed into the new technologies that take you to stage two of the process. All of this could plausibly be done within a year or less.
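The brainstorm-then-narrow loop described above has the shape of a generate/score/filter search. A minimal sketch of that shape, where the generator and the "lab test" are hypothetical stand-ins for AGI brainstorming and automated experimentation:

```python
# Hypothetical generate/score/filter loop for getting past a research "wall".
# brainstorm() stands in for AGI idea generation, lab_test() for targeted
# physical/chemical testing; all names and numbers are illustrative only.
import random

def brainstorm(n, seed=0):
    """Generate n candidate approaches (here: numbered stubs with a score)."""
    rng = random.Random(seed)
    return [{"id": i, "plausibility": rng.random()} for i in range(n)]

def shortlist(candidates, k):
    """Narrow a huge candidate pool down to the k most promising."""
    return sorted(candidates, key=lambda c: c["plausibility"], reverse=True)[:k]

def lab_test(candidate):
    """Stand-in for real-world validation of one shortlisted approach."""
    return candidate["plausibility"] > 0.9

candidates = brainstorm(100_000)        # the post imagines 50+ million
promising = shortlist(candidates, 100)  # narrow to the most promising
validated = [c for c in promising if lab_test(c)]
```

The expensive step, real-world testing, only ever runs on the shortlist, which is why cheap massive generation plus aggressive filtering compresses the timeline.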
With a dedicated team of researchers all using AGI to pursue a single goal, all four of the inventions needed to create working mRNA tech, which took over 30 years to complete, could reasonably be done in less than a year.
Here's the point:
Not only will the pace of invention and technological development accelerate, it will reach the point where invention can be individualized to your specific problem.
Not only will this help with science and technology, but also with business and product development. How many people run into nagging problems they'd love to solve, for which they'd need an invention, but lack the skills to carry that forward into an actual product?
With AGI, they will have all the skills and advice they could possibly ask for.
One last thing:
Altman said his top priority right now is launching the new model, likely to be called GPT-5.
Why is he talking about AGI now, when just a few months ago he said they weren't even working on GPT-5? It seems Altman was playing to the board at the time, which was likely sending him strange signals, and trying to assuage them that things weren't moving too fast.
I'm still waiting to hear whether it's true that the board freaked out because a number of them had never actually used GPT-4 until then and were shocked at what it was capable of. Others say it was the idea of monetizing agents through the recently launched agents marketplace.
Regardless, that drama is behind us, and Altman clearly feels free to discuss the creation of GPT-5 openly. In fact, his position as CEO may ride on it: having secured his seat, he now needs to show that OpenAI still possesses the secret sauce that keeps it above its numerous rivals.
I'm certainly willing to pay for GPT-5, how about you guys? GPT-4 is great, but it's just not quite smart enough to be as useful as an actual human expert would be, which is to say, as useful as AGI would be.
I expect GPT-5 will be roughly 90% of the way to AGI, likely almost as good as a human expert in just about every field. And if that's what actually gets delivered within the next year, we're all going to be in for a helluva ride :)