This is an automated archive.
The original was posted on /r/singularity by /u/Rofel_Wodring on 2024-01-18 16:45:10+00:00.
I predict inarguable AGI will happen in 2024, even if I also suspect that, despite being on the whole much smarter than a biological human, it will still lag badly in certain cognitive domains, like transcontextual thinking. We're definitely at the point where pretty much any industrialized economy can go 'all-in' on LLMs (e.g. Mistral, hot on GPT-4's heels, is French, despite the EU's hostility to AI development) in a way we couldn't for past revolutionary technologies such as atomic power or even semiconductor manufacturing. That's good, but for various reasons, I don't think it will be as immediately earth-shattering as people expect. The biggest and most important reason is cost.
This is not that huge of a concern in the long run. Open-source LLMs within spitting distance of GPT-4 (relevant chart is on page 12) were released about a year after the original ChatGPT came out. But these observations also strongly suggest that there's a limit to how much computational power we can squeeze out of top-end models without a huge spike in costs. Moore's Law, at least if you think of it in terms of computational power per dollar instead of transistor density, will drive down the costs of this technology and make it available sooner rather than later. Hence why I'm an AGI optimist.
But it's not instant! Moore's Law still operates on a timeline of about two years to halve the cost of computing. So even if we do get our magic whizz-bang smarter-than-Einstein AGI and immediately set it to work on improving itself, unless that turns out to be possible with a much more efficient computational model, I still expect it to take several years before things really get revolutionized. If it costs hundreds of millions of dollars to train and a million dollars a day just to operate, there is only so much you can expect out of it. And I imagine that people are not going to want the first AGI to just work on improving itself, especially if it can already do things such as, say, design supercrops or metamaterials.
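To make that timeline concrete, here's a rough back-of-the-envelope sketch, assuming the two-year cost-halving period above and a hypothetical $1,000,000/day operating cost (both figures are illustrative, not measured):

```python
# Back-of-the-envelope: how operating cost falls if cost halves
# every two years (Moore's Law framed as cost, not transistor density).
def cost_after(initial_cost: float, years: float, halving_period: float = 2.0) -> float:
    """Cost after `years`, assuming it halves every `halving_period` years."""
    return initial_cost * 0.5 ** (years / halving_period)

# Hypothetical $1,000,000/day operating cost for a frontier model:
daily = 1_000_000
for y in (0, 2, 4, 6):
    print(f"year {y}: ${cost_after(daily, y):,.0f}/day")
# year 0: $1,000,000/day
# year 2: $500,000/day
# year 4: $250,000/day
# year 6: $125,000/day
```

Even under this optimistic compounding, it takes six years just to get an eightfold cost reduction, which is the point: exponential cost decay is fast historically but slow relative to the hype cycle.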
Maybe it's because I switched from engineering to B2B sales to field service (where I'm constantly having to think about the resources I can devote to a job, and also helping customers who themselves have limited resources), but I find it very difficult to think of progress and advancement outside of costs.
Why? Because I have seen so many projects get derailed, slowed, or simply never started, not because people didn't have the talent, not because people didn't have the vision, not because people didn't have the urgency, nor even because they didn't have the budget/funding. It was often, if not usually, some other material limitation like, say, vendor bandwidth. Or floor space. Or time. Or waste disposal. Or even just the market availability of components like VFDs. And those constraints can be intractable in a way that simply lacking people or budget is not.
So compared to the kind of slow progress I've seen at, say, DRS Technologies or Magic Leap in expanding their semiconductor fabs despite having the people, budget, and demand, the development of AI seems blazingly fast to me. And yet, amazingly, there are posts about disappointment and slowdown. Geez, it's barely been a year since the release of ChatGPT; you guys are expecting too much, I think.