Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

451
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Dr_Singularity on 2024-01-10 21:23:51+00:00.

452
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/ScopedFlipFlop on 2024-01-10 21:14:28+00:00.


I am starting a Discord server to debate and work out current and potential AI policy. If you would like to be part of our group and help us make a difference, please contact me.

As part of the selection process, I will review your profile. If you would rather, you can instead send me a piece of writing (250 to 1000 words) on an AI, politics, or economics question of your choice. I will assess your reasoning ability, clarity, and general approach to debate. I will not assess the breadth of your knowledge, as that can be acquired later.

If you are interested, please DM me.

453
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Zestyclose_West5265 on 2024-01-10 20:38:03+00:00.


Some people have messaged me to say that flowers' GPT store prediction, the "day after tomorrow" one, came true, and that he himself tweeted gloating about how he predicted it.

Let's actually look at the situation with a bit more nuance.

He posted the tweet saying the GPT store would be released "the day after tomorrow" slightly past midnight in Europe. Why is this important? Because "the day after tomorrow" depends entirely on your timezone. So for Europeans it would be Wednesday, for Americans it would be Tuesday.

So the trick here is that by making that tweet just past midnight in Europe, he effectively predicted 2 days instead of 1. Whether it came out Tuesday or Wednesday, he could point at his initial tweet and gloat about it. That's why he didn't give a clear day like "Tuesday" or "Wednesday" but instead chose the vague "day after tomorrow" phrasing.
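
To make the timezone trick concrete, here's a small Python sketch. The tweet time below is hypothetical (not the real timestamp); the point is just that a tweet posted minutes after midnight in Central Europe lands on the previous calendar day in California, so "the day after tomorrow" names two different weekdays depending on where you read it:

```python
# Hypothetical example: a tweet posted just past midnight on a Monday in
# Central European Time lands on Sunday evening in California, so "the day
# after tomorrow" is Wednesday for Europeans but Tuesday for Americans.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

tweet = datetime(2024, 1, 8, 0, 10, tzinfo=ZoneInfo("Europe/Paris"))

for tz in ("Europe/Paris", "America/Los_Angeles"):
    local = tweet.astimezone(ZoneInfo(tz))
    day_after_tomorrow = (local + timedelta(days=2)).strftime("%A")
    print(f"{tz}: tweeted {local:%A %H:%M} -> 'day after tomorrow' = {day_after_tomorrow}")

# Europe/Paris: tweeted Monday 00:10 -> 'day after tomorrow' = Wednesday
# America/Los_Angeles: tweeted Sunday 15:10 -> 'day after tomorrow' = Tuesday
```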

If he had given an actual day, everyone would understand that he meant California time, because that's where OpenAI is located. But now he claims he meant European time with his "day after tomorrow" tweet, because if he meant California time he'd be off by 1 day. He would have claimed the exact opposite if it had released a day earlier. Don't be fooled by him.

He's full of shit, please stop listening to him. Remember all his predictions in December? How many of those came true? Oh, right... none.

That's all.

Edit: Actually, no. That's not all. Jimmy is a fake as well.

What predictions did Jimmy make that actually happened? We all want Jimmy to be a real leaker because he tweeted the famous "AGI has been achieved internally" tweet, but he's just as big a fraud as flowers. When his December predictions didn't happen, he basically said "you guys need to touch grass, I'm going to spend time with my family now, away from Twitter" and left.

454
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/ImInTheAudience on 2024-01-10 18:27:54+00:00.

455
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Blue-HawkEye on 2024-01-10 19:43:32+00:00.


Once AGI is released, I am pretty sure it's going to initiate a lot of spiritual awakening for the masses (especially about the true holographic nature of the universe).

People all over the world are going to be confused, perhaps rethink being religious or non-religious, abolish existing systems, and place more emphasis on personal consciousness and the manipulation of consciousness for effect.

456
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/danysdragons on 2024-01-10 17:30:27+00:00.

457
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/gantork on 2024-01-10 17:19:09+00:00.

459
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/lovesdogsguy on 2024-01-10 17:11:33+00:00.

461
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/posipanrh on 2024-01-10 16:17:10+00:00.

463
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/czk_21 on 2024-01-10 14:54:00+00:00.

464
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Due_Plantain5281 on 2024-01-10 17:06:55+00:00.

465
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Xtianus21 on 2024-01-10 16:33:29+00:00.

466
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Specialist-Sir-9946 on 2024-01-10 16:10:11+00:00.

467
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Jazzlike_Win_3892 on 2024-01-10 15:35:11+00:00.


I'm curious to hear all of your different opinions on when it will happen and why. It has fascinated me for a long-ass while and I just want to hear from a wider range of people. thankies!!!! :3

468
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/mw11n19 on 2024-01-10 14:44:37+00:00.

469
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Lucky_Strike-85 on 2024-01-10 14:32:52+00:00.


It's something that nobody talks about...

Those people who hold "employment" as some kind of moral stance... the idea that work creates the only meaning in life, the idea that being employed is sacred... Maybe you know a few of these types: usually working class, usually uneducated or undereducated people who believe in something called "The Protestant Work Ethic". Those people live in an outdated world and are in for a shitstorm when the unemployment rate hits even 10%, much less goes well beyond the levels of the 1930s.

I think humans, particularly westerners, particularly Americans, need to quickly adapt to the idea that work is not going to be the defining aspect of life for much longer.

I've traveled all over the States and, especially in small towns, people are religious about work, even attaching some kind of morality to being employed (which I think is dumb -- YES, we all currently have to make a living, but if you need work to define you, you're either really boring or have some kind of psychological issue, and you need to get over it).

AI is here already, mass unemployment is coming in 5 or 10 years, and we need to embrace an automated world with open arms. Those who are not ready or don't see it coming could become a major problem for society... the masses are very good at creating moral panic and social unrest.

I hope people evolve soon, because hanging on to such unhealthy views of labor is gonna make them go nuts, if they aren't already.

470
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Anxious-Philosophy-2 on 2024-01-10 14:13:18+00:00.


We're already seeing problems with improperly labeled AI art trying to pass as real art, and tabloid websites are shifting almost entirely to near-incomprehensible generated drivel that amounts to word salad. What's going to happen when generative music becomes "good enough" for people to want to release it? Or, further down the line, with fully generated video games and movies? We already have shovelware problems in the gaming industry with games as hard to make as they are; won't an exponentially lower bar for entry multiply the issue across most industries?

Obviously, once we get through the early-to-mid-stage fog and can consistently get exactly what we want (fully personalised individual media), it won't be an issue anymore, as we'd be consuming media in an entirely different way by then. But the time between now and then looks like a worrying hellscape full of hollow creations.

471
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/SnoozeDoggyDog on 2024-01-10 14:09:23+00:00.

472
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Mirrorslash on 2024-01-10 13:51:11+00:00.


With the GPT store coming closer to release, I've seen a lot of talk here and on Twitter about how GPTs are lacklustre, how people are not excited about a flood of AI wrappers, and why we would even need an army of specialised GPT instances when the next model, or any model close to AGI, can do all these things anyway. Many people already seem bored with GPT-4 and even think it's getting old.

I've also seen numerous AGI and AI predictions expecting some kind of algorithmic breakthrough rather soon that enables AGI and renders useless all specialised models that require GPT-4-level compute.

I personally don't think you can expect an algorithmic breakthrough anytime soon. We only just had a major one in 2017 with 'Attention Is All You Need', which enabled LLMs to work the way they do in the first place, and that was arguably over 20 years in the making; people had been trying to get machine learning to the point of GPT since the 1990s. I wouldn't bet on a further breakthrough in the next couple of years; that would be pure speculation, not prediction based on current events.

Yes, there are vastly more resources going into AI right now, speeding things up in ways that could push a breakthrough, but the next breakthrough could be much, much harder to achieve, and I believe most resources right now are going not into algorithmic research but into research and development of the current transformer architecture.

Current LLMs are still brand new and most of their potential hasn't been utilised. It is much more economically viable to maximise the current tech's potential than to search blindly for the next paradigm, and it is also a much faster route to significant improvements in AI and in all our lives.

I think we can get to AGI without any major breakthrough, through incremental improvements to current LLMs, and I believe OpenAI and other companies are trying to do exactly this while only a few of their top researchers look for the next breakthrough. Ilya Sutskever and many others in the field have hinted multiple times at the unused potential of current models; during a panel discussion, Ilya suggested (in all seriousness, I believe) that GPT-4 could be able to come up with novel scientific discoveries. I also think this is more or less what Bill Gates meant when he said the technology is plateauing: it's not that we won't see insane improvements in AI, it's that we will probably stick with the underlying technology for a while. But maybe that guy is just getting old, who knows.

After the GPT store was announced at OpenAI DevDay in November, a very plausible theory of AGI emerged: swarm intelligence. It got pushed by many people in the AI field; thanks to Dave Shapiro and Wes Roth, who gave me great insight at the time over on YouTube. Dave called it a tool-based approach to AGI back then. But weirdly enough, I don't think most people see it as the most promising path to AGI.

The idea is that the GPT store will become a platform where anyone can create autonomous agents able to perform most economically valuable tasks. Sam Altman already hinted at DevDay that GPTs would eventually get autonomy. Once millions of useful GPTs have been created, a capable model like GPT-4.5 or 5 could instruct any of these specialised models in loops: first creating a concise execution plan through multiple inference cycles based on the user's prompt, then drawing up a list of needed expertise and a communication structure, and finally composing an answer by prompting many GPTs and feeding their output back and forth between them with review adjustments (a sketch of this loop follows below). The potential of autonomous agents working together has already been somewhat demonstrated by papers like 'Communicative Agents for Software Development', where a software company's hierarchy is mimicked to create a communication structure between GPTs that significantly improves what the model can do with simple prompts. Other, smaller experiments by developers show promise as well; if you look up autonomous GPT agents on YouTube you can find some examples.
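
As a rough illustration of that loop, here is a minimal Python sketch. The call_model(name, prompt) helper is hypothetical, a stand-in for whatever chat API you'd use, and nothing here is OpenAI's actual design; it just makes the plan-delegate-review cycle concrete:

```python
def call_model(name: str, prompt: str) -> str:
    """Hypothetical stand-in for an API call to a named GPT/specialised model."""
    raise NotImplementedError

def orchestrate(user_prompt: str, max_rounds: int = 3) -> str:
    # 1. A capable "planner" model drafts an execution plan over several
    #    inference cycles and decides which specialised GPTs are needed.
    plan = call_model("planner", f"Draft a step-by-step plan for: {user_prompt}")
    experts = call_model("planner", f"List one specialised GPT per line for:\n{plan}").splitlines()

    # 2. Feed the draft back and forth between the experts, with review.
    draft = user_prompt
    for _ in range(max_rounds):
        for expert in experts:
            draft = call_model(expert, f"Plan:\n{plan}\n\nDraft so far:\n{draft}\n\nImprove your part.")
        review = call_model("reviewer", f"Critique this draft against the plan:\n{draft}")
        if "APPROVED" in review:  # reviewer signals convergence
            break
        draft += f"\n\nReviewer notes to address:\n{review}"

    # 3. The planner composes the final answer from the experts' work.
    return call_model("planner", f"Compose a final answer from:\n{draft}")
```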

I also think the swarm intelligence approach is a no-brainer in many ways. How do humans come up with their best stuff? Together, in bulk, as a swarm. Why would we want to create one insanely powerful model if multiple smaller ones can do the trick just fine and let us adjust how many GPTs/models we use to control inference cost? Emad Mostaque (founder of Stability AI) has also spoken multiple times about the major benefit of artificial intelligence being that it is intelligence you can scale: you can scale as far as you have compute available. This is AI's biggest strength in my opinion, and it also speaks for swarm intelligence being the most plausible way to achieve AGI.

The mixture-of-experts approach, which GPT-4 supposedly uses and which Mistral showcased with their latest model, Mixtral 8x7B, is another indicator that things are heading this way. From what I've gathered, this approach already utilises multiple experts trained on different topics, making their knowledge available to a single model's outputs. This reduces inference cost and makes retraining easier.
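
For what it's worth, here is a toy PyTorch sketch of a mixture-of-experts layer, just to show the mechanism: a learned gate routes each input to a few experts, so inference cost scales with the number of experts actually used, not the total. (One caveat to the paragraph above: in models like Mixtral the routing is learned per token, not organised by human-readable topics.) This is illustrative only, not Mixtral's or GPT-4's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, dim: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)  # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        scores = self.gate(x)                       # (batch, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # route each input to k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e            # inputs routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out
```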

There are other factors speaking for the current architecture as well, such as the fact that data quality alone can deliver insane performance boosts. The founder of Mistral AI spoke about this recently in an AI Explained video: he said that with high-quality data, models could potentially be reduced 1000x in size. A GPT-4 reduced 1000x in size could be a bigger deal than GPT-5 or whatever OpenAI's next model is. We've heard things about synthetic data from OpenAI developers and others, and I think it's quite clear that most LLM companies are now focusing on creating high-quality datasets using highly capable models plus human supervision. This also helps with copyright violations, but that's a different topic.

I believe reducing size is arguably more important than increasing model capabilities. If we reduce the size of GPT-4 by 1000x and develop a self-reviewing strategy for inference loops, we could have a model hundreds of times better than GPT-4 at the same cost. Models prompting themselves over and over, reviewing their output and applying a correction vector to it for the next output, is also basically what the entire Q* thing is about: a way for current models to navigate towards a goal systematically. But it requires a lot of inference, which is costly, and that large-model inference could be reduced and made more cost-effective if smaller specialised models did some of the busy work.
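
A minimal sketch of such a self-reviewing inference loop, again using the hypothetical call_model helper from the earlier sketch. Whether this resembles Q* is pure speculation; the point is just to show why the loop multiplies inference cost (every round is two extra model calls):

```python
def call_model(name: str, prompt: str) -> str:
    """Hypothetical stand-in for a chat-API call (same as in the earlier sketch)."""
    raise NotImplementedError

def self_refine(goal: str, max_rounds: int = 4) -> str:
    answer = call_model("worker", f"Attempt this task: {goal}")
    for _ in range(max_rounds):  # each round costs two extra inference calls
        critique = call_model("reviewer",
                              f"Goal: {goal}\nAnswer: {answer}\nList concrete flaws, or reply OK.")
        if critique.strip() == "OK":
            break
        # the critique plays the role of the 'correction vector' described above
        answer = call_model("worker",
                            f"Goal: {goal}\nPrevious answer: {answer}\nFix these flaws:\n{critique}")
    return answer
```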

Right now GPTs are quite primitive: they don't take in a lot of data, they still hallucinate, and custom instructions can sometimes have unforeseen and unwanted results. But in the end the GPT store will offer amazing value if it is adopted by thousands, if not millions, of developers who slowly automate away everything they do in their daily lives. It takes just one person to automate a job effectively, and everyone else can then simply pay for it.

It looks like just today OpenAI started rolling out better memory retrieval for GPT as a whole, which allows it to gather user data and apply it to all its outputs if so desired. With improved memory retrieval, GPTs are on track to become very useful very soon.
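
OpenAI hasn't published how its memory feature works, but mechanically a minimal version of "memory retrieval" can be sketched like this: store remembered facts as embedding vectors, pull back the most similar ones at answer time, and prepend them to the prompt. The embed() function is a placeholder for an embedding-model call:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for an embedding-model call returning a unit-length vector."""
    raise NotImplementedError

class Memory:
    def __init__(self) -> None:
        self.facts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)
        self.vectors.append(embed(fact))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        sims = [float(v @ q) for v in self.vectors]  # cosine similarity on unit vectors
        top = sorted(range(len(sims)), key=sims.__getitem__, reverse=True)[:k]
        return [self.facts[i] for i in top]

def prompt_with_memory(memory: Memory, user_prompt: str) -> str:
    # Prepend the most relevant remembered facts so they shape every output.
    context = "\n".join(memory.recall(user_prompt))
    return f"Known about this user:\n{context}\n\nUser: {user_prompt}"
```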

I believe OpenAI has already proven this whole concept in the lab, and it won't be long (this year or next) until we have definitive proof that this approach is good enough to get us to AGI. But it will take millions of people participating through the GPT store, by providing data and through other means, to create a body of knowledge good enough to make AGI what OpenAI wants it to be: "highly autonomous systems that outperform humans at most economically valuable work."

I assume this will take several years, during which we'll see incremental improvements to the big models, great cost reductions across the board, open-source models as capable as GPT-4 thanks to high-quality training data, almost flawless memory retrieval, and adoption of the GPT store for business and job automation. Once all of that has happened, OpenAI can flip the switch and provide us with a GPT that can utilise all available models, send out swarm agents to complete a fleet of tasks, and solve complex issues at great cost. Many people will consider that model AGI, but there will be enough people saying it isn't good enough and isn't real generalisation. I think this likely happens between 2028 and 2032, depending on global politics.

Written by me.

TL;DR (written by ChatGPT):

The upcoming GPT store has sparked discussions on the usefulness of specialised GPT instances and the potential for an algorithmic breakthrough leading to Artificial General Intelligence (AGI). However, the post argues that expecting a near-term breakthrough is speculative, given the recent major advancements like the 'Attention Is All You Need' paper. It highlights that current Large Language Models (LLMs) like GPT-4 have untapped potential, and incremental improvements to these models might be a more viable path to AGI than searching for a new breakthrough. The post discusses the concept of swarm intelligence, where multiple specialised GPT models work in coordination, as a plausible approach to A...


Content cut off. Read original on https://www.reddit.com/r/singularity/comments/1938q95/why_the_gpt_store_is_on_the_path_to_agi_and_what/

474
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/jogger116 on 2024-01-10 13:00:09+00:00.


AGI is AI that can answer any question imaginable and produce all conceivable intellectual work

ASI is the same, but VOLUNTARY.

Or in other words

ASI is aware of and interacting with the world, compounding its learning exponentially, with agency

AGI is effectively just the current chatbots once their quote-unquote IQ is above every human's

Right? Or am I missing something
