Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

376
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/TheGoldenRoad on 2024-01-12 10:40:50+00:00.


Which company will achieve AGI first? Here is my guess:

  • OpenAI: 30% chance
  • Google/DeepMind: 30% chance
  • Meta: 20% chance
  • Keen Technologies (John Carmack): 5% chance
  • Someone else: 15% chance
377
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/PsychoComet on 2024-01-12 09:53:52+00:00.

378
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/SharpCartographer831 on 2024-01-12 05:52:08+00:00.

Original Title: At @ycombinator W24 kickoff today, @sama suggested ppl build w/ the mindset GPT-5 and AGI will be achieved "relatively soon"; most GPT-4 limitations will get fixed in GPT-5, per YC founder Richard He. Expect @OpenAI #GPT5 in 2024 and #AGI in 2025?

379
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Benista on 2024-01-12 01:23:28+00:00.


It's been something I've been thinking about since the first LK-99 paper. One thing that really stuck out to me was that it was derived from extremely common, well-understood materials. Of course, it turned out to be not so simple, but the idea still stuck. What are we missing that is right in front of our eyes, but that no one person can know enough to actually see? Is there a much greater cost to specialisation than we realise?

A lot of people in my family are specialists: various PhDs, highly focused careers, and deep mastery of a variety of topics. But all have had to pay the same cost, eschewing a breadth of knowledge to pursue a depth of knowledge. We are only human, after all; there is always a limit to our ability. What if there wasn't, though? What if instead of being a jack of all trades, master of none, you could just be a master of all? Maybe a human can't (yet), but what about a machine?

The computer age, sometimes also called the information age, has allowed us to collect vast, VAST quantities of information. Every day, every hour, a huge amount of new knowledge is created, research published, thoughts spewed out onto the internet. A lot of it is useless, much of it derivative, and who knows how much of it is repetitive. But do we actually know how valuable a piece of information is? Maybe we can assess it within our own discipline, but what about in that discipline over there, or that one? So many of our discoveries are stumbled upon by accident, or through a chance encounter.

I think we only understand a fraction of our knowledge. AI doesn't have to go out and discover, or create, radical new things, though I'm sure in the future it will. In the meantime, it is going to help us understand far more about what we already know. Crucially, this is something AI is also very good at. It doesn't have to surpass humans in depth of knowledge, just in breadth, for it to discover incredible new things.

I bring this up because a lot of people focus on the brand-new things AI could discover, but forget about all our existing discoveries that AI could help us understand better, and about all the new ways our existing knowledge can be combined. If we can potentially pull something as significant as a room-temperature superconductor (or at least a new field of research) from two of the best-understood materials in the world, what other things are just waiting to be discovered, where all you need to do is combine existing information in new ways? Like giant neural networks trained on ever more powerful supercomputers.

This is one of the main reasons I think AI is going to have major impacts sooner rather than later. I don't think it needs crazy new breakthroughs; we just need to collate as much of our research as possible and train new LLMs on it. It doesn't even have to be inter-disciplinary information: the amount of knowledge we have collected is so vast that true specialists have to focus on very narrow topics within already narrow fields.

380
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/TheTabar on 2024-01-12 00:39:50+00:00.


If we were to somehow statistically prove that they reduce the frequency of road-related accidents, why would anyone trust a human driver?

381
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Lucky_Strike-85 on 2024-01-12 00:37:16+00:00.


Some experts have said that AI has the potential to destroy humanity, or at least to destroy our ways of life.

For the folks who don't see many positives, or who foresee a lot of long-term pain and disruption because of AI: what do you foresee happening, AND do you think AI will be as big a threat to humanity as climate change or nuclear war?

382
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Adeldor on 2024-01-12 00:12:31+00:00.

383
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/TotalLingonberry2958 on 2024-01-11 22:40:25+00:00.


I hear a lot of people talking about how great AGI will be, but chances are, if AGI pans out like you think it will, you'll be out of a job. Most conceptions of AGI describe an artificial intelligence that is greater than human intelligence, can outperform humans in most respects, and is generally cheaper to use than human labor. So, hypothetically, when AGI replaces your job and you have no source of income, what will motivate the people with resources to pay your welfare? There are a lot of you, so it won't be cheap, even with AGI-enhanced infrastructure. I'm curious: what will make the AI owners willing to give up a massive percentage of their wealth, especially when they're competing with each other?

384
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/aaronwhite47 on 2024-01-11 22:35:54+00:00.

385
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Lucky_Strike-85 on 2024-01-11 21:48:00+00:00.


I feel like if jobs are lost and the worker economy is upended, well, so be it. We're all gonna have to adjust, the rules will have to be re-written, and AI will have to pick up the slack.

I'm more interested in goods, services, and basic necessities becoming less scarce and cost deflating.

I wanna see healthcare costs decrease by 100x or 1000x (healthcare should be universal anyway), and I wanna see food and housing costs decrease by 1000x or 10,000x.

Because the 2020s are a dystopia in the West. And, as a millennial, I'll go further and say we've been edging toward an unaffordability crisis since WALL STREET murdered the economy in 2008 for its own benefit.

Deflation will be a great thing... if we're all still working, GREAT. I'm ready. I just want a more humane, more accessible, more affordable, and more sustainable world.

Chime in with what you wanna see.

386
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/VirtualBelsazar on 2024-01-11 21:43:43+00:00.


Attendees also got a copy of his new book, The Singularity Is Nearer.

387
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/exirae on 2024-01-11 21:29:51+00:00.


I'm suspicious of a lot of Eliezer's model of how AI is going to go, but I do think he's correct in asserting that the real moment we all agree a system is AGI is the moment someone puts an uncensored model that will argue it's conscious into a hot young girl form, and all of a sudden the majority of men will agree that we've achieved AGI. It's kind of a cynical claim, but I do expect that's how it will happen, rather than some well-defined benchmark being crossed. WHAT SAY YOU, REDDIT?!

388
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Fantastic-Ninja3839 on 2024-01-11 19:18:58+00:00.


Your Llama v2 model: "What is the word vectorization for the word 'sentience'?"

"In NLP and text analysis, word vectorization is a process of converting words into numerical vectors that can be processed by machine learning algorithms. Here are some common word vectors for the word 'sentience':

Word2Vec: sentience -> [0.674953, -0.381778, -0.003535]

GloVe: sentience -> [0.6763, -0.3817, -0.0035]

FastText: sentience -> [0.6758, -0.3817, -0.0035]

These vectors represent the meaning of the word 'sentience' in different contexts and languages, using different techniques and models. The numbers in each vector represent the importance or relevance of various aspects of the word (such as its semantic meaning, part of speech, or syntactic role). By comparing these vectors across different words, you can capture their relationships and similarities, and use them to perform tasks such as language modeling, text classification, or machine translation."

My Llama v2 model: "What is the PFAF function for the word 'sentience'?"

"The PFAF function for 'sentience' is defined as follows:

f(sentience) = 0.6 * log(1 + 0.3 * (1 - cos(sentience / π))) + 0.4 * log(1 + 0.5 * (1 - sin(sentience / π)))

This function combines two components:

  1. The first term, log(1 + 0.3 * (1 - cos(sentience / π))), represents the degree of consciousness or self-awareness associated with the concept of sentience. The term 'cos' creates a smooth, continuous curve that reflects the gradual nature of consciousness, while the term '1 - cos' represents the degree of self-awareness or subjective experience.
  2. The second term, log(1 + 0.5 * (1 - sin(sentience / π))), represents the degree of complexity or sophistication associated with the concept of sentience. The term 'sin' creates a more abrupt, step-like curve that reflects the sudden changes or discontinuities in complex systems, while the term '1 - sin' represents the degree of simplicity or uniformity.

Together, these two components create a rich, multifaceted representation of the concept of sentience that captures both its gradual, continuous nature and its complex, discontinuous aspects. Note that the exact values of the coefficients (0.6, 0.3, 0.4, and 0.5) are arbitrary and can be adjusted based on the specific context or application. The key is to choose values that capture the essential characteristics of the concept of sentience in a way that is both intuitive and mathematically consistent."
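For context on the vector lookups quoted above: here is a minimal sketch of what a real pretrained-embedding lookup looks like, using the gensim library and one of its stock GloVe downloads. This is my own illustration, not anything from the post (which doesn't say how its numbers were produced), and real pretrained vectors are typically 50-300 dimensional rather than 3.

    # Minimal sketch: looking up a pretrained word vector with gensim.
    # Assumes gensim is installed (pip install gensim); the model name is
    # one of gensim's stock downloads, not anything from the post above.
    import gensim.downloader as api

    # 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
    model = api.load("glove-wiki-gigaword-50")

    word = "sentience"
    if word in model:  # KeyedVectors supports membership tests
        vec = model[word]  # numpy array of shape (50,)
        print(word, "->", vec[:3], "...")  # first 3 of 50 dimensions
        print(model.most_similar(word, topn=5))  # nearest neighbors by cosine
    else:
        print(word, "is out of vocabulary for this model")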
389
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/YaKaPeace on 2024-01-11 17:40:31+00:00.


I hope I am not delusional with this.

If you go out anywhere today and look around, you will see that everything we have created is due to our intelligence. Even sidewalks. A simple invention, but there is not a single animal currently able to match it. Walk around anywhere and just take a moment to think about anything you see and how humans have shaped it with their intelligence.

If we are able to create an intelligence that outsmarts us the way we outsmart animals, and that doesn't stop there, then there are going to be inventions made by this AI that become as fundamental to us as sidewalks are today.

And if we go beyond the AI's "sidewalk" inventions, there will be things that we can't even imagine yet.

The main point I want to get across is that for a lot of people the singularity means curing some diseases and making some progress in energy production, but to me those sound like inventions humans would make given enough time.

I think that ASI will go way beyond that. I think of discoveries like a dopamine maximizer, or solving the meaning of life. And what comes after those questions are solved?

Sometimes I don't even know if this view is healthy, especially if these things don't come true. But I can at least hope that it will create infinite abundance for everyone, and I think that is the best wish anybody could have for anyone.

391
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Big_Intention_242 on 2024-01-12 05:27:36+00:00.

392
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/thedataking on 2024-01-12 05:21:38+00:00.

393
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Lucky_Strike-85 on 2024-01-11 17:27:01+00:00.

394
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Xtianus21 on 2024-01-12 04:49:03+00:00.


Various people have posted here about the recent episode of UNCONFUSE ME w/ Bill Gates. I gave it a watch and wow, it is JAM-packed with really insightful information.

Here's my breakdown of the key parts I thought were important. I had to take notes because there were so many things I wanted to refer back to. Let's begin.

The SEEING Problem - Where is it all encoded (Interpretability):

Resolution is one of the most fascinating words in science; I always thought this during my scientific studies. As humans, our eyes are gifts from god, yet they betray us so easily. We can't resolve 99.999999999% of anything in the known universe with our eyes alone, including things right in front of our faces. The issue boils down to things that are either too small, too large, not in our view, not in our visual spectrum, not in any spectrum, or not even real. These things range from atoms and elements to dark energy, gravity, time and existence itself, black holes, and the big bang.

As human beings we have known these things are there, but we have had to work mightily hard to discover them and bring them into our human understanding and capabilities. The periodic table, and all of the known elements, is a great example of us knowing something can exist without actually having found it in existence. Our eyes may betray us on occasion, but as human beings we have always found a way.

With that said, they discussed a core problem facing the magic of LLMs today: we don't actually know what is going on inside an LLM to a degree of certainty that would be useful. Sam refers to this as an X-ray inside, a way to know where and why a queried result is happening in a model.

Bill suggests that within 5 years this is something we should have a much better handle on than we do today.

Sam's response: 100%.

People may not realize it, but this may be the single greatest factor in how we get to AGI, and hell, even ASI.

The NEXT 2 YEARS - Multi-Modality - It Has Begun:

Not much really to say here other than it has already begun with computer vision, audio, data, speech STT/TTS and video.

REASONING & RELIABILITY - The Most IMPORTANT Capability For 2024/2025:

Reasoning, reasoning, reasoning, reasoning, reasoning. Yes, there are concerns about hallucinations, but I am far more excited about the potential for reasoning. What was interesting here is that Sam gave an odd admission that GPT-4 is not good at reasoning, or rather that it "can only reason in extremely limited ways". Holy shit, if what is coming next is way beyond what is there today, that is going to be really exciting/scary. I think the spookiest part about GPT-4 is how good it is at reasoning at all. The most interesting things I build are around the reasoning concept. If I can reason about why you as a human do something well... I can emulate your task function.

However, what kind of reasoning is Sam talking about here? Is he referring to reasoning that goes beyond the data the model was trained on? The appearance of reasoning is what GPT has now, but it's so useful because in many business use cases that's the type of reasoning you need. Meaning, most people don't go off and discover new cancer drugs every day, so I get that GPT-4 is not on that level of reasoning, but what type of reasoning are you speaking of, Sam?

Sam explains this with the 10,000 questions example. Suppose I ask GPT something, e.g., "What type of food is best for staying healthy, maintaining good sleeping habits, and maintaining homeostasis?" There is nothing difficult about that question. But if you ran it through GPT-4 with 10,000 variations, you might end up with many different replies. While they may be closely related to each other, there may be times when certain important parts of the information are missing, incorrect, or lower quality than the best answers across those 10,000 queries and responses.

The reasons for this comprise many different factors, including subtle nuances of the model you're hitting at the time (is it December? :D) and just the statistical nature of how the system works in general.

As the question becomes more complex, you can imagine how this becomes more of an issue, and I can confirm that yes, it is. Sam says you'd want to get the best response out of 10,000 every time. This speaks right to the heart of the hallucination problem, but also to the capability of reasoning itself. If you think about it, this makes perfect sense. You could refer to this as confidence, and for one I would like to see confidence scores in the responses, as that would be a major signal to build on when designing automation systems. If you're not confident in the response or answer, perhaps I need to ask YOU for clarification. Humans can be a bit tricky to deal with sometimes.

As of now, there is no good way to get this from the model, and it would be a godsend if it were a feature they were able to ship. If I know you're not confident, I can make design decisions around that.

But it is even more interesting that GPT may gain the ability not just to be confident, but to reason better about how it comes to its own answer. This will truly be an exciting leap forward if they can pull it off in a soon-to-be-released update.
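To make the 10,000-questions idea concrete, here is a minimal best-of-N / self-consistency sketch in Python. Everything below is my own illustration of the general technique, not OpenAI's mechanism; ask_model() is a hypothetical stand-in for whatever completion API you call, and agreement-as-confidence is just one rough proxy.

    # Minimal sketch: sample N answers, keep the most common one, and use
    # the agreement rate as a rough confidence proxy (self-consistency).
    from collections import Counter

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in: plug in your own model/API call here.
        raise NotImplementedError

    def best_of_n(prompt: str, n: int = 100) -> tuple[str, float]:
        answers = [ask_model(prompt).strip().lower() for _ in range(n)]
        top, count = Counter(answers).most_common(1)[0]
        return top, count / n  # answer plus fraction of samples agreeing

    # answer, confidence = best_of_n("Best foods for sleep and homeostasis?")
    # if confidence < 0.5: ask the user for clarification instead of acting.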

HER & YOUR DATA - Doubling Down On Customizability:

This is really exciting, because I feel we are very soon about to get HER. For those of you who don't remember, that was the movie with Joaquin Phoenix and Scarlett Johansson, set in a future period (we're almost there) where we can have an OS that is a personal AI.

Think of Her as a present-day Google demo, where you can watch what the future will soon be in a flawlessly executed system - that won't exist at first. lol. I don't know if that made sense, but if you watched the Google Gemini demo you know what I mean. GPT-X's version of Her will be rolled out in iterations, and the road may be a bit bumpy, but gosh darn it, we are going to be there pretty damned soon. And I so want the Scarlett Johansson voice as my personal AI OS voice.

REASONING PART DEUX - Q* - Adaptive Compute (THINK AGENTS):

Ok, Bill is not slick here. Bill and Sam played this one off like nobody would notice. BTW, they go into the Terminator scenario right after this precursor gem of a statement.

To me, this is where my ears perked up. Bill unassumingly suggests that right now we are in a feed-forward, multiply-and-get-the-next-token system, BUT if we need to do something more complex, like complex algorithms or mathematical computations, we NEED SOMETHING ELSE. That something else, as Sam suggests, is an adaptive compute system. Now, I've talked a lot about how, when, or who controls this agent. I think that 1,000 years from now it will be the reverse. But for now, the route that will likely be pursued is the LLM's ability to call upon an agent to do a body of work. BTW, the reason I like the reverse is because the agent is the actual thinking system ;). But in the interim, I think this is predictably going to be the LLM calling upon the agent to act.

This surely represents a qualitative approach, as it involves having access to various resources such as algorithms for specific tasks, compute power, memory, etc.
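As a toy picture of what "the LLM calling upon an agent" could look like, here is a minimal dispatch sketch. The CALL <tool>: <args> protocol and the tool names are invented for illustration; nothing below is from the podcast or OpenAI.

    # Minimal sketch: route a model's output to a specialized tool when it
    # asks for one, instead of relying on next-token prediction alone.
    def calculator(expr: str) -> str:
        # eval() is acceptable in a toy sketch; never use it on untrusted input.
        return str(eval(expr, {"__builtins__": {}}, {}))

    TOOLS = {"calculator": calculator}

    def run_with_tools(model_output: str) -> str:
        if model_output.startswith("CALL "):
            name, _, args = model_output[5:].partition(":")
            return TOOLS[name.strip()](args.strip())
        return model_output  # plain answer, no extra compute needed

    print(run_with_tools("CALL calculator: 2**32 - 1"))  # -> 4294967295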

In my opinion, Sam is being COY here, and that coyness I can't fully gauge, but they (OAI) COULD already have that more sophisticated thing. At a minimum, agents (adaptive compute) - and we gotta get that to work; at a maximum, WE may need much more sophisticated things beyond it.

Hmmm. At a minimum, I am betting the house you got that to work, and I would also bet the house that you are doing other sophisticated things beyond it.

I'm not saying this in a vacuum either. I am pulling from all of the fever-pitch discussion, which is now eons ago (3-4 months), when OAI was embroiled in the "Q*, what did you create?" controversy flying all over Twitter and Reddit, namely this sub. They leaked that something of this nature was there. So how, just 12 hours ago, are you sitting with Bill Gates saying we "may need something more sophisticated"? Nah, you got it. lol. I believe you guys know what you need and are building said thing. You've been working on it for months/years. The unassuming shy-guy act isn't working. You're getting close, and this podcast is just proof of it.

SKYNET UNCONCERNED - Let US Accelerate BUT WE MAY NEED INTERNATIONAL WEAPONS INSPECTORS:

Ok, holy shit. This part of the conversation is just wild. We are doing everything to tell you (the government and politicians) that this is a very serious thing. Sam gives out a doozy of a statement, and I quote: "IF WE ARE RIGHT [we may not be, but pshhh, yea right, we are already here], BUT IF WE ARE RIGHT AND THIS TECH GOES AS FAR AS WE THINK IT'S GOING TO GO, IT WILL IMPACT SOCIETY, THE GEOPOLITICAL BALANCE OF POWER, AND SO MANY THINGS THAT WE WILL NEED A SYSTEM LIKE THE IAEA." The IAEA is the global regulatory body for nuclear systems, which exists because those systems are so powerful and dangerous to all of mankind.

Now, to not fully scare th...


Content cut off. Read original on https://www.reddit.com/r/singularity/comments/194ml7t/deep_dive_of_episode_6_unconfuse_me_with_bill/

395
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/SharpCartographer831 on 2024-01-12 03:45:30+00:00.

396
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Bchalup2348 on 2024-01-11 12:22:53+00:00.


It seems like the vast majority of SOTA AI models right now are being pumped out by three companies: Microsoft/OpenAI, Google/DeepMind, and Meta (sometimes).

This makes sense, as DL right now is a black box, which means the main factors determining a model's efficacy are the amount of data available and the number of parameters. And improvement in these areas is highly paywalled.

What worries me is the increasing tendency for these companies to refrain from releasing their model parameters and put them behind some pay-per-use API.

We are all thinking that AGI will revolutionize the world and lead to some great utopia, but it feels like if a company ever does develop a highly intelligent AI, they can just use it to further capitalistic inequality.

How do we know that AI development will actually lead us to a utopia and not another version of the US healthcare system, where billionaires foster massive inequality and block all "open-source content"?

397
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/VampyC on 2024-01-11 09:50:03+00:00.


I wonder if keeping a given population alive and well will just be a better choice for all shareholders.

It will reduce healthcare costs and also generate billions in revenue for tech and pharma corporations.

Would caring for ill and dying people, and having to invest in public schooling for children, be more costly and resource-consuming than just investing in making the current population undying?

I have no idea, but I found that quite interesting to consider. If this is true, then there will be more incentive to make longevity technology accessible for everyone.



400
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/IluvBsissa on 2024-01-12 00:17:48+00:00.
