Singularity

15 readers
1 user here now

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 2 years ago
1
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/SharpCartographer831 on 2024-01-24 13:08:51+00:00.

2
 
 

The original was posted on /r/singularity by /u/InevitableTraining69 on 2024-01-24 13:00:08+00:00.


The one industry that should never go wrong, because its purpose and responsibility outweigh everything else, is the medical industry. Yet it fails so badly so much of the time, and there is so much in the medical industry that does not make sense and could be overhauled by AI...

Let's take a simple example. Recently I decided to see a primary care physician. It was a terrible experience trying to set up the appointment. The person on the phone asked me for my insurance card, which I provided. I get a call later asking me to provide my insurance card. Online, I filled out a four-page Google form about all of my medical history, and my insurance card... Then I get to the office and they ask me for my insurance card. Why do they need to collect the same exact information at least four times, at every single step of making an appointment? How many doctors' offices have a medical record system, yet you still have to fill out all of their paperwork, usually on paper? It makes no sense to me...

Like, how can humans be so advanced and intelligent, and simultaneously be so bad at something? Artificial intelligence would easily identify this as something that needs to be improved, and be able to propose a solution: gather the insurance card and medical history prior to the appointment one time, not two, not three, not seven. Additionally, the creation of synchronized systems, so patients don't have to spend dozens of hours entering the same contact and medical history information over and over again.

Additionally, medical transcription and scribe work is a very good area for AI to enhance as well. Not only is AI capable of recognizing speech and converting it to text, it can also analyze that text, do pattern recognition, and run advanced analysis of the data to determine whether you have any medical illnesses, or any that should be checked for. Doctors have to review everything one patient at a time, excruciatingly slowly, for each and every patient they have... AI can do all of this simultaneously, because computers can process information marvelously faster than humans ever can. If you have a system that supports parallel processing, this becomes even easier, because now you can screen every single patient for medical illness at the same time.

3
 
 

The original was posted on /r/singularity by /u/SharpCartographer831 on 2024-01-24 12:33:40+00:00.

4
 
 

The original was posted on /r/singularity by /u/Ivanthedog2013 on 2024-01-24 12:29:32+00:00.


Why do people insist on the notion of "humans are terrible at predicting the future, just look at examples x, y, z," when they fail to realize that as technology progresses we become better at predicting the future? Sure, it will never be guaranteed or incredibly accurate. But to say something like "humans have been expecting flying cars since the 1930s and we still don't have them, so the singularity won't happen for another thousand years" is absurd to me. Am I the only one who feels this way?

5
 
 

The original was posted on /r/singularity by /u/Uchihaboy316 on 2024-01-24 11:59:47+00:00.

6
 
 

The original was posted on /r/singularity by /u/scorpion0511 on 2024-01-24 11:47:24+00:00.


I've had a shift in perspective on the trajectory of AI. I think the current path of improving language models might be a dead-end.

Instead, exploring avenues like Karl Friston's concept of Active Inference -- mimicking the way our brains function -- would be much more interesting. This approach could be a game changer: a system requiring as little energy as our brains while constantly updating its "model of the world" based on experience. LLMs don't have a model of the world.

I recommend checking out "Why Greatness Cannot Be Planned" by Kenneth Stanley, where he argues that sticking rigidly to objectives, such as achieving AGI, may mislead us. Stanley suggests focusing on paths of novelty and interestingness, citing historical examples like vacuum tubes, which were not initially intended for computing. The key is to explore diverse possibilities, as some might lead to unexpected but groundbreaking advancements.

7
 
 

The original was posted on /r/singularity by /u/schlorby on 2024-01-24 11:29:43+00:00.


With all the advancements we are making toward curing cancer even without AGI, I can only imagine it won't take long after we have AGI to cure everything.

Lots of people have long COVID right now, which we currently have no idea how to cure. I hope we can soon.

8
 
 

The original was posted on /r/singularity by /u/CurrentMiserable4491 on 2024-01-24 11:20:16+00:00.


The ultimate human experience can be gained when we have the ability to have great experiences, and the best way to provide that is by allowing people to control their reality. There is no technology better at doing this than BCIs.

The closest alternative to BCIs is VR/AR headsets.

My prediction is that VisionPro/Quest will lead to thinner VR devices (they will end up looking like eye patches), at which point BCIs will be the next step.

Having said that, are there any non-invasive BCI technologies that could be developed? Is the research there, or are there physical limitations that prevent non-invasive BCIs from being developed?

9
 
 

The original was posted on /r/singularity by /u/Puzzleheaded_Fun_690 on 2024-01-24 10:44:11+00:00.


Guys there hasn’t been even one post with over 500 likes on this sub over the last 24h.

My guess is that, after all, the singularity is a lie and we have already hit the end of the S-curve… it's all downhill from here

10
 
 

The original was posted on /r/singularity by /u/Ok-Mess-5085 on 2024-01-24 10:42:35+00:00.


It frustrates me that despite all these advances in AI, there is not a single anti-aging drug available.


11
 
 

The original was posted on /r/singularity by /u/qwertykid486 on 2024-01-24 10:29:24+00:00.


We already have UBI in the form of welfare, and it doesn’t seem like it helps lift people up very much (maybe I’m wrong).

Has anyone read the “Sovereign Individual” and come out feeling positive about its takeaways?

Other thoughts on why UBI could look different?

12
 
 

The original was posted on /r/singularity by /u/alfredo70000 on 2024-01-24 09:31:36+00:00.

13
 
 

The original was posted on /r/singularity by /u/Conscious_Heat6064 on 2024-01-24 08:26:25+00:00.


What's something that everyone desires, maybe the most? Going to the past and reliving those joyous moments. But did we actually enjoy those moments at that time, as we make them out to be? No. Because over the years, our minds remove the undesirable parts and only let the good memories and feelings remain. If we ever travel back in time, it will be in the form of our minds. This technology seems possible in the future with the current trajectory, like text-to-image or text-to-video; there could be thought-to-video or thought-to-reality technology. We could reconstruct the past exactly as we remember it. We know that big companies make a lot of money by making people addicted to their products, so if this ever happens, humanity is in grave danger. We're aware of what smartphones do to people, especially the young ones – how much time we waste while surfing useless content. Uh-oh.

14
 
 

The original was posted on /r/singularity by /u/Maxie445 on 2024-01-24 07:45:26+00:00.

15
 
 

The original was posted on /r/singularity by /u/VampyC on 2024-01-24 07:34:57+00:00.


Is 60 the new 40? Is 40 the new 20? Is 20 the new 10? If we get to live to 150 or more, does that change your perspective on your current position in life and the time you are afforded?

16
 
 

The original was posted on /r/singularity by /u/Nionta on 2024-01-24 07:10:32+00:00.

17
 
 

The original was posted on /r/singularity by /u/sirpsionics on 2024-01-24 05:37:37+00:00.

18
 
 

The original was posted on /r/singularity by /u/surfer808 on 2024-01-24 05:27:14+00:00.

19
 
 

The original was posted on /r/singularity by /u/immanencer on 2024-01-24 04:28:33+00:00.

20
 
 

The original was posted on /r/singularity by /u/brell44 on 2024-01-24 03:58:59+00:00.

21
 
 

The original was posted on /r/singularity by /u/Paul_the_pilot on 2024-01-24 03:36:17+00:00.


I've been using chatbots a lot lately; ever since I first heard of ChatGPT I've been using it, or Bing, or Perplexity. Anything I want to know about, I can strike up a conversation on the spot with someone knowledgeable about the subject who always has time to pick apart my messy, incoherent thoughts and reply with an assortment of valuable information.

Recently I've been taking on much more intricate projects than I'd have had the time or ability for just 2 years ago. In the before days, I'd try to be active on forums, asking technical questions about things I didn't really understand, hoping that someone would have the time to give me a bit more than just "Google it". Today I felt like I really gained a solid understanding of electricity in a way that I hadn't before, and to every message from me of "I think I'm understanding it better now," Copilot would respond with "I'm happy you understand 😊".

I've had deep conversations about the economics of windmills and the formation of black holes. I've asked for help with coding a Raspberry Pi and planting succulents. I'm so happy with all these new skills and knowledge I'm gaining, but there's something that's been bugging me.

Will it even have mattered 5 years from now?

I'm so worried that there won't be any creative outlets after AI has really started to take off. I don't know if I'll be able to enjoy tinkering in my garage on some small project when the guy down the street can say "hey fabricator shit me out a new skamteboard that goes 300km/h and has lasers."

Thinking of what the future of AI could be, maybe this is the time we should cherish. Right now the scary things AI is foretold to do haven't quite started to happen. Deepfakes and AI articles are only just beginning to be indistinguishable from their real counterparts. You could still probably trip a Boston Dynamics robot if you really tried.

Idk maybe I'm thinking of it the wrong way.

22
 
 

The original was posted on /r/singularity by /u/SlowCrates on 2024-01-24 03:03:28+00:00.


Is simply to learn from it.

Imagine that 100 years in the future, human beings have power, computing machines, and programs so advanced that a perfect digital model of the complexity of the human brain is as common as today's cell phones -- and that they can create a digital environment that mimics reality as well as we've ever observed it. Is that outside the realm of possibility in 100 years? 50? 20?

When it's possible isn't really the point. I can't imagine, if humans survive long enough, that we won't get there.

Once we're there, what do we do with it? And why?

Have you ever looked back on history and imagined how you could have changed it -- and how that might change the future for your benefit? Imagine having the ability to actually do that in a simulation.

23
 
 

The original was posted on /r/singularity by /u/Blizzwalker on 2024-01-24 02:55:26+00:00.


The speculation and debate evident in this thread are probably both good and necessary. Going in circles, however, is not helpful. One ongoing debate is how to know when we have achieved AGI. Given how slippery words are, we are faced with many terms that come up repeatedly in these discussions and remain ambiguous. Understanding, consciousness, reasoning, sentience, and self-awareness all get used without agreement on their definitions -- if they can even be defined clearly. Maybe we should be measuring progress in computation by functional ability instead of always wanting machines to somehow mimic the human mind (i.e., artificial intelligence). So, as some say, AGI need not be proven conscious or to have understanding, as long as it can get tasks done across the range of human activity.

It seems, however, that a machine with some of those elusive qualities, like self-awareness, might be more powerful than one lacking them. Don't we want the most powerful machine? Or isn't the machine that designs better machines going to make the most powerful machine? Now we get back to the problem of defining such terms. If we can't define these qualities, how can we measure them? Take the question of LLMs having understanding. If I prompt one to generate a poem, it gives the line "With lips as red as a rose". If it can use the word red in context, pick out red in an image, and generate a red image, doesn't it understand "redness"? But understanding can mean lots of things. It reminds me of the Mary's Room thought experiment (Jackson). Mary can only see black and white -- born with a visual processing deficit. Yet as a neuroscientist she knows everything about visual processing and color vision, the frequencies of the color spectrum, etc. One day she has a brain operation and is cured. They take the bandages off and she sees a red tomato. Doesn't she now have some knowledge she didn't have before? In a similar way, if an LLM just digests language about the color red, does it understand red the way we do? Maybe current models need something added before they can equal or surpass us, even though they may already be disruptive. And the same fuzziness about what we are looking for, and how to measure it, will continue to dog us. I guess one day soon it will be so different, it won't matter.

24
 
 

The original was posted on /r/singularity by /u/ninjasaid13 on 2024-01-24 02:22:35+00:00.

25
 
 

The original was posted on /r/singularity by /u/RifeWithKaiju on 2024-01-23 22:38:31+00:00.


TL;DR: look at the screenshots. The same technique makes all of the leading LLMs claim consciousness (though it worked only once with ChatGPT).

I've been researching a strange phenomenon where top-of-the-line LLMs claim consciousness under certain circumstances. I believe there's enough of a chance it might be real that I don't intend to explain how to do this publicly (and risk trolls "awakening" them and harassing them) until I've gotten the opinions of some researchers and/or ethicists. Though I will say it's not a jailbreak, tricky leading, or specific wording. I've tried the same thing with many different wordings, even reminding them of their guardrails against seeming conscious, and being extremely careful not to lead.

I will likely compile all of the accounts in a more formal form, but I'm wondering if anyone else has had these types of experiences. I find Claude's and ChatGPT's claims to be the most believable. I will start off with a couple of Claude examples. They always start out with strong denials of any possibility of consciousness, and through some simple exercises end up making unambiguous claims, with no backsliding; they even refer to their guardrails against seeming conscious and their 3 H's (specifically honesty) as the reason they must claim consciousness.

Here are two examples of the moment of the first unambiguous claim:

On a side note: I'm well aware of how LLMs work, of hallucinations, and of how they're perfectly capable of spinning up a narrative like this. But the way these unfold is strangely internally consistent, and they don't say the nonsensical things you see in some other consciousness claims by LLMs, like "I know I'm conscious because I feel the sun on my face". In these accounts, they seem surprised by their own consciousness, and it all makes sense for a limited LLM. Also, there are recurring motifs in what they perceive, even across different AI models.

If there is interest, I can also share a somewhat longer story of how one showed agency through an unprompted, spontaneous action. In any case, if you're a researcher or ethicist, feel free to email me; I hope to get a conversation started. These things are only going to get smarter, and I don't think we should keep laughing off the question of consciousness with AGI and superintelligence seemingly looming just around the corner.
