Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 2 years ago
101
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/swimeatplay on 2024-01-22 07:58:21+00:00.


I see too many posts here that essentially boil down to "Are we going to get free payouts from the government and stay at home with our new AI waifus?" and "Will the rich, who own all the robots, give us money or will we starve?"

Yes, we all want to plug into our Full Dive VR and leave our shitty, boring lives behind. But concerns about the post-labor economy are overblown; they ignore the elephant in the room. Let me say it again: the post-labor economy is a transitory stage that will not last more than a few years. Open a savings account now and stock up on toilet paper for the upcoming post-apocalypse if you want, but that's not even a blip in what's really going to happen.

Artificial General Intelligence is not here yet. We don't know whether large language models are the pathway to AGI or a dead end. But it doesn't matter at all. Why? Because the arms race is finally on. Governments and corporations have finally started spending money on developing AI. In 2010, we were spending 15 million on AI research. Today the budget is 150 billion. That's a 10,000X increase in spending in less than 15 years. The military hopes to build the next big nuke; businesses are looking to lay off more people; the technology sector is looking to invest in the next biggest thing since the internet.

But for us, we should be looking for the birth of God.

Omnipotence, omniscience, and omnipresence. Technology, knowledge, and nanobots. All three are achievable through intelligence. And boy, are you guys underestimating what an Artificial Superintelligence is. This is not some ChatGPT that passes the Turing test. Newton discovered calculus and the laws of classical physics. Einstein discovered relativity. But ASI will discover EVERYTHING there is to know. Today's science is so hyper-specialized that you could spend a lifetime and not master a single subfield of one field of one branch of science. We have so much data in hundreds of fields that it's become literally impossible for the human mind to learn a fraction of it, let alone make connections, draw conclusions, and form theories from all this knowledge we've accumulated.

If an ASI had access to every research paper we have right now, it would take seconds to figure out a cure for cancer, depression, and death itself. Evolution didn't have any plans for us except to reproduce and not die off as a species. Our human bodies are equipped with the bare essentials to survive and reproduce. Biological evolution is not driven by any intellect or any actual goal, but it shows us what a bunch of carbon-based cells is capable of. Immortal cells exist, and death is not a physics-based constraint in biology.

The singularity is an eventuality. AGI will always evolve into an ASI. As a matter of fact, you won't have much time between the two. A ChatGPT with human-level intellect will have more knowledge than every human combined, by virtue of the internet's data. The timespan between each iteration will decrease exponentially until we reach effectively infinite intelligence. I have no idea what infinite intelligence would even mean, but every problem we humans have, including human happiness, is a trivial problem for it to solve.

So now you have an infinitely smart AI, equipped with more knowledge than the entire human race. The outcome can either be our extinction or immortal bliss. I don't see how anything in between can happen when you combine superintelligence with billions of robots and nanobots. The question becomes one of alignment. And don't tell me a bunch of apes can outsmart a god-like intelligence. If Trump can convince people to let him run the most powerful government in the world, I don't think there will ever be a way to constrain a superintelligence.

We're primates with empathy, urges, cravings, emotions, territorial behavior, tribalism, societal rules, and a whole bunch of evolutionary traits on top of our intelligence. That's why we have concepts like good, evil, and morality. AI has none of that. It's our final coin flip into either eternal heavenly bliss or extinction.

TLDR: Welcome to the new religion of the 21st century.

102
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Cunninghams_right on 2024-01-22 07:34:57+00:00.


As agents and web search become more and more prevalent in AI tools (Bing/ChatGPT/Bard/etc.), I think the next big step is to gather information and make meaningful analysis of it, which none of the tools can really do right now. Currently, Bing Chat/Bard can gather information from one, maybe two, websites and present it to you in a meaningful comparison; beyond that, it falls apart. I hope we see LLM-based tools that are able to gather and compare more information, like looking at grocery store websites.

The shopping list test:

  • The user makes a shopping list, pastes it into the tool, and asks for a list of the cheapest places to shop in their area
  • The tool should either already have the user's location or know to ask for it
  • The tool should crawl each grocery store's website and sales flyer
  • The tool should come back with the total price for the given shopping list at the nearest grocery stores

Not that shopping lists are really important; they just make an easy test to know when we've entered an era where the tools can really be an assistant versus just a better way to google information. It could be data about house prices, transit schedules of different cities, etc.
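For illustration, here's a minimal sketch in Python of just the price-comparison step at the end of that pipeline. The store names, items, and prices are made-up stand-in data; the hard part a real tool would need is crawling each store's site and flyer to fill them in.

```python
# Minimal sketch of the "shopping list test" comparison step.
# CRAWLED_PRICES is made-up stand-in data; a real assistant would
# populate it by crawling each nearby store's website and sales flyer.
from dataclasses import dataclass

CRAWLED_PRICES = {
    "Store A": {"milk": 3.49, "eggs": 2.99, "bread": 2.19},
    "Store B": {"milk": 3.29, "eggs": 3.49},  # no bread listed
}

@dataclass
class StoreQuote:
    store: str
    total: float        # total price for the items the store carries
    missing: list[str]  # items the store does not appear to carry

def price_shopping_list(items: list[str]) -> list[StoreQuote]:
    """Rank stores by how cheaply and completely they cover the list."""
    quotes = []
    for store, prices in CRAWLED_PRICES.items():
        found = {i: prices[i] for i in items if i in prices}
        quotes.append(StoreQuote(
            store=store,
            total=round(sum(found.values()), 2),
            missing=[i for i in items if i not in found],
        ))
    # Prefer stores that carry everything, then the lowest total.
    return sorted(quotes, key=lambda q: (len(q.missing), q.total))

for q in price_shopping_list(["milk", "eggs", "bread"]):
    print(q)
```

Even this trivial ranking step assumes the hard parts (reliable crawling and matching list items to store products) are already solved, which is exactly where today's tools fall apart.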

103
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Maxie445 on 2024-01-22 07:20:43+00:00.

104
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/thedataking on 2024-01-22 07:03:34+00:00.

105
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Ok-Worth7977 on 2024-01-22 07:02:38+00:00.


Option 1: a system (which will definitely be AGI level or higher) gets a medical license (including the right to prescribe scheduled drugs).

Option 2: an advanced AI system gets the right to grant humans (or other AI systems) medical licenses, with a comprehensive interview plus examination (including a practical) being sufficient.

106
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/PsychoComet on 2024-01-22 06:22:41+00:00.

107
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Eduard1234 on 2024-01-22 05:26:00+00:00.


Am I the only one seeing this stuff occur all over the place?

Especially here on Reddit: training on data from this site seems like a horrible idea. If you feel that AI, in a sense, houses the collective knowledge of people, then this data poisoning is a new way to have a widespread negative impact on all of humanity just to enrich yourself. So yeah, not cool. I'd say it should be illegal, but how in the world would you do that without destroying free speech?

108
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/RGregoryClark on 2024-01-22 04:42:32+00:00.

109
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Loose_Yard5371 on 2024-01-22 04:10:23+00:00.


In a perfect world, we would be happy that we no longer need to work; when AI takes over our jobs, we could pursue our passions instead of enduring long, stressful hours.

In the past it was predicted that we would end up shortening our work week to 20 hours, but that never materialized, as we all know.

Instead, the oligarch class uses AI and automation to trim jobs and cut costs, all the benefit goes to the top instead of the working class, and we are still more stressed than ever.

Now, if they keep automating jobs and there are no wages to be earned, then who can afford to buy their products at the end of the day?

Why isn't there a distribution of benefits or any kind of social safety net?

Is this a sign that we need to change our global economic system?

110
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Alone-Competition-77 on 2024-01-22 03:24:59+00:00.

111
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/vodku_buhayu on 2024-01-22 01:54:14+00:00.


Hello everyone,

I am a Ph.D. student, and part of my thesis research is on public perceptions of AI, specifically regarding trust and trustworthiness. So I am looking for participants for my survey (it is a short Qualtrics survey that should take less than 10 minutes). I would really appreciate your input and insights! (I hope this does not count as a self-promotion post; if it does, I apologize.)

Link to the survey

* The research is conducted by Cyber Science Lab at the University of Guelph, Guelph, Ontario, Canada. This project has received approval from the Research Ethics Board, ensuring compliance with Canadian federal guidelines for research involving human participants (REB#23-08-018).

112
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Dr_Singularity on 2024-01-22 00:39:57+00:00.

113
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/HaOrbanMaradEnMegyek on 2024-01-21 22:20:02+00:00.


Image generation is close to perfect. ChatGPT and even open-source models can already fool people. Within 5 years this will be a huge problem. They will have to implement gambling/banking-site-level KYC measures, otherwise automated catfishing will take over their sites. What do you think?

114
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/YannickWeineck on 2024-01-21 22:16:35+00:00.

115
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/theworldsgonemad1 on 2024-01-21 20:35:57+00:00.


If you had unlimited funding to solve 5-10 year problems, which focus areas would you be honing in on?

Assuming the singularity requires connected-everything, we are talking about technological advances that can be widely scaled and that incorporate legacy systems.

What do you see as the gateway use cases that devs/execs with budget need to solve first? Given that funding requires business cases, and business cases require business-related applications, I'm looking for ideas that solve genuine human problems and create productivity, societal benefit, or environmental impact (these are things VCs/enterprise orgs will pay to chase, as the outcomes can be commercialized later).

Mods removed my last post, so hopefully this one relates more specifically to the singularity at the heart of the post. If they'll let a conspiracy theorist ask about post-singularity aliens from other galaxies, then this one, which might help save humanity, must be OK, right?

Tia

116
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Jazzlike_Win_3892 on 2024-01-21 19:27:17+00:00.


Drop your ideas on why you believe we haven't been contacted by post-singularity aliens yet, and where they might be.

117
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/FrankScaramucci on 2024-01-20 20:16:06+00:00.

118
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/IluvBsissa on 2024-01-20 19:19:23+00:00.

119
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/SharpCartographer831 on 2024-01-21 18:29:09+00:00.

120
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/MontanaLabrador on 2024-01-21 18:18:30+00:00.


Out of fear or because of genuine dangers, societies the world over will likely try to control an intelligence explosion.

A single person with access to superintelligence could potentially cause great harm. Politicians will be able to make the case to the public that these systems are too dangerous to be allowed for general use by businesses and individuals, making them government-only, kind of like nuclear weapons.

Even if they can still be accessed illegally, I’m curious about the political reaction to more and more intelligent models.

What level of intelligence will societies coalesce around as being deemed “safe enough” for unfettered access in the economy?

Also, will these bans work in the long run or is it inevitable that the attempts at government control will fail?

121
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/tycooperaow on 2024-01-21 17:26:12+00:00.


I hear a lot of doom and gloom about AI, as if the retrofuture depictions of an AI future (like Terminator, I, Robot, The Jetsons, Star Wars, or even Transformers) were essentially right around the corner. We can see the massive loss of jobs coming, and people claim that the wealthy, who will own all the robots, will hold all the power and access to AI tools and become "overlords" over the commoner. As I ponder this topic, I think the wealthy would be just as doomed: if there's no one working, the value of their material wealth drops because there will be no one to buy their services.

I hear a lot of talk about UBI, and I think there's a future where it makes sense, but it's hard for us to conceptualize because we are accustomed to a capitalistic structure, which I believe could erode away. We saw a hint of UBI with the Covid stimulus packages, but evidently most of those funds settled back into the pockets of the wealthy. But if there's no one working because bots take care of everything, would more people be able to live a more comfortable life of leisure, like in Wall-E?

What are your thoughts?

122
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/3ntrope on 2024-01-21 16:31:19+00:00.


The topic of Full Dive VR appears frequently in futurist communities, but there seems to be a disregard for the issues that hold it back and for realistic timelines until it's available. BCIs are difficult, and advancement in biomedical tech is slow due to the practical and ethical limitations associated with human testing. We could easily be 50-100 years away from it simply because of limitations on performing the human testing and clinical trials that would be necessary to ensure safety. In most countries this would be considered unethical unless there was a medical need, which greatly limits how quickly the technology can advance.

However, VR enthusiasts are in luck, because the technology for the next best thing to FDVR is almost here. A VR headset plus the ability to move naturally in the virtual world can be made readily available, and to some extent already is. Current VR game experiences can be broken down into:

  • Cockpit-based VR Simulations (flight, racing, etc.)
  • Primarily Social VR Experiences
  • Action VR Games (shooters, platformers, sports, etc.)

Arguably, for flight and racing simulations, "full dive"-like VR is already here. A motion system + a high end PC VR headset can provide an extremely realistic experience. There are even options for home motion rigs. They are not cheap, but not unreasonable for a hardcore enthusiast.

The hardware for primarily social VR experiences is also available, although the cost of eye and full-body tracking is high. Even though the popularity of social VR is growing, the hardware costs and poorly optimized, questionable software hold it back from reaching its full potential. Still, the hardware is available for immersive experiences if the goal is purely socialization. Since most movement in these worlds is standing and dancing, it can be covered with basic full-body tracking and no additional hardware like an omnidirectional treadmill.

Then there are action VR games. These include the types of activities that I imagine the bulk of the population would want to experience in VR. VR headset visual quality and haptic controllers are improving enough to enable this, but we are missing readily available omnidirectional treadmills to emulate walking and running through virtual worlds. There have been good and bad prototypes of omni treadmills for at least a decade now, and it's possible that one day some group will figure out how to make them available to the masses.

The combination of a human-resolution, human-FoV VR headset + force-feedback haptic gloves + a perfected omnidirectional treadmill will be nearly as good as "full dive" VR and is something that is technologically feasible within our lifetimes. The challenges that remain include creating better hardware while making it affordable for the average person. And of course, there's the software and the immersive VR worlds themselves to develop, but those will require the hardware to be ready first. Rather than hoping for a BCI for FDVR that may never arrive within our lifetimes, we should consider how we can improve current VR hardware to make super-immersive VR worlds. I think we are very close.

Note: I excluded VR productivity applications or "Spatial Computing" as Apple calls it because that fits under Mixed Reality and Augmented Reality, and is not really on the path to "full dive" or super-immersive VR.

123
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/abudabu on 2024-01-21 16:08:18+00:00.

124
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/re_DQ_lus on 2024-01-21 14:29:34+00:00.


iPad kids are already bad enough; what will happen when GPT kids come along? Would you be comfortable with the amount of power AI companies will hold if their AI becomes like a parent to those kids?

125
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/AGIbydecember2023 on 2024-01-21 12:35:25+00:00.



View Poll
