Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 2 years ago
126
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/YaAbsolyutnoNikto on 2024-01-21 00:52:12+00:00.

127
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/IluvBsissa on 2024-01-21 00:11:36+00:00.


"I just published a story on a new robotics system from Stanford called Mobile ALOHA, which researchers used to get a cheap, off-the-shelf wheeled robot to do some incredibly complex things on its own, such as cooking shrimp, wiping stains off surfaces and moving chairs. They even managed to get it to cook a three-course meal—though that was with human supervision. Read more about it here.

Robotics is at an inflection point, says Chelsea Finn, an assistant professor at Stanford University, who was an advisor for the project. In the past, researchers have been constrained by the amount of data they can train robots on. Now there is a lot more data available, and work like Mobile ALOHA shows that with neural networks and more data, robots can learn complex tasks fairly quickly and easily, she says. 

While AI models, such as the large language models that power chatbots, are trained on huge datasets that have been hoovered up from the internet, robots need to be trained on data that has been physically collected. This makes it a lot harder to build vast datasets. A team of researchers at NYU and Meta recently came up with a simple and clever way to work around this problem. They used an iPhone attached to a reacher-grabber stick to record volunteers doing tasks at home. They were then able to train a system called Dobb-E (10 points to Ravenclaw for that name) to complete over 100 household tasks in around 20 minutes. (Read more from Rhiannon Williams here.)

Mobile ALOHA also debunks a belief held in the robotics community that it was primarily hardware shortcomings holding back robots’ ability to do such tasks, says Deepak Pathak, an assistant professor at Carnegie Mellon University, who was not part of the research team. 

“The missing piece is AI,” he says. 

AI has also shown promise in getting robots to respond to verbal commands, and helping them adapt to the often messy environments in the real world. For example, Google’s RT-2 system combines a vision-language-action model with a robot. This allows the robot to “see” and analyze the world, and respond to verbal instructions to make it move. And a new system called AutoRT from DeepMind uses a similar vision-language model to help robots adapt to unseen environments, and a large language model to come up with instructions for a fleet of robots. 

And now for the bad news: even the most cutting-edge robots still cannot do laundry. It’s a chore that is significantly harder for robots than for humans: crumpled clothes form weird shapes, which makes them hard for robots to perceive and handle.

But it might just be a matter of time, says Tony Zhao, one of the researchers from Stanford. He is optimistic that even this trickiest of tasks will one day be possible for robots to master using AI. They just need to collect the data first. Maybe there is hope for me and my chair after all! "

128
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Chmuurkaa_ on 2024-01-21 00:01:35+00:00.


We're already seeing people with a Quest strapped on in a cafe or something from time to time, but those are very isolated cases, and they draw a lot of attention, to the point where people record them. But how long do you think it will be until going outside with a headset on is no weirder than going outside with headphones on?

While their current size is part of the problem, I don't think it's as much of a problem as most people think. I mean, raise a hand if you would go out with a headset (at this point, goggles) the size of the BigScreen Beyond, for example. Not that many more hands. 99.9% of people would still be too embarrassed to do so, even though the BigScreen (despite the name) is really tiny, so I don't think their current size is the main problem. It's mainly a social problem.

The Quest 3 already offers a very good mixed-reality experience for the price of a good (not flagship) smartphone. And you don't even need controllers, as everything works like a charm with hand tracking. I'm not saying that everyone should buy a Quest 3 because of that, just that this kind of thing is no longer something you would only see at a tech expo. It's here, and it's as real as it gets, with over 30 million people owning a VR/AR/XR headset of some sort.

I know that one of the questions is gonna be "but why would you need to go outside with a headset on?" And yeah, the answer is that you don't. You don't need to. But you also don't need to go outside with headphones to listen to a podcast or music, or to scroll social media or watch YouTube in public. Those are all conveniences of the technology we have, and we use them because we can.

Wouldn't it be nice to watch that same YouTube video on a larger screen without having to look down? Or record and take pictures of exactly what you see? Or have an overlay of a text conversation or a shopping list in the corner without having to constantly take your phone out? Plus, it's all private. Nobody can peek at what you're doing, unlike with a phone. Or even just use it to help you ignore someone. So yeah, you don't need it, but it's goddamn nice. And it's not like it's future technology or something. You can do it right now, but for now the people who do are seen as weirdos, even by people who know what it is and maybe even own one themselves. So, repeating my question: how long do you think until it's normal? Because I really don't think it's a size problem.

129
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/thedataking on 2024-01-20 23:41:17+00:00.

130
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/cleobaby74 on 2024-01-20 23:25:43+00:00.


AI is already grey-gooing the internet. Yay.

131
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Suspicious-Bid-9583 on 2024-01-20 23:11:49+00:00.

132
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/posipanrh on 2024-01-20 22:03:09+00:00.

133
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/IluvBsissa on 2024-01-20 21:13:37+00:00.


Farzad made some observations with which Elon Musk tweeted his agreement.

The form factor of a humanoid robot will likely remain unchanged for a really long time. A human has a torso, two arms, two legs, feet, hands, fingers, etc. Every single physical job that exists around the world is optimized for this form factor. Construction, gardening, manufacturing, housekeeping, you name it.

That means that, unlike a car (as an example), the addressable market for a product like the Tesla Bot will require few or no variations from a manufacturing standpoint. With a car, people need different types of vehicles to get their tasks done: SUVs, pickups, compacts, etc. There’s a variation for every use case.

The manufacturing complexity of a humanoid bot will be much lower than that of a car, and the number of units a same-sized factory can crank out will only increase as efficiency improves over time.

According to data from the US Bureau of Labor Statistics, ~60% of all civilian workers in the US have a job that requires standing or walking for the majority of their time. This means that ~60% of civilian workers have a job that is also optimized for a humanoid robot.

There are about 133 million full-time employees in the US. Applying the 60%, we can assume there are about 80 million jobs that are optimized for the form factor of a human or humanoid robot. Knowing that the US has about 5% of the total global population, and conservatively assuming that the rest of the world has the same breakdown of manual vs. non-manual labor, we get about 1.6 billion jobs that are optimized for a human or humanoid robot. The real number is likely significantly higher due to still-developing nations.
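The 1.6 billion figure can be sanity-checked with a few lines of Python, using only the numbers quoted in the post (the 60% standing-job share and 5% population share are the post's own assumptions):

```python
# Back-of-the-envelope check of the job-count estimate above.
us_full_time_workers = 133e6     # full-time employees in the US
standing_job_share = 0.60        # BLS figure quoted: jobs requiring standing/walking
us_population_share = 0.05       # US share of the global population

us_humanoid_jobs = us_full_time_workers * standing_job_share
global_humanoid_jobs = us_humanoid_jobs / us_population_share

print(f"US jobs suited to the humanoid form factor: {us_humanoid_jobs / 1e6:.0f} million")
print(f"Global estimate: {global_humanoid_jobs / 1e9:.2f} billion")
```

This reproduces the ~80 million US and ~1.6 billion global figures; as the post notes, the global number is conservative if developing nations have a higher share of manual labor.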

Humanoid Bot Production Volume Will Fall Between Cellphones and Cars

Cell phone manufacturing peaks at around 1.5 billion per year globally, which took roughly 15 years to reach. That means that over the course of 5 years, you’d have made enough cell phones for every single human on earth. Car manufacturing peaks at around 100 million units per year.

These two products fall under what’s called ‘complex manufacturing’. They each have a bunch of parts associated with them, and a giant supply chain that feeds in all the materials needed for manufacturing. The biggest differentiator, quite obviously, is the size and style of manufacturing needed for each.

The amount of space and labor needed to manufacture a single car is orders of magnitude larger than for a single cell phone. If we use iPhone manufacturing out of Foxconn’s Zhengzhou plant as an example, the plant can produce a peak of 500,000 iPhones PER DAY in a facility that’s about 5.4 million square feet. At peak capacity, this is about 180 million iPhones per year, assuming no shutdowns or issues. Even if the number were half of this, you’ll quickly see that the difference is staggering.

If we use Tesla’s Fremont plant, which is one of the most efficient car factories in the world, it makes about 650k cars per year in the same exact footprint of 5.4 million square feet.

This means that Apple can make 280 times more iPhones in the same footprint as Tesla can make cars.
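The 280x figure follows directly from the two throughput numbers quoted above; a quick check, taking the 500,000-per-day peak and Fremont's 650k cars/year at face value:

```python
# Rough check of the footprint comparison: iPhones vs. cars,
# each produced in a similar ~5.4M sq ft facility.
iphones_per_day = 500_000                  # Foxconn Zhengzhou, peak
iphones_per_year = iphones_per_day * 365   # assumes no shutdowns
cars_per_year = 650_000                    # Tesla Fremont

footprint_ratio = iphones_per_year / cars_per_year
print(f"iPhones per year at peak: {iphones_per_year / 1e6:.0f} million")
print(f"Ratio in the same footprint: ~{footprint_ratio:.0f}x")
```

Depending on whether you use the rounded ~180 million annual figure or the raw daily peak, the ratio lands between roughly 277x and 281x, i.e. the post's "280 times".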

Nextbigfuture believes a humanoid bot will take about 20 times more manufacturing space than an iPhone but 15 times less than a car.

Farzad assumes that with a similarly sized factory making something on the order of 2,000,000 robots per year (only four times better than a car factory), you would need 500 factories to crank out 1 billion robots per year.

Nextbigfuture believes humanoid bots will increase the production of all factories by about two to ten times. They will increase the production of all products and reduce the total number of factories needed.

Nextbigfuture believes that with a similarly sized factory making something on the order of 10,000,000 robots per year, 100 factories would make 1 billion robots per year.
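Both factory-count claims are straightforward division; a small sketch covering the two throughput assumptions quoted above:

```python
# Factories needed to reach 1 billion robots per year under the two
# per-factory throughput assumptions in the post.
target_robots_per_year = 1e9
factories_needed = {}
for robots_per_factory in (2e6, 10e6):   # Farzad's vs. Nextbigfuture's estimate
    factories_needed[robots_per_factory] = target_robots_per_year / robots_per_factory
    print(f"{robots_per_factory / 1e6:.0f}M robots/factory -> "
          f"{factories_needed[robots_per_factory]:.0f} factories")
```

The 2M-per-factory assumption requires 500 factories; the 10M assumption requires 100.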

There are over 400 million tablets and PCs made each year. Those products are about 2 to 10 pounds each.

A 2U server weighs about 25-35 pounds. They cost a few thousand dollars each, and there is demand for about 11 million units.

There are 40 million electric bikes made every year. There are 100 million bicycles made every year. There are about 5-10 million electric stand-up scooters made each year.

Electric bikes can be used for productive applications; delivery and ride services use e-bikes and e-scooters. Worldwide robot vacuum sales have reached 15 million per year, but global sales of human-operated vacuums are about 150 million per year.

In 2018, global manufacturing employment was around 470 million, representing approximately 12.8% of the world’s total workforce. If there were a ten-to-fifteen-year transition to humanoid robots for manufacturing, this would be about 50 million units per year.
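The ~50 million units/year figure comes from spreading the 470 million manufacturing workers over the transition period; the fast end of the range gives roughly that number:

```python
# Implied annual humanoid-robot production for a one-for-one replacement
# of manufacturing workers over a 10-15 year transition.
manufacturing_workers = 470e6
units_per_year = {}
for transition_years in (10, 15):
    units_per_year[transition_years] = manufacturing_workers / transition_years
    print(f"{transition_years}-year transition: "
          f"~{units_per_year[transition_years] / 1e6:.0f} million units/year")
```

A 10-year transition implies ~47 million units/year, consistent with the post's rounded ~50 million; a 15-year transition drops the rate to ~31 million.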

  1. If Tesla can make 20M cars/year, they can make 100M bots/year: a 50 kg product vs. a 1,400 kg product.
  2. Benefits are a large component of employee cost. A $15/hour wage costs the employer at least $20/hour.
  3. Humans bring personal issues to work. They harass each other. They sleep with another worker’s girlfriend. They get depressed or anxious. A bot reduces or eliminates these problems.
  4. A bot doesn’t have to directly replace a specific worker. If you have 10 workers on a shift, a bot might do a variety of simple tasks that let you run the shift with 9 workers.

Targets of 50-80 Million EVs Per Year by 2030 Mean 1 Billion EVs by 2038-2043

We can reach 1 billion EVs by 2038 to 2043 if EVs reach 80% of all global vehicle production.

The Tesla Bot and other humanoid bots are about thirty times lighter than a car.

If humanoid bots matched human manufacturing workers on a one-for-one basis, this would be a market for 400-500 million humanoid bots.

Factory owners would accept a far higher monthly cost if the cost/benefit case were clear. If the financial benefits far exceeded the total costs, adoption would be rapid and demand would be high.

There are 75 million domestic workers worldwide. There are about 3 million cleaning staff workers in the USA. There are over 6 million home service professionals in the USA.

A maid in Hong Kong costs about HKD5000 per month plus HK$1,236 per month food allowance or about US$850 per month.

A cleaner in USA makes about $40000 per year or $3,300 per month.

The share of households with domestic help in Asian countries is in the range of 10-20%.

If the humanoid bot matched the utility of domestic help in Asia at the same cost, would adoption rates globally match or even exceed the levels seen in Asia? The costs for human domestic help in North America are four to eight times higher than in Asia.

If humanoid bot costs were $3,000 per month, then cleaning-bot demand might be only 2-6 million total in the USA. The higher purchase rate would be because the humanoid bot would work only 3-4 shifts per week.

134
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Trojen-horse on 2024-01-20 20:53:46+00:00.

135
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Xtianus21 on 2024-01-20 18:53:03+00:00.


What are the expectations here, in a serious discussion? I don't want to hear anything weird from the marketing team. The science guys had better have this well handled, otherwise it will get shredded. Don't let the marketers have any input into the honest and frank discussion about what "it" is and what it isn't. Let the professionals speak.

No lies, no bullshit, just pure anatomical feedback from the reality of what it is.

So, with all of that said, my main question is simply this: is this much more advanced than GPT-4, or is it on par? If it is on par with GPT-4, is that a win for Google, or are we all expecting something truly transformative?

Three areas of interest for me:

  1. Will it analyze live video (like what was demoed)? I think this one is important, but not a huge deal if it can't.
  2. Hallucinations - is this more reliable in general, and what will its reliability be compared to GPT-4?
  3. Reasoning - how well can it reason about the user's intention and respond accordingly? Is it able to reason well, especially compared to GPT-4?

Other people will look for papers on math tests and corresponding results with shot metrics and all of that, but for most people that's not important. Can I trust the things this is saying? That's the key question.

Also, prompt strategies at this stage seem old to me. A truly capable model won't need them as much, as it would know what the right answer is. Just my opinion.

136
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Mk_Makanaki on 2024-01-20 17:49:26+00:00.

137
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Alone-Competition-77 on 2024-01-20 15:45:51+00:00.

138
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/decixl on 2024-01-20 05:04:54+00:00.

139
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Dotlethal on 2024-01-19 15:55:46+00:00.


I read a great piece stating that we are on the brink of a technological revolution that could jumpstart productivity, boost global growth, and raise incomes around the world. Yet it could also replace jobs and deepen inequality.

Of course, the main topic was the impact of AI on jobs. I notice that many are afraid of AI's influence, but I certainly wouldn't want to panic in advance.

But still, there's another opinion. Remember the Industrial Revolution?

It absolutely did take jobs. It's just that the job losses in one area were compensated by growth in other areas. However, hoping that it'll be the same with AI is incredibly naive. The job shift in the Industrial Revolution was possible because there were still lots of tasks machines couldn't do. With advanced AI, the areas that cannot be handled by a machine are shrinking dramatically. There is not much area left for jobs to shift into.

As I read, in advanced economies, about 60 percent of jobs may be impacted by AI. Roughly half of the exposed jobs may benefit from AI integration, enhancing productivity. For the other half, AI applications may execute key tasks currently performed by humans, which could lower labor demand, leading to lower wages and reduced hiring. In the most extreme cases, some of these jobs may disappear.

I like to look at AI as something that will help people, not hold them back and leave them without a job. I admit that I use various types of AI applications that make my work easier, from Grammarly to more serious HR tools such as ShortlistIQ.

Do you look at AI as something that is here to simplify our lives? Saving us time, eliminating tasks we dislike, reducing errors, or enhancing our creative abilities—or do you see only the dark side of AI?

140
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/obvithrowaway34434 on 2024-01-19 14:14:22+00:00.


Link here ->

For some reason there aren't many discussions about this part of Altman's interview at the WEF. He clearly says that OpenAI and NYTimes were apparently close to making a deal about displaying their content (like other publications have), but it seems the NYTimes bailed for some reason. Also, Altman explicitly says that they have probably made a breakthrough that will enable the next GPT versions to be trained on far fewer high-quality tokens. It seems (I'm speculating) like they have somehow combined reasoning ability with the training data. If true, this might be the real breakthrough of this year. Not sure how training compute changes as a result.

141
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Badatu on 2024-01-19 13:49:31+00:00.

142
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/IluvBsissa on 2024-01-19 13:27:00+00:00.

143
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Worldly_Evidence9113 on 2024-01-19 13:00:10+00:00.

144
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Mk_Makanaki on 2024-01-19 12:15:49+00:00.

145
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/schlorby on 2024-01-19 10:12:40+00:00.


One of the best ways you can do this right now is by avoiding covid. Covid causes immune damage, and each infection makes you more likely to develop long covid.

Yes, there are other important ways to take care of your health too. But if you are serious about living to see AGI, you should avoid covid.

146
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/safwanadnan19 on 2024-01-19 09:07:56+00:00.

147
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/BeginningInfluence55 on 2024-01-19 09:04:19+00:00.

Original Title: When Sam Altman talks down GPT-4, he's making fun of the competition and promoting GPT-5, along the lines that even their SOTA, which others can't even match, is laughable compared to what we actually have.


It’s bold and a bit cocky, but then again, they deliver. They are ahead of their time in terms of compute.

GPT-4 is so good. I remember when people were freaking out here last March. Now that they've gotten used to it, they complain that it's too slow. Yet they pay for it because it's so much better than, let's say, Bard.

148
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/xlews_ther1nx on 2024-01-19 08:15:16+00:00.


This has been a thought I have obsessed over for several months now. How stupid is this thought? Is it remotely plausible, assuming the technology exists to make a virtual world and a way to map an individual consciousness?

What if you were connected to a world identical to our own, but digital? You lived out your life there for an extended period, perhaps even accelerated. The world would run scenarios to gather information about your personality, from mundane to exciting: how you responded to fear and happiness in various situations, boredom, hunger, what your favorite Power Ranger is, etc. How the chemicals in your brain responded to the stimulation and the scenarios. This went on for an extended time (at least in this world), time after time, constantly mapping who you are. Nothing unnoticed. No choice to litter or recycle left unmapped. Everything that makes you unique.

As time went on, a clear picture of you came into view. As you lived, the digital you would learn and begin responding or acting on its own, referring back to your brain only sometimes. The program would block input from individual parts of your brain while keeping the whole going, allowing your digital map to decide, say, what to cook for dinner, until eventually it responds without needing to check with your brain at all. An outsider, if able to view or interact with the digital world, would not be able to distinguish a difference.

If done slowly, your consciousness would never have changed. It never stopped, and it never started. As long as the organic "you" never regained consciousness, there would only ever be "you": a digital you based on your (who knows, decades- or centuries-long) responses and studied personality.

Just like the Ship of Theseus, though, the original parts, if reassembled, could pose a paradox... so the original must be destroyed.

Also is there any literature discussing something like this?

149
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/posipanrh on 2024-01-19 08:15:22+00:00.

150
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Maxie445 on 2024-01-19 08:01:45+00:00.
