Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

151
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Upasunda on 2024-01-19 07:53:27+00:00.


We posit that to achieve superhuman agents, future models require superhuman feedback in order to provide an adequate training signal. Current approaches commonly train reward models from human preferences, which may then be bottlenecked by human performance level; moreover, these separate frozen reward models cannot learn to improve during LLM training. In this work, we study Self-Rewarding Language Models, where the language model itself is used via LLM-as-a-Judge prompting to provide its own rewards during training. We show that during Iterative DPO training, not only does instruction-following ability improve, but so does the ability to provide high-quality rewards to itself. Fine-tuning Llama 2 70B on three iterations of our approach yields a model that outperforms many existing systems on the AlpacaEval 2.0 leaderboard, including Claude 2, Gemini Pro, and GPT-4 0613. While only a preliminary study, this work opens the door to the possibility of models that can continually improve along both axes.
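For a concrete picture of the loop the abstract describes, here is a minimal Python sketch of one self-rewarding iteration. The interfaces (`model.generate`, `dpo_update`) are assumed placeholders, not the paper's actual code:

```python
# Minimal sketch of one Self-Rewarding iteration (assumed interfaces).
# The same model both generates candidate responses and scores them
# via an LLM-as-a-Judge prompt; best/worst pairs then feed DPO.

JUDGE_PROMPT = (
    "Review the response below and give a score from 0 to 5 for how "
    "well it follows the instruction.\n"
    "Instruction: {prompt}\nResponse: {response}\nScore:"
)

def self_rewarding_iteration(model, prompts, n_samples=4):
    preference_pairs = []
    for prompt in prompts:
        # 1. The model generates several candidate responses.
        candidates = [model.generate(prompt) for _ in range(n_samples)]
        # 2. The *same* model judges each candidate (LLM-as-a-Judge);
        #    assumes the judge returns a bare number.
        scores = [
            float(model.generate(JUDGE_PROMPT.format(prompt=prompt, response=c)))
            for c in candidates
        ]
        # 3. Highest- and lowest-scored responses form a preference pair.
        chosen = candidates[scores.index(max(scores))]
        rejected = candidates[scores.index(min(scores))]
        preference_pairs.append((prompt, chosen, rejected))
    # 4. One round of Direct Preference Optimization on self-generated
    #    pairs; dpo_update is a placeholder for a real DPO trainer.
    return dpo_update(model, preference_pairs)
```

Iterating this loop is what the paper calls Iterative DPO: each round's model generates and judges the training data for the next round.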

It seems to me that this is truly a huge leap if true, especially in combination with synthetic data and self-instruction. Acceleration will be the only option.

152
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/n035 on 2024-01-19 07:19:33+00:00.

153
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/DreaminDemon177 on 2024-01-19 05:58:14+00:00.


We face plenty of grave problems where my proposed solution is a long shot and, indeed, even getting people to accept its desirability and attempt it is almost impossible. But at least I have one (even for original sin). On AI, the closest I can come is that we must destroy all microchips and imprison anyone who tries to make one. But such a thing is, I think, utterly infeasible because even if we could catch every rogue inventor, nasty regimes will keep building robots. And arming them. It won’t help the dictators either, but the key point is we won’t be able to stop it.

154
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/I-am-dying-in-a-vat on 2024-01-19 04:55:40+00:00.


155
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/obvithrowaway34434 on 2024-01-19 04:32:54+00:00.


This may end up in one of those subreddits that highlight irony, and get heavily downvoted here, but I'll take the risk.

In spite of all the hype over the past year, we only have two actually useful products: ChatGPT and Bard (debatable how useful it is). Midjourney is impressive, but I'm not aware of anyone using it professionally. Everything else is either pure entertainment (Character AI, for example), some cherry-picked demo/paper, a thin OpenAI wrapper, or just a big press announcement about how many H100s are being acquired with the promise of AGI (the most recent example being Meta). I cannot think of a single other product that has come out of all this and been useful to people outside of AI research. This is crazy to me, since there are so many low-hanging fruits I thought big companies would tackle, like a capable voice assistant, better autocorrect on phones, better handwriting recognition (including equations), text-to-speech, and so many others that don't require AGI or anything even near GPT-4-level compute.

I'd love to be shown counterexamples and be proven wrong, but I honestly feel we're heading towards a big bubble burst and a reality check. Undoubtedly AI will continue to progress and will change the world, but the unnecessary hype really needs to die, or it will create serious problems such as mass panic and overregulation, which will have harmful consequences.

156
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/SpecialistHeavy5873 on 2024-01-19 04:20:34+00:00.


A few months ago, when all the focus was on the latest text/image models, I predicted video generation would be next, and that after advanced video/vision models, physical robots would be the only thing left.

And that's exactly what's happened, but much faster than I thought. At that time, people were predicting years even for video, let alone robots. But now we have more and more video/vision models, and in the past few weeks robots doing all kinds of things have filled the news. People are saying 2024 will be the year of robotics.

The point of this post is how quickly people move from one thing to the next once it's solved. When you're in the middle of it, progress always seems slow, because you're only focusing on the latest developments. Since each next advancement has many more layers to it (robots are much harder than video; video is much harder than images), it seems slow while you're focused on that specific technology. But once it's solved, people forget about it and move on to the next thing. And the speed keeps increasing exponentially.

When robotics are solved, everyone will suddenly move on to the next possible advancement after physical embodiment.

157
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/clueelf on 2024-01-19 04:02:20+00:00.


I wanted to pose a serious question (longish read) to the LLM community about the future of work and some thoughts my friends and I have about it. I've worked in IT for 20+ years and have to think about how this stuff will affect my teams. I am mainly concerned about what happens if AI is truly able to perform all human tasks by some arbitrary date. Let's say we achieve AGI by 2027. The question becomes: now what?

What happens to our society?

What happens to work?

What happens to industry?

What happens to our economic models?

I think the first thing to talk through is what it really means when we have full AGI. I think the immediate effect will be that human labor begins to lose its marginal utility. AGI provides a much more effective and efficient labor resource than humans, again assuming that AGI can perform all human tasks at or above typical human levels. Adequate is the important word here, because all you NEED is for the AGI to be adequate for it to replace a human being. It doesn't have to be perfect. However, market pressures will drive innovation to improve AGI's effectiveness and efficiency, so very soon after all roles are filled with an adequate AGI replacement, the race will be to make them perfect. The company that is able to build the perfect AGI system will be the one that wins.

So, assuming AGI begins replacing humans, what would humans do? Well, I see, in very simple terms, two immediately affected areas:

Blue Collar jobs: these will not be replaced with AIs like LLMs; they will be replaced with LLM-powered robotics systems. Robotics is going to become pervasive, but will be hidden behind the scenes at first. High-risk activities like oil drilling or underwater welding will be replaced by robotics systems. The insurance costs alone will drive a lot of these innovations: it's WAY more expensive to cover the cost of losing a human being than of losing a piece of hardware. Another big area is storage and manufacturing. Robots are already eating up the warehousing industry (Amazon), and they already sort boxes at USPS/UPS/FedEx. Freight loading is another area where AI-driven systems do it better, faster, and longer. As more and more robots get built that can fit into spaces only humans can currently reach, jobs such as car mechanic, plumber, etc. will be affected.

White Collar Jobs: these will be replaced by multi-agent systems like AI Town and/or AutoGen. One thing that is coming to fruition very fast is the ability to use a multi-agent system to simulate, and eventually replace, coordinated teamwork. We could create world simulators that allow us to build virtual communities using AI agents that interact like we do. If this sounds far-fetched, look into AI Town and its current capabilities. (A toy sketch of the basic multi-agent pattern follows below.)
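To make that pattern concrete, here is a deliberately tiny Python sketch of the agents-sharing-one-conversation loop that systems like AI Town and AutoGen build on. The `llm_reply` helper is a hypothetical stand-in for any chat-completion call; nothing here is either project's actual API:

```python
# Toy sketch of coordinated multi-agent "teamwork" (hypothetical helper).

def llm_reply(role: str, history: list) -> str:
    """Placeholder: call an LLM with a role/persona prompt plus the
    shared conversation history, and return its next message."""
    raise NotImplementedError  # wire up a real chat-completion call here

def run_team(task: str, roles=("planner", "engineer", "critic"), turns=9):
    history = [f"Task: {task}"]
    for turn in range(turns):
        role = roles[turn % len(roles)]      # agents take turns
        message = llm_reply(role, history)   # each sees the shared history
        history.append(f"{role}: {message}")
        if "DONE" in message:                # crude termination signal
            break
    return history
```

The point is only that "a team" reduces to a loop over role-conditioned model calls sharing one transcript, which is why simulating coordinated work is suddenly cheap.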

The initial thought on how to use this tool is to create virtual communities that can model different governance systems, policy frameworks, communication frameworks, and asset-management frameworks. We can use existing management theories and scientific processes as models for building these fully autonomous worlds. Add onto that tools such as Unreal Engine and virtual reality, and you could build fully autonomous virtual worlds, each functioning as its own little universe. With a VR headset you could **BAMPF** into your little world simulator and get direct experiences inside the world as an agent interacting with your custom simulator's inhabitants. These could become real-world implementations of fully autonomous organizations. A truly digital DAO.

Assuming these DAOs become the norm, each becomes a corporation in a box. Hook that box up to real-world resource-management frameworks, and you can model and build a virtual corporation made up of multi-agent avatars who work in a virtual world producing real-world products and services. This then shifts human labor from working in a system (a real-world corporation) to working ON a system (a virtual-world corporation). Individuals who understand organizational dynamics and organizational behavior will become the superstars of this new world, because management moves from managing resources, which is being optimized away by AGI and robotics, to managing the AI agents themselves.

This presents a dilemma. Contrary to popular opinions about supply-side economics, our economy is driven by consumer demand and purchases. In a scenario where all work goes away, how do individuals prove their value to society to secure their share of goods and services so they can live? I think this speaks to a core issue we have as human beings: our self-worth has been defined by the value we provide back to society. High-value humans get more than low-value humans. I'm not trying to be crude or callous, but strip away the legalese and corporate bullshit and that is what it comes down to. "Human Resources" says it all: humans are resources and are economized. But strip away a core component of our society's value system, our value on the labor marketplace, and how do we humans determine who gets what, and why?

As far as economics goes, I don't think UBI will be viable, simply because it shifts vast amounts of power over to those who manage the governance platform and provides perverse incentives for exploitation and corruption. So a market economy will still be a viable solution for managing the effective distribution of resources, but the work we do will change drastically. The major shift will be from expertise and specialization to roles focusing on generalization and trust. Our roles shift from managing resources to managing the managers of resources. In a way, it will force human beings to operate at a higher level. One problem we haven't solved is the people-management problem, and it isn't going to go away just because AGI exists. All AGI will be able to optimize is resource allocation and asset management. There will be an intense race to the bottom in this space (the technology space).

Further, AGI gives us a bunch of agents that know what to do and how to do it, but they don't know why they do it. Use any of the tools out there today: the one question they never ask is "Why?" Not one AI has asked you "Why should I answer your question or query?" In other words, no AI has come back to you demanding that you justify your reason for even asking the question you asked. Nor has any AI ever asked you to justify your ask when prompting it to make something for you. What I am trying to point out is that AIs don't have a relevant point of view. When we ask "Why?", what we are really asking is "Are you worthy of my answer or opinion?" It is a way of measuring your commitment to our (human) shared cause. An AI system cannot have that perspective unless it were a flesh-and-blood being exactly like ourselves. Embodiment matters when trying to determine agency and interoperability in a disembodied-AGI era. For AIs to have our best interests in mind, our interests must be shared. That is why mutually assured destruction is a valid strategy: all humans have the same desire for life, liberty, and the pursuit of fulfillment. We all intuitively understand the dangers of nuclear holocaust, and none of us want that. To an AI, that may not matter much. It will not be like us unless it is flesh and blood.

Without that facility, we can never build an AGI system that solves problems in a way we can relate to. If we cannot build AGI systems we can relate to, then they will not be controllable. If they are not controllable and are not embodied like ourselves, their managerial and leadership utility will reach a plateau: the Human Embodiment Plateau. Only bodies can directly relate to other bodies like themselves. If the physical structures are not totally functionally equivalent, there will be a disconnect between the two disparate systems. Management and leadership will all be about why we do things and how we build the AGI systems to do those things. But the Why will stay "in house," because fleshy human beings will never give up having a say in why something is being done to them. A digital king is still a king, and human beings have never been responsive to singular, monolithic leadership hierarchies. Cuz… Freedom baby. ;-)

So with AGI taking over all asset management and market forces optimizing the shit out of that domain, I don't think the long-term growth opportunities will be with the hyperscale cloud providers, nor with any software or hardware companies. Those guys are going to basically eat themselves. As Marc Andreessen said, "software is eating the world," but I don't think they ever thought that software (AI/ML/AGI) would eat the software industry. Just as we virtualized hardware, AGI effectively virtualizes all software. With an LLM-driven multi-agent system that can simulate almost any corporate human endeavour, there are no longer any real resource constraints. Productive corporate labor is fully replaced. The only constraint becomes access to capital. Soon all you will need is money, and you can build factories that make machines that make factories that make products and services for humans and other machines. That last sentence wasn't a mistake. It's like programming with Lisp macros: using macros, I write programs that write programs. In the scenario I proposed, we build DAOs that can build automated production lines which crank out products and services for human beings.
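The macros analogy is easy to make concrete in any language. Here is a deliberately tiny, runnable Python sketch of a program that writes programs, a factory that makes factories, purely to illustrate the recursion being described (all names are invented):

```python
# A factory that makes factories: make_factory generates specialized
# production functions, echoing "programs that write programs."

def make_factory(product: str):
    """Return a new 'factory' function specialized for one product."""
    def factory(units: int):
        return [f"{product} #{i}" for i in range(units)]
    return factory

widget_factory = make_factory("widget")   # a factory built by a factory
robot_factory = make_factory("robot")

print(widget_factory(3))  # ['widget #0', 'widget #1', 'widget #2']
print(robot_factory(2))   # ['robot #0', 'robot #1']
```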

Human beings will now build, manage, ev...


Content cut off. Read original on https://www.reddit.com/r/singularity/comments/19aa1bf/the_road_ahead_with_ai_and_how_it_will_change/

158
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/mvnnyvevwofrb on 2024-01-19 04:00:15+00:00.


Assume that it's impossible for AI ever to become sentient, meaning that it can't think and doesn't have feelings or consciousness. What would be the limits of AI in this case? Would it be able to reason like a human being? Or would it always have issues, like hallucinations, or make some kind of errors, or lack the insight of a human being? Or would it even matter?

159
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Xtianus21 on 2024-01-19 03:35:01+00:00.


Here is part 2 of my Turing on Conscious Convergence series. Again, I am putting paint onto a canvas, so this is very theoretical.

Knowing that memory is a very important part of how an ASI would work, I decided to expand on how memory would/could potentially work in an agentic AI system.

In part 1, the Stream of Thoughts was used as a constantly forward-streaming running-thoughts and input method. However, it has another very strong use case. Previously it was being used to act upon the W* world-view model, a mechanism for constantly updating its understanding via a continuous, if not MEMS-driven, feedback loop.

Then it hit me: why not bring in other mechanisms that could feed off of such a system? The biggest challenge of them all: memory. Now, here I am only interested in long-term and medium-term memory, medium-term being something more quickly accessible than long-term, with a weight of importance determining what is ultimately kept in long-term.

And, as with any well-architected application, I began to think: is there a real-world way the brain could use a partitioning system, and how might that work? I began to think of something practical that has been right in front of us the whole time: situations. Situations are a great memory-partitioning system that is natural to our existence.

Think about it. We as humans do not have to hold every aspect of every situation in our minds all at once, at all times. Every part of your day is a situation. And for the most part, those situations are planned and known. Yes, there are surprises and unaccounted-for situations, but for the most part you know how your day is going to go. In this sense, planning is in large part very related to the "situation."

The architecture organizes the world view through an A*-style system that finds the best situational model to derive from.

The situational models would have the characteristic of being world-view and situational-awareness micro-models that could be updated and/or generated very quickly.

Their main purpose is to provide situational information to the world-view model: I am in this environment, it looks like this, the scene is like this, I am getting audio like this, and so on. The situation could be: I am going on vacation with the family, and it is out of the country. Each part of that situation is in some ways repetitive, i.e. you've gone on vacation internationally before, you have gotten a cab to the hotel, you've checked in, and so on...

Yes, they are different situations, but there are things in them that are the same and repetitive.

The micro-model nodes would multiply over time and hold weights and properties tuned to a variety of situations, but within the same scope or frame of reference.

The A* system would decide which model to choose at inference time.

Models would grey out (or die) over time, leaving room for either the most up-to-date and accurate model or for newer, fresher models needed for new situations or recently obtained information.

The memory engine here is a system to ingest and sort memories into situational groupings that it can identify as sharing the same scope or frame of reference, keeping them partitioned as best it can without creating too many new situational model groupings. A balance, if you will.

This memory feedback loop would feed information to the continuous Rapid Situational Model Trainer - Micro World Model Generator & Refresher system, which would then create, update, reweight, or delete a model in the grouping. The black section serves as the surprise-or-unknown-situation area: it needs a direction of action but has not yet found a home in its own situational world-view grouping.
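As a brainstorming aid, here is a minimal, purely illustrative Python sketch of the ingest/partition/decay cycle described above. Every name, threshold, and the crude word-overlap similarity are invented stand-ins (a real system would use embeddings and learned weights):

```python
# Illustrative situational memory engine: memories route to the
# best-matching situation grouping (or found a new one), groupings
# decay ("grey out") when unused, and recall picks the strongest match.

from dataclasses import dataclass, field

def similarity(a: str, b: str) -> float:
    """Crude word-overlap stand-in for a real embedding similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

@dataclass
class Situation:
    label: str
    memories: list = field(default_factory=list)
    weight: float = 1.0  # importance; decays over time if unreinforced

class MemoryEngine:
    def __init__(self, match_threshold=0.2, decay=0.95):
        self.situations = []
        self.match_threshold = match_threshold
        self.decay = decay

    def ingest(self, memory: str) -> Situation:
        for s in self.situations:          # everything greys out a little
            s.weight *= self.decay
        best = max(self.situations, default=None,
                   key=lambda s: similarity(s.label, memory))
        if best and similarity(best.label, memory) >= self.match_threshold:
            best.memories.append(memory)   # same scope/frame of reference
            best.weight += 1.0             # reinforcement keeps it alive
            return best
        new = Situation(label=memory, memories=[memory])
        self.situations.append(new)        # new grouping only when nothing
        return new                         # matches well: "a balance"

    def recall(self, cue: str):
        # Relevance to the cue, weighted by grouping importance.
        return max(self.situations, default=None,
                   key=lambda s: similarity(s.label, cue) * s.weight)
```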

The other major part here is how the memory engine is fed and refreshed. The W* World Situational Stimuli system would take multi-modal inputs covering everything from video, audio, and communications to all different types of sensory inputs.

This system ultimately feeds into the W* LLM looped by the STRoT system acting upon all of the things mentioned in part 1.

Does this solve memory long-term? Short-term memory is handled well by cache systems and inference-token context windows, so I don't think it needs this type of mechanism per se.

Let me know in the comments if any of you think this is a viable solution brainstorm.

Here is a TL;DR recap:

In this second installment of the "Turing On Conscious Convergence" series, we delve deeper into the theoretical construct of memory within an Artificial Super Intelligence (ASI) system. Building on the foundational concept of a 'Stream of Thoughts' introduced in Part 1, we explore the dual utility of this stream, not only as a forward-moving thought and input method but also as a potent tool for memory management.

Memory Management in AI:

The introduction of a 'Memory Engine' is pivotal. Here, the focus is on long-term and medium-term memory, distinguished by accessibility and significance. Medium-term memory is readily accessible, serving as a prelude to what is retained in the more permanent long-term memory. This bifurcation is akin to the brain's own partitioning system, elegantly mirrored in the AI through a situation-based framework.

Situational Framework:

Human cognition naturally partitions memories into 'situations,' a concept that translates seamlessly into AI architecture. We navigate daily life through a series of situations — planned, anticipated, and occasionally unexpected. This natural partitioning inspires the AI's memory architecture, where each 'situation' represents a potential partition, a contextual framework for organizing experiences.

Architectural Dynamics:

The architecture employs an A* system, carefully selecting the most appropriate situational model that aligns with the current world view. These situational models, characterized by their rapid update and generation capabilities, serve a singular purpose: to inform the world view model with situational context — visual, auditory, and beyond.

Micro Model Nodes and A* System:

Micro model nodes, within this architecture, accrue over time, carrying weights and properties attuned to a spectrum of situations yet within the same frame of reference. The A* system, at the heart of the decision-making process, determines the optimal model for inference at any given moment.

Memory Engine and Feedback Loop:

A memory feedback loop feeds into the 'Continuous Rapid Situational Model Trainer - Micro World Model Generator & Refresher,' which is responsible for the creation, updating, reweighting, or deletion of models. This dynamic system allows for the 'fading out' of outdated models, making way for the most current or new models needed for fresh situations.

World Situational Stimuli and Multi-Modal Inputs:

The W* World Situational Stimuli system captures multi-modal inputs ranging from video to audio and other sensory data. This rich input feeds into the W* LLM, which, in turn, is looped by the STRoT system, integrating all elements discussed in Part 1.

Considerations for Short-Term Memory:

The system posits that short-term memory management is effectively handled by existing cache systems and contextual inference tokens, suggesting that the proposed memory architecture is specifically designed to enhance long-term situational recall and adaptability.

Conclusion:

This exploration offers a visionary blueprint for an AI's memory architecture, deeply rooted in the concept of situational awareness and adaptability. The proposed system is a harmonious blend of theoretical constructs and practical mechanisms, aiming to capture the essence of human memory processing within the realms of artificial intelligence. It's a contemplative leap towards understanding and designing an ASI's memory — a brainstorm that invites further discussion on its viability and potential realization.

160
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/YaKaPeace on 2024-01-19 02:56:41+00:00.


If our brains can be completely controlled and manipulated by AI, we will literally expose ourselves to infinities in both directions, the bad experiences or the good ones.

If it's able to literally control every thought we have, then we are standing right now at a cliff edge leading to either the best or the worst experience we will ever have. You could either be living the best life you will ever feel right now, or this could just be the beginning of an infinite utopia.

If you are not able to handle your thoughts well, I wouldn't recommend reading the following sentence: you could either be tortured for infinity, and you don't even want to know how torture feels in this world, or this universe was created for something good.

I don't know how you feel about this, but I can't think straight because of this thought recently. I am scared of infinities; it makes me anxious to think about someone landing in infinite suffering. There is not a single thing that could be worse, and I think that no one should ever deserve it.

161
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Thiccboifentalin on 2024-01-19 02:07:57+00:00.

Original Title: What's with the whole delusion on this sub anyway? Humans are treated worse than some animals and will gladly be spent on wars! Most are not talented, and the world shows them that at every instant. So why are some afraid of AI and not the current world? What are you all, a bunch of Gary/Mary Stues?


Most people ain't shit in any field. Sentient or not, they are just there to make the winners look good.

Free will or not, most people won't escape that pit called mediocrity. People die in wars and poverty by droves, and most are OK with it; heck, some are even glad that competition is out of their way. And however much you want to fight the idea, whether you did your best or did nothing at all, it won't change the fact that most people around you, maybe even including you, are just never gonna cut it.

And once most people are out of jobs, their true value will become apparent.

I really want to see how the average "humans are amazing" guy will feel when an AI outworks, outfucks and outwits him at every instant.

The tiny minority that gets notoriety does not represent the other 99.9 percent of humans.

And if you are one of those "average but actually not" people, stop thinking that they have your insight or thoughts. They don't.

162
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Anen-o-me on 2024-01-18 23:09:43+00:00.

163
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/thedataking on 2024-01-19 03:49:20+00:00.

164
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Lucky_Strike-85 on 2024-01-19 01:11:06+00:00.


This sub often speculates about the future... I am curious about your thoughts on what the world of 2030 looks like, based on the knowledge or evidence of AI that you have and how it might play out.

Do you see the UN's SUSTAINABLE DEVELOPMENT GOALS (access to basic human needs being achievable by all by 2030) as realistic?

How do you feel about Klaus Schwab and the WEF and their various predictions? Do you think we will "OWN NOTHING AND BE HAPPY" with a corporate stakeholder capitalist model/GREAT RESET or do you see a better society in the future?

165
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Dr_Singularity on 2024-01-19 01:09:10+00:00.

166
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/czk_21 on 2024-01-19 01:07:18+00:00.

167
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Dr_Singularity on 2024-01-19 01:04:14+00:00.

168
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Anen-o-me on 2024-01-19 00:59:13+00:00.

169
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/czk_21 on 2024-01-19 00:48:59+00:00.

Original Title: Researchers from Spain used urease-powered nanobots to penetrate bladder tumors and deliver their onboard radioactive treatment. After one dose, tumors in mouse models shrank by almost 90%, opening the door to a promising alternative treatment for this cancer.

170
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/czk_21 on 2024-01-19 00:20:37+00:00.

Original Title: OpenAI announced a deal with Arizona State University (its first partnership with a higher-education institution), which plans to use ChatGPT for coursework, personalized AI tutors for students, research, and more.

171
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Art_from_the_Machine on 2024-01-18 23:37:03+00:00.

172
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/AnakinRagnarsson66 on 2024-01-18 22:37:51+00:00.


I never hear anything about them anymore, and I don't think anyone's made any good comparisons to see if it's better than GPT-4.

173
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/VoloNoscere on 2024-01-18 22:37:22+00:00.

174
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/killerkitten113 on 2024-01-18 22:19:41+00:00.

175
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/immediateog on 2024-01-18 22:10:11+00:00.


Same vibes as a crypto pump-and-dump, or an AI startup hyped just because AI is mentioned. Anyone know about this tech?

I know others have tried to make these devices before but have failed with older tech. They seem quite flashy, with bold promises. I'm not sure if they're 100% counting on AI someday figuring this out for them, or if this is actually something others are looking into seriously.

They claim :

The Halo is a tool for humans to explore their subconscious. The Halo is a closed-loop neurostimulation device that combines EEG and transcranial ultrasound stimulation to stabilize lucid dreams.

Using a transformer architecture and other advanced artificial intelligence methods, the Halo is able to use EEG feedback to intelligently spatially generate ultrasonic pulses to mimic naturally occurring neural activation patterns from a training set of fMRI data of lucid dreamers.
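For what it's worth, the claimed pipeline is simple to caricature in code. Below is a purely hypothetical Python sketch of a closed-loop EEG-to-ultrasound controller; every function, shape, and name is invented for illustration, and nothing here reflects the actual product:

```python
# Purely hypothetical caricature of the claimed closed loop:
# read EEG -> model predicts a target activation pattern -> emit
# spatial ultrasound -> repeat. All shapes and helpers are invented.

import numpy as np

N_EEG_CHANNELS = 64
N_TRANSDUCERS = 128

def read_eeg_window() -> np.ndarray:
    """Stand-in for EEG acquisition: one window of samples."""
    return np.random.randn(N_EEG_CHANNELS, 256)

def predict_pulse_pattern(eeg: np.ndarray) -> np.ndarray:
    """Stand-in for the claimed transformer trained on lucid-dreamer
    fMRI data: maps an EEG window to per-transducer intensities."""
    fake_weights = np.random.randn(N_TRANSDUCERS, eeg.size)
    return np.tanh(fake_weights @ eeg.ravel())

def drive_transducers(pattern: np.ndarray) -> None:
    """Stand-in for emitting focused ultrasound pulses."""
    print(f"pulses: min={pattern.min():.2f}, max={pattern.max():.2f}")

def closed_loop(steps: int = 5) -> None:
    for _ in range(steps):       # stimulate, re-measure, repeat
        drive_transducers(predict_pulse_pattern(read_eeg_window()))

closed_loop()
```

Whether EEG actually carries enough information to steer focused ultrasound this way is exactly the open question the post is asking about.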
