this post was submitted on 19 Jan 2024

Singularity

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.
This is an automated archive.

The original was posted on /r/singularity by /u/clueelf on 2024-01-19 04:02:20+00:00.


I wanted to pose a serious question (longish read) to the LLM community about the future of work, and share some thoughts my friends and I have about it. I’ve worked in IT for 20+ years and have to think about how this stuff will affect my teams. I am mainly concerned about what happens if AI is truly able to perform all human tasks by some arbitrary date. Let’s say we achieve AGI by 2027. The question becomes: now what?

What happens to our society?

What happens to work?

What happens to industry?

What happens to our economic models?

I think the first thing to talk through is what it really means to have full AGI. The immediate effect will be that human labor begins to lose its marginal utility: AGI provides a much more effective and efficient labor resource than humans, assuming it can perform all human tasks at or above typical human levels. “Adequate” is the important word here, because all you NEED is for the AGI to be adequate for it to replace a human being. It doesn’t have to be perfect. However, market pressures will drive innovation to improve the AGI’s effectiveness and efficiency, so very soon after all roles are filled by an adequate AGI replacement, the race will be to make them perfect. The company that builds the perfect AGI system will be the winner.

So assuming AGI begins replacing humans, what would humans do? In very simple terms, I see two immediately affected areas:

Blue Collar jobs: these will not be replaced by AIs like LLMs; they will be replaced by LLM-powered robotics systems. Robotics is going to become pervasive, but will be hidden behind the scenes at first. High-risk activities like oil drilling or underwater welding will be handed over to robotic systems. The insurance costs alone will drive a lot of these innovations: it’s WAY more expensive to cover the loss of a human being than the loss of a piece of hardware. Another big area is storage and manufacturing. Robots are already eating up the warehousing industry (Amazon), and they already sort boxes for USPS/UPS/FedEx. Freight loading is another area where AI-driven systems do it better, faster, and longer. As more and more robots get built that can fit into spaces only humans can get into today, jobs such as car mechanic, plumber, etc. will be affected.

White Collar jobs: these will be replaced by multi-agent systems like AI Town and/or AutoGen. One thing coming to fruition very fast is the ability to use a multi-agent system to simulate, and eventually replace, coordinated teamwork. We could create world simulators that let us build virtual communities out of AI agents that interact the way we do. If this sounds far-fetched, look into AI Town and its current capabilities.

The initial thought on how to use this tool is to create virtual communities that can model different governance systems, policy frameworks, communication frameworks, and asset management frameworks. We can use existing management theories and scientific processes as models for building these fully autonomous worlds. Add tools such as Unreal Engine and virtual reality on top, and you could build fully autonomous virtual worlds, each functioning as its own little universe. With a VR headset you could **BAMPF** into your little world simulator and get direct experiences inside it, as an agent interacting with your custom world simulator’s inhabitants. These could become real-world implementations of fully autonomous organizations. A truly digital DAO.
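The agent-community idea above can be sketched in a few lines. This is a deliberately LLM-free toy, not the actual AI Town or AutoGen API: the agent names, roles, and the rule-based `act` method are invented placeholders standing in for real model calls, just to show the turn-taking structure a multi-agent simulator is built on.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy community member with a name, a role, and a trivial policy."""
    name: str
    role: str
    memory: list = field(default_factory=list)

    def act(self, task: str) -> str:
        # Stand-in for an LLM call: a real system would generate a
        # role-conditioned response here and update the agent's memory.
        message = f"{self.name} ({self.role}) handled: {task}"
        self.memory.append(message)
        return message

def run_team(agents: list[Agent], task: str, rounds: int = 1) -> list[str]:
    """Pass a task around the team, collecting each agent's contribution."""
    transcript = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.act(task))
    return transcript

team = [Agent("alice", "planner"), Agent("bob", "builder")]
log = run_team(team, "draft a policy memo")
```

Everything interesting in a real simulator lives inside `act` (the model, the persona, the shared world state); the outer loop is this simple.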

Assuming these DAOs become the norm, each one becomes a corporation in a box. Hook that box up to real-world resource management frameworks, and you can model and build a virtual corporation: multi-agent avatars working in a virtual world to produce real-world products and services. This shifts human labor from working IN a system (a real-world corporation) to working ON a system (a virtual-world corporation). Individuals who understand organizational dynamics and organizational behavior will become the superstars of this new world, because management moves from managing resources, which AGI and robotics are optimizing away, to managing the AI agents themselves.

This presents a dilemma. Contrary to popular opinion about supply-side economics, our economy is driven by consumer demand and purchases. In a scenario where all work goes away, how do individuals prove their value to society, securing their share of goods and services so they can live? I think this speaks to a core issue we have as human beings: our self-worth has been defined by the value we provide back to society. High-value humans get more than low-value humans. I’m not trying to be crude or callous, but strip away the legalese and corporate bullshit and that is what it comes down to. “Human Resources” says it all: humans are resources and are economized. But strip away a core component of our society’s value system, our value on the labor marketplace, and how do we humans determine who gets what, and why?

As far as economics goes, I don’t think UBI will be viable, simply because it shifts vast amounts of power over to those who manage the governance platform and provides perverse incentives for exploitation and corruption. So a market economy will still be a viable solution for managing the effective distribution of resources, but the work we do will change drastically. The major shift will be from expertise and specialization to roles focused on generalization and trust. Our roles shift from managing resources to managing the managers of resources. In a way, it will force human beings to operate at a higher level. One problem we haven’t solved is the people-management problem, and that isn’t going to go away just because AGI exists. All AGI will be able to optimize is resource allocation and asset management, and there will be an intense race to the bottom in that (technology) space.

Further, AGI gives us a bunch of agents that know what to do and how to do it, but they don’t know why they do it. Use any of the tools out there today: the one question they never ask is “Why?” Not one AI has asked you, “Why should I answer your question or query?” In other words, no AI has come back to you demanding that you justify your reason for even asking the question you asked, nor has any AI asked you to justify your ask when prompting it to make something for you. What I am trying to point out is that AIs don’t have a relevant point of view. When we ask “Why?”, what we are really asking is “Are you worthy of my answer or opinion?” It is a way of measuring your commitment to our (human) shared cause. An AI system cannot have that perspective unless it is a flesh-and-blood being exactly like ourselves. Embodiment matters when trying to determine agency and interoperability in a disembodied AGI era. For AIs to have our best interests in mind, our interests must be shared; that is why mutually assured destruction is a valid strategy. All humans have the same desire for life, liberty, and the pursuit of fulfillment. We all intuitively understand the dangers of nuclear holocaust, and none of us want that. To an AI, that may not matter much. It will not be like us unless it is flesh and blood.

Without that facility we can never build an AGI system that solves problems in a way we can relate to. If we cannot build AGI systems we can relate to, then they will not be controllable. If they are not controllable and not embodied like ourselves, their managerial and leadership utility will reach a plateau: the Human Embodiment Plateau. Only bodies can directly relate to other bodies like themselves; if the physical structures are not fully functionally equivalent, there will be a disconnect between the two disparate systems. Management and leadership will be all about WHY we do things and how we build the AGI systems to do them. But the Why will stay “in house,” because fleshy human beings will never give up having a say in why something is being done to them. A digital king is still a king, and human beings have never been responsive to singular, monolithic leadership hierarchies. Cuz… Freedom baby. ;-)

So with AGI taking over all asset management and market forces optimizing the shit out of that domain, I don’t think the long-term growth opportunities will be with the hyperscale cloud providers or with any software or hardware companies. Those guys are going to basically eat themselves. As Marc Andreessen said, “software is eating the world,” but I don’t think anyone expected that software (AI/ML/AGI) would eat the software industry. Just as we virtualized hardware, AGI effectively virtualizes all software. With an LLM-driven multi-agent system that can simulate almost any corporate human endeavour, there are no longer any real resource constraints. Productive corporate labor is fully replaced; the only constraint becomes access to capital. Soon all you will need is money, and you can build factories that make machines that make factories that make products and services for humans and other machines. That last sentence wasn’t a mistake. It’s like programming with Lisp macros: using macros, I write programs that write programs. In the scenario I proposed, we build DAOs that can build automated production lines which crank out products and services for human beings.
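The Lisp-macro analogy — programs that write programs, factories that build factories — can be sketched in a few lines of Python. This is a toy illustration of the recursion, not a real production system: `make_factory` and `produce` are invented names, and the “product” is just a string.

```python
def make_factory(product: str) -> str:
    """A program that writes a program: returns Python source code
    for a `produce` function that churns out copies of `product`."""
    return (
        f"def produce(n):\n"
        f"    return ['{product}'] * n\n"
    )

# The generated source is itself a program: load it, then run it.
namespace = {}
exec(make_factory("widget"), namespace)
widgets = namespace["produce"](3)
```

The outer function never makes a widget itself; it only makes the maker, which is exactly the factories-that-make-factories loop described above, one level deep.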

Human beings will now build, manage, ev...


Content cut off. Read original on https://www.reddit.com/r/singularity/comments/19aa1bf/the_road_ahead_with_ai_and_how_it_will_change/
