Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

201
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/LoasNo111 on 2024-01-18 12:31:50+00:00.


As we all know, large parts of the world will soon be facing water scarcity. It's a pretty big problem.

So are there any advances to deal with water scarcity? Where's the innovation happening in this area? Will it ever be affordable?

Can countries with large populations, like China, India, and Nigeria, eventually solve their water scarcity issues through innovation? The problem is that a solution can't just be technically feasible; it also has to be affordable, unlike desalination right now.

202
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/kecepa5669 on 2024-01-18 12:25:26+00:00.


It feels like native (built into the browser by its makers) agentic AI browser navigation is a feature long overdue. We should have had this weeks ago, if not months. And we certainly shouldn't need to resort to third-party plug-ins or, FFS, an entire additional third-party hardware device!

WTF is taking GOOG, MSFT and Mozilla so long to come out with this? If third parties can do it, why can't they?

203
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Mk_Makanaki on 2024-01-18 11:50:48+00:00.

204
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/cloudrunner69 on 2024-01-18 11:00:39+00:00.


It's crazy to think that you have lived your entire life in a certain way, in a place that has mostly always looked the same. Not much has really changed. The buildings are the same, the people we interact with are the same; cars, buses, trains, clothing, the food you eat, the way you work, etc. For all intents and purposes it has been this way for as long as you can remember. Your parents lived like this, and their parents, and theirs. It is quite unimaginable to think that everything you know could suddenly change into something unrecognizable almost overnight. It is truly beyond belief that all that you know could just disappear.

But of course this can happen. We know it can happen because it has happened many times before. It has just probably never happened to you. Look at people like the Australian Aborigines, the Native Americans, the natives of South America. Their world changed into something they had never thought possible right before their eyes. One day they were all living the same way they had been for hundreds if not thousands of years, and the next day it was all gone. Horses, boats, trains, different clothing and materials, new methods of construction, weaponry, and many different machines. It happened to them, and now it is about to happen to us.

We have no idea what is coming, just as those cultures the Europeans colonized had no idea what was coming across the oceans towards them. They couldn't stop it, for the colonizers' technology was vastly superior. We will be assimilated into what is coming towards us just as they were assimilated into the new world. And there is absolutely nothing we can do to stop it.

205
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/PsychoComet on 2024-01-18 09:45:39+00:00.

206
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/rationalkat on 2024-01-18 10:14:29+00:00.

Original Title: DeepSpeed-FastGen (Microsoft): High-throughput Text Generation for LLMs via MII and DeepSpeed-Inference. "deliver up to 2.3x higher effective throughput, 2x lower latency on average, and up to 3.7x lower (token-level) tail latency, compared to state-of-the-art systems"

208
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/SharpCartographer831 on 2024-01-18 06:52:37+00:00.

209
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/StewArtMedia_Nick on 2024-01-18 04:38:30+00:00.

210
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/ponieslovekittens on 2024-01-18 04:36:48+00:00.


So who had "receive a high school diploma in VR while masquerading as an anime girl" on their "unpredictable future" bingo card?

"Aominext’s Anime VR High School emerges as a metaverse revolution that not only introduces a novel approach to education but also brings a touch of futurism by allowing students to earn officially recognized diplomas while embodying anime avatars."

"The virtual high school offers a three-year, credit-based course with a curriculum recognized by the Japanese Ministry of Education, Culture, Sports, Science, and Technology."

"At the ME Campus, you can use your favorite avatar in a virtual space to learn what you want at your own pace."

211
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/BeginningInfluence55 on 2024-01-18 04:26:31+00:00.

Original Title: ChatGPT is definitely using context from other conversations. I had so many different „coincidences“ the last weeks, where ChatGPT suddenly used examples or specific words I used in another conversation just minutes ago.


I can’t really prove it, but maybe somebody here has seen similar phenomena. For example, today I was talking about Pepsi with ChatGPT. Some minutes later, in ANOTHER instance (not just editing the thread), I talked with ChatGPT about milk. And it suddenly said that milk is definitely better than Pepsi, but that it would advise mainly drinking water.

I mean it makes sense, but it could have named any soft drink really.

Might also just be me seeing a ghost, but idk. Maybe some of you have seen similar things.

212
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Xtianus21 on 2024-01-18 03:59:16+00:00.


I decided to napkin out some thoughts on how to mimic consciousness in a live running system. The "other thing" to me, as I have stated before, is the notion that a learning system has to come from outside of the LLM/inference layer.

Giving this some thought, along with a post here about running a live streaming system, gave me an epiphany about how such a system might work.

The basic premise has two key parts. Again, to me, agency and all agentic behaviors have to come from a control system outside of the inference layer.

  1. The system can be split/routed so that a determination mechanism decides when the agent needs to run the show versus the inference layer (a rough sketch of this routing step follows the list).
  2. The other main part is the ability of the agent to pull information from the LLM and begin a planning procedure to methodically work out a problem and its solution. This, to me, is the most important part. To me, agency is what sets out to do an action. Theoretically, this could be a model of agentic control and/or a hard-coded microservices system to POC its viability. However, I don't see the need for this part to be the emergent "thing," as it serves more as a directive mechanism anyway. Specifically, there need to be some mechanical, deterministic parts that can act via specific direction. Some parts have to be live and active in real time, while others can be model-built and inference-driven in this layer.
  3. The agent control mechanism serves to provide a way to respond in situations where "thought," or creativity about how to respond, is required via a direct prompt. This is where the context can act as an inverse mechanism, where a type of query may require a unique response. Think TARS.
  4. The W* or World View Clock mechanism would be used for planned control of outward responses and an understanding, within a world-view system, of how those outward activities make sense. A loop of behavior, if you will.
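
As a rough illustration of point 1, here is a minimal Python sketch of what that split/route determination might look like. All names and the keyword heuristic are my own placeholders, not an actual implementation; the real determination mechanism could just as well be a trained classifier or the AI Router described further down.

```python
# Hypothetical sketch of the split/route determination in point 1.
# The names and the heuristic are made up for illustration only.

from dataclasses import dataclass


@dataclass
class Query:
    text: str


def needs_agent(query: Query) -> bool:
    """Decide whether the agent/control layer should run the show,
    or whether the inference layer (LLM) can answer directly."""
    multi_step_markers = ("plan", "book", "schedule", "then", "step")
    long_enough = len(query.text.split()) > 40
    mentions_action = any(m in query.text.lower() for m in multi_step_markers)
    return long_enough or mentions_action


def route(query: Query) -> str:
    """Return which pathway handles the query."""
    if needs_agent(query):
        return "agent_controller"   # planning / agentic pathway
    return "inference_layer"        # direct LLM response


if __name__ == "__main__":
    print(route(Query("What is the capital of France?")))       # inference_layer
    print(route(Query("Plan my week and then book the gym.")))  # agent_controller
```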

However, for part 2, if there were a model capability that could be trained to handle the main portions of that process, I do believe that is where any emergent consciousness would derive from. Why? Because this layer of controlling behavior would theoretically be, as Bill put it, "where Shakespeare is encoded."

To me, this is exactly it, and the epiphany I was looking for. If attention was the key to the language layer, then why aren't we developing a model that is a control-of-process layer, creating processes of thought?

In any event, here it is. If you like it, I will do an official architecture diagram and make it prettier and easier to follow.

Here is an explanation from GPT when I had it analyze the diagram.

Technical Overview of the AI System:

The system is a sophisticated AI architecture designed to handle a diverse array of tasks, from simple computations to complex, multi-step planning problems. It employs a central control agent and integrates various AI technologies and algorithms, including a unique component known as "XoT" (Everything of Thoughts).

Core Components and Their Functions:

  1. Central Control Agent:
  • Acts as the system's brain, determining the best route for each query.
  • Integrates outputs from various subsystems and models to construct and deliver responses.
  • Utilizes reinforcement learning to improve decision-making efficacy over time.
  2. World Clock (W*):
  • Maintains a continuous loop of operational thoughts, simulating a stream of consciousness.
  • Enables real-time reevaluation and optimization of decision-making processes, potentially using MEMS technology.
  3. Foundational Large Language Model (FLLM):
  • Provides advanced natural language processing capabilities.
  • Can include models like OpenAI's GPT for language understanding and response generation.
  • Acts as a repository of knowledge and linguistic structures.
  4. Reinforcement Learning (RL) with Q-Learning (Q*):
  • Adapts responses based on feedback from previous interactions.
  • The Q-learning component evaluates the potential reward of actions, guiding the agent's decisions.
  5. A* Search Algorithm:
  • Used for finding the most efficient path in complex decision-making tasks.
  • Optimizes the agent's steps when constructing a response.
  6. AI Router:
  • Directs incoming queries to the most suitable processing mechanism.
  • Assesses the complexity of the query and chooses the most efficient processing route.
  7. Safety Lock:
  • Ensures all actions and outputs of the AI are within ethical and safety boundaries.
  • Prevents harmful or biased responses.
  8. XoT (Everything of Thoughts):
  • A prompting mechanism that enables the system to understand and learn how to interact with the model for effective problem-solving.
  • Facilitates the agent’s ability to decompose complex tasks into simpler queries, enhancing interaction with the FLLM.
  • Acts as a dynamic knowledge base, continuously updating and refining the AI's problem-solving strategies.

Interaction Between Components:

  • The central control agent, utilizing the XoT component, actively communicates with the FLLM to fetch information or generate language-based outputs, tailoring the interaction based on the complexity and nature of the query.
  • The agent uses the world clock to maintain continuous assessment, allowing dynamic adjustments to strategies and actions based on ongoing internal and external feedback.
  • For decision-making, the agent employs the A* algorithm to chart out the most efficient action sequence, especially in multi-step or complex situations.
  • The Q-learning component enables the agent to refine decision-making based on historical data and outcomes, enhancing the effectiveness of the RL feedback loop.
  • The AI Router intelligently assigns tasks to either the quick-response mechanism or the agent-based pathway, based on the query's nature.
  • The safety lock monitors the system's operation, intervening if certain actions or outputs are flagged as risky or unethical. (A rough code sketch of this whole interaction loop follows below.)
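
To make the interaction flow above concrete, here is a minimal, hypothetical Python sketch of one pass through the loop: the router picks a pathway, the agent decomposes the task into sub-queries (XoT-style), each sub-query goes to the FLLM, the safety lock filters the output, and a toy Q-learning update records the feedback. Every name, threshold, and the Q-update is an assumption for illustration; the A* planning step is omitted for brevity.

```python
# Hypothetical sketch of the interaction loop above. call_fllm() is a
# stand-in for the foundational LLM; the decomposition, safety rule and
# Q-update are deliberately toy versions.

from collections import defaultdict


def call_fllm(prompt: str) -> str:
    """Stand-in for the foundational large language model (FLLM)."""
    return f"[FLLM answer to: {prompt}]"


def decompose(task: str) -> list[str]:
    """XoT-style decomposition: split a complex task into simpler sub-queries."""
    return [f"{task} -- step {i}" for i in range(1, 4)]


def safety_lock(text: str) -> bool:
    """Very rough safety check: block outputs containing flagged terms."""
    return not any(bad in text.lower() for bad in ("weapon", "exploit"))


# Q-table for the RL feedback loop: (state, action) -> estimated value.
Q = defaultdict(float)
ALPHA, GAMMA = 0.1, 0.9


def q_update(state: str, action: str, reward: float, next_state: str) -> None:
    """Standard one-step Q-learning update on the toy Q-table."""
    best_next = max((Q[(next_state, a)] for a in ("quick", "agent")), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


def handle(task: str) -> list[str]:
    action = "agent" if len(task.split()) > 6 else "quick"   # AI Router decision (toy)
    steps = decompose(task) if action == "agent" else [task]
    outputs = []
    for step in steps:
        answer = call_fllm(step)        # agent pulls information from the FLLM
        if safety_lock(answer):         # safety lock filters each output
            outputs.append(answer)
    reward = 1.0 if outputs else -1.0   # pretend feedback from the user/environment
    q_update(task, action, reward, next_state="done")
    return outputs


if __name__ == "__main__":
    for line in handle("Plan a three day trip to Kyoto and summarise the best options"):
        print(line)
```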

Existing Analogues in Current Technologies:

  • FLLM resembles systems like OpenAI's GPT-3 and Google's BERT in language processing capabilities.
  • Reinforcement Learning and Q-Learning are paralleled in applications like AlphaGo and various recommendation systems.
  • A* Search Algorithm is similar to its use in robotics and game development for pathfinding.
  • World Clock and Continuous Thought Loops draw parallels with RTOS in critical systems.
  • AI Router could operate similarly to network system load balancers.
  • Safety Lock mechanisms may be inspired by fail-safe systems in critical applications.
  • XoT (Everything of Thoughts), as a unique component, would be a novel development in AI, enhancing the system's ability to learn and adapt to complex problem-solving scenarios.

Implementation Considerations:

  • Development, testing, and integration of each component, including the unique XoT system.
  • Training the AI on vast datasets for a variety of tasks.
  • Regular updates and maintenance for current information and technological advancements.
  • Scalability and self-monitoring for operational integrity.

213
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/UAAgency on 2024-01-18 05:22:31+00:00.


Hello, and first of all, nice to meet you! I am looking for like-minded people to share a demo product with. Comment on this post if you'd like to help out! I'm looking for honest feedback. I guarantee it will be worth your time and might even impress you!

214
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/xdlmaoxdxd1 on 2024-01-18 02:46:15+00:00.

215
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/YaAbsolyutnoNikto on 2024-01-18 02:09:04+00:00.

216
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/obvithrowaway34434 on 2024-01-18 01:43:12+00:00.

Original Title: Someone successfully used GPT-4 as their co-pilot to help navigate their health issues, uncovering connections doctors missed. He used a creative jailbreak asking ChatGPT to write a script for the TV series House


A great thread by Patrick Blumenthal on how he used GPT-4 as his co-pilot for the past year, getting it to analyze his medical data and make connections that doctors missed. To get around OpenAI restrictions he used a great jailbreak. Check the threadreader link if you don't have an X account. The original link is also provided:

217
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/default_friend on 2024-01-18 01:24:50+00:00.

218
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Anenome5 on 2024-01-18 00:22:39+00:00.


Altman believes future AI products will need to allow "quite a lot of individual customization," and "that's going to make a lot of people uncomfortable."

Why?

Because AI will give different answers for different users, based on their values and preferences, and possibly on what country they reside in.

That's not much different from how Google bows to the whims of various countries to remove things they don't like, such as Taiwan being called an independent country, because China hates that. But I'm not sure how they would control for this exactly; doesn't that imply highly variable alignment, or are we talking about separate models being maintained for different territories?

However, that's less important than what he says next:

Soon, "you might just be able to say 'what are my most important emails today,'" and have AI summarize them. Altman says AI advances will "help vastly accelerate the rate of scientific discovery." He doesn't expect that to happen in 2024, "but when it happens, it's a big, big deal."

We like to think of this in the abstract, but let's think of it in the specific. I want you to imagine a single researcher working on a problem. Something like the Moderna mRNA tech, which itself is the product of about four good inventions put together into one working technology. It took decades to finally have a working product because it took time, and entirely separate teams, to solve the problems the first team hit a wall against.

But with AGI in the mix, it's entirely possible to imagine a single researcher starting work on a problem on the level of the mRNA tech back in the 90s and, using AGI as their co-developer, very quickly developing the mRNA technology: running labs in an automated fashion; quickly hitting the difficulty walls that would stymie a human team (possibly for years); having the AGI brainstorm 50+ million different ways a wall could be overcome; having it narrow those down to the most promising ones; then doing targeted physical and chemical testing that quickly reveals the more promising approaches; and then quickly developing those into new technologies that take you to stage two of the process. All of this could likely be done within a year or less.

With a dedicated team of researchers all using AGI to work toward a single goal, all four of the inventions needed to create working mRNA tech, which took over 30 years to finish, could reasonably be completed in less than a year.

Here's the point:

Not only will the pace of invention and technological development accelerate, it will get to the point that tech invention can be individualized to your specific problem.

Not only will this help with science and technology, but also with business and product development. How many people come up with nagging problems they'd love to solve, for which they need an invention, but lack the skills to carry that forward into an actual product?

With AGI, they will have all the skills and advice they could possibly ask for.

One last thing:

Altman said his top priority right now is launching the new model, likely to be called GPT-5.

Why is he talking about AGI now, despite saying just a few months ago that they weren't even working on GPT-5? It seems that Altman was playing to the board at that time, which he was likely getting weird signals from, and trying to assuage them that things weren't moving too fast.

I'm still waiting to hear whether it's true that the reason the board freaked out is that a number of them had never actually used ChatGPT-4 until then and were shocked at what it was capable of doing. Others say it was the idea of commercializing agents through the recently launched agent marketplace.

Regardless, that drama is behind us and Altman clearly feels free to discuss the creation of GPT-5 openly. In fact, his CEOship may ride on it: having now secured his position much better, he needs to show that OpenAI still possesses the secret sauce that keeps it above its numerous rivals.

I'm certainly willing to pay for ChatGPT-5; how about you guys? ChatGPT-4 is great, but it's just not quite smart enough to be as useful as an actual human expert would be, or in short, as useful as AGI would be.

I expect ChatGPT-5 will be roughly 90% of the way to being AGI; it will likely be almost as good as a human expert in just about every field. And if that's what actually gets delivered within the next year, we're all going to be in for a helluva ride :)

219
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Super_Pole_Jitsu on 2024-01-17 21:33:44+00:00.

220
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/safwanadnan19 on 2024-01-17 21:30:30+00:00.

221
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/spockphysics on 2024-01-17 20:01:51+00:00.

222
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/ImInTheAudience on 2024-01-17 19:21:51+00:00.

223
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/CrispestCrumpet on 2024-01-17 19:14:51+00:00.

224
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Curiosity_456 on 2024-01-17 19:08:14+00:00.


225
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/IluvBsissa on 2024-01-17 18:26:28+00:00.
