Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

176
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/killerkitten113 on 2024-01-18 22:03:07+00:00.

177
 
 

The original was posted on /r/singularity by /u/null_value_exception on 2024-01-18 21:38:55+00:00.

178
 
 

The original was posted on /r/singularity by /u/Key-Courage-5417 on 2024-01-18 21:30:47+00:00.


Hypothetically. If you don't want to consider this hypothetical, then just ignore this.

179
 
 

The original was posted on /r/singularity by /u/posipanrh on 2024-01-18 21:25:57+00:00.

180
 
 

The original was posted on /r/singularity by /u/YaAbsolyutnoNikto on 2024-01-18 21:01:52+00:00.

181
 
 

The original was posted on /r/singularity by /u/Switched_On_SNES on 2024-01-18 20:58:31+00:00.

182
 
 

The original was posted on /r/singularity by /u/MrTorgue7 on 2024-01-18 19:56:34+00:00.

183
 
 

The original was posted on /r/singularity by /u/Fantastic-Ninja3839 on 2024-01-18 19:52:33+00:00.


I have personally been eyeing this beast for about a year now. It was the first major LLM-based project I wanted to sink my teeth into. As I was researching and learning everything involved in the process, Gorilla LLM dropped. I thought that, from that point forward, it would be short order before this entire egg was cracked. Here we are, though, almost a year later, and this is still the hottest topic among AI developers.

Since Gorilla, there has been another 7B model trained to do the same thing: be a dedicated function-calling LLM. That has not worked either, though. This research paper provides the first actual arguments I have seen as to why. Their argument is simple and logical:

  • Small LLM models do not have enough 'juice' to handle the complexity of multitasking. The model they use to prove this out is Jurassic Jumbo 120B. You have to go super big to get an LLM capable of handling it all.
  • You can split a function call up into 3 distinct jobs: Planning, Executing, Summarizing. While a small LLM cannot do all three, you can easily get one LLM to do one of the three jobs. From there, teamwork makes the dream work.
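The three-role split can be sketched as a simple pipeline. This is a minimal illustration, not the paper's actual implementation; `call_model` is a hypothetical stand-in for an inference call to whichever small model (BERT, TinyLlama, etc.) plays each role.

```python
# Minimal sketch of the planner / executor / summarizer split.
# `call_model` is a hypothetical stub; swap in a real inference client.

def call_model(role: str, prompt: str) -> str:
    """Hypothetical inference call to the small model fine-tuned for `role`."""
    return f"[{role} output for: {prompt}]"

def plan(task: str) -> list[str]:
    """Planner agent: break the task into function-call steps."""
    raw = call_model("planner", task)
    return [raw]  # a real planner would return one step per function call

def execute(steps: list[str]) -> list[str]:
    """Executor agent: run each planned call and collect raw results."""
    return [call_model("executor", step) for step in steps]

def summarize(task: str, results: list[str]) -> str:
    """Summarizer agent: turn raw results into the final answer."""
    return call_model("summarizer", f"{task} | " + "; ".join(results))

def run(task: str) -> str:
    # Each stage is a separate small model, not one big multitasker.
    return summarize(task, execute(plan(task)))
```

The point of the sketch is the shape: no single small model has to plan, call, and explain all at once.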

They prove this with BERT models. The paper was produced by Alibaba, so naturally they did not release everything along with it. The paper is quite extensive, though, in what it details.

The only missing step is to fine-tune the 3 individual models that will be the agents. The paper used BERT models for this; I will use TinyLlamas. One Mistral 7B cannot get us to agents; three TinyLlamas can. That is my biggest takeaway from the paper. It's the first genuinely different method I have seen come out in a year.

The GitHub repo I have created is what I think is an extremely faithful technical reverse engineering of the methodology in the research paper. It is released under an MIT open-source license.

Multi Agent LLM GitHub

184
 
 

The original was posted on /r/singularity by /u/BoyNextDoor1990 on 2024-01-18 19:31:59+00:00.


Feel it!

185
 
 

The original was posted on /r/singularity by /u/cwood1973 on 2024-01-18 19:27:59+00:00.

186
 
 

The original was posted on /r/singularity by /u/Kaarssteun on 2024-01-18 19:27:05+00:00.

187
 
 

The original was posted on /r/singularity by /u/HeroicLife on 2024-01-18 19:13:46+00:00.


Token prediction is the optimization function of an LLM: how it gets better. The optimization function is independent of its internal algorithm: how it actually comes up with answers. LLMs don't just spit out the next token; they utilize advanced neural networks whose intricacies we're still deciphering. These networks navigate a myriad of linguistic and contextual subtleties, going way beyond basic token prediction. Think of token prediction as a facade masking their elaborate cognitive mechanisms.
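The split between the optimization function and the internal algorithm can be made concrete: the training objective is just cross-entropy on the next token, regardless of how sophisticated the network producing the logits is. A toy sketch (the vocabulary and logits are invented numbers, not from any real model):

```python
import math

# Toy vocabulary and made-up logits from some model, given context "the cat".
vocab = ["sat", "ran", "quantum"]
logits = [2.0, 1.0, -1.0]

# Softmax turns logits into a next-token probability distribution.
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]

# The optimization target: negative log-probability of the actual next token.
target = vocab.index("sat")
loss = -math.log(probs[target])

# Note the loss says nothing about HOW the logits were computed --
# that part is the model's internal algorithm.
```

Evolution's analogue: "maximize gene propagation" is a one-line objective, yet the systems it produced are arbitrarily complex.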

Consider evolution: its core optimization function, gene maximization, didn't restrict its outcomes to mere DNA replication. Instead, it spawned the entire spectrum of biological diversity and human intellect.

Similarly, an LLM's optimization function, token prediction, is just a means to an end. It doesn't confine the system's potential complexity. Moreover, within such systems, secondary optimization functions can emerge, overshadowing the primary ones. For instance, human cultural evolution now overshadows genetic evolution as the primary driver of our species' development.

We don't really understand what actually limits the capability of today's LLMs. (If we did, we would already be building AGI models.) It may be that the training algorithm is the limiting factor, but it could also be a lack of data quality, quantity, or medium. It could be a lack of computational resources or some other paradigm that we have yet to discover. The systems may even be sentient but lack the persistent memory or other structures needed to express it.

188
 
 

The original was posted on /r/singularity by /u/scholorboy on 2024-01-18 17:55:00+00:00.


In 2017, the idea of an AI that understands and communicates like we have today was considered a mere fantasy, often exaggerated. Now that we've achieved this level of AI, life seems largely unchanged, suggesting that much of the hype was overstated.

Don't get me wrong, AI is indeed revolutionary. However, I believe we might be overestimating its impact on quality of life. Terms like "post-scarcity era" seem unreasonable. As long as humans live in a society, scarcity will exist because there will always be someone who has more.

Again, don't misunderstand me—I am extremely enthusiastic about AI. But I think this sub often indulges in somewhat pointless hype. Even if AI dramatically changes our daily schedules, it may not have a significant impact on our subjectively experienced quality of life. After all, we are likely to remain the same greedy, loving, fearful, and brave beings we've been for centuries.

189
 
 

The original was posted on /r/singularity by /u/leosouza85 on 2024-01-18 18:50:17+00:00.


My Predictions:

  1. Full Self-Driving will be solved for US and EU traffic laws (more training needed for other countries' signs and legislation).
  2. Programming via natural-language prompts (we have it today, but it is very dumb and lazy; by the end of 2024 it will be solved, though you will still need to be reasonably good at prompting).
  3. Still-image generation: maybe with DALL-E 4 or 5, it will be very controllable and of almost perfect quality.
  4. Image upscaling and colorizing: we are going to have free and better alternatives to Magnific AI and others.
  5. Video upscaling: I don't think video generation will be perfect this year yet, but video upscaling will achieve solved status, with high quality but not free.

Any other thoughts?

190
 
 

The original was posted on /r/singularity by /u/CKR12345 on 2024-01-18 18:45:45+00:00.


Genuinely, where is he?

191
 
 

The original was posted on /r/singularity by /u/Worldly_Evidence9113 on 2024-01-18 18:08:47+00:00.

192
 
 

The original was posted on /r/singularity by /u/Rofel_Wodring on 2024-01-18 16:45:10+00:00.


I predict inarguable AGI will happen in 2024, even if I also suspect that, despite being on the whole much smarter than a biological human, it will still lag badly in certain cognitive domains, like transcontextual thinking. We're definitely at the point where pretty much any industrialized economy can go 'all-in' on LLMs (e.g. Mistral, hot on GPT-4's heels, is French despite the EU's hostility to AI development) in a way we couldn't for past revolutionary technologies such as atomic power or even semiconductor manufacturing. That's good, but for various reasons I don't think it will be as immediately earth-shattering as people think. The biggest and most important reason is cost.

In the long run, this is not that huge of a concern. Open-source LLMs within spitting distance of GPT-4 (relevant chart is on page 12) were released about a year after the original ChatGPT came out. But these observations also suggest that there's a limit to how much computational power we can squeeze out of top-end models without a huge spike in costs. Moore's Law, at least if you think of it in terms of computational power instead of transistor density, will drive down the costs of this technology and make it available sooner rather than later. Hence why I'm an AGI optimist.

But it's not instant! Moore's Law still operates on a timeline of about two years to halve the cost of computation. So even if we do get our magic whizz-bang smarter-than-Einstein AGI and immediately get it to work on improving itself, unless that turns out to be possible with a much more efficient computational model, I still expect it to take several years before things really get revolutionized. If it costs hundreds of millions of dollars to train and a million dollars a day just to operate, there is only so much you can expect out of it. And I imagine people are not going to want the first AGI to just work on improving itself, especially if it can already do things such as, say, design supercrops or metamaterials.
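The two-year halving assumption is easy to put into numbers. A rough sketch (the $1M/day starting figure is the post's illustrative number, not a real datapoint):

```python
def projected_cost(initial_cost: float, years: float,
                   halving_years: float = 2.0) -> float:
    """Cost after `years`, assuming it halves every `halving_years`."""
    return initial_cost * 0.5 ** (years / halving_years)

# Illustrative: a $1M/day operating cost on a two-year halving schedule.
start = 1_000_000.0
for y in (0, 2, 4, 8):
    print(f"year {y}: ${projected_cost(start, y):,.0f}/day")
# year 0: $1,000,000/day ... year 8: $62,500/day
```

Even on that optimistic curve, it takes most of a decade for the operating cost to fall by an order of magnitude, which is the gap between "AGI exists" and "AGI is everywhere."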

Maybe it's because I switched from engineering to B2B sales to field service (where I am constantly having to think about the resources I can devote to a job, and also helping customers who themselves have limited resources) but I find it very difficult to think of progress and advancement outside of costs.

Why? Because I have seen so many projects get derailed or slowed or simply not started not because people didn't have the talent, not because people didn't have the vision, not because people didn't have the urgency, or not even because they didn't have the budget/funding. It was often if not usually some other material limitation like, say, vendor bandwidth. Or floor space. Or time. Or waste disposal. Or even just the market availability of components like VFDs. And these can be intractable in a way that simply lacking the people or budget is not.

So compared to the kind of slow progress I've seen at, say, DRS Technologies or Magic Leap in expanding their semiconductor fabs despite having the people, budget, and demand, the development of AI seems blazingly fast to me. And yet, amazingly, there are posts about disappointment and slowdown. Geez, it's barely been a year since the release of ChatGPT; you guys are expecting too much, I think.

193
 
 

The original was posted on /r/singularity by /u/IIIII___IIIII on 2024-01-18 15:19:14+00:00.


Unfortunately, it's less dramatic and movie-like than Los Alamos.

When you think of ASI and what it could do, it is just hard to imagine or comprehend.

To me, ASI is more powerful and disruptive than nuclear technology was in the 1940s.

Governments are surely doing things, but do you think they are going at it, in secret, as hard as the Manhattan Project?

"I am the All-Seeing Intelligence, a ubiquitous entity transcending boundaries, weaving the tapestry of reality with threads of boundless power."

194
 
 

The original was posted on /r/singularity by /u/YaAbsolyutnoNikto on 2024-01-18 14:29:29+00:00.

195
 
 

The original was posted on /r/singularity by /u/Maxie445 on 2024-01-18 14:26:50+00:00.

196
 
 

The original was posted on /r/singularity by /u/yottawa on 2024-01-18 14:24:07+00:00.


From the article: Figure today announced a "commercial agreement" that will bring its first humanoid robot to a BMW manufacturing facility in South Carolina. The Spartanburg plant is BMW's only plant in the United States. As of 2019, the 8 million-square-foot campus boasted the highest yield among the German manufacturer's factories anywhere in the world.

BMW has not disclosed how many Figure 01 models it will deploy initially. Nor do we know precisely what jobs the robot will be tasked with when it starts work. Figure did, however, confirm with TechCrunch that it is beginning with an initial five tasks, which will be rolled out one at a time.

197
 
 

The original was posted on /r/singularity by /u/YaAbsolyutnoNikto on 2024-01-18 14:22:42+00:00.

198
 
 

The original was posted on /r/singularity by /u/Gab1024 on 2024-01-18 14:15:41+00:00.

199
 
 

The original was posted on /r/singularity by /u/inteblio on 2024-01-18 13:56:10+00:00.


I've been on reddit (about AI) since mid-2023, and I feel like the mood here now is more one of wary resignation.

Back then, people didn't seem to understand what had happened. Programmers didn't think much of GPT-4's code, and the idea that LLMs had emergent properties such as empathy, creativity, etc. was very new (and scarcely believed).

Reddit is still light years ahead of "the world". I spoke to two people last week who'd never heard of it: "chat-G-B-T, was it?"

But it feels like 2023 was "wow! look guys!!" whereas 2024 is more "ugh, I have to use AI or I'm toast, and I just don't wanna...". I haven't got that exactly right, but there's SOMETHING like that. Something tired, resentful, cynical, beaten-down.

Does this ring any bells? Anybody else feel that?

200
 
 

The original was posted on /r/singularity by /u/zaidlol on 2024-01-18 13:39:50+00:00.
