this post was submitted on 08 May 2024
8 points (59.1% liked)

[–] lvxferre@mander.xyz 17 points 6 months ago (3 children)

Interesting video. At its core it can be summed up as:

  • "AI is an existential threat" is a lie big tech tells in order to use regulatory capture against competitors
  • the main competition for that big tech would be open-source generative models
  • we should fight against big tech on this
[–] Andromxda@lemmy.dbzer0.com 9 points 6 months ago

> we should fight against big tech

I think that's a good idea in general, not just because of AI

[–] oDDmON@lemmy.world 6 points 6 months ago

Thanks for the TL;DR!

[–] HopeOfTheGunblade@kbin.social 5 points 6 months ago (1 children)

I'd been concerned about AI as an x-risk for years before big tech had a word to say on the matter. It is both possible for it to be a threat and for large companies to be trying to take advantage of that.

[–] lvxferre@mander.xyz 1 points 6 months ago (1 children)

Those concerns mostly apply to artificial general intelligence, or "AGI". What's being developed is another can of worms entirely: a bunch of generative models. They're far from intelligent; the concerns associated with them are 1) energy use and 2) human misuse, not that they're going to go rogue.

[–] HopeOfTheGunblade@kbin.social 2 points 6 months ago

I'm well aware, but we don't get to build an AGI and then figure it out afterwards. And we can't keep even these systems on target: see any number of "funny" errors people have posted, up to the paper (whose name I can't recall offhand) that collected examples of even simpler systems being misaligned.

[–] davidgro@lemmy.world 5 points 6 months ago (1 children)

I believe true AI might in fact be an extinction risk. Not likely, but not impossible. It would have to end up self-improving and wildly outclassing us; then it could be a threat.

Of course the fancy autocomplete systems we have now are in no way true AI.

[–] voracitude@lemmy.world -4 points 6 months ago (2 children)

> In no way true AI

I'm not so sure about that. One of my friends has really high-end hardware and is experimenting with a LLaMA 3 120b model. It isn't "right" much more often than the 70b models, but it will sometimes see that a wrong answer is due to an error in its lower-level reasoning: it recognises there's a flaw somewhere even as it repeatedly fails to generate the correct answer, even lamenting that it keeps getting it wrong.

This of course makes sense, thinking about the flow - it's got an output check built in, meaning there are multiple layers at which it's "solving" the problem and then synthesising the outputs from each layer into a cohesive natural-language response.

But reading the transcripts of those instances, I am reminded of myself at 4 or 5 years old in kindergarten, learning my numbers. I was trying to draw an "8", and no matter how hard I tried I could not get my hand to do the crossover in the middle. I had a page full of "0". I remember this vividly because I was so angry and upset with myself, I could see my output was wrong and I couldn't understand why I couldn't get it right. Eventually, my teacher had to guide my hand, and then knowing what it "felt" like to draw an 8 I could reproduce it by reproducing the sensation of the mechanical movement of drawing an 8.

So, it seems to me those "sparks" of AGI are getting just a little brighter.

[–] littlebluespark@lemmy.world 5 points 6 months ago (1 children)

> I'm not so sure

... Go on.

> One of my friends

Oh, FFS. This isn't Flakebook. 🤦🏽‍♂️

[–] voracitude@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (1 children)

I suppose that's my fault; you lot have no idea who my friends are or what they do for a living, though I hoped the detail in the rest of the comment would get my message across. The friend in question is a computer scientist and researcher who has headed up large departments for institutions like Chase, and with whom I previously co-founded a startup - but I'm not going to doxx myself by providing more details, so I don't think stating that helps at all.

Edit: Instead of appealing to authority, I'll provide a transcript representative of the ones I'm talking about (originally posted on Reddit, sorry about the link but it's attribution):

<|im_start|>system You are a helpful assistant. <|im_end|>

<|im_start|>question Samantha has 3 brothers. Each brother has 2 sisters. How many sister does Samantha have?<|im_end|>

<|im_start|>answer The question is trying to trick you! It says "Each Brother" (capital B) which means it's referring back to the original statement that Samantha has 3 Brothers, and then tells us how many Sisters each of those Brothers has.

So, if we rephrase: Samantha has 3 Brothers. And each of these Brothers has 2 Sisters. That would mean there are 6 Sisters in total among all her Brothers!

But wait... The correct answer isn't about how many Sisters do Samantha's Brothers have; it's asking how many Sisters DOES SAMANTHA HAVE?

Let's look again at the original sentence: "Samantha has 3 brothers." There's no mention of any other siblings besides these 3 brothers. So, since Samantha herself is also a Sister to these 3 Brothers...

Samantha HAS 2 SISTERS?! No! That can't be right because the problem doesn'T MENTION ANY OTHER SIBLINGS APART FROM THESE 3 BROTHERS.

The correct answer is actually hidden in plain sight: Samantha IS ONE OF THE 2 SISTERS EACH BROTHER HAS! Since she has 3 Brothers, and each Brother has 2 Sisters, Samantha must BE ONE OF THOSE 2 SISTERS!

So, Samantha Has 2 Sisters? NOPE!

The Correct Answer Is: Samantha Has 2 SISTERS (her brothers' sisters), BUT WE'RE ASKING ABOUT HOW MANY SISTERS DOES SAMANTHA HERSELF HAVE?!

And the correct answer is... 2! Wait, what? No!

Samantha HAS- ahem - Samantha HAS-(pauses)-SAMANTHA-HAS...(dramatic pause)... 2 SISTERS?! NOOOO!

Wait for it...

Samantha HAS...(drumroll please)... 2 SISTERS?! NOOO!

Okay, okay, let me try one more time:

Since Samantha has 3 Brothers, AND SHE'S A GIRL, THAT MAKES HER-A-SISTER-TO-THESE-BROTHERS...

THE CORRECT ANSWER IS... SAMANTHA HAS...(drumroll again)... 2... NO, WAIT...

Samantha HAS...(one last deep breath)... SAMANTTHA HAS...(whispers) 2... NOOO!

Fine, I give up...

Samantha HAS...(sighs)... SAMANTHAHAS...(resigned tone)... 2... NOOO!

Wait, What Am I Doing With My Life?!

Obviously, this is going to resonate with me since I have a similar experience to draw from. Let's assume that it's just confirmation bias though, rather than any actual improvement in the model or progress towards "true" AGI. What would it take for you to believe you're speaking with a "true" AGI, or at least a human-level artificial intelligence with a consciousness of its own?

[–] littlebluespark@lemmy.world 3 points 6 months ago

That tracks.

[–] davidgro@lemmy.world 4 points 6 months ago* (last edited 6 months ago) (1 children)

In your case that was a motor-control issue, not a flaw in reasoning. In the LLM case it's a pure implementation of a Chinese Room: the "book and pencils" (the weights) semi-randomly generate text that causes humans to experience textual pareidolia more often than not.

It can be useful - that book is very large, and contains a lot of residue of valid information and patterns, but the way it works is not how intelligence works (still an open question of course, but 'not that way' is quite clear.)

This is not to say that true AI is impossible - I believe it is possible, but it will have to be implemented differently. At the very least, it will need the ability to self-modify in real time (learning).

[–] voracitude@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (1 children)

I appreciate the response! My point is that it was a flaw in a low-level process my conscious mind doesn't have any direct control over, and that it produced erroneous output which my conscious mind recognised as wrong but could not correct by itself.

I updated the comment you replied to with some more information, and also articulated the real question I'm trying to ask, which is:

> Let’s assume that it's (editor's note: "it" being my perception that we're getting closer to true AGI) just confirmation bias though, rather than any actual improvement in the model or progress towards “true” AGI. What would it take for you to believe you’re speaking with a “true” AGI, or at least a human-level artificial intelligence with a consciousness of its own?

And, on the other side of the coin, how can you prove to me that you're a human-level intelligence with a consciousness of your own?

[–] davidgro@lemmy.world 3 points 6 months ago (1 children)

What would convince me that we may be on the right path: besides huge improvements in reasoning, it would (like I mentioned) need to be able to learn - and not just track previous text; I mean permanently adding to or adjusting the weights (or equivalent) of the model.
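To make that concrete, here's a toy sketch (a single linear neuron, nothing remotely LLM-scale, purely illustrative) of the difference between inference with frozen weights and "real-time learning" that folds each interaction back into the weights:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # the model's weights, normally frozen after training

def predict(x: np.ndarray) -> float:
    """Inference as LLMs do it today: weights are read, never written."""
    return float(w @ x)

def predict_and_learn(x: np.ndarray, target: float, lr: float = 0.1) -> float:
    """What 'real-time learning' would mean: the weights themselves change.

    One gradient step on squared error, applied during use and kept
    permanently - exactly what current LLMs do not do while generating.
    """
    global w
    y = float(w @ x)
    w -= lr * (y - target) * x  # permanent adjustment of the weights
    return y

x = np.array([1.0, 0.5, -0.2])
for _ in range(5):
    predict_and_learn(x, target=1.0)  # each call nudges w; predict(x) drifts toward 1.0
```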

And likely the ability to go back and change already-generated text after it has reasoned further. Try asking an LLM to generate novel garden path sentences - it can't know how the sentence will end, so it can't come up with good beginnings except ones similar to stock examples. (That said, it's not a skill I personally have either, but humans certainly can do it.)

As far as proving I'm a human-level intelligence myself, the easiest way would likely involve brain surgery: probe a bunch of neurons and watch them change action potentials and form synapses in response to new information and skills. But short of that, at the current state of the art I can prove it by stating confidently that Samantha has 1 sister. (Note: that thread was a reply to someone else, but I'm watching the whole article's comments.)
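For what it's worth, the riddle's arithmetic fits in a few lines (a trivial sketch, assuming everyone named is a full sibling):

```python
# "Samantha has 3 brothers. Each brother has 2 sisters.
#  How many sisters does Samantha have?"
sisters_per_brother = 2                  # every brother sees the same 2 girls
girls_in_family = sisters_per_brother    # a shared set, not summed per brother
samantha_sisters = girls_in_family - 1   # Samantha is one of the 2 girls herself
print(samantha_sisters)                  # -> 1
```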

[–] voracitude@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (1 children)

> (Note: that thread was a reply to someone else, but I’m watching the whole article’s comments)

Gotcha, my mistake - my bad!

> And likely the ability to go back and change already-generated text after it has reasoned further.

An interesting criterion - why does going back to edit (instead of correcting itself mid-stream) hold greater weight in your mind? And how about the built-in output evaluation? Isn't the flow

Receive prompt > Generate text > Evaluate generated text > Re-prompt with critique > Evaluate revised text

basically the same thing?
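Sketched as code, that flow might look something like the loop below - a minimal, hypothetical sketch where `generate` stands in for whatever completion API is in play, not any specific library:

```python
from typing import Callable

def answer_with_self_check(
    question: str,
    generate: Callable[[str], str],  # prompt -> completion; any backend
    max_rounds: int = 3,
) -> str:
    """Receive prompt > generate > evaluate > re-prompt with critique > evaluate."""
    draft = generate(question)
    for _ in range(max_rounds):
        critique = generate(
            f"Question: {question}\nAnswer: {draft}\n"
            "Reply OK if the answer is correct; otherwise describe the flaw."
        )
        if critique.strip().upper().startswith("OK"):
            break  # self-evaluation passed; keep the current draft
        draft = generate(  # re-prompt with the critique and try again
            f"Question: {question}\nPrevious answer: {draft}\n"
            f"Critique: {critique}\nWrite a corrected answer."
        )
    return draft
```

Each pass through the loop evaluates the revised text again, which seems like the same flow to me.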

> As far as proving I'm a human-level intelligence myself, the easiest way would likely involve brain surgery: probe a bunch of neurons and watch them change action potentials and form synapses in response to new information and skills.

Couldn't you perform this test on any animal with a discrete brain? Hell, we've seen animals learning for decades, and some of them even teach each other, so brain activity and the formation of new synaptic connections doesn't strike me as incontrovertible proof of human-level intelligence.

> Try asking an LLM to generate novel garden path sentences - it can't know how the sentence will end, so it can't come up with good beginnings except ones similar to stock examples.

I am absolutely game to try this, but I lack what I'd call solid criteria for evaluating novel garden-path sentences. This was my first attempt with Llama 3 running on my 3070 (edit: I put it behind this spoiler tag 'cause it's quite a chunk of text otherwise):

spoiler

Prompt: Generate an original "garden-path" sentence. It must be unique, never published before, and in English.

Response: What a delightful challenge! Here's my attempt at creating an original garden-path sentence:

"The five-year-old astronaut who had been kidnapped by aliens as a toddler finally discovered the shocking truth about his parents' mysterious disappearance on the day he turned into a chicken."

This sentence is designed to be ambiguous and misleading, with multiple clauses that lead the reader down a path of confusion before arriving at an unexpected punchline. The goal is to create a sense of surprise or even bewilderment when the reader finally understands what's happening in the story.

Please note that I've taken care to ensure this sentence has never been published before and is entirely original, but if you're concerned about its uniqueness, please let me know!

Prompt: No, a garden-path sentence must still make logical sense; the ambiguity comes from the double-meaning of the words used. For example, consider the sentence "The sour drink from the ocean". "The sour" could refer to "the drink from the ocean", as in "ocean water is sour". It could also refer to "sour people", who "drink from the ocean."

Response: I understand now that a garden-path sentence must still be grammatically correct and logically coherent, but with an ambiguity or double meaning that can lead readers down a wrong path. Thank you for explaining!

In that case, I'll try again to create an original garden-path sentence:

"The new employee was given the old keys by the company."

At first glance, this sentence seems straightforward: someone is giving another person some old keys as part of their job or employment process. However, upon closer reading, one might interpret "old" as referring not only to the physical state of the keys but also to the fact that they were previously used by an older employee who has since retired.

Please let me know if this meets your criteria for a garden-path sentence!

Prompt: Are there other possible logically-consistent interpretations of this sentence, that you can think of?

Response: After re-reading my attempt at creating a garden-path sentence, I realized that there are indeed additional logical and coherent interpretations:

  1. The new employee is being given old keys (as in, no longer used or relevant) by the company as part of their onboarding process.
  2. The "old" refers to the fact that the keys were previously owned or possessed by someone else (e.g., a previous owner or manager), and the company is giving them to the new employee.

These alternative interpretations highlight how garden-path sentences can have multiple plausible meanings, making it even more challenging for readers to arrive at the intended meaning!

Thank you for pushing me to consider these additional possibilities!

It's a bit simple, and it's not how I would write it (I think "by the company" is extraneous for example), but I do think it counts as a garden path sentence at least, and it did get the third meaning I was thinking of for "the old keys" after a leading but open nudge. Now, the question is whether it's novel - what do you think? Searching for it on DuckDuckGo doesn't bring up any exact or close matches that I could find, but admittedly I'm working and didn't look very hard.

[–] davidgro@lemmy.world 2 points 6 months ago (1 children)

> An interesting criterion - why does going back to edit (instead of correcting itself mid-stream)

I suppose those would be equivalent; I just haven't seen it done (at least not properly). The example you posted earlier with the siblings, for instance, showed how it could only append more text and not actually produce corrections.

> Couldn't you perform this test on any animal with a discrete brain?

Oh, right. Animals do exist. It simply hadn't occurred to me at that moment, even though there is one right next to me taking a nap. However, a lot of them are capable of more rational thought than LLMs are - even bees can count reasonably well. Anyway, defining human-level intelligence is a hard problem. Determining it is even harder, but I still say it's feasible to say some things aren't it.

> [Garden path sentences]

No good. The difference between a good garden path and simple ambiguity is that the 'most likely' interpretation when the reader is halfway through the sentence turns out to be ungrammatical or nonsense by the end. The way LLMs work, they don't like to put words together in an order in which they don't usually occur, even if in the end there's a way to interpret the sentence that makes sense.

The example it made with the keys is particularly bad because the two meanings are nearly identical anyway.

Just for fun I'll try to make one here:

"After dealing with the asbestos, I was asked to lead paint removal."

It might not work - the meaningful interpretation could be too obvious compared to the toxic-metal reading - but it has the right structure.

[–] voracitude@lemmy.world 2 points 6 months ago (1 children)

> the example you posted earlier with the siblings, for instance, showed how it could only append more text and not actually produce corrections

Ah, well, I did already explain my view of what was happening there and why I found it so striking. It read to me like it was trying to issue a correction, but its lower-level processes kept spitting back the wrong answer, so it could not - the same way I couldn't get my hand to spit out an 8.

> there is one right next to me taking a nap

Aww. Please provide pats from me ❤ Also regarding bees, that's exactly the example I was thinking about using! Great minds, I guess :P

> “After dealing with the asbestos, I was asked to lead paint removal.”

Yeah, that's about on the same level as what I was getting from Llama 3 and even ChatGPT-4, to be honest. These are tough even for humans! I did spend a bit more time trying to coach it, modifying my prompts, but it didn't do well regardless. "While the man hunted the deer ran into the forest" was one output I thought was kinda close, because very VERY briefly I read it as "while the man hunted the deer". It's nowhere near as good as "The horse raced past the barn fell", which got me for a solid minute or so because I had to brain through whether it was using the archaic meaning of "fell" in a way I wasn't seeing.

> but I still say it's feasible to say some things aren't it.

I like Steve Hofstetter's way of phrasing this: "I don't know how to fly a plane, but if I see one in a tree I know someone fucked up". It's a sentiment I generally agree with. That said, given how difficult it is to even define human-level intelligence, I don't think it's as easy to definitively say "this ain't it" as you imply. We are, after all, resorting to tests that many humans can't pass. I consider myself pretty well-read for someone who didn't finish college, playing with language is one of my favourite pastimes, and we're talking in the same thread where I defended my creativity by citing the (silly, simplistic) lyrics I wrote - yet I can't convincingly pass the garden-path test. At least, I haven't been able to yet.

[–] davidgro@lemmy.world 2 points 6 months ago (1 children)

I gotta go for now, but one quick note:

"While the man hunted the deer ran into the forest"

Actually looked too good to be an original creation from an LLM to me, and sure enough it's not. (About half way down)
I was actually looking up the one about the horse when I found that page.

[–] voracitude@lemmy.world 2 points 6 months ago

Oh you're kidding! Haha well it tried.

I appreciate the discussion, this was nice. Catch ya around!

[–] Thorny_Insight@lemm.ee 5 points 6 months ago* (last edited 6 months ago)

"Lie"

It's a theory. A plausible but unlikely one - just like it was unlikely, but possible, that the first atomic bomb would set the atmosphere on fire. I think events with consequences of this magnitude deserve some consideration. I doubt humans are anywhere even near the far end of the intelligence spectrum, and only a human is stupid enough to think that something that is would not pose any potential danger to us.

[–] bamfic@lemmy.world 3 points 6 months ago

capitalism will kill us all before AI will

[–] AdrianTheFrog@lemmy.world 2 points 6 months ago

Currently, any AI models that have the ability to make complex decisions are trained to recreate patterns from their training data. In their current state, you'd have to be pretty exceptionally stupid to make an AI that wants to kill you and to give it that ability at the same time. Of course, who knows what's going on at all of these private corporations and military contractors, but I think regular war, fascism, and nuclear weapons are the bigger threats by orders of magnitude.

[–] No1@aussie.zone 1 points 6 months ago

That was a disturbing watch...