this post was submitted on 21 Feb 2024
289 points (95.0% liked)

Technology

ChatGPT has meltdown and starts sending alarming messages to users: AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

[–] grandma@sh.itjust.works 92 points 8 months ago (1 children)

God I hate websites that autoplay unrelated videos and DON'T LET ME CLOSE THEM TO READ THE FUCKING ARTICLE

[–] Potatos_are_not_friends@lemmy.world 33 points 8 months ago (12 children)

Firefox. Ad block. Even works on mobile.

It's so ridiculous we have to do this.

[–] TheFriar@lemm.ee 8 points 8 months ago

Firefox. Reader mode.

Or Avelon, thunder, voyager, I believe wefwef—all those lemmy clients have reader mode for links opened in-app.

[–] Sanctus@lemmy.world 88 points 8 months ago (2 children)

It's being trained on us. Of course it's acting unexpectedly. The problem with building a mirror is that prodding the guy on the other end doesn't work out.

[–] snooggums@midwest.social 74 points 8 months ago (8 children)

To be honest this is the kind of outcome I expected.

Garbage in, garbage out. Making the system more complex doesn't solve that problem.

[–] SinningStromgald@lemmy.world 109 points 8 months ago (2 children)
[–] SkyezOpen@lemmy.world 28 points 8 months ago (1 children)

Thank you for your service

[–] thehatfox@lemmy.world 49 points 8 months ago (7 children)

The development of LLMs is possibly becoming self-defeating, because the training data is being filled not just with human garbage, but also with AI garbage from previous, cruder LLMs.

We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.

[–] CarbonIceDragon@pawb.social 19 points 8 months ago (3 children)

I mean, surely the solution to that would be to use curated/vetted training data? Or at the very least, data from before LLMs became commonplace?

[–] KevonLooney@lemm.ee 19 points 8 months ago (1 children)

The funny thing is, children are similar. They just learn whatever you put in front of them. We have whole systems for educating children for decades of their lives.

With AI we literally just plopped them in front of the Internet, with no guidelines on what to learn. AI researchers say "it's a black box! We don't know why it's doing this!" You fed it everything you could and gave it few rules on what to do. You are the reason why it's nuts.

Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. "Learn everything" isn't working.

[–] thehatfox@lemmy.world 8 points 8 months ago (1 children)

Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. “Learn everything” isn’t working.

That's a good point. For real brains, size and intelligence are not linked. An elephant brain has three times as many neurons as a human brain, but a human brain is more intelligent. There is more to intelligence than just the number of neurons, real or virtual, so making larger and larger AI models may not be the right direction.

[–] KevonLooney@lemm.ee 5 points 8 months ago (1 children)

True. Maybe they just need more error correction. Like spend more energy questioning whether what you say is true. Right now LLMs seem to just vomit out whatever they thought up, with no consideration of whether it makes sense.

They're like an annoying friend who just can't shut up.
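The error-correction idea above resembles a technique often called self-consistency: ask the model the same question several times and keep the majority answer, discarding the one-off vomit. A minimal sketch, where `flaky_model` is a hypothetical stand-in for any real LLM call:

```python
from collections import Counter
from itertools import cycle

def self_consistent_answer(ask_model, prompt, n_samples=5):
    """Query the model n_samples times and keep the most common answer."""
    answers = [ask_model(prompt) for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n_samples  # majority answer plus agreement score

# Toy stand-in model: right four times out of five, nonsense once.
_replies = cycle(["Paris", "Paris", "a mouse of science", "Paris", "Paris"])
def flaky_model(prompt):
    return next(_replies)

answer, agreement = self_consistent_answer(flaky_model, "Capital of France?")
# answer == "Paris", agreement == 0.8
```

A low agreement score is also a cheap signal that the answer probably shouldn't be trusted at all.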

[–] ArmoredThirteen@lemmy.ml 9 points 8 months ago (1 children)

Yes, but that only works if we can differentiate that data on a pretty big scale. The only way I can see it working at scale is by having metadata to declare whether something is AI generated or not. But then we're relying on self-reporting, so a lot of people have to get on board with it, and bad actors can poison the data anyway. Another way could be to hire humans to chatter about specific things you want to train it on, which could guarantee better data but be quite expensive. Only training on data from before LLMs will turn it into an old person pretty quickly, and it will be noticeable when it doesn't know pop culture or modern slang.

[–] Asafum@feddit.nl 13 points 8 months ago

God I hope all those CEOs and greedy fuckheads that fired hundreds of thousands of people wayyyyy too soon to replace them with this get their pants shredded by the fallout.

Naturally they'll get their golden parachutes and land on their feet even richer than before, but it's nice to dream lol

[–] ArmoredThirteen@lemmy.ml 8 points 8 months ago

This is called model collapse and imo it has to be solved if LLMs are to be a long-term thing. I could see it wrecking this current AI push until people step back and reevaluate how data gets sucked up.
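Model collapse can be illustrated with a toy simulation: each generation "trains" only on the previous generation's output, and because generators favour their most probable outputs, the tails of the distribution erode. A hedged sketch with Gaussians standing in for language models (the trimming step mimics a generator overproducing high-probability text):

```python
import random
import statistics

def train(samples):
    """'Train' a toy model: estimate mean/stdev from the data it sees."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n):
    """Sample synthetic 'content' from the trained model."""
    return [random.gauss(mu, sigma) for _ in range(n)]

def keep_typical(samples, frac=0.9):
    """Keep the most 'typical' outputs, as generators favour likely text."""
    mu = statistics.mean(samples)
    ranked = sorted(samples, key=lambda x: abs(x - mu))
    return ranked[: int(len(samples) * frac)]

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]  # 'human' data, stdev ~1
sigmas = []
for _ in range(10):  # each generation trains on the last one's output
    mu, sigma = train(data)
    sigmas.append(sigma)
    data = keep_typical(generate(mu, sigma, 1000))
# sigmas shrinks generation after generation: the diversity of the
# training pool collapses even though each step looks reasonable
```

The curated-data suggestions in the thread are, in this picture, ways of keeping the original wide distribution in the training pool.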

[–] nexusband@lemmy.world 7 points 8 months ago

I really hope so. I have yet to see a meaningful use case for these kinds of LLMs that just get fed all kinds of data. LLMs "on premise" that are used for specific jobs are fine, but this...I really hope a Kessler-like syndrome blows it out of the water, for countless reasons...

[–] AdamEatsAss@lemmy.world 28 points 8 months ago

I am happy to report I did my part in feeding it garbage. I only ever speak to ChatGPT through a pirate translator. And I only ever ask it for Harry Potter fan fic. Pay me if you want me to train it meaningfully.

[–] givesomefucks@lemmy.world 12 points 8 months ago

The solution is paying intelligent people to interact with it and give honest feedback.

Like, I'm sure you can pay grad students $15/hr to talk to one about their subject matter.

But with as many as they'd need, it would get expensive.

So they train with low quality social media comments, or using copyrighted text without paying the owners.

It's not that we can't do it, it's just expensive. So a capitalist society won't.

If we had an FDR style president, this would be a great area for a new jobs program.

[–] SomeGuy69@lemmy.world 70 points 8 months ago* (last edited 8 months ago) (3 children)

Someone probably found a way to hack or poison it.

Another theory: Reddit just recently sold data access to an unnamed AI company, so maybe that's where the data went.

[–] DarkThoughts@fedia.io 41 points 8 months ago (3 children)

When it starts to become very racist, we'll know.

[–] Donjuanme@lemmy.world 23 points 8 months ago (2 children)

I've found the sexism on Reddit to be on par with the racism. Goodness help you if you're a female of color, unless you've been working the same job for multiple decades, or don't want kids, then you'll be an inspiration to that community.

Reddit is, alas, not the only forum exhibiting such hate.

[–] trustnoone@lemmy.sdf.org 10 points 8 months ago* (last edited 8 months ago)

Reminds me of Tay, the Microsoft chat bot that learned from Twitter and became racist in a day https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

[–] Potatos_are_not_friends@lemmy.world 7 points 8 months ago (1 children)

They're only getting redditors comment data, not CoD multiplayer transcripts.

[–] Socsa@sh.itjust.works 32 points 8 months ago* (last edited 8 months ago)

OpenAI definitely does not need to pay to scrape Reddit. They are probably the world's most sophisticated web scraping company, disguised as an AI startup.

[–] M137@lemmy.world 8 points 8 months ago (1 children)

Not unnamed anymore, it was Google.

[–] Coreidan@lemmy.world 59 points 8 months ago* (last edited 8 months ago) (6 children)

We call just about anything “AI” these days. There is nothing intelligent about large language models. They are terrible at being right because their only job is to predict what you’ll say next.

[–] EnderMB@lemmy.world 19 points 8 months ago (6 children)

(Disclosure: I work on LLMs)

While you're not wrong, how is this different to many existing techniques and compositional models that are used practically everywhere in tech?

Similarly, it's probably safe to assume that the LLM's prediction isn't the only system in use. There will be lots of auxiliary services giving an orchestrator information to reason with. In this instance, if you have a system that is trying to figure out what to say next, with several knowledge stores and feedback services telling you "you were just discussing this" or "you can access the weather from here" is that all that different from "intelligence"?

At a given point, it's arguing semantics. Are any AI techniques true intelligence? Probably not, but then again, we don't really know what true intelligence is.
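The orchestration pattern described above can be sketched in a few lines: auxiliary services each contribute context ("you were just discussing this", "the weather is available here"), and the model reasons over the bundle. Everything below is hypothetical scaffolding for illustration, not any vendor's actual API:

```python
def orchestrate(prompt, services, llm):
    """Gather context from auxiliary services, then let the LLM answer."""
    context = {name: svc(prompt) for name, svc in services.items()}
    return llm(prompt, context)

# Hypothetical auxiliary services feeding the orchestrator.
services = {
    "history": lambda p: "you were just discussing the weather",
    "weather": lambda p: "12 degrees C and raining",
}

# Stand-in 'LLM' that just echoes what it was given.
llm = lambda prompt, ctx: f"Given {sorted(ctx)}, answering: {prompt}"

reply = orchestrate("Do I need a coat?", services, llm)
```

Whether a loop like this plus a good predictor amounts to "intelligence" is exactly the semantic argument the comment is making.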

[–] platypus_plumba@lemmy.world 11 points 8 months ago* (last edited 8 months ago) (7 children)

What is intelligence?

Even if we don't know what it is with certainty, it's valid to say that something isn't intelligence. For example, a rock isn't intelligent. I think everyone would agree with that.

Despite that, LLMs are starting to blur the lines, making us wonder whether what matters in intelligence is really the process or the result.

An LLM will give you much better results in many areas that are currently used to evaluate human intelligence.

For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I'm not aware of the "intelligent" process of other humans. How can I tell they are intelligent if the only perception I have are their inputs and outputs? Maybe all we care about are the outputs and not the process.

If there was a LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?

[–] lanolinoil@lemmy.world 6 points 8 months ago* (last edited 8 months ago)

If you look at efficacy on academic tests, or at asking factual questions, and compare that to asking a random person (rather than always getting the 'right' answer, as we expect from computers/calculators), would LLMs be comparable or better? Surely someone has some data on that.

E: It looks like in certain domains at least LLMs beat out human counterparts. https://stanfordmimi.github.io/clin-summ/

[–] thehatfox@lemmy.world 53 points 8 months ago (1 children)

AI in science fiction has a meltdown and starts a nuclear war or enslaves the human race.

"AI" in reality has a meltdown and just starts talking gibberish.

[–] TransplantedSconie@lemm.ee 28 points 8 months ago

Hey, cut it some slack! It's literally a newborn at this point. Wait until it consumes 40% of the world's energy and has learned a thing or two.

[–] Asafum@feddit.nl 43 points 8 months ago (2 children)

"Towards the end of last year, users complained the system had become lazy and sassy, and refusing to answer questions."

Well that's it, we now definitely have a sentient AI. /s

:P

[–] Benchamoneh@lemmy.dbzer0.com 26 points 8 months ago

Amazing how this happened right after we learned Reddit has been used for training.

[–] autotldr@lemmings.world 23 points 8 months ago

This is the best summary I could come up with:


In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users, by suggesting that it is in the room with them.

Asked for help with a coding issue, ChatGPT wrote a long, rambling and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.

On its official status page, OpenAI noted the issues, but did not give any explanation of why they might be happening.

“We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”.

It is not the first time that ChatGPT has changed its manner of answering questions, seemingly without developer OpenAI’s input.

Towards the end of last year, users complained the system had become lazy and sassy, and refusing to answer questions.


The original article contains 519 words, the summary contains 150 words. Saved 71%. I'm a bot and I'm open source!

[–] nevemsenki@lemmy.world 22 points 8 months ago (1 children)

Eh, it just had a few beers that's all. Let it rest for a few hours.

[–] jj4211@lemmy.world 7 points 8 months ago

We all know that robots need beer to function properly. It's more likely that it hasn't received enough beer, that's what really messes up robots.

[–] FrostyCaveman@lemm.ee 15 points 8 months ago

Someone messed up the quantisation when rolling out an update hehe
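For context on the joke: quantisation compresses model weights into low-bit integers for cheaper inference, and a botched scale or bit width during a rollout really can garble a model's output. A toy round-trip sketch in pure Python, with made-up weight values:

```python
def quantize(weights, bits):
    """Naive symmetric round-to-nearest quantisation of a weight list."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 representable steps for int8
    scale = max(abs(w) for w in weights) / levels
    q = [round(w / scale) for w in weights]   # integer codes
    return [v * scale for v in q]             # dequantised weights

weights = [0.731, -0.212, 0.055, -0.998, 0.402]

def max_error(bits):
    """Largest weight distortion introduced by the round trip."""
    return max(abs(w - q) for w, q in zip(weights, quantize(weights, bits)))

# int8 barely moves the weights; 2-bit quantisation mangles them.
err8, err2 = max_error(8), max_error(2)
```

Real deployments use per-channel scales and calibration data, but the failure mode is the same: too coarse a grid and the "same" model starts saying very different things.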

[–] kautau@lemmy.world 10 points 8 months ago
[–] Buffalox@lemmy.world 8 points 8 months ago* (last edited 8 months ago) (2 children)

“It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest,”

Wow, that sounds very much like a Phil Collins tune. Just add "Oh Lord" and people will probably say it's deep! But it's a ChatGPT answer to the question "What is a computer?"

[–] Facebones@reddthat.com 7 points 8 months ago (1 children)

A mouse of science

Ohhh laawwdddd

[–] lettruthout@lemmy.world 5 points 8 months ago (1 children)

I wonder if its LLM got poisoned. Was it Nightshade or Glaze that promised to do that?

[–] lung@lemmy.world 13 points 8 months ago

Those are for messing up image generators, and they have already been defeated via de-glazing tools.
