this post was submitted on 27 May 2024
1102 points (98.0% liked)

You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), the technology that drives AI Overviews, and this feature "is still an unsolved problem."

[–] TacticsConsort@yiffit.net 222 points 6 months ago (3 children)

In the interest of transparency, I don't know if this guy is telling the truth, but it feels very plausible.

[–] DdCno1@kbin.social 126 points 6 months ago (7 children)

It seems like the entire industry is in pure panic about AI, not just Google. Everyone hopes that LLMs will end years of homeopathic growth through iteration on long-existing technology, which is why the field attracts tons of venture capital.

Google, which sits where IBM was decades ago, is too big, too corporate, and too slow now, so it took them years to react to this fad. When they finally did, all they were able to come up with was a rushed equivalent of existing LLMs that suffers from all of the same problems.

[–] TigrisMorte@kbin.social 59 points 6 months ago (1 children)

They all hope it'll end years of having to pay employees.

[–] deweydecibel@lemmy.world 5 points 6 months ago

It's also useful because it gives a corporate-controlled filter for all information, one that most people will never truly appreciate is being used as a mouthpiece.

The end goal of this is fairly obvious: imagine Google where, instead of the sponsored result followed by all the other results, it's just the sponsored result.

[–] NutWrench@lemmy.world 53 points 6 months ago (2 children)

I think this is what happens to every company once all the smart / creative people have gone. All you have left are the "line must always go up" business idiots who don't understand what their company does or know how to make it work.

[–] _number8_@lemmy.world 17 points 6 months ago

similarly i'm tired of apple fanboys pretending the company hasn't gotten dramatically worse since jobs died as well. yeah he sucked in his own ways but things were starkly less shitty and belittling. tim cook would be gone for those fucking lightning-3.5mm dongles

[–] SlopppyEngineer@lemmy.world 7 points 6 months ago

And after the MBAs, private equity firms take over, and eventually it's sold for parts.

[–] dustyData@lemmy.world 27 points 6 months ago

Just want to say that "homeopathic growth" is both a hilarious and a perfectly adequate description of what the modern tech industry is.

[–] SomeGuy69@lemmy.world 8 points 6 months ago (2 children)

The snake ate its tail before it was fully grown. The AI inbreeding might already be too deeply integrated, causing all sorts of mumbo-jumbo. They also have layers of censorship, which affect the results. The same thing happened to ChatGPT: the more filters they added, the more confused the results became. We don't even know if the hallucinations are fixable; AI is just guessing, after all. Who knows if AI will ever understand that 1+1=2 by calculating, instead of going by probability.

[–] jacksilver@lemmy.world 6 points 6 months ago

Hallucinations aren't fixable, as LLMs don't have any actual "intelligence". They can't test or evaluate things to determine whether what they say is true, so there is no way to correct it. At the end of the day, they are intermixing all the data they "know" to give the best answer; without being able to test their answers, LLMs can't vet what they say.

[–] ech@lemm.ee 5 points 6 months ago

Even saying they're guessing is wrong, as that implies intention. LLMs aren't trying to give an answer, let alone a correct answer. They just put words together.
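
To make that concrete, here's a minimal sketch of what "just putting words together" means (the vocabulary and scores below are toy values invented for illustration, not any real model's internals): at every step, the model simply samples the next token from a probability distribution.

```python
# A toy sketch of next-token sampling. The vocabulary and logits are
# invented for illustration; they are not taken from any real model.
import math
import random

vocab = ["cheese", "glue", "sauce", "toppings"]
logits = [2.1, 0.3, 1.7, 0.9]  # hypothetical raw scores from the network

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# The next token is drawn by weighted chance. A plausible-sounding but
# wrong token ("glue") still has a nonzero probability of being picked,
# and at this level that's all a "hallucination" is.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
```

There's no step anywhere in that loop where the model checks whether the output is true; "correct" never enters into it.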

[–] vrighter@discuss.tchncs.de 3 points 6 months ago

suffers from all the same ~~problems~~ features. It's inherent to the tech itself.

[–] jaybone@lemmy.world 2 points 6 months ago

Well, their search has been shit for years and no one seems to be in any "panic" to fix that. How tone deaf to think that adding AI to their shittified search matters to anyone.

“But it will summarize our SEO advertisement search results!”

[–] QuadratureSurfer@lemmy.world -3 points 6 months ago (1 children)

Journalists are also in a panic about LLMs; they feel their jobs are threatened by the technology's potential. This is why (in my opinion) we're seeing a lot of news stories that focus on any imperfections that can be found in LLMs.

[–] EldritchFeminity@lemmy.blahaj.zone 9 points 6 months ago (1 children)

They're not threatened by its potential. They, like artists, are threatened by management who think that LLMs are good enough today to replace part or all of their staff.

There was a story from earlier this year about a company that owns 12-15 different gaming news outlets and fired about 80% of its writing staff and journalists, replacing 100% of the staff at the majority of the outlets with LLMs and leaving a skeleton crew at the rest.

What you're seeing isn't some slant trying to discredit LLMs. It's the results of management who are using them wrong.

[–] QuadratureSurfer@lemmy.world -1 points 6 months ago

What I mean is that journalists feel threatened by it in some way (whether I use the word "potential" here or not is mostly irrelevant).

In the end this is just a theory, but it makes sense to me.

I absolutely agree that management has greatly misunderstood how LLMs should be used. They should be used as a tool, but treated like an intern who's speaking out loud without citing any sources. All of their statements and work should be double checked.

[–] FiniteBanjo@lemmy.today 5 points 6 months ago

Nice imgflip watermark you fucking barbarian

[–] CheeseNoodle@lemmy.world 3 points 6 months ago

I feel like the "Jarvis assistant" is most likely going to be a much simpler Siri-type thing with a very restricted chatbot overlay. And then there will be the open-source assistant that just exists to help you sort through the bullshit generated by other chatbots.