this post was submitted on 27 May 2024
1102 points (98.0% liked)

Technology


You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of the large language models (LLMs) that drive AI Overviews, and this "is still an unsolved problem."

[–] Hackworth@lemmy.world 20 points 6 months ago* (last edited 6 months ago) (1 children)

Wikipedia got where it is today by providing accurate information. Google results have always been full of inaccurate information. Sorting through the links for respectable sources just became second nature; later, we learned to scroll past the ads before we even started sorting. The real issue with misinformation from an AI is that people treat it like it should be some infallible oracle - a point of view only half-discouraged by marketing, with a few warnings about hallucinations. LLMs are amazing; they're just not infallible. Just like you'd check a Wikipedia source if it seemed suspect, you shouldn't trust LLM outputs uncritically. /shrug

[–] blind3rdeye@lemm.ee 14 points 6 months ago* (last edited 6 months ago) (1 children)

Google providing links to dubious websites is not the same as Google directly providing dubious answers to questions.

Google is generally considered a trusted company. If you search for some topic and Google spits out a bunch of links, you can generally trust that those links will be related to your search - but the information you find there may or may not be reliable. That information comes from an external website, often some unknown, untrusted source; so even though Google is trusted, we know the external information we found might not be. The new situation is that Google is directly providing bad information itself. It isn't linking us to some unknown, untrusted source - the supposedly trustworthy Google itself is telling us the answers to our questions.

None of this would be a problem if people just didn't consider Google trustworthy in the first place.

[–] Hackworth@lemmy.world 3 points 6 months ago

I do think Perplexity does a better job. Since it cites sources in its generated responses, you can easily check its answers. As for the general public trusting Google, the company's fall from grace began in 2017, when the EU fined it roughly €2.4 billion for manipulating search results. There's been a steady stream of controversies since then, including the revelation that Chrome continues to track you in private mode. YouTube's predatory practices are relatively well known. I guess I'm saying that if this is what finally makes people give up on them, it's no skin off my back. But I'm disappointed by how much their mismanagement seems to be adding to the pile of negativity surrounding AI.