this post was submitted on 27 May 2024
1102 points (98.0% liked)
Technology
Wikipedia got where it is today by providing accurate information. Google results have always been full of inaccurate information; sorting through the links for respectable sources just became second nature, and then we learned to scroll past the ads before sorting through the links. The real issue with misinformation from an AI is that people treat it like it should be some infallible oracle - a point of view only half-discouraged by marketing, with a few warnings about hallucinations. LLMs are amazing; they're just not infallible. Just as you'd check a Wikipedia source if it seemed suspect, you shouldn't trust LLM outputs uncritically. /shrug
Google providing links to dubious websites is not the same as Google directly providing dubious answers to questions.
Google is generally considered a trusted company. If you search for some topic and Google spits out a bunch of links, you can generally trust that those links are somehow related to your search - but the information you find there may or may not be reliable. That information comes from an external website, often some unknown, untrusted source - so even though Google is trusted, we know the external information we found might not be. The new situation is that Google is directly providing bad information itself. It isn't linking us to some unknown, untrusted source; rather, the supposedly trustworthy Google itself is telling us the answers to our questions.
None of this would be a problem if people just didn't consider Google trustworthy in the first place.
I do think Perplexity does a better job. Since it cites sources in its generated response, you can easily check its answers. As for the general public trusting Google, the company's fall from grace began in 2017, when the EU fined it about €2.4 billion for manipulating search results in favor of its own shopping service. There's been a steady stream of controversies since then, including the revelation that Chrome continues to track you in private mode. YouTube's predatory practices are relatively well known. I guess I'm saying that if this is what finally makes people give up on them, it's no skin off my back. But I'm disappointed by how much their mismanagement seems to be adding to the pile of negativity surrounding AI.