You mean that "AI" isn't actually intelligent at all? It just averages over stolen content whether it's correct or a joke? Wow I'm shocked.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; to ask if your bot can be added, please contact us.
- Check for duplicates before posting; duplicates may be removed.
So the next captcha will be a list of AI-generated statements and you have to decide which are batshit crazy?
He's such a damn clown.
Here's one: SCRAP IT
The "solution" is to curate things, invest massive human resources in it, and ultimately still get accused of tailoring the results and censoring stuff.
Let's put that toy back in the toy box and keep it to the few things it can do well, instead of trying to fix every non-broken thing with it.
The "solution" is to curate things, invest massive human resources in it
Hilariously, Google actually used to do this: they had a database called the "knowledge graph" that slowly accumulated verified information and relationships between commonly-queried entities, producing an excellent corpus of reliable, easy-to-find information about a large number of common topics.
Then they decided having people curate things was too expensive and gave up on it.
It's quite simple. Garbage in, garbage out. Data they use for training needs to be curated. How to curate the entire internet, I have no clue.
The real answer would be "don't". Have a decent whitelist of reliable sources for training data. Don't just add every orifice of the internet (like reddit) to the training data. Limitations would be good in this case.
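The whitelist idea above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual pipeline: the domain names and the `keep_for_training` helper are made up, and a real curation effort would need far more careful vetting than a domain check.

```python
# Hypothetical sketch: keep a training document only if its source
# domain is on an allow-list. Domains here are examples, not an
# endorsement of any real curation policy.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"en.wikipedia.org", "docs.python.org"}  # example entries

def keep_for_training(doc: dict) -> bool:
    """Return True if the document's source URL is on the whitelist."""
    host = urlparse(doc["url"]).netloc.lower()
    return host in ALLOWED_DOMAINS

docs = [
    {"url": "https://en.wikipedia.org/wiki/Glue", "text": "..."},
    {"url": "https://old.reddit.com/r/Pizza/comments/", "text": "..."},
]
curated = [d for d in docs if keep_for_training(d)]
```

The point of the filter is that exclusion is the default: anything not explicitly vetted stays out, which is the opposite of how scrape-everything training sets are built.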
I've seen suggestions that the AI Overview is based on the top search results for the query, so the terrible answers may be more to do with Google Search just being bad than with any issue in their AI. The AI Overview just makes things a bit worse by removing the context, so you can't see that the glue-on-pizza suggestion was a joke on reddit, or that it was The Onion suggesting eating rocks.
Nothing is going to change until people die because of this shit.
And to show everyone how sorry they are... free Google AI services for a year when you digitally sign this unrelated document.
Yep, better disclaimers are inevitable. When they call it a "feature", it isn't getting fixed.
I just realized that Trump beat them to the punch. Injecting cleaning solution into your body sounds exactly like something the AI Overview would suggest to combat COVID.
These models are Mad Libs machines: they just decide on the next word based on input and training. As such, there isn't a solution to stopping hallucinations.
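The "decide on the next word" loop can be shown with a toy model. As an illustrative sketch only: real LLMs use neural networks over huge corpora, but here "training" is just bigram counts over a tiny string, and the decoding loop has the same shape, which is why nothing in it checks whether the output is true.

```python
# Toy next-word predictor: "train" by counting which word follows which,
# then generate greedily by always taking the most frequent successor.
# Nothing here models truth -- only word-adjacency statistics.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count successors for each word in the corpus."""
    words = corpus.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def generate(table: dict, start: str, length: int) -> list:
    """Greedy decoding: repeatedly append the most likely next word."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: the word never appeared mid-corpus
        out.append(followers.most_common(1)[0][0])
    return out

table = train_bigrams("the cat sat on the mat the cat ran")
words = generate(table, "the", 3)  # continues with whatever co-occurred most
```

A joke on reddit and a chemistry textbook feed the same counters, which is the commenter's point: the machine averages adjacency, it doesn't know anything.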
Are they now AI, large language models or AI large language models?
You ask a lot of questions for a bag of sentient meat.
"It's your responsibility to make sure our products aren't nonsense. All we want to do is to make money off you regardless."