this post was submitted on 18 Jun 2024
444 points (97.4% liked)
Technology
Understanding the variety of speech over a drive-thru speaker can be difficult even for an experienced human. I can't see the current level of voice recognition matching that, especially if it's feeding whatever it manages to detect into an LLM. If I'm placing a food order, I don't need an LLM hallucination trying to fill in the blanks for whatever it didn't convert correctly to tokens or wasn't trained on.
Yeah, I've seen a lot of dumb LLM implementations, but this one may take the cake. I don't get why tech leaders see "AI" and think "yes, please throw that at everything." I know it's the current buzzword, but it's been proven OVER AND OVER just in the past couple of months that it's nowhere close to ready for prime time.
Most large corporations’ tech leaders don’t actually understand how the tech works. They’re being told that if they don’t have an AI plan, their company will be made obsolete by competitors that do; often by AI “experts” who also don’t have the slightest understanding of how LLMs actually work. Without that understanding, companies are rushing to use AI to solve problems AI can’t solve.
AI is not smart, it’s not magic, it can’t “think”, it can’t “reason” (despite what OpenAI’s marketing claims); it’s just math that measures how well something fits the pattern of the examples it was trained on. Generative AIs like ChatGPT work by considering every possible word that could come next and ranking the candidates by which one best matches the pattern.
If the input doesn’t resemble a pattern it was trained on, the best ranked response might be complete nonsense. ChatGPT was trained on enough examples that for anything you ask it there was probably something similar in its training dataset so it seems smarter than it is, but at the end of the day, it’s still just pattern matching.
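The "rank every candidate next word by how well it fits the training examples" idea can be sketched with a toy bigram model. This is purely my illustration of the principle, not how any real LLM is implemented (real models use neural networks over token embeddings, not frequency tables), and the training text is made up:

```python
# Toy sketch of next-token prediction as pattern matching.
# A frequency table of "which word followed which" stands in for
# the learned pattern; a real LLM learns this with a neural network.
from collections import Counter, defaultdict

# Hypothetical drive-thru-flavored training text.
training_text = (
    "i would like a burger . i would like fries . "
    "i would like a soda . the burger was good ."
)

# "Train": count which word follows which.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def rank_next(word):
    """Rank every word ever seen after `word`, best match first."""
    return follows[word].most_common()

# "a" followed "like" twice and "fries" once, so "a" ranks first.
print(rank_next("like"))   # [('a', 2), ('fries', 1)]

# A word the model never saw has no pattern to match at all.
print(rank_next("pizza"))  # []
```

The `rank_next("pizza")` case mirrors the point above: when the input doesn't resemble anything in the training data, there's no good pattern to rank against, and a real model still emits *something*, which is where the nonsense comes from.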
If a company’s AI strategy is based on the assumption that AI can do what its marketing claims, we’re going to keep seeing these kinds of humorous failures.
AI (for now at least) can’t replace a human in any role that requires any degree of cognitive thinking skills… Of course we might be surprised at how few jobs actually require cognitive thinking skills. Given the current AI hypewagon, apparently CTO is one of those jobs that doesn’t require cognitive thinking skills.
Especially in situations like this, where it's quite possible it would cost less to go back to basics: better pay and training to create willing workers. Maybe the AI's up-front cost was less than what improving conditions would have cost, but once you add in all the backtracking and the cost of mistakes, I doubt it.
Especially with vehicle and background noise like assholes blaring music while they’re second in line and maybe turning it down while ordering, or douchebags with loud trucks rolling coal in line