this post was submitted on 22 Dec 2024
523 points (96.0% liked)

Technology

[–] LenielJerron@lemmy.world 136 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

A big issue that a lot of these tech companies seem to have is that they don't understand what people want; they come up with an idea and then shove it into everything. There are services that I have actively stopped using because they started cramming AI into things; for example I stopped dual-booting with Windows and became Linux-only.

AI is legitimately interesting technology with genuine specialized use cases, e.g. sorting large amounts of data or optimizing strategies within highly constrained circumstances (like chess or Go). However, 99% of what's being pushed at the general public as AI these days just seems like garbage: bad art, bad translations, and incorrect answers to questions.

I do not understand all the hype around AI. I can understand the danger: people who don't see that it's bad are using it in place of people who know how to do things. But in my teaching, for example, I've never had any issues with students cheating using ChatGPT. I semi-regularly run the problems I assign through ChatGPT, and it gets enough of them wrong that I can't imagine any student would be inclined to cheat with it more than once after their first grade comes in. (In this sense, it's actually impressive technology - we've had computers that can do advanced math highly accurately for a while, but we've finally developed one that's worse at math than the average undergrad in a gen-ed class!)
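That kind of spot-check is easy to automate. A minimal sketch of the idea - `ask_model` is a hypothetical stand-in for an actual ChatGPT API call, and the problems and answer key are made up:

```python
# Sketch: measure how often a model gets assigned problems wrong.
# ask_model is a hypothetical stub standing in for a real API call.

def ask_model(problem: str) -> str:
    # Canned responses for illustration; a real version would query an LLM.
    canned = {
        "2 + 2": "4",
        "derivative of x^2": "2x",
        "17 * 23": "401",  # a typical LLM arithmetic slip (correct: 391)
    }
    return canned[problem]

def error_rate(answer_key: dict[str, str]) -> float:
    """Fraction of problems the model answers incorrectly."""
    wrong = sum(1 for q, ans in answer_key.items() if ask_model(q) != ans)
    return wrong / len(answer_key)

answer_key = {"2 + 2": "4", "derivative of x^2": "2x", "17 * 23": "391"}
print(error_rate(answer_key))  # 1 wrong out of 3
```

With a real model behind `ask_model`, an error rate like that is exactly what makes cheating self-defeating.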

[–] Voroxpete@sh.itjust.works 60 points 2 weeks ago (7 children)

The answer is that it's all about "growth". The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is whether you can make your share price increase faster than the interest on a bond (which is pretty high right now).

To make share price go up like that, you have to do one of two things: show that you're bringing in new customers, or show that you can make your existing customers pay more.
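To put rough numbers on that comparison (the yields below are assumed, purely for illustration):

```python
# Why stock growth has to beat a bond: compound both over 5 years.
bond_yield = 0.05   # assumed annual bond yield, roughly in line with recent rates
years = 5

bond_return = (1 + bond_yield) ** years - 1
print(f"Bond after {years}y: +{bond_return:.1%}")   # ~ +27.6%

# To justify the extra risk, a company has to convince the market of
# faster growth than that - say, 10% a year.
stock_growth = 0.10  # assumed annual share-price growth
stock_return = (1 + stock_growth) ** years - 1
print(f"Stock at 10%/y after {years}y: +{stock_return:.1%}")  # ~ +61.1%
```

If you can't promise a curve like the second one, investors just buy the bond - hence the desperation for a growth story.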

For the big tech companies, there are no new customers left. The whole planet is online. Everyone who wants to use their services is using their services. So they have to find new things to sell instead.

And that's what "AI" looked like it was going to be. LLMs burst onto the scene promising to replace entire industries, entire workforces. Huge new opportunities for growth. Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.

And now they have to show investors that it was worth it. Which means they have to produce metrics that show people are paying for, or might pay for, AI flavoured products. That's why they're shoving it into everything they can. If they put AI in Notepad, then they can claim that every time you open Notepad you're "engaging" with one of their AI products. If they put Recall on your PC, every Windows user becomes an AI user. Google can now claim that every search is an AI interaction because of the bad summary that no one reads. The point is to show "engagement" and "interest", which they can then use to promise that down the line huge piles of money will fall out of this piñata.

The hype is all artificial. They need to hype these products so that people will pay attention to them, because they need to keep pretending that their massive investments got them in on the ground floor of a trillion dollar industry, and weren't just them setting huge piles of money on fire.

[–] MagicShel@lemmy.zip 9 points 2 weeks ago* (last edited 2 weeks ago) (15 children)

I know I'm an enthusiast, but can I just say I'm excited about NotebookLM? I think it will be great for documenting application development. Having a shared notebook that knows the environment, configuration, architecture, and standards for an application, and can answer specific questions about it, could be really useful.

"AI Notepad" is really underselling it. I'm trying to load up massive Markdown documents to feed into NotebookLM to try it out. I don't know if it'll work as well as I'm hoping, because it takes time to put together enough information to be worthwhile in a format the AI can easily digest. But I'm hopeful.

That's not to take away from your point: the average person probably has little use for this, and wouldn't want to put in the effort to make it worthwhile. But spending way too much time obsessing about nerd things is my calling.

[–] Voroxpete@sh.itjust.works 16 points 2 weeks ago (3 children)

From a nerdy perspective, LLMs are actually very cool. The problem is that they're grotesquely inefficient. That means that, practically speaking, whatever cool use you come up with for them has to work in one of two ways: either a user runs it themselves, typically very slowly or on a pretty powerful computer, or it runs as a cloud service, in which case that cloud service has to figure out how to be profitable.

Right now we're not being exposed to the true cost of these models. Everyone is in the "give it out cheap / free to get people hooked" stage. Once the bill comes due, very few of these projects will be cool enough to justify their costs.

Like, would you pay $50/month for NotebookLM? However good it is, I'm guessing it's probably not that good. Maybe it is. Maybe that's a reasonable price to you. It's probably not a reasonable price to enough people to sustain serious development on it.

That's the problem. LLMs are cool, but mostly in a "Hey this is kind of neat" way. They do things that are useful, but not essential, but they do so at an operating cost that only works for things that are essential. You can't run them on fun money, but you can't make a convincing case for selling them at serious money.
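A back-of-envelope sketch of those operating costs - every number below is an assumption, just to show the shape of the problem:

```python
# Rough per-user serving cost for a hosted LLM. All figures assumed.
gpu_cost_per_hour = 2.00   # assumed cloud GPU rental, $/hr
tokens_per_second = 50     # assumed generation throughput per user stream
tokens_per_query = 1_000   # assumed average response length

seconds_per_query = tokens_per_query / tokens_per_second       # 20 s of GPU time
cost_per_query = gpu_cost_per_hour / 3600 * seconds_per_query  # ~ $0.011

queries_per_day = 30       # assumed heavy user
monthly_cost = cost_per_query * queries_per_day * 30
print(f"~${monthly_cost:.2f}/month per heavy user")  # $10.00, before any margin
```

Even with generous throughput assumptions, a heavy user costs real money before margin, support, or R&D - which is why the cheap/free stage can't last.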

[–] Brodysseus@lemmy.dbzer0.com 9 points 2 weeks ago

I've run some college homework through 4o just to see, and it's remarkably good at generating proofs for math and algorithms. Sometimes it's not quite right, but it's usually on the right track to get started.

In some of the busier classes I'm almost certain students do this because my hw grades would be lower than the mean and my exam grades would be well above the mean.

[–] 2pt_perversion@lemmy.world 76 points 2 weeks ago* (last edited 2 weeks ago) (25 children)

There is this seeming need to discredit AI from some people that goes overboard. Some friends and family who have never really used LLMs outside of Google search feel compelled to tell me how bad it is.

But generative AIs are really good at tasks I wouldn't have imagined a computer doing just a few years ago. Even if they plateaued right where they are now, it would lead to major shakeups in humanity's current workflows. It's not just hype.

The part that is over hyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn't really fit, like AI enabled fridges and toasters.

[–] buddascrayon@lemmy.world 70 points 2 weeks ago (1 children)

The part that is over hyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn't really fit, like AI enabled fridges and toasters.

This is literally the hype. This is the hype that is dying and needs to die. Because generative AI is a tool with fairly specific uses. But it is being marketed by literally everyone who has it as General AI that can "DO ALL THE THINGS!" which it's not and never will be.

[–] five82@lemmy.world 11 points 2 weeks ago (2 children)

The obsession with replacing workers with AI isn't going to die. It's too late. The large financial company that I work for has been obsessively tracking hours saved in developer time with GitHub Copilot. I'm an older developer and I was warned this week that my job will be eliminated soon.

[–] sudneo@lemm.ee 39 points 2 weeks ago (37 children)

Even if they plateaued in place where they are right now it would lead to major shakeups in humanity's current workflow

Like which one? We've had ChatGPT for two years now, and already quite a lot of (good?) models. Which shakeup do you think is happening or going to happen?

[–] Eldritch@lemmy.world 23 points 2 weeks ago (37 children)

Computers have always been good at pattern recognition. This isn't new. LLMs are not a form of actual AI; they are programs capable of recognizing patterns and loosely reproducing them in semi-randomized ways. The reason these so-called generative AI solutions have trouble generating the right number of fingers is not only that they have no idea how many fingers a person is supposed to have - they have no idea what a finger is.

The same goes for code completion. They will just generate something that fits the pattern they're told to look for. It doesn't matter if it's right or wrong, because they have no concept of right or wrong beyond fitting the pattern. Not to mention that we've had code completion software for over a decade at this point. LLMs do it less efficiently and less reliably. Their only upside is that they can sometimes recognize and suggest a pattern that those programming the other coding helpers might have missed. Beyond that - such as generating whole blocks of code or even entire programs - you can't even get an LLM to reliably spit out a hello world program.

[–] ssfckdt@lemmy.blahaj.zone 17 points 2 weeks ago

This is easy to say about the output of AIs.... if you don't check their work.

Alas, checking for accuracy these days seems to be considered old fogey stuff.

[–] andallthat@lemmy.world 11 points 2 weeks ago* (last edited 2 weeks ago)

Goldman Sachs, quote from the article:

“AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.”

Generative AI can indeed do impressive things from a technical standpoint, but not enough revenue has been generated so far to offset the enormous costs. As with other technologies, it might just take time (remember how many billions Amazon burned before turning into a cash-generating machine? And Uber has only just started turning a profit), plus a great deal of enshittification once more people and companies are dependent. Or it might just be a bubble.

As humans we're not great at predicting these things including of course me. My personal prediction? A few companies will make money, especially the ones that start selling AI as a service at increasingly high costs, many others will fail and both AI enthusiasts and detractors will claim they were right all along.

[–] nroth@lemmy.world 58 points 2 weeks ago (2 children)

"Built to do my art and writing so I can do my laundry and dishes" -- Embodied agents is where the real value is. The chatbots are just fancy tech demos that folks started selling because people were buying.

[–] bradd@lemmy.world 17 points 2 weeks ago (7 children)

Eh, my best coworker is an LLM. Full of shit, like the rest of them, but always available and willing to help out.

[–] nroth@lemmy.world 10 points 2 weeks ago (5 children)

Though the image generators are actually good. The visual arts will never be the same after this.

[–] LifeInMultipleChoice@lemmy.world 28 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Compare it to the microwave. Is it good at something? Yes. But if you shove your fucking turkey in it at Thanksgiving and expect good results, you're ignorant of how it works. Most people are expecting language models to do shit they aren't meant to do. Most of it isn't new technology, either, but old tech that people slapped a label on. I wasn't playing Soulcalibur on the Dreamcast against AI opponents... yet now they're called AI opponents, with no requirement to be any different. GoldenEye on N64 was man vs. AI. Madden 1995... AI. "Where did this AI boom come from!"

Marketing and mislabeling. Online classes, call it AI. Photo editors, call it AI.

[–] NigelFrobisher@aussie.zone 42 points 2 weeks ago (6 children)

At a beach restaurant the other night I kept hearing a loud American voice cut across all conversation, going on and on about "AI" and how it would get into all human "workflows" (new buzzword?). His confidence and loudness were only matched by his obvious lack of understanding of how LLMs actually work.

[–] ikidd@lemmy.world 36 points 2 weeks ago (3 children)

"Confidently incorrect" I think describes a lot of AI aficionados.

[–] wewbull 17 points 2 weeks ago

And LLMs themselves.

[–] ameancow@lemmy.world 10 points 2 weeks ago* (last edited 2 weeks ago)

I would also add "hopeful delusionals" and "unhinged cultists" to that list of labels.

Seriously, we have people right now making their plans for what they're going to do with their lives once Artificial Super Intelligence emerges and changes the entire world to some kind of post-scarcity, Star-Trek world where literally everyone is wealthy and nobody has to work. They think this is only several years away. Not a tiny number either, and they exist on a broad spectrum.

Our species is so desperate for help from beyond, a savior that will change the current status quo. We've been making fantasies and stories to indulge this desire for millennia, and this is just the latest incarnation.

No company on Earth is going to develop any kind of machine or tool that will destabilize the economic markets of our capitalist world. A LOT has to change before anyone will even dream of upending centuries of wealth-building.

[–] ChaoticEntropy 17 points 2 weeks ago (3 children)

Some people can only hear "AI means I can pay people less/get rid of them entirely" and stop listening.

[–] Blackmist 10 points 2 weeks ago (5 children)

I've noticed that the people most vocal about wanting to use AI get very coy when you ask them what it should actually do.

[–] ssfckdt@lemmy.blahaj.zone 29 points 2 weeks ago (5 children)

So you're saying we won't have any crowdsourced blockchain Web 2.0 AIs?

[–] razm@sh.itjust.works 13 points 2 weeks ago

Quantum! Don't forget quantum, you filthy peasant.

[–] walter_wiggles@lemmy.nz 9 points 2 weeks ago

Big tech is out of ideas and needs AI to work in order to drive growth.
