this post was submitted on 07 Sep 2024
130 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] gerikson@awful.systems 47 points 2 months ago (2 children)

As for the prospect that AI will enable business users to do more with fewer humans in the office or on the factory floor, the technology generates such frequent errors that users may need to add workers just to double-check the bots’ output.

And here we were worrying about being out of work...

[–] N0body@lemmy.dbzer0.com 22 points 2 months ago (1 children)

Tediously fixing botslop all day is another kind of hellscape, but at least we won’t be homeless.

[–] zogwarg@awful.systems 14 points 2 months ago (1 children)

More tedious work with worse pay \o/

[–] trolololol@lemmy.world 4 points 2 months ago

Shitty idea: fire the workers anyway and gamify the product so the customer does quality control while earning fake money. If we're feeling generous we may even award NFT monkeys to the hardest-working customers.

[–] mountainriver@awful.systems 17 points 2 months ago* (last edited 2 months ago) (3 children)

I have so far seen two working AI applications that actually make sense, both in a hospital setting:

  1. Assisting oncologists in reading cancer images. It's still the oncologists who make the call, but it seems to be of use to them.
  2. Creating a first draft when transcribing dictated notes. Listening and correcting is apparently faster for most people than listening and writing from scratch.

These two are nifty, but they don't make a multi-billion-dollar industry.

In other words, the bubble is bursting and the value/waste ratio looks extremely low.

Say what you want about the Tulip bubble, but at least tulips are pretty.

[–] dgerard@awful.systems 17 points 2 months ago (1 children)

This is why you should never allow the use of the marketing term "AI", and instead always refer to the specific technologies.

The use case for the term "AI" is to conflate things that work (ML) with things that don't work (LLMs).

[–] mountainriver@awful.systems 4 points 2 months ago (1 children)

OK, fair point on language.

But I thought LLMs were machine learning, or rather a particular application of it? Have I misunderstood that? Isn't it all black-boxed matrices of statistical connections?

[–] dgerard@awful.systems 2 points 2 months ago

they're related in that sense, but what they learn is which token to generate next.
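
As a toy sketch of what "learn which token to generate next" means (a bigram count table standing in for the actual transformer and training run, so purely illustrative, not how any real LLM is built):

```python
# Toy illustration only: a bigram "language model" learned from counts.
# Real LLMs train a transformer on enormous corpora, but the objective
# is the same idea: predict the next token given what came before.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count which token follows which.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=6):
    """Sample a continuation by repeatedly picking a likely next token."""
    out = [start]
    for _ in range(length):
        options = next_counts.get(out[-1])
        if not options:
            break
        tokens, weights = zip(*options.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

Scale that same objective up to billions of parameters and terabytes of text and you get an LLM; the training target never stops being next-token prediction.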

[–] luciole@beehaw.org 15 points 2 months ago* (last edited 2 months ago)

I'd be very wary about the first use case. I read a few days ago that automation bias will seriously fuck with a radiologist's ability to make the correct decision. Relevant bit:

When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.

[–] froztbyte@awful.systems 10 points 2 months ago

this is something I've been mulling over for a while now too. there are lots of little boring ways in which some of the ML stuff definitely does work, but none of them are in the shape of anything the hypemen have been shouting. and afaict none of them will be able to justify all the investment either (and only some will be able to justify the compute, even then)

a couple months back I speculated in one of the threads here that one of the reasons there's such a hard push to get the llms and shit into as much as possible now is that it'll be harder to remove after the air starts going out - and thus buy more time/runway/rent-extraction

[–] 200fifty@awful.systems 11 points 2 months ago* (last edited 2 months ago)

The bill mandates safety testing of advanced AI models and the imposition of “guardrails” to ensure they can’t slip out of the control of their developers or users and can’t be employed to create “biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” It’s been endorsed by some AI developers but condemned by others who assert that its constraints will drive AI developers out of California.

Man, if I can't even build homemade nuclear weapons, what CAN I do? That's it, I'm moving to Nevada!