this post was submitted on 06 May 2024
346 points (95.1% liked)

Technology — lemmy.world

AI’s voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Holdings Plc Chief Executive Officer Rene Haas.

[–] crispyflagstones@sh.itjust.works 1 points 6 months ago (1 children)

There's an entire resurgence of research into alternative computing architectures right now, led by some of the biggest names in computing, because of the limits the von Neumann architecture has hit with ML workloads. I don't see any reason to assume all of that research is guaranteed to fail.

[–] AlotOfReading@lemmy.world 2 points 6 months ago (1 children)

I'm not assuming it's going to fail; I'm just saying that the exponential gains seen in early computing will be much harder to come by, because we're not starting from the same grossly inefficient place.

As an FYI, most modern computers are modified Harvard architectures, not von Neumann machines. There are other architectures being explored that are even more exotic, but I'm not aware of any that are massively better on the power side (vs simply being faster). The acceleration approaches I'm aware of that are more power-efficient (e.g. analog or optical accelerators) are also totally compatible with traditional Harvard/von Neumann architectures.
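To make the power point concrete, here's a rough back-of-the-envelope sketch in Python. The per-operation energy figures are ballpark numbers in the spirit of widely cited ~45 nm estimates (e.g. Horowitz's 2014 ISSCC talk), not measurements, and the naive no-reuse matmul is a deliberately simplified model; the point is only to illustrate why data movement, not arithmetic, dominates the energy budget of ML workloads on conventional architectures:

```python
# Ballpark per-operation energy costs (order-of-magnitude, ~45 nm class).
# Illustrative assumptions, not measured values.
ENERGY_PJ = {
    "fp32_mul": 3.7,    # 32-bit floating-point multiply
    "sram_32b": 5.0,    # 32-bit read from small on-chip SRAM
    "dram_32b": 640.0,  # 32-bit read from off-chip DRAM
}

def matmul_energy_pj(n: int, operand_source: str) -> float:
    """Energy (pJ) for a naive n x n matrix multiply:
    n^3 multiplies plus 2*n^3 operand reads from the given
    memory level, assuming no operand reuse."""
    multiplies = n ** 3
    reads = 2 * n ** 3
    return (multiplies * ENERGY_PJ["fp32_mul"]
            + reads * ENERGY_PJ[operand_source])

n = 1024
from_dram = matmul_energy_pj(n, "dram_32b")  # every operand from DRAM
from_sram = matmul_energy_pj(n, "sram_32b")  # perfect on-chip reuse

print(f"naive from DRAM:    {from_dram / 1e12:.3f} J")
print(f"with on-chip reuse: {from_sram / 1e12:.3f} J")
print(f"DRAM-bound version costs ~{from_dram / from_sram:.0f}x more energy")
```

Under these assumptions the DRAM-bound version burns roughly two orders of magnitude more energy than the same arithmetic fed from on-chip memory, which is why accelerators (conventional, analog, or optical alike) attack data movement first rather than raw FLOPs.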

[–] crispyflagstones@sh.itjust.works 1 points 6 months ago

I didn't mean to suggest, by comparing it to ENIAC, that the exponential gains would be identical; but we are currently in a period of exponential gains in AI, and it's not exactly slowing down. It just seems unthoughtful, and not very critical, to judge the overall efficiency of a technology by its very earliest iterations when the field it's based on is moving as fast as AI is.