this post was submitted on 13 Apr 2024
492 points (96.4% liked)

Technology

[–] sugar_in_your_tea@sh.itjust.works 1 points 7 months ago (1 children)

Yes, and the result from that video (I assume; I skimmed it, but I've watched similar videos) is that the difference is negligible (around 1-10 FPS), so you're usually better off spending that money on something else.

I look at the benchmarks between the Intel MacBook Pro and the M1 MacBook Pro: both use soldered RAM, yet the M1 performs far better, even on non-GPU tasks (e.g. memory-heavy unit tests at work went from 3-5 min to 45-50 sec moving from the latest Intel model to the M1). Docker build times saw a similar drop. But it's hard for me to separate the memory changes from the CPU changes. I'd have to check, but I'm guessing there's also the DDR4-to-DDR5 switch, which increases the number of memory channels.

The claim is that proximity to the CPU explains it, but I have trouble quantifying that. For me, a 1-10 FPS difference isn't enough to give up repairability and expandability. Maybe it is for others, but if that's the actual difference, it's a lot smaller than the claims they seem to make.
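One crude way to put a number on the memory side across two machines (purely a sketch; the function name and sizes here are made up, and a real tool like STREAM measures this properly) is to time large buffer copies with nothing but the standard library:

```python
import time

def estimate_copy_bandwidth(size_mb=256, rounds=5):
    """Rough memory-bandwidth probe: time full copies of a large buffer.

    This is noisy -- caches, the allocator, and the interpreter all
    interfere -- but run on two laptops it gives a ballpark comparison.
    Returns an effective copy rate in MB/s.
    """
    buf = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(rounds):
        start = time.perf_counter()
        copy = bytes(buf)  # one full read pass + one full write pass
        best = min(best, time.perf_counter() - start)
        del copy
    # Bytes moved per pass = size read + size written.
    return (2 * size_mb) / best

if __name__ == "__main__":
    print(f"~{estimate_copy_bandwidth():.0f} MB/s effective copy bandwidth")
```

Comparing that figure against the same machine's CPU-bound scores would at least hint at whether memory or core design dominates a given workload.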

[–] Turun@feddit.de 1 points 7 months ago* (last edited 7 months ago) (1 children)

The video has a short section on productivity (i.e. rendering or compiling). That part is probably the most relevant for most people. Check the chapter view in YouTube to jump directly to it.

I think a 2x performance improvement is plausible when comparing non-soldered RAM to Apple silicon, which goes even further and has the memory on the die itself. That is, of course, only if RAM is the limiting factor.

The advantages of upgradable, expandable RAM are obvious. But let's face it: most people don't need that capability, and even fewer use it.

> short section on productivity

Looks about the same as the rest: big gains for HandBrake, pretty much nothing for anything else. That makes sense, because HandBrake does lots of round trips to the GPU for encoding.

> has the memory on the die itself

On the package, not the die, but perhaps that's what you meant. On-die would be closer to a massive cache, like on AMD's X3D chips.

The performance improvement seems to come from Apple shipping a massive iGPU, not from the RAM sitting next to the CPU. So in CPU-only benchmarks, I'd expect the lion's share of the difference to come from CPU design and process node, not the memory.

Also, unified memory isn't particularly new; APUs have supported it for years. It's just not well utilized by devs because most users have dGPUs. So I think the main innovation here is Apple committing to it and providing tooling for devs to make better use of unified memory, as console manufacturers have done.

So I guess that brings a few more questions:

  • what performance improvements could we see if devs used unified memory with socketed LPDDR in laptops?
  • how would that compare to Apple's on-package RAM (I think that's also LPDDR, so it'd be closer to an apples-to-apples comparison)?
  • how likely are AMD and Intel to push for massive APUs in laptops?

I guess we're kind of seeing it with PC gaming handhelds like the Steam Deck and Ayaneo et al., so maybe that'll become more mainstream.