this post was submitted on 16 Mar 2024
52 points (90.6% liked)
Technology
The MSN article links a press release, which links the paper: https://dl.acm.org/doi/10.1145/3613424.3614285
The idea seems to be that if your computer has several kinds of hardware accelerators, there is a systematic way to use them all simultaneously. I only read the beginning, but it's hard to see a big breakthrough.
Yeah, they're basically talking about a virtual machine (they call it a "framework", but it sounds more in line with the JVM or the .NET CLR) that can automatically offload work to GPUs, tensor cores, and the like.
Programs would basically compile to a kind of bytecode that describes what calculations need to be done and the data dependencies between them. It's then up to the runtime to decide how to schedule that on the available hardware, compensating for differences such as supported floating-point precision.
They're quoting roughly 4x speedups, but their benchmarks are all things that already have efficient GPGPU implementations (mainly signal processing and computer vision), so the computation is already highly parallelized.
I could see this being useful for that kind of Big Data processing where you have a ton of stuff to churn through and want to get the most bang for your buck on cloud hardware. It helps that that kind of processing is already coded at a pretty high level, where you just lay out the operations you want done and the system handles the rest. This runtime would be a great target for that.
I don't really see this revolutionizing computing on a grand scale, though. Most day to day workloads (consumer software and even most business applications like web servers) are not CPU bound anyway. I guess memory bound workloads could benefit from being offloaded to a GPU but those aren't all that common either.
Also, it's not even a novel idea.
OpenMP has been around for over 20 years; it's just a lot less "magical".
macOS has been doing something similar for years with what Apple calls Grand Central Dispatch.