this post was submitted on 31 Jan 2025
330 points (94.8% liked)

Open Source


Article: https://proton.me/blog/deepseek

Calls it "Deepsneak", failing to make it clear that the reason people love DeepSeek is that you can download it and run it securely on any of your own private devices or servers - unlike most of the competing SOTA AIs.

I can't speak for Proton, but the last couple of weeks have shown some very clear biases coming out.

[–] tekato@lemmy.world 15 points 1 day ago (2 children)

You can run an imitation of the DeepSeek R1 model, but not the actual one unless you literally buy a dozen of whatever NVIDIA’s top GPU is at the moment.
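For scale, a rough back-of-envelope supports the "dozen GPUs" figure. This sketch assumes the published ~671B parameter count for the full R1 model, 8-bit (1 byte per weight) quantization, ~20% headroom for KV cache and activations, and 80 GB of VRAM per GPU (H100-class); all of those numbers are assumptions, not measurements:

```python
# Rough estimate of how many 80 GB GPUs are needed to hold the full
# DeepSeek R1 weights in VRAM. All inputs are hedged assumptions.
import math

params = 671e9          # assumed total parameter count for full R1
bytes_per_param = 1     # assumed 8-bit (FP8/INT8) weights
overhead = 1.2          # assumed ~20% headroom for KV cache / activations
gpu_vram = 80e9         # bytes of VRAM per H100-class GPU

total_bytes = params * bytes_per_param * overhead
gpus_needed = math.ceil(total_bytes / gpu_vram)
print(gpus_needed)      # -> 11, i.e. roughly a dozen 80 GB GPUs
```

The smaller "R1" models most people run locally are distilled variants (fine-tunes of Qwen/Llama bases), which is why they fit on a single consumer GPU.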

[–] lily33@lemm.ee 8 points 1 day ago

A server-grade CPU with a lot of RAM and memory bandwidth would work reasonably well, and costs "only" ~$10k rather than $100k+...
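The reason a CPU box is viable at all is that inference at batch size 1 is memory-bandwidth bound, and R1 is a mixture-of-experts model that only touches a fraction of its weights per token. A sketch of the ceiling, assuming ~37B active parameters per token at 8-bit and ~400 GB/s of memory bandwidth (12-channel DDR5 territory); both figures are assumptions:

```python
# Back-of-envelope token-rate ceiling for CPU inference, which is
# memory-bandwidth bound: every token must stream the active weights
# through RAM once. Inputs are hedged assumptions.
active_params = 37e9    # assumed active (MoE) parameters per token
bytes_per_param = 1     # assumed 8-bit weights
bandwidth = 400e9       # assumed bytes/s of system memory bandwidth

bytes_per_token = active_params * bytes_per_param
tokens_per_sec = bandwidth / bytes_per_token
print(round(tokens_per_sec, 1))  # -> 10.8 tokens/s upper bound
```

Real throughput lands below this ceiling once compute, KV-cache reads, and NUMA effects are counted, but it shows why high memory bandwidth matters more than core count here.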

[–] alcoholicorn@lemmy.ml 1 points 23 hours ago (1 children)

I saw posts about people running it well enough for testing purposes off an NVMe drive.

[–] Aria@lemmygrad.ml 1 points 12 hours ago (1 children)
[–] alcoholicorn@lemmy.ml 1 points 12 hours ago (2 children)
[–] Aria@lemmygrad.ml 2 points 11 hours ago

That's cool! I'm really curious how many tokens per second you can get with a really good U.2 drive. My gut says it won't actually beat the 24 GB VRAM + 96 GB RAM cache setup this user already tested, though.
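The gut feeling follows from the same bandwidth-bound reasoning: tokens/s scales with the bandwidth of wherever the weights live. A sketch comparing the two tiers, assuming ~37B active parameters at 8-bit per token, ~7 GB/s for a fast U.2/PCIe 4.0 NVMe, and ~100 GB/s for system RAM; all three numbers are assumptions:

```python
# Token-rate ceilings when streaming MoE weights from different storage
# tiers. Throughput is bounded by (tier bandwidth / bytes read per token).
# All inputs are hedged assumptions.
active_bytes = 37e9  # assumed bytes of active weights read per token

tiers = [
    ("U.2 NVMe (~7 GB/s)", 7e9),
    ("System RAM (~100 GB/s)", 100e9),
]
for name, bw in tiers:
    print(f"{name}: ~{bw / active_bytes:.2f} tokens/s ceiling")
# NVMe comes out around 0.19 tokens/s vs ~2.7 tokens/s for RAM,
# so a RAM (or VRAM+RAM) cache should win by an order of magnitude.
```

This ignores caching effects (hot experts staying resident in RAM would raise the NVMe number), which is exactly what makes the real-world measurement interesting.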

[–] Aria@lemmygrad.ml 2 points 12 hours ago