poo_22@lemmygrad.ml 3 points 3 days ago

What do I need to run this? I saw people on Xiaohongshu build an 8-MacBook cluster, presumably networked over Thunderbolt, and I'm thinking that might actually be the most economical way to do it right now.

yogthos@lemmygrad.ml 4 points 3 days ago

It depends on the model size. Here's a guide to getting DeepSeek running locally: https://dev.to/shayy/run-deepseek-locally-on-your-laptop-37hl
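The guide basically boils down to pulling one of the distilled R1 tags with Ollama and querying it. A minimal sketch, assuming the official `ollama` Python client is installed and `ollama serve` is running locally; the model tag and prompt below are just placeholders:

```python
# Minimal sketch: query a locally served DeepSeek-R1 distill via the Ollama Python client.
# Assumes the model tag has already been pulled, e.g. with `ollama pull deepseek-r1:14b`.
import ollama

response = ollama.chat(
    model="deepseek-r1:14b",  # swap in whichever distill fits your hardware
    messages=[{"role": "user", "content": "Explain what a distilled model is in two sentences."}],
)
print(response["message"]["content"])
```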

poo_22@lemmygrad.ml 2 points 2 days ago

According to that page, running the full model takes about 1.4TB of memory, or about 16 A100 GPUs, which is still prohibitively expensive for an individual enthusiast. But yes, you can run a smaller distilled model locally with Ollama, and even that probably needs a GPU with a lot of memory.
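That 1.4TB figure is roughly what you get from weights-only math: parameter count times bytes per parameter at 16-bit precision, ignoring KV cache and activations. A back-of-the-envelope sketch, assuming ~671B parameters for the full R1 and 80GB per A100:

```python
# Weights-only memory estimate at 16-bit precision (2 bytes per parameter).
# Ignores activations, KV cache, and runtime overhead, so real requirements are higher.
def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9  # gigabytes

full_r1 = weights_gb(671)  # full DeepSeek-R1 is ~671B parameters
print(f"full R1 weights: ~{full_r1:.0f} GB (~{full_r1 / 80:.1f} A100-80GB cards)")
# -> ~1342 GB, i.e. on the order of 1.4TB and roughly 16 A100s
```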

yogthos@lemmygrad.ml 2 points 2 days ago

I got deepseek-r1:14b-qwen-distill-fp16 running locally with 32GB of RAM and a GPU, but yeah, you do need a fairly beefy machine to run even medium-sized models.
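The same weights-only math suggests why 32GB is about the floor for that tag: 14B parameters at fp16 is roughly 28GB before any runtime overhead.

```python
# Weights-only estimate for the 14B distill at fp16 (2 bytes per parameter),
# ignoring KV cache and runtime overhead.
print(f"14B fp16 weights: ~{14e9 * 2 / 1e9:.0f} GB")  # ~28 GB, so 32GB is about the minimum
```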