[–] u_die_for_elmer@lemm.ee 21 points 14 hours ago (2 children)

Just download the model. Problem solved.
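
For anyone wondering what "just download the model" looks like in practice, here's a minimal sketch using `huggingface_hub`. The repo id below is one of the published distilled checkpoints; it's an assumption here, so verify it on the Hub before relying on it:

```python
# A sketch only: pulls one of the distilled DeepSeek-R1 checkpoints
# from Hugging Face. The repo id is an assumption -- check the
# deepseek-ai listings on the Hub for the one you actually want.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")
print(f"model files downloaded to: {local_dir}")
```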

[–] yogthos@lemmy.ml 19 points 13 hours ago (1 children)

What they're actually in a panic over is companies using a Chinese service instead of US ones. The threat here is that DeepSeek becomes the standard that everyone uses, and it would become entrenched. At that point nobody would want to switch to US services.

[–] u_die_for_elmer@lemm.ee 10 points 12 hours ago (1 children)

https://securityconversations.com/episode/inside-the-deepseek-ai-existential-crisis-chinese-backdoor-in-medical-devices/ If you ignore the latent anti-China crap, this is a pretty good analysis from a technical perspective. When someone does something faster and cheaper, we used to call that progress. Not if China does it, I guess, and not if it's open source, even though Meta did the same thing with Llama.

[–] yogthos@lemmy.ml 9 points 12 hours ago

Exactly, and these kinds of policies will only ensure that the West falls behind technologically. Stifling innovation to prop up monopolies will not be a winning strategy in the long run.

[–] Corngood@lemmy.ml 3 points 10 hours ago* (last edited 10 hours ago) (2 children)

I keep seeing this sentiment, but in order to run the model on a high end consumer GPU, doesn't it have to be reduced to like 1-2% of the size of the official one?

Edit: I just did a tiny bit of reading and I guess model size is a lot more complicated than I thought. I don't have a good sense of how much it's being reduced in quality to run locally.
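
For rough scale, here's a back-of-envelope sketch. It's assumption-laden: it counts only the weights and ignores KV cache, activations, and runtime overhead, so real memory needs are higher:

```python
# Back-of-envelope only: sizes the weights alone, ignoring KV cache,
# activations, and runtime overhead.
def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate size of the model weights, in gigabytes."""
    return params_billions * bits_per_weight / 8

full_r1 = weights_gb(671, 8)    # full DeepSeek-R1 at FP8: ~671 GB
distill = weights_gb(7, 8)      # a 7B distill at FP8: ~7 GB
quantized = weights_gb(7, 4)    # same distill 4-bit quantized: ~3.5 GB

print(f"7B distill is ~{distill / full_r1:.1%} of the full model's weights")
```

So a 7B distill really is around 1% of the full 671B model's weight footprint, which lines up with the 1-2% figure above, and quantization shrinks it further still.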

[–] azron@lemmy.ml 1 points 4 hours ago

You're on the right track still. All these people touting it as an open model likely haven't even tried to run it locally themselves. The hosted version is not the same as what you can easily run locally.

[–] skuzz@discuss.tchncs.de 2 points 8 hours ago

Just think of it this way. Fewer digital neurons in smaller models mean a smaller "brain". It will be less accurate, more vague, and make more mistakes.