this post was submitted on 29 Jan 2025
938 points (98.6% liked)
Technology
DeepSeek’s specific trained model is immaterial—they could take it down tomorrow and never provide access again, and the damage to OpenAI’s business would already be done.
DeepSeek’s model is just a proof of concept: the point is that any organization with a few million dollars and some (hopefully less problematic) training data can now build its own model competitive with OpenAI’s.
DeepSeek can't take down the model; it's already been published and is mostly open source. Open-source LLMs are the way, fuck ClosedAI.
Right—by “take it down” I just meant take down online access to their own running instance of it.
I suspect that most usage of the model is going to be companies and individuals running their own instance of it. They have some smaller distilled models based on Llama and Qwen that can run on consumer-grade hardware.
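For anyone curious what "running your own instance" of one of those distilled checkpoints looks like, here's a minimal sketch using the Hugging Face transformers library. The model id points at one of the published distilled-from-Qwen variants; the prompt and generation settings are just illustrative, not DeepSeek's own setup.

```python
# Rough sketch: load a distilled DeepSeek-R1 checkpoint locally with transformers.
# Model id and settings are examples, not an official recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # one of the distilled variants

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~15 GB of VRAM at 7B; quantize for smaller GPUs
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize why open-weight models matter."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The smallest distills plus a quantized build (e.g. via llama.cpp or Ollama) are what make the "consumer-grade hardware" part realistic.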
... assuming DeepSeek is telling the truth, something they have plenty of incentive to lie about.
Imagine if even a little of the many millions that so many companies are willing to throw away on this shit AI bubble were actually directed at something useful.