this post was submitted on 24 Jan 2025
199 points (98.5% liked)
technology
An incredible outcome would be if the US stock market bubble pops because Chinese-developed open-source AI that can run locally on your phone ends up being about as good as Silicon Valley's stuff.
I think the bubble might not pop so easily. Even if Microsoft is set back dramatically by this, investors have nowhere else to go. The whole industry is in turmoil, and since there's nothing else to invest in, stocks stay high.
At least that's how I explain the ludicrously high stock prices we've seen in recent years.
LLMs that run locally are already a thing, and I'd wager that one of those smaller models can do 99% of what anyone would want.
What does it mean for an LLM to run locally? Where's all the data with the 'answers' stored?
Imagine that an idea is a point on a graph: ideas that are similar sit close to each other, and ideas that are very different sit far apart. An LLM is a predictive model for this graph, just like a line of best fit is a predictive model for a simple linear graph. So in a way, the model predicts the information; it's not stored directly or searched for.
A locally running LLM is just one of these models shrunk down and executing on your computer.
Edit: removed a point about embeddings that wasn't fully accurate
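The "points on a graph" analogy can be sketched with plain vectors. This is only a toy illustration (the vectors and dimensions are made up, and real models use hundreds or thousands of dimensions), using cosine similarity as the measure of how "close" two ideas are:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means 'pointing the same way'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "idea points" -- purely illustrative values.
cat = [0.9, 0.8, 0.1]
dog = [0.8, 0.9, 0.2]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, dog))  # similar ideas: close to 1.0
print(cosine_similarity(cat, car))  # different ideas: much smaller
```

The point is just that "similar" and "different" become measurable distances once ideas are mapped to points; the model itself is far more complicated than this.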
Thanks. That helps me understand things better. I'm guessing you need all the data initially to set up the graph (model). Then you only need that?
Yep, exactly. Every LLM has a 'cutoff date', which is the last day the data used to train the model was updated.
How big are the files for the finished model, do you know?
That's a great question! The models come in different sizes: one large 'foundational' model is trained, and that is used to train smaller models. US companies generally do not release their foundational models (I think), but Meta, Microsoft, DeepSeek, and a few others release smaller ones on ollama.com. A rule of thumb is that 1 billion parameters takes about 1 gigabyte. The foundational models have hundreds of billions, if not trillions, of parameters, but you can get a good model with 7-8 billion parameters, small enough to run on a gaming GPU.
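The "1 billion parameters is about 1 gigabyte" rule of thumb assumes roughly one byte per parameter, i.e. 8-bit quantization; at 16-bit precision the size doubles. A quick back-of-the-envelope sketch:

```python
def model_size_gb(params_billions, bytes_per_param):
    """Approximate file size: number of parameters times bytes per parameter, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model quantized to 8 bits (1 byte) per parameter:
print(model_size_gb(7, 1.0))  # about 7 GB
# The same model at 16-bit (fp16) precision:
print(model_size_gb(7, 2.0))  # about 14 GB
```

Actual downloads vary a bit with the quantization scheme and file format, but this is why a 7-8B model fits comfortably on a gaming GPU while a trillion-parameter foundational model does not.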
Thanks!
In the weights.