I think power usage is still a legitimate concern, but on top of that the unreliability is a huge factor. LLMs hallucinate constantly, by design, so if you use them for anything where correctness matters you're bound to fail. Some hallucination is unavoidable unless we overfit the model, and a model that's overfit to the point of never hallucinating just reproduces its training data, which makes it no more useful than a search engine with vastly higher energy requirements. These things have applications, but really only for approximating problems that no other approach handles well. IMO any other use case is a mistake.
People are actively working on different approaches to address reliability. One that I like in particular is the neurosymbolic type of model, where deep neural networks are used to classify data and find patterns, and a symbolic logic engine is used to do the actual reasoning. This basically gives you the best of both worlds. https://arxiv.org/abs/2305.00813
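To make the idea concrete, here's a rough Python sketch of that split (not the paper's actual system; `perceive`, the confidence threshold, and the rules are all made up for illustration): a stand-in for a neural stage turns raw input into symbolic facts with confidences, and a deterministic forward-chaining engine draws conclusions from the facts it trusts.

```python
# Minimal neurosymbolic sketch: a "perception" stage (stand-in for a trained
# neural classifier) emits symbolic facts with confidences, and a tiny
# forward-chaining rule engine reasons over the facts it trusts.
# All names here (perceive, Rule, the threshold) are illustrative only.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # only reason over facts the network is sure about


def perceive(image_id: str) -> dict[str, float]:
    """Stand-in for a deep network: map raw input to predicate confidences.

    In a real system this would be e.g. a CNN or transformer head; here it
    returns canned predictions so the example runs without any ML stack.
    """
    fake_predictions = {
        "img1": {"has_wheels(img1)": 0.97, "has_engine(img1)": 0.91,
                 "has_wings(img1)": 0.12},
        "img2": {"has_wheels(img2)": 0.95, "has_engine(img2)": 0.93,
                 "has_wings(img2)": 0.96},
    }
    return fake_predictions[image_id]


@dataclass
class Rule:
    """Horn-style rule: if all premises hold, the conclusion holds."""
    premises: list[str]
    conclusion: str


def forward_chain(facts: set[str], rules: list[Rule]) -> set[str]:
    """Symbolic stage: repeatedly apply rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if all(p in derived for p in rule.premises) and rule.conclusion not in derived:
                derived.add(rule.conclusion)
                changed = True
    return derived


if __name__ == "__main__":
    for img in ("img1", "img2"):
        # Neural stage: keep only high-confidence predictions as facts.
        facts = {pred for pred, conf in perceive(img).items()
                 if conf >= CONFIDENCE_THRESHOLD}
        rules = [
            Rule([f"has_wheels({img})", f"has_engine({img})"], f"vehicle({img})"),
            Rule([f"vehicle({img})", f"has_wings({img})"], f"aircraft({img})"),
        ]
        print(img, "->", sorted(forward_chain(facts, rules)))
```

The point of the split is that the symbolic half is deterministic and auditable: if `aircraft(img2)` gets derived, you can trace exactly which facts and rules produced it, which is the reliability property a pure LLM lacks.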