this post was submitted on 24 May 2024
734 points (100.0% liked)

196

[–] trollbearpig@lemmy.world 15 points 6 months ago* (last edited 6 months ago) (1 children)

Sorry, but no man. Or rather, what evidence do you have that LLMs are anything like a human brain? Just because we call them neural networks doesn't mean they are networks of neurons ... You are falling into the same fallacy as the people who argue that the Nazis were socialists, or as if someone claimed that North Korea is a democratic country.

Perceptrons are not neurons. Activation functions are not the same as the action potential of real neurons. LLMs don't have anything resembling neuroplasticity. And it shows: the only way to have a conversation with an LLM is to provide it the full conversation as context, because the things don't have anything resembling memory.
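To make the "no memory" point concrete, here's a minimal sketch (the `fake_llm` function is a stand-in, not any real model API): every chat wrapper has to keep the transcript on the client side and resend the whole thing each turn, because the model call itself is stateless.

```python
def fake_llm(prompt: str) -> str:
    # A real model would generate text conditioned on `prompt`;
    # this stub just reports how much context it was handed.
    return f"(reply conditioned on {len(prompt)} chars of context)"

history = []  # the *client* keeps the memory, not the model

def chat(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    prompt = "\n".join(history)   # full transcript, every single turn
    reply = fake_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

chat("Hello")
chat("What did I just say?")  # only answerable because turn 1 was resent
```

Drop the `history` list and the model has no idea what you "just said" — the memory lives entirely outside it.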

As I said in another comment, you can always say "you can't prove LLMs don't think". And sure, I can't prove a negative. But come on man, you are the ones making wild claims like "LLMs are just like brains", you are the ones that need to provide proof of such wild claims. And the fact that this is complex technology is not an argument.

[–] Deebster@lemmy.ml 3 points 6 months ago (1 children)

Hmm, I think they're close enough to be able to say a neural network is modelled on how a brain works - it's not the same, but then you reach the other side of the semantics coin (like the "can a submarine swim" question).
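For what it's worth, the whole "neuron" in an artificial neural network is just a weighted sum pushed through an activation function — a rough sketch (sigmoid chosen arbitrarily here) of where the "modelled on, but not the same as" line falls, since real neurons fire discrete spikes rather than emitting a smooth value:

```python
import math

def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed by the activation function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)   # continuous output, not an action potential

out = artificial_neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```

That's the entire biological analogy: a dot product and a squashing function.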

The plasticity part is an interesting point, and I'd need to research that to respond properly. I don't know, for example, if they freeze the model because otherwise input would ruin it (internet teaching them to be sweaty racists, for example), or because it's so expensive/slow to train, or high error rates, or it's impossible, etc.

When talking to laymen I've explained LLMs as glorified text autocomplete, but there's some discussion on the boundary of science and philosophy asking whether intelligence is a side effect of being able to predict better.
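The "glorified autocomplete" framing can be illustrated with a toy bigram model — predict the next word as the one most often seen after the current word. LLMs are incomparably more sophisticated, but the task has the same shape (this corpus is made up for the example):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Greedy "autocomplete": pick the most frequent successor.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — seen twice after "the", vs "mat" once
```

Whether scaling that predict-the-next-token objective up by twelve orders of magnitude produces something deserving the word "intelligence" is exactly the semantics-coin question.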

[–] trollbearpig@lemmy.world 8 points 6 months ago

Nah man, they don't freeze the model because they think we will ruin it with our racism hahaha, that's just their PR bullshit. They freeze them because they don't know how to make the thing learn in real time like a human. We only know how to use backpropagation to train them. And this is expected, we haven't solved the hard problem of the mind no matter what these companies say.
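The train-then-freeze split looks roughly like this sketch — a single-weight "network" learning y = 2x by gradient descent (backprop on one weight is just the chain rule). All the learning happens in the offline loop; at inference time the weight never changes, which is why nothing from a chat session persists:

```python
w = 0.0    # single weight
lr = 0.1   # learning rate

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

# --- training phase: repeated gradient steps on squared error ---
for _ in range(100):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad

# --- inference phase: w is frozen; using the model updates nothing ---
print(round(w, 3))   # ~2.0
```

Online learning — updating `w` safely and continually from live interaction — is the part nobody has cracked at LLM scale, and it's nothing like neuroplasticity.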

Don't get me wrong, backpropagation is an amazing algorithm and the results for autocomplete are honestly better than I expected (though remember that a lot of this is just underpaid workers in Africa who pick good training data). But our current understanding of how humans learn points to neuroplasticity as the main mechanism. And then here come all these AI grifters/companies saying that somehow backpropagation produces the same results. And I haven't seen a single decent argument for this.