rysiek@mstdn.social · 1 year ago

@lloram239 great. ChatGPT and other LLMs demonstrably lack the ability to model the world and make predictions based on such models:
https://www.fastcompany.com/90877523/chatgpt-doesnt-know-what-its-saying

Glad we agree they're not intelligent, then!

lloram239@feddit.de · 1 year ago

The whole argument of the article is just stupid. So ChatGPT ain't intelligent because it can't see pictures, doesn't have hands, and doesn't have a body? By that logic, blind, paralyzed, or amputee humans aren't intelligent either? What the article fails to realize is that those are all just sensory inputs. The more sensory inputs the AI gets, the more cross-correlations between them it can figure out. Of course ChatGPT won't be able to do anything clever with sensory inputs it doesn't have, just like a human can't listen to radio waves with their ears. But human sensory inputs aren't special, they are just what evolution figured out was "good enough" for survival. The important part is that the AI can figure out the patterns in the data it does get, and so far AI systems are doing very well at that.

rysiek@mstdn.social · 1 year ago

@lloram239

> But human sensory inputs aren’t special

It's not about sensory inputs, it's about having a model of the world and the objects in it, and the ability to make predictions based on that model.

> The important part is that the AI can figure out the patterns in the data it does get, and so far AI systems are doing very well at that.

GPT cannot "figure" anything out. That's the point. It only probabilistically generates text. That's all it does: there is no model of the world behind it, no predictions, no "figuring out".

lloram239@feddit.de · 1 year ago

> It's not about sensory inputs, it's about having a model of the world and the objects in it, and the ability to make predictions based on that model.

And how do you think that model gets built? From processing sensory inputs. And yes, language models do build internal models of the world from that.

> GPT cannot "figure" anything out.

That nonsense of a claim doesn't get any truer with repetition. Seriously, it's profoundly idiotic given everything ChatGPT can do.

> It only probabilistically generates text.

So what? In what way does that limit its ability to reason about the world? Predictions about the world are probabilistic by nature, since the future hasn't happened yet.

rysiek@mstdn.social · 1 year ago (edited)

@lloram239 ah, so you're down to throwing epithets like "idiotic" around. Clearly a mark of thoughtful and well-reasoned argument.

> Predictions about the world are probabilistic by nature, since the future hasn’t happened yet.

Thing is: GPT doesn't make predictions about the world, it makes predictions about what the next word, phrase, or sentence should be in a text, based on the prompt and the corpus it was "trained" on.
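
To make concrete what that means mechanically, here is a minimal sketch, assuming the Hugging Face transformers library and the small open GPT-2 checkpoint as a stand-in for ChatGPT's closed model:

```python
# Minimal sketch: a GPT-style model's only output is a probability
# distribution over the next token. (GPT-2 stands in for ChatGPT here.)
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "A dog sits on the porch, a squirrel climbs the"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits   # shape: (1, seq_len, vocab_size)

# Turn the scores for the last position into next-token probabilities
# and show the five most likely continuations.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(i))!r}: {float(p):.3f}")
```

Generation is just sampling from that distribution and appending, over and over.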

lloram239@feddit.de · 1 year ago

I am calling it idiotic because spending just a minute with ChatGPT proves it wrong. Just like the claim that GPT doesn't make predictions about the world:

User: A dog sits on the porch, a squirrel climbs the tree. What happens next.

ChatGPT: Next, the dog notices the squirrel climbing the tree. Its natural instinct to chase small animals is triggered, and it becomes excited by the presence of the squirrel. The dog might start barking or whining, expressing its desire to chase after the squirrel. [...]

It's obviously capable of making predictions about the world, frequently giving very detailed and correct answers, which requires a deep understanding of the world. And yes, that ability to predict and understand the world is limited by how much of the world it can perceive through words alone, but that is no different from our understanding of the world being limited by our perception. Also, as it turns out, there is a surprising amount you can learn about the world from text alone. There are surprisingly few topics expressible in language that GPT doesn't have an answer to (math calculations being one example, due to the digits getting lost in the tokenization step).
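
The tokenization point is easy to check; a minimal sketch, again assuming the Hugging Face transformers library (the GPT-2 tokenizer here is only a stand-in, ChatGPT's own tokenizer differs but also splits digit strings unevenly):

```python
# Minimal sketch: BPE tokenizers carve digit strings into uneven chunks,
# so the model never sees aligned digit columns to do arithmetic on.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

for text in ["12345 + 67890", "9999 * 9999"]:
    print(text, "->", tokenizer.tokenize(text))
```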

If you want to make the argument that GPT isn't intelligent, you have to come up with something better than the same old tired phrases that are trivially debunked by just using it for a minute.

rysiek@mstdn.social · 1 year ago

@lloram239 that's really akin to claiming that a mannequin is a human being because it really, really looks like one.

The "predictions about the world" you refer to here are instead predictions about the text. They are not based on a model of the world, they are based on loads and loads of text the model was trained on.

I don't have to prove ChatGPT is not intelligent. That would be proving a negative. The burden of proof is on those claiming that it is intelligent.

lloram239@feddit.de · 1 year ago (edited)

> that's really akin to claiming that a mannequin is a human being because it really, really looks like one.

For the job of presenting clothes in a shop, it's close enough. The problem domain matters. You can't expect a model that was never trained on a thing to perform well at that thing. Blind people aren't good at drawing pictures either; that doesn't mean they aren't intelligent.

> The "predictions about the world" you refer to here are instead predictions about the text.

Text that describes the world. What do you think the electrical signals zapping around your brain are? Cats and dogs? The "world" is not what intelligence operates on. Your brain gets sensory information and that's it (see any of Donald Hoffman's talks). Just like ChatGPT gets text. All the "intelligence" does is figure out patterns in that data and predict what might come next. More diverse data from different senses of course helps. But as a little playing around with ChatGPT easily shows, quite a lot of our understanding actually does survive being mapped into the domain of language and text.