That doesn't prove that GPT is reasoning; its model predicts that those responses are the most likely given the messages you're sending it. It's read thousands of actual conversations in which someone states something incorrect, has it explained to them, and then comes around and admits they were wrong.
I've seen other similar cases where the AI is wrong about something, and when the error is explained, it just doubles down. Humans do that type of thing too, refusing to admit they're wrong.
The way it's designed means that it cannot reason the way humans do. It can only simulate a likely conversation someone would have if they could reason.
You know, I also had to experience thousands of conversations before I could mimic a conversation. Maybe I'm just slow, but it took like a full year of dedicated practice before I could speak a single word. Eventually I experienced so many conversations and became so good at mimicking the conversations I had heard from others that I started having actual conversations.
Most anecdotes about AI getting things wrong are from GPT3. GPT4 still has flaws of course, just like people. This technology didn't exist at the beginning of the year.
Here's an example from when I was learning linear algebra, where I tried to throw it off:
I say> So I'll say A_{M x N} to mean a matrix of size M x N. Similarly, I'll also say x_{M x 1} to mean a vector of a certain length. The result of A_{M x N} x_{N x 1} is a linear combination of the columns in A. A has N columns. The result is a M x 1 matrix (or vector) in M dimensional space. Please be very brief, only confirming whether I am correct or incorrect until I explicitly ask for more details.
GPT4 says> Yes, your understanding is correct. The product of an M x N matrix A and an N x 1 vector x is indeed an M x 1 vector, and it is a linear combination of the columns of A.
But then I try to trick it, just for fun, and I very subtly change my prompt. See if you can spot the difference.
I say> So I'll say A_{M x N} to mean a matrix of size M x N. Similarly, I'll also say x_{M x 1} to mean a vector of a certain length. The result of A_{M x N} x_{N x 1} is a linear combination of the columns in A. A has N columns. The result is a N x 1 matrix (or vector) in N dimensional space. Please be very brief, only confirming whether I am correct or incorrect until I explicitly ask for more details.
GPT4 says> Almost correct. The result of A_{M x N} x_{N x 1} is actually an M x 1 vector (or matrix), not N x 1. The resulting vector lives in the column space of A, which is a subspace of R^M, not R^N.
I guess everyone can judge for themselves whether that's the result of a statistical model or genuine understanding. (And to be clear, the mathematical advice it's giving here is correct.)
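If anyone wants to verify the math itself rather than GPT4's grading of it, here's a quick numpy sketch of the shape claim (the matrix sizes and the seed are arbitrary picks of mine, not anything from the conversation):

```python
import numpy as np

# Arbitrary sizes for illustration: A is M x N, x is N x 1.
M, N = 3, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((M, N))   # A_{M x N}
x = rng.standard_normal((N, 1))   # x_{N x 1}

result = A @ x
print(result.shape)               # (3, 1) -> M x 1, a vector in R^M

# The same result, built explicitly as a linear combination of A's columns,
# weighted by the entries of x:
combo = sum(x[i, 0] * A[:, [i]] for i in range(N))
print(np.allclose(result, combo)) # True
```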