Each LLM is given the same 1000 chess puzzles to solve. See puzzles.csv. Benchmarked on Mar 25, 2024.

| Model | Solved | Solved % | Illegal Moves | Illegal Moves % | Adjusted Elo |
|---|---:|---:|---:|---:|---:|
| gpt-4-turbo-preview | 229 | 22.9% | 163 | 16.3% | 1144 |
| gpt-4 | 195 | 19.5% | 183 | 18.3% | 1047 |
| claude-3-opus-20240229 | 72 | 7.2% | 464 | 46.4% | 521 |
| claude-3-haiku-20240307 | 38 | 3.8% | 590 | 59.0% | 363 |
| claude-3-sonnet-20240229 | 23 | 2.3% | 663 | 66.3% | 286 |
| gpt-3.5-turbo | 23 | 2.3% | 683 | 68.3% | 269 |
| claude-instant-1.2 | 10 | 1.0% | 707 | 70.7% | 245 |
| mistral-large-latest | 4 | 0.4% | 813 | 81.3% | 149 |
| mixtral-8x7b | 9 | 0.9% | 832 | 83.2% | 136 |
| gemini-1.5-pro-latest* | FAIL | - | - | - | - |

Published by the CEO of Kagi!
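
For anyone wondering what "solved" versus "illegal move" means mechanically: the post doesn't include the harness, but a minimal scoring sketch might look like the following, assuming python-chess, single-best-move puzzles (real puzzles may accept multi-move lines, which this simplifies away), and a puzzles.csv with hypothetical `fen` and `solution` columns. `ask_model` is a stand-in for whatever API call queries the LLM under test.

```python
import csv
import chess

def ask_model(fen: str) -> str:
    """Hypothetical stand-in for querying the LLM under test; returns its SAN reply."""
    raise NotImplementedError("wire up the model API here")

def score_reply(fen: str, solution_san: str, model_reply: str) -> str:
    """Classify a model's reply for one puzzle: solved, wrong, or illegal."""
    board = chess.Board(fen)
    try:
        # parse_san raises a ValueError subclass on illegal or garbled moves
        move = board.parse_san(model_reply.strip())
    except ValueError:
        return "illegal"
    return "solved" if move == board.parse_san(solution_san) else "wrong"

# Tally results over the puzzle set (column names are assumptions).
counts = {"solved": 0, "wrong": 0, "illegal": 0}
with open("puzzles.csv") as f:
    for row in csv.DictReader(f):
        reply = ask_model(row["fen"])
        counts[score_reply(row["fen"], row["solution"], reply)] += 1
print(counts)
```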

[–] Docus@lemmy.world 33 points 7 months ago (1 children)

Of course they are bad at solving problems. The I in LLM stands for intelligence.

(Credit: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/)

[–] AFKBRBChocolate@lemmy.world 23 points 7 months ago (2 children)

People thinking LLMs should be even serviceable at chess don't understand LLMs. They really aren't problem-solving applications. They're optimized for producing responses that look like what a response should look like, not for being accurate. That's really clear if you ask them for mathematical proofs: they will generate proofs that look like the right sort of thing, but they won't be correct unless the specific proof is in their training data.

[–] snaggen@programming.dev 9 points 7 months ago (1 children)

This is obvious to people who understand the basics of LLMs. However, people are fooled by how intelligent these LLMs sound, so they mistake that for actual intelligence. So, even if this is an open door, I still think it's good someone is kicking it in to make it clear that LLMs are not generally intelligent.

[–] AFKBRBChocolate@lemmy.world 1 points 7 months ago

Agreed, it's good to have these kinds of articles so people get a better feel for what tools like this are and aren't.

[–] sudoreboot@slrpnk.net 11 points 7 months ago (2 children)
[–] conciselyverbose@sh.itjust.works 8 points 7 months ago* (last edited 7 months ago) (2 children)

I wonder how many of the ones they "solved" were just cases where they'd seen the puzzle discussed somewhere in the training data, considering the puzzles are apparently from a public resource.

[–] sudoreboot@slrpnk.net 8 points 7 months ago

Yeah, I don't know why anyone knowledgeable would expect them to be good at chess. LLMs don't generalise, reason, or spot patterns, so unless they've read the chess book the problems came from...

[–] Carrolade@lemmy.world 4 points 7 months ago

Likely close to 100%. If you read the (rather good) article, a little further down they test whether the LLM can play an extremely simplistic "Connect 4" game they devise, as a way of narrowing down on specifically reasoning capabilities.

It cannot.

Chess puzzles, in particular, are frequently shared and discussed in online chess spaces, so the LLM will have a significant amount of material to work with when it tries to predict the best response to give to the prompt.

[–] ColeSloth@discuss.tchncs.de 1 points 7 months ago

I didn't figure they would be. I'm sure they could be taught to be much better, but conventional chess engines can already play more or less perfectly. There really isn't much room left to be gained.

[–] BetaDoggo_@lemmy.world 10 points 7 months ago

This has more to do with how much chess data was fed into the model than with any kind of reasoning ability. A 50M-parameter model can learn to play at 1500 Elo with enough training: https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html
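
For context on "chess data fed into the model": the training text in experiments like that linked one is essentially just games serialized as move strings. A rough sketch of producing such a string with python-chess (the exact format Karvonen used may differ):

```python
import io
import chess.pgn

pgn_text = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 *"
game = chess.pgn.read_game(io.StringIO(pgn_text))

# Serialize the mainline as plain movetext, the kind of token
# sequence a small transformer can be trained to continue.
board = game.board()
tokens = []
for i, move in enumerate(game.mainline_moves()):
    if i % 2 == 0:                      # white to move: emit the move number
        tokens.append(f"{i // 2 + 1}.")
    tokens.append(board.san(move))      # SAN must be computed before pushing
    board.push(move)
print(" ".join(tokens))                 # "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6"
```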

[–] nyan@lemmy.cafe 5 points 7 months ago

I'm actually a bit surprised they got any of them right. Maybe the ones they solved correctly had exact matches in their training data . . . ?

[–] Buffalox@lemmy.world 4 points 7 months ago (1 children)

This to me shows that LLMs simply aren't trustworthy. It's one thing that they can't solve a puzzle, fair enough. But that they make illegal moves is kind of alarming.
This is a relatively simple task, so this shows that LLMs aren't trustworthy even for simple tasks.
That said, I still think LLMs are an impressive technology, but I'd be very careful relying on them for anything. The fact that some companies already use them for customer support gives me horror goosebumps.

[–] Cyyy@lemmy.world 2 points 7 months ago (1 children)

I mean.. try using it for even simple stuff like writing code. Often, ChatGPT invents a fantasy library that does the task you ask it to do.. the library doesn't exist, but ChatGPT writes you code using that fantasy library. Same with program functions that don't exist.

The same happens with stuff like people, telephone numbers, locations, books etc.. tons of fantasy stuff.

LLMs aren't trustworthy for such things if you need real info and not just creative help with fantasy stuff. And even for those tasks they're usually not really good enough.
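
One cheap guard against exactly that failure mode: before running generated Python, check that every module it imports actually resolves. A minimal sketch using only the standard library (the fantasy module name is made up for the example):

```python
import ast
import importlib.util

def unresolvable_imports(source: str) -> list[str]:
    """Return top-level module names imported by `source` that can't be found."""
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return [m for m in sorted(modules) if importlib.util.find_spec(m) is None]

generated = "import numpy\nimport totally_made_up_lib\n"
print(unresolvable_imports(generated))  # ['totally_made_up_lib'] (if numpy is installed)
```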

[–] Buffalox@lemmy.world 1 points 7 months ago (1 children)

Thanks, I wasn't aware of that; the "fantasy" stuff was news to me when I read that they make illegal moves on chess puzzles.
There has been a lot of praise for how amazing these LLM models are, but apparently they have some pretty serious limitations that might even make them dangerous to use if you're not aware of them.

[–] Cyyy@lemmy.world 2 points 7 months ago* (last edited 7 months ago) (1 children)

The issue with LLMs is that they were trained on all kinds of data, not just real scientific data but also fantasy (lies, fiction, movie scripts, etc.), and nobody told the LLMs during training what was fantasy and what wasn't. So they only know how to generate text that looks "legit", without really knowing what is true and what isn't. If you ask for a person and their personal details, for example, an LLM can generate real-looking data that is pure fantasy, because it has learned what such data looks like. The same goes for everything else: programming code, book titles, facts, etc. LLMs just generate text in the correct format, text that looks real, without caring whether it is.

[–] Buffalox@lemmy.world 1 points 7 months ago

I'm sure you are right: LLMs aren't intelligent enough to distinguish between fact and fantasy on their own, which IMO is a bit disappointing considering the early reports about ChatGPT, which were overwhelmingly positive. The AI is way more artificial than intelligent. Or, as I saw earlier, the I in LLM stands for intelligence. 😋

[–] General_Effort@lemmy.world 1 points 7 months ago (1 children)

I wonder why gpt-4 is so good at chess.

[–] bionicjoey@lemmy.ca 3 points 7 months ago (1 children)

If I tried to make an illegal move 20% of the time, would you also say I am good at chess?

[–] General_Effort@lemmy.world 2 points 7 months ago (1 children)

Depends on circumstances, obviously.

[–] bionicjoey@lemmy.ca 1 points 7 months ago (1 children)

Okay. What if the circumstance is that I'm just recalling a bunch of chess puzzle solutions I've seen before and regurgitating the one I think is the correct solution for this particular puzzle, without really understanding the rules of chess?

[–] General_Effort@lemmy.world 1 points 7 months ago

That's another thing I'm wondering about, as is everyone else. I'd still want to know why GPT-4 does so much better than the others.