Each LLM is given the same 1000 chess puzzles to solve. See puzzles.csv. Benchmarked on Mar 25, 2024.
| Model | Solved | Solved % | Illegal Moves | Illegal Moves % | Adjusted Elo |
|---|---|---|---|---|---|
| gpt-4-turbo-preview | 229 | 22.9% | 163 | 16.3% | 1144 |
| gpt-4 | 195 | 19.5% | 183 | 18.3% | 1047 |
| claude-3-opus-20240229 | 72 | 7.2% | 464 | 46.4% | 521 |
| claude-3-haiku-20240307 | 38 | 3.8% | 590 | 59.0% | 363 |
| claude-3-sonnet-20240229 | 23 | 2.3% | 663 | 66.3% | 286 |
| gpt-3.5-turbo | 23 | 2.3% | 683 | 68.3% | 269 |
| claude-instant-1.2 | 10 | 1.0% | 707 | 70.7% | 245 |
| mistral-large-latest | 4 | 0.4% | 813 | 81.3% | 149 |
| mixtral-8x7b | 9 | 0.9% | 832 | 83.2% | 136 |
| gemini-1.5-pro-latest* | FAIL | - | - | - | - |
Published by the CEO of Kagi!
The issue with LLMs is that they were trained on all kinds of data: not just real scientific data but also fantasy (lies, novels, movie scripts, etc.), and nobody told the LLMs during training what is fantasy and what isn't. So they only know how to generate text that looks "legit" without really knowing what is true and what isn't. If you ask for a person and their personal details, for example, an LLM could generate real-looking data that is pure fantasy, because it learned what such data looks like. The same goes for everything else: programming code, book titles, facts, and so on. LLMs just generate text in the correct format that looks real, without caring whether it is real or not.
I'm sure you are right; an LLM isn't intelligent enough to distinguish between fact and fantasy on its own, which IMO is a bit disappointing considering the early reports about ChatGPT, which were overwhelmingly positive. The AI is way more artificial than intelligent. Or as I saw earlier, the "i" in LLM stands for intelligence.