this post was submitted on 09 Jun 2025
826 points (92.0% liked)

Technology

[–] FMT99@lemmy.world 289 points 3 weeks ago (20 children)

Did the author think ChatGPT is in fact an AGI? It's a chatbot. Why would it be good at chess? It's like saying an Atari 2600 running a dedicated chess program can beat Google Maps at chess.

[–] spankmonkey@lemmy.world 229 points 3 weeks ago (15 children)

AI, including ChatGPT, is being marketed as super awesome at everything, which is why it and similar AI are being forced into absolutely everything and sold as a replacement for people.

Something marketed as AGI should be treated as AGI when proving it isn't AGI.

[–] pelespirit@sh.itjust.works 15 points 3 weeks ago (12 children)

Not to help the AI companies, but why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff? It's obvious they're shit at it, so why do they answer anyway? It's because they're programmed by know-it-all programmers, isn't it?

[–] rebelsimile@sh.itjust.works 29 points 3 weeks ago (1 children)

Because they're fucking terrible at designing tools to solve problems, and they're obviously less and less good at pretending this is an omnitool that can do everything with perfect coherency (and if it isn't working right, it's because you're not believing or paying hard enough).

[–] ImplyingImplications@lemmy.ca 26 points 3 weeks ago

why don't they program them

AI models aren't programmed traditionally. They're generated by machine learning. Essentially the model is given test prompts and then given a rating on its answer. The model's calculations will be adjusted so that its answer to the test prompt will be closer to the expected answer. You repeat this a few billion times with a few billion prompts and you will have generated a model that scores very high on all test prompts.

Then someone asks it how many R's are in strawberry and it gets the wrong answer. The only way to fix this is to add that as a test prompt and redo the machine learning process, which takes an enormous amount of time and computational power each time it's done, only for people to once again quickly find some kind of prompt it doesn't answer well.

There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn't the issue. It's trying to get one model to be good at absolutely everything.
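
Roughly, the "rate the answer, nudge the weights, repeat" loop described above looks like this. A toy sketch with a made-up linear "model", random stand-in data, and an arbitrary learning rate; real LLMs do the same nudging over billions of parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a single linear layer scoring 4 candidate next tokens
# from an 8-number encoding of the prompt. Real models are vastly bigger,
# but the loop has the same shape.
weights = rng.normal(size=(8, 4))

def forward(features):
    logits = features @ weights
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # probability assigned to each candidate token

prompts = rng.normal(size=(1000, 8))      # stand-in for encoded training prompts
targets = rng.integers(0, 4, size=1000)   # stand-in for the expected answers

learning_rate = 0.1
for features, target in zip(prompts, targets):
    probs = forward(features)
    # "Rate" the answer: gradient of cross-entropy loss w.r.t. the logits.
    grad = probs.copy()
    grad[target] -= 1.0
    # Nudge the weights so the expected answer scores a bit higher next time.
    weights -= learning_rate * np.outer(features, grad)
```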

[–] suburban_hillbilly@lemmy.ml 30 points 3 weeks ago (2 children)

Most people do. It's just called AI in the media everywhere and marketing works. I think online folks forget that something as simple as getting a Lemmy account by yourself puts you into the top quintile of tech literacy.

[–] malwieder@feddit.org 27 points 3 weeks ago (5 children)

Google Maps doesn't pretend to be good at chess. ChatGPT does.

[–] iAvicenna@lemmy.world 16 points 3 weeks ago (1 children)

well so much hype has been generated around chatgpt being close to AGI that now it makes sense to ask questions like "can chatgpt prove the Riemann hypothesis"

[–] Broken@lemmy.ml 10 points 3 weeks ago (3 children)

I agree with your general statement, but in theory, since all ChatGPT does is regurgitate information back and a lot of chess is memorization of historical games and types, it might actually perform well. No, it can't think, but it can remember everything, so at some point that might tip the results in its favor.

[–] Objection@lemmy.ml 84 points 3 weeks ago (5 children)

Tbf, the article should probably mention the fact that machine learning programs designed to play chess blow everything else out of the water.

[–] bier@feddit.nl 30 points 3 weeks ago (1 children)

Yeah, it's like judging how great a fish is at climbing a tree. But it does show that it's not real intelligence or reasoning.

[–] 13igTyme@lemmy.world 13 points 3 weeks ago (1 children)

Don't call my fish stupid.

[–] Zenith@lemm.ee 15 points 3 weeks ago

I forget which airline it is, but one of the onboard games on the seat-back TVs was called "Beginners Chess," which was notoriously difficult to beat, so it was tested against other chess engines and ranked in like the top five most powerful chess engines ever.

[–] andallthat@lemmy.world 13 points 3 weeks ago* (last edited 3 weeks ago)

Machine learning has existed for many years now. The issue is with these funding-hungry new companies taking their LLMs, repackaging them as "AI", and attributing every ML win ever to "AI".

ML programs designed and trained specifically to identify tumors in medical imaging have become good diagnostic tools. But if you read in news that "AI helps cure cancer", it makes it sound like it was a lone researcher who spent a few minutes engineering the right prompt for Copilot.

Yes, a specifically designed and finely tuned ML program can now beat the best human chess player, but calling it "AI" and bundling it together with the latest Gemini or Claude iteration's "reasoning capabilities" is intentionally misleading. That's why articles like this one are needed. ML is a useful tool, but far from the "super-human general intelligence" that is meant to replace half of human workers by the power of wishful prompting.

[–] NeilBru@lemmy.world 76 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

An LLM is a poor computational/predictive paradigm for playing chess.

[–] surph_ninja@lemmy.world 30 points 3 weeks ago (1 children)

This just in: a hammer makes a poor screwdriver.

[–] Takapapatapaka@lemmy.world 12 points 3 weeks ago (4 children)

Actually, a very specific model (gpt-3.5-turbo-instruct) was pretty good at chess (around 1700 Elo, if I remember correctly).

[–] AlecSadler@sh.itjust.works 61 points 3 weeks ago (9 children)

ChatGPT has been, hands down, the worst AI coding assistant I've ever used.

It regularly suggests code that doesn't compile or isn't even for the language.

It generally suggests code that is just a copy of the lines I just wrote.

Sometimes it likes to suggest setting the same property like 5 times.

It is absolute garbage and I do not recommend it to anyone.

[–] j4yt33@feddit.org 17 points 3 weeks ago (4 children)

I find it really hit and miss. Easy, standard operations are fine, but if you have an issue with code you wrote and ask it to fix it, you can forget it.

[–] AlecSadler@sh.itjust.works 9 points 3 weeks ago (1 children)

I've found Claude 3.7 and 4.0 and sometimes Gemini variants still leagues better than ChatGPT/Copilot.

Still not perfect, but night and day difference.

I feel like ChatGPT didn't focus on coding and instead focused on mainstream use, but I am not an expert.

[–] Etterra@discuss.online 9 points 3 weeks ago (2 children)

That's because it doesn't know what it's saying. It's just blathering out each word as what it estimates to be the likely next word given past examples in its training data. It's a statistics calculator. It's marginally better than just smashing the auto fill on your cell repeatedly. It's literally dumber than a parrot.
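
That "estimate the likely next word and emit it" loop can be sketched in a few lines. The two-word contexts and probabilities below are made up for illustration; a real model computes them from billions of learned parameters rather than a lookup table:

```python
import random

# Made-up next-word probabilities keyed by the last two words of context.
next_word_probs = {
    ("the", "knight"): {"moves": 0.5, "takes": 0.3, "is": 0.2},
    ("knight", "moves"): {"to": 0.7, "back": 0.2, "again": 0.1},
}

def generate(prompt, steps=2):
    words = prompt.split()
    for _ in range(steps):
        probs = next_word_probs.get(tuple(words[-2:]))
        if probs is None:
            break
        # Sample the next word in proportion to its estimated probability.
        words.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return " ".join(words)

print(generate("the knight"))  # e.g. "the knight moves to"
```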

[–] nutsack@lemmy.dbzer0.com 9 points 3 weeks ago (3 children)

My favorite thing is how it constantly has me implementing against libraries that don't exist.

[–] Blackmist@feddit.uk 12 points 3 weeks ago

You're right. That library was removed in ToolName [PriorVersion]. Please try this instead.

*makes up entirely new fictitious library name*

[–] nednobbins@lemm.ee 50 points 3 weeks ago (5 children)

Sometimes it seems like most of these AI articles are written by AIs with bad prompts.

Human journalists would hopefully do a little research. A quick search would reveal that researchers have been publishing about this for over a year, so there's no need to sensationalize it. Perhaps the human journalist could have spent a little time talking about why LLMs are bad at chess and how researchers are approaching the problem.

LLMs on the other hand, are very good at producing clickbait articles with low information content.

[–] nova_ad_vitum@lemmy.ca 24 points 3 weeks ago (7 children)

GothamChess has a video of making ChatGPT play chess against Stockfish. Spoiler: ChatGPT does not do well. It plays okay for a few moves, but then the moment it gets in trouble it straight up cheats. Telling it to follow the rules of chess doesn't help.

This sort of gets to the heart of LLM-based "AI". That one example to me really shows that there's no actual reasoning happening inside. It's producing answers that statistically look like answers that might be given based on that input.

For some things it even works. But calling this intelligence is dubious at best.

[–] floofloof@lemmy.ca 44 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

I suppose it's an interesting experiment, but it's not that surprising that a word prediction machine can't play chess.

[–] Halosheep@lemm.ee 43 points 3 weeks ago (3 children)

I swear every single article critical of current LLMs is like, "The square got BLASTED by the triangle shape when it completely FAILED to go through the triangle shaped hole."

[–] drspod@lemmy.ml 42 points 3 weeks ago (4 children)

It's newsworthy when the sellers of squares are saying that nobody will ever need a triangle again, and the shape-sector of the stock market is hysterically pumping money into companies that make or use squares.

[–] inconel@lemmy.ca 19 points 3 weeks ago (1 children)

It's also from a company claiming they're getting closer to creating a morphing shape that can match any hole.

[–] MonkderVierte@lemmy.zip 41 points 3 weeks ago (1 children)

LLMs are not built for logic.

[–] PushButton@lemmy.world 18 points 3 weeks ago (2 children)

And yet everybody is selling it to write code.

The last time I checked, coding required logic.

[–] jj4211@lemmy.world 10 points 3 weeks ago (4 children)

To be fair, a decent chunk of coding is stupid boilerplate/minutiae that varies environment to environment, language to language, library to library.

So an LLM can do some code completion: filling out a bunch of boilerplate that is blatantly obvious, generating the redundant text mandated by certain patterns, and keeping straight details between languages like "does this language want join as a method on a list with a string argument, or vice versa?"

Problem is, this can sometimes be more trouble than it's worth, as miscompletions are annoying.
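
As a concrete example of the join detail mentioned above (Python shown; the JavaScript/TypeScript flavor is noted in the comments):

```python
parts = ["usr", "local", "bin"]

# Python: join is a method on the separator string and takes the list as an argument.
path = "/".join(parts)
print(path)  # usr/local/bin

# JavaScript/TypeScript flips it: parts.join("/") is a method on the array
# and takes the separator as the argument.
```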

[–] anubis119@lemmy.world 36 points 3 weeks ago (5 children)

A strange game. How about a nice game of Global Thermonuclear War?

[–] ada@piefed.blahaj.zone 17 points 3 weeks ago

No thank you. The only winning move is not to play

[–] Furbag@lemmy.world 29 points 3 weeks ago (6 children)

Can ChatGPT actually play chess now? Last I checked, it couldn't remember more than 5 moves of history, so it wouldn't be able to see the true board state and would make illegal moves, take its own pieces, materialize pieces out of thin air, etc.

[–] cley_faye@lemmy.world 25 points 3 weeks ago

Ah, you used logic. That's the issue. They don't do that.

[–] arc99@lemmy.world 20 points 3 weeks ago (3 children)

Hardly surprising. LLMs aren't *thinking*, they're just shitting out the next token for any given input of tokens.

[–] finitebanjo@lemmy.world 15 points 3 weeks ago

All these comments asking "why don't they just have chatgpt go and look up the correct answer".

That's not how it works, you buffoons. It's trained on datasets long before it's released. It doesn't think. It doesn't learn after release, and it won't remember things you try to teach it.

Really lowering my faith in humanity when even the AI skeptics don't understand that it generates statistical representations of an answer based on answers given in the past.

[–] Lembot_0003@lemmy.zip 14 points 3 weeks ago (2 children)

The Atari chess program can play chess better than the Boeing 747 too. And better than the North Pole. Amazing!

[–] CarbonatedPastaSauce@lemmy.world 12 points 3 weeks ago (2 children)

Neither of those things is marketed as being artificially intelligent.

[–] jsomae@lemmy.ml 13 points 3 weeks ago (4 children)

Using an LLM as a chess engine is like using a power tool as a table leg. Pretty funny honestly, but it's obviously not going to be good at it, at least not without scaffolding.
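
For what it's worth, "scaffolding" here usually means wrapping the model in code that enforces the rules it can't track on its own. A minimal sketch using the python-chess package; `ask_llm_for_move` is a made-up placeholder that guesses randomly so the example runs by itself, not a real API:

```python
import random
import chess  # pip install python-chess

def ask_llm_for_move(board: chess.Board) -> str:
    """Placeholder for an LLM call returning a move in SAN, e.g. 'Nf3'.
    Here it just picks a random legal move so the sketch is self-contained."""
    return random.choice([board.san(m) for m in board.legal_moves])

board = chess.Board()
while not board.is_game_over():
    suggestion = ask_llm_for_move(board)
    try:
        move = board.parse_san(suggestion)  # raises on illegal or nonsense moves
    except ValueError:
        continue  # reject hallucinated moves and ask again
    board.push(move)  # only legal moves ever reach the board

print(board.result())
```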

[–] Nurse_Robot@lemmy.world 11 points 3 weeks ago (3 children)

I'm often impressed at how good ChatGPT is at generating text, but I'll admit it's hilariously terrible at chess. It loves to manifest pieces out of thin air, or make absurd illegal moves, like jumping its king halfway across the board and claiming checkmate.

[–] Sidhean@lemmy.blahaj.zone 10 points 3 weeks ago

Can I fistfight ChatGPT next? I bet I could kick its ass, too :p
