
We all know by now that ChatGPT is full of incorrect data, but I trusted it would not go wrong after I asked for a list of sci-fi book recommendations (mostly short-story anthologies in Spanish), including book titles, publisher, print year, and of course ISBN.

Some of the books do exist, but the majority are nowhere to be found. I picked the one that caught my interest the most and contacted the publisher directly after I could not find it on their website or anywhere else.

This is what they replied (Google Translate):


ChatGPT got it wrong.

We don't have any books with that title.

In the ISBN it gave you, the last digit is incorrect. The correct one (9788477028383) corresponds to "The Holy Fountain" by Henry James.

Nor have we published any science fiction anthologies in the last 25 years.


A quick search on the "old site" shows that others have experienced the same with ChatGPT and ISBN searches... For some reason I thought it would not go wrong in this case, but it did.
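Incidentally, the publisher could tell the last digit was wrong at a glance because the final digit of an ISBN-13 is a checksum: the first 12 digits are weighted alternately by 1 and 3, and the check digit brings the weighted sum up to a multiple of 10. A quick sketch in Python (my own illustration, not something from the thread):

```python
def isbn13_check_digit(first12: str) -> int:
    """Compute the ISBN-13 check digit from the first 12 digits."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_isbn13(isbn: str) -> bool:
    """Validate a 13-digit ISBN (hyphens allowed)."""
    digits = isbn.replace("-", "")
    if len(digits) != 13 or not digits.isdigit():
        return False
    return isbn13_check_digit(digits[:12]) == int(digits[12])

print(is_valid_isbn13("9788477028383"))  # the ISBN from the reply -> True
print(is_valid_isbn13("9788477028384"))  # same digits, wrong check digit -> False
```

Since there are only ten possible check digits, roughly one in ten made-up ISBNs will pass this test by luck; the rest fail before you even look them up.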


ChatGPT is a text predictor. You are not asking it for book recommendations; you are writing "some good books are:" and hoping that glorified autocorrect will somehow come up with actual books to complete the sentence.
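To make that concrete, here is a toy sketch (my own, and wildly simplified: real models are neural networks predicting tokens, not word-pair lookups) of why "completing the sentence" is not the same thing as "knowing real books":

```python
import random

# Toy next-word predictor. It only knows which word tended to follow
# which in its "training text". There is no book database to consult,
# so whatever follows "are:" is simply whatever looked plausible
# during training.
bigrams = {
    "some": ["good"],
    "good": ["books"],
    "books": ["are:"],
    "are:": ["<plausible-looking title>"],
}

def complete(prompt: str, n: int = 4) -> str:
    """Extend the prompt n words by repeatedly picking a likely next word."""
    words = prompt.split()
    for _ in range(n):
        words.append(random.choice(bigrams.get(words[-1], ["<unk>"])))
    return " ".join(words)

print(complete("some"))  # -> "some good books are: <plausible-looking title>"
```

The completion is fluent and confidently formatted, which is exactly what makes the fabricated titles and ISBNs so convincing.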

This effect is compounded by the fact that it is trained to predict text that will make you click the thumbs up button and not the thumbs down one. Saying "here are some books" and inventing some makes you more likely to click 👍 or do nothing; saying "as an AI language model, I do not know about books and cannot accurately recommend good books" makes you more likely to click 👎, so it doesn't do that.

Expecting ChatGPT to do anything about the real world beyond writing text "creatively" is a fool's errand.

[-] TheObserver@lemmy.world 3 points 1 year ago

Exactly. Every time I see chatgpt in the title of some bullshit it clearly can't do, it cracks me up seeing all these people falling for it 🤣

[-] Not_mikey@lemmy.world 1 points 1 year ago

Saying chatgpt is glorified autocorrect is like saying humans are glorified bacteria. They were both made under the same basic drive: to survive and reproduce for humans and bacteria; to guess the next word for chatgpt and autocorrect. But they are of wholly different magnitudes. Both chatgpt and humans evolved a much more complex understanding of the world to achieve the same goal as their more basic analogs. If chatgpt had no understanding of the real world, it wouldn't have been able to guess any of the books.

ChatGPT does not have an understanding of the world. It's able to guess book titles because book titles have a format and its training data had book lists in it.

[-] Not_mikey@lemmy.world 1 points 1 year ago

You could make the same case for you not understanding anything outside of your experiential knowledge. You are only able to name COVID variants because you read or heard about them somewhere. Fundamentally, any idea outside of your experience is just a nexus of linked concepts in your head. For example, COVID omicron is a combination of your concept of COVID, the idea of it being a mutation/variant, and it being more contagious, maybe with a couple of other facts you read about it added in. This linking of ideas forms your understanding, and chatgpt is able to form these connections just as well as a person. Unless you want to make the case that understanding necessitates experience, chatgpt understands a lot about the world. If you make that case, though, then you don't understand evolution, microbiology, history, etc.: anything that you just read about in your "training data."

this post was submitted on 09 Aug 2023
32 points (75.8% liked)

ChatGPT


Unofficial ChatGPT community to discuss anything ChatGPT

founded 1 year ago