this post was submitted on 19 Nov 2024
1065 points (97.7% liked)

People Twitter

[–] WalnutLum@lemmy.ml 20 points 1 month ago* (last edited 1 month ago) (1 children)

Reminder that all these chat-formatted LLMs are just text-completion engines trained on text formatted like a chat. You're not having a conversation with it; it's "completing" the chat history you provide it, by randomly(!) sampling the next token from among the tokens that seem to best fit the text so far.
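That sampling step can be sketched in a few lines. This is a toy illustration, not any real model's decoder: the vocabulary and scores below are made up, but the mechanism (softmax over scores, then a weighted random draw) is how next-token sampling works in principle.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from raw model scores (logits).

    Softmax turns the scores into probabilities, then we draw
    randomly -- higher-scoring tokens are merely MORE LIKELY,
    never guaranteed. That randomness is the "(!)" above.
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical vocabulary and scores -- not from any real model.
vocab = ["Paris", "London", "banana"]
logits = [5.0, 2.0, -1.0]
print(vocab[sample_next_token(logits)])  # usually "Paris", but not always
```

Lowering the temperature concentrates probability on the top-scoring token; raising it flattens the distribution and makes the "random" part more visible.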

If you don't directly provide, in the chat history and/or the completion prompt, the information you're trying to retrieve, you're essentially fishing for text in a sea of random tokens that merely seem to fit the question.
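The "chat history" itself is just one long string handed to the completion engine. Here's a minimal sketch of that flattening; the `<|role|>` tag format is invented for illustration (real models each use their own chat template and special tokens), but the idea is the same: the model only ever sees a single text string to continue.

```python
def render_chat(messages):
    """Flatten a 'conversation' into one plain-text completion prompt.

    The model doesn't see turns or speakers -- just this string,
    which it continues from the trailing assistant tag.
    """
    parts = [f"<|{m['role']}|>\n{m['content']}\n" for m in messages]
    return "".join(parts) + "<|assistant|>\n"  # completion starts here

history = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4."},
    {"role": "user", "content": "And 3+3?"},
]
print(render_chat(history))
```

This is why information absent from the flattened string can't reliably come back out: the model is continuing this text, not consulting a memory of the conversation.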

It will always complete the text: even when the best available tokens only minimally fit the context, it picks the best text it can and keeps going.

This is how they work; anything beyond that is usually the company adding a bunch of guide bumpers that reformat prompts to coax the models into responding in a "smarter" way (see GPT-4o and chain-of-thought prompting).

[–] HackerJoe@sh.itjust.works 7 points 4 weeks ago

They were trained on Reddit. How much would you trust a chatbot whose brain consists of the entirety of Reddit put in a blender?

I am amazed it works as well as it does. Gemini only occasionally tells people to kill themselves.