this post was submitted on 04 Mar 2025
11 points (82.4% liked)
Comradeship // Freechat
A helpful general rule to keep in mind with generative AI models is that they can only be as knowledgeable as the material they were trained on. And even then, "can" only means potential, not a guarantee: training on the material doesn't ensure the model will answer correctly about it.
This makes intuitive sense if you compare it to a human, but it's easy to miss amid all the black-box hype surrounding AI. No matter how clever a human is, if they don't know something, they don't know it, and thinking about it can only get them so far. Now imagine that, but also without key capabilities humans have, like the ability to ask questions and retain what they learn long-term in real time.
Side note: the one subject I can think of where reasoning alone might genuinely uncover new knowledge is mathematical proofs, where it's abstract "A, B, therefore C" logic — and that's also something LLMs aren't designed for or capable of.
Deepseek has some legitimate reasons for its hype, but it's primarily hype relative to other LLMs and how they were trained. There are still a lot of hurdles to getting LLMs past common problems.