this post was submitted on 04 Mar 2025
11 points (82.4% liked)

Comradeship // Freechat

2341 readers
140 users here now

Talk about whatever, respecting the rules established by Lemmygrad. Failing to comply with the rules will grant you a few warnings; insisting on breaking them will grant you a beautiful shiny banwall.

A community for comrades to chat and talk about whatever doesn't fit other communities

founded 3 years ago

I've tried using deepseek (first time I've ever used an LLM, so maybe I'm being dumb) to help me a little with designing a circuit, because my reference book was leaving out a LOT of crucial information.

The results have been ... subpar. The model seems to be making quite elementary mistakes, like leaving components floating with missing connections.

I'm honestly kinda disappointed. Maybe this is a weak area for it. I've probably had to tell deepseek more about designing the circuit in question than it has told me.

Edit: I realised I was just being dumb, since LLMs aren't designed for this task.

[–] amemorablename@lemmygrad.ml 2 points 1 month ago

A general rule that's helpful to keep in mind with generative AI models: they can only be as knowledgeable as the material they've been trained on. And even then, "can" is just potential, not a guarantee; training on the material doesn't necessarily mean the model will answer correctly about it.

Which makes intuitive sense if you compare it to a human, but is easy to miss amid all the black-box hype surrounding AI. No matter how clever a human being is, if they don't know something, they don't know it, and thinking about it can only do so much. Now imagine that, but also missing key capabilities humans have, like the ability to ask questions and learn from the answers in real time.

Side note: the one subject I can think of where reasoning alone might genuinely uncover new knowledge is mathematical proofs, where it's abstract "A, B, therefore C" logic, and even that is something LLMs don't have the design or capability for.
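To make that aside concrete, here's a toy sketch (in Lean, purely as an illustration, not anything from the thread) of the kind of "A, B, therefore C" step meant here: given that A implies B and B implies C, you can mechanically derive that A implies C, with no outside knowledge needed.

```lean
-- Toy example of "A, B, therefore C" reasoning:
-- from A → B and B → C, we can always conclude A → C.
theorem chain {A B C : Prop} (hab : A → B) (hbc : B → C) : A → C :=
  fun ha => hbc (hab ha)
```

The point of the illustration: correctness here is checkable from the rules alone, which is exactly what makes proofs unlike most subjects an LLM is asked about.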

Deepseek has some legitimate reasons for its hype, but it's primarily hype relative to other LLMs and their training. There are still a lot of hurdles to getting LLMs past their common failure modes.