The issue is that it's a language model. You can go a long way by manipulating language to get useful results, but it's still fundamentally limited by language's inability to perform reasoning, only to mimic it.
Syntax can only take you so far, and it won't always take you to the right place. Eventually you need something that can reason about the underlying meaning.
It's still a computer at the end of the day, so just use logic. It responds well to it: you remove its ability to be creative and tell it exactly what you want to accomplish.
People are just really bad at prompt engineering, so they aren't good at getting LLMs like Gemini and GPT to do what they want.
You can train it, within a conversation, to get good at specific tasks. They're very useful, you just gotta know how to talk to them.
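To make "training it within the conversation" concrete, here's a rough sketch of few-shot prompting with the OpenAI Python SDK: you put a couple of worked examples in the message history so the model picks up the task format. The model name and the date-extraction task are just assumptions for illustration, and the creativity knob is temperature=0.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical task: teach the model a strict output format by example.
# Nothing is actually fine-tuned -- the "training" is just in-context examples.
messages = [
    {"role": "system", "content": "You extract dates from text and reply with YYYY-MM-DD only."},
    # Worked examples inside the conversation:
    {"role": "user", "content": "The invoice was issued on March 3rd, 2021."},
    {"role": "assistant", "content": "2021-03-03"},
    {"role": "user", "content": "We met again on the 14th of July 2019."},
    {"role": "assistant", "content": "2019-07-14"},
    # The real query:
    {"role": "user", "content": "Payment is due by Jan 5, 2025."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever you have access to
    messages=messages,
    temperature=0,        # remove its ability to be "creative"
)
print(response.choices[0].message.content)  # expected: 2025-01-05
```

Same idea works in any chat UI without code: paste a few input/output pairs, then give your real input.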