I think one of the big problems is that we, as humans, are very easily fooled by anything that can look or sound "alive". ChatGPT gets a lot of hype, but much of it stems from a kind of textual pareidolia.
It's hard to convince people that ChatGPT has absolutely no idea what it's saying. It puts words together in a human-enough way that we assume it must be thinking and must know things, but it can do neither. It's not even intended to try to do either; that's not what it's for. It takes the rules of speech and a massive amount of data on which word is most likely to follow which other word, and runs with it. It's a super-advanced version of a cell phone keyboard's automatic word suggestions. Even correcting the punctuation of a complex sentence can be too much for it: in my experiment on this, it gave me incorrect answers four times, until I explicitly told it how to treat coordinating conjunctions.
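The "which word is most likely to follow which other word" idea can be sketched as a toy bigram model. This is a deliberately crude illustration, not how ChatGPT actually works (real models use neural networks trained on vast corpora, not raw counts); the corpus and function names here are invented for the example:

```python
import random
from collections import defaultdict

# A tiny stand-in corpus; real systems train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which words follow which: the entire "knowledge" of the model.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Produce text by repeatedly sampling a statistically likely next word.
    Nothing here understands meaning; it only replays observed word pairs."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # dead end: the word never appeared mid-corpus
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

The output is always locally plausible (every adjacent word pair occurred somewhere in the corpus) while carrying no understanding at all, which is the point of the autocomplete comparison.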
And for most uses, that's good enough. Give it a few extra rules, depending on what you're trying to create, and watch it spin out a yarn. But I've had several conversations with ChatGPT, and I've found it incredibly easy to "break", in the sense of making it produce things that sound somewhat less human and significantly less rational. What concerns me about ChatGPT isn't so much that it's going to take my job, but that people believe it's a rational, thinking, calculating thing. It may be that some part of us is biologically hard-wired to believe that; it's probably the same part that keeps seeing Jesus on burnt toast.