If someone types "how are you?" into an SMS message box... chances are really high the other person will respond with "I'm good, how are you?" or something along those lines.
That's how ChatGPT works. It's essentially a database of likely responses to questions.
It's not a fixed list of responses to every possible question; it's a mathematical model that can handle arbitrary questions and produce the most likely response. So, for example, if you ask "how you are?" you'll get much the same answer as if you'd asked "how are you?"
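Here's a toy sketch of that idea in Python. To be clear, this is nothing like ChatGPT's actual internals (that's a huge neural network trained on billions of examples, and the "training data" below is made up), but the "pick the most likely response" logic is the same in spirit:

```python
from collections import Counter

# Made-up "training data": (message, reply) pairs the model has seen.
# ChatGPT's real training set is billions of documents, not four lines.
training_pairs = [
    ("how are you?", "I'm good, how are you?"),
    ("how are you?", "I'm good, how are you?"),
    ("how are you?", "Good thanks, you?"),
    ("what time is it?", "No idea, check your phone."),
]

def most_likely_reply(message):
    # Compare word sets so "how you are?" matches "how are you?" too;
    # real models use learned representations, not crude word overlap.
    words = set(message.lower().strip("?").split())
    def overlap(pair):
        return len(words & set(pair[0].lower().strip("?").split()))
    best_match = max(training_pairs, key=overlap)[0]
    # Of the replies seen for that message, return the most common one.
    replies = Counter(reply for msg, reply in training_pairs if msg == best_match)
    return replies.most_common(1)[0][0]

print(most_likely_reply("how are you?"))  # -> I'm good, how are you?
print(most_likely_reply("how you are?"))  # -> same answer
```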
ChatGPT is also programmed to behave a certain way - for example, if you actually ask how it is, it will tell you it's not a person and doesn't have feelings. That's not part of the algorithm; that's part of the "instructions" OpenAI has given to ChatGPT. They have specifically told it not to imply that it is human or alive.
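For the curious, here's roughly what that looks like using the chat-message format from OpenAI's API. The actual instructions OpenAI gives ChatGPT aren't public, so the system prompt below is invented for illustration:

```python
# Hedged sketch: the "system" message carries the developer's hidden
# instructions; the invented text here is NOT OpenAI's real prompt.
messages = [
    {"role": "system",   # hidden instructions from the developer
     "content": "You are an AI assistant, not a person. You do not "
                "have feelings. Never imply that you are human."},
    {"role": "user",     # what the person actually typed
     "content": "How are you today?"},
]
# The model predicts the most likely reply *given* both messages,
# which is why you get answers like "As an AI, I don't have feelings."
```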
Finally - it's a little bit random, so if you ask the same question 20 times, you'll get 20 slightly different responses.
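That variation comes from sampling: the model doesn't always pick its single most likely option, it rolls weighted dice. A made-up Python illustration (real models sample word by word, with a "temperature" setting controlling how random they get):

```python
import random

# Invented replies and likelihoods, for illustration only.
replies = ["I'm good, how are you?", "Good thanks, you?", "Not bad!"]
weights = [0.6, 0.3, 0.1]

# Sampling instead of always taking the top choice is where the
# run-to-run variation comes from.
for _ in range(5):
    print(random.choices(replies, weights=weights)[0])
```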
ChatGPT is not "impressively smart" at all. It's just responding with mathematically the most likely answer to every question you ask. It will often give the same answer as a smart person, but not always. For example, I just asked it how far it is from my city to a nearby town; it said 50 miles and that it'd take 2 hours to drive there. The correct answer is 40 miles / 1 hour.
I expect there's a bunch of incorrect information in the training data set: it's a popular tourist drive, and tourists would probably take longer, stopping along the way and taking detours. For a tourist, ChatGPT's answer might even be correct. But that's not because it's smart; it's just what the algorithm produces.