Friendly reminder that LLMs (large language models) have biases because of the probabilistic way they pick tokens, but they don't have opinions because they don't think and don't have sensory experience. Some of them are purposefully tuned to refuse certain kinds of questions or to answer in certain kinds of ways, and in that capacity they can be tools of propaganda (and it is important to be aware of that). But this is also more pronounced in their implementation as a static chat assistant. If you were to use the model as plain text completion (where you give it text and it continues it, with no illusion of chat participants), or if you were able to heavily modify the sampling values (which change the math used to pick the next token), its output could become much more random and varied, and it would probably agree with you on a lot of ideologies if you led it into them.
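To make the sampling point concrete, here is a minimal sketch of temperature sampling, one of those values. It uses plain Python/NumPy with made-up logits rather than any particular model's API: lowering the temperature makes the model lock onto its top pick almost every time, while raising it flattens the distribution so less likely tokens get chosen more often, which is where the more random and varied output comes from.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick a token index from raw model scores (logits).

    Lower temperature sharpens the distribution (more predictable output);
    higher temperature flattens it (more random, varied output).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return rng.choice(len(probs), p=probs)

# Hypothetical scores for four candidate next tokens.
logits = [4.0, 2.5, 1.0, 0.5]
for t in (0.2, 1.0, 2.0):
    picks = [sample_next_token(logits, temperature=t) for _ in range(1000)]
    freqs = np.bincount(picks, minlength=len(logits)) / 1000
    print(f"temperature={t}: pick frequencies {freqs}")
```

At temperature 0.2 nearly every sample is the highest-scoring token; at 2.0 the picks spread out across all four. Hosted chat assistants usually fix these values for you, which is part of why their behavior feels so uniform.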
In order to get a model that is as capable as possible, it's usually trained on "bad" content in addition to good. I don't know enough about model training to say why this matters, but I've heard from someone who does that it makes a significant difference. In effect, this means models are probably going to be capable of a lot that is unwanted. And that's where you get stories like "Open"AI traumatizing the Kenyan workers who were hired to help filter disturbing content: https://www.vice.com/en/article/openai-used-kenyan-workers-making-dollar2-an-hour-to-filter-traumatic-content-from-chatgpt/
So, in summary, could DeepSeek have a bias that aligns with what might be called "counter revolutionary"? It could, and even if it were trained by people who are full-blown communists, that wouldn't guarantee otherwise, because of the nature of training data and its biases. Is it capable of much more than that? Almost certainly, as LLMs generally are.