It should come as no surprise that ChatGPT and other LLM subscription services may have humans reviewing your chats, but in case you wanted to see the policy in writing...
I saw Brian Roemmele post about this on X and went to look it up for myself.
From OpenAI's post:

> When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.