They train the models once on a bunch of historical data, and they don't yet continue training them with new information as it comes in. At least, that's my understanding.
That's not necessarily true for more recent models, but they still have a tendency to say it, likely because they were trained partly on ChatGPT outputs. OpenAI has accused DeepSeek of training on their model, and people have suspected Grok of being a fine-tuned DeepSeek V3 / DeepSeek R1, so it would make sense.