... what will that mean for public trust in science? (rest of the title). I'm not sure I have a strong opinion yet about this second part.
What caught my eye were all the interesting ways people have already found to use AI against known weaknesses in the scientific system.
Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation.
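For the curious, here is roughly what the simplest version of such figure screening could look like. This is only a sketch of the general idea using perceptual hashing; the commercial tools are proprietary and far more sophisticated, and the folder name and distance threshold below are made up.

```python
# Minimal sketch: flag near-duplicate figures with perceptual hashing.
# This is NOT how ImageTwin or Proofig work internally; it only illustrates
# the basic idea of detecting suspiciously similar figure panels.
# Assumes the third-party packages Pillow and imagehash, plus a hypothetical
# directory of figure images already extracted from papers.
from itertools import combinations
from pathlib import Path

import imagehash
from PIL import Image

FIGURE_DIR = Path("extracted_figures")  # hypothetical folder of figure panels
THRESHOLD = 6                           # max Hamming distance to count as "suspiciously similar"

def hash_figures(figure_dir: Path) -> dict[Path, imagehash.ImageHash]:
    """Compute a perceptual hash for every PNG image in the directory."""
    hashes = {}
    for path in sorted(figure_dir.glob("*.png")):
        with Image.open(path) as img:
            hashes[path] = imagehash.phash(img)
    return hashes

def find_near_duplicates(hashes: dict[Path, imagehash.ImageHash]):
    """Yield pairs of figures whose hashes differ by at most THRESHOLD bits."""
    for (p1, h1), (p2, h2) in combinations(hashes.items(), 2):
        distance = h1 - h2  # Hamming distance between the two perceptual hashes
        if distance <= THRESHOLD:
            yield p1, p2, distance

if __name__ == "__main__":
    for a, b, d in find_near_duplicates(hash_figures(FIGURE_DIR)):
        print(f"possible duplication: {a.name} ~ {b.name} (distance {d})")
```

A real pipeline would also have to extract figures from PDFs, cope with crops, rotations and contrast tweaks, and compare against millions of previously published figures rather than a single folder.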
Natural language processing tools flag “tortured phrases” – the telltale word salads of paper mills (a toy sketch of the idea follows below). Bibliometric dashboards, such as the one from Semantic Scholar, trace whether papers are cited in support or in contradiction.
Yes, this! So many papers I've seen cited just to remind the reader of their huge methodological flaws. It's almost as if making mistakes pays off in the end, at least in terms of citation metrics. Is there really no such thing as bad publicity?
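Back to the tortured-phrase detectors mentioned above: the real screeners match manuscripts against a large curated list of fingerprints. A toy version of the idea could look like the snippet below; the short dictionary contains phrases frequently reported in coverage of paper mills, and both the list and the matching are purely illustrative.

```python
# Toy sketch of tortured-phrase screening, in the spirit of tools like the
# Problematic Paper Screener. Real screeners use a large curated fingerprint
# list; this tiny dictionary is only illustrative.
import re

# tortured phrase -> the standard term it mangles (illustrative subset)
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
    "irregular woodland": "random forest",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for phrase, standard in TORTURED_PHRASES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, standard))
    return hits

abstract = (
    "We apply counterfeit consciousness and profound learning "
    "to analyse colossal information from clinical trials."
)
for phrase, standard in flag_tortured_phrases(abstract):
    print(f"suspicious wording: '{phrase}' (expected: '{standard}')")
```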
AI – especially agentic, reasoning-capable models increasingly proficient in mathematics and logic – will soon uncover more subtle flaws.
For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own work mentioned above also substantially relies on large language models to process large volumes of text.
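I have no insight into the Black Spatula Project's actual pipeline, but the general pattern of LLM-assisted checking is easy to sketch: hand the model a derivation and ask for a structured verdict. The snippet below assumes the openai Python package, an API key in the environment, and a placeholder model name; a serious effort would also need the full paper as context, repeated runs or multiple models to reduce false positives, and human review of every flag.

```python
# Minimal sketch of LLM-assisted checking of a published derivation.
# This is NOT the Black Spatula Project's pipeline; it only illustrates the
# general pattern of asking a reasoning-capable model for a structured verdict.
# Assumes the `openai` package and OPENAI_API_KEY in the environment;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are checking a mathematical derivation from a published paper.
Go through it step by step and report, as JSON with keys "verdict" and "issues",
whether each algebraic step follows from the previous one.

Derivation:
{derivation}
"""

def check_derivation(derivation: str, model: str = "gpt-4o") -> str:
    """Ask the model to audit a derivation and return its raw answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(derivation=derivation)}],
        temperature=0,  # keep the audit as deterministic as the API allows
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = "From (3), E = mc^2, hence m = E * c^2."  # deliberately wrong step
    print(check_derivation(snippet))
```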
Do you know of any other promising uses of AI in science?