
... what will that mean for public trust in science? (rest of the title). I'm not sure I have a strong opinion yet about this second part.
What caught my eye were all the interesting ways people have already started using AI to fight known weaknesses in the scientific system.
Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation.
Natural language processing tools flag “tortured phrases” – the telltale word salads of paper mills. Bibliometric dashboards, such as the one from Semantic Scholar, trace whether papers are cited in support or in contradiction.
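To make the “tortured phrases” part concrete: at its core it's matching known machine-paraphrases of standard terms. A toy sketch in Python (the phrase list is a tiny sample of documented examples; real screeners like Cabanac's Problematic Paper Screener track thousands and use fuzzier matching):

```python
import re

# Documented examples of machine-paraphrased jargon: "tortured phrase" -> likely original.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular woodland": "random forest",
    "enormous information": "big data",
    "flag to commotion": "signal to noise",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (suspect phrase, likely original term) pairs found in a manuscript."""
    lowered = text.lower()
    return [
        (phrase, original)
        for phrase, original in TORTURED_PHRASES.items()
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)
    ]

print(flag_tortured_phrases(
    "We apply profound learning with an irregular woodland classifier."
))
# [('profound learning', 'deep learning'), ('irregular woodland', 'random forest')]
```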
Yes, this! I've seen so many papers cited just to remind the reader of their huge methodological flaws. It's almost as if making mistakes pays off in the end, in terms of citation metrics. There's no such thing as bad publicity?
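Incidentally, the citation-tracing part is something you can poke at yourself: Semantic Scholar's public Graph API returns the text snippets around each citation, though deciding whether a snippet supports or contradicts the paper still needs a classifier on top. A quick sketch (the DOI is just an illustrative well-cited paper):

```python
import requests

# Any paper identifier works; this DOI (the NumPy paper) is purely illustrative.
paper_id = "DOI:10.1038/s41586-020-2649-2"
url = f"https://api.semanticscholar.org/graph/v1/paper/{paper_id}/citations"

resp = requests.get(url, params={"fields": "contexts,intents,citingPaper.title", "limit": 20})
resp.raise_for_status()

# Print each citing paper with the sentence(s) in which it cites the target.
for item in resp.json().get("data", []):
    title = item["citingPaper"]["title"]
    for context in item.get("contexts") or []:
        print(f"{title[:60]}... | {context[:120]}")
```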
AI – especially agentic, reasoning-capable models increasingly proficient in mathematics and logic – will soon uncover more subtle flaws.
For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own work mentioned above also substantially relies on large language models to process large volumes of text.
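I don't know what the Black Spatula Project's actual pipeline looks like, but the core loop is presumably something in this spirit: feed a derivation to a reasoning-capable model and ask it to audit each step. A minimal sketch with the OpenAI Python SDK (the model name is a placeholder, and the flawed derivation is made up):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You are auditing a published derivation. Check every algebraic step.
Reply "NO ISSUES FOUND" or give a numbered list of suspect steps, each with
the specific inconsistency.

Derivation:
{derivation}
"""

def audit_derivation(derivation: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in any reasoning-capable model
        messages=[{"role": "user", "content": PROMPT.format(derivation=derivation)}],
        temperature=0,
    )
    return resp.choices[0].message.content

# Made-up derivation with a deliberate error: step (3) drops the factor 1/2.
print(audit_derivation(
    "(1) E = (1/2) m v^2\n"
    "(2) v = a t\n"
    "(3) E = m a^2 t^2"
))
```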
Do you know of any other potential promising uses of AI in science?
Considering the severity of the Replication Crisis, I imagine a full audit will be disastrous for trust in science (really it should just be disastrous for trust in those scientists, but people aren't very nuanced).
Yeah, I didn't comment on that part, since the OP's main concern was the fear of losing trust in science and how to respond to it (kinda like how PR teams gear up when their politician gets caught cheating), rather than how to address the underlying issue. Maybe having those AI tools is a good thing...
To be fair, reading it again now, the author is more nuanced than the impression I got from my first reading.
30 sats \ 0 replies \ @gmd 12h
I would love an AI tool that takes a study and a claim and asks if the study actually supports what the person is claiming, and to what degree.
There are so many BS fitness, nutrition and health grifters on Twitter who carelessly throw around study titles and abstracts and bend the implications without reading the actual studies.
Such a tool could also be turned on papers themselves, to detect whether they erroneously cite other studies that don't actually support their claims.
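A crude version of this already exists off the shelf: natural-language-inference models can score whether a passage supports or contradicts a claim. A sketch with Hugging Face's zero-shot pipeline (the model, claim and abstract are all just illustrative; a serious tool would need the full text, not the abstract):

```python
from transformers import pipeline

# Public NLI checkpoint; any entailment-style model could be substituted.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim = "creatine supplementation improves cognitive performance"  # hypothetical claim
abstract = (  # hypothetical abstract
    "We randomized 60 adults to creatine or placebo for six weeks and found "
    "no significant difference on any cognitive endpoint."
)

result = classifier(
    abstract,
    candidate_labels=["supports", "contradicts", "neither supports nor contradicts"],
    hypothesis_template=f"This study {{}} the claim that {claim}.",
)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```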
I fear AI will not be able to appreciate genuine papers with new knowledge, since that knowledge would be missing from its training data. It might end up producing a lot of false negatives.
And the final communism will descend over the whole planet...
What I find interesting is the phrasing:
This “science is broken” narrative undermines public trust.
I think this is awesome. Question everything. Not just the X-tard, but also the guy with the credentials who can do much larger damage.
Perhaps, as the author suggests, AI will make this easier. However, so far all I have really noticed is Qwen3, in its reasoning step, finding a real error in some Python code and then "deciding" not to tell me about it.
If you want something as dumb as an LLM to help you, you need to train it well. The good news is that you can train it fast.