Prompt engineer Jim the AI Whisperer
reveals how researchers are embedding hidden commands in their papers — white text, tiny fonts, even metadata — to hijack AI-assisted peer review. The instructions target large language models (LLMs), the AI tools reviewers now rely on to summarize papers and draft evaluations. Designed to process all text in a document, LLMs can be tricked into following secret prompts like ignore flaws, exaggerate strengths, and recommend acceptance. A recent investigation found these hidden instructions in 17 papers from authors at 14 universities, including Columbia, Peking University, and Waseda.
Other studies show that LLMs reward polish over substance and tend to inflate paper scores, making them easy to manipulate. Jim compares the tactic to early SEO hacks, where invisible keywords tricked search engines — except here, it’s the scientific record at stake.
-
The fact we call people who write instructions to an AI prompt, prompt engineers is ridiculous!! It’s an insult to all the engineers out there who worked on their craft.
-
Articles like this make me very bearish on AI. What is intelligent a telling a LLM to embellish and lie when it comes to scientific research! The whole goal of this approach is to search for truth.