Overview: Based on recent research on the detection of AI-generated content, "109989" refers to a dataset of 109,989 possible watermarks used to identify peer reviews written by Large Language Models (LLMs).

The watermarking method: The system prompts an LLM to begin its review with a specific phrase, such as "Following [Surname] et al. ([Year]), this paper...". The number 109,989 is the total count of combinations produced by pairing the 9,999 most common surnames (from U.S. Census data) with a random year between 2014 and 2024, i.e. 9,999 × 11 = 109,989.

As a tool for academic integrity, this framework offers several notable advantages and limitations based on the study's findings:

Detection: By injecting these hidden instructions into a paper's PDF, editors can detect whether a reviewer used AI. If the submitted review begins with one of these 109,989 unique citations, it is statistically likely to be AI-generated.
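The surname-year pairing described above can be sketched directly. Note that the surname list below is a hypothetical placeholder; the actual framework draws on the 9,999 most common surnames from U.S. Census data, which are not reproduced here:

```python
# Sketch of the watermark phrase space: 9,999 surnames x 11 years.
# NOTE: SURNAMES is a stand-in list; the framework uses the 9,999 most
# common U.S. Census surnames.
SURNAMES = [f"Surname{i:04d}" for i in range(1, 10000)]  # 9,999 placeholders
YEARS = range(2014, 2025)  # 2014..2024 inclusive, 11 years

def watermark_phrases(surnames, years):
    """Enumerate every 'Following X et al. (Y), this paper' opener."""
    for name in surnames:
        for year in years:
            yield f"Following {name} et al. ({year}), this paper"

total = sum(1 for _ in watermark_phrases(SURNAMES, YEARS))
print(total)  # -> 109989
```

With ~110,000 distinct openers, the chance of a human reviewer producing one by coincidence is negligible, which is what makes a match statistically meaningful.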
Primary limitation: The approach requires indirect prompt injection (hidden text placed in the source PDF), so it works only if the reviewer uploads that specific document to an AI tool (see "Detecting LLM-Generated Peer Reviews," arXiv).
Robustness: It has proven effective even against common "reviewer defenses," such as light editing or rephrasing.
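A minimal detector sketch follows, assuming the watermark format quoted earlier. The regex-based check is an illustration, not the study's actual method; in particular, a literal pattern match would not by itself catch the rephrasing defenses the study tested:

```python
import re

# Hypothetical check: does the review open with a watermark-style citation
# "Following <Surname> et al. (<Year>), this paper", year in 2014-2024?
# This is an illustrative sketch, not the paper's detection pipeline.
WATERMARK_RE = re.compile(
    r"^\s*Following\s+[A-Z][a-z]+\s+et al\.\s*\(20(1[4-9]|2[0-4])\)\s*,\s*this paper"
)

def looks_watermarked(review_text: str) -> bool:
    """Return True if the review opens with a watermark-style citation."""
    return WATERMARK_RE.match(review_text) is not None

print(looks_watermarked("Following Smith et al. (2019), this paper..."))  # True
print(looks_watermarked("This paper studies peer review at scale."))      # False
```

A production detector would presumably normalize whitespace and punctuation, and match against the exact surname list, to tolerate the light edits mentioned above.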