GPTZero
AI text detector built by a Princeton student in January 2023. Useful as a screen, dangerous as a verdict.
What should journalists know about GPTZero?
Edward Tian launched GPTZero in January 2023 as a Princeton senior, weeks after ChatGPT's public release, and watched it become the default AI detector almost by accident. The product has matured: it now handles bulk scans, integrates with learning management systems (LMS), claims to be de-biased for ESL writers, and reports independent benchmarking from Penn State and the third-party RAID test at 95.7% AI recall and a 1% human false-positive rate. The company's own framing is honest about uncertainty: scores are probabilities, not verdicts.

The independent picture is messier. A 2023 Stanford study found AI detectors flagged 61.3% of human-written TOEFL essays by non-native English speakers as AI-generated. A Ryne AI test of more than 100,000 texts found GPTZero's real-world false-positive rate closer to 18% than the claimed 0.5%. In February 2025 a Yale School of Management student sued the university after GPTZero flagged his exam, alleging discrimination against non-native English speakers. A University of Michigan student filed a similar suit in 2026. Yale, UCLA, UC Berkeley, UC San Diego, Waterloo, Michigan State, and Vanderbilt have disabled or restricted AI detection tools entirely.

For journalists, the takeaway is narrower than the academic mess: GPTZero is reasonable as a first-pass screen on suspected AI-generated press releases, comment-section flooding, or sock-puppet content, anywhere a probability is useful and the consequences of a false positive are reversible. It is not appropriate as a sole basis for accusing a named human of using AI. The model also degrades against new LLMs and against light human editing of LLM output. Treat scores as a starting point for reporting, never the headline.
Good for
Screening suspected AI-generated press releases, astroturf comments, and bulk content floods. Internal newsroom tools that flag possibly synthetic submissions for human review. Quick triage when you have many texts and limited time.
Not appropriate for
Accusing a named individual of AI use without corroboration. Detecting AI in writing by non-native English speakers, where bias is well documented. Delivering definitive verdicts in any context. Detecting lightly edited or paraphrased AI text. Detecting output from models released after the detector's last training update.
Security & Privacy
Data is scrambled (encrypted) while being sent to their servers
Data is scrambled (encrypted) when stored on their servers
Where servers are located — affects which governments can request your data
Privacy policy summary
Two paths with very different privacy postures. API submissions are not stored and not used for product improvement. Dashboard submissions (paste-in or upload) are stored, separated from user identity, and may be used in anonymized form for model training. Anonymized text used for training is retained permanently — even after account deletion. Personal data is otherwise deleted within three months of account termination.
How to protect yourself:
Use the API path, not the web dashboard, for any text you don't want retained or used for training. Never paste confidential source material, unpublished drafts, or pre-publication reporting into the web tool. Strip identifying metadata before submission. Treat scores as probabilities, never verdicts — and never name an individual based solely on a GPTZero score. Cross-check suspicious results with a second detector and human editorial judgment. Be especially cautious with text from non-native English speakers.
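If you do route text through the API path, the submission is an ordinary authenticated JSON POST. A minimal Python sketch follows; the endpoint URL, `x-api-key` header, and `document` field are assumptions based on GPTZero's public v2 API documentation and may have changed, and the flag threshold is an editorial choice for triage, not a vendor recommendation.

```python
# Hedged sketch of the API path (submissions not stored, per the vendor),
# as opposed to the web dashboard (submissions stored, may train models).
# Endpoint and field names are assumptions from the public v2 API docs.
import json
import urllib.request

API_URL = "https://api.gptzero.me/v2/predict/text"  # assumed endpoint

def build_request(text: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) the scoring request."""
    body = json.dumps({"document": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def triage(ai_probability: float, flag_threshold: float = 0.9) -> str:
    """Turn a score into a triage label, never an accusation.
    Scores are probabilities; anything flagged goes to a human editor."""
    if ai_probability >= flag_threshold:
        return "flag for human review"
    return "no action"
```

The deliberate design choice is that `triage` can only escalate to a human reviewer: there is no "confirmed AI" outcome, matching the rule that a score is never a verdict. For example, `triage(0.95)` returns "flag for human review".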
Technical security is standard commercial SaaS — HTTPS, U.S. jurisdiction, reasonable retention for personal data. The caution is editorial. Dashboard submissions are stored and may be used for training in anonymized form permanently. Documented bias against non-native English speakers and active lawsuits over wrongful accusations make this a tool to use defensively, never offensively. Use the API path for sensitive text. Never base a published claim on a score alone.
Who Owns This
Known issues
Documented false-positive bias against non-native English writers (Stanford 2023: 61.3% of TOEFL essays flagged). Independent benchmarks (Ryne AI) measure real-world false-positive rates near 18% versus the company's 0.5% claim. February 2025 Yale lawsuit and 2026 University of Michigan lawsuit allege wrongful academic discipline based on GPTZero scores. Yale, UCLA, UC Berkeley, UC San Diego, Waterloo, Michigan State, and Vanderbilt have disabled or restricted AI detection. Detection accuracy degrades against new LLMs and against lightly edited or paraphrased AI output. Dashboard submissions may be retained for training in anonymized form indefinitely.
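The gap between the claimed false-positive rate and the independently measured one matters more than it looks, because of base rates. A quick Bayes check, using the recall and false-positive figures quoted above; the assumption that 10% of a screened batch is actually AI-written is illustrative, not a measurement.

```python
# Base-rate check on the disputed false-positive numbers.
# Recall and FPR figures come from this page; the 10% prevalence
# of AI text in a screened batch is an illustrative assumption.

def flag_precision(recall: float, fpr: float, prevalence: float) -> float:
    """P(text is AI | detector flagged it), by Bayes' rule."""
    true_flags = recall * prevalence       # AI texts correctly flagged
    false_flags = fpr * (1 - prevalence)   # human texts wrongly flagged
    return true_flags / (true_flags + false_flags)

# Vendor's numbers: 95.7% recall, 1% FPR -> about 91% of flags are real.
vendor = flag_precision(0.957, 0.01, 0.10)

# Ryne AI's measured 18% FPR -> only about 37% of flags are real:
# most flagged texts would be human-written.
independent = flag_precision(0.957, 0.18, 0.10)
```

Under the vendor's rates a flag is usually right; under the independently measured rate, most flags land on human writing, which is exactly why a score can screen but never accuse.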
Pricing
Free tier: scans up to 10,000 characters, no account required for small jobs. Essential $9.99/month (150K words). Premium $15.99/month (300K words). Professional $29.99/month (500K words, 250-file batch, LMS integration). Annual plans discount roughly 45%.
This is an editorial assessment based on publicly available information as of 2026-04-07, using our published methodology. Independent security review is pending. Security posture can change at any time. This is not a guarantee of safety.