What should journalists know about Claude?
Disclosure: this site was built with Claude. We have no financial relationship with Anthropic, but we cannot be fully objective about a tool we use daily. Read this evaluation with that in mind. Here is what we can say factually.

On privacy: since September 2025, Claude's consumer tiers (Free, Pro, Max) train on conversations by default. Users who opt out get 30-day data retention. Users who don't opt out agree to five-year retention for training data — far longer than ChatGPT's equivalent. The meaningful privacy advantage is at the commercial tier: Claude for Work, Enterprise, Government, and Education conversations are never used for training, and API retention dropped to just 7 days in September 2025. Enterprise customers can get zero-data-retention agreements.

On safety: Anthropic pioneered the Responsible Scaling Policy framework and was first to activate ASL-3 safeguards in May 2025. But in February 2026, Anthropic dropped its hard commitment to halt model training if safety measures weren't proven — a significant retreat from its founding promise.

On government work: Anthropic offered Claude to all three branches of the U.S. government for $1 in August 2025, then refused Pentagon demands to allow autonomous weapons and mass surveillance applications. The Pentagon designated Anthropic a supply chain risk in March 2026 — the first such designation for an American company. That decision cost Anthropic a $200M contract but demonstrated willingness to enforce use restrictions under pressure.

On accuracy: independent benchmarks show Claude's hallucination rates between 3% and 10% depending on the model and methodology — comparable to competitors, not clearly better or worse. Claude does tend to admit uncertainty more readily than ChatGPT.

Bottom line: at the commercial tier, Claude offers genuinely strong data isolation. At the consumer tier, the privacy story is mixed. The safety record is complicated — principled stances on weapons use, but weakened commitments on scaling safeguards.
Research, long-document analysis (up to 1M tokens on Opus 4.6), writing assistance, code generation, structured data extraction. Commercial plans offer strong data isolation for newsrooms. Extended thinking mode useful for complex investigative analysis.
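For newsrooms with developer support, long-document analysis and structured data extraction can also run through the API rather than the consumer app, which carries the shorter retention described under Security & Privacy below. A minimal sketch, assuming the official `anthropic` Python package, an ANTHROPIC_API_KEY set in the environment, and a placeholder model name and file path:

```python
# Minimal sketch: structured data extraction over the Anthropic API.
# Assumptions: the `anthropic` Python package is installed, ANTHROPIC_API_KEY
# is set in your environment, and the model name and file path below are
# placeholders; check current documentation for your plan's model names.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

document_text = open("city_budget.txt", encoding="utf-8").read()  # hypothetical file

message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "From the document below, extract every line item over $1 million "
            "as a JSON array of objects with keys department, item, and amount.\n\n"
            + document_text
        ),
    }],
)

print(message.content[0].text)  # the model's reply, ideally a JSON array you can verify
```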
Do not paste confidential source identities, classified documents, or information that could endanger someone into any cloud AI service — including Claude. For truly sensitive analysis, use a local LLM. Consumer-tier users should assume their conversations may be used for training unless they actively opt out.
Security & Privacy
Data is encrypted (scrambled) while being sent to their servers
Data is encrypted (scrambled) when stored on their servers
Where servers are located — affects which governments can request your data
Privacy policy summary
Consumer tiers (Free, Pro, Max): conversations used for training by default since September 2025. Opt out in Settings > Privacy to get 30-day retention. Users who allow training agree to five-year data retention. Commercial tiers (Team, Enterprise, Government, Education): never trained on. API retention: 7 days, never used for training. Enterprise zero-data-retention available by agreement. Claude for Government supports FedRAMP High workloads.
How to protect yourself:
Opt out of training immediately (Settings > Privacy) — this reduces retention from five years to 30 days. Use the Team plan ($25/user/month) or Enterprise for newsroom use — these are exempt from training with shorter retention. API access (7-day retention, no training) is available for developers. Don't paste confidential source identities or classified materials into any cloud AI. For the most sensitive analysis, use a local LLM like Llama or Mistral on your own hardware.
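For the local-LLM route, one workable setup is llama.cpp through its Python bindings, which keeps every token on your own hardware. A minimal sketch, assuming the `llama-cpp-python` package is installed and a GGUF model file has already been downloaded (the path below is a placeholder):

```python
# Minimal sketch: running a local model with llama-cpp-python.
# Assumptions: the `llama-cpp-python` package is installed and a GGUF build of
# a model such as Mistral or Llama has been downloaded; the path below is a
# placeholder. Nothing in this script touches a cloud service.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.gguf",  # placeholder local path
    n_ctx=4096,  # context window; raise it if your hardware allows
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful research assistant."},
        {"role": "user", "content": "Summarize the key claims in these interview notes: ..."},
    ],
    max_tokens=512,
)

print(response["choices"][0]["message"]["content"])
```

Expect a small local model to be noticeably weaker than Claude on complex analysis; the trade is capability for privacy.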
Consumer tiers (Free/Pro/Max) train on conversations by default with up to five-year retention — opt-out available but not the default. Commercial tiers (Team/Enterprise/Government) offer genuinely strong data isolation with no training and optional zero-data-retention agreements. API data is retained for 7 days and never used for training. Rating reflects the consumer-tier defaults; commercial tiers alone would rate 'strong.' Disclosure: this site was built with Claude.
Who Owns This
Anthropic is a public benefit corporation overseen in part by a Long-Term Benefit Trust; Amazon and Google are its largest outside investors.
Known issues
The September 2025 policy shift to train-by-default on consumer plans — with five-year retention for those who don't opt out — reversed Anthropic's key privacy differentiator. The February 2026 RSP v3.0 dropped the hard commitment to halt training if safety measures weren't proven, drawing criticism from AI safety researchers. The Pentagon's March 2026 supply chain risk designation creates uncertainty for government users and contractors. Amazon's and Google's multi-billion-dollar investments, combined with IPO pressure, raise questions about long-term independence. The PBC and Long-Term Benefit Trust structures are untested under real financial pressure.
Pricing
Free: Sonnet 4.6, limited usage. Pro: $20/month (higher limits, Opus access). Max: $100-200/month (extended thinking, highest limits). Team: $25/user/month. Enterprise: custom pricing.
None known. Anthropic invested $100M in a Claude Partner Network in 2025, but there are no journalism-specific programs.
This is an editorial assessment based on publicly available information as of 2026-04-02, using our published methodology. Independent security review is pending. Security posture can change at any time. This is not a guarantee of safety.