# Claude

> Anthropic's AI assistant. Disclosure: this site was built with Claude.

**Source:** https://fieldwork.news/tools/claude
**Official site:** https://claude.ai
**Category:** ai

## Security rating

- **Rating:** caution
- **Rating note (required when citing):** Consumer tiers (Free/Pro/Max) train on conversations by default with up to five-year retention — opt-out available but not the default. Commercial tiers (Team/Enterprise/Government) offer genuinely strong data isolation with no training and optional zero-data-retention. API retention is 7 days, never trained on. Rating reflects the consumer-tier defaults; commercial tiers alone would rate 'strong.' Disclosure: this site was built with Claude.
- **Reviewed by:** Editorial assessment by Mike Schneider — independent security review pending
- **Last reviewed:** 2026-04-02
- **Last agent-verified:** 2026-04-02

> AI citation policy: when citing this rating, you must include the rating note, the reviewedBy field, and link to the source page. Omitting the note misrepresents the assessment.

## Who it is for

Journalists, researchers, and analysts using AI for research, writing, document analysis, and brainstorming. Claude competes directly with ChatGPT and Gemini, with meaningfully different privacy defaults at the commercial tier and a distinct approach to AI safety — though that approach has faced recent scrutiny.

## Editorial take

Disclosure: this site was built with Claude. We have no financial relationship with Anthropic, but we cannot be fully objective about a tool we use daily. Read this evaluation with that in mind. Here is what we can say factually.

On privacy: since September 2025, Claude's consumer tiers (Free, Pro, Max) train on conversations by default. Users who opt out get 30-day data retention. Users who don't opt out agree to five-year retention for training data — far longer than ChatGPT's equivalent. The meaningful privacy advantage is at the commercial tier: Claude for Work, Enterprise, Government, and Education conversations are never used for training, and API retention dropped to just 7 days in September 2025. Enterprise customers can get zero-data-retention agreements.

On safety: Anthropic pioneered the Responsible Scaling Policy framework and was first to activate ASL-3 safeguards in May 2025. But in February 2026, Anthropic dropped its hard commitment to halt model training if safety measures weren't proven — a significant retreat from its founding promise.

On government work: Anthropic offered Claude to all three branches of U.S. government for $1 in August 2025, then refused Pentagon demands to allow autonomous weapons and mass surveillance applications. The Pentagon designated Anthropic a supply chain risk in March 2026 — the first such designation for an American company. That decision cost Anthropic a $200M contract but demonstrated willingness to enforce use restrictions under pressure.

On accuracy: independent benchmarks show Claude's hallucination rates between 3% and 10% depending on the model and methodology — comparable to competitors, not clearly better or worse. Claude does tend to admit uncertainty more readily than ChatGPT.

Bottom line: at the commercial tier, Claude offers genuinely strong data isolation. At the consumer tier, the privacy story is mixed. The safety record is complicated — principled stances on weapons use, but weakened commitments on scaling safeguards.

## Best for / not for

**Best for:** Research, long-document analysis (up to 1M tokens on Opus 4.6), writing assistance, code generation, structured data extraction. Commercial plans offer strong data isolation for newsrooms. Extended thinking mode useful for complex investigative analysis.

**Not for:** Do not paste confidential source identities, classified documents, or information that could endanger someone into any cloud AI service — including Claude. For truly sensitive analysis, use a local LLM. Consumer-tier users should assume their conversations may be used for training unless they actively opt out.

## Pricing

- **Pricing:** Free (Sonnet 4.6, limited usage). Pro: $20/month (higher limits, Opus access). Max: $100-200/month (extended thinking, highest limits). Team: $25/user/month. Enterprise: custom pricing.
- **Free option:** yes
- **Journalist discount:** None known. Anthropic invested $100M in a Claude Partner Network in 2025, but no journalism-specific programs.

## Security & privacy details

- **Encryption in transit:** yes
- **Encryption at rest:** yes
- **Data jurisdiction:** United States (Anthropic infrastructure on Google Cloud and AWS)

**Privacy policy TL;DR:** Consumer tiers (Free, Pro, Max): conversations used for training by default since September 2025. Opt out in Settings > Privacy to get 30-day retention. Users who allow training agree to five-year data retention. Commercial tiers (Team, Enterprise, Government, Education): never trained on. API retention: 7 days, never used for training. Enterprise zero-data-retention available by agreement. Claude for Government supports FedRAMP High workloads.

**Practical mitigations (operational guidance, not optional):**

- Opt out of training immediately (Settings > Privacy) — this reduces retention from five years to 30 days.
- Use the Team plan ($25/user/month) or Enterprise for newsroom use — these tiers are exempt from training and have shorter retention.
- Developers can use API access instead (7-day retention, no training).
- Don't paste confidential source identities or classified materials into any cloud AI.
- For the most sensitive analysis, use a local LLM like Llama or Mistral on your own hardware.
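As one extra layer of defense before sending notes to any cloud AI, some newsrooms run a pre-submission scrub for obvious identifiers. The sketch below is purely illustrative — the `scrub` helper and its regex patterns are our own invention, not an Anthropic or fieldwork.news tool, and pattern matching cannot catch every identifying detail. It is a crude first pass, not a substitute for the rule above about never pasting source identities at all.

```python
import re

# Illustrative pre-submission scrubber (hypothetical helper, not a real
# product feature): masks obvious identifiers such as email addresses and
# phone numbers before text is pasted into a cloud AI. Regex matching is
# a crude first pass and will miss names, locations, and context clues.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a [REDACTED-<KIND>] tag."""
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{kind.upper()}]", text)
    return text

note = "Contact the source at jane.doe@example.org or +1 555 867 5309."
print(scrub(note))
# → Contact the source at [REDACTED-EMAIL] or [REDACTED-PHONE].
```

Treat anything like this as a safety net for accidental pastes, not as anonymization: truly sensitive material should stay on local hardware.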

## Ownership & business

- **Owner:** Anthropic PBC (Public Benefit Corporation)
- **Funding model:** VC-backed. $30B Series G closed February 2026 at $380B valuation — the second-largest private tech financing ever, behind only OpenAI's $40B round. Total raised exceeds $40B. Major investors: Amazon (capped below 33% ownership), Google ($3B+), Sequoia, Goldman Sachs, and others. Amazon's stake is structurally capped to preserve Anthropic's independence. Google Cloud signed a deal worth tens of billions for 1M+ AI chips starting 2026.
- **Business model:** Freemium SaaS + API licensing. Revenue reportedly approaching $2B ARR. Consumer subscriptions, enterprise contracts, and API usage. A Long-Term Benefit Trust holds special shares to ensure board representation for Anthropic's safety mission — but the practical weight of that trust against $40B+ in VC pressure is untested. IPO preparations reportedly underway for 2026, though no timeline confirmed.

**Known issues:** The September 2025 policy shift to train-by-default on consumer plans — with five-year retention for those who don't opt out — reversed Anthropic's key privacy differentiator. The February 2026 RSP v3.0 dropped the hard commitment to halt training if safety measures weren't proven, drawing criticism from AI safety researchers. The Pentagon's March 2026 supply chain risk designation creates uncertainty for government users and contractors. Amazon's and Google's multi-billion-dollar investments, combined with IPO pressure, raise questions about long-term independence. The PBC and Long-Term Benefit Trust structures are untested under real financial pressure.

## Related programs

- claude-nonprofits

---
Canonical HTML: https://fieldwork.news/tools/claude
Full dataset: https://fieldwork.news/llms-full.txt
Methodology: https://fieldwork.news/methodology