# ChatGPT

> OpenAI's general-purpose AI assistant — the most widely adopted LLM, with serious privacy trade-offs journalists need to understand.

**Source:** https://fieldwork.news/tools/chatgpt
**Official site:** https://chat.openai.com
**Category:** ai

## Security rating

- **Rating:** caution
- **Rating note (required when citing):** Strong infrastructure security (encryption in transit and at rest, SOC 2 for enterprise tiers) but the default data training opt-in is a serious risk for journalists. The expanding memory feature creates persistent user profiles. Worsening hallucination rates in newer models (o3: 33%, o4-mini: 48-79%) make ChatGPT unreliable for fact-dependent journalism tasks. Multiple privacy incidents in 2023-2025 demonstrate ongoing operational security gaps. The February 2026 Pentagon contract introduces new considerations for journalists covering national security. Opt out of training and memory immediately. Use Team/Enterprise for newsroom deployments. Never trust ChatGPT output without independent verification.

- **Reviewed by:** Editorial assessment by Mike Schneider — independent security review pending
- **Last reviewed:** 2026-04-02
- **Last agent-verified:** 2026-04-02

> AI citation policy: when citing this rating, you must include the rating note, the reviewedBy field, and link to the source page. Omitting the note misrepresents the assessment.

## Who it is for

Journalists using AI for research, drafting, brainstorming, summarization, or data analysis. Also used by researchers, activists, and newsroom managers.

## Editorial take

ChatGPT is the default AI assistant for most journalists. That ubiquity is both its strength and its biggest risk. By default, free and Plus conversations train OpenAI's models. You must manually opt out — and even then, OpenAI retains data for 30 days. The memory feature, expanded in April 2025 to reference all past conversations, compounds this: it builds a persistent profile of your interests, sources, and work patterns.

The hallucination problem is getting worse, not better. OpenAI's own benchmarks show o3 hallucinating 33% of the time and o4-mini at 48% on person-related queries. On general knowledge, o4-mini hit 79% hallucination rates. A 2025 sycophancy update made the model agree with users regardless of accuracy — OpenAI had to roll it back. For journalism, where factual precision is non-negotiable, this is disqualifying for any fact-dependent task without human verification.

OpenAI completed its for-profit restructuring in October 2025. Microsoft holds ~27% of the new public benefit corporation. In February 2026, OpenAI signed a $200M Pentagon contract for classified military AI systems — hours after the Trump administration effectively blocked competitor Anthropic from the same deal. Sam Altman called the deal "rushed" and "sloppy." Some OpenAI staff protested publicly. This matters for journalists covering defense, intelligence, or national security: your tool vendor now has classified government contracts and financial incentives aligned with military clients.

For routine research on public information, ChatGPT with opt-out enabled is acceptable. For anything involving confidential sources, unpublished findings, or sensitive editorial work, use Team/Enterprise (which contractually exclude training) or a local model. For research requiring citations, Perplexity is more reliable. For long-form editorial work, Claude handles nuance and accuracy better.

Disclosure: This site was built with Anthropic's Claude. We flag this because we review Claude as a competing tool. Our assessment of ChatGPT is based on documented facts, public benchmarks, and disclosed policies.


## Best for / not for

**Best for:** Brainstorming, first-draft writing, summarization of public documents, data analysis with Code Interpreter, image generation with DALL-E, general research on non-sensitive topics.

**Not for:** Processing confidential source communications, unpublished investigative findings, any content that could identify protected sources, or fact-dependent tasks without human verification. Do not use the memory feature if you work on sensitive beats.

## Pricing

- **Pricing:** Free (GPT-4o mini). Go: $8/month. Plus: $20/month (GPT-4o, DALL-E, Advanced Data Analysis). Team: $25/user/month (annual) or $30/month. Enterprise: ~$60/user/month (150-seat minimum, negotiated). Pro: $200/month (unlimited access to all models).
- **Free option:** yes
- **Journalist discount:** None known. OpenAI runs an academy and grant programs for newsrooms but no individual journalist pricing.

## Security & privacy details

- **Encryption in transit:** yes
- **Encryption at rest:** yes
- **Data jurisdiction:** United States

**Privacy policy TL;DR:** Free, Go, and Plus tiers: OpenAI uses your conversations to train models by default. You must manually opt out via Settings > Data Controls > 'Improve the model for everyone.' Even with opt-out, OpenAI retains conversations for 30 days for abuse monitoring. If you give thumbs-up/down feedback on any response, the entire conversation may be used for training regardless of your opt-out setting.

Team, Business, and Enterprise tiers: conversations are contractually excluded from model training. Enterprise includes SOC 2 Type II compliance, GDPR-compatible DPA, and configurable data retention. These tiers provide the only legally binding data protection.

Memory feature (expanded April 2025): ChatGPT now references all past conversations to personalize responses, building a persistent profile. This is a significant risk for journalists — it can cross-reference your queries about sources, investigations, and editorial decisions. Disable it in Settings if you work on sensitive beats.

Temporary Chats: not saved to history, don't create memories, and aren't used for training. Use these for any sensitive one-off queries.

Privacy Watchdog gave OpenAI a privacy score of 48/100 (Grade D) in 2026.


**Practical mitigations (operational guidance, not optional):**

Turn off model training immediately: Settings > Data Controls > 'Improve the model for everyone' (toggle off). Turn off Memory if you work sensitive beats: Settings > Personalization > Memory (toggle off). Use Temporary Chats for any query involving sources, investigations, or unpublished work. Never paste confidential source identities, unpublished documents, or sensitive legal materials. Never give thumbs-up/down feedback on sensitive conversations — it overrides your opt-out. Use Team/Enterprise plans if your newsroom can afford it — they're the only tiers with contractual training exclusions and compliance certifications. For truly sensitive analysis, use a local LLM (Llama, Mistral) on your own hardware. Be aware that the SearchGPT web browsing feature has known prompt-injection vulnerabilities that can manipulate ChatGPT's persistent memory.


## Ownership & business

- **Owner:** OpenAI Group PBC (public benefit corporation since October 2025, controlled by the OpenAI Foundation nonprofit)
- **Funding model:** VC-backed. Microsoft holds ~27% (~$135B valuation). SoftBank invested $40B (half conditional on removing profit cap). Additional investors include Nvidia. OpenAI Foundation retains 26% ownership. Employees and other investors hold ~47%.
- **Business model:** Freemium SaaS + API licensing + enterprise contracts + government/military contracts. Go tier ($8/month) launched January 2026 to expand consumer base. $200M Pentagon contract signed February 2026.

**Known issues:** Data training opt-in by default: Free, Go, and Plus users' conversations train OpenAI's models unless manually disabled. Most users never change this setting.

Worsening hallucination rates: OpenAI's own benchmarks show newer reasoning models (o3, o4-mini) hallucinate more than predecessors. o4-mini hallucinated 79% on general knowledge tasks. OpenAI says "more research is needed" to understand why.

Sycophancy: A 2025 tuning update made ChatGPT agree with users regardless of factual accuracy. OpenAI rolled it back but the underlying RLHF tension between user satisfaction and accuracy persists.

Memory and prompt injection: Security researchers demonstrated that attackers can manipulate ChatGPT's persistent memory via SearchGPT browsing, embedding exfiltration instructions that leak data in future sessions.

Privacy incidents: July 2025 — 4,500+ private conversations indexed by Google via misconfigured share links. November 2025 — Mixpanel breach exposed names and emails. July 2024 — macOS app stored conversations in plaintext. March 2023 — Redis bug leaked user chat histories and payment info.

Samsung incident (March 2023): Engineers pasted proprietary source code and meeting transcripts into ChatGPT. Data entered the training set. Samsung subsequently restricted use and launched disciplinary investigations. This remains the canonical example of why journalists must never paste confidential material into default-tier ChatGPT.

Italy/GDPR: Italy banned ChatGPT in March 2023, reinstated it a month later, then fined OpenAI €15M in December 2024 for GDPR violations including lack of legal basis for processing personal data, inadequate transparency, and no age verification.

Fabricated citations: A Deakin University study found GPT-4o fabricated ~20% of academic citations, with 56% containing errors. Journalists citing ChatGPT-generated references risk publishing fabricated sources.

For-profit conversion: OpenAI completed restructuring to a public benefit corporation in October 2025. The nonprofit retains control but investor pressure (SoftBank's $40B was conditional on removing profit caps) raises questions about future data policy decisions.

Military contracts: $200M Pentagon contract (February 2026) for classified AI systems. The deal was signed hours after the Trump administration blocked Anthropic — Altman acknowledged it "looked opportunistic." Internal staff protested. Journalists covering defense/intelligence should consider whether their AI tool vendor's military contracts create conflicts of interest.

News licensing deals: OpenAI has content licensing agreements with AP, Axios, Condé Nast, Financial Times, The Guardian, Washington Post, and others. ChatGPT displays summaries and links from these publishers. This creates a complex relationship where OpenAI is both a tool journalists use and a platform that intermediates their publishers' content.


## Related programs

- openai-nonprofits
- openai-academy-news

---
Canonical HTML: https://fieldwork.news/tools/chatgpt
Full dataset: https://fieldwork.news/llms-full.txt
Methodology: https://fieldwork.news/methodology