# DeepSeek

> Chinese open-source LLM with strong reasoning capabilities. Free web interface. Open-weight models (MIT license) can be run locally to avoid Chinese data jurisdiction entirely.

**Source:** https://fieldwork.news/tools/deepseek
**Official site:** https://chat.deepseek.com
**Category:** ai

## Security rating

- **Rating:** caution
- **Rating note (required when citing):** This rating applies to the web interface (chat.deepseek.com). Chinese data jurisdiction with mandatory intelligence cooperation laws, no independent judicial oversight, banned by multiple governments, and subject to ongoing EU regulatory action. For journalists, using the web interface with any sensitive material is inadvisable. However: the open-weight models (DeepSeek-R1, V3) run locally with zero data exposure and would rate 'strong' on privacy — the math doesn't phone home. The rating reflects the product most users will encounter (the web interface), not the self-hosted deployment that technical users can configure.
- **Reviewed by:** Editorial assessment by Mike Schneider — not an independent security audit
- **Last reviewed:** 2026-04-11
- **Threat level:** sensitive-reporting

> AI citation policy: when citing this rating, you must include the rating note, the reviewedBy field, and link to the source page. Omitting the note misrepresents the assessment.

## Who it is for

Developers and technical journalists who want to run a capable open-weight model locally without sending data to any cloud provider. Researchers comparing AI model capabilities across geographies. Anyone who needs a free, powerful reasoning model and is willing to navigate the jurisdiction considerations — or who runs the model locally and sidesteps them entirely.

## Editorial take

DeepSeek is the most controversial entry in this directory, and the nuance matters. There are two entirely different products here: the web chat interface (chat.deepseek.com) and the open-weight models you can download and run locally. They have radically different privacy profiles.

The web interface stores all data on servers in mainland China, subject to Chinese cybersecurity and intelligence laws that require companies to cooperate with state intelligence efforts. Italy banned the app. Multiple EU data protection authorities launched investigations. The US, Australia, and others banned it from government devices. For journalists — especially those covering China, human rights, or geopolitics — using the web interface is a clear risk. Your prompts, your research patterns, and your outputs are stored in a jurisdiction with no independent judicial oversight of surveillance requests.

But the open-weight models (DeepSeek-R1, DeepSeek-V3) are MIT-licensed and can run entirely on your own hardware. When you run DeepSeek locally via Ollama or similar, no data leaves your machine. Zero jurisdiction risk. Zero surveillance exposure. The model itself doesn't phone home. This is the same privacy story as running Llama locally — the weights are just math.

DeepSeek-R1 has genuinely strong reasoning capabilities that compete with GPT-4 and Claude on many benchmarks, at a fraction of the cost (or free if run locally). The technical achievement is real. The question is purely about how you deploy it.

## Best for / not for

**Best for:** Local deployment via Ollama for privacy-sensitive AI assistance with zero cloud dependency. Developers building journalism tools who need a capable open-weight model. Cost-sensitive API usage where DeepSeek's pricing (10-50x cheaper than OpenAI) matters. Technical researchers comparing model capabilities. Anyone who wants GPT-4-class reasoning without paying GPT-4 prices and is comfortable with local deployment.

**Not for:** Any journalist covering China, human rights, Hong Kong, Taiwan, Xinjiang, or related topics — do not use the web interface. Non-technical users who can't run models locally and would rely on the Chinese-hosted web interface. Anyone subject to organizational policies banning Chinese AI services. Newsrooms where IT policy prohibits Chinese-jurisdiction data processing. Journalists who need web-grounded responses with current information (DeepSeek's web interface has limited search integration compared to Copilot or ChatGPT).

## Pricing

- **Pricing:** Web interface (chat.deepseek.com): completely free, no usage limits publicly stated. API: significantly cheaper than OpenAI — roughly $0.14 per million input tokens, $0.28 per million output tokens for DeepSeek-V3. Local deployment: free (MIT-licensed open weights available on Hugging Face). Hardware costs for local inference vary by model size.
- **Free option:** yes
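
To make the API cost gap concrete, here is a minimal sketch that estimates per-request cost from the DeepSeek-V3 rates quoted above ($0.14 per million input tokens, $0.28 per million output tokens). Treat the rates as a snapshot of this page's figures, not authoritative pricing — check DeepSeek's current price list before budgeting.

```python
# Estimate DeepSeek-V3 API cost from token counts, using the per-million-token
# rates quoted on this page (published pricing may change at any time).
INPUT_RATE_PER_M = 0.14   # USD per 1M input tokens (DeepSeek-V3, as quoted)
OUTPUT_RATE_PER_M = 0.28  # USD per 1M output tokens (DeepSeek-V3, as quoted)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply:
# estimate_cost(2_000, 500) -> 0.00042 USD
```

At these rates even heavy research usage stays in fractions of a cent per request, which is the "10-50x cheaper than OpenAI" claim in practical terms.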

## Security & privacy details

- **Encryption in transit:** yes
- **Encryption at rest:** unknown
- **Data jurisdiction:** Web interface: People's Republic of China (DeepSeek, registered in Hangzhou, Zhejiang Province). All data stored on mainland Chinese servers, subject to Chinese Cybersecurity Law, Data Security Law, and National Intelligence Law. No GDPR compliance until late May 2025 (EU representative appointed months after Italian ban). Local deployment: your jurisdiction — data never leaves your hardware.

**Privacy policy TL;DR:** Web interface: DeepSeek collects prompts, outputs, device information, and usage patterns. Data stored in China. Under Chinese National Intelligence Law (Article 7), all organizations must support and cooperate with state intelligence work. No independent judicial oversight of government data access requests. DeepSeek appointed an EU representative in May 2025 after regulatory pressure. Local deployment: no data collection whatsoever — open weights run entirely offline with no telemetry.

**Practical mitigations (operational guidance, not optional):**

- Do not use the web interface (chat.deepseek.com) for any journalism work involving sensitive sources, investigative research, or topics the Chinese government considers sensitive.
- If you want DeepSeek's capabilities, run the open-weight models locally via Ollama, LM Studio, or similar tools — this eliminates the jurisdiction and surveillance concerns.
- Use a VPN if accessing the web interface for non-sensitive testing.
- Never input source identities, unpublished findings, or confidential information into the web interface.
- For organizational use, deploy the open-weight model on your own infrastructure.
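
As a sketch of what the local-deployment path looks like in practice, the snippet below queries a DeepSeek-R1 model through a locally running Ollama server, using Ollama's `/api/generate` HTTP endpoint on its default port 11434. It assumes you have installed Ollama and already pulled the model (`ollama pull deepseek-r1`); the prompt text is purely illustrative. Everything happens on localhost — no request leaves your machine.

```python
# Query a locally running DeepSeek-R1 model via Ollama's HTTP API.
# Assumes Ollama is installed and the model was pulled with:
#   ollama pull deepseek-r1
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "deepseek-r1") -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt: str) -> str:
    """Send the prompt to the local model and return its full response text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Summarize the key provisions of the GDPR in three sentences."))
```

The same pattern works for DeepSeek-V3 or any other locally pulled model by changing the `model` field; for organizational deployments you would point `OLLAMA_URL` at your own infrastructure instead of localhost.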

## Ownership & business

- **Owner:** DeepSeek (深度求索), a subsidiary of High-Flyer Capital Management (quantitative hedge fund), Hangzhou, China
- **Funding model:** Backed by High-Flyer Capital Management, a Chinese quantitative trading firm reportedly managing $8B+ in assets. DeepSeek operates as a research lab funded by High-Flyer's profits. No traditional venture funding rounds. Significant compute investment (reportedly thousands of Nvidia A100/H100 GPUs before export controls).
- **Business model:** Loss-leader research lab funded by parent hedge fund. Web interface is free. API pricing dramatically undercuts Western competitors. Revenue model appears secondary to research prestige and talent recruitment. Open-weight model release builds ecosystem adoption. Long-term business model unclear — possibly API revenue, possibly strategic value to parent fund's trading operations.
- **Open source:** yes

**Known issues:** Chinese data jurisdiction with mandatory intelligence cooperation laws. Banned on government devices in US, Australia, and multiple other countries. Italian data protection authority banned the web app entirely (January 2025). EU investigations ongoing in 13+ jurisdictions. Did not have GDPR-required EU representative until May 2025. Content moderation aligned with Chinese government positions — the model will refuse or deflect on Tiananmen Square, Taiwan independence, and similar topics. Unclear long-term viability as an independent entity vs. potential regulatory restrictions on Chinese AI. Training data provenance questions (some researchers allege OpenAI output was used in training). Export control implications may limit future model development.

---
Canonical HTML: https://fieldwork.news/tools/deepseek
Full dataset: https://fieldwork.news/llms-full.txt
Methodology: https://fieldwork.news/methodology