
Ollama

Run AI models locally — your data never leaves your machine.

AI assistants
Open source
Strong
https://ollama.com/ Reviewed 2026-04-02 Editorial assessment by Mike Schneider — not an independent security audit

What should journalists know about Ollama?

Ollama is the privacy-first answer to cloud AI. When you're investigating a company and don't want your queries flowing to OpenAI or Anthropic servers, Ollama runs a capable model on your own hardware. The trade-off is quality — a local 7B model won't match Claude or GPT-4o on complex reasoning. But for summarization, drafting, and document Q&A on sensitive material, it's good enough, and the privacy guarantee is absolute. 165K+ GitHub stars, MIT license, one-command install, and as of v0.20 it supports 100+ models including Llama 4, Gemma 4, DeepSeek, and Qwen. The real risk isn't data leakage — it's that Ollama's API has had multiple critical CVEs (remote code execution, authentication bypass). If you expose the API to a network, you need to lock it down.
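To make this concrete, here is a minimal sketch (not an official example) of querying a locally running Ollama instance from Python. It assumes the default localhost port 11434 and the standard /api/generate endpoint; the model name and prompt are placeholders for whatever you have pulled.

```python
import requests

# Ask a locally running Ollama model to summarize text.
# Everything below talks to 127.0.0.1 only; nothing is sent off the machine.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # Ollama's default local API

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3.1",   # placeholder: any model you've pulled with `ollama pull`
        "prompt": "Summarize in three bullet points: <paste text here>",
        "stream": False,       # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Once the model is on disk, the same call works with the network disabled entirely, which is the point.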

Best for

Running AI on sensitive investigative material without cloud exposure. Summarizing leaked documents locally. Analyzing source communications offline. Processing court records, financial filings, or whistleblower documents where the queries themselves reveal what you're investigating.
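As a rough illustration of local document Q&A, the sketch below reads a file from disk and asks a question about it through Ollama's chat endpoint. The file name, question, and model name are placeholders; the document is processed entirely on your own machine.

```python
import requests
from pathlib import Path

# Local document Q&A sketch: the file never leaves this machine.
document = Path("filing.txt").read_text(encoding="utf-8")  # placeholder file name

resp = requests.post(
    "http://127.0.0.1:11434/api/chat",
    json={
        "model": "llama3.1",  # placeholder model name
        "messages": [
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": document + "\n\nQuestion: who are the named parties?"},
        ],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```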

Not for

Users who need GPT-4/Claude-level quality on complex reasoning tasks. Machines with less than 8GB RAM (even small models will struggle). Non-technical users uncomfortable with the command line — use GPT4All or LM Studio instead. Anyone who needs the API exposed on a network without a firewall (Ollama has no built-in authentication).

Security & Privacy

Encryption in transit Yes

Applies to model downloads and update checks, which use HTTPS; prompts, responses, and documents are never sent over the network.

Encryption at rest Yes

Models and chat data are stored only on your own disk, so at-rest protection depends on your device's disk encryption.

Data jurisdiction Local only — models run on your hardware. No data sent anywhere. Ollama Cloud (optional) uses cloud GPU infrastructure, but the core local tool has zero network dependency after model download.

Where servers are located — affects which governments can request your data

Security rating Strong

Privacy policy summary

Truly local. The only outgoing network call is an automatic update check that sends OS and architecture info (disable with OLLAMA_NO_TELEMETRY=1). No prompts, responses, or documents are transmitted. No account is required. Once a model is downloaded, Ollama works entirely offline — verified by running it with the network disabled.

How to protect yourself:

- Set OLLAMA_NO_TELEMETRY=1 to disable update checks.
- Bind the API to localhost only (the default) — never expose 0.0.0.0 without a reverse proxy and auth layer (see the sketch after this list).
- For air-gapped setups, download models on a separate machine and transfer them via USB.
- Use smaller quantized models (Q4) on laptops: a 7B Q4 model needs ~5GB RAM.
- Pair with Open WebUI for a ChatGPT-like interface.
- Keep Ollama updated — versions before 0.7.0 have known RCE vulnerabilities via malicious model files.
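The sketch below is one way (a convenience check, not a substitute for a firewall review) to confirm the API answers on loopback but not on your LAN address. The port is Ollama's default and the LAN-address lookup is a best-effort guess.

```python
import socket

PORT = 11434  # Ollama's default port; adjust if you changed OLLAMA_HOST

def can_connect(host: str) -> bool:
    """Return True if a TCP connection to host:PORT succeeds within 2 seconds."""
    try:
        with socket.create_connection((host, PORT), timeout=2):
            return True
    except OSError:
        return False

# Best-effort guess at this machine's LAN address; on some systems this resolves
# to a loopback alias, in which case test your real interface IP instead.
lan_ip = socket.gethostbyname(socket.gethostname())

print("loopback reachable:", can_connect("127.0.0.1"))    # expected: True
print(f"LAN ({lan_ip}) reachable:", can_connect(lan_ip))  # expected: False if bound to localhost only
```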

Truly local processing with zero data transmission earns a 'strong' rating for privacy. But that rating assumes localhost-only use. The moment you expose Ollama's API to a network, the rating drops to 'caution' — multiple critical CVEs (including a CVSS 9.3 auth bypass) show the API was not designed for untrusted network exposure. For the intended use case of local-only AI on sensitive documents, nothing is more private. Keep it updated and keep it on localhost, and the privacy guarantee holds.

Who Owns This

Owner Ollama Inc. (founded 2023 by Jeffrey Morgan, CEO, and Michael Chiang)
Funding Y Combinator (W21). Pre-seed $125K from YC, Sunflower Capital, Essence VC, Rogue Capital. Revenue hit $3.2M in 2024. Team grew from 21 to 46 employees by January 2026.
Business model Free local tool + optional Ollama Cloud (paid tiers for cloud GPU inference). Revenue comes from cloud subscriptions, not the local tool.

Known issues

Multiple critical CVEs:

- CVE-2024-37032 ('Probllama'): remote code execution, fixed in v0.1.34.
- CVE-2024-39720: out-of-bounds read causing crashes (CVSS 8.2), fixed in v0.1.46.
- CVE-2024-39721: DoS via resource exhaustion (CVSS 7.5), fixed in v0.1.34.
- CVE-2024-39722: server file existence disclosure (CVSS 7.5), fixed in v0.1.47.
- Critical out-of-bounds write via malicious model files in versions before 0.7.0.
- CVE-2025-63389: authentication bypass on API endpoints (CVSS 9.3), affecting v0.13.5 and earlier — no built-in API authentication exists, so any network-exposed instance is vulnerable.
- CVE-2025-51471: authentication bypass.
- CVE-2025-48889: arbitrary file copy.
- Windows installer code execution hijack reported in December 2024; the fix was still in progress as of April 2026.

Bottom line: keep Ollama updated and never expose the API port to untrusted networks (a quick version check is sketched after this list).
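One way to act on that bottom line is a quick version check against the local API. This sketch assumes the /api/version endpoint on the default port; the minimum version is illustrative (it mirrors the pre-0.7.0 model-file issue above) and should be raised to match the newest advisory.

```python
import re
import requests

MINIMUM = (0, 7, 0)  # illustrative floor; raise it to match the latest advisories

resp = requests.get("http://127.0.0.1:11434/api/version", timeout=5)
resp.raise_for_status()
version = resp.json()["version"]

# Compare only the numeric parts so suffixes like "-rc1" don't break parsing.
parts = tuple(int(n) for n in re.findall(r"\d+", version)[:3])
print(f"Ollama {version}:", "OK" if parts >= MINIMUM else "UPDATE NEEDED")
```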

Pricing

Free for local use. No account required. Ollama Cloud (optional): Free tier, Pro $20/month, Max $100/month for cloud GPU inference.

This is an editorial assessment based on publicly available information as of 2026-04-02, using our published methodology. Independent security review is pending. Security posture can change at any time. This is not a guarantee of safety.
