Independent. Vetted. No tracking.
The best tools for journalism.
243 tools evaluated for security, ownership, and trust. Independently vetted. No affiliate links.
Archive.today
Snapshot any web page and preserve it permanently, independent of the original site.
Downgraded from 'adequate' to 'caution' in April 2026. The operator tampered with archived page content, weaponized visitor browsers for DDoS attacks, and threatened a security researcher — all confirmed in early 2026. Wikipedia banned all links. FBI investigation ongoing. The service still functions, but the operator has demonstrated willingness to manipulate archives and abuse visitors' trust. Use as a secondary reference only, never as sole-source evidence.
CapCut
Free-to-start video editor from ByteDance (TikTok's parent). Fast, capable, massively adopted — and carrying the same data governance questions as TikTok itself.
The 'caution' rating reflects ByteDance's data governance structure: Chinese national security law applies to the parent company, the ToS grant a perpetual license to all uploaded content, a biometric data class-action is pending, and the legal framework for a US ban remains in place. CapCut has not published SOC 2, ISO 27001, or equivalent security certifications. For public social video with no sensitive content, the risk is manageable. For any journalistic material involving sources, unpublished work, or operational security, CapCut is inappropriate.
ChatGPT
OpenAI's general-purpose AI assistant — the most widely adopted LLM, with serious privacy trade-offs journalists need to understand.
Strong infrastructure security (encryption in transit and at rest, SOC 2 for enterprise tiers), but conversations are used for model training by default, which is a serious risk for journalists. The expanding memory feature creates persistent user profiles. Worsening hallucination rates in newer models (o3: 33%, o4-mini: 48-79%) make ChatGPT unreliable for fact-dependent journalism tasks. Multiple privacy incidents in 2023-2025 demonstrate ongoing operational security gaps. The February 2026 Pentagon contract introduces new considerations for journalists covering national security. Opt out of training and memory immediately. Use Team/Enterprise for newsroom deployments. Never trust ChatGPT output without independent verification.
Claude
Anthropic's AI assistant. Disclosure: this site was built with Claude.
Consumer tiers (Free/Pro/Max) train on conversations by default with up to five-year retention — opt-out available but not the default. Commercial tiers (Team/Enterprise/Government) offer genuinely strong data isolation with no training and optional zero-data-retention. API retention is 7 days, never trained on. Rating reflects the consumer-tier defaults; commercial tiers alone would rate 'strong.' Disclosure: this site was built with Claude.
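For newsroom work that should stay off the consumer tiers, the API path is the practical alternative the rationale points to. A minimal sketch, assuming the official anthropic Python SDK is installed and an ANTHROPIC_API_KEY is set in the environment; the model name is illustrative, not a recommendation:

```python
# Minimal sketch: querying Claude via the API tier (7-day retention, not
# trained on) instead of the consumer web interface.
# Assumes the `anthropic` SDK and an ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id; check current docs
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this public press release: ..."}],
)
print(response.content[0].text)
```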
DeepSeek
Chinese open-source LLM with strong reasoning capabilities. Free web interface. Open-weight models (MIT license) can be run locally to avoid Chinese data jurisdiction entirely.
This rating applies to the web interface (chat.deepseek.com). Chinese data jurisdiction with mandatory intelligence cooperation laws, no independent judicial oversight, banned by multiple governments, and subject to ongoing EU regulatory action. For journalists, using the web interface with any sensitive material is inadvisable. However: the open-weight models (DeepSeek-R1, V3) run locally with zero data exposure and would rate 'strong' on privacy — the math doesn't phone home. The rating reflects the product most users will encounter (the web interface), not the self-hosted deployment that technical users can configure.
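For readers weighing the self-hosted route, the sketch below shows what local inference looks like in practice, assuming the Hugging Face transformers, torch, and accelerate packages and hardware sized for the checkpoint; the distilled 7B model id is only an illustrative, workstation-scale choice:

```python
# Minimal sketch: running a DeepSeek open-weight model locally so no text
# leaves the machine. Assumes `transformers`, `torch`, and `accelerate` are
# installed; the distilled 7B checkpoint is an illustrative size choice.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "List the key dates mentioned in this public court filing: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```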
ElevenLabs
The leading AI voice platform. Text-to-speech, voice cloning, dubbing, audio isolation. $11B valuation. Powerful and dangerous in the same breath.
ElevenLabs is SOC 2 Type II compliant with HIPAA and zero-retention options on Enterprise plans. Technical security is appropriate for a company at this scale. The 'caution' rating is editorial, not technical: voice cloning misuse is documented and ongoing, the consent verification flow is weaker than newsroom standards require, and AI audio carries publication risk that the tool itself cannot mitigate. Use the product with a policy in place, not before.
GeoSpy
AI geolocation from photos. Upload an image, get predicted coordinates — no metadata required. Now restricted to law enforcement and enterprise clients.
Downgraded from 'adequate' to 'caution.' Images are uploaded to servers operated by a company whose primary customers are law enforcement. Data retention terms are vague. No transparency report. No independent audit. The tool was publicly available for months with documented stalking misuse before access was restricted — and only after press pressure, not internal policy. Graylark's business model is surveillance; journalists should weigh whether that alignment creates risks for their sources and reporting.
Google Gemini
Google's AI assistant. Deep Workspace integration. The hallucination problem is real.
Strong infrastructure security at the Workspace tier: SOC 1/2/3, ISO 42001, FedRAMP High, HIPAA, client-side encryption. Workspace Business/Enterprise provide genuine data isolation with no model training on customer data. The free tier trains by default with human review of anonymized conversations — a significant risk for journalists. The hallucination problem is the most serious concern: 88-91% hallucination rates on ungrounded queries make Gemini unreliable for fact-dependent journalism without source documents. Use Workspace tiers for newsroom deployments. Never trust ungrounded Gemini outputs without verification.
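Grounding in practice means putting the source document in front of the model and constraining answers to it. A minimal sketch, assuming the google-generativeai Python package and a GOOGLE_API_KEY; the model name and filename are illustrative:

```python
# Minimal sketch: a grounded Gemini query that answers only from a supplied
# source document. Assumes the `google-generativeai` package and a
# GOOGLE_API_KEY environment variable; model name and filename are illustrative.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

source_text = open("city_budget_2025.txt").read()  # hypothetical source document
prompt = (
    "Answer only from the document below. If the answer is not in the document, say so.\n\n"
    + source_text
    + "\n\nQuestion: What is the police overtime line item?"
)
print(model.generate_content(prompt).text)
```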
Google Pinpoint
AI document analysis for investigative journalism.
Strong infrastructure security (Google Cloud encryption, private-by-default collections) but documents are processed on Google's servers under Google's broad privacy policy. Human reviewers can sample your prompts. No journalist-specific data protection guarantees. Use a dedicated account and keep sensitive source materials off the platform entirely.
GPTZero
AI text detector built by a Princeton student in January 2023. Useful as a screen, dangerous as a verdict.
Technical security is standard commercial SaaS — HTTPS, U.S. jurisdiction, reasonable retention for personal data. The caution is editorial. Dashboard submissions are stored permanently and may be used for training in anonymized form. Documented bias against non-native English speakers and active lawsuits over wrongful accusations make this a tool to use defensively, never offensively. Use the API path for sensitive text. Never base a published claim on a score alone.
Grammarly
Dominant grammar and writing assistant with 30 million daily users. Free tier. Processes all text on company servers — opt-out from AI training is available but is not the default.
Strong infrastructure security: encryption in transit (TLS 1.2) and at rest (AES-256), SOC 2 Type II, ISO 27001/27017/27018, HIPAA option for Enterprise. The concern is not infrastructure — it is the data model. All text processing is server-side with no local option. AI training is enabled by default for individual users. The browser extension processes every text field indiscriminately. Enterprise tier provides contractual protections, but individual journalists on Free or Pro have limited recourse. The rapid corporate transformation (three acquisitions, rebrand, new CEO) adds uncertainty about future data practices. Opt out of training, disable the extension on sensitive sites, and never process confidential source material through Grammarly.
Immersive Translate
Browser extension for bilingual side-by-side web page translation. 20+ AI translation engines. Chrome Best Extension 2024. Read foreign-language sources with original and translation visible together.
Two documented security incidents in 2024–2025: an XSS vulnerability and a critical data exposure through the snapshot feature that leaked user documents to publicly accessible cloud storage. Text is sent to third-party translation APIs by design — this is functional, not a flaw, but journalists must understand that every translated page leaves their device. Data controller is Funstory.ai Limited (Hong Kong) with primary storage in South Korea and processing through Chinese cloud providers (Alibaba, Tencent). No disclosed security certifications. No public bug bounty or vulnerability disclosure program. Google Analytics tracks usage. The translation quality is excellent and the bilingual UX is best-in-class, but the security posture requires caution for any use involving sensitive material.
Midjourney
The most popular AI image generator. Produces high-quality stylized and photorealistic output. No Content Credentials, no provenance trail, no IP indemnification for most users.
Midjourney is a well-funded, profitable company with reasonable infrastructure security. The 'caution' rating reflects the absence of C2PA Content Credentials (a significant gap for editorial use), the lack of IP indemnification for most users, active copyright litigation, default public visibility of all generations, and no explicit commitment regarding training on user content. For non-editorial creative work these are manageable risks; for journalism with provenance requirements they are disqualifying.
Octoparse
No-code visual web scraper. Point-and-click data extraction with cloud execution, IP rotation, and 469+ pre-built scraper templates.
The dual corporate structure — U.S. subsidiary with Chinese parent company — is the primary concern. Cloud-scraped data passes through infrastructure controlled by a company with roots in Shenzhen. The company claims GDPR, CCPA, and Privacy Shield compliance, and its cloud providers have SOC 2 and ISO 27001 certifications. But Meta's 2022 lawsuit against Octopus Data for scraping Facebook/Instagram data raises questions about corporate oversight. For public data scraping, the risk is manageable. For sensitive investigations, use the local extraction mode or switch to open-source scraping tools you control entirely.
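As a sense of what 'open-source scraping tools you control entirely' looks like, here is a minimal local scrape with requests and BeautifulSoup; the URL and CSS selector are placeholders, and robots.txt, rate limits, and platform terms still apply:

```python
# Minimal sketch of a self-controlled alternative: a local scrape with
# requests + BeautifulSoup, so no target list or page content passes through
# a third-party cloud. The URL and selector are placeholders.
import requests
from bs4 import BeautifulSoup

resp = requests.get(
    "https://example.org/public-registry",           # placeholder target
    headers={"User-Agent": "newsroom-research-bot"},  # identify yourself honestly
    timeout=30,
)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = [td.get_text(strip=True) for td in soup.select("table.results td")]
print(rows)
```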
Otter.ai
AI-powered meeting transcription and note-taking — fast and accurate, but your audio trains their models.
SOC 2 Type II and HIPAA compliance show genuine security investment, but the core problem is structural: Otter uploads all audio to US cloud servers and uses content for AI training. The 2025 class action lawsuit and 2024 hospital breach demonstrate real-world consequences of this architecture. Adequate for routine journalism. Not recommended for any work involving confidential sources or sensitive material.
Overview
Open-source document clustering and visualization for large investigative sets. Self-host only — the hosted service is gone.
Open-source and self-hostable, which is good for data sovereignty. But the software is unmaintained — no security patches since at least 2020 (copyright range 2011-2020). Running unmaintained server software with document upload capabilities is a real risk. The Scala/Play framework and PostgreSQL stack may have unpatched vulnerabilities. Only run on isolated infrastructure, never internet-facing without additional security layers.
Perplexity
AI search engine with source citations — useful for research, controversial for how it gets those sources.
Search queries are sensitive journalist data. Perplexity collects and retains them by default, with AI training opt-out buried in settings. The company's documented pattern of bypassing robots.txt, disguising crawlers, and reproducing publisher content without permission reveals how it treats consent. 40+ copyright lawsuits pending. Useful tool, real risks. Use only for non-sensitive, public-record research.
PhantomBuster
Social media scraping and automation. Extract data from LinkedIn, Twitter, Instagram for investigations.
Caution rating reflects two concerns: (1) you must share social media session tokens with PhantomBuster's servers, creating credential exposure risk, and (2) most automations violate target platforms' ToS, risking account suspension. The tool itself uses standard cloud security (TLS, encrypted storage, GDPR compliance). For journalists, the operational risk — losing your LinkedIn or Twitter account mid-investigation — is the primary concern. Use dedicated accounts and understand the legal landscape before deploying.
PimEyes
Facial recognition reverse-image search engine — finds photos of a face across the open web. Powerful for identification work, ethically fraught, used by journalists and stalkers alike.
The caution rating is not primarily about technical security — it is about trust, governance, and ethical risk. PimEyes uses HTTPS and standard payment processing, but the company is structurally opaque (registered across Dubai, Belize, Poland, and Seychelles), refuses to disclose data retention or breach history, has been the subject of three open regulatory investigations (UK, Germany, Illinois BIPA), and has been documented enabling stalking, child-image searches, and protest doxing. The opt-out process requires submitting ID to the same company you are trying to escape. For journalism, the tool can produce useful identifications, but using it means trusting an entity with no meaningful accountability and a track record of misuse. Newsrooms should treat PimEyes as a tool of last resort, document its use in published methodology, never query private individuals or minors, and never upload photos of confidential sources. If a comparable result can be obtained with Yandex reverse image search, Google Lens, or direct reporting, prefer those.
Proton Mail
E2E encrypted email under Swiss jurisdiction — but Swiss privacy protections are eroding, and Proton is moving infrastructure to the EU.
Zero-access encryption remains strong technically. But the pattern of journalist account suspensions, payment metadata sharing with the FBI, 89% law enforcement compliance rate, and the proposed VÜPF revision (ID verification, mandatory decryption, IP logging) represents systemic erosion of the trust assumptions journalists relied on. Proton is responding — €100M+ EuroStack investment, SOC 2 Type II certification, Workspace launch — but the gap between privacy and anonymity continues to widen.
QuillBot
AI paraphrasing and rewriting tool. Free tier with limits. Owned by Learneo (Course Hero, LanguageTool, Scribbr).
Text is processed on QuillBot servers and, as of November 2025, stored by default for browser extension users (opt-out available). The shift from opt-in to opt-out storage is a meaningful trust signal change. Owned by Learneo, a portfolio company with seven brands in the education/writing space. QuillBot states it does not sell data or allow third-party AI training, but the data collection posture has expanded over time. Not appropriate for confidential source material or sensitive reporting.
Remove.bg
AI-powered background removal — upload a photo, get a transparent PNG in seconds.
Images are uploaded to Canva's cloud with no local processing option. Third-party tracking on the website. Broad Canva privacy policy. The tool works well for non-sensitive images, but journalists should never upload photos involving sources, unpublished material, or sensitive locations. Adequate for routine newsroom graphics work with appropriate caution.
Runway
The professional AI video platform. Gen-4.5 leads the Video Arena leaderboard. Used in film and editorial. Training data lawsuits remain unresolved.
The technical security posture is standard for a venture-backed AI startup at this scale — encryption in transit and at rest, US infrastructure, account-based access. The 'caution' rating reflects unresolved copyright litigation, the leaked internal training data spreadsheet, the absence of IP indemnification on consumer plans, and the broad terms of use Runway claims over uploaded content. None of these are security failures in the traditional sense. They are governance and provenance failures that matter for newsroom adoption.
Social Blade
Social media analytics platform. Track follower growth, engagement trends, and channel statistics across YouTube, Twitch, Instagram, and TikTok.
The December 2022 data breach (5.6 million records) is a significant mark against Social Blade's security posture. The platform itself is useful for journalists as a read-only analytics tool, but creating an account carries documented risk. Use it without logging in whenever possible. The free tier's heavy advertising also introduces tracker exposure. Rated caution rather than warning because the core use case (looking up public social media stats) doesn't require sharing sensitive information — but the breach history means you should treat any account data as potentially compromised.
Telegram
Cloud-based messaging. NOT end-to-end encrypted by default. Not recommended for journalist-source communication.
Not E2E encrypted by default. Telegram holds encryption keys for all regular and group chats. Custom MTProto protocol with documented cryptographic weaknesses. Server code closed-source. Infrastructure linked to companies with Russian intelligence ties (IStories/OCCRP, June 2025). Founder under indictment in France on 12 charges. Now shares user data with law enforcement — 900 US requests fulfilled in 2024. 1 billion monthly users but massive abuse problem (44M channels blocked in 2025). Not appropriate for journalist-source communication. Use Signal.
Threads
Meta's text-based social platform. 400M+ monthly active users. Instagram integration. No link demotion. ActivityPub federation in progress.
TLS encryption in transit. Encryption at rest for stored data. The core concern is not technical security but data practices. Meta collects 28 categories of user data per the App Store privacy label, including location, browsing history, contacts, and financial information. This data feeds cross-platform ad targeting. DMs are not end-to-end encrypted. Meta has been fined $1.3 billion for GDPR violations and $392 million for deceptive location tracking. For standard journalism use — sharing stories, building audience, monitoring public discourse — the platform functions. For any communication involving sources, confidential information, or sensitive investigations, Meta products are the wrong tool. The 'caution' rating reflects the data collection scope, not a technical vulnerability.
Wayback Machine
Access archived versions of web pages going back to 1996. Over 1 trillion pages captured.
Downgraded from 'adequate' after the October 2024 breach exposed 31 million user records. The Archive is a trusted nonprofit with a 28-year track record, but its security posture failed under sustained attack. Browsing is logged, no E2EE for searches. Use Tor for sensitive queries. The publisher-blocking trend is a reliability concern, not a security one — but it means the archive's coverage of news content is shrinking in real time.
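For sensitive lookups, the public availability API can be queried over Tor so the request is not tied to a newsroom IP. A minimal sketch, assuming a local Tor client listening on 127.0.0.1:9050 and the requests[socks] extra installed; the target URL is a placeholder:

```python
# Minimal sketch: checking for an archived copy via the Wayback availability
# API over Tor. Assumes a local Tor SOCKS proxy on 127.0.0.1:9050 and
# requests[socks]; the looked-up URL is a placeholder.
import requests

TOR_PROXY = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}

resp = requests.get(
    "https://archive.org/wayback/available",
    params={"url": "example.com/some-page"},  # placeholder URL to look up
    proxies=TOR_PROXY,
    timeout=60,
)
print(resp.json().get("archived_snapshots", {}))
```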
WhatsApp
E2E encrypted messaging owned by Meta. Strong encryption, hostile metadata environment. Use Signal instead.
Strong message encryption (Signal protocol with Curve25519, AES-256, perfect forward secrecy) undermined by Meta's metadata collection, cross-platform data sharing, lack of sealed sender, whistleblower allegations of 1,500 engineers with unaudited metadata access, documented spyware targeting of journalists (Paragon Graphite, NSO Pegasus), and forced Meta AI integration. Cloud backups unencrypted by default. 89% of journalists in democratic countries use Signal instead. WhatsApp is a fallback, not a recommendation.
Wispr Flow
AI voice dictation that formats text based on app context.
Screen capture and voice audio sent to third-party AI providers (OpenAI, Meta's Llama) is a significant privacy concern for journalism workflows. All processing is cloud-only — there is no local option. Privacy Mode prevents retention but not transmission. SOC 2 Type II, ISO 27001, and HIPAA certifications demonstrate real security investment, but the architecture is fundamentally incompatible with source protection. The tool is well-built and the company is increasingly transparent, but 17+ outages in Q1 2026 raise reliability questions for deadline-driven journalism.