
InVID/WeVerify

Browser extension for verifying videos and images — keyframe extraction, reverse search, deepfake detection, and metadata analysis.

Verification
Built for journalism · Open source
Adequate
https://github.com/AFP-Medialab/verification-plugin · Reviewed 2026-04-02 · Editorial assessment by Mike Schneider (not an independent security audit)

What should journalists know about InVID/WeVerify?

The standard verification toolkit for newsrooms. Extract keyframes from video, run reverse image searches across Google, Bing, Yandex, Baidu, and TinEye simultaneously, inspect EXIF metadata, and now run deepfake detection — all from one browser extension.

Built by AFP Medialab under three successive EU research grants (InVID 2016-2018, WeVerify 2018-2021, vera.ai 2022-2025). The vera.ai funding ended October 2025, but AFP continues maintaining the plugin (v0.89.1, updated March 2026).

The deepfake detector is useful as a first-pass screen — it color-codes face-manipulation probability per frame — but independent benchmarks show forensic tools like this have high recall and poor specificity. Translation: it catches a lot of fakes but also flags compression artifacts and motion blur as suspicious. Pair it with TrueMedia or human analysis for anything you'd publish. Some features send data to third-party search engines, so keep pre-publication material away from the reverse search tabs.
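The "high recall, poor specificity" trade-off is easy to quantify. A minimal sketch with hypothetical confusion-matrix counts — the numbers below are invented for illustration, not benchmark results:

```python
def recall_and_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Recall: share of real fakes caught. Specificity: share of
    authentic clips correctly cleared."""
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return recall, specificity

# Hypothetical screening run: 100 manipulated clips, 100 authentic clips.
# The detector catches 95 of the fakes but also flags 40 authentic clips
# (compression artifacts, motion blur) as suspicious.
recall, specificity = recall_and_specificity(tp=95, fn=5, tn=60, fp=40)
print(f"recall={recall:.2f} specificity={specificity:.2f}")
# recall=0.95 specificity=0.60
```

A detector like this is fine for triage (few fakes slip through) but generates too many false alarms to publish on its output alone.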

Best for

Verifying viral videos and images. Extracting keyframes for reverse image search. Checking EXIF metadata and GPS coordinates. First-pass deepfake screening on video. Archiving disinformation traces in WACZ format. Forensic image analysis (error level analysis, noise analysis). Twitter/X social network analysis (registered users only).
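EXIF stores GPS coordinates as degree/minute/second values plus hemisphere letters, so converting them to map-ready decimal degrees is a small local computation — the kind the metadata tab performs in-browser. A minimal sketch (the function name and sample coordinates are illustrative, not the plugin's):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', 'W') into signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical EXIF GPS tags: 48° 51' 29.6" N, 2° 17' 40.2" E (central Paris).
lat = dms_to_decimal(48, 51, 29.6, "N")
lon = dms_to_decimal(2, 17, 40.2, "E")
print(f"{lat:.5f}, {lon:.5f}")
# 48.85822, 2.29450
```

Because this is pure arithmetic on tags already in the file, nothing leaves your machine — unlike the reverse search tabs.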

Not for

High-confidence deepfake verdicts — accuracy trails human analysts and dedicated paid tools like Sensity (98% accuracy). Verifying text claims (use Google Fact Check Explorer). Automated monitoring at scale. Confidential pre-publication material (reverse searches hit third-party servers).

Security & Privacy

Encryption in transit Yes

Data is scrambled while being sent to their servers

Encryption at rest Partial

Data is scrambled when stored on their servers

Data jurisdiction Split. Metadata extraction and forensic analysis run locally in your browser. Reverse image searches route through Google, Bing, Yandex, Baidu, and TinEye — each with its own jurisdiction and data practices. Deepfake detection and AI-based tools process via CERTH-ITI servers in Greece (EU). Content is cached by partner tools for approximately one day.

Where servers are located — affects which governments can request your data

Security rating Adequate

Privacy policy summary

No personal data recorded by the extension itself. Matomo analytics tracks usage patterns, but you can opt out on the About page. Reverse image searches send your queries and images to third-party engines. AI-based tools (deepfake detection, synthetic image detection) send content to CERTH servers. No account required for core features; registration required for advanced tools (Twitter SNA, CheckGIF, synthetic image detector, voice cloning detector).

How to protect yourself:

Never use reverse image search with pre-publication material — those queries go to Google, Yandex, TinEye, and other engines that may log them. Use the metadata extraction and forensic tabs (local processing) for sensitive content. Opt out of Matomo analytics on the About page. For deepfake detection, treat results as a starting point, not a verdict — the tool flags compression artifacts and motion blur as suspicious. Cross-reference with TrueMedia or manual frame analysis before publishing.
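To make "a starting point, not a verdict" concrete, a first-pass screen can aggregate per-frame scores and escalate a clip only when suspicion is sustained, so one or two blurred frames don't trigger review on their own. This is an illustrative rule of thumb, not the plugin's actual aggregation logic; every threshold below is invented:

```python
def first_pass_flag(frame_scores: list[float],
                    frame_threshold: float = 0.7,
                    clip_fraction: float = 0.2) -> bool:
    """Flag a clip for human review when at least `clip_fraction` of its
    frames score above `frame_threshold`. Illustrative only: the real
    detector's thresholds and aggregation are not published here."""
    if not frame_scores:
        return False
    flagged = sum(score >= frame_threshold for score in frame_scores)
    return flagged / len(frame_scores) >= clip_fraction

# One isolated high score (e.g. motion blur on a single frame) stays
# below the review threshold; sustained high scores do not.
isolated = [0.9] + [0.1] * 9     # 1 of 10 frames flagged -> not escalated
sustained = [0.9] * 5 + [0.1] * 5  # 5 of 10 frames flagged -> escalated
print(first_pass_flag(isolated), first_pass_flag(sustained))
```

Even an escalated clip should then go to manual frame analysis or a second tool, per the guidance above.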

Metadata extraction and forensic analysis run locally — good. Open-source under MIT license with full code on GitHub. No personal data collection by the extension. But reverse searches and AI tools send content to third-party and CERTH servers. Content cached for ~1 day by partner tools. The split architecture (local forensics + remote AI + third-party search) means your operational security depends on which tabs you use. Stick to local-only features for sensitive material.

Who Owns This

Owner AFP Medialab (Agence France-Presse R&D lab). Developed through three EU research consortia with CERTH-ITI (Centre for Research and Technology Hellas), Deutsche Welle, and 14 European partners.
Funding EU grants: Horizon 2020 (InVID, WeVerify) and Horizon Europe (vera.ai, grant 101070093). vera.ai ended October 2025. Additional tools from IFCN DisinfoArchiving project (2024-2025). No announced successor grant as of April 2026 — continued maintenance appears to rely on AFP Medialab's institutional commitment.
Business model Free. Open-source (MIT license) on GitHub. Research project output sustained by AFP's ongoing maintenance. No paid tier, no ads, no affiliate revenue.

Known issues

Deepfake detection has high false-positive rate — benign compression, color grading, and motion blur trigger alerts (confirmed by March 2026 comparative study). AI detection accuracy lags behind human forensic analysts and paid tools like Sensity. vera.ai EU funding ended October 2025 with no announced successor grant, creating long-term maintenance uncertainty. Advanced features (Twitter SNA, CheckGIF, synthetic image detector, voice cloning detector) require registration and are restricted to verified journalists and researchers. Yandex reverse search raises geopolitical concerns for some users. One-day content caching by partner tools means uploaded material persists briefly on external servers.

Pricing

Free

This is an editorial assessment based on publicly available information as of 2026-04-02, using our published methodology. Independent security review is pending. Security posture can change at any time. This is not a guarantee of safety.
