ElevenLabs
The leading AI voice platform. Text-to-speech, voice cloning, dubbing, audio isolation. $11B valuation. Powerful and dangerous in the same breath.
What should journalists know about ElevenLabs?
ElevenLabs is the audio AI company everyone uses and nobody quite trusts. Founded in 2022 by Piotr Dabkowski (ex-Google ML) and Mati Staniszewski (ex-Palantir), the company raised a $180M Series C in January 2025 at a $3.3B valuation, then a $500M Series D in February 2026 led by Sequoia at $11B. Investors include a16z, ICONIQ, Nvidia, Lightspeed, and Bond.

The product is genuinely state of the art. v3 voices, released in 2025, are widely considered the most natural-sounding TTS available. The dubbing tool can translate and lip-sync interviews across 30+ languages. The audio isolator can rescue recordings that other tools give up on. Newsrooms use it for article-to-audio narration, podcast post-production, and source-protection voice modulation.

The problem is the same thing that makes the product valuable: voice cloning this good is also voice cloning that scammers want. ElevenLabs voices have been used in election deepfakes (the 2024 New Hampshire fake-Biden robocall was traced to an ElevenLabs voice), in sextortion scams targeting parents, and in fraud calls impersonating CEOs. ElevenLabs has responded with identity verification for Professional Voice Cloning, a no-go list of public figures, an AI speech classifier to detect ElevenLabs-generated audio, and partnerships with content authentication groups. The safeguards are real but imperfect.

For journalism, the editorial question is straightforward: if you use AI voice, label it; if you clone a real voice, get explicit consent in writing; if you publish synthetic audio of a real person, expect to defend it.
- Article-to-audio narration of long-form pieces
- Multilingual dubbing for documentary and explainer video
- Voice cloning of your own narrator (with consent and contract)
- Source voice modulation to protect identity in audio interviews
- Audio isolator for cleaning up field recordings, courtroom audio, and leaked tapes
- Accessibility versions of written content
- Podcast post-production cleanup
- Cloning anyone's voice without explicit, documented consent
- Generating audio of public figures or sources without disclosure
- Anything labeled or implied to be a real person speaking when it isn't
- Newsrooms without an AI audio policy in place: adopt the policy first, the tool second
- Sensitive interview audio you don't want stored on a third-party server
Security & Privacy
Data is encrypted in transit (scrambled while being sent to their servers)
Data is encrypted at rest (scrambled while stored on their servers)
Server location matters: it determines which governments can request your data
Privacy policy summary
Account required. Voice samples and generated audio are stored on ElevenLabs servers. Free and lower-tier plans may use audio for service improvement; Creator plan and above can opt out. Voice cloning requires identity verification (Professional Voice Cloning requires a verbal consent recording). ElevenLabs prohibits cloning voices without consent in its use policy and terminates accounts for abuse. The AI speech classifier lets users check whether an audio file was generated by ElevenLabs. Enterprise plans include zero-retention and contractual data protection terms.
How to protect yourself:
- Use the Creator plan or higher and disable training data sharing.
- For voice cloning, document consent in writing; ElevenLabs' click-through is not enough for editorial defensibility.
- Never clone a source's voice without an explicit, recorded conversation about how it will be used.
- Disclose AI voice use in episode notes, captions, and on-air.
- For sensitive interview audio, use Enterprise with zero retention or process audio locally.
- Don't upload unpublished investigative recordings; they live on ElevenLabs servers.
- Keep an internal log of every AI voice use for corrections and accountability.
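The logging recommendation above can be sketched as a small append-only record. This is a minimal illustration, not an ElevenLabs feature: the file location and field names are hypothetical placeholders a newsroom would define for itself.

```python
import csv
import os
from datetime import datetime, timezone

LOG_PATH = "ai_voice_log.csv"  # placeholder; use your newsroom's own system
FIELDS = ["timestamp", "story_slug", "voice_used", "consent_doc", "disclosed_where"]

def log_ai_voice_use(story_slug, voice_used, consent_doc, disclosed_where):
    """Append one AI-voice-use record for corrections and accountability."""
    new_file = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # header row on first use
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "story_slug": story_slug,
            "voice_used": voice_used,
            "consent_doc": consent_doc,
            "disclosed_where": disclosed_where,
        })

# Example entry: narration clone of a staff host, consent contract on file
log_ai_voice_use(
    story_slug="2026-04-ev-battery-explainer",
    voice_used="cloned: staff narrator J. Doe",
    consent_doc="contracts/jdoe-voice-2026.pdf",
    disclosed_where="episode notes + on-air tag",
)
```

A plain CSV is deliberately boring: it survives tool changes, and the `consent_doc` column forces every entry to point at a written consent record, which is the editorial standard the checklist above describes.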
ElevenLabs is SOC 2 Type II compliant with HIPAA and zero-retention options on Enterprise plans. Technical security is appropriate for a company at this scale. The 'caution' rating is editorial, not technical: voice cloning misuse is documented and ongoing, the consent verification flow is weaker than newsroom standards require, and AI audio carries publication risk that the tool itself cannot mitigate. Use the product with a policy in place, not before.
Who Owns This
Privately held. Founded in 2022 by Piotr Dabkowski and Mati Staniszewski; venture investors include Sequoia (Series D lead), a16z, ICONIQ, Nvidia, Lightspeed, and Bond.
Known issues
January 2024: a fake-Biden robocall in the New Hampshire primary urging Democrats not to vote was traced to an ElevenLabs-generated voice. The political consultant responsible was fined $6M by the FCC and indicted. ElevenLabs banned the account and tightened verification on Professional Voice Cloning.

2024–2025: multiple reports of ElevenLabs voices used in sextortion scams, CEO impersonation fraud, and harassment campaigns. The company has since added identity verification, no-go lists for public figures, an AI speech classifier, and partnerships with C2PA and content authentication groups.

Voice cloning quality continues to outpace detection tools, so safeguards remain reactive. The Studio video export watermark drops at the Creator tier, meaning unwatermarked AI audio is broadly available for $22/month.
Pricing
- Free: 10,000 characters/month, basic TTS, no commercial use
- Starter: $5/month (30,000 characters, instant voice cloning, commercial license)
- Creator: $22/month (100,000 characters, professional voice cloning, dubbing, audio isolator, no Studio watermark)
- Pro: $99/month (500,000 characters)
- Scale: $330/month (2M characters)
- Business: $1,320/month (11M characters)
- Enterprise: custom, with HIPAA, SSO, and custom contracts

Pricing was restructured in January 2025 and unified again in August 2025 to be model-agnostic.
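As a rough reading of the tiers above, the sketch below maps a monthly character volume to the cheapest listed plan. The quotas and prices are copied from this page as of its assessment date; verify against ElevenLabs' current pricing before budgeting.

```python
# Self-serve tiers as listed above: (name, USD/month, monthly character quota).
PLANS = [
    ("Free", 0, 10_000),
    ("Starter", 5, 30_000),
    ("Creator", 22, 100_000),
    ("Pro", 99, 500_000),
    ("Scale", 330, 2_000_000),
    ("Business", 1_320, 11_000_000),
]

def cheapest_plan(chars_per_month, need_commercial=True, need_no_watermark=False):
    """Return the cheapest listed plan covering a monthly character volume.
    Per the tier list: commercial use starts at Starter, and the Studio
    watermark drops at Creator."""
    for name, price, quota in PLANS:
        if chars_per_month > quota:
            continue  # quota too small
        if need_commercial and name == "Free":
            continue  # Free excludes commercial use
        if need_no_watermark and name in ("Free", "Starter"):
            continue  # watermark-free export starts at Creator
        return name, price
    return ("Enterprise", None)  # above 11M characters: custom pricing

print(cheapest_plan(25_000))                          # light narration workload
print(cheapest_plan(25_000, need_no_watermark=True))  # same volume, Studio export
```

The watermark flag matters for newsrooms: per the Known issues section, watermark-free export is the point at which published AI audio carries no built-in provenance marker.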
This is an editorial assessment based on publicly available information as of 2026-04-07, using our published methodology. Independent security review is pending. Security posture can change at any time. This is not a guarantee of safety.
Something wrong or outdated? Report it.