What We Tested
Across Q4 2025 and Q1 2026, we ran a trust-perception panel with 812 B2B buyers in the SaaS, fintech, and dev-tools verticals. Each respondent saw matched-script videos, some with real on-camera presenters and some with HeyGen avatars trained on the same presenters. Respondents were not told that any of the content was AI-generated.
Measured outcomes: "how trustworthy is this company" (1–10), "how credible is the information" (1–10), "likelihood to book a meeting with this company" (1–10), and "how professional is the brand" (1–10).
The Numbers
Across all four metrics, the AI-avatar versions scored 18% to 34% lower than the human-presenter versions of the same scripts and presenters. The gap was widest on "trustworthiness" (34%) and narrowest on "professionalism" (18%), which tracks with what the technology can and can't fake.
Only 12% of panelists correctly identified the AI-generated videos as AI-generated. The trust gap held across both the 12% who identified it and the 88% who didn't. The conscious-awareness threshold isn't what's driving the perception shift.
The Subconscious Tell
What's driving it is a specific set of micro-signals that AI avatars still get subtly wrong. Prosody arc across a sentence. Eye-contact timing around emotional beats. Micro-pauses before specific phrase types. Viewers don't consciously flag "that was weird." They flag, subconsciously, "something about this company is slightly off."
In a B2B buying context, "slightly off" compounds. Across three touchpoints of AI-avatar content, the trust gap widened to 40%+. Across a full campaign? The content is actively anti-selling.
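The compounding claim can be sketched with a toy model. This is our illustration, not the panel's methodology: it assumes each avatar touchpoint multiplies trust by a fixed retention factor. Taking the low end of the observed single-video gap (18%, i.e. a retention factor of 0.82) as the per-touchpoint penalty, three touchpoints compound to roughly 45%, consistent with the 40%+ figure.

```python
# Illustrative only: the panel did not publish a compounding model.
# Assumption: each AI-avatar touchpoint multiplies brand trust by a
# fixed retention factor (0.82 here, the low end of the observed
# 18-34% single-video gap).

def cumulative_trust_gap(per_touch_retention: float, touchpoints: int) -> float:
    """Fraction of baseline trust lost after n compounding touchpoints."""
    return 1.0 - per_touch_retention ** touchpoints

for n in range(1, 4):
    gap = cumulative_trust_gap(0.82, n)
    print(f"{n} touchpoint(s): {gap:.1%} cumulative trust gap")
```

Under this assumption, one exposure costs 18%, two cost about 33%, and three cost about 45%. The point is directional, not precise: small per-exposure deficits do not add, they compound.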
Where the HeyGen Case Studies Miss
HeyGen publishes case studies showing parity or near-parity on conventional metrics (CTR, view count, completion rate). Those metrics measure whether content is engaging enough to watch. They don't measure whether the content changes how the brand is perceived.
The completion-rate parity is real. And it doesn't matter. If your video is completed by 100% of viewers and the brand-trust score drops 30%, you've lost ground. The case studies optimize for the metrics that look good in a quarterly marketing review. They don't optimize for pipeline.
The Legitimate Use Case
We're not anti-HeyGen. We've used it. There are three specific use cases where it earns its keep:
Localization. Translating an existing video into 20 languages is a legitimate HeyGen superpower. The trust gap still applies in each language, but the alternative (not shipping) is usually worse.
Internal comms. Exec updates to employees. No external trust being negotiated. Efficiency wins clean.
Training content. Same logic as internal comms — the viewer is oriented toward the information, not the messenger.
External brand content, outbound, landing-page presenters, thought leadership, and pipeline-generating content: HeyGen costs you trust. The savings are real. The pipeline cost is larger.
The Red Flag Pattern
The agencies pushing HeyGen hardest in 2026 are the ones least equipped to produce real on-camera content. They sell HeyGen because it's the only product they can deliver on budget. If your agency's proposal contains an AI-avatar line item by default, the question to ask isn't "is this a good fit for our brand" — it's "does this agency have the capacity to shoot the alternative?"
What To Do If You've Already Built on It
Audit where the avatar content is showing up in the buyer journey. If it's in trust-critical moments — first touchpoint, founder message, customer story, thought leadership — replace it with real capture. The cost is meaningful; the pipeline cost of leaving it is worse.
If it's in low-trust-stakes moments (localization, FAQ libraries, internal comms), leave it. There the trust gap is either absent or, as with localization, outweighed by the cost of not shipping at all.