Copyright Fast Company

Scroll through any social media feed in 2025 and it is increasingly difficult to separate the living from the rendered. Virtual models walk runways in Paris; holographic K-pop idols fill stadiums; and photorealistic deepfake spokespeople host product launches that feel indistinguishable from genuine broadcasts. What began with experimental CGI mascots such as Lil Miquela has become a multibillion-dollar industry of synthetic influencers. Marketers love their brand alignment, guaranteed availability, and tailor-made aesthetics. Regulators and consumers, however, worry about a future where avatars can say anything, appear anywhere, and never reveal their true nature.

Let's explore the tension at the heart of this trend: the reach offered by synthetic personalities versus the fragile authenticity and compliance responsibilities that accompany them. By tracing the technology, psychology, and evolving legal frameworks, we can outline pragmatic guardrails for brands eager to harness artificial charisma without eroding public trust.

FROM FICTION TO FRONT ROW

Only a decade ago, virtual characters were dismissed as curiosities confined to sci-fi films or Vocaloid concerts. Two forces changed that perception. First, generative adversarial networks and diffusion models made it cheap to create photorealistic faces, voices, and animations. Second, the creator economy boom proved that audiences will follow a persona, not necessarily a person, if the content resonates.

Unlike humans, synthetic influencers never age, misspeak, or demand equity. Every pixel and syllable can be controlled in software. Studios can spin up look-alike personas, test storylines, and adjust tone in real time. Fashion houses, gaming publishers, and even public health agencies have embraced these virtual ambassadors. Yet the same traits that make them efficient also put credibility at risk if audiences sense manipulation.
THE AUTHENTICITY PARADOX

Authenticity was long seen as the currency of influence. Paradoxically, synthetic influencers show that audiences can suspend disbelief when story arcs feel genuine, even if the face delivering them is fictional. Still, authenticity is not binary; it depends on disclosure and context. If a deepfake spokesperson reads remarks from a CEO during a crisis broadcast, viewers may be comforted, until they discover the footage was generated. Research shows that undisclosed synthetic representations trigger feelings of betrayal far stronger than when companies hire actors. The breach stems from perceived deception: People believe they were engaging with a real individual capable of accountability. Transparency, then, is not just a regulatory formality but the foundation of lasting trust.

REGULATORS CATCH UP

Lawmakers are racing to keep pace. The European Union's AI Act, which entered into force in 2024, classifies commercial deepfakes as "limited-risk," requiring disclosure and traceable provenance of training data. Regulators in Australia, the UK, and Singapore have issued similar advisories, stressing that synthetic broadcasters must meet the same truth-in-advertising standards as humans. Penalties range from fines and takedowns to civil liability. Reputational risk looms even larger: Activist groups have called for boycotts of companies using deepfake spokespeople to greenwash credentials or fabricate minority voices. Compliance, therefore, is not a footnote; it is a brand-safety imperative.

GUARDRAILS FOR ETHICAL PRACTICE

To navigate this terrain, marketers are building playbooks grounded in three pillars: transparency, consent, and traceability. Transparency requires explicit labelling: watermarks, disclaimers, or captions that leave no ambiguity about an avatar's artificial nature.
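To make the labelling and audit requirements concrete, here is a minimal sketch of what a disclosure record attached to a synthetic asset might look like. All specifics are hypothetical illustrations, not an industry standard: the avatar name "Aria", the field layout, and the disclosure wording are invented for this example.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class SyntheticDisclosure:
    """Hypothetical metadata record attached to one piece of synthetic media."""
    avatar_name: str
    version: str          # which iteration of the voice clone, script, or rig
    disclosure_text: str  # the caption shown to viewers
    content_sha256: str   # fingerprint of the rendered asset, for audit logs
    created_at: str       # UTC timestamp of when the record was produced


def build_disclosure(avatar_name: str, version: str,
                     asset_bytes: bytes) -> SyntheticDisclosure:
    """Create a labelled, traceable record for a rendered synthetic asset."""
    return SyntheticDisclosure(
        avatar_name=avatar_name,
        version=version,
        disclosure_text=f"{avatar_name} is a computer-generated persona.",
        content_sha256=hashlib.sha256(asset_bytes).hexdigest(),
        created_at=datetime.now(timezone.utc).isoformat(),
    )


record = build_disclosure("Aria", "v2.3", b"<rendered video bytes>")
print(json.dumps(asdict(record), indent=2))
```

The content hash is the piece that serves traceability: if any iteration of the asset is altered after the record is logged, the fingerprint no longer matches, so tampering can be detected during an audit.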
Consent extends beyond actors lending their likenesses to include dataset contributors whose images fuel training. Traceability demands version control and audit logs: Each iteration of a synthetic influencer, whether a voice clone, script, or rig, should be recorded so that bias or tampering can be detected and addressed. Some firms now appoint "avatar compliance officers" to bridge legal, creative, and technical teams, and start-ups offer services certifying that synthetic content has not been weaponized. Brands adopting such safeguards not only mitigate risk but also signal to consumers that novelty will not trump responsibility.

TOWARD HYBRID HUMAN-AI STORYTELLING

The future will likely be neither wholly synthetic nor purely human. A hybrid model is emerging in which creators coproduce with their avatars, outsourcing routine updates while preserving a visible human presence. An athlete's digital twin may narrate training highlights, followed by a live Q&A with the athlete themselves. This dual presence extends reach without diluting authenticity. Advances in real-time voice conversion and motion capture will let creators control multiple personas at once, localizing language and tone for regional markets. Such agility could transform global marketing, but only if disclosure is rigorous. Tech companies are piloting watermarking protocols baked into pixel data, allowing automated systems to flag undisclosed synthetic segments. If such protocols are adopted widely, AI-assisted storytelling may come to be accepted much as CGI is in film: as a tool that enhances, rather than replaces, human creativity.

FINAL THOUGHTS

Synthetic influencers represent a thrilling yet precarious frontier where innovation collides with public sentiment and regulation. Their algorithmic charisma can extend a brand's reach around the clock and across borders. Yet their existence tests the meaning of authenticity and forces regulators to rewrite advertising norms.
For marketers, the path forward is not to reject or blindly embrace these personas, but to integrate them thoughtfully: foregrounding transparency, embedding consent in data practices, and ensuring traceable compliance. Brands that achieve this balance will earn more than efficiency; they will build reputations for ethical leadership in an era when digital trust is as valuable as attention. The synthetic revolution is already here. Whether it becomes a renaissance or a reckoning depends on choices made now, choices that safeguard both technological possibility and the human stories audiences still crave.

Boris Dzhingarov is the CEO of ESBO Ltd.