Deepfakes, Likeness & the Law: What Creators and Employers Should Watch

Legal | 22 Oct 2025 | Written By Admin


The rise of advanced artificial-intelligence tools has made it easier than ever to create highly realistic “deepfakes”: videos, images, or audio that appear to show someone doing or saying something they did not. What used to be niche tech is now widely available. As a content creator, employer, marketer, or freelancer, you can no longer dismiss deepfakes as a remote risk: they carry real legal, reputational, and financial consequences.
In this article, we’ll examine what deepfakes are; the key legal regimes (especially the NO FAKES Act, right-of-publicity laws, and state deepfake statutes); what creators and employers should watch for; and the practical steps you can take to stay ahead.

1. What is a Deepfake?

A “deepfake” generally refers to synthetic or manipulated media (video, audio, image) where a person’s likeness, voice or other identifiable attribute is used in a way that misrepresents them. 
These tools matter now because:

  • AI technology (e.g., generative adversarial networks) has matured, making realistic fakes far easier to produce. 

  • The risk isn’t only humorous or personal: it includes commercial exploitation (endorsements without permission), misinformation (political deepfakes), and reputation or brand damage.

  • Many jurisdictions are only just catching up with laws and enforcement. 

2. The key legal regimes you need to know

a) Right of Publicity (and “NIL” – Name, Image, Likeness)

The “right of publicity” allows individuals to control the commercial use of their identity (name, image, likeness, voice). 
In the U.S., it’s mainly governed by state law; no federal statute yet covers the field uniformly.

  • Some states protect living persons; others extend protection after death (post-mortem rights). 

  • A deepfake that uses someone’s voice or likeness may trigger this right even if it never mentions the person’s name. 

  • The commercial purpose often matters: unauthorized use for business/endorsement is more clearly actionable. 

b) Deepfake and digital-replica legislation

Because of the rise of AI-generated “digital replicas” (i.e., synthetic voices, images, and avatars), lawmakers have begun enacting legislation. For example:

  • The proposed NO FAKES Act would create a federal property right in a person’s voice, image, and likeness as applied to digital replicas. 

  • Many U.S. states have laws specifically targeting deepfakes: non-consensual deepfake porn, election-related deepfakes, digital replica laws. 

  • For example: Tennessee’s “ELVIS Act” protects artists from AI voice replication. 

c) Other legal regimes

  • Copyright law: If a deepfake uses copyrighted content or training data without permission, there may be copyright or related-rights issues. 

  • Defamation, privacy, false endorsement: If the deepfake falsely attributes statements or actions to someone, defamation or false endorsement laws may apply. 

  • Platform liability & takedown: Online platforms may be compelled by statute or public pressure to remove non-consensual deepfake content, especially intimate imagery.

3. Why creators and employers should care

For Creators (e.g., influencers, freelancers, agencies)

  • Your likeness or voice may be used without your consent (for ads, endorsements, or deepfake scams), undermining your brand and diluting your value.

  • If you hire contractors or use AI tools, you might inadvertently create deepfakes or fail to clear the proper rights, exposing yourself to liability.

  • Digital content you create could be cloned or repurposed, so you’ll need to manage how your work is used and licensed.

For Employers / Brands

  • Marketing campaigns using AI-generated personas or voice clones without clear consent/licensing risk legal claims for misappropriation.

  • If you hire remote talent globally, the risk grows that someone will be misrepresented by a deepfake, harming your brand or exposing you to liability.

  • You need internal policies: how is AI content generation managed? Who audits deepfakes? How do you respond to misuse?

4. Practical checklist

Here is a practical checklist you or your organisation should go through:

  1. Audit your content & AI use

    • Identify whether you or your team use generative AI tools (images, video, voice) and whether those tools involve likeness or voice replication (a minimal audit sketch appears after this checklist).

    • Map where the likenesses/voices of individuals (employees, talent, contractors) are used.

  2. Update contracts and rights-grants

    • If you engage talent (actors, influencers, voice actors), ensure the contract explicitly covers AI-generated replicas of their likeness/voice and clarifies permissible uses.

    • Include indemnities and warranties: that talent will not be misrepresented, and that any AI use is covered by the necessary consents.

    • For freelancers/creators: include clauses limiting unauthorized use of your likeness, voice, or brand assets.

  3. Create a policy for deepfake use and monitoring

    • Define “approved use” vs “unauthorized use” of likeness/voice/brand assets.

    • Set up an internal escalation process if you identify a deepfake misuse (external or internal).

    • Assign someone to monitor public channels and social media for unauthorized clones or impersonation (a minimal monitoring sketch also appears after this checklist).

  4. Prepare takedown & mitigation plan

    • In jurisdictions where applicable, ensure you know how to send takedown notices (platform procedures, legal letter).

    • Internationally, identify the jurisdictions relevant to you (wherever you operate or market).

    • Ensure you can respond rapidly: the longer a fake circulates, the greater the damage.

  5. Educate and train your team

    • Make sure creatives, marketers, HR and legal teams know about the potential for deepfake misuse.

    • Train talent: if they are approached for endorsements or voice/likeness use, they should clarify whether AI will be involved.

  6. Monitor legislative and regulatory developments

    • Since laws are still evolving (especially state-by-state in the U.S.), keep an eye on new statutes/regulations in the jurisdictions you operate in. 

    • If you operate or hire globally, note that other countries have their own rules (and may move faster).
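
To make step 1 concrete, here is a minimal Python sketch of one way to start an audit: scanning a content folder for images whose EXIF “Software” tag mentions a known generative tool. This is a rough heuristic under stated assumptions, not a reliable detector: the directory path and tool-name markers below are illustrative, most AI output carries no such metadata, and a clean tag proves nothing.

```python
from pathlib import Path
from PIL import Image  # pip install pillow

AI_MARKERS = ("stable diffusion", "midjourney", "dall-e", "firefly")  # illustrative names; extend as needed
SOFTWARE_TAG = 0x0131  # standard EXIF "Software" tag

def flag_ai_metadata(content_dir: str) -> list[str]:
    """Return image paths whose EXIF Software tag mentions a generative tool."""
    flagged = []
    for path in Path(content_dir).rglob("*"):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".tiff"}:
            continue
        try:
            software = str(Image.open(path).getexif().get(SOFTWARE_TAG, "")).lower()
        except OSError:
            continue  # skip unreadable files
        if any(marker in software for marker in AI_MARKERS):
            flagged.append(str(path))
    return flagged

if __name__ == "__main__":
    for hit in flag_ai_metadata("./content"):  # "./content" is a placeholder path
        print("review:", hit)
```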
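
And for the monitoring in step 3, a minimal sketch of near-duplicate detection using perceptual hashing via the Pillow and imagehash libraries. The folder names and distance threshold are placeholders, and this approach only catches reuse of images close to your reference shots; it will not detect novel AI renderings of a face, so treat hits as leads for human review, not proof.

```python
from pathlib import Path
from PIL import Image  # pip install pillow
import imagehash       # pip install imagehash

MAX_DISTANCE = 8  # Hamming-distance threshold; a placeholder to tune on your own images

def load_hashes(folder: str) -> dict:
    """Perceptual hashes for every .jpg in a folder."""
    return {p.name: imagehash.phash(Image.open(p)) for p in Path(folder).glob("*.jpg")}

def find_matches(reference_dir: str, candidate_dir: str):
    """Yield (candidate, reference, distance) pairs that look like image reuse."""
    refs = load_hashes(reference_dir)
    for cand_name, cand_hash in load_hashes(candidate_dir).items():
        for ref_name, ref_hash in refs.items():
            distance = cand_hash - ref_hash  # imagehash defines subtraction as Hamming distance
            if distance <= MAX_DISTANCE:
                yield cand_name, ref_name, distance

if __name__ == "__main__":
    # "./talent_refs" and "./collected" are placeholder folder names
    for cand, ref, dist in find_matches("./talent_refs", "./collected"):
        print(f"possible reuse: {cand} ~ {ref} (distance {dist})")
```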

5. Case scenarios

  • A campaign hires an influencer and uses their voice to generate an AI avatar for worldwide ads. Without clear consent for “digital replica” use, they may face a right of publicity claim.

  • A remote worker’s likeness is cloned and used in a spoof video by a disgruntled contractor, harming reputations: the employer may face brand risk, and the worker may have claims.

  • A freelancer creates a deepfake voice-over for a client without obtaining full rights, and the client later uses it for an endorsement. Both the freelancer and the client may be exposed to claims.

6. Specific considerations for the Philippines / Asia-based creators & employers

  • Even if you’re based in the Philippines or hire talent from the Philippines, content used in the U.S. or other markets with strong right-of-publicity laws can still expose you to U.S.-based claims.

  • Local awareness is growing: even if Philippine law doesn’t yet mirror the depth of U.S. right-of-publicity protection for digital replicas, having contracts and policies strengthens your position globally.

  • If you engage talent from many countries, consider including a “global rights” clause covering use of likeness/voice, including AI-generated versions, and specify which jurisdictions are covered.

7. What comes next

  • Passage of a federal U.S. law: the NO FAKES Act or an equivalent may eventually establish a uniform federal property right in digital replicas of likeness and voice. 

  • Further state laws expanding the “right of publicity” to include AI clones and post-mortem use of likeness. 

  • Platform regulation: increased obligations on content platforms to label or remove AI-generated deepfakes, especially in elections or commercial contexts. 

  • International moves: as other jurisdictions adopt stricter laws on non-consensual synthetic media, cross-border compliance becomes more complex.

8. Summary

  • Deepfakes are a real and growing risk for creators and employers: misuse of someone’s image, voice or likeness can trigger legal claims, brand damage or contract exposure.

  • Your best defence is proactive: review how you use AI, update contracts/consent, set internal policies, and monitor for misuse.

  • Don’t rely solely on existing law. The field is evolving, so building the right internal safeguards today can save major headaches tomorrow.

  • For creators and brands, clarity of rights, especially around AI-generated content, is crucial to protect value and avoid unintended claims.

If you’re a creator, freelancer, agency, or employer who uses AI in content or hires talent globally, the legal risks around deepfakes and digital replicas are real, but they are also avoidable.
➡️ Sign up at Kemecon for more tips and takeaways you can benefit from.
 
