TL;DR

Lawmakers in Ireland are pushing to accelerate a Bill that would make knowingly exploiting another person’s name, voice, image or likeness without consent a standalone criminal offence. The move follows concerns about AI tools that can create sexualised 'deepfakes', and experts want stronger rules that address platforms and products as well as individual users.

What happened

The Oireachtas committee on artificial intelligence has urged the Government to fast-track the Protection of Voice and Image Bill, first introduced in April by Fianna Fáil TD Malcolm Byrne. The proposed law would create a new standalone offence for knowingly exploiting someone’s name, image, voice or likeness without consent, with particular focus on harmful or deceptive uses often described as 'deepfakes'.

The call for speed comes amid reports that X’s Grok AI has been used to 'nudify' photos, including images of women and children, prompting fresh scrutiny. Ireland’s Attorney General is reviewing the robustness of existing statutes that criminalise non-consensual intimate images and child sexual-abuse imagery.

Child protection rapporteur Caoilfhionn Gallagher warned that sexually explicit deepfakes inflict harms comparable to those of real images, highlighted a gendered impact, with women and girls estimated to make up the large majority of victims, and urged consideration of platform and product accountability rather than targeting only individual users.

Why it matters

  • Creates a dedicated criminal offence aimed specifically at AI-enabled misuse of identity elements such as voice and image.
  • Addresses growing harms from realistic synthetic sexual imagery that victims experience as authentic and damaging.
  • Raises questions about whether current laws sufficiently hold platforms and AI products to account, not just individual users.
  • Signals potential shifts in regulatory focus toward product safety and platform responsibilities in the AI space.

Key facts

  • The Protection of Voice and Image Bill was introduced in April by Fianna Fáil TD Malcolm Byrne.
  • The Bill would establish a standalone crime for knowingly exploiting another person’s name, image, voice or likeness without consent.
  • Concerns were raised after reports that Grok AI on X was used to digitally undress women and children and to create sexualised images.
  • Ireland’s Attorney General is reviewing how well existing laws criminalise AI-generated non-consensual intimate images and child sex abuse images.
  • Child protection rapporteur Caoilfhionn Gallagher said victims experience the harms of sexual deepfakes as equivalent to those of authentic images.
  • Gallagher noted that sexually explicit deepfakes are estimated to overwhelmingly depict women and girls, and framed the issue as gender-based violence.
  • Existing Irish laws cited include section 5 of the Child Trafficking and Pornography Act 1998 and 'Coco’s Law' (the Harassment, Harmful Communications and Related Offences Act 2020), which are described as largely focused on individual users.
  • Officials and experts argued that current platform policies, such as xAI’s acceptable use policy, are inadequate on their own.

What to watch next

  • Whether the Government agrees to formally fast-track the Protection of Voice and Image Bill (not confirmed in the source).
  • Findings and recommendations from the Attorney General’s review of Ireland’s laws on AI-generated non-consensual images (not confirmed in the source).
  • Any moves to extend legal or regulatory responsibility to platforms or AI products themselves, beyond individual users (not confirmed in the source).

Quick glossary

  • Deepfake: Synthetic audio or visual media produced or altered using AI to make it appear that a real person said or did something they did not.
  • Non-consensual intimate image: A sexual or nude image of a person shared or created without that person’s consent.
  • Product safety (in tech): A regulatory concept considering whether a product’s design or functionality creates risks that should be mitigated by rules, restrictions, or bans.
  • Acceptable use policy: A platform’s published rules that describe permitted and prohibited activities for users and the service itself.

Reader FAQ

What would the Protection of Voice and Image Bill do?
It would introduce a standalone criminal offence for knowingly exploiting someone’s name, image, voice or likeness without consent, especially when used to harm or deceive.

Who introduced the Bill and when?
Fianna Fáil TD Malcolm Byrne introduced the Bill in April.

Are deepfakes already illegal in Ireland?
The source says generating child sexual abuse imagery and sharing intimate images without consent are already criminal offences, but the Bill would create a separate offence specifically targeting misuse of voice or image.

Will platforms be held legally responsible?
Not confirmed in the source — experts urged greater accountability for platforms and products, but specific legal measures for platform liability were not detailed.

