In our daily lives, we treat our selfies and voice notes like digital postcards—fleeting moments shared with friends and family. However, in the eyes of the law in 2026, these snippets of our identity have become high-value raw materials for a new kind of manufacturing. This is the paradox of the AI era: the very technology that allows us to create art and streamline our work is being harnessed to craft ‘digital ghosts’ of real people without their consent.
Italy’s Data Protection Authority, the Garante per la protezione dei dati personali, has seen enough. After a series of warnings that failed to stem the tide of non-consensual deepfakes, the regulator is now asking for a more robust weapon. They are seeking the legal authority to flip a switch and block access to entire platforms that facilitate the creation of harmful synthetic media.
To understand why the Garante is asking for more muscle now, we have to look back at the events of early 2026. In January, the regulator issued a blanket warning to users and developers of AI-based services. The message was clear: using a stranger’s—or even an acquaintance’s—image to generate synthetic content is not a victimless hobby; it is a serious violation of fundamental rights.
At that time, the Garante used its existing powers to formally warn several AI startups that their data processing practices were non-compliant. Essentially, these platforms were scraping the web for faces and voices, then allowing users to manipulate them into compromising videos or fraudulent audio clips. Despite the warning, the viral sharing of these ‘fakes’ didn’t slow down, which suggested that the current legal framework was like trying to stop a flood with a paper umbrella. The authority could tell a company to stop, but by the time the paperwork cleared, the damage was already done and the content had reached millions.
The Garante’s latest request to the Italian government represents a significant shift in strategy. Rather than playing a game of ‘whack-a-mole’ with individual pieces of content, the regulator wants the power to block access from Italy to the platforms themselves.
In a regulatory context, this is a massive escalation. Currently, blocking an entire website is a measure usually reserved for extreme cases like child exploitation material or massive copyright infringement (as seen with the ‘Piracy Shield’ system). By requesting these powers, the Garante is arguing that the threat to personal dignity and identity posed by deepfakes is equally systemic and severe.
Imagine the law as a digital shield. Up until now, that shield could only be raised after someone had already been hit. If the Garante receives these new powers, they will be able to place the shield at the border, preventing the ‘arrows’ from entering Italian cyberspace in the first place.
One of the most important takeaways from the Garante’s recent announcement is the reminder that deepfakes aren’t just a breach of privacy—they are often a crime. Under current Italian statutes, creating and distributing non-consensual synthetic media can lead to charges of defamation, identity theft, and specifically, ‘revenge porn’ if the content is of a sexual nature.
Many users mistakenly believe that if they didn’t ‘film’ the person, they aren't liable. But the law is evolving to recognize that a digital reconstruction can be just as damaging as a real recording. If an AI service allows you to swap a colleague’s face into an adult video or clone a business partner’s voice to authorize a bank transfer, you are stepping into a legal minefield.
| Practice | Legal Status (Italy 2026) | Potential Consequence |
|---|---|---|
| Creating a deepfake for personal, private parody | Gray Area / Restricted | Potential civil liability if shared |
| Non-consensual sexual deepfakes | Illegal | Criminal prosecution (Revenge Porn laws) |
| AI Voice Cloning for financial gain | Illegal | Fraud and Identity Theft charges |
| Scraping public photos to train AI models | Strictly Regulated | Massive administrative fines (GDPR) |
A major hurdle for the Garante—and a reason they are asking for blocking powers—is the issue of jurisdiction. Jurisdiction refers to the legal boundary where a court’s or regulator’s power ends. Many of the most problematic AI services are hosted on servers in countries with lax privacy laws, far outside the reach of the European Union’s stringent GDPR (General Data Protection Regulation).
If a company is based in a remote digital tax haven, the Garante cannot easily fine them or drag them into an Italian courtroom. This makes the company essentially ‘untouchable’ through traditional legal recourse. However, by targeting the internet service providers (ISPs) within Italy, the Garante can make that platform invisible to Italian users, effectively cutting off the service's air supply in that region.
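In practice, this kind of ISP-level blocking usually happens at the DNS layer: the resolvers operated by Italian providers simply refuse to translate a blocked platform’s domain name into an IP address, so the site becomes unreachable for ordinary users. As a rough illustration only (the domain names and blocklist below are invented for the example, not real blocked services), a resolver with a blocklist behaves something like this:

```python
# Toy model of DNS-level blocking as applied by an ISP's resolver.
# All domains and addresses below are hypothetical examples.

BLOCKLIST = {"deepfake-platform.example"}

# Stand-in for the resolver's normal upstream lookups.
ZONE = {
    "deepfake-platform.example": "203.0.113.10",
    "news-site.example": "203.0.113.20",
}

def resolve(domain: str) -> str:
    """Return an IP address, or raise LookupError if blocked/unknown."""
    if domain in BLOCKLIST:
        # The resolver answers as if the domain does not exist
        # (an NXDOMAIN response), cutting off access for its customers.
        raise LookupError(f"NXDOMAIN (blocked): {domain}")
    if domain not in ZONE:
        raise LookupError(f"NXDOMAIN: {domain}")
    return ZONE[domain]

print(resolve("news-site.example"))          # ordinary lookup succeeds
try:
    resolve("deepfake-platform.example")     # blocked lookup fails
except LookupError as err:
    print(err)
```

It’s worth noting that DNS-level blocks are not airtight: a determined user can switch to a foreign resolver or a VPN. The point of the measure is not perfect enforcement but removing the service from casual reach, much as the ‘Piracy Shield’ system does for infringing streams.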
For the average citizen, the biggest challenge isn't just that deepfakes exist; it’s the difficulty of proving they are fake. We call this the burden of proof—it is the heavy backpack of evidence you must carry to show a court that you have been wronged.
In the past, if someone showed a scandalous video of you, you could point to an alibi. Today, AI can make you appear to be in a location you never visited, saying words you never spoke, with perfect lighting and realistic emotion. Proving a negative—that you didn't do something—is notoriously difficult. This is why the Garante is focusing on the source (the platforms) rather than just the symptoms (the individual videos). By making these tools less accessible, they hope to reduce the overall volume of synthetic disinformation.
While the government debates whether to grant the Garante these sweeping new powers, you don’t have to wait for a change in the law to protect yourself. Practical steps include limiting who can see the photos and voice notes you post, reviewing the privacy settings on your social accounts, and reporting suspected deepfakes both to the hosting platform and to the Garante itself.
The Garante’s request is a bold attempt to bring order to the ‘Wild West’ of the synthetic media age. By treating harmful AI platforms as a public nuisance that can be blocked, Italy is signaling that the right to one’s own image and voice is a fundamental pillar of a civilized society.
Ultimately, the goal is not to stifle innovation, but to ensure that technology serves as a bridge to progress rather than a trapdoor for our reputations. As the law catches up to the speed of code, we must remain vigilant and informed, ensuring that our digital doubles remain under our control.
Disclaimer: This article is for informational and educational purposes only and does not constitute formal legal advice. Laws regarding AI and data privacy are evolving rapidly and vary by jurisdiction. If you believe your rights have been violated or you are facing a legal dispute regarding synthetic media, please consult with a qualified attorney licensed in your area.


