Long before a defendant steps into a courtroom or a case file is even opened, an invisible digital hand may have already sorted, analyzed, and flagged the evidence. In the silent corridors of the justice system, algorithms are increasingly the ones doing the heavy lifting. But for years, these systems operated within a 'black box'—a space where code remained opaque and decisions were made without a clear trail of accountability.
This era of unchecked automation is coming to an end in Western Europe. On the heels of the EU AI Act’s implementation, Portugal’s Public Prosecution Service (Ministério Público) has taken a decisive step to pull back the curtain. By adopting a comprehensive Charter for the Ethical Use of Artificial Intelligence and a Technical Audit and Monitoring Policy, the service is signaling that while AI may be a powerful tool, it will never be the one holding the gavel.
In a regulatory context, the move by the Portuguese Public Prosecution Service isn't just a bureaucratic update; it is a fundamental realignment of how technology interacts with the law. The newly adopted Charter applies to all AI systems used within the service, but it places a heavy emphasis on what the EU AI Act defines as high-risk systems.
High-risk AI refers to software that has a significant impact on an individual’s life, such as systems used in recruitment, credit scoring, or, in this case, law enforcement and the judiciary. Because these tools can influence whether someone is investigated or how evidence is prioritized, the margin for error is nonexistent.
Essentially, the Charter acts as a set of guardrails on a mountain road. It allows the vehicle—the AI—to move quickly, but it prevents it from veering off the cliff of bias or illegality. By establishing these rules early, Portugal is attempting to prevent the 'black box' problem, where even the developers cannot explain why an algorithm reached a specific conclusion.
At the heart of the Charter lie six core principles that every AI system must satisfy before it is allowed to touch a case file.
From a compliance standpoint, these principles serve as a compass for prosecutors. They ensure that technology remains a subordinate partner rather than an autonomous decision-maker.
Curiously, one of the most significant parts of the Charter isn’t about what AI can do, but what it is strictly forbidden from doing. The Public Prosecution Service has drawn a hard line in the sand: AI cannot replace human judgment, and predictive assessments are prohibited.
In some jurisdictions, 'predictive policing' or 'predictive sentencing' tools have been used to estimate the likelihood of a person committing a crime in the future. Portugal has rejected this path. Under this framework, an algorithm cannot be used to determine a defendant’s 'risk score' or suggest a specific sentence based on historical data.
This is a critical victory for digital rights. It recognizes that algorithms are backward-looking by nature—they learn from the past, including past biases. Allowing them to predict the future within the justice system would be like using a rearview mirror to steer a car through a crowded intersection. It is inherently dangerous and, under the new regulatory framework, legally precarious.
While the Charter provides the 'what,' the Technical Audit and Monitoring Policy provides the 'how.' In practice, many organizations adopt ethical guidelines and then let them sit on a shelf. Portugal is avoiding this trap by creating a Multidisciplinary AI Supervision Committee.
This committee is tasked with continuous compliance verification. It is not a one-time checkup but an ongoing process of institutional oversight. Every AI system used by the prosecutors will be subject to granular audits that examine the data sets used for training, the logic of the algorithms, and the real-world outcomes they produce.
Think of this as a digital witness protection program for data integrity. The auditors ensure that the data fed into the AI hasn’t been 'poisoned' by inaccuracies and that the system’s performance hasn’t 'drifted' over time to become less accurate or more biased.
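To make the idea of "drift" concrete: one common way auditors quantify it is the Population Stability Index (PSI), which compares the distribution of a model's scores at audit time against a reference distribution from when the system was approved. The policy documents do not specify which metrics the Supervision Committee will use; the sketch below is an illustrative, self-contained example of the general technique, not Portugal's actual audit tooling.

```python
import math
import random

def population_stability_index(reference, current, bins=10):
    """Compare two score distributions bin by bin.

    As a common rule of thumb, PSI > 0.2 is treated as a signal
    of significant drift worth investigating.
    """
    lo = min(min(reference), min(current))
    hi = max(max(reference), max(current))
    width = (hi - lo) / bins or 1.0  # guard against all-equal values

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor empty bins so the log term below is always defined.
        return [max(c / len(values), 1e-6) for c in counts]

    ref_p = proportions(reference)
    cur_p = proportions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_p, cur_p))

# Synthetic example: scores from the approved model vs. two later audits,
# one stable and one whose score distribution has shifted upward.
random.seed(42)
baseline = [random.gauss(0.5, 0.1) for _ in range(5000)]
stable   = [random.gauss(0.5, 0.1) for _ in range(5000)]
drifted  = [random.gauss(0.65, 0.1) for _ in range(5000)]

print(f"stable audit:  PSI = {population_stability_index(baseline, stable):.3f}")
print(f"drifted audit: PSI = {population_stability_index(baseline, drifted):.3f}")
```

A periodic check like this catches silent degradation: the model's code is unchanged, yet the population it scores has moved, which is exactly the failure mode a one-time certification would miss.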
While this policy is specific to the Portuguese Public Prosecution Service, its ripples will be felt across the private sector. Companies developing legal-tech or AI tools for government use now have a very clear checklist of requirements.
Moreover, this serves as a blueprint for any organization—be it a bank, a hospital, or a retail giant—that uses high-risk AI. The transition from 'move fast and break things' to 'move carefully and document everything' is now the global standard. Organizations that fail to adopt similar ethical guidelines and audit policies will find themselves increasingly vulnerable to both legal challenges and a loss of public trust.
Ultimately, privacy-preserving AI is not just about following the law; it is about ensuring that as we move into an automated future, we don’t leave our fundamental humanity behind.
If your organization is currently deploying or developing AI systems, take a page from the Portuguese playbook to ensure your 'digital apprentice' stays on the right track.
Disclaimer: This article is for informational and journalistic purposes only. It explores the intersection of law and technology but does not constitute formal legal advice. For specific compliance requirements, consult with a qualified legal professional specializing in AI and data protection.


