Power Reads

Beyond the Workshop: Why Participatory AI Must Outlive the Design Phase

Explore why meaningful stakeholder involvement in AI requires engagement across the entire lifecycle, from design to long-term monitoring and evolution.

Despite the unprecedented surge in global initiatives aimed at making artificial intelligence more inclusive, a striking paradox has emerged: as the vocabulary of participation becomes ubiquitous, the actual influence of the public remains remarkably fragmented. A recent review of eighty participatory AI projects revealed that while communities are frequently invited to the table during the initial stages of data collection or design, they are almost entirely absent once the system is actually deployed. We have mastered the art of the kick-off workshop, but we have yet to figure out how to keep the door open once the code starts running.

The Illusion of the Open Door

I recently spent an afternoon in a community center where a local council was presenting a new AI-driven resource allocation tool. The room was filled with the hum of genuine civic energy. Residents were asked to color-code maps and discuss what fairness meant to them. It was a textbook example of participatory design—vibrant, earnest, and deeply rooted in the local context. However, as the session ended, a woman sitting near me asked a poignant question: “What happens in six months when the algorithm decides my street doesn't need a bus anymore? Who do I talk to then?”

The facilitator’s pause was telling. In that moment, the systemic gap in our current approach to AI governance became visceral. We treat participation as a discrete event—a ribbon-cutting ceremony for an algorithm—rather than a continuous relationship. Once the participatory design phase concludes, the governance of the system almost invariably shifts back to the opaque corridors of the developers or the commissioning agencies. The community, having served its purpose as a data point or a sounding board, is effectively sidelined.

The Lifecycle Gap: From Design to Deployment

On a macro level, this trend reflects a broader sociological shift toward what we might call the “consultation economy.” In this model, engagement is treated as a checkbox for compliance rather than a redistribution of power. The OECD AI Principles and the EU AI Act both emphasize stakeholder involvement as a cornerstone of trustworthy AI. Yet, in practice, this involvement is often front-loaded. We invite stakeholders to help build the car, but we rarely give them a seat in the vehicle once it’s on the road.

This lifecycle gap is not merely a procedural oversight; it is a structural flaw. AI systems are not static tools; they are dynamic entities that evolve through feedback loops, retraining, and shifting environmental contexts. When stakeholder involvement ends at the deployment phase, the system loses its social tether. Consequently, the very communities that helped shape the initial model find themselves marginalized when the system begins to exhibit unforeseen biases or when its operational scope expands beyond the original agreement.

Linguistically Speaking: The Semantics of Participation

Zooming out, we can see this tension reflected in the language we use. In the tech industry, the term “user” has long been the dominant descriptor. A user is, by connotation, a passive recipient of a service. The shift toward “stakeholder” was intended to imply agency and a vested interest. However, if a stakeholder only has a voice during a ninety-minute focus group, the term becomes a symbolic gesture rather than a functional reality.

Through this lens, the current state of participatory AI looks less like a democratic revolution and more like a theater stage. We perform the rituals of inclusion—the sticky notes, the town halls, the ethics charters—but the script is often written in advance. To be truly participative, the discourse must move beyond the ephemeral excitement of the “launch” and settle into the mundane, long-term work of monitoring and system evolution.

The Archipelago of Governance

We might compare this to the way modern cities have become an archipelago of atomized spaces. We live in close proximity, yet our systems of governance are often isolated islands. One island handles the technical development, another handles the legal compliance, and a small, temporary island is built for “community engagement.” Once the project is finished, the bridge to the community island is dismantled.

| Stage of AI Lifecycle  | Typical Level of Participation | Desired Level of Power     |
|------------------------|--------------------------------|----------------------------|
| Problem Identification | High (Consultative)            | Co-Definition              |
| Data Collection        | Moderate (Extractive)          | Data Sovereignty           |
| Model Development      | Low (Technical)                | Algorithmic Oversight      |
| Deployment             | Negligible                     | Veto Power / Red-teaming   |
| Monitoring & Audit     | Rare                           | Community-led Auditing     |
| Decommissioning        | Non-existent                   | Collective Decision-making |

Paradoxically, the most critical moments for stakeholder influence occur after the system is live. This is when the nuances of real-world impact become visible. Without a mechanism for ongoing involvement, the feedback loop is broken. The system becomes a “hall of mirrors,” reflecting only the internal metrics of the developers rather than the lived experiences of the people it affects.

Reclaiming the Narrative: Longitudinal Agency

Ultimately, the goal of participatory AI should be the establishment of longitudinal agency. This means creating structures where stakeholders are not just consultants, but co-governors throughout the entire lifespan of the technology. This requires a shift from “one-off” engagement to “durable” involvement.

In everyday terms, this might look like community-led audit boards that have the power to trigger a system review, or “human-in-the-loop” mechanisms that prioritize local knowledge over algorithmic efficiency. It involves recognizing that the expertise of a resident who understands the social fabric of their neighborhood is just as vital as the expertise of the data scientist who understands the architecture of the neural network.
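To make this concrete, here is a minimal sketch in Python of what such a community-triggered review gate might look like. All names and thresholds here are hypothetical illustrations, not a reference to any real governance system or API: resident reports accumulate against a live system, and once an agreed threshold is crossed, automated decisions are paused pending a community-led audit.

```python
from dataclasses import dataclass, field

@dataclass
class CommunityOversight:
    """Hypothetical sketch of a community-triggered review mechanism."""
    review_threshold: int = 10            # number of reports that triggers a review,
                                          # agreed with the community in advance
    reports: list = field(default_factory=list)
    under_review: bool = False

    def file_report(self, resident_id: str, concern: str) -> None:
        """A resident files a concern about the live system."""
        self.reports.append((resident_id, concern))
        if len(self.reports) >= self.review_threshold:
            self.trigger_review()

    def trigger_review(self) -> None:
        """Pause automated decisions until a community-led audit concludes."""
        self.under_review = True

    def allow_automated_decision(self) -> bool:
        """Deployment gate: the algorithm acts only when not under review."""
        return not self.under_review
```

The design choice worth noting is that the gate sits inside the deployment path itself: the system checks `allow_automated_decision()` before acting, so community influence is structural rather than advisory. A real implementation would of course need identity verification, report triage, and an audit process behind `trigger_review()`.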

Food for Thought

As we navigate this shifting landscape, we must ask ourselves uncomfortable questions about the nature of power in the digital age. If you are involved in a tech project, consider these reflections:

  • Who governs the evolution? If the AI system changes its behavior tomorrow, is there a clear, accessible path for the community to intervene?
  • Is the participation transformative or performative? Does the input of stakeholders have the power to stop a project, or only to tweak its margins?
  • Can we move beyond the workshop? How can we build digital infrastructures that allow for continuous, rather than episodic, civic engagement?

By treating AI governance as a living, breathing social contract rather than a static technical requirement, we can begin to bridge the gap between the promise of participation and the reality of power. The goal is not just to build better AI, but to build a society where technology serves the collective resilience of the people, rather than just the efficiency of the machine.

Sources

  • Analysis of 80 participatory AI initiatives and the “participation wash” phenomenon.
  • The OECD AI Principles on stakeholder engagement and trustworthy AI.
  • The EU AI Act’s provisions on fundamental rights impact assessments and stakeholder involvement.
  • Sociological frameworks on “Liquid Modernity” and the atomization of civic life.
  • Linguistic studies on the evolution of “user” vs. “stakeholder” in technological discourse.

See you on the other side.
