
There’s a moment that comes in every emerging technology’s lifecycle when the first real harm occurs. For artificial intelligence, that moment has arrived many times over, yet we are still struggling with one deceptively simple question: Who is accountable? The developer who built the system? The organization that deployed it? The user who engages with it? The regulatory bodies that failed to prevent it? The answer matters, not because we enjoy assigning blame, but because without clear accountability, we’ll repeat the same mistakes at an exponential scale.
Regulating AI accountability has not been a priority for most regulators. Yet, amid incidents ranging from AI-linked suicides to AI-enabled attacks, the first debates are emerging over who is responsible for these actions and abuses. The European Union and the US state of Ohio are explicitly beginning to grapple with this reality. Their approaches recognize that accountability isn’t about choosing a single culprit, but about an ecosystem in which every actor bears responsibility proportional to their role.
Engineering Foundations Of AI Accountability
The European Union’s AI Act represents the first comprehensive attempt at global AI regulation, and its scaffolding beautifully reveals this complexity. When the act entered into force on August 1, 2024, it didn’t simply ban bad AI or create a one-size-fits-all compliance regime. Instead, it established a risk-based framework that distributes accountability across developers and users alike. The most severe restrictions arrived on February 2, 2025, when the EU’s prohibitions on AI practices deemed an unacceptable risk took effect. These aren’t minor restrictions. Most important is the blanket ban on manipulative AI systems, social scoring systems, and certain biometric identification technologies. More important still are the reasons the ban exists. In the debate over the act, the EU recognized that harm doesn’t emerge from the technology alone, but from the decisions made by both those who create it and those who wield it.
Developers, under the EU framework, bear the primary burden of designing systems responsibly. They must ensure transparency, mitigate bias, and implement safeguards that prevent the most egregious uses. This approach is akin to responsible engineering in other disciplines, where you can’t simply hand off a powerful tool and claim ignorance when it causes harm.
The Responsible User
Yet, responsibility and accountability flow both ways. Users, both individuals and organizations, carry equal weight in this accountability structure. When you implement AI to make decisions about hiring, lending, healthcare, or criminal justice, you’re not absolved of responsibility because a model made the decision. Users must demonstrate that they appropriately vetted the AI system, that humans maintain adequate oversight, and that remedies are in place when things go wrong. The oversight component is where things get thorny. Many organizations have adopted AI precisely because it promised to remove human judgment from sensitive decisions. They believed automation would eliminate bias. That logic was always flawed, but the accountability framework exposes it as dangerous. Automation doesn’t eliminate responsibility. It compounds it. Deploying an AI system requires more thoughtfulness, not less, because the consequences scale differently than those of human decisions.
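What adequate oversight looks like in practice can be quite concrete. As a rough illustration only, here is a minimal sketch, assuming a hypothetical screening model, score, and reviewer workflow, of keeping a named human between the model’s recommendation and the final decision:

```python
# A minimal sketch of a human-in-the-loop gate for a sensitive decision
# such as hiring or lending. The model, the score, and the reviewer
# workflow are hypothetical illustrations, not a prescribed design.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Recommendation:
    applicant_id: str
    score: float       # model output: a recommendation, not a decision
    rationale: str     # whatever explanation the model can provide


def decide(rec: Recommendation, reviewer: str, approve: bool, notes: str) -> dict:
    """The AI recommends; a named human accepts or overrides it, and the
    outcome is recorded alongside the model output."""
    return {
        "applicant_id": rec.applicant_id,
        "model_score": rec.score,
        "model_rationale": rec.rationale,
        "human_reviewer": reviewer,
        "final_decision": "approved" if approve else "rejected",
        "reviewer_notes": notes,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }


# The defensible artifact is this record, not the raw model score.
record = decide(
    Recommendation("A-1042", 0.31, "sparse credit history"),
    reviewer="j.doe",
    approve=True,
    notes="Recent income documentation outweighs the low score.",
)
```

The point is the record, not the score: it shows that a specific person accepted or overrode the recommendation, which is precisely what an accountability framework asks a deployer to demonstrate.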
Education AI: A Showcase For Community Involvement
Ohio’s approach to AI accountability highlights another critical aspect of the question. House Bill 96, passed in 2025, mandates that K-12 school districts adopt policies governing AI use by July 1, 2026. Thus, for the upcoming school year, districts must determine how teachers and students can use AI responsibly, either by adopting the state’s model policy or by creating their own. This approach fosters accountability through transparency and collective deliberation. Schools must document their decisions about where to permit and prohibit AI and communicate them to students, staff, and parents. Ohio recognized that responsibility can’t live only with developers or regulators. The organizations on the ground must be part of the decision-making process and stand behind it.
Genuine Accountability, Not Paperwork
The EU and Ohio frameworks acknowledge, though not explicitly enough, that accountability requires both genuine structures and a culture that supports them. A legal framework alone isn’t sufficient. Take the ‘vibe coding’ phenomenon, which illustrates precisely what happens when accountability dissolves. Early in its adoption, vibe coding promised to democratize development by letting AI generate code from loose prompts. The result was a failure as much cultural as technical. Developers would accept AI-generated code without understanding it and submit it to open-source projects. The AI would hallucinate dependencies and produce brittle architectures, and the entire proposition violated the core values of open-source communities. Why did this happen? Because no one in the chain bore accountability. The AI vendor claimed the code was just a set of suggestions. The developer claimed they were saving time. The organization claimed it was innovating. Everyone abdicated responsibility simultaneously, and the system collapsed not from regulatory intervention, but from community rejection.
A Shared Responsibility
Taken together, these examples highlight a fundamental truth. Accountability works when every actor in the chain understands what they’re responsible for and what consequences follow from negligence. For developers, it means building systems with auditability, testability, and transparency built into their architecture. It means acknowledging that you cannot create a system for security-sensitive or rights-impacting applications and then release it into the world without tests or maintenance. For users, it means refusing the temptation to treat AI deployment as a set-it-and-forget-it proposition. It means maintaining human oversight, understanding what your system does, and taking responsibility when it fails. For regulators, it means creating frameworks that make non-compliance more costly than compliance, and that create incentives for everyone to err on the side of caution rather than speed.
Where this becomes challenging is at the boundaries. What happens when an AI system trained on data from Europe is deployed in Ohio? What happens when thousands of others use an open-source model fine-tuned by one developer? What happens when the consequences of a system’s decisions take years to manifest, long after the developer has moved on to other projects? These questions have no perfect answers, but they have better and worse approaches. The better approach is to recognize that accountability flows through the entire system, not just its most convenient points. Developers must build systems that independent parties can audit. Users must maintain logs and be able to demonstrate their decision-making process. Regulators must coordinate across jurisdictions and establish shared standards.
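On that last point, maintaining logs is less about exotic tooling than about discipline. Here is a minimal sketch, with hypothetical file paths, field names, and model identifiers, of the kind of append-only decision log a deployer could keep in order to demonstrate later what the system did and who was operating it:

```python
# A minimal sketch of an append-only decision log. The file path, model
# identifier, and field names are assumptions for illustration, not a
# standard required by the EU AI Act or Ohio's HB 96.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.jsonl"  # hypothetical location


def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: dict, operator: str) -> None:
    """Append one audit record per decision: which model and version ran,
    what it saw, what it returned, and who was operating it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash the inputs so the record proves what was evaluated
        # without storing sensitive data verbatim.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "operator": operator,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


log_decision(
    model_id="resume-screening",
    model_version="2025-06-rc3",
    inputs={"applicant_id": "A-1042", "years_experience": 4},
    output={"score": 0.31, "recommendation": "refer to human review"},
    operator="hiring-ops",
)
```

Neither the EU AI Act nor Ohio’s framework mandates this exact record, but the sketch shows how little machinery basic audit trails require once an organization accepts that it owns them.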
Accountability Comes From Culture
Ultimately, accountability for AI is not something imposed from the outside. We must build it into the culture of our organizations, businesses, and society, ensuring accountability for the development, deployment, and oversight of new technologies. The EU and Ohio are showing two possible paths forward. Yet the real test will be whether organizations treat these frameworks as serious governance structures or as compliance checkboxes to plow past on the way to the next catastrophe. The difference will determine whether we emerge from this era of AI adoption having learned something about how to build technology responsibly, or whether we lurch from one crisis to the next.
