Automated Compliance
Plus some other updates and opportunities
Automated Compliance and the Regulation of AI
My colleague Kevin Frazier and I have a new working paper out this week: Automated Compliance and the Regulation of AI. The paper starts from the premise, previously developed by Paul Ohm, that AI systems will soon be capable of automating many—perhaps most or indeed nearly all—regulatory compliance tasks.[1]
Kevin and I[2] apply this insight to a question of particular interest to both of us: when is it appropriate to regulate AI? Readers will know that I generally favor certain new regulations aimed at reducing the risk of catastrophes from frontier AI systems. Kevin, interestingly, used to agree, but has since updated his views to become more skeptical of such proposals:
A few years ago I would have agreed with the argument that the most important AI regulatory issue is mitigating the low probability of catastrophic risks. Today, I’d think nearly the opposite. My primary concern is that we will fail to realize the already feasible and significant benefits of AI—benefits that stand to improve the lives of marginalized communities, individuals lacking access to health care and quality legal representation, and students suffering because their school cannot afford the necessary specialists.
This made Automated Compliance something of an adversarial collaboration, in the academic sense.
Despite these differences, however, we share a number of important assumptions. Economists define innovation as the ability to produce more goods and services with the same or fewer inputs. Applied to the tradeoff between innovation and safety regulation, this means that AI innovation will let a society aiming to achieve a fixed level of safety assurance via regulation pay less to do so. Equivalently, a society willing to pay a fixed amount to reduce risks through regulation will be able to achieve greater risk reduction for that budget. It’s a classic positive-sum dynamic.
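To make that dynamic concrete, here is a minimal sketch in Python. The cost function, its parameters, and the numbers are all hypothetical assumptions of mine, not anything from the paper; the point is only the shape of the tradeoff.

```python
# Toy model of the positive-sum dynamic described above. Everything
# here is a hypothetical illustration, not a formula from the paper.

def compliance_cost(safety_level: float, ai_capability: float) -> float:
    """Cost of achieving `safety_level` (0 to 1) of safety assurance.

    Illustrative assumption: cost rises with the assurance sought and
    falls as AI capability grows.
    """
    return 100 * safety_level / (1 + ai_capability)


def max_safety_for_budget(budget: float, ai_capability: float) -> float:
    """Greatest safety level purchasable for `budget`, capped at 1.0."""
    return min(1.0, budget * (1 + ai_capability) / 100)


# Fixed safety target: the same assurance gets cheaper as capability rises.
for capability in (0.0, 1.0, 4.0):
    print(f"capability {capability}: cost of 0.9 assurance = "
          f"{compliance_cost(0.9, capability):.1f}")

# Fixed budget: the same spend buys more risk reduction.
for capability in (0.0, 1.0, 4.0):
    print(f"capability {capability}: safety for budget 30 = "
          f"{max_safety_for_budget(30, capability):.2f}")
```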
For me, Automated Compliance was an attempt to leverage such common assumptions to find policy mechanisms that could expand the pie a bit by loosening the tradeoff between innovation and risk regulation.
We converged on a few different policy ideas. You can see all of them behind the paywall in the full paper. The one I am most excited about, however, is what we call automatability triggers. An automatability trigger is a legal mechanism specifying that the regulation to which it is attached becomes effective only when compliance with that underlying regulation is sufficiently automatable. In other words, before AI has progressed to the point where compliance with the underlying regulation is automatable, the regulation has no effect. Once AI passes that point, however, the regulation automatically becomes effective.
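For readers who think in code, here is a minimal sketch of the trigger logic, assuming a single hypothetical automatability score and threshold. In practice, of course, defining and measuring that score is the hard part.

```python
from dataclasses import dataclass


@dataclass
class TriggeredRegulation:
    """Toy model of a regulation carrying an automatability trigger.

    `automatability_threshold` is a hypothetical score between 0 and 1
    above which compliance counts as sufficiently automatable.
    """
    name: str
    automatability_threshold: float

    def is_effective(self, measured_automatability: float) -> bool:
        # Before AI reaches the threshold, the regulation has no effect;
        # once it does, the regulation becomes effective automatically.
        return measured_automatability >= self.automatability_threshold


reg = TriggeredRegulation("hypothetical audit mandate", automatability_threshold=0.8)
print(reg.is_effective(0.5))  # False: compliance not yet automatable enough
print(reg.is_effective(0.9))  # True: the trigger fires
```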
One way to think about automatability triggers is that they represent a bargain between the pro-regulatory and deregulatory sides of a given debate. Each side concedes something: the deregulatory side concedes that regulation would be desirable once the cost is sufficiently low, while the pro-regulatory side concedes that, at the time of enactment, the costs of the regulation are too high to warrant immediate implementation. Since both sides agree that AI progress entails decreasing compliance costs (at least for certain compliance tasks),[3] they can agree to postpone implementation of the regulation until compliance is sufficiently automatable.
I am not naive about the challenges of drafting an automatability trigger that accomplishes its intended effect. But I think further exploration of the idea might unlock novel, positive-sum compromises between the different sides of these AI policy debates.
I also want to be clear that I do not think automatability triggers make sense for every underlying regulation. Automatability triggers make the most sense when we can expect compliance with the regulation to become automatable before (or at least not too long after) the risks the regulation aims to prevent arise. This seems likely to be true of some risks, but less so of others. Compliance automation is a product of AI capabilities diffusion: it will likely involve new firms building products atop new AI models. But the risks might outrun the compliance-automating capabilities of AI systems, either because of differential progress in underlying capabilities or, more likely, because some capabilities diffuse more quickly than others.
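That criterion can be stated compactly: a trigger fits a risk when expected automation arrives before the risk does, give or take some tolerable lag. A sketch, with all quantities hypothetical forecasts of my own:

```python
def trigger_fits_risk(years_until_automatable: float,
                      years_until_risk: float,
                      tolerable_lag_years: float = 0.0) -> bool:
    """Toy decision rule: does an automatability trigger fit this risk?

    All quantities are hypothetical forecasts, measured in years from
    enactment; the rule itself is my illustration, not the paper's.
    """
    return years_until_automatable <= years_until_risk + tolerable_lag_years


print(trigger_fits_risk(2.0, 3.0))       # True: automation beats the risk
print(trigger_fits_risk(5.0, 3.0, 1.0))  # False: the risk outruns automation
```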
Nevertheless, I am very excited about work like this, which attempts to develop new AI policy proposals under the assumption that AI capabilities will themselves unlock new means of governance. Our team calls this auto-governance, and you can expect more from us on this in 2026.
Other Updates
Also out this week: Legal Alignment for Safe and Ethical AI. This multi-author project collects themes that we have been converging on, both independently and collaboratively, over the past several years into an emerging unified field of Legal Alignment. Legal alignment consists of three related strands: “(1) designing AI systems to comply with the content of legal rules developed through legitimate institutions and processes, (2) adapting methods from legal interpretation to guide how AI systems reason and make decisions, and (3) harnessing legal concepts as a structural blueprint for confronting challenges of reliability, trust, and cooperation in AI systems.” We’re hoping this paper helps frame the field so we can make rapid progress on these questions.
In December, my colleagues Matthijs Maas and Tobi Olasunkanmi released Treaty-Following AI: an exploration of how law-following AI might be extended to help solve problems in international law. I’m super excited about this line of work, as I think the original Law-Following AI article only scratches the surface of the possible applications of law-aligned AI systems.
Gillian Hadfield recently published a thoughtful book chapter. It argues, inter alia, that law-following AIs will be limited, as a type of normatively desirable AI, unless embedded in and capable of reasoning about a richer, dynamic network of normative agents and institutions. Highly recommended reading!
Applications for the 2026 Workshop on Law-Following AI are open! Last year’s was great, and this year’s will be bigger and better. Apply here and pass it along to anyone who may be interested.
LawAI also has some other exciting job openings:
Summer Research Fellow (US, EU, Legal Frontiers)
Footnotes

[1] This is of course related to a major theme of Law-Following AI: AI systems will be able to perform an increasing fraction of computer-based tasks (which includes many compliance tasks).

[2] This blog post is solely authored by me, and I have not run it by Kevin prior to posting. While I hope to have done a good job at representing his views, he may end up disagreeing with my account here, in which case the error is my own!

[3] See page 17 of Automated Compliance for an explanation of which regulatory and compliance costs might not be susceptible to automation-driven cost reductions.

