What a COVID Commission Tells Us About AI Governance

The parallels between the COVID pandemic and the AI revolution are not obvious at first glance. One was a biological event. The other is a technological transformation. The timelines differ. The decision-makers differ. But John Chachas sees a structural similarity that matters: in both cases, decisions with enormous consequences for ordinary people were made rapidly, under pressure, by parties with conflicting interests, and the framework for evaluating those decisions afterward has been inadequate or nonexistent.

Writing in Finance Monthly, Chachas revisits the bipartisan commission he proposed with Tom Korologos in a 2023 Wall Street Journal op-ed, specifically to apply its analytical logic to AI policy. The commission was never created. The analytical logic, he argues, has only become more relevant as AI capabilities have advanced and Congress has failed to respond.

The COVID case study is instructive. Three years of pandemic data accumulated in state health agencies, school districts, and unemployment offices. Evidence of learning loss, confirmed by research published in 2025, was substantial and disproportionately concentrated in low-income communities. The federal agencies most responsible for pandemic guidance had institutional reasons to emphasize their own performance rather than evaluate it honestly. And the partisan environment turned any critical assessment into a political weapon.

A commission led by Clinton and Bush, as Chachas and Korologos proposed, would have structural independence from those pressures. The design principle was methodological: exclude the architects of the original decisions, assemble credible figures from both sides, and produce a shared factual foundation that neither party can easily dismiss. “Let us use the data,” they wrote. “Not run from them because they are politically uncomfortable.”

The same template applies to AI. The developers who built the most powerful AI systems are not the right people to evaluate their labor market impacts. The companies deploying AI to reduce headcount are not disinterested analysts of their own technology’s social costs. The legislators who have been unable to pass substantive AI legislation need an independent analytical foundation credible to voters on both sides.

You do not let the researchers who ran the study chair the committee that evaluates it. You do not let the companies that funded the research design the ethics review. Independent analysis, conducted by people with standing from both parties and no stake in defending the original decisions, is what produces findings that can actually move policy.

The framework for evaluating AI policy exists. What Chachas proposed for COVID (rigorous data analysis, bipartisan leadership with no stake in defending the original decisions, clear empirical questions) is portable to AI governance and to local media sustainability. What is missing is the political will to use it.