
By Vish Karthikeyan ’27
Much discourse has taken place around AI and its implications for public policy, particularly at Stanford. As prevalent and timely as this discourse is, it is severely polarizing and painfully confusing. And that is not to mention that this discussion over the future of AI policy is unsettling in a manner that causes me to question my place in society. Though perhaps not as much as it has made me question the judgment of the otherwise intelligent people I encounter all the time who don't 'believe in' AI—a position roughly equivalent to not believing in the Internet.
This past quarter alone, we've had an impressive array of politicians, technologists, economists, and professors share their opinions on AI policy. From Bernie Sanders' and Ro Khanna's riveting addresses to the Hoover Institution's AI labor panel featuring Rishi Sunak, opinions from across the political spectrum have been offered with much enthusiasm.
What follows is my attempt to make sense of it all. Of course, I haven't solved it. But understanding what we're actually arguing about feels like the least we can do before the algorithm figures it out for us. In this series of four articles, I distill the debate around AI policy into four crucial ideas, serving as a starting point for those interested in joining this conversation, and as a common vocabulary for those already in it. In this first installment, I explore AI regulation.
AI regulation is perhaps the most contentious topic in the arena of AI policy. Those who argue largely in favor cite safety, transparency, and accountability issues emerging from proprietary large language models, while those against it believe safety guardrails are barriers to sustaining American exceptionalism—that unhindered innovative freedom is what keeps the heart of Silicon Valley beating, particularly amidst the looming threat of Chinese technical excellence.
On the one hand, a novel technology such as this thrusting itself onto the face of a population that had no say in its arrival poses legitimate risks—ones that are already materializing—such as deepfakes and chatbot-related suicides. To that end, legislators like Ro Khanna (D-CA) are calling for a rather bold slew of reforms, including a Data Bill of Rights and removing the protections of Section 230 from engagement-driven social media platforms that allegedly optimize so aggressively for users' preferences that they fuel extreme polarization and digital violence.
While I am all for knowing where my data lies and what it is being used for, the Section 230 argument merits a more nuanced discussion. Can platforms be held liable for the predictable social harms of their own design choices? Likely not: machine learning-driven algorithmic recommendations are likely treated as protected editorial speech under the First Amendment, and hence imposing liability would be seen as an unconstitutional restriction on free speech. Yet there remains a narrow scope for reform if interested lawmakers can prove that these 'predictable harms' are legitimate and causal.
In the absence of an overarching federal regulatory framework, pockets of experimental legislation have emerged across multiple states, notably California and Colorado, providing for disclosure mechanisms such as whistleblower protections and deployment restrictions. While I am pro-transparency, I deviate from the dominant narrative here in terms of deployment.
While algorithmic bias is a legitimate mathematical phenomenon, the debate often overlooks the baseline of human fallibility. Research on allocational harms where AI produces disparate outcomes is significant, but we frequently lack direct, same-task comparisons between human and model decision-making. Consequently, declaring AI inherently “more biased” or “more racist” than humans is statistically premature.
The legislative overcaution seen in states like Colorado, which mandates “reasonable care,” annual impact assessments, and human review appeals for high-risk systems, may be counterproductive if it stalls the deployment of systems that are already more objective than their human counterparts. As a rule of thumb, if a model’s measurable bias is below or equal to the human baseline in a specific domain, it signals positively for deployment. Waiting for mathematical perfection while ignoring the “noise” and inconsistency of human bias is a choice to allow known human errors to persist.
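To make this rule of thumb concrete, here is a minimal sketch, assuming demographic parity (one of several possible fairness metrics) as the bias measure and using entirely hypothetical loan-approval data; it presumes the same-task human/model comparison data that, as noted above, we frequently lack:

```python
def parity_gap(decisions, groups):
    """Demographic-parity gap: absolute difference in approval rates
    between two groups (1 = approve, 0 = deny)."""
    rate = lambda g: (
        sum(d for d, grp in zip(decisions, groups) if grp == g) / groups.count(g)
    )
    return abs(rate("A") - rate("B"))

# Hypothetical decisions on the identical set of six applicants.
groups          = ["A", "A", "A", "B", "B", "B"]
human_decisions = [1, 1, 0, 0, 0, 1]   # human reviewers
model_decisions = [1, 0, 1, 1, 0, 1]   # model, same cases

human_gap = parity_gap(human_decisions, groups)   # 2/3 - 1/3 = 1/3
model_gap = parity_gap(model_decisions, groups)   # 2/3 - 2/3 = 0

# The rule of thumb: the model signals positively for deployment
# only if its measurable bias does not exceed the human baseline.
deployable = model_gap <= human_gap
```

The comparison, not the absolute number, is the point: a nonzero model gap can still clear the bar if the human process it replaces is noisier and more biased on the same cases.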
But misdiagnosing bias is only one cost of regulatory overcaution. Gina Raimondo, the Secretary of Commerce under President Joe Biden, put it quite bluntly to a Hoover audience: "Chinese investments in AI, robotics, and quantum [computing] are dominating the world. We're foolish to think it's not a race." President Trump revoked President Biden's executive order on the Safe, Secure, and Trustworthy Development and Use of AI and replaced it with his own, Removing Barriers to American Leadership in AI; the two titles tell you everything you need to know about where each administration's priorities began and ended.
Contemporary discourse makes China seem like the ungoverned Wild West of AI. In reality, China has built a comprehensive and rapidly maturing AI governance framework, including mandatory content labeling, lifecycle safety standards, and voluntary industry commitments that rivals the West in ambition. Safety and censorship, though, are the same instrument in China’s hands; accountability to the state and accountability to the public are not the same thing.
Neither side is wrong. What is wrong, however, is viewing regulation and technical innovation as being mutually exclusive.
The true hindrance to innovation—something that few figures on either side of the spectrum have gotten right—is the glacial yet palpable decline of the American research university model. Condoleezza Rice, Secretary of State under President George W. Bush, argues in her latest report that declining investment in fundamental research as a percentage of GDP is risking our position on the pedestal of innovation. Bernie Sanders, who could hardly be more opposed to Secretary Rice otherwise, validated her concern by underscoring the American research university model as the key feature distinguishing American innovation from Europe's.
Under University President Levin, much has been done to position Stanford as a pioneer in the AI revolution. From institutions like Stanford Human-Centered AI and the new Department of Data Science to infrastructure projects like Marlow and the CoDa building, we're stretching every last dollar of the non-endowment funding we have.
While OpenAI and Google’s projected $100 billion+ annual AI budgets now dwarf the entire $9 billion National Science Foundation budget, elite institutions like Stanford are simultaneously facing $140 million in internal cuts due to stagnating federal support and rising endowment taxes, effectively pricing the American research university out of the very frontier it created.
Yet we must confront a sobering reality: the sheer capital required to train a frontier model like GPT-4 now dwarfs the combined grants and budgets of even the most elite U.S. research institutions. If only Big Tech possesses the money to build these models, what do we even mean by innovation? Innovation in whose interest?
If there is anything I want you to take away from this article, it is that regulation and technical innovation are not two ends of the same spectrum. It is striking how thoroughly we have trapped ourselves in this false dichotomy.
Regulation, done right, does not slow innovation. We need an overarching federal framework that is consistent with practical standards of deployment rather than theoretical possibilities of a discrimination-free society. Overregulation poses dangers of its own that are arguably greater than losing our precious spot on the pedestal, a loss that will happen anyway if we do not rescue the research university model. I am becoming increasingly pessimistic that we will.
In my next article, Towards Eigenemployment, we will discuss the nature and economics of work in the era of generative AI and beyond, and its implications for policy. In the meantime, I encourage you to extend the conversation via the comments section below.