The swift advancement of artificial intelligence technologies worldwide has sparked growing demands for comprehensive AI governance: regulations that foster technological breakthroughs while safeguarding individuals from privacy intrusions, exploitative monitoring, algorithmic discrimination, and other potential harms.
However, crafting and implementing such regulations has proven exceptionally challenging.
"This represents an incredibly complex challenge," Luis Videgaray PhD '98, director of MIT's AI Policy for the World Project, explained during a Wednesday afternoon lecture. "This isn't something that can be resolved through a single report. This must become a collaborative dialogue, and it will require time. We're looking at a multi-year journey."
Throughout his presentation, Videgaray detailed a comprehensive vision for global AI policy—one that acknowledges economic and political realities while being rooted in tangible equity and democratic discourse.
"Trust stands as perhaps our most critical challenge," Videgaray emphasized.
Videgaray's presentation, titled "From Principles to Implementation: The Challenge of AI Policy Around the World," was part of the Starr Forum series of public discussions addressing topics of global significance. The Starr Forum is hosted by MIT's Center for International Studies. Videgaray delivered his remarks to a standing-room-only audience exceeding 150 people in MIT's Building E25.
Videgaray, who also serves as a senior lecturer at the MIT Sloan School of Management, previously held positions as Mexico's finance minister from 2012 to 2016 and foreign minister from 2017 to 2018. Videgaray also has extensive experience in investment banking.
Knowledge gaps and media exaggeration
During his talk, Videgaray began by outlining several "themes" connected to AI that he believes policymakers should consider. These include governmental applications of AI; the technology's economic impacts, including its potential to enable major tech corporations to consolidate market dominance; social accountability issues such as privacy, fairness, and bias; and AI's implications for democracy, particularly when bots can shape political discourse. Videgaray also highlighted a "geopolitics" of AI regulation—ranging from China's comprehensive technology control efforts to the more relaxed approaches employed in the U.S.
Videgaray noted that AI regulators struggle to keep pace with technological developments.
"There exists an information lag," Videgaray said. "Issues that concern computer scientists today may only become policymakers' concerns several years in the future."
Furthermore, he observed, media sensationalism can distort perceptions of AI and its applications. Here Videgaray contrasted the recent report from MIT's Task Force on the Future of Work, which identifies uncertainty about how many jobs technology will replace, with a television documentary depicting automated vehicles replacing all truck drivers.
"Clearly, nothing in the evidence suggests that all truck driving jobs, particularly in long-distance transportation, will disappear," he stated. "That simply isn't accurate."
With these overarching issues in mind, what should policymakers address regarding AI now? Videgaray proposed several concrete recommendations. To begin: Policymakers should move beyond merely outlining broad philosophical principles, an exercise that has been repeated many times, with a general convergence of ideas.
"Focusing on principles yields minimal marginal returns at this point," Videgaray explained. "We can progress to the next phase… principles are necessary but insufficient for AI policy. Because policy involves making difficult decisions amid uncertainty."
Indeed, he emphasized, greater progress can be achieved by making many AI policy decisions specific to particular industries. When considering medical diagnostics, for instance, policymakers want technology "to be highly accurate, but also explainable, fair, unbiased, and secure… there are numerous objectives that may conflict with one another. This fundamentally involves tradeoffs."
In many instances, he suggested, algorithm-based AI tools could undergo rigorous testing processes, as required in other sectors: "Pre-market testing makes sense," Videgaray said. "We implement this for pharmaceuticals through clinical trials, we apply it to automobiles—why shouldn't we conduct pre-market testing for algorithms?"
While Videgaray recognizes the value of industry-specific regulations, he's less enthusiastic about having a patchwork of varying state-level AI laws governing technology in the U.S.
"Is this problematic for Facebook or Google? I don't believe so," Videgaray noted. "They possess sufficient resources to navigate this complexity. But what about startups? What about students from MIT, Cornell, or Stanford attempting to launch ventures, who would need to navigate, in extreme cases, 55 different regulatory frameworks?"
A collaborative dialogue
At the event, Videgaray was introduced by Kenneth Oye, an MIT political science professor specializing in technological regulation, who posed questions to Videgaray following the lecture. Among other points, Oye suggested that U.S. states could serve as valuable laboratories for regulatory innovation.
"In a field characterized by substantial uncertainty, complexity, and controversy, there can be advantages to experimentation—having different models implemented in various regions to determine which approaches work best or worst," Oye proposed.
Videgaray didn't necessarily disagree but emphasized the importance of eventual regulatory convergence. The U.S. banking industry, he noted, followed a similar path until "eventually the financial regulation we have [became] federal" rather than state-determined.
Prior to his remarks, Videgaray acknowledged several audience members, including his MIT PhD thesis adviser, James Poterba, the Mitsui Professor of Economics, whom Videgaray described as "one of the finest educators, not only in economics but regarding numerous aspects of life." Mexico's Consul General in Boston, Alberto Fierro, also attended the event.
Ultimately, Videgaray stressed to the audience that the future of AI policy will be collaborative.
"You can't simply visit a computer lab and request, 'Give me some AI policy,'" he emphasized. "This must become a collective conversation."