MDI Community Members Reflect on Biden’s AI Executive Order

On October 30, 2023, the Biden-Harris Administration issued an Executive Order (EO) on artificial intelligence, intending to strike a balance between “seizing the promise and managing the risks” of the rapidly advancing technology. 

The Georgetown community has continued to discuss this rapidly emerging policy focus. Some of our MDI Scholars, along with affiliated faculty and students across campus, have shared their reflections on the EO below.

“[The executive order] ultimately falls short due to a range of vague points and a lack of legislative force.”

Professor Amir Zeldes of the Computational Linguistics Department believes that while the Biden Administration’s Executive Order is a step in the right direction, “what is needed is detailed regulatory legislation.”

“I believe that using artificial intelligence correctly, safely, and sustainably will depend on how we, as humans, behave and use it.”

MDI Scholar Gabriel Soto (MS-DSPP ’25) believes that “just like how the introduction of phones, computers, and even cars marked a significant change in society, AI is a groundbreaking technological innovation. The key factor that will determine whether its use is appropriate or not is our human behavior towards it. Therefore, it’s essential to start teaching about AI and its benefits from the early years of education.”

“This is a growing form of digital harm that requires further research, regulation, and innovation to prevent.”

Professor Elissa Redmiles, an Assistant Professor in the Computer Science Department, says she is “very glad” that the topic of “synthetically generated non-consensual intimate imagery” is covered in the recent Executive Order [under Section 4.5.(a)(iv), and Section 10.1.(b)(viii)(B)] because “this is a growing form of digital harm that requires further research, regulation, and innovation to prevent.”

“There is a great opportunity to integrate AI into personalized assessment and learning.”

Professor Qiwei Britt He, an Associate Professor in the Data Science and Analytics program at the Graduate School of Arts and Sciences, is interested in how this executive order might “shape AI’s potential to transform education.”

“For me, the hardest part of the puzzle right now is enforcement.”

MDI Scholar Brian Holland (MS-DSPP ’24) explained that “if a government sets a speed limit on a road, there are several ways to objectively enforce that by checking people’s speed. But, the AI EO sets maximum limits that trigger reporting requirements. Even aside from the discussion of what the ‘speed limit’ is, we currently have no metric for enforcement that wouldn’t violate First Amendment law pretty simply.”

“I think the order is a positive step forward.”

Professor Will Fleisher, Assistant Professor in the Philosophy Department and Assistant Research Professor in the Center for Digital Ethics, was “happy to see sections 4.1 [Developing Guidelines, Standards, and Best Practices for AI Safety and Security] and 4.2 [Ensuring Safe and Reliable AI] aimed at establishing better governance for safe and reliable AI, and section 7 on equity and civil rights.” However, he would like to see “even more support for developing technical and moral frameworks for determining and ensuring fairness, justice, and anti-discrimination.” 

“The Executive Order on AI overlooks the crucial roles of workers like data labelers and global domain experts, essential in data production for machine learning.”

Professor Rajesh Veeraraghavan, Assistant Professor in Science, Technology and International Affairs at the School of Foreign Service, believes that while the recent Executive Order “acknowledges workers and labor unions as important stakeholders, primarily in terms of the harm caused to them, an inclusive analysis of AI’s global labor process is needed, focusing beyond just the impact on workers, to center the work and futures of all workers globally to prevent exploitation and deskilling.”

“I am uncertain whether mainstream media and social media platforms are prepared for the potential onslaught of disinformation.”

MDI Scholar Zhiqiang Ji (MS-DSPP ’24) believes that “the executive order addresses nearly all conceivable risks associated with AI at this time, including a specific focus on ‘reducing the risks posed by synthetic content.’” However, he warned that he is “concerned about how the general public—many of whom may be unaware of ChatGPT, Bard, Stable Diffusion, or Pika Lab, and how easy it is for a tech-savvy graduate student to create a data science tutorial video with synthetic voices of Morgan Freeman, Barack Obama, and Donald Trump (a project developed by one of our DSPP cohorts last semester)—will handle the confusion, frustration, and atmosphere of distrust that may arise amidst a flood of political disinformation.”

MDI Faculty Affiliate Rajesh Veeraraghavan taught a first-year proseminar entitled “Politics of Data” during the Fall 2023 semester. His students shared their analysis of the Biden Administration’s Executive Order (EO) on AI in a collated report, compiled by Cate Kanapathy (SFS ’27) and Tuqa Alibadi (SFS ’27) under Professor Veeraraghavan’s supervision.

The report praised the EO for mandating accountability, for its level-headedness in addressing both the potential for innovation and the minimization of harm, and for its incorporation of public input. However, the students highlighted several weaknesses of the EO: overly specific, often misinformed technological mandates; a lack of explainability and consequences; the exclusion of certain fields; limited enforcement of its rules; and an insufficient emphasis on sociotechnical considerations.

Here are a few highlights of the students’ thoughts. The full report is available here.

“Implementing an EO of this size and nuance will be extremely difficult.”

Jake Farber (SFS ’27) believes that “even though there are timelines within the document, they are aggressive and do not leave much room for potential issues that may come up throughout the process of rolling out this order.” One of Farber’s recommendations is “to implement the new AI EO into existing structures rather than trying to produce whole new agencies and expand the bureaucracy even further.”

“The limited legal scope of an executive order has prohibited what the White House can do to regulate AI without the help of Congress, greatly hurting its abilities to enforce the goals it sets forth.”

Lucas Holden (SFS ’27) highlights that “the burden now falls on local and state governments to adopt similar frameworks, and on Congress to strengthen AI regulation, regulate the private sector, and continue the progress the White House has begun.”

“While President Biden’s AI executive order is a strong message that encourages hard accountability, innovation, and US leadership in AI governance, it falls short by taking a superficial approach to AI harms related to climate change, pre-deployment, use cases, and bias.”

Jamie Alford (SFS ’27) thinks that “it is unquestionable that Biden’s recent AI executive order is a critical first step toward AI governance.” However, Alford believes its “limited scope could worsen existing harms with the increased use of AI in the government, which is why comprehensive legislation is necessary for the future of AI governance.”