Written by Tilde Jaques, MDI Journalism Intern
While the concept of artificial intelligence (AI) is not new, its applications in today’s world are unprecedented. Most people come into contact with AI every day: Apple introduced its virtual assistant, Siri, to iPhone users in 2011, and Amazon introduced its voice AI, Alexa, in 2014. As generative AI technologies experience a period of immense advancement, the adoption of these tools has deep implications for Georgetown and the broader global community. Georgetown students and faculty continue to research policy-driven solutions to these pressing issues as AI and technology become increasingly sophisticated.
During the Spring 2024 semester, the Massive Data Institute (MDI) and the Tech & Public Policy Program (TPP), both part of the McCourt School of Public Policy, launched a series of panels titled “AI & Me”. Tech & Public Policy Director Michelle De Mooy explained that “The panel series aimed to give people a sense of connection to AI and its impact in their daily lives.” MDI and TPP also hope to “educate and then to explore, through experts, AI in the context of key issues that both impact our lives on a day-to-day level, like privacy, equity and creativity, and that have policy implications.”
The AI & Me series focused specifically on three areas of generative AI: privacy and data, equity and representation, and creativity and content. Across these themes, panelists discussed the common thread of public policy and the role it could play in regulating AI. As the technology develops, the U.S. federal government has worked to respond with guiding regulations. This past October, for example, the Biden Administration released an Executive Order on AI that detailed the White House’s intentions to continue “harnessing AI for good and realizing its myriad benefits.” (Learn about some of our community members’ reactions to the EO on AI.) Throughout the AI & Me panel series, panelists and audience members considered how individuals and groups, both governmental and non-governmental, can grapple with the broadening horizons, and risks, that come with readily available AI.
The first panel, hosted on March 12, focused on privacy and data and was moderated by MDI Research Professor and Director of the Georgetown Federal Statistical Research Data Center Amy O’Hara. Panelists included Ryan Hagemann, Global AI Policy Lead at IBM; Jane Horvath, Partner at Gibson, Dunn & Crutcher LLP; Eric Null, Co-Director of the Privacy & Data Project at the Center for Democracy & Technology; and Professor Muthu Venkitasubramaniam of Georgetown’s Department of Computer Science.
The panel opened with remarks by MDI Director Lisa Singh, who emphasized the importance of open discourse around AI given that “over the last decade as the cost of hardware has decreased and the datafication of the world has increased, [the world is experiencing an AI revolution].” Moderator O’Hara led discussions on the global policy landscape regarding AI, tensions between existing privacy laws and AI, and the question of how to define ‘privacy’ and whether it still exists given how much tech companies know about us from our personal data. The panelists also discussed possible ways to help protect data; among these solutions was cryptography, the practice of encoding information so that it stays hidden from everyone but its intended recipient.
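The core idea of cryptography can be shown in a few lines. The sketch below is a toy illustration only (the message is invented and this is not drawn from any panelist’s work): a one-time pad hides a message by combining it with a random key of the same length, and only someone holding that key can recover the original.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Combine each byte of the data with the matching key byte via XOR."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"                    # the secret to protect
key = secrets.token_bytes(len(message))      # random key, as long as the message

ciphertext = xor_bytes(message, key)         # encrypt: without the key, this looks random
recovered = xor_bytes(ciphertext, key)       # decrypt: XOR with the same key restores it

assert recovered == message
```

Real systems use far more sophisticated schemes, but the principle, that data is unreadable without a secret key, is the same one the panelists pointed to as a tool for protecting personal data.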
Following the moderated discussion, audience members asked the panelists questions about privacy and personal data. In response to a question about whether privacy exists in an increasingly digital world, Eric Null offered a call for action, explaining that “privacy is a fundamental right. Privacy will always exist. It’s a question of how far we are actually willing to go to protect people’s online data.”
One attendee asked what the general public – people who might not know much about how personal data is used – should do to improve their privacy and understand more about the implications of the data they share on the internet. Jane Horvath offered a simple yet effective piece of advice in response: “once a month, go through what permissions you’ve given each app.”
The second panel in the AI & Me series, held on March 26, discussed equity and representation in the algorithms behind AI. Because AI output can reflect the biases of its datasets, training processes, and creators, representation in the field is a key issue. For example, a study by researchers at the University of Southern California’s Information Sciences Institute identified bias in about 38% of ‘facts’ used by AI.
The panelists addressed AI bias and offered solutions to these inequities to promote more diverse representation. Panelists included Professor and MDI Faculty Affiliate NaLette Brodnax of the McCourt School of Public Policy; Victoria Houed, Director of AI and Strategy at the U.S. Department of Commerce; Amen Ra Mashariki, Director of AI and Data Strategies at the Bezos Earth Fund; and Professor Elissa Redmiles of Georgetown’s Computer Science Department.
The discussion identified some areas where equity and representation in AI-enabled products and services could have an enormous impact, such as hiring decisions, healthcare, the criminal justice system, and access to capital. Bias in AI primarily arises from the training data: the data used by machine learning algorithms to build AI models. Professor Redmiles explained that “AI systems propagate patterns in data, so if there is bias in a [training] dataset it will be propagated.”
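Professor Redmiles’ point can be made concrete with a deliberately simple sketch (the data and the toy “model” here are hypothetical, invented for illustration, and not any system discussed on the panel): a pattern-matcher trained on skewed examples reproduces, and can even amplify, the skew.

```python
from collections import Counter

# Hypothetical training data, skewed 80/20 by construction.
training_data = [("engineer", "male")] * 8 + [("engineer", "female")] * 2

def train_majority_model(data):
    """'Learn' by memorizing the most frequent label seen for each input --
    the simplest possible pattern-matcher."""
    counts = {}
    for x, y in data:
        counts.setdefault(x, Counter())[y] += 1
    return {x: c.most_common(1)[0][0] for x, c in counts.items()}

model = train_majority_model(training_data)
print(model["engineer"])  # an 80/20 skew in the data becomes a 100% skewed prediction
```

Real machine learning models are vastly more complex, but the dynamic is the same: patterns present in the training data, including biased ones, are what the model reproduces.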
The third and final panel on April 9, held during Georgetown University’s Tech & Society Week, explored the role AI and emerging technologies can play in creativity and content creation. Moderated by Dr. Soyica Diggs Colbert, the Idol Family Professor of Black Studies and Performing Arts at Georgetown, the panel included Jacqueline Assar, Spatial Computing & AI Innovator; Professor Sarah Adel Bargal of Georgetown’s Computer Science Department; Laura DeNardis, Professor and Endowed Chair in Technology, Ethics, and Society at Georgetown; and Henry Lee, Interdisciplinary Designer and part-time Lecturer at Parsons School of Design at the New School.
Panelists delved into the impact of generative AI on our daily lives, considering both the content we consume and the content we could potentially create, amid the abundance of technological tools for content creation available to individuals and organizations. The questions raised encompassed a range of topics, from ways AI might complement creativity, to attribution and copyright issues, to deepfakes. Professor DeNardis explained that “we’re looking at a broad spectrum of issues from data privacy to antitrust, discriminatory bias, and copyright, essentially turning the whole internet governance and tech policy landscape into a highly charged field.”
Panelists also discussed some of the ways that AI might play a constructive role in creativity and content creation. Responding to a question regarding how AI might actually help artists and creators, Professor Bargal explained, “Unlike traditional methods where producing a single design can be labor-intensive, AI enables the creation of thousands of variations, significantly speeding up the material generation process for large projects.”
Looking forward, the capabilities of AI will continue to grow and become even more prevalent in our daily lives. Through their open dialogue on matters such as data governance, algorithmic bias, and content creation, these panels demonstrated that this technology offers exciting potential for innovation, but that it is up to all of us to ensure its use aligns with our values and improves the lives of all segments of our population.