Spotlight on Women in AI: Interview with Irene Solaiman, Global Policy Head at Hugging Face
To shine a light on the achievements of remarkable women in academia and elsewhere in the field of AI, TechCrunch is launching a series of interviews. This ongoing series focuses on those who have played substantial roles in advancing AI but seldom receive due recognition. More profiles can be found here.
Irene Solaiman began her work in AI as a researcher and public policy manager at OpenAI, where she spearheaded a novel approach to the release of GPT-2, a precursor to ChatGPT. After a nearly year-long tenure as AI policy manager at Zillow, she joined Hugging Face as head of global policy, where her responsibilities range from leading AI policy worldwide to conducting socio-technical research.
In addition to these duties, Solaiman advises the Institute of Electrical and Electronics Engineers (IEEE) on matters concerning AI and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD). In the conversation below, she reflects in her own words on her path into the field and its most pressing challenges.
As is typical in the AI industry, my career path has not been linear. I discovered my interest in the field as a socially awkward teenager finding solace in science-fiction media. I began by studying human rights policy and later enrolled in computer science courses, seeing AI as a tool for advancing human rights and building a better future. The opportunity to conduct technical research and inform policy in a field full of unexplored opportunities and unanswered questions is what continues to fuel my enthusiasm for the work.
I feel a profound sense of accomplishment when my knowledge and perspectives resonate within the AI field, particularly my writing on the delicate balance between openness and caution in AI system releases. One example is my paper framing an AI Release Gradient for thinking through technical deployment; seeing it spark ongoing debate among scientists and appear in government reports is an affirmation that I am headed in the right direction. On a personal level, the work I am most passionate about is cultural value alignment: ensuring that AI systems work well for the cultures in which they are deployed. Together with my esteemed co-author and friend Christy Dennison, our intensive work on a Process for Adapting Language Models to Society has significantly influenced today's safety and alignment work.
I am still discovering my circle of people, from working with compassionate company leaders who deeply care about the same pressing issues I prioritize, to finding great research co-authors. Affinity groups are tremendously helpful in fostering community and exchanging ideas. I strongly believe in intersectionality; my fellow Muslim and BIPOC researchers continually motivate me.
Seek out a support group that regards your achievements as a collective victory; I call this a "girl's girl" approach. The women and allies who entered this field alongside me have become my comrades, available for a quick coffee catch-up or a last-minute call ahead of a pressing deadline. One timeless piece of career guidance I found insightful is Arvind Narayanan's "Liam Neeson Principle," shared on the platform formerly known as Twitter: you don't have to be the smartest person in the room, but you should cultivate a unique set of skills.
The most urgent issues keep changing, which underscores the need for international collaboration to ensure safer systems for everyone. People using and affected by these systems, even within the same country, hold diverse views and preferences about what safety means to them. The issues that emerge will depend not only on how AI evolves but also on the environment into which systems are deployed. Capability and safety priorities also differ by region; for example, more digitized economies face a greater risk of cyberattacks on critical infrastructure.
Technical solutions alone often fail to address risks and harms comprehensively. While users can improve their understanding of AI, it remains crucial to develop a range of safeguards for evolving risks. One promising line of research I'm interested in is watermarking as a technical tool, but we also need unified policymaker guidance on the dissemination of generated content, particularly on social media platforms.
Those affected should always be part of the process, and we must continually reassess how we evaluate and apply safety techniques. Both the potential benefits and the potential harms are ever-changing and require iterative adjustment and feedback. The whole field needs to scrutinize collectively how we improve AI safety: the standard evaluations run on models in 2024 far outstrip those run in 2019. Today, I view technical evaluations more favorably than red-teaming. Human evaluations have high utility, but as evidence builds about the psychological strain and disparate costs of human feedback, I'm increasingly in favor of standardized evaluations.
Many investors and venture capital firms are already actively participating in safety and policy conversations, including penning open letters and testifying before Congress. I'm keen to hear from investment experts about what motivates small businesses across sectors, especially as we see more AI adoption beyond the established tech industries.