Highlighting Women in AI: Introducing Claire Leibowicz, a Media Integrity Expert at PAI

To highlight the achievements and contributions of women in AI, TechCrunch is launching a series of interviews focused on remarkable women who have played pivotal roles in the AI revolution. We plan to publish pieces throughout the year to spotlight work that often goes unacknowledged. You can find more profiles here.

Claire Leibowicz leads the AI and media integrity program at the Partnership on AI (PAI), an organization backed by Amazon, Meta, Google, and Microsoft, among others, that is dedicated to promoting the responsible use of AI technology. Leibowicz also oversees PAI's AI and Media Integrity steering committee.

In 2021, Leibowicz served as a journalism fellow at Tablet Magazine; the following year, she was a fellow at The Rockefeller Foundation's Bellagio Center, focused on AI governance. Leibowicz, who holds a BA in psychology and computer science from Harvard and a postgraduate degree from Oxford, has advised companies, governments, and nonprofit organizations on AI governance, generative media, and digital information.

Can you share the story behind your introduction to AI? What piqued your interest in the field?

My journey into AI grew out of a fascination with human behavior. I was raised in New York, a city shaped by its diverse population and the constant interactions among them. Questions about truth, trust, and societal structures intrigued my cognitive science-loving mind. As I dug deeper into these questions, technology, and AI in particular, emerged as a force shaping the answers. I was fascinated by the parallels between artificial and human intelligence.

My quest led me to computer science classes taught by the likes of Professor Barbara Grosz, a pioneer in natural language processing, and Professor Jim Waldo, who combined his backgrounds in philosophy and computer science. These faculty members welcomed non-computer science students into their classes and underscored the significant societal implications of technologies like AI. Their teaching made clear that, valuable as technical understanding is, technology's influence extends to geopolitics, economics, social interaction, and beyond, requiring input from many domains on fundamentally technical questions.

AI’s reach is not limited to a particular field. Whether you’re an educator contemplating the effects of generative AI tools on pedagogy, a museum curator considering a predictive model for an exhibition, or a doctor researching image detection methods for lab analysis, AI can revolutionize your sphere. The fact that AI’s influence pervades numerous fields piqued my interest owing to the diverse intellectual opportunities it offers and its potential for societal impact.

What are your most significant achievements in the AI domain?

I am truly honored to work in a corner of the AI field that brings contrasting viewpoints together in surprising and actionable ways, one that not only accommodates but actively encourages debate and dissent. I joined PAI six years ago as its second staff member and was immediately struck by its visionary commitment to varied perspectives. PAI recognized such work as a cornerstone of AI governance that minimizes harm and leads to practical adoption across the AI sector. That vision has since become reality, and I am delighted to have played my part in embedding multidisciplinarity at PAI while watching the institution grow alongside the AI industry.

Our work on synthetic media over the past six years began well before generative AI entered public awareness, and it demonstrates the potential of multistakeholder AI governance. In 2020, nine organizations spanning civil society, industry, and media collaborated with us to shape Facebook's Deepfake Detection Challenge, a machine learning contest for building models that detect AI-generated media. These outside contributors helped set the objectives and fairness criteria for the winning models, illustrating how human rights experts and journalists can weigh in on technical topics like deepfake detection. Last year, we released PAI's Responsible Practices for Synthetic Media, a set of guidelines for responsible synthetic media that has been backed by 18 remarkably diverse organizations, including OpenAI, TikTok, Code for Africa, Bumble, the BBC, and WITNESS. Drafting actionable guidelines that integrate technical and societal realities was one task; securing institutional backing was another. The institutions involved pledged to deliver transparency reports describing how they navigate the synthetic media arena. AI initiatives that provide tangible guidance and demonstrate their application across institutions hold profound meaning to me.

How do you cope with the hurdles in the male-centric tech industry, and, correspondingly, the male-centric AI industry?

Throughout my professional life, I have been fortunate to have exceptional male and female mentors. Finding people who encourage and challenge me has been key to whatever progress I have made. In my experience, focusing on shared interests and discussing invigorating questions in AI helps bring together people with diverse backgrounds. Interestingly, over half of PAI's workforce is women, and many organizations focused on AI and society or on responsible AI have substantial female representation. That contrasts with AI research and engineering teams, and it seems a promising step toward representation across the AI ecosystem.

What advice would you give to women seeking to enter the AI field?

As I discussed in the previous answer, the most male-dominated areas within AI have been the most technical ones. While technical skill should not be treated as the pinnacle of the AI sector, my experience is that a technical background has boosted my confidence and effectiveness in those settings. We need balanced representation in technical roles alongside genuine openness to the expertise of specialists in other fields, such as civil rights and politics, that already have more balanced representation. At the same time, equipping more women with technical expertise is central to achieving balanced representation in AI.

Connecting with women in AI who have struck the right balance between family and professional life has been immensely valuable to me. Talking with mentors about big career questions and parenthood, along with some of the distinctive challenges women continue to face at work, has better prepared me to handle those challenges as they arise.

What are some of the most pressing issues facing AI as it progresses?

As AI advances, questions of trust and truth, both online and offline, grow more complex. When technology lets us generate or modify a wide range of content, from images and text to video, can we still trust what we see? The ability to convincingly alter documents shakes our confidence in evidence. When it is frighteningly easy to impersonate real people online, how do we preserve spaces that are exclusively human? Balancing free expression against the potential harms of AI systems becomes a challenge. On a grander scale, it is crucial that the digital environment be shaped not by a handful of companies and their employees, but by the perspectives of stakeholders worldwide, including the public.

In parallel, PAI has also investigated other aspects of AI's influence on society: bias and fairness in an era of algorithmic decision-making, the impact of AI on labor, and what responsible deployment of AI systems requires. One of the key challenges is instilling AI systems with a range of perspectives. More broadly, it is essential to work out how diverse viewpoints can guide AI governance, despite the hurdles.

What should AI users be cautious of?

First, AI users should understand that if an offer seems too good to be true, it probably isn't authentic.

The rise of generative AI over the past year has showcased immense creativity and inventiveness, but it has also fueled public interpretations of AI that can be exaggerated and incorrect.

It is crucial for AI users to understand that AI is not, in itself, revolutionary; rather, it enhances and amplifies existing problems and opportunities. That does not mean users should take AI lightly. On the contrary, they should use this understanding to navigate the rapidly evolving, AI-integrated world with confidence. For instance, if you worry about a video being stripped of context by a misleading caption before an election, you should also worry about the speed and scale of deception that deepfake technology enables. Similarly, concerns about workplace surveillance should account for how much easier and more pervasive AI makes such monitoring. Maintaining healthy skepticism about AI's novelty, while recognizing what is genuinely new about the current moment, gives users a useful frame for their interactions with AI.

How can AI be built responsibly?

Building AI responsibly requires broadening our understanding of who counts as "building" AI. Certainly, tech firms and social media platforms play a significant role in shaping the effects of AI systems, and these entities are crucial to building the technology responsibly. At the same time, we must recognize the ongoing involvement of institutions across civil society, industry, media, academia, and the public in creating responsible AI that serves the public interest.

Consider the mindful development and use of artificially generated media.

Tech companies might ask what responsibility they bear for managing how artificially generated videos influence users before an election. Journalists may worry about fraudsters producing fake videos that purport to come from their trusted news brand. Human rights advocates might debate how AI-generated media undermines the evidentiary value of videos documenting abuses. Artists might relish the chance to express themselves through generative media while worrying about their works being used without authorization to train AI models that create new media. This variety of viewpoints illustrates how crucial it is to engage diverse stakeholders in building AI responsibly, and how many institutions are affected by, and in turn affect, the way AI is incorporated into society.

How can investors foster more responsible AI?

Years ago, I remember hearing DJ Patil, the former chief data scientist in the White House, propose a revision to the much-repeated "move fast and break things" mantra of the early social media era. He recommended that we "move deliberately and fix things."

I loved this because it implied not stagnation or an abandonment of innovation, but intentionality and the possibility of innovating while embracing responsibility. Investors should help foster this mindset, giving their portfolio companies more space and time to embed responsible AI practices without stifling progress. Institutions often cite limited time and tight deadlines as the main obstacle to doing what's "right"; investors could be a key player in changing that.

As I delve deeper into working with AI, I find myself wrestling with profoundly humanistic dilemmas. It’s crucial that we all come together to find answers to these questions.

