Understanding AI with Karine Perset: A Guide for Governments
To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Karine Perset works for the Organization for Economic Co-operation and Development (OECD), where she runs its AI Unit and oversees the OECD.AI Policy Observatory and the OECD.AI Networks of Experts within the Division for Digital Economy Policy.
Perset specializes in AI and public policy. She previously worked as an advisor to the Governmental Advisory Committee of the Internet Corporation for Assigned Names and Numbers (ICANN) and as Counsellor to the OECD's Science, Technology, and Industry Director.
I am extremely proud of the work we do at OECD.AI. Over the last few years, demand for policy resources and guidance on trustworthy AI has really increased, both from OECD member countries and from AI ecosystem actors.
When we started this work around 2016, there were only a handful of countries that had national AI initiatives. Fast forward to today, and the OECD.AI Policy Observatory – a one-stop shop for AI data and trends – documents over 1,000 AI initiatives across nearly 70 jurisdictions.
Globally, all governments are facing the same questions on AI governance. We are all keenly aware of the need to strike a balance between enabling the innovation and opportunities that AI has to offer and mitigating the risks of the technology being misused. I think the rise of generative AI in late 2022 really put a spotlight on this.
The ten OECD AI Principles from 2019 were quite prescient: they foresaw many of the key issues that are still salient today, five years later and with AI technology advancing considerably. For governments elaborating their AI policies, the Principles serve as a guiding compass towards trustworthy AI that benefits people and the planet. They place people at the center of AI development and deployment, which I think is something we can't afford to lose sight of, no matter how advanced, impressive, and exciting AI capabilities become.
To track progress on implementing the OECD AI Principles, we developed the OECD.AI Policy Observatory, a central hub for real-time or quasi-real-time AI data, analysis, and reports, which have become authoritative resources for many policymakers globally. But the OECD can’t do it alone, and multi-stakeholder collaboration has always been our approach. We created the OECD.AI Network of Experts – a network of more than 350 of the leading AI experts globally – to help tap their collective intelligence to inform policy analysis. The network is organized into six thematic expert groups, examining issues including AI risk and accountability, AI incidents, and the future of AI.
When we look at the data, unfortunately, we still see a gender gap regarding who has the skills and resources to effectively leverage AI. In many countries, women still have less access to training, skills, and infrastructure for digital technologies. They are still underrepresented in AI R&D, while stereotypes and biases embedded in algorithms can prompt gender discrimination and limit women's economic potential. In OECD countries, more than twice as many young men as women aged 16-24 can program, an essential skill for AI development. We clearly have more work to do to attract women to the AI field.
However, while the private sector AI technology world is highly male-dominated, I'd say that the AI policy world is a bit more balanced. For instance, my team at the OECD is close to gender parity. Many of the AI experts we work with are truly inspiring women, such as Elham Tabassi from the U.S. National Institute of Standards and Technology (NIST); Francesca Rossi at IBM; Rebecca Finlay and Stephanie Ifayemi from the Partnership on AI; Lucilla Sioli, Irina Orssich, Tatjana Evas and Emilia Gomez from the European Commission; Clara Neppel from the IEEE; Nozha Boujemaa from Decathlon; Dunja Mladenic at the Slovenian JSI AI lab; and of course my own amazing boss and mentor, Audrey Plonk, to name just a few. There are so many more.
We need women and diverse groups represented in the technology sector, academia, and civil society to bring rich and diverse perspectives. Unfortunately, in 2022, only one in four researchers publishing on AI worldwide was a woman. While the number of publications co-authored by at least one woman is increasing, women only contribute to about half of all AI publications compared to men, and the gap widens as the number of publications increases. All this to say, we need more representation from women and diverse groups in these spaces.
So to answer your question, how do I navigate the challenges of the male-dominated technology industry? I show up. I am very grateful that my position allows me to meet with experts, government officials, and corporate representatives and speak in international forums on AI governance. It allows me to engage in discussions, share my point of view, and challenge assumptions. And, of course, I let the data speak for itself.
Speaking from my experience in the world of AI policy, my advice is not to hesitate to make your voice heard. There is a dire need for diverse viewpoints in formulating AI policies and models; each person has unique experiences that can enrich the discussion.
To create AI that is safer, more inclusive, and trustworthy, we need to scrutinize AI models and the data that feeds them from many different angles, constantly asking ourselves about potential blind spots. If you stay silent, a crucial insight may be overlooked, and chances are that your viewpoint will let you see something others miss. If everyone chips in, we as a global community can be stronger than the sum of our individual parts.
Additionally, it's important to understand that there are many different roles and career paths in the AI industry. A degree in computer science is not a prerequisite for working in AI: legal professionals, economists, social scientists, and many others already contribute their perspectives. Going forward, real innovation will increasingly require a combination of domain knowledge, AI literacy, and technical skills to devise effective AI applications for specific fields. Many universities already offer AI courses beyond the confines of computer science departments. I firmly believe that interdisciplinarity will be crucial for future AI jobs, so I encourage women from all backgrounds to explore working in AI without fearing that they are any less competent than men.
I would categorize the most urgent issues facing AI into three main groups.
First, I believe we must bridge the gap between policymakers and technologists. The generative AI advances of late 2022 caught many off guard, even though some researchers had forecast such developments. Naturally, each discipline approaches AI issues from its own standpoint, but AI-related problems are multifaceted; collaboration and interdisciplinary approaches between policymakers, AI developers, and researchers are crucial to understanding AI issues comprehensively, keeping up with AI advances, and filling knowledge gaps.
Second, the interoperability of AI rules across jurisdictions is central to AI governance. Several large economies have started to regulate AI: the European Union recently endorsed its AI Act, the U.S. has issued an executive order for the safe, secure, and trustworthy development and use of AI, and Brazil and Canada have proposed bills to regulate the development and use of AI. The hard part is striking the right balance between protecting citizens and fostering business innovation. AI knows no borders, and these economies take different approaches to regulation and protection, so ensuring that rules are compatible across jurisdictions is critically important.
Third, tracking AI incidents is another critical issue. The rise of generative AI has brought a surge in AI-related incidents, and failing to manage the risks they pose could deepen the public's lack of trust. Rich data on past incidents can help us prevent similar incidents from happening in the future. Last year we launched the AI Incidents Monitor, a tool that draws on global news sources to track AI incidents worldwide and better understand the harms they cause. It delivers real-time evidence to support policy and regulatory decisions about AI, especially on tangible risks such as bias, discrimination, and social disruption, and on the types of AI systems that cause them.
Something that policymakers around the world are trying to resolve is how to protect the public from AI-generated misinformation and disinformation, including synthetic media like deepfakes. While misinformation and disinformation have been around for a while, what has changed is the scale, quality, and low cost of AI-generated synthetic outputs.
Governments are well aware of the issue and are looking at ways to help citizens identify AI-generated content and assess the veracity of the information they are consuming, but this is still an emerging field, and there is still no consensus on how to tackle such issues.
Our AI Incidents Monitor can help track global trends and keep people informed about major cases of deepfakes and disinformation. But in the end, with the increasing volume of AI-generated content, people need to develop information literacy, sharpening their skills, reflexes, and ability to check reputable sources to assess information accuracy.
Many of us in the AI policy community are diligently working to find ways to build AI responsibly, acknowledging that determining the best approach often hinges on the specific context in which an AI system is deployed. Nonetheless, building AI responsibly necessitates careful consideration of ethical, social, and safety implications throughout the AI system lifecycle.
One of the OECD AI Principles refers to the accountability that AI actors bear for the proper functioning of the AI systems they develop and use. This means that AI actors must take measures to ensure that the AI systems they build are trustworthy. By this, I mean that they should benefit people and the planet, respect human rights, be fair, transparent, and explainable, and meet appropriate levels of robustness, security, and safety. To achieve this, actors must govern and manage risks throughout their AI systems’ lifecycle – from planning, design, and data collection and processing to model building, validation and deployment, operation, and monitoring.
Last year, we published a report on “Advancing Accountability in AI,” which provides an overview of integrating risk management frameworks and the AI system lifecycle to develop trustworthy AI. The report explores processes and technical attributes that can facilitate the implementation of values-based principles for trustworthy AI and identifies tools and mechanisms to define, assess, treat, and govern risks at each stage of the AI system lifecycle.
Investors should advocate for responsible business conduct in the companies they invest in. They play a crucial role in shaping how AI technologies are developed and deployed, and they should not underestimate their power to influence internal practices through the financial support they provide.
For example, the private sector can support developing and adopting responsible guidelines and standards for AI through initiatives such as the OECD's Responsible Business Conduct (RBC) Guidelines, which we are currently tailoring specifically for AI. These guidelines will notably facilitate international compliance for AI companies selling their products and services across borders and enable transparency throughout the AI value chain – from suppliers to deployers to end-users. The RBC guidelines for AI will also provide a non-judicial enforcement mechanism – in the form of national contact points tasked by national governments to mediate disputes – allowing users and affected stakeholders to seek remedies for AI-related harms.
By guiding companies to implement AI standards and guidelines like the RBC Guidelines, private sector partners can play a vital role in promoting trustworthy AI development and shaping the future of AI technologies in a way that benefits society as a whole.