How Francine Bennett Utilizes Data Science to Enhance AI Responsibility
To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Francine Bennett is a founding member of the board at the Ada Lovelace Institute and currently serves as the organization’s interim Director. Prior to this, she worked in biotech, using AI to find medical treatments for rare diseases. She also co-founded a data science consultancy and is a founding trustee of DataKind UK, which helps British charities with data science support.
I started out in pure maths and wasn’t so interested in anything applied – I enjoyed tinkering with computers but thought any applied maths was just calculation and not very intellectually interesting. I came to AI and machine learning later on, when it became obvious to me and to everyone else that data was becoming much more abundant in lots of contexts, which opened up exciting possibilities to solve all kinds of problems in new ways using AI and machine learning – possibilities that were much more interesting than I’d realized.
I’m most proud of the work that’s not the most technically elaborate but which unlocks some real improvement for people – for example, using ML to try and find previously unnoticed patterns in patient safety incident reports at a hospital to help the medical professionals improve future patient outcomes. And I’m proud of representing the importance of putting people and society, rather than technology, at the center at events like this year’s UK AI Safety Summit. I think it’s only possible to do that with authority because I’ve had experience both working with and being excited by the technology and getting deeply into how it actually affects people’s lives in practice.
Mainly by choosing to work in places and with people who are interested in the person and their skills over their gender, and by seeking to use what influence I have to make that the norm. Also by working within diverse teams whenever I can – being in a balanced team, rather than being an exceptional ‘minority’, makes for a really different atmosphere and makes it much more possible for everyone to reach their potential. More broadly, because AI is so multifaceted and is likely to have an impact on so many walks of life, especially on those in marginalized communities, it’s obvious that people from all walks of life need to be involved in building and shaping it, if it’s going to work well.
Enjoy it! This is such an interesting, intellectually challenging, and endlessly changing field – you’ll always find something useful and stretching to do, and there are plenty of important applications that nobody’s even thought of yet. Also, don’t be too anxious about needing to know every single technical thing (literally nobody knows every single technical thing) – just start with something you’re intrigued by, and work from there.
Right now, I think the most pressing issue is the lack of a shared vision of what we want AI to do for us, and of what it can and can’t do for us as a society. There’s a lot of technical advancement going on currently, likely with very high environmental, financial, and social impacts, and a lot of excitement about rolling out those new technologies without a well-founded understanding of potential risks or unintended consequences. Most of the people building the technology and talking about the risks and consequences are from a pretty narrow demographic. We have a window of opportunity now to decide what we want to see from AI and to work to make that happen. We can look back at other types of technology and how we handled their evolution, or what we wish we’d done better – what are the AI equivalents of crash-testing new cars, holding liable a restaurant that accidentally gives you food poisoning, consulting impacted people during planning permission, or appealing an AI decision as you could a human bureaucracy?
I’d like people who use AI technologies to be confident about what the tools are and what they can do and to talk about what they want from AI. It’s easy to see AI as something unknowable and uncontrollable, but actually, it’s really just a toolset – and I want humans to feel able to take charge of what they do with those tools. But it shouldn’t just be the responsibility of people using the technology – government and industry should be creating conditions so that people who use AI are able to be confident.
We think about this question a lot at the Ada Lovelace Institute, an organization dedicated to ensuring AI and data benefit people and society. It can be approached in numerous ways, but from my perspective, two major considerations stand out.
The first is to be willing, sometimes, not to build at all – or to stop a project. We often come across AI projects with substantial momentum where the developers try to bolt on ‘safety measures’ afterward to mitigate potential issues and harms, but never consider halting the project as an option.
The second is to genuinely engage with and understand how people from all walks of life will interact with what is being built. An in-depth understanding of their experiences greatly increases the likelihood of creating responsible AI – building something that genuinely helps people, based on a shared ideal of the preferred outcome, while avoiding harms like unintentionally worsening someone’s life because their daily reality is vastly different from yours.
To illustrate, the Ada Lovelace Institute collaborated with the NHS in creating an algorithmic impact assessment to be undertaken by developers as a condition for access to healthcare data. The stipulation obliges developers to evaluate the potential societal impact of their AI system before launching and take into account the lived experiences of individuals and communities who could be affected.
Investors can help by asking questions about their investments and their possible futures: for this AI system, what does it look like to work brilliantly and be responsible? Where could things go off the rails? What are the potential knock-on effects for people and society? How would we know if we need to stop building or change things significantly, and what would we do then? There’s no one-size-fits-all prescription, but just by asking the questions and signaling that being responsible is important, investors can change where their companies are putting attention and effort.