Redefining Culture with AI: A Deep Dive with Ewa Luger on Women in AI

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Ewa Luger is co-director at the Institute of Design Informatics, and co-director of the Bridging Responsible AI Divides (BRAID) program, backed by the Arts and Humanities Research Council (AHRC). She works closely with policymakers and industry, and is a member of the U.K. Department for Culture, Media and Sport (DCMS) college of experts, a cohort of experts who provide scientific and technical advice to the DCMS.

Luger’s research explores social, ethical and interactional issues in the context of data-driven systems, including AI systems, with a particular interest in design, the distribution of power, spheres of exclusion, and user consent. Previously, she was a fellow at the Alan Turing Institute, served as a researcher at Microsoft, and was a fellow at Corpus Christi College at the University of Cambridge.

Briefly, how did you get your start in AI? What attracted you to the field?

Following my doctorate studies, I transitioned to Microsoft Research where my tasks were based in the design and user experience cluster at the Cambridge lab located in the United Kingdom. I was constantly interacting with AI as the main focus of this lab, thus my work gradually leaned more into AI and also expanded to issues revolving around human-focused AI like the intelligent voice assistants.

In pursuit of further understanding about algorithmic intelligibility, I transferred to the University of Edinburgh in 2016. This was quite a unique field at the time. I’ve positioned myself in the area of responsible AI and at present, together with others, I am leading a nationwide program on this matter. The program funding is being provided by the AHRC.

Could you share about the AI work you take most pride in?

The recognition I get mostly is for my paper on the experience of users of voice assistance which was published in 2016. This was a unique study of its kind at that time and is still highly recognized today. However, the work that gives me the greatest personal satisfaction is ongoing. BRAID, a program I am jointly leading, was carefully planned together with a philosopher and ethicist. It’s an authentic multidisciplinary effort aimed at ensuring the growth of a responsible AI ecosystem in the UK.

The aim of our association with Ada Lovelace Institute and the BBC is to merge knowledge of arts and humanities with policy, regulation, industry, and the voluntary sector. The significance of arts and humanities in the realm of AI is often underestimated, which is quite surprising. With the onset of COVID-19, the importance of creative industries emerged substantially. Learning from history is vital to prevent reiteration of past mistakes. Philosophical ethics have consistently played a key role in safeguarding us and keeping us abreast in the field of medical science. Systems such as Midjourney depend on content generated by artists and designers as their foundational data. However, these contributors seldom have any say in the field. Our objective is to bring a change to this.

On a more practical level, we have collaborated with industry partners like Microsoft and the BBC to co-develop responsible AI challenges. Academics capable of solution-finding for these challenges were jointly sought. BRAID so far has financed 27 projects, inclusive of individual fellowships. We are looking forward to launching a new call soon.

We aim to launch a complimentary online course for stakeholders wishing to get involved with AI. A forum is being established to interact with varied section of population and other sector stakeholders to aid work governance. Besides, we are also striving to debunk numerous misconceptions and overstatements related to AI presently.

While it’s evident this type of narrative fuels current AI investment, it also fosters fear and disorientation among people who are mainly susceptible to future damages. BRAID will be operational till the end of 2028, and in its next phase, we’ll focus on AI literacy, hardening areas of resistance, and establishing contestation and recourse mechanisms. A considerably large initiative, with a budget of £15.9 million spread over six years, it is funded by the AHRC.

Addressing the Gender Gap in the Tech and AI Industries

Exploring the pressing question, how does one handle the challenges present in the largely male-driven tech and AI sectors?

These industries aren’t the only areas where such gender-dominated issues are prevalent. These challenges exist just as prominently within academia, contrary to the general perception. Presently, I co-lead an institute – Design Informatics, bridging the gap between design and informatics schools and hence, paving the way for a more balanced work environment. This balance is not just gender-specific, but it also extends to cultural barriers that often undercut women’s professional growth.

During my doctoral years, I was in a male-dominant lab and experienced a similar atmosphere in the industry to some extent. Setting aside obvious interruptions like career breaks and care duties, my journey has been impacted by two interconnected dynamics. Firstly, she must adhere to an absurdly high standard and expectations, such as being agreeable, positive, supportive, among others. Secondly, there is a reluctance amongst us to seize opportunities that men, often less qualified, would aggressively vie for. Consequently, it becomes necessary to break out of the comfort zone time and again.

Another crucial area of focus becomes setting strict boundaries for oneself and learning to refuse. As women, a general stereotype plagues us, painting us as ‘people pleasers.’ Thus, it is often assumed that we would inherently take on roles less appealing to men, even as trivial as making tea or taking notes during meetings, irrespective of our professional standing. Rebelling against and resisting these stereotypes, while asserting one’s value is essential for one’s true worth to be recognized. Although this doesn’t hold true for all women, it has been a significant part of my experience. During my time in the industry, my supervisor was a woman, who was a remarkable leader. Therefore, most of the part, the sexism confronted by me was within academia.

Overall, the problems are both structural and cultural, hence, steering through them demands effort – firstly, in identifying them and secondly, in actively combating them. Simple solutions do not exist, and any attempt to navigate amplifies the emotional burden on women in tech.

What guidance would you offer to women intending to venture into the AI industry?

The counsel I’ve always given is to seize opportunities that grant the chance to advance, even when you don’t feel perfectly suited for it. Rather than you blocking opportunities yourself, allow the decision to come from them. It is documented that men pursue roles they think they can handle, but women only chase roles they believe they’re already competent at or are competently doing. Presently, there’s an inclination towards increased gender sensitivity in hiring practices and among financiers, but recent instances display how much further we need to progress.

Looking at U.K. Research and Innovation and AI research hubs, a notable and recent multi-million-pound investment, all of the newly announced nine AI research hubs are headed by men. We should really strive harder to ensure gender representation.

What are some of the most pressing issues facing AI as it evolves?

As someone with deep roots in the field, it might seem predictable for me to state that the most pressing complications that AI encounters are related to the potential immediate and long-term damages that could come into existence if we don’t exhibit prudence in AI systems’ creation, control, and operation.

The environmental impact of large-scale frameworks is the most critical problem, and it has been significantly neglected in research. We might decide to tolerate these impacts at some point if the rewards of the application outbalance the hazards. However, presently, we witness the rampant use of platforms like Midjourney conducted merely for amusement, with users mainly, if not entirely, oblivious of the fallout each time they execute a query.

Yet another grave issue is our ability to balance the rapid pace of AI advancements and the capacity of the regulatory landscape to remain in stride. Regulation is no fresh matter, but it remains our most reliable tool to guarantee that AI systems are formulated and implemented responsibly.

Assumptions about the so-called democratization of AI seem straightforward and purely positive. This notion refers to concepts like ChatGPT, which are easily accessible to all. Yet, the impact of created content is already noticeable in the realm of creativity, especially concerning issues like copyright and attribution. The rush of news manufacturers and journalists to keep their content and branding unaffected is observable. This rush also influences our democratic structures, especially as important electoral cycles approach. The outcomes could influence the world order, particularly from a geopolitical viewpoint. Likewise, the topic of bias can’t be left out.

What should AI users be cautious about?

If we’re talking about regular citizens using AI and not companies, then trust is a major concern. For instance, let’s consider the large number of students using extensive language models to generate academic work. problems are still pervasive beyond moral issues. These models still make a lot of citation errors, and they tend to take them out of context, and they tend to lose the subtlety inherent in some scholarly papers.

This leads to a broader problem: AI-generated text lack complete reliability. Therefore, such systems should only be used when the context or outcome is trivial. The next issue is the authenticity and reliability of these systems. With increasingly advanced models, it becomes more difficult to distinguish between human and machine-generated content. Society has not yet acquired the necessary skills to make rational decisions about content in an AI-heavy media environment. Meanwhile, the long-standing rules of media literacy are still valid: always verify the source.

Another issue is that AI is not human intelligence, and so the models aren’t perfect — they can be tricked or corrupted with relative ease if one has a mind to.

What is the best way to responsibly build AI?

The best instruments we have are algorithmic impact assessments and regulatory compliance, but ideally, we’d be looking for processes that actively seek to do good rather than just seeking to minimize risk.

Going back to basics, the obvious first step is to address the composition of designers — ensuring that AI, informatics and computer science as disciplines attract women, people of color and representation from other cultures. It’s obviously not a quick fix, but we’d clearly have addressed the issue of bias earlier if it was more heterogeneous. That brings me to the issue of the data corpus, and ensuring that it’s fit-for-purpose and efforts are made to appropriately de-bias it.

Then there comes the need to train systems architects to be aware of moral and socio-technical issues — placing the same weight on these as we do the primary disciplines. Then we need to give systems architects more time and agency to consider and fix any potential issues. Then we come to the matter of governance and co-design, where stakeholders should be involved in the governance and conceptual design of the system. And finally, we need to thoroughly stress-test systems before they get anywhere near human subjects.

Ideally, we should also be ensuring that there are mechanisms in place for opt-out, contestation and recourse — though much of this is covered by emerging regulations. It seems obvious, but I’d also add that you should be prepared to kill a project that’s set to fail on any measure of responsibility. There’s often something of the fallacy of sunk costs at play here, but if a project isn’t developing as you’d hope, then raising your risk tolerance rather than killing it can result in the untimely death of a product.

The European Union’s recently adopted AI act covers much of this, of course.

How can investors better push for responsible AI?

Taking a step back here, it’s now generally understood and accepted that the whole model that powers the internet is the capitalization of user data. Similarly, much, if not all, of AI innovation is propelled by financial gain. AI development, in particular, is a resource-intensive endeavour, and the drive to be the first to market is often characterized as an arms race. Hence, responsibility as a value continually competes with these other values.

That’s not to imply that companies are indifferent, and multiple AI ethicists have made significant efforts to recast responsibility as a distinctive attribute within the field. But unless you’re a government or another public service, this seems like a far-fetched scenario. It’s evident that the priority to be the first to market is frequently compromised against thorough and comprehensive mitigation of potential harms.

Returning to the term responsibility, in my view, being responsible is the least we can do. When we entrust our kids to be responsible, we mean they shouldn’t do anything unlawful, disgraceful, or insane. It’s essentially the minimum requirement for behaving like a functional human being in society. Conversely, when applied to corporations, it seems to become an unattainable standard. It leads one to contemplate, why are we even engaged in this discussion?

Additionally, the motivations to prioritize responsibility are fairly straightforward, relating to the desire to be a trusted organization while not wishing your users to face publicly noticeable harm. This is because a lot of people living near the poverty line or those from marginalized groups fall below the radar of concern, as they lack the economic or social influence to challenge any adverse consequences, or to bring them to public awareness.

So, to loop back to the question, it depends on who the investors are. If it’s one of the big seven tech companies, then they’re covered by the above. They have to choose to prioritize different values at all times, and not only when it suits them. For the public or third sector, responsible AI is already aligned to their values, and so what they tend to need is sufficient experience and insight to help make the right and informed choices. Ultimately, to push for responsible AI requires an alignment of values and incentives.

Discover the pinnacle of WordPress auto blogging technology with AutomationTools.AI. Harnessing the power of cutting-edge AI algorithms, AutomationTools.AI emerges as the foremost solution for effortlessly curating content from RSS feeds directly to your WordPress platform. Say goodbye to manual content curation and hello to seamless automation, as this innovative tool streamlines the process, saving you time and effort. Stay ahead of the curve in content management and elevate your WordPress website with AutomationTools.AI—the ultimate choice for efficient, dynamic, and hassle-free auto blogging. Learn More

Leave a Reply

Your email address will not be published. Required fields are marked *