Bridging Code and Conscience: UMD’s Commitment to Ethical and Inclusive AI Development

Dashveenjit is a seasoned journalist specializing in technology and business, dedicated to uncovering and crafting narratives for both online and print publications. She also has experience in parliamentary reporting, as well as occasional work within the lifestyle and arts sectors.

As AI technologies become more integrated into essential decision-making processes in our lives, embedding ethical principles into AI development is rapidly gaining traction in research. At the University of Maryland (UMD), interdisciplinary teams are exploring the intricate relationships between normative reasoning, machine learning algorithms, and socio-technical frameworks.

In a recent discussion with Artificial Intelligence News, postdoctoral researchers Ilaria Canavotto and Vaishnav Kameswaran combine their backgrounds in philosophy, computer science, and human-computer interaction to tackle urgent challenges in AI ethics. Their research encompasses both the conceptual bedrock necessary for incorporating ethical principles into AI systems and the real-world consequences of deploying AI in critical areas like employment.

Ilaria Canavotto, who is part of UMD’s Values-Centered Artificial Intelligence (VCAI) initiative, collaborates with the Institute for Advanced Computer Studies and the Philosophy Department. She is exploring a crucial question: How can we equip AI systems with a normative understanding? As AI’s role in shaping decisions that affect human rights and welfare expands, these systems must grasp ethical and legal standards.

“The question I am exploring is how we can embed this type of information, this normative perspective of the world, into a machine, whether it’s a robot or a chatbot,” Canavotto remarks.

Her research integrates two distinct strategies:

Top-down approach: This established method relies on explicitly programming rules and norms into the system. However, Canavotto emphasizes, “It’s incredibly challenging to document them straightforwardly. New situations continually arise.”

Bottom-up approach: A more contemporary technique that leverages machine learning to derive rules from data. While it offers greater flexibility, it suffers from a lack of transparency: “The issue with this method is that we often don’t comprehend what the system learns, making it quite difficult to clarify its decisions,” Canavotto observes.
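The contrast between the two strategies can be sketched in code. Everything below is a hypothetical illustration (the action names, contexts, and the toy nearest-match "learner" are invented, not from the researchers' work): the top-down function is transparent but fails on uncovered situations, while the bottom-up one adapts to data but offers no explanation of its verdicts.

```python
from collections import Counter

# Top-down: norms are hand-coded as explicit, inspectable rules.
def top_down_permitted(action: str, context: dict) -> bool:
    """Explicitly programmed norms: transparent, but brittle to new situations."""
    rules = {
        "share_user_data": lambda ctx: ctx.get("user_consented", False),
        "deny_loan": lambda ctx: ctx.get("reason_documented", False),
    }
    rule = rules.get(action)
    if rule is None:
        # New situations continually arise; no rule covers this one.
        raise ValueError(f"No rule covers action: {action}")
    return rule(context)

# Bottom-up: behavior is inferred from labeled examples. Flexible, but the
# "model" (here a toy majority vote over past cases) explains nothing.
def bottom_up_permitted(action_features: tuple, training_data: list) -> bool:
    """Majority vote over identical past cases (toy stand-in for ML)."""
    votes = Counter(
        label for feats, label in training_data if feats == action_features
    )
    return votes.most_common(1)[0][0] if votes else False
```

The top-down version makes every decision auditable but raises an error the moment an unanticipated action appears; the bottom-up version answers for anything it has seen data on, while its reasoning stays opaque.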

Canavotto, along with her colleagues Jeff Horty and Eric Pacuit, is working on a hybrid method that seeks to merge the strengths of both approaches. Their goal is to develop AI systems that can derive rules from data while ensuring that the decision-making processes remain transparent and grounded in legal and ethical principles.

“Our method is rooted in a discipline known as artificial intelligence and law. In this discipline, algorithms have been created to extract valuable information from data. We aim to generalize some of these algorithms to create a system capable of extracting information that is firmly based in legal and normative reasoning,” she explains.
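A toy sketch of the hybrid idea, learning from data while keeping the output as human-readable rules rather than opaque weights, might look like the following. To be clear, this is not the researchers' algorithm: the precedent cases, factor names, and the single-factor extraction heuristic are all invented for illustration.

```python
def extract_rules(cases: list) -> list:
    """From labeled precedent cases, find factors that always co-occur with a
    single outcome, and report them as inspectable if-then rules."""
    factors = {f for facts, _ in cases for f in facts}
    rules = []
    for factor in sorted(factors):
        outcomes = {outcome for facts, outcome in cases if factor in facts}
        if len(outcomes) == 1:  # factor perfectly predicts one outcome so far
            rules.append(f"IF {factor} THEN {outcomes.pop()}")
    return rules

# Invented precedent data: each case is (set of factors, outcome).
precedents = [
    ({"consent_given", "data_minimised"}, "permitted"),
    ({"consent_given"}, "permitted"),
    ({"no_consent"}, "forbidden"),
]
```

The point of the design is that whatever the system learns stays legible: each extracted rule can be checked by a human against the underlying legal or ethical principle, which is exactly what a purely bottom-up model cannot offer.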

While Canavotto delves into the theoretical aspects, Vaishnav Kameswaran, who is affiliated with UMD’s NSF Institute for Trustworthy AI and Law and Society, focuses on the practical effects of AI, especially its consequences for individuals with disabilities.

Kameswaran’s research investigates the role of AI in hiring practices, revealing how systems can unintentionally discriminate against candidates with disabilities. He states, “We have been striving to… illuminate the black box somewhat, aiming to comprehend the functions of these algorithms behind the scenes, and how they evaluate candidates.”

The research indicates that numerous AI-powered hiring tools significantly depend on normative behavioral indicators, including eye contact and facial expressions, to evaluate applicants. This method can put individuals with certain disabilities at a disadvantage. For example, candidates who are visually impaired might find it difficult to maintain eye contact, which AI systems often misinterpret as a lack of engagement.

“By concentrating on these attributes and evaluating candidates accordingly, these platforms tend to intensify prevailing social inequalities,” Kameswaran cautions. He contends that this approach could additionally marginalize individuals with disabilities in the job market, a demographic that is already encountering substantial employment hurdles.
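The failure mode Kameswaran describes can be made concrete with a deliberately simplified scoring model. The feature names, weights, and candidate values below are all hypothetical; real hiring platforms are proprietary and far more complex, but the structural problem is the same: any weight placed on camera-based behavioral proxies penalizes candidates whose disabilities affect those proxies, independent of answer quality.

```python
# Invented weights for a toy interview-scoring model.
WEIGHTS = {"answer_quality": 0.5, "eye_contact": 0.3, "facial_expressivity": 0.2}

def score(candidate: dict) -> float:
    """Weighted sum over the model's features."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

sighted_candidate = {
    "answer_quality": 0.9, "eye_contact": 0.9, "facial_expressivity": 0.8,
}
# A visually impaired candidate gives equally strong answers, but the
# camera-based eye-contact proxy registers low through no fault of their own.
blind_candidate = {
    "answer_quality": 0.9, "eye_contact": 0.1, "facial_expressivity": 0.8,
}
```

With identical answer quality, the second candidate scores lower purely because of the eye-contact term, which is the mechanism by which such systems intensify existing inequalities rather than measuring job-relevant ability.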

Both researchers highlight that the ethical issues associated with AI extend well beyond their individual realms of research, touching on a range of pressing concerns.

Despite the considerable challenges, both researchers are actively pursuing solutions.

Nonetheless, they recognize the intricate nature of the challenges at hand. Kameswaran observes, “Regrettably, I don’t believe that a purely technical method of training AI with specific types of data and implementing auditing tools will fundamentally resolve the issue. It necessitates a comprehensive strategy.”

One important insight from the researchers’ findings is the necessity for increased public understanding of AI’s influence on our daily lives. Individuals must be aware of the extent of personal data they disclose and how it is utilized. Canavotto highlights that businesses frequently have an incentive to obscure this, pitching users with the promise that “my service will be enhanced if you share your data.”

The researchers contend that substantial efforts are essential to inform the public and ensure that companies are held accountable. Ultimately, the interdisciplinary strategy proposed by Canavotto and Kameswaran, blending philosophical exploration with practical implementation, presents a promising avenue toward ensuring that AI systems are not only potent but also ethical and just.

See also: Regulations to help or hinder: Cloudflare’s take

Interested in diving deeper into AI and big data with insights from industry experts? Discover the AI & Big Data Expo happening in Amsterdam, California, and London. This extensive event is held alongside other prominent gatherings such as the Intelligent Automation Conference, BlockX, Digital Transformation Week, and the Cyber Security & Cloud Expo.

Learn about more upcoming enterprise technology events and webinars brought to you by TechForge here.

Tags: ai, artificial intelligence, ethics, research, society
