Digital Dialogues Season 3
Presented by StrategINK & Sponsored by Automation Anywhere
EPISODE 2 SUMMARY
What are some of the Suggested Remedies to Reduce Bias in AI-led Initiatives?
Onward ho! – Learnings and Best Practices in Debiasing AI/ML Solutions
Fairness Testing
Most AI experts acknowledge that an unbiased dataset is not possible. Bias, in some form, will always exist, which means that it is vital to understand how bias affects different groups of users and different data sets. This leads to the concept of fairness testing.
AI systems can make four types of predictions –
a) A true positive (TP)
b) A false positive (FP)
c) A true negative (TN)
d) A false negative (FN)
Statistical fairness tests use error rates (false positives and false negatives) to compare rates of failure between different groups. There are many different types of fairness tests, but they fall into three broad categories: individual fairness, where similar individuals receive similar predictions; group fairness, where different groups are treated equally; and subgroup fairness, which picks the best properties of individual and group fairness and tests them across various subgroups.
There is usually a conflict between accuracy and fairness. Even with many technical guidelines and definitions, fairness testing remains context- and value-dependent: it involves making decisions about the kinds of mistakes that are made and how those mistakes are distributed between different groups. In their book “The Ethical Algorithm”, authors Michael Kearns and Aaron Roth point out that the push-and-pull between fairness and accuracy will never go away, but it can now be measured and managed better than in the past.
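To make the error-rate idea concrete, below is a minimal Python sketch of a group-wise fairness check: it computes per-group false positive and false negative rates and reports the gap between groups. The record format, group names and helper functions are illustrative assumptions, not a standard test suite.

```python
# A minimal sketch of a group-wise fairness check (assumed record format:
# (group, y_true, y_pred) with 0/1 labels). Group names and data are illustrative.

from collections import defaultdict

def error_rates(records):
    """Per-group false positive rate (FPR) and false negative rate (FNR)."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            if y_pred == 0:
                c["fn"] += 1          # missed a real positive
        else:
            c["neg"] += 1
            if y_pred == 1:
                c["fp"] += 1          # flagged a real negative
    return {
        g: {"fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0}
        for g, c in counts.items()
    }

def disparity(rates, metric):
    """Largest gap in a given error metric across groups (0.0 means parity)."""
    values = [r[metric] for r in rates.values()]
    return max(values) - min(values)

# Toy example with two groups, A and B
records = [("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
           ("B", 1, 0), ("B", 0, 0), ("B", 1, 1)]
rates = error_rates(records)
print(rates)
print("FPR gap:", disparity(rates, "fpr"), "FNR gap:", disparity(rates, "fnr"))
```

A team can then decide, for its own context, which gaps matter most and how small they must be before a model ships.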
It is especially important that data scientists are encouraged to think beyond the strict boundaries of their expertise and consider the consequences and impact of their work. This helps reduce the bias that is introduced when someone’s background or unconscious preferences shape design choices, and that is then amplified at scale.
Human-centred Design
As technology and AI diffuse through a variety of products, services and business solutions – from devices in our homes, to apps we use to track our health, to sophisticated equipment in industry – design plays an increasingly important role. We expect things to work “seamlessly”, including AI-based apps and digital solutions.
Good designers have always sought to knit human psychology and product functionality closely together. A fundamental principle of design is feedback: how we adjust our predictions based on our experience. Here AI presents a big design challenge, because it can be hard to pinpoint exactly what is going on, and bias can amplify inaccurate and unreliable feedback. For example, a handful of likes can cause ads of a certain type to keep showing up on a user’s Facebook page.
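As a purely hypothetical illustration of such a feedback loop, the toy simulation below rewards whichever ad category happens to get clicked; even though the simulated user is equally interested in both categories, the initial skew in exposure is reinforced rather than corrected. All names and numbers are made up.

```python
# A toy simulation of a feedback loop, assuming a recommender that shows more of
# whatever gets clicked. User interest in both ad categories is identical,
# yet the system never corrects its initial skew.

import random

random.seed(0)
weights = {"ads_A": 0.55, "ads_B": 0.45}   # slight initial skew toward category A

for _ in range(500):
    shown = random.choices(list(weights), weights=list(weights.values()))[0]
    if random.random() < 0.1:              # flat 10% click rate for both categories
        weights[shown] += 0.05             # each click makes that category more likely to be shown

# Exposure stays tilted toward A and the absolute gap tends to grow,
# even though the underlying interest was equal for both categories.
print(weights)
```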
Human-centred AI design is an emerging practice with the goal of making better AI. People who work in this area put humans first, with the belief that AI should serve humans rather than the other way around. At the same time, many leading designers and AI experts consider unbiased AI to be unrealistic and counter to technology’s role of increasing productivity. So, instead of setting a goal for an AI engine to be as accurate as possible across a large and diverse population, these experts suggest a trade-off: more models of the AI-driven solution, each designed for a “narrow utility” application. These models are localized, which has the effect of reducing bias. While this is less convenient for technologists, it turns out better for users – training data may be more difficult to gather and models tend to become more fragmented, but each individual user has more influence over the AI, which increases its usefulness for the particular business problem.
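A minimal sketch of this “narrow utility” pattern might look like the following: instead of one global model, a small registry holds localized models keyed by segment, with a broad fallback for segments that have no dedicated model yet. The class, segment names and scoring functions are hypothetical illustrations, not a prescribed design.

```python
# A minimal sketch of routing requests to narrow, localized models instead of
# a single global one. Segment names and scoring functions are illustrative.

from typing import Callable, Dict

Model = Callable[[dict], float]  # a model maps a feature dict to a score

class NarrowModelRegistry:
    def __init__(self, fallback: Model):
        self._models: Dict[str, Model] = {}
        self._fallback = fallback          # used when no localized model exists yet

    def register(self, segment: str, model: Model) -> None:
        self._models[segment] = model

    def predict(self, segment: str, features: dict) -> float:
        # Prefer the localized model; fall back to the broad one otherwise.
        return self._models.get(segment, self._fallback)(features)

# Hypothetical usage: a per-region invoice-approval score.
registry = NarrowModelRegistry(fallback=lambda f: 0.5)
registry.register("emea_invoices", lambda f: 0.9 if f.get("amount", 0) < 1000 else 0.2)
print(registry.predict("emea_invoices", {"amount": 250}))   # localized model
print(registry.predict("apac_invoices", {"amount": 250}))   # broad fallback
```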
The more a user can interact and the more the user plays a role in shaping the AI, the more the AI becomes a collaborator with the human worker, who participates by making active and informed choices. This increases trust between the AI solution/application/bot and its human collaborators, resulting in a win-win-win for all stakeholders concerned.
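One common way to realise such collaboration is a confidence threshold: the bot acts only on predictions it is confident about and routes the rest to a person, logging every decision for later review and retraining. The sketch below assumes this pattern; the threshold value, field names and ask_human stub are illustrative.

```python
# A minimal human-in-the-loop sketch: the bot decides only when confident and
# escalates uncertain cases to a person. All names and values are illustrative.

REVIEW_THRESHOLD = 0.8   # scores in [0, 1]; anything in between goes to a human
feedback_log = []        # every decision is recorded for audit and retraining

def ask_human(item):
    # Placeholder for a real review UI or work queue; here it simply approves.
    print(f"Escalating {item['id']} for human review")
    return True

def decide(item, score):
    if score >= REVIEW_THRESHOLD:
        decision, source = True, "bot"               # confident approve
    elif score <= 1 - REVIEW_THRESHOLD:
        decision, source = False, "bot"              # confident reject
    else:
        decision, source = ask_human(item), "human"  # uncertain: ask the human
    feedback_log.append({"id": item["id"], "score": score,
                         "decision": decision, "source": source})
    return decision

print(decide({"id": "INV-001"}, 0.95))  # handled by the bot
print(decide({"id": "INV-002"}, 0.55))  # routed to a person
```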
Hiring and Deploying AI Ethicists, Training Ethical AI Engineers
Technology OEMs and large social platforms have added ethics boards or hired external “AI ethicists” to help product and solution development teams think deeply about removing or reducing bias and improving fairness. But many people – including both ethicists and technologists – are sceptical about the role of ethicists in technology companies, not least because “AI ethicists” often have no formal training in ethics or philosophy.
Google, Facebook, and other technology companies have been accused of “ethics washing” to stave off new regulations governing their operations and legal liability. But the other extreme of “ethics bashing” is also not fruitful, as it does not create a climate conducive to healthy debate on ethics.
Elettra Bietti, a researcher at Harvard Law School, argues that individuals, companies and communities/governments need to see ethics as a mode of inquiry that helps people evaluate competing technology choices – whether policy strategies or product design choices. Ethics should be measured by how it enables participation, because ethics in practice often involves redefining old boundaries between organizations, their partners and their customers. This means AI ethicists and ethics boards must pay careful attention to how their actions impact key business decisions at key checkpoints. For example, if an ethics board recommended that the only way to deliver an “ethical AI” product was to abandon the entire product line and start over, would it be allowed to voice this recommendation?
Josh Lovejoy, Head of Design, Ethics and Society at Microsoft Cloud and AI, argues that ethics needs to be seen not as some sort of humane or philosophical add-on, “but as ‘just good design’ that works when validated in the real world.” By this line of argument, what is needed is not AI ethicists so much as AI designers duly trained to incorporate ethics in the process of software development. So, it is vital for vendor specialists and IT/digital leaders to accord priority to developing techniques and practices that extend traditional roles – whether as engineer, program manager, designer, or researcher.
As far as IT vendors and solution providers are concerned, the general consensus is that they would do well to invest in strengthening their programs for design skills + social skills + ethics training to enrich and empower the role of the “ethical AI engineer.” This will help to inform and improve the process of designing, building, testing and managing ever-changing, intelligent AI products.
This will further help everyday IT + line function workers, managers and executives in business enterprises choose and employ thoroughly tested robotics solutions to deliver greater value to consumers, business partners, and investors by automating routine processes, reducing response times, improving quality of products and services, and increasing output and productivity over time.