
A Primer on Intelligent Automation, Robotics and Resolving Bias in AI/ML Solutions – Part 2

Artificial Intelligence | November 19, 2020

What are some of the Key Concerns about the Adoption of AI-led Automation/RPA?

While AI-led technologies are automating and speeding up business processes, improving productivity and the quality of outcomes, many technology experts and social scientists have raised concerns about the biases that have crept in, or may creep in, if the technology is not well designed or effectively monitored. Examples include gender bias (identifying men and women by their traditional roles in society), racial bias (misidentifying people of Black or Asian origin), and choice bias (serving up articles, videos or social media posts that reflect not just an individual's particular tastes and preferences, but his or her social and cultural biases as well).
Some of the leading voices who have spoken about, built awareness of, or worked to rectify issues in this area include author, media studies professor and digital economy commentator Douglas Rushkoff; Chief Science Officer and Co-founder of Luminoso Technologies, Robyn Speer; computer scientist, former President of Google China and Founder of Sinovation Ventures, Kai-Fu Lee; Indian-origin entrepreneur, activist, writer and CEO of Glitch, Anil Dash; and, most famously, Tesla founder Elon Musk, among many others.
What is clear is that, with the advent of new technologies, policymakers, solution developers and marketers will have to be sensitive to and cognizant of concerns about protecting and safeguarding individual rights, while leaving the door open for businesses, industry, governments and individual citizens themselves to adopt new, innovative techniques.


What are some of the Suggested Remedies to Reduce Bias in AI-led Initiatives?

Many global corporations are working to use larger, more diverse data sets to train their AI applications and deliver much-needed 'public goods' that support businesses, individuals, communities and state/local governments. For example, one of StrategINK's partners, Automation Anywhere, a global leader in building and deploying RPA solutions, has gone the extra mile by making free bots available for small business owners and their employees, helping them stay competitive in a digital-first business environment. The company also launched 'Project K', a self-learning RPA program for children at home during the COVID-19 lockdown, with free software and online video courses to help children understand AI/ML and build self-confidence in solving real-life problems. Automation Anywhere also helped the residents of Macao, China by building and running 'NetCraft', a community services website supplying up-to-date information on the extent of the COVID-19 pandemic, hospitals, the availability and prices of sanitizer, masks and PPE, and other useful information.
Many technology companies have appointed external consultants, diversity officers or board members to help development teams build better products in terms of user experience and predictability of outcomes. For instance, Neil Sahota, an Indian-origin inventor, author and business advisor, helped in the early development of the IBM Watson AI engine, advises global Fortune 500 corporations on designing and building new products, and helped the United Nations create and launch their 'AI for Global Good' ecosystem.
Many individual developers, technology start-ups and evangelists are committed to removing bias and to preventing personal, societal, racial, ethnic and gender stereotypes and prejudices from creeping into the design of their apps and solutions. One prominent example is Dr. Fei-Fei Li, former Chief Scientist of AI/ML at Google Cloud, Stanford University professor and recently appointed board member at Twitter.
Clearly, then, automation and robotics solutions enabled with artificial intelligence (AI) and machine learning (ML) tools are here to stay, and they are capable of delivering many benefits to businesses, governments, individual users and society at large. The crucial mantra for all stakeholders in this journey is that each entity be sensitive to, respect and safeguard the interests of the others.
Deployed in a thoughtful manner, Automation/Robotics and AI/ML technologies have a bright future, capable of progressively delivering value to individual consumers and communities, optimal RoI to businesses and investors, and sustainable and effective outcomes for governments and multilateral trade, technology and welfare organizations.

Onward ho! – Learnings and Best Practices in Debiasing AI/ML Solutions

Individuals, companies, communities, and countries are beginning to address AI bias through new programming tools, design techniques, laws, and guidelines. Human-centred and ethical AI design practices are being adopted and refined as more people and companies work on AI-enabled products.
Much can be achieved with purely technical tools for documenting sources of bias, testing for fairness, de-biasing models, and archiving previous model versions. A critical step is understanding the data representations and choices the AI engine is making. Many tools have been developed to help, including IBM's AI Explainability 360, Google's What-If Tool, and LIME (Local Interpretable Model-agnostic Explanations) from the University of Washington, among many others. These tools are designed to help data scientists understand the most important features of an AI model and how it makes predictions, combining visualizations with sophisticated data sets so that data scientists and engineers can examine and manipulate the data.
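To make this concrete, here is a minimal sketch of explaining a single prediction with the open-source LIME library. It assumes a scikit-learn classifier trained on a standard tabular dataset; the dataset and model are illustrative choices, not drawn from this article.

```python
# Minimal sketch: explaining one prediction with LIME (illustrative only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Placeholder model: any classifier exposing predict_proba would do.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Which features pushed the model toward its answer for this one instance?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output lists the handful of features that most influenced this particular prediction, which is exactly the kind of local visibility such tools are meant to provide.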

Fairness Testing

Most AI experts acknowledge that an unbiased dataset is not possible. Bias, in some form, will always exist, which means that it is vital to understand how bias affects different groups of users and different data sets. This leads to the concept of fairness testing.

AI systems can make four types of predictions –

a) A true positive (TP)
b) A false positive (FP)
c) A true negative (TN)
d) A false negative (FN)

Statistical fairness tests use error rates (false positives and false negatives) to compare rates of failure between different groups. There are many different types of fairness tests, but they fall into three broad categories: individual fairness, where similar individuals receive similar predictions; group fairness, where different groups are treated equally; and subgroup fairness, which combines the best properties of individual and group fairness and tests across various subgroups.

(Source: Quartz.com)
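To illustrate, here is a minimal sketch of a statistical fairness check in plain Python/NumPy, comparing false positive and false negative rates between two groups. The labels, predictions and group memberships below are illustrative placeholders; in practice they would come from a trained model and its evaluation data.

```python
# Minimal sketch: per-group error rates for a fairness test (illustrative).
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for one group."""
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return fpr, fnr

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # ground-truth labels
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 1])   # model predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

for g in (0, 1):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

A group fairness test would then flag the model if these rates diverge between groups beyond whatever threshold the team, or the applicable regulation, deems acceptable.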

There is usually a conflict between accuracy and fairness. Even with many technical guidelines and definitions, fairness testing remains context- and value-dependent: it involves making decisions about the kinds of mistakes that are made and how those mistakes are distributed between different groups. In their book "The Ethical Algorithm", authors Michael Kearns and Aaron Roth point out that the push-and-pull between fairness and accuracy will never go away, but it can now be measured and managed better than in the past.
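A small synthetic experiment makes this trade-off visible. The sketch below (all data is synthetic and purely illustrative) sweeps the decision threshold of a scoring model and tracks overall accuracy alongside a simple group-fairness metric, the gap in positive-prediction rates between two groups.

```python
# Minimal sketch: measuring accuracy alongside a fairness gap (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)            # protected attribute (0 or 1)
y_true = rng.integers(0, 2, n)           # ground-truth labels
# Synthetic scores in which group 1 receives systematically lower scores
# for the same true label, a common pattern behind biased outcomes.
scores = np.clip(0.6 * y_true + 0.4 * rng.random(n) - 0.15 * group, 0.0, 1.0)

for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    accuracy = np.mean(y_pred == y_true)
    gap = abs(np.mean(y_pred[group == 0]) - np.mean(y_pred[group == 1]))
    print(f"threshold={threshold:.1f}  accuracy={accuracy:.3f}  parity gap={gap:.3f}")
```

Neither number alone is 'the' answer; the point, as Kearns and Roth argue, is that both can be measured, and the tension managed deliberately rather than discovered after deployment.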

It is especially important that data scientists are encouraged to think beyond the strict boundaries of their expertise and consider the consequences and impact of their work. This helps prevent bias that stems from someone's background, or from an unconscious preference shaping design choices, from being introduced and then amplified at scale.

Human-centred Design

As technology and AI diffuse through a variety of products, services and business solutions – from devices in our homes, to apps we use to track our health, to sophisticated equipment in industry – design plays an increasingly important role. We expect things to work “seamlessly”, including AI-based apps and digital solutions.

Good designers have always sought to knit human psychology and product functionality closely together. A fundamental principle of design is feedback: how we adjust our predictions based on our experience. Here AI presents a big design challenge, because it can be hard to pinpoint exactly what is going on, and bias can amplify inaccurate and unreliable feedback. For example, a few likes can lead to more ads of a certain type showing up on a user's Facebook page.

Human-centred AI design is an emerging practice whose goal is to make better AI. People who work in the area put humans first, in the belief that AI should serve humans rather than the other way around. At the same time, many leading designers and AI experts consider unbiased AI to be unrealistic and counter to technology's role of increasing productivity. So, instead of setting a goal for an AI engine to be as accurate as possible across a large and diverse population, these experts suggest a trade-off: more models of the AI-driven solution, each designed for a "narrow utility" application. These models are localized, which has the effect of reducing bias. This is less convenient for technologists but turns out better for users: training data may be more difficult to gather and models tend to become more fragmented, but each individual user has more influence over the AI, which increases its usefulness for the particular business problem.

(Source: Quartz.com)

The more a user can interact with and play a role in the AI, the more the AI becomes a collaborator with a human worker who participates by making active and informed choices. This increases trust between the AI solution/application/bot and its human collaborators, resulting in a win-win for all stakeholders concerned.

Hiring and Deploying AI Ethicists, Training Ethical AI Engineers

Technology OEMs and large social platforms have added ethics boards or hired external "AI ethicists" to help product and solution development teams think deeply about removing or reducing bias and improving fairness. But many people, including both ethicists and technologists, are sceptical about the role of ethicists in technology companies, not least because "AI ethicists" often have no formal training in ethics or philosophy.

Google, Facebook, and other technology companies have been accused of "ethics washing": invoking ethics to stave off new regulations governing their operations and to limit legal liability. But the other extreme, "ethics bashing", is also not fruitful, as it does not build a climate conducive to healthy debate on ethics.

Elettra Bietti, a researcher at Harvard Law School, argues that individuals, companies and communities/governments need to see ethics as a mode of inquiry that helps people evaluate competing technology choices, whether these are policy strategies or product design choices. Ethics should be measured by how it enables participation, because ethics in practice often involves redefining old boundaries between organizations, their partners and their customers. This means AI ethicists must pay careful attention to how their actions affect key business decisions at key checkpoints. For example, if an ethics board concluded that the only way to deliver an "ethical AI" product was to abandon the entire product line and start over, would it be allowed to voice that recommendation?

Josh Lovejoy, Head of Design, Ethics and Society at Microsoft Cloud and AI, argues that ethics should be seen not as a humane or philosophical add-on, "but as 'just good design' that works when validated in the real world." By this line of argument, what is needed are not AI ethicists so much as AI designers trained to incorporate ethics into the process of software development. It is therefore vital for vendor specialists and IT/digital leaders to prioritize developing techniques and practices that extend traditional roles, whether as engineer, program manager, designer, or researcher.

(Source: Quartz.com)

As far as IT vendors and solution providers are concerned, the general consensus is that they would do well to invest in strengthening their programs for design skills + social skills + ethics training to enrich and empower the role of the "ethical AI engineer." This will help inform and improve the process of designing, building, testing and managing ever-changing, intelligent AI products.

This, in turn, will help everyday IT and line-function workers, managers and executives in business enterprises choose and deploy thoroughly tested robotics solutions that deliver greater value to consumers, business partners, and investors by automating routine processes, reducing response times, improving the quality of products and services, and increasing output and productivity over time.

Disclaimer: The views expressed in this feature article are of the author. This is not meant to be an advisory to purchase or invest in products, services or solutions of a particular type or, those promoted and sold by a particular company, their legal subsidiary in India or their channel partners. No warranty or any other liability is either expressed or implied.