Tackling Bias in Artificial Intelligence

AI can institutionalize bias. Headline after headline has shown the ways in which machine learning models often mirror and even magnify systemic biases. One state using a criminal justice algorithm found that the algorithm "mislabeled African-American defendants as ‘high risk’ at nearly twice the rate it mislabeled white defendants." The episode also shows how contested definitions of fairness can be: the company that developed the COMPAS scores claimed its system was unbiased because it satisfied "predictive parity," while ProPublica found that it was biased because it did not demonstrate "balance for the false positives."

Two opportunities present themselves in the debate. The first is the opportunity to use AI to identify and reduce the effect of human biases. In recruiting, for example, organizations can use AI to lead candidates through a funnel that gives them a more consistent experience. While significant progress has been made in recent years in technical and multidisciplinary research, more investment in these efforts will be needed. No optimization algorithm can resolve such questions, and no machine can be left to determine the right answers on its own; it takes human judgment and processes, drawing on disciplines including the social sciences, law, and ethics, to develop standards so that humans can deploy AI with bias and fairness in mind. In this article we look at some key steps you can take to ensure the AIs of the future are not biased against, for example, race, gender, or sexuality, and at further improvements researchers are developing and testing.
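To make that disagreement concrete, here is a minimal sketch, using invented toy numbers rather than the actual COMPAS data, of how a risk score can satisfy "predictive parity" (equal precision across groups) while failing "balance for the false positives" (unequal false-positive rates):

```python
# Toy illustration: a score can satisfy "predictive parity" (equal precision
# across groups) while failing "balance for the false positives" (unequal
# false-positive rates). All numbers below are invented for illustration only.

def precision(pairs):
    # pairs: list of (predicted_high_risk, actually_reoffended)
    flagged = [y for p, y in pairs if p]
    return sum(flagged) / len(flagged)

def false_positive_rate(pairs):
    # Share of actual non-reoffenders who were wrongly flagged high risk.
    negatives = [p for p, y in pairs if not y]
    return sum(negatives) / len(negatives)

# (predicted_high_risk, reoffended) records for two hypothetical groups
group_a = [(1, 1)] * 6 + [(1, 0)] * 4 + [(0, 0)] * 16 + [(0, 1)] * 2
group_b = [(1, 1)] * 3 + [(1, 0)] * 2 + [(0, 0)] * 18 + [(0, 1)] * 1

# Predictive parity holds: among those flagged, the reoffense rate is equal.
assert precision(group_a) == precision(group_b) == 0.6

# ...but the false-positive rates differ: more of group A's non-reoffenders
# were wrongly labeled high risk.
print(false_positive_rate(group_a))  # 4/20 = 0.2
print(false_positive_rate(group_b))  # 2/20 = 0.1
```

Both sides of the debate were, in this sense, reading different metrics off the same confusion matrices; which metric should govern is a policy question, not a computational one.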
Artificial intelligence (AI) today has an ethics problem. Recently, a technology company discontinued development of a hiring algorithm based on analyzing previous decisions after discovering that the algorithm penalized applicants from women's colleges. Humans, too, are prone to misapplying information. Working to end racial and ethnic bias in AI-based biometric facial recognition, Vintra, a provider of AI-powered video analytics solutions, released the results of a year-long effort to ensure that its AI platform "can equitably recognize and correctly identify faces across different races," the San Jose, California-based company said in a statement.

Work to define fairness has also revealed potential trade-offs between different definitions, or between fairness and other objectives. As a result of these complexities, crafting a single, universal definition of fairness, or a single metric to measure it, will probably never be possible. Among the technical remedies researchers have explored, one family consists of post-processing techniques, which adjust a trained model's outputs after the fact to satisfy a fairness constraint. More progress will require interdisciplinary engagement, including ethicists, social scientists, and experts who best understand the nuances of each application area. Efforts such as the annual reports from the AI Now Institute, which cover many critical questions about AI, and Embedded EthiCS, which integrates ethics modules into standard computer science curricula, demonstrate how experts from across disciplines can collaborate. Reducing bias is critical for AI to reach its maximum potential: to drive profits for business and productivity growth in the economy, and to help tackle major societal issues.
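As an illustration of the post-processing idea, the sketch below leaves a hypothetical trained model untouched and tunes a per-group decision threshold over its scores so that the groups' false-positive rates match a target; the scores, groups, and target are all invented for illustration, not any particular vendor's method:

```python
# Sketch of a post-processing step (illustrative only): leave the trained
# model untouched and choose a per-group score threshold so that each group's
# false-positive rate lands as close as possible to a shared target.

def fpr_at(threshold, scores_of_actual_negatives):
    # Fraction of actual negatives scored at or above the threshold.
    flagged = [s for s in scores_of_actual_negatives if s >= threshold]
    return len(flagged) / len(scores_of_actual_negatives)

def pick_threshold(scores_of_actual_negatives, target_fpr):
    # Scan candidate thresholds (plus a sentinel that flags nobody) and keep
    # the one whose false-positive rate is closest to the target.
    candidates = sorted(set(scores_of_actual_negatives)) + [1.01]
    return min(candidates,
               key=lambda t: abs(fpr_at(t, scores_of_actual_negatives) - target_fpr))

# Hypothetical model scores for the actual negatives in two groups
neg_a = [0.1, 0.2, 0.3, 0.55, 0.6, 0.7, 0.8, 0.9]
neg_b = [0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.6, 0.65]

t_a = pick_threshold(neg_a, target_fpr=0.25)
t_b = pick_threshold(neg_b, target_fpr=0.25)
print(t_a, t_b)  # different thresholds, matched false-positive rates
```

Note that this is exactly the contested practice of "setting different decision thresholds for different groups" discussed elsewhere in this article; whether it is acceptable is itself a fairness judgment, not a technical one.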
Bias issues in AI decision making have become increasingly problematic in recent years, as many companies increase the use of AI systems across their operations. Tackling them includes considering the situations and use cases in which automated decision making is acceptable (and indeed ready for the real world) versus those in which humans should always be involved. Getting this right will be critical if AI is to reach its potential, shown by the research of MGI and others, to drive benefits for businesses, for the economy through productivity growth, and for society through contributions to tackling pressing societal issues. For example, Jon Kleinberg and others have shown that algorithms could help reduce racial disparities in the criminal justice system, and another study found that automated financial underwriting systems particularly benefit historically underserved applicants. When bias does arise, the underlying data, rather than the algorithm itself, is most often the main source of the issue.

Progress in identifying bias points to another opportunity: rethinking the standards we use to determine when human decisions are fair and when they reflect problematic bias. We often accept outcomes that derive from a process that is considered "fair," but is procedural fairness the same as outcome fairness? (See "Tackling bias in artificial intelligence (and in humans)," McKinsey Global Institute, June 6, 2019, for a comprehensive discussion of such measures.)
Furthermore, in which situations should fully automated decision making be permissible at all? AI systems learn to make decisions based on the data and algorithms humans put into them, and a machine learning algorithm may pick up on statistical correlations that are societally unacceptable or illegal. AI is used to make diagnostic decisions in healthcare, to allocate resources for social services in areas like child protection, to help recruiters crunch through piles of job applications, and much more. The use of AI in employment practices in particular is growing at a rapid pace, with the potential to make human processes and workplace decisions more efficient and less biased; certain AI tools use chatbots to address candidate questions in real time and can also be quite valuable during the interview process.

Gender bias, moreover, is not merely a male problem: a recent UNDP report entitled Tackling Social Norms found that about 90 percent of people, both men and women, hold some bias against women. The emerging digital world therefore carries a troubling possibility: AI bias. One cause of bias issues in AI may be lack of diversity in the field. At the same time, in many cases AI can reduce humans' subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used.

Among proposed remedies, "counterfactual fairness" approaches are based on the idea that a decision should remain the same in a counterfactual world in which a sensitive attribute is changed. Relatedly, some researchers propose setting different decision thresholds for different groups, while others contend that maintaining a single threshold is fairer to all groups.
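A crude way to see the counterfactual idea in code is to flip only the sensitive attribute in each record and check whether a model's decision changes. Real counterfactual-fairness methods reason over a causal model of how attributes influence one another, so the naive flip below is only a sketch, with hypothetical models and applicants:

```python
# Crude counterfactual check (illustrative): flip only the sensitive
# attribute in each applicant's record and count how many decisions change.
# A real method would model how the attribute influences other features too.

def biased_model(record):
    # Hypothetical rule that (wrongly) uses the sensitive attribute directly.
    return record["income"] > 40_000 and record["group"] == "A"

def fair_model(record):
    # Hypothetical rule that ignores the sensitive attribute.
    return record["income"] > 40_000

def counterfactual_flips(model, records):
    flips = 0
    for r in records:
        flipped = dict(r, group="B" if r["group"] == "A" else "A")
        if model(r) != model(flipped):
            flips += 1
    return flips

applicants = [
    {"income": 50_000, "group": "A"},
    {"income": 50_000, "group": "B"},
    {"income": 30_000, "group": "A"},
]

print(counterfactual_flips(biased_model, applicants))  # 2 decisions change
print(counterfactual_flips(fair_model, applicants))    # 0 decisions change
```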
Our use of artificial intelligence is growing along with advancements in the field. As a rapidly growing number of organizations adopt AI solutions, it is crucial that we work to mitigate bias in those systems; minimizing bias in AI is an important prerequisite for enabling people to trust them. In the mobility industry, for instance, business leaders and policymakers are increasingly looking to big data analytics and AI algorithms to make informed decisions. The EU's Ethics Guidelines for Trustworthy AI likewise mandates that trustworthy AI should be lawful, ethical, and robust.

No single metric will settle these questions; instead, different metrics and standards will likely be required depending on the use case and circumstances, and tackling unfair bias will require drawing on a portfolio of tools and procedures. Some of the emerging work has focused on processes and methods, such as "data sheets for data sets" and "model cards for model reporting," which create more transparency about the construction, testing, and intended uses of data sets and AI models. Silvia Chiappa's path-specific counterfactual method can even consider different ways that sensitive attributes may affect outcomes: some influence might be considered fair and could be retained, while other influence might be considered unfair and should therefore be discarded. Innovative training techniques, such as using transfer learning or decoupled classifiers for different groups, have proven useful for reducing discrepancies in facial analysis technologies.
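The "decoupled classifiers" idea can be sketched as fitting a separate simple model per group, so that one group's data does not dominate the decision boundary. The tiny one-dimensional "learner" and the data below are invented for illustration:

```python
# Sketch of "decoupled classifiers" (illustrative): instead of one model for
# everyone, fit a separate model per group. The toy 1-D "learner" below just
# picks the decision threshold that minimizes training error.

def fit_threshold(samples):
    # samples: list of (feature, label); returns the best decision threshold.
    candidates = sorted({x for x, _ in samples})
    def errors(t):
        return sum((x >= t) != y for x, y in samples)
    return min(candidates, key=errors)

def fit_decoupled(samples_by_group):
    # One independently fitted threshold per group.
    return {g: fit_threshold(s) for g, s in samples_by_group.items()}

# Hypothetical data in which the two groups' score distributions differ
data = {
    "group_a": [(0.2, False), (0.4, False), (0.6, True), (0.8, True)],
    "group_b": [(0.1, False), (0.3, True), (0.5, True), (0.7, True)],
}

models = fit_decoupled(data)
print(models)  # {'group_a': 0.6, 'group_b': 0.3}
```

A single shared threshold would have to compromise between the two distributions; decoupling lets each group get the boundary that fits its own data.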
Biases in how humans make decisions are well documented. Some researchers have highlighted how judges' decisions can be unconsciously influenced by their own personal characteristics, while employers have been shown to grant interviews at different rates to candidates with identical resumes but with names considered to reflect different racial groups. Human bias is not new; problems arise for AI when the available data reflects that societal bias. In criminal justice models, for example, oversampling certain neighborhoods because they are overpoliced can result in recording more crime, which results in more policing. There is also a big problem when the people who design the systems program in their own biases. The growing use of artificial intelligence in sensitive areas, including hiring, criminal justice, and healthcare, has stirred a debate about bias and fairness: will AI reduce these problems, or make them worse? Algorithmic bias has become a hot topic in recent months, and as AI becomes more widely used the subject is becoming ever more important.

Similarly, if an organization realizes an algorithm trained on its human decisions (or on data based on prior human decisions) shows bias, it should not simply cease using the algorithm; it should consider how the underlying human behaviors need to change. A related question for AI practitioners and business and policy leaders: who decides when an AI system has sufficiently minimized bias so that it can be safely released for use?
One of the problems in society that AI decision making was meant to solve was bias. Humans misapply information in many ways: for example, employers may review prospective employees' credit histories in ways that can hurt minority groups, even though a definitive link between credit history and on-the-job behavior has not been established. One promising practice is to run algorithms alongside human decision makers, compare results, and examine possible explanations for the differences. Models developed from globally distributed intelligence networks may also offer more unique, unbiased approaches to tackling serious world issues. Finally, transparency about processes and metrics can help observers understand the steps taken to promote fairness and any associated trade-offs.

Many have pointed to the fact that the AI field itself does not encompass society's diversity, including on gender, race, geography, class, and physical disabilities. Among others, here are six steps that companies should consider.
1. Be aware of the contexts in which AI can help correct for bias and those in which AI risks exacerbating it.
2. Establish responsible processes and practices to mitigate bias in AI systems.
3. Engage in fact-based conversations around potential human biases.
4. Consider how humans and machines can work together to mitigate bias.
5. Invest more, and make more data available, for bias research.
6. Focus on diversity in the AI field itself.

AI is a type of software that can speed up decision making and grow more useful with more data. The technical tools described above can highlight potential sources of bias and reveal the traits in the data that most heavily influence the outputs. In our webinar, Cansu Canca (founder and director of the AI Ethics Lab), Laura Haaber (visiting research fellow at Harvard University), and Julia Zacharias (VP of delivery and customer success at Applause) discuss biases in artificial intelligence. Researchers Rebecca Raper, Kevin Maynard, and Dr Paul Jackson are likewise tackling bias in AI recruitment tools, as AI is increasingly used within HR and recruitment.

In "Tackling bias in AI (and in humans)" (June 2019), Jake Silberg and James Manyika observe that the growing use of artificial intelligence in sensitive areas, including hiring, criminal justice, and healthcare, has stirred a debate about bias and fairness. Used carelessly, AI can make the bias problem worse. To quote Andrew McAfee of MIT, "If you want the bias out, get the algorithms in." (See also "Tackling Bias Issues in Artificial Intelligence," Morgan Lewis Tech & Sourcing, via JD Supra.)
AI can help reduce the impact of human biases in decision making. But will AI's decisions be less biased than human ones? What can business and policy leaders do to minimize bias in AI going forward? While definitions and statistical measures of fairness are certainly helpful, they cannot consider the nuances of the social contexts into which an AI system is deployed, nor the potential issues surrounding how the data were collected. Progress will require investments on multiple fronts, but especially in AI education and access to tools and opportunities.

Industry is starting to act. In Singapore, Standard Chartered has partnered with Truera, a US-based startup, to use its model intelligence platform to improve model quality and increase trust by analysing models and helping to identify and eliminate unjust biases in the decision-making process.

The authors also wish to thank their McKinsey colleagues Tara Balakrishnan, Jacques Bughin, Michael Chui, Rita Chung, Daniel First, Peter Gumbel, Mehdi Miremadi, Brittany Presten, Vasiliki Stergiou, and Chris Wigley for their contributions.
Artificial intelligence (AI) is bringing a technological revolution to society. It can imitate and amplify human prejudices; used responsibly, however, it can help overcome biases and support objective, data-driven decisions. The rise of AI is evident across industries, but it also introduces new risks to society and is itself prone to bias. When Amazon put together a team to work on its new recruitment engine in 2014, for example, it had high hopes. Decisions informed by AI now range from investments and funding, to reduction of congestion and pollution, to improving safety.

Be aware of the contexts in which AI can help correct for bias as well as where there is a high risk that AI could exacerbate bias. One technique for keeping humans involved is "human-in-the-loop" decision making, where algorithms provide recommendations or options, which humans double-check or choose from. Perhaps organizations can also benefit from the recent progress made on measuring fairness by applying the most relevant tests for bias to human decisions, too.

Is artificial intelligence the answer? According to the 2020 State of Data Science report, of 1,592 people surveyed globally, 27 percent identified social impacts from bias in data and models as the biggest problem to tackle in AI and machine learning.
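The "human-in-the-loop" pattern described above can be sketched as a routing rule: the system only auto-decides when the model is confident, and sends borderline cases to a human reviewer. The thresholds below are arbitrary placeholders, not recommended values:

```python
# Sketch of a "human-in-the-loop" pattern (illustrative): auto-decide only
# when the model score is clearly high or low; route everything borderline
# to a human reviewer instead of deciding automatically.

def route(score, low=0.3, high=0.7):
    # The cutoffs are assumptions for illustration, not recommended values.
    if score >= high:
        return "auto_approve"
    if score <= low:
        return "auto_reject"
    return "human_review"

queue = [0.95, 0.5, 0.1, 0.65]
decisions = [route(s) for s in queue]
print(decisions)
# ['auto_approve', 'human_review', 'auto_reject', 'human_review']
```

Widening the band between the two cutoffs sends more cases to humans; narrowing it automates more, which is exactly the acceptable-automation trade-off discussed above.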
Tackling bias in artificial intelligence (and in humans), July 15, 2019. The growing use of artificial intelligence in sensitive areas, including for hiring, criminal justice, and healthcare, has stirred a debate about bias and fairness. AI has reached the point of routine use in these high-stakes areas, and it can make the bias problem worse: models may be trained on data containing human decisions or on data that reflect second-order effects of societal or historical inequities. Work by Joy Buolamwini and Timnit Gebru, for example, found that error rates in facial analysis technologies differed by race and gender. Yet human decision making in these and other domains can also be flawed, shaped by individual and societal biases that are often unconscious. AI can help humans with bias, but only if humans work together to tackle bias in AI.

On the data side, researchers have made progress on text classification tasks by adding more data points to improve performance for protected groups. Much of the conversation about definitions has focused on individual fairness (treating similar individuals similarly) and on group fairness (making the model's predictions or outcomes equitable across groups, particularly for potentially vulnerable groups). In "Notes from the AI frontier: Tackling bias in AI (and in humans)," we provide an overview of where algorithms can help reduce disparities caused by human biases, and of where more human vigilance is needed to critically analyze the unfair biases that can become baked in and scaled by AI systems.
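In the spirit of the facial-analysis audits above, a disaggregated evaluation reports an error rate for every subgroup rather than a single overall accuracy number; the records below are invented for illustration:

```python
# Sketch of a disaggregated evaluation (illustrative): compute the error rate
# per subgroup instead of one aggregate number. All records are invented.

from collections import defaultdict

def error_rates_by_group(records):
    # records: list of (group, predicted, actual)
    totals, wrong = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        wrong[group] += predicted != actual
    return {g: wrong[g] / totals[g] for g in totals}

results = [
    ("darker_female", 1, 0), ("darker_female", 1, 1),
    ("darker_female", 0, 1), ("darker_female", 1, 1),
    ("lighter_male", 1, 1), ("lighter_male", 0, 0),
    ("lighter_male", 1, 1), ("lighter_male", 0, 0),
]

print(error_rates_by_group(results))
# {'darker_female': 0.5, 'lighter_male': 0.0}
# The aggregate accuracy here is 75%, which would hide that one subgroup's
# error rate is 50% while the other's is 0%.
```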
In the "CEO image search," only 11 percent of the top image results for "CEO" showed women, whereas women were 27 percent of US CEOs at the time. How should we codify definitions of fairness? For example, if a mortgage lending model finds that older individuals have a higher likelihood of defaulting and reduces lending based on age, society and legal institutions may consider this to be illegal age discrimination. More often than not, we rely on fairness proxies. One proxy often used is compositional fairness: if the group making a decision contains a diversity of viewpoints, then what it decides is deemed fair. Perhaps these have traditionally been the best tools we had, but as we begin to apply tests of fairness to AI systems, can we start to hold humans more accountable as well?

Copyright © 2020 Morgan, Lewis & Bockius LLP. All rights reserved.

Some promising systems use a combination of machines and humans to reduce bias. One tech company stopped using a hiring algorithm when it found that the algorithm favored applicants based on words that were commonly found on men's resumes. Among purely technical remedies, a first class of techniques consists of pre-processing the data to maintain as much accuracy as possible while reducing any relationship between outcomes and protected characteristics, or to produce representations of the data that do not contain information about sensitive attributes.
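A minimal sketch of the pre-processing idea, assuming a single numeric feature and a sensitive group label (both invented): centering the feature within each group removes the average relationship between the feature and group membership, though real pre-processing methods go much further than this:

```python
# Sketch of a pre-processing step (illustrative only): center a feature
# within each group so the transformed feature no longer reveals group
# membership on average. Real methods remove far more than the group mean.

def group_means(rows):
    # rows: list of (group, value)
    sums, counts = {}, {}
    for group, value in rows:
        sums[group] = sums.get(group, 0.0) + value
        counts[group] = counts.get(group, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

def center_within_groups(rows):
    means = group_means(rows)
    return [(g, v - means[g]) for g, v in rows]

data = [("a", 10.0), ("a", 14.0), ("b", 2.0), ("b", 6.0)]
adjusted = center_within_groups(data)
print(group_means(adjusted))  # both group means are now 0.0
```

After the transformation, a downstream model can no longer pick up the groups' differing averages from this feature, at the cost of discarding that signal even where it might have been legitimate, which is the accuracy trade-off the text describes.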
The second is the opportunity to improve AI systems themselves, from how they leverage data to how they are developed, deployed, and used, to prevent them from perpetuating human and societal biases or creating bias and related challenges of their own. Unlike human decisions, decisions made by AI could in principle, and increasingly in practice, be opened up, examined, and interrogated. Diversity is also needed to tackle the inherent bias in an AI that is meant to make our lives easier. And when we measure fairness in representation, what is the right benchmark: is it the percentage of women CEOs we have today?