

In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination. The school had been using a computer program to decide which applicants should be invited for an interview. Although the program had been built to match human admissions decisions with up to 95% accuracy, its decisions were biased against women and against applicants with non-European names. Ironically, even with this bias, the school still admitted a higher proportion of non-European students than most other medical schools in London.
Fast forward to today. AI algorithms have become far more complex than they were 30 years ago, yet the challenges associated with AI bias persist. On one hand, AI can help us identify and reduce the effects of human bias. On the other hand, it can make the problem worse by baking bias in and deploying it at scale. For instance, natural language processing (NLP) models trained on news articles can learn to exhibit gender bias.
Most often, we tend to explain away AI bias by blaming it on biased training data. The truth is that bias can creep into an AI system in many ways, long before any data is collected and at any stage of the learning process.
The first thing you need to do before building an AI system is decide what problem you want it to solve. Say a credit card company wants to predict its customers' creditworthiness. But how do you explain the concept of 'creditworthiness' to a computer? To translate it, the company has to decide whether creditworthiness means maximizing its profit margin or maximizing the number of loans that customers repay. In other words, creditworthiness gets defined in the context of the company's own goal. If, under that framing, the algorithm discovers that handing out subprime loans is the best way to maximize profit, it can end up behaving in predatory ways even though the company never intended it to.
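To make the framing step concrete, here is a minimal sketch in Python with synthetic data and hypothetical column names (none of these come from a real lender). The modelling code is identical in both cases; only the label column changes, and with it everything the model learns to optimize.

```python
# Sketch only: synthetic data, hypothetical attribute names and thresholds.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000
applicants = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "existing_debt": rng.normal(10_000, 5_000, n),
})

# Framing A: "creditworthy" means the loan gets repaid in full.
label_repaid = (applicants["income"] - applicants["existing_debt"]
                + rng.normal(0, 10_000, n) > 25_000).astype(int)

# Framing B: "creditworthy" means the account maximizes profit
# (e.g. lots of interest collected, which can favour riskier lending).
label_profit = (applicants["existing_debt"]
                + rng.normal(0, 5_000, n) > 12_000).astype(int)

model_repaid = make_pipeline(StandardScaler(), LogisticRegression()).fit(applicants, label_repaid)
model_profit = make_pipeline(StandardScaler(), LogisticRegression()).fit(applicants, label_profit)

# The same applicant can be approved under one framing and declined under the other.
sample = applicants.iloc[[0]]
print("Approve (repayment framing):", model_repaid.predict(sample)[0])
print("Approve (profit framing):   ", model_profit.predict(sample)[0])
```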
There are two common ways in which bias enters the training data: the data you collect may be unrepresentative of reality, or it may reflect existing prejudices, such as historical decisions that were themselves biased.
Another source of bias is the set of attributes we choose to train the model on, and the decisions around choosing these attributes play a crucial role in how accurate and how biased the model turns out to be. In the creditworthiness example above, candidate attributes include the customer's age, their income, or the number of loans they have already paid off. Depending on which attributes the company picks, the model can be more or less accurate. But while the impact of an attribute choice on accuracy is easy to measure, the same cannot be said about its impact on bias.
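The sketch below, again with synthetic data and hypothetical attribute names, illustrates that asymmetry: accuracy falls out of standard tooling as a single number per feature set, whereas the bias check has to be added deliberately, and someone has to decide what to measure (here, the gap in approval rates between two groups).

```python
# Sketch only: synthetic data; "group" stands in for a protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2_000
group = rng.integers(0, 2, n)                       # protected attribute, kept out of the features
income = rng.normal(50_000 + 8_000 * group, 12_000, n)
age = rng.integers(21, 70, n)
loans_repaid = rng.poisson(2 + group, n)
y = (income + 5_000 * loans_repaid + rng.normal(0, 15_000, n) > 65_000).astype(int)

feature_sets = {
    "income only": np.column_stack([income]),
    "income + age + loans repaid": np.column_stack([income, age, loans_repaid]),
}

for name, X in feature_sets.items():
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Accuracy is one number; the bias check is a separate, deliberately chosen
    # measurement: the gap in approval rates between the two groups.
    gap = pred[g_te == 1].mean() - pred[g_te == 0].mean()
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.2f}, approval-rate gap={gap:+.2f}")
```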
To minimize bias, we first need to be able to define and measure fairness, and there are many competing definitions. What seems fair in one context or to one person may not seem fair in another context or to another person. For example, when we say there is a fair share of women CEOs in the world, do we mean that 50% of CEOs are women? Or simply that the number of women CEOs is not negligible? And does that ratio hold in the real world? Efforts to define fairness come with tradeoffs: at any given point in time, a model can typically satisfy only one definition of fairness, or be fair to only one individual or group, at the expense of another.
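A small sketch with made-up numbers shows how two common definitions can disagree on the same set of decisions: demographic parity (equal approval rates across groups) versus equal opportunity (equal approval rates among the people who would in fact repay). The decisions below look fair under the first definition and unfair under the second.

```python
# Sketch only: hand-crafted toy data, not a real model's output.
import numpy as np

group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected attribute
y_true = np.array([1, 1, 0, 0, 1, 1, 1, 1, 0, 0])   # would actually repay
y_pred = np.array([1, 0, 0, 1, 1, 1, 1, 1, 0, 0])   # model's approval decision

def approval_rate(pred, mask):
    """Demographic parity compares this rate across groups."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Equal opportunity compares approval rates among true repayers."""
    repayers = mask & (true == 1)
    return pred[repayers].mean()

for g in (0, 1):
    mask = group == g
    print(f"group {g}: approval rate={approval_rate(y_pred, mask):.2f}, "
          f"TPR={true_positive_rate(y_true, y_pred, mask):.2f}")
# Both groups are approved at the same rate (0.60), yet group 1's repayers are
# always approved (TPR 1.00) while group 0's are not (TPR 0.67).
```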
There is also disagreement on the best way to resolve these tradeoffs. One approach is to set different thresholds for different groups of people; in the credit card example, credit limits could be set according to applicants' income brackets. Another approach is to set a single threshold for everyone, so that the credit limit does not vary with the applicant's income. Since not everyone agrees on a single definition of fairness, there is no single way to measure it, and different approaches or standards will be needed depending on the use case and context.
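Here is a minimal sketch of those two threshold policies applied to synthetic model scores (the thresholds themselves are hypothetical). Neither policy is "the" fair choice; each encodes a different definition of fairness, which is exactly the disagreement described above.

```python
# Sketch only: synthetic score distributions and hypothetical cutoffs.
import numpy as np

rng = np.random.default_rng(2)
scores_low_income  = rng.normal(0.45, 0.15, 500)   # model scores, lower-income applicants
scores_high_income = rng.normal(0.60, 0.15, 500)   # model scores, higher-income applicants

# Policy 1: one shared cutoff for everyone.
single = 0.50
# Policy 2: per-group cutoffs chosen so both groups see similar approval rates.
per_group = {"low": 0.40, "high": 0.55}

print("single threshold:    "
      f"low-income approved {(scores_low_income > single).mean():.0%}, "
      f"high-income approved {(scores_high_income > single).mean():.0%}")
print("per-group thresholds: "
      f"low-income approved {(scores_low_income > per_group['low']).mean():.0%}, "
      f"high-income approved {(scores_high_income > per_group['high']).mean():.0%}")
```

With the single threshold the approval rates diverge sharply; with per-group thresholds they roughly equalize, at the cost of applying different rules to different people.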
One of the foremost things you can do as a business leader is stay up to date on AI research and the emerging ways of tackling bias. There are plenty of resources that provide the necessary information, such as the annual reports from the AI Now Institute and the Alan Turing Institute's Fairness, Transparency, Privacy group.
Setting your business up to deploy AI successfully requires establishing processes that reduce bias. Use a portfolio of technical tools, operational practices, and third-party audits. AI tech giants such as Google AI have published recommended practices, and IBM's AI Fairness 360 toolkit brings together common technical tools.
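As a taste of what that toolkit packages up, here is a hedged sketch using AI Fairness 360 on a toy applicant table. The data is synthetic, and the exact class names and arguments should be verified against the AIF360 documentation for the version you install (`pip install aif360`).

```python
# Sketch only: toy data; check class names/arguments against the AIF360 docs.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy applicant table: 'sex' is the protected attribute, 'approved' the label.
df = pd.DataFrame({
    "sex":      [0, 0, 0, 0, 1, 1, 1, 1],
    "income":   [30, 42, 55, 38, 61, 47, 52, 70],
    "approved": [0, 0, 1, 0, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["sex"])
privileged   = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Dataset-level fairness metrics.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("disparate impact:             ", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())

# One of the toolkit's mitigation steps: reweigh examples before training.
reweighed = Reweighing(unprivileged_groups=unprivileged,
                       privileged_groups=privileged).fit_transform(dataset)
print("instance weights after reweighing:", reweighed.instance_weights)
```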
You also need to figure out how humans and machines can work together to remove bias from the system. In one common setup, the AI system makes recommendations that human decision makers double-check before acting on them. But do those decision makers know precisely how confident the AI system is in each recommendation?
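One way to close that gap is to surface the model's confidence alongside every recommendation and route low-confidence cases to a human reviewer. The sketch below shows the pattern on synthetic data; the 0.8 threshold and the helper name `recommend` are hypothetical choices for illustration.

```python
# Sketch only: synthetic data, hypothetical review threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def recommend(applicant, threshold=0.8):
    """Return the model's recommendation plus its confidence, and flag
    low-confidence cases for human review instead of auto-deciding."""
    proba = model.predict_proba(applicant.reshape(1, -1))[0]
    label, confidence = int(proba.argmax()), float(proba.max())
    return {"recommendation": label,
            "confidence": round(confidence, 2),
            "route_to_human": confidence < threshold}

print(recommend(np.array([0.1, -0.2, 0.4])))   # likely borderline -> human review
print(recommend(np.array([3.0, 2.0, 0.0])))    # likely confident -> auto-recommend
```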
Removing bias requires a lot of research and data, so it is important to be open to investing in that research. This way you can help advance the field and reduce the risk of bias in your own AI systems. Allow room for collaboration between the teams involved and encourage transparency.
Build a diverse AI community. A diverse team is better placed to spot bias and to keep the people your systems affect engaged. To drive these efforts, invest in education, mentorship, and opportunities, like the work done by the nonprofit organization AI4ALL.
AI systems are only as good as the data they are fed. Bad data can carry all kinds of biases, which show up in the system's output, and as training continues those biases can compound into an ongoing, hard-to-fix problem. We often assume the AI system will spot and correct these problems by itself, but that is not the case. It is therefore essential that the humans involved recognize these biases and avoid feeding biased data into the models they train. That is how you build systems that both your organization and your customers can trust.