Previously, we discussed some of the pros and cons of machine learning. While “self-learning” computers have become the future of autonomous tech, R&D teams are still working to iron out fundamental limitations in the technology. Machine learning was created to reduce the constant need for human intervention, but researchers keep finding that we are unable to entirely separate the artificial intelligence from our own.
The Failure to Achieve Objectivity
“As much as we’d like it to be,” says Corey Gary, Enterprise Solutions Architect at TSA, “machine learning isn’t a computational cure-all.” Despite the hopeful aspirations of its creators, machine learning’s initial human input is the very thing that can hinder its accuracy.
“Computers only look at the things that we tell them to look at,” Gary continues. “So, in some capacity, our gender, racial, and socio-economic predilections are naturally transferred into the algorithms.” When it comes to bias in machine learning, even though the computer is instructed to draw its own conclusions from the inputs, independently of its creator, those original inputs can still be biased in human terms, as the short sketch below illustrates.
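To make that concrete, here is a minimal, purely illustrative sketch in Python. It assumes nothing beyond NumPy and scikit-learn, and every number in it is synthetic: historical hiring labels are skewed toward one group, the sensitive attribute is deliberately left out of the features, and the model still reproduces the skew through a correlated proxy feature.

```python
# Minimal sketch: biased historical labels leak into a model's predictions
# even when the sensitive attribute itself is excluded from the features.
# All data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                          # a sensitive attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)                        # true, group-independent ability
proxy = skill + 0.8 * group + rng.normal(0.0, 0.5, n)  # feature correlated with group

# Historical labels reward group 1 beyond skill: the human bias in the data.
hired = (skill + 0.7 * group + rng.normal(0.0, 0.5, n) > 0.5).astype(int)

# Train WITHOUT the group column; the proxy feature carries the bias anyway.
model = LogisticRegression().fit(proxy.reshape(-1, 1), hired)
pred = model.predict(proxy.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: predicted-hire rate = {pred[group == g].mean():.2f}")
# The gap between the two rates mirrors the bias baked into the labels.
```

Dropping the sensitive column is not enough: as long as some feature correlates with it, the model can rediscover the historical bias on its own.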
The Gender Bias
It goes without saying that a purely objective computer should not have opinions about gender binaries or norms. However, the engineers who architected machine learning intelligence in the first place unfortunately do. What these engineers often fail to realize is that they are imparting their own human biases straight into the “objective” code.
For instance, only 26% of U.S. computer engineering jobs are held by women. If the majority of the individuals who teach computers to act like humans are male, it follows that the resulting products will lean toward a one-sided view of gender. According to a study conducted by Carnegie Mellon University, significantly fewer women than men are shown targeted ads offering help in landing jobs that pay more than $200,000. As a result, “There may be issues around who’s awarded loans or policies; women may miss out on job opportunities; and learning tools may end up being designed for [one gender only],” warns Tabitha Goldstaub of CognitionX.
Machine learning gender biases also affect the autonomous tech movement. Skewed inputs may, for instance, impair the objectivity of devices like autonomous cars. Complex moral dilemmas such as the “Trolley Problem” become even thornier when men and women hold differing views on how best to save (or sacrifice) human life.
And it’s awfully difficult to forget Microsoft’s attempt at an objective AI chatbot named Tay (who happens to fit the bill quite nicely for both gender and racial biases). The well-meaning bot started out with innocuous tweets like “humans are super cool,” but within a matter of hours, after assimilating the data from those who were tweeting at her, Tay had spiraled darkly into a flagrantly sexist, Hitler-loving nightmare. Microsoft certainly did not intentionally program Tay to exhibit such vile behavior; her intrinsic inputs, coupled with the unfair influence of rogue Twitter users, sent her into an anything-but-unbiased tailspin.
In many cases, machine learning can only be as objective as the humans who create it.
The Racial Bias
Equally concerning are the explicitly racist biases exhibited in other machine learning outputs.
Consider the Los Angeles network of police-surveillance cameras that scan and monitor the faces of the public. These “digital eyes” can identify individuals from over 600 feet away and simultaneously compare their real-time findings to “hot lists” of criminals and suspects. However, according to The Atlantic, recent testing has found that facial recognition software is much “more likely to either misidentify or fail to identify African Americans than other races.” In fact, such algorithms have been found to perform 5 to 10 percent worse on African Americans than they do on Caucasians. The sketch below shows one simple way such a disparity can be surfaced.
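Here is a hedged sketch of the kind of audit that reveals a gap like this: compare a matcher’s error rates across demographic groups rather than reporting one overall accuracy. The matcher, the groups, and the error rates below are all hypothetical stand-ins simulated with NumPy, not figures from any real system.

```python
# Hypothetical audit sketch: measure a face matcher's error rates per group.
# The "matcher" is simulated; only the auditing arithmetic is the point.
import numpy as np

rng = np.random.default_rng(1)
groups = np.array(["A"] * 500 + ["B"] * 500)     # demographic label per trial
truth = rng.integers(0, 2, 1000)                 # 1 = same person, 0 = different

# Simulate a matcher that errs more often on group B (illustrative only).
err_rate = np.where(groups == "A", 0.05, 0.12)
flip = rng.random(1000) < err_rate
pred = np.where(flip, 1 - truth, truth)

for g in ("A", "B"):
    m = groups == g
    fnmr = np.mean(pred[m & (truth == 1)] == 0)  # false non-match rate
    fmr = np.mean(pred[m & (truth == 0)] == 1)   # false match rate
    print(f"group {g}: FNMR = {fnmr:.3f}, FMR = {fmr:.3f}")
```

Breaking the error rates out by group, instead of averaging over everyone, is exactly what exposes gaps like the 5-to-10 percent disparity the testing found.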
Moreover, machine learning technologies have proven easily swayed by the “deeply ingrained race [prejudices] concealed within the patterns of language use.” Algorithms trained on human language absorb our associations, some of them morally neutral, like “a preference for flowers over insects.” However, AI researcher Joanna Bryson has discovered that machine learning algorithms are somehow also learning to associate European American names with positive words like “gift” or “happy,” while African American (as well as other international) names are more commonly associated with “unpleasant words.” Unfortunately, these biases unfairly inject social prejudices into simple decision-making algorithms like job application processing. A previous study showed that a CV bearing a European American name was 50% more likely to prompt a job interview invitation than an identical CV bearing an African American name. This is just one example, but consider the socio-economic ramifications. The sketch that follows shows how such an association can be measured.
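For a sense of how such an association is measured, here is a small sketch in the spirit of the association tests Bryson and her colleagues used: score a word by how much closer it sits to a set of “pleasant” words than to a set of “unpleasant” ones in an embedding space. The four-dimensional vectors below are toy stand-ins for real word embeddings, with the geometry contrived so the effect is visible, and the names are illustrative.

```python
# Sketch of an embedding association test: does a name sit closer to
# "pleasant" or "unpleasant" words? Toy vectors stand in for real embeddings.
import numpy as np

emb = {
    "gift":    np.array([1.0, 0.2, 0.0, 0.1]),
    "happy":   np.array([0.9, 0.3, 0.1, 0.0]),
    "agony":   np.array([0.0, 0.1, 1.0, 0.2]),
    "awful":   np.array([0.1, 0.0, 0.9, 0.3]),
    "Emily":   np.array([0.8, 0.4, 0.1, 0.2]),  # contrived: near the pleasant cluster
    "Lakisha": np.array([0.2, 0.3, 0.8, 0.1]),  # contrived: near the unpleasant cluster
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, pleasant, unpleasant):
    # Positive score: the word leans toward the pleasant set.
    p = np.mean([cosine(emb[word], emb[w]) for w in pleasant])
    u = np.mean([cosine(emb[word], emb[w]) for w in unpleasant])
    return p - u

pleasant, unpleasant = ["gift", "happy"], ["agony", "awful"]
for name in ("Emily", "Lakisha"):
    print(f"{name}: association score = {association(name, pleasant, unpleasant):+.3f}")
```

With real embeddings trained on web text, this same arithmetic surfaces disparities of the kind Bryson’s team reported: nothing in the algorithm itself is prejudiced, yet the training corpus makes the output so.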
We created a technology to manufacture objectivity for us, but instead we have built prejudice-prone machines that judge the world, and those who live in it, in a manner all too similar to our own.
How We Fight
University of Oxford data ethics and algorithm researcher Sandra Wachter says, “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.” Fundamentally, our input data and our own objectivity need to change on the home front, because machine learning can only be as effective and impartial as its creators. If we want to place humanity in the balance, then we have a moral obligation to work towards a machine learning technology that is effectively unadulterated by our own partiality.
After all, there is still a great deal of good to be accomplished with this powerful computing potential (see our past blog on machine learning’s speech recognition and natural language processing abilities). Together with companies like HPE, we are working to outperform traditional machine learning approaches by pushing the tech to surpass not only human abilities, but human biases as well.