Every neural network is biased. True or false?

Have you ever heard of the veil of ignorance? It's a concept from the modern philosopher John Rawls, who suggested that when making decisions that affect more than ourselves, we should imagine sitting behind a veil of ignorance that keeps us from knowing who we are and from identifying with our own circumstances. Is it possible for humans to make decisions without any prejudice?

According to an article in Current Directions in Psychological Science, prejudice arises when people who are uncomfortable with ambiguity make generalizations about others. These generalizations reduce ambiguity while also enabling quicker, often harmful decisions. Bias and prejudice are passed on between generations and within social groups such as nationalities and regions, and, on a smaller scale, among family and friends. Bias becomes a regular part of life and of the social experience, and we may unconsciously carry it into the world of technology.

What about AI, then?

Humans select every data set behind an AI system. Consequently, AI inherits the ways of thinking, and the biases, embedded in that data.

In 2019, researchers found that an algorithm used on more than 200 million people in US hospitals to predict which patients would require additional medical care, based on their past healthcare expenditures, favored white patients over Black patients by a considerable margin. Black patients with similar ailments had spent less on healthcare than white patients, so the spending proxy systematically understated their needs. Researchers and the health services company Optum worked together to reduce the bias by 80%. However, had humans not questioned the AI, the algorithm would have continued to discriminate against Black individuals.

Similarly, one of the most significant instances of AI bias was COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used in US court systems to forecast the likelihood of a defendant becoming a recidivist. Because of the data used, the chosen model, and the overall process of creating the algorithm, COMPAS produced nearly twice as many false positives for recidivism for Black offenders (45%) as for white offenders (23%).
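To make that metric concrete, here is a minimal sketch in Python of how false positive rates can be compared across groups. The records below are invented toy data, not the actual COMPAS dataset:

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false positive rate for each group.

    Each record is (group, predicted_high_risk, reoffended).
    A false positive is a defendant flagged as high risk
    who did not in fact reoffend.
    """
    false_pos = defaultdict(int)  # flagged high risk but did not reoffend
    negatives = defaultdict(int)  # all defendants who did not reoffend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Toy, made-up records: (group, predicted_high_risk, reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False),
    ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False),
    ("B", True, True),
]
print(false_positive_rates(records))  # {'A': 0.67, 'B': 0.33} (rounded)
```

A gap of this kind between groups, at equal actual reoffending rates, is exactly the disparity the COMPAS audit exposed.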

Another notable, large-scale case of AI bias was observed in 2015 in Amazon's hiring process. Unsurprisingly, the multibillion-dollar company relies heavily on machine learning and artificial intelligence, and it had built an algorithm for ranking potential employees among applicants. The algorithm was eventually found to be biased against women. The cause was the training data: applications submitted over the previous ten years, most of which came from men, reflecting the industry's male dominance, so the model learned to favor men over women. Amazon changed the program to be more neutral by discounting keywords that indicated an applicant's gender, but the bias persisted regardless. As a result, recruiters used the tool's suggestions when looking for new employees but never depended exclusively on its rankings, and Amazon dissolved the project in 2017.

Let's say we did everything we could to provide a diverse dataset. Can AI be "traumatized" like a person, or carry a negative experience?

According to some experiments, yes, it can. One example is the case of "Loab", an AI-generated entity discovered by the multimedia artist Supercomposite. An image-generating neural network produced both the figure and its aura of horror on its own, and Loab imprinted itself on the network much as a traumatic experience imprints itself on a human brain: wherever it appears, the surrounding images are pulled toward horror.

How much do we care about AI without prejudice?

In the 2020 State of AI and Machine Learning Report, only 24% of participating companies declared unbiased, diverse, and global AI to be a mission-critical goal. Furthermore, creating unbiased algorithms is a complicated matter and a goal we are still far from achieving. To get there, the data has to be bias-free, and the engineers creating these algorithms need to ensure they are not leaking any of their own biases into them; needless to say, AI tends to reflect human societal prejudices.
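One way to make such an audit concrete is a demographic-parity check: comparing the rate of favorable model decisions across groups. Here is a minimal sketch in Python; the decision records are invented for illustration, and the 0.8 threshold reflects the "four-fifths rule" used in US employment guidance:

```python
def selection_rates(decisions):
    """Return the share of favorable decisions per group.

    decisions: iterable of (group, selected) pairs, where
    selected is True if the model gave the favorable outcome.
    """
    totals, favorable = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + int(selected)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest selection rate to the highest.

    The four-fifths rule treats a ratio below 0.8 as a
    warning sign of adverse impact.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-style decisions: (group, got_favorable_outcome)
decisions = ([("men", True)] * 70 + [("men", False)] * 30
             + [("women", True)] * 40 + [("women", False)] * 60)
rates = selection_rates(decisions)
print(rates)                    # {'men': 0.7, 'women': 0.4}
print(disparate_impact(rates))  # ~0.57, well below the 0.8 threshold
```

A check like this catches only one narrow kind of unfairness, which is part of why debiasing remains such a hard, unfinished goal.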

Do you think we can ever be free of bias? And what would it take for AI to be?


