The Danger with Technology: Algorithm Biases and Racism

We often assume that racism is done by people, to people. Microaggressions, stereotypes, slurs, and acts of violence: racism takes shape in many different ways. That’s why it may surprise some to hear that the very computers, software, and technologies we use every day can perpetuate the same discrimination and racism.

After all, humans can be biased, but computers supposedly exhibit pure objectivity. Yet we must ask: who creates these algorithms? What data does the software use to reach its decisions and findings? The problem deepens when we fail to recognize how the algorithms powering a company’s hiring system or a government’s operations can discriminate against Black, Indigenous, marginalized, and immigrant applicants, often those with non-Western-sounding names.

How Does This Happen?

“Algorithmic bias” is the term for the systemic inequality that predictive technologies can display. Think about how algorithmic biases play a role in recruitment, or in how many years someone is sentenced to prison. Alternatively, think about how these technologies shape the people and content we see on our Instagram or TikTok feeds.

While the phenomenon has gained greater attention in recent years, the complexity of software development and the proprietary, corporate nature of these algorithms make it difficult for the general public to understand. As such, we’ve curated a breakdown of the ways algorithms can discriminate against people of colour.

The People Behind the Technology

When we think about how these algorithms become biased in the first place, we must question the people who design and deploy the technology. Software development and artificial intelligence are fields dominated largely by white men. As a result, when engineers put these systems together, there is little to no input from people of colour who may have concerns or suggestions.

Machine Learning

Machine learning is a highly complex topic, but here’s a simple breakdown of how it works. As an example, consider how credit card limits are determined. An algorithm is fed millions of historical records on debt, payment timing, and more, and learns to automatically determine whether an individual should receive a higher credit limit.

You continually run new data through the model and test its accuracy, and you can train it on all sorts of attributes related to people. As such, you can see how certain systems may come to favour one ethnic group over another, depending on how they are applied. The type of data the algorithm is trained on can introduce racial biases and prejudices as well.
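To make this concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn and entirely synthetic data; it is not any real lender’s system). It shows how a model trained only on seemingly neutral features can still reproduce historical discrimination when one of those features, here a made-up “neighbourhood” code, acts as a proxy for a protected attribute.

```python
# Illustrative sketch only: synthetic data, hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: 0 = majority group, 1 = minority group (synthetic).
group = rng.integers(0, 2, size=n)

# Proxy feature correlated with group membership (e.g. a neighbourhood code).
neighbourhood = np.where(rng.random(n) < 0.8, group, 1 - group)

# Genuinely relevant signal: repayment score, same distribution for both groups.
repayment_score = rng.normal(0.0, 1.0, size=n)

# Historical labels encode past discrimination: identical repayment behaviour,
# but the minority group was approved for higher limits less often.
bias_penalty = 1.5 * group
approved = (repayment_score - bias_penalty + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the protected attribute -- only the "neutral-looking" features.
X = np.column_stack([repayment_score, neighbourhood])
model = LogisticRegression().fit(X, approved)

# The model still approves the minority group far less often, because the
# neighbourhood proxy lets it reconstruct the historical bias.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Even though the protected attribute is never shown to the model, the neighbourhood proxy lets it recover the pattern in the historical data, so the approval gap reappears in its predictions.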

Shadow Banning

Shadow banning is when an algorithm blocks or suppresses content so that it is not visible to other users on the platform. This has happened countless times with the #BlackLivesMatter movement and other content raising awareness of racial tension. Algorithms can suppress this content even when it abides by the platform’s policies. For example, many Black creators on TikTok have accused the platform of censoring videos that spoke about racial injustice, the #BlackLivesMatter movement, or the death of George Floyd. Censoring content about racial matters inhibits the ability to spread the message and raise awareness of the issue.

Real-Life Examples

To convey the magnitude of the situation, let’s look at some real-life examples of the biases algorithms can create.

Twitter

In 2020, Twitter made headlines when its algorithm automatically cropped Black individuals out of photos and focused on white faces instead. The company used this image-cropping system to highlight the important parts of an image when a user uploaded photos to their feed. After several trials, many people noticed that Black individuals were being cropped out. This only deepens the lack of representation that Black and racialized individuals face daily.

Recidivism

Governments use recidivism algorithms to predict how likely a convicted person is to reoffend. Researchers who studied these algorithms found that they over-predicted how often Black defendants would reoffend: Black defendants were twice as likely as white defendants to be misclassified as higher risk.

Additionally, the algorithms mistakenly predicted that white defendants were less likely to reoffend than they actually were. From this analysis, we can see how algorithms discriminate against Black people. This output can then be used against Black defendants, for example to convince a court to hand down a longer sentence.

Hiring Practices

As we know, algorithms and software are not inherently objective. The Equal Employment Opportunity Commission is looking into two complaints alleging algorithmic hiring discrimination. This often happens when companies deploy AI-based applicant screening. These algorithms analyze facial features and names, which can lead to discrimination against minorities during hiring rounds.

How to Be More Inclusive

There is no clear answer to how to solve this issue. However, here are a few considerations that may open the floor to greater discussion.

Companies need to be transparent

To garner greater attention and public interest in this issue, companies should be transparent about their training data and algorithms. This allows outside parties to understand whether or not these algorithms exhibit bias.

Companies need to recognize biases exist everywhere

Company leaders need to understand that biases can exist in their code and their software, and how these biases set back diversity and inclusion in the workplace. As such, managers should aim to hire diverse teams and continually audit their algorithms to ensure they exhibit no bias against any group of people.

It’s up to companies and their management to ensure diversity and inclusion are present everywhere: not just in policies and actions, but in the technologies they implement. And that starts with understanding that racism and bias can live in the very algorithms and systems we use.

Want to learn more? I suggest reading Algorithms of Oppression by Safiya Noble. Check out her 12-minute TED Talk, where she discusses how we can challenge algorithms of oppression.


The blog is curated by Colleen James, Principal and Founder of Divonify Incorporated. Colleen’s work is centered around the dismantling of oppressive systems by working with organizational leadership to address issues of systemic racism, equity, diversity and inclusion. If you enjoyed this blog, please share with others you feel would gain value from it.

Join our mailing list to continue the conversation.
