Algorithmic bias and how it impacts all of us

Apr 11, 2022
5 min Read
Kate Malott

Google and other search engines know nearly everything about us: our age, gender, race, income, location, education, marital status, interests, political affiliation and more. So it is no surprise that these companies personalize and commercialize each individual's online experience in ways that are not only effective but profitable.

“They want to keep you engaged in order to influence you, sell you things, sell you ads,” says Thomas Freeman, JD’07, MS’09, MBA’12, an instructor in business law and ethics at Creighton.

Companies do this using algorithms: systematic, repeatable mathematical procedures shaped by data. But when those procedures rest on error-prone assumptions, whether introduced by humans or by artificial intelligence, the result is known as algorithmic bias, and its outcomes can be flawed, unequal and unjust.

“Algorithms use the past to make predictions of our future, and a lot of the decisions that used to be made by people are now being made by robots, like employment decisions and health care decisions,” Freeman says.

Freeman teaches and lectures on algorithmic bias and algorithmic injustices at the Heider College of Business. He says our online experiences are increasingly dictated by algorithms. What you see online is determined by how you have been categorized using key demographic and psychographic information. A person in their 50s will have a different online experience than someone in their 30s.
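
To make that concrete, here is a hypothetical sketch in Python (the rules and ad categories are invented for illustration and are not drawn from any real system) of how demographic bucketing can route two people to entirely different content:

```python
# Hypothetical demographic targeting: users are bucketed by age,
# and each bucket is shown different content. Categories are invented.
def choose_ads(profile: dict) -> list[str]:
    """Return ad topics based solely on the user's demographic bucket."""
    if profile["age"] >= 50:
        return ["retirement planning", "medicare supplements"]
    return ["concert tickets", "apartment listings"]

print(choose_ads({"age": 55}))  # ['retirement planning', 'medicare supplements']
print(choose_ads({"age": 32}))  # ['concert tickets', 'apartment listings']
```

Real systems weigh far more signals, but the principle is the same: the category, not the person, decides what appears on screen.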

Algorithms dictate the movies Netflix suggests, the products Amazon recommends, the articles Facebook surfaces, the matches Tinder proposes and more. Freeman's work with the Institute for Digital Humanity focuses on algorithmic bias and other data science issues.

“Bias can emerge from the design of the algorithm, or the unintended or unanticipated use of data or decisions relating to the way data is coded, collected, selected, or used to train the algorithm,” Freeman says.  
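
To see how training data alone can bake bias into a system, consider a minimal sketch (the dataset, groups and numbers are invented for illustration; this is not code from Freeman or the Institute) of a "predictor" that simply learns past hiring rates:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, was_hired).
# The history itself is skewed: group "A" was hired far more often.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

# A naive "algorithm": score new applicants by their group's past hire rate.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    totals[group] += 1
    hires[group] += hired  # True counts as 1

def predicted_hire_probability(group: str) -> float:
    """Predict purely from past outcomes for the applicant's group."""
    return hires[group] / totals[group]

print(predicted_hire_probability("A"))  # 0.8
print(predicted_hire_probability("B"))  # 0.3 -- yesterday's bias, automated
```

Nothing in the code mentions prejudice, yet the model faithfully reproduces the skew in its training data, which is exactly the dynamic Freeman describes.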

Artificial intelligence affects more than entertainment and social networks. It impacts political advertisements, employment opportunities, housing options and more. It’s increasingly used in policing and health care, too, Freeman says.


The problem is that algorithms, however useful at times, tend to reinforce social biases around gender, ability, race, sexuality and ethnicity.

“Algorithms are making assumptions about you, and this bakes in historical, predictive biases of the past,” Freeman says.

So, how do we, as a society, ensure that emerging technology is transparent and free from racial, gender and other biases? Freeman says there are three ways.  

First, people need to be educated about algorithms and the power they wield. The public, educators, lawyers and legislators need to be informed about technology's growing influence, both on individuals and across the globe.

“Most people don’t understand the effects, or really understand the decisions being made about them, or how to go about challenging those decisions,” Freeman says.

Second, legislative policy and protections, along with guardrails on the use of artificial intelligence in business, need to be created and then regularly audited and evaluated to hold developers accountable.

While working at the Nebraska Attorney General’s Office, Freeman recognized the need to look at large-scale impact.

“I was going after smaller businesses and thought, ‘Why are we doing this when Google and other companies have been, at a very wide scale, violating our privacy? Why is the government not involved in this?’”

Third, additional legal protection is necessary. The problem, Freeman says, lies in the absence of comprehensive regulation: the United States has no comprehensive privacy law, and each state is free to develop its own rules.

“Algorithms are a very useful tool, but, like any tool, we have to be careful that we’re assigning them tasks they are capable of, they are designed properly, and they are regularly evaluated and audited for accuracy and to eliminate bias. We have to understand how they affect our lives.”
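
What might such an audit look like? One long-standing rule of thumb in U.S. employment law is the "four-fifths" disparate-impact test: each group's selection rate should be at least 80% of the highest group's rate. Here is a minimal sketch of that check, with invented selection rates:

```python
# A minimal bias-audit sketch: the four-fifths (disparate impact) test.
# Selection rates per group are invented for illustration.
selection_rates = {"A": 0.80, "B": 0.30}

highest = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / highest
    status = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rate:.2f}  impact ratio={ratio:.2f}  {status}")
```

A single ratio will not catch every kind of bias, but running checks like this on a schedule is the sort of routine evaluation Freeman is calling for.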