Redlining in the Twenty-First Century: Everything Everywhere All at Once

Computers make everything bigger and faster; unfortunately, that applies to harmful biases as well. Racism gone digital will be bigger than ever.

February 18, 2025 · 4 min read

Image generated by DALL-E 3

In the US during the twentieth century, racist people and corporations used a range of tactics against non-white people, including redlining. Most commonly, banks, insurers, and other financial services companies would charge higher interest rates in certain neighborhoods (neighborhoods that “coincidentally” had many non-white residents). Similar approaches were used to prevent non-white buyers from purchasing in certain neighborhoods. This practice is considered one of the causes of urban decay (reference).

Good news everyone, it’s back! If you thought the negative impacts of such racist policies didn’t do enough damage in the twentieth century, just wait until you see round two. Companies today are engaging widely in targeted promotions, in which algorithms determine who does, and who doesn’t, get better pricing.

Let’s first recognize that this isn’t entirely new. Airlines would offer “discounts” to better customers by letting them pay with frequent flier miles. Casinos would comp meals, rooms, tickets, and other benefits to high rollers. (My roommate, who was president of the MIT blackjack team, once came home with a $15,000 watch—that was after he lost about $180,000 in a weekend. Fear not, their algorithmic approach to playing helped them beat the casinos in the long run.) These benefits were based on simple “algorithms,” basically: how much has this customer spent in the past? Unless the pit boss at the casino was racist (I’m sure a few have been over the years), race wasn’t a factor. Airlines don’t use race in their frequent flier models.

That’s not to say there was no racial bias. When a store ran a promotion, such as a sale or coupons, it chose where to advertise it. Each newspaper, radio station, and other channel had its own racial skew. The store may have intentionally advertised in one channel and not another because of race, or experience may simply have pointed to certain channels over others, with race never an explicit factor.

Today, algorithms are much more personalized and sophisticated. They build up profiles of individuals and cohorts and enable specific microtargeting. In the past, prices might have varied by region; e.g., a McDonald’s in affluent Darien, CT (which is predominantly white) charges more for a Big Mac than other locations do. But with online shopping, different prices can literally be offered to two people sitting next to each other, buying the same product. We see this becoming more and more common, as noted in articles like When online sellers use different prices for different consumers, Why Online Shoppers See Different Prices for the Same Item, and You’re not going crazy — you may actually be paying higher prices than other people.

Here’s the problem: the algorithms that determine what price to charge someone, or what promotion to offer, take into account a number of variables. Quite often those variables include zip code or other geographic information. This is because people generally won’t tell a company how much they earn, but companies can estimate a buyer’s income from the zip code. (I say this as someone who has built these types of algorithms in the past.)

Also correlated with zip code is, you guessed it, race. It’s not that the companies are trying to be racist, but it winds up working out the same as if they were. When the machine learning models say that offering $1 less to people over here leads to more sales while offering $1 less to people over there does not, the models are just looking at revenue; but when these people and those people are of different races, we’re back to de facto redlining. (For those who think we’re being overly sensitive on race, just substitute rich people and poor people, because you get the same effects.)
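
To make the mechanism concrete, here’s a minimal sketch in Python using entirely synthetic data. The zip codes, demographics, and purchase behavior are all invented for illustration; the point is only that a model given zip code, but never race, can still produce offers that differ by race.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic world: zip 0 is affluent and mostly race "A";
# zip 1 is lower-income and mostly race "B". (All numbers invented.)
zip_code = rng.integers(0, 2, size=n)
race = np.where(rng.random(n) < np.where(zip_code == 0, 0.9, 0.3), "A", "B")
income = rng.normal(np.where(zip_code == 0, 120, 60), 15)

# Higher-income shoppers are more likely to buy at full price.
bought = rng.random(n) < 1 / (1 + np.exp(-(income - 80) / 20))

# The pricing model sees ONLY zip code; race and income are never features.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), bought)
p_buy = model.predict_proba(zip_code.reshape(-1, 1))[:, 1]

# Naive revenue logic: discount only the price-sensitive (low p_buy) shoppers.
offers = pd.DataFrame({"race": race, "got_discount": p_buy < 0.5})
print(offers.groupby("race")["got_discount"].mean())
```

Run it and the two (fictional) races see very different discount rates; the model never saw race, but zip code did the work for it.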

Those working on and with such algorithms need to be aware of this and take steps to minimize it. It’s not trivial, but it is doable. Look at what the algorithm is actually doing, then compare its offers across different groups by income, geography, age, race, etc., as in the sketch below.
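
Here’s what such an audit might look like; a minimal sketch, assuming an offers table with one row per customer, a price_offered column, a got_discount flag, and whatever group attributes are available (all names hypothetical).

```python
import pandas as pd

def audit_offers(offers: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Summarize offer metrics for each demographic slice."""
    summaries = []
    for col in group_cols:
        summary = offers.groupby(col).agg(
            n=("price_offered", "size"),
            avg_price=("price_offered", "mean"),
            discount_rate=("got_discount", "mean"),
        )
        # Prefix each slice with the attribute it was grouped by.
        summary.index = pd.MultiIndex.from_product([[col], summary.index])
        summaries.append(summary)
    return pd.concat(summaries)

# Usage: audit_offers(offers, ["income_band", "zip_code", "age_band", "race"])
```

Large gaps in avg_price or discount_rate between slices are the red flags to investigate.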

Unfortunately, as the world gets more and more automated by algorithms, this will become a bigger problem. Cathy O’Neil writes about this in her book Weapons of Math Destruction, as well as on her blog, mathbabe (including some posts that explicitly cover redlining).

The data we already have has some racial bias built into it. There’s plenty of data on racial bias in sentencing stemming from a variety of factors (cf. the United States Sentencing Commission, Open Society’s Racial Disparity in Sentencing, and The Sentencing Project’s One in Five: Racial Disparity in Imprisonment — Causes and Remedies). If we build any type of sentencing recommendation engine to “remove bias” from human judges today, it will be built upon the bias of human judges from years gone by.

There’s no easy answer. Machine learning is typically built by finding patterns in (historical) data. [I’m way oversimplifying what machine learning is here, but at a 100,000-foot level that’s what it does.] Generative AI (including LLMs, or Large Language Models) is built upon what humans have been writing for decades or even centuries. All of that data has bias. It’s important that those creating and those using such systems recognize there is likely bias built in and tread carefully. Preferably, any rollout should include checkpoints and ongoing supervision to flag and fix bias as the system develops and is in use.
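
As one hedged example of such a checkpoint, reusing the hypothetical offers table from the audit above: a release gate that fails when any group’s discount rate drifts too far from the overall rate. The column names and tolerance are assumptions for illustration, not an established standard.

```python
import pandas as pd

def bias_checkpoint(offers: pd.DataFrame, group_col: str,
                    metric_col: str = "got_discount",
                    tolerance: float = 0.05) -> None:
    """Fail the rollout if any group's average metric deviates from
    the overall average by more than `tolerance`."""
    overall = offers[metric_col].mean()
    by_group = offers.groupby(group_col)[metric_col].mean()
    flagged = by_group[(by_group - overall).abs() > tolerance]
    if not flagged.empty:
        raise RuntimeError(
            f"Bias checkpoint failed on {group_col}: "
            f"{flagged.to_dict()} vs overall {overall:.3f}"
        )

# Before each release, run once per sensitive attribute, e.g.:
# for col in ["race", "age_band", "zip_code"]:
#     bias_checkpoint(offers, col)
```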

This adds cost, which no business likes. Unfortunately, we’re facing a tragedy-of-the-commons risk where each company says, “Sorry if I’m a little biased, but it’s expensive to be unbiased; besides, the worst case is that things are just going to be a little worse (e.g., more expensive, slower, harder) for this one group.” If the group being discriminated against were picked at random, maybe it would all even out; but we know it’s not random. It’s the same groups over and over, due to the systematic bias in the data, and it’s not just your company but every company doing it to them. Your algorithm might only do it to them once in a while, but taken together, certain groups will be facing bias from everything, everywhere, all at once.

By Mark A. Herschberg