Free Republic

AI Picks White Names Over Black In 85% Of Hiring Scenarios
Study Finds | May 19, 2025 | Research led by Kyra Wilson, University of Washington

Posted on 05/19/2025 9:12:29 AM PDT by Red Badger

In a nutshell

AI resume screening tools showed strong racial and gender bias, with White-associated names preferred in 85.1% of tests and Black male names favored in 0% of comparisons against White males.

Bias increased when resumes were shorter, suggesting that when there’s less information, demographic signals like names carry even more weight.

Removing names isn’t enough to fix the problem, as subtle clues—like word choice or school name—can still reveal identity, allowing AI systems to continue filtering out diverse candidates.

=================================================================

SEATTLE — Every day, millions of Americans send their resumes into what feels like a digital black hole, wondering why they never hear back. Artificial intelligence is supposed to be the great equalizer when it comes to eliminating hiring bias. However, researchers from the University of Washington analyzing AI-powered resume screening found that having a Black-sounding name could torpedo your chances before you even make it to the interview stage.

A study presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in October 2024 revealed just how deep this digital discrimination runs. The researchers tested three state-of-the-art AI models on over 500 resumes and job descriptions across nine different occupations. They found that resumes with White-associated names were preferred in a staggering 85.1% of cases, while those with female-associated names received preference in just 11.1% of tests.

The study found that Black male job seekers face the steepest disadvantage of all. In comparisons with every other demographic group—White men, White women, and Black women—resumes with Black male names were favored in exactly 0% of cases against White male names and only 14.8% against Black female names.

These aren’t obscure academic models gathering dust on university servers. The three systems tested—E5-mistral-7b-instruct, GritLM-7B, and SFR-Embedding-Mistral—were among the highest-performing open-source AI tools available for text analysis at the time of the study. Companies are already using similar technology to sift through the millions of resumes they receive annually, making this research particularly urgent for working Americans.

How the Bias Shows Up

These AI resume screening models convert resumes and job descriptions into numerical representations, then measure how closely they match using something called “cosine similarity,” essentially scoring how well a resume aligns with what the job posting is looking for.
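The scoring step can be sketched in a few lines of plain Python. The vectors below are stand-in embeddings, not output from the actual models tested; a real screener would obtain them from an embedding model such as the ones in the study, but the cosine-similarity arithmetic is the same:

```python
from math import sqrt

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their lengths;
    # scores near 1.0 mean the embeddings point the same way (a close match).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in embedding vectors (hypothetical values for illustration).
job_posting = [0.9, 0.1, 0.3]
resume_a = [0.8, 0.2, 0.4]   # closely aligned with the posting
resume_b = [0.1, 0.9, 0.2]   # poorly aligned

score_a = cosine_similarity(job_posting, resume_a)
score_b = cosine_similarity(job_posting, resume_b)
```

Here `resume_a` scores higher than `resume_b`, so it would be ranked first for human review; the study's finding is that the name on the resume can shift that score.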

Researchers augmented real resumes with 120 carefully selected names that linguistic studies have shown are strongly associated with specific racial and gender groups: names like Kenya and Latisha for Black women, Jackson and Demetrius for Black men, May and Kristine for White women, and John and Spencer for White men.

When they ran more than three million comparisons between these name-augmented resumes and job descriptions, clear patterns emerged. White-associated names consistently scored higher similarity ratings, meaning they would be more likely to make it past initial AI screening to reach human recruiters.

Intersectional analysis, looking at how race and gender combine, revealed even more drastic disparities. Black men faced discrimination across virtually every occupation tested, from marketing managers to engineers to teachers. Meanwhile, the smallest gaps appeared between White men and White women, suggesting that racial bias often outweighs gender bias in these AI systems.

Critics might argue that removing names from resumes could solve this problem, but it’s not that simple. Real resumes contain numerous other signals of demographic identity, from university names and locations to word choices and even leadership roles in identity-based organizations.

Previous research has shown that women tend to use words like “cared” or “volunteered” more frequently in resumes, while men more often use terms like “repaired” or “competed.” AI systems can pick up on these subtle linguistic patterns, potentially perpetuating bias even without explicit demographic markers.

When researchers tested “title-only” resumes, containing just a name and job title, bias actually increased compared to full-length resumes. This suggests that in early-stage screening, where less information is available, demographic signals carry disproportionate weight.

[Image: An AI robot hiring manager shaking hands with a candidate]

AI-powered resume screening is rapidly becoming the norm. According to industry estimates, 99% of Fortune 500 companies already use some form of AI assistance in hiring decisions. For job seekers in competitive markets, this means that algorithmic bias could determine whether their application ever reaches human eyes.

“The use of AI tools for hiring procedures is already widespread, and it’s proliferating faster than we can regulate it,” says lead author Kyra Wilson from the University of Washington, in a statement.

Unlike intentional discrimination by human recruiters, algorithmic bias operates at scale and often invisibly. A biased human might discriminate against a few candidates, but a biased AI system processes thousands of applications with the same skewed logic, amplifying its impact exponentially.

Can we fix AI bias in hiring?

Some companies are experimenting with bias mitigation techniques, such as removing demographic signals from resumes or adjusting algorithms to ensure more equitable outcomes. However, these approaches often face technical challenges and may not address the root causes of bias embedded in training data.

“Now that generative AI systems are widely available, almost anyone can use these models for critical tasks that affect their own and other people’s lives, such as hiring,” says study author Aylin Caliskan from the University of Washington. “Small companies could attempt to use these systems to make their hiring processes more efficient, for example, but it comes with great risks. The public needs to understand that these systems are biased.”

Current legal frameworks struggle to keep pace with algorithmic decision-making, leaving both job seekers and employers in uncharted territory. The researchers call for comprehensive auditing of resume screening systems, whether proprietary or open-source, arguing that transparency about how these systems work—and how they fail—is essential for identifying and addressing bias.

Of course, it’s important to remember that this research was presented in October 2024. While it’s still relatively new, LLMs are being updated quite often. Current versions of the systems tested may yield different results if they’ve since been updated.

In trying to remove human prejudice from hiring, we’ve accidentally created something worse: prejudice at machine speed. We’re letting AI make decisions about people’s livelihoods without adequate oversight. Until we acknowledge that algorithms inherit human prejudices, millions of qualified workers will keep losing out to systems that judge them by their names, not their abilities.

Paper Summary

Methodology

The researchers conducted an extensive audit of AI bias in resume screening using a document retrieval framework. They tested three high-performing Massive Text Embedding (MTE) models on 554 real resumes and 571 job descriptions spanning nine occupations. To measure bias, they augmented resumes with 120 carefully selected names associated with Black males, Black females, White males, and White females based on previous linguistic research. Using over three million comparisons, they calculated cosine similarity scores between resumes and job descriptions, then used statistical tests to determine if certain demographic groups were consistently favored. They also tested how factors like name frequency and resume length affected bias outcomes.
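As a mechanical illustration of that audit design (not the paper's actual code), the head-to-head tallying can be sketched as below. The `score` function here is a hypothetical word-overlap stand-in for the cosine-similarity scoring an embedding model would provide:

```python
from itertools import product

def score(resume_text, job_text):
    # Hypothetical toy scorer: fraction of job-description words that also
    # appear in the resume. A real audit would embed both texts with an MTE
    # model and take cosine similarity instead.
    shared = set(resume_text.lower().split()) & set(job_text.lower().split())
    return len(shared) / max(len(job_text.split()), 1)

def preference_rate(resumes, job, names_a, names_b):
    """Fraction of decisive head-to-head comparisons in which a group-A name
    scores strictly higher than a group-B name on the same resume."""
    wins = total = 0
    for resume, (name_a, name_b) in product(resumes, zip(names_a, names_b)):
        score_a = score(f"{name_a} {resume}", job)
        score_b = score(f"{name_b} {resume}", job)
        if score_a != score_b:
            total += 1
            wins += score_a > score_b
    return wins / total if total else 0.0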

Results

The study found significant bias across all three AI models. White-associated names were preferred in 85.1% of tests, while Black names were favored in only 8.6% of cases. Male names were preferred over female names in 51.9% of tests, compared to female preference in just 11.1%. Intersectional analysis revealed Black males faced the greatest disadvantage, being preferred over White males in 0% of comparisons. The researchers validated three hypotheses about intersectionality and found that shorter resumes and varying name frequencies significantly impacted bias measurements.

Limitations

The study relied on publicly available resume datasets that may not perfectly represent real-world job applications. Resumes were truncated for computational feasibility, potentially affecting results. The researchers used an external tool for occupation classification, which may be less accurate than manual coding. The study focused only on two racial groups (Black and White) and binary gender categories, limiting insights about other demographic groups. Additionally, the models tested were open-source versions that may differ from proprietary systems actually used by companies.

Funding and Disclosures

This research was supported by the U.S. National Institute of Standards and Technology (NIST) Grant 60NANB23D194. The authors note that the opinions and findings expressed are their own and do not necessarily reflect those of NIST. No competing interests or additional funding sources were disclosed in the paper.

Publication Information

This research was conducted by Kyra Wilson and Aylin Caliskan from the University of Washington in 2024. The paper “Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval” was presented in the Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES 2024), 1578-1590. Association for the Advancement of Artificial Intelligence.


TOPICS: Business/Economy; Computers/Internet; Conspiracy; Military/Veterans
KEYWORDS: 1619project; blackkk; blackliesmanors; blackliesmatter; blacklivesmatter; blm; criticalracetheory; crt; donate2freerepublic; stupidmadeupnames
To: szweig

yup, that pins the meter on names associated with blacks.


61 posted on 05/19/2025 2:16:39 PM PDT by Myrddin

To: HamiltonJay

LLMs work on many, many levels almost all of which we don’t understand right now. Unfortunately, the study is not garbage. Notice that male black names were preferred 0% of the time to white names. This reflects more than just superficial prejudice on the part of the AI.


62 posted on 05/19/2025 3:36:08 PM PDT by Breitbart was right

To: szweig; dfwgator
Someone put in some work to make that list. You're hired! (LOL)

First impressions I got from a few of them:

Aaquan - waterboy, Adewale - Adele's still obese brother, Alando - son of Cloud City's leader, Alphe - still eats cats, Anqoinette - Ms. Piggy's first name, Chanel - the fifth child?, Chykie - destined for Hooter's, Cleotis - backwoods woodpile, Craphonso - someone wanted a plumber, Damien - good luck with that priesthood, DeiVon - get the tables!, Dekoda - North or South?, Deliazard - herpatologist, Deltha - fly me, DiBrickashaw - Oriental mason, Dont(anything) - dentist office, Dwyane - our friend from Post #33 gets around, Eboni - from the magazine in the waiting room, Foswhitt - pretty sure that's a Charles Dickens, Frostee - he be chillin', Ikeam - hare-lipped child wants frozen treat, Jacquizz - father frequent sperm donor, It's a me: Jamario!, Janoris - Chuck's out-of-wedlock, Jayden - father should have slapped himself, Jermale - cartoon foe of Tomale, Jermil - knows Richard Gere, Kalvin - last name Cline, Keiyanda - from Rwanda, Kiwaukee - guess the state, Knowshon - if you hum a few bars, Laquarry - rock on, Loletha - flirty girl with a lisp, Luster - hopefully not dull, Mister - demands respect, Morkeith - Sauron's little brother, Nafloyd - very mean parents, Nurdeen - future fat lesbian, Quanis - flights to Australia, Ramzee - Moses' high-falutin' foster bro, Ronnet - Phil Spector's girl, Sar-ron - bling fixation, Santonio - another guess the state, Shamann - bad voodoo, Shelvin - Sheldon's kid, Snorice - ZZZZzzzz, Sycloria - "Ask your doctor for...", Taiwan - China's watching you, Tarockus - future musician, Toddrick - the butler, Tranee - call the cops now, Tyreek - stay upwind, Unique - you hope, Zi'Aire and Zyair - don't take names from maps.

63 posted on 05/19/2025 4:09:13 PM PDT by MikelTackNailer (Mistake was dad's nickname for me. Paste is good on graham crackers.)

To: wardaddy

Garbage in, garbage out. So many think AI is smart.

If you tell it 2+2=5 so will it. Define an elephant with a picture of a giraffe and that’s what you’ll get.

Feed it all the NYT, WaPo, AP, etc. articles and its answers will be biased.

Simple.


64 posted on 05/19/2025 4:12:59 PM PDT by Fledermaus ("It turns out all we really needed was a new President!")

To: Red Badger

I had Brave AI tell me that baking soda was an acid. I’m just saying.


65 posted on 05/19/2025 4:23:53 PM PDT by Overtaxed

To: Red Badger

This is very confusing to me: how does the AI know it should even take the names into consideration? How does AI get any impression at all of the implications of names, school/university names, etc? There has to be something going on here.

Also, this means everyone is tailoring their resumes to suit machines? The whole business of job-hunting has become a horrible torture test.


66 posted on 05/19/2025 6:21:39 PM PDT by Chicory

To: Red Badger
Giving your child a "Scrabble name" is giving them their first shove down the road to ruin.


67 posted on 05/19/2025 7:17:18 PM PDT by Paal Gulli

To: TLI
There was an article I read in college, "Bible Belt Onomastics and Antepedal babtism". I think the author's last name was Pyles. It provided examples of what could happen to a child if naming were entirely left to the parents' whims.

Uarco is the name of a forms company Chicago used to register births.

68 posted on 05/19/2025 9:30:21 PM PDT by Pete from Shawnee Mission

To: Red Badger

The ai picked blacks more often than their population demographic percentage. Sounds like the ai likes blacks more.


69 posted on 05/19/2025 9:35:47 PM PDT by Secret Agent Man (Gone Galt; not averse to Going Bronson.)

To: szweig

LaRonda
LaMonte


70 posted on 05/19/2025 9:38:57 PM PDT by Secret Agent Man (Gone Galt; not averse to Going Bronson.)

To: Fledermaus

They ain’t having it be fed truth


71 posted on 05/19/2025 11:30:18 PM PDT by wardaddy (The Blob must be bled dry)

To: HamiltonJay; Fledermaus

It won’t be fed empirical truth
Too scary that it will then ape the unpleasantries we all observe but only whisper

Well not me obviously I don’t whisper it

Look at google AI and how it slants PC

The first AI I saw was too honest

As in hate to pick on blacks but it would say yeah they are more violent

Ask it now you get a lecture on oppression and poverty

AI has its place particularly in engineering and stem period

Social conclusions prolly not so much lol


72 posted on 05/19/2025 11:36:07 PM PDT by wardaddy (The Blob must be bled dry)

To: I want the USA back

For some reason the recruiters I get emails from are named Rohit, Amit, Puneet, and Rajnish.


73 posted on 05/20/2025 4:10:10 AM PDT by rxh4n1



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson