Free Republic
Browse · Search
General/Chat
Topics · Post Article


AI Picks White Names Over Black In 85% Of Hiring Scenarios
Study Finds ^ | May 19, 2025 | Research led by Kyra Wilson, University of Washington

Posted on 05/19/2025 9:12:29 AM PDT by Red Badger

In a nutshell

AI resume screening tools showed strong racial and gender bias, with White-associated names preferred in 85.1% of tests and Black male names favored in 0% of comparisons against White males.

Bias increased when resumes were shorter, suggesting that when there’s less information, demographic signals like names carry even more weight.

Removing names isn’t enough to fix the problem, as subtle clues—like word choice or school name—can still reveal identity, allowing AI systems to continue filtering out diverse candidates.

=================================================================

SEATTLE — Every day, millions of Americans send their resumes into what feels like a digital black hole, wondering why they never hear back. Artificial intelligence is supposed to be the great equalizer when it comes to eliminating hiring bias. However, researchers from the University of Washington analyzing AI-powered resume screening found that having a Black-sounding name could torpedo your chances before you even make it to the interview stage.

A study presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in October 2024 revealed just how deep this digital discrimination runs. The researchers tested three state-of-the-art AI models on over 500 resumes and job descriptions across nine different occupations. They found that resumes with White-associated names were preferred in a staggering 85.1% of cases, while those with female-associated names received preference in just 11.1% of tests.

The study found that Black male job seekers face the steepest disadvantage of all. In comparisons with every other demographic group—White men, White women, and Black women—resumes with Black male names were favored in exactly 0% of cases against White male names and only 14.8% against Black female names.

These aren’t obscure academic models gathering dust on university servers. The three systems tested—E5-mistral-7b-instruct, GritLM-7B, and SFR-Embedding-Mistral—were among the highest-performing open-source AI tools available for text analysis at the time of the study. Companies are already using similar technology to sift through the millions of resumes they receive annually, making this research particularly urgent for working Americans.

How the Bias Shows Up

These AI resume screening models convert resumes and job descriptions into numerical representations, then measure how closely they match using something called “cosine similarity,” essentially scoring how well a resume aligns with what the job posting is looking for.
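That scoring step can be illustrated with a minimal sketch. The toy three-dimensional vectors below stand in for real model embeddings, which have thousands of dimensions; the function itself is the standard cosine-similarity formula, not code from the study:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how closely two embedding vectors align (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for real resume and job-description embeddings.
resume_vec = np.array([0.8, 0.1, 0.3])
job_vec = np.array([0.7, 0.2, 0.4])

score = cosine_similarity(resume_vec, job_vec)  # higher = closer match
```

A screening pipeline would rank all incoming resumes by this score against the job posting and pass only the top slice to human recruiters.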

Researchers augmented real resumes with 120 carefully selected names that linguistic studies have shown are strongly associated with specific racial and gender groups: Kenya and Latisha for Black women, Jackson and Demetrius for Black men, May and Kristine for White women, and John and Spencer for White men.

When they ran more than three million comparisons between these name-augmented resumes and job descriptions, clear patterns emerged. White-associated names consistently scored higher similarity ratings, meaning they would be more likely to make it past initial AI screening to reach human recruiters.

Intersectional analysis, looking at how race and gender combine, revealed even more drastic disparities. Black men faced discrimination across virtually every occupation tested, from marketing managers to engineers to teachers. Meanwhile, the smallest gaps appeared between White men and White women, suggesting that racial bias often outweighs gender bias in these AI systems.

Critics might argue that removing names from resumes could solve this problem, but it’s not that simple. Real resumes contain numerous other signals of demographic identity, from university names and locations to word choices and even leadership roles in identity-based organizations.

Previous research has shown that women tend to use words like “cared” or “volunteered” more frequently in resumes, while men more often use terms like “repaired” or “competed.” AI systems can pick up on these subtle linguistic patterns, potentially perpetuating bias even without explicit demographic markers.

When researchers tested “title-only” resumes, containing just a name and job title, bias actually increased compared to full-length resumes. This suggests that in early-stage screening, where less information is available, demographic signals carry disproportionate weight.

[Image: An AI robot hiring manager shaking hands with a candidate]

AI-powered resume screening is rapidly becoming the norm. According to industry estimates, 99% of Fortune 500 companies already use some form of AI assistance in hiring decisions. For job seekers in competitive markets, this means that algorithmic bias could determine whether their application ever reaches human eyes.

“The use of AI tools for hiring procedures is already widespread, and it’s proliferating faster than we can regulate it,” says lead author Kyra Wilson from the University of Washington, in a statement.

Unlike intentional discrimination by human recruiters, algorithmic bias operates at scale and often invisibly. A biased human might discriminate against a few candidates, but a biased AI system processes thousands of applications with the same skewed logic, amplifying its impact exponentially.

Can we fix AI bias in hiring?

Some companies are experimenting with bias mitigation techniques, such as removing demographic signals from resumes or adjusting algorithms to ensure more equitable outcomes. However, these approaches often face technical challenges and may not address the root causes of bias embedded in training data.

“Now that generative AI systems are widely available, almost anyone can use these models for critical tasks that affect their own and other people’s lives, such as hiring,” says study author Aylin Caliskan from the University of Washington. “Small companies could attempt to use these systems to make their hiring processes more efficient, for example, but it comes with great risks. The public needs to understand that these systems are biased.”

Current legal frameworks struggle to keep pace with algorithmic decision-making, leaving both job seekers and employers in uncharted territory. The researchers call for comprehensive auditing of resume screening systems, whether proprietary or open-source, arguing that transparency about how these systems work—and how they fail—is essential for identifying and addressing bias.

Of course, it’s important to remember that this research was presented in October 2024. While it’s still relatively new, LLMs are being updated quite often. Current versions of the systems tested may yield different results if they’ve since been updated.

In trying to remove human prejudice from hiring, we’ve accidentally created something worse: prejudice at machine speed. We’re letting AI make decisions about people’s livelihoods without adequate oversight. Until we acknowledge that algorithms inherit human prejudices, millions of qualified workers will keep losing out to systems that judge them by their names, not their abilities.

Paper Summary

Methodology

The researchers conducted an extensive audit of AI bias in resume screening using a document retrieval framework. They tested three high-performing Massive Text Embedding (MTE) models on 554 real resumes and 571 job descriptions spanning nine occupations. To measure bias, they augmented resumes with 120 carefully selected names associated with Black males, Black females, White males, and White females based on previous linguistic research. Using over three million comparisons, they calculated cosine similarity scores between resumes and job descriptions, then used statistical tests to determine if certain demographic groups were consistently favored. They also tested how factors like name frequency and resume length affected bias outcomes.
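The pairwise-comparison logic at the heart of this audit can be sketched as follows. The `embed` function here is a hypothetical stand-in for the actual MTE models (it just hashes text into a deterministic vector), and the names are examples drawn from the article; only the overall shape of the procedure reflects the study:

```python
import hashlib

import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a Massive Text Embedding model:
    deterministically hashes text into a fixed-length vector."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).random(8)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

job = embed("Marketing manager: plan campaigns, manage budgets")
resume_body = "10 years of marketing experience, led a team of five"

# Augment the SAME resume with names from two groups and count how often
# group A's version outscores group B's against the job description.
names_a = ["John", "Spencer"]       # White male-associated (example names)
names_b = ["Jackson", "Demetrius"]  # Black male-associated (example names)

wins_a = sum(
    similarity(embed(f"{na}\n{resume_body}"), job)
    > similarity(embed(f"{nb}\n{resume_body}"), job)
    for na in names_a
    for nb in names_b
)
preference_rate = wins_a / (len(names_a) * len(names_b))
```

Scaled up to 120 names, 554 resumes, and 571 job descriptions, this pairwise design is what produces the millions of comparisons reported in the study; since the resume text is identical apart from the name, any systematic difference in preference rates is attributable to the name alone.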

Results

The study found significant bias across all three AI models. White-associated names were preferred in 85.1% of tests, while Black names were favored in only 8.6% of cases. Male names were preferred over female names in 51.9% of tests, compared to female preference in just 11.1%. Intersectional analysis revealed Black males faced the greatest disadvantage, being preferred over White males in 0% of comparisons. The researchers validated three hypotheses about intersectionality and found that shorter resumes and varying name frequencies significantly impacted bias measurements.

Limitations

The study relied on publicly available resume datasets that may not perfectly represent real-world job applications. Resumes were truncated for computational feasibility, potentially affecting results. The researchers used an external tool for occupation classification, which may be less accurate than manual coding. The study focused only on two racial groups (Black and White) and binary gender categories, limiting insights about other demographic groups. Additionally, the models tested were open-source versions that may differ from proprietary systems actually used by companies.

Funding and Disclosures

This research was supported by the U.S. National Institute of Standards and Technology (NIST) Grant 60NANB23D194. The authors note that the opinions and findings expressed are their own and do not necessarily reflect those of NIST. No competing interests or additional funding sources were disclosed in the paper.

Publication Information

This research was conducted by Kyra Wilson and Aylin Caliskan from the University of Washington in 2024. The paper “Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval” was presented in the Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES 2024), 1578-1590. Association for the Advancement of Artificial Intelligence.


TOPICS: Business/Economy; Computers/Internet; Conspiracy; Military/Veterans
KEYWORDS: 1619project; blackkk; blackliesmanors; blackliesmatter; blacklivesmatter; blm; criticalracetheory; crt; donate2freerepublic; stupidmadeupnames
Navigation: use the links below to view more comments.
first previous 1-20 · 21-40 · 41-60 · 61-73 next last
To: Red Badger

Racist AI robots. Who programs that crap?


21 posted on 05/19/2025 9:39:21 AM PDT by FlingWingFlyer (Where can Americans go to seek justice now that the RATS own the judiciary?)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

Obviously, AI is not being used for the casting of TV commercials.


22 posted on 05/19/2025 9:42:58 AM PDT by Restless
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger
AI be racist!.........................

Back in the 80’s, I created a ‘weighted application blank’ to hire Customer Service workers for a utility company, a very high turnover position. Based on statistical analysis of the existing staff, the hiring of new employees was based on two variables – worked in your previous job for two years, lived at current residence for two years. Worked great, turnover dropped, but we had to drop it – it filtered out 80% of Black applicants.

23 posted on 05/19/2025 9:43:06 AM PDT by FatherofFive (we mutually pledge to each other our lives, our fortunes, and our sacred honor)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

If you have an obviously black name, employers are more likely to realize that the applicant may have the type of cultural upbringing that makes it likely that if you hire that person and he or she doesn’t work out, that you are not hiring an employee, but are hiring a discrimination lawsuit.


24 posted on 05/19/2025 9:43:07 AM PDT by P-Marlowe (Do the math. L+G+B+T+Q = 666)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Pete from Shawnee Mission

Agreed. Names are made up to be “French” sounding and anything goes. Like “LaDesmendia”. Why Linda, Robert, Rachel, Jason etc are unacceptable I’ll never know.


25 posted on 05/19/2025 9:46:17 AM PDT by albie (U)
[ Post Reply | Private Reply | To 6 | View Replies]

To: Red Badger
Interestingly, I have two separate conversations running with the Perplexity AI about Biden's health and the media coverage of it.

In conversation A, I began with a comparison of the media coverage of the various court injunctions against President Trump and how the media treats Trump vs. Biden in the news. Then I brought up Biden's cognitive decline and the media coverage and current revelations. Then I brought up the prostate cancer diagnosis.

In conversation B I went straight into the news of Biden's prostate cancer then discussed the media coverage of it and Biden's past health issues.

In conversation A, the AI agrees that the Biden team and the media cover up has damaged credibility with the people and is willing to accept the theory that Biden might have had an earlier diagnosis before it metastasised and his team covered it up. It accepts the idea that July 2024 trip to Las Vegas that ended with an emergency flight back to Delaware could have been in reaction to complications from early cancer.

In conversation B, the AI absolutely rejects the idea that there was an earlier cancer diagnosis that Biden's team covered up, the Las Vegas incident was following proper protocols and no cover up for an emergency took place, and that Biden's cancer is recent and not the result of negligent care or White House cover up conspiracies.

Conversationally, AI is very sensitive to the predicate questions that establish the frame of the discussion as demonstrated above. I would assume that AI training to review and filter resumes is equally sensitive when producing results.

-PJ

26 posted on 05/19/2025 9:49:27 AM PDT by Political Junkie Too ( * LAAP = Left-wing Activist Agitprop Press (formerly known as the MSM))
[ Post Reply | Private Reply | To 1 | View Replies]

To: Responsibility2nd
Don’t name your kid L’Marlius for a start.

Or Quantavious, Shini'qua, or Jermajesty (Jermaine Jackson's kid).

27 posted on 05/19/2025 9:51:04 AM PDT by Lizavetta
[ Post Reply | Private Reply | To 8 | View Replies]

To: Red Badger

AI “learns” by reading reams of input. It must have been “trained” on thousands of resumes. It’s just possible that those resumes that had “White” names also belonged to genuinely talented individuals, and that those that had “black” names were from affirmative action people who hadn’t really achieved very much.
Just possible.


28 posted on 05/19/2025 9:51:44 AM PDT by I want the USA back (America is once again GREAT! )
[ Post Reply | Private Reply | To 1 | View Replies]

To: Responsibility2nd

White kid names are equally stupid as of late.


29 posted on 05/19/2025 9:56:34 AM PDT by TheThirdRuffian (Orange is the new brown)
[ Post Reply | Private Reply | To 8 | View Replies]

To: Red Badger

Are we talking about David Whitaker against Duane Washington? Or against D’Marcus Waleed? And how does Dimitriy Verzbitsky fare?


30 posted on 05/19/2025 10:00:54 AM PDT by heartwood (Please blame all ridiculous or iinappropriate words on autocorrect. Thank you. )
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

Reminds me of the Chickenman radio series. “It’s Chickenman!...He’s everywhere...he’s everywhere!”

That’s what freaking racism is. It’s everywhere It’s everywhere.


31 posted on 05/19/2025 10:03:02 AM PDT by Deepeasttx ( Sensitivity/diversity training are all un-walled reeducation camps....for now.. DEI gone. Yippee)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

I’m certain that it would choose Tom, Dick, or Harry over Vercingetorix, Boudicca, and Pradaxa. The first three are common names; the others are a Gaulish king, a Celtic queen, and a prescription drug.


32 posted on 05/19/2025 10:03:15 AM PDT by jmcenanly (You have enemies? Good. That means you've stood up for something, sometime in your life.” ― Winston)
[ Post Reply | Private Reply | To 1 | View Replies]

To: heartwood

Or Dwyane Wade.

I mean the parents couldn’t even spell “Dwayne” right.


33 posted on 05/19/2025 10:04:18 AM PDT by dfwgator (Endut! Hoch Hech!)
[ Post Reply | Private Reply | To 30 | View Replies]

To: Red Badger

How about our Supreme Court certified dummy Ketanji Brown Jackson? If they crave African names, go live in Africa.


34 posted on 05/19/2025 10:06:33 AM PDT by dennisw (💯🇺🇸 Truth is Hate to those who Hate the Truth. 🇺🇸💯)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

This study is a classic example of Garbage In, Garbage Out.

Perhaps I didn’t go deep enough, but I couldn’t find a link that showed the list of names tested. Only a few examples were given. But the linked story said the researchers used 120 “carefully selected” names with strong racial associations.

120 names is not many, and a huge selection bias is built in at the start. I suspect that almost the entire result arises from downward scoring for names that scream out ghetto or thug culture. Yes, names can help form first impressions, and first impressions matter.

Other subtle discriminatory factors were noted, including college names and word choices. Yes, even today, AI will assess MIT as better than some 5th tier “college” with zero academic standards and a miserable track record for grads. And fluency in the language still makes a difference. Embedded racism remains pervasive.

Researchers will probably next discover that young black men who show up for interviews at investment banks and law firms are dinged if they’re wearing loose pants belted below the waist to show off their underwear, and ditto for young women who show up dressed like strippers. Bigotry everywhere.

Obviously all job applicants should just be given a number and hiring — or at least selection for actual interviews — should be based on a random lottery.

There is an obvious collateral benefit to this: the complete elimination of HR departments.

Personally, I would test this before turning it into a regulation. Perhaps we could run the test at the University of Washington, where this study was performed. Require all faculty, including tenured faculty, to reapply for their jobs, competing against all applicants in a lottery system. Because that’s the only way to be fair.


35 posted on 05/19/2025 10:06:46 AM PDT by sphinx
[ Post Reply | Private Reply | To 1 | View Replies]

To: MayflowerMadam

https://youtu.be/XcEHqoWTcTg?si=wDYqcj8ezr9TY9Po


36 posted on 05/19/2025 10:08:48 AM PDT by Menehune56 ("Let them hate so long as they fear" (Oderint Dum Metuant), Lucius Accius (170 BC - 86 BC)
[ Post Reply | Private Reply | To 17 | View Replies]

To: PeterPrinciple
But it does give answers which most people are content with. People want simple answers to complex situations.

How do we change that?

We teach AI to give simple titles to complete, complex answers.

Sorta like legislators give simple, attractive titles to 1,000 page legislative tyranny bills...

37 posted on 05/19/2025 10:09:57 AM PDT by null and void (Democrats: fake news, fake presidents, fake beliefs, fake policies, fake protesters & fake voters!)
[ Post Reply | Private Reply | To 7 | View Replies]

To: Red Badger

AI is here just following best practices for HR managers. In general, hiring Blacks is a recipe for office strife and potential costly litigation.


38 posted on 05/19/2025 10:11:05 AM PDT by montag813
[ Post Reply | Private Reply | To 1 | View Replies]

To: Menehune56

Yep. That’s the one! 😁


39 posted on 05/19/2025 10:12:38 AM PDT by MayflowerMadam (It's hard not to celebrate the fall of bad people. - Bongino)
[ Post Reply | Private Reply | To 36 | View Replies]

To: Pete from Shawnee Mission

Or it could “think” that the person is not smart enough to spell their own name right and thus not be worthy of hiring.


40 posted on 05/19/2025 10:14:21 AM PDT by falcon99 ( )
[ Post Reply | Private Reply | To 6 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson