AI May Not Steal Your Job, but It Could Stop You Getting Hired

If you’ve worried that candidate-screening algorithms could be standing between you and your dream job, reading Hilke Schellmann’s The Algorithm won’t ease your mind. The investigative reporter and NYU journalism professor’s new book demystifies how HR departments use automation software that not only propagates bias but also fails at the very thing it claims to do: find the best candidate for the job.

Schellmann posed as a prospective job hunter to test some of this software, which ranges from résumé screeners and video-game-based tests to personality assessments that analyze facial expressions, vocal intonations, and social media behavior. One tool rated her as a high match for a job even though she spoke nonsense to it in German. A personality assessment algorithm gave her high marks for “steadiness” based on her Twitter use and a low rating based on her LinkedIn profile.

It’s enough to make you want to delete your LinkedIn account and embrace homesteading, but Schellmann has uplifting insights too. In an interview that has been edited for length and clarity, she suggested how society could rein in biased HR technology and offered practical tips for job seekers on how to beat the bots.

Caitlin Harrington: You’ve reported on the use of AI in hiring for The Wall Street Journal, MIT Technology Review, and The Guardian over the past several years. At what point did you think, I’ve got a book here?

Hilke Schellmann: One moment was when I went to one of the first HR tech conferences in 2018 and encountered AI tools entering the market. There were like 10,000 people, hundreds of vendors, a lot of buyers and big companies. I realized this was a gigantic market, and it was taking over HR.

Harrington: Software companies often present their products as a way to remove human bias from hiring. But of course AI can absorb and reproduce the bias of the training data it ingests. You discovered one résumé screener that adjusted a candidate’s scores when it detected the phrase “African American” on their résumé.

Schellmann: Of course companies will say their tools don’t have bias, but how have they been tested? Has anyone looked into this who doesn’t work at the company? One company’s manual stated that their hiring AI was trained on data from 18- to 25-year-old college students. They might have just found something very specific to 18- to 25-year-olds that’s not applicable to other workers the tool was used on.

There’s only so much damage a human hiring manager can do, and obviously we should try to prevent that. But an algorithm that is used to score hundreds of thousands of workers, if it is faulty, can damage so many more people than any one human.

Now obviously, the vendors don’t want people to look into the black boxes. But I think employers also shy away from looking because then they have plausible deniability. If they find any problems, there might be 500,000 people who have applied for a job and might have a claim. That’s why we need to mandate more transparency and testing.

Harrington: Right, because they could be violating employment law. Even when vendors do conduct bias audits, you write that they don’t typically include disability discrimination. Is there any clarity around where the responsibility lies when AI discriminates?

Schellmann: It’s an open question because we haven’t seen litigation. A lot of lawyers say that the company that does the hiring is ultimately responsible, because that company makes the hiring decision. Vendors certainly always say, “We don’t make the decision. The companies make the decision. The AI would never reject anyone.”

That may be right in some cases, but I found out that some vendors do use automatic rejection cutoffs for people who score under a certain level. There was an email exchange between a vendor and a school district that stipulated that people who scored under 33 percent on an AI-based assessment would get rejected.

I think all employers hope that these tools will find the most successful candidates, but we do not have a lot of proof that they do. We have seen employers save a lot of money on labor. I think that’s often all they want.

Harrington: I thought, maybe naively, that people are becoming more aware that AI can’t be blindly trusted and often needs human intervention. But for many companies, recognizing that would almost negate the point of automated HR software, which is to save time.

Schellmann: A lot of vendors use deep neural networks to build this AI software, so they often don’t know exactly what the tool is basing its predictions on. If a judge asked them why they rejected someone, a lot of companies probably could not answer. That’s a problem in high-stakes decision-making. We should be able to fact-check the tool.

Harrington: If you’re training algorithms on currently successful employees, and there’s bias ingrained into past hiring decisions, it seems like a recipe for perpetuating that bias.

Schellmann: I’ve heard from a couple of whistleblowers who found exactly that. In one case, a résumé screener was trained on the résumés of people who had worked at the company. It looked at statistical patterns and found that people who had the words “baseball” and “basketball” on their résumé were successful, so they got a couple of extra points. And people who had the word “softball” on their résumé were downgraded. And obviously, in the US, people with “baseball” on their résumé are usually men, and folks who put “softball” are usually women.
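
To make that mechanism concrete, here is a minimal Python sketch using scikit-learn and synthetic data. It is hypothetical, not the system the whistleblower described: it only illustrates how a model trained to imitate past hiring decisions can learn to reward “baseball” and penalize “softball” without anyone programming that in.

```python
# Minimal sketch, with synthetic data, of how a resume screener trained to imitate
# past hiring decisions picks up proxy keywords. Not any vendor's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" resumes: past hires skew male, so hobby words become a gender proxy.
resumes = [
    "software engineer python baseball team captain",   # hired
    "data analyst sql basketball league",               # hired
    "project manager agile baseball coach",             # hired
    "software engineer python softball team captain",   # rejected
    "data analyst sql softball league",                 # rejected
    "project manager agile softball coach",             # rejected
]
hired = [1, 1, 1, 0, 0, 0]  # the biased past decisions the model is trained to reproduce

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned keyword weights: "baseball" and "basketball" come out positive,
# "softball" negative, even though none of these words measure job skill.
for word, weight in sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                           key=lambda pair: pair[1]):
    print(f"{word:12s} {weight:+.3f}")
```

None of the hobby words measure job skill; the model infers their weights purely from who happened to be hired before, which is how bias in past decisions gets laundered into a score.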

Harrington: I was aware of some of the uses of AI in hiring, but I was less aware of their use on people who already had jobs, to determine layoffs and promotions. What concerned you there?

Schellmann: One company used key tag data, from when people swipe in at an office, to understand who were the most productive workers. Obviously, that’s a flawed metric because the number of hours you spend at the office doesn’t reveal how productive you are. But they looked at this data for promotions, and when the company had to do layoffs during the pandemic, they looked at the data to determine who was less successful. So I think once the data is available, it becomes really enticing for companies to use it.

Harrington: You compare some of these game-based personality assessments and facial expression analysis to ancient pseudosciences like physiognomy and phrenology. What made that connection in your mind?

Schellmann: I think it started when the “Gaydar” study came out, which tried to find the essence of how people who identify as gay or straight look, using an algorithm and photos from a dating app. There’s still this deep belief that outward signals, facial expressions, the way our body moves, the intonation of our voices, carry the essence of ourselves. We have seen this in the past with facial analysis in the 19th century claiming that criminals had different faces than non-criminals, or with handwriting analysis. There was really no science behind it, but the allure is there. Now we have the technology to quantify facial expressions and other outward signals, but the problem is that some companies attribute meaning to that when there really isn’t meaning.

A computer can quantify a smile, but the computer doesn’t know if I’m happy underneath it. So the essence doesn’t reveal itself. We see this in AI social media analysis tools. This idea of, If I look at your Twitter, I will find your unvarnished personality.

Harrington: It’s almost enough to make a job seeker feel like the decisions being made about their future are arbitrary and outside their control. What can a person do to regain some agency?

Schellmann: Generative AI has actually given some power back to job applicants. I’ve seen a lot of people on social media talk about how they used ChatGPT to build a better résumé and write cover letters. Some applicants use it to train themselves to answer interview questions. It’s pitting AI against AI.

There are online résumé screeners where you can upload a job description and your résumé, and the computer tabulates how much overlap there is. You don’t want to use 100 percent of the keywords, because your résumé would probably get flagged as a copy of the job description; you want to be in the 60 to 80 percent range. The old mantra was “You need to stand out,” but that’s not applicable anymore, because in all likelihood your résumé will be screened by AI, which is more error-prone than most people think. So stick to a single column, with clearly labeled sections like “Work Experience” and “Skills” so the computer can read them.
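
The overlap those online screeners report is easy to approximate yourself. Below is a rough Python sketch of that kind of check; the tokenizer, stopword list, and sample texts are simplified assumptions of mine, not how any particular screening product works.

```python
# Rough sketch of the keyword-overlap check Schellmann describes; the tokenizer,
# stopword list, and sample texts are my own simplifications, not a real product.
import re

STOPWORDS = {"and", "or", "the", "a", "an", "to", "of", "in", "with", "for", "on"}

def keywords(text: str) -> set[str]:
    """Lowercase, split on non-letters, and drop a few common stopwords."""
    return {w for w in re.split(r"[^a-z]+", text.lower()) if w and w not in STOPWORDS}

def overlap_percent(resume: str, job_description: str) -> float:
    """Share of job-description keywords that also appear in the resume, as a percentage."""
    jd_terms = keywords(job_description)
    if not jd_terms:
        return 0.0
    return 100 * len(jd_terms & keywords(resume)) / len(jd_terms)

job_ad = "Data analyst with SQL, Python, dashboards and stakeholder reporting experience"
resume = "Analyst role: built SQL pipelines and Python dashboards, owned stakeholder reporting"
print(f"Keyword overlap: {overlap_percent(resume, job_ad):.0f}%")  # aim for roughly 60-80 percent
```

With the sample texts above it prints an overlap of 75 percent, inside the 60 to 80 percent range Schellmann recommends.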

Another thing that might help you get noticed is contacting recruiters on LinkedIn after you send in an application. I also learned from talking to a lot of recruiters that you may want to consider sending your application directly through the company’s website, because that’s where recruiters look first. Then they go to job platforms like LinkedIn or Indeed.

Harrington: These are good tips. I feel empowered again.

Schellmann: And I do think that change is possible. I think there needs to be a nonprofit that starts testing AI hiring tools and makes the results publicly available; I actually got some seed funding. I think a lot of universities understand that technology and society are rapidly changing, and we need to bring together people who work on societal problems and people who work on technical problems. But I also think there’s something special that journalists can bring, because we have our ears to the ground. We can use data journalism to help find authoritative answers to these larger questions.

Harrington: What role do you think the government should play in regulating these tools?

Schellmann: Some experts say we should have a government licensing agency that tests predictive tools when they’re used to make high-stakes decisions. I’m not sure if governments have the capability to do this, because it would be an enormous undertaking. I do hope that government will force more transparency and open up access to data, which would allow researchers, scientists, and journalists to swoop in and do the testing ourselves. That would be a huge first step.
