Is AI ready to interview applicants? And more importantly, is it the right thing to do?
You can barely go a week in HR these days without seeing an AI-related headline or news about some new AI-assisted hiring tool. It’s an exciting time for HR tech, but the hype cycle is at its peak and it’s important for us leaders to have our wits about us.
The latest in the long line of incoming AI-related hiring trends is AI interviewers: AI-powered avatars that conduct interviews with applicants in place of humans. In practice, it’s a great way for hiring teams to save time, but in principle? Something feels off.
Is the tech ready?
There are two types of AI interviews: AI-assisted and AI-led.
1. AI-assisted AVIs and in-person interviews:
In AI-assisted asynchronous video interviews (AVIs), recorded responses are reviewed by humans but also analyzed by AI algorithms. For example, AI studies candidates’ responses, facial expressions, tone of voice, and more, and helps to identify behavioural traits such as confidence and communication skills.
That said, AI assistance isn’t limited to AVIs. Employers also use AI note-takers during two-way virtual or in-person interviews. These help transcribe conversations, summarize key points, and outline action items that emerge from interviews. Although AI tools assist these interviews, selection decisions remain with recruiters and the hiring team.
2. AI-led AVIs:
In AI-led video interviews, AI takes a more central role by conducting the interview and making selection decisions. These specialized AI tools ask questions, probe with follow-up questions, provide a comprehensive analysis of candidates’ responses, expressions, tone, and body language, and suggest which candidates to progress or reject in the application process.
While the AI tool is usually pre-fed with information about the role, desired skill set, and more, this type of interview occurs without human intervention.
The problem with AI-led interviews is that AI simply isn’t ready to interview candidates without any human involvement. Let’s break down why.
Discrimination and inaccuracies
There’s abundant evidence that AI systems can introduce and perpetuate biases in recruitment. Even a tech giant like Amazon stopped using AI screening when its algorithms were found to favour men’s resumes over women’s, regardless of skill.
According to the World Economic Forum, human biases are embedded in AI tools that haven’t been thoroughly audited for gender, age, religion, and other forms of discrimination. It’s worth noting that the data used to train these tools are overwhelmingly “WEIRD” (Western, Educated, Industrialized, Rich, and Democratic), so they are likely biased against folks who don’t fit this type.
In addition, some AI tools aren’t trained to process and interpret diverse accents and dialects. This makes them frustrating and unresponsive for speakers of non-standard varieties of English, such as Cockney or African-American Vernacular English, a failure that many YouTube parody videos have demonstrated.
Data security and privacy
AI-powered interview tools can put you at risk of non-compliance with data privacy laws such as the California Consumer Privacy Act (CCPA), General Data Protection Regulation (GDPR), and more. Here’s a breakdown of possible threats:
- Data collection and storage: AI interview tools gather and store candidates’ data, such as their personal information, video recordings, transcripts, etc. Without proper handling, these could be at risk of unauthorized access, especially when using third-party interview software.
- Biometric information: AI tools that analyze applicants’ biometric data, such as facial expressions or voices, must prevent this information from being leaked and misused to access candidates’ other personal technologies, such as their phones, digital locks, and more.
- Lack of transparency: Candidates may not be fully aware of what data is collected, how it will be used, and who has access to it. This raises concerns about consent and data privacy.
Candidates don’t want it
In 2023, the Pew Research Center surveyed 11,000 American adults and found that 71% opposed AI making final hiring decisions. Earlier that year, Harvard Business Review reported that while candidates are impressed with AI’s novelty, the lack of human connection during interviews can be daunting. Candidates felt “judged” by some sort of superior entity, and 66% said they wouldn’t apply to employers that use AI in hiring.
Clearly, using AI means you risk deterring top talent from applying in the first place. And even if they do apply, they could feel anxious during the process and find it hard to be themselves when interacting with an AI instead of a person. Their discomfort will reflect badly on you: Reddit’s r/recruitinghell is riddled with job seekers complaining about hiring processes that aren’t human-centric. One Reddit user, posting about a “jobot” interview, wrote:
“The moment I realized I was being “interviewed” by a jumped-up dialogue tree in a fake chat room I noped right the hell out — it’s like someone said to themselves “Hunting for a job is already kind of awful and humiliating, but how could we make it feel completely dystopian at the same time?” and freaking nailed it.”
Another Redditor posted: “Man, this sucks. Every job application is a potentially life-altering experience for the jobseeker, yet here we are trying to remove the human element from the process altogether.”
Keeping it human
Is it right to deny candidates, who are already facing a challenging job market, some human interaction during the interview? I don’t think so. AI tools can be fantastic at helping with grunt work like scheduling interviews, taking notes, and summarizing feedback. But the tech isn’t ready to lead and analyze interviews, and even if it were, it wouldn’t be right to use it.
Source URL: eustartups