AI in HR: the tightrope walker of recruitment


A string of media outlets and research reports are currently discussing the crisis in the global job market: employers are struggling to find talent while more candidates find themselves without a job.

According to Michael Page International's recent Talent Trends 2024 report, the biggest challenge over the last 12 months for 73 percent of employers has been finding candidates with the right skills; 65 percent cite a lack of applicants in general, and 54 percent of businesses struggle to match salary expectations.

On the other hand, according to the most recent report by Sage, “95 percent of HR leaders say working in HR is simply too much work and stress.”

With the need to channel and streamline efficiency and growth, organizations cannot afford for the human factor to become an obstacle. Instead, contributors to the Sage report say, "HR leaders need to up the pace when it comes to HR tech," which could "take away a lot of that transactional and tactical burden for them."

This is why AI has become one of the major topics of discussion around the possibilities of HR enhancement. If you closely follow news about AI, you may have already seen various articles and research exploring the risky and unpredictable consequences of AI deployment in HR and recruitment. Of course, the negativity and caution around the topic are not uncalled for – research has already shown that ethical considerations are among the primary concerns in AI recruitment today.

However, it is undeniable that AI is not going anywhere and is becoming an ever larger part of our lives. Soon enough, every growing business will find itself at the point where AI-powered HR processes become not just convenient – but essential. The balance that AI software developers and users have to master to increase efficiency without imposing risks of bias and unethical practices is the tightrope walker's dilemma that this article explores.


Deploy automation so you don’t fall behind

Recruitment agencies are giving serious consideration to how AI can be deployed in HR processes. Among them is Kelly, an HR consulting services company specializing in connecting STEM candidates with employers via its automated platform, Kelly Arc.

Through a partnership with UiPath dating back to 2017, Kelly designed the platform to enhance traditional recruiting processes. According to Ed Pederson, VP of innovation and product development at Kelly, the traditional process typically takes 30 to 45 days, with additional time for interviews. With automation, however, a completely different experience can be achieved.

“The concept that Kelly Arc is born around is [that] talent and the hiring managers are interacting with each other almost live. There’s a matching technology that understands the profile of the candidates, like what their skill sets are, their profile interests in terms of geography and pay and their ambitions. It matches them to job descriptions on the platform.

“Hiring managers, once they post a job description, can immediately start to see candidates that fit the profile just by virtue of that matching technology, and vice versa. The talent can also see job descriptions that are directly applicable to them as opposed to having to weed through [uncertainties] like: ‘Does this really match what I want, that type of company?’”
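To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of matching Pederson describes – scoring candidates against a job posting by skill overlap, location and pay fit. All field names, weights and data are assumptions for illustration, not Kelly Arc's actual logic.

```python
def match_score(candidate, job):
    """Return a 0-1 score for how well a candidate fits a job posting."""
    skills_c = set(candidate["skills"])
    skills_j = set(job["required_skills"])
    # Fraction of the job's required skills the candidate covers.
    skill_fit = len(skills_c & skills_j) / len(skills_j) if skills_j else 0.0
    location_fit = 1.0 if candidate["location"] in job["locations"] else 0.0
    pay_fit = 1.0 if job["salary"] >= candidate["min_salary"] else 0.0
    # Weighted blend; skills dominate, since skill sets drive the matching.
    return 0.6 * skill_fit + 0.2 * location_fit + 0.2 * pay_fit

candidate = {"skills": ["python", "sql"], "location": "Berlin", "min_salary": 60000}
job = {"required_skills": ["python", "sql", "aws"], "locations": ["Berlin"], "salary": 65000}
print(round(match_score(candidate, job), 2))  # 0.8
```

In a real platform the scoring would of course be far richer (learned embeddings, ambitions, interests), but the principle – rank both directions of the match instantly, rather than waiting weeks – is the same.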

So, the benefits are clear: AI and automation in recruiting offer speed and efficiency and, in the words of Pederson, the possibility of “a more modern user experience.” Compared to legacy technology where you had to provide your staff with a training manual, the use of these platforms can be more intuitive and instant, making the adoption process easier and faster.

“So, that’s the value of it, where you’re getting people connected to jobs much more quickly and it’s an experience that’s more consumeristic”, Pederson adds.

Channeling global accessibility with AI

DHL Group, the global leader in the logistics industry, also shared why the deployment of Phenom AI in its HR processes has been crucial to its continued success.

As an international company, DHL processes approximately 180,000 applications across 220 countries and territories each year. According to Meredith Wellard, VP of group talent acquisition, learning and growth at DHL, what the company usually struggled to balance was the need to have a single solution that would be agile enough to adapt to the local requirements of each territory, country and business sector, such as languages and the supply and demand of top talent.

Before adopting the software, Wellard says, the company struggled to “get consistency in terms of the way we position our brand, the way we market ourselves in the local environments or even the way we describe the experience that you might have as an employee in our business.”

Wellard adds that the company's recruitment lacked the level of agility necessary to prioritize a "candidate-centric talent acquisition experience": managing multiple departments across many different countries had led to the creation of over 200 websites, which made it very difficult for candidates to navigate their job search.

Thus, when the DHL Express division first discovered that Phenom offered solutions to those challenges and implemented Phenom AI, it didn't take long before the rest of the organization said, "'This looks very interesting,'" Wellard says. "And in 2020, we took it and expanded it globally.

“So, over the following couple of years, we reduced the number of career sites down to – need a drumroll here – just one! That was a big step. And at that point, we started to see the huge benefits of how much that can impact our visibility in terms of search engine optimization, our visibility on job boards and so forth, simply because everyone’s contributing to that single name, descriptor or a URL and not all fighting for the airspace,” Wellard adds.

The uncomfortable question

While the passion for AI among HR leaders is clear, Ruaa Alsaleh, senior director of global HR at UiPath, points out that the most important strategy for its deployment is understanding it.

That is, HR leaders need to know how to leverage it, what the pathway to its deployment looks like, how it is regulated and what the compliance procedures are. The crucial point here is, she explains, how to do it responsibly.

“It’s important to understand that you have to develop a governance and it’s multifunctional. […] Typically, what happens [is] almost 60 percent of automation projects with AI fail. And they fail because of poor change management and poor governance that has to be people-driven.”

Expanding on that, Alsaleh explains that particularly in recruiting, “bots and AI should never dictate a candidate’s disposition of whether they’re moving forward or if they’re approved.” Instead, she says, the mindset should be focused on the utilization of AI as “digital assistants” which would allow the recruiter to expand their requisition and have “an aggregated view into the talent pool.”

Apart from establishing the right governance and compliance, no discussion about responsible AI is possible without considering ethics. When ERP Today enquired about the growing ethical considerations in the deployment of AI in HR, Alsaleh did not hesitate to be honest about it.

“Now, it’s a valid concern and it can happen, right? It happens today with humans – we have an unconscious bias, whether we like it or not. And depending on how we set the algorithm or even machine learning, it can happen. What we want to do in this [is to go] back to governance and ethical AI and put checkpoints in which we’re auditing the outcomes.”

Alsaleh specifies that this means that it is possible to “leverage AI to analyze the algorithms and the output to dictate: ‘Am I rejecting more females? Am I rejecting more minorities? Am I rejecting more people over 65?’ And in that way, we’re actually allowing AI to not only help us accelerate hiring but also leveraging the analytics behind it to determine [whether there is] bias and [whether] we need to reset the algorithm or the role-based decision-making in the background. So, it’s very imperative with the governance model that we have those checkpoints in the auditing.” She adds: “I’m not gonna sit here and tell you [that] AI is never going to be biased or all the input [will be] as good as the output. […] When you feed into the AI model what you’re looking for per requisition, it’ll look at a job description and look at years of experience in the industry, but sometimes what we’re looking for also has a bias. So, it’s constant auditing that needs to happen. And we’re seeing it not only in recruiting but also in the employment cycle. It’s definitely a journey.”
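The audit checkpoint Alsaleh describes can be sketched very simply: compute rejection rates per demographic group and flag any group whose selection rate falls disproportionately below the best-performing group's. The sketch below uses the common four-fifths rule of thumb as the threshold; the data and function names are illustrative assumptions, not any vendor's actual auditing code.

```python
from collections import defaultdict

def rejection_rates(decisions):
    """decisions: list of (group, was_rejected) pairs -> rejection rate per group."""
    totals, rejected = defaultdict(int), defaultdict(int)
    for group, was_rejected in decisions:
        totals[group] += 1
        rejected[group] += was_rejected
    return {g: rejected[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` of the best group's."""
    selection = {g: 1 - r for g, r in rates.items()}
    best = max(selection.values())
    return [g for g, s in selection.items() if s < threshold * best]

# Illustrative data: group A rejected 20% of the time, group B 50%.
decisions = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 50 + [("B", 1)] * 50
print(flag_disparity(rejection_rates(decisions)))  # ['B']
```

Group B's selection rate (0.5) falls below four-fifths of group A's (0.8), so it is flagged – the cue, in Alsaleh's terms, to "reset the algorithm or the role-based decision-making in the background."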

But how do the users address this concern? Speaking to Pederson and Wellard, two clear examples can be identified. “What we’ve done as a company is we [created] an AI Council which focuses on five different things”, says Pederson. These are: external scanning, which analyzes what is happening in the world “from politics to new technologies”; an internal use case development group; a technology lead that assesses the different technology partners; risk, in which a “robust AI ethics set of principles is adopted in the training of the entire company”; and impact, “which is tracking the trials we’re doing.”

“It’s definitely something that you can’t avoid,” Pederson adds. “You need to be aware of it and you need to work with partners who are intentionally working to solve those problems.”

Wellard shares the sentiment of Alsaleh and Pederson but describes a different approach to the issue. Although DHL does not have a dedicated committee, the company pledged to stay “selective” and only adopt AI where it could help efficiency.

“We take the topic of AI being a new technology very, very seriously. We as an organization are on a journey to understand the opportunities [it presents] not just in HR and recruitment, but across all of our business. But we also understand that while we don’t want to be slow, we want to be sensible about it. We want to make sure we know what we’re implementing”.

On the subject of bias, Wellard adds: “What we’ve learned is that the bias and problems that potentially exist in AI are less about the machine and more about the user. Until we know how algorithms work, how they will be used and what the implications are from privacy, compliance and other [perspectives], we won’t use them.”

Bringing joy back to HR with balanced tightrope-walking

So, what awaits the future of HR? According to Alsaleh, implementing AI in business will lead to an expansion of the job market, and a large part of that potential lies in reshaping the employee experience.

“AI will never take over HR. It will never be 100 percent automated. What AI is actually doing is putting the human back into HR. It’s allowing us to spend more time focusing on what many like to do, which is [building] the strategy in the big picture, [i.e.] how we get the company where it needs to go from a people perspective,” Alsaleh explains.

“HR is shifting dramatically… McKinsey Global Institute estimated that by 2030 AI is going to contribute to the creation of one million to 50 million new jobs globally. These are higher-paying jobs. These are more exciting jobs that will be a lot more lucrative”, she concludes.

This thought is echoed by Wellard who believes that with AI, recruiters can “bring back the joy to HR” that used to be there “before technology-enabled platforms came along.”

“You know, I’ve been in HR for over 25 years and when I first joined, the recruiters were like the rockstars of the HR organization. Today’s platforms definitely made recruiters’ lives easier…but they did turn recruiting into a process. So, [it] became all about time, cost and efficiency, and we saw recruiters shift from being creative and inspiring to being administrative and process-driven. And that’s perfectly fine. But it is different.”

So, what Wellard is hoping for with AI appearing in the arena is that it will take over some of those administrative tasks and give recruiters the space to go back to “being more creative.”

“My theory is that it will become a lot more fluid than it’s ever been before, and I think that’s exciting,” she adds.

In his own vision of what’s to come, Pederson adds a few more points of AI potential: In the future, “You are going to walk into the office or [whatever you are using] and you’ll have a team of digital workers or AI that have been working for you overnight while you were sleeping.” They will, he argues, lay out all the technical aspects in front of you, such as the right candidates that you need to speak to, and give you the space to spend more time on “human activities” like training your workforce or cultivating relationships with the hiring manager.

“So, it will allow them to do more with these available technologies, but it will be much more conversational and relational in general,” Pederson says.

It is clear that ethics and bias considerations are not going anywhere. Companies view them as an ongoing issue that needs to be discussed collaboratively. Continuous auditing, and the ability to identify problems and learn from them, is crucial to ensure the fluidity, ease and creativity that AI could bring to recruiting.

As long as software developers and users continue to have that open conversation, develop compliance and governance policies and ensure that the drive for efficiency does not outweigh ethics on the tightrope, the walk should feel smooth and safe. Whether this becomes a reality in which recruitment is bright and fair, however, is up to all parties involved to decide.