Will AI take the “human” out of human resources?


The use of AI and ML in HCM continues to be transformative. Its power to deliver personalized experiences and gain actionable insights from HR data while automating routine tasks is already having a positive impact on the bottom line. But questions are now being asked about the quality of data being used, as HCM professionals realize AI can only ever be as good as the data it is trained on. How can busy HR executives ensure they are benefiting from the AI revolution, while working safely and ethically? Where does their responsibility lie and what do they need to be aware of?

The new tech revolution

AI is going to change HR. In fact, change is already underway. This powerful technology is currently being used by many organizations across a range of HR functions, including employee skills management, payroll processing, recruitment and onboarding. Against this backdrop, the way HR professionals select and engage with this pervasive AI and ML technology is coming under greater scrutiny. There are, for example, legitimate concerns around the extent to which technology is disrupting the HR function and whether it could be taking the “human” out of human resources altogether. Workday’s own survey data indicates that although only 29 percent of business leaders said they are very confident that AI and ML are being applied ethically in business, 80 percent agree that AI and ML help employees work more efficiently and make better decisions.

It could be taking the “human” out of human resources altogether.

Such uncertainty is perfectly understandable as AI gains widespread acceptance, but few would dispute that the technology is only going to become more pervasive.

As most HR professionals are aware, when we talk about AI and ML, we’re referring to a combination of algorithms and data. Algorithms are trained on data to identify and understand the relationships between data points, and it is this understanding that has made AI a powerful tool for HR: it can flag anomalies or inconsistencies, highlight potential issues and recommend actions.
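To make this concrete, here is a minimal sketch of the kind of consistency check such a system might run behind the scenes. The data, function name and threshold are all hypothetical; real HR platforms use far more sophisticated models, but the principle of learning what “normal” looks like and flagging deviations for human review is the same.

```python
# Hypothetical example: flag payroll amounts that deviate sharply
# from the rest, using a simple z-score test. A flagged index is a
# prompt for human review, not an automatic decision.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indexes of values more than `threshold` standard
    deviations from the mean of the list."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:  # all values identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# One entry looks like a data-entry error (an extra zero).
monthly_payroll = [4200, 4180, 4210, 4195, 42000, 4205]
print(flag_anomalies(monthly_payroll))  # [4] — the suspect entry
```

The point of the sketch is the division of labor: the algorithm surfaces the inconsistency, and a human decides whether it is a typo, a bonus or a genuine problem.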

This is just one reason why the nature and quality of data being used to train algorithms is key to the success of AI. Good quality, accurate data is required if we want AI to do its job successfully because, just like human beings, AI cannot make solid decisions if the information it is dealing with is corrupt or inaccurate. Using a comprehensive unified data model, which allows you to maintain clean and coherent data while being confident of its provenance, is critical here.

Workday has been utilizing AI and ML for nearly a decade, which gives us a unique vantage point when it comes to overseeing the quality and structure of the data fed into our ML platforms. Workday was built in the cloud and all of our customers are on the same version – transacting against the same unified data model. Our platform approach and AI- and ML-infused applications have become foundational to our system.

What do we mean by accurate data?

Accuracy in HR has always been incredibly important; that’s nothing new. The reason it is talked about so much now is that there are so many new ways inaccurate information can creep into a business and undermine sound decision-making.

Data scraped from the internet, for example, is of variable quality (and that’s being charitable). It is likely to contain fake news and opinion as well as fact, and if this is what trains an organization’s algorithms, the result can be serious inaccuracies.

If we look at just one source of information – social media platforms, for example – we can see where issues might arise. Not everything online is accurate and verifiable. This might be the simple result of human error (some people won’t remember the exact month or year they began a job, for example) or might come from a specific desire to misrepresent or embellish skill sets or qualifications. There is also a growing number of scams and fraudulent postings or adverts on social platforms, which spring up despite the best intentions of the platform and most users. If AI is using social media as a data source, this will impact the quality of suggestions it makes.

Some people won’t remember the exact month or year they began a job

Accurate data sources

On a positive note, finding high-quality data sources is perfectly achievable. HR transactions such as holiday bookings, payments, performance reviews and employee departures are all hard, verifiable facts, occurring in volumes large enough to train algorithms. With data sources like these, it stands to reason that AI will produce high-quality results.
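The contrast with scraped data can be sketched in a few lines. The schema and field names below are invented for illustration; the idea is simply that a transaction record only enters a training set if it is complete and verifiable, whereas scraped profiles routinely fail such checks.

```python
# Hypothetical gatekeeping step: admit only complete, dated
# transaction records into a training set.
from datetime import date

REQUIRED_FIELDS = {"employee_id", "event", "event_date"}

def is_trainable(record):
    """Accept a record only if every required field is present
    and the event did not happen in the future."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    return record["event_date"] <= date.today()

records = [
    {"employee_id": 1, "event": "performance_review",
     "event_date": date(2023, 3, 1)},        # complete, verifiable
    {"employee_id": 2, "event": "hire"},      # incomplete: no date
]
clean = [r for r in records if is_trainable(r)]
print(len(clean))  # 1 — only the complete record is kept
```

A real pipeline would also verify provenance (which system of record produced the transaction), but even this simple filter illustrates why transactional HR data makes a far better training diet than the open internet.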

An ethical issue

As AI becomes more deeply embedded in the HR function, transparency and explainability will be critical to provide reassurance that decisions reached through its use are fair and free from bias. Data can be a powerful tool in identifying bias and in measuring whether our efforts to combat it are effective. However, depending on how it is used, it could also make problems worse. There are, sadly, multiple examples of AI amplifying existing biases. For this reason, it is vital that AI is thoughtfully designed and developed to avoid unintended consequences.

Similarly, the data being used to train AI must come from trustworthy, verifiable sources. We may also question whether it is legitimate for sources such as candidates’ social media profiles to be used as HR data during a recruitment process. HR professionals have asked for years whether this is fair and proper, but now that technology enables it to be done automatically through AI, it poses a major ethical question.

We may also question whether it is legitimate for candidates’ social media profiles to be used in HR data for recruitment

Trusted partners

With the use of AI being such a hot issue both in the HR function and beyond, it has never been more important that the way technology vendors work is interrogated and understood. Organizations need to understand how the AI they are buying operates in practice and what data sets the vendor is using to make it work. Responsible vendors may provide detailed factsheets to their customers explaining the nuances of how they use data, for example, so users can confidently explain how they are embedding AI.

Beyond this, a vendor’s philosophy is also important. How do they envision AI transforming the workplace? Is it (as it should be) to remove drudgery by automating repetitive work, personalize the employee experience, support better decision-making and enable better strategic planning? Or do they have a different vision?

For me, a key principle of ethical AI is ensuring that technology is amplifying human potential. The focus of AI and ML is not to replace humans; rather, it’s about driving human performance and making things possible that were not possible before.

AI and ML is about enabling our users to become super humans

Really, AI and ML is about enabling our users to become super humans. Getting this right can ensure AI is working in everyone’s best interests while delivering the transformative results we’re all looking for.