Intelligent automation is the hot potato every business wants to grab. But how long before hands are burnt as automation technology moves faster than ethics?
Recent gains in process and data mining technologies are enough to have C-suites across the globe champing at the bit to implement them. Intelligent automation platforms now have the power to create ‘digital co-workers’, scraping information from across endless systems: apps, UIs, unstructured documents, images, emails, chats, Excel and more. No-code and low-code automated software bots can pull, process and input data, completing hundreds – potentially thousands – of hours of manual tasks in seconds.
The opportunities for automation span industries, from finance’s ivory towers to fast-paced healthcare wards. Handing these tools to your teams could reveal opportunities in data that would be near impossible to find with human capabilities alone. As Robert Enslin, co-CEO of UiPath, said at the company’s FORWARD 5 intelligent automation conference in Las Vegas: “Every company has to be a software company. This is an opportunity to increase the productivity of everyone.”
Certainly, this is a deal not to be sniffed at with current labour shortages. But what will businesses need to consider to ensure data privacy, transparency and protection are maintained for their employees? Is there a chance that automation is moving too fast for organisations to keep up with any ethical dilemmas?
Intelligent automation is coming in hot
Re:infer, a new partner of UiPath, enables email mining for the UiPath platform, allowing large quantities of communications to be searched through and categorised. The software highlights the main tasks that are eating up employee time, then suggests – and actions – areas where automation can step in to assist, using no-code software tools.
Catching up with Dr Ed Challis, general manager at Re:infer, after his FORWARD 5 keynote, we took a seat to discuss the latest process and data mining technology.
“In the last two years, the best natural language processing [NLP] algorithms have started to outperform human baselines,” Challis says. “Personally, I know that there are five emails in my inbox right now, and they’re burning a hole in my conscience because I know I need to reply and I haven’t. We constantly forget to do things because we are too busy, and that can cause big problems in business. Imagine I was selling Re:infer and I said, ‘it basically works, but sometimes it just doesn’t ever work’ – because that’s what you get with a human.”
For Challis, intelligent automation is about letting the often mindless parts of the working day – the policy renewals, information changes and invoice requests – become the job of the digital worker, without causing massive upheaval for employees.
I know that there are five emails in my inbox right now, and they’re burning a hole in my conscience because I know I need to reply and I haven’t – Ed Challis, Re:infer
“SMEs with no technical skills, across any UI, can get valuable analytics on their workloads and drive the best actions to automate. The software is trained by a user clicking around in the UI, to teach it what the task is. You can test the AI algorithm to see how well it matches up with human signals and assign the algorithm to certain use cases: from banking and financial services, insurance, industrial manufacturing, FMCG [fast-moving consumer goods], to even digital-first ecommerce.”
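The "matches up with human signals" check Challis describes can be pictured as a simple agreement report: for each category of work, compare what the algorithm predicted with what a human reviewer decided. A minimal sketch, assuming labelled emails and invented category names – Re:infer's actual platform does this through a no-code UI:

```python
from collections import defaultdict

# Hypothetical sketch: per-category agreement between a model's predicted
# email categories and a human reviewer's labels. Categories and data are
# illustrative, not Re:infer's real schema.

def agreement_report(predictions: list, human_labels: list) -> dict:
    """Fraction of each human-labelled category the model got right."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for predicted, actual in zip(predictions, human_labels):
        totals[actual] += 1
        hits[actual] += predicted == actual
    return {category: hits[category] / totals[category] for category in totals}
```

A team could then assign the algorithm only to the use cases where agreement is high enough, keeping humans in the loop elsewhere.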
Once live, the automation extracts the relevant information like policy numbers, effective dates, topics, and even the intensity rating of the wording to assess the email’s tone, before automatically logging details in the CRM and ERP systems and replying to the email.
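The pipeline described above can be sketched in a few lines – extract fields, score tone, build the record a bot would log. This is a hypothetical illustration with invented field names and a deliberately crude tone score; the CRM/ERP write and the automated reply are stubbed out:

```python
import re

# Hypothetical sketch of an email-mining digital worker: extract fields,
# score the tone, and assemble the record that would be logged downstream.

NEGATIVE_WORDS = {"urgent", "unacceptable", "complaint", "frustrated"}

def extract_fields(email_body: str) -> dict:
    """Pull a policy number and effective date out of free text with regexes."""
    policy = re.search(r"policy\s*(?:number|no\.?)?\s*[:#]?\s*([A-Z0-9-]+)",
                       email_body, re.I)
    date = re.search(r"effective\s*(?:date)?\s*[:#]?\s*(\d{4}-\d{2}-\d{2})",
                     email_body, re.I)
    return {
        "policy_number": policy.group(1) if policy else None,
        "effective_date": date.group(1) if date else None,
    }

def tone_intensity(email_body: str) -> float:
    """Crude tone score: share of words flagged as negative (0.0 = neutral)."""
    words = re.findall(r"[a-z']+", email_body.lower())
    if not words:
        return 0.0
    return sum(word in NEGATIVE_WORDS for word in words) / len(words)

def handle_email(email_body: str) -> dict:
    """Full pass: extract fields, score tone, return the record to be logged."""
    record = extract_fields(email_body)
    record["tone"] = tone_intensity(email_body)
    # In a real deployment the record would be written to the CRM/ERP here
    # and an automated reply sent; both are out of scope for this sketch.
    return record
```

In production, the regexes and keyword list would be replaced by trained NLP models, but the shape of the flow – extract, score, log, reply – is the same.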
Nancy Hauge, head of people at Automation Anywhere (AA), told a similar story of intelligent automation advances at AA’s Imagine conference in New York. Here, the way people and automation opportunities are combined is what seems to make all the difference. The AA HR team has enabled software bots that sort resumes by relevance, schedule interviews, send reminder alerts and support onboarding. Hauge’s team has also automated bespoke employee development programmes that give management specific actions and timelines for supporting each individual employee’s growth.
Grabbing a word with her away from the crowds, Hauge shared her thoughts with ERP Today:
“Both finance and HR are areas that are very ripe for automation,” Hauge explains. “I’ve got the equivalent of seven full-time employees with automation.”
Intelligent automation all sounds like quite the deal, but before leaping in to grab it while it’s hot, Hauge is careful to flag some ethical areas that need to be thought through to achieve a successful collaboration between digital and human workers.
You can’t worry about the things that two percent of the population might do and treat 98 percent as thieves – Nancy Hauge, AA
Businesses can’t be fluffy on data ethics
When thinking about automation ethics, for Hauge, the role of human resources is a vital area in helping businesses maintain data protection and employee privacy with their automation pursuits. No matter how advanced the technology, the old principle stays the same: permission should always be asked before collecting and using data.
The tricky part is that, with tools scanning the inner wording of emails, web apps, images and more, safeguarding an area of privacy for the employee is becoming increasingly difficult. HR needs to be able to assess whether the data handling is legal and appropriate in any given case, interpret the impact of unearthed data findings on any given employee, and ensure fair treatment.
“My CEO once asked me about the role of human resources when only 30 percent of the workforce are human,” Hauge says. “My answer was we need to be humanists that care rather than just business partners – because a bot can’t care – and not think about how we monetise it every day. Chief people officers need to embrace automation and get ahead of it. To protect promises of confidentiality when implementing automation products, it’s vital that management build in security protocols, so users will have to consciously breach them.
“Every single person in AA HR has been through bot writing training, and when the bots impact anybody else, it is reviewed by our CoE for standards. With automation providing this customised world that we live in, you better be equipped. We must be more human than resources, and bring humanity to it.”
The Automation Anywhere head explains that some big tech names are struggling in this arena.
“Google, for instance, is in a tough spot. Interestingly enough, they didn’t have a very accessible management team and, therefore, people had to find other outlets for expressing things. HR needs an open-door policy. You can’t worry about the things that two percent of the population might do and treat 98 percent as thieves.”
Hauge’s argument here is spot on. Businesses need to ensure they avoid turning data mining into a ‘Big Brother’ exercise. Also, if they are going to process data on workflows, they need to maintain a sense of anonymity for employees. When seeking to automate, managers should determine exactly which data needs to be tracked and what can be ignored. They also need to ensure a company policy of data privacy and protection is maintained in spite of information on employees becoming increasingly available. For Hauge, data should not always be used, even if it is readily accessible:
“Streamlining everything doesn’t make it better. My concern is that email mining could misinterpret messages, without knowing what the true context is, creating wrong assumptions. Also, you might observe that a chat goes on for too long on email or team comms, and you realise that it’s just about people connecting. With data mining, if you aren’t considering all these factors, well, it’s like mining for anything: you can bring up a lot of crap and think it’s gold.”
As Hauge alludes to, there is the potential for employers, developers and citizen developers to use automation tools in the wrong way. What if personal data seen by management during mining leads to bias, limiting an employee’s avenues for promotion or even resulting in unfair dismissal? What if the software designed to make water cooler moments less guilt-ridden becomes the very thing that pushes employers to limit those moments for the sake of efficiency? Or what if bad-mouthing the boss becomes a recommended automation after too many mentions over email?
Ethical oven gloves
Vendors and businesses can’t be passing this hot potato issue back and forth. There needs to be a way to ensure businesses are on the right path with automation ethics, and that vendors are helping them catch up securely. This way, people at all places in the chain don’t get burned as the technology gets ahead of the ethics.
It’s time for automation to enter as a parameter that influences the definition and scoring of an ethical company – Shail Khiyara, VOCAL
Shail Khiyara, head of VOCAL (Voice of Customer in the Automation Landscape), told ERP Today about the dangers of customers getting automation use cases wrong:
“Data can be used implicitly or explicitly to introduce biased treatment, unauthorised monitoring and encroaching on individual rights. When it comes to ethics in data, it’s not just about improving the technology, it’s about a social reform that ensures that current social biases are not built into the technology. There are several ethical ratings out in the market, many based on ESG principles. It’s time for automation to enter as a parameter that influences the definition and scoring of an ethical company.”
For Khiyara, businesses using this technology have the choice to buy or build trust in automation and data mining, with building being preferable. Potentially, the future of privacy is to cut the chain between the data and the person, and the responsibility lies at both ends of the platform.
Every company has to be a software company. This is an opportunity to increase the productivity of everyone – Rob Enslin, UiPath
“Employee privacy is a balancing act. Organisations should have a strong culture management and education programme around the benefits of automation, alleviating fears around job loss,” Khiyara continues. “Building trust requires clear policies to be outlined and enforced, ensuring a ‘fear-less’ environment. Automation tools should enable the easy masking of data and anonymisation (one-way data encryption) of any personally identifiable information (PII).”
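One common way to implement the one-way anonymisation Khiyara describes is a keyed hash over each PII field: the same person always maps to the same token, so workflow analytics still work, but the original identity cannot be recovered from the data. A minimal sketch – field names and key handling are illustrative only, and real key management belongs in a secrets vault:

```python
import hashlib
import hmac

# Illustrative sketch: replace PII fields with keyed one-way hashes so
# analytics can still group records per (pseudonymous) employee without
# exposing who they are. Key rotation and storage are out of scope.

SECRET_KEY = b"rotate-me-and-keep-me-in-a-vault"  # placeholder key

def anonymise(value: str) -> str:
    """Keyed one-way hash (HMAC-SHA256): same input gives the same token,
    but the original value cannot be recovered from the token alone."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

def mask_record(record: dict, pii_fields: set) -> dict:
    """Return a copy of the record with every listed PII field anonymised."""
    return {key: anonymise(val) if key in pii_fields else val
            for key, val in record.items()}
```

Using a keyed HMAC rather than a bare hash matters: without the key, anyone holding the tokens could re-identify people by hashing a list of known email addresses and comparing.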
The boost businesses can get from intelligent automation is truly remarkable, so long as ethical responsibility remains a priority for businesses and vendors alike. As Hauge puts it, the way things currently stand, ‘bots can’t care’. Intelligent automation is not yet able to ensure its own uses are ethical. As such, it’s up to humans at either end of the UI to bake ethical expertise into the creation of every bot, and to make sure the power of this technology doesn’t fall into the wrong uses and burn the hands it was meant to help.