Addressing AI bias: ERP Today Live! from IFS Unleashed

Key Takeaways

AI bias is prevalent in various domains, notably in talent management, where existing biases can perpetuate exclusionary practices against underrepresented groups.

Transparency in AI processes is crucial for mitigating bias; moving away from the 'black box' model can help build trust and understanding across industries, particularly those less familiar with technology.

Diversity in development teams is essential for creating fair AI systems; including varied perspectives can help detect and address biases, ensuring that AI technology does not amplify existing inequalities.

Reporting from the ERP Today News Desk, live at the IFS Unleashed event in Orlando, Florida, Mark Vigoroso, chief content officer, ERP Today/Wellesley Information Services, and Stephanie Ball, deputy editor, ERP Today, sat down with leaders from IFS to gain insights on the best ways to address AI bias today and in the future.

“We are already seeing the impact of artificial intelligence (AI) in areas like talent management, where it continues to learn off an existing biased pool that does not account for people that may have been excluded in the past,” said Stephanie Poore, managing director, UK&I, IFS. 

Jacqueline de Rojas CBE, board director, IFS, elaborated with an example of a female doctor who could not access the locker room in the gym because her job title had been hard-coded as a male job title. “We live in a world full of bias, and it is our job to make sure that we clear that bias, whether it is tech-enabled or not,” she said. 

So, how can AI bias be mitigated, especially in industries that are traditionally less tech-savvy? According to Bob de Caux, chief AI officer, IFS, transparency is critical: AI results should be made as interpretable as possible, in language that people from those industries understand. He added that the industry must move away from AI being a "black box," as that scares people, and transparency can eliminate that fear.

Regarding innovation, de Caux pointed to some exciting areas, like the "adversarial approach," a technique that is helping reduce AI bias. "[In this technique] you have two AIs, where one is training the model while the other one is trying to exploit the model for different biases, which can help developers uncover biases more quickly," he said.
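The idea of one system probing another for bias can be illustrated with a minimal sketch. The model and helper names below are hypothetical, not IFS's implementation: a stand-in scoring model carries a hidden bias, and an adversarial probe searches for counterfactual pairs of candidates that are identical except for a protected attribute yet receive different scores.

```python
# Hypothetical sketch of an adversarial bias probe. All names and logic here
# are illustrative assumptions, not the approach described by IFS.
import itertools

def score_candidate(candidate):
    # Stand-in "trained model": rewards experience, but also (wrongly)
    # keys on the 'gender' field -- the bias the probe should expose.
    score = candidate["years_experience"] * 10
    if candidate["gender"] == "male":
        score += 5
    return score

def adversarial_probe(model, base_profiles, protected_key, values):
    """Search for counterfactual pairs: profiles identical except for the
    protected attribute that nonetheless receive different scores."""
    violations = []
    for profile in base_profiles:
        for a, b in itertools.combinations(values, 2):
            pa = {**profile, protected_key: a}
            pb = {**profile, protected_key: b}
            if model(pa) != model(pb):
                violations.append((pa, pb, model(pa), model(pb)))
    return violations

profiles = [{"years_experience": y} for y in range(1, 4)]
found = adversarial_probe(score_candidate, profiles, "gender", ["male", "female"])
print(len(found))  # one counterfactual violation per base profile
```

In a production setting, the probe would itself be a learned model searching a much larger input space, but the goal is the same: surface discriminatory behavior faster than manual review could.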

However, innovation without ethical considerations can have adverse effects. “As AI moves from being an assistant to an agent technology, we need to be clear about what we want it to do and the outcomes it is pursuing,” de Rojas said. 

Still, technology enthusiasts de Rojas and Poore are confident that AI can be used for good if frameworks are in place to ensure that it does what we want it to do.

"I am optimistic that we will get over the challenges we're facing currently," Poore noted, giving the example of the interest generated by the 60 use cases of IFS-enabled AI showcased during IFS Unleashed. "We need to be more mindful, but the future is AI."

Diversity will also play a huge part in addressing or detecting bias in AI, de Rojas said. "Looking into the future, I'm not sure we will have AI developers. I think we will have AI that develops itself. So, it will be all about who is creating the technology, and diverse teams with different voices will need to be at the table when we are building, creating, developing and testing new AI technology."

“Otherwise, [AI] will amplify the bias that already exists,” de Rojas concluded.