Artificial Intelligence (AI) is smart – the clue really is in the name. As we build out our global approach to implementing AI and now dovetail existing advancements with the new era of generative AI for human-like chat and text experiences, AI is getting smarter all the time.
But even so, more and more AI development is being negatively impacted by bias baked into the systems being built – bias that ultimately surfaces in the services we seek to tap into.
Keen to show where faults exist and help move us towards a more unbiased future is Burlington, Massachusetts-based Progress. The application development and infrastructure software company has now tabled its global survey, “Data Bias: The Hidden Risk of AI”, to highlight some of the more pertinent issues.
AI jambalaya jumble
Where ‘bad’ AI with bias exists, the danger is that we end up with a confused mix of ingredients in the final system and finished dish – something like an AI jambalaya, a concoction of (software code) ingredients all mixed together, where the resulting taste is not always appealing.
“Every day, bias can negatively impact business operations and decision making – from governance and lost customer trust to financial implications and potential legal and ethical exposure,” said John Ainsworth, EVP and general manager, application and data platform, Progress. “We put our customers at the centre of everything we do and as we explore all that AI/ML can do, we want to ensure our customers are armed with the right information to make the best decisions to drive their business forward.”
Inherited (unconscious) bias
Ainsworth further explains that biases are often inherited from cultural and personal experiences.
“When data is collected and used in the training of machine learning models, the models inherit the bias of the people building them, producing unexpected and potentially harmful outcomes. Yet, despite the potential legal and financial pitfalls associated with data bias, there is a lack of understanding around the training, processes and technology needed to tackle data bias successfully,” notes Ainsworth and team.
The Progress survey indicated that 78% of business and IT decision makers believe data bias will become a bigger concern as AI/ML use increases, but only 13% are currently addressing it and have an ongoing evaluation process. The biggest barriers they see are a lack of awareness of potential biases, uncertainty over how to identify bias and a shortage of available expert resources, such as access to data scientists.
Deeper AI trends
The survey findings also show:
- 66% of organisations anticipate becoming more reliant on AI/ML decision-making in the coming years.
- 65% believe there is currently data bias in their organisation.
- 77% believe they need to be doing more to address data bias.
- 51% consider lack of awareness and understanding of biases as a barrier to addressing it.
The Progress survey here is based on interviews with more than 640 business and IT professionals, director level and above, who use data to make decisions and are using or plan to use artificial intelligence (AI) and machine learning (ML) to support their decision making.
The full report and findings can be found here.