In 2019, ERP Today looked at artificial intelligence (AI) within enterprise technology and asked whether execs should be worried about the ghost in the machine. More recently, the technology has matured and the central question has changed from ‘are you using AI?’ to ‘how are you using AI?’
With data volumes and algorithmic capability soaring, it is no surprise that AI adoption continues to rise. However, there is a large gap between those simply using AI and those achieving real value and measurable benefits. A recent report by Accenture reveals that only 12 percent of the surveyed global firms are “AI achievers”, with nearly 30 percent of their revenue attributed directly to AI.
The Accenture report also lists a focus that goes beyond financial metrics into responsible AI as one of the hallmarks of the AI achiever group. Supporting the gradual seep of ethics into high-performing organizations is a Gartner report on the AI hype cycle, which states that many organizations still don’t see a business case for ethics within their industry. By 2024, though, Gartner believes 30 percent of major organizations will use a new ‘voice of society’ metric to act on societal issues and assess the impact on their business performance.
While 30 percent by 2024 doesn’t feel like an ambitious goal, there are clear murmurings of change coming from multiple sources across the AI industry, and not just in the startup community.
For example, over at IBM, birthplace of machine learning (ML), the company claims a strong focus on recognizing and embedding ethics in its AI products and services. IBM’s director of product management for data and AI, Priya Krishnan, believes that ethics and humanity must sit at the heart of everything in the technology industry.
“Every organization that is developing and deploying AI has an obligation to put people and their interests at the centre of the technology, to see that it’s used responsibly and help ensure that its benefits are felt by the many, not the few.”
The IBM research team, she says, focuses its work and open source toolkits on incorporating fairness, explainability and security principles.
Whereas security has been a high-level concern for organizations for a few years now, the case for ethical AI is only in the early stages of takeoff, and there is still a long path to trek. While bigger organizations are inclined to stay on the right side of their reputation, most organizations are not yet incentivized to bring ethics into their business model.
Google, for example, has infamously parted ways with several members of its Ethical Artificial Intelligence team. The highest-profile of these figures is Timnit Gebru, who later founded the Distributed AI Research Institute (DAIR) to tackle the inequalities arguably trickling down from the leaders of AI.
“The #1 thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it and increasing the power of those who speak up against the harms of AI and these companies’ practices,” as she wrote in The Guardian last year.
Gebru spoke recently at the EU parliament about what could be done to prevent harm resulting from Big Tech AI. The EU subsequently tabled a proposal to hold developers of AI liable for the damage it causes.
Though the regulation does not yet go the entire length of punishing irresponsible AI behavior, it sends a message to Big Tech that Big Government is watching. Moreover, it is the start of a conversation that advocacy groups like DAIR are pushing, along with startups in the enterprise tech field.
Good ethics is good business
One of those startups is Arthur AI. Since its incorporation in 2019, the NYC-based company has been making a business case for ethical AI. The startup provides an AI monitoring and observability platform that helps enterprise teams monitor, measure and continuously improve their ML models. It experienced 253 percent growth in the first two quarters of 2022, and recently netted a bumper $42m in Series B funding, the first round of this size for an ML observability platform.
“In the old days, people didn’t always put enough consideration into the ethics of their business practices before putting them into production, and if it ever got called out, they would apologize and move on,” CEO and co-founder Adam Wenchel tells ERP Today. “In 2022 that no longer cuts it – you need to consider the ethical implications of your system on day one.”
Although conversations around AI in the enterprise have been around for a while, the emergence of MLOps has given rise to a new call for a more ethical ecosystem.
“When we started, we were talking about some of these ideas around responsible AI (which) just weren’t on people’s radars. So in the last year, it’s been pretty awesome to see (responsible AI) issues around AI ethics becoming top of mind for a broad group of people.”
Whether biases in the models exclude a customer base or cause reputational damage, the case for ethics in the business model is evident to Arthur AI. Wenchel also mentions a ‘waking up’ of the enterprise as companies raise the bar on the inclusion of ethics and responsible business practice.
“It’s been a really interesting transition. I think one of the things that’s made this happen is that the world is becoming more aware of general societal issues and the need to do something about them.
“We’re just starting to get some of that anecdotally in the last four quarters. One of our major public companies that we worked with about two quarters ago saw their top investor ask in the public shareholder meeting about how they were thinking about AI ethics, and it was a pretty amazing moment for us.”
With AI, there is often talk about a ‘ghost in the machine.’ But perhaps the ghost isn’t in the machine at all, but in the organization itself. Is it the specter of Enron, which suffered significant ethics rot from within? Whatever the noise, the enterprise would do well to see that, from every direction, the future is ethical AI.
As Wenchel says: “Good ethics is good business. Full stop.”