Artificial Intelligence (AI) is badly named.
It is only by sourcing, corralling, feeding and managing our AI systems with real-world (and very often real-time) data that we can hope to achieve the reasonable, usable and functional Machine Learning (ML) advancements that will drive our next generation of intelligent systems.
One man who works at the (virtual) coal face of AI data every day is Peter van der Putten, director of the AI Lab at Pegasystems, a company known for its low-code platform for AI-powered ‘decisioning’ and workflow automation.
With so much change happening in the AI space now, where does van der Putten think we need to focus our attention in the information intelligence zone?
“Today, [technology and business] change is rampant, with customers expecting real-time contextual interactions as processes become more digital, automated and short-lived. It is no wonder that AI specialists need to operate in a faster-moving, ever-changing real-time environment,” said van der Putten.
But the data market is tough, and he points out that AI specialists may assume they don’t need to worry about a given information pipeline, i.e. because it is the job of data and machine learning engineers to populate these data stores with low-latency data.
However, he says, there are multiple reasons why AI specialists should develop real-time data competencies.
Automated decisioning
“First off, the rise of automated decisioning. As AI matures, it is being used more and more to drive core customer interactions in marketing and customer service, or to optimize vertical-specific business processes such as claims in insurance, lending in banking, or eligibility and investigation in government, through the use of automated decision-making. This requires the contextual execution of many models and business rules in milliseconds. So, data and decision scientists will need to ensure that their models and logic can scale to meet these requirements,” explained van der Putten.
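To make the idea concrete, here is a minimal Python sketch of that kind of decision: a model score gated by business rules, evaluated inside a tight latency budget. It is not Pegasystems’ actual decisioning engine; the function names, features and thresholds are all hypothetical stand-ins.

```python
import time

# Hypothetical, pre-trained propensity model: here just a stand-in scoring function.
def propensity_score(context: dict) -> float:
    # In practice this would be a real ML model; a weighted sum keeps the sketch simple.
    return 0.4 * context.get("recent_visits", 0) / 10 + 0.6 * context.get("engagement", 0.0)

# Business rules that gate the model's output.
def eligible(context: dict) -> bool:
    return context.get("age", 0) >= 18 and not context.get("opted_out", False)

def decide_next_best_action(context: dict) -> str:
    """Combine rules and a model score into a single decision per interaction."""
    start = time.perf_counter()
    if not eligible(context):
        action = "no_offer"
    elif propensity_score(context) > 0.5:
        action = "make_offer"
    else:
        action = "nurture"
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 5, "decision must fit inside the interaction's latency budget"
    return action

print(decide_next_best_action({"age": 34, "recent_visits": 7, "engagement": 0.8}))
```

The point of the sketch is the shape of the problem, not the logic itself: many such rules and models have to fire per interaction, every time, within milliseconds.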
He then highlights the need to analyse real-time data streams on the fly, as they are being generated.
“In the past, AI specialists could wait until all data was loaded and transformed in data warehouses and data lakes,” he clarified. “However, the business requires immediacy in many instances – leads need to be spotted and converted within a user session, fraud needs to be detected before more damage is done, IoT predictive maintenance alerts raised before issues worsen, and so on.”
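Scoring events as they arrive, rather than after a batch load into a warehouse, can be as simple as keeping a rolling window of recent context. Below is a toy Python sketch of on-the-fly fraud spotting; the stream, window size and threshold are invented for illustration only.

```python
from collections import deque
from statistics import mean, stdev

def transaction_stream():
    """Stand-in for a real event stream (e.g. a message broker); yields transaction amounts."""
    for amount in [12.0, 9.5, 14.2, 11.0, 10.5, 13.1, 950.0, 12.4]:
        yield amount

# Flag transactions as they arrive, without waiting for a batch load into a warehouse.
window = deque(maxlen=50)  # rolling context of recent amounts
for amount in transaction_stream():
    if len(window) >= 5:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and (amount - mu) / sigma > 3:
            print(f"possible fraud: {amount} (rolling mean {mu:.1f})")
    window.append(amount)
```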
The shape of the modern stack
This new world of operations means that the modern IT stack demands real-time capabilities such as event stream monitoring, complex event processing, speech and text processing and real-time process mining.
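Complex event processing, for example, is less about any single event than about patterns across events in time. A toy sketch of such a rule is shown below; the event types, user names and two-minute window are hypothetical assumptions, not a reference implementation.

```python
from datetime import datetime, timedelta

# A toy complex-event-processing rule: three failed logins from the same user
# within two minutes raises an alert.
WINDOW = timedelta(minutes=2)
recent_failures: dict[str, list[datetime]] = {}

def on_event(user: str, kind: str, ts: datetime) -> None:
    if kind != "login_failed":
        recent_failures.pop(user, None)  # any other event resets the pattern
        return
    hits = [t for t in recent_failures.get(user, []) if ts - t <= WINDOW]
    hits.append(ts)
    recent_failures[user] = hits
    if len(hits) >= 3:
        print(f"alert: repeated login failures for {user} at {ts:%H:%M:%S}")

t0 = datetime(2024, 1, 1, 12, 0, 0)
for offset, kind in [(0, "login_failed"), (30, "login_failed"), (55, "login_failed")]:
    on_event("alice", kind, t0 + timedelta(seconds=offset))
```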
So where does all of this leave day-to-day work with real-time information streams? Surely there is now a more pressing need to be always on and ready to ‘drink from the firehose’ of data flows arriving at far greater volume than ever before, right?
“Yes, so, most importantly, there is a real and pressing need to capture feedback on outcomes in real-time, feed it back to AI and both spawn and retrain models on the fly. This completely breaks the standard model of the artisan data scientist who handcrafts models offline,” clarified van der Putten.
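Retraining on the fly is what online learning libraries are built for. The sketch below uses scikit-learn’s SGDClassifier with partial_fit to update a model one observed outcome at a time; the features, outcomes and feedback loop are synthetic stand-ins rather than anyone’s production setup.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# An online model that keeps learning as outcome feedback arrives,
# instead of being handcrafted and refit offline.
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])

rng = np.random.default_rng(0)
for _ in range(100):                            # each iteration = one observed interaction
    x = rng.normal(size=(1, 3))                 # context features for the interaction
    y = np.array([int(x[0, 0] + x[0, 1] > 0)])  # observed outcome (e.g. offer accepted or not)
    model.partial_fit(x, y, classes=classes)    # incremental update, no offline retrain

print("current coefficients:", model.coef_.round(2))
```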
The future is adaptive
Looking ahead, the Pegasystems AI guru says that the future is online and adaptive, which means that models will increasingly be automatically created and constantly learn.
“The emphasis will shift to the real-time monitoring and tuning of these systems after the design phase, i.e. at run time, for accuracy and robustness, but also for measures related to AI ethics and regulations. AI experts will have to become proficient in online learning, adaptive models, recommenders and real-time monitoring and control,” he concluded.
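What run-time monitoring might look like in its simplest form is sketched below: a rolling accuracy window over the model’s most recent decisions, with an alert when it degrades. The window size and threshold are arbitrary assumptions for illustration; a real deployment would track far more than accuracy, including fairness and compliance measures.

```python
from collections import deque

class RollingAccuracyMonitor:
    """Track a model's accuracy over its most recent decisions and flag degradation."""

    def __init__(self, window: int = 200, threshold: float = 0.7):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, predicted, actual) -> None:
        self.outcomes.append(predicted == actual)
        if len(self.outcomes) == self.outcomes.maxlen and self.accuracy() < self.threshold:
            print(f"alert: rolling accuracy dropped to {self.accuracy():.2f}")

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

monitor = RollingAccuracyMonitor(window=10, threshold=0.7)
for predicted, actual in [(1, 1)] * 6 + [(1, 0)] * 4:   # simulated run-time feedback
    monitor.record(predicted, actual)
print("current rolling accuracy:", monitor.accuracy())
```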
We started with AI as a sort of comedy vehicle in the movies of the 1980s before witnessing the post-millennial, cloud-centric renaissance of AI that we now enjoy in our always-on era of continuous computing. It appears that we now also need to reinvent our understanding of the cadence and velocity of real-time data within modern AI systems – and that’s pretty smart stuff.