Any good IT leader knows that when it comes to technology trends, having one eye on what’s coming down the road can make all the difference between rolling with the punches and getting knocked to the floor. So, here are a few predictions when it comes to software testing – while hopefully avoiding getting sucked into the vortex of meaningless discussion around ChatGPT.
Prediction 1: UAT and integration testing will reign
Most of the big ERP providers are heavily promoting their cloud platforms – despite the consequences that has for testing. Thinking over the longer term, the move to the cloud has two consequences that are particularly interesting: as updates become more frequent, they (generally) become smaller and more functionally stable; and the relative inflexibility of cloud platforms will drive application proliferation, as organizations bolt on extra apps to create the functionality they need.
The bottom line of these two points is that there will be less internal software development effort. Cloud updates will require almost no back-end implementation or configuration (which is, after all, part of the point of the cloud). This trend will likely play out with third-party applications, too – as the number of integrations needed grows, third-party integration apps, such as Zapier, are likely to be deployed to reduce the technical effort of connecting applications to your core software.
Because of this, your testing efforts will become even more focused on User Acceptance Testing (UAT). There will be some regression testing to do, of course, but most of that will likely be automated (as it can be right now), so it shouldn't take up much time. However, every update will still need to be tested by users to ensure business processes work as they should. Regarding those integrations: every time an application is updated, you'll need to test every application that integrates with it to make sure everything still functions properly.
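To make that integration-retesting idea concrete, here is a minimal sketch of how an automated regression pass might fan out from one updated application to everything that integrates with it. The app names, the dependency map, and the `smoke_test` stub are all invented for illustration – a real check would hit an API or drive the UI.

```python
# Hypothetical sketch: after one application is updated, re-check every
# application that integrates with it. Names and the dependency map are
# illustrative, not from any real system.

# Which downstream apps consume data from each core application.
INTEGRATIONS = {
    "erp": ["crm", "reporting", "warehouse"],
    "crm": ["reporting"],
}

def smoke_test(app):
    """Stand-in for a real automated check (API ping, UI probe, etc.)."""
    return True  # pretend the check passed

def apps_to_retest(updated_app):
    """Return the updated app plus everything that integrates with it."""
    return [updated_app] + INTEGRATIONS.get(updated_app, [])

def run_regression(updated_app):
    """Run the automated smoke checks; UAT still needs human eyes."""
    return {app: smoke_test(app) for app in apps_to_retest(updated_app)}

results = run_regression("erp")
```

The point of the dependency map is exactly the maintenance burden described above: every new integration adds another entry, and another set of checks, to each update cycle.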
Given that, at present, many organizations find UAT one of the most painful and time-consuming elements of testing, it might be time for IT teams to urgently review how they get UAT done – especially if a cloud migration project has already happened or one is imminent.
Prediction 2: a shift away from public cloud computing
Over the years, I’ve noticed a pattern of centralization and localization with IT. In the early days of computing, an organization might have had one computer terminal that everyone shared for various tasks. Then as personal computing became more possible and popular, compute was localized as everyone was given a work laptop or desktop. Next, as network infrastructure matured, organizations started centralizing their data repositories. At present, compute is being even further centralized in the cloud. I can’t help but expect a shift back towards localization at some point in the future. You can already see hints of it in disciplines such as edge computing, which aim to move compute nearer to users (as opposed to being in a large centralized data center), and products like Azure Stack, which can be used to create a private cloud environment.
For those involved in software testing, this prediction is something to keep an eye on for now rather than something to act upon. If private clouds suddenly increase in popularity, then you’ll need to make sure you have the right skills in-house to manage the tech stack, but that may happen ten years from now (or never – I’m not clairvoyant, after all!). It may mean that application development moves back in-house to a certain extent, changing the makeup of your testing and therefore requiring robust test management to keep everything on track.
I do believe that, even if we move from centralized back to localized IT infrastructure, it will be with all the benefits that cloud computing has given us – particularly microservices-based applications, elasticity, and so forth – so it’s not like we’ll be going back to the age of CD-ROMs.
Prediction 3: yes, alright, AI will probably be a big thing
I think it’s important to note that much of what commonly gets called AI right now isn’t true AI: it involves highly sophisticated algorithms, but it doesn’t genuinely learn unless a user trains it. What makes ChatGPT so exciting is that it genuinely does learn from its interactions and refines its own algorithms as it learns. True self-learning AI could have a load of applications in testing.
The most obvious application is a rise in tools that require absolutely no configuration. You simply show them your applications, and they automatically work out what those applications are and how they work, then devise automated tests and run them. Such tools could also review a user’s test results from a manual testing phase, such as UAT, and turn them into easy-to-understand feedback to send to your dev team.
We may also see a step-change in code-free test script creation, where users can simply describe the test they want, and the system automatically creates the test.
Some of the building blocks for these things are already there. Lots of companies out there have been putting in effort to improve object recognition in automated testing (i.e., getting their tools to recognize what’s an important part of an application to test). That’s a key foundation of a tool that could automatically create tests just from seeing an application. There are also tools out there that will automatically create variants to your test script to cover a broader range of test scenarios – but none of them are directed by a self-learning AI.
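As a rough illustration of the variant-generation idea, here is a small sketch that expands one baseline test case across boundary values for each input field. The field names and boundary values are invented for the example; today's tools do this with fixed rules rather than a self-learning AI.

```python
# Illustrative sketch of rule-based test-variant generation: take one
# baseline test case and expand it across boundary values. Field names
# and limits are invented for this example.
from itertools import product

BASE_CASE = {"quantity": 10, "discount": 5}

# Boundary values a tool might derive for each input field.
VARIANTS = {
    "quantity": [0, 1, 10, 9999],
    "discount": [0, 5, 100],
}

def generate_variants(base, variants):
    """Produce one test case per combination of boundary values."""
    fields = list(variants)
    cases = []
    for combo in product(*(variants[f] for f in fields)):
        case = dict(base)
        case.update(zip(fields, combo))
        cases.append(case)
    return cases

cases = generate_variants(BASE_CASE, VARIANTS)
# 4 quantity values x 3 discount values = 12 test cases
```

The step-change a self-learning AI could bring is choosing which variants actually matter for a given application, rather than exploding every combination.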
We also have to remember that this generation of AI is still in its early stages. I’ve used ChatGPT to help me write code and have noticed that, on occasion, it’s given me two different sets of code from the same prompt. Both worked, and both did the same thing, but they were different – and that could cause problems further down the line if an organization was to adopt AI-driven code generation at scale.
It’s a bit of a wait-and-see prediction. It’s important to state that, in my opinion, AI won’t ever be able to replace the human element of testing, no matter how many pictures, novels or plays ChatGPT produces. In testing, you’re not looking to replicate human creativity; you’re ensuring your software stands up to the randomness of human behavior.
Prediction 4: a convergence of process testing and optimization
This is a bit more in the realms of pure speculation, but it’s a topic I think is worth discussing. If testing tools become sufficiently advanced that they can understand your software instantly and devise tests to ensure it performs as it should, then how much longer will it be before they start suggesting process improvements? For instance, what if your tool recognized when you enter the same data multiple times and could suggest alterations to the process to eliminate repeat data entry? Or could suggest changes to reduce the number of clicks required to complete a process?
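The repeat-data-entry case is the easiest of these to imagine mechanically. Here is a speculative sketch that scans a process event log for the same value being keyed into more than one screen – the kind of pattern such a tool might surface as a candidate for process improvement. The log format and field names are invented for the example.

```python
# Speculative sketch: flag (field, value) pairs that users entered on
# more than one screen, suggesting the process could carry the data
# forward automatically. The event-log format is invented.
from collections import Counter

event_log = [
    ("order_form", "customer_id", "C-1001"),
    ("order_form", "item", "widget"),
    ("shipping_form", "customer_id", "C-1001"),  # re-entered
    ("invoice_form", "customer_id", "C-1001"),   # re-entered again
]

def repeated_entries(log):
    """Return (field, value) pairs entered more than once, with counts."""
    counts = Counter((field, value) for _screen, field, value in log)
    return {key: n for key, n in counts.items() if n > 1}

findings = repeated_entries(event_log)
```

A real tool would go further – correlating timings and click counts, as described above – but even this crude pass shows how testing telemetry could double as process-optimization data.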
The AI software could take note of the time it takes people to do their work, then offer ways to improve productivity, enabling people to focus on more strategic work. Add to that the fact that AI has nigh-perfect recall of data inputs, improving the quality of its suggestions. Of course, all these improvements would work to enhance customer satisfaction too.
Back to the present
It’s exciting to think about what’s coming and the opportunities it could bring to your organization. Technological advances in testing tools have the potential to make it far easier to create and conduct tests, speed up test cycles and reduce the burden on business users at critical stages, such as UAT.
Whatever happens in the future, it’s certainly true right now that you need to think critically about your testing capabilities. Are they ready for a cloud-native and UAT-heavy testing regime, where updates are released monthly and affect multiple systems? If you’re not confident in your current tools and processes, then we should talk.