Software testing is a critical part of the software and product development cycle. For years, quality assurance (QA) has been the final gate before a release, a functional validation that the code is ready to go to market. Today, however, despite a growing need for agile and efficient testing, legacy QA models are struggling to keep up with cloud-native environments, fragmented teams and rapid release cycles. This, says Mandla Mbonambi, CEO of Africonology, is ushering in a new era of testing that uses AI and automation to prioritise tests and analyse vast quantities of data. It is also, he says, introducing quality, security and governance risks.
“The benefits of AI in testing automation and QA are that it allows teams to move incredibly rapidly,” he says. “Companies benefit from faster automation processes, and their productivity increases exponentially. AI is also capable of analysing the data to detect defects or coverage gaps, and it can provide teams with high-risk scenarios or recommend additional tests based on its analyses.”
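How that prioritisation works varies by platform, but the underlying idea is often a simple risk score. The sketch below is a minimal, hypothetical illustration of the principle, not any vendor's actual method; the test names, failure rates and churn figures are invented. Each test is scored on its recent failure rate and the amount of recent change in the code it covers, and the suite is reordered so the riskiest tests run first.

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    recent_failure_rate: float   # share of the last N runs that failed (0.0 to 1.0)
    covered_file_churn: int      # lines recently changed in the files this test covers

def risk_score(t: TestRecord, churn_weight: float = 0.01) -> float:
    # Blend historical failures with how much the covered code has changed lately.
    return t.recent_failure_rate + churn_weight * t.covered_file_churn

def prioritise(tests: list[TestRecord]) -> list[TestRecord]:
    # Highest-risk tests first, so likely defects surface as early as possible in the run.
    return sorted(tests, key=risk_score, reverse=True)

if __name__ == "__main__":
    # Invented example data for illustration only.
    suite = [
        TestRecord("test_checkout_total", 0.10, 240),
        TestRecord("test_login_flow", 0.02, 15),
        TestRecord("test_report_export", 0.30, 5),
    ]
    for t in prioritise(suite):
        print(f"{t.name}: risk={risk_score(t):.2f}")
```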
The Capgemini World Quality Report 2024-2025 was quick to highlight the impact of AI on the industry, emphasising its ability to optimise test coverage, reduce human error and introduce intelligent automation[1]. The study found that an impressive 68% of companies are using AI, with 72% reporting faster processes as a result. The technology is changing the testing story, moving it away from an after-the-fact process that discovers unexpected errors and frustrates teams and deadlines alike. AI enables testing in near real time as models continuously analyse code throughout the development process, and they can be integrated into development and operations from the outset.
“AI models can learn, they can predict errors that crop up regularly or where they’re most likely to occur,” says Mbonambi. “They turn testing into a smoother part of the process, making it proactive and immediate instead of reactive and defined by ticking boxes. They also allow for self-healing, where the automation can detect when a test will break, find a resolution, and then apply the update so the test still passes if the business behaviour remains valid.”
Self-healing, sketched below, has the potential to minimise failures caused by minor changes and to reduce the monotony burden on testing teams, freeing talented humans to prioritise exploratory testing and more complex tasks. The technology gives people the space to become quality architects, defining risk models, guiding the AI and interpreting patterns rather than simply running tests.
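Self-healing implementations differ between tools, but a common pattern is a locator with ranked fallbacks: when the primary selector no longer matches, the framework tries known alternatives, logs the substitution for human review, and lets the test continue as long as the underlying behaviour is still being exercised. The snippet below is a simplified, framework-agnostic sketch of that idea; the element names and the fake page lookup are invented for illustration.

```python
# Simplified sketch of a self-healing locator; not tied to any real UI framework.
from typing import Callable, Optional

def self_healing_find(find: Callable[[str], Optional[object]],
                      primary: str,
                      fallbacks: list[str]) -> object:
    """Try the primary selector first; on failure, fall back to alternatives and log the heal."""
    element = find(primary)
    if element is not None:
        return element
    for candidate in fallbacks:
        element = find(candidate)
        if element is not None:
            print(f"[self-heal] '{primary}' no longer matches; using '{candidate}' instead")
            return element
    raise AssertionError(f"No locator matched: {primary} or {fallbacks}")

if __name__ == "__main__":
    # Fake page: the 'submit-btn' id was renamed to 'checkout-submit' in the latest build.
    fake_dom = {"checkout-submit": "<button>", "promo-field": "<input>"}
    element = self_healing_find(fake_dom.get, "submit-btn", ["checkout-submit", "pay-now"])
    print("Test continues against:", element)
```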
“There are agentic platforms that can now take on a lot of the heavy testing grunt work with minimal human input, acting almost like testing interns that fortunately don’t get tired or frustrated,” says Mbonambi. “It sounds too good to be true, and unfortunately it can be: while AI has immense value in the QA environment, it also introduces risks that have to stay top of mind.”
Just as AI in the workplace tends to hallucinate or overcompensate, the same risks apply in testing. Some of the most common problems include false positives and false negatives, which add to testing noise rather than reducing it; an over-reliance on automation that erodes skills and awareness; and data privacy and security concerns. Then there is the reality that many AI models remain black boxes: teams have little visibility into the decision-making process and cannot tell why some tests are prioritised while others are not.
“Bias, security, private information, limited guardrails and governance that’s battling to keep up with the pace of change are very real concerns when it comes to introducing AI into the testing environment,” says Mbonambi. “Right now, as teams learn how to use and optimise AI in testing, it’s important to remember that it is a tool designed to augment the process. People are needed to validate AI output, make release decisions and, very importantly, become the guardrails for data privacy and explainability.”
This doesn’t mean AI is too risky, just as it doesn’t mean AI is the perfect solution to every testing problem. Right now, AI is as much in its infancy as its use cases, which means testing with AI models and tools should be approached in a considered and balanced way. Humans are essential, but so are the tools that lift the burden of complexity and deadlines.