Mastering the pivot from QA to QE
Many financial services institutions (FSIs) have sought to improve their QA capabilities over the last few years, introducing elements of automation and other efficiencies that help them run more test cases faster. Yet recent conversations with FSI IT leaders highlight that this progress may not be enough. Business leaders want to see more dramatic improvements, without quality slippage.
So where’s the problem? First, some background: many leaders have embraced the move from traditional QA to a process built around the concept of quality engineering (QE). At its heart, QE means shifting testing “left” into the development portion of a DevOps workflow, while employing automation and AI to test faster and more efficiently. But in practice, these initiatives have not always progressed as expected, and quality considerations continue to slow time to market.
Let’s look more closely at some key missteps holding back returns on the move from QA to QE.
Not quite automation
While virtually every FSI uses automation in some part of the QA process, dig deeper and you will usually find it applied to only a handful of test cases, or fragmented across teams. For example, when one FSI automated key portions of its QA process, it unintentionally introduced a major inefficiency: QA specialists ran the automated tests for their area of focus from their personal laptops. Whenever they were busy or out of the office, the “automated testing” in those areas simply stopped.
With true automation, testing never stops, and you limit reliance on individuals through virtualization of the testing processes. As a result, testing becomes a continuous contributor to development, measurably accelerating time to market and lowering costs without requiring team members to drive the process. So when your automation fails to free resources to focus on resolving quality issues sooner, that’s a clear sign you haven’t achieved true automation.
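As a concrete illustration, here is a minimal sketch of what “testing never stops” can mean in practice: a suite executed continuously from a shared server or container rather than a personal laptop. The pytest harness, suite path and hourly interval are illustrative assumptions, not a description of any particular FSI’s setup.

```python
# continuous_regression.py
# A minimal sketch of centralized, always-on test execution, assuming a
# pytest-based suite. Suite path, interval and report location are
# illustrative placeholders.
import os
import subprocess
import time
from datetime import datetime, timezone

SUITE_PATH = "tests/regression"   # hypothetical suite location
INTERVAL_SECONDS = 60 * 60        # run hourly, around the clock
REPORT_DIR = "reports"


def run_suite() -> int:
    """Run the full suite headlessly and archive a timestamped report."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    result = subprocess.run(
        ["pytest", SUITE_PATH, f"--junitxml={REPORT_DIR}/run-{stamp}.xml"],
        capture_output=True,
        text=True,
    )
    print(f"[{stamp}] exit code {result.returncode}")
    return result.returncode


if __name__ == "__main__":
    # Runs on a shared server or container, so testing never pauses
    # because an individual tester is busy or out of the office.
    os.makedirs(REPORT_DIR, exist_ok=True)
    while True:
        run_suite()
        time.sleep(INTERVAL_SECONDS)
```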
Test case excess
A typical regression testing package can include as many as 1,000 test cases, and many FSIs run all or almost all of their test cases across multiple packages before every release. The idea is to err on the side of safety. As a nod to optimization, an FSI might run five of its ten complete regression packages, which still amounts to thousands of potentially unnecessary test cases per release. But every extra test case adds time to the process and drains resources you could devote to something else.
At the same time, given the complexity of legacy systems, skipping a test that merely seems unnecessary can lead to serious problems. The answer is not to run every test, nor to rely on experience or intuition to choose which tests to run. Rather, it lies in machine learning (ML) analysis of past test results: by tracing the connections between past defects and the specific areas a release impacts, ML-based selection typically cuts the number of tests by more than 60%. The time saved not only accelerates release cycles but also lets teams focus their energy on identifying and addressing issues.
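To ground the idea, here is a deliberately simplified sketch of defect-history-based test selection. The test names, components and co-occurrence scoring are illustrative assumptions; a production system would learn these associations with an ML model trained on far richer defect and coverage data.

```python
# regression_selection.py
# Simplified sketch: rank tests by how often they have caught defects in
# the components a release touches. A plain co-occurrence count stands in
# here for a learned defect/test association model.
from collections import defaultdict

# Hypothetical history of (test_id, component where the defect was found)
DEFECT_HISTORY = [
    ("test_payments_auth", "payments"),
    ("test_payments_limits", "payments"),
    ("test_ledger_posting", "ledger"),
    ("test_payments_auth", "ledger"),
    ("test_kyc_flow", "onboarding"),
]


def build_association(history):
    """Count how often each test has caught a defect in each component."""
    scores = defaultdict(lambda: defaultdict(int))
    for test_id, component in history:
        scores[component][test_id] += 1
    return scores


def select_tests(changed_components, history, top_n=2):
    """Rank tests by historical defect hits in the areas this release impacts."""
    scores = build_association(history)
    totals = defaultdict(int)
    for component in changed_components:
        for test_id, hits in scores[component].items():
            totals[test_id] += hits
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [test_id for test_id, _ in ranked[:top_n]]


if __name__ == "__main__":
    # A release touching payments pulls in payment-heavy tests instead of
    # the full thousand-case package.
    print(select_tests(["payments"], DEFECT_HISTORY))
```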
Racing past ML to get to GenAI
Many FSIs are eager to explore even more advanced AI capabilities beyond ML. At Amdocs, we’re redefining optimization in quality engineering, with GenAI playing an increasingly important role. As part of our work with many organizations implementing GenAI for automating test case creation, we’ve recognized that understanding the distinct roles of different AI technologies is crucial.
As the name indicates, GenAI is generative: you can use it to generate appropriate new test cases. ML, on the other hand, is selective: you use it to select the relevant test cases from the thousands you already have. This is why ML provides the foundation both for optimizing test cases today and for the future use of GenAI.
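A small sketch can make the distinction concrete. The select_existing_tests and llm_complete helpers below are hypothetical stand-ins: the first mimics ML-style selection from an existing suite, the second marks the point where a GenAI service would generate new cases from a prompt grounded in that selection.

```python
# selective_vs_generative.py
# Sketch of the two roles: ML *selects* from existing tests, while GenAI
# *generates* new ones from a prompt informed by that selection.

def select_existing_tests(existing_tests, changed_area):
    """Selective (ML-style): narrow many tests down to the relevant ones."""
    return [t for t in existing_tests if changed_area in t["tags"]]


def build_generation_prompt(changed_area, selected):
    """Generative (GenAI-style): turn the selection into a grounded prompt."""
    names = ", ".join(t["name"] for t in selected)
    return (
        f"Existing regression tests for '{changed_area}': {names}. "
        f"Propose additional test cases covering gaps in this area."
    )


def llm_complete(prompt: str) -> str:
    # Hypothetical stub; in practice this would call a GenAI service.
    return f"(generated test cases for prompt: {prompt!r})"


if __name__ == "__main__":
    tests = [
        {"name": "test_wire_transfer_limits", "tags": {"payments"}},
        {"name": "test_statement_render", "tags": {"reporting"}},
    ]
    selected = select_existing_tests(tests, "payments")
    print(llm_complete(build_generation_prompt("payments", selected)))
```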
Consider a recent customer case study: an FSI wanted to use GenAI to transform and optimize its QA process. After running through a GenAI demonstration and an ML scenario with our team, the company realized that ML-based optimization could deliver improvements faster, and that it would also yield insights to help develop prompts for future use of GenAI. For this customer, ML optimization delivered the QA improvements it needed now, while establishing a rich data source to draw on when it is ready for GenAI.
Moving forward with quality engineering
FSIs looking to move from QA to QE need more than automation, AI and optimization alone. Success starts with understanding current QA practices and how they compare to industry standards. Each FSI needs a tailored plan that optimizes testing at every stage, with clear milestones at 3, 6, 9, 12 and 18 months, delivering quick wins with ML-based optimization while building the foundations for GenAI and other advanced capabilities. For guidance on your QE journey, talk to our team.