Our quality assurance strategy centers on continually evolving our testing process to prevent bugs from being introduced as the software is developed and from being encountered by our customers. We use a combination of developer tests (unit and integration), extensive manual testing, and a number of automated tools to validate that our products function properly. We also use our products in real-world scenarios while they are being actively developed, which helps us identify problems before our customers do.
Test plans
Manual test plans lay out individual test cases to check correct functionality across our various products. We divide our test plans into two parts: P0 and Depth. P0 tests are "line of death" checks that ensure proper behavior of critical features and functionality. We run P0 testing on our components on a weekly basis. Depth tests are much more time-consuming and check important but less critical functionality.
Automated testing
For automated testing, we use a variety of tools to create a suite of tests that are repeatable and can be run with the click of a button. These suites ensure features work as advertised, and also provide a safety net against regressions. These automated tests are run against our system on a regular basis.
We have a range of automated tests across all our products and components. We use unit tests to check small, discrete functionality within components of our software. Integration tests check behavior between system components, such as our REST web services. Functional tests ensure that user-facing features operate correctly.
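As an illustration of the smallest of these layers, the sketch below shows what a unit test can look like, written here in Python with the standard unittest module; the slugify helper is a hypothetical stand-in for any small, discrete piece of component logic.

```python
import unittest


def slugify(title: str) -> str:
    """Hypothetical helper: turns a post title into a URL slug."""
    return "-".join(title.lower().split())


class SlugifyTests(unittest.TestCase):
    """Unit tests exercise one small piece of logic in isolation."""

    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_repeated_whitespace(self):
        self.assertEqual(slugify("Release   Notes"), "release-notes")


if __name__ == "__main__":
    unittest.main()
```

Because tests like this run in milliseconds and touch no external systems, they can run on every build.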
Manual testing
Our manual test strategy uses extensive and continually updated test plans to cover our products with formal test cases. We take care to test our products on all supported platforms and browsers. We also make use of our Quality Assurance team's experience by turning them loose on ad hoc exploratory testing to carefully examine different areas of our products.
Ad hoc testing
Ad hoc testing encourages our team to use their experience and knowledge of the system to explore areas of the system which may not fall under automated or manual test plans. Ad hoc testing takes place during many different phases of our work including manual test passes, bug validation, and general exploratory testing.
Developer unit and integration tests
Developers are responsible for writing unit tests and any necessary integration tests to validate portions of the work they do.
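As a rough sketch of what a developer-written integration test can look like, the example below creates a resource through a REST endpoint and reads it back over HTTP, using Python's requests library in a pytest-style test; the base URL, paths, and response fields are hypothetical placeholders, not our actual API.

```python
import requests

BASE_URL = "https://community.example.com/api/v2"  # hypothetical endpoint


def test_thread_create_and_fetch_roundtrip():
    """Integration test: exercises real HTTP calls between components."""
    created = requests.post(
        f"{BASE_URL}/threads.json",
        data={"Subject": "Integration test thread", "Body": "Hello"},
        timeout=10,
    )
    assert created.status_code == 200
    thread_id = created.json()["Thread"]["Id"]  # hypothetical response shape

    fetched = requests.get(f"{BASE_URL}/threads/{thread_id}.json", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["Thread"]["Subject"] == "Integration test thread"
```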
Functional tests
The QA team performs core function testing via automated browser sessions. These tests verify proper functionality and provide another regression safety net, but they exercise the user interface itself rather than the web services layer covered by the REST integration tests.
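A minimal sketch of such a browser-driven functional test, written here with Selenium WebDriver in Python, might look like the following; the site URL and element selectors are hypothetical, and a locally installed browser driver is assumed.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_search_returns_results():
    """Functional test: drives the real UI through an automated browser."""
    driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver setup
    try:
        driver.get("https://community.example.com")     # hypothetical site
        box = driver.find_element(By.ID, "search-box")  # hypothetical id
        box.send_keys("release notes")
        box.submit()
        results = driver.find_elements(By.CSS_SELECTOR, ".search-result")
        assert results, "expected at least one search result"
    finally:
        driver.quit()
```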
Platform test matrix
Verint products run on several different platforms, so we cover a number of different combinations of platforms and browsers during our testing.
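One common way to drive a matrix like this from an automated suite is to parametrize each test over the platform/browser pairs, as in the hypothetical pytest sketch below; the combinations shown are illustrative samples, not our actual support matrix.

```python
import pytest

# Illustrative sample of a platform/browser matrix; the real set of
# supported combinations is defined per release.
MATRIX = [
    ("Windows", "Chrome"),
    ("Windows", "Edge"),
    ("Windows", "Firefox"),
    ("macOS", "Safari"),
    ("macOS", "Chrome"),
]


def launch_session(platform: str, browser: str) -> bool:
    """Placeholder for starting a browser session on the given platform,
    e.g., through a Selenium Grid; always 'succeeds' in this sketch."""
    return True


@pytest.mark.parametrize("platform,browser", MATRIX)
def test_home_page_loads(platform: str, browser: str):
    """pytest generates one test case per (platform, browser) pair."""
    assert launch_session(platform, browser)
```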
Using our own product day in, day out
We use our own product on this site. This allows us to get immediate feedback from real users on new features: Are the features working as expected? Do they fill a real need? We get users who exercise the system in ways our test plan doesn’t cover. This helps us find bugs we might have otherwise missed, and it helps us expand our test plan with those new use cases.
Delivering well-tested product releases
We are constantly testing our products as they are being developed; however, we focus carefully on testing during our release-for-delivery process. Every product package gets additional attention as it is completed and readied for release to our customers.
In addition to the regular testing already discussed, our major releases undergo several additional test passes in which the entire product development team executes detailed test cases against each of our Release Candidate packages. This final vetting gives the entire system additional levels of scrutiny before release.
Verint also releases hotfixes for our products. These releases contain incremental fixes that we deliver to our customers in between regular product versions. Our hotfix releases undergo the same level of testing as our regular releases; in addition, we take care to specifically validate each issue we're resolving in that release.
Moving forward
We are committed to continuing the expansion of our testing, both manual and automated. We're aggressively using customer feedback to evolve our test plans as we resolve issues identified in our support forums, custom services, and formal support channels. Additionally, each new unit of development drives test cases specific to that work.