Client: Mi-Case
Project: Grievance Module
Background
Casmaco Ltd was commissioned by a large US-based law enforcement agency to extend the functionality of its Mi-Case system to manage all aspects of inmate administration. The system encompasses many components from allocating cells to parole and release. Autism Works were contracted to provide independent software testing for the Grievance module, which is to be used by the Inmate Grievance Office to process the grievances raised by inmates throughout the state.
The general concept of the Grievance system is a database in which each record is a grievance case, rather than the inmate who raised it. Each grievance is then passed through a series of ‘Transactions’, depending on its outcome, until the case is closed one way or another. Statistical information on all current and past grievances is then compiled into reports.
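As a rough illustration of this concept, the sketch below models a grievance case as the central record, accumulating transactions until it is closed. The field names, statuses and transaction names are our own illustrative assumptions, not the actual Mi-Case schema.

```python
# Illustrative sketch only: names and statuses are assumptions, not Mi-Case's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GrievanceCase:
    """The record is the grievance case itself, not the inmate who raised it."""
    case_id: str
    status: str = "Open"
    transactions: List[str] = field(default_factory=list)

    def apply_transaction(self, name: str, closes_case: bool = False) -> None:
        # Each transaction is recorded against the case; some transactions close it.
        self.transactions.append(name)
        if closes_case:
            self.status = "Closed"

# A case passes through a series of transactions until it is closed one way or another.
case = GrievanceCase("GRV-0001")
case.apply_transaction("Notice of Hearing")
case.apply_transaction("Hearing Decision", closes_case=True)
print(case.status, case.transactions)
```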
The Task
Remit
Autism Works' remit was to provide functional testing for the Grievance module. Other forms of testing, such as load testing and usability testing, were carried out by the Casmaco in-house Test Team.
The functional testing was focused on the requirements agreed between Casmaco and their client, with a particular emphasis on the use cases provided in the functional specification. Autism Works made recommendations on which tests should be prioritised. However, the final decision on testing priorities remained with the Mi-Case Test Manager.
Our Testing
Test Planning
Following review of the functional specification, Autism Works produced a state transition diagram to assist in understanding the inter-relationships and flow of transactions through the system. The first test cases were chosen to mirror the processes given in the use cases. Further test cases were then written to cover all of the transactions as end-to-end processes, and additional test cases were developed to cover the remaining requirements not addressed by either the use cases or the transaction tests. Where it was practical to do so, a single test case was chosen to cover multiple requirements.
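As a simplified illustration of this approach, the sketch below shows how end-to-end test cases can be enumerated from a state transition map. The states and transaction names are invented for the example and do not reflect the actual Grievance module workflow.

```python
# Illustrative sketch only: states and transactions are invented to show how
# end-to-end test cases can be derived from a state transition diagram.
TRANSITIONS = {
    "Open":              [("Informal Resolution", "Closed"),
                          ("Notice of Hearing", "Hearing Scheduled")],
    "Hearing Scheduled": [("Hearing Decision", "Decided")],
    "Decided":           [("Appeal", "Appeal Decided"),
                          ("Decision Accepted", "Closed")],
    "Appeal Decided":    [("Final Decision", "Closed")],
    "Closed":            [],
}

def end_to_end_paths(state="Open", steps=None):
    """Walk every route from the initial state to closure; each route is a
    candidate end-to-end test case covering one sequence of transactions."""
    steps = steps or []
    if not TRANSITIONS[state]:
        yield steps
        return
    for transaction, next_state in TRANSITIONS[state]:
        yield from end_to_end_paths(next_state, steps + [transaction])

for i, route in enumerate(end_to_end_paths(), start=1):
    print(f"Test case {i}: {' -> '.join(route)}")
```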
The test cases were then arranged into high, medium and low priority (with the use case and transaction tests all given high priority). Full test scripts were then written for the high-priority tests; scripts for selected medium-priority tests were added later at Mi-Case’s request.
A slightly different approach was taken for reports. As the reporting functionality was delivered after the rest of the system, this area was handled separately. The tests were structured around the reports and their contents rather than around the requirements, although the requirements were still used where necessary to ascertain the correct behaviour of individual report cells.
Test Scripts
Test scripts were written on a step-by-step basis using the functional specification. There was no pre-release version of the software to work from, but because of the detailed nature of the functional specification, it was still possible to write very accurate scripts that needed little revision once the software was released for testing.
Where possible, test scripts were written to reflect the intended processes in the live environment. For example, where a transaction test involved scheduling a hearing, it would normally be preceded by a pre-hearing order and a notice of hearing, even though the system rules did not require this. The test scripts were peer reviewed prior to testing.
Test Execution
Tests were executed over a period of four weeks. The test scripts were prioritised so that the business-critical, use case-based scripts were performed first. These scripts were also re-executed towards the end of the test run to check that no bug fixes had caused these business-critical functions to break. During this time, development continued on the system, and updates were deployed on a daily (or sometimes twice-daily) basis. However, these updates did not delete existing data and so had minimal impact on the testing.
Defects were re-tested as and when they were reported fixed, taking precedence over the test scripts. The scripted tests were also supplemented by exploratory tests, aimed primarily at finding loopholes in the program and potential defects not covered by the test scripts; a number of additional issues were detected as a result.
Tools Utilised
Bugs were logged using a customised SharePoint page developed by Mi-Case. This page was used throughout the Department of Corrections’ project, and Autism Works adopted it for compatibility with the rest of the project.
It was not necessary or appropriate to use other tools in the testing. The varied nature of the testing and the lack of stability at this stage of development meant that test automation was not feasible. Consideration was given to using Selenium for data preparation, where it could have generated the hundreds of records needed for report testing, but this option was not taken up as a simpler set of report tests was chosen.
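For illustration only, the sketch below shows the kind of Selenium script that was considered for bulk data preparation. The URL and element IDs are hypothetical, and this approach was ultimately not used.

```python
# Illustrative sketch only: the URL and element IDs are hypothetical, shown purely
# to indicate how Selenium could have bulk-created grievance records for report testing.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()

for i in range(200):  # create a couple of hundred grievance records
    driver.get("https://test-server.example/grievance/new")             # hypothetical URL
    driver.find_element(By.ID, "inmateId").send_keys(f"INM{i:05d}")     # hypothetical field IDs
    driver.find_element(By.ID, "grievanceType").send_keys("Property")
    driver.find_element(By.ID, "submit").click()

driver.quit()
```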
Outcome
At the start of the testing, a number of issues were discovered that would have prevented practical operation in the live environment. Many of the defects were uncovered due to a change in focus between component testing (where individual functions were tested in isolation) and Autism Works’ system testing (where attention turned to using the system in the context of business processes). In particular, a number of issues were discovered concerning the after-effects of transactions, with attributes such as case status and hearing date not being updated.
Defects were fixed and retested during the test execution phase, and the system became progressively more stable, with subsequent bugs found in increasingly minor areas. By the end of test execution, the only significant defects remaining were two relating to report cells.
In total, 170 defects were reported. Following a cycle of fixes and regression testing, Mi-Case were confident that they could release the application to User Acceptance Testing.
Social Value
A secondary aim for Autism Works was to trial its method for developing step-by-step test scripts. In the long term, these scripts are intended to provide an introduction to software testing for employees with little experience, offering work that is well suited to people with autism. In commissioning Autism Works to carry out this work, Mi-Case has helped deliver a social value of £8,022, based on National Audit Office findings (June 2009), reducing dependency on the state and bringing self-respect and a positive future to Autism Works employees.
And Finally…
During a particularly busy period of project delivery for Mi-Case, we commissioned Autism Works to manage the delivery of a module that had a high level of functional complexity and would have absorbed too much time from our in-house test team. The fact that we could take an almost hands-off approach to testing this module was a big advantage for us, with only a small element of test management required. The specification was sufficiently detailed and precise that we were confident Autism Works could pick the module up, despite not knowing our product set, and hit the ground running, so to speak. Overall, the quality of the testing performed, along with the coverage and the team’s ability to understand the complex functionality under test, was very impressive. We are extremely happy with the end product and feel we can comfortably hand the solution over for User Acceptance Testing without any concerns. We will not hesitate to use Autism Works in the future for any upcoming work!
Chris Masson – Mi-Case Test Manager
seeDetail is the trading name of Autism Works Limited, a Social Enterprise that offers the opportunity of sustainable employment to people with an autism spectrum condition or Asperger’s Syndrome in the field of software testing.