User Acceptance Testing Part 1

Back in June 2017, Andrew attended the Medidata Next Basel Hackathon and
put together a proof-of-concept for Automated User Acceptance Testing (UAT). Eighteen months later, we're getting ready to release the first production version of this system.

What took so long? Well, to say we've been busy is something of an understatement. In that time we've built up to more than 100 Diagnostics; advanced editors for Matrices, Data Dictionaries, Unit Dictionaries and Forms; standards and library management features; and a whole lot more. In all, we've released more than 150 new features to our pre-release Beta site since June 2017. But we held off on UAT features because we really wanted to do it right.

What is User Acceptance Testing anyway?

But first, what is User Acceptance Testing and what are the challenges of doing it?

The term User Acceptance Testing comes from software projects. Imagine that an organization wants to automate some business process. They get their business domain experts together to create a Specification for what the software should do. This Specification is passed to the developers to build the solution. When it comes back from the developers the organization will perform User Acceptance Testing to ensure that the software meets the Specification.

In the world of Rave study building, User Acceptance Testing may be done by the Sponsor team, or it may be done by a CRO with the evidence of testing provided to the Sponsor. Regardless of its roots, User Acceptance Testing in our industry means the process of testing to provide evidence that the study as-built matches the Specification.

Test Scripts

The current gold standard for testing evidence is to have a Test Script which can be manually executed by a user. A typical script for the testing of an Edit Check might look something like this:


Name : Check SCREENING_VISIT_DATE
Version : 1

Step | Instruction | Expected Result | Actual Result | Comments | User | Date | Pass/Fail
1 | Log into EDC using an Investigator role | Login succeeds, user is at the Rave User home page | | | | |
2 | Navigate to Site 123 | User is at the Subject listing for Site 123 | | | | |
3 | Create a new Subject with the following data: Date of Birth = 10 JAN 1978 | Subject is created with Date of Birth 10 JAN 1978 | | | | |
4 | Navigate to Folder Screening, Visit Form and enter the following data: Visit Date = 13 DEC 1968 | Visit Date for Screening Folder is 13 DEC 1968 | | | | |
5 | Confirm that the Edit Check has fired on Visit Date with the text "Visit Date is before subject Date of Birth. Please correct." | Edit Check has fired | | | | |

The script consists of a set of instructions, each with an expected result. The user performs each step and documents the actual result, adding their initials, the date/time of execution, any comments and whether the step passed or failed. The user may also capture screenshots, or the test subject may be maintained in a test environment for the study, as evidence that the Check was tested.
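As an aside, here is a minimal sketch (illustration only) of how such a script could be represented as structured data. The TestStep class and its field names are hypothetical, not a TrialGrid or Rave format:

# Hypothetical sketch: a manual test script modelled as structured data.
# The TestStep class and its field names are illustrative, not a real format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestStep:
    number: int
    instruction: str
    expected_result: str
    actual_result: Optional[str] = None  # filled in by the tester
    comments: str = ""
    user: Optional[str] = None           # tester's initials
    date: Optional[str] = None           # date/time of execution
    passed: Optional[bool] = None        # Pass / Fail

script = [
    TestStep(1, "Log into EDC using an Investigator role",
             "Login succeeds, user is at the Rave User home page"),
    TestStep(2, "Navigate to Site 123",
             "User is at the Subject listing for Site 123"),
    TestStep(3, "Create a new Subject with Date of Birth = 10 JAN 1978",
             "Subject is created with Date of Birth 10 JAN 1978"),
    TestStep(4, "Enter Visit Date = 13 DEC 1968 on the Screening Visit form",
             "Visit Date for Screening Folder is 13 DEC 1968"),
    TestStep(5, 'Confirm the Edit Check fires with text "Visit Date is before '
                'subject Date of Birth. Please correct."',
             "Edit Check has fired"),
]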

Risk-based Approach

Since a Phase III trial might contain more than 1,000 Edit Checks, many organizations building studies take a risk-based approach. If an Edit Check comes from a Library it may have been tested once in an example study and then not tested again for each new study where that Edit Check is used. Edit Checks considered low risk may not be tested in this way at all.

A risk-based approach means that we're balancing a negative outcome (say, a migration to fix an Edit Check) against the cost of a more comprehensive set of tests. If we assume 15 minutes per Edit Check (10 minutes to write and 5 minutes to execute a script covering both a positive test, where the check fires, and a negative test, where it doesn't) then 1,000 Edit Checks is... counts on fingers... 250 hours, more than a month of effort.

The work doesn't stop there, of course: these test scripts have to be maintained. If the Query Message of an Edit Check is changed then the test script should be updated to reflect that, and if the View or Entry restrictions for a Field are changed then the script should be checked to ensure that the user type executing the test (e.g. Investigator or Data Manager) can still see the query. Even then, the test scripts are likely to be executed only once, because the cost and effort of re-running them after every change to the study is just too prohibitive.
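Spelled out as a quick calculation (just the estimate above, under the same assumptions):

# Back-of-the-envelope estimate using the assumptions above:
# 15 minutes per Edit Check (10 to write the script, 5 to execute it).
minutes_per_check = 10 + 5
edit_checks = 1_000

total_hours = edit_checks * minutes_per_check / 60
print(f"{total_hours:.0f} hours of effort")  # 250 hours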

In summary, we would create a test script for every Edit Check if we could, but it is a huge undertaking to:

1) Create the tests
2) Execute the tests
3) Maintain the tests

The TrialGrid Approach

"Doing UAT Right" means taking the work out of each of these steps. It is no good having a solution that executes the tests quickly if it doesn't also reduce the effort of creating the tests. Having fast authoring and execution of tests doesn't help if the resulting tests can't be easily maintained.

We are confident that with the new TrialGrid approach you can reduce the overall effort of creating, running and maintaining scripted test cases by at least a factor of 10. That means the 250 hours for 1,000 Edit Checks would be reduced to 25 hours; at a rate of $100/hr that's a saving of $22,500 per study.
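Using the same illustrative numbers, the saving works out like this:

# Continuing the estimate: a 10x reduction in effort, costed at $100/hour.
baseline_hours = 250
reduced_hours = baseline_hours / 10       # 25 hours
hourly_rate = 100                         # USD

saving = (baseline_hours - reduced_hours) * hourly_rate
print(f"${saving:,.0f} saved per study")  # $22,500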

Inconceivable? Impossible? Unlikely? Why not join our free webinar on Thursday January 10th to find out more:

https://register.gotowebinar.com/register/2929700804630029324

Or, if you can't wait until then, come back tomorrow for another post on the TrialGrid approach to UAT.