Articles tagged with 'UAT'

User Acceptance Testing Part 2

Yesterday I described the cost and effort of creating, executing and maintaining test scripts for User Acceptance Testing. I also made the bold statement that the TrialGrid approach could reduce this effort by a factor of 10. As you might expect, we achieve this through automation. Of the three parts:

1) Create the tests
2) Execute the tests
3) Maintain the tests

The first, Create the tests, is the most technically challenging. Before we can automate creation of tests we first need to understand what tests are going to look like and how they are executed.

If we look at the test script from the first part of this series we can see that it is designed to be read and executed by a human:


Name : Check SCREENING_VISIT_DATE
Version : 1

(Actual Result, Comments, User, Date and Pass/Fail are completed by the tester during execution.)

Step 1
  Instruction: Log into EDC using an Investigator role
  Expected Result: Login succeeds, user is at the Rave User home page

Step 2
  Instruction: Navigate to Site 123
  Expected Result: User is at the Subject listing for Site 123

Step 3
  Instruction: Create a new Subject with the following data: Date of Birth = 10 JAN 1978
  Expected Result: Subject is created with Date of Birth 10 JAN 1978

Step 4
  Instruction: Navigate to the Screening Folder, Visit Form and enter the following data: Visit Date = 13 DEC 1968
  Expected Result: Visit Date for the Screening Folder is 13 DEC 1968

Step 5
  Instruction: Confirm that the Edit Check has fired on Visit Date with text "Visit Date is before subject Date of Birth. Please correct."
  Expected Result: Edit Check has fired

Once executed the signed and dated test script will be kept as evidence to be reviewed by the Sponsor.

Clearly, if we want to automate the process of executing these scripts we need to keep that readability. We need a format for test scripts that is structured enough for software to execute but also naturally readable for humans.

Executable Specifications

Fortunately, Software Development has had a solution to this problem for nearly a decade. Behaviour Driven Development (BDD) is an approach to writing specifications and acceptance tests which can be read and understood by humans and executed by software. This is exactly what we are looking for. BDD doesn't specify any particular format for these tests but the most widely adopted standard is called "gherkin"[1].

Gherkin uses a simple syntax. Here is a short example:

Feature: Buying things from the shop

  If a user has money they can buy things

  Scenario: Buying things
    Given Alice has $1.30 
    When she visits the grocery store
    And she buys 1 banana for $0.25
    Then she will have $1.05    
    And she will have 1 banana

This example starts with a "Feature" declaration: documentation telling us about the Scenario tests which follow. On line 3 we have a free-text description of the tests. On line 5 we begin a Scenario called "Buying things".

Scenarios follow a format of:

Given some background information that sets up the test conditions
When some action is taken
Then I should see some result

Acceptance tests for Edit Checks

If we convert our original test script to the Gherkin Given..When..Then structure we might get:

# Version 1.0

Feature: Testing Edit check SCREENING_VISIT_DATE

  Visit Date should not be before Date of Birth

  Scenario: The Check Fires
    Given I log into EDC using an Investigator role 
    And I navigate to Site 123
    When I create a new Subject
    And I enter "10 JAN 1978" as the Date Of Birth on the Subject Form
    And I enter "13 DEC 1968" as the Visit Date for the Visit Form in the Screening Folder
    Then I will see query text "Visit Date is before subject Date of Birth. Please correct."   

This format is a little wordy, mostly because of the need to specify the Field, Form and Folder for each data point. Gherkin includes a data table structure which can help here, and we can combine it with some simple shortcuts for field selection:

# Version 1.0

Feature: Testing Edit check SCREENING_VISIT_DATE

  Visit Date should not be before Date of Birth

  Scenario: The Check Fires
    Given I log into EDC using an Investigator role 
    And I navigate to Site 123
    When I create a new Subject
    And I enter data:
    | DataPoint                      | Value       |
    | SUBJECT.SUBJECT.DOB            | 10 JAN 1978 |
    | SCREENING.VISIT.VDATE          | 13 DEC 1968 | 
    Then I will see query text 
       """
       Visit Date is before subject Date of Birth. Please correct.
       """   

That makes the test a bit more concise.
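As an aside, the `FOLDER.FORM.FIELD` shortcut used in the data table is simple to resolve mechanically. Here is a minimal sketch in Python (illustrative only; this is not TrialGrid's actual parser):

```python
# Sketch: resolving the FOLDER.FORM.FIELD shortcut used in the data table.
# Illustrative only -- not TrialGrid's real resolver.
from typing import NamedTuple

class DataPoint(NamedTuple):
    folder: str
    form: str
    field: str

def parse_datapoint(shortcut: str) -> DataPoint:
    """Split a 'FOLDER.FORM.FIELD' shortcut into its three parts."""
    parts = shortcut.split(".")
    if len(parts) != 3:
        raise ValueError(f"Expected FOLDER.FORM.FIELD, got {shortcut!r}")
    return DataPoint(*parts)

row = {"DataPoint": "SCREENING.VISIT.VDATE", "Value": "13 DEC 1968"}
dp = parse_datapoint(row["DataPoint"])
print(dp.folder, dp.form, dp.field)  # SCREENING VISIT VDATE
```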

Automated Testing

The Given..When..Then style of test is readable to a human but can also be read by software. The format isn't totally free-form. Each of the Given / When / Then steps must conform to a pattern that the software understands. Currently TrialGrid understands around 50 patterns which can check a wide range of states in Medidata Rave, not just whether a query exists. This means you can write tests which ensure that Forms and Fields are visible (or not visible) to certain EDC user Roles, to check the calculations for Derivations and to verify the results of data integrations such as IxRS feeds which enter data into Medidata Rave forms.
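TrialGrid's own step library isn't shown here, but the general mechanism BDD runners use to match step text to handlers is easy to sketch: each pattern is a template with named placeholders, and the runner tries each registered pattern in turn. A minimal Python illustration (the pattern and handler below are hypothetical, not TrialGrid's actual steps):

```python
import re

# Minimal sketch of how a BDD runner maps step text to handlers.
# The pattern and handler below are hypothetical examples.
STEP_PATTERNS = []

def step(pattern):
    """Register a handler for steps matching the given regex."""
    def register(func):
        STEP_PATTERNS.append((re.compile(pattern), func))
        return func
    return register

@step(r'I enter "(?P<value>[^"]+)" as the (?P<field>.+) on the (?P<form>.+)')
def enter_value(value, field, form):
    return f"entering {value!r} into {field} / {form}"

def run_step(text):
    """Find the first registered pattern that matches and invoke its handler."""
    for regex, func in STEP_PATTERNS:
        match = regex.match(text)
        if match:
            return func(**match.groupdict())
    raise LookupError(f"No pattern understands: {text!r}")

print(run_step('I enter "10 JAN 1978" as the Date Of Birth on the Subject Form'))
# entering '10 JAN 1978' into Date Of Birth / Subject Form
```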

Tests can be read and then executed against a live Rave instance by the TrialGrid UAT module. Provide a Rave URL, study name, environment (e.g. TEST) and credentials to interact with Rave and the TrialGrid system will execute your tests against the Rave instance.

Data is entered via Rave Web Services and results verified automatically. Screenshots of the Rave page showing results of actions such as data entry and queries created can be captured for both Classic Rave and the new Rave EDC (formerly called RaveX). Results are updated in real-time as the system works through each step but you can also leave it to run unattended and view the results when it is done. TrialGrid runs these tasks in the background so you can get on with some other work.

The output is a PDF document that shows the actions taken and the results, comparing expected results against actual and providing screenshots as evidence.

Automated Maintenance

One of the challenges of test scripts is keeping them up-to-date with changes to the study. For example, imagine an Edit Check that ensures that when Race is "Other" then Race Other is specified on the demography form. The check has been programmed with the query text:

"Race is Other and Specify Other Race is missing. Please review and correct."

But in the test we are looking for:

"Race is Other and Specify Other Race is missing."

This could happen if the Specification or the programming of the Edit Check changed. But we want them to match. A human tester might be tempted to pass this test as "close enough" but automated test software looks for an exact match and will fail this test.
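This exact-match behaviour is easy to demonstrate. The sketch below (plain Python, not TrialGrid code) shows the comparison failing and uses difflib to pinpoint exactly which words differ:

```python
import difflib

# Sketch: why "close enough" fails an automated exact-match comparison,
# and how a word-level diff pinpoints the discrepancy for the test author.
expected = "Race is Other and Specify Other Race is missing."
actual = "Race is Other and Specify Other Race is missing. Please review and correct."

print("match:", expected == actual)  # match: False

# Show only the words that differ between the test and the study build.
for token in difflib.ndiff(expected.split(), actual.split()):
    if token.startswith(("+", "-")):
        print(token)
```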

The TrialGrid approach can identify these kinds of problems before the test is ever run. In this example we see a warning in the Test editor which identifies that there is an issue:

[Screenshot: Test editor showing the warning]

Here we are giving the system extra hints about what our test relates to through the @EditCheck "tag" (another feature of the Gherkin format) and referencing the DM001 Edit Check. This has several benefits:

  1. By setting up a link between the Edit Check and the Test we can say whether an Edit Check has been tested or not and calculate what percentage of Edit Checks and other objects are exercised by tests.

  2. The system has greater contextual knowledge about what is being tested and can help with warnings like the one shown here.

TrialGrid performs similar validation of Data Dictionary values, Unit Dictionary selections, Folder, Form and Field references and more. This reduces the effort of maintaining tests and supports risk-based approaches: rather than skipping tests on the assumption that "nothing has changed and this test is still valid", the system can tell you when something has changed and a test may no longer be valid.

Summary

In this second part of our three-part series on Automated User Acceptance Testing we briefly covered the format of the tests, how they are executed, and how the system helps you keep tests in sync with the Edit Checks, Forms and other objects they are supposed to be testing.

These features make the execution and maintenance of tests much easier and faster but we are still left with the huge challenge of writing these kinds of tests for hundreds of Edit Checks. In the last part we'll cover how the TrialGrid system can automate that part, creating tests in seconds that would take a human hundreds of hours of effort.

Come back tomorrow. And if you want to see this system in action, don't forget our free webinar on January 10, 2019.

Registration at https://register.gotowebinar.com/register/2929700804630029324

Notes:

[1] Why "gherkin"? That's a story in itself, but in summary: "gherkin" is the format used by a software tool called "cucumber", and it is called cucumber because passing tests are shown in green text, the idea being to get everything to "look as green as a 'cuke'". I know, hilarious.

User Acceptance Testing Part 1

Back in June 2017 Andrew attended the Medidata Next Basel Hackathon and put together a proof-of-concept for Automated User Acceptance Testing (UAT). 18 months later we're getting ready to release the first production version of this system.

What took so long? Well, to say we've been busy is something of an understatement. In that time we've built up to more than 100 Diagnostics; advanced editors for Matrices, Data Dictionaries, Unit Dictionaries and Forms; standards and library management features and a whole lot more. In all, we've released more than 150 new features of the software to our pre-release Beta site since June 2017. But we held off on UAT features because we really wanted to do it right.

What is User Acceptance Testing anyway?

But first, what is User Acceptance Testing and what are the challenges to doing it?

The term User Acceptance Testing comes from software projects. Imagine that an organization wants to automate some business process. They get their business domain experts together to create a Specification for what the software should do. This Specification is passed to the developers to build the solution. When it comes back from the developers the organization will perform User Acceptance Testing to ensure that the software meets the Specification.

In the world of Rave study building, User Acceptance Testing may be done by the Sponsor team but it may also be done by a CRO with the evidence of testing being provided to the Sponsor team. Regardless of its roots, User Acceptance Testing in our industry means the process of testing to provide evidence that the study as-built matches the Specification.

Test Scripts

The current gold standard for testing evidence is to have a Test Script which can be manually executed by a user. A typical script for the testing of an Edit Check might look something like this:


Name : Check SCREENING_VISIT_DATE
Version : 1

(Actual Result, Comments, User, Date and Pass/Fail are completed by the tester during execution.)

Step 1
  Instruction: Log into EDC using an Investigator role
  Expected Result: Login succeeds, user is at the Rave User home page

Step 2
  Instruction: Navigate to Site 123
  Expected Result: User is at the Subject listing for Site 123

Step 3
  Instruction: Create a new Subject with the following data: Date of Birth = 10 JAN 1978
  Expected Result: Subject is created with Date of Birth 10 JAN 1978

Step 4
  Instruction: Navigate to the Screening Folder, Visit Form and enter the following data: Visit Date = 13 DEC 1968
  Expected Result: Visit Date for the Screening Folder is 13 DEC 1968

Step 5
  Instruction: Confirm that the Edit Check has fired on Visit Date with text "Visit Date is before subject Date of Birth. Please correct."
  Expected Result: Edit Check has fired

The script consists of a set of instructions each with expected results. The user performs each step and documents the actual results, adding their initials and the date/time of the execution along with any comments and whether the step passed or failed. The user may also capture screenshots or the test subject may be maintained in a test environment for the study as evidence that the Check was tested.
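The structure of such a script is regular enough to model directly. As a rough sketch (the field names below are our own, mirroring the columns above, not any TrialGrid schema):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of one row of a manual UAT test script, mirroring the columns
# above (Step, Instruction, Expected Result, Actual Result, Comments,
# User, Date, Pass/Fail). Field names are illustrative assumptions.
@dataclass
class TestStep:
    number: int
    instruction: str
    expected_result: str
    actual_result: Optional[str] = None
    comments: str = ""
    user: str = ""
    date: str = ""
    passed: Optional[bool] = None

    def record(self, actual, user, date):
        """Document the outcome; here we pass only on an exact match."""
        self.actual_result = actual
        self.user = user
        self.date = date
        self.passed = actual == self.expected_result

step1 = TestStep(1, "Log into EDC using an Investigator role",
                 "Login succeeds, user is at Rave User home page")
step1.record("Login succeeds, user is at Rave User home page", "AB", "01 JAN 2019")
print(step1.passed)  # True
```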

Risk-based Approach

Since a phase-III trial might contain more than 1,000 Edit Checks, many organizations building studies take a risk-based approach. If an Edit Check comes from a Library it may have been tested once in an example study and then not tested again for each new study where it is used. Edit Checks considered low risk may not be tested in this way at all.

A risk-based approach means that we're balancing a negative outcome (say, a migration to fix an Edit Check) against the cost of a more comprehensive set of tests. If we assume 15 minutes per Edit Check (10 minutes to write and 5 minutes to execute both a positive test, where the check fires, and a negative test, where it doesn't) then 1,000 Edit Checks comes to 250 hours, more than a month of effort. The work doesn't stop there, of course: these test scripts have to be maintained. If the Query Message of an Edit Check changes then the test script should be updated to reflect it, and if the View or Entry restrictions for a Field change then the script should be checked to ensure that the user type executing the test (e.g. Investigator or Data Manager) can still see the query. Even then, the test scripts are likely to be executed only once, because the cost of re-running them after every change to the study is just too prohibitive.

In summary, we would create a test script for every Edit Check if we could but it is a huge undertaking to:

1) Create the tests
2) Execute the tests
3) Maintain the tests

The TrialGrid Approach

"Doing UAT Right" means taking the work out of each of these steps. It is no good having a solution that executes the tests quickly if it doesn't also reduce the effort of creating the tests. Having fast authoring and execution of tests doesn't help if the resulting tests can't be easily maintained.

We are confident that the new TrialGrid approach can reduce the overall effort of creating, running and maintaining scripted test cases by at least a factor of 10. That means the 250 hours for 1,000 Edit Checks would be reduced to 25 hours; at a rate of $100/hr that's a saving of $22,500 per study.
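The back-of-envelope arithmetic is easy to check:

```python
# Checking the figures from the text: 15 minutes per Edit Check
# (write + execute both tests), 1,000 checks, $100/hr, 10x reduction.
checks = 1000
minutes_per_check = 10 + 5           # write + execute positive and negative tests
manual_hours = checks * minutes_per_check / 60
automated_hours = manual_hours / 10  # the claimed 10x reduction
saving = (manual_hours - automated_hours) * 100  # at $100/hr

print(manual_hours)     # 250.0
print(automated_hours)  # 25.0
print(saving)           # 22500.0
```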

Inconceivable? Impossible? Unlikely? Why not join our free webinar on Thursday January 10th to find out more:

https://register.gotowebinar.com/register/2929700804630029324

Or if you can't wait until then, come back tomorrow for another post on the TrialGrid approach to UAT.