
This week at TrialGrid (Feb 24, 2017)

One of the things that makes writing software for the clinical trials market different to writing software for, say, the consumer market is the need to Validate the product. I write it in scary bold font because I think this requirement does deter would-be startup founders in our industry. So how are we building software at TrialGrid in a way that meets these requirements?

First of all, there is a fundamental difference between Validated software and software that has been developed using best practice. Validated software is software that has been developed within the guiding framework of a Quality Management System (QMS). The QMS describes what you do and how you do it. An Auditor (internal or external) will compare your documented procedures to evidence that you followed the procedures.

Developing a QMS is beyond the scope of a blog post but I want to give you an overview of our process for creating quality software while collecting our documented evidence that we're following procedures.

Activities and Evidence

Imagine I am a TrialGrid programmer. I have received a requirement and I have written some code to meet that requirement, so now I want my code merged into the master copy of the product so that it can be made available to our customers.

Here are the steps that have to be covered under our process:

1. Code Review (Manual)

First I need my code peer-reviewed. Have I missed anything? Is there a better (more efficient, clearer) way to implement the feature? Does the peer understand the code well enough to maintain it in the future? The output of this step is documented evidence of the review: comments and perhaps suggestions on changes to the code. If changes are needed the cycle starts again. We do this code review in our source code control system, so that we have evidence of who proposed the code, who reviewed it and what their review comments were. The source code control system is set up so that all comments must be resolved (answered by explanation or by a change to the code) before the code can go on.

2. Coding Standards Check (Automated)

Every company has a set of coding standards which dictate how code should be written and formatted. A lot of time can be spent arguing the merits of one style over another so to avoid that we have an independent arbiter, a code compliance checking software robot that enforces our preferred style. If the submitted code doesn't make the grade then "No Soup for you!" and it is rejected. The output of this step is evidence that the checker was run and any findings it reports.
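
To give a flavour of what this looks like, here is a minimal sketch of a style gate, assuming flake8 as the checker. The tool, the options and the paths are illustrative assumptions rather than a description of our exact setup.

import subprocess
import sys

# Run the style checker over the source tree; options and path are illustrative.
result = subprocess.run(
    ["flake8", "--max-line-length=100", "trialgrid/"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True,
)

# Keep the checker's output as documented evidence of the run.
with open("style_report.txt", "w") as report:
    report.write(result.stdout)

# Any finding rejects the submission: a non-zero exit fails the build.
if result.returncode != 0:
    print(result.stdout)
    sys.exit(1)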

3. Complexity Testing (WIP, Automated)

This one is a work in progress. Code should be understandable. That means we should have code units with a clear purpose, ideally performing just one function. A complexity measurement examines the code to determine how many logical paths there are through it. Thomas McCabe wrote a paper defining such a score, cyclomatic complexity, and suggested that 10 is a good (but not magical) upper limit. We currently have 37 functions in our 23,000 lines of code with a score greater than 10. Our worst case has a score of 42, and that represents our current upper bound. Once we have simplified that code and reduced its score we'll lower the threshold to our next-highest score, with the goal of getting to a maximum of 10. Any code over the limit is rejected. As before, the output of this step is evidence that the checker was run and any findings it reports.
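
For illustration, here is a toy version of what such a complexity check does, walking the Python syntax tree and counting decision points. In practice we rely on an off-the-shelf checker; this simplified counter only sketches the idea.

import ast
import sys

# Decision points that add to McCabe's score (a simplification of the real rules).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)
LIMIT = 10

def complexity(function_node):
    # McCabe's score starts at 1 and adds 1 for each decision point.
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(function_node))

def check_file(path):
    tree = ast.parse(open(path).read())
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            score = complexity(node)
            if score > LIMIT:
                findings.append((node.name, score))
    return findings

if __name__ == "__main__":
    findings = check_file(sys.argv[1])
    for name, score in findings:
        print("%s has complexity %d (limit is %d)" % (name, score, LIMIT))
    sys.exit(1 if findings else 0)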

4. Unit/Integration Testing (Automated)

We have a simple rule. If you write code, you write a unit/integration test to prove that it does what was intended. This is enforced through running unit tests with a code-coverage tool. Code-coverage checks to see that every logical path of the code has been exercised. We have a 95% threshold (100% unit test coverage is not usually regarded as being useful - you end up writing tests just to achieve coverage rather than helping ensure the code works as intended). As I write, our coverage figure is 97% with more than 1,200 - one thousand, two hundred - unit tests. If code coverage falls below that threshold the code is rejected. The evidence from this step is a detailed report and a set of hyperlinked pages of the code which show the coverage for each file and line individually.
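
As a sketch, the gate can be as simple as running the suite under a coverage tool and failing the build below the threshold. The commands below assume pytest and coverage.py; the exact invocation is an assumption, not a recipe for our pipeline.

import subprocess
import sys

THRESHOLD = 95

# Run the whole test suite under coverage measurement.
subprocess.check_call(["coverage", "run", "-m", "pytest"])

# Produce the hyperlinked per-file, per-line evidence pages.
subprocess.check_call(["coverage", "html", "-d", "coverage_evidence"])

# 'coverage report --fail-under' exits non-zero when total coverage is below
# the threshold, which rejects the change.
result = subprocess.run(["coverage", "report", "--fail-under=%d" % THRESHOLD])
sys.exit(result.returncode)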

5. Requirements Testing (Automated)

Our Unit and Integration tests are code-based tests: code checking code. But we also need to provide traceability between user requirements, the tests that show those requirements are met, and evidence that those tests have been executed. I remember one company I worked for recording videos of testers running through manual test scripts. Happily those days are behind us. We write requirements in a fixed style:

Feature: Login
    As a security auditor I want to be sure that logins to the system are secure
    so that I have assurance that the system is not inappropriately accessed

    Background: Test Setup
        Given a user exists with username "login_test" and password "password1234"

    Scenario: Failed Login due to bad password
        When I visit the login page
        And I enter "login_test" as my username
        And I enter "BADPASSWORD" as my password
        And I try to log in
        Then I should see "Please enter a correct username and password. Note that both fields may be case-sensitive."

The steps of these tests are then automated: another software robot drives a web browser (quite eerie to watch), clicking through the application, checking what it sees on the screen (by examining the HTML content of the pages) and taking evidence screenshots. The output of this step is a set of detailed logs showing which steps passed (or failed) and a set of evidence screenshots which are later packaged into the Validation Evidence Package (see later). Of course, if any step fails here, we're back to the beginning.
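
To make that concrete, here is a hedged sketch of what step automation for the feature above can look like, assuming a behave-style runner with Selenium driving the browser. The selectors, the URL and the context.browser setup are illustrative assumptions, not our actual step code.

from behave import when, then

@when('I visit the login page')
def step_visit_login(context):
    # URL is illustrative; context.browser is assumed to be created in the test setup.
    context.browser.get("https://app.example.com/login")

@when('I enter "{username}" as my username')
def step_enter_username(context, username):
    context.browser.find_element_by_name("username").send_keys(username)

@when('I enter "{password}" as my password')
def step_enter_password(context, password):
    context.browser.find_element_by_name("password").send_keys(password)

@when('I try to log in')
def step_submit_login(context):
    context.browser.find_element_by_css_selector("button[type=submit]").click()

@then('I should see "{message}"')
def step_should_see(context, message):
    assert message in context.browser.page_source
    # Capture an evidence screenshot for the Validation Evidence Package.
    context.browser.save_screenshot("evidence/failed_login.png")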

6. Documentation Review (Automated)

TrialGrid isn't just code, it's also online help. When you write a new feature you write the help that goes along with it. This is reviewed in Step 1 but there's an (automated) process of assembling the individual help pages into a hyperlinked website. Cross-links are checked and if there are any problems (missing pages, bad links) the build process fails and we start again from Step 1.
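
As an illustration only, a documentation gate can be a couple of build commands. We assume Sphinx here purely for the sketch (the post doesn't depend on any particular help builder); warnings such as missing pages or broken cross-links are promoted to errors so the build fails.

import subprocess
import sys

# Build the help as HTML; -W turns warnings (missing pages, bad cross-references)
# into errors so a broken build fails the pipeline.
html = subprocess.run(["sphinx-build", "-W", "-b", "html", "help/", "help/_build/html"])

# Check that hyperlinks in the built help resolve.
links = subprocess.run(["sphinx-build", "-b", "linkcheck", "help/", "help/_build/linkcheck"])

sys.exit(1 if (html.returncode or links.returncode) else 0)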

7. License Checks (Automated)

Modern software is built on a foundation of libraries. For instance, to read a SpreadSheetML file I don't want to have to write a general-purpose XML parser; I'll use an Open Source one or license a commercial product that someone else has written. Package managers make adding Open Source libraries to a project very easy. As part of the documentation and build system we have a requirements file that lists our required libraries and their versions:

amqp==1.4.9
anyjson==0.3.3
appdirs==1.4.0
asgi-redis==1.0.0
asgiref==1.0.0
attrs==16.3.0
autobahn==0.17.1
...

But not all Open Source licenses are created equal. Some do not permit use in Commercial Products like ours. As you can see from that list, there is no licensing information there. To protect ourselves and our clients we need it, so we have a step in our pipeline that uses an API to pull information about these packages from their setup manifests. Most packages list their license (MIT, BSD, etc.) and we compare these to a list of pre-approved licenses. Some don't list their license, and in that case the system warns us so that we can review manually. It might take some internet searching to track down the license for those packages, after which we can add them to an exclusion list of manual approvals. Note that we pin to exact versions like amqp==1.4.9: if this library changed to version 1.5.0 and we automatically upgraded, we might find that the license had changed between versions. At the moment this tool does not reject code that makes use of non-approved or not-found licenses, but it already gives us early warning if unapproved libraries are sneaking in so that we can find or write alternatives. The output of this stage is a compliance report which goes into the Validation Evidence Package.
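
Here is a minimal sketch of this kind of check, assuming the public PyPI JSON API as the source of package metadata. The approved-license list and the exclusion list are made up for the example.

import requests

APPROVED_LICENSES = {"MIT", "BSD", "Apache 2.0", "Apache Software License"}
MANUAL_APPROVALS = {"example-internal-package"}  # hypothetical exclusion list

def check_requirements(path="requirements.txt"):
    findings = []
    for line in open(path):
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        if name in MANUAL_APPROVALS:
            continue
        # Ask the package index for this exact pinned version's metadata.
        url = "https://pypi.org/pypi/%s/%s/json" % (name, version)
        info = requests.get(url).json()["info"]
        declared = (info.get("license") or "").strip()
        if not declared:
            findings.append("%s==%s: no license declared, manual review needed" % (name, version))
        elif declared not in APPROVED_LICENSES:
            findings.append("%s==%s: license '%s' is not pre-approved" % (name, version, declared))
    return findings

if __name__ == "__main__":
    # Report-only for now: warn about findings but do not fail the build.
    for finding in check_requirements():
        print(finding)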

8. Create Validation Evidence Package (Automated)

The final step of the process is the generation of a Validation Evidence Package. This gathers up all the evidence generated in the previous steps into a set of hyperlinked documents so that Auditors who want to review our evidence can browse our license management, unit test, code coverage and requirements test evidence in a single place. Once generated, the package is automatically archived and retained in our build system.
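
For illustration, packaging the evidence can be as simple as zipping the per-step outputs together with a generated index page. The file and directory names below are illustrative assumptions, not our real layout.

import zipfile
from pathlib import Path

# Per-step evidence outputs (names are illustrative only).
EVIDENCE = ["style_report.txt", "coverage_evidence", "license_report.txt", "requirements_evidence"]

def build_package(output="validation_evidence.zip"):
    index = ["<html><body><h1>Validation Evidence</h1><ul>"]
    with zipfile.ZipFile(output, "w", zipfile.ZIP_DEFLATED) as package:
        for entry in EVIDENCE:
            path = Path(entry)
            items = sorted(path.rglob("*")) if path.is_dir() else [path]
            for item in items:
                if item.is_file():
                    package.write(str(item))
                    index.append('<li><a href="%s">%s</a></li>' % (item, item))
        index.append("</ul></body></html>")
        # A simple hyperlinked index so auditors can browse everything from one place.
        package.writestr("index.html", "\n".join(index))

if __name__ == "__main__":
    build_package()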

Summary

We didn't invent Continuous Integration (the automatic running of tests on checked-in code) and many of the steps I describe are performed routinely by other companies as part of best practice. Nor is this list comprehensive: to get to production, the code would also have to go through Installation Qualification (IQ) and Operational Qualification (OQ) steps. I think we are unusual in being so rigorous at such an early stage of our company's life and in our drive to automate so much of the process. We pay a price for that rigor: it is very annoying to have a build fail because we failed to put an opening brace in the correct place, but we think the long-term benefits are worth it.