Articles tagged with 'TrialGrid'

Overlapping Matrices

We've blogged before about Matrix features in TrialGrid: creating All-Forms and Merged Matrices, viewing large Matrices, and highlighting inactive Forms.

Recently we were asked by some of our users if we could help identify 'overlapping Matrices', i.e. a Folder/Form combination which exists in two or more Matrices (excluding the All-Forms and Merged Matrices). Checking for this is useful because Medidata Rave EDC will remove Forms when reversing a Merge Matrix Check Action, so if a Form is included in more than one Matrix, Forms can disappear from subjects unexpectedly.

However, it is very difficult to spot this problem in advance, especially on a large study. The largest we've seen has more than 500 Folders, 50 Forms, 30 Matrices, and over 14,000 Folder/Form combinations in those Matrices. Checking those for overlaps in Rave or a spreadsheet is virtually impossible.
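Conceptually, finding overlaps is a grouping problem: collect every Folder/Form combination across all Matrices and flag any combination that appears more than once. Here is a minimal sketch in Python (the data structure and names are illustrative, not TrialGrid's actual implementation):

```python
from collections import defaultdict

# Hypothetical study data: each Matrix maps to its Folder/Form combinations.
matrices = {
    "VISIT_1": [("SCREENING", "DM"), ("SCREENING", "VS")],
    "VISIT_2": [("SCREENING", "VS"), ("WEEK_1", "AE")],
    "VISIT_3": [("SCREENING", "VS"), ("WEEK_1", "AE")],
}

def find_overlaps(matrices):
    """Return Folder/Form combinations used in more than one Matrix."""
    usage = defaultdict(list)
    for matrix_name, combos in matrices.items():
        for combo in combos:
            usage[combo].append(matrix_name)
    # Keep only combinations that appear in two or more Matrices.
    return {combo: names for combo, names in usage.items() if len(names) > 1}

overlaps = find_overlaps(matrices)
# ("SCREENING", "VS") appears in three Matrices, ("WEEK_1", "AE") in two.
```

The algorithm is trivial at this scale; the value of doing it in a tool rather than by eye is that it works just as well on 14,000 combinations as on six.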

With that in mind we set to work to add features to TrialGrid Matrices to make it easy to see when Folder/Forms are used in more than one Matrix.

Searching and selecting multiple Matrices

When we first open the Matrices editor in TrialGrid the Default Matrix is displayed. Here we're using a small demo study as an example:

Default Matrix

We can zoom in:


and can search for all of the 'VISIT' Matrices and select them by clicking 'Select All':

Visit Matrices

Overlaps highlighted

Now we can see two Folder/Form combinations which are highlighted: orange (used in two Matrices) and red (used in three or more Matrices). Hovering over a cell shows which Matrices use that combination:

Highlight overlaps

This highlighting means it is easy to see where there are overlaps and find the Matrices which need to be edited.

Editing is as simple as clicking on the cells:

Matrix Edit

Merged and All-Forms Matrices

Once we're done correcting the Matrices we can quickly generate Merged or All-Forms Matrices:

Create Merged Matrix Step 1

Create Merged Matrix Step 2

Create Merged Matrix Step 3

Printing Matrices

You can print out one Matrix, or a combination of Matrices:

Print Matrix

Large Matrices

The examples above are from a small study we use to demo TrialGrid. A real study might be much larger. Here's an extract from a real study Draft which has more than 200 Matrices:

Large Matrix

In this image we are displaying 214 Matrices simultaneously and we can immediately see the overlaps.

Imagine searching through Rave Architect or an Architect Loader Spreadsheet to find them!

One more thing...

While working on these additions to Matrix features we added an easter egg for some entertainment.

'GAME ON' as our Medidata friends would say (no Matrices were harmed in the making of this clip).

Contact us if you'd like to learn more about TrialGrid and see these features in action.

Continuous Validation (2019 Edition)

Back in 2017 I wrote a blog post outlining our Continuous Validation procedure. This week Hugh O'Neill wrote an article on the PHARMASEAL Blog describing their process of Continuous Validation and it sparked some conversation on LinkedIn.

While our process remains much the same, it has been refined and tested through audits, and I hope an update on how our process works will help other organizations adopt Continuous Validation, or at least accelerate their existing process.

Our practice is a refinement of an approach used by our former colleagues at Medidata Solutions. We didn't invent it and we are still open to learning new and better ways to perform validation.

Who are we?

To understand our process it helps to understand what kind of company we are.

  • We have a small, very experienced team. We need everyone in the team to be able to do everything - coding, database admin, code review, testing etc. We don't have silos of responsibility.
  • We are a geographically distributed team. We don't have a central office or any company owned physical assets. The company doesn't own a filing cabinet - it's all in the cloud with qualified SaaS providers. Everything has to be accessible via the web - our product, our systems, our procedures and policies and our validation documentation.
  • We don't handle Subject/Patient Clinical Data. That makes us inherently less risky than, say, an EDC system.
  • We know we are not world-leading experts in Validation. None of us have an auditing background. Our interactions with Auditors are an opportunity to learn and improve. Here we describe what is working for us.

If you don't have a Quality Management System, you're Testing not Validating

I said this before but it's worth repeating. Validated software is software that has been developed within the guiding framework of a Quality Management System (QMS). The QMS describes what you do, why and how you do it. An Auditor will compare your documented procedures to evidence that you followed the procedures.

Developing a QMS is beyond the scope of a blog post, but write to me if you're interested in that topic.

Continuous Integration

ThoughtWorks defines Continuous Integration as:

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

"Automated Build" means that when changes are received from a developer they are automatically compiled into a working system. If that process succeeds, further automated tests are run on the system to check that it works as expected. These steps are usually organized as a pipeline, with each step performing a test and then passing on to the next step if its tests pass.

Here we see part of the TrialGrid CI pipeline showing 3 stages, some with multiple parallel tasks.

Merge Request

Continuous Delivery

Continuous Delivery expands on this idea with the end of the pipeline being a package that can be deployed to customers at the push of a button.

Commit Code > Compile > Run Tests > Assemble Deployable Package

Organizing a development team so that it can use Continuous Delivery is a big undertaking. It means automating every step in the process so that the output deployment package contains everything required. This is easier if you don't have hand-offs between groups. If your process involves programmers writing code and then technical writers authoring the help material, this will be hard to coordinate. It is much better if the help material is written alongside the code and they are dropped into the pipeline together.
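A pipeline like this is typically described in a configuration file checked in alongside the code. As a sketch, in GitLab CI syntax (the stage names, job names and `make` targets here are hypothetical, not our actual configuration):

```yaml
# Illustrative .gitlab-ci.yml - job names and scripts are hypothetical
stages:
  - build
  - test
  - package

compile:
  stage: build
  script:
    - make build

unit_tests:
  stage: test
  script:
    - make test

assemble_package:
  stage: package
  script:
    - make package
  artifacts:
    paths:
      - dist/
```

Each stage runs only if the previous stage succeeded, and the final stage emits the deployable package as a build artifact.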

Continuous Validation

Continuous Validation takes the idea of Continuous Delivery one step further and adds an additional output to the pipeline - a Validation Package that bundles up all the evidence that the software development practices which are mandated in the Quality Management System have been performed.

We want to do more than just drop a zip file on an auditor. By collecting evidence as data (just as Hugh O'Neill describes in his blog post) we can create a hyperlinked website, a Validation Portal, which is easy to navigate and contains all the evidence required to perform a virtual audit of the software.


Auditors have a simple rule:

If there is no evidence that something happened, it didn't happen.

Some of this evidence is generated as part of the pipeline - the output of tests, for example - but evidence of activities such as code review needs to be fed into the pipeline as data.

When we do something mandated by our Quality Management System we want to record that we did it. Ideally, recording this evidence should not put a greater burden on us; it should be a by-product of the action.

Requirements and Traceability

In validated software everything starts with a requirement. When a user points to a widget on a screen and says "Why is this here?" you should be able to direct them to a written requirement that explains its function.

This will lead to the question "How do I know what you have implemented is functionally correct?" and you should be able to guide them to a plan for testing that feature and from there to evidence that the plan was executed.

Requirement > Test Plan > Evidence of Testing

Maintaining this traceability matrix is a challenge, and since it is a core element of the Validation Package that auditors will inspect, this too has to be captured.


To organize our development efforts we use GitLab. GitLab is an open-source product which you can install yourself or use as a paid hosted service. It combines many of the tools required for the software development process:

  • Source code repository
  • Bug tracker / issues list
  • Continuous Integration pipeline management

And a lot more besides. GitHub and BitBucket are similar products.

We use git for managing changes to source code. All you really need to know about git is that it:

  1. Allows you to pull down a complete copy of the "master" copy of your source code
  2. Tracks every change you make
  3. Packages up a set of changes into a "commit" with a comment on why the changes were made
  4. Manages merges of your committed changes into the master copy

GitLab provides a way to coordinate this activity and collect data on it. When I want to merge changes into the master copy I create a record in GitLab called a "Merge Request". Here's a screenshot:

Merge Request

Notice the #1182, #1183 and #1184 - these are references to Issues stored in the GitLab issue tracker. We also put these references into our commit messages:

Commit Log

Here's the related Issue for #1184:

Issue 1184

We use GitLab issues both for bug reports and for features. You can see that this one has been labelled as a Client Suggestion and as a Feature. You can also see that GitLab is finding references to this Issue in the merge request and in the commits which are part of that merge request.

Traceability from tests to requirements

Using Issue reference numbers gives us traceability between the requirement for a feature (the Issue) and the changes to the code that addressed that feature. This is cool but probably too much information for an Auditor. They want to see a link between this requirement, a test plan and testing evidence.

We use human-readable (and computer executable) tests to exercise UI features and we tag them with @issue to create a link between a test and an issue.

Here's a test definition (a plan) for issue 1184:

  Scenario: should be the default iMedidata URL for Draft Import
    When I view the Drafts list for my Project
    And I click the "Import Draft" button
    Then I should see "iMedidata" is the Login Type
    And I should see "" as the iMedidata URL name

When this test is run it will click through the TrialGrid application just as a human would and take screenshots as evidence as it goes.

The programs that generate our validation package extract issue references from the Merge Requests and Commits and match them with the test definitions that exercise those issues and their related test evidence.
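The extraction step is mostly pattern matching. A sketch of the idea (the function and variable names here are illustrative, not our actual tooling):

```python
import re

# Issue references in GitLab look like #1182, #1183, #1184.
ISSUE_REF = re.compile(r"#(\d+)")

def issue_refs(message):
    """Return the set of issue numbers referenced in a commit or MR message."""
    return {int(n) for n in ISSUE_REF.findall(message)}

def trace(commit_messages, tests):
    """Map each referenced issue number to the tests that exercise it.

    `tests` maps a test name to the issue numbers from its @issue tags.
    """
    referenced = set()
    for message in commit_messages:
        referenced |= issue_refs(message)
    return {
        issue: [name for name, issues in tests.items() if issue in issues]
        for issue in sorted(referenced)
    }

commits = ["Fix default iMedidata URL #1184", "Refactor login form #1182 #1183"]
tests = {"default_imedidata_url": {1184}}
matrix = trace(commits, tests)
# Issues with no matching tagged test map to an empty list, which is itself
# useful: it flags requirements with no test coverage.
```

In the real package these references are pulled from the GitLab data rather than hard-coded strings, and joined with the screenshots captured during the test run.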

Here's a short sequence that shows the linkage between tests / evidence / issues and merge requests in the validation package.

Using this method, it is easy to maintain traceability, and the hyperlinked navigation has been well received by auditors used to dealing with requirements documents written in Word, traceability matrices managed in Excel spreadsheets, and evidence in ring binders.

Code Review

At the end of the previous video I scroll through the commits and their comments which formed part of a Merge Request.

An example comment:


When a Merge Request is created it displays all the changes made to the code and provides an opportunity to review and make comments. A Merge Request is blocked and code cannot be merged into the master copy until all comments have been resolved. This is our opportunity to perform code review. All comments and responses, including changes to the code as a result of comments are captured by GitLab.

We use GitLab APIs to pull out the comment history on Merge Requests and include it in the validation package as evidence of code review and of acceptance of the changes made to the code.
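GitLab's Merge Request notes endpoint returns each comment with its author and a `system` flag distinguishing automated entries from human review comments. A sketch of the filtering step, run on a made-up sample payload (the real code fetches this JSON over the API first):

```python
# Sample of the shape returned by GitLab's MR notes API; the content is
# invented for illustration. "system" notes are automated entries such as
# "approved this merge request"; human comments have system=False.
sample_notes = [
    {"body": "approved this merge request", "system": True,
     "author": {"username": "reviewer1"}},
    {"body": "Please rename this variable", "system": False,
     "author": {"username": "reviewer1"}},
    {"body": "Done, renamed in the next commit", "system": False,
     "author": {"username": "developer1"}},
]

def review_comments(notes):
    """Keep only human review comments, as (author, comment) pairs."""
    return [(note["author"]["username"], note["body"])
            for note in notes if not note.get("system")]

evidence = review_comments(sample_notes)
```

The resulting pairs go into the validation package alongside the Merge Request they belong to, so an auditor can see both the review conversation and its resolution.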

Generating Validation Packages

At TrialGrid we practice Continuous Delivery. That doesn't mean that if a code change passes our pipeline it is deployed directly to the Production environment. We decide when to deploy to production and trigger the automated deployment at times agreed with our customers.

We do deploy immediately to our Beta environment when the CI pipeline succeeds and a Merge Request is approved.

In addition, every Merge Request that passes the pipeline is deployed to our Development environment where it can be smoke tested by the reviewer before they approve the contents of the Merge Request.

We generate a validation package for every run of the pipeline. We only keep and sign off on packages that relate to a Production deployment, but we are constantly generating these packages.

In effect, we are making the validation package part of the software development process. If a developer changes code in such a way that it affects the generation of the validation package then the pipeline will fail. The software may be fine, all tests may pass but if the validation package isn't generated correctly the whole pipeline fails and the developer must fix it.

For production releases we have a manual process of review for the validation package and sign off via electronic signature. Our CI pipeline has around 40 steps, many running in parallel, and completes in about 60 minutes. The validation package it generates weighs in at 1.3GB with about 500MB of images. This gets deployed automatically to Amazon S3 as a password protected resource ready for virtual audit.


I hope this glimpse into our Continuous Delivery / Continuous Validation approach is helpful. We are constantly refining this process and welcome new ideas.

In customer audits we have been able to demonstrate that we have our software development process under control and provide evidence that we are doing the things required by our Quality Management System.

Categorizing Diagnostic results into High, Medium and Low Importance

With more than 100 Diagnostic checks (and more planned) TrialGrid is able to identify many quality, standards and best practice issues in Rave Study Builds. But if the Diagnostics identify 100 issues, which should you work on first? What should get priority?

As with so many things, it depends. Not all TrialGrid Diagnostics are appropriate for every study. For example, there are a number of Diagnostics which relate to Rave EDC (formerly RaveX). If you are using Rave EDC for your study then findings from these Diagnostics are high priority. If you are using Rave Classic then findings from these Diagnostics are either of low priority or the Diagnostic should not be used at all.

This week we released a new feature which allows an Importance of High, Medium or Low to be assigned to Diagnostics used in your Project. So in Project 1 a Diagnostic might be of High importance while in Project 2 the same Diagnostic is of Medium or Low importance.

Any Diagnostic which is Active for a Project can now have its importance set:

Setting Importance

This importance level is also exported in Diagnostic results. It is very useful in the Excel exports, allowing results to be filtered:

Exporting Importance

And in the Diagnostic results themselves you can also filter by Importance level:

Importance Filtering

All Diagnostics have a default of "Medium" importance. Adjust this as required for your individual Project.
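For teams post-processing the exported results themselves, the importance field makes prioritization a one-line filter. A sketch (the field names and result rows here are hypothetical, not TrialGrid's actual export format):

```python
# Hypothetical exported Diagnostic results.
results = [
    {"diagnostic": "Rave EDC migration check", "importance": "High"},
    {"diagnostic": "Naming convention check", "importance": "Medium"},
    {"diagnostic": "Legacy derivation check", "importance": "Low"},
]

def by_importance(results, *levels):
    """Filter exported Diagnostic results to the given Importance levels."""
    wanted = set(levels)
    return [row for row in results if row["importance"] in wanted]

# Work the High-importance findings first.
urgent = by_importance(results, "High")
```

The same filter works whether the results come from the Excel export or the on-screen results view.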

We hope these features help our customers who are communicating Diagnostic results back to study build teams for action.