Articles tagged with 'TrialGrid'

Tracing Bug Fixes to the Code

One of the key things that auditors want to see in validation documents is traceability between features, tests of those features and the evidence that the tests were executed. Normally this is presented as a requirements specification, a set of test scripts which reference the specification and records that the test scripts were executed. For product management and for an auditor this provides assurance that you planned what you wanted to create, you developed acceptance criteria and you tested the results.

At TrialGrid we record our specifications as Issues in a GitLab repository and we reference the unique identifiers for these issues in our automated test scripts and in commit messages. I outlined this in our recent blog post. For this discussion the important thing to know is that for every Issue we have a summary page in our validation portal which pulls data from GitLab to display a description, history and links to requirements tests for that Issue.

What about traceability between bugs and bug-fixes?

We also use Issues to track bugs. Some bugs have a visual component, and we include tests that these issues are fixed as part of our automated requirement test scripts. These "Bug Issues" then have traceability from the Issue to the tests that show they are fixed, just like a "Requirement Issue".

However, not all bug fixes have a visual component so we would end up with some Issue pages with no hyperlink traceability to any proof that the bug had been fixed and tested.

Is this a big deal? In my experience, most organizations don't provide this kind of traceability. Bug fixes are treated separately from requirements, and references to fixed bugs go in the release notes. There may be a unit test that proves the bug was fixed, but maintaining a traceability matrix between unit tests and fixed bugs is more work than most teams are willing to take on.

Still, it bothered us that there was this lack of traceability in our validation docs. In our unit testing code we would make reference to an issue. For example:

# issue 1359 
def test_result_suffix_inactive_field(self): 
    """Diagnostic should generate one result for suffix when field is inactive""" 
    self.params["ignore_inactive"] = "false" 
    ...

A comment like # issue 1359 makes reference to the issue; what we needed was a way to find these kinds of references and match them up with the pages in the validation docs via a hyperlink to the code.

Code is data

Our source code coverage analysis system already generates hyperlinked code listings which we include in our validation package. These listings allow us to create hyperlinks which will take the reader to an exact line of a particular file. What we needed was a way to read all our source code, identify the issue comments and make an association between an issue and the file and line where the issue is referenced.

Happily for us, Python has good support for analysing source code, so we were able to get this working in just a few hours.
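As a minimal sketch of the approach (illustrative only, not our production implementation), the standard tokenize module can pick out every # issue NNNN comment along with its file and line number:

import re
import tokenize
from pathlib import Path

ISSUE_RE = re.compile(r"#\s*issue\s+(\d+)", re.IGNORECASE)

def find_issue_references(root):
    """Yield (issue_number, filename, line) for every '# issue NNNN' comment."""
    for path in Path(root).rglob("*.py"):
        with tokenize.open(path) as source:
            for tok in tokenize.generate_tokens(source.readline):
                if tok.type == tokenize.COMMENT:
                    match = ISSUE_RE.search(tok.string)
                    if match:
                        yield int(match.group(1)), str(path), tok.start[0]

# Group references by issue so each issue page can link to file and line
references = {}
for issue, filename, line in find_issue_references("tests"):
    references.setdefault(issue, []).append((filename, line))

The end result is hyperlinks from an issue page: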

Issue Summary

to the source code that exercises the issue:

Source Code Link

Summary

We don't expect that auditors will want to review the source code of unit tests, but we think that providing traceability to bug fixes is useful and meets the needs of auditors who want to see evidence.

We're not yet done with improving traceability in our validation package. More on our ongoing efforts in a future blog post.

Documenting Standards with Annotates

One of the challenges of maintaining standard libraries in Rave Architect is that it is not possible to add any kind of notes or extended metadata on library objects. For example, a standard form might have the following information associated with it:

  • A unique version number
  • A description of the form and its intended usage
  • Copyright holder and licensing information (e.g. for copyrighted scales)
  • SDTM domain associated with the Form
  • Rules regarding changes which are allowed to be made to the Form and still be standards compliant

Since Architect can't store this information, it has to be maintained in some separate system. That might be a Metadata Repository (MDR), a set of modified Architect Loader Spreadsheets or a collection of Word documents. However it is managed or presented, that metadata isn't available in Rave Architect, where the study builder is doing their work.

So how does TrialGrid help?

Additional Properties for Forms and Fields

First of all, TrialGrid allows you to add additional metadata properties to Forms and to Fields.

Here we add a property "Field Set" to all fields:

Field Set Property

And we do the same for the other properties we want:

  • SDTM identifier for fields
  • Instructions, Copyright, Version and SDTM Domain for Forms

Properties

Note that the Version and SDTM Domain properties for Forms and the Field Set property for Fields are marked as "show in lists". This tells the system to include these values in the Form and Field lists respectively.

Now that we have added these additional metadata properties, we can update them in our Form designer:

Setting Properties

Now our Form listing contains the Version and SDTM Domain for that form:

Form Listing

We can also complete some property settings for Fields:

Setting Field Properties

Note that for the Collection date we have entered a field set of COLLECTION_PLUS_AGE. In this form we should be using either the COLLECTION_DATE and AGE fields or the Date of Birth field. Later we'll use this custom property to display different sets of fields in different colors in our annotates.

Exporting and Importing Custom Properties

So far we've added custom properties to our Forms and Fields. This provides useful additional information to Study Builders inside the TrialGrid Form Editor. That's great if you're only using TrialGrid to manage your standards and build studies, but what if you want to load those custom properties into TrialGrid from an MDR export, or feed the information captured in TrialGrid to other downstream processes?

TrialGrid can import and export this data as part of the Architect Loader Spreadsheet (ALS):

Properties in the ALS

It's simply added as an extra sheet in the ALS, which Rave Architect will ignore.
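If you want to feed that sheet into downstream tools, it can be read with standard libraries. Here's a hypothetical sketch using pandas (the sheet and column names are assumptions, not the actual export format):

import pandas as pd

# Read the custom-properties sheet out of the ALS workbook
props = pd.read_excel("study_draft_als.xlsx", sheet_name="CustomProperties")

for _, row in props.iterrows():
    print(row["ObjectType"], row["ObjectName"], row["Property"], row["Value"])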

Including Custom Properties in Annotate output

Part of the reason we add additional metadata to Forms and Fields is so that we can communicate this information to other team members who may not be study builders. Including this metadata in an annotate is one good example, so let's do that.

Annotates are a new feature of TrialGrid: they allow you to define a template which can be used to generate a Microsoft Word document. We start with a simple annotate that just shows the basics of our Demographics form:

Annotate Step 1

So far it contains none of the additional metadata which we just added. We'll need to edit the annotate template to do that. At the moment only TrialGrid employees can modify templates, but I'm showing how it is done here because we expect that users will have the option to modify their own templates in the future.

Adding Instructions

The format of annotate templates is very similar to HTML. Here we add a level 4 heading, grab the "Instructions" custom property and, if it is defined (it may not have been set for all Forms), insert its value into a paragraph before the layout:

Annotate Step 2

We can add the Form Version and Copyright properties to the table of standard Form properties, using markup like this to add a new row:

Adding a new Row

that generates an annotate like:

Annotate Step 3

Next we want to add the SDTM annotations. Typically these are done in color. We could choose any background color for these sections of text, but let's go with aqua. Here's how we add the colored text to the field:

Adding SDTM field

And we do something similar to the top of the form to add the SDTM domain there:

Annotate 4

Lastly we want to highlight the rows of different Field groups to show those that should not be used together. The logic for this is reasonably complex but can be done within the template system in 8 lines. We need to look at every field and determine if it has a "Field Set" property set. For each unique "Field Set" value we assign a unique color from a list of colors we provide.
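The sketch below shows the shape of that logic in Python (illustrative only; the real version lives inside the template system):

# Assign one highlight color per unique "Field Set" value from a fixed
# palette, so related fields share a color in the generated document
PALETTE = ["aqua", "khaki", "lightgreen", "plum", "lightsalmon"]

def assign_field_set_colors(fields):
    colors = {}  # field set value -> highlight color
    for field in fields:
        field_set = field.get("Field Set")
        if field_set and field_set not in colors:
            colors[field_set] = PALETTE[len(colors) % len(PALETTE)]
    return colors

# e.g. assign_field_set_colors([{"Field Set": "COLLECTION_PLUS_AGE"},
#                               {"Field Set": "DOB"}])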

The result is:

Annotate 5

The colors used here make it clear which groups of fields belong together. Note that this isn't a built-in feature of TrialGrid; it's something we created ourselves with custom properties and some modification of our annotate generator. If you have a different system or different metadata you want to capture and show in annotates, this can easily be achieved!

Summary

In this post we showed how:

  1. Custom metadata (properties) can be defined in TrialGrid for Forms and Fields.
  2. Properties can be edited in the Form editor.
  3. Property values appear in system listings such as the Form list.
  4. Properties can be imported into and exported from the TrialGrid system using the Architect Loader Spreadsheet.
  5. Properties can be included in the new Annotate Templates system to document standards, usage instructions and key metadata such as SDTM annotations.

We're barely scratching the surface of what you can do with the annotate template system. Essentially it's a report writing engine that has access to all the data in an Architect Study Draft. That includes all the extra information that TrialGrid captures and that we haven't yet mentioned here - user comments on design objects, the audit trail of changes to objects, the standards compliance workflow state of an object, custom labels (such as workflow state) applied to an item and a whole lot more.

Contact us if you have particular needs for annotates or Draft reports. We can help!

Quality Management System and Word

If you work in a regulated environment then the work you do will be guided by the policies and procedures outlined in your Quality Management System (QMS). Typically the QMS documents are embodied as a set of Microsoft Word or PDF documents stored on an intranet or (think of the trees!) printed and placed in a giant binder.

Word is a bad tool for maintaining QMS documents

At TrialGrid we knew from the start that using Word to author our QMS was not going to work for us. Here are some reasons:

  • Link management. A QMS is highly inter-linked: a policy document references an SOP, which references a work practice, which in turn references a form template. Word documents were not designed to maintain those kinds of links.

  • Review management. Documents in a QMS need to be reviewed, and while Word provides excellent review tools you mostly end up emailing different versions around which then have to be manually reconciled.

  • Formatting and layout. The infinite customizability of Word means that you have to maintain strict control over Word Templates so that authors use the correct heading and bullet styles. Consistency is important, but the cost of maintaining it with Word is too high.

  • Lack of searchability. You can search an individual Word document, but if you want to search your entire QMS you'll need third-party tools that can index and search the documents.

  • Versioning. It is typical for every document in the QMS to have its own version and version history. Are you sure the version of the Work Form you're using goes with the version of the SOP?

Our conclusion? Word is the wrong tool for the job, and investing in a document management system like Documentum or SharePoint to try to make up for its shortcomings just compounds the wrongness.

The TrialGrid Approach: QMS as a Software Project

Our approach is to treat the QMS as a software project. We write all our documents in a markup language called reStructuredText. This is plain text that uses asterisks and other symbols to denote bold, italic, headings and so on:

A Heading
=========

1. A **bold** bullet item.
2. An *italic* bullet item.

Refer to :ref:`this Procedures Index reference <procedures_index>`

We use a document generation system called Sphinx to collate all these plain text files into a hyperlinked website just like compiling a software application.

Benefits

  • Treating documents as source code files means we can use software tools we are familiar with to compare and review changes. This includes capturing review comments and history to prove that our QMS content is being scrutinized, something that is very hard to do with Word documents.

  • The automated construction of documents from source code means that all files have exactly the same formatting applied. No fighting with margins, no arguments over heading styles or indents. Total consistency.

  • The build process checks references between documents and fails if they are not consistent. We spend minimal time maintaining links.

  • Constant values can be defined and applied across all documents. For example, we define the company name as part of the configuration and it is automatically substituted into every document. Should the company name or the product name change, we update a single place, regenerate the entire system, and everything stays consistent (see the sketch after this list).

  • The resulting website has full text search built in which makes navigating it really easy.

  • We can use automation so that when a new version of the QMS "application" is approved and has passed all tests (e.g. link checking) then the website can be automatically generated and deployed - removing the old version from circulation.

  • The QMS is versioned as a single unit, just like a software application. There is no individual document versioning, the entire QMS is versioned (e.g. version 6) and released as a unit.
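As a sketch of how that constant substitution works in Sphinx (the project values here are illustrative, not our real configuration), conf.py can define substitutions once and prepend them to every source file via rst_prolog:

# conf.py
project = "TrialGrid QMS"
version = "6"

# rst_prolog is prepended to every .rst source file, so |company| and
# |product| expand to the same values throughout the whole QMS
rst_prolog = """
.. |company| replace:: TrialGrid Ltd.
.. |product| replace:: TrialGrid
"""

Any document can then write |company| in its text and the build substitutes the configured value.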

Checking for Errors

Nothing is perfect, and in a recent audit it was noticed that the footer on all our QMS pages was showing "Version 6" while the version history for the website showed it was version 8. We had updated the revision history table but forgotten to change a value in the configuration. A small finding, but an embarrassing one.

This was easily fixed, but how could we prevent it from happening again? In a traditional QMS management process you would add a new line to a release checklist: "[ ] Check version number". But adding a checkbox to a manual checklist doesn't guarantee that the task is done, and these kinds of checks build up like scar tissue in your process, reminding you of past wounds and slowing you down.

In our process we can fix this with software. We added a 20-line program to our build process that checks the latest (manually entered) version in the revision history text file against the configuration value. If they don't match, the build fails with an error message that makes clear how to fix it.
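A sketch of that check (the file names, formats and patterns here are assumptions, not our actual script):

import re
import sys
from pathlib import Path

# Last version number entered by hand in the revision history document
history = Path("source/revision_history.rst").read_text()
history_version = re.findall(r"Version\s+(\d+)", history)[-1]

# Version defined once in the Sphinx configuration
conf = Path("source/conf.py").read_text()
conf_version = re.search(r'version\s*=\s*"(\d+)"', conf).group(1)

if history_version != conf_version:
    sys.exit(f"Version mismatch: revision history says {history_version} "
             f"but conf.py says {conf_version}")
print(f"Version check passed: version {conf_version}")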

Summary

Managing a QMS as source code may not work for every organization, but it is working well for us. Auditors accept that we have the QMS under appropriate management controls and appreciate the ease of navigation and searchability of the content, so overall we are glad we avoided Word for this use case.

How do you manage your QMS?