Articles tagged with 'Rave'

Save 20-30% on Edit Check Builds

Andrew and I are on a mission to reduce the cost and effort of building Rave studies by 50%. It's an ambitious goal but nothing really worth doing is easy.

One of the most costly areas of study build is the writing and testing of Edit Checks. So let's take a look at Edit Checks and where the costs are.

Three levels of Edit Check logic

In the previous post we looked at the three levels of edit check logic:

  • Field Checks (Range, IsRequired, QueryFutureDate etc)
  • Configured Checks (Rave Edit Checks)
  • Custom Functions

Field Checks can be set up with a few clicks and some data entry for expected high and low ranges. They are extremely fast and easy to set up and require little or no testing since they are features of the validated Rave system. Field edit checks are so easy that we're giving them a value of $1 for all the checks set on a field (Is Required, Simple Numeric Ranges, Cannot be a Future Date etc). That doesn't mean they literally cost $1 to include in your study. Depending on how you build, your staffing costs, how luxurious your offices are and so on, your price will vary. $1 is just a good baseline figure to compare other costs against.

Configured Checks are written using Rave's Edit Check editor, which uses a postfix notation (1 1 + 2 isequalto). Rave Edit Checks are flexible and very functional, but every Edit Check has to be specified, written and tested, making it more expensive to create than a simple Field Check. You also need a more skilled study builder to write a Configured Check. So let's say $10, on average, to create a Configured Edit Check. Again, $10 is not a literal cost, it's just a comparison.

Lastly we have Custom Functions. These are written in C#, VB.NET or SQL and require some level of true programming expertise. Custom Functions are the fallback, the special tool in the toolbox for the truly complex situations. Besides the difficulty of hiring (and keeping) good programmers in the current technical market, Custom Functions have to be specified, reviewed for coding standards and performance impact, and tested. We'll say, conservatively, $50 for the development of a Custom Function. Once again, $50 is just a cost relative to the $1 Field Check, since the average Custom Function is at least 50x more complex than a Field Check.

Study Averages

There is no such thing as an average study; the size and complexity of a study depend on its Phase, Therapeutic Area and many other variables. But we have seen a lot of trials over the years, so we'll illustrate costs with what we think is fairly typical: a study with around 1,000 data entry fields, 1,000 Configured Edit Checks and 100 Custom Functions.

Given those numbers we can draw a graph that shows how the Edit Checks in our study stack up.

[Image: typical Edit Checks by type]

A graph of the costs is also enlightening:

[Image: overall cost by Edit Check type]

The bulk of the cost is in the Configured Edit Checks ($10,000 of the $16,000 total), but those 100 Custom Functions still account for around 30% of the cost.

How to reduce the cost?

Field Checks are so easy that there is little that could be done to make creating them more efficient, but there is scope for improvement in Configured Checks and Custom Functions. How could we reduce the costs of those?

At TrialGrid we're attacking this challenge with CQL, the Clinical Query Language. CQL is an infix format for Rave Configured Edit Checks which is easy and fast to write and which has built-in testing facilities.

An Edit Check with CQL (infix) logic like:

A > B AND (C == D OR C == E)

would be translated into Rave Edit Check (postfix) logic like:

A
B
ISGREATERTHAN
C
D
ISEQUALTO
C
E
ISEQUALTO
OR
AND

CQL also includes a set of built-in functions that automatically generate Custom Functions for you.

For example, we have been asked for an Edit Check that determines if a text field contains non-ASCII characters. Using it in a CQL expression is easy:

AETERM.IsNotAscii

The TrialGrid application takes care of generating the Custom Function. You'll still need some bespoke Custom Functions but fewer and fewer as time goes on and we build more into CQL.

We (conservatively) estimate that CQL can save a Clinical Programmer or Data Manager 50% of the effort of writing Configured Edit Checks, and that the generation of Custom Functions will reduce the number of Custom Functions that have to be hand-written by at least 10%. When we plug these numbers into our costings for our example study, the price drops from $16,000 to $10,500, a saving of around 34%.
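
For anyone who wants to check the arithmetic, here is a small sketch (in C#, one of the languages used for Rave Custom Functions) of how the $16,000 and $10,500 figures fall out of the $1/$10/$50 model. The counts and the 50%/10% assumptions come from this post; this is an illustration, not a real costing tool.

// Relative cost model from this post: $1 Field Check, $10 Configured Check, $50 Custom Function.
int fieldChecks = 1000, configuredChecks = 1000, customFunctions = 100;
decimal baseline = fieldChecks * 1m + configuredChecks * 10m + customFunctions * 50m;              // 1,000 + 10,000 + 5,000 = 16,000
// With CQL: Configured Checks take half the effort and 10% of Custom Functions are generated.
decimal withCql = fieldChecks * 1m + configuredChecks * 10m * 0.5m + customFunctions * 0.9m * 50m; // 1,000 + 5,000 + 4,500 = 10,500
System.Console.WriteLine($"Saving: {1 - withCql / baseline:P0}");                                  // ~34%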

[Image: overall cost after CQL savings]

Who wouldn't want that?


Have you hit the Edit Check Wall?

Anyone who participates in endurance sports such as cycling or running will have heard of The Wall. It is the point at which the athlete exhausts their glycogen stores, resulting in sudden fatigue and the inability to go on.

As a Data Manager in a Study Builder role, the chances are that you have experienced something similar: the point at which the logic for an edit check becomes too complex and you have to fall back on a Custom Function. This is the Edit Check complexity "Wall".

Three levels of Edit Check logic

Essentially Rave has three levels of edit check logic:

  • Field Checks (Range, IsRequired, QueryFutureDate etc)
  • Configured Checks (Rave Edit Checks)
  • Custom Functions

Field Checks can be set up with a few clicks and some data entry for expected high and low ranges. They are extremely fast and easy to set up and require little or no testing since they are features of the validated Rave system.

Configured Checks are written using Rave's Edit Check editor, which uses a postfix notation (1 1 + 2 isequalto). Rave Edit Checks are flexible and very functional, but every Edit Check has to be specified, written and tested, making it an order of magnitude more expensive to create than a Field Check. The learning curve for Configured Checks is quite steep since most of us were taught infix notation in school (1 + 1 == 2).

Lastly we have Custom Functions. These are written in C#, VB.NET or SQL and require some level of true programming expertise. Custom Functions are the fallback, the special tool in the toolbox for the truly complex situations. Besides the difficulty of hiring (and keeping) good programmers in the current technical market, Custom Functions have to be specified, reviewed for coding standards and performance impact as well as tested. Because of the level of skill required we want to write as few Custom Functions as possible.

Costs and learning curves

A graph of the learning curves for the different Edit Check logic types might look like this:

[Image: learning curves for the three levels of Edit Check logic]

As we can see from the image, Field Checks have a very fast learning curve but they don't get you to a very high level of complexity. Learning Configured Checks can be done quite quickly for the basics, but mastery takes longer, and eventually you reach the Wall, where the complexity of a specified check means that you will need a Custom Function. We are all familiar with the most simple Custom Function:

return true;

But doing anything more complicated takes technical training.

Mitigation

The Wall represents the transition from Configured Checks to those requiring Custom Functions. We know that writing Custom Functions is expensive, so we want to reduce reliance on them and move the Wall further away. Some strategies which can be used to do this are:

  • Have standard/parameterized Custom Functions. For instance, instead of writing a Custom Function to compare specific date and time values, create a parameterized function which can be used for any date and time comparison. These types of standard functions don't need the same level of validation as a bespoke Custom Function (a sketch of this idea follows the list).

  • Analyse the Edit Checks you have written in the past and the queries that they generated. Research on Edit Check complexity in Medidata Rave studies found that the most complex edit checks were the ones least likely to fire. If an Edit Check needs logic so complex that it requires a bespoke Custom Function, you may be better off using a manual listing or running the check as part of other back-end checks.
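
As a rough sketch of the first strategy (illustration only; a real Rave Custom Function has Medidata's own entry point and object model, which is omitted here), a single parameterized comparison can replace many near-identical date checks:

// Illustration of a parameterized comparison helper. A real Rave Custom Function
// would wrap logic like this behind Medidata's own signature, not shown here.
using System;
public static class DateChecks
{
    // One function covers "start before end", "onset after dosing" and similar checks,
    // with the comparison operator passed in as a parameter.
    public static bool Compare(DateTime left, string op, DateTime right) => op switch
    {
        "<"  => left < right,
        "<=" => left <= right,
        ">"  => left > right,
        ">=" => left >= right,
        "==" => left == right,
        _    => throw new ArgumentException($"Unknown operator: {op}", nameof(op))
    };
}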

What we are doing

At TrialGrid we're attacking this challenge with CQL, our Clinical Query Language. CQL is an infix format for Rave Configured Edit Checks.

An Edit Check with CQL (infix) logic like:

A > B AND (C == D OR C == E)

would be translated into Rave Edit Check (postfix) logic like:

A
B
ISGREATERTHAN
C
D
ISEQUALTO
C
E
ISEQUALTO
OR
AND

In fact the translation works both ways: Rave Edit Checks can be instantly translated into CQL, and CQL can be translated instantly back into Rave Edit Checks. There is no lock-in here; CQL translates into pure Rave Edit Checks. Since infix notation is what we all learned in school, CQL is much easier to learn.
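
To give a flavour of what that translation involves (a general sketch of infix-to-postfix conversion, not TrialGrid's actual implementation), the classic shunting-yard algorithm produces exactly the step order shown above:

// Illustrative only: infix to postfix via the shunting-yard algorithm. TrialGrid's
// real translator also handles Rave check functions, data dictionaries, folders etc.
using System.Collections.Generic;
static class InfixToPostfix
{
    // Higher value binds more tightly: comparisons above AND, AND above OR.
    static readonly Dictionary<string, int> Precedence = new()
    {
        ["OR"] = 1, ["AND"] = 2, [">"] = 3, ["=="] = 3
    };
    public static List<string> Convert(IEnumerable<string> tokens)
    {
        var output = new List<string>();
        var ops = new Stack<string>();
        foreach (var token in tokens)
        {
            if (Precedence.ContainsKey(token))
            {
                while (ops.Count > 0 && Precedence.TryGetValue(ops.Peek(), out var p) && p >= Precedence[token])
                    output.Add(ops.Pop());
                ops.Push(token);
            }
            else if (token == "(") ops.Push(token);
            else if (token == ")")
            {
                while (ops.Peek() != "(") output.Add(ops.Pop());
                ops.Pop(); // discard the "("
            }
            else output.Add(token); // operand: a field reference or a value
        }
        while (ops.Count > 0) output.Add(ops.Pop());
        return output;
    }
}
// "A > B AND ( C == D OR C == E )" becomes: A B > C D == C E == OR AND
// (">" and "==" correspond to the Rave check steps ISGREATERTHAN and ISEQUALTO)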

But we can do more. CQL includes a set of built-in functions that look like Rave Edit Check functions (IsEqualTo, IsPresent etc) but which automatically generate Custom Functions for you.

For example, we have been asked for an Edit Check that determines if a text field contains non-ASCII characters. Providing a standard Custom Function to do that is easy enough, but we go one further and integrate it into CQL:

AETERM.IsNotAscii

To the user this looks no more complex than the standard Rave IsNotEmpty test:

AETERM.IsNotEmpty

The TrialGrid application takes care of generating the Custom Function. You'll still need some bespoke Custom Functions but fewer and fewer as time goes on and we build more into CQL.
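
The generated code itself is straightforward string handling. As a rough sketch (the actual Custom Function that TrialGrid emits, and the Rave plumbing around it, are not shown here), the core test is simply:

// Sketch of the core "is not ASCII" test a generated Custom Function might use.
using System.Linq;
public static class TextChecks
{
    public static bool IsNotAscii(string value) =>
        !string.IsNullOrEmpty(value) && value.Any(c => c > 127);
}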

Why wait?

But why wait? TrialGrid allows you to create these function templates yourself, extending CQL and your Rave Edit Checks with your own private functions that become part of the CQL language. Want to know if AETERM.IsSigned? or AETERM.HasOpenQuery? Add them to CQL and give your Custom Function programmers more interesting work to do.

Configuration, not Programming

By using TrialGrid, Edit Checks that would previously have required Custom Functions can now be done by configuration. Our graph looks more like:

[Image: learning curves with TrialGrid, the Wall moved further away]

The Wall is moved further away and the learning curve is made much flatter. This is more than just a nice-to-have: it means more productive Study Build staff and reduced costs. Another step on our journey to reduce the time and effort of Study Build by 50%.

Interested in improving your Rave study build efficiency? Contact us to find out how TrialGrid can help.

Brick wall image by FWStudio


Better Rave Study Build Quality, Faster

To get the best out of Medidata Rave there is a set of Best Practices which should be followed when building studies. For example, Rave’s Clinical Views will not allow reserved words to be used as Field OIDs; Medidata publishes Rave Technical Note 22, which lists these reserved words in detail. In total the list runs to 8 pages with more than 200 reserved words.

Many organisations adopt a checklist approach to try to ensure quality. These checklists can easily contain 100 or more recommendations. For many of the items on these checklists, the easiest way to check for conformance is to download the Architect Loader Spreadsheet (ALS) and use the filtering in Microsoft Excel to inspect data values. Corrections can then be made to the ALS and it can be re-loaded into Rave Architect.

Example checks to be performed manually in this way include:

  • Does the input control (Checkbox, Text, LongText, RadioButton) match the data format?
  • Does the data format for Data Dictionary fields allow for the longest coded value length?
  • Are derived fields easily identified by a standard naming convention?
  • Are Field OID lengths compatible with SAS v5/v6?
  • Has RecordPosition been correctly used in Edit Checks for standard fields?
  • Do all Fields using the same Data Dictionary have identical data formats?

Completing these checks, documenting that they have been performed and explaining any deviations is so time-consuming that generally these kinds of checklists are only used as a final quality check before go-live or before a major amendment.

The TrialGrid Way

Our mission is to improve the quality of Medidata Rave study builds and to reduce the burden of assuring that quality. We take these pre-flight checklists and turn them into Diagnostics, automated quality checks that can be run at any time against the study build. Ideally these quality checks would be performed during study build when small problems can be quickly fixed.

This is exactly what Diagnostics were designed to do. Study builders can run the Diagnostics at a click of a button and fix issues just as quickly. If the Diagnostic identifies an issue, applying the suggested fix can be done with a single click. The change is audit trailed and if there is a deviation that requires explanation it can be entered right there so that any future run of that Diagnostic for the project will not report on that explained issue.
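
To give an idea of the kind of rule a Diagnostic automates, here is a sketch of the Field OID length check from the list above (SAS v5 transport files limit variable names to 8 characters). The Field OIDs are made up for illustration and this is not the actual Diagnostic code:

// Sketch only: the sort of rule a Diagnostic encodes. In TrialGrid the Field OIDs
// come from the study design; here they are hard-coded for illustration.
using System;
using System.Linq;
var fieldOids = new[] { "AETERM", "AESTDAT", "CONCOMITANTMEDNAME" };
var tooLong = fieldOids.Where(f => f.Length > 8);
foreach (var oid in tooLong)
    Console.WriteLine($"Field OID '{oid}' exceeds the 8-character SAS v5 limit.");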

TrialGrid Diagnostics take a time-consuming activity that requires expert knowledge and transform it into a few clicks to get assurance of conformance to best practice and a full PDF report output to document the results. With more than 50 Diagnostics (and more being added all the time), using TrialGrid could save your Study Builders hundreds of hours of manual effort.

Interested? You can read more about TrialGrid features on our tour page