Articles tagged with 'Custom Functions'

Developing Custom Functions for Medidata Rave

Writing Custom Functions in Medidata Rave Architect can be a frustrating experience. The editor provides the basics for entering and validating Custom Function code in C# but little more. So what are the alternatives?

Rave Custom Function Editor

Visual Studio

Many Custom Function programmers do not develop Custom Functions in Architect. Instead they rely on Visual Studio, which provides line numbering, syntax highlighting and all the other benefits of a modern source code editor. Properly configured with the Medidata Core Object DLLs, Visual Studio can also provide IntelliSense for the Rave Core Object Model classes.

In the most basic usage, a programmer writes their Custom Function code in Visual Studio and then cuts and pastes it into the Architect Custom Function edit window to validate and save it.

Custom Function Debugger

Programmers with Visual Studio and access to the Custom Function Debugger and ROAD (Rave On A Disk) have the best possible experience, since they can connect directly to a running Rave instance to debug and test Custom Functions. This setup is highly recommended for Custom Function programmers, but arranging access to ROAD can be difficult.

What if you don't have Visual Studio?

The majority of Rave study builders do not have access to the Custom Function Debugger and cannot install Visual Studio on their computers, so they are stuck with the basic Custom Function editor in Architect. To develop Custom Functions of any complexity you will need a guide to the available classes and methods of the Rave Object Model, such as DataPoint, Record and Subject. Happily, Medidata has published a list of allowed methods as part of the Custom Function documentation on learn.mdsol.com. This is recommended reading and lets you look up method signatures without relying on guesswork or scanning other Custom Functions for examples.

The TrialGrid Custom Function Editor

Organizations with access to TrialGrid may use our advanced version of the Custom Function editor, which includes many of the features of a programmer's editor like Visual Studio:

  • Syntax highlighting
  • Line numbering
  • Search/Replace
  • Auto-formatting
  • Undo/redo for editing
  • Code folding
  • Validation
  • Autocomplete

TrialGrid Custom Function Editor

In the example above you can see Autocomplete showing the list of available methods for the DataPoint object type.

This isn't the same as the IntelliSense you get with Visual Studio, which is context-aware (i.e. it knows the types of variables and can provide autocomplete specific to each one), but it reduces the need to consult the documentation to look up method signatures, which is a big time saver.

Overall the TrialGrid Custom Function editor is a major step up from the basic editor included with Rave Architect, but for users who can install Visual Studio that is still the recommended approach.

Feature comparison

| Feature             | Rave Architect | TrialGrid         | Visual Studio            | Visual Studio with Core Objects DLL | Visual Studio & Custom Function Debugger          |
|---------------------|----------------|-------------------|--------------------------|-------------------------------------|---------------------------------------------------|
| Price               | Rave License   | TrialGrid License | Free (install required)  | Rave License (install required)     | Rave License, ROAD License (install required)     |
| Validation          | Yes            | Yes               | Yes                      | Yes                                 | Yes                                               |
| Syntax Highlighting | -              | Yes               | Yes                      | Yes                                 | Yes                                               |
| Line Numbering      | -              | Yes               | Yes                      | Yes                                 | Yes                                               |
| Code Formatting     | -              | Yes               | Yes                      | Yes                                 | Yes                                               |
| Rave Autocomplete   | -              | Yes               | -                        | Yes                                 | Yes                                               |
| Rave Intellisense   | -              | -                 | -                        | Yes                                 | Yes                                               |
| CF Debugging        | -              | -                 | -                        | -                                   | Yes                                               |


If you are interested in knowing more about how the TrialGrid Custom Function Editor can accelerate your Medidata Rave study build process please contact us!

Unicode

If you haven't heard of Unicode you have certainly seen it. You are seeing it now, since Unicode is the standard for the encoding of characters viewable in web browsers and on computers in general. As of this writing, version 10 of the standard includes more than 136,000 characters from multiple writing systems, and Medidata Rave supports the Unicode standard both for study designs and for data collection. So what is the problem?

Actually, there is no problem so long as you know what characters from the Unicode standard are being used in your study, where they are and how they display and appear in outputs.

Unicode in Study Design

If you are building your study in Japanese or localizing it to Russian, Armenian or Greek, then having the full set of Unicode characters is vital. For studies in English you may want to stick to the set of 128 characters known as ASCII (a-z, A-Z, 0-9 and common symbols). But sometimes you can be surprised by characters that aren՚t what you think they are…

Did you spot those alternative characters hiding in the last sentence?

characters that aren՚t what you think they are…

vs:

characters that aren't what you think they are...

Still can't see it? Hint: it's the ՚ and the … The differences are (or at least may be) subtle on screen, but when we render them in a Rave PDF they appear quite different:

Apostrophe and Ellipsis

It is very hard for the human eye to distinguish between these characters as browsers render them, but they are different characters. The font that Rave uses won't have a way to render all 136,000+ possible characters, so it is best (in English studies at least) to stick to the limited ASCII set, which all fonts cover well.

Be especially wary of text cut and pasted from web pages, Word, Excel or PDF documents. It is very tempting to copy verbatim from a Protocol document, but word processors use all kinds of character variants to make writing look better on screen or in print. You can't even trust the spaces in these documents: Unicode defines at least 20 different "empty" space characters of different widths, including one that has no width at all (i.e. it is invisible!)

Tip: TrialGrid Diagnostic 70 will identify and highlight non-ASCII characters, even invisible ones
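Confusables like these are easy to unmask once you look at the underlying code points. Here is a short Python sketch (our illustration, not TrialGrid's Diagnostic 70) that reports every non-ASCII character in a string by code point and official Unicode name:

```python
import unicodedata

def non_ascii_report(text):
    """List each non-ASCII character with its code point and Unicode name.

    A quick way to spot confusables and invisible characters.
    """
    return [
        (ch, f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>"))
        for ch in text
        if ord(ch) > 127
    ]

# The sample sentence from above, with an invisible zero-width space
# appended for good measure.
sample = "characters that aren\u055at what you think they are\u2026\u200b"
for ch, code, name in non_ascii_report(sample):
    print(code, name)
# U+055A ARMENIAN APOSTROPHE
# U+2026 HORIZONTAL ELLIPSIS
# U+200B ZERO WIDTH SPACE
```

Running this on pasted Protocol text makes the invisible visible: the Armenian apostrophe, the single-character ellipsis and the zero-width space all show up by name.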

Unicode in Study Data

If unexpected characters in a study design can cause strange PDF outputs, unexpected or unwanted characters in the clinical data can be real poison. A study that collects data in English might expect all its text data to be ASCII. However, Rave will accept any Unicode character as input to a text field, so the same cut-and-paste problems can occur. Rave is fully Unicode compatible, so it will happily take, store and output any Unicode content, but SAS and other analysis programs may have to be configured to accept non-ASCII content.

In English studies you want to identify non-ASCII content at the point of entry. In Rave this can only be done with a Custom Function that examines the content of a text field and determines whether any of the characters fall outside the ASCII range. A quick search of the web turns up simple code which returns true if it finds a non-ASCII character in the input string:

    // Take the string from datapoint.Data or datapoint.StandardValue
    string s = "characters that aren՚t what you think they are…";

    foreach (char c in s)
    {
        if (((int)c) > 127)
        {
            return true;
        }
    }
    return false;

Tip: TrialGrid contains a CQL extension that makes this as easy as using FieldName.IsNotAscii in an Edit Check.

Summary

Rave handles Unicode really well, and web browsers are very good at displaying a wide range of Unicode characters, but not all characters can be displayed by all systems, so be careful what you put into your study design and what you collect in your study data. Being able to cut and paste text between systems is great for productivity but can have unintended consequences.

Save 20-30% on Edit Check Builds

Andrew and I are on a mission to reduce the cost and effort of building Rave studies by 50%. It's an ambitious goal but nothing really worth doing is easy.

One of the most costly areas of study build is the writing and testing of Edit Checks, so let's take a look at Edit Checks and where the costs are.

Three levels of Edit Check logic

In the previous post we looked at the three levels of edit check logic:

  • Field Checks (Range, IsRequired, QueryFutureDate etc)
  • Configured Checks (Rave Edit Checks)
  • Custom Functions

Field Checks can be set up with a few clicks and some data entry for expected high and low ranges. They are extremely fast and easy to set up and require little or no testing, since they are features of the validated Rave system. Field checks are so easy that we're giving a value of $1 to all the checks set on a field (Is Required, simple numeric ranges, cannot be a future date, etc.). That doesn't mean they literally cost $1 to include in your study; depending on how you build, staffing costs, how luxurious your offices are and so on, your price will vary. $1 is just a good baseline figure to compare other costs against.

Configured Checks are written using Rave's Edit Check editor, which uses a postfix notation (1 1 + 2 isequalto). Rave Edit Checks are flexible and very functional, but every Edit Check has to be specified, written and tested, making it more expensive to create than a simple Field Check. You also need a more skilled study builder to write a Configured Check. So let's say $10, on average, to create a Configured Edit Check. Again, $10 is not a literal cost, it's just a comparison.
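Postfix notation reads left to right: operands are pushed onto a stack, and each operator pops its arguments and pushes the result. A minimal Python sketch (for illustration only; this is not Rave's evaluator) shows how "1 1 + 2 isequalto" is worked out:

```python
def eval_postfix(tokens):
    """Evaluate a tiny Rave-style postfix expression.

    Supports just the two operators needed for the example:
    '+' (addition) and 'isequalto' (equality test).
    """
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "isequalto": lambda a, b: a == b,
    }
    for tok in tokens.split():
        if tok in ops:
            b, a = stack.pop(), stack.pop()  # pop the two operands
            stack.append(ops[tok](a, b))     # push the result
        else:
            stack.append(int(tok))           # operand: push onto the stack
    return stack.pop()

print(eval_postfix("1 1 + 2 isequalto"))  # True: (1 + 1) == 2
```

Push 1, push 1, add them (stack holds 2), push 2, compare for equality: the check fires with the value True.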

Lastly we have Custom Functions. These are written in C#, VB.NET or SQL and require some level of true programming expertise. Custom Functions are the fallback, the special tool in the toolbox for the truly complex situations. Besides the difficulty of hiring (and keeping) good programmers in the current technical market, Custom Functions have to be specified, reviewed for coding standards and performance impact, and tested. We'll say, conservatively, $50 for the development of a Custom Function. Once again, $50 is just a cost relative to the $1 field check, since the average Custom Function is at least 50x more complex than a field check.

Study Averages

There is no such thing as an average study: the size and complexity of a study depend on its Phase, Therapeutic Area and many other variables. But we have seen a lot of trials over the years, so we'll illustrate costs with what we think is fairly typical: a study with around 1,000 data entry fields, 1,000 Configured Edit Checks and 100 Custom Functions.

Given those numbers we can draw a graph that shows how the Edit Checks in our study stack up.

TypicalEditChecksByType

A graph of the costs is also enlightening:

OverallCost1

The bulk of the cost is in the Configured Edit Checks, but those 100 Custom Functions account for around 30% of the total.

How to reduce the cost?

Field Checks are so easy that there is little to be done to make creating them more efficient, but there is scope for improvement in Configured Checks and Custom Functions. How could we reduce the costs of those?

At TrialGrid we're attacking this challenge with CQL, the Clinical Query Language. CQL is an infix format for Rave Configured Edit Checks which is easy and fast to write and which has built-in testing facilities.

An Edit Check with CQL (infix) logic like:

A > B AND (C == D OR C == E)

would be translated into Rave Edit Check (postfix) logic like:

A
B
ISGREATERTHAN
C
D
ISEQUALTO
C
E
ISEQUALTO
OR
AND
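
This kind of infix-to-postfix translation is an instance of the classic shunting-yard algorithm. The following Python sketch (our illustration, not TrialGrid's CQL implementation) reproduces the example conversion, mapping comparison operators to Rave's operator names:

```python
# Operator precedence: comparisons bind tighter than AND, AND tighter than OR.
PRECEDENCE = {"OR": 1, "AND": 2, "==": 3, ">": 3, "<": 3}
# Map infix operators to Rave's postfix operator names.
RAVE_NAMES = {"==": "ISEQUALTO", ">": "ISGREATERTHAN", "<": "ISLESSTHAN"}

def to_postfix(expression):
    """Convert a simple infix check expression to Rave-style postfix
    using the shunting-yard algorithm."""
    output, stack = [], []
    tokens = expression.replace("(", " ( ").replace(")", " ) ").split()
    for tok in tokens:
        if tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack and stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()  # discard the "("
        elif tok in PRECEDENCE:
            # Pop operators of equal or higher precedence first.
            while (stack and stack[-1] != "("
                   and PRECEDENCE.get(stack[-1], 0) >= PRECEDENCE[tok]):
                output.append(stack.pop())
            stack.append(tok)
        else:
            output.append(tok)  # operand: field name or constant
    while stack:
        output.append(stack.pop())
    return [RAVE_NAMES.get(t, t) for t in output]

print(to_postfix("A > B AND (C == D OR C == E)"))
# ['A', 'B', 'ISGREATERTHAN', 'C', 'D', 'ISEQUALTO',
#  'C', 'E', 'ISEQUALTO', 'OR', 'AND']
```

The output matches the hand-written Rave Edit Check above, which is why a tool can do this translation mechanically while the study builder works in the much more readable infix form.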

CQL also includes a set of built-in functions that automatically generate Custom Functions for you.

For example, we have been asked for an Edit Check that determines whether a text field contains non-ASCII characters. Using it in a CQL expression is easy:

AETERM.IsNotAscii

The TrialGrid application takes care of generating the Custom Function. You'll still need some bespoke Custom Functions but fewer and fewer as time goes on and we build more into CQL.

We (conservatively) estimate that CQL can save a Clinical Programmer or Data Manager 50% of the effort of writing Configured Edit Checks, and that the generation of Custom Functions will reduce the number that have to be hand-written by at least 10%. When we plug these numbers into the costings for our example study, the price drops from $16,000 to $10,500, a saving of 34%.
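The arithmetic behind those figures is easy to check with the relative costs from earlier (remember these are comparison units, not literal dollars):

```python
# Baseline: 1,000 field checks at $1, 1,000 configured checks at $10,
# 100 Custom Functions at $50.
baseline = 1000 * 1 + 1000 * 10 + 100 * 50

# With CQL: configured checks take 50% of the effort, and 10% of the
# Custom Functions are generated rather than hand-written (90 remain).
with_cql = 1000 * 1 + 1000 * 10 * 0.5 + 90 * 50

saving = (baseline - with_cql) / baseline
print(baseline, with_cql, round(saving * 100))  # 16000 10500.0 34
```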

OverallCost2

Who wouldn't want that?