CDISC Europe Interchange 2020 goes virtual!

In 2020, the CDISC Europe Interchange will be a virtual conference, and we are very pleased to announce that OCS Life Sciences will give two presentations and one poster presentation.

Come and meet our statistical programmers and other staff members at our virtual booth during the conference, or contact us for more information at any time that suits you.


Fatima Kassim (MSc) and Louella Schoemacher (MSc): "Let Food Be Thy Medicine and Medicine Be Thy Food"
“Dear diary, This morning I ate two sandwiches with jam. I drank one glass of grapefruit juice and one glass of water to take my medication.”
The person who wrote this diary entry was exposed to both nutrition and medication. However, the way these data would be collected and standardised in clinical trials is very different. In the pharmaceutical industry, the rules are strict: submissions to the FDA, EMA and PMDA must follow CDISC standards to be accepted. In nutrition research, however, rules are often implemented less strictly, and the data collected can be so inherently varied that standardisation is one tough cookie.

In September 2019, the Nutrition TAUG version 1.0 was published. This user guide describes how to use CDISC standards to represent data from nutrition studies. What is the added value of using CDISC standards for nutritional data, and what can pharma and nutrition research learn from each other?

There is an inherent difference in the way nutritional data and pharmaceutical data are collected, handled and reviewed within the life sciences industry and by regulatory authorities. How do these differences come into play when trying to standardise data across the industry?

The challenges lie not only in how the data is collected, which is generally more absolute and objective for pharmaceuticals than for nutritional products, but also in the stricter laws governing the former. This plays a substantial role in how far the implementation of standards will be carried out across the industry.
In this talk we present our viewpoints on this subject, hoping to spark a meaningful discussion on the implementation of standards for nutritional data and to provide a roadmap to its full standardisation.

Lieke Gijsbers (MSc) and Jasmine Kestemont from Innovion: "Team Building Through Legacy Data Conversion"

This presentation highlights the experiences and lessons learned from a legacy data conversion project. The aim of the project was to prepare a study portfolio for submission to the U.S. Food and Drug Administration (FDA), encompassing the conversion of more than 20 clinical trials to SDTM and ADaM standards. Data in these trials were collected by various CROs in various formats, most of which were nowhere near CDISC(-like) formats, and some were collected over a decade ago.

The focus of the presentation will be on the project strategy and approaches taken to conduct the conversion in an efficient and consistent way, the challenge of scaling up a team from zero to 10 people within a specialist CRO, and the benefit of project team members with different educational and professional backgrounds.

Poster Presentation

Sofia Vale (MSc): "Defining Standards: Nearing 100% Automatic Development of Define XML"
Define-XML is an integral part of any submission package of clinical studies to regulatory agencies such as the FDA, EMA and PMDA. It provides the metadata for all datasets present in the submission package; it is therefore essential that it is accurate and complete.

In many companies, the development of a Define-XML file currently requires extensive manual input, making it a time-consuming and error-prone task. There are solutions on the market, but these are often very expensive or incomplete. Fortunately, owing to its repetitive structure, the development (and validation) of a Define-XML file can be automated to a great extent. This way, much of the manual input can be avoided, and company- or project-wide conventions can be implemented to ensure consistency across studies.
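To illustrate the repetitive structure that makes this automation feasible, here is a minimal sketch (in Python, using only the standard library) that generates Define-XML-style `ItemDef` elements from a small metadata table. The variable names and attribute values are illustrative examples, not taken from any specific study or from our actual tool.

```python
import xml.etree.ElementTree as ET

# Illustrative dataset-variable metadata; in practice this would be
# harvested from the datasets, shells, or the CDISC Library.
variables = [
    {"oid": "IT.DM.STUDYID", "name": "STUDYID", "datatype": "text", "length": 12},
    {"oid": "IT.DM.AGE",     "name": "AGE",     "datatype": "integer", "length": 3},
    {"oid": "IT.DM.SEX",     "name": "SEX",     "datatype": "text", "length": 1},
]

def build_itemdefs(variables):
    """Emit one ItemDef element per variable: the same pattern repeats
    for every variable in every dataset, which is what makes Define-XML
    generation automatable."""
    root = ET.Element("MetaDataVersion")
    for var in variables:
        ET.SubElement(root, "ItemDef", OID=var["oid"], Name=var["name"],
                      DataType=var["datatype"], Length=str(var["length"]))
    return root

root = build_itemdefs(variables)
print(ET.tostring(root, encoding="unicode"))
```

Because every variable maps to the same element shape, adding a study's worth of metadata is a loop over a table rather than hand-editing XML.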

Our solution takes advantage of the tools and information already available for the development of a Define-XML, from the datasets or shells already developed to the standards in the CDISC Library, and integrates them in a simple and intuitive way for the end user. It makes the development process faster and more accurate, reducing manual input to an absolute minimum.

In contrast to Pinnacle 21 Community, a commonly used solution, our solution works from both XPT files and native SAS SDTM or ADaM datasets. It also reads annotations from the blank (annotated) CRF and can rely on mapping specifications in a tabular format. From this information it creates a spreadsheet, highly similar in structure to the one Pinnacle 21 creates, that is already completed for the majority of items. This includes the data type (including types that Pinnacle 21 does not detect), the CRF page number, and codelist information. Based on the variable name, the solution can also detect where value-level metadata should be applied and where specific (predictable) derivation methods are required. Manual input is only required to verify a small number of items, which the solution highlights using colour indicators: pink for manual entry, orange for verification only. This way, the workload is largely reduced and accuracy can be ensured.
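The name-based detection described above can be sketched as simple pattern matching over SDTM-style variable names. The heuristics below are hypothetical examples for illustration only, not the tool's actual rule set; the status values mirror the colour indicators mentioned in the abstract.

```python
# Hypothetical name-based heuristics for Define-XML metadata detection.
# The patterns below are illustrative only, not the tool's actual rules.

def classify_variable(name):
    """Return (needs_value_level, status) for an SDTM-style variable name.

    status: 'auto'   - filled in automatically,
            'orange' - auto-detected but should be verified,
            'pink'   - requires manual entry.
    """
    # Result-qualifier variables (e.g. LBORRES, VSSTRESN) typically carry
    # value-level metadata, one definition per test code.
    if name.endswith(("ORRES", "STRESC", "STRESN")):
        return True, "orange"
    # Well-known identifier and timing variables can be filled in confidently.
    if name in {"STUDYID", "DOMAIN", "USUBJID"} or name.endswith("DTC"):
        return False, "auto"
    # Anything unrecognised is left for manual entry.
    return False, "pink"

for var in ["STUDYID", "LBORRES", "XYZCUSTOM", "AESTDTC"]:
    print(var, classify_variable(var))
```

In practice such rules would be driven by the CDISC Library and project conventions rather than hard-coded, but the principle is the same: most classifications are predictable from the name alone, leaving only the unmatched remainder for human review.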