Whitepapers


The journey of preparing an ISS 


by Lieke Gijsbers

The integrated summary of safety (ISS) is a critical component of a submission to the FDA. For the ISS, data from different studies are pooled and harmonised to conduct the integrated analyses. Integrated SDTM and ADaM Define-XML files and integrated Reviewer’s Guides (the icSDRG and iADRG) may accompany the ISS to provide additional context and information about the integrated SDTM and ADaM datasets.

In this presentation, we’ll use a concrete use case to explain the approach we took to create the pooled datasets. Moreover, we’ll share our experiences with the creation of an SDTM and ADaM Define-XML for integrated databases, as well as the icSDRG and the iADRG.
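To give a flavour of what such pooling can involve, below is a minimal SAS sketch that stacks the DM datasets of two studies and re-derives a pool-wide unique subject identifier. The library names and the harmonisation rule are illustrative assumptions, not the approach taken in the actual use case.

    /* Hypothetical sketch: pooling the SDTM DM domain of two   */
    /* studies for an ISS. Library names are assumptions.       */
    libname study1 "/data/study1/sdtm";
    libname study2 "/data/study2/sdtm";
    libname pool   "/data/iss/sdtm";

    data pool.dm;
      set study1.dm
          study2.dm;
      /* Assumed harmonisation rule: re-derive USUBJID as       */
      /* STUDYID-SUBJID so it stays unique across the pool.     */
      usubjid = catx('-', studyid, subjid);
    run;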

Standardisation in a fast-growing environment: MDR, EDC and other abbreviations

by Louella Schoemacher

With a fast-growing portfolio, the importance of and need for standardisation increase. To keep up with the growing number of compounds, indications and studies, the argenx Data Standards team implemented a metadata repository (MDR) in 2021. By having a single source of truth for our metadata, we aim to spend less time and fewer resources during the study set-up phase.

Standardisation, however, is a dynamic process. New vendors, new guidances and new insights all help to improve our processes and data quality, but they can also challenge standardisation. How many versions do we need, and how do we best govern the different versions? When is something study-specific, and when do we decide to standardise it? How can we use the metadata repository to further expand standardisation across other processes? And what is the impact on upstream and downstream processes?

By standardising our CRFs and Define-XML files, we aim to obtain more standardised raw data, which should ease the process of creating SDTM datasets. What other actions could we take to reduce the time and effort needed to manage and clean data, so that we can focus on analysing the data not only within but also across trials, and on the bigger picture?
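As one illustration of how a metadata repository can drive programming work, here is a minimal sketch that assumes the MDR standards are exported to a SAS dataset mdr.variables with columns DATASET, VARIABLE and VARORDER; it generates the variable list of an SDTM domain from the metadata instead of hard-coding it in every study:

    /* Hypothetical sketch: derive the DM variable list from an */
    /* assumed MDR export rather than hard-coding it per study. */
    proc sql noprint;
      select variable
        into :dm_vars separated by ' '
        from mdr.variables
        where dataset = 'DM'
        order by varorder;
    quit;

    data sdtm.dm;
      retain &dm_vars.;     /* fix the column order per the MDR */
      set raw.dm_mapped;    /* assumed pre-mapped input data    */
      keep &dm_vars.;
    run;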

Last year we presented our implementation findings. Now we aim to give a short recap, but also to guide you through our current activities, learnings and other projects we are focusing on to standardise more of our processes.

A proprietary, CDASH/SDTM-hybrid data model to expedite clinical data review

by Lieke Gijsbers (OCS) & Tom Van Der Spiegel (Janssen Pharmaceutica N.V.)

In 2016, Janssen identified the need to expedite clinical data review. A proof of concept demonstrated the value of pursuing a new, proprietary data model for data review, serving as a single source of truth. The Data Review Model (DRM) that was introduced is strongly based on CDISC CDASH and SDTM. DRM provides full traceability and describes both clinical and operational (system) data consistently across studies. In the longer term, Janssen plans to implement a metadata-driven environment, including the conversion of source data into DRM. In 2017, OCS Life Sciences and Janssen piloted DRM by implementing a mapping framework that supports both the documentation and the execution of source-to-target data mapping. This paper describes how multiple trials were mapped to support the pilot phase of DRM, in order to learn, refine and document the value of DRM prior to moving to a production implementation.
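The actual framework is described in the paper itself, but the underlying idea, executing a documented source-to-target specification so that documentation and execution cannot drift apart, can be sketched in a few lines of SAS. The spec dataset spec.mapping and its columns are assumptions for illustration.

    /* Hypothetical sketch: execute a documented source-to-     */
    /* target mapping spec. SOURCE_VAR, TARGET_VAR and          */
    /* TARGET_DATASET are assumed columns of the spec.          */
    proc sql noprint;
      select catx('=', source_var, target_var)
        into :renames separated by ' '
        from spec.mapping
        where target_dataset = 'VS';
    quit;

    data drm.vs;
      set source.vitals(rename=(&renames.));
    run;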

Work Less - Do More

by Kai Wanke & Sofia Vale (OCS)

As Define-XML contains many repetitive items, manual validation is not only tedious and time-consuming but also prone to errors. However, this repetitive structure allows much of its content to be validated automatically. We recommend automating the validation in addition to the development, for a number of reasons, including:
1) Define-XML should ideally be created before the SDTM or ADaM data are available, so it cannot yet be checked against the data at creation time; 2) manual enrichments may have been made which (inadvertently or not) affect the metadata; and 3) the data may be updated, while a new Define-XML cannot simply be regenerated automatically, as that would undo the manual enrichments.
This whitepaper will describe how a significant part of the validation of Define-XML can be automated using SAS scripts, independent of the software or method used to generate the Define-XML, to reduce the workload of the programmer while guaranteeing a high degree of accuracy.
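As an illustration of the kind of check that can be scripted, here is a minimal sketch. It assumes the Define-XML variable metadata has already been imported into a SAS dataset define.variables (for instance via an XML map) and compares the variable labels declared in the Define-XML against those actually present in the data:

    /* Hypothetical sketch of one automated check: do the labels */
    /* in the SDTM library match those declared in Define-XML?   */
    /* define.variables (DATASET, VARIABLE, LABEL) is an assumed */
    /* extract of the Define-XML, prepared beforehand.           */
    proc sql;
      create table label_issues as
      select d.dataset, d.variable,
             d.label as define_label,
             c.label as data_label
        from define.variables as d
             inner join dictionary.columns as c
          on c.libname = 'SDTM'
         and upcase(c.memname) = d.dataset
         and upcase(c.name)    = d.variable
       where d.label ne c.label;
    quit;

In the same way, variable names, types, lengths and code list values can each be compared between the Define-XML metadata and the datasets.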

Handling CRF design changes in a live environment

by Nieke de Ruiter & André Snelting (both OCS)

An EDC database is built from the study protocol, which should contain all information required for the eCRF; precision, consistency, and completeness are therefore essential. In many situations these criteria are not met. In other scenarios, the protocol demands a high level of complexity from the EDC design: for example, study procedures can be country- or site-specific, or a combination of paper and electronic diaries is used. These aspects and challenges should be taken into account when reviewing the protocol. Yet even when the protocol fulfils these criteria and is not overly complex, change requests in the production phase cannot be avoided, and they are often time-consuming and error-prone. This paper provides a number of examples of change requests to the eCRF and the database and describes how these were implemented. In addition, lessons learned about smart design changes will be shared and discussed.

The use of SDTM standards for non-submission purposes

by Lieke Gijsbers (OCS) & Paul Vervuren (Nutricia Research B.V.)

Driven by the growing need to aggregate clinical trial data, Nutricia Research started the implementation of CDISC SDTM for their early-life nutrition studies and initiated a conversion project for their legacy studies. One of the challenges faced is that current CDISC standards do not entirely address the complex and diverse nature of data collected in nutrition studies, such as gastrointestinal tolerance, or the intake of infant formula as study product alongside a breastfeeding reference group. A combination of domains is sometimes required to represent data from a single source dataset. Since 2015, Nutricia Research and OCS Life Sciences have been working together on the definition of specific SDTM domains and have jointly built expertise in the legacy data conversion of early-life nutrition data. This paper presents concrete examples and implementation solutions from this project, and reflects on the possibilities and limitations of using SDTM as a standard for non-submission purposes.
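To illustrate the kind of restructuring this implies, here is a hypothetical sketch (not one of the project's actual mappings) in which a single source diary dataset is split over two SDTM-style datasets depending on what each record represents. The domain choices, source variables and split rule are all assumptions.

    /* Hypothetical sketch: one source dataset feeding two      */
    /* target datasets. RECTYPE, the input dataset and the      */
    /* target names are assumptions for illustration.           */
    data sdtm.fagi     /* gastrointestinal tolerance findings */
         sdtm.ml;      /* study product intake records        */
      set raw.diary;
      if rectype = 'GI SYMPTOM' then output sdtm.fagi;
      else if rectype = 'INTAKE' then output sdtm.ml;
    run;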
