CDISC 2023 Europe Interchange

We are pleased to announce that we have submitted five abstracts for the CDISC 2023 Europe Interchange in Copenhagen. Below you can read all the submitted abstracts. Which presentation would you be interested in attending? Join the conversation on our LinkedIn page.

Ensuring that a Define-XML is submission ready

by Caro Sluijter

Selected to be presented at the conference

Define-XML is a required component of your submission to the FDA and other authorities. It contains a great deal of information, and considerable ground needs to be covered to ensure that all relevant information is correctly and accurately presented in the Define-XML. Luckily, some of that information can be checked and verified by off-the-shelf tools (such as Pinnacle21 and, more recently, CDISC CORE), but what checks do they perform? And what about the information that is not checked by these tools? When developing a Define-XML, it is important to understand what checks are needed and how the Define-XML will be validated.
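As an illustration of the kind of referential-integrity rule such tools cover, the sketch below is a hypothetical minimal check (our own simplification, not part of Pinnacle21 or CDISC CORE): it verifies that every variable a dataset references in a Define-XML is actually defined. The element and attribute names (ItemGroupDef, ItemRef, ItemDef, the ODM v1.3 namespace) come from the Define-XML/ODM standard; the function name is illustrative.

```python
# Hypothetical minimal cross-reference check on a Define-XML file:
# every ItemRef/@ItemOID inside an ItemGroupDef must resolve to an
# ItemDef/@OID. Real validators cover many more rules than this.
import xml.etree.ElementTree as ET

ODM_NS = "{http://www.cdisc.org/ns/odm/v1.3}"

def unresolved_item_refs(define_xml_path):
    """Return ItemOIDs referenced by datasets but never defined."""
    root = ET.parse(define_xml_path).getroot()
    defined = {item.get("OID") for item in root.iter(f"{ODM_NS}ItemDef")}
    referenced = {
        ref.get("ItemOID")
        for igd in root.iter(f"{ODM_NS}ItemGroupDef")
        for ref in igd.iter(f"{ODM_NS}ItemRef")
    }
    return sorted(referenced - defined)
```

A check like this catches a dataset pointing at a variable definition that was deleted or renamed, one of the structural issues that must be clean before submission.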

As a CRO, we develop and validate many Define-XMLs in studies that we manage and report. Clients and sponsors also expect us to ensure that their Define-XMLs are submission ready for studies that we have not been involved in. While the requirements and validation rules are the same in either situation, the validation approach may be different. After all, having or not having prior knowledge of the creation of the datasets is a crucial differentiator.

This presentation will show our best practices on various aspects of Define-XML validation and how to ensure that the Define-XML is ready for submission. Lastly, it will take a glimpse into the future at the validation challenges of Define-XML v2.1.

Standardisation in a fast-growing environment: MDR, EDC and other abbreviations

by Louella Schoemacher & Jasmine Kestemont (Innovion)

Selected to be presented at the conference

With a fast-growing portfolio, the importance of and need for standardisation increases. In order to keep up with the growing number of compounds, indications and studies, the argenx Data Standards team implemented a metadata repository (MDR) in October 2021. By having a single source of truth for our metadata, we aim to spend less time and fewer resources during the study set-up phase.

Standardisation, however, is a dynamic process. New vendors, new guidances and new insights all serve to improve our processes and data quality, but they can pose a challenge for standardisation. What different versions of forms do we need in order to smooth the process? How should we best implement versioning to keep up with new controlled terminology and new implementation guides? If a study provides us with new insights, do we immediately implement them in our library as well, and how does that affect versioning and traceability within the system?

Looking forward, how can we use the metadata repository to further expand standardisation across other processes? What is the impact on upstream and downstream processes? Can CRF standardisation influence protocol design and reduce protocol amendments? Does it speed up or slow down the process?

By standardising our CRFs and Define-XML files, we also aim for more standardised raw data. This should, in turn, ease the process of creating SDTM datasets. Would it be beneficial to also pull in our source data and standardise SDTM creation in order to have near real-time insight into our data? Do we also standardise data coming in via other systems, such as data transfer agreements with our lab vendors? What other actions could we undertake to reduce the time and effort needed to manage and clean data, so that we can focus on analysing the data not only within but also across trials, and on the bigger picture?

Last year we presented our findings from implementing the metadata repository. In this presentation we aim to give a short recap, but also to walk through our current activities in the metadata repository and our learnings on using standards for study set-up. Furthermore, we would like to share some insights into projects we are currently working on to standardise more and more of our processes.

Bookmarking your aCRF: quick & easy?!

by Kimberley Santing & Ramon Jakobs


Part of getting your study ready for FDA submission is bookmarking the aCRF. This can be a tedious and labour-intensive task that is prone to human error, but it doesn’t have to be. We would like to present a solution that can partially, or even fully, automate the bookmarking.

According to the FDA guidelines, bookmarks need to be present by form and by visit. The bookmarks are displayed as a long list in the aCRF. This long list can also be visualised as a matrix in which the intersection of form and visit represents the bookmark. This matrix is the basis of our solution.

Only the aCRF and the visit schedule are needed to create the matrix, but if your study and aCRF allow it, even this can be filled out automatically for you. Our solution creates an XML file that contains the code for the bookmarks. This XML file can be imported into your aCRF and will create bookmarks according to the FDA guidelines. That is all: your aCRF is now ready for submission!
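To make the matrix idea concrete, here is an illustrative sketch (our own simplification, not the tool being presented): starting from a form-by-visit matrix whose cells hold the aCRF page where each combination starts, it derives the two bookmark hierarchies the FDA guidelines ask for, by form and by visit.

```python
# Illustrative sketch of the matrix idea: each cell of the
# form-by-visit matrix holds the aCRF page where that combination
# starts, or None if the form is not collected at that visit.

def build_bookmarks(matrix):
    """matrix: {form: {visit: page or None}} -> (by_form, by_visit),
    two nested lists of (title, [(child_title, page), ...])."""
    by_form = [
        (form, [(v, p) for v, p in visits.items() if p is not None])
        for form, visits in matrix.items()
    ]
    visits_seen = {}
    for form, visits in matrix.items():
        for visit, page in visits.items():
            if page is not None:
                visits_seen.setdefault(visit, []).append((form, page))
    # Order visits by the earliest page on which they appear.
    by_visit = sorted(visits_seen.items(),
                      key=lambda kv: min(p for _, p in kv[1]))
    return by_form, by_visit
```

The nested lists map directly onto a two-level bookmark tree; serialising them to whatever XML format the bookmark import expects is then a straightforward final step.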

Preparing Vaccine studies for FDA submission: Legacy Data, Diaries, Derived Datasets and how to keep it together

by Luis Boullosa

Standards and guidelines are often designed with planning in mind – they are easiest to apply if your project follows the expected plan. When getting a vaccine study ready for submission, there are resources available, such as the Therapeutic Area User Guide (TAUG) for Vaccines and the FDA Technical Conformance Guide. These guidelines present examples of how to deal with, for instance, solicited and unsolicited adverse event data, from collection to delivery. However, what happens when the data collection did not take the TAUG or the FDA submission guidelines into account? What happens when you need to convert a legacy vaccine study into a submission-ready format?

In legacy data conversion you are often faced with analysis datasets in a custom format, generated directly from the source data, with tables, listings and figures (TLFs) derived from them. Oftentimes, the code that was written to create these datasets and TLFs is no longer available. The challenge is then to generate SDTM and ADaM datasets that allow you to arrive at the exact same results and outcomes. This is not always possible if you have to adhere to strict data standards. That is the challenge of submitting data for legacy studies. But do not panic: there is light at the end of the tunnel!

In this presentation we would like to share our experience, challenges and solutions in getting a legacy vaccine study ready for FDA submission.

Clinical Trials with Medical Devices - Examples and Challenges in Statistical Programming

by Fenna Hensen

Clinical trials with a medical device as the investigational product are fundamentally different from clinical trials performed in drug development. For example, placebo-controlled trials are not always possible due to ethical and/or practical objections. Within the pharmaceutical industry it is common practice to present clinical trial data using Clinical Data Interchange Standards Consortium (CDISC) standards. However, the use of these CDISC standards is not required by the notified body for market release of a medical device.

On May 26th, 2021, a new European regulation for the market release of medical devices came into application: the Medical Device Regulation (MDR). New medical devices must adhere to this regulation for market release, and devices already on the market have a transition period in which manufacturers need to ensure their devices comply with it. An example of the stricter demands on medical devices is the extended post-market investigation period, in which devices need to be followed for a longer time and data needs to be collected actively (not just by collecting complaints and notifications). In recent years, efforts have been put into creating specific data standards for medical device studies to achieve uniformity across studies and better transparency and traceability within studies. Implementing medical device data within these standards is of high importance, but it is also challenging due to the nature of the clinical trials and the data to be collected.

In this presentation, challenges that are commonly faced during statistical programming for clinical trials with medical devices will be discussed with the aid of real-life examples. SDTM (Study Data Tabulation Model) and ADaM (Analysis Data Model) implementation guides developed specifically for medical device investigations are available, and their device-specific topics will be covered. Most importantly, this presentation will demonstrate the added value of adhering to and implementing data standards in clinical trials with a medical device as the investigational product.