######################################################
Don't miss the 3rd edition of HECKTOR at MICCAI 2022
More data/centers, Primary tumor and metastatic lymph nodes segmentation, RFS prediction.
https://hecktor.grand-challenge.org/
######################################################
🕵 Challenge description
Following the success of the first HECKTOR challenge in 2020, this challenge will be presented at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) on September 27th, 2021. Three tasks are proposed this year (participants can choose to participate in one, two or all three tasks):
- Task 1: the automatic segmentation of Head and Neck (H&N) primary tumors in FDG-PET and CT images;
- Task 2: the prediction of patient outcomes, namely Progression Free Survival (PFS) from the FDG-PET/CT images and available clinical data;
- Task 3: the prediction of PFS (same as Task 2) from the FDG-PET/CT images and available clinical data, except that the ground truth annotations of the primary tumors will be made available as inputs to the challengers' algorithms through a Docker framework.
📔 LNCS proceedings
The LNCS proceedings of HECKTOR 2021 at MICCAI are now available, including the overview paper and the challenge participants' papers.
Free online access: https://link.springer.com/book/10.1007/978-3-030-98253-9
🌎 Motivation
Head and Neck (H&N) cancers are among the most common cancers worldwide (5th leading cancer by incidence) (Parkin et al. 2005). Radiotherapy combined with cetuximab has been established as standard treatment (Bonner et al. 2010). However, locoregional failures remain a major challenge and occur in up to 40% of patients in the first two years after treatment (Chajon et al. 2013). Recently, several radiomics studies based on Positron Emission Tomography (PET) and Computed Tomography (CT) imaging were proposed to better identify patients with a worse prognosis in a non-invasive fashion, exploiting images that are already acquired for diagnosis and treatment planning (Vallières et al. 2017), (Bogowicz et al. 2017), (Castelli et al. 2017). Although highly promising, these methods were validated on cohorts of only 100-400 patients. Further validation on larger cohorts (e.g. 300-3000 patients) is required to ensure an adequate ratio between the number of variables and observations, and thus avoid overestimating the generalization performance. Achieving such a validation requires the manual delineation of primary tumors and nodal metastases for every patient and in three dimensions, which is intractable and error-prone.
Methods for automated lesion segmentation in medical images have been proposed in various contexts, often achieving expert-level performance (Heimann and Meinzer 2009), (Menze et al. 2015). Surprisingly few studies have evaluated the performance of computerized automated segmentation of tumor lesions in PET and CT images (Song et al. 2013), (Blanc-Durand et al. 2018), (Moe et al. 2019). In 2020, we organized the first HECKTOR challenge, offering participants the opportunity to develop automatic bi-modal approaches for the 3D segmentation of H&N tumors in PET/CT scans, focusing on oropharyngeal cancers. Following good participation and promising results in the 2020 challenge, we now increase the dataset size with 71 new cases from another clinical center with a different PET/CT scanner model and associated reconstruction settings (CHU Milétrie, Poitiers, France). In addition, we expand the scope of the challenge with additional tasks aiming at outcome prediction based on the PET/CT images. A clinically relevant endpoint that can be leveraged for personalizing patient management at diagnosis will be considered: the prediction of Progression-Free Survival (PFS) in H&N oropharyngeal cancer.
By focusing on metabolic and morphological tissue properties respectively, the PET and CT modalities carry complementary and synergistic information for cancerous lesion segmentation, as well as tumor characteristics relevant for patient outcome prediction. Modern image analysis methods must be developed to best extract and leverage this information. The data used in this challenge is multi-centric, including four centers in Canada (Vallières et al. 2017), one center in Switzerland (Castelli et al. 2017), and one center in France (Hatt et al. 2009; Legot et al. 2018), for a total of 325 patients with annotated primary tumors.
📂 Dataset
To obtain the data, go to the "Resources" tab and follow the instructions.
📅 Timeline
- Release of the training cases: June 04, 2021 (initially June 01)
- Release of the test cases: Aug. 06, 2021 (initially Aug. 01)
- Submission: opens Sept. 01, 2021; closes Sept. 14, 2021, 23:59 UTC-10 (initially Sept. 10)
- Paper abstract submission deadline: Sept. 15, 2021, 23:59 UTC-10
- Full paper submission deadline: Sept. 17, 2021, 23:59 UTC-10
- Release of the ranking: Sept. 27, 2021 (initially Sept. 17)
- Associated workshop day: Sept. 27, 2021, https://miccai2021.pathable.eu/meetings/XMs8ZRJymqK7DstPY
✍ Paper submission
In order to be eligible for the official ranking, participants must submit a paper describing their methods (minimum 6 pages, maximum 12 pages; abstract due Sept. 15 2021, full paper due Sept. 17 2021, see the timeline above). We will review the papers (independently from the MICCAI conference reviews) and publish a Lecture Notes in Computer Science (LNCS) volume in the challenges subline. When participating in multiple tasks, you can either submit a single paper reporting all methods and results, or multiple papers.
The submission platform (EasyChair) can be found here: https://easychair.org/conferences/?conf=hecktor2021
Authors should consult Springer's authors' guidelines and use their proceedings templates, either for LaTeX or for Word, to prepare their papers. Springer's proceedings LaTeX templates are also available in Overleaf. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers can no longer be made. Please append the consent-to-publish form at the end of the PDF submitted on EasyChair (concatenate the form with the paper).
In order to link your paper with your AIcrowd team, please add your team name at the end of the abstract.
The following papers must be cited:
[1] Overview of the HECKTOR challenge at MICCAI 2021: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT images. Vincent Andrearczyk, Valentin Oreiller, Sarah Boughdad, Catherine Cheze Le Rest, Hesham Elhalawani, Mario Jreige, John O. Prior, Martin Vallières, Dimitris Visvikis, Mathieu Hatt, Adrien Depeursinge, LNCS challenges, 2021
[2] Head and Neck Tumor Segmentation in PET/CT: The HECKTOR Challenge. Oreiller, Valentin, et al., Medical Image Analysis, 102336, 2021
We encourage the participants to release their code and add the GitHub link to their papers.
The top-ranked teams with a paper submission will be contacted in September to prepare an oral presentation for the half-day event at MICCAI 2021.
🏆 Prize
Each task is being sponsored by a different company:
- Task 1 (segmentation) is sponsored by Siemens Healthineers Switzerland with a prize of 500 €.
Siemens Healthineers (https://www.siemens-healthineers.com) is a German company specialized in healthcare and medical imaging applications, including the development of artificial intelligence in the field.
- Task 2 (prediction of PFS) is sponsored by Aquilab with a prize of 500 €.
Aquilab (https://www.aquilab.com) is a French company created 20 years ago, dedicated to improving cancer care and treatment through software solutions focused on radiotherapy and medical imaging. Aquilab recently launched its new platform, Onco Place.
- Task 3 (prediction of PFS using ground-truth delineation of tumors) is sponsored by Bioemtech with a prize of 500 €.
Bioemission Technology Solutions (Bioemtech, https://bioemtech.com/) is a Greek company founded in 2013 by a group of young engineers with significant research and professional experience in emerging molecular imaging technologies. It aims to meet existing needs in imaging equipment and imaging services for small, medium, and large groups.
We would like to thank the sponsors for their support!
💾 Data description
The data contains the same patients as MICCAI 2020, with the addition of a cohort of 71 patients from a new center (CHUP), of which 23 were added to the training set and 48 to the test set. The total number of training cases is 224, from 5 centers. No specific validation cases are provided, and the training set can be split in any manner for cross-validation. The total number of test cases is 101, from two centers: part of the test set comes from a center present in the training data (CHUP), the other part from a different center (CHUV). For consistency, the GTVt (primary Gross Tumor Volume) for the new patients was annotated by experts following the same procedure as the curation performed for MICCAI 2020.
Patient clinical data are provided in hecktor2021_patient_info_training.csv and hecktor2021_patient_info_test.csv, including center, age, gender, TNM 7th/8th edition staging and clinical stage, tobacco and alcohol consumption, performance status, HPV status, and treatment (radiotherapy only or chemoradiotherapy). Note that some information may be missing for some patients. In the same files, we specify the five patients for which the weight was unknown and was estimated (75 kg) to compute SUV values as SUV = C × BW / ID, where C is the measured activity concentration, ID the injected dose, and BW the body weight.
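For reference, a minimal sketch of this standard body-weight SUV conversion (the released PET volumes are already expressed in SUV; names and values below are illustrative):

import numpy as np

def activity_to_suv(activity_bq_ml: np.ndarray,
                    injected_dose_bq: float,
                    body_weight_kg: float = 75.0) -> np.ndarray:
    """Body-weight SUV: SUV = C [Bq/mL] * BW [g] / ID [Bq].

    Decay correction of the injected dose to scan time is assumed to
    have been applied already and is omitted here.
    """
    body_weight_g = body_weight_kg * 1000.0  # 1 mL of tissue ~ 1 g
    return activity_bq_ml * body_weight_g / injected_dose_bq

# Example with the estimated 75 kg weight used for the five patients:
suv = activity_to_suv(np.array([5000.0]), injected_dose_bq=350e6)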
- Task 1: Training and test cases each comprise a 3D FDG-PET volume (in SUV) registered with a 3D CT volume (from low-dose FDG-PET/CT) of the head and neck region, as well as a binary contour with the annotated ground truth of the primary Gross Tumor Volume (GTVt), available to the participating teams for the training cases only, in NIfTI format, along with the bounding box location (in the hecktor2021_bbox_training.csv file). The CSV file contains one row per patient specifying a 144 × 144 × 144 mm3 bounding box (in absolute mm reference) in ITK convention: in the patient reference, x goes from right to left, y from anterior to posterior and z from inferior to superior. Similar bounding boxes will be provided for the test set (please ignore CHUP001 to CHUP023 in hecktor2021_bbox_testing.csv). The evaluation (DSC scores) will be computed only within these bounding boxes, at the original CT resolution. Although a priori not useful for the segmentation task, the clinical information of the patients is also available for Task 1 in the hecktor2021_patient_info_training.csv file. We also provide various functions to load, crop and resample the data, train a baseline CNN (NiftyNet) and evaluate the results on our GitHub repository: https://github.com/voreille/hecktor (see the loading/cropping sketch after this list). This code is provided as a suggestion to help the participants; as long as the results are submitted at the original resolution and cropped to the correct bounding boxes, any other processing can be used.
- Task 2: Same as Task 1. Patient clinical data are available for training in the hecktor2021_patient_info_training.csv file. Regarding the progression-free survival endpoint to predict, censoring and time-to-event between the PET/CT scan and the event (in days) are provided in the hecktor2021_patient_endpoint_training.csv file. Progression is defined based on RECIST criteria: either a size increase of known lesions (change of T and/or N), or the appearance of new lesions (change of N and/or M). Disease-specific death is also considered a progression event for patients previously considered stable.
- Task 3: Same as Task 2. In addition, ground-truth contours of the test cases will be made available to the algorithms developed by the challengers through a Docker framework, in order to ensure that the challengers do not have direct access to them.
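To get started, here is a minimal loading-and-cropping sketch, assuming SimpleITK and that the bounding-box CSV exposes columns x1, y1, z1, x2, y2, z2 in absolute mm (check the released files and the reference implementation in the repository):

import pandas as pd
import SimpleITK as sitk

def crop_to_bbox(image: sitk.Image, bbox_mm) -> sitk.Image:
    """Resample `image` at its own spacing over the given bounding box (mm)."""
    x1, y1, z1, x2, y2, z2 = bbox_mm
    spacing = image.GetSpacing()
    size = [int(round((x2 - x1) / spacing[0])),
            int(round((y2 - y1) / spacing[1])),
            int(round((z2 - z1) / spacing[2]))]
    resampler = sitk.ResampleImageFilter()
    resampler.SetOutputOrigin((x1, y1, z1))
    resampler.SetOutputSpacing(spacing)
    resampler.SetOutputDirection(image.GetDirection())  # axis-aligned data
    resampler.SetSize(size)
    resampler.SetInterpolator(sitk.sitkLinear)  # sitkNearestNeighbor for masks
    return resampler.Execute(image)

bboxes = pd.read_csv("hecktor2021_bbox_training.csv").set_index("PatientID")
row = bboxes.loc["CHUV001"]  # illustrative patient ID
ct = sitk.ReadImage("CHUV001_ct.nii.gz")  # illustrative file name
ct_crop = crop_to_bbox(ct, [row.x1, row.y1, row.z1, row.x2, row.y2, row.z2])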
📨 Results submission format
In order to provide a fair comparison, participants who want to rely on additional training data should also report results using only the HECKTOR training data and discuss differences in the results (regardless of the task considered).
For each task, participants are allowed five valid submissions per team. The best result will be reported for each team.
Task 1: Results should be provided as a single binary mask per patient (1 in the predicted GTVt) in .nii.gz format. The resolution of this mask should be the same as the original CT resolution, and the volume should be cropped using the provided bounding boxes. Participants should pay attention to saving the NIfTI volumes with the correct pixel spacing and origin with respect to the original reference frame (see the sketch below). The NIfTI files should be named [PatientID].nii.gz, matching the patient names, e.g. CHUV001.nii.gz, and placed in a folder. This folder should be zipped before submission. If results are submitted without cropping and/or resampling, we will employ nearest-neighbor interpolation, given that the coordinate system is provided.
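A minimal sketch of saving a prediction with the correct geometry using SimpleITK (file names are illustrative):

import numpy as np
import SimpleITK as sitk

# The image the prediction was made on carries the correct geometry.
cropped_ct = sitk.ReadImage("CHUV001_ct_cropped.nii.gz")
pred = np.zeros(sitk.GetArrayFromImage(cropped_ct).shape, dtype=np.uint8)
# ... fill `pred` with your binary GTVt prediction (z, y, x order) ...
mask = sitk.GetImageFromArray(pred)
mask.CopyInformation(cropped_ct)  # copies spacing, origin and direction
sitk.WriteImage(mask, "CHUV001.nii.gz")  # must be named [PatientID].nii.gz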
Task 2: Results should be submitted as a CSV file containing the patient ID as "PatientID" and the (continuous) output of the model as "Prediction". An individual output should be anti-concordant with the PFS in days (i.e., the model should output a predicted risk score). If your output is concordant (e.g. a predicted number of PFS days), simply multiply your estimate by -1 before submitting (see the sketch below).
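A minimal sketch of building such a submission file, assuming hypothetical concordant PFS estimates that are negated into risk scores:

import pandas as pd

# Hypothetical model outputs: predicted PFS in days (concordant with survival).
predicted_pfs_days = {"CHUV001": 812.0, "CHUV002": 230.5}

# The platform expects a risk score (anti-concordant with PFS),
# so a concordant estimate is simply negated before submission.
submission = pd.DataFrame({
    "PatientID": list(predicted_pfs_days.keys()),
    "Prediction": [-days for days in predicted_pfs_days.values()],
})
submission.to_csv("predictions.csv", index=False)  # file name is illustrative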
Task 3: For this task, the developed methods will be evaluated on the test set by the organizers, who will run them within a Docker provided by the challengers. In practice, your method should process one patient at a time. It should take three NIfTI files as inputs (file 1: the PET image; file 2: the CT image; file 3: the provided ground-truth segmentation mask; all three files have the same dimensions, and the ground-truth mask contains only two values: 0 for the background, 1 for the tumor), and should output the predicted risk score produced by your model.
Input and output names must be explicit in the command-line:
predict.sh [PatientID]PET.nii.gz [PatientID]CT.nii.gz [PatientID]SegMask.nii.gz
where predict.sh is a BASH script taking as its first three arguments the input PET image, the input CT image and the ground-truth mask image.
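In practice, predict.sh will typically just forward its three arguments to your model code. Below is a minimal, hypothetical Python entry point it could call (the mean-SUV "model" is only a placeholder; follow the provided process.sh for how the output score is collected):

# predict.py -- a placeholder entry point that predict.sh could call as:
#   python predict.py "$1" "$2" "$3"
import sys
import SimpleITK as sitk

pet_path, ct_path, mask_path = sys.argv[1:4]
pet = sitk.GetArrayFromImage(sitk.ReadImage(pet_path))
mask = sitk.GetArrayFromImage(sitk.ReadImage(mask_path))

# Toy "model": mean SUV inside the provided GTVt contour as a risk score.
# Replace this with your actual method.
risk_score = float(pet[mask == 1].mean())
print(risk_score)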
You must provide a built image containing your method. Please refer to the Docker documentation to build your image.
During the evaluation, your Docker will be executed by the organizers on the test dataset, and the output scores will be processed in the same way as in Task 2 to compute the C-index.
We provide a simple docker example that you can directly use to encapsulate your method HERE.
In the archive above you will find several files:
- 'dockerfile' contains the basic Docker instructions. You can (and should) populate it with the system packages and dependencies your method relies on.
- 'predict.sh' contains the call to your method. This is the file you need to modify in order to call your method.
- 'process.sh' is there to allow the organizers to run your method on all images contained in the test folder and to fill in the csv output file that will then be used as input to the evaluation code in the same way as for task 2. You do not need to modify it.
1. First install Docker with NVIDIA support on your computer following these instructions:
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker
2. Then, build your Docker image using the (potentially modified) provided dockerfile:
cd /path/to/challengeDirectory
sudo docker build -t hecktor_team_name .
3. Once built, it can be run with the following command (the GPU is optional of course):
sudo docker container run --gpus "device=0" --name team_name --rm -v /path/to/challengeDirectory:/output -v /path/to/hecktor_dir:/data hecktor_team_name:latest /bin/bash -c "sh process.sh"
where /path/to/hecktor_dir is the directory containing all images. You should test your encapsulated method on the training set to check that it produces the same results as your method outside the Docker.
4. Once everything is running appropriately, save your Docker image to a file using:
sudo docker save -o hecktor_team_name.tar.gz hecktor_team_name
5. Once you have uploaded your Docker image to the AIcrowd submission platform, the organizers will download it, run it locally on the test set, and upload the C-index value to AIcrowd.
⚖ Evaluation Criteria
- Task 1: The Dice Similarity Coefficient (DSC) and the Hausdorff Distance at 95% (HD95) will be computed on the 3D volumes to assess the segmentation algorithms, comparing the automatic segmentation with the annotated ground truth within the provided bounding boxes. The final ranking will be based on the mean of the DSC and the median of the HD95. The two metrics are ranked separately and the final rank is obtained by Borda counting. In the event of a tie between two participants, the one with the higher average DSC will be ranked better. Precision and recall will also be computed to assess over- and under-segmentation, as well as the arithmetic mean of sensitivity and positive predictive value. Note that the best submission of a team is obtained by a Borda count ranking of all the submissions of this team, using the mean of the DSC and the median of the HD95; the best submissions of all teams are then used for the final ranking.
- Task 2: The ranking will be based on the concordance index (C-index) on the test data. The C-index quantifies the model's ability to provide an accurate ranking of the survival times based on the computed individual risk scores, generalizing the area under the ROC curve (AUC). It can account for censored data and represents a global assessment of the model's discrimination power. A minimal sketch of the DSC and C-index computations is given after this list.
- Task 3: Same as Task 2.
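For orientation, here is a minimal sketch of the two headline metrics (the HD95 is omitted, as it requires surface-distance computations; the official evaluation code on the challenge GitHub repository is authoritative). The C-index example uses the lifelines package and illustrative values:

import numpy as np
from lifelines.utils import concordance_index  # pip install lifelines

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

# C-index for Tasks 2/3. lifelines expects scores concordant with survival,
# while submissions are risk scores, hence the negation.
pfs_days = np.array([300.0, 455.0, 120.0])  # time to event or censoring
progression = np.array([1, 0, 1])           # 1 = event observed, 0 = censored
risk = np.array([0.8, 0.1, 0.9])            # submitted "Prediction" values
c_index = concordance_index(pfs_days, -risk, progression)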
🎉 Program of the Satellite Event
The satellite event takes place on Monday (September 27) from 9:00-13:00 UTC.
This event is held virtually here.
Introductory talk by organizers
- 9:00 - 9:30: The HECKTOR 2021 challenge, Vincent Andrearczyk, Valentin Oreiller, Martin Vallières, Catherine Cheze Le Rest, Hesham Elhalawani, Sarah Boughdad, Mario Jreige, John O. Prior, Dimitris Visvikis, Mathieu Hatt, Baptiste Laurent, Adrien Depeursinge
Oral Session 1: Automatic segmentation of the primary tumor (Task 1)
- 09:30 - 09:45: A Coarse-to-Fine Framework for Head and Neck Tumor Segmentation in CT and PET Images, Chengyang An, Huai Chen and Lisheng Wang
- 09:45 - 10:00: The Head and Neck Tumor Segmentation based on 3D U-Net, Juanying Xie, Ying Peng
- 10:00 - 10:15: Priori and Posteriori Attention for Generalizing Head and Neck Tumors Segmentation, Jiangshan Lu, Wenhui Lei, Ran Gu and Guotai Wang
Oral Session 2: Outcome prediction using contours of primary tumors (Task 3)
- 10:15 - 10:30: A Hybrid Radiomics Approach to Modeling Progression-free Survival in Head and Neck Cancers, Sebastian Starke, Dominik Thalmeier, Peter Steinbach and Marie Piraud
- 10:30 - 10:45: Combining Tumor Segmentation Masks with PET/CT Images and Clinical Data in a Deep Learning Framework for Improved Prognostic Prediction in Head and Neck Squamous Cell Carcinoma, Kareem Wahid, Renjie He, Cem Dede, Abdallah Mohamed, Moamen Abobakr, Lisanne van Dijk, Clifton Fuller and Mohamed Naser
- 10:45 - 11:00: Head and Neck Primary Tumor Segmentation using Deep Neural Networks and Adaptive Ensembling, Gowtham Murugesan, Eric Brunner, Diana McCrumb, Jithendra Kumar, Jeff Vanoss, Stephen Moore, Anderson Peck and Anthony Chang
Break
- 11:00 - 11:15
Keynote: Clifton Fuller, MD Anderson Cancer Center
- 11:15 - 12:00: 30min talk + 15min questions
Oral Session 3: Fully automatic outcome prediction (Task 2)
- 12:00 - 12:15: Progression Free Survival Prediction for Head and Neck Cancer using Deep Learning based on Clinical and PET-CT Imaging Data, Mohamed Naser, Kareem Wahid, Abdallah Mohamed, Moamen Abobakr, Renjie He, Cem Dede, Lisanne van Dijk and Clifton Fuller
- 12:15 - 12:30: An Ensemble Approach for Patient Prognosis of Head and Neck Tumor Using Multimodal Data, Numan Saeed, Roba Al Majzoub, Ikboljon Sobirov and Mohammad Yaqub
- 12:30 - 12:45: Advanced Survival Prediction and Automatic Segmentation in Head and Neck Cancer, Mohammadreza Salmanpour Paeenafrakati, Ghasem Hajianfar, Seyed Masoud Rezaeijo, Mohammad Ghaemi and Arman Rahmim
Winners and Awards
- 12:45 - 12:55
Closing remarks: Feedback from participants / What next
- 12:55 - 13:00
📱 Contact
Task 1: vincent[dot]andrearczyk[at]gmail[dot]com
Tasks 2 and 3: mathieu[dot]hatt[at]inserm[dot]fr
🔗 References
(Blanc-Durand et al. 2018) Blanc-Durand, Paul, et al. "Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study." PLoS One 13.4 (2018): e0195798.
(Bogowicz et al. 2017) Bogowicz, Marta, et al. "Comparison of PET and CT radiomics for prediction of local tumor control in head and neck squamous cell carcinoma." Acta oncologica 56.11 (2017): 1531-1536.
(Bonner et al. 2010) Bonner, James A., et al. "Radiotherapy plus cetuximab for locoregionally advanced head and neck cancer: 5-year survival data from a phase 3 randomised trial, and relation between cetuximab-induced rash and survival." The Lancet Oncology 11.1 (2010): 21-28.
(Castelli et al. 2017) Castelli, Joël, et al. "A PET-based nomogram for oropharyngeal cancers." European journal of cancer 75 (2017): 222-230.
(Chajon et al. 2013) Chajon, Enrique, et al. "Salivary gland-sparing other than parotid-sparing in definitive head-and-neck intensity-modulated radiotherapy does not seem to jeopardize local control." Radiation oncology 8.1 (2013): 1-9.
(Hatt et al. 2009) Hatt, Mathieu, et al. "A fuzzy locally adaptive Bayesian segmentation approach for volume determination in PET." IEEE transactions on medical imaging 28.6 (2009): 881-893.
(Heimann and Meinzer 2009) Heimann, Tobias, and Hans-Peter Meinzer. "Statistical shape models for 3D medical image segmentation: a review." Medical image analysis 13.4 (2009): 543-563.
(Legot et al. 2018) Legot, Floriane, et al. "Use of baseline 18F-FDG PET scan to identify initial sub-volumes with local failure after concomitant radio-chemotherapy in head and neck cancer." Oncotarget 9.31 (2018): 21811.
(Menze et al. 2015) Menze, Bjoern H., et al. "The multimodal brain tumor image segmentation benchmark (BRATS)." IEEE transactions on medical imaging 34.10 (2015): 1993-2024.
(Moe et al. 2019) Moe, Yngve Mardal, et al. “Deep learning for automatic tumour segmentation in PET/CT images of patients with head and neck cancers.” Medical Imaging with Deep Learning (2019).
(Parkin et al. 2005) Parkin, D. Max, et al. "Global cancer statistics, 2002." CA: a cancer journal for clinicians 55.2 (2005): 74-108.
(Song et al. 2013) Song, Qi, et al. "Optimal co-segmentation of tumor in PET-CT images with context information." IEEE transactions on medical imaging 32.9 (2013): 1685-1697.
(Vallières et al. 2017) Vallières, Martin, et al. "Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer." Scientific reports 7.1 (2017): 10117.
👥 Organiser Info
- Vincent Andrearczyk: Vincent Andrearczyk completed his PhD on deep learning for texture and dynamic texture analysis at Dublin City University in 2017. He is currently a senior researcher at the University of Applied Sciences and Arts Western Switzerland, with a research focus on deep learning for texture analysis and medical imaging. Vincent co-organized the ImageCLEF 2018 caption detection and prediction challenge, and his team at HES-SO Valais has extensive experience in organizing challenges (various tasks in ImageCLEF every year since 2012).
- Valentin Oreiller: Valentin Oreiller received his M.Sc. degree in bioengineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland with a specialization in bioimaging. He is currently a PhD candidate at the University of Applied Sciences and Arts Western Switzerland with a research focus on radiomics.
- Martin Vallières: Martin Vallières has been an Assistant Professor in the Department of Computer Science of the Université de Sherbrooke since April 2020. He received a PhD in Medical Physics from McGill University in 2017, and completed post-doctoral training in France and the USA in 2018 and 2019. The overarching goal of Martin Vallières' research is the development of clinically actionable models to better personalize cancer treatments and care ("precision oncology"). He is an expert in the field of radiomics (i.e. the high-throughput and quantitative analysis of medical images) and machine learning in oncology. Over the course of his career, he has developed multiple prediction models for different types of cancers. His main research interest is now focused on the graph-based integration of heterogeneous medical data types for improved precision oncology. He has shared various datasets on The Cancer Imaging Archive (TCIA), including soft-tissue sarcoma (FDG-PET/CT and MR imaging data of 51 patients, with tumor contours (RTstruct) and clinical data), low-grade gliomas (tumor contours for MR images of 108 patients of the TCGA-LGG dataset in MATLAB format), and head-and-neck (FDG-PET/CT imaging data of 300 patients, with RT plans (RTstruct, RTdose, RTplan) and clinical data). Moreover, he co-organized the PET radiomics challenge, a MICCAI 2018 CPM Grand Challenge, participated in the online organization of its data, and contributed to the challenge data pool via the head-and-neck TCIA collection.
- Catherine Cheze Le Rest: Nuclear medicine department, CHU Poitiers, Poitiers, France, and LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Hesham Elhalawani: Hesham Elhalawani, MD, MSc is a radiation oncology clinical fellow at Cleveland Clinic. He completed a 3-year quantitative imaging biomarker research fellowship at MD Anderson Cancer Center. His deep-rooted research focus is leveraging artificial intelligence, radiomics, and imaging informatics to personalize cancer patient care. He has published more than 50 peer-reviewed articles and served as a reviewer for journals and conferences, including Radiotherapy & Oncology, the Red Journal, European Radiology, and AMIA conferences. He is on the editorial board of Radiology: Artificial Intelligence, an RSNA publication. He has been an advocate for the FAIR principles of data management, contributing to the mission and goals of the NCI Cancer Imaging Program. In collaboration with The Cancer Imaging Archive (TCIA), he publicly shared two large curated head and neck cancer datasets that include matched clinical and multi-modal imaging data. Moreover, he served on the organizing committee of the 2016 and 2018 MICCAI radiomics challenges that were hosted on Kaggle in Class to fuel the growing trend of mass crowdsourced innovation.
- Sarah Boughdad: Dr. Boughdad is currently a Fellow at the Service of Nuclear Medicine and Molecular Imaging at Lausanne University Hospital, Switzerland. In 2014, she graduated from the Medical Faculty of Paris-Sud, Paris-Saclay. She obtained her PhD in medical physics in 2018 from EOBE, Orsay University. She is an active researcher in the field of Radiomics.
- Mario Jreige: Mario Jreige, MD, is a nuclear medicine resident at Lausanne University Hospital, Switzerland. He has previously completed a specialization in radiology at the Saint-Joseph University, Beirut. He is a junior member of the Swiss Society of Nuclear Medicine.
- John O. Prior: John O. Prior, PhD MD, FEBNM has been Professor and Head of Nuclear Medicine and Molecular Imaging at Lausanne University Hospital, Switzerland since 2010. After graduating with an MSEE degree from ETH Zurich, he received a PhD in Biomedical Engineering from The University of Texas Southwestern Medical Center at Dallas and an MD from the University of Lausanne. He thereafter completed specialization training in nuclear medicine in Lausanne and a visiting associate professorship at the University of California, Los Angeles (UCLA). Prof. Prior is currently President of the Swiss Society of Nuclear Medicine, a member of the European Association of Nuclear Medicine and the Society of Nuclear Medicine and Molecular Imaging, as well as an IEEE Senior Member.
- Dimitris Visvikis: Dimitris Visvikis is a director of research with the National Institute of Health and Medical Research (INSERM) in France and the Director of the Medical Information Processing Lab in Brest (LaTIM, UMR 1101). Dimitris has been involved in nuclear medicine research for more than 25 years. He obtained his PhD from the University of London in 1996 working in PET detector development within the Joint Department of Physics in the Royal Marsden Hospital and the Institute of Cancer Research. After working as a Senior Research Fellow in the Wolfson Brain Imaging Centre of the University of Cambridge he joined the Institute of Nuclear Medicine as Principal Medical Physicist in University College London where he introduced and worked for five years with one of the first clinical PET/CT systems in the world. He has spent the majority of his scientific activity in the field of PET/CT imaging, including developments in both hardware and software domains. His current research interests focus on improvement in PET/CT image quantitation for specific oncology applications, such as response to therapy and radiotherapy treatment planning, through the development of methodologies for detection and correction of respiratory motion, 4D PET image reconstruction, partial volume correction and denoising, tumour volume automatic segmentation and machine learning for radiomics, as well as the development and validation of Monte Carlo simulations for emission tomography and radiotherapy treatment dosimetry applications.
He is a member of numerous professional societies such as IPEM (Fellow, past Vice-President International), IEEE (Senior Member, past NPSS NMISC chair), AAPM, SNMMI (CaIC board of directors 2007-2012) and EANM (physics committee chair). He is also the first Editor-in-Chief of the IEEE Transactions on Radiation and Plasma Medical Sciences.
- Mathieu Hatt: Mathieu Hatt is a computer scientist. He received his PhD in 2008 and his habilitation to supervise research in 2012. His main skills and expertise lie in radiomics, from automated image segmentation to features extraction, as well as machine (deep) learning methods, for PET/CT, MRI and CT modalities. He is in charge of a research group "radiomics modeling" in the team ACTION (therapeutic action guided by multimodal images in oncology) of the LaTIM (Laboratory of Medical Information Processing, INSERM UMR 1101, University of Brest, France). He is an elected member of the EANM physics committee, the SNMMI physics, data science and instrumentation council board of directors, and the IEEE nuclear medical and imaging sciences council.
- Baptiste Laurent: Baptiste Laurent is an engineer. He received his MSc in 2013 and is currently enrolled in a PhD program at the LaTIM (Laboratory of Medical Information Processing, INSERM UMR 1101, University of Brest, France).
- Adrien Depeursinge: Adrien Depeursinge received the M.Sc. degree in electrical engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, with a specialization in signal processing. From 2006 to 2010, he performed his Ph.D. thesis on medical image analysis at the University Hospitals of Geneva (HUG). He then spent two years as a Postdoctoral Fellow at the Department of Radiology of the School of Medicine at Stanford University. He currently has a joint position as an Associate Professor at the Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), and as a Senior Research Scientist at the Lausanne University Hospital (CHUV). His group, jointly led with Prof. Müller (MedGIFT), has extensive experience in challenge organization (e.g. ImageCLEF, VISCERAL). He also prepared an open-access dataset of Interstitial Lung Disease (ILD) for algorithm comparison. The library contains 128 patients affected with ILDs, 108 image series with more than 41 liters of annotated lung tissue patterns, as well as a comprehensive set of 99 clinical parameters related to ILDs. This dataset has become a reference for research on ILDs and the associated paper has more than 100 citations.
✊ Sponsors