Advancing Medical Image Registration and Tumor Segmentation with Deep Learning: Design, Implementation and Transfer into Clinical Application

Doctoral Candidate Name: 
Yaying Shi
Program: 
Computing and Information Systems
Abstract: 

The advancement of medical imaging has significantly enhanced the ability to diagnose, monitor, and treat cancer. This dissertation focuses on the development of deep learning methodologies for the segmentation and registration of medical images, specifically Positron Emission Tomography (PET), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and pathology images, to improve the accuracy and efficiency of cancer diagnosis and treatment planning.
Segmentation, the process of delineating anatomical structures and pathological regions, is a crucial step in medical image analysis. This work introduces novel high-precision deep learning models for the automatic segmentation of tumors and organs at risk (OARs). These models utilize convolutional neural networks (CNNs) and transformer-based architectures to handle the complexities and variations inherent in PET, CT, and MRI. The segmentation models are trained on multi-modal imaging datasets, incorporating advanced techniques such as data augmentation, transfer learning, and ensemble learning to enhance robustness and generalization. Evaluation on multiple datasets demonstrates that these models outperform traditional methods, with significant improvements in accuracy and reliability.
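
To make the CNN side of these architectures concrete, the following is a minimal PyTorch sketch of a two-level 3D encoder-decoder (U-Net-style) segmentation network. The name (TinyUNet3D), channel sizes, and two-channel PET/CT input are hypothetical choices for this sketch, far smaller and simpler than the models developed in the dissertation:

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # Two 3x3x3 convolutions with instance norm and ReLU, a common
        # building block for volumetric (PET/CT/MRI) segmentation networks.
        return nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    class TinyUNet3D(nn.Module):
        # Hypothetical two-level 3D U-Net for tumor/OAR segmentation.
        def __init__(self, in_ch=2, n_classes=2):  # e.g., PET+CT as two input channels
            super().__init__()
            self.enc1 = conv_block(in_ch, 16)
            self.enc2 = conv_block(16, 32)
            self.pool = nn.MaxPool3d(2)
            self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
            self.dec1 = conv_block(32, 16)
            self.head = nn.Conv3d(16, n_classes, kernel_size=1)

        def forward(self, x):
            e1 = self.enc1(x)              # full-resolution features
            e2 = self.enc2(self.pool(e1))  # half-resolution features
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
            return self.head(d1)           # per-voxel class logits

    # Example: a two-channel (PET+CT) patch of 64^3 voxels.
    logits = TinyUNet3D()(torch.randn(1, 2, 64, 64, 64))
    print(logits.shape)  # torch.Size([1, 2, 64, 64, 64])

The skip connection is the key design element: it lets such networks combine coarse context from the downsampled path with fine boundary detail from the full-resolution path, which matters for delineating small lesions.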

Registration, which aligns images from different modalities or time points, is another critical component in the analysis of medical images. This dissertation presents advanced deep learning approaches for the registration of CT, MRI, and pathology images, leveraging deep neural networks (DNNs) and unsupervised learning techniques. The proposed registration methods employ spatial transformer networks (STNs) and other novel architectures to learn complex spatial transformations directly from the data, enabling accurate alignment of multi-modal images. These approaches are designed to be computationally efficient and scalable, facilitating their integration into clinical workflows.
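
To make the spatial-transformer idea concrete, the sketch below shows one common unsupervised formulation: a small CNN predicts a dense displacement field from a fixed/moving image pair, the moving image is warped through a differentiable resampler, and training minimizes image dissimilarity plus a smoothness penalty. This is a minimal, hypothetical PyTorch illustration (TinyRegNet, warp, and the loss weight are invented for this sketch), not the dissertation's actual architecture:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyRegNet(nn.Module):
        # Hypothetical CNN predicting a dense 3D displacement field
        # (dx, dy, dz per voxel) from a concatenated (fixed, moving) pair.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(16, 3, 3, padding=1),
            )

        def forward(self, fixed, moving):
            return self.net(torch.cat([fixed, moving], dim=1))

    def warp(moving, flow):
        # Spatial-transformer-style warping: add the predicted displacement
        # (in normalized [-1, 1] coordinates) to an identity sampling grid,
        # then resample the moving image differentiably.
        B = moving.shape[0]
        theta = torch.eye(3, 4).unsqueeze(0).repeat(B, 1, 1)
        base = F.affine_grid(theta, moving.shape, align_corners=False)  # (B, D, H, W, 3)
        grid = base + flow.permute(0, 2, 3, 4, 1)
        return F.grid_sample(moving, grid, align_corners=False)

    def smoothness(flow):
        # Finite-difference penalty discouraging jagged, implausible deformations.
        dz = (flow[:, :, 1:] - flow[:, :, :-1]).pow(2).mean()
        dy = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).pow(2).mean()
        dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).pow(2).mean()
        return dz + dy + dx

    # One unsupervised training step: no ground-truth deformation is needed.
    fixed = torch.randn(1, 1, 32, 32, 32)
    moving = torch.randn(1, 1, 32, 32, 32)
    flow = TinyRegNet()(fixed, moving)
    loss = F.mse_loss(warp(moving, flow), fixed) + 0.01 * smoothness(flow)
    loss.backward()

Because the resampler is differentiable, the image-similarity loss alone supervises the network; no manually annotated correspondences are required, which is what makes the approach unsupervised.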

Our final goal is to translate these deep learning methods into real clinical applications. This dissertation explores the practical applications of the developed models, including their deployment as microservices for common radiotherapy imaging tasks. The models are made accessible via Python scripts for clinical treatment planning software such as RayStation, allowing seamless integration into existing clinical systems. Evaluation using images and treatment planning data for prostate cancer underscores the potential of these models to enhance the quality of treatment planning and streamline the overall process of planning, response assessment, and adaptation. Additionally, this dissertation investigates the potential of federated learning for collaborative model training across multiple institutions without sharing sensitive patient data; this approach could enhance model robustness and generalizability by leveraging diverse datasets from various sources, as sketched below.
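
The server-side aggregation step of federated learning can be illustrated with federated averaging (FedAvg): each institution trains on its own data and shares only model parameters, which a central server averages, weighted by local dataset size. The sketch below is a minimal, hypothetical PyTorch illustration of that step only (the function name, toy model, and site sizes are invented), omitting the communication and privacy machinery a real deployment needs:

    import copy
    import torch.nn as nn

    def federated_average(models, weights):
        # FedAvg aggregation: a weighted average of per-site parameters.
        # Only model weights cross institutional boundaries, never patient images.
        avg = copy.deepcopy(models[0].state_dict())
        total = sum(weights)
        for key in avg:
            avg[key] = sum(w * m.state_dict()[key]
                           for w, m in zip(weights, models)) / total
        return avg

    # Hypothetical example: three institutions with different dataset sizes.
    sites = [nn.Linear(4, 2) for _ in range(3)]
    global_model = nn.Linear(4, 2)
    global_model.load_state_dict(federated_average(sites, weights=[120, 80, 200]))

Weighting by dataset size keeps the aggregated model from being dominated by small sites; in practice this round of local training and averaging repeats until convergence.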

In conclusion, this dissertation advances core components of medical image analysis for cancer diagnosis, monitoring, and treatment through deep learning. We hope the techniques developed in this research pave the way for more precise, efficient, and individualized patient care in oncology.

Defense Date and Time: 
Friday, July 19, 2024 - 12:00pm
Defense Location: 
Woodward 212 and https://charlotte-edu.zoom.us/j/94325931444
Committee Chair's Name: 
Dr. Yonghong Yan
Committee Members: 
Dr. Min Shin, Dr. Razvan C. Bunescu, Dr. Srijan Das, Dr. Xiuxia Du