Brain Tumor Detection and Classification from Multi-Channel MRIs: Study using ConvNets
Glioblastoma Multiforme constitutes 80% of malignant primary brain tumors in adults, and is usually classified into High Grade Glioma (HGG) and Low Grade Glioma (LGG). LGG tumors are less aggressive, with a slower growth rate than HGG, and are responsive to therapy. Since tumor biopsy is challenging for brain tumor patients, noninvasive imaging techniques like Magnetic Resonance Imaging (MRI) have been extensively employed in diagnosing brain tumors. The development of automated systems for the detection and prediction of tumor grade based on MRI data therefore becomes necessary for assisting doctors in the framework of augmented intelligence. In this paper, we thoroughly investigate the power of Deep Convolutional Neural Networks (ConvNets) for the classification of brain tumors using multi-sequence MR images. First we propose three ConvNets, trained from scratch, on MRI patches, slices, and multi-planar volumetric slices. The suitability of transfer learning for the task is next studied by applying two existing ConvNet models (VGGNet and ResNet) trained on the ImageNet dataset, through fine-tuning of the last few layers. A leave-one-patient-out (LOPO) testing scheme is used to evaluate the performance of the ConvNets. Results demonstrate that the ConvNets achieve better accuracy in all cases, with the model trained on the multi-planar volumetric dataset performing best. Unlike conventional models, it obtains a testing accuracy of 97% without any additional effort towards extraction and selection of features. We study the properties of the self-learned kernels/filters in different layers, through visualization of the intermediate layer outputs. We also compare the results with those of state-of-the-art methods, which require manual feature engineering for the task, demonstrating a maximum improvement of 12% in the grading performance of ConvNets.
Project status: Under Development
Overview / Usage
In this project we exhaustively investigate the behaviour and performance of ConvNets, with and without transfer learning, for non-invasive brain tumor detection and grade prediction from multi-sequence MRI. Tumors are typically heterogeneous, depending on cancer subtypes, and contain a mixture of structural and patch-level variability. Prediction of the grade of a tumor may thus be based on either the image patch containing the tumor, the 2D MRI slice containing the image of the whole brain including the tumor, or the 3D MRI volume encompassing the full image of the head enclosing the tumor. While in the first case only the tumor patch is necessary as input, the other two cases require the ConvNet to learn to localize the ROI (or VOI) before classifying it. Therefore, the first case needs only classification, while the other two additionally require detection or localization. Since the performance and complexity of ConvNets depend on the difficulty level of the problem and the type of input data representation, we prepare three kinds of input data from the original MRI dataset, viz. i) patch-based, ii) slice-based, and iii) volume-based. Three ConvNet models are developed, one for each case, and trained from scratch. We also compare two state-of-the-art ConvNet architectures, viz. VGGNet and ResNet, with parameters pre-trained on ImageNet using transfer learning (via fine-tuning).
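The three data representations can be sketched as follows. This is a minimal illustration, not the project's actual preprocessing pipeline: the volume shape, the (z, y, x) tumor center, and the patch size are all assumptions for demonstration, and a real pipeline would also handle the multiple MR sequences and skull-stripping.

```python
import numpy as np

def extract_representations(volume, center, patch_size=32):
    """Build the three illustrative input representations from one 3D MRI
    volume. Shapes and the helper itself are hypothetical, not the
    authors' exact code.

    volume : 3D array indexed as (axial, coronal, sagittal), assumed cubic
    center : (z, y, x) voxel roughly at the tumor location
    """
    z, y, x = center
    h = patch_size // 2
    # i) Patch-based: a small 2D window around the tumor on its axial slice
    patch = volume[z, y - h:y + h, x - h:x + h]
    # ii) Slice-based: the full axial slice containing the tumor
    slice_2d = volume[z]
    # iii) Volume-based (multi-planar): the axial, coronal, and sagittal
    #      slices through the same point, stacked as channels
    multiplanar = np.stack(
        [volume[z], volume[:, y, :], volume[:, :, x]], axis=-1)
    return patch, slice_2d, multiplanar
```

In the patch-based case the network only classifies; in the slice- and volume-based cases it must also learn where the tumor is within the larger field of view.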
Methodology / Approach
We propose three ConvNet architectures, named PatchNet, SliceNet, and VolumeNet, which are trained from scratch. This is followed by transfer learning and fine-tuning of existing networks. PatchNet is trained on the patch-based dataset and provides the probability of a patch belonging to HGG or LGG. SliceNet is trained on the slice-based dataset and predicts the probability of a slice being from HGG or LGG. Finally, VolumeNet is trained on the multi-planar volumetric dataset and predicts the grade of a tumor from its 3D representation using the multi-planar 3D MRI data.
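The two approaches can be sketched in Keras as below. These are minimal, hypothetical sketches, not the paper's exact architectures: the layer counts, filter sizes, input shapes (four MR sequences stacked as channels), and the fine-tuning cut point are all assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_slicenet(input_shape=(128, 128, 4), n_classes=2):
    """SliceNet-style ConvNet trained from scratch (illustrative layer
    configuration, not the authors' exact one)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation='relu', padding='same'),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),  # P(HGG), P(LGG)
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

def build_finetuned_vgg(input_shape=(128, 128, 3), n_classes=2,
                        weights='imagenet'):
    """Transfer learning: VGG16 with pre-trained weights, freezing all
    but the last few layers (the cut point here is an assumption)."""
    base = tf.keras.applications.VGG16(
        weights=weights, include_top=False, input_shape=input_shape)
    for layer in base.layers[:-4]:
        layer.trainable = False  # fine-tune only the last few layers
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(n_classes, activation='softmax')(x)
    return models.Model(base.input, out)
```

The scratch-trained models are free to learn MRI-specific filters from the start, while the fine-tuned networks reuse generic ImageNet features and only adapt their top layers to the HGG/LGG task.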
Technologies Used
The ConvNets were developed using TensorFlow, with Keras, in Python. The experiments were performed on a desktop machine with a 4-core Intel i7 CPU (3.40 GHz clock speed), 32 GB RAM, and an NVIDIA GeForce GTX 1080 GPU with 8 GB VRAM, running Ubuntu 16.04.