Final proposal + presentation – 388



Cancer staging and metastasis detection using convolutional neural networks and transfer learning


ABSTRACT

Artificial Intelligence (AI) has been used extensively in the field of medicine, and advanced machine learning algorithms have more recently become a significant part of oncology, where they assist with the detection and diagnosis of cancer. Convolutional Neural Networks (CNNs) are common in image analysis, and they offer great power for the detection, diagnosis, and staging of cancerous regions in radiology images. With transfer learning, the practice of using pre-trained models and fine-tuning them for specific problems, CNNs achieve more accurate results and, more importantly, need less training data. This paper proposes utilizing transfer learning along with CNNs for staging cancer diagnoses. Randomly initialized CNNs will be compared with CNNs that use transfer learning to determine the extent of the improvement that transfer learning can offer for cancer staging.


KEYWORDS

Artificial Intelligence, Cancer Detection, Tumor Detection, Machine Learning, Transfer Learning, Convolutional Neural Networks, Radiology, Oncology


1 INTRODUCTION

Artificial Intelligence (AI) has grown into an advanced field that plays a major role in our healthcare. AI, in conjunction with Machine Learning (ML), has been aiding radiologists in detecting cancerous regions [2], determining whether a cancerous region is benign or malignant [5], measuring to what degree cancer has spread outside of the initial area [6], assessing how well a patient is responding to treatment [8], and more. Among the many ML methods assisting radiologists, Convolutional Neural Networks (CNNs) are deep learning algorithms capable of extracting features from images and making classifications using those features [4]. CNNs are one of the major deep learning methods in image analysis and have become a popular tool in AI-assisted oncology [2]. Over the years, many studies have attempted to improve the accuracy of these implementations by comparing different CNN architectures [12], addressing overfitting of the models, using continuous learning [10], using transfer learning [12], etc. This proposal aims to improve cancer staging CNNs by applying transfer learning methods and combining the unique improvements that CNNs and transfer learning can offer. In this paper, related work on the implementation of CNNs and transfer learning for cancer detection is examined and compared to establish an understanding of the algorithms and tools, the implementation of the CNNs and transfer learning is described, and finally the evaluation method for determining the accuracy of the CNNs is presented. Additionally, major risks for the implementation and a proposed timeline are included.


2 BACKGROUND

This section outlines the main components of what is being proposed in this paper. CNNs and transfer learning methods are used frequently in recent related research, and it is important to understand the basics of how they work.

Ali Farahmand
Computer Science Department at Earlham College
Richmond, Indiana

Figure 1: Simple CNN implementation (derived from Choy et al. [4])

2.1 Convolutional Neural Network

Convolutional Neural Networks are a subset of deep learning methods that extract features from images and use those features for classification. CNNs are optimized for taking images as input, and since radiology is image focused, they are one of the most common AI methods used in radiology [14]. A CNN consists of convolution and pooling layers. Figure 1 shows the layer layout of a basic CNN [4]. Convolution layers contain filters that, through training, learn to create feature maps which encode the features detected in the input [14]. Each feature map is then fed to a pooling layer, which downsizes it by keeping either the maximum value or the average value of each region covered by the convolution filter; these two pooling methods are referred to as Max Pooling and Average Pooling respectively. The purpose of pooling is to reduce computation and to help avoid overfitting the model. After the last convolution and pooling layers there is a fully connected (FC) layer, which serves as the classifier after the feature-extraction process. Figure 2 visualizes a CNN with two convolution layers and two pooling layers.

Figure 2: CNN with two convolution and pooling layers (derived from Soffer et al. [14])

There are multiple CNN architectures which use different layer combinations [14], and these architectures are used in the detection, segmentation and diagnosis steps of oncology [12]. Two common architectures are AlexNet and VGG. AlexNet is the shallower of the two, with five convolutional layers, and can have different numbers of pooling layers, normally placed after the convolutional layers that are closer to the FC layer. Figure 3 shows the AlexNet architecture without the pooling layers included. VGG is a deeper CNN, with VGG16 having sixteen layers and VGG19 having nineteen layers. Both VGG16 and VGG19 are explicit about how many convolutional and pooling layers are included. Figure 4 shows a VGG16 architecture along with a breakdown of the layers; as shown there, pooling layers are present after every two or three convolutional layers. Both AlexNet and VGG16 have been used in cancer detection systems [14]. AlexNet, as the shallower of the two architectures, is more commonly used for detection, while VGG is used for diagnosis since it is a deeper network and has smaller kernel sizes. These two architectures will both be used and compared in my work for staging cancer.
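The two pooling operations described above can be illustrated with a small, dependency-free sketch (the `pool2d` helper and the example feature map are hypothetical, not part of any framework):

```python
def pool2d(fmap, k, mode="max"):
    """Non-overlapping k x k pooling over a 2-D list (stride = k)."""
    h, w = len(fmap), len(fmap[0])
    pooled = []
    for i in range(0, h - k + 1, k):
        row = []
        for j in range(0, w - k + 1, k):
            window = [fmap[i + di][j + dj] for di in range(k) for dj in range(k)]
            # Max Pooling keeps the largest activation; Average Pooling keeps the mean
            row.append(max(window) if mode == "max" else sum(window) / len(window))
        pooled.append(row)
    return pooled

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 2, 2],
        [2, 2, 7, 8]]
max_pooled = pool2d(fmap, 2, "max")   # [[4, 2], [2, 8]]
avg_pooled = pool2d(fmap, 2, "avg")   # [[2.5, 1.0], [1.25, 4.75]]
```

Note how the 4x4 map is downsized to 2x2; in a real CNN this is applied per channel to each feature map.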

Figure 3: AlexNet architecture (derived from Han et al. [7]). AlexNet includes five convolution layers and a combination of pooling layers after some of the convolution layers
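The depth difference between the architectures can be made concrete by tracking feature-map sizes with the standard convolution arithmetic, out = (in + 2*padding - kernel) / stride + 1. A sketch for a VGG16-style stack (3x3 convolutions with padding 1 and 2x2 max pools with stride 2, assuming a 224x224 input):

```python
def out_size(size, kernel, stride, padding):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

size = 224
for n_convs in [2, 2, 3, 3, 3]:        # the five VGG16 convolutional blocks
    for _ in range(n_convs):
        size = out_size(size, 3, 1, 1)  # 3x3 conv, pad 1: size unchanged
    size = out_size(size, 2, 2, 0)      # 2x2 max pool, stride 2: size halves
print(size)                             # 7: the 7x7 maps fed to the FC layers
```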

different learning environments, as knowledge gained from one learning process can be used in a different learning process with a different but similar goal. CNNs are commonly known to require large amounts of data to reach reasonable levels of accuracy; as a result, training CNNs can face problems such as not having access to enough data, not having access to enough hardware resources for computation, and a time-consuming training process. Transfer learning can reduce the need for large sets of data while also increasing the accuracy of the CNN [11]. When a CNN is trained without transfer learning, it is initialized with random weights between the nodes of the network; in transfer learning, a pre-trained model is used as the initial state of the network instead, and as a result less data is required to train a capable model for the original problem. This pre-trained model is a network that was trained to solve a different but similar problem. For instance, a functional model that detects horses in images can be used, with a little fine-tuning, as the starting point for a new model that aims to detect dogs. Transfer learning can be very useful in cancer-detecting CNNs, as it helps improve and expedite the training process. Transfer learning with [1] and without [11] fine-tuning has been used in medical imaging systems and has shown improved results.
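The effect of a pre-trained starting point can be sketched with a toy, dependency-free analogy: a one-parameter "network" fit by gradient descent, warm-started from a similar task. This only illustrates why a pre-trained initialization converges in fewer steps; it is not an actual CNN:

```python
def train_linear(w, data, lr=0.01, tol=1e-3, max_steps=10_000):
    """Gradient descent on mean squared error for the model y = w * x."""
    steps = 0
    while steps < max_steps:
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        if abs(grad) < tol:
            break
        w -= lr * grad
        steps += 1
    return w, steps

target_task = [(x, 2.0 * x) for x in range(1, 6)]   # "dog" task: true w = 2.0
similar_task = [(x, 1.8 * x) for x in range(1, 6)]  # "horse" task: true w = 1.8

_, cold_steps = train_linear(0.0, target_task)          # random-style initialization
pretrained_w, _ = train_linear(0.0, similar_task)       # train on the similar task first
_, warm_steps = train_linear(pretrained_w, target_task) # transfer: warm start
# warm_steps < cold_steps: the warm start needs fewer updates to converge
```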


3 RELATED WORK

Substantial research has been done on the usability of both CNNs and transfer learning and on how they can improve the results of Computer-Aided Detection (CADe) and Computer-Aided Diagnosis (CADx) systems. Shin et al. use three different CNN architectures along with transfer learning for cancer detection and have published very thorough results [12]. This proposal is very similar to their work, with the key differences being the focus on staging and the use of the VGG architecture. Shi et al. use similar methods to reduce the number of false positives in cancer detection [11]. Bi et al. [2], Hosny et al. [8] and Soffer et al. [14] have all thoroughly explored the current and future applications of CNNs in cancer detection.


4 IMPLEMENTATION

The process of acquiring images and pre-processing the data is no different than in other cancer detection CNNs; the major difference in this proposal is its focus on staging the cancerous tumor using transfer learning with a pre-trained model. The staging system that will be used is the TNM staging system developed by the National Cancer Institute [3]. Table 1 shows how the TNM numbers are associated with each patient. Each of the TNM numbers can also be reported as X instead of a number, which means the measurement was not possible for that patient. Table 2 shows how the different stages are determined based on the numbers from Table 1.
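As a rough illustration, the coarse grouping of Table 2 could be encoded as a labeling helper. This is a hypothetical sketch of the simplified table only; `tnm_stage_group` is an invented name, and real TNM stage grouping is considerably more detailed than this:

```python
def tnm_stage_group(t, n, m):
    """Coarse stage group from simplified T, N, M values; 'X' = not measurable."""
    if "X" in (t, n, m):
        return None       # a measurement was not possible for this patient
    if m == 1:
        return "IV"       # cancer has spread to a distant organ
    if t == 0 and n == 0:
        return "0"        # abnormal cells present, but no cancer present yet
    return "I-III"        # cancer present; finer staging needs the full rules

tnm_stage_group(2, 1, 0)     # 'I-III'
tnm_stage_group(1, 0, 1)     # 'IV'
tnm_stage_group("X", 0, 0)   # None
```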

Figure 5 shows the proposed framework of this project. This framework will be applied to both the AlexNet and VGG architectures. Each architecture, however, will be trained once randomly initialized and once starting from a pre-trained model. This means that the proposed framework in figure 5 will be implemented at least four times in this project, and at least four accuracy results will be reported and compared. As shown in figure 5, the datasets will be

Figure 4: VGG16 architecture (derived from the Peltarion website [9], based on Simonyan et al. [13]). VGG16 includes a total of sixteen layers of convolution and pooling

2.2 Transfer Learning

Transfer learning is inspired by the way humans learn new knowledge. The core concept is built around the idea of not isolating


T (range 0-4): size of the tumor; a bigger number means a bigger tumor
N (range 0-3): number of nearby lymph nodes affected by cancer spread
M (range 0-1): whether the cancer has spread to a distant organ

for the ImageNet and CIFAR datasets that can be used for transfer learning. Additionally, pre-trained models can be acquired from previous work done in this field, such as Shin et al. [12].

4.2 Evaluation

The accuracy of the AlexNet and VGG architectures will be assessed both with and without transfer learning. The difference between the accuracy results of the two architectures will be measured before and after using a pre-trained model. This will show how much of a difference transfer learning makes for each CNN architecture. The goal is to find the architecture with the highest accuracy, as well as the architecture that improves the most with transfer learning.
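The comparison described above boils down to four accuracy numbers. A small sketch of the bookkeeping (the function name and the accuracy values here are hypothetical placeholders, not results):

```python
def transfer_gain(results):
    """results: {architecture: {'random': accuracy, 'pretrained': accuracy}}.
    Returns per-architecture gain from transfer learning, the most accurate
    architecture with a pre-trained start, and the most improved architecture."""
    gains = {arch: r["pretrained"] - r["random"] for arch, r in results.items()}
    best_arch = max(results, key=lambda a: results[a]["pretrained"])
    most_improved = max(gains, key=gains.get)
    return gains, best_arch, most_improved

# illustrative numbers only
gains, best_arch, most_improved = transfer_gain({
    "AlexNet": {"random": 0.71, "pretrained": 0.78},
    "VGG16":   {"random": 0.74, "pretrained": 0.86},
})
```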


5 RISKS

  • Overfitting is a major risk for a problem such as this one. The final product could get very accurate results on the given dataset but fail to stage cancer well on any other dataset. Finding the ideal learning rate and setting up the proper pooling layers in the architectures is key to avoiding this problem as much as possible.
  • Vast amounts of datasets are available in the field of cancer detection. However, this particular project demands a dataset that can be used not only for cancer detection but also for staging. The need for TNM numbers in the labels massively narrows down the number of datasets that can be used for this project.
  • There is a chance that transfer learning might not increase accuracy for one or both of the CNN architectures. In that case, switching to a more relevant pre-trained model will likely solve the issue.

6 TIMELINE

This timeline is proposed for the current 7-week terms.

  • Week 1:
    – Implementation of the initial CNNs for staging
    – Include notes about the implementation in the paper
  • Week 2:
    – Implementation of the initial CNNs for the pre-trained model
    – Include the steps taken to reach a pre-trained model for transfer learning in the paper
  • Week 3:
    – Implementation of the evaluation and tests
    – Include notes about the evaluation process and results in the paper
  • Week 4:
    – Comparison of different CNN architectures and different transfer learning models
    – Include notes about the comparison in the paper
  • Week 5:
    – Troubleshooting and fixing errors
    – Paper edits and overall clean up
  • Week 6:
    – Software improvement and outside feedback
    – Paper revision and final draft


Table 1: The meaning of the T, N and M numbers in the TNM staging system (derived from the National Cancer Institute's website [3])

Stage 0: abnormal cells present, but no cancer present yet
Stages I, II and III: cancer is present
Stage IV: cancer has spread to a distant organ

Table 2: Staging based on the final TNM numbers (derived from the National Cancer Institute's website [3])

pre-processed before being used for feature extraction in the CNN or for the classification.

Figure 5: The overall framework of the project
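The four runs implied by this framework (two architectures, each trained once from random initialization and once from a pre-trained model) can be written down as a simple configuration grid; the dictionary keys here are assumptions for illustration:

```python
from itertools import product

architectures = ["AlexNet", "VGG16"]
initializations = ["random", "pretrained"]

# one training-and-evaluation run per (architecture, initialization) pair
runs = [{"arch": arch, "init": init}
        for arch, init in product(architectures, initializations)]
len(runs)   # 4 runs, hence at least four accuracy results to compare
```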

4.1 Pre-Trained Models

The project's aim is to create the pre-trained models for transfer learning into the final model for cancer staging. This can be achieved by training the AlexNet and VGG architectures on publicly available datasets such as MNIST and CIFAR. However, if this process turns out to be too time-consuming for the proposed timeline, different pre-trained models are available, such as models


ACKNOWLEDGMENTS

I would like to thank David Barbella for helping with this research idea and clarifying the details of the proposed technology.


REFERENCES

[1] Yaniv Bar, Idit Diamant, Lior Wolf, and Hayit Greenspan. 2015. Deep learning with non-medical training used for chest pathology identification. In Medical Imaging 2015: Computer-Aided Diagnosis, Vol. 9414. International Society for Optics and Photonics, 94140V.
[2] Wenya Linda Bi, Ahmed Hosny, Matthew B. Schabath, Maryellen L. Giger, Nicolai J. Birkbak, Alireza Mehrtash, Tavis Allison, Omar Arnaout, Christopher Abbosh, Ian F. Dunn, Raymond H. Mak, Rulla M. Tamimi, Clare M. Tempany, Charles Swanton, Udo Hoffmann, Lawrence H. Schwartz, Robert J. Gillies, Raymond Y. Huang, and Hugo J. W. L. Aerts. 2019. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA: A Cancer Journal for Clinicians 69, 2 (2019), 127–157.
[3] [n.d.]. National Cancer Institute website. Retrieved September 22, 2020.
[4] Garry Choy, Omid Khalilzadeh, Mark Michalski, Synho Do, Anthony E. Samir, Oleg S. Pianykh, J. Raymond Geis, Pari V. Pandharipande, James A. Brink, and Keith J. Dreyer. 2018. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 288, 2 (2018), 318–328. 10.1148/radiol.2018171820
[5] Macedo Firmino, Giovani Angelo, Higor Morais, Marcel R Dantas, and Ricardo Valentim. 2016. Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy. Biomedical Engineering Online 15, 1 (2016), 1–17.
[6] Richard Ha, Peter Chang, Jenika Karcich, Simukayi Mutasa, Reza Fardanesh, Ralph T Wynn, Michael Z Liu, and Sachin Jambawalikar. 2018. Axillary lymph node evaluation utilizing convolutional neural networks using MRI dataset. Journal of Digital Imaging 31, 6 (2018), 851–856.
[7] Xiaobing Han, Yanfei Zhong, Liqin Cao, and Liangpei Zhang. 2017. Pre-trained alexnet architecture with pyramid pooling and supervision for high spatial resolution remote sensing image scene classification. Remote Sensing 9, 8 (2017), 848.

[8] Ahmed Hosny, Chintan Parmar, John Quackenbush, Lawrence H Schwartz, and Hugo JWL Aerts. 2018. Artificial intelligence in radiology. Nature Reviews Cancer 18, 8 (2018), 500–510.

[9] [n.d.]. Peltarion website. Retrieved September 22, 2020 from center/documentation/modeling-view/build-an-ai-model/snippets---your-gateway-to-deep-neural-network-architectures/vgg-snippet

[10] Oleg S Pianykh, Georg Langs, Marc Dewey, Dieter R Enzmann, Christian J Herold, Stefan O Schoenberg, and James A Brink. 2020. Continuous learning AI in radiology: implementation principles and early applications. Radiology (2020), 200038.

[11] Zhenghao Shi, Huan Hao, Minghua Zhao, Yaning Feng, Lifeng He, Yinghui Wang, and Kenji Suzuki. 2019. A deep CNN based transfer learning method for false positive reduction. Multimedia Tools and Applications 78, 1 (2019), 1017–1033.

[12] Hoo-Chang Shin, Holger R Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua Yao, Daniel Mollura, and Ronald M Summers. 2016. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Transactions on Medical Imaging 35, 5 (2016), 1285–1298.

[13] Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).

[14] Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, and Eyal Klang. 2019. Convolutional Neural Networks for Radiologic Images: A Radiologist’s Guide. Radiology 290, 3 (2019), 590–606. 10.1148/radiol.2018180547

