Lam – CS388 Three Pitches


Idea 1: Detect and predict mental health status using NLP

The objective of this research is to determine whether Natural Language Processing (NLP) is useful for mental health treatment. Data sources vary from traditional sources (EHRs/EMRs, clinical notes and records) to social sources (tweets and posts from social media platforms). The main source of data will be social media posts, because they enable immediate detection of a user's possible mental illness. The posts are preprocessed by extracting the key words to form an artificial paragraph that contains only numbers and characters. Then, an NLP model, built on genuine and artificial data, will be tested on the real (raw/original) data and compared against the artificial paragraph above. The output will be the correct mental health status of the patients after testing.

Libraries such as scikit-learn and Keras are frequently used in papers in this area. The dataset I would like to experiment with is the suicide and depression dataset of Reddit posts, drawn from subreddits (communities whose names start with r/).
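As a rough illustration of the detection pipeline, here is a minimal sketch using scikit-learn's TF-IDF features and logistic regression; the file name and column names are hypothetical placeholders, not the actual dataset schema:

```python
# Minimal sketch: TF-IDF + logistic regression for flagging at-risk posts.
# Assumes a hypothetical CSV with "text" and "label" columns.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("reddit_posts.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42)

model = make_pipeline(
    TfidfVectorizer(max_features=20000, stop_words="english"),
    LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```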

Idea 2: Using Machine Learning Algorithms to predict air pollution

Machine learning algorithms can predict air pollution based on the amount of air pollutants in the atmosphere, which is defined by the level of particulate matter (we will look specifically at PM2.5, the air pollutant most dangerous to human health). Given data indicating PM2.5 levels in different areas, that data first has to be preprocessed. It will then be run through several machine learning models, notably random forest and linear regression, chosen for their simplicity and their applicability to regression and classification problems, to produce the best model for forecasting PM2.5 levels in the air.

Python and its library scikit-learn are the most common tools for training these models. For this idea, I would like to select a dataset measuring the air quality of five Chinese cities, which is available on Kaggle.
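A minimal sketch of the modeling step might look like the following; the file name and feature columns are hypothetical stand-ins for whatever the Kaggle dataset actually provides:

```python
# Minimal sketch: compare linear regression and random forest for PM2.5 forecasting.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("china_air_quality.csv").dropna()  # hypothetical file
features = ["temperature", "humidity", "pressure", "wind_speed"]  # hypothetical columns
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["PM2.5"], test_size=0.2, random_state=42)

for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=200, random_state=42)):
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(type(model).__name__, "MAE:", round(mae, 2))
```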

Idea 3: Machine Learning in music genre classification

Given an input (dataset) of thousands of songs from different genres such as pop, rock, hip hop/rap, country, etc., the audio will be preprocessed by reducing/filtering noise in the soundwave. Then the neural network model will take those audio inputs, together with images of their spectrograms, and differentiate them as its outputs. Eventually, the KNN method, chosen for its simplicity and its robustness to large, noisy data, will take the model outputs and classify the songs into their respective genres.

Python is the most common choice in this research area thanks to the wide range of its dynamic libraries, such as Keras, scikit-learn, TensorFlow, and even NumPy, that support the work. For this idea, I would like to choose a Spotify dataset thanks to its rich collection of songs.
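As a sketch of the classification step, MFCC features extracted with librosa can feed a KNN classifier; the file paths and genre labels below are hypothetical toy data:

```python
# Minimal sketch: MFCC features + KNN for genre classification.
import librosa
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path):
    # Load audio and summarize its MFCCs into a fixed-length vector.
    y, sr = librosa.load(path, duration=30)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training files and genre labels.
paths = ["song1.wav", "song2.wav", "song3.wav"]
labels = ["rock", "pop", "country"]
X = np.array([mfcc_features(p) for p in paths])

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, labels)
print(knn.predict([mfcc_features("new_song.wav")]))
```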

Content-based Hashtag Recommendation Methods for Twitter


Abstract

For Twitter, a hashtag recommendation system is an important tool for organizing similar content together for topic categorization. Much research has been carried out on devising new techniques for hashtag recommendation, but very little on evaluating the performance of different existing models using the same dataset and the same evaluation metrics. This paper evaluates the performance of different content-based methods (tweet similarity using hashtag frequency, a Naïve Bayes model, and KNN-based cosine similarity) for hashtag recommendation using several evaluation metrics, including Hit Ratio, a metric recently created for evaluating hashtag recommendation systems. The results show that Naïve Bayes outperforms the other methods, with an average accuracy score of 0.83.
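As an illustration of the KNN-style cosine-similarity method named above, TF-IDF vectors of tweets can be compared and the hashtags of the nearest neighbors recommended; the tweets and hashtags below are hypothetical toy data, not the paper's dataset:

```python
# Minimal sketch: recommend hashtags from the most similar existing tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tweets = ["great goal in the match tonight",      # hypothetical corpus
          "new phone camera is amazing",
          "what a match, incredible finish"]
hashtags = [["#football"], ["#tech"], ["#football", "#sports"]]

vectorizer = TfidfVectorizer()
tweet_vecs = vectorizer.fit_transform(tweets)

def recommend(new_tweet, k=2):
    # Rank existing tweets by cosine similarity, pool their hashtags.
    sims = cosine_similarity(vectorizer.transform([new_tweet]), tweet_vecs)[0]
    top = sims.argsort()[::-1][:k]
    return {tag for i in top for tag in hashtags[i]}

print(recommend("that goal was unbelievable"))
```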

Software Architecture Diagram

Fig 1: Software Architecture Diagram

Pre-processing Framework

Evaluation Model

Evaluation Results

Design of the Web Application

Senior Capstone: Cancer metastasis detection using CNNs and transfer learning


Abstract

Artificial Intelligence (AI) has been used extensively in the field of medicine. More recently, advanced machine learning algorithms have become a big part of oncology as they assist with the detection and diagnosis of cancer. Convolutional Neural Networks (CNN) are common in image analysis, and they offer great power for the detection, diagnosis and staging of cancerous regions in radiology images. Convolutional Neural Networks get more accurate results, and, more importantly, need less training data with transfer learning, which is the practice of using pre-trained models and fine-tuning them for specific problems. This paper proposes utilizing transfer learning along with CNNs for staging cancer diagnoses. Randomly initialized CNNs will be compared with CNNs that use transfer learning to determine the extent of improvement that transfer learning can offer for cancer staging and metastasis detection. Additionally, the model utilizing transfer learning will be trained with a smaller subset of the dataset to determine whether transfer learning reduces the need for a large dataset to achieve improved results.

Software architecture diagram

Links to project components

Link to complete paper:

Link to software on gitlab: https://code.cs.earlham.edu/afarah18/senior-cs-capstone

Link to video on youtube: https://www.youtube.com/watch?v=G01y0ZLKST4

Link to copy of poster:

Final proposal + presentation – 388


presentation: https://docs.google.com/presentation/d/1o5tocDFjyPA1eswjaDEi_mrpe2eHbuZGkbrdTn4egXQ/edit?usp=sharing

Cancer staging and metastasis detection using convolutional neural networks and transfer learning

ABSTRACT

Artificial Intelligence (AI) has been used extensively in the field of medicine. More recently, advanced machine learning algorithms have become a big part of oncology as they assist with the detection and diagnosis of cancer. Convolutional Neural Networks (CNN) are common in image analysis, and they offer great power for the detection, diagnosis and staging of cancerous regions in radiology images. Convolutional Neural Networks get more accurate results, and, more importantly, need less training data with transfer learning, which is the practice of using pre-trained models and fine-tuning them for specific problems. This paper proposes utilizing transfer learning along with CNNs for staging cancer diagnoses. Randomly initialized CNNs will be compared with CNNs that use transfer learning to determine the extent of improvement that transfer learning can offer for cancer staging.

KEYWORDS

Artificial Intelligence, Cancer Detection, Tumor Detection, Machine Learning, Transfer Learning, Convolutional Neural Networks, Radiology, Oncology

1 INTRODUCTION

Artificial Intelligence (AI) has grown into an advanced field that plays a major role in our healthcare. AI, in conjunction with Machine Learning (ML), has been aiding radiologists with detecting cancerous regions [2], determining whether a cancerous region is benign or malignant [5], to what degree cancer has spread outside of the initial area [6], how well a patient is responding to treatment [8], and more. Among the many ML methods assisting radiologists, Convolutional Neural Networks (CNN) are deep learning algorithms capable of extracting features from images and making classifications using those features [4]. CNNs are one of the major deep learning methods for image analysis and have become a popular tool in AI-assisted oncology [2]. Over the years many studies have attempted to improve the accuracy of these implementations by comparing different CNN architectures [12], addressing overfitting of the models, using continuous learning [10], transfer learning [12], etc. This proposal aims to improve cancer staging CNNs by applying transfer learning methods and combining the unique improvements that CNN and transfer learning methods can offer. In this paper, related work on the implementation of CNNs and transfer learning for cancer detection is examined and compared to set up an understanding of the algorithms and the tools, the implementation of the CNNs and transfer learning is described, and finally the evaluation method for determining the accuracy of the CNNs is presented. Additionally, major risks for the implementation and a proposed timeline are included.

2 BACKGROUND

This section outlines the main components of what is being proposed in this paper. CNNs and transfer learning methods are used frequently in recent related research, and it is important to understand the basics of how they work.


Figure 1: Simple CNN implementation (derived from Choy et al. [4])

2.1 Convolutional Neural Network

Convolutional Neural Networks are a subset of deep learning methods that extract features from images and further use these features for classification. CNNs are optimized for having images as input, and since radiology is image focused, CNNs are one of the most common AI methods used in radiology [14]. A CNN consists of convolution and pooling layers. Figure 1 shows the layer layout of a basic CNN [4]. Convolution layers include filters that, through training, learn to create a feature map which outputs detected features from the input [14]. This feature map is then fed to a pooling layer, which downsizes the image by picking either the maximum value or the average value from the portion of the image covered by the convolution filter; these two pooling methods are referred to as Max Pooling and Average Pooling respectively. The purpose of pooling is to reduce computation and/or avoid overfitting the model. After the last convolution and pooling layers there is a fully connected (FC) layer, which is used as the classifier after the feature extraction process. Figure 2 visualizes a CNN with two convolution layers and two pooling layers. There are multiple architectures for CNNs which use different layer combinations [14], and these architectures are used in the detection, segmentation and diagnosis steps of oncology [12]. Among the common architectures are AlexNet and VGG. AlexNet is the shallower of the two, with five convolutional layers; it can have different numbers of pooling layers, normally placed on the convolutional layers that are closer to the FC layer.


Figure 2: CNN with two conv and pooling layers (derived from Soffer et al. [14])

Figure 3 shows the AlexNet architecture without the pooling layers included. VGG is a deeper CNN, with VGG16 having sixteen layers and VGG19 having nineteen. Both VGG16 and VGG19 are explicit about how many convolutional and pooling layers are included. Figure 4 shows a VGG16 architecture along with a breakdown of the layers. As shown in Figure 4, pooling layers are present after every two or three convolutional layers in the VGG16 architecture. Both AlexNet and VGG16 have been used in cancer detection systems [14]. AlexNet, the shallower of the two architectures, is more commonly used for detection, while VGG is used for diagnosis since it is a deeper network and has smaller kernel sizes. These two architectures will both be used and compared in my work for staging cancer.

Figure 3: AlexNet architecture (derived from Han et al. [7]). AlexNet includes five convolution layers and a combination of pooling layers after any of the convolution layers.

Figure 4: VGG16 architecture (derived from the Peltarion website [9], based on Simonyan et al. [13]). VGG16 includes a total of sixteen convolution and pooling layers.
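To make the convolution/pooling/FC stack described above concrete, here is a minimal Keras sketch; the layer sizes and the four-class output are illustrative assumptions, not the actual AlexNet or VGG configurations:

```python
# Minimal sketch: a small CNN with convolution, max pooling and an FC classifier head.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),   # convolution: learn a feature map
    layers.MaxPooling2D(),                     # pooling: downsize, reduce computation
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),      # fully connected classifier
    layers.Dense(4, activation="softmax"),     # illustrative: four staging classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```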

2.2 Transfer Learning

Transfer learning is inspired by the way humans learn new knowledge. The core concept is built around the idea of not isolating different learning environments: knowledge gained from one learning process can be used in a different learning process with a different but similar goal. CNNs are commonly known to require large amounts of data for reasonable levels of accuracy, and as a result training CNNs can face problems such as not having access to enough data, not having access to enough hardware resources for computation, and a time-consuming training process. Transfer learning can reduce the need for large sets of data while also increasing the accuracy of the CNN [11]. When a CNN without transfer learning is trained, it is initialized with random weights between the nodes of the network. In transfer learning, however, a pre-trained model is used as the initial state of the network, and as a result less data is required to train a capable model for the original problem. This pre-trained model is a network that was trained to solve a different but similar problem. For instance, if we have a functional model that can detect horses in images, the model can be used, with a little fine-tuning, for transfer learning into a new model that aims to detect dogs. Transfer learning can be very useful in cancer detecting CNNs as it helps improve and expedite the training process. Transfer learning with [1] and without [11] fine-tuning has been used in medical imaging systems and has shown improved results.
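As a minimal sketch of this setup (not the project's actual implementation), a Keras ImageNet-pretrained VGG16 can serve as the initial network, with only a new classifier head trained; the four-class head is an illustrative stand-in for staging labels:

```python
# Minimal sketch: initialize from pre-trained VGG16, fine-tune a new classifier head.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional features

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),  # illustrative: four staging classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # data pipeline omitted
```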

3 RELATED WORK

Substantial research has been done on the usability of both CNN and transfer learning methods and how they can improve the results of Computer-Aided Detection (CADe) and Computer-Aided Diagnosis (CADx) systems. Shin et al. use three different CNN architectures along with transfer learning for cancer detection and have published very thorough results in their work [12]. This proposal is very similar to the work Shin et al. have done with the key differences of the focus on staging and the use of the VGG architecture. Shi et al. use similar methods to reduce the number of false positives in cancer detection [11]. Bi et al. [2], Hosny et al. [8] and Soffer et al. [14] all have thoroughly explored the current and future applications of CNNs in cancer detection.

4 DESIGN

The process of acquiring images and pre-processing the data is no different than in other cancer detection CNNs; the major difference in this proposal is its focus on staging the cancerous tumor using transfer learning with a pre-trained model. The staging system that will be used is the TNM staging system developed by the National Cancer Institute [3]. Table 1 shows how TNM numbers are associated with each patient. Each of the TNM numbers can also be reported as X instead of a number, which means the measurement was not possible for that patient. Table 2 shows how the different stages are determined based on the numbers from Table 1.

T (range 0-4): size of the tumor; a bigger number means a bigger tumor.
N (range 0-3): number of nearby lymph nodes affected by cancer spread.
M (range 0-1): whether the cancer has spread to a distant organ.

Table 1: The meaning of the T, N and M numbers in the TNM staging system (derived from the National Cancer Institute's website [3]).

Stage 0: abnormal cells present, but no cancer present yet.
Stage I, II and III: cancer is present.
Stage IV: cancer has spread to a distant organ.

Table 2: Staging from the final TNM numbers (derived from the National Cancer Institute's website [3]).

Figure 5 shows the proposed framework of this project. This framework will be applied to both the AlexNet and VGG architectures. Each architecture, however, will be started off once randomly initialized and once with a pre-trained model. This means that the proposed framework in Figure 5 will be implemented at least four times in this project, and at least four accuracy results will be reported and compared. As shown in Figure 5, the datasets will be pre-processed before being used for feature extraction in the CNN or for the classification.

Figure 5: The overall framework of the project



4.1 Pre-Trained Models

The project's aim is to create the pre-trained models for transfer learning used in the final model for cancer staging. This can be achieved by training the AlexNet and VGG architectures on publicly available datasets such as MNIST and CIFAR. However, if this process turns out to be too time-consuming for the proposed timeline, there are pre-trained models available for the ImageNet and CIFAR datasets that can be used for transfer learning. Additionally, pre-trained models can be acquired from previous work done in this field, such as Shin et al. [12].

4.2 Evaluation

The accuracy of the AlexNet and VGG architectures will be assessed both with and without transfer learning. The difference between the accuracy results of the two architectures will be measured before and after using a pre-trained model. This will show how much of a difference transfer learning makes for each CNN architecture. The goal is to find the architecture with the highest accuracy, and the architecture that improves the most with transfer learning.

5 MAJOR RISKS

  • Overfitting is a major risk for a problem such as this one. The final product could get incredibly accurate results on the given dataset but fail to stage cancer well on any other dataset. Finding the ideal learning rate and setting up proper pooling layers in the architectures is key to avoiding this problem as much as possible.
  • There are vast amounts of datasets available in the field of cancer detection. However, this particular project demands a dataset that can be used not only for cancer detection but also for staging. The need for TNM numbers in the labels massively narrows down the number of datasets that can be used for this project.
  • There is a chance that transfer learning might not increase accuracy for one or both of the CNN architectures. In that case, switching to a more relevant pre-trained model will likely solve the issue.

6 TIMELINE

This timeline is proposed for the current 7-week terms.

  • Week 1:
    – Implementation of the initial CNNs for staging
    – Include notes about the implementation in the paper
  • Week 2:
    – Implementation of the initial CNNs for the pre-trained model
    – Include the steps taken to reach a pre-trained model for transfer learning in the paper
  • Week 3:
    – Implementation of the evaluation and tests
    – Include notes about the evaluation process and results in the paper
  • Week 4:
    – Comparison of different CNN architectures and different transfer learning models
    – Include notes about the comparison in the paper
  • Week 5:
    – Troubleshooting and fixing errors
    – Paper edits and overall cleanup
  • Week 6:
    – Software improvement and outside feedback
    – Paper revision and final draft



7 ACKNOWLEDGMENTS

I would like to thank David Barbella for helping with this research idea and clarifying the details of the proposed technology.

REFERENCES

[1] Yaniv Bar, Idit Diamant, Lior Wolf, and Hayit Greenspan. 2015. Deep learning with non-medical training used for chest pathology identification. In Medical Imaging 2015: Computer-Aided Diagnosis, Vol. 9414. International Society for Optics and Photonics, 94140V.

[2] Wenya Linda Bi, Ahmed Hosny, Matthew B. Schabath, Maryellen L. Giger, Nicolai J. Birkbak, Alireza Mehrtash, Tavis Allison, Omar Arnaout, Christopher Abbosh, Ian F. Dunn, Raymond H. Mak, Rulla M. Tamimi, Clare M. Tempany, Charles Swanton, Udo Hoffmann, Lawrence H. Schwartz, Robert J. Gillies, Raymond Y. Huang, and Hugo J. W. L. Aerts. 2019. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA: A Cancer Journal for Clinicians 69, 2 (2019), 127–157. https://doi.org/10.3322/caac.21552

[3] cancer.gov. [n.d.]. National Cancer Institute website. Retrieved September 22, 2020 from https://www.cancer.gov/about-cancer/diagnosis-staging/staging

[4] Garry Choy, Omid Khalilzadeh, Mark Michalski, Synho Do, Anthony E. Samir, Oleg S. Pianykh, J. Raymond Geis, Pari V. Pandharipande, James A. Brink, and Keith J. Dreyer. 2018. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 288, 2 (2018), 318–328. https://doi.org/10.1148/radiol.2018171820

[5] Macedo Firmino, Giovani Angelo, Higor Morais, Marcel R Dantas, and Ricardo Valentim. 2016. Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy. Biomedical Engineering Online 15, 1 (2016), 1–17.

[6] Richard Ha, Peter Chang, Jenika Karcich, Simukayi Mutasa, Reza Fardanesh, Ralph T Wynn, Michael Z Liu, and Sachin Jambawalikar. 2018. Axillary lymph node evaluation utilizing convolutional neural networks using MRI dataset. Journal of Digital Imaging 31, 6 (2018), 851–856.

[7] Xiaobing Han, Yanfei Zhong, Liqin Cao, and Liangpei Zhang. 2017. Pre-trained AlexNet architecture with pyramid pooling and supervision for high spatial resolution remote sensing image scene classification. Remote Sensing 9, 8 (2017), 848.

[8] Ahmed Hosny, Chintan Parmar, John Quackenbush, Lawrence H Schwartz, and Hugo JWL Aerts. 2018. Artificial intelligence in radiology. Nature Reviews Cancer 18, 8 (2018), 500–510.

[9] peltarion.com. [n.d.]. Peltarion Website. Retrieved September 22, 2020 from https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/snippets---your-gateway-to-deep-neural-network-architectures/vgg-snippet

[10] Oleg S Pianykh, Georg Langs, Marc Dewey, Dieter R Enzmann, Christian J Herold, Stefan O Schoenberg, and James A Brink. 2020. Continuous learning AI in radiology: implementation principles and early applications. Radiology (2020), 200038.

[11] Zhenghao Shi, Huan Hao, Minghua Zhao, Yaning Feng, Lifeng He, Yinghui Wang, and Kenji Suzuki. 2019. A deep CNN based transfer learning method for false positive reduction. Multimedia Tools and Applications 78, 1 (2019), 1017–1033.

[12] Hoo-Chang Shin, Holger R Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua Yao, Daniel Mollura, and Ronald M Summers. 2016. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Transactions on Medical Imaging 35, 5 (2016), 1285–1298.

[13] Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).

[14] Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, and Eyal Klang. 2019. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 290, 3 (2019), 590–606. https://doi.org/10.1148/radiol.2018180547


Final Literature Review – 388


Artificial intelligence in radiology cancer detection: a literature review

KEYWORDS

Artificial Intelligence, Cancer Detection, Tumor Detection, Machine Learning, Convolutional Neural Networks, Radiology

1 INTRODUCTION

In a world where Artificial Intelligence (AI) is helping us drive our cars and control our traffic, it is no surprise that AI is also getting involved in our healthcare. Computers have been used for many decades to help healthcare workers both with imaging patient organs and with reading those images [4]. Now, with the power of AI, new aspects of computer technology are helping healthcare workers and patients. Research has shown that radiologists have better sensitivity in their cancer detection diagnostics if they have the help of AI-powered software [11], and in some cases the AI was able to outperform the radiologists [12]. The even more complex task of detecting whether the cancer has spread to other locations has also been tackled by AI [6]. Arguably, AI can help minimize or eliminate human error in such a sensitive field, where a patient's life depends on how an image is interpreted [12]. Modern AI methods are more than capable of learning what triggers a radiologist to decide whether there is a cancerous tumor, whether a tumor is benign, or whether cancer has spread to other places in the patient's body. On the other hand, some AI methods rely on data from radiologists for their learning, and, as a result, these AI methods cannot completely replace human radiologists. In addition, all AI software needs to be maintained and updated over time to keep the integrity of the AI model, given that an outdated AI model can lose quality over time [10]. This literature review discusses different methods of radiology imaging, along with how they are used and which specific fields of cancer detection AI can be helpful in. Finally, it focuses on how Convolutional Neural Networks can be trained to imitate a radiologist in detecting and diagnosing cancer and its spread.

2 ONCOLOGY AND ARTIFICIAL INTELLIGENCE

Oncology is the branch of medicine that focuses on diagnosing and treating cancer patients. After the proper images are acquired through Computed Tomography (CT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI) or any other means of radiology imaging, the images need to be preprocessed into a format that the AI can accept as input. Artificial Intelligence has made itself a crucial part of the oncology world by assisting in three clinical steps: 1) detection, 2) characterization and 3) monitoring of the given cancerous abnormality [1]. Figure 1 shows the complete steps of AI assistance in oncology, from image creation to the final diagnostic and the monitoring of the patient's response to the treatment. Specific tumor sites where AI has been usefully applied in oncology, or could potentially be useful, are: breast, lung, brain and central nervous system (CNS), gastric, prostate, lymph node spreads, etc. [1] [6].

Figure 1: The main AI steps in Oncology

2.1 Detection

Abnormality detection is defined as the process in which an AI system searches for a Region of Interest (ROI) in which any abnormality can be present [1] [7]. It is in this step where AI aims to correct any oversensitivity or undersensitivity in a human radiologist, reducing the number of false-negative and false-positive diagnostics. Computers have been helping with this step in oncology for over two decades, in what is described as Computer Aided Detection (CADe) [3]. However, with rapid advancements in AI's capabilities and popularity, CADe and AI-CADe have become an inseparable part of radiology imaging.

2.2 Characterization

After the detection of the abnormality, the detected abnormality needs to be characterized. Characterization includes separating the tumor or the area in question from the non-cancerous surrounding tissue, classifying the tumor as malignant or benign, and finally determining the stage of the cancerous tumor based on how much the tumor has spread. These three steps in characterization are commonly referred to as segmentation, diagnosis and staging [4] [7]:


• Segmentation is a process similar to edge detection in image processing, as it aims to narrow down the image and only draw focus to the cancerous part of the image.

• Computer Aided Diagnosis (CADx) is the name of the systems used in the diagnosis part of characterization [4]. CADx systems use characteristic features such as texture and intensity to determine the malignancy of the tumor [4].

• Staging: there are specific criteria that put each patient into pre-specified stages according to data acquired in the segmentation and diagnosis steps. Staging systems use measures such as the size of the tumor, whether the cancer has spread out of the tumor (metastasizing), and the number of nearby lymph nodes the cancer has spread to.

2.3 Monitoring

The final step where AI assists in oncology is monitoring a patient's response after a period of time in which the patient underwent one or a series of treatments, such as chemotherapy. The AI is effectively looking for any changes in the tumor, such as growth or shrinkage in size, and changes in the tumor's malignancy and spread. The AI system can do this accurately since it can detect changes that might not be visible to the radiologist's eye. The AI system also eliminates the human error involved in reading smaller changes in images and in comparing images over time [1].

3 DATASETS

Machine Learning is mostly data driven. Thus, AI systems have a constant need for patient radiology records in order to be of any assistance to radiology practices or to have the ability to compete with human radiologists. Fortunately, there is no lack of data or variety of data in this field, as statistics show that one in four Americans receive a CT scan and one in ten Americans receive an MRI scan each year [7].

Furthermore, industry-famous dataset libraries are publicly available, including but not limited to:

• OpenNeuro [9] (formerly known as OpenfMRI [8])
• Camelyon17, as part of the Camelyon challenge [5]
• BrainLife [2]

4 ARTIFICIAL INTELLIGENCE METHODS IN RADIOLOGY

Different AI algorithms and methods have been used in oncology, both for Computer-Aided Detection (CADe) and for Computer-Aided Diagnosis (CADx). Traditionally, supervised learning on labeled data and shallow networks have been used. However, with advancements in AI technology, unsupervised learning, unlabeled data and deeper networks have proven to be of more help in detecting patterns for CADe and CADx systems. Deep learning methods might even find patterns that are not easily detectable by the human eye [15].

4.1 Support Vector Machines

Support Vector Machine (SVM) is a supervised machine learning model on the more traditional side of models for CAD systems. Due to their simplicity and the fact that they are not very computationally expensive, SVMs have been used extensively in tumor detection and have yielded good results [1]. However, they have been superseded by more advanced machine learning methods that are capable of detecting features without having labeled data as their input.

4.2 Convolutional Neural Networks

Convolutional Neural Networks are commonly used for unsupervised learning and are considered deep learning, meaning they include many more layers than supervised versions of neural networks. CNNs are optimized for having images as input, and since radiology is image focused, CNNs are one of the most common AI methods used in radiology [14]. In a 2016 study by Shin et al. on different CNN applications, CNNs yielded better average results than traditional approaches on lymph node datasets [13]. A CNN's layers consist of convolution and pooling layers. Convolution layers include filters which, through training, learn to create a feature map which outputs detected features in the input [14]. Pooling layers are used for downsizing the output of the convolution layers, which helps with reducing computation and overfitting issues [14]. What makes CNNs unique among multi-layered machine learning models is feature extraction, which is powered by the combination of convolution and pooling layers. At the end of the last convolution and pooling layers there is a fully connected (FC) layer, which is used as the classifier after the feature extraction process [14]. There are multiple architectures for CNNs which use different layer combinations [14], and these architectures are used in the detection, segmentation and diagnosis steps of oncology [13].

5 CONCLUSION

This literature review looked into how AI has been used in radiology for detecting, diagnosing and monitoring cancer in patients. We discussed the main steps in which AI is applied to oncology: from when the image is acquired by CT, PET or MRI scanner machines, to when the AI-CAD system reads the image to detect an ROI and diagnose the tumor, to how AI can help monitor the shrinkage or growth of tumors in a patient undergoing treatment. We further discussed how AI systems are capable of detecting metastasis, categorizing the patient into different stages depending on how far the cancer has spread from the initial tumor. After discussing the data and glancing over a few important datasets, we looked at different AI methods, both supervised and unsupervised, and discussed how they differ.

REFERENCES

[1] Wenya Linda Bi, Ahmed Hosny, Matthew B. Schabath, Maryellen L. Giger, Nicolai J. Birkbak, Alireza Mehrtash, Tavis Allison, Omar Arnaout, Christopher Abbosh, Ian F. Dunn, Raymond H. Mak, Rulla M. Tamimi, Clare M. Tempany, Charles Swanton, Udo Hoffmann, Lawrence H. Schwartz, Robert J. Gillies, Raymond Y. Huang, and Hugo J. W. L. Aerts. 2019. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA: A Cancer Journal for Clinicians 69, 2 (2019), 127–157. https://doi.org/10.3322/caac.21552

[2] brainlife.io. [n.d.]. Brain Life dataset library. Retrieved September 17, 2020 from https://brainlife.io/

[3] Ronald A Castellino. 2005. Computer aided detection (CAD): an overview. Cancer Imaging 5, 1 (2005), 17.

[4] Macedo Firmino, Giovani Angelo, Higor Morais, Marcel R Dantas, and Ricardo Valentim. 2016. Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy. Biomedical Engineering Online 15, 1 (2016), 1–17.

[5] grand-challenge.org. [n.d.]. Camelyon17 grand challenge. Retrieved September 17, 2020 from https://camelyon17.grand-challenge.org

[6] Richard Ha, Peter Chang, Jenika Karcich, Simukayi Mutasa, Reza Fardanesh, Ralph T Wynn, Michael Z Liu, and Sachin Jambawalikar. 2018. Axillary lymph node evaluation utilizing convolutional neural networks using MRI dataset. Journal of Digital Imaging 31, 6 (2018), 851–856.

[7] Ahmed Hosny, Chintan Parmar, John Quackenbush, Lawrence H Schwartz, and Hugo JWL Aerts. 2018. Artificial intelligence in radiology. Nature Reviews Cancer 18, 8 (2018), 500–510.

[8] G. Manogaran, P. M. Shakeel, A. S. Hassanein, P. Malarvizhi Kumar, and G. Chandra Babu. 2019. Machine Learning Approach-Based Gamma Distribution for Brain Tumor Detection and Data Sample Imbalance Analysis. IEEE Access 7 (2019), 12–19.

[9] openneuro.org. [n.d.]. Open Neuro dataset library. Retrieved September 17, 2020 from https://openneuro.org

[10] Oleg S Pianykh, Georg Langs, Marc Dewey, Dieter R Enzmann, Christian J Herold, Stefan O Schoenberg, and James A Brink. 2020. Continuous learning AI in radiology: implementation principles and early applications. Radiology (2020), 200038.

[11] Alejandro Rodríguez-Ruiz, Elizabeth Krupinski, Jan-Jurre Mordang, Kathy Schilling, Sylvia H Heywang-Köbrunner, Ioannis Sechopoulos, and Ritse M Mann. 2019. Detection of breast cancer with mammography: effect of an artificial intelligence support system. Radiology 290, 2 (2019), 305–314. https://doi.org/10.1148/radiol.2018181371

[12] Alejandro Rodriguez-Ruiz, Kristina Lång, Albert Gubern-Merida, Mireille Broeders, Gisella Gennaro, Paola Clauser, Thomas H Helbich, Margarita Chevalier, Tao Tan, Thomas Mertelmeier, et al. 2019. Stand-alone artificial intelligence for breast cancer detection in mammography: comparison with 101 radiologists. JNCI: Journal of the National Cancer Institute 111, 9 (2019), 916–922.

[13] H. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers. 2016. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Transactions on Medical Imaging 35, 5 (2016), 1285–1298.

[14] Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, and Eyal Klang. 2019. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 290, 3 (2019), 590–606. https://doi.org/10.1148/radiol.2018180547

[15] An Tang, Roger Tam, Alexandre Cadrin-Chênevert, Will Guest, Jaron Chong, Joseph Barfett, Leonid Chepelev, Robyn Cairns, J Ross Mitchell, Mark D Cicero, et al. 2018. Canadian Association of Radiologists white paper on artificial intelligence in radiology. Canadian Association of Radiologists Journal 69, 2 (2018), 120–135.

Annotated Bibliography – 388


Pitch #1

Style transfer and Image manipulation

Given any photo, this project should be able to take any art movement, such as Picasso's cubism, and apply the style of that movement to the photo. The neural network first needs to detect all the subjects and separate them from the background, and also learn the color scheme of the photo. Then the art movement dataset needs to be analyzed to find patterns in the style. The style then needs to be applied to the initial photo. The final rendering of the photo could get computationally expensive; if so, GPU hardware will be needed. Imaging libraries such as Pillow and scikit-image would be needed. It might be a little hard to find proper datasets, since there are limited datasets available for each art movement. Alternatively, I could avoid the need for ready-made datasets by training the network to detect style patterns from unlabeled painting scans.
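To ground the content/style separation the sources below describe, here is a minimal sketch that extracts content features and style Gram matrices from Keras' ImageNet-pretrained VGG19; the layer choices are illustrative assumptions, not the papers' exact configurations:

```python
# Minimal sketch: content/style feature extraction in the spirit of Gatys et al.
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False

content_layer = "block4_conv2"  # a deeper layer captures content, not pixels
style_layers = ["block1_conv1", "block2_conv1", "block3_conv1"]  # illustrative

extractor = tf.keras.Model(
    inputs=vgg.input,
    outputs=[vgg.get_layer(n).output for n in [content_layer] + style_layers])

def gram_matrix(feats):
    # Style is captured by correlations between feature channels,
    # independent of where in the image the features occur.
    gram = tf.einsum("bijc,bijd->bcd", feats, feats)
    n_locations = tf.cast(tf.shape(feats)[1] * tf.shape(feats)[2], tf.float32)
    return gram / n_locations

image = tf.random.uniform((1, 224, 224, 3))  # stand-in for a preprocessed photo
content_feats, *style_feats = extractor(image)
style_grams = [gram_matrix(f) for f in style_feats]
```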

Source #1:

Chen, D. Yuan, L. Liao, J. Yu, N. Hua, G. 2017. StyleBank: An Explicit Representation for Neural Image Style Transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 1897-1906.

Link

This paper uses a network that successfully differentiates between the content and style of each image. I had doubts about this project because it involves two separate tasks: 1) understanding what subjects are in the photo (content), and 2) understanding the color scheme and style applied to those subjects (style). This paper's approach manages to tackle both. The conclusion actually mentions what was not explored, which could be an interesting avenue to investigate and a good starting point for my work. Many successful and reputable works are used for comparison in this paper (including source #2 below), and overall the paper is a gold mine of datasets (including the mentioned Microsoft COCO dataset). It might be hard to find a way to evaluate my final results for this pitch; however, this paper does an excellent job of such evaluation simply by comparing its final images with final images from previous work.

Source #2:

Gatys, L.A. Ecker, A.S. Bethge, M. 2016. Image Style Transfer Using Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2414-2423.

Link

Gatys et al. start with an excellent summary and comparison of previous work in the introduction. They propose a new algorithm aimed at overcoming the challenges of separating the content and style of an image. This paper is one of the most famous and popular approaches to style transfer, and it has been cited many times. The filters used in this paper to detect content can be very useful in my project, since the approach focuses on higher layers of the convolutional network for content detection, which are not exact pixel recreations of the original picture. Eliminating the exact pixel representation of the image could also reduce the need for computationally expensive code.

Source #3:

Arpa, S. Bulbul, A. Çapin, T.K. Özgüç, B. 2012. Perceptual 3D rendering based on principles of analytical cubism.Computers & Graphics, 6. 991-1004.

Link

This paper gives us a different perspective than the other resources, since it analyzes what exactly constructs the cubist style by examining cubism in a 3D-rendered environment. The approach uses three different sets of rules and patterns to create cubist results: 1) faceting, 2) ambiguity and 3) discontinuity. If I can successfully apply these three patterns in a 2D environment, then I will have decreased the number of filters that need to be applied to each photo down to only three. Faceting seems to be the most important part of cubism, as different facet sizes can actually create both ambiguity and discontinuity. The main problem with this paper is that there is not much other work to compare it to, and it seems the only way to evaluate the final work is by observation and our own judgment, even though the paper includes some input from art critics.

Source #4:

Lian, G. Zhang, K. 2020. Transformation of portraits to Picasso’s cubism style. The Visual Computer, 36. 799–807.

Link

This paper starts off by mentioning that the Gatys et al. (source #2) approach, however successful, is slow and memory-consuming, and it claims to improve the method to be more efficient. The problem with this paper is that even though its algorithm is much more efficient and quicker than Gatys et al., it only applies one style, and only to portraits. Gatys et al., however memory-consuming, does a great job of working with any image and any content. So my goal would be to expand on this paper's efficient approach so it can work with a wider range of image content and styles while still being less computationally expensive than the famous Gatys et al. approach.

Pitch #2

Image manipulation detection

A neural network would be trained to detect image manipulation in a given photo. There are many ways to achieve this, including but not limited to image noise analysis. Different algorithms can be compared to see which detects manipulation best, or which is better automated in the training process.

Python libraries such as Keras and sklearn will be used for the neural network and the deep learning. Many previous research papers and datasets are available for this project or similar ones.
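As a rough sketch of the noise-analysis idea (the file name, filter size and block size are arbitrary placeholders), a noise residual can be computed by subtracting a median-filtered version of the image and comparing its local statistics:

```python
# Minimal sketch: noise residual extraction for manipulation analysis.
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

img = np.asarray(Image.open("photo.jpg").convert("L"), dtype=np.float32)  # hypothetical file
residual = img - median_filter(img, size=3)  # high-frequency noise left after denoising

# Spliced regions often show locally inconsistent noise statistics;
# compare the residual's variance across blocks as a crude indicator.
h, w = residual.shape
blocks = residual[: h // 32 * 32, : w // 32 * 32].reshape(h // 32, 32, w // 32, 32)
block_var = blocks.var(axis=(1, 3))
print("block noise variance range:", block_var.min(), block_var.max())
```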

Source #1:

Zhou, P. Han, X. Morariu, V.I. Davis L.S. 2018. Learning Rich Features for Image Manipulation Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 1053-1061.

Link

A strong paper with a strong list of citations. Separate convolution streams for RGB and noise level seem to work together very well. This paper talks about how the authors had issues finding good enough datasets, and how they overcame this problem by creating a pre-trained model from the Microsoft COCO dataset. The noise stream approach is especially interesting. The filters used in the convolutions are given, and can be used directly or slightly modified for comparison and perhaps enhancement of the method. The results section, unlike in the previous pitch, is very thorough and shows how the RGB and noise-level approaches complement each other. My goal would be to enhance the noise model until it can identify manipulation independently of the RGB stream. The noise-level detection seems to be almost complete on its own, and stripping out the RGB stream and the comparison between the two would greatly decrease the computational resources the method requires.

Source #2:

Bayram, S. Avcibas, I. Sankur, B. Memon, N. 2006. Image manipulation detection. J. Electronic Imaging, 15.

Link

This paper is a lot more technical than Zhou et al. (source #1). Source #1 seems more focused on whether content was added to or removed from an image, while this paper also covers cases where the original photo keeps the same content but is slightly manipulated. Very strong results, along with thorough graphs, are included for comparison between methods. Five different forensic approaches are compared in this paper, each of which needs to be researched separately to be able to apply the actual method (or only the best-performing one from the results section). My goal would be to find what makes each approach do better than the others, take only the best parts of each, and somehow combine them.

Source #3:

Bayar, B. Stamm, M.C. 2016. A Deep Learning Approach to Universal Image Manipulation Detection Using a New Convolutional Layer. In Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec '16). Association for Computing Machinery, New York, NY, USA, 5–10.

https://doi.org/10.1145/2909827.2930786

This paper does a great analysis of the different convolutional neural network methods and which ones could be used for maximum optimization in image manipulation and forgery detection. The datasets used are a massive collection of more than 80,000 images (all publicly available on their GitHub page), which would be very helpful for my project if I were to choose this pitch. The two sets of results, for the binary classification and multi-class classification approaches, are both very impressive, with binary classification reaching a minimum of 99.31% accuracy. This paper's unique approach is to remove the need for detecting the content of the image and to attempt to detect manipulation directly, with no pre-processing.

Pitch #3

Radiology disease detection

Trained neural networks for detecting radiology abnormalities and diseases have reached a level where they can easily compete with a human radiologist. For this project I will be using neural network libraries to detect different abnormalities. There are many different fields this can apply to, such as brain tumor detection, breast cancer detection, endoscopy, colonoscopy, CT/MRI, oncology, etc. There are countless datasets and different approaches available for a project such as this one, which gives me the opportunity to compare and contrast different variations of them. This is a very rich field with a lot of previous work and published papers to explore.

Source #1:

Liu, Y. Kohlberger, T. Norouzi, M. Dahl, G.E. Smith, J.L. Mohtashamian, A. Olson, N. Peng, L.H. Hipp, J.D. Stumpe, M.C. 2019. Artificial Intelligence–Based Breast Cancer Nodal Metastasis Detection: Insights Into the Black Box for Pathologists. Arch Pathol Lab, 143 (7). 859–868.

Link

A lot of medical terms are used in this paper, which helped me understand the field better by looking them up; examples are LYNA, sentinel lymph node, metastatic, etc. This paper is different from the other sources, since the sources below (especially the brain tumor one, source #3) are all focused on finding a tumor in a specific place, while this paper focuses on finding out whether cancer has spread to other places such as lymph nodes (that is what metastatic means). Detecting whether cancer has spread seems to me a better real-life application of deep learning radiology and might affect the final version of this pitch. More than one good dataset is mentioned in this paper that can be used for this pitch, including the Camelyon16 dataset from 399 patients; additionally, there is a newer Camelyon17 dataset available. This paper comes incredibly close to the Camelyon16 winner, which could also be a good source to check out. It has a strong list of references, including papers on using deep networks to detect skin cancer, and some of the authors have other papers that contribute to the field even further. (Interesting fact: somehow a lot of the authors of this paper, and of the ones in the reference list, are from my country.)

Source #2:

Manogaran, G. Shakeel, P. M. Hassanein, A. S. Kumar, P.M. Babu G.C. 2019. Machine Learning Approach-Based Gamma Distribution for Brain Tumor Detection and Data Sample Imbalance Analysis. IEEE Access, 7. 12-19.

 Link

This paper uses a dataset from OpenfMRI, which has turned into https://openneuro.org. Both the old and the new dataset libraries are very large and very useful for this pitch. The paper also includes two different algorithms that help find the location of the tumor in each radiology image. It is important to remember that ROI in this paper doesn't mean Return On Investment; it means Region of Interest, the region where the neural network has detected the tumor to be located. The orthogonal gamma distribution model seems to play a big role in the ROI detection (it is in the title), which makes this approach unique, as it gives the algorithm the capability to self-identify the ROI. This automation is the winning achievement of this paper.

Source #3:

Logeswari, T. Karnan, M. 2010. An improved implementation of brain tumor detection using segmentation based on soft computing. Journal of Cancer Research and Experimental Oncology, 2 (1). 6-14.

Link

This paper proposes a hierarchical self-organizing map (HSOM) for MRI segmentation, and claims that this approach improves on the traditional self-organizing map (SOM) method. The winning-neuron method is a simple yet elegant technique that I can utilize in my application of MRI reading and detection. This paper is relatively old compared to the other sources and doesn't include any significant dataset; however, the simple winning-neuron algorithm is a good starting point that can be expanded with better segmentation methods, better model training, etc.

Source #4:

Bi, W.L. Hosny, A. Schabath, M.B. Giger, M.L. Birkbak, N.J. Mehrtash, A. Allison, T. Arnaout, O. Abbosh, C. Dunn, I.F. Mak, R.H. Tamimi, R.M. Tempany, C.M. Swanton, C. Hoffmann, U. Schwartz, L.H. Gillies, R.J. Huang, R.Y. Aerts, H.J.W.L. 2019. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA A Cancer J Clin, 69. 127-157. 

Link

This paper is a little different from the other sources, as it discusses Artificial Intelligence used in cancer detection as a general concept while also going in depth into various fields of cancer detection, such as lung cancer detection, breast mammography, and prostate and brain cancer detection. The paper can be very helpful, since it describes different methods of cancer detection in a machine learning environment and talks about different kinds of tumors and abnormalities at length. The authors even do a thorough comparison of previous work in each field, covering which tumor was studied, how many patients were in the dataset, which algorithm was used, how accurate the detection was, and more. This paper opens the possibility for me to choose between the different fields for my final proposal.

Source #5:

Watanabe, A.T. Lim, V. Vu, H.X. et al. 2019. Improved Cancer Detection Using Artificial Intelligence: a Retrospective Evaluation of Missed Cancers on Mammography. In Journal of Digital Imaging 32. 625–637.

Link

This paper focuses generally on human error in cancer detection and how Artificial Intelligence can minimize it. Unfortunately, AI cancer detection is not as mainstream as it should be, even though it has been proven to assist and outperform radiologists; that is what this paper tries to address. The paper claims that computer-aided detection (CAD) before deep learning was not very helpful, and tries to prove that since the addition of AI to CAD (AI-CAD), false-negative and false-positive detections have diminished. The paper then studies the sensitivity, number of false positives, etc. in a group of radiologists with various backgrounds and experience levels to prove that the AI-CAD system in question is helpful. Perhaps my final results could be compared against a radiologist's detections in a small section of my results, using the same comparison methods this paper used.

Source #6:

Rodriguez-Ruiz, A. Lång, K. Gubern-Merida, A. Broeders, M. Gennaro, G. Clauser, P. Helbich, T.H. Chevalier, M. Tan, T. Mertelmeier, T. Wallis, M.G. Andersson, I. Zackrisson, S. Mann, R.M. Sechopoulos, I. 2019. Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists, In JNCI: Journal of the National Cancer Institute, Volume 111, Issue 9. 916–922.

https://doi.org/10.1093/jnci/djy222

Similar to Watanabe et al. (source #5 above), this paper compares AI-powered cancer detection to radiologists' diagnoses made without any computer assistance. The authors use results from nine different studies, with a total of 101 radiologists' diagnostics, and compare their collective results to their AI-powered detection system. This paper does a more thorough comparison, with a larger set of datasets, and has more promising results than Watanabe et al. (source #5). Its supplementary tables show in-depth results of each individual dataset against the AI system, which shows slightly better results than the average of the radiologists in most cases.

Source #7:

Pianykh, O.S. Langs, G. Dewey, M. Enzmann, D.R. Herold, C.J. Schoenberg, S.O. Brink, J.A. 2020. Continuous Learning AI in Radiology: Implementation Principles and Early Applications. 

https://doi.org/10.1148/radiol.2020200038

The unique problem under focus in this paper is how even the best AI algorithms for radiology reading and detection can become outdated, given that the data and the environment are always evolving in the field. The paper shows that models trained for radiology lose quality over even short periods of time, such as a few months, and proposes a continuous learning method to solve this problem. Continuous learning and its advantages are discussed and explained at length. According to the data in the paper, the AI model was able to keep its quality if feedback from radiologists and patients was collected and the model was retrained once a week using this feedback. The problem with this approach is that including radiologists and less knowledgeable patients in the training process increases the risk of mistraining, due both to human error and to input from people without enough experience and/or knowledge of the field. In my work I could explore ways of ridding this method of human input, potentially making different AI models work with each other to provide feedback for continuous learning.

Source #8:

Tang, A. Tam, R. Cadrin-Chênevert, A. Guest, W. Chong, J. Barfett, J. Chepelev, L. Cairns, R. Mitchell, J. R. Cicero, M. D. Poudrette, M. G. Jaremko, J. L. Reinhold, C. Gallix, B. Gray, B. Geis, R. 2018. Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology. In Canadian Association of Radiologists Journal 69(2). 120–135.

https://doi.org/10.1016/j.carj.2018.02.002

This paper doesn't actually build any Artificial Intelligence program, and therefore there aren't any datasets or results to draw on. However, it offers a vast insight into Artificial Intelligence and its role in modern radiology. Many helpful topics are included, such as different AI learning methods, their different applications in the radiology world, different imaging techniques, the role of radiologists in the AI field, and even the terminology needed to understand the concept and communicate it to others. Even though there is no software implementation in this paper, and no datasets are used as a result, datasets are discussed at length: what a dataset requires to be useful and how large it should be are covered, with actual examples (such as ChestX-ray8, with more than 100 thousand x-ray images).

Source #9:

Ha, R. Chang, P. Karcich, J. et al. 2018. Axillary Lymph Node Evaluation Utilizing Convolutional Neural Networks Using MRI Dataset. In Journal of Digital Imaging 31. 851–856.

https://doi.org/10.1007/s10278-018-0086-7

Axillary lymph nodes are a set of lymph nodes in the armpit area, and due to their proximity to the breasts they are the first set of lymph nodes breast cancer spreads to. Detecting even the smallest amount of microscopic cancer cells in the axillary lymph nodes is critical to determining the stage of breast cancer and the proper treatment methods. For that reason this paper focuses on using MRI scans to detect the spread of cancer to these nodes, as opposed to the normal tumor detection in breast mammograms of sources #1, #5 and #6. The other major difference between this source and the others is the pre-processing of the 3D datasets. Interestingly, this paper also leaves pooling out and downsizes the data with a different method. I believe I can use and even optimize many of the methods in this paper to get better results in my work. Metastasis detection is a very interesting field which doesn't have the same amount of AI work done compared to tumor detection, and for that reason I believe there is more room for improvement in the field.

Source #10:

Rodríguez-Ruiz, A. Krupinski, E. Mordang, J.J. Schilling, K. Heywang-Köbrunner, S.H. Sechopoulos, I. Mann, R.M. 2018. Detection of Breast Cancer with Mammography: Effect of an Artificial Intelligence Support System.

https://doi.org/10.1148/radiol.2018181371

This paper is likely what encouraged the creation of Rodríguez-Ruiz et al. (source #6). The difference between the two papers is small yet substantial: in this paper Rodríguez-Ruiz et al. compare how radiologists perform with and without the help of AI-powered systems, as opposed to their newer paper (source #6), which takes the AI systems out and makes the radiologists compete against what was used to help them here. Interestingly enough, the results of the paper in which radiologists compete against AI are better than when AI is used as an aid to the radiologists. The data in this paper is not as extensive as in the newer paper (source #6); however, as long as good data is available (as in source #6), there is a lot for me to explore in comparing different AI methods and different kinds of radiologists (perhaps in a different field than mammography).

Source #11:

Hirasawa, T., Aoyama, K., Tanimoto, T. et al. 2018. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. In Gastric Cancer 21. 653–660.

https://doi.org/10.1007/s10120-018-0793-2

With most of my sources focused on breast and brain tumor detection, I thought it would be helpful to expand the range of possibilities by including more applications of AI in radiology. This paper uses similar CNN approaches on endoscopy images for gastric cancer detection. Going into fields such as gastric cancer detection, where the amount of previous work is smaller than in breast and brain tumor detection, has advantages and disadvantages: the main advantage is that I am more likely to improve on the most recent work, and the main disadvantage is the smaller number of datasets available. This paper, however, uses good datasets acquired from different hospitals and clinics. Perhaps acquiring data from local clinics and hospitals is something I can look into for my work as well.

three pitches (388)


Pitch #1

Style transfer and Image manipulation

Given any photo, this project should be able to take any art movement, such as Picasso's cubism, and apply the style of that movement to the photo. The neural network first needs to detect all the subjects and separate them from the background, and also learn the color scheme of the photo. Then the art movement dataset needs to be analyzed to find patterns in the style. The style then needs to be applied to the initial photo. The final rendering of the photo could get computationally expensive; if so, GPU hardware will be needed. Imaging libraries such as Pillow and scikit-image would be needed. It might be a little hard to find proper datasets, since there are limited datasets available for each art movement. Alternatively, I could avoid the need for ready-made datasets by training the network to detect style patterns from unlabeled paintings.

Pitch #2

Image manipulation detection

A neural network would be trained to detect image manipulation in a given photo. There are many ways to achieve this, including but not limited to image noise analysis. Different algorithms can be compared to see which detects manipulation best, or which is better automated in the training process.

Python libraries such as Keras and sklearn will be used for the neural network and the deep learning. Many previous research papers and datasets are available for this project or similar ones.

Pitch #3

Radiology disease detection

Trained neural networks for detecting radiology abnormalities and diseases have reached a level where they can easily compete with a human radiologist. For this project I will be using neural network libraries to detect different abnormalities. There are many different fields this can apply to, such as brain tumor detection, breast cancer detection, colonoscopy, CT/MRI, oncology, etc. I have already found many datasets for some of these applications. Again, this is a very rich field with a lot of previous work and published papers to explore.