
Convergence and Technologies

A Convolutional Neural Network Uses Microscopic Images to Differentiate between Mouse and Human Cell Lines and Their Radioresistant Clones

Masayasu Toratani1, Masamitsu Konno2,3, Ayumu Asai2,3, Jun Koseki2, Koichi Kawamoto2, Keisuke Tamari1, Zhihao Li1, Daisuke Sakai3, Toshihiro Kudo3, Taroh Satoh3, Katsutoshi Sato4,5, Daisuke Motooka6, Daisuke Okuzaki6, Yuichiro Doki7, Masaki Mori7, Kazuhiko Ogawa1, and Hideshi Ishii2,3

Abstract

Artificial intelligence (AI) trained with a convolutional neural network (CNN) is a recent technological advancement. Previously, several attempts have been made to train AI using medical images for clinical applications. However, whether AI can distinguish microscopic images of mammalian cells has remained debatable. This study assesses the accuracy of image recognition techniques using the CNN to identify microscopic images. We also attempted to distinguish between mouse and human cells and their radioresistant clones. We used phase-contrast microscopic images of radioresistant clones from two cell lines, mouse squamous cell carcinoma NR-S1 and human cervical carcinoma ME-180. We obtained 10,000 images of each of the parental NR-S1 and ME-180 controls as well as radioresistant clones. We trained the CNN called VGG16 using these images and obtained an accuracy of 96%. Features extracted by the trained CNN were plotted using t-distributed stochastic neighbor embedding, and images of each cell line were well clustered. Overall, these findings suggest the utility of image recognition using AI for predicting minute differences among phase-contrast microscopic images of cancer cells and their radioresistant clones.

Significance: This study demonstrates rapid and accurate identification of radioresistant tumor cells in culture using artificial intelligence; this should have applications in future preclinical cancer research. Cancer Res; 78(23); 6703–7. ©2018 AACR.

Introduction

Recently, there has been a remarkable development in image recognition technology based on artificial intelligence (AI) trained with a machine learning method called deep learning. Also, extensive research on computer-aided diagnosis has been conducted in several fields of medicine using medical images, such as radiologic images (X-ray, CT, and MRI) and pathologic images (cytology and histology; refs. 1–6). The victory at the ImageNet Large Scale Visual Recognition Challenge in 2012 popularized image recognition using deep learning (7). In 2015, the accuracy of AI exceeded human image recognition performance in the same contest (8).

In deep learning, multilayered learning circuits called neural networks simulate human neurons. Deep learning facilitates the extraction of appropriate features from an image. One of the leading neural networks used in image recognition is the convolutional neural network (CNN; ref. 7). The CNN is organized on the basis of the human visual system and is robust against image shift (9). However, deep learning requires substantial training data to enhance the performance of the CNN, especially with deep multilayered networks, and it is impractical to prepare such extensive training data in some cases. Thus, the transfer learning technique, which employs a pretrained CNN, is used to reduce the amount of data and the number of training iterations required (10). Reportedly, while the lower convolutional layers capture low-level local features such as edges, higher convolutional layers capture more complex features reflecting the entire image (7). In transfer learning, learning efficiency is enhanced by optimizing the parameters of only the higher layers without altering the lower layers.

This study aims to apply image recognition technology with AI in clinical decision-making using microscopic images of clinical specimens. In particular, we intend to establish the technology using microscopic images of cancer cells to predict the effect of chemotherapy and/or radiotherapy. This will enable the development of objective indicators to personalize cancer treatment according to the patient's requirements. Also, we aim to classify controls and radioresistant clones of human and mouse cancer cell lines using phase-contrast microscopic images.

1 Department of Radiation Oncology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan. 2 Department of Disease Data Science, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan. 3 Department of Frontier Science for Cancer and Chemotherapy, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan. 4 Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, QST, Inage, Chiba, Japan. 5 Division of Hematology and Medical Oncology, Icahn School of Medicine at Mount Sinai, New York, New York. 6 Genome Information Research Center, Research Institute for Microbial Diseases, Osaka University, Suita, Osaka, Japan. 7 Department of Gastroenterological Surgery, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan.

Note: Supplementary data for this article are available at Cancer Research Online (http://cancerres.aacrjournals.org/).

Corresponding Authors: Kazuhiko Ogawa, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan. Phone: 8106-6879-3482; Fax: 8106-6879-3489; E-mail: [email protected]; and Hideshi Ishii, [email protected]

doi: 10.1158/0008-5472.CAN-18-0653

©2018 American Association for Cancer Research.


Furthermore, we intend to determine the features extracted by the trained CNN and assess the correlation among the features of five classes of cells. This study demonstrates that it is feasible to distinguish between cell lines and their radioresistant clones from the limited amount of visual information available in phase-contrast microscopic images.

Materials and Methods

Figure 1 illustrates the workflow of this study.

Cell lines
Cell lines used in this study included NR-S1 controls, NR-S1 X60 (radioresistant to X-ray), NR-S1 C30 (radioresistant to carbon ion beam; refs. 11, 12), ME-180 controls, and ME-180 X-ray–resistant cell lines. Cells were cultured in DMEM (Sigma-Aldrich) supplemented with 10% (volume/volume) FBS (HyClone; GE Healthcare) and 1% (volume/volume) penicillin/streptomycin (Sigma-Aldrich), and maintained at 37°C in a 5% CO2 incubator. The NR-S1 cell lines were kindly provided by Dr. Katsutoshi Sato (Icahn School of Medicine at Mount Sinai, New York, NY) in 2014. The ME-180 parental cell line was kindly provided by our colleague Dr. Keisuke Tamari (Graduate School of Medicine, Osaka University, Osaka, Japan) in 2017. The initial passage numbers for cells we used were more than 30. All the relevant experiments were conducted within 10 passages from revival of the initial frozen seeds. Mycoplasma testing was performed using the MycoAlert Mycoplasma Detection Kit (Lonza; catalog code: LT07-218) and confirmed negative results. Cell authentication of ME-180 was performed by analyzing the short tandem repeat profile at the National Institute of Biomedical Innovation (Osaka, Japan).

Establishment of radioresistant cells
The NR-S1 X60 and C30 cells were established as described previously (11, 12). Briefly, NR-S1 parental control cells were irradiated with 60 Gy of X-ray at a rate of 10 Gy once every 2 weeks. The NR-S1 C30 cells were established by irradiating NR-S1 parental control cells with 30 Gy of carbon ion beam radiation at a rate of 5 Gy once every 2 weeks. The ME-180 X-ray–resistant cells were established by irradiating ME-180 parental control cells with 60 Gy of γ-irradiation at a rate of 2 Gy at every passage. Cells were cultured for a week after the final irradiation and then used for the experiment.

Clonogenic survival assay
Cells were harvested with TrypLE Express (Thermo Fisher Scientific), seeded onto cell culture dishes, and incubated at 37°C under 5% CO2 for 2 hours. Subsequently, cells were irradiated with γ-irradiation using the Gammacell 40 Exactor (MDS Nordion) and incubated at 37°C under 5% CO2 for 7 to 13 days. Cells were then stained with 0.5% crystal violet (w/v) and counted; colonies containing >50 cells were counted as survivors. The number of surviving colonies was plotted against the dose of γ-irradiation.

Image preparation and preprocessing
Cells were photographed using a phase-contrast microscope (BZ-X700; Keyence). For each cell type, 5,000 images at a resolution of 640 × 480 pixels were captured. Two images of 320 × 320 pixels were cropped from each original image, and the resolution of the cropped images was reduced to 160 × 160 pixels to facilitate image processing (Fig. 2). Overall, 50,000 images were obtained, with 10,000 images per cell type. The processed images were divided into 8,000 training images and 2,000 test images for each cell type.
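As a rough illustration of this preprocessing step, the following Python sketch crops two 320 × 320 patches from a 640 × 480 phase-contrast image and downsamples each to 160 × 160 pixels. The crop positions, file layout, and image format are assumptions made for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of the cropping/downsampling step described above.
# Crop positions, paths, and file names are illustrative assumptions;
# the paper only states the image sizes and the 8,000/2,000 split.
from pathlib import Path
from PIL import Image

def preprocess_image(path, out_dir):
    """Crop two 320x320 patches from a 640x480 image and resize each to 160x160."""
    img = Image.open(path).convert("RGB")        # original 640 x 480 image
    crops = [img.crop((0, 80, 320, 400)),        # left patch (assumed position)
             img.crop((320, 80, 640, 400))]      # right patch (assumed position)
    for i, crop in enumerate(crops):
        small = crop.resize((160, 160), Image.BILINEAR)   # reduce to 160 x 160
        small.save(Path(out_dir) / f"{Path(path).stem}_{i}.png")

# Example usage: process all raw images of one cell type.
# for p in sorted(Path("raw/NRN").glob("*.png")):
#     preprocess_image(p, "processed/NRN")
```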

Neural network architecture and transfer learning
The CNN comprises multiple convolutional layers to detect the local features of inputs and pooling layers to reduce the computational burden, overfitting, and image shift. A pretrained CNN, namely VGG16, was used in this study; VGG16 was published by the Visual Geometry Group of Oxford University (Oxford, United Kingdom) and has a high accuracy of image recognition (8, 13). Figure 3 presents the architecture of VGG16, including 13 convolutional layers and 3 fully connected layers. Although the original VGG16 outputs 1,000 parameters to categorize images into the 1,000 classes defined in ImageNet, data in this study were categorized into only five classes: NRN, NRX, NRC, MEN, and MEX. Thus, the number of VGG16 outputs was changed from 1,000 to 5.
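A minimal Keras sketch of this adaptation is shown below: the ImageNet-pretrained convolutional base of VGG16 is combined with a new 5-class head. The sizes of the fully connected layers follow the standard VGG16 configuration and are consistent with the 4,096 features mentioned later in the text, but the authors' exact head is not specified beyond the class count, so this is an assumption.

```python
# Sketch: ImageNet-pretrained VGG16 base with a 5-class head
# (NRN, NRX, NRC, MEN, MEX) replacing the original 1,000-way output.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(160, 160, 3))

x = layers.Flatten()(base.output)                     # 5 x 5 x 512 feature maps flattened
x = layers.Dense(4096, activation="relu")(x)
x = layers.Dense(4096, activation="relu")(x)          # last hidden layer (4,096 features)
outputs = layers.Dense(5, activation="softmax")(x)    # 5 classes instead of 1,000

model = models.Model(inputs=base.input, outputs=outputs)
```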

In this study, only the last 3 convolutional layers and 3 fully connected layers were trained using training images. The training dataset was processed in minibatches, with a minibatch randomly selected from the training dataset at each training step. Completing learning once with all 40,000 training images was defined as one epoch. Cross-entropy error between the output of the VGG16 model and the actual class of the images was evaluated, and "accuracy" was used as the metric. Then, backpropagation was performed, and model parameters were updated using a momentum stochastic gradient descent algorithm with a learning rate of 0.0001 and a momentum of 0.9. Overall, 20 epochs were trained. For each epoch, test images were used to assess the accuracy of the trained model. Furthermore, Google's TensorFlow (14) deep learning framework and Keras (15), running on top of TensorFlow, were used to train, validate, and test the model.
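Continuing the sketch above, the training configuration described in this paragraph could be expressed in Keras roughly as follows. The minibatch size and the data pipeline are not stated in the paper and are assumptions here.

```python
# Sketch of the transfer-learning setup: only the last three convolutional
# layers (block5_conv1-3) and the fully connected head are trained, using
# momentum SGD (learning rate 0.0001, momentum 0.9) for 20 epochs.
from tensorflow.keras.optimizers import SGD

for layer in base.layers:
    # Unfreeze only the last three convolutional layers of the VGG16 base.
    layer.trainable = layer.name.startswith("block5_conv")

model.compile(optimizer=SGD(learning_rate=1e-4, momentum=0.9),
              loss="categorical_crossentropy",   # cross-entropy error
              metrics=["accuracy"])              # "accuracy" as the metric

# x_train: (40000, 160, 160, 3) images, y_train: one-hot labels (40000, 5);
# x_test/y_test hold the 10,000 test images. batch_size is an assumption.
# history = model.fit(x_train, y_train, batch_size=32, epochs=20,
#                     validation_data=(x_test, y_test))
```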

Figure 1.

Schematic representation of the overall experiment. Images of cell samples were obtained using a microscope, trimmed, and stored as image data. Image data were separated into training data and test data and subjected to the deep CNN analysis to study data depiction. Training data were used for network optimization, and test data were used for the estimation of performance of trained AI and extracted features.


Research ethics
We declare that we used only cell lines and no samples from animals, including humans and mice, in this study.

Results

Confirmation of the radioresistant phenotype of cell lines
We used mouse squamous cell carcinoma NR-S1 and human cervical carcinoma ME-180 cell lines to establish radioresistant cells. In this study, NR-S1 and its X-ray– and carbon ion beam–resistant cell lines were named "NRN," "NRX," and "NRC," respectively. Similarly, ME-180 and its X-ray–resistant cell lines were named "MEN" and "MEX," respectively. To confirm the radioresistant phenotype of cell lines, we performed a clonogenic survival assay and estimated the survival fraction of each cell line subjected to 4 and 8 Gy of γ-irradiation (Supplementary Fig. S1). Both NRX and MEX exhibited higher survival than the corresponding control cell lines (NRN and MEN).

Image datasets
Phase-contrast microscopic images of cell lines were captured, and square images were cropped from the original rectangular images. From a total of 50,000 images, we obtained 10,000 microscopic images for each cell line at a resolution of 160 × 160 pixels. Representative images are shown in Fig. 2. Cells of NRX were marginally smaller than those of NRN. However, differences between the size of NRC and NRN and between MEN and MEX were ambiguous. Also, we divided the image datasets into 8,000 training images and 2,000 test images; training images were used to train the neural network, and test images were used to validate the accuracy of the neural network and feature engineering.

Training and validating the CNN
In this study, we used the CNN, called VGG16 (Fig. 3), which was pretrained using ImageNet datasets (13). Using the pretrained model, the number of images required for learning was reduced, thus improving the learning speed, a process called transfer learning.

Figure 3.

Analysis of the CNN. The CNN, called VGG16, comprised 13 convolutional layers and three fully connected layers, including flattened and dense layers. Max pooling layers were inserted between convolutional layers. The CNN received each image as input data and output the probability of each of the five classes.

Figure 2.

Collection of cell images. A, Rectangular images of 640 × 480 pixels were cropped to square images of 320 × 320 pixels and further reduced to 160 × 160 pixels. B, Representative images of each cell line. Similar images of cells were obtained, and data were analyzed from a total of 5,000 images of each cell line. NRN, control parental NR-S1 cells; NRX, X-ray–resistant NR-S1 cells; NRC, carbon ion beam–radioresistant NR-S1 cells; MEN, control parental ME-180 cells; and MEX, X-ray–resistant ME-180 cells.


We performed transfer learning to optimize only the last 3 convolutional layers and 3 fully connected layers using training data and trained the model for 20 epochs. Figure 4 shows the training course of each epoch. The use of the pretrained model resulted in a dramatic improvement in the accuracy of the model after only one epoch. Using test images, the accuracy of the model reached approximately 96% after one epoch but plateaued thereafter. The accuracy of the model was 99.9%, 98.8%, 99.8%, 98.7%, and 91.1% for NRN, NRX, NRC, MEN, and MEX, respectively, using test images. Although the accuracy of the classification of NR-S1 cell lines was high, it was difficult to classify the ME-180 cell lines, especially MEX, using this model. We performed receiver operating characteristic analysis and calculated the area under the curve (AUC) to evaluate the trained VGG16 model. The AUC values of NRN, NRX, NRC, MEN, and MEX were 1.00000, 0.99991, 0.99978, 0.99793, and 0.99908, respectively. Overall, the trained VGG16 model showed a very high performance in distinguishing each cell line.
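The per-class evaluation reported above can be reproduced in outline with scikit-learn, treating each class one-versus-rest. Variable names continue the earlier sketches, and the class order is an assumption.

```python
# Sketch: per-class (one-vs-rest) ROC AUC on the softmax outputs of the test set.
import numpy as np
from sklearn.metrics import roc_auc_score

classes = ["NRN", "NRX", "NRC", "MEN", "MEX"]
probs = model.predict(x_test)                   # shape (10000, 5), softmax probabilities
true = y_test.argmax(axis=1)                    # integer class labels

for i, name in enumerate(classes):
    auc = roc_auc_score((true == i).astype(int), probs[:, i])
    print(f"{name}: AUC = {auc:.5f}")
```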

Elucidating the reasons behind predictions
The "Local Interpretable Model-agnostic Explanations" (LIME) method (16) was used to determine the area of emphasis of the trained CNN on the image. Using the LIME method on some test images, we visualized the bases in the images for the classification by the trained CNN. Representative results are shown in Supplementary Fig. S2. We successfully visualized the information emphasized by the trained CNN, suggesting that the CNN registered the shape of the cell or cell population, because the boundary of the extracted region was along the edge of the cell.
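The lime package's image explainer could be applied to the trained model along the following lines; the sampling and feature counts are illustrative choices rather than the authors' settings, and model and x_test continue the earlier sketches.

```python
# Sketch: highlight the image regions that drive one prediction with LIME.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()

def predict_fn(batch):
    # LIME passes a batch of perturbed images; return class probabilities.
    return model.predict(np.asarray(batch))

explanation = explainer.explain_instance(x_test[0].astype("double"), predict_fn,
                                         top_labels=5, num_samples=1000)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
overlay = mark_boundaries(img / 255.0, mask)   # boundaries follow the emphasized regions
                                               # (assumes 0-255 pixel values)
```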

Next, we extracted internal features from test images using the trained VGG16 model to assess the basis for the categorization of these images. We obtained the output of the last hidden layer of the trained VGG16. The CNN designed in this study converted an input image into 512 feature maps of 5 × 5 pixels using convolutional layers. It then integrated the feature maps into 4,096 features through fully connected layers to render features useful for categorization into five classes. Although the 5 × 5 feature maps facilitated the visualization of data extracted from an image by the CNN, these maps were too many and too small to comprehend. Hence, the 4,096 multivariate inputs to the final layer were used as the features extracted from the image. These features were reduced to two dimensions using t-distributed stochastic neighbor embedding (t-SNE) for visualization (17), and we created a scatter plot from them (Fig. 5). Each point represents a microscopic image of one cell line, and each color represents a type of cell line (red, NRN; blue, NRX; black, NRC; orange, MEN; and cyan, MEX). We observed five clusters of points with the same categories. The three clusters of NR-S1 (NRN, NRX, and NRC) were distinct from each other, whereas the clusters of ME-180 (MEN and MEX) were distributed relatively close together, implying that the VGG16 model recognized MEN and MEX as similar cell lines.
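A sketch of this feature extraction and embedding step, again continuing the earlier Keras model, might look like the following; the layer indexing, marker size, and plotting details are illustrative.

```python
# Sketch: take the 4,096-dimensional activations of the last hidden layer
# for each test image and embed them in two dimensions with t-SNE.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from tensorflow.keras.models import Model

# model.layers[-2] is the second 4,096-unit dense layer, i.e. the last hidden layer.
feature_model = Model(inputs=model.input, outputs=model.layers[-2].output)
features = feature_model.predict(x_test)          # shape (10000, 4096)

embedded = TSNE(n_components=2).fit_transform(features)

colors = ["red", "blue", "black", "orange", "cyan"]       # NRN, NRX, NRC, MEN, MEX
labels = y_test.argmax(axis=1)
for i, name in enumerate(["NRN", "NRX", "NRC", "MEN", "MEX"]):
    pts = embedded[labels == i]
    plt.scatter(pts[:, 0], pts[:, 1], s=2, c=colors[i], label=name)
plt.legend()
plt.show()
```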

Discussion

This study established that it is feasible for AI to accurately differentiate between cancer cell lines and their radioresistant clones, even if simple phase-contrast microscopic images are used as inputs, suggesting clear visual differences among multiple types of cells with different properties. These data suggest the potential to create universal AI using a variety of cells for learning. The distinction among the three types of NR-S1 cells was more accurate than that between the two types of ME-180 cells, which was consistent with our visual intuition. Among all cell types, classification of MEX cells was the least accurate. The trained VGG16 model identified some photographs of MEX as MEN, possibly because the MEX cells were very similar to MEN cells and the VGG16 set a strict distinction threshold for MEX. The LIME method was used to illustrate the predictions of any classifier by learning an interpretable model locally around the prediction.

Figure 4.

Accuracy at each epoch. Training and test images of radioresistant and parental cells were analyzed. The accuracy of the CNN's prediction at each epoch was plotted as a line graph. The accuracy on training and test images is indicated with broken and solid lines, respectively.

Figure 5.

CNN analysis of radioresistant and parental cell lines. Features were extracted from 2,000 test images of each cell line. Each point represents features obtained from a single image. Data from the mouse squamous cell carcinoma NR-S1 cell line (control; NRN) and its X-ray–resistant (NRX) and carbon ion beam–radioresistant (NRC) cells are shown in red, blue, and black, respectively. Data from the human cervical carcinoma ME-180 cell line (control; MEN) and its X-ray–resistant (MEX) cells are shown in cyan and orange, respectively.


We attempted to elucidate the visual basis for the classification of images by the CNN and the extracted features using LIME or t-SNE; however, the results were unclear. Thus, it remains unclear whether AI could predict radiosensitivity using microscopic images of cell lines. In this study, only two kinds of cells were available, as the establishment of radioresistant clones of cancer cells requires considerable time and effort. Further investigations are needed to train the CNN with a higher number of cell lines and to verify its ability to predict radiosensitivity using cell lines not used for training.

Although the CNN is an excellent technique in the field of image recognition, several problems await resolution. For example, there exists a black box in the learning processes and extracted features associated with deep learning, such as the CNN. Whether prediction using deep learning is correct and has practical implications warrants further investigation. Also, because no mathematical support exists for modifying hyperparameters to enhance the result of deep learning, there is a need to explore optimal parameters by using random sampling or grid search. Optimization of hyperparameters using a machine learning approach such as Bayesian optimization may resolve this problem (18).
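As one simple illustration of such a search, the sketch below randomly samples the two optimizer hyperparameters used in this study; the ranges, trial count, and the train_and_evaluate helper are hypothetical.

```python
# Sketch: random search over learning rate and momentum.
# train_and_evaluate is a hypothetical helper that compiles and fits the model
# with the given hyperparameters and returns test accuracy.
import random

best = None
for _ in range(10):
    lr = 10 ** random.uniform(-5, -3)          # learning rate sampled on a log scale
    momentum = random.uniform(0.8, 0.99)
    acc = train_and_evaluate(lr, momentum)     # hypothetical helper
    if best is None or acc > best[0]:
        best = (acc, lr, momentum)

print("best accuracy %.3f at lr=%.2e, momentum=%.2f" % best)
```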

There have been several reports on the efficacy of image recognition using deep learning in the diagnosis of existing lesions and the qualitative diagnosis of tissue type (1–3, 5). However, much less work has been conducted on the prediction of sensitivity to treatment such as radiotherapy using radiological or pathologic images. In the future, training AI with big data containing more information will advance predictive medicine, including the prediction of treatment effect, and contribute to the realization of personalized medicine.

Disclosure of Potential Conflicts of Interest
T. Kudo reports receiving a commercial research grant from Yakult Honsha Co., Ltd., Chugai Pharmaceutical Co., Ltd., and Ono Pharmaceutical Co. Ltd. H. Ishii reports receiving a commercial research grant from Taiho Pharmaceutical Co. Ltd., Unitech Co. Ltd. (Chiba, Japan), IDEA Consultants Inc. (Tokyo, Japan), and Kinshu-kai Medical Corporation (Osaka, Japan). No potential conflicts of interest were disclosed by the other authors.

Authors' Contributions
Conception and design: M. Toratani, M. Konno, M. Mori, K. Ogawa, H. Ishii
Development of methodology: M. Toratani, M. Konno, A. Asai, J. Koseki, K. Tamari, D. Sakai, D. Motooka, H. Ishii
Acquisition of data (provided animals, acquired and managed patients, provided facilities, etc.): M. Toratani, M. Konno, A. Asai, K. Kawamoto, Z. Li, T. Kudo, K. Sato, D. Okuzaki, H. Ishii
Analysis and interpretation of data (e.g., statistical analysis, biostatistics, computational analysis): M. Toratani, M. Konno, A. Asai, J. Koseki, Z. Li, D. Sakai, T. Satoh, K. Sato, K. Ogawa, H. Ishii
Writing, review, and/or revision of the manuscript: M. Toratani, M. Konno, T. Satoh, M. Mori, H. Ishii
Administrative, technical, or material support (i.e., reporting or organizing data, constructing databases): M. Konno, J. Koseki, D. Sakai, Y. Doki, H. Ishii
Study supervision: H. Ishii

Acknowledgments
We thank the laboratory staff for their helpful discussions. This work received financial support from grants-in-aid for Scientific Research from the Japan Agency for Medical Research and Development and the Ministry of Education, Culture, Sports, Science, and Technology (grant nos. 17H04282 and 17K19698 to H. Ishii, grant no. 16K15615 to M. Konno, and grant no. 15H05791 to M. Mori).

The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Received February 28, 2018; revised May 6, 2018; accepted September 21, 2018; published first September 25, 2018.

References
1. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115–8.
2. Kainz P, Pfeiffer M, Urschler M. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization. PeerJ 2017;5:e3874.
3. Aubreville M, Knipfer C, Oetter N, Jaremenko C, Rodner E, Denzler J, et al. Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning. Sci Rep 2017;7:11979.
4. Lao J, Chen Y, Li Z-C, Li Q, Zhang J, Liu J, et al. A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme. Sci Rep 2017;7:10353.
5. Teramoto A, Tsukamoto T, Kiriyama Y, Fujita H. Automated classification of lung cancer types from cytological images using deep convolutional neural networks. Biomed Res Int 2017;2017:4067832.
6. Zhen X, Chen J, Zhong Z, Hrycushko B, Zhou L, Jiang S, et al. Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study. Phys Med Biol 2017;62:8246–63.
7. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Proceedings of the 25th International Conference on Neural Information Processing Systems 2012;1:1097–105.
8. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis 2015;115:211–52.
9. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc of the IEEE 1998;86:2278–324.
10. Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng 2010;22:1345–59.
11. Sato K, Imai T, Okayasu R, Shimokawa T. Heterochromatin domain number correlates with X-ray and carbon-ion radiation resistance in cancer cells. Radiat Res 2014;182:408–19.
12. Sato K, Azuma R, Imai T, Shimokawa T. Enhancement of mTOR signaling contributes to acquired X-ray and C-ion resistance in mouse squamous carcinoma cell line. Cancer Sci 2017;108:2004–10.
13. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition; 2015. Available from: https://arxiv.org/abs/1409.1556v6.
14. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems; 2016. Available from: https://arxiv.org/abs/1603.04467v2.
15. Chollet F. Keras. GitHub; 2015. https://github.com/fchollet/keras.
16. Tulio Ribeiro M, Singh S, Guestrin C. "Why should I trust you?": explaining the predictions of any classifier; 2016. Available from: https://arxiv.org/abs/1602.04938v3.
17. Van Der Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res 2008;9:2579–605.
18. Lee J, Bahri Y, Novak R, Schoenholz SS, Pennington J, Sohl-Dickstein J, et al. Deep neural networks as Gaussian processes; 2017. Available from: https://arxiv.org/abs/1711.00165v3.
