%0 Journal Article
%T AI for medical use
%A Hideshi Ishii
%A Masamitsu Konno
%J Oncotarget
%D 2019
%R 10.18632/oncotarget.26556
%X In the field of artificial intelligence (AI), developments are leading to a new era of social implementation. In 2006, Hinton et al. reported that high-dimensional data can be converted into low-dimensional codes by training a multilayer neural network with a small central layer [1]. This discovery triggered rapid developments in AI. In recent years, technology involving AI trained with a convolutional neural network (CNN), which mimics the optic nerve network, has been developed. Various attempts have been made to train AI with medical images for use in clinical applications (Figure 1). Esteva et al. published the first report on clinical AI [2], in which they trained a CNN using 129,450 clinical images covering 2,032 different diseases. Surprisingly, the performance of the CNN was comparable with the level of competency shown by dermatologists in classifying skin cancer. We recently trained a CNN using 10,000 images each of radiosensitive and radioresistant cancer cells [3]. The accuracy of this model was very high (96%). Features extracted by the CNN were plotted using t-distributed stochastic neighbor embedding (t-SNE), which confirmed that each cell line was well clustered. Rajpurkar et al. reported results on chest X-ray image diagnosis [4]. They established a novel algorithm, CheXNet, comprising a 121-layer CNN. CheXNet was trained using the ChestX-ray14 dataset, which contains 112,120 frontal-view chest X-ray images individually labeled with 14 different thoracic diseases. Four practicing academic radiologists annotated a test set, and the performance of CheXNet was compared with that of the radiologists; CheXNet was found to exceed the average radiologist performance. Another group developed AI for the diagnosis of cancer metastasis [5]. Liu et al. built a CNN to automatically detect and localize tumors as small as 100 × 100 pixels within very large images (100,000 × 100,000 pixels). This CNN was trained using the Camelyon16 dataset, which includes hematoxylin and eosin-stained whole-slide images of lymph node sections with metastatic cancer. The network detected 92.4% of the tumors, compared with 82.7% for the previous best automated approach; a pathologist performing an exhaustive search achieved 73.2% sensitivity. This AI achieved image-level AUC scores above 97% on the Camelyon16 test set and on an independent set of 110 clinical sample slides. AI has also been used for the detection of vascular diseases [6]. Santini
%K artificial intelligence
%K convolutional neural network
%K diagnosis
%K surgery
%K medicine
%U https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6349441/
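
As a rough illustration of the workflow described for the radiosensitivity study [3], training a CNN classifier on two classes of cell images and projecting its learned features with t-SNE, the following Python sketch uses a small torchvision ResNet and scikit-learn's TSNE. The directory layout, model choice, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

# Hypothetical sketch: train a small CNN on two classes (radiosensitive vs.
# radioresistant cell images) and visualize the learned features with t-SNE.
# Paths, model choice, and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder expects one subdirectory per class,
# e.g. data/radiosensitive and data/radioresistant
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Short training loop (illustrative; real training would use many more epochs
# and a held-out validation split)
model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Extract penultimate-layer features and embed them in 2-D with t-SNE
feature_extractor = nn.Sequential(*list(model.children())[:-1])
feature_extractor.eval()
features, targets = [], []
with torch.no_grad():
    for images, labels in loader:
        features.append(feature_extractor(images).flatten(1))
        targets.append(labels)
features = torch.cat(features).numpy()
targets = torch.cat(targets).numpy()

embedded = TSNE(n_components=2, perplexity=30).fit_transform(features)
plt.scatter(embedded[:, 0], embedded[:, 1], c=targets, cmap="coolwarm", s=5)
plt.title("t-SNE of CNN features (illustrative)")
plt.savefig("tsne_features.png")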
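
A CheXNet-style classifier, as described for the chest X-ray study [4], can be approximated with torchvision's DenseNet-121 backbone and a 14-way multi-label head trained with a per-finding sigmoid loss. The sketch below is a minimal illustration under those assumptions, not the published implementation; the dataset handling and training details are placeholders.

# Hypothetical sketch of a CheXNet-style classifier: DenseNet-121 backbone with a
# 14-unit head for multi-label thoracic-disease prediction. The dummy batch and
# training details are assumptions, not the published implementation.
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # the 14 ChestX-ray14 labels

class ChestXrayClassifier(nn.Module):
    def __init__(self, num_findings: int = NUM_FINDINGS):
        super().__init__()
        self.backbone = models.densenet121(weights=None)  # 121-layer CNN
        in_features = self.backbone.classifier.in_features
        # Replace the ImageNet head with a multi-label head
        self.backbone.classifier = nn.Linear(in_features, num_findings)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # raw logits, one per finding

model = ChestXrayClassifier()
criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per finding
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative optimization step on random tensors standing in for X-ray batches
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4, NUM_FINDINGS)).float()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.4f}")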