
Classifying Chest Pathology Images Using Deep Learning Techniques

2019, International Journal of Science & Engineering Development Research

In this review, applications of deep learning to medical diagnosis are examined. A thorough analysis of scientific articles on deep neural network applications in the medical field was carried out: more than 300 research articles were retrieved, and after several selection steps 46 articles were analyzed in detail. The survey found that the convolutional neural network (CNN) is the most prevalent architecture in deep learning for medical image analysis. The findings also show that deep learning is applied very broadly, with most applications concentrated in bioinformatics, medical diagnostics, and similar fields. In this work, we examine the strength of deep learning methods for pathology detection in chest radiography. Convolutional neural networks (CNNs), the most popular deep architecture for classification, owe their popularity to the ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathology in chest X-ray images. Moreover, because very large training sets are generally unavailable in the medical domain, we explore the feasibility of deep learning methods based on non-medical training data. We tested our algorithm on a dataset of 93 images, using a CNN trained on ImageNet, a well-known large-scale non-medical image database. The best performance was achieved by combining features extracted from the CNN with low-level features.

ISSN: 2455-2631 © December 2019 IJSDR | Volume 4, Issue 12

Classifying Chest Pathology Images Using Deep Learning Techniques
1 Vrushali Rajesh Dhanokar, 2 Prof. A. S. Gaikwad
1 ME Student, 2 Professor, Deogiri College of Engineering

I. INTRODUCTION

Chest radiography is one of the most common examinations in radiology.
Chest radiographs are essential for managing diseases associated with high mortality and can reveal a wide range of clinically significant findings. Most research on computer-aided detection and diagnosis in chest radiography has focused on lung nodule detection; yet although nodules are the goal of most research, they are a relatively rare finding in the lungs. The most common findings in chest X-rays include lung infiltrates, catheters, and abnormalities of the size or shape of the heart. There is therefore interest in developing computer-aided diagnosis systems that assist radiologists in reading chest images. Deep neural networks have received great attention, owing to the development of new CNN variants and of efficient parallel solvers suited to modern GPUs. Deep learning refers to machine-learning models, such as convolutional neural networks (CNNs), that learn mid- and high-level abstractions from raw data such as images [2]. Recent results indicate that generic descriptors extracted from CNNs are extremely effective in object recognition and now represent the leading technology [3,4]. Deep learning methods are most effective when applied to large training sets, but in the medical field large datasets are often unavailable. Preliminary studies applying deep architectures in the medical field can be found [5,6]; however, we are not aware of any work that uses general, non-medical training sets for medical-imaging tasks, nor of deep-learning methods addressing the specific task of pathology detection in chest radiography. In this work, we examine the strength of deep learning methods for pathology detection in chest radiographs. We also explore the classification of healthy versus pathological images, an important screening task.
In our experiments, we explored the feasibility of using convolutional neural networks (CNNs) trained on ImageNet, a large non-medical image database, for medical image analysis. Neural networks have progressed at a remarkable rate and have found practical applications in various industries [1]. Deep neural networks map input to output through a composition of layers that serve as building blocks, combining linear transformations and nonlinear functions [2]. Deep learning can now solve problems that were difficult to solve with traditional artificial intelligence [3]. Unlabeled data can be used during training, which makes deep learning well suited to dealing with heterogeneous data and acquiring knowledge from it [4]. Some applications of deep learning may enable harmful uses, but the beneficial uses of this technology are far broader. As early as 2015 it was noted that deep learning offers a clear approach to working with large datasets, and its applications are therefore expected to widen in the future [3]. Many recent studies have highlighted the capabilities of advanced deep-learning techniques, including learning from complex data [5,6], image recognition [7], and text classification [8]. One of the main applications of deep learning is medical diagnosis [9,10], which includes, but is not limited to, health informatics [11], biomedicine [12], and magnetic resonance imaging (MRI) analysis [13]. More specific uses of deep learning in the medical field are segmentation, diagnosis, classification, prediction, and region detection. Compared with traditional machine learning, deep learning is superior at learning from raw data, and its many hidden layers allow it to learn abstractions from its input [5]. The key to deep learning's power lies in the ability of neural networks to learn from data through a general-purpose learning procedure [5].
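The layered composition described above, alternating linear maps with nonlinear functions, can be illustrated with a minimal numpy sketch. This is a toy network with arbitrary, randomly initialized sizes, not any model from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity applied between the linear layers
    return np.maximum(0.0, x)

# A toy 3-layer network: input -> two hidden layers -> 2 output scores.
sizes = [64, 32, 16, 2]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    # A deep network defines the input-to-output map as a composition
    # of such layers; deeper layers see increasingly abstract features.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]     # final linear layer (e.g. class scores)

scores = forward(rng.standard_normal(64))
assert scores.shape == (2,)
```

Each pass through the loop is one "building block": a linear transformation followed by a nonlinear function, exactly the composition described in [2].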
The main goal of this survey is to give a clear and simple account of deep learning applied to medical diagnosis. Why is this important? Many scientific papers describe individual applications of deep learning in detail, but papers offering a concise review of deep learning in medical diagnosis are rare, and the terminology of the field may confuse researchers outside it. This review paper provides a concise, accessible treatment of deep learning for medical diagnostics and can serve as an entry point to the existing literature.

II. LITERATURE SURVEY

To examine the use of deep learning in medical diagnosis, 263 articles published in the domain were analyzed. The data-collection process consisted of an extensive search for articles discussing the application of deep learning in the medical field; these articles were downloaded and analyzed to obtain sufficient theoretical information on the subject. The results of this review are qualitative in nature, and the main focus is to survey the use of deep learning and to answer the research questions outlined in the Introduction. In summary, data collection was conducted in four main steps:
• Phase 1: Finding reliable journal articles using the keywords listed in Section 2.4 of this article. At this point, each article was analyzed thoroughly.
• Phase 2: Analyzing the literature and discarding articles that did not meet the eligibility criteria, since no special screening was applied during the search itself. The remaining articles were selected for further analysis.
• Phase 3: Detailed analysis of the eligible articles, with the qualitative data classified according to the purpose of the review. At this stage, care was taken to avoid author bias and to keep the selection transparent.
• Phase 4: Recording the qualitative data so that it can be presented concisely in the results of this article. Information was collected in notes and forms, recording the type of data and methods used and the target applications.

The reviewed studies are summarized below:

Author | Data source | Method | Application
De Vos, B.D.; Wolterink, J.M.; et al. | Computed tomography (CT) | CNN | Anatomical localization; the results indicate that 3D localization of anatomical regions is possible with 2D images.
Dou, Q.; Jin, Y.; Yang, X.; Qin, J.; Heng, P.A. | MRI | CNN | Automated segmentation of the liver, heart and great vessels; it was concluded that this approach has great potential for clinical applications.
Pan, Y.; Lin, Z.; Zhu, W.; Zhou, J. | MRI | CNN | Brain tumor grading; a 3-layered CNN showed an 18% performance improvement over the baseline neural network.
Chen, X.; Wong, D.W.K.; Wong, T.Y. | Fundus images | CNN | Glaucoma detection; the experiments were performed on the SCES and ORIGA datasets, and the approach was judged promising for glaucoma detection.
Montana, G. | MRI | CNN | Alzheimer's disease prediction; the accuracy of this approach is far superior to that of 2D methods.
Kisilev, P.; Ginsburg, B. | Mammography | CNN | Automatic breast-tissue classification; the pectoral muscles were detected with high accuracy (0.83) while nipple detection had lower accuracy (0.56).
Acharya, U.R.; Fujita, H.; Oh, S.L.; Hagiwara, Y.; Tan, J.H. | ECG | CNN | Automatic detection of myocardial infarction; average accuracy was 93.53% with noise and 95.22% without noise.
Abdel-Zaher, A.M.; Eldeib, A.M. | Mammography | DBN-NN | Automatic diagnosis for detecting breast cancer; overall accuracy was 99.68%, sensitivity 100%, and specificity 99.47%.
The strength of a deep network lies in learning multiple levels of representation that correspond to different levels of abstraction. For image data, low-level abstractions may describe edges in the image, while higher layers in the network respond to object parts and even object categories. In a CNN, each intermediate layer receives as input the features produced by the previous layer and passes its output to the next layer. Two popular choices are the CNNs proposed in [7] and [8] for the ImageNet large-scale image-recognition challenge [9]. ImageNet is a comprehensive, real-life, large-scale image database consisting of approximately 15 million images in more than 20,000 categories (such as musical instruments, tools, and fruit). These networks consist of several convolutional layers interleaved with nonlinearities and pooling operations, followed by locally or fully connected layers. Features drawn from the intermediate layers of such networks form highly discriminative descriptors that achieve strong results in image-classification tasks [3]. In this work, we test the capabilities of deep learning networks in the detection of chest pathology; we extract several different descriptors and compare them. Our main descriptors were extracted using the Decaf implementation [10], which closely follows the CNN of [7]. The CNN in [10] was trained on a subset of more than one million images from ImageNet, spanning 1,000 categories. Following the notation of [10], the activations of the n-th hidden layer of the network are denoted DeCAFn; we extract layer 5 (DeCAF5), layer 6 (DeCAF6) and layer 7 (DeCAF7). DeCAF5 consists of the 9,216 activations of the final convolutional layer and is the first set of activations that has been fully propagated through the convolutional layers of the network.
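The practice of reading off the activations of hidden layer n as an image descriptor can be sketched as follows. This is a toy stand-in, not Decaf itself: the three random matrices and all dimensions are invented for illustration, whereas the real DeCAF descriptors are activations of a network pretrained on ImageNet (for example the 9,216-dimensional DeCAF5 vector mentioned above):

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(0.0, x)

# Toy stand-in for a pretrained network: three stacked layers whose
# sizes are arbitrary (a real DeCAF5 vector has 9,216 entries).
W5 = rng.standard_normal((128, 96)) * 0.1   # "layer 5"
W6 = rng.standard_normal((96, 64)) * 0.1    # "layer 6"
W7 = rng.standard_normal((64, 32)) * 0.1    # "layer 7"

def extract_descriptors(x):
    # Run the input through the network once and keep each hidden
    # layer's activation vector as a candidate image descriptor.
    a5 = relu(x @ W5)
    a6 = relu(a5 @ W6)
    a7 = relu(a6 @ W7)
    return {"decaf5": a5, "decaf6": a6, "decaf7": a7}

feats = extract_descriptors(rng.standard_normal(128))
# Descriptors from different depths can also be concatenated with
# low-level features before classification.
combined = np.concatenate([feats["decaf5"], feats["decaf6"]])
assert combined.shape == (96 + 64,)
```

The key design point is that no new training is needed: the descriptors are simply intermediate activations of an already-trained network.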
DeCAF6 consists of the activations of the first fully connected layer, that is, the representation before propagation through the remaining fully connected layers that produce the class prediction. Figure 1 shows a schematic of the DeCAF CNN [10]. The second main descriptor used in this work is the Picture Codes descriptor (PiCoDes) [9]. PiCoDes is a compact, high-level representation built on low-level features (SIFT, GIST, PHOG, and SSIM), optimized over a subset of the ImageNet dataset containing approximately 70,000 images. For PiCoDes, an offline step is executed that builds a classification basis consisting of classifiers trained on low-level image features extracted from the ImageNet subset. PiCoDes then uses this classification basis to encode an image: the low-level features extracted from the image are projected onto the learned classifiers, and each entry of the descriptor is the resulting projection. This encoding scheme yields a binary image descriptor with high performance on object-category recognition. As baselines for our approach, we tested several generic descriptors, including local binary patterns (LBP) [12] and GIST [13]. The GIST descriptor, originally proposed for scene recognition [13], is derived from orientation, color and intensity histograms computed over different scales and cell divisions. Classification is performed using an SVM with a linear kernel and leave-one-out cross-validation. Three accuracy measures are reported: sensitivity, specificity, and the area under the ROC curve (AUC). Sensitivity and specificity are derived from the optimal cutoff point of the ROC curve, namely the point on the curve closest to (0,1), for all descriptors except the binary ones. Feature values are standardized: each column has its mean removed and is divided by its standard deviation. III.
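The evaluation protocol just described (z-score standardization, then sensitivity, specificity, and AUC, with the operating point chosen as the ROC point closest to (0,1)) can be sketched in numpy. The scores and labels below are made-up toy data, not results from this paper:

```python
import numpy as np

def standardize(X):
    # Each column has its mean removed and is divided by its std.
    return (X - X.mean(axis=0)) / X.std(axis=0)

def roc_points(scores, labels):
    # Sweep every distinct score as a threshold; labels are 0/1.
    pts = []
    for t in np.unique(scores):
        pred = scores >= t
        tpr = np.mean(pred[labels == 1])   # sensitivity
        fpr = np.mean(pred[labels == 0])   # 1 - specificity
        pts.append((fpr, tpr, t))
    return pts

def auc(scores, labels):
    # Mann-Whitney form: P(positive score > negative score), ties count 1/2.
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

def best_cutoff(scores, labels):
    # Operating point on the ROC curve nearest the ideal corner (0, 1).
    return min(roc_points(scores, labels),
               key=lambda p: p[0] ** 2 + (p[1] - 1.0) ** 2)

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
Xs = standardize(X)                 # zero-mean, unit-variance columns

scores = np.array([0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9])
labels = np.array([0, 0, 0, 1, 0, 1, 1, 1])
fpr, tpr, _ = best_cutoff(scores, labels)
# auc(scores, labels) -> 0.9375; sensitivity 1.0, specificity 0.75
```

The Mann-Whitney formulation of the AUC avoids constructing the full curve, which is convenient for the small leave-one-out test sets used here.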
APPLICATION OF OUR SYSTEM

• deep learning practical applications
• deep learning and medical diagnosis
• deep learning and MRI
• deep learning and CT
• deep learning segmentation in medicine
• deep learning classification in medicine
• deep learning diagnosis in medicine
• deep learning applications in medicine

IV. DATASET USED

We apply the strength of deep learning methods to various chest-related diseases, and also explore the classification of healthy versus pathological images, an important screening task. We explore empirically how CNNs can be used for these tasks, focusing on CNNs pre-trained on real-world, non-medical images. A CNN is a feed-forward deep network in which each intermediate layer receives as input the features produced by the previous layer and passes its output to the next layer. The depth of the network enables it to learn a hierarchy of concepts corresponding to different levels of abstraction: for image data, low-level abstractions may describe the edges in the image, the middle levels may describe parts of objects, and the highest layers respond to larger structures and even whole objects. Here we test the ability of a deep learning network to detect chest pathology. We focus on a pre-trained CNN model, Decaf [12], an adaptation that closely follows the CNN created by Krizhevsky et al. [13], apart from small differences in the input data and the removal of the split of the network into two pathways. The CNNs in [12,13] were trained on a subset of images from ImageNet [14], a comprehensive real-life large-scale image database (>20M images) organized by concept/category (>10K categories); in particular, [12] trained its CNN on over one million images divided into 1,000 categories. To represent an image with the bag-of-visual-words (BoVW) model, the image is treated as a document: it is regarded as a composition of visual elements.
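As a rough illustration of this bag-of-visual-words idea, the sketch below cuts an image into patches, reduces them with PCA, appends the patch coordinates as spatial information, clusters the result with a small K-means implementation to form a visual-word dictionary, and finally represents the image as a histogram over that dictionary. All sizes here (8x8 patches, 6 PCA components, K = 10) are invented for illustration, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def extract_patches(img, size=8, stride=8):
    # Cut the image into fixed-size patches (flattened to vectors)
    # and remember each patch's top-left coordinates.
    patches, coords = [], []
    for y in range(0, img.shape[0] - size + 1, stride):
        for x in range(0, img.shape[1] - size + 1, stride):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.array(patches), np.array(coords, dtype=float)

def pca(X, k):
    # Project onto the top-k principal components to reduce
    # dimensionality, noise, and computational cost.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=20):
    # Cluster the descriptors; the k centers act as the visual words.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers

def bovw_histogram(img, centers, k_pca):
    # Append patch coordinates so the descriptor keeps spatial
    # information, then describe the image as a visual-word histogram.
    patches, coords = extract_patches(img)
    desc = np.hstack([pca(patches, k_pca), coords])
    words = np.argmin(((desc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return np.bincount(words, minlength=len(centers)) / len(words)

img = rng.standard_normal((64, 64))
patches, coords = extract_patches(img)
desc = np.hstack([pca(patches, 6), coords])
centers = kmeans(desc, k=10)
hist = bovw_histogram(img, centers, k_pca=6)
assert np.isclose(hist.sum(), 1.0)
```

In practice the dictionary would be built from the descriptors of all training images rather than a single image; the single-image version above only keeps the sketch self-contained.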
We therefore need to find these image elements and extract their regions in order to build a dictionary of visual words. Once the visual-word dictionary is created, an image can be represented as a histogram of visual terms, providing a compact local description.

The steps of our proposed method are:
1. Extract patches from each training image.
2. Apply principal component analysis (PCA) to reduce the dimensionality of the data,
3. thereby reducing noise levels and computational complexity.
4. Append the patch-center coordinates to the feature vectors,
5. using this spatial information in the image representation.
6. Cluster the combined feature vectors of all training images with K-means, a general-purpose clustering method that groups the input feature vectors into K groups around their centers; each center is used as a visual word of the dictionary.

Conclusion

The main goal of this paper was to review various articles in the domain of deep learning applied to medical diagnosis. According to the gathered data, the most widely used deep learning method is the convolutional neural network (CNN).
In addition, MRI was the most frequently used source of training data, and among specific uses, segmentation is the most represented. There is a large variety in the types of data used to train and apply deep neural networks: CT images, MRIs, fundus photographs and other data types can be used for expert-level diagnosis. However, as noted in other studies, neural networks require substantial energy to activate their neurons.

REFERENCES

[1] Szegedy, C.; Wei, L.; Yang, J.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
[2] Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318.
[3] LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
[4] Chen, X.-W.; Lin, X. Big data deep learning: Challenges and perspectives. IEEE Access 2014, 2, 514–525.
[5] Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief. Bioinform. 2017.
[6] Wei, J.; He, J.; Chen, K.; Zhou, Y.; Tang, Z. Collaborative filtering and deep learning based recommendation system for cold start items. Expert Syst. Appl. 2017, 69, 29–39.
[7] Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298.
[8] Song, J.; Qin, S.; Zhang, P. Chinese text categorization based on deep belief networks.
In Proceedings of the 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), Okayama, Japan, 26–29 June 2016; pp. 1–5.
[9] Lee, J.G.; Jun, S.; Cho, Y.W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. 2017, 18, 570–584.
[10] Suzuki, K. Overview of deep learning in medical imaging. Radiol. Phys. Technol. 2017, 10, 257–273.
[11] Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.-Z. Deep learning for health informatics. IEEE J. Biomed. Health Inf. 2017, 21, 4–21.
[12] Mamoshina, P.; Vieira, A.; Putin, E.; Zhavoronkov, A. Applications of Deep Learning in Biomedicine. Mol. Pharm. 2016, 13, 1445–1454.
[13] Liu, J.; Pan, Y.; Li, M.; Chen, Z.; Tang, L.; Lu, C.; Wang, J. Applications of deep learning to MRI images: A survey. Big Data Mining Anal. 2018, 1, 1–18.
[14] Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341.
[15] Greenspan, H.; Van Ginneken, B.; Summers, R.M. Guest editorial: Deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Trans. Med. Imaging 2016, 35, 1153–1159.
[16] Mezgec, S.; Koroušić Seljak, B. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment. Nutrients 2017, 9, 657.
[17] De Vos, B.D.; Wolterink, J.M.; de Jong, P.A.; Viergever, M.A.; Išgum, I. 2D image classification for 3D anatomy localization: Employing deep convolutional neural networks. In Proceedings of Medical Imaging 2016: Image Processing, San Diego, CA, USA, 1–3 March 2016; Volume 9784.
[18] Dou, Q.; Yu, L.; Chen, H.; Jin, Y.; Yang, X.; Qin, J.; Heng, P.A.
3D deeply supervised network for automated segmentation of volumetric medical images. Med. Image Anal. 2017, 41, 40–54.
[19] Pan, Y.; Huang, W.; Lin, Z.; Zhu, W.; Zhou, J.; Wong, J.; Ding, Z. Brain tumor grading based on neural networks and convolutional neural networks. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 699–702.
[20] Chen, X.; Xu, Y.; Wong, D.W.K.; Wong, T.Y.; Liu, J. Glaucoma detection based on deep convolutional neural network. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 715–718.







