Please use this identifier to cite or link to this item: https://scidar.kg.ac.rs/handle/123456789/23025

| Title: | Comparison of Different Convolutional Neural Networks Utilizing Transfer Learning for Pneumothorax Segmentation from Whole Chest X-Ray Images and Extracted Patches |
| Authors: | Dašić, Lazar; Pavić, Ognjen; Geroski, Tijana; Vaskovic Jovanovic, Mina; Filipovic, Nenad |
| Issue Date: | 2024 |
| Abstract: | Pneumothorax is a lung condition characterized by the presence of air between the chest wall and the lungs. Chest X-ray is the imaging technique commonly used to diagnose the location and size of a pneumothorax. U-Net convolutional neural network models with different backbones are compared in order to assess their capability to automatically and correctly segment signs of pneumothorax from chest X-rays. Five different pretrained backbones were chosen: VGG19, ResNet34, ResNet50, DenseNet121 and InceptionV3. Two approaches to pneumothorax segmentation were also tested: one methodology trained on X-ray images of the whole chest area, while the second split the original images into patches and used those for training. Both methodologies performed at a similar level, with the best results achieved by the U-Net model with the DenseNet121 backbone on whole-chest X-rays, reaching a Jaccard index of 76.92% and a Dice score of 78.81%. These results indicate that the tested models are capable of extracting fine-grained features from whole-chest X-ray images and that patch-based segmentation provides no additional benefit. (A minimal code sketch of this setup appears after the record table below.) |
| URI: | https://scidar.kg.ac.rs/handle/123456789/23025 |
| Type: | conferenceObject |
| DOI: | 10.1007/978-3-031-71419-1_15 |
| ISSN: | 2367-3370 |
| Appears in Collections: | Faculty of Engineering, Kragujevac |
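
The abstract describes a transfer-learning setup: a U-Net decoder on top of an ImageNet-pretrained backbone, with DenseNet121 performing best. The paper does not name its framework, so the following is only a minimal sketch assuming the qubvel `segmentation_models` Keras library; every identifier and hyperparameter below is illustrative, not the authors' actual code.

```python
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"  # select the tf.keras backend before import
import segmentation_models as sm

BACKBONE = "densenet121"  # best-performing backbone per the abstract

# U-Net decoder over an ImageNet-pretrained DenseNet121 encoder;
# a single sigmoid output channel marks pneumothorax pixels.
model = sm.Unet(BACKBONE, encoder_weights="imagenet",
                classes=1, activation="sigmoid")

model.compile(
    optimizer="adam",                 # optimizer choice is an assumption
    loss=sm.losses.DiceLoss(),        # overlap-based loss suits sparse masks
    metrics=[sm.metrics.IOUScore(),   # Jaccard index, as reported in the abstract
             sm.metrics.FScore()],    # Dice score (pixel-level F1)
)
```

The patch-based variant and the reported metrics can be sketched in the same spirit. The record does not state the patch size or the mask format, so both are assumptions here; non-overlapping square patches are one plausible reading of "split the original images into patches".

```python
import numpy as np

def to_patches(img: np.ndarray, size: int = 256) -> list[np.ndarray]:
    """Split an (H, W) X-ray into non-overlapping size x size patches.
    The patch size is an assumption; the paper does not state it."""
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Pixel-level Dice score and Jaccard index for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / denom if denom else 1.0       # both masks empty -> perfect
    jaccard = inter / union if union else 1.0
    return dice, jaccard
```
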
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Dasic_SoftLungX.pdf |  | 516.96 kB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License