Application of Convolutional Gated Recurrent Units U-Net for Distinguishing between Retinitis Pigmentosa and Cone-Rod Dystrophy
Journal article
MNiSW points (2024 list): 100
Authors: Skublewska-Paszkowska Maria, Powroźnik Paweł, Rejdak Robert, Nowomiejska Katarzyna
Year of publication: 2024
Document version: Electronic
Language: English
Journal issue: 3
Volume: 18
Pages: 505–513
Impact Factor: 1.0
Web of Science® Times Cited: 0
Scopus® Citations: 0
Databases: Web of Science, Scopus
Result of statutory research: NO
Conference material: NO
Open Access publication: YES
Access method: Publisher's website
Text version: Final published version
Time of publication in OA: At the time of publication
Date of publication in OA: 29 January 2024
Abstract: English
Artificial Intelligence (AI) has gained a prominent role in the medical industry. The rapid development of computer science has made AI a meaningful part of modern healthcare. Image-based analysis involving neural networks is a very important part of eye diagnosis. In this study, a new approach using a Convolutional Gated Recurrent Units (GRU) U-Net was proposed for classifying healthy cases and cases with retinitis pigmentosa (RP) and cone–rod dystrophy (CORD). The basis for the classification was the location of pigmentary changes within the retina and the fundus autofluorescence (FAF) pattern, as either the posterior pole or the periphery of the retina may be affected. The dataset, gathered in the Chair and Department of General and Pediatric Ophthalmology of the Medical University of Lublin, consisted of 230 ultra-widefield pseudocolour (UWFP) and ultra-widefield FAF images obtained using the Optos 200TX device (Optos PLC). The data were divided into three categories: healthy subjects (50 images), patients with CORD (48 images) and patients with RP (132 images). Because deep learning classification relies on a large amount of data, the dataset was artificially enlarged using augmentation based on image manipulations. The final dataset contained 744 images. The proposed Convolutional GRU U-Net network was evaluated using the following measures: accuracy, precision, sensitivity, specificity and F1 score. The proposed tool achieved high accuracy, in the range of 91.00–97.90%. The developed solution has great potential as a supporting tool in RP diagnosis.
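The abstract names a Convolutional GRU U-Net as the classifier but gives no implementation details. The following is only a minimal sketch of the convolutional GRU building block that such an architecture typically combines with a U-Net encoder–decoder; it assumes PyTorch (the paper does not state the framework), and the class and parameter names (ConvGRUCell, hidden_channels) are hypothetical illustrations, not the authors' code:

    # Minimal sketch, assuming PyTorch; not the authors' implementation.
    import torch
    import torch.nn as nn

    class ConvGRUCell(nn.Module):
        # GRU gates realised with 2D convolutions, so the hidden state
        # keeps the spatial layout of the feature maps.
        def __init__(self, in_channels, hidden_channels, kernel_size=3):
            super().__init__()
            padding = kernel_size // 2
            self.hidden_channels = hidden_channels
            # update (z) and reset (r) gates computed jointly from [x, h]
            self.gates = nn.Conv2d(in_channels + hidden_channels,
                                   2 * hidden_channels, kernel_size, padding=padding)
            # candidate hidden state computed from [x, r * h]
            self.candidate = nn.Conv2d(in_channels + hidden_channels,
                                       hidden_channels, kernel_size, padding=padding)

        def forward(self, x, h=None):
            if h is None:
                h = x.new_zeros(x.size(0), self.hidden_channels, x.size(2), x.size(3))
            z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
            h_new = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
            return (1 - z) * h + z * h_new

    # Usage sketch: one feature map passed through the cell.
    x = torch.randn(1, 32, 64, 64)                      # batch, channels, height, width
    cell = ConvGRUCell(in_channels=32, hidden_channels=64)
    h = cell(x)                                         # -> shape (1, 64, 64, 64)

In the approach described in the abstract, such recurrent convolutional blocks would presumably sit inside the U-Net's contracting and expanding paths, with a final classification head distinguishing the three classes (healthy, CORD, RP); the exact arrangement is given in the paper itself, not in this sketch.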