EyeBench: A Call for More Rigorous Evaluation of Retinal Image Enhancement
FOS: Computer and information sciences
Artificial Intelligence (cs.AI)
Computer Vision and Pattern Recognition (cs.CV)
Image and Video Processing (eess.IV)
FOS: Electrical engineering, electronic engineering, information engineering
DOI: 10.48550/arxiv.2502.14260
Publication Date: 2025-02-19
AUTHORS (11)
ABSTRACT
Over the past decade, generative models have achieved significant success in the enhancement of fundus images. However, the evaluation of these models still presents a considerable challenge. A comprehensive benchmark for fundus image enhancement is indispensable for three main reasons: 1) Existing denoising metrics (e.g., PSNR, SSIM) hardly extend to downstream real-world clinical research (e.g., vessel morphology consistency). 2) There is a lack of evaluation covering both paired and unpaired enhancement methods, along with a need for expert protocols to accurately assess clinical value. 3) An ideal evaluation system should provide insights that inform future developments in enhancement. To this end, we propose a novel benchmark, EyeBench, that aligns with these needs, offering a foundation for future work to improve the clinical relevance and applicability of enhancement methods. EyeBench has three appealing properties: 1) Multi-dimensional clinical alignment in evaluation: in addition to evaluating the enhancement task itself, it covers several clinically significant downstream tasks for fundus images, including vessel segmentation, DR grading, generalization, and lesion segmentation. 2) Medical expert-guided design: we introduce a dataset that promotes fair comparisons between methods and includes a manual evaluation protocol designed by medical experts. 3) Valuable insights: our study provides a rigorous evaluation of existing methods across different downstream tasks, assisting medical experts in making informed choices. Additionally, we offer further analysis of the challenges faced by existing methods. The code is available at https://github.com/Retinal-Research/EyeBench
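To illustrate the abstract's first point, the sketch below computes PSNR, a standard pixel-level denoising metric, for a synthetic image pair. This is a minimal NumPy-only illustration with made-up stand-in images, not the paper's evaluation code; it shows why a high PSNR score alone says nothing about whether clinically relevant structure (e.g., thin vessels or lesions) survived enhancement.

```python
import numpy as np


def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)


# Synthetic stand-ins for a ground-truth fundus image and an "enhanced" output.
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
enhanced = np.clip(reference + rng.normal(0.0, 0.05, reference.shape), 0.0, 1.0)

score = psnr(reference, enhanced)
print(f"PSNR: {score:.2f} dB")
# A PSNR in the mid-20s dB looks "good" at the pixel level, yet it cannot
# tell us whether vessel morphology was preserved -- which is why EyeBench
# pairs such metrics with downstream tasks like vessel segmentation,
# DR grading, and lesion segmentation.
```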