Abstract
The rapid progression and widespread outbreak of COVID-19 have had devastating effects on health systems around the world. The urgent need for countermeasures has led to widespread use of Computer-Aided Diagnosis (CAD) applications built on deep neural networks. The unprecedented success of machine learning techniques, especially deep learning networks, in medical imaging has made them prominent tools for efficient diagnosis of COVID-19 with improved detection accuracy. However, recent studies in the field of AI security have revealed that these deep learning models are vulnerable to adversarial attacks. Adversarial examples generated by attack algorithms are imperceptible to the human eye yet can easily deceive state-of-the-art deep learning models, and they therefore threaten security-critical learning applications. In this paper, the methodology, results, and concerns of recent works on the robustness of AI-based COVID-19 systems are summarized and discussed. We explore important security concerns related to deep neural networks and review current state-of-the-art defense methods for preventing performance degradation.
Author keywords: COVID-19, deep learning, security, adversarial attack
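The abstract notes that adversarial examples generated by attack algorithms are imperceptible yet can deceive deep models. A minimal sketch of this idea, using the Fast Gradient Sign Method (FGSM) on a toy logistic-regression "model" (all weights and inputs below are illustrative assumptions, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM: x' = x + eps * sign(dL/dx), where L is cross-entropy loss.

    For logistic regression, the gradient of the loss w.r.t. the
    input x has the closed form (p - y) * w.
    """
    p = sigmoid(w @ x + b)      # model's predicted probability
    grad_x = (p - y) * w        # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy "model" and input (illustrative values only).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0                         # assumed true label

p_clean = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)
# The small, bounded perturbation pushes the model's confidence in
# the true label down: p_adv < p_clean.
```

On image classifiers the same one-step perturbation, with a small `eps`, is typically invisible to a human observer while substantially degrading accuracy, which is the threat the surveyed works address.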