OPPORTUNITIES OF ARTIFICIAL INTELLIGENCE FOR CYBERSECURITY AUDIT AND RISK MANAGEMENT
DOI: https://doi.org/10.28925/2663-4023.2025.29.872
Keywords: artificial intelligence, cybersecurity audit, risk management, anomaly detection, machine learning
Abstract
This article explores the potential of artificial intelligence (AI) in cybersecurity auditing and risk management within the context of ongoing digital transformation. Traditional approaches to information security auditing—based on manual data collection and periodic assessments—are increasingly insufficient for dynamic and large-scale digital ecosystems. They are limited in scalability, prone to human error, and lack the capacity for continuous monitoring. The integration of AI technologies allows for automated anomaly detection, proactive risk assessment, real-time decision support, and analysis of vast volumes of both structured and unstructured data, including event logs, network traffic, and audit reports. The study examines the application of machine learning and deep learning models in audit practices, including recurrent and convolutional neural networks, clustering algorithms, and natural language processing (NLP) techniques for detecting security policy violations. Particular attention is given to the concept of Network Situation Awareness, which enables the prediction of system behavior and potential threats based on historical and real-time behavioral data. In addition to technical achievements, the research addresses the ethical challenges associated with AI deployment in audits: algorithmic opacity, bias risks, privacy concerns, and difficulties in delegating decision-making to automated systems. The need for explainable AI (XAI) and the development of ethical guidelines for responsible AI use in cybersecurity audits are emphasized. AI is highlighted as a dual-use technology—capable of both defending against and facilitating cyberattacks. The article refers to real-world incidents, such as the use of generative models in social engineering and voice-based fraud.
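As a minimal illustration of the automated anomaly detection described above, the Python sketch below flags outliers in hourly event-log counts using a simple z-score rule. The data, threshold, and function name are hypothetical assumptions for illustration only; the models discussed in the article (recurrent and convolutional networks, clustering) operate on far richer features than this sketch.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Return indices of counts whose z-score exceeds the threshold."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:  # no variation, nothing to flag
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical log feature: failed logins per hour, with one burst.
hourly_failed_logins = [4, 5, 3, 6, 4, 5, 4, 120, 5, 3]
print(flag_anomalies(hourly_failed_logins))  # [7]
```

In a continuous-monitoring setting, a rule like this would run over a sliding window of log data, with flagged indices forwarded to the audit pipeline for deeper inspection.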
The aim of the study is to identify both the benefits and limitations of AI-powered cybersecurity auditing and to provide recommendations for the ethical and effective implementation of intelligent systems. The paper concludes that a hybrid model—combining AI automation with human expertise—is the most promising strategy for enhancing the accuracy, efficiency, and adaptability of cybersecurity risk assessment. This integrated approach is essential to improving cyber resilience in today’s volatile digital environment.
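The hybrid model advocated in the conclusion can be sketched as a simple triage policy: an automated scorer disposes of clear-cut cases, while mid-confidence events are routed to a human auditor. Every detail below (event attributes, thresholds, scoring function) is an illustrative assumption, not a method taken from the article.

```python
def triage(events, score, auto_threshold=0.9, review_threshold=0.5):
    """Route events by risk score: auto-flag, human review, or pass."""
    auto, review, passed = [], [], []
    for event in events:
        s = score(event)
        if s >= auto_threshold:
            auto.append(event)       # high confidence: flag automatically
        elif s >= review_threshold:
            review.append(event)     # ambiguous: queue for a human auditor
        else:
            passed.append(event)     # low risk: no action
    return auto, review, passed

# Toy scorer: fraction of hypothetical risky attributes present.
RISKY = {"off_hours", "new_device", "privileged_account", "geo_mismatch"}
score = lambda event: len(RISKY & event) / len(RISKY)

events = [
    {"off_hours"},                                      # 0.25 -> pass
    {"off_hours", "new_device", "privileged_account"},  # 0.75 -> review
    RISKY,                                              # 1.00 -> auto-flag
]
auto, review, passed = triage(events, score)
print(len(auto), len(review), len(passed))  # 1 1 1
```

The thresholds encode the division of labor: raising `review_threshold` shifts work away from analysts at the cost of missed ambiguous cases, which is exactly the accuracy-versus-effort trade-off a hybrid audit model must tune.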
References
Vasan, D., Alazab, M., Wassan, S., et al. (2020). IMCFN: Image-based malware classification using fine-tuned convolutional neural network architecture. Computer Networks, 171, Article 107138. https://doi.org/10.1016/j.comnet.2020.107138
Buczak, A. L., & Guven, E. (2016). A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Communications Surveys & Tutorials, 18(2), 1153–1176. https://doi.org/10.1109/COMST.2015.2494502
Authorea. (n.d.). AI-driven cyber risk assessment: Predicting and preventing data breaches with machine learning. https://www.authorea.com/users/898703/articles/1274531-ai-driven-cyber-risk-assessment-predicting-and-preventing-data-breaches-with-machine-learning
Sommer, R., & Paxson, V. (2010). Outside the closed world: On using machine learning for network intrusion detection. In 2010 IEEE Symposium on Security and Privacy (pp. 305–316). IEEE. https://doi.org/10.1109/SP.2010.25
https://www.sciencedirect.com/science/article/abs/pii/S016740481930118X?via%3Dihub
Anjum, N., & Chowdhury, M. R. (2024). Revolutionizing cybersecurity audit through artificial intelligence automation: A comprehensive exploration. International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE).
Phanishlakarasu. (n.d.). AI for cybersecurity audits: Enhancing transparency and accountability. Medium. https://medium.com/@phanishlakarasu/ai-for-cybersecurity-audits-enhancing-transparency-and-accountability-a4572a59b436
Expert.com.ua. (2024). Meta plans to automate many product risk assessments [Meta планує автоматизувати багато оцінок ризиків продуктів]. https://expert.com.ua/200293-meta-planue-avtomatyzuvaty-bahato-ocinok-ryzykiv-produktiv.html
Chen, T., Wang, Z., & Zhang, C. (2021). Deep learning for cyber security intrusion detection: Approaches, datasets, and comparative study. Journal of Information Security and Applications, 58, 102726. https://doi.org/10.1016/j.jisa.2021.102726
Roy, A., Dey, N., & Ashour, A. S. (Eds.). (2022). Cyber security and digital forensics: Challenges and future trends. Springer Nature. https://doi.org/10.1007/978-981-19-2591-3
Xu, W., Wang, L., & Zhao, Y. (2020). Intrusion detection system based on deep belief network and probabilistic neural network. Neural Computing and Applications, 32, 11265–11273. https://doi.org/10.1007/s00521-019-04552-2
Zhou, Y., & Sharma, A. (2022). A survey of NLP techniques for cybersecurity applications. ACM Computing Surveys, 55(3), Article 50. https://doi.org/10.1145/3491200
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint. https://arxiv.org/abs/1702.08608
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2
McMahan, H. B., Moore, E., Ramage, D., Hampson, S., & Agüera y Arcas, B. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics (pp. 1273–1282). PMLR. https://proceedings.mlr.press/v54/mcmahan17a.html
Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4), 211–407. https://doi.org/10.1561/0400000042
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint. https://arxiv.org/abs/1802.07228
Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning. fairmlbook.org. https://fairmlbook.org
Islam, M. R. (2024). Generative AI, cybersecurity, and ethics. Wiley.
Petrenko, S. A., & Smirnov, V. I. (2023). Threat models and recommendations for protecting information systems based on AI. In Proceedings of the Conference on Information Security and Cyber Defense (pp. 764–770).
Lungol, O. M. (2024). Review of methods and strategies of cybersecurity using AI tools. In Proceedings of the 2nd All-Ukrainian Scientific and Practical Conference "Digital Transformations in the Context of Security Challenges" (pp. 379–389). Kyiv: National Academy of the Security Service of Ukraine.
License
Copyright (c) 2025 Віктор Ободяк, Михайло Отрощенко, Володимир Любчак

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.