SOCIAL ENGINEERING AS A TOOL OF INFORMATION AND PSYCHOLOGICAL OPERATIONS IN THE CONTEXT OF ARMED CONFLICT
DOI: https://doi.org/10.28925/2663-4023.2026.32.1108

Keywords: social engineering; cybersecurity; information and psychological influence; manipulative messages; phishing; disinformation; digital hygiene; media literacy

Abstract
The article examines social engineering as one of the most effective tools of manipulative influence on users in the digital environment under conditions of armed conflict. It is emphasized that during wartime social engineering becomes particularly dangerous, as it combines psychological pressure with technological channels of information dissemination, which hinders the critical perception of messages and increases the likelihood of impulsive behavior. It is substantiated that in crisis conditions the main goal of social engineering influence is not only to mislead the user but also to shape controlled behavioral responses: panic, rapid dissemination of unverified messages, reduced trust in official communication channels, and disorganization of the information space.
Typical examples of manipulative messages in messengers are analyzed that imitate emergency threat warnings and contain calls for immediate action (for example, “urgent,” “alert,” “open the map of targets/threats”). It is shown that the effectiveness of such messages rests on well-established psychological triggers, in particular the urgency effect, appeals to fear, informational uncertainty, and cognitive overload. It is demonstrated that combining pseudo-official stylistics with visual markers of “legitimacy” (danger symbols, short imperative formulations, emotionally charged headlines) creates an impression of credibility and prompts an automated reaction without proper verification of the source.
Indicators that help identify the social engineering nature of a message are defined separately: imposed urgency, localization of the threat to a specific territory or population group, direct behavioral instructions (“go,” “click,” “open”), and the use of reach or reaction counters as a means of social proof. The practical significance of the study lies in formulating basic preventive recommendations for countering social engineering: verifying messages through official sources, adhering to the principles of digital hygiene, refraining from following suspicious links, and developing media literacy and resistance to emotional influence. It is concluded that systematic counteraction to social engineering during wartime requires a combination of technical, informational, and educational measures aimed at preserving information stability and safe user behavior in cyberspace.
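The indicators listed above lend themselves to a simple rule-based check. The sketch below is purely illustrative and is not part of the article: the keyword groups and scoring scheme are hypothetical assumptions, meant only to show how such indicators could be operationalized.

```python
import re

# Hypothetical keyword groups, one per indicator described in the abstract.
URGENCY = ["urgent", "alert", "immediately", "now"]          # imposed urgency
IMPERATIVES = ["go", "click", "open", "share", "forward"]    # direct behavioral instructions
SOCIAL_PROOF = ["views", "shares", "reactions", "seen by"]   # reach/reaction counters

def social_engineering_score(message: str) -> int:
    """Return how many indicator groups fire for a message (0–3)."""
    text = message.lower()
    score = 0
    for group in (URGENCY, IMPERATIVES, SOCIAL_PROOF):
        # Whole-word match so e.g. "go" does not fire inside "forecast".
        if any(re.search(r"\b" + re.escape(word) + r"\b", text) for word in group):
            score += 1
    return score

# Example: a message imitating an emergency warning fires two groups.
msg = "URGENT! Open the map of targets now and share with everyone!"
print(social_engineering_score(msg))  # → 2
```

A real detector would of course need multilingual keyword lists, sender/source verification, and machine-learning components rather than fixed keywords; the point here is only that the article's indicators are concrete enough to be encoded as checks.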
License
Copyright (c) 2026 Olha Haborets

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.