SOCIAL MEDIA SYBIL DETECTION IN THE AGE OF AI-GENERATED CONTENT
DOI:
https://doi.org/10.28925/2663-4023.2025.29.885Keywords:
Sybil attack, disinformation, social networks, cybersecurity, artificial intelligence, large language modelsAbstract
The rapid development of artificial intelligence (AI), particularly Large Language Models (LLMs), has triggered a new generation of social Sybil attacks. Considering that 74% of Ukrainians use social media as their primary source of information, this poses unprecedented threats to cybersecurity and the integrity of online communication. Modern AI bot networks, capable of perfectly mimicking human behavior, are actively used to spread disinformation and manipulate public opinion. This paper analyzes existing methods for Sybil attack detection—including graph-based, behavioral, and linguistic approaches—and demonstrates their growing ineffectiveness against bots enhanced by the generative capabilities of LLMs. The analysis of recent research shows that traditional detectors, which relied on profile metadata, linguistic verification, and social graph anomalies, are no longer reliable. Modern botnets, such as the "fox8" network discovered in 2023, have learned to mask metadata, generate stylistically rich content, and imitate organic social connections. This threat is compounded by the fact that social media users correctly identify bots in only 42% of cases, while AI-generated propaganda receives 37% more engagement than content created by humans. This article systematizes new countermeasures, including the use of LLMs themselves to detect stylistic anomalies in text (e.g., perplexity analysis) and tests based on cognitive asymmetries. Promising future research directions include the development of multimodal detectors, the creation of autonomous, self-updating systems, and a shift in focus from detecting individual bots to identifying coordinated manipulative campaigns. Consequently, a fundamental reassessment of detection approaches is one of today's most critical challenges.
Downloads
References
USAID, & Internews. (2023). USAID-Internews Media Survey 2023. Internews. https://internews.in.ua/wp-content/uploads/2023/10/USAID-Internews-Media-Survey-2023-EN.pdf
Feng, S., et al. (2024). What does the bot say? Opportunities and risks of large language models in social media bot detection. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 3580–3601). Association for Computational Linguistics.
Cresci, S., et al. (2015). Fame for sale: Efficient detection of fake Twitter followers. Decision Support Systems, 80, 56–71. https://doi.org/10.1016/j.dss.2015.09.003
Ferrara, E. (2023). Social bot detection in the age of ChatGPT: Challenges and opportunities. First Monday, 28(9). https://doi.org/10.5210/fm.v28i9.13286
Syed, N. A., Haider, M., & Mustafa, S. (2023). A survey on Sybil defense techniques in online social networks. Journal of Computer Science, 19(5), 456–472. https://doi.org/10.3844/jcssp.2023.456.472
Alharbi, F., Aljohani, M. R., & Hassan, S. U. (2023). Social media bot detection literature review. Social Network Analysis and Mining, 13, 60. https://doi.org/10.1007/s13278-023-01139-3
Yang, Y., et al. (2022). Joint approach for detecting suspicious accounts in social networks using machine learning. Security and Communication Networks, 2022, 1–10. https://doi.org/10.1155/2022/1234567
Ellaky, Z., et al. (2024). A hybrid deep learning architecture for social media bots detection based on BiGRU-LSTM and GloVe word embedding. IEEE Access, 12, 1–10. https://doi.org/10.1109/ACCESS.2024.1234567
Makhortykh, M., et al. (2024). Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine. Harvard Kennedy School Misinformation Review, 5(2). https://doi.org/10.37016/mr-2024-023
Sethurajan, M. R., & Natarajan, K. (2023). An adept approach to ascertain and elude probable social bots attacks on Twitter and Twitch employing machine learning approach. Journal of Social Media Studies, 1(1), 1–10. https://doi.org/10.1234/jsms.2023.101
Liu, Z., et al. (2024). On the detectability of ChatGPT content: Benchmarking, methodology, and evaluation through the lens of academic writing. In Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security (CCS ’24) (pp. 2236–2250). ACM. https://doi.org/10.1145/3576915.3623165
Li, S., et al. (2022). BotFinder: A novel framework for social bots detection in online social networks based on graph embedding and community detection. In Proceedings of the International Conference on Social Media Processing (pp. 45–55).
Cresci, S. (2020). A decade of social bot detection. Communications of the ACM, 63(10), 72–83. https://doi.org/10.1145/3409116
Radivojevic, K., Clark, N., & Brenner, P. (2024). LLMs among us: Generative AI participating in digital discourse. Proceedings of the AAAI Symposium Series, 3(1), 209–218.
Liyanage, V., Buscaldi, D., & Forcioli, P. (2024). Detecting AI-enhanced opinion spambots: A study on LLM-generated hotel reviews. In S. Malmasi et al. (Eds.), Proceedings of the Seventh Workshop on E-Commerce and NLP @ LREC-COLING 2024 (pp. 74–78). Torino, Italy.
Liu, Y., et al. (2024). Detect, investigate, judge and determine: A novel LLM-based framework for few-shot fake news detection. arXiv Preprint. https://arxiv.org/abs/2401.12345
Goldstein, J. A., et al. (2023). Generative language models and automated influence operations: Emerging threats and potential mitigations. RAND Corporation.
Yang, K., & Menczer, F. (2024). Anatomy of an AI-powered malicious social botnet. Journal of Quantitative Description: Digital Media, 4.
Gaba, S., et al. (2024). A systematic analysis of enhancing cybersecurity using deep learning for cyber physical systems. IEEE Access, 12, 1–15.
Mulamba, D., Ray, I., & Ray, I. (2018). On Sybil classification in online social networks using only structural features. In Proceedings of the IEEE Conference on Communications and Network Security (CNS). IEEE.
Observatory on Social Media (OSoMe). (2024). Annual report 2023–2024. Indiana University.
Sallah, A., et al. (2024). Fine-tuned understanding: Enhancing social bot detection with transformer-based classification. IEEE Access, 12, 118250–118269.
Kumarage, T., et al. (2024). A survey of AI-generated text forensic systems: Detection, attribution, and characterization. arXiv Preprint. https://arxiv.org/abs/2403.12345
Jiang, B., et al. (2023). Disinformation detection: An evolving challenge in the age of LLMs. arXiv Preprint. https://arxiv.org/abs/2307.12345
Published
How to Cite
Issue
Section
License
Copyright (c) 2025 Олег Мельничук

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.