DETECTION OF MANIPULATIVE COMPONENT IN TEXT MESSAGES OF MASS MEDIA IN THE CONTEXT OF PROTECTION OF DOMESTIC CYBERSPACE


DOI:

https://doi.org/10.28925/2663-4023.2025.29.839

Keywords:

manipulative component, large language model, cyberspace, information confrontation, automated text analysis, information security, mass media

Abstract

The article addresses the problem of increasing the effectiveness of information security tools within the national cyberspace through automated detection of the manipulative component in mass media text messages. It is shown that one of the main avenues for such improvement is the use of large language models (LLMs), which can perform deep contextual analysis of natural language text, taking into account its emotional, rhetorical, and semantic content. It is established that most known solutions for detecting textual manipulation with LLMs are difficult to adapt to practical use in systems protecting the domestic cyberspace: they require a powerful hardware and software infrastructure to host the corresponding LLM tools, and they depend on specialized training samples that reflect the main types of manipulative influence characteristic of present-day information confrontation. To overcome these limitations, the article proposes a concept for using LLM tools (GPT, Gemini, DeepSeek, Grok, etc.) based on dialogic interaction with pre-standardized, formalized queries that account for the main types of manipulative influence. According to the adopted classification, these include emotionally manipulative messages, information substitution, discrediting narratives, context manipulation, propaganda constructs, exploitation of socially sensitive topics, and artificial formation of public opinion through bot activity. A method of interacting with an LLM has been developed that comprises text preprocessing, query formation at the basic, typological, and interpretative levels, and interpretation of LLM responses in a formalized form.
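The staged interaction described above can be sketched as a small prompt pipeline. The taxonomy labels, prompt wording, JSON answer schema, and function names below are illustrative assumptions, not the article's exact formalized queries; an actual LLM call would replace the parsing stub's input.

```python
import json

# Manipulation taxonomy following the article's classification
# (English renderings of the labels are illustrative).
MANIPULATION_TYPES = [
    "emotionally manipulative messaging",
    "information substitution",
    "discrediting narrative",
    "context manipulation",
    "propaganda construct",
    "exploitation of socially sensitive topics",
    "artificial opinion formation (bot activity)",
]

def preprocess(text: str) -> str:
    """Stage 1: minimal normalization of the media message."""
    return " ".join(text.split())

def build_queries(text: str) -> dict:
    """Stage 2: standardized queries at the basic, typological,
    and interpretative levels."""
    return {
        "basic": (
            "Does the following text contain a manipulative component? "
            'Answer strictly as JSON {"manipulative": true|false}.\n' + text
        ),
        "typological": (
            "Classify the manipulation in the text as one of: "
            + ", ".join(MANIPULATION_TYPES)
            + '. Answer strictly as JSON {"type": "..."}.\n' + text
        ),
        "interpretative": (
            "Quote the fragments carrying the manipulative influence and "
            'explain each. Answer strictly as JSON {"fragments": [...]}.\n'
            + text
        ),
    }

def parse_response(raw: str) -> dict:
    """Stage 3: interpret the LLM answer in a formalized (JSON) form."""
    return json.loads(raw)

queries = build_queries(preprocess("Example   media  message."))
print(sorted(queries))  # → ['basic', 'interpretative', 'typological']
```

Forcing a machine-readable answer format at each level is what makes the dialogic interaction usable inside an automated protection system rather than as a manual chat.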
Experimental studies on three groups of texts (scientific, political, and destructive-propaganda) showed that the analysis results obtained with the GPT-4-turbo LLM tool agree with expert assessments by 87% on average, which indicates a high reliability of the results and confirms the practical effectiveness of the proposed solutions. It is also shown that the accuracy and stability of manipulative-component assessments can be improved by in-context adaptation of the LLM tools, i.e., including examples of correct answers directly in the query, which increases the consistency of the results without full retraining of the model.
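The in-context adaptation mentioned above amounts to few-shot prompting: labelled examples are prepended to the query so the model's behaviour is steered without updating its weights. A minimal sketch, with example texts and label names chosen for illustration only:

```python
def few_shot_prompt(examples: list[tuple[str, str]], new_text: str) -> str:
    """Build a query that 'retrains' the LLM in context: correct
    (text, label) pairs are included before the text to be assessed."""
    parts = ["Label each text as MANIPULATIVE or NEUTRAL."]
    for text, label in examples:
        parts.append(f"Text: {text}\nLabel: {label}")
    parts.append(f"Text: {new_text}\nLabel:")
    return "\n\n".join(parts)

examples = [
    ("Only traitors doubt our glorious victory!", "MANIPULATIVE"),
    ("The conference takes place on 12 May.", "NEUTRAL"),
]
prompt = few_shot_prompt(examples, "They are hiding the truth from you!")
```

Because the examples travel inside the query, the same hosted LLM tool (GPT, Gemini, DeepSeek, Grok, etc.) can be adapted to new manipulation patterns without any access to its training infrastructure.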


References

Alsaedi, N., & Alsaedi, A. (2023). Improving multiclass classification of fake news using BERT-based models and ChatGPT-augmented data. Inventions, 8(5), 112. https://doi.org/10.3390/inventions8050112

Da San Martino, G., Barrón-Cedeño, A., Petrov, R., & Nakov, P. (2019). Fine-grained analysis of propaganda in news articles [Preprint]. arXiv. https://arxiv.org/abs/1910.02517

Da San Martino, G., Yu, S., Barrón-Cedeño, A., Petrov, R., & Nakov, P. (2020). SemEval-2020 task 11: Detection of propaganda techniques in news articles [Preprint]. arXiv. https://arxiv.org/abs/2009.02696

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding [Preprint]. arXiv. https://arxiv.org/abs/1810.04805

Korchenko, O., Tereikovskyi, I., Ziubina, R., Tereikovska, L., Korystin, O., Tereikovskyi, O., & Karpinskyi, V. (2025). Modular neural network model for biometric authentication of personnel in critical infrastructure facilities based on facial images. Applied Sciences, 15, 2553. https://doi.org/10.3390/app15052553

Li, L., Ma, R., Guo, Q., & Qiu, X. (2020). BERT-ATTACK: Adversarial attack against BERT using BERT [Preprint]. arXiv. https://arxiv.org/abs/2004.09984

Lin, S., Hilton, J., & Evans, O. (2021). TruthfulQA: Measuring how models mimic human falsehoods [Preprint]. arXiv. https://arxiv.org/abs/2109.07958

Smith, J., & Brown, L. (2022). Fake news detection using deep learning models. Journal of Computational Linguistics, 28(4), 345–360.

StopRussia | MRIYA. (n.d.). Official chatbot of Ukraine. Internet Archive. https://web.archive.org/web/20220601115228/https://t.me/stopdrugsbot

StopRussiaChannel | MRIYA. (n.d.). Official Telegram channel of Ukraine for checking and blocking resources that spread fake news and propaganda. Internet Archive. https://web.archive.org/web/20220601090556/https://t.me/stoprussiachannel

Tereikovskyi, I., Korchenko, O., Bushuyev, S., Tereikovskyi, O., Ziubina, R., & Veselska, O. (2023). A neural network model for object mask detection in medical images. International Journal of Electronics and Telecommunications, 69(1), 41–46. https://doi.org/10.24425/ijet.2023.144329

Tereikovskyi, I., AlShboul, R., Mussiraliyeva, Sh., Tereikovska, L., Bagitova, K., Tereikovskyi, O., & Hu, Zh. (2024). Method for constructing neural network means for recognizing scenes of political extremism in graphic materials of online social networks. International Journal of Computer Network and Information Security, 16(3), 52–69. https://doi.org/10.5815/ijcnis.2024.03.05

Tereikovskyi, I., Hu, Zh., Chernyshev, D., Tereikovska, L., Korystin, O., & Tereikovskyi, O. (2022). The method of semantic image segmentation using neural networks. International Journal of Image, Graphics and Signal Processing, 14(6), 1–14. https://doi.org/10.5815/ijigsp.2022.06.01

Wang, Z., Liu, Y., & Chen, X. (2023). Implementing BERT and fine-tuned RoBERTa to detect AI generated news by ChatGPT [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2306.07401

Volokhovskyi, V., Khovrat, A., Kobziev, V., & Nazarov, O. (2024). Domain specific text analysis via decoder-only large language models. Grail of Science, 43, 313–321. https://doi.org/10.36074/grail-of-science.06.09.2024.041

Vorochek, O. H., & Solovei, I. V. (2024). Using artificial intelligence speech models to generate social media posts. Technical Engineering, 1(93), 128–134. https://doi.org/10.26642/ten-2024-1(93)-128-134

Zaporozhets, V., & Opirskyy, I. (2024). The danger of using Telegram and its impact on Ukrainian society. Cybersecurity: Education, Science, Technique, 1(25), 59–78. https://doi.org/10.28925/2663-4023.2024.25.5978

Cabinet of Ministers of Ukraine. (2025). On approval of the action plan for 2025 for the implementation of the cybersecurity strategy of Ukraine: Order No. 204-p. https://www.kmu.gov.ua/npas/pro-zatverdzhennia-planu-zakhodiv-na-2025-rik-z-realizatsii-stratehii-kiberbezpeky-t70325

Korovii, O., & Tereikovskyi, I. (2024). Conceptual model of the process of determining the emotional tonality of the text. Computer-Integrated Technologies: Education, Science, Production, 55, 115–123. https://doi.org/10.36910/6775-2524-0560-2024-55-14

Lesko, N. V., & Kira, S. O. (2023). Cyber security as a part of the national security of Ukraine in the conditions of war. Juridical Scientific and Electronic Journal, 5, 112–120. https://doi.org/10.32782/2524-0374/2023-5/55

Chervyakov, O. (2024). Peculiarities of ensuring cyber security as a leading component of Ukraine’s national security. Bulletin of Criminological Association of Ukraine, 33(3), 511–518. https://doi.org/10.32631/vca.2024.3.47


Published

2025-09-26

How to Cite

Korchenko, O., Tereikovskyi, I., Dychka, I., Romankevich, V., & Tereikovska, L. (2025). Detection of manipulative component in text messages of mass media in the context of protection of domestic cyberspace. Electronic Professional Scientific Journal «Cybersecurity: Education, Science, Technique», 1(29), 27–40. https://doi.org/10.28925/2663-4023.2025.29.839
