ENSURING SECURE OPERATION OF VOICE AGENTS BASED ON LARGE LANGUAGE MODELS
DOI: https://doi.org/10.28925/2663-4023.2025.29.868

Keywords: voice agents, large language models, software architecture, information systems security

Abstract
The rapid deployment of voice agents based on large language models (LLMs) in corporate systems creates new challenges for software architecture and information system security. Traditional development approaches do not account for the specifics of integrating an LLM as a component that attackers can compromise through prompt injection and other attacks at the natural language processing level. The article presents an architectural approach to secure voice agent integration based on the principle of separating core business rules from the logic of interaction with language models. The proposed architecture creates a multi-level security system via (1) the implementation of the principle of least privilege for LLM components; (2) independent validation of business operations at the domain service level; and (3) the use of the large language model as a mediator between the user and critical systems. The research demonstrates the effectiveness of the architectural approach in countering typical attacks, including prompt injections to bypass authorisation, sensitive data leakage, unauthorised operations via function calling, and violation of business process integrity. The mediator architecture greatly improves the understandability and manageability of the system thanks to a clear division of roles between components, which increases the system's predictability and enables the use of traditional validation and audit methods. Architecturally isolating the LLM from critical operations allows organisations to adopt advanced AI technologies without a major overhaul of existing security systems, ensuring resilience to new vulnerabilities and flexibility in choosing technological solutions. The results of the research are of practical importance for the development of corporate systems with voice agent integration and contribute to forming secure practices for using large language models in applications of critical importance.
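As a minimal illustration of the mediator principle described above, the Python sketch below shows one way to treat LLM tool-call output as untrusted input and to enforce authorisation and business rules independently at the domain service level. All names (TransferRequest, DomainService, mediator) and the ACL/limit checks are illustrative assumptions, not the article's implementation.

from dataclasses import dataclass


@dataclass
class TransferRequest:
    """A typed business request; identity comes from the session, not the model."""
    user_id: str
    account_id: str
    amount: float


class DomainService:
    """Owns the business rules and does not trust anything the LLM produced."""

    def __init__(self, acl: dict[str, set[str]], limit: float):
        self._acl = acl      # user_id -> accounts the user may operate on
        self._limit = limit  # per-operation amount limit

    def execute_transfer(self, req: TransferRequest) -> str:
        # Independent validation at the domain level (least privilege for the LLM):
        # these checks run regardless of what the model "agreed" to in conversation.
        if req.account_id not in self._acl.get(req.user_id, set()):
            raise PermissionError("user is not authorised for this account")
        if not 0 < req.amount <= self._limit:
            raise ValueError("amount violates business rules")
        return f"transferred {req.amount} from {req.account_id}"


def mediator(llm_tool_call: dict, session_user: str, service: DomainService) -> str:
    # Treat LLM output as untrusted input: map only whitelisted fields into a
    # typed request, and take the user identity from the authenticated session,
    # so a prompt injection cannot impersonate another user.
    req = TransferRequest(
        user_id=session_user,
        account_id=str(llm_tool_call["account_id"]),
        amount=float(llm_tool_call["amount"]),
    )
    return service.execute_transfer(req)


# Example: even if a prompt injection makes the model propose someone else's
# account, the domain service rejects the operation.
service = DomainService(acl={"alice": {"acc-1"}}, limit=1000.0)
print(mediator({"account_id": "acc-1", "amount": 250.0}, "alice", service))

The key design choice here is that the model can only propose operations: it never holds credentials to the critical system, so traditional validation and audit methods apply unchanged at the service boundary.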
License
Copyright (c) 2025 Андрій Артеменко, Богдан Худік

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.