MODEL OF SOCIAL-ADAPTIVE NAVIGATION OF A MOBILE ROBOT USING REINFORCEMENT LEARNING METHODS
DOI: https://doi.org/10.28925/2663-4023.2025.29.907
Keywords: information technology; modeling; machine learning methods; reinforcement learning methods; autonomous mobile robots; mobile robot navigation
Abstract
Classic trajectory-planning algorithms, despite their effectiveness in static environments, show significant limitations when deployed in dynamic social environments. Their main drawback is the inability to interpret human movement in real time, which leads to unpredictable and potentially dangerous maneuvers. In response to these limitations, reinforcement learning (RL) methods have gained considerable popularity. This paradigm allows an autonomous mobile robot to form an optimal behavior strategy on its own through direct interaction with the environment, receiving feedback in the form of rewards or penalties. This study focuses on reinforcement learning methods and social behavior models with the aim of developing safe, effective, and socially adaptive navigation for autonomous mobile robots. The paper presents a model whose key feature is a comprehensive approach to shaping agent behavior: the proposed model accounts not only for the basic tasks of reaching the goal and avoiding physical obstacles, but also for important aspects of social interaction. The scientific novelty of the work lies in a multicomponent reward function that combines rewards for reaching the target point and for avoiding collisions with dynamic and static objects with terms that purposefully encourage the agent to adhere to socially acceptable norms. In this way, the robot learns not merely to avoid people, but to do so in a way that is intuitive and comfortable for them. The ultimate goal of the research is a navigation agent that is not only safe but also socially intelligent. This is a step toward the full integration of autonomous robotic systems into everyday human environments, since successful coexistence requires not only physical safety but also psychological comfort and an intuitive understanding of the robot's behavior.
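A multicomponent reward function of the kind described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the weights, radii, helper names, and the linear proxemics penalty are assumptions for demonstration, not the values or exact formulation used in the paper.

```python
import math

def social_reward(robot_pos, goal_pos, obstacle_dists, human_dists,
                  goal_radius=0.3, collision_radius=0.25, comfort_radius=1.2,
                  w_goal=10.0, w_collision=-10.0, w_social=-2.0,
                  step_penalty=-0.01):
    """Illustrative multicomponent reward combining three terms:
    goal reaching, collision avoidance, and a social-comfort penalty.
    All parameter values are hypothetical."""
    # Terminal reward for reaching the target point
    if math.dist(robot_pos, goal_pos) <= goal_radius:
        return w_goal
    # Terminal penalty for colliding with a static or dynamic obstacle
    if any(d <= collision_radius for d in obstacle_dists + human_dists):
        return w_collision
    # Social term: penalize intrusion into personal space,
    # growing linearly as the robot approaches a person (proxemics)
    social = sum(w_social * (1.0 - d / comfort_radius)
                 for d in human_dists if d < comfort_radius)
    # Small per-step penalty encourages efficient, direct paths
    return social + step_penalty

# Example: robot far from the goal, one human 0.8 m away (inside the
# 1.2 m comfort zone), nearest static obstacle 2.0 m away
r = social_reward((0.0, 0.0), (5.0, 0.0),
                  obstacle_dists=[2.0], human_dists=[0.8])
```

In this sketch the social term shapes the dense per-step reward rather than acting as a hard constraint, so the agent can still pass near people when necessary but is steadily pushed toward trajectories that respect personal space.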
References
Kruse, T., Pandey, A. K., Alami, R., & Kirsch, A. (2013). Human-aware robot navigation: A survey. Robotics and Autonomous Systems, 61(12), 1726–1743. https://doi.org/10.1016/j.robot.2013.05.007
Yin, X., Yulian, C., Cheng, M., Liu, W., Dong, W., & Yao, D. (2024). Path planning of mobile robot based on improved D* Lite_TEB algorithm. IET Conference Proceedings, 2023(49), 26–31. https://doi.org/10.1049/icp.2024.3623
Li, Y., Jin, R., Xu, X., Qian, Y., Wang, H., Xu, S., & Wang, Z. (2022). A mobile robot path planning algorithm based on improved A* algorithm and dynamic window approach. IEEE Access, 1. https://doi.org/10.1109/access.2022.3179397
Forer, S., Banisetty, S. B., Yliniemi, L., Nicolescu, M., & Feil-Seifer, D. (2018). Socially-aware navigation using non-linear multi-objective optimization. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1–8). IEEE. https://doi.org/10.1109/iros.2018.8593825
Teso-Fz-Betoño, D., Zulueta, E., Fernandez-Gamiz, U., Saenz-Aguirre, A., & Martinez, R. (2019). Predictive dynamic window approach development with artificial neural fuzzy inference improvement. Electronics, 8(9), 935. https://doi.org/10.3390/electronics8090935
Kang, S., Yang, S., Kwak, D., Jargalbaatar, Y., & Kim, D. (2024). Social type-aware navigation framework for mobile robots in human-shared environments. Sensors, 24(15), 4862. https://doi.org/10.3390/s24154862
Kivrak, H., Cakmak, F., Kose, H., & Yavuz, S. (2020). Social navigation framework for assistive robots in human inhabited unknown environments. Engineering Science and Technology, an International Journal. https://doi.org/10.1016/j.jestch.2020.08.008
Faust, A., Oslund, K., Ramirez, O., Francis, A., Tapia, L., Fiser, M., & Davidson, J. (2018). PRM-RL: Long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–9). IEEE. https://doi.org/10.1109/icra.2018.8461096
Chen, Y. F., Liu, M., Everett, M., & How, J. P. (2017). Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–8). IEEE. https://doi.org/10.1109/icra.2017.7989037
Kahn, G., Villaflor, A., Ding, B., Abbeel, P., & Levine, S. (2018). Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–8). IEEE. https://doi.org/10.1109/icra.2018.8460655
Chen, Y. F., Everett, M., Liu, M., & How, J. P. (2017). Socially aware motion planning with deep reinforcement learning. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1–8). IEEE. https://doi.org/10.1109/iros.2017.8202312
Hanenko, L. D., & Zhebka, V. V. (2023). Analytical review of navigation issues of mobile robots in indoor environment. Telecommunication and Information Technologies, 80(3). https://doi.org/10.31673/2412-4338.2023.038087
Ngo, H. Q. T., Le, V. N., Thien, V. D. N., Nguyen, T. P., & Nguyen, H. (2020). Develop the socially human-aware navigation system using dynamic window approach and optimize cost function for autonomous medical robot. Advances in Mechanical Engineering, 12(12), 168781402097943. https://doi.org/10.1177/1687814020979430
Wang, Y., Yu, J., Kong, Y., Sun, L., Liu, C., Wang, J., & Chi, W. (2024). Socially adaptive path planning based on generative adversarial network. IEEE Transactions on Intelligent Vehicles, 1–13. https://doi.org/10.1109/tiv.2024.3478219
Singamaneni, P. T., Bachiller-Burgos, P., Manso, L. J., Garrell, A., Sanfeliu, A., Spalanzani, A., & Alami, R. (2024). A survey on socially aware robot navigation: Taxonomy and future challenges. The International Journal of Robotics Research. https://doi.org/10.1177/02783649241230562
Li, K., Lu, Y., & Meng, M. Q. H. (2021). Human-aware robot navigation via reinforcement learning with hindsight experience replay and curriculum learning. In 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 1–7). IEEE. https://doi.org/10.1109/robio54168.2021.9739519
Daza, M., Barrios-Aranibar, D., Diaz-Amado, J., Cardinale, Y., & Vilasboas, J. (2021). An approach of social navigation based on proxemics for crowded environments of humans and robots. Micromachines, 12(2), 193. https://doi.org/10.3390/mi12020193
Hanenko, L. D., & Zhebka, V. V. (2024). Application of reinforcement learning methods for path planning of mobile robots. Telecommunication and Information Technologies, (1), 16–25. https://doi.org/10.31673/2412-4338.2024.011625
License
Copyright (c) 2025 Liudmyla Hanenko, Viktoriia Zhebka

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.