IDENTIFICATION OF MILITARY OBJECTS BASED ON ARTIFICIAL NEURAL NETWORKS

Authors

A. Kostiuk, S. Zaitsev, V. Vasylenko, L. Zaitseva

DOI:

https://doi.org/10.28925/2663-4023.2025.29.912

Keywords:

interactive interfaces, graphical user interface, object recognition, artificial neural network

Abstract

This paper explores the problem of identifying modern military assets using artificial neural networks (ANNs), with a focus on the capsule network (CapsNet) algorithm, which improves the modeling of hierarchical relationships between image features. A fundamental aspect of automated target detection is the ability to recognize objects in images acquired by reconnaissance platforms such as drones. In this context, convolutional neural networks (CNNs) play a crucial role in the analysis and classification of visual data collected during aerial surveillance. Although CNNs are highly effective for image-based object identification, their performance depends heavily on the availability of large training datasets; because of the classified nature of military infrastructure, obtaining sufficient training data remains a significant limitation, and insufficient data can markedly degrade ANN performance. To address this problem, a multi-layer CapsNet framework was chosen, specifically designed for military object recognition with a small training set. The dataset used in the study, taken from https://universe.roboflow.com/robo-flow-woln1/military-object-detection-x7gfp, includes both military and civilian objects. The proposed framework demonstrates a significant improvement in recognition accuracy, reaching 96.54%. Experimental results show that this approach outperforms many other algorithms in recognition accuracy and also contributes to the development of interactive interfaces based on design patterns and frameworks. In addition, automated methods of interface verification are investigated to identify potential problems and errors at the early stages of development. Overall, the paper offers a detailed analysis of methods and algorithms that improve the efficiency of object recognition from the selected images, and it emphasizes the importance of considering user needs and usability when developing software products.
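To make the capsule-network idea in the abstract concrete, the sketch below illustrates the routing-by-agreement step on which CapsNet architectures are built. It is a minimal, illustrative example only, not the authors' implementation: the framework choice (PyTorch), the capsule counts, vector dimensions, and batch size are assumptions made for demonstration. In a full CapsNet, such a routing step sits between convolutional primary-capsule layers and a class-capsule layer, and the length of each output capsule vector is read as the probability that an object of that class is present.

```python
# Minimal, illustrative PyTorch sketch of routing by agreement in a capsule network.
# All shapes below are placeholders, not values taken from the paper.
import torch
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity: preserves orientation, maps length into (0, 1)."""
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


def dynamic_routing(u_hat, iterations=3):
    """Route prediction vectors u_hat [batch, in_caps, out_caps, out_dim]
    to output capsules by iteratively reinforcing agreeing predictions."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    v = None
    for _ in range(iterations):
        c = F.softmax(b, dim=2)                        # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)       # weighted sum per output capsule
        v = squash(s)                                  # output capsule vectors
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)   # raise logits where predictions agree
    return v


# Toy usage: 32 primary capsules routed to 10 output capsules of dimension 16.
u_hat = torch.randn(4, 32, 10, 16)
v = dynamic_routing(u_hat)
class_scores = v.norm(dim=-1)   # capsule length ~ class-presence probability
print(class_scores.shape)       # torch.Size([4, 10])
```

Because class evidence is aggregated through agreement between lower-level and higher-level capsules rather than through pooling alone, this mechanism is commonly credited with better handling of part-whole relationships on small training sets, which is the motivation given in the abstract for choosing a CapsNet over a plain CNN.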

Published

2025-09-26

How to Cite

Kostiuk, A., Zaitsev, S., Vasylenko, V., & Zaitseva, L. (2025). IDENTIFICATION OF MILITARY OBJECTS BASED ON ARTIFICIAL NEURAL NETWORKS. Electronic Professional Scientific Journal «Cybersecurity: Education, Science, Technique», 1(29), 609–627. https://doi.org/10.28925/2663-4023.2025.29.912