Ball Detection System for a Wheeled Soccer Robot Using the MobileNetV2 SSD Method
This paper presents research on the use of artificial intelligence for object recognition on an autonomous robot, specifically a wheeled soccer robot. The goal is to recognize the ball using a Single Shot MultiBox Detector (SSD) model with a MobileNetV2 backbone. The system combines multiple vision inputs and provides both distance and angle measurements for the detected object. The methodology is based on deep learning with the TensorFlow Object Detection API and the MobileNetV2 SSD model, trained on a dataset of 3,707 ball images for 617,000 steps on Google Colaboratory. The average distance-measurement error for the ball object is 6.58% when viewed through the robot's front camera, while the omnidirectional camera is able to detect the ball and estimate its angle relative to the front of the robot. What distinguishes this work is the combined use of distance and angle measurements for detection and of an omnidirectional camera to maintain performance in dynamic environments. The research thereby contributes to improving AI-based object detection for autonomous robots in real-world use cases.
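The abstract describes detection with the TensorFlow Object Detection API's MobileNetV2 SSD model followed by distance and angle estimation. The sketch below illustrates one way such a pipeline could look; it is not the authors' implementation. It assumes the trained model has been exported as a TensorFlow SavedModel, and it uses a pinhole-camera approximation for distance and bearing from the front camera. The model path, focal length, and ball diameter are illustrative placeholders, not values from the paper.

```python
# Minimal sketch (assumptions): MobileNetV2 SSD exported as a SavedModel via the
# TensorFlow Object Detection API; front camera at index 0; pinhole-camera
# approximation for distance and bearing. All constants below are placeholders.
import math

import cv2
import numpy as np
import tensorflow as tf

MODEL_DIR = "exported_model/saved_model"   # hypothetical export path
BALL_DIAMETER_M = 0.22                     # assumed ball diameter (metres)
FOCAL_LENGTH_PX = 700.0                    # assumed front-camera focal length (pixels)

detect_fn = tf.saved_model.load(MODEL_DIR)

def detect_ball(frame_bgr, score_threshold=0.5):
    """Run SSD MobileNetV2 on one frame; return the best ball box or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    input_tensor = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)
    scores = detections["detection_scores"][0].numpy()
    boxes = detections["detection_boxes"][0].numpy()  # [ymin, xmin, ymax, xmax], normalized
    best = int(np.argmax(scores))
    if scores[best] < score_threshold:
        return None
    return boxes[best]

def estimate_distance_and_angle(box, frame_w):
    """Pinhole approximation: distance from apparent ball width, bearing from box centre."""
    ymin, xmin, ymax, xmax = box
    width_px = (xmax - xmin) * frame_w
    cx_px = (xmin + xmax) / 2.0 * frame_w
    distance_m = BALL_DIAMETER_M * FOCAL_LENGTH_PX / max(width_px, 1e-6)
    angle_deg = math.degrees(math.atan((cx_px - frame_w / 2.0) / FOCAL_LENGTH_PX))
    return distance_m, angle_deg

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        box = detect_ball(frame)
        if box is not None:
            dist, ang = estimate_distance_and_angle(box, frame.shape[1])
            print(f"ball at ~{dist:.2f} m, bearing {ang:+.1f} deg")
    cap.release()
```

For the omnidirectional camera mentioned in the abstract, the bearing would more likely be taken from the detection's position relative to the mirror centre (e.g. with atan2) rather than the pinhole formula above, which applies only to the forward-facing camera.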