Reinforcement Learning-based Collision Avoidance for UAV
Abstract
A key aspect of enabling intelligent behavior in Unmanned Aerial Vehicles (UAVs) is providing an algorithm for navigation through dynamic, unseen environments. To be autonomous, UAVs need sensors to perceive their surroundings and must use the gathered information to decide which action to take. With that in mind, in this paper the authors design a system for obstacle avoidance and investigate the elements of the Markov decision process and their influence on each other. The flying mobile robot considered is of quadrotor type and has an integrated Lidar sensor that is used to detect obstacles. The sequential decision-making model, based on Q-learning, is trained within the MATLAB Simulink environment. The simulation results demonstrate that the UAV can navigate through the environment without colliding with surrounding obstacles in most algorithm runs.
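The tabular Q-learning approach the abstract describes can be sketched on a small grid world. Everything below is an illustrative assumption, not the paper's actual setup: the grid discretization, the reward values (collision penalty, goal reward, step cost), and the hyperparameters are placeholders standing in for the Lidar-based state and the Simulink training loop.

```python
import random

random.seed(0)

# Hypothetical grid world: 0 = free cell, 1 = obstacle. This stands in for
# the Lidar-derived obstacle map; values and layout are illustrative only.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def train(grid, start, goal, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning: update Q[s][a] toward the TD target r + gamma*max Q[s']."""
    rows, cols = len(grid), len(grid[0])
    Q = {(r, c): [0.0] * len(ACTIONS) for r in range(rows) for c in range(cols)}
    for _ in range(episodes):
        s = start
        for _ in range(100):                      # cap episode length
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
            nr, nc = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
                r, s2, done = -10.0, s, False     # collision: penalty, stay put
            elif (nr, nc) == goal:
                r, s2, done = 10.0, (nr, nc), True
            else:
                r, s2, done = -0.1, (nr, nc), False  # small step cost
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q

def greedy_path(Q, grid, start, goal):
    """Follow the learned policy, restricted to in-bounds, obstacle-free moves."""
    s, path = start, [start]
    while s != goal and len(path) < 50:
        valid = [i for i in range(len(ACTIONS))
                 if 0 <= s[0] + ACTIONS[i][0] < len(grid)
                 and 0 <= s[1] + ACTIONS[i][1] < len(grid[0])
                 and grid[s[0] + ACTIONS[i][0]][s[1] + ACTIONS[i][1]] == 0]
        a = max(valid, key=lambda i: Q[s][i])
        s = (s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
        path.append(s)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 0, 0]]
Q = train(grid, start=(0, 0), goal=(3, 3))
path = greedy_path(Q, grid, (0, 0), (3, 3))
print(path)
```

The learned greedy policy steers around the obstacle cells to reach the goal, mirroring the paper's finding that the trained agent avoids collisions in most runs; in the real system the discrete grid state would be replaced by the Lidar observation and the dynamics by the quadrotor model in Simulink.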
Keywords:
Unmanned Aerial Vehicles (UAVs) / Collision avoidance / Reinforcement learning / Q-learning / Simulation / MATLAB Simulink environment / Autonomous localization and navigation / Markov decision process / Flying mobile robot / Sequential decision-making model
Source:
Proceedings of the 10th International Conference on Electrical, Electronics and Computing Engineering (IcETRAN 2023), 2023, 5496, 1-6
Publisher:
- ETRAN Society, The Society for Electronics, Telecommunications, Computing, Automatics and Nuclear engineering supported by IEEE
Funding / projects:
- Ministry of Science, Technological Development and Innovation of the Republic of Serbia, institutional funding - 200105 (University of Belgrade, Faculty of Mechanical Engineering) (RS-MESTD-inst-2020-200105)
Collections
Institution/Community
Mašinski fakultet
Jevtić, Đorđe, Miljković, Zoran, Petrović, Milica, and Jokić, Aleksandar, "Reinforcement Learning-based Collision Avoidance for UAV," in Proceedings of the 10th International Conference on Electrical, Electronics and Computing Engineering (IcETRAN 2023), no. 5496 (2023): 1-6, https://hdl.handle.net/21.15107/rcub_machinery_6896.