
Article

COMPARISON OF TRADITIONAL AND REINFORCEMENT LEARNING BASED ROUTING PROTOCOLS IN VANET SCENARIO

DOI: 10.7708/ijtte2023.13(1).10


Volume 13, Issue 1, Pages 125-137

Author(s)

Nenad Jevtić - University of Belgrade, Faculty of Transport and Traffic Engineering, Vojvode Stepe 305, 11000 Belgrade, Serbia

Pavle Bugarčić - University of Belgrade, Faculty of Transport and Traffic Engineering, Vojvode Stepe 305, 11000 Belgrade, Serbia


Abstract

Vehicular ad hoc networks (VANETs) are characterized by high node mobility and frequent changes in network topology, which significantly complicate the routing of data packets. It has been shown that traditional routing protocols cannot promptly follow these changes and therefore cannot be used efficiently in VANETs for vehicle-to-vehicle (V2V) communications. For this reason, protocols based on reinforcement learning (RL) have been developed; they continuously monitor changes in the network environment and adapt the routing process to those changes. In this paper, traditional and RL-based routing protocols are analyzed and compared in a VANET scenario. The Ad hoc On-Demand Distance Vector (AODV) routing protocol and AODV with the Expected Transmission Count (ETX) metric are chosen as representatives of traditional routing protocols, while the Adaptive Routing Protocol based on Reinforcement Learning (ARPRL) is chosen as the representative of RL-based routing protocols. The simulation results show that ARPRL achieves significantly better network performance in terms of packet loss ratio (PLR) and end-to-end delay (E2ED) in an urban VANET scenario.
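The two ideas the abstract contrasts can be sketched briefly: the ETX link metric used by the traditional protocols, and an epsilon-greedy Q-learning next-hop choice in the spirit of RL-based routing. This is a minimal illustrative model, not the actual ARPRL implementation; all class names, parameter values, and the reward-feedback mechanism are assumptions.

```python
import random

ALPHA = 0.5    # learning rate (assumed value)
GAMMA = 0.9    # discount factor (assumed value)
EPSILON = 0.1  # exploration probability (assumed value)

def etx(d_f: float, d_r: float) -> float:
    """Expected Transmission Count: expected number of transmissions
    (including retransmissions) to deliver a packet over a link, given
    the forward (d_f) and reverse (d_r) packet delivery ratios."""
    return 1.0 / (d_f * d_r)

class QRouter:
    """Per-destination Q-table: q[neighbor] estimates how good it is to
    forward a packet toward the destination via that neighbor."""

    def __init__(self, neighbors):
        self.q = {n: 0.0 for n in neighbors}

    def choose_next_hop(self):
        # Epsilon-greedy: mostly exploit the best-known neighbor, but
        # occasionally explore so the table can track topology changes.
        if random.random() < EPSILON:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, neighbor, reward, neighbor_best_q):
        # Standard Q-learning update; in an RL routing protocol the
        # reward and the neighbor's best Q-value would typically be
        # carried back in hello/ACK packets.
        self.q[neighbor] += ALPHA * (
            reward + GAMMA * neighbor_best_q - self.q[neighbor]
        )
```

For example, a link with a 50% forward delivery ratio and a perfect reverse link has an ETX of 2 (two expected transmissions per packet), while the Q-table is refreshed on every feedback event, which is what lets an RL-based protocol adapt to topology changes that a static route computed by AODV would miss.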




References:

Ardianto, B. et al. 2022. Performance Comparison of AODV, AODV-ETX and Modified AODV-ETX in VANET Using NS-3. In Proceedings of the IEEE International Conference on Cybernetics and Computational Intelligence (CyberneticsCom), 156-161.

Bi, X. et al. 2020. A reinforcement learning-based routing protocol for clustered EV-VANET. In Proceedings of the 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), 1769-1773.

Bugarčić, P. et al. 2019. Modifications of AODV protocol for VANETs: performance analysis in NS-3 simulator. In Proceedings of the 27th Telecommunications Forum (TELFOR), 1-4.

Bugarčić, P. et al. 2022. Reinforcement learning-based routing protocols in vehicular and flying ad hoc networks - a literature survey, Promet 34(6): 893-906.

De Couto, D.S.J. et al. 2005. A high-throughput path metric for multi-hop wireless routing, Wireless Networks 11(4): 419-434.

Jafarzadeh, O. et al. 2020. A novel protocol for routing in vehicular ad hoc network based on model-based reinforcement learning and fuzzy logic, International Journal of Information and Communication Technology Research 12(4): 10-25.

Jevtić, N.; Malnar, M. 2019. Novel ETX-Based Metrics for Overhead Reduction in Dynamic Ad Hoc Networks, IEEE Access 7: 116490-116504.

Malnar, M.; Jevtić, N. 2022. An improvement of AODV protocol for the overhead reduction in scalable dynamic wireless ad hoc networks, Wireless Networks 28(3): 1039-1051.

Mnih, V. et al. 2015. Human-level control through deep reinforcement learning, Nature 518(7540): 529-533.

Mubarek, F.S. et al. 2018. Urban-AODV: an improved AODV protocol for vehicular ad-hoc networks in urban environment, International Journal of Engineering & Technology 7(4): 3030-3036.

Perkins, C.; Belding-Royer, E.; Das, S. 2003. Ad hoc On-Demand Distance Vector (AODV) Routing. RFC 3561, IETF.

Saravanan, M.; Ganeshkumar, P. 2020. Routing using reinforcement learning in vehicular ad hoc networks, Computational Intelligence 36(2): 682-697.

Sutton, R.; Barto, A. 2018. Reinforcement Learning: An Introduction, second edition. MIT Press, USA. 526 p.

Wang, Z. et al. 2016. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 1995-2003.

Wu, J. et al. 2018. Reinforcement Learning Based Mobility Adaptive Routing for Vehicular Ad-Hoc Networks, Wireless Personal Communications 101: 2143-2171.

Zhang, D. et al. 2018. A deep reinforcement learning-based trust management scheme for software-defined vehicular networks. In Proceedings of the 8th ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications (DIVANet), 1-7.

