Reinforcement Learning-Based Traffic Signal Control: A Performance Comparison Under Different Traffic Scenarios
Traffic congestion, which grows in megacities alongside population growth and urbanization, has become one of the most pressing problems in modern urban transportation. In this context, the effectiveness of control strategies at signalized intersections is increasingly important. This study presents a comparative evaluation of different traffic signal control strategies at a single intersection using SUMO (Simulation of Urban MObility), a microscopic traffic simulator. The main objective is to demonstrate the effectiveness of the reinforcement learning (RL) approach to adaptive traffic signal control, particularly in comparison with traditional and rule-based methods. To this end, four control methods were implemented and tested: fixed-time control, SUMO's built-in vehicle-actuated control, a fuzzy logic controller, and an RL-based controller. The RL model was trained for different numbers of episodes (50, 150, 300, and 500) to examine how performance changes over the course of training. Simulations were conducted under two demand scenarios: low traffic density (3196 vehicles) and high traffic density (6748 vehicles). Average waiting time and average travel time were used to evaluate system performance. The results show that the RL-based method exhibits limited performance under low traffic density but provides a significant advantage over the other control methods under high traffic density.
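The abstract does not specify the RL formulation, so the following is only a minimal sketch of what such a SUMO-based controller can look like: a tabular Q-learning agent driven through SUMO's TraCI Python API. The scenario file name ("intersection.sumocfg"), traffic light id ("tl0"), green-phase indices, queue-based reward, and all hyperparameters are illustrative assumptions, not details taken from the study.

```python
# Hypothetical tabular Q-learning signal controller over SUMO/TraCI.
# Assumes SUMO is installed and a scenario file exists; names below are
# illustrative assumptions, not values from the paper.
import random
from collections import defaultdict

import traci

CONFIG = "intersection.sumocfg"   # assumed SUMO scenario file
TLS_ID = "tl0"                    # assumed traffic light id
GREEN_PHASES = [0, 2]             # assumed indices of the two green phases
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
DECISION_INTERVAL = 10            # simulation steps between control decisions

q_table = defaultdict(float)      # (state, action) -> estimated value

def halting_count():
    """Total halting vehicles on the lanes controlled by the signal."""
    lanes = set(traci.trafficlight.getControlledLanes(TLS_ID))
    return sum(traci.lane.getLastStepHaltingNumber(l) for l in lanes)

def get_state():
    """Discretize queue length and pair it with the current phase."""
    return min(halting_count() // 5, 5), traci.trafficlight.getPhase(TLS_ID)

def choose_action(state):
    """Epsilon-greedy choice among the available green phases."""
    if random.random() < EPSILON:
        return random.randrange(len(GREEN_PHASES))
    return max(range(len(GREEN_PHASES)), key=lambda a: q_table[(state, a)])

def run_episode():
    traci.start(["sumo", "-c", CONFIG])
    state, total_wait = get_state(), 0
    while traci.simulation.getMinExpectedNumber() > 0:
        action = choose_action(state)
        traci.trafficlight.setPhase(TLS_ID, GREEN_PHASES[action])
        for _ in range(DECISION_INTERVAL):
            traci.simulationStep()
        # Reward: negative queue length after holding the chosen phase.
        wait = halting_count()
        reward, next_state = -wait, get_state()
        best_next = max(q_table[(next_state, a)]
                        for a in range(len(GREEN_PHASES)))
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)])
        state, total_wait = next_state, total_wait + wait
    traci.close()
    return total_wait

for episode in range(50):  # e.g. the paper's shortest training run
    print(f"episode {episode}: cumulative halting count {run_episode()}")
```

A realistic controller would insert yellow and all-red transition phases before switching and would use a richer state representation; the sketch only illustrates the sense in which an RL agent adapts green time to observed queues, which is what the comparison against fixed-time, actuated, and fuzzy logic control evaluates.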