With the rapid development of intelligent transportation systems, achieving efficient and accurate multimodal traffic data transmission and collaborative processing in complex network environments with bandwidth limitations, signal interference, and high concurrency has become a key challenge. This paper proposes SMART, a Self-supervised Multi-modal and Reinforcement-learning-based Traffic data semantic collaboration Transmission mechanism, which combines self-supervised learning and reinforcement learning to optimize the transmission efficiency and robustness of multimodal data. The sending end employs a self-supervised conditional variational autoencoder together with a Transformer-DRL-based dynamic semantic compression strategy to intelligently filter and transmit the most essential semantic information from video, radar, and LiDAR data. The receiving end combines Transformers and graph neural networks for deep decoding and feature fusion of the multimodal data, and uses a reinforcement-learning-driven self-supervised multi-task optimization engine to jointly improve multiple downstream tasks (such as traffic accident detection and vehicle behavior recognition). Experimental results show that SMART significantly outperforms traditional methods under low signal-to-noise ratio, high packet loss rate, and large-scale concurrency, excelling on key metrics such as semantic similarity, transmission efficiency, robustness, and end-to-end latency, demonstrating its effectiveness and novelty in smart transportation scenarios.
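To make the sending-end idea concrete, the following is a minimal, hypothetical sketch of dynamic semantic compression: the learned Transformer-DRL policy described in the abstract is replaced here by a simple hand-written rule that maps channel conditions (SNR and packet loss) to a budget, then keeps only the highest-importance semantic features. All names (`keep_ratio`, `compress`, the example features) are illustrative assumptions, not the paper's actual implementation.

```python
def keep_ratio(snr_db: float, loss_rate: float) -> float:
    """Map channel conditions to the fraction of semantic features to send.
    Stand-in for the Transformer-DRL policy: better channels keep more."""
    # Scale linearly from 0.2 (at 0 dB) up to 0.8 (at 30 dB or above),
    # then shrink the budget further as packet loss rises.
    ratio = 0.2 + 0.6 * min(max(snr_db / 30.0, 0.0), 1.0)
    return max(0.1, ratio * (1.0 - loss_rate))

def compress(features, snr_db: float, loss_rate: float):
    """Keep only the highest-importance semantic features under the budget.
    `features` is a list of (name, importance) pairs from any modality."""
    k = max(1, round(keep_ratio(snr_db, loss_rate) * len(features)))
    return sorted(features, key=lambda f: f[1], reverse=True)[:k]

# Hypothetical multimodal semantic features with importance scores.
feats = [("video:pedestrian", 0.9), ("lidar:obstacle", 0.8),
         ("radar:velocity", 0.6), ("video:background", 0.1)]

# Poor channel (5 dB SNR, 20% loss): only the most critical feature survives.
print(compress(feats, snr_db=5.0, loss_rate=0.2))
# Clean channel (30 dB SNR, no loss): most features are transmitted.
print(compress(feats, snr_db=30.0, loss_rate=0.0))
```

The design choice mirrored here is that compression is channel-adaptive rather than fixed: under degraded conditions the sender sacrifices low-importance features first, which is what lets semantic similarity degrade gracefully instead of abruptly.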