The communication infrastructure is likely to be damaged after a major disaster occurs, which leads to further chaos in the disaster-stricken area. Modern rescue activities rely heavily on wireless communications for tasks such as safety status reporting, disrupted-area monitoring, evacuation instructions, and rescue coordination. Large amounts of data generated by victims, sensors, and responders must be delivered and processed quickly and reliably, even when the normal communication infrastructure is degraded or destroyed. To this end, reconstructing the post-disaster network by deploying a Movable and Deployable Resource Unit (MDRU) and relay units at the network edge is a very promising solution. However, optimal wireless access control in this heterogeneous, hastily formed network is extremely challenging, owing to the frequently varying environment and the lack of prior statistical information in post-disaster scenarios. In this paper, we propose a learning-based wireless access control approach for an edge-aided disaster response network. More specifically, we model the wireless access control procedure as a discrete-time single-agent Markov decision process and solve the problem using deep reinforcement learning. Extensive simulation results show that the proposed mechanism significantly outperforms the baseline schemes in terms of delay and packet drop rate.
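The abstract does not detail the MDP formulation, so the following is a minimal, hypothetical sketch of the kind of deep-reinforcement-learning access controller described: a DQN-style agent that observes per-link conditions and selects which access link (e.g., MDRU or edge relay) serves a packet, with a reward assumed to penalize delay. The state features, `N_CHANNELS`, and the class names `QNetwork`/`DQNAgent` are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a DQN-style access-control agent.
# State, action, and reward definitions are assumptions for illustration only.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_CHANNELS = 4               # assumed number of candidate access links
STATE_DIM = 2 * N_CHANNELS   # assumed features: per-link queue length and load


class QNetwork(nn.Module):
    """Maps an observed network state to Q-values over access actions."""

    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(state_dim, n_actions)
        self.target_q = QNetwork(state_dim, n_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)
        self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

    def act(self, state):
        # Epsilon-greedy choice of which link to transmit on.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())

    def store(self, s, a, r, s_next, done):
        self.replay.append((s, a, r, s_next, done))

    def learn(self, batch_size=64):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s, a, r, s_next, done = map(
            lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch))
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # Bellman target; reward could be, e.g., the negative queueing delay.
            target = r + self.gamma * (1 - done) * self.target_q(s_next).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target_q.load_state_dict(self.q.state_dict())
```

In use, a simulator of the post-disaster network would call `act` at each decision epoch, apply the chosen link, and feed the resulting delay/drop outcome back through `store` and `learn`; this loop is assumed here rather than taken from the paper.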