The fundamental challenge for a randomly deployed, resource-constrained wireless sensor network is to extend the network lifetime without compromising performance metrics such as coverage rate and network connectivity. One approach is to schedule the activities of the sensor nodes and form scheduling rounds autonomously, so that every spatial point is covered by at least one sensor node and at least one communication path exists from the sensor nodes to the base station. This autonomous activity scheduling can be done efficiently with Reinforcement Learning (RL), a machine learning technique that requires no prior model of the environment. In this paper, a Nash Q-Learning based node scheduling algorithm for coverage and connectivity maintenance (CCM-RL) is proposed, in which each node autonomously learns its optimal action (active, hibernate, sleep, or customize its sensing range) to maximize the coverage rate and maintain network connectivity. The learning algorithm resides inside each sensor node. Its main objective is to enable the sensor nodes to learn their optimal actions so that the number of activated nodes in each scheduling round is minimized while the coverage-rate and network-connectivity criteria are preserved. A comparison of CCM-RL with other protocols demonstrates its accuracy and reliability; simulations show that CCM-RL performs better in terms of the average number of active sensor nodes per scheduling round, coverage rate, and energy consumption.
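The abstract does not specify the state representation, reward function, or the details of the Nash Q-Learning update used in CCM-RL, so the following is only a minimal illustrative sketch. It simplifies to independent, tabular Q-learning per node (Nash Q-Learning additionally reasons over the joint actions of neighboring nodes), and the four entries in `ACTIONS` mirror the actions named above; the state encoding, reward values, and hyperparameters are assumptions for illustration, not the paper's specification.

```python
import random
from collections import defaultdict

# The four per-node actions named in the abstract.
ACTIONS = ["active", "hibernate", "sleep", "customize_range"]

class NodeScheduler:
    """Illustrative per-node scheduler using independent tabular Q-learning.

    NOTE: CCM-RL itself uses Nash Q-Learning (a multi-agent formulation);
    this sketch omits the game-theoretic component for brevity.
    """

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha = alpha        # learning rate (assumed value)
        self.gamma = gamma        # discount factor (assumed value)
        self.epsilon = epsilon    # exploration probability (assumed value)
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value

    def choose_action(self, state):
        # Epsilon-greedy selection over the four node actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

if __name__ == "__main__":
    # Hypothetical usage: the state tuple and reward below are placeholders.
    # A CCM-RL-style reward would credit coverage and connectivity gains
    # and penalize the energy cost of staying active.
    agent = NodeScheduler()
    state = ("redundantly_covered", "battery_high")   # illustrative state
    action = agent.choose_action(state)
    reward = 1.0 if action in ("sleep", "hibernate") else 0.2  # placeholder
    agent.update(state, action, reward, next_state=state)
    print(action, agent.q[(state, action)])
```

The design intent this sketch tries to capture is that the learning logic lives entirely inside each node: each node keeps its own Q-table and, round by round, learns when it can sleep or shrink its sensing range because neighbors already provide coverage and connectivity.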