We consider a network, tasked with solving a binary distributed detection problem, consisting of N sensors, a fusion center (FC), and a feedback channel from the FC to the sensors. Each sensor is capable of harvesting energy and is equipped with a finite-size battery to store the randomly arriving energy. Sensors process their observations and transmit their symbols to the FC over orthogonal fading channels. The FC fuses the received symbols and makes a global binary decision. We aim to develop adaptive, channel-dependent transmit power control policies such that the J-divergence-based detection metric is maximized at the FC, subject to a total transmit power constraint. Modeling the quantized fading channels, the energy arrivals, and the battery dynamics as time-homogeneous finite-state Markov chains, and the network lifetime as a geometric random variable, we formulate our power control optimization problem as a discounted infinite-horizon constrained Markov decision process (MDP), where the sensors' transmit powers are functions of the battery states, quantized channel gains, and arrived energies. We utilize stochastic dynamic programming and a Lagrangian approach to find the optimal and sub-optimal power control policies. We demonstrate that our sub-optimal policy provides close-to-optimal performance with reduced computational complexity and without imposing signaling overhead on the sensors.
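To make the solution approach concrete, the sketch below illustrates (in a hedged, toy setting) how a Lagrangian relaxation turns a constrained discounted MDP into an unconstrained one that standard value iteration can solve. It is not the paper's implementation: the state enumeration, transition probabilities, per-stage detection reward, power levels, and the multiplier value are all illustrative placeholders; in the paper the state is the per-sensor (battery, quantized channel, arrived energy) tuple and the reward is tied to the J-divergence at the FC.

```python
import numpy as np

# Hypothetical toy sizes; all names and numbers here are illustrative, not from the paper.
N_STATES = 12        # joint (battery, quantized channel, energy-arrival) states, enumerated
N_ACTIONS = 4        # quantized transmit power levels, 0 = stay silent
GAMMA = 0.95         # discount factor (models the geometric network lifetime)
LAMBDA = 0.3         # Lagrange multiplier for the total transmit power constraint

rng = np.random.default_rng(0)

# Placeholder MDP model: transition probabilities P[s, a, s'] and per-stage
# detection reward r[s, a] (e.g., a per-slot J-divergence contribution).
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
r = rng.random((N_STATES, N_ACTIONS))
power = np.linspace(0.0, 1.0, N_ACTIONS)        # power cost of each action

def lagrangian_value_iteration(lam, tol=1e-8, max_iter=10_000):
    """Solve the unconstrained MDP with per-stage reward r(s, a) - lam * power(a)."""
    V = np.zeros(N_STATES)
    for _ in range(max_iter):
        Q = r - lam * power + GAMMA * P @ V     # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V, Q.argmax(axis=1)                  # value function and greedy power policy

V, policy = lagrangian_value_iteration(LAMBDA)
print("power level chosen in each state:", policy)
```

In a full treatment, the multiplier would not be fixed: it would be adjusted (for example, by bisection) until the discounted expected transmit power of the resulting policy meets the total power constraint, which is the role the Lagrangian approach plays in the constrained-MDP formulation described in the abstract.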
               