We consider a communication cell composed of Internet-of-Things (IoT) nodes transmitting to a common Access Point (AP). The nodes in the cell are assumed to generate data samples periodically, which are to be transmitted to the AP. The AP hosts a machine learning model, such as a neural network, which is trained on the received data samples to make accurate inferences. We address the following tradeoff: the more often the IoT nodes transmit, the higher the accuracy of the inference made by the AP, but also the higher the energy expenditure at the IoT nodes. We propose distributed importance filtering, a scheme employed by the IoT nodes to filter out redundant data and reduce the number of irrelevant transmissions. The IoT nodes do not host large on-device machine learning models, and the data-filtering scheme operates under periodic instructions from the model placed at the AP. The proposed scheme is evaluated using neural networks on a benchmark machine vision dataset, as well as in two practical scenarios: leakage detection in water distribution networks and air-pollution detection in urban areas. The results show that the proposed scheme offers significant benefits in terms of network longevity, as it preserves the devices' resources whilst maintaining high inference accuracy. Our approach reduces the computational complexity of training the model and obviates the need for data pre-processing, which makes it highly applicable in practical IoT scenarios.
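The abstract does not specify the exact importance criterion the nodes use, so the sketch below is only illustrative: it assumes the AP periodically broadcasts a small set of prototype vectors summarising data it has already seen, and each node transmits a sample only if it lies far from every prototype. The names `importance_score`, `filter_samples`, `prototypes`, and the threshold value are all assumptions introduced for this example, not part of the paper's method.

```python
import numpy as np

def importance_score(sample, prototypes):
    """Score a sample by its distance to the nearest AP-provided prototype.
    Samples far from all prototypes are treated as more informative
    (a hypothetical criterion; the paper's actual rule may differ)."""
    dists = np.linalg.norm(prototypes - sample, axis=1)
    return float(dists.min())

def filter_samples(samples, prototypes, threshold):
    """Keep only samples whose importance exceeds the threshold;
    redundant samples are dropped locally to save transmission energy."""
    return [s for s in samples if importance_score(s, prototypes) > threshold]

# Prototypes the AP might broadcast as its periodic "instructions".
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
samples = [np.array([0.1, 0.1]),   # close to a prototype: likely redundant
           np.array([3.0, 3.0])]   # far from all prototypes: likely novel
kept = filter_samples(samples, prototypes, threshold=0.5)
```

Here the node-side computation is a handful of vector distances, consistent with the claim that nodes need no large on-device model; only the prototype set (refreshed by the AP) and one threshold are stored locally.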