Advancements in Artificial Intelligence, Machine Learning, and Deep Learning have paved the way for a wide range of real-time applications. One of the major applications of these advancements is the class of innovative systems driven by the Internet of Everything (IoE). The IoE environment relies heavily on the interconnection of an enormous number of sensors that collect and transmit data. The gathered data supports monitoring, decision-making, and automation of smart systems across multiple domains. These sensors operate on battery power, and battery life limits their operational efficiency. A mechanism for analyzing the Remaining Battery Life (RBL) therefore plays a major role in optimizing network performance, ensuring the reliability and availability of data throughout. This work proposes a novel framework that integrates pre-processing, standardization, an encoding scheme, and predictive modeling with two algorithms, RFRImpute and MetaStackD, to predict the RBL of sensors in any IoE device using a meta-learning-based deep ensemble approach that analyzes factors such as power consumption, environmental conditions, operational frequency, and workload patterns. Leveraging regression algorithms such as Random Forest, Gradient Boosting, Light Gradient Boosting, Categorical Boosting, and Extreme Gradient Boosting, we model the non-linear and temporal dynamics of sensor battery degradation, enabling proactive maintenance strategies, dynamic energy management, and resource allocation. Experimental results on the real-world Chicago Park District Beach water IoE dataset validate the effectiveness of the proposed approach, showing a 1.4% improvement in accuracy over a traditional voting ensemble and a 93.3% reduction in both training time and prediction time. The model size is reduced by 95.23% compared to the traditional voting ensemble.
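The stacking idea behind a meta-learning ensemble of regressors can be sketched with scikit-learn's `StackingRegressor`. This is a minimal illustration, not the paper's MetaStackD algorithm: the feature names, synthetic target, and choice of Ridge as the meta-learner are assumptions for demonstration, and only two of the five base regressors are included (LightGBM, CatBoost, and XGBoost could be added as extra estimators if those packages are installed).

```python
# Hedged sketch of a stacked (meta-learning) regression ensemble for
# predicting remaining battery life. All data here is synthetic.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor,
                              GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
# Hypothetical sensor features: power draw, temperature,
# sampling frequency, workload level (placeholders, not the real dataset)
X = rng.uniform(size=(n, 4))
# Synthetic non-linear battery-life target, for illustration only
y = (100 - 40 * X[:, 0] - 10 * X[:, 1] ** 2
     - 5 * X[:, 2] * X[:, 3] + rng.normal(0, 1, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base regressors feed out-of-fold predictions to a meta-learner,
# which learns how to combine them.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),  # assumed meta-learner for this sketch
)
stack.fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, stack.predict(X_te))
print(f"held-out MAE: {mae:.2f}")
```

Compared with a voting ensemble, which averages base predictions with fixed weights, the stacked meta-learner fits those combination weights from held-out predictions, which is the general mechanism the abstract's meta-learning-based deep ensemble builds on.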