The scheduling of the operational time of household appliances requires several parameters to be tuned according to the energy available to a smart home. However, scheduling the operational time of multiple appliances in a smart home is itself an NP-hard problem and thus requires an intelligent heuristic method to be solved in polynomial time. In this research work, we propose Real-time Scheduling of Operational Time of Household Appliances based on the well-known value-iterative reinforcement learning method, Q-learning (RSOTHA-QL). The proposed RSOTHA-QL scheme operates in two phases. In the first phase, the Q-learning agents act by interacting with the smart home environment and obtain a reward. The reward value is then used to schedule the operational time of the household appliances in the next state, ensuring minimum energy consumption. In the second phase, the dissatisfaction that arises from scheduling the operational time of the household appliances is managed by categorizing the appliances into three groups: 1) deferrable, 2) non-deferrable, and 3) controllable. In addition, the agents attached to the individual appliances of the smart home are coordinated through a shared-memory synchronization mechanism. Simulations and experiments are performed in a smart home scenario comprising a single user and multiple appliances. Compared with our previous work using the Least Slack Time (LST) scheduling algorithm and a demand-response-based scheduling strategy, the results reveal that the proposed scheme schedules the operational time of the household appliances efficiently, significantly reducing both the energy consumption and the dissatisfaction level of the home user.
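As a concrete illustration of the first phase, the sketch below shows a tabular Q-learning update for choosing an appliance's start slot. It is a minimal sketch under assumed encodings: the hourly time slots, the epsilon-greedy policy, the reward signature, and all hyperparameter values are illustrative assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Minimal sketch of a tabular Q-learning scheduler for one appliance.
# State/action encoding, reward, and hyperparameters are assumptions
# for illustration, not the RSOTHA-QL paper's exact design.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
TIME_SLOTS = range(24)                   # candidate hourly start slots (assumed)

Q = defaultdict(float)                   # Q[(state, action)] -> value estimate

def choose_slot(state):
    """Epsilon-greedy choice of an operational time slot."""
    if random.random() < EPSILON:
        return random.choice(list(TIME_SLOTS))
    return max(TIME_SLOTS, key=lambda a: Q[(state, a)])

def reward(price, energy_kwh, dissatisfaction):
    """Hypothetical reward: penalize energy cost and user dissatisfaction."""
    return -(price * energy_kwh + dissatisfaction)

def update(state, action, r, next_state):
    """Standard Q-learning value-iteration update."""
    best_next = max(Q[(next_state, a)] for a in TIME_SLOTS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```

In a setup like this, the reward combines the energy cost of running the appliance in the chosen slot with a user-dissatisfaction penalty, so maximizing the return drives both quantities down, consistent with the objectives stated in the abstract.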