Advanced machine learning techniques have recently demonstrated outstanding performance when applied to power quality disturbance (PQD) classification. Nevertheless, power experts may find it hard to trust the results of such algorithms if they do not fully understand the reasons for their outputs. In light of this, this article proposes a method that explains the outputs of PQD classifiers using explainable artificial intelligence (XAI). The method operates as follows: first, combinations of XAI techniques and classifiers are scored based on the quality of their explanations during the validation step. Then, the best combination of classifier and XAI technique for each disturbance is applied to the testing set, making the classifier outputs more transparent. To accomplish these steps, a definition of a correct explanation in PQD is given. In addition, to determine the quality of an explanation for a given output, we propose an evaluation process that measures an explainability score for each combination of XAI technique and classifier. With this approach, PQD classifier outputs are both accurate and easy to understand, allowing experts to make informed and trustworthy decisions.
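The selection step the abstract describes, scoring every classifier and XAI pairing on validation data and keeping the best pair per disturbance, can be sketched as below. This is a minimal illustration, not the authors' implementation: the classifier names, XAI technique names, disturbance labels, and the `explainability_score` callable are all hypothetical stand-ins for whatever the article's evaluation process actually computes.

```python
# Hedged sketch of per-disturbance (classifier, XAI technique) selection.
# All identifiers below (cnn, svm, shap, lime, sag, swell) are illustrative
# assumptions, not names taken from the article.

from itertools import product


def select_best_pairs(classifiers, xai_techniques, disturbances, explainability_score):
    """Return {disturbance: (classifier, xai)} maximizing the validation-set score."""
    best = {}
    for d in disturbances:
        # Score every combination on the validation data for this disturbance.
        scored = [
            ((clf, xai), explainability_score(clf, xai, d))
            for clf, xai in product(classifiers, xai_techniques)
        ]
        # Keep the pair with the highest explainability score.
        best[d] = max(scored, key=lambda pair_score: pair_score[1])[0]
    return best


# Toy usage with made-up validation scores.
scores = {
    ("cnn", "shap", "sag"): 0.9, ("cnn", "lime", "sag"): 0.7,
    ("svm", "shap", "sag"): 0.6, ("svm", "lime", "sag"): 0.8,
    ("cnn", "shap", "swell"): 0.5, ("cnn", "lime", "swell"): 0.6,
    ("svm", "shap", "swell"): 0.9, ("svm", "lime", "swell"): 0.4,
}
best = select_best_pairs(
    ["cnn", "svm"], ["shap", "lime"], ["sag", "swell"],
    lambda c, x, d: scores[(c, x, d)],
)
# best maps each disturbance to its highest-scoring (classifier, xai) pair.
```

At test time, the pair stored in `best` for each disturbance would then be the one used to produce both the prediction and its explanation.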