The offline evaluation of recommender systems is typically based on accuracy metrics such as the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) for rating prediction, and Precision and Recall for measuring the quality of the top-N recommendations. However, results are difficult to reproduce, since experiments can be run with various libraries, and even within the same library there are many settings that, if not taken into consideration when replicating an experiment, can cause the results to vary. In this paper, we show that, within the same library, an explanation-based approach can assist in the reproducibility of experiments. Our proposed approach has been experimentally evaluated on a real dataset using a wide range of recommendation algorithms, ranging from collaborative filtering to more complex fuzzy recommendation approaches that can address the filter bubble problem, and the results show that it is both practical and effective.
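As a minimal sketch (not the paper's own implementation), the offline metrics named above can be computed as follows; the function names and list-based inputs are illustrative assumptions:

```python
import math

def mae(true_ratings, predicted_ratings):
    # Mean Absolute Error: average absolute deviation between paired ratings.
    return sum(abs(t - p) for t, p in zip(true_ratings, predicted_ratings)) / len(true_ratings)

def rmse(true_ratings, predicted_ratings):
    # Root Mean Squared Error: square root of the mean squared deviation.
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(true_ratings, predicted_ratings)) / len(true_ratings)
    )

def precision_recall_at_n(recommended, relevant, n):
    # Precision@N: fraction of the top-N recommendations that are relevant.
    # Recall@N: fraction of all relevant items that appear in the top-N.
    top_n = recommended[:n]
    hits = len(set(top_n) & set(relevant))
    return hits / n, hits / len(relevant)
```

Even these simple definitions illustrate the reproducibility issue the abstract raises: libraries differ in details such as how ties in the ranking are broken or how the relevant set is defined, so identical algorithms can yield different reported scores.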