Automatic music recommendation is an open research problem that has seen much work in recent years. A common and successful approach is collaborative filtering, which has worked well in this domain. One major drawback of this method is that it suffers from a cold-start problem and requires a large amount of user-personalized information, making it ineffective for recommending new and unpopular songs as well as for serving new users. In this article, we report a hybrid methodology that uses the song's content information. We use MIDI (Musical Instrument Digital Interface) data, a compact, machine-readable digital representation of a song. We describe MSA-SRec (MIDI-Based Self-Attentive Sequential Music Recommendation), a latent-factor, self-attentive deep learning model that combines sequential listening information with the song's content information to generate recommendations. MIDI remains an under-explored source of content information for music recommendation. We show that combining MIDI content data with user and item latent vectors produces reasonable recommendations, and we demonstrate that using MIDI rather than other music metadata improves performance across various state-of-the-art recommendation models.
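As an illustration of the kind of architecture the abstract describes, the sketch below shows a self-attentive sequential recommender that fuses item latent vectors with MIDI-derived content features. The class name `MidiSelfAttentiveRec`, the layer sizes, and the additive fusion of MIDI features with item embeddings are assumptions for illustration only, not the paper's actual MSA-SRec implementation.

```python
import torch
import torch.nn as nn

class MidiSelfAttentiveRec(nn.Module):
    """Minimal sketch (hypothetical, not the paper's MSA-SRec): item-ID
    embeddings are fused with MIDI-derived content features, then a
    Transformer encoder self-attends over the user's listening sequence.
    The final hidden state scores candidate songs."""

    def __init__(self, num_items, midi_feat_dim, d_model=64,
                 n_heads=2, n_layers=2, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)  # 0 = padding
        self.midi_proj = nn.Linear(midi_feat_dim, d_model)  # project MIDI features into latent space
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, item_seq, midi_seq):
        # item_seq: (batch, seq_len) song IDs; midi_seq: (batch, seq_len, midi_feat_dim)
        positions = torch.arange(item_seq.size(1), device=item_seq.device)
        x = self.item_emb(item_seq) + self.midi_proj(midi_seq) + self.pos_emb(positions)
        pad_mask = item_seq.eq(0)  # ignore padded positions in attention
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        return h[:, -1, :]  # representation of the next-song slot

    def score(self, item_seq, midi_seq, cand_items, cand_midi):
        # Score candidate songs by dot product with the sequence representation.
        user_state = self.forward(item_seq, midi_seq)                  # (batch, d_model)
        cand = self.item_emb(cand_items) + self.midi_proj(cand_midi)   # (batch, n_cand, d_model)
        return torch.einsum("bd,bnd->bn", user_state, cand)

# Usage sketch with random data: 8 users, sequences of 20 songs, 32-dim MIDI features.
model = MidiSelfAttentiveRec(num_items=1000, midi_feat_dim=32)
seq = torch.randint(1, 1001, (8, 20))
midi = torch.randn(8, 20, 32)
scores = model.score(seq, midi, torch.randint(1, 1001, (8, 5)), torch.randn(8, 5, 32))
print(scores.shape)  # torch.Size([8, 5])
```

The additive fusion of ID and MIDI embeddings is one simple design choice; the same structure would accept concatenation followed by a projection, or a gated combination, without changing the rest of the model.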