Recently, fashion compatibility modeling, which scores how well several complementary fashion items match, has gained increasing research attention. Previous studies have primarily learned the features of individual fashion items and used their interactions to estimate fashion compatibility. However, the try-on appearance of an outfit reveals its compatibility in a holistic manner, with items spatially arranged and partially covered by one another. Inspired by this, we design a try-on-enhanced fashion compatibility modeling framework, named TryonCM2, which incorporates the try-on appearance with the item interaction to enhance fashion compatibility modeling. Specifically, we treat each outfit as a sequence of items and adopt a bidirectional long short-term memory (LSTM) network to capture the latent interactions among fashion items. Meanwhile, we synthesize a try-on template image to depict the try-on appearance of the outfit. We then regard the outfit as a sequence of image stripes, i.e., local contents, of the try-on template, and adopt a second bidirectional LSTM network to capture the contextual structure of the try-on appearance. Ultimately, we combine the compatibility signals from the item interaction and the try-on appearance into the final compatibility score of the outfit. Both objective and subjective experiments on the existing FOTOS dataset demonstrate the superiority of our framework over state-of-the-art methods.
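Since the abstract gives no implementation details, the following is a minimal PyTorch sketch of the two-branch design it describes: one BiLSTM over per-item features for item interaction, and one BiLSTM over horizontal stripes of the synthesized try-on template for contextual structure. The module names (StripeEncoder, TryonCM2Sketch), feature dimensions, number of stripes, mean pooling, and the additive fusion are all illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class StripeEncoder(nn.Module):
    """Toy encoder mapping one horizontal image stripe to a feature
    vector; a real system would likely use a pretrained CNN backbone."""

    def __init__(self, out_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):                        # x: (B, 3, h, W)
        return self.fc(self.conv(x).flatten(1))  # -> (B, out_dim)


class TryonCM2Sketch(nn.Module):
    """Hypothetical two-branch compatibility scorer in the spirit of
    TryonCM2; every dimension and the fusion rule are assumptions."""

    def __init__(self, item_dim=512, stripe_dim=512, hidden=256, n_stripes=8):
        super().__init__()
        self.n_stripes = n_stripes
        # Branch 1: latent interaction among the outfit's items.
        self.item_lstm = nn.LSTM(item_dim, hidden, batch_first=True,
                                 bidirectional=True)
        # Branch 2: contextual structure of the try-on template, read
        # as a top-to-bottom sequence of image stripes (local content).
        self.stripe_enc = StripeEncoder(stripe_dim)
        self.stripe_lstm = nn.LSTM(stripe_dim, hidden, batch_first=True,
                                   bidirectional=True)
        self.item_score = nn.Linear(2 * hidden, 1)
        self.tryon_score = nn.Linear(2 * hidden, 1)

    def forward(self, item_feats, tryon_img):
        # item_feats: (B, n_items, item_dim) pre-extracted item features
        # tryon_img:  (B, 3, H, W) synthesized try-on template image
        out, _ = self.item_lstm(item_feats)
        s_item = self.item_score(out.mean(dim=1))      # pool over items

        # Cut the template into n_stripes horizontal bands, encode each,
        # and run the BiLSTM over the resulting stripe sequence.
        stripes = tryon_img.chunk(self.n_stripes, dim=2)  # split along H
        seq = torch.stack([self.stripe_enc(s) for s in stripes], dim=1)
        out2, _ = self.stripe_lstm(seq)
        s_tryon = self.tryon_score(out2.mean(dim=1))   # pool over stripes

        # Final compatibility: sum of the two branch scores (assumed fusion).
        return s_item + s_tryon


# Usage: score two 4-item outfits with 256x192 try-on templates.
model = TryonCM2Sketch()
items = torch.randn(2, 4, 512)           # (batch, n_items, item_dim)
template = torch.randn(2, 3, 256, 192)   # synthesized try-on images
print(model(items, template).shape)      # torch.Size([2, 1])
```

Reading the template as a stripe sequence lets the second BiLSTM model how vertically adjacent regions of the try-on look relate, which is the contextual structure the abstract refers to; the choice of eight stripes here is arbitrary.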