Image-based virtual try-on systems aim to transfer a try-on garment onto a target person. Despite considerable recent progress, such systems remain challenging for real-world applications because of occlusion and drastic spatial deformation. To address these issues, we propose a novel Flow-based Virtual Try-on Network (FVTN). It consists of three modules. First, the Parsing Alignment Module (PAM) aligns the source clothing to the target person at the semantic level by predicting a semantic parsing map. Second, the Flow Estimation Module (FEM) learns a robust clothing deformation model by estimating multi-scale dense flow fields in an unsupervised fashion. Third, the Fusion and Rendering Module (FRM) synthesizes the final try-on image by effectively integrating the warped clothing features and human body features. Extensive experiments on a public fashion dataset demonstrate that our FVTN qualitatively and quantitatively outperforms state-of-the-art approaches. The source code and trained models are available at https://github.com/gxl-groups/GFGN.
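To make the three-stage pipeline concrete, the minimal PyTorch-style sketch below shows how the PAM, FEM, and FRM stages described in the abstract could be chained: the parsing map conditions a dense flow field, the flow warps the garment via grid sampling, and the renderer fuses the warped garment with the person representation. All layer choices, channel counts, and the single-scale flow are illustrative assumptions (the paper uses multi-scale flows); refer to the linked repository for the authors' actual implementation.

```python
# Hypothetical sketch of the FVTN pipeline (PAM -> FEM -> FRM), not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class PAM(nn.Module):
    """Parsing Alignment Module: predicts a semantic parsing map of the target person."""
    def __init__(self, in_ch=6, num_classes=20):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 64), conv_block(64, 64),
                                 nn.Conv2d(64, num_classes, 1))
    def forward(self, clothing, person_repr):
        return self.net(torch.cat([clothing, person_repr], dim=1))  # parsing logits

class FEM(nn.Module):
    """Flow Estimation Module: predicts a dense flow field and warps the clothing with it."""
    def __init__(self, in_ch=6 + 20):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 64), conv_block(64, 64),
                                 nn.Conv2d(64, 2, 1))  # 2-channel (x, y) flow offsets
    def forward(self, clothing, person_repr, parsing):
        flow = self.net(torch.cat([clothing, person_repr, parsing], dim=1))
        b, _, h, w = clothing.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                                indexing="ij")
        base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1).to(clothing)
        grid = base_grid + flow.permute(0, 2, 3, 1)  # offsets in normalized coordinates
        warped = F.grid_sample(clothing, grid, align_corners=True)
        return warped, flow

class FRM(nn.Module):
    """Fusion and Rendering Module: fuses warped clothing and body features into the try-on image."""
    def __init__(self, in_ch=6 + 20):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 64), conv_block(64, 64),
                                 nn.Conv2d(64, 3, 1), nn.Tanh())
    def forward(self, warped_clothing, person_repr, parsing):
        return self.net(torch.cat([warped_clothing, person_repr, parsing], dim=1))

class FVTN(nn.Module):
    def __init__(self):
        super().__init__()
        self.pam, self.fem, self.frm = PAM(), FEM(), FRM()
    def forward(self, clothing, person_repr):
        parsing = self.pam(clothing, person_repr)
        warped, flow = self.fem(clothing, person_repr, parsing)
        try_on = self.frm(warped, person_repr, parsing)
        return try_on, parsing, flow

if __name__ == "__main__":
    model = FVTN()
    clothing = torch.randn(1, 3, 256, 192)   # try-on garment image
    person = torch.randn(1, 3, 256, 192)     # clothing-agnostic person representation
    try_on, parsing, flow = model(clothing, person)
    print(try_on.shape)  # torch.Size([1, 3, 256, 192])
```

The key design point the sketch illustrates is that per-pixel flow warping (via `grid_sample`) gives the garment far more freedom to deform than a global parametric warp, which is what lets a flow-based model cope with drastic spatial deformation and partial occlusion.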
               