Pose Guided Person Image Generation (PGPIG) is a popular deepfake task that aims to generate an image of a person in a given target pose from a source image. However, existing methods do not comprehensively model the correlation between the source and the target domain: most focus only on keypoint correlations and ignore detailed textures. In this paper, we propose a novel Texture Correlation Network (TCN) that builds pose and texture correlations simultaneously. Specifically, TCN adopts a two-stage design comprising two networks: a Pose Guided Person Alignment Network (PGPAN) and a Texture Correlation Attention Network (TCAN). PGPAN generates a coarse person image aligned with the target pose, while TCAN produces the final target image under the guidance of multiple correlations. The key component of TCAN is our new Texture Correlation Attention Module (TCAM), which explicitly builds geometry and texture correlations between the source image and the coarse target image; these correlations facilitate the transfer of realistic textures from the source to the target. Extensive experiments on the DeepFashion and Market-1501 benchmarks demonstrate the superior performance of the proposed method. Moreover, our model uses only 8.5 million parameters, significantly fewer than other methods.
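The abstract leaves the implementation unspecified; as a rough illustration of how such a two-stage pipeline could be wired, the sketch below names its stand-in classes after the abstract's components (PGPAN, TCAN, TCAM). Every internal detail here (layer counts, single-head dot-product attention, 18-channel keypoint heatmaps, feature widths) is a hypothetical assumption for illustration, not the paper's architecture.

```python
# Illustrative sketch only: the abstract gives no implementation details,
# so every layer choice below is an assumption, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PGPAN(nn.Module):
    """Stage 1 (sketch): coarse person image aligned with the target pose."""
    def __init__(self, pose_ch=18, img_ch=3, feat=64):
        super().__init__()
        # Source image plus source and target keypoint heatmaps as input.
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + 2 * pose_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, src_img, src_pose, tgt_pose):
        return self.net(torch.cat([src_img, src_pose, tgt_pose], dim=1))

class TCAM(nn.Module):
    """Sketch of a texture-correlation attention module: dot-product
    attention letting each coarse-target location attend over source
    feature locations, so source textures can be transferred."""
    def __init__(self, feat=64):
        super().__init__()
        self.q = nn.Conv2d(feat, feat, 1)
        self.k = nn.Conv2d(feat, feat, 1)
        self.v = nn.Conv2d(feat, feat, 1)

    def forward(self, coarse_feat, src_feat):
        b, c, h, w = coarse_feat.shape
        q = self.q(coarse_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.k(src_feat).flatten(2)                     # (B, C, HW)
        v = self.v(src_feat).flatten(2).transpose(1, 2)     # (B, HW, C)
        attn = F.softmax(q @ k / c ** 0.5, dim=-1)          # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return coarse_feat + out  # residual: keep coarse structure

class TCAN(nn.Module):
    """Stage 2 (sketch): refine the coarse image using correlations
    with the source image computed by TCAM."""
    def __init__(self, img_ch=3, feat=64):
        super().__init__()
        self.enc_src = nn.Conv2d(img_ch, feat, 3, padding=1)
        self.enc_coarse = nn.Conv2d(img_ch, feat, 3, padding=1)
        self.tcam = TCAM(feat)
        self.dec = nn.Conv2d(feat, img_ch, 3, padding=1)

    def forward(self, coarse_img, src_img):
        fused = self.tcam(self.enc_coarse(coarse_img), self.enc_src(src_img))
        return torch.tanh(self.dec(fused))

# Usage with hypothetical shapes (18 keypoint heatmaps, 64x64 images).
src_img = torch.randn(1, 3, 64, 64)
src_pose = torch.randn(1, 18, 64, 64)
tgt_pose = torch.randn(1, 18, 64, 64)
coarse = PGPAN()(src_img, src_pose, tgt_pose)   # stage 1: pose alignment
target = TCAN()(coarse, src_img)                # stage 2: texture transfer
print(target.shape)  # torch.Size([1, 3, 64, 64])
```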