Traditional object detection methods assume that the training and test data follow the same distribution, an assumption that cannot always be guaranteed in the real world. Domain adaptive methods have been proposed to handle this situation, but existing approaches generally ignore semantic alignment at the feature level when aligning data distributions between the source and target domains. In this paper, we propose a novel unsupervised cross-domain object detection method, named Cycle-consistent domain Adaptive Faster RCNN (CA-FRCNN). A pair of Generative Adversarial Networks (GANs) is used to make the features from the two domains consistent at both the data-distribution level and the semantic level. Specifically, features from the source domain are translated into the target domain and then aligned with the target-domain features; target features are handled with symmetric operations in the opposite direction. Furthermore, a cycle-consistency loss is optimized to guarantee that semantic information is preserved before and after the style translations. Finally, an identity module constrains the source generator, when fed features that are already in the source domain, to reproduce those features unchanged; an analogous identity constraint is imposed on target-domain features. Experiments on multiple datasets show that our method outperforms previous state-of-the-art methods.
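A minimal sketch (not the authors' released code) of the adversarial, cycle-consistency, and identity losses described in the abstract, assuming two feature-level generators G_s2t and G_t2s plus per-domain discriminators operating on Faster R-CNN backbone feature maps. All module definitions, the 1x1-conv heads, and the loss weights are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_head(channels):
    # Hypothetical lightweight generator head over feature maps.
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=1),
    )

class FeatureCycleAdapter(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.G_s2t = conv_head(channels)                   # source -> target style
        self.G_t2s = conv_head(channels)                   # target -> source style
        self.D_t = nn.Conv2d(channels, 1, kernel_size=1)   # target-domain critic
        self.D_s = nn.Conv2d(channels, 1, kernel_size=1)   # source-domain critic

    def generator_losses(self, f_s, f_t, lam_cyc=10.0, lam_idt=5.0):
        fake_t = self.G_s2t(f_s)   # source features translated to target style
        fake_s = self.G_t2s(f_t)   # target features translated to source style

        # Adversarial alignment: translated features should fool the critics
        # of their destination domain (distribution-level consistency).
        logits_t = self.D_t(fake_t)
        logits_s = self.D_s(fake_s)
        adv = F.binary_cross_entropy_with_logits(logits_t, torch.ones_like(logits_t)) \
            + F.binary_cross_entropy_with_logits(logits_s, torch.ones_like(logits_s))

        # Cycle-consistency: translating across domains and back should
        # recover the original features (semantic preservation).
        cyc = F.l1_loss(self.G_t2s(fake_t), f_s) + F.l1_loss(self.G_s2t(fake_s), f_t)

        # Identity: a generator fed features already in its output domain
        # should act approximately as the identity map.
        idt = F.l1_loss(self.G_t2s(f_s), f_s) + F.l1_loss(self.G_s2t(f_t), f_t)

        return adv + lam_cyc * cyc + lam_idt * idt
```

In training, a combined objective of this kind would presumably be added to the standard Faster R-CNN detection losses computed on the labeled source domain, with the discriminators updated in alternation; the exact architecture and weighting here are assumptions for illustration only.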
               