Recently, Neural Architecture Search (NAS) methods have been introduced and have shown impressive performance on many benchmarks. Among these NAS studies, the Neural Architecture Transformer (NAT) aims to adapt a given neural architecture to improve its performance while maintaining the computational cost. In the architecture adaptation task, known high-performance architectures can be exploited, and the adaptation results of NAT showed performance improvements on various architectures in their experiments. However, we verified through multiple trials that NAT lacks reproducibility. Moreover, it requires an additional architecture adaptation process before network weight training. In this paper, we propose a proxyless neural architecture adaptation method that is reproducible and efficient. The proposed method does not need a proxy task for architecture adaptation; it improves the architecture directly during the conventional training process, so the trained network can be used as-is. Moreover, the proposed method can be applied to both supervised and self-supervised learning, and it shows stable performance improvements on various architectures and datasets. Extensive experiments on two benchmark datasets, CIFAR-10 and Tiny ImageNet, show that the proposed method clearly outperforms NAT and is applicable to various models and datasets.
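The abstract does not specify the adaptation mechanism, but a minimal sketch of the "proxyless" idea might look like the following. It assumes a DARTS-style continuous relaxation, where each block mixes candidate operations via learnable logits; the toy model, the `AdaptableBlock` class, and all other names are illustrative, not the paper's actual implementation. The key point it demonstrates is that architecture parameters are optimized jointly with network weights inside one ordinary training loop, with no proxy task and no separate adaptation stage.

```python
# Minimal sketch of proxyless architecture adaptation (illustrative only;
# assumes a DARTS-style relaxation, which the paper may implement differently).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptableBlock(nn.Module):
    """A block whose operation is a softmax-weighted mix of candidate ops."""
    def __init__(self, channels):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # candidate: 3x3 conv
            nn.Conv2d(channels, channels, 5, padding=2),  # candidate: 5x5 conv
            nn.Identity(),                                # candidate: skip
        ])
        # Architecture parameters: one logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.candidates))

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    AdaptableBlock(16),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)

# One optimizer over network weights AND architecture logits: the architecture
# is adapted during conventional training, so the trained model is usable
# directly afterwards -- no separate adaptation phase before weight training.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for step in range(100):               # stand-in for a real data loader
    x = torch.randn(8, 3, 32, 32)     # fake CIFAR-10-sized batch
    y = torch.randint(0, 10, (8,))
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, one could read off the adapted architecture from each block's `alpha` (e.g., keeping the highest-weighted candidate), or simply deploy the trained mixed network; which of these the paper does is not stated in the abstract.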
               