Specific emitter identification (SEI) methods based on deep learning have recently shown significant gains in accuracy. However, these methods require a large amount of labeled data. In this letter, contrastive learning is introduced to cope with the scarcity of labeled data; the network comprises three modules: a feature extractor, a projection head, and a classifier. We use a two-stage semi-supervised training scheme. In the first stage, the feature extractor extracts features from the received signal, and the projection head deepens the network so that the features retain more information. Both are trained on unlabeled samples with a self-supervised contrastive learning loss. In the second stage, the overall network is fine-tuned on a small amount of labeled data using an alternative loss that combines the original cross-entropy loss with a supervised contrastive learning loss. Numerical results on the FIT/CorteXlab dataset show that even with only tens of labeled samples, the proposed identifier attains an accuracy of around 90%, outperforming conventional supervised and semi-supervised identifiers.
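The abstract gives no implementation details, so the PyTorch sketch below is only one plausible reading of the two-stage scheme. The module layout (SEINet), the NT-Xent form of the stage-1 self-supervised contrastive loss, the SupCon-style supervised contrastive loss, and the weighting factor lam are all assumptions for illustration, not the authors' exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SEINet(nn.Module):
    """Feature extractor + projection head + classifier (hypothetical architecture)."""
    def __init__(self, in_ch=2, feat_dim=128, proj_dim=64, num_emitters=16):
        super().__init__()
        # Feature extractor over raw I/Q samples (2 channels); the layout is an assumption.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim))
        # Projection head: deepens the network so the features retain more information.
        self.proj = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, proj_dim))
        self.classifier = nn.Linear(feat_dim, num_emitters)

def nt_xent(z1, z2, tau=0.5):
    """Stage 1: self-supervised contrastive loss over two augmented views of each signal."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)           # (2n, d) unit-norm projections
    sim = z @ z.t() / tau                                  # temperature-scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))             # exclude self-similarity
    # The positive of view i is the other augmented view of the same signal.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def sup_con(z, labels, tau=0.1):
    """Supervised contrastive loss: samples from the same emitter act as positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def finetune_loss(logits, z, labels, lam=0.5):
    """Stage 2 'alternative loss': cross-entropy plus the supervised contrastive term.
    The weight lam is an assumption; the letter does not state how the terms are balanced."""
    return F.cross_entropy(logits, labels) + lam * sup_con(z, labels)

In stage 1, nt_xent would be applied to projections of two augmentations of each unlabeled signal to train the encoder and projection head; in stage 2, the whole network would be fine-tuned on the few labeled samples with finetune_loss, where the logits come from the classifier and z from the projection head.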