Autoregressive models with a left-to-right generation order have been the predominant paradigm for generative linguistic steganography. However, such steganography performs poorly at semantic control and content planning, because the secret message constrains token choices during the generation process. To mitigate this issue and efficiently produce high-quality steganographic texts (stegotexts), we present Progressive Non-autoregressive Generative linguistic Steganography (PNG-Stega), which encodes secret messages and extends the context to generate stegotexts through multiple rounds of insertion. Each round refines the generated steganographic sequence using the global information of the previous round, while reducing the adverse effects of steganographic encoding on text quality. Moreover, to strengthen the internal semantic dependencies of stegotexts, we use a constrained word-sequence extraction scheme to obtain keywords that initialize the skeleton of the target stegotext, and then expand these keywords through insertion operations. Experimental results demonstrate that PNG-Stega outperforms the compared methods in imperceptibility and resistance to steganalysis. In particular, PNG-Stega achieves high information-hiding efficiency, exceeding autoregressive methods by roughly a factor of two.
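The core idea of insertion-based embedding can be illustrated with a minimal sketch. This is not the authors' implementation: PNG-Stega uses a non-autoregressive language model to propose insertion candidates over multiple refinement rounds, whereas the toy below uses a hand-made candidate table and a single round. The `cands` table, the one-bit-per-slot encoding, and all word lists are illustrative assumptions; the sketch only shows how secret bits can steer which word is inserted next to each keyword of the skeleton, and how the receiver recovers those bits.

```python
def embed(keywords, bits, cands):
    """Insert one candidate word after each keyword that has candidates;
    the next secret bit selects which candidate is inserted."""
    out, used = [], 0
    for tok in keywords:
        out.append(tok)
        c = cands(tok)
        if c and used < len(bits):
            out.append(c[bits[used]])  # bit 0 -> first candidate, bit 1 -> second
            used += 1
    return out, used  # stegotext tokens, number of bits embedded


def extract(stego, cands):
    """Recover the bits by re-deriving the candidate list at each position
    and reading off which candidate was chosen."""
    bits, i = [], 0
    while i < len(stego):
        tok = stego[i]
        i += 1
        c = cands(tok)
        if c and i < len(stego) and stego[i] in c:
            bits.append(c.index(stego[i]))
            i += 1  # skip the inserted word
    return bits


# Toy candidate table (assumed for illustration); inserted words must not
# themselves trigger candidates, so extraction stays unambiguous.
cands = lambda t: {"cat": ["black", "white"], "sat": ["down", "still"]}.get(t)

stego, n = embed(["the", "cat", "sat"], [1, 0], cands)
# stego == ["the", "cat", "white", "sat", "down"], n == 2
recovered = extract(stego, cands)
# recovered == [1, 0]
```

In the paper's multi-round setting, each round would repeat this expansion over the previous round's full output, so later insertions condition on global context rather than only on the left prefix.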