Visual saliency (VS) is an important mechanism for identifying which areas of an image attract more attention from the human visual system (HVS). VS can therefore be employed to weight the just noticeable difference (JND) with different attention levels. Several VS-based JND profiles have been proposed in the DCT domain, but they use only bottom-up features such as luminance and texture. Recent research on saliency detection has shown that models combining both bottom-up and top-down features significantly improve overall detection performance. In this paper, we propose a novel two-layer VS-induced JND profile composed of bottom-up features and a top-down feature, both extracted from DCT blocks in the transformed domain. In this model, luminance and texture features are used to compute the bottom-up feature maps, while the top-down feature of focus guides the generation of the final salient regions, since photographers tend to keep attention-drawing regions in focus. The proposed two-layer saliency-induced JND model is further applied to modulate the quantization step in a watermarking framework, exploiting its merits to achieve a better tradeoff between fidelity and robustness. Experimental results show that the proposed scheme outperforms previous watermarking schemes.
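To illustrate the core idea of saliency-modulated quantization-step watermarking, the following is a minimal sketch using standard binary quantization index modulation (QIM). The modulation rule `step = base_step * (2 - saliency)` is a hypothetical stand-in, not the paper's formula: it shrinks the step in salient regions (keeping distortion below the saliency-weighted JND) and enlarges it elsewhere for extra robustness.

```python
import numpy as np

def embed_bit(coeff, bit, base_step, saliency):
    """Embed one watermark bit into a DCT coefficient via binary QIM.

    `saliency` is assumed to lie in [0, 1]; the step-modulation rule
    below is an illustrative assumption, not the paper's JND model.
    """
    step = base_step * (2.0 - saliency)      # smaller step where salient
    offset = 0.0 if bit == 0 else step / 2.0  # two interleaved lattices
    # Quantize the coefficient onto the lattice associated with `bit`.
    return step * np.round((coeff - offset) / step) + offset

def extract_bit(coeff, base_step, saliency):
    """Recover the bit by picking the nearer of the two QIM lattices."""
    step = base_step * (2.0 - saliency)
    d0 = abs(coeff - step * np.round(coeff / step))
    d1 = abs(coeff - (step * np.round((coeff - step / 2) / step) + step / 2))
    return 0 if d0 <= d1 else 1
```

In a full scheme, `saliency` would come from the two-layer model per DCT block, and extraction tolerates coefficient perturbations up to roughly a quarter of the modulated step, which is where the fidelity/robustness tradeoff appears.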
               