
Study on feature extraction technology of real-time video acquisition based on deep CNN



During image acquisition, existing image-based real-time video acquisition systems are susceptible to noise and distortion caused by pose, illumination, and other conditions, which degrades the quality and stability of the acquired images and makes it difficult to locate image feature regions. A feature extraction technique for real-time video capture based on a deep convolutional neural network is therefore proposed. High-quality image patches are cropped by locating reference points in feature connection regions; each part of the image is smoothed with a mean filter; texture features are extracted using a convolution transform, the discrete cosine transform (DCT), and statistical features; and randomly initialized weights are replaced with a pre-trained model. For model training and recognition, methods for feature-state division, image preprocessing, and observation-vector calculation are studied. Experimental results on the ORL database verify the effectiveness of the image feature extraction method, which can meet the needs of current real-time video capture.
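The preprocessing and texture-feature steps summarized above (mean filtering, DCT-based texture coefficients, and simple statistics) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; the filter size and the number of retained DCT coefficients are assumed values for demonstration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.fftpack import dct

def extract_features(image, filter_size=3, n_dct=16):
    """Smooth an image with a mean filter, then build a texture feature
    vector from low-frequency 2-D DCT coefficients plus basic statistics.
    `filter_size` and `n_dct` are illustrative choices, not paper values."""
    # Mean (uniform) filter suppresses acquisition noise.
    smoothed = uniform_filter(image.astype(float), size=filter_size)
    # 2-D DCT, applied separably along rows and columns.
    coeffs = dct(dct(smoothed, axis=0, norm='ortho'), axis=1, norm='ortho')
    # Keep the top-left (low-frequency) n_dct x n_dct block as texture features.
    texture = coeffs[:n_dct, :n_dct].ravel()
    # Simple statistical features of the smoothed image.
    stats = np.array([smoothed.mean(), smoothed.std()])
    return np.concatenate([texture, stats])

# Example on a synthetic 32x32 "frame".
rng = np.random.default_rng(0)
img = rng.random((32, 32))
features = extract_features(img)
print(features.shape)  # (16*16 + 2,) = (258,)
```

In a full pipeline such a vector would feed a classifier or, as in the paper, serve alongside features from a pre-trained deep CNN rather than replace them.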

Keywords: feature extraction; image; real-time video

Journal Title: Multimedia Tools and Applications
Year Published: 2021


