
A Fully Convolutional Encoder–Decoder Spatial–Temporal Network for Real-Time Background Subtraction



Background subtraction is the task of classifying the pixels of a frame as belonging either to moving objects or to the background. In this paper, we propose a fully convolutional encoder–decoder spatial–temporal network (FCESNet) for real-time background subtraction. In the proposed many-to-many architecture, the encoded features of consecutive frames are fed into a spatial–temporal information transmission (STIT) module that captures the spatial–temporal correlation across the frame sequence, and a decoder then outputs the subtraction results for all frames. A patch-based training method is designed to increase the practicability and flexibility of the proposed method. Experiments on CDNet2014 show that the proposed method achieves state-of-the-art performance while running in real time.
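The abstract does not spell out the patch-based scheme, but the usual idea is to decompose each frame into fixed-size tiles that can be processed independently and then stitched back into a full-resolution mask. The sketch below illustrates that decomposition with NumPy; the function names and the non-overlapping, evenly divisible tiling are assumptions for illustration, not details from the paper.

```python
import numpy as np

def extract_patches(frame, patch):
    """Split an H x W frame into non-overlapping patch x patch tiles.
    Assumes H and W are divisible by `patch` (a simplification)."""
    h, w = frame.shape
    return (frame.reshape(h // patch, patch, w // patch, patch)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, patch, patch))

def assemble_patches(tiles, h, w, patch):
    """Inverse of extract_patches: stitch tiles back into an H x W frame,
    e.g. to reassemble per-patch foreground masks."""
    return (tiles.reshape(h // patch, w // patch, patch, patch)
                 .transpose(0, 2, 1, 3)
                 .reshape(h, w))

# Toy example: an 8 x 8 frame split into four 4 x 4 patches and restored.
frame = np.arange(64, dtype=np.float32).reshape(8, 8)
tiles = extract_patches(frame, 4)
restored = assemble_patches(tiles, 8, 8, 4)
assert tiles.shape == (4, 4, 4)
assert np.array_equal(frame, restored)
```

Because the decomposition is exactly invertible, a network trained on patches can still produce a seamless full-frame output, which is one plausible reading of the flexibility claim in the abstract.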

Keywords: real time; spatial temporal; background subtraction

Journal Title: IEEE Access
Year Published: 2019
