
A Scale Aggregation and Spatial-Aware Network for Multi-View Crowd Counting

Previous multi-view crowd counting methods underperform in maintaining scale consistency across views and overlook the negative effect of complex backgrounds. To solve these problems, a Scale Aggregation and Spatial-aware Network for multi-view crowd counting (SASNet) is proposed. First, we design a multi-branch adaptive scale aggregation module that aggregates the appropriate scale for each pixel in each view; benefiting from the automatic feature-learning process, it helps the features of all camera views maintain scale consistency as much as possible. Then, a crowd-centric selection module assigns weights to pixels at different spatial locations, selecting crowd regions and suppressing background information. Finally, the selected features of each view are projected into a consistent world coordinate system and fused. Experimental results demonstrate that the proposed SASNet outperforms state-of-the-art methods, achieving 7.44 MAE (9.46 RMSE) on City Street and 1.01 MAE (1.24 RMSE) on DukeMTMC.
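
The pipeline described in the abstract — per-pixel scale aggregation, crowd-centric spatial weighting, and projection of per-view features onto a common world plane before fusion — can be sketched roughly as below. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the module names, channel sizes, dilation rates, averaging fusion, and the precomputed homography grids are all assumptions.

```python
# Hypothetical sketch of the SASNet-style pipeline: multi-branch scale
# aggregation, spatial (crowd-centric) weighting, and projection of per-view
# features to a shared ground plane before fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleAggregation(nn.Module):
    """Multi-branch convolutions with per-pixel soft selection of scale."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        # One branch per receptive-field size (realised here via dilation).
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations]
        )
        # 1x1 conv predicts a per-pixel weight for each scale branch.
        self.select = nn.Conv2d(channels, len(dilations), 1)

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # B, S, C, H, W
        w = F.softmax(self.select(x), dim=1).unsqueeze(2)          # B, S, 1, H, W
        return (feats * w).sum(dim=1)                              # B, C, H, W


class CrowdCentricSelection(nn.Module):
    """Spatial attention that emphasises crowd regions and suppresses background."""

    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.attn(x)


def project_to_ground(feat, grid):
    """Warp image-plane features to the world ground plane.

    `grid` (B, Hg, Wg, 2) holds, for every ground-plane cell, the normalised
    image coordinates given by the camera homography (precomputed offline).
    """
    return F.grid_sample(feat, grid, align_corners=False)


def fuse_views(view_feats, grids):
    """Average the projected features from all camera views (one fusion choice)."""
    return torch.stack(
        [project_to_ground(f, g) for f, g in zip(view_feats, grids)]
    ).mean(dim=0)


# Example usage with two camera views (shapes are illustrative):
# feats = [torch.randn(1, 64, 96, 128) for _ in range(2)]
# grids = [torch.rand(1, 180, 240, 2) * 2 - 1 for _ in range(2)]
# agg, sel = ScaleAggregation(64), CrowdCentricSelection(64)
# ground_feat = fuse_views([sel(agg(f)) for f in feats], grids)
```

The per-pixel softmax over branches is one plausible way to realise "aggregating the appropriate scale for each pixel"; the paper's module may use a different branching scheme, and its cross-view fusion may be learned rather than a simple average.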

Keywords: multi-view crowd counting; scale aggregation; crowd counting

Journal Title: IEEE Access
Year Published: 2022
