In modern security systems such as CCTV-based surveillance applications, real-time deep-learning-based computer vision algorithms are actively utilized for always-on automated execution. A real-time computer vision system for surveillance applications is highly computation-intensive and exhausts computation resources when it is performed on a device with a limited amount of resources. Owing to the nature of Internet-of-Things networks, the device is connected to main computing platforms through offloading techniques. In addition, a real-time computer vision system such as a CCTV system with image recognition functionality performs better when arrival images are sampled at a higher rate, because this minimizes missed video frame feeds. However, sampling at overwhelmingly high rates exposes the system to the risk of queue overflow, which hampers the reliability of the system. In order to deal with this issue, this paper proposes a novel queue-aware dynamic sampling rate adaptation algorithm that optimizes the sampling rates to maximize the computer vision performance (i.e., recognition ratio) while avoiding queue overflow, based on the Lyapunov optimization framework.  Through extensive system simulations, the proposed approaches are shown to provide remarkable gains.
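The Lyapunov-based trade-off the abstract describes can be illustrated with a minimal drift-plus-penalty sketch. This is not the paper's actual algorithm: the candidate rate set, the concave recognition-gain function `recognition`, the fixed service rate `mu`, and the weight `V` are all hypothetical placeholders. The controller picks, each slot, the sampling rate minimizing `Q(t)*(s - mu) - V*p(s)`, so it samples fast when the queue is short and backs off as the queue grows.

```python
def choose_rate(queue_len, rates, recognition, service_rate, V):
    """Pick the sampling rate minimizing the drift-plus-penalty term
    Q(t)*(s - mu) - V*p(s), where p(s) is the recognition gain."""
    best, best_cost = None, float("inf")
    for s in rates:
        cost = queue_len * (s - service_rate) - V * recognition(s)
        if cost < best_cost:
            best, best_cost = s, cost
    return best

def recognition(s):
    # Hypothetical concave gain: higher sampling rates help, with
    # diminishing returns (stands in for the recognition ratio).
    return 1.0 - 1.0 / (1.0 + s)

rates = [1, 2, 4, 8]   # candidate sampling rates (frames per slot)
mu = 3.0               # assumed fixed service rate (frames per slot)
Q = 0.0                # queue backlog
for t in range(100):
    s = choose_rate(Q, rates, recognition, mu, V=10.0)
    Q = max(Q + s - mu, 0.0)  # standard queue update
```

In this toy run the controller selects the highest rate while the queue is empty, then drops to the lowest rate as backlog builds, keeping the queue bounded; a larger `V` shifts the balance toward recognition performance at the cost of longer (but still stable) queues.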