Positioning data and other sensor measurements, such as camera orientation, have become important contextual features generated by mobile devices during video recording, and they have proved increasingly beneficial to video search. To enable access to videos based on their metadata (e.g., geo-properties produced by GPS and a digital compass), a model representing the camera field of view (FOV) is needed. The vector model of previous work, which simplifies the FOV by ignoring the viewable angle for search efficiency, has become popular in georeferenced video search. However, when the viewable angle is large, many false positives and false negatives occur, which are undesirable for the filtering step of georeferenced video search. This paper proposes a new model that appropriately represents the actual FOV as a filtering step, without any false positives or false negatives. Based on this model, we investigate how to process five types of overlap queries for searching videos as spatio-temporal objects. To verify the effectiveness of our model and the corresponding query processing algorithms, we conduct experiments on a real data set we collected and on a large synthetic data set. The results show that the proposed model performs much better than the existing vector model, and the accuracy of its search results remains almost steady even when the viewable angle changes.
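To make the FOV idea concrete, the following is a minimal Python sketch, not the paper's actual model or query algorithms. It assumes the common sector-style representation of an FOV (camera location from GPS, optical-axis direction from the digital compass, a viewable angle, and a maximum visible distance) and a simple point-overlap test; all names and the flat-earth approximation are illustrative assumptions.

```python
import math

# Assumed sector-style FOV: camera location, compass direction of the
# optical axis, total viewable angle, and maximum visible distance.
# This is an illustrative sketch, not the model proposed in the paper.

class FOV:
    def __init__(self, lat, lng, direction_deg, viewable_angle_deg, visible_dist_m):
        self.lat = lat                   # camera latitude (GPS)
        self.lng = lng                   # camera longitude (GPS)
        self.direction = direction_deg   # optical-axis bearing (digital compass), degrees from north
        self.angle = viewable_angle_deg  # total viewable angle of the lens, degrees
        self.dist = visible_dist_m       # assumed maximum visible distance, meters

    def covers(self, lat, lng):
        """Return True if a query point lies inside this FOV sector.

        Uses an equirectangular (flat-earth) approximation, which is
        reasonable only over a camera's short visible range.
        """
        # Offset from the camera to the point in local meters.
        dy = (lat - self.lat) * 111_320.0
        dx = (lng - self.lng) * 111_320.0 * math.cos(math.radians(self.lat))
        if math.hypot(dx, dy) > self.dist:
            return False
        # Bearing from the camera to the point, clockwise from north.
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0
        # Smallest angular difference between that bearing and the optical axis.
        diff = abs((bearing - self.direction + 180.0) % 360.0 - 180.0)
        return diff <= self.angle / 2.0


if __name__ == "__main__":
    # Hypothetical FOV facing due east with a 60-degree viewable angle.
    fov = FOV(lat=34.0205, lng=-118.2856, direction_deg=90.0,
              viewable_angle_deg=60.0, visible_dist_m=200.0)
    # A point roughly 100 m east of the camera falls inside the sector.
    print(fov.covers(34.0205, -118.2845))
```

Checking point containment against the full sector, rather than against a single direction vector, is what avoids the false positives and false negatives that arise when a large viewable angle is ignored.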