
Guest editorial: mobile visual tagging with mobile context



Recently, the wide popularity of personal mobile devices has greatly changed our daily lives, both in terms of human-to-human communication and human-to-computer information access. An important development under this circumstance is that visual content plays an increasingly dominant role. We take pictures with mobile phones every day; we can send image/video messages to our friends anywhere and anytime; and online social communities such as Twitter and Facebook are flooded with images. How to satisfactorily utilize and manage such visual information presents a great challenge for visual information understanding technologies. We are now seeing rapid improvement of image recognition techniques with the recent development of deep feature learning, cross-media annotation, contextual information complementation, transfer learning, and so on. However, there is still much work to do to satisfy the requirements of different applications.

In fact, mobile devices bring a large amount of contextual information, which can provide useful clues to facilitate image annotation and tagging. Moreover, mobile context is enriched by application-specific information at two levels. One is internal contextual information intrinsically contained in the mobile device, such as the personal profile, stored textual/visual content, and camera and other sensor parameters. The other is external contextual information that can be easily acquired by the mobile device, such as weather, geo-location, and aural information. How to fully utilize this information is an interesting and promising research problem.

This special issue seeks innovative papers from both industry and academia that exploit novel technologies and solutions for recognizing and tagging images/videos with mobile contextual information. It includes both direct submissions to the call for papers and extensions of papers selected from the 2015 International Conference on Internet Multimedia Computing and Service (ICIMCS). Each extended conference paper contains at least 30% new material compared with the original paper.

The first article is a survey paper entitled “A survey on context-aware mobile visual recognition” by Min et al. It focuses on recent advances in context-aware mobile visual recognition and reviews related work regarding different contextual information, recognition methods, recognition types, and various application scenarios. Various kinds of contextual information, including location, time, and camera parameters from the different sensors of mobile devices, are introduced for mobile visual recognition. The paper discusses three types of recognition methods: classification-based methods, retrieval-based methods, and tag propagation-based methods. It also proposes several open issues that need to be addressed in the future, including designing compact and …
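To make the idea of context-aware recognition concrete, here is a minimal, hypothetical sketch (not taken from any paper in this issue) of how a classification-based method might fuse visual classifier scores with a prior derived from mobile context such as geo-location. The function name, tag vocabulary, scores, and the fixed fusion weight `alpha` are all illustrative assumptions; real systems would learn the fusion from data.

```python
# Hypothetical sketch: late fusion of visual evidence with a
# context-derived prior (e.g., from GPS location and time of day).
# All names, tags, and scores below are illustrative, not from the editorial.

def fuse_scores(visual_scores, context_prior, alpha=0.7):
    """Weighted late fusion of visual scores and a contextual prior.

    visual_scores: tag -> score from an image classifier.
    context_prior: tag -> prior plausibility given the mobile context.
    alpha: weight on the visual evidence (assumed fixed here).
    """
    fused = {}
    for tag, v in visual_scores.items():
        c = context_prior.get(tag, 0.0)  # unknown tags get zero prior
        fused[tag] = alpha * v + (1 - alpha) * c
    return fused

# The visual classifier alone is nearly tied between "beach" and "ski slope"...
visual_scores = {"beach": 0.48, "ski slope": 0.45, "street": 0.07}
# ...but GPS places the photo in a coastal city in summer,
# so the context prior strongly favors "beach".
context_prior = {"beach": 0.8, "ski slope": 0.05, "street": 0.15}

fused = fuse_scores(visual_scores, context_prior)
print(max(fused, key=fused.get))  # → beach
```

The same fusion pattern applies to retrieval-based and tag propagation-based methods: context simply reweights which candidate tags or neighbor images are considered plausible before the final decision is made.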

Keywords: contextual information; mobile devices; context; recognition; mobile visual tagging

Journal Title: Multimedia Systems
Year Published: 2017


