The next generation of the industrial revolution will be characterized by broad applications of intelligent technologies, among the most popular of which are intelligent manufacturing and autonomous products such as vehicles and robotic systems. In both cases, autonomous operations are at the center of the stage, and appropriate sensing and perception play critical roles. Indeed, recent advances in sensing and perception technologies have produced exciting new ideas for facilitating autonomous manufacturing and/or robotic vehicular systems. These technologies will potentially evolve with more and more ‘smart functions’ and move manufacturing and robotic systems from single structured operation to sensing/perception-based, self-governed yet collaborative multisystem operations. This Focused Section is dedicated to new progress in the modeling, design, control, communication, and implementation of sensing and perception systems for autonomous and/or networked robotics, and it intends to provide a state-of-the-art update on these research fronts. The Focused Section consists of six research papers covering detection of human motion (Jiang et al.), vision-based pose measurement (Zhang et al.), real-time object detection and tracking (Benabderrahmane), 3-D map reconstruction (Turan et al.; Landsiedel and Wollherr), and a vision-based endoscopic capsule robot (Turan et al.).

In Jiang et al., an alternative method utilizing temperature fields and their gradients from infrared (IR) images is presented to improve the perception ability of blind/visually impaired people, who are generally familiar with stationary objects but less confident in congested environments where human motion is unpredictable. This approach takes advantage of the fact that the human body is essentially a natural heat source, and it is applied to locate individual persons and determine their face orientation and motion states. This alternative temperature-field-based perception method is potentially applicable in intelligent spaces, smart cities, and smart cars.

In Zhang et al., a global image-to-ground homography-based calibration method is presented to obtain the mapping between the image and the planar scene lying in the whole camera field of view by fusing multiple local homography matrices. The proposed method does not require knowledge of the internal parameters of the cameras and renders high calibration accuracy with easy implementation. It can potentially provide a sensing aid for mobile robot localization with an accuracy close to the performance limit of a monocular camera.

In Benabderrahmane, an improved real-time object detection and tracking framework is presented, built on AdaBoost classification, in which a strong classifier is generated through an iterative combination of weak learners. A heuristic optimization algorithm is developed to accelerate the extraction of relevant features from the image. Considerable improvement has been observed when applying …
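To make the temperature-field idea from Jiang et al. concrete, the following is a minimal sketch of how warm, human-like regions might be located in an IR temperature map. The function name, the temperature band, and the blob-labeling step are illustrative assumptions and do not reproduce the actual algorithm of the paper.

    # Hypothetical sketch: locating warm, human-like regions in an IR temperature map.
    # Thresholds and processing steps are assumptions, not the method of Jiang et al.
    import numpy as np
    from scipy import ndimage

    def locate_warm_regions(temp_map, t_low=30.0, t_high=38.0):
        """temp_map is a 2-D array of temperatures in deg C; returns blob centroids
        and the temperature-gradient magnitude that outlines warm bodies."""
        mask = (temp_map >= t_low) & (temp_map <= t_high)   # assumed human-like band
        labels, n = ndimage.label(mask)                     # connected warm blobs
        centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
        gy, gx = np.gradient(temp_map)                      # temperature gradients
        grad_mag = np.hypot(gx, gy)                         # strong at body/background edges
        return centroids, grad_mag

The gradient magnitude is included because boundaries between a warm body and the cooler background are where the temperature field changes fastest, which is one plausible cue for separating adjacent people.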
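For the homography-based calibration summarized above (Zhang et al.), the sketch below shows the basic image-to-ground mapping and one simple way to obtain a single global homography from several local correspondence sets, here by pooling them into one robust fit. This pooling step is an illustrative simplification, not the fusion scheme of the paper, and the function names are hypothetical.

    # Illustrative image-to-ground homography sketch (not the fusion of Zhang et al.).
    import numpy as np
    import cv2

    def fuse_local_homographies(correspondence_sets):
        """correspondence_sets: list of (image_points, ground_points) Nx2 arrays,
        one per local planar patch; returns a single 3x3 global homography."""
        img_pts = np.vstack([im for im, _ in correspondence_sets]).astype(np.float32)
        gnd_pts = np.vstack([gd for _, gd in correspondence_sets]).astype(np.float32)
        H, _ = cv2.findHomography(img_pts, gnd_pts, cv2.RANSAC)
        return H

    def image_to_ground(H, u, v):
        """Map a pixel (u, v) to planar ground coordinates via the homography H."""
        p = H @ np.array([u, v, 1.0])
        return p[0] / p[2], p[1] / p[2]

Because the mapping is a plane-to-plane homography, no camera intrinsic parameters appear anywhere in the computation, which is consistent with the calibration-without-intrinsics property noted above.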
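The AdaBoost construction used in Benabderrahmane, where a strong classifier is an alpha-weighted vote of iteratively trained weak learners, can be sketched as follows. Decision stumps are used as the weak learners purely for illustration, and the paper's heuristic feature-selection step is not reproduced here.

    # Minimal AdaBoost sketch: strong classifier as a weighted vote of weak learners.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_train(X, y, n_rounds=50):
        """X: feature matrix, y: labels in {-1, +1}. Returns (stumps, alphas)."""
        w = np.full(len(y), 1.0 / len(y))                   # uniform sample weights
        stumps, alphas = [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)           # weight of this weak learner
            w *= np.exp(-alpha * y * pred)                  # boost misclassified samples
            w /= w.sum()
            stumps.append(stump)
            alphas.append(alpha)
        return stumps, alphas

    def adaboost_predict(stumps, alphas, X):
        """Strong classifier: sign of the alpha-weighted sum of weak predictions."""
        votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
        return np.sign(votes)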