Gaze-based implicit intention inference offers a new human-robot interaction channel that lets people with disabilities accomplish activities of daily living independently. Existing gaze-based intention inference is mostly data-driven and ignores the prior object information contained in intention expression, which yields low inference accuracy. To improve inference accuracy, we propose a gaze-based hybrid method that integrates model-driven and data-driven intention inference, tailored to applications for people with disabilities. Specifically, an intention is treated as a combination of a verb and a noun. The objects corresponding to the nouns are regarded as intention-interpreting objects and serve as prior knowledge in the form of penalty factors; each penalty factor encodes object information, i.e., the priority of an object in object selection. A class-specific attribute weighted naïve Bayes model, learned from training data, represents the relationship between intentions and objects. An intention inference engine is then built by combining this human prior knowledge with the data-driven class-specific attribute weighted naïve Bayes model. Computer simulations (i) verify the contribution of each critical component of the proposed model, (ii) evaluate its inference accuracy, and (iii) show that the proposed method outperforms state-of-the-art intention inference methods in terms of accuracy.
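
The abstract does not give the exact formulation, but a minimal sketch of how a class-specific attribute weighted naïve Bayes classifier could be combined with prior penalty factors might look as follows. The class name CSAWNB, the discrete attribute encoding, the way the class-specific weights are supplied, and the penalty vector below are illustrative assumptions, not the authors' implementation.

import numpy as np

class CSAWNB:
    """Class-specific attribute weighted naive Bayes (illustrative sketch).

    Unnormalised posterior: P(c) * prod_j P(x_j | c) ** w[c, j],
    where w[c, j] is a class-specific weight for attribute j.
    """

    def __init__(self, n_classes, n_attrs, n_values, alpha=1.0):
        self.n_classes = n_classes
        self.n_attrs = n_attrs
        self.n_values = n_values          # number of discrete values per attribute
        self.alpha = alpha                # Laplace smoothing constant
        self.class_log_prior_ = None
        self.cond_log_prob_ = None        # shape: (classes, attrs, values)
        self.weights_ = np.ones((n_classes, n_attrs))  # class-specific attribute weights

    def fit(self, X, y, weights=None):
        """X: (n_samples, n_attrs) integer-coded gaze/object features; y: intention labels."""
        class_counts = np.bincount(y, minlength=self.n_classes).astype(float)
        self.class_log_prior_ = np.log((class_counts + self.alpha) /
                                       (class_counts.sum() + self.alpha * self.n_classes))
        counts = np.full((self.n_classes, self.n_attrs, self.n_values), self.alpha)
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                counts[yi, j, v] += 1.0
        self.cond_log_prob_ = np.log(counts / counts.sum(axis=2, keepdims=True))
        if weights is not None:           # e.g. weights found by a separate learning step
            self.weights_ = weights
        return self

    def log_posterior(self, x, penalty=None):
        """Unnormalised log-posterior; `penalty` is a per-class prior penalty factor."""
        ll = self.class_log_prior_.copy()
        for j, v in enumerate(x):
            ll += self.weights_[:, j] * self.cond_log_prob_[:, j, v]
        if penalty is not None:           # model-driven prior: object priority / penalty
            ll += np.log(penalty)
        return ll

    def predict(self, x, penalty=None):
        return int(np.argmax(self.log_posterior(x, penalty)))

# Toy usage with hypothetical data: 3 intentions, 2 gaze attributes with 4 values each.
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(60, 2))
y = rng.integers(0, 3, size=60)
model = CSAWNB(n_classes=3, n_attrs=2, n_values=4).fit(X, y)
# The penalty vector encodes assumed object priorities: boost intention 1, suppress 2.
print(model.predict(X[0], penalty=np.array([1.0, 1.5, 0.5])))

In this sketch the penalty factor enters the posterior multiplicatively, so a gazed-at object with high selection priority raises the score of the intentions it interprets; how the paper actually fuses the prior with the data-driven model may differ.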
               