Multi-sensor data fusion for sign language recognition based on dynamic Bayesian network and convolutional neural network

A new multi-sensor fusion framework for Sign Language Recognition (SLR) is proposed, based on a Convolutional Neural Network (CNN) and a Dynamic Bayesian Network (DBN). In this framework, a Microsoft Kinect, a low-cost RGB-D sensor, is used as the Human-Computer Interaction (HCI) device. In our method, color and depth videos are first collected with the Kinect; features are then extracted from all image sequences using the CNN. The color and depth feature sequences are fed into the DBN as observation data, and dynamic isolated signs are recognized by selecting the maximally likely sign under graph-model fusion. The proposed DBN + CNN SLR framework is tested on our dataset, where the highest recognition rate reaches 99.40%. The test results show that our approach is effective.
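To make the pipeline concrete, below is a minimal sketch of the CNN-features-into-temporal-model idea described in the abstract. Everything here is an assumption for illustration: the layer sizes of FrameCNN, the feature dimension, and the model parameters are invented, and a Gaussian-emission HMM forward pass stands in for the paper's DBN inference, with summed per-modality log-likelihoods standing in for its graph-model fusion. The paper's actual network architecture, DBN topology, and fusion rule are not specified here.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.special import logsumexp

class FrameCNN(nn.Module):
    """Per-frame feature extractor (hypothetical layer sizes)."""
    def __init__(self, in_channels, feat_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, frames):  # frames: (T, C, H, W), one video clip
        return self.fc(self.conv(frames).flatten(1))  # (T, feat_dim)

def log_gauss(x, means, variances):
    """Diagonal-Gaussian emission log-likelihood of one feature vector
    under each hidden state; means/variances have shape (S, D)."""
    return -0.5 * np.sum(np.log(2 * np.pi * variances)
                         + (x - means) ** 2 / variances, axis=-1)

def sequence_log_likelihood(obs, log_trans, means, variances):
    """HMM forward pass: log p(obs sequence | one sign model).
    A stand-in for DBN inference over the observation sequence."""
    S = means.shape[0]
    alpha = -np.log(S) + log_gauss(obs[0], means, variances)  # uniform prior
    for x in obs[1:]:
        alpha = log_gauss(x, means, variances) + \
                logsumexp(alpha[:, None] + log_trans, axis=0)
    return logsumexp(alpha)

# Toy usage: random clips stand in for Kinect color/depth streams, and
# random per-sign parameters stand in for trained model parameters.
rng = np.random.default_rng(0)
num_signs, S, D = 5, 3, 32
color_feats = FrameCNN(3)(torch.randn(40, 3, 64, 64)).detach().numpy()
depth_feats = FrameCNN(1)(torch.randn(40, 1, 64, 64)).detach().numpy()

def scores(feats):
    out = []
    for _ in range(num_signs):
        means = rng.normal(size=(S, D))
        variances = np.ones((S, D))
        log_trans = np.full((S, S), -np.log(S))  # uniform transitions
        out.append(sequence_log_likelihood(feats, log_trans,
                                           means, variances))
    return np.array(out)

# Fuse the two modalities by summing log-likelihoods per sign,
# then pick the best-scoring sign class.
fused = scores(color_feats) + scores(depth_feats)
print("predicted sign index:", int(np.argmax(fused)))
```

Summing log-likelihoods amounts to assuming the color and depth streams are conditionally independent given the sign; the paper's graph-model fusion may couple the two modalities more tightly than this simple product rule.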

Keywords: network; recognition; fusion; sensor; sign language

Journal Title: Multimedia Tools and Applications
Year Published: 2018
