
Multimodal Patient Satisfaction Recognition for Smart Healthcare



The inclusion of multimodal inputs improves the accuracy and dependability of smart healthcare systems. This paper proposes a user satisfaction monitoring system that uses multimodal inputs composed of users' facial images and speech. The smart healthcare system sends these multimodal inputs to the cloud, where they are processed and classified as fully satisfied, partly satisfied, or unsatisfied; the results are then sent to various stakeholders in the smart healthcare environment. During cloud processing, multiple image and speech features are extracted: directional derivatives are used for speech features and the Weber local descriptor for image features. The features are then combined into a multimodal signal, which is fed to a support vector machine (SVM) classifier. The proposed system achieves 93% accuracy for satisfaction detection.
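The feature-level fusion and SVM classification described in the abstract can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the feature dimensions, random stand-in data, and three-class label encoding are all assumptions, and the real system would extract directional-derivative speech features and Weber-local-descriptor image features rather than using synthetic vectors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the extracted features (dimensions are assumptions):
# speech features would come from directional derivatives, image features
# from the Weber local descriptor.
n_samples = 300
speech_features = rng.normal(size=(n_samples, 40))
image_features = rng.normal(size=(n_samples, 64))

# Labels: 0 = unsatisfied, 1 = partly satisfied, 2 = fully satisfied.
labels = rng.integers(0, 3, size=n_samples)

# Feature-level fusion: concatenate both modalities into one multimodal vector.
fused = np.hstack([speech_features, image_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.2, random_state=0
)

# Classify the fused multimodal signal with a support vector machine.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print(predictions.shape)  # one satisfaction label per test sample
```

With real features, each test prediction would map back to one of the three satisfaction levels reported to stakeholders.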

Keywords: multimodal inputs; smart healthcare; satisfaction; multimodal patient; healthcare

Journal Title: IEEE Access
Year Published: 2019


