Internet of Things (IoT) devices and technologies for smart city applications produce a vast amount of multimedia data (e.g., audio, video, image, text, and sensor data); such big data are difficult to handle with traditional techniques and algorithms. Emerging machine learning techniques have the potential to enable a new class of applications that can deal with such multimedia big data. Recently, activity recognition systems have suggested using multimedia data to detect daily actions, since it provides more accurate patterns, sidesteps emerging privacy complaints (in the case of audio-based data), and is able to operate on big data. In this paper, we propose a Deep Learning (DL) methodology for classifying audio data that is based on multilayer perceptron (MLP) neural networks. Our contributions are an efficient design of the network topology, including the hidden layers, the number of neurons, and the fitness function, and a methodology that produces a high-performance classifier in terms of accuracy and F-measure. Experiments were conducted on four large audio datasets collected to represent different modalities in a smart city. The results indicate that the proposed methodology achieves high performance compared to state-of-the-art machine learning techniques.
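The abstract does not specify the paper's actual network topology, fitness function, or datasets. The following is a minimal, hypothetical sketch of an MLP audio classifier evaluated with accuracy and F-measure, using scikit-learn and random feature vectors as stand-ins for the real audio features and the authors' chosen architecture.

```python
# Hypothetical sketch: an MLP audio classifier evaluated with accuracy and F-measure.
# The hidden-layer sizes and the synthetic feature vectors are illustrative only;
# the abstract does not describe the paper's actual topology or audio datasets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(42)

# Placeholder data: 2000 clips, 40-dimensional features (e.g., MFCC-like), 4 activity classes.
X = rng.normal(size=(2000, 40))
y = rng.integers(0, 4, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Multilayer perceptron with two hidden layers; hyperparameters are assumptions.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), activation="relu",
                    max_iter=300, random_state=0)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("accuracy :", accuracy_score(y_test, y_pred))
print("f-measure:", f1_score(y_test, y_pred, average="macro"))
```

In practice, the random feature matrix would be replaced by features extracted from the audio clips, and the topology and fitness function would follow the design procedure described in the full paper.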