On average, 3,700 people lose their lives on roads every day in car accidents caused by driver distraction. In this research, a hybrid approach based on deep learning is proposed to detect the driver's actions and eliminate the driver's distraction as a packaged solution. Detection is performed by analyzing the driver's actions and head pose. Elimination is achieved through voice commands built on trigger words, speech-to-text, and text classification models that give access to car functions such as the air-conditioning and radio. The driver-action classifier reaches 94.1% accuracy on the AUC distracted-driver benchmark, the state of the art on that benchmark. The command-to-text classifier reaches 95.19% accuracy, and the head pose estimator achieves a 6.21-degree mean absolute error (MAE) in face-angle estimation. By using our car-commands dataset, the speech recognition output is focused on the car-command domain. These algorithms benefit driver safety: the driver can operate the car's accessories by voice, the driver's alertness is monitored, and an alarm warns the driver when distraction is detected. This research does not, however, address the detection of retinal abnormalities such as sleeping with the eyes open. Real-time tests show a 0.080-second response time for driver-behavior classification and command following when graphics processing units are used.
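The distraction-elimination side is described as a pipeline: trigger-word detection, then speech-to-text, then classification of the command text, which finally drives a car function. The Python sketch below only illustrates that flow under stated assumptions; the function names, command labels, and the 0.8 confidence threshold are hypothetical, and the model calls are stubbed out rather than the authors' actual implementation.

```python
# Illustrative sketch of the voice-command pipeline from the abstract:
# trigger-word detection -> speech-to-text -> command text classification
# -> car-function dispatch. All names and thresholds are placeholders.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class CommandResult:
    label: str         # predicted command class, e.g. "radio_on"
    confidence: float  # classifier confidence in [0, 1]


def detect_trigger_word(audio_frame: bytes) -> bool:
    """Placeholder for a small keyword-spotting model."""
    return b"trigger" in audio_frame  # stand-in logic for the sketch


def speech_to_text(audio_frame: bytes) -> str:
    """Placeholder for the speech-to-text model."""
    return "turn on the radio"  # stand-in transcription


def classify_command(text: str) -> CommandResult:
    """Placeholder for a text classifier trained on a car-commands dataset."""
    return CommandResult(label="radio_on", confidence=0.95)


# Map predicted command classes to car-function callbacks (illustrative).
CAR_FUNCTIONS: Dict[str, Callable[[], None]] = {
    "radio_on": lambda: print("Radio switched on"),
    "ac_on": lambda: print("Air-conditioning switched on"),
}


def handle_audio(audio_frame: bytes) -> None:
    """Run the full pipeline on one captured audio frame."""
    if not detect_trigger_word(audio_frame):
        return
    text = speech_to_text(audio_frame)
    result = classify_command(text)
    if result.confidence > 0.8 and result.label in CAR_FUNCTIONS:
        CAR_FUNCTIONS[result.label]()


if __name__ == "__main__":
    handle_audio(b"trigger turn on the radio")
```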
               