Indonesian Dynamic Sign Language Recognition At Complex Background With 2D Convolutional Neural Networks

Sign language is the primary medium of communication for people with hearing and speech disabilities, so bridging the communication gap between them and hearing people has become significant. This research proposes a model for sign language recognition using the Microsoft Kinect and convolutional neural networks (CNNs). The proposed model succeeds in recognizing 10 dynamic Indonesian sign language words against complex backgrounds. The dataset contains a total of 100 gesture image sequences with color and depth data, performed by different users. The classifier consists of two CNNs and one ANN: the first CNN extracts hand features from the color data, while the other extracts hand features from the depth data. Training is carried out in three modes, applying drop-out and data augmentation, and achieves a highest validation accuracy of 81.60% and a test accuracy of 73.00%.
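The two-stream design described above (one CNN per modality, features fused by a small ANN with drop-out) can be sketched as follows. This is a minimal illustration in PyTorch under assumed layer sizes and input resolution; the paper's exact filter counts, layer depths, and image dimensions are not given in the abstract, so everything below except the overall structure (two CNN streams, concatenated features, drop-out, 10-class output) is a hypothetical choice.

```python
import torch
import torch.nn as nn


class StreamCNN(nn.Module):
    """One CNN branch extracting hand features from a single modality
    (color or depth). Filter counts are illustrative assumptions,
    not the paper's configuration."""

    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        # Flatten spatial feature maps into one feature vector per sample.
        return torch.flatten(self.features(x), start_dim=1)


class TwoStreamClassifier(nn.Module):
    """Two CNN streams (color and depth) whose features are concatenated
    and classified by an ANN with drop-out, mirroring the abstract's
    two-CNN + one-ANN structure."""

    def __init__(self, num_classes=10, img_size=64):
        super().__init__()
        self.color_cnn = StreamCNN(in_channels=3)  # RGB stream
        self.depth_cnn = StreamCNN(in_channels=1)  # depth stream
        # Per-stream feature size after two 2x2 poolings (img_size // 4).
        feat = 32 * (img_size // 4) ** 2
        self.ann = nn.Sequential(
            nn.Linear(2 * feat, 128),
            nn.ReLU(),
            nn.Dropout(0.5),  # drop-out regularization, as in the abstract
            nn.Linear(128, num_classes),
        )

    def forward(self, color, depth):
        fused = torch.cat(
            [self.color_cnn(color), self.depth_cnn(depth)], dim=1
        )
        return self.ann(fused)


model = TwoStreamClassifier()
color = torch.randn(2, 3, 64, 64)  # batch of 2 RGB frames (assumed size)
depth = torch.randn(2, 1, 64, 64)  # matching depth frames
logits = model(color, depth)
print(tuple(logits.shape))  # (2, 10): one score per sign-language class
```

In practice, data augmentation (random crops, flips, small rotations) would be applied to the training frames before they enter each stream, consistent with the training modes the abstract mentions.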
