Αναγνώριση της αγγλικής νοηματικής γλώσσας σε βίντεο με χρήση Συνελικτικών Αναδρομικών Δικτύων
English sign language recognition through the utilisation of Convolutional Recurrent Neural Networks
Hearing-impaired people use Sign Language (SL) as their primary means of communication. The language combines hand motions and facial expressions to form words and sentences. The relationship between the meaning of a sentence and its individual signs is, in some cases, complex and difficult to grasp, so learning the language is not easy for most people. Sign language courses are offered only at a limited number of institutes in large urban centres and are not taught in any public school. It should also be noted that visually impaired people have no way to communicate directly with deaf people. These are fundamental reasons for the need for automated translation of Sign Language into written words and sentences. This can be achieved with AI and machine learning, but the research community currently faces difficulties owing to the lack of large datasets and the differences between individual sign languages. The need to support the deaf community in its communication is therefore pressing, and engineers are seeking methods and means to meet it. One of the first challenges is machine recognition of the motions of a person signing in SL. Another is matching hand motions to the corresponding words. The main approach is to train Artificial Neural Networks and to continuously improve on every issue that arises, so exploring many different approaches and evaluating their results is crucial. This thesis pursues a small goal within this larger vision and aims to help other researchers translate SL into other languages, with the help of image classification and Deep Sequence Models.