Learning Programming for Hearing Impaired Students via an Avatar

Deaf and hearing-impaired students face many obstacles throughout their education, especially when learning applied sciences such as computer programming. In addition, there are no established signs in Arabic Sign Language for programming terminology such as while, for, case and switch. However, hearing disabilities should not be a barrier to study today, especially with the rapid growth of educational technology. In this paper, we develop an avatar-based system to teach computer programming to deaf and hearing-impaired students using Arabic Sign Language with a new sign vocabulary developed for computer programming education. The system was tested on a number of high school students, and the results showed the importance of visualization through the avatar in increasing deaf students' comprehension of programming concepts.

A Motion Dictionary for Real-Time Recognition of the Sign Language Alphabet Using Dynamic Time Warping and an Artificial Neural Network

Computational recognition of sign languages aims to enable greater social and digital inclusion of deaf people through computer interpretation of their language. This article presents a model for recognizing two global parameters of sign languages: hand configurations and hand movements. Hand motion is captured with an infrared sensor, and the hand joints are reconstructed in a virtual three-dimensional space. A Multilayer Perceptron (MLP) neural network classifies hand configurations, and Dynamic Time Warping (DTW) recognizes hand motion. Beyond the recognition method itself, we provide a dataset of hand configurations and motion captures built with the help of professionals fluent in sign languages. Although the technology can be used to translate signs from any sign dictionary, Brazilian Sign Language (Libras) was used as a case study. The model presented in this paper achieved a recognition rate of 80.4%.
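As a concrete illustration of the two recognizers described in this abstract, the sketch below shows a DTW-based nearest-neighbour lookup against a motion dictionary and an MLP classifier for static hand configurations. It is a minimal sketch, assuming hand joints arrive as fixed-length feature vectors per frame and using scikit-learn's MLPClassifier; the data layout and parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: DTW over a motion dictionary + MLP for hand configurations.
import numpy as np
from sklearn.neural_network import MLPClassifier

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Dynamic Time Warping distance between two motion sequences
    of shape (frames, features)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def classify_motion(query: np.ndarray, dictionary: dict) -> str:
    """Nearest-neighbour lookup of a query motion against a motion dictionary
    mapping sign labels to reference sequences."""
    return min(dictionary, key=lambda label: dtw_distance(query, dictionary[label]))

def train_configuration_classifier(X: np.ndarray, y: np.ndarray) -> MLPClassifier:
    """Hand configurations (static joint-feature vectors) classified with an MLP."""
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    clf.fit(X, y)
    return clf
```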

Factors Affecting Access to Education: The Experiences of Parents of Children Who Are Deaf or Hard of Hearing

The purpose of this research is to examine the experiences of parents of children who are deaf or hard of hearing in supporting their children to access education in Vietnam. Parents play a crucial role in supporting their children to gain full access to education, and it has been widely reported that these parents confront a range of problems in doing so. To the authors' best knowledge, there has been a lack of research in the literature exploring the experiences of these parents. This research examines the factors affecting these parents in supporting their children to access education. A qualitative approach using a phenomenological research design was chosen to explore the central phenomena. Ten parents of children who were diagnosed as deaf or hard of hearing and aged 6-9 years were recruited through the support of the Association of Parents of Children with Hearing Impairment. Participants were interviewed via telephone with a mix of open and closed questions; interviews were audio recorded, transcribed and thematically analysed. The results show nine main factors that affected the parents in this study in making decisions relating to their children's education, including: lack of information resources, the parents' perspectives on communication approaches, the families' financial capacity, the psychological impact on the participants after their children's diagnosis, the attitude of family members, the attitude of school administrators, the lack of local schools and qualified teachers, and the current education system for the deaf in Vietnam. Apart from those factors, the partners' limited knowledge of deaf education and the partners' employment are barriers to educational access and to successful communication with the child.

Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System

Sign Language (SL) is used by deaf people and by others who cannot speak, or who have difficulty with spoken languages due to a disability, even if they can hear. It is a visual gesture language that uses one or both hands, the arms, the face and the body to convey meanings and thoughts. An SL automation system provides an interface for communicating with hearing people through a computer. In this paper, an avatar-based dictionary is proposed for a text-to-Indian Sign Language (ISL) generation system. This work also presents a literature review of the SL corpora available for various sign languages over the years. An ISL generation system requires a written form of SL, and several notation techniques are available for writing SL. The system uses the Hamburg Notation System (HamNoSys) and the Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using the Web Graphics Library (WebGL) for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both English and Hindi. This dictionary is used as a database to associate signs with words or phrases of a spoken language. It provides an admin-panel interface to manage the dictionary, i.e., modification, addition, or deletion of a word. Through this interface, HamNoSys notations can be created and stored in a database, and these notations can be manually converted into corresponding SiGML files. The system takes a natural-language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains such as healthcare, media, educational institutes, commercial sectors and transportation services. This work will help researchers understand the various techniques used for writing SL and for generating sign language systems.
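To make the dictionary-lookup step concrete, the following minimal sketch (in Python rather than the paper's PHP) maps each input word to a pre-authored SiGML fragment, converted manually from its HamNoSys notation, and wraps the fragments of a sentence in a single SiGML document for the avatar player. The whitespace tokenizer, the example entries and the fragment contents are hypothetical assumptions, not the system's actual dictionary.

```python
# Minimal sketch: word-to-SiGML lookup and document assembly for avatar playback.
from typing import Dict, List

def build_sigml(sentence: str, dictionary: Dict[str, str]) -> str:
    """Look up each token in the multilingual ISL dictionary and wrap the
    matching SiGML fragments in a single <sigml> document."""
    tokens: List[str] = sentence.lower().split()
    fragments = [dictionary[t] for t in tokens if t in dictionary]
    body = "\n".join(fragments)
    return f"<sigml>\n{body}\n</sigml>"

# Hypothetical entries; real fragments would be produced from HamNoSys notation.
isl_dictionary = {
    "hello": "<hns_sign gloss='hello'>...</hns_sign>",
    "teacher": "<hns_sign gloss='teacher'>...</hns_sign>",
}
print(build_sigml("Hello teacher", isl_dictionary))
```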

Contributions of Non-Formal Educational Spaces for the Scientific Literacy of Deaf Students

The school is a social institution that should promote learning situations that last throughout life. On this basis, teaching activities in museum spaces can represent an educational strategy that contributes to the learning process in a more meaningful way. This article systematizes a series of elements that guide the use of these spaces for the scientific literacy of deaf students and shows how experiences of this nature favor school development through the concept of circularity. The study also reflects on a methodology for the didactic use of these spaces of non-formal education and on how such environments can contribute to learning in the classroom: developing in students the idea of association, leading them to create connections with the curricular proposal and to notice how the proposed activity is articulated. Our interest is that the experience lived in the museum be shared, contributing to the construction of scientific literacy and cultural identity through research.

Hand Gesture Detection via EmguCV Canny Pruning

Hand gesture recognition is a technique used to locate, detect, and recognize hand gestures. Detection and recognition are concepts of Artificial Intelligence (AI), applicable in Human-Computer Interaction (HCI), expert systems (ES), and other fields. Hand gesture recognition can be used for sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and people with speech disorders, who face communication barriers when interacting with others. This research aims to build a hand gesture recognition system for interpretation between Lesotho's Sesotho and English, helping to bridge the communication problems encountered by these communities. The system consists of several processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object; the proposed system uses Haar cascade detection with Canny pruning, which applies Canny edge detection, an optimal edge-detection algorithm, to discard regions unlikely to contain a hand. The system also employs a skin detection algorithm that performs background subtraction and computes the convex hull and centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results show that time, distance, and light affect the rate of detection and ultimately recognition: the detection rate is directly proportional to the distance of the hand from the camera, and, across the different lighting conditions considered, the higher the light intensity, the faster the detection. Based on these results, the applied methodologies are efficient and provide a plausible solution towards a light-weight, inexpensive system for sign language interpretation.
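The detection pipeline described here can be sketched with the underlying OpenCV routines (EmguCV is a .NET wrapper around OpenCV). The Python sketch below shows Haar cascade detection with the Canny-pruning flag plus the skin-mask, convex-hull and centroid step; the cascade file name and the skin-colour thresholds are assumptions for illustration, not the paper's values.

```python
# Python/OpenCV sketch of Haar detection with Canny pruning and skin analysis.
import cv2
import numpy as np

# Hypothetical trained cascade for hands; Canny pruning skips low-edge regions.
cascade = cv2.CascadeClassifier("hand_cascade.xml")

def detect_hands(frame_bgr: np.ndarray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=4,
        flags=cv2.CASCADE_DO_CANNY_PRUNING)

def skin_hull_centroid(frame_bgr: np.ndarray):
    """Skin segmentation in YCrCb, then the largest contour's convex hull
    and centroid (assumed thresholds)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None, None
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)
    m = cv2.moments(hand)
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])) if m["m00"] else None
    return mask, hull, centroid
```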

Family Functionality in Mexican Children with Congenital and Non-Congenital Deafness

A total of 100 primary caregivers (mothers, fathers, grandparents) with at least one child or grandchild diagnosed with congenital bilateral profound deafness were assessed in order to evaluate the functionality of families with a deaf member, who was evaluated by specialists in audiology, molecular biology, genetics and psychology. After confirmation of the clinical diagnosis, DNA from the patients and parents was analyzed for the 35delG deletion of the GJB2 gene to determine who carried the mutation. All primary caregivers were provided psychological support, regardless of whether or not they carried the mutation, and the family APGAR test was applied before and after the intervention. All parents and grandparents were informed of the results of the genetic analysis during the psychological intervention. After psychological and genetic counseling, the family APGAR showed that 14% perceived their families as functional, 62% as moderately functional and 24% as dysfunctional. This shows the importance of psychological support to family functionality, which has a direct impact on the quality of life of these families.

The Effect of a Table Tennis Training Intervention on Static Balance in Deaf Children

Children with hearing impairment have deficits in balance and motor skills. Although most parents teach deaf children communication skills early in life, they rarely address balance deficits. The purpose of this study was to investigate whether static balance improved after table tennis training. Table tennis training was provided four times a week for eight weeks to two 12-year-old deaf children. The training included crossover footwork, sideway attack, backhand block-sideways-flutter forehand attack, and one-on-one tight training. Data were gathered weekly and statistical comparisons were made with a paired t-test. We observed that the dominant leg performed better than the non-dominant leg in static balance and that the girl's balance ability was better than the boy's. The final result shows that table tennis training significantly improves deaf children's static balance performance, indicating that table tennis training helps deaf children's static balance ability.
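A minimal sketch of the paired comparison mentioned above, using SciPy's paired t-test on hypothetical weekly balance scores before and after the training block; the numbers are placeholders, not the study's data.

```python
# Paired t-test on hypothetical pre/post balance scores (placeholder data).
from scipy import stats

pre_training  = [12.1, 11.8, 13.0, 12.4, 11.9, 12.6, 12.2, 12.8]  # weekly scores
post_training = [14.3, 13.9, 14.8, 14.1, 13.7, 14.5, 14.0, 14.9]

t_stat, p_value = stats.ttest_rel(pre_training, post_training)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant change
```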

An Inclusion Project for Deaf Children in a Northern Italy Context

84 deaf students (from primary school to college) and their families participated in this inclusion project in cooperation with numerous institutions in northern Italy (Brescia, Lombardy). Participants were either congenitally deaf or their deafness was related to other pathologies. This research promoted the integration of deaf students as they pass from primary school to high school to college. Learning methods and processes were studied that focused on encouraging individual autonomy and socialization. The research team and its collaborators included school teachers, speech therapists, psychologists and home tutors, as well as teaching assistants, child neuropsychiatrists and other external authorities involved with social inclusion programs for deaf persons. Deaf children and their families were supported, in terms of inclusion, and were made aware of the research team's focus on the Bisogni Educativi Speciali (BES, Special Educational Needs) framework (L.170/2010 - DM 5669/2011). The project included a diagnostic and evaluative phase as well as an operational one. Results demonstrated that deaf children were highly satisfied and confident; academic performance improved and collaboration in school increased. Deaf children felt that they had access to high school and college. Empowerment of the families of deaf children, in terms of networking among the local services that deal with the deaf, improved, as did family satisfaction. We found that teachers and those who supported deaf children increased their professional skills. Achieving autonomy and instrumental, communicative and relational abilities also proved crucial. Project success was determined by temporal continuity, a clear theoretical methodology, a strong alliance with the project direction and a resilient team response.

Development of Personal and Social Identity in Immigrant Deaf Adolescents

Identity development in adolescence is characterized by many risks and challenges, and it becomes even more complex in the situation of migration and deafness. In particular, the condition of second-generation migrant adolescents involves negotiating the family context, in which everybody speaks one language and deals with a specific culture (usually the parents' and relatives' original culture); the social context (school, peer groups, sports groups), where a foreign language is spoken and a new culture is encountered; and finally the context of the “deaf” world. It is a dialectic involving unresolved differences that have to be worked through in a discontinuous process, whose outcomes and chances depend on how the themes of growth and development, culture and deafness are elaborated. This paper aims to underline the problems and opportunities in each issue that immigrant deaf adolescents must deal with. In particular, it highlights the importance of a multifactorial approach for the analysis of personal resources (both intra-psychic and relational), the level of integration of the family of origin in the migration context, the elaboration of the migration event, and finally the tractability of the condition of deafness. Some psycho-educational support objectives are also highlighted for the identity development of deaf immigrant adolescents, with particular emphasis on building the adolescents' abilities to decode complex emotions, to develop self-esteem and to think critically about the inevitable attempts to build their identity. Of particular importance is the construction of flexible settings that support adolescents in a supple, “decentralized” way, in order to avoid the regressive defenses that do not allow the development of an authentic self.

Evaluation of Cognitive Benefits among Differently Abled Subjects with Video Game as Intervention

In this study, the potential benefits of playing an action video game among congenitally deaf and dumb subjects are reported in terms of EEG ratio indices. The frontal and occipital lobes are associated with the development of motor skills, cognition, visual information processing and color recognition. Sixteen hours of first-person shooter action video game play resulted in an increase of the ratios β/(α+θ) and β/θ in the frontal and occipital lobes. This can be attributed to the enhancement of certain aspects of cognition among deaf and dumb subjects.
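For clarity, the ratio indices can be computed from band powers estimated with Welch's method, as in the minimal sketch below; the sampling rate and band limits (θ 4-8 Hz, α 8-13 Hz, β 13-30 Hz) are common conventions assumed here, not values taken from the paper.

```python
# Sketch: EEG band powers via Welch's method and the two ratio indices.
import numpy as np
from scipy.signal import welch

def band_power(signal: np.ndarray, fs: float, low: float, high: float) -> float:
    """Integrated power spectral density in the band [low, high) Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= low) & (freqs < high)
    return float(np.trapz(psd[mask], freqs[mask]))

def ratio_indices(eeg_channel: np.ndarray, fs: float = 256.0) -> dict:
    """Compute beta/(alpha+theta) and beta/theta for one EEG channel."""
    theta = band_power(eeg_channel, fs, 4, 8)
    alpha = band_power(eeg_channel, fs, 8, 13)
    beta = band_power(eeg_channel, fs, 13, 30)
    return {"beta/(alpha+theta)": beta / (alpha + theta),
            "beta/theta": beta / theta}
```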

Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

The cochlear implant (CI), whose implantation has become a routine procedure over the last decades, is an electronic device that provides a sense of sound to patients who are severely or profoundly deaf. The success of the implantation depends on the electrode technology and deep insertion techniques. However, the manual insertion procedure may cause mechanical trauma that can lead to severe destruction of the delicate intracochlear structures. Accordingly, future improvement of cochlear electrode insertion requires reducing the excessive force applied during implantation, which causes tissue damage and trauma. This study examined the tool-tissue interaction of a large-scale prototype digit, embedded with a distributive tactile sensor and modelled on a cochlear electrode, inserted into a large-scale cochlea phantom simulating the human cochlea, with a view to deriving the requirements for a small-scale digit. The digit, with distributive tactile sensors embedded in a silicon substrate, was inserted into the cochlea phantom to measure digit/phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation and multiple contacts. The tests demonstrated that even devices of such a relatively simple, low-cost design have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thus controlling the insertion through sensing and control of the implant tip. With this approach, the surgeon could minimize the tissue damage and the potential damage to the delicate structures within the cochlea caused by current manual electrode insertion. The approach can also be applied to other minimally invasive surgery applications as well as diagnosis and path navigation procedures.

Video Quality Assessment Using a Visual Attention Approach for Sign Language

Visual information is very important in human perception of the surrounding world, and video is one of the most common ways to capture it. Video has many benefits and is used in various applications: for the most part it brings entertainment and relaxation, but it can also improve the quality of life of deaf people. Visual information is crucial for hearing-impaired people because it allows them to communicate in person using sign language, and some parts of the person being spoken to are more important than others (e.g. the hands and face). Information about the visually relevant parts of the image therefore allows us to design an objective metric for this specific case. In this paper, we present an example of an objective metric based on human visual attention and the detection of salient objects in the observed scene.
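As an illustration of such an attention-based metric, the sketch below weights per-pixel errors by a saliency map that emphasises the perceptually relevant regions (e.g., the signer's hands and face) before computing a PSNR-style score. How the saliency map is obtained (a saliency model or a hand/face detector) is left open, and the weighting scheme shown is an assumption, not the paper's exact metric.

```python
# Sketch: saliency-weighted PSNR for grayscale video frames.
import numpy as np

def weighted_psnr(reference: np.ndarray, distorted: np.ndarray,
                  saliency: np.ndarray, max_value: float = 255.0) -> float:
    """Saliency-weighted PSNR; all arrays are 2-D grayscale frames of the same
    size, with saliency values in [0, 1]."""
    weights = saliency / (saliency.sum() + 1e-12)   # normalise weights to sum to 1
    err = (reference.astype(np.float64) - distorted.astype(np.float64)) ** 2
    wmse = float((weights * err).sum())             # attention-weighted MSE
    return 10.0 * np.log10(max_value ** 2 / (wmse + 1e-12))
```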

The Effect of Ambient Occlusion Shading on Perception of Sign Language Animations

The goal of the study reported in this paper was to determine whether Ambient Occlusion Shading (AOS) has a significant effect on users' perception of American Sign Language (ASL) finger-spelling animations. Seventy-one (71) subjects participated in the study; all were fluent in ASL. The participants were asked to watch forty (40) sign language animation clips representing twenty (20) finger-spelled words. Twenty (20) clips did not use ambient occlusion, whereas the other twenty (20) were rendered with ambient occlusion shading. After viewing each animation, subjects were asked to type the word being finger-spelled and rate its legibility. Findings show that the presence of AOS had a significant effect on the subjects' perception of the signed words: subjects recognized the animated words rendered with AOS with a higher level of accuracy, and the legibility ratings of the animations with AOS were consistently higher across subjects.

3D Rendering of American Sign Language Finger-Spelling: A Comparative Study of Two Animation Techniques

In this paper we report a study aimed at determining the most effective animation technique for representing ASL (American Sign Language) finger-spelling. Specifically, we compare two commonly used 3D computer animation methods (keyframe animation and motion capture) to ascertain which technique produces the most 'accurate', 'readable', and 'close to actual signing' (i.e. realistic) rendering of ASL finger-spelling. To accomplish this goal we developed 20 animated clips of finger-spelled words and designed an experiment consisting of a web survey with rating questions. 71 subjects aged 19-45 participated in the study. Results showed that recognition of the words was correlated with the method used to animate the signs. In particular, the keyframe technique produced the most accurate representation of the signs (i.e., participants were more likely to identify the words correctly in keyframed sequences than in motion-captured ones). Further, the animation method had an effect on the reported scores for readability and closeness to actual signing: the estimated marginal mean readability and closeness were greater for keyframed signs than for motion-captured signs. To our knowledge, this is the first study to measure and compare the accuracy, readability and realism of ASL animations produced with different techniques.

Video Quality Control Using a ROI and Two-Component Weighted Metrics

In this paper we propose a new content-weighted method for full-reference (FR) video quality control using a region of interest (ROI) and two-component weighted metrics for deaf people video communication. In our approach, an image is partitioned into a region of interest and a "dry-as-dust" region, and the region of interest is then partitioned into two parts: edges and background (smooth regions). In contrast, other methods (metrics), including those we proposed previously, combine and weight three or more components such as edges, edge errors, texture, smooth regions, blur and block distance. We also build on the idea that different image regions in deaf people video communication have different perceptual significance for quality: intensity edges certainly carry considerable image information and are perceptually significant.
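The two-component idea can be sketched as follows: within the ROI, edge pixels (found with a Canny detector on the reference) and smooth background pixels contribute to the error with different weights. The Canny thresholds and the component weights below are illustrative assumptions, not the paper's parameters.

```python
# Sketch: two-component (edge vs. smooth) weighted MSE over an ROI.
import cv2
import numpy as np

def two_component_mse(ref_roi: np.ndarray, dist_roi: np.ndarray,
                      w_edge: float = 0.7, w_smooth: float = 0.3) -> float:
    """Weighted MSE over an 8-bit grayscale ROI split into edge and smooth regions."""
    edges = cv2.Canny(ref_roi, 100, 200) > 0          # edge mask from the reference
    err = (ref_roi.astype(np.float64) - dist_roi.astype(np.float64)) ** 2
    mse_edge = err[edges].mean() if edges.any() else 0.0
    mse_smooth = err[~edges].mean() if (~edges).any() else 0.0
    return w_edge * mse_edge + w_smooth * mse_smooth
```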

Working Memory Capacity in Australian Sign Language (Auslan)/English Interpreters and Deaf Signers

Little research has examined working memory capacity (WMC) in signed language interpreters and deaf signers. This paper presents the findings of a study that investigated WMC in professional Australian Sign Language (Auslan)/English interpreters and deaf signers. Thirty-one professional Auslan/English interpreters (14 hearing native signers and 17 hearing non-native signers) completed an English listening span task and then an Auslan working memory span task, which tested their English WMC and their Auslan WMC, respectively. Moreover, 26 deaf signers (6 deaf native signers and 20 deaf non-native signers) completed the Auslan working memory span task. The results revealed a non-significant difference between the hearing native signers and the hearing non-native signers in their English WMC, and a non-significant difference between the hearing native signers and the hearing non-native signers in their Auslan WMC. Moreover, the results yielded a non-significant difference between the hearing native signers' English WMC and their Auslan WMC, and a non-significant difference between the hearing non-native signers' English WMC and their Auslan WMC. Furthermore, a non-significant difference was found between the deaf native signers and the deaf non-native signers in their Auslan WMC.

Intelligibility of Cued Speech in Video

This paper discusses cued speech recognition methods in videoconferencing. Cued speech is a specific gesture language used for communication between deaf people. We define criteria for sentence intelligibility based on the answers of the test subjects (deaf people). In our tests we use 30 sample videos coded with the H.264 codec at various bit-rates and various speeds of cued speech. Additionally, we define criteria for consonant sign recognizability in the single-handed finger alphabet (dactyl), by analogy with acoustics, using another 12 sample videos coded with the H.264 codec at various bit-rates in four different video formats. To interpret the results we apply the standard scale for subjective video quality evaluation and the percentage evaluation of intelligibility used in acoustics. From the results we derive minimum coded bit-rate recommendations for each spatial resolution.
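A minimal sketch of how per-condition intelligibility scores could be turned into minimum bit-rate recommendations: intelligibility is the percentage of correctly understood sentences for each (resolution, bit-rate) pair, and the recommendation is the smallest bit-rate reaching a chosen threshold. The threshold value and the data layout are assumptions for illustration, not the paper's procedure.

```python
# Sketch: intelligibility percentage and minimum bit-rate recommendation.
from typing import Dict, Optional, Tuple

def intelligibility(correct: int, total: int) -> float:
    """Percentage of correctly understood sentences."""
    return 100.0 * correct / total

def minimum_bitrate(results: Dict[Tuple[str, int], float], resolution: str,
                    threshold: float = 80.0) -> Optional[int]:
    """results maps (resolution, bitrate_kbps) -> intelligibility in percent;
    return the smallest bit-rate meeting the threshold, or None."""
    rates = sorted(rate for (res, rate) in results if res == resolution)
    for rate in rates:
        if results[(resolution, rate)] >= threshold:
            return rate
    return None
```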

Trispectral Analysis of Voiced Sounds in Defective Audition and Tracheotomy Cases

This paper presents the cepstral and trispectral analysis of speech signals produced by normal men, by men with defective audition (deaf, profoundly deaf) and by others affected by tracheotomy; the trispectral analysis is based on parametric autoregressive (AR) methods using the fourth-order cumulant. These analyses are used to detect and compare the pitch and the formants of the corresponding voiced sounds (the vowels \a\, \i\ and \u\). The first results appear promising: after several experiments, there seems to be no deformation of the spectrum, as one might have supposed at the outset. However, these pathologies influence the two characteristics differently: defective audition affects the formants, whereas tracheotomy affects the fundamental frequency (pitch).
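As a pointer to the cepstral part of the analysis, the sketch below estimates the fundamental frequency (pitch) of a voiced frame from the real cepstrum; the window, sampling rate and pitch search range (60-400 Hz) are common assumptions rather than values from the paper, and the trispectral (fourth-order cumulant AR) stage is not shown.

```python
# Sketch: cepstrum-based pitch estimation for a voiced speech frame.
import numpy as np

def cepstral_pitch(frame: np.ndarray, fs: float,
                   f_min: float = 60.0, f_max: float = 400.0) -> float:
    """Estimate the fundamental frequency via a peak in the real cepstrum."""
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    q_min, q_max = int(fs / f_max), int(fs / f_min)   # quefrency search range
    peak = q_min + int(np.argmax(cepstrum[q_min:q_max]))
    return fs / peak
```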

A Virtual Learning Environment for Deaf Children: Design and Evaluation

The objective of this research is the design and evaluation of an immersive Virtual Learning Environment (VLE) for deaf children. Recently we developed a prototype immersive VR game to teach sign language mathematics to deaf students in grades K-4 [1] [2]. In this paper we describe a significant extension of the prototype application. The extension includes: (1) user-centered design and implementation of two additional interactive environments (a clock store and a bakery), and (2) user-centered evaluation, including the development of user tasks, expert panel-based evaluation, and formative evaluation. This paper is one of the few to focus on the importance of user-centered, iterative design in VR application development and to describe a structured evaluation method.