Multi-Modal Visualization of Working Instructions for Assembly Operations

Growing individualization and higher numbers of variants in industrial assembly products raise the complexity of manufacturing processes. Technical assistance systems that consider both procedural and human factors can increase product quality and reduce required learning times by supporting workers with precise working instructions. Because workers' needs vary, however, presenting working instructions poses several challenges. This paper presents an approach for a multi-modal visualization application to support assembly work on complex parts. Our approach is integrated within an interconnected assistance-system network and supports the presentation of cloud-streamed textual instructions, images, videos, 3D animations, and audio files, along with multi-modal user interaction, a customizable UI, multi-platform support (e.g. tablet PC, TV screen, smartphone, or Augmented Reality devices), automated text translation, and speech synthesis. The worker benefits from more accessible, up-to-date instructions presented in an easy-to-read way.
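
The abstract does not detail the system's data model, but as a rough sketch, a cloud-streamed instruction step supporting several modalities and device classes might be represented as follows (a minimal Python illustration; all names and the device-filtering rule are hypothetical assumptions, not taken from the paper):

```python
from dataclasses import dataclass, field
from enum import Enum

class Modality(Enum):
    TEXT = "text"
    IMAGE = "image"
    VIDEO = "video"
    ANIMATION_3D = "3d_animation"
    AUDIO = "audio"

@dataclass
class InstructionAsset:
    modality: Modality
    uri: str              # cloud location the client streams from
    language: str = "en"  # target of automated translation / speech synthesis

@dataclass
class WorkingInstruction:
    step_id: int
    title: str
    assets: list[InstructionAsset] = field(default_factory=list)

    def assets_for(self, device: str) -> list[InstructionAsset]:
        """Keep only the assets a device class can reasonably render,
        e.g. skip 3D animations on a small smartphone screen."""
        if device == "smartphone":
            return [a for a in self.assets if a.modality is not Modality.ANIMATION_3D]
        return self.assets
```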

Further the Future: The Exploratory Study in 3D Animation Marketing Trend and Industry in Thailand

Lately, many media organizations in Thailand have started to produce 3D animation, so the qualifications required of personnel should be identified. As instructors in a school of Animation and Multimedia, the researchers must prepare students to suit the needs of the industry. The current study used an exploratory research design to establish knowledge about this issue, including the required qualifications of employees and the potential of the animation industry in Thailand. The interview sessions involved three key informants from three well-known organizations. The interview data were used to design a questionnaire for the confirmation phase. The overall results showed that the industry needed individuals with 3D animation skills, computer graphics skills, good communication skills, a strong sense of responsibility, and the ability to finish projects on time. Moreover, it was also found that 3D animation is currently involved in various kinds of media, such as films, TV variety shows, TV advertising, online advertising, and mobile applications.

EMOES: Eye Motion and Ocular Expression Simulator

We introduce EMOES, a new interactive 3D simulation system of ocular motion and expressions suitable for: (1) character animation applications in game design, film production, HCI (Human-Computer Interface), conversational animated agents, and virtual reality; (2) medical applications (ophthalmic, neurological, and muscular pathologies: research and education); and (3) real-time simulation of unconscious cognitive and emotional responses (for use, e.g., in psychological research). The system comprises: (1) a physiologically accurate, parameterized 3D model of the eyes, eyelids, and eyebrow regions; and (2) a prototype device for real-time control of eye motions and expressions, including unconsciously produced expressions, for applications as in (1), (2), and (3) above. The 3D eye simulation system, created using state-of-the-art computer animation technology and optimized for use on an interactive, web-deliverable platform, is, to our knowledge, the most advanced and realistic available so far for applications in character animation and medical pedagogy.

3D Rendering of American Sign Language Finger-Spelling: A Comparative Study of Two Animation Techniques

In this paper we report a study aimed at determining the most effective animation technique for representing ASL (American Sign Language) finger-spelling. Specifically, the study compares two commonly used 3D computer animation methods (keyframe animation and motion capture) to ascertain which technique produces the most accurate, readable, and 'close to actual signing' (i.e. realistic) rendering of ASL finger-spelling. To accomplish this goal we developed 20 animated clips of finger-spelled words and designed an experiment consisting of a web survey with rating questions. Seventy-one subjects, ages 19-45, participated in the study. Results showed that recognition of the words was correlated with the method used to animate the signs. In particular, the keyframe technique produced the most accurate representation of the signs (i.e., participants were more likely to identify the words correctly in keyframed sequences than in motion-captured ones). Further, findings showed that the animation method had an effect on the reported scores for readability and closeness to actual signing: the estimated marginal mean readability and closeness were greater for keyframed signs than for motion-captured signs. To our knowledge, this is the first study aimed at measuring and comparing the accuracy, readability, and realism of ASL animations produced with different techniques.

The Overall Aspects of E-Learning: Issues, Developments, Opportunities and Challenges

Rapid strides made in the field of Information and Communication Technology (ICT) have facilitated the development of teaching and learning methods and prepared them to serve the needs of diverse educational institutions. In other words, the information age has redefined the fundamentals and transformed institutions and methods of service delivery forever. The vision is that the desired transformation of teaching and learning can proceed through e-learning. E-learning commonly refers to the use of networked information and communications technology in teaching and learning practice. This paper deals with the general aspects of e-learning: its issues, developments, opportunities, and challenges for higher-education institutions.

3D Simulator of Ocular Motion and Expression

We introduce a new interactive 3D simulator of ocular motion and expressions suitable for: (1) character animation applications in game design, film production, HCI (Human-Computer Interface), conversational animated agents, and virtual reality; (2) medical applications (ophthalmic, neurological, and muscular pathologies: research and education); and (3) real-time simulation of unconscious cognitive and emotional responses (for use, e.g., in psychological research). Using state-of-the-art computer animation technology, we have modeled and rigged a physiologically accurate 3D model of the eyes, eyelids, and eyebrow regions and optimized it for use on an interactive, web-deliverable platform. In addition, we have realized a prototype device for real-time control of eye motions and expressions, including unconsciously produced expressions, for applications as in (1), (2), and (3) above. The 3D simulator of eye motion and ocular expression is, to our knowledge, the most advanced and realistic available so far for applications in character animation and medical pedagogy.
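
Neither of the two abstracts above specifies the parameterization of the eye model. As a heavily simplified sketch only, a parameterized eye rig with an involuntary-blink channel could look like the following (Python; every name, range, and the blink timing are illustrative assumptions, not the authors' model):

```python
import math
from dataclasses import dataclass

@dataclass
class OcularPose:
    gaze_yaw: float      # degrees, positive = looking right
    gaze_pitch: float    # degrees, positive = looking up
    lid_aperture: float  # 0.0 = closed, 1.0 = fully open
    brow_raise: float    # 0.0 = neutral, 1.0 = fully raised

def blink_aperture(t: float, period: float = 4.0, duration: float = 0.25) -> float:
    """Periodic involuntary blink: eyelid aperture as a function of time.
    Outside the blink window the eye stays open; inside it, the lid
    closes and reopens along a smooth cosine curve."""
    phase = t % period
    if phase > duration:
        return 1.0
    return 0.5 + 0.5 * math.cos(2.0 * math.pi * phase / duration)

# conscious gaze combined with an unconscious mid-blink lid position
pose = OcularPose(gaze_yaw=12.0, gaze_pitch=-4.0,
                  lid_aperture=blink_aperture(t=4.1), brow_raise=0.2)
```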

An Efficient 3D Animation Data Reduction Using Frame Removal

Existing methods that store and reproduce the animation data of all frames, as in vertex animation, cannot be used in mobile device environments because they consume large amounts of memory. 3D animation data reduction methods aimed at solving this problem have therefore been studied extensively, and we propose a new method as follows. First, we find and remove the frames in which motion changes are small and store only the animation data of the remaining frames (those involving large motion changes). When the animation is played, the removed frame regions are reconstructed by interpolating the remaining frames. Our key contribution is to calculate the accelerations of the joints in individual frames, and the standard deviations of those accelerations, from the joint locations of the relevant 3D model in order to find and delete the frames in which motion changes are small. Our method can reduce data sizes by approximately 50% or more while providing quality that is not much lower than that of the original animations. Therefore, our method is expected to be useful in mobile device environments and other environments in which memory is limited.
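
As a rough sketch of the selection criterion described above: a per-frame motion score can be computed from second differences of joint positions (an approximation of joint acceleration) and the spread of those accelerations across joints; frames scoring below a threshold are dropped and later reconstructed by interpolation. The Python/NumPy code below is an illustrative reading of the abstract, not the authors' implementation; the threshold and the exact acceleration estimate are assumptions:

```python
import numpy as np

def select_keyframes(joints: np.ndarray, threshold: float) -> np.ndarray:
    """joints: (F, J, 3) joint positions over F frames.
    Returns sorted indices of frames to keep (large motion changes)."""
    # second finite difference approximates per-joint acceleration
    accel = joints[2:] - 2.0 * joints[1:-1] + joints[:-2]   # (F-2, J, 3)
    mag = np.linalg.norm(accel, axis=2)                     # (F-2, J)
    score = mag.std(axis=1)                                 # spread across joints
    keep = np.where(score > threshold)[0] + 1               # +1: diff offset
    # endpoints are always retained so interpolation stays bounded
    return np.unique(np.concatenate(([0], keep, [len(joints) - 1])))

def pose_at(joints: np.ndarray, keep: np.ndarray, t: float) -> np.ndarray:
    """Reconstruct the pose at frame time t by linearly interpolating
    between the two nearest retained frames."""
    hi = np.searchsorted(keep, t)
    if hi == 0:
        return joints[keep[0]]
    if hi == len(keep):
        return joints[keep[-1]]
    a, b = keep[hi - 1], keep[hi]
    w = (t - a) / (b - a)
    return (1.0 - w) * joints[a] + w * joints[b]
```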

A Virtual Learning Environment for Deaf Children: Design and Evaluation

The object of this research is the design and evaluation of an immersive Virtual Learning Environment (VLE) for deaf children. Recently we developed a prototype immersive VR game to teach sign language mathematics to deaf students in grades K-4 [1] [2]. In this paper we describe a significant extension of the prototype application. The extension includes: (1) user-centered design and implementation of two additional interactive environments (a clock store and a bakery), and (2) user-centered evaluation, including the development of user tasks, expert panel-based evaluation, and formative evaluation. This paper is one of the few to focus on the importance of user-centered, iterative design in VR application development and to describe a structured evaluation method.

Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data

We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match the 3D point clouds of consecutive frames independent of their resolution. Our new motion-vector-based dynamic alignment method then reconstructs a fully spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to exploit temporal coherence to faithfully reconstruct a 3D animation from RGB-D video data.
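
The abstract leaves the matching step abstract; as a hedged illustration only, matching feature points between consecutive frames and deriving per-point motion vectors could look like the NumPy sketch below (brute-force nearest-neighbour matching stands in for the paper's resolution-independent matching; all names are hypothetical):

```python
import numpy as np

def motion_vectors(feat_prev: np.ndarray, feat_next: np.ndarray) -> np.ndarray:
    """Match each feature point of frame t to its nearest neighbour in
    frame t+1 and return one motion vector per feature point.
    feat_prev: (N, 3), feat_next: (M, 3)."""
    d2 = ((feat_prev[:, None, :] - feat_next[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)          # index into feat_next for each point
    return feat_next[nearest] - feat_prev

def align_cloud(cloud: np.ndarray, feat_prev: np.ndarray,
                vectors: np.ndarray) -> np.ndarray:
    """Advect every point of the full-resolution cloud along the motion
    vector of its closest feature point, yielding a rough temporally
    aligned prediction of the next frame."""
    d2 = ((cloud[:, None, :] - feat_prev[None, :, :]) ** 2).sum(axis=2)
    return cloud + vectors[d2.argmin(axis=1)]
```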

3D Definition for Human Smiles

The study explored varied types of human smiles and extracted the key factors affecting them. These key factors were then converted into a set of control points that can serve 3D animators in creating facial expressions and can further be applied to facial simulation for robots in the future. First, hundreds of pictures of human smiles were collected and analyzed to identify the key factors of facial expression. Then, the factors were converted into a set of control points with sizing parameters calculated proportionally. Finally, two different faces were constructed to validate the parameters by simulating smiles of the same types as the originals.
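
The abstract does not give the concrete control set; the sketch below only illustrates the general idea of proportional sizing: smile control points are stored as offsets relative to face dimensions, so one smile definition can be re-applied to differently sized faces (Python; the point names and offset values are invented for illustration, not taken from the study):

```python
from dataclasses import dataclass

@dataclass
class ControlPoint:
    name: str
    dx: float  # horizontal offset as a fraction of mouth width
    dy: float  # vertical offset as a fraction of face height

# one hypothetical smile type; values are illustrative only
SMILE_TYPE_A = [
    ControlPoint("mouth_corner_L", dx=-0.08, dy=0.04),
    ControlPoint("mouth_corner_R", dx=0.08, dy=0.04),
    ControlPoint("upper_lip_mid", dx=0.00, dy=0.02),
    ControlPoint("lower_eyelid_L", dx=0.00, dy=0.01),
    ControlPoint("lower_eyelid_R", dx=0.00, dy=0.01),
]

def apply_smile(neutral: dict, mouth_width: float, face_height: float,
                points: list[ControlPoint]) -> dict:
    """Displace neutral landmark positions by control-point offsets scaled
    proportionally to the target face's dimensions, so the same smile
    definition fits faces of different sizes."""
    posed = dict(neutral)
    for p in points:
        x, y = neutral[p.name]
        posed[p.name] = (x + p.dx * mouth_width, y + p.dy * face_height)
    return posed
```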