Virtual Speaking Head for Hearing Impaired Students

The developed tool is part of a system of tools that gives hearing-impaired students easier access to various scientific areas and enables real-time interactive learning between lecturer and student. The lecturer is not required to know Sign Language (SL). Instead, the software translates regular speech into SL, which is then transferred to the student; in the other direction, the student's questions (in SL) are translated and transferred to the lecturer as text or speech. The tool presented here is one of these: it develops the correct speech visemes that form the root of a total communication method for hearing-impaired students.
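To illustrate the viseme step described above, the following is a minimal Python sketch of mapping a recognized phoneme sequence to viseme classes (mouth shapes) that could drive the virtual head's lip animation. The table, the phoneme groupings, and the function name are hypothetical assumptions for illustration only, not the tool's actual implementation.

# A minimal sketch, assuming a hypothetical phoneme-to-viseme table.
# Real systems group the ~40 English phonemes into roughly 10-20 visemes;
# the groupings below are illustrative, not the paper's implementation.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar",
    "aa": "open", "ae": "open",
    "iy": "spread", "ih": "spread",
    "uw": "rounded", "ow": "rounded",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence (e.g. from a speech recognizer) to the
    viseme sequence that drives the virtual head's mouth animation.
    Unknown phonemes fall back to a neutral mouth shape."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

if __name__ == "__main__":
    # "boot" -> /b uw t/: bilabial closure, rounded vowel, alveolar stop
    print(phonemes_to_visemes(["b", "uw", "t"]))
    # ['bilabial', 'rounded', 'alveolar']

In a full pipeline of the kind the paper describes, the viseme sequence would additionally carry timing from the recognizer and be smoothed for coarticulation before animating the head.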
