Papers by Programme:
Friday 11 September 2009 - 10:00-11:00
Mol, L., Krahmer, E., and Swerts, M.
Alignment in iconic gestures: does it make sense?
Sakamoto, S., Tanaka, A., Numuhata, S., Imai, A., Takagi, T., and Suzuki, Y.
Aging effect on audio-visual speech asynchrony perception:
comparison of time-expanded speech and a moving image of a talker's face
Cosi, P., and Tisato, G.
LW2A: an easy tool to transform voice WAV files into talking animations
Friday 11 September 2009 - 11:30-12:30
Fagel, S.
Effects of smiled speech on lips, larynx and acoustics
Jesse, A., and Janse, E.
Visual speech information aids elderly adults in stream segregation
Kyle, F., MacSweeney, M., Mohammed, T., and Campbell, R.
The development of speechreading in deaf and hearing children:
introducing a new Test of Child Speechreading (ToCS)
Friday 11 September 2009 - 13:30-14:30
Liu, K., and Ostermann, J.
An image-based talking head system
Krnoul, Z., and Zelezny, M.
The UWB 3D talking head text-driven system controlled
by the SAT method used for the LIPS 2009 challenge
Beskow, J., Salvi, G., and Al Moubayed, S.
SynFace - verbal and non-verbal face animation from audio
Wang, L., Han, W., Qian, X., and Soong, F.
HMM-based motion trajectory generation for speech animation synthesis
Friday 11 September 2009 - 14:30-15:30
Chetty, G., Goecke, R., and Wagner, M.
Audio-visual mutual dependency models for biometric liveness checks
Hisanaga, S., Sekiyama, K., Igasaki, T., and Murayama, N.
Audiovisual speech perception in Japanese and English -
inter-language differences examined by event-related potentials
Al Moubayed, S., and Beskow, J.
Effects of visual prominence cues on speech intelligibility
Friday 11 September 2009 - 16:00-17:50
Mattheyses, W., Latacz, L., and Verhelst, W.
Multimodal coherency issues in designing and optimizing audiovisual speech synthesis techniques
Haq, S., and Jackson, P.J.B.
Speaker-dependent audio-visual emotion recognition
Phillips, N.A., Baum, S., and Taler, V.
Audio-visual speech perception in mild cognitive impairment and healthy elderly controls
Kuratate, T., Ayers, K., Kim, J., Riley, M., and Burnham, D.
Are virtual humans uncanny?: varying speech, appearance and motion to
better understand the acceptability of synthetic humans
Kroos, C., and Hogan, K.
Visual influence on auditory perception: Is speech special?
Saturday 12 September 2009 - 10:00-11:00
Coulon, M., Guellai, B., and Streri, A.
Auditory-visual perception of talking faces at birth: a new paradigm
Do, C-T., Aissa-El-Bey, A., Pastor, D., and Goalic, A.
Area of mouth opening estimation from speech acoustics using blind deconvolution technique
Hilder, S., Harvey, R., and Theobald, B-J.
Comparison of human and machine-based lip-reading
Saturday 12 September 2009 - 11:30-12:30
Hoetjes, M., Krahmer, E., and Swerts, M.
Untying the knot between gestures and speech
Engwall, O., and Wik, P.
Can you tell if tongue movements are real or synthesized?
Lan, Y., Harvey, R., Theobald, B-J., Ong, E-J., and Bowden, R.
Comparing visual features for lipreading
Saturday 12 September 2009 - 13:30-15:40
Shochi, T., Sekiyama, K., Lees, N., Boyce, M., Goecke, R., and Burnham, D.
Auditory-visual infant directed speech in Japanese and English
Tanaka, A., Asakawa, K., and Imai, H.
Recalibration of audiovisual simultaneity in speech
Kolossa, D., Zeiler, S., Vorwerk, A., and Orglmeister, R.
Audiovisual speech recognition with missing or unreliable data
Winneke, A.H., and Phillips, N.A.
Older and younger adults use fewer neural resources during audiovisual than
during auditory speech perception
Eger, J., and Bothe, H-H.
Strategies and results for the evaluation of the naturalness of the LIPPS facial animation system
Davis, C., and Kim, J.
Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues
Saturday 12 September 2009 - 16:00-18:10
Almajai, I., and Milner, B.
Effective visually-derived Wiener filtering for audio-visual speech processing
Devergie, A., Berthommier, F., and Grimault, N.
Pairing audio speech and various visual displays: binding or not binding?
Wollermann, C., and Schroeder, B.
Effects of exhaustivity and uncertainty on audiovisual focus production
Takeuchi, S., Hashiba, T., Tamura, S., and Hayamizu, S.
Voice activity detection based on fusion of audio and visual information
Pachoud, S., Gong, S., and Cavallaro, A.
Space-time audio-visual speech recognition with multiple
multi-class probabilistic Support Vector Machines
Krnoul, Z.
Refinement of lip shape in sign speech synthesis