Papers by Author:
Aissa-El-Bey
Area of mouth opening estimation from speech acoustics using blind deconvolution technique
Al Moubayed, S.
Effects of visual prominence cues on speech intelligibility
SynFace - verbal and non-verbal face animation from audio
Almajai, I.
Effective visually-derived Wiener filtering for audio-visual speech processing
Asakawa, K.
Recalibration of audiovisual simultaneity in speech
Ayers, K.
Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of Synthetic Humans
Baum, S.
Audio-visual speech perception in mild cognitive impairment and healthy elderly controls
Berthommier, F.
Pairing audio speech and various visual displays: binding or not binding?
Beskow, J.
Effects of visual prominence cues on speech intelligibility
SynFace - verbal and non-verbal face animation from audio
Bothe, H-H.
Strategies and results for the evaluation of the naturalness of the LIPPS facial animation system
Bowden, R.
Comparing visual features for lipreading
Boyce, M.
Auditory-visual infant directed speech in Japanese and English
Burnham, D.
Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of Synthetic Humans
Auditory-visual infant directed speech in Japanese and English
Campbell, R.
The development of speechreading in deaf and hearing children: introducing a new Test of Child Speechreading (ToCS)
Cavallaro, A.
Space-time audio-visual speech recognition with multiple multi-class probabilistic Support Vector Machines
Chetty, G.
Audio-visual mutual dependency models for biometric liveness checks
Cosi, P.
LW2A: an easy tool to transform voice wav files into talking animations
Coulon, M.
Auditory-visual perception of talking faces at birth: a new paradigm
Davis, C.
Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues
Devergie, A.
Pairing audio speech and various visual displays: binding or not binding?
Do, C-T.
Area of mouth opening estimation from speech acoustics using blind deconvolution technique
Eger, J.
Strategies and results for the evaluation of the naturalness of the LIPPS facial animation system
Engwall, O.
Can you tell if tongue movements are real or synthesized?
Fagel, S.
Effects of smiled speech on lips, larynx and acoustics
Goalic, A.
Area of mouth opening estimation from speech acoustics using blind deconvolution technique
Goecke, R.
Audio-visual mutual dependency models for biometric liveness checks
Auditory-visual infant directed speech in Japanese and English
Gong, S.
Space-time audio-visual speech recognition with multiple multi-class probabilistic Support Vector Machines
Graziano, T.
LW2A: an easy tool to transform voice wav files into talking animations
Grimault, N.
Pairing audio speech and various visual displays: binding or not binding?
Guellai, B.
Auditory-visual perception of talking faces at birth: a new paradigm
Han, W.
HMM-based motion trajectory generation for speech animation synthesis
Haq, S.
Speaker-dependent audio-visual emotion recognition
Harvey, R.
Comparison of human and machine-based lip-reading
Comparing visual features for lipreading
Hashiba, T.
Voice activity detection based on fusion of audio and visual information
Hayamizu, S.
Voice activity detection based on fusion of audio and visual information
Hilder, S.
Comparison of human and machine-based lip-reading
Hisanaga, S.
Audiovisual speech perception in Japanese and English - inter-language differences examined by event-related potentials
Hoetjes, M.
Untying the knot between gestures and speech
Hogan, K.
Visual influence on auditory perception: Is speech special?
Igasaki, T.
Audiovisual speech perception in Japanese and English - inter-language differences examined by event-related potentials
Imai, A.
Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face
Imai, H.
Recalibration of audiovisual simultaneity in speech
Jackson, P.J.B.
Speaker-dependent audio-visual emotion recognition
Janse, E.
Visual speech information aids elderly adults in stream segregation
Jesse, A.
Visual speech information aids elderly adults in stream segregation
Kim, J.
Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of Synthetic Humans
Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues
Kolossa, D.
Audiovisual speech recognition with missing or unreliable data
Krahmer, E.
Alignment in iconic gestures: does it make sense?
Untying the knot between gestures and speech
Kroos, C.
Visual influence on auditory perception: Is speech special?
Krnoul, Z.
Refinement of lip shape in sign speech synthesis
The UWB 3D talking head text-driven system controlled by the SAT method used for the LIPS 2009 challenge
Kuratate, T.
Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of Synthetic Humans
Kyle, F.
The development of speechreading in deaf and hearing children: introducing a new Test of Child Speechreading (ToCS)
Lan, Y.
Comparing visual features for lipreading
Latacz, L.
Multimodal coherency issues in designing and optimizing audiovisual speech synthesis techniques
Lees, N.
Auditory-visual infant directed speech in Japanese and English
Liu, K.
An image-based talking head system
MacSweeney, M.
The development of speechreading in deaf and hearing children: introducing a new Test of Child Speechreading (ToCS)
Mattheyses, W.
Multimodal coherency issues in designing and optimizing audiovisual speech synthesis techniques
Milner, B.
Effective visually-derived Wiener filtering for audio-visual speech processing
Mohammed, T.
The development of speechreading in deaf and hearing children: introducing a new Test of Child Speechreading (ToCS)
Mol, L.
Alignment in iconic gestures: does it make sense?
Murayama, N.
Audiovisual speech perception in Japanese and English - inter-language differences examined by event-related potentials
Numuhata, S.
Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face
Ong, E-J.
Comparing visual features for lipreading
Orglmeister, R.
Audiovisual speech recognition with missing or unreliable data
Ostermann, J.
An image-based talking head system
Pachoud, S.
Space-time audio-visual speech recognition with multiple multi-class probabilistic Support Vector Machines
Pastor, D.
Area of mouth opening estimation from speech acoustics using blind deconvolution technique
Phillips, N.A.
Audio-visual speech perception in mild cognitive impairment and healthy elderly controls
Older and younger adults use fewer neural resources during audiovisual than during auditory speech perception
Qian, X.
HMM-based motion trajectory generation for speech animation synthesis
Riley, M.
Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of Synthetic Humans
Sakamoto, S.
Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face
Salvi, G.
SynFace - verbal and non-verbal face animation from audio
Schroeder, B.
Effects of exhaustivity and uncertainty on audiovisual focus production
Sekiyama, K.
Auditory-visual infant directed speech in Japanese and English
Audiovisual speech perception in Japanese and English - inter-language differences examined by event-related potentials
Shochi, T.
Auditory-visual infant directed speech in Japanese and English
Soong, F.
HMM-based motion trajectory generation for speech animation synthesis
Streri, A.
Auditory-visual perception of talking faces at birth: a new paradigm
Suzuki, Y.
Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face
Swerts, M.
Alignment in iconic gestures: does it make sense?
Untying the knot between gestures and speech
Takagi, T.
Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face
Takeuchi, S.
Voice activity detection based on fusion of audio and visual information
Tanaka, A.
Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face
Recalibration of audiovisual simultaneity in speech
Taler, V.
Audio-visual speech perception in mild cognitive impairment and healthy elderly controls
Tamura, S.
Voice activity detection based on fusion of audio and visual information
Theobald, B-J.
Comparison of human and machine-based lip-reading
Comparing visual features for lipreading
Verhelst, W.
Multimodal coherency issues in designing and optimizing audiovisual speech synthesis techniques
Vorwerk, A.
Audiovisual speech recognition with missing or unreliable data
Wagner, M.
Audio-visual mutual dependency models for biometric liveness checks
Wang, L.
HMM-based motion trajectory generation for speech animation synthesis
Wik, P.
Can you tell if tongue movements are real or synthesized?
Winneke, A.H.
Older and younger adults use fewer neural resources during audiovisual than during auditory speech perception
Wollermann, C.
Effects of exhaustivity and uncertainty on audiovisual focus production
Zeiler, S.
Audiovisual speech recognition with missing or unreliable data
Zelezny, M.
The UWB 3D talking head text-driven system controlled by the SAT method used for the LIPS 2009 challenge