1. Three puzzles of multimodal speech perception R. E. Remez
2. Visual speech perception L. E. Bernstein
3. Dynamic information for face perception K. Lander and V. Bruce
4. Investigating auditory-visual speech perception development D. Burnham and K. Sekiyama
5. Brain bases for seeing speech: fMRI studies of speechreading R. Campbell and M. MacSweeney
6. Temporal organization of cued speech production D. Beautemps, M.-A. Cathiard, V. Attina and C. Savariaux
7. Bimodal perception within the natural time-course of speech production M.-A. Cathiard, A. Vilain, R. Laboissiere, H. Loevenbruck, C. Savariaux and J.-L. Schwartz
8. Visual and audiovisual synthesis and recognition of speech by computers N. M. Brooke and S. D. Scott
9. Audiovisual automatic speech recognition G. Potamianos, C. Neti, J. Luettin and I. Matthews
10. Image-based facial synthesis M. Slaney and C. Bregler
11. A trainable videorealistic speech animation system T. Ezzat, G. Geiger and T. Poggio
12. Animated speech: research progress and applications D. W. Massaro, M. M. Cohen, M. Tabain, J. Beskow and R. Clark
13. Empirical perceptual-motor linkage of multimodal speech E. Vatikiotis-Bateson and K. G. Munhall
14. Sensorimotor characteristics of speech production G. Bailly, P. Badin, L. Reveret and A. Ben Youssef
Gerard Bailly is a Senior CNRS Research Director at the Speech and Cognition Department, GIPSA-Lab, University of Grenoble, where he is now Head of Department. Pascal Perrier is a Professor at GIPSA-Lab, University of Grenoble. Eric Vatikiotis-Bateson is Professor and Canada Research Chair in Linguistics and Cognitive Science in the Department of Linguistics at the University of British Columbia.