Articulatory models of the vocal tract are valuable tools for exploring the articulation of speech sounds, studying the interactions among articulators during speech production, and controlling the geometry of the vocal tract with a small number of parameters during articulatory speech synthesis. The talk starts by presenting the different classes of articulatory models and what can be expected from them. It then focuses on a model derived from medical images of the vocal tract, which guarantees that the shapes produced are sufficiently realistic from an anatomical point of view. The talk then concentrates on the approach used to control all the articulators (mandible, tongue, lips, velum, larynx, and epiglottis) and, in particular, on how their interactions have been modeled, either via statistical interdependence or via detection of collisions and clipping. The computation of the area function and its exploitation in articulatory synthesis conclude the talk.
CNRS senior scientist working at LORIA. Research interests: speech analysis, articulatory synthesis, and acoustic-to-articulatory inversion. Current projects: developing software that guides learners of a foreign language toward realizing the expected phonetic features, and synthesizing speech by copying human speech production at the articulatory and acoustic levels.