Title: Toward Machines with Tongue
Speaker: Prof. Ho Sung Nam (Dept. of English Language and Literature, Korea University)
Date: Mar. 20 (Mon.), 2017
Time: 05:00 PM
Location: Woo Jung Information & Communication Bldg., Room 601
Hosted by Dept. of Brain & Cognitive Engineering, Korea University / Institute of Brain and Cognitive Engineering, Korea University / BK21+ Global Leader Development Division in Brain Engineering, Korea University / Interdisciplinary Major in Brain and Cognitive Science
A portable computational system called TADA was developed for the Task Dynamic model of speech motor control. The model maps from a set of linguistic gestures, specified as activation functions with corresponding constriction goal parameters, to time functions for a set of model articulators. The original Task Dynamic code was ported to the (relatively) platform-independent MATLAB environment and includes a MATLAB version of the Haskins articulatory synthesizer, so that articulator motions computed by the Task Dynamic model can be used to generate sound. Gestural scores can now be edited graphically, and the effects of those changes on the model's output can be evaluated. Other new features of the system include: (1) a graphical user interface that displays the input gestural scores, the output time functions of the constriction goal variables and articulators, and an animation of the resulting vocal-tract motion; (2) integration of the Task Dynamic model with the prosodic clock-slowing, pi-gesture model of Byrd and Saltzman. This allows prosody-driven slowing to be applied to the full set of active gestures and its effects to be evaluated perceptually.
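To give a feel for the mapping the abstract describes, the core of the Task Dynamic model treats each constriction (tract) variable as a critically damped point attractor driven toward its gestural goal while the gesture is active, and the pi-gesture slows the local clock near prosodic boundaries. The sketch below is illustrative only, not TADA's MATLAB code: the function name, parameter values, and the scalar clock-rate factor standing in for the pi-gesture are assumptions.

```python
import numpy as np

def simulate_gesture(z0, target, omega, steps, dt=0.001, slowing=None):
    """Integrate one critically damped point-attractor gesture.

    z0      : initial value of the constriction (tract) variable
    target  : gestural constriction goal
    omega   : natural frequency of the attractor (rad/s)
    slowing : optional per-step clock-rate factors in (0, 1];
              values < 1 mimic pi-gesture clock slowing
    """
    z, dz = z0, 0.0
    traj = np.empty(steps)
    for t in range(steps):
        rate = 1.0 if slowing is None else slowing[t]
        # critically damped second-order dynamics toward the goal
        ddz = -2.0 * omega * dz - omega**2 * (z - target)
        # the clock-rate factor scales the effective time step,
        # stretching the gesture in real time without changing its goal
        dz += ddz * dt * rate
        z += dz * dt * rate
        traj[t] = z
    return traj

# Hypothetical lip-aperture gesture: close from 10 mm toward 0 mm.
fast = simulate_gesture(z0=10.0, target=0.0, omega=40.0, steps=500)
# Same gesture under a uniform 50% pi-gesture clock slowing.
slow = simulate_gesture(z0=10.0, target=0.0, omega=40.0, steps=500,
                        slowing=np.full(500, 0.5))
```

The slowed trajectory lags the unslowed one at every point in real time but converges on the same constriction goal, which is the sense in which the pi-gesture warps timing rather than targets.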