Speech synthesis by mimicking articulatory movements

Masaaki Honda, Tokihiko Kaburagi, Takeshi Okadome

Research output: Contribution to journal › Conference article

4 Citations (Scopus)

Abstract

We describe a computational model of speech production which consists of trajectory formation, for generating articulatory movements from a phoneme-specific gesture, and articulatory-to-acoustic mapping, for generating the speech signal from the articulatory motion. The context-dependent and context-independent approaches in the task-oriented trajectory formation are presented from the viewpoint of how to cope with the contextual variability in articulatory movements. The model is evaluated by comparing the computed and the original articulatory trajectories and speech acoustics. We also describe a recovery of the articulatory motion from speech acoustics for generating articulatory movements by mimicking speech acoustics.
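The abstract describes a two-stage pipeline: trajectory formation driving articulators toward phoneme-specific targets, followed by an articulatory-to-acoustic mapping. The sketch below is purely illustrative and is not the authors' implementation: it uses critically damped point-attractor dynamics for the trajectory stage (a common modeling choice, assumed here) and an arbitrary linear projection as a stand-in for the acoustic mapping, which in the paper is derived from articulatory data.

```python
# Illustrative sketch only (not the paper's model): (1) trajectory
# formation -- articulator positions relax toward successive
# phoneme-specific targets under critically damped second-order
# dynamics -- and (2) a stand-in articulatory-to-acoustic mapping
# (a fixed random linear projection, chosen for illustration).
import numpy as np

def form_trajectory(targets, steps_per_target=50, omega=0.15):
    """Critically damped point-attractor dynamics toward each target."""
    dim = len(targets[0])
    x = np.array(targets[0], dtype=float)  # articulator positions
    v = np.zeros(dim)                      # articulator velocities
    traj = []
    for target in targets:
        t = np.asarray(target, dtype=float)
        for _ in range(steps_per_target):
            # acceleration: spring toward target, critical damping
            a = omega**2 * (t - x) - 2.0 * omega * v
            v += a
            x += v
            traj.append(x.copy())
    return np.array(traj)

def articulatory_to_acoustic(traj, rng=np.random.default_rng(0)):
    """Stand-in mapping: fixed linear projection to 3 acoustic features."""
    W = rng.normal(size=(traj.shape[1], 3))
    return traj @ W

# Two hypothetical phoneme targets in a 4-dimensional articulator space.
targets = [[0.0, 0.5, -0.2, 0.1], [0.3, -0.1, 0.4, 0.0]]
traj = form_trajectory(targets)
acoustics = articulatory_to_acoustic(traj)
print(traj.shape, acoustics.shape)  # (100, 4) (100, 3)
```

The two functions mirror the division the abstract draws: contextual variability would enter in how the targets and dynamics are specified per phoneme context, and the inverse problem (recovering `traj` from `acoustics`) is the recovery task the abstract mentions.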

Original language: English
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Volume: 2
Publication status: Published - Dec 1 1999
Externally published: Yes
Event: 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' - Tokyo, Japan
Duration: Oct 12 1999 - Oct 15 1999

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Hardware and Architecture

Cite this

Speech synthesis by mimicking articulatory movements. / Honda, Masaaki; Kaburagi, Tokihiko; Okadome, Takeshi.

In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vol. 2, 01.12.1999.

Research output: Contribution to journal › Conference article

@article{463ad468d03f4e08ba5f7357441290d0,
title = "Speech synthesis by mimicking articulatory movements",
abstract = "We describe a computational model of speech production which consists of trajectory formation, for generating articulatory movements from a phoneme-specific gesture, and articulatory-to-acoustic mapping, for generating the speech signal from the articulatory motion. The context-dependent and context-independent approaches in the task-oriented trajectory formation are presented from the viewpoint of how to cope with the contextual variability in articulatory movements. The model is evaluated by comparing the computed and the original articulatory trajectories and speech acoustics. We also describe a recovery of the articulatory motion from speech acoustics for generating articulatory movements by mimicking speech acoustics.",
author = "Masaaki Honda and Tokihiko Kaburagi and Takeshi Okadome",
year = "1999",
month = "12",
day = "1",
language = "English",
volume = "2",
journal = "Proceedings of the IEEE International Conference on Systems, Man and Cybernetics",
issn = "0884-3627",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - JOUR

T1 - Speech synthesis by mimicking articulatory movements

AU - Honda, Masaaki

AU - Kaburagi, Tokihiko

AU - Okadome, Takeshi

PY - 1999/12/1

Y1 - 1999/12/1

N2 - We describe a computational model of speech production which consists of trajectory formation, for generating articulatory movements from a phoneme-specific gesture, and articulatory-to-acoustic mapping, for generating the speech signal from the articulatory motion. The context-dependent and context-independent approaches in the task-oriented trajectory formation are presented from the viewpoint of how to cope with the contextual variability in articulatory movements. The model is evaluated by comparing the computed and the original articulatory trajectories and speech acoustics. We also describe a recovery of the articulatory motion from speech acoustics for generating articulatory movements by mimicking speech acoustics.

AB - We describe a computational model of speech production which consists of trajectory formation, for generating articulatory movements from a phoneme-specific gesture, and articulatory-to-acoustic mapping, for generating the speech signal from the articulatory motion. The context-dependent and context-independent approaches in the task-oriented trajectory formation are presented from the viewpoint of how to cope with the contextual variability in articulatory movements. The model is evaluated by comparing the computed and the original articulatory trajectories and speech acoustics. We also describe a recovery of the articulatory motion from speech acoustics for generating articulatory movements by mimicking speech acoustics.

UR - http://www.scopus.com/inward/record.url?scp=0033308489&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0033308489&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:0033308489

VL - 2

JO - Proceedings of the IEEE International Conference on Systems, Man and Cybernetics

JF - Proceedings of the IEEE International Conference on Systems, Man and Cybernetics

SN - 0884-3627

ER -