Technical Report of IEICE Japan, Vol. WIT2007-36, pp. 23-28, 2007 (in Japanese)
Development of a bilingual speech synthesis system for a Japanese ALS patient
S. Kajima, A. Iida, K. Yasu and T. Arai
Abstract: This study aimed to build a bilingual speech synthesis system as a communication aid for a Japanese amyotrophic lateral sclerosis (ALS) patient. A corpus-based speech synthesis method is well suited to a communication aid for people anticipating the loss of their voice, since the synthetic speech produced by such a method can reflect the speaker’s voice quality. However, such a system requires recording a large amount of speech, which is a burden for patients. This paper first describes our work on building an English speech synthesis system in the patient’s voice, using a corpus-based speech synthesis method combined with a voice conversion technique that requires a smaller amount of the speaker’s recordings. We then report on our ongoing project to develop an HMM-based Japanese speech synthesis system using HTS, a modified version of HTK. The first method synthesizes speech with HTS using an acoustic model trained on recordings of the patient’s voice. The second method synthesizes speech with HTS using an acoustic model whose voice quality was converted to that of the patient by a voice conversion technique, which places a smaller recording burden on the patient. A perceptual experiment showed that speech synthesized with the latter method was perceived as closer in voice quality to the patient’s natural speech.
Keywords: bilingual communication aid, speech synthesis, corpus-based, HMM, voice conversion