Passing As Fluent



Re: Computer model

From: Terry Dartnall
Date: 10/2/03
Time: 2:41:30 AM
Remote Name: 132.234.9.66

Comments

> How difficult would it be to create a computer model of the human speech mechanism? Such a model might lead to a greater understanding of stuttering.

Hi Ed -

It's a very good question, and I don't know the answer. Artificial Intelligence is a large area, and speech synthesis is up in one corner of it, where I have no expertise. None of my colleagues do, either. I do remember the very first speech synthesiser (well, they said it was the first, but who knows?), at Edinburgh University in the late 60s. They'd put about 10 million man-hours into it, and it uttered a whole sentence! But it was kinda fun, because you could alter the tone by altering the components of human speech.

I don't even know how speech synthesisers work. You probably know that there are two types of phonetics: acoustic and articulatory. Acoustic phonetics studies the sounds we make, as sounds, as the noises that come out of our mouths, using sound spectrograms and such. Articulatory phonetics describes the sounds we make in terms of the speech organs we use to make them (lips, tongue, nose, etc.). If speech synthesisers use acoustic phonetics to produce sounds, they won't shed much light on the human speech mechanism. If they use articulatory phonetics, they might shed light on it.
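To give a flavour of the acoustic side of that distinction: a very crude "acoustic" synthesiser just builds a waveform with energy at a vowel's formant frequencies, with no model of lips or tongue at all. This is only a toy sketch (real synthesisers shape a glottal source with resonant filters, and the formant values below for the vowel /a/ are approximate, not authoritative):

```python
import numpy as np

def formant_vowel(formants_hz, duration_s=0.5, rate=16000):
    """Toy acoustic (formant) synthesis: sum sine waves at a vowel's
    formant frequencies. This models the *sound* itself, which is why
    it tells us nothing about the speech organs that produce it."""
    t = np.arange(int(duration_s * rate)) / rate
    wave = sum(np.sin(2 * np.pi * f * t) for f in formants_hz)
    return wave / len(formants_hz)  # keep amplitude within [-1, 1]

# Rough first two formants of the vowel /a/ (approximate textbook values)
samples = formant_vowel([730, 1090])
```

An articulatory synthesiser, by contrast, would parameterise tongue position, lip rounding, and so on, and derive the sound from that, which is why it could, in principle, tell us something about the human speech mechanism.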

That's a long-winded way of saying that I don't know ... :)

Terry


Last changed: September 12, 2005