Joined: Aug 2007
Posts: 1,144
Thanks: 0
I'm not sure what more one could say other than that the sound wave produced (whether whispered or spoken) has a different distribution of frequencies depending on mouth shape and the tension and shape of the vocal cords. I guess you could easily do a spectrum analysis to show the differences, but I don't think that would provide much insight into how to help beginners.
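To make that concrete, here's a quick NumPy sketch of such a spectrum analysis. The signals, the fundamental frequency, and the harmonic weights are all made up for illustration; the weights stand in for the filtering effect of a particular mouth shape.

```python
# Hypothetical sketch: comparing the spectra of two synthetic "vowels"
# with different harmonic weightings, using only NumPy's FFT.
import numpy as np

fs = 16000          # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
f0 = 150            # a rough fundamental for a spoken vowel (assumed)

def vowel(harmonic_weights):
    """Sum of harmonics of f0, weighted to mimic a mouth-shape filter."""
    return sum(w * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, w in enumerate(harmonic_weights))

a = vowel([1.0, 0.8, 0.2, 0.1])   # energy concentrated in low harmonics
b = vowel([0.3, 0.4, 0.9, 1.0])   # energy shifted to higher harmonics

def spectrum(x):
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs, mag

freqs, mag_a = spectrum(a)
_, mag_b = spectrum(b)

# Same fundamental, but the dominant spectral peak sits at a different
# harmonic in each signal:
peak_a = freqs[np.argmax(mag_a)]   # 150 Hz (1st harmonic)
peak_b = freqs[np.argmax(mag_b)]   # 600 Hz (4th harmonic)
print(peak_a, peak_b)
```

On a real recording you'd window the signal and use something like `scipy.signal.welch`, but the idea is the same: two sounds with the same pitch can have very different frequency distributions.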
Edited: 2013-03-31, 6:58 pm
Joined: Aug 2007
Posts: 1,144
Thanks: 0
Yeah, I can hear it quite distinctly; the first sounds "higher pitched", so I guess there's comparatively more amplitude in the higher harmonics?
There could also be some minor differences in timing and overall amplitude.
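That "higher pitched" impression (at the same fundamental) can be quantified as a spectral centroid, i.e. the amplitude-weighted mean frequency of the spectrum. A minimal sketch with synthetic tones (frequencies and weights are made up):

```python
# Hypothetical sketch: two tones with the same fundamental but different
# harmonic balance. The one with stronger upper harmonics has a higher
# spectral centroid and sounds "brighter"/"higher".
import numpy as np

fs = 16000          # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
f0 = 150            # shared fundamental (assumed)

def centroid(x):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

dull   = np.sin(2*np.pi*f0*t) + 0.1*np.sin(2*np.pi*3*f0*t)
bright = np.sin(2*np.pi*f0*t) + 0.9*np.sin(2*np.pi*3*f0*t)

print(centroid(dull), centroid(bright))  # bright has the higher centroid
```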
Edited: 2013-03-31, 8:23 pm
Joined: Aug 2010
Posts: 346
Thanks: 0
I think I'm going to have nightmares.
Joined: Jul 2009
Posts: 107
Thanks: 0
@dizmox
You have great ears! Given that the accuracy rate in the experiment was 65%, and given that 21 Japanese-speaking people (I'm assuming they were native speakers) were given the test, it seems like some of the people couldn't hear the differences. I guess some people are missing the FFT module in their brains.
Joined: Aug 2007
Posts: 1,144
Thanks: 0
I could just be fooling myself, since I knew the differences beforehand. 8-) I might be picking up other subtle auditory cues (other than frequency) that I can't consciously identify, cuing my brain to fill in the missing frequency information.
Edited: 2013-03-31, 8:36 pm
Joined: Jul 2009
Posts: 107
Thanks: 0
@fakewookie
Ever hear of the experiment Pavlov performed on dogs to induce neurosis? He taught dogs to discriminate between a circle and a square. He then slowly made the shapes look more and more alike, until it was really difficult to tell them apart. Needless to say, the dogs were very unhappy.
Welcome to your new neurosis.
Joined: Dec 2005
Posts: 290
Thanks: 0
I can hear the difference quite clearly, although again I knew what to listen for.
I know nothing about vocal sound production and acoustics, but when you record musical instruments, for example, you can "compress" the sound so that there are objectively no differences in volume (i.e. in the level of sound produced), yet pretty much anybody, musician or not, can still hear which is the "quiet part", or where the dynamics are, i.e. where the music goes from quiet to loud. How an instrument sounds is strongly associated with a perceived level, so the brain fools itself into hearing the difference... Perhaps something similar is at work here. It must be, really. Can you imagine not being able to whisper in a tonal language?
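A quick NumPy sketch of that point, with entirely synthetic "notes" (levels, frequencies, and harmonic weights are made up): a crude compressor can force the measured level of a soft and a loud note to be identical, yet the difference in spectral balance, which the ear also uses to judge loudness, survives intact.

```python
# Hypothetical sketch: compression equalizes RMS level, but not timbre.
import numpy as np

fs = 16000          # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
f0 = 220            # fundamental of the imaginary note (assumed)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# A softly played note: weak, dominated by the fundamental.
soft = 0.1 * (np.sin(2*np.pi*f0*t) + 0.05*np.sin(2*np.pi*2*f0*t))
# A loudly played note: strong, with prominent upper harmonics.
loud = 0.9 * (np.sin(2*np.pi*f0*t) + 0.6*np.sin(2*np.pi*2*f0*t))

def compress(x, target=0.2):
    """Crude 'infinite-ratio' compressor: force the RMS to a target."""
    return x * (target / rms(x))

soft_c, loud_c = compress(soft), compress(loud)

def high_ratio(x):
    """Energy above 300 Hz relative to total - a rough timbre measure."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return np.sum(mag[freqs > 300]) / np.sum(mag)

# Measured levels are now identical, but the timbre cue remains:
print(rms(soft_c), rms(loud_c))                 # equal
print(high_ratio(soft_c), high_ratio(loud_c))   # loud_c still "brighter"
```

Real compressors work on short time windows rather than the whole signal, but the principle is the same: level can be flattened while the cues that encode "this was played quietly" remain.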