As children, part of
learning English involved uttering the proverbial ah, eh, ee, oh, oo – the
vowels, which, along with consonants, form the building blocks of the English
language. Over time, we learn to easily distinguish between different vowels
in speech. To make this discrimination we need to process a wealth of continuous information, such as the
pitch, the duration, or the loudness of the sound, and then make a discrete choice about which vowel we have
heard. Researchers have found it difficult to explain how listeners
make these discrete choices based on continuous information.
Graduate student Gabriel
Tillman and Professor Scott Brown from the University of Newcastle, along with
Titia Benders (Macquarie University) and Don van Ravenzwaaij (University of
Groningen), developed a cognitive process model that describes how continuous acoustic
information leads to discrete phoneme decisions. In a nutshell, the model posits
that people sample evidence from the sounds and this evidence accumulates until
a decision threshold is crossed, which triggers an overt response.
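The general idea of evidence accumulation can be illustrated with a small simulation. This is only a sketch of the broad class of sequential-sampling race models, not the authors' actual model or parameter values: each candidate vowel gets an accumulator, evidence builds up noisily at a rate reflecting how well the sound matches that vowel, and the first accumulator to reach the threshold determines the response and the response time.

```python
import random

def simulate_race(drifts, threshold=1.0, dt=0.001, noise=0.1, max_time=3.0):
    """Simulate a simple evidence-accumulation race between candidate vowels.

    drifts: hypothetical mean accumulation rate for each candidate vowel,
    standing in for how strongly the acoustic input (e.g. frequency and
    duration cues) supports that vowel. Returns (choice_index, response_time).
    """
    evidence = [0.0] * len(drifts)
    t = 0.0
    while t < max_time:
        t += dt
        for i, d in enumerate(drifts):
            # evidence grows at its mean rate plus moment-to-moment noise
            evidence[i] += d * dt + random.gauss(0, noise) * dt ** 0.5
            if evidence[i] >= threshold:
                # the first accumulator to cross the threshold wins
                return i, t
    # no crossing in time: respond with whichever accumulator is highest
    return max(range(len(drifts)), key=lambda i: evidence[i]), t

random.seed(1)
# a sound whose cues favour vowel 0 over vowel 1
choice, rt = simulate_race([1.2, 0.4])
```

Stronger acoustic support for a vowel gives it a higher drift rate, so it tends to win the race, and to win it faster, which is how this class of model produces both choices and response times from the same process.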
The model accounted for choice and response time data from
an experiment in which Dutch listeners discriminated between Dutch vowels. With
the model, the researchers could examine unobserved processes involved in the
perception of Dutch vowels. They found that sound frequency information
contributes more to the perception of vowels than duration information, that
frequency was more important for some of the Dutch vowels than others, and that
longer durations did not delay when participants started using information from
the sound.
Read more about this study here: