Technical difficulties seemed to be a recurring thread in
Dorothy Chun's meta-analysis of studies on intonation software, "Signal
Analysis Software for Teaching Discourse Intonation." SLA software has had to clear numerous hurdles over the
past few decades, notably weak speech signals (especially with voiceless
consonants) and feedback delay (i.e., responses that lacked "real time"
quality).
Speech digitization is another such concern. Can a machine’s “synthetic voice”
adequately account for the nuances of speech intonations for language
learners? What would be “good enough,”
and why would that be good enough?
By what standard could, or should, speech intonation software (for teaching
and learning purposes) be held accountable? And what other factors might we consider—variables that might
play into the greater equation of SLA, technology, and education?
As Chun notes, "One of the greatest advantages of using
computer-assisted pronunciation and intonation tutors, for example, is that the
computer serves both as a medium of instruction and as a tool for
research; that is, a software program, while teaching pronunciation, can
simultaneously keep detailed and thorough records of student performance and
progress" (Chun, p. 9).
Before we can paint a holistic portrait of a given software's
strengths and weaknesses, I think we would be wise to consider it through multiple
lenses and from a variety of angles. There's
a lot on the plate.
Hi Zach,
You ask some good questions and raise a good point in your post that we, as language teachers, want to keep in mind that technology serves as a tool and not the sole means of instruction. I always enjoy the face-to-face interaction with my students, and although technology opens up a number of possibilities, there are certain human elements that just cannot be mimicked by computers.