Many spoken phrases sound the same. Saying “recognize speech” produces a sound that can be indistinguishable from “wreck a nice beach.” Other howlers include “wreck an eyes peach” and “recondite speech.” But with a little knowledge of word meaning and grammar, it seems like a computer ought to be able to puzzle it out. Ironically, however, much of the progress in speech recognition came from a conscious rejection of the deeper dimensions of language. As an IBM researcher famously put it: “Every time I fire a linguist my system improves.” But pink-slipping all the linguistics PhDs only gets you 80% accuracy, at best.
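The “knowledge of word meaning and grammar” idea can be sketched with a toy statistical language model. This is a minimal illustration, not how any real recognizer is implemented: the bigram probabilities below are invented values, where a real system would estimate them from a large text corpus and combine them with an acoustic score.

```python
import math

# Toy bigram log-probabilities (invented, illustrative values only).
# A real recognizer estimates these from large text corpora.
BIGRAM_LOGPROB = {
    ("<s>", "recognize"): math.log(0.0010),
    ("recognize", "speech"): math.log(0.0200),
    ("<s>", "wreck"): math.log(0.0005),
    ("wreck", "a"): math.log(0.0100),
    ("a", "nice"): math.log(0.0300),
    ("nice", "beach"): math.log(0.0010),
}
FLOOR = math.log(1e-6)  # crude backoff for unseen bigrams

def score(words):
    """Sum bigram log-probabilities over the sentence."""
    tokens = ["<s>"] + words
    return sum(BIGRAM_LOGPROB.get(pair, FLOOR)
               for pair in zip(tokens, tokens[1:]))

# Two acoustically confusable transcriptions of the same sound:
candidates = [
    "recognize speech".split(),
    "wreck a nice beach".split(),
]
best = max(candidates, key=score)
print(" ".join(best))  # prints "recognize speech"
```

With these made-up numbers, the shorter, more plausible phrase wins; in a real system this language-model score is weighted against the acoustic evidence rather than used alone.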
We can take comfort in knowing that the human brain is still way ahead of the machines. (via Metafilter)
(image source: Creative Coffins)
However, stenographers are becoming a dying breed, as voice-writers (that is, people who dictate to a speech-recognition program on their computer) are on the rise, being faster, more efficient, and more accurate. Voice-writing is also an alternative for people who want to do stenography but have carpal tunnel syndrome or other debilitating conditions.
The reason I ask is that if the software only has to compare what you say against a voice sample of your own, there is much less variance to wade through. Otherwise, the speech-recognition software has a much bigger range of voices and pronunciations to sort out.
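The speaker-dependent approach described above can be sketched with template matching via dynamic time warping (DTW), a classic technique for comparing an utterance against recordings the same user made earlier. Everything here is a simplified assumption: the "templates" and the utterance are made-up 1-D feature sequences, where a real system would use frames of acoustic features such as MFCCs.

```python
def dtw(a, b):
    """Classic dynamic-time-warping distance between two feature sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

# Hypothetical per-user templates recorded during enrollment.
templates = {
    "yes": [1.0, 2.0, 3.0, 2.0, 1.0],
    "no":  [3.0, 3.0, 1.0, 1.0, 3.0],
}

# A new utterance: the same speaker saying "yes" with slight variation.
utterance = [1.1, 1.9, 3.1, 2.2, 0.9]
word = min(templates, key=lambda w: dtw(utterance, templates[w]))
print(word)  # prints "yes"
```

Because the templates come from the same voice as the utterance, a tiny distance threshold suffices; a speaker-independent system would instead have to model the much wider variation across many different speakers.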