Speech and natural language remain our 
most natural form of interaction, yet the HCI community has been
timid about focusing its attention on designing and developing spoken
language interaction techniques. This may be due to a widespread 
perception that perfect domain-independent speech recognition is an 
unattainable goal. Progress is continuously being made in the 
engineering and science of speech and natural language processing, 
however, and recent research suggests that many applications of
speech require far less than 100% accuracy to be useful. Engaging the
CHI community now is timely: many
recent commercial applications, especially in the mobile space, are 
already tapping the increased interest in and need for natural user 
interfaces (NUIs) by enabling speech interaction in their products. As 
such, the goal of this panel is to bring together interaction designers,
usability researchers, and general HCI practitioners to discuss the
opportunities and directions to take in designing more natural 
interactions based on spoken language, and to look at how we can 
leverage recent advances in speech processing in order to gain 
widespread acceptance of speech and natural language interaction.
CHI EA '13 CHI '13 Extended Abstracts on Human Factors in Computing Systems
