Spoken language and multimodal applications for electronic realities
Authors
Abstract
Similar resources
SALT: A Spoken Language Interface for Web-based Multimodal Dialog Systems
This paper describes the Speech Application Language Tags, or SALT, an emerging spoken language interface standard for multimodal or speech-only applications. A key premise in SALT design is that a speech-enabled user interface shares many of the design principles and computational requirements with the graphical user interface (GUI). As a result, it is logical to introduce into speech the object-ori...
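The following minimal Python sketch only illustrates the design principle described above, namely treating a speech interaction like a GUI widget in an object-oriented, event-driven model; it is not SALT markup, and the names SpeechField, on_recognized, and prompt_and_listen are hypothetical.

# Hypothetical illustration of the event-driven, object-oriented model that a
# speech-enabled UI can share with a GUI: a "listen" element behaves like a
# widget whose recognition result is bound to a field when an event fires.

class SpeechField:
    def __init__(self, name, prompt_text, recognizer):
        self.name = name                # form field the result is bound to
        self.prompt_text = prompt_text  # what the system speaks to the user
        self.recognizer = recognizer    # any callable; a stub is used below
        self.value = None

    def on_recognized(self, text):
        # Event handler, analogous to a GUI "onchange" callback.
        self.value = text

    def prompt_and_listen(self):
        print(f"SYSTEM: {self.prompt_text}")
        result = self.recognizer()      # stand-in for a real speech recognizer
        self.on_recognized(result)

# Usage with a stubbed recognizer that "hears" a fixed phrase.
city = SpeechField("destination", "Where would you like to fly?",
                   recognizer=lambda: "Seattle")
city.prompt_and_listen()
print(city.name, "=", city.value)       # destination = Seattle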
User-Centered Modeling for Spoken Language and Multimodal Interfaces
By modeling difficult sources of linguistic variability in spontaneous speech and language, interfaces can be designed that transparently guide human input to match system processing capabilities. Such work is yielding more user-centered and robust interfaces for next-generation spoken language and multimodal systems. Historically, the development of spoken language systems has been primarily a...
Towards Spoken Language Interfaces for Mobile Applications
Spontaneous speech consolidation for spoken language applications
This paper describes the work done as a part of the International Workshop on Speech Summarization for Information Extraction and Machine Translation (IWSpS), on spoken language processing including summarization, machine translation and question answering on lecture speech in the Translanguage English Database (TED) corpus. The hypotheses of lecture speech obtained by automatic speech recogn...
A visual context-aware multimodal system for spoken language processing
Recent psycholinguistic experiments show that acoustic and syntactic aspects of online speech processing are influenced by visual context through cross-modal influences. During interpretation of speech, visual context seems to steer speech processing and vice versa. We present a real-time multimodal system motivated by these findings that performs early integration of visual contextual informat...
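As a rough sketch of the kind of early integration described above (the paper's actual model is not reproduced here), the hypothetical Python fragment below re-scores an ASR n-best list with a simple visual-context prior over the objects currently in view; the hypotheses, scores, and fusion weight are invented for illustration.

# Hypothetical sketch of early integration of visual context into speech
# understanding: re-score ASR hypotheses using a prior over objects in view.
# All hypotheses, scores, and the fusion weight are illustrative assumptions.

import math

def visual_prior(hypothesis, visible_objects):
    # Crude context score: fraction of hypothesis words naming a visible object.
    words = hypothesis.lower().split()
    hits = sum(1 for w in words if w in visible_objects)
    return (hits + 1) / (len(words) + 1)   # add-one smoothing avoids log(0)

def rescore(nbest, visible_objects, weight=0.5):
    # Combine each acoustic log-score with the log of the visual-context prior.
    rescored = []
    for hyp, acoustic_logp in nbest:
        score = acoustic_logp + weight * math.log(visual_prior(hyp, visible_objects))
        rescored.append((hyp, score))
    return sorted(rescored, key=lambda x: x[1], reverse=True)

# The visually grounded hypothesis wins even though acoustics slightly prefer "ball".
nbest = [("pick up the bowl", -4.1), ("pick up the ball", -4.0)]
print(rescore(nbest, visible_objects={"bowl", "table"}))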
Journal
Journal title: Virtual Reality
Year: 1999
ISSN: 1359-4338, 1434-9957
DOI: 10.1007/bf01408590