Search results for: spoken grammar

Number of results: 56647

2004
Maarika Traat, Johan Bos

In this paper we present a grammar formalism that combines the insights from Combinatory Categorial Grammar with feature structure unification. We show how information structure can be incorporated with syntactic and semantic representations in a principled way. We focus on the way theme, rheme, and focus are integrated in the compositional semantics, using Discourse Representation Theory as fi...
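The combination sketched in this abstract can be caricatured in a few lines (the category encoding, feature dicts, and function names below are illustrative assumptions, not the paper's actual formalism): forward application X/Y + Y ⇒ X succeeds only when the feature structures on the two Y categories unify.

```python
# Toy sketch of CCG forward application gated by feature-structure
# unification. Categories are (name, features) pairs with flat feature
# dicts. Purely illustrative -- not the formalism from the paper.

def unify(f1, f2):
    """Merge two flat feature dicts; return None on a value clash."""
    merged = dict(f1)
    for key, val in f2.items():
        if key in merged and merged[key] != val:
            return None  # clash: unification fails
        merged[key] = val
    return merged

def forward_apply(functor, argument):
    """X/Y + Y => X, provided the Y feature structures unify."""
    fcat, ffeats = functor
    acat, afeats = argument
    if "/" not in fcat:
        return None
    result_cat, arg_cat = fcat.split("/", 1)
    if arg_cat != acat:
        return None
    feats = unify(ffeats, afeats)
    return None if feats is None else (result_cat, feats)

# Agreement succeeds: singular functor meets singular argument.
print(forward_apply(("S/NP", {"num": "sg"}), ("NP", {"num": "sg"})))
# Agreement fails: the feature clash blocks the derivation.
print(forward_apply(("S/NP", {"num": "sg"}), ("NP", {"num": "pl"})))
```

The unification step is what lets agreement features (number, gender, information-structure marks such as theme/rheme) percolate through a derivation instead of being stipulated rule by rule.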

1990
Robert J. Bobrow, Robert Ingria, David Stallard

This paper presents recent natural language work on HARC, the BBN Spoken Language System. The HARC system incorporates the Byblos system [6] as its speech recognition component and the natural language system Delphi, which consists of a bottom-up parser paired with an integrated syntax/semantics unification grammar, a discourse module, and a database question-answering backend. The paper focuse...

2011
Markus Saers, Dekai Wu, Chi-kiu Lo, Karteek Addanki

We introduce a new type of transduction grammar that allows for learning of probabilistic phrasal bilexica, leading to a significant improvement in spoken language translation accuracy. The current state-of-the-art in statistical machine translation relies on a complicated and crude pipeline to learn probabilistic phrasal bilexica—the very core of any speech translation system. In this paper, w...

2002
K. Taušer

The paper deals with a spoken dialogue system component – the response generation module. We are developing the spoken dialogue system called CIC (city information centre), providing a subset of services of a real city information centre. The main focus of this article is an experiment with the use of UCG (Unification Categorial Grammar) for response generation within a dialogue system speaking Czech....

1998
Tom Brøndsted

This paper describes the concepts behind a sub-grammar design tool being developed within the EU-funded language-engineering project REWARD. The tool is a sub-component of a general platform for designing spoken language systems and addresses dialogue designers who are non-experts in natural language processing and speech technology. Yet, the tool interfaces to a powerful and “professional” uni...

2003
Genevieve Gorrell

Spoken language recognition meets with difficulties when an unknown word is encountered. In addition to the new word being unrecognisable, its presence impacts on recognition performance on the surrounding words. The possibility is explored here of using a back-off statistical recogniser to allow recognition of out-of-vocabulary words in a grammar-based speech recognition system. This study sho...
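The back-off idea in this study can be illustrated with a toy scoring function (the corpus, the stupid-backoff-style formula, and the OOV floor below are invented for illustration; they are not the study's actual recogniser):

```python
# Toy back-off scoring: use bigram evidence when it exists, otherwise
# fall back to a discounted unigram estimate so that out-of-vocabulary
# words still receive a small nonzero score. Illustrative only.
from collections import Counter

corpus = "show me flights to boston show me fares to boston".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(unigrams.values())

def backoff_score(prev, word, alpha=0.4, oov_floor=0.5):
    """Bigram relative frequency if observed, else a discounted
    unigram estimate; unseen (OOV) words get a floored count."""
    if bigrams[(prev, word)] > 0:
        return bigrams[(prev, word)] / unigrams[prev]
    return alpha * unigrams.get(word, oov_floor) / total

print(backoff_score("me", "flights"))  # in-vocabulary: bigram estimate
print(backoff_score("me", "zurich"))   # OOV: small backed-off score
```

The point of the floor is exactly the abstract's observation: without it, one unknown word zeroes out the whole hypothesis and drags down recognition of the surrounding in-grammar words.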

2007
Mary P. Harper, Christopher M. White, Stephen A. Hockema, Randall A. Helzerman

Constraint Dependency Grammar (CDG) [11, 13] is a constraint-based grammatical formalism that has proven effective for processing English [5] and improving the accuracy of spoken language understanding systems [4]. However, prospective users of CDG face a steep learning curve when trying to master this powerful formalism. Therefore, a recent trend in CDG research has been to try to ease the burden ...
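To give a flavour of the constraint-based style (the roles, the constraint, and the data layout below are invented for illustration; they are not the authors' actual CDG machinery): each word carries candidate role assignments, and constraints prune the inconsistent ones.

```python
# Mini constraint filter in the spirit of CDG: every word holds candidate
# (role, head_position) assignments, and constraints eliminate any
# assignment that violates them. Invented example, not the real formalism.

def prune(candidates, constraints):
    """Drop any assignment that fails some constraint."""
    return {
        word: [a for a in options if all(c(word, a) for c in constraints)]
        for word, options in candidates.items()
    }

# Words are (token, position); "the" at 0 could in principle attach
# leftward or rightward, but determiners must modify a following word.
candidates = {
    ("the", 0): [("det", 1), ("det", -1)],
    ("dog", 1): [("subj", 2)],
}

def det_head_follows(word, assignment):
    _token, pos = word
    role, head = assignment
    return role != "det" or head > pos

pruned = prune(candidates, [det_head_follows])
print(pruned[("the", 0)])  # the leftward attachment is pruned away
```

Parsing in this style is constraint satisfaction: adding more constraints shrinks each word's assignment set until, ideally, one consistent analysis remains.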

2009
Tatjana Scheffler, Roland Roller, Norbert Reithinger

The increasing number of spoken dialog systems calls for efficient approaches for their development and testing. Our goal is the minimization of hand-crafted resources to maximize the portability of this evaluation environment across spoken dialog systems and domains. In this paper we discuss the user simulation technique which allows us to learn general user strategies from a new corpus. We pr...

2000
Michael Johnston

In order to realize their full potential, multimodal interfaces need to support not just input from multiple modes, but single commands optimally distributed across the available input modes. A multimodal language processing architecture is needed to integrate semantic content from the different modes. Johnston (1998a) proposes a modular approach to multimodal language processing in which spoken ...
