Search results for: multimodal structure

Number of results: 1,595,917

2002
Michael Johnston, Srinivas Bangalore, Gunaranjan Vasireddy, Amanda Stent, Patrick Ehlen, Marilyn A. Walker, Steve Whittaker, Preetam Maloor

Mobile interfaces need to allow the user and system to adapt their choice of communication modes according to user preferences, the task at hand, and the physical and social environment. We describe a multimodal application architecture which combines finite-state multimodal language processing, a speech-act based multimodal dialogue manager, dynamic multimodal output generation, and user-tailo...
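
The abstract lists the building blocks of the architecture without showing how they connect. Below is a minimal, hypothetical Python sketch (names such as Hypothesis and fuse_interpretations are placeholders, not the authors' API) of how a speech hypothesis and a gesture hypothesis might be fused into a single frame for a speech-act based dialogue manager.

    # Hypothetical sketch: merge unimodal hypotheses into one multimodal command frame.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        modality: str   # "speech" or "gesture"
        content: str    # e.g. "show cheap italian restaurants" or "area:downtown"
        score: float    # recognizer confidence in [0, 1]

    def fuse_interpretations(speech: Hypothesis, gesture: Hypothesis) -> dict:
        """Combine speech and gesture into a frame for the dialogue manager."""
        return {
            "act": "request",                    # speech act consumed by the dialogue manager
            "query": speech.content,
            "constraint": gesture.content,       # e.g. a region circled on a map
            "confidence": speech.score * gesture.score,
        }

    print(fuse_interpretations(Hypothesis("speech", "show cheap italian restaurants", 0.92),
                               Hypothesis("gesture", "area:downtown", 0.88)))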

2011
Grzegorz Bocewicz, Robert Wójcik, Zbigniew Antoni Banaszak

The problem of cyclic scheduling of multimodal cyclic processes (MCPs) is considered. The issue arises in production engineering and supply chain environments, where imposing an integer domain (due to inherent process features such as discrete slot sizes) gives the scheduling problem a Diophantine character. Consequently, some classes of MCP scheduling problems can be...
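
To make the Diophantine character concrete, the standard cyclic-scheduling formulation (a generic sketch, not necessarily the authors' exact model) constrains the start time x_i of each operation within a cycle of period T:

    \[ x_j - x_i \;\ge\; d_{ij} - k_{ij}\,T, \qquad k_{ij} \in \mathbb{Z},\quad T \in \mathbb{N}, \]

where d_{ij} is a processing or transport delay and k_{ij} counts how many cycles separate the two operations. Requiring T and the k_{ij} to be integers turns feasibility into solving a system of linear Diophantine inequalities.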

2006
Daniel Sonntag, Massimo Romanelli

General-purpose ontologies and domain ontologies make up the infrastructure of the Semantic Web, allowing for accurate data representation with relations and for data inference. In our approach to multimodal dialogue systems providing question-answering functionality (SMARTWEB), this ontological infrastructure is essential. We aim at an integrated approach in which all knowledge-aware system m...
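
As a rough illustration of what such an ontological infrastructure provides to a question-answering dialogue system (a hypothetical sketch; the class and property names are invented, not SMARTWEB's actual ontology), a spoken question can be mapped onto ontology instances that any output modality can render:

    # Hypothetical sketch: a user question represented against invented ontology terms.
    question = "Who won the World Cup in 1990?"

    ontological_form = {
        "class": "sport:FootballWorldCup",   # invented class name
        "restrictions": {"time:year": 1990},
        "focus": "sport:winner",             # the property the question asks for
    }

    # A multimodal generator could answer with speech, text, or images,
    # all grounded in the same ontological_form.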

2015
Federico Raue, Wonmin Byeon, Thomas M. Breuel, Marcus Liwicki

The problem of how infants learn to associate visual inputs, speech, and internal symbolic representations has long been of interest in Psychology, Neuroscience, and Artificial Intelligence. A priori, both visual and auditory inputs are complex analog signals with a large amount of noise and context, and lack any segmentation information. In this paper, we address a simple form of t...
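
A toy version of the association problem (purely illustrative; this is not the model proposed in the paper) is to check whether noisy visual and auditory observations of the same event map to the same internal symbol:

    # Toy sketch: nearest-prototype assignment as a stand-in for symbol grounding.
    import numpy as np

    def assign_symbol(vector, prototypes):
        """Return the index of the closest prototype (the internal 'symbol')."""
        return int(np.argmin(np.linalg.norm(prototypes - vector, axis=1)))

    rng = np.random.default_rng(0)
    prototypes = rng.normal(size=(5, 16))                  # 5 symbols in a 16-dim space
    visual = prototypes[2] + 0.1 * rng.normal(size=16)     # noisy visual observation
    audio = prototypes[2] + 0.1 * rng.normal(size=16)      # noisy auditory observation

    # Association succeeds when both modalities yield the same symbol.
    print(assign_symbol(visual, prototypes) == assign_symbol(audio, prototypes))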

2009
Fabio De Felice, Giovanni Attolico, Arcangelo Distante

3D virtual environments (VE) require an advanced user interface to fully express their information contents. New I/O devices enable the use of multiple sensorial channels (vision, hearing, touch, etc.) to increase the naturalness and the efficiency of complex interactions. Haptic and acoustic interfaces extend the effective experience of virtual reality to visually impaired users. For these use...

Journal: The Journal of Biological Chemistry, 2007
Yuusuke Maruyama, Toshihiko Ogura, Kazuhiro Mio, Shigeki Kiyonaka, Kenta Kato, Yasuo Mori, Chikara Sato

Transient receptor potential melastatin type 2 (TRPM2) is a redox-sensitive, calcium-permeable cation channel activated by various signals, such as adenosine diphosphate ribose (ADPR) acting on the ADPR pyrophosphatase (ADPRase) domain, and cyclic ADPR. Here, we purified the FLAG-tagged tetrameric TRPM2 channel, analyzed it using negatively stained electron microscopy, and reconstructed the thr...

2010
Yuankai K. Tao, Sina Farsiu, Joseph A. Izatt

Scanning laser ophthalmoscopy (SLO) and spectral domain optical coherence tomography (SDOCT) have become essential clinical diagnostic tools in ophthalmology by allowing for video-rate noninvasive en face and depth-resolved visualization of retinal structure. Current generation multimodal imaging systems that combine both SLO and OCT as a means of image tracking remain complex in their hardware...

Journal: Biomedical Optics Express, 2017
Sicong He, Wenqian Xue, Zhigang Duan, Qiqi Sun, Xuesong Li, Huiyan Gan, Jiandong Huang, Jianan Y. Qu

We developed a multimodal nonlinear optical (NLO) microscope system by integrating stimulated Raman scattering (SRS), second harmonic generation (SHG) and two-photon excited fluorescence (TPEF) imaging. The system was used to study the morphological and biochemical characteristics of tibial cartilage in a kinesin-1 (Kif5b) knockout mouse model. The detailed structure of fibrillar collagen in th...

2009
Timo Ropinski, Ivan Viola, Martin Biermann, Helwig Hauser, Klaus H. Hinrichs

Closeups are used in illustrations to provide detailed views on regions of interest. They are integrated into the rendering of the whole structure in order to reveal their spatial context. In this paper we present the concept of interactive closeups for medical reporting. Each closeup is associated with a region of interest and may show a single modality or a desired combination of the availabl...
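
The notion of a closeup bound to a region of interest and a chosen mix of modalities can be captured by a small data structure (an illustrative sketch with hypothetical field names, not the authors' implementation):

    # Illustrative sketch: each closeup ties a 3D region of interest to the
    # modalities (e.g. CT, MRI, PET) blended when rendering the detail view.
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    @dataclass
    class Closeup:
        roi_center: Tuple[float, float, float]   # position in volume coordinates
        roi_radius: float
        modality_weights: Dict[str, float] = field(default_factory=dict)
        annotation: str = ""

    report_closeups = [
        Closeup((12.0, 40.5, 8.2), 5.0, {"CT": 0.7, "PET": 0.3}, "suspected lesion"),
        Closeup((30.1, 22.0, 15.4), 3.5, {"MRI": 1.0}, "reference slice"),
    ]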

1995
Massimo Zancanaro, Oliviero Stock, Carlo Strapparava

The adoption of SharedPlans as a basis for multimodal dialogues is discussed. An extension to the model of plan augmentation for discourse is proposed so that it applies to multimodal interaction. The proposed process exploits SharedPlans and Adjacency Pairs in conjunction to account for global and local collaboration. Finally, multimedia coordination is taken into account. An example is follo...
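
As a rough sketch of how adjacency pairs could be tracked alongside a shared plan (hypothetical structures, not the authors' formalism):

    # Hypothetical sketch: adjacency pairs (e.g. question/answer) opened by
    # multimodal turns, with completed pairs augmenting a shared plan.
    open_pairs, shared_plan = [], []

    def contribute(speaker, act, content, modality="speech"):
        turn = {"speaker": speaker, "act": act, "content": content, "modality": modality}
        if act in ("question", "proposal"):              # first pair part
            open_pairs.append(turn)
        elif act in ("answer", "acceptance") and open_pairs:
            first = open_pairs.pop()                     # close the pair (local collaboration)
            shared_plan.append((first["content"], content))  # augment the plan (global collaboration)
        return turn

    contribute("user", "question", "show hotels near the station", modality="speech+gesture")
    contribute("system", "answer", "hotels highlighted on map", modality="graphics")
    print(shared_plan)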

[Chart: number of search results per year]