Search results for: multimodal structure

Number of results: 1,595,917

2013
Hachem Kadri, Stéphane Ayache, Cécile Capponi, Sokol Koço, François-Xavier Dupé, Emilie Morvant

We study the problem of learning from multiple views using kernel methods in a supervised setting. We approach this problem from a multi-task learning point of view and illustrate how to capture the interesting multimodal structure of the data using multi-task kernels. Our analysis shows that the multi-task perspective offers the flexibility to design more efficient multiple-source learning alg...
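The abstract above treats each view as a task coupled through a multi-task kernel. A minimal sketch of that idea, assuming an RBF input kernel multiplied by a task-coupling matrix B (the kernel choice, the coupling matrix, and all function names are illustrative assumptions, not the authors' exact construction):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian RBF kernel between rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multitask_kernel(X1, t1, X2, t2, B, gamma=1.0):
    """K((x,s),(x',t)) = k(x,x') * B[s,t]: input similarity times task coupling."""
    return rbf_kernel(X1, X2, gamma) * B[np.ix_(t1, t2)]

def fit_predict(X, tasks, y, Xq, tq, B, lam=1e-2, gamma=1.0):
    """Kernel ridge regression with the product multi-task kernel."""
    K = multitask_kernel(X, tasks, X, tasks, B, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    Kq = multitask_kernel(Xq, tq, X, tasks, B, gamma)
    return Kq @ alpha

# Two views (tasks) of a 1-D signal sharing the same latent structure.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
tasks = np.repeat([0, 1], 20)
y = np.sin(3 * X[:, 0]) + 0.1 * tasks      # views differ by a small offset
B = np.array([[1.0, 0.5], [0.5, 1.0]])     # task/view coupling matrix
pred = fit_predict(X, tasks, y, X, tasks, B)
print(round(float(np.mean((pred - y) ** 2)), 4))
```

Setting B to the identity decouples the views into independent single-view regressions; off-diagonal mass is what lets one view borrow strength from the other.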

2017
Aliyah Morgenstern

Through constant exposure to adult input in dialogue, children’s language gradually develops into rich linguistic constructions that contain multiple cross-modal elements subtly used together for coherent communicative functions. In this chapter, we retrace children’s pathways into multimodal language acquisition in a scaffolding interactional environment. We begin with the first multimodal bu...

Journal: CoRR, 2017
Dhanesh Ramachandram, Michal Lisicki, Timothy J. Shields, Mohamed R. Amer, Graham W. Taylor

A popular testbed for deep learning has been multimodal recognition of human activity or gesture involving diverse inputs such as video, audio, skeletal pose and depth images. Deep learning architectures have excelled on such problems due to their ability to combine modality representations at different levels of nonlinear feature extraction. However, designing an optimal architecture in which ...

2016
Patrizia Grifoni

An important issue for communication processes in general, and for multimodal interaction in particular, is the arrangement and organization of information output (multimodal fission). This requires considering information structure, intonation, and emphasis for speech output, as well as the spatio-temporal coordination of pieces of information for visual (video, graphics, images, and texts) output...

2013
Yueting Zhuang, Yanfei Wang, Fei Wu, Yin Zhang, Weiming Lu

A better similarity mapping function across heterogeneous high-dimensional features is very desirable for many applications involving multi-modal data. In this paper, we introduce coupled dictionary learning (DL) into supervised sparse coding for multi-modal (cross-media) retrieval. We call this Supervised coupled dictionary learning with group structures for MultiModal retrieval (SliM). SliM for...
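The core coupling idea in the abstract above can be illustrated in miniature: two modalities are factorized against modality-specific dictionaries that share one sparse code matrix, so items can be matched in code space. This is a deliberately simplified sketch (no supervision, no group structure, ISTA-style updates); every name and parameter here is an assumption, not SliM's actual algorithm:

```python
import numpy as np

def soft_threshold(Z, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def coupled_dl(X1, X2, k=8, lam=0.05, iters=50, seed=0):
    """Alternate sparse coding (shared codes A) and per-modality dictionary fits."""
    rng = np.random.default_rng(seed)
    d1, n = X1.shape
    d2, _ = X2.shape
    D1 = rng.standard_normal((d1, k))
    D2 = rng.standard_normal((d2, k))
    A = np.zeros((k, n))
    for _ in range(iters):
        # Sparse-code step: one ISTA step on the stacked (coupled) problem.
        D = np.vstack([D1, D2])
        X = np.vstack([X1, X2])
        L = np.linalg.norm(D, 2) ** 2 + 1e-8          # Lipschitz constant
        A = soft_threshold(A - (D.T @ (D @ A - X)) / L, lam / L)
        # Dictionary step: least squares per modality, columns normalized.
        G = A @ A.T + 1e-6 * np.eye(k)
        D1 = np.linalg.solve(G, A @ X1.T).T
        D2 = np.linalg.solve(G, A @ X2.T).T
        D1 /= np.linalg.norm(D1, axis=0, keepdims=True) + 1e-12
        D2 /= np.linalg.norm(D2, axis=0, keepdims=True) + 1e-12
    return D1, D2, A

# Synthetic coupled data: both modalities generated from one sparse code.
rng = np.random.default_rng(1)
A_true = soft_threshold(rng.standard_normal((8, 60)), 1.0)
X1 = rng.standard_normal((20, 8)) @ A_true
X2 = rng.standard_normal((15, 8)) @ A_true
D1, D2, A = coupled_dl(X1, X2)
err = np.linalg.norm(np.vstack([X1, X2]) - np.vstack([D1, D2]) @ A) \
      / np.linalg.norm(np.vstack([X1, X2]))
print(round(float(err), 3))
```

Because A is shared, the learned codes form a common space in which a query from one modality can be compared against items from the other, which is the mechanism cross-media retrieval relies on.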

2003
Ashwani Kumar, Susanne Salmon-Alt, Laurent Romary

This paper tries to fit a novel reference resolution mechanism into a multimodal dialogue system framework. Essentially, our aim is to show that a typical multimodal dialogue system can actually benefit from the cognitive grammar approach that we adopt for reference resolution. The central idea is to construct and update reference and context models in a manner that imparts adequate level of un...

Journal: J. Applied Mathematics, 2012
Bing Feng Si, Xuedong Yan, Huijun Sun, Xiaobao Yang, Ziyou Gao

In this paper, the structural characteristics of the urban multimodal transport system are fully analyzed, and a two-tier network structure is proposed to describe such a system, in which the first-tier network depicts the traveller’s mode choice behaviour and the second-tier network depicts the vehicle routing once a certain mode has been selected. Subsequently, the generalize...
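The two-tier structure in the abstract above can be sketched as a lower tier that routes a trip on each per-mode network and an upper tier that splits demand over modes given the resulting travel times. The toy networks, edge costs, and the logit scale theta are illustrative assumptions, not the paper's model:

```python
import heapq
import math

def dijkstra(graph, src, dst):
    """Shortest travel time from src to dst in {node: [(nbr, cost), ...]}."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf

def mode_split(networks, src, dst, theta=0.5):
    """Upper tier: logit mode-choice probabilities from per-mode shortest times."""
    times = {m: dijkstra(g, src, dst) for m, g in networks.items()}
    z = {m: math.exp(-theta * t) for m, t in times.items() if math.isfinite(t)}
    s = sum(z.values())
    return times, {m: v / s for m, v in z.items()}

# Second tier: one small network per mode (minutes on each link).
networks = {
    "car": {"A": [("B", 10), ("C", 4)], "C": [("B", 4)]},
    "bus": {"A": [("C", 6)], "C": [("B", 6)]},
}
times, probs = mode_split(networks, "A", "B")
print(times["car"], times["bus"], round(probs["car"], 3))
```

Keeping the tiers separate means the routing step never needs to know about mode choice, and the choice step sees each mode only through a single generalized cost, which is what makes the decomposition tractable.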

2012
Hua Wang, Feiping Nie, Heng Huang, Shannon L. Risacher, Andrew J. Saykin, Li Shen

MOTIVATION: Recent advances in brain imaging and high-throughput genotyping techniques enable new approaches to study the influence of genetic and anatomical variations on brain functions and disorders. Traditional association studies typically perform independent and pairwise analysis among neuroimaging measures, cognitive scores and disease status, and ignore the important underlying interacti...
