Search results for: multi modal

Number of results: 493573

2007
Wei Li Maosong Sun Christopher Habel

Automatic image annotation (AIA) refers to the association of words with whole images, and is considered a promising and effective approach to bridging the semantic gap between low-level visual features and high-level semantic concepts. In this paper, we formulate the task of image annotation as a multi-label, multi-class semantic image classification problem and propose a simple yet effective m...
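The abstract above frames annotation as multi-label classification: each image may receive several labels at once. A minimal sketch of that framing, assuming a hypothetical label vocabulary and placeholder classifier scores (the abstract does not specify the model, so everything below the formulation itself is invented for illustration):

```python
# Hypothetical sketch: image annotation as multi-label, multi-class
# classification. Per-label raw scores would come from some visual-feature
# model; here they are passed in directly. A label is assigned whenever
# its sigmoid-activated score meets a threshold.
import math

LABELS = ["sky", "beach", "person", "car"]  # invented vocabulary

def annotate(scores, threshold=0.5):
    """Return all labels whose sigmoid score passes the threshold, sorted."""
    probs = {label: 1 / (1 + math.exp(-s)) for label, s in zip(LABELS, scores)}
    return sorted(label for label, p in probs.items() if p >= threshold)

# Raw scores from an assumed feature-based classifier
print(annotate([2.1, 0.4, -1.3, 3.0]))
```

The key difference from ordinary classification is that the thresholding is independent per label, so zero, one, or many words can be attached to a single image.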

2013
Yueting Zhuang Yanfei Wang Fei Wu Yin Zhang Weiming Lu

A better similarity mapping function across heterogeneous high-dimensional features is very desirable for many applications involving multi-modal data. In this paper, we introduce coupled dictionary learning (DL) into supervised sparse coding for multi-modal (cross-media) retrieval. We call this Supervised coupled dictionary learning with group structures for Multi-Modal retrieval (SliM). SliM for...

2012
Masud Ahmed Camille Williams Bo Peng Nathan Fisher

Multiple operating modes can be found in many embedded devices (e.g., smartphones) to efficiently utilize device resources. As similar advantages are also desirable in real-time settings, there has been research on developing real-time multi-modal systems. However, many commercially available embedded devices still depend on unimodal operation even for inherently multi-modal systems (e.g., auto...

2016
Kyoung-Woon On Eun-Sol Kim Byoung-Tak Zhang

In multi-modal learning, data consists of multiple modalities, which need to be represented jointly to capture the real-world 'concept' that the data corresponds to (Srivastava & Salakhutdinov, 2012). However, it is not easy to obtain joint representations reflecting the structure of multi-modal data with machine learning algorithms, especially with conventional neural networks. This is bec...

2004
Jason M. Baldridge

The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resource-sensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on...

Journal: Big Data and Cognitive Computing, 2023

Due to inter-modal effects hidden in multi-modalities and the impact of weak modalities on multi-modal entity alignment, a Multi-modal Entity Alignment Method with Inter-modal Enhancement (MEAIE) is proposed. This method introduces a unique modality, the numerical modality, and applies a feature encoder to encode it. In the embedding stage, visual features are utilized to enhance relation representat...

2009
Katarzyna J. Blinowska Gernot R. Müller-Putz Vera Kaiser Laura Astolfi Katrien Vanderperren Sabine Van Huffel Louis Lemieux

Until relatively recently, the vast majority of imaging and electrophysiological studies of human brain activity relied on single-modality measurements, usually correlated with readily observable or experimentally modified behavioural or brain-state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims o...

2007
Judit Madarász

In [13] and [14] Maksimova proved that a normal modal logic (with one unary modality) has the Craig interpolation property iff the corresponding class of algebras has the superamalgamation property. (These notions will be recalled below.) In this paper we extend Maksimova’s theorem to normal multi-modal logics with arbitrarily many, not necessarily unary modalities, and to not necessarily norma...

2015
Douwe Kiela Luana Bulat Stephen Clark

Multi-modal semantics has relied on feature norms or raw image data for perceptual input. In this paper we examine grounding semantic representations in olfactory (smell) data, through the construction of a novel bag-of-chemical-compounds model. We use standard evaluations for multi-modal semantics, including measuring conceptual similarity and cross-modal zero-shot learning. To our knowledge, ...
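A bag-of-compounds model, as named above, represents each concept as a sparse vector over chemical-compound identifiers, analogous to a bag of words. A minimal sketch of the conceptual-similarity evaluation, with cosine similarity over invented compound counts (the compound names and values here are illustrative assumptions, not data from the paper):

```python
# Hypothetical bag-of-chemical-compounds vectors: each concept maps
# compound identifiers to counts. Conceptual similarity is then plain
# cosine similarity between the sparse vectors.
import math

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (dicts)."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented example concepts and compound counts
rose = {"geraniol": 4, "citronellol": 3, "linalool": 1}
lavender = {"linalool": 5, "linalyl_acetate": 4}
print(round(cosine(rose, lavender), 3))
```

Because the vectors are sparse, only compounds shared by both concepts contribute to the dot product, which keeps the comparison cheap even over a large compound inventory.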

2007
Zhipeng Zhang Tomoyuki Ohya Toshiaki Sugimura

In mobile environments, user input and operation via voice are simple and effective. However, speech recognition performance is highly influenced by ambient noise in such environments, which can degrade it significantly, so there is strong demand for improvement. To resolve these issues, we conducted research in two directions...

[Chart: number of search results per year]