Search results for: latent semantic analysis

Number of results: 2,942,209

Journal: Research in Computing Science 2015
Francisco López-Orozco, Luis D. Rodríguez-Vega

This paper presents a cognitive computational model of the way people read a paragraph with the task of quickly deciding whether it is related or not to a given goal. In particular, the model attempts to predict the time at which participants would decide to stop reading the paragraph because they have enough information to make their decision. Our model makes predictions at the level of words ...

2012
Rui-Qin WANG

A novel technique for measuring semantic relatedness between words based on the link structure of Wikipedia was proposed. Only Wikipedia's link information was used in this method, which spares researchers from burdensome text processing. During the relatedness computation, the positive effects of bidirectional Wikipedia links and of four link types are taken into account. Using a widel...
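As an illustration of link-based relatedness (not the exact measure described in this abstract), the minimal sketch below scores two articles by the Jaccard overlap of their inlink sets; the article titles and link sets are hypothetical placeholders, and the paper's own measure also weights link direction and link type.

```python
# Minimal sketch: link-overlap relatedness between two Wikipedia articles.
# The inlink sets below are hypothetical stand-ins for real Wikipedia data.

def jaccard_relatedness(inlinks_a: set, inlinks_b: set) -> float:
    """Jaccard overlap of the sets of articles linking to A and to B."""
    if not inlinks_a or not inlinks_b:
        return 0.0
    return len(inlinks_a & inlinks_b) / len(inlinks_a | inlinks_b)

# Hypothetical inlink sets keyed by article title.
inlinks = {
    "Car": {"Vehicle", "Engine", "Road", "Transport"},
    "Automobile": {"Vehicle", "Engine", "Transport", "Factory"},
    "Banana": {"Fruit", "Plant", "Agriculture"},
}

print(jaccard_relatedness(inlinks["Car"], inlinks["Automobile"]))  # high overlap
print(jaccard_relatedness(inlinks["Car"], inlinks["Banana"]))      # no overlap
```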

2016
Alexander Dallmann, Thomas Niebler, Florian Lemmerich, Andreas Hotho

Semantic relatedness between words has been extracted from a variety of sources. In this ongoing work, we explore and compare several options for determining if semantic relatedness can be extracted from navigation structures in Wikipedia. In that direction, we first investigate the potential of representation learning techniques such as DeepWalk in comparison to previously applied methods base...
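A minimal sketch of the DeepWalk idea mentioned above, assuming gensim is available: truncated random walks over a tiny hypothetical link graph are treated as sentences and fed to a skip-gram model. This is not the authors' pipeline or their Wikipedia navigation data.

```python
# Minimal sketch of DeepWalk-style representation learning on a link graph.
# The graph below is a hypothetical stand-in for Wikipedia navigation structure.
import random
from gensim.models import Word2Vec

graph = {
    "Car": ["Vehicle", "Engine"],
    "Vehicle": ["Car", "Transport"],
    "Engine": ["Car"],
    "Transport": ["Vehicle"],
}

def random_walk(start, length=5):
    """Generate one truncated random walk starting from the given node."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(graph[walk[-1]]))
    return walk

random.seed(0)
walks = [random_walk(node) for node in graph for _ in range(20)]

# Treat walks as sentences and learn node embeddings with skip-gram.
model = Word2Vec(sentences=walks, vector_size=16, window=2, min_count=0, sg=1, seed=0)
print(model.wv.similarity("Car", "Vehicle"))
```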

Journal: CoRR 2005
Saif Mohammad, Graeme Hirst

The automatic ranking of word pairs as per their semantic relatedness and ability to mimic human notions of semantic relatedness has widespread applications. Measures that rely on raw data (distributional measures) and those that use knowledge-rich ontologies both exist. Although extensive studies have been performed to compare ontological measures with human judgment, the distributional measur...
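For readers unfamiliar with distributional measures, the sketch below computes relatedness as the cosine between raw co-occurrence vectors built from a toy corpus; the corpus, window size, and word pairs are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of a distributional (co-occurrence based) relatedness measure.
from collections import defaultdict
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks rose on the market today",
]

# Build symmetric co-occurrence counts within a small context window.
window = 2
cooc = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    tokens = sent.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                cooc[w][tokens[j]] += 1

def cosine(w1, w2):
    """Cosine between the co-occurrence vectors of two words."""
    v1, v2 = cooc[w1], cooc[w2]
    dot = sum(v1[k] * v2.get(k, 0) for k in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

print(cosine("cat", "dog"))     # related: shared contexts
print(cosine("cat", "stocks"))  # unrelated: little shared context
```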

2012
Karl Moritz Hermann, Chris Dyer, Phil Blunsom, Stephen G. Pulman

We investigate the semantic relationship between a noun and its adjectival modifiers. We introduce a class of probabilistic models that enable us to simultaneously capture both the semantic similarity of nouns and modifiers, and adjective-noun selectional preference. Through a combination of novel and existing evaluations we test the degree to which adjective-noun relationships can be catego...

Journal: CoRR 2016
Kimberly Glasgow, Matthew Roos, Amy J. Haufler, Mark A. Chevillet, Michael Wolmetz

Semantic textual similarity (STS) systems are designed to encode and evaluate the semantic similarity between words, phrases, sentences, and documents. One method for assessing the quality or authenticity of semantic information encoded in these systems is by comparison with human judgments. A data set for evaluating semantic models was developed consisting of 775 English word-sentence pairs, e...
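One common way to compare system scores against human judgments, as described above, is Spearman rank correlation. The sketch below uses SciPy with hypothetical placeholder scores, not values from the 775-pair data set itself.

```python
# Minimal sketch: evaluating an STS system against human judgments with
# Spearman rank correlation. All scores below are hypothetical placeholders.
from scipy.stats import spearmanr

human_scores = [4.8, 3.2, 1.0, 2.5, 4.1]        # e.g. 0-5 similarity ratings
system_scores = [0.92, 0.61, 0.15, 0.40, 0.88]  # e.g. cosine similarities

rho, p_value = spearmanr(human_scores, system_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```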

2008
Scott A. Crossley, Thomas L. Salsbury, Philip M. McCarthy, Danielle S. McNamara

This study explores how Latent Semantic Analysis (LSA) can be used as a method to examine the lexical development of second language (L2) speakers. This year-long longitudinal study with six English learners demonstrates that semantic similarity (using LSA) between utterances significantly increases as the L2 learners study English. The findings demonstrate that L2 learners begin to develop tig...
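As a rough illustration of LSA-based similarity between utterances (not the study's actual corpus or settings), the sketch below builds a TF-IDF matrix, reduces it with truncated SVD, and compares utterances by cosine similarity using scikit-learn.

```python
# Minimal sketch of measuring semantic similarity between utterances with LSA.
# The utterances and the number of latent dimensions are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

utterances = [
    "I go to the store to buy food",
    "I went shopping for groceries yesterday",
    "The weather is very cold in winter",
    "It snows a lot during the winter months",
]

tfidf = TfidfVectorizer().fit_transform(utterances)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Pairwise similarity of adjacent utterances in the reduced LSA space.
sims = cosine_similarity(lsa)
for i in range(len(utterances) - 1):
    print(f"sim(u{i}, u{i + 1}) = {sims[i, i + 1]:.3f}")
```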

2015
Vivek Kumar Rangarajan Sridhar

We present an unsupervised topic model for short texts that performs soft clustering over distributed representations of words. We model the low-dimensional semantic vector space represented by the dense distributed representations of words using Gaussian mixture models (GMMs) whose components capture the notion of latent topics. While conventional topic modeling schemes such as probabilistic l...
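A minimal sketch of the general idea of soft topic clustering with a Gaussian mixture over word embeddings: the random vectors below stand in for pretrained distributed representations, and the component count and dimensionality are arbitrary illustrative choices, not the paper's model.

```python
# Minimal sketch: soft "topic" assignment by fitting a GMM over word vectors.
# Random vectors are placeholders for real pretrained word embeddings.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
vocab = ["stock", "market", "trade", "goal", "match", "team"]
# Placeholder embeddings: two loose clusters in a 10-dimensional space.
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(3, 10)),  # "finance-like" words
    rng.normal(loc=2.0, scale=0.3, size=(3, 10)),  # "sports-like" words
])

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
gmm.fit(embeddings)

# Soft assignment: each word gets a probability over the latent "topics".
for word, probs in zip(vocab, gmm.predict_proba(embeddings)):
    print(word, np.round(probs, 2))
```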

Journal: Pattern Recognition 2012
Hao Wu, Jiajun Bu, Chun Chen, Jianke Zhu, Lijun Zhang, Haifeng Liu, Can Wang, Deng Cai

Topic modeling is a powerful tool for discovering the underlying or hidden structure in text corpora. Typical algorithms for topic modeling include probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). Despite their different inspirations, both approaches are instances of generative models, whereas the discriminative structure of the documents is ignored. In this p...
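For reference, here is a minimal sketch of the generative baseline (LDA) using scikit-learn on a toy corpus; it does not implement the discriminative extension the paper proposes, and the corpus and topic count are illustrative assumptions.

```python
# Minimal sketch of generative topic modeling with LDA on a toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the bank raised interest rates on loans",
    "stocks and bonds fell as rates rose",
    "the striker scored a late goal in the match",
    "the team won the league after a tense game",
]

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Show the top words of each inferred topic.
terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```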

Journal: Inf. Process. Manage. 2010
Jean-François Pessiot, Young-Min Kim, Massih-Reza Amini, Patrick Gallinari

Most document clustering algorithms operate in a high-dimensional bag-of-words space. The inherent presence of noise in such a representation obviously degrades the performance of most of these approaches. In this paper we investigate an unsupervised dimensionality reduction technique for document clustering. This technique is based upon the assumption that terms co-occurring in the same context ...
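A minimal sketch of the general recipe, assuming scikit-learn: reduce a bag-of-words representation with truncated SVD and then cluster the documents with k-means. The corpus, latent dimensionality, and cluster count are illustrative assumptions, not the paper's specific technique.

```python
# Minimal sketch: dimensionality reduction before document clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "low interest rates boost bank lending",
    "central bank policy and market rates",
    "the team scored twice in the second half",
    "a dramatic goal decided the football match",
]

# Bag-of-words (TF-IDF) representation, reduced to a low-dimensional space.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
X_reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Cluster documents in the reduced space.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_reduced)
print(labels)  # documents about the same theme should share a cluster id
```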

Chart: number of search results per publication year