Search results for: dimensionality reduction

Number of results: 505670

2002
Michel Verleysen

The visual interpretation of data is an essential step to guide any further processing or decision making. Dimensionality reduction (or manifold learning) tools may be used for visualization if the resulting dimension is constrained to be 2 or 3. The field of machine learning has developed numerous nonlinear dimensionality reduction tools in recent decades. However, the diversity of methods r...

2008
Gal Chechik

A fundamental problem in machine learning is to extract compact but relevant representations of empirical data. Relevance can be measured by the ability to make good decisions based on the representations, for example in terms of classification accuracy. Compact representations can lead to more human-interpretable models, as well as improve scalability. Furthermore, in multi-class and multi-tas...

2015
Debmalya Panigrahi Allen Xiao

In many applications today, data is drawn from a high-dimensional feature space, where the dimension d is incredibly high. That is, we view each data point as a vector in R^d. A few examples of high-dimensional data:
• DNA sequencing: each nucleotide in the sequence is a feature.
• Health records: various measurements like weight, blood pressure, diagnosed diseases, medications, nutrition, etc. ...
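A standard tool such lecture notes develop for taming high-dimensional data is random projection (the Johnson–Lindenstrauss approach): multiply the data by a random Gaussian matrix to drop from dimension d to a much smaller k while approximately preserving pairwise distances. A minimal sketch, with illustrative sizes chosen here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 10_000, 500                  # 50 points in R^d, target dimension k

X = rng.normal(size=(n, d))                # each row is one data point in R^d
G = rng.normal(size=(d, k)) / np.sqrt(k)   # Gaussian projection matrix
Y = X @ G                                  # projected points in R^k

# Pairwise distances are approximately preserved by the projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(abs(proj / orig - 1))                # relative distortion, typically a few percent
```

The distortion shrinks roughly like 1/sqrt(k), so larger target dimensions trade storage for accuracy.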

2006
Ali Rahimi Brian Ferris

We have been applying dimensionality reduction techniques to a variety of tracking problems. We have experimented with tracking the articulated pose of humans from video imagery, the trajectory of RFID tags from signal strength measurements, the trajectory of acoustic beacons in sensor networks, and the location of wireless devices from 802.11 signal measurements. In each case, an analytic relat...

2013
Lee-Ad Gottlieb Aryeh Kontorovich Robert Krauthgamer

We study data-adaptive dimensionality reduction in the context of supervised learning in general metric spaces. Our main statistical contribution is a generalization bound for Lipschitz functions in metric spaces that are doubling, or nearly doubling, which yields a new theoretical explanation for empirically reported improvements gained by preprocessing Euclidean data by PCA (Principal Compone...
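The PCA preprocessing step the abstract refers to can be sketched with plain numpy: center the data, take an SVD, and project onto the top principal directions before handing the reduced points to a metric-based learner. The sizes and data below are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30)) @ rng.normal(size=(30, 30))  # correlated features

Xc = X - X.mean(axis=0)                    # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 5                                      # keep the top-k principal directions
Z = Xc @ Vt[:k].T                          # reduced representation in R^k

# Fraction of total variance captured by the top-k components:
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(round(explained, 3))
```

Downstream distance computations then run in R^k instead of the original space, which is exactly where doubling-dimension arguments pay off.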

2015
Jelani Nelson

Here we collect some notation and basic lemmas used throughout this note. Throughout, for a random variable X, ‖X‖p denotes (E|X|^p)^(1/p). It is known that ‖ · ‖p is a norm for any p ≥ 1 (Minkowski’s inequality). It is also known that ‖X‖p ≤ ‖X‖q whenever p ≤ q. Henceforth, whenever we discuss ‖ · ‖p, we will assume p ≥ 1. Lemma 1 (Khintchine inequality). For any p ≥ 1, x ∈ R^n, and (σi) independent Rademac...
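The monotonicity fact ‖X‖p ≤ ‖X‖q for p ≤ q (a consequence of Jensen’s inequality) is easy to check numerically. A Monte Carlo sketch, using a standard Gaussian X as an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(size=1_000_000)       # samples of a random variable X

def lp_norm(x, p):
    """Monte Carlo estimate of ||X||_p = (E|X|^p)^(1/p)."""
    return (np.abs(x) ** p).mean() ** (1 / p)

norms = [lp_norm(samples, p) for p in (1, 2, 4)]
print(norms)                               # non-decreasing in p
```

For the standard Gaussian the exact values are sqrt(2/π) ≈ 0.798, 1, and 3^(1/4) ≈ 1.316, which the estimates should approach.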

2004
Yoshua Bengio Olivier Delalleau Nicolas Le Roux Jean-Francois Paiement Pascal Vincent Marie Ouimet

© 2004 Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Jean-Francois Paiement, Pascal Vincent, Marie Ouimet. All rights reserved. Partial reproduction is permitted with citation of the source document, including the © notice. Short sections may be quoted without explicit permission, if full credit, including the © notice, is given to the source. Scientific Series ...

2010
Neil D. Lawrence

We introduce a new perspective on spectral dimensionality reduction which views these methods as Gaussian random fields (GRFs). Our unifying perspective is based on the maximum entropy principle, which is in turn inspired by maximum variance unfolding. The resulting probabilistic model is a nonlinear generalization of principal component analysis. We show ...

2016
Sergey Levine

In the preceding lectures, we discussed clustering. One view of clustering is that it’s a way to summarize a complex real-valued datapoint x ∈ R^d with a single categorical variable y ∈ {1, ..., K}. This could be useful for understanding and visualizing the structure in the data, as well as a preprocessing step for other learning algorithms. For example, we could build a very simple classifier ...
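The summarization described above can be sketched directly: given cluster centroids (e.g. from k-means), each real-valued point x is replaced by the index of its nearest centroid. The centroids and data below are synthetic placeholders, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 3
centroids = rng.normal(size=(K, 2)) * 5    # pretend these came from k-means

# Synthetic data: points scattered around the centroids.
X = centroids[rng.integers(K, size=100)] + rng.normal(size=(100, 2)) * 0.5

def summarize(x, centroids):
    """Summarize a real-valued point x by a single categorical y in {0, ..., K-1}."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

y = np.array([summarize(x, centroids) for x in X])
print(np.bincount(y, minlength=K))         # how many points fall in each cluster
```

A simple classifier in this spirit would then predict per-cluster (e.g. the majority label of each cluster), operating on y instead of the raw x.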

[Chart: number of search results per year]