Search results for: k nearest neighbors

Number of results: 408702

2012
Iñigo Mendialdua, Noelia Oses, Basilio Sierra, Elena Lazkano

The K Nearest Neighbors classification method assigns to an unclassified observation the class that obtains the best results after a voting criterion is applied among the observation’s K nearest, previously classified points. In a validation process the optimal K is selected for each database and all the cases are classified with this K value. However, the optimal K for the database does not hav...
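
For readers who want to see the voting step spelled out, here is a minimal sketch of plain kNN classification with a single global K, the Euclidean distance, and a simple majority vote (numpy arrays are assumed; the per-database selection of the optimal K discussed in the abstract is not reproduced):

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x_query, k=5):
    """Label a query point with the majority class among its k nearest
    training points under the Euclidean distance."""
    dists = np.linalg.norm(X_train - x_query, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]                     # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)        # count the class labels among them
    return votes.most_common(1)[0][0]                   # majority class wins
```

In practice the global K is usually picked by cross-validation over the whole database, which is exactly the single K value whose per-observation optimality the abstract questions.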

2003
Christopher A. Peters, Faramarz Valafar

In this study, we attempt to distinguish between acute myeloid leukemia (AML) and acute lymphoid leukemia (ALL) using microarray gene expression data. Bayes’ classification is used with three different density estimation techniques: Parzen, k nearest neighbors (k-NN), and a new hybrid method, called k-neighborhood Parzen (k-NP), that combines properties of the other two. The classifiers are appl...
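
As a rough illustration of how a k-NN density estimate plugs into a Bayes classifier (the generic construction only; the Parzen and hybrid k-NP estimators of the paper are not reproduced, and the function names below are made up for the sketch):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln

def knn_density(X_class, x, k=5):
    """k-NN density estimate f(x) ~ k / (n * V_d * r_k^d), where r_k is the
    distance from x to its k-th nearest neighbour within one class's sample."""
    n, d = X_class.shape
    r_k = cKDTree(X_class).query(x, k=k)[0][-1]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of the unit d-ball
    return np.exp(np.log(k) - np.log(n) - log_vd - d * np.log(r_k))

def bayes_knn_classify(class_samples, priors, x, k=5):
    """Bayes rule with k-NN class-conditional densities: pick the class that
    maximises prior(c) * f_c(x).  class_samples maps class -> (n_c, d) array."""
    scores = {c: priors[c] * knn_density(Xc, x, k) for c, Xc in class_samples.items()}
    return max(scores, key=scores.get)
```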

2010
Meng-Jung Shih, Duen-Ren Liu

Patent management is increasingly important for organizations to sustain their competitive advantage. The classification of patents is essential for patent management and industrial analysis. In this study, we propose a novel patent network-based classification method to analyze query patents and predict their classes. The proposed patent network, which contains various types of nodes that repr...

2014
Sanjoy Dasgupta, Samory Kpotufe

We present two related contributions of independent interest: (1) high-probability finite sample rates for k-NN density estimation, and (2) practical mode estimators – based on k-NN – which attain minimax-optimal rates under surprisingly general distributional conditions.
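
One simple member of the k-NN mode estimator family is to return the sample point whose k-NN radius is smallest, i.e. the point at which the k-NN density estimate k / (n * V_d * r_k(x)^d) is largest. A sketch of that idea (not necessarily the exact estimator analysed in the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_mode(samples, k=20):
    """Return the sample point with the smallest distance to its k-th nearest
    neighbour, i.e. the point where the k-NN density estimate is highest."""
    X = np.asarray(samples, dtype=float)
    # k + 1 neighbours because each point's nearest neighbour is itself
    r_k = cKDTree(X).query(X, k=k + 1)[0][:, -1]
    return X[np.argmin(r_k)]
```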

Journal: CoRR 2013
Jianbo Ye

Much research has been devoted to learning a Mahalanobis distance metric, which can effectively improve the performance of kNN classification. Most approaches are iterative and computationally expensive, and linear rigidity still critically limits how well metric learning algorithms can perform. We propose a computationally economical framework to learn multiple metrics in closed form.
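
For context, this is how a single learned Mahalanobis matrix M enters kNN classification; the paper's closed-form, multiple-metric framework is not reproduced here, and M is assumed to come from whatever metric learning step precedes classification (M = I recovers ordinary Euclidean kNN):

```python
import numpy as np

def mahalanobis_knn(X_train, y_train, x_query, M, k=5):
    """kNN majority vote under the metric d_M(x, y)^2 = (x - y)^T M (x - y),
    with M a positive semidefinite matrix supplied by the metric learner."""
    diffs = X_train - x_query
    dists = np.einsum('ij,jk,ik->i', diffs, M, diffs)   # squared Mahalanobis distances
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                    # majority class among the k neighbours
```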

2012
Yoko Anan, Kohei Hatano, Hideo Bannai, Masayuki Takeda, Ken Satoh

This paper addresses the polyphonic music classification problem on symbolic data. A new method is proposed which converts music pieces into binary chroma vector sequences and then classifies them by applying the dissimilarity-based classification method TWIST proposed in our previous work. One advantage of using TWIST is that it works with any dissimilarity measure. Computational experiments sh...
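
The conversion step can be pictured with a small, hypothetical sketch (each piece is assumed to be given as a list of time slices of MIDI pitch numbers; the TWIST classifier and the dissimilarity measure themselves are not reproduced):

```python
import numpy as np

def binary_chroma_sequence(time_slices):
    """Turn a piece into a sequence of 12-dimensional binary chroma vectors:
    one vector per time slice, with a 1 for every pitch class (C, C#, ..., B)
    sounding in that slice.  time_slices is a list of lists of MIDI pitches."""
    seq = np.zeros((len(time_slices), 12), dtype=np.uint8)
    for t, pitches in enumerate(time_slices):
        for p in pitches:
            seq[t, p % 12] = 1          # pitch class = MIDI number mod 12
    return seq

# e.g. a C major chord followed by a lone G:
# binary_chroma_sequence([[60, 64, 67], [67]])
```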

2005
Gregory Shakhnarovich, Trevor Darrell, Piotr Indyk, Vassilis Athitsos, Jonathan Alon, Stan Sclaroff, George Kollios

© 2005 Massachusetts Institute of Technology. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

Journal: CoRR 2015
Damiano Lombardi, Sanjay Pant

A non-parametric k-nearest neighbour based entropy estimator is proposed. It improves on the classical Kozachenko-Leonenko estimator by considering non-uniform probability densities in the region of k-nearest neighbours around each sample point. It aims at improving the classical estimators in three situations: first, when the dimensionality of the random variable is large; second, when near-fu...
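
The classical Kozachenko-Leonenko baseline that the abstract sets out to improve is commonly written as H ≈ ψ(n) - ψ(k) + log V_d + (d/n) Σ_i log r_k(x_i), with V_d the volume of the unit d-ball and r_k(x_i) the distance from sample x_i to its k-th nearest neighbour. A sketch of that baseline (not the proposed non-uniform-density correction):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(samples, k=3):
    """Classical Kozachenko-Leonenko k-NN entropy estimate, in nats."""
    X = np.asarray(samples, dtype=float)
    n, d = X.shape
    # k + 1 because the nearest neighbour of each sample point is itself
    r_k = cKDTree(X).query(X, k=k + 1)[0][:, -1]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of the unit d-ball
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r_k))
```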

Journal: JCP 2010
Chun sheng Li, Yao-nan Wang, Hai Dong Yang

This paper first generalizes the majority vote to a fuzzy majority vote, then proposes a cluster matching algorithm that is able to establish correspondence among fuzzy clusters from different fuzzy partitions over a common data set. Finally, a new combination model of fuzzy partitions is built on the basis of the proposed cluster matching algorithm and the fuzzy majority vote. Comparative results show ...
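
Assuming the matching step has already aligned the cluster columns of the individual fuzzy partitions, the fuzzy majority vote itself can be sketched roughly as follows (an illustrative reading of the abstract, not the paper's exact combination model):

```python
import numpy as np

def fuzzy_majority_vote(memberships):
    """Combine several fuzzy partitions of the same data set: sum the membership
    degrees each partition assigns to every (point, cluster) pair and give each
    point the cluster with the largest total membership.

    memberships: list of (n_points, n_clusters) arrays whose cluster columns have
    already been put into correspondence by a matching step."""
    total = np.sum(memberships, axis=0)     # element-wise sum over the partitions
    return np.argmax(total, axis=1)         # hard consensus label for every point
```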

2009
Jacques Guyot, Gilles Falquet, Karim Benzineb

We were rather disappointed to note that the class filtering did not help to eliminate the noise. This filtering method is highly efficient when the query is short. However, in this case the query was a whole patent, so the classification filtering did not bring any improvement since the cosine-based similarity calculation acted implicitly as a kNN (k Nearest Neighbours), which is itself an alt...
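
The remark that a cosine ranking already behaves like a kNN classifier can be made concrete with a small sketch (hypothetical names; the actual patent retrieval pipeline is not reproduced):

```python
import numpy as np

def cosine_knn_classes(doc_vectors, doc_classes, query_vector, k=10):
    """Rank the collection by cosine similarity to the query document and return
    the classes of the top-k hits, i.e. an implicit kNN over the whole query patent."""
    D = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    q = query_vector / np.linalg.norm(query_vector)
    sims = D @ q                                    # cosine similarity to every document
    top_k = np.argsort(sims)[::-1][:k]              # indices of the k most similar documents
    return [doc_classes[i] for i in top_k]
```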
