Search results for: false nearest neighbors
Number of results: 109,844
The algorithm is discussed in the context of one of the practical applications: aligning DNA reads to a reference genome. An implementation of the algorithm is shown to align about 10^6 reads per CPU minute and about 10^8 base-pairs per CPU minute (human DNA reads). This implementation is compared to the popular software packages Bowtie and BWA, and is shown to be over 5−10 times faster in some a...
We introduce a new method for finding several types of optimal k-point sets, minimizing perimeter, diameter, circumradius, and related measures, by testing sets of the O(k) nearest neighbors to each point. We argue that this is better in a number of ways than previous algorithms, which were based on high order Voronoi diagrams. Our technique allows us for the first time to efficiently maintain minima...
In this paper, we propose a novel scheme for approximate nearest neighbor (ANN) retrieval based on dictionary learning and sparse coding. Our key innovation is to build compact codes, dubbed SpANN codes, using the active set of sparse coded data. These codes are then used to index an inverted file table for fast retrieval. The active sets are often found to be sensitive to small differences amo...
Spectral clustering is a subspace clustering method that is suitable for data of any shape and converges to a globally optimal solution. By combining the concepts of shared nearest neighbors and geodesic distance with spectral clustering, a self-adaptive spectral clustering based on geodesic distance and shared nearest neighbors was proposed. Experiments show that the improved spectral clusteri...
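The abstract above builds on spectral clustering over a nearest-neighbor graph. A minimal sketch of that baseline, using scikit-learn's built-in nearest-neighbor affinity (the paper's geodesic-distance and shared-nearest-neighbor refinements are not reproduced here):

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# Two interleaving half-moons: non-convex clusters that defeat plain
# k-means but suit a nearest-neighbor graph affinity.
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

# affinity="nearest_neighbors" builds the similarity graph from each
# point's k nearest neighbors instead of a global Gaussian kernel.
model = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                           n_neighbors=10, random_state=0)
labels = model.fit_predict(X)
print(labels.shape)  # (200,)
```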
The simple k nearest neighbor method is often very competitive, especially in classification methods. When the number of predictors is large, the nearest neighbors are likely to be quite distant from the target point. Furthermore, they tend to all be on one side of the target point. These are consequences of high-dimensional geometry. This paper introduces a modification of nearest neighbors that ...
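The claim that nearest neighbors drift far from the target point in high dimension is easy to check numerically. This sketch measures, for a hypothetical query at the origin, the distance to its nearest neighbor among 1000 uniform random points as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 100):
    pts = rng.uniform(-1.0, 1.0, size=(1000, d))
    # distance from the query (the origin) to its nearest neighbor
    nn_dist = np.linalg.norm(pts, axis=1).min()
    print(f"d={d:3d}  nearest-neighbor distance = {nn_dist:.3f}")
```

The printed distances grow sharply with d: even the *nearest* of 1000 points is far from the query in 100 dimensions, which is exactly the high-dimensional-geometry effect the abstract describes.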
An important part of Pattern Recognition deals with the problem of classification of data into a finite number of categories. In the usual setting of "supervised learning", examples are given that consist of pairs (X_i, Y_i), i ≤ n, where X_i is the d-dimensional covariate vector and Y_i is the corresponding "category" in some finite set C. In the examples, Y_i is known! Based on these ...
A fundamental question of machine learning is how to compare examples. If an algorithm could perfectly determine whether two examples were semantically similar or dissimilar, most subsequent machine learning tasks would become trivial (i.e., the 1-nearest-neighbor classifier will achieve perfect results). A common choice for a dissimilarity measurement is an uninformed norm, like the Euclidean d...
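The 1-nearest-neighbor classifier with the "uninformed" Euclidean norm that the abstract mentions fits in a few lines. A minimal sketch (the data and helper name are illustrative, not from the paper):

```python
import numpy as np

def one_nn_predict(X_train, y_train, x):
    """Label a query x with the label of its Euclidean nearest neighbor."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y = np.array([0, 0, 1])
pred = one_nn_predict(X, y, np.array([4.0, 4.5]))
print(pred)  # → 1 (nearest point is [5, 5])
```

Whether this rule works well depends entirely on whether Euclidean distance reflects semantic similarity, which is the gap metric-learning methods aim to close.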
The nearest neighbor (NN) classifiers, especially the k-NN algorithm, are among the simplest and yet most efficient classification rules and are widely used in practice. We introduce three adaptation rules that can be used in iterative training of a k-NN classifier. This is a novel approach both from the statistical pattern recognition and the supervised neural network learning points of view. The s...
Given a set P of N points in a d-dimensional space, along with a query point q, it is often desirable to find k points of P that are with high probability close to q. This is the Approximate k-Nearest-Neighbors problem. We present two algorithms for AkNN. Both require O(Nd) preprocessing time. The first algorithm has a query time cost that is O(d + log N), while the second has a query time cost that...
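The AkNN setting above (preprocess P once, then answer approximate k-nearest-neighbor queries) can be illustrated with a standard k-d tree; this is a generic sketch, not the paper's two algorithms. SciPy's `cKDTree.query` takes an `eps` parameter allowing returned neighbors to be up to a factor (1 + eps) farther than the true nearest:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
P = rng.random((10_000, 8))   # N = 10,000 points in d = 8 dimensions
tree = cKDTree(P)             # one-time preprocessing

q = rng.random(8)             # query point
# Approximate 5-NN: with eps=0.1 each returned neighbor is within
# (1 + 0.1) times the distance of the true k-th nearest neighbor.
dists, idx = tree.query(q, k=5, eps=0.1)
print(idx.shape)  # (5,)
```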
We consider tradeoffs between the query and update complexities for the (approximate) nearest neighbor problem on the sphere, extending the spherical filters recently introduced by [Becker–Ducas–Gama– Laarhoven, SODA’16] to sparse regimes and generalizing the scheme and analysis to account for different tradeoffs. In a nutshell, for the sparse regime the tradeoff between the query complexity nq...