Search results for: k means

Number of results: 702,376

2016
Edo Liberty, Ram Sriharsha, Maxim Sviridenko

This paper shows that one can be competitive with the k-means objective while operating online. In this model, the algorithm receives vectors v1, ..., vn one by one in an arbitrary order. For each vector vt the algorithm outputs a cluster identifier before receiving vt+1. Our online algorithm generates Õ(k) clusters whose k-means cost is Õ(W*), where W* is the optimal k-means cost using k cl...
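The online model lends itself to a compact illustration. Below is a minimal sketch, assuming a fixed distance threshold r for opening new clusters; the function name and the threshold are hypothetical, and the paper's algorithm instead adapts its thresholds to reach Õ(k) clusters at cost Õ(W*).

```python
import numpy as np

def online_cluster(stream, r=1.0):
    """Toy online clustering: emit a cluster id for each vector on arrival.

    `r` is a hypothetical distance threshold for opening a new cluster;
    this is only an illustration of the online protocol, not the paper's
    algorithm or its guarantees.
    """
    centers = []          # current cluster centers
    labels = []           # cluster id emitted for each arriving vector
    for v in stream:
        if centers:
            dists = [np.linalg.norm(v - c) for c in centers]
            j = int(np.argmin(dists))
            if dists[j] <= r:
                labels.append(j)                       # reuse nearest cluster
                continue
        centers.append(np.array(v, dtype=float))       # open a new cluster
        labels.append(len(centers) - 1)
    return labels, centers

# Example: vectors arrive one by one in arbitrary order.
rng = np.random.default_rng(0)
stream = rng.normal(size=(20, 2))
labels, centers = online_cluster(stream, r=1.5)
```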

Journal: :PVLDB 2012
Bahman Bahmani, Benjamin Moseley, Andrea Vattani, Ravi Kumar, Sergei Vassilvitskii

Over half a century old and showing no signs of aging, k-means remains one of the most popular data processing algorithms. As is well-known, a proper initialization of k-means is crucial for obtaining a good final solution. The recently proposed k-means++ initialization algorithm achieves this, obtaining an initial set of centers that is provably close to the optimum solution. A major downside ...

Journal: :Research in Computing Science 2016
Eréndira Rendón Lara, Itzel M. Abundez B.

Abstract. Without a doubt, the K-means algorithm is the most widely used in the unsupervised learning community. Unfortunately, it is very sensitive to the selection of the initial centroids. For this reason, a large number of methods for selecting the initial centers have been proposed. This article presents a clustering algorithm that is based on the K-means algorithm, in...

2009
Nir Ailon, Ragesh Jaiswal, Claire Monteleoni

We provide a clustering algorithm that approximately optimizes the k-means objective in the one-pass streaming setting. We make no assumptions about the data, and our algorithm is very lightweight in terms of memory and computation. This setting is applicable to unsupervised learning on massive data sets or on resource-constrained devices. The two main ingredients of our theoretical work are: ...
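A generic divide-and-conquer pattern illustrates the one-pass, memory-light setting (this is only a sketch of the setting, not the paper's algorithm or its guarantees); scikit-learn's KMeans and the chunk_size value are assumptions of the illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def streaming_kmeans(stream, k, chunk_size=1000):
    """One pass over `stream`: summarize each chunk by k weighted centers,
    then cluster the collected summaries to obtain the final k centers."""
    reps, weights, buf = [], [], []
    for x in stream:
        buf.append(x)
        if len(buf) == chunk_size:
            X = np.asarray(buf)
            km = KMeans(n_clusters=k, n_init=10).fit(X)
            reps.append(km.cluster_centers_)
            weights.append(np.bincount(km.labels_, minlength=k))
            buf = []
    if buf:  # leftover partial chunk
        X = np.asarray(buf)
        km = KMeans(n_clusters=min(k, len(X)), n_init=10).fit(X)
        reps.append(km.cluster_centers_)
        weights.append(np.bincount(km.labels_, minlength=km.n_clusters))
    R = np.vstack(reps)                      # weighted representatives
    w = np.concatenate(weights)
    final = KMeans(n_clusters=k, n_init=10).fit(R, sample_weight=w)
    return final.cluster_centers_
```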

Journal: :CoRR 2014
Apoorv Agarwal, Anna Choromanska, Krzysztof Choromanski

In this paper, we compare three initialization schemes for the KMEANS clustering algorithm: 1) random initialization (KMEANSRAND), 2) KMEANS++, and 3) KMEANSD++. Both KMEANSRAND and KMEANS++ have a major drawback: the value of k needs to be set by the user of the algorithm. (Kang 2013) recently proposed a novel use of determinantal point processes for sampling the initial centroids for the KMEANS a...

Journal: :CoRR 2013
Ragesh Jaiswal, Prachi Jain, Saumya Yadav

The k-means++ seeding algorithm is one of the most popular algorithms used for finding the initial k centers for the k-means heuristic. The algorithm is a simple sampling procedure and can be described as follows: pick the first center randomly from among the given points; for i > 1, pick a point to be the i-th center with probability proportional to the square of the Euclidean dist...
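Written out as code, the D²-sampling procedure described above is short; here is a minimal NumPy sketch (the function name is mine):

```python
import numpy as np

def kmeans_pp_seed(X, k, seed=0):
    """k-means++ seeding: the first center is chosen uniformly at random;
    each further center is a data point sampled with probability
    proportional to its squared Euclidean distance to the nearest
    already-chosen center (D^2-sampling)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                     # first center: uniform
    d2 = np.sum((X - centers[0]) ** 2, axis=1)         # squared distances
    for _ in range(1, k):
        idx = rng.choice(n, p=d2 / d2.sum())           # D^2 sampling
        centers.append(X[idx])
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.array(centers)
```

These seeds are then typically refined by Lloyd's iterations; the seeding alone already yields an O(log k)-approximation of the k-means objective in expectation (Arthur and Vassilvitskii, 2007).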

Journal: :Entropy 2014
Frank Nielsen, Richard Nock, Shun-ichi Amari

Clustering sets of histograms has become popular thanks to the success of the generic method of bag-of-X used in text categorization and in visual categorization applications. In this paper, we investigate the use of a parametric family of distortion measures, called the α-divergences, for clustering histograms. Since it usually makes sense to deal with symmetric divergences in information retr...
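As a concrete reference point, one common parameterization of the α-divergence between histograms can be coded directly, together with a symmetrized form; conventions for α differ across references, so this sketch is not tied to the paper's exact definition.

```python
import numpy as np

def alpha_divergence(p, q, alpha, eps=1e-12):
    """One common parameterization of the alpha-divergence between two
    histograms; the KL divergences are recovered in the limits
    alpha -> 1 and alpha -> 0."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    if np.isclose(alpha, 1.0):
        return np.sum(p * np.log(p / q))               # KL(p || q)
    if np.isclose(alpha, 0.0):
        return np.sum(q * np.log(q / p))               # KL(q || p)
    return (1.0 / (alpha * (1.0 - alpha))) * np.sum(
        alpha * p + (1.0 - alpha) * q - p**alpha * q**(1.0 - alpha)
    )

def sym_alpha_divergence(p, q, alpha):
    """Symmetrized variant, since symmetric divergences are often preferred
    in retrieval-style applications, as the abstract notes."""
    return 0.5 * (alpha_divergence(p, q, alpha) + alpha_divergence(q, p, alpha))
```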

2017
Matthew Staib, Stefanie Jegelka

Much work has sought to discern the different types of cloud regimes, typically via Euclidean k-means clustering of histograms. However, these methods ignore the underlying similarity structure of cloud types. Wasserstein k-means clustering is a promising candidate for utilizing this structure during clustering, but existing algorithms do not scale well and lack the quality guarantees of the Eu...
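For one-dimensional histograms, the assignment step of a Wasserstein k-means-style loop can be illustrated with SciPy's wasserstein_distance. The bin grid, the synthetic histograms, and the function name below are assumptions of the illustration; the centroid-update (barycenter) step and the paper's scalability techniques are not shown.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def assign_wasserstein(histograms, centroids, bins):
    """Assign each histogram (weights over `bins`) to the nearest centroid
    histogram under the 1-D Wasserstein-1 distance."""
    labels = []
    for h in histograms:
        d = [wasserstein_distance(bins, bins, h, c) for c in centroids]
        labels.append(int(np.argmin(d)))
    return np.array(labels)

# Example with hypothetical 10-bin histograms (e.g., of a cloud property).
rng = np.random.default_rng(1)
bins = np.arange(10, dtype=float)
hists = rng.dirichlet(np.ones(10), size=30)      # 30 normalized histograms
cents = rng.dirichlet(np.ones(10), size=3)       # 3 candidate centroids
labels = assign_wasserstein(hists, cents, bins)
```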

Journal: :CoRR 2017
Mieczyslaw A. Klopotek

We prove in this paper that the expected value of the objective function of the k-means++ algorithm on samples converges to the population expected value. Since k-means++ provides, on samples, a constant-factor approximation of the k-means objective, such an approximation can be achieved for the population by increasing the sample size. This result is of potential practical relevance when one is...
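A small Monte Carlo illustration of that statement (an illustration only, not the paper's argument), assuming scikit-learn's kmeans_plusplus helper and a synthetic Gaussian-mixture "population": the per-point seeding cost stabilizes as the sample grows.

```python
import numpy as np
from sklearn.cluster import kmeans_plusplus

def per_point_cost(X, centers):
    """Average squared distance of each point to its nearest center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

rng = np.random.default_rng(0)
k = 3
means = np.array([[0, 0], [5, 0], [0, 5]], dtype=float)

for n in [100, 1000, 10000]:
    # sample n points from a fixed 3-component Gaussian mixture
    X = np.vstack([rng.normal(m, 1.0, size=(n // k, 2)) for m in means])
    centers, _ = kmeans_plusplus(X, n_clusters=k, random_state=0)
    print(n, per_point_cost(X, centers))
```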

2016
Richard Nock, Raphaël Canyasse, Roksana Boreli, Frank Nielsen

This is the Supplementary Information to the paper "k-variates++: more pluses in the k-means++", appearing in the proceedings of ICML 2016. The notation "main file" refers to the main paper.

Chart of the number of search results per year
