Search results for: sparseness constraint
Number of results: 79,838
In this project, we implement a robust face recognition system via sparse representation and convex optimization. We treat each test sample as a sparse linear combination of training samples and obtain the sparse solution via L1-minimization. We also explore group sparseness (L2-norm) as well as standard L1-norm regularization. We discuss the role of feature extraction and classification robustness...
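The L1-minimization step described above can be sketched with a basic iterative soft-thresholding (ISTA) solver; the dictionary, test sample, and parameter values below are illustrative, not the project's actual data or solver.

```python
import numpy as np

def ista_l1(A, y, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth least-squares term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold step
    return x

# Toy "training samples" as normalized dictionary columns; the test sample is a
# combination of columns 2 and 5, so the recovered coefficients should
# concentrate on those indices.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
A /= np.linalg.norm(A, axis=0)
y = 0.7 * A[:, 2] + 0.3 * A[:, 5]
x = ista_l1(A, y)
```

In a sparse-representation classifier, the test sample would then be assigned to the class whose training columns carry the largest recovered coefficients.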
One of the nice properties of kernel classifiers such as SVMs is that they often produce sparse solutions. However, the decision functions of these classifiers cannot always be used to estimate the conditional probability of the class label. We investigate the relationship between these two properties and show that the two are intimately related: sparseness does not occur when the conditional pro...
Since most previous work on HMM-based tagging considers only part-of-speech information in contexts, those models cannot utilize lexical information, which is crucial for resolving some morphological ambiguity. In this paper we introduce uniformly lexicalized HMMs for part-of-speech tagging in both English and Korean. The lexicalized models use a simplified back-off smoothing technique to overcome ...
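The back-off idea mentioned above can be sketched as follows: use a lexicalized estimate when the word was seen in training, and otherwise back off to a discounted unigram tag probability. The toy counts and the discount `alpha` are illustrative assumptions, not the paper's actual smoothing scheme.

```python
from collections import Counter

# Toy counts standing in for a tagged training corpus.
word_tag = Counter({("flies", "NOUN"): 3, ("flies", "VERB"): 7, ("time", "NOUN"): 9})
tag = Counter({"NOUN": 12, "VERB": 7})
total = sum(tag.values())

def lexical_prob(word, t, alpha=0.4):
    """P(t | word) from lexicalized counts if the word was seen;
    otherwise back off to the discounted unigram tag probability alpha * P(t)."""
    seen = sum(c for (w, _), c in word_tag.items() if w == word)
    if seen:
        return word_tag[(word, t)] / seen
    return alpha * tag[t] / total

print(lexical_prob("flies", "VERB"))   # seen word: 7/10
print(lexical_prob("zorbs", "NOUN"))   # unseen word: alpha * 12/19
```

A full tagger would plug such smoothed estimates into Viterbi decoding over the HMM's transition and emission probabilities.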
Sparse nonnegative matrix factorization (NMF) is exploited to solve spectral unmixing. Firstly, a novel model of sparse NMF is proposed, where the smoothed L0 norm is used to control the sparseness of the factors corresponding to the abundances. Thus, the degree of sparseness need not be set a priori. Then, a gradient-based algorithm, NMF-SL0, is utilized to solve the proposed model...
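The smoothed L0 norm referred to above is commonly defined as n minus a sum of Gaussian bumps, which tends to the exact nonzero count as the smoothing parameter shrinks; a minimal sketch (the example vector and sigma are illustrative):

```python
import numpy as np

def smoothed_l0(x, sigma=0.1):
    """Smoothed L0 'norm': n - sum(exp(-x_i^2 / (2*sigma^2))).
    As sigma -> 0 this approaches the exact count of nonzero entries,
    while remaining differentiable for gradient-based optimization."""
    return x.size - np.sum(np.exp(-x**2 / (2 * sigma**2)))

x = np.array([0.0, 0.0, 1.0, -2.0])
# With a small sigma the value is close to the true L0 norm of x, which is 2.
print(smoothed_l0(x, sigma=0.01))
```

Because the approximation is smooth, it can serve directly as a sparseness penalty inside a gradient-based NMF update.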
Short text differs from traditional documents in its shortness and sparseness. Feature extension can ease the problem of high sparseness in the vector space model, but it inevitably introduces noise. To resolve this problem, this paper proposes a high-frequency feature expansion method based on a latent Dirichlet allocation (LDA) topic model. High-frequency features are extracted from each cate...
Several sparseness penalties have been suggested for delivering good predictive performance in automatic variable selection within the framework of regularization. All assume that the true model is sparse. We propose a penalty, a convex combination of the L1- and L∞-norms, that adapts to a variety of situations including sparseness and nonsparseness, grouping and nongrouping. The proposed penalty...
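The convex combination above is straightforward to evaluate; a minimal sketch, where the mixing weight `tau` and the example coefficient vector are illustrative:

```python
import numpy as np

def l1_linf_penalty(beta, tau):
    """Convex combination of the L1 and L-infinity norms:
    tau * ||beta||_1 + (1 - tau) * ||beta||_inf, with 0 <= tau <= 1."""
    return tau * np.sum(np.abs(beta)) + (1 - tau) * np.max(np.abs(beta))

beta = np.array([0.5, -2.0, 0.0, 1.0])
# tau = 1 recovers the lasso (L1) penalty; tau = 0 gives the pure L-inf penalty.
print(l1_linf_penalty(beta, tau=0.5))  # 0.5 * 3.5 + 0.5 * 2.0 = 2.75
```

The L1 term encourages individual coefficients to be exactly zero, while the L∞ term encourages coefficients to tie at a common magnitude, which is what lets the combined penalty adapt between sparse and grouped solutions.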