Search results for: l1 norm
Number of results: 74840
Recently, there has been a great deal of work developing super-resolution reconstruction (SRR) algorithms. While many such algorithms have been proposed, almost all SRR estimators are based on L1 or L2 statistical norm estimation; these SRR algorithms are therefore usually very sensitive to their assumed noise model, which limits their utility. The real noise models that corrupt the measured sequ...
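The sensitivity the abstract describes can be seen in the simplest possible setting: the L2-optimal estimate of a constant signal is the sample mean, while the L1-optimal estimate is the median. A minimal sketch (the measurement values are invented for illustration):

```python
# One outlier, simulating non-Gaussian noise, pulls the L2 estimate (mean)
# far more than the L1 estimate (median).
measurements = [10.0, 10.1, 9.9, 10.0, 50.0]  # last sample is an outlier

l2_estimate = sum(measurements) / len(measurements)          # minimizes sum of squared errors
l1_estimate = sorted(measurements)[len(measurements) // 2]   # minimizes sum of absolute errors

print(l2_estimate)  # 18.0 -- badly biased by the outlier
print(l1_estimate)  # 10.0 -- robust
```

The same trade-off carries over to SRR: an L2 data term is optimal under Gaussian noise but fragile under heavy-tailed noise, while an L1 data term degrades gracefully.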
Non-negative matrix factorization (NMF), i.e. V ≈ WH where V, W, and H are all non-negative, has become a widely used blind source separation technique due to its part-based representation. The NMF decomposition is not in general unique, and a part-based representation is not guaranteed. However, imposing sparseness both improves the uniqueness of the decomposition and favors part-based representati...
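A minimal sketch of sparseness-regularized NMF, using multiplicative updates with an L1 penalty on H; the penalty weight `lam`, the matrix sizes, and the specific update rule are illustrative choices, not the particular algorithm of the abstract above:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((6, 8))          # non-negative data matrix
k, lam, eps = 3, 0.1, 1e-9      # rank, sparsity weight, numerical floor

W = rng.random((6, k))
H = rng.random((k, 8))
for _ in range(200):
    # L1 (sparseness) penalty on H enters the denominator of the update
    H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# Multiplicative updates preserve non-negativity by construction
assert np.all(W >= 0) and np.all(H >= 0)
```

Because every update is a non-negative rescaling, the factors never need explicit projection back onto the feasible set; the L1 term shrinks small entries of H toward zero, encouraging a sparser, more part-based encoding.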
We propose $\ell_1$ norm regularized quadratic surface support vector machine models for binary classification in supervised learning. We establish some desired theoretical properties, including the existence and uniqueness of the optimal solution,...
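The mechanism by which $\ell_1$ regularization produces sparse models is the soft-thresholding operator, the proximal map of the $\ell_1$ norm. This is a sketch of the general principle, not of the paper's specific solver:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1: shrink toward zero, clip at zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

w = np.array([0.05, -0.3, 1.2, -0.02])
# Coefficients smaller than the threshold are set exactly to zero
print(soft_threshold(w, 0.1))
```

Applied inside a proximal-gradient loop, this is what drives small model coefficients exactly to zero, yielding the feature-selection behavior that $\ell_1$-regularized classifiers are valued for.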
Velocity-stack inversion is the process of creating a model in velocity space that can correctly reconstruct the measured data. This is usually implemented by minimizing the L2 norm of the data misfit and the L2 norm of the model. Superior velocity space images, with a better separation of primaries and multiples, can be created by minimizing a different norm of the data misfit and a different ...
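Swapping the misfit norm in a linear inverse problem d = Lm can be sketched with iteratively reweighted least squares (IRLS): ordinary least squares minimizes the L2 misfit, while a few IRLS iterations approximate the L1 misfit by down-weighting outlying samples. The operator L, data d, and iteration count below are illustrative, not a velocity-stack operator:

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((30, 3))
m_true = np.array([1.0, -2.0, 0.5])
d = L @ m_true
d[0] += 20.0                                      # one corrupted measurement

m_l2 = np.linalg.lstsq(L, d, rcond=None)[0]       # L2 misfit: biased by the outlier
m = m_l2.copy()
for _ in range(20):                               # IRLS approximation of the L1 misfit
    w = 1.0 / np.maximum(np.abs(d - L @ m), 1e-6)
    m = np.linalg.solve(L.T @ (w[:, None] * L), L.T @ (w * d))

print(np.round(m_l2, 2), np.round(m, 2))          # L1 solution recovers m_true
```

The same reweighting idea extends to the model-norm term, which is how a sparseness-promoting norm on the velocity-space model sharpens the separation of primaries and multiples.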
We study harmonic Bergman functions on the upper half-space of Rn. Among our main results are: the Bergman projection is bounded for the range 1 < p < ∞; certain nonorthogonal projections are bounded for the range 1 ≤ p < ∞; the dual space of the Bergman L1-space is the harmonic Bloch space modulo constants; harmonic conjugation is bounded on the Bergman spaces for the range 1 ≤ p < ∞; the Bergma...
In this paper, we give a new generalization error bound of Multiple Kernel Learning (MKL) for a general class of regularizations. Our main target in this paper is dense-type regularizations, including lp-MKL, which imposes lp-mixed-norm regularization instead of l1-mixed-norm regularization. According to recent numerical experiments, the sparse regularization does not necessarily show a good p...
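The lp-mixed-norm the abstract refers to takes an l2 norm within each per-kernel weight block and then an lp norm across blocks; p = 1 recovers the sparse l1-mixed-norm, while larger p yields denser kernel combinations. A sketch with made-up weight blocks:

```python
import numpy as np

def mixed_norm(blocks, p):
    """l_p norm across blocks of the l2 norms within blocks (illustrative)."""
    block_norms = np.array([np.linalg.norm(b) for b in blocks])  # l2 within each block
    return np.sum(block_norms ** p) ** (1.0 / p)                 # l_p across blocks

blocks = [np.array([3.0, 4.0]), np.array([0.0, 1.0])]  # block l2 norms: 5 and 1
print(mixed_norm(blocks, 1))  # 6.0
print(mixed_norm(blocks, 2))  # sqrt(26) ~ 5.099
```

Penalizing with p = 1 zeroes out entire kernel blocks (kernel selection); penalizing with p > 1 keeps all kernels active, which matches the dense regime the bound targets.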