Search results for: regularization (stabilization) parameter

Number of results: 37,168

2011
E. Loli Piccolomini, F. Zama

In this paper we present an iterative algorithm for the solution of regularization problems arising in inverse image processing. The objective function to be minimized consists of two terms, a data-fit function and a regularization function, weighted by a regularization parameter. The proposed algorithm solves the minimization problem and estimates the regularization parameter by an ...
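The two-term functional described in this abstract can be illustrated with a minimal sketch. This is a direct (non-iterative) stand-in, assuming the common Tikhonov form with a Euclidean penalty; it is not the paper's algorithm, only a demonstration of how the regularization parameter weights the two terms:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations
    (A^T A + lam I) x = A^T b; 'lam' is the regularization parameter
    weighting the data-fit term against the penalty term."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned toy problem: a larger parameter damps the solution norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) @ np.diag([1, 1, 1, 1e-4, 1e-6])
b = rng.standard_normal(20)
x_small = tikhonov_solve(A, b, 1e-8)
x_large = tikhonov_solve(A, b, 1.0)
print(np.linalg.norm(x_large) < np.linalg.norm(x_small))  # True
```

The trade-off the abstract refers to is visible here: increasing the parameter shifts weight from fitting the data toward keeping the solution small.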

2010
Mark Schmidt

This work looks at fitting probabilistic graphical models to data when the structure is not known. The main tool to do this is ℓ1-regularization and the more general group ℓ1-regularization. We describe limited-memory quasi-Newton methods to solve optimization problems with these types of regularizers, and we examine learning directed acyclic graphical models with ℓ1-regularization, learning un...
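A common first-order baseline for the ℓ1-regularized problems this abstract mentions is iterative soft-thresholding (ISTA). The sketch below is a generic illustration of the sparsity the ℓ1 penalty induces, not the quasi-Newton methods the work describes; the toy problem and all parameter values are assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """ISTA for min_x 0.5 ||A x - b||^2 + lam ||x||_1:
    gradient step on the smooth part, then soft-threshold."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]            # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-3))  # only a few nonzeros survive
```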

Journal: :J. Complexity 2006
Peter Mathé, Sergei V. Pereverzyev

We discuss adaptive strategies for choosing regularization parameters in Tikhonov-Phillips regularization of discretized linear operator equations. Two rules turn out to be entirely based on the underlying regularization scheme. Among them, only the discrepancy principle allows one to search for the optimal regularization parameter starting from the easiest problem. This possible advantage cannot be used with...
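The discrepancy principle mentioned in this abstract picks the parameter so that the residual matches the noise level. A minimal sketch, assuming standard Tikhonov regularization and a plain log-scale bisection (not the paper's adaptive strategy); since the residual grows monotonically with the parameter, bisection applies:

```python
import numpy as np

def tikhonov(A, b, lam):
    # Tikhonov-Phillips solution x_lam = argmin ||A x - b||^2 + lam ||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def discrepancy_principle(A, b, delta, lo=1e-12, hi=1e4, n_iter=60):
    """Find lam with ||A x_lam - b|| ~ delta by bisection on log(lam)."""
    for _ in range(n_iter):
        mid = np.sqrt(lo * hi)                     # geometric midpoint
        r = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        lo, hi = (lo, mid) if r > delta else (mid, hi)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
noise = 0.1 * rng.standard_normal(40)
b = A @ x_true + noise
lam = discrepancy_principle(A, b, delta=np.linalg.norm(noise))
r = np.linalg.norm(A @ tikhonov(A, b, lam) - b)
print(abs(r - np.linalg.norm(noise)) < 1e-3)  # True: residual ~ noise level
```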

2016
Young-Seok Choi

We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike conventional NLMS with the fixed regularization parameter, the proposed approach dynamically updates the regularization parameter. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulation, we demonstrate the proposed al...
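For context, the conventional NLMS baseline this abstract improves upon keeps the regularization parameter fixed in the step-size normalization. A sketch of that baseline only (the paper's gradient-descent update of the parameter is not reproduced here; filter length and step size are assumptions):

```python
import numpy as np

def nlms(x, d, mu=0.5, eps=1e-3, n_taps=4):
    """Normalized LMS with a fixed regularization parameter eps in the
    normalization term, identifying an unknown FIR system."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]     # most recent samples, newest first
        e = d[n] - w @ u                      # a-priori error
        w += mu * e * u / (eps + u @ u)       # normalized, regularized step
    return w

rng = np.random.default_rng(2)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, w_true)[:len(x)]           # unknown system's output
w_hat = nlms(x, d)
print(np.allclose(w_hat, w_true, atol=1e-2))  # True
```

The fixed eps prevents division by a near-zero input norm; the abstract's contribution is to adapt this value instead of fixing it.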

2010
Lothar Reichel, Fiorella Sgallari, Qiang Ye

We consider Tikhonov regularization of large linear discrete ill-posed problems with a regularization operator of general form and present an iterative scheme based on a generalized Krylov subspace method. This method simultaneously reduces both the matrix of the linear discrete ill-posed problem and the regularization operator. The reduced problem so obtained may be solved, e.g., with the aid ...

Journal: :Journal of Machine Learning Research 2015
Hamed Masnadi-Shirazi, Nuno Vasconcelos

Regularization is commonly used in classifier design to ensure good generalization. Classical regularization penalizes classifier complexity by constraining parameters. This is usually combined with a margin loss, which favors large-margin decision rules. A novel and unified view of this architecture is proposed by showing that margin losses act as regularizers of posterior class pr...

Journal: :Inverse Problems 2022

We introduce and study a mathematical framework for a broad class of regularization functionals for ill-posed inverse problems: Regularization Graphs. Regularization graphs allow one to construct regularization functionals using linear operators and convex functionals as building blocks, assembled by means of operators that can be seen as generalizations of classical infimal convolution operators. This framework exhaustively covers existing approaches, and it is flexible enough to craf...

Thesis: Ministry of Science, Research and Technology - Yazd University, 1390 (Iranian calendar)

Among the fundamental problems of robust control are computing stability margins and designing controllers for systems with uncertain parameters. For systems whose characteristic-equation coefficients are themselves polynomial functions of the uncertain parameters, few results are available for computing stability margins and designing controllers, and the existing results demand a large amount of computation. In this thesis, systems with such an uncertainty structure are considered and, using...

Journal: :Journal of Machine Learning Research 2015
David P. Helmbold, Philip M. Long

Dropout is a simple but effective technique for learning in neural networks and other settings. A sound theoretical understanding of dropout is needed to determine when dropout should be applied and how to use it most effectively. In this paper we continue the exploration of dropout as a regularizer pioneered by Wager et al. We focus on linear classification where a convex proxy to the misclass...
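The dropout mechanism analyzed in this line of work can be sketched in a few lines. This shows only the generic (inverted) dropout operation, not the authors' linear-classification analysis; the dropout rate and array sizes are assumptions:

```python
import numpy as np

def dropout_forward(x, p, rng, train=True):
    """Inverted dropout: zero each entry with probability p and rescale the
    survivors by 1/(1-p), so the expected activation matches inference time,
    where dropout is disabled."""
    if not train:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(3)
x = np.ones((100_000, 1))
y = dropout_forward(x, p=0.5, rng=rng)
print(abs(y.mean() - 1.0) < 0.02)  # True: expectation is preserved
```

The regularizing effect studied theoretically (e.g., by Wager et al.) comes from the multiplicative noise this mask injects during training.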

2017
Abhishake Rastogi

Manifold regularization is an approach which exploits the geometry of the marginal distribution. The main goal of this paper is to analyze the convergence issues of such regularization algorithms in learning theory. We propose a more general multi-penalty framework and establish optimal convergence rates under a general smoothness assumption. We provide a theoretical analysis of the perform...
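A standard instance of manifold regularization adds a graph-Laplacian penalty f^T L f that forces the learned function to vary smoothly along the data manifold. A toy sketch under that assumption (the 4-node graph, labels, and penalty weight are all illustrative, not from the paper):

```python
import numpy as np

# Fit labels on two nodes of a small graph while penalizing f^T L f,
# which propagates label information to the unlabeled nodes.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)      # adjacency matrix
L = np.diag(W.sum(1)) - W                # combinatorial graph Laplacian
M = np.diag([1.0, 0, 0, 1.0])           # nodes 0 and 3 carry labels
y = np.array([1.0, 0, 0, -1.0])         # observed labels
lam = 0.1                               # manifold-penalty weight
# minimize sum over labeled nodes (f_i - y_i)^2 + lam * f^T L f
f = np.linalg.solve(M + lam * L, M @ y)
print(f[1], f[2])                       # unlabeled nodes get smooth values
```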

[Chart: number of search results per year; clicking the chart filters results by publication year]