Search results for: bounded loss function
Number of results: 1,612,405
In this paper, we study the problem of learning a metric and propose a loss-function-based metric learning framework, in which the metric is estimated by minimizing an empirical risk over a training set. Under mild conditions on the instance distribution and the loss function used, we prove that the empirical risk converges to its expected counterpart at a rate of root-n. In addition, with the ass...
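As an illustration of the empirical-risk framework sketched in this abstract, here is a minimal hedged example (not the paper's algorithm; the pairwise hinge-type risk, the diagonal-metric restriction, and all function names are assumptions) of estimating a Mahalanobis-type metric by minimizing an empirical risk over labeled pairs:

```python
# Hedged sketch: learn a diagonal Mahalanobis metric w >= 0 by projected
# subgradient descent on a hinge-type empirical risk over labeled pairs.
import numpy as np

def pair_risk(w, x1, x2, y, margin=1.0):
    """Hinge-type loss on one pair: similar pairs (y=+1) are penalized for
    squared distance above the margin, dissimilar pairs (y=-1) below it."""
    d2 = np.dot(w, (x1 - x2) ** 2)      # squared distance under diagonal metric w
    return max(0.0, y * (d2 - margin))

def empirical_risk(w, pairs):
    """Average pairwise loss over the training set of (x1, x2, y) triples."""
    return float(np.mean([pair_risk(w, x1, x2, y) for x1, x2, y in pairs]))

def learn_metric(pairs, dim, steps=200, lr=0.05):
    """Minimize the empirical risk by projected subgradient descent."""
    w = np.ones(dim)                    # start from the Euclidean metric
    for _ in range(steps):
        g = np.zeros(dim)
        for x1, x2, y in pairs:
            if pair_risk(w, x1, x2, y) > 0.0:   # active hinge => subgradient
                g += y * (x1 - x2) ** 2
        w = np.maximum(w - lr * g / len(pairs), 0.0)  # project onto w >= 0
    return w
```

On synthetic pairs whose similar members differ only along a noisy coordinate, the learned weights down-weight that coordinate and the empirical risk drops below its value at the Euclidean starting point.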
Extended Abstract. The study of truncated parameter spaces in general is of interest for the following reasons: 1. They often occur in practice. In many cases certain parameter values can be excluded from the parameter space. Nearly all problems in practice have a truncated parameter space, and it is almost impossible to argue in practice that a parameter is not bounded. In truncated parameter...
In the present paper, among other results, a decomposition formula is given for the w-bounded continuous negative definite functions of a topological *-semigroup S with a weight function w into a proper H*-algebra A in terms of w-bounded continuous positive definite A-valued functions on S. A generalization of a well-known result of K. Harzallah is obtained. An earlier conjecture of the author ...
We extend classical consumer theory to account for reference dependence and loss aversion under complete certainty. The classical results obtain as a special case. Several new results emerge: there is a kink in the demand curve at the reference point, consumers are subject to money illusion, and some kinds of inconsistencies of preferences can be accounted for. However, the reference-dependent mo...
The support vector machine (SVM) is a popular classifier in machine learning, but it is not robust to outliers. In this paper, based on the correntropy-induced loss function, we propose the rescaled hinge loss function, which is a monotonic, bounded, and nonconvex loss that is robust to outliers. We further show that the hinge loss is a special case of the proposed rescaled hinge loss. Then, we d...
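A hedged sketch of the construction this abstract describes: one common correntropy-induced form writes the rescaled hinge loss as a bounded, monotonic transform of the standard hinge loss (the constants `eta` and `beta` below follow that form and are assumptions, not taken from the paper):

```python
# Hedged sketch of a rescaled (correntropy-induced) hinge loss:
# l(u) = beta * (1 - exp(-eta * hinge(u))), with beta = 1 / (1 - exp(-eta)).
import math

def hinge(u):
    """Standard hinge loss on the margin u = y * f(x)."""
    return max(0.0, 1.0 - u)

def rescaled_hinge(u, eta=0.5):
    """Monotonic, bounded (by beta), nonconvex transform of the hinge loss;
    large negative margins (outliers) saturate instead of growing linearly."""
    beta = 1.0 / (1.0 - math.exp(-eta))
    return beta * (1.0 - math.exp(-eta * hinge(u)))
```

The saturation is what yields robustness: an outlier with a huge negative margin contributes at most `beta` to the risk, while for small `eta` the transform is nearly the identity and the ordinary hinge loss is recovered as a limiting case.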
We investigate properties of kernel-based regression (KBR) methods which are inspired by the convex risk minimization method of support vector machines. We first describe the relation between the loss function used by the KBR method and the tail of the response variable Y. We then establish a consistency result for KBR and give assumptions for the existence of the influence function. In partic...
Abstract. We quantify the role of scrambling in quantum machine learning. We characterize a quantum neural network's (QNN's) error in terms of its scrambling properties via the out-of-time-ordered correlator (OTOC). The network can be trained by minimizing a loss function. We show that the loss function is bounded by the OTOC, and we prove a corresponding bound on its gradient. This demonstrates that the OTOC landscape regulates the trainability of the QNN. We numerically show that this landscape is flat for maximally scrambling QNNs, which pose c...
We prove performance guarantees for Bayesian learning algorithms, in particular stochastic model selection, with the help of potential functions. Such a potential quantifies the current state of learning in the system, in a way that the expected error in the next step is bounded by the expected decrease of the potential. For Bayesian stochastic model selection, an appropriate potential function...
A twin bounded support vector machine (TBSVM) improves the performance of the traditional classification algorithm. In this paper, we propose an improved model based on TBSVM, called the Welsch loss with capped L2,p-norm distance metric robust TBSVM (WCTBSVM). On one hand, by introducing the capped L2,p-norm distance metric into the problem, the non-sparse output of the regularization term is resolved; thus, the generalization and robustness...
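For illustration, a hedged sketch of a Welsch-type bounded loss on a residual (one common parameterization; the scale convention is an assumption, and this is only the loss ingredient, not the paper's full WCTBSVM objective):

```python
# Hedged sketch of a Welsch (correntropy-type) bounded loss on a residual r.
import math

def welsch(r, sigma=1.0):
    """Quadratic near r = 0, saturating at sigma**2 / 2 for large |r|,
    so outliers have bounded influence on the objective."""
    return (sigma ** 2 / 2.0) * (1.0 - math.exp(-(r / sigma) ** 2))
```

The bound `sigma**2 / 2` is what makes the resulting estimator robust: a single grossly misclassified point cannot dominate the training objective the way it can under an unbounded quadratic or hinge loss.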