Search results for: loss functions

Number of results: 911381

Journal: Journal of Sciences, Islamic Republic of Iran

This paper is concerned with the problem of finding the minimax estimators of the scale parameter θ in a family of transformed chi-square distributions, under the asymmetric squared log error (SLE) and modified linear exponential (MLINEX) loss functions, using the Lehmann theorem [2]. We also show that the results of Podder et al. [4] for the Pareto distribution are a special case of our results for th...
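For context, the two asymmetric losses named here are usually written as follows. This is a sketch of the standard forms from the literature; the paper's own weights and parameterization may differ.

```latex
% Squared log error (SLE) loss for an estimator \hat\theta of \theta:
\[
L_{\mathrm{SLE}}(\hat\theta,\theta) = \bigl(\ln\hat\theta - \ln\theta\bigr)^{2}
\]
% Modified linear exponential (MLINEX) loss, weight w > 0, shape c \neq 0:
\[
L_{\mathrm{MLINEX}}(\hat\theta,\theta)
  = w\left[\left(\frac{\hat\theta}{\theta}\right)^{c}
      - c\ln\frac{\hat\theta}{\theta} - 1\right]
\]
```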

Journal: Mathematics Interdisciplinary Research
Mohammad Taati (Payame Noor University), Sirous Moradi (Arak University), Shahram Najafzadeh (Payame Noor University)

In this paper, we consider a new class of analytic functions in the unit disk defined using polynomials of order alpha. We give some sufficient conditions for functions belonging to this class.

Thesis: Ministry of Science, Research and Technology - Sharif University of Technology, 1369

The front end of an X-band radar (f = 9.4 GHz) was designed, fabricated, and measured as a microwave integrated circuit (MIC). The low-noise amplifier is a balanced amplifier whose transistors are GaAs MESFETs in chip form. The amplifier gain at 9.4 GHz is 13 dB, and its measured noise figure is 3 dB. The mixer is a single-balanced diode mixer whose 180-degree hybrid is of the ring (rat-race) type ...

Journal: CoRR 2012
Wei Gao, Zhi-Hua Zhou

AUC (area under the ROC curve) is an important evaluation criterion, which has been widely used in diverse learning tasks such as class-imbalance learning, cost-sensitive learning, learning to rank, and information retrieval. Many learning approaches have been developed to optimize AUC, whereas owing to its non-convexity and discontinuity, almost all approaches work with surrogate loss functions. T...
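Since AUC is a discontinuous function of the scores, optimizers replace the pairwise 0-1 comparison with a convex surrogate. Below is a minimal NumPy sketch of this idea using a pairwise logistic surrogate; the function names are hypothetical, and this is not the paper's exact formulation.

```python
import numpy as np

def pairwise_logistic_surrogate(pos_scores, neg_scores):
    """Convex surrogate for 1 - AUC: mean logistic loss over all
    (positive, negative) score pairs. Hypothetical helper."""
    diffs = pos_scores[:, None] - neg_scores[None, :]  # s_i - s_j
    return np.mean(np.log1p(np.exp(-diffs)))           # log(1 + e^{-d})

def empirical_auc(pos_scores, neg_scores):
    """Exact AUC: fraction of pairs ranked correctly (ties count half)."""
    diffs = pos_scores[:, None] - neg_scores[None, :]
    return np.mean((diffs > 0) + 0.5 * (diffs == 0))

pos = np.array([2.0, 0.5, 1.2])
neg = np.array([0.1, 1.0])
print(empirical_auc(pos, neg), pairwise_logistic_surrogate(pos, neg))
```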

2014
Elad Hazan, Tomer Koren, Kfir Y. Levy

The logistic loss function is often advocated in machine learning and statistics as a smooth and strictly convex surrogate for the 0-1 loss. We investigate whether these smoothness and convexity properties make the logistic loss preferable to other widely considered options such as the hinge loss. We show that, in contrast to known asymptotic bounds, as long as the number of pred...
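To make the comparison concrete, here is a small sketch of the three losses as functions of the margin m = y·f(x); the logistic and hinge curves both upper-bound the 0-1 loss, but only the logistic one is smooth and strictly convex.

```python
import numpy as np

def zero_one_loss(margin):
    """0-1 loss on the margin m = y * f(x): 1 if misclassified."""
    return (margin <= 0).astype(float)

def logistic_loss(margin):
    """Smooth, strictly convex surrogate: log(1 + e^{-m})."""
    return np.log1p(np.exp(-margin))

def hinge_loss(margin):
    """Convex but non-smooth surrogate: max(0, 1 - m)."""
    return np.maximum(0.0, 1.0 - margin)

margins = np.array([-2.0, 0.0, 0.5, 2.0])
for name, f in [("0-1", zero_one_loss), ("logistic", logistic_loss),
                ("hinge", hinge_loss)]:
    print(name, f(margins))
```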

2014
Michal Derezinski, Manfred K. Warmuth

Some of the simplest loss functions considered in machine learning are the square loss, the logistic loss, and the hinge loss. The most common family of algorithms, including Gradient Descent (GD) with and without Weight Decay, always predicts with a linear combination of the past instances. We give a random construction for sets of examples where the target linear weight vector is trivial to lea...
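The representer-style observation that GD keeps its weights in the span of the seen instances is easy to check numerically: starting from w = 0, every gradient of the square (or logistic, or hinge) loss is a scalar multiple of an instance. A minimal sketch, assuming the square loss and stochastic updates:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))   # 20 instances, 5 features
y = rng.normal(size=20)

# Gradient descent on the square loss, starting from w = 0.
# Each gradient 2 * (w.x - y) * x is a multiple of an instance,
# so w always stays in the row span of X (weight decay preserves this).
w = np.zeros(5)
lr, decay = 0.05, 0.01
for t in range(100):
    i = t % len(X)
    grad = 2.0 * (X[i] @ w - y[i]) * X[i] + decay * w
    w -= lr * grad

# Verify: w is (numerically) a linear combination of the rows of X.
coef, *_ = np.linalg.lstsq(X.T, w, rcond=None)
print(np.allclose(X.T @ coef, w, atol=1e-8))  # True
```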

2014
Peng Sun, Tong Zhang, Jie Zhou

LogitBoost, MART, and their variants can be viewed as additive tree regression using the logistic loss and boosting-style optimization. We analyze their convergence rates based on a new weak learnability formulation. We show that the rate is O(1/T) when using gradient descent only, while a linear rate is achieved when using Newton descent. Moreover, introducing Newton descent when growing the trees...
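The gradient-versus-Newton contrast shows up in the per-example targets that the next regression tree is fit to. A minimal sketch for the logistic loss on labels y in {-1, +1}; actual LogitBoost/MART then fit a tree to these targets, which is not shown here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad_hess(y, F):
    """Per-example gradient and Hessian of log(1 + exp(-y*F)),
    with y in {-1, +1} and F the current additive score."""
    g = -y * sigmoid(-y * F)
    h = sigmoid(y * F) * sigmoid(-y * F)
    return g, h

y = np.array([1.0, -1.0, 1.0])
F = np.array([0.2, 0.5, -1.0])
g, h = logistic_grad_hess(y, F)

eta = 0.1
gradient_step = -eta * g   # first-order (gradient) boosting target
newton_step = -g / h       # second-order (Newton) boosting target
print(gradient_step, newton_step)
```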

2006
Herbert Süße, Wolfgang Ortmann, Klaus Voss

Affine point pattern matching (APPM) is an integral part of many pattern recognition problems. Given two sets P and Q of points with unknown assignments p_i → q_j between the points and no additional information, the following task must be solved: find an affine transformation T such that the distance between P and the transformed set Q′ = TQ is minimal. In this paper, we present a ne...
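Once the assignments are fixed, the inner subproblem (find the T minimizing the distance between P and TQ) reduces to linear least squares. A sketch under the simplifying assumption of known correspondences; recovering the unknown assignments, which is the hard part of APPM, is not shown.

```python
import numpy as np

def fit_affine(Q, P):
    """Least-squares affine map (2x2 matrix A plus translation t) with
    A @ q + t ≈ p for corresponding rows of Q and P. Assumes the
    assignment q_i -> p_i is already known."""
    Qh = np.hstack([Q, np.ones((len(Q), 1))])   # homogeneous coords
    M, *_ = np.linalg.lstsq(Qh, P, rcond=None)  # (3, 2) parameter block
    A, t = M[:2].T, M[2]
    return A, t

rng = np.random.default_rng(1)
Q = rng.normal(size=(6, 2))
A_true = np.array([[1.2, 0.3], [-0.1, 0.9]])
t_true = np.array([0.5, -2.0])
P = Q @ A_true.T + t_true                       # exact affine image of Q

A, t = fit_affine(Q, P)
print(np.allclose(A, A_true), np.allclose(t, t_true))  # True True
```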

2008
Nootan Kumar, Pavel Trofimovich, Elizabeth Gatbonton

Although it is commonly believed that language and culture are inextricably linked, the precise nature of this relationship remains elusive. This study investigated the hypothesis that a loss in language signals a loss in culture if language is considered a central value. This hypothesis was investigated by rating the Hindi and English proficiency of 30 first- and second-generation Indo-Canadian H...

Journal: :Journal of Machine Learning Research 2012
Matus Telgarsky

Boosting combines weak learners into a predictor with low empirical risk. Its dual constructs a high-entropy distribution upon which weak learners and training labels are uncorrelated. This manuscript studies this primal-dual relationship under a broad family of losses, including the exponential loss of AdaBoost and the logistic loss, revealing: • Weak learnability aids the whole loss family: f...
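One standard way to see the primal-dual picture is that the distribution over training examples is proportional to the negative derivative of the loss at each margin, which recovers AdaBoost's exponential weighting as a special case. A rough sketch of that weighting for the exponential and logistic losses; this illustrates the standard view, not the paper's exact dual construction.

```python
import numpy as np

def example_weights(margins, loss="exp"):
    """Dual-style distribution over training examples: weight each
    example by the negative loss derivative at its margin, normalized."""
    if loss == "exp":          # AdaBoost: -d/dm e^{-m} = e^{-m}
        w = np.exp(-margins)
    else:                      # logistic: -d/dm log(1+e^{-m}) = sigma(-m)
        w = 1.0 / (1.0 + np.exp(margins))
    return w / w.sum()

margins = np.array([2.0, 0.1, -1.5])
print(example_weights(margins, "exp"))       # concentrates on hard examples
print(example_weights(margins, "logistic"))  # flatter, higher-entropy weights
```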
