Search results for: multilayer perceptrons

Number of results: 20290

2015
Nicolas Schilling, Martin Wistuba, Lucas Drumond, Lars Schmidt-Thieme

In machine learning, hyperparameter optimization is a challenging task that is usually approached by experienced practitioners or in a computationally expensive brute-force manner such as grid-search. Therefore, recent research proposes to use observed hyperparameter performance on already solved problems (i.e. data sets) in order to speed up the search for promising hyperparameter configuratio...

Journal: :IEEE Trans. Pattern Anal. Mach. Intell. 1988
Hervé Bourlard, Christian Wellekens

Hidden Markov models are widely used for automatic speech recognition. They inherently incorporate the sequential character of the speech signal and are statistically trained. However, the a-priori choice of the model topology limits their flexibility. Another drawback of these models is their weak discriminating power. Multilayer perceptrons are now promising tools in the connectionist approac...

2014
N Keshav Kumar

In this paper, artificial neural networks are used for both residual generation and residual analysis in the fault diagnosis of robot manipulators. A Multilayer Perceptron (MLP) is employed to reproduce the dynamics of the robotic manipulator. Its outputs are compared with actual position and velocity measurements, generating the so-called residual vector. The residuals, when properly ...
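The residual idea in this abstract can be sketched in a few lines: a model predicts the manipulator's next state, the prediction is subtracted from the actual measurement, and a large residual norm flags a fault. The toy model, threshold, and data below are hypothetical stand-ins, not the paper's.

```python
import numpy as np

def residual_vector(model, state, measured_next):
    """Compare the model's predicted next state with the actual
    position/velocity measurements; the difference is the residual."""
    predicted = model(state)
    return measured_next - predicted

def fault_detected(residual, threshold=0.1):
    # A residual norm above the threshold signals a possible fault
    return np.linalg.norm(residual) > threshold

# Toy stand-in for a trained MLP modeling the manipulator dynamics
model = lambda s: 0.9 * s
state = np.array([1.0, 0.5])

healthy = residual_vector(model, state, 0.9 * state)        # zero residual
faulty = residual_vector(model, state, 0.9 * state + 0.5)   # offset fault
print(fault_detected(healthy), fault_detected(faulty))  # False True
```

In practice the threshold would be tuned against sensor noise so that healthy operation never trips the detector.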

Journal: :Neural computation 2003
Taehwan Kim, Tülay Adali

We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville's theorem. To avoid the conflict between the boundedness and the analyticity ...
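The conflict the abstract mentions can be seen numerically with tanh: it is bounded on the real line, but its analytic extension to the complex plane has poles, consistent with Liouville's theorem that no non-constant entire function is bounded everywhere. This is an illustration, not the paper's construction.

```python
import numpy as np

# tanh is bounded on the reals ...
z_real = np.tanh(100.0)                       # |tanh(x)| <= 1 for real x
# ... but as a complex function it has poles, e.g. at i*pi/2
z_near_pole = np.tanh(1j * (np.pi / 2 - 1e-6))

print(abs(z_real) <= 1.0, abs(z_near_pole) > 1e5)  # True True
```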

2000
Rolf Pfeifer

Multilayer feed-forward networks, or multilayer perceptrons (MLPs) have one or several "hidden" layers of nodes. This implies that they have two or more layers of weights. The limitations of simple perceptrons do not apply to MLPs. In fact, as we will see later, a network with just one hidden layer can represent any Boolean function (including the XOR which is, as we saw, not linearly separab...
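The XOR claim can be made concrete with a tiny hand-wired MLP (a sketch, not taken from any of the papers above): two step-activation hidden units compute OR and AND of the two inputs, and the output unit fires when OR is on but AND is off.

```python
import numpy as np

def step(z):
    # Heaviside step activation: 1 if z > 0, else 0
    return (z > 0).astype(int)

def xor_mlp(x):
    """One hidden layer of two units computes XOR,
    which no single-layer perceptron can represent."""
    W1 = np.array([[1.0, 1.0],   # both hidden units see x1 + x2
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])  # h1 ~ OR (sum > 0.5), h2 ~ AND (sum > 1.5)
    h = step(x @ W1 + b1)
    # Output fires when OR is on but AND is off
    return step(h @ np.array([1.0, -1.0]) - 0.5)

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(xor_mlp(inputs))  # [0 1 1 0]
```

A single-layer perceptron cannot do this because no single line separates {(0,1), (1,0)} from {(0,0), (1,1)}; the hidden layer maps the inputs into a space where they become linearly separable.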

1993
Benoît Simon, Benoit M. Macq, Michel Verleysen

Journal: :CoRR 2013
Ian J. Goodfellow

We propose a new type of hidden layer for a multilayer perceptron, and demonstrate that it obtains the best reported performance for an MLP on the MNIST dataset. 1 The piecewise linear activation function We propose to use a specific kind of piecewise linear function as the activation function for a multilayer perceptron. Specifically, suppose that the layer receives as input a vector x ∈ R. Th...
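The snippet cuts off before the activation's exact definition. A common piecewise linear hidden layer of this general kind takes the maximum over several affine projections of the input; the sketch below is written under that assumption and is not necessarily the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def piecewise_linear_layer(x, Ws, bs):
    """Each hidden unit outputs the max over k affine pieces of the input.
    A max of affine functions is piecewise linear (and convex) in x."""
    # Ws: (k, d_in, d_out), bs: (k, d_out)
    pieces = np.einsum('d,kdo->ko', x, Ws) + bs  # (k, d_out)
    return pieces.max(axis=0)                    # (d_out,)

x = rng.standard_normal(4)
Ws = rng.standard_normal((3, 4, 2))  # 3 linear pieces, 4 inputs, 2 units
bs = rng.standard_normal((3, 2))
print(piecewise_linear_layer(x, Ws, bs).shape)  # (2,)
```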

Chart of the number of search results per year
