Search results for: critic
Number of results: 2831
Abstract: Fuzzy critic-based learning is a reinforcement-learning method based on dynamic programming. In this paper, an adaptive critic-based neuro-fuzzy system is presented for an unmanned bicycle. The only information available to the critic agent is the system feedback, which is interpreted as the last action performed by the controller in the previous state. The signal produced by the c...
This paper briefly describes the proceedings of the Panel of Inquiry held May 13, 2008 at Saint Michael’s College on the case of “Anna” (Podetz, 2008, 2011). It summarizes the advocate’s and critic’s positions on four claims and one counterclaim. The five judges independently voted to accept all four of the advocate’s claims (by votes of 5-0 or 4-1), and rejected the critic’s counterclaim by a...
In this paper we introduce an online algorithm that uses integral reinforcement knowledge for learning the continuous-time optimal control solution for nonlinear systems with infinite-horizon costs and partial knowledge of the system dynamics. This algorithm is a data-based approach to the solution of the Hamilton-Jacobi-Bellman equation and it does not require explicit knowledge of the system’...
In this paper, a model-free and effective approach is proposed to solve infinite horizon optimal control problem for affine nonlinear systems based on adaptive dynamic programming technique. The developed approach, referred to as the actor-critic structure, employs two multilayer perceptron neural networks to approximate the state-action value function and the control policy, respectively. It u...
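The actor-critic structure described in this abstract pairs a learned value estimator (the critic) with a learned policy (the actor). As a minimal illustration only, and not the paper's neural-network implementation, the idea can be sketched in tabular form on a hypothetical chain task: the critic learns state values by temporal-difference updates, and the actor adjusts softmax action preferences using the critic's TD error as its learning signal.

```python
import numpy as np

np.random.seed(0)

# Hypothetical toy chain: 5 states, action 1 moves right, action 0 moves left.
# Reaching the rightmost state gives reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
GAMMA, ALPHA_V, ALPHA_PI = 0.95, 0.1, 0.1

V = np.zeros(N_STATES)                    # critic: state-value estimates
theta = np.zeros((N_STATES, N_ACTIONS))   # actor: action preferences

def policy(s):
    # Softmax over the actor's preferences for state s.
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

for episode in range(500):
    s = 0
    for _ in range(50):
        p = policy(s)
        a = np.random.choice(N_ACTIONS, p=p)
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * V[s2]
        td_error = target - V[s]                  # critic's evaluation signal
        V[s] += ALPHA_V * td_error                # critic update
        grad = -p
        grad[a] += 1.0                            # grad of log softmax policy
        theta[s] += ALPHA_PI * td_error * grad    # actor update
        s = s2
        if done:
            break
```

After training, the actor should strongly prefer moving right from the start state; the paper's version replaces the tables `V` and `theta` with two multilayer perceptrons.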
Recently, actor-critic methods have drawn much interest in the area of reinforcement learning, and several algorithms have been studied along the lines of the actor-critic strategy. This paper studies an actor-critic type algorithm utilizing the RLS (recursive least-squares) method, which is one of the most efficient techniques for adaptive signal processing, together with natural policy gradien...
Least-squares temporal difference learning (LSTD) has been used mainly to improve the data efficiency of the critic in actor-critic (AC) methods. However, convergence analysis of the resulting algorithms is difficult when the policy is changing. In this paper, a new AC method is proposed based on LSTD under the discount criterion. The method's contribution comprises two components: (1) LSTD works in an ...
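For context on the LSTD critic mentioned in this abstract: rather than making stochastic TD updates, LSTD accumulates the statistics A = Σ φ(s)(φ(s) − γφ(s′))ᵀ and b = Σ φ(s)·r over sampled transitions, then solves A·w = b once for the value weights. A minimal sketch under assumed conditions (a hypothetical chain task, one-hot features, a fixed uniform-random behavior policy):

```python
import numpy as np

np.random.seed(1)
N_NONTERM, GAMMA = 4, 0.9   # states 0-3 are non-terminal; state 4 is terminal

def phi(s):
    # One-hot feature vector for a non-terminal state.
    f = np.zeros(N_NONTERM)
    f[s] = 1.0
    return f

A = np.zeros((N_NONTERM, N_NONTERM))
b = np.zeros(N_NONTERM)

# Accumulate LSTD statistics from transitions under a uniform-random policy.
for _ in range(200):
    s = 0
    for _ in range(100):
        a = np.random.randint(2)
        s2 = s + 1 if a == 1 else max(s - 1, 0)
        done = s2 == 4                      # entering state 4 ends the episode
        r = 1.0 if done else 0.0
        phi_next = np.zeros(N_NONTERM) if done else phi(s2)
        A += np.outer(phi(s), phi(s) - GAMMA * phi_next)
        b += phi(s) * r
        s = s2
        if done:
            break

w = np.linalg.solve(A, b)   # per-state value estimates (one-hot features)
```

With one-hot features the solved weights `w` recover the per-state values of the behavior policy from the batch of data, which is the data-efficiency property the abstract refers to; the convergence difficulty arises when the policy generating the data keeps changing.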
A large number of computational models of information processing in the basal ganglia have been developed in recent years. Prominent in these are actor-critic models of basal ganglia functioning, which build on the strong resemblance between dopamine neuron activity and the temporal difference prediction error signal in the critic, and between dopamine-dependent long-term synaptic plasticity in...
The acrobot is a two-link robot, actuated only at the joint between the two links. Controlling the acrobot is one of the difficult tasks in reinforcement learning (RL) because it has nonlinear dynamics and continuous state and action spaces. In this article, we discuss applying RL to the task of balancing control of the acrobot. Our RL method has an architecture similar to the actor-critic. The ...
Brain-Machine Interfaces (BMIs) can be used to restore function in people living with paralysis. Current BMIs require extensive calibration, which increases set-up times, and external inputs for decoder training that may be difficult to produce in paralyzed individuals. Both of these factors have presented challenges in transitioning the technology from research environments to activities of daily...
Two-factor theory (Mowrer, 1947, 1951, 1956) remains one of the most influential theories of avoidance, but it is at odds with empirical findings that demonstrate sustained avoidance responding in situations in which the theory predicts that the response should extinguish. This article shows that the well-known actor-critic model seamlessly addresses the problems with two-factor theory, while s...