Search results for: robot learning
Number of results: 694,921
Today, several drawbacks impede the much-needed use of robot learning techniques in real applications. First, the time needed to synthesize any behavior is prohibitive. Second, the robot's behavior during the learning phase is, by definition, poor; it may even be dangerous. Third, except within the lazy learning approach, a new behavior implies a new le...
A cognitive robot may face failures during the execution of its actions in the physical world. In this paper, we investigate how robots can ensure robustness by gaining experience on action executions, and we propose a lifelong experimental learning method. We use Inductive Logic Programming (ILP) as the learning method to frame new hypotheses. ILP provides first-order logic representations of ...
This paper describes an approach to learning an indoor robot navigation task through trial-and-error. A mobile robot, equipped with visual, ultrasonic and laser sensors, learns to servo to a designated target object. In less than ten minutes of operation time, the robot is able to navigate to a marked target object in an office environment. The central learning mechanism is the explanation-base...
Designing robots that learn by themselves to perform complex real-world tasks remains an open challenge for the fields of Robotics and Artificial Intelligence. In this paper we present the robot learning problem as a lifelong problem, in which a robot faces a collection of tasks over its entire lifetime. Such a scenario provides the opportunity to gather general-purpose knowledge that transfers ...
The purpose of this paper is to give an overview of recent progress in the HanSaRam series of humanoid robots. HanSaRam is a humanoid robot under continual design and development in the Robot Intelligence Technology (RIT) Laboratory at KAIST. This paper also presents experimental results on ZMP compensation in the walking and standing postures of HSR-IV. During walking moti...
Robot learning is a challenging – and somewhat unique – research domain. If a robot behavior is defined as a mapping between situations that occur in the real world and actions to be accomplished, then the supervised learning of a robot behavior requires a set of representative examples (situation, desired action). In order to gather such a learning base, the human operator must hav...
Computer models can be used to investigate the role of emotion in learning. Here we present EARL, our framework for the systematic study of the relation between emotion, adaptation and reinforcement learning (RL). EARL enables the study of, among other things, communicated affect as reinforcement to the robot; the focus of this chapter. In humans, emotions are crucial to learning. For example, ...
When demonstrating unknown robot tasks via teleoperation, a human user may leverage information, latent in their mind, that is not observable to the robot. Such information may include user preferences as to how a task should be performed, state information observable to the human but not the robot, or task structure information such as subtask objectives. Multiple, different actions may thus o...
Q-learning is one of the best-known Reinforcement Learning algorithms and has been widely applied to various problems. The main contribution of this work is speeding up learning in a single-agent environment (e.g., a robot). Here, the traditional Q-learning algorithm is optimized using the Repeated Update Q-learning (RUQL) algorithm (the recent state...
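The contrast in the abstract above can be sketched in a few lines of tabular code. The standard Q-learning update moves Q(s,a) toward the bootstrapped target r + γ·max Q(s',·); RUQL (as described by Abdallah and Kaisers) instead repeats that update roughly 1/π(s,a) times, where π(s,a) is the probability the action was selected, which has the closed form shown below. The dict-of-lists Q-table and the parameter values are illustrative assumptions, not taken from the paper.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning: one step toward the bootstrapped target.
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])


def ruql_update(Q, s, a, r, s_next, pi_sa, alpha=0.1, gamma=0.9):
    # RUQL sketch: repeating the update 1/pi(s,a) times collapses to a
    # single exponential step with weight (1 - alpha) ** (1 / pi(s, a)).
    target = r + gamma * max(Q[s_next])
    w = (1.0 - alpha) ** (1.0 / pi_sa)
    Q[s][a] = w * Q[s][a] + (1.0 - w) * target


# Toy two-state, two-action table (hypothetical values for illustration).
Q = {0: [0.0, 0.0], 1: [1.0, 0.0]}
q_update(Q, s=0, a=0, r=1.0, s_next=1, alpha=0.5, gamma=0.9)
ruql_update(Q, s=0, a=1, r=1.0, s_next=1, pi_sa=1.0, alpha=0.5, gamma=0.9)
```

With pi_sa = 1.0 the RUQL step reduces to the ordinary Q-learning step; for rarely chosen actions (small pi_sa) it takes a much larger effective step, which is the speed-up the abstract refers to.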