Search results for: robot learning
Number of results: 694,921
The authors have applied reinforcement learning methods to real robot tasks in several aspects. We selected a soccer skill as a task for a vision-based mobile robot. In this paper, we explain two of our methods: (1) learning a shooting behavior, and (2) learning to shoot while avoiding an opponent. These behaviors were obtained by a robot in simulation and tested in a real environment in RoboC...
Decision-theoretic reasoning and planning algorithms are increasingly being used for mobile robot navigation, due to the significant uncertainty accompanying the robots' perception and action. Such algorithms require detailed probabilistic models of the robot's environment, and it is very desirable to automate the process of compiling such models by means of autonomous learning algorithms. T...
We propose to learn tasks directly from visual demonstrations by learning to predict the outcome of human and robot actions on an environment. We enable a robot to physically perform a human-demonstrated task without knowledge of the thought processes or actions of the human, only their visually observable state transitions. We evaluate our approach on two table-top object manipulation tasks a...
This article is a position paper on the role of robot learning relative to other disciplines. Our discussion reflects the sentiments expressed at the Robolearn-96 Workshop on this topic. Robot learning is most closely related to the fields of machine learning and robotics but also encompasses aspects of AI and various social sciences such as cognitive psychology. We believe that robot learning ...
As a novel learning method, reinforcement learning, by which a robot acquires control rules through trial and error, has attracted much attention. However, it is quite difficult for robots to acquire control rules by reinforcement learning in real space because many learning trials are needed to achieve the control rules; the robot itself may lose control, or there may be safety problems with the...
This is the team description of Osaka University “Trackies” for RoboCup-99. We have worked on two issues for our new team. First, we have changed our robot system from a remote-controlled vehicle to a self-contained robot. Second, we have proposed a new learning method based on Q-learning so that a real robot can acquire a behavior by reinforcement learning.
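The Q-learning method the abstract above builds on can be sketched in tabular form. This is a generic illustrative sketch, not the team's actual method; the states, actions, and hyperparameters are assumptions for demonstration.

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch. ALPHA is the learning rate, GAMMA the
# discount factor, EPSILON the exploration rate (all illustrative).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
actions = ["forward", "left", "right"]  # hypothetical robot actions
Q = defaultdict(float)                  # maps (state, action) -> value

def choose_action(state):
    # epsilon-greedy exploration over the current value estimates
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # standard Q-learning update toward the bootstrapped target
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# one illustrative transition: shooting toward the ball earns reward
update("ball_ahead", "forward", 1.0, "ball_closer")
```

In practice, papers like the one above modify this basic update so that it remains feasible with the small number of trials a real robot can afford.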
We present a case study of applying a framework for learning from numeric human feedback—TAMER—to a physically embodied robot. In doing so, we also provide the first demonstration of the ability to train multiple behaviors by such feedback without algorithmic modifications and of a robot learning from free-form human-generated feedback without any further guidance or evaluative feedback. We des...
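The core idea of TAMER referenced above is to learn a model of the human trainer's feedback signal and act greedily on that model, instead of on an environmental reward. A minimal sketch, assuming a hypothetical linear model and hand-picked learning rate (neither is from the paper):

```python
# TAMER-style sketch: learn H(s, a), a model of human reward, from
# scalar feedback, then pick actions that maximize the prediction.
LR = 0.05
w = [0.0, 0.0, 0.0, 0.0]  # weights of a linear human-reward model

def predict(features):
    # predicted human reward for one state-action feature vector
    return sum(wi * fi for wi, fi in zip(w, features))

def train_step(features, human_feedback):
    # nudge the prediction toward the scalar feedback the trainer gave
    error = human_feedback - predict(features)
    for i, fi in enumerate(features):
        w[i] += LR * error * fi

def choose(candidates):
    # act greedily with respect to the learned human-reward model
    return max(range(len(candidates)), key=lambda i: predict(candidates[i]))

# illustrative: the trainer gives +1 after one state-action pair
f = [1.0, 0.0, 0.0, 0.0]
train_step(f, 1.0)
```

Because the model imitates the trainer's judgment directly, no separate reward function or further evaluative feedback is needed once training stops, which matches the abstract's claim about free-form human-generated feedback.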
This paper presents developmental learning on a humanoid robot from human-robot interactions. We consider in particular teaching humanoids as children during the child’s Separation and Individuation developmental phase (Mahler, 1979). Cognitive development during this phase is characterized both by the child’s dependence on her mother for learning and by her becoming aware of her own individuali...
In this paper, we propose a hierarchical reinforcement learning architecture for a robot with many degrees of freedom. In order to enable learning in a practical number of trials, we introduce a low-dimensional representation of the state of the robot for higher-level planning. The upper level learns a discrete sequence of sub-goals in a low-dimensional state space for achieving the main goal...
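The two-level decomposition described above can be illustrated with a toy sketch: an upper level plans a short sequence of sub-goals in an abstract one-dimensional state space, and a stubbed lower-level controller drives the state toward each sub-goal in turn. The spaces, the sub-goal count, and the controller are assumptions, not the paper's architecture.

```python
def upper_level_plan(start, goal, n_subgoals=3):
    # plan evenly spaced sub-goals along a 1-D abstract state axis;
    # in the paper this sequence would itself be learned
    step = (goal - start) / n_subgoals
    return [start + step * (i + 1) for i in range(n_subgoals)]

def lower_level_control(state, subgoal, gain=0.5):
    # stand-in for a learned low-level policy: move toward the sub-goal
    return state + gain * (subgoal - state)

state = 0.0
for sg in upper_level_plan(0.0, 1.0):
    # run the low-level controller until close to the current sub-goal
    while abs(sg - state) > 1e-3:
        state = lower_level_control(state, sg)
```

The benefit of this structure is that the upper level only ever searches the small abstract space, which is what makes learning feasible in a practical number of trials.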
In this paper, we describe a robot reinforcement learning method which enables a mobile robot to simultaneously acquire the ability to avoid objects, follow walls, and control its velocity as a result of interacting with its environment. Our approach differs from conventional reinforcement learning approaches in that the robot learns associations between input vectors and trajectory velocities rat...