Dynamic Obstacle Avoidance by Distributed Algorithm based on Reinforcement Learning (RESEARCH NOTE)


Abstract:

In this paper we focus on the application of reinforcement learning to obstacle avoidance in dynamic environments in wireless sensor networks. A distributed algorithm based on reinforcement learning is developed that lets the sensor network guide a mobile robot through dynamic obstacles. The sensor network models the danger of the area under its coverage as obstacles and adapts itself to possible changes. The proposed protocol integrates the reward computation of the sensors with information about the robot's intended destination, so that the network guides the robot step by step, choosing the safest path through dangerous zones. Simulation results show that, after a period of learning, the mobile robot reaches the target point without colliding with any obstacle. We also discuss the propagation delay of obstacle, goal, and robot information through the network. Experimental results show that the proposed method adapts quickly enough for real applications in wireless sensor networks.
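
The abstract gives no pseudocode, so the sketch below is only a rough, hypothetical illustration of the idea it describes: a grid of sensor nodes learns guidance values with a temporal-difference rule, and the robot moves at each step toward the neighboring cell with the best reward-plus-value estimate. The grid layout, reward constants, and names such as guide_step and neighbors are assumptions for illustration, not the authors' protocol.

# Minimal, hypothetical sketch (not the paper's protocol): each sensor node owns
# the value estimate of its own grid cell; the robot repeatedly asks the
# neighboring nodes for their estimates, moves to the most promising cell, and
# the visited node applies a TD(0)-style update. Grid size, rewards, and
# learning constants below are illustrative assumptions.
import random

GRID_W, GRID_H = 10, 10
GOAL = (9, 9)
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in MOVES
            if 0 <= x + dx < GRID_W and 0 <= y + dy < GRID_H]

def reward(cell, danger_zone):
    if cell == GOAL:
        return 100.0          # reaching the robot's intended destination
    if cell in danger_zone:
        return -50.0          # sensor reports this cell as dangerous
    return -1.0               # small cost per step

# One value per cell; in a real WSN this table would be distributed over the
# nodes, with only neighbor-to-neighbor messages exchanged.
values = {(x, y): 0.0 for x in range(GRID_W) for y in range(GRID_H)}

def guide_step(robot, danger_zone):
    cands = neighbors(robot)
    if random.random() < EPSILON:                          # occasional exploration
        nxt = random.choice(cands)
    else:                                                  # greedy, "safest" choice
        nxt = max(cands, key=lambda c: reward(c, danger_zone) + GAMMA * values[c])
    td_target = reward(nxt, danger_zone) + GAMMA * values[nxt]
    values[robot] += ALPHA * (td_target - values[robot])   # local TD(0) update
    return nxt

def episode(danger_zone, max_steps=200):
    robot = (0, 0)
    for _ in range(max_steps):
        if robot == GOAL:
            return True
        robot = guide_step(robot, danger_zone)
    return False

if __name__ == "__main__":
    # Dynamic obstacles: the danger zone is re-sampled every episode, so the
    # learned values must keep adapting instead of converging once and freezing.
    for _ in range(300):
        danger = {(random.randrange(GRID_W), random.randrange(GRID_H))
                  for _ in range(8)} - {GOAL, (0, 0)}
        episode(danger)
    print("value next to the start cell:", round(values[(0, 1)], 2))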


Similar articles

Obstacle Avoidance through Reinforcement Learning

A method is described for generating plan-like, reflexive, obstacle avoidance behaviour in a mobile robot. The experiments reported here use a simulated vehicle with a primitive range sensor. Avoidance behaviour is encoded as a set of continuous functions of the perceptual input space. These functions are stored using CMACs and trained by a variant of Barto and Sutton's adaptive critic algorith...
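
This excerpt mentions that the avoidance functions are stored in CMACs. As a hedged illustration of that storage scheme only, and not of the paper's controller or its adaptive-critic training loop, a minimal tile-coding (CMAC-style) approximator might look like the following; the tiling count, resolution, and learning rate are assumed.

# Hypothetical sketch of CMAC-style tile coding, the kind of function store the
# excerpt refers to; tiling count, resolution, and learning rate are assumed.
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, tiles_per_dim=10, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.lo, self.hi = lo, hi
        self.lr = lr / n_tilings                 # split the step across tilings
        self.w = np.zeros((n_tilings, tiles_per_dim + 1))   # 1-D input for brevity

    def _active_tiles(self, x):
        """Index of the active tile in each slightly offset tiling."""
        scaled = (x - self.lo) / (self.hi - self.lo) * self.tiles_per_dim
        return [min(self.tiles_per_dim, max(0, int(scaled + t / self.n_tilings)))
                for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, i] for t, i in enumerate(self._active_tiles(x)))

    def update(self, x, target):
        err = target - self.predict(x)
        for t, i in enumerate(self._active_tiles(x)):
            self.w[t, i] += self.lr * err        # local, per-tile correction

if __name__ == "__main__":
    cmac = CMAC()
    # toy target: map a normalized range reading to a smooth "steering" value
    for _ in range(5000):
        r = np.random.rand()
        cmac.update(r, float(np.sin(2 * np.pi * r)))
    print(round(cmac.predict(0.25), 2))          # roughly sin(pi/2) = 1.0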


Dynamic Obstacle Avoidance with PEARL: PrEference Appraisal Reinforcement Learning

Manual derivation of optimal robot motions for task completion is difficult, especially when a robot is required to balance its actions between opposing preferences. One solution has been to automatically learn near optimal motions with Reinforcement Learning (RL). This has been successful for several tasks including swing-free UAV flight, table tennis, and autonomous driving. However, high-dim...


Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning

Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision avoiding collision becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and do not have the ability to directly benefi...


Operation Scheduling of MGs Based on Deep Reinforcement Learning Algorithm

In this paper, the operation scheduling of Microgrids (MGs), including Distributed Energy Resources (DERs) and Energy Storage Systems (ESSs), is proposed using a Deep Reinforcement Learning (DRL) based approach. Due to the dynamic characteristic of the problem, it is first formulated as a Markov Decision Process (MDP). Next, the Deep Deterministic Policy Gradient (DDPG) algorithm is presented t...


A fuzzy controller with supervised learning assisted reinforcement learning algorithm for obstacle avoidance

Fuzzy logic systems are promising for efficient obstacle avoidance. However, it is difficult to maintain the correctness, consistency, and completeness of a fuzzy rule base constructed and tuned by a human expert. A reinforcement learning method is capable of learning the fuzzy rules automatically. However, it incurs a heavy learning phase and may result in an insufficiently learned rule base d...


Neural Reinforcement Learning for an Obstacle Avoidance Behavior

Reinforcement learning (RL) offers a set of various algorithms for in-situation behavior synthesis [1]. The Q-learning [2] technique is certainly the most used of the RL methods. Multilayer perceptron implementations of Q-learning were proposed early [3], owing to their restricted memory needs and generalization capability [4]. Self-organizing map implementation of the Q-...




Journal details

Volume 28, Issue 2
Pages 198-204
Publication date: 2015-02-01

