A Q-learning Based Continuous Tuning of Fuzzy Wall Tracking

Authors

  • A. Ebrahimzadeh, Electrical & Computer Engineering, Babol Nooshirvani University of Technology
Abstract:

A simple, easy-to-implement algorithm is proposed to address the wall tracking task of an autonomous robot. The robot must navigate in unknown environments, find the nearest wall, and track it solely based on locally sensed data. The proposed method couples fuzzy logic and Q-learning to meet the requirements of autonomous navigation. Fuzzy if-then rules provide a reliable decision-making framework that handles uncertainties and allows the incorporation of heuristic knowledge. The dynamic structure of Q-learning makes it a promising tool for tuning fuzzy inference systems when little or no prior knowledge about the world is available. To the robot, the world is modeled as a set of state-action pairs. For each fuzzified state there are several suggested actions, and states are related to their corresponding actions via fuzzy if-then rules based on human reasoning. The robot selects the most encouraged action for each state through online experience. Experiments on a simulated Khepera robot validate the efficiency of the proposed method. Simulation results demonstrate a successful implementation of the wall tracking task, in which the robot keeps itself within predefined margins from walls, even walls with complex concave, convex, or polygonal shapes.
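
The coupling described in the abstract follows the general fuzzy Q-learning pattern: each fuzzy rule keeps its own Q-values over a small set of candidate actions, the global command is a firing-strength-weighted blend of the actions the fired rules select, and the temporal-difference error is shared among the fired rules. Below is a minimal Python sketch of that pattern, assuming triangular membership functions over a single normalized side-distance reading and a hypothetical set of candidate steering rates; none of these design details are taken from the paper itself.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical fuzzy sets over the normalized side-distance reading.
RULES = {
    "near": lambda d: tri(d, -0.1, 0.1, 0.3),
    "ok":   lambda d: tri(d, 0.1, 0.3, 0.5),
    "far":  lambda d: tri(d, 0.3, 0.6, 1.1),
}
ACTIONS = [-0.4, 0.0, 0.4]  # candidate steering rates (rad/s), illustrative
Q = {r: {a: 0.0 for a in ACTIONS} for r in RULES}  # one Q-row per rule

def select_action(dist):
    """Each fired rule picks an action (epsilon-greedy); blend by firing strength."""
    fired = {r: mu(dist) for r, mu in RULES.items() if mu(dist) > 0.0}
    chosen = {r: (random.choice(ACTIONS) if random.random() < EPSILON
                  else max(Q[r], key=Q[r].get)) for r in fired}
    total = sum(fired.values())
    steering = sum(w * chosen[r] for r, w in fired.items()) / total
    return steering, fired, chosen

def value(dist):
    """State value: firing-strength-weighted max of each fired rule's Q-row."""
    weights = {r: mu(dist) for r, mu in RULES.items() if mu(dist) > 0.0}
    total = sum(weights.values())
    return sum(w * max(Q[r].values()) for r, w in weights.items()) / total

def update(fired, chosen, reward, next_dist):
    """Share the TD error among fired rules in proportion to their firing."""
    total = sum(fired.values())
    target = reward + GAMMA * value(next_dist)
    for r, w in fired.items():
        Q[r][chosen[r]] += ALPHA * (w / total) * (target - Q[r][chosen[r]])
```

In a simulation loop this would run as: read the side sensor, call select_action, apply the blended steering command, compute a reward from the distance error to the desired margin, and call update with the next reading.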


Similar articles

Solving Continuous Action/State Problem in Q-Learning Using Extended Rule Based Fuzzy Inference Systems

Q-learning is a kind of reinforcement learning where the agent solves the given task based on rewards received from the environment. Most research done in the field of Q-learning has focused on discrete domains, although the environment with which the agent must interact is generally continuous. Thus we need to devise some methods that enable Q-learning to be applicable to the continuous proble...

Full text
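
For contrast with the continuous extension described in the snippet above, here is a minimal sketch of the standard one-step tabular Q-learning update over discrete states and actions; the state and action encodings are illustrative, not taken from the paper.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One-step Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)                           # Q-table keyed by (state, action)
q_update(Q, s=0, a=1, r=0.5, s_next=2, actions=[0, 1])
```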

Continuous Deep Q-Learning with Model-based Acceleration: Appendix

The iLQG algorithm optimizes trajectories by iteratively constructing locally optimal linear feedback controllers under a local linearization of the dynamics p(x_{t+1} | x_t, u_t) = N(f_{x_t} x_t + f_{u_t} u_t, F_t) and a quadratic expansion of the rewards r(x_t, u_t) (Tassa et al., 2012). Under linear dynamics and quadratic rewards, the action-value function Q(x_t, u_t) and value function V(x_t) are locally quadratic an...

Full text
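
The local linear-quadratic structure mentioned in the snippet can be written out explicitly. A sketch in standard iLQG notation follows; only the dynamics model is quoted from the snippet itself, and expansion points and higher-order terms are suppressed.

```latex
% Linearized dynamics, as quoted in the snippet (Tassa et al., 2012):
p(x_{t+1} \mid x_t, u_t) = \mathcal{N}\!\left(f_{x_t} x_t + f_{u_t} u_t,\; F_t\right)

% Under these dynamics and quadratic rewards, Q and V are locally quadratic:
Q(x_t, u_t) \approx \frac{1}{2}
  \begin{bmatrix} x_t \\ u_t \end{bmatrix}^{\top}
  \begin{bmatrix} Q_{xx,t} & Q_{xu,t} \\ Q_{ux,t} & Q_{uu,t} \end{bmatrix}
  \begin{bmatrix} x_t \\ u_t \end{bmatrix}
  + \begin{bmatrix} Q_{x,t} \\ Q_{u,t} \end{bmatrix}^{\top}
  \begin{bmatrix} x_t \\ u_t \end{bmatrix} + \mathrm{const},
\qquad
V(x_t) \approx \tfrac{1}{2}\, x_t^{\top} V_{xx,t}\, x_t + V_{x,t}^{\top} x_t + \mathrm{const}.
```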

Continuous Deep Q-Learning with Model-based Acceleration

Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore alg...

Full text

Q2: Memory-based active learning for optimizing noisy continuous functions

This paper introduces a new algorithm, Q2, for optimizing the expected output of a multi-input noisy continuous function. Q2 is designed to need only a few experiments; it avoids strong assumptions on the form of the function, and it is autonomous in that it requires little problem-specific tweaking. These capabilities are directly applicable to industrial processes and may become increasingly valuab...

Full text

Incremental-Topological-Preserving-Map-Based Fuzzy Q-Learning (ITPM-FQL)

Reinforcement Learning (RL) is thought to be an appropriate paradigm for acquiring policies in autonomous learning agents that work without initial knowledge, because RL evaluates learning from simple "evaluative" or "critic" information instead of the "instructive" information used in Supervised Learning. There are two well-known types of RL, namely Actor-Critic Learning and Q-Learning. Among them, Q-...

Full text

Efficient Implementation of Dynamic Fuzzy Q-Learning

This paper presents a Dynamic Fuzzy Q-Learning (DFQL) method that is capable of tuning Fuzzy Inference Systems (FIS) online. On-line self-organizing learning is developed so that structure and parameter identification are accomplished automatically and simultaneously. Self-organizing fuzzy inference is introduced to calculate actions and Q-functions so as to enable us to deal with continuou...

Full text
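
In standard fuzzy Q-learning, the fuzzy inference system produces both the global action and the global Q-value as firing-strength-weighted combinations of per-rule quantities; the self-organizing formulation in DFQL builds on this idea. A sketch of the standard combination is below; DFQL's exact equations may differ.

```latex
a(s) = \frac{\sum_{i=1}^{N} \phi_i(s)\, a_i}{\sum_{i=1}^{N} \phi_i(s)},
\qquad
Q\bigl(s, a(s)\bigr) = \frac{\sum_{i=1}^{N} \phi_i(s)\, q_i(a_i)}{\sum_{i=1}^{N} \phi_i(s)},
```

where \phi_i(s) is the firing strength of rule i, a_i is the action selected by rule i, and q_i(a_i) is that rule's stored Q-value.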



Journal title

Volume 25, Issue 4

Pages 355-366

Publication date: 2012-10-01

