Search results for: geo grid reinforcement
Number of results: 139042
The designer of a mapping system for mobile robots has to choose how to model the environment of the robot. Popular models are feature maps and grid maps. Depending on the structure of the environment, each representation has certain advantages. In this paper, we present an approach that maintains feature maps as well as grid maps of the environment. This allows a robot to update its pose and m...
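The grid-map side of such a hybrid representation is typically an occupancy grid. As a purely illustrative sketch (not this paper's implementation; the class name, resolution, and sensor-model constants are assumptions), a log-odds cell update could look like this:

import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # assumed log-odds increments for a hit / a miss

class OccupancyGrid:
    def __init__(self, size=100, resolution=0.1):
        self.res = resolution
        self.logodds = np.zeros((size, size))

    def update_cell(self, x, y, hit):
        # Convert metric coordinates to cell indices and apply the sensor model.
        i, j = int(x / self.res), int(y / self.res)
        self.logodds[i, j] += L_OCC if hit else L_FREE

    def probability(self):
        # Recover occupancy probabilities from the accumulated log-odds.
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))

grid = OccupancyGrid()
grid.update_cell(1.2, 3.4, hit=True)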
Incorporating skills in reinforcement learning methods accelerates agents' learning performance. The key problem of automatic skill discovery is to find subgoal states and create skills to reach them. Among the proposed algorithms, those based on graph centrality measures have achieved precise results. In this paper we propose a new graph centrality measure for identifying subgoal stat...
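The paper's own centrality measure is not reproduced in the snippet; as a stand-in illustration of the general approach, the sketch below ranks candidate subgoals by betweenness centrality on the agent's state-transition graph (all names are assumptions):

import networkx as nx

def find_subgoals(transitions, top_k=3):
    """transitions: iterable of (state, next_state) pairs collected during exploration."""
    g = nx.DiGraph()
    g.add_edges_from(transitions)
    centrality = nx.betweenness_centrality(g)
    # States that lie on many shortest paths behave like "doorways" between regions.
    return sorted(centrality, key=centrality.get, reverse=True)[:top_k]

# Example: a small trace in which state 2 connects two regions.
trace = [(0, 1), (1, 2), (2, 5), (5, 6), (6, 7), (3, 2), (4, 2), (2, 8)]
print(find_subgoals(trace))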
Considering the dynamic, heterogeneous, and autonomous characteristics of computing resources in grid computing systems, and the flexibility and effectiveness of economic methods applied to the problem of resource management, a double auction mechanism for resource allocation in grid computing systems is presented. Firstly, a market model of the double auction is described, in which agents are utili...
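As a minimal sketch of how a double-auction clearing step works (the paper's market model, agent utilities, and pricing rule are not reproduced; the mid-point price here is an assumption):

def clear_double_auction(bids, asks):
    """bids / asks: lists of (agent_id, price). Returns matched (buyer, seller, price) trades."""
    bids = sorted(bids, key=lambda b: b[1], reverse=True)  # highest-paying buyers first
    asks = sorted(asks, key=lambda a: a[1])                # cheapest sellers first
    trades = []
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:                    # no further mutually beneficial matches
            break
        price = (bid + ask) / 2.0        # assumed mid-point pricing rule
        trades.append((buyer, seller, price))
    return trades

print(clear_double_auction([("u1", 9), ("u2", 5)], [("r1", 4), ("r2", 7)]))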
Reinforcement learning methods for discrete and semi-Markov decision problems, such as Real-Time Dynamic Programming, can be generalized to Controlled Diffusion Processes. The optimal control problem reduces to a boundary value problem for a fully nonlinear second-order elliptic differential equation of Hamilton-Jacobi-Bellman (HJB) type. Numerical analysis provides multigrid methods for this ki...
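For reference, a standard stationary form of such an HJB equation (general textbook form for a controlled diffusion dX_t = b(X_t, u_t) dt + \sigma(X_t) dW_t with running cost c and discount rate \rho; the specific boundary value formulation used in the paper is not reproduced here) is:

\rho V(x) = \min_{u \in U} \Big\{ c(x,u) + b(x,u) \cdot \nabla V(x) + \tfrac{1}{2} \operatorname{tr}\big( \sigma(x)\sigma(x)^{\top} \nabla^{2} V(x) \big) \Big\}

The full nonlinearity comes from the minimization over the control u inside the differential operator, which is what makes multigrid solvers for this class of equations nontrivial.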
Computer models can be used to investigate the role of emotion in learning. Here we present EARL, our framework for the systematic study of the relation between emotion, adaptation and reinforcement learning (RL). EARL enables the study of, among other things, communicated affect as reinforcement to the robot; the focus of this chapter. In humans, emotions are crucial to learning. For example, ...
Computer models can be used to investigate the role of emotion in learning. Here we present EARL, our framework for the systematic study of the relation between emotion, adaptation and reinforcement learning (RL). EARL enables the study of, among other things, communicated affect as reinforcement to the robot; the focus of this paper. In humans, emotions are crucial to learning. For example, a...
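One way to read "communicated affect as reinforcement" is as reward shaping, where a human affect signal is blended into the scalar reward a learner receives. The sketch below is an illustration of that idea only; the blending weight, hyper-parameters, and tabular Q-learning setup are assumptions, not details of the EARL framework:

from collections import defaultdict

ALPHA, GAMMA, AFFECT_WEIGHT = 0.1, 0.95, 0.5   # assumed hyper-parameters
Q = defaultdict(float)

def shaped_reward(task_reward, affect_signal):
    # Blend the task reward with a communicated affect signal in [-1, 1].
    return task_reward + AFFECT_WEIGHT * affect_signal

def q_update(state, action, next_state, task_reward, affect_signal, actions):
    r = shaped_reward(task_reward, affect_signal)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])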
Parti-game is a new algorithm for learning feasible trajectories to goal regions in high dimensional continuous state-spaces. In high dimensions it is essential that learning does not plan uniformly over a state-space. Parti-game maintains a decision-tree partitioning of state-space and applies techniques from game theory and computational geometry to efficiently and adaptively concentrate high reso...
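A minimal sketch of the variable-resolution partitioning idea: a cell of continuous state-space is split along its longest axis when the planner needs finer resolution there. The actual splitting criterion in Parti-game (a game-theoretic win/lose test on cells) is not reproduced; the class and splitting rule below are assumptions for illustration:

class Cell:
    def __init__(self, low, high):
        self.low, self.high = list(low), list(high)
        self.children = None

    def split(self):
        # Split this cell in half along its widest dimension.
        widths = [h - l for l, h in zip(self.low, self.high)]
        axis = widths.index(max(widths))
        mid = (self.low[axis] + self.high[axis]) / 2.0
        left_high = self.high[:]; left_high[axis] = mid
        right_low = self.low[:]; right_low[axis] = mid
        self.children = (Cell(self.low, left_high), Cell(right_low, self.high))
        return self.children

root = Cell([0.0, 0.0], [1.0, 1.0])
finer = root.split()   # refine where trajectories keep failing to reach the goal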
There are many geo-location techniques proposed for cellular networks. They are mainly classified based on the parameters used to extract location information. In this study we take a fresh look at these positioning methods and classify them differently, regardless of parameter type. We classify these techniques based on the mathematical algorithms used to derive location info...
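As one example of such a mathematical algorithm (for illustration only; the paper's classification itself is not reproduced here), a position can be derived from range measurements to known base stations by linear least-squares trilateration:

import numpy as np

def trilaterate(stations, ranges):
    """stations: (n, 2) array of known positions; ranges: (n,) measured distances."""
    x0, y0, r0 = stations[0, 0], stations[0, 1], ranges[0]
    a, b = [], []
    # Subtract the first station's circle equation to linearize the system.
    for (xi, yi), ri in zip(stations[1:], ranges[1:]):
        a.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(a), np.array(b), rcond=None)
    return sol   # estimated (x, y)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
print(trilaterate(stations, np.array([7.07, 7.07, 7.07])))   # roughly (5, 5)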
This paper presents two examples of contact dynamics simulations for space robotics application: Satellite docking in GEO and rover locomotion on planetary surfaces. The contact modeling techniques include a) contact between polygonal surfaces according to the elastic foundation model theory and b) contact between digital elevation grid surfaces and point cloud surfaces with application of Bekk...
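The elastic foundation model treats each surface element as an independent spring, so the contact force is a sum over per-element penetration depths. The sketch below only illustrates that summation; the stiffness value, element area, and function names are assumptions, not parameters of the simulations described above:

def elastic_foundation_force(penetrations, element_area, k_foundation=1.0e6):
    """penetrations: per-element penetration depths in meters (<= 0 means no contact)."""
    total = 0.0
    for d in penetrations:
        if d > 0.0:
            # Pressure proportional to local penetration, integrated over the element area.
            total += k_foundation * d * element_area
    return total

print(elastic_foundation_force([0.001, 0.0, 0.002], element_area=1e-4))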
Scott Hogg & Associates Ltd. have completed the development of a 3-axis, helicopter towed, aeromagnetic gradiometer. The towed bird is a complete airborne system with 4 cesium sensors, radar altimeter, pitch, roll and yaw measurement, a 3-axis magnetic fluxgate and a GPS positioning system. The system is designed to provide accurate geo-referenced magnetic gradients; G-east, G-north and Gvertic...
Chart: number of search results per year