Search results for: sokoban
Number of results: 72
This demo proposal presents the implementation of a solution to the box-pushing problem in a mobile robot simulator. We solve several boards of the Sokoban game, which consists of pushing boxes in a scenario from a start configuration to a goal configuration. The game finishes when all boxes are on one of the goal positions. The player has some constraints in his actuation capabilities because i...
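The abstract is truncated above; as a minimal sketch of the win condition it describes (the game finishes when every box sits on a goal position), here is an illustrative check in Python. The set-of-coordinates encoding and the function name `is_solved` are assumptions made for this example, not part of the cited work.

```python
# Minimal sketch of the Sokoban win condition: every box on some goal cell.
# Encoding boxes and goals as sets of (row, col) pairs is an assumption
# made for this illustration, not the simulator's actual representation.

def is_solved(boxes, goals):
    """True when all boxes occupy goal positions (boxes is a subset of goals)."""
    return set(boxes) <= set(goals)

# Example: one box still off its goal, then both boxes placed.
goals = {(1, 2), (4, 4)}
print(is_solved({(1, 2), (3, 3)}, goals))  # False
print(is_solved({(1, 2), (4, 4)}, goals))  # True
```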
We describe a case study in human problem solving for a particular problem – a Sokoban puzzle. For the study we collected data using the Internet. In this way we were able to collect a significantly larger data set (2000 problems solved, 780 hours of problem-solving activity) than in typical studies of human problem solving. Our analysis of the collected data focuses on the issue of problem difficulty. W...
AI research has developed an extensive collection of methods to solve state-space problems. Using the challenging domain of Sokoban, this paper studies the effect of search enhancements on program performance. We show that the current state of the art in AI generally requires a large programming and research effort into domain-dependent methods to solve even moderately complex problems in ...
We describe an algorithm for the procedural generation of levels for the popular Japanese puzzle game Sokoban. The algorithm takes a few parameters and builds a random instance of the puzzle that is guaranteed to be solvable. Although our algorithm and its implementation run in exponential time, we present experimental evidence that it is sufficiently fast for offline use on a current generati...
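The abstract does not spell out the construction, so the sketch below is not the authors' algorithm; it only illustrates a standard way to guarantee solvability in Sokoban level generation, namely starting from a solved position and applying random reverse ("pull") moves, which can then be replayed as pushes. The function name `generate`, the grid encoding, and the parameters are assumptions made for this example.

```python
import random

# Illustrative sketch only: start from a solved board (every box on a goal)
# and apply random pull moves. The result is solvable by construction, since
# replaying the pulls as pushes in reverse order returns to the solved state.

DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def generate(walls, goals, player, steps=50, seed=0):
    """walls: set of blocked cells; goals: set of goal cells (boxes start here);
    player: starting player cell. Returns a (boxes, player) start configuration."""
    rng = random.Random(seed)
    boxes = set(goals)  # solved position: every box on a goal
    for _ in range(steps):
        d = rng.choice(DIRS)
        back = (player[0] - d[0], player[1] - d[1])   # cell the player retreats to
        front = (player[0] + d[0], player[1] + d[1])  # cell a pulled box would leave
        if back in walls or back in boxes:
            continue
        if front in boxes:          # pull the box into the player's current cell
            boxes.remove(front)
            boxes.add(player)
        player = back
    return boxes, player

if __name__ == "__main__":
    # Hypothetical 5x5 room with a single goal cell in the middle.
    walls = ({(0, c) for c in range(5)} | {(4, c) for c in range(5)} |
             {(r, 0) for r in range(5)} | {(r, 4) for r in range(5)})
    boxes, player = generate(walls, goals={(2, 2)}, player=(2, 3), steps=40)
    print("start boxes:", boxes, "player:", player)
```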
Recently, due to the widespread diffusion of smartphones, mobile puzzle games have experienced a huge increase in their popularity. A successful puzzle has to be both captivating and challenging, and it has been suggested that these features are somehow related to their computational complexity [5]. Indeed, many puzzle games – such as Mah-Jongg, Sokoban, Candy Crush, and 2048, to name a few – a...
When given several problems to solve in some domain, a standard reinforcement learner learns an optimal policy from scratch for each problem. If the domain has particular characteristics that are goal and problem independent, the learner might be able to take advantage of previously solved problems. Unfortunately, it is generally infeasible to directly apply a learned policy to new problems. Th...
We present an architectural approach to learning problem solving skills from demonstration, using internal models to represent problem-solving operational knowledge. Internal forward and inverse models are initially learned through active interaction with the environment, and then enhanced and finessed by observing expert teachers. While a single internal model is capable of solving a single go...
Humans can effectively navigate through large search spaces, enabling them to solve problems with daunting complexity. This is largely due to an ability to successfully distinguish between relevant and irrelevant actions (moves). In this paper we present a new single-agent search pruning technique that is based on a move's influence. The influence measure is a crude form of relevance in that it is...
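The influence measure itself is specific to the cited paper and is not reproduced here; the sketch below only illustrates the general shape of relevance-based pruning inside a depth-limited single-agent search, where successor moves scored as unrelated to the previous move are skipped. The callbacks `expand`, `is_goal`, and `influence`, and the `threshold` parameter, are hypothetical placeholders.

```python
# Generic sketch of relevance-based move pruning in a depth-limited search.
# influence(prev_move, move) is a placeholder for a score of how related two
# moves are; moves scoring below the threshold are pruned.

def search(state, depth, prev_move, influence, threshold, expand, is_goal):
    """expand(state) -> iterable of (move, next_state); returns a move list or None."""
    if is_goal(state):
        return []
    if depth == 0:
        return None
    for move, nxt in expand(state):
        # Skip moves that look irrelevant to the line of play so far.
        if prev_move is not None and influence(prev_move, move) < threshold:
            continue
        plan = search(nxt, depth - 1, move, influence, threshold, expand, is_goal)
        if plan is not None:
            return [move] + plan
    return None
```

As with any pruning of this kind, too aggressive a threshold can cut relevant moves, so the threshold choice trades search effort against the risk of missing solutions.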