Multi-Agent Coordination: DCOPs and Beyond

Author

  • Marc Pujol-Gonzalez
Abstract

Distributed constraint optimization problems (DCOPs) are a model for representing multi-agent systems in which agents cooperate to optimize a global objective. The DCOP model has two main advantages: it can represent a wide range of problem domains, and it supports the development of generic algorithms to solve them. Firstly, this paper presents some advances in both complete and approximate DCOP algorithms. Secondly, it explains that the DCOP model makes a number of unrealistic assumptions that severely limit its range of application. Finally, it points out hints on how to tackle such limitations.

∗ This work has been funded by projects EVE (TIN2009-14702-C02-01 and 02), Agreement Technologies (CONSOLIDER CSD2007-0022), RECEDIT (TIN2009-13591-C02-02) and Generalitat de Catalunya (2009-SGR-1434 and 2009-SGR-362). Marc Pujol-Gonzalez is supported by the Ministry of Science and Innovation (BES-2010-030466).

1 Distributed constraint optimization

Distributed constraint optimization problems (DCOPs) are a model for representing multi-agent systems in which agents cooperate to optimize a global objective. The DCOP model has two main advantages. Firstly, it can represent a wide range of problem domains such as wireless sensor networks [Zhang et al., 2005], peer-to-peer networks [Faltings et al., 2006], meeting scheduling [Maheswaran et al., 2004], and traffic control [Junges and Bazzan, 2008]. Secondly, it supports the development of generic solving algorithms. Therefore, researchers have developed several complete algorithms such as ADOPT [Modi et al., 2005], DPOP [Petcu and Faltings, 2005], and its generalization GDL [Aji and McEliece, 2000; Vinyals et al., 2010b]. Nevertheless, solving DCOPs has been shown to be NP-hard [Modi et al., 2005]. The main advantage of these complete algorithms is that they guarantee the maximum possible solution quality: optimality. However, they scale poorly as the number of agents increases, in terms of both computational and communication requirements.

Function filtering is a promising technique [Brito and Meseguer, 2010] to achieve better scalability. Basically, given a method to compute approximations of cost functions and a candidate solution, function filtering allows pruning regions of the solution space that only contain non-optimal solutions. Notice that some application domains are especially communication-constrained, whereas others are mainly computationally constrained. For instance, data transmission is severely limited in wireless sensor networks, and bandwidth is a scarce resource in peer-to-peer networks. Conversely, meeting scheduling and traffic control are usually computationally constrained because they mainly operate over high-speed networks. Such distinctions between resource-constrained settings motivated us to study different function approximation methods to be employed along with function filtering, in order to reduce either communication or computation requirements as much as possible. First, in [Pujol-Gonzalez et al., 2011] we presented a novel class of approximation techniques, the so-called top-down approximations. Combining these new techniques with function filtering, we managed to reduce communication requirements by as much as two orders of magnitude, while keeping computational requirements at bay. As a consequence, the resulting algorithm appears as a very good candidate to solve DCOPs optimally in communication-constrained scenarios. Currently, we are working on improving the effectiveness of function filtering in computationally-constrained settings.
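To make the filtering idea concrete, here is a minimal, self-contained sketch in Python with illustrative names. It is not the distributed algorithm of [Brito and Meseguer, 2010], which filters the cost functions that agents exchange over a tree structure; it only shows the core test: a tuple can be discarded when its exact cost plus a lower bound on the cost contributed by the rest of the problem already exceeds the cost of a known candidate solution.

```python
from itertools import product

def filter_cost_function(cost_fn, rest_lower_bound, upper_bound, domains):
    """Keep only the tuples that might still belong to an optimal solution.

    cost_fn          -- dict: assignment tuple -> exact cost of this function
    rest_lower_bound -- callable: tuple -> lower bound on the cost added by
                        the rest of the problem for that tuple
    upper_bound      -- cost of the best complete solution known so far
    domains          -- list of variable domains (iterables of values)
    """
    filtered = {}
    for tup in product(*domains):
        # Optimistic estimate of the best complete solution extending `tup`.
        optimistic = cost_fn[tup] + rest_lower_bound(tup)
        if optimistic <= upper_bound:   # could still beat the candidate: keep
            filtered[tup] = cost_fn[tup]
    return filtered

# Toy usage: two binary variables, a constant lower bound of 4 on the
# remaining problem, and a candidate solution of cost 8.
domains = [(0, 1), (0, 1)]
costs = {(0, 0): 3, (0, 1): 7, (1, 0): 2, (1, 1): 9}
print(filter_cost_function(costs, lambda t: 4, 8, domains))
# -> {(0, 0): 3, (1, 0): 2}
```

Tighter approximations, i.e. larger lower bounds and cheaper candidate solutions, discard more tuples; that is exactly the lever discussed next.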
Since function filtering’s pruning is based on lower and upper bounds on the optimal solution cost, tightening such bounds increases the amount of pruning. Because more pruning means a reduction of the solution space to explore, agents require fewer computational resources (both CPU and memory) to solve the same problem. Likewise, these savings in computational resources also increase the range of problems that can be solved optimally by algorithms employing function filtering. In fact, preliminary results indicate that our improvements allow agents to solve up to 75% more problem instances given the same resource constraints.

Another approach to improve the scalability of DCOP algorithms is to drop optimality in favor of lower-complexity, approximate algorithms. Traditionally, these algorithms have not offered any quality guarantees at all [Zhang et al., 2005], but recent works have been able to provide offline bounds for some of them [Farinelli et al., 2009; Kiekintveld et al., 2010]. The disadvantage of such offline bounds is that they are generally very weak. Thus, we provided a new class of algorithms…
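In symbols, and assuming the usual minimization convention (the excerpt above does not fix one), the bound-based pruning and the typical shape of a quality guarantee can be stated as follows; the notation is illustrative rather than taken from the paper.

```latex
% Pruning rule behind function filtering: lb(t) lower-bounds the cost of any
% complete solution extending tuple t, and ub is the cost of a known candidate.
\[
  \mathrm{lb}(t) \;\le\; \min_{x \supseteq t} C(x), \qquad
  \mathrm{ub} \;=\; C(\hat{x}) \;\ge\; C(x^{*}), \qquad
  \mathrm{lb}(t) > \mathrm{ub} \;\Longrightarrow\; t \text{ cannot be part of an optimal solution.}
\]
% Typical shape of a quality guarantee for an approximate algorithm:
\[
  C(\tilde{x}) \;\le\; \rho \cdot C(x^{*}), \qquad \rho \ge 1,
\]
% A smaller \rho is a stronger guarantee; offline bounds fix \rho in advance
% for a whole problem class, which is roughly why they tend to be weak.
```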


Related articles

ER-DCOPs: A Framework for Distributed Constraint Optimization with Uncertainty in Constraint Utilities

Distributed Constraint Optimization Problems (DCOPs) have been used to model a number of multi-agent coordination problems. In DCOPs, agents are assumed to have complete information about the utility of their possible actions. However, in many real-world applications, such utilities are stochastic due to the presence of exogenous events that are beyond the direct control of the agents. This pap...


Decentralized multi-agent reinforcement learning in average-reward dynamic DCOPs

Researchers have introduced the Dynamic Distributed Constraint Optimization Problem (Dynamic DCOP) formulation to model dynamically changing multi-agent coordination problems, where a dynamic DCOP is a sequence of (static canonical) DCOPs, each partially different from the DCOP preceding it. Existing work typically assumes that the problem in each time step is decoupled from the problems in oth...
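A compact way to read that definition (the notation here is assumed for illustration; the paper may use different symbols) is:

```latex
% A dynamic DCOP as a sequence of static DCOPs:
\[
  \mathcal{D} \;=\; \langle P_1, P_2, \dots, P_T \rangle ,
\]
% where each P_t is a static DCOP (agents, variables, domains, cost functions)
% and P_{t+1} differs from P_t in only part of these components; a common
% objective is to solve each P_t as it arrives.
```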


On Message-Passing, MAP Estimation in Graphical Models and DCOPs

The maximum a posteriori (MAP) estimation problem in graphical models is a problem common in many applications such as computer vision and bioinformatics. For example, they are used to identify the most likely orientation of proteins in protein design problems. As such, researchers in the machine learning community have developed a variety of approximate algorithms to solve them. On the other h...
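The link between MAP estimation and DCOPs that the title alludes to can be made explicit through a standard reformulation (not specific to this paper): assuming strictly positive factors, maximizing a product of factors is equivalent to minimizing a sum of costs, which is exactly the DCOP objective.

```latex
\[
  x^{\mathrm{MAP}}
  \;=\; \arg\max_{x} \prod_{i} \phi_i(x_{S_i})
  \;=\; \arg\min_{x} \sum_{i} \bigl(-\log \phi_i(x_{S_i})\bigr),
\]
% so each factor \phi_i over scope S_i becomes a cost function
% f_i = -\log \phi_i, and message-passing MAP solvers and DCOP algorithms
% address the same underlying optimization problem.
```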


Infinite-Horizon Proactive Dynamic DCOPs

The Distributed Constraint Optimization Problem (DCOP) formulation is a powerful tool for modeling multi-agent coordination problems. Researchers have recently extended this model to Proactive Dynamic DCOPs (PD-DCOPs) to capture the inherent dynamism present in many coordination problems. The PD-DCOP formulation is a finite-horizon model that assumes a finite horizon is known a priori. It ignor...


Distributed Constraint Optimization Problems Related with Soft Arc Consistency

Distributed Constraint Optimization Problems (DCOPs) are commonly used for modeling multi-agent coordination problems. DCOPs can be optimally solved by distributed search algorithms based on message exchange. In centralized solving, maintaining soft arc consistency techniques during search has proved to be beneficial for performance. In this thesis we aim to explore the maintenance of differe...
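For readers unfamiliar with soft arc consistency, the standard projection operation from weighted CSPs (a textbook form, not necessarily the exact consistency level maintained in the thesis) moves cost from a binary function into a unary one without changing the cost of any complete assignment:

```latex
\[
  \alpha \;=\; \min_{b \in D_j} c_{ij}(a, b), \qquad
  c_i(a) \;\leftarrow\; c_i(a) + \alpha, \qquad
  c_{ij}(a, b) \;\leftarrow\; c_{ij}(a, b) - \alpha \quad \forall b \in D_j .
\]
% Repeating such projections (and further projecting unary costs into a
% zero-arity constraint) yields lower bounds that can prune values during search.
```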


Incremental DCOP Search Algorithms for Solving Dynamic DCOPs (Extended Abstract)

Distributed constraint optimization problems (DCOPs) are well-suited for modeling multi-agent coordination problems. However, most research has focused on developing algorithms for solving static DCOPs. In this paper, we model dynamic DCOPs as sequences of (static) DCOPs with changes from one DCOP to the next one in the sequence. We introduce the ReuseBounds procedure, which can be used by any-s...



Journal: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI 2011)

Publication date: 2011