Introduction to Approximation Algorithms
Abstract
To date, thousands of natural optimization problems have been shown to be NP-hard [8, 18]. To deal with these problems, two approaches are commonly adopted: (a) approximation algorithms and (b) randomized algorithms. Roughly speaking, approximation algorithms aim to find, in polynomial time, solutions whose costs are as close to optimal as possible. Randomized algorithms can be viewed from different angles: we can design algorithms that give optimal solutions in expected polynomial time, or that give good solutions in expectation. Many techniques are used to devise these kinds of algorithms, which can be broadly classified into (a) combinatorial algorithms, (b) linear programming based algorithms, (c) semi-definite programming based algorithms, and (d) randomized (and derandomized) algorithms. Within each class, standard algorithm design techniques such as divide and conquer, the greedy method, and dynamic programming are often applied with a great deal of ingenuity. Designing algorithms, after all, is as much an art as it is a science.

There is no point in approximating anything unless the approximation algorithm runs efficiently, i.e., in polynomial time. Hence, when we say "approximation algorithm" we implicitly mean a polynomial-time algorithm. To understand the notion of an approximation algorithm clearly, we first define the so-called approximation ratio. Informally, for a minimization problem such as VERTEX COVER, a polynomial-time algorithm A is said to be an approximation algorithm with approximation ratio δ if and only if, for every instance of the problem, A returns a solution whose value is at most δ times the optimal value for that instance. This way, δ is always at least 1. Since we do not expect an approximation algorithm with δ = 1, we would like δ to be as close to 1 as possible. Conversely, for a maximization problem the algorithm A must produce, for each input instance, a solution whose value is at least δ times the optimal value for that instance. (In this case δ ≤ 1.) In both cases the algorithm A is said to be a δ-approximation algorithm. Sometimes, for maximization problems, people use the term approximation ratio to refer to 1/δ; this ensures that the ratio is at least 1 in both the minimization and the maximization case. The terms approximation ratio, approximation factor, performance guarantee, worst-case ratio, and absolute worst-case ratio are more or less equivalent, except for the 1/δ ambiguity just mentioned.

Let us now be a little more formal. Consider an optimization problem Π in which we try to minimize a certain objective function. For example, when Π is VERTEX COVER, the objective function is the size of a vertex cover. For each instance I ∈ Π, let OPT(I) be the optimal value of the objective function for I. In VERTEX COVER, I is a graph G and OPT(G) depends on the structure of G. Given a polynomial-time algorithm A that returns some feasible solution for Π, let A(I) denote the objective value returned by A on input I. Define the approximation ratio of A to be

    δ_A = sup_{I ∈ Π} A(I) / OPT(I),

so that A is a δ-approximation algorithm for every δ ≥ δ_A.
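To make the definition concrete, here is a minimal sketch (not part of the original text's development) of the classic maximal-matching heuristic for VERTEX COVER, which is the standard example of a 2-approximation: the chosen edges form a matching, any optimal cover must contain at least one endpoint of each matched edge, and the algorithm picks both, so A(I) ≤ 2·OPT(I) for every instance I, i.e., δ = 2.

```python
def vertex_cover_2_approx(edges):
    """Classic maximal-matching 2-approximation for VERTEX COVER.

    For each edge (u, v) not yet covered, add BOTH endpoints to the
    cover.  The edges that trigger an addition form a matching, and an
    optimal cover contains at least one endpoint of each matched edge,
    so the returned cover has size at most 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example instance: a 4-cycle 0-1-2-3-0, whose optimal cover has size 2
# (e.g. {0, 2}); the algorithm may return all 4 vertices, ratio exactly 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = vertex_cover_2_approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # feasibility
assert len(cover) <= 2 * 2                              # within ratio δ = 2
```

The graph here is a hypothetical toy instance chosen only to illustrate the ratio; the guarantee δ = 2 holds for every input by the matching argument in the comment.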
Publication date: 2005