Efficient Algorithms for Geometric Optimization
contain such a constraint), which implies that after kd iterations μ(H) ≥ μ(B) ≥ 2^k. Hence, after kd successful rounds, 2^k ≤ μ(H) ≤ n·e^{k/3}. This implies that the above algorithm terminates in at most 3d ln n successful rounds. Since each round takes O(d^d) time to compute x_R and O(dn) time to compute V, the expected running time of the algorithm is O((d^2 n + d^{d+1}) log n). By combining this algorithm with a randomized recursive algorithm, Clarkson improved the expected running time to O(d^2 n) + d^{d/2+O(1)} log n.

6 Abstract Linear Programming

In this section we present an abstract framework that captures linear programming, as well as many other geometric optimization problems, including computing smallest enclosing balls (or ellipsoids) of finite point sets in R^d, computing largest balls (ellipsoids) inscribed in convex polytopes in R^d, computing the distance between polytopes in d-space, general convex programming, and many other problems. Sharir and Welzl [218] and Matoušek et al. [178] (see also Kalai [149]) presented a randomized algorithm for optimization problems in this framework, whose expected running time is linear in terms of the number of constraints whenever the combinatorial dimension d (whose precise definition, in this abstract framework, will be given below) is fixed. More importantly, the running time is `subexponential' in d for many of the LP-type problems, including linear programming. This is the first subexponential `combinatorial' bound for linear programming (a bound that counts the number of arithmetic operations and is independent of the bit complexity of the input), and is a first step toward the major open problem of obtaining a strongly polynomial algorithm for linear programming. The papers by Gärtner and Welzl [110] and by Goldwasser [112] also survey the known results on LP-type problems.

6.1 An abstract framework

Let us consider optimization problems specified by a pair (H, w), where H is a finite set and w : 2^H → W is a function into a linearly ordered set (W, ≤); we assume that W has a minimum value −∞. The elements of H are called constraints, and for G ⊆ H, w(G) is called the value of G. Intuitively, w(G) denotes the smallest value attainable by a certain objective function while satisfying all the constraints of G. The goal is to compute a minimal subset B_H of H with w(B_H) = w(H) (from which, in general, the value of H is easy to determine), assuming the availability of three basic operations, which we specify below. Such a minimization problem is called LP-type if the following two axioms are satisfied:

Axiom 1. (Monotonicity) For any F, G with F ⊆ G ⊆ H, we have w(F) ≤ w(G).

Axiom 2. (Locality) For any F ⊆ G ⊆ H with −∞ < w(F) = w(G) and any h ∈ H, w(G) < w(G ∪ {h}) implies w(F) < w(F ∪ {h}).

Linear programming is easily shown to be an LP-type problem, if we set w(G) to be the vertex of the feasible region that minimizes the objective function and that is coordinatewise lexicographically smallest (this definition is important to satisfy Axiom 2), and if we extend the definition of w(G) in an appropriate manner to handle empty or unbounded feasible regions. A basis B ⊆ H is a set of constraints satisfying −∞ < w(B) and w(B′) < w(B) for all proper subsets B′ of B. For G ⊆ H, with −∞ < w(G), a basis of G is a minimal subset B of G with w(B) = w(G). (For linear programming, a basis of G is a minimal set of halfspace constraints in G such that the minimal vertex of their intersection is the minimal vertex of G.)
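To make these definitions concrete, here is a small illustration (our own toy example, not taken from the survey): the one-dimensional problem "minimize x subject to x ≥ a_i" is LP-type, with the constraints being the numbers a_i, w(G) = max G (and w(∅) = −∞), and every basis consisting of a single constraint. The Python names below (w, is_basis) are ours.

# Toy LP-type instance (illustrative only): minimize x subject to x >= a_i.
# Constraints are the numbers a_i; w(G) = max(G), w({}) = -infinity.
import itertools

NEG_INF = float("-inf")

def w(G):
    """Value of a set of constraints: the smallest feasible x."""
    return max(G) if G else NEG_INF

def is_basis(B):
    """B is a basis if -inf < w(B) and every proper subset has a smaller value."""
    return w(B) > NEG_INF and all(w(B - {b}) < w(B) for b in B)

H = {3.0, 7.5, 1.2, 4.4}

# Monotonicity: w(F) <= w(G) whenever F is a subset of G.
for F in map(set, itertools.combinations(H, 2)):
    assert w(F) <= w(H)

# A basis of H is the single constraint attaining the maximum.
B_H = {max(H)}
assert is_basis(B_H) and w(B_H) == w(H)   # combinatorial dimension is 1 here
print("w(H) =", w(H), " basis =", B_H)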
A constraint h is violated by G if w(G) < w(G ∪ {h}), and it is extreme in G if w(G \ {h}) < w(G). The combinatorial dimension of (H, w), denoted dim(H, w), is the maximum cardinality of any basis. We call an LP-type problem basis regular if, for any basis B with |B| = dim(H, w) and for any constraint h, every basis of B ∪ {h} has exactly dim(H, w) elements. (Clearly, linear programming is basis-regular, where the dimension of every basis is d.) We assume that the following primitive operations are available.

(Violation test) h is violated by B: for a constraint h and a basis B, tests whether h is violated by B.

(Basis computation) basis(B, h): for a constraint h and a basis B, computes a basis of B ∪ {h}.

(Initial basis) initial(H): an initial basis B_0 with exactly dim(H, w) elements is available.

For linear programming, the first operation can be performed in O(d) time, by substituting the coordinates of the vertex w(B) into the equation of the hyperplane defining h. The second operation can be regarded as a dual version of the pivot step in the simplex algorithm, and can be implemented in O(d^2) time. The third operation is also easy to implement. We are now in a position to describe the algorithm. Using the initial-basis primitive, we compute a basis B_0 and call SUBEX_lp(H, B_0), where SUBEX_lp is the recursive algorithm, given in Figure 2, for computing a basis B_H of H. A simple inductive argument shows that the expected number of primitive operations performed by the algorithm is O(2^δ n), where n = |H| and δ = dim(H, w) is the combinatorial dimension. However, using a more involved analysis, which can be found in [178], one can show that basis-regular LP-type problems can be solved with an expected number of
at most

    e^{2√(δ ln((n−δ)/√δ)) + O(√δ + ln n)}

violation tests and basis computations. This is the `subexponential' bound that we alluded to.

function SUBEX_lp(H, C):                    /* H: a set of n constraints in R^d;     */
    if H = C then return C                  /* C ⊆ H: a basis; returns a basis of H. */
    else
        choose a random h ∈ H \ C;
        B := SUBEX_lp(H \ {h}, C);
        if h is violated by B then return SUBEX_lp(H, basis(B, h))
        else return B;

Figure 2: A randomized algorithm for LP-type problems.

Matoušek [173] has given examples of abstract LP-type problems of combinatorial dimension d with 2d constraints, for which the above algorithm requires Ω(e^{√(2d)} / d^{1/4}) primitive operations. Here is an example of such a problem. Let A be a lower-triangular d × d {0,1}-matrix with all diagonal entries equal to 0, i.e., a_{ij} ∈ {0,1} for 1 ≤ j < i ≤ d and a_{ij} = 0 for 1 ≤ i ≤ j ≤ d. Let x_1, ..., x_d denote variables over Z_2, and suppose that all additions and multiplications are performed modulo 2. We define a set of 2d constraints H(A) = { h_i^c | 1 ≤ i ≤ d, c ∈ {0,1} }, where

    h_i^c :   x_i ≥ Σ_{j=1}^{i−1} a_{ij} x_j + c.

That is, x_i = 1 if the right-hand side of the constraint is 1 modulo 2, and x_i ∈ {0,1} if the right-hand side is 0 modulo 2. For a subset G ⊆ H, we define w(G) to be the lexicographically smallest point of ∩_{h∈G} h. It can be shown that the above example is an instance of a basis-regular LP-type problem with combinatorial dimension d. Matoušek showed that if A is chosen randomly (i.e., each entry a_{ij}, for 1 ≤ j < i ≤ d, is chosen independently with Pr[a_{ij} = 0] = Pr[a_{ij} = 1] = 1/2) and the initial basis is also chosen randomly, then the expected number of primitive operations performed by SUBEX_lp is Ω(e^{√(2d)} / d^{1/4}).
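The recursion of Figure 2 can be transcribed almost literally. The sketch below is our own illustration; the helper names violates and basis, and the instantiation on the same toy one-dimensional problem used above (so that both primitives are trivial), are assumptions made for the example. It is meant only to show the control flow of SUBEX_lp, not to be an efficient implementation.

# A direct transcription of SUBEX_lp (Figure 2), instantiated on the toy
# LP-type problem "minimize x subject to x >= a_i" (combinatorial dimension 1).
import random

def w(G):
    return max(G) if G else float("-inf")

def violates(h, B):
    """Violation test: adding h to the basis B would increase the value."""
    return h > w(B)

def basis(B, h):
    """Basis computation: a basis of B ∪ {h}; here it is just the larger constraint."""
    return frozenset({max(w(B), h)})

def subex_lp(H, C):
    """Returns a basis of H, given a basis C ⊆ H (Figure 2)."""
    if H == C:
        return C
    h = random.choice(sorted(H - C))        # choose a random h in H \ C
    B = subex_lp(H - {h}, C)                # recurse on H \ {h}
    if violates(h, B):
        return subex_lp(H, basis(B, h))     # restart with the improved basis
    return B

H = frozenset({3.0, 7.5, 1.2, 4.4, 6.9})
B0 = frozenset({min(H)})                    # any single constraint is a valid initial basis
print(subex_lp(H, B0))                      # -> frozenset({7.5}), so w(H) = 7.5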
6.2 Linear programming

We are given a set H of n halfspaces in R^d. We assume that the objective vector is c = (1, 0, 0, ..., 0), and the goal is to minimize cx over all points in the common intersection ∩_{h∈H} h. For a subset G ⊆ H, define w(G) to be the lexicographically smallest point (vertex) of the intersection of the halfspaces in G. (As noted, some care is needed to handle unbounded or empty feasible regions; we omit here the details concerning this issue.) As noted above, linear programming is a basis-regular LP-type problem with combinatorial dimension d, and each violation test or basis computation can be implemented in time O(d) or O(d^2), respectively. In summary, we obtain a randomized algorithm for linear programming that performs an expected number of e^{2√(d ln(n/√d)) + O(√d + ln n)} arithmetic operations. Using SUBEX_lp instead of the simplex algorithm for solving the small-size problems in the RANDOM_lp algorithm (given in Figure 1), the expected number of arithmetic operations can be reduced to O(d^2 n) + e^{O(√(d log d))}. In view of Matoušek's lower bound, one should aim to exploit additional properties of linear programming to obtain a better bound on the performance of the algorithm for linear programming; this is still a major open problem.

6.3 Extensions

Recently, Chazelle and Matoušek [54] gave a deterministic algorithm for solving LP-type problems in time O(δ^{O(δ)} n), provided an additional axiom holds (together with an additional computational assumption). Still, these extra requirements are satisfied in many natural LP-type problems. Matoušek [175] has investigated the problem of finding the best solution, for an abstract LP-type problem, that satisfies all but k of the given constraints. He proved that the number of bases that violate at most k constraints in a non-degenerate instance of an LP-type problem is O((k+1)^δ), where δ is the combinatorial dimension of the problem, and that they can be computed in time O(n(k+1)^δ). In some cases the running time can be improved using appropriate data structures; see [175] for details.

Amenta [32] considers the following extension of the abstract framework: Suppose we are given a family of LP-type problems (H, w_λ), monotonically parameterized by a real parameter λ; the underlying ordered value set W has a maximum element +∞ representing infeasibility. The goal is to find the smallest λ for which (H, w_λ) is feasible, i.e., w_λ(H) < +∞. See [32, 33] for more details and related work.
6.4 Abstract linear programming and Helly-type theorems

In this subsection we describe an interesting connection between Helly-type theorems and LP-type problems, as originally noted by Amenta [32]. Let 𝒦 be an infinite collection of sets in R^d, and let t be an integer. We say that 𝒦 satisfies a Helly-type theorem, with Helly number t, if the following holds: If K is a finite subcollection of 𝒦 with the property that every subcollection of t elements of K has a nonempty intersection, then ∩K ≠ ∅.
(The best known example of a Helly-type theorem is Helly's theorem itself [123], which applies to the collection 𝒦 of all convex sets in R^d, with Helly number d+1; see [70] for an excellent survey on this topic.) Suppose further that we are given a collection K(λ), consisting of n sets K_1(λ), ..., K_n(λ) that are parametrized by some real parameter λ, with the property that K_i(λ) ⊆ K_i(λ′) for i = 1, ..., n and for λ ≤ λ′, and that, for any fixed λ, the family {K_1(λ), ..., K_n(λ)} admits a Helly-type theorem with a fixed Helly number t. Our goal is to compute the smallest λ for which ∩_{i=1}^{n} K_i(λ) ≠ ∅, assuming that such a minimum exists. Amenta proved that this problem can be transformed into an LP-type problem whose combinatorial dimension is at most t.

As an illustration, consider the smallest-enclosing-ball problem. Let P = {p_1, ..., p_n} be the given set of n points in R^d, and let K_i(λ) be the ball of radius λ centered at p_i, for i = 1, ..., n. Since the K_i's are convex, the collection in question has Helly number d+1. It is easily seen that the minimal λ for which the K_i(λ)'s have a nonempty intersection is the radius of the smallest enclosing ball of P. This shows that the smallest-enclosing-ball problem is LP-type, and can thus be solved in O(n) randomized expected time in any fixed dimension. See below for more details.

There are several other examples where Helly-type theorems can be turned into LP-type problems. They include (i) computing a line transversal to a family of translates of a convex object in the plane, (ii) computing a smallest homothet of a given convex set that intersects (or contains, or is contained in) every member of a given collection of n convex sets in R^d, and (iii) computing a line transversal to certain families of convex objects in 3-space. We refer the reader to [32, 33] for more details and for additional examples.

PART II: APPLICATIONS

In the first part of the paper we focused on general techniques for solving geometric optimization problems. In this second part, we list numerous problems in geometric optimization that can be attacked using some of the techniques reviewed above. For the sake of completeness, we will also review variants of these problems for which the above techniques are not applicable.

7 Facility-Location Problems

A typical facility-location problem is defined as follows: Given a set D = {d_1, ..., d_n} of n demand points in R^d, a parameter p, and a distance function δ, we wish to find a set S of p supply objects (points, lines, segments, etc.) so that the maximum distance between a demand point and its nearest supply object is minimized. That is, we minimize, over all possible appropriate sets S, the objective function

    c(D, S) = max_{1≤i≤n} min_{s∈S} δ(d_i, s).

Instead of minimizing the above quantity, one can choose other objective functions, such as

    c′(D, S) = Σ_{i=1}^{n} min_{s∈S} δ(d_i, s).

In some applications, a weight w_i is assigned to each point d_i ∈ D, and the distance from d_i to a point x ∈ R^2 is defined as w_i δ(d_i, x). The book by Drezner [83] describes many other variants of the facility-location problem. The set S = {s_1, ..., s_p} of supply objects partitions D into p clusters, D_1, ..., D_p, so that s_i is the nearest supply object to all points in D_i. Therefore a facility-location problem can also be regarded as a clustering problem.
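As a concrete reading of the two objective functions, the following small sketch (ours; the Euclidean metric is assumed for δ, and the function names are invented for the example) evaluates c(D, S) and c′(D, S) for finite sets of demand and supply points.

# Evaluate the facility-location objectives c(D,S) and c'(D,S) for point supplies,
# taking delta to be the Euclidean distance (an assumption for illustration).
import math

def delta(a, b):
    return math.dist(a, b)

def c_max(D, S):
    """c(D,S) = max over demand points of the distance to the nearest supply point."""
    return max(min(delta(d, s) for s in S) for d in D)

def c_sum(D, S):
    """c'(D,S) = sum over demand points of the distance to the nearest supply point."""
    return sum(min(delta(d, s) for s in S) for d in D)

D = [(0, 0), (4, 0), (0, 3), (5, 5)]
S = [(0, 1), (5, 4)]
print(c_max(D, S), c_sum(D, S))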
These facility-location (or clustering) problems arise in many areas, including operations research, pattern matching, data compression, and data mining. A useful extension of the facility-location problem, which has been widely studied, is the capacitated facility-location problem, in which we have the additional constraint that the size of each cluster should be at most c, for some parameter c ≥ n/p. If p is considered part of the input, most facility-location problems are NP-hard, even in the plane, and even when only an approximate solution is being sought [101, 113, 159, 186, 187, 167]. Although many of these problems can be solved in polynomial time for a fixed value of p, some of them still remain intractable. In this section we review efficient algorithms for a few specific facility-location problems, to which the techniques introduced in Part I can be applied; in these applications, p is usually a small constant.

7.1 Euclidean p-center

Given a set D of n demand points in R^d, we wish to find a set S of p supply points so that the maximum Euclidean distance between a demand point and its nearest neighbor in S is minimized. This problem can be solved efficiently, when p is small, using the parametric-searching technique. The decision problem in this case is to determine, for a given radius r, whether D can be covered by the union of p balls of radius r. In some applications, S is required to be a subset of D, in which case the problem is referred to as the discrete p-center problem.

General results. A naive procedure for the p-center problem runs in time O(n^{dp+2}), observing that the critical radius r* is determined by at most d+1 points, which also determine one of the balls; similarly, there are O(n^{d(p−1)}) choices for the other p−1 balls, and it takes O(n) time to verify whether a specific choice of balls covers D. For the planar case, Drezner [79] gave an improved O(n^{2p+1})-time algorithm, which was subsequently improved by Hwang et al. [142] to n^{O(√p)}. Hwang et al. [141] have given another n^{O(√p)}-time algorithm for computing a discrete p-center. Therefore, for a fixed value of p, the Euclidean p-center (and also the Euclidean discrete p-center) problem can be solved in polynomial time in any fixed dimension. However, either of these problems is NP-complete for d ≥ 2 if p is part of the input [104, 187]. This has led researchers to develop efficient algorithms for approximate solutions and for small values of p and d.

Approximation algorithms. Let r* be the minimum value of r for which p disks of radius r cover D. The greedy algorithm described in Figure 3, originally proposed by Gonzalez [113] and by Hochbaum and Shmoys [132, 133], computes in O(np) time a set S of p points so that c(D, S) ≤ 2r*.

function GREEDY_COVER(D, p):            /* D: a set of n points in R^d */
    for i = 1 to n do Max_Dist(i) := ∞;
    for i = 1 to p do
        s_i := d_j such that Max_Dist(j) = max_{1≤l≤n} Max_Dist(l);
        for j = 1 to n do Max_Dist(j) := min{ Max_Dist(j), δ(s_i, d_j) };
    return {s_1, ..., s_p};

Figure 3: Greedy algorithm for the approximate p-center.

This algorithm works equally well for any metric and for the weighted case [89]. Note that it also provides an approximate solution to the discrete p-center problem. The running time was improved to O(n log p) by Feder and Greene [101]. They also showed that computing a set S of p supply points such that c(D, S) ≤ 1.822 r* under the Euclidean distance function, or c(D, S) < 2r* under the L∞-metric, is NP-Hard.
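A runnable transcription of the greedy algorithm of Figure 3 is sketched below (our own version; the Euclidean metric and the arbitrary choice made on the first iteration, when every point is equally far, are implementation choices). It returns both the chosen supplies and the resulting covering radius c(D, S), which the cited analysis guarantees to be at most 2r*.

# Greedy 2-approximation for the Euclidean p-center (Figure 3, Gonzalez /
# Hochbaum-Shmoys): repeatedly pick the demand point farthest from the
# current supply set.  Runs in O(np) time.
import math

def greedy_cover(D, p):
    max_dist = [math.inf] * len(D)      # distance of each point to the chosen supplies
    S = []
    for _ in range(p):
        j = max(range(len(D)), key=lambda i: max_dist[i])   # farthest demand point
        s = D[j]
        S.append(s)
        for i, d in enumerate(D):
            max_dist[i] = min(max_dist[i], math.dist(s, d))
    return S, max(max_dist)             # supplies and the covering radius c(D,S)

D = [(0, 0), (1, 0), (9, 0), (10, 1), (5, 7)]
S, radius = greedy_cover(D, 2)
print(S, radius)    # the returned radius is at most twice the optimal radius r*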
See [114, 159] for other approximation algorithms. Another way of seeking an approximation is to find a small number of balls of a fixed radius, say r, that cover all demand points. Computing k*, the minimum number of balls of radius r that cover D, is also NP-complete [104]. A greedy algorithm can construct k* log n balls of radius r that cover D. Hochbaum and Maass gave a polynomial-time algorithm to compute a cover of size (1+ε)k*, for any ε > 0 [130]; see also [45, 101, 114]. No constant-factor approximation algorithm is known for the capacitated covering problem with unit-radius disks, that is, the problem of partitioning a given point set S in the plane into the minimum number of clusters, each of which consists of at most c points and can be covered by a disk of radius r. Nevertheless, the greedy algorithm can be modified to obtain an O(log n)-factor approximation for this problem [36].

The general results reviewed so far do not make use of parametric searching: since there are only O(n^{d+1}) candidate values for the optimum radius r*, one can simply enumerate all these values and run a standard binary search among them. The improvement that one can gain from parametric searching is significant only when p is relatively small, which is what we are going to discuss next.

Euclidean 1-center. The 1-center problem is to compute the smallest ball enclosing D. The decision procedure for the 1-center problem is thus to determine whether D can be covered by a ball of radius r. For d = 2, the decision problem can be solved in O(log n) parallel steps using O(n) processors, e.g., by testing whether the intersection of the disks of radius r centered at the points of D is nonempty. This yields an O(n log^3 n)-time algorithm for the planar Euclidean 1-center problem. Using the prune-and-search paradigm, one can, however, solve the 1-center problem in linear time [86], and this approach extends to higher dimensions, where, for any fixed d, the running time is d^{O(d)} n [17, 54, 90]. Megiddo [185, 189] extends this approach to obtain a linear-time algorithm for the weighted 1-center problem. Dynamic data structures for maintaining the smallest enclosing ball of a set of points, as points are being inserted and deleted, are given in [11, 37]. See [78, 82, 84, 102, 182] for other variants of the 1-center problem.

A natural extension of the 1-center problem is to find a disk of the smallest radius that contains k of the n input points. The best known deterministic algorithm runs in time O(n log n + nk log k) using O(n + k^2 log k) space [100, 72] (see also [96]), and the best known randomized algorithm runs in O(n log n + nk) expected time using O(nk) space, or in O(n log n + nk log k) expected time using O(n) space [174]. Matoušek [175] also showed that the smallest disk covering all but k points can be computed in time O(n log n + k^3 n^ε). (In this paper, the meaning of complexity bounds that depend on an arbitrary parameter ε > 0, like the one just stated, is that, given any ε > 0, we can fine-tune the algorithm so that its complexity satisfies the stated bound. In these bounds the constant of proportionality usually depends on ε, and tends to infinity as ε tends to zero.)

The smallest-enclosing-ball problem is an LP-type problem with combinatorial dimension d+1 [218, 232]. Indeed, the constraints are the given points, and the function w maps each subset G to the radius of the smallest ball containing G. Monotonicity of w is trivial, and locality follows easily from the uniqueness of the smallest enclosing ball of a given set of points in general position. The combinatorial dimension is d+1 because at most d+1 points are needed to determine the smallest enclosing ball.
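Since at most d+1 points determine the smallest enclosing ball, a brute-force planar version only has to examine pairs and triples of points. The sketch below (our own O(n^4)-time illustration, not the linear-time LP-type algorithm discussed in the text) does exactly that for d = 2.

# Brute-force smallest enclosing circle in the plane: the optimal circle is
# determined by at most 3 points (combinatorial dimension d+1 = 3), so we try
# every pair (as a diameter) and every triple (circumcircle).
import itertools, math

def circle_two(a, b):
    c = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return c, math.dist(a, b) / 2

def circle_three(a, b, c):
    # Circumcircle; returns None for (nearly) collinear triples.
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    center = (ux, uy)
    return center, math.dist(center, a)

def covers(circle, pts, eps=1e-9):
    center, r = circle
    return all(math.dist(center, p) <= r + eps for p in pts)

def smallest_enclosing_circle(pts):
    best = None
    candidates = [circle_two(a, b) for a, b in itertools.combinations(pts, 2)]
    candidates += [c for t in itertools.combinations(pts, 3)
                   if (c := circle_three(*t)) is not None]
    for cand in candidates:
        if covers(cand, pts) and (best is None or cand[1] < best[1]):
            best = cand
    return best

pts = [(0, 0), (4, 0), (2, 3), (1, 1)]
print(smallest_enclosing_circle(pts))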
This problem is, however, not basis-regular (the smallest enclosing ball may be determined by any number of points between 2 and d+1), and a naive implementation of the basis-changing operation may be quite costly (in d). Nevertheless, Gärtner [109] showed that this operation can be performed in this case using an expected e^{O(√d)} arithmetic operations. Hence, the expected running time of the algorithm is O(d^2 n) + e^{O(√(d log d))}.

There are several extensions of the smallest-enclosing-ball problem. They include: (i) computing the smallest enclosing ellipsoid of a point set [54, 87, 201, 232], (ii) computing the largest ellipsoid (or ball) inscribed inside a convex polytope in R^d [109], (iii) computing a smallest ball that intersects (or contains) a given set of convex objects in R^d (see [185]), and (iv) computing a smallest-area annulus containing a given planar point set. All these problems are known to be LP-type, and thus can be solved using the algorithm described in Section 6. However, not all of them run in subexponential expected time, because they are not basis-regular. Linear-time algorithms, based on the prune-and-search technique, have also been developed for many of these problems in two dimensions [40, 41, 42, 145].

Euclidean 2-center. In this problem we want to cover a set D of n points in R^d by two balls of smallest possible common radius. There is a trivial O(n^{d+1})-time algorithm for the 2-center problem in R^d, because the `clusters' D_1 and D_2 in an optimal solution can be separated by a hyperplane [80]. Faster algorithms have been developed for the planar case using parametric searching. Agarwal and Sharir [13] gave an O(n^2 log n)-time algorithm for determining whether D can be covered by two disks of radius r. Their algorithm proceeds as follows: There are O(n^2) distinct subsets of D that can be covered by a disk of radius r, and these subsets can be computed in O(n^2 log n) time by processing the arrangement of the n disks of radius r centered at the points of D. For each such subset D_1, the algorithm checks whether D \ D_1 can be covered by another disk of radius r. Using a dynamic data structure, the total time spent is shown to be O(n^2 log n). Plugging this algorithm into the parametric-searching machinery, one obtains an O(n^2 log^3 n)-time algorithm for the Euclidean 2-center problem. Matoušek [170] gave a simpler randomized O(n^2 log^2 n) expected-time algorithm by replacing parametric searching with randomization. The running time of the decision algorithm was improved by Hershberger [126] to O(n^2), which has been utilized in the best near-quadratic solution, by Jaromczyk and Kowaluk [146], which runs in O(n^2 log n) time; see also [147]. Major progress on this problem was recently made by Sharir [215], who gave an O(n log^9 n)-time algorithm by combining the parametric-searching technique with several additional techniques, including a variant of the matrix-searching algorithm of Frederickson and Johnson [108].
Eppstein [99] has simplified Sharir's algorithm, using randomization and better data structures, and obtained an improved solution whose expected running time is O(n log^2 n). Recently, Agarwal et al. [19] have developed an O(n^{4/3} log^5 n)-time algorithm for the discrete 2-center problem.

Rectilinear p-center. In this problem the metric is the L∞-distance, so the decision problem is now to cover the given set D by a set of p axis-parallel cubes, each of side length 2r. The problem is NP-Hard if p is part of the input and d ≥ 2, or if d is part of the input and p ≥ 3 [104, 186]. Ko et al. [159] showed that computing an S with c(D, S) < 2r* is also NP-Hard. The rectilinear 1-center problem is trivially solved in linear time, and a polynomial-time algorithm for the rectilinear 2-center problem, even if d is unbounded, is given in [186]. A linear-time algorithm for the planar rectilinear 2-center problem is given by Drezner [81] (see also [157]); Ko and Lee [158] gave an O(n log n)-time algorithm for the weighted case. Recently, Sharir and Welzl [219] have developed a linear-time algorithm for the rectilinear 3-center problem, by showing that it is an LP-type problem (as is the rectilinear 2-center problem). They have also obtained an O(n log n)-time algorithm for computing a rectilinear 4-center (and have shown that this algorithm is worst-case optimal), and an O(n log^5 n)-time algorithm for computing a rectilinear 5-center. The algorithms for the 4-center and 5-center employ the Frederickson-Johnson matrix-searching technique. See [152, 219] for additional related results.

7.2 Euclidean p-line-center

Let D be a set of n points in R^d and let δ be the Euclidean distance function. We wish to compute the smallest real value w* so that D can be covered by the union of p strips of width w*. Megiddo and Tamir showed that the problem of determining whether w* = 0 (i.e., whether D can be covered by p lines) is NP-Hard [188], which not only proves that the p-line-center problem is NP-Complete, but also proves that approximating w* within a constant factor is NP-Complete. Approximation algorithms for this problem are given in [121].

The 1-line-center is the classical width problem. For d = 2, an O(n log n)-time algorithm was given by Houle and Toussaint [138]. A matching lower bound was proved by Lee and Wu [163]. They also gave an O(n^2 log n)-time algorithm for the weighted case, which was improved to O(n log n) in [137]. For the 2-line-center problem in the plane, Agarwal and Sharir [13] (see also [12]) gave an O(n^2 log^5 n)-time algorithm, using parametric searching. This algorithm is very similar to their 2-center algorithm: the decision algorithm finds all subsets of D that can be covered by a strip of width w and, for each such subset D_1, it determines whether D \ D_1 can be covered by another strip of width w. The heart of this decision procedure is an efficient algorithm for the following off-line width problem: given a sequence Σ = (σ_1, ..., σ_n) of insertions and deletions of points in a set D and a real number w, is there an i such that, after performing the first i updates, the width of the current point set is at most w? A solution to this off-line width problem, running in O(n^2 log^3 n) time, is given in [12]. The running time for the optimization problem was improved to O(n^2 log^4 n) by Katz and Sharir [154] and by Glozman et al. [111], using expander graphs and the Frederickson-Johnson matrix-searching technique, respectively.
The best known algorithm, by Jaromczyk and Kowaluk [148], runs in O(n^2 log^2 n) time. It is an open problem whether a near-linear (or just subquadratic) time algorithm exists for computing a 2-line-center.

7.3 Euclidean p-median

Let D be a set of n points in R^d. We wish to compute a set S of p supply points so that the sum of the distances from each demand point to its nearest supply point is minimized (i.e., we want to minimize the objective function c′(D, S)). This problem can be solved in polynomial time for d = 1 (for d = 1 and p = 1 the solution is the median of the given points, whence the problem derives its name), and it is NP-Hard for d ≥ 2 [187]. The special case d = 2, p = 1 is the classical Fermat-Weber problem, and it goes back to the 17th century. It is known that the solution of the Fermat-Weber problem is unique and algebraic. Several numerical approaches have been proposed to compute an approximate solution. See [48, 233] for the history of the problem and for the known algorithms, and [197] for some heuristics for the p-median problem that work well for a set of random points.

7.4 Segment-center

Given a segment e, we wish to find a translated and rotated copy of e so that the maximum distance from each point of the given set D of demand points to this copy is minimized. This problem was originally considered by Imai et al. [143], who gave an O(n^4 log n)-time algorithm. An improved solution, based on parametric searching, with O(n^2 α(n) log^3 n) running time, was later obtained in [8] (here α(n) denotes the extremely slowly growing inverse of Ackermann's function). The decision problem in this case is to determine whether there exists a translated and rotated copy of the `hippodrome' H = e ⊕ B_r, the Minkowski sum of the segment e with a disk of radius r, which fully contains D. Since H is convex, this is equivalent to H containing P = conv(D). Hence the decision procedure actually is: Given a convex polygon P and the hippodrome H, does H contain a translated and rotated copy of P? See Figure 4. Note that placements of P can be specified in terms of three parameters, two for the translation and one for the rotation. Let FP ⊆ R^3 denote the set of placements of P at which P lies inside H. Using Davenport-Schinzel sequences [216], Agarwal and Sharir showed that the complexity of FP is O(n^2 2^{α(n)}), and that it can be computed in time O(n^2 2^{α(n)} log n). By exploiting various geometric and combinatorial properties of FP and using some elegant results from combinatorial geometry, Efrat and Sharir [95] showed that the complexity of FP is only O(n log n), and that one can determine in time O(n^{1+ε}) whether FP ≠ ∅. Plugging this into the parametric-searching technique, one obtains an O(n^{1+ε})-time solution to the segment-center problem.

Figure 4: The segment-center problem.

7.5 Other facility-location problems

Besides the problems discussed above, several other variants of the facility-location problem have been studied. For example, Hershberger [125] described an O(n^2 / log log n)-time algorithm for partitioning a given set S of n points into two subsets so that the sum of their diameters is minimized. If we want to minimize the maximum of the two diameters, the running time can be improved to O(n log n) [127]. Glozman et al. [111] have studied problems of covering S by several different kinds of shapes.
Maass [167] showed that the problem of covering S with the minimum number of unit-width annuli is NP-Hard even for d = 1 (a unit-width annulus in one dimension is a union of two unit-length intervals), and Hochbaum and Maass [131] gave an approximation algorithm for covering points with annuli.

8 Proximity Problems

8.1 Diameter in 3-space

Given a set S of n points in R^3, we wish to compute the diameter of S, that is, the maximum distance between any two points of S. The decision procedure here is to determine, for a given radius r, whether the intersection of the balls of radius r centered at the points of S contains S. The intersection of congruent balls in R^3 has linear complexity [118, 124], so it is natural to ask whether the intersection of n congruent balls can be computed in O(n log n) time. (Checking whether all points of S lie in the intersection can then be performed in additional O(n log n) time, using straightforward point-location techniques.) Clarkson and Shor [64] gave a very simple O(n log n) expected-time randomized algorithm (which is worst-case optimal) for computing the intersection, and then used a randomized prune-and-search algorithm, summarized in Figure 5, to compute the diameter of S.

function DIAMETER(S):
    choose a random point p ∈ S;
    q := a farthest neighbor of p;
    compute I := ∩_{p′∈S} B(p′, δ(p, q));
    S_1 := S \ I;
    if S_1 = ∅ then return δ(p, q)
    else return DIAMETER(S_1)

Figure 5: A randomized algorithm for computing the diameter in 3D.

The correctness of the above algorithm is easy to check. The only nontrivial step in the algorithm is computing I and S_1. If δ is the Euclidean metric, I can be computed in O(|S| log |S|) expected time, using the ball-intersection algorithm. S_1 can then be computed in additional O(|S| log |S|) time, using any optimal planar point-location algorithm (see, e.g., [208]). Hence, each recursive step of the algorithm takes O(|S| log |S|) expected time. Since p is chosen randomly, |S_1| ≤ 2|S|/3 with high probability, which implies that the expected running time of the overall algorithm is O(n log n).

It was a challenging open problem whether an O(n log n)-time deterministic algorithm can be developed for computing the intersection of n congruent balls in 3-space. This has been answered in the affirmative by Amato et al. [31], following a series of near-linear-time but weaker deterministic algorithms [51, 177, 202]. Amato et al. derandomized the Clarkson-Shor algorithm using several sophisticated techniques. (An earlier attempt by Brönnimann et al. [44] to derandomize the Clarkson-Shor algorithm had an error.) Their algorithm yields an O(n log^3 n)-time algorithm for computing the diameter. Obtaining an optimal O(n log n)-time deterministic algorithm for computing the diameter in 3-space still remains elusive.
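The pruning loop of Figure 5 is easy to emulate if one is willing to replace the ball-intersection computation by a direct distance scan. The sketch below is our own illustration; the scan makes each round quadratic, so the O(n log n) bound is forfeited, but the pruning logic is the same.

# Randomized diameter computation following Figure 5, with the ball-intersection
# step replaced by a direct distance scan (illustration only).
import math, random

def diameter(S):
    S = list(S)
    while True:
        p = random.choice(S)
        q = max(S, key=lambda x: math.dist(p, x))   # farthest neighbor of p
        r = math.dist(p, q)
        # S1: points outside the common intersection of the balls B(p', r), p' in S,
        # i.e. points having some point of S farther than r away.
        S1 = [x for x in S if any(math.dist(x, y) > r for y in S)]
        if not S1:
            return r        # no pair of points is farther apart than r
        S = S1              # any pair realizing the diameter survives in S1

pts = [(random.random(), random.random(), random.random()) for _ in range(60)]
print(diameter(pts))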
8.2 Closest line pair

Given a set L of n lines in R^3, we wish to compute a closest pair of lines in L. Let d(L, L′) denote the Euclidean distance between the closest pair of lines in L × L′, for two disjoint sets L, L′ of lines. Two algorithms for this problem, both based on parametric searching, were given independently by Chazelle et al. [51] and by Pellegrini [199]; both algorithms run in O(n^{8/5+ε}) time. Using Plücker coordinates [53, 220] and range-searching data structures, the algorithms construct, in O(n^{8/5+ε}) time, a family of pairs {(L_1, L′_1), ..., (L_k, L′_k)}, so that every line in L_i lies below (in the z-direction) all the lines of L′_i, and d(L, L′) = min_{1≤i≤k} d(L_i, L′_i). Hence, it suffices to compute a closest pair in L_i × L′_i, for each i ≤ k, which can be done using parametric searching. The decision procedure is: For a given real number r, determine whether d(L_i, L′_i) ≤ r, for each i ≤ k. Since lines in 3-space have four degrees of freedom, each of these subproblems can be transformed into the following point-location problem in R^4: Given a set S of n points in R^4 (representing the lines in L_i) and a set Γ of m surfaces, each being the graph of an algebraic trivariate function of constant degree (each surface is the locus of lines in R^3 that pass above a line of L′_i at distance r from it), determine whether every point in S lies below all the surfaces of Γ. It is shown in [51] that this point-location problem can be solved in time O(n^{4/5+ε} m^{4/5+ε}), which implies an O(n^{8/5+ε})-time algorithm for computing d(L, L′). Agarwal and Sharir [14] have shown that d(L_i, L′_i) can be computed in O(n^{3/4+ε} m^{3/4+ε}) expected time, by replacing parametric searching with randomization and by exploiting certain geometric properties that the surfaces in Γ possess. Roughly speaking, this is accomplished by generalizing the Clarkson-Shor algorithm for computing the diameter, described in Figure 5. However, this algorithm does not improve the running time for computing d(L, L′), because we still need O(n^{8/5+ε}) time for constructing the pairs (L_i, L′_i). If we are interested in computing a pair of lines with the minimum vertical distance, the running time can be improved to O(n^{4/3+ε}) [199].

8.3 Distance between polytopes

We wish to compute the Euclidean distance d(P_1, P_2) between two given convex polytopes P_1 and P_2 in R^d. If the polytopes intersect, then this distance is 0. If they do not intersect, then this distance equals the maximum distance between two parallel hyperplanes separating the polytopes; such a pair of hyperplanes is unique, and the hyperplanes are orthogonal to the segment connecting two points a ∈ P_1 and b ∈ P_2 with d(a, b) = d(P_1, P_2). It is shown by Gärtner [109] that this problem is LP-type, with combinatorial dimension at most d+2 (or d+1, if the polytopes do not intersect). It is also shown there that the primitive operations can be performed with an expected e^{O(√d)} arithmetic operations. Hence, the problem can be solved by the general LP-type algorithm, whose expected number of arithmetic operations is O(d^2 n) + e^{O(√(d log d))}, where n is the total number of facets in P_1 and P_2. For d = 2, the maximum and the minimum distance between two convex polygons can be computed in O(log n) time, assuming that the vertices of each P_i are stored in an array, sorted in clockwise order [92].

8.4 Selecting distances

Let S be a set of n points in the plane, and let 1 ≤ k ≤ n(n−1)/2 be an integer. We wish to compute the k-th smallest distance between a pair of points of S. This can be done using parametric searching. The decision problem is to compute, for a given real r, the sum Σ_{p∈S} |D_r(p) ∩ (S \ {p})|, where D_r(p) is the closed disk of radius r centered at p. (This sum is twice the number of pairs of points of S at distance at most r.) Agarwal et al. [5] gave an O(n^{4/3} log^{4/3} n) expected-time randomized algorithm for the decision problem, using the random-sampling technique of [64], which yields an O(n^{4/3} log^{8/3} n) expected-time algorithm for the distance-selection problem. Goodrich [115] derandomized this algorithm, at a cost of an additional polylogarithmic factor in the running time. Katz and Sharir [153] obtained an expander-based O(n^{4/3} log^{3+ε} n)-time (deterministic) algorithm for this problem. See also [207].
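The decision problem just described, counting the pairs of points at distance at most r, can be prototyped with a uniform grid of cell size r, so that each point is compared only with points in the surrounding 3×3 block of cells. This sketch is our own practical stand-in, not the random-sampling technique of [64], and it degenerates to quadratic time when many points share a cell.

# Decision procedure for distance selection: count pairs of points at distance <= r.
import math
from collections import defaultdict

def count_pairs_within(points, r):
    grid = defaultdict(list)
    for p in points:
        grid[(math.floor(p[0] / r), math.floor(p[1] / r))].append(p)
    count = 0
    for (cx, cy), cell in grid.items():
        # pairs inside the same cell
        count += sum(1 for i in range(len(cell)) for j in range(i + 1, len(cell))
                     if math.dist(cell[i], cell[j]) <= r)
        # pairs with neighboring cells (each unordered cell pair visited once)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) > (0, 0):
                    for p in cell:
                        for q in grid.get((cx + dx, cy + dy), []):
                            if math.dist(p, q) <= r:
                                count += 1
    return count

pts = [(0, 0), (0.5, 0), (3, 4), (3, 4.2), (10, 10)]
print(count_pairs_within(pts, 1.0))   # -> 2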
8.5 Shape matching

Let P and Q be two polygons with m and n edges, respectively. The problem is to measure the `resemblance' between P and Q, that is, to determine how well a copy of P can fit Q, if we allow P to translate or to both translate and rotate. The Hausdorff distance is one of the common ways of measuring the resemblance between two (fixed) sets P and Q [139]; it is defined as

    H(P, Q) = max { max_{a∈P} min_{b∈Q} d(a, b),  max_{a∈Q} min_{b∈P} d(a, b) },

where d(·, ·) is the Euclidean distance. If we allow P to translate only, then we want to compute min_v H(P+v, Q). The problem has been solved by Agarwal et al. [18], using parametric searching, in O((mn)^2 log^3(mn)) time, which is significantly faster than the previously best known algorithm by Alt et al. [30]. If P and Q are finite sets of points, a more efficient solution, not based on parametric searching, is proposed by Huttenlocher et al. [140]. Their solution, however, does not apply to the case of polygons. If we measure distance by the L_1-metric, faster algorithms, based on parametric searching, are developed in [55, 57]. If we allow P to translate and rotate, then computing the minimum Hausdorff distance becomes significantly harder. Chew et al. [56] have given an O(m^2 n^2 log^3 mn)-time algorithm when both P and Q are finite point sets, and an O(m^3 n^2 log^3 mn)-time algorithm when P and Q are polygons.

Another way of measuring the resemblance between two polygons P and Q is to compute the area of their intersection (or, rather, of their symmetric difference). Suppose we wish to minimize the area of the symmetric difference between P and Q, under translation of P. For this case, de Berg et al. [73] gave an O(n log n)-time algorithm, using the prune-and-search paradigm. Their algorithm extends to higher dimensions at a polylogarithmic cost, using parametric searching.
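For finite point sets, the Hausdorff distance H(P, Q) defined above can be evaluated directly. The sketch below (ours; quadratic time, unlike the more efficient methods cited in this subsection) computes the directed and undirected distances and evaluates H(P+v, Q) for a candidate translation v.

# Brute-force Hausdorff distance between finite point sets, plus evaluation of
# H(P+v, Q) for a given translation v of P.
import math

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest point of B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def hausdorff_translated(P, Q, v):
    Pv = [(p[0] + v[0], p[1] + v[1]) for p in P]
    return hausdorff(Pv, Q)

P = [(0, 0), (1, 0), (0, 1)]
Q = [(5, 5), (6, 5), (5, 6)]
print(hausdorff(P, Q))                     # large: the sets are far apart
print(hausdorff_translated(P, Q, (5, 5)))  # 0.0: translating P by (5,5) matches Q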
8.6 Surface simplification

A generic surface-simplification problem is defined as follows: Given a polyhedral object P in R^3 and an error parameter ε > 0, compute a polyhedral approximation Q of P with the minimum number of vertices, so that the maximum distance between P and Q is at most ε. There are several ways of defining the maximum distance between P and Q, depending on the application. We will refer to an object that lies within distance ε from P as an ε-approximation of P. Surface simplification is a central problem in graphics, geographic information systems, scientific computing, and visualization.

One way of solving the problem is to run a binary search on the number of vertices of the approximating surface. We then need to solve the decision problem of determining whether there exists an ε-approximation with at most k vertices, for some given k. Unfortunately, this problem is NP-Hard [20], so one seeks efficient techniques for computing an ε-approximation of size (number of vertices) close to k_OPT, where k_OPT is the minimum size of an ε-approximation. Although several ad hoc algorithms have been developed for computing an ε-approximation [74, 75, 135, 136, 168], none of them guarantees any reasonable bound on the size of the output, and many of them do not even ensure that the maximum distance between the input and the output surface is indeed at most ε.

There has been some recent progress on developing polynomial-time approximation algorithms for computing ε-approximations in some special cases. The simplest, but nevertheless interesting, special case is when P is a convex polytope (containing the origin). In this case we wish to compute another convex polytope Q with the minimum number of vertices so that (1−ε)P ⊆ Q ⊆ (1+ε)P (or so that P ⊆ Q ⊆ (1+ε)P). We can thus pose a more general problem: Given two convex polytopes P_1 ⊆ P_2 in R^3, compute a convex polytope Q with the minimum number of vertices such that P_1 ⊆ Q ⊆ P_2. Das and Joseph [71] have attempted to prove that this problem is NP-Hard, but their proof contains an error, and it still remains an open problem. Mitchell and Suri [192] have shown that there exists a nested polytope Q with at most 3k_OPT vertices whose vertices are a subset of the vertices of P_2. The problem can now be formulated as a hitting-set problem, and, using a greedy approach, they presented an O(n^3)-time algorithm for computing a nested polytope with O(k_OPT log n) vertices. Clarkson [61] showed that the randomized technique described in Section 5 can compute a nested polytope with O(k_OPT log k_OPT) vertices in O(n log^c n) expected time, for some constant c > 0. Brönnimann and Goodrich [45] extended Clarkson's algorithm to obtain a polynomial-time, deterministic algorithm that constructs a nested polytope with O(k_OPT) vertices.

A widely studied special case of surface simplification, motivated by applications in geographic information systems and scientific computing, is when P is a polyhedral terrain (i.e., the graph of a continuous piecewise-linear bivariate function). In most of the applications, P is represented as a finite set of n points, sampled from the input surface, and the goal is to compute a polyhedral terrain Q with the minimum number of vertices, such that the vertical distance between any point of P and Q is at most ε. Agarwal and Suri [20] showed that this problem is NP-Hard. They also gave a polynomial-time algorithm for computing an ε-approximation of size O(k_OPT log k_OPT), by reducing the problem to a geometric set-cover problem, but the running time of their algorithm is O(n^8), which is rather high. Agarwal and Desikan [6] have shown that Clarkson's randomized algorithm can be extended to compute a polyhedral terrain of size O(k_OPT^2 log^2 k_OPT) in expected time O(n^{2+δ} + k_OPT^3 log^3 k_OPT). The survey paper by Heckbert and Garland [122] summarizes most of the known results on terrain simplification.

A dual version of the problem of computing an ε-approximation is: Given a polyhedral surface P and an integer k, compute an approximating surface Q that has at most k vertices and whose distance from P is the smallest possible. Very little is known about this problem, except in the plane. Goodrich [116] showed that, given a set S of n points in the plane, an x-monotone polygonal chain Q with at most k vertices that minimizes the maximum vertical distance between Q and the points of S can be computed in time O(n log n). His algorithm is based on the parametric-searching technique, and uses Cole's improvement of parametric searching. (See [116] for other related work on this problem.) If the vertices of Q are required to be a subset of S, the best known algorithm is by Varadarajan [229]; it is based on parametric searching, and its running time is O(n^{4/3+ε}).
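The error measure used in the terrain and x-monotone-chain problems, the maximum vertical distance between the sample points and the approximating graph, is simple to evaluate in the plane. The sketch below is our own illustration; it assumes the chain is given by vertices with increasing x-coordinates and uses numpy.interp to evaluate it.

# Maximum vertical distance between a planar point set and an x-monotone
# polygonal chain Q (given by its vertices with strictly increasing x).
import numpy as np

def max_vertical_error(points, chain):
    xs = np.array([p[0] for p in points])
    ys = np.array([p[1] for p in points])
    qx = np.array([v[0] for v in chain])
    qy = np.array([v[1] for v in chain])
    # numpy.interp evaluates the piecewise-linear chain at the points' x-coordinates
    # (points outside [qx[0], qx[-1]] are clamped to the end values).
    return float(np.max(np.abs(ys - np.interp(xs, qx, qy))))

points = [(0, 0.1), (1, 1.2), (2, 1.9), (3, 3.3), (4, 4.0)]
chain = [(0, 0), (2, 2), (4, 4)]      # a 3-vertex approximating chain
print(max_vertical_error(points, chain))   # -> 0.3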
9 Statistical Estimators and Related Problems

9.1 Plane fitting

Given a set S of n points in R^3, we wish to fit a plane h through S so that the maximum distance between h and the points of S is minimized. This is the same problem as computing the width of S (the smallest distance between a pair of parallel supporting planes of S), which is considerably harder than the two-dimensional variant mentioned in Section 7.2. Houle and Toussaint [138] gave an O(n^2)-time algorithm for computing the width in R^3. This can be improved using parametric searching. The decision procedure is to determine, for a given distance w, whether the convex hull of S has two `antipodal' edges such that the two parallel planes containing these edges are supporting planes of S and lie at distance ≤ w. (One also needs to consider pairs of parallel planes, one containing a facet of conv(S) and the other passing through a vertex; however, it is easy to test all these pairs in O(n log n) time.) The major technical issue here is to avoid having to test the quadratically many pairs of antipodal edges that may exist in the worst case. Chazelle et al. [51] gave an algorithm that is based on parametric searching and runs in time O(n^{8/5+ε}) (see [2] for an improved bound). They reduced the width problem to the problem of computing a closest pair between two sets L, L′ of lines in R^3 (each line containing an edge of the convex hull of S), such that each line in L lies below all the lines of L′. The fact that this latter problem now has an improved O(n^{3/2+ε}) expected-time solution [14] implies that the width can be computed in expected time O(n^{3/2+ε}). See [160, 176, 221, 222, 230] for other results on hyperplane fitting.

9.2 Circle fitting

Given a set S of n points in the plane, we wish to fit a circle C through S so that the maximum distance between the points of S and C is minimized. This is equivalent to finding an annulus of minimum width that contains S. Ebara et al. [91] observed that the center of a minimum-width annulus is either a vertex of the closest-point Voronoi diagram of S, or a vertex of the farthest-point Voronoi diagram of S, or an intersection point of a pair of edges of the two diagrams. Based on this observation, they obtained a quadratic-time algorithm. Using parametric searching, Agarwal et al. [18] have shown that the center of the minimum-width annulus can be found without checking all of the O(n^2) candidate intersection points explicitly; their algorithm runs in O(n^{8/5+ε}) time; see also [2] for an improved solution. Using randomization and an improved analysis, the expected running time has been improved to O(n^{3/2+ε}) by Agarwal and Sharir [14]. Finding an annulus of minimum area that contains S is a simpler problem, since it can be formulated as an instance of linear programming in R^4, and can thus be solved in O(n) time [183]. In certain metrology applications [134, 206, 231], one wants to fit a circle C through S so that the sum of the distances between C and the points of S is minimized. No algorithm is known for computing an exact solution, though several numerical techniques have been proposed; see [38, 161, 224]. See [162, 223] for other variants of the circle-fitting problem and for some special cases.
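The minimum-area annulus mentioned above has a particularly clean LP formulation: with center c and the substitutions u = R^2 − |c|^2 and v = r^2 − |c|^2, the containment constraints r^2 ≤ |p − c|^2 ≤ R^2 become linear in (c, u, v), and the area π(R^2 − r^2) = π(u − v) is a linear objective. The sketch below is our own illustration; it hands the LP to scipy.optimize.linprog as an off-the-shelf solver rather than using the linear-time method of [183].

# Minimum-area annulus containing a planar point set, as a linear program in R^4.
# Variables x = (c1, c2, u, v) with u = R^2 - |c|^2 and v = r^2 - |c|^2;
# for every point p:  v <= |p|^2 - 2 p.c <= u.  Minimize u - v = R^2 - r^2.
import math
import numpy as np
from scipy.optimize import linprog

def min_area_annulus(points):
    A, b = [], []
    for (px, py) in points:
        s = px * px + py * py
        A.append([-2 * px, -2 * py, -1, 0]); b.append(-s)   # |p|^2 - 2 p.c <= u
        A.append([ 2 * px,  2 * py,  0, 1]); b.append(s)    # v <= |p|^2 - 2 p.c
    res = linprog(c=[0, 0, 1, -1], A_ub=A, b_ub=b,
                  bounds=[(None, None)] * 4)                 # all variables free
    c1, c2, u, v = res.x
    R = math.sqrt(u + c1 * c1 + c2 * c2)
    r = math.sqrt(max(v + c1 * c1 + c2 * c2, 0.0))           # clamp: inner radius >= 0
    return (c1, c2), r, R

pts = [(math.cos(t) * (1 + 0.1 * (i % 2)), math.sin(t) * (1 + 0.1 * (i % 2)))
       for i, t in enumerate(np.linspace(0, 2 * math.pi, 12, endpoint=False))]
center, r, R = min_area_annulus(pts)
print(center, r, R)    # center near the origin, radii near 1.0 and 1.1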
9.3 Cylinder fitting

Given a set S of n points in R^3, we wish to find a cylinder of smallest radius that contains S. Using parametric searching, the decision problem in this case can be rephrased as follows: Given a set B of n balls of a fixed radius r in R^3, determine whether there exists a line that intersects all the balls of B (the balls are centered at the points of S, and the line is the symmetry axis of a cylinder of radius r that contains S). Agarwal and Matoušek [10] showed that finding such a line can be reduced to computing the convex hull of a set of n points in R^9, which, combined with parametric searching, leads to an O(n^4 log^{O(1)} n)-time algorithm for finding a smallest cylinder enclosing S; see, e.g., [209]. The bound has recently been improved by Agarwal et al. [4] to O(n^{3+ε}), by showing that the combinatorial complexity of the space of all lines that intersect all the balls of B is O(n^{3+ε}), and by designing a different algorithm, also based on parametric searching, whose decision procedure computes this space of lines and determines whether it is nonempty. Faster algorithms have been developed for some special cases [103, 209]. Agarwal et al. [4] also gave an O(n/δ^2)-time algorithm to compute a cylinder of radius (1+δ)r* containing all the points of S, where r* is the radius of the smallest cylinder enclosing S.

Note that this problem is different from those considered in the two previous subsections. The problem analogous to those studied above would be to find a cylindrical shell (the region enclosed between two concentric cylinders) of smallest width (difference between the radii of the cylinders) that contains a given point set S. This problem is considerably harder, and no solution for it that improves upon the naive brute-force technique is known.

9.4 Center points

Given a set S of n points in the plane, we wish to compute a center point x ∈ R^2, that is, a point such that any halfplane containing x also contains at least ⌊n/3⌋ points of S. It is a known consequence of Helly's theorem that a center point always exists [93]. In a dual setting, let L be the set of lines dual to the points of S, and let K_1, K_2 be the convex hulls of the ⌊n/3⌋-level and the ⌊2n/3⌋-level of the arrangement A(L), respectively. (The level of a point p with respect to A(L) is the number of lines lying strictly below p. The k-level of A(L) is the closure of the set of edges of A(L) whose level is k (the level is fixed over an edge of A(L)); each k-level is an x-monotone connected polygonal chain.) The dual of a center point of S is a line separating K_1 and K_2. This implies that the set of center points is a convex polygon with at most 2n edges. Cole et al. [68] gave an O(n log^3 n)-time algorithm for computing a center point, using multi-dimensional parametric searching. Using the prune-and-search paradigm, Matoušek [169] obtained an O(n log^3 n)-time algorithm for computing K_1 and K_2, which in turn yields the set of all center points. Recently, Jadhav and Mukhopadhyay [144] gave a linear-time algorithm for computing a center point, using a direct and elegant technique. Near-quadratic algorithms for computing a center point in three dimensions were developed in [68, 195]. Clarkson et al. [63] gave an efficient algorithm for computing an approximate center point in R^d.
It can be shown that the number of intersection points between the median levels of A(L1) and of A(L2) is always odd. Several prune-and-search algorithms have been proposed for computing a ham-sandwich cut in the plane. Megiddo [184] gave a linear-time algorithm for the special case in which S1 and S2 are linearly separable. Modifying this algorithm, Edelsbrunner and Waupotitsch [94] gave an O(n log n)-time algorithm for the case in which S1 and S2 are not necessarily linearly separable. A linear-time recursive algorithm for this general case was given by Lo and Steiger [166]. It works as follows. At each level of recursion, the algorithm maintains two sets of lines, R and B, and two integers p, q, such that any intersection point between K_R^p, the p-level of A(R), and K_B^q, the q-level of A(B), is dual to a ham-sandwich cut of the original sets; moreover, the number of such intersections is guaranteed to be odd. The goal is to compute an intersection point of K_R^p and K_B^q. Initially, R and B are the sets of lines dual to S1 and S2, and p = ⌊|S1|/2⌋, q = ⌊|S2|/2⌋. Let r be a sufficiently large constant. One then computes a (1/r)-cutting Ξ of R ∪ B. At least one of the triangles Δ of Ξ contains an odd number of intersection points of the two levels. By computing the intersection points of the edges of Ξ with the lines of R ∪ B, such a triangle Δ can be found in linear time. Let R_Δ ⊆ R and B_Δ ⊆ B be the subsets of lines of R and B, respectively, that intersect Δ, and let p' (resp. q') be the number of lines of R (resp. B) that lie below Δ. We then solve the problem recursively for R_Δ and B_Δ, with the respective levels p_Δ = p − p' and q_Δ = q − q'. It easily follows that the p_Δ-level of A(R_Δ) and the q_Δ-level of A(B_Δ) intersect in an odd number of points. Since |R_Δ| + |B_Δ| = O(n/r), the total running time of the algorithm is O(n). Lo et al. [165] extended this approach to R^3 and obtained an O(n^{3/2})-time algorithm for computing ham-sandwich cuts in three dimensions.

10 Placement and Intersection

10.1 Intersection of polyhedra

Given a set P = {P1, ..., Pm} of m convex polyhedra in R^d, with a total of n facets, is their common intersection I = ∩_{i=1}^m Pi nonempty? Of course, this is an instance of linear programming in R^d with n constraints, but the goal is to obtain faster algorithms whose running time depends more significantly on m than on n. Reichling [203] presented an O(m log^2 n)-time prune-and-search algorithm for d = 2. His algorithm maintains a vertical strip W, bounded by two vertical lines b_l and b_r, such that I ⊆ W. Let k be the total number of vertices of all the Pi's lying inside W. If k ≤ m log n, the algorithm explicitly computes I ∩ W in O(m log^2 n) time. Otherwise, it finds a vertical line ℓ inside W such that both W+ and W− contain at least k/4 vertices, where W+ (resp. W−) is the portion of W lying to the right (resp. to the left) of ℓ. By running a binary search on each Pi, one can determine whether ℓ intersects Pi and, if so, obtain the top and bottom edges of Pi intersecting ℓ. This allows us to compute the intersection I ∩ ℓ as the intersection of m intervals, in O(m) time. If this intersection is nonempty, we stop, since we have found a point in I.
Otherwise, if one of the polygons of P lies fully to the right (resp. to the left) of ℓ, then I cannot lie in W− (resp. in W+); if one polygon lies fully in W− and another lies fully in W+, then clearly I = ∅. Finally, if ℓ intersects all the Pi's but their intersection along ℓ is empty, then, following the same technique as in Megiddo's two-dimensional linear-programming algorithm [181], one can determine, in additional O(m) time, which of W+, W− can be asserted not to contain I. Hence, if the algorithm has not stopped, it needs to recurse in only one of the slabs W+, W−. Since the algorithm prunes a fraction of the vertices in each stage, it terminates after O(log n) stages, from which the asserted running time follows easily. Reichling [204] and Eppstein [98] extended this approach to d = 3, but their approaches do not extend to higher dimensions. However, if we have a comparison-based data structure that can determine in O(log n) time whether a query point lies in a specified Pi, then, using multi-dimensional parametric searching, we can determine in O(m log^{O(1)} n) time whether I ≠ ∅.

10.2 Polygon placement

Let P be a convex m-gon, and let Q be a closed planar polygonal environment with n edges. We wish to compute the largest similar copy of P (under translation, rotation, and scaling) that can be placed inside Q. Using the generalized Delaunay triangulation induced by P within Q, Chew and Kedem [58] obtained an O(m^4 n 2^{2α(n)} log n)-time algorithm, where α(n) denotes the inverse Ackermann function. Faster algorithms have been developed using parametric searching [3, 217]. The decision problem in this case is: Given a convex polygon B with m edges (a scaled copy of P) and a planar polygonal environment Q with n edges, can B be placed inside Q (allowing translation and rotation)? Each placement of B can be represented as a point in R^3, using two coordinates for translation and one for rotation. Let FP denote the resulting three-dimensional space of all free placements of B inside Q. Leven and Sharir [164] have shown that the complexity of FP is O(mn λ_6(mn)), where λ_s(n) is the maximum length of a Davenport–Schinzel sequence of order s composed of n symbols [216] (it is almost linear in n for any fixed s). Sharir and Toledo [217] gave an O(m^2 n λ_6(mn) log mn)-time algorithm to determine whether FP ≠ ∅: they first compute a superset of the vertices of FP, in O(mn λ_6(mn) log mn) time, and then spend O(m log n) time for each of these vertices to determine whether the corresponding placement of B is free, using a standard triangle range-searching data structure. Recently, Agarwal et al. [3] gave an O(mn λ_6(mn) log mn) expected-time randomized algorithm to compute FP. Plugging these algorithms into the parametric-searching machinery, one can obtain an O(m^2 n λ_6(mn) log^3 mn log log mn)-time deterministic algorithm, or an O(mn λ_6(mn) log^4 mn) expected-time randomized algorithm, for computing a largest similar placement of P inside Q.

Faster algorithms are known for computing a largest placement of P inside Q in some special cases. If both P and Q are convex, then a largest similar copy of P inside Q can be computed in time O(mn^2 log n) [1]; if P is not allowed to rotate, then the running time is O(m + n log^2 n) [225]. The biggest-stick problem is another interesting special case of the largest-placement problem; here Q is a simple polygon and P is a line segment, and we are interested in finding the longest segment that can be placed inside Q. This problem can be solved using a divide-and-conquer algorithm, developed in [18] and later refined in [2, 14].
It proceeds as follows. Partition Q into two simple polygons Q1, Q2 by a diagonal ℓ so that each of Q1 and Q2 has at most 2n/3 vertices. Recursively compute the longest segment that can be placed in each Qi, and then determine the longest segment that can be placed in Q and intersects the diagonal ℓ. The decision step for this subproblem is to determine whether there exists a placement of a line segment of length w that lies inside Q and crosses ℓ. Agarwal et al. [18] have shown that this problem can be reduced to the following one: we are given a set S of points and a set Γ of algebraic surfaces in R^4, where each surface is the graph of a trivariate function, and we wish to determine whether every point of S lies below all the surfaces of Γ. Agarwal and Sharir [14] gave a randomized algorithm with O(n^{3/2+ε}) expected running time for this point-location problem. Using randomization instead of parametric searching, they obtained an O(n^{3/2+ε}) expected-time procedure for the overall merge step (finding the biggest stick that crosses ℓ). The total running time of the algorithm is therefore also O(n^{3/2+ε}). Finding a longest segment inside Q whose endpoints are vertices of Q is a simpler problem, and can be solved by a linear-time algorithm due to Hershberger and Suri [129].

10.3 Collision detection

Let P and Q be two (possibly nonconvex) polyhedra in R^3, where P is assumed to be fixed and Q moves along a given trajectory π. The goal is to determine the first position along π, if any, at which Q intersects P. This problem can be solved using parametric searching. Suppose, for example, that Q is only allowed to translate along a line. Then the decision problem is to determine whether Q intersects P as it translates along a segment e in R^3. Let Q_e = Q ⊕ e be the Minkowski sum of Q and e. Then Q intersects P as it translates along e if and only if Q_e intersects P. This intersection problem can be solved in O(n^{8/5+ε}) time, using simplex range-searching data structures [198, 210]. Plugging this into the parametric-searching machinery, we can compute, in O(n^{8/5+ε}) time, the first intersection of Q with P as Q moves along a line. If Q rotates around a fixed axis ℓ, then the decision problem is to determine whether Q intersects P as it rotates by a given angle from its initial position. In this case, each edge of Q sweeps a section of a hyperboloid. Schömer and Thiel [210] have shown that, using a standard linearization technique (as described in [10, 234]), the intersection-detection problem can be formulated as an instance of simplex range searching in R^5, and can be solved in time O(n^{8/5+ε}). Plugging this algorithm into the parametric-searching technique, we can also compute the first intersection in O(n^{8/5+ε}) time. Gupta et al. [119] have studied various collision-detection problems for a set of moving points in the plane. For example, they give an O(n^{5/3} log^{6/5} n)-time algorithm for determining whether a collision occurs in a set of points, each moving in the plane along a line with constant velocity.

11 Query-Type Problems

Parametric searching has also been successfully applied in designing efficient data structures for a number of query-type problems. In this section we discuss a few of these problems, including ray shooting and linear-optimization queries.

11.1 Ray shooting

The general ray-shooting problem can be defined as follows.
Preprocess a given set S of objects in R^d (usually d = 2 or 3) so that the first object hit by a query ray can be computed efficiently. The ray-shooting problem arises in computer graphics, visualization, and in many other geometric problems [9, 10, 11, 200]. The connection between ray shooting and parametric searching was observed by Agarwal and Matoušek [9]. Here the decision problem is to determine, for a specified point λ on the query ray ρ, whether the initial segment sλ of ρ intersects any object of S (where s is the origin of ρ). Hence, we need to execute generically, in the parametric-searching style, an appropriate intersection-detection query procedure on the segment sζ, where ζ is the (unknown) first intersection point of ρ with the objects of S. Based on this technique, several efficient ray-shooting data structures have been developed [2, 9, 15, 16].

We illustrate this technique by giving a simple example. Let S be a set of n lines in the plane, and let S* be the set of points dual to the lines of S. A segment e intersects a line of S if and only if the double wedge e* dual to e contains a point of S*. Hence, a segment intersection-detection query for S can be answered by preprocessing S* into a triangle (or wedge) range-searching structure; see, e.g., [171, 172]. Roughly speaking, we construct a partition tree T on S* as follows. We fix a sufficiently large constant r. If |S*| ≤ 2r, T consists of a single node storing S*. Otherwise, using a result of Matoušek [171], we construct, in O(n) time, a family of pairs Π = {(S*_1, Δ_1), ..., (S*_u, Δ_u)} such that (i) S*_1, ..., S*_u form a partition of S*, (ii) n/r ≤ |S*_i| ≤ 2n/r for each i, (iii) each Δ_i is a triangle containing S*_i, and (iv) every line intersects at most c√r triangles Δ_i of Π, for some absolute constant c (independent of r). We recursively construct a partition tree T_i on each S*_i and attach it as the i-th subtree of T. The root of T_i stores the simplex Δ_i. The total size of T is linear, and the time spent in constructing T is O(n log n).

Let e be a query segment, and let e* be its dual double wedge. To determine whether e intersects any line of S (that is, whether e* contains any point of S*), we traverse T in a top-down fashion, starting from the root. Let v be a node visited by the algorithm. If v is a leaf, we explicitly check whether any point of S*_v lies in the double wedge e*. Suppose then that v is an internal node. If Δ_v ⊆ e*, then clearly e* ∩ S* ≠ ∅, and we stop. If Δ_v ∩ e* = ∅, then we stop processing v and do not visit any of its children. If ∂e* intersects Δ_v, we recursively visit all the children of v. Let Q(n_v) denote the number of nodes in the subtree rooted at v that are visited by the query procedure (n_v is the size of S*_v). By construction, a line intersects at most c√r triangles of Π_v (the partition constructed at v), so ∂e* intersects at most 2c√r triangles of Π_v. Hence, we obtain the recurrence

Q(n_v) ≤ 2c√r · Q(2n_v/r) + O(r).

The solution of this recurrence is Q(n) = O(n^{1/2+ε}), for any ε > 0, provided r is chosen sufficiently large (as a function of ε). Since the height of T is O(log n), we can answer a query in O(log n) parallel time, using O(n^{1/2+ε}) processors, by visiting the nodes of the same level in parallel.

Figure 6: A ray-shooting query
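To see why the recurrence solves to O(n^{1/2+ε}), one can verify the bound Q(n) ≤ A n^{1/2+ε} by induction on n; the short calculation below is our own expansion of the standard argument (with A a sufficiently large constant) and shows where the requirement that r be large in terms of ε comes from.

```latex
Q(n) \le 2c\sqrt{r}\,Q(2n/r) + O(r)
     \le 2c\sqrt{r}\,A\left(\tfrac{2n}{r}\right)^{1/2+\varepsilon} + O(r)
     = A\,n^{1/2+\varepsilon}\cdot\frac{2^{3/2+\varepsilon}c}{r^{\varepsilon}} + O(r).
```

Choosing r so large that 2^{3/2+ε} c / r^ε ≤ 1/2 makes the first term at most A n^{1/2+ε}/2, and the O(r) term is absorbed by the remaining half for A sufficiently large, which completes the induction step.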
Returning to the task of answering a ray-shooting query, let ρ be a query ray with origin s, and let ζ be the (unknown) first intersection point of ρ with a line of S. We compute ζ by running the parallel version of the segment intersection-detection procedure generically, in the parametric-searching style, on the segment e = sζ. At each node v that the query procedure visits, it tests whether Δ_v ⊆ e* or Δ_v ∩ ∂e* ≠ ∅. Since ζ is the only indeterminate in these tests, the tests reduce to determining, for each vertex p of Δ_v, whether p lies above, below, or on the line ζ* dual to ζ; see Figure 6. Let λ_p be the intersection point of the line p* dual to p with the line containing ρ, and set e_p = sλ_p. By determining whether the segment e_p intersects a line of S, we can determine whether p lies above or below ζ*. Hence, using this parametric-searching approach, a ray-shooting query can be answered in O(n^{1/2+ε}) time. Several other ray-shooting data structures based on this technique have been developed in [9, 10, 11, 200].

11.2 Linear-optimization queries

We wish to preprocess a set H of halfspaces in R^d into a linear-size data structure so that, for a linear objective function c, we can efficiently compute the vertex of ∩H that minimizes c. Using multi-dimensional parametric searching and data structures for answering halfspace-emptiness queries, Matoušek [172] presented an efficient algorithm for answering linear-optimization queries. A slightly faster randomized algorithm has recently been proposed by Chan [47]. Linear-optimization queries can be used to answer many other queries. For example, using Matoušek's technique and a dynamic data structure for halfspace range searching, the 1-center of a set S of points in R^d can be maintained dynamically, as points are inserted into or deleted from S. See [7, 11, 172] for additional applications of multi-dimensional parametric searching to query-type problems.

Figure 7: An extremal-placement query

11.3 Extremal placement queries

Let S be a set of n points in R^d. We wish to preprocess S into a data structure so that queries of the following form can be answered efficiently: Let Δ(t), for t ∈ R, be a family of simplices such that Δ(t1) ⊆ Δ(t2) for any t1 ≤ t2, and such that, for any t, Δ(t) can be computed in O(1) time. The goal is to compute the largest value t = t_max for which the interior of Δ(t_max) does not contain any point of S. For example, let Δ be a fixed simplex and let Δ(t) = tΔ be the simplex obtained by scaling Δ by a factor of t; we thus want to find the largest dilated copy of Δ that does not contain any point of S. As another example, let ℓ1, ℓ2 be two lines and let m be a constant, and define Δ(t) to be the triangle formed by ℓ1, ℓ2, and the line with slope m at distance t from the intersection point ℓ1 ∩ ℓ2; see Figure 7. These problems arise in many applications, including hidden-surface removal [200] and Euclidean shortest paths [190]. By preprocessing S into a simplex range-searching data structure, we can determine whether Δ(t0) ∩ S = ∅ for any given t0, and then plug this query procedure into the parametric-searching machinery, thereby obtaining the desired t_max. The best known data structure for simplex range searching can answer a query in time O((n/m^{1/d}) log^{d+1} n), using O(m) space, so an extremal-placement query can be answered in time O((n/m^{1/d}) log^{2(d+1)} n).

12 Discussion

In this survey we have reviewed several techniques for geometric optimization and discussed many geometric problems that benefit from these techniques. There are, of course, quite a few non-geometric parametric optimization problems that can also be solved efficiently using these techniques.
For example, parametric searching has been applied to develop efficient algorithms for the following problems: (i) let G be a directed graph in which the weight of each edge e is a d-variate linear function w_e(x), and let s and t be two vertices of G; find the point x ∈ R^d at which the maximum flow from s to t is maximized over all points x ∈ R^d [65]; (ii) compute the minimum edit distance between two given sequences, where the cost of performing an insertion, deletion, or substitution is a univariate linear function [120].

There are several other geometric optimization problems that are not discussed here, and we conclude by mentioning two classes of them. The first class consists of optimal motion-planning problems, where we are given a moving robot B, an environment with obstacles, and two placements of the robot, and we want to compute an "optimal" collision-free path for B between the given placements. The cost of a path depends on B and on the application. In the simplest case, B is a point robot, O is a set of polygonal obstacles in the plane or polyhedral obstacles in 3-space, and the cost of a path is its Euclidean length. We then face the Euclidean shortest-path problem, which has been studied intensively in the past decade; see [46, 128, 205]. The problem becomes much harder if B is not a point, because even the notion of optimality is then not well defined. See [191] for an excellent survey of this topic.

The second class of problems that we want to mention can be called geometric graph problems. Given a set S of points in R^d, we can define a weighted complete graph induced by S, where the weight of an edge (p, q) is the distance between p and q under some suitable metric (two of the most commonly used metrics are the Euclidean and the rectilinear metrics). We can now pose many optimization problems on this graph, including the Euclidean travelling salesperson problem, Euclidean matching, Euclidean (rectilinear) Steiner trees, and minimum-weight triangulation. Although all these problems can be solved using techniques known for general graphs, the hope is that better and/or simpler algorithms can be developed by exploiting the geometry of the problem. There have been several significant developments on geometric graph problems over the last few years, of which the most exciting is an n^{O(1/ε)}-time (1 + ε)-approximation algorithm for the Euclidean travelling salesperson problem [35]. We refer the reader to [39] for a survey of approximation algorithms for such geometric optimization problems.

References

[1] P. K. Agarwal, N. Amenta, and M. Sharir, Placement of one convex polygon inside another, Tech. Report CS-1995-29, Duke University, 1995.
[2] P. K. Agarwal, B. Aronov, and M. Sharir, Computing envelopes in four dimensions with applications, Proc. 10th Annu. ACM Sympos. Comput. Geom., 1994, pp. 348–358.
[3] P. K. Agarwal, B. Aronov, and M. Sharir, Motion planning for a convex polygon in a polygonal environment, manuscript, 1996.
[4] P. K. Agarwal, B. Aronov, and M. Sharir, Line transversals of balls and smallest enclosing cylinders in three dimensions, Proc. 8th ACM-SIAM Sympos. Discrete Algorithms, 1997.
[5] P. K. Agarwal, B. Aronov, M. Sharir, and S. Suri, Selecting distances in the plane, Algorithmica, 9 (1993), 495–514.
[6] P. K. Agarwal and P. K. Desikan, An approximation algorithm for terrain simplification, Proc. 8th ACM-SIAM Sympos. Discrete Algorithms, 1997.
[7] P. K. Agarwal, A. Efrat, and M. Sharir, Vertical decomposition of shallow levels in 3-dimensional arrangements and its applications, Proc. 11th Annu. ACM Sympos. Comput. Geom., 1995, pp. 39–50.
[8] P. K. Agarwal, A. Efrat, M. Sharir, and S. Toledo, Computing a segment center for a planar point set, J. Algorithms, 15 (1993), 314–323.
[9] P. K. Agarwal and J. Matoušek, Ray shooting and parametric search, SIAM J. Comput., 22 (1993), 794–806.
[10] P. K. Agarwal and J. Matoušek, On range searching with semialgebraic sets, Discrete Comput. Geom., 11 (1994), 393–418.
[11] P. K. Agarwal and J. Matoušek, Dynamic half-space range reporting and its applications, Algorithmica, 13 (1995), 325–345.
[12] P. K. Agarwal and M. Sharir, Off-line dynamic maintenance of the width of a planar point set, Comput. Geom. Theory Appl., 1 (1991), 65–78.
[13] P. K. Agarwal and M. Sharir, Planar geometric location problems, Algorithmica, 11 (1994), 185–195.
[14] P. K. Agarwal and M. Sharir, Efficient randomized algorithms for some geometric optimization problems, Proc. 11th Annu. ACM Sympos. Comput. Geom., 1995, pp. 326–335.
[15] P. K. Agarwal and M. Sharir, Ray shooting amidst convex polygons in 2D, J. Algorithms, 21 (1996), 508–519.
[16] P. K. Agarwal and M. Sharir, Ray shooting amidst convex polyhedra and polyhedral terrains in three dimensions, SIAM J. Comput., 25 (1996), 100–116.
[17] P. K. Agarwal, M. Sharir, and S. Toledo, An efficient multi-dimensional searching technique and its applications, Tech. Report CS-1993-20, Dept. Comp. Sci., Duke University, 1993.
[18] P. K. Agarwal, M. Sharir, and S. Toledo, Applications of parametric searching in geometric optimization, J. Algorithms, 17 (1994), 292–318.
[19] P. K. Agarwal, M. Sharir, and E. Welzl, The discrete 2-center problem, manuscript, 1996.
[20] P. K. Agarwal and S. Suri, Surface approximation and geometric partitions, Proc. 5th ACM-SIAM Sympos. Discrete Algorithms, 1994, pp. 24–33.
[21] R. Agarwala and D. Fernández-Baca, Weighted multidimensional search and its applications to convex optimization, SIAM J. Comput., 25 (1996), 83–99.
[22] A. Aggarwal and M. M. Klawe, Applications of generalized matrix searching to geometric algorithms, Discrete Appl. Math., 27 (1987), 3–23.
[23] A. Aggarwal, M. M. Klawe, S. Moran, P. Shor, and R. Wilber, Geometric applications of a matrix-searching algorithm, Algorithmica, 2 (1987), 195–208.
[24] A. Aggarwal, D. Kravets, J. K. Park, and S. Sen, Parallel searching in generalized Monge arrays with applications, Proc. 2nd ACM Sympos. Parallel Algorithms Architect., 1990, pp. 259–268.
[25] A. Aggarwal and J. Park, Notes on searching in multidimensional monotone arrays, Proc. 29th Annu. IEEE Sympos. Found. Comput. Sci., 1988, pp. 497–512.
[26] M. Ajtai, J. Komlós, and E. Szemerédi, Sorting in c log n parallel steps, Combinatorica, 3 (1983), 1–19.
[27] M. Ajtai and N. Megiddo, A deterministic poly(log log n)-time n-processor algorithm for linear programming in fixed dimensions, SIAM J. Comput., 25 (1996), 1171–1195.
[28] N. Alon and N. Megiddo, Parallel linear programming in fixed dimension almost surely in constant time, Proc. 31st Annu. IEEE Sympos. Found. Comput. Sci., 1990, pp. 574–582.
[29] N. Alon and J. Spencer, The Probabilistic Method, J. Wiley and Sons, New York, NY, 1993.
[30] H. Alt, B. Behrends, and J. Blömer, Approximate matching of polygonal shapes, Ann. Math. Artif. Intell., 13 (1995), 251–266.
[31] N. M. Amato, M. T. Goodrich, and E. A. Ramos, Parallel algorithms for higher-dimensional convex hulls, Proc. 35th Annu. IEEE Sympos. Found. Comput. Sci., 1994, pp. 683–694.
[32] N. Amenta, Bounded boxes, Hausdorff distance, and a new proof of an interesting Helly theorem, Proc. 10th Annu. ACM Sympos. Comput. Geom., 1994, pp. 340–347.
[33] N. Amenta, Helly-type theorems and generalized linear programming, Discrete Comput. Geom., 12 (1994), 241–261.
[34] D. S. Arnon, G. E. Collins, and S. McCallum, Cylindrical algebraic decomposition I: The basic algorithm, SIAM J. Comput., 13 (1984), 865–877.
[35] S. Arora, Polynomial time approximation schemes for Euclidean TSP and other geometric problems, Proc. 37th Annu. IEEE Sympos. Found. Comput. Sci., 1996, pp. 2–11.
[36] J. Bar-Ilan, G. Kortsarz, and D. Peleg, How to allocate network centers, J. Algorithms, 15 (1993), 385–415.
[37] R. Bar-Yehuda, A. Efrat, and A. Itai, A simple algorithm for maintaining the center of a planar point-set, Proc. 5th Canad. Conf. Comput. Geom., 1993, pp. 252–257.
[38] M. Berman, Large sample bias in least squares estimators of a circular arc center and its radius, Comput. Vision, Graphics, and Image Process, 45 (1989), 126–128.
[39] M. Bern and D. Eppstein, Approximation algorithms for geometric problems, in: Approximation Algorithms for NP-Hard Problems (D. S. Hochbaum, ed.), PWS Publishing Company, Boston, MA, 1996, pp. 296–345.
[40] B. Bhattacharya, J. Czyzowicz, P. Egyed, G. Toussaint, I. Stojmenović, and J. Urrutia, Computing shortest transversals of sets, Proc. 7th Annu. ACM Sympos. Comput. Geom., 1991, pp. 71–80.
[41] B. Bhattacharya and G. Toussaint, Computing shortest transversals, Computing, 46 (1991), 93–119.
[42] B. K. Bhattacharya, S. Jadhav, A. Mukhopadhyay, and J.-M. Robert, Optimal algorithms for some smallest intersection radius problems, Proc. 7th Annu. ACM Sympos. Comput. Geom., 1991, pp. 81–88.
[43] H. Brönnimann and B. Chazelle, Optimal slope selection via cuttings, Proc. 6th Canad. Conf. Comput. Geom., 1994, pp. 99–103.
[44] H. Brönnimann, B. Chazelle, and J. Matoušek, Product range spaces, sensitive sampling, and derandomization, Proc. 34th Annu. IEEE Sympos. Found. Comput. Sci., 1993, pp. 400–409.
[45] H. Brönnimann and M. T. Goodrich, Almost optimal set covers in finite VC-dimension, Discrete Comput. Geom., 14 (1995), 263–279.
[46] J. Canny and J. H. Reif, New lower bound techniques for robot motion planning problems, Proc. 28th Annu. IEEE Sympos. Found. Comput. Sci., 1987, pp. 49–60.
[47] T. M. Chan, Fixed-dimensional linear programming queries made easy, Proc. 12th Annu. ACM Sympos. Comput. Geom., 1996, pp. 284–290.
[48] R. Chandrasekaran and A. Tamir, Algebraic optimization: the Fermat-Weber location problem, Math. Program., 46 (1990), 219–224.
[49] B. Chazelle, Cutting hyperplanes for divide-and-conquer, Discrete Comput. Geom., 9 (1993), 145–158.
[50] B. Chazelle, H. Edelsbrunner, L. Guibas, and M. Sharir, A singly-exponential stratification scheme for real semi-algebraic varieties and its applications, Theoret. Comput. Sci., 84 (1991), 77–105.
[51] B. Chazelle, H. Edelsbrunner, L. Guibas, and M. Sharir, Diameter, width, closest line pair and parametric searching, Discrete Comput. Geom., 10 (1993), 183–196.
[52] B. Chazelle, H. Edelsbrunner, L. Guibas, and M. Sharir, Algorithms for bichromatic line segment problems and polyhedral terrains, Algorithmica, 11 (1994), 116–132.
[53] B. Chazelle, H. Edelsbrunner, L. J. Guibas, M. Sharir, and J. Stolfi, Lines in space: Combinatorics and algorithms, Algorithmica, 15 (1996), 428–447.
[54] B. Chazelle and J. Matoušek, On linear-time deterministic algorithms for optimization problems in fixed dimension, J. Algorithms, 21 (1996), 579–597.
[55] L. P. Chew, D. Dor, A. Efrat, and K. Kedem, Geometric pattern matching in d-dimensional space, Proc. 2nd Annu. European Sympos. Algorithms, Lecture Notes in Computer Science, Vol. 979, Springer-Verlag, 1995, pp. 264–279.
[56] L. P. Chew, M. T. Goodrich, D. P. Huttenlocher, K. Kedem, J. M. Kleinberg, and D. Kravets, Geometric pattern matching under Euclidean motion, Proc. 5th Canad. Conf. Comput. Geom., 1993, pp. 151–156.
[57] L. P. Chew and K. Kedem, Improvements on geometric pattern matching problems, Proc. 3rd Scand. Workshop Algorithm Theory, Lecture Notes in Computer Science, Vol. 621, Springer-Verlag, 1992, pp. 318–325.
[58] L. P. Chew and K. Kedem, A convex polygon among polygonal obstacles: Placement and high-clearance motion, Comput. Geom. Theory Appl., 3 (1993), 59–89.
[59] K. L. Clarkson, Linear programming in O(n 3^{d^2}) time, Inform. Process. Lett., 22 (1986), 21–24.
[60] K. L. Clarkson, Randomized geometric algorithms, in: Computing in Euclidean Geometry (D.-Z. Du and F. K. Hwang, eds.), World Scientific, Singapore, 1992, pp. 117–162.
[61] K. L. Clarkson, Algorithms for polytope covering and approximation, Proc. 3rd Workshop Algorithms Data Struct., Lecture Notes in Computer Science, Vol. 709, Springer-Verlag, 1993, pp. 246–252.
[62] K. L. Clarkson, Las Vegas algorithms for linear and integer programming, J. ACM, 42 (1995), 488–499.
[63] K. L. Clarkson, D. Eppstein, G. L. Miller, C. Sturtivant, and S.-H. Teng, Approximating center points with iterated Radon points, Proc. 9th Annu. ACM Sympos. Comput. Geom., 1993, pp. 91–98.
[64] K. L. Clarkson and P. W. Shor, Applications of random sampling in computational geometry, II, Discrete Comput. Geom., 4 (1989), 387–421.
[65] E. Cohen and N. Megiddo, Maximizing concave functions in fixed dimension, in: Complexity in Numeric Computation (P. Pardalos, ed.), World Scientific, Singapore, 1993.
[66] R. Cole, Slowing down sorting networks to obtain faster sorting algorithms, J. ACM, 34 (1987), 200–208.
[67] R. Cole, J. Salowe, W. Steiger, and E. Szemerédi, An optimal-time algorithm for slope selection, SIAM J. Comput., 18 (1989), 792–810.
[68] R. Cole, M. Sharir, and C. K. Yap, On k-hulls and related problems, SIAM J. Comput., 16 (1987), 61–77.
[69] G. E. Collins, Quantifier elimination for real closed fields by cylindrical algebraic decomposition, Proc. 2nd GI Conference on Automata Theory and Formal Languages, Lecture Notes in Computer Science, Vol. 33, Springer-Verlag, 1975, pp. 134–183.
[70] L. Danzer, B. Grünbaum, and V. Klee, Helly's theorem and its relatives, in: Convexity, Proc. Symp. Pure Math., Vol. 7, Amer. Math. Soc., Providence, 1963, pp. 101–180.
[71] G. Das and D. Joseph, The complexity of minimum convex nested polyhedra, Proc. 2nd Canad. Conf. Comput. Geom., 1990, pp. 296–301.
[72] A. Datta, H.-P. Lenhof, C. Schwarz, and M. Smid, Static and dynamic algorithms for k-point clustering problems, J. Algorithms, 19 (1995), 474–503.
[73] M. de Berg, O. Devillers, M. van Kreveld, O. Schwarzkopf, and M. Teillaud, Computing the maximum overlap of two convex polygons under translation, Proc. 7th Annu. Internat. Sympos. Algorithms Comput., 1996.
[74] L. De Floriani, A graph based approach to object feature recognition, Proc. 3rd Annu. ACM Sympos. Comput. Geom., 1987, pp. 100–109.
[75] M. DeHaemer and M. Zyda, Simplification of objects rendered by polygonal approximations, Computers and Graphics, 15 (1992), 175–184.
[76] X. Deng, An optimal parallel algorithm for linear programming in the plane, Inform. Process. Lett., 35 (1990), 213–217.
[77] M. B. Dillencourt, D. M. Mount, and N. S. Netanyahu, A randomized algorithm for slope selection, Internat. J. Comput. Geom. Appl., 2 (1992), 1–27.
[78] Z. Drezner, On a modified 1-center problem, Manage. Sci., 27 (1981), 838–851.
[79] Z. Drezner, The p-centre problems – Heuristic and optimal algorithms, J. Oper. Res. Soc., 35 (1984), 741–748.
[80] Z. Drezner, The planar two-center and two-median problem, Transp. Sci., 18 (1984), 351–361.
[81] Z. Drezner, On the rectangular p-center problem, Naval Res. Logist. Q., 34 (1987), 229–234.
[82] Z. Drezner, Conditional p-centre problems, Transp. Sci., 23 (1989), 51–53.
[83] Z. Drezner, ed., Facility Location, Springer-Verlag, New York, 1995.
[84] Z. Drezner, A. Mehrez, and G. O. Wesolowsky, The facility location problems with limited distances, Transp. Sci., 25 (1992), 183–187.
[85] M. E. Dyer, Linear time algorithms for two- and three-variable linear programs, SIAM J. Comput., 13 (1984), 31–45.
[86] M. E. Dyer, On a multidimensional search technique and its application to the Euclidean one-centre problem, SIAM J. Comput., 15 (1986), 725–738.
[87] M. E. Dyer, A class of convex programs with applications to computational geometry, Proc. 8th Annu. ACM Sympos. Comput. Geom., 1992, pp. 9–15.
[88] M. E. Dyer, A parallel algorithm for linear programming in fixed dimension, Proc. 11th Annu. ACM Sympos. Comput. Geom., 1995, pp. 345–349.
[89] M. E. Dyer and A. M. Frieze, A simple heuristic for the p-centre problem, Oper. Res. Lett., 3 (1985), 285–288.
[90] M. E. Dyer and A. M. Frieze, A randomized algorithm for fixed-dimension linear programming, Math. Program., 44 (1989), 203–212.
[91] H. Ebara, N. Fukuyama, H. Nakano, and Y. Nakanishi, Roundness algorithms using the Voronoi diagrams, Abstracts 1st Canad. Conf. Comput. Geom., 1989, p. 41.
[92] H. Edelsbrunner, Computing the extreme distances between two convex polygons, J. Algorithms, 6 (1985), 213–224.
[93] H. Edelsbrunner, Algorithms in Combinatorial Geometry, Springer-Verlag, Heidelberg, 1987.
[94] H. Edelsbrunner and R. Waupotitsch, Computing a ham-sandwich cut in two dimensions, J. Symbolic Comput., 2 (1986), 171–178.
[95] A. Efrat and M. Sharir, A near-linear algorithm for the planar segment center problem, Discrete Comput. Geom., 16 (1996), in press.
[96] A. Efrat, M. Sharir, and A. Ziv, Computing the smallest k-enclosing circle and related problems, Comput. Geom. Theory Appl., 4 (1994), 119–136.
[97] M. Eisner and D. Severance, Mathematical techniques for efficient record segmentation in large shared databases, J. ACM, 23 (1976), 619–635.
[98] D. Eppstein, Dynamic three-dimensional linear programming, ORSA J. Comput., 4 (1992), 360–368.
[99] D. Eppstein, Faster construction of planar two-centers, Proc. 8th ACM-SIAM Sympos. Discrete Algorithms, 1997.
[100] D. Eppstein and J. Erickson, Iterated nearest neighbors and finding minimal polytopes, Discrete Comput. Geom., 11 (1994), 321–350.
[101] T. Feder and D. H. Greene, Optimal algorithms for approximate clustering, Proc. 20th Annu. ACM Sympos. Theory Comput., 1988, pp. 434–444.
[102] F. Follert, E. Schömer, and J. Sellen, Subquadratic algorithms for the weighted maximin facility location problem, Proc. 7th Canad. Conf. Comput. Geom., 1995, pp. 1–6.
[103] F. Follert, E. Schömer, J. Sellen, M. Smid, and C. Thiel, Computing a largest empty anchored cylinder, and related problems, Proc. 15th Conf. Foundations of Software Technology and Theoretical Comput. Sci., Lecture Notes in Computer Science, Vol. 1026, Springer-Verlag, 1995, pp. 428–442.
[104] R. J. Fowler, M. S. Paterson, and S. L. Tanimoto, Optimal packing and covering in the plane are NP-complete, Inform. Process. Lett., 12 (1981), 133–137.
[105] G. N. Frederickson, Optimal algorithms for tree partitioning, Proc. 2nd ACM-SIAM Sympos. Discrete Algorithms, 1991, pp. 168–177.
[106] G. N. Frederickson and D. B. Johnson, The complexity of selection and ranking in X+Y and matrices with sorted rows and columns, J. Comput. Syst. Sci., 24 (1982), 197–208.
[107] G. N. Frederickson and D. B. Johnson, Finding kth paths and p-centers by generating and searching good data structures, J. Algorithms, 4 (1983), 61–80.
[108] G. N. Frederickson and D. B. Johnson, Generalized selection and ranking: sorted matrices, SIAM J. Comput., 13 (1984), 14–30.
[109] B. Gärtner, A subexponential algorithm for abstract optimization problems, SIAM J. Comput., 24 (1995), 1018–1035.
[110] B. Gärtner and E. Welzl, Linear programming – Randomized and abstract frameworks, Proc. 13th Sympos. Theoret. Aspects Comput. Sci., Lecture Notes in Computer Science, Vol. 1046, Springer-Verlag, 1996, pp. 669–687.
[111] A. Glozman, K. Kedem, and G. Shpitalnik, On some geometric selection and optimization problems via sorted matrices, Proc. 4th Workshop Algorithms Data Struct., Lecture Notes in Computer Science, Vol. 955, Springer-Verlag, 1995, pp. 26–37.
[112] M. Goldwasser, A survey of linear programming in randomized subexponential time, ACM-SIGACT News, 26 (1995), 96–104.
[113] T. Gonzalez, Clustering to minimize the maximum intercluster distance, Theoret. Comput. Sci., 38 (1985), 293–306.
[114] T. Gonzalez, Covering a set of points in multidimensional space, Inform. Process. Lett., 40 (1991), 181–188.
[115] M. T. Goodrich, Geometric partitioning made easier, even in parallel, Proc. 9th Annu. ACM Sympos. Comput. Geom., 1993, pp. 73–82.
[116] M. T. Goodrich, Efficient piecewise-linear function approximation using the uniform metric, Discrete Comput. Geom., 14 (1995), 445–462.
[117] M. T. Goodrich, Fixed-dimensional parallel linear programming via relative epsilon-approximations, Proc. 7th ACM-SIAM Sympos. Discrete Algorithms, 1996, pp. 132–141.
[118] B. Grünbaum, A proof of Vázsonyi's conjecture, Bull. Research Council Israel, Section A, 6 (1956), 77–78.
[119] P. Gupta, R. Janardan, and M. Smid, Fast algorithms for collision and proximity problems involving moving geometric objects, Report MPI-I-94-113, Max-Planck-Institut Inform., Saarbrücken, Germany, 1994.
[120] D. Gusfield, K. Balasubramanian, and D. Naor, Parametric optimization of sequence alignment, Algorithmica, 12 (1994), 312–326.
[121] R. Hassin and N. Megiddo, Approximation algorithms for hitting objects by straight lines, Discrete Appl. Math., 30 (1991), 29–42.
[122] P. S. Heckbert and M. Garland, Fast polygonal approximation of terrains and height fields, Report CMU-CS-95-181, Carnegie Mellon University, 1995.
[123] E. Helly, Über Systeme von abgeschlossenen Mengen mit gemeinschaftlichen Punkten, Monatsh. Math. und Physik, 37 (1930), 281–302.
[124] A. Heppes, Beweis einer Vermutung von A. Vázsonyi, Acta Math. Acad. Sci. Hungar., 7 (1956), 463–466.
[125] J. Hershberger, Minimizing the sum of diameters efficiently, Comput. Geom. Theory Appl., 2 (1992), 111–118.
[126] J. Hershberger, A faster algorithm for the two-center decision problem, Inform. Process. Lett., 47 (1993), 23–29.
[127] J. Hershberger and S. Suri, Finding tailored partitions, J. Algorithms, 12 (1991), 431–463.
[128] J. Hershberger and S. Suri, Efficient computation of Euclidean shortest paths in the plane, Proc. 34th Annu. IEEE Sympos. Found. Comput. Sci., 1993, pp. 508–517.
[129] J. Hershberger and S. Suri, Matrix searching with the shortest path metric, Proc. 25th Annu. ACM Sympos. Theory Comput., 1993, pp. 485–494.
[130] D. S. Hochbaum and W. Maass, Approximation schemes for covering and packing problems in image processing and VLSI, J. ACM, 31 (1984), 130–136.
[131] D. S. Hochbaum and W. Maass, Fast approximation algorithms for a nonconvex covering problem, J. Algorithms, 8 (1987), 305–323.
[132] D. S. Hochbaum and D. Shmoys, A best possible heuristic for the k-center problem, Math. Oper. Res., 10 (1985), 180–184.
[133] D. S. Hochbaum and D. Shmoys, A unified approach to approximation algorithms for bottleneck problems, J. ACM, 33 (1986), 533–550.
[134] R. Hocken, J. Raja, and U. Babu, Sampling issues in coordinate metrology, Manufacturing Review, 6 (1993), 282–294.
[135] H. Hoppe, T. DeRose, T. Duchamp, M. Halstead, H. Jin, J. McDonald, J. Schweitzer, and W. Stuetzle, Piecewise smooth surface reconstruction, Proc. SIGGRAPH 94, 1994, pp. 295–302.
[136] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle, Mesh optimization, Proc. SIGGRAPH 93, 1993, pp. 19–26.
[137] M. E. Houle, H. Imai, K. Imai, and J.-M. Robert, Weighted orthogonal linear L1-approximation and applications, Proc. 1st Workshop Algorithms Data Struct., Lecture Notes in Computer Science, Vol. 382, Springer-Verlag, 1989, pp. 183–191.
[138] M. E. Houle and G. T. Toussaint, Computing the width of a set, IEEE Trans. Pattern Anal. Mach. Intell., PAMI-10 (1988), 761–765.
[139] D. P. Huttenlocher and K. Kedem, Computing the minimum Hausdorff distance for point sets under translation, Proc. 6th Annu. ACM Sympos. Comput. Geom., 1990, pp. 340–349.
[140] D. P. Huttenlocher, K. Kedem, and M. Sharir, The upper envelope of Voronoi surfaces and its applications, Discrete Comput. Geom., 9 (1993), 267–291.
[141] R. Z. Hwang, R. C. Chang, and R. C. T. Lee, The generalized searching over separators strategy to solve some NP-hard problems in subexponential time, Algorithmica, 9 (1993), 398–423.
[142] R. Z. Hwang, R. C. T. Lee, and R. C. Chang, The slab dividing approach to solve the Euclidean p-center problem, Algorithmica, 9 (1993), 1–22.
[143] H. Imai, D. Lee, and C. Yang, 1-segment center covering problems, ORSA J. Comput., 4 (1992), 426–434.
[144] S. Jadhav and A. Mukhopadhyay, Computing a centerpoint of a finite planar set of points in linear time, Discrete Comput. Geom., 12 (1994), 291–312.
[145] S. Jadhav, A. Mukhopadhyay, and B. Bhattacharya, An optimal algorithm for the intersection radius of a set of convex polygons, J. Algorithms, 20 (1996), 244–267.
[146] J. W. Jaromczyk and M. Kowaluk, An efficient algorithm for the Euclidean two-center problem, Proc. 10th Annu. ACM Sympos. Comput. Geom., 1994, pp. 303–311.
[147] J. W. Jaromczyk and M. Kowaluk, A geometric proof of the combinatorial bounds for the number of optimal solutions to the 2-center Euclidean problem, Proc. 7th Canad. Conf. Comput. Geom., 1995, pp. 19–24.
[148] J. W. Jaromczyk and M. Kowaluk, The two-line center problem from a polar view: A new algorithm and data structure, Proc. 4th Workshop Algorithms Data Struct., Lecture Notes in Computer Science, Vol. 955, Springer-Verlag, 1995, pp. 13–25.
[149] G. Kalai, A subexponential randomized simplex algorithm, Proc. 24th Annu. ACM Sympos. Theory Comput., 1992, pp. 475–482.
[150] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), 373–395.
[151] M. J. Katz, Improved algorithms in geometric optimization via expanders, Proc. 3rd Israel Symposium on Theory of Computing and Systems, 1995, pp. 78–87.
[152] M. J. Katz and F. Nielsen, On piercing sets of objects, Proc. 12th Annu. ACM Sympos. Comput. Geom., 1996, pp. 113–121.
[153] M. J. Katz and M. Sharir, An expander-based approach to geometric optimization, Proc. 9th Annu. ACM Sympos. Comput. Geom., 1993, pp. 198–207.
[154] M. J. Katz and M. Sharir, Optimal slope selection via expanders, Inform. Process. Lett., 47 (1993), 115–122.
[155] L. G. Khachiyan, Polynomial algorithm in linear programming, U.S.S.R. Comput. Math. and Math. Phys., 20 (1980), 53–72.
[156] D. E. Knuth, Sorting and Searching, Addison-Wesley, Reading, MA, 1973.
[157] M. T. Ko and Y. T. Ching, Linear time algorithms for the weighted tailored 2-partition problem and the weighted rectilinear 2-center problem under L1-distance, Discrete Appl. Math., 40 (1992), 397–410.
[158] M. T. Ko and R. C. T. Lee, On weighted rectilinear 2-center and 3-center problems, Inform. Sci., 54 (1991), 169–190.
[159] M. T. Ko, R. C. T. Lee, and J. S. Chang, An optimal approximation algorithm for the rectilinear m-center problem, Algorithmica, 5 (1990), 341–352.
[160] N. M. Korneenko and H. Martini, Hyperplane approximation and related topics, in: New Trends in Discrete and Computational Geometry (J. Pach, ed.), Algorithms and Combinatorics, Vol. 10, Springer-Verlag, Heidelberg, 1993, pp. 135–161.
[161] U. M. Landau, Estimation of circular arc and its radius, Comput. Vision, Graphics, and Image Process, 38 (1987), 317–326.
[162] V. B. Le and D. T. Lee, Out-of-roundness problem revisited, IEEE Trans. Pattern Anal. Mach. Intell., PAMI-13 (1991), 217–223.
[163] D. T. Lee and Y. F. Wu, Geometric complexity of some location problems, Algorithmica, 1 (1986), 193–211.
[164] D. Leven and M. Sharir, On the number of critical free contacts of a convex polygonal object moving in two-dimensional polygonal space, Discrete Comput. Geom., 2 (1987), 255–270.
[165] C.-Y. Lo, J. Matoušek, and W. L. Steiger, Algorithms for ham-sandwich cuts, Discrete Comput. Geom., 11 (1994), 433–452.
[166] C.-Y. Lo and W. Steiger, An optimal-time algorithm for ham-sandwich cuts in the plane, Proc. 2nd Canad. Conf. Comput. Geom., 1990, pp. 5–9.
[167] W. Maass, On the complexity of nonconvex covering, SIAM J. Comput., 15 (1986), 453–467.
[168] P. Magillo and L. De Floriani, Maintaining multiple levels of detail in the overlay of hierarchical subdivisions, Proc. 8th Canad. Conf. Comput. Geom., 1996, pp. 190–195.
[169] J. Matoušek, Computing the center of planar point sets, in: Computational Geometry: Papers from the DIMACS Special Year (J. E. Goodman, R. Pollack, and W. Steiger, eds.), Amer. Math. Soc., Providence, 1991, pp. 221–230.
[170] J. Matoušek, Randomized optimal algorithm for slope selection, Inform. Process. Lett., 39 (1991), 183–187.
[171] J. Matoušek, Efficient partition trees, Discrete Comput. Geom., 8 (1992), 315–334.
[172] J. Matoušek, Linear optimization queries, J. Algorithms, 14 (1993), 432–448.
[173] J. Matoušek, Lower bound for a subexponential optimization algorithm, Random Structures & Algorithms, 5 (1994), 591–607.
[174] J. Matoušek, On enclosing k points by a circle, Inform. Process. Lett., 53 (1995), 217–221.
[175] J. Matoušek, On geometric optimization with few violated constraints, Discrete Comput. Geom., 14 (1995), 365–384.
[176] J. Matoušek, D. M. Mount, and N. S. Netanyahu, Efficient randomized algorithms for the repeated median line estimator, Proc. 4th ACM-SIAM Sympos. Discrete Algorithms, 1993, pp. 74–82.
[177] J. Matoušek and O. Schwarzkopf, A deterministic algorithm for the three-dimensional diameter problem, Comput. Geom. Theory Appl., 6 (1996), 253–262.
[178] J. Matoušek, M. Sharir, and E. Welzl, A subexponential bound for linear programming, Algorithmica, 16 (1996), 498–516.
[179] N. Megiddo, Combinatorial optimization with rational objective functions, Math. Oper. Res., 4 (1979), 414–424.
[180] N. Megiddo, Applying parallel computation algorithms in the design of serial algorithms, J. ACM, 30 (1983), 852–865.
[181] N. Megiddo, Linear-time algorithms for linear programming in R^3 and related problems, SIAM J. Comput., 12 (1983), 759–776.
[182] N. Megiddo, The weighted Euclidean 1-center problem, Math. Oper. Res., 8 (1983), 498–504.
[183] N. Megiddo, Linear programming in linear time when the dimension is fixed, J. ACM, 31 (1984), 114–127.
[184] N. Megiddo, Partitioning with two lines in the plane, J. Algorithms, 6 (1985), 430–433.
[185] N. Megiddo, On the ball spanned by balls, Discrete Comput. Geom., 4 (1989), 605–610.
[186] N. Megiddo, On the complexity of some geometric problems in unbounded dimension, J. Symbolic Comput., 10 (1990), 327–334.
[187] N. Megiddo and K. J. Supowit, On the complexity of some common geometric location problems, SIAM J. Comput., 13 (1984), 182–196.
[188] N. Megiddo and A. Tamir, On the complexity of locating linear facilities in the plane, Oper. Res. Lett., 1 (1982), 194–197.
[189] N. Megiddo and E. Zemel, A randomized O(n log n) algorithm for the weighted Euclidean 1-center problem, J. Algorithms, 7 (1986), 358–368.
[190] J. S. B. Mitchell, Shortest paths among obstacles in the plane, Proc. 9th Annu. ACM Sympos. Comput. Geom., 1993, pp. 308–317.
[191] J. S. B. Mitchell, Shortest paths and networks, Technical Report, State University of New York at Stony Brook, 1996.
[192] J. S. B. Mitchell and S. Suri, Separation and approximation of polyhedral objects, Comput. Geom. Theory Appl., 5 (1995), 95–114.
[193] R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, New York, NY, 1995.
[194] K. Mulmuley, Computational Geometry: An Introduction Through Randomized Algorithms, Prentice Hall, Englewood Cliffs, NJ, 1994.
[195] N. Naor and M. Sharir, Computing a point in the center of a point set in three dimensions, Proc. 2nd Canad. Conf. Comput. Geom., 1990, pp. 10–13.
[196] C. H. Norton, S. A. Plotkin, and É. Tardos, Using separation algorithms in fixed dimensions, J. Algorithms, 13 (1992), 79–98.
[197] C. H. Papadimitriou, Worst-case and probabilistic analysis of a geometric location problem, SIAM J. Comput., 10 (1981), 542–557.
[198] M. Pellegrini, Ray shooting on triangles in 3-space, Algorithmica, 9 (1993), 471–494.
[199] M. Pellegrini, On collision-free placements of simplices and the closest pair of lines in 3-space, SIAM J. Comput., 23 (1994), 133–153.
[200] M. Pellegrini, Repetitive hidden surface removal for polyhedra, J. Algorithms, 21 (1996), 80–101.
[201] M. J. Post, Minimum spanning ellipsoids, Proc. 16th Annu. ACM Sympos. Theory Comput., 1984, pp. 108–116.
[202] E. Ramos, Intersection of unit-balls and diameter of a point set in R^3, Comput. Geom. Theory Appl., 6 (1996), in press.
[203] M. Reichling, On the detection of a common intersection of k convex objects in the plane, Inform. Process. Lett., 29 (1988), 25–29.
[204] M. Reichling, On the detection of a common intersection of k convex polyhedra, in: Computational Geometry and its Applications, Lecture Notes in Computer Science, Vol. 333, Springer-Verlag, 1988, pp. 180–186.
[205] J. H. Reif and J. A. Storer, A single-exponential upper bound for finding shortest paths in three dimensions, J. ACM, 41 (1994), 1013–1019.
[206] U. Roy and X. Zhang, Establishment of a pair of concentric circles with the minimum radial separation for assessing roundness error, Computer Aided Design, 24 (1992), 161–168.
[207] J. Salowe, L1 interdistance selection by parametric search, Inform. Process. Lett., 30 (1989), 9–14.
[208] N. Sarnak and R. E. Tarjan, Planar point location using persistent search trees, Commun. ACM, 29 (1986), 669–679.
[209] E. Schömer, J. Sellen, M. Teichmann, and C. Yap, Efficient algorithms for the smallest enclosing cylinder problem, Proc. 8th Canad. Conf. Comput. Geom., 1996, pp. 264–269.
[210] E. Schömer and C. Thiel, Efficient collision detection for moving polyhedra, Proc. 11th Annu. ACM Sympos. Comput. Geom., 1995, pp. 51–60.
[211] R. Seidel, Small-dimensional linear programming and convex hulls made easy, Discrete Comput. Geom., 6 (1991), 423–434.
[212] R. Seidel, Backwards analysis of randomized geometric algorithms, in: New Trends in Discrete and Computational Geometry (J. Pach, ed.), Springer-Verlag, Heidelberg, Germany, 1993, pp. 37–68.
[213] S. Sen, Parallel multidimensional search using approximation algorithms: with applications to linear-programming and related problems, Proc. 8th ACM Sympos. Parallel Algorithms Architect., 1996, pp. 251–260.
[214] L. Shafer and W. Steiger, Randomizing optimal geometric algorithms, Proc. 5th Canad. Conf. Comput. Geom., 1993, pp. 133–138.
[215] M. Sharir, A near-linear algorithm for the planar 2-center problem, Proc. 12th Annu. ACM Sympos. Comput. Geom., 1996, pp. 106–112.
[216] M. Sharir and P. K. Agarwal, Davenport–Schinzel Sequences and Their Geometric Applications, Cambridge University Press, New York, 1995.
[217] M. Sharir and S. Toledo, Extremal polygon containment problems, Comput. Geom. Theory Appl., 4 (1994), 99–118.
[218] M. Sharir and E. Welzl, A combinatorial bound for linear programming and related problems, Proc. 9th Sympos. Theoret. Aspects Comput. Sci., Lecture Notes in Computer Science, Vol. 577, Springer-Verlag, 1992, pp. 569–579.
[219] M. Sharir and E. Welzl, Rectilinear and polygonal p-piercing and p-center problems, Proc. 12th Annu. ACM Sympos. Comput. Geom., 1996, pp. 122–132.
[220] D. M. H. Sommerville, Analytical Geometry in Three Dimensions, Cambridge University Press, Cambridge, 1951.
[221] A. Stein and M. Werman, Finding the repeated median regression line, Proc. 3rd ACM-SIAM Sympos. Discrete Algorithms, 1992, pp. 409–413.
[222] A. Stein and M. Werman, Robust statistics in shape fitting, Proc. IEEE Internat. Conf. Comput. Vision Pattern Recogn., 1992, pp. 540–546.
[223] K. Swanson, D. T. Lee, and V. L. Wu, An optimal algorithm for roundness determination on convex polygons, Comput. Geom. Theory Appl., 5 (1995), 225–235.
[224] S. M. Thomas and Y. T. Chen, A simple approach for the estimation of circular arc and its radius, Comput. Vision, Graphics, and Image Process, 45 (1989), 362–370.
[225] S. Toledo, Extremal Polygon Containment Problems and Other Issues in Parametric Searching, M.S. Thesis, Dept. Comput. Sci., Tel Aviv Univ., Tel Aviv, 1991.
[226] S. Toledo, Approximate parametric search, Inform. Process. Lett., 47 (1993), 1–4.
[227] S. Toledo, Maximizing non-linear concave functions in fixed dimension, in: Complexity in Numerical Computations (P. M. Pardalos, ed.), World Scientific, Singapore, 1993, pp. 429–447.
[228] L. Valiant, Parallelism in comparison problems, SIAM J. Comput., 4 (1975), 348–355.
[229] K. R. Varadarajan, Approximating monotone polygonal curves using the uniform metric, Proc. 12th Annu. ACM Sympos. Comput. Geom., 1996, pp. 311–318.
[230] K. R. Varadarajan and P. K. Agarwal, Linear approximation of simple objects, Proc. 7th Canad. Conf. Comput. Geom., 1995, pp. 13–18.
[231] H. Voelcker, Current perspective on tolerancing and metrology, Manufacturing Review, 6 (1993), 258–268.
[232] E. Welzl, Smallest enclosing disks (balls and ellipsoids), in: New Results and New Trends in Computer Science (H. Maurer, ed.), Lecture Notes in Computer Science, Vol. 555, Springer-Verlag, 1991, pp. 359–370.
[233] G. Wesolowsky, The Weber problem: History and perspective, Location Science, 1 (1993), 5–23.
[234] A. C. Yao and F. F. Yao, A general approach to D-dimensional geometric queries, Proc. 17th Annu. ACM Sympos. Theory Comput., 1985, pp. 163–168.
[235] E. Zemel, A linear time randomizing algorithm for searching ranked functions, Algorithmica, 2 (1987), 81–90.

Appendix: Multidimensional Parametric Searching

In this appendix we describe how to extend the parametric-searching technique to higher dimensions. Suppose we have a d-variate (strictly) concave function F(λ), where λ varies over R^d. We wish to compute the point λ* ∈ R^d at which F(λ) attains its maximum value. Let A_s be, as above, an algorithm that can compute F(λ_0) for any given λ_0. As in parametric searching, we assume that the control flow of A_s is governed by comparisons, each of which amounts to computing the sign of a d-variate polynomial p(λ) of constant maximum degree. We also need a few additional assumptions on A_s. We call a variable in A_s dynamic if its value depends on λ. The only operations allowed on dynamic variables are: (i) evaluating a polynomial p(λ) of degree at most δ, where δ is a constant, and assigning the value to a dynamic variable, (ii) adding two dynamic variables, and (iii) multiplying a dynamic variable by a constant. These assumptions imply that, if λ is indeterminate, then each dynamic variable is a polynomial in λ of degree at most δ, and that F is a piecewise-polynomial function, each piece being a polynomial of degree at most δ.

We run A_s generically at λ*. Each comparison involving λ* now amounts to evaluating the sign of a d-variate polynomial p(λ_1, ..., λ_d) at λ*.

First consider the case where p is a linear function of the form a_0 + Σ_{1≤i≤d} a_i λ_i, with a_d ≠ 0. Consider the hyperplane

h: λ_d = −(a_0 + Σ_{i=1}^{d−1} a_i λ_i)/a_d.

It suffices to describe an algorithm for computing the point λ_h ∈ h such that F(λ_h) = max_{λ∈h} F(λ). By invoking this algorithm on h and on two other hyperplanes h_{ε+} and h_{ε−}, where

h_{ε+}: λ_d = −(a_0 + ε + Σ_{i=1}^{d−1} a_i λ_i)/a_d   and   h_{ε−}: λ_d = −(a_0 − ε + Σ_{i=1}^{d−1} a_i λ_i)/a_d,

for some arbitrarily small constant ε > 0, we can determine whether λ* ∈ h, λ* ∈ h+, or λ* ∈ h−, where h+ and h− are the two open halfspaces bounded by h.
(Technically, one can, and should, treat ε as an infinitesimal quantity; see [183] for details. Also, a similar perturbation scheme works when a_d = 0.) We solve the following more general problem: Let g be a k-flat in R^d contained in a (k+1)-flat γ, and let g+ (resp. g−) be the halfspace of γ lying above (resp. below) g, relative to a direction in γ orthogonal to g. We wish to compute the point λ_g ∈ g such that F(λ_g) = max_{λ∈g} F(λ). Denote by A_s^(k) an algorithm for solving this problem. As above, by running A_s^(k) on g and on two infinitesimally shifted copies of g within γ, we can determine whether the point where F attains its maximum on γ lies in g, in g+, or in g−. Notice that A_s^(0) = A_s, and that A_s^(d−1) is the algorithm for computing λ_h. Inductively, assume that we have an algorithm A_s^(k−1) that can solve this problem for any (k−1)-dimensional flat. We run A_s generically at λ_g, where λ varies over g. Each comparison involves determining which side of a (k−1)-flat g' ⊆ g contains λ_g. Running A_s^(k−1) on g' and on two other infinitesimally shifted copies, g'_{ε+} and g'_{ε−}, of g' within g, we can perform the desired location of λ_g with respect to g', and thereby resolve the comparison. When the simulation of A_s terminates, λ_g will be found.

The total running time of the algorithm A_s^(k) is O(T_s^{k+1}). The details of this approach can be found in [17, 65, 172, 196]. If we also have a parallel algorithm A_p that evaluates F(λ_0) in time T_p using P processors, then the running time of A_s^(k) can be improved, as in the one-dimensional case, by executing A_p generically at λ_g in each recursive step. A parallel step, however, requires resolving P independent comparisons. The goal is therefore to resolve, by invoking A_s^(k−1) a constant number of times, a fixed fraction of these P comparisons, where each comparison requires locating λ_g with respect to a (k−1)-flat g' ⊆ g. Cohen and Megiddo [65] developed such a procedure, which yields a 2^{O(d^2)} T_s (T_p log P)^d-time algorithm for computing λ*; see also [183]. Agarwala and Fernández-Baca [21] extended Cole's improvement of Megiddo's parametric searching to multidimensional parametric searching, which improves the running time of the Cohen–Megiddo algorithm in some cases by a polylogarithmic factor. Agarwal et al. [17] showed that these procedures can be simplified and improved, using (1/r)-cuttings, to d^{O(d)} T_s (T_p log P)^d.

Toledo [227] extended the above approach to resolving the signs of nonlinear polynomials, using Collins's cylindrical algebraic decomposition [69]. We describe his algorithm for d = 2. That is, we want to compute the sign of a bivariate, constant-degree polynomial p at λ*. Let Z_p denote the set of roots of p. We compute Collins' cylindrical algebraic decomposition Ξ of R^2 so that the sign of p is invariant within each cell of Ξ [34, 69]. Our aim is to determine the cell τ ∈ Ξ that contains λ*, thereby determining the sign of p at λ*. The cells of Ξ are delimited by O(1) y-vertical lines, each passing through a self-intersection point of Z_p or through a point of vertical tangency of Z_p; see Figure 8. For each vertical line ℓ, we run the standard one-dimensional parametric-searching procedure to determine which side of ℓ contains λ*. If any of these substeps returns λ*, we are done. Otherwise, we obtain a vertical strip σ that contains λ*. We still have to search through the cells of Ξ within σ, which are stacked one above the other in the y-direction, to determine which of them contains λ*.
We note that the number of roots of p along any vertical line ℓ : x = x_0 within σ is the same, that each root varies continuously with x_0, and that their relative y-order is the same for each vertical line. In other words, the roots of Z_p in σ constitute a collection of disjoint, x-monotone arcs γ_1, ..., γ_t whose endpoints lie on the boundary lines of σ. We can regard each γ_i as the graph of a univariate function γ_i(x).

Next, for each γ_i, we determine whether λ* lies below, above, or on γ_i. Let x* be the x-coordinate of λ*, and let ℓ* be the vertical line x = x*. If we knew x*, we could run A_s at each γ_i ∩ ℓ* and locate λ* with respect to γ_i, as desired. Since we do not know x*, we execute the 1-dimensional parametric-searching algorithm generically on the line ℓ*, with the intention of simulating it at the unknown point λ_i = γ_i ∩ ℓ*. This time, performing a comparison involves computing the sign of some bivariate, constant-degree polynomial g at λ_i (we prefer to treat g as a bivariate polynomial, although we could have eliminated one variable by restricting λ to lie on γ_i). We compute the roots r_1, ..., r_u of g that lie on γ_i, and set r_0 and r_{u+1} to be the left and right endpoints of γ_i, respectively. As above, we compute the index j so that λ* lies in the vertical strip σ' bounded between r_j and r_{j+1}. Notice that the sign of g is the same for all points on γ_i within the strip σ', so we can now compute the sign of g at λ_i.

When the generic algorithm being simulated on γ_i terminates, it returns a constant-degree polynomial F_i(x, y), corresponding to the value of F at λ_i (i.e., F_i(λ_i) = F(λ_i)), and a vertical strip σ_i that contains λ*. Let ψ_i(x) = F_i(x, γ_i(x)). Let γ_i^+ (resp. γ_i^-) be the copy of γ_i translated by an infinitesimally small amount in the (+y)-direction (resp. (-y)-direction), i.e., γ_i^+(x) = γ_i(x) + ε (resp. γ_i^-(x) = γ_i(x) - ε), where ε > 0 is an infinitesimal. We next simulate the algorithm at λ_i^+ = γ_i^+ ∩ ℓ* and λ_i^- = γ_i^- ∩ ℓ*. We thus obtain two functions ψ_i^+(x), ψ_i^-(x) and two vertical strips σ_i^+, σ_i^-. Let σ̂_i = σ_i ∩ σ_i^+ ∩ σ_i^-. We need to evaluate the signs of ψ_i(x*) - ψ_i^+(x*) and ψ_i(x*) - ψ_i^-(x*) to determine the location of λ* with respect to γ_i (this is justified by the concavity of F). We compute the x-coordinates of the intersection points of (the graphs of) ψ_i, ψ_i^+, ψ_i^- that lie inside σ̂_i. Let x_1 ≤ x_2 ≤ ... ≤ x_s be these x-coordinates, and let x_0, x_{s+1} be the x-coordinates of the left and right boundaries of σ̂_i, respectively. By running A_s on the vertical lines x = x_j, for 1 ≤ j ≤ s, we determine the vertical strip W_i = [x_j, x_{j+1}] × R that contains λ*. Notice that the signs of the polynomials ψ_i(x) - ψ_i^+(x) and ψ_i(x) - ψ_i^-(x) are fixed for all x ∈ [x_j, x_{j+1}]. By evaluating ψ_i, ψ_i^+, ψ_i^- at any x_0 ∈ [x_j, x_{j+1}], we can compute the signs of ψ_i(x*) - ψ_i^+(x*) and of ψ_i(x*) - ψ_i^-(x*).

Figure 8: (i) roots of p; (ii) the cylindrical algebraic decomposition of p; (iii) the curves g = 0 and γ_1.

Repeating this procedure for all the arcs γ_i, we can determine the cell of Ξ that contains λ*, and thus resolve the comparison involving p. We then resume the execution of the generic algorithm.

The execution of the 1-dimensional procedure takes O(T_s^2) steps, which implies that the generic simulation of the 1-dimensional procedure requires O(T_s^3) time. The total time spent in resolving the sign of p at λ* is therefore O(T_s^3). Hence, the total running time of the 2-dimensional algorithm is O(T_s^4). As above, using a parallel version of the algorithm for the generic simulation reduces the running time considerably.
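The strip-location primitive used repeatedly above (running A_s on the vertical lines x = x_j to find the strip that contains λ*) can be illustrated by the following small Python sketch. As before, the toy concave F, the constant EPS, and all function names are our own assumptions, not the survey's code: for each candidate line we compare the maxima of F on two nearby vertical lines, which by concavity tells us on which side x* lies, and a binary search then isolates the strip.

EPS = 1e-6
XSTAR = 1.3   # x-coordinate of the maximizer of the toy F(x, y) = -(x - 1.3)^2 - (y + 0.7)^2

def max_on_vertical_line(x0):
    # Maximum of the toy F over the vertical line x = x0.
    # In the real algorithm this would be a run of A_s restricted to that line.
    return -(x0 - XSTAR) ** 2

def side_of_line(x0):
    # +1 if x* lies to the right of x = x0, -1 if to the left, 0 if (numerically) on it.
    left  = max_on_vertical_line(x0 - EPS)
    right = max_on_vertical_line(x0 + EPS)
    if right > left:
        return +1
    if left > right:
        return -1
    return 0

def locate_strip(xs):
    # Given sorted candidate x-coordinates xs with xs[0] <= x* <= xs[-1], return j
    # such that x* lies in the strip [xs[j], xs[j+1]], using O(log s) side queries.
    lo, hi = 0, len(xs) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        s = side_of_line(xs[mid])
        if s > 0:
            lo = mid
        elif s < 0:
            hi = mid
        else:
            return mid            # x* lies exactly on the line x = xs[mid]
    return lo

xs = [-2.0, 0.0, 1.0, 2.0, 3.5]   # e.g. intersection points of psi_i, psi_i^+, psi_i^-
print(locate_strip(xs))           # prints 2, i.e. x* = 1.3 lies in the strip [1.0, 2.0]

A full implementation would replace max_on_vertical_line by the generic simulation described in the text and would treat EPS as an infinitesimal rather than a fixed small number.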
In d dimensions, the running time of Toledo's original algorithm is O(T_s (T_p log n)^{2^d - 1}), which can be improved to T_s (T_p log n)^{O(d^2)} using the result of Chazelle et al. [50] on the vertical decomposition of arrangements of algebraic surfaces.