Elsevier is quite willing, however, to sell you a pdf file listing the non-existent editorial board for $31.50. At the risk of angering the copyright lawyers at Elsevier, see for yourself. (My library still subscribes.)
Posted at 07:45 AM | Permalink | Comments (1)
I am alone as a TCS person in the department, and I’m sure I will feel some isolation as a result. I am really hoping that I will be able to work on some interesting problems with non-TCS people. My background, though, is in very traditional algorithmic work, so I am sure there will be growing pains. I’ll start with what will probably be the first of many requests for advice: any advice?
Posted at 12:32 PM | Permalink | Comments (0)
We consider the problem of partitioning the set of vertices of a given unit disk graph (UDG) into a minimum number of cliques. The problem is NP-hard and various constant factor approximations are known, with the current best ratio of 3. Our main result is a polynomial time approximation scheme (PTAS) for this problem on UDG. In fact, we present a weakly-robust algorithm: given a graph G (not necessarily a UDG) with edge lengths, it either (i) computes a clique partition or (ii) gives a certificate that the graph is not a UDG. In case (i), the clique partition is guaranteed to be within a (1 + ε) ratio of the optimum if the input is a UDG; if the input is not a UDG, the algorithm either computes a clique partition with no guarantee on its quality, or detects that the graph is not a UDG. Since recognition of UDGs is NP-hard even when edge lengths are given, our PTAS is a weakly-robust algorithm. Our main technical contribution is showing the separability of an optimal clique partition: there exists an optimal clique partition in which the convex hulls of the cliques are pairwise non-overlapping. Our algorithm can be transformed into an O(log*n/ε^O(1)) time distributed PTAS. Finally, we consider a weighted version of the clique partition problem on vertex-weighted UDGs given in standard form (e.g., as an adjacency matrix); the weight of a clique is the weight of a heaviest vertex in it, and the weight of a clique partition is the sum of the weights of its cliques. This formulation generalizes the classical clique partition problem. We note some key distinctions between the weighted and the unweighted versions, where ideas developed for the unweighted case do not help.
Yet, surprisingly, we show that the weighted version admits a (2 + ε)-approximation algorithm even when the graph is expressed in standard form. This improves on the best known algorithm, which constructs an 8-approximation for the unweighted case for UDGs expressed in standard form.
In the capacitated vehicle routing problem, introduced by Dantzig and Ramser in 1959, we are given the locations of n customers and a depot, along with a vehicle of capacity k, and wish to find a minimum length collection of tours, each starting from the depot and visiting at most k customers, whose union covers all the customers. We give a quasi-polynomial time approximation scheme for the setting where the customers and the depot are on the plane, and distances are given by the Euclidean metric.
We settle the 1-pass space complexity of (1+eps)-approximating the Lp norm, for real p with 1 <= p <= 2, of a length-n vector updated in a length-m stream with updates to its coordinates. We assume the updates are integers in the range [-M, M]. In particular, we show the space required is Theta(eps^{-2}*log(mM) + loglog(n)) bits. Our result also holds for 0 < p < 1; although Lp is not a norm in this case, it remains a well-defined function. Our upper bound improves upon previous algorithms of [Indyk, JACM '06] and [Li, SODA '08]. This improvement comes from showing an improved derandomization of the Lp sketch of Indyk by using k-wise independence for small k, as opposed to using the heavy hammer of a generic pseudorandom generator against space-bounded computation such as Nisan's PRG. Our lower bound improves upon previous work of [Alon-Matias-Szegedy, JCSS '99] and [Woodruff, SODA '04], and is based on showing a direct sum property for the 1-way communication of the gap-Hamming problem.
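As an illustration of the sketching idea (not the optimal-space algorithm of this paper), here is a minimal AMS-style sketch for p = 2 in Python; the counter/repeat sizes and the use of fully random signs are my own illustrative choices:

```python
import random

def ams_l2_sketch(stream, n, counters=30, repeats=9, seed=1):
    """AMS-style estimate of ||x||_2^2 for a vector x in R^n given as a
    stream of (index, delta) updates. Each counter keeps a random +/-1
    signed sum; its square is an unbiased estimate of ||x||_2^2, and a
    median of means concentrates it."""
    rng = random.Random(seed)
    total = counters * repeats
    signs = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(total)]
    sums = [0.0] * total
    for i, delta in stream:
        for c in range(total):
            sums[c] += signs[c][i] * delta
    means = [sum(sums[r * counters + c] ** 2 for c in range(counters)) / counters
             for r in range(repeats)]
    return sorted(means)[repeats // 2]
```

Indyk's p-stable sketch replaces the ±1 signs with p-stable random variables; the paper's contribution is derandomizing such sketches with only k-wise independence.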
We address the problem of localizing homology classes, namely, finding the cycle representing a given class with the most concise geometric measure. We focus on the volume measure, that is, the 1-norm of a cycle. Two main results are presented. First, we prove the problem is NP-hard to approximate within any constant factor. Second, we prove that for homology of dimension 2 or higher, the problem is NP-hard to approximate even when the Betti number is $O(1)$. A side effect is the inapproximability of computing the nonbounding cycle with the smallest volume and of computing cycles representing a homology basis with minimal total volume. We also discuss other geometric measures (diameter and radius) and show their disadvantages in homology localization.
We provide an $O(n\log n)$ time constant factor approximation algorithm for the crossing number of a graph of bounded maximum degree which is ``densely enough'' embeddable on any fixed orientable surface. Our approach combines some known tools with a powerful new lower bound on the crossing number of an embedded graph. This result extends previous results that gave such approximations in particular cases of projective, toroidal or apex graphs; it is a qualitative improvement over previously published algorithms that constructed low-crossing-number drawings of embeddable graphs without giving any approximation guarantees. No constant factor approximation algorithms for the crossing number problem over comparably rich classes of graphs are known to date.
Given a d-dimensional array A with N entries, the Range Minimum Query (RMQ) asks for the minimum element within a contiguous subarray of A. The 1D RMQ problem has been studied intensively because of its relevance to the Nearest Common Ancestor problem and its important use in stringology. If constant-time query answering is required, linear time and space preprocessing algorithms were known for the 1D case, but not for the higher dimensional cases. In this paper, we give the first linear-time preprocessing algorithm for arrays with fixed dimension, such that any range minimum query can be answered in constant time. This improves the preprocessing time over all previous work under the same model, i.e., RAM with logarithmic word size.
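For context, a minimal sparse-table RMQ sketch in Python — the classical O(n log n)-preprocessing, O(1)-query baseline, not the linear-time preprocessing achieved in the paper:

```python
def build_sparse_table(a):
    """table[k][i] holds min(a[i : i + 2**k]); built in O(n log n)."""
    n = len(a)
    table = [list(a)]
    k = 1
    while (1 << k) <= n:
        prev = table[k - 1]
        half = 1 << (k - 1)
        table.append([min(prev[i], prev[i + half])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def rmq(table, lo, hi):
    """Minimum of a[lo:hi] (half-open, hi > lo) in O(1): cover the range
    with two possibly overlapping power-of-two blocks."""
    k = (hi - lo).bit_length() - 1
    return min(table[k][lo], table[k][hi - (1 << k)])
```

The linear-time constructions combine such tables with blocking tricks; the paper extends this to fixed dimension d.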
Searching for a point in the plane is a well-known search game problem introduced in the early eighties. The best known search strategy is given by a spiral and achieves a competitive ratio of 17.289.... It was shown by Gal \cite{g-sg-80} that this strategy is the best among all \emph{monotone} and \emph{periodic} strategies. Since then it has been unknown whether the given strategy is optimal in general. This paper settles this old open fundamental search problem and shows that spiral search is indeed optimal. The given problem can be considered as the continuous version of the well-known $m$-ray search problem and also appears in several non-geometric applications and modifications. Therefore the optimality of spiral search is an important question considered by many researchers in the last decades. We answer the logarithmic spiral conjecture for the given problem. The lower bound construction might be helpful for similar settings; it also simplifies existing proofs on classical $m$-ray search.
Motivated by applications in the area of sensor networks, Isler, Khanna, Spletzer and Taylor (2005) discuss an algorithmic problem where 2n cameras have to be assigned in disjoint pairs to n targets. The cameras are located on a straight line, whereas the targets are somewhere in the plane. They assume that the 2n cameras are positioned on the x-axis, in the 2n points with coordinates (x(i), 0) with 1 <= i <= 2n. They discuss an error measure motivated by stereo reconstruction that mainly depends on the y-coordinates y(1),..., y(n) of the n targets: If the i-th and the j-th camera together are assigned to the k-th target, then the corresponding incurred error cost is c(ijk) = y(k)/|x(i)-x(j)|. The objective is to find an assignment of camera pairs to targets such that the sum of all error costs c(ijk) is minimized. We investigate a generalization of this cost structure that is based on a real parameter p. An instance of problem p-FOA is specified by 2n real numbers x(1),...,x(2n) and by n real numbers y(1),...,y(n). The corresponding cost-coefficient for matching x(i) and x(j) with y(k) is c(ijk) = y(k)/|x(i)-x(j)|^p. The goal is to find an assignment that minimizes the sum of all costs. We provide a complete complexity classification of problem p-FOA:
As our main contribution, we fully resolve the approximability status of p-FOA:
- For every real p with -1 <= p <= 0, problem p-FOA is polynomially solvable.
- For every real p with p < -1 or p > 0, problem p-FOA is strongly NP-hard.
- For every real p, problem p-FOA possesses a PTAS.
- Even the special case with equi-distant cameras is strongly NP-hard. This settles a question left open in Isler et al. (2005).
Our PTAS extends the results and ideas of Isler et al. (2005) for the equi-distant special case. The design of our PTAS is quite intricate, and introduces a number of new ideas to the area.
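To make the cost structure concrete, here is a tiny brute-force evaluator for p-FOA in Python (exhaustive over all pairings, so exponential; purely illustrative and unrelated to the PTAS):

```python
from itertools import permutations

def pfoa_cost(pairing, xs, ys, p):
    """Cost of one assignment: pairing[k] = (i, j) assigns cameras at
    xs[i], xs[j] to the target with y-coordinate ys[k], incurring
    c(ijk) = ys[k] / |xs[i] - xs[j]|**p (camera positions assumed distinct)."""
    return sum(ys[k] / abs(xs[i] - xs[j]) ** p
               for k, (i, j) in enumerate(pairing))

def pfoa_brute_force(xs, ys, p):
    """Minimum total cost by exhaustive search over all ways of pairing
    the 2n cameras with the n targets -- exponential, tiny instances only."""
    n = len(ys)
    return min(pfoa_cost([(perm[2 * k], perm[2 * k + 1]) for k in range(n)],
                         xs, ys, p)
               for perm in permutations(range(2 * n)))
```

For p = 0 every pairing costs the same (each target contributes y(k)), which hints at why the problem behaves differently across ranges of p.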
We present a simple and practical algorithm for the $c$-approximate near neighbor problem ($c$-NN): given $n$ points $P\subset\R^d$ and radius $R$, build a data structure which, given $q\in\R^d$, can with probability $1-\delta$ return a point $p\in P$ with $\mathsf{dist}(p,q)\le cR$ if there is any $p^*\in P$ with $\mathsf{dist}(p^*,q)\le R$. For $c=d+1$, our algorithm deterministically ($\delta=0$) preprocesses in time $O(nd\log d)$, space $O(dn)$, and answers queries in expected time $O(d^2)$; this is the first known algorithm to deterministically guarantee an $O(d)$-NN solution in constant time with respect to $n$ for all $\ell_p$ metrics. A probabilistic version empirically achieves useful $c$ values ($c < 2$) where $c$ appears to grow minimally as $d\to\infty$. A query time of $O(d\log d)$ is available, providing slightly less accuracy. These techniques can also be used to approximately find (pointers between) all pairs $x,y\in P$ with $\mathsf{dist}(x,y) \le R$ in time $O(nd\log d)$. The key to the algorithm is a locality-sensitive hash: a mapping $h:\R^d\to U$ with the property that $h(x) = h(y)$ is much more likely for nearby $x,y$. We introduce a somewhat regular simplex which tessellates $\R^d$, and efficiently hash each point in any simplex of this tessellation to all $d+1$ corners; any points in neighboring cells will be hashed to a shared corner and noticed as nearby points. This method is completely independent of dimension reduction, so that additional space and time savings are available by first reducing all input vectors.
Let Ω be a disk of radius $R$ in the plane. A set $F$ of closed unit disks contained in Ω forms a maximal packing if the unit disks are pairwise disjoint and the set is maximal: i.e., it is not possible to add another disk to $F$ while maintaining the packing property. A point $p$ is hidden within the ``forest'' defined by $F$ if any ray with apex $p$ intersects some disk of $F$: that is, a person standing at $p$ can hide without being seen from outside the forest. We show that if the radius $R$ of Ω is large enough, one can find a hidden point for any maximal packing of unit disks in Ω. This proves a conjecture of Joseph~Mitchell. We also present an $O(n^{5/2} \log n)$-time algorithm that, given a forest with $n$ (not necessarily congruent) disks, computes the boundary illumination map of all disks in the forest.
Two mobile agents (robots) with distinct labels have to meet in an arbitrary, possibly infinite, unknown connected graph or in an unknown connected terrain in the plane. Agents are modeled as points, and the route of each of them only depends on its label and on the unknown environment. The actual walk of each agent also depends on an asynchronous adversary that may arbitrarily vary the speed of the agent, stop it, or even move it back and forth, as long as the walk of the agent in each segment of its route is continuous, does not leave it and covers all of it. Meeting in a graph means that both agents must be at the same time in some node or in some point inside an edge of the graph, while meeting in a terrain means that both agents must be at the same time in some point of the terrain. Does there exist a deterministic algorithm that allows any two agents to meet in any unknown environment in spite of this very powerful adversary? We give deterministic rendezvous algorithms for agents starting at arbitrary nodes of any anonymous connected graph (finite or infinite) and for agents starting at any interior points with rational coordinates in any closed region of the plane with path-connected interior. While our algorithms work in a very general setting - agents can, indeed, meet almost everywhere - we show that none of the above few limitations imposed on the environment can be removed. On the other hand, our algorithm also guarantees the following approximate rendezvous for agents starting at arbitrary interior points of a terrain as above: agents will eventually get at an arbitrarily small positive distance from each other.
Spectral methods have been widely used in a broad range of application fields. One important object involved in such methods is the Laplace-Beltrami operator of a manifold. Indeed, a variety of work in graphics and geometric optimization uses the eigen-structures of the Laplace operator, and applications include mesh smoothing, compression, editing, shape segmentation, matching, and parametrization, among others. While the Laplace operator is defined (mathematically) for a smooth domain, in these applications, the underlying manifold is often approximated by a discrete mesh, and the spectral structure of the manifold Laplacian is estimated from some discrete Laplace operator constructed from this mesh. In this paper, we study the important question of how well the spectrum computed from the discrete mesh approximates the true spectrum of the manifold Laplacian. We exploit a recent result on mesh Laplacian and provide the first convergence result to relate the spectrum constructed from a mesh with the true spectrum. We also study how stable these eigen-structures and their discrete approximations are when the underlying manifold is perturbed, and provide explicit bounds for the Laplacian spectra of two ``close'' manifolds, as well as a convergence result for their discrete approximations. Finally, we present various experimental results to demonstrate that these discrete spectra are both accurate and robust in practice.
We present an algorithm for computing a $d$-dimensional subspace of the row space of a matrix. For an $n \times n$ matrix $A$ with $m$ nonzero entries and with $rank(A) \ge d$ the algorithm generates a $d \times n$ matrix with full row rank whose rows span a $d$-dimensional subspace of $Rows(A)$. If $rank(A) < d$ the algorithm generates a $rank(A) \times n$ row-equivalent matrix. The running time of the algorithm is $O(\min\{ n^{2-2/\omega} m^{1/\omega} d^{\omega-2+1/\omega}, n^2 d^{\omega-2} \})$ where $\omega < 2.376$ is the matrix multiplication exponent. An immediate corollary of the algorithm is the construction of a row-reduced equivalent matrix of $A$, and hence the computation of $rank(A)$, in time $O(\min\{ n^{2-2/\omega} m^{1/\omega} rank(A)^{\omega-2+1/\omega}, n^2 rank(A)^{\omega-2} \})$. We note that the running time is sub-quadratic if $d < (n^2/m)^{0.528}$.
Given a point set $P$ in $\REAL^d$, an $F_q(\ell_p)$-subspace approximation problem asks for a subspace that minimizes the sum of $q$-th powers of $\ell_p$-distances to this subspace. In this paper we develop techniques for subspace approximation, regression, and matrix approximation that can be used to deal with massive data sets in high dimensional spaces. In particular, we develop coresets and sketches, i.e. small space representations that approximate the input point set $P$ with respect to a subspace approximation problem. Our results are:
- A dimensionality reduction method that can be applied to $F_q(\ell_p)$-clustering and shape fitting problems \cite{DDHKM09, Har06c}.
- The first strong coreset for $F_1(\ell_2)$-subspace approximation in high dimensional spaces, i.e. of size polynomial in the dimension of the space.
- A $(1+\epsilon)$-approximation algorithm for the $j$-dimensional $F_1(\ell_2)$-subspace approximation problem with running time $O(nd (j/\epsilon)^{O(1)} + (n+d) 2^{O(j/\epsilon)^{O(1)}})$.
- A streaming algorithm that maintains a coreset for the $F_1(\ell_2)$-subspace approximation problem and uses $\tilde{O}(d(\frac{j2^{ O(\sqrt{\log n})}}{\epsilon^2})^{\poly(j)})$ space, where the space gives the number of memory cells used.
- Several streaming algorithms with bounded precision in the turnstile model. We significantly extend the results of \cite{CW09} for approximate linear regression, distance to subspace approximation, and best rank-$j$ approximation, to error measures other than the Frobenius norm, obtaining $1$-pass sublinear, near-optimal space complexity for these problems. Our running time is always polynomial for constant $j/\epsilon$, including the time to extract a $(1+\epsilon)$-approximation to the best rank-$j$ subspace from the sketch.
Robot Arm Reachability, the problem of computing the extremal configurations of a 3D revolute-jointed manipulator, is a long-standing open problem in Robotics. In this paper we completely solve it for orthogonal polygonal chains, which appear as a special case of a larger family, fully characterized here by a technical condition. Until now, in spite of the practical importance of the problem, only numerical optimization heuristics were available. These were not even guaranteed to produce the optimum; in fact, the problem was not even known to be computationally solvable, and in practice, the numerical heuristics were applicable only to small problem sizes. We present surprisingly elementary and efficient (mostly linear time) algorithms for four fundamental problems: (1) finding the minimum and the maximum reach value, (2) finding an extremal configuration (or enumerating all of them), (3) folding a given chain to a given extremal position, and (4) folding a chain in a way that changes the endpoint distance function monotonically. Our theoretical results reduce the first problem to finding a shortest path between two vertices in an associated simple triangulated polygon, and the last problem to a simple version of the planar Carpenter's Rule Problem.
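For background only: in the planar case with unrestricted revolute joints, the reachable endpoint distances have a well-known closed form (an annulus); a sketch of that classical fact, not the paper's 3D algorithms:

```python
def planar_reach_interval(lengths):
    """Endpoint distances reachable by a planar chain with free revolute
    joints: the annulus [r, R] with R the total length and
    r = max(0, 2 * max(lengths) - R) -- the inner radius is positive only
    when one link is longer than all the others combined."""
    R = sum(lengths)
    return max(0.0, 2 * max(lengths) - R), R
```

The difficulty the paper addresses is that in 3D, with restricted (orthogonal) joints, no such closed form was known.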
The Lipschitz multi-armed bandit (MAB) problem generalizes the classical multi-armed bandit problem by assuming one is given side information consisting of a priori upper bounds on the difference in expected payoff between certain pairs of strategies. Classical results of Lai and Robbins and Auer et al imply a logarithmic regret bound for the Lipschitz MAB problem on finite metric spaces. Recent results on continuum-armed bandit problems and their generalizations imply lower bounds of $\sqrt{t}$, or stronger, for many infinite metric spaces such as the unit interval. Is this dichotomy universal? We prove that the answer is yes: for every metric space, the optimal regret of a Lipschitz MAB algorithm is either bounded above by any $f\in \omega(\log t)$, or bounded below by any $g\in o(\sqrt{t})$. Perhaps surprisingly, this dichotomy does not coincide with the distinction between finite and infinite metric spaces; instead it depends on whether the completion of the metric space is compact and countable. Our proof connects upper and lower bound techniques in online learning with classical topological notions such as perfect sets and the Cantor-Bendixson theorem.
We also consider the "full-feedback" (a.k.a., "best-expert") version of Lipschitz MAB problem, termed the "Lipschitz experts problem", and show that this problem exhibits a similar dichotomy. We proceed to give nearly matching upper and lower bounds on regret in the Lipschitz experts problem on uncountable metric spaces. These bounds are of the form $\tilde{\Theta}(t^\gamma)$, where the exponent $\gamma\in [\tfrac12, 1]$ depends on the metric space. To characterize this dependence, we introduce a novel dimensionality notion tailored to the experts problem. Finally, we show that both Lipschitz bandits and Lipschitz experts problems become completely intractable (in the sense that no algorithm has regret $o(t)$) if and only if the completion of the metric space is non-compact.
A set G of points on a 1.5-dimensional terrain, also known as an x-monotone polygonal chain, is said to guard the terrain if any point on the terrain is ‘seen’ by a point in G. Two points on the terrain see each other if and only if the line segment between them is never strictly below the terrain. The minimum terrain guarding problem asks for a minimum guarding set for the given input terrain. We prove that the decision version of this problem is NP-hard. This solves a significant open problem and complements recent positive approximability results for the optimization problem. Our proof uses a reduction from PLANAR 3-SAT. We build gadgets capable of ‘mirroring’ a consistent variable assignment back and forth across a main valley. The structural simplicity of 1.5-dimensional terrains makes it difficult to build general clause gadgets that do not destroy this assignment when they are evaluated. However, we exploit the structure in instances of PLANAR 3-SAT to find very specific operations involving only ‘adjacent’ variables. For these restricted operations we can construct gadgets that allow a full reduction to work.
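The visibility definition above is easy to turn into code: for points on a piecewise-linear terrain it suffices to test the segment against the terrain vertices between them. A minimal sketch (terrain given as a list of vertices sorted by x; the interface is my own illustrative choice):

```python
def sees(terrain, i, j):
    """Visibility between vertices i and j of a 1.5D terrain, given as a
    list of (x, y) vertices sorted by x. They see each other iff the
    segment between them is never strictly below the terrain; for a
    piecewise-linear terrain it suffices to test the vertices in between."""
    if i > j:
        i, j = j, i
    (x1, y1), (x2, y2) = terrain[i], terrain[j]
    for k in range(i + 1, j):
        xk, yk = terrain[k]
        t = (xk - x1) / (x2 - x1)            # parameter of xk along the segment
        if y1 + t * (y2 - y1) < yk - 1e-12:  # segment strictly below terrain
            return False
    return True
```

On the two-peak terrain [(0,0), (1,2), (2,0), (3,2), (4,0)], the two valleys see the peaks but the outer endpoints do not see each other.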
Star-shaped bodies are an important nonconvex generalization of convex bodies (e.g., linear programming with violations). Here we present an efficient algorithm for sampling a given star-shaped body. The complexity of the algorithm grows polynomially in the dimension and inverse polynomially in the fraction of the volume taken up by the kernel of the star-shaped body. The analysis is based on a new isoperimetric inequality. Our main technical contribution is a tool for proving such inequalities when the domain is not convex. As a consequence, we obtain a polynomial algorithm for computing the volume of such a set as well. In contrast, linear optimization over star-shaped sets is NP-hard.
We consider the problem of computing a minimum-distortion bijection between two point-sets in R^2. We prove the first non-trivial inapproximability result for this problem, for the case when the distortion is constant. More precisely, we show that there exist constants 0 < alpha < beta, such that it is NP-hard to distinguish between spaces for which the distortion is either at most alpha, or at least beta, under the Euclidean norm. This addresses a question of [Kenyon, Rabani and Sinclair 2004], and extends a result due to [Papadimitriou and Safra 2005], who gave inapproximability for point-sets in R^3. We also apply similar ideas to the problem of computing a minimum distortion embedding of a finite metric space into R^2. We obtain an analogous inapproximability result under the \ell_{\infty} norm for this problem. Inapproximability for the case of constant distortion was previously known only for dimension at least 3 [Matousek and Sidiropoulos 2008].
The Chord algorithm is a popular, simple method for the succinct approximation of curves, which is widely used, under different names, in a variety of areas, such as multiobjective and parametric optimization, computational geometry, and graphics. We analyze the performance of the Chord algorithm, as compared with the optimal approximation that achieves a desired accuracy with the minimum number of points, and prove sharp upper and lower bounds, both in the worst-case and average-case settings.
We give an $O(n\sqrt{\lg n})$-time algorithm for counting the number of inversions in a permutation on $n$ elements. This improves a long-standing previous bound of $O(n\lg n/\lg\lg n)$ that followed from Dietz's data structure [WADS'89], and answers a question of Andersson and Petersson [SODA'95]. As Dietz's result is known to be optimal for the related dynamic rank problem, our result demonstrates a significant improvement in the \emph{offline} setting. Our new technique is quite simple: we perform a ``vertical partitioning'' of a trie (akin to van Emde Boas trees), and use ideas from external memory. However, the technique finds numerous applications: for example, we obtain
- in $d$ dimensions, an algorithm to answer $n$ offline orthogonal range counting queries in time $O(n\lg^{d-2+1/d}n)$;
- an improved construction time for online data structures for orthogonal range counting;
- an improved update time for the partial sums problem;
- faster Word RAM algorithms for finding the maximum depth in an arrangement of axis-aligned rectangles, and for the slope selection problem.
As a bonus, we also give a simple $(1+\eps)$-approximation algorithm for counting inversions that runs in linear time, improving the previous $O(n\lg\lg n)$ bound by Andersson and Petersson.
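For comparison, the textbook O(n lg n) merge-sort inversion counter that these results improve on (a baseline sketch, not the new algorithm):

```python
def count_inversions(a):
    """Number of pairs i < j with a[i] > a[j], counted during merge sort
    in O(n log n) -- the classical baseline."""
    def sort(xs):
        if len(xs) <= 1:
            return xs, 0
        mid = len(xs) // 2
        left, inv_left = sort(xs[:mid])
        right, inv_right = sort(xs[mid:])
        merged, inv = [], inv_left + inv_right
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                # every remaining left element forms an inversion with right[j]
                inv += len(left) - i
                merged.append(right[j])
                j += 1
        merged += left[i:] + right[j:]
        return merged, inv
    return sort(list(a))[1]
```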
We study the quantitative geometry of graphs with small genus. In particular, we exhibit new, optimal random partitioning schemes for such graphs. Using these geometric primitives, we give exponentially improved dependence on genus for a number of problems like approximate max-flow/min-cut theorems, approximations for uniform and non-uniform Sparsest Cut, treewidth approximation, Laplacian eigenvalue bounds, Lipschitz extension theorems, and various embeddings into normed spaces.
We develop streaming algorithms for maintaining extent measures of a stream S of n points in R^d. We focus on designing streaming algorithms whose space is polynomial in d (poly(d)) and sublinear in n. For the problems of computing the diameter, width and minimum enclosing ball of S, we obtain lower bounds on the worst-case approximation ratio of any streaming algorithm that uses poly(d) space. On the positive side, we design an insertion-only data structure called the blurred ball cover that maintains a set C \subseteq S of $O(\eps^{-3} \log(1/\eps))$ points. We show that the blurred ball cover can be used for answering approximate farthest-point queries and maintaining an approximate minimum enclosing ball and diameter of S. The size of the blurred ball cover is linear in d and independent of n.
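Unrelated to the blurred ball cover itself, the folklore 2-approximation for the minimum enclosing ball gives a feel for the problem (center at any input point, radius to the farthest point; by the triangle inequality the radius is at most twice optimal):

```python
import math

def approx_min_enclosing_ball(points):
    """Folklore 2-approximation to the minimum enclosing ball: center at
    the first input point, radius the distance to the farthest point.
    The triangle inequality bounds this radius by twice the optimum."""
    center = points[0]
    radius = max(math.dist(center, p) for p in points)
    return center, radius
```

On the points (0,0), (2,0), (1,1) the optimal radius is 1 (center (1,0)); the sketch returns radius 2.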
We investigate the prospects for an algorithm for Boolean CNF satisfiability (CNF-SAT) that runs in $O^*(2^{\delta n})$ time for some $\delta < 1$. In such a case we would say CNF-SAT has an improved algorithm. We present several hypotheses which we feel are plausible, given current knowledge. We prove that if any of the hypotheses are true, then CNF-SAT has an improved algorithm. For other candidate hypotheses, they imply that $k$-SAT can be solved in $2^{\delta n}$ time for a fixed $\delta < 1$ that is independent of $k$. Such results would be breakthroughs in exact algorithms. The contrapositive statements show that if we assume satisfiability does not admit improved algorithms, then many interesting lower bounds follow. Our reductions provide tight connections between satisfiability and several foundational problems in logic, graph theory, communication complexity, and computational geometry.
We present a new optimization technique that yields the first FPTAS for several geometric problems. These problems reduce to optimizing a sum of non-negative, constant description-complexity algebraic functions. We first give an FPTAS for optimizing such a sum of algebraic functions, and then we apply it to several geometric optimization problems. We obtain the first FPTAS for two fundamental geometric shape matching problems in fixed dimension: maximizing the volume of overlap of two polyhedra under rigid motions, and minimizing their symmetric difference. We obtain the first FPTAS for other problems in fixed dimension, such as computing an optimal ray in a weighted subdivision, finding the largest axially symmetric subset of a polyhedron, and computing minimum area hulls.
A Random Geometric Graph (RGG) in two dimensions is constructed by distributing $n$ nodes uniformly at random in $[0,\sqrt{n}]^2$ and connecting two nodes if their Euclidean distance is at most $r$, for some prescribed $r$. We analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node in the largest connected component of the RGG. Then, in each round, each informed node chooses a neighbor uniformly at random and informs it. We prove that this algorithm informs every node in the largest connected component of an RGG within $O(\sqrt{n}/r+\log n)$ rounds with probability $1-O(n^{-1})$. This holds for any value of $r$ larger than the critical value for the emergence of a connected component with $\Omega(n)$ nodes. By proving this result, we show that for any two nodes sufficiently distant from each other in $[0,\sqrt{n}]^2$, the shortest path between them in the RGG, when such a path exists, has length only a constant factor larger than the optimum. This result is of independent interest and, in particular, gives that the diameter of the largest component of an RGG is $\Theta(\sqrt{n}/r)$, which, surprisingly, had been an open problem so far.
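The broadcast process is easy to simulate; a toy sketch in Python (adjacency dict and seeded RNG are my own illustrative choices):

```python
import random

def push_broadcast_rounds(adj, source, seed=0):
    """Simulate randomized push broadcast: in each round every informed
    node picks a uniformly random neighbor and informs it. Returns the
    number of rounds until the whole component of `source` is informed."""
    rng = random.Random(seed)
    # compute the connected component of the source
    component, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in component:
                component.add(v)
                stack.append(v)
    informed, rounds = {source}, 0
    while informed != component:
        newly = {rng.choice(adj[u]) for u in informed if adj[u]}
        informed |= newly
        rounds += 1
    return rounds
```

On a path 0-1-2-3 starting at node 0, information must cross three edges, so at least three rounds are needed.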
We observe that the classical maximum flow problem in any directed planar graph $G$ can be reformulated as a parametric shortest path problem in the oriented dual graph $G^*$. This reformulation immediately suggests an algorithm to compute maximum flows, which runs in $O(n\log n)$ time. As we continuously increase the parameter, each change in the shortest path tree can be effected in $O(\log n)$ time using standard dynamic tree data structures, and the special structure of the parametrization implies that each directed edge enters the evolving shortest path tree at most once. The resulting maximum-flow algorithm is identical to the recent algorithm of Borradaile and Klein [J. ACM 2009], but our new formulation allows a simpler presentation and analysis. On the other hand, we demonstrate that for a similarly structured parametric shortest path problem on the torus, the shortest path tree can change $\Omega(n^2)$ times in the worst case, suggesting that different methods may be required to efficiently compute maximum flows in higher-genus graphs.
We consider the problem of reconstructing a road network from a collection of path traces and provide guarantees on the accuracy of the reconstruction under reasonable assumptions. Our algorithm can be used to process a collection of polygonal paths in the plane so that shared structures (subpaths) among the paths in the collection can be discovered and the collection can be organized to allow efficient path similarity queries against new query paths on the same road network. This is a timely problem, as GPS and other location traces of both people and vehicles are becoming available at large scale, and there is a real need to create appropriate data structures and databases for such data.
We give an algorithm for computing planar convex hulls in the self-improving model: Given a sequence I_1, I_2, ... of planar n-point sets, the upper convex hull conv(I) of each set I is desired. We assume that there exists a probability distribution D on sets of n points, such that the inputs I_j are drawn independently according to D. Furthermore, D is such that the individual points in I are distributed independently of each other. In other words, the i-th point is distributed according to D_i. The D_i's can be arbitrary but are independent of each other. The distribution D is not known to the algorithm in advance. After a learning phase of n^\eps rounds, the expected time to compute conv(I) is O(n + H(conv(I))). Here, H(conv(I)) is the minimum entropy of any random variable that maps I to a description of conv(I) and to a labeling scheme that proves non-extremality for every point in I not on the hull. This is a lower bound for the expected running time of any algebraic computation tree that computes the convex hull. Our algorithm is therefore asymptotically optimal for D.
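For reference, the upper convex hull itself can be computed with Andrew's monotone chain in O(n log n) — the worst-case baseline that the self-improving algorithm beats on favorable distributions:

```python
def upper_convex_hull(points):
    """Upper hull of a planar point set via Andrew's monotone chain,
    O(n log n) worst case. Returns hull vertices from left to right."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # > 0 iff o -> a -> b is a counterclockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for p in pts:
        # the upper hull keeps only clockwise (right) turns
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull
```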
We prove that any sketching protocol for edit distance achieving a constant approximation requires nearly logarithmic (in the strings' length) communication complexity. This is an exponential improvement over the previous, doubly-logarithmic, lower bound of [Andoni-Krauthgamer, FOCS'07]. Our lower bound also applies to the Ulam distance (edit distance over non-repetitive strings). In this special case, it is polynomially related to the recent upper bound of [Andoni-Indyk-Krauthgamer, SODA'09]. From a technical perspective, we prove a direct-sum theorem for sketching product metrics that is of independent interest. We show that, for any metric X that requires sketch size which is a sufficiently large constant, sketching the max-product metric \ell_\infty^d(X) requires \Omega(d) bits. The conclusion, in fact, also holds for arbitrary two-way communication. The proof uses a novel technique for information complexity based on Poincar\'e inequalities and suggests an intimate connection between non-embeddability, sketching and communication complexity.
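For context, the distance whose sketching complexity is being lower-bounded is the classical edit (Levenshtein) distance, computable by the textbook O(nm) dynamic program. This sketch is just that baseline, unrelated to the paper's communication-complexity techniques.

```python
def edit_distance(s, t):
    """Levenshtein distance between strings s and t via the classical
    dynamic program, using a rolling row: O(len(s) * len(t)) time."""
    prev = list(range(len(t) + 1))  # distances from "" to prefixes of t
    for i, a in enumerate(s, 1):
        cur = [i]  # distance from s[:i] to ""
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                # delete a
                           cur[j - 1] + 1,             # insert b
                           prev[j - 1] + (a != b)))    # match / substitute
        prev = cur
    return prev[-1]
```

The sketching question asks how few bits of a compressed summary of s and t suffice to approximate this quantity; the abstract shows nearly log n bits are necessary for any constant approximation.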
Let $X$ be a space and $F$ a family of $\{0,1\}$-valued functions on $X$. Vapnik and Chervonenkis showed that if $F$ is "simple" (finite VC dimension), then for every probability measure $\mu$ on $X$ and $\epsilon>0$ there is a finite set $S$ such that for all $f \in F$, $\sum_{x \in S} f(x)/|S| = [\int f(x) d\mu(x)] \pm \epsilon$. Think of $S$ as a "universal $\epsilon$-approximator" for integration in $F$. $S$ can actually be obtained w.h.p. just by sampling a few points from $\mu$. This is a mainstay of computational learning theory. It was later extended by other authors to families of bounded (e.g., $[0,1]$-valued) real functions.
In this work we establish similar "universal $\epsilon$-approximators" for families of unbounded nonnegative real functions --- in particular, for the families over which one optimizes when performing data classification. In this case the $\epsilon$-approximation is multiplicative.
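The sampling construction mentioned above ("$S$ can actually be obtained w.h.p. just by sampling a few points from $\mu$") is easy to see numerically. The toy below, with a family and names of my own choosing, takes $F$ to be three threshold indicators and $\mu$ the uniform measure on $[0,1]$: one shared sample approximates every integral in the family at once.

```python
import random


def empirical_averages(sample, family):
    """Approximate the integral of each f in `family` under mu by its
    average over one shared sample S drawn from mu."""
    return [sum(f(x) for x in sample) / len(sample) for f in family]


random.seed(0)
# mu = uniform measure on [0, 1]; F = threshold indicators f_t(x) = 1[x <= t],
# a family of VC dimension 1.
sample = [random.random() for _ in range(10_000)]
family = [lambda x, t=t: 1.0 if x <= t else 0.0 for t in (0.25, 0.5, 0.75)]

approx = empirical_averages(sample, family)
# Each entry should be within a few hundredths of the true integrals
# 0.25, 0.5, 0.75 -- the same sample works uniformly over the family.
```

The point of the VC theory is that the sample size needed for this uniform guarantee depends on the dimension of $F$ and on $\epsilon$, not on $|F|$ or on $\mu$.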
The ham-sandwich theorem states that, given $d\ge 2$ measures in $\R^d$, it is possible to divide all of them in half with a single $(d-1)$-dimensional hyperplane. We study an orthogonal version of the ham-sandwich theorem, defining an orthogonal cut as one that uses at most $d$ hyperplanes orthogonal to coordinate axes. For example, a single hyperplane orthogonal to a coordinate axis is an orthogonal cut. We prove that any three measures in $\R^3$ can be simultaneously divided in half by a single orthogonal cut. Applied to point measures, this implies that any three finite sets of points in $\R^3$ can be simultaneously bisected by an orthogonal cut. We present an algorithm for computing an orthogonal ham-sandwich cut in $O(n\log n)$ time.
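The one-set special case is elementary and may help fix intuition: a single hyperplane orthogonal to a coordinate axis, placed at a median coordinate, bisects one finite point set. This toy sketch (my own, not the paper's three-set algorithm) does exactly that.

```python
def orthogonal_bisector(points, axis=0):
    """Return a coordinate c such that the hyperplane {x[axis] = c} leaves
    at most half of `points` strictly on each side."""
    coords = sorted(p[axis] for p in points)
    return coords[len(coords) // 2]  # a median coordinate
```

The hard part of the paper is doing this for three sets *simultaneously* with one orthogonal cut of at most three axis-orthogonal hyperplanes.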
Posted at 09:29 AM | Permalink | Comments (0)
One significant change from previous years is that your conference paper can be up to twenty (20) pages, including the title, authors' names and addresses, references or bibliography, and appendix (if desired). In some cases, the program committee may choose to verify that the final version incorporates changes requested in the feedback you receive.

It can? It may? I guess this is a pretty strong indication that paper SODA proceedings are a thing of the past, which is good. But still... Who decided this all of a sudden, with no warning and no community discussion?
Oh, who am I kidding. We all know exactly who decided this with, as usual, no warning and no community discussion.
Update 8/5: I was wrong. According to a PC member:
Actually, it was moses, together with SIAM, that did the 20 page thingie. this after Claire Mathieu suggested it.

My apologies to David. On the other hand, my complaint about lack of warning and community discussion stands.
Posted at 11:30 PM | Permalink | Comments (1)