Anyway here's a partial list of geometry/topology papers. Enjoy!
- Kevin Buchin and Wolfgang Mulzer. Delaunay Triangulations in O(sort(n)) and Other Transdichotomous and Hereditary Algorithms in Computational Geometry.
We present several results about Delaunay triangulations (DTs) and convex hulls in transdichotomous and hereditary settings: (i) the DT of a planar point set can be computed in time O(sort(n)) on a word RAM, where sort(n) is the time to sort n numbers; (ii) if we know the ordering of a planar point set in x- and in y-direction, its DT can be found by a randomized algebraic computation tree of expected linear depth; (iii) given a universe U of points in the plane, we construct a data structure D for Delaunay queries: for any subset P of U, D can find the DT of P in time O(|P| log log |U|); (iv) given a universe U of three-dimensional points in general convex position, there is a data structure D for convex hull queries: for any subset P of U, D can find the convex hull of P in time O(|P| (log log |U|)^2); (v) given a convex polytope in 3-space with n vertices which are colored with k > 2 colors, we can split it into the convex hulls of the individual color classes in time O(n (log log n)^2). The results (i)--(iii) generalize to higher dimensions. We need a wide range of techniques. Most prominently, we describe a reduction from nearest-neighbor graphs to DTs that relies on a new variant of randomized incremental constructions which allows dependent sampling.
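To make the "Delaunay query" interface of result (iii) concrete, here is a minimal Python sketch of the naive baseline it beats: simply recompute the DT of the query subset P from scratch. This is not the paper's data structure, only the problem setup; scipy's construction is the usual roughly O(|P| log |P|) build.

```python
# Naive baseline for "Delaunay queries" from result (iii): given a fixed
# universe U of planar points, answer a query subset P by recomputing its
# Delaunay triangulation from scratch.  The paper's structure D answers the
# same query in O(|P| log log |U|); this sketch is illustrative only.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
U = rng.random((1000, 2))            # universe of points in the plane

def delaunay_query(subset_indices):
    """Return the simplices of the Delaunay triangulation of P = U[subset_indices]."""
    P = U[subset_indices]
    return Delaunay(P).simplices     # (m, 3) array of triangle vertex indices into P

tri = delaunay_query(np.arange(0, 1000, 7))   # example query: every 7th point
print(tri.shape)
```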
- Saugata Basu and Thierry Zell. Polynomial hierarchy, Betti numbers and a real analogue of Toda's Theorem.
Toda proved in 1989 that the (discrete) polynomial time hierarchy is contained in the class of languages that can be decided by a Turing machine in polynomial time given access to an oracle with the power to compute a function in the counting complexity class #P. This result, which illustrates the power of counting, is considered a seminal result in computational complexity theory. An analogous result in the complexity theory over the reals (in the sense of Blum-Shub-Smale real Turing machines) has been missing so far. In this paper we formulate and prove a real analogue of Toda's theorem. Unlike Toda's proof in the discrete case, which relied on sophisticated combinatorial arguments, our proof is topological in nature. As a consequence of our techniques, we are also able to relate the computational hardness of two extremely well-studied problems in algorithmic semi-algebraic geometry -- namely, the problem of deciding sentences in the first order theory of the reals with a constant number of quantifier alternations, and that of computing the Betti numbers of semi-algebraic sets. We obtain a polynomial time reduction of the compact version of the first problem to the second. This latter result might be of independent interest to researchers in algorithmic semi-algebraic geometry.
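A quick aside on the statement being transported to the real setting: the discrete theorem paraphrased above is usually written as follows (standard complexity-theory notation, not quoted from the paper).

```latex
% Toda's theorem (1989): the polynomial hierarchy is contained in the class of
% languages decidable in polynomial time with access to a #P oracle.
\[
  \mathrm{PH} \;\subseteq\; \mathrm{P}^{\#\mathrm{P}}
\]
```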
- Adam Kalai, Shang-Hua Teng, and Alex Samorodnitsky. Learning Decision Trees From Random Examples: a Smoothed Analysis.
We consider a new model of learning from random examples, motivated by smoothed analysis. Like previous work, we assume that the distribution over unlabeled examples is a product distribution -- the individual bits are chosen independently. However, we consider "typical" product distributions. Formally, we assume that the parameters are chosen with some (constant) amount of randomness, for example from the cube [0.49,0.51]^n. In this model we show how to agnostically learn polynomially-sized decision trees from random examples alone.
We also consider a second model, which is a specific distribution over unlabeled data from which we efficiently learn all O(log n)-depth decision trees. In particular, we consider a distribution where the data exhibits even a small amount of diversity, e.g., symmetrically-distributed data where the fraction of 1's in the attributes spans a constant range such as [.49,.51]. In recent work, Arpe and Mossel (2008) efficiently learn juntas of size O(log(n) log log(n)) in a related model.
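If it helps to picture the smoothed model, here is a small Python sketch of the example-generation process described above: each attribute gets a bias drawn from [0.49, 0.51], and unlabeled examples are then sampled from the resulting product distribution. The target function and everything else below are placeholders of my own, not part of the paper.

```python
# Sketch of the smoothed example-generation model: each attribute i gets a
# bias p_i drawn from the cube [0.49, 0.51]^n, and unlabeled examples are
# product-distributed bits, x_i ~ Bernoulli(p_i).  The learning algorithm
# itself is not shown.
import numpy as np

rng = np.random.default_rng(1)
n = 20                                   # number of attributes
p = rng.uniform(0.49, 0.51, size=n)      # perturbed per-attribute biases

def draw_examples(m, target):
    """Draw m labeled examples (X, y): X from the product distribution,
    labels from a hypothetical target function (e.g. a small decision tree)."""
    X = (rng.random((m, n)) < p).astype(int)
    y = np.array([target(x) for x in X])
    return X, y

# hypothetical stand-in target: a depth-2 decision tree on attributes 0 and 3
X, y = draw_examples(5, lambda x: x[3] if x[0] else 1 - x[3])
print(X, y)
```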
- Peyman Afshani, Lars Arge, and Kasper Dalgaard Larsen. Orthogonal Range Reporting in Three and Higher Dimensions.
In orthogonal range reporting we are to preprocess N points in d dimensions such that the points inside an axis-aligned query box can be found efficiently. This is a fundamental problem in various fields, including spatial databases and computational geometry. In this paper we provide a number of improvements for three and higher dimensional orthogonal range reporting: In the pointer machine model, we improve all the best previous results, some of which have not seen any improvements in almost two decades. In the I/O-model, we give the first polylogarithmic query bounds that exactly match the bounds in the pointer machine model but with logarithms taken to base B; no such results were known in more than three dimensions. We also prove a space lower bound that shows our new structures are space-optimal in the I/O-model. Our main pointer machine data structure uses optimal O(N (log N / loglog N)^{d-1}) space and answers queries in O(log^{d-1}N / (loglog N)^{d-2} + K) time; here K is the size of the output. Our main I/O-efficient structure answers queries in O(Log^{d-1}N / (LogLog N)^{d-2} + K/B) I/Os and uses optimal O(N (Log N / LogLog N)^{d-1}) space, where Log N denotes the logarithm of N taken to base B.
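For readers new to the problem, here is the brute-force baseline these structures improve on: a linear scan that reports every point inside the query box. This is a sketch of the problem only, not of the paper's data structures.

```python
# Brute-force orthogonal range reporting in d dimensions: report every point
# inside an axis-aligned query box.  The scan costs O(N) per query, versus
# the O(log^{d-1} N / (loglog N)^{d-2} + K) query time described above.
import numpy as np

def range_report(points, lo, hi):
    """Return the points p with lo <= p <= hi coordinate-wise."""
    points, lo, hi = np.asarray(points), np.asarray(lo), np.asarray(hi)
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

pts = np.random.default_rng(2).random((100, 3))      # N = 100 points in 3-d
print(range_report(pts, lo=[0.2, 0.2, 0.2], hi=[0.6, 0.6, 0.6]))
```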
- David Arthur, Bodo Manthey, and Heiko Roeglin. k-Means has Polynomial Smoothed Complexity.
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in the number of data points and the reciprocal of the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
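Here is a minimal sketch of the smoothed setup: take an arbitrary point set, perturb it with Gaussian noise of standard deviation sigma, and run the standard Lloyd iterations. The code below is just the textbook method; the paper's contribution is the bound on how many iterations the loop can take in expectation.

```python
# Smoothed-analysis setup for the k-means (Lloyd's) method: perturb an
# arbitrary input with Gaussian noise of standard deviation sigma, then run
# the usual iterations.  The result above bounds the expected iteration count
# by a polynomial in the number of points and 1/sigma.
import numpy as np

def lloyd(X, k, rng, max_iter=10_000):
    centers = X[rng.choice(len(X), k, replace=False)]
    for it in range(max_iter):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # recompute centers (keep the old one if a cluster goes empty)
        new_centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            return it + 1                      # iterations until convergence
        centers = new_centers
    return max_iter

rng = np.random.default_rng(3)
X_adversarial = rng.random((500, 2))           # stand-in for an arbitrary input
sigma = 0.01
X_perturbed = X_adversarial + rng.normal(0, sigma, X_adversarial.shape)
print(lloyd(X_perturbed, k=5, rng=rng), "iterations")
```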
- Peyman Afshani, Jeremy Barbay, and Timothy M. Chan. Instance-Optimal Geometric Algorithms.
We prove the existence of an algorithm A for computing 2-d or 3-d convex hulls that is optimal for every point set in the following sense: for every set S of n points and for every algorithm A' in a certain class C, the maximum running time of A on input s_1,...,s_n is at most a constant factor times the maximum running time of A' on s_1,...,s_n, where the maximum is taken over all permutations s_1,...,s_n of S. In fact, we can establish a stronger property: for every S and A', the maximum running time of A is at most a constant factor times the average running time of A' over all permutations of S. We call algorithms satisfying these properties instance-optimal in the order-oblivious and random-order setting. Such instance-optimal algorithms simultaneously subsume output-sensitive algorithms and distribution-dependent average-case algorithms, and all algorithms that do not take advantage of the order of the input or that assume the input is given in a random order.
The class C under consideration consists of all algorithms in a decision tree model where the tests involve only multilinear functions with a constant number of arguments. To establish an instance-specific lower bound, we deviate from traditional Ben-Or-style proofs and adopt an interesting adversary argument. For 2-d convex hulls, we prove that a version of the well-known algorithm by Kirkpatrick and Seidel (1986) or Chan, Snoeyink, and Yap (1995) already attains this lower bound. For 3-d convex hulls, we propose a new algorithm.
To demonstrate the potential of the concept, we further obtain instance-optimal results for a few other standard problems in computational geometry, such as maxima in 2-d and 3-d, orthogonal line segment intersection in 2-d, finding bichromatic L_\infty-close pairs in 2-d, off-line orthogonal range searching in 2-d, off-line dominance reporting in 2-d and 3-d, off-line halfspace range reporting in 2-d and 3-d, and off-line point location in 2-d.
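To give a feel for one of the problems on that list, here is the textbook O(n log n) sweep for 2-d maxima. It is emphatically not the instance-optimal algorithm of the paper, just one standard baseline for the problem.

```python
# Textbook algorithm for 2-d maxima (one of the problems listed above): a
# point is maximal if no other point dominates it in both coordinates.
# Sort by x descending (ties broken by y descending) and keep points whose
# y beats the running maximum.
def maxima_2d(points):
    maxima, best_y = [], float("-inf")
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:
            maxima.append((x, y))
            best_y = y
    return maxima

print(maxima_2d([(1, 5), (2, 3), (4, 4), (3, 1), (4, 2)]))   # -> [(4, 4), (1, 5)]
```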
- Matt Gibson and Kasturi Varadarajan. Decomposing Coverings and the Planar Sensor Cover Problem.
We show that a k-fold covering using translates of an arbitrary convex polygon can be decomposed into Omega(k) covers (using an efficient algorithm). We generalize this result to obtain a constant factor approximation to the sensor cover problem where the ranges of the sensors are translates of a given convex polygon. The crucial ingredient in this generalization is a constant factor approximation for a one-dimensional version of the sensor cover problem, called the Restricted Strip Cover (RSC) problem, where sensors are intervals of possibly different lengths. Our algorithm for RSC improves on the previous O(log log log n) approximation.
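As a tiny illustration of the covering notion in play, here is a hedged Python sketch (mine, not the paper's) that measures the coverage depth of a point under translates of a convex polygon; a k-fold cover has depth at least k everywhere, and the result above splits the translates into Omega(k) groups, each of which still covers the region on its own. Only the depth check is sketched; none of the paper's machinery appears here.

```python
# Coverage depth under translates of a convex polygon: the depth of a point q
# is the number of translates containing it.  A k-fold cover of a region has
# depth >= k at every point of the region.
import numpy as np

polygon = np.array([(0, 0), (2, 0), (2, 1), (0, 1)])   # a convex polygon (here a rectangle), CCW

def contains(poly, q):
    """Point-in-convex-polygon test: q is inside iff it lies on the same side
    of every (counter-clockwise) edge."""
    p = np.roll(poly, -1, axis=0)
    cross = (p[:, 0] - poly[:, 0]) * (q[1] - poly[:, 1]) - \
            (p[:, 1] - poly[:, 1]) * (q[0] - poly[:, 0])
    return np.all(cross >= 0)

translates = [np.array(t) for t in [(0, 0), (0.5, 0.2), (1, 0.5), (0.2, 0.7)]]

def depth(q):
    return sum(contains(polygon + t, q) for t in translates)

print(depth(np.array([1.0, 0.8])))   # coverage depth of one sample point
```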
- Jonathan Kelner, James Lee, Gregory Price, and Shang-Hua Teng. Higher eigenvalues of graphs.
We present a general method for proving upper bounds on the eigenvalues of the graph Laplacian. In particular, we show that for any positive integer k, the kth smallest eigenvalue of the Laplacian on a bounded-degree planar graph is O(k/n). This bound is asymptotically tight for every k, as it is easily seen to be achieved for planar grids. We also extend this spectral result to graphs with bounded genus, graphs which forbid fixed minors, and other natural families. Previously, such spectral upper bounds were only known for k=2, i.e. for the Fiedler value of these graphs. In addition, our result yields a new, combinatorial proof of the celebrated result of Korevaar in differential geometry.
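The grid example where the bound is tight is easy to poke at numerically. Below is a small Python sanity check (my own, not from the paper) that builds the Laplacian of an r x r planar grid and compares its k-th smallest eigenvalue against k/n.

```python
# Numerical sanity check of the O(k/n) bound on the example where it is tight:
# the k-th smallest Laplacian eigenvalue of an r x r planar grid (n = r^2
# vertices, bounded degree 4) should scale like k/n up to a constant.
import numpy as np

def grid_laplacian(r):
    n = r * r
    L = np.zeros((n, n))
    for i in range(r):
        for j in range(r):
            v = i * r + j
            for di, dj in ((1, 0), (0, 1)):          # right and down neighbours
                if i + di < r and j + dj < r:
                    w = (i + di) * r + (j + dj)
                    L[v, v] += 1; L[w, w] += 1
                    L[v, w] -= 1; L[w, v] -= 1
    return L

r = 20
n = r * r
eig = np.sort(np.linalg.eigvalsh(grid_laplacian(r)))
for k in (2, 5, 10):
    print(k, eig[k - 1], "vs  k/n =", k / n)         # eigenvalues indexed from 1
```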