Robotics Research
Technical Report

Symbolic Treatment
of Geometric Degeneracies


C. Yap

Technical Report No. 344

Robotics Report No. 138

February, 1988




New York University
Courant Institute of Mathematical Sciences

Computer Science Division

251 Mercer Street, New York, N.Y. 10012

Work supported in part by NSF grants #DCR-84-01898 and #DCR-84-01633.

Symbolic Treatment of Geometric Degeneracies

Chee-Keng Yap*

Courant Institute of Mathematical Sciences
New York University

251 Mercer Street
New York, NY 10012.

January 8, 1988

*Supported in part by NSF grants #DCR-84-01898 and #DCR-84-01633.


Many descriptions of algorithms in computational geometry exclude degeneracies by fiat. Prac-
titioners are left to their own devices for dealing with degeneracies when implementing such
algorithms. Since degeneracies tend to be numerous and hard to enumerate exhaustively, this
is often a reason for not implementing theoretical algorithms. This paper proposes a powerful
symbolic scheme for treating degeneracies. Our method is simple to use, and is applicable to
a wide variety of problems in computational geometry (in particular, whenever random pertur-
bations are applicable). Our method is deterministic but is as efficient as probabilistic schemes.
Illustrations, limitations and wider issues are discussed.


1 Introduction

The theoretical study of algorithms in computational geometry is an active area. Besides the
inherent beauty arising from the interplay of geometrical and algorithmic properties, this area
holds the promise of impact on important application areas such as robotics, graphics and VLSI.
Unfortunately, the reduction of theoretical algorithms to practice has been relatively slow and
this has often been commented on. In our view, two fundamental issues must be addressed in
order to speed up this 'technology transfer'.

Fixed precision arithmetic. Theoretical algorithms assume an exact (arbitrary precision)
model of numerical computation. Its implications in the world of fixed precision computa-
tions are not fully understood.

Data degeneracy. Theoretical algorithms are often described for the 'non-degenerate' cases
of the inputs. Sometimes, even careful attempts at capturing all degenerate cases leave
hard-to-detect gaps.

Both problems are a deep source of frustration for practitioners who often find mysterious fail-
ures in their algorithms. We firmly believe that theoreticians are justified in making these two
assumptions as long as their goal is the understanding of the global, combinatorial structure of
problems. However, this would not justify a continuing neglect of both these issues. Both raise
extremely interesting questions in their own right. Partly because of such neglect, there seems
to be a credibility gap between theoreticians and implementors: the latter often view theoretical
algorithms with suspicion.

Theoreticians have begun to address these questions. For instance, the recent paper [11]
addresses the fixed precision issue in the context of computer graphics. It is important to realize
that although fixed precision arithmetic and data degeneracy are related, they are distinct issues.
In this paper we deal with the latter. The symmetry breaking rules in simplex algorithms
(e.g. [2]) are the precursors of symbolic treatment of data degeneracies. In computational
geometry, Edelsbrunner and his students are among the first to publish solutions to the problem
of degeneracies [6,8,7] (see chapter 9.4, [9]). Our independently discovered scheme will turn out
to be a generalization and simplification of their method.

Degeneracy in computational geometry is a general phenomenon. So in what sense can we
justify its neglect in theoretical algorithms? One justification is that explicit handling of degen-
eracies obscures the centrality of the non-degenerate cases: degenerate cases normally involve
an overwhelming number of cases that are disproportionate to their likelihood of occurrence.
But an implementor of these algorithms must handle the degeneracies when they do arise. Most
authors are correct in suggesting that a random perturbation of data will remove degeneracies
with high probability. Such a suggestion is justified if the problem is stable, as the case generally
turns out to be. However, an explicit demonstration that the problem is stable is seldom done.


In contrast to the suggestion of random perturbation, it is occasionally suggested that the al-
gorithmic description ought to carefully work out all the degenerate cases. We take the opposite
position that this is, in general, inadvisable: it is neither illuminating for a global understanding
of the algorithm nor is it in the interest of the implementor who would then have to implement
the numerous degenerate cases. Besides, the increase in the number of cases may lead to other
programming errors or incomplete theoretical analysis. The approach advocated in this paper
is to work out some general scheme to achieve suitable data perturbation. In this way, theo-
reticians can continue to focus on the interesting non-degenerate cases while implementors can
enjoy the benefits of algorithms that have few cases and yet can handle all conceivable inputs.
(Paraphrased: implementors can now join the theoretical paradise in which degeneracies are
abolished.) Thus the main contribution of this paper is a practical and general scheme achieving
these goals.

Overview. The rest of the paper is organized as follows. Section 2 explores the meaning of
degeneracy and related concepts. Section 3 reviews probabilistic perturbation schemes and points
out unsatisfactory properties. To illustrate the symbolic approach, we outline Edelsbrunner's
method in section 4. The new scheme appears in section 5. In section 6, we explore the concept
of ordered rings implied by our scheme. Section 7 investigates the computational and complexity
questions arising in implementing our scheme. We conclude with some directions for future work
in section 8.

2 What is geometric data degeneracy?

How can a general solution as proposed in the introduction be achieved? We first need an
understanding of what we mean by degeneracies. In this paper, we assume that the problem
input is a sequence of real numbers called the input parameters a = (a_1, . . ., a_n). An input is
degenerate if some polynomial p_j(x_1, . . ., x_{k_j}) in a fixed set

D = {p_j(x_1, . . ., x_{k_j}) : j = 1, 2, . . ., m and k_j >= 1},   (m >= 1)

evaluates to zero when an allowable substitution for the variables (x_1, . . ., x_{k_j}) is made using
the a_i's. Here the set D, called the test polynomials, depends only on the algorithm and not on
the inputs. Although D is a finite set here (the usual case in practice), our method works as well
if D were infinite. The 'allowable substitution' of variables in each test polynomial by input
parameters is dictated by the problem. We refrain from making this more formal, but this will
be easy to do (for each particular case) once the following examples are understood.

Examples of degeneracy. When input a represents a set of points in the Euclidean plane,
degeneracies include (i) two coincident points, (ii) three collinear points, or (iii) four cocircular
points. If the input represents a set of lines, common notions of degeneracy include (iv) a vertical
line, (v) two parallel lines, (vi) two perpendicular lines, or (vii) three concurrent lines. If the input
represents points and lines, degeneracy may include (viii) a point lying on a line, or (ix) a line
parallel to the line through two points. This list can go on. The 'allowable substitution' above
may only amount to typing the variables in D and the input parameters so that substitutions
must respect the type distinctions. For instance, some variables and parameters correspond to
the first coordinate of points and others correspond to the slope of lines, etc. More concretely,
suppose the input parameters are a_1, b_1, a_2, b_2, . . ., a_n, b_n representing n points in the plane, and
let the set of test polynomials consist of only one polynomial,

Δ(x_1, y_1, x_2, y_2, x_3, y_3)

which is just the 3 by 3 determinant that tests if the points (x_1, y_1), (x_2, y_2) and (x_3, y_3) are
collinear (cf. section 4). Allowable substitution here means that (1) the substitution of the
input parameters must be in pairs (i.e. for all i, (a_i, b_i) must be substituted simultaneously
for some (x_j, y_j)), and that (2) the three input points to be substituted must have distinct subscripts.
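The collinearity test just described can be sketched concretely (a minimal illustration; the function name is ours, not the paper's):

```python
def collinear(p1, p2, p3):
    """Sign of the 3-by-3 determinant | x1 y1 1 ; x2 y2 1 ; x3 y3 1 |.

    Zero means the three points are collinear (a degenerate input);
    otherwise the sign gives the orientation of the triple.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return (det > 0) - (det < 0)  # -1, 0, or +1

# Allowable substitution: coordinates enter in (a_i, b_i) pairs,
# and the three substituted points must have distinct subscripts.
assert collinear((0, 0), (1, 1), (2, 2)) == 0   # degenerate input
assert collinear((0, 0), (1, 0), (0, 1)) != 0   # non-degenerate input
```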

Exact model of numerical computation. It is important to realize that this paper
assumes the exact model of numerical computation. All numbers are represented exactly: for
example, to represent any real algebraic number α exactly, it suffices to specify a polynomial
p(x) with integer coefficients together with an interval I containing α but no other distinct roots
of p(x). The end points of I are exact, say represented by rational numbers. Our development is
not restricted to any particular exact representation. Note that since the integers involved (as in
the coefficients of p(x), or in representing I) can be arbitrarily long, we sometimes call this the
arbitrary precision model (or, somewhat misleadingly, the 'infinite precision model'). This contrasts
with the fixed precision world of the numerical analysts. With the advent of computer algebra,
the exact models are becoming more important and indeed unavoidable for some applications.

We have stated that the issues of fixed precision and data degeneracy are distinct. Indeed,
any attempt to consider data degeneracy in the fixed precision models faces some very difficult
problems: for instance, there are many puzzles in just trying to define what it means for three
points to be collinear in the fixed precision model (actually, the world of pixels). In any case,
one should begin by understanding degeneracy in exact models.

Degree of derivation and derived degeneracies. In the course of executing algorithms,
derived values b = (b_1, b_2, . . .) may be generated from the input parameters a. Let us define the
'degree of derivation' of b to be the least integer d >= 1 such that for each b_i in b there is a
polynomial p(x, x_1, . . ., x_k) with integer coefficients, whose degree in each variable is at
most d, such that if each x_j (j = 1, . . ., k) is substituted by a suitable value γ_j from a, then b_i
is a root of p(x, γ_1, . . ., γ_k). For instance, if the input represents points, then b may be the
computed distances between pairs of points, and the degree of derivation is 2. If this degree d
of derivation does not grow with the input size n, we say that the algorithm has bounded degree
of derivation. A problem is of bounded degree if it has an algorithm with bounded degree of
derivation. Examples of bounded degree problems include computing convex hulls or
Voronoi diagrams. Examples of problems with unbounded degree include shortest path problems
and root isolation (or more generally, cell decomposition). Our method, to be described, does not
allow the substitution of derived values with derivation degree more than 1 (this is really the
same as allowing substitution by input parameters only), but see section 8 for discussion.

Our definition of degeneracy above can be generalized thus: we say that the input parameters
have derived degeneracies if substitution by derived values into the test polynomials of D results
in a zero value.
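As a concrete sketch (the defining polynomial below is our assumption of what is intended, written out explicitly): the derived distance between two input points is a root of p(x, x_1, y_1, x_2, y_2) = x^2 − (x_1 − x_2)^2 − (y_1 − y_2)^2, whose degree in each variable is 2.

```python
import math

def distance_defining_poly(x, x1, y1, x2, y2):
    # Degree at most 2 in every variable, so the derived distance
    # has degree of derivation 2 in the sense defined above.
    return x * x - (x1 - x2) ** 2 - (y1 - y2) ** 2

d = math.dist((0, 0), (3, 4))  # derived value: the distance 5.0
assert distance_defining_poly(d, 0, 0, 3, 4) == 0.0
```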

Alternative notions of degeneracy. In a provocative discussion [10], Kender and Freuden-
stein point out that there are several, not necessarily mutually consistent, notions of degeneracy
in the literature. Kender and Freudenstein propose to unify these disparate notions by describ-
ing degeneracy as a 'system relative' concept. Of course, this is a very general formulation,
and our particular notion here is 'system relative' to the extent that the particular set of test
polynomials D depends on the algorithm.

Degeneracies are often depicted as rare events. In computer vision, there are notions of
degeneracy that belie this view, as shown by this example from [10]. Imagine a very squat
pyramid and a view of the pyramid from 'below'. This view shows only the base of the pyramid
and would be considered a 'degenerate view' in certain contexts of vision research. Yet, this
bottom view is hardly 'rare' by any reasonable definition (in the sense of geometric measures).
Perhaps it is better to call such views 'deficient' (since they give inadequate information) rather
than degenerate.

Perhaps a more pertinent illustration that the 'degeneracies as rare events'
view requires careful interpretation is this: when dealing with highly structured scenes or robot
environments, certain 'degeneracies' such as parallel lines or collinear points are features rather
than accidents of the input space. For instance, in descriptions of a robot environment in a
factory, we expect parallel lines to be a common feature. The explanation lies in realizing that
we normally assume that the input space (for a fixed input size of n) is the set of all n-tuples
of (exactly representable) real numbers. It may happen that the input space is a proper subset
of all possible n-tuples (usually, a submanifold); 'rarity of degeneracies' must be relative to this
manifold. However, our method as it stands assumes the full input space.

Problem stability. The fundamental assumption in this paper is that with the 'rare event'
view of degeneracies, we can perturb away the degeneracy. A crucial but often implicit require-
ment is that the problem at hand defines a function from the input space to the output space that
is 'continuous' in this sense: a solution to a perturbed version of the input data is a reasonable
approximate solution for the original data. Then we are indeed justified in using a solution to the
perturbed input as the final output. We call such problems stable. Since continuity arguments
are involved, stability is relative to the choice of topologies on the spaces concerned.

Example of instability/stability. Consider the problem of constructing the Voronoi dia-
gram of a set of planar points (sites). We would like to show that this problem is stable. Let us
assume that certain four cocircular sites cause the Voronoi diagram to have a Voronoi vertex of
degree 4. Any perturbation of the input that removes the cocircularity of these four sites causes


the Voronoi vertex to split into two very close Voronoi vertices of degree 3 each. There
are two combinatorially distinct ways in which this split can occur. Clearly the combinatorial
structure of the perturbed Voronoi diagram is different from the original Voronoi diagram. So, in
the combinatorial sense, the problem would not seem stable. Another attempt at trying to show
that the problem is stable is this: use the Hausdorff metric on closed point sets. Unfortunately
a small perturbation may introduce an infinite Hausdorff distance between the two Voronoi di-
agrams. [Consider the diagram of two points p = (−1, 0) and q = (+1, 0) and then perturb one
of the points to p′ = (−1, δ) for any δ ≠ 0: the perturbed bisector is no longer vertical, so its
Hausdorff distance from the original vertical bisector is infinite.]

Now suppose our goal is to use this Voronoi diagram to compute an obstacle-avoiding path
for a unit disc between two specified points P and Q, viewing these sites as obstacles. (It follows
from [15] that to find such a path, it is sufficient to look for one path in which the center of
the disc lies on the Voronoi diagram.) Let the set of n sites be represented by a ∈ E^d (where
d = 2n and E^d is the Euclidean d-dimensional space). It is natural to measure the 'connection
width' between any two points P and Q in the plane by the quantity C(P, Q; a) >= 0, defined
as the maximum value attained by the clearance¹ of some path from P to Q in the presence of
obstacles defined by a. Clearly, if C(P, Q; a) < 1 then the unit disc has no obstacle-avoiding
path connecting P and Q. Then it is easy to show:

Proposition 1 For any δ > 0 there is an ε > 0 such that for all a, b ∈ E^d, and for all points P
and Q, if C(P, Q; b) > δ and ||a − b|| < ε then |C(P, Q; a) − C(P, Q; b)| < δ. Here ||a|| denotes
the Euclidean norm.

Such a stability or continuity property justifies the perturbation of input a in this appli-
cation. This illustrates the kind of justification that must logically precede any application of
perturbation methods.

Induced and inherent degeneracy. We distinguish between inherent degeneracy of the
input data and algorithm-induced degeneracy. For example, if the input represents points for
a convex hull algorithm, it is apparent that three consecutive collinear vertices on the convex
hull ought to be an inherent degeneracy of the convex hull problem. Now if the algorithm
uses some kind of vertical partitioning of the input set of points, or uses the vertical sweepline
paradigm, then two co-vertical points may be regarded as a degeneracy. These degeneracies
are easily removed in this case, either by modifying the algorithm or by perturbing the data
deterministically. In any case, they are degeneracies induced by the algorithm. This paper
is concerned with algorithm-induced degeneracy. It seems that normally, induced degeneracies
subsume inherent degeneracies.

¹The clearance of a point x is its distance from the closest site; the clearance of a path is the minimum clearance
among points along the path.


3 Probabilistic schemes

In this paper, we assume a computational model with the following property. There is a fixed
set D of polynomials, independent of the input, such that

(*) The algorithm only makes decision steps based on the sign of polynomials p(x) ∈
D evaluated at x = (x_1, . . ., x_k) := (a_{j_1}, . . ., a_{j_k}) = b, where each b is an (allowable)
substitution from the input parameters a = (a_1, . . ., a_n). The algorithm then makes
a 3-way branch depending on the sign of p(b), with the case p(b) = 0 considered to
be degenerate.

The goal is to devise a data-perturbation method so that the algorithm never takes a degenerate
branch. It is instructive to first review probabilistic methods for data perturbation often alluded
to in the literature.

Perhaps the simplest solution is to arbitrarily choose either p(b) < 0 or p(b) > 0 whenever
p(b) = 0 is encountered. This method suffers from the problem of global consistency: how can
transitivity (i.e., p(b) > q(b) and q(b) > r(b) implies p(b) > r(b)) be maintained without
expensive bookkeeping?

The initial perturbation scheme overcomes the consistency problem by preceding the entire
computation with an initial random perturbation of the input data. This perturbation ought to
ensure that:

(1) No new degeneracies arise as a result of the perturbation.

(2) The original degeneracies are all removed.

Property (1) can be satisfied by computing some a priori upper bound on the size of the
perturbation. Property (2) seems more difficult to ensure. Granted that with very high probability
no degeneracy remains, the issue remains as to what the algorithm must do when a degeneracy
does arise. The simplest recovery is to restart the algorithm with another random perturbation.
This, in principle, can repeat indefinitely. Thus, although the probabilistic overhead complexity
of the initial perturbation scheme is small, its deterministic complexity is unbounded. This is
unsatisfactory from a theoretical viewpoint.
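The restart loop just described can be sketched as follows (run_algorithm, DegenerateBranch and the bound parameter are hypothetical names for this illustration, not part of the paper):

```python
import random

class DegenerateBranch(Exception):
    """Raised when some test polynomial evaluates to exactly zero."""

def solve_with_initial_perturbation(inputs, run_algorithm, bound):
    """Initial-perturbation scheme: perturb once, restart on degeneracy.

    `bound` is an a priori bound keeping the perturbation small enough
    that no new degeneracy is introduced (property (1)); property (2)
    holds only with high probability, hence the restart loop, whose
    deterministic complexity is unbounded.
    """
    while True:
        perturbed = [a + random.uniform(-bound, bound) for a in inputs]
        try:
            return run_algorithm(perturbed)
        except DegenerateBranch:
            continue  # retry with a fresh random perturbation
```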

4 Simulation of Simplicity

We now turn to symbolic perturbation schemes. To illustrate, we briefly review a version of
a scheme (called simulation of simplicity, or SoS for short) described in [6,8,7,9]. The
scheme is normally carried out in the setting of computing hyperplane arrangements, where the
test polynomials are determinants. For illustration, say a set H of lines in the plane is simple if
(i) no three lines are concurrent, (ii) no two lines are parallel, (iii) no two pairs of lines intersect
on a common vertical line, and (iv) no two pairs of lines intersect on a line parallel to a line
in H. Each of these conditions corresponds to the vanishing of a suitable determinant over the
input parameters. Let a line h_i ∈ H be given by y = a_i x + b_i. Now replace each input line h_i
by h_i(ε) with equation y = a_i(ε) x + b_i(ε) where

a_i(ε) = a_i + ε^(2^i) ,   b_i(ε) = b_i + ε^(2^(n+i))

It is then shown that if ε > 0 is sufficiently small, then the new arrangement H(ε) is simple.
However, instead of substituting actual values for ε, we carry out the calculation symbolically.
For example, to decide if the intersection h_i(ε) ∩ h_j(ε) lies above the line h_k(ε), we must
determine the sign of

                ( a_i(ε)  b_i(ε)  1 )
   Δ(ε) = det   ( a_j(ε)  b_j(ε)  1 )
                ( a_k(ε)  b_k(ε)  1 )

It is then observed that evaluating the sign of Δ(ε) amounts to expanding Δ(ε) in powers of ε:
the coefficients of the expansion are subdeterminants of the original matrix, and they are
evaluated in order until the first non-zero one is found, whose sign is the sign of Δ(ε).
We will show that this is actually a general phenomenon. One may regard each input pa-
rameter a_i to be perturbed by an infinitesimal amount δ_i (in the preceding illustration, δ_i is
the corresponding power of ε). Furthermore there is a suitable fixed total ordering on the set of
infinitesimals and their products. For instance, without loss of generality, we may assume

δ_1 < δ_2 < · · · < δ_i < · · · < 1.

In the next section, we will give a systematic framework for making such comparisons. We note
that [7] also uses a different infinitesimal for each variable, but they did not have to give a
general rule for comparing products of infinitesimals since they restricted attention to the
evaluation of determinants only.
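Generically, the 'first non-zero subdeterminant' rule amounts to the following: the sign of a quantity c_0 + c_1 ε + c_2 ε^2 + · · ·, for ε > 0 infinitesimally small, is the sign of its first non-zero coefficient (in SoS the coefficients happen to be subdeterminants). A sketch, assuming the expansion is given as a coefficient list:

```python
def sign_of_perturbed(coeffs):
    """Sign of c0 + c1*eps + c2*eps**2 + ... for infinitesimal eps > 0.

    For eps small enough, the lowest-order non-vanishing term dominates
    all later ones, so its sign decides the sign of the whole sum.
    """
    for c in coeffs:
        if c != 0:
            return 1 if c > 0 else -1
    return 0  # the zero polynomial: degenerate even after perturbation

assert sign_of_perturbed([0, 0, -3, 5]) == -1  # decided by the -3*eps**2 term
assert sign_of_perturbed([2, -100]) == 1       # the eps term cannot override c0
```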

Another remark is that it is possible to justify such uses of infinitesimals in terms of non-
standard analysis.


5 A general scheme to avoid zeroes

We describe a procedure to evaluate any polynomial p(x) at any value x:=a. The procedure
outputs the value p(a). However, in case p(x) is a non-zero polynomial and p(a) = 0, then the
output is one of two types of zeroes: 0- or 0+. This sign information will be globally consistent
in a natural sense. Such a procedure can be used as a black-box by any algorithm satisfying
assumption (*) above to always avoid the degenerate branch of a decision step.

Observe that this scheme also gives us a method of comparing the values of two distinct
polynomials p, q at any fixed point x = a. More precisely, if the sign of the difference polynomial
p − q at x = a is positive then we say p(a) > q(a); otherwise p(a) < q(a). Extending this, it
means that we can strictly order any set of distinct polynomials {p_1, . . ., p_s} by their 'values' at
any point x := a.
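On top of such a black-box sign procedure, the comparison can be sketched as follows (signbox is a hypothetical stand-in for the procedure of this section; the naive tie-break used for testing is not globally consistent, which is precisely why the symbolic scheme is needed):

```python
def compare_at(p, q, a, signbox):
    """Strictly order p(a) and q(a) via the sign of (p - q)(a).

    `signbox(r, a)` is assumed to return +1 or -1, mapping the symbolic
    zeroes 0+ and 0- to +1 and -1, so the resulting order on any set of
    distinct polynomials is strict.
    """
    diff = lambda x: p(x) - q(x)
    return '>' if signbox(diff, a) > 0 else '<'

# Plain-arithmetic stand-in that breaks exact ties as 0+:
def naive_signbox(r, a):
    return 1 if r(a) >= 0 else -1

p = lambda x: x * x        # p(2) = 4
q = lambda x: 3 * x - 2    # q(2) = 4: an exact tie, resolved as 0+
assert compare_at(p, q, 2, naive_signbox) == '>'
```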

As in [14], let PP = PP(x_1, . . ., x_n) denote the set of all power products

w = ∏_{i=1}^{n} x_i^{e_i}   (e_i >= 0).

Let |w| denote ∑_{i=1}^{n} e_i. A total ordering <= on PP is admissible if for any w, w′, w″ ∈ PP,

1. 1 <= w

