Geometric algorithms deal with complex objects. The machine-language level cannot handle these objects directly, so the algorithm designer must organize them in terms of the simpler data types that the computer can represent directly. Such organizations are called data structures.

As an example, consider the searching problem: from a given set of *n* objects, find those objects that satisfy some given criterion.

## Naïve Method

Step 1. Scan the entire given set of objects.

Step 2. Check the criterion for each object.

Obviously, this algorithm is linear.
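The two steps above can be sketched directly (a minimal illustration; the point-set and criterion are made up for the example):

```python
# Naive search: scan every object and test the criterion on each.
def naive_search(objects, criterion):
    """Return all objects satisfying the criterion in O(n) time."""
    return [obj for obj in objects if criterion(obj)]

points = [(1, 2), (3, -1), (0, 5), (-2, 4)]
# Example criterion: points lying strictly above the x-axis.
above_x_axis = naive_search(points, lambda p: p[1] > 0)
```

No matter the criterion, the scan touches every object, which is why the running time is linear in *n*.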

A fundamental way of obtaining efficient algorithms in such situations is to organize the data with suitable data structures. These data structures must support the basic operations (namely searching, inserting, and deleting) in sublinear time. Height-balanced binary search trees, such as AVL trees or 2-3-4 trees, allow us to perform these basic operations in O(log *n*) time.
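Python's standard library has no height-balanced search tree, but its `bisect` module illustrates the O(log *n*) search that such trees provide over sorted data (note that insertion into a Python list still costs O(*n*), so a true AVL or 2-3-4 tree is needed for fully sublinear updates):

```python
import bisect

keys = sorted([17, 3, 42, 8, 25])   # sorted order enables binary search

def contains(sorted_keys, x):
    """O(log n) membership test via binary search."""
    i = bisect.bisect_left(sorted_keys, x)
    return i < len(sorted_keys) and sorted_keys[i] == x
```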

Most computational geometry algorithms rely on a few general algorithmic strategies. In this section, we discuss three well-known strategies and illustrate each with a specific problem.

## Incremental Strategy

The basic idea of the incremental strategy is as follows. First, take a subset of the input small enough that the problem can be solved easily. Then add the remaining input elements one by one, maintaining the solution at each step.

A well-known example of this strategy is the incremental convex-hull algorithm.
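One common formulation of this idea is sketched below (Andrew's monotone-chain variant: the points are pre-sorted by *x*-coordinate, then inserted one at a time while the hull's convexity is restored after each insertion; a sketch, not the only incremental formulation):

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def incremental_hull(points):
    """Build the convex hull by inserting points one by one in x-order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:                      # add each point incrementally
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()                # restore convexity after insertion
        lower.append(p)
    upper = []
    for p in reversed(pts):            # same idea for the upper chain
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate the chains, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]
```

With the points already sorted, each point is pushed and popped at most once, so the insertion phase runs in O(*n*) time after the O(*n* log *n*) sort.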

## Divide-and-Conquer Strategy

The divide-and-conquer strategy relies on a recursive approach and has two major steps.

**Top-down step.** In this step, the input is recursively divided until the sub-problems are small enough to be solved easily.

**Bottom-up step.** This step consists of recursively merging the solutions of the sub-problems.

Sorting algorithms such as merge-sort and quick-sort are typical examples of this technique. In computational geometry, a classic example of this strategy is the Half-plane Intersection problem.
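Merge-sort makes both steps concrete: the top-down step splits the array in half, and the bottom-up step merges the two sorted halves in linear time (a minimal sketch):

```python
def merge_sort(a):
    """Divide-and-conquer sort: split top-down, merge bottom-up."""
    if len(a) <= 1:                    # atomic leaf: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # top-down: solve each half
    right = merge_sort(a[mid:])
    # Bottom-up: merge the two sorted halves in O(n) time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```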

A divide-and-conquer algorithm can be thought of as a binary tree, where the leaves correspond to "atomic" operations and the internal nodes to merge operations.

The analysis of any algorithm that follows the divide-and-conquer strategy goes like this: split the problem of size *n* into two problems of size *n*/2, and assume that the merge step can be done in linear time, i.e., O(*n*) time. The recurrence relation is then

*T*(*n*) = 2 *T*(*n*/2) + *cn*,

where *c* is a constant cost factor. After unrolling the recurrence for *i* steps, its cost is

*T*(*n*) = 2^{i} *T*(*n*/2^{i}) + *cni*

Now let *k* = log *n* (logarithm base 2) be the number of times *n* can be halved before reaching 1 (note that *k* is the depth of the binary tree). At step *k*, the above recurrence becomes

*T*(*n*) = 2^{k} *T*(*n*/2^{k}) + *cnk*

= *n T*(*n*/2^{k}) + *cn* log *n*

Now assume that once a sub-problem is small enough, it can be solved in constant time. Therefore, *T*(*n*/2^{k}) = *T*(1) = 1.

The final complexity is

*T*(*n*) = O(*n* log *n*),

because the O(*n* log *n*) merge cost dominates the O(*n*) cost of processing the leaves.
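The closed form can be checked numerically: with unit costs (*c* = 1 and *T*(1) = 1), the recurrence *T*(*n*) = 2*T*(*n*/2) + *n* evaluates to exactly *n* log *n* + *n* when *n* is a power of two (a small verification sketch):

```python
import math

def T(n, c=1):
    """Evaluate T(n) = 2*T(n/2) + c*n directly (n a power of two)."""
    if n == 1:
        return 1                      # base case: constant-time leaf
    return 2 * T(n // 2, c) + c * n

# Closed form derived above: T(n) = c*n*log2(n) + n*T(1)
n = 1024
closed_form = n * math.log2(n) + n    # 1024*10 + 1024 = 11264
```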

## Sweep Technique

The sweep technique (plane sweep in two dimensions and space sweep in three dimensions) reflects the "unique and natural" spirit of some geometric algorithms. Unlike the incremental and divide-and-conquer strategies, this technique is considered a purely computational-geometry technique. The basic idea of the sweep-line strategy is as follows: decompose the geometric input into vertical strips, such that the information relevant to the problem is located on the vertical lines that separate two strips. By "sweeping" a vertical line from left to right, stopping at the strip boundaries, one can maintain and update the information needed to solve the problem.

A well-known example of this technique is the algorithm by Bentley and Ottmann, which computes the intersections of *n* line segments in the plane by moving a vertical line across the plane.
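The Bentley-Ottmann algorithm itself needs an event queue and a balanced-tree status structure, which is too long to sketch here. As a minimal one-dimensional illustration of the same sweep idea (a deliberate simplification, not their algorithm), the following sweeps across interval endpoints, treating each endpoint as an event that updates the "status" of the sweep:

```python
def max_overlap(intervals):
    """Sweep left-to-right over endpoints; return the maximum number
    of intervals that overlap at any point."""
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # event: an interval becomes active
        events.append((hi, -1))   # event: an interval ends
    # Sort events by position; at ties, -1 sorts before +1, so
    # intervals that merely touch are not counted as overlapping.
    events.sort()
    active = best = 0
    for _, delta in events:       # "stop" at each event point
        active += delta           # update the sweep status
        best = max(best, active)
    return best
```

The status here is just a counter; in Bentley-Ottmann the status is the ordered set of segments currently crossing the sweep line.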
