This technique applies to lower-bound arguments for problems solved by making comparisons.

**Useful Definitions**

- One way to represent the working of an algorithm is a decision tree, a rooted binary tree (a binary tree of height k has at most 2^{k} leaves).
- Each internal node of the decision tree contains a "test" on the data.
- Each leaf, or verdict, contains an output.
- Conversely, any such decision tree can be thought of as an algorithm.
- Decision trees can also be used to analyze the complexity of a problem on the average rather than in the worst case.

**The Complexity of Comparison-based Sorting**

**Problem:** What is the minimum number of comparisons needed to sort n items?

The only operation allowed on the items to be sorted is to compare them pairwise to determine whether they are equal and, if not, which is the greater. We use decision trees.

A decision tree for sorting A, B, and C (Figure 12.3, p. 418, Brassard).

Every valid decision tree for sorting n items gives a sorting algorithm. For example, the algorithm corresponding to the decision tree above is as follows.

**SORTING (A[1 . . 3])**

    IF A[1] < A[2]
        THEN IF A[2] < A[3]
                THEN {already sorted}
                ELSE IF A[1] < A[3]
                    THEN S = A[1], A[3], A[2]
                    ELSE S = A[3], A[1], A[2]
        ELSE IF A[2] < A[3]
                THEN IF A[1] < A[3]
                    THEN S = A[2], A[1], A[3]
                    ELSE S = A[2], A[3], A[1]
                ELSE S = A[3], A[2], A[1]
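The decision tree transcribes directly into runnable code. The following Python sketch (the name `sort3` is mine) makes exactly the comparisons of the tree, two or three per input:

```python
from itertools import permutations

def sort3(a):
    """Sort a 3-element sequence by the decision tree above."""
    x, y, z = a[0], a[1], a[2]
    if x < y:
        if y < z:
            return [x, y, z]       # already sorted
        elif x < z:
            return [x, z, y]
        else:
            return [z, x, y]
    else:
        if y < z:
            if x < z:
                return [y, x, z]
            else:
                return [y, z, x]
        else:
            return [z, y, x]

# Verify the tree against all 3! = 6 input orders
assert all(sort3(list(p)) == sorted(p) for p in permutations([1, 2, 3]))
```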

Similarly, to every deterministic comparison-based sorting algorithm there corresponds a decision tree.

For example

**INSERTION-SORT (A[1 . . n])**

    For i = 2 to n
        do key = A[i]
           j = i - 1
           while j > 0 and key < A[j]
               do A[j + 1] = A[j]
                  j = j - 1
           A[j + 1] = key
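A runnable version of the same algorithm, as a Python sketch with 0-based indices:

```python
def insertion_sort(a):
    """In-place insertion sort; the comparisons are the `key < a[j]` tests."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and key < a[j]:
            a[j + 1] = a[j]   # shift larger items right
            j -= 1
        a[j + 1] = key        # final placement of key
    return a
```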

The corresponding decision tree for sorting three items is shown in Figure 12.4, p. 420, Brassard.

**HEAPSORT (A)**

    BUILD-HEAP (A)
    For i = length[A] down to 2
        do exchange A[1] ↔ A[i]
           heap-size[A] = heap-size[A] - 1
           HEAPIFY (A, 1)

**BUILD-HEAP (A[1 . . n])**

    heap-size[A] = length[A]
    For i = floor(length[A]/2) down to 1
        do HEAPIFY (A, i)

The corresponding decision tree for sorting three items is shown in Figure 12.5, p. 421, Brassard.

**HEAPIFY (A, i)**

    l = LEFT(i)
    r = RIGHT(i)
    if l ≤ heap-size[A] and A[l] > A[i]
        then largest = l
        else largest = i
    if r ≤ heap-size[A] and A[r] > A[largest]
        then largest = r
    if largest ≠ i
        then exchange A[i] ↔ A[largest]
             HEAPIFY (A, largest)
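Putting the three routines together gives a runnable sketch in Python. With 0-based indices, LEFT(i) = 2i + 1 and RIGHT(i) = 2i + 2, and heap-size is passed explicitly:

```python
def heapify(a, i, heap_size):
    """Sift a[i] down within a[0:heap_size] to restore the max-heap property."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = l if l < heap_size and a[l] > a[i] else i
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, largest, heap_size)

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # BUILD-HEAP
        heapify(a, i, n)
    for end in range(n - 1, 0, -1):       # extract max repeatedly
        a[0], a[end] = a[end], a[0]
        heapify(a, 0, end)
    return a
```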

Since we are sorting three items, every such decision tree must have height at least 3.

The reason why any correct comparison algorithm must be able to produce at
least six different verdicts (leaves) when n = 3 is that this is the number of
permutations of three items (3! = 3 × 2 × 1 = 6).

More generally, any valid decision tree for sorting n items must contain at
least n! leaves, and thus it must have height at least ceil(lg n!) and average
height at least lg n!.
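This information-theoretic bound is easy to evaluate numerically. A Python sketch (the name `min_comparisons` is mine):

```python
import math

def min_comparisons(n):
    """Lower bound on worst-case comparisons: ceil(lg n!)."""
    return math.ceil(math.log2(math.factorial(n)))

# n = 3: 3! = 6 leaves, so at least ceil(lg 6) = 3 comparisons
assert min_comparisons(3) == 3
# n = 5: lg 120 ≈ 6.91, so at least 7 comparisons
assert min_comparisons(5) == 7
```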

This translates directly into the complexity of sorting: any algorithm that sorts n
items by comparisons must make at least ceil(lg n!) comparisons in the worst
case, and lg n! on average, provided all verdicts (leaves) are equally likely.

Since each comparison takes at least some constant time, and since lg n! =
Θ(n lg n), it follows that sorting n items by comparisons takes time in
Ω(n lg n), both in the worst case and on average.
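A quick numerical check of lg n! = Θ(n lg n), as a Python sketch: `math.lgamma(n + 1)` returns ln n!, which avoids computing the huge factorial, and the ratio lg n! / (n lg n) climbs slowly toward 1 as n grows.

```python
import math

for n in (10, 100, 10_000, 1_000_000):
    lg_fact = math.lgamma(n + 1) / math.log(2)   # lg(n!) via ln(n!)
    print(n, round(lg_fact / (n * math.log2(n)), 3))
```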