Heaps

Heaps are slightly obscure data structures that underlie some common higher-level programming constructs, and that may come in handy if you find yourself solving certain classes of problems. (They also come up in certain types of programming interviews, which, to be honest, are the only places I’ve used them directly in the last 15 years.) Once stripped of their obscurity they’re easy to understand, so today I’m going to sketch out a brief overview of the heap data structure.

Logical Structure

Heaps are trees in which each node has a value greater than or equal to that of any of its children. For instance, these are valid heaps:

    8         8         4
   / \       / \       / \
  2   4     4   2     4   4
 /         /               \
1         1                 1

… and these are not:

    2         8         4
   / \       / \       / \
  8   4     1   2     4   1
 /         /               \
1         4                 4

Note that these are “max-heaps”; so-called “min-heaps” flip the rule so that each node has a value less than or equal to that of any of its children. The two structures are logically equivalent: negate every value in a max-heap and you have a min-heap, and vice versa.
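One practical consequence of that symmetry: a min-heap implementation can stand in for a max-heap by negating values on the way in and out. A quick sketch using Python's standard heapq module (which provides only a min-heap):

```python
import heapq

# Simulate a max-heap with heapq's min-heap by storing negated values.
h = []
for v in [2, 8, 4, 1]:
    heapq.heappush(h, -v)

largest = -heapq.heappop(h)  # pops the most negative value, i.e. the max
print(largest)  # 8
```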

Physical Structure

Heaps are usually internally stored as memory arrays, with each element of the array representing a node, and simple arithmetic deriving the indices of a node’s parent and children from its own index. The math is a little different depending on whether the root node is at index 1 (in textbooks) or index 0 (in practice):

# If the root is at index one
def parent_index(n):
    return n // 2
def child_indices(n):
    return (2*n, 2*n + 1)

# If the root is at index zero
def parent_index(n):
    return (n - 1) // 2
def child_indices(n):
    return (2*n + 1, 2*n + 2)
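As a sanity check, the first example tree above flattens (with the root at index 1 and a dummy value at index 0) into the array [None, 8, 2, 4, 1], and the 1-based math recovers its edges:

```python
def parent_index(n):
    return n // 2

def child_indices(n):
    return (2*n, 2*n + 1)

heap = [None, 8, 2, 4, 1]          # first example tree, root at index 1
assert child_indices(1) == (2, 3)  # 8's children are 2 and 4
assert parent_index(4) == 2        # the 1 hangs off the node holding 2
print("index math checks out")
```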

Runtime

The point of a heap is that certain interesting operations have either a constant runtime, or a runtime proportional to the height of the heap, which is in turn proportional to the log of the number of elements (N) in the heap. In particular, the maximum value of a max-heap can be found in O(1), and a new value can be added to a heap in O(log(N)). These properties make heaps useful when merging sorted lists, to give just one (popular) example.
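To make the merging example concrete: Python's heapq module ships a heap-based merge. It keeps one pending entry per input iterable in a small heap, so merging k sorted lists totaling n elements takes O(n log(k)) rather than a full re-sort:

```python
import heapq

# Merge three already-sorted lists by repeatedly popping the smallest head.
merged = list(heapq.merge([1, 4, 9], [2, 3, 10], [0, 5]))
print(merged)  # [0, 1, 2, 3, 4, 5, 9, 10]
```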

Code

As a short demonstration of how a heap works, I present the following Python code snippets. Following the example of the Python heapq module, they use an ordinary Python list to represent the heap, place the root at index 0, and implement a min-heap (in contrast to the max-heap discussed above).

To begin with, consider fix_min_heap(), the backbone of the heap implementation. It takes a heap and the index of a node in that heap. The node’s children (if present) are assumed to be roots of well-formed sub-heaps, so only the node itself might hold an out-of-order value. The function makes the sub-heap rooted at that node well-formed by checking whether the node’s value is larger than either child’s, and pushing it down the heap if so.

def fix_min_heap(h, i):
    l, r = child_indices(i)
    # Find the smallest value among node i and its children
    m = i
    if (l < len(h)) and (h[l] < h[m]): m = l
    if (r < len(h)) and (h[r] < h[m]): m = r
    if m != i:
        # Push the out-of-order value down and repeat
        h[i], h[m] = h[m], h[i]
        fix_min_heap(h, m)
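A quick sanity check of the pushdown (helpers repeated so the snippet runs standalone, using the root-at-zero index math):

```python
def child_indices(n):
    return (2*n + 1, 2*n + 2)  # root at index 0

def fix_min_heap(h, i):
    l, r = child_indices(i)
    m = i
    if (l < len(h)) and (h[l] < h[m]): m = l
    if (r < len(h)) and (h[r] < h[m]): m = r
    if m != i:
        h[i], h[m] = h[m], h[i]
        fix_min_heap(h, m)

h = [5, 1, 2]     # root out of order; both subtrees trivially valid
fix_min_heap(h, 0)
print(h)          # [1, 5, 2] -- the 5 was pushed down one level
```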

Next, we can use this function to efficiently build a heap (in linear time, as it turns out) out of any randomly ordered list:

def make_min_heap(a):
    # The back half of the list holds the leaves, which are trivially
    # valid one-element heaps, so start at the last internal node
    for i in range(len(a)//2 - 1, -1, -1):
        fix_min_heap(a, i)

We can also add values one-at-a-time to the heap by inserting them at the bottom/end of the heap, and then bubbling them up as long as they are smaller than their parent:

def push_min_heap(h, new):
    h.append(new)
    i = len(h) - 1
    while i:
        p = parent_index(i)
        if new >= h[p]: break
        h[p], h[i] = h[i], h[p]
        i = p

Finally, we can trivially find the smallest value in the heap; it's always the first element in the underlying array. More interestingly, we can remove the smallest element by replacing it with the last element in the underlying array and then using fix_min_heap() to order the heap properly:

def pop_min_heap(h):
    if len(h) > 1:
        rv,h[0] = h[0],h.pop()
        fix_min_heap(h, 0)
        return rv
    else:
        return h.pop()

To wrap up, here's some test code (and, yes, I know about heapsort):

def test():
    a = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
    b = [-1, -5, 10, 15]

    h = a[:]
    make_min_heap(h)
    for e in b: push_min_heap(h, e)

    print(h)
    print([pop_min_heap(h) for _ in range(len(h))] == sorted(a + b))

Of course, the mere existence of the heapq module makes an important point: You'll almost certainly never have to implement a heap yourself. An understanding of how to use one, on the other hand, might be good to have. (Why did I present my own example instead of just pointing you to the heapq source code? Because the heapq stuff has been tuned for performance, and is therefore a little dense. It's also worth a look, though.)
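For reference, the whole test() exercise collapses to a few heapq calls — heapify, heappush, and heappop are the standard-library counterparts of make_min_heap, push_min_heap, and pop_min_heap:

```python
import heapq

a = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
b = [-1, -5, 10, 15]

h = a[:]
heapq.heapify(h)             # make_min_heap
for e in b:
    heapq.heappush(h, e)     # push_min_heap

out = [heapq.heappop(h) for _ in range(len(h))]  # drain via pop_min_heap
print(out == sorted(a + b))  # True
```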
