Disjoint-set data structure
In computer science, a disjoint-set data structure, also called a union–find data structure or merge–find set, is a data structure that stores a collection of disjoint (non-overlapping) sets. Equivalently, it stores a partition of a set into disjoint subsets. It provides operations for adding new sets, merging sets (replacing them with their union), and finding a representative member of a set. The last operation makes it possible to determine efficiently whether any two elements belong to the same set or to different sets. While there are several ways of implementing disjoint-set data structures, in practice they are often identified with a particular implementation known as a disjoint-set forest. This specialized type of forest performs union and find operations in near-constant amortized time. For a sequence of m addition, union, or find operations on a disjoint-set forest with n nodes, the total time required is O(mα(n)), where α(n) is the extremely slow-growing inverse Ackermann function. Although disjoint-set forests do not guarantee this time per operation, each operation rebalances the structure (via tree compression) so that subsequent operations become faster. As a result, disjoint-set forests are both asymptotically optimal and practically efficient.

Disjoint-set data structures play a key role in Kruskal's algorithm for finding the minimum spanning tree of a graph. The importance of minimum spanning trees means that disjoint-set data structures support a wide variety of algorithms. In addition, these data structures find applications in symbolic computation and in compilers, especially for register allocation problems.

History

Disjoint-set forests were first described by Bernard A. Galler and Michael J. Fischer in 1964.[2] In 1973, their time complexity was bounded to O(log* n), the iterated logarithm of n, by Hopcroft and Ullman.[3] In 1975, Robert Tarjan was the first to prove the O(mα(n)) (inverse Ackermann function) upper bound on the algorithm's time complexity.[4] He also proved it to be tight. In 1979, he showed that this was the lower bound for a certain class of algorithms, which includes the Galler–Fischer structure.[5] In 1989, Fredman and Saks showed that Ω(α(n)) (amortized) words of O(log n) bits must be accessed by any disjoint-set data structure per operation,[6] thereby proving the optimality of the data structure in this model.

In 1991, Galil and Italiano published a survey of data structures for disjoint sets.[7] In 1994, Richard J. Anderson and Heather Woll described a parallelized version of union–find that never needs to block.[8] In 2007, Sylvain Conchon and Jean-Christophe Filliâtre developed a semi-persistent version of the disjoint-set forest data structure and formalized its correctness using the proof assistant Coq.[9] "Semi-persistent" means that previous versions of the structure are efficiently retained, but accessing previous versions of the data structure invalidates later ones. Their fastest implementation achieves performance almost as efficient as the non-persistent algorithm; they do not perform a complexity analysis.

Variants of disjoint-set data structures with better performance on a restricted class of problems have also been considered. Gabow and Tarjan showed that if the possible unions are restricted in certain ways, then a truly linear time algorithm is possible.[10]

Representation

Each node in a disjoint-set forest consists of a pointer and some auxiliary information, either a size or a rank (but not both). The pointers are used to make parent pointer trees, where each node that is not the root of a tree points to its parent. To distinguish root nodes from others, their parent pointers have invalid values, such as a circular reference to the node or a sentinel value. Each tree represents a set stored in the forest, with the members of the set being the nodes in the tree. Root nodes provide set representatives: two nodes are in the same set if and only if the roots of the trees containing the nodes are equal.

Nodes in the forest can be stored in any way convenient to the application, but a common technique is to store them in an array. In this case, parents can be indicated by their array index. Every array entry requires Θ(log n) bits of storage for the parent pointer. A comparable or lesser amount of storage is required for the rest of the entry, so the number of bits required to store the forest is Θ(n log n). If an implementation uses fixed size nodes (thereby limiting the maximum size of the forest that can be stored), then the necessary storage is linear in n.
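For concreteness, the array-based representation might be set up as in the following Python sketch; the variable names and the choice of self-pointing roots are this example's own, not a canonical convention:

    # Sketch of an array-based disjoint-set forest with n = 8 nodes.
    # parent[i] is the array index of node i's parent; a root points to itself.
    n = 8
    parent = list(range(n))   # initially, every node is the root of its own singleton tree
    size = [1] * n            # auxiliary information: subtree size, meaningful only at roots

    def is_root(i: int) -> bool:
        # A node is a set representative exactly when it is its own parent.
        return parent[i] == i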
Operations

Disjoint-set data structures support three operations: making a new set containing a new element; finding the representative of the set containing a given element; and merging two sets.

Making new sets

The MakeSet operation adds a new element into a new set containing only that element, and the new set is added to the data structure. If the data structure is instead viewed as a partition of a set, then the MakeSet operation enlarges the set by adding the new element, and it extends the partition by adding a new singleton set. In a disjoint-set forest, MakeSet initializes the node's parent pointer and the node's size or rank. If a root is represented by a node that points to itself, then adding an element can be described using the following pseudocode:

    function MakeSet(x) is
        if x is not already in the forest then
            x.parent := x
            x.size := 1     // if nodes store size
            x.rank := 0     // if nodes store rank
        end if
    end function

This operation has constant time complexity. In particular, initializing a disjoint-set forest with n nodes requires O(n) time. Lack of a parent assigned to the node implies that the node is not present in the forest. In practice, MakeSet must be preceded by an operation that allocates memory to hold x; as long as memory allocation is an amortized constant-time operation, this does not change the asymptotic performance of the disjoint-set forest.

Finding set representatives

The Find operation follows the chain of parent pointers from a specified query node x until it reaches a root element. This root element represents the set to which x belongs and may be x itself. Find returns the root element it reaches.

Performing a Find operation presents an important opportunity for improving the forest. The time in a Find operation is spent chasing parent pointers, so a flatter tree leads to faster Find operations. When a Find is executed, there is no faster way to reach the root than by following each parent pointer in succession. However, the parent pointers visited during this search can be updated to point closer to the root. Because every element visited on the way to a root is part of the same set, this does not change the sets stored in the forest, but it makes future Find operations faster, not only for the nodes between the query node and the root, but also for their descendants. This updating is an important part of the disjoint-set forest's amortized performance guarantee.

There are several algorithms for Find that achieve the asymptotically optimal time complexity. One family of algorithms, known as path compression, makes every node between the query node and the root point to the root. Path compression can be implemented using a simple recursion as follows:

    function Find(x) is
        if x.parent ≠ x then
            x.parent := Find(x.parent)
            return x.parent
        else
            return x
        end if
    end function

This implementation makes two passes, one up the tree and one back down. It requires enough scratch memory to store the path from the query node to the root (in the above pseudocode, the path is implicitly represented using the call stack). This can be decreased to a constant amount of memory by performing both passes in the same direction. The constant memory implementation walks from the query node to the root twice, once to find the root and once to update pointers:

    function Find(x) is
        root := x
        while root.parent ≠ root do
            root := root.parent
        end while

        while x.parent ≠ root do
            parent := x.parent
            x.parent := root
            x := parent
        end while

        return root
    end function

Tarjan and Van Leeuwen also developed one-pass Find algorithms that retain the same worst-case complexity but are more efficient in practice. These are called path splitting and path halving. Both of these update the parent pointers of nodes on the path between the query node and the root. Path splitting replaces every parent pointer on that path by a pointer to the node's grandparent:

    function Find(x) is
        while x.parent ≠ x do
            (x, x.parent) := (x.parent, x.parent.parent)
        end while
        return x
    end function

Path halving works similarly but replaces only every other parent pointer:

    function Find(x) is
        while x.parent ≠ x do
            x.parent := x.parent.parent
            x := x.parent
        end while
        return x
    end function
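As an illustration, the two one-pass variants above can be written in Python over the array-based representation sketched earlier; this is a minimal sketch, assuming a parent list in which a root points to itself:

    def find_path_splitting(parent, x):
        # Path splitting: make every visited node point to its grandparent.
        while parent[x] != x:
            parent[x], x = parent[parent[x]], parent[x]
        return x

    def find_path_halving(parent, x):
        # Path halving: replace every other parent pointer by the grandparent,
        # stepping to the grandparent after each replacement.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x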
Merging two sets

The operation Union(x, y) replaces the set containing x and the set containing y with their union. Union first uses Find to determine the roots of the trees containing x and y. If the roots are the same, there is nothing more to do. Otherwise, the two trees must be merged. This is done by either setting the parent pointer of x's root to y's, or setting the parent pointer of y's root to x's.

The choice of which node becomes the parent has consequences for the complexity of future operations on the tree. If it is done carelessly, trees can become excessively tall. For example, suppose that Union always made the tree containing x a subtree of the tree containing y. Begin with a forest that has just been initialized with elements 1 to n, and execute Union(1, 2), Union(2, 3), ..., Union(n − 1, n). The resulting forest contains a single tree whose root is n, and the path from 1 to n passes through every node in the tree. For this forest, the time to run Find(1) is O(n).

In an efficient implementation, tree height is controlled using union by size or union by rank. Both of these require a node to store information besides just its parent pointer. This information is used to decide which root becomes the new parent. Both strategies ensure that trees do not become too deep.

Union by size

In the case of union by size, a node stores its size, which is simply its number of descendants (including the node itself). When the trees with roots x and y are merged, the node with more descendants becomes the parent. If the two nodes have the same number of descendants, then either one can become the parent. In both cases, the size of the new parent node is set to its new total number of descendants.

    function Union(x, y) is
        // Replace nodes by roots
        x := Find(x)
        y := Find(y)

        if x = y then
            return  // x and y are already in the same set
        end if

        // If necessary, swap variables to ensure that
        // x has at least as many descendants as y
        if x.size < y.size then
            (x, y) := (y, x)
        end if

        // Make x the new root
        y.parent := x
        // Update the size of x
        x.size := x.size + y.size
    end function

The number of bits necessary to store the size is clearly the number of bits necessary to store n. This adds a constant factor to the forest's required storage.

Union by rank

For union by rank, a node stores its rank, which is an upper bound for its height. When a node is initialized, its rank is set to zero. To merge trees with roots x and y, first compare their ranks. If the ranks are different, then the larger rank tree becomes the parent, and the ranks of x and y do not change. If the ranks are the same, then either one can become the parent, but the new parent's rank is incremented by one. While the rank of a node is clearly related to its height, storing ranks is more efficient than storing heights: the height of a node can change during a Find operation, so storing ranks avoids the extra effort of keeping the height correct.

    function Union(x, y) is
        // Replace nodes by roots
        x := Find(x)
        y := Find(y)

        if x = y then
            return  // x and y are already in the same set
        end if

        // If necessary, rename variables to ensure that
        // x has rank at least as large as that of y
        if x.rank < y.rank then
            (x, y) := (y, x)
        end if

        // Make x the new root
        y.parent := x
        // If necessary, increment the rank of x
        if x.rank = y.rank then
            x.rank := x.rank + 1
        end if
    end function

It can be shown that every node has rank ⌊log₂ n⌋ or less.[11] Consequently each rank can be stored in O(log log n) bits and all the ranks can be stored in O(n log log n) bits. This makes the ranks an asymptotically negligible portion of the forest's size.

It is clear from the above implementations that the size and rank of a node do not matter unless a node is the root of a tree. Once a node becomes a child, its size and rank are never accessed again.

Time complexity

A disjoint-set forest implementation in which Find does not update parent pointers, and in which Union does not attempt to control tree heights, can have trees with height O(n). In such a situation, the Find and Union operations require O(n) time.

If an implementation uses path compression alone, then a sequence of n MakeSet operations, followed by up to n − 1 Union operations and f Find operations, has a worst-case running time of Θ(n + f·(1 + log_{2+f/n} n)).

Using union by rank, but without updating parent pointers during Find, gives a running time of Θ(m log n) for m operations of any type, up to n of which are MakeSet operations.

The combination of path compression, splitting, or halving, with union by size or by rank, reduces the running time for m operations of any type, up to n of which are MakeSet operations, to Θ(mα(n)). This makes the amortized running time of each operation Θ(α(n)), which is asymptotically optimal: any disjoint-set data structure must use Ω(α(n)) amortized time per operation.[6]
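Putting the operations together, the following Python sketch combines two-pass path compression with union by size; the class name and method layout are this example's own:

    class DisjointSet:
        # Disjoint-set forest using union by size and path compression.
        def __init__(self, n):
            self.parent = list(range(n))  # every node starts as its own root
            self.size = [1] * n           # subtree sizes, valid only at roots

        def find(self, x):
            # First pass: walk up to the root.
            root = x
            while self.parent[root] != root:
                root = self.parent[root]
            # Second pass: point every node on the path directly at the root.
            while self.parent[x] != root:
                self.parent[x], x = root, self.parent[x]
            return root

        def union(self, x, y):
            x, y = self.find(x), self.find(y)
            if x == y:
                return                    # already in the same set
            if self.size[x] < self.size[y]:
                x, y = y, x               # ensure x is the root of the larger tree
            self.parent[y] = x            # attach the smaller tree under x
            self.size[x] += self.size[y]

    # Usage: after two unions, nodes 0, 1, and 2 share a representative.
    ds = DisjointSet(4)
    ds.union(0, 1)
    ds.union(1, 2)
    assert ds.find(0) == ds.find(2)
    assert ds.find(0) != ds.find(3)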
Proof of O(m log* n) time complexity of Union-Find

The precise analysis of the performance of a disjoint-set forest is somewhat intricate. However, there is a much simpler analysis that proves that the amortized time for any m Find or Union operations on a disjoint-set forest containing n objects is O(m log* n), where log* denotes the iterated logarithm.

Lemma 1: As the Find function follows the path along to the root, the rank of the nodes it encounters is increasing.

Proof: We claim that as Find and Union operations are applied to the data set, this fact remains true over time. Initially, when each node is the root of its own tree, it is trivially true. The only case when the rank of a node might be changed is when the union by rank operation is applied. In this case, a tree with smaller rank will be attached to a tree with greater rank, rather than vice versa. And during the Find operation, all nodes visited along the path will be attached to the root, which has larger rank than its children, so this operation won't change this fact either.

Lemma 2: A node u which is the root of a subtree with rank r has at least 2^r nodes.

Proof:
Initially, when each node is the root of its own tree, it is trivially true. Assume that a node u with rank r has at least 2^r nodes. Then when two trees with rank r are merged using union by rank, a tree with rank r + 1 results, the root of which has at least 2^r + 2^r = 2^(r+1) nodes.

Lemma 3: The maximum number of nodes of rank r is at most n/2^r.

Proof:
From Lemma 2, we know that a node u which is the root of a subtree with rank r has at least 2^r nodes. We will get the maximum number of nodes of rank r when each node with rank r is the root of a tree that has exactly 2^r nodes. In this case, the number of nodes of rank r is n/2^r.

At any particular point in the execution, we can group the vertices of the graph into "buckets", according to their rank. We define the buckets' ranges inductively, as follows: bucket 0 contains vertices of rank 1, and bucket 1 contains vertices of ranks 2 and 3. In general, if the B-th bucket contains vertices with ranks from the interval [r, 2^r − 1], then the (B+1)-st bucket will contain vertices with ranks from the interval [2^r, 2^(2^r) − 1]. For example, the first few buckets contain the ranks {1}, {2, 3}, {4, 5, ..., 15}, and {16, 17, ..., 2^16 − 1}.
We can make two observations about the buckets' sizes.

Observation 1: The total number of buckets is at most log* n. Proof: Going from one bucket to the next replaces the upper end of the rank interval by two raised to that value; that is, the bucket after [B, 2^B − 1] is [2^B, 2^(2^B) − 1]. Since no rank exceeds ⌊log₂ n⌋, at most log* n buckets are needed to cover every rank.

Observation 2: The maximum number of elements in the bucket [B, 2^B − 1] is at most 2n/2^B. Proof: By Lemma 3, there are at most n/2^r elements of rank r, so the bucket contains at most n/2^B + n/2^(B+1) + ⋯ ≤ 2n/2^B elements.
Let F represent the list of Find operations performed, and let T1 be the total number of final links traversed to a root, T2 the total number of links traversed whose two endpoints lie in different buckets, and T3 the total number of links traversed whose two endpoints lie in the same bucket.
Then the total cost of m Finds is T = T1 + T2 + T3.

Since each Find operation makes exactly one traversal that leads to a root, we have T1 = O(m). Also, from the bound above on the number of buckets, we have T2 = O(m log* n).

For T3, suppose we are traversing an edge from u to v, where u and v have ranks in the bucket [B, 2^B − 1] and v is not the root (at the time of this traversal, otherwise the traversal would be accounted for in T1). Fix u and consider the sequence of nodes v1, v2, ..., vk that take the role of v in different Find operations. Because of path compression and not accounting for the edge to a root, this sequence contains only different nodes, and because of Lemma 1 we know that the ranks of the nodes in this sequence are strictly increasing. Because both nodes lie in the same bucket, we can conclude that the length k of the sequence (the number of times node u is attached to a different root in the same bucket) is at most the number of ranks in the bucket B, that is, at most 2^B − 1 − B < 2^B.

Therefore, T3 is at most the sum, over the buckets [B, 2^B − 1], of (the number of elements in the bucket) × 2^B. From Observations 1 and 2, we can conclude that T3 ≤ Σ_B 2^B · (2n/2^B) ≤ 2n log* n. Therefore, T = T1 + T2 + T3 = O(m log* n), since m ≥ n.

Other structures

Better worst-case time per operation

The worst-case time of the Find operation in trees with union by rank or union by size is Θ(log n). Implementations are known that instead compress trees during Union, improving the worst-case time per operation to O(log n / log log n).

Deletion

The regular implementation as disjoint-set forests does not react favorably to the deletion of elements,
in the sense that the running time of Find does not improve as elements are deleted. However, there exist implementations that support constant-time deletion, in which the time bound for Find depends on the number of elements currently in the set rather than on the number ever added.

Applications

Disjoint-set data structures model the partitioning of a set, for example to keep track of the connected components of an undirected graph. This model can then be used to determine whether two vertices belong to the same component, or whether adding an edge between them would result in a cycle. The union–find algorithm is used in high-performance implementations of unification.[20] This data structure is used by the Boost Graph Library to implement its Incremental Connected Components functionality. It is also a key component in implementing Kruskal's algorithm to find the minimum spanning tree of a graph, as sketched below. The Hoshen–Kopelman algorithm for cluster labelling likewise uses a union–find structure.
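As a sketch of the Kruskal's-algorithm application (the edge-list format and function name here are illustrative assumptions), a disjoint-set forest rejects exactly those edges whose endpoints already share a representative:

    def kruskal_mst(n, edges):
        # Minimum spanning forest of an n-vertex graph.
        # edges: iterable of (weight, u, v) tuples.
        parent = list(range(n))
        size = [1] * n

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        mst = []
        for weight, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru == rv:
                continue                   # edge would close a cycle
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru                # union by size
            size[ru] += size[rv]
            mst.append((weight, u, v))
        return mst

    # A square with one heavy diagonal: the diagonal is rejected.
    print(kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (10, 0, 2)]))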