Efficient algorithms for analyzing segmental duplications with deletions and inversions in genomes

Background: Segmental duplications, or low-copy repeats, are common in mammalian genomes. In the human genome, most segmental duplications are mosaics comprised of multiple duplicated fragments. This complex genomic organization complicates analysis of the evolutionary history of these sequences. One model proposed to explain these mosaic patterns is a model of repeated aggregation and subsequent duplication of genomic sequences.
Results: We describe a polynomial-time exact algorithm to compute duplication distance, a genomic distance defined as the most parsimonious way to build a target string by repeatedly copying substrings of a fixed source string. This distance models the process of repeated aggregation and duplication. We also describe extensions of this distance to include certain types of substring deletions and inversions. Finally, we provide a description of a sequence of duplication events as a context-free grammar (CFG).
Conclusion: These new genomic distances will permit more biologically realistic analyses of segmental duplications in genomes.


Introduction
Genomes evolve via many types of mutations, ranging in scale from single-nucleotide mutations to large genome rearrangements. Computational models of these mutational processes allow researchers to derive similarity measures between genome sequences and to reconstruct evolutionary relationships between genomes. For example, considering chromosomal inversions as the only type of mutation leads to the so-called reversal distance problem of finding the minimum number of inversions (reversals) that transform one genome into another [1]. Several elegant polynomial-time algorithms have been found to solve this problem (cf. [2] and references therein). Developing genome rearrangement models that are both biologically realistic and computationally tractable remains an active area of research.
Duplicated sequences in genomes present a particular challenge for genome rearrangement analysis and often make the underlying computational problems more difficult. For instance, computing reversal distance in genomes with duplicated segments is NP-hard [3]. Models that include both duplications and other types of mutations, such as inversions, often result in similarity measures that cannot be computed efficiently. Thus, most current approaches for duplication analysis rely on heuristics, approximation algorithms, or restricted models of duplication [3][4][5][6][7]. For example, there are efficient algorithms for computing tandem duplication histories [8][9][10][11] and whole-genome duplication histories [12,13]. Here we consider another class of duplications: large segmental duplications (also known as low-copy repeats) that are common in many mammalian genomes [14]. These segmental duplications can be quite large (up to hundreds of kilobases), but their evolutionary history remains poorly understood, particularly in primates. The mystery surrounding them is due in part to their complex organization; many segmental duplications are found within contiguous regions of the genome called duplication blocks that contain mosaic patterns of smaller repeated segments, or duplicons [15]. Duplication blocks that are located on different chromosomes, or that are separated by large physical distances on a chromosome, often share sequences of duplicons [16]. These conserved sequences suggest that these duplicons were copied together across large genomic distances. One hypothesis proposed to explain these conserved mosaic patterns is a two-step model of duplication [14]. In this model, a first phase of duplications copies duplicons from the ancestral genome and aggregates these copies into primary duplication blocks.
Then in a second phase, portions of these primary duplication blocks are copied and reinserted into the genome at disparate loci forming secondary duplication blocks.
In [17], we introduced a measure called duplication distance that models the duplication of contiguous substrings over large genomic distances. We used duplication distance in [18] to find the most parsimonious duplication scenario consistent with the two-step model of segmental duplication. The duplication distance from a source string x to a target string y is the minimum number of substrings of x that can be sequentially copied from x and pasted into an initially empty string in order to construct y. We derived an efficient exact algorithm for computing the duplication distance between a pair of strings. Note that the string x does not change during the sequence of duplication events. Moreover, duplication distance does not model local rearrangements, like tandem duplications, deletions, or inversions, that occur within a duplication block during its construction. While such local rearrangements undoubtedly occur in genome evolution, the duplication distance model focuses on identifying the duplicate operations that account for the construction of repeated patterns within duplication blocks by aggregating substrings of other duplication blocks over large genomic distances. Thus, like nearly every other genome rearrangement model, the duplication distance model makes some simplifying assumptions about the underlying biology to achieve computational tractability. Here, we extend the duplication distance measure to include certain types of deletions and inversions. These extensions make our model less restrictive (although we still maintain the restriction that x is unchanged) and permit the construction of richer, and perhaps more biologically plausible, duplication scenarios. In particular, our contributions are the following.

Summary of Contributions
Let μ(x) denote the maximum number of times that any single character appears in the string x. Let |x| denote the length of x.
1. We provide an O(|y|²|x|μ(x)μ(y))-time algorithm to compute the distance between (signed) strings x and y when duplication and certain types of deletion operations are permitted.
2. We provide an O(|y|²μ(x)μ(y))-time algorithm to compute the distance between (signed) strings x and y when duplicated strings may be inverted before being inserted into the target string.
3. We provide an O(|y|²|x|μ(x)μ(y))-time algorithm to compute the distance between signed strings x and y when duplicated strings may be inverted before being inserted into the target string, and deletion operations are also permitted.
4. We provide an O(|y|²|x|³μ(x)μ(y))-time algorithm to compute the distance between signed strings x and y when any substring of the duplicated string may be inverted before being inserted into the target string. Deletion operations are also permitted.
5. We provide a formal proof of correctness of the duplication distance recurrence presented in [18]. No proof of correctness was previously given.
6. We show how a sequence of duplicate operations that generates a string can be described by a context-free grammar (CFG).

Preliminaries
We begin by reviewing some definitions and notation that were introduced in [17] and [18]. Let ∅ denote the empty string. For a string x = x_1 . . . x_n, let x_{i,j} denote the substring x_i x_{i+1} . . . x_j. We define a subsequence S of x to be a string x_{i_1} x_{i_2} . . . x_{i_k} with i_1 < i_2 < . . . < i_k. We represent S by listing the indices at which the characters of S occur in x. For example, if x = abcdef, then the subsequence S = (1, 3, 5) is the string ace. Note that every substring is a subsequence, but a subsequence need not be a substring since the characters comprising a subsequence need not be contiguous. For a pair of subsequences S_1, S_2, denote by S_1 ∩ S_2 the maximal subsequence common to both S_1 and S_2.

Definition 1. Subsequences S = (s_1, s_2) and T = (t_1, t_2) of a string x are alternating in x if either s_1 < t_1 < s_2 < t_2 or t_1 < s_1 < t_2 < s_2.
Definition 2. Subsequences S = (s_1, . . ., s_k) and T = (t_1, . . ., t_l) of a string x are overlapping in x if there exist indices i, i' and j, j' such that the pairs (s_i, s_{i'}) and (t_j, t_{j'}) are alternating in x. See Figure 1.

Definition 3. Given subsequences S = (s_1, . . ., s_k) and T = (t_1, . . ., t_l) of a string x, S is inside of T if there exists an index i such that 1 ≤ i < l and t_i < s_1 < s_k < t_{i+1}. That is, the entire subsequence S occurs in between successive characters of T. See Figure 2.
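These relations are easy to state directly in code. The following is a minimal Python sketch (our own illustration, not from the original text; the function names are ours) of Definitions 1–3, representing a subsequence by its sorted tuple of indices, as in the text:

```python
# Sketch of the subsequence relations from Definitions 1-3. A subsequence
# is given as a sorted tuple of 1-based indices into the string x.

def alternating(S, T):
    """Pairs S=(s1,s2), T=(t1,t2) alternate if s1<t1<s2<t2 or t1<s1<t2<s2."""
    s1, s2 = S
    t1, t2 = T
    return s1 < t1 < s2 < t2 or t1 < s1 < t2 < s2

def overlapping(S, T):
    """S and T overlap if some index pair of S alternates with some pair of T."""
    return any(alternating((S[i], S[ip]), (T[j], T[jp]))
               for i in range(len(S)) for ip in range(i + 1, len(S))
               for j in range(len(T)) for jp in range(j + 1, len(T)))

def inside(S, T):
    """S is inside T if all of S lies between two successive indices of T."""
    return any(T[i] < S[0] and S[-1] < T[i + 1] for i in range(len(T) - 1))
```

For example, (1, 3) and (2, 4) are alternating (hence overlapping), while (2, 3) is inside (1, 4).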
Definition 4. A duplicate operation from x, δ_x(s, t, p), copies a substring x_s . . . x_t of the source string x and pastes it into a target string at position p. Specifically, if x = x_1 . . . x_m and z = z_1 . . . z_n, then z ∘ δ_x(s, t, p) = z_1 . . . z_{p−1} x_s . . . x_t z_p . . . z_n. See Figure 3.
Definition 5. The duplication distance from a source string x to a target string y is the minimum number of duplicate operations from x that generates y from an initially empty target string. That is, y = ∅ ∘ δ_x(s_1, t_1, p_1) ∘ δ_x(s_2, t_2, p_2) ∘ . . . ∘ δ_x(s_l, t_l, p_l).
To compute the duplication distance from x to y, we assume that every character in y appears at least once in x. Otherwise, the duplication distance is undefined.
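As a concrete illustration, the duplicate operation of Definition 4 can be sketched in a few lines of Python (our own sketch, using 1-based indices to match the text; the function name is ours):

```python
# Sketch of Definition 4: apply a duplicate operation delta_x(s, t, p)
# to a target string z, using 1-based indices s, t, p as in the text.

def duplicate(x, z, s, t, p):
    """Copy x[s..t] and paste it into z immediately before position p."""
    return z[:p - 1] + x[s - 1:t] + z[p - 1:]

# Building a target from the empty string by repeated duplication:
z = ""
z = duplicate("abcdef", z, 1, 3, 1)   # z becomes "abc"
z = duplicate("abcdef", z, 4, 5, 2)   # paste "de" at position 2: "adebc"
```

The two calls above realize y = ∅ ∘ δ_x(1, 3, 1) ∘ δ_x(4, 5, 2) for x = abcdef.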

Duplication Distance
In this section we review the basic recurrence for computing duplication distance that was introduced in [18]. The recurrence examines the characters of the target string, y, and considers the sets of characters of y that could have been generated, or copied from the source string, in a single duplicate operation. Such a set of characters of y necessarily corresponds to a substring of the source x (see Def. 4). Moreover, these characters must form a subsequence of y. This is because, in a sequence of duplicate operations, once a string is copied and inserted into the target string, subsequent duplicate operations do not affect the relative order of the characters in the previously inserted string. Because every character of y is generated by exactly one duplicate operation, a sequence of duplicate operations that generates y partitions the characters of y into disjoint subsequences, each of which is generated in a single duplicate operation. A more interesting observation is that these subsequences are mutually non-overlapping. We formalize this property as follows.
Lemma 1 (Non-overlapping Property). Consider a source string x and a sequence of duplicate operations of the form δ_x(s_i, t_i, p_i) that generates the final target string y from an initially empty target string. The substrings x_{s_i,t_i} of x that are duplicated during the construction of y appear as mutually non-overlapping subsequences of y.
Proof. Consider a sequence of duplicate operations δ_x(s_1, t_1, p_1), . . ., δ_x(s_k, t_k, p_k) that generates y from an initially empty target string. For 1 ≤ i ≤ k, let z_i be the intermediate target string that results from δ_x(s_1, t_1, p_1) ∘ . . . ∘ δ_x(s_i, t_i, p_i). Note that z_k = y. For j ≤ i, let S_j^i be the subsequence of z_i that corresponds to the characters duplicated by the jth operation. We shall show by induction on the length i of the sequence that S_1^i, . . ., S_i^i are mutually non-overlapping subsequences in z_i. The base case, i = 1, is trivial since there is only one subsequence. For the induction step note that, by the definition of a duplicate operation, S_i^i is inserted as a contiguous substring into z_{i−1} at location p_i to form z_i. Therefore, for any j, j' < i, if S_j^{i−1} and S_{j'}^{i−1} are non-overlapping in z_{i−1}, then S_j^i and S_{j'}^i are non-overlapping in z_i. It remains to show that for any j < i, S_j^i and S_i^i are non-overlapping in z_i. There are two cases: either (1) the elements of S_j^i are all smaller or all greater than the elements of S_i^i, or (2) S_i^i is inside of S_j^i in z_i (Definition 3). In either case, S_j^i and S_i^i are not overlapping in z_i, as required. □

Figure 1 Overlapping. The red subsequence is overlapping with the blue subsequence in x. The indices (s_i, s_{i'}) and (t_j, t_{j'}) are alternating in x.

Figure 2 Inside. The red subsequence is inside the blue subsequence T. All the characters of the red subsequence occur between the indices t_i and t_{i+1} of T.

Figure 3 Duplicate operation. The substring x_s . . . x_t of the source string x is copied and inserted into the target string z at index p.
The non-overlapping property leads to an efficient recurrence that computes duplication distance. When considering subsequences of the final target string y that might have been generated in a single duplicate operation, we rely on the non-overlapping property to identify substrings of y that can be treated as independent subproblems. If we assume that some subsequence S of y is produced in a single duplicate operation, then we know that all other subsequences of y that correspond to duplicate operations cannot overlap the characters in S. Therefore, the substrings of y in between successive characters of S define subproblems that are computed independently.
In order to find the optimal (i.e. minimum) sequence of duplicate operations that generates y, we must consider all subsequences of y that could have been generated by a single duplicate operation. The recurrence is based on the observation that y_1 must be the first (i.e. leftmost) character to be copied from x in some duplicate operation. There are then two cases to consider: either (1) y_1 was the last (i.e. rightmost) character in the substring that was duplicated from x to generate y_1, or (2) y_1 was not the last character in that substring.
The recurrence defines two quantities: d(x, y) and d_i(x, y). We shall show, by induction, that for a pair of strings x and y, the value d(x, y) is equal to the duplication distance from x to y, and that d_i(x, y) is equal to the duplication distance from x to y under the restriction that the character y_1 is copied from index i in x, i.e. x_i generates y_1. d(x, y) is found by taking the minimum over all characters x_i of x that can generate y_1; see Eq. 1.
As described above, we must consider two possibilities in order to compute d_i(x, y). Either: Case 1: y_1 was the last (or rightmost) character in the substring of x that was copied to produce y_1 (see Fig. 4), or Case 2: x_{i+1} is also copied in the same duplicate operation as x_i, possibly along with other characters as well (see Fig. 5).
For case one, the minimum number of duplicate operations is one, for the duplicate that generates y_1, plus the minimum number of duplicate operations to generate the suffix of y, giving a total of 1 + d(x, y_{2,|y|}) (Fig. 4). For case two, Lemma 1 implies that the minimum number of duplicate operations is the sum of the optimal numbers of operations for two independent subproblems. Specifically, for each j > 1 such that x_{i+1} = y_j we compute: (i) the minimum number of duplicate operations needed to build the substring y_{2,j−1}, namely d(x, y_{2,j−1}), and (ii) the minimum number of duplicate operations needed to build the string y_1 y_{j,|y|}, given that y_1 is generated by x_i and y_j is generated by x_{i+1}. To compute the latter, recall that since x_i and x_{i+1} are copied in the same duplicate operation, the number of duplicates necessary to generate y_1 y_{j,|y|} using x_i and x_{i+1} is equal to the number of duplicates necessary to generate y_{j,|y|} using x_{i+1}, namely d_{i+1}(x, y_{j,|y|}).

Figure 4 Recurrence: Case 1. y_1 is generated from x_i in a duplicate operation where y_1 is the last (rightmost) character in the copied substring (Case 1). The total duplication distance is one plus the duplication distance for the suffix y_{2,|y|}.
The recurrence is, therefore:

d(x, y) = min_{ {i : x_i = y_1} } d_i(x, y)    (1)

d_i(x, y) = min( 1 + d(x, y_{2,|y|}),  min_{ {j : y_j = x_{i+1}, j > 1} } [ d(x, y_{2,j−1}) + d_{i+1}(x, y_{j,|y|}) ] )    (2)

Theorem 1. d(x, y) is the minimum number of duplicate operations that generate y from x. For {i : x_i = y_1}, d_i(x, y) is the minimum number of duplicate operations that generate y from x such that y_1 is generated by x_i.
Proof. Let OPT(x, y) denote minimum length of a sequence of duplicate operations that generate y from x. Let OPT i (x, y) denote the minimum length of a sequence of operations that generate y from x such that y 1 is generated by x i . We prove by induction on |y| that d(x, y) = OPT(x, y) and d i (x, y) = OPT i (x, y).
For |y| = 1, since we assume there is at least one i for which x_i = y_1, OPT(x, y) = OPT_i(x, y) = 1. By definition, the recurrence also evaluates to 1. For the inductive step, assume that OPT(x, y') = d(x, y') and OPT_i(x, y') = d_i(x, y') for any string y' shorter than y. We first show that OPT_i(x, y) ≤ d_i(x, y). Since OPT(x, y) = min_i OPT_i(x, y), this also implies OPT(x, y) ≤ d(x, y). We describe different sequences of duplicate operations that generate y from x, using x_i to generate y_1:
• Consider a minimum-length sequence of duplicates that generates y_{2,|y|}. By the inductive hypothesis its length is d(x, y_{2,|y|}). By duplicating y_1 separately using x_i we obtain a sequence of duplicates that generates y whose length is 1 + d(x, y_{2,|y|}).
• For every {j : y_j = x_{i+1}, j > 1}, consider a minimum-length sequence of duplicates that generates y_{j,|y|} using x_{i+1} to produce y_j, and a minimum-length sequence of duplicates that generates y_{2,j−1}. By the inductive hypothesis their lengths are d_{i+1}(x, y_{j,|y|}) and d(x, y_{2,j−1}), respectively. By extending the start index s of the duplicate operation that starts with x_{i+1} to produce y_j so that it starts with x_i and produces y_1 as well, we produce y with the same number of duplicate operations.
Since OPT_i(x, y) is at most the length of any of these options, it is also at most their minimum. Hence, OPT_i(x, y) ≤ d_i(x, y). To show the other direction (i.e. that d(x, y) ≤ OPT(x, y) and d_i(x, y) ≤ OPT_i(x, y)), consider a minimum-length sequence of duplicate operations that generate y from x, using x_i to generate y_1. There are a few cases:
• If y_1 is generated by a duplicate operation that only duplicates x_i, then OPT_i(x, y) = 1 + OPT(x, y_{2,|y|}). By the inductive hypothesis this equals 1 + d(x, y_{2,|y|}), which is at least d_i(x, y).
• Otherwise, y_1 is generated by a duplicate operation that copies x_i and also duplicates x_{i+1} to generate some character y_j. In this case the sequence Δ of duplicates that generates y_{2,j−1} must appear after the duplicate operation that generates y_1 and y_j, because y_{2,j−1} is inside (Definition 3) of (y_1, y_j). Without loss of generality, suppose Δ is ordered after all the other duplicates, so that first y_1 y_j . . . y_{|y|} is generated, and then Δ generates y_2 . . . y_{j−1} between y_1 and y_j. Hence, OPT_i(x, y) = OPT_i(x, y_1 y_{j,|y|}) + OPT(x, y_{2,j−1}). Since in the optimal sequence x_i generates y_1 in the same duplicate operation that generates y_j from x_{i+1}, we have OPT_i(x, y_1 y_{j,|y|}) = OPT_{i+1}(x, y_{j,|y|}). By the inductive hypothesis, OPT(x, y_{2,j−1}) + OPT_{i+1}(x, y_{j,|y|}) = d(x, y_{2,j−1}) + d_{i+1}(x, y_{j,|y|}), which is at least d_i(x, y). □

Figure 5 Recurrence: Case 2. y_1 is generated from x_i in a duplicate operation where y_1 is not the last (rightmost) character in a copied substring (Case 2). In this case, x_{i+1} is also copied in the same duplicate operation (top). Thus, the duplication distance is the sum of d(x, y_{2,j−1}), the duplication distance for y_{2,j−1} (bottom left), and d_{i+1}(x, y_{j,|y|}), the minimum number of duplicate operations to generate y_{j,|y|} given that x_{i+1} generates y_j (bottom right).

This recurrence naturally translates into a dynamic programming algorithm that computes the values of d(x, ·) and d_i(x, ·) for various target strings. To analyze the running time of this algorithm, note that both y_{2,j−1} and y_{j,|y|} are substrings of y. Since the set of substrings of y is closed under taking substrings, we only encounter substrings of y. Also note that i is chosen from the set {i : x_i = y_1}, whose size is at most μ(x), the maximal multiplicity of a character in x. Thus, there are O(μ(x)|y|²) different values to compute. Each value is computed by considering the minimization over at most μ(y) previously computed values, so the total running time is bounded by O(|y|²μ(x)μ(y)), which is O(|y|³|x|) in the worst case. As with most dynamic programming approaches, this algorithm (and all others presented in subsequent sections) can be extended through trace-back to reconstruct the optimal sequence of operations needed to build y. We omit the details.
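The recurrence translates directly into a memoized program. The following Python sketch is our own illustration (not the paper's code); it uses 0-based indices and passes substrings of y as slices rather than index pairs, and it assumes, as in the text, that every character of y appears in x:

```python
# Memoized sketch of the duplication distance recurrence (Eqs. 1-2),
# unit cost per duplicate operation.
from functools import lru_cache

def duplication_distance(x, y):
    @lru_cache(maxsize=None)
    def d(y_sub):
        if not y_sub:
            return 0
        # Eq. 1: minimize over all positions i of x that can generate y_sub[0].
        return min(d_i(i, y_sub) for i, c in enumerate(x) if c == y_sub[0])

    @lru_cache(maxsize=None)
    def d_i(i, y_sub):
        # Case 1: x[i] is the last character of the copied substring.
        best = 1 + d(y_sub[1:])
        # Case 2: x[i+1] is copied in the same duplicate operation and
        # generates some later character y_sub[j].
        if i + 1 < len(x):
            for j in range(1, len(y_sub)):
                if y_sub[j] == x[i + 1]:
                    best = min(best, d(y_sub[1:j]) + d_i(i + 1, y_sub[j:]))
        return best

    return d(y)
```

For instance, building y = aab from x = ab requires two duplicate operations, and the function returns 2 on that input.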
Extending to Affine Duplication Cost
It is easy to extend the recurrence relations in Eqs. (1), (2) to handle costs for duplicate operations. In the above discussion, the cost of each duplicate operation is 1, so the sum of costs of the operations in a sequence that generates a string y is just the length of that sequence. We next consider a more general cost model for duplication in which the cost of a duplicate operation δ_x(s, t, p) is Δ_1 + (t − s + 1)Δ_2 (i.e., the cost is affine in the number of duplicated characters). Here Δ_1, Δ_2 are non-negative constants. This extension is obtained by assigning a cost of Δ_2 to each duplicated character, except for the last character in the duplicated string, which is assigned a cost of Δ_1 + Δ_2. We do that by adding a cost term to each of the cases in Eq. 2. If x_i is the last character in the duplicated string (case 1), we add Δ_1 + Δ_2 to the cost. Otherwise x_i is not the last duplicated character (case 2), so we add just Δ_2 to the cost. Eq. (2) thus becomes

d_i(x, y) = min( Δ_1 + Δ_2 + d(x, y_{2,|y|}),  min_{ {j : y_j = x_{i+1}, j > 1} } [ Δ_2 + d(x, y_{2,j−1}) + d_{i+1}(x, y_{j,|y|}) ] )    (3)

The running time analysis for this recurrence is the same as for the one with unit duplication cost.

Duplication-Deletion Distance
In this section we generalize the model to include deletions. Consider the intermediate string z generated after some number of duplicate operations. A deletion operation removes a contiguous substring of z, and subsequent duplicate and deletion operations are applied to the resulting string.
Definition 6. A deletion operation, τ(s, t), removes the contiguous substring z_s . . . z_t from the target string z.
The cost associated with τ(s, t) depends on the number t − s + 1 of characters deleted and is denoted Φ(t − s + 1).
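A deletion operation is equally simple to sketch in Python (our own illustration, with 1-based indices matching the text; the unit-cost Φ shown is one possible choice):

```python
# Sketch of a deletion operation tau(s, t): remove the contiguous
# substring z_s ... z_t (1-based indices) from the intermediate string z.

def delete(z, s, t):
    return z[:s - 1] + z[t:]

# One possible deletion cost function Phi on the number of deleted
# characters: a flat cost of 1 per (non-empty) delete operation.
def phi(n):
    return 1 if n > 0 else 0
```

For example, delete("adebc", 2, 3) removes "de" and yields "abc", at cost phi(2) = 1 under this choice of Φ.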
Definition 7. The duplication-deletion distance from a source string x to a target string y is the cost of a minimum sequence of duplicate operations from x and deletion operations, in any order, that generates y.
We now show that although we allow arbitrary deletions from the intermediate string, it suffices to consider deletions from the duplicated strings before they are pasted into the intermediate string, provided that the cost function for deletion, Φ(·), is non-decreasing and obeys the triangle inequality.

Definition 8. A duplicate-delete operation from x, h_x(i_1, j_1, i_2, j_2, . . ., i_k, j_k, p), with i_1 ≤ j_1 < i_2 ≤ j_2 < . . . < i_k ≤ j_k, copies the subsequence x_{i_1,j_1} x_{i_2,j_2} . . . x_{i_k,j_k} of the source string x and pastes it into a target string at position p. Specifically, if x = x_1 . . . x_m and z = z_1 . . . z_n, then z ∘ h_x(i_1, j_1, . . ., i_k, j_k, p) = z_1 . . . z_{p−1} x_{i_1,j_1} x_{i_2,j_2} . . . x_{i_k,j_k} z_p . . . z_n.

The cost associated with such a duplication-deletion is

Δ_1 + (j_k − i_1 + 1)Δ_2 + Σ_{l=1}^{k−1} Φ(i_{l+1} − j_l − 1).

The first two terms in the cost reflect the affine cost of duplicating an entire substring of length j_k − i_1 + 1, and the last term reflects the cost of the deletions made to that substring.
Lemma 2. If the affine cost for duplications is nondecreasing and Φ (·) is non-decreasing and obeys the triangle inequality then the cost of a minimum sequence of duplicate and delete operations that generates a target string y from a source string x is equal to the cost of a minimum sequence of duplicate-delete operations that generates y from x.
Proof. Since duplicate operations are a special case of duplicate-delete operations, the cost of a minimal sequence of duplicate-delete operations that generates y cannot be more than that of a minimal sequence of duplicate and delete operations. We show the (stronger) claim that an arbitrary sequence of duplicate-delete and delete operations that produces a string y with cost c can be transformed into a sequence of just duplicate-delete operations that generates y with cost at most c, by induction on the number of delete operations. The base case, where the number of deletions is zero, is trivial. Consider the first delete operation, τ. Let k denote the number of duplicate-delete operations that precede τ, and let z be the intermediate string produced by these k operations. For i = 1, . . ., k, let S_i be the subsequence of z produced by the ith duplicate-delete operation. By Lemma 1, S_1, . . ., S_k form a partition of z into disjoint, mutually non-overlapping subsequences of z. Let d denote the substring of z to be deleted. Since d is a contiguous substring, S_i ∩ d is a (possibly empty) substring of S_i for each i. There are several cases:
1. S_i ∩ d = ∅. In this case we do not change any operation.
2. S_i ∩ d = S_i. In this case all characters produced by the ith duplicate-delete operation are deleted, so we may omit the ith operation altogether and decrease the number of characters deleted by τ. Since Φ(·) is non-decreasing, this does not increase the cost of generating z (and hence y).
3. S_i ∩ d is a prefix (or suffix) of S_i. Assume it is a prefix; the case of a suffix is similar. Instead of deleting the characters S_i ∩ d, we can avoid generating them in the first place. Let r be the smallest index in S_i \ d (that is, the first character in S_i that is not deleted by τ). We change the ith duplicate-delete operation to start at r and decrease the number of characters deleted by τ. Since the affine cost for duplications is non-decreasing and Φ(·) is non-decreasing, the cost of generating z does not increase.
4. S_i ∩ d is a non-empty substring of S_i that is neither a prefix nor a suffix of S_i. We claim that this case applies to at most one value of i. This implies that, after taking care of all the other cases, τ only deletes characters in S_i. We then change the ith duplicate-delete operation to also delete the characters deleted by τ, and omit τ. Since Φ(·) obeys the triangle inequality, this does not increase the total cost of deletion. By the inductive hypothesis, the rest of y can be generated by just duplicate-delete operations with at most the same cost. It remains to prove the claim. Recall that the set {S_i} is comprised of mutually non-overlapping subsequences of z. Suppose that there exist indices i ≠ j such that S_i ∩ d is a non-prefix/suffix substring of S_i and S_j ∩ d is a non-prefix/suffix substring of S_j. Then there exist indices of both S_i and S_j in z that precede d, are contained in d, and succeed d. Let i_p < i_c < i_s be three such indices of S_i, and let j_p < j_c < j_s be similar for S_j. It must also be the case that j_p < i_c < j_s and i_p < j_c < i_s. Without loss of generality, suppose i_p < j_p. It follows that (i_p, i_c) and (j_p, j_s) are alternating in z. So, S_i and S_j are overlapping, which contradicts Lemma 1. □
To extend the recurrence from the previous section to duplication-deletion distance, we must observe that because we allow deletions in the string that is duplicated from x, if we assume character x_i is copied to produce y_1, it may not be the case that the character x_{i+1} also appears in y; the character x_{i+1} may have been deleted. Therefore, we minimize over all possible locations k > i for the next character in the duplicated string that is not deleted. The extension of the recurrence from the previous section to duplication-deletion distance is:

d̃(x, y) = min_{ {i : x_i = y_1} } d̃_i(x, y)    (4)

d̃_i(x, y) = min( Δ_1 + Δ_2 + d̃(x, y_{2,|y|}),  min_{k > i}  min_{ {j : y_j = x_k, j > 1} } [ (k − i)Δ_2 + Φ(k − i − 1) + d̃(x, y_{2,j−1}) + d̃_k(x, y_{j,|y|}) ] )    (5)

where we take Φ(0) = 0.

Theorem 2. d̃(x, y) is the duplication-deletion distance from x to y. For {i : x_i = y_1}, d̃_i(x, y) is the duplication-deletion distance from x to y under the additional restriction that y_1 is generated by x_i.
The proof of Theorem 2 is almost identical to that of Theorem 1 in the previous section and is omitted. However, the running time increases; while the number of entries in the dynamic programming table does not change, the time to compute each entry is multiplied by the number of possible values of k in the recurrence, which is O(|x|). Therefore, the running time is O(|y|²|x|μ(x)μ(y)), which is O(|y|³|x|²) in the worst case. We conclude this section by showing, in the following lemma, that if both the duplicate and delete cost functions are the identity function (i.e. one per operation), then the duplication-deletion distance is equal to duplication distance without deletions.
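The extended recurrence can likewise be sketched as a memoized Python program. This is our own illustration, not the paper's code; in particular, the exact bookkeeping of the skipped-character cost ((k − i)·D2 for the spanned source characters plus phi(k − i − 1) for the deleted ones) is an assumption reconstructed from the duplicate-delete cost described above:

```python
# Memoized sketch of a duplication-deletion recurrence. D1 is charged once
# per duplicate-delete operation, D2 per spanned source character, and
# phi(n) for each run of n deleted source characters inside the copy.
from functools import lru_cache

def dup_del_distance(x, y, D1=1, D2=0, phi=lambda n: 1 if n > 0 else 0):
    @lru_cache(maxsize=None)
    def d(y_sub):
        if not y_sub:
            return 0
        # Assumes every character of y appears somewhere in x.
        return min(d_i(i, y_sub) for i, c in enumerate(x) if c == y_sub[0])

    @lru_cache(maxsize=None)
    def d_i(i, y_sub):
        # Case 1: x[i] is the last character of the duplicated string.
        best = D1 + D2 + d(y_sub[1:])
        # Case 2: the next undeleted source character is x[k] for some k > i,
        # and it generates some later character y_sub[j].
        for k in range(i + 1, len(x)):
            for j in range(1, len(y_sub)):
                if y_sub[j] == x[k]:
                    skip = (k - i) * D2 + phi(k - i - 1)
                    best = min(best, skip + d(y_sub[1:j]) + d_i(k, y_sub[j:]))
        return best

    return d(y)
```

With the default costs (one per duplicate, one per non-empty deletion, no per-character charge), building y = ac from x = abc costs 2, whether one copies "a" and "c" separately or copies "abc" and deletes "b".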
Lemma 3. Given a source string x and a target string y, if the cost of duplication is 1 per duplicate operation and the cost of deletion is 1 per delete operation, then d̃(x, y) = d(x, y).
Proof. First we note that if a target string y can be built from x in d(x, y) duplicate operations, then the same sequence of duplicate operations is a valid sequence of duplicate and delete operations as well, so d(x, y) is at least d̃(x, y).
We claim that every sequence of duplicate and delete operations can be transformed into a sequence of duplicate operations of the same length. The proof of this claim is similar to that of Lemma 2. In that proof we showed how to transform a sequence of duplicate and delete operations into a sequence of duplicate-delete operations of at most the same cost. We follow the same steps, but transform the sequence into a sequence that consists of just duplicate operations without increasing the number of operations. Recall the four cases in the proof of Lemma 2. In the first three cases we eliminate the delete operation without increasing the number of duplicate operations. Therefore we only need to consider the last case (S_i ∩ d is a non-empty substring of S_i that is neither a prefix nor a suffix of S_i). Recall that this case applies to at most one value of i. Deleting S_i ∩ d from S_i leaves a prefix and a suffix of S_i. We can therefore replace the ith duplicate operation and the delete operation with two duplicate operations, one generating the appropriate prefix of S_i and the other generating the appropriate suffix of S_i. This eliminates the delete operation without changing the number of operations in the sequence. Therefore, for any string y that results from a sequence of duplicate and delete operations, we can construct the same string using only duplicate operations (without deletes) using at most the same number of operations. So, d(x, y) is no greater than d̃(x, y). □

Duplication-Inversion Distance
In this section we extend the duplication-deletion distance recurrence to allow inversions. We now explicitly define characters and strings as having two orientations: forward (+) and inverse (-).
Definition 9. A signed string of length m over an alphabet Σ is an element of ({+, −} × Σ)^m.
For example, (+b −c −a +d) is a signed string of length 4. An inversion of a signed string reverses the order of the characters as well as their signs. Formally,
Definition 10. The inverse of a signed string x = x_1 . . . x_m is the signed string x̄ = −x_m . . . −x_1.
For example, the inverse of (+b −c −a +d) is (−d +a +c −b).
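Signed strings and inversion can be modeled directly. The following Python sketch (our own representation, a tuple of (sign, character) pairs) mirrors Definitions 9–10:

```python
# Sketch of signed strings: a signed string is a tuple of (sign, char)
# pairs with sign in {+1, -1}. Inversion reverses the order and flips signs.

def inverse(x):
    return tuple((-s, c) for s, c in reversed(x))

x = ((+1, 'b'), (-1, 'c'), (-1, 'a'), (+1, 'd'))   # (+b -c -a +d)
assert inverse(x) == ((-1, 'd'), (+1, 'a'), (+1, 'c'), (-1, 'b'))  # (-d +a +c -b)
assert inverse(inverse(x)) == x  # inversion is an involution
```

The two assertions reproduce the example from the text and check that inverting twice returns the original string.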
In a duplicate-invert operation a substring is copied from x and inverted before being inserted into the target string y. We allow the cost of inversion to be an affine function in the length ℓ of the duplicated inverted string, which we denote Θ 1 + ℓΘ 2 , where Θ 1 , Θ 2 ≥ 0. We still allow for normal duplicate operations.
Definition 11. A duplicate-invert operation from x, δ̄_x(s, t, p), copies the inverted substring −x_t, −x_{t−1}, . . ., −x_s of the source string x and pastes it into a target string at position p. Specifically, if x = x_1 . . . x_m and z = z_1 . . . z_n, then z ∘ δ̄_x(s, t, p) = z_1 . . . z_{p−1} (−x_t)(−x_{t−1}) . . . (−x_s) z_p . . . z_n. The cost associated with each duplicate-invert operation is Θ_1 + (t − s + 1)Θ_2.
Definition 12. The duplication-inversion distance from a source string x to a target string y is the cost of a minimum sequence of duplicate and duplicate-invert operations from x, in any order, that generates y.
The recurrence for duplication distance (Eqs. 1, 3) can be extended to compute the duplication-inversion distance. This is done by introducing a term for inverted duplications whose form is very similar to that of the term for regular duplication (Eq. 3). Specifically, when considering the possible characters to generate y 1 , we consider characters in x that match either y 1 or its inverse, -y 1 . In the former case, then, we use d i  (x, y) to denote the duplication-inversion distance with the additional restriction that y 1 is generated by x i without an inversion. The recurrence for d i  is the same as for d i in Eq. 3. In the latter case, we consider an inverted duplicate in which y 1 is generated by -x i . This is denoted by d i  , which follows a similar recurrence. In this recurrence, since an inversion occurs, x i is the last character of the duplicated string, rather than the first one. Therefore, the next character in x to be used in this operation is -x i-1 rather than x i+1 . The recurrence for d i  also differs in the cost term, where we use the affine cost of the duplicate-invert operation. The extension of the recurrence to duplication-inversion distance is therefore: x y x y Theorem 3. d (x, y) is the duplication-inversion distance from x to y. For {i : y) is the duplication-inversion distance from x to y under the additional restriction that y 1 is generated by x i . For {i : y) is the duplication-inversion distance from x to y under the additional restriction that y 1 is generated by -x i . The correctness proof is very similar to that of Theorem 1, only requiring an additional case for handling the case of a duplicate invert operation which is symmetric to the case of regular duplication. The asymptotic running time of the corresponding dynamic programming algorithm is O(|y| 2 μ(x)μ(y)). The analysis is identical to the one in section 3. 
The fact that we now consider either a duplicate or a duplicate-invert operation does not change the asymptotic running time.
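As a concrete illustration, the recurrence above can be implemented by memoized recursion. The sketch below is ours, not the paper's: it uses unit cost per operation rather than the affine costs, and encodes signed characters as nonzero integers, with -c denoting the inverse of c.

```python
from functools import lru_cache

def dup_inv_distance(x, y):
    """Duplication-inversion distance with unit cost per operation.

    x, y are tuples of nonzero signed integers; -c is the inverse of c.
    """
    @lru_cache(maxsize=None)
    def d(l, r):                      # distance for the slice y[l:r]
        if l == r:
            return 0
        best = float('inf')
        for i, c in enumerate(x):
            if c == y[l]:             # y[l] generated without inversion
                best = min(best, dR(i, l, r))
            if -c == y[l]:            # y[l] generated by an inverted copy
                best = min(best, dL(i, l, r))
        return best

    @lru_cache(maxsize=None)
    def dR(i, l, r):                  # y[l] generated by x[i], forward copy
        best = 1 + d(l + 1, r)        # the operation ends at x[i]
        if i + 1 < len(x):
            for j in range(l + 1, r):
                if y[j] == x[i + 1]:  # the same operation also makes y[j]
                    best = min(best, d(l + 1, j) + dR(i + 1, j, r))
        return best

    @lru_cache(maxsize=None)
    def dL(i, l, r):                  # y[l] generated by -x[i], inverted copy
        best = 1 + d(l + 1, r)
        if i - 1 >= 0:
            for j in range(l + 1, r):
                if y[j] == -x[i - 1]:  # inverted copy continues leftward in x
                    best = min(best, d(l + 1, j) + dL(i - 1, j, r))
        return best

    return d(0, len(y))
```

For example, building (-3, -2, -1) from (1, 2, 3) takes a single duplicate-invert operation, while building (1, 3) requires two duplicates since no inversion helps.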

Duplication-Inversion-Deletion Distance
In this section we extend the distance measure to include delete operations as well as duplicate and duplicate-invert operations. Note that we handle deletions only after inversions of the same substring. The order of operations can matter, at least in terms of cost: the cost of inverting (+a +b +c) and then deleting -b may differ from the cost of first deleting +b from (+a +b +c) and then inverting (+a +c).
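To make the order-of-operations point concrete, the two orders in the example above happen to produce the same string, even though their costs may differ under a given cost model. A minimal sketch (with +a, +b, +c encoded as 1, 2, 3, an encoding we choose for illustration):

```python
def invert(s):
    """Invert a signed string: reverse the order and flip the signs."""
    return [-c for c in reversed(s)]

s = [1, 2, 3]                                    # (+a +b +c)
inv_then_del = [c for c in invert(s) if c != -2]  # invert, then delete -b
del_then_inv = invert([c for c in s if c != 2])   # delete +b, then invert
```

Both orders yield [-3, -1], i.e. (-c -a); only the costs charged for the two sequences may differ.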
Definition 13. The duplication-inversion-deletion distance from a source string x to a target string y is the cost of a minimum sequence of duplicate and duplicate-invert operations from x and delete operations, in any order, that generates y.
A duplicate-invert-delete operation copies a set of disjoint substrings of x, inverts their concatenation, and inserts the result into a target string at position p. Specifically, if x = x_1 . . . x_m and z = z_1 . . . z_n, then z ∘ δ̄_x(i_1, j_1, i_2, j_2, . . ., i_k, j_k, p) = z_1 . . . z_{p-1} (-x_{j_k}) . . . (-x_{i_k}) . . . (-x_{j_1}) . . . (-x_{i_1}) z_p . . . z_n, where i_1 ≤ j_1 < i_2 ≤ j_2 < . . . < i_k ≤ j_k; that is, the copied substrings are concatenated, the concatenation is inverted, and the result is inserted into z at position p. Similar to the previous section, it suffices to consider just duplicate-invert-delete and duplicate-delete operations, rather than separate duplicate, duplicate-invert, and delete operations.

Lemma 4. If Φ(·) is non-decreasing and obeys the triangle inequality, and if the cost of inversion is an affine non-decreasing function as defined above, then the cost of a minimum sequence of duplicate, duplicate-invert, and delete operations that generates a target string y from a source string x is equal to the cost of a minimum sequence of duplicate-delete and duplicate-invert-delete operations that generates y from x.
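The duplicate-invert-delete operation itself is easy to sketch: copy the substrings, invert their concatenation (reverse the order, flip the signs), and splice the result into the target. The helper below is ours, using 0-based inclusive index pairs and nonzero signed integers for characters:

```python
def dup_invert_delete(x, z, pairs, p):
    """Apply a duplicate-invert-delete operation.

    Copies the substrings x[i:j+1] for each (i, j) in pairs (0-based,
    inclusive), inverts the concatenation, and inserts it into z before
    position p.
    """
    copied = []
    for i, j in pairs:
        copied.extend(x[i:j + 1])          # copy, skipping deleted gaps
    inverted = [-c for c in reversed(copied)]  # invert the concatenation
    return z[:p] + inverted + z[p:]
```

For instance, copying x[0:2] and x[3:5] of [1, 2, 3, 4, 5] gives [1, 2, 4, 5] (with 3 deleted), whose inversion [-5, -4, -2, -1] is then spliced into the target.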
The proof of the lemma is essentially the same as that of Lemma 2. Note that in that proof we did not require all duplicate operations to be from the same string x. Therefore, the arguments in that proof apply to our case, where some of the duplicates are taken from x and some from the inverse of x.
The recurrence for duplication-inversion-deletion distance is obtained by combining the recurrences for duplication-deletion distance (Eq. 5) and duplication-inversion distance (Eq. 6). We use separate terms for duplicate-delete operations (d_i^→) and for duplicate-invert-delete operations (d_i^←). These terms differ from the terms in Eq. 6 in the same way that Eq. 5 differs from Eq. 2: because of the possible deletion, we do not know that x_{i+1} (respectively x_{i-1}) is the next duplicated character. Instead we minimize over all characters later (respectively earlier) than x_i.
The recurrence for duplication-inversion-deletion distance is therefore:

d̂(x, y) = min( min_{i : x_i = y_1} d_i^→(x, y),  min_{i : -x_i = y_1} d_i^←(x, y) )    (7)

d_i^→(x, y) = min( Δ_1 + Δ_2 + d̂(x, y_{2,|y|}),  min_{i' > i} min_{j > 1 : y_j = x_{i'}} [ Δ_2 + d̂(x, y_{2,j-1}) + d_{i'}^→(x, y_{j,|y|}) ] )

d_i^←(x, y) = min( Δ'_1 + Δ'_2 + d̂(x, y_{2,|y|}),  min_{i' < i} min_{j > 1 : y_j = -x_{i'}} [ Δ'_2 + d̂(x, y_{2,j-1}) + d_{i'}^←(x, y_{j,|y|}) ] )

where Δ_1 + ℓΔ_2 is the affine cost of a duplicate-delete operation copying ℓ characters and Δ'_1 + ℓΔ'_2 is the affine cost of a duplicate-invert-delete operation copying ℓ characters.

Theorem 4. d̂(x, y) is the duplication-inversion-deletion distance from x to y. For {i : x_i = y_1}, d_i^→(x, y) is the duplication-inversion-deletion distance from x to y under the additional restriction that y_1 is generated by x_i. For {i : -x_i = y_1}, d_i^←(x, y) is the duplication-inversion-deletion distance from x to y under the additional restriction that y_1 is generated by -x_i.
The proof, again, is very similar to the proofs in the previous sections. The running time of the corresponding dynamic programming algorithm is asymptotically the same as that for duplication-deletion distance: O(|y|^2 |x| μ(y)μ(x)), where the multiplicity μ(y) (respectively μ(x)) is the maximum number of times any character appears in the string y (respectively x), regardless of its sign.
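The combined recurrence can likewise be sketched as a memoized recursion. As before, this sketch is ours and simplifies to unit cost per operation (the paper's affine costs would add per-character terms); the key change from the duplication-inversion sketch is that the copy may continue at *any* later (or, for inverted copies, earlier) character of x, reflecting deletions:

```python
from functools import lru_cache

def dup_inv_del_distance(x, y):
    """Minimum number of duplicate-delete / duplicate-invert-delete
    operations building y from substrings of x (unit cost per operation)."""

    @lru_cache(maxsize=None)
    def d(l, r):                      # distance for the slice y[l:r]
        if l == r:
            return 0
        best = float('inf')
        for i, c in enumerate(x):
            if c == y[l]:
                best = min(best, dR(i, l, r))
            if -c == y[l]:
                best = min(best, dL(i, l, r))
        return best

    @lru_cache(maxsize=None)
    def dR(i, l, r):                  # y[l] generated by x[i], forward copy
        best = 1 + d(l + 1, r)        # the operation ends at x[i]
        for i2 in range(i + 1, len(x)):   # deletion: skip to any later char
            for j in range(l + 1, r):
                if y[j] == x[i2]:
                    best = min(best, d(l + 1, j) + dR(i2, j, r))
        return best

    @lru_cache(maxsize=None)
    def dL(i, l, r):                  # y[l] generated by -x[i], inverted copy
        best = 1 + d(l + 1, r)
        for i2 in range(i - 1, -1, -1):   # inverted copy walks leftward in x
            for j in range(l + 1, r):
                if y[j] == -x[i2]:
                    best = min(best, d(l + 1, j) + dL(i2, j, r))
        return best

    return d(0, len(y))
```

With deletions allowed, (1, 3) is now one operation away from (1, 2, 3) (duplicate 1 2 3 and delete the 2), where the deletion-free model needed two.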
In comparing the models of the previous section and the current one, we note that restricting the model of rearrangement to allow only duplicate and duplicate-invert operations (Section 5) instead of duplicate-invert-delete operations may be desirable from a biological perspective, because each duplicate and duplicate-invert requires only three breakpoints in the genome, whereas a duplicate-invert-delete operation can be significantly more complicated, requiring more breakpoints.

Variants of Duplication-Inversion-Deletion Distance
It is possible to extend the model even further; we give here one detailed example that demonstrates how such extensions might be achieved, though other extensions are also possible. In the previous section we handled the model where the duplicated substring of x may be inverted in its entirety before being inserted into the target string. In the generalized model, a substring of the duplicated string may be inverted before the string is inserted into y. For example, we allow (+a +b +c +d +e +f) to become (+a +b -e -d -c +f) before being inserted into y. In this model, the cost of duplicating a string of length m with an inversion of a substring of length ℓ is Δ_1 + mΔ_2 + Θ(ℓ), for some non-negative monotonically increasing cost function Θ.
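The substring inversion in this example is straightforward to state in code. A minimal sketch (encoding +a . . . +f as 1 . . . 6, our convention):

```python
def invert_substring(x, s, t):
    """Return x with the substring x[s..t] (0-based, inclusive)
    inverted in place: that block is reversed and its signs flipped."""
    mid = [-c for c in reversed(x[s:t + 1])]
    return x[:s] + mid + x[t + 1:]
```

Inverting positions 2..4 of (+a +b +c +d +e +f) yields (+a +b -e -d -c +f), matching the example above.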
The way we extend the recurrence is by considering all possible substring inversions of the original string x. For 1 ≤ s ≤ t ≤ |x|, let x̃_{s,t} be the string x_1 . . . x_{s-1} (-x_t) . . . (-x_s) x_{t+1} . . . x_{|x|}, i.e., x with the substring x_s . . . x_t inverted in place.

The grammar restricts the order in which rules are applied. For example, y_1 is always produced by the first production rule. The recurrence for d_i(x, y) arises naturally from observing that if T is an optimal parse tree for y in which the first production rule generates y_1 by x_i and y_j by x_{i+1}, then the subtree T_1 of T that generates y_{2,j-1} is a valid parse tree which is optimal for y_{2,j-1}. Similarly, the tree T_2 obtained by deleting x_i and T_1 from T is a valid parse tree which is optimal for y_{j,|y|} under the restriction that y_j must be generated by x_{i+1} (see Fig. 7). Moreover, T_1 and T_2 are disjoint trees which together contain all non-trivial productions in T. This explains the term d(x, y_{2,j-1}) + d_{i+1}(x, y_{j,|y|}) in Eq. 2, which is the heart of the recursion. The minimization over {j : y_j = x_{i+1}, j > 1} simply enumerates all of the possibilities for constructing T. The term 1 + d(x, y_{2,|y|}) handles the possibility that y_1 is generated by a duplicate operation that ends with x_i; in this case the tree T_2 is empty, so we consider only T_1, adding one to account for the production rule at the root of T, which is not part of T_1. This is illustrated in Fig. 8.

Conclusion
We have shown how to generalize duplication distance to include certain types of deletions and inversions and how to compute these new distances efficiently via dynamic programming. In earlier work [17,18], we used duplication distance to derive phylogenetic relationships between human segmental duplications. We plan to apply the generalized distances introduced here to the same data to determine if these richer computational models yield new biological insights.