Efficient algorithms for analyzing segmental duplications with deletions and inversions in genomes
Algorithms for Molecular Biology volume 5, Article number: 11 (2010)
Abstract
Background
Segmental duplications, or low-copy repeats, are common in mammalian genomes. In the human genome, most segmental duplications are mosaics composed of multiple duplicated fragments. This complex genomic organization complicates analysis of the evolutionary history of these sequences. One model proposed to explain these mosaic patterns is a model of repeated aggregation and subsequent duplication of genomic sequences.
Results
We describe a polynomial-time exact algorithm to compute duplication distance, a genomic distance defined as the most parsimonious way to build a target string by repeatedly copying substrings of a fixed source string. This distance models the process of repeated aggregation and duplication. We also describe extensions of this distance to include certain types of substring deletions and inversions. Finally, we provide a description of a sequence of duplication events as a context-free grammar (CFG).
Conclusion
These new genomic distances will permit more biologically realistic analyses of segmental duplications in genomes.
Introduction
Genomes evolve via many types of mutations ranging in scale from single nucleotide mutations to large genome rearrangements. Computational models of these mutational processes allow researchers to derive similarity measures between genome sequences and to reconstruct evolutionary relationships between genomes. For example, considering chromosomal inversions as the only type of mutation leads to the so-called reversal distance problem of finding the minimum number of inversions/reversals that transform one genome into another [1]. Several elegant polynomial-time algorithms have been found to solve this problem (cf. [2] and references therein). Developing genome rearrangement models that are both biologically realistic and computationally tractable remains an active area of research.
Duplicated sequences in genomes present a particular challenge for genome rearrangement analysis and often make the underlying computational problems more difficult. For instance, computing reversal distance in genomes with duplicated segments is NP-hard [3]. Models that include both duplications and other types of mutations, such as inversions, often result in similarity measures that cannot be computed efficiently. Thus, most current approaches for duplication analysis rely on heuristics, approximation algorithms, or restricted models of duplication [3–7]. For example, there are efficient algorithms for computing tandem duplication histories [8–11] and whole-genome duplication histories [12, 13]. Here we consider another class of duplications: large segmental duplications (also known as low-copy repeats) that are common in many mammalian genomes [14]. These segmental duplications can be quite large (up to hundreds of kilobases), but their evolutionary history remains poorly understood, particularly in primates. The mystery surrounding them is due in part to their complex organization; many segmental duplications are found within contiguous regions of the genome called duplication blocks that contain mosaic patterns of smaller repeated segments, or duplicons [15]. Duplication blocks that are located on different chromosomes, or that are separated by large physical distances on a chromosome, often share sequences of duplicons [16]. These conserved sequences suggest that these duplicons were copied together across large genomic distances. One hypothesis proposed to explain these conserved mosaic patterns is a two-step model of duplication [14]. In this model, a first phase of duplications copies duplicons from the ancestral genome and aggregates these copies into primary duplication blocks. Then in a second phase, portions of these primary duplication blocks are copied and reinserted into the genome at disparate loci, forming secondary duplication blocks.
In [17], we introduced a measure called duplication distance that models the duplication of contiguous substrings over large genomic distances. We used duplication distance in [18] to find the most parsimonious duplication scenario consistent with the two-step model of segmental duplication. The duplication distance from a source string x to a target string y is the minimum number of substrings of x that can be sequentially copied from x and pasted into an initially empty string in order to construct y. We derived an efficient exact algorithm for computing the duplication distance between a pair of strings. Note that the string x does not change during the sequence of duplication events. Moreover, duplication distance does not model local rearrangements, like tandem duplications, deletions, or inversions, that occur within a duplication block during its construction. While such local rearrangements undoubtedly occur in genome evolution, the duplication distance model focuses on identifying the duplicate operations that account for the construction of repeated patterns within duplication blocks by aggregating substrings of other duplication blocks over large genomic distances. Thus, like nearly every other genome rearrangement model, the duplication distance model makes some simplifying assumptions about the underlying biology to achieve computational tractability. Here, we extend the duplication distance measure to include certain types of deletions and inversions. These extensions make our model less restrictive, although we still maintain the restriction that x is unchanged, and permit the construction of richer, and perhaps more biologically plausible, duplication scenarios. In particular, our contributions are the following.
Summary of Contributions
Let μ(x) denote the maximum number of times any single character appears in the string x, and let |x| denote the length of x.
1. We provide an O(|y|^2 |x| μ(x) μ(y))-time algorithm to compute the distance between (signed) strings x and y when duplication and certain types of deletion operations are permitted.
2. We provide an O(|y|^2 μ(x) μ(y))-time algorithm to compute the distance between (signed) strings x and y when duplicated strings may be inverted before being inserted into the target string.
3. We provide an O(|y|^2 |x| μ(x) μ(y))-time algorithm to compute the distance between signed strings x and y when duplicated strings may be inverted before being inserted into the target string, and deletion operations are also permitted.
4. We provide an O(|y|^2 |x|^3 μ(x) μ(y))-time algorithm to compute the distance between signed strings x and y when any substring of the duplicated string may be inverted before being inserted into the target string. Deletion operations are also permitted.
5. We provide a formal proof of correctness of the duplication distance recurrence presented in [18]. No proof of correctness was previously given.
6. We show how a sequence of duplicate operations that generates a string can be described by a context-free grammar (CFG).
Preliminaries
We begin by reviewing some definitions and notation that were introduced in [17] and [18]. Let ∅ denote the empty string. For a string x = x_1 . . . x_n, let x_{i,j} denote the substring x_i x_{i+1} . . . x_j. We define a subsequence S of x to be a string x_{i_1} x_{i_2} . . . x_{i_k} with i_1 < i_2 < ⋯ < i_k. We represent S by listing the indices at which the characters of S occur in x. For example, if x = abcdef, then the subsequence S = (1, 3, 5) is the string ace. Note that every substring is a subsequence, but a subsequence need not be a substring, since the characters comprising a subsequence need not be contiguous. For a pair of subsequences S_1, S_2, denote by S_1 ∩ S_2 the maximal subsequence common to both S_1 and S_2.
Definition 1. Subsequences S = (s_1, s_2) and T = (t_1, t_2) of a string x are alternating in x if either s_1 < t_1 < s_2 < t_2 or t_1 < s_1 < t_2 < s_2.
Definition 2. Subsequences S = (s_1, . . ., s_k) and T = (t_1, . . ., t_l) of a string x are overlapping in x if there exist indices i, i' and j, j' such that 1 ≤ i < i' ≤ k, 1 ≤ j < j' ≤ l, and (s_i, s_{i'}) and (t_j, t_{j'}) are alternating in x. See Figure 1.
Definition 3. Given subsequences S = (s_1, . . ., s_k) and T = (t_1, . . ., t_l) of a string x, S is inside of T if there exists an index i such that 1 ≤ i < l and t_i < s_1 < s_k < t_{i+1}. That is, the entire subsequence S occurs in between successive characters of T. See Figure 2.
Definition 4. A duplicate operation from x, δ_x(s, t, p), copies a substring x_s . . . x_t of the source string x and pastes it into a target string at position p. Specifically, if x = x_1 . . . x_m and z = z_1 . . . z_n, then z ∘ δ_x(s, t, p) = z_1 . . . z_{p−1} x_s . . . x_t z_p . . . z_n. See Figure 3.
Definition 5. The duplication distance from a source string x to a target string y is the minimum number of duplicate operations from x that generates y from an initially empty target string. That is, y = ∅ ∘ δ_{ x }(s_{1}, t_{1}, p_{1}) ∘ δ_{ x }(s_{2}, t_{2}, p_{2}) ∘ ⋯ ∘ δ_{ x }(s_{ l }, t_{ l }, p_{ l }).
To compute the duplication distance from x to y, we assume that every character in y appears at least once in x. Otherwise, the duplication distance is undefined.
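As a concrete illustration of Definition 4, the paste semantics of a duplicate operation can be sketched in Python. This is a minimal sketch; the function name and the use of 1-based, inclusive indices follow the notation above but are otherwise our own:

```python
def duplicate(x, z, s, t, p):
    """Apply z . delta_x(s, t, p): copy the substring x_s ... x_t of the
    source x (1-based, inclusive) and paste it into the target z
    immediately before position p. The source x is never modified."""
    copied = x[s - 1:t]
    return z[:p - 1] + copied + z[p - 1:]
```

For example, `duplicate("abcdef", "xy", 2, 4, 2)` pastes "bcd" between "x" and "y", and a string is built from ∅ by starting with the empty target `""`.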
Duplication Distance
In this section we review the basic recurrence for computing duplication distance that was introduced in [18]. The recurrence examines the characters of the target string, y, and considers the sets of characters of y that could have been generated, or copied from the source string, in a single duplicate operation. Such a set of characters of y necessarily corresponds to a substring of the source x (see Def. 4). Moreover, these characters must form a subsequence of y. This is because, in a sequence of duplicate operations, once a string is copied and inserted into the target string, subsequent duplicate operations do not affect the order of the characters in the previously inserted string. Because every character of y is generated by exactly one duplicate operation, a sequence of duplicate operations that generates y partitions the characters of y into disjoint subsequences, each of which is generated in a single duplicate operation. A more interesting observation is that these subsequences are mutually nonoverlapping. We formalize this property as follows.
Lemma 1 (Nonoverlapping Property). Consider a source string x and a sequence of duplicate operations of the form δ_x(s_i, t_i, p_i) that generates the final target string y from an initially empty target string. The substrings of x that are duplicated during the construction of y appear as mutually nonoverlapping subsequences of y.
Proof. Consider a sequence of duplicate operations δ_x(s_1, t_1, p_1), . . ., δ_x(s_k, t_k, p_k) that generates y from an initially empty target string. For 1 ≤ i ≤ k, let z^i be the intermediate target string that results from δ_x(s_1, t_1, p_1) ∘ ⋯ ∘ δ_x(s_i, t_i, p_i). Note that z^k = y. For j ≤ i, let S^i_j be the subsequence of z^i that corresponds to the characters duplicated by the j-th operation. We shall show by induction on the length i of the sequence that S^i_1, . . ., S^i_i are pairwise nonoverlapping subsequences of z^i. For the base case, when there is a single duplicate operation, there is no nonoverlap property to show. Assume now that S^{i−1}_1, . . ., S^{i−1}_{i−1} are mutually nonoverlapping subsequences of z^{i−1}. For the induction step note that, by the definition of a duplicate operation, S^i_i is inserted as a contiguous substring into z^{i−1} at location p_i to form z^i. Therefore, for any j, j' < i, if S^{i−1}_j and S^{i−1}_{j'} are nonoverlapping in z^{i−1}, then S^i_j and S^i_{j'} are nonoverlapping in z^i. It remains to show that for any j < i, S^i_j and S^i_i are nonoverlapping in z^i. There are two cases: (1) the elements of S^i_j are either all smaller or all greater than the elements of S^i_i, or (2) S^i_i is inside of S^i_j in z^i (Definition 3). In either case, S^i_j and S^i_i are not overlapping in z^i, as required. □
The nonoverlapping property leads to an efficient recurrence that computes duplication distance. When considering subsequences of the final target string y that might have been generated in a single duplicate operation, we rely on the nonoverlapping property to identify substrings of y that can be treated as independent subproblems. If we assume that some subsequence S of y is produced in a single duplicate operation, then we know that all other subsequences of y that correspond to duplicate operations cannot overlap the characters in S. Therefore, the substrings of y in between successive characters of S define subproblems that are computed independently.
In order to find the optimal (i.e. minimum) sequence of duplicate operations that generate y, we must consider all subsequences of y that could have been generated by a single duplicate operation. The recurrence is based on the observation that y_{1} must be the first (i.e. leftmost) character to be copied from x in some duplicate operation. There are then two cases to consider: either (1) y_{1} was the last (or rightmost) character in the substring that was duplicated from x to generate y_{1}, or (2) y_{1} was not the last character in the substring that was duplicated from x to generate y_{1}.
The recurrence defines two quantities: d(x, y) and d_i(x, y). We shall show, by induction, that for a pair of strings x and y, the value d(x, y) is equal to the duplication distance from x to y, and that d_i(x, y) is equal to the duplication distance from x to y under the restriction that the character y_1 is copied from index i in x, i.e. x_i generates y_1. d(x, y) is found by taking the minimum over all characters x_i of x that can generate y_1; see Eq. 1.
As described above, we must consider two possibilities in order to compute d_{ i }(x, y). Either:
Case 1: y_1 was the last (or rightmost) character in the substring of x that was copied to produce y_1 (see Fig. 4), or
Case 2: x_{i+1} is also copied in the same duplicate operation as x_i, possibly along with other characters as well (see Fig. 5).
For case one, the minimum number of duplicate operations is one (for the duplicate that generates y_1) plus the minimum number of duplicate operations to generate the suffix of y, giving a total of 1 + d(x, y_{2,|y|}) (Fig. 4). For case two, Lemma 1 implies that the minimum number of duplicate operations is the sum of the optimal numbers of operations for two independent subproblems. Specifically, for each j > 1 such that x_{i+1} = y_j we compute: (i) the minimum number of duplicate operations needed to build the substring y_{2,j−1}, namely d(x, y_{2,j−1}), and (ii) the minimum number of duplicate operations needed to build the string y_1 y_{j,|y|}, given that y_1 is generated by x_i and y_j is generated by x_{i+1}. To compute the latter, recall that since x_i and x_{i+1} are copied in the same duplicate operation, the number of duplicates necessary to generate y_1 y_{j,|y|} using x_i and x_{i+1} is equal to the number of duplicates necessary to generate y_{j,|y|} using x_{i+1}, namely d_{i+1}(x, y_{j,|y|}) (see Fig. 5 and Eq. 2).
The recurrence is, therefore:

d(x, y) = min_{i : x_i = y_1} d_i(x, y)     (1)

d_i(x, y) = min { 1 + d(x, y_{2,|y|}),  min_{j > 1 : y_j = x_{i+1}} [ d(x, y_{2,j−1}) + d_{i+1}(x, y_{j,|y|}) ] }     (2)
Theorem 1. d(x, y) is the minimum number of duplicate operations that generate y from x. For {i : x_i = y_1}, d_i(x, y) is the minimum number of duplicate operations that generate y from x such that y_1 is generated by x_i.
Proof. Let OPT(x, y) denote the minimum length of a sequence of duplicate operations that generate y from x. Let OPT_i(x, y) denote the minimum length of such a sequence under the restriction that y_1 is generated by x_i. We prove by induction on |y| that d(x, y) = OPT(x, y) and d_i(x, y) = OPT_i(x, y).
For |y| = 1, since we assume there is at least one i for which x_i = y_1, OPT(x, y) = OPT_i(x, y) = 1. By definition, the recurrence also evaluates to 1. For the inductive step, assume that OPT(x, y') = d(x, y') and OPT_i(x, y') = d_i(x, y') for any string y' shorter than y. We first show that OPT_i(x, y) ≤ d_i(x, y). Since OPT(x, y) = min_i OPT_i(x, y), this also implies OPT(x, y) ≤ d(x, y). We describe different sequences of duplicate operations that generate y from x, using x_i to generate y_1:
1. Consider a minimum-length sequence of duplicates that generates y_{2,|y|}. By the inductive hypothesis its length is d(x, y_{2,|y|}). By duplicating y_1 separately using x_i we obtain a sequence of duplicates that generates y whose length is 1 + d(x, y_{2,|y|}).
2. For every {j : y_j = x_{i+1}, j > 1}, consider a minimum-length sequence of duplicates that generates y_{j,|y|} using x_{i+1} to produce y_j, and a minimum-length sequence of duplicates that generates y_{2,j−1}. By the inductive hypothesis their lengths are d_{i+1}(x, y_{j,|y|}) and d(x, y_{2,j−1}), respectively. By extending the start index s of the duplicate operation that starts with x_{i+1} (producing y_j) so that it starts with x_i and produces y_1 as well, we generate y with the same number of duplicate operations.
Since OPT_i(x, y) is at most the length of any of these options, it is also at most their minimum. Hence, OPT_i(x, y) ≤ d_i(x, y).
To show the other direction (i.e., that d(x, y) ≤ OPT(x, y) and d_i(x, y) ≤ OPT_i(x, y)), consider a minimum-length sequence of duplicate operations that generates y from x, using x_i to generate y_1. There are a few cases:
1. If y_1 is generated by a duplicate operation that only duplicates x_i, then OPT_i(x, y) = 1 + OPT(x, y_{2,|y|}). By the inductive hypothesis this equals 1 + d(x, y_{2,|y|}), which is at least d_i(x, y).
2. Otherwise, y_1 is generated by a duplicate operation that copies x_i and also duplicates x_{i+1} to generate some character y_j. In this case the sequence Δ of duplicates that generates y_{2,j−1} must appear after the duplicate operation that generates y_1 and y_j, because y_{2,j−1} is inside (Definition 3) of (y_1, y_j). Without loss of generality, suppose Δ is ordered after all the other duplicates, so that first y_1 y_j . . . y_{|y|} is generated, and then Δ generates y_2 . . . y_{j−1} between y_1 and y_j. Hence, OPT_i(x, y) = OPT_i(x, y_1 y_{j,|y|}) + OPT(x, y_{2,j−1}). Since in the optimal sequence x_i generates y_1 in the same duplicate operation that generates y_j from x_{i+1}, we have OPT_i(x, y_1 y_{j,|y|}) = OPT_{i+1}(x, y_{j,|y|}). By the inductive hypothesis, OPT(x, y_{2,j−1}) + OPT_{i+1}(x, y_{j,|y|}) = d(x, y_{2,j−1}) + d_{i+1}(x, y_{j,|y|}), which is at least d_i(x, y). □
This recurrence naturally translates into a dynamic programming algorithm that computes the values of d(x, ·) and d_i(x, ·) for various target strings. To analyze the running time of this algorithm, note that both y_{2,j−1} and y_{j,|y|} are substrings of y. Since the set of substrings of y is closed under taking substrings, we only encounter substrings of y. Also note that since i is chosen from the set {i : x_i = y_1}, there are O(μ(x)) choices for i, where μ(x) is the maximal multiplicity of a character in x. Thus, there are O(μ(x)|y|^2) different values to compute. Each value is computed by taking a minimum over at most μ(y) previously computed values, so the total running time is bounded by O(|y|^2 μ(x) μ(y)), which is O(|y|^3 |x|) in the worst case. As with most dynamic programming approaches, this algorithm (and all others presented in subsequent sections) can be extended through traceback to reconstruct the optimal sequence of operations needed to build y. We omit the details.
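The recurrence above translates into a short memoized implementation. The following is a sketch under unit operation costs, using 0-based indices and assuming every character of y appears in x; the function names are our own:

```python
from functools import lru_cache

def duplication_distance(x, y):
    """Minimum number of duplicate operations from x that build y,
    assuming every character of y occurs in x."""
    @lru_cache(maxsize=None)
    def d(a, b):
        # Distance to build the substring y[a:b] from scratch.
        if a >= b:
            return 0
        return min(di(i, a, b) for i in range(len(x)) if x[i] == y[a])

    @lru_cache(maxsize=None)
    def di(i, a, b):
        # Distance to build y[a:b] given that x[i] generates y[a].
        best = 1 + d(a + 1, b)          # case 1: x[i] ends the copied substring
        if i + 1 < len(x):
            for j in range(a + 1, b):   # case 2: x[i+1] generates y[j]
                if y[j] == x[i + 1]:
                    best = min(best, d(a + 1, j) + di(i + 1, j, b))
        return best

    return d(0, len(y))
```

For instance, building "aab" from "ab" takes two duplicates (copy "a", then copy "ab"), while "abc" from "abc" takes one.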
Extending to Affine Duplication Cost
It is easy to extend the recurrence relations in Eqs. (1), (2) to handle costs for duplicate operations. In the above discussion, the cost of each duplicate operation is 1, so the sum of costs of the operations in a sequence that generates a string y is just the length of that sequence. We next consider a more general cost model for duplication in which the cost of a duplicate operation δ_x(s, t, p) is Δ_1 + (t − s + 1)Δ_2 (i.e., the cost is affine in the number of duplicated characters). Here Δ_1, Δ_2 are nonnegative constants. This extension is obtained by assigning a cost of Δ_2 to each duplicated character, except for the last character in the duplicated string, which is assigned a cost of Δ_1 + Δ_2. We do that by adding a cost term to each of the cases in Eq. 2. If x_i is the last character in the duplicated string (case 1), we add Δ_1 + Δ_2 to the cost. Otherwise x_i is not the last duplicated character (case 2), so we add just Δ_2 to the cost. Eq. (2) thus becomes

d_i(x, y) = min { Δ_1 + Δ_2 + d(x, y_{2,|y|}),  min_{j > 1 : y_j = x_{i+1}} [ Δ_2 + d(x, y_{2,j−1}) + d_{i+1}(x, y_{j,|y|}) ] }     (3)
The running time analysis for this recurrence is the same as for the one with unit duplication cost.
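The affine-cost modification can be sketched by threading Δ_1 and Δ_2 through the two cases of the recurrence. A sketch under the same assumptions as before; the parameter names are our own:

```python
from functools import lru_cache

def affine_duplication_distance(x, y, delta1, delta2):
    """Duplication distance where a duplicate of length L costs
    delta1 + L * delta2: the last copied character is charged
    delta1 + delta2, every other copied character just delta2."""
    @lru_cache(maxsize=None)
    def d(a, b):
        if a >= b:
            return 0.0
        return min(di(i, a, b) for i in range(len(x)) if x[i] == y[a])

    @lru_cache(maxsize=None)
    def di(i, a, b):
        best = delta1 + delta2 + d(a + 1, b)     # x[i] is the last character
        if i + 1 < len(x):
            for j in range(a + 1, b):            # x[i+1] copied in same operation
                if y[j] == x[i + 1]:
                    best = min(best, delta2 + d(a + 1, j) + di(i + 1, j, b))
        return best

    return d(0, len(y))
```

With Δ_1 = 1 and Δ_2 = 0 this reduces to the unit-cost duplication distance; with Δ_1 = 2 and Δ_2 = 1, a single duplicate of length 3 costs 2 + 3 = 5.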
Duplication-Deletion Distance
In this section we generalize the model to include deletions. Consider the intermediate string z generated after some number of duplicate operations. A deletion operation removes a contiguous substring z_{ i }, . . ., z_{ j }of z, and subsequent duplicate and deletion operations are applied to the resulting string.
Definition 6. A delete operation, τ(s, t), deletes a substring z_s . . . z_t of the target string z, thus making z shorter. Specifically, if z = z_1 . . . z_s . . . z_t . . . z_m, then z ∘ τ(s, t) = z_1 . . . z_{s−1} z_{t+1} . . . z_m. See Figure 6.
The cost associated with τ(s, t) depends on the number t − s + 1 of characters deleted and is denoted Φ(t − s + 1).
Definition 7. The duplicationdeletion distance from a source string x to a target string y is the cost of a minimum sequence of duplicate operations from x and deletion operations, in any order, that generates y .
We now show that although we allow arbitrary deletions from the intermediate string, it suffices to consider deletions from the duplicated strings before they are pasted into the intermediate string, provided that the cost function for deletion, Φ(·), is nondecreasing and obeys the triangle inequality.
Definition 8. A duplicate-delete operation from x, η_x(i_1, j_1, i_2, j_2, . . ., i_k, j_k, p), for i_1 ≤ j_1 < i_2 ≤ j_2 < ⋯ < i_k ≤ j_k, copies the subsequence x_{i_1} . . . x_{j_1} x_{i_2} . . . x_{j_2} ⋯ x_{i_k} . . . x_{j_k} of the source string x and pastes it into a target string at position p. Specifically, if x = x_1 . . . x_m and z = z_1 . . . z_n, then z ∘ η_x(i_1, j_1, . . ., i_k, j_k, p) = z_1 . . . z_{p−1} x_{i_1} . . . x_{j_1} ⋯ x_{i_k} . . . x_{j_k} z_p . . . z_n.
The cost associated with such a duplicate-delete operation is Δ_1 + (j_k − i_1 + 1)Δ_2 + Σ_{l=1}^{k−1} Φ(i_{l+1} − j_l − 1). The first two terms in the cost reflect the affine cost of duplicating an entire substring of length j_k − i_1 + 1, and the last term reflects the cost of the deletions made to that substring.
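A small sketch of the duplicate-delete operation and its cost, using 1-based, inclusive segment indices; the function names and the `phi` callback are our own:

```python
def duplicate_delete(x, z, segments, p):
    """Apply eta_x(i1, j1, ..., ik, jk, p): copy the substrings x[i..j]
    for each (i, j) in segments (1-based, inclusive, in increasing
    order), concatenate them, and paste into z before position p."""
    copied = "".join(x[i - 1:j] for i, j in segments)
    return z[:p - 1] + copied + z[p - 1:]

def duplicate_delete_cost(segments, delta1, delta2, phi):
    """Affine duplication cost over the whole copied span [i1, jk],
    plus phi(gap length) for every internal run of deleted characters."""
    i1 = segments[0][0]
    jk = segments[-1][1]
    gaps = [segments[l + 1][0] - segments[l][1] - 1
            for l in range(len(segments) - 1)]
    return delta1 + (jk - i1 + 1) * delta2 + sum(phi(g) for g in gaps)
```

For example, copying segments (1, 2) and (5, 6) of "abcdef" deletes "cd" from the copied span before pasting.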
Lemma 2. If the affine cost for duplications is nondecreasing and Φ(·) is nondecreasing and obeys the triangle inequality, then the cost of a minimum sequence of duplicate and delete operations that generates a target string y from a source string x is equal to the cost of a minimum sequence of duplicate-delete operations that generates y from x.
Proof. Since duplicate operations are a special case of duplicate-delete operations, the cost of a minimal sequence of duplicate-delete operations and delete operations that generates y cannot be more than that of a sequence of just duplicate operations and delete operations. We show the (stronger) claim that an arbitrary sequence of duplicate-delete and delete operations that produces a string y with cost c can be transformed into a sequence of just duplicate-delete operations that generates y with cost at most c, by induction on the number of delete operations. The base case, where the number of deletions is zero, is trivial. Consider the first delete operation, τ. Let k denote the number of duplicate-delete operations that precede τ, and let z be the intermediate string produced by these k operations. For i = 1, . . ., k, let S_i be the subsequence of x that was used in the i-th duplicate-delete operation. By Lemma 1, S_1, . . ., S_k form a partition of z into disjoint, nonoverlapping subsequences of z. Let d denote the substring of z to be deleted. Since d is a contiguous substring, S_i ∩ d is a (possibly empty) substring of S_i for each i. There are several cases:
1. S_i ∩ d = ∅. In this case we do not change any operation.
2. S_i ∩ d = S_i. In this case all characters produced by the i-th duplicate-delete operation are deleted, so we may omit the i-th operation altogether and decrease the number of characters deleted by τ. Since Φ(·) is nondecreasing, this does not increase the cost of generating z (and hence y).
3. S_i ∩ d is a prefix (or suffix) of S_i. Assume it is a prefix; the case of a suffix is similar. Instead of deleting the characters S_i ∩ d we can avoid generating them in the first place. Let r be the smallest index in S_i \ d (that is, the first character in S_i that is not deleted by τ). We change the i-th duplicate-delete operation to start at r and decrease the number of characters deleted by τ. Since the affine cost for duplications is nondecreasing and Φ(·) is nondecreasing, the cost of generating z does not increase.
4. S_i ∩ d is a nonempty substring of S_i that is neither a prefix nor a suffix of S_i. We claim that this case applies to at most one value of i. This implies that after taking care of all the other cases, τ only deletes characters in S_i. We then change the i-th duplicate-delete operation to also delete the characters deleted by τ, and omit τ. Since Φ(·) obeys the triangle inequality, this will not increase the total cost of deletion. By the inductive hypothesis, the rest of y can be generated by just duplicate-delete operations with at most the same cost. It remains to prove the claim. Recall that the set {S_i} is comprised of mutually nonoverlapping subsequences of z. Suppose that there exist indices i ≠ j such that S_i ∩ d is a non-prefix/suffix substring of S_i and S_j ∩ d is a non-prefix/suffix substring of S_j. There must exist indices of both S_i and S_j in z that precede d, are contained in d, and succeed d. Let i_p < i_c < i_s be three such indices of S_i and let j_p < j_c < j_s be similar for S_j. It must also be the case that j_p < i_c < j_s and i_p < j_c < i_s. Without loss of generality, suppose i_p < j_p. It follows that (i_p, i_c) and (j_p, j_s) are alternating in z. So, S_i and S_j are overlapping, which contradicts Lemma 1. □
To extend the recurrence from the previous section to duplication-deletion distance, we must observe that because we allow deletions in the string that is duplicated from x, if we assume character x_i is copied to produce y_1, it may not be the case that the character x_{i+1} also appears in y; the character x_{i+1} may have been deleted. Therefore, we minimize over all possible locations k > i for the next character in the duplicated string that is not deleted. The extension of the recurrence from the previous section to duplication-deletion distance is:

d̃(x, y) = min_{i : x_i = y_1} d̃_i(x, y)

d̃_i(x, y) = min { Δ_1 + Δ_2 + d̃(x, y_{2,|y|}),  min_{k > i} min_{j > 1 : y_j = x_k} [ (k − i)Δ_2 + Φ̂(k − i − 1) + d̃(x, y_{2,j−1}) + d̃_k(x, y_{j,|y|}) ] },

where Φ̂(0) = 0 and Φ̂(m) = Φ(m) for m > 0 accounts for the deletion of the skipped characters x_{i+1}, . . ., x_{k−1}.
Theorem 2. d̃(x, y) is the duplication-deletion distance from x to y. For {i : x_i = y_1}, d̃_i(x, y) is the duplication-deletion distance from x to y under the additional restriction that y_1 is generated by x_i.
The proof of Theorem 2 is almost identical to that of Theorem 1 in the previous section and is omitted. However, the running time increases; while the number of entries in the dynamic programming table does not change, the time to compute each entry is multiplied by the number of possible values of k in the recurrence, which is O(|x|). Therefore, the running time is O(|y|^2 |x| μ(x) μ(y)), which is O(|y|^3 |x|^2) in the worst case. We conclude this section by showing, in the following lemma, that if both duplicate and delete operations have unit cost (i.e., one per operation), then the duplication-deletion distance is equal to the duplication distance without deletions.
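Under unit operation costs (cost 1 per duplicate and 1 per delete, so each internal gap of skipped source characters costs one delete operation), the duplication-deletion recurrence described above can be sketched as follows; 0-based indices, and the names are our own:

```python
from functools import lru_cache

def duplication_deletion_distance(x, y):
    """Duplication-deletion distance under unit costs: 1 per duplicate
    operation plus 1 per delete operation, where every internal run of
    skipped source characters is removed by a single delete."""
    @lru_cache(maxsize=None)
    def dd(a, b):
        if a >= b:
            return 0
        return min(ddi(i, a, b) for i in range(len(x)) if x[i] == y[a])

    @lru_cache(maxsize=None)
    def ddi(i, a, b):
        best = 1 + dd(a + 1, b)            # x[i] ends the copied string
        for k in range(i + 1, len(x)):     # next source character kept
            gap = 0 if k == i + 1 else 1   # one delete per skipped run
            for j in range(a + 1, b):
                if y[j] == x[k]:
                    best = min(best, gap + dd(a + 1, j) + ddi(k, j, b))
        return best

    return dd(0, len(y))
```

For example, "ad" can be built from "abcd" either by two plain duplicates or by one duplicate plus one delete; both cost 2.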
Lemma 3. Given a source string x and a target string y, if the cost of duplication is 1 per duplicate operation and the cost of deletion is 1 per delete operation, then d̃(x, y) = d(x, y).
Proof. First we note that if a target string y can be built from x in d(x, y) duplicate operations, then the same sequence of duplicate operations is a valid sequence of duplicate and delete operations as well, so d(x, y) is at least d̃(x, y).
We claim that every sequence of duplicate and delete operations can be transformed into a sequence of duplicate operations of the same length. The proof of this claim is similar to that of Lemma 2. In that proof we showed how to transform a sequence of duplicate and delete operations into a sequence of duplicate-delete operations of at most the same cost. We follow the same steps, but transform the sequence into a sequence that consists of just duplicate operations without increasing the number of operations. Recall the four cases in the proof of Lemma 2. In the first three cases we eliminate the delete operation without increasing the number of duplicate operations. Therefore we only need to consider the last case (S_i ∩ d is a nonempty substring of S_i that is neither a prefix nor a suffix of S_i). Recall that this case applies to at most one value of i. Deleting S_i ∩ d from S_i leaves a prefix and a suffix of S_i. We can therefore replace the i-th duplicate operation and the delete operation with two duplicate operations, one generating the appropriate prefix of S_i and the other generating the appropriate suffix of S_i. This eliminates the delete operation without changing the number of operations in the sequence. Therefore, for any string y that results from a sequence of duplicate and delete operations, we can construct the same string using only duplicate operations (without deletes) using at most the same number of operations. So, d(x, y) is no greater than d̃(x, y). □
Duplication-Inversion Distance
In this section we extend the duplication distance recurrence to allow inversions. We now explicitly define characters and strings as having two orientations: forward (+) and inverse (−).
Definition 9. A signed string of length m over an alphabet Σ is an element of ({+, −} × Σ)^m.
For example, (+b −c −a +d) is a signed string of length 4. An inversion of a signed string reverses the order of the characters as well as their signs. Formally,
Definition 10. The inverse of a signed string x = x_1 . . . x_m is the signed string x̄ = x̄_m . . . x̄_1, where x̄_i denotes the character x_i with its sign flipped.
For example, the inverse of (+b −c −a +d) is (−d +a +c −b).
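Representing a signed string as a tuple of (sign, symbol) pairs, the inverse of Definition 10 is a one-liner. A minimal sketch; the representation is our own choice:

```python
def inverse(s):
    """Inverse of a signed string: reverse the character order and flip
    every sign. A signed string is a tuple of (sign, symbol) pairs,
    e.g. (('+', 'b'), ('-', 'c'), ('-', 'a'), ('+', 'd'))."""
    flip = {'+': '-', '-': '+'}
    return tuple((flip[sign], sym) for sign, sym in reversed(s))
```

Applied to (+b −c −a +d), this yields (−d +a +c −b), matching the example above.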
In a duplicate-invert operation a substring is copied from x and inverted before being inserted into the target string y. We allow the cost of inversion to be an affine function of the length ℓ of the duplicated inverted string, which we denote Θ_1 + ℓΘ_2, where Θ_1, Θ_2 ≥ 0. We still allow normal duplicate operations.
Definition 11. A duplicate-invert operation from x, δ̄_x(s, t, p), copies the inverted substring x̄_t, x̄_{t−1}, . . ., x̄_s of the source string x and pastes it into a target string at position p. Specifically, if x = x_1 . . . x_m and z = z_1 . . . z_n, then z ∘ δ̄_x(s, t, p) = z_1 . . . z_{p−1} x̄_t x̄_{t−1} . . . x̄_s z_p . . . z_n.
The cost associated with each duplicate-invert operation is Θ_1 + (t − s + 1)Θ_2.
Definition 12. The duplication-inversion distance from a source string x to a target string y is the cost of a minimum sequence of duplicate and duplicate-invert operations from x, in any order, that generates y.
The recurrence for duplication distance (Eqs. 1, 3) can be extended to compute the duplication-inversion distance. This is done by introducing a term for inverted duplications whose form is very similar to that of the term for regular duplication (Eq. 3). Specifically, when considering the possible characters to generate y_1, we consider characters in x that match either y_1 or its inverse, ȳ_1. In the former case, we use d̂_i(x, y) to denote the duplication-inversion distance with the additional restriction that y_1 is generated by x_i without an inversion. The recurrence for d̂_i is the same as for d_i in Eq. 3. In the latter case, we consider an inverted duplicate in which y_1 is generated by the inversion of x_i. This is denoted by d̄_i(x, y), which follows a similar recurrence. In this recurrence, since an inversion occurs, x_i is the last character of the duplicated string, rather than the first one. Therefore, the next character of x to be used in this operation is x_{i−1} rather than x_{i+1}. The recurrence for d̄_i also differs in the cost term, where we use the affine cost of the duplicate-invert operation. The extension of the recurrence to duplication-inversion distance is therefore:

d̂(x, y) = min { min_{i : x_i = y_1} d̂_i(x, y),  min_{i : x̄_i = y_1} d̄_i(x, y) }
Theorem 3. d^{di}(x, y) is the duplication-inversion distance from x to y. For {i : x_i = y_1}, d_i^{di}(x, y) is the duplication-inversion distance from x to y under the additional restriction that y_1 is generated by x_i. For {i : x_i = −y_1}, d̄_i^{di}(x, y) is the duplication-inversion distance from x to y under the additional restriction that y_1 is generated by −x_i.
The correctness proof is very similar to that of Theorem 1, requiring only an additional case for the duplicate-invert operation, which is symmetric to the case of regular duplication. The asymptotic running time of the corresponding dynamic programming algorithm is O(|y|^2 μ(x)μ(y)); the analysis is identical to the one in Section 3. The fact that we now consider either a duplicate or a duplicate-invert operation does not change the asymptotic running time.
Duplication-Inversion-Deletion Distance
In this section we extend the distance measure to include delete operations as well as duplicate and duplicate-invert operations. Note that we only handle deletions that follow inversions of the same substring. The order of these operations matters, at least with respect to cost: the cost of inverting (+a +b +c) and then deleting −b may differ from the cost of first deleting +b from (+a +b +c) and then inverting (+a +c).
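To make the cost asymmetry concrete, under the affine inversion cost of the previous section, inverting a three-character substring and then deleting one character charges the inversion for all three characters, whereas deleting first would charge it for only two. The sketch below compares the two orderings; the second ordering is a hypothetical alternative (not part of the model, which handles deletions only after inversions), and all parameter names are illustrative.

```python
def invert_then_delete_cost(n, deleted, theta1, theta2, phi):
    """Cost of inverting an n-character substring and then deleting
    `deleted` characters from it: the affine inversion cost is charged
    for all n characters, plus the deletion cost phi."""
    return theta1 + n * theta2 + phi(deleted)

def delete_then_invert_cost(n, deleted, theta1, theta2, phi):
    """Hypothetical alternative ordering, for comparison only: delete
    first, then invert only the (n - deleted) surviving characters."""
    return phi(deleted) + theta1 + (n - deleted) * theta2
```

With any Θ_2 > 0 the two orderings differ by deleted·Θ_2, which is why the model must fix the order of operations.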
Definition 13. The duplication-inversion-deletion distance from a source string x to a target string y is the cost of a minimum sequence of duplicate and duplicate-invert operations from x and deletion operations, in any order, that generates y.
Definition 14. A duplicate-invert-delete operation from x, (i_1, j_1, i_2, j_2, ..., i_k, j_k, p), for i_1 ≤ j_1 < i_2 ≤ j_2 < ⋯ < i_k ≤ j_k, pastes the string (−x_{j_k}) ... (−x_{i_k})(−x_{j_{k−1}}) ... (−x_{i_1}), that is, the inversion of the concatenation x_{i_1} ... x_{j_1} x_{i_2} ... x_{j_2} ⋯ x_{i_k} ... x_{j_k}, into a target string at position p. Specifically, if x = x_1 ... x_m and z = z_1 ... z_n, then z ∘ (i_1, j_1, i_2, j_2, ..., i_k, j_k, p) = z_1 ... z_p (−x_{j_k}) ... (−x_{i_1}) z_{p+1} ... z_n.
The cost of such an operation is Θ_1 + (j_k − i_1 + 1)Θ_2 + Σ_{ℓ=1}^{k−1} Φ(i_{ℓ+1} − j_ℓ − 1). As in the previous section, it suffices to consider just duplicate-invert-delete and duplicate-delete operations, rather than duplicate, duplicate-invert and delete operations.
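Applying Definition 14 can be sketched as follows, reusing the (sign, symbol) pair representation of signed strings. The sketch assumes, as is natural for the deletion model, that the trailing cost term is the sum of Φ over the lengths of the deleted gaps between consecutive kept intervals; parameter names are illustrative.

```python
def duplicate_invert_delete(x, z, pairs, p, theta1, theta2, phi):
    """Apply a duplicate-invert-delete operation.  `pairs` is the list
    [(i_1, j_1), ..., (i_k, j_k)] of 1-indexed, inclusive intervals of x
    that survive deletion; their concatenation is inverted (reversed,
    signs flipped) and pasted into z after position p.  The cost is
    Theta_1 + (j_k - i_1 + 1) * Theta_2 plus phi(g) for the length g of
    each deleted gap between consecutive intervals."""
    kept = []
    for (i, j) in pairs:
        kept.extend(x[i - 1:j])
    copied = [(-sign, sym) for (sign, sym) in reversed(kept)]
    i1, jk = pairs[0][0], pairs[-1][1]
    cost = theta1 + (jk - i1 + 1) * theta2
    cost += sum(phi(pairs[l + 1][0] - pairs[l][1] - 1)
                for l in range(len(pairs) - 1))
    return z[:p] + copied + z[p:], cost
```

For example, keeping intervals (1, 2) and (4, 5) of (+a +b +c +d +e) deletes +c and pastes (−e −d −b −a).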
Lemma 4. If Φ(·) is nondecreasing and obeys the triangle inequality, and if the cost of inversion is an affine nondecreasing function as defined above, then the cost of a minimum sequence of duplicate, duplicate-invert and delete operations that generates a target string y from a source string x is equal to the cost of a minimum sequence of duplicate-delete and duplicate-invert-delete operations that generates y from x.
The proof of the lemma is essentially the same as that of Lemma 2. Note that in that proof we did not require all duplicate operations to be from the same string x. Therefore, the arguments in that proof apply to our case, where we can regard some of the duplicates as coming from x and some from the inverse of x.
The recurrence for duplication-inversion-deletion distance is obtained by combining the recurrences for duplication-deletion distance (Eq. 5) and duplication-inversion distance (Eq. 6). We use separate terms for duplicate-delete operations (d_i^{did}) and for duplicate-invert-delete operations (d̄_i^{did}). These terms differ from the terms in Eq. 6 in the same way Eq. 5 differs from Eq. 2: because of the possible deletion we do not know that x_{i+1} (respectively x_{i−1}) is the next duplicated character. Instead we minimize over all characters later (respectively earlier) than x_i.
The recurrence for duplication-inversion-deletion distance is therefore:
Theorem 4. d^{did}(x, y) is the duplication-inversion-deletion distance from x to y. For {i : x_i = y_1}, d_i^{did}(x, y) is the duplication-inversion-deletion distance from x to y under the additional restriction that y_1 is generated by x_i. For {i : x_i = −y_1}, d̄_i^{did}(x, y) is the duplication-inversion-deletion distance from x to y under the additional restriction that y_1 is generated by −x_i.
The proof, again, is very similar to the proofs in the previous sections. The running time of the corresponding dynamic programming algorithm is asymptotically the same as that of duplication-deletion distance: O(|y|^2 |x| μ(y)μ(x)), where the multiplicity μ(y) (or μ(x)) is the number of times a character appears in the string y (or x), regardless of its sign.
In comparing the models of the previous section and the current one, we note that restricting the model of rearrangement to allow only duplicate and duplicate-invert operations (Section 5) instead of duplicate-invert-delete operations may be desirable from a biological perspective, because each duplicate and duplicate-invert requires only three breakpoints in the genome, whereas a duplicate-invert-delete operation can be significantly more complicated, requiring more breakpoints.
Variants of Duplication-Inversion-Deletion Distance
It is possible to extend the model even further. We give here one detailed example which demonstrates how such extensions might be achieved; other extensions are also possible. In the previous section we handled a model in which the duplicated substring of x may be inverted in its entirety before being inserted into the target string. In the generalized model, a substring of the duplicated string may be inverted before the string is inserted into y. For example, we allow (+a +b +c +d +e +f) to become (+a +b −e −d −c +f) before being inserted into y. In this model, the cost of duplicating a string of length m with an inversion of a substring of length ℓ is Δ_1 + mΔ_2 + Θ(ℓ), for some nonnegative monotonically increasing cost function Θ.
We extend the recurrence by considering all possible substring inversions of the original string x. For 1 ≤ s ≤ t ≤ |x|, let x^{(s,t)} be the string x_1 ... x_{s−1} (−x_t) ... (−x_s) x_{t+1} ... x_{|x|}, that is, the string obtained from x by inverting (in place) the substring x_{s,t}. For convenience, define also x^{(0,0)} = x. We use d_i^{s,t}(x, y) to denote the distance from x to y in this model under the additional restriction that y_1 is generated by x_i and that the substring x_{s,t} was inverted. Note that this does not make much sense unless s ≤ i ≤ t, since otherwise the inverted substring is not used in the duplication. However, requiring the inversion cost Θ(ℓ) to be nonnegative and monotonically increasing ensures that those cases do not contribute to the minimization, since inverting a character that is not duplicated only increases the cost. The recurrence for duplication-deletion with arbitrary-substring-duplicate-inversions distance is given below.
The running time is O(|y|^2 |x|^3 μ(x)μ(y)). The multiplicative |x|^2 factor in the running time, in comparison with that of the previous section, arises from considering all possible inverted substrings of x. We note that if we were only interested in handling inversions of just a prefix or a suffix of the duplicated string, then the duplication-inversion-deletion recurrence could be extended without increasing the asymptotic running time.
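The in-place substring inversion that produces x^{(s,t)} from x can be sketched directly, again representing signed characters as (sign, symbol) pairs (an illustrative choice):

```python
def invert_substring(x, s, t):
    """Return x^(s,t): the signed string obtained from x by inverting,
    in place, the substring x_s..x_t (1-indexed, inclusive), i.e.
    reversing its order and flipping its signs.  x^(0,0) is x itself."""
    if s == 0 and t == 0:
        return list(x)
    mid = [(-sign, sym) for (sign, sym) in reversed(x[s - 1:t])]
    return x[:s - 1] + mid + x[t:]
```

For example, inverting x_{3,5} of (+a +b +c +d +e +f) gives (+a +b −e −d −c +f), matching the example above.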
Duplication Distance as a Context-Free Grammar
The process of generating a string y by repeatedly copying substrings of a source string x and pasting them into an initially empty target string is naturally described by a context-free grammar (CFG). This alternative view may be useful in understanding our algorithms and their correctness. Thus, we provide the basic idea behind this connection for the simplest variant of duplication distance: no inversions or deletions, and each duplicate operation has cost 1. For a fixed source string x, we construct a grammar G_x in which for every i, j such that 1 ≤ i ≤ j ≤ |x| there is a production rule S → S x_i S x_{i+1} S ... S x_j S.
These production rules correspond to duplicating the substring x_{i,j}. In addition there is a trivial production rule S → ε, where ε denotes the empty string. It is easy to see that the language described by this grammar is exactly the set of strings that can be duplicated from x. The non-overlapping property (Lemma 1) is now an immediate consequence of the structure of parse trees of CFGs. Finding the duplication distance from x to y is equivalent to finding a parse tree with a minimal number of non-trivial productions among all possible parse trees for y.
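The rule set of G_x is small enough to enumerate explicitly for short strings. In the sketch below, each non-trivial rule S → S x_i S x_{i+1} S ... S x_j S is identified by its tuple of terminals (a representation chosen for illustration), and the trivial rule S → ε by the empty tuple.

```python
def grammar_rules(x):
    """Enumerate the production rules of G_x for source string x.
    A rule S -> S x_i S ... S x_j S is represented by the terminal
    tuple (x_i, ..., x_j); the trivial rule S -> epsilon by ()."""
    rules = [()]  # S -> epsilon
    m = len(x)
    for i in range(m):
        for j in range(i, m):
            rules.append(tuple(x[i:j + 1]))
    return rules
```

A source string of length m yields m(m + 1)/2 non-trivial rules plus the trivial one, e.g. 7 rules in total for x = abc.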
Consider now the slightly different grammar obtained by removing the leading S to the left of x_i from each of the production rules, so that the new rules are of the form S → x_i S x_{i+1} S ... S x_j S. It is not difficult to see that both grammars produce the same language and have the same minimal-size parse tree for every string y. The change only restricts the order in which rules are applied. For example, y_1 is always produced by the first production rule.
The recurrence for d_i(x, y) arises naturally from observing that if T is an optimal parse tree for y in which the first production rule generates y_1 by x_i and y_j by x_{i+1}, then the subtree T_1 of T that generates y_{2,j−1} is a valid parse tree which is optimal for y_{2,j−1}. Similarly, the tree T_2 obtained by deleting x_i and T_1 from T is a valid parse tree which is optimal for y_{j,|y|} under the restriction that y_j must be generated by x_{i+1} (see Fig. 7). Moreover, T_1 and T_2 are disjoint trees which together contain all non-trivial productions in T. This explains the term d(x, y_{2,j−1}) + d_{i+1}(x, y_{j,|y|}) in Eq. 2, which is the heart of the recursion. The minimization over {j : y_j = x_{i+1}, j > 1} simply enumerates all of the possibilities for constructing T. The term 1 + d(x, y_{2,|y|}) handles the possibility that y_1 is generated by a duplicate operation that ends with x_i. In this case the tree T_2 is empty, so we only consider T_1. We add one to account for the production rule at the root of T, which is not part of T_1. This is illustrated in Fig. 8.
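The parse-tree reading of Eq. 2 translates directly into a memoized computation of unit-cost duplication distance. The sketch below is an executable restatement of the recurrence rather than the paper's dynamic program: for readability it memoizes on substrings of y instead of index pairs, so it does not achieve the asymptotic bounds stated earlier.

```python
from functools import lru_cache

def duplication_distance(x, y):
    """Unit-cost duplication distance from source string x to target
    string y: the minimum number of duplicate operations (each copying
    a substring of x) needed to build y.  Returns inf if y cannot be
    generated from x."""
    INF = float('inf')

    @lru_cache(maxsize=None)
    def d(ys):
        # d(x, y): minimize over characters x_i that can generate y_1.
        if not ys:
            return 0
        return min((d_i(i, ys) for i in range(len(x)) if x[i] == ys[0]),
                   default=INF)

    @lru_cache(maxsize=None)
    def d_i(i, ys):
        # Either the current duplicate operation ends with x_i
        # (charging 1 for its root production) ...
        best = 1 + d(ys[1:])
        # ... or it continues: some later y_j is generated by x_{i+1},
        # and y_{2,j-1} is built by separate operations (the disjoint
        # subtrees T_1 and T_2 in the parse-tree view).
        if i + 1 < len(x):
            for j in range(1, len(ys)):
                if ys[j] == x[i + 1]:
                    best = min(best, d(ys[1:j]) + d_i(i + 1, ys[j:]))
        return best

    return d(y)
```

For instance, building y = abab from x = ab takes two duplicate operations, and building y = acb from x = abc takes two (duplicate ab, then paste c between them).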
Conclusion
We have shown how to generalize duplication distance to include certain types of deletions and inversions and how to compute these new distances efficiently via dynamic programming. In earlier work [17, 18], we used duplication distance to derive phylogenetic relationships between human segmental duplications. We plan to apply the generalized distances introduced here to the same data to determine if these richer computational models yield new biological insights.
References
 1.
Sankoff D, Leduc G, Antoine N, Paquin B, Lang B, Cedergren R: Gene Order Comparisons for Phylogenetic Inference: Evolution of the Mitochondrial Genome. Proc Natl Acad Sci USA. 1992, 89 (14): 6575-6579.
 2.
Pevzner P: Computational molecular biology: an algorithmic approach. 2000, Cambridge, Mass.: MIT Press
 3.
Chen X, Zheng J, Fu Z, Nan P, Zhong Y, Lonardi S, Jiang T: Assignment of Orthologous Genes via Genome Rearrangement. IEEE/ACM Trans Comp Biol Bioinformatics. 2005, 2 (4): 302-315. 10.1109/TCBB.2005.48.
 4.
Marron M, Swenson KM, Moret BME: Genomic Distances Under Deletions and Insertions. TCS. 2004, 325 (3): 347-360. 10.1016/j.tcs.2004.02.039.
 5.
El-Mabrouk N: Genome Rearrangement by Reversals and Insertions/Deletions of Contiguous Segments. Proc 11th Ann Symp Combin Pattern Matching (CPM00). 2000, 1848: 222-234. Berlin: Springer-Verlag
 6.
Zhang Y, Song G, Vinar T, Green ED, Siepel AC, Miller W: Reconstructing the Evolutionary History of Complex Human Gene Clusters. Proc 12th Int'l Conf on Research in Computational Molecular Biology (RECOMB). 2008, 29-49.
 7.
Ma J, Ratan A, Raney BJ, Suh BB, Zhang L, Miller W, Haussler D: DUPCAR: Reconstructing Contiguous Ancestral Regions with Duplications. Journal of Computational Biology. 2008, 15 (8): 1007-1027.
 8.
Bertrand D, Lajoie M, El-Mabrouk N: Inferring Ancestral Gene Orders for a Family of Tandemly Arrayed Genes. J Comp Biol. 2008, 15 (8): 1063-1077. 10.1089/cmb.2008.0025.
 9.
Chaudhuri K, Chen K, Mihaescu R, Rao S: On the Tandem Duplication-Random Loss Model of Genome Rearrangement. Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). 2006, 564-570. New York, NY, USA: ACM
 10.
Elemento O, Gascuel O, Lefranc MP: Reconstructing the Duplication History of Tandemly Repeated Genes. Mol Biol Evol. 2002, 19 (3): 278-288.
 11.
Lajoie M, Bertrand D, El-Mabrouk N, Gascuel O: Duplication and Inversion History of a Tandemly Repeated Genes Family. J Comp Bio. 2007, 14 (4): 462-478. 10.1089/cmb.2007.A007.
 12.
El-Mabrouk N, Sankoff D: The Reconstruction of Doubled Genomes. SIAM J Comput. 2003, 32 (3): 754-792. 10.1137/S0097539700377177.
 13.
Alekseyev MA, Pevzner PA: Whole Genome Duplications and Contracted Breakpoint Graphs. SICOMP. 2007, 36 (6): 1748-1763.
 14.
Bailey J, Eichler E: Primate Segmental Duplications: Crucibles of Evolution, Diversity and Disease. Nat Rev Genet. 2006, 7: 552-564.
 15.
Jiang Z, Tang H, Ventura M, Cardone MF, Marques-Bonet T, She X, Pevzner PA, Eichler EE: Ancestral reconstruction of segmental duplications reveals punctuated cores of human genome evolution. Nature Genetics. 2007, 39: 1361-1368.
 16.
Johnson M, Cheng Z, Morrison V, Scherer S, Ventura M, Gibbs R, Green E, Eichler E: Recurrent duplication-driven transposition of DNA during hominoid evolution. Proc Natl Acad Sci USA. 2006, 103: 17626-17631.
 17.
Kahn CL, Raphael BJ: Analysis of Segmental Duplications via Duplication Distance. Bioinformatics. 2008, 24: i133-i138.
 18.
Kahn CL, Raphael BJ: A Parsimony Approach to Analysis of Human Segmental Duplications. Pacific Symposium on Biocomputing. 2009, 126-137.
Acknowledgements
SM was supported by NSF Grant CCF-0635089. BJR is supported by a Career Award at the Scientific Interface from the Burroughs Wellcome Fund and by funding from the ADVANCE Program at Brown University, under NSF Grant No. 0548311.
Author information
Corresponding authors
Correspondence to Crystal L Kahn or Shay Mozes or Benjamin J Raphael.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
CLK, SM, and BJR all designed and analyzed the algorithms and drafted the manuscript. All authors read and approved the final manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Kahn, C.L., Mozes, S. & Raphael, B.J. Efficient algorithms for analyzing segmental duplications with deletions and inversions in genomes. Algorithms Mol Biol 5, 11 (2010). https://doi.org/10.1186/1748-7188-5-11