
Efficient algorithms for analyzing segmental duplications with deletions and inversions in genomes

Abstract

Background

Segmental duplications, or low-copy repeats, are common in mammalian genomes. In the human genome, most segmental duplications are mosaics composed of multiple duplicated fragments. This complex genomic organization complicates analysis of the evolutionary history of these sequences. One model proposed to explain these mosaic patterns is repeated aggregation and subsequent duplication of genomic sequences.

Results

We describe a polynomial-time exact algorithm to compute duplication distance, a genomic distance defined as the most parsimonious way to build a target string by repeatedly copying substrings of a fixed source string. This distance models the process of repeated aggregation and duplication. We also describe extensions of this distance to include certain types of substring deletions and inversions. Finally, we provide a description of a sequence of duplication events as a context-free grammar (CFG).

Conclusion

These new genomic distances will permit more biologically realistic analyses of segmental duplications in genomes.

Introduction

Genomes evolve via many types of mutations ranging in scale from single nucleotide mutations to large genome rearrangements. Computational models of these mutational processes allow researchers to derive similarity measures between genome sequences and to reconstruct evolutionary relationships between genomes. For example, considering chromosomal inversions as the only type of mutation leads to the so-called reversal distance problem of finding the minimum number of inversions/reversals that transform one genome into another [1]. Several elegant polynomial-time algorithms have been found to solve this problem (cf. [2] and references therein). Developing genome rearrangement models that are both biologically realistic and computationally tractable remains an active area of research.

Duplicated sequences in genomes present a particular challenge for genome rearrangement analysis and often make the underlying computational problems more difficult. For instance, computing reversal distance in genomes with duplicated segments is NP-hard [3]. Models that include both duplications and other types of mutations - such as inversions - often result in similarity measures that cannot be computed efficiently. Thus, most current approaches for duplication analysis rely on heuristics, approximation algorithms, or restricted models of duplication [3–7]. For example, there are efficient algorithms for computing tandem duplication histories [8–11] and whole-genome duplication histories [12, 13]. Here we consider another class of duplications: large segmental duplications (also known as low-copy repeats) that are common in many mammalian genomes [14]. These segmental duplications can be quite large (up to hundreds of kilobases), but their evolutionary history remains poorly understood, particularly in primates. The mystery surrounding them is due in part to their complex organization; many segmental duplications are found within contiguous regions of the genome called duplication blocks that contain mosaic patterns of smaller repeated segments, or duplicons [15]. Duplication blocks that are located on different chromosomes, or that are separated by large physical distances on a chromosome, often share sequences of duplicons [16]. These conserved sequences suggest that these duplicons were copied together across large genomic distances. One hypothesis proposed to explain these conserved mosaic patterns is a two-step model of duplication [14]. In this model, a first phase of duplications copies duplicons from the ancestral genome and aggregates these copies into primary duplication blocks. Then in a second phase, portions of these primary duplication blocks are copied and reinserted into the genome at disparate loci forming secondary duplication blocks.

In [17], we introduced a measure called duplication distance that models the duplication of contiguous substrings over large genomic distances. We used duplication distance in [18] to find the most parsimonious duplication scenario consistent with the two-step model of segmental duplication. The duplication distance from a source string x to a target string y is the minimum number of substrings of x that can be sequentially copied from x and pasted into an initially empty string in order to construct y. We derived an efficient exact algorithm for computing the duplication distance between a pair of strings. Note that the string x does not change during the sequence of duplication events. Moreover, duplication distance does not model local rearrangements, like tandem duplications, deletions or inversions, that occur within a duplication block during its construction. While such local rearrangements undoubtedly occur in genome evolution, the duplication distance model focuses on identifying the duplicate operations that account for the construction of repeated patterns within duplication blocks by aggregating substrings of other duplication blocks over large genomic distances. Thus, like nearly every other genome rearrangement model, the duplication distance model makes some simplifying assumptions about the underlying biology to achieve computational tractability. Here, we extend the duplication distance measure to include certain types of deletions and inversions. These extensions make our model less restrictive - although we still maintain the restriction that x is unchanged - and permit the construction of richer, and perhaps more biologically plausible, duplication scenarios. In particular, our contributions are the following.

Summary of Contributions

Let μ(x) denote the maximal multiplicity of a character in x, i.e., the maximum number of times any single character appears in the string x. Let |x| denote the length of x.

1. We provide an O(|y|²|x|μ(x)μ(y))-time algorithm to compute the distance between (signed) strings x and y when duplication and certain types of deletion operations are permitted.

2. We provide an O(|y|²μ(x)μ(y))-time algorithm to compute the distance between (signed) strings x and y when duplicated strings may be inverted before being inserted into the target string.

3. We provide an O(|y|²|x|μ(x)μ(y))-time algorithm to compute the distance between signed strings x and y when duplicated strings may be inverted before being inserted into the target string, and deletion operations are also permitted.

4. We provide an O(|y|²|x|³μ(x)μ(y))-time algorithm to compute the distance between signed strings x and y when any substring of the duplicated string may be inverted before being inserted into the target string. Deletion operations are also permitted.

5. We provide a formal proof of correctness of the duplication distance recurrence presented in [18]. No proof of correctness was previously given.

6. We show how a sequence of duplicate operations that generates a string can be described by a context-free grammar (CFG).

Preliminaries

We begin by reviewing some definitions and notation that were introduced in [17] and [18]. Let ∅ denote the empty string. For a string x = x1 . . . x n , let xi,j denote the substring x i xi+1 . . . x j . We define a subsequence S of x to be a string x i1 x i2 . . . x ik with i1 < i2 < ⋯ < i k , and we represent S by listing the indices at which the characters of S occur in x. For example, if x = abcdef, then the subsequence S = (1, 3, 5) is the string ace. Note that every substring is a subsequence, but a subsequence need not be a substring since the characters comprising a subsequence need not be contiguous. For a pair of subsequences S1, S2, we denote by S1 ∩ S2 the maximal subsequence common to both S1 and S2.

Definition 1. Subsequences S = (s1, s2) and T = (t1, t2) of a string x are alternating in x if either s1 < t1 < s2 < t2 or t1 < s1 < t2 < s2.

Definition 2. Subsequences S = (s1, . . ., s k ) and T = (t1, . . ., t l ) of a string x are overlapping in x if there exist indices i, i' and j, j' such that 1 ≤ i < i' ≤ k, 1 ≤ j < j' ≤ l, and (s i , si') and (t j , tj') are alternating in x. See Figure 1.

Figure 1. Overlapping. The red subsequence is overlapping with the blue subsequence in x. The indices (s i , si') and (t j , tj') are alternating in x.

Definition 3. Given subsequences S = (s1, . . ., s k ) and T = (t1, . . ., t l ) of a string x, S is inside of T if there exists an index i such that 1 ≤ i < l and t i < s1 < s k < ti+1. That is, the entire subsequence S occurs between successive characters of T. See Figure 2.

Figure 2. Inside. The red subsequence is inside the blue subsequence T. All the characters of the red subsequence occur between the indices t i and ti+1 of T.
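
To make these definitions concrete, the following small Python sketch (ours, not part of the original paper) implements the three predicates for subsequences represented as tuples of increasing indices; the function names are illustrative.

```python
from itertools import combinations

def alternating(S, T):
    """Definition 1: index pairs S = (s1, s2) and T = (t1, t2) interleave."""
    (s1, s2), (t1, t2) = S, T
    return s1 < t1 < s2 < t2 or t1 < s1 < t2 < s2

def overlapping(S, T):
    """Definition 2: some pair of indices of S alternates with some pair of T."""
    return any(alternating(p, q)
               for p in combinations(S, 2) for q in combinations(T, 2))

def inside(S, T):
    """Definition 3: all of S lies strictly between two successive indices of T."""
    return any(T[i] < S[0] and S[-1] < T[i + 1] for i in range(len(T) - 1))

assert overlapping((1, 3, 5), (2, 4, 6))   # e.g. (1, 3) and (2, 4) alternate
assert inside((3, 4), (2, 5, 6))           # 2 < 3 < 4 < 5
assert not overlapping((3, 4), (2, 5, 6))  # inside, hence not overlapping
```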

Definition 4. A duplicate operation from x, δ x (s, t, p), copies a substring x s . . . x t of the source string x and pastes it into a target string at position p. Specifically, if x = x1 . . . x m and z = z1 . . . z n , then z ∘ δ x (s, t, p) = z1 . . . zp-1 x s . . . x t z p . . . z n . See Figure 3.

Figure 3. A duplicate operation. A duplicate operation, denoted δ x (s, t, p). A substring x s xs+1 . . . x t of the source string x is copied and inserted into the target string z at index p.

Definition 5. The duplication distance from a source string x to a target string y is the minimum number of duplicate operations from x needed to generate y from an initially empty target string. That is, y = ∅ ∘ δ x (s1, t1, p1) ∘ δ x (s2, t2, p2) ∘ ⋯ ∘ δ x (s l , t l , p l ).

To compute the duplication distance from x to y, we assume that every character in y appears at least once in x. Otherwise, the duplication distance is undefined.
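
As a concrete illustration of Definitions 4 and 5, the short Python sketch below (ours, not from the paper) applies two duplicate operations to build the target string bbccd from the source abcd, the same example used in Figure 7; positions are 1-indexed as in the definitions.

```python
def duplicate(x, z, s, t, p):
    """delta_x(s, t, p): copy the substring x_s ... x_t of the source x
    (1-indexed, inclusive) and paste it into the target z just before
    position p, returning the new target string."""
    return z[:p - 1] + x[s - 1:t] + z[p - 1:]

x = list("abcd")
y = []                           # initially empty target string
y = duplicate(x, y, 2, 4, 1)     # copy "bcd"              -> y = bcd
y = duplicate(x, y, 2, 3, 2)     # copy "bc", paste at 2   -> y = bbccd
assert "".join(y) == "bbccd"     # so the duplication distance is at most 2
```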

Duplication Distance

In this section we review the basic recurrence for computing duplication distance that was introduced in [18]. The recurrence examines the characters of the target string, y, and considers the sets of characters of y that could have been generated, i.e., copied from the source string, in a single duplicate operation. Such a set of characters of y necessarily corresponds to a substring of the source x (see Def. 4). Moreover, these characters must be a subsequence of y. This is because, in a sequence of duplicate operations, once a string is copied and inserted into the target string, subsequent duplicate operations do not affect the order of the characters in the previously inserted string. Because every character of y is generated by exactly one duplicate operation, a sequence of duplicate operations that generates y partitions the characters of y into disjoint subsequences, each of which is generated in a single duplicate operation. A more interesting observation is that these subsequences are mutually non-overlapping. We formalize this property as follows.

Lemma 1 (Non-overlapping Property). Consider a source string x and a sequence of duplicate operations of the form δ x (s i , t i , p i ) that generates the final target string y from an initially empty target string. The substrings of x that are duplicated during the construction of y appear as mutually non-overlapping subsequences of y.

Proof. Consider a sequence of duplicate operations δ x (s1, t1, p1), . . ., δ x (s k , t k , p k ) that generates y from an initially empty target string. For 1 ≤ i ≤ k, let z i be the intermediate target string that results from δ x (s1, t1, p1) ∘ ⋯ ∘ δ x (s i , t i , p i ). Note that z k = y. For j ≤ i, let S i,j be the subsequence of z i that corresponds to the characters duplicated by the jth operation. We show by induction on the length i of the sequence that S i,1 , . . ., S i,i are pairwise non-overlapping subsequences of z i . For the base case, when there is a single duplicate operation, there is no non-overlapping property to show. Assume now that S i-1,1 , . . ., S i-1,i-1 are mutually non-overlapping subsequences of z i-1 . For the induction step note that, by the definition of a duplicate operation, S i,i is inserted as a contiguous substring into z i-1 at location p i to form z i . Therefore, for any j, j' < i, if S i-1,j and S i-1,j' are non-overlapping in z i-1 , then S i,j and S i,j' are non-overlapping in z i . It remains to show that, for any j < i, S i,j and S i,i are non-overlapping in z i . There are two cases: either (1) the elements of S i,i are all smaller or all greater than the elements of S i,j , or (2) S i,i is inside of S i,j in z i (Definition 3). In either case, S i,j and S i,i are not overlapping in z i , as required. □

The non-overlapping property leads to an efficient recurrence that computes duplication distance. When considering subsequences of the final target string y that might have been generated in a single duplicate operation, we rely on the non-overlapping property to identify substrings of y that can be treated as independent subproblems. If we assume that some subsequence S of y is produced in a single duplicate operation, then we know that all other subsequences of y that correspond to duplicate operations cannot overlap the characters in S. Therefore, the substrings of y in between successive characters of S define subproblems that are computed independently.

In order to find the optimal (i.e. minimum) sequence of duplicate operations that generate y, we must consider all subsequences of y that could have been generated by a single duplicate operation. The recurrence is based on the observation that y1 must be the first (i.e. leftmost) character to be copied from x in some duplicate operation. There are then two cases to consider: either (1) y1 was the last (or rightmost) character in the substring that was duplicated from x to generate y1, or (2) y1 was not the last character in the substring that was duplicated from x to generate y1.

The recurrence defines two quantities: d(x, y) and d i (x, y). We shall show, by induction, that for a pair of strings x and y, the value d(x, y) is equal to the duplication distance from x to y and that d i (x, y) is equal to the duplication distance from x to y under the restriction that the character y1 is copied from index i in x, i.e., that x i generates y1. The value d(x, y) is found by taking the minimum over all characters x i of x that can generate y1 (see Eq. 1).

As described above, we must consider two possibilities in order to compute d i (x, y). Either:

Case 1: y1 was the last (or rightmost) character in the substring of x that was copied to produce y1 (see Fig. 4), or

Figure 4. Recurrence: Case 1. y1 is generated from x i in a duplicate operation where y1 is the last (rightmost) character in the copied substring (Case 1). The total duplication distance is one plus the duplication distance for the suffix y2,|y|.

Case 2: xi+1 is also copied in the same duplicate operation as x i , possibly along with other characters (see Fig. 5).

Figure 5. Recurrence: Case 2. y1 is generated from x i in a duplicate operation where y1 is not the last (rightmost) character in the copied substring (Case 2). In this case, xi+1 is also copied in the same duplicate operation (top). Thus, the duplication distance is the sum of d(x, y2,j-1), the duplication distance for y2,j-1 (bottom left), and di+1(x, yj,|y|), the minimum number of duplicate operations to generate yj,|y| given that xi+1 generates y j (bottom right).

For case one, the minimum number of duplicate operations is one (for the duplicate that generates y1) plus the minimum number of duplicate operations to generate the suffix of y, giving a total of 1 + d(x, y2,|y|) (Fig. 4). For case two, Lemma 1 implies that the minimum number of duplicate operations is the sum of the optimal numbers of operations for two independent subproblems. Specifically, for each j > 1 such that xi+1 = y j we compute: (i) the minimum number of duplicate operations needed to build the substring y2,j-1, namely d(x, y2,j-1), and (ii) the minimum number of duplicate operations needed to build the string y1 yj,|y|, given that y1 is generated by x i and y j is generated by xi+1. To compute the latter, recall that since x i and xi+1 are copied in the same duplicate operation, the number of duplicates necessary to generate y1 yj,|y| using x i and xi+1 is equal to the number of duplicates necessary to generate yj,|y| using xi+1, namely di+1(x, yj,|y|) (see Fig. 5 and Eq. 2).

The recurrence is, therefore:

$$d(x, y) = \min_{\{i \,:\, x_i = y_1\}} d_i(x, y) \qquad (1)$$

$$d_i(x, y) = \min\Bigl\{\, 1 + d(x, y_{2,|y|}) \,,\; \min_{\{j \,:\, y_j = x_{i+1},\, j > 1\}} \bigl[\, d(x, y_{2,j-1}) + d_{i+1}(x, y_{j,|y|}) \,\bigr] \Bigr\} \qquad (2)$$

where, by convention, d(x, ∅) = 0.

Theorem 1. d(x, y) is the minimum number of duplicate operations that generate y from x. For {i : x i = y1}, d i (x, y) is the minimum number of duplicate operations that generate y from x such that y1 is generated by x i .

Proof. Let OPT(x, y) denote the minimum length of a sequence of duplicate operations that generate y from x. Let OPT i (x, y) denote the minimum length of a sequence of operations that generate y from x such that y1 is generated by x i . We prove by induction on |y| that d(x, y) = OPT(x, y) and d i (x, y) = OPT i (x, y).

For |y| = 1, since we assume there is at least one i for which x i = y1, OPT (x, y) = OPT i (x, y) = 1. By definition, the recurrence also evaluates to 1. For the inductive step, assume that OPT (x, y') = d(x, y') and OPT i (x, y') = d i (x, y') for any string y' shorter than y. We first show that OPT i (x, y) ≤ d i (x, y). Since OPT (x, y) = min i OPT i (x, y), this also implies OPT (x, y) ≤ d(x, y). We describe different sequences of duplicate operations that generate y from x, using x i to generate y1:

  • Consider a minimum-length sequence of duplicates that generates y2,|y|. By the inductive hypothesis its length is d(x, y2,|y|). By duplicating y1 separately using x i we obtain a sequence of duplicates that generates y whose length is 1 + d(x, y2,|y|).

  • For every {j : y j = xi+1, j > 1} consider a minimum-length sequence of duplicates that generates yj,|y|using xi+1to produce y j , and a minimum-length sequence of duplicates that generates y2, j-1.

By the inductive hypothesis their lengths are di+1(x, yj,|y|) and d(x, y2,j-1), respectively. By extending the start index s of the duplicate operation that starts with xi+1 to produce y j to start with x i and produce y1 as well, we produce y with the same number of duplicate operations.

Since OPT i (x, y) is at most the length of any of these options, it is also at most their minimum. Hence, OPT i (x, y) ≤ d i (x, y).

To show the other direction (i.e. that d(x, y) ≤ OPT (x, y) and d i (x, y) ≤ OPT i (x, y)), consider a minimum-length sequence of duplicate operations that generate y from x, using x i to generate y1. There are a few cases:

  • If y1 is generated by a duplicate operation that only duplicates x i , then OPT i (x, y) = 1 + OPT (x, y2,|y|). By the inductive hypothesis this equals 1 + d(x, y2,|y|) which is at least d i (x, y).

  • Otherwise, y1 is generated by a duplicate operation that copies x i and also duplicates xi+1 to generate some character y j . In this case the sequence Δ of duplicates that generates y2,j-1 must appear after the duplicate operation that generates y1 and y j because y2,j-1 is inside (Definition 3) of (y1, y j ). Without loss of generality, suppose Δ is ordered after all the other duplicates, so that first y1 y j . . . y|y| is generated, and then Δ generates y2 . . . yj-1 between y1 and y j . Hence, OPT i (x, y) = OPT i (x, y1 yj,|y|) + OPT(x, y2,j-1). Since in the optimal sequence x i generates y1 in the same duplicate operation that generates y j from xi+1, we have OPT i (x, y1 yj,|y|) = OPTi+1(x, yj,|y|). By the inductive hypothesis, OPT(x, y2,j-1) + OPTi+1(x, yj,|y|) = d(x, y2,j-1) + di+1(x, yj,|y|), which is at least d i (x, y). □

This recurrence naturally translates into a dynamic programming algorithm that computes the values of d(x, ·) and d i (x, ·) for various target strings. To analyze the running time of this algorithm, note that both y2,j-1 and yj,|y| are substrings of y. Since the set of substrings of y is closed under taking substrings, we only encounter substrings of y. Also note that since i is chosen from the set {i : x i = y1}, there are O(μ(x)) choices for i, where μ(x) is the maximal multiplicity of a character in x. Thus, there are O(μ(x)|y|²) different values to compute. Each value is computed by minimizing over at most μ(y) previously computed values, so the total running time is bounded by O(|y|²μ(x)μ(y)), which is O(|y|³|x|) in the worst case. As with most dynamic programming approaches, this algorithm (and all others presented in subsequent sections) can be extended through trace-back to reconstruct the optimal sequence of operations needed to build y. We omit the details.
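
The following memoized Python sketch (ours, not the authors' implementation) is a direct transcription of Eqs. (1) and (2) with unit cost; each subproblem is indexed by the boundaries (a, b) of the substring ya,b of y, and the names are illustrative.

```python
from functools import lru_cache

def duplication_distance(x, y):
    """Duplication distance from source x to target y (Eqs. 1-2, unit cost).
    Returns infinity if some character of y never occurs in x."""

    @lru_cache(maxsize=None)
    def d(a, b):                        # d(x, y_{a,b}); 1-indexed, inclusive
        if a > b:
            return 0                    # the empty target needs no operations
        return min((d_i(i, a, b) for i in range(1, len(x) + 1)
                    if x[i - 1] == y[a - 1]), default=float('inf'))

    @lru_cache(maxsize=None)
    def d_i(i, a, b):                   # y_a is generated by x_i
        best = 1 + d(a + 1, b)          # Case 1: x_i ends the copied substring
        if i < len(x):                  # Case 2: x_{i+1} is copied as well ...
            for j in range(a + 1, b + 1):
                if y[j - 1] == x[i]:    # ... and generates some y_j
                    best = min(best, d(a + 1, j - 1) + d_i(i + 1, j, b))
        return best

    return d(1, len(y))

assert duplication_distance("abcd", "bbccd") == 2
```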

Extending to Affine Duplication Cost

It is easy to extend the recurrence relations in Eqs. (1), (2) to handle more general costs for duplicate operations. In the above discussion, the cost of each duplicate operation is 1, so the sum of costs of the operations in a sequence that generates a string y is just the length of that sequence. We next consider a more general cost model for duplication in which the cost of a duplicate operation δ x (s, t, p) is Δ1 + (t - s + 1)Δ2 (i.e., the cost is affine in the number of duplicated characters). Here Δ1, Δ2 are some non-negative constants. This extension is obtained by assigning a cost of Δ2 to each duplicated character, except for the last character in the duplicated string, which is assigned a cost of Δ1 + Δ2. We do that by adding a cost term to each of the cases in Eq. 2. If x i is the last character in the duplicated string (case 1), we add Δ1 + Δ2 to the cost. Otherwise x i is not the last duplicated character (case 2), so we add just Δ2 to the cost. Eq. (2) thus becomes

$$d_i(x, y) = \min\Bigl\{\, \Delta_1 + \Delta_2 + d(x, y_{2,|y|}) \,,\; \min_{\{j \,:\, y_j = x_{i+1},\, j > 1\}} \bigl[\, \Delta_2 + d(x, y_{2,j-1}) + d_{i+1}(x, y_{j,|y|}) \,\bigr] \Bigr\} \qquad (3)$$

The running time analysis for this recurrence is the same as for the one with unit duplication cost.
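
Only the two cost terms of d i change under the affine model; a compact sketch of this variant (ours, with delta1 and delta2 standing in for Δ1 and Δ2) is shown below.

```python
from functools import lru_cache

def duplication_distance_affine(x, y, delta1, delta2):
    """Duplication distance with affine duplicate cost (Eq. 3): a duplicate
    operation that copies L characters costs delta1 + L * delta2."""

    @lru_cache(maxsize=None)
    def d(a, b):
        if a > b:
            return 0
        return min((d_i(i, a, b) for i in range(1, len(x) + 1)
                    if x[i - 1] == y[a - 1]), default=float('inf'))

    @lru_cache(maxsize=None)
    def d_i(i, a, b):
        # Case 1: x_i is the last duplicated character; it costs delta1 + delta2.
        best = delta1 + delta2 + d(a + 1, b)
        # Case 2: x_{i+1} is duplicated in the same operation; x_i costs delta2.
        if i < len(x):
            for j in range(a + 1, b + 1):
                if y[j - 1] == x[i]:
                    best = min(best, delta2 + d(a + 1, j - 1) + d_i(i + 1, j, b))
        return best

    return d(1, len(y))

# With delta1 = 1 and delta2 = 0 this reduces to the unit-cost distance above.
assert duplication_distance_affine("abcd", "bbccd", 1, 0) == 2
```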

Duplication-Deletion Distance

In this section we generalize the model to include deletions. Consider the intermediate string z generated after some number of duplicate operations. A deletion operation removes a contiguous substring z i , . . ., z j of z, and subsequent duplicate and deletion operations are applied to the resulting string.

Definition 6. A delete operation, τ(s, t), deletes a substring z s . . . z t of the target string z, thus making z shorter. Specifically, if z = z1 . . . z s . . . z t . . . z m , then z ∘ τ(s, t) = z1 . . . zs-1 zt+1 . . . z m . See Figure 6.

Figure 6. A delete operation. A delete operation, denoted τ(s, t). The substring zs,t is deleted.

The cost associated with τ(s, t) depends on the number t - s + 1 of characters deleted and is denoted Φ(t - s + 1).

Definition 7. The duplication-deletion distance from a source string x to a target string y is the minimum cost of a sequence of duplicate operations from x and deletion operations, in any order, that generates y.

We now show that although we allow arbitrary deletions from the intermediate string, it suffices to consider deletions from the duplicated strings before they are pasted into the intermediate string, provided that the deletion cost function Φ(·) is non-decreasing and obeys the triangle inequality.

Definition 8. A duplicate-delete operation from x, η x (i1, j1, i2, j2, . . ., i k , j k , p), for i1 ≤ j1 < i2 ≤ j2 < ⋯ < i k ≤ j k , copies the subsequence x i1 . . . x j1 x i2 . . . x j2 ⋯ x ik . . . x jk of the source string x and pastes it into a target string at position p. Specifically, if x = x1 . . . x m and z = z1 . . . z n , then z ∘ η x (i1, j1, . . ., i k , j k , p) = z1 . . . zp-1 x i1 . . . x j1 ⋯ x ik . . . x jk z p . . . z n .

The cost associated with such a duplicate-delete operation is Δ1 + (j k - i1 + 1)Δ2 + Σl Φ(il+1 - j l - 1), where the sum runs over l = 1, . . ., k - 1. The first two terms reflect the affine cost of duplicating the entire substring of length j k - i1 + 1, and the last term reflects the cost of the deletions made to that substring.

Lemma 2. If the affine cost for duplications is non-decreasing and Φ (·) is non-decreasing and obeys the triangle inequality then the cost of a minimum sequence of duplicate and delete operations that generates a target string y from a source string x is equal to the cost of a minimum sequence of duplicate-delete operations that generates y from x .

Proof. Since duplicate operations are a special case of duplicate-delete operations, the cost of a minimal sequence of duplicate-delete operations and delete operations that generates y cannot be more than that of a sequence of just duplicate operations and delete operations. We show, by induction on the number of delete operations, the (stronger) claim that an arbitrary sequence of duplicate-delete and delete operations that produces a string y with cost c can be transformed into a sequence of just duplicate-delete operations that generates y with cost at most c. The base case, where the number of deletions is zero, is trivial. Consider the first delete operation, τ. Let k denote the number of duplicate-delete operations that precede τ, and let z be the intermediate target string produced by these k operations. For i = 1, . . ., k, let S i be the subsequence of z consisting of the characters produced by the i th duplicate-delete operation. By Lemma 1, S1, . . ., S k partition the characters of z into disjoint, mutually non-overlapping subsequences of z. Let d denote the substring of z to be deleted. Since d is a contiguous substring, S i ∩ d is a (possibly empty) substring of S i for each i. There are several cases:

1. S i ∩ d = ∅. In this case we do not change any operation.

2. S i ∩ d = S i . In this case all characters produced by the i th duplicate-delete operation are deleted, so we may omit the i th operation altogether and decrease the number of characters deleted by τ . Since Φ (·) is non-decreasing, this does not increase the cost of generating z (and hence y).

3. S i ∩ d is a prefix (or suffix) of S i . Assume it is a prefix. The case of suffix is similar. Instead of deleting the characters S i ∩ d we can avoid generating them in the first place. Let r be the smallest index in S i \d (that is, the first character in S i that is not deleted by τ). We change the i th duplicate-delete operation to start at r and decrease the number of characters deleted by τ . Since the affine cost for duplications is non-decreasing and Φ (·) is non-decreasing, the cost of generating z does not increase.

4. S i ∩ d is a non-empty substring of S i that is neither a prefix nor a suffix of S i . We claim that this case applies to at most one value of i. This implies that, after taking care of all the other cases, τ only deletes characters in S i . We then change the i th duplicate-delete operation to also delete the characters deleted by τ, and omit τ. Since Φ(·) obeys the triangle inequality, this does not increase the total cost of deletion. By the inductive hypothesis, the rest of y can be generated by just duplicate-delete operations with at most the same cost. It remains to prove the claim. Recall that the set {S i } consists of mutually non-overlapping subsequences of z. Suppose that there exist indices i ≠ j such that S i ∩ d is a non-prefix/suffix substring of S i and S j ∩ d is a non-prefix/suffix substring of S j . Then there must exist indices of both S i and S j in z that precede d, that are contained in d, and that succeed d. Let i p < i c < i s be three such indices of S i and let j p < j c < j s be similar indices of S j . It must also be the case that j p < i c < j s and i p < j c < i s . Without loss of generality, suppose i p < j p . It follows that (i p , i c ) and (j p , j s ) are alternating in z. So S i and S j are overlapping, which contradicts Lemma 1. □

To extend the recurrence from the previous section to duplication-deletion distance, we must observe that because we allow deletions in the string that is duplicated from x, if we assume character x i is copied to produce y1, it may not be the case that the character xi+1 also appears in y; the character xi+1 may have been deleted. Therefore, we minimize over all possible locations k > i for the next character in the duplicated string that is not deleted. The extension of the recurrence from the previous section to duplication-deletion distance is:

$$\tilde{d}(x, y) = \min_{\{i \,:\, x_i = y_1\}} \tilde{d}_i(x, y) \qquad (4)$$

$$\tilde{d}_i(x, y) = \min\Bigl\{\, \Delta_1 + \Delta_2 + \tilde{d}(x, y_{2,|y|}) \,,\; \min_{k > i}\; \min_{\{j \,:\, y_j = x_k,\, j > 1\}} \bigl[\, (k - i)\Delta_2 + \Phi(k - i - 1) + \tilde{d}(x, y_{2,j-1}) + \tilde{d}_k(x, y_{j,|y|}) \,\bigr] \Bigr\} \qquad (5)$$

where d̃ denotes the duplication-deletion distance and, by convention, Φ(0) = 0 and d̃(x, ∅) = 0.

Theorem 2. d̃(x, y) is the duplication-deletion distance from x to y. For {i : x i = y1}, d̃ i (x, y) is the duplication-deletion distance from x to y under the additional restriction that y1 is generated by x i .

The proof of Theorem 2 is almost identical to that of Theorem 1 in the previous section and is omitted. However, the running time increases; while the number of entries in the dynamic programming table does not change, the time to compute each entry is multiplied by the number of possible values of k in the recurrence, which is O(|x|). Therefore, the running time is O(|y|²|x|μ(x)μ(y)), which is O(|y|³|x|²) in the worst case. We conclude this section by showing, in the following lemma, that if both the duplication and deletion costs are one per operation, then the duplication-deletion distance is equal to the duplication distance without deletions.
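
A sketch of the duplication-deletion recurrence as reconstructed in Eqs. (4) and (5) (ours; the function phi stands for the deletion cost Φ(·), and the handling of empty gaps is our assumption).

```python
from functools import lru_cache

def duplication_deletion_distance(x, y, delta1, delta2, phi):
    """Duplication-deletion distance (Eqs. 4-5 as reconstructed above):
    affine duplication cost plus phi(L) for every deleted block of L > 0
    characters inside a duplicated substring."""

    @lru_cache(maxsize=None)
    def d(a, b):
        if a > b:
            return 0
        return min((d_i(i, a, b) for i in range(1, len(x) + 1)
                    if x[i - 1] == y[a - 1]), default=float('inf'))

    @lru_cache(maxsize=None)
    def d_i(i, a, b):
        # Case 1: x_i is the last kept character of the duplicated substring.
        best = delta1 + delta2 + d(a + 1, b)
        # Case 2: the next kept character is x_k for some k > i; the characters
        # x_{i+1} ... x_{k-1} (if any) are deleted as one block.
        for k in range(i + 1, len(x) + 1):
            gap = k - i - 1
            step = (k - i) * delta2 + (phi(gap) if gap > 0 else 0)
            for j in range(a + 1, b + 1):
                if y[j - 1] == x[k - 1]:
                    best = min(best, step + d(a + 1, j - 1) + d_i(k, j, b))
        return best

    return d(1, len(y))

# Unit cost per duplicate and per deleted block; the optimum here needs no
# deletions, so the value matches the plain duplication distance (cf. Lemma 3).
assert duplication_deletion_distance("abcd", "bbccd", 1, 0, lambda L: 1) == 2
```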

Lemma 3. Given a source string x and a target string y, if the cost of duplication is 1 per duplicate operation and the cost of deletion is 1 per delete operation, then d̃(x, y) = d(x, y).

Proof. First we note that if a target string y can be built from x in d(x, y) duplicate operations, then the same sequence of duplicate operations is a valid sequence of duplicate and delete operations as well, so d(x, y) is at least d̃(x, y).

We claim that every sequence of duplicate and delete operations can be transformed into a sequence of duplicate operations of the same length. The proof of this claim is similar to that of Lemma 2. In that proof we showed how to transform a sequence of duplicate and delete operations into a sequence of duplicate-delete operations of at most the same cost. We follow the same steps, but transform the sequence into a sequence that consists of just duplicate operations without increasing the number of operations. Recall the four cases in the proof of Lemma 2. In the first three cases we eliminate the delete operation without increasing the number of duplicate operations. Therefore we only need to consider the last case (S i ∩ d is a non-empty substring of S i that is neither a prefix nor a suffix of S i ). Recall that this case applies to at most one value of i. Deleting S i ∩ d from S i leaves a prefix and a suffix of S i . We can therefore replace the ith duplicate operation and the delete operation with two duplicate operations, one generating the appropriate prefix of S i and the other generating the appropriate suffix of S i . This eliminates the delete operation without changing the number of operations in the sequence. Therefore, for any string y that results from a sequence of duplicate and delete operations, we can construct the same string using only duplicate operations (without deletes) using at most the same number of operations. So, d(x, y) is no greater than d̃(x, y). □

Duplication-Inversion Distance

In this section we extend the duplication-deletion distance recurrence to allow inversions. We now explicitly define characters and strings as having two orientations: forward (+) and inverse (-).

Definition 9. A signed string of length m over an alphabet Σ is an element of ({+, -} × Σ)m.

For example, (+b -c -a +d) is a signed string of length 4. An inversion of a signed string reverses the order of the characters as well as their signs. Formally,

Definition 10. The inverse of a signed string x = x1 . . . x m is the signed string -x = -x m . . . -x1.

For example, the inverse of (+b -c -a +d) is (-d +a +c -b).

In a duplicate-invert operation a substring is copied from x and inverted before being inserted into the target string y. We allow the cost of inversion to be an affine function in the length ℓ of the duplicated inverted string, which we denote Θ1 + ℓΘ2, where Θ1, Θ2 ≥ 0. We still allow for normal duplicate operations.

Definition 11. A duplicate-invert operation from x, δ̄ x (s, t, p), copies the inverted substring -x t -xt-1 . . . -x s of the source string x and pastes it into a target string at position p. Specifically, if x = x1 . . . x m and z = z1 . . . z n , then z ∘ δ̄ x (s, t, p) = z1 . . . zp-1 -x t -xt-1 . . . -x s z p . . . z n .

The cost associated with each duplicate-invert operation is Θ1 + (t - s + 1)Θ2.

Definition 12. The duplication-inversion distance from a source string x to a target string y is the minimum cost of a sequence of duplicate and duplicate-invert operations from x, in any order, that generates y.

The recurrence for duplication distance (Eqs. 1, 3) can be extended to compute the duplication-inversion distance. This is done by introducing a term for inverted duplications whose form is very similar to that of the term for regular duplication (Eq. 3). Specifically, when considering the possible characters that generate y1, we consider characters in x that match either y1 or its inverse, -y1. In the former case, we use d̂ i (x, y) to denote the duplication-inversion distance with the additional restriction that y1 is generated by x i without an inversion. The recurrence for d̂ i is the same as that for d i in Eq. 3. In the latter case, we consider an inverted duplicate in which y1 is generated by -x i . This is denoted by d̄ i (x, y), which follows a similar recurrence. In this recurrence, since an inversion occurs, x i is the last character of the duplicated string rather than the first one. Therefore, the next character in x to be used in this operation is -xi-1 rather than xi+1. The recurrence for d̄ i also differs in the cost term, where we use the affine cost of the duplicate-invert operation. The extension of the recurrence to duplication-inversion distance is therefore:

$$\hat{d}(x, y) = \min\Bigl\{\, \min_{\{i \,:\, x_i = y_1\}} \hat{d}_i(x, y) \,,\; \min_{\{i \,:\, x_i = -y_1\}} \bar{d}_i(x, y) \Bigr\} \qquad (6)$$

where d̂ i follows the recurrence of Eq. (3) with d replaced by d̂, and

$$\bar{d}_i(x, y) = \min\Bigl\{\, \Theta_1 + \Theta_2 + \hat{d}(x, y_{2,|y|}) \,,\; \min_{\{j \,:\, y_j = -x_{i-1},\, j > 1\}} \bigl[\, \Theta_2 + \hat{d}(x, y_{2,j-1}) + \bar{d}_{i-1}(x, y_{j,|y|}) \,\bigr] \Bigr\}.$$

Theorem 3. d̂(x, y) is the duplication-inversion distance from x to y. For {i : x i = y1}, d̂ i (x, y) is the duplication-inversion distance from x to y under the additional restriction that y1 is generated by x i . For {i : x i = -y1}, d̄ i (x, y) is the duplication-inversion distance from x to y under the additional restriction that y1 is generated by -x i .

The correctness proof is very similar to that of Theorem 1, requiring only an additional case to handle duplicate-invert operations, which is symmetric to the case of regular duplication. The asymptotic running time of the corresponding dynamic programming algorithm is O(|y|²μ(x)μ(y)). The analysis is identical to the one in Section 3. The fact that we now consider either a duplicate or a duplicate-invert operation does not change the asymptotic running time.
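
A sketch of the duplication-inversion recurrence (Eq. 6) with signed characters encoded as two-character strings such as '+a' and '-b' (our encoding; theta1 and theta2 stand for Θ1 and Θ2).

```python
from functools import lru_cache

def inverse(c):
    """Inverse of a signed character, e.g. inverse('+a') == '-a'."""
    return ('-' if c[0] == '+' else '+') + c[1:]

def duplication_inversion_distance(x, y, delta1, delta2, theta1, theta2):
    """Duplication-inversion distance: y is built from duplicates of
    substrings of x, each pasted either as is or inverted as a whole.
    x and y are sequences of signed characters such as '+a' or '-b'."""

    @lru_cache(maxsize=None)
    def d(a, b):                                  # distance for y_{a,b}
        if a > b:
            return 0
        best = float('inf')
        for i in range(1, len(x) + 1):
            if x[i - 1] == y[a - 1]:
                best = min(best, d_fwd(i, a, b))  # y_a generated by x_i
            if inverse(x[i - 1]) == y[a - 1]:
                best = min(best, d_rev(i, a, b))  # y_a generated by -x_i
        return best

    @lru_cache(maxsize=None)
    def d_fwd(i, a, b):                           # regular duplicate
        best = delta1 + delta2 + d(a + 1, b)
        if i < len(x):
            for j in range(a + 1, b + 1):
                if y[j - 1] == x[i]:              # x_{i+1} generates y_j
                    best = min(best, delta2 + d(a + 1, j - 1) + d_fwd(i + 1, j, b))
        return best

    @lru_cache(maxsize=None)
    def d_rev(i, a, b):                           # inverted duplicate
        best = theta1 + theta2 + d(a + 1, b)
        if i > 1:
            for j in range(a + 1, b + 1):
                if y[j - 1] == inverse(x[i - 2]):  # -x_{i-1} generates y_j
                    best = min(best, theta2 + d(a + 1, j - 1) + d_rev(i - 1, j, b))
        return best

    return d(1, len(y))

x = ('+a', '+b', '+c', '+d')
y = ('+b', '-d', '-c', '-b')      # "+b" followed by the inverse of "+b +c +d"
assert duplication_inversion_distance(x, y, 1, 0, 1, 0) == 2
```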

Duplication-Inversion-Deletion Distance

In this section we extend the distance measure to include delete operations as well as duplicate and duplicate-invert operations. Note that we only handle deletions that are applied to a duplicated substring after its inversion. The order of these operations can matter, at least in terms of cost: the cost of inverting (+a +b +c) and then deleting -b may be different than the cost of first deleting +b from (+a +b +c) and then inverting (+a +c).

Definition 13. The duplication-inversion-deletion distance from a source string x to a target string y is the minimum cost of a sequence of duplicate and duplicate-invert operations from x and deletion operations, in any order, that generates y.

Definition 14. A duplicate-invert-delete operation from x, η̄ x (i1, j1, i2, j2, . . ., i k , j k , p), for i1 ≤ j1 < i2 ≤ j2 < ⋯ < i k ≤ j k , pastes the string -x jk . . . -x ik ⋯ -x j1 . . . -x i1 (the inverse of the subsequence copied in Definition 8) into a target string at position p. Specifically, if x = x1 . . . x m and z = z1 . . . z n , then z ∘ η̄ x (i1, j1, i2, j2, . . ., i k , j k , p) = z1 . . . zp-1 -x jk . . . -x ik ⋯ -x j1 . . . -x i1 z p . . . z n .

The cost of such an operation is Θ1 + (j k - i1 + 1)Θ2 + Σl Φ(il+1 - j l - 1), where the sum runs over l = 1, . . ., k - 1. As in the previous section, it suffices to consider just duplicate-invert-delete and duplicate-delete operations, rather than duplicate, duplicate-invert and delete operations.

Lemma 4. If Φ (·) is non-decreasing and obeys the triangle inequality and if the cost of inversion is an affine non-decreasing function as defined above, then the cost of a minimum sequence of duplicate, duplicate-invert and delete operations that generates a target string y from a source string x is equal to the cost of a minimum sequence of duplicate-delete and duplicate-invert-delete operations that generates y from x .

The proof of the lemma is essentially the same as that of Lemma 2. Note that in that proof we did not require all duplicate operations to be from the same string x. Therefore, the arguments in that proof apply to our case, where we can regard some of the duplicates as coming from x and some as coming from the inverse of x.

The recurrence for duplication-inversion-deletion distance is obtained by combining the recurrences for duplication-deletion distance (Eq. 5) and for duplication-inversion distance (Eq. 6). We use separate terms for duplicate-delete operations (D i ) and for duplicate-invert-delete operations (D̄ i ). Those terms differ from the terms in Eq. 6 in the same way that Eq. 5 differs from Eq. 2: because of the possible deletions we do not know that xi+1 (respectively xi-1) is the next duplicated character. Instead we minimize over all characters later (respectively earlier) than x i .

The recurrence for duplication-inversion-deletion distance is therefore:

$$D(x, y) = \min\Bigl\{\, \min_{\{i \,:\, x_i = y_1\}} D_i(x, y) \,,\; \min_{\{i \,:\, x_i = -y_1\}} \bar{D}_i(x, y) \Bigr\}$$

$$D_i(x, y) = \min\Bigl\{\, \Delta_1 + \Delta_2 + D(x, y_{2,|y|}) \,,\; \min_{k > i}\; \min_{\{j \,:\, y_j = x_k,\, j > 1\}} \bigl[\, (k - i)\Delta_2 + \Phi(k - i - 1) + D(x, y_{2,j-1}) + D_k(x, y_{j,|y|}) \,\bigr] \Bigr\}$$

$$\bar{D}_i(x, y) = \min\Bigl\{\, \Theta_1 + \Theta_2 + D(x, y_{2,|y|}) \,,\; \min_{k < i}\; \min_{\{j \,:\, y_j = -x_k,\, j > 1\}} \bigl[\, (i - k)\Theta_2 + \Phi(i - k - 1) + D(x, y_{2,j-1}) + \bar{D}_k(x, y_{j,|y|}) \,\bigr] \Bigr\}$$

where D denotes the duplication-inversion-deletion distance.

Theorem 4. D(x, y) is the duplication-inversion-deletion distance from x to y. For {i : x i = y1}, D i (x, y) is the duplication-inversion-deletion distance from x to y under the additional restriction that y1 is generated by x i . For {i : x i = -y1}, D̄ i (x, y) is the duplication-inversion-deletion distance from x to y under the additional restriction that y1 is generated by -x i .

The proof, again, is very similar to the proofs in the previous sections. The running time of the corresponding dynamic programming algorithm is the same (asymptotically) as that of duplication-deletion distance. It is O(|y|²|x|μ(y)μ(x)), where the multiplicity μ(y) (or μ(x)) is the maximal number of times a character appears in the string y (or x), regardless of its sign.

In comparing the models of the previous section and the current one, we note that restricting the model of rearrangement to allow only duplicate and duplicate-invert operations (Section 5) instead of duplicate-invert-delete operations may be desirable from a biological perspective because each duplicate and duplicate-invert requires only three breakpoints in the genome, whereas a duplicate-invert-delete operation can be significantly more complicated, requiring more breakpoints.

Variants of Duplication-Inversion-Deletion Distance

It is possible to extend the model even further. We give here one detailed example which demonstrates how such extensions might be achieved. Other extensions are also possible. In the previous section we handled the model where the duplicated substring of x may be inverted in its entirety before being inserted into the target string. In the generalized model a substring of the duplicated string may be inverted before the string is inserted into y. For example, we allow (+a +b +c +d +e +f) to become (+a +b -e -d -c +f) before being inserted into y. In this model, the cost of duplicating a string of length m with an inversion of a substring of length ℓ is Δ1 + m Δ2 + Θ (ℓ), for some non-negative monotonically increasing cost function Θ.

The way we extend the recurrence is by considering all possible substring inversions of the original string x. For 1 ≤ s ≤ t ≤ |x|, let x^(s,t) be the string x1 . . . xs-1 -x t . . . -x s xt+1 . . . x|x|, that is, the string obtained from x by inverting (in place) the substring xs,t. For convenience, define also x^(0,0) = x. We will use d i ^(s,t) (x, y) to denote the distance from x to y in this model under the additional restriction that y1 is generated by x i and that the substring xs,t was inverted. Note that this does not make much sense unless s ≤ i ≤ t, since otherwise the inverted substring is not used in the duplication. However, restricting the inversion cost Θ(ℓ) to be non-negative and monotonically increasing ensures that those cases do not contribute to the minimization, since inverting a character that is not duplicated only increases the cost. The recurrence for duplication-deletion distance with arbitrary-substring duplicate-inversions is given below.

The running time is O(|y|²|x|³μ(x)μ(y)). The multiplicative |x|² factor in the running time, in comparison with that of the previous section, arises from considering all possible inverted substrings of x. We note that if we were only interested in handling inversions of just a prefix or a suffix of the duplicated string, then it would be possible to extend the duplication-inversion-deletion recurrence without increasing the asymptotic running time.

Duplication Distance as a Context-Free Grammar

The process of generating a string y by repeatedly copying substrings of a source string x and pasting them into an initially empty target string is naturally described by a context-free grammar (CFG). This alternative view might be useful in understanding our algorithms and their correctness. Thus, we provide the basic idea behind this connection for the simplest variant of duplication distance: no inversions or deletions, and the cost of each duplicate operation is 1. For a fixed source string x, we construct a grammar G x in which for every i, j such that 1 ≤ i ≤ j ≤ |x|, there is a production rule S → S x i S xi+1 S . . . S x j S.

These production rules correspond to duplicating the substring xi,j. In addition there is a trivial production rule S → ε, where ε denotes the empty string. It is easy to see that the language described by this grammar is exactly the set of strings that can be duplicated from x. The non-overlapping property (Lemma 1) is now an immediate consequence of the structure of parse trees of CFGs. Finding the duplication distance from x to y is equivalent to finding a parse tree with a minimal number of non-trivial productions among all possible parse trees for y.
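
For a small source string the production rules of G x can be enumerated directly; the sketch below (ours) prints them, writing 'e' for the empty string.

```python
def duplication_grammar(x):
    """Production rules of G_x: one rule S -> S x_i S x_{i+1} S ... S x_j S
    for every substring x_{i,j} of x, plus the trivial rule S -> e."""
    rules = ['S -> e']                      # 'e' stands for the empty string
    for i in range(len(x)):
        for j in range(i, len(x)):
            rules.append('S -> S ' + ' S '.join(x[i:j + 1]) + ' S')
    return rules

for rule in duplication_grammar(list("abc")):
    print(rule)
# S -> e
# S -> S a S
# S -> S a S b S
# S -> S a S b S c S
# S -> S b S
# S -> S b S c S
# S -> S c S
```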

Consider now the slightly different grammar obtained by removing the leading S to the left of x i from each of the production rules, so that the new rules are of the form S → x i S xi+1 S . . . S x j S. It is not difficult to see that both grammars produce the same language and have the same minimal-size parse tree for every string y. The change only restricts the order in which rules are applied. For example, y1 is always produced by the first production rule.

The recurrence for d i (x, y) naturally arises by observing that if T is an optimal parse tree for y in which the first production rule generates y1 by x i and y j by xi+1, then the subtree T1 of T that generates y2,j-1 is a valid parse tree which is optimal for y2,j-1. Similarly, the tree T2 obtained by deleting x i and T1 from T is a valid parse tree which is optimal for yj,|y| under the restriction that y j must be generated by xi+1 (see Fig. 7). Moreover, T1 and T2 are disjoint trees which contain all non-trivial productions in T. This explains the term d(x, y2,j-1) + di+1(x, yj,|y|) in Eq. 2, which is the heart of the recursion. The minimization over {j : y j = xi+1, j > 1} simply enumerates all of the possibilities for constructing T. The term 1 + d(x, y2,|y|) handles the possibility that y1 is generated by a duplicate operation that ends with x i . In this case the tree T2 is empty, so we only consider T1. We add one to account for the production rule at the root of T, which is not part of T1. This is illustrated in Fig. 8.

Figure 7. Example parse tree. An optimal parse tree T for y = bbccd where x = abcd. The root production duplicates x2,4 = bcd; x2 generates y1 and x3 generates y4. The trees T1 and T2 are indicated. T1 is an optimal parse tree for y2,3 = bc. T2 is an optimal parse tree for y4,|y| = cd.

Figure 8. Example parse tree. An optimal parse tree T for y = dab where x = abcd. The root production duplicates just x4 = d. The tree T1 is indicated; T2 is empty (not indicated). The root production is not part of T1.

Conclusion

We have shown how to generalize duplication distance to include certain types of deletions and inversions and how to compute these new distances efficiently via dynamic programming. In earlier work [17, 18], we used duplication distance to derive phylogenetic relationships between human segmental duplications. We plan to apply the generalized distances introduced here to the same data to determine if these richer computational models yield new biological insights.

References

  1. Sankoff D, Leduc G, Antoine N, Paquin B, Lang B, Cedergren R: Gene Order Comparisons for Phylogenetic Inference: Evolution of the Mitochondrial Genome. Proc Natl Acad Sci USA. 1992, 89 (14): 6575-6579.

  2. Pevzner P: Computational molecular biology: an algorithmic approach. 2000, Cambridge, Mass.: MIT Press.

  3. Chen X, Zheng J, Fu Z, Nan P, Zhong Y, Lonardi S, Jiang T: Assignment of Orthologous Genes via Genome Rearrangement. IEEE/ACM Trans Comp Biol Bioinformatics. 2005, 2 (4): 302-315. 10.1109/TCBB.2005.48.

  4. Marron M, Swenson KM, Moret BME: Genomic Distances Under Deletions and Insertions. TCS. 2004, 325 (3): 347-360. 10.1016/j.tcs.2004.02.039.

  5. El-Mabrouk N: Genome Rearrangement by Reversals and Insertions/Deletions of Contiguous Segments. Proc 11th Ann Symp Combin Pattern Matching (CPM00). 2000, 1848: 222-234. Berlin: Springer-Verlag.

  6. Zhang Y, Song G, Vinar T, Green ED, Siepel AC, Miller W: Reconstructing the Evolutionary History of Complex Human Gene Clusters. Proc 12th Int'l Conf on Research in Computational Molecular Biology (RECOMB). 2008, 29-49.

  7. Ma J, Ratan A, Raney BJ, Suh BB, Zhang L, Miller W, Haussler D: DUPCAR: Reconstructing Contiguous Ancestral Regions with Duplications. Journal of Computational Biology. 2008, 15 (8): 1007-1027.

  8. Bertrand D, Lajoie M, El-Mabrouk N: Inferring Ancestral Gene Orders for a Family of Tandemly Arrayed Genes. J Comp Biol. 2008, 15 (8): 1063-1077. 10.1089/cmb.2008.0025.

  9. Chaudhuri K, Chen K, Mihaescu R, Rao S: On the Tandem Duplication-Random Loss Model of Genome Rearrangement. Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). 2006, 564-570. New York, NY, USA: ACM.

  10. Elemento O, Gascuel O, Lefranc MP: Reconstructing the Duplication History of Tandemly Repeated Genes. Mol Biol Evol. 2002, 19 (3): 278-288.

  11. Lajoie M, Bertrand D, El-Mabrouk N, Gascuel O: Duplication and Inversion History of a Tandemly Repeated Genes Family. J Comp Bio. 2007, 14 (4): 462-478. 10.1089/cmb.2007.A007.

  12. El-Mabrouk N, Sankoff D: The Reconstruction of Doubled Genomes. SIAM J Comput. 2003, 32 (3): 754-792. 10.1137/S0097539700377177.

  13. Alekseyev MA, Pevzner PA: Whole Genome Duplications and Contracted Breakpoint Graphs. SICOMP. 2007, 36 (6): 1748-1763.

  14. Bailey J, Eichler E: Primate Segmental Duplications: Crucibles of Evolution, Diversity and Disease. Nat Rev Genet. 2006, 7: 552-564.

  15. Jiang Z, Tang H, Ventura M, Cardone MF, Marques-Bonet T, She X, Pevzner PA, Eichler EE: Ancestral reconstruction of segmental duplications reveals punctuated cores of human genome evolution. Nature Genetics. 2007, 39: 1361-1368.

  16. Johnson M, Cheng Z, Morrison V, Scherer S, Ventura M, Gibbs R, Green E, Eichler E: Recurrent duplication-driven transposition of DNA during hominoid evolution. Proc Natl Acad Sci USA. 2006, 103: 17626-17631.

  17. Kahn CL, Raphael BJ: Analysis of Segmental Duplications via Duplication Distance. Bioinformatics. 2008, 24: i133-138.

  18. Kahn CL, Raphael BJ: A Parsimony Approach to Analysis of Human Segmental Duplications. Pacific Symposium on Biocomputing. 2009, 126-137.


Acknowledgements

SM was supported by NSF Grant CCF-0635089. BJR is supported by a Career Award at the Scientific Interface from the Burroughs Wellcome Fund and by funding from the ADVANCE Program at Brown University, under NSF Grant No. 0548311.

Author information

Correspondence to Crystal L Kahn, Shay Mozes or Benjamin J Raphael.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

CLK, SM, and BJR all designed and analyzed the algorithms and drafted the manuscript. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Kahn, C.L., Mozes, S. & Raphael, B.J. Efficient algorithms for analyzing segmental duplications with deletions and inversions in genomes. Algorithms Mol Biol 5, 11 (2010). https://doi.org/10.1186/1748-7188-5-11
