# Efficient edit distance with duplications and contractions

## Abstract


We propose three algorithms for string edit distance with duplications and contractions. These include an efficient general algorithm and two improvements that apply under certain constraints on the cost function. The new algorithms solve a more general problem variant and obtain better time complexities than previous algorithms. Our general algorithm is based on min-plus multiplication of square matrices and has time and space complexities of $O(|\Sigma| MP(n))$ and $O(|\Sigma| n^2)$, respectively, where $|\Sigma|$ is the alphabet size, $n$ is the length of the strings, and $MP(n)$ is the time bound for the computation of min-plus matrix multiplication of two $n \times n$ matrices (currently, $MP(n) = O\left(\frac{n^3 \log^3 \log n}{\log^2 n}\right)$ due to an algorithm by Chan).

For integer cost functions, the running time is further improved to $O\left(\frac{|\Sigma| n^3}{\log^2 n}\right)$. In addition, this variant of the algorithm is online, in the sense that the input strings may be given letter by letter, and its time complexity bounds the processing time of the first $n$ given letters. This acceleration is based on our efficient matrix-vector min-plus multiplication algorithm, intended for matrices and vectors in which differences between adjacent entries are taken from a finite integer interval $D$. Choosing a constant $\frac{1}{\log_{|D|} n} < \lambda < 1$, the algorithm preprocesses an $n \times n$ matrix in $O\left(\frac{n^{2+\lambda}}{|D|}\right)$ time and $O\left(\frac{n^{2+\lambda}}{|D| \lambda^2 \log^2_{|D|} n}\right)$ space. Then, it may multiply the matrix with any given $n$-length vector in $O\left(\frac{n^2}{\lambda^2 \log^2_{|D|} n}\right)$ time. Under some discreteness assumptions, this matrix-vector min-plus multiplication algorithm applies to several problems from the domains of context-free grammar parsing and RNA folding and, in particular, implies the asymptotically fastest $O\left(\frac{n^3}{\log^2 n}\right)$ time algorithm for single-strand RNA folding with discrete cost functions.

Finally, assuming a different constraint on the cost function, we present another version of the algorithm that exploits the run-length encoding of the strings and runs in $O\left(\frac{|\Sigma| n MP(\tilde{n})}{\tilde{n}}\right)$ time and $O(|\Sigma| n \tilde{n})$ space, where $\tilde{n}$ is the length of the run-length encoding of the strings.

## Background

Comparing strings is a well-studied problem in computer science as well as in bioinformatics. Traditionally, string similarity is measured in terms of edit distance, which reflects the minimum-cost edit of one string to the other, based on the edit operations of substitutions (including matches) and deletions/insertions (indels). In this paper, we address the problem of string edit distance with the additional operations of duplication and contraction. The motivation for this problem originated from the study of minisatellites and their comparisons in the context of population genetics [1].

### Motivation: minisatellite comparison

A minisatellite is a section of DNA that consists of tandem repetitions of short (6–100 nucleotides) sequence motifs spanning 500 nucleotides to several thousand nucleotides. The repeated motifs also vary in sequence through base substitutions and indels. For one minisatellite locus, both the type and the number of motifs vary between individuals in a population. Therefore, pairwise comparisons of minisatellites are typically applied in studying the evolution of populations.

A minisatellite map represents a minisatellite region, where each motif is encoded by a letter and handled as one entity (denoted unit). When comparing minisatellite maps, one has to consider that regions of the map have arisen as a result of duplication events from the neighboring units. The single copy duplication model, where only one unit can duplicate at a time, is the most popular and its biological validation was asserted for the MSY1 minisatellites [1, 2]. According to this model, one unit can mutate to another unit via an indel or a mutation of a single nucleotide within it. Also, a unit can be duplicated, that is, an additional copy of the unit may appear next to the original one in the map (tandem repeat). Thus, when comparing minisatellite maps, two additional operations are considered: unit duplication and unit contraction.

### The EDDC problem definition

The single copy duplication model of minisatellite maps gave rise to a new variant of the string edit distance problem, Edit Distance with Duplications and Contractions (EDDC), which allows five edit operations: insertion, deletion, mutation, duplication and contraction.

We start with some string notations. Let $s$ be a string. Denote by $s_i$ the $i$-th letter in $s$, starting at index 0, and by $s_{i,j}$ the substring $s_i s_{i+1} \dots s_{j-1}$ of $s$. A substring of the form $s_{i,i}$ is an empty string, which will be denoted by $\epsilon$. We use superscripts to denote substrings without an explicit indication of their start and end positions, and write e.g. $s = s^a s^b$ to indicate that $s$ is a concatenation of the two substrings $s^a$ and $s^b$.

In the edit distance problem, one is given a source string $s$ and a target string $t$ over a finite alphabet $\Sigma$. An edit script from $s$ to $t$ is a series of strings $\mathcal{ES} = \langle s = u^0, u^1, u^2, \dots, u^r = t \rangle$, where each intermediate string $u^i$ is obtained by applying a single edit operation to the preceding string $u^{i-1}$. In the standard problem definition [3-5], the allowed edit operations are insertion of a letter at some position in an intermediate string $u^i$, deletion of a letter in $u^i$, and mutation of a letter in $u^i$ to another letter. The single-copy EDDC problem variant adds two operations: duplication - inserting into $u^i$ a letter in a position adjacent to a position that already contains the same letter, and contraction - deleting from $u^i$ one copy of a letter where there are two consecutive copies of this letter. Denote by $ins(\alpha)$, $dup(\alpha)$ and $del(\alpha)$ the costs associated with the insertion, duplication and deletion operations applied to a letter $\alpha$ in the alphabet, respectively, by $cont(\alpha)$ the cost of contracting two consecutive occurrences of $\alpha$ into a single occurrence, and by $mut(\alpha,\beta)$ the cost of mutating $\alpha$ to a letter $\beta$. Define the cost of $\mathcal{ES}$ to be the sum of its implied operation costs, and the length $|\mathcal{ES}| = r$ of $\mathcal{ES}$ to be the number of operations performed in $\mathcal{ES}$. Clearly, for every pair of strings $s$ and $t$, there is some script transforming $s$ to $t$, e.g. the script that first deletes all letters in $s$ and then inserts all letters in $t$. An optimal edit script from $s$ to $t$ is one which has a minimum cost. The edit distance from $s$ to $t$, denoted by $ed(s,t)$, is the cost of an optimal edit script from $s$ to $t$. The goal of the EDDC problem is, given strings $s$ and $t$, to compute $ed(s,t)$.
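As a concrete instance of the script-cost definition above, the sketch below computes the cost of the always-available script that first deletes every letter of $s$ and then inserts every letter of $t$, which therefore upper-bounds $ed(s,t)$. The cost tables are hypothetical toy values, not taken from the paper.

```python
# Cost of the trivial edit script that deletes all letters of s and then
# inserts all letters of t. Its cost upper-bounds ed(s, t).
# The cost tables `ins_cost` and `del_cost` are hypothetical examples.

def trivial_script_cost(s, t, ins_cost, del_cost):
    return sum(del_cost[a] for a in s) + sum(ins_cost[b] for b in t)

ins_cost = {"a": 2, "b": 3}
del_cost = {"a": 1, "b": 1}

# Deleting "ab" costs 1 + 1, inserting "ba" costs 3 + 2.
assert trivial_script_cost("ab", "ba", ins_cost, del_cost) == 7
```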

Previous algorithms assume various constraints on operation costs (see Section “A comparison with previous works”). In this paper, the only limiting assumption made is that all operation costs are nonnegative. In addition, we can make the following assumption without loss of generality, which will be required by the algorithms presented in this paper:

#### Property 1.

It may be assumed without loss of generality that for every $\alpha, \beta \in \Sigma$,

• $ins(\alpha) = ed(\epsilon, \alpha)$, $del(\alpha) = ed(\alpha, \epsilon)$,

• $dup(\alpha) = ed(\alpha, \alpha\alpha)$, $cont(\alpha) = ed(\alpha\alpha, \alpha)$,

• $mut(\alpha, \beta) = ed(\alpha, \beta)$.

This assumption can be made since, in case one of the operation costs violates the assumption, such an operation can always be replaced by a series of operations that induces the same modification at a lower cost. For example, it cannot be that $mut(\alpha,\beta) < ed(\alpha,\beta)$, since $ed(\alpha,\beta)$ is smaller than or equal to the cost of any script from $\alpha$ to $\beta$, among which is the script containing the single mutation operation of $\alpha$ into $\beta$. Moreover, if $mut(\alpha,\beta) > ed(\alpha,\beta)$, then we can always replace any mutation of $\alpha$ into $\beta$ by a series of operations that transforms $\alpha$ into $\beta$ at cost $ed(\alpha,\beta)$. In this case, we may simply assume that $mut(\alpha,\beta) = ed(\alpha,\beta)$, and interpret any such mutation appearing in a script as being implemented by the corresponding series of operations. In particular, Property 1 implies that $mut(\alpha,\alpha) = 0$ (since all operation costs are nonnegative, $ed(w,w) = 0$ for every string $w$), $dup(\alpha) \le ins(\alpha)$ (since the cost of the script from $\alpha$ to $\alpha\alpha$ that applies a single insertion of $\alpha$ is $ins(\alpha) \ge ed(\alpha,\alpha\alpha) = dup(\alpha)$), $cont(\alpha) \le del(\alpha)$, and $mut(\alpha,\beta) \le mut(\alpha,\gamma) + mut(\gamma,\beta)$ for every $\gamma \in \Sigma$.
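One way to normalize an arbitrary mutation-cost table so that it satisfies the composition inequality $mut(\alpha,\beta) \le mut(\alpha,\gamma) + mut(\gamma,\beta)$ implied by Property 1 is a Floyd-Warshall-style closure over the alphabet. The sketch below, with hypothetical costs, illustrates this:

```python
# Close a mutation-cost table under composition, so that
# mut(a, b) <= mut(a, c) + mut(c, b) holds for all letters
# (a Floyd-Warshall-style relaxation over the alphabet).
# The cost values are hypothetical examples.

def close_mutation_costs(mut, sigma):
    mut = {a: dict(row) for a, row in mut.items()}  # work on a copy
    for c in sigma:
        for a in sigma:
            for b in sigma:
                mut[a][b] = min(mut[a][b], mut[a][c] + mut[c][b])
    return mut

sigma = "abc"
mut = {
    "a": {"a": 0, "b": 5, "c": 1},
    "b": {"a": 5, "b": 0, "c": 1},
    "c": {"a": 1, "b": 1, "c": 0},
}
closed = close_mutation_costs(mut, sigma)
assert closed["a"]["b"] == 2  # the route a -> c -> b is cheaper than 5
```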

Insertions and duplications are considered to be generating operations, increasing by one letter the length of the string. Similarly, deletions and contractions are considered to be reducing operations, decreasing by one letter the length of the string. An edit script containing no reducing operation is called a non-reducing script, and an edit script containing no generating operation is called a non-generating script.

### Previous work

The EDDC problem was first defined by Bérard and Rivals [2], who suggested an $O(n^4)$ time and $O(n^3)$ space algorithm for the problem, where $n$ is the length of the two input strings (for the sake of simplicity, we assume that both strings are of the same length). This was followed by the work of Behzadi and Steyaert [6], who gave an $O(|\Sigma| n^3)$ time and $O(|\Sigma| n^2)$ space algorithm for the problem, where $|\Sigma|$ is the alphabet size (typically a few tens of unique units). Behzadi and Steyaert [7] improved their algorithm’s complexity, based on run-length encoding, to $O(n^2 + n\tilde{n}^2 + |\Sigma|\tilde{n}^3 + |\Sigma|^2\tilde{n}^2)$ time and $O(|\Sigma|(n + \tilde{n}^2) + n^2)$ space, where $\tilde{n}$ is the length of the run-length encoding of the input strings. Run-length encoding was also used by Bérard et al. [8], who proposed an $O(n^3 + |\Sigma|\tilde{n}^3)$ time and $O(n^2 + |\Sigma|\tilde{n}^2)$ space algorithm. Abouelhoda et al. [9] gave an algorithm whose $O(n^2 + n\tilde{n}^2)$ time and $O(n^2)$ space complexities are independent of the alphabet size. A detailed comparison between the different problem models appears in Section “A comparison with previous works”.

### Our contribution

This paper presents several algorithms for EDDC which are currently the most general and efficient for the problem.

1.

We give an algorithm for EDDC for general nonnegative cost functions that is based on min-plus square matrix multiplication. This algorithm is an adaptation of the framework of [10] (see also [11]). For two input strings over an alphabet $\Sigma$ and of length $n$ each, the time and space complexities of this algorithm are $O(|\Sigma| MP(n))$ and $O(|\Sigma| n^2)$, respectively, where $MP(n)$ is the time complexity of a min-plus multiplication of two $n \times n$ matrices. Using the matrix multiplication algorithm of Chan [12], this algorithm runs in $O\left(\frac{|\Sigma| n^3 \log^3 \log n}{\log^2 n}\right)$ time (Section “A matrix multiplication based algorithm for EDDC”). Moreover, our algorithm imposes fewer restrictions on the cost function than previous algorithms and is currently the only algorithm that works for the most general problem settings (Section “A comparison with previous works”).

2.

We describe a more efficient algorithm for EDDC when all operation costs are integers. This algorithm can also be applied in an online setting, where in each step a letter is added to one of the input strings. The time complexity of processing $n$ letters in the input is $O\left(\frac{|\Sigma| n^3}{\log^2 n}\right)$, where the base of the log function is determined by the range of cost values (Section “An online algorithm for EDDC using min-plus matrix-vector multiplication for discrete cost functions”). In order to achieve this, we obtained the following stepping-stone results, which are of interest on their own.

1. (a)

Let $A$ be an $n \times m$ matrix for which differences between adjacent entries are within some finite integer interval $D$. Choosing a time/space complexity tradeoff parameter $\lambda$, where $\frac{1}{\log_{|D|}(n+m)} < \lambda < 1$, we describe a preprocessing algorithm for $A$ that runs in $O\left(\frac{nm(n+m)^\lambda}{|D|}\right)$ time and requires $O\left(\frac{nm(n+m)^\lambda}{|D| \lambda^2 \log^2_{|D|}(n+m)}\right)$ space. This preprocessing later allows computing min-plus multiplications between $A$ and $m$-length vectors sustaining the same discreteness requirement in $O\left(\frac{nm}{\lambda^2 \log^2_{|D|}(n+m)}\right)$ time (Section “An efficient D-discrete min-plus matrix-vector multiplication algorithm”). The algorithm is an adaptation of Williams’ matrix-vector multiplication algorithm over a finite semiring [13], with some notions similar to those in Frid and Gusfield’s RNA folding algorithm [14].

2. (b)

The manner in which the new matrix-vector multiplication algorithm is integrated into the EDDC algorithm can be generalized to algorithms for a family of related problems, denoted VMT problems [11], under certain discreteness assumptions. This family includes many important problems from the domains of RNA folding and CFG parsing. An example of such a problem is the single-strand RNA folding problem [15] under discrete scoring schemes. Our new matrix-vector multiplication algorithm can be integrated into an algorithm for the latter problem to yield an $O\left(\frac{n^3}{\log^2 n}\right)$ time algorithm, improving the best previously known asymptotic time bound for the problem (see Section “Online VMT algorithms”).

3.

We extend our approach to exploit run-length encodings of the input strings, assuming some restrictions on the cost functions. This reduces the time and space complexities of the algorithm to $O\left(|\Sigma| n^2 + \frac{|\Sigma| n MP(\tilde{n})}{\tilde{n}}\right)$ and $O(|\Sigma| n \tilde{n})$, respectively, where $\tilde{n}$ is the length of the run-length encoding of the input (Section “Additional acceleration using run-length encoding”).
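For concreteness, run-length encoding compresses each maximal run of a repeated letter into a single pair, and $\tilde{n}$ is then the number of such runs. A minimal sketch:

```python
# Run-length encoding of a string: each maximal run of a letter becomes
# one (letter, run-length) pair; the number of pairs is n-tilde.
from itertools import groupby

def run_length_encode(s):
    return [(letter, len(list(run))) for letter, run in groupby(s)]

assert run_length_encode("aaabba") == [("a", 3), ("b", 2), ("a", 1)]
assert len(run_length_encode("aaabba")) == 3  # n-tilde = 3 while n = 6
```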

The rest of the paper is organized as follows. In Section “A baseline algorithm for the EDDC problem”, a recursive computation for EDDC and its implementation using dynamic programming (DP) are presented. Section “A matrix multiplication based algorithm for EDDC” shows how to accelerate the algorithm by incorporating efficient min-plus matrix multiplication subroutines. In Section “An online algorithm for EDDC using min-plus matrix-vector multiplication for discrete cost functions”, an efficient min-plus matrix-vector multiplication algorithm is described for matrices and vectors in which differences between adjacent entries are taken from a finite integer interval. This algorithm can be used for obtaining an accelerated online version of the EDDC algorithm, as well as for improving the time complexities of several related problems. Section “Additional acceleration using run-length encoding” describes a variant of the EDDC algorithm that exploits run-length encoding. A comparison between this and previous works is given in Section “A comparison with previous works”, and Section “Conclusions and discussion” gives a concluding discussion. Additional proofs omitted from the main manuscript are given in the Appendix.

## A baseline algorithm for the EDDC problem

In this section, we give a simple algorithm for the EDDC problem. We start by showing some recursive properties of the problem, and then formulate a straightforward dynamic programming implementation for the recursive computation.

### The recurrence formula

Our recursive formulas resemble previous formulations [6, 9], yet solve a slightly more general variant of the problem (see discussion in Section “A comparison with previous works”). Since the proof of correctness of these recursive formulas is similar to previous ones, we defer it to Appendix “Correctness of the recursive computation”.

A (strict) partition of a string $w$ of length at least 2 is a pair of strings $(w^a, w^b)$, such that $w = w^a w^b$ and $w^a, w^b \ne \epsilon$. Denote by $P(w)$ the set of all partitions of $w$. For example, for $w = abac$, $P(w) = \{(a, bac), (ab, ac), (aba, c)\}$.
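The set $P(w)$ can be enumerated directly from the definition; a minimal sketch:

```python
# All strict partitions of a string w: split points 1 .. len(w)-1,
# so both sides are nonempty.
def partitions(w):
    return [(w[:h], w[h:]) for h in range(1, len(w))]

assert partitions("abac") == [("a", "bac"), ("ab", "ac"), ("aba", "c")]
```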

For a source string which is either empty or contains a single letter and a target string $t$, Equations 1 to 3 (Figure 1) describe a recursive EDDC computation. This computation interleaves, in a mutually recursive manner, the computation of an additional special value $ed'(\alpha, t)$, where $ed'(\alpha, t)$ is defined to be the minimum cost of a non-reducing edit script from $\alpha$ to $t$ that does not start with a mutation ($t$ is required to contain at least two letters).

$\mathit{ed}(\epsilon, t) = \begin{cases} 0, & t = \epsilon, \\ \min\{\mathit{ins}(\alpha) + \mathit{ed}(\alpha, t) \mid \alpha \in \Sigma\}, & \text{otherwise}. \end{cases}$
(1)
$\mathit{ed}(\alpha, t) = \begin{cases} \mathit{del}(\alpha), & t = \epsilon, \\ \mathit{mut}(\alpha, \beta), & t = \beta, \\ \min\{\mathit{mut}(\alpha, \beta) + \mathit{ed}'(\beta, t) \mid \beta \in \Sigma\}, & \text{otherwise}. \end{cases}$
(2)
$\mathit{ed}'(\alpha, t) = \mathit{dup}(\alpha) + \min\{\mathit{ed}(\alpha, t^a) + \mathit{ed}(\alpha, t^b) \mid (t^a, t^b) \in P(t)\} \quad (t \text{ is of length} \ge 2)$
(3)

Symmetrically, Equations 4 to 6 give the recursive computation for a source string $s$ and a target string which is either empty or contains a single letter. Here, $ed'(s, \alpha)$ is defined as the minimum cost of a non-generating edit script from $s$ to $\alpha$ which does not end with a mutation ($s$ is required to contain at least two letters).

$\mathit{ed}(s, \epsilon) = \begin{cases} 0, & s = \epsilon, \\ \min\{\mathit{ed}(s, \alpha) + \mathit{del}(\alpha) \mid \alpha \in \Sigma\}, & \text{otherwise}. \end{cases}$
(4)
$\mathit{ed}(s, \alpha) = \begin{cases} \mathit{ins}(\alpha), & s = \epsilon, \\ \mathit{mut}(\beta, \alpha), & s = \beta, \\ \min\{\mathit{ed}'(s, \beta) + \mathit{mut}(\beta, \alpha) \mid \beta \in \Sigma\}, & \text{otherwise}. \end{cases}$
(5)
$\mathit{ed}'(s, \alpha) = \min\{\mathit{ed}(s^a, \alpha) + \mathit{ed}(s^b, \alpha) \mid (s^a, s^b) \in P(s)\} + \mathit{cont}(\alpha) \quad (s \text{ is of length} \ge 2)$
(6)

In case both the source string $s$ and the target string $t$ are of length at least 2, the following equation can be used for computing $ed(s,t)$ (Figure 2(a)):

$\mathit{ed}(s, t) = \min\left\{\begin{array}{l} \mathit{ed}(s, \alpha) + \mathit{ed}(\alpha, t), \\ \mathit{ed}(s^a, t^a) + \mathit{ed}(s^b, \alpha) + \mathit{ed}(\alpha, t^b) \end{array} \,\middle|\, \begin{array}{l} (s^a, s^b) \in P(s), \\ (t^a, t^b) \in P(t), \\ \alpha \in \Sigma \end{array}\right\} \quad (s \text{ and } t \text{ are of lengths} \ge 2)$
(7)

To allow efficient computation, Equation 7 can be replaced by Equations 8 and 9, which are computed in a mutually recursive manner to yield an equivalent computation (Figure 2(b) and Figure 2(c), respectively).

$\mathit{ed}(s, t) = \min\left\{\begin{array}{l} \mathit{ed}(s, \alpha) + \mathit{ed}(\alpha, t), \\ \mathit{edt}^\alpha(s^a, t) + \mathit{ed}(s^b, \alpha) \end{array} \,\middle|\, \begin{array}{l} (s^a, s^b) \in P(s), \\ \alpha \in \Sigma \end{array}\right\} \quad (s \text{ and } t \text{ are of lengths} \ge 2)$
(8)
$\mathit{edt}^\alpha(s, t) = \min\{\mathit{ed}(s, t^a) + \mathit{ed}(\alpha, t^b) \mid (t^a, t^b) \in P(t)\} \quad (t \text{ is of length} \ge 2)$
(9)

All base-cases of the above recursive equations are implied from Property 1 in a straightforward manner.
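To make the mutual recursion of Equations 1 to 9 concrete, here is a memoized top-down sketch. It assumes particular forms for the $ed'$ values of Equations 3 and 6 (a single duplication, respectively contraction, combined with a minimization over all partitions), and uses hypothetical toy cost tables; it is an illustrative sketch, not the DP algorithm of the next section.

```python
from functools import lru_cache

# Memoized top-down sketch of the mutual recursion of Equations 1-9.
# Assumed forms (hedged, not verbatim from the paper):
#   ed'(a, t) = dup(a) + min over (ta, tb) in P(t) of ed(a, ta) + ed(a, tb)
#   ed'(s, a) = min over (sa, sb) in P(s) of ed(sa, a) + ed(sb, a), + cont(a)
# All cost tables below are hypothetical toy values.

SIGMA = "ab"
INS = {"a": 3, "b": 3}
DEL = {"a": 3, "b": 3}
DUP = {"a": 1, "b": 1}
CONT = {"a": 1, "b": 1}
MUT = {("a", "a"): 0, ("b", "b"): 0, ("a", "b"): 2, ("b", "a"): 2}

def partitions(w):  # strict partitions P(w)
    return [(w[:h], w[h:]) for h in range(1, len(w))]

@lru_cache(maxsize=None)
def ed(s, t):
    if len(s) <= 1:
        if t == "":
            return 0 if s == "" else DEL[s]           # base cases
        if s == "":                                    # Eq. 1
            return min(INS[a] + ed(a, t) for a in SIGMA)
        if len(t) == 1:                                # Eq. 2, second case
            return MUT[(s, t)]
        return min(MUT[(s, b)] + ed_p_target(b, t) for b in SIGMA)  # Eq. 2
    if t == "":                                        # Eq. 4
        return min(ed(s, a) + DEL[a] for a in SIGMA)
    if len(t) == 1:                                    # Eq. 5
        return min(ed_p_source(s, b) + MUT[(b, t)] for b in SIGMA)
    # Eq. 8: both strings have length >= 2
    best = min(ed(s, a) + ed(a, t) for a in SIGMA)
    for sa, sb in partitions(s):
        for a in SIGMA:
            best = min(best, edt(a, sa, t) + ed(sb, a))
    return best

@lru_cache(maxsize=None)
def ed_p_target(a, t):   # assumed Eq. 3: ed'(a, t), |t| >= 2
    return DUP[a] + min(ed(a, ta) + ed(a, tb) for ta, tb in partitions(t))

@lru_cache(maxsize=None)
def ed_p_source(s, a):   # assumed Eq. 6: ed'(s, a), |s| >= 2
    return min(ed(sa, a) + ed(sb, a) for sa, sb in partitions(s)) + CONT[a]

@lru_cache(maxsize=None)
def edt(a, s, t):        # Eq. 9: edt^alpha(s, t), |t| >= 2
    return min(ed(s, ta) + ed(a, tb) for ta, tb in partitions(t))

assert ed("a", "aa") == 1    # a single duplication
assert ed("aa", "a") == 1    # a single contraction
assert ed("a", "aaa") == 2   # two duplications
```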

#### Theorem 1.

EDDC is correctly solved by Equations 1-9.

The proof of Theorem 1 appears in Appendix “Correctness of the recursive computation”.

### A baseline dynamic-programming algorithm for EDDC

In this section, we describe a DP algorithm implementing the recursive EDDC computation given by Equations 1 to 9, which is the basis for improvements introduced later in this paper.

Let s and t be the input source and target strings, respectively, and for simplicity assume both strings are of length n. The algorithm maintains the following matrices for storing solutions to sub-instances of the input which occur along the recursive computation. All matrices are of size (n + 1) × (n + 1), with row and column indices in the range 0,1,2,…,n.

•  For every $\alpha \in \Sigma$, the algorithm maintains matrices $S^\alpha$, $S'^\alpha$, $T^\alpha$, and $T'^\alpha$. Entries $S^\alpha[k,i]$, $S'^\alpha[k,i]$, $T^\alpha[l,j]$ and $T'^\alpha[l,j]$ are used for storing the values $ed(s_{k,i}, \alpha)$, $ed'(s_{k,i}, \alpha)$, $ed(\alpha, t_{l,j})$, and $ed'(\alpha, t_{l,j})$, respectively.

•  Two matrices $S^\epsilon$ and $T^\epsilon$, whose entries $S^\epsilon[k,i]$ and $T^\epsilon[l,j]$ are used for storing values of the forms $ed(s_{k,i}, \epsilon)$ and $ed(\epsilon, t_{l,j})$, respectively.

•  For every $\alpha \in \Sigma$, a matrix $EDT^\alpha$ whose entries $EDT^\alpha[i,j]$ are used for storing the values $edt^\alpha(s_{0,i}, t_{0,j})$.

•  A matrix $ED$, whose entries $ED[i,j]$ are used for storing the values $ed(s_{0,i}, t_{0,j})$.

The algorithm consists of two stages: Stage 1 computes solutions to all sub-instances in which one of the substrings is either empty or single-lettered, applying Equations 1 to 6. Stage 2 uses the values computed in Stage 1 in order to compute all prefix-to-prefix solutions $ed(s_{0,i}, t_{0,j})$ and $edt^\alpha(s_{0,i}, t_{0,j})$ according to Equations 8 and 9. In particular, Stage 2 computes the edit distance $ed(s_{0,n}, t_{0,n}) = ed(s, t)$ between the two complete strings. The entries are traversed in an order which guarantees that upon computing each entry, all solutions to sub-instances appearing on the right-hand side of the relevant equation are already computed and contained in the corresponding entries. Algorithm 1 gives the pseudo-code for this computation.

#### Complexity analysis of Algorithm 1

The running time of Algorithm 1 is dictated by the total time required to compute all entries in the DP matrices. Each entry is computed according to one of the recursive equations, where the number of operations in such a computation depends on the number of expressions examined on the right-hand side of the corresponding recursive equation. Note that the value of each examined expression is obtained in constant time, by querying previously computed values stored in the matrices.

The computation of each entry in the matrices $T^\epsilon$ and $S^\epsilon$ and in matrices of the form $T^\alpha$ and $S^\alpha$ takes $O(|\Sigma|)$ time, due to Equations 1, 4, 2, and 5, respectively. As there are $O(|\Sigma|)$ such matrices and each matrix contains $O(n^2)$ entries, their overall computation time is $O(|\Sigma|^2 n^2)$. The computation of entries in $T'^\alpha$ and $S'^\alpha$ takes $O(n)$ time, due to Equations 3 and 6, respectively. There are $O(|\Sigma|)$ such matrices, each of size $O(n^2)$, and so the total time for computing all entries in these matrices is $O(|\Sigma| n^3)$. According to Equation 9, computing each entry of the form $EDT^\alpha[i,j]$ takes $O(n)$ time, and as there are $O(|\Sigma| n^2)$ such entries, the total time for computing all these entries is $O(|\Sigma| n^3)$. According to Equation 8, computing each entry of the form $ED[i,j]$ takes $O(|\Sigma| n)$ time, and since there are $O(n^2)$ such entries, the total time for computing all these entries is again $O(|\Sigma| n^3)$. Thus, the total running time of the algorithm is $O(|\Sigma| n^3 + |\Sigma|^2 n^2)$. Under the assumption that $|\Sigma| = O(n)$, the time is $O(|\Sigma| n^3)$. The algorithm requires $O(|\Sigma| n^2)$ space for maintaining the DP matrices.

## A matrix multiplication based algorithm for EDDC

In previous work by the authors [11], Vector Multiplication Templates (VMTs) were identified as templates for computational problems sustaining certain properties, such that algorithms for solving these problems can be accelerated using efficient matrix multiplication subroutines (similarly to Valiant’s algorithm for CFG recognition [10]). Intuitively, standard algorithms for VMT problems perform computations that can be expressed in terms of vector multiplications, and these vector multiplications can be batched and evaluated more efficiently using fast matrix multiplication. In this section, we show that EDDC exhibits such VMT properties, and formulate a new algorithm that incorporates matrix-matrix min-plus multiplications. This algorithm yields a better running time than that of the baseline algorithm in the previous section.

### Notations for matrices

For two integers $p, q$ such that $p \le q$, $I_{p,q}$ denotes the interval of integers $I_{p,q} = \{p, p+1, \dots, q-1\}$. We use the notation $A_{n \times m}$ to imply that the matrix $A$ has $n$ rows and $m$ columns, and say that $A$ has the dimensions $n \times m$ (row and column indices start at 0). For a subset of row indices $I$ and a subset of column indices $J$, denote by $I \times J$ the region which contains all pairs of indices $(i,j)$ such that $i \in I$ and $j \in J$. Define $A[I,J]$ to be the submatrix of $A$ which is induced by all entries in the region $I \times J$. When $I$ contains a single row $i$ or $J$ contains a single column $j$, we simplify the notation and write $A[i,J]$ or $A[I,j]$, respectively.

Define the following operations on matrices. Let $tr(\cdot)$ denote the transpose operation for matrices. For a set of matrices $\mathcal{A} = \{A^1, A^2, \dots, A^r\}$, all of the same dimensions $n \times m$, denote by $\min\{\mathcal{A}\}$ the entry-wise min operation over $\mathcal{A}$, whose result is a matrix $C_{n \times m}$ such that $C[i,j] = \min\{A[i,j] \mid A \in \mathcal{A}\}$. $\min\{\mathcal{A}\}$ can be computed in $O(|\mathcal{A}| nm)$ time in a straightforward manner. For matrices $A_{n \times k}$ and $B_{k \times m}$, the min-plus multiplication of $A$ and $B$, denoted $A \otimes B$, results in a matrix $C_{n \times m}$, where the entries of $C$ are defined by $C[i,j] = \min\{A[i,h] + B[h,j] \mid 0 \le h < k\}$. Naively, $A \otimes B$ can be computed in $O(nkm)$ operations. Denote the time complexity of a min-plus multiplication of two $n \times n$ matrices by $MP(n)$. At present, the asymptotically fastest algorithm for min-plus square matrix multiplication is that of Chan [12], taking $O\left(\frac{n^3 \log^3 \log n}{\log^2 n}\right)$ time.
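A naive implementation of the min-plus product defined above, for reference:

```python
# Naive min-plus multiplication: C[i][j] = min_h (A[i][h] + B[h][j]),
# computed in O(nkm) time as stated above.

def min_plus(A, B):
    k = len(B)
    assert all(len(row) == k for row in A)  # inner dimensions must agree
    return [[min(A[i][h] + B[h][j] for h in range(k))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[0, 2],
     [1, 0]]
B = [[0, 5],
     [3, 0]]
assert min_plus(A, B) == [[0, 2], [1, 0]]
```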

In the following observation, we point out how matrix multiplication can be computed as a composition of two parts, where each of the items (1-3) in the observation addresses a partitioning in one of the three dimensions. This will later be used by our recursive computation, which is based on such partitioning.

#### Observation 1.

Let $A_{n \times k}$, $B_{k \times m}$ and $C_{n \times m}$ be matrices such that $C = A \otimes B$ (see Figure 3).

1. For every $0 \le h < m$, $C[I_{0,n}, I_{0,h}] = A \otimes B[I_{0,k}, I_{0,h}]$ and $C[I_{0,n}, I_{h,m}] = A \otimes B[I_{0,k}, I_{h,m}]$.

2. For every $0 \le h < n$, $C[I_{0,h}, I_{0,m}] = A[I_{0,h}, I_{0,k}] \otimes B$ and $C[I_{h,n}, I_{0,m}] = A[I_{h,n}, I_{0,k}] \otimes B$.

3. For every $0 \le h < k$, $C = \min\{A[I_{0,n}, I_{0,h}] \otimes B[I_{0,h}, I_{0,m}],\ A[I_{0,n}, I_{h,k}] \otimes B[I_{h,k}, I_{0,m}]\}$.

### EDDC expressed via min-plus vector multiplications

The key observation that enables a further improvement of the worst-case bounds of EDDC is that Equations 3, 6, 8, and 9 can be expressed in terms of min-plus vector multiplications. Under the assumption that all solutions to sub-instances appearing on the right-hand side of the equations are computed and stored in the corresponding entries, these equations can be written as follows:

$\mathit{ed}'(\alpha, t_{l,j}) \stackrel{\text{Eq. 3}}{=} \mathit{dup}(\alpha) + \min\{T^\alpha[l,h] + T^\alpha[h,j] \mid l < h < j\}$
(10)
$\mathit{ed}'(s_{k,i}, \alpha) \stackrel{\text{Eq. 6}}{=} \min\{S^\alpha[k,h] + S^\alpha[h,i] \mid k < h < i\} + \mathit{cont}(\alpha)$
(11)
$\mathit{ed}(s_{0,i}, t_{0,j}) \stackrel{\text{Eq. 8}}{=} \min_{\alpha \in \Sigma} \min\{S^\alpha[0,i] + T^\alpha[0,j],\ \min\{\mathit{EDT}^\alpha[k,j] + S^\alpha[k,i] \mid 0 < k < i\}\}$
(12)
$\mathit{edt}^\alpha(s_{0,i}, t_{0,j}) \stackrel{\text{Eq. 9}}{=} \min\{\mathit{ED}[i,l] + T^\alpha[l,j] \mid 0 < l < j\}$
(13)

### The algorithm

The new algorithm has the same two stages as the baseline algorithm. It can be observed that the computation of all matrices of the forms $S^\alpha$, $S'^\alpha$, $S^\epsilon$, $T^\alpha$, $T'^\alpha$, and $T^\epsilon$ performed in Stage 1 of the baseline algorithm adheres to the Inside-VMT requirements as given in Definition 1 in [11]. The application of the generic Inside-VMT algorithm [11] to this computation is immediate, and therefore we focus only on adapting the method to the computation of matrices of the form $EDT^\alpha$ and $ED$ conducted in Stage 2 of the baseline algorithm.

After allocating all dynamic programming matrices and performing Stage 1 of the algorithm, the COMPUTE-MATRIX procedure is used for implementing Stage 2 (see Algorithm 2 and Figure 4). This is a divide-and-conquer recursive procedure that accepts a region $I \times J$ and computes the values in all entries of $ED$ and $EDT^\alpha$ within the region. The procedure partitions the given region into two parts and performs recursive calls on each part. In order to maintain a required precondition, the procedure applies min-plus matrix multiplication subroutines between recursive calls. The correctness proof of Algorithm 2 appears in Appendix “Correctness of Algorithm 2”.

### Time complexity analysis

The time complexity of Algorithm 2 can be established by an analysis identical to that of the Inside-VMT algorithm of [11] (see Section 3.3.1 of [11]). For completeness, we repeat this analysis here for Stage 2 of the computation, where the time complexity of Stage 1 can be inferred similarly. The complexity is expressed as a function of the bound $MP(n)$ on the running time of a min-plus multiplication of two $n \times n$ matrices. Note that $MP(n) = \Omega(n^2)$, as the input and output matrices of the computation contain $O(n^2)$ entries. We assume here that $MP(n) = \Omega(n^{2+\delta})$ for some constant $0 < \delta \le 1$, which is true for the current best bound on $MP(n)$ [12]. In some of the expressions developed below, we avoid the “big O” notation and give explicit bounds on the number of operations, as constant factors that may be hidden due to this notation cannot be ignored in the analysis.

The initialization time of Stage 2 is dominated by the matrix multiplications and entry-wise min operations performed in line 4 of Algorithm 2. This initialization performs $2|\Sigma|$ multiplications of matrices of dimensions $(n-1) \times 1$ with matrices of dimensions $1 \times (n-1)$, which can naively be implemented in $O(|\Sigma| n^2)$ time, and an entry-wise min operation over a set containing $|\Sigma|$ matrices of dimensions $(n-1) \times (n-1)$, which is also implemented in $O(|\Sigma| n^2)$ time.

The computation of the remaining entries in ED and EDTα matrices is done within recursive calls to the COMPUTE-MATRIX procedure. Observe that when COMPUTE-MATRIX is called over a region of dimensions r × r for some even integer r ≥ 2, the procedure applies a vertical partitioning and performs two recursive calls over regions of dimensions $r×\frac{r}{2}$ (lines 6 and 8). For a call over a region of dimensions $r×\frac{r}{2}$, the procedure applies a horizontal partitioning and performs two recursive calls over regions of dimensions $\frac{r}{2}×\frac{r}{2}$ (lines 11 and 13). For simplicity, assume that n - 1 = 2p for some integer p ≥ 0, and thus it follows that the dimensions of all regions occurring as inputs in recursive calls are either 1×1, or of the form r × r or $r×\frac{r}{2}$ for some even integer r. Denote by T (x × y) an upper bound over the number of operations conducted when applying COMPUTE-MATRIX over a region of dimensions x × y.

From line 2 of the procedure, T (1 × 1) = O (|Σ|). Consider a region of dimensions r × r for an even integer r ≥ 2. For such a region, the code in lines 5-8 of COMPUTE-MATRIX is executed. In order to implement line 7 for some α ∈ Σ, it is first necessary to compute a min-plus matrix multiplication C = A ⊗ B, where the matrix A = ED [ Ii,k,Ij,h] is of dimensions $r×\frac{r}{2}$, the matrix B = Tα[ Ij,h,Ih,l] is of dimensions $\frac{r}{2}×\frac{r}{2}$, and the resulting matrix C is of dimensions $r×\frac{r}{2}$. Due to Observation 1, it is possible to compute independently the upper and lower halves of C, where $C\left[\phantom{\rule{0.3em}{0ex}}{I}_{0,\frac{r}{2}},{I}_{0,\frac{r}{2}}\right]=A\left[\phantom{\rule{0.3em}{0ex}}{I}_{0,\frac{r}{2}},{I}_{0,\frac{r}{2}}\right]\otimes B$ and $C\left[\phantom{\rule{0.3em}{0ex}}{I}_{\frac{r}{2},r},{I}_{0,\frac{r}{2}}\right]=A\left[\phantom{\rule{0.3em}{0ex}}{I}_{\frac{r}{2},r},{I}_{0,\frac{r}{2}}\right]\otimes B$. The time required to conduct this computation is $2\mathit{\text{MP}}\left(\frac{r}{2}\right)$. Then, it is required to compute min {EDTα[ Ii,k, Ih,l], C} and to update EDTα[ Ii,k,Ih,l] to be the result of this operation, a computation which requires at most cr2 operations for some constant c. Since line 7 is computed for every α ∈ Σ, the total number of applied operations due to this line is at most $|\Sigma |\left(2\mathit{\text{MP}}\left(\frac{r}{2}\right)+c{r}^{2}\right)$. Besides line 7, two recursive calls are made in lines 6 and 8 over regions of dimensions $r×\frac{r}{2}$, and therefore we get

$T\left(r×r\right)\le 2T\left(r×\frac{r}{2}\right)+|\Sigma |\left(2\mathit{\text{MP}}\left(\frac{r}{2}\right)+c{r}^{2}\right).$

When the procedure is called over a region of dimensions $r×\frac{r}{2}$, the code in lines 10-13 is executed. Similarly as above, it can be shown that the computation in line 12 requires at most $|\Sigma |\left(\mathit{\text{MP}}\left(\frac{r}{2}\right)+\frac{c{r}^{2}}{4}\right)$ operations, and due to the two recursive calls in lines 11 and 13 over regions of dimensions $\frac{r}{2}×\frac{r}{2}$, we get

$T\left(r×\frac{r}{2}\right)\le 2T\left(\frac{r}{2}×\frac{r}{2}\right)+|\Sigma |\left(\mathit{\text{MP}}\left(\frac{r}{2}\right)+\frac{c{r}^{2}}{4}\right).$

Therefore,

$T\left(r×r\right)\le 4T\left(\frac{r}{2}×\frac{r}{2}\right)+|\Sigma |\left(4\mathit{\text{MP}}\left(\frac{r}{2}\right)+\frac{3c{r}^{2}}{2}\right).$

The explicit form of the above recursive equation can be established by the Master Theorem (under the assumption that MP (n) = Ω (n2+δ), see Chapter 4 in [16]), yielding the expression T (r × r) = O (|Σ|MP (r)). Thus, the time complexity of Stage 2 of the algorithm is O (|Σ|MP (n)). The time analysis of the Inside-VMT algorithm of [11], applied to implement Stage 1 of the algorithm, yields the same bound of O (|Σ|MP (n)), and thus O (|Σ|MP (n)) is the time complexity of the entire algorithm. Using the currently asymptotically fastest algorithm for min-plus matrix multiplication [12], with $\mathit{\text{MP}}\left(n\right)=O\left(\frac{{n}^{3}{\text{log}}^{3}\text{log}n}{{\text{log}}^{2}n}\right)$, we get the currently best explicit time bound for EDDC of $O\left(\frac{|\Sigma |{n}^{3}{\text{log}}^{3}\text{log}n}{{\text{log}}^{2}n}\right)$.
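The Master Theorem step can also be verified by unrolling the recurrence directly. Assuming, in addition to MP (n) = Ω (n2+δ), that $\mathit{\text{MP}}\left(\frac{n}{2}\right)\le \frac{\mathit{\text{MP}}\left(n\right)}{{2}^{2+\delta }}$ (which holds for bounds of the form MP (n) = Θ (n2+δ′) with δ′ ≥ δ), the per-level work forms a geometric series:

$T\left(r×r\right)\le \sum _{i\ge 0}{4}^{i}|\Sigma |\left(4\mathit{\text{MP}}\left(\frac{r}{{2}^{i+1}}\right)+\frac{3c}{2}\cdot \frac{{r}^{2}}{{4}^{i}}\right)\le |\Sigma |\mathit{\text{MP}}\left(r\right)\sum _{i\ge 0}{2}^{-\left(i+1\right)\delta }+O\left(|\Sigma |{r}^{2}\text{log}r\right)=O\left(|\Sigma |\mathit{\text{MP}}\left(r\right)\right),$

where the middle inequality uses ${4}^{i+1}\mathit{\text{MP}}\left(\frac{r}{{2}^{i+1}}\right)\le \mathit{\text{MP}}\left(r\right){2}^{-\left(i+1\right)\delta }$, the geometric series sums to O (1) for constant δ, and the r2 log r term is O (MP (r)) since MP (n) = Ω (n2+δ).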

## An online algorithm for EDDC using min-plus matrix-vector multiplication for discrete cost functions

In this section, we present an EDDC algorithm which is based on the general algorithm (given in Section “A matrix multiplication based algorithm for EDDC”) and improves its time complexity by a factor of $O\left({\text{log}}^{3}\text{log}n\right)$. This EDDC algorithm is intended for integer cost functions, but can also be applied to rational cost functions after they are scaled. It is an online algorithm: it can process the input strings letter by letter, with a guaranteed low time bound for any prefix of the input. The EDDC algorithm presented in this section is based on a D-discrete matrix-vector min-plus multiplication algorithm we developed, which is generic and may be applied to other problems as well.

### D-discrete matrices and the EDDC problem with integer costs

Given a matrix of integers An × m and indices 1 ≤ i < n and 0 ≤ j < m, call the pair of entries A [ i - 1,j] and A [ i,j] adjacent. Let D = Ia,b = [ a,a + 1,…,b - 1] be an integer interval for some integers a < b. Say that matrix A is D-discrete if for every pair of adjacent entries A [ i - 1,j] and A [ i,j], their difference A [ i - 1,j] - A [ i,j] is in D.
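The definition is easy to check mechanically. The following minimal Python sketch (our illustration, not part of the paper) tests D-discreteness of an integer matrix:

```python
# Check D-discreteness per the definition above: every vertically adjacent
# pair A[i-1][j], A[i][j] must have its difference A[i-1][j] - A[i][j] in
# the integer interval D = [a, b).

def is_d_discrete(A, a, b):
    """Return True iff A[i-1][j] - A[i][j] lies in [a, b) for all adjacent pairs."""
    n = len(A)
    m = len(A[0]) if n else 0
    return all(a <= A[i - 1][j] - A[i][j] < b
               for i in range(1, n) for j in range(m))

# Example: the column-wise differences are 1, 0, -1, 1.
A = [[3, 5],
     [2, 5],
     [3, 4]]
print(is_d_discrete(A, -1, 2))   # D = {-1, 0, 1} contains them all -> True
print(is_d_discrete(A, 0, 2))    # the -1 difference falls outside [0, 2) -> False
```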

Consider the EDDC problem in the case of integer costs for all edit operations. In Lemma 1, we show that in this case, all matrix multiplications applied by Algorithm 2 are between D-discrete matrices, with respect to a certain integer interval D. The proof is similar to that of Masek and Paterson for simple edit distance [17]. This allows conducting such matrix multiplications using a faster algorithm, described in Section “An efficient D-discrete min-plus matrix-vector multiplication algorithm”.

#### Lemma 1.

Given strings s and t and an integer cost function for EDDC, all matrix multiplications applied by Algorithm 2 are over D-discrete matrices, where D = Ia,b is determined according to the cost function by a = -max{del (α) | α ∈ Σ} and b = max{ins (α) | α ∈ Σ} + 1.

The proof of Lemma 1 appears in Appendix “Proofs to lemmas corresponding to the EDDC algorithm for discrete cost functions”.

### D-discrete matrices and vectors

Here, we present some properties of D-discrete matrices and vectors that are similar to those previously observed in [14, 17]. The following lemmas show that the set of D-discrete matrices is closed under the min-plus multiplication and entry-wise min operations. In what follows, let D=Ia,b be some integer interval. The proofs of the following lemmas appear in Appendix “Proofs to lemmas corresponding to the EDDC algorithm for discrete cost functions”.

#### Lemma 2.

Let X, Y and Z be matrices, such that X and Y contain only integer elements and Z = X ⊗ Y. If X is D-discrete, then Z is D-discrete.

#### Lemma 3.

Let X, Y and Z be matrices, such that X and Y contain only integer elements and Z = min{X,Y}. If X and Y are D-discrete, then Z is D-discrete.

The following lemma implies that when the absolute difference between the first elements of two q-length D-discrete vectors x and y is sufficiently large, one of the vectors can be immediately taken as the result of the min(x,y) operation.

#### Lemma 4.

Let x =(x0,…,xq - 1) and y = (y0,…,yq - 1) be two q-length D-discrete vectors for some q > 0. If y0 - x0 ≥ q |D|, then min(x,y) =  x.

In what follows, fix an integer q > 1. Let x = (x0,x1,…,xq-1) be a q-length D-discrete vector. By definition, for every 0 < i < q, xi-1 - x i  is an integer within D, and so xi-1 - x i  - a is an integer within the interval I0,b-a = I0,|D|. Therefore, the series x0 - x1 - a,x1 - x2 - a,…,xq-2 - xq-1 - a can be thought of as a series of q - 1 digits in a |D|-base representation of an integer $\Delta x=\sum _{0\le i<q-1}\left({x}_{i}-{x}_{i+1}-a\right)|D{|}^{q-2-i}$, where 0 ≤ Δx < |D|q-1. The Δ-encoding of x is defined to be the pair of integers (x0,Δx). We write x = (x0,Δx) to indicate that (x0,Δx) is the Δ-encoding of x, where x0 is called the offset of x and Δx is called the canonical index of x. Note that for two q-length D-discrete vectors x = (x0,Δx) and y = (y0,Δy), Δx = Δy if and only if for every 0 ≤ i < q, x i  - y i  = c for some constant c. In particular, x and y share the same Δ-encoding if and only if they are identical. Call a D-discrete vector of the form x = (0,Δx) (with an offset x0 = 0) a canonical vector.

The next observations show that both operations of entry-wise min and min-plus multiplication, with respect to D-discrete matrices and vectors, can be expressed via canonical vectors.

#### Observation 2.

Let x = (x0,Δ x), y = (y0,Δ y), and z = (z0,Δ z) be q-length D-discrete vectors such that z = min (x,y). Then, for every number c it holds that min((x0 - c,Δ x),(y0 - c,Δ y)) = (z0 - c,Δ z). In particular, min((0,Δ x),(y0 - x0,Δ y)) = (z0 - x0,Δ z).

#### Observation 3.

Let Bq × q be a D-discrete matrix, x = (0,Δ x) a q-length canonical D-discrete vector, and y = (y0,Δ y) a q-length D-discrete vector, such that B ⊗ x = y. Then, for any number c it holds that B ⊗ (c,Δ x) = (y0 + c,Δ y).

### An efficient D-discrete min-plus matrix-vector multiplication algorithm

Let An×m be a D-discrete matrix, and fix a constant $\frac{1}{\underset{|D|}{\text{log}}\left(n+m\right)}<\lambda <1$. We give an algorithm for min-plus D-discrete matrix-vector multiplication that, after preprocessing A in $O\left(\frac{\mathit{\text{nm}}{\left(n+m\right)}^{\lambda }}{|D|}\right)$ time and $O\left(\frac{\mathit{\text{nm}}{\left(n+m\right)}^{\lambda }}{|D|{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left(n+m\right)}\right)$ space, computes A ⊗ x for any m-length D-discrete vector x in $O\left(\frac{\mathit{\text{nm}}}{{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left(n+m\right)}\right)$ time under the RAM computational model. Our algorithm is an adaptation of Williams’ algorithm [13] for finite semiring matrix-vector multiplications, with some notions similar to Frid and Gusfield’s acceleration technique for RNA folding [14]. It follows the concept of the Four-Russians algorithm [18] (see also [14, 17, 19]): preprocessing recurring computations, tabulating their results in lookup tables, and retrieving these results in order to accelerate the general computation.

Specifically, the algorithm stores preprocessed computations of two kinds: matrix-vector min-plus multiplications, and vector entry-wise minima, where the vectors are of length q and the matrices of dimensions q × q, for $q=⌊\lambda \underset{|D|}{\text{log}}\left(n+m\right)⌋$. For conducting this preprocessing, we assume that |D| ≤ n + m, since otherwise q = 0 and the multiplication cannot be accelerated using the suggested method. In addition, for simplicity of the analysis we assume that q3 ≤ min(n,m). If this does not hold, a multiplication of the form A ⊗ x can be naively computed in the relatively efficient time complexity of $O\left(\text{max}\left(n,m\right)\underset{|D|}{\overset{3}{\text{log}}}\left(n+m\right)\right)$. The space complexity of the preprocessing phase is higher than the O (n m) space complexity of the standard multiplication algorithm and depends on the constant λ, ranging between O(n m|D|) and $O\left(\frac{\mathit{\text{nm}}\left(n+m\right)}{\underset{|D|}{\overset{2}{\text{log}}}\left(n+m\right)}\right)$ for λ values between $\frac{1}{\underset{|D|}{\text{log}}\left(n+m\right)}$ and 1, respectively. The lower bound $\frac{1}{\underset{|D|}{\text{log}}\left(n+m\right)}$ for λ is chosen so that the time complexity $O\left(\frac{\mathit{\text{nm}}}{{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left(n+m\right)}\right)$ of matrix-vector multiplications involving the preprocessed matrix is better than the naive time complexity O (n m).

#### Preprocessing of matrix-vector ⊗ computations

Let ${n}^{\prime }=q⌊\frac{n}{q}⌋$ and ${m}^{\prime }=q⌊\frac{m}{q}⌋$, and note that 0 ≤ n - n′ < q and 0 ≤ m - m′ < q. Let Q k  denote the q-length integer interval Q k  = [ k q,k q + 1,…,(k + 1) q - 1]. The sub-matrix ${A}^{\prime }=A\left[{I}_{0,{n}^{\prime }},{I}_{0,{m}^{\prime }}\right]$ is decomposed into $\frac{{n}^{\prime }{m}^{\prime }}{{q}^{2}}$ blocks Bi,j = A[ Q i ,Q j ], where $i=0,1,\dots ,\frac{{n}^{\prime }}{q}-1$ and $j=0,1,\dots ,\frac{{m}^{\prime }}{q}-1$. For each block B, a corresponding lookup table MUL B  is created, which tabulates min-plus multiplications between B and all canonical q-length D-discrete vectors. For the canonical vector x = (0,Δ x), the result y = B ⊗ x is stored in the entry MUL B [ Δ x] by its encoding (y0,Δ y) (by Lemma 2, y is also D-discrete and thus can be encoded accordingly).

The multiplication of a q × q block with a q-length vector can be done in O (q2) time in a straightforward manner and the encoding of the resulting q-length vector requires additional O (q) time. There are $\frac{{n}^{\prime }{m}^{\prime }}{{q}^{2}}$ blocks in the decomposition of A, each is multiplied by |D|q-1 canonical vectors, and so the total time required for computing these multiplications is $O\left({q}^{2}|D{|}^{q-1}\frac{\mathit{\text{nm}}}{{q}^{2}}\right)=O\left(|D{|}^{q-1}\mathit{\text{nm}}\right)=O\left(\frac{\mathit{\text{nm}}{\left(n+m\right)}^{\lambda }}{|D|}\right)$.
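The block preprocessing can be sketched on a toy instance as follows (Python with illustrative names; `encode`/`decode` transcribe the Δ-encoding under our digit-ordering assumption):

```python
# Build the MUL_B table for one q x q block B: tabulate the min-plus product
# B (x) x for every canonical q-length D-discrete vector x = (0, Delta_x),
# storing each result by its own Delta-encoding.

def encode(x, a, b):
    d, delta = b - a, 0
    for i in range(len(x) - 1):
        delta = delta * d + (x[i] - x[i + 1] - a)
    return x[0], delta

def decode(offset, delta, q, a, b):
    d = b - a
    digits = []
    for _ in range(q - 1):
        digits.append(delta % d)
        delta //= d
    x = [offset]
    for digit in reversed(digits):
        x.append(x[-1] - (digit + a))
    return x

def minplus_mat_vec(B, x):
    return [min(B[i][j] + x[j] for j in range(len(x))) for i in range(len(B))]

def build_mul_table(B, a, b):
    """MUL_B[Delta_x] = encoding of B (x) (0, Delta_x), for all |D|^(q-1) canonical x."""
    q, d = len(B), b - a
    table = [None] * (d ** (q - 1))
    for delta in range(d ** (q - 1)):
        x = decode(0, delta, q, a, b)        # the canonical vector with index delta
        table[delta] = encode(minplus_mat_vec(B, x), a, b)
    return table

# Toy D-discrete block with q = 2 and D = [-1, 2); the table has 3 entries.
B = [[0, 2],
     [1, 1]]
print(build_mul_table(B, -1, 2))             # -> [(0, 0), (0, 0), (0, 1)]
```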

Let (y0,Δ y) be the Δ-encoding of some result y = B ⊗ x computed in the preprocessing of A as described above. Note that y0 = min 0 ≤ i < q {B [ 0,i] + x [ i]} ≤ min 0 ≤ i < q {2 max(B [ 0,i],x[ i])}. Therefore, the number of bits in the binary representation of y0 is at most one plus the maximum number of bits required for the representations of B [ 0,i] and x [ i] for some 0 ≤ i < q. Also, note that $0\le \Delta y<|D{|}^{q-1}=\frac{{\left(n+m\right)}^{\lambda }}{|D|}$, and so Δ y can be represented in O (log(n + m)) bits. Thus, under the RAM computational model assumptions, each such encoding (y0,Δ y) requires O (1) space units and can be written and read in constant time, and therefore the overall space complexity for maintaining all MUL B  tables is $O\left(\frac{|D{|}^{q-1}\mathit{\text{nm}}}{{q}^{2}}\right)=O\left(\frac{\mathit{\text{nm}}{\left(n+m\right)}^{\lambda }}{|D|{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left(n+m\right)}\right)$. In addition, given a canonical index Δ x, it is possible to retrieve the encoding (y0,Δ y) = B ⊗ (0,Δ x) stored in the entry MUL B [ Δ x] in constant time.

Let x = (x0,Δ x) be some (not necessarily canonical) q-length D-discrete vector, for which we wish to compute Bx. Due to Observation 3, the multiplication result can be obtained in constant time by retrieving (y0,Δ y) = MUL B [ Δ x], and returning the encoding (y0 + x0,Δ y).

#### Preprocessing of vector entry-wise min computations

The algorithm constructs a lookup table MIN, storing entry-wise min calculations between every canonical q-length D-discrete vector x = (0,Δ x) and every q-length D-discrete vector y = (y0,Δ y) such that abs (y0) < q|D| (here abs (y0) denotes the absolute value of y0). For every such x and y, the table entry MIN [ Δ x,y0,Δ y] stores the Δ-encoding (z0,Δ z) of the vector z = min (x,y) (due to Lemma 3, z is D-discrete and can be encoded accordingly). There are $O\left(q|D||D{|}^{2\left(q-1\right)}\right)=O\left(\frac{{\left(n+m\right)}^{2\lambda }\lambda \underset{|D|}{\text{log}}\left(n+m\right)}{|D|}\right)$ entries in the table MIN, and each entry can be computed in O(q) = O (λ log|D|(n + m)) time. Thus, the computation of all entries in MIN requires $O\left(\frac{{\left(n+m\right)}^{2\lambda }{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left(n+m\right)}{|D|}\right)$ time, and the table occupies $O\left(\frac{{\left(n+m\right)}^{2\lambda }\lambda \underset{|D|}{\text{log}}\left(n+m\right)}{|D|}\right)$ space.

Given two encoded q-length D-discrete vectors x = (x0,Δ x) and y = (y0,Δ y), the encoding (z0,Δ z) of the vector z = min(x,y) can now be obtained in constant time as follows: (z0,Δ z) = (x0,Δ x) if y0 - x0 ≥ q |D| or (z0,Δ z) = (y0,Δ y) if x0 - y0 ≥ q |D|, due to Lemma 4. Otherwise, |y0 - x0| < q |D|, and for the vectors x′ = (0,Δ x), y′ = (y0 - x0,Δ y), and ${z}^{\prime }=\left({z}_{0}^{\prime },\Delta {z}^{\prime }\right)=\text{min}\left({x}^{\prime },{y}^{\prime }\right)$, we have that $\left({z}_{0}^{\prime },\Delta {z}^{\prime }\right)$ = MIN [ Δ x,y0 - x0,Δ y]. From Observation 2, $\left({z}_{0},\Delta z\right)=\left({z}_{0}^{\prime }+{x}_{0},\Delta {z}^{\prime }\right)$.
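The constant-time case analysis above can be transcribed directly. In the following Python sketch the MIN table is replaced by an on-the-fly computation for readability; in the actual preprocessing it would be a lookup indexed by (Δx, y0 - x0, Δy). All names are illustrative:

```python
# Entry-wise min of two encoded q-length D-discrete vectors, with the
# Lemma 4 shortcut for far-apart offsets.

def decode(offset, delta, q, a, b):
    d = b - a
    digits = []
    for _ in range(q - 1):
        digits.append(delta % d)
        delta //= d
    x = [offset]
    for digit in reversed(digits):
        x.append(x[-1] - (digit + a))
    return x

def encode(x, a, b):
    d, delta = b - a, 0
    for i in range(len(x) - 1):
        delta = delta * d + (x[i] - x[i + 1] - a)
    return x[0], delta

def encoded_min(ex, ey, q, a, b):
    (x0, dx), (y0, dy) = ex, ey
    d = b - a
    if y0 - x0 >= q * d:              # Lemma 4: x is entry-wise no larger than y
        return ex
    if x0 - y0 >= q * d:
        return ey
    # |y0 - x0| < q|D|: the case the MIN table would answer in O(1).
    x = decode(x0, dx, q, a, b)
    y = decode(y0, dy, q, a, b)
    return encode([min(u, v) for u, v in zip(x, y)], a, b)

a, b, q = -1, 2, 3
ex = encode([5, 4, 4], a, b)          # -> (5, 7)
ey = encode([6, 6, 5], a, b)          # -> (6, 5)
print(encoded_min(ex, ey, q, a, b))   # -> (5, 7), i.e. the vector [5, 4, 4]
```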

#### Computing matrix-vector multiplications

Given an m-length D-discrete vector x and assuming the preprocessing of matrix An × m was performed as described above, we next explain how to efficiently compute the vector y = A ⊗ x. Note that y is an n-length D-discrete vector, due to Lemma 2.

Our algorithm first computes the multiplication ${y}^{\prime }={A}^{\prime }\otimes x\left[{I}_{0,{m}^{\prime }}\right]$ in parts of length q. First, for every $0\le j<\frac{{m}^{\prime }}{q}$, the algorithm computes the encoding $\left({x}_{0}^{j},\Delta {x}^{j}\right)$ of the sub-vector xj = x [ Q j ] of x. These encodings can be obtained in a total time of O (m). Then, for every $0\le i<\frac{{n}^{\prime }}{q}$, the encoding $\left({y}_{0}^{i},\Delta {y}^{i}\right)$ of the sub-vector ${y}^{i}={A}^{\prime }\left[\phantom{\rule{0.3em}{0ex}}{Q}_{i},{I}_{0,{m}^{\prime }}\right]\otimes x\left[\phantom{\rule{0.3em}{0ex}}{I}_{0,{m}^{\prime }}\right]$ of y′ is computed independently of the other sub-vectors of y′. By definition (see Figure 5),

$\begin{array}{ll}{y}^{i}& =\text{min}\left\{{A}^{\prime }\left[\phantom{\rule{0.3em}{0ex}}{Q}_{i},{Q}_{j}\right]\otimes x\left[\phantom{\rule{0.3em}{0ex}}{Q}_{j}\right]\phantom{\rule{1em}{0ex}}|\phantom{\rule{1em}{0ex}}0\le j<\frac{{m}^{\prime }}{q}\right\}\\ =\text{min}\left\{{B}_{i,j}\otimes {x}^{\phantom{\rule{0.3em}{0ex}}j}\phantom{\rule{1em}{0ex}}|\phantom{\rule{1em}{0ex}}0\le j<\frac{{m}^{\prime }}{q}\right\}.\end{array}$

The encoded result $\left({z}_{0}^{i,j},\Delta {z}^{i,j}\right)$ of each multiplication zi,j = Bi,j ⊗ xj can be obtained in a constant time as explained in Section “Preprocessing of matrix-vector ⊗ computations”. As there are $\frac{{m}^{\prime }}{q}$ such terms to compute with respect to yi, their total computation time is $O\left(\frac{m}{q}\right)$. In addition, the entry-wise min over all these terms can be computed by initializing $\left({y}_{0}^{i},\Delta {y}^{i}\right)←\left({z}_{0}^{i,0},\Delta {z}^{i,0}\right)$, and iteratively updating $\left({y}_{0}^{i},\Delta {y}^{i}\right)←\text{min}\left(\left({y}_{0}^{i},\Delta {y}^{i}\right),\left({z}_{0}^{i,j},\Delta {z}^{i,j}\right)\right)$ for all $0<j<\frac{{m}^{\prime }}{q}$. Each such update is computed in a constant time as described in Section “Preprocessing of vector entry-wise min computations”, and so the encoding of a single segment yi in y′ is computed in a total time of $O\left(\frac{m}{q}\right)$, and the encodings of all $O\left(\frac{n}{q}\right)$ such segments are computed in $O\left(\frac{\mathit{\text{nm}}}{{q}^{2}}\right)$ time. Decoding all encoded vectors yi can be done in additional O (n) operations, obtaining an explicit form of y′ in a total time of $O\left(\frac{\mathit{\text{nm}}}{{q}^{2}}\right)$.

Let ${y}^{\prime \prime }=A\left[{I}_{0,{n}^{\prime }},{I}_{{m}^{\prime },m}\right]\otimes x\left[{I}_{{m}^{\prime },m}\right]$, where from Observation 1, $y\left[{I}_{0,{n}^{\prime }}\right]=\text{min}\left({y}^{\prime },{y}^{\prime \prime }\right)$. The computation of y′′ can be conducted in O (n q) time in a straightforward manner, and the computation of min(y′,y′′) requires additional O (n) time. In addition, $y\left[{I}_{{n}^{\prime },n}\right]=A\left[{I}_{{n}^{\prime },n},{I}_{0,m}\right]\otimes x$, where this computation can be done naively in O (m q) time, and so the overall running time for computing y is $O\left(\frac{\mathit{\text{nm}}}{{q}^{2}}+\mathit{\text{nq}}+\mathit{\text{mq}}\right)=O\left(\frac{\mathit{\text{nm}}}{{q}^{2}}\right)=O\left(\frac{\mathit{\text{nm}}}{{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left(n+m\right)}\right)$.
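The blocked computation of y′ can be sketched end to end. The following toy Python (illustrative names; q is chosen to divide n so the y′′ and residual-row cases vanish) builds the MUL tables, multiplies through encodings, and checks the result against the naive product:

```python
# End-to-end sketch of the table-driven blocked min-plus product y = A (x) x
# for a D-discrete matrix A, with D = {-1, 0, 1}.
import random

A_, B_ = -1, 2                                   # D = [A_, B_)
D = B_ - A_

def encode(v):
    delta = 0
    for i in range(len(v) - 1):
        delta = delta * D + (v[i] - v[i + 1] - A_)
    return v[0], delta

def decode(offset, delta, q):
    digits = []
    for _ in range(q - 1):
        digits.append(delta % D)
        delta //= D
    v = [offset]
    for digit in reversed(digits):
        v.append(v[-1] - (digit + A_))
    return v

def naive(M, v):
    """Plain min-plus matrix-vector product, used as the reference."""
    return [min(M[i][j] + v[j] for j in range(len(v))) for i in range(len(M))]

def emin(e1, e2, q):
    """Entry-wise min on encodings (Lemma 4 shortcut, else explicit min)."""
    (u0, du), (v0, dv) = e1, e2
    if v0 - u0 >= q * D:
        return e1
    if u0 - v0 >= q * D:
        return e2
    return encode([min(s, t) for s, t in zip(decode(u0, du, q), decode(v0, dv, q))])

def blocked_multiply(A, x, q):
    """y = A (x) x for a D-discrete A, with n divisible by q (no boundary strips)."""
    n = len(A)
    tables = {}                                  # MUL_B for each q x q block
    for bi in range(n // q):
        for bj in range(n // q):
            block = [row[bj * q:(bj + 1) * q] for row in A[bi * q:(bi + 1) * q]]
            tables[bi, bj] = [encode(naive(block, decode(0, d, q)))
                              for d in range(D ** (q - 1))]
    xs = [encode(x[j * q:(j + 1) * q]) for j in range(n // q)]
    y = []
    for bi in range(n // q):
        best = None
        for bj, (x0, dx) in enumerate(xs):
            z0, dz = tables[bi, bj][dx]          # block times canonical part of x^j
            cur = (z0 + x0, dz)                  # shift the offset (Observation 3)
            best = cur if best is None else emin(best, cur, q)
        y.extend(decode(best[0], best[1], q))
    return y

random.seed(1)
n, q = 8, 2
A = [[0] * n]                                    # random D-discrete matrix
for _ in range(n - 1):
    A.append([A[-1][j] - random.choice([-1, 0, 1]) for j in range(n)])
x = [3]                                          # random D-discrete vector
for _ in range(n - 1):
    x.append(x[-1] - random.choice([-1, 0, 1]))
print(blocked_multiply(A, x, q) == naive(A, x))  # agrees with the naive product
```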

The above matrix-vector min-plus multiplication algorithm can be used as a fast square matrix-matrix multiplication algorithm in a straightforward manner. For two D-discrete matrices An × n and Bn × n, the computation of C = A ⊗ B can be conducted by first preprocessing A as described in Section “Preprocessing of matrix-vector ⊗ computations” in $O\left(\frac{{n}^{2+\lambda }}{|D|}\right)$ time and $O\left(\frac{{n}^{2+\lambda }}{|D|{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}n}\right)$ space, and then computing each column j of C independently by multiplying A with the j-th column of B, in $O\left(\frac{{n}^{2}}{{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}n}\right)$ time as explained above. The total computation time of all n columns of C is therefore $O\left(\frac{{n}^{3}}{{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}n}\right)$.
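The column-by-column reduction is direct; a minimal Python sketch (with the naive min-plus product standing in for the table-accelerated matrix-vector multiplication):

```python
# Reduce a min-plus matrix-matrix product to n matrix-vector products,
# one per column of B, then reassemble the result columns into rows.

def minplus_mat_vec(A, x):
    return [min(A[i][j] + x[j] for j in range(len(x))) for i in range(len(A))]

def minplus_mat_mat(A, B):
    n = len(B)
    cols = [minplus_mat_vec(A, [B[i][j] for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]   # columns -> rows

A = [[0, 1], [1, 0]]
B = [[2, 3], [2, 2]]
print(minplus_mat_mat(A, B))   # -> [[2, 3], [2, 2]]
```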

### Online preprocessing of D-discrete matrices

In the previous section, we assumed a setting in which a D-discrete matrix is given and is preprocessed once, prior to any multiplication operation. Next, we describe how to maintain the required lookup tables in the case where the input matrix is dynamic, acquiring additional rows and columns. Consider a streaming computational model, which begins with an initial empty matrix ${A}_{0×0}^{0}$. In each step r, the current matrix ${A}_{{n}_{r}×{m}_{r}}^{r}$ is obtained from the previous matrix ${A}_{{n}_{r-1}×{m}_{r-1}}^{r-1}$ by either adding an mr-1-length vector as the last row or adding an nr-1-length vector as the last column of the matrix. Note that n r  + m r  = r, and therefore the preprocessing block length corresponding to Ar is q = ⌊λ log|D|(n r  + m r )⌋ = ⌊λ log|D|(r)⌋. For the purpose of this analysis, we assume that λ ≤ 0.5 (note that this does not limit the asymptotic upper bounds of the running time). This assumption implies the following inequality:

$|D{|}^{\frac{1}{\lambda }}\ge {2}^{2}=4.$
(14)

Lookup tables corresponding to intermediate matrices along the series can be maintained as follows. Let r0 and r1 be the smallest integers such that the block sizes corresponding to ${A}^{{r}_{0}}$ and ${A}^{{r}_{1}}$ are q and q + 1, respectively. Assume that upon reaching ${A}^{{r}_{0}}$ in the matrix sequence, all required lookup tables with respect to ${A}^{{r}_{0}}$ are already computed. Along the series of steps r0,r0 + 1,…,r1, we distribute two kinds of computations: (1) new MUL B  tables for accumulated q × q blocks in matrices Ar for r0 ≤ r < r1, and (2) a new MIN table, as well as new MUL B  tables, with respect to block length q + 1.

(1) Computing MUL B tables for accumulated q × q blocks. Assume that for some r0 ≤ r < r1, a column was added to the matrix at step r so that the number of columns m r  in the intermediate matrix Ar is divisible by q. Thus, at most $\frac{{n}_{r}}{q}\le \frac{r}{q}$ new q × q complete blocks are now available for preprocessing. The computation of lookup tables of the form MUL B  corresponding to these new blocks will be equally distributed along the series of q consecutive steps r,r + 1,…,r + q - 1, during which it is guaranteed that no column addition would introduce new complete q × q blocks in the matrix. As shown in Section “Preprocessing of matrix-vector computations”, the time required for processing a single q × q block is O (q2|D|q-1), and so the total time for processing all $O\left(\frac{r}{q}\right)$ blocks is O (q r |D|q-1). Thus, in each step among the q steps, there is a need to perform $O\left(r|D{|}^{q-1}\right)=O\left(\frac{{r}^{1+\lambda }}{|D|}\right)$ operations due to these computations. Symmetrically, computing lookup tables corresponding to new blocks added due to the accumulation of rows can be performed by conducting $O\left(\frac{{r}^{1+\lambda }}{|D|}\right)$ operations per step r.

(2) Computing a new MIN lookup table and new MUL B tables with respect to block length q + 1. By the selection of r0 and r1, q - 1 = ⌊λ log|D|(r0 - 1)⌋ > λ log|D|(r0 - 1) - 1, and q + 1 = ⌊λ log|D|(r1)⌋ ≤ λ log|D|(r1). Therefore, $\underset{|D|}{\text{log}}\left({r}_{1}\right)>\underset{|D|}{\text{log}}\left({r}_{0}-1\right)+\frac{1}{\lambda }$, and so ${r}_{1}>\left({r}_{0}-1\right)|D{|}^{\frac{1}{\lambda }}\ge {r}_{0}\frac{|D{|}^{\frac{1}{\lambda }}}{2}\stackrel{\text{Eq.14}}{\ge }2{r}_{0}$. In particular, ${r}_{2}=\frac{{r}_{1}}{2}$ satisfies r0 < r2 < r1, and for every r2 ≤ r < r1 we have that O (r) = O (r1). The computation of the table MIN and the tables of the form MUL B  with respect to block length q + 1 is distributed along the series of $\frac{{r}_{1}}{2}$ steps r2,r2 + 1,…,r1.

The new MIN table is computed independently from the specific input instance, and its overall computation time is $O\left(\frac{{r}_{1}^{2\lambda }{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left({r}_{1}\right)}{|D|}\right)$ (see Section “Preprocessing of vector entry-wise min computations”). By distributing this computation evenly along all O (r1) steps, the computation time required for each step r2 ≤ r < r1 is $O\left(\frac{{r}_{1}^{2\lambda -1}{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left({r}_{1}\right)}{|D|}\right)=O\left(\frac{{r}^{2\lambda -1}{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left(r\right)}{|D|}\right)$.

The MUL B  tables are computed similarly as done in (1), starting with (q + 1) × (q + 1) blocks already present in ${A}^{{r}_{2}}$, and continuing with blocks accumulated as the sequence progresses. The overall preprocessing time of all these blocks is $O\left(\frac{{r}_{1}^{2+\lambda }}{|D|}\right)$ (see Section “Preprocessing of matrix-vector ⊗ computations”), and so the computation time required for each step r2 ≤ r < r1 is $O\left(\frac{{r}_{1}^{1+\lambda }}{|D|}\right)=O\left(\frac{{r}^{1+\lambda }}{|D|}\right)$.

All in all, the time complexity due to computations of (1) and (2) for each step r0 ≤ r < r1 is $O\left(\frac{{r}^{1+\lambda }}{|D|}\right)$. In particular, the overall time complexity of preprocessing the n - size prefix A0,A1,…,An of the streamed matrices is $O\left(\frac{{n}^{2+\lambda }}{|D|}\right)$.

### The EDDC algorithm based on efficient D-discrete min-plus matrix-vector multiplication

Consider the EDDC problem in cases where all edit operation costs are integers. As explained in Section “D-discrete matrices and the EDDC problem with integer costs”, the EDDC DP tables can be considered D-discrete. This property allows for efficient min-plus D-discrete matrix-vector multiplications, using the algorithm described in Section “An efficient D-discrete min-plus matrix-vector multiplication algorithm”, yielding an $O\left(\frac{|\Sigma |{n}^{3}}{\underset{|D|}{\overset{2}{\text{log}}}n}\right)$ running time algorithm for EDDC. We next describe an online version of the algorithm, in which the letters of the input strings s and t are received in a streaming model.

Assume that some pair of prefixes s0,i and t0,j-1 was already processed, and all entries in the DP matrices corresponding to these prefixes are computed. We explain how to update the tables in the case where the next letter to arrive is the letter tj-1 in t; the case in which the arriving letter is from s is symmetric. The DP matrices are D-discrete, and we assume that lookup tables for efficient min-plus multiplications of these matrices are maintained as explained in the previous section. The addition of tj-1 requires updating all matrices of the forms Tε, Tα, and Tα, for which the j-th row and column should be added. In addition, it is required to add the j-th column to matrices of the form ED and EDTα.

In the first stage, the algorithm computes rows and columns j in all matrices of the form Tα, Tα, and Tε. The process is similar to the computation of these entries by the loop in lines 5 to 8 of Algorithm 1, with the following modification. Let ${q}_{j}=⌊\lambda \underset{|D|}{\text{log}}\left(2j\right)⌋$, and let ${j}^{\prime }={q}_{j}⌊\frac{j}{{q}_{j}}⌋$. The algorithm first initializes the entries [ j - 1,j] in all these matrices with the corresponding base-case values. The column is partitioned into intervals of length q j , where as before Q k  denotes the interval ${I}_{k{q}_{j},\left(k+1\right){q}_{j}}$. Once an interval Q k  is computed (i.e. the loop was executed with respect to index l = k q j ), the Δ-encoding of the sub-vector Tα[ Q k ,j] is computed and kept for its later usage as a lookup index. In addition, upon starting to compute the entries within an interval Q k  (i.e. when l = (k + 1)q j  - 1), the following multiplications are computed for every α ∈ Σ:

$\phantom{\rule{-6.0pt}{0ex}}\begin{array}{ll}{y}^{\alpha ,k}& ={T}^{\alpha }\left[\phantom{\rule{0.3em}{0ex}}{Q}_{k},{I}_{{q}_{j}\left(k+1\right),{j}^{\prime }}\right]\otimes {T}^{\alpha }\left[\phantom{\rule{0.3em}{0ex}}{I}_{{q}_{j}\left(k+1\right),{j}^{\prime }},j\right]\\ & \stackrel{\text{Obs.1}}{=}\text{min}\left\{{T}^{\alpha }\left[\phantom{\rule{0.3em}{0ex}}{Q}_{k},{Q}_{p}\right]\otimes {T}^{\alpha }\left[\phantom{\rule{0.3em}{0ex}}{Q}_{p},j\right]\phantom{\rule{1em}{0ex}}|\phantom{\rule{1em}{0ex}}k<p<\frac{{j}^{\prime }}{{q}_{j}}\right\}.\end{array}$

Observe that all required entries for the computation of yα,k are already computed and stored in Tα, and that, similarly as done in Section “Computing matrix-vector multiplications”, yα,k can be computed by performing $O\left(\frac{j}{{q}_{j}}\right)$ constant time lookup table queries. After yα,k is computed, yα,k[ x] contains the value min{Tα[ k q j  + x,h] + Tα[ h,j] | (k + 1)q j  ≤ h < j′}. Given yα,k[ x], the number of expressions that need to be examined in line 5 of the loop with respect to l = k q j  + x reduces to O (q j ) per entry (considering values of the index h between l and (k + 1)q j , and between j′ and j). Entries in matrices of the form Tα and Tε are computed exactly as done in lines 6 and 7 of Algorithm 1, respectively.

In the second stage, column j is computed in matrices EDTα and ED. This is achieved by extending Equations 13 and 12 to have an entire column on the left-hand side, as follows:

$\mathit{\text{ED}}{T}^{\alpha }\left[{I}_{2,i+1},j\right]\stackrel{\text{Eq.13}}{←}\mathit{\text{ED}}\left[{I}_{2,i+1},{I}_{1,j}\right]\otimes {T}^{\alpha }\left[{I}_{1,j},j\right]$
(15)
(16)

This completes the update of the DP tables due to the addition of the letter tj-1.

#### Complexity analysis

After receiving n letters, the prefixes s0,i and t0,j of the input strings were preprocessed for some i,j such that i + j = n. The maintenance of lookup tables for efficient D-discrete multiplications requires at most $O\left(\frac{|\Sigma |{n}^{1+\lambda }}{|D|}\right)$ operations per step among the first n steps, and $O\left(\frac{|\Sigma |{n}^{2+\lambda }}{|D|}\right)$ operations for all first n steps, as shown in Section “Online preprocessing of D-discrete matrices”.

Adding a letter tj-1 to the instance, the time required for processing the entries in column j of the T matrices is as follows. $O\left(\frac{|\Sigma |j}{{q}_{j}}\right)$ vectors yα,k need to be computed, each in $O\left(\frac{j}{{q}_{j}}\right)$ time, and their total computation time is therefore $O\left(\frac{|\Sigma |{j}^{2}}{{q}_{j}^{2}}\right)$. In addition, O (|Σ|j) entries in tables Tα are computed in O (q j ) time each, and O (|Σ|j) entries in tables Tα and Tε are computed in O (|Σ|) time each. Therefore, the total time for computing column j in all these matrices is $O\left(\frac{|\Sigma |{j}^{2}}{{q}_{j}^{2}}+|\Sigma {|}^{2}j\right)=O\left(\frac{|\Sigma |{n}^{2}}{{\lambda }^{2}\underset{|D|}{\overset{2}{\text{log}}}\left(n\right)}+|\Sigma {|}^{2}n\right)$.

Computing column j in matrices $\mathit{EDT}^{\alpha}$ and ED, the algorithm performs $O(|\Sigma|)$ matrix-vector min-plus multiplications (Equations 15 and 16), each taking $O\left(\frac{n^{2}}{\lambda^{2}\log_{|D|}^{2}n}\right)$ time using the algorithm in Section “The EDDC algorithm based on efficient D-discrete min-plus matrix-vector multiplication”, and computes the entry-wise minimum of $|\Sigma|$ vectors of length i (Equation 16) in $O(|\Sigma|i)$ time. Hence, the total time complexity of computing column j is $O\left(\frac{|\Sigma|n^{2}}{\lambda^{2}\log_{|D|}^{2}n}+|\Sigma|^{2}n\right)$. Symmetrically, this bounds the running time when the n-th letter comes from the source string s, and so the total running time over all first n steps is $O\left(\frac{|\Sigma|n^{3}}{\lambda^{2}\log_{|D|}^{2}n}+|\Sigma|^{2}n^{2}\right)$. The algorithm requires $O\left(\frac{|\Sigma|n^{2+\lambda}}{\lambda^{2}\log_{|D|}^{2}n}\right)$ space for the computed tables.

### Online VMT algorithms

The online algorithm for EDDC presented in the previous section can be generalized to other problems with similar properties. Specifically, VMT problems [11], which utilize min-plus multiplications and for which it can be guaranteed that the computed DP matrices are D-discrete, can have their algorithms implemented using the framework presented above. Thus, in contrast to the general case of VMT problems, in which the complete input must be available at the beginning of the algorithm’s run, in the D-discrete case the input can be obtained in a streaming model. In addition, the asymptotic time complexity in such cases is slightly reduced with respect to the time complexity of min-plus multiplication of general matrices. A concrete example of such a problem is the RNA base-pairing maximization problem [11, 15], in which the difference between adjacent entries (in the single DP matrix the algorithm uses) is either 0 or 1. This property was previously exploited by Frid and Gusfield [14] to obtain an $O\left(\frac{n^{3}}{\log n}\right)$ algorithm for the problem. Using the D-discrete min-plus multiplication algorithm presented here, this immediately implies an algorithm with the improved time bound of $O\left(\frac{n^{3}}{\log^{2}n}\right)$. Additional related problems from the domains of RNA folding and context-free grammar (CFG) parsing fall under the VMT framework, and it is likely that D-discreteness can be exploited to accelerate the computation of more problems within this family.
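To make the D-discreteness property concrete, the sketch below (our own minimal illustration, not the accelerated algorithm of [14] or of this paper) computes the standard base-pairing maximization DP and checks that row-adjacent entries differ by 0 or 1:

```python
def base_pair_dp(s):
    """Base-pairing maximization DP: M[i][j] = maximum number of
    (A-U, C-G) base pairs in the subsequence s[i..j]."""
    pair = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}
    n = len(s)
    M = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = M[i][j - 1]  # position j left unpaired
            for k in range(i, j):  # try pairing s[k] with s[j]
                if (s[k], s[j]) in pair:
                    left = M[i][k - 1] if k > i else 0
                    right = M[k + 1][j - 1] if k + 1 <= j - 1 else 0
                    best = max(best, left + right + 1)
            M[i][j] = best
    return M

M = base_pair_dp("GGGAAACCC")
# D-discreteness: adjacent entries in each row differ by 0 or 1
assert all(M[i][j] - M[i][j - 1] in (0, 1)
           for i in range(len(M)) for j in range(i + 1, len(M)))
```

The accelerated algorithms exploit exactly this bounded-difference structure, encoding blocks of rows and columns compactly in lookup tables.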

## Additional acceleration using run-length encoding

Let w be a string. A maximal substring of w containing multiple repeats of the same letter is called a run in w. The Run-Length Encoding (RLE) of w is a representation of the string in which each run is encoded by the corresponding repeating letter α and its repeat count p (denoted $\alpha^{p}$). For example, the string w = aabbbaccc is a concatenation of the four runs aa, bbb, a, and ccc, and its RLE is $a^{2}b^{3}a^{1}c^{3}$. Denote by $\tilde{w}$ the compressed form of w, which replaces each run in w by a single occurrence of the corresponding letter. When n denotes the length of w, $ñ$ will denote the length of the compressed form of w. The run index $\tilde{\imath}$ of a letter $w_{i}$ in w is the index of the run in which $w_{i}$ participates. It can be asserted that the compressed form of the substring $w_{i,j}$ of w is the substring ${\tilde{w}}_{\tilde{\imath},\widetilde{(j-1)}+1}$ of $\tilde{w}$. In the above example, $\tilde{w}=\mathit{abac}$, and therefore $ñ=4$ (while n = 9). The run indices of all letters in w are given by the sequence [0,0,1,1,1,2,3,3,3], and the compressed form of $w_{3,8}=\mathit{bbacc}$ is ${\tilde{w}}_{\tilde{3},\tilde{7}+1}={\tilde{w}}_{1,4}=\mathit{bac}$.
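These definitions translate directly into a single-pass computation (a minimal Python sketch; the function name `rle` is ours):

```python
def rle(w):
    """Return the RLE of w as (letter, repeat-count) pairs,
    the compressed form, and the run index of every position."""
    runs, run_index = [], []
    for ch in w:
        if not runs or runs[-1][0] != ch:
            runs.append([ch, 0])
        runs[-1][1] += 1
        run_index.append(len(runs) - 1)
    compressed = "".join(ch for ch, _ in runs)
    return runs, compressed, run_index

# The example from the text: w = aabbbaccc
runs, w_tilde, ri = rle("aabbbaccc")
# runs == [['a', 2], ['b', 3], ['a', 1], ['c', 3]]   (the RLE a2 b3 a1 c3)
# w_tilde == "abac", ri == [0, 0, 1, 1, 1, 2, 3, 3, 3]
```

The compressed form of a substring w_{i,j} is then the slice of the compressed string between the run indices of positions i and j-1, as stated above.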

Previous works [7-9] showed how RLE can be exploited for improving the efficiency of EDDC algorithms. In these works it was required that the costs of duplications and contractions be less than the costs of all other operations (the requirement was implicit in [9]; see the discussion in Section “A comparison with previous works”). This requirement is somewhat unnatural for the application of minisatellite map comparison, since it assumes that mutations, which are typically common events, should cost more than the less common events of duplications and contractions. In this section, we adapt a similar RLE-based acceleration to our EDDC algorithm. The application of this acceleration requires the following constraint over cost functions:

### Constraint 1.

For every $\alpha,\beta\in\Sigma$, $\mathit{dup}(\alpha)\le \mathit{dup}(\beta)+\mathit{mut}(\beta,\alpha)\le \mathit{ins}(\alpha)$, and $\mathit{cont}(\alpha)\le \mathit{cont}(\beta)+\mathit{mut}(\alpha,\beta)\le \mathit{del}(\alpha)$.

The constraint $\mathit{dup}(\beta)+\mathit{mut}(\beta,\alpha)\le \mathit{ins}(\alpha)$ implies that it never costs more to replace an insertion of some letter α into some nonempty string with the duplication of a letter β adjacent to the insertion position, followed by its mutation into α. Thus, we may assume w.l.o.g. that optimal edit scripts do not contain insertions (unless applied to empty strings); in other words, new letters can be generated only via duplications. Such an assumption is relatively reasonable in the context of minisatellite map comparison, considering the biological mechanisms that underlie generative modifications.

The constraint $\mathit{dup}(\alpha)\le \mathit{dup}(\beta)+\mathit{mut}(\beta,\alpha)$ can be understood intuitively through the example of generating a string of the form ααβ from a string of the form αβ. Due to the constraint, it costs the same or less to obtain ααβ by duplicating the letter α in αβ than by duplicating the letter β and mutating its left copy into α. Again, such an assumption is relatively reasonable for the minisatellite map application. Symmetric arguments hold with respect to the constraint over contraction and deletion costs.
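For a concrete cost function, Constraint 1 can be verified mechanically; a sketch with illustrative (hypothetical) cost tables:

```python
def satisfies_constraint_1(sigma, dup, cont, ins, dele, mut):
    """Check Constraint 1: for all a, b in sigma,
    dup(a) <= dup(b) + mut(b, a) <= ins(a)  and
    cont(a) <= cont(b) + mut(a, b) <= del(a)."""
    return all(
        dup[a] <= dup[b] + mut[b][a] <= ins[a]
        and cont[a] <= cont[b] + mut[a][b] <= dele[a]
        for a in sigma for b in sigma
    )

# Illustrative costs (not from the text): cheap duplications/contractions,
# moderate mutations, expensive insertions/deletions.
sigma = "ab"
dup = {"a": 1, "b": 1}
cont = {"a": 1, "b": 1}
ins = {"a": 5, "b": 5}
dele = {"a": 5, "b": 5}
mut = {"a": {"a": 0, "b": 2}, "b": {"a": 2, "b": 0}}
# e.g. dup(a)=1 <= dup(b)+mut(b,a)=3 <= ins(a)=5
```

Note that the check is quadratic in $|\Sigma|$ and needs to be performed only once, before running the RLE-accelerated algorithm.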

### Observation 4.

Let s and w be strings. Then, $\mathit{ed}(s,w\beta\beta)\le \mathit{ed}(s,w\beta)+\mathit{dup}(\beta)$ and $\mathit{ed}(s,\beta\beta w)\le \mathit{ed}(s,\beta w)+\mathit{dup}(\beta)$ for every $\beta\in\Sigma$.

The correctness of Observation 4 follows from the existence of a script from s to w β β whose cost is ed (s,w β) + dup (β): this script first applies an optimal script to transform s into w β at cost ed (s,w β), and then duplicates the last β in w β at cost dup (β).

### Lemma 5.

Let α,β be letters and w ≠ ε a string. When Constraint 1 holds, $\mathit{ed}(\alpha,\beta w),\mathit{ed}(\alpha,w\beta)\ge \mathit{ed}(\alpha,w)+\mathit{dup}(\beta)$, and $\mathit{ed}(\beta w,\alpha),\mathit{ed}(w\beta,\alpha)\ge \mathit{ed}(w,\alpha)+\mathit{cont}(\beta)$.

The proof of Lemma 5 appears in Appendix “Proofs to lemmas corresponding to the run-length encoding based EDDC algorithm”.

Next, we show how to reduce the number of expressions that need to be considered in the EDDC recursive equations when Constraint 1 applies. For a string w of length at least 2, denote by $R(w)\subseteq P(w)$ the set of all partitions $(w^{a},w^{b})$ of w such that the last letter of $w^{a}$ differs from the first letter of $w^{b}$. For example, for w = aabbbcdddd, $R(w)=\{(\mathit{aa},\mathit{bbbcdddd}),(\mathit{aabbb},\mathit{cdddd}),(\mathit{aabbbc},\mathit{dddd})\}$. Observe that $|R(w)|=ñ-1$.
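Since the cut points of R(w) are exactly the run boundaries, the set can be enumerated in linear time (a minimal sketch of ours):

```python
def run_boundary_partitions(w):
    """R(w): partitions (wa, wb) of w in which wa's last letter differs
    from wb's first letter, i.e. cuts at run boundaries."""
    return [(w[:i], w[i:]) for i in range(1, len(w)) if w[i - 1] != w[i]]

R = run_boundary_partitions("aabbbcdddd")
# [('aa', 'bbbcdddd'), ('aabbb', 'cdddd'), ('aabbbc', 'dddd')]
# |R(w)| equals the number of runs minus one: 4 - 1 = 3
```

Restricting the recursive equations from P(w) to R(w) is what replaces an O(n) minimization by an O(ñ) one in the accelerated recurrences below.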

We start by describing how to improve the efficiency of computing EDDC in the case where one of the input strings contains a single letter. Denote by $\mathit{dupcost}(w)$ the cost of the edit script from $\tilde{w}$ to w which generates each run $\alpha^{p}$ in w by applying p − 1 duplication operations to the corresponding letter α in $\tilde{w}$. Similarly, denote by $\mathit{contcost}(w)$ the cost of the edit script from w to $\tilde{w}$ which reduces each run $\alpha^{p}$ in w by applying p − 1 contraction operations to α. For example, for w = aabbbbaaccc, $\mathit{dupcost}(w)=2\,\mathit{dup}(a)+3\,\mathit{dup}(b)+2\,\mathit{dup}(c)$ and $\mathit{contcost}(w)=2\,\mathit{cont}(a)+3\,\mathit{cont}(b)+2\,\mathit{cont}(c)$. Note that $\mathit{dupcost}(w)\ge \mathit{ed}(\tilde{w},w)$ and $\mathit{contcost}(w)\ge \mathit{ed}(w,\tilde{w})$. It is simple to assert the following recursive relations:

$\mathit{dupcost}(w\beta)=\left\{\begin{array}{ll}\mathit{dupcost}(w)+\mathit{dup}(\beta), & w\ \text{ends with}\ \beta,\\ \mathit{dupcost}(w), & \text{otherwise}.\end{array}\right.$
(17)
$\mathit{dupcost}(w)=\left\{\begin{array}{ll}\mathit{dupcost}(w^{a})+\mathit{dupcost}(w^{b}), & (w^{a},w^{b})\in R(w),\\ \mathit{dupcost}(w^{a}\beta)+\mathit{dupcost}(\beta w^{b})+\mathit{dup}(\beta), & (w^{a}\beta,\beta w^{b})\in P(w).\end{array}\right.$
(18)
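The definition of dupcost and the left-to-right recurrence of Equation 17 can be cross-checked with a few lines of Python (the `dup` cost table below is illustrative; contcost is identical with a `cont` table):

```python
from itertools import groupby

def dupcost(w, dup):
    """Cost of generating each run a^p of w from a single a
    via p - 1 duplications."""
    return sum((len(list(g)) - 1) * dup[a] for a, g in groupby(w))

def dupcost_incremental(w, dup):
    """Equation 17: extend dupcost letter by letter; a duplication cost
    is added exactly when the new letter repeats the previous one."""
    total, prev = 0, None
    for ch in w:
        if ch == prev:
            total += dup[ch]
        prev = ch
    return total

dup = {"a": 2, "b": 3, "c": 5}
# w = aabbbbaaccc: dupcost = 2 dup(a) + 3 dup(b) + 2 dup(c)
```

The incremental form is what Algorithm 3 uses to fill the vector DC in a single pass over the target string.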

The following lemma shows that when one of the input strings contains a single letter, the edit distance can be inferred from the edit distance between this letter and the compressed form of the second string.

### Lemma 6.

Let α be a letter and w a string. When Constraint 1 holds, $\mathit{ed}(\alpha,w)=\mathit{ed}(\alpha,\tilde{w})+\mathit{dupcost}(w)$, and $\mathit{ed}(w,\alpha)=\mathit{contcost}(w)+\mathit{ed}(\tilde{w},\alpha)$.

The following lemma shows that the cost of a given edit script from a string u is greater than or equal to the cost of applying the same script to u in place, within a superstring of u.

For a string s of the form $s=s^{a}us^{b}$ and an edit script $\mathcal{ES}=\langle u=u^{0},u^{1},\dots,u^{r}=w\rangle$ from u to w, denote by $\mathcal{ES}(s)$ the edit script $\mathcal{ES}(s)=\langle s=s^{a}us^{b}=s^{a}u^{0}s^{b},s^{a}u^{1}s^{b},\dots,s^{a}u^{r}s^{b}=s^{a}ws^{b}\rangle$ from $s=s^{a}us^{b}$ to $t=s^{a}ws^{b}$.

### Lemma 7.

For $s=s^{a}us^{b}$ and $\mathcal{ES}=\langle u=u^{0},u^{1},\dots,u^{r}=w\rangle$, $\mathit{cost}(\mathcal{ES}(s))\le \mathit{cost}(\mathcal{ES})$.

The proofs of Lemma 6 and Lemma 7 appear in the Appendix.

Equations 17 and 18 and Lemma 6 support the following preprocessing algorithm, Algorithm 3. Given a target string t, Algorithm 3 generates data structures that enable retrieving in constant time values of the form $\mathit{ed}(\alpha,t_{i,j})$ for every $\alpha\in\Sigma$ and every substring $t_{i,j}$ of t. The algorithm generates tables of the form ${\tilde{T}}^{\alpha}$ for every $\alpha\in\Sigma$, such that entry ${\tilde{T}}^{\alpha}[i,j]$ contains the corresponding value $\mathit{ed}(\alpha,{\tilde{t}}_{i,j})$. In addition, the algorithm generates a vector DC, such that entry DC[j] contains the corresponding value $\mathit{dupcost}(t_{0,j})$. Then, queries of the form $\mathit{ed}(\alpha,t_{i,j})$ can be answered in constant time according to Equation 19 below.

$\mathit{ed}(\alpha,t_{i,j})\stackrel{\text{Lem.6, Eq.18}}{=}{\tilde{T}}^{\alpha}\left[\tilde{\imath},\widetilde{(j-1)}+1\right]+\left\{\begin{array}{ll}\mathit{DC}[j]-\mathit{DC}[i], & t_{i-1}\ne t_{i},\\ \mathit{DC}[j]-\mathit{DC}[i]-\mathit{dup}(t_{i}), & \text{otherwise}.\end{array}\right.$
(19)
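A sketch of the DC component of this query scheme (the T̃α term is omitted; `build_dc` and `dupcost_substring` are our illustrative names, and the `dup` table is hypothetical):

```python
def build_dc(t, dup):
    """DC[j] = dupcost(t[0:j]); DC has length len(t) + 1."""
    dc = [0]
    for j in range(1, len(t) + 1):
        # A duplication is charged when t[j-1] extends the previous run.
        extra = dup[t[j - 1]] if j >= 2 and t[j - 1] == t[j - 2] else 0
        dc.append(dc[-1] + extra)
    return dc

def dupcost_substring(t, dc, dup, i, j):
    """dupcost(t[i:j]) in O(1), via the correction term of Equation 19:
    when position i falls mid-run, one duplication was double-counted."""
    if i == 0 or t[i - 1] != t[i]:
        return dc[j] - dc[i]
    return dc[j] - dc[i] - dup[t[i]]
```

The mid-run correction comes from Equation 18: splitting a run charges one extra dup(β), which the prefix difference DC[j] − DC[i] therefore overcounts.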

### Algorithm 3 RL-LETTER-TO-STRING(t)

An algorithm which is symmetric to Algorithm 3 can be described in order to preprocess a string s for queries of the form ed (si,j,α).

We continue to describe the improved computation in the case where both input strings s and t are of length at least 2. To do so, we first add some auxiliary notation. For an interval Ix,y of positions within a string w, denote by ${Ĩ}_{x,y}$ the subsequence of indices in Ix,y which are start positions of runs in w. For example, for w = aabbbaacccc, the interval I1,7 = [ 1,2,…,6] contains all positions of letters within the substring w1,7 = abbbaa, and ${Ĩ}_{1,7}=\left[\phantom{\rule{0.3em}{0ex}}2,5\right]$ contains the start positions in w of the runs bbb and aa that are included in I1,7 (the first letter a in w1,7 belongs to a run in w that starts in position 0, and therefore position 1 is not included in ${Ĩ}_{1,7}$). This notation will be used for defining subsequences of rows and columns in DP matrices maintained by the algorithm, where some of these intervals are derived from the source string s, and some from the target string t. We will assume that the string from which ${Ĩ}_{x,y}$ was derived is clear from the context, and will not specify it explicitly. For example, when ${Ĩ}_{x,y}$ defines rows in matrices ED or EDTα, or either rows or columns in matrices Sα, then the indices in ${Ĩ}_{x,y}$ are derived from the source string s. When ${Ĩ}_{x,y}$ defines columns in matrices ED or EDTα, or either rows or columns in matrices Tα, then the indices in ${Ĩ}_{x,y}$ are derived from the target string t. Subsequences ${Ĩ}_{x,y}$ will be used for defining sparse regions in matrices, i.e. regions containing sets of rows or columns which are not necessarily adjacent.
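The subsequence Ĩx,y can be extracted in a single scan (a minimal sketch of ours, matching the example above):

```python
def run_start_positions(w, x, y):
    """Ĩ_{x,y}: indices i in the interval I_{x,y} = [x, ..., y-1] that are
    the first position of a run in w (i == 0 or w[i] != w[i-1])."""
    return [i for i in range(x, y) if i == 0 or w[i] != w[i - 1]]

# Example from the text: w = aabbbaacccc
# run_start_positions("aabbbaacccc", 1, 7) == [2, 5]
```

Note that run membership is decided with respect to the whole string w, not the substring, which is why position 1 of the example is excluded.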

Consider the computation of ed (s,t) as expressed in Equation 8. Assume first the special case where s ends with a run of length at least 2. In this case, s is of the form s = w β β for some string w and a letter β. For every partition (ta,tb) of t, it is possible to combine an optimal script ${\mathcal{E}\mathcal{S}}^{1}$ from the prefix w β of s to ta and an optimal script ${\mathcal{E}\mathcal{S}}^{2}$ from the suffix β of s to tb, and to obtain a script $\mathcal{E}\mathcal{S}=〈{\mathcal{E}\mathcal{S}}^{1}\left(w\beta \beta \right),{\mathcal{E}\mathcal{S}}^{2}\left({t}^{a}\beta \right)〉$ from s to t. Therefore, $\mathit{\text{ed}}\left(w\beta \beta ,t\right)\le \mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{1}\left(w\beta \beta \right)\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{2}\left({t}^{a}\beta \right)\right)\stackrel{\text{Lem.7}}{\le }\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{1}\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{2}\right)=\phantom{\rule{0.3em}{0ex}}\mathit{\text{ed}}\left(w\beta ,{t}^{a}\right)+\phantom{\rule{0.3em}{0ex}}\mathit{\text{ed}}\left(\beta ,{t}^{b}\right)$. In particular, $\mathit{\text{ed}}\left(w\beta \beta ,t\right)\le \text{min}\left\{\mathit{\text{ed}}\left(w\beta ,{t}^{a}\right)+\mathit{\text{ed}}\left(\beta ,{t}^{b}\right)\phantom{\rule{1em}{0ex}}|\phantom{\rule{1em}{0ex}}\left({t}^{a},{t}^{b}\right)\in P\left(t\right)\right\}\stackrel{\text{Eq.9}}{=}\mathit{\text{ed}}{t}^{\beta }\left(w\beta ,t\right)$. In addition, it is possible to compose an edit script from w β β to t by first contracting the last two letters to obtain the string w β, and then applying an optimal script from w β to t. The cost of such a script is ed (w β,t) + cont (β), and therefore we get that ed (w β β,t)≤ min{edtβ(w β,t),ed (w β,t) + cont (β)}.

Next, we show that $\mathit{ed}(w\beta\beta,t)\ge \min\{\mathit{edt}^{\beta}(w\beta,t),\ \mathit{ed}(w\beta,t)+\mathit{cont}(\beta)\}$. From Equation 8, either $\mathit{ed}(w\beta\beta,t)=\mathit{ed}(w\beta\beta,\alpha)+\mathit{ed}(\alpha,t)$ or $\mathit{ed}(w\beta\beta,t)=\mathit{edt}^{\alpha}(s^{a},t)+\mathit{ed}(s^{b},\alpha)$ for some $\alpha\in\Sigma$ and $(s^{a},s^{b})\in P(w\beta\beta)$. Consider first the latter case. If $(s^{a},s^{b})=(w\beta,\beta)$, then

$\begin{array}{ll}\mathit{ed}(w\beta\beta,t)&=\mathit{edt}^{\alpha}(w\beta,t)+\mathit{ed}(\beta,\alpha)\\ &\stackrel{\text{Eq.9}}{=}\min\left\{\mathit{ed}(w\beta,t^{a})+\mathit{ed}(\alpha,t^{b}) \;\middle|\; (t^{a},t^{b})\in P(t)\right\}+\mathit{ed}(\beta,\alpha)\\ &\stackrel{\text{Obs.5}}{\ge}\min\left\{\mathit{ed}(w\beta,t^{a})+\mathit{ed}(\beta,t^{b}) \;\middle|\; (t^{a},t^{b})\in P(t)\right\}\\ &\stackrel{\text{Eq.9}}{=}\mathit{edt}^{\beta}(w\beta,t).\end{array}$

Else, $s^{b}$ is of length at least 2, and there is some string u such that $s^{b}=u\beta\beta$ and $w=s^{a}u$. In this case, $\mathit{ed}(w\beta\beta,t)=\mathit{edt}^{\alpha}(s^{a},t)+\mathit{ed}(u\beta\beta,\alpha)\stackrel{\text{Lem.5}}{\ge}\mathit{edt}^{\alpha}(s^{a},t)+\mathit{ed}(u\beta,\alpha)+\mathit{cont}(\beta)\stackrel{\text{Eq.8}}{\ge}\mathit{ed}(w\beta,t)+\mathit{cont}(\beta)$. Similarly, it can be shown that when $\mathit{ed}(w\beta\beta,t)=\mathit{ed}(w\beta\beta,\alpha)+\mathit{ed}(\alpha,t)$ for some $\alpha\in\Sigma$, $\mathit{ed}(w\beta\beta,t)\ge \mathit{ed}(w\beta,t)+\mathit{cont}(\beta)$, and so $\mathit{ed}(w\beta\beta,t)\ge \min\{\mathit{edt}^{\beta}(w\beta,t),\ \mathit{ed}(w\beta,t)+\mathit{cont}(\beta)\}$. Thus,

$\mathit{ed}(w\beta\beta,t)=\min\left\{\mathit{edt}^{\beta}(w\beta,t),\ \mathit{ed}(w\beta,t)+\mathit{cont}(\beta)\right\}$
(20)

Formulating Equation 20 with respect to the data structures defined in Section “A baseline dynamic-programming algorithm for EDDC” (under the assumption that all values appearing at the right-hand side of the equation are computed and stored in the corresponding entries), we get the following equation:

(21)

Now, consider the case where the last run in s is of length 1 (i.e. s is not of the form wββ). Assume first that the term that yields the minimum value of the right-hand side of Equation 8 is of the form $\mathit{edt}^{\alpha}(s^{a},t)+\mathit{ed}(s^{b},\alpha)$ for some partition $(s^{a},s^{b})\in P(s)$ and a letter $\alpha\in\Sigma$. If $(s^{a},s^{b})\notin R(s)$, then there is some letter $\beta\in\Sigma$ which is both the last letter of $s^{a}$ and the first letter of $s^{b}$. In this case, we can write $s^{a}=w\beta$ and $s^{b}=\beta u$. Note that u ≠ ε (since s ≠ wββ by definition), and so $\mathit{ed}(s,t)=\mathit{edt}^{\alpha}(w\beta,t)+\mathit{ed}(\beta u,\alpha)\stackrel{\text{Eq.9}}{=}\min\left\{\mathit{ed}(w\beta,t^{a})+\mathit{ed}(\alpha,t^{b}) \;\middle|\; (t^{a},t^{b})\in P(t)\right\}+\mathit{ed}(\beta u,\alpha)\stackrel{\text{Lem.5, Obs.4}}{\ge}\min\left\{\left(\mathit{ed}(w\beta\beta,t^{a})-\mathit{dup}(\beta)\right)+\mathit{ed}(\alpha,t^{b}) \;\middle|\; (t^{a},t^{b})\in P(t)\right\}+\left(\mathit{ed}(u,\alpha)+\mathit{dup}(\beta)\right)=\min\left\{\mathit{ed}(w\beta\beta,t^{a})+\mathit{ed}(\alpha,t^{b}) \;\middle|\; (t^{a},t^{b})\in P(t)\right\}+\mathit{ed}(u,\alpha)\stackrel{\text{Eq.9}}{=}\mathit{edt}^{\alpha}(w\beta\beta,t)+\mathit{ed}(u,\alpha)$. From the optimality of the partition $(s^{a},s^{b})=(w\beta,\beta u)$, it follows that $\mathit{ed}(s,t)=\mathit{edt}^{\alpha}(w\beta\beta,t)+\mathit{ed}(u,\alpha)$.
If u starts with β, this step can be repeated; inductively, such partition refinements can be applied until obtaining a partition $(s^{a},s^{b})$ of s such that $\mathit{ed}(s,t)=\mathit{edt}^{\alpha}(s^{a},t)+\mathit{ed}(s^{b},\alpha)$ and $(s^{a},s^{b})\in R(s)$. Now, Equation 8 can be revised as follows:

$\mathit{ed}(s,t)=\min\left\{\begin{array}{l}\mathit{ed}(s,\alpha)+\mathit{ed}(\alpha,t),\\ \mathit{edt}^{\alpha}(s^{a},t)+\mathit{ed}(s^{b},\alpha)\end{array}\;\middle|\;\begin{array}{l}(s^{a},s^{b})\in R(s),\\ \alpha\in\Sigma\end{array}\right\}\quad(\text{when}\ s\ \text{is not of the form}\ w\beta\beta)$
(22)

Using the DP formulation, we get

(23)

As shown for the computation of $\mathit{ed}(s,t)$, it is possible to revise the computation of $\mathit{edt}^{\alpha}(s,t)$ under Constraint 1 and obtain the following equations:

$\mathit{edt}^{\alpha}(s,w\beta\beta)=\min\left\{\begin{array}{l}\mathit{edt}^{\alpha}(s,w\beta)+\mathit{dup}(\beta),\\ \mathit{ed}(s,w\beta)+\mathit{mut}(\alpha,\beta)\end{array}\right\}$
(24)
$\mathit{edt}^{\alpha}(s_{0,i},t_{0,j})=\min\left\{\begin{array}{l}\mathit{EDT}^{\alpha}[i,j-1]+\mathit{dup}(t_{j-1}),\\ \mathit{ED}[i,j-1]+\mathit{mut}(\alpha,t_{j-1})\end{array}\right\}\quad(\text{when}\ t_{j-1}=t_{j-2})$
(25)
$\mathit{edt}^{\alpha}(s,t)=\min\left\{\mathit{ed}(s,t^{a})+\mathit{ed}(\alpha,t^{b}) \;\middle|\; (t^{a},t^{b})\in R(t)\right\}\quad(\text{when}\ t\ \text{is not of the form}\ w\beta\beta)$
(26)
$\mathit{edt}^{\alpha}(s_{0,i},t_{0,j})=\min\left\{\mathit{ED}[i,h]+T^{\alpha}[h,j] \;\middle|\; h\in \tilde{I}_{0,j}\right\}=\mathit{ED}\left[i,\tilde{I}_{0,j}\right]\otimes T^{\alpha}\left[\tilde{I}_{0,j},j\right]\quad(\text{when}\ t_{j-1}\ne t_{j-2})$
(27)

Finally, we present Algorithm 4, an efficient version of Algorithm 2. Stage 1 of the new algorithm is accelerated by using Algorithm 3 to compute, for every $\alpha\in\Sigma$, the distances from α to all substrings of t and the distances from all substrings of s to α. In Stage 2, we use the equations developed above to accelerate the computation. The correctness of the algorithm follows from the correctness of the recursive equations, and can be asserted similarly to Algorithm 2.

### Algorithm 4 RL-MATRIX-EDDC(s,t)

#### Complexity analysis

Assume for simplicity that the compressed forms of both input strings s and t have the same length $ñ$.

##### Algorithm 3.

The running time of line 1 of the algorithm is O (n) for computing the compressed form of the input string, and $O\left(|\Sigma |\mathit{\text{MP}}\left(ñ\right)\right)$ for running Stage 1 of Algorithm 2 over this compressed string. Lines 2 and 3 require O (n) time, and so the overall time complexity of Algorithm 3 is $O\left(n+|\Sigma |\mathit{\text{MP}}\left(ñ\right)\right)$. The space complexity for computing and maintaining all matrices is $O\left(|\Sigma |{ñ}^{2}\right)$, and an additional O (n) space is required for the vector DC. Hence, the overall space complexity of the algorithm is $O\left(n+|\Sigma |{ñ}^{2}\right)$ (see Section “Time complexity analysis” for complexity analysis of Stage 1 of Algorithm 2).

##### Algorithm 4.

Time and space complexities of Stage 1 of the algorithm are identical to those of Algorithm 3. As in Section “The algorithm”, the computations governing the running time of Stage 2 are those of matrix multiplications performed within recursive calls to RL-COMPUTE-MATRIX.

The recursive computation of RL-COMPUTE-MATRIX can be visualized as a tree (see Figure 4). Each node in the tree corresponds to a call to RL-COMPUTE-MATRIX over some region $I_{i,k}\times I_{j,l}$, which is a leaf in case k = i + 1 and l = j + 1, and an internal node otherwise. In the latter case, the node has exactly two children, corresponding to the two recursive calls obtained from either a vertical (lines 11 and 13) or a horizontal (lines 16 and 18) partition of the region. For simplicity, assume that $ñ=\left|\tilde{I}_{2,n+1}\right|=2^{x}$ for some integer x. It can be observed that the algorithm alternates between vertical and horizontal partitions along paths from the root of the tree, where the regions of two different nodes at the same depth y are disjoint, and the union of all regions of nodes at depth y covers the entire initial region $I_{2,n+1}\times I_{2,n+1}$ of the root node. For every $0\le y\le x$, there are two series of intervals $I^{y,0},I^{y,1},\dots,I^{y,2^{y}-1}$ and $J^{y,0},J^{y,1},\dots,J^{y,2^{y}-1}$, such that the set of regions corresponding to all nodes at depth 2y is $\{I^{y,f}\times J^{y,g} \mid 0\le f<2^{y},\ 0\le g<2^{y}\}$, and the set of regions corresponding to all nodes at depth 2y + 1 is $\{I^{y,f}\times J^{y+1,g} \mid 0\le f<2^{y},\ 0\le g<2^{y+1}\}$. In addition, the corresponding subsequences $\tilde{I}^{y,0},\dots,\tilde{I}^{y,2^{y}-1}$ and $\tilde{J}^{y,0},\dots,\tilde{J}^{y,2^{y}-1}$ all have the same size $2^{x-y}$.

Consider a node of depth 2y whose corresponding region is $I^{y,f}\times J^{y,g}$, and the two regions $I^{y,f}\times J^{y+1,2g}$ and $I^{y,f}\times J^{y+1,2g+1}$ corresponding to its children. The computation time spent on the node is dominated by the matrix multiplications performed in line 12 of RL-COMPUTE-MATRIX. These include $|\Sigma|$ matrix multiplications between pairs of matrices such that the dimensions of the first matrix in each pair are $\left|I^{y,f}\right|\times\left|\tilde{J}^{y+1,2g}\right|=\left|I^{y,f}\right|\times 2^{x-y-1}$, and the dimensions of the second matrix are $\left|\tilde{J}^{y+1,2g}\right|\times\left|\tilde{J}^{y+1,2g+1}\right|=2^{x-y-1}\times 2^{x-y-1}$. Observe that $\left|I^{y,f}\right|\ge 2^{x-y-1}$, and such multiplications can be implemented by dividing the interval $I^{y,f}$ into $\frac{\left|I^{y,f}\right|}{2^{x-y-1}}$ intervals of length $2^{x-y-1}$ each, and performing $\frac{\left|I^{y,f}\right|}{2^{x-y-1}}$ multiplications between square matrices of dimensions $2^{x-y-1}\times 2^{x-y-1}$, in total time $\frac{\left|I^{y,f}\right|}{2^{x-y-1}}\mathit{MP}\left(2^{x-y-1}\right)$. Therefore, the time required for all matrix multiplications performed within nodes at depth 2y is

$\begin{array}{ll}\sum_{0\le f<2^{y}}\sum_{0\le g<2^{y}}\frac{|\Sigma|\left|I^{y,f}\right|\mathit{MP}\left(2^{x-y-1}\right)}{2^{x-y-1}}&=\sum_{0\le f<2^{y}}\frac{2^{y}|\Sigma|\left|I^{y,f}\right|\mathit{MP}\left(2^{x-y-1}\right)}{2^{x-y-1}}\\&=\frac{2^{2y+1}|\Sigma|\,n\,\mathit{MP}\left(2^{x-y-1}\right)}{ñ},\end{array}$

using $\sum_{0\le f<2^{y}}\left|I^{y,f}\right|=n$ and $2^{x}=ñ$.

Similarly, it is possible to show that the total time required for all matrix multiplications performed within nodes at depth 2y + 1 is also $\frac{2^{2y+1}|\Sigma|n\,\mathit{MP}\left(2^{x-y-1}\right)}{ñ}$, and so the total computation time of matrix multiplications throughout the entire algorithm run is $O\left(\frac{|\Sigma|n}{ñ}\sum_{0\le y<x}2^{2y+1}\mathit{MP}\left(2^{x-y-1}\right)\right)$. As in Section “The algorithm”, using the Master Theorem [16], this summation evaluates to $O\left(\frac{|\Sigma|n\,\mathit{MP}\left(ñ\right)}{ñ}\right)$. In addition to matrix multiplications, RL-COMPUTE-MATRIX performs $O(|\Sigma|n^{2})$ operations in base computations (lines 2-7), and so the total time complexity of the complete algorithm is $O\left(|\Sigma|n^{2}+\frac{|\Sigma|n\,\mathit{MP}\left(ñ\right)}{ñ}\right)$.
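The rectangular multiplications described above reduce to square ones by slicing the row interval into blocks. In the sketch below (our own illustration), a naive cubic `min_plus` routine stands in for the fast MP subroutine:

```python
def min_plus(A, B):
    """Naive min-plus product of a (p x m) matrix A with an (m x r) matrix B."""
    p, m, r = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(r)]
            for i in range(p)]

def min_plus_by_row_blocks(A, B, q):
    """Multiply a (p x q) matrix A with a (q x q) matrix B by performing
    p/q min-plus products of square q x q blocks (p a multiple of q)."""
    C = []
    for top in range(0, len(A), q):
        C.extend(min_plus(A[top:top + q], B))  # one square-sized product
    return C
```

Replacing the inner call with a fast square-matrix routine yields the $\frac{|I^{y,f}|}{2^{x-y-1}}\mathit{MP}(2^{x-y-1})$ bound used in the analysis.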

A simple implementation of Algorithm 4 uses the same $O(|\Sigma|n^{2})$ space as Algorithm 2. A more involved implementation follows from observing that the algorithm only examines and updates entries in matrices of dimensions at most $n\times ñ$ or $ñ\times n$ when performing matrix multiplications, and in addition examines adjacent entries “to the left of” or “above” an entry in a base-case region. This observation can be used to reduce the space complexity of the algorithm to $O(|\Sigma|nñ)$; the complete details of such an implementation are omitted from this text.

## A comparison with previous works

In this section, we review the previous main algorithms for EDDC by Behzadi and Steyaert [7], Bérard et al. [8] and Abouelhoda et al. [9], and point out similarities and improvements made in our current work.

The main contribution of our work is in obtaining sub-cubic algorithms for EDDC, whereas all previous algorithms have cubic time complexities (for |Σ| the alphabet size, n the length of the input strings and $ñ$ the length of their RLE compressed forms, the algorithms of [7], [8], and [9] obtain the time complexities $O\left({n}^{2}+n{ñ}^{2}+|\Sigma |{ñ}^{3}+|\Sigma {|}^{2}{ñ}^{2}\right)$, $O\left({n}^{3}+|\Sigma |{ñ}^{3}\right)$, and $O\left({n}^{2}+n{ñ}^{2}\right)$, respectively).

Notably, the algorithm of [9] eliminates a |Σ| factor that appears in the time complexities of the algorithms given in [7, 8] and here. However, this improvement is confined to a constrained model of duplication histories. As we do not assume this model here, we could not use the representation of [9] that allows the elimination of the |Σ| time complexity factor.

In general, the frameworks of all algorithms in [7-9], as well as of the algorithms presented here, are similar. All these algorithms apply two phases, where the first phase computes costs corresponding to all substrings of each of the input strings separately, and the second phase uses these precomputed costs to compute the edit distance between each pair of prefixes of the input strings (our online variant described in Section “An online algorithm for EDDC using min-plus matrix-vector multiplication for discrete cost functions” interleaves these two phases, yet each operation it conducts can be conceptually attributed to one of the phases). The recursive formulas are similar as well; those for the first phase can be viewed as special kinds of weighted context-free grammar derivation rules.

Next, we address the cost function constraints. All algorithms assume that operation costs are nonnegative and apply additional assumptions similarly to those listed in our Property 1, which can be made without loss of generality.

In [8], operation costs were limited so that all duplications and contractions have the same constant cost (regardless of the letter to which they are applied), all deletions and insertions have the same constant cost, and all mutation costs are symmetric (i.e. $\mathit{mut}(\alpha,\beta)=\mathit{mut}(\beta,\alpha)$ for every $\alpha,\beta\in\Sigma$). While it was argued that these restrictions make the edit distance a metric, they limit the generality of the algorithm of [8], whereas the other previous algorithms we mentioned can handle cost functions that do not necessarily abide by these restrictions.

Both in [7] and in [8], it was required that all duplication and contraction costs be lower than every insertion, deletion, or mutation cost. This restriction is not explicitly stated in [9], yet seems to be required there as well. For the application of minisatellite map comparison, this requirement is somewhat unnatural, since it assumes that mutations, which are typically common events, should cost more than the less common events of duplications and contractions. Our algorithms can be applied even when this restriction does not hold. However, one of our algorithms, the RLE variant (Section “Additional acceleration using run-length encoding”), adds a requirement that was absent from those previous algorithms: for every $\alpha,\beta\in\Sigma$, $\mathit{dup}(\alpha)\le \mathit{dup}(\beta)+\mathit{mut}(\beta,\alpha)\le \mathit{ins}(\alpha)$ and $\mathit{cont}(\alpha)\le \mathit{cont}(\beta)+\mathit{mut}(\alpha,\beta)\le \mathit{del}(\alpha)$ (our Constraint 1). On the one hand, our Constraint 1 is stricter than the constraint of [7] and [8], in the sense that it implies nonnegative lower bounds on differences of the form $\mathit{ins}(\alpha)-\mathit{dup}(\alpha)$ and $\mathit{del}(\alpha)-\mathit{cont}(\alpha)$, while in [7] and [8] it was only required that these differences be nonnegative. On the other hand, our Constraint 1 does not require that the cost of mutations be higher than the cost of duplications and contractions.

We showed that our algorithms are more general with respect to the assumed constraints. We also claim that our algorithms are more precise with respect to the formal problem specification. All previous algorithms (excluding the first algorithm by Bérard and Rivals [2], which had an O (n4) running time and assumed a constant cost for all mutations in addition to the restrictions in [8]) might output non-optimal solutions in certain cases, as demonstrated in the following example. Consider the input s = a b, t = e f, and the cost function in which all duplications and contractions cost 1, all deletions and insertions cost 20, and the symmetric mutation costs are as given in Table 1. It can be shown that all three algorithms in [7], [8], and [9] would output the value 18 as the edit distance between the input strings, reflecting one of the edit scripts 〈ab,eb,ef〉 or 〈ab,af,ef〉. Nevertheless, the correct value is 17, due to the script 〈ab,cb,cc,c,d,dd,ed,ef〉. It may be possible to specify additional restrictions over the cost functions that would guarantee that the algorithms in [7], [8], and [9] return optimal solutions for all instances.
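The operation-by-operation accounting in this example can be checked mechanically. The sketch below is ours, for illustration only: the classifier and the uniform placeholder mutation cost of 3 are assumptions, not the actual Table 1 values. It identifies the single edit operation between consecutive strings of a script and sums the costs:

```python
def op_cost(u, v, dup=1, cont=1, ins=20, dele=20, mut=None):
    """Cost of the single edit operation transforming u into v.

    A length-preserving change at one position is a mutation; inserting a
    letter next to an equal letter is a duplication (otherwise an insertion);
    removing one of two adjacent equal letters is a contraction (otherwise a
    deletion). First-match classification is a simplification that suffices
    for the scripts considered here.
    """
    mut = mut or {}
    if len(v) == len(u) + 1:                      # generating operation
        for i, c in enumerate(v):
            if v[:i] + v[i + 1:] == u:
                if (i > 0 and v[i - 1] == c) or (i + 1 < len(v) and v[i + 1] == c):
                    return dup
                return ins
    if len(v) == len(u) - 1:                      # reducing operation
        for i, c in enumerate(u):
            if u[:i] + u[i + 1:] == v:
                if (i > 0 and u[i - 1] == c) or (i + 1 < len(u) and u[i + 1] == c):
                    return cont
                return dele
    if len(v) == len(u):                          # mutation at one position
        diff = [i for i in range(len(u)) if u[i] != v[i]]
        if len(diff) == 1:
            return mut[(u[diff[0]], v[diff[0]])]
    raise ValueError(f"{u!r} -> {v!r} is not a single edit operation")

def script_cost(script, **costs):
    """Total cost of an edit script given as a list of intermediate strings."""
    return sum(op_cost(u, v, **costs) for u, v in zip(script, script[1:]))

# The longer script from the text, under the placeholder mutation costs:
mut = {(a, b): 3 for a in "abcdef" for b in "abcdef" if a != b}
script = ["ab", "cb", "cc", "c", "d", "dd", "ed", "ef"]
print(script_cost(script, mut=mut))
```

Under this placeholder cost choice, the script decomposes into five mutations, one contraction, and one duplication; the same classification logic applies under the real Table 1 costs.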

## Conclusions and discussion

This work presents computational techniques for improving the time complexity of algorithms for the EDDC problem. We adapt the problem to the VMT framework defined in [11], which incorporates efficient matrix multiplication subroutines in order to accelerate standard dynamic programming algorithms. We describe an efficient algorithm, as well as two variants which are even more efficient, given some restrictions on the cost functions.

An additional result we give is the currently most efficient algorithm for the min-plus multiplication of D-discrete matrices (matrices for which differences between adjacent entries are integers within an interval of length D).
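For concreteness, the min-plus (tropical) product replaces (+, ×) of the ordinary matrix product with (min, +). A naive reference implementation (our own sketch; the algorithms discussed in this paper improve on these bounds):

```python
def min_plus_mat(A, B):
    """Naive O(n^3) min-plus matrix product: C[i][j] = min_k (A[i][k] + B[k][j])."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def min_plus_vec(A, x):
    """Naive O(n^2) min-plus matrix-vector product: y[i] = min_k (A[i][k] + x[k])."""
    return [min(a + b for a, b in zip(row, x)) for row in A]
```

Interpreting A[i][k] as the cost of going from state i to state k, the product composes cheapest costs; the D-discrete matrix-vector algorithm mentioned above accelerates the second operation by preprocessing A.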

We note that the running times of our algorithms depend on the alphabet size |Σ|. For the general algorithm, the running time is O (|Σ| MP (n)), where MP (n) is the time complexity of the min-plus multiplication of two n × n matrices, which is currently upper-bounded by $O\left(\frac{{n}^{3}\stackrel{3}{\text{log}}\text{log}n}{\stackrel{2}{\text{log}}n}\right)$ [12]. Some of the previous algorithms obtain alphabet-independent time complexities, for example the algorithms in [9] and [2]. As we discussed in Section “A comparison with previous works”, such algorithms do not solve the most general variant of the problem and require some assumptions on the cost function. Nevertheless, we believe that the matrix multiplication-based techniques for improving the time complexity presented in this paper can also be incorporated into the algorithm of [9]; the details of this enhancement, however, are beyond the scope of this paper.

In contrast to the work of [9], our model assumes that intermediate strings along edit scripts may contain characters which are absent from both source and target strings. This implies that the size of the alphabet |Σ| is not bounded by the length of the input sequences. In the context of minisatellite comparison, identifying a feasible alphabet and cost function for this task is an interesting problem beyond the scope of this paper.

## Appendix

### Correctness of the recursive computation

This section proves Theorem 1, thus asserting the correctness of the recursive computation for the EDDC problem given in Section “The recurrence formula”. We start by adding some required notation and showing how long edit scripts can be decomposed to shorter partial scripts. Then, we use the observed recursive properties in order to prove the correctness of the recurrence.

Let s,w,t be strings, ${\mathcal{E}\mathcal{S}}^{1}=〈s={u}^{1,0},{u}^{1,1},\dots ,{u}^{1,{r}_{1}}=w〉$ an edit script from s to w, and ${\mathcal{E}\mathcal{S}}^{2}=〈w={u}^{2,0},{u}^{2,1},\dots ,{u}^{2,{r}_{2}}=t〉$ an edit script from w to t. Denote by $\mathcal{E}\mathcal{S}=〈{\mathcal{E}\mathcal{S}}^{1},{\mathcal{E}\mathcal{S}}^{2}〉$ the concatenated edit script $\mathcal{E}\mathcal{S}=〈s\phantom{\rule{0.3em}{0ex}}=\phantom{\rule{0.3em}{0ex}}{u}^{1,0},{u}^{1,1},\dots ,{u}^{1,{r}_{1}}=w={u}^{2,0},{u}^{2,1},\dots ,{u}^{2,{r}_{2}}=t〉$ from s to t. Note that $\mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{1}\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{2}\right)$, and $\left|\mathcal{E}\mathcal{S}\right|=\left|{\mathcal{E}\mathcal{S}}^{1}\right|+\left|{\mathcal{E}\mathcal{S}}^{2}\right|$. This notation extends naturally to concatenations of more than two scripts. For example, $\mathcal{E}\mathcal{S}=〈{\mathcal{E}\mathcal{S}}^{1},{\mathcal{E}\mathcal{S}}^{2},\dots ,{\mathcal{E}\mathcal{S}}^{q}〉$ denotes an edit script from a string s to a string t obtained by a concatenation of q scripts, where each script ${\mathcal{E}\mathcal{S}}^{i}$ transforms some intermediate string wi-1 into a string wi, with s = w0 and t = wq.

#### Observation 5.

For every three strings s,w,t, ed(s,t) ≤ ed (s,w) + ed (w,t).

The correctness of the above observation follows from the fact that for a pair of optimal edit scripts ${\mathcal{E}\mathcal{S}}^{1}$ from s to w and ${\mathcal{E}\mathcal{S}}^{2}$ from w to t, the script $\mathcal{E}\mathcal{S}=〈{\mathcal{E}\mathcal{S}}^{1},{\mathcal{E}\mathcal{S}}^{2}〉$ from s to t satisfies $\mathit{\text{ed}}\left(s,t\right)\le \mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{1}\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{2}\right)=\mathit{\text{ed}}\left(s,w\right)+\mathit{\text{ed}}\left(w,t\right)$.

#### Lemma 7.

For s = sa u sb and $\mathcal{E}\mathcal{S}=〈u={u}^{0},{u}^{1},\dots ,{u}^{r}=w〉$, $\mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\left(s\right)\right)\le \mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\right)$.

#### Proof.

Each edit operation transforming sa ui sb to sa ui+1 sb in $\mathcal{E}\mathcal{S}\left(s\right)$ corresponds to an operation transforming ui to ui+1 in $\mathcal{E}\mathcal{S}$. The only cases where corresponding operations may have different costs are those of insertions or deletions in $\mathcal{E}\mathcal{S}$ at the beginning or end of ui, which become duplications or contractions in $\mathcal{E}\mathcal{S}\left(s\right)$, respectively. For example, in case the operation applied over ui in $\mathcal{E}\mathcal{S}$ is the deletion of its first letter α, and α is also the last letter of sa, then the cost of the operation in $\mathcal{E}\mathcal{S}$ is del (α) while the cost of the corresponding operation in $\mathcal{E}\mathcal{S}\left(s\right)$ is cont (α) ≤ del (α). Similar scenarios may occur in the case of an insertion at the beginning of ui of a letter identical to the last letter of sa, as well as in cases of insertions and deletions at the end of ui of letters identical to the first letter of sb. In any other case, each pair of corresponding operations has the same cost. Therefore, the cost of each operation in $\mathcal{E}\mathcal{S}\left(s\right)$ is smaller than or equal to the cost of its corresponding operation in $\mathcal{E}\mathcal{S}$, and $\mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\left(s\right)\right)\le \mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\right)$. □
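The boundary case in this proof can be illustrated concretely. In the sketch below (ours, for illustration; the helper name and the cost values are assumptions chosen to match the running example), removing a letter counts as a contraction exactly when an equal letter is adjacent, so a deletion at the edge of u becomes a cheaper contraction once u is embedded in a matching context:

```python
def reduce_cost(s, i, cont=1, dele=20):
    """Cost of removing s[i]: a contraction if an equal letter is adjacent
    (one of two copies is removed), otherwise a plain deletion."""
    c = s[i]
    if (i > 0 and s[i - 1] == c) or (i + 1 < len(s) and s[i + 1] == c):
        return cont
    return dele

# Deleting the first letter of u = "ab" in isolation is a deletion ...
u = "ab"
standalone = reduce_cost(u, 0)
# ... but embedded in s = s_a + u + s_b with s_a ending in 'a', the same
# step removes one of two adjacent copies of 'a', i.e. a contraction.
s_a, s_b = "xa", "y"
embedded = reduce_cost(s_a + u + s_b, len(s_a))
print(standalone, embedded)  # 20 1
```

This is precisely why each operation of the embedded script costs no more than its counterpart, giving cost(ES(s)) ≤ cost(ES).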

#### Lemma 8.

Let s and t be two strings, and (sa, sb) ∈ P (s), (ta, tb) ∈ P (t) partitions of s and t, respectively. Then, ed (s,t) ≤ ed (sa,ta) + ed (sb,tb).

#### Proof.

Let ${\mathcal{E}\mathcal{S}}^{a}$ be an optimal script from sa to ta and ${\mathcal{E}\mathcal{S}}^{b}$ an optimal script from sb to tb. The script ${\mathcal{E}\mathcal{S}}^{a}\left(s\right)$ is a script from s = sasb to tasb. Similarly, ${\mathcal{E}\mathcal{S}}^{b}\left({t}^{a}{s}^{b}\right)$ is a script from tasb to tatb = t. For the script $\mathcal{E}\mathcal{S}=〈{\mathcal{E}\mathcal{S}}^{a}\left(s\right),{\mathcal{E}\mathcal{S}}^{b}\left({t}^{a}{s}^{b}\right)〉$ from s to t, we have that $\mathit{\text{ed}}\left(s,t\right)\le \mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{a}\left(s\right)\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{b}\left({t}^{a}{s}^{b}\right)\right)\stackrel{\text{Lem.7}}{\le }\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{a}\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{b}\right)=\mathit{\text{ed}}\left({s}^{a},{t}^{a}\right)+\mathit{\text{ed}}\left({s}^{b},{t}^{b}\right)$. □

Let s and t be strings. Call a pair of partitions (sa, sb) ∈ P (s) and (ta, tb) ∈ P (t) an optimal pairwise partition of s and t if ed (s,t) = ed (sa, ta) + ed (sb, tb). Say that an edit script $\mathcal{E}\mathcal{S}$ from s to t is a shortest optimal script from s to t if $\mathcal{E}\mathcal{S}$ is optimal, and for every other optimal script ${\mathcal{E}\mathcal{S}}^{\prime }$ from s to t, $\left|\mathcal{E}\mathcal{S}\right|\le \left|{\mathcal{E}\mathcal{S}}^{\prime }\right|$. For a script $\mathcal{E}\mathcal{S}=〈s={u}^{0},{u}^{1},\dots ,{u}^{r}=t〉$ from s to t and 0 ≤ i ≤ j ≤ r, denote by ${\mathcal{E}\mathcal{S}}^{i,j}=〈{u}^{i},{u}^{i+1},\dots ,{u}^{j}〉$ the partial script of $\mathcal{E}\mathcal{S}$ from ui to uj.

#### Observation 6.

Let $\mathcal{E}\mathcal{S}=〈{u}^{0},{u}^{1},\dots ,{u}^{r}〉$ be a shortest optimal edit script from u0 to ur. For every 0 ≤ i ≤ j ≤ r, the partial script ${\mathcal{E}\mathcal{S}}^{i,j}$ is a shortest optimal edit script from ui to uj. Moreover, for any shortest optimal script ${\mathcal{E}\mathcal{S}}^{\ast i,j}$ from ui to uj, the script ${\mathcal{E}\mathcal{S}}^{\ast }=〈{\mathcal{E}\mathcal{S}}^{0,i},{\mathcal{E}\mathcal{S}}^{\ast i,j},{\mathcal{E}\mathcal{S}}^{j,r}〉$ is a shortest optimal script from u0 to ur.

The correctness of the above observation is obtained by noting that if ${\mathcal{E}\mathcal{S}}^{i,j}$ is not a shortest optimal script from ui to uj, then for some shortest optimal script ${\mathcal{E}\mathcal{S}}^{\ast i,j}$ from ui to uj we get that the script ${\mathcal{E}\mathcal{S}}^{\ast }=〈{\mathcal{E}\mathcal{S}}^{0,i},{\mathcal{E}\mathcal{S}}^{\ast i,j},{\mathcal{E}\mathcal{S}}^{j,r}〉$ either has a lower cost than $\mathcal{E}\mathcal{S}$, or is a shorter script of the same cost, in contradiction to $\mathcal{E}\mathcal{S}$ being a shortest optimal script from u0 to ur.

#### Lemma 9.

Let $\mathcal{E}\mathcal{S}=〈{u}^{0},{u}^{1},\dots ,{u}^{r}〉$ be a shortest optimal edit script from u0 to ur. If there are two indices 0 ≤ i < j ≤ r such that ui and uj are strings of length 1, then j = i + 1. In addition, for every 0 < k < r, uk ≠ ε.

#### Proof.

Assume there are two indices 0 ≤ i < j ≤ r such that ui and uj are strings of length 1, i.e. ui = α and uj = β for some α,β ∈ Σ. From Observation 6, the partial script ${\mathcal{E}\mathcal{S}}^{i,j}=〈\alpha ={u}^{i},{u}^{i+1},\dots ,{u}^{j}=\beta 〉$ is a shortest optimal script from α to β. Since j > i, it must be that α ≠ β (otherwise the script ${\mathcal{E}\mathcal{S}}^{\prime }=〈{\mathcal{E}\mathcal{S}}^{0,i},{\mathcal{E}\mathcal{S}}^{j,r}〉$ is a shorter script from u0 to ur of no greater cost than $\mathcal{E}\mathcal{S}$, in contradiction to $\mathcal{E}\mathcal{S}$ being a shortest optimal script from u0 to ur). From Property 1, ed (α,β) = mut (α,β), and so the edit script containing the single operation of mutating α to β is an optimal script from α to β; hence it must be that j = i + 1.

In addition, assume by contradiction that there is some index 0 < k < r such that uk = ε. The only edit operation which may yield an empty string is a deletion from a single-letter string, and therefore uk-1 = α for some letter α. Similarly, the only edit operation which may be applied over an empty string is an insertion, therefore uk+1 = β for some letter β, in contradiction to the fact that two intermediate strings of length 1 must be consecutive along a shortest optimal script, as shown above. □

Call an edit script $\mathcal{E}\mathcal{S}$ from a string s to a string t simple if $\mathcal{E}\mathcal{S}$ is a shortest optimal script from s to t, in which no generating operation precedes a reducing operation. The following lemma generalizes Lemma 2 of [6], by considering also indels in addition to contractions and duplications.

#### Lemma 10.

For every pair of strings s and t, there exists a simple edit script from s to t.

#### Proof.

Let s and t be two strings, and r the length of a shortest optimal script from s to t. When r ≤ 1, any shortest optimal script from s to t either contains no reducing operation or contains no generating operation, and in particular is a simple script. Otherwise, r > 1, and assume by induction the lemma holds for every pair of strings such that the length of a shortest optimal script from the source string to the target string is less than r. Let $\mathcal{E}\mathcal{S}=〈s={u}^{0},{u}^{1},\dots ,{u}^{r}=t〉$ be a shortest optimal script from s to t.

Case 1: The first operation in $\mathcal{E}\mathcal{S}$ is not a generating operation. From Observation 6, the partial script ${\mathcal{E}\mathcal{S}}^{1,r}$ is a shortest optimal script from u1 to ur, whose length is r - 1. From the inductive assumption, there is a simple script ${\mathcal{E}\mathcal{S}}^{\ast 1,r}$ from u1 to ur, and from Observation 6 the script ${\mathcal{E}\mathcal{S}}^{\ast }=〈{\mathcal{E}\mathcal{S}}^{0,1},{\mathcal{E}\mathcal{S}}^{\ast 1,r}〉$ is a shortest optimal script from s to t. As the first operation in ${\mathcal{E}\mathcal{S}}^{\ast }$ is non-generating (being the same first operation as in $\mathcal{E}\mathcal{S}$), ${\mathcal{E}\mathcal{S}}^{\ast }$ is simple, and the lemma follows.

Case 2: The first operation in $\mathcal{E}\mathcal{S}$ is a generating operation. As in Case 1, by applying the inductive assumption we may assume w.l.o.g. that the partial script ${\mathcal{E}\mathcal{S}}^{1,r}$ is simple. If this partial script is non-reducing, then $\mathcal{E}\mathcal{S}$ is non-reducing, and in particular it is simple. Otherwise, let 1 ≤ i < r be the smallest index such that the transformation of ui to ui+1 is by a reducing operation. Since neither generating nor reducing operations may precede this operation in the partial script ${\mathcal{E}\mathcal{S}}^{1,i}$, it follows that all operations in the partial script ${\mathcal{E}\mathcal{S}}^{1,i}$ (if there are any) are mutations.

The generating operation transforming u0 to u1 is either an insertion or a duplication of some letter α in u0. In both cases, we can write s = u0 = vxw and u1 = vx'w (v, x, x', and w are strings), where in the former case x = ε and x' = α, and in the latter case x = α and x' = α α. As all operations in the partial script ${\mathcal{E}\mathcal{S}}^{1,i}$ are mutations, each intermediate string uj, for 1 ≤ j ≤ i, is of the form vj xj wj, where vj, xj, and wj are strings obtained by applying zero or more mutations over v, x', and w, respectively. We argue that the reducing operation transforming ui = vi xi wi to ui+1 cannot be the deletion of a letter or a contraction involving at least one letter within the substring xi. This is true since, in such a case, it would have been possible to avoid the first generating operation in $\mathcal{E}\mathcal{S}$ (transforming x to x'), as well as all mutation operations over a reduced letter in xi and the reducing operation from ui to ui+1. This would yield a script ${\mathcal{E}\mathcal{S}}^{\ast 0,i+1}$ from u0 to ui+1 which is shorter and of no higher cost than ${\mathcal{E}\mathcal{S}}^{0,i+1}$, in contradiction to Observation 6. Hence, the reducing operation from ui to ui+1 either deletes a letter or contracts two letters within one of the substrings vi or wi of ui.

Consider first the case where the reducing operation over ui is applied within its prefix vi. Thus, we can write ui+1 = vi+1 xi+1 wi+1, where vi+1 is the string obtained by applying the corresponding reducing operation over vi, xi+1 = xi, wi+1 = wi, and cost (〈 ui,ui+1 〉) = cost (〈 vi,vi+1 〉). The operations in ${\mathcal{E}\mathcal{S}}^{0,i+1}$ can be assigned into two independent scripts: a script ${\mathcal{E}\mathcal{S}}_{v}=〈v={v}^{\prime 0},{v}^{\prime 1},\dots ,{v}^{\prime p}={v}^{i+1}〉$ from v to vi+1, obtained by merging each run of consecutive identical strings in the series v = v1, v2, …, vi+1 into a single occurrence, and similarly a script ${\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}$ from xw to xi+1 wi+1. Each operation in ${\mathcal{E}\mathcal{S}}^{0,i+1}$ corresponds to exactly one operation in either ${\mathcal{E}\mathcal{S}}_{v}$ or ${\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}$, where the costs of corresponding operations are equal, and therefore $\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{0,i+1}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}_{v}\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}\right)$ and $|{\mathcal{E}\mathcal{S}}^{0,i+1}|=|{\mathcal{E}\mathcal{S}}_{v}|+|{\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}|$.

Now, the script ${\mathcal{E}\mathcal{S}}_{v}\left({u}^{0}\right)=〈{u}^{0}=\mathit{\text{vxw}}={v}^{\prime 0}\mathit{\text{xw}},{v}^{\prime 1}\mathit{\text{xw}},\dots ,{v}^{\prime p}\mathit{\text{xw}}={v}^{i+1}\mathit{\text{xw}}〉$ is a script from u0 to vi+1xw, and similarly the script ${\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}\left({v}^{i+1}\mathit{\text{xw}}\right)=〈{v}^{i+1}\mathit{\text{xw}}={v}^{i+1}{\left(\mathit{\text{xw}}\right)}^{0},{v}^{i+1}{\left(\mathit{\text{xw}}\right)}^{1},\dots ,{v}^{i+1}{\left(\mathit{\text{xw}}\right)}^{q}={v}^{i+1}{x}^{i+1}{w}^{i+1}={u}^{i+1}〉$ is a script from vi+1xw to ui+1. Thus, the script ${\mathcal{E}\mathcal{S}}^{\ast 0,i+1}=〈{\mathcal{E}\mathcal{S}}_{v}\left({u}^{0}\right),{\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}\left({v}^{i+1}\mathit{\text{xw}}\right)〉$ is a script from u0 to ui+1. Since ${\mathcal{E}\mathcal{S}}_{v}$ contains at least one operation (the reducing operation from vi to vi+1) and no generating operation (since besides the reducing operation ${\mathcal{E}\mathcal{S}}_{v}$ may contain only mutations), ${\mathcal{E}\mathcal{S}}^{\ast 0,i+1}$ starts with a non-generating operation. 
In addition, $\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{\ast 0,i+1}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}_{v}\left({u}^{0}\right)\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}\left({v}^{i+1}\mathit{\text{xw}}\right)\right)\stackrel{\text{Lem.7}}{\le }\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}_{v}\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{0,i+1}\right)$ and $\left|{\mathcal{E}\mathcal{S}}^{\ast 0,i+1}\right|=\left|{\mathcal{E}\mathcal{S}}_{v}\left({u}^{0}\right)\right|+\left|{\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}\left({v}^{i+1}\mathit{\text{xw}}\right)\right|=\left|{\mathcal{E}\mathcal{S}}_{v}\right|+\left|{\mathcal{E}\mathcal{S}}_{\mathit{\text{xw}}}\right|=\left|{\mathcal{E}\mathcal{S}}^{0,i+1}\right|$. From Observation 6, ${\mathcal{E}\mathcal{S}}^{0,i+1}$ is a shortest optimal script from u0 to ui+1, and so ${\mathcal{E}\mathcal{S}}^{\ast 0,i+1}$ is a shortest optimal script from u0 to ui+1. Applying Observation 6 again, the script ${\mathcal{E}\mathcal{S}}^{\ast }=〈{\mathcal{E}\mathcal{S}}^{\ast 0,i+1},{\mathcal{E}\mathcal{S}}^{i+1,r}〉$ is a shortest optimal script from s to t. Now, the lemma follows from Case 1 of this proof and from the fact that the first operation in ${\mathcal{E}\mathcal{S}}^{\ast }$ is not a generating operation. □

#### Lemma 11.

For every α ∈ Σ and every nonempty string t, any simple script from α to t is non-reducing.

#### Proof.

Let $\mathcal{E}\mathcal{S}$ be a simple script from α to t, and assume by contradiction that $\mathcal{E}\mathcal{S}$ contains a reducing operation. Since $\mathcal{E}\mathcal{S}$ is simple, all reducing operations in $\mathcal{E}\mathcal{S}$ occur prior to any generating operation; in particular, the first reducing operation is applied after applying zero or more mutations over α. Such a reducing operation must be a deletion from a string of length 1, resulting in an empty intermediate string, in contradiction to Lemma 9. □

#### Lemma 12.

Let w and t be strings and β a letter, such that w ≠ ε, t is of length at least 2, and there is a non-reducing simple script from wβ to t. Then, ed (wβ,t) = min {ed (w,ta) + ed (β,tb) | (ta,tb) ∈ P (t)}.

#### Proof.

Let $\mathcal{E}\mathcal{S}=〈\mathrm{w\beta }={u}^{0},{u}^{1},\dots ,{u}^{r}=t〉$ be a non-reducing simple script from wβ to t. For every 0 ≤ i ≤ r, construct a partition (ui,a, ui,b) of ui which satisfies ed (wβ, ui) ≥ ed (w, ui,a) + ed (β, ui,b), as follows. For i = 0, set (u0,a, u0,b) = (w,β), where by definition ed (wβ,u0) = ed (w,u0,a) + ed (β,u0,b) = 0. Now, assume inductively for some 0 < i ≤ r and a partition (ui-1,a, ui-1,b) of ui-1 that ed (wβ, ui-1) ≥ ed (w, ui-1,a) + ed (β, ui-1,b). If the non-reducing operation transforming ui-1 to ui is a mutation, an insertion, or a duplication of a letter within the prefix ui-1,a, then set ui,a to be the string obtained by applying this operation over ui-1,a, and set ui,b = ui-1,b. Otherwise, the operation is a mutation, an insertion, or a duplication of a letter within the suffix ui-1,b, and in this case set ui,b to be the string obtained by applying this operation over ui-1,b, and ui,a = ui-1,a. Note that in both cases, $\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{i-1,i}\right)=\mathit{\text{ed}}\left({u}^{i-1,a},{u}^{i,a}\right)+\mathit{\text{ed}}\left({u}^{i-1,b},{u}^{i,b}\right)$, and therefore we get from the inductive assumption that $\mathit{\text{ed}}\left(w\beta ,{u}^{i}\right)\stackrel{\text{Obs.6}}{=}\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{0,i}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{0,i-1}\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{i-1,i}\right)\ge \left(\mathit{\text{ed}}\left(w,{u}^{i-1,a}\right)+\mathit{\text{ed}}\left(\beta ,{u}^{i-1,b}\right)\right)+\left(\mathit{\text{ed}}\left({u}^{i-1,a},{u}^{i,a}\right)+\mathit{\text{ed}}\left({u}^{i-1,b},{u}^{i,b}\right)\right)\stackrel{\text{Obs.5}}{\ge }\mathit{\text{ed}}\left(w,{u}^{i,a}\right)+\mathit{\text{ed}}\left(\beta ,{u}^{i,b}\right)$.

The process above generates a partition (ta, tb) = (ur,a, ur,b) of t = ur, for which ed (wβ, t) ≥ ed (w, ta) + ed (β, tb). In particular, ed (wβ,t) ≥ min {ed (w,ta) + ed (β,tb) | (ta,tb) ∈ P (t)}. On the other hand, ed (wβ,t) ≤ min {ed (w,ta) + ed (β,tb) | (ta,tb) ∈ P (t)} by Lemma 8, and the lemma follows. □

Based on the above observations and lemmas, we now turn to prove the recursive computation given in Section “The recurrence formula”, starting with Equation 7. Fix henceforth a pair of input strings s and t, each containing at least two letters. Note that for every α ∈ Σ and all partitions (sa, sb) ∈ P (s) and (ta, tb) ∈ P (t), ed (s,t) ≤ ed (s,α) + ed (α,t) by Observation 5, and ed (s,t) ≤ ed (sa, ta) + ed (sb, tb) ≤ ed (sa, ta) + ed (sb, α) + ed (α,tb) by Lemma 8 and Observation 5, therefore

$\mathit{\text{ed}}\left(s,t\right)\le \min \left\{\begin{array}{l}\mathit{\text{ed}}\left(s,\alpha \right)+\mathit{\text{ed}}\left(\alpha ,t\right),\\ \mathit{\text{ed}}\left({s}^{a},{t}^{a}\right)+\mathit{\text{ed}}\left({s}^{b},\alpha \right)+\mathit{\text{ed}}\left(\alpha ,{t}^{b}\right)\end{array}\;\middle|\;\begin{array}{l}\left({s}^{a},{s}^{b}\right)\in P\left(s\right),\\ \left({t}^{a},{t}^{b}\right)\in P\left(t\right),\\ \alpha \in \Sigma \end{array}\right\}.$

Thus, to prove the correctness of Equation 7, it remains to show that

$\mathit{\text{ed}}\left(s,t\right)\ge \min \left\{\begin{array}{l}\mathit{\text{ed}}\left(s,\alpha \right)+\mathit{\text{ed}}\left(\alpha ,t\right),\\ \mathit{\text{ed}}\left({s}^{a},{t}^{a}\right)+\mathit{\text{ed}}\left({s}^{b},\alpha \right)+\mathit{\text{ed}}\left(\alpha ,{t}^{b}\right)\end{array}\;\middle|\;\begin{array}{l}\left({s}^{a},{s}^{b}\right)\in P\left(s\right),\\ \left({t}^{a},{t}^{b}\right)\in P\left(t\right),\\ \alpha \in \Sigma \end{array}\right\}.$

From Lemma 10, there is a simple script $\mathcal{E}\mathcal{S}=〈s={u}^{0},{u}^{1},\dots ,{u}^{r}=t〉$ from s to t, and in particular, there is a string ui along $\mathcal{E}\mathcal{S}$ such that the partial script ${\mathcal{E}\mathcal{S}}^{0,i}$ is non-generating, and the partial script ${\mathcal{E}\mathcal{S}}^{i,r}$ is non-reducing. Recall that $\mathit{\text{ed}}\left(s,t\right)=\mathit{\text{cost}}\left(\mathcal{E}\mathcal{S}\right)=\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{0,i}\right)+\mathit{\text{cost}}\left({\mathcal{E}\mathcal{S}}^{i,r}\right)\stackrel{\text{Obs.6}}{=}\mathit{\text{ed}}\left(s,{u}^{i}\right)+\mathit{\text{ed}}\left({u}^{i},t\right)$.

If ui = β for some letter β, then ed (s,t) = ed (s,β) + ed (β,t) ≥ min {ed (s,α) + ed (α,t) | α ∈ Σ}. Otherwise, ui contains at least two letters. In this case, we can write ui = wβ, where β is the last letter in ui and w is the nonempty prefix of ui containing all letters except for the last one. From Lemma 12, ed (ui,t) = ed (wβ, t) = min {ed (w, ta) + ed (β, tb) | (ta, tb) ∈ P (t)}. Symmetrically, it is possible to show that ed (s, ui) = min {ed (sa, w) + ed (sb, β) | (sa, sb) ∈ P (s)}. Letting (sa, sb) and (ta, tb) be partitions attaining these minima, we get ed (s,t) = ed (s,ui) + ed (ui,t) = ed (sa, w) + ed (sb, β) + ed (w, ta) + ed (β, tb) ≥ ed (sa, ta) + ed (sb, β) + ed (β, tb) by Observation 5, which is at least min {ed (sa, ta) + ed (sb, α) + ed (α, tb)} (taking α = β), concluding the proof of Equation 7.

We next continue to develop the recursive computation, considering the simpler cases where one of the input strings is either empty or contains a single letter, and the other string contains at least two letters. Let $\mathcal{E}\mathcal{S}$ be a simple script from ε to t whose length is r. $\mathcal{E}\mathcal{S}$ must start with an insertion of some letter α, and from Observation 6, the remainder of the script ${\mathcal{E}\mathcal{S}}^{1,r}$ is an optimal script from α to t, implying the correctness of Equation 1. The correctness of Equation 4 is shown symmetrically.

Now, consider the computation of ed (α,t), as expressed in the last term of Equation 2. From Lemma 11, a simple script from α to t is non-reducing, and so the first operation in such a script is either the mutation of α or some generating operation. If there is such a script in which the first operation is generating, then ed (α,t) = mut (α,α) + ed (α,t) ≥ min {mut (α,β) + ed (β,t) | β ∈ Σ}. Else, there is a simple script from α to t in which the first operation is the mutation of α into some letter β. Due to Lemma 9, the following operation must be a generating operation, and so the remainder of the script is an optimal script from β to t in which the first operation is generating, implying again that ed (α,t) ≥ min {mut (α,β) + ed (β,t) | β ∈ Σ}. The other direction of the inequality is shown similarly as done above for Equation 7, concluding the correctness proof of Equation 2.

We now address the correctness of Equation 3. Consider the minimum cost of a script from α to t which starts with a generating operation. Let $\mathcal{E}\mathcal{S}$ be such a script, and let r denote its length.

For the case where the first operation in $\mathcal{E}\mathcal{S}$ is an insertion of some letter γ after α, from Observation 6 we get that ${\mathcal{E}\mathcal{S}}^{1,r}$ is a non-reducing optimal script from u1 = α γ to t, and therefore in this case