 Research
 Open Access
Reducing the worst case running times of a family of RNA and CFG problems, using Valiant’s approach
 Shay Zakov^{1},
 Dekel Tsur^{1} and
 Michal Ziv-Ukelson^{1}
https://doi.org/10.1186/1748-7188-6-20
© Zakov et al; licensee BioMed Central Ltd. 2011
 Received: 10 November 2010
 Accepted: 18 August 2011
 Published: 18 August 2011
Abstract
Background
RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to the computational analysis of functional RNA. For more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a dynamic programming approach similar to that presented by Nussinov and Jacobson in the late 1970s, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data.
Results
We study Valiant's classical algorithm for Context Free Grammar recognition in subcubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars.
Conclusions
The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and basepair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms.
Keywords
 Matrix Multiplication
 Recursive Call
 Input String
 Inside Property
 Matrix Multiplication Algorithm
1 Background
RNA research is one of the classical domains in bioinformatics, receiving increasing attention in recent years due to discoveries regarding RNA's role in regulation of genes and as a catalyst in many cellular processes [1, 2]. It is well-known that the function of an RNA molecule is heavily dependent on its structure [3]. However, due to the difficulty of physically determining RNA structure via wet-lab techniques, computational prediction of RNA structures serves as the basis of many approaches related to RNA functional analysis [4]. Most computational tools for RNA structural prediction focus on RNA secondary structures: a reduced structural representation of RNA molecules which describes a set of nucleotide pairings, formed through hydrogen bonds, in an RNA sequence. RNA secondary structures can be relatively well predicted computationally in polynomial time (as opposed to three-dimensional structures). This computational feasibility, combined with the fact that RNA secondary structures still reveal important information about the functional behavior of RNA molecules, accounts for the high popularity of state-of-the-art tools for RNA secondary structure prediction [5].
Over the last decades, several variants of RNA secondary structure prediction problems were defined, for which polynomial-time algorithms have been designed. These variants include the basic RNA Folding problem (predicting the secondary structure of a single RNA strand which is given as an input) [6–8], the RNA-RNA Interaction problem (predicting the structure of the complex formed by two or more interacting RNA molecules) [9], the RNA Partition Function and Base Pair Binding Probabilities problem of a single RNA strand [10] or an RNA duplex [11, 12] (computing the pairing probability between each pair of nucleotides in the input), the RNA Sequence to Structured-Sequence Alignment problem (aligning an RNA sequence to sequences with known structures) [13, 14], and the RNA Simultaneous Alignment and Folding problem (finding a secondary structure which is conserved by multiple homologous RNA sequences) [15]. Sakakibara et al. [16] noticed that the basic RNA Folding problem is in fact a special case of the Weighted Context Free Grammar (WCFG) Parsing problem (also known as Stochastic or Probabilistic CFG Parsing) [17]. Their approach was then followed by Dowell and Eddy [18], Do et al. [19], and others, who studied different aspects of the relationship between these two domains. The WCFG Parsing problem is a generalization of the simpler non-weighted CFG Parsing problem. Both the WCFG and CFG Parsing problems can be solved by the Cocke-Kasami-Younger (CKY) dynamic programming algorithm [20–22], whose running time is cubic in the number of words in the input sentence (or in the number of nucleotides in the input RNA sequence).
The CFG literature describes two improvements which allow obtaining subcubic running times for the CKY algorithm. The first of these improvements was a technique suggested by Valiant [23], who showed that the CFG Parsing problem on a sentence with n words can be solved in a running time which matches the running time of a Boolean Matrix Multiplication of two n × n matrices. The current asymptotic running time bound for this variant of matrix multiplication is due to Coppersmith and Winograd [24], who gave an O(n^{2.376}) time (theoretical) algorithm. In [25], Akutsu argued that the algorithm of Valiant can be modified to deal also with WCFG Parsing (this extension is described in more detail in [26]), and consequently with RNA Folding. The running time of the adapted algorithm differs from that of Valiant's algorithm, and matches the running time of a Max-Plus Multiplication of two n × n matrices. The current running time bound for this variant is $O\left(\frac{n^3 \log^3 \log n}{\log^2 n}\right)$, given by Chan [27].
The second improvement to the CKY algorithm was introduced by Graham et al. [28], who applied the Four Russians technique [29] and obtained an $O\left(\frac{n^3}{\log n}\right)$ running time algorithm for the (non-weighted) CFG Parsing problem. To the best of our knowledge, no extension of this approach to the WCFG Parsing problem has been described. Recently, Frid and Gusfield [30] showed how to apply the Four Russians technique to the RNA Folding problem (under the assumption of a discrete scoring scheme), obtaining the same running time of $O\left(\frac{n^3}{\log n}\right)$. This method was also extended to deal with the RNA Simultaneous Alignment and Folding problem [31], yielding an $O\left(\frac{n^6}{\log n}\right)$ running time algorithm.
Several other techniques have been previously developed to accelerate the practical running times of different variants of CFG and RNA related algorithms. Nevertheless, these techniques either retain the same worst case running times of the standard algorithms [14, 28, 32–36], or apply heuristics which compromise the optimality of the obtained solutions [25, 37, 38]. For some of the problem variants, such as the RNA Base Pair Binding Probabilities problem (which is considered to be one of the variants that produces more reliable predictions in practice), no speedup to the standard algorithms has been previously described.
In his paper [23], Valiant suggested that his approach could be extended to additional related problems. However, in the more than three decades that have passed since then, very few works have followed. The only extension of the technique known to the authors is Akutsu's extension to WCFG Parsing and RNA Folding [25, 26]. We speculate that simplifying Valiant's algorithm would make it clearer, and thus more accessible for application to a wider range of problems.
Indeed, in this work we present a simple description of Valiant's technique, and then further generalize it to cope with additional problem variants which do not follow the standard structure of CFG/WCFG Parsing (a preliminary version of this work was presented in [39]). More specifically, we define three template formulations, entitled Vector Multiplication Templates (VMTs). These templates abstract the essential properties that characterize problems for which a Valiant-like algorithmic approach can yield algorithms of improved time complexity. Then, we describe generic algorithms, based on Valiant's algorithm, for all problems sustaining these templates.
Time complexities of several VMT problems

| | Problem | Standard DP running time | Implicit [explicit] VMT algorithm running time |
|---|---|---|---|
| Results previously published | CFG Recognition/Parsing | $\Theta(n^3)$ [20–22] | $\Theta(DB(n))$ $[\Theta(n^{2.38})]$ [23] |
| | WCFG Parsing | $\Theta(n^3)$ [17] | $\Theta(MP(n))$ $[\tilde{O}\left(\frac{n^3}{\log^2 n}\right)]$ [25] |
| | RNA Single Strand Folding | $\Theta(n^3)$ [6–8] | $\Theta(MP(n))$ $[\tilde{O}\left(\frac{n^3}{\log^2 n}\right)]$ [25] |
| | RNA Partition Function | $\Theta(n^3)$ [10] | $\Theta(DB(n))$ $[\Theta(n^{2.38})]$ [25] |
| In this paper | WCFG Inside-Outside | $\Theta(n^3)$ [43] | $\Theta(DB(n))$ $[\Theta(n^{2.38})]$ |
| | RNA Base Pair Binding Probabilities | $\Theta(n^3)$ [10] | $\Theta(DB(n))$ $[\Theta(n^{2.38})]$ |
| | RNA Simultaneous Alignment and Folding | $\Theta\left((n/2)^{3m}\right)$ [15] | $\Theta(MP(n^m))$ $[\tilde{O}\left(\frac{n^{3m}}{m \log^2 n}\right)]$ |
| | RNA-RNA Interaction | $\Theta(n^6)$ [9] | $\Theta(MP(n^2))$ $[\tilde{O}\left(\frac{n^6}{\log^2 n}\right)]$ |
| | RNA-RNA Interaction Partition Function | $\Theta(n^6)$ [12] | $\Theta(DB(n^2))$ $[\Theta(n^{4.75})]$ |
| | RNA Sequence to Structured-Sequence Alignment | $\Theta(n^4)$ [13] | $\Theta(n \cdot MP(n))$ $[\tilde{O}\left(\frac{n^4}{\log^2 n}\right)]$ |

Here, MP(n) and DB(n) denote the running times of Min/Max-Plus Multiplication and of Dot Product or Boolean Multiplication of two n × n matrices, respectively (see Section 2.2).
The formulation presented here has several advantages over the original formulation in [23]: First, it is considerably simpler, as the correctness of the algorithms follows immediately from their descriptions. Second, some requirements with respect to the nature of the problems that were stated in previous works, such as the operation commutativity and distributivity requirements in [23], or the semiring domain requirement in [42], can be easily relaxed. Third, this formulation applies in a natural manner to algorithms for several classes of problems, some of which we show here. Additional problem variants which do not follow the exact templates presented here, such as the formulation in [12] for the RNA-RNA Interaction Partition Function problem, or the formulation in [13] for the RNA Sequence to Structured-Sequence Alignment problem, can be solved by introducing simple modifications to the algorithms we present. Interestingly, it turns out that almost every variant of the RNA secondary structure prediction problem, as well as additional problems from the domain of CFGs, sustains the VMT requirements. Therefore, Valiant's technique can be applied to reduce the worst case running times of a large family of important problems. In general, as explained later in this paper, VMT problems are characterized by the fact that their computation requires the execution of many vector multiplication operations, with respect to different multiplication variants (Dot Product, Boolean Multiplication, and Min/Max-Plus Multiplication). Naively, the time complexity of each vector multiplication is linear in the length of the multiplied vectors. Nevertheless, it is possible to organize these vector multiplications as parts of square matrix multiplications, and to apply fast matrix multiplication algorithms in order to obtain a sublinear (amortized) running time for each vector multiplication.
As we show, a main challenge in algorithms for VMT problems is to bundle subsets of vector multiplication operations so that they can be computed via the application of fast matrix multiplication algorithms. As the elements of these vectors are computed along the run of the algorithm, another aspect which requires attention is the order in which these matrix multiplications take place.
Road Map
In Section 2 the basic notations are given. In Section 3 we describe the Inside Vector Multiplication Template, a template which extracts features of problems to which Valiant's algorithm can be applied. This section also includes the description of an exemplary problem (Section 3.1), and a generalized and simplified exhibition of Valiant's algorithm and its running time analysis (Section 3.3). In Sections 4 and 5 we define two additional problem templates: the Outside Vector Multiplication Template and the Multiple String Vector Multiplication Template, and describe modifications to the algorithm of Valiant which allow solving problems that sustain these templates. Section 6 concludes the paper, summarizing the main results and discussing some of their implications. Two additional exemplary problems (an Outside VMT problem and a Multiple String VMT problem) are presented in the Appendix.
2 Preliminaries
As intervals of integers, matrices, and strings will be extensively used throughout this work, we first define some related notation.
2.1 Interval notations
2.2 Matrix notations
Let X be an n_{1} × n_{2} matrix, with rows indexed with 0, 1, ..., n_{1} − 1 and columns indexed with 0, 1, ..., n_{2} − 1. Denote by X_{i,j} the element in the i-th row and j-th column of X. For two intervals I ⊆ [0, n_{1}) and J ⊆ [0, n_{2}), let X_{I,J} denote the submatrix of X obtained by projecting it onto the subset of rows I and the subset of columns J. Denote by X_{i,J} the submatrix X_{[i,i],J}, and by X_{I,j} the submatrix X_{I,[j,j]}. Let $\mathcal{D}$ be a domain of elements, and ⊗ and ⊕ two binary operations on $\mathcal{D}$. We assume that (1) ⊕ is associative (i.e. for three elements a, b, c in the domain, (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c)), and (2) there exists a zero element ϕ in $\mathcal{D}$, such that for every element $a\in \mathcal{D}$, a ⊕ ϕ = ϕ ⊕ a = a and a ⊗ ϕ = ϕ ⊗ a = ϕ.
Let X be an n_{1} × n_{2} matrix and Y an n_{2} × n_{3} matrix. The matrix multiplication Z = X ⊗ Y is the n_{1} × n_{3} matrix defined by ${Z}_{i,j}={\oplus}_{0\le q<{n}_{2}}\left({X}_{i,q}\otimes {Y}_{q,j}\right)$. In the special case where n_{2} = 0, define the result of the multiplication Z to be an n_{1} × n_{3} matrix in which all elements are ϕ. In the special case where n_{1} = n_{3} = 1, the matrix multiplication X ⊗ Y is also called a vector multiplication (where the resulting matrix Z contains a single element).
Under the assumption that the operations ⊗ and ⊕ between two domain elements consume Θ(1) computation time, a straightforward implementation of a matrix multiplication between two n × n matrices can be computed in Θ(n^{3}) time. Nevertheless, for some variants of multiplications, subcubic algorithms for square matrix multiplications are known. Here, we consider three such variants, which will be referred to as standard multiplications in the rest of this paper:

Dot Product: The matrices hold numerical elements, ⊗ stands for number multiplication (·) and ⊕ stands for number addition (+). The zero element is 0. The running time of the currently fastest algorithm for this variant is O(n^{2.376}) [24].

Min-Plus/Max-Plus Multiplication: The matrices hold numerical elements, ⊗ stands for number addition and ⊕ stands for min or max (where a min b is the minimum of a and b, and similarly for max). The zero element is ∞ for the Min-Plus variant and −∞ for the Max-Plus variant. The running time of the currently fastest algorithm for these variants is $O\left(\frac{n^3 \log^3 \log n}{\log^2 n}\right)$ [27].

Boolean Multiplication: The matrices hold boolean elements, ⊗ stands for boolean AND (⋀) and ⊕ stands for boolean OR(⋁). The zero element is the false value. Boolean Multiplication is computable with the same complexity as the Dot Product, having the running time of O(n^{2.376}) [24].
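The three standard multiplications differ only in the choice of the operations (⊗, ⊕) and the zero element ϕ. The following minimal Python sketch (ours, for illustration; the function names are not from the paper) makes this uniformity explicit with a naive Θ(n³) implementation:

```python
def mat_mult(X, Y, otimes, oplus, phi):
    """Return Z with Z[i][j] = oplus over q of otimes(X[i][q], Y[q][j])."""
    n1, n2, n3 = len(X), len(Y), (len(Y[0]) if Y else 0)
    Z = [[phi] * n3 for _ in range(n1)]
    for i in range(n1):
        for j in range(n3):
            acc = phi
            for q in range(n2):
                acc = oplus(acc, otimes(X[i][q], Y[q][j]))
            Z[i][j] = acc
    return Z

# Dot Product: otimes is number multiplication, oplus is addition, phi = 0.
def dot(X, Y):
    return mat_mult(X, Y, lambda a, b: a * b, lambda a, b: a + b, 0)

# Max-Plus: otimes is number addition, oplus is max, phi = -infinity.
def max_plus(X, Y):
    return mat_mult(X, Y, lambda a, b: a + b, max, float("-inf"))

# Boolean: otimes is AND, oplus is OR, phi = False.
def boolean(X, Y):
    return mat_mult(X, Y, lambda a, b: a and b, lambda a, b: a or b, False)
```

Note that ϕ is neutral for ⊕ and absorbing for ⊗ in all three variants, exactly as required by the definitions above; fast algorithms replace the inner loops while keeping this interface.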
2.3 String notations
Let s = s_{0}s_{1} ... s_{n − 1} be a string of length n over some alphabet. A position q in s refers to a point between the characters s_{q − 1} and s_{q} (a position may be visualized as a vertical line which separates between these two characters). Position 0 is regarded as the point just before s_{0}, and position n as the point just after s_{n − 1}. Denote by |s| = n + 1 the number of different positions in s. Denote by s_{i,j} the substring of s between positions i and j, i.e. the string s_{i}s_{i+1} ... s_{j − 1}. In the case where i = j, s_{i,j} corresponds to an empty string, and for i > j, s_{i,j} does not correspond to a valid string.
An inside property β_{i,j} is a property of the substring s_{i,j}. An outside property α_{i,j} is a property of the residual string obtained by removing s_{i,j} from s (i.e. the pair of strings s_{0,i} and s_{j,n}; see Figure 2). Such a residual string is denoted by $\overline{{s}_{i,j}}$. Outside property computations occur in algorithms for the RNA Base Pair Binding Probabilities problem [10], and in the Inside-Outside algorithm for learning derivation rule weights for WCFGs [43].
In the rest of this paper, given an instance string s, substrings of the form s_{i,j} and residual strings of the form $\overline{{s}_{i,j}}$ will be considered as subinstances of s. Characters and positions in such subinstances are indexed according to the same indexing as in the original string s. That is, the characters in subinstances of the form s_{i,j} are indexed from i to j − 1, and in subinstances of the form $\overline{{s}_{i,j}}$ the first i characters are indexed between 0 and i − 1, and the remaining characters are indexed between j and n − 1. The notation β will be used to denote the set of all values of the form β_{i,j} with respect to substrings s_{i,j} of some given string s. It is convenient to visualize β as an |s| × |s| matrix, where the (i, j)-th entry in the matrix contains the value β_{i,j}. Only entries in the upper triangle of the matrix β correspond to valid substrings of s. For convenience, we define that values of the form β_{i,j} with j < i equal ϕ (with respect to the corresponding domain of values). Notations such as β_{I,J}, β_{i,J}, and β_{I,j} are used in order to denote the corresponding submatrices of β, as defined above. Similar notations are used for a set α of outside properties.
3 The Inside Vector Multiplication Template
In this section we describe a template that defines a class of problems, called the Inside Vector Multiplication Template (Inside VMT). We start by giving a simple motivating example in Section 3.1. Then, the class of Inside VMT problems is formally defined in Section 3.2, and in Section 3.3 an efficient generic algorithm for all Inside VMT problems is presented.
3.1 Example: RNA Base-Pairing Maximization
The RNA Base-Pairing Maximization problem [6] is a simple variant of the RNA Folding problem, and it exhibits the main characteristics of Inside VMT problems. In this problem, an input string s = s_{0}s_{1} ⋯ s_{n − 1} represents a string of bases (or nucleotides) over the alphabet {A, C, G, U}. Besides strong (covalent) chemical bonds which occur between each pair of consecutive bases in the string, bases at distant positions tend to form additional weaker (hydrogen) bonds, where a base of type A can pair with a base of type U, a base of type C can pair with a base of type G, and in addition a base of type G can pair with a base of type U. Two bases which can pair to each other in such a (weak) bond are called complementary bases, and a bond between two such bases is called a base-pair. The notation a • b is used to denote that the bases at indices a and b in s are paired to each other.
Here, β_{i,j} denotes the maximum number of complementary base-pairs in a folding of the substring s_{i,j}. In order to compute values of the form β_{i,j}, we distinguish between two types of foldings for a substring s_{i,j}: foldings of type I are those which contain the base-pair i • (j − 1), and foldings of type II are those which do not contain i • (j − 1).
Consider a folding F of type I. Since i • (j − 1) ∈ F, the folding F is obtained by adding the base-pair i • (j − 1) to some folding F' for the substring s_{i+1,j−1} (Figure 3a). The number of complementary base-pairs in F is thus |F'| + 1 if the bases s_{i} and s_{j−1} are complementary, and otherwise it is |F'|. Clearly, the number of complementary base-pairs in F is maximized when choosing F' such that |F'| = β_{i+1,j−1}. Now, consider a folding F of type II. In this case, there must exist some position q ∈ (i, j), such that no base-pair a • b in F sustains a < q ≤ b. This observation is true, since if j − 1 is paired to some index p (where i < p < j − 1), then q = p sustains the requirement (Figure 3b), and otherwise q = j − 1 sustains the requirement (Figure 3c). Therefore, q splits F into two independent foldings: a folding F' for the prefix s_{i,q}, and a folding F" for the suffix s_{q,j}, where |F| = |F'| + |F"|. For a specific split position q, the maximum number of complementary base-pairs in a folding of type II for s_{i,j} is then given by β_{i,q} + β_{q,j}, and taking the maximum over all possible positions q ∈ (i, j) guarantees that the best solution of this form is found.
These two cases yield the following recurrence (with β_{i,j} = 0 when j − i ≤ 1):

$${\beta}_{i,j}=\max\left\{\text{(I)}\phantom{\rule{0.5em}{0ex}}{\beta}_{i+1,j-1}+{\delta}_{i,j-1},\phantom{\rule{1em}{0ex}}\text{(II)}\phantom{\rule{0.5em}{0ex}}\underset{q\in (i,j)}{\max}\left({\beta}_{i,q}+{\beta}_{q,j}\right)\right\},$$

where δ_{i,j−1} = 1 if s_{i} and s_{j−1} are complementary bases, and otherwise δ_{i,j−1} = 0.
3.1.1 The classical algorithm
Upon computing a value β_{i,j}, the algorithm needs to compute term (II) of the recurrence. This computation has the form of a vector multiplication operation ⊕_{q∈(i,j)}(β_{i,q} ⊗ β_{q,j}), where the multiplication variant is the Max-Plus multiplication. Since all relevant values in B are computed, this computation can be implemented by computing B_{i,(i,j)} ⊗ B_{(i,j),j} (the multiplication of the two darkened vectors in Figure 4), which takes Θ(j − i) running time. After computing term (II), the algorithm needs to perform additional operations for computing β_{i,j} which take Θ(1) running time (computing term (I), and taking the maximum between the results of the two terms). It can easily be shown that, on average, the running time for computing each value β_{i,j} is Θ(n), and thus the overall running time for computing all Θ(n^{2}) values β_{i,j} is Θ(n^{3}). Upon termination, the computed matrix B equals the matrix β, and the required result β_{0,n} is found in the entry B_{0,n}.
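The classical cubic-time computation can be sketched as follows (a runnable illustration of the recurrence, written by us for this exposition; it is not the paper's code):

```python
# Complementary base-pairs: A-U, C-G, and the wobble pair G-U.
PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def base_pairing_maximization(s):
    """Fill the matrix B over string positions 0..n, where B[i][j] = beta_{i,j}:
    the maximum number of complementary base-pairs in a folding of s_{i,j}."""
    n = len(s)
    # Entries with j - i <= 1 (empty or single-base substrings) stay 0.
    B = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # substrings by increasing length
        for i in range(0, n - length + 1):
            j = i + length
            delta = 1 if (s[i], s[j - 1]) in PAIRS else 0
            term1 = B[i + 1][j - 1] + delta   # term (I): add base-pair i . (j-1)
            term2 = max(B[i][q] + B[q][j] for q in range(i + 1, j))  # term (II)
            B[i][j] = max(term1, term2)
    return B
```

For example, `base_pairing_maximization("GCAU")[0][4]` evaluates β_{0,n} for a 4-base string, whose optimal folding pairs G with C and A with U.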
3.2 Inside VMT definition
In this section we characterize the class of Inside VMT problems. The RNA Base-Pairing Maximization problem, which was described in the previous section, exhibits a simple special case of an Inside VMT problem, in which the goal is to compute a single inside property for a given input string. Note that this requires the computation of such inside properties for all substrings of the input, due to the recursive nature of the computation. In other Inside VMT problems the situation is similar, hence we will assume that the goal of Inside VMT problems is to compute inside properties for all substrings of the input string. In the more general case, an Inside VMT problem defines several inside properties, and all of these properties are computed for each substring of the input in a mutually recursive manner. Examples of such problems are the RNA Partition Function problem [10] (which is described in Appendix A), the RNA Energy Minimization problem [7] (which computes several folding scores for each substring of the input, corresponding to restricted types of foldings), and the CFG Parsing problem [20–22] (which computes, for every nonterminal symbol in the grammar and every subsentence of the input, a boolean value that indicates whether the subsentence can be derived in the grammar when starting the derivation from the nonterminal symbol).
A common characteristic of all Inside VMT problems is that the computation of at least one type of inside property requires a result of a vector multiplication operation, of a similar structure to the multiplication described in the previous section for the RNA Base-Pairing Maximization problem. On many occasions, it is also required to output a solution that corresponds to the computed property, e.g. a minimum-energy secondary structure in the case of the RNA Folding problem, or a maximum-weight parse tree in the case of the WCFG Parsing problem. These solutions can usually be obtained by applying a traceback procedure over the computed dynamic programming tables. As the running times of these traceback procedures are typically negligible with respect to the time needed for filling the values in the tables, we disregard this phase of the computation in the rest of the paper.
The following definition describes the family of Inside VMT problems, which share common combinatorial characteristics and may be solved by a generic algorithm which is presented in Section 3.3.
Definition 1 A problem is considered an Inside VMT problem if it fulfills the following requirements.
1. Instances of the problem are strings, and the goal of the problem is to compute for every substring s_{i, j} of an input string s, a series of inside properties ${\beta}_{i,j}^{1},\phantom{\rule{2.77695pt}{0ex}}{\beta}_{i,j}^{2},\phantom{\rule{2.77695pt}{0ex}}\dots ,\phantom{\rule{2.77695pt}{0ex}}{\beta}_{i,j}^{K}$.
2. Let s_{i, j} be a substring of some input string s. Let 1 ≤ k ≤ K, and let ${\mu}_{i,j}^{k}$ be a result of a vector multiplication of the form ${\mu}_{i,j}^{k}={\oplus}_{q\in \left(i,j\right)}\left({\beta}_{i,q}^{{k}^{\prime}}\otimes {\beta}_{q,j}^{{k}^{\u2033}}\right)$, for some 1 ≤ k', k" ≤ K. Assume that the following values are available: ${\mu}_{i,j}^{k}$, all values ${\beta}_{{i}^{\prime},{j}^{\prime}}^{{k}^{\prime}}$ for 1 ≤ k' ≤ K and s_{i',j'} a strict substring of s_{i, j}, and all values ${\beta}_{i,j}^{{k}^{\prime}}$ for 1 ≤ k' < k. Then, ${\beta}_{i,j}^{k}$ can be computed in o(|s|) running time.
3. In the multiplication variant that is used for computing ${\mu}_{i,j}^{k}$, the ⊕ operation is associative, and the domain of elements contains a zero element. In addition, there is a matrix multiplication algorithm for this multiplication variant, whose running time M(n) over two n × n matrices satisfies M(n) = o(n^{3}).
3.3 The Inside VMT algorithm
We next describe a generic algorithm, based on Valiant's algorithm [23], for solving problems sustaining the Inside VMT requirements. For simplicity, it is assumed that a single property β_{i,j} needs to be computed for each substring s_{i,j} of the input string s. We later explain how to extend the presented algorithm to the more general case of computing K inside properties for each substring.
The algorithm maintains a matrix B, in which each entry B_{i,j} eventually holds the value β_{i,j}. The matrix is filled by a recursive procedure, which is invoked with a pair of intervals I, J under the following precondition:

 1. Each entry B_{i,j} in B_{[I,J],[I,J]} contains the value β_{i,j}, except for entries in B_{I,J}.

 2. Each entry B_{i,j} in B_{I,J} contains the value ⊕_{q∈(I,J)}(β_{i,q} ⊗ β_{q,j}). In other words, B_{I,J} = β_{I,(I,J)} ⊗ β_{(I,J),J}.
Let n denote the length of s. Upon initialization, I = J = [0, n], and all values in B are set to ϕ. At this stage (I, J) is an empty interval, and so the precondition with respect to the complete matrix B is met. Now, consider a call to the algorithm with some pair of intervals I, J. If I = [i, i] and J = [j, j], then from the precondition, all values β_{i',j'} which are required for the computation of β_{i,j} are computed and stored in B, and B_{i,j} = μ_{i,j} (Figure 4). Thus, according to the Inside VMT requirements, β_{i,j} can be evaluated in o(|s|) running time and stored in B_{i,j}.
The Inside VMT algorithm
InsideVMT (s) 

1: Allocate a matrix B of size |s| × |s|, and initialize all entries in B with ϕ elements. 
2: Call ComputeInsideSubMatrix ([0, n], [0, n]), where n is the length of s. 
3: return B 
ComputeInsideSubMatrix (I, J) 
precondition: The values in B_{[I,J], [I,J]}, excluding the values in B_{I,J}, are computed, and B_{I,J}= β_{I,(I,J)}⊗ β_{(I,J),J}. 
postcondition: B_{[I,J], [I,J]}= β_{[I,J], [I,J]}. 
1: if I = [i, i] and J = [j, j] then 
2: If i ≤ j, compute β_{i,j} (in o(|s|) running time) by querying computed values in B and the value μ_{i,j} which is stored in B_{i,j}. Update B_{i,j} ← β_{i,j}. 
3: else 
4: if |I| ≤ |J| then 
5: Let J_{1} and J_{2} be the two intervals such that J_{1}J_{2} = J, and |J_{1}| = ⌊|J|/2⌋. 
6: Call ComputeInsideSubMatrix (I, J_{1}). 
7: Let L be the interval such that (I, J)L = (I, J_{2}). 
8: Update ${B}_{I,{J}_{2}}\leftarrow {B}_{I,{J}_{2}}\oplus \left({B}_{I,L}\otimes {B}_{L,{J}_{2}}\right)$. 
9: Call ComputeInsideSubMatrix (I, J_{2}). 
10: else 
11: Let I_{1} and I_{2} be the two intervals such that I_{2}I_{1} = I, and |I_{2}| = ⌊|I|/2⌋. 
12: Call ComputeInsideSubMatrix (I_{1}, J). 
13: Let L be the interval such that L(I, J) = (I_{2}, J). 
14: Update ${B}_{{I}_{2},J}\leftarrow \left({B}_{{I}_{2},L}\otimes {B}_{L,J}\right)\oplus {B}_{{I}_{2},J}$. 
15: Call ComputeInsideSubMatrix (I_{2}, J). 
16: end if 
17: end if 
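The procedure above can be sketched as a compact, runnable Python program (our illustration, not the paper's code), instantiated for the Base-Pairing Maximization example of Section 3.1. Here `max_plus_update` is a naive stand-in for a fast Max-Plus matrix multiplication routine, so the sketch demonstrates the control structure and correctness rather than the improved running time; intervals are represented as inclusive pairs of string positions:

```python
NEG_INF = float("-inf")  # the zero element phi of the Max-Plus variant
PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def max_plus_update(B, rows, mids, cols):
    """B[rows, cols] <- B[rows, cols] (+) (B[rows, mids] (x) B[mids, cols]),
    where (x) is + and (+) is max: a naive stand-in for a fast algorithm."""
    for i in rows:
        for j in cols:
            best = B[i][j]
            for q in mids:
                v = B[i][q] + B[q][j]
                if v > best:
                    best = v
            B[i][j] = best

def inside_vmt(s):
    n = len(s)
    B = [[NEG_INF] * (n + 1) for _ in range(n + 1)]  # positions 0..n

    def compute(I, J):
        (i0, i1), (j0, j1) = I, J            # inclusive intervals of positions
        if i0 == i1 and j0 == j1:            # base case: I = [i, i], J = [j, j]
            i, j = i0, j0
            if j < i:                        # not a valid substring
                return
            if j - i <= 1:
                B[i][j] = 0                  # empty or single-base substring
            else:
                delta = 1 if (s[i], s[j - 1]) in PAIRS else 0
                # term (I) vs. mu_{i,j}, already accumulated in B[i][j]
                B[i][j] = max(B[i + 1][j - 1] + delta, B[i][j])
            return
        if i1 - i0 <= j1 - j0:               # |I| <= |J|: split J = J1 J2
            m = j0 + (j1 - j0 + 1) // 2 - 1  # |J1| = floor(|J|/2)
            compute(I, (j0, m))
            # line 8 of the pseudocode, with L = J1
            max_plus_update(B, range(i0, i1 + 1), range(j0, m + 1),
                            range(m + 1, j1 + 1))
            compute(I, (m + 1, j1))
        else:                                # |I| > |J|: split I = I2 I1
            m = i0 + (i1 - i0 + 1) // 2 - 1  # |I2| = floor(|I|/2)
            compute((m + 1, i1), J)
            # line 14 of the pseudocode, with L = I1
            max_plus_update(B, range(i0, m + 1), range(m + 1, i1 + 1),
                            range(j0, j1 + 1))
            compute((i0, m), J)

    compute((0, n), (0, n))
    return B
```

Replacing `max_plus_update` by a subcubic Max-Plus multiplication routine is exactly what yields the improved bound; the recursion itself is unchanged, and the returned matrix agrees with the classical dynamic program (e.g. `inside_vmt("GCAU")[0][4]` gives the maximum number of complementary base-pairs, here 2).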
3.3.1 Time complexity analysis for the Inside VMT algorithm
In order to analyze the running time of the presented algorithm, we count separately the time needed for computing the base-cases of the recurrence, and the time needed for the remaining operations.
In the base-cases of the recurrence (lines 1-2 in Procedure ComputeInsideSubMatrix, Table 2), |I| = |J| = 1, and the algorithm explicitly computes a value of the form β_{i,j}. According to the VMT requirements, each such value is computed in o(|s|) running time. Since there are Θ(|s|^{2}) such base-cases, the overall running time for their computation is o(|s|^{3}).
Next, we analyze the time needed for all other parts of the algorithm except for those dealing with the base-cases. For simplicity, assume that |s| = 2^{x} for some integer x. Then, due to the fact that at the beginning |I| = |J| = 2^{x}, it is easy to see that the recurrence encounters pairs of intervals I, J such that either |I| = |J| or |I| = 2|J|.
Denote by T(r) and D(r) the time it takes to compute all recursive calls (except for the base-cases) initiated from a call in which |I| = |J| = r (exemplified in Figure 8) and |I| = 2|J| = r (exemplified in Figure 9), respectively.
Therefore, $T(r) = 4T\left(\frac{r}{2}\right) + 4M\left(\frac{r}{2}\right) + \Theta(r^2)$. By the master theorem [44], T(r) is given by

$T(r) = \Theta(r^2 \log^{k+1} r)$, if $M(r) = O(r^2 \log^k r)$ for some k ≥ 0, and

$T(r) = \Theta\left(M\left(\frac{r}{2}\right)\right)$, if $M\left(\frac{r}{2}\right) = \Omega(r^{2+\varepsilon})$ for some ε > 0, and $4M\left(\frac{r}{2}\right) \le dM(r)$ for some constant d < 1 and sufficiently large r.
The running time of all operations except for the computations of base-cases is thus T(|s|). In both cases listed above, T(|s|) = o(|s|^{3}), and therefore the overall running time of the algorithm is subcubic with respect to the length of the input string.
The currently best algorithms for the three standard multiplication variants described in Section 2.2 satisfy M(r) = Ω(r^{2+ε}), and imply that T(r) = Θ(M(r)). When this case holds, and the time complexity of computing the base-cases of the recurrence does not exceed M(|s|) (i.e. when the amortized running time for computing each one of the Θ(|s|^{2}) base-cases is $O\left(\frac{M(|s|)}{|s|^2}\right)$), we say that the problem sustains the standard VMT settings. The running time of the VMT algorithm over such problems is thus Θ(M(|s|)). All realistic Inside VMT problems familiar to the authors sustain the standard VMT settings.
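The second case of the master theorem above can also be checked numerically: iterating the recurrence T(r) = 4T(r/2) + 4M(r/2) + Θ(r²) with, e.g., M(r) = r^{2.8} (an exponent chosen purely for illustration; any M(r) = Ω(r^{2+ε}) behaves similarly), the ratio T(r)/M(r) settles toward a constant, consistent with T(r) = Θ(M(r)):

```python
OMEGA = 2.8  # illustrative exponent with M(r) = r^OMEGA = Omega(r^{2+eps})

def M(r):
    return r ** OMEGA

def T(r):
    """Iterate T(r) = 4*T(r/2) + 4*M(r/2) + r^2, with T(1) = 1."""
    if r <= 1:
        return 1.0
    return 4 * T(r / 2) + 4 * M(r / 2) + r * r

# Ratios T(r)/M(r) for r = 2^4, ..., 2^15: they stay bounded and the
# successive differences shrink, indicating convergence to a constant.
ratios = [T(2 ** k) / M(2 ** k) for k in range(4, 16)]
```

Note that the regularity condition holds here since 4M(r/2) = (4/2^{2.8})M(r) ≈ 0.574·M(r), so the contribution of each recursion level decays geometrically.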
3.3.2 Extension to the case where several inside properties are computed
When K inside properties β^{1}, β^{2}, ..., β^{K} are computed, the algorithm maintains K matrices B^{1}, B^{2}, ..., B^{K}, one for each property, and the precondition of the recursive procedure becomes:

 1. For all 1 ≤ k ≤ K, all values in ${B}_{\left[I,J\right],\left[I,J\right]}^{k}$ are computed, excluding the values in ${B}_{I,J}^{k}$.

 2. If a result of a vector multiplication of the form ${\mu}_{i,j}^{k}={\oplus}_{q\in \left(i,j\right)}\left({\beta}_{i,q}^{{k}^{\prime}}\otimes {\beta}_{q,j}^{{k}^{\u2033}}\right)$ is required for the computation of ${\beta}_{i,j}^{k}$, then ${B}_{I,J}^{k}={\beta}_{I,\left(I,J\right)}^{{k}^{\prime}}\otimes {\beta}_{\left(I,J\right),J}^{{k}^{\u2033}}$.
The algorithm presented in this section extends to handling this case in a natural way, where the modification is that now the matrix multiplications may occur between submatrices taken from different matrices, rather than from a single matrix. The only delicate aspect here is that for the base case of the recurrence, when I = [i, i] and J = [j, j], the algorithm needs to compute the values in the corresponding entries in the sequential order ${B}_{i,j}^{1},{B}_{i,j}^{2},\dots ,{B}_{i,j}^{K}$, since it is possible that the computation of a property ${\beta}_{i,j}^{k}$ requires the availability of a value of the form ${\beta}_{i,j}^{{k}^{\prime}}$ for some k' < k. Since K is a constant which is independent of the length of the input string, it is clear that the running time for this extension remains the same as for the case of a single inside value-set.
The following theorem summarizes our main results with respect to Inside VMT problems.
Theorem 1 For every Inside VMT problem there is an algorithm whose running time over an instance s is o(|s|^{3}). When the problem sustains the standard VMT settings, the running time of the algorithm is Θ(M(|s|)), where M(n) is the running time of the corresponding matrix multiplication algorithm over two n × n matrices.
4 Outside VMT
In this section we discuss how to solve another class of problems, denoted Outside VMT problems, by modifying the algorithm presented in the previous section. Similarly to Inside VMT problems, the goal of Outside VMT problems is to compute sets of outside properties α^{1}, α^{2}, ..., α^{ K } corresponding to some input string (see notations in Section 2.3).
Examples of problems which require the computation of outside properties and adhere to the VMT requirements are the RNA Base Pair Binding Probabilities problem [10] (described in Appendix A) and the WCFG Inside-Outside problem [43]. In both problems, the computation of outside properties requires a set of pre-computed inside properties, where these inside properties can be computed with the Inside VMT algorithm. In such cases, we call the problems Inside-Outside VMT problems.
The following definition describes the family of Outside VMT problems.
Definition 2 A problem is considered an Outside VMT problem if it fulfills the following requirements.
1. Instances of the problem are strings, and the goal of the problem is to compute, for every subinstance $\overline{{s}_{i,j}}$ of an input string s, a series of outside properties ${\alpha}_{i,j}^{1},{\alpha}_{i,j}^{2},\dots ,{\alpha}_{i,j}^{K}$.
2. Let $\overline{{s}_{i,j}}$ be a subinstance of some input string s. Let 1 ≤ k ≤ K, and let ${\mu}_{i,j}^{k}$ be a result of a vector multiplication of the form ${\mu}_{i,j}^{k}={\oplus}_{q\in \left[0,i\right)}\left({\beta}_{q,i}^{k}\otimes {\alpha}_{q,j}^{{k}^{\prime}}\right)$ or of the form ${\mu}_{i,j}^{k}={\oplus}_{q\in \left(j,n\right]}\left({\alpha}_{i,q}^{{k}^{\prime}}\otimes {\beta}_{j,q}^{k}\right)$, for some 1 ≤ k' ≤ K and a set of pre-computed inside properties β^{k}. Assume that the following values are available: ${\mu}_{i,j}^{k}$, all values ${\alpha}_{{i}^{\prime},{j}^{\prime}}^{{k}^{\prime}}$ for 1 ≤ k' ≤ K and s_{i,j} a strict substring of s_{i',j'}, and all values ${\alpha}_{i,j}^{{k}^{\prime}}$ for 1 ≤ k' < k. Then, ${\alpha}_{i,j}^{k}$ can be computed in o(|s|) running time.
3. In the multiplication variant that is used for computing ${\mu}_{i,j}^{k}$, the ⊕ operation is associative, and the domain of elements contains a zero element. In addition, there is a matrix multiplication algorithm for this multiplication variant, whose running time M(n) over two n × n matrices satisfies M(n) = o(n^{3}).
We now turn to describe a generic recursive algorithm for Outside VMT problems. For simplicity, we consider the case of a problem whose goal is to compute a single set of outside properties α, given a single precomputed set of inside properties β. As for the Inside VMT algorithm, it is simple to extend the presented algorithm to the case where the goal is to compute a series of outside properties for every subinstance of the input.
The precondition of a recursive call ComputeOutsideSubMatrix(I, J) is as follows:
 1.
Each entry A_{i,j} in A_{[0,I],[J,n]} contains the value α_{i,j}, except for entries in A_{I,J}.
 2.
If the computation of α_{i,j} requires the result of a vector multiplication of the form μ_{i,j} = ⊕_{q∈[0,i)}(β_{q,i} ⊗ α_{q,j}), then A_{I,J} = (β_{[0,I),I})^{T} ⊗ α_{[0,I),J}. Else, if the computation of α_{i,j} requires the result of a vector multiplication of the form μ_{i,j} = ⊕_{q∈(j,n]}(α_{i,q} ⊗ β_{j,q}), then A_{I,J} = α_{I,(J,n]} ⊗ (β_{J,(J,n]})^{T}.
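As a small sanity check of the first identity above, the following sketch (Python; toy values and names are ours) verifies, for the Dot Product variant (⊕ = +, ⊗ = ×), that entries of the transposed matrix product (β_{[0,I),I})^{T} ⊗ α_{[0,I),J} equal the portions of the corresponding vector multiplications μ_{i,j} contributed by indices q ∈ [0, I); the remaining portions, for q between the start of I and i, are accumulated by deeper recursive calls:

```python
# Toy check (our own random values): an entry of the transposed matrix
# product (beta_{[0,I),I})^T x alpha_{[0,I),J} equals the prefix portion
# sum_{q in [0,I)} beta_{q,i} * alpha_{q,j} of mu_{i,j} (Dot Product variant).
import random

random.seed(7)
n = 8
beta = [[random.random() for _ in range(n + 1)] for _ in range(n + 1)]
alpha = [[random.random() for _ in range(n + 1)] for _ in range(n + 1)]

def matmul(A, B):
    # plain cubic matrix product; a fast subroutine would be plugged in here
    return [[sum(A[i][q] * B[q][j] for q in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

I = [3, 4, 5]          # interval I of row indices
J = [6, 7, 8]          # interval J of column indices
prefix = range(I[0])   # the index set [0, I)

beta_block = [[beta[q][i] for i in I] for q in prefix]    # beta_{[0,I),I}
alpha_block = [[alpha[q][j] for j in J] for q in prefix]  # alpha_{[0,I),J}
product = matmul(transpose(beta_block), alpha_block)

for a, i in enumerate(I):
    for b, j in enumerate(J):
        # portion of mu_{i,j} contributed by q in [0, I)
        mu_part = sum(beta[q][i] * alpha[q][j] for q in prefix)
        assert abs(product[a][b] - mu_part) < 1e-12
```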
The outside VMT algorithm
OutsideVMT (s, β) 

1: Allocate a matrix A of size |s| × |s|, and initialize all entries in A with ϕ elements. 
2: Call ComputeOutsideSubMatrix ([0, n], [0, n]), where n is the length of s. 
3: return A 
ComputeOutsideSubMatrix (I, J) 
precondition: The values in A_{[0,I], [J,n]}, excluding the values in A_{ I,J }, are computed. 
If μ_{ i,j }= ⊕_{q∈[0,i)}(β_{ q,i }⊗ α_{ q,j }), then A_{ I,J } = (β_{[0,I),I})^{ T } ⊗ α_{[0,I),J}. Else, if μ_{ i,j }= ⊕_{q∈(j,n]}(α_{ i,q }⊗ β_{ j,q }), then A_{ I,J } = α_{I,(J,n]}⊗ (β_{J,(J,n]})^{ T }. 
postcondition: A_{[0,I], [J,n]}= α_{[0,I], [J,n]}. 
1: if I = [i, i] and J = [j, j] then 
2: If i ≤ j, compute α_{i,j} (in o(|s|) running time) by querying computed values in A and the value μ_{i,j} which is stored in A_{i,j}. Update A_{i,j} ← α_{i,j}. 
3: else 
4: if |I| ≤ |J| then 
5: Let J_{1} and J_{2} be the two intervals such that J_{2}J_{1} = J and |J_{2}| = ⌊|J|/2⌋. 
6: Call ComputeOutsideSubMatrix (I, J_{1}). 
7: if μ_{ i,j } is of the form ⊕_{q∈(j,n]}(α_{ i,q } ⊗ β_{ j,q }) then 
8: Let L be the interval such that L(J, n] = (J_{2}, n]. 
9: Update ${A}_{I,{J}_{2}}\leftarrow \left({A}_{I,L}\otimes {\left({\beta}_{{J}_{2},L}\right)}^{T}\right)\oplus {A}_{I,{J}_{2}}$. 
10: end if 
11: Call ComputeOutsideSubMatrix (I, J_{2}). 
12: else 
13: Let I_{1} and I_{2} be the two intervals such that I_{1}I_{2} = I and |I_{1}| = ⌊|I|/2⌋. 
14: Call ComputeOutsideSubMatrix (I_{1}, J). 
15: if μ_{ i,j } is of the form ⊕_{q∈[0,i)}(β_{ q,i } ⊗ α_{ q,j }) then 
16: Let L be the interval such that [0, I)L = [0, I_{2}). 
17: Update ${A}_{{I}_{2},J}\leftarrow {A}_{{I}_{2},J}\oplus \left({\left({\beta}_{L,{I}_{2}}\right)}^{T}\otimes {A}_{L,J}\right)$. 
18: end if 
19: Call ComputeOutsideSubMatrix (I_{2}, J). 
20: end if 
21: end if 
Theorem 2 For every Outside VMT problem there is an algorithm whose running time over an instance s is o(|s|^{3}). When the problem sustains the standard VMT settings, the running time of the algorithm is Θ(M(|s|)), where M(n) is the running time of the corresponding matrix multiplication algorithm over two n × n matrices.
5 Multiple String VMT
In this section we describe another extension of the VMT framework, intended for problems in which the instance is a set of strings, rather than a single string. Examples of such problems are the RNA Simultaneous Alignment and Folding problem [15, 37], which is described in detail in Appendix B, and the RNA-RNA Interaction problem [9]. Additional problems which exhibit a slight divergence from the presented template, such as the RNA-RNA Interaction Partition Function problem [12] and the RNA Sequence to Structured-Sequence Alignment problem [13], can be solved in a similar manner.
In order to define the Multiple String VMT variant in a general manner, we first give some related notation. An instance of a Multiple String VMT problem is a set of strings S = (s^{0}, s^{1}, ..., s^{m-1}), where the length of a string s^{p} ∈ S is denoted by n_{p}. A position in S is a set of indices X = (i_{0}, i_{1}, ..., i_{m-1}), where each index i_{p} ∈ X is in the range 0 ≤ i_{p} ≤ n_{p}. The number of different positions in S is denoted by $|S|=\prod _{0\le p<m}\left({n}_{p}+1\right)$.
For two positions X = (i_{0}, i_{1}, ..., i_{m-1}) and Y = (j_{0}, j_{1}, ..., j_{m-1}), write X ≤ Y if i_{p} ≤ j_{p} for every 0 ≤ p < m; in this case, S_{X,Y} denotes the corresponding subinstance of S, containing the substrings ${s}_{{i}_{p},{j}_{p}}^{p}$ for 0 ≤ p < m. The notation X ≰ Y is used to indicate that it is not true that X ≤ Y. Here, the relation '≤' is not a linear relation, thus X ≰ Y does not necessarily imply that Y < X. In the case where X ≰ Y, we say that S_{X,Y} does not correspond to a valid subinstance (Figure 15b). The notations $\bar{0}$ and N will be used in order to denote the first position (0, 0, ..., 0) and the last position (n_{0}, n_{1}, ..., n_{m-1}) in S, respectively. The notations which were used previously for intervals are extended as follows: [X, Y] denotes the set of all positions Q such that X ≤ Q ≤ Y, (X, Y) denotes the set of all positions Q such that X < Q < Y, [X, Y) denotes the set of all positions Q such that X ≤ Q < Y, and (X, Y] denotes the set of all positions Q such that X < Q ≤ Y. Note that while previously these notations referred to intervals with a sequential order defined over their elements, now they correspond to sets, where we assume that the order of elements in a set is unspecified.
Inside and outside properties with respect to multiple string instances are defined in a similar way as for a single string: an inside property β_{X,Y} is a property that depends only on the subinstance S_{X,Y}, while an outside property α_{X,Y} depends on the residual subinstance of S, obtained by excluding from each string in S the corresponding substring in S_{X,Y}. In what follows, we define Multiple String Inside VMT problems, and show how to adapt the Inside VMT algorithm for such problems. The "outside" variant can be formulated and solved in a similar manner.
Definition 3 A problem is considered a Multiple String Inside VMT problem if it fulfills the following requirements.
1. Instances of the problem are sets of strings, and the goal of the problem is to compute, for every subinstance S_{X,Y} of an input instance S, a series of inside properties ${\beta}_{X,Y}^{1},{\beta}_{X,Y}^{2},\dots ,{\beta}_{X,Y}^{K}$.
2. Let S_{X,Y} be a subinstance of some input instance S. Let 1 ≤ k ≤ K, and let ${\mu}_{X,Y}^{k}$ be a value of the form ${\mu}_{X,Y}^{k}={\oplus}_{Q\in \left(X,Y\right)}\left({\beta}_{X,Q}^{{k}^{\prime}}\otimes {\beta}_{Q,Y}^{{k}^{\u2033}}\right)$, for some 1 ≤ k', k″ ≤ K. Assume that the following values are available: ${\mu}_{X,Y}^{k}$, all values ${\beta}_{{X}^{\prime},{Y}^{\prime}}^{{k}^{\prime}}$ for 1 ≤ k' ≤ K and S_{X',Y'} a strict subinstance of S_{X,Y}, and all values ${\beta}_{X,Y}^{{k}^{\prime}}$ for 1 ≤ k' < k. Then, ${\beta}_{X,Y}^{k}$ can be computed in o(|S|) running time.
3. In the multiplication variant that is used for computing ${\mu}_{X,Y}^{k}$, the ⊕ operation is associative and commutative, and the domain of elements contains a zero element. In addition, there is a matrix multiplication algorithm for this multiplication variant, whose running time M(n) over two n × n matrices satisfies M(n) = o(n^{3}).
Note that, with respect to the single string variant, there is an additional requirement here: the ⊕ operator must be commutative. This requirement is added since, while in the single string variant split positions in the interval (i, j) could be examined in a sequential order, and the (single string) Inside VMT algorithm retains this order when evaluating ${\mu}_{i,j}^{k}$, here there is no such natural sequential order defined over the positions in the set (X, Y). The ⊕ commutativity requirement is met in all standard variants of matrix multiplication, and thus does not pose a significant restriction in practice.
Consider an instance S = (s^{0}, s^{1}, ..., s^{m-1}) of a Multiple String Inside VMT problem, and the simple case where a single property set β needs to be computed (where β corresponds to all inside properties of the form β_{X,Y}). Again, we compute the elements of β in a square matrix B of size |S| × |S|, and show that values of the form μ_{X,Y} correspond to results of vector multiplications within this matrix. For simplicity, assume that all strings s^{p} ∈ S are of the same length n, and thus |S| = (n + 1)^{m} (this assumption may be easily relaxed).
Define a one-to-one and onto mapping h between positions in S and indices in the interval [0, |S|), where for a position X = (i_{0}, i_{1}, ..., i_{m-1}) in S,
$h\left(X\right)={\sum}_{p=0}^{m-1}{i}_{p}\cdot {\left(n+1\right)}^{p}$. Let h^{-1} denote the inverse mapping of h, i.e. h(X) = i ⇔ h^{-1}(i) = X. Observe that X ≤ Y implies that h(X) ≤ h(Y), though the opposite is not necessarily true (i.e. it is possible that i ≤ j and yet h^{-1}(i) ≰ h^{-1}(j), as in the example in Figure 15b).
Each value of the form β_{X,Y} will be computed and stored in a corresponding entry B_{i,j}, where i = h(X) and j = h(Y). Entries of B which do not correspond to valid subinstances of S, i.e. entries B_{i,j} such that h^{-1}(i) ≰ h^{-1}(j), will hold the value ϕ. The matrix B is computed by applying the Inside VMT algorithm (Table 2) with a simple modification: in the base-cases of the recurrence (line 2 in Procedure ComputeInsideSubMatrix, Table 2), the condition for computing the entry B_{i,j} is that h^{-1}(i) ≤ h^{-1}(j) rather than i ≤ j. If h^{-1}(i) ≤ h^{-1}(j), the property ${\beta}_{{h}^{-1}\left(i\right),{h}^{-1}\left(j\right)}$ is computed and stored in this entry, and otherwise the entry retains its initial value ϕ.
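A short sketch (Python; function names are ours) illustrates the mapping h, its inverse, and the fact that the index order only partially respects the subinstance relation:

```python
# Illustration of the mixed-radix mapping h between positions in S and
# matrix indices, for m strings of common length n (names are ours).
from itertools import product

def h(X, n):
    # h(X) = sum_p i_p * (n+1)^p
    return sum(i * (n + 1) ** p for p, i in enumerate(X))

def h_inv(idx, n, m):
    # inverse mapping: read the digits of idx in base n+1
    X = []
    for _ in range(m):
        X.append(idx % (n + 1))
        idx //= n + 1
    return tuple(X)

def leq(X, Y):
    # X <= Y componentwise; exactly then S_{X,Y} is a valid subinstance
    return all(x <= y for x, y in zip(X, Y))

n, m = 3, 2
positions = list(product(range(n + 1), repeat=m))

# h is a bijection between positions and the interval [0, |S|)
assert sorted(h(X, n) for X in positions) == list(range((n + 1) ** m))
assert all(h_inv(h(X, n), n, m) == X for X in positions)

# X <= Y implies h(X) <= h(Y) ...
assert all(h(X, n) <= h(Y, n) for X in positions for Y in positions if leq(X, Y))

# ... but not conversely: h((1,0)) = 1 <= h((0,1)) = 4, yet (1,0) is not
# <= (0,1), so the corresponding entry would hold phi in the base case.
assert h((1, 0), n) <= h((0, 1), n) and not leq((1, 0), (0, 1))
```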
The following theorem summarizes the main result of this section.
Theorem 3 For every Multiple String (Inside or Outside) VMT problem there is an algorithm whose running time over an instance S is o(|S|^{3}). When the problem sustains the standard VMT settings, the running time of the algorithm is Θ(M(|S|)), where M(n) is the running time of the corresponding matrix multiplication algorithm over two n × n matrices.
6 Concluding remarks
This paper presents a simplification and a generalization of Valiant's technique, which speeds up a family of algorithms by incorporating fast matrix multiplication procedures. We suggest generic templates that identify problems for which the approach is applicable, where these templates are based on general recursive properties of the problems, rather than on their specific algorithms. Generic algorithms are described for solving all problems sustaining these templates.
The presented framework yields new worst case running time bounds for a family of important problems. The examples given here come from the fields of RNA secondary structure prediction and CFG Parsing. Recently, we have shown that this technique also applies to the Edit Distance with Duplication and Contraction problem [45], suggesting that more problems from other domains may be similarly accelerated. While previous works describe other practical acceleration techniques for some of the mentioned problems [14, 25, 28, 32–38], Valiant's technique, along with the Four Russians technique [28, 30, 31], are the only two techniques which currently allow reducing the theoretical asymptotic worst case running times of the standard algorithms without compromising the correctness of the computations.
The usage of the Four Russians technique can be viewed as a special case of using Valiant's technique. Essentially, the Four Russians technique enumerates solutions for small computations, stores these solutions in lookup tables, and then accelerates bigger computations by replacing the execution of sub-computations with queries to the lookup tables. In the original paper [29], this technique was applied to obtain an O(log n) improvement factor for Boolean Matrix Multiplication. This approach was modified in [30] in order to accelerate Min/Max-Plus Multiplication, for integer matrices in which the differences between adjacent cells are taken from a finite interval of integers. While in [30] this technique was used ad hoc for accelerating RNA folding, it is possible to decouple the description of the fast matrix multiplication technique from the RNA folding algorithm, and to present the algorithm of [30] in the same framework presented here. The latter special matrix multiplication variant was further accelerated in [45] to $O\left(\frac{n^3}{\log^2 n}\right)$ running time, implying that RNA folding under discrete scoring schemes (e.g. [6]) can be computed in $O\left(\frac{n^3}{\log^2 n}\right)$ time, instead of in $O\left(\frac{n^3 \log^3 \log n}{\log^2 n}\right)$ time as implied by the fastest Min/Max-Plus Multiplication algorithm for general matrices [27].
Many of the current acceleration techniques for RNA and CFG related algorithms are based on sparsification, and are applicable only to optimization problems. Another important advantage of Valiant's technique is that it is the only technique currently known to reduce the running times of algorithms for the non-optimization problem variants, such as RNA Partition Function related problems [10, 12] and the WCFG Inside-Outside algorithm [43], in which the goal is to compute the sum of scores of all solutions for a given input, instead of the score of an optimal solution.
Our presentation of the algorithm also improves upon previous descriptions in additional aspects. In Valiant's original description there was some intersection between the inputs of recursive calls in different branches of the recursion tree, where portions of the data were recomputed more than once. Following Rytter's description of the algorithm [42], our formulation applies the recurrence to mutually-exclusive regions of the data, in a classic divide-and-conquer fashion. The formulation is relatively explicit, avoiding reductions to problems such as computing the transitive closure of a matrix [23, 26] or shortest paths on lattice graphs [42]. The requirements specified here for VMT problems are less strict than the requirements presented in previous works for such problems. Using the terminology of this work, additional requirements in [23] for Inside VMT problems are that the ⊕ operation of the multiplication is commutative, and that the ⊗ operation distributes over ⊕. Our explicit formulation of the algorithm makes it easy to observe that none of the operations of the presented algorithm requires the assumption that ⊗ distributes over ⊕. In addition, it can be seen that the ⊕ operation is always applied in a left-to-right order (for non-Multiple String VMT problems), thus the computation is correct even if ⊕ is not commutative. In [42], it was required that the algebraic structure $\left(\mathcal{D},\oplus ,\otimes \right)$ be a semiring, i.e. adding to the requirements of [23] the requirement that $\mathcal{D}$ also contains an identity element 1 with respect to the ⊗ operation. Again, the algorithms presented here do not require that such a property be obeyed.
The time complexities of VMT algorithms are dictated by the time complexities of matrix multiplication algorithms. As matrix multiplication variants are essential operations in many computational problems, much work has been done to improve both the theoretical and the practical running times of these operations, including many recent achievements [24, 27, 40, 41, 46–50]. Due to its importance, it is expected that even further improvements in this domain will be developed in the future, allowing faster implementations of VMT algorithms. Theoretical sequential sub-cubic matrix multiplication algorithms (e.g. [24]) are usually considered impractical for realistic matrix sizes. However, practical, hardware-based fast computations of matrix multiplications have been gaining popularity in recent years [40, 41], due to the highly parallelized nature of such computations and the availability of new technologies that exploit this parallelism. Such technologies were previously used for some related problems [51, 52], yet there is an intrinsic advantage to their utilization via the VMT framework. While optimizing the code for each specific problem and each specific hardware requires special expertise, the VMT framework conveniently allows deferring the bottleneck part of the computation to the execution of matrix multiplication subroutines, and thus off-the-shelf, hardware-tailored optimized solutions can be easily integrated into all VMT problems, instead of being developed separately for each problem.
Appendix
A RNA Partition Function and Base Pair Binding Probabilities
The following example is a simplified formalization of the RNA Partition Function and Base Pair Binding Probabilities problem [10]. The example demonstrates the Inside-Outside VMT settings, with respect to the Dot Product variant. This problem requires the computation of several inside and outside properties for every subinstance of the input.
Given an RNA string s, the problem poses the following two questions:
 1.
For a given folding F of s, what is the probability that s folds spontaneously to F?
 2.
For every pair of indices a and b, what is the probability that a spontaneous folding of s contains the base-pair a • b?
Note that a naive computation of the RNA folding partition function would require the examination of all possible foldings of the input instance, where the number of such foldings grows exponentially with the length of the instance [53]. In [10], McCaskill showed how to efficiently compute the partition function for RNA folding in Θ(n^{3}) running time. In addition, he presented a variant that allows the computation of base-pairing probabilities for every given pair of indices in the RNA string. We next present a simplified version of McCaskill's computation. For the sake of clarity of presentation, assume that w(F) = e^{|F|}, where |F| is the number of base-pairs in F, and assume that non-complementary base-pairs are not allowed to occur. This simplification of the weight function allows focusing on the essential properties of the computation, avoiding a tedious notation.
For the "inside" phase of the computation, define two sets of inside properties β^{1} and β^{2} with respect to an input RNA string s. The property ${\beta}_{i,j}^{2}$ is the partition function of the substring s_{i,j}, i.e. the sum of weights of all possible foldings of s_{i,j}. The property ${\beta}_{i,j}^{1}$ is the sum of weights of all possible foldings of s_{i,j} that contain the base-pair i • (j - 1). For j ≤ i + 1, the only possible folding of s_{i,j} is the empty folding, and thus ${\beta}_{i,j}^{1}=0$ (since there are no foldings of s_{i,j} that contain the base-pair i • (j - 1)) and ${\beta}_{i,j}^{2}=1$ (since the weight of the empty folding is e^{0} = 1). For j > i + 1, ${\beta}_{i,j}^{1}$ and ${\beta}_{i,j}^{2}$ can be recursively computed as follows:
${\beta}_{i,j}^{1}={\delta}_{i,j-1}\cdot {\beta}_{i+1,j-1}^{2}$, and ${\beta}_{i,j}^{2}={\beta}_{i,j}^{1}+{\beta}_{i,j-1}^{2}+{\mu}_{i,j}^{2}$ with ${\mu}_{i,j}^{2}=\sum _{q\in \left(i,j-1\right)}\left\{{\beta}_{i,q}^{2}\cdot {\beta}_{q,j}^{1}\right\}$,
where δ_{i,j-1} = e if s_{i} and s_{j-1} are complementary bases, and otherwise δ_{i,j-1} = 0.
Since ${\beta}_{j-1,j}^{1}=0$, ${\mu}_{i,j}^{2}$ can be written as ${\mu}_{i,j}^{2}=\sum _{q\in \left(i,j\right)}\left\{{\beta}_{i,q}^{2}\cdot {\beta}_{q,j}^{1}\right\}$.
The computation of the inside property sets β^{1} and β^{2} abides by the Inside VMT requirements of Definition 1 with respect to the Dot Product variant, and thus these sets may be computed by the Inside VMT algorithm. The partition function of the input RNA string s is given by $Z={\beta}_{0,n}^{2}$.
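To make the inside phase concrete, the following sketch (Python; our own illustration and naming, not McCaskill's original code) evaluates the recurrences naively in Θ(n³) time, using the decomposition β²_{i,j} = β¹_{i,j} + β²_{i,j-1} + μ²_{i,j} (our derivation, consistent with the definitions above), and validates Z = β²_{0,n} against a brute-force enumeration of all foldings of a short string; complementarity is restricted to A-U and C-G (our assumption):

```python
# Naive Theta(n^3) evaluation of the simplified inside recurrences for
# beta^1 and beta^2, validated against brute-force enumeration of foldings.
# Assumptions (ours, for illustration): w(F) = e^|F|, and only A-U and C-G
# base-pairs are complementary.
import math
from itertools import combinations

def complementary(a, b):
    return {a, b} in ({"A", "U"}, {"C", "G"})

def partition_dp(s):
    n = len(s)
    b1 = [[0.0] * (n + 1) for _ in range(n + 1)]  # beta^1_{i,j}
    b2 = [[1.0] * (n + 1) for _ in range(n + 1)]  # beta^2_{i,j} = 1 for j <= i+1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            delta = math.e if complementary(s[i], s[j - 1]) else 0.0
            b1[i][j] = delta * b2[i + 1][j - 1]
            # mu^2_{i,j}: a Dot Product vector multiplication over q in (i, j)
            mu = sum(b2[i][q] * b1[q][j] for q in range(i + 1, j))
            # base j-1 is unpaired (b2[i][j-1]), paired with base i (b1[i][j]),
            # or paired with an intermediate base q (mu)
            b2[i][j] = b1[i][j] + b2[i][j - 1] + mu
    return b2[0][n]  # Z, the partition function of s

def partition_brute(s):
    # enumerate every set of disjoint, non-crossing, complementary base-pairs
    n = len(s)
    cand = [(a, b) for a, b in combinations(range(n), 2) if complementary(s[a], s[b])]
    total = 0.0
    for k in range(len(cand) + 1):
        for F in combinations(cand, k):
            flat = [p for pair in F for p in pair]
            if len(set(flat)) != len(flat):
                continue  # two pairs share a base
            if any(a < c < b < d for a, b in F for c, d in F):
                continue  # crossing pairs
            total += math.e ** len(F)
    return total

for s in ("AU", "AUU", "GACGUC"):
    assert abs(partition_dp(s) - partition_brute(s)) < 1e-9
```

For example, partition_dp("AU") yields 1 + e: the empty folding plus the single folding containing the base-pair 0 • 1.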
The second phase of McCaskill's algorithm computes, for every pair of indices a and b in s, the probability that a spontaneous folding of s contains the base-pair a • b. According to the partition function model, this probability equals the sum of weights of foldings of s which contain a • b, divided by Z. Therefore, there is a need to compute values that correspond to sums of weights of foldings which contain specific base-pairs. We denote by γ_{i,j} the sum of weights of foldings of s which contain the base-pair i • (j - 1). The probability of a base-pair a • b is thus $\frac{{\gamma}_{a,b+1}}{Z}$. In order to compute values of the form γ_{i,j}, we define the following outside property sets α^{1}, α^{2}, α^{3} and α^{4}.
A value of the form ${\alpha}_{i,j}^{1}$ reflects the sum of weights of all foldings of $\overline{{s}_{i,j}}$ that contain the base-pair (i - 1) • j. A value of the form ${\alpha}_{i,j}^{2}$ reflects the sum of weights of all foldings of $\overline{{s}_{i,j}}$ that contain a base-pair of the form (q - 1) • j, for some index q ∈ [0, i). A value of the form ${\alpha}_{i,j}^{3}$ reflects the sum of weights of all foldings of $\overline{{s}_{i,j}}$ that contain a base-pair of the form j • (q - 1), for some index q ∈ (j, n], and a value of the form ${\alpha}_{i,j}^{4}$ reflects the partition function of $\overline{{s}_{i,j}}$, i.e. the sum of weights of all foldings of $\overline{{s}_{i,j}}$.
It now remains to show how to compute values of the form ${\alpha}_{i,j}^{4}$. This computation is shown next, via a mutually recursive formulation that also computes values in the sets α^{1}, α^{2}, and α^{3}.
It can now be verified that the computation of the property sets α^{1}, α^{2}, α^{3}, and α^{4} sustains all the requirements of an Outside VMT problem as listed in Definition 2, and therefore the Base Pair Binding Probabilities problem may be solved by the Outside VMT algorithm.
B RNA Simultaneous Alignment and Folding
The RNA Simultaneous Alignment and Folding (SAF) problem is an example of a Multiple String VMT problem, defined by Sankoff [15]. Similarly to the classical sequence alignment problem, the goal of the SAF problem is to find an alignment of several RNA strings, and, in addition, to find a common folding for the aligned segments of the strings. The score of a given alignment with folding takes into account both standard alignment elements, such as character matchings, substitutions, and indels, and the folding score. For clarity, our formulation assumes a simplified scoring scheme.
We use the notation of Section 5 regarding multiple string instances, positions, and subinstances. Let Q = (q^{0}, q^{1}, ..., q^{m-1}) be a position in a multiple string instance S = (s^{0}, s^{1}, ..., s^{m-1}). Say that a position Q' = (q'^{0}, q'^{1}, ..., q'^{m-1}) in S is a local increment of Q if Q < Q', and for every 0 ≤ p < m, q'^{p} ≤ q^{p} + 1. That is, a local increment of a position increases the value of each one of the sequence indices of the position by at most 1, where at least one of the indices strictly increases. Symmetrically, say that in the above case Q is a local decrement of Q'. The position sets inc(Q) and dec(Q) denote the set of all local increments and the set of all local decrements of Q, respectively. The size of each one of these sets is O(2^{m}). An SAF instance S is a set of RNA strings S = (s^{0}, s^{1}, ..., s^{m-1}). An alignment A of S is a set of strings A = (a^{0}, a^{1}, ..., a^{m-1}) over the alphabet {A, C, G, U, '-'} ('-' is called the "gap" character), satisfying that:

All strings in A are of the same length.

For every 0 ≤ p < m, the string which is obtained by removing from a^{p} all gap characters is exactly the string s^{p}.
The score of an alignment with folding (A, F), where F is a folding (a set of non-crossing pairs) of the column indices of A, is given by ${\sum}_{r}\rho \left({A}_{r}\right)+{\sum}_{a\bullet b\in F}\tau \left({A}_{a},{A}_{b}\right)$, where A_{r} denotes the r-th column of A. Here, ρ is a column aligning score function, and τ is a column-pair aligning score function. ρ(A_{r}) reflects the alignment quality of the r-th column in A, giving high scores for aligning nucleotides of the same type and penalizing alignment of nucleotides of different types or aligning nucleotides against gaps. τ(A_{a}, A_{b}) reflects the benefit from forming a base-pair in each one of the RNA strings in S between the bases corresponding to columns A_{a} and A_{b} of the alignment (if gaps or non-complementary bases are present in these columns, it may induce a score penalty). In addition, compensatory mutations in these columns may also increase the value of τ(A_{a}, A_{b}) (thus it may compensate for some penalties taken into account in the computation of ρ(A_{a}) and ρ(A_{b})). We assume that both scoring functions ρ and τ can be computed in O(m) running time. In addition, we use as arguments for ρ and τ subinstances of the form S_{Q,Q'}, where Q' is a local increment of Q. In such cases, S_{Q,Q'} corresponds to a unique alignment column, where ρ and τ are computed with respect to this column.
The goal of the SAF problem is to find the maximum score of an alignment with folding for a given SAF instance S. In order to compute this value, we define a set β of inside properties with respect to S, where β_{X, Y}is the maximum score of an alignment with folding of S_{X, Y}.
Similarly to single RNA string folding (Section 3.1), we think of two kinds of alignments with foldings for the subinstance S_{X, Y}: type I are alignments with foldings in which the first and last alignment columns are paired to each other, and type II are alignments with foldings in which the first and last alignment columns are not paired to each other.
In the computation of term (I) of the recurrence, O(2^{2m}) expressions are examined, and each expression is computed in O(m) running time. Under the reasonable assumption that m is sufficiently small with respect to |S| (typically, m = 2), we may assume that m2^{2m} = o(|S|), and even that $m{2}^{2m}=O\left(\frac{M\left(|S|\right)}{|S{|}^{2}}\right)$, where M(r) is the running time of the Max-Plus multiplication of two r × r matrices.
The computation of term (II) is a Max-Plus vector multiplication of the form μ_{X,Y} = ⊕_{Q∈(X,Y)}(β_{X,Q} ⊗ β_{Q,Y}), and thus the recursive formulation abides by the standard VMT settings for the Multiple String Inside VMT requirements, as listed in Definition 3. Therefore, the SAF problem over an instance S can be solved in Θ(M(|S|)) = o(|S|^{3}) running time. This result improves upon the running time of the original algorithm [15], which is Θ(|S|^{3}).
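For reference, the Max-Plus multiplication used in term (II) can be made concrete with a short sketch (Python; ours, a cubic-time reference implementation — the VMT speedup comes from substituting a subcubic Max-Plus algorithm such as that of [27]):

```python
# Reference (cubic-time) Max-Plus matrix multiplication: max plays the
# role of the addition, + plays the role of the multiplication, and -inf
# is the zero element required by Definition 3.
NEG_INF = float("-inf")

def maxplus(A, B):
    # (A (x) B)[i][j] = max_q (A[i][q] + B[q][j])
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[max((A[i][q] + B[q][j] for q in range(inner)), default=NEG_INF)
             for j in range(cols)] for i in range(rows)]

A = [[0, 1], [2, NEG_INF]]
B = [[1, 0], [0, 3]]
print(maxplus(A, B))  # [[1, 4], [3, 2]]
```

Note that max is associative and commutative and that NEG_INF annihilates under +, so this variant satisfies the algebraic requirements of Definition 3.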
Declarations
Acknowledgements
The research of SZ and MZU was supported by ISF grant 478/10.
Authors’ Affiliations
References
 Eddy SR: Noncoding RNA genes. Current Opinions in Genetic Development. 1999, 9 (6): 695699. 10.1016/S0959437X(99)000222. http://view.ncbi.nlm.nih.gov/pubmed/10607607 10.1016/S0959437X(99)000222View ArticleGoogle Scholar
 Mandal M, Breaker R: Gene regulation by riboswitches. Cell. 2004, 6: 451463.Google Scholar
 GriffithsJones S, Moxon S, Marshall M, Khanna A, Eddy S, Bateman A: Rfam: annotating noncoding RNAs in complete genomes. Nucleic Acids Research. 2005, D12133 DatabaseGoogle Scholar
 , Backofen R, Bernhart SH, Flamm C, Fried C, Fritzsch G, Hackermuller J, Hertel J, Hofacker IL, Missal K, Mosig A, Prohaska SJ, Rose D, Stadler PF, Tanzer A, Washietl S, Will S: RNAs everywhere: genomewide annotation of structured RNAs. J Exp Zoolog B Mol Dev Evol. 2007, 308: 125.Google Scholar
 Gardner P, Giegerich R: A comprehensive comparison of comparative RNA structure prediction approaches. BMC Bioinformatics. 2004, 5: 140. 10.1186/1471-2105-5-140
 Nussinov R, Jacobson AB: Fast Algorithm for Predicting the Secondary Structure of Single-Stranded RNA. PNAS. 1980, 77 (11): 6309-6313. 10.1073/pnas.77.11.6309
 Zuker M, Stiegler P: Optimal Computer Folding of Large RNA Sequences using Thermodynamics and Auxiliary Information. Nucleic Acids Research. 1981, 9: 133-148. 10.1093/nar/9.1.133
 Hofacker IL, Fontana W, Stadler PF, Bonhoeffer SL, Tacker M, Schuster P: Fast Folding and Comparison of RNA Secondary Structures. Monatsh Chem. 1994, 125: 167-188. 10.1007/BF00818163
 Alkan C, Karakoç E, Nadeau JH, Sahinalp SC, Zhang K: RNA-RNA Interaction Prediction and Antisense RNA Target Search. Journal of Computational Biology. 2006, 13 (2): 267-282. 10.1089/cmb.2006.13.267
 McCaskill JS: The equilibrium partition function and base pair binding probabilities for RNA secondary structure. Biopolymers. 1990, 29 (6-7): 1105-1119. 10.1002/bip.360290621
 Bernhart S, Tafer H, Mückstein U, Flamm C, Stadler P, Hofacker I: Partition function and base pairing probabilities of RNA heterodimers. Algorithms for Molecular Biology. 2006, 1: 3. 10.1186/1748-7188-1-3
 Chitsaz H, Salari R, Sahinalp SC, Backofen R: A partition function algorithm for interacting nucleic acid strands. Bioinformatics. 2009, 25 (12): i365-i373. 10.1093/bioinformatics/btp212
 Zhang K: Computing Similarity Between RNA Secondary Structures. INTSYS '98: Proceedings of the IEEE International Joint Symposia on Intelligence and Systems. 1998, 126. Washington, DC, USA: IEEE Computer Society
 Jansson J, Ng S, Sung W, Willy H: A faster and more space-efficient algorithm for inferring arc-annotations of RNA sequences through alignment. Algorithmica. 2006, 46 (2): 223-245. 10.1007/s00453-006-1207-0
 Sankoff D: Simultaneous Solution of the RNA Folding, Alignment and Protosequence Problems. SIAM Journal on Applied Mathematics. 1985, 45 (5): 810-825. 10.1137/0145048
 Sakakibara Y, Brown M, Hughey R, Mian I, Sjölander K, Underwood R, Haussler D: Stochastic context-free grammars for tRNA modeling. Nucleic Acids Research. 1994, 22 (23): 5112. 10.1093/nar/22.23.5112
 Teitelbaum R: Context-Free Error Analysis by Evaluation of Algebraic Power Series. STOC, ACM. 1973, 196-199.
 Dowell R, Eddy S: Evaluation of several lightweight stochastic context-free grammars for RNA secondary structure prediction. BMC Bioinformatics. 2004, 5: 71. 10.1186/1471-2105-5-71
 Do CB, Woods DA, Batzoglou S: CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics. 2006, 22 (14): e90-e98. 10.1093/bioinformatics/btl246
 Cocke J, Schwartz JT: Programming Languages and Their Compilers. 1970, New York: Courant Institute of Mathematical Sciences
 Kasami T: An efficient recognition and syntax analysis algorithm for context-free languages. Tech. Rep. AFCRL-65-758, Air Force Cambridge Research Laboratory, Bedford, Mass. 1965
 Younger DH: Recognition and Parsing of Context-Free Languages in Time n^{3}. Information and Control. 1967, 10 (2): 189-208. 10.1016/S0019-9958(67)80007-X
 Valiant L: General Context-Free Recognition in Less than Cubic Time. Journal of Computer and System Sciences. 1975, 10: 308-315. 10.1016/S0022-0000(75)80046-8
 Coppersmith D, Winograd S: Matrix Multiplication via Arithmetic Progressions. J Symb Comput. 1990, 9 (3): 251-280. 10.1016/S0747-7171(08)80013-2
 Akutsu T: Approximation and Exact Algorithms for RNA Secondary Structure Prediction and Recognition of Stochastic Context-free Languages. Journal of Combinatorial Optimization. 1999, 3: 321-336. 10.1023/A:1009898029639
 Benedí J, Sánchez J: Fast Stochastic Context-Free Parsing: A Stochastic Version of the Valiant Algorithm. Lecture Notes in Computer Science. 2007, 4477: 80-88. 10.1007/978-3-540-72847-4_12
 Chan TM: More Algorithms for All-Pairs Shortest Paths in Weighted Graphs. SIAM J Comput. 2010, 39 (5): 2075-2089. 10.1137/08071990X
 Graham SL, Harrison MA, Ruzzo WL: An improved context-free recognizer. ACM Transactions on Programming Languages and Systems. 1980, 2 (3): 415-462. 10.1145/357103.357112
 Arlazarov VL, Dinic EA, Kronrod MA, Faradzev IA: On Economical Construction of the Transitive Closure of an Oriented Graph. Soviet Math Dokl. 1970, 11: 1209-1210.
 Frid Y, Gusfield D: A Simple, Practical and Complete $O\left(\frac{n^{3}}{\log n}\right)$-Time Algorithm for RNA Folding Using the Four-Russians Speedup. WABI. 2009, 5724: 97-107. Springer
 Frid Y, Gusfield D: A Worst-Case and Practical Speedup for the RNA Co-folding Problem Using the Four-Russians Idea. WABI. 2010, 1-12.
 Klein D, Manning CD: A* Parsing: Fast Exact Viterbi Parse Selection. HLT-NAACL. 2003, 119-126.
 Wexler Y, Zilberstein CBZ, Ziv-Ukelson M: A Study of Accessible Motifs and RNA Folding Complexity. Journal of Computational Biology. 2007, 14 (6): 856-872. 10.1089/cmb.2007.R020
 Ziv-Ukelson M, Gat-Viks I, Wexler Y, Shamir R: A Faster Algorithm for Simultaneous Alignment and Folding of RNA. Journal of Computational Biology. 2010, 17 (8): 1051-1065. http://www.liebertonline.com/doi/abs/10.1089/cmb.2009.0197 10.1089/cmb.2009.0197
 Backofen R, Tsur D, Zakov S, Ziv-Ukelson M: Sparse RNA folding: Time and space efficient algorithms. Journal of Discrete Algorithms. 2010. http://www.sciencedirect.com/science/article/B758J511TNF71/2/8d480ed24b345199f8997c1141a47d60
 Salari R, Möhl M, Will S, Sahinalp S, Backofen R: Time and Space Efficient RNA-RNA Interaction Prediction via Sparse Folding. RECOMB. 2010, 6044: 473-490.
 Havgaard J, Lyngsø R, Stormo G, Gorodkin J: Pairwise local structural alignment of RNA sequences with sequence similarity less than 40%. Bioinformatics. 2005, 21 (9): 1815-1824. 10.1093/bioinformatics/bti279
 Will S, Reiche K, Hofacker IL, Stadler PF, Backofen R: Inferring Non-Coding RNA Families and Classes by Means of Genome-Scale Structure-Based Clustering. PLOS Computational Biology. 2007, 3 (4): e65. 10.1371/journal.pcbi.0030065
 Zakov S, Tsur D, Ziv-Ukelson M: Reducing the Worst Case Running Times of a Family of RNA and CFG Problems, Using Valiant's Approach. WABI. 2010, 65-77.
 Ryoo S, Rodrigues CI, Baghsorkhi SS, Stone SS, Kirk DB, Hwu WmW: Optimization principles and application performance evaluation of a multithreaded GPU using CUDA. Proceedings of the 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '08. New York, NY, USA: ACM. 2008, 73-82.
 Volkov V, Demmel JW: Benchmarking GPUs to tune dense linear algebra. Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, SC '08. 2008, 31:1-31:11. Piscataway, NJ, USA: IEEE Press. http://portal.acm.org/citation.cfm?id=1413370.1413402
 Rytter W: Context-free recognition via shortest paths computation: a version of Valiant's algorithm. Theoretical Computer Science. 1995, 143 (2): 343-352. 10.1016/0304-3975(94)00265-K
 Baker JK: Trainable grammars for speech recognition. The Journal of the Acoustical Society of America. 1979, 65 (S1): S132.
 Bentley JL, Haken D, Saxe JB: A General Method for Solving Divide-and-conquer Recurrences. SIGACT News. 1980, 12 (3): 36-44. 10.1145/1008861.1008865
 Pinhas T, Tsur D, Zakov S, Ziv-Ukelson M: Edit Distance with Duplications and Contractions Revisited. CPM, Volume 6661 of Lecture Notes in Computer Science. Edited by: Giancarlo R, Manzini G. 2011, 441-454. 10.1007/978-3-642-21458-5_37. Springer Berlin/Heidelberg
 Goto K, van de Geijn R: Anatomy of high-performance matrix multiplication. ACM Transactions on Mathematical Software (TOMS). 2008, 34 (3): 1-25.
 Robinson S: Toward an optimal algorithm for matrix multiplication. News Journal of the Society for Industrial and Applied Mathematics. 2005, 38 (9).
 Basch J, Khanna S, Motwani R: On diameter verification and boolean matrix multiplication. Tech. rep., Citeseer. 1995
 Williams R: Matrix-vector multiplication in sub-quadratic time (some preprocessing required). Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics. 2007, 995-1001.
 Bansal N, Williams R: Regularity Lemmas and Combinatorial Algorithms. FOCS. 2009, 745-754.
 Rizk G, Lavenier D: GPU Accelerated RNA Folding Algorithm. Computational Science - ICCS, Volume 5544 of Lecture Notes in Computer Science. Edited by: Allen G, Nabrzyski J, Seidel E, van Albada G, Dongarra J, Sloot P. 2009, 1004-1013. Springer Berlin/Heidelberg
 Chang D, Kimmer C, Ouyang M: Accelerating the Nussinov RNA folding algorithm with CUDA/GPU. IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). 2010, 120-125. IEEE
 Waterman M: Secondary structure of single-stranded nucleic acids. Adv Math Suppl Studies. 1978, 1: 167-212.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.