Fast algorithms for approximate circular string matching
Algorithms for Molecular Biology volume 9, Article number: 9 (2014)
Abstract
Background
Circular string matching is a problem which arises naturally in many biological contexts. It consists of finding all occurrences of the rotations of a pattern of length m in a text of length n. There exist optimal average-case algorithms for exact circular string matching. Approximate circular string matching is a rather undeveloped area.
Results
In this article, we present a suboptimal average-case algorithm for exact circular string matching requiring time $\mathcal{O}(n)$. Based on our solution for the exact case, we present two fast average-case algorithms for approximate circular string matching with k-mismatches, under the Hamming distance model, requiring time $\mathcal{O}(n)$ for moderate values of k, that is $k=\mathcal{O}(m/\log_{\sigma}m)$. We show how the same results can be easily obtained under the edit distance model. The presented algorithms are also implemented as library functions. Experimental results demonstrate that the functions provided in this library accelerate the computations by more than three orders of magnitude compared to a naïve approach.
Conclusions
We present two fast average-case algorithms for approximate circular string matching with k-mismatches, and show that they also perform very well in practice. The importance of our contribution is underlined by the fact that the provided functions may be seamlessly integrated into any biological pipeline. The source code of the library is freely available at http://www.inf.kcl.ac.uk/research/projects/asmf/.
Background
Circular sequences appear in a number of biological contexts. This type of structure occurs in the DNA of viruses [1, 2], bacteria [3], eukaryotic cells [4], and archaea [5]. In [6], it was noted that, due to this, algorithms on circular strings may be important in the analysis of organisms with such structure. Circular strings have previously been studied in the context of sequence alignment. In [7], basic algorithms for pairwise and multiple circular sequence alignment were presented. These results were later improved in [8], where an additional preprocessing stage was added to speed up the execution time of the algorithm. In [9], the authors also presented efficient algorithms for finding the optimal alignment and consensus sequence of circular sequences under the Hamming distance metric.
In order to provide an overview of our results and algorithms, we begin with a few definitions, generally following [10]. We think of a string x of length n as an array x[0..n−1], where every x[i], 0≤i<n, is a letter drawn from some fixed alphabet Σ of size σ=|Σ|. The empty string of length 0 is denoted by ε. A string x is a factor of a string y if there exist two strings u and v, such that y=uxv. Let the strings x, y, u, and v be such that y=uxv. If u=ε, then x is a prefix of y. If v=ε, then x is a suffix of y.
Let x be a nonempty string of length n and y be a string. We say that there exists an occurrence of x in y, or, more simply, that x occurs in y, when x is a factor of y. Every occurrence of x can be characterised by a position in y. Thus we say that x occurs at the starting position i in y when y[ i..i+n−1]=x. The Hamming distance between strings x and y, both of length n, is the number of positions i, 0≤i<n, such that x[ i]≠y[ i]. Given a nonnegative integer k, we write x≡_{ k }y if the Hamming distance between x and y is at most k.
A circular string of length n can be viewed as a traditional linear string which has the leftmost and rightmost symbols wrapped around and stuck together in some way. Under this notion, the same circular string can be seen as n different linear strings, which would all be considered equivalent. Given a string x of length n, we denote by x^{i}=x[i..n−1]x[0..i−1], 0<i<n, the ith rotation of x, and x^{0}=x. Consider, for instance, the string x=x^{0}=abababbc; this string has the following rotations: x^{1}=bababbca, x^{2}=ababbcab, x^{3}=babbcaba, x^{4}=abbcabab, x^{5}=bbcababa, x^{6}=bcababab, x^{7}=cabababb.
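In code, the rotations of a string can be enumerated directly; a minimal Python sketch (the helper name `rotations` is ours, not part of the library):

```python
def rotations(x):
    """Return the rotations x^0, x^1, ..., x^(n-1) of a string x."""
    return [x[i:] + x[:i] for i in range(len(x))]

# The example from the text: rotations("abababbc")[1] is "bababbca",
# and rotations("abababbc")[7] is "cabababb".
rots = rotations("abababbc")
```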
Here we consider the problem of finding occurrences of a pattern string x of length m with circular structure in a text string t of length n with linear structure. For instance, the DNA sequence of many viruses has circular structure, so if a biologist wishes to find occurrences of a particular virus in a carrier's DNA sequence, which may not be circular, they must consider how to locate all positions in t at which at least one rotation of x occurs. This is the problem of circular string matching.
The problem of exact circular string matching has been considered in [11], where an $\mathcal{O}(n)$-time algorithm was presented. A naïve solution with quadratic complexity consists in applying a classical algorithm for searching a finite set of strings after having built the trie of rotations of x. The approach presented in [11] consists in preprocessing x by constructing a suffix automaton of the string xx, by noting that every rotation of x is a factor of xx. Then, by feeding t into the automaton, the lengths of the longest factors of xx occurring in t can be found by the links followed in the automaton in time $\mathcal{O}(n)$. In [12], the authors presented an optimal average-case algorithm for exact circular string matching, by also showing that the average-case lower bound of $\mathcal{O}(n\log_{\sigma}m/m)$ for single string matching also holds for circular string matching. Very recently, in [13], the authors presented two fast average-case algorithms based on word-level parallelism. The first algorithm requires average-case time $\mathcal{O}(n\log_{\sigma}m/w)$, where w is the number of bits in the computer word. The second one is based on a mixture of word-level parallelism and q-grams. The authors showed that with the addition of q-grams, and by setting $q=\mathcal{O}(\log_{\sigma}m)$, an optimal average-case time of $\mathcal{O}(n\log_{\sigma}m/m)$ is achieved. Indexing circular patterns [14] and variations of approximate circular string matching under the edit distance model [15], both based on the construction of a suffix tree, have also been considered.
In this article, we consider the following problems.
Problem 1 (Exact Circular String Matching).
Given a pattern x of length m and a text t of length n>m, find all factors u of t such that u=x^{i}, 0≤i<m.
Problem 2 (Approximate Circular String Matching with k-Mismatches).
Given a pattern x of length m, a text t of length n>m, and an integer threshold k<m, find all factors u of t such that u≡_{ k }x^{i}, 0≤i<m.
The aforementioned algorithms for the exact case exhibit the following disadvantages: first, they cannot be applied in a biological context since both single nucleotide polymorphisms and errors introduced by wet-lab sequencing platforms might have occurred in the sequences; second, it is not clear whether they could easily be adapted to deal with the approximate case. Similar to the exact case [12], it can be shown that the average-case lower bound of $\mathcal{O}(n(k+\log_{\sigma}m)/m)$ for single approximate string matching [16] also holds for approximate circular string matching with k-mismatches under the Hamming distance model. To the best of our knowledge, no optimal average-case algorithm exists for this problem. Therefore, to achieve optimality, one could use the optimal average-case algorithm for multiple approximate string matching, presented in [17], for matching the r=m rotations of x, requiring, on average, time $\mathcal{O}(n(k+\log_{\sigma}rm)/m)$, but only if $k/m<1/2-\mathcal{O}(1/\sqrt{\sigma})$, $r=\mathcal{O}(\min(n^{1/3}/m^{2},\sigma^{o(m)}))$, and $\mathcal{O}(m^{4}r^{2}\sigma^{\mathcal{O}(1)})$ space is available; this is impractical for large m: e.g. the genome of the smallest known viruses replicating autonomously in eukaryotic cells is around 1.8 kbp long. The authors propose solutions to reduce the required space, albeit at the cost of various space–time trade-off techniques.
Our Contribution. We present a new suboptimal average-case algorithm for exact circular string matching requiring time $\mathcal{O}(n)$. Although suboptimal, this algorithm can be easily extended to tackle the approximate case efficiently. Based on our solution for the exact case, we present two new fast average-case algorithms for approximate circular string matching with k-mismatches, under the Hamming distance model, requiring time $\mathcal{O}(n)$ for moderate values of k, that is $k=\mathcal{O}(m/\log_{\sigma}m)$. The first algorithm requires space $\mathcal{O}(n)$ and the second one space $\mathcal{O}(m)$. We show how the same results can be easily obtained under the edit distance model. The presented algorithms are also implemented as library functions. Experimental results demonstrate that the functions provided in this library accelerate the computations by more than three orders of magnitude compared to a naïve approach. The source code of the library is freely available at http://www.inf.kcl.ac.uk/research/projects/asmf/.
Properties of the partitioning technique
In this section, we give a brief outline of the partitioning technique in general, and then show some properties of the version of the technique we use for our algorithms. The partitioning technique, introduced in [18], and in some sense earlier in [19], is an algorithm based on filtering out candidate positions that could never give a solution, so as to speed up string-matching algorithms. An important point to note about this technique is that it reduces the search space but does not, by design, verify potential occurrences. To create a string-matching algorithm, filtering must be combined with some verification technique. The idea behind the partitioning technique was initially proposed for approximate string matching, but here we show that it can also be used for exact circular string matching.
The idea behind the partitioning technique is to partition the given pattern in such a way that at least one of the fragments must occur exactly in any valid approximate occurrence of the pattern. It is then possible to search for these fragments exactly, yielding a set of candidate occurrences of the pattern. It is then left to the verification portion of the algorithm to check whether these are valid approximate occurrences of the pattern. It has been experimentally shown that this approach yields very good practical performance on large-scale datasets [20], even if it is not theoretically optimal.
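As an illustration of the technique in its original, non-circular setting (our own sketch, not the paper's implementation): to find occurrences of x with at most k mismatches, partition x into k+1 fragments, locate each fragment exactly, and verify every candidate window by direct comparison. The helper names `partition` and `kmismatch_filter` are ours.

```python
def partition(x, f):
    """Split x into f contiguous fragments whose lengths differ by at most one."""
    base, extra = divmod(len(x), f)
    frags, pos = [], 0
    for i in range(f):
        ln = base + (1 if i < extra else 0)
        frags.append((pos, x[pos:pos + ln]))
        pos += ln
    return frags

def kmismatch_filter(x, t, k):
    """Partition filter: a window matching x with <= k mismatches must
    contain at least one of the k+1 fragments exactly (pigeonhole)."""
    m, out = len(x), set()
    for off, f in partition(x, k + 1):
        j = t.find(f)
        while j != -1:
            s = j - off                       # candidate start of x in t
            if 0 <= s <= len(t) - m:
                # verification: count mismatches directly
                if sum(a != b for a, b in zip(x, t[s:s + m])) <= k:
                    out.add(s)
            j = t.find(f, j + 1)
    return sorted(out)
```

Only fragment hits are verified, which is exactly the search-space reduction the technique is designed for.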
For exact circular string matching, we cannot simply apply well-known exact string-matching algorithms for an efficient solution, as we must also take into account the rotations of the pattern. We can, however, make use of the partitioning technique and, by choosing an appropriate number of fragments, ensure that at least one fragment must occur in any valid exact occurrence of a rotation. Lemma 1, together with the following fact, provides this number.
Fact 1. Any rotation of x=x[0..m−1] is a factor of x^{′}=x[0..m−1]x[0..m−2]; and any factor of length m of x^{′} is a rotation of x.
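Fact 1 is easy to check exhaustively for a small example; a sanity check of our own in Python:

```python
x = "abababbc"
m = len(x)
xp = x + x[:-1]                                # x' = x[0..m-1] x[0..m-2]
rots = {x[i:] + x[:i] for i in range(m)}       # all rotations of x
# every rotation of x occurs in x' ...
assert all(r in xp for r in rots)
# ... and every length-m factor of x' is a rotation of x
assert all(xp[i:i + m] in rots for i in range(len(xp) - m + 1))
```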
Lemma 1. If we partition x^{′}=x[0..m−1]x[0..m−2] in 4 fragments of length ⌊(2m−1)/4⌋ and ⌈(2m−1)/4⌉, at least one of the 4 fragments is a factor of any factor of length m of x^{′}.
Proof. Let ℓ_{f}=⌊(2m−1)/4⌋ denote the minimum length of a fragment. If we partition x^{′} in at least 4 fragments of length ⌊(2m−1)/4⌋ and ⌈(2m−1)/4⌉, we have that
$$4\ell_{f}\le 2m-1 < 2m,$$
which gives 2m>4ℓ_{f} and m>2ℓ_{f}. Therefore any factor of length m of x^{′}, and, by Fact 1, any rotation of x, must contain at least one of the fragments. For a graphical illustration of this proof inspect Figure 1. ■
Lemma 2. Let x and y=y_{0}y_{1}…y_{ k } be two strings, both of length n, such that y_{0},y_{1},…,y_{ k } are k+1≤n nonempty strings and x≡_{ k }y. Then there exists at least one string y_{ i }, 0≤i≤k, starting at position j of y, 0≤j<n, occurring at the starting position j of x.
Proof. Immediate from the pigeonhole principle: x and y differ in at most k positions, and these positions are distributed among the k+1 nonempty fragments y_{0},y_{1},…,y_{ k }; hence at least one fragment y_{ i } contains no mismatch, and that fragment occurs in x at its own starting position j. ■
Based on Lemma 2, we take a similar approach to the one described by Lemma 1, to obtain the sufficient number of fragments in the case of approximate circular string matching with kmismatches.
Lemma 3. If we partition x^{′}=x[0..m−1]x[0..m−2] in 2k+4 fragments of length ⌊(2m−1)/(2k+4)⌋ and ⌈(2m−1)/(2k+4)⌉, at least k+1 of the 2k+4 fragments are factors of any factor of length m of x^{′}.
Proof. Let ℓ_{f}=⌊(2m−1)/(2k+4)⌋ denote the minimum length of a fragment. If we partition x^{′} in 2k+4 fragments of length ⌊(2m−1)/(2k+4)⌋ and ⌈(2m−1)/(2k+4)⌉, we have that
$$(2k+4)\ell_{f}\le 2m-1 < 2m,$$
which gives 2m−1≥2(k+2)ℓ_{f} and m>(k+2)ℓ_{f}. Therefore any factor of length m of x^{′}, and, by Fact 1, any rotation of x, must contain at least k+1 of the fragments. For a graphical illustration of this proof inspect Figure 2. ■
Exact circular string matching via filtering
In this section, we present ECSMF, a new suboptimal average-case algorithm for exact circular string matching via filtering. It is based on the partitioning technique and a series of practical and well-established data structures such as the suffix array (for more details see [21]).
Longest common extension
First, we describe how to compute the longest common extension, denoted by lce, of two suffixes of a string in constant time (for more details see [22]). lce queries are an important part of the algorithms presented later on.
Let SA denote the array of positions of the sorted suffixes of string x of length n, i.e. for all 1≤r<n, we have x[SA[r−1]..n−1]<x[SA[r]..n−1]. The inverse iSA of the array SA is defined by iSA[SA[r]]=r, for all 0≤r<n. Let lcp(r,s) denote the length of the longest common prefix of the strings x[SA[r]..n−1] and x[SA[s]..n−1], for all 0≤r,s<n, and 0 otherwise. Let LCP denote the array defined by LCP[r]=lcp(r−1,r), for all 1≤r<n, and LCP[0]=0. We perform the following linear-time and linear-space preprocessing:
– Compute arrays SA and iSA of x [21].
– Compute array LCP of x [23].
– Preprocess array LCP for range minimum queries; we denote this by RMQ_{LCP} [24].
With the preprocessing complete, the lce of two suffixes of x starting at positions p and q can be computed in constant time in the following way [22]:
$$\mathsf{LCE}(x,p,q)=\mathsf{LCP}[\,\mathsf{RMQ_{LCP}}(\min\{\mathsf{iSA}[p],\mathsf{iSA}[q]\}+1,\;\max\{\mathsf{iSA}[p],\mathsf{iSA}[q]\})\,],\quad p\neq q.$$
Example 1. Let the string x=abbababba. The following table illustrates the arrays SA, iSA, and LCP for x.

  r       0  1  2  3  4  5  6  7  8
  SA[r]   8  3  5  0  7  2  4  6  1
  iSA[r]  3  8  5  1  6  2  7  4  0
  LCP[r]  0  1  2  4  0  2  3  1  3

We have LCE(x,1,2)=LCP[RMQ_{LCP}(iSA[2]+1,iSA[1])]=LCP[RMQ_{LCP}(6,8)]=1, implying that the lce of bbababba and bababba is 1.
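The arrays of Example 1, and the LCE query, can be reproduced with a short Python sketch; quadratic-time construction and a linear scan stand in for the linear-time suffix-array, LCP, and RMQ structures [21, 23, 24], which is sufficient to check the example:

```python
def suffix_array(x):
    # demo construction by direct sorting; linear-time algorithms exist [21]
    return sorted(range(len(x)), key=lambda i: x[i:])

def lcp_array(x, sa):
    lcp = [0] * len(x)
    for r in range(1, len(x)):
        a, b = sa[r - 1], sa[r]
        k = 0
        while a + k < len(x) and b + k < len(x) and x[a + k] == x[b + k]:
            k += 1
        lcp[r] = k
    return lcp

def lce(x, isa, lcp, p, q):
    # LCE(x,p,q) = LCP[RMQ_LCP(min(iSA[p],iSA[q])+1, max(iSA[p],iSA[q]))]
    if p == q:
        return len(x) - p
    lo, hi = sorted((isa[p], isa[q]))
    return min(lcp[lo + 1:hi + 1])   # linear scan in place of O(1) RMQ [24]

x = "abbababba"
sa = suffix_array(x)
isa = [0] * len(x)
for r, s in enumerate(sa):
    isa[s] = r
lcp = lcp_array(x, sa)
```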
Algorithm ECSMF
Given a pattern x of length m and a text t of length n>m, an outline of algorithm ECSMF for solving Problem 1 is as follows.

1.
Construct the string x ^{′}=x[ 0.. m−1]x[ 0.. m−2] of length 2m−1. By Fact 1, any rotation of x is a factor of x ^{′}.

2.
The pattern x ^{′} is partitioned in 4 fragments of length ⌊(2m−1)/4⌋ and ⌈(2m−1)/4⌉. By Lemma 1, at least one of the 4 fragments is a factor of any rotation of x.

3.
Match the 4 fragments against the text t using an Aho–Corasick automaton [25]. Let $\mathcal{L}$ be a list of size Occ of tuples, where $\langle p_{x^{\prime}},\ell,p_{t}\rangle\in\mathcal{L}$ is a 3-tuple such that $0\le p_{x^{\prime}}<2m-1$ is the position where the fragment occurs in x^{′}, ℓ is the length of the corresponding fragment, and 0≤p_{ t }<n is the position where the fragment occurs in t.

4.
Compute SA, iSA, LCP, and RMQ_{LCP} of T=x ^{′} t. Compute SA, iSA, LCP, and RMQ_{LCP} of T _{ r }=rev(t x ^{′}), that is the reverse string of t x ^{′}.

5.
For each tuple $\langle p_{x^{\prime}},\ell,p_{t}\rangle\in\mathcal{L}$, we try to extend to the right via computing
$$\mathcal{E}_{r}\leftarrow \mathsf{LCE}(T,\,p_{x^{\prime}}+\ell,\;2m-1+p_{t}+\ell);$$
in other words, we compute the length $\mathcal{E}_{r}$ of the longest common prefix of $x^{\prime}[p_{x^{\prime}}+\ell\,..\,2m-2]$ and t[p_{ t }+ℓ..n−1], both being suffixes of T. Similarly, we try to extend to the left via computing $\mathcal{E}_{l}$ using lce queries on the suffixes of T_{ r }.

6.
For each $\mathcal{E}_{l},\mathcal{E}_{r}$ computed for tuple $\langle p_{x^{\prime}},\ell,p_{t}\rangle\in\mathcal{L}$, we report all the valid starting positions in t by first checking whether the total length $\mathcal{E}_{l}+\ell+\mathcal{E}_{r}\ge m$; that is, whether the length of the full extension of the fragment is greater than or equal to m, matching at least one rotation of x. If that is the case, then we report positions
$$\max\{p_{t}-\mathcal{E}_{l},\,p_{t}+\ell-m\},\dots,\min\{p_{t}+\ell-m+\mathcal{E}_{r},\,p_{t}\}.$$
Example 2. Let the pattern x=GGGTCTA of length m=7, and the text t=GATACGATACCTAGGGTGATAGAATAG. Then x^{′}=GGGTCTAGGGTCT (Step 1). x^{′} is partitioned in GGGT, CTA, GGG, and TCT (Step 2). Consider $\langle 4,3,10\rangle\in\mathcal{L}$, that is, fragment x^{′}[4..6]=CTA, of length ℓ=3, occurs at starting position p_{ t }=10 in t (Step 3). Then T=GGGTCTAGGGTCTGATACGATACCTAGGGTGATAGAATAG and T_{ r }=TCTGGGATCTGGGGATAAGATAGTGGGATCCATAGCATAG (Step 4). Extending to the left gives $\mathcal{E}_{l}=0$, since T_{ r }[9]≠T_{ r }[30]; and extending to the right gives $\mathcal{E}_{r}=4$, since T[7..10]=T[26..29] and T[11]≠T[30] (Step 5). We check that $\mathcal{E}_{l}+\ell+\mathcal{E}_{r}=7=m$, and therefore we report position 10 (Step 6):
that is, x^{4}=CTAGGGT occurs at starting position 10 in t.
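The six steps can be condensed into a short Python sketch (our own code, not the library's). Plain letter comparisons stand in for the constant-time LCE queries of Steps 4–5, so this reproduces the output of ECSMF, not its complexity:

```python
def ecsm_filter(x, t):
    """Sketch of ECSMF: partition x' into 4 fragments (Lemma 1), locate them
    in t, extend each occurrence left and right, and report the positions
    given by the formula of Step 6."""
    m = len(x)
    xp = x + x[:-1]                        # x' = x[0..m-1] x[0..m-2]
    base, extra = divmod(len(xp), 4)
    frags, pos = [], 0
    for i in range(4):                     # lengths floor/ceil of (2m-1)/4
        ln = base + (1 if i < extra else 0)
        frags.append((pos, ln))
        pos += ln
    out = set()
    for px, ln in frags:
        f = xp[px:px + ln]
        pt = t.find(f)
        while pt != -1:
            er = 0                         # extend to the right
            while (px + ln + er < len(xp) and pt + ln + er < len(t)
                   and xp[px + ln + er] == t[pt + ln + er]):
                er += 1
            el = 0                         # extend to the left
            while (px - el - 1 >= 0 and pt - el - 1 >= 0
                   and xp[px - el - 1] == t[pt - el - 1]):
                el += 1
            if el + ln + er >= m:          # full extension spans a rotation
                for j in range(max(pt - el, pt + ln - m),
                               min(pt + ln - m + er, pt) + 1):
                    out.add(j)
            pt = t.find(f, pt + 1)
    return sorted(out)
```

On the strings of Example 2 this reports exactly position 10.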
Theorem 1. Given a pattern x of length m drawn from alphabet Σ, σ=|Σ|, and a text t of length n>m drawn from Σ, algorithm ECSMF requires average-case time $\mathcal{O}(n)$ to solve Problem 1.
Proof. Constructing and partitioning the string x^{′} from x can trivially be done in time $\mathcal{O}(m)$ (Steps 1–2). Building the Aho–Corasick automaton of the 4 fragments requires time $\mathcal{O}(m)$; and the search time is $\mathcal{O}(n+\mathit{Occ})$ (Step 3) [25]. The preprocessing step for the lce queries on the suffixes of T and T_{ r } can be done in time $\mathcal{O}(n)$ (Step 4). Computing $\mathcal{E}_{l}$ and $\mathcal{E}_{r}$ for each occurrence of a fragment requires time $\mathcal{O}(\mathit{Occ})$ (Step 5). For each extended occurrence of a fragment, we report $\mathcal{O}(m)$ valid starting positions, thus $\mathcal{O}(m\,\mathit{Occ})$ in total (Step 6). Since the expected number Occ of occurrences of the 4 fragments in t is $4n/\sigma^{(2m-1)/4}=\mathcal{O}\left(\frac{n}{\sigma^{\frac{2m-1}{4}}}\right)$, algorithm ECSMF requires average-case time $\mathcal{O}\left(\left(1+\frac{m}{\sigma^{\frac{2m-1}{4}}}\right)n\right)$. It achieves average-case time $\mathcal{O}(n)$ iff
$$f(m)=\frac{m}{\sigma^{\frac{2m-1}{4}}}\le c,$$
for some fixed constant c. For σ=2, the maximum value of f is attained at m=2/ln 2≈2.89, where f(m)<1.27; and since, for fixed m, f decreases as σ grows, for all σ>1 we get $f(m)=\mathcal{O}(1)$, and hence algorithm ECSMF requires average-case time $\mathcal{O}(n)$. ■
Approximate circular string matching with k-mismatches via filtering
In this section, based on the ideas presented in algorithm ECSMF, we present ACSMF and ACSMF-Simple, two new fast average-case algorithms for approximate circular string matching with k-mismatches via filtering.
Algorithm ACSMF
The first four steps of algorithm ACSMF are essentially the same as in algorithm ECSMF. A small difference exists in Step 2, where the sufficient number of fragments for approximate circular string matching with k-mismatches is used. The main difference is in Step 5, where algorithm ACSMF tries to extend k+1 times to the right and k+1 times to the left. Given a pattern x of length m, a text t of length n>m, and an integer threshold k<m, an outline of algorithm ACSMF for solving Problem 2 is as follows.

1.
Construct the string x ^{′}=x[ 0.. m−1]x[0.. m−2] of length 2m−1. By Fact 1, any rotation of x is a factor of x ^{′}.

2.
The pattern x ^{′} is partitioned in 2k+4 fragments of length ⌊(2m−1)/(2k+4)⌋ and ⌈(2m−1)/(2k+4)⌉. By Lemma 3, at least k+1 of the 2k+4 fragments are factors of any rotation of x.

3.
Match the 2k+4 fragments against the text t using an Aho–Corasick automaton [25]. Let $\mathcal{L}$ be a list of size Occ of tuples, where $\langle p_{x^{\prime}},\ell,p_{t}\rangle\in\mathcal{L}$ is a 3-tuple such that $0\le p_{x^{\prime}}<2m-1$ is the position where the fragment occurs in x^{′}, ℓ is the length of the corresponding fragment, and 0≤p_{ t }<n is the position where the fragment occurs in t.

4.
Compute SA, iSA, LCP, and RMQ_{LCP} of T=x ^{′} t. Compute SA, iSA, LCP, and RMQ_{LCP} of T _{ r }=rev(t x ^{′}), that is the reverse string of t x ^{′}.

5.
For each tuple $\langle p_{x^{\prime}},\ell,p_{t}\rangle\in\mathcal{L}$, we try to extend k+1 times to the right via computing
$$\mathcal{E}_{r}^{0}\leftarrow \mathsf{LCE}(T,\,p_{x^{\prime}}+\ell,\;2m-1+p_{t}+\ell)$$
$$\mathcal{E}_{r}^{1}\leftarrow \mathcal{E}_{r}^{0}+1+\mathsf{LCE}(T,\,p_{x^{\prime}}+\ell+\mathcal{E}_{r}^{0}+1,\;2m-1+p_{t}+\ell+\mathcal{E}_{r}^{0}+1)$$
$$\dots$$
$$\mathcal{E}_{r}^{k}\leftarrow \mathcal{E}_{r}^{k-1}+1+\mathsf{LCE}(T,\,p_{x^{\prime}}+\ell+\mathcal{E}_{r}^{k-1}+1,\;2m-1+p_{t}+\ell+\mathcal{E}_{r}^{k-1}+1);$$
in other words, we compute the length $\mathcal{E}_{r}^{k}$ of the longest common prefix, with k mismatches, of $x^{\prime}[p_{x^{\prime}}+\ell\,..\,2m-2]$ and t[p_{ t }+ℓ..n−1], both being suffixes of T. Similarly, we try to extend to the left k+1 times via computing $\mathcal{E}_{l}^{k}$ using lce queries on the suffixes of T_{ r }.

6.
For each tuple $\langle p_{x^{\prime}},\ell,p_{t}\rangle\in\mathcal{L}$ we try to extend, we also maintain an array M of size 2m−1, initialised with zeros, in which we mark the position of the ith left and right mismatch, 1≤i≤k, by setting
$$\mathsf{M}[p_{x^{\prime}}-\mathcal{E}_{l}^{i-1}-1]\leftarrow 1\quad\text{and}\quad \mathsf{M}[p_{x^{\prime}}+\ell+\mathcal{E}_{r}^{i-1}]\leftarrow 1.$$
7.
For each $\mathcal{E}_{l}^{k},\mathcal{E}_{r}^{k},\mathsf{M}$ computed for tuple $\langle p_{x^{\prime}},\ell,p_{t}\rangle\in\mathcal{L}$, we report all the valid starting positions in t by first checking whether the total length $\mathcal{E}_{l}^{k}+\ell+\mathcal{E}_{r}^{k}\ge m$; that is, whether the length of the full extension of the fragment is greater than or equal to m. If that is the case, then we count the total number of mismatches of the occurrences at starting positions
$$\max\{p_{t}-\mathcal{E}_{l}^{k},\,p_{t}+\ell-m\},\dots,\min\{p_{t}+\ell-m+\mathcal{E}_{r}^{k},\,p_{t}\},$$
by first summing up the mismatches for the leftmost starting position
$$\mu_{j}\leftarrow \mathsf{M}[s]+\dots+\mathsf{M}[s+m-1],\quad\text{where}\;\; s=\max\{p_{x^{\prime}}-\mathcal{E}_{l}^{k},\,p_{x^{\prime}}+\ell-m\}\;\;\text{and}\;\; j=\max\{p_{t}-\mathcal{E}_{l}^{k},\,p_{t}+\ell-m\}.$$
For each subsequent position j+1, we subtract the value of the leftmost element of M used for μ_{ j } and add the value of the next element to compute μ_{j+1}. In case μ_{ j }≤k, we report position j.
Example 3. Let the pattern x=GGGTCTA of length m=7, the text t=GATACGATACCTAGGGTGATAGAATAG, and k=1. Then x^{′}=GGGTCTAGGGTCT (Step 1). x^{′} is partitioned in GGG, TC, TA, GG, GT, and CT (Step 2). Consider $\langle 9,2,15\rangle\in\mathcal{L}$, that is, fragment x^{′}[9..10]=GT, of length ℓ=2, occurs at starting position p_{ t }=15 in t (Step 3). Then T=GGGTCTAGGGTCTGATACGATACCTAGGGTGATAGAATAG and T_{ r }=TCTGGGATCTGGGGATAAGATAGTGGGATCCATAGCATAG (Step 4). Extending to the left gives $\mathcal{E}_{l}^{k}=6$, since T_{ r }[4..9]≡_{ k }T_{ r }[25..30] and T_{ r }[10]≠T_{ r }[31]; and extending to the right gives $\mathcal{E}_{r}^{k}=1$, since T[11]≡_{ k }T[30] and T[12]≠T[31] (Step 5). We also set M[3]=1 and M[11]=1 (Step 6). We check that $\mathcal{E}_{l}^{k}+\ell+\mathcal{E}_{r}^{k}=9>m$, and therefore we report position 10, since $\sum_{i=4}^{10}\mathsf{M}[i]=0<k$, and position 11, since $\sum_{i=5}^{11}\mathsf{M}[i]=1=k$ (Step 7):
that is, x ^{4} =CTAGGGT and x ^{5} =TAGGGTC occur at starting position 10 in t with no mismatch and at starting position 11 in t with 1 mismatch, respectively.
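For cross-checking the output of ACSMF on small inputs, a naive quadratic matcher is handy (our own testing aid, not part of the library). On the strings of Example 3 it reports positions 10 and 11, and additionally position 9, which stems from a different tuple of $\mathcal{L}$ than the one followed in the example: x^{3}=TCTAGGG occurs at starting position 9 in t with 1 mismatch.

```python
def acsm_naive(x, t, k):
    """Report every position j at which some rotation of x occurs in t
    with at most k mismatches; O(n m^2) brute force."""
    m = len(x)
    rots = [x[i:] + x[:i] for i in range(m)]
    return [j for j in range(len(t) - m + 1)
            if any(sum(a != b for a, b in zip(r, t[j:j + m])) <= k
                   for r in rots)]
```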
Theorem 2. Given a pattern x of length m drawn from alphabet Σ, σ=|Σ|, a text t of length n>m drawn from Σ, and an integer threshold k<m, algorithm ACSMF requires average-case time $\mathcal{O}\left(\left(1+\frac{km}{\sigma^{\frac{2m-1}{2k+4}}}\right)n\right)$ and space $\mathcal{O}(n)$ to solve Problem 2.
Proof. Constructing and partitioning the string x^{′} from x can trivially be done in time $\mathcal{O}(m)$ (Steps 1–2). Building the Aho–Corasick automaton of the 2k+4 fragments requires time $\mathcal{O}(m)$; and the search time is $\mathcal{O}(n+\mathit{Occ})$ (Step 3) [25]. The preprocessing step for the lce queries on the suffixes of T and T_{ r } can be done in time and space $\mathcal{O}(n)$ (Step 4); see Section 3. Computing $\mathcal{E}_{l}^{k}$ and $\mathcal{E}_{r}^{k}$ for each occurrence of a fragment requires time $\mathcal{O}(k\,\mathit{Occ})$ (Step 5); see Section 3. Maintaining array M incurs no extra cost (Step 6). For each extended occurrence of a fragment, we report $\mathcal{O}(m)$ valid starting positions, thus $\mathcal{O}(m\,\mathit{Occ})$ in total (Step 7). Since the expected number Occ of occurrences of the 2k+4 fragments is $(2k+4)n/\sigma^{(2m-1)/(2k+4)}=\mathcal{O}\left(\frac{kn}{\sigma^{\frac{2m-1}{2k+4}}}\right)$, algorithm ACSMF requires average-case time $\mathcal{O}\left(\left(1+\frac{km}{\sigma^{\frac{2m-1}{2k+4}}}\right)n\right)$ and space $\mathcal{O}(n)$. ■
Corollary 1. Given a pattern x of length m drawn from alphabet Σ, σ=|Σ|, a text t of length n>m drawn from Σ, and an integer threshold $k=\mathcal{O}(m/\log_{\sigma}m)$, algorithm ACSMF requires average-case time $\mathcal{O}(n)$.
Proof. Algorithm ACSMF achieves average-case time $\mathcal{O}(n)$ iff
$$\frac{km}{\sigma^{\frac{2m-1}{2k+4}}}\le c,$$
for some fixed constant c. Let r=(2m−1)/(2k+4). We have
$$\frac{km}{\sigma^{r}}\le c.$$
Since k<m, we can (pessimistically) replace k by m−1. Then we have
$$\frac{(m-1)m}{\sigma^{r}}\le c,\quad\text{that is}\quad r\ge \log_{\sigma}\frac{(m-1)m}{c}=\Theta(\log_{\sigma}m).$$
Solving for r, and using k≤(2m−1)/2r−2, gives the maximum value of k, that is
$$k=\mathcal{O}\left(\frac{m}{\log_{\sigma}m}\right).\qquad\blacksquare$$
Algorithm ACSMF-Simple
Algorithm ACSMF-Simple is very similar to algorithm ACSMF. The only differences are:
– Algorithm ACSMF-Simple does not perform Step 4 of algorithm ACSMF;
– For each tuple $\langle p_{x^{\prime}},\ell,p_{t}\rangle\in\mathcal{L}$, Step 5 of algorithm ACSMF is performed without the use of the precomputed indexes. In other words, we compute $\mathcal{E}_{r}^{k}$ and $\mathcal{E}_{l}^{k}$ by simply performing letter comparisons and counting the number of mismatches that occur. The extension stops right before the (k+1)-th mismatch.
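In Python, this extension step is just a counting loop (a sketch with our own function names; on Example 3 it reproduces $\mathcal{E}_{l}^{k}=6$ and $\mathcal{E}_{r}^{k}=1$):

```python
def extend_right(xp, t, i, j, k):
    """Compare xp[i..] with t[j..] letter by letter, stopping right before
    the (k+1)-th mismatch; returns the length of the extension."""
    e = mism = 0
    while i + e < len(xp) and j + e < len(t):
        if xp[i + e] != t[j + e]:
            mism += 1
            if mism > k:
                break
        e += 1
    return e

def extend_left(xp, t, i, j, k):
    """Symmetric extension to the left of xp[i] and t[j]."""
    e = mism = 0
    while i - e - 1 >= 0 and j - e - 1 >= 0:
        if xp[i - e - 1] != t[j - e - 1]:
            mism += 1
            if mism > k:
                break
        e += 1
    return e

# Example 3: fragment GT = x'[9..10] occurring at position 15 in t, k = 1
xp = "GGGTCTAGGGTCT"
t = "GATACGATACCTAGGGTGATAGAATAG"
```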
Fact 2. The expected number of letter comparisons required for each extension in algorithm ACSMF-Simple is less than 3.

Proof. Recall that on an alphabet of size σ, the probability that two random strings of length ℓ are equal is (1/σ)^{ℓ}. Thus, given two long strings, and setting r=1/σ, there is probability r that the initial letters are equal, r^{2} that the prefixes of length two are equal, and so on. Thus the expected number S of positions to be matched before inequality occurs is bounded by
$$S\le \sum_{i=1}^{n} i\,r^{i},$$
for some n≥2. Hall & Knight [26, p. 44] tell us that
$$\sum_{i=1}^{n} i\,r^{i}=\frac{r\left(1-(n+1)r^{n}+n r^{n+1}\right)}{(1-r)^{2}},$$
which as n→∞ approaches $r/(1-r)^{2}\le 2$ for all r≤1/2. Thus S, the expected number of matching positions, is less than 2, and hence the expected number of letter comparisons required for each extension in algorithm ACSMF-Simple is less than 3. ■
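The closed form quoted from Hall & Knight, and the resulting bound, can be checked numerically; a quick Python sanity check (not part of the proof):

```python
# For r = 1/sigma, compare the partial sum of i*r^i with its closed form,
# and confirm that the limit r/(1-r)^2 is at most 2 when sigma >= 2.
for sigma in (2, 3, 4):
    r, n = 1.0 / sigma, 20
    partial = sum(i * r**i for i in range(1, n + 1))
    closed = r * (1 - (n + 1) * r**n + n * r**(n + 1)) / (1 - r) ** 2
    assert abs(partial - closed) < 1e-9
    assert partial < r / (1 - r) ** 2 <= 2
```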
Theorem 3. Given a pattern x of length m drawn from alphabet Σ, σ=|Σ|, a text t of length n>m drawn from Σ, and an integer threshold k<m, algorithm ACSMF-Simple requires average-case time $\mathcal{O}\left(\left(1+\frac{km}{\sigma^{\frac{2m-1}{2k+4}}}\right)n\right)$ and space $\mathcal{O}(m)$ to solve Problem 2.
Proof. By Fact 2, computing $\mathcal{E}_{l}^{k}$ and $\mathcal{E}_{r}^{k}$ for each occurrence of a fragment requires time $\mathcal{O}(k\,\mathit{Occ})$. Therefore algorithm ACSMF-Simple requires average-case time $\mathcal{O}\left(\left(1+\frac{km}{\sigma^{\frac{2m-1}{2k+4}}}\right)n\right)$. The required space is reduced to $\mathcal{O}(m)$ since Step 4 of algorithm ACSMF is not performed. ■
Corollary 2. Given a pattern x of length m drawn from alphabet Σ, σ=|Σ|, a text t of length n>m drawn from Σ, and an integer threshold $k=\mathcal{O}(m/\log_{\sigma}m)$, algorithm ACSMF-Simple requires average-case time $\mathcal{O}(n)$.

In practical cases, algorithm ACSMF-Simple should be preferred over algorithm ACSMF as (i) it has lower memory requirements (see Theorem 3); and (ii) it avoids the construction of a series of data structures (see Section 3 in this regard).
Edit distance model
Algorithm ACSMF-Simple can easily be extended to approximate circular string matching under the edit distance model (for a definition, see [10]). Since each single-letter edit operation can change at most one of the 2k+4 fragments of x^{′}, any set of at most k edit operations leaves at least one of the fragments untouched. In other words, Lemma 2 holds under the edit distance model as well [27]. An area of length $\mathcal{O}(m)$ surrounding each potential occurrence found in the filtration phase (Steps 1–3 of algorithm ACSMF) is then searched using the standard dynamic-programming algorithm in time $\mathcal{O}(m^{2})$ [28] and space $\mathcal{O}(m)$ [29]. Since the expected number Occ of occurrences of the 2k+4 fragments is $\mathcal{O}\left(\frac{kn}{\sigma^{\frac{2m-1}{2k+4}}}\right)$, the average-case time complexity becomes $\mathcal{O}\left(\left(1+\frac{km^{2}}{\sigma^{\frac{2m-1}{2k+4}}}\right)n\right)$ and the space complexity remains $\mathcal{O}(m)$. When $k=\mathcal{O}(m/\log_{\sigma}m)$, the average-case time complexity is $\mathcal{O}(n)$.
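The verification step under edit distance is the textbook dynamic-programming computation; a sketch of the linear-space formulation (our own code, not the library's):

```python
def edit_distance(a, b):
    """Edit distance (insertions, deletions, substitutions) between a and b,
    computed row by row so that only O(|a|) space is kept [29]."""
    prev = list(range(len(a) + 1))   # distances from "" to every prefix of a
    for j in range(1, len(b) + 1):
        cur = [j] + [0] * len(a)
        for i in range(1, len(a) + 1):
            cur[i] = min(prev[i] + 1,                          # insertion
                         cur[i - 1] + 1,                       # deletion
                         prev[i - 1] + (a[i - 1] != b[j - 1])) # substitution/match
        prev = cur
    return prev[-1]
```

For instance, one deletion turns GGGTCTA into GGTCTA, so their edit distance is 1.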
Experimental results
We implemented algorithms ACSMF and ACSMF-Simple as library functions to perform approximate circular string matching with k-mismatches. The functions were implemented in the C programming language and developed under the GNU/Linux operating system. They take as input arguments the pattern x of length m, the text t of length n, and the integer threshold k<m; and they return the list of starting positions of the occurrences of the rotations of x in t with k-mismatches as output. The library implementation is distributed under the GNU General Public License (GPL), and it is available at http://www.inf.kcl.ac.uk/research/projects/asmf/, which is set up for maintaining the source code and the man-page documentation. The experiments were conducted on a Desktop PC using one core of an Intel i7 2600 CPU at 3.4 GHz under GNU/Linux.
Approximate circular string matching is a rather undeveloped area. To the best of our knowledge, there does not exist an optimal (average- or worst-case) algorithm for approximate circular string matching with k-mismatches. Therefore, keeping in mind that we wish to evaluate the efficiency of our algorithms in practical terms, we compared their performance to the respective performance of the C implementation^{a} of the optimal average-case algorithm for multiple approximate string matching, presented in [17], for matching the r=m rotations of x. We denote this algorithm by FredNava.
Tables 1, 2, and 3 illustrate elapsed-time and speed-up comparisons for various pattern sizes and moderate values of k, using a corpus of DNA data taken from the Pizza & Chili website [30]. As demonstrated by the experimental results, algorithm ACSMF-Simple is in all cases the fastest, with a speed-up of more than three orders of magnitude over FredNava. ACSMF is always the second fastest, while ACSMF-Simple still retains a speed-up of more than one order of magnitude over ACSMF. Another important observation, also suggested by Corollaries 1 and 2, is that the ACSMF-based algorithms are essentially independent of m for moderate values of k.
Conclusions
In this article, we presented new average-case algorithms for exact and approximate circular string matching. Algorithm ECSMF for exact circular string matching requires average-case time $\mathcal{O}(n)$; algorithms ACSMF and ACSMF-Simple for approximate circular string matching with k-mismatches require average-case time $\mathcal{O}(n)$ for moderate values of k, that is $k=\mathcal{O}(m/\log_{\sigma} m)$. We showed how the same results can easily be obtained under the edit distance model. The presented algorithms were also implemented as library functions. Experimental results demonstrate that the functions provided in this library accelerate the computations by more than three orders of magnitude compared to a naïve approach.
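For reference, a naïve approach of the kind such speed-ups are measured against can be sketched as follows: slide every rotation of x over every text position and count mismatches, for $\mathcal{O}(nm^{2})$ time overall. The function names and the fixed buffer sizes are hypothetical, chosen for this illustration:

```c
#include <assert.h>
#include <string.h>

/* Returns 1 iff strings a and b of length m differ in at most
 * k positions (Hamming distance <= k), with early termination. */
static int hamming_le(const char *a, const char *b, int m, int k)
{
    int d = 0;
    for (int j = 0; j < m; j++)
        if (a[j] != b[j] && ++d > k)
            return 0;
    return 1;
}

/* Naive baseline: report (via out[]) every position i such that
 * some rotation of x occurs at t+i with at most k mismatches;
 * returns the number of reported positions. */
static int naive_acsm(const char *x, int m, const char *t, int n,
                      int k, int *out)
{
    char rot[256]; /* assumes m < 256 for this sketch */
    int cnt = 0;
    for (int i = 0; i + m <= n; i++) {
        for (int r = 0; r < m; r++) {
            memcpy(rot, x + r, m - r);  /* suffix x[r..m-1] */
            memcpy(rot + m - r, x, r);  /* prefix x[0..r-1] */
            if (hamming_le(rot, t + i, m, k)) {
                out[cnt++] = i;
                break; /* one matching rotation suffices */
            }
        }
    }
    return cnt;
}
```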
For future work, we will explore the possibility of optimising our algorithms and the corresponding library implementation for the approximate case by using lossless filters, such as those of [31] for the Hamming distance model and [32] for the edit distance model, to eliminate a possibly large fraction of the input that is guaranteed not to contain any approximate occurrence. In addition, we will try to improve our algorithms for the approximate case in order to achieve average-case optimality.
Endnote
^{a} Personal communication with author.
References
 1.
Weil R, Vinograd J: The cyclic helix and cyclic coil forms of polyoma viral DNA. Proc Natl Acad Sci. 1963, 50 (4): 730-738.
 2.
Dulbecco R, Vogt M: Evidence for a ring structure of polyoma virus DNA. Proc Natl Acad Sci. 1963, 50 (2): 236-243.
 3.
Thanbichler M, Wang SC, Shapiro L: The bacterial nucleoid: a highly organized and dynamic structure. J Cell Biochem. 2005, 96 (3): 506-521. http://dx.doi.org/10.1002/jcb.20519
 4.
Lipps G: Plasmids: Current Research and Future Trends. 2008, Norfolk, UK: Caister Academic Press.
 5.
Allers T, Mevarech M: Archaeal genetics — the third way. Nat Rev Genet. 2005, 6: 58-73.
 6.
Gusfield D: Algorithms on Strings, Trees and Sequences. 1997, New York, NY, USA: Cambridge University Press.
 7.
Mosig A, Hofacker IL, Stadler PF, Zell A: Comparative analysis of cyclic sequences: viroids and other small circular RNAs. German Conference on Bioinformatics, Volume 83 of LNI. Edited by: Huson DH, Kohlbacher O, Lupas AN, Nieselt K. 2006, 93-102. GI.
 8.
Fernandes F, Pereira L, Freitas A: CSA: an efficient algorithm to improve circular DNA multiple alignment. BMC Bioinformatics. 2009, 10: 1-13.
 9.
Lee T, Na JC, Park H, Park K, Sim JS: Finding optimal alignment and consensus of circular strings. Proceedings of the 21st Annual Conference on Combinatorial Pattern Matching. 2010, 310-322. CPM '10, Berlin, Heidelberg: Springer-Verlag.
 10.
Crochemore M, Hancart C, Lecroq T: Algorithms on Strings. 2007, New York, NY, USA: Cambridge University Press.
 11.
Applied Combinatorics on Words. Edited by: Lothaire M. 2005, New York, NY, USA: Cambridge University Press.
 12.
Fredriksson K, Grabowski S: Average-optimal string matching. J Discrete Algorithms. 2009, 7 (4): 579-594. 10.1016/j.jda.2008.09.001.
 13.
Chen KH, Huang GS, Lee RCT: Bit-parallel algorithms for exact circular string matching. Comput J. 2013, doi:10.1093/comjnl/bxt023.
 14.
Iliopoulos CS, Rahman MS: Indexing circular patterns. Proceedings of the 2nd International Conference on Algorithms and Computation. 2008, 46-57. WALCOM '08, Berlin, Heidelberg: Springer-Verlag.
 15.
Lin J, Adjeroh D: All-against-all circular pattern matching. Comput J. 2012, 55 (7): 897-906. 10.1093/comjnl/bxr126.
 16.
Chang WI, Marr TG: Approximate string matching and local similarity. Proceedings of the 5th Annual Symposium on Combinatorial Pattern Matching. 1994, 259-273. CPM '94, London, UK: Springer-Verlag.
 17.
Fredriksson K, Navarro G: Average-optimal single and multiple approximate string matching. J Exp Algorithmics. 2004, 9. http://dl.acm.org/citation.cfm?id=1041513
 18.
Wu S, Manber U: Fast text searching: allowing errors. Commun ACM. 1992, 35 (10): 83-91. 10.1145/135239.135244.
 19.
Rivest R: Partial-match retrieval algorithms. SIAM J Comput. 1976, 5: 19-50. 10.1137/0205003.
 20.
Frousios K, Iliopoulos CS, Mouchard L, Pissis SP, Tischler G: REAL: an efficient REad ALigner for next generation sequencing reads. Proceedings of the First ACM International Conference on Bioinformatics and Computational Biology. 2010, 154-159. BCB '10, USA: ACM.
 21.
Nong G, Zhang S, Chan WH: Linear suffix array construction by almost pure induced-sorting. Proceedings of the 2009 Data Compression Conference. 2009, 193-202. DCC '09, Washington, DC, USA: IEEE Computer Society.
 22.
Ilie L, Navarro G, Tinta L: The longest common extension problem revisited and applications to approximate string searching. J Discrete Algorithms. 2010, 8 (4): 418-428. 10.1016/j.jda.2010.08.004.
 23.
Fischer J: Inducing the LCP-Array. Algorithms and Data Structures, Volume 6844 of Lecture Notes in Computer Science. Edited by: Dehne F, Iacono J, Sack JR. 2011, 374-385. Berlin, Heidelberg: Springer.
 24.
Fischer J, Heun V: Space-efficient preprocessing schemes for range minimum queries on static arrays. SIAM J Comput. 2011, 40 (2): 465-492. 10.1137/090779759.
 25.
Dori S, Landau GM: Construction of Aho Corasick automaton in linear time for integer alphabets. Inf Process Lett. 2006, 98 (2): 66-72. 10.1016/j.ipl.2005.11.019.
 26.
Hall HS, Knight SR: Higher Algebra. 1950, London, UK: MacMillan.
 27.
Baeza-Yates RA, Perleberg CH: Fast and practical approximate string matching. Inf Process Lett. 1996, 59: 21-27. 10.1016/0020-0190(96)00083-X. http://www.sciencedirect.com/science/article/pii/002001909600083X
 28.
Wagner RA, Fischer MJ: The string-to-string correction problem. J ACM. 1974, 21: 168-173. 10.1145/321796.321811.
 29.
Hirschberg DS: A linear space algorithm for computing maximal common subsequences. Commun ACM. 1975, 18 (6): 341-343. 10.1145/360825.360861.
 30.
Pizza & Chili corpus. 2013. http://pizzachili.dcc.uchile.cl/
 31.
Peterlongo P, Pisanti N, Boyer F, do Lago AP, Sagot MF: Lossless filter for multiple repetitions with Hamming distance. J Discrete Algorithms. 2008, 6 (3): 497-509. 10.1016/j.jda.2007.03.003.
 32.
Peterlongo P, Sacomoto GAT, do Lago AP, Pisanti N, Sagot MF: Lossless filter for multiple repeats with bounded edit distance. Algorithms Mol Biol. 2009, 4. http://www.almob.org/content/pdf/1748-7188-4-3.pdf
Acknowledgements
The publication costs for this article were funded by the Open Access funding scheme of King's College London. CB is supported by an EPSRC grant (Doctoral Training Grant #EP/J500252/1). The authors would like to warmly thank Reviewer #1 and Reviewer #2, whose meticulous comments were beyond the call of duty.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
CSI and SPP designed the study. CB, CSI, and SPP devised the algorithms. SPP developed the library and conducted the experiments. CB and SPP wrote the manuscript with the contribution of CSI. The final version of the manuscript is approved by all authors.
Keywords
 Approximate circular string matching
 Circular pattern matching
 Algorithms on strings