 Research
 Open Access
An average-case sublinear forward algorithm for the haploid Li and Stephens model
 Yohei M. Rosen^{1, 2} and
 Benedict J. Paten^{2}
https://doi.org/10.1186/s13015-019-0144-9
© The Author(s) 2019
 Received: 1 November 2018
 Accepted: 13 March 2019
 Published: 2 April 2019
Abstract
Background
Hidden Markov models of haplotype inheritance such as the Li and Stephens model allow for computationally tractable probability calculations using the forward algorithm as long as the representative reference panel used in the model is sufficiently small. Specifically, the monoploid Li and Stephens model and its variants are linear in reference panel size unless heuristic approximations are used. However, sequencing projects numbering in the thousands to hundreds of thousands of individuals are underway, and others numbering in the millions are anticipated.
Results
To make the forward algorithm for the haploid Li and Stephens model computationally tractable for these datasets, we have created a numerically exact version of the algorithm with observed average case sublinear runtime with respect to reference panel size k when tested against the 1000 Genomes dataset.
Conclusions
We show a forward algorithm which avoids any tradeoff between runtime and model complexity. Our algorithm makes use of two general strategies which might be applicable to improving the time complexity of other future sequence analysis algorithms: sparse dynamic programming matrices and lazy evaluation.
Keywords
 Forward algorithm
 Haplotype
 Complexity
 Sublinear algorithms
Background
Probabilistic models of haplotypes describe how variation is shared in a population. One application of these models is to calculate the probability \(P(o \mid H)\), defined as the probability of a haplotype o being observed, given the assumption that it is a member of a population represented by a reference panel of haplotypes H. This computation has been used in estimating recombination rates [1], a problem of interest in genetics and in medicine. It may also be used to detect errors in genotype calls.
Early approaches to haplotype modeling used coalescent [2] models which were accurate but computationally complex, especially when including recombination. Li and Stephens wrote the foundational computationally tractable haplotype model [1] with recombination. Under their model, the probability \(P(o \mid H)\) can be calculated using the forward algorithm for hidden Markov models (HMMs) and posterior sampling of genotype probabilities can be achieved using the forward–backward algorithm. Generalizations of their model have been used for haplotype phasing and genotype imputation [3–7].
The Li and Stephens model
Consider a reference panel H of k haplotypes sampled from some population. Each haplotype \(h_j \in H\) is a sequence \((h_{j,1}, \ldots , h_{j,n})\) of alleles at a contiguous sequence \(1, \ldots , n\) of genetic sites. Classically [1], the sites are biallelic, but the model extends to multiallelic sites [8].
Consider an observed sequence of alleles \(o = (o_1, \ldots , o_n)\) representing another haplotype. The monoploid Li and Stephens model (LS) [1] specifies a probability that o is descended from the population represented by H. LS can be written as a hidden Markov model wherein the haplotype o is assembled by copying (with possible error) consecutive contiguous subsequences of haplotypes \(h_j \in H\).
Definition 1
We will write \(\mu _i(j)\) as shorthand for \(p(o_i \mid x_{j,i})\). We will also define the values of the initial probabilities \(p(x_{j,1}, o_1 \mid H) = \frac{\mu _1(j)}{k}\), which follow by noting that each haplotype has equal probability \(\frac{1}{k}\) of being selected initially, and that this probability is then modified by the appropriate emission probability.
The probability \(P(o \mid H)\) will be equal to the sum \(\sum _j p_n[j]\) of all entries in the final column of the dynamic programming matrix. In practice, the Li and Stephens forward algorithm is \(\mathcal {O}(nk)\) (see "Efficient dynamic programming" section).
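For concreteness, the classical \(\mathcal {O}(nk)\) forward recursion can be sketched as below. This is a minimal illustration rather than the paper's implementation, and it assumes one common parameterization of the model: a per-site recombination probability rho and a miscopy (mutation) probability mu.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal sketch of the classical O(nk) haploid Li and Stephens forward
// algorithm. The exact parameterization (rho, mu) is an assumption for
// illustration, not necessarily the one used in the paper.
double ls_forward(const std::vector<std::vector<int>>& H,  // k x n allele matrix
                  const std::vector<int>& o,               // observed haplotype
                  double rho, double mu) {
  const std::size_t k = H.size(), n = o.size();
  // Emission probability mu_i(j): match vs. miscopy.
  auto emit = [&](std::size_t j, std::size_t i) {
    return H[j][i] == o[i] ? 1.0 - mu : mu;
  };
  std::vector<double> p(k), q(k);
  double S = 0.0;
  for (std::size_t j = 0; j < k; ++j) {  // initial column: p_1[j] = mu_1(j)/k
    p[j] = emit(j, 0) / k;
    S += p[j];
  }
  for (std::size_t i = 1; i < n; ++i) {  // each of the n columns costs O(k)
    double Snew = 0.0;
    for (std::size_t j = 0; j < k; ++j) {
      // Either stay on haplotype j (no recombination) or recombine uniformly.
      q[j] = emit(j, i) * ((1.0 - rho) * p[j] + rho / k * S);
      Snew += q[j];
    }
    p.swap(q);
    S = Snew;
  }
  return S;  // P(o | H) = sum over the final column
}
```

Every column touches all k states, which is exactly the \(\mathcal {O}(nk)\) cost that the remainder of the paper works to avoid.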
Li and Stephens-like algorithms for large populations
The \(\mathcal {O}(nk)\) time complexity of the forward algorithm is intractable for reference panels with large size k. The UK Biobank has amassed \(k = 500,000\) array samples. Whole genome sequencing projects, with a denser distribution of sites, are catching up. Major sequencing projects with \(k = 100,000\) or more samples are nearing completion. Others numbering k in the millions have been announced. These large population datasets have significant potential benefits: They are statistically likely to more accurately represent population frequencies and those employing genome sequencing can provide phasing information for rare variants.
In order to handle datasets with size k even fractions of these sizes, modern haplotype inference algorithms depend on models which are simpler than the Li and Stephens model or which sample subsets of the data. For example, the common tools Eagle2, Beagle, HAPI-UR and Shapeit2 and 3 [3–7] either restrict where recombination can occur, fail to model mutation, model long-range phasing approximately, or sample subsets of the reference panel.
Lunter’s “fastLS” algorithm [8] demonstrated that haplotype models which include all k reference panel haplotypes could find the Viterbi maximum likelihood path in time sublinear in k, using preprocessing to reduce redundant information in the algorithm’s input. However, his techniques do not extend to the forward and forward–backward algorithms.
Our contributions
We have developed an arithmetically exact forward algorithm whose expected time complexity is a function of the expected allele distribution of the reference panel. This expected time complexity proves to be significantly sublinear in reference panel size. We have also developed a technique for succinctly representing large panels of haplotypes whose size also scales as a sublinear function of the expected allele distribution.
Our forward algorithm contains three optimizations, all of which might be generalized to other bioinformatics algorithms. In "Sparse representation of haplotypes" section, we rewrite the reference panel as a sparse matrix containing the minimum information necessary to directly infer all allele values. In "Efficient dynamic programming" section, we define recurrence relations which are numerically equivalent to the forward algorithm but use minimal arithmetic operations. In "Lazy evaluation of dynamic programming rows" section, we delay computation of forward states using a lazy evaluation algorithm which benefits from blocks of common sequence composed of runs of major alleles. Our methods apply to other models which share certain redundancy properties with the monoploid Li and Stephens model.
Sparse representation of haplotypes
The forward algorithm to calculate the probability \(P(o \mid H)\) takes as input a length-n vector o and a \(k \times n\) matrix of haplotypes H. In general, any algorithm which is sublinear in its input inherently requires some sort of preprocessing to identify and reduce redundancies in the data. However, the algorithm will only become effectively sublinear if this preprocessing can be amortized over many iterations. In this case, we are able to preprocess H into a sparse representation which will on average contain fewer than \(\mathcal {O}(nk)\) data points.
This is the first component of our strategy. We use a variant of column-sparse-row matrix encoding to allow fast traversal of our haplotype matrix H. This encoding has the dual benefit of also allowing reversible size compression of our data. We propose that this is one good general data representation on which to build other computational work using very large genotype or haplotype data. Indeed, extrapolating from our single-chromosome results, the 1000 Genomes Phase 3 haplotypes across all chromosomes should simultaneously fit uncompressed in 11 GB of memory.
We will show that we can evaluate the Li and Stephens forward algorithm without needing to uncompress this sparse matrix.
Sparse column representation of haplotype alleles
Consider a biallelic genetic site i with alleles \(\{A, B\}\). Consider the vector \(h_{1,i},\) \(h_{2,i}, \ldots , h_{k,i}\) \(\in \{A,B\}^k\) of the alleles of haplotypes \(h_j\) at site i. Label whichever of A, B occurs more frequently in this vector as the major allele 0, and the one which occurs less frequently as the minor allele 1. We then encode this vector by storing the value A or B of the major allele 0, and the indices \(j_1, j_2, \ldots\) of the haplotypes which take on allele value 1 at this site.
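A minimal sketch of this encoding follows; the type and function names here are illustrative, not those of the paper's library.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sparse encoding of one biallelic column: the major allele value plus the
// indices of the haplotypes carrying the minor allele.
struct SparseColumn {
  int major_allele;                 // allele value labeled "0"
  std::vector<std::size_t> minors;  // indices j with h_{j,i} = minor allele "1"
};

SparseColumn encode_column(const std::vector<int>& column, int A, int B) {
  std::size_t countA = 0;
  for (int a : column) countA += (a == A);
  // Ties are broken arbitrarily (cf. the |phi_i| = k/2 case discussed below).
  const int major = (2 * countA >= column.size()) ? A : B;
  SparseColumn sc{major, {}};
  for (std::size_t j = 0; j < column.size(); ++j)
    if (column[j] != major) sc.minors.push_back(j);  // this list is phi_i
  return sc;
}

// Allele lookup without decompressing: h_{j,i} is the major allele unless
// j appears in the minor-index list (an O(|phi_i|) scan in this sketch).
int allele(const SparseColumn& sc, std::size_t j, int A, int B) {
  for (std::size_t m : sc.minors)
    if (m == j) return sc.major_allele == A ? B : A;
  return sc.major_allele;
}
```

Note that an allele can be recovered without decompressing the panel, which is the property that lets the forward algorithm run directly on the sparse matrix.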
We will write \(\phi _i\) for the subvector \(h_{j_1,i}, h_{j_2,i}, \ldots\) consisting of the alleles of those haplotypes which possess the minor allele 1 at site i, and \(|\phi _i|\) for the multiplicity of the minor allele. We call this vector \(\phi _i\) the information content of the haplotype cohort H at the site i.
Relation to the allele frequency spectrum
Our sparse representation of the haplotype reference panel benefits from the recent finding [9] that the distribution over sites of minor allele frequencies is biased towards low frequencies.^{1}
Clearly, the distribution of \(|\phi _i|\) is precisely the allele frequency spectrum. More formally,
Lemma 1
Corollary 1
If \(\mathcal {O}(\mathbb {E}[\overline{f}]) < \mathcal {O}(k)\), then \(\mathcal {O}(\sum _i \left| \phi _i\right| ) < \mathcal {O}(nk)\) in expected value.
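Unpacking the corollary, and taking \(\overline{f}\) to be the mean per-site minor allele count \(\frac{1}{n}\sum _i |\phi _i|\) (a reading consistent with the complexity stated in "Time complexity" section), linearity of expectation gives:

```latex
\mathbb{E}\Big[\sum_{i=1}^{n} |\phi_i|\Big]
  \;=\; n\,\mathbb{E}\Big[\frac{1}{n}\sum_{i=1}^{n} |\phi_i|\Big]
  \;=\; n\,\mathbb{E}[\overline{f}](k)
```

so the expected total size of the sparse representation is \(o(nk)\) whenever \(\mathbb {E}[\overline{f}](k)\) grows sublinearly in k.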
Dynamic reference panels
Adding or rewriting a haplotype is constant time per site per haplotype unless this edit changes which allele is the most frequent. It can be achieved by addition or removal of single entries from the row-sparse-column representation; since our implementation does not require that the column indices be stored in order, these operations are \(\mathcal {O}(1)\). This allows our algorithm to extend to uses of the Li and Stephens model where one might want to dynamically edit the reference panel. The exception occurs when \(|\phi _i| = \frac{k}{2}\): here it is not absolutely necessary to maintain the formalism that the stored indices are actually those of the minor allele.
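The constant-time edit relies on the minor-allele indices being stored unordered: removal can then swap the target entry with the last one and pop. A sketch of this bookkeeping follows; the position map is our own illustrative device, not necessarily the paper's implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

// O(1) per-site edits for an unordered minor-allele index list.
struct MinorSet {
  std::vector<std::size_t> indices;                  // unordered minor indices
  std::unordered_map<std::size_t, std::size_t> pos;  // haplotype index -> slot

  void add(std::size_t j) {  // O(1) amortized append
    pos[j] = indices.size();
    indices.push_back(j);
  }
  void remove(std::size_t j) {  // O(1): swap with last entry, then pop
    std::size_t slot = pos.at(j);
    std::size_t last = indices.back();
    indices[slot] = last;
    pos[last] = slot;
    indices.pop_back();
    pos.erase(j);
  }
};
```

Because no ordering invariant is maintained, neither operation needs to shift any other entry.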
Implementation
Generating these data structures takes \(\mathcal {O}(nk)\) time but is embarrassingly parallel in n. Our “*.slls” data structure doubles as a succinct haplotype index which could be distributed instead of a large vcf record (though genotype likelihood compression is not accounted for). A vcf \(\rightarrow\) slls conversion tool is available in our GitHub repository.
Efficient dynamic programming
Lemma 2
The summation (8) is calculable using strictly fewer than k summands.
Proof
Lemma 3
If \(j \notin \phi _i\) and \(j \notin \phi _{i-1}\), then \(S_i\) can be calculated without knowing \(p_{i-1}[j]\) and \(p_i[j]\). If \(j \notin \phi _{i-1}\) and \(j' \ne j\), then \(p_i[j']\) can be calculated without knowing \(p_{i-1}[j]\).
Proof
Corollary 2
The recurrences (8) and the minimum set of recurrences (7) needed to compute (8) can be evaluated in \(\mathcal {O}(|\phi _i|)\) time, assuming that \(p_{i-1}[j]\) have been computed \(\forall j \in \phi _i\).
We address the assumption on prior calculation of the necessary \(p_{i-1}[j]\)’s in "Lazy evaluation of dynamic programming rows" section.
Time complexity
Recall that we defined \(\mathbb {E}[\overline{f}](k)\) as the expected mean minor allele frequency in a sample of size k. Suppose that it is comparatively trivial to calculate the missing \(p_{i-1}[j]\) values. Then by Corollary 2 the procedure in Eq. (12) has expected time complexity \(\mathcal {O}\left( \sum _i \left| \phi _i\right| \right) = \mathcal {O}\left( n\,\mathbb {E}[\overline{f}](k)\right)\).
Lazy evaluation of dynamic programming rows
Corollary 2 was conditioned on the assumption that specific forward probabilities had already been evaluated. We will describe a second algorithm which performs this task efficiently by avoiding performing any arithmetic which will prove unnecessary at future steps.^{2}
Equivalence classes of longest major allele suffixes
Lemma 4
Suppose that \(h_j \notin \phi _{\ell } \;\cup \; \phi _{\ell + 1} \;\cup \; \ldots \;\cup \; \phi _{i-1}\). Then the dynamic programming matrix entries \(p_\ell [j],\; p_{\ell + 1}[j],\; \ldots ,\; p_{i-1}[j]\) need not be calculated in order to calculate \(S_\ell ,\; S_{\ell + 1},\; \ldots ,\; S_{i-1}\).
Proof
By repeated application of Lemma 3. \(\square\)
Corollary 3
Under the same assumption on j, \(p_\ell [j],\; p_{\ell + 1}[j],\; \ldots ,\; p_{i-1}[j]\) need not be calculated in order to calculate \(F_{\ell + 1},\; \ldots ,\; F_{i}\). This is easily seen by definition of \(F_i\).
Lemma 5
Suppose that \(p_{\ell - 1}[j]\) is known, and \(x_j \notin \phi _{\ell } \;\cup \; \phi _{\ell + 1} \;\cup \; \ldots \;\cup \; \phi _{i-1}\). Then \(p_{i-1}[j]\) can be calculated in the time which it takes to calculate \(F_{i-1} \circ \ldots \circ F_{\ell }\).
Proof
\(p_{i-1}[j] = F_{i-1}\circ \ldots \circ F_{\ell }(p_{\ell - 1}[j])\) \(\square\)
It is immediately clear that calculating the \(p_i[j]\) lends itself well to lazy evaluation. Specifically, the entries for \(x_j \notin \phi _{i}\) need not be evaluated at step i. Therefore, if we can aggregate the work of calculating these data at a later iteration of the algorithm, and perform it only if it proves necessary, we can potentially save a considerable amount of computation.
Definition 2
(Longest major allele suffix classes) Define \(E_{\ell \rightarrow i-1} = \phi _{\ell - 1} \cap \left[ \bigcup _{\iota = \ell }^{i-1} \phi _\iota \right] ^c\). That is, let \(E_{\ell \rightarrow i-1}\) be the class of all haplotypes whose sequence up to site \(i-1\) shares the suffix from \(\ell\) to \(i-1\) inclusive consisting only of major alleles, but lacks any longer suffix composed only of major alleles.
Remark 1
\(E_{\ell \rightarrow i-1}\) is the set of all \(h_{j}\) for which \(p_{\ell - 1}[j]\) was needed to calculate \(S_{\ell - 1}\), but no \(p_{(\cdot )}[j]\) has been needed to calculate any \(S_{(\cdot )}\) since.
Note that for each i, the equivalence classes \(E_{\ell \rightarrow i-1}\) form a disjoint cover of the set of all haplotypes \(h_j \in H\).
Remark 2
\(\forall h_j \in E_{\ell \rightarrow i-1},\; p_{i-1}[j] = F_{i-1}\circ \ldots \circ F_{\ell }(p_{\ell - 1}[j])\)
Definition 3
Write \(F_{a \rightarrow b}\) as shorthand for \(F_b \circ \ldots \circ F_a\).
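The reason the shorthand \(F_{a \rightarrow b}\) is useful is that the update maps can be composed cheaply. As a hedged sketch, suppose each update acts on a lagging entry as an affine map \(x \mapsto ax + b\) (the coefficients would depend on the site's recombination, mutation, and running-sum values, whose exact forms are omitted here); then composition costs \(\mathcal {O}(1)\):

```cpp
#include <cassert>

// Affine update map F(x) = a*x + b. Affine maps are closed under
// composition, so a composed map F_{a -> b} stays a two-number object.
struct AffineMap {
  double a, b;
  double operator()(double x) const { return a * x + b; }
};

// (G o F)(x) = G(F(x)) = (g.a * f.a) * x + (g.a * f.b + g.b): O(1) to form.
AffineMap compose(const AffineMap& g, const AffineMap& f) {
  return {g.a * f.a, g.a * f.b + g.b};
}
```

This constant-time composability is exactly the second favourable condition listed in "Generalizability of algorithm" section.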
The lazy evaluation algorithm
Our lazy evaluation algorithm aims to:
 1.
Never evaluate \(p_i[j]\) explicitly unless \(h_j \in \phi _i\).
 2.
Amortize the calculations \(p_i[j] = f_i \circ F_{i-1} \circ \ldots \circ F_{\ell }(p_{\ell - 1}[j])\) over all \(h_j \in E_{\ell \rightarrow i-1}\).
 3.
Share the work of calculating subsequences of compositions of maps \(F_{i-1} \circ \ldots \circ F_{\ell }\) with other compositions of maps \(F_{i'-1} \circ \ldots \circ F_{\ell '}\) where \(\ell ' \le \ell\) and \(i' \ge i\).
Our algorithm achieves these aims by maintaining the following auxiliary state data, updated at each step i:
 1.
The partition of all haplotypes \(h_j \in H\) into equivalence classes \(E_{\ell \rightarrow i-1}\) according to the longest major allele suffix of the truncated haplotype at \(i-1\). See Definition 2 and Fig. 3.
 2.
The tuples \(T_\ell = (E_{\ell \rightarrow i-1}, F_{\ell \rightarrow m}, m)\) of equivalence classes \(E_{\ell \rightarrow i-1}\) stored with the linear map prefixes \(F_{\ell \rightarrow m} = F_{m} \circ \ldots \circ F_\ell\) of the map \(F_{\ell \rightarrow i-1}\) which would be necessary to fully calculate \(p_{i}[j]\) for the j they contain, and the index m of the largest index in this prefix. See Fig. 5.
 3.
The ordered sequence \(m_1> m_2 > \ldots\), in reverse order, of all distinct \(1 \le m \le i-1\) such that m is contained in some tuple. See Figs. 3 and 5.
 4.
The maps \(F_{\min \{\ell \} \rightarrow m_{\min }},\;\ldots ,\) \(F_{m_2 + 1 \rightarrow m_1},\) \(F_{m_1 + 1 \rightarrow i-1}\) which partition the longest prefix \(F_{i-1} \circ \ldots \circ F_{\min \{\ell \}}\) into disjoint submaps at the indices m. See Fig. 3. These are used to rapidly extend prefixes \(F_{\ell \rightarrow m}\) into prefixes \(F_{\ell \rightarrow i-1}\).
Definition 4
Impose a partial ordering < on the \(T_\ell = (E_{\ell \rightarrow i-1}, F_{\ell \rightarrow m}, m)\) by \(T_\ell < T_{\ell '}\) iff \(m < m'\). See Fig. 4.
We are now ready to describe our lazy evaluation algorithm which evaluates \(p_i[j] = f_{i} \circ F_{\ell \rightarrow i-1}(p_{\ell - 1}[j])\) just-in-time while fulfilling the aims listed at the top of this section, by using the auxiliary state data specified above.
 Step 1: Identifying the tuples containing elements of \(\phi _i\) (\(\mathcal {O}(|\phi _i|)\) time complexity)
 Step 2: Identifying the preparatory map suffix calculations to be performed (\(\mathcal {O}(|\phi _i|)\) time complexity)
 Step 3: Performing preparatory map suffix calculations (\(\mathcal {O}(d)\) time complexity)
 1
\(\mathcal {O}(d)\): Let \(m_1, \ldots , m_d\) be the last d indices m in the reverse-ordered list of indices \(m_1, m_2, \ldots\). By iteratively composing the maps \(F_{m_1 + 1 \rightarrow i-1}, F_{m_2 + 1 \rightarrow m_1}\) which we have already stored, construct the telescoping suffixes \(F_{m_1 + 1 \rightarrow i-1},\) \(F_{m_2 + 1 \rightarrow i-1}, \ldots ,\) \(F_{m_d + 1 \rightarrow i-1}\) needed to update the tuples \((E_{\ell \rightarrow i-1}, F_{\ell \rightarrow m}, m)\) to \((E_{\ell \rightarrow i-1}, F_{\ell \rightarrow i-1}, i-1)\).
 2
\(\mathcal {O}(d)\): For each \(m_t \in \{m_1, \ldots , m_d\}\), choose an arbitrary tuple \((E_{\ell \rightarrow i-1}, F_{\ell \rightarrow m_t}, m_t)\) and update it to \((E_{\ell \rightarrow i-1}, F_{\ell \rightarrow i-1}, i-1)\).
 Step 4: Performing the deferred calculations for the tuples containing \(h_j \in \phi _i\) (\(\mathcal {O}(|\phi _i|)\) time complexity)
 Step 5: Calculating \(p_i[j]\) just-in-time (\(\mathcal {O}(|\phi _i|)\) time complexity)
Note: The calculation of interest is performed here.
 Step 6: Updating our equivalence class/update map prefix tuple auxiliary data structures (\(\mathcal {O}(|\phi _i| + d)\) time complexity)
 1.
Create the new tuple \((E_{i \rightarrow i}, F_{i \rightarrow i} = \text { identity map }, i)\).
 2.
Remove the \(h_j \in \phi _i\) from their equivalence classes \(E_{\ell \rightarrow i-1}\) and place them in the new equivalence class \(E_{i \rightarrow i}\). If this empties the equivalence class in question, delete its tuple. To keep memory use bounded by the number of haplotypes, our implementation uses an object pool to store these tuples.
 3.
If an index \(m_t\) no longer has any corresponding tuple, delete it, and furthermore replace the stored maps \(F_{m_{t-1} + 1 \rightarrow m_t}\) and \(F_{m_t + 1 \rightarrow m_{t + 1}}\) with a single map \(F_{m_{t-1} + 1 \rightarrow m_{t+1}}\). This step is added to reduce the upper bound on the maximum possible number of compositions of maps which are performed in any given step.
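Step 3's construction of the telescoping suffixes can be sketched as follows, again treating the update maps as affine maps (an assumption for illustration; the scheme needs only that the maps compose in constant time). Composing the d stored partition maps left-to-right yields every suffix \(F_{m_t + 1 \rightarrow i-1}\) in \(\mathcal {O}(d)\) total compositions.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for the paper's update maps: F(x) = a*x + b, O(1) to compose.
struct AffineMap {
  double a, b;
};
AffineMap compose(const AffineMap& g, const AffineMap& f) {
  return {g.a * f.a, g.a * f.b + g.b};
}

// parts[0] = F_{m_1+1 -> i-1}, parts[t] = F_{m_{t+1}+1 -> m_t} for t >= 1.
// Returns suffixes[t] = F_{m_{t+1}+1 -> i-1}, built in O(d) compositions.
std::vector<AffineMap> telescope(const std::vector<AffineMap>& parts) {
  std::vector<AffineMap> suffixes;
  AffineMap acc = parts.front();
  suffixes.push_back(acc);
  for (std::size_t t = 1; t < parts.size(); ++t) {
    acc = compose(acc, parts[t]);  // extend the suffix one block leftward
    suffixes.push_back(acc);
  }
  return suffixes;
}
```

Each tuple sharing an index \(m_t\) then receives the same precomputed suffix, which is how the work is amortized across equivalence classes.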
Lemma 6
The number of nonempty equivalence classes \(E_{\ell \rightarrow i-1}\) in existence at any iteration i of the algorithm is bounded by the number of haplotypes k.
Proof
Trivial but worth noting. \(\square\)
Lemma 7
The number of unique indices m in existence at any iteration i of the algorithm is bounded by the number of nonempty equivalence classes \(E_{\ell \rightarrow i-1}\).
Results
Implementation
Our algorithm was implemented as a C++ library located at https://github.com/yoheirosen/sublinearLiStephens. Details of the lazy evaluation algorithm can be found there.
We also implemented the linear time forward algorithm for the haploid Li and Stephens model in C++ so as to evaluate the two on an identical footing. Profiling was performed using a single Intel Xeon X7560 core running at 2.3 GHz on a shared memory machine. Our reference panels H were the phased haplotypes from the 1000 Genomes [10] phase 3 vcf records for chromosome 22 and subsamples thereof. Haplotypes o were randomly generated simulated descendants.
Minor allele frequency distribution for the 1000 Genomes dataset
Comparison of our algorithm with the linear time forward algorithm
In order to compare the dependence of our algorithm’s runtime on haplotype panel size k against that of the standard linear LS forward algorithm, we measured the CPU time per genetic site of both across a range of haplotype panel sizes from 30 to 5008. Haplotype panels spanning this range of sizes were subsampled from the 1000 Genomes phase 3 vcf records and loaded into memory in both uncompressed and our column-sparse-row format. Random sequences were sampled using a copying model with mutation and recombination, and the classical forward algorithm was run back to back with our algorithm for the same random sequence and same subsampled haplotype panel. Each set of runs was performed in triplicate to reduce stochastic error.
For data points where we used all 1000 Genomes project haplotypes (\(k = 5008\)), on average, time per site is 37 μs for our algorithm and 1308 μs for the linear LS algorithm. For the forthcoming 100,000 Genomes Project, these numbers can be extrapolated to 251 μs for our algorithm and 260,760 μs for the linear LS algorithm.
Lazy evaluation of dynamic programming rows
Sparse haplotype encoding
Generating our sparse vectors
We generated the haplotype panel data structures from "Sparse representation of haplotypes" section using the vcf-encoding tool vcf2slls which we provide. We built indices with multiallelic sites, which increases their time and memory profile relative to the results in "Minor allele frequency distribution for the 1000 Genomes dataset" section but allows direct comparison to vcf records. Encoding of chromosome 22 was completed in 38 min on a single CPU core. Use of M CPU cores will reduce runtime by a factor proportional to M.
Size of sparse haplotype index
In uncompressed form, our *.slls index for chromosome 22 of the 1000 Genomes dataset, which uses uint16_t’s to encode haplotype ranks, was 285 MB in size versus 11 GB for the vcf record. When compressed with gzip, the same index was 67 MB in size versus 205 MB for the vcf record.
In the interest of speed (both for our algorithm and the \(\mathcal {O}(nk)\) algorithm) our experiments loaded entire chromosome sparse matrices into memory and stored haplotype indices as uint64_t’s. This requires on the order of 1 GB memory for chromosome 22. For long chromosomes or larger reference panels on low memory machines, the algorithm can operate by streaming sequential chunks of the reference panel.
Discussion and conclusions
To the best of our knowledge, ours is the first forward algorithm for any haplotype model to attain sublinear time complexity with respect to reference panel size. Our algorithms could be incorporated into haplotype inference strategies by interfacing with our C++ library. This opens the potential for tools which are tractable on haplotype reference panels at the scale of current 100,000 to 1,000,000+ sample sequencing projects.
Applications which use individual forward probabilities
Our algorithm attains its runtime specifically for the problem of calculating the single overall probability \(P(o \mid H,\rho ,\mu )\) and does not compute all nk forward probabilities. We can prove that if m specific forward probabilities are also required as output, and if the time complexity of our algorithm is \(\mathcal {O}(\sum _i\left| \phi _i \right| )\), then the time complexity of the algorithm which also returns the m forward probabilities is \(\mathcal {O}(\sum _i\left| \phi _i \right| + m)\).
In general, haplotype phasing or genotype imputation tools use stochastic traceback or other similar sampling algorithms. The standard algorithm for stochastic traceback samples states from the full posterior distribution and therefore requires all forward probabilities. The size of the algorithm's output, and therefore a lower bound on its runtime, is \(\mathcal {O}(nk)\). The same is true for many applications of the forward–backward algorithm.
There are two possible approaches which might allow runtime sublinear in k for these applications. Using stochastic traceback as an example, the first is to devise an \(\mathcal {O}(f(m))\) sampling algorithm which uses \(m = g(k)\) forward probabilities such that \(\mathcal {O}(f \circ g(k)) < \mathcal {O}(k)\). The second is to succinctly represent forward probabilities such that nested sums of the nk forward probabilities can be queried from \(\mathcal {O}(\sum _i |\phi _i|) < \mathcal {O}(nk)\) data. This should be possible, perhaps using the positional Burrows–Wheeler transform [11] as in [8], since we have already devised a forward algorithm with this property for a different model in [12].
Generalizability of algorithm
The optimizations which we have made are not strictly specific to the monoploid Li and Stephens algorithm. Necessary conditions for our reduction in the time complexity of the recurrence relations are
Condition 1
The number of distinct transition probabilities is constant with respect to number of states k.
Condition 2
The number of distinct emission probabilities is constant with respect to number of states k.
Favourable conditions for efficient time complexity of the lazy evaluation algorithm are
Condition 1
The number of unique update maps added per step is constant with respect to number of states k.
Condition 2
The update map extension operation is composition of functions of a class where composition is constanttime with respect to number of states k.
The reduction in time complexity of the recurrence relations depends on the Markov property; however, we hypothesize that the delayed evaluation needs only the semi-Markov property.
Other haplotype forward algorithms
Our optimizations are of immediate interest for other haplotype copying models. The following related algorithms have been explored without implementation.
Example 1
(Diploid Li and Stephens) We have yet to implement this model but expect an average runtime at least subquadratic in reference panel size k. We build on the statement of the model and its optimizations in [13]. We have found the following recurrences, which we believe will work when combined with a system of lazy evaluation algorithms:
Lemma 8
Implementing and verifying the runtime of this extension of our algorithm will be among our next steps.
Example 2
(Multipopulation Li and Stephens) [14] We maintain separate sparse haplotype panel representations \(\phi ^A_i(o_i)\) and \(\phi ^B_i(o_i)\) and separate lazy evaluation mechanisms for the two populations A and B. Expected runtime guarantees are similar.
This model, and versions for \(> 2\) populations, will be important in large sequencing cohorts (such as NHLBI TOPMed) where assuming a single related population is unrealistic.
Example 3
(More detailed mutation model) It may also be desirable to model distinct mutation probabilities for different pairs of alleles at multiallelic sites. Runtime is worse than the biallelic model but remains average case sublinear.
Example 4
(Sequence graph Li and Stephens analogue) In [12] we described a hidden Markov model for haplotype copying with recombination but not mutation in the context of sequence graphs. Assuming we can decompose our graph into nested sites, we can achieve a fast forward algorithm with mutation. An analogue of our row-sparse-column matrix compression for sequence graphs is being actively developed within our research group.
While a haplotype HMM forward algorithm alone might have niche applications in bioinformatics, we expect that our techniques are generalizable to speeding up other forward algorithm-type sequence analysis algorithms.
Footnotes
 1. We observe similar results in our own analyses in "Minor allele frequency distribution for the 1000 Genomes dataset" section.
Declarations
Authors' contributions
YR designed and prototyped the algorithm described in this article and performed its speed benchmarking. BP conceived of the theoretical need for such an algorithm and designed its integration into ongoing variant calling research. Both authors read and approved the final manuscript.
Acknowledgements
This work was supported by the National Human Genome Research Institute of the National Institutes of Health under Award Number 5U54HG007990, the National Heart, Lung, and Blood Institute of the National Institutes of Health under Award Number 1U01HL13718301, and grants from the W.M. Keck foundation and the Simons Foundation. We would like to thank Jordan Eizenga for his helpful discussions throughout the development of this work.
Competing interests
The authors declare that they have no competing interests.
Availability of data and materials
The dataset used as template haplotypes for the performance profiling during the current study is available in the 1000 Genomes Phase 3 variant call release, ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/. The runtime data produced during the current study are available from the corresponding author on reasonable request. The randomly generated subsampled haplotype cohorts and haplotypes used in these analyses persisted only in memory and were not saved to disk due to their immense aggregate size.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Authors’ Affiliations
References
 Li N, Stephens M. Modeling linkage disequilibrium and identifying recombination hotspots using single-nucleotide polymorphism data. Genetics. 2003;165(4):2213–33.
 Kingman JFC. The coalescent. Stoch Process Appl. 1982;13(3):235–48.
 Loh PR, Danecek P, Palamara PF, Fuchsberger C, Reshef YA, Finucane HK, Schoenherr S, Forer L, McCarthy S, Abecasis GR. Reference-based phasing using the haplotype reference consortium panel. Nat Genet. 2016;48(11):1443.
 Browning BL, Browning SR. A unified approach to genotype imputation and haplotype-phase inference for large data sets of trios and unrelated individuals. Am J Human Genet. 2009;84(2):210–23.
 Williams AL, Patterson N, Glessner J, Hakonarson H, Reich D. Phasing of many thousands of genotyped samples. Am J Human Genet. 2012;91(2):238–51.
 Delaneau O, Zagury JF, Marchini J. Improved whole-chromosome phasing for disease and population genetic studies. Nat Methods. 2013;10(1):5.
 O’Connell J, Sharp K, Shrine N, Wain L, Hall I, Tobin M, Zagury JF, Delaneau O, Marchini J. Haplotype estimation for biobank-scale data sets. Nat Genet. 2016;48(7):817.
 Lunter G. Fast haplotype matching in very large cohorts using the Li and Stephens model. bioRxiv 2016. https://doi.org/10.1101/048280. https://www.biorxiv.org/content/early/2016/04/12/048280.full.pdf.
 Keinan A, Clark AG. Recent explosive human population growth has resulted in an excess of rare genetic variants. Science. 2012;336(6082):740–3.
 1000 Genomes Project Consortium, et al. A global reference for human genetic variation. Nature. 2015;526(7571):68.
 Durbin R. Efficient haplotype matching and storage using the positional Burrows–Wheeler transform (PBWT). Bioinformatics. 2014;30(9):1266–72.
 Rosen Y, Eizenga J, Paten B. Modelling haplotypes with respect to reference cohort variation graphs. Bioinformatics. 2017;33(14):118–23.
 Li Y, Willer CJ, Ding J, Scheet P, Abecasis GR. MaCH: using sequence and genotype data to estimate haplotypes and unobserved genotypes. Genet Epidemiol. 2010;34(8):816–34.
 Donnelly P, Leslie S. The coalescent and its descendants. 2010. arXiv preprint arXiv:1006.1514.