 Research
 Open Access
Autovalidating von Neumann rejection sampling from small phylogenetic tree spaces
Algorithms for Molecular Biology volume 4, Article number: 1 (2009)
Abstract
Background
In phylogenetic inference one is interested in obtaining samples from the posterior distribution over the tree space on the basis of some observed DNA sequence data. One of the simplest sampling methods is the rejection sampler due to von Neumann. Here we introduce an autovalidating version of the rejection sampler, via interval analysis, to rigorously draw samples from posterior distributions over small phylogenetic tree spaces.
Results
The posterior samples from the autovalidating sampler are used to rigorously (i) estimate posterior probabilities for different rooted topologies based on mitochondrial DNA from human, chimpanzee and gorilla, (ii) conduct a nonparametric test of rate variation between protein-coding and tRNA-coding sites from three primates and (iii) obtain a posterior estimate of the human-Neanderthal divergence time.
Conclusion
This solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over rooted and unrooted small tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.
Background
Obtaining samples from a real-valued target density f^{•} (t) is a basic problem in statistical estimation. The target f^{•} (t): $\mathbb{T}$ ↦ ℝ maps n-dimensional real points in ℝ^{n} to real numbers in ℝ, i.e. t ∈ $\mathbb{T}$ ⊂ ℝ^{n}. In Bayesian phylogenetic estimation, we want to draw independent and identically distributed samples from a target posterior density on the space of phylogenetic trees. The standard point-valued or punctual Monte Carlo methods via conventional floating-point arithmetic are typically non-rigorous as they do not account for all sources of numerical error and are limited to evaluating the target at finitely many points. The standard approaches to sampling from the posterior density, especially over phylogenetic trees, rely on Markov chain Monte Carlo (MCMC) methods. Despite their asymptotic validity, it is nontrivial to guarantee that an MCMC algorithm has converged to stationarity [1], and thus MCMC convergence diagnostics on phylogenetic tree spaces are heuristic [2].
A more direct sampler that is capable of producing independent and identically distributed samples from the target density f^{•} (t):= f(t)/(N_{ f }), by only evaluating the target shape f(t) without knowing the normalizing constant ${N}_{f}:={\displaystyle {\int}_{\mathbb{T}}f(t)dt}$, is the von Neumann rejection sampler [3]. However, the limiting step in the rejection sampler is the construction of an envelope function $\widehat{g}$(t) that is not only greater than the target shape f(t):= N_{ f }f^{•} (t) at every t ∈ $\mathbb{T}$, but also easy to normalize and draw samples from. Moreover, a practical and efficient envelope function has to be as close to the target shape as possible from above. When an envelope function is constructed using point-valued methods, except for simple classes of targets, one cannot guarantee that the envelope function dominates the target shape globally.
None of the available samplers can rigorously produce independent and identically distributed samples from the posterior distribution over phylogenetic tree spaces, even for 3 or 4 taxa. We describe a new approach for rigorously drawing samples from a target posterior distribution over small phylogenetic tree spaces using the theory of interval analysis. This method can circumvent the problems associated with (i) heuristic convergence diagnostics in MCMC samplers and (ii) pseudo-envelopes constructed via non-rigorous point-valued methods in rejection samplers.
Informally, our method partitions the domain into boxes and uses interval analysis to rigorously bound the target shape in each box; then we use as envelope the simple function which takes on in each box the upper bound obtained for that box. It is easy to draw samples from the density corresponding to this step-function envelope. More formally, the method employs an interval extension of the target posterior shape f(t): $\mathbb{T}$ ↦ ℝ to produce rigorous enclosures of the range of f over each interval vector or box in an adaptive partition $\mathfrak{T}:=\{{t}^{(1)},{t}^{(2)},\mathrm{...},{t}^{(|\mathfrak{T}|)}\}$ of the tree space $\mathbb{T}$ = ∪_{ i }t^{(i)}. This partition is adaptively constructed by a priority queue. The interval-extended target shape maps boxes in $\mathbb{T}$ to intervals in ℝ. This image interval provides an upper bound for the global maximum and a lower bound for the global minimum of f over each element of the partition of $\mathbb{T}$. We use this information to construct an envelope as a simple function over the partition $\mathfrak{T}$. Using the Alias method [4] we efficiently propose samples from this normalized step-function envelope for von Neumann rejection sampling.
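To fix ideas, the following Python sketch mimics this construction on a toy one-dimensional target (it is our illustration, not MRS code): because the toy shape is monotonically decreasing, the exact maximum over each box plays the role of the interval-extension upper bound, and box proposal is by linear search where MRS uses the Alias method.

```python
import math
import random

def shape(t):
    # toy unnormalized target shape on [0, 3]; decreasing in t
    return math.exp(-t * t)

def step_envelope(n_boxes, lo=0.0, hi=3.0):
    # Exact upper bound of the decreasing shape over each box [a, b] is
    # shape(a); an interval extension would supply a rigorous (possibly
    # looser) bound playing the same role.
    w = (hi - lo) / n_boxes
    boxes = [(lo + i * w, lo + (i + 1) * w) for i in range(n_boxes)]
    heights = [shape(a) for a, _ in boxes]
    return boxes, heights

def moore_style_sampler(n, n_boxes=64, seed=1):
    rng = random.Random(seed)
    boxes, heights = step_envelope(n_boxes)
    weights = [(b - a) * h for (a, b), h in zip(boxes, heights)]
    total = sum(weights)
    out = []
    while len(out) < n:
        # propose a box with probability proportional to its envelope
        # mass (the Alias method would do this in O(1) per draw), then
        # a uniform point within the box
        r = rng.random() * total
        i = 0
        while i < len(weights) - 1 and r > weights[i]:
            r -= weights[i]
            i += 1
        a, b = boxes[i]
        v = rng.uniform(a, b)
        # von Neumann acceptance step against the step envelope
        if rng.random() * heights[i] <= shape(v):
            out.append(v)
    return out

samples = moore_style_sampler(2000)
```

The accepted points are independent and identically distributed draws from the normalized toy target; refining the partition raises the acceptance probability toward one.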
We call our method autovalidating because we employ interval methods to rigorously construct the envelope for a large class of target densities. The method was described in a more rudimentary form in [5]. Unlike many conventional samplers, each sample produced by our method is equivalent to a computer-assisted proof that it is drawn from the desired target, up to the pseudorandomness of the underlying, deterministic, pseudorandom number generator. MRS 0.1.2, a C++ class library for statistical set processing, is available from http://www.math.canterbury.ac.nz/~r.sainudiin/codes/mrs under the terms of the GNU General Public License.
The rest of the paper is organized as follows. In the Methods Section, we introduce (i) the von Neumann rejection sampler (RS), (ii) the phylogenetic estimation problem, (iii) interval analysis and (iv) an interval extension of the rejection sampler called the Moore rejection sampler (MRS) in honor of Ramon E. Moore, one of the influential founders of interval analysis [6]. In the Results Section, we employ MRS to rigorously draw samples from the posterior density over small tree spaces. Using one of the earliest primate mitochondrial DNA data sets, we use the posterior samples to estimate the posterior probability of each rooted tree topology and conduct a nonparametric test of rate variation between protein-coding and tRNA-coding sites. Using one of the latest data sets, we obtain a rigorous posterior estimate of the human-Neanderthal divergence time. We can also draw samples from the space of unrooted triplet and quartet trees. We conclude after a discussion of the method.
Methods
In the following sections, we first introduce the rejection sampler (RS) due to von Neumann [3]. Secondly, we describe the basic phylogenetic inference problem (e.g. [7–9]). Then, we introduce the basic principles of interval methods (e.g. [6, 10–13]). Finally, we construct interval extensions of RS to rigorously draw independent and identically distributed samples from small phylogenetic tree spaces. We leave the formal proofs to the Appendix for completeness.
Rejection sampler (RS)
Rejection sampling [3] is a Monte Carlo method to draw independent samples from a target random variable or random vector T with density f^{•} (t):= f(t)/N_{ f }, where t ∈ $\mathbb{T}$ ⊂ ℝ^{n}, i.e. T ~ f^{•}. The challenge is to draw the samples without any knowledge of the normalizing constant ${N}_{f}:={\displaystyle {\int}_{\mathbb{T}}f(t)dt}$. Typically the target f^{•} (t) is any density that is absolutely continuous with respect to the Lebesgue measure. The von Neumann rejection sampler (RS) can produce samples from T ~ f^{•} according to Algorithm 1 when provided with (i) a fundamental sampler that can produce independent samples from the Uniform [0, 1] random variable M with density given by the indicator function 1_{[0,1]}(m): ℝ ↦ ℝ, (ii) a target shape f(t): $\mathbb{T}$ ↦ ℝ, (iii) an envelope function $\widehat{g}(t):\mathbb{T}\mapsto \mathbb{R}$, such that,
$\widehat{g}(t) \ge f(t)$ for every t ∈ $\mathbb{T}$,
(iv)
a normalizing constant ${N}_{\widehat{g}}:={\displaystyle {\int}_{\mathbb{T}}\widehat{g}(t)dt}$, (v) a proposal density $g(t):={({N}_{\widehat{g}})}^{-1}\widehat{g}(t)$ over $\mathbb{T}$ from which independent samples can be drawn and finally (vi) f(t) and $\widehat{g}$(t) must be computable for any t ∈ $\mathbb{T}$.
input : (i) f; (ii) samplers for V ~ g and M ~ 1_{[0,1]}; (iii) $\widehat{g}$; (iv) integer MaxTrials;
output : (i) possibly one sample t from T ~ f^{•} and (ii) Trials
initialize: Trials ← 0; Success ← false; t ← ∅;
repeat //propose at most MaxTrials times until acceptance
v ← sample(g); //draw a sample v from RV V with density g
u ← $\widehat{g}$(v) · sample(1_{[0,1]}); //draw a sample u from RV U ~ Uniform[0, $\widehat{g}$(v)]
if u ≤ f(v) then //accept the proposed v and flag Success
t ← v; Success ← true
end
Trials ← Trials +1; //track the number of proposal trials so far
until Trials ≥ MaxTrials or Success = true;
return t and Trials
Algorithm 1: von Neumann RS
We use the Mersenne Twister pseudorandom number generator [14] to imitate independent samples from M ~ 1_{[0,1]}. The random variable T, if generated by Algorithm 1, is distributed according to f^{•} (e.g. [15]). Let A($\widehat{g}$) be the probability that a point proposed according to g gets accepted as an independent sample from f^{•} through the envelope function $\widehat{g}$. Observe that the envelope-specific acceptance probability A($\widehat{g}$) is the ratio of the integrals
$A(\widehat{g})=\frac{{N}_{f}}{{N}_{\widehat{g}}}=\frac{{\displaystyle {\int}_{\mathbb{T}}f(t)dt}}{{\displaystyle {\int}_{\mathbb{T}}\widehat{g}(t)dt}},$
and the number of proposals from g needed to obtain one sample from f^{•} is geometrically distributed with mean 1/A($\widehat{g}$) (e.g. [15]).
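For illustration, consider the shape f(t) = t(1 − t) of a Beta(2, 2) target on [0, 1] under the constant envelope $\widehat{g}$(t) = 1/4; then N_{f} = 1/6, N_{$\widehat{g}$} = 1/4 and A($\widehat{g}$) = 2/3. A minimal Python transcription of Algorithm 1 (our sketch; the function names are ours):

```python
import random

def von_neumann_rs(f, g_hat, lo, hi, rng, max_trials=10000):
    # Algorithm 1 with a constant envelope g_hat over [lo, hi], so the
    # proposal density g is Uniform[lo, hi]
    trials = 0
    while trials < max_trials:
        trials += 1
        v = rng.uniform(lo, hi)    # v ~ g
        u = g_hat * rng.random()   # u ~ Uniform[0, g_hat(v)]
        if u <= f(v):              # accept v as a draw from f/N_f
            return v, trials
    return None, trials

rng = random.Random(7)
f = lambda t: t * (1.0 - t)  # Beta(2,2) shape; N_f = 1/6 is never used
results = [von_neumann_rs(f, 0.25, 0.0, 1.0, rng) for _ in range(5000)]
draws = [v for v, _ in results]
accept_rate = len(draws) / sum(tr for _, tr in results)
```

The empirical acceptance rate approaches A($\widehat{g}$) = 2/3, and the mean number of trials per accepted draw approaches 1/A($\widehat{g}$) = 1.5.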
Phylogenetic estimation
In this section we briefly review phylogenetic estimation. A more detailed account can be found in [7–9]. Inferring the ancestral relationship among a set of extant species based on their DNA sequences is a basic problem in phylogenetic estimation. One can obtain the likelihood of a particular phylogenetic tree that relates the extant species of interest at its leaves by superimposing a continuous time Markov chain model of DNA substitution upon that tree. The length of an edge (branch length) connecting two nodes (species) in the tree represents the amount of evolutionary time (divergence) between the two species. The internal nodes represent ancestral species. During the likelihood computation, one needs to integrate over all possible states at the unobserved ancestral nodes.
Next we give a brief introduction to some phylogenetic nomenclature. A phylogenetic tree is said to be rooted if one of the internal nodes, say node r, is identified as the root of the tree; otherwise it is said to be unrooted. The rooted tree is conventionally depicted with the root node r at the top. The four topology-labeled, three-leaved, rooted trees, namely, ^{0}t, ^{1}t, ^{2}t and ^{3}t, with leaf label set {1, 2, 3}, are depicted in Figure 1(i)–(iv). The unrooted, three-leaved tree with topology label 4, or the unrooted triplet ^{4}t, is shown in Figure 1(v). For each tree, the terminal branch lengths, i.e. the branch lengths leading to the leaf nodes, have to be strictly positive and the internal branch lengths have to be nonnegative. Our rooted triplets (Figure 1(i)–(iv)) are said to satisfy the molecular clock, since the branch lengths of each ^{k}t, where k ∈ {0, 1, 2, 3}, satisfy the constraint that the distance from the root node r to each of the leaf nodes is equal to ^{k}t_{0} + ^{k}t_{1} with ^{k}t_{1} > 0 and ^{k}t_{0} ≥ 0.
Likelihood of a tree
Let d denote a homologous set of sequences of length v with character set $\mathfrak{U}=\{{a}_{1},{a}_{2},\mathrm{...},{a}_{|\mathfrak{U}|}\}$ from n taxa. We think of d as an n × v matrix with entries from $\mathfrak{U}$. We are interested in estimating the branch lengths and topologies of the tree underlying our observed d. Let b_{ k }denote the number of branches and s_{ k }denote the number of nodes of a tree with a specific topology or branching order labeled by k. Thus, for a given topology label k, n labeled leaves and b_{ k }many branches, the labeled tree ^{k}t is the topology-labeled vector of branch lengths (^{k}t_{1},...,${{}^{k}t}_{{b}_{k}}$) contained in the topology-labeled tree space ${}^{k}\mathbb{T}$, i.e.,
${}^{k}t:=({}^{k}{t}_{1},\mathrm{...},{}^{k}{t}_{{b}_{k}})\in {}^{k}\mathbb{T}\subseteq {\mathbb{R}}_{+}^{{b}_{k}}.$
We superimpose a continuous time Markov chain model of character mutation, with state space $\mathfrak{U}$, upon each branch of the tree. Such a model prescribes ${P}_{{a}_{i},{a}_{j}}(t)$, the probability of mutation from a character a_{ i }∈ $\mathfrak{U}$ to another character a_{ j }∈ $\mathfrak{U}$ in time t. Using such a transition probability we may compute ℓ_{ q }(^{k}t), the likelihood of the data d at site q ∈ {1,..., v}, or the qth column of d, via the postorder traversal over the labeled tree with branch lengths ^{k}t := (^{k}t_{1}, ^{k}t_{2},...,${{}^{k}t}_{{b}_{k}}$). This amounts to the sum-product Algorithm 2 [16] that associates with each node h ∈ {1,..., s_{ k }} of ^{k}t subtending ℏ many descendants a partial likelihood vector, ${l}_{h}:=({l}_{h}^{({a}_{1})},{l}_{h}^{({a}_{2})},\mathrm{...},{l}_{h}^{({a}_{|\mathfrak{U}|})})\in {\mathbb{R}}^{|\mathfrak{U}|}$, and specifies the length of the branch leading to its ancestor as ^{k}t_{ h }.
input : (i) a labeled tree with branch lengths ^{k}t := (^{k}t_{1}, ^{k}t_{2},...,${{\displaystyle {}^{k}t}}_{{b}_{k}}$), (ii) transition probability ${P}_{{a}_{i},{a}_{j}}(t)$ for any a_{ i }, a_{ j }∈ $\mathfrak{U}$, (iii) stationary distribution π(a_{ i }) over each character a_{ i }∈ $\mathfrak{U}$, (iv) site pattern or data d_{•, q}at site q
output : ${l}_{{d}_{\bullet ,q}}$(^{k}t), the likelihood at site q with pattern d_{•, q}
initialize: For a leaf node h with observed character a_{ i }= d_{h, q}at site q, set ${l}_{h}^{({a}_{i})}$ = 1 and ${l}_{h}^{({a}_{j})}$ = 0 for all j ≠ i. For any internal node h, set l_{ h }:= (1, 1,...,1).
recurse : compute l_{ h }for each subterminal node h, then those of their ancestors recursively to finally compute l_{ r }for the root node r to obtain the likelihood for site q,
${l}_{{d}_{\bullet ,q}}({}^{k}t)={\displaystyle \sum _{a\in \mathfrak{U}}\pi (a)\,{l}_{r}^{(a)}}.$
For an internal node h with descendants s_{1}, s_{2},..., s_{ℏ},
${l}_{h}^{({a}_{i})}={\displaystyle \prod _{j=1}^{\hbar}{\displaystyle \sum _{a\in \mathfrak{U}}{P}_{{a}_{i},a}({}^{k}{t}_{{s}_{j}})\,{l}_{{s}_{j}}^{(a)}}}.$
Algorithm 2: Likelihood by postorder traversal
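On the unrooted triplet, where a single internal node joins the three leaves, Algorithm 2 collapses to one sum over the internal state. A Python sketch (ours), using the two-state symmetric transition probabilities of Model 1 below:

```python
import math
from itertools import product

def p2(i, j, t):
    # two-state symmetric transition probability (Model 1 below):
    # off-diagonal entry a(t) = (1 - exp(-2t))/2
    a = (1.0 - math.exp(-2.0 * t)) / 2.0
    return a if i != j else 1.0 - a

def site_likelihood(pattern, lengths, pi=(0.5, 0.5)):
    # postorder sum-product on the unrooted triplet: the leaf partial
    # likelihoods are indicator vectors, so each per-branch sum over
    # leaf states collapses to a single transition probability, and the
    # root partials are weighted by the stationary distribution pi
    total = 0.0
    for r in (0, 1):                      # state at the internal node
        term = pi[r]
        for x, t in zip(pattern, lengths):
            term *= p2(r, x, t)
        total += term
    return total

lengths = (0.3, 0.1, 0.7)
total_prob = sum(site_likelihood(p, lengths) for p in product((0, 1), repeat=3))
```

Summing over all 2^3 site patterns returns total probability one, as it must for a proper transition model.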
Assuming independence across all v sites, we obtain the likelihood function for the given data d by multiplying the site-specific likelihoods
${l}_{d}({}^{k}t)={\displaystyle \prod _{q=1}^{v}{l}_{{d}_{\bullet ,q}}({}^{k}t)}.\phantom{\rule{2em}{0ex}}(2)$
The maximum likelihood estimate is a point estimate (single best guess) of the unknown phylogenetic tree on the basis of the observed data d and it is
$\widehat{{}^{k}t}:=\arg {\max }_{{}^{k}t}\ {l}_{d}({}^{k}t).$
The simplest probability models for character mutation are continuous time Markov chains with finite state space $\mathfrak{U}$. We introduce three such models employed in this study next. We only derive the likelihood functions for the simplest model with just two characters, as it is thought to well represent the core problems in phylogenetic estimation (see e.g. [17]).
Posterior density of a tree
The posterior density f^{•} (^{k}t) at tree ^{k}t conditional on data d is the normalized product of the likelihood l_{ d }(^{k}t) and the prior density p(^{k}t) over a given tree space ${}^{\mathfrak{K}}\mathbb{T}$:
${f}^{\bullet}({}^{k}t)=\frac{{l}_{d}({}^{k}t)\,p({}^{k}t)}{{\displaystyle {\int}_{{}^{\mathfrak{K}}\mathbb{T}}{l}_{d}({}^{k}t)\,p({}^{k}t)\,d({}^{k}t)}}.\phantom{\rule{2em}{0ex}}(3)$
We assume a uniform prior density over a large box or a union of large boxes in a given tree space ${}^{\mathfrak{K}}\mathbb{T}$. Typically, the sides of the box giving the range of branch lengths are extremely long, say [0, 10] or [10^{−10}, 10]. The branch lengths are measured in units of expected number of DNA substitutions per site, and therefore the support of our uniform prior density over ${}^{\mathfrak{K}}\mathbb{T}$ contains the biologically relevant branch lengths. If ${}^{\mathfrak{K}}\mathbb{T}$ is a union of distinct topologies then we let our prior be an equally weighted finite mixture of uniform densities over large boxes in each topology. Naturally, other prior densities are possible, especially in the presence of additional information. We choose flat priors so that the target posterior shape $f({}^{k}t)={f}^{\bullet}({}^{k}t){\displaystyle {\int}_{{}^{\mathfrak{K}}\mathbb{T}}{l}_{d}({}^{k}t)p({}^{k}t)\,d({}^{k}t)}$ has the convenient interpretation of being the likelihood function in the absence of prior information beyond a compact support specification.
Likelihood of a triplet under the Cavender-Farris-Neyman (CFN) model
We now describe the simplest model for the evolution of binary sequences under a symmetric transition matrix over all branches of a tree. This model has been used by authors in various fields including molecular biology, information theory, operations research and statistical physics; for references see [7, 18]. This model is referred to as the Cavender-Farris-Neyman (CFN) model in molecular biology, although in other fields it has been referred to as the 'on-off machine', the 'symmetric binary channel' and the 'symmetric two-state Poisson model'. Although the relatively tractable CFN model itself is not popular in applied molecular evolution, the lessons learned under the CFN model often extend to more realistic models of DNA mutation (e.g. [17]). Thus, our first stop is the CFN model.
Model 1 (Cavender-Farris-Neyman (CFN) model) Under the CFN mutation model, only pyrimidines and purines, denoted respectively by Y:= {C, T} and R:= {A, G}, are distinguished as evolutionary states among the four nucleotides {A, G, C, T}, i.e. $\mathfrak{U}$ = {Y, R}. Time t is measured by the expected number of substitutions in this homogeneous continuous time Markov chain with rate matrix (rows and columns indexed by (Y, R)):
$Q=\left(\begin{array}{cc}-1& 1\\ 1& -1\end{array}\right),$
and transition probability matrix P(t) = e^{Qt}:
$P(t)=\frac{1}{2}\left(\begin{array}{cc}1+{e}^{-2t}& 1-{e}^{-2t}\\ 1-{e}^{-2t}& 1+{e}^{-2t}\end{array}\right).$
Thus, the probability that Y mutates to R, or vice versa, in time t is a(t) := (1 − e^{−2t})/2. The stationary distribution is uniform on $\mathfrak{U}$, i.e. π(R) = π(Y) = 1/2.
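The claimed form of P(t) is easy to check numerically; the sketch below (ours) verifies stochasticity, the identity at t = 0, the semigroup property P(s)P(t) = P(s + t) and the uniform limiting distribution:

```python
import math

def cfn_P(t):
    # CFN transition matrix P(t) = exp(Qt), states ordered (Y, R):
    # off-diagonal a(t) = (1 - exp(-2t))/2, diagonal 1 - a(t)
    a = (1.0 - math.exp(-2.0 * t)) / 2.0
    return [[1.0 - a, a], [a, 1.0 - a]]

def matmul2(p, q):
    # product of two 2x2 matrices
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ps_pt = matmul2(cfn_P(0.2), cfn_P(0.5))  # should equal P(0.7)
p_st = cfn_P(0.7)
```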
When there are only three taxa, there are five tree topologies of interest as depicted in Figure 1. There are 2^{3} = 8 possible site patterns, i.e. for each site q ∈ {1, 2,..., v}, the qth column of the data d, denoted by d_{•, q}, is one of eight possibilities, numbered 0, 1,...,7 for convenience:
RRR, YYY, RRY, YYR, YRR, RYY, RYR, YRY.\phantom{\rule{2em}{0ex}}(4)
Given a multiple sequence alignment data d from 3 taxa at v homologous sites, i.e. d ∈ {Y, R}^{3 × v}, the likelihood function over the tree space ${}^{k}\mathbb{T}$ is simplified from (2) as follows:
${l}_{d}({}^{k}t)={\displaystyle \prod _{i=0}^{7}{({l}_{i}({}^{k}t))}^{{c}_{i}}},\phantom{\rule{2em}{0ex}}(5)$
where l_{ i }(^{k}t) is the likelihood of the ith site pattern as in (4) and c_{ i }is the count of sites with pattern i. In fact, l_{ i }(^{k}t) = P(i | ^{k}t) is the probability of observing site pattern i given topology label k and branch lengths ^{k}t, and similarly l_{ d }(^{k}t) = P(d | ^{k}t).
Consider the unrooted tree space with a single topology labeled 4 and three nonnegative terminal branch lengths ^{4}t = (^{4}t_{1}, ^{4}t_{2}, ^{4}t_{3}) ∈ ${\mathbb{R}}_{+}^{3}$ as shown in Figure 1(v). An application of Algorithm 2 to compute the likelihoods l_{0}(^{4}t), l_{1}(^{4}t),..., l_{7}(^{4}t), as derived in (19)–(25), reveals symmetry. There are in fact four minimally sufficient site pattern classes, namely, xxx, xxy, yxx and xyx, where x and y simply denote distinct characters in the alphabet set $\mathfrak{U}$ = {R, Y}. With a_{i} := a(^{4}t_{i}) = (1 − e^{−2 ^{4}t_{i}})/2, the corresponding likelihoods are:
$\begin{array}{l}{l}_{xxx}({}^{4}t)=\frac{1}{2}\left((1-{a}_{1})(1-{a}_{2})(1-{a}_{3})+{a}_{1}{a}_{2}{a}_{3}\right),\\ {l}_{xxy}({}^{4}t)=\frac{1}{2}\left((1-{a}_{1})(1-{a}_{2}){a}_{3}+{a}_{1}{a}_{2}(1-{a}_{3})\right),\\ {l}_{yxx}({}^{4}t)=\frac{1}{2}\left({a}_{1}(1-{a}_{2})(1-{a}_{3})+(1-{a}_{1}){a}_{2}{a}_{3}\right),\\ {l}_{xyx}({}^{4}t)=\frac{1}{2}\left((1-{a}_{1}){a}_{2}(1-{a}_{3})+{a}_{1}(1-{a}_{2}){a}_{3}\right).\end{array}\phantom{\rule{2em}{0ex}}(6)$
Therefore, the multiple sequence alignment data d from three taxa evolving under Model 1 can be summarized by the minimal sufficient site pattern counts
(c_{xxx}, c_{xxy}, c_{yxx}, c_{xyx}):= (c_{0} + c_{1}, c_{2} + c_{3}, c_{4} + c_{5}, c_{6} + c_{7}),
which simplifies (5) to:
${l}_{d}({}^{4}t)={({l}_{xxx}({}^{4}t))}^{{c}_{xxx}}{({l}_{xxy}({}^{4}t))}^{{c}_{xxy}}{({l}_{yxx}({}^{4}t))}^{{c}_{yxx}}{({l}_{xyx}({}^{4}t))}^{{c}_{xyx}}.\phantom{\rule{2em}{0ex}}(7)$
Note that the probability of our sample space with eight patterns given in (4) is ${\sum}_{i=0}^{7}{l}_{i}({}^{4}t)=1$. Our likelihoods are half of those in [17], which are prescribed over a sample space of only four classes of patterns: {0, 1}, {2, 3}, {4, 5} and {6, 7}. This is because we distinguish the sample space of the data from that of the minimal sufficient statistics. We compute the rooted topology-specific likelihood functions, i.e. l(^{k}t) for k ∈ {0, 1, 2, 3} (Figure 1), by substituting the appropriate constraints on branch lengths in ${}^{4}\mathbb{T}={\mathbb{R}}_{+}^{3}$, the space of unrooted triplets.
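These minimal sufficient statistics and class likelihoods can be transcribed directly; the Python sketch below (ours) recomputes the class likelihoods by summing over the internal state and evaluates the log of the simplified product likelihood from the class counts:

```python
import math

def a_sub(t):
    # CFN mutation probability a(t) = (1 - exp(-2t))/2
    return (1.0 - math.exp(-2.0 * t)) / 2.0

def class_likelihoods(t1, t2, t3):
    # probability of one representative pattern per minimal class,
    # obtained by summing over the internal node state (pi = 1/2 each)
    a1, a2, a3 = a_sub(t1), a_sub(t2), a_sub(t3)
    l_xxx = 0.5 * ((1-a1)*(1-a2)*(1-a3) + a1*a2*a3)
    l_xxy = 0.5 * ((1-a1)*(1-a2)*a3 + a1*a2*(1-a3))
    l_yxx = 0.5 * (a1*(1-a2)*(1-a3) + (1-a1)*a2*a3)
    l_xyx = 0.5 * ((1-a1)*a2*(1-a3) + a1*(1-a2)*a3)
    return l_xxx, l_xxy, l_yxx, l_xyx

def log_likelihood(counts, t1, t2, t3):
    # log of the simplified product likelihood over the four classes;
    # counts = (c_xxx, c_xxy, c_yxx, c_xyx)
    return sum(c * math.log(l)
               for c, l in zip(counts, class_likelihoods(t1, t2, t3)))

ls = class_likelihoods(0.3, 0.1, 0.7)
```

Since each class encodes two of the eight patterns, twice the sum of the four class likelihoods equals one.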
Likelihood of a triplet under the Jukes-Cantor (JC) model
The r-state symmetric model introduced in [19] is specified by the r × r rate matrix with equal off-diagonal entries over an alphabet set $\mathfrak{U}$ of size r. The stationary distribution under this model is the uniform distribution on $\mathfrak{U}$. Thus, the CFN model is the 2-state symmetric model over $\mathfrak{U}$ = {Y, R}. The Jukes-Cantor (JC) model [20] is the 4-state symmetric model over $\mathfrak{U}$ = {A, C, G, T}. This is perhaps the simplest model on four characters.
Model 2 (Jukes-Cantor (JC) model) All four nucleotides form the state space for this mutation model, i.e. $\mathfrak{U}$ = {A, C, G, T}. Once again, evolutionary time t is measured by the expected number of substitutions in the homogeneous continuous time Markov chain with rate matrix:
$Q=\left(\begin{array}{cccc}-1& \frac{1}{3}& \frac{1}{3}& \frac{1}{3}\\ \frac{1}{3}& -1& \frac{1}{3}& \frac{1}{3}\\ \frac{1}{3}& \frac{1}{3}& -1& \frac{1}{3}\\ \frac{1}{3}& \frac{1}{3}& \frac{1}{3}& -1\end{array}\right).$
The transition probability matrix P(t) = e^{Qt}is also symmetric. The probability that any given nucleotide mutates to any other nucleotide in time t is P_{x, y}(t) and the probability that it is found in the same state is P_{x, x}(t). These transition probabilities are:
${P}_{x,x}(t)=\frac{1}{4}+\frac{3}{4}{e}^{-4t/3},\phantom{\rule{1em}{0ex}}{P}_{x,y}(t)=\frac{1}{4}-\frac{1}{4}{e}^{-4t/3},\phantom{\rule{1em}{0ex}}x\ne y.$
The stationary distribution is uniform, i.e. π(A) = π(C) = π(G) = π(T) = 1/4.
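A quick numerical check of these JC transition probabilities (our sketch; since time is in expected substitutions per site, the total substitution rate at t = 0 should be 1):

```python
import math

def jc_p(i, j, t):
    # JC transition probability between nucleotide indices i, j in time t
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if i == j else 0.25 - 0.25 * e

row_sum = jc_p(0, 0, 0.4) + 3 * jc_p(0, 1, 0.4)  # P_xx + 3 P_xy
rate_at_zero = 3 * jc_p(0, 1, 1e-8) / 1e-8       # finite-difference rate
```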
Consider the three nonnegative terminal branch lengths ^{4}t = (^{4}t_{1}, ^{4}t_{2}, ^{4}t_{3}) ∈ ${\mathbb{R}}_{+}^{3}$ of an unrooted tree ^{4}t of Figure 1(v). An application of Algorithm 2 to compute the likelihoods of the 64 possible site patterns (see e.g. [21–24]) reveals five minimally sufficient site pattern classes. Let x, y and z simply denote distinct characters from the alphabet set $\mathfrak{U}$ = {A, C, G, T} at taxon 1, 2 and 3, respectively. The minimally sufficient site pattern classes xxx, xyz, xxy, yxx and xyx encode 4, 24, 12, 12 and 12 nucleotide site patterns, respectively. By a computation similar to that in (19)–(25), with p_{i} := P_{x,x}(^{4}t_{i}) and q_{i} := P_{x,y}(^{4}t_{i}), the likelihoods are:
$\begin{array}{l}{l}_{xxx}({}^{4}t)=\frac{1}{4}\left({p}_{1}{p}_{2}{p}_{3}+3{q}_{1}{q}_{2}{q}_{3}\right),\\ {l}_{xyz}({}^{4}t)=\frac{1}{4}\left({p}_{1}{q}_{2}{q}_{3}+{q}_{1}{p}_{2}{q}_{3}+{q}_{1}{q}_{2}{p}_{3}+{q}_{1}{q}_{2}{q}_{3}\right),\\ {l}_{xxy}({}^{4}t)=\frac{1}{4}\left({p}_{1}{p}_{2}{q}_{3}+{q}_{1}{q}_{2}{p}_{3}+2{q}_{1}{q}_{2}{q}_{3}\right),\\ {l}_{yxx}({}^{4}t)=\frac{1}{4}\left({q}_{1}{p}_{2}{p}_{3}+{p}_{1}{q}_{2}{q}_{3}+2{q}_{1}{q}_{2}{q}_{3}\right),\\ {l}_{xyx}({}^{4}t)=\frac{1}{4}\left({p}_{1}{q}_{2}{p}_{3}+{q}_{1}{p}_{2}{q}_{3}+2{q}_{1}{q}_{2}{q}_{3}\right).\end{array}$
Notice that the probability of observing one of the 64 possible site patterns is 1 for any ^{4}t ∈ (0, ∞)^{3} :
4l_{xxx}(^{4}t) + 24l_{xyz}(^{4}t) + 12l_{xxy}(^{4}t) + 12l_{yxx}(^{4}t) + 12l_{xyx}(^{4}t) = 1.
Let c_{ijk} denote the number of sites with the site pattern class ijk ∈ {xxx, xyz, xxy, yxx, xyx}. Then, under the assumption of independence across sites, we obtain the likelihood of a given data d by multiplying the site-specific likelihoods:
${l}_{d}({}^{4}t)={\displaystyle \prod _{ijk}{({l}_{ijk}({}^{4}t))}^{{c}_{ijk}}}.$
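The five-class structure can be confirmed by brute force over all 4^3 = 64 site patterns; the sketch below (ours) enumerates them, classifies each pattern, and checks the class multiplicities and the total probability:

```python
import math
from itertools import product
from collections import Counter

def jc_p(i, j, t):
    # JC transition probability in time t
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if i == j else 0.25 - 0.25 * e

def pattern_prob(pattern, lengths):
    # unrooted triplet: sum over the internal state, stationary pi = 1/4
    return sum(0.25 * jc_p(r, pattern[0], lengths[0])
                    * jc_p(r, pattern[1], lengths[1])
                    * jc_p(r, pattern[2], lengths[2])
               for r in range(4))

def classify(p):
    x, y, z = p
    if x == y == z:
        return "xxx"
    if x == y:
        return "xxy"   # third taxon differs
    if y == z:
        return "yxx"   # first taxon differs
    if x == z:
        return "xyx"   # second taxon differs
    return "xyz"       # all three distinct

lengths = (0.3, 0.1, 0.7)
patterns = list(product(range(4), repeat=3))
sizes = Counter(classify(p) for p in patterns)
total = sum(pattern_prob(p, lengths) for p in patterns)
```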
Once again, the likelihood of a rooted tree or the star tree can be obtained from that of the unrooted tree by substituting the appropriate constraints on branch lengths in the above equations or by directly applying Algorithm 2 with the appropriate input tree with its topology and branch lengths.
Model 3 (Hasegawa-Kishino-Yano (HKY) model) The Hasegawa-Kishino-Yano or HKY model [25] has all four nucleotides in the state space, i.e. $\mathfrak{U}$ = {A, C, G, T}. There are five parameters in this more flexible model. Transitions are changes within the purine {A, G} or pyrimidine {C, T} state subsets, while transversions are changes from purine to pyrimidine or from pyrimidine to purine. In this model, we have a mutational parameter κ that allows for transition:transversion bias and four additional parameters π_{A}, π_{C}, π_{G} and π_{T} that explicitly control the stationary distribution. The off-diagonal entries of the rate matrix are proportional to κπ_{j} for transitions and to π_{j} for transversions, with the diagonal entries set so that each row sums to zero.
The transition probabilities are known analytically for this model (see e.g. [[8], p. 203]). We can use these expressions when evaluating the likelihood of a rooted or unrooted tree along with the five mutational parameters via Algorithm 2. For simplicity, we set the stationary distribution parameters to the empirical nucleotide frequencies and κ to 2.0 in this study.
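For concreteness, a sketch (ours) of an unnormalized HKY rate matrix; the overall scaling convention is omitted, and the frequencies below are illustrative placeholders for the empirical frequencies used in the study:

```python
def hky_rate_matrix(kappa, pi):
    # HKY rate matrix: q_ij proportional to kappa*pi_j for transitions
    # (A<->G, C<->T) and to pi_j for transversions; each diagonal entry
    # is set so its row sums to zero
    bases = "ACGT"
    transitions = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}
    q = [[0.0] * 4 for _ in range(4)]
    for i, bi in enumerate(bases):
        for j, bj in enumerate(bases):
            if i != j:
                q[i][j] = pi[bj] * (kappa if (bi, bj) in transitions else 1.0)
        q[i][i] = -sum(q[i])
    return q

pi = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}  # illustrative frequencies
Q = hky_rate_matrix(2.0, pi)
```

The chain is time-reversible: π_{i}q_{ij} = π_{j}q_{ji} holds by construction.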
Interval analysis
Let $\mathbb{I}\mathbb{R}$ denote the set of closed and bounded real intervals. Let any element of $\mathbb{I}\mathbb{R}$ be denoted by x := [$\underset{\xaf}{x}$, $\overline{x}$], where $\underset{\xaf}{x}$ ≤ $\overline{x}$ and $\underset{\xaf}{x}$, $\overline{x}$ ∈ ℝ. Next we define arithmetic over $\mathbb{I}\mathbb{R}$.
Definition 1 (Interval Operation) If the binary operator ⋆ is one of +, −, ×, /, then we define an arithmetic on operands in $\mathbb{I}\mathbb{R}$ by
x⋆ y:= {x ⋆ y : x ∈ x, y ∈ y},
with the exception that x/y is undefined if 0 ∈ y.
Theorem 1 (Interval arithmetic) Arithmetic on the pair x, y ∈ $\mathbb{I}\mathbb{R}$ is given by:
$\begin{array}{l}x+y=[\underset{\xaf}{x}+\underset{\xaf}{y},\ \overline{x}+\overline{y}],\\ x-y=[\underset{\xaf}{x}-\overline{y},\ \overline{x}-\underset{\xaf}{y}],\\ x\times y=[\mathrm{min}\{\underset{\xaf}{x}\underset{\xaf}{y},\underset{\xaf}{x}\overline{y},\overline{x}\underset{\xaf}{y},\overline{x}\overline{y}\},\ \mathrm{max}\{\underset{\xaf}{x}\underset{\xaf}{y},\underset{\xaf}{x}\overline{y},\overline{x}\underset{\xaf}{y},\overline{x}\overline{y}\}],\\ x/y=x\times [1/\overline{y},1/\underset{\xaf}{y}],\phantom{\rule{1em}{0ex}}0\notin y.\end{array}$
When computing with finite precision, say in floating-point arithmetic, directed rounding must be taken into account (see e.g., [6, 10]) to contain the solution. Interval multiplication is branched into nine cases, on the basis of the signs of the boundaries of the operands, such that only one case entails more than two real multiplications. Therefore, a rigorous computer implementation of an interval operation mostly requires two directed-rounding floating-point operations. Interval addition and multiplication are both commutative and associative but not distributive. For example,
[1, 2]([−1, 1] + [3, 4]) = [1, 2][2, 5] = [2, 10] ≠ [1, 10] = [−2, 2] + [3, 8] = [1, 2][−1, 1] + [1, 2][3, 4].
Interval arithmetic satisfies a weaker rule than distributivity called subdistributivity:
x(y+ z) ⊆ xy+ xz.
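The operations of Definition 1 and the subdistributivity rule can be exercised directly; the sketch below (ours) uses the endpoint formulas and ignores directed rounding, which a rigorous implementation such as MRS must supply:

```python
def iadd(x, y):
    # interval addition by endpoints
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    # the nine-case branching is an optimization; min/max over the four
    # endpoint products is the defining formula
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def subset(x, y):
    # is the interval x contained in the interval y?
    return y[0] <= x[0] and x[1] <= y[1]

x, y, z = (1.0, 2.0), (-1.0, 1.0), (3.0, 4.0)
lhs = imul(x, iadd(y, z))           # x(y + z)
rhs = iadd(imul(x, y), imul(x, z))  # xy + xz
```

Here x(y + z) = [2, 10] is strictly contained in xy + xz = [1, 10], exhibiting subdistributivity.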
An extremely useful property of interval arithmetic that is a direct consequence of Definition 1 is summarized by the following theorem.
Theorem 2 (Fundamental property of interval arithmetic) If x ⊆ x' and y ⊆ y' and ⋆ ∈ {+, −, ×, /}, then
x⋆ y⊆ x' ⋆ y',
where we require that 0 ∉ y' when ⋆ = /.
Note that an immediate implication of Theorem 2 is that when x= [x, x] and y= [y, y] are thin intervals, i.e. $\underset{\xaf}{x}$= $\overline{x}$ = x and $\underset{\xaf}{y}$ = $\overline{y}$ = y are real numbers, then x' ⋆ y' will contain the result of the real arithmetic operation x ⋆ y.
Let $\underset{\xaf}{x}$, $\overline{x}$ ∈ ℝ^{n} be real vectors such that $\underset{\xaf}{x}$_{ i }≤ ${\overline{x}}_{i}$, for all i = 1, 2,..., n; then x := [$\underset{\xaf}{x}$, $\overline{x}$] is an interval vector or a box. The set of all such boxes is $\mathbb{I}{\mathbb{R}}^{n}$. The ith component of the box x= (x_{1},..., x_{ n }) is the interval x_{ i }= [$\underset{\xaf}{x}$_{ i }, ${\overline{x}}_{i}$] and the interval extension of a set $\mathbb{D}\subseteq {\mathbb{R}}^{n}$ is $\mathbb{I}\mathbb{D}:=\{x\in \mathbb{I}{\mathbb{R}}^{n}:\underset{\xaf}{x},\overline{x}\in \mathbb{D}\}$. We write inf x:= $\underset{\xaf}{x}$ for the lower bound and sup x:= $\overline{x}$ for the upper bound. Let the maximum norm of a vector x ∈ ℝ^{n} be ∥x∥_{∞} := max_{ k }|x_{ k }|. Let the vector-valued hypermetric between boxes x and y be
$\text{dist}(x,y):=\left(\mathrm{max}\{|{\underset{\xaf}{x}}_{1}-{\underset{\xaf}{y}}_{1}|,|{\overline{x}}_{1}-{\overline{y}}_{1}|\},\mathrm{...},\mathrm{max}\{|{\underset{\xaf}{x}}_{n}-{\underset{\xaf}{y}}_{n}|,|{\overline{x}}_{n}-{\overline{y}}_{n}|\}\right)$
and the Hausdorff distance between the boxes x and y in the metric given by the maximum norm is then
dist_{∞} (x, y) = ∥dist(x, y)∥_{∞}.
We can make $\mathbb{I}{\mathbb{R}}^{n}$a metric space by equipping it with the Hausdorff distance.
Our main motivation for the extension to intervals is to enclose the range:
range(f; S):= {f(x): x ∈ S},
of a real-valued function f : ℝ^{n}↦ ℝ over a set S ⊆ ℝ^{n}. Except for trivial cases, few tools are available to obtain the range.
Definition 2 (Directed acyclic graph (DAG) expression of a function) One can think of the process by which a function f : ℝ^{m}↦ ℝ is computed as the result of a sequence of recursive operations with the subexpressions f_{ i }of its expression f, where i = 1,..., n < ∞. This involves the evaluation of the subexpression f_{ i }at node i with operands ${s}_{{i}_{1}}$, ${s}_{{i}_{2}}$ from the subterminal nodes of i given by the directed acyclic graph (DAG) for f
The leaf or terminal node of the DAG is a constant or a variable and thus the f_{ i }for a leaf i is set equal to the respective constant or variable. The recursion starts at the leaves and terminates at the root of the DAG. The DAG for an elementary f is simply its expression f with n subexpressions f_{1}, f_{2},...,f_{ n }:
where each ⊙f_{ i }is computed according to (8).
We look at some DAGs for zero functions to concretely illustrate these ideas.
Example 1 Consider the constant zero function f(x) = 0 expressed as (i) f(x) = 0, (ii) f'(x) = x × 0 and (iii) f″(x) = x − x. The corresponding DAG expressions are shown in Figure 2.
Definition 3 (The natural interval extension) Consider a real-valued function f(x): ℝ^{n}↦ ℝ^{m} given by a formula or a DAG expression f(x). If real constants, variables, and operations in f(x) are replaced by their interval counterparts, then one obtains
f(x) is known as the natural interval extension of the expression f(x) for f(x). This extension is well-defined if we do not run into division by zero.
Although the three distinct expressions f(x), f'(x) and f″(x) of the real function f: ℝ ↦ ℝ of Example 1 are equivalent upon evaluation in the reals, their respective interval extensions f(x) = [0, 0], f'(x) = x × [0, 0], and f″(x) = x − x are not. For instance, if x = [1, 2],
f([1, 2]) = [0, 0],   f'([1, 2]) = [1, 2] × [0, 0] = [0, 0],   but   f″([1, 2]) = [1, 2] − [1, 2] = [−1, 1],
and in general for any x := [$\underset{\xaf}{x}$, $\overline{x}$] ∈ $\mathbb{I}\mathbb{R}$,
f″(x) = x − x = [$\underset{\xaf}{x}$ − $\overline{x}$, $\overline{x}$ − $\underset{\xaf}{x}$] ⊇ [0, 0].
Thus, f(x) = f'(x) ≠ f″(x) for any nondegenerate x ∈ $\mathbb{I}\mathbb{R}$, albeit f(x) = f'(x) = f″(x) for any x ∈ ℝ.
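This failure, often called the dependency problem — the two occurrences of x in x − x vary independently under the natural extension — is worth seeing once in code (our sketch):

```python
def isub(x, y):
    # interval subtraction: x - y = [x_lo - y_hi, x_hi - y_lo]
    return (x[0] - y[1], x[1] - y[0])

x = (1.0, 2.0)
f_dd = isub(x, x)  # natural extension of f''(x) = x - x on [1, 2]
```

The enclosure [−1, 1] still contains the true range {0}; it is merely pessimistic, never wrong.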
Theorem 3 (Interval rational functions) Consider the rational function f(x) = p(x)/q(x), where p and q are polynomials. Let f be the natural interval extension of its DAG expression f such that f(y) is well-defined for some y ∈ $\mathbb{I}\mathbb{R}$ and let x, x' ∈ $\mathbb{I}\mathbb{R}$. Then we have
x ⊆ x' ⟹ f(x) ⊆ f(x')   (inclusion isotony), and
range(f; x) ⊆ f(x)   (range enclosure).
Definition 4 (Standard functions) Piecewise monotone functions, including exponential, logarithm, rational power, absolute value, and trigonometric functions, constitute the set of standard functions
$\mathfrak{S}$ = {a^{x}, log_{ b }(x), x^{p/q}, |x|, sin(x), cos(x), tan(x), sinh(x),...arcsin(x),...}.
Such functions have well-defined interval extensions that satisfy inclusion isotony and exact range enclosure, i.e. range(f; x) = f(x). Consider the following definitions for the interval extensions for some monotone functions in $\mathfrak{S}$ with x ∈ $\mathbb{I}\mathbb{R}$,
exp(x) = [exp($\underset{\xaf}{x}$), exp($\overline{x}$)],   log(x) = [log($\underset{\xaf}{x}$), log($\overline{x}$)] if $\underset{\xaf}{x}$ > 0,   $\sqrt{x}$ = [$\sqrt{\underset{\xaf}{x}}$, $\sqrt{\overline{x}}$] if $\underset{\xaf}{x}$ ≥ 0,
and a piecewise monotone function in $\mathfrak{S}$; with ℤ_{+} and ℤ_{−} representing the set of positive and negative integers, respectively. Let the mignitude of an interval x be the number $\u3008x\u3009$ := min{|x| : x ∈ x} and the absolute value of x be the number |x| := max{|x| : x ∈ x} = max{|$\underset{\xaf}{x}$|, |$\overline{x}$|}. Then, the interval-extended power function that plays a basic role in product likelihood functions is:
x^{n} = [$\underset{\xaf}{x}$^{n}, $\overline{x}$^{n}] if n ∈ ℤ_{+} is odd,   x^{n} = [$\u3008x\u3009$^{n}, |x|^{n}] if n ∈ ℤ_{+} is even,   and x^{n} = ([1, 1]/x)^{−n} if n ∈ ℤ_{−} and 0 ∉ x.
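A sketch (ours) of the positive-integer case of this power function; using the mignitude for even powers makes the enclosure the exact range (directed rounding again omitted):

```python
def ipow(x, n):
    # exact range of t**n over the interval x for integer n >= 1
    lo, hi = x
    if n % 2 == 1:
        return (lo ** n, hi ** n)        # odd powers are monotone
    # even powers: mignitude gives the lower and the absolute value
    # the upper endpoint of the range
    mig = 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))
    mag = max(abs(lo), abs(hi))
    return (mig ** n, mag ** n)
```

Note that ipow((-2, 1), 2) = (0, 4), strictly tighter than the naive product (-2, 1) × (-2, 1) = (-2, 4).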
Definition 5 (Elementary functions) A realvalued function that can be expressed as a finite combination of constants, variables, arithmetic operations, standard functions and compositions is called an elementary function. The set of all such elementary functions is referred to as $\mathfrak{E}$.
Example 2 (Probability of the pattern xxx under the CFN star tree ^{0}t ) The trifurcating star tree ^{0}t := (^{0}t_{1}) has topology label 0 and common branch length parameter ^{0}t_{1} as shown in Figure 1(i). Either a direct application of Algorithm 2 with input as ^{0}t := (^{0}t_{1}) or a substitution of ^{0}t_{1} for ^{4}t_{1}, ^{4}t_{2} and ^{4}t_{3} in (6) yields the likelihood for pattern xxx as:
${l}_{xxx}({}^{0}t)=\frac{1}{2}\left({(1-a)}^{3}+{a}^{3}\right),\phantom{\rule{1em}{0ex}}a:=(1-{e}^{-2{}^{0}{t}_{1}})/2.$
The probability of the pattern xxx under the CFN star tree ^{0}t, given by l_{xxx}(^{0}t) with the corresponding DAG expression shown in Figure 3, is an elementary function.
It would be convenient if guaranteed enclosures of the range of an elementary f could be obtained by the natural interval extension f of one of its expressions f. The following Theorem 4 is the workhorse of interval Monte Carlo algorithms.
Theorem 4 (The fundamental theorem of interval analysis) Consider any elementary function f ∈ $\mathfrak{E}$ with expression f. Let f : y ↦ $\mathbb{I}\mathbb{R}$ be its natural interval extension such that f(y) is well-defined for some y ∈ $\mathbb{I}\mathbb{R}$ and let x, x' ∈ $\mathbb{I}\mathbb{R}$. Then we have
x ⊆ x' ⟹ f(x) ⊆ f(x')   (inclusion isotony), and
range(f; x) ⊆ f(x)   (range enclosure).
The fundamental implication of the above theorem is that it allows us to enclose the range of any elementary function and thereby produces an upper bound for the global maximum and a lower bound for the global minimum over any compact subset of the domain upon which the function is welldefined. This is the workhorse for rigorously constructing an envelope for rejection sampling.
Unlike the natural interval extension of an f ∈ $\mathfrak{S}$ that produces exact range enclosures, the natural interval extension f(x) of an f ∈ $\mathfrak{E}$ often overestimates range(f; x), but can be shown under mild conditions to linearly approach the range as the maximal width of the box x goes to zero. This implies that a partition of x into smaller boxes {x^{(1)},⋯, x^{(m)}} gives better enclosures of range(f; x) through the union ${\cup}_{i=1}^{m}f({x}^{(i)})$ as illustrated in Figure 4. Next we make the above statements precise in terms of the width and radius of a box x defined by wid x := $\overline{x}$ − $\underset{\xaf}{x}$ and rad x := ($\overline{x}$ − $\underset{\xaf}{x}$)/2, respectively.
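This tightening is easy to observe for f(t) = t(1 − t) on [0, 1], whose natural extension treats the two factors as independent and so overestimates the true range [0, 1/4]; the sketch below (ours) halves the mesh repeatedly:

```python
def f_nat(x):
    # natural interval extension of f(t) = t*(1 - t): the two factors
    # t and (1 - t) are multiplied as independent intervals
    lo, hi = x
    u, v = 1.0 - hi, 1.0 - lo
    p = (lo * u, lo * v, hi * u, hi * v)
    return (min(p), max(p))

def union_enclosure(depth):
    # union of enclosures over a uniform partition of [0, 1]
    # into 2**depth equal pieces
    n = 2 ** depth
    pieces = [f_nat((i / n, (i + 1) / n)) for i in range(n)]
    return (min(p[0] for p in pieces), max(p[1] for p in pieces))

e0, e3, e6 = f_nat((0.0, 1.0)), union_enclosure(3), union_enclosure(6)
```

The single-box enclosure is [0, 1]; refining the partition drives the upper endpoint down toward the true maximum 1/4.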
Definition 6 A function f: $\mathbb{D}$ ↦ ℝ is Lipschitz if there exists a Lipschitz constant K such that, for all x, y ∈ $\mathbb{D}$, we have |f(x) − f(y)| ≤ K|x − y|. We define ${\mathfrak{E}}_{\mathfrak{L}}$ to be the set of elementary functions whose subexpressions f_{ i }, i = 1,..., n at the nodes of its corresponding DAG f are all Lipschitz:
Theorem 5 (Range enclosure tightens linearly with mesh) Consider a function f : $\mathbb{D}$ ↦ ℝ with f ∈ ${\mathfrak{E}}_{\mathfrak{L}}$. Let f be an inclusion isotonic interval extension of the DAG expression f of f such that f(x) is well-defined for some x ∈ $\mathbb{I}\mathbb{R}$. Then there exists a positive real number K, depending on f and x, such that if $x={\displaystyle {\cup}_{i=1}^{k}{x}^{(i)}}$, then

$$\text{range}(f;x)\subseteq {\displaystyle {\cup}_{i=1}^{k}f({x}^{(i)})}\subseteq f(x)$$

and

$$\text{wid}\left({\displaystyle {\cup}_{i=1}^{k}f({x}^{(i)})}\right)\le \text{wid}(\text{range}(f;x))+K\cdot {\mathrm{max}}_{i\in \{1,\ldots,k\}}\text{wid}({x}^{(i)})\text{.}$$
Likelihood of a box of trees
The likelihood function (2) over trees with a DAG expression that is directly or indirectly obtained via Algorithm 2 has a natural interval extension over boxes of trees [5, 26]. This interval extension of the likelihood function allows us to produce rigorous enclosures of the likelihood over a box in the tree space. Next we give a concrete example of the natural interval extension of the likelihood function over an interval of trees ^{0}t in the star-tree space ${}^{0}\mathbb{T}$. The same ideas extend to any labeled box of trees ^{k}t when the number of branch lengths is greater than one, and more generally to a finite union of labeled boxes with possibly distinct labels.
Example 3 (Posterior density over the CFN star-tree space ${}^{0}\mathbb{T}$) The trifurcating star tree ^{0}t := (^{0}t_{1}) has topology label 0 and common branch length ^{0}t_{1} > 0. Either a direct application of Algorithm 2 with input triplet ^{0}t or a substitution of ^{4}t_{1}, ^{4}t_{2} and ^{4}t_{3} in (6) by ^{0}t_{1} yields the following specific likelihoods:
Therefore, on the basis of (4), (5), (6) and (7), the likelihood of the data at the star tree ^{0}t ∈ ${}^{0}\mathbb{T}$ is
and the posterior density (3) based on a uniform prior p(^{0}t_{1}) = 1/10 over ${}^{0}\mathbb{T}$ = (0, 10] is
Thus, under our conveniently chosen uniform prior, the target posterior shape (without the normalizing constant) is simply the likelihood function, i.e.
Observe that the minimal sufficient statistics over ${}^{0}\mathbb{T}$ are the number of sites with the same character, c_{xxx} := c_{0} + c_{1}, and the total number of sites v. Let the natural interval extension of the DAG expression for the posterior shape f(^{0}t): ${}^{0}\mathbb{T}$ ↦ ℝ be:
Thus, f maps an interval ^{0}t in the tree space to an interval in $\mathbb{I}\mathbb{R}$ that encloses the target shape or likelihood of ^{0}t.
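A point-valued sketch of this reduction to the sufficient statistics (c_{xxx}, v) (assuming the standard CFN pattern-class identity P(constant pattern) = 1 − 3a(1 − a) on the star tree; this is a binomial simplification for illustration, not the paper's interval-extended DAG):

```python
import math

# Point-valued sketch: with the sufficient statistics (c_xxx, v) the
# star-tree log-likelihood reduces to a binomial form. The counts are
# those of the human, chimpanzee and gorilla data [27]; the identity
# P(constant pattern) = 1 - 3a(1-a) is assumed here for illustration.
c_xxx, v = 762, 895

def a(t):
    # CFN probability that the character differs across a branch of length t
    return (1.0 - math.exp(-2.0 * t)) / 2.0

def log_shape(t):
    # binomial log-likelihood of observing c_xxx constant sites out of v
    at = a(t)
    p_const = 1.0 - 3.0 * at * (1.0 - at)
    return c_xxx * math.log(p_const) + (v - c_xxx) * math.log(1.0 - p_const)

# Closed-form maximizer: match the constant-pattern probability to c/v.
p_hat = c_xxx / v
a_hat = (1.0 - math.sqrt(1.0 - 4.0 * (1.0 - p_hat) / 3.0)) / 2.0
t_hat = -math.log(1.0 - 2.0 * a_hat) / 2.0   # roughly 0.055
```
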
For the human, chimpanzee and gorilla mitochondrial sequence data [27] analyzed in [17], c_{xxx} = 762 and v = 895. Figure 4 shows log(f(^{0}t)), the log-likelihood function for this data set, as the white line. Evaluations of its interval extension over partitions into 3, 7 and 19 intervals are depicted by colored rectangles in Figure 4. Notice how the range enclosure by the interval extension of the log-likelihood function, our target shape, tightens with domain refinement as per Theorem 5. The maximum likelihood estimate derived in [17] is shown as the red dot in Figure 4.
Moore rejection sampler (MRS)
The Moore rejection sampler (MRS) is an autovalidating rejection sampler (RS). MRS is said to be autovalidating because it automatically obtains a proposal g that is easy to simulate from, and an envelope $\widehat{g}$ that is guaranteed to satisfy the envelope condition (1). MRS can produce independent samples from any target shape f whose DAG expression f has a well-defined natural interval extension f over a compact domain $\mathbb{T}$. In summary, the defining characteristics and notations of MRS are:
Suppose f is an elementary function and its DAG expression f has a well-defined interval extension f on $\mathbb{T}$. If $\mathfrak{T}:=\{{t}^{(1)},{t}^{(2)},\mathrm{...},{t}^{(\left|\mathfrak{T}\right|)}\}$ is a finite partition of $\mathbb{T}$, then by Theorem 4 we can enclose range(f; t^{(i)}), i.e. the range of f over the ith element of $\mathfrak{T}$, with the interval extension f of f:

$$\text{range}(f;{t}^{(i)})\subseteq f({t}^{(i)})=:[\underline{f}({t}^{(i)}),\overline{f}({t}^{(i)})]$$(13)
For a given partition $\mathfrak{T}$, we can construct a partition-specific envelope function:

$${\widehat{g}}^{\mathfrak{T}}(t):={\displaystyle {\sum}_{i=1}^{\left|\mathfrak{T}\right|}\overline{f}({t}^{(i)})\,{1}_{{t}^{(i)}}(t)}$$(14)
The necessary envelope condition (1) is satisfied by ${\widehat{g}}^{\mathfrak{T}}$(t) because of (13). We can obtain the corresponding proposal ${g}^{\mathfrak{T}}$(t) as a normalized simple function over $\mathbb{T}$:

$${g}^{\mathfrak{T}}(t):={\widehat{g}}^{\mathfrak{T}}(t)/{N}_{{\widehat{g}}^{\mathfrak{T}}}$$(15)
where the normalizing constant ${N}_{{\widehat{g}}^{\mathfrak{T}}}:={\displaystyle {\sum}_{i=1}^{\left|\mathfrak{T}\right|}(\text{vol}\,{t}^{(i)}\cdot \overline{f}({t}^{(i)}))}$ and $\text{vol}\,t:={\displaystyle {\prod}_{i=1}^{n}\text{wid}\,{t}_{i}}$ is the volume of the box t. The volume of an interval x is simply its width, i.e. vol x = wid x, if x ∈ $\mathbb{I}\mathbb{R}$. Now we have all the ingredients to perform a more efficient, partition-specific, autovalidating von Neumann rejection sampling, or simply Moore rejection sampling.
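A minimal one-dimensional sketch of the envelope (14) and proposal (15) construction, assuming a toy target whose exact per-box range bounds stand in for the interval extension f:

```python
import math

# One-dimensional sketch of the envelope (14) and proposal (15). The upper
# range bound of each box is computed from the known shape of this toy
# target; the paper instead uses the natural interval extension, which
# needs no such case analysis.
def f(t):
    return t * math.exp(-t)          # toy unimodal target shape on (0, 10]

def upper_bound(lo, hi):
    # exact sup of f over [lo, hi]: f increases on [0, 1], decreases after
    return f(1.0) if lo <= 1.0 <= hi else max(f(lo), f(hi))

def envelope_and_proposal(cuts):
    boxes = list(zip(cuts[:-1], cuts[1:]))
    uppers = [upper_bound(lo, hi) for lo, hi in boxes]
    # normalizing constant: sum of vol * upper range bound over the boxes
    N = sum((hi - lo) * u for (lo, hi), u in zip(boxes, uppers))
    weights = [(hi - lo) * u / N for (lo, hi), u in zip(boxes, uppers)]
    return boxes, uppers, weights, N

boxes, uppers, weights, N = envelope_and_proposal([0.0, 1.0, 2.0, 5.0, 10.0])
```

The `weights` list is exactly the box-selection distribution (18) for this partition.
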
Before making formal statements about our sampler, let us gain geometric insight into it from Example 3 and Figure 4. The upper boundaries of the rectangles of a given color, depicting a simple function in Figure 4, form a partition-specific envelope function (14) for the logarithm of the posterior shape, i.e. the log-likelihood function of Example 3, over the prior-specified support [10^{−10}, 10] ⊂ ${}^{0}\mathbb{T}$. In Figure 4 only a small interval about the maximum likelihood estimate (red dot) that contains the posterior samples (gray '+' markers) is depicted, since the likelihood falls sharply outside this range. Normalization of the envelope gives the corresponding proposal function (15). As the refinement of the domain proceeds through adaptive bisections (described later), the partition size increases. We show partitions of size 3, 7 and 19 over an interval containing the posterior samples. These samples were obtained from the partition with 19 intervals. Each of the corresponding envelope functions (upper boundaries of rectangles of a given color) can be used to draw independent and identically distributed samples from the target posterior density. Note how the acceptance probability (the ratio of the area below the target shape to that below the envelope) increases with refinement.
Theorem 6 shows that the Moore rejection sampler (MRS) indeed produces independent samples from the desired target, and Theorem 7 describes the asymptotics of the acceptance probability as the partition of the domain is refined. Proofs of both theorems are included in the Appendix for completeness.
Theorem 6 Suppose that the DAG expression f of the target shape f has a well-defined natural interval extension f over $\mathbb{T}$. If T is generated according to Algorithm 1, and if the envelope function ${\widehat{g}}^{\mathfrak{T}}$(t) and the proposal density ${g}^{\mathfrak{T}}$(t) are given by (14) and (15), respectively, then T is distributed according to the target density f^{•} : $\mathbb{T}$ ↦ ℝ.
Next we bound the partition-specific acceptance probability $A(\mathfrak{T}):=A({\widehat{g}}^{\mathfrak{T}})$ for this sampler. For simplicity, let the domain $\mathbb{T}$ of the target shape f be an interval. Due to the linearity of the integral operator and (13),

$${N}_{f}:={\displaystyle {\int}_{\mathbb{T}}f(t)\,dt}\in \left[{\displaystyle {\sum}_{i=1}^{\left|\mathfrak{T}\right|}\text{vol}\,{t}^{(i)}\cdot \underline{f}({t}^{(i)})},{\displaystyle {\sum}_{i=1}^{\left|\mathfrak{T}\right|}\text{vol}\,{t}^{(i)}\cdot \overline{f}({t}^{(i)})}\right]\text{.}$$

Therefore,

$$A(\mathfrak{T})=\frac{{N}_{f}}{{N}_{{\widehat{g}}^{\mathfrak{T}}}}\ge \frac{{\displaystyle {\sum}_{i=1}^{\left|\mathfrak{T}\right|}\text{vol}\,{t}^{(i)}\cdot \underline{f}({t}^{(i)})}}{{\displaystyle {\sum}_{i=1}^{\left|\mathfrak{T}\right|}\text{vol}\,{t}^{(i)}\cdot \overline{f}({t}^{(i)})}}$$(16)
If f ∈ ${\mathfrak{E}}_{\mathfrak{L}}$, the Lipschitz class of elementary functions (Definition 6), then we might expect the tightness of the enclosure of N_{ f } to be proportional to the mesh $w:={\mathrm{max}}_{i\in \{1,\mathrm{...},\left|\mathfrak{T}\right|\}}\text{wid}({t}^{(i)})$ of the partition $\mathfrak{T}$.
Theorem 7 Let ${\mathfrak{U}}_{W}$ be the uniform partition of $\mathbb{T}$ = [$\underline{t}$, $\overline{t}$] into W intervals, each of width

$$w:=\frac{\overline{t}-\underline{t}}{W},$$

and let f ∈ ${\mathfrak{E}}_{\mathfrak{L}}$; then

$$A({\mathfrak{U}}_{W})\ge 1-\mathcal{O}(w)\text{.}$$
Theorem 7 shows that if f ∈ ${\mathfrak{E}}_{\mathfrak{L}}$ and ${\mathfrak{U}}_{W}$ is a uniform partition of $\mathbb{T}$ into W intervals, then the acceptance probability $A({\mathfrak{U}}_{W})=1-\mathcal{O}(1/W)$. Thus, the acceptance probability approaches 1 at a rate that is no slower than linear in the mesh.
Prioritized partitions and preprocessed proposals
We studied the efficiency of uniform partitions for their mathematical tractability. In practice, we may further increase the acceptance probability for a given partition size by adaptively partitioning $\mathbb{T}$. In our context, adaptive means the possible exploitation of any current information about the target. We can refine the current partition ${\mathfrak{T}}_{\alpha}$ and obtain a finer partition ${\mathfrak{T}}_{{\alpha}^{\prime}}$ with one additional box by bisecting a box ${t}^{(*)}$ ∈ ${\mathfrak{T}}_{\alpha}$ along the midpoint of its side with maximal width into a left box ${t}_{L}^{(\ast )}$ and a right box ${t}_{R}^{(\ast )}$. There are several ways to choose the box ${t}^{(*)}$ ∈ ${\mathfrak{T}}_{\alpha}$ for bisection. For instance, a relatively optimal choice is

$${t}^{(\ast )}:={\mathrm{argmax}}_{{t}^{(i)}\in {\mathfrak{T}}_{\alpha}}\left(\text{vol}({t}^{(i)})\cdot \text{wid}(f({t}^{(i)}))\right)$$(17)
We employ a priority queue to conduct sequential refinements of $\mathbb{T}$ under this partitioning scheme. This approach avoids exhaustive argmax computations to obtain the t^{(*)} for bisection at each refinement step. The current partition is thus represented by a queue of boxes prioritized in descending order by the priority function vol(t^{(i)}) · wid(f(t^{(i)})) in (17). Therefore, the box with the largest uncertainty in the enclosure of the integral over it gets bisected first. There are several ways to decide when to stop refining the partition. A simple strategy is to stop when the number of boxes reaches a number that is well within the memory constraints of the computer, say 10^{6}, or when the lower bound of the acceptance probability given by (16) is above a desired threshold, say 0.1.
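The priority-queue refinement can be sketched as follows (a hypothetical one-dimensional setup with a toy monotone target; the true MRS prioritizes boxes by interval-extension enclosures rather than by the exact ranges used here):

```python
import heapq
import math

# Sketch of priority-queue-driven bisection in one dimension. Boxes are
# prioritized by vol * wid(f(box)), the uncertainty of the box's integral
# enclosure; the exact range of this monotone toy f stands in for the
# interval extension.
def f(t):
    return math.exp(-t * t)

def enclosure(lo, hi):
    # f is decreasing on [0, inf), so the range over [lo, hi] is exact
    return f(hi), f(lo)

def refine(domain, n_boxes):
    lo, hi = domain
    f_lo, f_hi = enclosure(lo, hi)
    # heapq is a min-heap, so priorities are stored negated
    heap = [(-(hi - lo) * (f_hi - f_lo), lo, hi)]
    while len(heap) < n_boxes:
        _, lo, hi = heapq.heappop(heap)        # most uncertain box first
        mid = (lo + hi) / 2.0
        for a, b in ((lo, mid), (mid, hi)):
            f_lo, f_hi = enclosure(a, b)
            heapq.heappush(heap, (-(b - a) * (f_hi - f_lo), a, b))
    return sorted((lo, hi) for _, lo, hi in heap)

boxes = refine((0.0, 4.0), 16)
```
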
Once we have a partition $\mathfrak{T}$ of $\mathbb{T}$, we can sample t from the proposal density ${g}^{\mathfrak{T}}$ given by (15) in two steps:

1.
Sample a box t^{(i)}∈ $\mathfrak{T}$ according to the discrete distribution:
$$\begin{array}{cc}{\ddot{g}}^{\mathfrak{T}}({t}^{(i)})=\frac{\text{vol}\,{t}^{(i)}\overline{f}({t}^{(i)})}{{\displaystyle {\sum}_{i=1}^{\left|\mathfrak{T}\right|}\left(\text{vol}\,{t}^{(i)}\overline{f}({t}^{(i)})\right)}},& {t}^{(i)}\in \mathfrak{T},\end{array}$$(18) 
2.
Sample a point t uniformly at random from the box t^{(i)}.
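The two-step proposal draw, followed by the accept/reject step with the squeeze rule, can be sketched as follows (a toy one-dimensional target with exact range bounds standing in for the interval extension; `random.choices` plays the role of the alias-method lookup table):

```python
import math
import random

# End-to-end sketch: two-step proposal draw, then accept/reject with the
# squeeze rule. The box bounds are the exact range of this monotone toy
# target, standing in for the interval extension.
random.seed(1)

def f(t):
    return math.exp(-t)              # toy target shape on [0, 4]

cuts = [0.0, 1.0, 2.0, 3.0, 4.0]
boxes = list(zip(cuts[:-1], cuts[1:]))
f_lo = [f(hi) for lo, hi in boxes]   # lower range bounds (f is decreasing)
f_hi = [f(lo) for lo, hi in boxes]   # upper range bounds, i.e. the envelope
weights = [(hi - lo) * u for (lo, hi), u in zip(boxes, f_hi)]

def draw():
    # Step 1: pick a box with probability proportional to vol * upper bound.
    i = random.choices(range(len(boxes)), weights=weights)[0]
    lo, hi = boxes[i]
    t = random.uniform(lo, hi)       # Step 2: uniform point inside the box
    u = random.uniform(0.0, f_hi[i]) # proposed height under the envelope
    if u <= f_lo[i]:                 # squeeze: accept without evaluating f
        return t
    return t if u <= f(t) else None  # ordinary rejection test

samples = [s for s in (draw() for _ in range(20000)) if s is not None]
```

With these four boxes the acceptance probability N_{f}/N_{ĝ} is roughly 0.63; refining the partition raises it toward 1, as in Theorem 7.
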
Sampling from large discrete distributions (with a million states or more) can be made faster by preprocessing the probabilities and saving the result in a convenient lookup table. This basic idea [28] allows samples to be drawn rapidly. We employ an efficient preprocessing strategy known as the Alias Method [4], which allows samples to be drawn in constant time even for very large discrete distributions, as implemented in the GNU Scientific Library [29]. We also minimize the number of evaluations of the target shape f by saving the box-specific computations of $\underline{f}$(t^{(i)}) and $\overline{f}$(t^{(i)}) and exploiting the so-called "squeeze principle", i.e. immediately accepting those points proposed in the box t^{(i)} whose proposed height, drawn uniformly from [0, $\overline{f}$(t^{(i)})], falls below $\underline{f}$(t^{(i)}).
Thus, by means of priority queues and lookup tables we can efficiently manage our adaptive partitioning of the domain for envelope construction, and rapidly draw samples from the proposal distribution. Our sampler class MRSampler, implemented in MRS 0.1.2, a C++ class library for statistical set processing, builds on C-XSC 2.0, a C++ class library for extended scientific computing using interval methods [30]. All computations were done on a 2.8 GHz Pentium IV machine with 1 GB RAM. Having given theoretical and practical consideration to our Moore rejection sampler, we are ready to draw samples from various targets over small tree spaces.
Results
The natural interval extension of the likelihood function over labeled boxes in the tree space allows us to employ the Moore rejection sampler to rigorously draw independent and identically distributed samples from the posterior distribution over a compact box in the tree space given by our prior distribution. We draw samples from the posterior distribution based on two mitochondrial DNA data sets and use these samples (i) to estimate the posterior probabilities of each of the three rooted topologies, (ii) to conduct a nonparametric test of rate homogeneity between protein-coding and tRNA-coding sites and (iii) to estimate the human-neanderthal divergence time.
Human, chimpanzee and gorilla
We revisit the data from a segment of the mitochondrial DNA of human, chimpanzee and gorilla [27] that was analyzed under the CFN model of DNA mutation (Model 1) within a point-estimation setting [17]. The sufficient statistics of pattern counts for these data, with total number of sites v = 895, under the CFN model over the space of all three-leaved phylogenetic trees are:
(c_{xxx}, c_{xxy}, c_{yxx}, c_{xyx}) = (762, 54, 41, 38)
Let human, chimpanzee and gorilla be denoted by leaf labels 1, 2 and 3, or H, C and G, respectively. Let the set of rooted tree labels corresponding to (ii), (iii) and (iv) of Figure 1 be $\mathfrak{K}$ = {1, 2, 3}. The maximum likelihood estimate over ${}^{\mathfrak{K}}\mathbb{T}:={}^{1}\mathbb{T}\cup {}^{2}\mathbb{T}\cup {}^{3}\mathbb{T}$, the rooted and clocked three-leaved phylogenetic tree space, was derived in [17].
Recall that, due to our flat priors, our posterior shape f(^{i}t):= f(^{i}t_{0}, ^{i}t_{1}) with i ∈ $\mathfrak{K}$ = {1, 2, 3} is our likelihood function over ${}^{\mathfrak{K}}\mathbb{T}$. Now, suppose ${t}^{(1)}, {t}^{(2)}, \ldots, {t}^{(n)}$ are n independent and identically distributed samples from the posterior density f^{•} over ${}^{\mathfrak{K}}\mathbb{T}$. We can obtain asymptotically consistent estimates of the posterior probabilities of ${}^{1}\mathbb{T}$, ${}^{2}\mathbb{T}$ and ${}^{3}\mathbb{T}$ by Monte Carlo integration of the indicator function of each of the three topology labels using
The 95% confidence interval for ^{j}P, based on asymptotic normality of the Monte Carlo estimator, is
Point estimates and symmetric 95% confidence intervals for the posterior probability of each of the three topologies were obtained from n = 10^{6} posterior samples.
These point estimates are in agreement with estimates obtained in [31, 32] through quadrature routines in Mathematica. The first 10,000 of these samples are shown in Figure 5 upon transforming the rooted and clocked trees, ^{i}t := (^{i}t_{0}, ^{i}t_{1}), i ∈ {1, 2, 3}, into constrained unrooted trees, ^{4}t := (^{4}t_{1}, ^{4}t_{2}, ^{4}t_{3}), according to Table 1.
Obtaining confidence intervals from dependent MCMC samples requires nontrivial computations of the burn-in period and the thinning rate [1]. These are not readily available for phylogenetic MCMC samplers. Thus, the independent and identically distributed samples from our rejection sampler have the advantage of producing valid confidence intervals for our integrals of interest. The point estimate of the posterior mean $E({}^{1}T):={\displaystyle {\int}_{{}^{1}\mathbb{T}}{}^{1}t\,f({}^{1}t)\,d({}^{1}t)}$ for topology label 1 is (0.010863, 0.048994). This posterior mean is close to (0.010036, 0.048559), the mode of our target shape, i.e. the maximum likelihood estimate derived in [17].
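The topology-probability estimator and its symmetric 95% confidence interval can be sketched with hypothetical label counts (not the paper's values):

```python
import math

# Sketch of the Monte Carlo topology-probability estimate and its symmetric
# 95% confidence interval from i.i.d. posterior samples; the label counts
# below are hypothetical, not the paper's values.
def topology_posterior(labels, j):
    n = len(labels)
    p = sum(1 for k in labels if k == j) / n      # indicator-function average
    half = 1.96 * math.sqrt(p * (1.0 - p) / n)    # asymptotic-normal half-width
    return p, (p - half, p + half)

labels = [1] * 600 + [2] * 250 + [3] * 150        # hypothetical i.i.d. draws
p1, (lo1, hi1) = topology_posterior(labels, 1)
```

This symmetric interval is valid precisely because the draws are independent; with dependent MCMC output the variance term would need an effective-sample-size correction.
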
Chimpanzee, gorilla and orangutan
We focus here on the 895 bp long homologous segment of mitochondrial DNA from chimpanzee, gorilla and orangutan [27]. This gives us greater phylogenetic depth than the human, chimpanzee and gorilla sequences just analyzed. These sequences encode the genes for three transfer RNAs and parts of two proteins. Under the assumption of independence across sites, the sufficient statistics under the JC model of DNA mutation (Model 2) over triplets are given in Table 2, for all of the data as well as for a partition of the data into tRNA-coding and protein-coding sites.
Ten thousand independent and identically distributed samples were drawn in 942 CPU seconds from the posterior distribution over JC triplets, i.e. unrooted trees with three edges corresponding to the three primates. Figure 6 shows these samples (blue dots) scattered about the verified global maximum likelihood estimate (MLE) of the triplet obtained in [5, 26] and subsequently confirmed algebraically in [23]. We also drew ten thousand independent and identically distributed samples from the posterior based on the 198 tRNA-coding DNA sites (green dots in Figure 6) as well as from that based on the remaining 697 protein-coding sites (red dots in Figure 6). The former posterior samples, corresponding to the tRNA-coding sites, are more dispersed than the posterior samples based on the entire sequence, since the smaller number of tRNA-coding sites makes the posterior less concentrated. Moreover, the cluster of samples from the posterior based on tRNA-coding sites appears to be farther away from that based on protein-coding sites. Such a separation of the two sets of posterior samples is a signal of mutational rate heterogeneity between the two types of sites. Hotelling's trace statistic, being a natural measure of distance between two clusters of points, can be used as the test statistic, and its significance can be assessed by a permutation test. On the basis of 100 random permutations of the sites, we obtained the null distribution of Hotelling's trace statistic and were able to reject the null hypothesis of rate homogeneity between the tRNA-coding and protein-coding sites at the 10% significance level (P-value = 0.06). Any biological interpretation of this test must be made cautiously, since the JC model employed here forbids any transition:transversion bias that is reportedly relevant for these data [27].
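The permutation test can be sketched as follows; for brevity the statistic here is the squared distance between the two group means, a simplified stand-in for Hotelling's trace, and the two-dimensional samples are hypothetical:

```python
import random

# Sketch of the permutation test for rate heterogeneity between two groups
# of posterior samples. The statistic is the squared distance between the
# two group means, a simplified stand-in for Hotelling's trace; the 2-D
# samples themselves are hypothetical.
random.seed(7)

def mean(values):
    return sum(values) / len(values)

def stat(g1, g2):
    return sum((mean([p[d] for p in g1]) - mean([p[d] for p in g2])) ** 2
               for d in range(2))

def perm_test(g1, g2, n_perm=999):
    observed = stat(g1, g2)
    pooled, n1 = g1 + g2, len(g1)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)       # relabel the sites at random
        if stat(pooled[:n1], pooled[n1:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1) # permutation P-value

g1 = [(0.10 + 0.01 * random.random(), 0.20) for _ in range(30)]  # "tRNA"
g2 = [(0.16 + 0.01 * random.random(), 0.20) for _ in range(30)]  # "protein"
p_value = perm_test(g1, g2)
```
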
Neanderthal, human and chimpanzee
We used the 15 site patterns and their counts in Table 3 to infer the human-neanderthal divergence time. These counts were obtained from a multiple sequence alignment of the data made available in [33]. Our alignment procedure is more robust at the ends of each locus than that of [33]: we do an ordered concatenation of all the loci for each species prior to a multiple sequence alignment, and the alignment was further edited by hand to obtain the locus-specific alignments. Under the assumption of independence across sites, the sufficient statistics under any Markov model of DNA mutation are the set of distinct site patterns and their respective counts; they are given in Table 3 for this data set.
We drew 10,000 independent and identically distributed samples from each of three posterior densities: (i) over the space of unrooted triplets under the JC model, in 312 CPU seconds; (ii) over the clocked and rooted triplets under the JC model, in 375 CPU seconds; and (iii) over the clocked and rooted triplets under the HKY model, in 1.2 CPU hours. In the HKY model we used the empirical nucleotide frequencies from the data (π(T) = 0.2588, π(C) = 0.2571, π(A) = 0.2916, π(G) = 0.1925) and a hominid-specific transition:transversion rate of 2.0. Unlike the JC model with five sufficient statistics (c_{xxx}, c_{xxy}, c_{yxx}, c_{xyx}, c_{xyz}) = (2343, 56, 2, 4, 0), all 15 distinct site patterns are required for the likelihood computations under the HKY model, and this is reflected in its longer CPU time. Both models gave similar posterior samples over rooted triplets, as shown in Figure 7.
We transformed the three posterior distributions over the triplet spaces, namely (i) unrooted JC triplets that were rooted using the midpoint rooting method, (ii) rooted JC triplets and (iii) rooted HKY triplets, into three posterior distributions over the human-neanderthal divergence time relative to the human-chimp divergence time. The corresponding posterior quantiles ({5%, 50%, 95%}) for the human-neanderthal divergence time, in units of human-chimp divergence time, are {0.0643, 0.125, 0.214}, {0.0694, 0.142, 0.263} and {0.0682, 0.143, 0.268}, respectively. In order to estimate the age of the neanderthal fossil from the rooted HKY triplets, we constrained the neanderthal lineage to be a fraction of the human lineage in branch length. The posterior quantiles of the fossil date, in units of human-chimp divergence, are {0.00685, 0.0666, 0.195}. The estimate of 38,310 years based on carbon-14 accelerator mass spectrometry [33] is within our [5%, 95%] posterior quantile interval for the fossil date, provided the human-chimp divergence estimate ranges in [196103, 5.6 × 10^{6}] years. Thus, reasonable bounds for the human-chimp divergence are 4 × 10^{6} and 5.6 × 10^{6} years, under the assumption that 4 × 10^{6} is an acceptable lower bound. Based on these two calendar-year estimates, we transformed the posterior quantiles of the human-neanderthal divergence times from the rooted HKY triplets into {272680, 571124, 1073375} and {381752, 799574, 1502724} years, respectively. Our [5%, 95%] posterior intervals contain the interval estimate of [461000, 825000] years reported in [33]. However, our confidence intervals are from perfectly independent samples from the posterior and account for the finite number of neanderthal sites that were successfully sequenced, unlike those obtained on the basis of a bootstrap of site patterns [34] or heuristic MCMC [1].
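The transformation from relative-divergence samples to calendar-year quantiles can be sketched as follows (hypothetical samples; only the 4 × 10^{6} and 5.6 × 10^{6} year calibrations follow the text):

```python
# Sketch of mapping relative-divergence posterior samples to calendar-year
# quantiles. Only the 4.0e6 and 5.6e6 year human-chimp calibrations follow
# the text; the relative samples are hypothetical.
def quantile(xs, q):
    # simple nearest-rank quantile
    s = sorted(xs)
    return s[min(len(s) - 1, int(q * len(s)))]

rel = [0.05 + 0.20 * i / 9999 for i in range(10000)]   # hypothetical draws
quantiles = {}
for calib in (4.0e6, 5.6e6):       # human-chimp divergence time in years
    quantiles[calib] = [quantile(rel, q) * calib for q in (0.05, 0.50, 0.95)]
```
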
Unfortunately, our human-neanderthal divergence estimates are overestimates, as they ignore the non-negligible time to coalescence of the human and neanderthal homologs within the human-neanderthal ancestral population. Improvements to our estimates based on the other 310 human and 4 chimpanzee homologs reported in [33] may be possible with more sophisticated models of populations within a phylogeny, and need further investigation.
Chimpanzee, gorilla, orangutan and gibbon
We were able to draw samples from JC quartets on the basis of the mitochondrial DNA of chimpanzee, gorilla, orangutan and gibbon [27]. The data for all four primates can be summarized by 61 distinct site patterns [5]. The problem is now more challenging because there are three distinct tree topologies in the unrooted, bifurcating quartet tree space, and each of these topologies has five edges. Thus, the domain of quartets is a piecewise Euclidean space that arises from a fusion of 3 distinct five-dimensional orthants. Since the postorder traversals (Algorithm 2) specifying the likelihood function are topology-specific, we extended the likelihood over a compact box of quartets in a topology-specific manner. Drawing 10,000 samples from the quartet target took about a day and a half of computational time, due to the low acceptance probability of the naive likelihood function based on the 61 distinct site patterns. All the samples had the topology grouping chimpanzee and gorilla together, i.e. ((chimpanzee, gorilla), (orangutan, gibbon)). The samples (results not shown) were again scattered about the verified global MLE of the quartet [5]. This quartet likelihood function has an elaborate DAG with numerous operations. When the data were compressed into sufficient statistics, the efficiency increased tremendously (e.g. for triplets the efficiency increases by a factor of 3.7). This is because the number of leaf nodes in the target DAG, which encode the distinct site patterns of the observed data into the likelihood function, is reduced from 29 to 5 for the triplet target and from 61 to 15 for the quartet target [24].
Discussion
Interval methods provide for rigorous sampling from posterior target densities over small phylogenetic tree spaces. When one substitutes conventional floating-point arithmetic for real arithmetic in a computer and uses discrete lattices to construct the envelope and/or proposal, it is generally not possible to guarantee the envelope property, and thereby to ensure that samples are drawn from the desired target density, except in special cases [35]. Thus, the construction of the Moore rejection sampler through interval methods, which enclose the target shape over the entire real continuum in any box of the domain with machine-representable bounds in a manner that rigorously accounts for all sources of numerical errors (see [36] for a discussion of error control), naturally guarantees that the Moore rejection samples are independent draws from the desired target. Moreover, the target is allowed to be multivariate and/or non-log-concave with possibly 'pathological' behavior, as long as it has a well-defined interval extension. The efficiency of MRS is not immune to the curse of dimensionality and target DAG complexity. When the DAG expression for the likelihood gets large, its natural interval extension can severely over-enclose the true range, which in turn forces the adaptive refinement of the domain to be extremely fine for efficient envelope construction. Thus, a naive application of interval methods to targets with large DAGs can be terribly inefficient. In such cases, sampler efficiency rather than rigor is the issue: one may fail to obtain samples in a reasonable time, rather than (as may happen with nonrigorous methods) produce samples from some unknown and undesired target.
There are several ways in which efficiency can be improved for such cases. First, the particular structure of the target DAG should be exploited to avoid any redundant computations. For example, sufficient statistics must be used to dissolve symmetries in the DAG. Second, we can further improve efficiency by limiting ourselves to differentiable targets in C^{n}. Tighter enclosures of range(f; t^{(i)}) by f(t^{(i)}) can come from enclosures of Taylor expansions of f around the midpoint mid(t^{(i)}) through interval-extended automatic differentiation (e.g. [36]), which can then yield tighter estimates of the integral enclosures [37]. Third, we can employ preprocessing to improve efficiency. For example, we can pre-enclose the range of a possibly rescaled f over a partition of the domain and then obtain the enclosure of f over some arbitrary t through a combination of hash access and hull operations on the pre-enclosures. Such a pre-enclosing technique reduces not only the overestimation of target shapes with large DAGs but also the computational cost incurred while performing interval operations with processors that are optimized for floating-point arithmetic. In the next version of the MRS library we plan to extend interval arithmetic beyond $\mathbb{I}{\mathbb{R}}^{n}$ to a class of multidimensional data structures related to regular subpavings (e.g. [38]) to improve the efficiency of our sampler. Fourth, various contractors can be used to improve the range enclosure in polynomial time (e.g. [38]); the most promising contractors employ interval constraint propagation. Finally, efficiency at the possible cost of rigor can also be gained (up to 30%) by foregoing directed rounding during envelope construction.
Poor sampler efficiency makes it currently impractical to sample from trees with five leaves and 15 topologies. However, one could use triplets and quartets drawn from the posterior distribution to stochastically amalgamate and produce estimates of larger trees via fast amalgamating algorithms (e.g. [39, 40]), which may then be used to combat the slow mixing of MCMC methods [2] by providing a good set of initial trees. A collection of large trees obtained through such stochastic amalgamations would account for the effect of finite sample sizes (sequence length) as well as for the sensitivity of the amalgamating algorithm itself to variation in the input vector of small-tree estimates. It would be interesting to investigate whether such stochastic amalgamations can help improve the mixing of MCMC algorithms on large tree spaces, although autovalidating rejection sampling via the natural interval extension of the likelihood function may not be practical for trees with more than four leaves.
Conclusion
None of the currently available punctual samplers can rigorously produce independent and identically distributed samples from the posterior distribution over phylogenetic tree spaces, even for 3 or 4 taxa. We describe a new approach for rigorously drawing samples from a target posterior distribution over small phylogenetic tree spaces using the theory of interval analysis. Our Moore rejection sampler (MRS), being an autovalidating von Neumann rejection sampler (RS), can produce independent samples from any target shape f whose DAG expression f has a well-defined natural interval extension f over a compact domain $\mathbb{T}$. MRS is said to be autovalidating because it automatically obtains a proposal g that is easy to simulate from, and an envelope $\widehat{g}$ that is guaranteed to satisfy the envelope condition (1). MRS can circumvent the problems associated with (i) heuristic convergence diagnostics in MCMC samplers and (ii) pseudo-envelopes constructed via nonrigorous punctual methods in rejection samplers. When the target DAG is large, MRS becomes inefficient and may fail to produce the desired samples in a reasonable time, rather than (as may happen with nonrigorous methods) produce samples from some unknown and undesired target. MRS solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over small rooted and unrooted phylogenetic tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.
Appendix
Likelihoods for the CFN model on unrooted triplets
Recall that the probability that Y mutates to R, or vice versa, in time t is a(t) := (1 − e^{−2t})/2 and the stationary distribution is π(R) = π(Y) = 1/2. Next we apply Algorithm 2 to compute the likelihood $l_{d_{\cdot,q}}$ at a given site q, which could be one of l_{0}(^{4}t), l_{1}(^{4}t),..., l_{7}(^{4}t).
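The CFN site-pattern likelihood on an unrooted triplet can be sketched by summing over the unobserved root state directly (rather than by the postorder traversal of Algorithm 2):

```python
import math
from itertools import product

# Sketch of the CFN site-pattern likelihood on an unrooted triplet with
# branch lengths t1, t2, t3, obtained by summing over the unobserved root
# state directly instead of by the postorder traversal of Algorithm 2.
def a(t):
    # probability that the character differs across a branch of length t
    return (1.0 - math.exp(-2.0 * t)) / 2.0

def pattern_likelihood(pattern, branch_lengths):
    # pattern: tuple over {'Y', 'R'} observed at the three leaves
    total = 0.0
    for root in ('Y', 'R'):
        prob = 0.5                   # stationary pi(Y) = pi(R) = 1/2
        for leaf_state, t in zip(pattern, branch_lengths):
            prob *= (1.0 - a(t)) if leaf_state == root else a(t)
        total += prob
    return total

ts = (0.1, 0.2, 0.3)                 # hypothetical branch lengths
total = sum(pattern_likelihood(p, ts) for p in product('YR', repeat=3))
```

Summing over all eight leaf patterns returns 1, as a probability model over site patterns must.
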
Proof of Theorem 1 (cf. [37])
Any real arithmetic operation x ⋆ y, where ⋆ ∈ {+, −, ×, /} and x, y ∈ ℝ, is a continuous function x ⋆ y := ⋆(x, y): ℝ ⊗ ℝ ↦ ℝ, except when y = 0 under the / operation. Since x and y are simply connected compact intervals, so is their Cartesian product x ⊗ y. On such a domain x ⊗ y, the continuity of ⋆(x, y) (except when ⋆ = / and 0 ∈ y) ensures the attainment of a minimum, a maximum and all intermediate values. Therefore, with the exception of the case when ⋆ = / and 0 ∈ y, the range x ⋆ y has the interval form [min(x ⋆ y), max(x ⋆ y)], where the min and max are taken over all pairs (x, y) ∈ x ⊗ y. Fortunately, we do not have to evaluate x ⋆ y at every (x, y) ∈ x ⊗ y to find the global min and global max of ⋆(x, y) over x ⊗ y, because the monotonicity of ⋆(x, y*) in x ∈ x for any fixed y* ∈ y implies that the extremal values are attained on the boundary of x ⊗ y, i.e. among the endpoints $\underline{x}$, $\underline{y}$, $\overline{x}$ and $\overline{y}$. Thus the theorem can be verified by examining the finitely many boundary cases.
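The endpoint argument can be sketched directly: for each arithmetic operation the exact range over a pair of intervals is attained at the four corner pairs (for / provided 0 is outside the second interval):

```python
# Sketch of the endpoint argument in the proof: the exact range of an
# arithmetic operation over a pair of intervals is attained at the four
# corner pairs (division requiring 0 outside the second interval).
def interval_op(op, x, y):
    corners = [op(a, b) for a in x for b in y]
    return (min(corners), max(corners))

prod = interval_op(lambda a, b: a * b, (-1.0, 2.0), (3.0, 4.0))  # (-4.0, 8.0)
quot = interval_op(lambda a, b: a / b, (1.0, 2.0), (4.0, 5.0))   # (0.2, 0.5)
```
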
Proof of Theorem 2
x ⋆ y = {x ⋆ y : x ∈ x, y ∈ y} ⊆ {x ⋆ y : x ∈ x', y ∈ y'} = x' ⋆ y'.
Proof of Theorem 3 (cf. [37])
Since f(y) is well-defined, we will not run into division by zero, and therefore (i) follows from the repeated invocation of Theorem 2. We can prove (ii) by contradiction. Suppose range(f; x) ⊈ f(x). Then there exists x ∈ x such that f(x) ∈ range(f; x) but f(x) ∉ f(x). This in turn implies that f(x) = f([x, x]) ∉ f(x), which contradicts (i). Therefore, our supposition cannot be true and we have proved (ii), range(f; x) ⊆ f(x).
Proof of Theorem 4 (cf. [37])
Any elementary function f ∈ $\mathfrak{E}$ with expression f is defined by the recursion (9) on its subexpressions f_{ i }, i ∈ {1,..., n}, according to its DAG. If f(x) = p(x)/q(x) is a rational function, then the theorem already holds by Theorem 3, and if f ∈ $\mathfrak{S}$ then the theorem holds because the range enclosure is exact for standard functions. Thus it suffices to show that if the theorem holds for f_{1}, f_{2} ∈ $\mathfrak{E}$, then it also holds for f_{1} ⋆ f_{2}, where ⋆ ∈ {+, −, /, ×, ◦} and ◦ denotes the composition operator. Since the proof is analogous for all five operators, we focus only on ◦. Since f is well-defined on its domain y, neither the real-valued f nor any of its subexpressions f_{ i } has singularities in its respective domain y_{ i } induced by y. In particular, f_{2} is continuous on any x_{2} and ${{x}^{\prime}}_{2}$ such that x_{2} ⊆ ${{x}^{\prime}}_{2}$ ⊆ y_{2}, implying the compactness of f_{2}(x_{2}) =: w_{2} and f_{2}(${{x}^{\prime}}_{2}$) =: ${{w}^{\prime}}_{2}$, respectively. By our assumption that f_{1} and f_{2} are inclusion isotonic, we have that w_{2} ⊆ ${{w}^{\prime}}_{2}$ and also that
The range enclosure is a consequence of inclusion isotony by an argument identical to that given in the proof for Theorem 3.
Proof of Theorem 5 (cf. [37])
The proof is given by an induction on the DAG for f similar to the proof of Theorem 4 (See [37]).
Proof of Theorem 6
Let the domain $\mathbb{T}$ of the target f^{•} be an element of $\mathbb{I}{\mathbb{R}}^{n}$. From (15) and (14) observe that ${\widehat{g}}^{\mathfrak{T}}(t)={g}^{\mathfrak{T}}(t){N}_{{\widehat{g}}^{\mathfrak{T}}}$. Let us define the following two subsets of ℝ^{n+1}, the regions under the envelope and under the target shape:
$\mathcal{B}({\widehat{g}}^{\mathfrak{T}}) := \{(v, u) \in {\mathbb{R}}^{n+1} : v \in \mathbb{T}, 0 \le u \le {\widehat{g}}^{\mathfrak{T}}(v)\}$ and $\mathcal{B}(f) := \{(t, s) \in {\mathbb{R}}^{n+1} : t \in \mathbb{T}, 0 \le s \le f(t)\}$.
Algorithm 1 first produces a sample from the random vector (V, U) that is uniformly distributed in $\mathcal{B}({\widehat{g}}^{\mathfrak{T}})$. We can see this by letting h(v, u) denote the joint density of (V, U) and h(u | v) denote the conditional density of U given V = v. Then,
$h(v, u) = h(u \mid v)\,{g}^{\mathfrak{T}}(v)$.
Since we sample a height u for a given v from the Uniform $[0, {\widehat{g}}^{\mathfrak{T}}(v)]$ distribution,
$h(u \mid v) = 1/{\widehat{g}}^{\mathfrak{T}}(v)$ for $0 \le u \le {\widehat{g}}^{\mathfrak{T}}(v)$.
Therefore,
$h(v, u) = \frac{{g}^{\mathfrak{T}}(v)}{{\widehat{g}}^{\mathfrak{T}}(v)} = \frac{{g}^{\mathfrak{T}}(v)}{{g}^{\mathfrak{T}}(v){N}_{{\widehat{g}}^{\mathfrak{T}}}} = \frac{1}{{N}_{{\widehat{g}}^{\mathfrak{T}}}}$ on $\mathcal{B}({\widehat{g}}^{\mathfrak{T}})$.
Thus, we have shown that the joint density of the random vector (V, U) initially produced by Algorithm 1 is uniformly distributed on $\mathcal{B}({\widehat{g}}^{\mathfrak{T}})$. The above relationship also makes geometric sense since the volume of $\mathcal{B}({\widehat{g}}^{\mathfrak{T}})$ is exactly ${N}_{{\widehat{g}}^{\mathfrak{T}}}$.
Now, let (T, S) be the accepted random vector during the accept/reject step of Algorithm 1, i.e.
$(T, S) = \left( (V, U) \mid U \le f(V) \right)$.
Then, the uniform distribution of (V, U) on $\mathcal{B}({\widehat{g}}^{\mathfrak{T}})$ implies the uniform distribution of (T, S) on $\mathcal{B}$(f). Since the volume of $\mathcal{B}$(f) is N_{ f }, the density of (T, S) is identically 1/N_{ f }on $\mathcal{B}$(f) and 0 elsewhere. Hence, the marginal density of T on $\mathbb{T}$ is
${\int}_{0}^{f(t)} \frac{1}{{N}_{f}}\, ds = \frac{f(t)}{{N}_{f}} = {f}^{\bullet}(t)$.
Thus, we have shown that the accepted random vector T has the desired density f^{•}.
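The sampling scheme proved correct above can be sketched as a plain floating-point illustration (all names hypothetical and non-validated; the paper's autovalidating sampler instead builds the envelope from rigorous interval enclosures of the target shape):

```python
import math
import random

def rejection_sample(f_shape, g_hat, sample_g, n):
    """von Neumann rejection sampler: draw V from the normalized proposal,
    draw U uniformly on [0, g_hat(V)], and accept V whenever U <= f_shape(V).
    Requires the envelope property f_shape(t) <= g_hat(t) for all t."""
    accepted = []
    while len(accepted) < n:
        v = sample_g()
        u = random.uniform(0.0, g_hat(v))
        if u <= f_shape(v):
            accepted.append(v)
    return accepted

# Toy example: target shape f(t) = exp(-t) on [0, 5] under the flat
# envelope g_hat(t) = 1, with the proposal uniform on [0, 5].
random.seed(1)
draws = rejection_sample(lambda t: math.exp(-t),
                         lambda t: 1.0,
                         lambda: random.uniform(0.0, 5.0),
                         1000)
print(sum(draws) / len(draws))  # sample mean of a truncated Exp(1), near 1
```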
Proof of Theorem 7
Due to Theorem 5,
Therefore
and we have
Therefore the lower bound for the acceptance probability A(${\mathfrak{U}}_{W}$) of MRS approaches 1 no slower than linearly with the refinement of $\mathbb{T}$ by ${\mathfrak{U}}_{W}$. Note that this also holds for a general non-uniform partition with w replaced by the mesh (the largest interval width in the partition).
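This refinement behaviour can be illustrated with a toy floating-point experiment (all names hypothetical; a grid maximum per piece stands in for the rigorous interval range enclosure used by the Moore rejection sampler):

```python
import math

def envelope_mass(f, lo, hi, n_pieces):
    """Total mass of a piecewise-constant upper envelope for f over a
    uniform partition of [lo, hi] into n_pieces intervals.  Each piece's
    height is a grid maximum, a numerical stand-in for the rigorous
    interval range enclosure."""
    w = (hi - lo) / n_pieces
    mass = 0.0
    for i in range(n_pieces):
        a = lo + i * w
        height = max(f(a + w * k / 50.0) for k in range(51))
        mass += height * w
    return mass

f = lambda t: math.exp(-t * t)                    # a toy target shape
target_mass = math.sqrt(math.pi) * math.erf(3.0)  # exact integral of f over [-3, 3]
# Acceptance probability = target mass / envelope mass; it rises toward 1
# as the uniform partition is refined:
for n in (1, 4, 16, 64):
    print(n, round(target_mass / envelope_mass(f, -3.0, 3.0, n), 3))
```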
References
 1.
Jones G, Hobert J: Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statistical Science. 2001, 16 (4): 312-334. 10.1214/ss/1015346317.
 2.
Mossel E, Vigoda E: Phylogenetic MCMC algorithms are misleading on mixtures of trees. Science. 2005, 309: 2207-2209.
 3.
von Neumann J: Various techniques used in connection with random digits. John Von Neumann, Collected Works. 1963, V: Oxford University Press
 4.
Walker A: An efficient method for generating discrete random variables with general distributions. ACM Trans on Mathematical Software. 1977, 3: 253-256. 10.1145/355744.355749.
 5.
Sainudiin R: Machine interval experiments. PhD dissertation. 2005, Cornell University, Ithaca, New York
 6.
Moore R: Interval analysis. 1967, Prentice-Hall
 7.
Semple C, Steel M: Phylogenetics. 2003, Oxford University Press
 8.
Felsenstein J: Inferring phylogenies. 2003, Sunderland, MA: Sinauer Associates
 9.
Yang Z: Computational molecular evolution. 2006, UK: Oxford University Press
 10.
Moore R: Methods and applications of interval analysis. 1979, Philadelphia, Pennsylvania: SIAM
 11.
Alefeld G, Herzberger J: An introduction to interval computations. 1983, Academic Press
 12.
Hammer R, Hocks M, Kulisch U, Ratz D: C++ toolbox for verified computing: basic numerical problems. 1995, Springer-Verlag
 13.
Kulisch U, Lohner R, Facius A (Eds): Perspectives on enclosure methods. 2001, Springer-Verlag
 14.
Matsumoto M, Nishimura T: Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator. ACM Trans Model Comput Simul. 1998, 8: 3-30. 10.1145/272991.272995.
 15.
Williams D: Weighing the Odds: A Course in Probability and Statistics. 2001, Cambridge University Press
 16.
Felsenstein J: Evolutionary trees from DNA sequences: a maximum likelihood approach. Jnl Mol Evol. 1981, 17: 368-376. 10.1007/BF01734359.
 17.
Yang Z: Complexity of the simplest phylogenetic estimation problem. Proceedings Royal Soc London B Biol Sci. 2000, 267: 109-119. 10.1098/rspb.2000.0974.
 18.
Evans W, Kenyon C, Peres Y, Schulman L: Broadcasting on trees and the Ising model. Annals of Applied Probability. 2000, 10: 410-433. 10.1214/aoap/1019487349.
 19.
Neyman J: Molecular studies of evolution: a source of novel statistical problems. Statistical decision theory and related topics. Edited by: Gupta S, Yackel J. 1971, 1-27. New York: Academic Press
 20.
Jukes T, Cantor C: Evolution of protein molecules. Mammalian Protein Metabolism. Edited by: Munro H. 1969, 21-132. New York: Academic Press
 21.
Saitou N: Property and efficiency of the maximum likelihood method for molecular phylogeny. Jnl Mol Evol. 1988, 27: 261-273. 10.1007/BF02100082.
 22.
Yang Z: Statistical properties of the maximum likelihood method of phylogenetic estimation and comparison with distance matrix methods. Syst Biol. 1994, 43: 329-342. 10.2307/2413672.
 23.
Hosten S, Khetan A, Sturmfels B: Solving the likelihood equations. Found Comput Math. 2005, 5 (4): 389-407. 10.1007/s10208-004-0156-8.
 24.
Casanellas M, Garcia L, Sullivant S: Catalog of small trees. Algebraic statistics for computational biology. Edited by: Pachter L, Sturmfels B. 2005, 291-304. Cambridge University Press
 25.
Hasegawa M, Kishino H, Yano T: Dating of the human-ape splitting by a molecular clock of mitochondrial DNA. Jnl Mol Evol. 1985, 22: 160-174. 10.1007/BF02101694.
 26.
Sainudiin R, Yoshida R: Applications of interval methods to phylogenetic trees. Algebraic statistics for computational biology. Edited by: Pachter L, Sturmfels B. 2005, 359-374. Cambridge University Press
 27.
Brown W, Prager E, Wang A, Wilson A: Mitochondrial DNA sequences of primates, tempo and mode of evolution. Jnl Mol Evol. 1982, 18: 225-239. 10.1007/BF01734101.
 28.
Marsaglia G: Generating discrete random numbers in a computer. Comm ACM. 1963, 6: 37-38. 10.1145/366193.366228.
 29.
Galassi M, Davies J, Theiler J, Gough B, Jungman G, Booth M, Rossi F: GNU Scientific Library Reference Manual. 2003, Network Theory Ltd, 2, http://www.gnu.org/software/gsl/
 30.
Hofschuster W, Krämer W: C-XSC 2.0: A C++ library for extended scientific computing. Numerical software with result verification, Lecture notes in computer science. Edited by: Alt R, Frommer A, Kearfott R, Luther W. 2004, 2991: 15-35. Springer-Verlag
 31.
Rannala B, Yang Z: Probability distribution of molecular evolutionary trees: a new method of phylogenetic inference. Jnl Mol Evol. 1996, 43: 304-311. 10.1007/BF02338839.
 32.
Yang Z, Rannala B: Branch-length prior influences Bayesian posterior probability of phylogeny. Syst Biol. 2005, 54: 455-470.
 33.
Green R, Krause J, Ptak S, Briggs A, Ronan M, Simons J, Du L, Egholm M, Rothberg J, Paunovic M, Pääbo S: Analysis of one million base pairs of Neandertal DNA. Nature. 2006, 444: 330-336.
 34.
Efron B, Halloran E, Holmes S: Bootstrap confidence levels for phylogenetic trees. Proc Natl Acad Sci. 1996, 93: 13429-13434.
 35.
Gilks W, Wild P: Adaptive rejection sampling for Gibbs sampling. Applied Statistics. 1992, 41: 337-348. 10.2307/2347565.
 36.
Kulisch U: Advanced arithmetic for the digital computer, interval arithmetic revisited. Perspectives on enclosure methods. Edited by: Kulisch U, Lohner R, Facius A. 2001, 50-70. Springer-Verlag
 37.
Tucker W: Autovalidating numerical methods. 2004, Lecture notes, Uppsala University
 38.
Jaulin L, Kieffer M, Didrit O, Walter E: Applied interval analysis: with examples in parameter and state estimation, robust control and robotics. 2004, Springer-Verlag
 39.
Strimmer K, von Haeseler A: Quartet puzzling: A quartet maximum likelihood method for reconstructing tree topologies. Mol Biol Evol. 1996, 13: 964-969.
 40.
Levy D, Yoshida R, Pachter L: Beyond pairwise distances: neighbor joining with phylogenetic diversity estimates. Mol Biol Evol. 2006, 23: 491-498.
Acknowledgements
R.S. is a Research Fellow of the Royal Commission for the Exhibition of 1851. This work was partly supported by a joint NSF/NIGMS grant DMS-0201037. Many thanks to Rob Strawderman, Warwick Tucker and Stéphane Aris-Brosou for constructive comments, Joe Felsenstein for clarifying the transition probabilities under the HKY model and Ziheng Yang for various clarifications and encouragement.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
RS developed the basic algorithm, analyzed the data and wrote the first draft. TY improved the objectoriented interface and refined the final implementation of the algorithm. Both authors edited the manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Sainudiin, R., York, T. Autovalidating von Neumann rejection sampling from small phylogenetic tree spaces. Algorithms Mol Biol 4, 1 (2009). https://doi.org/10.1186/1748718841
Keywords
 Markov Chain Monte Carlo
 Branch Length
 Directed Acyclic Graph
 Acceptance Probability
 Target Shape