Auto-validating von Neumann rejection sampling from small phylogenetic tree spaces

Background: In phylogenetic inference one is interested in obtaining samples from the posterior distribution over the tree space on the basis of some observed DNA sequence data. One of the simplest sampling methods is the rejection sampler due to von Neumann. Here we introduce an auto-validating version of the rejection sampler, via interval analysis, to rigorously draw samples from posterior distributions over small phylogenetic tree spaces.

Results: The posterior samples from the auto-validating sampler are used to rigorously (i) estimate posterior probabilities for different rooted topologies based on mitochondrial DNA from human, chimpanzee and gorilla, (ii) conduct a non-parametric test of rate variation between protein-coding and tRNA-coding sites from three primates and (iii) obtain a posterior estimate of the human-Neanderthal divergence time.

Conclusion: This solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over rooted and unrooted small tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.


Background
Obtaining samples from a real-valued target density f•(t) is a basic problem in statistical estimation. The target f•(t): T → R maps points t ∈ T ⊂ R^n to real numbers in R. In Bayesian phylogenetic estimation, we want to draw independent and identically distributed samples from a target posterior density on the space of phylogenetic trees. The standard point-valued or punctual Monte Carlo methods via conventional floating-point arithmetic are typically non-rigorous, as they do not account for all sources of numerical error and are limited to evaluating the target at finitely many points. The standard approaches to sampling from the posterior density, especially over phylogenetic trees, rely on Markov chain Monte Carlo (MCMC) methods. Despite their asymptotic validity, it is nontrivial to guarantee that an MCMC algorithm has converged to stationarity [1], and thus MCMC convergence diagnostics on phylogenetic tree spaces are heuristic [2].
A more direct sampler, capable of producing independent and identically distributed samples from the target density f•(t) := f(t)/N_f by only evaluating the target shape f(t), without knowing the normalizing constant N_f := ∫_T f(t) dt, is the von Neumann rejection sampler [3]. However, the limiting step in the rejection sampler is the construction of an envelope function ĝ(t) that is not only greater than the target shape f(t) := N_f f•(t) at every t ∈ T, but also easy to normalize and draw samples from. Moreover, a practical and efficient envelope function has to be as close to the target shape as possible from above. When an envelope function is constructed using point-valued methods, except for simple classes of targets, one cannot guarantee that the envelope function dominates the target shape globally.
None of the available samplers can rigorously produce independent and identically distributed samples from the posterior distribution over phylogenetic tree spaces, even for 3 or 4 taxa. We describe a new approach for rigorously drawing samples from a target posterior distribution over small phylogenetic tree spaces using the theory of interval analysis. This method can circumvent the problems associated with (i) heuristic convergence diagnostics in MCMC samplers and (ii) pseudoenvelopes constructed via non-rigorous point-valued methods in rejection samplers.
Informally, our method partitions the domain into boxes and uses interval analysis to rigorously bound the target shape in each box; then we use as envelope the simple function which takes on, in each box, the upper bound obtained for that box. It is easy to draw samples from the density corresponding to this step-function envelope. More formally, the method employs an interval extension of the target posterior shape f(t) over a partition {t^(1), t^(2),...} of the tree space T = ∪_i t^(i). This partition is adaptively constructed by a priority queue. The interval-extended target shape maps boxes in T to intervals in R. This image interval provides an upper bound for the global maximum and a lower bound for the global minimum of f over each element of the partition of T. We use this information to construct an envelope as a simple function over the partition of T. Using the Alias method [4] we efficiently propose samples from this normalized step-function envelope for von Neumann rejection sampling.
We call our method auto-validating because we employ interval methods to rigorously construct the envelope for a large class of target densities. The method was described in a more rudimentary form in [5]. Unlike many conventional samplers, each sample produced by our method is equivalent to a computer-assisted proof that it is drawn from the desired target, up to the pseudo-randomness of the underlying, deterministic, pseudo-random number generator. MRS 0.1.2, a C++ class library for statistical set processing, is available from http://www.math.canterbury.ac.nz/~r.sainudiin/codes/mrs under the terms of the GNU General Public License.
The rest of the paper is organized as follows. In the Methods Section, we introduce (i) the von Neumann rejection sampler (RS), (ii) the phylogenetic estimation problem, (iii) interval analysis and (iv) an interval extension of the rejection sampler called the Moore rejection sampler (MRS), in honor of Ramon E. Moore, one of the influential founders of interval analysis [6]. In the Results Section, we employ MRS to rigorously draw samples from the posterior density over small tree spaces. Using one of the earliest primate mitochondrial DNA data sets, we use the posterior samples to estimate the posterior probability of each rooted tree topology and conduct a non-parametric test of rate variation between protein-coding and tRNA-coding sites. Using one of the latest data sets, we obtain a rigorous posterior estimate of the human-Neanderthal divergence time. We can also draw samples from the space of unrooted triplet and quartet trees. We conclude after a discussion of the method.

Methods
In the following sections, we first introduce the rejection sampler (RS) due to von Neumann [3]. Second, we describe the basic phylogenetic inference problem (e.g. [7-9]). Then, we introduce the basic principles of interval methods (e.g. [6,10-13]). Finally, we construct interval extensions of RS to rigorously draw independent and identically distributed samples from small phylogenetic tree spaces. We leave the formal proofs to the Appendix for completeness.

Rejection sampler (RS)
Rejection sampling [3] is a Monte Carlo method to draw independent samples from a target random variable or random vector T with density f•(t) := f(t)/N_f, where t ∈ T ⊂ R^n, i.e. T ~ f•. The challenge is to draw the samples without any knowledge of the normalizing constant N_f. The sampler requires (i) the target shape f(t), (ii) an envelope function ĝ(t) satisfying ĝ(t) ≥ f(t) at every t ∈ T, (iii) a normalizing constant N_ĝ := ∫_T ĝ(t) dt, (iv) a proposal density g(t) := ĝ(t)/N_ĝ over T from which independent samples can be drawn, and finally (v) f(t) and ĝ(t) must be computable for any t ∈ T. Given these ingredients, one draws t ~ g and u ~ Uniform[0, 1], and accepts t as a sample from f• if u·ĝ(t) ≤ f(t). Let A(ĝ) be the probability that a point proposed according to g gets accepted as an independent sample from f• through the envelope function ĝ. Observe that the envelope-specific acceptance probability A(ĝ) is the ratio of the integrals, A(ĝ) = N_f/N_ĝ, and the number of samples from g needed to obtain one sample from f• is geometrically distributed with mean 1/A(ĝ) (e.g. [15]).
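The accept/reject loop above can be sketched in a few lines. The following is a minimal illustration, not the paper's MRS implementation: the target shape f(t) = t^2 on [0, 1] and the constant envelope ĝ(t) = 1 are hypothetical stand-ins, and all helper names are ours.

```python
import random

def rejection_sample(f, g_sample, g_density, N_ghat, n, rng=random.Random(42)):
    """Draw n i.i.d. samples from the density proportional to f using
    the envelope ghat(t) = N_ghat * g_density(t), which must dominate f."""
    samples = []
    while len(samples) < n:
        t = g_sample(rng)                    # propose t ~ g
        u = rng.random()                     # u ~ Uniform(0, 1)
        # accept t with probability f(t) / ghat(t)
        if u * N_ghat * g_density(t) <= f(t):
            samples.append(t)
    return samples

# Toy target shape on [0, 1]: f(t) = t^2 (unnormalized), enveloped by the
# constant ghat(t) = 1 over [0, 1], so g is Uniform(0, 1) and N_ghat = 1.
draws = rejection_sample(lambda t: t * t,
                         lambda rng: rng.random(),
                         lambda t: 1.0, 1.0, 20000)
mean = sum(draws) / len(draws)   # E[T] = 3/4 for the density 3t^2 on [0, 1]
```

Here the acceptance probability is A(ĝ) = N_f/N_ĝ = (1/3)/1, so about three proposals are needed per accepted sample, matching the geometric-mean result quoted above.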

Phylogenetic estimation
In this section we briefly review phylogenetic estimation. A more detailed account can be found in [7][8][9]. Inferring the ancestral relationship among a set of extant species based on their DNA sequences is a basic problem in phylogenetic estimation. One can obtain the likelihood of a particular phylogenetic tree that relates the extant species of interest at its leaves by superimposing a continuous time Markov chain model of DNA substitution upon that tree. The length of an edge (branch length) connecting two nodes (species) in the tree represents the amount of evolutionary time (divergence) between the two species. The internal nodes represent ancestral species. During the likelihood computation, one needs to integrate over all possible states at the unobserved ancestral nodes.
Next we give a brief introduction to some phylogenetic nomenclature. A phylogenetic tree is said to be rooted if one of the internal nodes, say node r, is identified as the root of the tree; otherwise it is said to be unrooted. A rooted tree is conventionally depicted with the root node r at the top. The four topology-labeled, three-leaved, rooted trees, namely 0 t, 1 t, 2 t and 3 t, with leaf label set {1, 2, 3}, are depicted in Figure 1(i)-(iv). The unrooted, three-leaved tree with topology label 4, or the unrooted triplet 4 t, is shown in Figure 1(v). For each tree, the terminal branch lengths, i.e. the branch lengths leading to the leaf nodes, have to be strictly positive and the internal branch lengths have to be non-negative. Our rooted triplets (Figure 1(i)-(iv)) are said to satisfy the molecular clock, since the branch lengths of each k t, where k ∈ {0, 1, 2, 3}, satisfy the constraint that the distance from the root node r to each of the leaf nodes is equal to k t 0 + k t 1, with k t 1 > 0 and k t 0 ≥ 0.
Let U denote a finite alphabet of characters and let d denote the multiply-aligned sequence data at v homologous sites from n taxa. We think of d as an n × v matrix with entries from U. We are interested in estimating the branch lengths and topologies of the tree underlying our observed d. Let b_k denote the number of branches and s_k denote the number of nodes of a tree with a specific topology or branching order labeled by k. Thus, for a given topology label k, n labeled leaves and b_k many branches, the labeled tree k t is the topology-labeled vector of branch lengths ( k t 1 ,..., k t b_k ) contained in the topology-labeled tree space k T. Any subset of the tree space with |K| many topologies in the topology label set K can be defined as K T := ∪_{k ∈ K} k T. An explicit model of sequence evolution is prescribed in order to obtain the likelihood of observing data d at the leaf nodes as a function of the parameter k t ∈ K T for each topology label k ∈ K.
Such a model prescribes P_{a_i, a_j}(t), the probability of mutation from a character a_i ∈ U to another character a_j ∈ U in time t. Using such a transition probability we may compute ℓ_q( k t), the log-likelihood of the data d at site q ∈ {1,..., v}, or the q-th column of d, via a post-order traversal over the labeled tree with branch lengths k t := ( k t 1 , k t 2 ,..., k t b_k ). This amounts to the sum-product Algorithm 2 [16], in which each node h carries a partial likelihood vector l_h ∈ R^{|U|} and specifies the length of the branch leading to its ancestor as k t h .
input: (i) a labeled tree with branch lengths k t := ( k t 1 , k t 2 ,..., k t b_k ), (ii) transition probabilities P_{a_i, a_j}(t) for any a_i, a_j ∈ U, (iii) stationary distribution π(a_i) over each character a_i ∈ U, (iv) site pattern or data d•,q at site q.
recurse: compute l_h for each sub-terminal node h, then those of their ancestors recursively, to finally compute l_r for the root node r and thereby obtain the likelihood for site q.
The maximum likelihood estimate is a point estimate (single best guess) of the unknown phylogenetic tree on the basis of the observed data d: it is the tree that maximizes the likelihood over the tree space. The simplest probability models for character mutation are continuous time Markov chains with finite state space U. We next introduce three such models employed in this study. We only derive the likelihood functions for the simplest model with just two characters, as it is thought to well represent the core problems in phylogenetic estimation (see e.g. [17]).
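The post-order (sum-product) recursion of Algorithm 2 can be sketched as follows, assuming a CFN-style two-state model; the dictionary-based tree encoding and all function names are our own illustrative choices, not the paper's code.

```python
from math import exp

# CFN transition probability: P[a -> b] in time t over states {0, 1}
def p_cfn(a, b, t):
    same = 0.5 * (1.0 + exp(-2.0 * t))
    return same if a == b else 1.0 - same

def site_likelihood(tree, branch, data, states=(0, 1), pi=None):
    """Post-order (pruning) likelihood of one site pattern.
    tree: dict node -> list of children; leaves have no entry.
    branch: dict node -> length of branch to its ancestor.
    data: dict leaf -> observed state."""
    if pi is None:
        pi = {s: 1.0 / len(states) for s in states}
    def partial(node):
        # l[node][a] = P(data below node | state a at node)
        if node not in tree:                       # leaf node
            return {a: 1.0 if a == data[node] else 0.0 for a in states}
        l = {a: 1.0 for a in states}
        for child in tree[node]:
            lc = partial(child)
            for a in states:
                l[a] *= sum(p_cfn(a, b, branch[child]) * lc[b]
                            for b in states)
        return l
    lr = partial('r')
    return sum(pi[a] * lr[a] for a in states)

# Rooted triplet: root r with children: internal node h (ancestor of
# leaves 1 and 2) and leaf 3; hypothetical clock-like branch lengths.
tree = {'r': ['h', 3], 'h': [1, 2]}
branch = {'h': 0.05, 1: 0.1, 2: 0.1, 3: 0.15}
lik_xxx = site_likelihood(tree, branch, {1: 0, 2: 0, 3: 0})
# The eight site-pattern probabilities must sum to one:
total = sum(site_likelihood(tree, branch, {1: a, 2: b, 3: c})
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

The sum over all eight patterns returning one is a useful sanity check that the transition matrices are stochastic and the root distribution is proper.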

Posterior density of a tree
The posterior density f•( k t), conditional on data d, at tree k t is the normalized product of the likelihood l_d( k t) and the prior density p( k t) over a given tree space K T:
f•( k t) = l_d( k t) p( k t) / ∫_{K T} l_d( k s) p( k s) d( k s).
We assume a uniform prior density over a large box, or a union of large boxes, in a given tree space K T. Typically, the sides of the box, giving the range of branch lengths, are extremely long, say [0, 10] or [10^-10, 10]. The branch lengths are measured in units of expected number of DNA substitutions per site, and therefore the support of our uniform prior density over K T contains the biologically relevant branch lengths. If K T is a union of distinct topologies then we let our prior be an equally weighted finite mixture of uniform densities over large boxes in each topology. Naturally, other prior densities are possible, especially in the presence of additional information. We choose flat priors so that the target posterior shape f( k t) := l_d( k t) p( k t) has the convenient interpretation of being proportional to the likelihood function, in the absence of prior information beyond a compact support specification.

Likelihood of a triplet under Cavender-Farris-Neyman (CFN) model
We now describe the simplest model for the evolution of binary sequences under a symmetric transition matrix over all branches of a tree. This model has been used by authors in various fields including molecular biology, information theory, operations research and statistical physics; for references see [7,18]. This model is referred to as the Cavender-Farris-Neyman (CFN) model in molecular biology, although in other fields it has been referred to as 'the on-off machine', the 'symmetric binary channel' and the 'symmetric two-state Poisson model'. Although the relatively tractable CFN model itself is not popular in applied molecular evolution, the lessons learned under the CFN model often extend to more realistic models of DNA mutation (e.g. [17]). Thus, our first stop is the CFN model. Here the nucleotide data are recoded into the two-letter alphabet U = {Y, R} of pyrimidines (Y) and purines (R), and mutations between the two states occur at an equal rate. Thus, the probability that Y mutates to R, or vice versa, in time t is a(t) := (1 - e^{-2t})/2. The stationary distribution is uniform on U, i.e. π(R) = π(Y) = 1/2.
When there are only three taxa, there are five tree topologies of interest, as depicted in Figure 1. There are 2^3 = 8 possible site patterns, i.e. for each site q ∈ {1, 2,..., v}, the q-th column of the data d, denoted by d•,q, is one of eight possibilities, numbered 0, 1,..., 7 for convenience. Given multiple sequence alignment data d from 3 taxa at v homologous sites, i.e. d ∈ {Y, R}^{3 × v}, the likelihood function over the tree space k T is simplified from (2) to the product l_d( k t) = ∏_{i=0}^{7} (l_i( k t))^{c_i}, where l_i( k t) is the likelihood of the i-th site pattern as in (4) and c_i is the count of sites with pattern i. In fact, l_i( k t) = P(i | k t) is the probability of observing site pattern i given topology label k and branch lengths k t, and similarly l_d( k t) = P(d | k t).
Consider the unrooted tree space with a single topology labeled 4 and three non-negative terminal branch lengths 4 t = ( 4 t 1 , 4 t 2 , 4 t 3 ) ∈ R^3_+, as shown in Figure 1(v). An application of Algorithm 2 yields the likelihoods l_0( 4 t), l_1( 4 t),..., l_7( 4 t), as derived in (19)-(25). The patterns within each of the classes xxx, xxy, yxx and xyx have equal likelihoods; therefore, the multiple sequence alignment data d from three taxa evolving under Model 1 can be summarized by the minimal sufficient site pattern counts (c_xxx, c_xxy, c_yxx, c_xyx) := (c_0 + c_1, c_2 + c_3, c_4 + c_5, c_6 + c_7), which simplifies (5) to l_d( 4 t) = l_xxx^{c_xxx} l_xxy^{c_xxy} l_yxx^{c_yxx} l_xyx^{c_xyx}. Note that the probabilities of the eight patterns in our sample space, given in (4), sum to one.
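The reduction to four pattern-class counts can be checked numerically. The sketch below obtains the class likelihoods by summing over the two possible states at the internal node with a uniform root distribution; the function names, example branch lengths and the use of the primate counts (762, 54, 41, 38) purely as inputs are our own illustrative choices.

```python
from math import exp, log

def cfn_class_probs(t1, t2, t3):
    """Pattern-class probabilities for an unrooted CFN triplet with
    terminal branch lengths t1, t2, t3."""
    a = [(1.0 - exp(-2.0 * t)) / 2.0 for t in (t1, t2, t3)]  # mismatch probs
    b = [1.0 - ai for ai in a]                               # match probs
    return {
        'xxx': b[0] * b[1] * b[2] + a[0] * a[1] * a[2],
        'xxy': b[0] * b[1] * a[2] + a[0] * a[1] * b[2],
        'yxx': a[0] * b[1] * b[2] + b[0] * a[1] * a[2],
        'xyx': b[0] * a[1] * b[2] + a[0] * b[1] * a[2],
    }

def cfn_log_likelihood(t, counts):
    p = cfn_class_probs(*t)
    return sum(c * log(p[k]) for k, c in counts.items())

probs = cfn_class_probs(0.02, 0.03, 0.05)
total = sum(probs.values())     # the four classes partition all 8 patterns
counts = {'xxx': 762, 'xxy': 54, 'yxx': 41, 'xyx': 38}
ll = cfn_log_likelihood((0.02, 0.03, 0.05), counts)
```

Since the four class probabilities exhaust the eight site patterns, their sum is one for any branch lengths, which matches the remark above about the probabilities in (4).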

Likelihood of a triplet under Jukes-Cantor (JC) model
The r-state symmetric model introduced in [19] is specified by the r × r rate matrix with equal off-diagonal entries over an alphabet set U of size r. The stationary distribution under this model is the uniform distribution on U. Thus, the CFN model is the 2-state symmetric model over U = {Y, R}. The Jukes-Cantor (JC) model [20] is the 4-state symmetric model over U = {A, C, G, T}. This is perhaps the simplest model on four characters.
The transition probability matrix P(t) = e^{Qt} is also symmetric. The probability that any given nucleotide mutates to any other nucleotide in time t is P_{x,y}(t), and the probability that it is found in the same state is P_{x,x}(t). With branch lengths in units of expected substitutions per site, these transition probabilities are P_{x,y}(t) = (1 - e^{-4t/3})/4 for x ≠ y and P_{x,x}(t) = (1 + 3e^{-4t/3})/4. The stationary distribution is uniform, i.e. π(A) = π(C) = π(G) = π(T) = 1/4.
Consider the three non-negative terminal branch lengths 4 t = ( 4 t 1 , 4 t 2 , 4 t 3 ) ∈ R^3_+ of an unrooted tree 4 t of Figure 1(v). An application of Algorithm 2 to compute the likelihoods of the 64 possible site patterns (see e.g. [21-24]) reveals five minimally sufficient site pattern classes. Let x, y and z simply denote distinct characters from the alphabet set U = {A, C, G, T} at taxa 1, 2 and 3, respectively. The minimally sufficient site pattern classes xxx, xyz, xxy, yxx and xyx encode 4, 24, 12, 12 and 12 nucleotide site patterns, respectively. The likelihoods of these five classes follow by a computation similar to that in (19)-(25).
Let c_ijk denote the number of sites with site pattern class ijk ∈ {xxx, xyz, xxy, yxx, xyx}. Then, under the assumption of independence across sites, we obtain the likelihood of a given data d by multiplying the site-specific likelihoods: l_d( 4 t) = ∏_{ijk} (l_ijk( 4 t))^{c_ijk}. Once again, the likelihood of a rooted tree or the star tree can be obtained from that of the unrooted tree by substituting the appropriate constraints on branch lengths in the above equations, or by directly applying Algorithm 2 to the appropriate input tree with its topology and branch lengths.
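The collapse of the 64 JC site patterns into the five classes can be verified by brute force. The sketch below assumes the standard JC parameterization in expected substitutions per site; all names are our own.

```python
from math import exp
from itertools import product

U = 'ACGT'

def p_jc(a, b, t):
    # Jukes-Cantor transition probability (branch length t in expected
    # substitutions per site -- an assumption of this sketch)
    e = exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if a == b else 0.25 - 0.25 * e

def pattern_class(x1, x2, x3):
    if x1 == x2 == x3: return 'xxx'
    if x1 == x2: return 'xxy'   # taxa 1 and 2 agree, 3 differs
    if x2 == x3: return 'yxx'   # taxon 1 differs
    if x1 == x3: return 'xyx'   # taxon 2 differs
    return 'xyz'                # all three distinct

def jc_class_probs(t1, t2, t3):
    """Sum the 64 site-pattern probabilities of an unrooted JC triplet
    into the five minimally sufficient classes."""
    probs = {}
    for x1, x2, x3 in product(U, repeat=3):
        p = 0.25 * sum(p_jc(x0, x1, t1) * p_jc(x0, x2, t2) * p_jc(x0, x3, t3)
                       for x0 in U)   # uniform distribution at internal node
        key = pattern_class(x1, x2, x3)
        probs[key] = probs.get(key, 0.0) + p
    return probs

probs = jc_class_probs(0.02, 0.04, 0.06)
total = sum(probs.values())
```

Running this confirms the class counts quoted above (4, 24, 12, 12 and 12 patterns) exhaust the sample space: the five class probabilities sum to one.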

Model 3 (Hasegawa-Kishino-Yano (HKY) model)
The Hasegawa-Kishino-Yano or HKY model [25] has all four nucleotides in the state space, i.e. U = {A, C, G, T}. There are five parameters in this more flexible model. Transitions are changes within the purine {A, G} or pyrimidine {C, T} state subsets, while transversions are changes from purine to pyrimidine or from pyrimidine to purine. In this model, we have a mutational parameter κ that allows for transition:transversion bias and four additional parameters π_A, π_C, π_G and π_T that explicitly control the stationary distribution. The off-diagonal entries of the rate matrix are q_{a_i, a_j} = κ π_{a_j} if the change from a_i to a_j is a transition and q_{a_i, a_j} = π_{a_j} if it is a transversion, with the diagonal entries fixed so that each row sums to zero. The transition probabilities are known analytically for this model (see e.g. [8, p. 203]). We can use these expressions when evaluating the likelihood of a rooted or unrooted tree, along with the five mutational parameters, via Algorithm 2. For simplicity we set the stationary distribution parameters to the empirical nucleotide frequencies and κ to 2.0 in this study.
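The rate matrix just described can be sketched as follows, under the commonly used HKY parameterization (an assumption on our part, since the display equation is not reproduced here); the pi values are illustrative, not the empirical frequencies of any data set in this paper.

```python
# Sketch of the HKY rate matrix (assumed parameterization: off-diagonal
# entry q[a][b] = kappa * pi[b] for transitions, pi[b] for transversions;
# diagonal set so each row sums to zero).
PURINES = {'A', 'G'}

def hky_rate_matrix(pi, kappa):
    bases = ['A', 'C', 'G', 'T']
    def is_transition(a, b):
        return (a in PURINES) == (b in PURINES)   # same subset, a != b
    q = {a: {} for a in bases}
    for a in bases:
        off = 0.0
        for b in bases:
            if a == b:
                continue
            q[a][b] = (kappa if is_transition(a, b) else 1.0) * pi[b]
            off += q[a][b]
        q[a][a] = -off
    return q

pi = {'A': 0.3, 'C': 0.2, 'G': 0.2, 'T': 0.3}   # illustrative frequencies
Q = hky_rate_matrix(pi, kappa=2.0)
row_sums = [sum(Q[a].values()) for a in 'ACGT']
# detailed balance: pi_a * q_ab == pi_b * q_ba, so pi is stationary
balance = all(abs(pi[a] * Q[a][b] - pi[b] * Q[b][a]) < 1e-12
              for a in 'ACGT' for b in 'ACGT' if a != b)
```

The detailed-balance check illustrates why π is the stationary distribution of this chain, and the transition:transversion bias shows up as q_{A,G} > q_{A,C} when κ > 1.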

Interval analysis
Let IR denote the set of closed and bounded real intervals. Any element of IR is denoted by x := [x̲, x̄] := {x ∈ R : x̲ ≤ x ≤ x̄}, where x̲ ≤ x̄ and x̲, x̄ ∈ R. Next we define arithmetic over IR.

Definition 1 (Interval Operation)
If the binary operator ⋆ is one of +, -, ×, /, then we define an arithmetic on operands in IR by
x ⋆ y := {x ⋆ y : x ∈ x, y ∈ y},
with the exception that x/y is undefined if 0 ∈ y.
Theorem 1 (Interval arithmetic) Arithmetic on the pair x, y ∈ IR is given by:
x + y = [x̲ + y̲, x̄ + ȳ],
x - y = [x̲ - ȳ, x̄ - y̲],
x × y = [min{x̲y̲, x̲ȳ, x̄y̲, x̄ȳ}, max{x̲y̲, x̲ȳ, x̄y̲, x̄ȳ}],
x / y = x × [1/ȳ, 1/y̲], provided 0 ∉ y.
When computing with finite precision, say in floating-point arithmetic, directed rounding must be taken into account (see e.g. [6,10]) to contain the solution. Interval multiplication is branched into nine cases, on the basis of the signs of the boundaries of the operands, such that only one case entails more than two real multiplications. Therefore, a rigorous computer implementation of an interval operation mostly requires two directed-rounding floating-point operations. Interval addition and multiplication are both commutative and associative but not distributive. For example, with x = [1, 2], y = [1, 1] and z = [-1, -1], we have x(y + z) = [0, 0] while xy + xz = [-1, 1]. Interval arithmetic satisfies a weaker rule than distributivity, called sub-distributivity: x(y + z) ⊆ xy + xz.
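Definition 1 and Theorem 1 translate directly into code. The following minimal interval class is our own sketch, with no directed rounding (unlike a rigorous implementation such as C-XSC); it reproduces the sub-distributivity example above.

```python
class Interval:
    """A minimal real interval [lo, hi] (sketch only: no outward
    rounding, so results are exact only up to floating point)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = (lo, lo if hi is None else hi)
        assert self.lo <= self.hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    def __truediv__(self, o):
        if o.lo <= 0 <= o.hi:
            raise ZeroDivisionError('0 in divisor interval')
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)
    def __contains__(self, x):
        return self.lo <= x <= self.hi
    def __repr__(self):
        return f'[{self.lo}, {self.hi}]'

x, y, z = Interval(1, 2), Interval(1, 1), Interval(-1, -1)
lhs = x * (y + z)            # = [0, 0]
rhs = x * y + x * z          # = [-1, 1] -- strictly wider
subdist = lhs.lo >= rhs.lo and lhs.hi <= rhs.hi   # x(y+z) is inside xy + xz
```

The widening of `rhs` relative to `lhs` is the dependency problem in miniature: the two occurrences of x in xy + xz are treated as independent.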
An extremely useful property of interval arithmetic that is a direct consequence of Definition 1 is summarized by the following theorem.
Note that an immediate implication of Theorem 2 is that when x = [x, x] and y = [y, y] are thin intervals, i.e. each has zero width and represents the real numbers x and y, respectively, then x ⋆ y contains the result of the real arithmetic operation x ⋆ y.
We can make IR n a metric space by equipping it with the Hausdorff distance.
Our main motivation for the extension to intervals is to enclose the range of a function f over a box x, range(f; x) := {f(x) : x ∈ x}. Except for trivial cases, few tools are available to obtain the range exactly. Instead, we evaluate f recursively through its directed acyclic graph (DAG) expression. The leaf or terminal node of the DAG is a constant or a variable, and thus the f_i for a leaf i is set equal to the respective constant or variable. The recursion starts at the leaves and terminates at the root of the DAG. The DAG for an elementary f is simply its expression f with n sub-expressions f_1, f_2,..., f_n, where each ⊙f_i is computed according to (8).
We look at some DAGs of simple functions to concretely illustrate these ideas.
Theorem 3 (Interval rational functions) Consider the rational function f(x) = p(x)/q(x), where p and q are polynomials. Let f be the natural interval extension of its DAG expression f such that f(y) is well-defined for some y ∈ IR, and let x, x' ∈ IR. Then we have (i) inclusion isotony: x ⊆ x' ⇒ f(x) ⊆ f(x'), and (ii) range enclosure: range(f; x) ⊆ f(x).
The fundamental implication of the above theorem is that it allows us to enclose the range of any elementary function and thereby produces an upper bound for the global maximum and a lower bound for the global minimum over any compact subset of the domain upon which the function is well-defined. This is the workhorse for rigorously constructing an envelope for rejection sampling.
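Theorem 3 can be illustrated on a small polynomial: replacing each operation in the DAG of f(x) = x(1 - x) by its interval counterpart encloses the true range [0, 1/4] over [0, 1], and the enclosure tightens under subdivision. The helper names are ours.

```python
def imul(a, b):
    # interval multiplication on pairs (lo, hi)
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def f_interval(x):
    """Natural interval extension of f(x) = x*(1 - x), evaluated by
    replacing each DAG operation with its interval counterpart."""
    one = (1.0, 1.0)
    return imul(x, isub(one, x))

def enclosure(lo, hi, pieces):
    """Union of enclosures over a uniform partition of [lo, hi]."""
    w = (hi - lo) / pieces
    encl = [f_interval((lo + i * w, lo + (i + 1) * w)) for i in range(pieces)]
    return (min(e[0] for e in encl), max(e[1] for e in encl))

# The true range of f over [0, 1] is [0, 1/4]; a single evaluation gives
# the loose enclosure [0, 1], and refinement tightens it toward [0, 1/4].
coarse = enclosure(0.0, 1.0, 1)
fine = enclosure(0.0, 1.0, 64)
```

The overestimation in `coarse` again comes from the dependency problem (x appears twice in the expression); `fine` shows the linear tightening with mesh width claimed below for elementary functions.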
Unlike the natural interval extension of an f ∈ S, which produces exact range enclosures, the natural interval extension f(x) of an f ∈ E often overestimates range(f; x), but can be shown under mild conditions to linearly approach the range as the maximal width of the box x goes to zero. This implies that a partition of x into smaller boxes {x^(1),..., x^(m)} yields a tighter enclosure of range(f; x) through the union ∪_i f(x^(i)).
Figure: The sub-expressions f_1,..., f_10 in the directed acyclic graph (DAG) expression of l_0( 0 t). Note that the leaf nodes are constants (s_2, s_5, s_7 and s_9) or variables (s_1).

Figure 4
Adaptive range enclosure of the posterior density over the star-tree space. Range enclosure of the loglikelihood (white line) for the human, chimpanzee and gorilla mitochondrial sequence data [27] analyzed in [17], under the CFN model with c xxx = 762 and v = 895 over star-trees, via its interval extension linearly tightens with the mesh. One hundred samples (+) from the MRS and the maximum likelihood estimate (red dot) are shown.

Likelihood of a box of trees
The likelihood function (2) over trees, with a DAG expression that is directly or indirectly obtained via Algorithm 2, has a natural interval extension over boxes of trees [5,26]. This interval extension of the likelihood function allows us to produce rigorous enclosures of the likelihood over a box in the tree space. Next we give a concrete example of the natural interval extension of the likelihood function over an interval of trees 0 t in the star-tree space 0 T. The same ideas extend to any labeled box of trees k t when the number of branch lengths is greater than one, and more generally to a finite union of labeled boxes with possibly distinct labels.
Example 3 (Posterior density over the CFN star-tree space 0 T) The trifurcating star-tree 0 t := ( 0 t 1 ) has topology label 0 and common branch length 0 t 1 > 0. Either a direct application of Algorithm 2 with input triplet 0 t, or a substitution of 4 t 1 , 4 t 2 and 4 t 3 in (6) by 0 t 1 , yields the following 0 T-specific likelihoods: writing a := (1 - e^{-2 0 t 1})/2, we get l_xxx( 0 t) = (1 - a)^3 + a^3 and l_xxy( 0 t) = l_yxx( 0 t) = l_xyx( 0 t) = a(1 - a). Therefore, on the basis of (4), (5), (6) and (7), the likelihood of the data at the star-tree 0 t ∈ 0 T is l_d( 0 t) = ((1 - a)^3 + a^3)^{c_xxx} (a(1 - a))^{v - c_xxx}. Thus, the natural interval extension f maps an interval 0 t in the tree space 0 T to an interval in IR that encloses the target shape, the likelihood, over 0 t.
For the human, chimpanzee and gorilla mitochondrial sequence data [27] analyzed in [17], c_xxx = 762 and v = 895. Figure 4 shows log(f( 0 t)), the log-likelihood function for this data set, as the white line. Evaluations of its interval extension over partitions by 3, 7 and 19 intervals are depicted by colored rectangles in Figure 4. Notice how the range enclosure by the interval extension of the log-likelihood function, our target shape, tightens with domain refinement, as per Theorem 5. The maximum likelihood estimate derived in [17] is shown as the red dot in Figure 4.
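The tightening in Figure 4 can be reproduced with a toy interval toolkit (a sketch without outward rounding, so not rigorous in the sense of C-XSC): the width of the log-likelihood enclosure over a sub-box of the star-tree space shrinks as the partition is refined. The branch-length window [0.03, 0.09] and all names are our own choices.

```python
from math import exp, log

# Tiny interval toolkit; an interval is a pair (lo, hi) with lo <= hi.
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))
def iexp(a): return (exp(a[0]), exp(a[1]))     # exp is increasing
def ilog(a): return (log(a[0]), log(a[1]))     # log is increasing, needs a > 0
def iscale(c, a): return imul((c, c), a)

C_XXX, V = 762, 895    # site-pattern counts for the primate data

def loglik_interval(t):
    """Enclosure of the star-tree CFN log-likelihood over an interval t
    of branch lengths, via the natural interval extension of its DAG."""
    a = iscale(0.5, isub((1.0, 1.0), iexp(iscale(-2.0, t))))
    b = isub((1.0, 1.0), a)
    l_xxx = iadd(imul(imul(b, b), b), imul(imul(a, a), a))
    l_rest = imul(a, b)                       # common value of xxy, yxx, xyx
    return iadd(iscale(C_XXX, ilog(l_xxx)),
                iscale(V - C_XXX, ilog(l_rest)))

def width_of_union(lo, hi, pieces):
    w = (hi - lo) / pieces
    encl = [loglik_interval((lo + i * w, lo + (i + 1) * w))
            for i in range(pieces)]
    return max(e[1] for e in encl) - min(e[0] for e in encl)

coarse = width_of_union(0.03, 0.09, 3)
fine = width_of_union(0.03, 0.09, 48)
```

Refining the partition from 3 to 48 pieces shrinks the enclosure width substantially, mirroring the colored rectangles converging onto the white log-likelihood curve in Figure 4.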

Moore rejection sampler (MRS)
The Moore rejection sampler (MRS) is an auto-validating rejection sampler (RS). MRS is said to be auto-validating because it automatically obtains a proposal g that is easy to simulate from, and an envelope ĝ that is guaranteed to satisfy the envelope condition (1). MRS can produce independent samples from any target shape f whose DAG expression f has a well-defined natural interval extension f over a compact domain T. For a given partition T of T, we can construct a partition-specific envelope function (14) that takes on, over each box t^(i) of the partition, the upper bound of the range enclosure f(t^(i)). The necessary envelope condition (1) is satisfied by ĝ_T(t) because of (13). We can obtain the corresponding proposal g_T(t) as a normalized simple function over T (15), where the normalizing constant involves the volume vol(t^(i)) of each box; the volume of an interval x is simply its width, i.e. vol x = wid x if x ∈ IR. Now we have all the ingredients to perform a more efficient, partition-specific, auto-validating von Neumann rejection sampling, or simply Moore rejection sampling.
Before making formal statements about our sampler, let us gain geometric insight into it from Example 3 and Figure 4. The upper boundaries of the rectangles of a given color, depicting a simple function in Figure 4, form a partition-specific envelope function (14) for the logarithm of the posterior shape, i.e. the log-likelihood function of Example 3, over the prior-specified support [10^-10, 10] ⊂ 0 T. In Figure 4 only a small interval about the maximum likelihood estimate (red dot) that contains the posterior samples (gray '+' markers) is depicted, since the likelihood falls sharply outside this range. Normalization of the envelope gives the corresponding proposal function (15). As the refinement of the domain proceeds through adaptive bisections (described later), the partition size increases. We show partitions of size 3, 7 and 19 over an interval containing the posterior samples. These samples were obtained from the partition with 19 intervals. Each of the corresponding envelope functions (upper boundaries of rectangles of a given color) can be used to draw independent and identically distributed samples from the target posterior density. Note how the acceptance probability (ratio of the area below the target shape to that below the envelope) increases with refinement.
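The whole construction, partition, per-box upper bounds, simple-function envelope and rejection step, can be sketched end to end on a toy one-dimensional target. The shape f(t) = t(2 - t) on [0, 2] and the uniform (rather than prioritized) partition are simplifications of ours, not the paper's method verbatim.

```python
import random

def imul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

def f_shape(t):
    return t * (2.0 - t)            # toy target shape on T = [0, 2]

def f_box(x):
    # natural interval extension of t*(2 - t) over the box x = (lo, hi)
    return imul(x, (2.0 - x[1], 2.0 - x[0]))

def moore_rejection_sample(n, pieces=32, rng=random.Random(7)):
    """Sketch of Moore rejection sampling with a uniform partition: the
    envelope is the simple function taking, on each box, the upper bound
    of the interval extension of the target shape."""
    w = 2.0 / pieces
    boxes = [(i * w, (i + 1) * w) for i in range(pieces)]
    ubs = [f_box(b)[1] for b in boxes]
    weights = [u * w for u in ubs]             # box mass under the envelope
    total = sum(weights)
    samples = []
    while len(samples) < n:
        r, acc = rng.random() * total, 0.0     # pick a box by its weight
        for b, wt, ub in zip(boxes, weights, ubs):
            acc += wt
            if r <= acc:
                t = rng.uniform(b[0], b[1])    # uniform within the box
                if rng.random() * ub <= f_shape(t):
                    samples.append(t)          # accept under the envelope
                break
    return samples

draws = moore_rejection_sample(20000)
mean = sum(draws) / len(draws)   # the normalized target has mean 1 by symmetry
```

Because each per-box upper bound comes from the interval extension, the envelope provably dominates the target on every box, which is exactly the guarantee that a point-valued envelope construction cannot give.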
Theorem 6 shows that Moore rejection sampler (MRS) indeed produces independent samples from the desired target and Theorem 7 describes the asymptotics of the acceptance probability as the partition of the domain is refined. Proofs for both Theorems are included in the Appendix for completeness.
Theorem 6 Suppose that the DAG expression f of the target shape f has a well-defined natural interval extension f over T ∈ IR^n. If the partition T is generated according to Algorithm 1, and if the envelope function ĝ_T(t) and the proposal density g_T(t) are given by (14) and (15), respectively, then each accepted sample T is distributed according to the target density f•: T → R.

Next we bound the partition-specific acceptance probability A(ĝ_T) from below. Theorem 7 considers a uniform partition of T into W intervals, each of width w. Thus, the acceptance probability approaches 1 at a rate that is no slower than linear in the mesh.

Prioritized partitions and pre-processed proposals
We studied the efficiency of uniform partitions for their mathematical tractability. In practice, we may further increase the acceptance probability for a given partition size by adaptively partitioning T. In our context, adaptive means the possible exploitation of any current information about the target. We can refine the current partition T_a and obtain a finer partition T_{a+1} with an additional box by bisecting a box t^(*) ∈ T_a along the midpoint of its side with the maximal width into a left box t^(*)_L and a right box t^(*)_R. There are several ways to choose a box t^(*) ∈ T_a for bisection. For instance, a relatively optimal choice is the box that maximizes the priority function vol(t^(i)) · wid(f(t^(i))) in (17). We employ a priority queue to conduct sequential refinements of T under this partitioning scheme. This approach avoids the exhaustive argmax computations to obtain the t^(*) for bisection at each refinement step. Thus, the current partition is represented by a queue of boxes that are prioritized in descending order by the priority function vol(t^(i)) · wid(f(t^(i))). Therefore, the box with the largest uncertainty in the enclosure of the integral over it gets bisected first. There are several ways to decide when to stop refining the partition. A simple strategy is to stop when the number of boxes reaches a number that is well within the memory constraints of the computer, say 10^6, or when the lower bound of the acceptance probability given by (16) is above a desired threshold, say 0.1.
Once we have a partition T of T, we can sample t from the proposal density g_T given by (15) in two steps: 1. sample a box t^(i) ∈ T according to the discrete distribution that assigns each box a probability proportional to its mass under the envelope; 2. sample t uniformly within the chosen box t^(i). Sampling from large discrete distributions (with a million states or more) can be made faster by pre-processing the probabilities and saving the result in some convenient look-up table. This basic idea [28] allows samples to be drawn rapidly. We employ an efficient pre-processing strategy known as the Alias method [4] that allows samples to be drawn in constant time even for very large discrete distributions, as implemented in the GNU Scientific Library [29]. We also minimize the number of evaluations of the target shape f by saving the box-specific computations of f̲(t^(i)) and f̄(t^(i)) and exploiting the so-called "squeeze principle", i.e. immediately accepting those points proposed in the box t^(i) that fall below f̲(t^(i)) when uniformly stretched toward f̄(t^(i)).
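The constant-time look-up idea can be sketched with Vose's variant of the alias method (our own illustrative implementation, not the GSL one): O(n) preprocessing builds two tables, after which each draw costs one uniform index plus one biased coin flip.

```python
import random

def build_alias(probs):
    """Vose's alias method: O(n) preprocessing of a discrete
    distribution for O(1) sampling."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l       # donate mass from l to s
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:                    # leftovers are 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias, rng):
    i = rng.randrange(len(prob))               # one uniform index...
    return i if rng.random() < prob[i] else alias[i]   # ...one coin flip

rng = random.Random(3)
weights = [0.1, 0.2, 0.3, 0.4]                 # hypothetical box masses
prob, alias = build_alias(weights)
counts = [0] * 4
for _ in range(100000):
    counts[alias_draw(prob, alias, rng)] += 1
freqs = [c / 100000 for c in counts]
```

In the sampler, the `weights` would be the normalized envelope masses of the boxes in the partition, so step 1 above costs O(1) per proposal regardless of the partition size.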
Thus, by means of priority queues and look-up tables we can efficiently manage our adaptive partitioning of the domain for envelope construction, and rapidly draw samples from the proposal distribution. Our sampler class MRSampler implemented in MRS 0.1.2, a C++ class library for statistical set processing, builds on C-XSC 2.0, a C++ class library for extended scientific computing using interval methods [30]. All computations were done on a 2.8 GHz Pentium IV machine with 1 GB RAM. Having given theoretical and practical considerations to our Moore rejection sampler, we are ready to draw samples from various targets over small tree spaces.

Results
The natural interval extension of the likelihood function over labeled boxes in the tree space allows us to employ the Moore rejection sampler to rigorously draw independent and identically distributed samples from the posterior distribution over a compact box in the tree space given by our prior distribution. We draw samples from the posterior distribution based on two mitochondrial DNA data sets and use these samples (i) to estimate the posterior probabilities of each of the three rooted topologies, (ii) to conduct a nonparametric test of rate homogeneity between protein-coding and tRNA-coding sites and (iii) to estimate the human-Neanderthal divergence time.
Human, chimpanzee and gorilla
We revisit the data from a segment of the mitochondrial DNA of human, chimpanzee and gorilla [27] that was analyzed under the CFN model of DNA mutation (Model 1) within a point estimation setting [17]. The sufficient statistics of pattern counts for this data set are (c_xxx, c_xxy, c_yxx, c_xyx) = (762, 54, 41, 38). Recall that, due to our flat priors, our posterior shape f is proportional to the likelihood over K T. Suppose t_1,..., t_n are n independent and identically distributed samples from the posterior density f• over K T. We can obtain asymptotically consistent estimates of the posterior probabilities of 1 T, 2 T and 3 T by Monte Carlo integration of the indicator function of each of the three topology labels: P̂_j^n := (1/n) Σ_{i=1}^n 1{t_i ∈ j T}. The 95% confidence interval for P_j, based on asymptotic normality of the Monte Carlo estimator, is P̂_j^n ± 1.96 (P̂_j^n (1 - P̂_j^n)/n)^{1/2}.
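The estimator and its confidence interval can be sketched as follows; the 50/30/20 topology frequencies below are fabricated purely to exercise the code and are not posterior results from the paper.

```python
from math import sqrt
import random

def topology_posterior(samples, labels=(1, 2, 3)):
    """Monte Carlo estimates of posterior topology probabilities, with
    asymptotic 95% confidence intervals, from i.i.d. posterior samples
    (each sample is a pair (topology_label, branch_lengths))."""
    n = len(samples)
    out = {}
    for j in labels:
        p = sum(1 for k, _ in samples if k == j) / n   # indicator average
        half = 1.96 * sqrt(p * (1.0 - p) / n)          # 95% half-width
        out[j] = (p, p - half, p + half)
    return out

# Hypothetical i.i.d. samples whose topology labels are drawn 50/30/20:
rng = random.Random(11)
samples = [(rng.choices((1, 2, 3), weights=(5, 3, 2))[0], None)
           for _ in range(10000)]
est = topology_posterior(samples)   # est[j] = (estimate, lower, upper)
```

This binomial-proportion interval is valid precisely because the rejection-sampler draws are independent, which is the point made in the following paragraph about MCMC samples.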

Figure 5
Posterior samples from the rooted tree space of human, chimpanzee and gorilla. Ten thousand independent and identically distributed posterior samples from the rooted and clocked binary tree space of human, chimpanzee and gorilla with topology label set {1 ∪ 2 ∪ 3} (see Figure 1(ii),(iii),(iv)) on the basis of mitochondrial data [27] summarized by (c xxx , c xxy , c yxx , c xyx ) = (762, 54, 41, 38) under the Cavender-Farris-Neyman model (blue ∪ red ∪ green dots, respectively) are depicted.
Obtaining confidence intervals from dependent MCMC samples requires nontrivial computations for the burn-in period and the thinning rate [1]; these are not readily available for phylogenetic MCMC samplers. Thus, the independent and identically distributed samples from our rejection sampler have the advantage of producing valid confidence intervals for our integrals of interest. The point estimate of the posterior mean E(t) := ∫_T t f•(t) dt for topology label 1 is (0.010863, 0.048994). This posterior mean is close to (0.010036, 0.048559), the mode of our target shape, i.e. the maximum likelihood estimate derived in [17].

Chimpanzee, gorilla and orangutan
We focus here on the 895 bp long homologous segment of mitochondrial DNA from chimpanzee, gorilla and orangutan [27]. This gives us a greater phylogenetic depth than the human, chimpanzee and gorilla sequences that were just analyzed. These sequences encode the genes for three transfer RNAs and parts of two proteins. Under the assumption of independence across sites, the sufficient statistics, under the JC model of DNA mutation (Model 2) over triplets, are given in Table 2 for all of the data as well as a partition of the data into tRNA-coding and protein-coding sites.
Ten thousand independent and identically distributed samples were drawn in 942 CPU seconds from the posterior distribution over JC triplets, i.e. unrooted trees with three edges corresponding to the three primates. Figure 6 shows these samples (blue dots) scattered about the verified global maximum likelihood estimate (MLE) of the triplet obtained in [5,26] and subsequently confirmed algebraically in [23]. We also drew ten thousand independent and identically distributed samples from the posterior based on the 198 tRNA-coding DNA sites (green dots in Figure 6) as well as from that based on the remaining 697 protein-coding sites (red dots in Figure 6). The former posterior samples, corresponding to the tRNA-coding sites, are more dispersed than the posterior samples based on the entire sequence, because the smaller number of tRNA-coding sites makes the posterior less concentrated.
Moreover, the cluster of samples from the posterior based on tRNA-coding sites appears to be farther away from the cluster based on protein-coding sites. Such a separation of the two sets of posterior samples is a signal of mutational rate heterogeneity between the two types of sites. Hotelling's trace statistic, being a natural measure of distance between two clusters of points, can be used as the test statistic in a permutation test. On the basis of 100 random permutations of the sites, we obtained the null distribution of Hotelling's trace statistic and were able to reject the null hypothesis of rate homogeneity between the posterior samples based on the tRNA-coding sites and those based on the protein-coding sites at the 10% significance level (P-value = 0.06). Any biological interpretation of this test must be done cautiously since the JC model employed here forbids any transition:transversion bias that is reportedly relevant for this data [27].

Table 2
The minimal sufficient statistics under the JC model for all 895 sites, 198 tRNA-coding sites and 697 protein-coding sites based on the homologous segment of mitochondrial DNA from chimpanzee, gorilla and orangutan [27].

Figure 6
Posterior samples from the unrooted tree space of chimpanzee, gorilla and orangutan. Ten thousand Moore rejection samples from the posterior distribution over the three branch lengths of the unrooted tree space of chimpanzee, gorilla and orangutan based on their homologous mitochondrial DNA sequence of length 895 base pairs (blue dots), the tRNA-coding sequence with 198 base pairs (green dots) and the protein-coding sequence with 697 base pairs (red dots). The verified maximum likelihood estimate is the large black dot within the blue dots.
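The permutation test used above can be sketched in self-contained Python. For two groups, Hotelling's trace statistic reduces to the two-sample T² statistic; this toy two-dimensional implementation (ours, not the authors' code) permutes the group labels and reports the fraction of permuted statistics at least as large as the observed one:

```python
import random

def mean2(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def pooled_cov2(a, b):
    """Pooled 2x2 covariance of two groups of 2-D points."""
    ma, mb = mean2(a), mean2(b)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((a, ma), (b, mb)):
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy; s[1][1] += dy * dy
    s[1][0] = s[0][1]
    dof = len(a) + len(b) - 2
    return [[v / dof for v in row] for row in s]

def hotelling_t2(a, b):
    """Two-sample Hotelling's T^2 (the trace statistic in the two-group
    case): (n_a n_b / (n_a + n_b)) * d' S^-1 d for mean difference d."""
    ma, mb = mean2(a), mean2(b)
    d = (ma[0] - mb[0], ma[1] - mb[1])
    s = pooled_cov2(a, b)
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    quad = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
            + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    na, nb = len(a), len(b)
    return na * nb / (na + nb) * quad

def permutation_p_value(a, b, n_perm=100, seed=1):
    """Fraction of random relabellings whose statistic reaches the observed."""
    rng = random.Random(seed)
    observed = hotelling_t2(a, b)
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if hotelling_t2(pooled[:len(a)], pooled[len(a):]) >= observed:
            hits += 1
    return hits / n_perm
```

With 100 permutations, as in the text, the smallest resolvable P-value is 0.01, so the reported P-value of 0.06 corresponds to six permuted statistics exceeding the observed one.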

Neanderthal, human and chimpanzee
We used the 15 site patterns and their counts in Table 3 to infer the human-neanderthal divergence time. These counts are obtained from a multiple sequence alignment of the data made available in [33]. Our alignment procedure is more robust at the ends of each locus than that of [33]: we do an ordered concatenation of all the loci for each species prior to a multiple sequence alignment, and the alignment was further edited by hand to obtain the locus-specific alignments. Under the assumption of independence across sites, the sufficient statistics, under any Markov model of DNA mutation, are the set of distinct site patterns and their respective counts; they are given in Table 3 for this data set.
We transformed the three posterior distributions over the triplet spaces: (i) unrooted JC triplets that were rooted using the mid-point rooting method, (ii) … [33]. However, our confidence intervals are from perfectly independent samples from the posterior and account for the finite number of neanderthal sites that were successfully sequenced, unlike those obtained on the basis of a bootstrap of site patterns [34] or heuristic MCMC [1]. Unfortunately, our human-neanderthal divergence estimates are overestimates as they ignore the non-negligible time to coalescence of the human and neanderthal homologs within the human-neanderthal ancestral population. Improvements to our estimates based on the other 310 human and 4 chimpanzee homologs reported in [33] may be possible with more sophisticated models of populations within a phylogeny and need further investigation.

Table 3
Site patterns and their counts from a multiple sequence alignment of the whole mitochondrial genome shotgun sequence (gi|115069275) of a neanderthal fossil Vi-80, from Vindija cave, Croatia [33], and its homologous sequence in a human (gi|13273200) and a chimpanzee (gi|1262390). The first column, (01.aaa.685)ᵀ, expresses that there are 685 sites with nucleotide a in all three species, ..., and the fifteenth column, (15.agg.1)ᵀ, expresses that there is 1 site with nucleotide g in human and chimpanzee and nucleotide a in neanderthal.

Figure 7
Posterior samples from the unrooted tree space of neanderthal, human and chimpanzee. Ten thousand Moore rejection samples each from the posterior distribution over the three branch lengths of the unrooted tree space of neanderthal, human and chimpanzee under the JC model (blue dots) and the HKY model (red dots).
Chimpanzee, gorilla, orangutan and gibbon
We were able to draw samples from JC quartets on the basis of the mitochondrial DNA of chimpanzee, gorilla, orangutan and gibbon [27]. The data for all four primates can be summarized by 61 distinct site patterns [5]. Now the problem is more challenging because there are three distinct tree topologies in the unrooted, bifurcating, quartet tree space, and each of these topologies has five edges. Thus, the domain of quartets is a piecewise Euclidean space that arises from a fusion of 3 distinct five-dimensional orthants. Since the post-order traversals (Algorithm 2) specifying the likelihood function are topology-specific, we extended the likelihood over a compact box of quartets in a topology-specific manner. Drawing 10000 samples from the quartet target took about a day and a half of computational time, due to the low acceptance probability of the naive likelihood function based on the 61 distinct site patterns. All the samples had the topology which groups chimpanzee and gorilla together, i.e. ((chimpanzee, gorilla), (orangutan, gibbon)). The samples (results not shown) were again scattered about the verified global MLE of the quartet [5]. This quartet likelihood function has an elaborate DAG with numerous operations. When the data were compressed into sufficient statistics, the efficiency increased tremendously (e.g. for triplets the efficiency increases by a factor of 3.7). This is because the number of leaf nodes in the target DAG, which encode the distinct site patterns of the observed data into the likelihood function, is reduced from 29 to 5 for the triplet target and from 61 to 15 for the quartet target [24].
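The compression of the data into sufficient statistics mentioned above can be sketched in a few lines of Python (a toy illustration, not the MRS implementation): the alignment columns are tallied into distinct site patterns with counts, so the per-site likelihood is evaluated once per distinct pattern and raised to its count rather than once per site.

```python
from collections import Counter

def compress_alignment(columns):
    """Compress an alignment (one string per site column) into its
    sufficient statistics: distinct site patterns and their counts.
    The likelihood then factorises as L(t) = prod_q l_{d_q}(t)^{c_q}
    over distinct patterns d_q with counts c_q."""
    return Counter(columns)

# toy 3-taxon alignment with 6 sites but only 3 distinct patterns
patterns = compress_alignment(["aaa", "aaa", "aag", "aaa", "aag", "gga"])
```

Here 6 likelihood evaluations per point collapse to 3, mirroring the reduction from 29 to 5 leaf nodes for the triplet target reported above.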

Discussion
Interval methods provide for rigorous sampling from posterior target densities over small phylogenetic tree spaces. When one substitutes conventional floating-point arithmetic for real arithmetic in a computer and uses discrete lattices to construct the envelope and/or proposal, it is generally not possible to guarantee the envelope property, and thereby ensure that samples are drawn from the desired target density, except in special cases [35]. The Moore rejection sampler is constructed through interval methods that enclose the target shape over the entire real continuum in any box of the domain with machine-representable bounds, in a manner that rigorously accounts for all sources of numerical errors (see [36] for a discussion of error control); this naturally guarantees that the Moore rejection samples are independent draws from the desired target. Moreover, the target is allowed to be multivariate and/or non-log-concave, with possibly 'pathological' behavior, as long as it has a well-defined interval extension. The efficiency of MRS is, however, not immune to the curse of dimensionality and to target DAG complexity. When the DAG expression for the likelihood gets large, its natural interval extension can badly over-enclose the true range, which in turn forces the adaptive refinement of the domain to be extremely fine for efficient envelope construction. Thus, a naive application of interval methods to targets with large DAGs can be terribly inefficient. In such cases, sampler efficiency rather than rigor is the issue: one may fail to obtain samples in a reasonable time, rather than (as may happen with non-rigorous methods) obtain samples from some unknown and undesired target.
There are several ways in which efficiency can be improved in such cases. First, the particular structure of the target DAG should be exploited to avoid any redundant computations; for example, sufficient statistics should be used to dissolve symmetries in the DAG. Second, we can further improve efficiency by limiting ourselves to differentiable targets. Tighter enclosures of the range f(t^(i)) than those given by the natural interval extension can come from enclosures of Taylor expansions of f around the midpoint mid(t^(i)) through interval-extended automatic differentiation (e.g. [36]), which can then yield tighter estimates of the integral enclosures [37]. Third, we can employ pre-processing to improve efficiency. For example, we can pre-enclose the range of a possibly rescaled f over a partition of the domain and then obtain the enclosure of f over some arbitrary t through a combination of hash access and hull operations on the pre-enclosures. Such a pre-enclosing technique reduces not only the overestimation of target shapes with large DAGs but also the computational cost incurred while performing interval operations with processors that are optimized for floating-point arithmetic. In the next version of the MRS library we plan to extend interval arithmetic beyond IR^n to a class of multi-dimensional data structures related to regular subpavings (e.g. [38]) to improve the efficiency of our sampler. Fourth, various contractors can be used to improve the range enclosure in polynomial time (e.g. [38]); the most promising contractors employ interval constraint propagation. Finally, efficiency can also be gained (up to 30%), at the possible cost of rigor, by foregoing directed rounding during envelope construction.
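The second point, tighter enclosures via derivative information, can be illustrated with a first-order mean-value form in a toy Python interval arithmetic (without directed rounding, so this only illustrates the idea and is not rigorous): for f(t) = t² − t on [0, 1], whose true range is [−0.25, 0], the natural extension yields [−1, 1] while the mean-value form f(m) + F′(X)(X − m) yields the tighter [−0.75, 0.25].

```python
class I:
    """Minimal interval type: [lo, hi] with +, -, *."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (hi if hi is not None else lo)
    def __add__(self, o):
        return I(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return I(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return I(min(ps), max(ps))
    def width(self):
        return self.hi - self.lo

def f(t):
    """Toy target t^2 - t; works on floats and on intervals alike."""
    return t * t - t

def df(t):
    """Interval extension of the derivative 2t - 1."""
    return t * I(2.0) - I(1.0)

def natural_extension(x):
    return f(x)

def mean_value_form(x):
    """Enclosure f(m) + F'(X) * (X - m) around the midpoint m."""
    m = 0.5 * (x.lo + x.hi)
    return I(f(m)) + df(x) * (x - I(m))
```

On narrow boxes the mean-value form typically dominates, which is why refining the partition pays off doubly when derivative enclosures are available.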
Poor sampler efficiency makes it currently impractical to sample from trees with five leaves and 15 topologies. However, one could use triplets and quartets drawn from the posterior distribution to stochastically amalgamate and produce estimates of larger trees via fast amalgamating algorithms (e.g. [39,40]), which may then be used to combat the slow mixing of MCMC methods [2] by providing a good set of initial trees. A collection of large trees obtained through such stochastic amalgamations would account for the effect of finite sample sizes (sequence length) as well as the sensitivity of the amalgamating algorithm itself to variation in the input vector of small tree estimates. It would be interesting to investigate whether such stochastic amalgamations can help improve the mixing of MCMC algorithms on large tree spaces, although auto-validating rejection sampling via the natural interval extension of the likelihood function may not be practical for trees with more than four leaves.

Conclusion
None of the currently available punctual samplers can rigorously produce independent and identically distributed samples from the posterior distribution over phylogenetic tree spaces, even for 3 or 4 taxa. We describe a new approach for rigorously drawing samples from a target posterior distribution over small phylogenetic tree spaces using the theory of interval analysis. Our Moore rejection sampler (MRS), being an auto-validating von Neumann rejection sampler (RS), can produce independent samples from any target shape f whose DAG expression has a well-defined natural interval extension over a compact domain T. MRS is said to be auto-validating because it automatically obtains a proposal g that is easy to simulate from, and an envelope ĝ that is guaranteed to satisfy the envelope condition (1). MRS can circumvent the problems associated with (i) heuristic convergence diagnostics in MCMC samplers and (ii) pseudo-envelopes constructed via non-rigorous punctual methods in rejection samplers. When the target DAG is large, MRS becomes inefficient and may fail to produce the desired samples in a reasonable time, rather than (as may happen with non-rigorous methods) produce samples from some unknown and undesired target. MRS solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over small rooted and unrooted phylogenetic tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.
Likelihoods for the CFN model on unrooted triplets
Recall that the probability that Y mutates to R, or vice versa, in time t is a(t) := (1 − e^{−2t})/2 and that the stationary distribution is π(R) = π(Y) = 1/2. Next we apply Algorithm 2 to compute the likelihood l_{d_q}(t) at a given site q, which could be one of l_0(t), l_1(t), ..., l_7(t):

l_{d_q}(t) = Σ_{s ∈ {R,Y}} π(s) ∏_{i=1}^{3} P_{s, d_q(i)}(t_i),

where d_q(i) is the state at leaf i for site q and P_{s,x}(t_i) equals a(t_i) if s ≠ x and 1 − a(t_i) otherwise.
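The CFN site likelihood on the unrooted triplet can be sketched directly in Python (an illustration of the peeling sum over the central node's state, not the interval-extended Algorithm 2):

```python
import math

def a(t):
    """CFN probability that the state flips (Y <-> R) along a branch of length t."""
    return (1.0 - math.exp(-2.0 * t)) / 2.0

def cfn_site_likelihood(pattern, branch_lengths):
    """Likelihood of one site pattern ('R'/'Y' at the three leaves) on the
    unrooted triplet: sum over the state s of the central node of
    pi(s) * prod_i P_{s, x_i}(t_i), with pi(R) = pi(Y) = 1/2."""
    total = 0.0
    for s in ("R", "Y"):
        p = 0.5  # stationary distribution pi(s)
        for x, t in zip(pattern, branch_lengths):
            p *= a(t) if x != s else 1.0 - a(t)
        total += p
    return total
```

A useful sanity check is that the likelihoods of all 8 site patterns sum to one for any fixed branch lengths, and that complementary patterns (e.g. RRR and YYY) have equal likelihood by the R ↔ Y symmetry of the model.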