Auto-validating von Neumann rejection sampling from small phylogenetic tree spaces
- Raazesh Sainudiin^{1, 2} and
- Thomas York^{3, 4}
DOI: 10.1186/1748-7188-4-1
© Sainudiin and York; licensee BioMed Central Ltd. 2009
Received: 05 June 2007
Accepted: 07 January 2009
Published: 07 January 2009
Abstract
Background
In phylogenetic inference one is interested in obtaining samples from the posterior distribution over the tree space on the basis of some observed DNA sequence data. One of the simplest sampling methods is the rejection sampler due to von Neumann. Here we introduce an auto-validating version of the rejection sampler, via interval analysis, to rigorously draw samples from posterior distributions over small phylogenetic tree spaces.
Results
The posterior samples from the auto-validating sampler are used to rigorously (i) estimate posterior probabilities for different rooted topologies based on mitochondrial DNA from human, chimpanzee and gorilla, (ii) conduct a non-parametric test of rate variation between protein-coding and tRNA-coding sites from three primates and (iii) obtain a posterior estimate of the human-neanderthal divergence time.
Conclusion
This solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over rooted and unrooted small tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.
Background
Obtaining samples from a real-valued target density f^{•} (t) is a basic problem in statistical estimation. The target f^{•} (t): $\mathbb{T}$ ↦ ℝ maps n-dimensional real points in ℝ^{ n }to real numbers in ℝ, i.e. t ∈ $\mathbb{T}$ ⊂ ℝ^{ n }. In Bayesian phylogenetic estimation, we want to draw independent and identically distributed samples from a target posterior density on the space of phylogenetic trees. The standard point-valued or punctual Monte Carlo methods via conventional floating-point arithmetic are typically non-rigorous as they do not account for all sources of numerical errors and are limited to evaluating the target at finitely many points. The standard approaches to sampling from the posterior density, especially over phylogenetic trees, rely on Markov chain Monte Carlo (MCMC) methods. Despite their asymptotic validity, it is nontrivial to guarantee that an MCMC algorithm has converged to stationarity [1], and thus MCMC convergence diagnostics on phylogenetic tree spaces are heuristic [2].
A more direct sampler that is capable of producing independent and identically distributed samples from the target density f^{•} (t):= f(t)/(N_{ f }), by only evaluating the target shape f(t) without knowing the normalizing constant ${N}_{f}:={\displaystyle {\int}_{\mathbb{T}}f(t)dt}$, is the von Neumann rejection sampler [3]. However, the limiting step in the rejection sampler is the construction of an envelope function $\widehat{g}$(t) that is not only greater than the target shape f(t):= N_{ f }f^{•} (t) at every t ∈ $\mathbb{T}$, but also easy to normalize and draw samples from. Moreover, a practical and efficient envelope function has to be as close to the target shape as possible from above. When an envelope function is constructed using point-valued methods, except for simple classes of targets, one cannot guarantee that the envelope function dominates the target shape globally.
None of the available samplers can rigorously produce independent and identically distributed samples from the posterior distribution over phylogenetic tree spaces, even for 3 or 4 taxa. We describe a new approach for rigorously drawing samples from a target posterior distribution over small phylogenetic tree spaces using the theory of interval analysis. This method can circumvent the problems associated with (i) heuristic convergence diagnostics in MCMC samplers and (ii) pseudo-envelopes constructed via non-rigorous point-valued methods in rejection samplers.
Informally, our method partitions the domain into boxes and uses interval analysis to rigorously bound the target shape in each box; then we use as envelope the simple function which takes on in each box the upper bound obtained for that box. It is easy to draw samples from the density corresponding to this step function envelope. More formally, the method employs an interval extension of the target posterior shape f(t): $\mathbb{T}$ ↦ ℝ to produce rigorous enclosures of the range of f over each interval vector or box in an adaptive partition $\mathfrak{T}:=\{{t}^{(1)},{t}^{(2)},\mathrm{...},{t}^{(|\mathfrak{T}|)}\}$ of the tree space $\mathbb{T}$ = ∪_{ i }t^{(i)}. This partition is adaptively constructed by a priority queue. The interval extended target shape maps boxes in $\mathbb{T}$ to intervals in ℝ. This image interval provides an upper bound for the global maximum and a lower bound for the global minimum of f over each element of the partition of $\mathbb{T}$. We use this information to construct an envelope as a simple function over the partition $\mathfrak{T}$. Using the Alias method [4] we efficiently propose samples from this normalized step-function envelope for von Neumann rejection sampling.
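As a toy illustration of this pipeline, the following minimal Python sketch (our own, not the MRS implementation) turns upper bounds over a partition of [0, 1] into a step-function envelope for a one-dimensional target shape. Here we cheat and compute exact upper bounds for a unimodal target with a known mode; these stand in for the rigorous bounds that interval extensions deliver automatically.

```python
import math, random

def range_upper_bound(f, lo, hi, mode):
    """Upper bound of a unimodal f on [lo, hi]; stands in for the upper bound
    of an interval enclosure of the range (which MRS obtains rigorously)."""
    return f(mode) if lo <= mode <= hi else max(f(lo), f(hi))

def moore_rs_1d(f, mode, n_boxes=64, n_samples=1000, rng=random.Random(1)):
    # 1. Partition [0, 1] uniformly and bound f from above on each box.
    edges = [i / n_boxes for i in range(n_boxes + 1)]
    ubs = [range_upper_bound(f, edges[i], edges[i + 1], mode) for i in range(n_boxes)]
    weights = [u / n_boxes for u in ubs]        # box mass = volume * upper bound
    total = sum(weights)
    samples = []
    while len(samples) < n_samples:
        # 2. Propose a box with probability proportional to its mass ...
        r, i = rng.random() * total, 0
        while i < n_boxes - 1 and r > weights[i]:
            r -= weights[i]
            i += 1
        # ... then a point uniformly inside it, and apply the von Neumann test.
        v = edges[i] + rng.random() / n_boxes
        if ubs[i] * rng.random() <= f(v):
            samples.append(v)
    return samples

# Toy unimodal target shape on [0, 1] with mode at 0.3 (an assumption for the demo).
samples = moore_rs_1d(lambda t: math.exp(-50 * (t - 0.3) ** 2), mode=0.3)
```

Refining the partition tightens the envelope and raises the acceptance probability, exactly as in the adaptive scheme described later.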
We call our method auto-validating because we employ interval methods to rigorously construct the envelope for a large class of target densities. The method was described in a more rudimentary form in [5]. Unlike many conventional samplers, each sample produced by our method is equivalent to a computer-assisted proof that it is drawn from the desired target, up to the pseudo-randomness of the underlying, deterministic, pseudo-random number generator. MRS 0.1.2, a C++ class library for statistical set processing is available from http://www.math.canterbury.ac.nz/~r.sainudiin/codes/mrs under the terms of the GNU General Public License.
The rest of the paper is organized as follows. In the Methods Section, we introduce (i) the von Neumann rejection sampler (RS), (ii) the phylogenetic estimation problem, (iii) interval analysis and (iv) an interval extension of the rejection sampler called the Moore rejection sampler (MRS) in honor of Ramon E. Moore, one of the influential founders of interval analysis [6]. In the Results Section, we employ MRS to rigorously draw samples from the posterior density over small tree spaces. Using one of the earliest primate mitochondrial DNA data sets, we use the posterior samples to estimate the posterior probability of each rooted tree topology and to conduct a non-parametric test of rate variation between protein-coding and tRNA-coding sites. Using one of the latest data sets, we obtain a rigorous posterior estimate of the human-neanderthal divergence time. We can also draw samples from the space of unrooted triplet and quartet trees. We conclude after a discussion of the method.
Methods
In the following sections, we first introduce the rejection sampler (RS) due to von Neumann [3]. Secondly, we describe the basic phylogenetic inference problem (e.g. [7–9]). Then, we introduce the basic principles of interval methods (e.g. [6, 10–13]). Finally, we construct interval extensions of RS to rigorously draw independent and identically distributed samples from small phylogenetic tree spaces. We leave the formal proofs to the Appendix for completeness.
Rejection sampler (RS)
- (iv) a normalizing constant ${N}_{\widehat{g}}:={\displaystyle {\int}_{\mathbb{T}}\widehat{g}(t)dt}$, (v) a proposal density $g(t):={({N}_{\widehat{g}})}^{-1}\widehat{g}(t)$ over $\mathbb{T}$ from which independent samples can be drawn and finally (vi) f(t) and $\widehat{g}$(t) must be computable for any t ∈ $\mathbb{T}$.
input : (i) f; (ii) samplers for V ~ g and M ~ 1_{[0,1]}; (iii) $\widehat{g}$; (iv) integer MaxTrials;
output : (i) possibly one sample t from T ~ f^{•} and (ii) Trials
initialize: Trials ← 0; Success ← false; t ← ∅;
repeat //propose at most MaxTrials times until acceptance
v ← sample(g); //draw a sample v from RV V with density g
u ← $\widehat{g}$(v) · sample(1_{[0,1]}); //draw a sample u from RV U with density ${1}_{[0,\widehat{g}(v)]}$
if u ≤ f(v) then //accept the proposed v and flag Success
t ← v; Success ← true
end
Trials ← Trials +1; //track the number of proposal trials so far
until Trials ≥ MaxTrials or Success = true;
return t and Trials
Algorithm 1: von Neumann RS
and the number of samples from g needed to obtain one sample from f^{•} is geometrically distributed with mean 1/A($\widehat{g}$) (e.g. [15]).
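Algorithm 1 can be sketched in Python as follows; the names `rejection_sample`, `g_sampler` and the toy target are our own, not from the paper or the MRS library.

```python
import math, random

def rejection_sample(f, g_sampler, g_hat, max_trials, rng=random.Random(0)):
    """Algorithm 1: propose v ~ g and accept if u ~ Uniform[0, g_hat(v)] lands
    at or below the target shape f(v)."""
    trials, t = 0, None
    while trials < max_trials:
        v = g_sampler(rng)            # draw v from the proposal density g
        u = g_hat(v) * rng.random()   # u ~ Uniform[0, g_hat(v)]
        trials += 1
        if u <= f(v):                 # accept the proposed v and stop
            t = v
            break
    return t, trials

# Toy target shape f(t) = exp(-t) on [0, 10] with constant envelope g_hat = 1
# and uniform proposal g (our own choices; acceptance probability is about 0.1).
sample, trials = rejection_sample(
    f=lambda t: math.exp(-t),
    g_sampler=lambda rng: 10.0 * rng.random(),
    g_hat=lambda t: 1.0,
    max_trials=10_000)
```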
Phylogenetic estimation
In this section we briefly review phylogenetic estimation. A more detailed account can be found in [7–9]. Inferring the ancestral relationship among a set of extant species based on their DNA sequences is a basic problem in phylogenetic estimation. One can obtain the likelihood of a particular phylogenetic tree that relates the extant species of interest at its leaves by superimposing a continuous time Markov chain model of DNA substitution upon that tree. The length of an edge (branch length) connecting two nodes (species) in the tree represents the amount of evolutionary time (divergence) between the two species. The internal nodes represent ancestral species. During the likelihood computation, one needs to integrate over all possible states at the unobserved ancestral nodes.
Likelihood of a tree
Consider a continuous time Markov chain model of character mutation over a finite alphabet $\mathfrak{U}$. Such a model prescribes ${P}_{{a}_{i},{a}_{j}}(t)$, the probability of mutation from a character a_{ i } ∈ $\mathfrak{U}$ to another character a_{ j } ∈ $\mathfrak{U}$ in time t. Using such a transition probability we may compute ℓ_{ q }(^{ k }t), the log-likelihood of the data d at site q ∈ {1,..., v} or the q-th column of d, via the post-order traversal over the labeled tree with branch lengths ^{ k }t := (^{ k }t_{1}, ^{ k }t_{2},...,${{\displaystyle {}^{k}t}}_{{b}_{k}}$). This amounts to the sum-product Algorithm 2 [16] that associates with each node h ∈ {1,..., s_{ k }} of ^{ k }t subtending ℏ many descendants a partial likelihood vector ${l}_{h}:=({l}_{h}^{({a}_{1})},{l}_{h}^{({a}_{2})},\mathrm{...},{l}_{h}^{({a}_{\left|\mathfrak{U}\right|})})\in {\mathbb{R}}^{\left|\mathfrak{U}\right|}$, and specifies the length of the branch leading to its ancestor as ^{ k }t_{ h }.
input : (i) a labeled tree with branch lengths ^{ k }t := (^{ k }t_{1}, ^{ k }t_{2},...,${{\displaystyle {}^{k}t}}_{{b}_{k}}$), (ii) transition probability ${P}_{{a}_{i},{a}_{j}}(t)$ for any a_{ i }, a_{ j }∈ $\mathfrak{U}$, (iii) stationary distribution π(a_{ i }) over each character a_{ i }∈ $\mathfrak{U}$, (iv) site pattern or data d_{•, q}at site q
output : ${l}_{{d}_{\xb7,q}}$(^{ k }t), the likelihood at site q with pattern d_{•, q}
initialize: For a leaf node h with observed character a_{ i }= d_{h, q}at site q, set ${l}_{h}^{({a}_{i})}$ = 1 and ${l}_{h}^{({a}_{j})}$ = 0 for all j ≠ i. For any internal node h, set l_{ h }:= (1, 1,...,1).
Algorithm 2: Likelihood by post-order traversal
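The post-order traversal of Algorithm 2 can be sketched as follows. This is our own minimal Python rendering (the tree encoding and helper names are assumptions); the usage check instantiates the two-state transition probability of the CFN model introduced below.

```python
import math
from itertools import product

def site_likelihood(tree, P, pi, pattern):
    """Algorithm 2 sketch: likelihood of one site pattern by post-order traversal.

    A leaf is a (taxon_name, branch_length) pair; an internal node is a
    (children_list, branch_length) pair; the root's branch length is unused.
    P(a, b, t) is the transition probability and pi the stationary distribution.
    """
    states = list(pi)

    def partial(node):                  # returns (partial likelihood vector, branch)
        children, t = node
        if isinstance(children, str):   # leaf: indicator of the observed character
            return {a: 1.0 if a == pattern[children] else 0.0 for a in states}, t
        l = {a: 1.0 for a in states}    # internal node starts at (1, 1, ..., 1)
        for vec, bl in map(partial, children):
            for a in states:
                l[a] *= sum(P(a, b, bl) * vec[b] for b in states)
        return l, t

    l_root, _ = partial(tree)
    return sum(pi[a] * l_root[a] for a in states)

# Usage under the two-state CFN chain: the 8 site-pattern probabilities of the
# rooted triplet ((H, C), G) with branch lengths (t0, t1) must sum to one.
P = lambda a, b, t: 0.5 * (1 + math.exp(-2 * t)) if a == b else 0.5 * (1 - math.exp(-2 * t))
pi = {"R": 0.5, "Y": 0.5}
t0, t1 = 0.3, 0.1
tree = ([([("H", t1), ("C", t1)], t0), ("G", t0 + t1)], 0.0)
total = sum(site_likelihood(tree, P, pi, dict(zip("HCG", pat)))
            for pat in product("RY", repeat=3))
```

Summing the site likelihood over all 2^3 leaf patterns returns 1, as it must for a probability model over site patterns.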
The simplest probability models for character mutation are continuous time Markov chains with finite state space $\mathfrak{U}$. We introduce three such models employed in this study next. We only derive the likelihood functions for the simplest model with just two characters, as it is thought to well-represent the core problems in phylogenetic estimation (see e.g. [17]).
Posterior density of a tree
We assume a uniform prior density over a large box or a union of large boxes in a given tree space ${}^{\mathfrak{K}}\mathbb{T}$. Typically, the sides of the box, giving the range of branch lengths, are extremely long, say [0, 10] or [10^{-10}, 10]. The branch lengths are measured in units of expected number of DNA substitutions per site, and therefore the support of our uniform prior density over ${}^{\mathfrak{K}}\mathbb{T}$ contains the biologically relevant branch lengths. If ${}^{\mathfrak{K}}\mathbb{T}$ is a union of distinct topologies then we let our prior be an equally weighted finite mixture of uniform densities over large boxes in each topology. Naturally, other prior densities are possible, especially in the presence of additional information. We choose flat priors so that the target posterior shape $f({}^{k}t)={f}^{•}({}^{k}t){\displaystyle {\int}_{{}^{\mathfrak{K}}\mathbb{T}}{l}_{d}({}^{k}t)p({}^{k}t)d({}^{k}t)}$ can be conveniently interpreted as the likelihood function in the absence of prior information beyond a compact support specification.
Likelihood of a triplet under Cavender-Farris-Neyman (CFN) model
We now describe the simplest model for the evolution of binary sequences under a symmetric transition matrix over all branches of a tree. This model has been used by authors in various fields including molecular biology, information theory, operations research and statistical physics; for references see [7, 18]. This model is referred to as the Cavender-Farris-Neyman (CFN) model in molecular biology, although in other fields it has been referred to as 'the on-off machine', 'symmetric binary channel' and the 'symmetric two-state Poisson model'. Although the relatively tractable CFN model itself is not popular in applied molecular evolution, the lessons learned under the CFN model often extend to more realistic models of DNA mutation (e.g. [17]). Thus, our first stop is the CFN model.
Thus, the probability that Y mutates to R, or vice versa, in time t is a(t) := (1 - e^{-2t})/2. The stationary distribution is uniform on $\mathfrak{U}$, i.e. π(R) = π(Y) = 1/2.
where l_{ i }(^{ k }t) is the likelihood of the i-th site pattern as in (4) and c_{ i } is the count of sites with pattern i. In fact, l_{ i }(^{ k }t) = P(i|^{ k }t) is the probability of observing site pattern i given topology label k and branch lengths t, and similarly l_{ d }(^{ k }t) = P(d|^{ k }t).
Therefore, the multiple sequence alignment data d from three taxa evolving under Model 1 can be summarized by the minimal sufficient site pattern counts
(c_{xxx}, c_{xxy}, c_{yxx}, c_{xyx}):= (c_{0} + c_{1}, c_{2} + c_{3}, c_{4} + c_{5}, c_{6} + c_{7}),
Note that the probability of our sample space with eight patterns given in (4) is ${\sum}_{i=0}^{7}{l}_{i}({}^{4}t)=1$. Our likelihoods are half of those in [17], which are prescribed over a sample space of only four classes of patterns: {0, 1}, {2, 3}, {4, 5} and {6, 7}. This is because we distinguish the sample space of the data from that of the minimal sufficient statistics. We compute the rooted topology-specific likelihood functions, i.e. l(^{ k }t) for k ∈ {0, 1, 2, 3} (Figure 1), by substituting the appropriate constraints on branch lengths in ${}^{4}\mathbb{T}={\mathbb{R}}_{+}^{3}$, the space of unrooted triplets.
Likelihood of a triplet under Jukes-Cantor (JC) model
The r-state symmetric model introduced in [19] is specified by the r × r rate matrix with equal off-diagonal entries over an alphabet set $\mathfrak{U}$ of size r. The stationary distribution under this model is the uniform distribution on $\mathfrak{U}$. Thus, CFN model is the 2-state symmetric model over $\mathfrak{U}$ = {Y, R}. The Jukes-Cantor (JC) model [20] is the 4-state symmetric model over $\mathfrak{U}$ = {A, C, G, T}. This is perhaps the simplest model on four characters.
The stationary distribution is uniform, i.e. π(A) = π(C) = π(G) = π(T) = 1/4.
Notice that the probabilities of the 64 possible site patterns sum to 1 for any ^{4}t ∈ (0, ∞)^{3} :
4l_{xxx}(^{4}t) + 24l_{xyz}(^{4}t) + 12l_{xxy}(^{4}t) + 12l_{yxx}(^{4}t) + 12l_{xyx}(^{4}t) = 1.
Once again, the likelihood of a rooted tree or the star tree can be obtained from that of the unrooted tree by substituting the appropriate constraints on branch lengths in the above equations or by directly applying Algorithm 2 with the appropriate input tree with its topology and branch lengths.
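The unit-sum identity over the 64 JC site patterns can be checked numerically with a short sketch; the helper names `p_jc` and `l_unrooted_triplet` are our own.

```python
import math

def p_jc(a, b, t):
    """Jukes-Cantor transition probability between nucleotides a and b in time t."""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if a == b else 0.25 - 0.25 * e

def l_unrooted_triplet(x1, x2, x3, t1, t2, t3):
    """Likelihood of one site pattern on an unrooted triplet: sum over the
    internal node's state a of pi(a) * P(a,x1,t1) * P(a,x2,t2) * P(a,x3,t3)."""
    return sum(0.25 * p_jc(a, x1, t1) * p_jc(a, x2, t2) * p_jc(a, x3, t3)
               for a in "ACGT")

t = (0.02, 0.19, 0.27)   # arbitrary branch lengths in (0, inf)^3
# By the symmetry of JC, one representative per pattern class suffices:
total = (4 * l_unrooted_triplet("A", "A", "A", *t)     # xxx
         + 24 * l_unrooted_triplet("A", "C", "G", *t)  # xyz
         + 12 * l_unrooted_triplet("A", "A", "C", *t)  # xxy
         + 12 * l_unrooted_triplet("C", "A", "A", *t)  # yxx
         + 12 * l_unrooted_triplet("A", "C", "A", *t)) # xyx
```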
The transition probabilities are known analytically for this model (see e.g. [[8], p. 203]). We can use these expressions when evaluating the likelihood of a rooted or unrooted tree, along with the five mutational parameters, via Algorithm 2. For simplicity, in this study we set the stationary distribution parameters to the empirical nucleotide frequencies and κ to 2.0.
Interval analysis
Let $\mathbb{I}\mathbb{R}$ denote the set of closed and bounded real intervals. Let any element of $\mathbb{I}\mathbb{R}$ be denoted by x := [$\underset{\xaf}{x}$, $\overline{x}$], where $\underset{\xaf}{x}$ ≤ $\overline{x}$ and $\underset{\xaf}{x}$, $\overline{x}$ ∈ ℝ. Next we define arithmetic over $\mathbb{I}\mathbb{R}$.
Definition 1 (Interval Operation) If the binary operator ⋆ is one of +, -, ×,/, then we define an arithmetic on operands in $\mathbb{I}\mathbb{R}$ by
x⋆ y:= {x ⋆ y : x ∈ x, y ∈ y},
with the exception that x/y is undefined if 0 ∈ y.
Interval arithmetic satisfies a weaker rule than distributivity called sub-distributivity:
x(y+ z) ⊆ xy+ xz.
An extremely useful property of interval arithmetic that is a direct consequence of Definition 1 is summarized by the following theorem.
Theorem 2 (Fundamental property of interval arithmetic) If x⊆ x' and y⊆ y' and ⋆ ∈ {+, -, ×,/}, then
x⋆ y⊆ x' ⋆ y',
where we require that 0 ∉ y' when ⋆ = /.
Note that an immediate implication of Theorem 2 is that when x= [x, x] and y= [y, y] are thin intervals, i.e. $\underset{\xaf}{x}$= $\overline{x}$ = x and $\underset{\xaf}{y}$ = $\overline{y}$ = y are real numbers, then x' ⋆ y' will contain the result of the real arithmetic operation x ⋆ y.
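Definition 1 and the sub-distributivity rule can be illustrated with a minimal interval class; this is our own sketch, and unlike C-XSC it ignores outward rounding of the floating-point endpoints.

```python
class Interval:
    """Closed real interval [lo, hi]: a bare-bones sketch of Definition 1.
    No outward rounding of endpoints, unlike a rigorous library such as C-XSC."""

    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi   # thin interval if hi omitted

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(ps), max(ps))

    def __truediv__(self, o):
        if o.lo <= 0 <= o.hi:                     # Definition 1's exception
            raise ZeroDivisionError("0 in divisor interval")
        return self * Interval(1 / o.hi, 1 / o.lo)

# Sub-distributivity: x(y + z) is contained in, but may be tighter than, xy + xz.
x, y, z = Interval(1, 2), Interval(-1, 1), Interval(2, 3)
lhs = x * (y + z)        # [1, 8]
rhs = x * y + x * z      # [0, 8]
```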
and the Hausdorff distance between the boxes x and y in the metric given by the maximum norm is then
dist_{∞} (x, y) = ∥dist(x, y)∥_{∞}.
We can make $\mathbb{I}{\mathbb{R}}^{n}$a metric space by equipping it with the Hausdorff distance.
Our main motivation for the extension to intervals is to enclose the range:
range(f; S):= {f(x): x ∈ S},
of a real-valued function f : ℝ^{ n }↦ ℝ over a set S ⊆ ℝ^{ n }. Except for trivial cases, few tools are available to obtain the range.
where each ⊙f_{ i }is computed according to (8).
We look at some DAGs for a few functions to concretely illustrate these ideas.
The interval-valued map f(x) is known as the natural interval extension of the expression f(x) for the real function f(x). This extension is well-defined if we do not run into division by zero.
Thus, f(x) = f'(x) ≠ f"(x) for any x∈ $\mathbb{I}\mathbb{R}$, albeit f(x) = f'(x) = f"(x) for any x ∈ ℝ.
Definition 4 (Standard functions) Piece-wise monotone functions, including exponential, logarithm, rational power, absolute value, and trigonometric functions, constitute the set of standard functions
$\mathfrak{S}$ = {a^{ x }, log_{ b }(x), x^{p/q}, |x|, sin(x), cos(x), tan(x), sinh(x),...arcsin(x),...}.
Definition 5 (Elementary functions) A real-valued function that can be expressed as a finite combination of constants, variables, arithmetic operations, standard functions and compositions is called an elementary function. The set of all such elementary functions is referred to as $\mathfrak{E}$.
It would be convenient if guaranteed enclosures of the range of an elementary f can be obtained by the natural interval extension f of one of its expressions f. The following Theorem 4 is the work-horse of interval Monte Carlo algorithms.
The fundamental implication of the above theorem is that it allows us to enclose the range of any elementary function and thereby produces an upper bound for the global maximum and a lower bound for the global minimum over any compact subset of the domain upon which the function is well-defined. This is the work-horse for rigorously constructing an envelope for rejection sampling.
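The following sketch (our own, with ad hoc tuple-based helpers) computes natural interval extensions of two expressions of the same real function f(x) = x - x² over [0, 1]. Both enclose the true range [0, 1/4], but the enclosure depends on the expression, which is why the choice of DAG matters for envelope tightness.

```python
def i_sub(x, y):
    """Interval subtraction on (lo, hi) tuples."""
    return (x[0] - y[1], x[1] - y[0])

def i_mul(x, y):
    """Interval multiplication on (lo, hi) tuples."""
    ps = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(ps), max(ps))

box = (0.0, 1.0)
# Two expressions of the same real function f(x) = x - x^2 = x(1 - x):
ext1 = i_sub(box, i_mul(box, box))           # natural extension of x - x*x
ext2 = i_mul(box, i_sub((1.0, 1.0), box))    # natural extension of x*(1 - x)
true_range = (0.0, 0.25)                     # the actual range of f over [0, 1]
```

Both extensions contain the true range, as guaranteed, but the extension of x - x·x gives (-1, 1) while that of x(1 - x) gives the tighter (0, 1); syntactically different expressions of the same function can yield very different enclosures (see the Discussion on large likelihood DAGs).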
Likelihood of a box of trees
The likelihood function (2) over trees with a DAG expression that is directly or indirectly obtained via Algorithm 2 has a natural interval extension over boxes of trees [5, 26]. This interval extension of the likelihood function allows us to produce rigorous enclosures of the likelihood over a box in the tree space. Next we give a concrete example of the natural interval extension of the likelihood function over an interval of trees ^{0}t in the star-tree space ${}^{0}\mathbb{T}$. The same ideas extend to any labeled box of trees ^{ k }t when the number of branch lengths is greater than one, and more generally to a finite union of labeled boxes with possibly distinct labels.
Thus, f maps an interval ^{0}t in the tree space to an interval in $\mathbb{I}\mathbb{R}$ that encloses the target shape, or likelihood, of every tree in ^{0}t.
Moore rejection sampler (MRS)
where the normalizing constant ${N}_{{\widehat{g}}^{\mathfrak{T}}}:={\displaystyle {\sum}_{i=1}^{\left|\mathfrak{T}\right|}(\text{vol}{t}^{(i)}\cdot \overline{f}({t}^{(i)}))}$ and $\text{vol}t:={\displaystyle {\prod}_{i=1}^{n}\text{wid}{t}_{i}}$ is the volume of the box t. The volume of an interval x is simply its width, i.e. vol x= wid x, if x ∈ $\mathbb{I}\mathbb{R}$. Now, we have all the ingredients to perform a more efficient, partition-specific, auto-validating von Neumann rejection sampling or simply Moore rejection sampling.
Before making formal statements about our sampler, let us gain geometric insight into it from Example 3 and Figure 4. The upper boundaries of the rectangles of a given color in Figure 4 depict a simple function: a partition-specific envelope (14) for the logarithm of the posterior shape, or the log-likelihood function, of Example 3 over the prior-specified support [10^{-10}, 10] ⊂ ${}^{0}\mathbb{T}$. In Figure 4 only a small interval about the maximum likelihood estimate (red dot) that contains the posterior samples (gray '+' markers) is depicted, since the likelihood falls sharply outside this range. Normalization of the envelope gives the corresponding proposal function (15). As the refinement of the domain proceeds through adaptive bisections (described later), the partition size increases. We show partitions of size 3, 7 and 19 over an interval containing the posterior samples. These samples were obtained from the partition with 19 intervals. Each of the corresponding envelope functions (upper boundaries of rectangles of a given color) can be used to draw independent and identically distributed samples from the target posterior density. Note how the acceptance probability (the ratio of the area below the target shape to that below the envelope) increases with refinement.
Theorem 6 shows that Moore rejection sampler (MRS) indeed produces independent samples from the desired target and Theorem 7 describes the asymptotics of the acceptance probability as the partition of the domain is refined. Proofs for both Theorems are included in the Appendix for completeness.
Theorem 6 Suppose that the DAG expression f of the target shape f has a well-defined natural interval extension f over $\mathbb{T}$. If T is generated according to Algorithm 1, and if the envelope function ${\widehat{g}}^{\mathfrak{T}}$(t) and the proposal density ${g}^{\mathfrak{T}}$(t) are given by (14) and (15), respectively, then T is distributed according to the target density f^{•} : $\mathbb{T}$ ↦ ℝ.
If f ∈ ${\mathfrak{E}}_{\mathfrak{L}}$, the Lipschitz class of elementary functions (Definition 6), then we might expect the width of the enclosure of N_{ f } to be proportional to the mesh $w:={\mathrm{max}}_{i\in \{1,\mathrm{...},\left|\mathfrak{T}\right|\}}\text{wid}({t}^{(i)})$ of the partition $\mathfrak{T}$.
Theorem 7 shows that if f ∈ ${\mathfrak{E}}_{\mathfrak{L}}$ and ${\mathfrak{U}}_{W}$ is a uniform partition of $\mathbb{T}$ into W intervals, then the acceptance probability $A({\mathfrak{U}}_{W})=1-\mathcal{O}(1/W)$. Thus, the acceptance probability approaches 1 at a rate that is no slower than linearly with the mesh.
Prioritized partitions and pre-processed proposals
We employ a priority queue to conduct sequential refinements of $\mathbb{T}$ under this partitioning scheme. This approach avoids the exhaustive argmax computations to obtain the t^{(*)} for bisection at each refinement step. Thus, the current partition is represented by a queue of boxes that are prioritized in descending order by the priority function vol (t^{(i)}) · wid (f(t^{(i)})) in (17). Therefore, the box with the largest uncertainty in the enclosure of the integral over it gets bisected first. There are several ways to decide when to stop refining the partition. A simple strategy is to stop when the number of boxes reaches a number that is well within the memory constraints of the computer, say 10^{6}, or when the lower bound of the acceptance probability given by (16) is above a desired threshold, say 0.1.
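The prioritized bisection can be sketched with a binary heap; this is our own Python rendering, in which exact bounds for a unimodal toy target (with known mode) stand in for the interval enclosures that MRS computes rigorously.

```python
import heapq, math

def bounds(f, lo, hi, mode=0.3):
    """Exact range bounds of a unimodal f on [lo, hi] (mode assumed known);
    a stand-in for the enclosure [f_lower, f_upper] from an interval extension."""
    lo_b = min(f(lo), f(hi))
    hi_b = f(mode) if lo <= mode <= hi else max(f(lo), f(hi))
    return lo_b, hi_b

def refine(f, domain, n_boxes):
    """Repeatedly bisect the box with the largest vol * wid(enclosure) priority."""
    lo, hi = domain
    flo, fhi = bounds(f, lo, hi)
    queue = [(-(hi - lo) * (fhi - flo), lo, hi, fhi)]   # max-heap via negation
    while len(queue) < n_boxes:
        _, lo, hi, _ = heapq.heappop(queue)             # largest uncertainty first
        mid = (lo + hi) / 2
        for a, b in ((lo, mid), (mid, hi)):
            flo, fhi = bounds(f, a, b)
            heapq.heappush(queue, (-(b - a) * (fhi - flo), a, b, fhi))
    return sorted((lo, hi, ub) for _, lo, hi, ub in queue)

f = lambda t: math.exp(-50 * (t - 0.3) ** 2)   # toy unimodal target shape
part = refine(f, (0.0, 1.0), 32)
# The envelope integral sum(vol * upper bound) dominates N_f and shrinks as we refine.
envelope_mass = sum((hi - lo) * ub for lo, hi, ub in part)
```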
- 1. Sample a box t^{(i)} ∈ $\mathfrak{T}$ according to the discrete distribution:$\begin{array}{cc}{\ddot{g}}^{\mathfrak{T}}({t}^{(i)})=\frac{\text{vol}{t}^{(i)}\overline{f}({t}^{(i)})}{{\displaystyle {\sum}_{j=1}^{\left|\mathfrak{T}\right|}\left(\text{vol}{t}^{(j)}\overline{f}({t}^{(j)})\right)}},& {t}^{(i)}\in \mathfrak{T},\end{array}$(18)
- 2. Sample a point t uniformly at random from the box t^{(i)}.
Sampling from large discrete distributions (with a million states or more) can be made faster by pre-processing the probabilities and saving the result in some convenient look-up table. This basic idea [28] allows samples to be drawn rapidly. We employ an efficient pre-processing strategy known as the Alias Method [4] that allows samples to be drawn in constant time even for very large discrete distributions, as implemented in the GNU Scientific Library [29]. We also minimize the number of evaluations of the target shape f by saving the box-specific computations of $\underset{\xaf}{f}$(t^{(i)}) and $\overline{f}$(t^{(i)}) and exploiting the so-called "squeeze principle", i.e. immediately accepting those proposed points in the box t^{(i)} that fall below $\underset{\xaf}{f}$(t^{(i)}) when uniformly stretched toward $\overline{f}$(t^{(i)}).
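A sketch of the alias pre-processing (Vose's variant of Walker's method; the function names are our own, and GSL's implementation differs in detail):

```python
import random

def build_alias(weights):
    """Vose's alias method: O(n) pre-processing, O(1) per draw afterwards."""
    n, total = len(weights), sum(weights)
    prob = [w * n / total for w in weights]     # scaled so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                            # l donates s's missing mass
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    for i in large + small:                     # numerical leftovers
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias, rng):
    i = rng.randrange(len(prob))                # pick a column uniformly ...
    return i if rng.random() < prob[i] else alias[i]   # ... then flip its coin

rng = random.Random(42)
prob, alias = build_alias([1, 2, 3, 4])         # hypothetical box masses
counts = [0] * 4
for _ in range(100_000):
    counts[alias_draw(prob, alias, rng)] += 1   # empirical ~ (0.1, 0.2, 0.3, 0.4)
```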
Thus, by means of priority queues and look-up tables we can efficiently manage our adaptive partitioning of the domain for envelope construction, and rapidly draw samples from the proposal distribution. Our sampler class MRSampler implemented in MRS 0.1.2, a C++ class library for statistical set processing, builds on C-XSC 2.0, a C++ class library for extended scientific computing using interval methods [30]. All computations were done on a 2.8 GHz Pentium IV machine with 1 GB RAM. Having given theoretical and practical considerations to our Moore rejection sampler, we are ready to draw samples from various targets over small tree spaces.
Results
The natural interval extension of the likelihood function over labeled boxes in the tree space allows us to employ the Moore rejection sampler to rigorously draw independent and identically distributed samples from the posterior distribution over a compact box in the tree space given by our prior distribution. We draw samples from the posterior distribution based on two mitochondrial DNA data sets and use these samples (i) to estimate the posterior probabilities of each of the three rooted topologies, (ii) to conduct a nonparametric test of rate homogeneity between protein-coding and tRNA-coding sites and (iii) to estimate the human-neanderthal divergence time.
Human, chimpanzee and gorilla
We revisit the data from a segment of the mitochondrial DNA of human, chimpanzee and gorilla [27] that was analyzed under the CFN model of DNA mutation (Model 1) within a point estimation setting [17]. The sufficient statistics of pattern counts for this data with total number of sites v = 895 under the CFN model over the space of all three-leaved phylogenetic trees are:
(c_{xxx}, c_{xxy}, c_{yxx}, c_{xyx}) = (762, 54, 41, 38)
Rooted triplets as constrained unrooted triplets
The rooted and clocked trees ^{ i }t := (^{ i }t_{0}, ^{ i }t_{1}), i ∈ {1, 2, 3}, map to unrooted trees ^{4}t := (^{4}t_{1}, ^{4}t_{2}, ^{4}t_{3}) as follows:

| Labeled Tree | Newick Representation of ^{ i }t | ^{4}t_{1} | ^{4}t_{2} | ^{4}t_{3} |
|---|---|---|---|---|
| ^{1}t := (^{1}t_{0}, ^{1}t_{1}) | ((H:^{1}t_{1}, C:^{1}t_{1}):^{1}t_{0}, G:^{1}t_{0} + ^{1}t_{1}) | ^{1}t_{1} | ^{1}t_{1} | ^{1}t_{1} + ^{1}t_{0} + ^{1}t_{0} |
| ^{2}t := (^{2}t_{0}, ^{2}t_{1}) | ((C:^{2}t_{1}, G:^{2}t_{1}):^{2}t_{0}, H:^{2}t_{0} + ^{2}t_{1}) | ^{2}t_{1} + ^{2}t_{0} + ^{2}t_{0} | ^{2}t_{1} | ^{2}t_{1} |
| ^{3}t := (^{3}t_{0}, ^{3}t_{1}) | ((H:^{3}t_{1}, G:^{3}t_{1}):^{3}t_{0}, C:^{3}t_{0} + ^{3}t_{1}) | ^{3}t_{1} | ^{3}t_{1} + ^{3}t_{0} + ^{3}t_{0} | ^{3}t_{1} |
Obtaining confidence intervals from dependent MCMC samples requires nontrivial computations for the burn-in period and the thinning rate [1]. These are not readily available for phylogenetic MCMC samplers. Thus, the independent and identically distributed samples from our rejection sampler have the advantage of producing valid confidence intervals for our integrals of interest. The point estimate of the posterior mean $E({}^{1}T):={\displaystyle {\int}_{{}^{1}\mathbb{T}}{}^{1}tf({}^{1}t)d({}^{1}t)}$ for topology label 1 is (0.010863, 0.048994). This posterior mean is close to (0.010036, 0.048559), the mode of our target shape or the maximum likelihood estimate derived in [17].
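With iid posterior draws, a standard central-limit confidence interval for a posterior mean is immediate. A sketch with hypothetical stand-in draws (the Gaussian stand-in and all names are our own, not the paper's posterior):

```python
import math, random

def mean_ci(samples, z=1.96):
    """Approximate 95% CLT confidence interval for a mean from iid samples."""
    n = len(samples)
    m = sum(samples) / n
    var = sum((s - m) ** 2 for s in samples) / (n - 1)   # unbiased sample variance
    half = z * math.sqrt(var / n)
    return m - half, m + half

rng = random.Random(7)
# Hypothetical stand-in for iid posterior draws of a branch length.
draws = [rng.gauss(0.0489, 0.01) for _ in range(4000)]
lo, hi = mean_ci(draws)
```

No burn-in or thinning corrections are needed, precisely because the draws are independent.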
Chimpanzee, gorilla and orangutan
Minimal sufficient statistics for the chimpanzee, gorilla and orangutan data
Site type | v | c _{xxx} | c _{xxy} | c _{yxx} | c _{xyx} | c _{xyz} |
---|---|---|---|---|---|---|
All | 895 | 700 | 100 | 46 | 42 | 7 |
tRNA-coding | 198 | 173 | 13 | 7 | 3 | 2 |
protein-coding | 697 | 527 | 87 | 39 | 39 | 5 |
Neanderthal, human and chimpanzee
Minimal sufficient statistics for the neanderthal, human and chimpanzee data
| Site pattern | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| neanderthal | a | t | c | g | a | t | c | g | t | t | g | a | c | a | a |
| human | a | t | c | g | a | t | c | g | t | c | a | g | t | a | g |
| chimpanzee | a | t | c | g | g | c | t | a | a | t | a | a | c | t | g |
| site pattern count | 685 | 605 | 603 | 450 | 15 | 14 | 14 | 10 | 2 | 2 | 1 | 1 | 1 | 1 | 1 |
We transformed the three posterior distributions over the triplet spaces, (i) unrooted JC triplets that were rooted using the mid-point rooting method, (ii) rooted JC triplets and (iii) rooted HKY triplets, respectively, into three posterior distributions over the human-neanderthal divergence time relative to the human-chimp divergence time. The corresponding posterior quantiles ({5%, 50%, 95%}) for the human-neanderthal divergence time in units of human-chimp divergence time are {0.0643, 0.125, 0.214}, {0.0694, 0.142, 0.263} and {0.0682, 0.143, 0.268}, respectively. We constrained the neanderthal lineage to be a fraction of the human lineage in branch length in order to estimate the age of the neanderthal fossil from the rooted HKY triplets. The posterior quantiles of the fossil date in units of human-chimp divergence are {0.00685, 0.0666, 0.195}. The estimate of 38,310 years based on carbon-14 accelerator mass spectrometry [33] is within our [5%, 95%] posterior quantile interval for the fossil date, provided the human-chimp divergence estimate ranges in [196103, 5.6 × 10^{6}] years. Thus, reasonable bounds for the human-chimp divergence are 4 × 10^{6} and 5.6 × 10^{6} years, under the assumption that 4 × 10^{6} is an acceptable lower-bound. Based on these two calendar year estimates, we transformed the posterior quantiles of the human-neanderthal divergence times from the rooted HKY triplets into {272680, 571124, 1073375} and {381752, 799574, 1502724} years, respectively. Our [5%, 95%] posterior intervals contain the interval estimate of [461000, 825000] years reported in [33]. However, our confidence intervals are from perfectly independent samples from the posterior and account for the finite number of neanderthal sites that were successfully sequenced, unlike those obtained on the basis of a bootstrap of site patterns [34] or heuristic MCMC [1].
Unfortunately, our human-neanderthal divergence estimates are overestimates, as they ignore the non-negligible time to coalescence of the human and neanderthal homologs within the human-neanderthal ancestral population. Improvements to our estimates based on the other 310 human and 4 chimpanzee homologs reported in [33] may be possible with more sophisticated models of populations within a phylogeny; this needs further investigation.
Chimpanzee, gorilla, orangutan and gibbon
We were able to draw samples from JC quartets on the basis of the mitochondrial DNA of chimpanzee, gorilla, orangutan and gibbon [27]. The data for all four primates can be summarized by 61 distinct site patterns [5]. The problem is now more challenging because there are three distinct tree topologies in the unrooted, bifurcating quartet tree space, and each of these topologies has five edges. Thus, the domain of quartets is a piecewise Euclidean space arising from a fusion of 3 distinct five-dimensional orthants. Since the post-order traversals (Algorithm 2) specifying the likelihood function are topology-specific, we extended the likelihood over a compact box of quartets in a topology-specific manner. Drawing 10000 samples from the quartet target took about a day and a half, owing to the low acceptance probability of the naive likelihood function based on the 61 distinct site patterns. All the samples had the topology grouping chimpanzee and gorilla together, i.e. ((chimpanzee, gorilla), (orangutan, gibbon)). The samples (results not shown) were again scattered about the verified global MLE of the quartet [5]. This quartet likelihood function has an elaborate DAG with numerous operations. When the data were compressed into sufficient statistics, the efficiency increased tremendously (e.g. for triplets the efficiency increases by a factor of 3.7). This is because the number of leaf nodes in the target DAG, which encode the distinct site patterns of the observed data into the likelihood function, is reduced from 29 to 5 for the triplet target and from 61 to 15 for the quartet target [24].
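The first step of this compression, collapsing an alignment into counts of distinct site patterns, can be sketched as follows. The toy alignment is hypothetical, not the primate data, and further symmetry-based reduction to sufficient statistics is model-specific and not shown.

```python
from collections import Counter

def site_pattern_counts(alignment):
    """Collapse a multiple alignment (equal-length sequences, one per taxon)
    into counts of distinct site patterns, i.e. distinct columns."""
    return Counter(zip(*alignment))

# toy triplet alignment with 4 sites; its columns give 4 distinct patterns
counts = site_pattern_counts(["AAGT", "AAGA", "ACGT"])
```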
Discussion
Interval methods provide for rigorous sampling from posterior target densities over small phylogenetic tree spaces. When one substitutes conventional floating-point arithmetic for real arithmetic in a computer and uses discrete lattices to construct the envelope and/or proposal, it is generally not possible to guarantee the envelope property, and thereby ensure that samples are drawn from the desired target density, except in special cases [35]. Thus, the construction of the Moore rejection sampler through interval methods, which enclose the target shape over the entire real continuum in any box of the domain with machine-representable bounds in a manner that rigorously accounts for all sources of numerical errors (see [36] for a discussion of error control), naturally guarantees that the Moore rejection samples are independent draws from the desired target. Moreover, the target may be multivariate and/or non-log-concave, with possibly 'pathological' behavior, as long as it has a well-defined interval extension. The efficiency of MRS is not immune to the curse of dimensionality and target DAG complexity. When the DAG expression for the likelihood gets large, its natural interval extension can have terrible over-enclosures of the true range, which in turn forces the adaptive refinement of the domain to be extremely fine for efficient envelope construction. Thus, a naive application of interval methods to targets with large DAGs can be terribly inefficient. In such cases, sampler efficiency rather than rigor is the issue: one may fail to obtain samples in a reasonable time, rather than (as may happen with non-rigorous methods) produce samples from some unknown and undesired target.
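The over-enclosure phenomenon can be illustrated with a minimal interval-arithmetic sketch on a toy target shape f(x) = x(1 - x). This sketch uses plain floating point without directed rounding, so unlike the C-XSC intervals used by MRS it is not itself rigorous; it only shows how a natural interval extension over-encloses the true range when a variable occurs more than once in the expression.

```python
class Interval:
    """Minimal interval arithmetic (no directed rounding; illustrative only)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Natural interval extension of the toy target f(x) = x(1 - x) on [0, 1]:
x = Interval(0.0, 1.0)
fx = x * (Interval(1.0, 1.0) - x)
# fx encloses the true range [0, 0.25] but over-encloses it as [0, 1],
# because the two occurrences of x are treated as if they were independent.
```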
There are several ways in which efficiency can be improved in such cases. First, the particular structure of the target DAG should be exploited to avoid any redundant computations; for example, sufficient statistics should be used to dissolve symmetries in the DAG. Second, we can further improve efficiency by limiting ourselves to differentiable targets in C^{ n }. Tighter enclosures of the range of f over a box t^{(i)} than those given by the natural interval extension can come from enclosures of Taylor expansions of f around the midpoint mid(t^{(i)}) through interval-extended automatic differentiation (e.g. [36]), which can then yield tighter estimates of the integral enclosures [37]. Third, we can employ pre-processing to improve efficiency. For example, we can pre-enclose the range of a possibly rescaled f over a partition of the domain and then obtain the enclosure of f over some arbitrary t through a combination of hash access and hull operations on the pre-enclosures. Such a pre-enclosing technique reduces not only the overestimation of target shapes with large DAGs but also the computational cost incurred while performing interval operations with processors that are optimized for floating-point arithmetic. In the next version of the MRS library we plan to extend interval arithmetic beyond $\mathbb{I}{\mathbb{R}}^{n}$ to a class of multi-dimensional data structures related to regular sub-pavings (e.g. [38]) to improve the efficiency of our sampler. Fourth, various contractors can be used to improve the range enclosure in polynomial time (e.g. [38]); the most promising contractors employ interval constraint propagation. Finally, efficiency can also be gained (up to 30%), at the possible cost of rigor, by foregoing directed rounding during envelope construction.
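The benefit of refining the domain partition, which underlies both the adaptive envelope construction and the pre-enclosure technique above, can be sketched on the same toy target f(x) = x(1 - x). The hull of the piecewise enclosures tightens toward the true range [0, 0.25] as the partition is refined (again without directed rounding, so illustrative only):

```python
def mul(a, b):
    # interval product of a = (lo, hi) and b = (lo, hi) via corner products
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def natural_ext(box):
    # natural interval extension of the toy target f(x) = x(1 - x)
    lo, hi = box
    return mul(box, (1.0 - hi, 1.0 - lo))

def hull_over_partition(n):
    # hull of the enclosures over n equal subintervals of [0, 1]
    pieces = [natural_ext((i / n, (i + 1) / n)) for i in range(n)]
    return (min(p[0] for p in pieces), max(p[1] for p in pieces))

# the hull tightens toward the true range [0, 0.25] as the partition refines:
# n = 1 -> (0.0, 1.0), n = 2 -> (0.0, 0.5), n = 4 -> (0.0, 0.375), ...
```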
Poor sampler efficiency currently makes it impractical to sample from trees with five leaves and 15 topologies. However, one could use triplets and quartets drawn from the posterior distribution to stochastically amalgamate and produce estimates of larger trees via fast amalgamating algorithms (e.g. [39, 40]), which may then be used to combat the slow mixing of MCMC methods [2] by providing a good set of initial trees. A collection of large trees obtained through such stochastic amalgamations would account for the effect of finite sample sizes (sequence length) as well as the sensitivity of the amalgamating algorithm itself to variation in the input vector of small tree estimates. It would be interesting to investigate whether such stochastic amalgamations can help improve the mixing of MCMC algorithms on large tree spaces, although auto-validating rejection sampling via the natural interval extension of the likelihood function may not be practical for trees with more than four leaves.
Conclusion
None of the currently available punctual samplers can rigorously produce independent and identically distributed samples from the posterior distribution over phylogenetic tree spaces, even for 3 or 4 taxa. We describe a new approach for rigorously drawing samples from a target posterior distribution over small phylogenetic tree spaces using the theory of interval analysis. Our Moore rejection sampler (MRS), an auto-validating von Neumann rejection sampler (RS), can produce independent samples from any target shape whose DAG expression has a well-defined natural interval extension over a compact domain $\mathbb{T}$. MRS is said to be auto-validating because it automatically obtains a proposal g that is easy to simulate from, and an envelope $\widehat{g}$ that is guaranteed to satisfy the envelope condition (1). MRS thereby circumvents the problems associated with (i) heuristic convergence diagnostics in MCMC samplers and (ii) pseudo-envelopes constructed via non-rigorous punctual methods in rejection samplers. When the target DAG is large, MRS becomes inefficient and may fail to produce the desired samples in a reasonable time, rather than (as may happen with non-rigorous methods) produce samples from some unknown and undesired target. MRS solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over small rooted and unrooted phylogenetic tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.
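The essence of the sampler, building a piecewise-constant envelope from interval enclosures, proposing from the normalized envelope and accepting with probability f(t)/envelope(t), can be sketched in a few lines on a hypothetical one-dimensional target shape. The actual MRS works with C-XSC intervals, directed rounding and an adaptively refined partition; this floating-point sketch on a uniform partition is illustrative only.

```python
import random

def natural_ext_upper(lo, hi):
    # upper bound of the natural interval extension of the toy
    # target shape f(x) = x(1 - x) on the box [lo, hi]
    return max(lo * (1 - hi), lo * (1 - lo), hi * (1 - hi), hi * (1 - lo))

def moore_rejection_sample(f, n_pieces=64, rng=random.random):
    # envelope: piecewise-constant upper bound from interval enclosures
    boxes = [(i / n_pieces, (i + 1) / n_pieces) for i in range(n_pieces)]
    uppers = [natural_ext_upper(*b) for b in boxes]
    masses = [u / n_pieces for u in uppers]        # envelope mass per piece
    total = sum(masses)
    while True:
        r, acc = rng() * total, 0.0                # pick a piece by its mass
        for (lo, hi), u, m in zip(boxes, uppers, masses):
            acc += m
            if r <= acc:
                t = lo + rng() * (hi - lo)         # uniform within the piece
                if rng() * u <= f(t):              # accept with prob f(t)/u
                    return t
                break

samples = [moore_rejection_sample(lambda t: t * (1 - t)) for _ in range(1000)]
```

The accepted points are exact draws from the normalized target (here a Beta(2, 2) density), because the piecewise-constant envelope dominates f everywhere on [0, 1].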
Appendix
Likelihoods for the CFN model on unrooted triplets
Proof of Theorem 1 (cf. [37])
Any real arithmetic operation x ⋆ y, where ⋆ ∈ {+, -, ×, /} and x, y ∈ ℝ, is a continuous function x ⋆ y := ⋆(x, y): ℝ ⊗ ℝ ↦ ℝ, except when ⋆ = / and y = 0. Since x and y are simply connected compact intervals, so is their Cartesian product x ⊗ y. On such a domain x ⊗ y, the continuity of ⋆(x, y) (except when ⋆ = / and 0 ∈ y) ensures the attainment of a minimum, a maximum and all intermediate values. Therefore, with the exception of the case when ⋆ = / and 0 ∈ y, the range x ⋆ y has the interval form [min (x ⋆ y), max (x ⋆ y)], where the min and max are taken over all pairs (x, y) ∈ x ⊗ y. Fortunately, we do not have to evaluate x ⋆ y over every (x, y) ∈ x ⊗ y to find the global min and global max of ⋆(x, y) over x ⊗ y: the monotonicity of ⋆(x, y*) in x ∈ x for any fixed y* ∈ y implies that the extremal values are attained on the boundary of x ⊗ y, i.e. at the four corner points formed by the endpoints {$\underline{x}$, $\overline{x}$} × {$\underline{y}$, $\overline{y}$}. Thus the theorem can be verified by examining the finitely many boundary cases.
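The corner-point claim can be checked numerically for ⋆ = ×: the product range computed from the four endpoint products contains every product of interior points. This randomized check is of course no substitute for the proof; it merely illustrates the boundary argument.

```python
import itertools, random

def mul_from_endpoints(x, y):
    # interval product computed only from the four corner products,
    # as the boundary argument for the case * = x prescribes
    p = [a * b for a, b in itertools.product(x, y)]
    return min(p), max(p)

random.seed(1)
for _ in range(200):
    x = tuple(sorted(random.uniform(-5, 5) for _ in range(2)))
    y = tuple(sorted(random.uniform(-5, 5) for _ in range(2)))
    lo, hi = mul_from_endpoints(x, y)
    for _ in range(50):
        a, b = random.uniform(*x), random.uniform(*y)
        # every interior product lies within the endpoint-based range
        assert lo <= a * b <= hi
```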
Proof of Theorem 2
$\mathbf{x} \star \mathbf{y} = \{x \star y : x \in \mathbf{x}, y \in \mathbf{y}\} \subseteq \{x \star y : x \in \mathbf{x}', y \in \mathbf{y}'\} = \mathbf{x}' \star \mathbf{y}'.$
Proof of Theorem 3 (cf. [37])
Since f(y) is well-defined, we will not run into division by zero, and therefore (i) follows from the repeated invocation of Theorem 2. We can prove (ii) by contradiction. Suppose range(f; x) ⊈ f(x). Then there exists x ∈ x, such that f(x) ∈ range(f; x) but f(x) ∉ f(x). This in turn implies that f(x) = f([x, x]) ∉ f(x), which contradicts (i). Therefore, our supposition cannot be true and we have proved (ii) range(f; x) ⊆ f(x).
Proof of Theorem 4 (cf. [37])
The range enclosure is a consequence of inclusion isotony by an argument identical to that given in the proof for Theorem 3.
Proof of Theorem 5 (cf. [37])
The proof is given by an induction on the DAG for f similar to the proof of Theorem 4 (See [37]).
Proof of Theorem 6
Thus, we have shown that the random vector (V, U) initially produced by Algorithm 1 is uniformly distributed on $\mathcal{B}({\widehat{g}}^{\mathfrak{T}})$. The above relationship also makes geometric sense, since the volume of $\mathcal{B}({\widehat{g}}^{\mathfrak{T}})$ is exactly ${N}_{{\widehat{g}}^{\mathfrak{T}}}$.
Thus, we have shown that the accepted random vector T has the desired density f^{•}.
Proof of Theorem 7
Therefore the lower bound for the acceptance probability A(${\mathfrak{U}}_{W}$) of MRS approaches 1 no slower than linearly with the refinement of $\mathbb{T}$ by ${\mathfrak{U}}_{W}$. Note that this should hold for a general nonuniform partition with w replaced by the mesh.
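This linear approach of the acceptance probability toward 1 can be illustrated numerically on the toy target shape f(x) = x(1 - x) with a piecewise-constant envelope over a uniform partition of mesh w = 1/n (a floating-point sketch, not a rigorous computation):

```python
def natural_ext_upper(lo, hi):
    # upper bound of the natural interval extension of the toy
    # target shape f(x) = x(1 - x) on [lo, hi]
    return max(lo * (1 - hi), lo * (1 - lo), hi * (1 - hi), hi * (1 - lo))

def acceptance(n):
    # acceptance probability N_f / N_envelope for a piecewise-constant
    # envelope over n equal pieces; N_f = integral of x(1 - x) on [0, 1] = 1/6
    envelope_mass = sum(natural_ext_upper(i / n, (i + 1) / n) / n
                        for i in range(n))
    return (1 / 6) / envelope_mass

# acceptance(1) = 1/6 and increases toward 1 as the mesh w = 1/n shrinks
```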
Declarations
Acknowledgements
R.S. is a Research Fellow of the Royal Commission for the Exhibition of 1851. This work was partly supported by joint NSF/NIGMS grant DMS-02-01037. Many thanks to Rob Strawderman, Warwick Tucker and Stephane Aris-Brosou for constructive comments, Joe Felsenstein for clarifying the transition probabilities under the HKY model and Ziheng Yang for various clarifications and encouragement.
Authors’ Affiliations
References
- Jones G, Hobert J: Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statistical Science. 2001, 16 (4): 312-334. 10.1214/ss/1015346317.
- Mossel E, Vigoda E: Phylogenetic MCMC algorithms are misleading on mixtures of trees. Science. 2005, 309: 2207-2209.
- von Neumann J: Various techniques used in connection with random digits. John von Neumann, Collected Works. 1963, V: Oxford University Press
- Walker A: An efficient method for generating discrete random variables with general distributions. ACM Trans on Mathematical Software. 1977, 3: 253-256. 10.1145/355744.355749.
- Sainudiin R: Machine interval experiments. PhD dissertation. 2005, Cornell University, Ithaca, New York
- Moore R: Interval analysis. 1967, Prentice-Hall
- Semple C, Steel M: Phylogenetics. 2003, Oxford University Press
- Felsenstein J: Inferring phylogenies. 2003, Sunderland, MA: Sinauer Associates
- Yang Z: Computational molecular evolution. 2006, UK: Oxford University Press
- Moore R: Methods and applications of interval analysis. 1979, Philadelphia, Pennsylvania: SIAM
- Alefeld G, Herzberger J: An introduction to interval computations. 1983, Academic Press
- Hammer R, Hocks M, Kulisch U, Ratz D: C++ toolbox for verified computing: basic numerical problems. 1995, Springer-Verlag
- Kulisch U, Lohner R, Facius A (Eds): Perspectives on enclosure methods. 2001, Springer-Verlag
- Matsumoto M, Nishimura T: Mersenne Twister: A 623-dimensionally equidistributed uniform pseudo-random number generator. ACM Trans Model Comput Simul. 1998, 8: 3-30. 10.1145/272991.272995.
- Williams D: Weighing the Odds: A Course in Probability and Statistics. 2001, Cambridge University Press
- Felsenstein J: Evolutionary trees from DNA sequences: a maximum likelihood approach. Jnl Mol Evol. 1981, 17: 368-376. 10.1007/BF01734359.
- Yang Z: Complexity of the simplest phylogenetic estimation problem. Proceedings Royal Soc London B Biol Sci. 2000, 267: 109-119. 10.1098/rspb.2000.0974.
- Evans W, Kenyon C, Peres Y, Schulman L: Broadcasting on trees and the Ising model. Advances in Applied Probability. 2000, 10: 410-433. 10.1214/aoap/1019487349.
- Neyman J: Molecular studies of evolution: a source of novel statistical problems. Statistical decision theory and related topics. Edited by: Gupta S, Yackel J. 1971, 1-27. New York Academy Press
- Jukes T, Cantor C: Evolution of protein molecules. Mammalian Protein Metabolism. Edited by: Munro H. 1969, 21-32. New York Academic Press
- Saitou N: Property and efficiency of the maximum likelihood method for molecular phylogeny. Jnl Mol Evol. 1988, 27: 261-273. 10.1007/BF02100082.
- Yang Z: Statistical properties of the maximum likelihood method of phylogenetic estimation and comparison with distance matrix methods. Syst Biol. 1994, 43: 329-342. 10.2307/2413672.
- Hosten S, Khetan A, Sturmfels B: Solving the likelihood equations. Found Comput Math. 2005, 5 (4): 389-407. 10.1007/s10208-004-0156-8.
- Casanellas M, Garcia L, Sullivant S: Catalog of small trees. Algebraic statistics for computational biology. Edited by: Pachter L, Sturmfels B. 2005, 291-304. Cambridge University Press
- Hasegawa M, Kishino H, Yano T: Dating of the human-ape splitting by a molecular clock of mitochondrial DNA. Jnl Mol Evol. 1985, 22: 160-174. 10.1007/BF02101694.
- Sainudiin R, Yoshida R: Applications of interval methods to phylogenetic trees. Algebraic statistics for computational biology. Edited by: Pachter L, Sturmfels B. 2005, 359-374. Cambridge University Press
- Brown W, Prager E, Wang A, Wilson A: Mitochondrial DNA sequences of primates, tempo and mode of evolution. Jnl Mol Evol. 1982, 18: 225-239. 10.1007/BF01734101.
- Marsaglia G: Generating discrete random numbers in a computer. Comm ACM. 1963, 6: 37-38. 10.1145/366193.366228.
- Galassi M, Davies J, Theiler J, Gough B, Jungman G, Booth M, Rossi F: GNU Scientific Library Reference Manual. 2003, Network Theory Ltd, 2, http://www.gnu.org/software/gsl/
- Hofschuster W, Krämer W: C-XSC 2.0: A C++ library for extended scientific computing. Numerical software with result verification, Lecture notes in computer science. Edited by: Alt R, Frommer A, Kearfott R, Luther W. 2004, 2991: 15-35. Springer-Verlag
- Rannala B, Yang Z: Probability distribution of molecular evolutionary trees: a new method of phylogenetic inference. Jnl Mol Evol. 1996, 43: 304-311. 10.1007/BF02338839.
- Yang Z, Rannala B: Branch-length prior influences Bayesian posterior probability of phylogeny. Syst Biol. 2005, 54: 455-470.
- Green R, Krause J, Ptak S, Briggs A, Ronan M, Simons J, Du L, Egholm M, Rothberg J, Paunovic M, Pääbo S: Analysis of one million base pairs of Neandertal DNA. Nature. 2006, 444: 330-336.
- Efron B, Halloran E, Holmes S: Bootstrap confidence levels for phylogenetic trees. Proc Natl Acad Sci. 1996, 93: 13429-13434.
- Gilks W, Wild P: Adaptive rejection sampling for Gibbs sampling. Applied Statistics. 1992, 41: 337-348. 10.2307/2347565.
- Kulisch U: Advanced arithmetic for the digital computer, interval arithmetic revisited. Perspectives on enclosure methods. Edited by: Kulisch U, Lohner R, Facius A. 2001, 50-70. Springer-Verlag
- Tucker W: Auto-validating numerical methods. 2004, Lecture notes, Uppsala University
- Jaulin L, Kieffer M, Didrit O, Walter E: Applied interval analysis: with examples in parameter and state estimation, robust control and robotics. 2004, Springer-Verlag
- Strimmer K, von Haeseler A: Quartet puzzling: A quartet maximum likelihood method for reconstructing tree topologies. Mol Biol Evol. 1996, 13: 964-969.
- Levy D, Yoshida R, Pachter L: Beyond pairwise distances: neighbor joining with phylogenetic diversity estimates. Mol Biol Evol. 2006, 23: 491-498.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.