Accelerating calculations of RNA secondary structure partition functions using GPUs
© Stern and Mathews; licensee BioMed Central Ltd. 2013
Received: 31 January 2013
Accepted: 14 October 2013
Published: 1 November 2013
RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs).
Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision and, on some hardware, are restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code.
Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu.
RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. It can form enzymes, for example for cleavage of itself or of other RNA, or to create peptide bonds as a fundamental constituent of the ribosome. It can act as a signalling molecule for regulation of gene expression, for protein export, or for guiding post-translational modifications [2–5].
As for proteins, RNA function depends on its folding to a well-defined three-dimensional shape. In contrast to proteins, the folding of RNA is hierarchical. Secondary structure, or the particular set of contacts between pairs of complementary bases mediated by hydrogen bonding and stacking of bases, provides a significant amount of information. This can be helpful in predicting function or accessibility to ligands [7–10]. Computational prediction of the secondary structure of RNA from its sequence is therefore of great interest. The most widely-used automated prediction methods attempt to estimate the thermodynamic stability of RNA, using empirical parameters determined from experiments on oligonucleotides [11, 12].
CUDA is a programming interface developed by the company NVIDIA to facilitate general-purpose, parallel high-performance computing on multiprocessor graphics processing units (GPUs). In recent years, many scientific computing applications have been implemented on GPUs using CUDA, in many cases yielding speed-ups of several orders of magnitude [14, 15]. However, to our knowledge, only a handful of publications have appeared describing GPU implementations of codes for RNA secondary structure prediction. Rizk and Lavenier described a CUDA implementation of structure prediction by free energy minimization. Their work was limited to relatively short sequences (up to 120 bases) and a simplified energy model that neglects coaxial stacking. The GPU implementation was faster than the serial implementation by factors of 10 and 17, depending on the particular hardware. More recently, Lei et al. also reported a parallelized implementation of free energy minimization using CUDA. They used only a coarse-grained parallelization scheme, in which the minimum-free-energy structures for subsequences of a given length are calculated in parallel but the search over structures for each subsequence is done serially. Their work was also limited to relatively short sequences (up to 221 bases). They reported speedups of up to a factor of 16. It should be noted that these parallelized implementations neither use the latest thermodynamic parameters for loops, nor include coaxial stacking interactions.
In addition, recent work demonstrates that calculations of base-pairing probabilities calculated with partition functions can provide additional useful information. Structures composed of highly probable pairs are more accurate than lowest free energy structures [20, 21]. The base pair probabilities provide confidence intervals for prediction of individual pairs. Base pairing probabilities can also be used to predict pseudoknots. We have additionally extended this work to predictions using multiple, homologous sequences, where the same principles hold true [23–26].
Biological RNA sequences examined *
Candida albicans 5S rRNA
Escherichia coli 5S rRNA
P546 folding domain of Tetrahymena thermophila group I intron
Bacillus stearothermophilus SRP RNA
3’ UTR of Bombyx mori R2 element with flanking vector sequence
Tetrahymena thermophila group I intron
Saccharomyces cerevisiae A5 group II intron
Escherichia coli small subunit rRNA
Escherichia coli large subunit rRNA
human ICAM-1 mRNA
HIV-1 NL43 genome (GenBank: AF324493)
We employed a sophisticated and accurate energy calculation that includes coaxial stacking. Before attempting a parallel implementation, we first wrote an optimized serial version of the original code (the “partition” program in RNAstructure), implementing only a subset of its functionality, while improving efficiency and reducing memory usage. Subsequently, we parallelized the optimized code for GPU hardware, using CUDA. Here we made use of a fine-grained parallelization scheme, in which the restricted partition functions for all subsequences of a given length are calculated in parallel, and the sums contributing to the restricted partition function of each individual subsequence are also computed in parallel. We found that this fine-grained parallelization resulted in greater speedups than a simpler coarse-grained-only parallelization (up to factors of ∼60 compared to the optimized serial version and ∼116 compared to the original code).
Here e−g/RT is the Boltzmann weight corresponding to standard free energy change g at temperature T, where R is the gas constant.
These recursions are slightly different from—but equivalent to—those presented in reference and used in the previous code. It should be noted that there was an error in equation 15 of reference: in the second line, WMBL(k+1,j) should be replaced by [WMBL(k+1,j) + WL(k+1,j)].
Elements of Y are −RT times the logarithm of the sum of the Boltzmann weights of corresponding elements of W and WMB.
Elements of YL are −RT times the logarithm of the sum of the Boltzmann weights of corresponding elements of WL and WMBL.
Elements of Z are −RT times the logarithm of the sum of the Boltzmann weights of corresponding elements of WL (except for the term depending on the next-smallest fragment) and Wcoax.
Reorganizing the recursions in this way might appear to use more memory because of the additional arrays. In fact, the modified version requires less memory, because several of the arrays do not need to be stored in their entirety. Specifically, using the modified recursions, storage is only required for two diagonals of W, WL, and WMBL; for five diagonals of WMB; and for a half-triangle of WQ. Reducing memory usage is important as the size of the full arrays scales as O(N²) and the available GPU memory on our hardware was limited to ∼2.5 GB. The modified recursions use four full N×N arrays and one half-triangle, rather than the six full N×N arrays used in the original recursions, and therefore reduce memory usage by about 25%. In addition, the calculation of W5′ and W3′ is simplified (compare equations 20 and 21 above with equation 11 of reference).
Here n is the number of unpaired nucleotides and h is the number of branching helices. By convention, the size of internal loops is limited to thirty unpaired nucleotides, so the number of terms in equation 8 and the overall computational expense scale as O(N³), where N is the size of the sequence.
This was done for two reasons: it requires at most a single call to exp, rather than two; and it can make use of the log1p function from the standard math library, which calculates log(1+x) accurately even for small x. This is important because often e−a and e−b will differ by several orders of magnitude, and simply adding them and then taking the logarithm can lead to significant roundoff error.
In order to determine how much additional computational overhead was imposed by the calculation of exp and log1p, we performed a comparison with an artificial reference calculation, which was identical except that calls to these functions were omitted. We found that for a 1,000-mer, the actual GPU calculation is only ∼20% more expensive than this reference calculation.
For a serial calculation on the CPU, there is a larger performance hit; the actual calculation is about a factor of two more expensive than the reference without exp or log1p. However, it should be noted that this is not the entire story, because overall, the new optimized serial code, which uses logarithms, is still faster than the original code, which does not. Running the calculation in log space results in simplifications such as not requiring checking for overflow and not having to multiply by scaling factors, which reduces computational expense.
In the CUDA programming model, overall execution of the program is still governed by the CPU. Compute-intensive portions of the program are then delegated to subroutines executed by the GPU, or kernels. In general, the GPU has its own memory, so data must be copied to and from the GPU before and after kernel execution. Many copies of a kernel, or threads, run in parallel, each of which belongs to a block. During kernel execution, threads belonging to the same block can share data and synchronize, whereas threads belonging to different blocks cannot. A program can contain many kernels, which can execute either serially or in parallel.
The algorithm for calculating partition functions is recursive: partition functions for larger fragments depend on those for smaller fragments. As such, the overall calculation proceeds serially, in order of fragment size. We used two levels of parallelization, a block level and a thread level. Calculations for all fragments of a given size may be done in parallel, with no communication. This was implemented in CUDA at the level of blocks of threads. The partition function for a given fragment depends on sums, with the number of terms on the order of the fragment size (e.g., equation 9). These sums were parallelized at the level of threads within a block, since calculating a sum in parallel relies on communication between the threads. In our experience, a greater speedup was obtained from this “inner loop” parallelization, even though it requires more communication between threads. Most likely, this is because optimal efficiency on GPU hardware is obtained when identical mathematical operations are performed in lockstep on different data. We stress that these two different levels of parallelization are not mutually exclusive and optimal performance was obtained from including both. A separate block of threads was run for each fragment, while 256 threads were run within a block. The number of threads per block was chosen by trial and error and was optimal for our hardware (the simple sum reduction scheme we chose requires it to be a power of two). In our code, this value is set at compile time (but this is not required by CUDA—it could be set at run time if desired).
Peak floating-point performance of NVIDIA Tesla GPUs is a factor of two higher in single precision than in double precision (http://www.nvidia.com), but single precision introduces greater roundoff error. In order to examine accuracy, we calculated base-pair probabilities for the same sequences using both the parallel CUDA/GPU implementation in single precision and the serial implementation in double precision. We also calculated probabilities in double precision using a set of nearest-neighbor parameters slightly modified by adding a random variate chosen from a Gaussian distribution with mean 0 and standard deviation 0.01 kcal/mol, which is comparable to or smaller than their experimental uncertainty (0.1 kcal/mol for parameters describing helical stacking and 0.5 kcal/mol for parameters describing loops [18, 37]).
We also used the calculated base-pair probabilities to determine a single consensus structure for each sequence, using the ProbKnot algorithm. In this case, working in single precision led to differences from double precision for only one sequence (a random sequence of 6,000 bases) out of the 44 we examined, and these were very small (only three bases out of the 6,000 were matched with a different partner). Working with the modified parameters led to larger discrepancies, although these were still fairly small (for sequences containing more than 100 bases, at most 6% of bases were matched with different partners). This is consistent with a previous report that predictions using base-pair probabilities are significantly less sensitive to errors in thermodynamic parameters than those using only lowest free energy structures.
In this work, we introduced a modified set of recursions for calculating RNA secondary structure partition functions and base-pairing probabilities using a dynamic programming algorithm, and implemented these in parallel using the CUDA framework for multiprocessor GPUs. For large sequences, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. It is clear from our work that using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. It is expected that parallelization using CUDA should be applicable to other implementations of dynamic programming algorithms besides ours, and result in similar speedups.
Two levels of parallelization were implemented. Calculations for all fragments of a given size were done in parallel, with no communication between threads. This was implemented in CUDA at the level of blocks of threads. In addition, the sums contributing to the partition function for a given fragment were calculated in parallel, with communication required between threads. These sums were parallelized at the level of threads within a block. We found that this “inner loop” parallelization resulted in a significantly greater speedup than the “outer loop” parallelization alone.
● Project name: partition-cuda; part of RNAstructure, version 5.5 and later
● Project home page: http://rna.urmc.rochester.edu/RNAstructure.html
● Operating system(s): Unix
● Programming languages: C and CUDA
● Other requirements: CUDA compiler, available from NVIDIA
● License: GNU GPL
● Any restrictions to use by non-academics: None.
This work was supported by NIH grant R01GM076485 to D. H. M. and the Center for Integrated Research Computing at the University of Rochester.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.