Predicting direct protein interactions from affinity purification mass spectrometry data

Background Affinity purification followed by mass spectrometry identification (AP-MS) is an increasingly popular approach to observe protein-protein interactions (PPI) in vivo. One drawback of AP-MS, however, is that it is prone to detecting indirect interactions mixed with direct physical interactions. Therefore, the ability to distinguish direct interactions from indirect ones is of much interest. Results We first propose a simple probabilistic model for the interactions captured by AP-MS experiments, under which the problem of separating direct interactions from indirect ones is formulated. Then, given idealized quantitative AP-MS data, we study the problem of identifying the most likely set of direct interactions that produced the observed data. We address this challenging graph theoretical problem by first characterizing signatures that can identify weakly connected nodes as well as dense regions of the network. The rest of the direct PPI network is then inferred using a genetic algorithm. Our algorithm shows good performance on both simulated and biological networks, with very high sensitivity and specificity. The algorithm is then used to predict direct interactions from a set of AP-MS PPI data from yeast, and its performance is measured against a high-quality interaction dataset. Conclusions As the sensitivity of AP-MS pipelines improves, the fraction of indirect interactions detected will also increase, thereby making the ability to distinguish them even more desirable. Despite the simplicity of our model for indirect interactions, our method performs well on the test networks.


Background
Understanding the organization of protein-protein interactions (PPIs) as a complex network is one of the main pursuits in proteomics today. With the help of high-throughput experimental techniques, a large amount of PPI data has recently become available, providing us with a rough picture of how proteins interact in biological systems. However, the interaction data from these high-throughput experiments suffer from low resolution as compared to data from low-throughput technologies such as protein co-crystallization, and to make matters worse, they are prone to problems including relatively high error rates and protocol-specific biases. Therefore, inferring the direct, physical PPI network from high-throughput data remains a challenge in systems biology.
The leading technologies for identifying PPIs are Yeast 2-Hybrid (Y2H) [1,2] and Affinity Purification followed by Mass Spectrometry (AP-MS) [3][4][5][6]. Due to the ability to perform in vivo at biologically reasonable expression levels, as well as the ability to detect protein complexes with fewer false-positives [6], AP-MS approaches have become increasingly popular, although their throughput is lower than that of Y2H approaches. In an AP-MS experiment, a protein of interest (the bait) is tagged and expressed in vivo. The bait is then immuno-precipitated (IP), together with all of its interacting partners (the preys), and finally, preys are identified using mass spectrometry. For a more detailed overview of the technique, see [6,7]. Like Y2H and other high-throughput experimental methods, however, AP-MS suffers from experimental noise. A number of approaches have been proposed to separate true interactions from false-positives. These approaches mostly focus on reducing false-positives due to protein misidentification from MS data [8][9][10], on detecting contaminants [11], or a combination of both [7,[12][13][14][15][16]. These methods often make use of the guilty-by-association principle, and quantify the confidence level of an interaction by considering alternative paths between two protein molecules. In this context, authors say that an interaction between bait b and prey p is a true positive if, at some point in the set of cells considered, there exists a complex that contains both b and p. We note that as the sensitivity of AP-MS methods improves and the stability of the complexes that can be detected decreases, the transience of detectable interactions will increase, to a point where, eventually, every protein may be shown to marginally interact with every other protein.
A key property of AP-MS approaches is that a significant number of the co-purified prey proteins are in fact indirect interaction partners of the bait protein, in the sense that they do not interact physically and directly with the bait, but interact with it through a chain of physical interactions involving other proteins in the complex. Therefore, it is critical, when interpreting AP-MS-derived PPI networks, to understand the meaning of the term "interaction". Although not designed to identify physical interactions, AP-MS experiments produce data that may allow separating direct physical interactions from indirect ones. This is the problem we consider in this paper: given the results of a set of AP-MS experiments, filtered for protein misidentifications and contaminants, how can we distinguish direct (physical) interactions from indirect interactions? Note that since the false-positive filtering methods listed above consider indirect interactions as true-positives, they cannot be used to address this problem. Gordân et al. [17] study the related problem of distinguishing direct vs. indirect interactions between transcription factors (TF) and DNA. While the objective of their study is similar to ours, their method makes use of information specific to TF-DNA interactions (e.g. TF binding data, motifs from protein binding microarrays), and thus is not immediately applicable to the problem on general PPI networks. In fact, to our knowledge, no existing approach seems directly applicable. This paper is organized as follows. We first describe the mathematical modelling of an AP-MS experiment and introduce an algorithmic formulation of the problem. We then describe an overview of our method, which is based on a collection of graph theoretic approaches that succeeds at inferring a large fraction of the network nearly exactly, followed by a genetic algorithm that infers the remainder of the network. 
The accuracy of the proposed method is assessed using both biological and simulated PPI networks. Finally, we apply our algorithm to the prediction of direct interactions based on a large set of AP-MS PPI data in yeast [18]. Our work opens the way to a number of interesting and challenging problems, and the results obtained indicate that useful inferences can be made despite the simplicity of our modelling.

Results
Because the main contribution of this paper is methodological, we start by giving an overview of the approach developed before detailing the results obtained.
Throughout this work, we make the assumption that appropriate methods have been used to reduce protein misidentifications and contaminants as much as possible, so that all interactions detected are either direct or indirect interactions. Our task is to separate the former from the latter. To avoid confusion, we note that false-positives (resp. false-negatives) henceforth refer to falsely detected (resp. undetected) direct interactions inferred by our algorithm.

Mathematical modelling of AP-MS data
We first describe a simple model of the AP-MS PPI data that shall be used throughout this paper. Although admittedly rather simplistic, our model has the benefit of allowing the formulation of a well-defined computational problem.
Let G_direct = (V, E_direct) be an undirected graph whose nodes V represent the set of proteins, and whose edges E_direct represent direct (physical) interactions between the proteins. Let N(b) = {p ∈ V : (b, p) ∈ E_direct} be the set of direct interaction partners of protein b. We model the physical process through which PPIs are identified in an AP-MS experiment as follows. If a bait protein b is in contact with a direct interaction partner p ∈ N(b), the IP on b will pull p down, which will then be identified through mass spectrometry. In addition, if p interacts with p′ ∈ N(p) at the same time as it interacts with b, protein p′ may also be pulled down by the IP on b, although the two proteins only indirectly interact. In general, any protein x that is connected to b by a series of simultaneous direct interactions may be pulled down by b. As a result, all interaction partners of b (direct or indirect) will be identified together. Figure 1 depicts an example of this effect. In order to distinguish direct physical interactions from indirect ones, the availability of quantitative AP-MS data is helpful.
Although quantitative AP-MS remains in its infancy, prey abundance can be estimated fairly accurately using approaches such as the peptide count [19], spectral count [20], sequence coverage [21], and protein abundance index [22]. Combined with the increasing accuracy and sensitivity of mass spectrometers, these methods are becoming more reliable. Throughout the discussions in this paper, we assume that this quantitative data is available to us.
The strength of a physical interaction can be measured by the energy required to break it. Let A(b, x) denote the abundance of a prey protein x obtained by IP on bait b, and let c(p_1, p_2) denote the number of pairs of molecules p_1 and p_2 that interact directly in the cells considered. When there are more than two interaction partners, we let c(p_1, p_2, ..., p_k) denote the number of copies of complexes simultaneously containing p_1, p_2, ..., p_k. Since protein interactions may be disrupted by the purification process, we expect A(b, x) to be correlated with the strength of the interaction between b and x. Thus, we assume that a direct interaction between a pair of individual proteins b and x survives the purification process with probability p̂(b, x), and breaks with probability 1 − p̂(b, x). Then the amount of protein x obtained from the pull-down on b would be c(b, x) · p̂(b, x). In general, the amount of protein x that will be obtained upon pull-down of b will be proportional to the probability that b and x remain connected after each edge (u, v) ∈ E_direct is broken with probability 1 − p̂(u, v). Our goal is then to infer G_direct from the set of observed abundances A(x, y).
In this paper, we make the following simplifying assumptions: 1. All direct interactions (u, v) ∈ E_direct survive with the uniform probability p̂, and fail independently with probability 1 − p̂.
2. All possible direct interactions take place at the same time, irrespective of the presence of other interactions, and with the same frequency.
Although these assumptions are clearly unrealistic, they provide a useful starting point for separating direct interactions from indirect ones (see Discussion for possible relaxation of these assumptions). Despite its simplicity, our mathematical modelling of AP-MS does fit existing biological data reasonably well (see Model validation). We note that Asthana et al. [23] have proposed a probabilistic graph model that is similar to ours. However, their model measures the likelihood of a protein's membership in a protein complex, and thus is not applicable to our problem.
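Under these assumptions, a single AP-MS experiment is easy to simulate: remove each direct edge independently with probability 1 − p̂, and report as preys the proteins still connected to the bait. The following is a minimal sketch (the function name and graph encoding are ours, for illustration only):

```python
import random

def simulate_pulldown(edges, bait, p_hat, rng=random):
    """One idealized AP-MS experiment under the uniform model: every direct
    interaction survives independently with probability p_hat; the preys
    identified are exactly the proteins still connected to the bait."""
    surviving = [e for e in edges if rng.random() < p_hat]
    # Collect the bait's connected component over the surviving edges.
    adj = {}
    for u, v in surviving:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {bait}, [bait]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen - {bait}  # preys: direct and indirect partners of the bait
```

With p_hat = 1 every protein reachable from the bait is co-purified (direct and indirect partners alike), which is precisely the effect depicted in Figure 1.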

Problem formulation
We are now ready to formulate the algorithmic problem addressed in this paper. We henceforth consider the (unknown) direct interaction network G_direct as a probabilistic graph, where each edge in G_direct survives the AP-MS process with probability p̂, and fails otherwise.
Let G̃_direct be a random graph obtained from G_direct by removing edges in E_direct independently with probability 1 − p̂. Then, define P_{G_direct}(u, v) to be the probability that vertices u and v remain connected (directly or indirectly) in G̃_direct. We call P_{G_direct} the connectivity matrix of G_direct. See Figure 2 for an example of a direct interaction network and its connectivity matrix. Although P_{G_direct}(u, v) can be estimated from G_direct by straightforward Monte Carlo sampling, its exact computation (known as the two-terminal network reliability problem [24,25]) is #P-Complete [24], and so is its approximation within a relative error of ε [26].
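The Monte Carlo estimation mentioned above can be sketched as follows (a toy illustration, not the paper's implementation): connectivity under each random edge-failure outcome is tracked with a small union-find structure, and each matrix entry is the fraction of trials in which the pair stayed connected.

```python
import random

def estimate_M(n, edges, p_hat, samples, seed=0):
    """Monte Carlo estimate of the connectivity matrix P_G: the fraction of
    trials in which each pair of vertices remains connected after every edge
    fails independently with probability 1 - p_hat."""
    rng = random.Random(seed)
    counts = [[0] * n for _ in range(n)]
    for _ in range(samples):
        comp = list(range(n))  # union-find over this trial's surviving edges

        def find(x):
            while comp[x] != x:
                x = comp[x]
            return x

        for u, v in edges:
            if rng.random() < p_hat:  # edge survives this trial
                comp[find(u)] = find(v)
        for u in range(n):
            fu = find(u)
            for v in range(n):
                if fu == find(v):
                    counts[u][v] += 1
    return [[c / samples for c in row] for row in counts]
```

For a path 0-1-2 with p̂ = 0.5, the estimate converges to P(0,1) = 0.5 and P(0,2) = 0.25, since 0 and 2 are connected only when both edges survive.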
A set of AP-MS experiments where all proteins have been tagged and used as baits yields an approximation of A(x, y) for all pairs of proteins (x, y), which can be transformed into an estimate M(x, y) of P_{G_direct}(x, y) through appropriate normalization. We are thus interested in inferring G_direct from M:

EXACT DIRECT INTERACTION GRAPH FROM CONNECTIVITY MATRIX (E-DIGCOM)
Given: A connectivity matrix M (n × n).
Find: A graph G = (V, E) with |V| = n such that P_G = M.
In a more realistic setting, the connectivity matrix M would not be observed precisely, and the E-DIGCOM problem may not admit a solution. We are thus interested in an approximate, optimization version of the problem:

APPROXIMATE DIRECT INTERACTION GRAPH FROM CONNECTIVITY MATRIX (A-DIGCOM)
Given: A connectivity matrix M (n × n) and a tolerance level 0 ≤ δ ≤ 1.
Find: A graph G = (V, E) with |V| = n such that |P_G(u, v) − M(u, v)| ≤ δ for all pairs of vertices u, v.

Note that although the computational complexity of the DIGCOM problems is currently unknown, the fact that simply verifying a candidate solution is #P-Complete suggests that the problem may be hard and may not belong to NP: candidate solutions to problems in NP can, by definition, be verified in polynomial time, whereas #P is widely believed to require super-polynomial time computations. Related problems include network design problems that have been studied extensively in the computer networking community. For example, one related, but different, problem is to choose a minimal set of edges over a set of nodes so that the resulting network has at least the prescribed all-pairs two-terminal reliability; various algorithms including branch-and-bound heuristics [27] and genetic algorithms [28,29] have been proposed.
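For tiny instances, a candidate solution to A-DIGCOM can be verified by brute force, which makes the problem statement concrete (illustrative only; this exponential enumeration is exactly what the #P-completeness result rules out at scale):

```python
from itertools import product

def exact_connectivity(n, edges, p_hat):
    """Exact P_G by enumerating all 2^|E| edge-survival outcomes.
    Feasible only for toy graphs: the general problem is #P-complete,
    which is why verifying a candidate solution is expensive."""
    P = [[0.0] * n for _ in range(n)]
    for mask in product((False, True), repeat=len(edges)):
        prob = 1.0
        for kept in mask:
            prob *= p_hat if kept else 1.0 - p_hat
        comp = list(range(n))  # union-find for this outcome

        def find(x):
            while comp[x] != x:
                x = comp[x]
            return x

        for (u, v), kept in zip(edges, mask):
            if kept:
                comp[find(u)] = find(v)
        for u in range(n):
            for v in range(u, n):
                if find(u) == find(v):
                    P[u][v] += prob
                    if u != v:
                        P[v][u] += prob
    return P

def fits(M, n, edges, p_hat, delta):
    """A-DIGCOM feasibility: does the candidate graph (V, edges) reproduce
    the observed connectivity matrix M within tolerance delta on every entry?"""
    P = exact_connectivity(n, edges, p_hat)
    return all(abs(P[u][v] - M[u][v]) <= delta
               for u in range(n) for v in range(n))
```

For example, a triangle with p̂ = 0.5 has P(u, v) = p̂ + (1 − p̂)p̂² = 0.625 for every pair, since the two vertices are connected either directly or through the third vertex.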

Algorithm overview
Our algorithm for the A-DIGCOM problem has three main phases outlined here and detailed in Methods.
Phase I. We start by identifying, based on the connectivity matrix M, vertices from G_direct with low degree, together with the edges incident to them. As most PPI networks exhibit the properties of scale-free networks [30], this resolves the edges incident to a significant portion of the vertices (~75% in our networks; see below).
Phase II. At the other end of the spectrum, G_direct contains densely clustered regions (cliques or quasi-cliques), possibly corresponding to protein complexes. We use a heuristic to detect these dense regions from the connectivity matrix M.
Phase III. To infer the remainder of the network, we use a genetic algorithm. This highly customized genetic algorithm makes use of the findings from the previous two steps in order to dramatically reduce the dimension of the problem space, and to guide the mating process between parent candidates to create good offspring solutions.
In what follows, we highlight the main theoretical results on which these three phases rely. Details are given in Methods and proofs in Appendix.

I-a. Finding cut edges
A cut edge in a graph G is an edge (u, v) whose removal would result in u and v belonging to two distinct connected components (e.g. edge (3,4) in Figure 2). The following theorem allows the identification of all cut edges based on the connectivity matrix P G . Theorem 1. A pair of vertices u and v from V forms a cut edge in G if and only if the following two conditions hold.
The above theorem immediately provides an efficient algorithm, requiring O(|V|²) time, to test whether a pair of vertices forms a cut edge. Observe that removing a cut edge (u, v) from a connected graph decomposes the graph into two connected components (the subgraphs induced by V_u and V_v, respectively), and the probability of connectivity between every pair of vertices in V_u (resp. V_v) remains the same after removing (u, v). Therefore, the submatrices that correspond to V_u and V_v can be treated as independent subproblems, and one can recursively detect cut edges in the remaining subproblems. Note that if the input graph is assumed to be a tree, such a recursive algorithm would identify the entire graph exactly. Moreover, PPI networks are sparse in general, and contain many cut edges and degree-1 vertices. As a result, this algorithm allows a significant simplification of our problem by identifying all cut edges.
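Theorem 1's test operates on the connectivity matrix; its exact conditions are given in the Appendix. For intuition, the cut edges it recovers are precisely the bridges of G_direct, which the classical DFS (Tarjan) algorithm finds when the graph itself is known. A sketch of that stand-in, for illustration only:

```python
def bridges(n, edges):
    """Classical DFS bridge-finding: an edge (u, v) is a cut edge iff no
    back edge links the DFS subtree below v to u or any of u's ancestors
    (low[v] > disc[u]). Stands in for the matrix-based test of Theorem 1."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc = [-1] * n          # DFS discovery times
    low = [0] * n            # lowest discovery time reachable via back edges
    out = []
    timer = [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, i in adj[u]:
            if i == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    out.append(edges[i])  # (u, v) is a cut edge
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return out
```

On a tree every edge is a bridge, matching the observation above that a tree-shaped input graph is identified exactly by the recursive decomposition.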

I-b. Finding degree-2 vertices
We now consider the problem of identifying degree-2 vertices from the connectivity matrix M. After degree-1 vertices, which are identified in the previous step, they constitute the next most frequent vertices in the biological networks we studied. While we do not have a full characterization of these vertices, the following theorem gives a set of necessary conditions. Theorem 2. Let s be a degree-2 vertex in G such that N(s) = {u, v}. Then, the following three conditions must hold.
(i) Low connectivity: for each t ∈ V, … These necessary conditions allow us to rule out vertices that cannot be of degree 2, and give rise to an O(|V|²) heuristic for predicting degree-2 vertices (see Algorithm 1 in Methods). In practice, our studies have shown that vertices satisfying these conditions while having degree higher than two are extremely rare (see below).

II. Detecting densely connected regions
We now turn to the problem of finding densely connected regions in the network. These regions may correspond to protein complexes, where tagging any one of the members of the complex results in the identification of all other members of the complex with high probability. While correctly predicting the physical interactions within each complex is a difficult task, separating these dense regions from the remainder of the network is essential to improving the accuracy of the genetic algorithm (part III).
Based on the connectivity matrix M, our algorithm identifies (possibly overlapping) clusters of proteins of size at least k such that, for every pair u, v in each cluster, M(u, v) ≥ t_k for some threshold t_k. For appropriately chosen values of k and t_k (see Methods), the set of clusters found corresponds to cliques in G_direct with high accuracy (see below).
The dense regions discovered at this phase provide us with (1) the set of edges within each dense region; and (2) sparse cuts between disjoint dense regions. The edge set within each cluster will be used in the initial candidates for the genetic algorithm, whereas the cuts defined by the clusters will be used as crossover points during the crossover operation in the genetic algorithm.

III. Cut-based genetic algorithm
To predict the remaining section of the network, we use a customized genetic algorithm that aims at finding an optimal solution to the A-DIGCOM problem. We first devise a solution to a generalization of the A-DIGCOM problem, and then show how the results of parts I and II of the algorithm are used to improve performance.
Genetic algorithms have been shown to be an effective family of heuristics for a wide variety of optimization problems [31], including network design under connectivity constraints [28,29]. A genetic algorithm models a set of candidate solutions as individuals of a population. From this population, pairs of promising candidate solutions are mated, and their offspring solutions inherit properties of the parents with some random mutations. Over generations, this process of natural selection improves the fitness of the population.
The A-DIGCOM problem is a hard optimization problem, because (i) the size of the search space is huge: 2^(n(n−1)/2) possible graphs of size n, and (ii) there is no known polynomial-time algorithm to evaluate a proposed candidate solution (i.e. compute P_G from G). For these reasons, a straightforward genetic algorithm implementation failed to produce satisfactory results (data not shown). Instead, we use a more sophisticated approach by making use of the results obtained in previous sections in order to reduce the search space and to guide the mating operations for more effective search. Details are given in Methods.
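The overall shape of such a genetic algorithm can be sketched on a toy instance. This is a bare-bones illustration, not our customized algorithm: the cut-guided crossover and the search-space reductions of phases I and II are omitted, mutation is the only variation operator, and fitness is evaluated exactly by enumeration, which is feasible only for tiny graphs.

```python
import random
from itertools import combinations, product

def connectivity(n, edges, p_hat):
    # exact P_G of a tiny candidate graph, by enumerating edge-survival
    # outcomes; a real implementation must rely on sampling (#P-complete)
    P = [[0.0] * n for _ in range(n)]
    for mask in product((0, 1), repeat=len(edges)):
        prob = 1.0
        for kept in mask:
            prob *= p_hat if kept else 1.0 - p_hat
        comp = list(range(n))

        def find(x):
            while comp[x] != x:
                x = comp[x]
            return x

        for (u, v), kept in zip(edges, mask):
            if kept:
                comp[find(u)] = find(v)
        for u in range(n):
            for v in range(n):
                if find(u) == find(v):
                    P[u][v] += prob
    return P

def fitness(n, edge_set, M, p_hat):
    # negated worst-case deviation from the observed matrix (0.0 is perfect)
    P = connectivity(n, sorted(edge_set), p_hat)
    return -max(abs(P[u][v] - M[u][v]) for u in range(n) for v in range(n))

def genetic_search(n, M, p_hat, pop_size=20, generations=100, seed=0):
    """Bare-bones GA for A-DIGCOM: candidates are edge sets, mutation flips
    one potential edge, and elitist truncation keeps the best candidates."""
    rng = random.Random(seed)
    all_edges = list(combinations(range(n), 2))
    pop = [frozenset(e for e in all_edges if rng.random() < 0.5)
           for _ in range(pop_size)]
    best = max(pop, key=lambda g: fitness(n, g, M, p_hat))
    for _ in range(generations):
        child = set(rng.choice(pop))
        child ^= {rng.choice(all_edges)}   # mutation: flip one edge
        pop.append(frozenset(child))
        pop.sort(key=lambda g: fitness(n, g, M, p_hat), reverse=True)
        pop = pop[:pop_size]               # elitist selection
        best = pop[0]
    return best
```

Elitism guarantees that the best fitness never decreases across generations; caching fitness values and replacing the exact evaluation by Monte Carlo estimates are the obvious next steps for realistic sizes.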

Model validation
In order to test our approach, we first sought to validate our model of AP-MS indirect interactions. To this end, we used one of the most comprehensive AP-MS-based networks published to date on yeast, obtained by Krogan et al. [18]. The dataset reports the Mascot score [32] and the number of peptides detected for each bait-prey pair (peptide count). The complete set of interactions reported contains 2186 proteins and 5496 interactions (Krogan et al., Table S6); we call the resulting network G_KroganFull. The authors identified a subset of these interactions as high-confidence, based on their Mascot scores (Krogan et al., Table S5). We call this set of high-confidence interactions G_KroganHigh; this network consists of 1210 proteins and 2357 interactions. We expect that G_KroganHigh is relatively rich in direct interactions, whereas the complete set of interactions G_KroganFull consists in part of indirect interactions.
Considering G_KroganHigh as a direct interaction network, we used Monte Carlo sampling to estimate P_{G_KroganHigh}, using p̂ = 0.5 and 50,000 samples, which yields a 95% confidence interval of size at most 0.007 on each entry P_{G_KroganHigh}(u, v). Next, we normalized the peptide counts of the interactions in G_KroganFull using protein lengths (see Methods). We then compared P_{G_KroganHigh} to the normalized peptide counts of the interactions in G_KroganFull. We expect that a significant fraction of the low-confidence interactions in G_KroganFull − G_KroganHigh are likely to be indirect interactions. If our model is correct, their peptide counts should then be correlated with the corresponding entries in P_{G_KroganHigh}.
Indeed, the positive linear correlation between the predicted connectivity P_{G_KroganHigh} and the observed normalized peptide counts is very significant (regression p-value of 8.17 × 10^−11, Student's t-test; see Additional file 1). Furthermore, this correlation is strongest when p̂ ≈ 0.5, as compared to p̂ = 0.3 or 0.7, justifying the use of this value in our subsequent analyses.

Accuracy of the prediction algorithm
The ideal validation of the accuracy of our algorithm would involve (i) constructing a connectivity matrix M using actual quantitative AP-MS data; (ii) predicting direct interactions based on M using our algorithm; and (iii) comparing our predictions to experimentally generated direct interaction data. Yeast 2-Hybrid (Y2H) experiments are less prone to detecting indirect interactions than are AP-MS methods, and several large-scale efforts have been reported [2,33,34]. Unfortunately, for a number of technical reasons, the overlap between AP-MS PPI networks and Y2H networks remains very small [35]. As a consequence, Y2H data cannot be used directly to validate predictions made on AP-MS data. Instead, we had to rely on partially synthetic data sets, where an actual network of high-quality Y2H interactions is assumed to form the direct interaction graph, and a connectivity matrix is generated from it using Monte Carlo sampling, under our model. Two sets of Y2H interactions were used: (i) G_Yu is the network constructed from the gold-standard dataset of Yu et al. [35]; this network consists of 1090 proteins and 1318 interactions with high confidence of being direct. (ii) G_DIP is the core, high-quality, physical interaction network of yeast, available from the DIP database, version 20090126CR [36], consisting of 1406 proteins and 1967 interactions. These biological networks were complemented with two artificial 1000-vertex networks. The first was generated using the preferential attachment model (PAM) [30]. For the second, we used the duplication model (DM) [37], which, in contrast to the PAM, generates graphs containing several dense clusters. The resulting artificial "direct" interaction graphs are called G_PAM and G_DM and contain 1500-2000 interactions each. We then used the Monte Carlo sampling approach described above to estimate the connectivity matrices P_{G_Yu}, P_{G_DIP}, P_{G_PAM}, and P_{G_DM}.
These will form the input to our inference algorithm, whose output will then be compared to the corresponding direct interaction graph.
It is important to note that these input matrices are not perfectly accurate and may contain sampling errors. However, it is easy to bound the size of the errors with high probability and use this bound as a tolerance level within our algorithm. We also note that the results presented in this section only aim at evaluating the performance of the inference algorithm on input data that was generated exactly according to our probabilistic model. As such, the error rates reported may be considered as lower bounds for those on actual biological data. An assumption-free evaluation is provided later in this section.
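One standard way to obtain such a bound is Hoeffding's inequality: with n samples, the Monte Carlo estimate of each entry deviates from the truth by more than t with probability at most 2·exp(−2nt²). (This is an alternative derivation, not necessarily the one behind the interval quoted in Model validation, but the magnitudes are consistent.)

```python
import math

def hoeffding_halfwidth(samples, alpha):
    """Deviation t such that P(|estimate - truth| >= t) <= alpha for a
    Monte Carlo estimate of a probability, solving Hoeffding's bound
    2 * exp(-2 * n * t^2) = alpha for t."""
    return math.sqrt(math.log(2.0 / alpha) / (2.0 * samples))
```

For 50,000 samples at 95% confidence this gives t ≈ 0.0061, a natural choice of tolerance level δ for the A-DIGCOM instance built from the sampled matrix.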

Identification of weakly connected vertices
Theorem 1 provides an efficient algorithm that guarantees the identification of all cut edges, provided that the given connectivity matrix is precise. We say that a vertex v is a 1-cut vertex if all edges incident on v are cut edges. By applying Theorem 1 recursively to detect cut edges and decomposing the graph into two connected components, we can detect and remove all 1-cut vertices from the input connectivity matrix. Table 1 (i) reports the number of 1-cut vertices that are detected by the recursive algorithm from Theorem 1. In both the Yu and the DIP network, 1-cut vertices constitute approximately 50% of the network, and identifying them allows a significant reduction in the problem size. We note that the inaccuracies in the input connectivity matrices could, in principle, have introduced errors in the detection of cut edges. However, this rare event was never observed on any of our networks.
Algorithm 1 (see Methods) is guaranteed to identify all degree-2 vertices efficiently (again, provided that the connectivity matrix is known exactly), but may also incorrectly flag some higher-degree vertices. As seen in Table 1 (ii), nearly all degree-2 vertices were identified, with a low false-discovery rate ranging from 6 to 9%. Moreover, the false-positives incorrectly detected as degree-2 vertices indeed had small degrees, and their predicted neighbors were mostly correct (but incomplete) predictions. Flagging degree-2 vertices reduces the problem size by a further 15 to 36%.
After repeatedly detecting and removing 1-cut vertices and degree-2 vertices from the problem space, the edges adjacent to approximately 70% of the vertices are detected with a very low error rate. The remaining vertices constitute only approximately 30% of the original network. We call this remaining subset the hard core of the connectivity matrix. Because it is more densely connected than the rest of the network, the topology of the hard core is more difficult to reconstruct.
Running our algorithm on the PAM simulated data yields similar resolution and error rates as on the Y2H networks. However, our DM network is found to be less amenable to these strategies, leaving 55% of vertices unresolved and resulting in an error rate approximately twice that seen for the other networks. This is simply due to the fact that networks generated by the duplication model do not contain as many 1-cut vertices or degree-2 vertices as the other networks, including the biological Y2H networks.

Identification of dense regions
Our dense region detection algorithm aims at identifying all edges that belong to a k-clique in G_direct, for a given value of k. We report the accuracy of the algorithm in Table 2. As expected, our algorithm achieves extremely high sensitivity for clique edges. However, the false-discovery rate is quite high, especially for smaller values of k (e.g. k = 5). This is due to the fact that distinguishing a 5-clique from, say, a quasi-clique of size 7 is extremely difficult, causing false-positive predictions. We note, however, that these erroneous predictions are mostly inconsequential, as the intra-cluster topology of each dense region is only used in generating the initial candidate solutions for the genetic algorithm.

Cut-based genetic algorithm
The various parameters of the genetic algorithm (population size, mate selection probability, mutation rate, etc.) were optimized for the running time and accuracy of the solution based on G_Yu. Although our genetic algorithm could in principle be used on any connectivity matrix, running it on the full matrix of > 1000 proteins is impossible: the search space is huge, and the amount of time required to evaluate the fitness of a given candidate solution is too large. However, as discussed previously, first applying the 1-cut and degree-2 vertex detection algorithms significantly reduces the problem size and makes it accessible to our genetic algorithm. Table 3 (i) reports the accuracy of the genetic algorithm predictions on the hard core of each connectivity matrix. We note that since the network to be inferred is relatively highly connected, the problem is significantly more difficult than the identification of 1-cuts and degree-2 vertices. Indeed, the false-discovery and false-negative rates range from 35% to 55% for most datasets. For comparison, an algorithm that picked edges at random would achieve 98.75% false-discovery and false-negative rates. Combining the three phases of the algorithm, the overall error rate obtained on each data set ranges from 10 to 20% false-discovery and false-negative rates, except for the DM data set, which fares considerably worse, for the reasons explained earlier.
To the best of our knowledge, there have been no other efforts to solve the DIGCOM problems (neither the exact nor the approximate version). We thus compared our approach to a simple hill-climbing search algorithm on the Yu et al. data set (see Methods). We let this algorithm run over several days (as opposed to the few hours spent using our approach), with multiple restarts, and found that it provides very poor sensitivity and specificity (see Table 4 for the best results obtained). This is not surprising, since the hill-climbing method is highly dependent on the initial solution (in this case, a spanning tree chosen randomly based on the connectivity matrix) and the search space is simply too large to search exhaustively for a good initial solution. We also tested the hill-climbing approach in the same setting as the genetic algorithm, i.e. combining it with the 1-cut edge and degree-2 vertex detection algorithms. Here, the modified hill-climbing approach showed better sensitivity and specificity than the pure hill-climbing approach, but still performed much worse than our genetic algorithm (Table 4). Furthermore, the improvement over the pure hill-climbing approach was mostly due to the high sensitivity and specificity of our algorithm for detecting weakly connected vertices.
To provide an idea of the running times, Table 5 gives the empirical data from our experiments. The first two phases (detecting weakly connected vertices and recognizing dense regions) were run within seconds, while the genetic algorithm was run for a fixed amount of time, and the top-scoring candidates were chosen as shown in Table 3.

Table 2 caption: Number of edges that belong to maximal cliques of size k. Real: actual number of edges that belong to maximal cliques of size k; Pred: predicted number of maximal k-clique edges; FD (false-discovery ratio): percentage of false-positives in the predicted set; FN (false-negative ratio): percentage of false-negatives in the real set.

Table 4 caption: Comparison of our method to the simple hill-climbing approach. (i) accuracy of the hill-climbing approach used over the complete network; (ii) accuracy of the hill-climbing approach after fixing the weakly connected nodes using our algorithm; (iii) accuracy of our combined pipeline using the genetic algorithm.

Inferring direct interactions from AP-MS experimental data
In order to apply our algorithm to biological data from AP-MS experiments, we used the raw data reported by Krogan et al. [18] for the 2186 proteins of G_KroganFull. We only considered the subnetwork of tagged proteins, and further focussed our efforts on the analysis of 77 proteins that are well separated in the tag-induced subnetwork. Quantitative abundance estimates were derived from the peptide counts reported for each prey, and an experimentally derived connectivity matrix M was obtained after normalization (see Methods). Our full prediction algorithm was then run on the estimated connectivity matrix, resulting in a direct interaction graph prediction we call G_Kim that consists of 164 interactions (see Additional file 2). The network G_Kim was compared to G_KroganHigh, the set of high-confidence interactions reported by Krogan et al., and to G_KroganHigh^Top, the subset of G_KroganHigh consisting of the 164 most confident interactions they reported (matching the size of G_Kim). Both G_KroganFull and G_KroganHigh overlap G_Kim quite substantially. These three sets of predictions were then compared against the set of high-quality binary interactions from G_Yu. In Y2H experiments, the interaction partners are screened separately using a genetic readout; interactions from G_Yu are therefore believed to be direct, and are thus used to test the predictions made from AP-MS data. On the other hand, these interactions may reflect only a subset of all direct interactions among the 77 proteins.
As shown in Figure 3, our results show that the high-confidence AP-MS data G KroganHigh exhibited very little overlap with the direct binary interaction set G Yu : 72.6% of the interactions in G KroganHigh are disjoint from G Yu , and 25% of G Yu remains undetected by G KroganHigh . Furthermore, even the top-scoring set of interactions G KroganHigh Top showed high discrepancy ratios against G Yu .
In contrast, the network G Kim produced by our algorithm coincides with G Yu with better sensitivity and specificity. Given the crudeness of the method used to translate the AP-MS data into a connectivity matrix, our algorithm has thus performed relatively well in predicting direct interactions from real AP-MS data.

Discussion and conclusion
The approaches for determining bait-prey abundance remain in their infancy, and to date, no large-scale PPI network comes with this type of quantitative data. As these approaches gain in accuracy, so will the results of our approach. Furthermore, as the sensitivity of AP-MS pipelines improves, the fraction of indirect interactions detected will also increase, thereby making the ability to distinguish them even more critical. In this paper, we lay the foundations for modelling the indirect interactions in AP-MS experiments. We formulate the DIGCOM problem, which aims at distinguishing direct interactions from indirect ones, and provide a set of theoretical and heuristic approaches that are shown to be highly accurate on both biological PPI networks and simulated networks. Despite the unrealistic assumptions that should eventually be relaxed, our results show that the predicted set of interactions fits the experimental data reasonably well. In addition, applying our algorithms to a large-scale AP-MS data set from Krogan et al. results in predictions that overlap Y2H data approximately 35% more often than the equivalent number of top-scoring interactions reported by these authors.
The DIGCOM problems raise a number of challenging, yet fascinating computational and mathematical problems to investigate. Is the solution to the exact DIGCOM problem, if it exists, always unique? We suspect it is. What is the computational complexity of the exact and approximate DIGCOM problems?
We believe they are NP-hard, and possibly not even in NP. Are there types of graph substructures, other than those discussed here, that can be unambiguously inferred from P G ? Are there special properties of PPI networks, other than the power-law degree distribution, of which an algorithm can take advantage to make more accurate predictions and/or provide approximation or probabilistic guarantees?

[Table 5 caption] Running times of our method on the model networks for each phase of the algorithm: (i) detecting 1-cut vertices and degree-2 vertices; (ii) predicting quasi-clique clusters; and (iii) running the genetic algorithm. We report the average run time over three runs on each network. The implementation was tested on a Powermac G5 2 GHz with 4 GB of RAM. Note that the genetic algorithm was run for a fixed amount of time, and the top-scoring candidates achieved the quality shown in Table 3. The times are shown in hh.mm.ss format.
The model and algorithms proposed here are only a first step toward an accurate detection of direct interactions from AP-MS data. Several generalizations and improvements are worth investigating. First, the abundance of an interaction is not constant and needs to be modelled more accurately. Second, the strength of all physical interactions is non-uniform, and some interactions may be more prone to disruption by the affinity purification process than others. Given sufficient quantitative AP-MS data, one may study a generalization of the DIGCOM problem that aims at identifying not only the set of direct interactions, but also their individual strengths and abundances. While modelling these aspects is in theory possible, the amount and quality of experimental data required is currently unavailable, and the computational complexity of the resulting problems is likely to be daunting.
Perhaps a more significant limitation of our model is that all direct interactions are assumed to occur simultaneously, though it is clear that certain interactions are either mutually exclusive, or restricted to specific subcellular compartments or conditions. We are currently investigating approaches to decompose the observed network into a family of simultaneously occurring interactions in such a way that the observed interaction abundances are the sum of the direct and indirect interactions over all cell compartments and conditions. However, it is clear that complementary experimental data, such as comprehensive protein localization assays or cell cycle expression data, would be required to reduce the space of possible solutions in a biologically meaningful manner.
An additional assumption that may need to be relaxed is the independence of the edge failures, which may not hold in cases where the loss of an interaction between two proteins causes a significant destabilization of the larger complex they belong to. Unfortunately, in the presence of strong dependencies between edge failures, it becomes almost impossible to distinguish direct from indirect interactions. Nonetheless, it may be possible to at least identify complexes where such dependencies hold, by studying subsets of proteins for which the AP-MS data differs significantly from our model.
In conclusion, this paper opens the door to a number of fascinating modelling and algorithmic questions that will lead to important implications in systems biology. Any improvements in tackling these questions would take us one step further towards this goal.

Methods
In this section, we describe the algorithmic details of our approach to the DIGCOM problem.

Identification of weakly connected vertices
The algorithm to identify 1-cut vertices is straightforward given Theorem 1: recursively find edges satisfying the conditions in Theorem 1, and decompose the input matrix into two independent subproblems. On the other hand, the conditions in Theorem 2 yield Algorithm 1, which predicts the set of degree-2 vertices as well as the edges adjacent to them. It is easy to see that the algorithm to identify 1-cut vertices runs in time O(|V|^4) and Algorithm 1 runs in time O(|V|^2).
Note that the identification algorithm for 1-cut vertices allows us to remove these vertices, i.e., the corresponding rows and columns in the input matrix. This is possible due to the fact that removing a cut edge does not change the connectivity between any two nodes on the same side of the cut. On the other hand, we cannot simply remove degree-2 vertices without affecting the remaining entries in the matrix. Therefore, as shown in Algorithm 1, degree-2 vertices and their incident edges will be marked as such in the solution, but are not removed from the input matrix.

Identification of densely connected regions
Densely connected regions are identified using a clique-cover algorithm (see Algorithm 2). We note that the algorithm is guaranteed to identify all cliques of size k' ≥ k contained within G direct . However, sets of vertices that do not form a k-clique may also be reported, provided that they are sufficiently connected among themselves, possibly via vertices outside the set. For sufficiently large values of k, however, we found this to be a very rare occurrence. While finding cliques in a graph is a computationally intensive task in general, the construction of G t for large values of k creates only a few small connected components and leaves the remaining vertices isolated. Therefore, in practice, Algorithm 2 can be implemented to run in a reasonable amount of time.
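To make the thresholding idea concrete, the sketch below builds a graph G_t by keeping only vertex pairs whose observed connectivity exceeds a threshold t, and then enumerates maximal cliques of size at least k with a basic Bron-Kerbosch recursion. This is a hedged illustration only: the paper's Algorithm 2 is not reproduced here, and the function name, dictionary-of-dictionaries matrix representation, and parameters k and t are all illustrative.

```python
def dense_regions(M, k, t):
    """Sketch: threshold the connectivity matrix M (dict of dicts) at t,
    then report maximal cliques of size >= k via plain Bron-Kerbosch.
    The exact construction used by the paper's Algorithm 2 is assumed."""
    verts = sorted(M)
    adj = {u: {v for v in verts if v != u and M[u][v] >= t} for u in verts}
    cliques = []

    def bron_kerbosch(R, P, X):
        if not P and not X:
            if len(R) >= k:          # only keep sufficiently large cliques
                cliques.append(frozenset(R))
            return
        for v in list(P):
            bron_kerbosch(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
        # no pivoting; adequate for the small components G_t produces

    bron_kerbosch(set(), set(verts), set())
    return cliques
```

With a high threshold, most vertices become isolated in G_t, which is what keeps the enumeration fast in practice.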

Cut-based genetic algorithm
The genetic algorithm aims at solving a generalization of the A-DIGCOM problem. First, we allow each edge (u, v) in the network to survive with a non-uniform probability p̂(u, v), instead of a single probability p̂ over all edges. Secondly, we assume that we are given two sets of edges E YES and E NO that indicate the edges guaranteed to be in the solution and guaranteed not to be in the solution, respectively. This will later allow us to factor in the outcome of the previous sections. The edges whose presence remains to be determined form the set E MAYBE .

Encoding of candidate solutions
To represent a candidate solution, we first create a hash table that maps each putative edge in E MAYBE to an integer. Each candidate is then encoded as a list of integers (edges). Edges in E YES , which are part of all solutions, are not explicitly listed, in order to save space. Since the networks we consider are sparse (|E| = O(|V|)), such an encoding technique significantly reduces the space requirements.
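The encoding can be sketched in a few lines. The function names and the canonical (u, v) ordering with u < v are illustrative choices, not details from the paper.

```python
def build_edge_index(e_maybe):
    """Map each undirected putative edge in E_MAYBE to a unique integer."""
    canon = sorted(tuple(sorted(e)) for e in e_maybe)
    return {e: i for i, e in enumerate(canon)}

def encode(candidate_edges, edge_index):
    """Encode a candidate's chosen E_MAYBE edges as a sorted integer list.
    Edges in E_YES are implicit in every candidate and never stored."""
    return sorted(edge_index[tuple(sorted(e))] for e in candidate_edges)

def decode(code, edge_index):
    """Recover the explicit edge set from its integer encoding."""
    inv = {i: e for e, i in edge_index.items()}
    return [inv[i] for i in code]
```

Since |E| = O(|V|) in the sparse networks considered, each candidate occupies only a short list of integers.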

Initial population
The initial population of candidates is generated using a preferential attachment model [30], based on the following observations: (i) the average connectivity of a vertex u is strongly positively correlated with the degree of u in G direct ; (ii) the age of a vertex, measured by when the vertex was introduced to the graph, is positively correlated with the degree of the vertex. Therefore, during the generation of each candidate, we choose the next vertex to be added with probability proportional to its average connectivity. This results in a candidate solution where the degree of most vertices is likely to be close to their true degree in G direct . Furthermore, in order to create candidates that are clustered similarly to the true direct interaction graph, we include the set of edges predicted by Algorithm 2 in each initial candidate.
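A minimal sketch of this seeding strategy follows, assuming the connectivity matrix M is a dict of dicts. The helper names and the exact attachment rule (each new vertex attaching to m earlier vertices chosen by current degree) are assumptions filling in details the text leaves open.

```python
import random

def avg_connectivity(M, u):
    """Average connectivity of vertex u in the observed matrix M."""
    vals = [M[u][v] for v in M[u] if v != u]
    return sum(vals) / len(vals)

def initial_candidate(M, seed_edges, m=1):
    """Grow one candidate: vertices enter in an order biased by average
    connectivity, and each new vertex attaches to m earlier vertices
    chosen proportionally to their current degree (preferential attachment).
    seed_edges are the dense-region edges predicted by Algorithm 2."""
    verts = list(M)
    pool, w = verts[:], [avg_connectivity(M, u) for u in verts]
    order = []
    while pool:                                   # weighted order of insertion
        i = random.choices(range(len(pool)), weights=w)[0]
        order.append(pool.pop(i))
        w.pop(i)
    edges = {tuple(sorted(e)) for e in seed_edges}
    deg = {u: 0 for u in verts}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    for i, u in enumerate(order[1:], start=1):
        prev = order[:i]
        targets = random.choices(prev, weights=[deg[t] + 1 for t in prev], k=m)
        for t in set(targets):
            e = tuple(sorted((u, t)))
            if e not in edges:
                edges.add(e)
                deg[u] += 1
                deg[t] += 1
    return edges
```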

Fitness function
The fitness of a candidate solution G, fitness(G), is obtained by first estimating the probability matrix P G using 500 Monte Carlo samples, and then counting the number of vertex pairs (u, v) whose estimated connectivity P G (u, v) lies within M(u, v) ± δ for the tolerance level δ (see below for how δ is chosen).
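The fitness computation can be sketched as follows, assuming a uniform survival probability p_hat and a union-find pass per Monte Carlo sample; all names are illustrative, and the sample count is a parameter rather than the fixed 500 used in the paper.

```python
import random

def sample_connectivity(edges, vertices, p_hat, n_samples=500):
    """Estimate P_G(u, v) for all pairs: in each sample every edge survives
    independently with probability p_hat, and connectivity is read off the
    connected components of the surviving graph via union-find."""
    vs = sorted(vertices)
    counts = {}
    for _ in range(n_samples):
        parent = {v: v for v in vs}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in edges:
            if random.random() < p_hat:       # edge survives this sample
                parent[find(u)] = find(v)
        root = {v: find(v) for v in vs}
        for i, u in enumerate(vs):
            for v in vs[i + 1:]:
                if root[u] == root[v]:
                    counts[(u, v)] = counts.get((u, v), 0) + 1
    return {pair: c / n_samples for pair, c in counts.items()}

def fitness(edges, vertices, M, p_hat, delta, n_samples=500):
    """Count vertex pairs whose estimated connectivity is within M(u,v) +/- delta."""
    P = sample_connectivity(edges, vertices, p_hat, n_samples)
    vs = sorted(vertices)
    return sum(1 for i, u in enumerate(vs) for v in vs[i + 1:]
               if abs(P.get((u, v), 0.0) - M[u][v]) <= delta)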

Crossover
The crossover operation hybridizes two parent candidates to produce offspring that preserve the good properties of the parents. The operation is guided by a randomly chosen balanced cut V = V 1 ∪ V 2 . Let G 1 and G 2 denote the two parent networks, and let E 1 (G i ) and E 2 (G i ) denote the edges of G i whose endpoints both lie in V 1 and in V 2 , respectively. Furthermore, let E 1,2 (G i ) denote the edges of G i that cross from V 1 to V 2 . Mating G 1 and G 2 results in two children G' = (V, E') and G'' = (V, E'') such that
E' = E 1 (G 1 ) ∪ E 2 (G 2 ) ∪ E 1,2 (G 1 ) and E'' = E 1 (G 2 ) ∪ E 2 (G 1 ) ∪ E 1,2 (G 2 ).
While choosing a random cut as the crossover point is a reasonable strategy to construct a new pair of offspring, our studies have shown that a planned strategy for choosing the crossover points results in better performance and less chance of premature convergence. In particular, if the crossover point is chosen at a dense cut in the parent networks, then the connectivity among vertices within each partition deteriorates significantly, resulting in offspring with much poorer fitness than their parents. On the other hand, if the parents are hybridized at a sparse cut, the connectivity among vertices within each partition remains more localized. Therefore, crossover operations are best done by selecting sparse balanced cuts (|V 1 | ≈ |V 2 |). Finding sparse balanced cuts is a well-studied problem in combinatorial optimization, for which various approximation algorithms exist [38,39]. However, these algorithms assume that the graph itself, not the connectivity matrix M, is given as input. We therefore use a simple heuristic that avoids cutting through the dense regions of the network. To generate these sparse cuts, we contract each dense region identified in Algorithm 2 to a single vertex, and then generate balanced partitionings of the vertices at random, weighted by the number of vertices in each dense region.
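A sketch of the cut-based crossover itself is below. The particular assignment of the crossing edges E 1,2 to each child is one plausible reading of the scheme, not necessarily the authors' exact rule; edges are assumed to be canonical (u, v) tuples.

```python
def crossover(E1, E2, V1, V2):
    """Cut-based crossover over the balanced cut V = V1 | V2. Child 1 takes
    parent 1's V1-internal edges, parent 2's V2-internal edges, and parent 1's
    crossing edges; child 2 takes the complementary combination. (Assumption:
    each child inherits crossing edges from one fixed parent.)"""
    def split(E):
        in1 = {e for e in E if e[0] in V1 and e[1] in V1}
        in2 = {e for e in E if e[0] in V2 and e[1] in V2}
        return in1, in2, set(E) - in1 - in2   # internal-1, internal-2, crossing
    a1, a2, ax = split(E1)
    b1, b2, bx = split(E2)
    return a1 | b2 | ax, b1 | a2 | bx
```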

Mutation
In order to introduce variability to the population of candidates, a small number of edges (5~10%) are randomly inserted or deleted. Moreover, observe that the child network constructed as above may not remain connected. Aside from the random mutation, therefore, we employ a simple local search that greedily adds edges to keep the network connected.
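The mutation and repair steps can be sketched as follows. Flipping a fixed fraction of vertex pairs and chaining leftover components together are simplifications of the random edge insertion/deletion and greedy connectivity repair described above; the names and the default rate are illustrative.

```python
import random

def mutate(edges, vertices, rate=0.075):
    """Flip roughly `rate` of the vertex pairs (insert if absent, delete if
    present), then greedily add edges between remaining components until the
    candidate network is connected again."""
    edges = set(edges)
    vs = sorted(vertices)
    pairs = [(u, v) for i, u in enumerate(vs) for v in vs[i + 1:]]
    for e in random.sample(pairs, max(1, int(rate * len(pairs)))):
        edges.symmetric_difference_update({e})
    # repair: find components with union-find, then chain them together
    parent = {v: v for v in vs}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    roots = sorted({find(v) for v in vs})
    for a, b in zip(roots, roots[1:]):        # one edge per component gap
        edges.add(tuple(sorted((a, b))))
    return edges
```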

Genetic algorithm parameter selection
The various parameters of the genetic algorithm were selected based on the resulting performance on the Yu et al. data set. The two main parameters that affect performance significantly are the population size and the selection criterion. We tested several selection criteria by varying the probability of choosing a candidate as a parent. The best compromise between running time and accuracy was obtained using a population size of 500 and a parent-selection probability proportional to fitness(c i ) - minFit, where minFit is the fitness of the worst candidate in the population (data not shown).

Restricting the solution space
While our genetic algorithm offers a plausible method for the A-DIGCOM problem, one can use the results of Theorems 1 and 2 to reduce the size of the solution space, which typically results in faster convergence to better solutions. First, recall that finding all cut edges decomposes the problem into independent subproblems on 2-edge-connected components. Second, the identification of degree-2 vertices defines two sets of edges E YES and E NO that constitute all putative edges incident to the identified degree-2 vertices. In other words, E MAYBE forms the subgraph of G induced by the set V 3+ of vertices with degree ≥ 3. Furthermore, observe that the edges in E YES form parallel paths between vertices in V 3+ . A classical result in network reliability (see Fact 2 in the Appendix) shows that these parallel paths can be merged into a single meta-edge whose reliability can be efficiently computed. More formally, let  (u, v) = {P 1 (u, v), P 2 (u, v), ..., P k (u, v)} be the set of paths between u and v in E YES . These paths can then be replaced by a single edge (u, v) with survival probability p̂(u, v) = 1 - ∏ i=1..k (1 - p̂^|P i (u, v)|), where |P i (u, v)| denotes the length of path P i (u, v). By merging every set of parallel paths, we obtain a compact network over V 3+ that efficiently encodes the edges in E YES . Since our genetic algorithm handles the case where the edge survival probability is non-uniform, this compact encoding results in substantial gains in running time for estimating the fitness of the candidates, as well as in the time and space requirements for handling large population sizes. In our applications, this allows us to remove approximately 70~75% of the original set of vertices.
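The series-parallel merge follows directly from the two classical reliability rules: a path survives only if all its edges do (series), and a bundle of independent parallel paths fails only if every path fails. A minimal sketch, with an illustrative function name:

```python
def merged_edge_probability(path_lengths, p_hat):
    """Survival probability of the meta-edge replacing k parallel paths
    between u and v, where each path of length L survives with probability
    p_hat**L and paths fail independently."""
    fail_all = 1.0
    for L in path_lengths:
        fail_all *= 1.0 - p_hat ** L    # this path fails with prob 1 - p^L
    return 1.0 - fail_all               # meta-edge survives unless all fail
```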

Randomized hill-climbing algorithm
For performance comparisons, we tested our algorithm against a simple randomized hill-climbing approach. In this approach, we start with a randomly chosen spanning tree G 1 of the vertex set V. At the i-th iteration, we first estimate the connectivity probabilities P G i of G i using Monte Carlo simulation. Then, we randomly pick a vertex pair (u, v) with probability proportional to the discrepancy between M(u, v) and P G i (u, v); if M(u, v) > P G i (u, v), then we add (u, v) to G i , and if (u, v) belongs to G i but M(u, v) < P G i (u, v), then we remove it. We repeat this local optimization heuristic while making sure the candidate solution remains connected.
Choosing a tolerance level δ and handling numerical errors
In order to deal with numerical errors from Monte Carlo sampling, we use a well-defined tolerance level δ as an additive error. Note that the sampling process for estimating the probability matrix P G is a binomial process, which, by the central limit theorem, is closely approximated by a normal distribution. The confidence interval is largest when the estimated probability is equal to 0.5, in which case we obtain a confidence interval of p̂ ± z 1-α/2 ·√(p̂(1 - p̂)/n), where p̂ denotes the fraction of samples in which the two vertices are connected after n samples, and z 1-α/2 is the z-value for the desired level of confidence. Using this formula, we can conclude:
1. When n = 20000 (computation of our input matrix M from test networks), we obtain a 95% confidence interval of size at most 2·δ = 2·0.007 = 0.014.
2. When n = 500 (computation of the connectivity matrix for each candidate solution in our genetic algorithm), the 95% confidence interval is of size at most 2·δ = 2·0.04 = 0.08.
With the chosen tolerance level δ, we modify our algorithm appropriately each time we compare two connectivity probabilities. For example, in Theorem 1, the first condition P G (u, v) = p̂ is modified to P G (u, v) ∈ [p̂ - δ, p̂ + δ]; and in Theorem 2, we modify the first condition P G (s, t) < 2p̂ - p̂^2 to P G (s, t) < 2p̂ - p̂^2 + δ.
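For reference, the half-width behind the two δ values is a one-line computation of the normal-approximation binomial confidence interval, evaluated at the worst case p = 0.5:

```python
import math

def half_width(n, z=1.96, p=0.5):
    """Half-width of the normal-approximation confidence interval for a
    binomial proportion: z * sqrt(p*(1-p)/n), largest at p = 0.5."""
    return z * math.sqrt(p * (1.0 - p) / n)

# half_width(20000) is about 0.0069, rounded up to delta = 0.007
# half_width(500) is about 0.044, matching delta = 0.04
```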

Generation of scale-free networks
In order to generate artificial scale-free networks, we used two generation models: the preferential attachment model and the duplication model. In the preferential attachment model, we evaluated the degree distribution of the two biological networks (G Yu and G DIP ) and used the Barabási-Albert algorithm to construct a scale-free network with attachment factor 1.5 (each iteration adds a new vertex with 1~2 edges attached to existing vertices). In the duplication model, at each iteration, we randomly pick a vertex to duplicate with probability proportional to its degree, and randomly drop the duplicated edges with probability 0.5 in order to fit the degree distributions and sparsity of biological networks.
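The duplication model can be sketched as follows. Starting from a single edge and falling back to one random neighbour when every copied edge happens to be dropped are assumptions made to keep the sketch connected, not details from the paper.

```python
import random

def duplication_model(n, keep=0.5):
    """Grow a network by vertex duplication: pick a vertex to duplicate with
    probability proportional to its degree, copy its edges, and keep each
    copied edge independently with probability `keep`. (Assumption: if every
    copied edge is dropped, attach the duplicate to one random vertex.)"""
    adj = {0: {1}, 1: {0}}                      # seed: a single edge
    while len(adj) < n:
        verts = list(adj)
        src = random.choices(verts, weights=[len(adj[v]) for v in verts])[0]
        new = len(adj)
        kept = {u for u in adj[src] if random.random() < keep}
        if not kept:                            # avoid an isolated duplicate
            kept = {random.choice(verts)}
        adj[new] = kept
        for u in kept:
            adj[u].add(new)
    return adj
```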

Calculation of connectivity matrix from peptide counts
The peptide count of a prey protein in an AP-MS experiment is the number of different peptides observed by MS for that protein. We note that peptide counts are biased towards preys with longer protein sequences; to correct for this bias, we normalized the abundance data by the protein sequence lengths to obtain the abundance ratios R(i, j). In order to turn the normalized abundance ratios into the connectivity matrix for our probabilistic graph model, we used a simple logistic function whose parameters a, b were chosen so that the computed distribution of p̂ fits the simulated connectivity distribution of G Yu , using a χ2 test (a = 2.8921, b = -0.6318). In the cases where R(i, j) differs from R(j, i), we take the average of the two entries to symmetrize the matrix.
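A sketch of this conversion, assuming the logistic takes the standard form 1/(1 + exp(-(a·R + b))) and that counts are normalized by the prey's sequence length; the paper reports only the fitted parameters, so both the exact functional form and the normalization detail are assumptions here.

```python
import math

def connectivity_from_counts(counts, lengths, a=2.8921, b=-0.6318):
    """Convert peptide counts to a symmetric connectivity matrix:
    normalize counts[i][j] by the prey's sequence length to get R(i, j),
    symmetrize by averaging R(i, j) and R(j, i), then squash through an
    assumed logistic 1 / (1 + exp(-(a*R + b)))."""
    n = len(lengths)
    R = [[counts[i][j] / lengths[j] for j in range(n)] for i in range(n)]
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            r = (R[i][j] + R[j][i]) / 2.0           # symmetrize
            M[i][j] = 1.0 / (1.0 + math.exp(-(a * r + b)))
    return M
```

(Diagonal entries are computed but unused; a real pipeline would mask them.)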

Appendix
We start with two basic results that will prove useful when proving more complex theorems. Let G 1 = (V 1 , E 1 ) and G 2 = (V 2 , E 2 ) be two graphs. Then the following are true.

Proof of Theorem 1
Proof. Necessity is trivial. For sufficiency, suppose the conditions (i) and (ii) hold, and (u, v) is not a cut edge. Then, to keep the graph connected, there must be an edge (s, t) ≠ (u, v) joining V u and V v . Since (s, t) is an