Aligning three sequences
Now that we have seen how to align a pair of sequences, it is natural to extend this idea to multiple sequences. Suppose we would like to find the optimal alignment of 3 sequences. How might we proceed?
Recall that when we align two sequences S and T, we choose the maximum of three possibilities for the final position of the alignment (sequence T aligned against a gap, sequence S aligned against a gap, or sequence S aligned against sequence T):
$F_{i, j}=\max \left\{\begin{array}{l} F_{i, j-1}+d \\ F_{i-1, j}+d \\ F_{i-1, j-1}+s\left(S_{i}, T_{j}\right) \end{array}\right. \nonumber$
For three sequences S, T, and U, there are seven possibilities for the final position of the alignment. That is, there are three ways to have two gaps in the final position, three ways to have one gap, and one way to have all three sequences aligned $\left(\binom{3}{1}+\binom{3}{2}+\binom{3}{3}=7\right)$. The update rule is now:
$F_{i, j, k}=\max \left\{\begin{array}{l} F_{i-1, j, k}+s\left(S_{i},-,-\right) \\ F_{i, j-1, k}+s\left(-, T_{j},-\right) \\ F_{i, j, k-1}+s\left(-,-, U_{k}\right) \\ F_{i-1, j-1, k}+s\left(S_{i}, T_{j},-\right) \\ F_{i-1, j, k-1}+s\left(S_{i},-, U_{k}\right) \\ F_{i, j-1, k-1}+s\left(-, T_{j}, U_{k}\right) \\ F_{i-1, j-1, k-1}+s\left(S_{i}, T_{j}, U_{k}\right) \end{array}\right. \nonumber$
where s is the function describing gap, match, and mismatch scores.
This approach, however, is exponential in the number of sequences we are aligning. If we have k sequences of length $n$, computing the optimal alignment using a k-dimensional dynamic programming matrix takes $O\left((2 n)^{k}\right)$ time (the factor of 2 results from the fact that a k-dimensional cube has $2^{k}$ vertices, so we need to take the maximum of $2^{k}-1$ neighboring cells for each entry in the score matrix). As you can imagine, this algorithm quickly becomes impractical as the number of sequences increases.
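To make the recurrence concrete, below is a minimal Python sketch of this three-sequence dynamic program. The sum-of-pairs column score and the specific match/mismatch/gap values are illustrative assumptions, not the text's exact scoring scheme.

```python
# Minimal sketch of the dynamic program for k = 3 sequences.
# The scoring function (sum-of-pairs with unit match/mismatch/gap scores)
# is an illustrative assumption, not the book's exact scheme.

def pair_score(a, b, match=1, mismatch=-1, gap=-2):
    if a == '-' or b == '-':
        return gap
    return match if a == b else mismatch

def s(a, b, c):
    """Sum-of-pairs score of one aligned column (a, b, c)."""
    return pair_score(a, b) + pair_score(a, c) + pair_score(b, c)

def align3_score(S, T, U):
    n1, n2, n3 = len(S), len(T), len(U)
    NEG = float('-inf')
    F = [[[NEG] * (n3 + 1) for _ in range(n2 + 1)] for _ in range(n1 + 1)]
    F[0][0][0] = 0
    for i in range(n1 + 1):
        for j in range(n2 + 1):
            for k in range(n3 + 1):
                if i == j == k == 0:
                    continue
                best = NEG
                # 2^3 - 1 = 7 predecessor cells, one per non-empty gap pattern
                for di in (0, 1):
                    for dj in (0, 1):
                        for dk in (0, 1):
                            if di == dj == dk == 0:
                                continue
                            pi, pj, pk = i - di, j - dj, k - dk
                            if pi < 0 or pj < 0 or pk < 0:
                                continue
                            col = (S[pi] if di else '-',
                                   T[pj] if dj else '-',
                                   U[pk] if dk else '-')
                            best = max(best, F[pi][pj][pk] + s(*col))
                F[i][j][k] = best
    return F[n1][n2][n3]

print(align3_score("ACGT", "AGT", "ACT"))
```

Even this small example fills a table of size (n1+1)(n2+1)(n3+1) and inspects 2^3 - 1 = 7 predecessors per cell, which illustrates why the approach does not scale to many sequences.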
Heuristic multiple alignment
One commonly used approach for multiple sequence alignment is called progressive multiple alignment. Assume that we know the evolutionary tree relating each of our sequences. Then we begin by performing a pairwise alignment of the two most closely-related sequences. This initial alignment is called the seed alignment. We then proceed to align the next closest sequence to the seed, and this new alignment replaces the seed. This process continues until the final alignment is produced.
In practice, we generally do not know the evolutionary tree (or guide tree), so this technique is usually paired with some sort of clustering algorithm that may use a low-resolution similarity measure to generate an estimation of the tree.
While the running time of this heuristic approach is much improved over the previous method (polynomial in the number of sequences rather than exponential), we can no longer guarantee that the final alignment is optimal.
Note that we have not yet explained how to align a sequence against an existing alignment. One possible approach would be to perform pairwise alignments of the new sequence with each sequence already in the seed alignment (we assume that any position in the seed alignment that is already a gap will remain one). Then we can add the new sequence onto the seed alignment based on the best pairwise alignment (this approach was previously described by Feng and Doolittle[4]). Alternatively, we can devise a function for scoring the alignment of a sequence with another alignment (such scoring functions are often based on the pairwise sum of the scores at each position).
Design of better multiple sequence alignment tools is an active area of research. Section 2.7 details some of the current work in this field.
2.07: Tools and Techniques
Lalign finds local alignments between two sequences. Dotlet is a browser-based Java applet for visualizing the alignment of two sequences in a dot-matrix.
The following tools are available for multiple sequence alignment:
• Clustal Omega - A multiple sequence alignment program that uses seeded guide trees and HMM profile-profile techniques to generate alignments.[10]
• MUSCLE - MUltiple Sequence Comparison by Log-Expectation[3]
• T-Coffee - Allows you to combine results obtained with several alignment methods[2]
• MAFFT - (Multiple Alignment using Fast Fourier Transform) is a high speed multiple sequence alignment program[5]
• Kalign - A fast and accurate multiple sequence alignment algorithm[9]
Homology
One of the key goals of sequence alignment is to identify homologous sequences (e.g., genes) in a genome. Two sequences that are homologous are evolutionarily related, specifically by descent from a common ancestor. The two primary types of homologs are orthologs and paralogs (refer to Figure 2.14). Other forms of homology exist (e.g., xenologs), but they are outside the scope of these notes.
Orthologs arise from speciation events, leading to two organisms with a copy of the same gene. For example, when a single species A speciates into two species B and C, there are genes in species B and C that descend from a common gene in species A, and these genes in B and C are orthologous (the genes continue to evolve independently of each other, but still perform the same relative function).
Paralogs arise from duplication events within a species. For example, when a gene duplication occurs in some species A, the species has an original gene B and a gene copy B′, and the genes B and B′ are paralogous. Generally, orthologous sequences between two species will be more closely related to each other than paralogous sequences. This occurs because orthologs typically (although not always) preserve function over time, whereas paralogs often change over time, for example by specializing a gene’s (sub)function or by evolving a new function. As a result, determining orthologous sequences is generally more important than identifying paralogous sequences when gauging evolutionary relatedness.
Natural Selection
The topic of natural selection is too large to summarize effectively in just a few short paragraphs; instead, this appendix introduces three broad types of natural selection: positive selection, negative selection, and neutral selection.
• Positive selection occurs when a trait is evolutionarily advantageous and increases an individual’s fitness, so that an individual with the trait is more likely to have (robust) offspring. It is often associated with the development of new traits.
• Negative selection occurs when a trait is evolutionarily disadvantageous and decreases an individual’s fitness. Negative selection acts to reduce the prevalence of genetic alleles that reduce a species’ fitness. Negative selection is also known as purifying selection due to its tendency to ’purify’ genetic alleles until only the most successful alleles exist in the population.
• Neutral selection describes evolution that occurs randomly, as a result of alleles not affecting an individual’s fitness. In the absence of selective pressures, no positive or negative selection occurs, and the result is neutral selection.
Dynamic Programming v. Greedy Algorithms
Dynamic programming and greedy algorithms are somewhat similar, and it behooves one to know the distinctions between the two. Problems that may be solved using dynamic programming are typically optimization problems that exhibit two traits: 1. optimal substructure and 2. overlapping subproblems.
Problems solvable by greedy algorithms require both these traits as well as (3) the greedy choice property. When dealing with a problem “in the wild,” it is often easy to determine whether it satisfies (1) and (2) but difficult to determine whether it must have the greedy choice property. It is not always clear whether locally optimal choices will yield a globally optimal solution.
For computational biologists, there are two useful points to note concerning whether to employ dynamic programming or greedy programming. First, if a problem may be solved using a greedy algorithm, then it may be solved using dynamic programming, while the converse is not true. Second, the problem structures that allow for greedy algorithms typically do not appear in computational biology.
To elucidate this second point, it could be useful to consider the structures that allow greedy programming to work, but such a discussion would take us too far afield. The interested student (preferably one with a mathematical background) should look at matroids and greedoids, which are structures that have the greedy choice property. For our purposes, we will simply state that biological problems typically involve entities that are highly systemic and that there is little reason to suspect sufficient structure in most problems to employ greedy algorithms.
Pseudocode for the Needleman-Wunsch Algorithm
The first problem in the first problem set asks you to finish an implementation of the Needleman-Wunsch (NW) algorithm, and working Python code for the algorithm is intentionally omitted. Instead, this appendix summarizes the general steps of the NW algorithm (Section 2.5) in a single place.
Problem: Given two sequences S and T of length m and n, a substitution matrix of matching scores, and a gap penalty G, determine the optimal alignment of S and T and the score of the alignment.
Algorithm:
• Create two m + 1 by n + 1 matrices A and B. A will be the scoring matrix, and B will be the traceback matrix. The entry (i, j) of matrix A will hold the score of the optimal alignment of the sequences S[1, . . . , i] and T [1, . . . , j], and the entry (i, j) of matrix B will hold a pointer to the entry from which the optimal alignment was built.
• Initialize the first row and column of the score matrix A such that the scores account for gap penalties, and initialize the first row and column of the traceback matrix B in the obvious way.
• Go through the entries (i, j) of matrix A in some reasonable order, determining the optimal alignment of the sequences S[1,...,i] and T[1,...,j] using the entries (i − 1,j − 1), (i − 1,j), and (i,j − 1). Set the pointer in the matrix B to the corresponding entry from which the optimal alignment at (i,j) was built.
• Once all entries of matrices A and B are completed, the score of the optimal alignment may be found in entry (m, n) of matrix A.
• Construct the optimal alignment by following the path of pointers starting at entry (m,n) of matrix B and ending at entry (0, 0) of matrix B.
2.09: Bibliography
[1] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, London, third edition, 2009.
[2] Paolo Di Tommaso, Sebastien Moretti, Ioannis Xenarios, Miquel Orobitg, Alberto Montanyola, Jia-Ming Chang, Jean-François Taly, and Cedric Notredame. T-Coffee: a web server for the multiple sequence alignment of protein and RNA sequences using structural information and homology extension. Nucleic Acids Research, 39(Web Server issue):W13–W17, 2011.
[3] Robert C Edgar. MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic acids research, 32(5):1792–7, January 2004.
[4] D F Feng and R F Doolittle. Progressive sequence alignment as a prerequisite to correct phylogenetic trees. Journal of Molecular Evolution, 25(4):351–360, 1987.
[5] Kazutaka Katoh, George Asimenos, and Hiroyuki Toh. Multiple alignment of DNA sequences with MAFFT. Methods In Molecular Biology Clifton Nj, 537:39–64, 2009.
[6] John D. Kececioglu and David Sankoff. Efficient bounds for oriented chromosome inversion distance. In Proceedings of the 5th Annual Symposium on Combinatorial Pattern Matching, CPM ’94, pages 307–325, London, UK, 1994. Springer-Verlag.
[7] Manolis Kellis. Dynamic programming practice problems. http://people.csail.mit.edu/bdean/6.046/dp/, September 2010.
[8] Manolis Kellis, Nick Patterson, Matthew Endrizzi, Bruce Birren, and Eric S Lander. Sequencing and comparison of yeast species to identify genes and regulatory elements. Nature, 423(6937):241–254, 2003.
[9] Timo Lassmann and Erik L L Sonnhammer. Kalign–an accurate and fast multiple sequence alignment algorithm. BMC Bioinformatics, 6(1):298, 2005.
[10] Fabian Sievers, Andreas Wilm, David Dineen, Toby J Gibson, Kevin Karplus, Weizhong Li, Rodrigo Lopez, Hamish McWilliam, Michael Remmert, Johannes Söding, Julie D Thompson, and Desmond G Higgins. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Molecular Systems Biology, 7(539):539, 2011.
[11] Zhaolei Zhang and Mark Gerstein. Patterns of nucleotide substitution, insertion and deletion in the human genome inferred from pseudogenes. Nucleic Acids Research, 31(18):5338–5348, 2003.
In this section we explored alignment algorithms beyond global alignment. We began by reviewing our use of dynamic programming to solve global alignment problems using the Needleman-Wunsch algorithm. We then explored the alternatives of local (Smith-Waterman) and semi-global alignment. We then discussed using hash functions to match exact strings in linear time (Karp-Rabin) as well as doing a neighborhood search, investigating similar sequences in probabilistic linear time (pigeonhole principle, combs, two-hit BLAST, random projections). We have also addressed using pre-processing for linear-time string matching, as well as the probabilistic background for sequence alignment.
3.02: Introduction
In the previous chapter, we used dynamic programming to compute sequence alignments in \(O(n^2)\). In particular, we learned the algorithm for global alignment, which matches complete sequences with one another at the nucleotide level. We usually apply this when the sequences are known to be homologous (i.e. the sequences come from organisms that share a common ancestor).
The biological significance of finding sequence alignments is to be able to infer the most likely set of evolutionary events such as point mutations/mismatches and gaps (insertions or deletions) that occurred in order to transform one sequence into the other. To do so, we first assume that the set of transformations with the lowest cost is the most likely sequence of transformations. By assigning costs to each transformation type (mismatch or gap) that reflect their respective levels of evolutionary difficulty, finding an optimal alignment reduces to finding the set of transformations that result in the lowest overall cost.
We achieve this by using a dynamic programming algorithm known as the Needleman-Wunsch algorithm. Dynamic programming uses optimal substructure to decompose a problem into similar sub-problems. The problem of finding a sequence alignment can be nicely expressed as a dynamic programming algorithm since alignment scores are additive, which means that the alignment of a larger sequence can be found by recursively finding the alignments of smaller subsequences. The scores are stored in a matrix, with one sequence corresponding to the columns and the other sequence corresponding to the rows. Each cell represents the transformation required between two nucleotides corresponding to the cell’s row and column. An alignment is recovered by tracing back through the dynamic programming matrix (shown below). The dynamic programming approach is preferable to a greedy algorithm that simply chooses the transition with minimum cost at each step because a greedy algorithm does not guarantee that the overall result will give the optimal or lowest-cost alignment.
To summarize the Needleman-Wunsch algorithm for global alignment:
We compute scores corresponding to each cell in the matrix and record our choice (memoization) at that step i.e. which one of the top, left or diagonal cells led to the maximum score for the current cell. We are left with a matrix full of optimal scores at each cell position, along with pointers at each cell reflecting the optimal choice that leads to that particular cell.
We can then recover the optimal alignment by tracing back from the cell in the bottom right corner (which contains the score of aligning one complete sequence with the other) by following the pointers reflecting locally optimal choices, and then constructing the alignment corresponding to an optimal path followed in the matrix.
The runtime of the Needleman-Wunsch algorithm is \(O(n^2)\), since for each cell in the matrix we do a finite amount of computation. We calculate 3 values using already computed scores and then take the maximum of those values to find the score corresponding to that cell, which is a constant time (\(O(1)\)) operation.
To guarantee correctness, it is necessary to compute the cost for every cell of the matrix. It is possible that the optimal alignment may be made up of a bad alignment (consisting of gaps and mismatches) at the start, followed by many matches, making it the best alignment overall. These are the cases that traverse the boundary of our alignment matrix. Thus, to guarantee the optimal global alignment, we need to compute every entry of the matrix.
Global alignment is useful for comparing two sequences that are believed to be homologous. It is less useful for comparing sequences with rearrangements or inversions or aligning a newly-sequenced gene against reference genes in a known genome, known as database search. In practice, we can also often restrict the alignment space to be explored if we know that some alignments are clearly sub-optimal.
This chapter will address other forms of alignment algorithms to tackle such scenarios. It will first introduce the Smith-Waterman algorithm for local alignment for aligning subsequences as opposed to complete sequences, in contrast to the Needleman-Wunsch algorithm for global alignment. Later on, an overview will be given of hashing and semi-numerical methods like the Karp-Rabin algorithm for finding the longest (contiguous) common substring of nucleotides. These algorithms are implemented and extended for inexact matching in the BLAST program, one of the most highly cited and successful tools in computational biology. Finally, this chapter will go over BLAST for database searching as well as the probabilistic foundation of sequence alignment and how alignment scores can be interpreted as likelihood ratios.
Outline:
1. Introduction
• Review of global alignment (Needleman-Wunsch)
2. Global alignment vs. Local alignment vs. Semi-global alignment
• Initialization, termination, and update rules for Global alignment (Needleman-Wunsch) vs. Local alignment (Smith-Waterman) vs. Semi-global alignment
• Varying gap penalties, algorithmic speedups
3. Linear-time exact string matching
• Karp-Rabin algorithm and semi-numerical methods
• Hash functions and randomized algorithms
4. The BLAST algorithm and inexact matching
• Hashing with neighborhood search
• Two-hit blast and hashing with combs
5. Pre-processing for linear-time string matching
• Fundamental pre-processing
• Suffix Trees
• Suffix Arrays
• The Burrows-Wheeler Transform
6. Probabilistic foundations of sequence alignment
• Mismatch penalties, BLOSUM and PAM matrices
• Statistical significance of an alignment score
A global alignment is defined as the end-to-end alignment of two strings s and t.
A local alignment of string s and t is an alignment of substrings of s with substrings of t.
In general, local alignments are used to find regions of high local similarity. Often, we are more interested in finding local alignments because we normally do not know the boundaries of genes and only a small domain of the gene may be conserved. In such cases, we do not want to enforce that other (potentially non-homologous) parts of the sequence also align. Local alignment is also useful when searching for a small gene in a large chromosome or for detecting when a long sequence may have been rearranged (Figure 4).
A semi-global alignment of string s and t is an alignment of a substring of s with a substring of t.
This form of alignment is useful for overlap detection when we do not wish to penalize starting or ending gaps. For finding a semi-global alignment, the important distinctions are to initialize the top row and leftmost column to zero and to terminate at either the bottom row or the rightmost column.
The algorithm is as follows:
$\begin{array}{ll} \text { Initialization }: & F(i, 0)=0 \\ & F(0, j)=0 \end{array} \nonumber$

$\text {Iteration}: \quad F(i, j)=\max \left\{\begin{array}{l} F(i-1, j)-d \\ F(i, j-1)-d \\ F(i-1, j-1)+s\left(x_{i}, y_{j}\right) \end{array}\right. \nonumber$
$\text{Termination : Bottom row or Right column} \nonumber$
Using Dynamic Programming for local alignments
In this section we will see how to find local alignments with a minor modification of the Needleman-Wunsch algorithm that was discussed in the previous chapter for finding global alignments.
To find global alignments, we used the following dynamic programming algorithm (Needleman-Wunsch algorithm):
$\text {Initialization}: F(0,0)=0 \nonumber$
$\text {Iteration}: \quad F(i, j)=\max \left\{\begin{array}{l} F(i-1, j)-d \\ F(i, j-1)-d \\ F(i-1, j-1)+s\left(x_{i}, y_{j}\right) \end{array}\right. \nonumber$
$\text{Termination : Bottom right} \nonumber$
For finding local alignments we only need to modify the Needleman-Wunsch algorithm slightly, starting over with a new local alignment whenever the existing alignment score goes negative. Since a local alignment can start anywhere, we initialize the first row and column in the matrix to zeros. The iteration step is modified to include a zero option, reflecting the possibility that starting a new alignment is cheaper than accumulating many mismatches. Furthermore, since the alignment can end anywhere, we need to traverse the entire matrix to find the optimal alignment score (not only in the bottom right corner). The rest of the algorithm, including traceback, remains unchanged, with the traceback ending at a cell with score zero, which marks the start of the optimal local alignment.
These changes result in the following dynamic programming algorithm for local alignment, which is also known as the Smith-Waterman algorithm:
$\begin{array}{ll} \text { Initialization }: & F(i, 0)=0 \\ & F(0, j)=0 \end{array} \nonumber$
$\text {Iteration}: \quad F(i, j)=\max \left\{\begin{array}{c} 0 \\ F(i-1, j)-d \\ F(i, j-1)-d \\ F(i-1, j-1)+s\left(x_{i}, y_{j}\right) \end{array}\right. \nonumber$
$\text {Termination}: \text { Anywhere } \nonumber$
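As a concrete illustration of these initialization, iteration, and termination rules, here is a minimal Python sketch of the local-alignment recurrence. The match/mismatch scores and gap penalty d are illustrative assumptions; it returns only the best score and its cell, and the traceback described above is omitted for brevity.

```python
# Minimal sketch of the Smith-Waterman recurrence above, assuming a simple
# match/mismatch scoring function and a linear gap penalty d (illustrative
# parameters, not the book's exact scheme).

def smith_waterman(x, y, match=2, mismatch=-1, d=2):
    m, n = len(x), len(y)
    F = [[0] * (n + 1) for _ in range(m + 1)]   # first row/column stay 0
    best, best_cell = 0, (0, 0)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s_ij = match if x[i - 1] == y[j - 1] else mismatch
            F[i][j] = max(0,                    # start a new local alignment
                          F[i - 1][j] - d,      # gap in y
                          F[i][j - 1] - d,      # gap in x
                          F[i - 1][j - 1] + s_ij)
            if F[i][j] > best:                  # termination: best cell anywhere
                best, best_cell = F[i][j], (i, j)
    return best, best_cell

score, cell = smith_waterman("ACACACTA", "AGCACACA")
print(score, cell)
```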
Algorithmic Variations
Sometimes it can be costly in both time and space to run these alignment algorithms. Therefore, this section presents some algorithmic variations to save time and space that work well in practice.
One method to save time is the idea of bounding the space of alignments to be explored. The idea is that good alignments generally stay close to the diagonal of the matrix. Thus we can just explore matrix cells within a radius of k from the diagonal. The problem with this modification is that it is a heuristic and can lead to a sub-optimal solution, as it doesn’t include the boundary cases mentioned at the beginning of the chapter. Nevertheless, this works very well in practice. In addition, depending on the properties of the scoring matrix, it may be possible to argue the correctness of the bounded-space algorithm. This algorithm requires $O(k ∗ m)$ space and $O(k ∗ m)$ time.
We saw earlier that in order to compute the optimal solution, we needed to store the alignment score in each cell as well as the pointer reflecting the optimal choice leading to each cell. However, if we are only interested in the optimal alignment score, and not the actual alignment itself, there is a method to compute the solution while saving space. To compute the score of any cell we only need the scores of the cell above, to the left, and to the left-diagonal of the current cell. By saving the previous and current column in which we are computing scores, the optimal solution can be computed in linear space.
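A minimal sketch of this two-column idea, assuming a simple linear gap penalty and match/mismatch scores, is shown below; it returns only the optimal global-alignment score, not the alignment itself.

```python
# Linear-space computation of the global alignment score only: we keep just
# the previous column and the current column of the DP matrix.
# Scoring parameters are illustrative assumptions.

def nw_score_linear_space(x, y, match=1, mismatch=-1, d=2):
    m, n = len(x), len(y)
    prev = [-d * i for i in range(m + 1)]        # column j = 0
    for j in range(1, n + 1):
        curr = [-d * j] + [0] * m                # row i = 0 of column j
        for i in range(1, m + 1):
            s_ij = match if x[i - 1] == y[j - 1] else mismatch
            curr[i] = max(prev[i] - d,           # gap in y
                          curr[i - 1] - d,       # gap in x
                          prev[i - 1] + s_ij)    # (mis)match
            # prev[i-1] is F(i-1, j-1) because prev holds column j-1
        prev = curr
    return prev[m]

print(nw_score_linear_space("GATTACA", "GCATGCU"))
```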
If we use the principle of divide and conquer, we can actually find the optimal alignment in linear space. The idea is that we compute the optimal alignments from both sides of the matrix, i.e. from the left to the right, and vice versa. Let $u=\left\lfloor\frac{n}{2}\right\rfloor$. Say we can identify v such that cell $(u, v)$ is on the optimal alignment path; that is, v is the row where the alignment crosses column u of the matrix. We can find the optimal alignment by concatenating the optimal alignments from (0, 0) to (u, v) and from (u, v) to (m, n), where (m, n) is the bottom-right cell (note: alignment scores of concatenated subalignments using our scoring scheme are additive). So we have isolated our problem to two separate problems in the top left and bottom right corners of the DP matrix. Then we can recursively keep dividing up these subproblems into smaller subproblems, until we are down to aligning 0-length sequences or our problem is small enough to apply the regular DP algorithm. To find v, the row in the middle column where the optimal alignment crosses, we simply add the incoming and outgoing scores for that column.
One drawback of this divide-and-conquer approach is that it has a longer runtime. Nevertheless, the runtime is not dramatically increased. Since v can be found using one pass of regular DP, we can find v for the middle column in $O(mn)$ time and linear space, since we don’t need to keep track of traceback pointers for this step. Then by applying the divide and conquer approach, the subproblems take half the time, since we only need to keep track of the cells diagonally along the optimal alignment path (half of the matrix of the previous step). That gives a total run time of $O\left(m n\left(1+\frac{1}{2}+\frac{1}{4}+\ldots\right)\right)=O(2 m n)=O(m n)$ (using the sum of a geometric series), giving us a quadratic run time (twice as slow as before, but still the same asymptotic behavior). The total time will never exceed $2mn$ (twice the time of the previous algorithm). Although the runtime is increased by a constant factor, one of the big advantages of the divide-and-conquer approach is that the space is dramatically reduced to $O(m+n)$.
Q: Why not use the bounded-space variation over the linear-space variation to get both linear time and linear space?
A: The bounded-space variation is a heuristic approach that can work well in practice but does not guarantee the optimal alignment.
Generalized gap penalties
Gap penalties determine the score calculated for a subsequence and thus affect which alignment is selected. The normal model is to use a linear gap penalty, where each individual gap in a sequence of gaps of length k is penalized equally with value p. This penalty can be modeled as $w(k) = k ∗ p$. Depending on the situation, it could be a good idea to penalize differently for, say, gaps of different lengths. One example of this is a penalty function in which the incremental penalty decreases quadratically as the size of the gap grows. This can be modeled as $w(k) = p + q ∗ k + r ∗ k^{2}$. However, the trade-off is that there is also a cost associated with using more complex gap penalty functions, which can substantially increase runtime. This cost can be mitigated by using simpler approximations to the gap penalty functions. The affine gap penalty is a fine intermediate: you have a fixed penalty to start a gap and a linear cost to add to a gap; this can be modeled as $w(k) = p + q ∗ k$.
You can also consider more complex functions that take into consideration the properties of protein coding sequences. In the case of protein coding region alignment, a gap whose length is a multiple of 3 can be penalized less because it would not result in a frame shift.
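The following small sketch illustrates the gap penalty models mentioned above as functions of the gap length k; the parameter values are arbitrary and chosen only for illustration.

```python
# Illustrative gap penalty functions w(k) for a gap of length k, using the
# forms described above (parameter values are arbitrary assumptions).

def linear_gap(k, p=2):
    return k * p                      # w(k) = k * p

def affine_gap(k, p=5, q=1):
    return p + q * k                  # w(k) = p + q * k (open + extend)

def quadratic_term_gap(k, p=5, q=1, r=0.1):
    return p + q * k + r * k ** 2     # w(k) = p + q*k + r*k^2

for k in (1, 5, 10):
    print(k, linear_gap(k), affine_gap(k), quadratic_term_gap(k))
```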
While we have looked at various forms of alignment and algorithms used to find such alignments, these algorithms are not fast enough for some purposes. For instance, we may have a 100 nucleotide sequence which we want to search for in the whole genome, which may be over a billion nucleotides long. In this case, we want an algorithm whose run-time depends on the length of the query sequence, possibly with some pre-processing on the database, because processing the entire genome for every query would be extremely slow. For such problems, we enter the realm of randomized algorithms, where instead of worrying about the worst-case performance, we are more interested in making sure that the algorithm is linear in the expected case. When looking for exact (consecutive) matches of a sequence, the Karp-Rabin algorithm interprets such a match numerically. There are many other solutions to this problem, some of which ensure that the problem is linear in the worst case, such as the Z-algorithm, the Boyer-Moore and Knuth-Morris-Pratt algorithms, and algorithms based on suffix trees, suffix arrays, etc. (discussed in the “Lecture 3 addendum” slides).
Karp-Rabin Algorithm
This algorithm tries to match a particular pattern to a string, which is the basic principle of database search. The problem is as follows: in a text T of length n we are looking for a pattern P of length m. Strings are mapped to numbers to enable fast comparison. A naive version of the algorithm involves mapping the string P and the m-length substrings of T into numbers x and y, respectively, sliding x along T and comparing it with y at every offset until the numbers match.
However, one can see that the algorithm, as stated, is in fact non-linear for two reasons:
1. Computing each yi takes more than constant time (it is in fact linear if we naively compute each number from scratch for each subsequence)
2. Comparing x and yi can be expensive if the numbers are very large which might happen if the pattern to be matched is very long
To make the algorithm faster, we first modify the procedure for calculating $y_i$ in constant time by using the previously computed number, $y_{i-1}$. We can do this using some simple operations: a subtraction to remove the high-order digit, a multiplication to shift the characters left, and an addition to append the low-order digit. For example, in Figure 10, we can compute $y_2$ from $y_1$ by
• removing the highest-order digit: 23590 mod 10000 = 3590
• shifting left: 3590 ∗ 10 = 35900
• adding the new low-order digit: 35900 + 2 = 35902
Our next issue arises when we have very long sequences to compare. This causes our calculations to be with very large numbers, which becomes no longer linear time. To keep the numbers small to ensure efficient comparison, we do all our computations modulo p (a form of hashing), where p reflects the word length available to us for storing numbers, but is small enough such that the comparison between x and $y_i$ is doable in constant time.
Hashing: Using a function to map data values to a data set of fixed size.
Because we are using hashing, mapping to the space of numbers modulo p can result in spurious hits due to hashing collisions, and so we modify the algorithm to deal with such spurious hits by explicitly verifying reported hits of the hash values. Hence, in its final version, the Karp-Rabin algorithm checks every hash match against the actual text before reporting it.
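Since the algorithm listing referenced above is not reproduced here, the following is a minimal Python sketch of this final version of Karp-Rabin, with a rolling hash computed modulo p and explicit verification of every hash hit; the base and modulus values are illustrative choices.

```python
# Minimal sketch of Karp-Rabin string matching with a rolling hash and
# explicit verification of hits. Base and modulus are illustrative choices;
# in practice p is chosen (often at random) to make spurious hits rare.

def karp_rabin(T, P, base=256, p=1_000_000_007):
    n, m = len(T), len(P)
    if m > n:
        return []
    high = pow(base, m - 1, p)                 # weight of the highest-order character
    x = 0                                      # hash of the pattern P
    y = 0                                      # hash of the current window of T
    for i in range(m):
        x = (x * base + ord(P[i])) % p
        y = (y * base + ord(T[i])) % p
    matches = []
    for i in range(n - m + 1):
        if x == y and T[i:i + m] == P:         # verify to rule out spurious hits
            matches.append(i)
        if i + m < n:
            # remove high-order char, shift left, append new low-order char
            y = ((y - ord(T[i]) * high) * base + ord(T[i + m])) % p
    return matches

print(karp_rabin("the quick brown fox", "quick"))   # -> [4]
```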
To compute the expected runtime of Karp-Rabin, we must factor in the expected cost of verification. If we can show the probability of spurious hits is small, the expected runtime is linear.
Questions:
Q: What if there are more than 10 characters in the alphabet?
A: In such a case, we can just modify the above algorithm by including more digits i.e. by working in a base other than 10, e.g. say base 256. But in general, when hashing is used, strings are mapped into a space of numbers and hence the strings are interpreted numerically.
Q: How do we apply this to text?
A: A hash function is used that changes the text into numbers that are easier to compare. For example, if the whole alphabet is used, letters can be assigned a value between 0 and 25, and then be used similar to a string of numbers.
Q: Why does using modulus decrease the computation time?
A: Modulus can be applied to each individual part in the computation while preserving the answer. For instance: imagine our current text is ”314152” and word length is 5. After making our first computation on ”31415”, we move our frame over to make our second computation, which is:
$14152 = (31415 − 3 ∗ 10000) ∗ 10 + 2 \pmod{13} \nonumber$
$= (7 − 3 ∗ 3) ∗ 10 + 2 \pmod{13} \nonumber$
$= 8 \pmod{13} \nonumber$
This computation can be done now in linear time.
Q: Are there provisions in the algorithm for inexact matches?
A: The above algorithm only works when there are regions of exact similarity between the query sequence and the database. However, the BLAST algorithm, which we look at later, extends the above ideas to include the notion of searching in a biologically meaningful neighborhood of the query sequence to account for some inexact matches. This is done by searching in the database for not just the query sequence, but also some variants of the sequence up to some fixed number of changes.
In general, in order to reduce the time for operations on arguments like numbers or strings that are really long, it is necessary to reduce the number range to something more manageable. Hashing is a general solution to this, and it involves mapping keys k from a large universe $U$ of strings/numbers into a hash of the key, $h(k)$, which lies in a smaller range, say $[1...m]$. There are many hash functions that can be used, all with different theoretical and practical properties. The two key properties that we need for hashing are:
1. Reproducibility: if $x = y$, then $h(x) = h(y)$. This is essential for our mapping to make sense.
2. Uniform output distribution: this implies that regardless of the input distribution, the output distribution is uniform, i.e. if $x \neq y$, then $P(h(x) = h(y)) = 1/m$, irrespective of the input distribution. This is a desirable property to reduce the chance of spurious hits.
An interesting idea that was raised was that it might be useful to have locality sensitive hash functions for use in neighborhood searches, such that points in U that are close to each other are mapped to nearby points by the hash function. The notion of Random Projections, as an extension of the BLAST algorithm, is based on this idea. Also, it is to be noted that modulo doesn't satisfy property 2 above, because it is possible to have input distributions (e.g. all multiples of the modulus) that result in a lot of collisions. Nevertheless, choosing a random number as the divisor of the modulo can avoid many collisions.
Working with hashing increases the complexity of analyzing the algorithm since now we need to compute the expected run time by including the cost of verification. To show that the expected run time is linear, we need to show that the probability of spurious hits is small.
The BLAST algorithm looks at the problem of sequence database search, wherein we have a query, which is a new sequence, and a target, which is a set of many old sequences, and we are interested in knowing which (if any) of the target sequences is the query related to. One of the key ideas of BLAST is that it does not require the individual alignments to be perfect; once an initial match is identified, we can fine-tune the matches later to find a good alignment which meets a threshold score. Also, BLAST exploits a distinct characteristic of database search problems: most target sequences will be completely unrelated to the query sequence, and very few sequences will match.
However, correct (near perfect) alignments will have long substrings of nucleotides that match perfectly. E.g., if we are looking for sequences of length 100 and are going to reject matches that are less than 90% identical, we need not look at sequences that do not even contain a consecutive stretch of roughly 10 matching nucleotides in a row. We base this assumption on the pigeonhole principle: if m items are put in n containers and m > n, at least 2 items must be put in one of the n containers.
In addition, in biology, functional DNA is more likely to be conserved, and therefore the mutations that we find will not actually be distributed randomly, but will be clustered in nonfunctional regions of DNA while leaving long stretches of functional DNA untouched. Therefore, because of the pigeonhole principle and because highly similar sequences will have stretches of similarity, we can pre-screen the sequences for common long stretches. This idea is used in BLAST by breaking up the query sequence into W-mers and pre-screening the target sequences for these W-mers, limiting our seeds to W-mers (and their neighborhoods) that meet a certain threshold score.
The other aspect of BLAST that allows us to speed up repeated queries is the ability to preprocess a large database of DNA off-line. After preprocessing, searching for a sequence of length m in a database of length n will take only O(m) time. The key insights that BLAST is based on are the ideas of hashing and neighborhood search, which allow one to search for W-mers even when there are no exact matches.
The BLAST algorithm
The steps are as follows:
1. Split query into overlapping words of length W (the W-mers)
2. Find a “neighborhood” of similar words for each word (see below)
3. Look up each word in the neighborhood in a hash table to find the location in the database where each word occurs. Call these the seeds, and let S be the collection of seeds.
4. Extend the seeds in S until the score of the alignment drops off below some threshold X.
5. Report matches with overall highest scores
The pre-processing step of BLAST makes sure that all substrings of W nucleotides will be included in our database (or in a hash table). These are called the W-mers of the database. As in step 1, we first split the query by looking at all substrings of W consecutive nucleotides in the query. To find the neighborhood of these W-mers, we then modify these sequences by changing them slightly and computing their similarity to the original sequence. We generate progressively more dissimilar words in our neighborhood until our similarity measure drops below some threshold T. This affords us flexibility to find matches that do not have exactly W consecutive matching characters in a row, but which do have enough matches to be considered similar, i.e. to meet a certain threshold score.
Then, we look up all of these words in our hash table to find seeds of W consecutive matching nucleotides. We then extend these seeds to find our alignment using the Smith-Waterman algorithm for local alignment, until the score drops below a certain threshold X. Since the region we are considering is a much shorter segment, this will not be as slow as running the algorithm on the entire DNA database.
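The sketch below illustrates the seeding and extension steps described above in simplified form: it indexes the database's W-mers in a hash table, looks up the query's W-mers as seeds, and extends each seed without gaps until the score drops X below its running maximum. Neighborhood generation (step 2) and leftward extension are omitted, and all parameters and names are illustrative assumptions.

```python
# Minimal sketch of BLAST-style seeding: pre-process the database into a
# hash table of W-mers, look up the query's W-mers, and extend each seed
# (rightward only, without gaps) until the score drops X below its maximum.
# Neighborhood generation is omitted; scoring parameters are illustrative.

from collections import defaultdict

def index_wmers(db, W):
    table = defaultdict(list)
    for i in range(len(db) - W + 1):
        table[db[i:i + W]].append(i)
    return table

def extend_seed(query, db, qpos, dpos, W, match=1, mismatch=-1, X=3):
    score = best = W * match                    # score the seed itself
    q, d = qpos + W, dpos + W
    best_end = (q, d)
    while q < len(query) and d < len(db) and best - score < X:
        score += match if query[q] == db[d] else mismatch
        q, d = q + 1, d + 1
        if score > best:
            best, best_end = score, (q, d)
    return best, (qpos, dpos), best_end

def blast_like(query, db, W=4):
    table = index_wmers(db, W)
    hits = []
    for i in range(len(query) - W + 1):
        for j in table.get(query[i:i + W], []):     # seeds from the hash table
            hits.append(extend_seed(query, db, i, j, W))
    return sorted(hits, reverse=True)[:5]           # report highest-scoring matches

print(blast_like("ACGTACGTTT", "GGACGTACGTAACC"))
```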
It is also interesting to note the influence of various parameters of BLAST on the performance of the algorithm vis-a-vis run-time and sensitivity:
• W: Although a large W would result in fewer spurious hits/collisions, thus making the algorithm faster, there are also tradeoffs: a large neighborhood of slightly different query sequences, a large hash table, and too few hits. On the other hand, if W is too small, we may get too many hits, which pushes runtime costs to the seed extension/alignment step.
• T: If T is higher, the algorithm will be faster, but you may miss sequences that are more evolutionarily distant. If comparing two related species, you can probably set a higher T, since you expect to find more matches between sequences that are quite similar.
• X: Its influence is quite similar to T in that both control the sensitivity of the algorithm. While W and T affect the total number of hits one gets, and hence affect the runtime of the algorithm dramatically, setting a really stringent X despite less stringent W and T will incur runtime costs from trying to extend unnecessary sequences that would not meet the stringency of X. So, it is important to match the stringency of X with that of W and T to avoid unnecessary computation time.
Extensions to BLAST
• Filtering: Low complexity regions can cause spurious hits. For instance, if our query has a string of copies of the same nucleotide, e.g. repeats of AC or just G, and the database has a long stretch of the same nucleotide, then there will be many useless hits. To prevent this, we can either try to filter out low complexity portions of the query or we can ignore unreasonably over-represented portions of the database.
• Two-hit BLAST: The idea here is to use double hashing wherein instead of hashing one long W-mer, we hash two small W-mers. This allows us to find small regions of similarity, since it is much more likely to have two smaller W-mers that match rather than one long W-mer. This allows us to get a higher sensitivity with a smaller W, while still pruning out spurious hits. This means that we’ll spend less time trying to extend matches that don’t actually match. Thus, this allows us to improve speed while maintaining sensitivity.
Q: For a long enough W, would it make sense to consider more than 2 smaller W-mers?
A: It would be interesting to see how the number of such W-mers influences the sensitivity of the algorithm. This is similar to using a comb, described next.
• Combs: This is the idea of using non-consecutive W-mers for hashing. Recall from your biology classes that the third nucleotide in a triplet usually doesn’t actually have an effect on which amino acid is represented. This means that every third nucleotide in a sequence is less likely to be preserved by evolution, since it often doesn’t matter. Thus, we might want to look for W-mers that look similar except in the third position of each codon. This is a particular example of a comb. A comb is simply a bit mask which represents which nucleotides we care about when trying to find matches. We explained above why 110110110 . . . (ignoring every third nucleotide) might be a good comb, and it turns out to be. However, other combs are also useful. One way to choose a comb is to just pick some nucleotides at random. Rather than picking just one comb for a projection, it is possible to randomly pick a set of such combs and project the W-mers along each of these combs to get a set of lookup databases. Then, the query string can also be projected randomly along these combs to look up in these databases, thereby increasing the probability of finding a match. This is called Random Projection (a minimal sketch of applying a comb appears after this list). Extending this, an interesting idea for a final project is to think of different techniques of projection or hashing that make sense biologically. One addition to this technique is to analyze false negatives and false positives, and change the comb to be more selective. Some papers that explore additions to this search include Califino-Rigoutsos’93, Buhler’01, and Indyk-Motwani’98.
• PSI-BLAST: Position-Specific Iterative BLAST creates summary profiles of related proteins using BLAST. After a round of BLAST, it updates the score matrix from the multiple alignment, and then runs subsequent rounds of BLAST, iteratively updating the score matrix. It builds a Hidden Markov Model to track conservation of specific amino acids. PSI-BLAST allows detection of distantly-related proteins.
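As promised in the Combs bullet above, here is a minimal sketch of projecting W-mers through a comb bit mask, including the 110110110 codon-aware comb and one random projection; the mask choices and W are illustrative.

```python
# Minimal sketch of hashing W-mers through a "comb" (bit mask). The comb
# 110110110 keeps the first two positions of each codon and ignores the
# third (wobble) position. Mask choice and W are illustrative assumptions.

import random

def apply_comb(wmer, comb):
    """Project a W-mer onto the positions where the comb has a 1."""
    return ''.join(c for c, keep in zip(wmer, comb) if keep == '1')

W = 9
codon_comb = ('110' * ((W + 2) // 3))[:W]           # 110110110
random_comb = ''.join(random.choice('01') for _ in range(W))

print(apply_comb("ATGGCTAAA", codon_comb))          # ignores every 3rd base
print(apply_comb("ATGGCTAAA", random_comb))         # one random projection
```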
The hashing technique at the core of the BLAST algorithm is a powerful way of indexing strings for rapid lookup. Substantial time is invested to process the whole genome, or a large set of genomes, in advance of obtaining a query sequence. Once the query sequence is obtained, it can be similarly processed and its parts searched against the indexed database in linear time.
In this section, we briefly describe four additional ways of pre-processing a database for rapid string lookup, each of which has both practical and theoretical importance.
Suffix Trees
Suffix trees provide a powerful tree representation of substrings of a target sequence T, by capturing all suffixes of T in a radix tree.
Representation of a sequence in a suffix tree
Searching a new sequence against a suffix tree
Linear-time construction of suffix trees
Suffix Arrays
For many genomic applications, suffix trees are too expensive to store in memory, and more efficient representations were needed. Suffix arrays were developed specifically to reduce the memory consumption of suffix trees, and achieve the same goals with a significantly reduced space need.
Using suffix arrays, any substring can be found by doing a binary search on the ordered list of suffixes. By thus exploring the prefix of every suffix, we end up searching all substrings.
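A minimal sketch of this idea follows: it builds a suffix array by naively sorting all suffixes and then locates a query substring by binary search over the sorted suffixes. The naive construction is shown only for illustration; practical suffix-array construction algorithms are far more efficient.

```python
# Minimal sketch of a suffix array: sort all suffixes of T, then locate a
# query substring by binary search over the sorted suffixes. The naive
# construction shown here is for illustration only.

def build_suffix_array(T):
    return sorted(range(len(T)), key=lambda i: T[i:])

def find_occurrences(T, sa, P):
    m = len(P)
    # leftmost suffix whose first m characters are >= P
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if T[sa[mid]:sa[mid] + m] < P:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    # leftmost suffix whose first m characters are > P
    hi = len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if T[sa[mid]:sa[mid] + m] <= P:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

T = "banana"
sa = build_suffix_array(T)
print(sa)                              # [5, 3, 1, 0, 4, 2]
print(find_occurrences(T, sa, "an"))   # [1, 3]
```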
The Burrows-Wheeler Transform
An even more efficient representation than suffix trees is given by the Burrows-Wheeler Transform (BWT), which enables storing the entire hashed string in the same number of characters as the original string (and even more compactly, as it contains frequent homopolymer runs of characters that can be more easily compressed). This has enabled programs that run even more efficiently.
We first consider the BWT matrix, which is an extension of a suffix array, in that it contains not only all suffixes in sorted (lexicographic) order, but it appends to each suffix starting at position i the prefix ending at position i − 1, each row thus containing a full rotation of the original string. This enables all the suffix-array and suffix-tree operations, of finding the position of suffixes in time linear in the query string.
The key difference from Suffix Arrays is space usage, where instead of storing all suffixes in memory, which even for suffix arrays is very expensive, only the last column of the BWT matrix is stored, based on which the original matrix can be recovered.
An auxiliary array can be used to speed things even further and avoid having to repeat operations of finding the first occurrence of each character in the modified suffix array.
Lastly, once the positions of 100,000s of substrings are found in the modified string (the last column of the BWT matrix), these coordinates can be transformed to the original positions, saving runtime by amortizing the cost of the transformation across the many reads.
The BWT has had a very strong impact on short-string matching algorithms, and nearly all the fastest read mappers are currently based on the Burrows-Wheeler Transform.
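The following is a minimal sketch of the Burrows-Wheeler Transform built from sorted rotations, together with a naive inversion that demonstrates the transform is lossless. Real read mappers do not invert the BWT this way; they rely on FM-index rank and occurrence tables for backward search, which this sketch does not implement.

```python
# Minimal sketch of the Burrows-Wheeler Transform: build all rotations of
# T$ in sorted order and keep only the last column. The naive inversion
# below shows the transform is lossless.

def bwt(T, sentinel="$"):
    T = T + sentinel                                   # unique end-of-string marker
    rotations = sorted(T[i:] + T[:i] for i in range(len(T)))
    return "".join(row[-1] for row in rotations)       # last column only

def inverse_bwt(L, sentinel="$"):
    table = [""] * len(L)
    for _ in range(len(L)):
        # prepend the last column and re-sort; after |L| rounds the table
        # holds all rotations, and the row ending in the sentinel is T$
        table = sorted(L[i] + table[i] for i in range(len(L)))
    original = next(row for row in table if row.endswith(sentinel))
    return original.rstrip(sentinel)

L = bwt("BANANA")
print(L)                     # ANNB$AA
print(inverse_bwt(L))        # BANANA
```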
Fundamental pre-processing
This is a variation of processing that has theoretical interest but has found relatively little practical use in bioinformatics. It relies on the Z vector, which contains at each position i the length of the longest prefix of the string that also matches the substring starting at i. This enables computing the L and R (Left and Right) vectors that denote the ends of the longest duplicated substrings that contain the current position i.
Educated String Matching
The Z algorithm enables an easy computation of both the Boyer-Moore and the Knuth-Morris-Pratt algorithms for linear-time string matching. These algorithms use information gathered at every comparison when matching strings to improve string matching to O(n). The naive algorithm is as follows: it compares its string of length m character by character to the sequence. After comparing the entire string, if there are any mismatches, it moves to the next index and tries again. This completes in \( O(m ∗ n) \) time.
One improvement to this algorithm is to discontinue the current comparison if a mismatch is found. However, this still completes in \( O(m ∗ n) \) time when the string we are comparing matches the entire sequence.
The key insight comes from learning from the internal redundancy in the string to compare, and using that to make bigger shifts down the target sequence. When a mismatch is found, all bases examined in the current comparison can be used to move the frame considered for the next comparison further down. As seen below, this greatly reduces the number of comparisons required, decreasing runtime to \( O(n) \).
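To make the pre-processing idea concrete, below is a minimal Python sketch of the Z vector from the Fundamental pre-processing subsection and its use for linear-time exact matching by scanning the concatenation P + '$' + T; the separator character and names are illustrative.

```python
# Minimal sketch of the Z vector: Z[i] is the length of the longest prefix
# of s that also starts at position i. Scanning P + '$' + T then gives
# linear-time exact matching, since any position with Z[i] == len(P) marks
# a full occurrence of P in T.

def z_array(s):
    n = len(s)
    Z = [0] * n
    Z[0] = n
    l, r = 0, 0                            # current rightmost Z-box [l, r)
    for i in range(1, n):
        if i < r:
            Z[i] = min(r - i, Z[i - l])    # reuse previously matched characters
        while i + Z[i] < n and s[Z[i]] == s[i + Z[i]]:
            Z[i] += 1                      # extend the match explicitly
        if i + Z[i] > r:
            l, r = i, i + Z[i]
    return Z

def z_match(T, P, sep="$"):
    Z = z_array(P + sep + T)
    m = len(P)
    return [i - m - 1 for i in range(m + 1, len(Z)) if Z[i] == m]

print(z_match("abababa", "aba"))           # [0, 2, 4]
```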
As described above, the BLAST algorithm uses a scoring (substitution) matrix to expand the list of W-mers in order to look for and determine an approximately matching sequence during seed extension. Also, a scoring matrix is used in evaluating matches or mismatches in the alignment algorithms. But how do we construct this matrix in the first place? How do we determine the value of $s\left(x_{i}, y_{j}\right)$ in global/local alignment?
The idea behind the scoring matrix is that the score of alignment should reflect the probability that two similar sequences are homologous i.e. the probability that two sequences that have a bunch of nucleotides in common also share a common ancestry. For this, we look at the likelihood ratios between two hypotheses.
1. Hypothesis 1: The alignment between the two sequences is due to chance and the sequences are, in fact, unrelated.
2. Hypothesis 2: The alignment is due to common ancestry and the sequences are actually related.
Then, we calculate the probability of observing an alignment according to each hypothesis. Pr(x, y|U ) is the probability of aligning x with y assuming they are unrelated, while Pr(x,y|R) is the probability of the
alignment, assuming they are related. Then, we define the alignment score as the log of the likelihood ratio between the two:
$S \equiv \log \frac{P(\mathbf{x}, \mathbf{y} \mid R)}{P(\mathbf{x}, \mathbf{y} \mid U)} \nonumber$
Since a sum of logs is the log of a product, we can get the total score of the alignment by adding up the scores of the individual aligned positions. Summing these per-position scores gives the score of the whole alignment, assuming each aligned pair is independent. Thus, an additive matrix score corresponds exactly to the log-likelihood ratio that the two sequences are related and that the alignment is not due to chance. More formally, considering the case of aligning proteins, for unrelated sequences, the probability of having an n-residue alignment between x and y is a simple product of the probabilities of the individual sequences since the residue pairings are independent.
That is,
\begin{aligned} \mathbf{x} &=\left\{x_{1} \ldots x_{n}\right\} \\ \mathbf{y} &=\left\{y_{1} \ldots y_{n}\right\} \\ q_{a} &=P(\text { amino acid } a) \\ P(\mathbf{x}, \mathbf{y} \mid U) &=\prod_{i=1}^{n} q_{x_{i}} \prod_{i=1}^{n} q_{y_{i}} \end{aligned} \nonumber
For related sequences, the residue pairings are no longer independent so we must use a different joint
probability, assuming that each pair of aligned amino acids evolved from a common ancestor:
\begin{aligned} p_{a b} &=P(\text { evolution gave rise to } a \text { in } \mathbf{x} \text { and } b \text { in } \mathbf{y}) \\ P(\mathbf{x}, \mathbf{y} \mid R) &=\prod_{i=1}^{n} p_{x_{i} y_{i}} \end{aligned} \nonumber
Then, the likelihood ratio between the two is given by:
\begin{aligned} \frac{P(\mathbf{x}, \mathbf{y} \mid R)}{P(\mathbf{x}, \mathbf{y} \mid U)} &=\frac{\prod_{i=1}^{n} p_{x_{i} y_{i}}}{\prod_{i=1}^{n} q_{x_{i}} \prod_{i=1}^{n} q_{y_{i}}} \\ &=\frac{\prod_{i=1}^{n} p_{x_{i} y_{i}}}{\prod_{i=1}^{n} q_{x_{i}} q_{y_{i}}} \end{aligned} \nonumber
Since we eventually want to compute a sum of scores, whereas probabilities multiply, we take the log of the product to get a handy summation:
\begin{aligned} S & \equiv \log \frac{P(\mathbf{x}, \mathbf{y} \mid R)}{P(\mathbf{x}, \mathbf{y} \mid U)} \\ &=\sum_{i} \log \left(\frac{p_{x_{i} y_{i}}}{q_{x_{i}} q_{y_{i}}}\right) \\ & \equiv \sum_{i} s\left(x_{i}, y_{i}\right) \end{aligned} \nonumber
Thus, the substitution matrix score for a given pair a, b is give by
$s(a, b)=\log \left(\frac{p_{a b}}{q_{a} q_{b}}\right) \nonumber$
The above expression is then used to crank out a substitution matrix like the BLOSUM62 matrix for amino acids. It is interesting to note that the score of a match of an amino acid with itself depends on the amino acid itself, because the frequency of random occurrence of an amino acid affects the terms used in calculating the likelihood ratio score of alignment. Hence, these matrices capture not only the sequence similarity of the alignments, but also the chemical similarity of various amino acids.
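The short sketch below evaluates the log-odds formula above for a few amino-acid pairs. The target frequencies p_ab and background frequencies q_a shown are made-up illustrative numbers, not the data used to build BLOSUM62; note how the rarer residue receives a higher self-match score, as discussed above.

```python
# Sketch of the log-odds score s(a, b) = log(p_ab / (q_a * q_b)) from the
# formula above. The frequencies below are made-up illustrative numbers,
# not the counts used to build a real BLOSUM or PAM matrix.

from math import log2

q = {"L": 0.099, "W": 0.013}                 # assumed background frequencies
p = {("L", "L"): 0.0371, ("W", "W"): 0.0065, ("L", "W"): 0.0007}

def score(a, b):
    pab = p.get((a, b), p.get((b, a)))
    return log2(pab / (q[a] * q[b]))         # often scaled and rounded to integers

for pair in [("L", "L"), ("W", "W"), ("L", "W")]:
    print(pair, round(score(*pair), 2))
```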
Further Reading:
BLAST related algorithms: Califino-Rigoutsos’93, Buhler’01, and Indyk-Motwani’98
In this chapter we will explore the emerging field of comparative genomics, primarily through examples of multiple species genome alignments (work done by the Kellis lab). One approach to the analysis of genomes is to infer important gene functions through applying an understanding of evolution to search for expected evolutionary patterns. Another approach is to discover evolutionary trends by studying genomes themselves. Taken together, evolutionary insight and large genomic datasets offer great potential for discovery of novel biological phenomena.
04: Comparative Genomics I- Genome Annotation
A recurring theme of this work is to take a global computational approach to analyzing elements of genes and RNAs encoded in the genome and use it to find interesting new biological phenomena. We can do this by seeing how individual examples “diverge” or differ from the average case. For example, by examining many protein–coding genes, we can identify features representative of that class of loci. We can then come up with highly accurate tests for distinguishing protein–coding from non–protein–coding genes. Often, these computational tests, based on thousands of examples, will be far more definitive than conventional low–throughput wet lab tests. (Such tests can include mass spectrometry to detect protein products, in cases where we want to know if a particular locus is protein coding.)
Motivation and Challenge
As the cost of genome sequencing continues to drop, the availability of sequenced genome data has exploded. However, analysis of the data has not kept up, and many interesting biological phenomena lie undiscovered in the endless strings of ATGCs. The goal of comparative genomics is to leverage the vast amounts of information available to look for biological patterns.
As the name suggests, comparative genomics does not focus on one specific set of genomes. The problem with purely focusing on the single genome level is that key evolutionary signatures are missed. Comparative genomics solves this problem by comparing genomes from many species that evolved from a common ancestor. As evolution changes a species’s genome, it leaves behind traces of its presence. We will see later in this chapter that evolution discriminates between portions of a genome on the basis of biological function. By exploiting this correlation between evolutionary fingerprints and the biological role of a genomic subsequence, comparative genomics is able to direct wet lab research to interesting portions of the genome and discover new biological phenomena.
FAQ
Q: Why do mutations only accumulate in certain regions of the genome, whereas other regions are conserved?
A: In non-functional regions of DNA, accumulated mutations are kept because they do not disturb the function of the DNA. In functional regions, these mutations can lead to decreased fitness; these mutations are then discarded from the species by natural selection.
We can glean much information about evolution through studying genomics, and, similarly, we can learn about the genome through studying evolution. For example, from the principle of “survival of the fittest,” we can compare related species to discover which portions of the genome are functional elements. The evolutionary process introduces mutations into any genome. In non-functional regions of DNA, accumulated mutations are kept because they do not disturb the function of the DNA. However, in functional regions, accumulated mutations often lead to decreased fitness. Thus, these fitness-decreasing mutations are not likely to perpetuate to future generations. As time progresses, evolutionarily unfit organisms are likely to not survive and their genes thin out. By comparing surviving species’ genomes with their ancestors’ genomes, we can see which portions constitute functional elements and which constitute “junk DNA.”
To date, various important biological markers and phenomena have been discovered through comparative genomics methods. For example, CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats), found in bacteria and archaea, were first discovered through comparative genomics. Follow–up experiments revealed that they provide adaptive immunity to plasmids and phages. Another example, which we will look at later in this chapter, is the phenomenon of stop–codon read–through, where stop codons are occasionally read through during the translation phase of protein biosynthesis. Without comparative genomics to guide them, experimentalists might have ignored both of these features for many years.
Without a system for interpreting and identifying important features in genomes, all of the DNA sequences on earth are just a meaningless sea of data. Both computer science and biology are therefore indispensable to comparative genomics. Without knowledge of biology, one might miss the signatures of synonymous substitutions or frame shift mutations. On the other hand, ignoring computational approaches would lead to an inability to parse the ever-larger datasets emerging from sequencing centers. Comparative genomics requires rare multidisciplinary skills and insight.
This is a particularly exciting time to enter the field of comparative genomics, because the field is mature enough that there are tools and data available to make discoveries. But it is young enough that important findings will likely continue to be made for many years.
Importance of many closely–related genomes
In order to resolve significant biological features we need both sufficient similarity to enable comparison and sufficient divergence to identify signatures of change over evolutionary time. This is difficult to achieve in a pairwise comparison. We improve the resolution of our analysis by extending analysis to many genomes simultaneously with some clusters of similar organisms and some dissimilar organisms. A simple analogy is one of observing an orchestra. If you place a single microphone, it will be difficult to decipher the signal coming from the entire system, because it will be overwhelmed by the local noise from the single point of observation, the nearest instrument. If you place many microphones distributed across the orchestra at reasonable distances, then you get a much better perspective not only on the overall signal, but also on the structure of the local noise. Similarly, by sequencing many genomes across the tree of life we are able to distinguish the biological signals of functional elements from the noise of neutral mutations. This is because nature selects for conservation of functional elements across large phylogenetic distances while constantly introducing noise through mutagenic processes operating at shorter time scales.
In this chapter, we will assume that we already have a complete genome–wide alignment of multiple closely–related species, spanning both coding and non–coding regions. In practice, constructing complete genome assemblies and whole–genome alignments is a very challenging problem; that will be the topic of the next chapter.
FAQ
Q: Why is there more resolving power when the evolutionary distance or branch length between species increases?
A: If we are comparing two species like human and chimp that are very close to each other, we expect to see few or no mutations. This gives us little discriminative power because we see no difference between the number of mutations in functional elements vs. the number of mutations in non-functional elements. However, as we increase the evolutionary time between species, we expect to see more mutations, but what we actually see is a notable decrease in the observed number of mutations in certain regions of the genome. We can conclude that these regions are functional regions. Therefore, our confidence in perceived functional elements increases as branch length increases.
FAQ
Q: Why is it better to have many closely related species for the same branch length rather than one distantly related species?
A: As branch length increases between distantly related species, even functional elements may fail to be conserved. Furthermore, reliably aligning orthologous regions from distantly related species is difficult if not impossible using current technology such as BLAST.
Comparative genomics and evolutionary signatures
Given a genome-wide alignment, we can subsequently analyze the level of conservation of functional elements in each of the genomes considered. Using the UCSC genome browser, one may see a level of conservation for every gene in the human genome derived from aligning the genomes of many other species. In Figure 4.1 below, we see a DNA sequence represented on the x–axis, while each “row” represents a different species. The y–axis within each row represents the amount of conservation for that species in that part of the chromosome (though other species that are not shown were also used to calculate conservation). Higher bars correspond with greater conservation.
From this figure, we can see that there are blocks of conservation separated by regions that are not conserved. The 12 exons (highlighted by red rectangles) are mostly conserved across species, but sometimes, certain exons are missing; for example, zebrafish is missing exon 9. However, we also see that there is a spike in some species (as circled in red) that do not correspond to a known protein coding gene. This tells us that some intronic regions have also been evolutionarily conserved, since DNA regions that do not code for proteins can still be important as functional elements, such as RNA, microRNA, and regulatory motifs. By observing how regions are conserved, instead of just looking at the amount of conservation, we can observe ‘evolutionary signatures’ of conservation for different functional elements.
The pattern of mutation/insertion/deletion can help us distinguish different types of functional elements in the genome. Different functional elements are under different selective pressures, and by considering which selective pressures each element is under, we can develop evolutionary signatures characteristic of each function. For example, protein-coding genes exhibit a different evolutionary signature than regulatory motifs and other classes of functional elements.
FAQ
Q: Given an alignment of genes from multiple species, what can you measure to determine the level of conservation of a specific gene(s)?
A: One simple method is just to look at the alignment score for each gene. If one wants to distinguish highly conserved protein coding segments from non-protein coding segments, one may also look at codon conservation. However, in both of these approaches, we have to consider the position of each species being compared in the phylogenetic tree. A pairwise comparison score that is lower between two species separated by a greater distance in the phylogenetic tree than the pairwise score between two closely related species would not necessarily imply lower conservation.
Functional elements in Drosophila
In a 2007 paper1, Stark et al. identified evolutionary signatures of different functional elements and predicted function using conserved signatures. One important finding is that across evolutionary time, genes tend to remain in a similar location. This is illustrated by Figure 4.2, which shows the result of a multiple alignment on orthologous segments of genomes from twelve Drosophila species. Each genome is represented by a horizontal blue line, where the top line represents the reference sequence. Grey lines connect orthologous functional elements, and it is clear that their positions are generally conserved across the different species.
FAQ
Q: Why is it significant that the position of orthologous elements is conserved?
A: The fact that positions are conserved is what allows us to make comparisons across species. Otherwise, we would not be able to align non-coding regions reliably.
Drosophila is a great genus to study because, in fact, the evolutionary separation among fruit fly species is greater than that among mammals. This brings us to an interesting side note: which species to select when looking at conservation signatures. You don’t want to use very similar species (such as humans and chimpanzees, which share 98% of the genome), because it would be difficult to distinguish regions that are different from ones that are the same. When comparing species to humans, the right level of conservation to look at is the mammals. Specifically, most research in this field uses 29 eutherian mammals (placental mammals, excluding marsupials and monotremes). Another thing to take into account is branch-length differences between species. Your ideal subjects of study would be a few closely related (short branch-length) species, to avoid the problems of interpretation that arise with long branch lengths, such as back-mutations.
Rates and patterns of selection
Now that we have established that there is structure to the evolution of genomic sequences, we can begin analyzing specific features of the conservation. For this section, let us consider genomic data at the level of individual nucleotides. Later on in this chapter we will see that we can also analyze amino acid sequences.
We may estimate the intensity of selective constraint ω by building a probabilistic model of the substitution rate inferred from genome alignment data. A Maximum Likelihood (ML) estimation of ω provides us with the rate of selection ω as well as a log-odds score that the rate differs from the neutral expectation.
One property we may consider is the rate of nucleotide substitution in a genome. Figure 4.3 shows two nucleotide sequences from a collection of mammals. One of the sequences is subject to normal rates of change, while the other demonstrates a reduced rate. Hence we may hypothesize that the latter sequence is subject to a greater level of evolutionary constraint, and may represent a more biologically important section of the genome.
We can further detect unusual patterns of selection π by looking at a probabilistic model of a stationary distribution that is different from the background distribution. The ML estimation of π provides us with a position weight matrix (PWM) for each k-mer in the genome as well as a log-odds score for substitution patterns that are unusual (e.g. one base changing to one and only one other base). As one may see from Figure 4.4, the specific letters matter: some bases selectively change to only one (or two) other bases, and the specific base a position changes to may suggest what the function of the sequence may be.
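To make the PWM idea concrete, the following toy sketch (not the method used in the actual analysis) tabulates column-wise base frequencies from a handful of aligned motif instances and scores each column against an assumed background distribution; the instances, the background frequencies, and the pseudocount are all invented for illustration.

```python
import math
from collections import Counter

# Hypothetical aligned instances of a short motif (one string per species or site).
instances = ["TGACGT", "TGACGT", "TGATGT", "TAACGT", "TGACGA"]
background = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}  # assumed background frequencies

def pwm_and_log_odds(instances, background, pseudocount=0.5):
    """Column-wise base frequencies and log-odds scores against the background."""
    pwm, log_odds = [], []
    for i in range(len(instances[0])):
        counts = Counter(seq[i] for seq in instances)
        total = sum(counts.values()) + 4 * pseudocount
        freqs = {b: (counts.get(b, 0) + pseudocount) / total for b in "ACGT"}
        pwm.append(freqs)
        log_odds.append({b: math.log2(freqs[b] / background[b]) for b in "ACGT"})
    return pwm, log_odds

pwm, log_odds = pwm_and_log_odds(instances, background)
for position, column in enumerate(log_odds):
    print(position, {base: round(score, 2) for base, score in column.items()})
```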
We can increase our detection power of constraint elements by looking at more species, as shown in Figure 4.5 where we see a dramatic increase in the power to detect small constrained elements.
1 www.nature.com/nature/journal...ture06340.html
In most regions of the genome where we see conservation across species, we expect there to be at least some amount of synonymous substitution. These are “silent” nucleotide substitutions that modify a codon in such a way that the amino acid it encodes is unchanged. In a 2011 paper2, Lindblad–Toh et al. studied evolutionary constraint in the human genome by doing comparative analysis of 29 mammalian species. They found that among the 29 genomes, the average nucleotide site showed 4.5 substitutions per site.
Given such a high average substitution rate, we do not expect to see perfect conservation across all regions that are conserved. For example, ignoring all other effects, the probability of a 12–mer remaining fixed across all 29 species is less than $10^{-25}$. Thus, regions which are nearly perfectly conserved across multiple species stand out as being unique and worthy of further study. One such region is shown in Figure 4.6.
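As a back-of-envelope check on why such a probability is so small (our own calculation, not taken from the paper), suppose substitutions at each site follow a Poisson process with the quoted tree-wide mean of 4.5 substitutions per site. The chance that a single site escapes substitution is then about $e^{-4.5}$, and the chance that a 12-mer escapes entirely is that probability raised to the 12th power; this lands within a couple of orders of magnitude of the bound above, with the exact figure depending on the substitution model used.

```python
import math

mean_subs_per_site = 4.5                              # average substitutions per site across the tree
p_site_unchanged = math.exp(-mean_subs_per_site)      # Poisson probability of zero substitutions
p_12mer_unchanged = p_site_unchanged ** 12            # assume the 12 sites evolve independently

print(f"P(single site unchanged) = {p_site_unchanged:.2e}")
print(f"P(12-mer unchanged)      = {p_12mer_unchanged:.2e}")   # ~3.5e-24 under this toy model
```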
Causes of Excess Constraint
The question is: what evolutionary pressures cause certain regions to be so perfectly conserved? The following possibilities were all mentioned in class:
• Could it be that there is a special structure of DNA shielding this area from mutation?
• Is there some special error correcting machinery that sits at this spot?
• Can the cell use the methylation state of the two copies of DNA as an error correcting mechanism? This mechanism would rely on the fact that the new copy of DNA is unmethylated, and therefore the DNA replication machinery could check the new copy against the old methylated copy.
• Maybe the next generation can’t survive if this region is mutated?
Another possible explanation is that selection is occurring to conserve specific codons. Some codons are more efficient than others: for example, highly abundant proteins that need rapid translation might select codons that give the most efficient translation rate, while other proteins might select codons that give less efficient translation.
Still, these regions seem too perfectly conserved to be explained by codon usage alone. What else can explain excess constraint? There must be some degree of accuracy needed at the nucleotide level that keeps these sequences from diverging.
It could be that we are looking at the same region in two species that have only recently diverged or that there is a specific genetic mechanism protecting this area. However, it is more likely that so much conservation is a sign of protein coding regions that simultaneously encode other functional elements. For example, the HOXB5 gene shows obvious excess constraint, and there is evidence that the 5’ end of the HOXB5 ORF encodes both protein and an RNA secondary structure.
Regions that encode more than one type of functional element are under overlapping selective pressures. There might be pressure in the protein coding space to keep the amino acid sequence corresponding to this region the same, combined with pressure from the RNA space to keep a nucleotide sequence that preserves the RNA’s secondary structure. As a result of these two pressures to keep codons for the same amino acids and to produce the same RNA structure, the region is likely to show much less tolerance for any synonymous substitution patterns.
The process of estimating evolutionary constraint from genomic alignment data across multiple species follows the steps below (a minimal sketch of the first counting steps appears after the list):
• Count the number of edit operations (i.e. the number of substitutions and/or deletions/insertions)
• Estimate the number of mutations including back-mutations
• Incorporate information about the neighborhood elements of the conserved element by looking at ”conservation windows”
• Estimate the probability of a constrained “hidden state” through using Hidden Markov Models
• Use phylogeny to estimate the tree mutation rate (i.e. reject substitutions that should occur along the tree)
• Allow different portions of the tree to have different mutation rates
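A minimal sketch of the first counting steps, applied to a single pairwise alignment, is shown below: slide a window along the alignment, count substitutions at aligned positions, and flag windows whose counts fall significantly below an assumed genome-wide background rate (a simple one-sided binomial test). The sequences and the background rate are invented, and the real pipeline additionally models phylogeny, indels, back-mutations, and hidden constrained states.

```python
from scipy.stats import binom

def constrained_windows(ref, other, background_rate, window=50, alpha=1e-3):
    """Report windows whose substitution count is significantly below expectation."""
    hits = []
    for start in range(0, len(ref) - window + 1, window):
        r, o = ref[start:start + window], other[start:start + window]
        pairs = [(a, b) for a, b in zip(r, o) if a != "-" and b != "-"]   # skip gapped columns
        subs = sum(a != b for a, b in pairs)
        p_value = binom.cdf(subs, len(pairs), background_rate)            # one-sided: unusually low?
        if p_value < alpha:
            hits.append((start, subs, p_value))
    return hits

# toy usage with an assumed 30% background substitution rate
ref   = "ATGCATGCATGCATGCATGC" * 5
other = "ATGCATGCATGCATGCATGC" * 5   # a perfectly conserved toy stretch
print(constrained_windows(ref, other, background_rate=0.30))
```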
Modeling Excess Constraint
To better study regions of excess constraint, we develop mathematical models to systematically measure the amount of synonymous and non-synonymous conservation of different regions. We will measure two rates: codon and nucleotide conservation.
To represent the null model, we can build rate matrices (4 × 4 in the nucleotide case and 64 × 64 for the codon case) that give the rates of substitutions between either codons or nucleotides for a unit time. We estimate the rates in the null model by looking at a large body of alignment data and estimating the probabilities of each type of substitution. See Figure 4.18a in 4.5.2 for an example of a null matrix for the codon case.
• λs: the rate of synonymous substitutions
For example, if λs = 0.5, then the rate of synonymous substitutions is half of what is expected from the null model in that region. We can then evaluate the statistical significance of the rate estimates we obtain, and find regions where the rate of substitution is much lower than expected.
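As a hedged illustration of such a rate estimate, one can treat the observed number of synonymous changes in a region as a Poisson count whose mean is λs times the count expected under the null model; the maximum-likelihood estimate of λs is then simply the ratio of observed to expected, and a one-sided Poisson tail gives a p-value for constraint. The counts below are made up, and the real method works with a full codon model on a phylogeny rather than raw counts.

```python
from scipy.stats import poisson

def synonymous_rate_estimate(observed_syn, expected_syn_null):
    """MLE of lambda_s under a Poisson count model, plus a one-sided p-value for
    the hypothesis that the region has fewer synonymous changes than expected."""
    lam_s = observed_syn / expected_syn_null          # e.g. 6 observed vs 12 expected -> 0.5
    p_value = poisson.cdf(observed_syn, expected_syn_null)
    return lam_s, p_value

lam_s, p = synonymous_rate_estimate(observed_syn=6, expected_syn_null=12.0)
print(f"lambda_s = {lam_s:.2f}, one-sided p = {p:.3f}")
```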
Using a null model here helps us account for biases in alignment coverage of certain codons and also accounts for the possibility of codon degeneracy, in which case we would expect to see a much higher rate of substitutions. We will learn how to combine such models with phylogenetic methods when we talk about phylogenetic trees and evolution later on in the course.
Applying this model shows that the sequences in the first translated codons, cassette exons (exons that are present in one mRNA transcript but absent in an isoform of the transcript), and alternatively spliced regions have especially low rates of synonymous substitutions.
Excess Constraint in the Human Genome
In this section, we will examine the problem of determining the total proportion of the human genome under excess constraint. In particular, we will revisit the work of Lindblad–Toh et al. (2011), which compared 29 mammalian genomes. They measured conservation levels throughout the genome by applying the process described in the previous section to 50–mers. By considering only 50–mers which were part of ancestral repeats, it is possible to determine a background level of conservation. We can imagine that the intensities of conservation among the 50–mers are distributed according to a hidden probability distribution, as illustrated in Figure 4.8. In the figure, the background curve represents the distribution of constraint in the absence of special mechanisms for excess constraint, as determined by looking at ancestral repeats, while the signal (foreground) curve represents the actual distribution of the genome. The signal curve has more conservation overall due to the purifying effects of natural selection.
We may wish to investigate specific regions of the genome which are under excess constraint by setting a threshold level of conservation and examining regions which are more conserved. In the illustration, this corresponds to considering all 50–mers which fall to the right of one of the orange lines. We see that while this method does indeed give us regions under excess constraint, it also gives us false positives. This is because even in the absence of purifying selection and other effects, certain regions will be heavily conserved, simply due to random chance. Setting the threshold higher, such as by using the dotted orange line as our threshold, reduces the proportion of false positives (FP) to true positives (TP), while also lowering the number of true positives detected, thus trading higher specificity for lower sensitivity.
However, not all hope is lost. It is possible to empirically measure both the background (BG) and foreground (FG) signal curves, as described above. Once that is done, the area of the region between them, which is shaded in gray in Figure 4.8, can be determined by integration. This area represents the proportion of the genome which is under excess constraint. Because the curves overlap, we cannot detect all conserved elements, but we can estimate the total amount of excess constraint. This estimated amount of constraint turns out to be about 5% of the human genome, depending on how large a window is used. Those regions are likely to all be functional, but since about 1.5% of the human genome is protein–coding, we can infer that the remaining 3.5% consists of functional, non–coding elements, most of which probably play regulatory roles.
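The integration step can be illustrated with synthetic data: draw a background distribution of conservation scores, draw a foreground in which a small fraction of elements has been shifted toward higher conservation, and integrate the positive excess of the foreground density over the background density. Everything below is invented; the point is only that the excess area approximately recovers the planted constrained fraction even though the individual constrained elements cannot all be identified.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic conservation scores: background from ancestral repeats, foreground genome-wide
background = rng.normal(loc=0.0, scale=1.0, size=100_000)
foreground = np.concatenate([rng.normal(0.0, 1.0, 95_000),    # neutral portion
                             rng.normal(3.0, 1.0, 5_000)])    # constrained portion (~5%)

bins = np.linspace(-5, 8, 200)
bg_density, _ = np.histogram(background, bins=bins, density=True)
fg_density, _ = np.histogram(foreground, bins=bins, density=True)

# integrate the positive excess of the foreground over the background
width = np.diff(bins)
excess = np.clip(fg_density - bg_density, 0, None)
print(f"estimated fraction under excess constraint = {np.sum(excess * width):.3f}")
```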
We have seen that evolutionary constraint over the whole genome can be estimated by evaluating genomic constraint against a background distribution. Lindblad-Toh et al. (2011) compare genome conservation across 29 mammals against a background calculated from ancestral repeat elements to find regions with excess constraint (Figure 4.9A and B). Annotation of evolutionarily constrained bases reveals that the majority of discovered regions are intergenic and intronic and demonstrates that going from four (HMRD) to 29 mammalian genomes increases the power of this analysis primarily in non-coding regions (Figure 4.9C). The most constrained regions in the genome are coding regions (Figure 4.9D).
As shown in Figure 4.9, the increase from HMRD to a 29-genome alignment vastly improves the power of this analysis. However, while the amount of intergenic elements detected increased significantly, detection is still limited by the fact that non-functional elements have much lower species coverage depth in multiple alignments than functional regions (Figure 4.10). For example, ancestral repeats (AR, μ = 11.4) have a much lower average coverage depth than exons (μ = 20.9). On one hand, this shows evidence of selection against insertions and deletions in functional elements, which are not examined in the analysis of base constraint. On the other, it also complicates the analysis of evolutionary constraint, as such work must then handle varying coverage across the genome.
Examples of Excess Constraint
Examples of excess constraint have been found in the following cases:
• Most Hox genes show overlapping constraint regions. In particular, as mentioned above, the first 50 amino acids of HOXB5 are almost completely conserved. In addition, HOXA2 shows overlapping regulatory modules. These two loci encode developmental enhancers, providing a mechanism for tissue-specific expression.
• ADAR: the main regulator of mRNA editing, has a splice variant where a low synonymous substitution rate was found at a resolution of 9 codons.
• BRCA1: Hurst and Pal (2001) found a low rate of synonymous substitutions in certain regions of BRCA1, the main gene involved in breast cancer. They hypothesized that purifying selection is occurring in these regions. (This claim was refuted by Schmid and Yang (2008), who claim this phenomenon is the artifact of a sliding window analysis).
• THRA/NR1D1: these genes, also involved in breast cancer, are part of a dual coding region that codes for both genes and is highly conserved.
• SEPHS2: has a hairpin involved in selenocysteine recoding. Because this region must select codons to both conserve the protein’s amino acid sequence and the nucleotides to keep the same RNA secondary structure, it shows excess constraint.
Measuring constraint at individual nucleotides
By measuring evolutionary constraint at individual nucleotides instead of blocks of the sequence, we may find individual transcription factor binding sites, position-specific bias within motif instances, and reveal motif consensus among most species. Specifically, we can detect SNPs that disrupt conserved regulatory motifs and determine the level of evolution by looking at every nucleotide in the gene. By looking at nucleotides individually, we can find SNPs that are important in the function of a specific sequence.
2 www.nature.com/nature/journal...ture10530.html
Independently of the substitution rate, we may also consider the pattern of substitutions in a particular nucleotide subsequence. Consider a sequence of nucleotides which encodes a protein. Due to tRNA wobble, a mutation in the third nucleotide of a codon is less likely to affect the final protein than a mutation in the other positions. Hence we expect to see a pattern of increased substitutions on the third position when looking at protein–coding subsequences of the genome. This is indeed verified experimentally, as shown in Figure 4.11.
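A small sketch of how one might tabulate this signal from an in-frame, gap-free pairwise alignment of a coding region is given below; the toy sequences are invented, with differences deliberately placed at synonymous third positions.

```python
def substitutions_by_codon_position(seq_a, seq_b):
    """Count mismatches at codon positions 1, 2, and 3 of a gap-free coding alignment
    (assumed to be in frame, starting at the first base)."""
    counts = {1: 0, 2: 0, 3: 0}
    for i, (a, b) in enumerate(zip(seq_a, seq_b)):
        if a != b:
            counts[i % 3 + 1] += 1
    return counts

# toy in-frame alignment: all four differences are synonymous third-position changes
seq_human = "ATGGCTGAAAGCTTT"
seq_mouse = "ATGGCAGAGAGTTTC"
print(substitutions_by_codon_position(seq_human, seq_mouse))   # {1: 0, 2: 0, 3: 4}
```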
FAQ
Q: In Figure 4.11, we also see nucleotide substitutions in groups of three or sixes. Why is this the case?
A: Insertions and deletions in groups of threes and sixes also contribute to preserving the reading frame. If all the nucleotides are deleted in one codon, the rest of the codons are unaffected during amino acid translation. However, if we delete a number of nucleotides that is not a multiple of three (i.e. we only delete part of some codon), then the translation of the rest of the codons become nonsensical since the reading frame has been shifted.
In Figure 4.12, we can see one more feature of protein-coding genes. The boundaries of conservation are very distinct and they lie near splice sites. Periodic mutations (in multiples of three) begin to occur after the splice site boundary.
As we can see with detecting protein-coding genes, it is not only important to consider the substitution rate but also the pattern of substitutions. By observing how regions are conserved, instead of just looking at the amount of conservation, we can observe ‘evolutionary signatures’ of conservation for different functional elements.
Selective Pressures On Different Functional Elements
Different functional elements have different selective pressures (due to their structure and other characteristics); some changes (insertions, deletions, or mutations) that can be extremely harmful to one functional element may be innocuous to another. By figuring out what the “signatures” are for different elements, we can more accurately annotate a region by observing the patterns of conservation it shows.
Such a pattern is called an evolutionary signature: a pattern of change that is tolerated within elements that still preserve their function. An evolutionary signature is different from the degree of conservation in that you tolerate mutation, but only specific types of mutations in specific places. Evolutionary signatures arise because evolution and natural selection are acting on different levels in certain functional elements. For instance, in a protein-coding gene evolution is acting on the level of amino acids, so natural selection will not filter out nucleotide changes which do not affect the amino acid sequence, whereas a structural RNA will have pressure to preserve nucleotide pairs, but not necessarily individual nucleotides.
Importantly, the pattern of conservation has a distinct phylogenetic structure. More similar species (mammals) group together with shared conserved domains that fish lack, suggesting a mammalian specific innovation, perhaps for regulatory elements not shared by fish. Meanwhile, some features are globally conserved, suggesting a universal significance, such as protein coding. Initial approximate annotation of protein coding regions in the human genome was possible using the simple heuristic that if it was conserved from human to fish it likely served as a protein coding region.
An interesting idea for a final project would be to map divergences in the multiple alignment and call these events “births” of new coding elements. By focusing on a particular element (say microRNAs) one could identify periods of innovation and isolate portions of a phylogenetic tree enriched for certain classes of these elements.
The rest of the chapter will focus on quantifying the degree to which a sequence follows a given pattern. Kellis compared the process of evolution to exploring a fitness landscape, with the fitness score of a particular sequence constrained by the function it encodes. For example, protein coding genes are constrained by selection on the translated product, so synonymous substitutions in the third base pair of a codon are tolerated.
Below is a summary of the expected patterns followed by various functional elements:
• Protein–coding genes exhibit particular frequencies of codon substitution as well as reading frame conservation. This makes sense because the significance of the genes is the proteins they code for; therefore, changes that result in the same or similar amino acids can be easily tolerated, while a tiny change that drastically changes the resulting protein can be considered disastrous. In addition to the error correction of the mismatch repair system and DNA polymerase itself, the redundancy of the genetic code provides an additional level of intrinsic error correction/tolerance.
• Structural RNA is selected based on the secondary sequence of the transcribed RNA, and thus requires compensatory changes. For example, some RNA has a secondary stem–loop structure such that sections of its sequence bind to other sections of its sequence in its “stem”, as shown in figure 4.13.
Imagine that a nucleotide (A) and its partner (T) bind to each other in the stem, and then (A) mutates to a (C). This would ruin the secondary structure of the RNA. To correct this, either the (C) would mutate back to an (A), or the (T) would mutate to a (G). Then the (C)-(G) pair would maintain the secondary structure. This is called a compensatory mutation. Therefore, in RNA structures, the amount of change to the secondary structure (e.g. stem–loop) is more important than the amount of change in the primary structure (just the sequence). Understanding the effects of changes in RNA structure requires knowledge of the secondary structure. The likely secondary structure of an RNA can be determined by modeling the stability of many possible conformations and choosing the most likely conformation.
• MicroRNA is a molecule that is ejected from the nucleus into the cytoplasm. Their characteristic trait is that they also have the hairpin (stem–loop) structure illustrated in Figure 4.13, but a section of the stem is complementary to a portion of mRNA.
• When microRNA binds its complementary sequence to the respective portion of mRNA, it degrades the mRNA. This means that it is a post–transcriptional regulator, since it’s being used to limit the production of a protein (translation) after transcription. MicroRNA is conserved differently than structural RNA. Due to its binding to an mRNA target, the region of binding is much more conserved to maintain target specificity.
• Finally, regulatory motifs are conserved in sequence (to bind particular interacting protein partners) but not necessarily in location. Regulatory motifs can move around since they only need to recruit a factor to a particular region. Small changes (insertions and deletions) that preserve the consensus of the motif are tolerated, as are changes upstream and downstream that move the location of the motif.
When trying to understand the role of conservation in functional class prediction, an important question is how much of observed conservation can be explained by known patterns. Even after accounting for “random” conservation, roughly 60% of non–random conservation in the fly genome was not accounted for — that is, we couldn’t identify it as a protein–coding gene, RNA, microRNA, or regulatory motif. The fact that they remain conserved however suggests a functional role. That so much conserved sequence remains poorly understood underscores that many exciting questions remain to be answered. One final project for 6.047 in the past was using clustering (unsupervised learning) to account for the other conservation. It developed into an M.Eng project, and some clusters were identified, but the function of these clusters was, and is, still unclear. It’s an open problem!
In slide 12, we see three examples of conservation: an intronic sequence with poor conservation, a coding region with high conservation, and a non–coding region with high conservation, meaning it is probably a functional element. As we saw at the beginning of this section, the important characteristic of protein– coding regions to remember is that codons (triples of nucleotides) code for amino acids, which make up proteins. This results in the evolutionary signature of protein–coding regions, as shown in slide 13: (i) reading–frame conservation and (ii) codon–substitution patterns. The intuition for this signature is relatively straightforward.
Firstly, reading frame conservation makes sense, since an insertion or deletion of one or two nucleotides will “shift” how all the following codons are read. However, if an insertion or deletion happens in a multiple of 3, the other codons will still be read in the same way, so this is a less significant change. Secondly, it makes sense that some mutations are less harmful than others, since different triplets can code for the same amino acids (a conservative substitution, as evident from the matrix below), and even mutations that result in a different amino acid may be evolutionarily neutral if the substitutions occur with similar amino acids in a domain of the protein where exact amino acid properties are not required. These distinctive patterns allow us to “color” the genome and clearly see where the exons are, as shown in Figure 4.15.
When using these patterns in distinguishing evolutionary signatures, we have to make sure to consider the ideas below:
• Quantify the distinctiveness of all $64^2$ possible codon substitutions by considering synonymous substitutions (frequent in protein-coding sequences) and nonsense substitutions (more frequent in non-coding than coding sequences).
• Model the phylogenetic relationship among the species: multiple apparent substitutions may be explained by one evolutionary event.
• Tolerate uncertainty in the input such as unknown ancestral sequences and gaps in alignment (missing data).
• Report the certainty or uncertainty of the result: quantify the confidence that a given alignment is protein-coding using various units such as p-values, bits, or decibans.
Reading–Frame Conservation (RFC)
Now that we know about this pattern of conservation in protein coding genes, we can develop methods to determine if a gene is protein-coding or if it is not.
By scoring the pressure to stay in the same reading frame we can quantify how likely a region is to be protein–coding or not. As shown in slide 20, we can do this by having a target sequence (Scer, the genome of S. cerevisiae), then aligning a selected sequence (Spar, S. paradoxus) to it and calculating what proportion of the time the selected sequence matches the target sequence’s reading frame.
Since we don’t know where the reading frame starts in the selected sequence, we align three times to try all possible offsets:
(Sparf1, Sparf2, Sparf3)
From these, we choose the alignment where the selected sequence is most often in sync with the target sequence. For example, we can begin numbering the nucleotides “1, 2, 3...etc.” until we reach a gap that we do not number. Or we can start numbering the nucleotides “2, 3, 1...etc.” where each triplet of “1,2,3” represents a codon.
Finally, for the best alignment, we calculate the percentage of nucleotides that remain in frame — if it is above a cutoff, this selected species “votes” that this region is a protein–coding region, and if it is low, this species “votes” that this is an intergenic region. The “votes” are tallied from all the species to sum to the RFC score.
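A simplified sketch of this scoring for a single pairwise alignment is given below: we label the non-gap positions of each sequence with frame indices 1, 2, 3, try the three possible offsets of the informant, and report the best fraction of aligned positions whose frame labels agree. This is only an illustration of the idea; the published RFC method scans windows, tolerates sequencing error, and tallies votes across many species.

```python
def reading_frame_conservation(target_aln, informant_aln):
    """Simplified RFC score for one pairwise alignment (gaps written as '-')."""
    def frame_labels(aln, offset=0):
        labels, k = [], offset
        for ch in aln:
            if ch == "-":
                labels.append(None)
            else:
                labels.append(k % 3 + 1)   # frame index 1, 2, or 3
                k += 1
        return labels

    target = frame_labels(target_aln)
    best = 0.0
    for offset in range(3):                # try all three informant offsets
        informant = frame_labels(informant_aln, offset)
        pairs = [(t, s) for t, s in zip(target, informant) if t is not None and s is not None]
        best = max(best, sum(t == s for t, s in pairs) / len(pairs))
    return best

# toy example: a 3-base deletion keeps the frame, a 1-base deletion does not
print(reading_frame_conservation("ATGGCCGATTTGAAA", "ATGGCC---TTGAAA"))  # 1.0, frame preserved
print(reading_frame_conservation("ATGGCCGATTTGAAA", "ATGGCCG-TTTGAAA"))  # 0.5, frame lost downstream
```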
This method is not robust to sequencing error. We can compensate for these errors by using a smaller scanning window and observing local reading frame conservation.
The method was shown to have 99.9% specificity and 99% sensitivity when applied to the yeast genome. When applied to 2000 hypothetical ORFs (open reading frames, or proposed genes)3 in yeast, it rejected 500 of these putative protein coding genes as not being protein coding.
Similarly, 4000 hypothetical genes in the human genome were rejected by this method. This model created a specific hypothesis (that these DNA sequences were unlikely to code for proteins) that has subsequently been supported with experimental confirmation that the regions do not code for proteins in vivo.4
This represents an important step forward for genome annotation, because previously it was difficult to conclude that a DNA sequence was non–coding simply from lack of evidence. By narrowing the focus and creating a new null hypothesis (that the gene in question appears to be a non–coding gene) it became much easier to not only accept coding genes, but to reject non–coding genes with computational support. During the discussion of reading frame conservation in class, we identified an exciting idea for a final project which would be to look for the birth of new functional proteins resulting from frame shift mutations.
Codon–Substitution Frequencies (CSFs)
The second signature of protein coding regions, the codon substitution frequencies, acts on multiple levels of conservation. To explore these frequencies, it is helpful to remember that codon evolution can be modeled by conditional probability distributions (CPDs) — the likelihood of a descendant having a codon b given that an ancestor had codon a an amount of time t ago.
The most conservative event is exact maintenance of the codon. A mutation that codes for the same amino acid may be conservative but not totally synonymous, because of species specific codon usage biases. Even mutations that alter the identity of the amino acid might be conservative if they code for amino acids with similar biochemical properties.
We use a CPD in order to capture the net effect of all of these considerations. To calculate these CPDs, we need a “rate” matrix, Q, which measures the exchange rate for a unit time; that is, it indicates how often codon a in species 1 is substituted for codon b in species 2, for a unit branch length. Then, by using $e^{Qt}$, we can estimate the frequency of substitution at time t.
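Numerically, the matrix exponential is straightforward to compute. The sketch below uses a toy 4 × 4 nucleotide rate matrix (a Jukes–Cantor-like matrix with equal exchange rates, rather than the 64 × 64 codon matrix estimated in practice) to show that $e^{Qt}$ is the identity at t = 0 and that the off-diagonal substitution probabilities grow with branch length t.

```python
import numpy as np
from scipy.linalg import expm

# Toy 4x4 nucleotide rate matrix: equal exchange rates, with the diagonal chosen
# so that each row sums to zero, as required of a rate matrix.
alpha = 1.0
Q = np.full((4, 4), alpha)
np.fill_diagonal(Q, -3 * alpha)

for t in (0.0, 0.1, 1.0):
    P = expm(Q * t)               # substitution probabilities after branch length t
    print(f"t = {t}:")
    print(np.round(P, 3))
```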
When the CPD is considered in conjunction with the topology of a network graph representing the evolutionary tree, it has approximately (2L − 2) · $64^2$ parameters, where L is the number of leaves in the tree (species in the evolutionary phylogeny). This number of parameters is derived from the number of entries in Q and the number of independent branch lengths, t. Estimates of these parameters can be determined by MLE from training data.
The CPD is defined in terms of eQt as follows:
$\operatorname{Pr}(\text {child}=a \mid \text {parent}=b ; t)=\left[e^{Q t}\right]_{a, b}$
The intuition is that as time increases, the probability of substitution increases, while at the “initial” time (t = 0), $e^{Qt}$ is the identity matrix, since every codon is guaranteed to remain itself. But how do we get the rate matrix?
• Q is “learned” from the sequences, by using Expectation–Maximization, for example. Many known protein-coding sequences are used as training data (or non-coding regions when generating that model).
• Given the parameters of the model, we can use Felsenstein’s algorithm[1] to compute the probability of any alignment, while taking into account phylogeny, given the substitution model (the E–step).
$\text { Likelihood}(\boldsymbol{Q})=\operatorname{Pr}(\text { Training Data; } \boldsymbol{Q}, t)$
• Then, given the alignments and phylogeny, we can choose the parameters (the rate matrix: Q, and branch lengths: t) that maximize the likelihood of those alignments in the M–step; for example, to estimate Q, we can count the number of times one codon is substituted for another in the alignment. The argument space, denoted Q, consists of thousands of possibilities for Q and t; $\hat{Q}$ is the parameter that maximizes the likelihood:
$\hat{Q}=\operatorname{argmax}_{\boldsymbol{Q}}(\text {Likelihood}(Q))$
Other maximization strategies include: expectation maximization, gradient ascent, simulated annealing, spectral decomposition. Branch length, t, can be optimized using the same method simultaneously.
FAQ
Q: How does the branch length contribute to determining the rate matrix?
A: The branch lengths specify how much “time” passed between any two nodes. The rate matrix describes the relative frequencies of codon substitutions per unit branch length.
With two estimated rate matrices, the calculated probabilities of any given alignment are different for each matrix. Now, we can compute the likelihood ratio, $\frac{\operatorname{Pr}\left(\text {Leaves} ; Q_{c}, t\right)}{\operatorname{Pr}\left(\text {Leaves} ; Q_{N}, t\right)}$, that the alignment came from a protein-coding region as opposed to a non-protein-coding region.
Figure 4.18: Rate matrices for the null and alternate models. A lighter color means substitution is more likely.
Now that we know how to obtain our model, we note that, given the specific pattern of codon substitution frequencies for protein–coding, we want two models so that we can distinguish between coding and non– coding regions. Figures 4.18a and 4.18b show rate matrices for intergenic and genic regions, respectively. A number of salient features present themselves in the codon substitution matrix (CSM) for genes. Note that the main diagonal element has been removed, because the frequency of a triplet being exchanged for itself will obviously be much higher than any other exchange. Nevertheless,
1. it is immediately obvious that there is a strong diagonal element in the protein coding regions.
2. We also note certain high–scoring off diagonal elements in the coding CSM: these are substitutions that are close in function rather than in sequence, such as 6–fold degenerate codons or very similar amino acids.
3. We also note dark vertical stripes, which indicate these substitutions are especially unlikely. These columns correspond to stop codons, since substitutions to this triplet would significantly alter protein function, and thus are strongly selected against.
On the other hand, in the matrix for intergenic regions, the exchange rates are more uniform. In these regions, what matters is the mutational proximity, i.e. the edit distance or number of changes from one sequence to another. Genic regions, by contrast, are dictated by selective proximity, or the similarity in amino acid sequence of the protein resulting from the gene.
Now that we have the two rate matrices for the two regions, we can calculate the probabilities that each matrix generated the genomes of the two species. This can be done by using Felsenstein’s algorithm, and adding up the “score” for each pair of corresponding codons in the two species. Finally, we can calculate the likelihood ratio that the alignment came from a coding region to a non–coding region by dividing the two scores — this demonstrates our confidence in our annotation of the sequence. If the ratio is greater than 1, we can guess that it is a coding region, and if it is less than 1, then it is a non–coding region. For example, in Figure 4.16, we are very confident about the respective classifications of each region.
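A minimal two-species sketch of this scoring is shown below. Instead of running Felsenstein's algorithm over a full tree, it simply sums per-codon-pair log-likelihood ratios under a "coding" versus a "non-coding" substitution model; the two toy probability tables are invented and cover only the codon pairs used in the example.

```python
import math

def coding_log_likelihood_ratio(codons_a, codons_b, p_coding, p_noncoding):
    """Sum of per-codon-pair log-likelihood ratios for a two-species alignment.
    p_coding[(x, y)] and p_noncoding[(x, y)] are substitution probabilities under the
    coding and non-coding models; a positive total favors the coding model."""
    score = 0.0
    for x, y in zip(codons_a, codons_b):
        score += math.log(p_coding[(x, y)] / p_noncoding[(x, y)])
    return score

# toy models: the coding model strongly prefers synonymous exchanges such as GAA <-> GAG
p_coding    = {("GAA", "GAA"): 0.90, ("GAA", "GAG"): 0.08, ("GAA", "TAA"): 0.0001}
p_noncoding = {("GAA", "GAA"): 0.70, ("GAA", "GAG"): 0.10, ("GAA", "TAA"): 0.10}
print(coding_log_likelihood_ratio(["GAA", "GAA"], ["GAG", "GAA"], p_coding, p_noncoding))
```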
It should be noted, however, that although the “coloring” of the sequences confirms our classifications, the likelihood ratios are calculated independently of the ‘coloring,’ which uses our knowledge of synonymous or conservative substitutions. This further implies that this method automatically infers the genetic code from the pattern of substitutions that occurs, simply by looking at the high scoring substitutions. In species with a different genetic code, the patterns of codon exchange will be different; for example, in Candida albicans, the codon CTG codes for serine (polar) rather than leucine (hydrophobic), and this can be deduced from the CSMs. However, no knowledge of this is required by the method; instead, we can deduce this a posteriori from the CSM.
In summary, we are able to distinguish between non–coding and coding regions of the genome based on their evolutionary signatures, by creating two separate 64 by 64 rate matrices: one measuring the rate of codon substitutions in coding regions, and the other in non–coding regions. The rate matrix gives the exchange rate of codons or nucleotides over a unit time.
We used the two matrices to calculate two probabilities for any given alignment: the likelihood that it came from a coding region and the likelihood that it came from a non–coding region. Taking the likelihood ratio of these two probabilities gives a measure of confidence that the alignment is protein–coding as demon- strated in Figure 4.19. Using this method we can pick out regions of the genome that evolve according to the protein coding signature.
We will see later how to combine this likelihood ratio approach with phylogenetic methods to find evolutionary patterns of protein coding regions.
However, this method only lets us find regions that are selected at the translational level. The key point is that here we are measuring for only protein coding selection. We will see today how we can look for other conserved functional elements that exhibit their own unique signatures.
Classification of Drosophila Genome Sequences
We have seen that using these RFC and CSF metrics allows us to classify exons and introns with extremely high specificity and sensitivity. The classifiers that use these measures to classify sequences can be implemented using an HMM or semi–Markov conditional random field (SMCRF). CRFs allow the integration of diverse features that do not necessarily have a probabilistic nature, whereas HMMs require us to model everything as transition and emission probabilities. CRFs will be discussed in an upcoming lecture. One might wonder why these more complex methods need to be implemented, when the simpler method of checking for conservation of the reading frame worked well. The reason is that in very short regions, insertions and deletions will be very infrequent, even by chance, so there won’t be enough signal to make the distinction between protein–coding and non–protein–coding regions. In the figure below, we see a DNA sequence along the x–axis, with the rows representing an annotated gene, amount of conservation, amount of protein–coding evolutionary signature, and the result of Viterbi decoding using the SMCRF, respectively.
This is one example of how utilization of the protein–coding signature to classify regions has proven very successful. Identification of regions that had been thought to be genes but that did not have high protein–coding signatures allowed us to strongly reject 414 genes in the fly genome previously classified as CGid–only genes, which led FlyBase curators to delete 222 of them and flag another 73 as uncertain. In addition, there were also definite false negatives, as functional evidence existed for the genes under examination. Finally, in the data, we also see regions that show both conservation and a strong protein–coding signature but had not previously been marked as parts of genes, as in Figure 4.20. Some of these have been experimentally tested and have been shown to be parts of new genes or extensions of existing genes. This underscores the utility of computational biology to leverage and direct experimental work.
Leaky Stop Codons
Stop codons (TAA, TAG, TGA in DNA and UAG, UAA, UGA in RNA) typically signal the end of a gene. They clearly reflect translation termination when found in mRNA and release the amino acid chain from the ribosome. However, in some unusual cases, translation is observed beyond the first stop codon. In instances of single read–through, there is a stop codon found within a region with a clear protein–coding signature followed by a second stop–codon a short distance away. An example of this in the human genome is given in Figure 4.21. This suggests that translation continues through the first stop codon. Instances of double read–through, where two stop codons lie within a protein coding region, have also been observed. In these instances of stop codon suppression, the stop codon is found to be highly conserved, suggesting that these skipped stop codons play an important biological role.
Translational read–through is conserved in both flies, which have 350 identified proteins exhibiting stop codon read–through, and humans, which have 4 identified instances of such proteins. They are observed mostly in neuronal proteins in adult brains and brain expressed proteins in Drosophila.
The kelch gene exhibits another example of stop codon suppression at work. The gene encodes two ORFs with a single UGA stop codon between them. Two proteins are translated from this sequence, one from the first ORF and one from the entire sequence. The ratio of the two proteins is regulated in a tissue–specific manner. In the case of the kelch gene, a mutation of the stop codon from UGA to UAA results in a loss of function, suggesting that tRNA suppression is the mechanism behind stop codon suppression.
An additional example of stop codon suppression is Caki, a protein active in the regulation of neurotransmitter release in Drosophila. Open reading frames (ORFs) are DNA sequences which contain a start and stop codon. In Caki, reading the gene in the first reading frame (Frame 0) results in significantly more ORFs than reading in Frame 1 or Frame 2 (a 440 ORF excess). Figure 4.22 lists twelve possible interpretations for the ORF excess. However, because the excess is observed only in Frame 0, only the first 4 interpretations are likely:
• Stop–codon readthrough: the stop codon is suppressed when the ribosome pulls in tRNA that pairs incorrectly with the stop codon.
• Recent nonsense: Perhaps some recent nonsense mutation is causing stop codon readthrough.
• A to I editing: Contrary to what was previously thought, RNA can still be edited after transcription. In some cases the A base is changed to an I, which can be read as a G. This could change a TGA stop codon to a TGG, which encodes an amino acid. However, this phenomenon is only found in a couple of cases.
• Selenocysteine, the “21st amino acid”: Sometimes when the TGA codon is read by a certain loop which leads to a specific fold of the RNA, it can be decoded as selenocysteine. However, this only happens in four fly proteins, so can’t explain all of stop codon suppression.
Among these four, three of them (recent nonsense, A to I editing, and selenocysteine) account for only 17 of the cases. Hence, it seems that read–through must be responsible for most if not all of the remaining cases. In addition, biased stop codon usage is observed hence ruling out other processes such as alternative splicing (where RNA exons following transcription are reconnected in multiple ways leading to multiple proteins) or independent ORFs.
Read–through regions can be determined in a single species based on their pattern of codon usage. The Z–curve as shown in Figure 4.23 measures codon usage patterns in a region of DNA. From the figure, one can observe that the read–through region matches the distribution before the regular stop codon. After the second stop however, the region matches regions found after regular stops.
Another suggestion offered in class was the possibility of ribosome slippage, where the ribosome skips some bases during translation. This might cause the ribosome to skip past a stop codon. This event occurs in bacterial and viral genomes, which have a greater pressure to keep their genomes small, and therefore can use this slipping technique to read a single transcript in each different reading frame. However, humans and flies are not under such extreme pressure to keep their genomes small. Additionally, we showed above that the excess we observe beyond the stop codon is frame specific to frame 0, suggesting that ribosome slipping is not responsible.
Cells are stochastic in general and most processes tolerate mistakes at low frequencies. The system isn’t perfect and stop codon leaks happen. However, the following evidence suggests that stop codon read–through is not random but instead subject to regulatory control:
• Perfect conservation of read–through stop codons is observed in 93% of cases, which is much higher than the 24% found in background.
• Increased conservation is observed upstream of the read–through stop codon.
• Stop codon bias is observed: TGAC is the most frequent sequence found at read–through stop codons and the least frequent found at normally terminated stop codons, and it is known to be a “leaky” stop codon. TAAA is found almost universally only in non–read–through instances.
• Unusually high numbers of GCA repeats observed through read–through stop codons.
• Increased RNA secondary structure is observed following transcription suggesting evolutionarily conserved hairpins.
3Kellis M, Patterson N, Endrizzi M, Birren B, Lander E. S. 2003. Sequencing and comparison of yeast species to identify genes and regulatory elements. Nature. 423: 241–254.
4Clamp M et al. 2007. Distinguishing protein–coding and noncoding genes in the human genome. PNAS. 104: 19428–19433.
One example of functional genomic regions subject to high levels of conservation are sequences encoding microRNAs (miRNAs). miRNAs are RNA molecules that bind to complementary sequences in the 3’ untranslated region of targeted mRNA molecules, causing gene silencing. How do we find evolutionary signatures for miRNA genes and their targets, and can we use these to gain new insights on their biological functions? We will see that this is a challenging task, as miRNAs leave a highly conserved but very subtle evolutionary signal.
Computational Challenge
Predicting the location of miRNA genes and their targets is a computationally challenging problem. We can look for “hairpin” regions, where we find nucleotide sequences that are complementary to each other and predict a hairpin structure. But out of 760,355 miRNA–like hairpins found in the genome, only 60–100 were true miRNAs. So to make any test that gives us regions statistically likely to be miRNAs, we need a test with 99.99% specificity.
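The arithmetic behind that specificity requirement is worth spelling out: even a test that is 99.9% specific would flag hundreds of false hairpins, swamping the few dozen real miRNAs. A quick calculation using the numbers quoted above:

```python
candidate_hairpins = 760_355
true_mirnas = 100          # upper end of the 60-100 true miRNAs quoted above

for specificity in (0.99, 0.999, 0.9999):
    false_positives = (candidate_hairpins - true_mirnas) * (1 - specificity)
    print(f"specificity {specificity:.4f}: ~{false_positives:,.0f} false positives "
          f"vs at most {true_mirnas} true miRNAs")
```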
Figure 4.25 is an example of the conservation pattern for miRNA genes. You can see the two hairpin structures conserved in the red and blue regions, with a region of low conservation in the middle. This pattern is characteristic of miRNAs.
By analyzing evolutionary and structural features specific to miRNA, we can use combinations of these features to pick out regions of miRNAs with >4,500-fold enrichment compared to random hairpins. The following are examples of features that help pick out miRNAs:
• miRNAs bind to highly conserved target motifs in the 3’ UTR
• miRNAs can be found in introns of known genes
• miRNAs have a preference for the positive strand of DNA and for transcription factors
• miRNAs are typically not found in exonic and repetitive elements of the genome (counter-example in Figure 4.29).
• Novel miRNAs may cluster with known miRNAs, especially if they are in the same family or have a common origin
These features of miRNA-coding regions can be grouped into structural families, enabling classifiers to be built based on known RNAs in each family. Energy considerations for RNA structure can be used to support this classification into families. Within each family, both orthologous conservation (genes in different species with the same function, derived from a common ancestral gene) and paralogous conservation (duplicated genes within the same species that evolved to serve different functions) occur.
Evolutionary features:
• Correlation with conservation profile
• MFE of the consensus fold
• Structure conservation index
Structural features:
• Hairpin stability (MFE z-score)
• Number of asymmetric loops
• Number of symmetric loops
We can combine several features into one test by using a decision tree, as illustrated in Figure 4.28. At each node of the tree, a test is applied which determines which branch will be followed next. The tree is traversed starting from the root until a terminal node is reached, at which point the tree will output a classification. A decision tree can be trained using a body of classified genome subsequences, after which it can be used to predict whether new subsequences are miRNAs or not. In addition, many decision trees can be combined into a “random forest,” where several decision trees are trained. When a new nucleotide sequence needs to be classified, each tree votes on whether or not it is an miRNA, and then the votes are aggregated to determine the final classification.
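As a concrete illustration of this voting scheme, the short sketch below trains a random forest on synthetic hairpin features and reports the fraction of trees voting “miRNA” for a new candidate. The feature names, values, and class sizes are invented for illustration (using scikit-learn); they are not the actual features or data used in the study.

```python
# A minimal, illustrative sketch (synthetic features and labels, not study data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Columns (hypothetical): hairpin MFE z-score, conservation correlation, # asymmetric loops
X = np.vstack([
    rng.normal([-3.0, 0.9, 1.0], 0.5, size=(50, 3)),    # miRNA-like hairpins
    rng.normal([-0.5, 0.2, 4.0], 0.5, size=(500, 3)),   # background hairpins
])
y = np.array([1] * 50 + [0] * 500)                      # 1 = miRNA, 0 = random hairpin

forest = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(forest, X, y, cv=5).mean())

forest.fit(X, y)
candidate = np.array([[-2.8, 0.85, 1.0]])               # one new hairpin to classify
# predict_proba reports the fraction of trees voting for each class.
print("fraction of trees voting miRNA:", forest.predict_proba(candidate)[0, 1])
```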
Applying this technique to the fly genome showed 101 hairpins above the 0.95 cutoff, rediscovering 60 of 74 known miRNAs, predicting 24 novel miRNAs that were experimentally validated, and finding an additional 17 candidates that showed evidence of diverse function.
Unusual miRNA Genes
The following four “surprises” were found when looking at specific miRNA genes:
Surprise 1 Both strands might be expressed and functional. For instance, in the miR–iab–4 gene, expression of the sense and antisense strands are seen in distinct embryonic domains. Both strands score > 0.95 for miRNA prediction.
Surprise 2 Some miRNAs might have multiple 5’ ends for a single miRNA arm, giving evidence for an imprecise start site. This could give rise to multiple mature products, each potentially with its own functional targets.
Surprise 3 High scoring miRNA* regions (the star arm is complementary to the actual miRNA sequence) are very highly expressed, giving rise to regions of the genome that are both highly expressed and contain functional elements.
Surprise 4 Both miR–10 and miR–10* have been shown to be very important Hox regulators, leading to the prediction that miRNAs could be “master Hox regulators”. Pages 10 and 11 of the first set of lecture 5 slides show the importance of miRNAs that form a network of regulation for different Hox genes.
Example: Re-examining ’dubious’ protein-coding genes
Two genes, CG31044 and CG33311, were independently rejected because their conservation patterns did not match the evolutionary signatures characteristic of protein-coding genes (see Section 4.5). They were identified as precursor miRNAs based on genomic properties and high expression levels (Lin et al.). This is a rare example of miRNAs being found in previously annotated exonic sequences and illustrates the challenge of identifying miRNA evolutionary signatures.
Another class of functional element that is highly conserved across many genomes contains regulatory motifs. A regulatory motif is a highly conserved sequence of nucleotides that occurs many times throughout the genome and serves some regulatory function. For instance, these motifs might characterize enhancers, promoters, or other genomic elements.
Computationally Detecting Regulatory Motifs
Computational methods have been developed to measure conservation of regulatory motifs across the genome, and to find new unannotated motifs de novo. Known motifs are often found in regions with high conservation, so we can increase our testing power by testing for conservation, and then finding signatures for regulatory motifs.
Evaluating the pattern of conservation for known motifs versus the “null model” of regions without motifs gives the following signature:
Conservation within:         Gal4 (known motif region)    Controls
All intergenic regions       13%                          2%
Intergenic : coding          13% : 3%                     2% : 7%
Upstream : downstream        12 : 0                       1 : 1
So as we can see, regions with regulatory motifs show a much higher degree of conservation in intergenic regions and upstream of the gene of interest.
To discover novel motifs, we can use the following pipeline:
• Pick a motif “seed” consisting of two groups of three non–degenerate characters with a variable size gap in the middle.
• Use a conservation ratio to rank the seed motifs
• Expand the seed motifs to fill in the bases around the seeds using a hill climbing algorithm.
• Cluster to remove redundancy.
Discovering motifs and performing clustering has led to the discovery of many motif classes, such as tissue specific motifs, function specific motifs, and modules of cooperating motifs.
Individual Instances of Regulatory Motifs
To look for expected motif regions, we can first calculate a branch–length score for a region suspected to be a regulatory motif, and then use this score to give us a confidence level of how likely something is to be a real motif.
The branch length score (BLS) sums evidence for a given motif over branches of a phylogenetic tree. Given the pattern of presence or absence of a motif in each species in the tree, this score evaluates the total branch length of the sub–tree connecting the species that contain the motif. If all species have the motif, the BLS is 100%. Note that conservation across more distantly related species contributes more to the score, since it spans a longer evolutionary distance. If a predicted motif has been maintained over such a long evolutionary time frame, it is likely a functional element rather than a region conserved by random chance.
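To make the definition concrete, the following sketch computes a BLS on a toy, hand-made phylogeny. The tree topology, branch lengths, and species names are invented for illustration: the score is the total length of branches lying on a path between two motif-containing species, expressed as a fraction of the total tree length.

```python
# Toy branch-length-score (BLS) sketch; tree and species names are invented.
# Each node is (name, branch_length_to_parent, list_of_children).
TREE = ("root", 0.0, [
    ("anc1", 0.4, [("sp_A", 0.2, []), ("sp_B", 0.3, [])]),
    ("anc2", 0.5, [("sp_C", 0.5, []), ("sp_D", 0.6, [])]),
])

def total_length(node):
    name, branch, children = node
    return branch + sum(total_length(c) for c in children)

def _span(node, present, n_present):
    """Return (#present leaves below node, connecting branch length so far)."""
    name, branch, children = node
    if not children:
        count, length = (1 if name in present else 0), 0.0
    else:
        parts = [_span(c, present, n_present) for c in children]
        count = sum(p[0] for p in parts)
        length = sum(p[1] for p in parts)
    # The branch above this node lies on a path between two present leaves
    # only if some present species are below it and some are elsewhere.
    if 0 < count < n_present:
        length += branch
    return count, length

def bls(tree, present):
    """Fraction of total tree length covered by the subtree linking `present`."""
    if len(present) < 2:
        return 0.0
    return _span(tree, present, len(present))[1] / total_length(tree)

print(bls(TREE, {"sp_A", "sp_B", "sp_C", "sp_D"}))  # 1.0 -> motif in all species
print(bls(TREE, {"sp_A", "sp_B"}))                  # only the A/B subtree counts
```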
To create a null model, we can choose control motifs. The null model motifs should be chosen to have the same composition as the original motif, to not be too similar to each other, and to be dissimilar from known motifs. We can get a confidence score by comparing the fraction of motif instances to control motifs at a given BLS score.
4.08: Further Reading, Tools and Techniques, Bibliography
1. For more on constraint calculations and identification, refer to Lindblad-Toh et al.’s “A high-resolution map of human evolutionary constraint using 29 mammals”.
2. For more on translational read–through and its evolutionary signatures, refer to Lin et al.’s “Revisiting the protein-coding gene catalog of Drosophila melanogaster using 12 fly genomes”.
4.9: Tools and Techniques
1. For sequence alignment of proteins, see http://mafft.cbrc.jp/alignment/software/.
2. For prediction of genes through frameshifts in prokaryotes, see GeneTack.
Bibliography
[1] Joseph Felsenstein. Evolutionary trees from DNA sequences: A maximum likelihood approach. Journal of Molecular Evolution, 17:368–376, 1981. doi:10.1007/BF01734359.
In the previous chapter, we saw the importance of comparative genomics analysis for discovering functional elements. In “part IV” of this book, we will see how we can use comparative genomics for studying gene evolution across species and individuals. In both cases, however, we assume that we have access to complete and aligned genomes across multiple species.
In this chapter, we will study the challenges of genome assembly and whole-genome alignment that are the foundations of whole-genome comparative genomics methodologies. First, we will study the core algorithmic principles underlying many of the most popular genome assembly methods available today. Second, we will study the problem of whole-genome alignment, which requires understanding mechanisms of genome rearrangement (e.g., segmental duplications and translocations). The two problems of genome assembly and whole-genome alignment are similar in nature, and we close by discussing some of the parallels between them.
5.02: Genome Assembly I- Overlap-Layout-Consensus Approach
Many areas of research in computational biology rely on the availability of complete whole-genome sequence data. Yet the process to sequence a whole genome is itself non-trivial and an area of active research. The problem lies in the fact that current genome-sequencing technologies cannot continuously read from one end of a long genome sequence to the other; they can only accurately sequence small sections of base pairs (ranging from 100 to a few thousand, depending on the method), called reads. Therefore, in order to construct a sequence of millions or billions of base pairs (such as the human genome), computational biologists must find ways to combine smaller reads into larger, continuous DNA sequences. First, we will examine aspects of the experimental setup for the overlap-layout-consensus approach, and then we will move on to how reads are combined into larger sequences and what information can be learned from them.
Setting up the experiment
The first challenge that must be tackled when setting up this experiment is that we need to start with many copies of each chromosome, on the order of 10^5, in order to use this approach. The way we obtain these copies is very important and will affect our outcomes later on, since many of the comparisons we make depend on consistent data. The first way we might think to get this much DNA is to amplify a given genome. However, amplification introduces damage that will throw off our algorithms in later steps and lead to worse results. Another possible method is to inbreed the organism to get many copies of each chromosome. If we are looking to get rid of polymorphism this may be a good technique, but we also lose valuable information from the polymorphic sites when we inbreed. A suggested method for obtaining this much DNA is to use a single individual, though the organism needs to be rather large. We could also use techniques such as progeny of one or progeny of two to get as few distinct versions of each chromosome as possible. Keeping all copies of each chromosome as similar as possible is what lets us achieve high effective sequencing depth on each chromosome.
Next, let’s look at how we could decide on our read lengths given current technology. Looking at Figure 5.2, we can see that a cost-benefit analysis must be done to decide which platform to use on a given project. With current technology, we commonly use HiSeq2500 with a read length of about 250, though this is rapidly changing.
Finally, let’s look at a few sequences that cause trouble when using platforms with short reads. Sequences with high GC content (e.g. GGCGGCGATC), low GC content (e.g. AAATAATCAA), or low complexity (e.g. ATATATATA) can cause trouble with short reads. This is still an active area of research, but some possible explanations include Polymerase slippage and DNA denaturing too easily or not easily enough.
This section will examine one of the most successful early methods for computationally assembling a genome from a set of DNA reads, called shotgun sequencing (Figure 5.3). Shotgun sequencing involves randomly shearing multiple copies of the same genome into many small fragments, as if the DNA were shot with a shotgun. Typically, the DNA is actually fragmented using either sonication (brief bursts from an ultrasound) or a targeted enzyme designed to cleave the genome at specific sequence motifs. Both of these methods can be tuned to create fragments of varying sizes.
After the DNA has been amplified and fragmented, the technique developed by Frederick Sanger in 1977 called chain-termination sequencing (also called Sanger sequencing) is used to sequence the fragments. In brief, fragments are extended by DNA polymerase until a dideoxynucleotide triphosphate (ddNTP) is incorporated; these special nucleotides cause the termination of a fragment’s extension. The length of the fragment therefore becomes a proxy for where a given ddNTP was added in the sequence. One can run four separate reactions, each with a different ddNTP (A, G, C, T), and then run out the results on a gel in order to determine the relative ordering of bases. The result is many sequences of bases with corresponding per-base quality scores, indicating the probability that each base was called correctly. The shorter fragments can be fully sequenced, but the longer fragments can only be sequenced at each of their ends since the quality diminishes significantly
after about 500-900 base pairs. These paired-end reads are called mate pairs. In the rest of this section, we discuss how to use the reads to construct much longer sequences, up to the size of entire chromosomes.
Finding overlapping reads
To combine the DNA fragments into larger segments, we must find places where two or more reads overlap, i.e. where the beginning sequence of one fragment matches the end sequence of another fragment. For example, given two fragments such as ACGTTGACCGCATTCGCCATA and GACCGCATTCGCCATACGGCATT, we can construct a larger sequence based on the overlap: ACGTTGACCGCATTCGCCATACGGCATT (Figure 5.4).
One method for finding matching sequences is the Needleman-Wunsch dynamic programming algorithm, which was discussed in chapter 2. The Needleman-Wunsch method is impractical for genome assembly, however, since we would need to perform millions of pairwise alignments, each taking O(n^2) time, in order to construct an entire genome from the DNA fragments.
A better approach is to use the BLAST algorithm (discussed in chapter 3) to hash all the k-mers (unique sequences of length k) in the reads and find all the locations where two or more reads have one of the k-mers in common. This allows us to achieve O(kn) efficiency rather than O(n^2) pairwise comparisons. k can be any number smaller than the size of the reads, but varies depending on the desired sensitivity and specificity. By adjusting the read length to span the repetitive regions of the genome, we can correctly resolve these regions and come very close to the ideal of a complete, continuous genome. One popular overlap-layout-consensus assembler called Arachne uses k = 24 [2].
Given the matching k-mers, we can align each of the corresponding reads and discard any matches that are less than 97% similar. We do not require that the reads be identical since we allow for the possibility of sequencing errors and heterozygosity (i.e., a diploid organism like a human may have two different variants at a polymorphic site).
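The sketch below illustrates the k-mer indexing idea on toy reads (it is not the Arachne implementation): every k-mer is hashed to the reads containing it, and read pairs sharing at least one k-mer become candidate overlaps that would then be verified by alignment. The reads and the small k are invented for illustration.

```python
# Minimal k-mer indexing sketch for candidate overlap detection
# (toy reads and a small k; real assemblers use k ~ 24).
from collections import defaultdict
from itertools import combinations

reads = {
    "r1": "ACGTTGACCGCATTCGCCATA",
    "r2": "GACCGCATTCGCCATACGGCATT",
    "r3": "TTTTTTTTTTTTTTTTTTTTT",
}
k = 8

index = defaultdict(set)                 # k-mer -> set of read ids containing it
for rid, seq in reads.items():
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].add(rid)

candidates = set()                       # read pairs sharing at least one k-mer
for rids in index.values():
    for pair in combinations(sorted(rids), 2):
        candidates.add(pair)

print(candidates)   # {('r1', 'r2')} -- only these two reads share 8-mers
# Each candidate pair would then be aligned and kept only if the overlap
# is ~97% identical or better.
```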
Merging reads into contigs
Using the techniques described above to find overlaps between DNA fragments, we can piece together larger segments of continuous sequences called contigs. One way to visualize this process is to create a graph in which all the nodes represent reads, and the edges represent overlaps between the reads (Figure 5.5). Our graph will have transitive overlap; that is, some edges will connect disparate nodes that are already connected by intermediate nodes. By removing the transitively inferable overlaps, we can create a chain of reads that have been ordered to form a larger contig. These graph transformations are discussed in greater depth in section 5.3.1 below. In order to get a better understanding of the size of contigs, we calculate something known as N50. Because measures of contig length tend to be highly sensitive to the smallest contig cutoff, N50 is calculated as the length-weighted median. For a human, N50 is usually close to 125 kb.
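As a concrete example of the statistic, the following minimal sketch computes N50 from a list of contig lengths; the lengths are invented for illustration. N50 here is the largest length L such that contigs of length at least L together cover at least half of the assembled bases.

```python
# Minimal N50 sketch: the largest length L such that contigs of length >= L
# together cover at least half of the total assembled bases.
def n50(contig_lengths):
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

print(n50([100, 200, 300, 400, 500]))   # 400: 500 + 400 = 900 >= 1500 / 2
```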
In theory, we should be able to use the above approach to create large contigs from our reads as long as we have adequate coverage of the given region. In practice, we often encounter large sections of the genome that are extremely repetitive and as a result are difficult to assemble. For example, it is unclear exactly how to align the following two sequences: ATATATAT and ATATATATAT. Due to the extremely low information content in the sequence pattern, they could overlap in any number of ways. Furthermore, these repetitive regions may appear in multiple locations in the genome, and it is difficult to determine which reads come from which locations. Contigs made up of these ambiguous, repetitive reads are called overcollapsed contigs.
In order to determine which sections are overcollapsed, it is often possible to quantify the depth of coverage of fragments making up each contig. If one contig has significantly more coverage than the others, it is a likely candidate for an overcollapsed region. Additionally, several unique contigs may overlap one contig in the same location, which is another indication that the contig may be overcollapsed (Figure 5.6).
After fragments have been assembled into contigs up to the point of a possible repeated section, the result is a graph in which the nodes are contigs, and the edges are links between unique contigs and overcollapsed contigs (Figure 5.7).
Laying out contig graph into scaffolds
Once our fragments are assembled into contigs and contig graphs, we can use the larger mate pairs to link contigs into supercontigs or scaffolds. Mate pairs are useful both to orient the contigs and to place them in the correct order. If the mate pairs are long enough, they can often span repetitive regions and help resolve the ambiguities described in the previous section (Figure 5.8).
Unlike contigs, supercontigs may contain some gaps in the sequence due to the fact that the mate pairs connecting the contigs are only sequenced at the ends. Since we generally know how long a given mate pair is, we can estimate how many base pairs are missing, but due to the randomness of the cuts in shotgun sequencing, we may not have the data available to fill in the exact sequence. Filling in every single gap can be extremely expensive, so even the most completely assembled genomes usually contain some gaps.
Deriving consensus sequence
The goal of genome assembly is to create one continuous sequence, so after the reads have been aligned into contigs, we need to resolve any differences between them. As mentioned above, some of the overlapping reads may not be identical due to sequencing errors or polymorphism. We can often determine when there has been a sequencing error when one base disagrees with all the other bases aligned to it. Taking into account the quality scores on each of the bases, we can usually resolve these conflicts fairly easily. This method of conflict resolution is called weighted voting (Figure 5.9). Another alternative is to ignore the frequencies of each base and take the maximum quality letter as the consensus. Sometimes, you will want to keep all of the bases that form a polymorphic set because it can be important information. In this case, we would be unable to use these methods to derive a consensus sequence.
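A minimal sketch of quality-weighted voting at a single alignment column is shown below; the base calls and quality values are invented for illustration.

```python
# Minimal sketch of quality-weighted voting for one alignment column.
from collections import defaultdict

def consensus_base(calls):
    """calls: list of (base, quality) pairs from reads covering this column."""
    weight = defaultdict(float)
    for base, qual in calls:
        weight[base] += qual            # each read votes, weighted by its quality
    return max(weight, key=weight.get)

column = [("A", 40), ("A", 35), ("G", 10), ("A", 38)]   # toy calls, not real data
print(consensus_base(column))           # 'A' -- the lone low-quality G is outvoted
```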
In some cases, it is not possible to derive a consensus if, for example, the genome is heterozygous and there are equal numbers of two different bases at one location. In this case, the assembler must choose a representative.
Did You Know?
Since polymorphism can significantly complicate the assembly of diploid genomes, some researchers induce several generations of inbreeding in the selected species to reduce the amount of heterozygosity before attempting to sequence the genome.
In this section, we saw an algorithm for genome assembly from reads. This algorithm works well when the reads are 500–900 bases long or more, which is typical of Sanger sequencing. Alternate genome assembly algorithms are required if the reads we get from our sequencing methods are much shorter.
Shotgun sequencing with more modern and economical technologies gives reads that are only around 100 bases in length. The shorter reads mean that many more repeats in the genome are longer than the reads themselves. Hence, we need new and more sophisticated algorithms to do genome assembly correctly.
String graph definition and construction
The idea behind string graph assembly is similar to the graph of reads we saw in section 5.2.2. In short, we are constructing a graph in which the nodes are sequence data and the edges are overlap, and then trying to find the most robust path through all the edges to represent our underlying sequence.
Starting from the reads we get from shotgun sequencing, a string graph is constructed by adding an edge for every pair of overlapping reads. Note that the vertices of the graph denote junctions, and the edges correspond to the string of bases. A single node corresponds to each read, and reaching that node while traversing the graph is equivalent to reading all the bases up to the end of the read corresponding to the node. For example, in Figure 5.10, we have two overlapping reads A and B and they are the only reads we have. The corresponding string graph has two nodes and two edges. One edge doesn’t have a vertex at its tail end, and has A at its head end. This edge denotes all the bases in read A. The second edge goes from node A to node B, and only denotes the bases in B-A (the part of read B which is not overlapping with A). This way, when we traverse the edges once, we read the entire region exactly once. In particular, notice that we do not traverse the overlap of read A and read B twice.
There are a couple of subtleties in the string graph (figure 5.11) which need mentioning:
• We have two different colors for nodes since the DNA can be read in two directions. If the overlap is between the reads as is, then the nodes receive the same color. And if the overlap is between a read and the complementary bases of the other read, then they receive different colors.
• Secondly, if A and B overlap, then there is ambiguity in whether we draw an edge from A to B, or from B to A. Such ambiguity needs to be resolved in a consistent manner at junctions caused by repeats.
After constructing the string graph from overlapping reads, we perform two clean-up steps (a minimal sketch of both steps follows this list):
• Remove transitive edges: Transitive edges are caused by transitive overlaps, i.e. A overlaps B and B overlaps C in such a way that A overlaps C. There are randomized algorithms which remove transitive edges in O(E) expected runtime. In figure 5.12, you can see an example of removing transitive edges.
• Collapse chains: After removing the transitive edges, the graph we build will have many chains where each node has one incoming edge and one outgoing edge. We collapse all these chains to a single edge. An example of this is shown in figure 5.13.
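The following toy sketch illustrates both clean-up steps on a small invented overlap graph; real string-graph assemblers implement these operations far more carefully (and with expected linear-time randomized algorithms for transitive reduction).

```python
# Invented toy graph; real assemblers use more careful (and faster) algorithms.
from collections import defaultdict

def remove_transitive(edges):
    """Drop u->w whenever some v gives u->v and v->w (a two-step detour exists)."""
    out = defaultdict(set)
    for u, v in edges:
        out[u].add(v)
    return {(u, w) for u, w in edges
            if not any(w in out[v] for v in out[u] if v != w)}

def collapse_chains(edges):
    """Merge runs of nodes with exactly one incoming and one outgoing edge."""
    indeg, out = defaultdict(int), defaultdict(list)
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1

    def interior(n):                       # node sits inside an unbranched chain
        return indeg[n] == 1 and len(out[n]) == 1

    paths = []
    for u, v in edges:
        if interior(u):                    # chains are only started at junctions
            continue
        path = [u, v]
        while interior(path[-1]):
            path.append(out[path[-1]][0])
        paths.append(tuple(path))
    return paths

g = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}     # A->C is transitive
g = remove_transitive(g)
print(sorted(g))                  # [('A', 'B'), ('B', 'C'), ('C', 'D')]
print(collapse_chains(g))         # [('A', 'B', 'C', 'D')] -- one collapsed chain
```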
Flows and graph consistency
After doing everything mentioned above we will get a pretty complex graph, i.e. it will still have a number of junctions due to relatively long repeats in the genome compared to the length of the reads. We will now see how the concepts of flows can be used to deal with repeats.
First, we estimate the weight of each edge from the number of reads corresponding to that edge. If an edge is covered by roughly double the number of reads expected from the number of genome copies we sequenced, it is fair to assume that this region of the genome is repeated. However, this technique by itself is not accurate enough, so sometimes we only make partial estimates, saying that the weight of some edge is ≥ 2 without assigning a particular number to it.
We use reasoning from flows in order to resolve such ambiguities. We need to satisfy the flow constraint at every junction, i.e. the total weight of all the incoming edges must equal the total weight of all the outgoing edges. For example, in Figure 5.14 there is a junction with an incoming edge of weight 1, and two outgoing edges of weight ≥ 0 and ≥ 1. Hence, we can infer that the weights of the outgoing edges are exactly equal to 0 and 1 respectively. A lot of weights can be inferred this way by iteratively applying this same process throughout the entire graph.
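A toy sketch of this local inference at a single junction is shown below; iterating it over all junctions propagates known weights through the graph. The numbers mirror the example above and are otherwise arbitrary.

```python
# Toy junction inference; numbers mirror the example in the text.
def infer_outgoing(incoming_total, outgoing_lower_bounds):
    slack = incoming_total - sum(outgoing_lower_bounds)
    assert slack >= 0, "lower bounds already exceed the incoming flow"
    if slack == 0:
        # No spare flow: every outgoing edge is pinned to its lower bound.
        return list(outgoing_lower_bounds)
    return None                            # still ambiguous at this junction

print(infer_outgoing(1, [0, 1]))    # [0, 1] -- weights fully determined
print(infer_outgoing(3, [0, 1]))    # None  -- needs constraints from elsewhere
```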
Feasible flow
Once we have the graph and the edge weights, we run a min cost flow algorithm on the graph. Since larger genomes may not have a unique min cost flow, we iteratively do the following:
• Add ε penalty to all edges in solution
• Solve flow again - if there is an alternate min cost flow it will now have a smaller cost relative to the previous flow
• Repeat until we find no new edges
After doing the above, we will be able to label each edge as one of the following
Required: edges that were part of all the solutions
Unreliable: edges that were part of some of the solutions
Not required: edges that were not part of any solution
Dealing with sequencing errors
There are various sources of errors in the genome sequencing procedure. Errors are generally of two different kinds, local and global.
Local errors include insertions, deletions and mutations. Such local errors are dealt with when we are looking for overlapping reads. That is, while checking whether reads overlap, we check for overlaps while being tolerant towards sequencing errors. Once we have computed overlaps, we can derive a consensus by mechanisms such as removing indels and mutations that are not supported by any other read and are contradicted by at least two other reads.
Global errors are caused by other mechanisms, such as two different sequences joining together before being read, so that a single read comes from different places in the genome. Such reads are called chimeras. These errors are resolved while looking for a feasible flow in the network: when the edge corresponding to the chimera is in use, the amount of flow going through this edge is small compared to the flow capacity. Hence, the edge can be detected and then ignored.
Each step of the algorithm is made as robust and resilient to sequencing errors as possible. The number of genome copies sheared and sequenced is chosen so that we are able to reconstruct most of the genome (i.e., meet a quality target such as 95% or 98%).
Resources
Some popular genome assemblers using String Graphs are listed below
• Euler (Pevzner, 2001/06) : Indexing → deBruijn graphs → picking paths → consensus
• Velvet (Birney, 2010) : Short reads → small genomes → simplification → error correction
• ALLPATHS (Gnerre, 2011) : Short reads → large genomes → jumping data → uncertainty
Once we have access to whole-genome sequences for several different species, we can attempt to align them in order to infer the path that evolution took to differentiate these species. In this section we discuss some of the methods for performing whole-genome alignments between multiple species.
Global, local, and ’glocal’ alignment
The Needleman-Wunsch algorithm discussed in chapter 2 is the best way to generate an optimal alignment between two or more genome sequences of limited size. At the level of whole genomes, however, the O(n^2) time bound is impractical. Furthermore, in order to find an optimal alignment between k different species, the time for the Needleman-Wunsch algorithm is extended to O(n^k). For genomes that are millions of bases long, this run time is prohibitive (Figure 5.15).
One alternative is to use an efficient local alignment tool such as BLAST to find all of the local alignments, and then chain them together along the diagonal to form global alignments. This approach can save a significant amount of time, since the process of finding local alignments is very efficient, and then we only need to perform the time-consuming Needleman-Wunsch algorithm in the small rectangles between local alignments (Figure 5.16).
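The chaining step can be illustrated with a small sketch: given local alignment anchors with start coordinates in each genome and a score, a simple O(n^2) dynamic program selects the highest-scoring subset that increases in both genomes. The anchors below are invented for illustration, and real chainers also apply gap penalties, which are omitted here.

```python
# Toy anchors (coordinates and scores invented): choose the highest-scoring
# subset of local alignments that is increasing in both genomes.
def best_chain(anchors):
    """anchors: list of (x_start, y_start, score); returns the best chain."""
    anchors = sorted(anchors)                    # order by position in genome 1
    best = [score for _, _, score in anchors]    # best chain score ending here
    prev = [-1] * len(anchors)
    for j in range(len(anchors)):
        for i in range(j):
            if anchors[i][0] < anchors[j][0] and anchors[i][1] < anchors[j][1]:
                if best[i] + anchors[j][2] > best[j]:
                    best[j], prev[j] = best[i] + anchors[j][2], i
    j = max(range(len(anchors)), key=best.__getitem__)
    chain = []
    while j != -1:                               # trace back the winning chain
        chain.append(anchors[j])
        j = prev[j]
    return chain[::-1]

anchors = [(10, 12, 5), (40, 60, 7), (55, 20, 2), (80, 90, 4)]
print(best_chain(anchors))   # the off-diagonal anchor (55, 20, 2) is dropped
```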
Another novel approach to whole genome alignment is to extend the local alignment search to include inversions, duplications and translocations. Then we can chain these elements together using the least-cost transformations between sequences. This approach is commonly called glocal alignment, since it seeks to combine the best of local and global alignment to create the most accurate picture of how genomes evolve over time (Figure 5.17).
Lagan: Chaining local alignments
LAGAN is a popular software toolkit that incorporates many of the above ideas and can be used for local, global, glocal, and multiple alignments between species.
The regular LAGAN algorithm consists of finding local alignments, chaining local alignments along the diagonal, and then performing restricted dynamic programming to find the optimal path between local alignments.
Multi-LAGAN uses the same approach as regular LAGAN but generalizes it to multiple species alignment. In this algorithm, the user must provide a set of genomes and a corresponding phylogenetic tree. Multi-LAGAN performs pairwise alignment guided by the phylogenetic tree. It first compares highly related species, and then iteratively compares more and more distant species.
Shuffle-LAGAN is a glocal alignment tool that finds local alignments, builds a rough homology map, and then globally aligns each of the consistent parts (Figure 5.18). In order to build a homology map, the algorithm chooses the maximum scoring subset of local alignments based on certain gap and transformation penalties, which form a non-decreasing chain in at least one of the two sequences. Unlike regular LAGAN, all possible local alignment sequences are considered as steps in the glocal alignment, since they could represent translocations, inversions and inverted translocations as well as regular untransformed sequences. Once the rough homology map has been built, the algorithm breaks the homologous regions into chunks of local alignments that are roughly along the same continuous path. Finally, the LAGAN algorithm is applied to each chunk to link the local alignments using restricted dynamic programming.
By running Shuffle-LAGAN or other glocal alignment tools, we can discover inversions, translocations, and other homologous relations between different species. By mapping the connections between these rearrangements, we can gain insight into how each species evolved from the common ancestor (Figure 5.19).
5.05: Gene-based region alignment
An alternative way for aligning multiple genomes anchors genomic segments based on the genes that they contain, and uses the correspondence of genes to resolve corresponding regions in each pair of species. A nucleotide-level alignment is then constructed based on previously-described methods in each multiply-conserved region.
Because not all regions have one-to-one correspondence and the sequence is not static, this is more difficult: genes undergo divergence, duplication, and losses and whole genomes undergo rearrangements. To help overcome these challenges, researchers look at the amino-acid similarity of gene pairs across genomes and the locations of genes within each genome.
Gene correspondence can be represented by a weighted bipartite graph with nodes representing genes with coordinates and edges representing weighted sequence similarity (Figure 5.20). Orthologous relationships are one-to-one matches and paralogous relationships are one-to-many or many-to-many matches. The graph is first simplified by eliminating spurious edges and then edges are selected based on available information such as blocks of conserved gene order and protein sequence similarity.
The Best Unambiguous Subgroups (BUS) algorithm can then be used to resolve the correspondence of genes and regions. BUS extends the concept of best-bidirectional hits and uses iterative refinement with an increasing relative threshold. It uses the complete bipartite graph connectivity with integrated amino acid similarity and gene order information.
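The reciprocal best-hit idea at the core of this procedure can be sketched as follows; the gene names and similarity scores are invented, and the full BUS algorithm additionally uses gene order, blocks of conserved synteny, and iterative relative thresholds, none of which are shown here.

```python
# Toy sketch of best-bidirectional hits from a gene-to-gene similarity table
# (gene names and scores are invented for illustration).
sim = {
    ("g1_A", "g2_A"): 95, ("g1_A", "g2_B"): 40,
    ("g1_B", "g2_A"): 35, ("g1_B", "g2_B"): 88, ("g1_B", "g2_C"): 60,
    ("g1_C", "g2_C"): 20, ("g1_C", "g2_B"): 22,
}

def best_hits(sim, axis):
    """Best-scoring partner for every gene on one side of the bipartite graph."""
    best = {}
    for (g1, g2), score in sim.items():
        g, partner = (g1, g2) if axis == 0 else (g2, g1)
        if g not in best or score > best[g][1]:
            best[g] = (partner, score)
    return {g: p for g, (p, _) in best.items()}

fwd, rev = best_hits(sim, 0), best_hits(sim, 1)
orthologs = [(g1, g2) for g1, g2 in fwd.items() if rev.get(g2) == g1]
print(orthologs)    # [('g1_A', 'g2_A'), ('g1_B', 'g2_B')] -- reciprocal best hits
```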
Did You Know?
A bipartite graph is a graph whose vertices can be split into two disjoint sets U and V such that every edge connects a vertex in U to a vertex in V.
In the example of a correctly resolved gene correspondence of S.cerevisiae with three other related species, more than 90% of the genes had a one-to-one correspondence and regions and protein families of rapid change were identified.
Once we have alignments of large genomic regions (or whole genomes) across multiple related species, we can begin to make comparisons in order to infer the evolutionary histories of those regions.
Rates of evolution vary across species and across genomic regions. In S. cerevisiae, for example, 80% of ambiguities are found in 5% of the genome. Telomeres are repetitive DNA sequences at the end of chromosomes which protect the ends of the chromosomes from deterioration. Telomere regions are inherently unstable, tending to undergo rapid structural evolution, and the 80% of variation corresponds to 31 of the 32 telomeric regions. Gene families contained within these regions such as HXT, FLO, COS, PAU, and YRF show significant evolution in number, order, and orientation. Several novel and protein-coding sequences can be found in these regions. Since very few genomic rearrangements are found in S. cerevisiae aside from the telomeric regions, regions of rapid change can be identified by protein family expansions in chromosome ends.
Genes evolve at different rates. For example, as illustrated in Figure 5.22, on one extreme there is YBR184W in yeast, which shows unusually low sequence conservation and exhibits numerous insertions and deletions across species. On the other extreme there is MatA2, which shows perfect amino acid and nucleotide conservation. Mutation rates often also vary by functional classification. For example, mitochondrial ribosomal proteins are less conserved than ribosomal proteins.
The fact that some genes evolve more slowly in one species versus another may be due to factors such as longer life cycles. Lack of evolutionary change in specific genes, however, suggests that there are additional biological functions which are responsible for the pressure to conserve the nucleotide sequence. Yeast can switch mating types by switching all their A and α genes and MatA2 is one of the four yeast mating-type genes (MatA2, Matα2, MatA1, Matα1). Its role could potentially be revealed by nucleotide conservation analysis.
Fast evolving genes can also be biologically meaningful. Mechanisms of rapid protein change include:
• Protein domain creation via stretches of Glutamine (Q) and Asparagine (N) and protein-protein interactions,
• Compensatory frame-shifts which enable the exploration of new reading frames and reading/creation of RNA editing signals,
• Stop codon variations and regulated read-through where gains enable rapid changes and losses may result in new diversity
• Inteins, which are segments of proteins that can remove themselves from a protein and then rejoin the remaining protein, gain from horizontal transfers of post-translationally self-splicing inteins.
We now look at differences in gene content across different species (S.cerevisiae, S.paradoxus, S.mikatae, and S.bayanus.) A lot can be revealed about gene loss and conversion by observing the positions of paralogs across related species and observing the rates of change of the paralogs. There are 8-10 genes unique to each genome which are involved mostly with metabolism, regulation and silencing, and stress response. In addition, there are changes in gene dosage with both tandem and segment duplications. Protein family expansions are also present with 211 genes with ambiguous correspondence. All in all however, there are few novel genes in the different species.
Chromosomal Rearrangements
These are often mediated by specific mechanisms, as illustrated for Saccharomyces in Figure 5.23.
Translocations across dissimilar genes often occur across transposable genetic elements (Ty elements in yeast, for example). Transposon locations are conserved, with recent insertions appearing in old locations and long terminal repeat remnants found in other genomes. They are evolutionarily active however (for example, Ty elements in yeast are recent), and typically appear in only one genome. The evolutionary advantage of such locationally conserved transposons may lie in the possibility of mediating reversible arrangements. Inversions are often flanked by tRNA genes in opposite transcriptional orientation. This may suggest that they originate from recombination between tRNA genes.
5.07: Whole Genome Duplication
As you trace species further back in evolutionary time, you have the ability to ask different sets of questions. In class, the example used was K. waltii, which dates to about 95 million years earlier than S.cerevisiae and 80 million years earlier than S.bayanus.
Looking at the dotplot of S.cerevisiae chromosomes and K.waltii scaffolds, a divergence was noted along the diagonal in the middle of the plot, whereas most pairs of conserved region exhibit a dot plot with a clear and straight diagonal. Viewing the segment at a higher magnification (Figure 5.25), it seems that S.cerevisiae sister fragments all map to corresponding K.waltii scaffolds.
Schematically (Figure 5.26) sister regions show gene interleaving. In duplicate mapping of centromeres, sister regions can be recognized based on gene order. This observed gene interleaving provides evidence of complete genome duplication.
Bibliography
[1] EMBL AltExtron database - cassette exons.
[2] Batzoglou S et al. ARACHNE: a whole-genome shotgun assembler. Genome Research, 2002.
[3] Manolis Kellis. Lecture slides 04: Comparative genomics I. September 21, 2010.
[4] Manolis Kellis. Lecture slides 05.1: Comparative genomics II. September 23, 2010.
[5] Manolis Kellis. Lecture slides 05.2: Comparative genomics III, evolution. September 25, 2010.
[6] Kevin Chen and Nikolaus Rajewsky. The evolution of gene regulation by transcription factors and microRNAs. Nature Reviews Genetics, 2007.
[7] Douglas Robinson and Lynn Cooley. Examination of the function of two kelch proteins generated by stop codon suppression. Development, 1997.
[8] Stark A et al. Discovery of functional elements in 12 Drosophila genomes using evolutionary signatures. Nature, 2007.
[9] Angela Tan. Lecture 15 notes: Comparative genomics I: Genome annotation. November 4, 2009.
With the magnitude and diversity of the bacterial populations in the human body, the human microbiome shares many properties with the natural ecosystems studied in environmental biology. As a field with a large number of quantitative problems to tackle, bacterial genomics offers an opportunity for computational biologists to be actively involved in the progress of this research area.
There are approximately 10^14 microbial cells in an average human gut, whereas there are only about 10^13 human cells in a human body in total. Furthermore, there are 10^12 external microbial cells living on our skin. From a cell count perspective, this corresponds to 10 times more bacterial cells in our body than our own cells. From a gene count perspective, there are 100 times more genes belonging to the bacteria living in/on us than to our own cells. For this reason, these microbial communities living in our bodies are an integral part of what makes us human, and we should study these genes that are not directly encoded in our genome but still have a significant effect on our physiology.
Evolution of microbiome research
Earlier stages of microbiome research were mostly based on data collection and analysis of surveys of bacterial groups present in a particular ecosystem. Apart from collecting data, this type of research also involved sequencing of bacterial genomes and identification of gene markers for determining different bacterial groups present in the sample. The most commonly used marker for this purpose is the 16S rRNA gene, which is a section of the prokaryotic DNA that codes for ribosomal RNA. Three main features of the 16S gene that make it a very effective marker for microbiome studies are: (1) its short size (∼1500 bases), which makes it cheaper to sequence and analyze, (2) high conservation due to exact folding requirements of the ribosomal RNA it encodes, and (3) its specificity to prokaryote organisms, which allows us to differentiate it from contaminant protist, fungal, plant and animal DNAs.
A further direction in early microbial research was inferring rules about microbial ecosystems from the generated datasets. These studies investigated the initially generated microbial data, sought to understand the rules governing microbial abundance in different types of ecosystems, and inferred networks of bacterial populations regarding their co-occurrence, correlation, and causality with respect to one another.
A more recent type of microbial research takes a predictive approach and aims to model the change of bacterial populations in an ecosystem through time making use of differential equations. For example, we can model the rate of change for the population size of a particular bacterial group in human gut as an ordinary differential equation (ODE) and use this model to predict the size of the population at a future time point by integrating over the time interval.
We can further model change of bacterial populations with respect to multiple parameters, such as time and space. When we have enough data to represent microbial populations temporally and spatially, we can model them using partial differential equations (PDEs) for making predictions using multivariate functions.
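As an illustration of the ODE case described above, the sketch below integrates a two-group generalized Lotka-Volterra model with a simple forward-Euler step; the growth and interaction rates are invented and are not fit to any real data.

```python
# A toy sketch of modeling two interacting gut bacterial groups with an ODE
# (generalized Lotka-Volterra form; all rates below are invented).
growth = [0.8, 0.4]                      # intrinsic growth rates r_i
interact = [[-1.0, -0.3],                # A[i][j]: effect of group j on group i
            [-0.2, -1.0]]

def derivatives(x):
    """dx_i/dt = x_i * (r_i + sum_j A[i][j] * x_j)"""
    return [x[i] * (growth[i] + sum(interact[i][j] * x[j] for j in range(2)))
            for i in range(2)]

x, dt = [0.1, 0.1], 0.01                 # initial abundances and Euler step size
for step in range(2000):                 # integrate forward 20 time units
    dx = derivatives(x)
    x = [x[i] + dt * dx[i] for i in range(2)]

print([round(v, 3) for v in x])          # approaches the model's equilibrium
```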
Data generation for microbiome research
Data generation for microbiome research usually follows the following work-flow: (1) a sample of microbial ecosystem is taken from the particular site being studied (e.g. a patient’s skin or a lake), (2) the DNAs of the bacteria living in the sample are extracted, (3) 16S rDNA genes are sequenced, (4) conserved motifs in some fraction of the 16S gene (DNA barcodes) are clustered into operational taxonomic units (OTUs), and (5) a vector of abundance is constructed for all species in the sample. In microbiology, bacteria are classified into OTUs according to their functional properties rather than species, due to the difficulty in applying the conventional species definition to the bacterial world.
In the remainder of this chapter, a series of recent studies related to bacterial genomics and human microbiome research is described.
This study [2] is inspired by a quote by Max Delbruck: ”Any living cell carries with it the experience of a billion years of experimentation by its ancestors”. In this direction, it is possible to find evidence in the genomes of living organisms for ancient environmental changes with large biological impacts. For instance, the oxygen that most organisms currently use would have been extremely toxic to almost all life on earth before the accumulation of oxygen via oxygenic photosynthesis. It is known that this event happened approximately 2.4 billion years ago and it caused a dramatic transformation of life on earth.
A dynamic programming algorithm was developed in order to infer gene birth, duplication, loss and horizontal gene transfer events given the phylogeny of species and phylogeny of different genes. Horizontal gene transfer is the event in which bacteria transfer a portion of their genome to other bacteria from different taxonomic groups.
Figure 6.1 shows an overview of these inferred events in a phylogenetic tree focusing on prokaryote life. In each node, the size of the pie chart represents the amount of genetic change between two branches and each colored slice stands for the rate of a particular genetic modification event. Starting from the root of the tree, we see that almost the entire pie chart is represented by newly born genes represented by red. However, around 2.5 billion years ago green and blue slices become more prevalent, which represent rate of horizontal gene transfer and gene duplication events.
In Figure 6.2, a large spike can be seen during the Archean eon, representing a large amount of genetic change on earth during this particular time period. This study compared the enzymatic activities of genes born in this eon with those of the genes that were already present. On the right hand side of Figure 6.2, logarithmic enrichment levels of different metabolites are displayed. The most enriched metabolites produced by these genes were discovered to be functional in oxidation–reduction and electron transport. Overall, this study suggests that life invented the modern electron transport chain around 3.3 billion years ago, and that around 2.8 billion years ago organisms evolved to use the same proteins that produce oxygen also to breathe oxygen.
6.03: Study 2- Pediatric IBD study with Athos B
In some diseases, such as Inflammatory Bowel Disease (IBD), if the disease is not diagnosed and monitored closely, the results can be very severe, such as the removal of the patient’s colon. On the other hand, the most reliable existing diagnostic methods are very invasive (e.g. colonoscopy). An alternative approach for diagnosis can be abundance analysis of a microbial sample taken from the patient’s colon. This study aims to predict the disease state of the subject from bacterial abundances in stool samples taken from the patient.
105 samples were collected for this study among the patients of Dr. Athos Boudvaros; some of them
displaying IBD symptoms and others different diseases (control group). In Figure 6.3, each row block represents a set of bacterial groups at a taxonomic level (phylum level at the top and genus level at the bottom) and each column block represents a different patient group: control patients, Crohn’s disease (CD), and ulcerative colitis (UC). The only significant single biomarker was E. coli, which is not seen in control and CD patients but is seen in about a third of the UC patients. There seems to be no other single bacterial group whose abundance significantly discriminates between the patient groups.
Since E. coli abundance is not a clear-cut single bacterial biomarker, using it as a diagnostic tool would yield low accuracy. On the other hand, we can take the entire distribution of bacterial group abundances, feed it into a random forest, and estimate cross-validation accuracy. After the classification method was employed, it was able to tell with 90% accuracy whether the patient was diseased or not. This suggests that it is competitive with other non-invasive diagnostic approaches, which are generally highly specific but not sensitive enough.
One key difference between the control and disease groups is the decrease in the diversity of the ecosystem. This suggests that the disease status is not controlled by a single germ but by the overall robustness and resilience of the ecosystem. When diversity in the ecosystem decreases, the patient might start showing disease symptoms.
This study aims to identify how more than three hundred dietary and environmental factors affect the human microbiome. The factors, which were regularly tracked with an iPhone app, included the food the subjects ate, how much they slept, the mood they were in, and so on. Moreover, stool samples were taken from the subjects every day for a year in order to perform sequence analysis of the bacterial group abundances for a specific day
relevant to a particular environmental factor. The motivation behind carrying out this study is that it is usually very hard to get a strong signal between bacterial abundances and disease status. Exploring dietary effects on the human microbiome might help elucidate some of these confounding factors in bacterial abundance analysis. However, this study analyzed dietary and environmental factors in only two subjects’ gut ecosystems; inferring statistically significant correlations with environmental factors would require large cohorts of subjects.
Figure 6.4 shows abundance levels of different bacterial groups in the gut of the two donors throughout the experiment. One key point to notice is that within an individual, the bacterial abundance is very similar through time. However, bacterial group abundances in the gut significantly differ from person to person.
One statistically significant dietary factor that was discovered to be a predictive marker for bacterial population abundances is fiber consumption. Fiber consumption was inferred to be highly correlated with the abundance of bacterial groups such as Lachnospiraceae, Bifidobacteria, and Ruminococcaceae. In Donor B, a 10 g increase in fiber consumption increased the overall abundance of these bacterial groups by 11%.
In Figure 6.6 and Figure 6.7, horizon plots of the two donors B and A are displayed, respectively. A legend for reading these horizon plots is given in Figure 6.5: for each bacterial group, the abundance–time curve is drawn with different colors for different abundance layers; the segments of the different layers are then collapsed onto the height of a single layer, displaying only the color with the largest absolute deviation from the normal abundance; finally, negative peaks are flipped to positive peaks while preserving their original color.
In Figure 6.6, we see that during the donor’s trip to Thailand, there is a significant change in his gut bacterial ecosystem. A large number of bacterial groups disappear (shown on the lower half of the horizon plot) as soon as the donor starts living in Thailand, and as soon as the donor returns to the U.S., the abundance levels of these bacterial groups quickly return to their normal levels. Moreover, some bacterial groups that are normally considered pathogens (the first 8 groups shown on top) appear in the donor’s ecosystem almost as soon as the donor moves to Thailand and mostly disappear when he returns to the United States. This indicates that environmental factors (such as location) can cause major changes in our gut ecosystem while the factor is present, but these changes can disappear after the factor is removed.
Figures from the David lab removed due to copyright restrictions.
Figure 6.4: Gut bacterial abundances plotted through time for the two donors participating in HuGE project.
Figures from the David lab removed due to copyright restrictions.
Figure 6.5: Description of how to read a horizon plot.
Figures from the David lab removed due to copyright restrictions.
Figure 6.6: Horizon plot of Donor B in HuGE study.
In Figure 6.7, we see that after the donor is infected with salmonella, a significant portion of his gut ecosystem is replaced by other bacterial groups. A large number of bacterial groups permanently disappear during the infection and other bacterial groups replace their ecological niches. In other words, the introduction of a new environmental factor takes the bacterial ecosystem in the donor’s gut from one equilibrium point to a completely different one. Even though the bacterial population mostly consists of salmonella during the infection, before and after the infection the bacterial count stays more or less the same. The scenario that happened here is that salmonella drove some bacterial groups to extinction in the gut and similar bacterial groups took over their empty ecological niches.
In Figure 6.8, p-values are displayed for day-to-day bacterial abundance correlations for Donors A and B. In Donor A’s correlation matrix, there is high correlation within the time interval a, corresponding to pre-infection, and within the time interval b, corresponding to post-infection. However, between a and b there is almost no correlation at all. On the other hand, in the correlation matrix of Donor B, we see that the pre-Thailand and post-Thailand time intervals, c, have high correlation within and between themselves. However, the interval d, which corresponds to the time period of Donor B’s trip to Thailand, shows relatively little correlation with c. This suggests that the perturbations in the bacterial ecosystem of Donor B were not enough to cause a permanent shift of the abundance equilibrium, unlike the case of Donor A’s salmonella infection.
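A correlation matrix of this kind can be computed directly from a days-by-taxa abundance table; the sketch below uses synthetic data and Spearman rank correlation as an example statistic (the study’s exact statistic and preprocessing may differ).

```python
# Toy sketch of a day-to-day abundance correlation matrix (synthetic data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
baseline = rng.random(50)                                 # 50 bacterial groups
days_a = baseline + rng.normal(0, 0.05, (10, 50))         # days 1-10: one regime
days_b = rng.random(50) + rng.normal(0, 0.05, (10, 50))   # days 11-20: a shifted regime
abundance = np.vstack([days_a, days_b])                   # rows = days, cols = taxa

rho, pval = spearmanr(abundance, axis=1)                  # day-by-day rank correlation
print(np.round(rho[:3, :3], 2))    # high correlation within the first regime
print(np.round(rho[:3, -3:], 2))   # low correlation across the regime shift
```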
Figures from the David lab removed due to copyright restrictions.
Figure 6.7: Horizon plot of Donor A in HuGE study.
In a study by Mozaffarian et al. [4], more than a hundred thousand patients were analyzed with the goal of discovering the effect of diet and lifestyle choices on long-term weight gain and obesity. This study built a model to predict the patients’ weights based on the types and amounts of food they consumed over a certain period of time. They found that fast-food-type foods (processed meats, potato chips, sugar-sweetened beverages) were the most highly correlated with obesity. On the other hand, the level of yogurt consumption was inversely correlated with obesity.
Further experiments with mouse and human cohorts showed that, within both the control group and the fast-food group, increased consumption of yogurt leads to weight loss. In the experiment with mice, some female mice were given Lactobacillus reuteri (a group of bacteria found in yogurt) and allowed to eat as much regular food or fast food as they wanted. This resulted in significant weight loss in the group of mice that were given the purified bacterial extract.
An unexpected phenotypic effect of organic yogurt consumption was a shinier coat in the mice and dogs that were given yogurt as part of their diet. A histological analysis of skin biopsies from control and yogurt-fed mice shows that the mice fed the bacteria in yogurt had active hair follicles, leading to the development of healthier, shinier coats and hair.
6.06: Study 5- Horizontal Gene Transfer (HGT)
A study by Hehemann et al. [3] discovered a specific gene that digests a type of sulfated carbohydrate found only in seaweed sushi wrappers. This gene is found in the gut microbes of Japanese people but not North Americans. The study concluded that this specific gene was transferred at some point in history from the algae itself to the bacteria living on it, and then to the gut microbiome of a Japanese person by horizontal gene transfer. This study also suggests that, even though some bacterial groups might live in our gut for our entire lives, they can gain new functionalities throughout our lives by picking up new genes depending on the type of food that we eat.
In this direction, a study in Alm’s Laboratory investigated around 2000 bacterial genomes published in [1] with the aim of detecting genes that are 100% similar but belong to bacteria in different taxonomic groups. Any gene that is exactly the same between different bacterial groups would indicate a horizontal gene transfer event. In this study, around 100000 such instances were discovered.
When specific environments were examined, it was discovered that bacteria isolated from humans share genes mostly with other bacteria isolated from human sites. If we focus on more specific sites, we see that bacterial genomes isolated from the human gut share genes mostly with other bacteria isolated from the gut, and bacterial genomes isolated from human skin share genes mostly with others isolated from human skin. This finding suggests that, independent of the phylogeny of the bacterial groups, ecology is the most important factor determining the amount of gene transfer between bacterial groups.
In Figure 6.9, we see that between different bacterial groups taken from humans that have at least 3% 16S gene distance, there is around a 23% chance that they will share an identical gene in their genomes. Furthermore, there is more than a 40% chance that they share an identical gene if they are also sampled from the same site.
On the other hand, Figure 6.10 shows that geography has only a weak influence on horizontal gene transfer: bacterial populations sampled from the same continent and from different continents show little difference in the amount of horizontal gene transfer detected.
Figure 6.11 shows a color-coded matrix of the HGT levels between various human and non-human environments, with the top-right triangle representing the amount of horizontal gene transfer and the bottom-left triangle showing the percentage of antibiotic resistance (AR) genes among the transferred genes. In the top-right triangle, we see that there is a slight excess of HGT instances between the human microbiome and bacterial samples taken from farm animals. When we look at the corresponding percentages of antibiotic resistance genes, we see that more than 60% of these transfers are AR genes. This result shows the direct effect of feeding subtherapeutic antibiotics to livestock on the emergence of antibiotic resistance genes in the bacterial populations living in the human gut.
6.07: Study 6- Identifying virulence factors in meningitis
Bacterial meningitis is a disease caused by a very diverse set of bacteria that are able to get into the bloodstream and cross the blood-brain barrier. This study aimed to investigate the virulence factors that can turn bacteria into a type that can cause meningitis.
Figures removed due to copyright restrictions. See similar figures in this journal article: Smillie, Chris S. et al. "Ecology drives a global network of gene exchange connecting the human microbiome." Nature 480, no. 7376 (2011): 241-244.
Figure 6.9: Rate of horizontal gene transfer between different bacterial groups taken from non-human sites, human sites, same site within human, and different sites within human.
Figures removed due to copyright restrictions. See similar figures in this journal article: Smillie, Chris S. et al. "Ecology drives a global network of gene exchange connecting the human microbiome." Nature 480, no. 7376 (2011): 241-244.
Figure 6.10: Rate of horizontal gene transfer between bacterial groups sampled from the same continent and from different continents.
The study involved 70 bacterial strains isolated from meningitis patients, comprising 175172 genes in total. About 24000 of these genes had no known function. Some of the genes among these 24000 might be what enables the bacteria to cause meningitis, and they might therefore be good drug targets. Moreover, 82 genes were discovered to be involved in horizontal gene transfer. 69 of these had known functions, and 13 of them belonged to the 24000 genes for which we have no functional information. Among the genes with known function, some were related to AR and detoxification, and some were related to known virulence factors such as hemolysin, which lets the bacteria live in the bloodstream, and adhesin, which helps the bacteria latch onto the vein and potentially cross the blood-brain barrier.
6.08: Q & A
Q: Do you think that, after some time, Donor A in Study 3 will have his bacterial ecosystem return to its original pre-infection state?
A: The salmonella infection caused certain niches to be wiped out from the bacterial ecosystem of Donor A; these niches were then filled in by similar types of bacteria, and the ecosystem reached a new equilibrium. Since these niches are now dominated by the new groups of bacteria, it would not be possible for the previous bacterial groups to replace them without a large-scale change in his gut ecosystem.
Q: Is the death of certain bacterial groups in the gut during salmonella infection caused directly by the infection or is it an immune response to cure the disease?
A: It can be both, but it is very hard to tell from the data in Study 3, since we only observe a single data point corresponding to the event. A future study could investigate what happens in the immune system during the infection, for example by drawing blood from the patients over the course of the infection.
Q: Is there a particular connection between an individual’s genome and the dominant bacterial groups in the bacterial ecosystem? Would twins show more similar bacterial ecosystems?
A: Twins in general have similar bacterial ecosystems, independent of whether they live together or are separated. Although this seems at first to point to a genetic factor, monozygotic and dizygotic twins show exactly the same effect, and both also display similarity to their mother’s bacterial ecosystem. The reason is that, starting from birth, there is a period of time during which the bacterial ecosystem is programmed. The similarity between twins is based on this shared early programming more than on genetic factors.
6.09: Current Research Directions & Further Reading
Current research directions
A further extension to the HuGE study could track the mouse gut microbiome during a salmonella infection and observe the process of some bacterial groups being driven to extinction while other types of bacteria fill the ecological niches that they leave empty. A higher-resolution observation of this phenomenon in mice could illuminate how bacterial ecosystems shift from one equilibrium to another.
6.10 Further Reading
• Overview of Human Microbiome Project: commonfund.nih.gov/hmp/overview.aspx
• Lawrence A. David and Eric J. Alm. (2011). Rapid evolutionary innovation during an Archaean genetic expansion. Nature, 469(7328):93-96.
• A tutorial on the 16S rRNA gene and its use in microbiome research: http://greengenes.lbl.gov/cgi-bin/JD_Tutorial/nph-Tutorial_2Main2.cgi
• Dariush Mozaffarian, Tao Hao, Eric B. Rimm, Walter C. Willett, and Frank B. Hu. (2011). Changes in diet and lifestyle and long-term weight gain in women and men. The New England journal of medicine, 364(25):2392-2404.
• JH Hehemann, G Correc, T Barbeyron, W Helbert, M Czjzek, and G Michel. (2010). Transfer of carbohydrate-active enzymes from marine bacteria to Japanese gut microbiota. Nature, 464(5):908-12.
• The Human Microbiome Jumpstart Reference Strains Consortium. (2010). A Catalog of Reference Genomes from the Human Microbiome. Science, 328(5981):994-999
6.12 What have we learned?
In this lecture, we learned about the field of bacterial genomics in general and how bacterial ecosystems can be used to verify major environmental changes at early stages of evolution (Study 1), can act as a noninvasive diagnostic tool (Study 2), are temporarily or permanently affected by different environmental and dietary factors (Study 3), can act as the link between diet and phenotype (Study 4), can cause antibiotic resistance genes to be carried between different species’ microbiome through horizontal gene transfer (Study 5), and can be used to identify significant virulence factors in disease states (Study 6).
Bibliography
[1] The Human Microbiome Jumpstart Reference Strains Consortium. A Catalog of Reference Genomes from the Human Microbiome. Science, 328(5981):994–999, May 2010.
[2] Lawrence A. David and Eric J. Alm. Rapid evolutionary innovation during an Archaean genetic expansion. Nature, 469(7328):93–96, January 2011.
[3] JH Hehemann, G Correc, T Barbeyron, W Helbert, M Czjzek, and G Michel. Transfer of carbohydrate-active enzymes from marine bacteria to Japanese gut microbiota. Nature, 464(5):908–12, April 2010.
[4] Dariush Mozaffarian, Tao Hao, Eric B. Rimm, Walter C. Willett, and Frank B. Hu. Changes in diet and lifestyle and long-term weight gain in women and men. The New England journal of medicine, 364(25):2392–2404, June 2011.
Hidden Markov Models (HMMs) are a fundamental tool from machine learning that is widely used in computational biology. Using HMMs, we can explore the underlying structure of DNA or polypeptide sequences, detecting regions of special interest. For instance, we can identify conserved subsequences or uncover regions with different distributions of nucleotides or amino acids, such as promoter regions and CpG islands. Using this probabilistic model, we can illuminate the properties and structural components of sequences and locate genes and other functional elements.
07: Hidden Markov Models I
In this lecture we will define Markov Chains and HMMs, providing a series of motivating examples. In the second half of this lecture, we will discuss scoring and decoding. We will learn how to compute the probability of a particular combination of observations and states. We will introduce the Forward Algorithm, a method for computing the probability of a given sequence of observations, summing over all sequences of states. Finally, we will discuss the problem of determining the most likely path of states corresponding to the given observations, a goal which is achieved by the Viterbi algorithm.
In the second lecture on HMMs, we will continue our discussion of decoding by exploring posterior decoding, which allows us to compute the most likely state at each point in the sequence. We will then explore how to learn a Hidden Markov Model. We cover both supervised and unsupervised learning, explaining how to use each to learn the model parameters. In supervised learning, we have training data available that labels sequences with particular models. In unsupervised learning, we do not have labels so we must seek to partition the data into discrete categories based on discovered probabilistic similarities. In our discussion of unsupervised learning we will introduce the general and widely applicable Expectation Maximization (EM) algorithm.
7.02: Motivation
You have a new sequence of DNA, now what?
1. Align it:
• with things we know about (database search).
• with unknown things (assemble/clustering)
2. Visualize it: “Genomics rule #1”: Look at your data!
• Look for nonstandard nucleotide compositions.
• Look for k-mer frequencies that are associated with protein coding regions, recurrent data, high GC content, etc.
• Look for motifs, evolutionary signatures.
• Translate and look for open reading frames, stop codons, etc.
• Look for patterns, then develop machine learning tools to determine reasonable probabilistic models. For example, by looking at a number of quadruples, we might decide to color-code them to see where they occur most frequently.
3. Model it:
1. Make hypothesis.
2. Build a generative model to describe the hypothesis.
3. Use that model to find sequences of similar type.
We’re not looking for sequences that necessarily have common ancestors. Rather, we’re interested in sequences with similar properties. We actually don’t know how to model whole genomes, but we can model small aspects of genomes. The task requires understanding all the properties of genome regions and computationally building generative models to represent hypotheses. For a given sequence, we want to annotate regions as introns, exons, intergenic regions, promoters, or other classifiable elements.
Building this framework will give us the ability to:
• Emit (generate) sequences of similar type according to the generative model
• Recognize the hidden state that has most likely generated the observation
• Learn (train) large datasets and apply to both previously labeled data (supervised learning) and unlabeled data (unsupervised learning).
In this lecture we discuss algorithms for emission and recognition.
Why probabilistic sequence modeling?
• Biological data is noisy.
• Update previous knowledge about biological sequences.
• Probability provides a calculus for manipulating models.
• Not limited to yes/no answers, can provide degrees of belief.
• Many common computational tools are based on probabilistic models.
• Our tools: Markov Chains and HMMs.
7.03: Markov Chains and HMMs- From Example to Formalizing
Motivating Example: Weather Prediction
Weather prediction has always been difficult, especially when we would like to forecast the weather many days, weeks, or even months ahead. However, if we only need to predict the weather for the next day, we can reach decent prediction accuracy using quite simple models such as a Markov Chain or a Hidden Markov Model, by building the graphical models shown in Figure 7.2.
For the Markov Chain model on the left, four kinds of weather (Sun, Rain, Clouds, and Snow) can transition directly from one to the other. This is a “what you see is what you get” model, in that the next state depends only on the current state and there is no memory of the previous state. For the HMM on the right, however, all the types of weather are modeled as the emissions (or outcomes) of the hidden seasons (Summer, Fall, Winter, and Spring). The key insight is that the hidden state of the world (e.g. season or storm system) determines the emission probabilities, while state transitions are governed by a Markov Chain.
Formalizing Markov Chains and HMMs
To take a closer look at the Hidden Markov Model, let’s first define the key parameters in Figure 7.3. The vector x represents the sequence of observations. The vector π represents the hidden path, which is the sequence of hidden states. Each entry $a_{kl}$ of the transition matrix A denotes the probability of transitioning from state k to state l. Each entry $e_k(x_i)$ of the emission vector denotes the probability of observing $x_i$ from state k. Finally, with these parameters and Bayes’ rule, we can use $p(x_i \mid \pi_i = k)$ to estimate $p(\pi_i = k \mid x_i)$.
Markov Chains
A Markov Chain is given by a finite set of states and transition probabilities between the states. At every time step, the Markov Chain is in a particular state and undergoes a transition to another state. The probability of transitioning to each other state depends only on the current state, and in particular is independent of how the current state was reached. More formally, a Markov Chain is a triplet (Q, p, A) which consists of:
• A set of states Q.
• A transition matrix A whose elements correspond to the probability Aij of transitioning from state i to state j.
• A vector p of initial state probabilities.
The key property of Markov Chains is that they are memory-less, i.e., each state depends only on the previous state. So we can immediately define a probability for the next state, given the current state:
$P\left(x_{i} \mid x_{i-1}, \ldots, x_{1}\right)=P\left(x_{i} \mid x _{i-1}\right)$
In this way, the probability of the sequence can be decomposed as follows:
$P (x) = P (x_L, x_{L−1}, ..., x_1) = P (x_L|x_{L−1})P (x_{L−1}|x_{L−2})...P (x_2|x_1)P (x_1)$
$P(x_L)$ can also be calculated from the transition probabilities: If we multiply the initial state probabilities at time t = 0 by the transition matrix A, we get the probabilities of states at time t = 1. Multiplying by the appropriate power $A^L$ of the transition matrix, we obtain the state probabilities at time t = L.
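To make this matrix-power view concrete, here is a minimal sketch (in Python/NumPy, with made-up transition probabilities for a toy two-state weather chain; none of these numbers come from the text) that propagates an initial state distribution forward L steps:

```python
import numpy as np

# Toy two-state chain (Sun, Rain); the probabilities are illustrative only.
A = np.array([[0.8, 0.2],    # transition probabilities out of Sun
              [0.4, 0.6]])   # transition probabilities out of Rain
p0 = np.array([1.0, 0.0])    # initial state probabilities: start in Sun

L = 5
p_L = p0 @ np.linalg.matrix_power(A, L)   # state probabilities at time t = L
print(p_L, p_L.sum())                     # each row of A sums to 1, so p_L still sums to 1
```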
Hidden Markov Models
Hidden Markov Models are used as a representation of a problem space in which observations come about as a result of states of a system which we are unable to observe directly. These observations, or emissions, result from a particular state based on a set of probabilities. Thus HMMs are Markov Models where the states are hidden from the observer and instead we have observations generated with certain probabilities associated with each state. These probabilities of observations are known as emission probabilities.
Formally, a Hidden Markov Model is a 5-tuple (Q, A, p, V , E) which consists of the following parameters:
• A series of states, Q.
• A transition matrix, A
• A vector of initial state probabilities , p.
• A set of observation symbols, V , for example {A, T, C, G} or the set of amino acids or words in an English dictionary.
• A matrix of emission probabilities, E: for each state s in Q and symbol $v_k$ in V, the emission probability is $e_{sk} = P(v_k \text{ at time } t \mid q_t = s)$.
The key property of memorylessness is inherited from Markov Models. The emissions and transitions depend only on the current state and not on the past history.
7.04: Apply HMM to Real World- From Casino to Biology
The Dishonest Casino
Imagine the following scenario: You enter a casino that offers a dice-rolling game. You bet $1 and then you and a dealer both roll a die. If you roll a higher number, you win $2. Now there’s a twist to this seemingly simple game. You are aware that the casino has two types of dice:
1. Fair die: P(1)=P(2)=P(3)=P(4)=P(5)=P(6)=1/6
2. Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10 and P(6) = 1/2
The dealer can switch between these two dice at any time without you knowing it. The only information that you have are the rolls that you observe. We can represent the state of the casino die with a simple Markov model:
The model shows the two possible states, their emissions, and probabilities for transition between them. The transition probabilities are educated guesses at best. We assume that switching between the states doesn’t happen too frequently, hence the .95 chance of staying in the same state with every roll.
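As a minimal sketch, the casino model’s parameters might be written down in Python as follows (the variable names are our own; the 1/2 starting probabilities anticipate the calculations in the Running the Model section below):

```python
# Dishonest-casino HMM: states, start, transition, and emission probabilities.
states = ["F", "L"]                                   # F = fair die, L = loaded die
start  = {"F": 0.5, "L": 0.5}                         # probability of the initial state
trans  = {"F": {"F": 0.95, "L": 0.05},                # stay with probability 0.95,
          "L": {"F": 0.05, "L": 0.95}}                # switch with probability 0.05
emit   = {"F": {r: 1 / 6 for r in range(1, 7)},       # fair die: uniform over 1..6
          "L": {**{r: 0.1 for r in range(1, 6)}, 6: 0.5}}  # loaded die favors sixes

# Sanity check: the emission probabilities of each state sum to 1.
assert all(abs(sum(e.values()) - 1) < 1e-12 for e in emit.values())
```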
Staying in touch with biology: An analogy
For comparison, Figure 7.5 below gives a similar model for a situation in biology where a sequence of DNA has two potential sources: injection by a virus versus normal production by the organism itself:
Given this model as a hypothesis, we would observe the frequencies of C and G to give us clues as to the source of the sequence in question. This model assumes that viral inserts will have higher CpG prevalence, which leads to the higher probabilities of C and G occurrence.
Running the Model
Say we are at the casino and observe the sequence of rolls given in Figure 7.6. We would like to know whether it is more likely that the casino is using the fair die or the loaded die.
Let’s look at a particular sequence of rolls.
Therefore, we will consider two possible sequences of states in the underlying HMM, one in which the dealer is always using a fair die, and the other in which the dealer is always using a loaded die. We consider each execution path to understand the implications. For each case, we compute the joint probability of an observed outcome with that sequence of underlying states.
In the first case, where we assume the dealer is always using a fair die, the transition and emission probabilities are shown in Figure 7.7. The probability of this sequence of states and observed emissions is a product of terms which can be grouped into three components: 1/2, the probability of starting with the fair die; $(1/6)^{10}$, the probability of the sequence of rolls if we always use the fair die; and lastly $(0.95)^{9}$, the probability that we always continue to use the fair die.
In this model, we assume $π = {F,F,F,F,F,F,F,F,F,F}$, and we observe $x = {1,2,1,5,6,2,1,6,2,4}$.
Now we can calculate the joint probability of x and π as follows:
\begin{aligned} P(x, \pi) &=P(x \mid \pi) P(\pi) \ &=\frac{1}{2} \times P(1 \mid F) \times P(F \mid F) \times P(2 \mid F) \cdots \ &=\frac{1}{2} \times\left(\frac{1}{6}\right)^{10} \times(0.95)^{9} \ &=5.2 \times 10^{-9} \end{aligned}\nonumber
With a probability this small, this might appear to be an extremely unlikely case. In actuality, the probability is low because there are many equally likely possibilities, and no one outcome is a priori likely. The question is not whether this sequence of hidden states is likely, but whether it is more likely than the alternatives.
Let us consider the opposite extreme where the dealer always uses a loaded die, as depicted in Figure 7.8. This has a similar calculation except that we note a difference in the emission component. This time, 8 of the 10 rolls carry a probability of 1/10 because the loaded die disfavors non-sixes. The remaining two rolls of six have each a probability of 1/2 of occurring. Again we multiply all of these probabilities together according to principles of independence and conditioning. In this case, the calculations are as follows:
\begin{align*} P(x, \pi) &=\frac{1}{2} \times P(1 \mid L) \times P(L \mid L) \times P(2 \mid L) \cdots \ &=\frac{1}{2} \times\left(\frac{1}{10}\right)^{8} \times\left(\frac{1}{2}\right)^{2} \times(0.95)^{9} \ &=7.9 \times 10^{-10} \end{align*}
Note the difference in exponents. If we make a direct comparison, we can say that the situation in which a fair die is used throughout the sequence has probability $52 \times 10^{-10}$ (as compared with $7.9 \times 10^{-10}$ with the loaded die).
Therefore, it is six times more likely that the fair die was used than that the loaded die was used. This is not too surprising—two rolls out of ten yielding a 6 is not very far from the expected number 1.7 with the fair die, and farther from the expected number 5 with the loaded die.
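A small sketch that reproduces both of these joint probabilities, assuming the parameter values and the roll sequence given above:

```python
rolls = [1, 2, 1, 5, 6, 2, 1, 6, 2, 4]

start = {"F": 0.5, "L": 0.5}
trans = {"F": {"F": 0.95, "L": 0.05}, "L": {"F": 0.05, "L": 0.95}}
emit  = {"F": {r: 1 / 6 for r in range(1, 7)},
         "L": {**{r: 0.1 for r in range(1, 6)}, 6: 0.5}}

def joint_probability(x, path):
    """P(x, pi): probability of the emissions x together with one explicit state path."""
    p = start[path[0]] * emit[path[0]][x[0]]
    for i in range(1, len(x)):
        p *= trans[path[i - 1]][path[i]] * emit[path[i]][x[i]]
    return p

print(joint_probability(rolls, ["F"] * 10))   # about 5.2e-09 (all fair)
print(joint_probability(rolls, ["L"] * 10))   # about 7.9e-10 (all loaded)
```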
Adding Complexity
Now imagine the more complex, and interesting, case where the dealer switches the die at some point during the sequence. We make a guess at an underlying model based on this premise in Figure 7.9.
Again, we can calculate the likelihood of the joint probability of this sequence of states and observations. Here, six of the rolls are calculated with the fair die, and four with the loaded one. Additionally, not all of the transition probabilities are 95% anymore. The two swaps (between fair and loaded) each have a probability of 5%.
\begin{aligned} P(x, \pi) &=\frac{1}{2} \times P(1 \mid L) \times P(L \mid L) \times P(2 \mid L) \cdots \ &=\frac{1}{2} \times\left(\frac{1}{10}\right)^{2} \times\left(\frac{1}{2}\right)^{2} \times\left(\frac{1}{6}\right)^{6} \times(0.95)^{7} \times(0.05)^{2} \ &=4.67 \times 10^{-11} \end{aligned} \nonumber
Back to Biology
Now that we have formalized HMMs, we want to use them to solve some real biological problems. In fact, HMMs are a great tool for gene sequence analysis, because we can look at a sequence of DNA as being emitted by a mixture of models. These may include introns, exons, transcription factors, etc. While we may have some sample data that matches models to DNA sequences, in the case that we start fresh with a new piece of DNA, we can use HMMs to ascribe some potential models to the DNA in question. We will first introduce a simple example and think about it a bit. Then, we will discuss some applications of HMMs in solving interesting biological questions, before finally describing the HMM techniques that solve the problems that arise in such a first-attempt/naive analysis.
A simple example: Finding GC-rich regions
Imagine the following scenario: we are trying to find GC-rich regions by modeling nucleotide sequences drawn from two different distributions: background and promoter. Background regions have a uniform distribution, with probability 0.25 for each of A, T, G, and C. Promoter regions have probabilities: A: 0.15, T: 0.13, G: 0.30, C: 0.42. Given a single observed nucleotide, we cannot say anything about the region from which it originated, because either region can emit each nucleotide with some probability. We can learn the initial state probabilities from the steady-state probabilities. By looking at a sequence, we want to identify which regions originate from the background distribution (B) and which regions originate from the promoter model (P).
We are given the transition and emission probabilities based on the relative abundance and average length of the regions, where x = the vector of observable emissions consisting of symbols from the alphabet {A,T,G,C}; π = a vector of states in a path (e.g. BPPBP); and π∗ = the maximum likelihood path generating the observed sequence. In our interpretation of the sequence, the maximum likelihood path will be found by incorporating all emission and transition probabilities via dynamic programming.
HMMs are generative models, in that an HMM gives the probability of an emission given a state (using Bayes’ rule), essentially telling you how likely the state is to generate those sequences. So we can always run a generative model for transitions between states and start anywhere. In Markov Chains, the next state will give different outcomes with different probabilities. No matter which state is next, at that state the next symbol will still come out with different probabilities. HMMs are similar: you can pick an initial state based on the initial probability vector. In the example above, we will start in state B with high probability, since most locations do not correspond to promoter regions. You then draw an emission from P(X|B). Each nucleotide occurs with probability 0.25 in the background state. Say the sampled nucleotide is a G. The distribution of subsequent states depends only on the fact that we are in the background state and is independent of this emission. So the probability of remaining in state B is 0.85 and the probability of transitioning to state P is 0.15, and so on.
We can compute the probability of one such generation by multiplying the probabilities that the model makes exactly the choices we assumed. Consider the examples shown in Figures 7.11, 7.12, and 7.13.
We can calculate the joint probability of a particular sequence of states corresponding to the observed emissions as we did in the Casino examples:
\begin{aligned} P\left(x, \pi_{P}\right) &=a_{P} \times e_{P}(G) \times a_{P P} \times e_{P}(G) \times \cdots \ &=a_{P} \times(0.75)^{7} \times(0.15)^{3} \times(0.13) \times(0.3)^{2} \times(0.42)^{2} \ &=9.3 \times 10^{-7} \ P\left(x, \pi_{B}\right) &=(0.85)^{7} \times(0.25)^{8} \ &=4.9 \times 10^{-6} \ P\left(x, \pi_{\text {mixed}}\right) &=(0.85)^{3} \times(0.25)^{6} \times(0.75)^{2} \times(0.42)^{2} \times 0.3 \times 0.15 \ &=6.7 \times 10^{-7} \end{aligned} \nonumber
The pure-background alternative is the most likely option of the possibilities we have examined. But how do we know whether it is the most likely option out of all possible paths of states that could have generated the observed sequence?
The brute force approach is to examine all paths, trying all possibilities and calculating their joint probabilities P(x,π) as we did above. The sum of the probabilities of all the alternatives is 1. For example, if all states are promoters, $P(x, \pi)=9.3 \times 10^{-7}$; if all states are background, $P(x, \pi)=4.9 \times 10^{-6}$; and for the mixture of B’s and P’s as in Figure 7.13, $P(x, \pi)=6.7 \times 10^{-7}$, which is small because a large penalty is paid for the transitions between B’s and P’s. Note also that the number of possible paths is exponential in the length of the sequence. Usually, if you observe more G’s, the region is more likely to be a promoter, and if you observe more A’s and T’s, it is more likely to be background. But we need something more than just observation to support our belief. We will see how we can mathematically support our intuition in the following sections.
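To see just how expensive this brute-force strategy is, the sketch below literally enumerates every hidden path; it uses the dishonest-casino parameters from earlier in the chapter (since those are fully specified in the text), but the same idea applies to the background/promoter model. With K states and a sequence of length N the loop runs K^N times, which is exactly why we will turn to dynamic programming (the Viterbi and Forward algorithms) later in the chapter.

```python
from itertools import product

start = {"F": 0.5, "L": 0.5}
trans = {"F": {"F": 0.95, "L": 0.05}, "L": {"F": 0.05, "L": 0.95}}
emit  = {"F": {r: 1 / 6 for r in range(1, 7)},
         "L": {**{r: 0.1 for r in range(1, 6)}, 6: 0.5}}

def joint_probability(x, path):
    p = start[path[0]] * emit[path[0]][x[0]]
    for i in range(1, len(x)):
        p *= trans[path[i - 1]][path[i]] * emit[path[i]][x[i]]
    return p

rolls = [1, 2, 1, 5, 6, 2, 1, 6, 2, 4]
# Sum P(x, pi) over all 2^10 hidden paths to get P(x); feasible only for tiny examples.
total = sum(joint_probability(rolls, path) for path in product("FL", repeat=len(rolls)))
print(total)
```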
Application of HMMs in Biology
HMMs are used to answer many interesting biological questions. Some biological applications of HMMs are summarized in Figure 7.14.
7.05: Algorithmic Settings for HMMs
We use HMMs for three types of operations: scoring, decoding, and learning. We will talk about scoring and decoding in this lecture. These operations can be performed for a single path or for all possible paths. For the single-path operations, our focus is on discovering the path with maximum probability. For the all-paths operations, we are interested in a sequence of observations or emissions regardless of its corresponding paths.
Scoring
Scoring over a single path
The Dishonest Casino problem and Prediction of GC-rich Regions problem described in section 7.4 are both examples of finding the probability score corresponding to a single path. For a single path we define the scoring problem as follows:
• Input: A sequence of observations x = x1x2 . . . xn generated by an HMM M(Q, A, p, V, E) and a path of states π = π1π2 ...πn.
• Output: Joint probability, P(x,π) of observing x if the hidden state sequence is π.
The single path calculation is essentially the likelihood of observing the given sequence over a particular path using the following formula:
P(x,π) = P(x|π)P(π)
We have already seen examples of single path scoring in our Dishonest Casino and GC-rich region examples.
Scoring over all paths
We define the all paths version of scoring problem as follows:
• Input: A sequence of observations x = x1x2 ...xn generated by an HMM M(Q,A,p,V,E).
• Output: The joint probability, P(x,π) of observing x over all possible sequences of hidden states π.
The probability over all paths π of hidden states of the given sequence of observations is given by the following formula.
$P(x)=\sum_{\pi} P(x, \pi) \nonumber$
We use this score when we are interested in knowing the likelihood of a particular sequence for a given HMM. However, naively computing this sum requires considering an exponential number of possible paths. Later in the lecture we will see how to compute this quantity in polynomial time.
7.5.2 Decoding
Decoding answers the question: Given some observed sequence, what path gives us the maximum likelihood of observing this sequence? Formally we define the problem as follows:
• Decoding over a single path:
– Input: A sequence of observations x = x1x2 ...xN generated by an HMM M(Q,A,p,V,E).
– Output: The most probable path of states, $\pi^{*}=\pi_{1}^{*} \pi_{2}^{*} \ldots \pi_{N}^{*}$
• Decoding over all paths:
– Input: A sequence of observations x = x1x2 ...xN generated by an HMM M(Q,A,p,V,E).
– Output: The path of states, $\pi^{*}=\pi_{1}^{*} \pi_{2}^{*} \ldots \pi_{N}^{*}$ that contains the most likely state at each time point.
In this lecture, we will look only at the problem of decoding over a single path. The problem of decoding over all paths will be discussed in the next lecture.
For the single path decoding problem, we can imagine a brute force approach where we calculate the joint probabilities of a given emission sequence and all possible paths and then pick the path with the maximum joint probability. The problem is that there are an exponential number of paths and using such a brute force search for the maximum likelihood path among all possible paths is very time consuming and impractical. Dynamic Programming can be used to solve this problem. Let us formulate the problem in the dynamic programming approach.
We would like to find out the most likely sequence of states based on the observation. As inputs, we are given the model parameters ei(s),the emission probabilities for each state, and aijs, the transition probabilities. The sequence of emissions x is also given. The goal is to find the sequence of hidden states, π, which maximizes the joint probability with the given sequence of emissions. That is,
$\pi^{*}=\arg \max _{\pi} P(x, \pi) \nonumber$
Given the emitted sequence x we can evaluate any path through hidden states. However, we are looking for the best path. We start by looking for the optimal substructure of this problem.
For a best path, we can say that the best path through a given state must contain within it the following:
• The best path to the previous state
• The best transition from the previous state to this state
• The best path to the end state
Therefore the best path can be obtained based on the best path of the previous states, i.e., we can find a recurrence for the best path. The Viterbi algorithm is a dynamic programming algorithm that is commonly used to obtain the best path.
Most probable state path: the Viterbi algorithm
Suppose vk(i) is the known probability of the most likely path ending at position (or time instance) i in state k for each k. Then we can compute the corresponding probabilities at time i + 1 by means of the following recurrence.
$v_{l}(i+1)=e_{l}\left(x_{i+1}\right) \max _{k}\left(a_{k l} v_{k}(i)\right)\nonumber$
The most probable path π∗, or the maximum of P(x,π), can be found recursively. Assuming we know vj (i − 1), the score of the maximum path up to time i − 1, we can extend the computation to the next time step. The new maximum-score path for each state depends on
• The maximum score of the previous states
• The transition probability
• The emission probability.
In other words, the new maximum score for a particular state at time i is the one that maximizes the transition of all possible previous states to that particular state (the penalty of transition multiplied by their maximum previous scores multiplied by emission probability at the current time).
All sequences have to start in state 0 (the begin state). By keeping pointers backwards, the actual state sequence can be found by backtracking. The solution of this Dynamic Programming problem is very similar to the alignment algorithms that were presented in previous lectures.
The steps of the Viterbi algorithm [2] are summarized below:
1. Initialization $(i=0): v_{0}(0)=1, v_{k}(0)=0 \text { for } k>0$
2. Recursion $(i=1 \ldots N): v_{k}(i)=e_{k}\left(x_{i}\right) \max _{j}\left(a_{j k} v_{j}(i-1)\right) ; \operatorname{ptr}_{i}(k)=\arg \max _{j}\left(a_{j k} v_{j}(i-1)\right)$
3. Termination: $P\left(x, \pi^{*}\right)=\max _{k} v_{k}(N) ; \pi_{N}^{*}=\arg \max _{k} v_{k}(N)$
4. Traceback $(i=N \ldots 1): \pi_{i-1}^{*}=p t r_{i}\left(\pi_{i}^{*}\right)$
As we can see in Figure 7.16, we fill the matrix from left to right and trace back. Each position in the matrix has K states to consider and there are KN cells in the matrix, so the required computation time is $O(K^2N)$ and the required space is $O(KN)$ to remember the pointers. In practice, we use log scores for the computation. Note that the running time has been reduced from exponential to polynomial.
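Here is a minimal log-space sketch of the Viterbi recursion and traceback; it folds the begin state into the initial probabilities instead of using a separate state 0, and uses the casino parameters from earlier in the chapter as a test case:

```python
import math

def viterbi(x, states, start, trans, emit):
    """Return the most probable state path for observations x and its log joint probability."""
    V = [{k: math.log(start[k]) + math.log(emit[k][x[0]]) for k in states}]
    ptr = [{}]
    for i in range(1, len(x)):
        V.append({}); ptr.append({})
        for k in states:
            best = max(states, key=lambda j: V[i - 1][j] + math.log(trans[j][k]))
            V[i][k] = math.log(emit[k][x[i]]) + V[i - 1][best] + math.log(trans[best][k])
            ptr[i][k] = best
    last = max(states, key=lambda k: V[-1][k])        # termination
    path = [last]
    for i in range(len(x) - 1, 0, -1):                # traceback via the stored pointers
        path.append(ptr[i][path[-1]])
    return path[::-1], V[-1][last]

states = ["F", "L"]
start  = {"F": 0.5, "L": 0.5}
trans  = {"F": {"F": 0.95, "L": 0.05}, "L": {"F": 0.05, "L": 0.95}}
emit   = {"F": {r: 1 / 6 for r in range(1, 7)},
          "L": {**{r: 0.1 for r in range(1, 6)}, 6: 0.5}}
path, log_p = viterbi([1, 2, 1, 5, 6, 2, 1, 6, 2, 4], states, start, trans, emit)
print("".join(path), log_p)    # best path and log P(x, pi*)
```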
Evaluation
Evaluation is about answering the question: How well does our model of the data capture the actual data? Given a sequence x, many paths can generate this sequence. The question is how likely is the sequence given the model? In other words, is this a good model? Or, how well does the model capture the exact characteristics of a particular sequence? We use evaluation of HMMs to answer these questions. Additionally, with evaluation we can compare different models.
Let us first provide a formal definition of the Evaluation problem.
• Input: A sequence of observations x = x1x2 ...xN and an HMM M(Q,A,p,V,E).
• Output: The probability that x was generated by M summed over all paths.
We know that if we are given an HMM we can generate a sequence of length n using the following steps:
• Start at state $\pi_1$ according to probability $a_{0\pi_1}$ (obtained using the vector p).
• Emit letter $x_1$ according to emission probability $e_{\pi_1}(x_1)$.
• Go to state $\pi_2$ according to the transition probability $a_{\pi_1\pi_2}$.
• Keep doing this until we emit $x_N$.
Thus we can emit any sequence and calculate its likelihood. However, many state sequences can emit the same x. How, then, do we calculate the total probability of generating a given x over all paths? That is, our goal is to obtain the following probability:
$P(x \mid M)=P(x)=\sum_{\pi} P(x, \pi)=\sum_{\pi} P(x \mid \pi) P(\pi)\nonumber$
The challenge in obtaining this probability is that there are too many paths (an exponential number) and each path has an associated probability. One approach might be to use just the Viterbi path and ignore the others, since we already know how to obtain this path. But its probability is very small, as it is only one of many possible paths. It is a good approximation only if it carries a large share of the probability mass. In other cases, the Viterbi path will give us an inaccurate approximation. The correct approach for calculating the exact sum iteratively is through the use of dynamic programming. The algorithm that does this is known as the Forward Algorithm.
The Forward Algorithm
First we derive the formula for the forward probability $f_l(i)$.
\begin{aligned} f_{l}(i) &=P\left(x_{1} \ldots x_{i}, \pi_{i}=l\right) \ &=\sum_{\pi_{1} \ldots \pi_{i-1}} P\left(x_{1} \ldots x_{i-1}, \pi_{1}, \ldots, \pi_{i-2}, \pi_{i-1}, \pi_{i}=l\right) e_{l}\left(x_{i}\right) \ &=\sum_{k} \sum_{\pi_{1} \ldots \pi_{i-2}} P\left(x_{1} \ldots x_{i-1}, \pi_{1}, \ldots, \pi_{i-2}, \pi_{i-1}=k\right) a_{k l} e_{l}\left(x_{i}\right) \ &=\sum_{k} f_{k}(i-1) a_{k l} e_{l}\left(x_{i}\right) \ &=e_{l}\left(x_{i}\right) \sum_{k} f_{k}(i-1) a_{k l} \end{aligned}
The full algorithm[2] is summarized below:
• Initialization $(i=0): f_{0}(0)=1, f_{k}(0)=0 \text { for } k>0$
• Iteration $(i=1 \ldots N): f_{k}(i)=e_{k}\left(x_{i}\right) \sum_{j} f_{j}(i-1) a_{j k}$
• Termination: $P\left(x\right)=\sum_{k} f_{k}(N)$
From Figure 7.17, it can be seen that the Forward algorithm is very similar to the Viterbi algorithm. In the Forward algorithm, summation is used instead of maximization. Here we can reuse computations from the previous problem, including the penalty of emissions, the penalty of transitions, and the sums over previous states. The required computation time is $O(K^2N)$ and the required space is $O(KN)$. The drawback of this algorithm is that, in practice, sums of probabilities are difficult to compute in log space; therefore, approximations and scaling of probabilities are used instead.
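For comparison with the brute-force enumeration earlier, here is a compact sketch of the Forward recursion in plain (non-log) space, again folding the begin state into the initial probabilities; for longer sequences one would move to log probabilities or scaling as noted above:

```python
def forward(x, states, start, trans, emit):
    """Total probability P(x) of the observations, summed over all hidden paths."""
    f = {k: start[k] * emit[k][x[0]] for k in states}          # initialization
    for obs in x[1:]:                                          # iteration
        f = {k: emit[k][obs] * sum(f[j] * trans[j][k] for j in states) for k in states}
    return sum(f.values())                                     # termination

states = ["F", "L"]
start  = {"F": 0.5, "L": 0.5}
trans  = {"F": {"F": 0.95, "L": 0.05}, "L": {"F": 0.05, "L": 0.95}}
emit   = {"F": {r: 1 / 6 for r in range(1, 7)},
          "L": {**{r: 0.1 for r in range(1, 6)}, 6: 0.5}}
print(forward([1, 2, 1, 5, 6, 2, 1, 6, 2, 4], states, start, trans, emit))
# should match the sum obtained by enumerating all 2^10 paths explicitly
```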
7.06: An Interesting Question- Can We Incorporate Memory in Our Model?
The answer to this question is: Yes, we can! But how? Recall that Markov models are memoryless. In other words, all the memory of the model is encoded in its states. So, in order to store additional information, we must increase the number of states. Now, look back at the biological example we gave in Section 7.4.2. In our model, state emissions were dependent only on the current state, and the current state encoded only one nucleotide. But what if we want our model to count di-nucleotide frequencies (for CpG islands1), tri-nucleotide frequencies (for codons), or di-codon frequencies involving six nucleotides? We need to expand the number of states.
For example, the last-seen nucleotide can be incorporated into the HMM’s “memory” by splitting the plus and minus states from our High-GC/Low-GC HMM into multiple states: one for each nucleotide/region combination, as in Figure 7.18.
Moving from two to eight states allows us to retain memory of the last nucleotide observed, while also distinguishing between two distinct regions. Four new states now correspond to each of the original two states in the High/Low-GC HMM. Whereas the transition weights in the smaller HMM were based purely on the frequencies of individual nucleotides, now in the larger one, they are based on di-nucleotide frequencies.
With this added power, certain di-nucleotide sequences, such as CpG islands, can be modeled specifically: the transition from C+ to G+ can be assigned greater weight than the transition from A+ to G+. Further, transitions between + and - can be modeled more specifically to reflect the frequency (or infrequency) of particular di-nucleotide sequences within one or the other.
The process of adding memory to an HMM can be generalized and more memory can be added to allow the recognition of sequences of greater length. For instance, we can detect codon triplets with 32 states, or di-codon sextuplets with 2048 states. Memory within the HMM allows for increasingly tailored specificity in scanning.
1CpG stands for C-phosphate-G. So, a CpG island refers to a region where the CG di-nucleotide appears on the same strand.
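As a tiny sketch of this state-splitting idea (hypothetical variable names; it simply enumerates the eight (region, last-nucleotide) states of Figure 7.18):

```python
# One state per (region, last nucleotide) pair: 2 regions x 4 nucleotides = 8 states.
regions, alphabet = ["+", "-"], "ACGT"
states = [(region, last) for region in regions for last in alphabet]
print(len(states), states)

# A transition weight can now depend on the di-nucleotide it completes, e.g. the
# transition ('+', 'C') -> ('+', 'G') can be given greater weight than ('+', 'A') -> ('+', 'G').
```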
7.07: Further Reading & What Have We Learned
Length Distributions of States and Generalized Hidden Markov Models
Given a Markov chain with the transition from any state to the end state having probability τ , the probability of generating a sequence of length L (and then finishing with a transition to the end state) is given by:
$\tau(1-\tau)^{L-1} \nonumber$
Similarly, in the HMMs that we have been examining, the length of stay in each state will be exponentially distributed, which is not appropriate for many purposes. (For example, in a genomic sequence, an exponential distribution does not accurately capture the lengths of genes, exons, introns, etc.) How can we construct a model that does not output state sequences with an exponential distribution of lengths? Suppose we want to make sure that our sequence has length exactly 5. We might construct a sequence of five states with only a single path permitted by the transition probabilities. If we include a self loop in one of the states, we will output sequences of minimum length 5, with longer sequences exponentially distributed. Suppose we have a chain of n states, with all paths starting in state π1 and transitioning to an end state after πn. Also assume that the transition probability between state πi and πi+1 is 1−p, while the self-transition probability of state πi is p. The probability that a sequence generated by this Markov chain has length L is given by:
$\left(\begin{array}{l} L-1 \ n-1 \end{array}\right) p^{L-n}(1-p)^{n} \nonumber$
This is called the negative binomial distribution.
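A quick numerical sanity check of this length distribution, with arbitrary choices of n and p:

```python
from math import comb

def length_prob(L, n, p):
    """P(sequence length = L) for a chain of n states, each with self-loop probability p."""
    return comb(L - 1, n - 1) * p ** (L - n) * (1 - p) ** n

n, p = 5, 0.9
probs = [length_prob(L, n, p) for L in range(n, 2000)]
print(sum(probs))                                                 # approaches 1 as the cutoff grows
print(max(range(n, 2000), key=lambda L: length_prob(L, n, p)))    # most likely output length
```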
More generally, we can adapt HMMs to produce output sequences of arbitrary length. In a Generalized Hidden Markov Model [1] (also known as a hidden semi-Markov model), the output of each state is a string of symbols, rather than an individual symbol. The length as well as content of this output string can be chosen based on a probability distribution. Many gene finding tools are based on generalized hidden Markov models.
Conditional random fields
The conditional random field model is a discriminative undirected probabilistic graphical model that is used as an alternative to HMMs. It is used to encode known relationships between observations and construct consistent interpretations. It is often used for labeling or parsing of sequential data, and is widely used in gene finding. The following resources can be helpful in order to learn more about CRFs:
• Lecture on Conditional Random Fields from the Probabilistic Graphical Models course: class.coursera.org/pgm/lecture/preview/33. For background, you might also want to watch the two previous segments, on pairwise Markov networks and general Gibbs distributions.
• Conditional random fields in biology: www.cis.upenn.edu/~pereira/papers/crf.pdf
• Conditional Random Fields tutorial: http://people.cs.umass.edu/~mccallum...f-tutorial.pdf
What Have We Learned?
In this section, the main contents we covered are as following:
• First, we introduced the motivation behind adopting Hidden Markov Models in our analysis of genome annotation.
• Second, we formalized Markov Chains and HMMs using the weather prediction example.
• Third, we got a sense of how to apply HMMs to real-world data by looking at the Dishonest Casino and GC-rich region problems.
• Fourth, we systematically introduced the algorithmic settings of HMMs and went into detail on three of them:
– Scoring: scoring over single path
– Scoring: scoring over all paths
– Decoding: Viterbi decoding for determining the most likely path
• Finally, we discussed the possibility of introducing memory in the analysis of HMM and provided further readings for interested readers.
Bibliography
[1] Introduction to GHMMs: www.cs.tau.ac.il/~rshamir/algmb/00/scribe00/html/lec07/node28.html.
[2] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological sequence analysis. eleventh edition, 2006.
Introduction to Hidden Markov Models
In the last lecture, we familiarized ourselves with the concept of discrete-time Markov chains and Hidden Markov Models (HMMs). In particular, a Markov chain is a discrete random process that abides by the Markov property, i.e. that the probability of the next state depends only on the current state; this property is also frequently called ”memorylessness.” To model how states change from step to step, the Markov chain uses a matrix of transition probabilities. In addition, it is characterized by a one-to-one correspondence between the states and observed symbols; that is, the state fully determines all relevant observables. More formally, a Markov chain is fully defined by the following variables:
• πi ∈ Q, the state at the ith step in a sequence of finite states Q of length N that can hold a value from a finite alphabet Σ of length K
• ajk, the transition probability of moving from state j to state k, P(πi = k|πi−1 = j), for each j,k in Q
• a0j ∈ P , the probability that the initial state will be j
Examples of Markov chains are abundant in everyday life. In the last lecture, we considered the canonical example of a weather system in which each state is either rain, snow, sun or clouds and the observables of the system correspond exactly to the underlying state: there is nothing that we don’t know upon making an observation, as the observation, i.e. whether it is sunny or raining, fully determines the underlying state, i.e. whether it is sunny or raining. Suppose, however, that we are considering the weather as it is probabilistically determined by the seasons - for example, it snows more often in the winter than in the spring - and suppose further that we are in ancient times and did not yet have access to knowledge about what the current season is. Now consider the problem of trying to infer the season (the hidden state) from the weather (the observable). There is some relationship between season and weather such that we can use information about the weather to make inferences about what season it is (if it snows a lot, it’s probably not summer); this is the task that HMMs seek to undertake. Thus, in this situation, the states, the seasons, are considered “hidden” and no longer share a one-to-one correspondence with the observables, the weather. These types of situations require a generalization of Markov chains known as Hidden Markov Models (HMMs).
Did You Know?
Markov Chains may be thought of as WYSIWYG - What You See Is What You Get
HMMs incorporate additional elements to model the disconnect between the observables of a system and the hidden states. For a sequence of length N, each observable state is instead replaced by a hidden state (the season) and a character emitted from that state (the weather). It is important to note that characters from each state are emitted according to a series of emission probabilities (say there is a 50% chance of snow, 30% chance of sun, and 20% chance of rain during winter). More formally, the two additional descriptors of an HMM are:
• xi ∈ X, the emission at the ith step in a sequence of finite characters X of length N that can hold a character from a finite set of observation symbols vl ∈ V
• $e_k(v_l) \in E$, the emission probability of emitting character $v_l$ when the state is k, $P(x_i = v_l \mid \pi_i = k)$
In summary, an HMM is defined by the following variables:
• ajk, ek(vl), and a0j that model the discrete random process
• πi, the sequence of hidden states
• xi, the sequence of observed emissions
Genomic Applications of HMMs
The figure below shows some genomic applications of HMMs
Details of some of the applications shown in Figure 8.1 include:
• Detection of Protein coding conservation
This is similar to the detection of protein-coding exons in that the emissions are not nucleotides, but it differs in the sense that, instead of emitting codons, the states emit substitution frequencies of the codons.
• Detection of Protein coding gene structures
Here, it is important for different states to model first, last and middle exons independently, because they have distinct relevant structural features: for example, the first exon in a transcript goes through a start codon, the last exon goes through a stop codon, etc., and to make the best predictions, our model should encode these features. This differs from the application of detecting protein coding exons because in this case, the position of the exon is unimportant.
It is also important to differentiate between introns 1,2 and 3 so that the reading frame between one exon and the next exon can be remembered e.g. if one exon stops at the second codon position, the next one has to start at the third codon position. Therefore, the additional intron states encode the codon position.
• Detection of chromatin states
Chromatin state models are dynamic and vary from cell type to cell type so every cell type will have its own annotation. They will be discussed in fuller detail in the genomics lecture including strategies for stacking/concatenating cell types.
Viterbi decoding
Previously, we demonstrated that when given a full HMM (Q,A,X,E,P), the likelihood that the discrete random process produced the provided series of hidden states and emissions is given by:
$P\left(x_{1}, \ldots, x_{N}, \pi_{1}, \ldots, \pi_{N}\right)=a_{0 \pi_{1}} \prod_{i} e_{\pi_{i}}\left(x_{i}\right) a_{\pi_{i} \pi_{i+1}}$
This corresponds to the total joint probability, P(x,π). Usually, however, the hidden states are not given and must be inferred; we’re not interested in knowing the probability of the observed sequence given an underlying model of hidden states, but rather want to use the observed sequence to infer the hidden states, such as when we use an organism’s genomic sequence to infer the locations of its genes. One solution to this decoding problem is known as the Viterbi decoding algorithm. Running in $O(K^2N)$ time and $O(KN)$ space, where K is the number of states and N is the length of the observed sequence, this algorithm determines the sequence of hidden states (the path π) that maximizes the joint probability of the observables and states, i.e. P(x,π). Essentially, this algorithm defines Vk(i) to be the probability of the most likely path ending at state πi = k, and it utilizes the optimal substructure argument that we saw in the sequence alignment module of the course to recursively compute Vk(i) = ek(xi) × maxj(Vj(i − 1)ajk) in a dynamic programming algorithm.
Forward Algorithm
Returning for a moment to the problem of ’scoring’ rather than ’decoding,’ another problem that we might want to tackle is that of, instead of computing the probability of a single path of hidden state emitting the observed sequence, calculating the total probability of the sequence being produced by all possible paths. For example, in the casino example, if the sequence of rolls is long enough, the probability of any single observed sequence and underlying path is very low, even if it is the single most likely sequence-path combination. We may instead want to take an agnostic attitude toward the path and assess the total probability of the observed sequence arising in any way.
In order to do that, we proposed the Forward algorithm, which is described in Figure 8.2
$\begin{array}{l} \text { Input: } x=x_{1} \ldots x_{N} \ \text { Initialization: } \ \qquad f_{0}(0)=1, f_{k}(0)=0, \quad \text { for all } k>0 \ \text { Iteration: } \ \qquad f_{k}(i)=e_{k}\left(x_{i}\right) \times \sum_{j} a_{j k} f_{j}(i-1) \end{array} \nonumber$
$\text {Termination:} \ P\left(x\right)=\sum_{k} f_{k}(N) \nonumber$
The forward algorithm first calculates the joint probability of observing the first t emitted characters and being in state k at time t. More formally,
$f_{k}(t)=P\left(\pi_{t}=k, x_{1}, \ldots, x_{t}\right)$
Given that the number of paths is exponential in t, dynamic programming must be employed to solve this problem. We can develop a simple recursion for the forward algorithm by employing the Markov property as follows:
$f_{k}(t)=\sum_{l} P\left(x_{1}, \ldots, x_{t}, \pi_{t}=k, \pi_{t-1}=l\right)=\sum_{l} P\left(x_{1}, \ldots, x_{t-1}, \pi_{t-1}=l\right) * P\left(x_{t}, \pi_{t}=k \mid \pi_{t-1}=l\right)$
Recognizing that the first term corresponds to fl(t − 1) and that the second term can be expressed in terms of transition and emission probabilities, this leads to the final recursion:
$f_{k}(t)=e_{k}\left(x_{t}\right) \sum_{l} f_{l}(t-1) * a_{l k}$
Intuitively, one can understand this recursion as follows: Any path that is in state k at time t must have come from a path that was in state l at time t − 1. The contribution of each of these sets of paths is then weighted by the cost of transitioning from state l to state k. It is also important to note that the Viterbi algorithm and forward algorithm largely share the same recursion. The only difference between the two algorithms lies in the fact that the Viterbi algorithm, seeking to find only the single most likely path, uses a maximization function, whereas the forward algorithm, seeking to find the total probability of the sequence over all paths, uses a sum.
We can now compute fk(t) based on a weighted sum of all the forward algorithm results tabulated during the previous time step. As shown in Figure 8.2, the forward algorithm can be easily implemented in a K × N dynamic programming table. The first column of the table is initialized according to the initial state probabilities $a_{0i}$ and the algorithm then proceeds to process each column from left to right. Because there are KN entries and each entry examines a total of K other entries, this leads to $O(K^2N)$ time complexity and $O(KN)$ space.
In order now to calculate the total probability of a sequence of observed characters under the current HMM, we need to express this probability in terms of the forward algorithm gives in the following way:
$P\left(x_{1}, \ldots, x_{n}\right)=\sum_{l} P\left(x_{1}, \ldots, x_{n}, \pi_{N}=l\right)=\sum_{l} f_{l}(N)$
Hence, the sum of the elements in the last column of the dynamic programming table provides the total probability of an observed sequence of characters. In practice, given a sufficiently long sequence of emitted characters, the forward probabilities decrease very rapidly. To circumvent issues associated with storing small floating point numbers, log-probabilities are used in the calculations instead of the probabilities themselves. This alteration requires a slight adjustment to the algorithm and the use of a Taylor series expansion for the exponential function.
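One standard way to perform such sums without leaving log space is the log-sum-exp trick; a minimal sketch:

```python
import math

def log_sum(log_p, log_q):
    """Return log(p + q) given log p and log q, without ever forming p or q directly."""
    if log_p < log_q:
        log_p, log_q = log_q, log_p
    return log_p + math.log1p(math.exp(log_q - log_p))

# Adding two probabilities so small that exp() would underflow to 0.0:
print(log_sum(-2000.0, -2001.0))   # a finite log-probability near -1999.69
```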
This lecture
• This lecture will discuss posterior decoding, an algorithm which again will infer the hidden state sequence π that maximizes a different metric. In particular, it finds the most likely state at every position over all possible paths and does so using both the forward and backward algorithm.
• Afterwards, we will show how to encode “memory” in a Markov chain by adding more states to search a genome for dinucleotide CpG islands.
• We will then discuss how to use Maximum Likelihood parameter estimation for supervised learning with a labelled dataset
• We will also briefly see how to use Viterbi learning for unsupervised estimation of the parameters of an unlabelled dataset
• Finally, we will learn how to use Expectation Maximization (EM) for unsupervised estimation of parameters of an unlabelled dataset, where the specific algorithm for HMMs is known as the Baum-Welch algorithm.
Motivation
Although the Viterbi decoding algorithm provides one means of estimating the hidden states underlying a sequence of observed characters, another valid means of inference is provided by posterior decoding.
Posterior decoding provides the most likely state at any point in time. To gain some intuition for posterior decoding, let’s see how it applies to the situation in which a dishonest casino alternates between a fair and loaded die. Suppose we enter the casino knowing that the unfair die is used 60 percent of the time. With this knowledge and no die rolls, our best guess for the current die is obviously the loaded one. After one roll, the probability that the loaded die was used is given by
$P(\text {die}=\text {loaded} \mid \text {roll}=k)=\frac{P(\text {die}=\text {loaded}) * P(\text {roll}=k \mid \text {die}=\text {loaded})}{P(\text {roll}=k)}$
If we instead observed a sequence of N die rolls, how do we perform a similar sort of inference? By allowing information to flow between the N rolls and influence the probability of each state, posterior decoding is a natural extension of the above inference to a sequence of arbitrary length. More formally, instead of identifying a single path of maximum likelihood, posterior decoding considers the probability of any path lying in state k at time t given all of the observed characters, i.e. P(πt = k|x1,...,xn). The state that maximizes this probability for a given time is then considered the most likely state at that point.
It is important to note that in addition to information flowing forward to determine the most likely state at a point, information may also flow backward from the end of the sequence to augment or reduce the likelihood of each state at that point. This is partly a natural consequence of the reversibility of Bayes’ rule: our probabilities change from prior probabilities into posterior probabilities upon observing more data. To elucidate this, imagine the casino example again. As stated earlier, without observing any rolls, the state at time 0 is most likely to be unfair: this is our prior probability. If the first roll is a 6, our belief that the state at time 1 is unfair is reinforced (since rolling sixes is more likely with an unfair die). If a 6 is rolled again, information flows backward from the second die roll and reinforces our belief in an unfair die at time 1 even more. The more rolls we have, the more information flows backward to reinforce or contradict our beliefs about each state, illustrating how information flows both backward and forward to affect our belief about the states in posterior decoding.
Using some elementary manipulations, we can rearrange this probability into the following form using Bayes’ rule:
$\pi_{t}^{*}=\operatorname{argmax}_{k} P\left(\pi_{t}=k \mid x_{1}, \ldots, x_{n}\right)=\operatorname{argmax}_{k} \frac{P\left(\pi_{t}=k, x_{1}, \ldots, x_{n}\right)}{P\left(x_{1}, \ldots, x_{n}\right)}$
Because P (x) is a constant, we can neglect it when maximizing the function. Therefore,
$\pi_{t}^{*}=\operatorname{argmax}_{k} P\left(\pi_{t}=k, x_{1}, \ldots, x_{t}\right) * P\left(x_{t+1}, \ldots, x_{n} \mid \pi_{t}=k, x_{1}, \ldots, x_{t}\right)$
Using the Markov property, we can simply write this expression as follows:
$\pi_{t}^{*}=\operatorname{argmax}_{k} P\left(\pi_{t}=k, x_{1}, \ldots, x_{t}\right) * P\left(x_{t+1}, \ldots, x_{n} \mid \pi_{t}=k\right)=\operatorname{argmax}_{k} f_{k}(t) * b_{k}(t)$
Here, we’ve defined $f_{k}(t)=P\left(\pi_{t}=k, x_{1}, \ldots, x_{t}\right) \text { and } b_{k}(t)=P\left(x_{t+1}, \ldots, x_{n} \mid \pi_{t}=k\right)$. As we will shortly see, these parameters are calculated using the forward algorithm and the backward algorithm respectively. To solve the posterior decoding problem, we merely need to solve each of these subproblems. The forward algorithm has been illustrated in the previous chapter and in the review at the start of this chapter and the backward algorithm will be explained in the next section.
Backward Algorithm
As previously described, the backward algorithm is used to calculate the following probability:
$b_{k}(t)=P\left(x_{t+1}, \ldots, x_{n} \mid \pi_{t}=k\right)$
We can begin to develop a recursion by expanding bk(t) into the following form:
$b_{k}(t)=\sum_{l} P\left(x_{t+1}, \ldots, x_{n}, \pi_{t+1}=l \mid \pi_{t}=k\right)$
From the Markov property, we then obtain:
$b_{k}(t)=\sum_{l} P\left(x_{t+2}, \ldots, x_{n} \mid \pi_{t+1}=l\right) * P\left(\pi_{t+1}=l \mid \pi_{t}=k\right) * P\left(x_{t+1} \mid \pi_{t+1}=l\right)$
The first term merely corresponds to bl(t+1). Expressing in terms of emission and transition probabilities gives the final recursion:
$b_{k}(t)=\sum_{l} b_{l}(t+1) * a_{k l} * e_{l}\left(x_{t+1}\right)$
Comparison of the forward and backward recursions leads to some interesting insight. Whereas the forward algorithm uses the results at t − 1 to calculate the result for t, the backward algorithm uses the results from t + 1, leading naturally to their respective names. Another significant difference lies in the emission probabilities; while the emissions for the forward algorithm occur from the current state and can therefore be excluded from the summation, the emissions for the backward algorithm occur at time t + 1 and therefore must be included within the summation.
Given their similarities, it is not surprising that the backward algorithm is also implemented using a K×N dynamic programming table. The algorithm, as depicted in Figure 8.3, begins by initializing the rightmost column of the table to unity. Proceeding from right to left, each column is then calculated by taking a weighted sum of the values in the column to the right according to the recursion outlined above. After calculating the leftmost column, all of the backward probabilities have been calculated and the algorithm terminates. Because there are KN entries and each entry examines a total of K other entries, this leads to O(K²N) time complexity and O(KN) space, bounds identical to those of the forward algorithm.
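A minimal sketch of this right-to-left fill, reusing the conventions of the forward sketch above (A[k, l] = P(l | k), E the emission matrix, obs a list of symbol indices), might look like:

```python
import numpy as np

def backward(obs, A, E):
    """Fill the K x N backward table B[k, t] = b_k(t)."""
    K, N = A.shape[0], len(obs)
    B = np.zeros((K, N))
    # Rightmost column is initialized to 1.
    B[:, N - 1] = 1.0
    # Proceed right to left; the emission at t+1 stays inside the sum.
    for t in range(N - 2, -1, -1):
        for k in range(K):
            B[k, t] = np.sum(A[k, :] * E[:, obs[t + 1]] * B[:, t + 1])
    return B
```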
Just as P(X) was calculated by summing the rightmost column of the forward algorithm’s DP table, P(X) can also be recovered from the leftmost column of the backward algorithm’s DP table (with each bl(1) weighted by the corresponding initial state and emission probabilities). Therefore, these methods are virtually interchangeable for this particular calculation.
Did You Know?
Note that even when executing the backward algorithm, forward transition probabilities are used, i.e. if moving in the backward direction involves a transition from state B → A, the probability of transitioning from state A → B is used. This is because moving backward from state B to state A implies that state B follows state A in our normal, forward order, thus calling for the same transition probability.
The Big Picture
Why do we have to make both forward and backward calculations for posterior decoding, while the algorithms that we have discussed previously call for only one direction? The difference lies in the fact that posterior decoding seeks to produce probabilities for the underlying states of individual positions rather than whole sequences of positions. In seeking to find the most likely underlying state of a given position, we need to take into account the entire sequence in which that position exists, both before and after it, as befits a Bayesian approach - and to do this in a dynamic programming algorithm, in which we compute recursively and end with a maximizing function, we must approach our position of interest from both sides.
Given that we can calculate both fk(t) and bk(t) in Θ(K²N) time and Θ(KN) space for all t = 1 . . . n, we can use posterior decoding to determine the most likely state πt for t = 1 . . . n. The relevant expression is given by
$P\left(\pi_{t}=k \mid x\right)=\frac{f_{k}(t) * b_{k}(t)}{P(x)}, \quad \pi_{t}^{*}=\operatorname{argmax}_{k} P\left(\pi_{t}=k \mid x\right)$
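Combining the two tables, posterior decoding itself is only a few lines. This sketch reuses the forward and backward functions sketched earlier and returns, for each position, the index of the most likely state:

```python
import numpy as np

def posterior_decode(obs, a0, A, E):
    """Most likely state at each position, combining the two tables."""
    F = forward(obs, a0, A, E)          # forward sketch from earlier
    B = backward(obs, A, E)             # backward sketch from earlier
    px = F[:, -1].sum()                 # total probability P(x)
    posterior = F * B / px              # posterior[k, t] = P(pi_t = k | x)
    return posterior.argmax(axis=0)     # best state index at each position
```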
With two methods (Viterbi and posterior) to decode, which is more appropriate? When trying to classify each hidden state, the Posterior decoding method is more informative because it takes into account all possible paths when determining the most likely state. In contrast, the Viterbi method only takes into account one path, which may end up representing a minimal fraction of the total probability. At the same time, however, posterior decoding may give an invalid sequence of states! By selecting for the maximum probability state of each position independently, we’re not considering how likely the transitions between these states are. For example, the states identified at time points t and t + 1 might have zero transition probability between them. As a result, selecting a decoding method is highly dependent on the application of interest.
FAQ
Q: What does it imply when the Viterbi algorithm and Posterior decoding disagree on the path?
A: In a sense, it is simply a reminder that our model gives us what it’s selecting for. When we seek the maximum probability state of each independent position and disregard transitions between these max probability states, we may get something different than when we seek to find the most likely total path. Biology is complicated; it is important to think about what metric is most relevant to the biological situation at hand. In the genomic context, a disagreement might be a result of some ’funky’ biology; alternative splicing, for instance. In some cases, the Viterbi algorithm will be close to the Posterior decoding while in some others they may disagree.
CpG islands are defined as regions within a genome that are enriched with pairs of C and G nucleotides on the same strand. Typically, when this dinucleotide is present within a genome, the cytosine becomes methylated; when deamination of the methylated cytosine occurs, as it does at some base frequency, it becomes a thymine, another natural nucleotide, and the change therefore cannot easily be recognized and repaired by the cell, resulting in a C to T mutation. This increased mutation frequency depletes CpG dinucleotides over evolutionary time and renders them relatively rare. Because the methylation can occur on either strand, CpGs usually mutate into a TpG or a CpA. However, when situated within an active promoter, methylation is suppressed, and CpG dinucleotides are able to persist. Similarly, CpGs in regions important to cell function are conserved due to evolutionary pressure. As a result, detecting CpG islands can highlight promoter regions, other transcriptionally active regions, or sites of purifying selection within a genome.
Did You Know?
CpG stands for [C]ytosine - [p]hosphate backbone - [G]uanine. The ’p’ implies that we are referring to the same strand of the double helix, rather than a G-C base pair occurring across the helix.
Given their biological significance, CpG islands are prime candidates for modelling. Initially, one may attempt to identify these islands by scanning the genome for fixed intervals rich in GC. The efficacy of this approach hinges on selecting an appropriate window size: while too small a window may not capture all of a particular CpG island, too large a window would result in missing many smaller but bona fide CpG islands. Examining the genome on a per codon basis also leads to difficulties because CpG pairs do not necessarily code for amino acids and thus may not lie within a single codon. Instead, HMMs are much better suited to modelling this scenario because, as we shall shortly see in the section on unsupervised learning, HMMs can adapt their underlying parameters to maximize their likelihood.
Not all HMMs, however, are well suited to this particular task. An HMM model that only considers the single nucleotide frequencies of C’s and G’s will fail to capture the nature of CpG islands. Consider one such HMM with the two following hidden states :
• ’+’ state: representing CpG islands
• ’-’ state: representing non-islands
Each of these two states then emits A, C, G and T bases with a certain probability. Although the CpG islands in this model can be enriched with C’s and G’s by increasing their respective emission probabilities, this model will fail to capture the fact that the C’s and G’s predominantly occur in pairs.
Because of the Markov property that governs HMM’s, the only information available at each time step must be contained within the current state. Therefore, to encode memory within a Markov chain, we need to augment the state space. To do so, the individual ’+’ and ’-’ states can be replaced with 4 ’+’ states and 4 ’-’ states: A+, C+, G+, T+, A-, C-, G-, T- (Figure 8.4). Specifically, there are 2 ways to model this, and this choice will result in different emission probabilities:
• One model suggests that the state A+, for instance, implies that we are currently in a CpG island and the previous character was an A. The emission probabilities here will carry most of the information and the transitions will be fairly degenerate.
• Another model suggests that the state A+, for instance, implies that we are currently in a CpG island and the current character is an A. The emission probability here will be 1 for A and 0 for all other letters and the transition probabilities will bear most of the information in the model and the emissions will be fairly degenerate. We will assume this model from now on.
Did You Know?
The number of transitions is the square of the number of states. This gives a rough idea of how increasing HMM “memory” (and hence states) scale.
• The memory of this system derives from the fact that each state can only emit one character and therefore “remembers” its emitted character. Furthermore, the dinucleotide nature of the CpG islands is incorporated within the transition matrices. In particular, the transition frequency from C+ to G+ states is significantly higher than from C− to a G− states, demonstrating that these pairs occur more often within the islands.
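To make the expanded state space concrete, the sketch below builds the degenerate emission matrix and an illustrative transition matrix in which the C+ → G+ entry is boosted relative to C- → G-. The specific numbers are placeholders for illustration only; real values would come from training.

```python
import numpy as np

states = ['A+', 'C+', 'G+', 'T+', 'A-', 'C-', 'G-', 'T-']
bases = ['A', 'C', 'G', 'T']

# Degenerate emissions: each state emits its own base with probability 1.
E = np.zeros((len(states), len(bases)))
for s, name in enumerate(states):
    E[s, bases.index(name[0])] = 1.0

# Placeholder transitions: start near uniform, then boost C+ -> G+ relative to
# C- -> G- to encode the dinucleotide enrichment inside islands.
A = np.full((len(states), len(states)), 1.0 / len(states))
A[states.index('C+'), states.index('G+')] = 0.30
A[states.index('C-'), states.index('G-')] = 0.05
A /= A.sum(axis=1, keepdims=True)   # renormalize each row to sum to 1
```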
FAQ
Q: Since each state emits only one character, can we then say this reduces to a Markov Chain instead of a HMM?
A: No. Even though the emissions indicate the letter of the hidden state, they do not indicate if the state is a CpG island or not: both an A- and an A+ state emit only the observable A.
FAQ
Q: How do we incorporate our knowledge about the system while training HMM models eg. some emission probabilities of 0 in the CpG island detection case?
A: We could either force our knowledge on the model by setting some parameters and leaving others to vary or we could let the HMM loose on the model and let it discover those relationships. As a matter of fact, there are even methods that simplify the model by forcing a subset of parameters to be 0 but allowing the HMM to choose which subset.
Given the above framework, we can use posterior decoding to analyze each base within a genome and determine whether it is most likely a constituent of a CpG island or not. But having constructed the expanded HMM model, how can we verify that it is in fact better than the single nucleotide model? We previously demonstrated that the forward or backward algorithm can be used to calculate P(x) for a given
model. If the likelihood of our dataset is higher given the second model than the first model, it most likely captures the underlying behavior more effectively.
However, there is one risk in complicating the model, which is overfitting. Increasing the number of parameters for an HMM makes the HMM more likely to overfit the data and be less accurate in capturing the underlying behavior. A common solution to this in machine learning is to use regularization, which is essentially using fewer parameters. In this case, it is possible to reduce the number of parameters to learn by constraining all +/- transition probabilities to be the same value and all -/+ transition probabilities to be the same value, as the transitions back and forth from the + and - states are what we are interested in modeling, and the actual bases where the transition occurred are not that important to our model. Thus for this constrained model we have to learn fewer parameters which leads to a simpler model and can help to avoid overfitting.
FAQ
Q: Are there other ways to encode the memory for CpG island detection?
A: Other ideas that may be experimented with include
- Emit dinucleotides and figure out a way to deal with overlap.
- Add a special state that goes from C to G.
We saw how to score and decode an HMM-generated sequence in two different ways. However, these methods assumed that we already knew the emission and transition probabilities. While we are always free to hazard a guess at these, we may sometimes want to use a more data-driven, empirical approach to deriving these parameters. Fortunately, the HMM framework enables the learning of these probabilities when provided a set of training data and a set architecture for the model.
When the training data is labelled, estimation of the probabilities is a form of supervised learning. One such instance would occur if we were given a DNA sequence of one million nucleotides in which the CpG islands had all been experimentally annotated and were asked to use this to estimate our model parameters.
In contrast, when the training data is unlabelled, the estimation problem is a form of unsupervised learning. Continuing with the CpG island example, this situation would occur if the provided DNA sequence contained no island annotation whatsoever and we needed to both estimate model parameters and identify
the islands.
Supervised Learning
When provided with labelled data, the idea of estimating model parameters is straightforward. Suppose that you are given a labelled sequence x1, . . . , xN as well as the true hidden state sequence π1, . . . , πN. Intuitively, one might expect that the probabilities that maximize the data’s likelihood are the actual probabilities that one observes within the data. This is indeed the case and can be formalized by defining Akl to be the number of times hidden state k transitions to l and Ek(b) to be the number of times b is emitted from hidden state k. The parameters θ that maximize P(x|θ) are simply obtained by counting as follows:
\begin{aligned} a_{k l} &=\frac{A_{k l}}{\sum_{i} A_{k i}} \ e_{k}(b) &=\frac{E_{k}(b)}{\sum_{c} E_{k}(c)} \end{aligned}
One example training set is shown in Figure 8.5. In this example, it is obvious that the probability of transitioning from B to P is $\frac{1}{3+1}=\frac{1}{4}$ (there are 3 B to B transitions and 1 B to P transitions) and the probability of emitting a G from the B state is $\frac{2}{2+2+1}=\frac{2}{5}$ (there are 2 G's emitted from the B state, 2 C's and 1 A)
Notice, however, that in the above example the emission probability of character T from state B is 0 because no such emissions were encountered in the training set. A zero probability, either for transitioning or emitting, is particularly problematic because it leads to an infinite log penalty. In reality, however, the zero probability may merely have arisen due to over-fitting or a small sample size. To rectify this issue and maintain flexibility within our model, we can collect more data on which to train, reducing the possibility that the zero probability is due to a small sample size. Another possibility is to use ’pseudocounts’ instead of absolute counts: artificially adding some number of counts to our training data which we think more accurately represent the actual parameters and help counteract sample size errors.
$\begin{array}{l} A_{k l}^{*}=A_{k l}+r_{k l} \ E_{k}(b)^{*}=E_{k}(b)+r_{k}(b) \end{array} . \nonumber$
Larger pseudocount parameters correspond to a strong prior belief about the parameters, reflected in the fact that these pseudocounts, derived from your priors, are comparatively overwhelming the observations, your training data. Likewise, small pseudocount parameters (r << 1) are more often used when our priors are relatively weak and we are aiming not to overwhelm the empirical data but only to avoid excessively harsh probabilities of 0.
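The counting and pseudocount formulas above translate directly into code. The sketch below (the function and argument names are our own) pre-fills every count with the pseudocount r before counting the labelled sequence, so that no probability ends up at exactly zero:

```python
def estimate_parameters(states, observations, state_set, alphabet, r=1.0):
    """Supervised maximum likelihood estimates with pseudocounts r.

    states       : list of hidden state labels pi_1..pi_N
    observations : list of emitted characters x_1..x_N
    """
    # Start every count at the pseudocount so that nothing ends up at zero.
    A = {k: {l: r for l in state_set} for k in state_set}
    E = {k: {b: r for b in alphabet} for k in state_set}
    # Count observed transitions A_kl and emissions E_k(b).
    for k, l in zip(states, states[1:]):
        A[k][l] += 1
    for k, b in zip(states, observations):
        E[k][b] += 1
    # Normalize each row into probabilities a_kl and e_k(b).
    a = {k: {l: A[k][l] / sum(A[k].values()) for l in state_set} for k in state_set}
    e = {k: {b: E[k][b] / sum(E[k].values()) for b in alphabet} for k in state_set}
    return a, e
```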
Unsupervised Learning
Unsupervised learning involves estimating parameters based on unlabelled data. This may seem impossible - how can we take data about which we know nothing and use it to ”learn”? - but an iterative approach can yield surprisingly good results, and is the typical choice in these cases. This can be thought of loosely as an evolutionary algorithm: from some initial choice of parameters, the algorithm assesses how well the parameters explain or relate to the data, uses some step in that assessment to make improvements on the parameters, and then assesses the new parameters, producing incremental improvements in the parameters at every step just as the fitness or lack thereof of a particular organism in its environment produces incremental increases over evolutionary time as advantageous alleles are passed on preferentially.
Suppose we have some sort of prior belief about what each emission and transition probability should be. Given these parameters, we can use a decoding method to infer the hidden states underlying the provided data sequence. Using this particular decoding parse, we can then re-estimate the transition and emission counts and probabilities in a process similar to that used for supervised learning. If we repeat this procedure until the improvement in the data’s likelihood remains relatively stable, the data sequence should ultimately drive the parameters to their appropriate values.
FAQ
Q: Why does unsupervised learning even work? Or is it magic?
A: Unsupervised learning works because we have the sequence (input data) and this guides every step of the iteration; to go from a labelled sequence to a set of parameters, the latter are guided by the input and its annotation, while to annotate the input data, the parameters and the sequence guide the procedure.
For HMMs in particular, two main methods of unsupervised learning are useful.
Expectation Maximization using Viterbi training
The first method, Viterbi training, is relatively simple but not entirely rigorous. After picking some initial best-guess model parameters, it proceeds as follows:
E step: Perform Viterbi decoding to find π
M step: Calculate the new parameters $A_{k l}^{*}, E_{k}(b)^{*}$ using the simple counting formalism in supervised learning (Maximization step)
Iteration: Repeat the E and M steps until the likelihood P(x|θ) converges
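One way to organize this loop in code is sketched below; the decoding and re-estimation steps are passed in as callables (for example, a Viterbi decoder and the supervised counting routine sketched earlier), since their details are covered elsewhere in the chapter.

```python
def viterbi_training(obs, params, decode, reestimate, max_iters=50, tol=1e-6):
    """Organize the Viterbi-training loop around two caller-supplied callables.

    decode(obs, params)   -> (best_path, log_likelihood)   # E step (Viterbi decoding)
    reestimate(path, obs) -> new params                     # M step (counting, as above)
    """
    prev_ll = float("-inf")
    for _ in range(max_iters):
        path, log_likelihood = decode(obs, params)      # E step: single best path
        params = reestimate(path, obs)                  # M step: recount parameters
        if abs(log_likelihood - prev_ll) < tol:         # stop once the score stabilizes
            break
        prev_ll = log_likelihood
    return params
```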
Although Viterbi training converges rapidly, its resulting parameter estimations are usually inferior to those of the Baum-Welch Algorithm. This result stems from the fact that Viterbi training only considers the most probable hidden path instead of the collection of all possible hidden paths.
Expectation Maximization: The Baum-Welch Algorithm
The more rigorous approach to unsupervised learning involves an application of Expectation Maximization to HMM’s. In general, EM proceeds in the following manner:
Init: Initialize the parameters to some best-guess state
E step: Estimate the expected probability of hidden states given the latest parameters and observed sequence (Expectation step)
M step: Choose new maximum likelihood parameters using the probability distribution of hidden states (Maximization step)
Iteration: Repeat the E and M steps until the likelihood of the data given the parameters converges
The power of EM lies in the fact that P (x|θ) is guaranteed to increase with each iteration of the algorithm. Therefore, when this probability converges, a local maximum has been reached. As a result, if we utilize a variety of initialization states, we will most likely be able to identify the global maximum, i.e. the best parameters θ. The Baum-Welch algorithm generalizes EM to HMM’s. In particular, it uses the forward and backward algorithms to calculate P(x|θ) and to estimate Akl and Ek(b). The algorithm proceeds as follows:
Initialization 1. Initialize the parameters to some best-guess state
Iteration 1. Run the forward algorithm
2. Run the backward algorithm
3. Calculate the new log-likelihood P(x|θ)
4. Calculate Akl and Ek(b)
5. Calculate akl and ek(b) using the pseudocount formulas
6. Repeat until P(x|θ) converges
Previously, we discussed how to compute P(x|θ) using either the forward or backward algorithm’s final results. But how do we estimate Akl and Ek(b)? Let’s consider the expected number of transitions from state k to state l given a current set of parameters θ. We can express this expectation as
$A_{k l}=\sum_{t} P\left(\pi_{t}=k, \pi_{t+1}=l \mid x, \theta\right)=\sum_{t} \frac{P\left(\pi_{t}=k, \pi_{t+1}=l, x \mid \theta\right)}{P(x \mid \theta)}\nonumber$
Exploiting the Markov property and the definitions of the emission and transition probabilities leads to the following derivation:
\begin{aligned} A_{k l} &=\sum_{t} \frac{P\left(x_{1} \ldots x_{t}, \pi_{t}=k, \pi_{t+1}=l, x_{t+1} \ldots x_{N} \mid \theta\right)}{P(x \mid \theta)} \ &=\sum_{t} \frac{P\left(x_{1} \ldots x_{t}, \pi_{t}=k\right) * P\left(\pi_{t+1}=l, x_{t+1} \ldots x_{N} \mid \pi_{t}, \theta\right)}{P(x \mid \theta)} \ &=\sum_{t} \frac{f_{k}(t) * P\left(\pi_{t+1}=l \mid \pi_{t}=k\right) * P\left(x_{t+1} \mid \pi_{t+1}=l\right) * P\left(x_{t+2} \ldots x_{N} \mid \pi_{t+1}=l, \theta\right)}{P(x \mid \theta)} \ \Rightarrow A_{k l} &=\sum_{t} \frac{f_{k}(t) * a_{k l} * e_{l}\left(x_{t+1}\right) * b_{l}(t+1)}{P(x \mid \theta)} \end{aligned}
A similar derivation leads to the following expression for Ek(b):
$E_{k}(b)=\sum_{t \mid x_{t}=b} \frac{f_{k}(t) * b_{k}(t)}{P(x \mid \theta)}$
Therefore, by running the forward and backward algorithms, we have all of the information necessary to calculate P(x|θ) and to update the emission and transition probabilities during each iteration. Because these updates are constant time operations once P(x|θ), fk(t) and bk(t) have been computed, the total time complexity for this version of unsupervised learning is Θ(K²NS), where S is the total number of iterations.
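Putting the update formulas together, a single Baum-Welch E step (computing the expected counts needed for the M step) can be sketched as below, reusing the forward and backward sketches from earlier in the chapter; the variable names are our own.

```python
import numpy as np

def expected_counts(obs, a0, A, E):
    """One Baum-Welch E step: expected transition and emission counts.

    Reuses the forward() and backward() sketches from earlier in the chapter;
    A and E hold the current parameter guesses, obs is a list of symbol indices.
    """
    K, M = E.shape
    N = len(obs)
    F = forward(obs, a0, A, E)
    B = backward(obs, A, E)
    px = F[:, -1].sum()                               # P(x | theta)

    A_counts = np.zeros((K, K))
    for t in range(N - 1):
        # A_kl += f_k(t) * a_kl * e_l(x_{t+1}) * b_l(t+1) / P(x | theta)
        A_counts += np.outer(F[:, t], E[:, obs[t + 1]] * B[:, t + 1]) * A / px

    E_counts = np.zeros((K, M))
    for t in range(N):
        # E_k(b) accumulates f_k(t) * b_k(t) / P(x | theta) wherever x_t = b
        E_counts[:, obs[t]] += F[:, t] * B[:, t] / px

    # Normalizing the rows of A_counts and E_counts (optionally after adding
    # pseudocounts) gives the new a_kl and e_k(b) for the next iteration.
    return A_counts, E_counts, np.log(px)
```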
FAQ
Q: How do you encode your prior beliefs when learning with Baum-Welch?
A: Those prior beliefs are encoded in the initializations of the forward and backward algorithms.
We can use an HMM to align sequences with affine gap penalties. Recall that an affine gap penalty charges more to open (start) a gap than to extend it; thus the penalty of a gap of length g is r(g) = -d - (g-1)*e, where d is the penalty to open the gap and e is the penalty to extend an already open gap.
We will look into aligning two sequences with the affine gap penalty. We are given two sequences X and Y, the scoring matrix S (S(xi,yj) = score of matching xi with yj), a gap opening penalty of d and a gap extension penalty of e. We can map this problem into an HMM problem by using the following states, transition probabilities and emission probabilities.
States:
There are three states involved: M (matching xi with yj), X (aligning xi with a gap), Y (aligning yj with a gap). Also, alongside each transition, there’s an update of the i,j indices. Whenever we are in state M, (i,j) = (i,j) + (1,1). In state X, (i,j) = (i,j) + (1,0). In state Y, (i,j) = (i,j) + (0,1).
Transition probabilities:
There are 7 transition probabilities to consider, as shown in figure 6.
P(next State = M | current = M) = S(xi,yj)
P(next State = X | current = M) = d
P(next State = Y | current = M ) = d
P(next State = X | current = X) = e
P(next State = M | current = X ) = S(xi,yj)
P(next State = Y | current = Y) = e
P(next State = M | current = Y) = S(xi,yj)
We can also save the transition probabilities in a transition matrix A = [aij], where aij = P(next State = j | current = i) and $\sum_{j} a_{ij} = 1$.
Emission probabilities:
The emission probabilities are:
From state M: pxiyj = p(xi aligned to yj)
From state X: qxi = p(xi aligned to a gap)
From state Y: qyj = p(yj aligned to a gap)
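The state space and scoring just described can be sketched as follows; this is only an illustrative layout (names such as STATES, STEP and transition_weight are our own), mirroring the seven transitions listed above and the index updates for each state.

```python
# Illustrative layout for the three-state affine-gap model described above.
STATES = ('M', 'X', 'Y')            # match, x_i vs gap, y_j vs gap

# How each state advances the (i, j) indices into X and Y.
STEP = {'M': (1, 1), 'X': (1, 0), 'Y': (0, 1)}

def transition_weight(prev, nxt, match_score, d, e):
    """Weight of moving between states, mirroring the seven transitions above.

    match_score stands in for S(x_i, y_j); d and e are the gap open and
    gap extension terms. X <-> Y moves are not among the listed transitions.
    """
    if nxt == 'M':
        return match_score          # every transition into M carries S(x_i, y_j)
    if prev == 'M':
        return d                    # opening a gap from the match state
    if prev == nxt:
        return e                    # extending an already open gap
    return None                     # disallowed (X <-> Y)
```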
Example:
X = ’VLSPADK’
Y = ’HLAESK’
The alignment generated by the model is: MMXXMMYM
Which corresponds to:
X = ’VLSPAD_K’
Y = ’HL__AESK’
Did You Know?
For classification purposes, the posterior decoding ’path’ is more informative than the Viterbi path, as it is a more refined measure of which hidden states generated x. However, it may give an invalid sequence of states: for example, when a j → k transition is not possible, it might still assign state(i) = j and state(i+1) = k.
8.06: Current Research Directions
• HMM’s have been used extensively in various fields of computational biology. One of the first such applications was in a gene-finding algorithm known as GENSCAN written by Chris Burge and Samuel Karlin [1]. Because the geometric length distribution of HMM’s does not model exonic regions well, Burge et al. used an adaptation of HMM’s known as hidden semi-Markov models (HSMM’s). These types of models differ in that whenever a hidden state is reached, the length of duration of that state (di) is chosen from a distribution and the state then emits exactly di characters. The transition from this hidden state to the next is then analogous to the HMM procedure except that akk = 0 for all k, thereby preventing self-transitioning. Many of the same algorithms that were previously developed for HMM’s can be modified for HSMM’s. Although the details won’t be discussed here, the forward and backward algorithms can be modified to run in O(K²N³) time, where N is the number of observed characters. This time complexity assumes that there is no upper bound on the length of a state’s duration, but imposing such a bound reduces the complexity to O(K²ND²), where D is the maximum possible duration of a state.
The basic state diagram underlying Burge’s model is depicted in Figure 8.7. The included diagram only lists the states on the forward strand of DNA, but in reality a mirror image of these states is also included for the reverse strand, resulting in a total of 27 hidden states. As the diagram illustrates, the model incorporates many of the major functional units of genes, including exons, introns, promoters, UTR’s and poly-A tails. In addition, three different intronic and exonic states are used to ensure that the total length of all exons in a gene is a multiple of three. Similar to the CpG island example, this expanded state-space enabled the encoding of memory within the model.
• A recent effort has been made to make an HMM-based approach to homology searches, called HMMER, a viable alternative to BLAST in terms of computational efficiency. Unlike most other homology search algorithms, HMMER, written by Sean Eddy, uses the Forward algorithm’s average over alignment uncertainty, rather than only reporting the maximum likelihood alignment (a la Viterbi); this approach is often better for detecting more remote homologies: as divergence times increase, there may be more viable ways of aligning sequences, each of them individually not sufficiently strong to be differentiated from noise but together giving evidence for homology. A particularly exciting recent development is that HMMER is now available as a web server; it can be found at http://www.ebi.ac.uk/Tools/hmmer/.
• An interesting subject that may be explored also concerns the agreement of Viterbi and Posterior decoding paths; not just for CpG island detection but even for chromatin state detection. One may look at multiple paths by sampling, asking questions such as:
– What is the maximum a posteriori vs viterbi path? Where do they differ?
– Can complete but maximally disjoint (from Viterbi) paths be found?
8.9 What Have We Learned?
Using the basic computational framework provided by Hidden Markov Models, we’ve learned how to infer the most likely set of hidden states underlying a sequence of observed characters. In particular, a combination of the forward and backward algorithms enabled one form of this inference, i.e. posterior decoding, in O(K²N) time. We also learned how either unsupervised or supervised learning can be used to identify the best parameters for an HMM when provided with an unlabelled or labelled dataset. The combination of these decoding and parameter estimation methods enable the application of HMM’s to a wide variety of problems in computational biology, of which CpG island and gene identification form a small subset. Given the flexibility and analytical power provided by HMM’s, these methods will play an important role in computational biology for the foreseeable future.
Bibliography
[1] Christopher B. Burge and Samuel Karlin. Finding the genes in genomic DNA. Current Opinion in Structural Biology, 8(3):346–354, 1998.
After a genome has been sequenced, a common next step is to attempt to infer the functional potential of the organism or cell encoded through careful analysis of that sequence. This mainly takes the form of identifying the protein coding genes within the sequence as they are thought to be the primary units of function within living systems; this is not to say that they are the only functional units within genomes as things such as regulatory motifs and non-coding RNAs are also imperative elements.
This annotation of the protein coding regions is too laborious to do by hand, so it is automated in a process known as computational gene identification. The algorithms underlying this process are often based on Hidden Markov Models (HMMs), a concept discussed in previous chapters to solve simple problems such as knowing whether a casino is rolling a fair versus a loaded die. Genomes, however, are very complicated sets of data, replete with long repeats, overlapping genes (where one or more nucleotides are part of two or more distinct genes) and pseudogenes (non-transcribed regions that look very similar to genes) among many other obfuscations. Thus, experimental and evolutionary data often needs to be included into HMMs for greater annotational accuracy, which can result in a loss of scalability or a reliance on incorrect assumptions of independence. Alternative algorithms have been utilized to address the problems of HMMs including those based on Conditional Random Fields (CRFs), which rely on creating a distribution of the hidden states of the genomic sequence in question conditioned on known data. Use of CRFs has not phased out HMMs as both are used with varying degrees of success in practice.1
1R. Guigo (1997). “Computational gene identification: an open problem.” Computers Chem. Vol. 21. 165
9.02: Overview of Chapter Contents
This chapter will begin with a discussion of the complexities of the Eukaryotic gene. It will then describe how HMMs can be used as a model to parse Eukaryotic genomes into protein coding genes and regions that are not; this will include reference to the strengths and weaknesses of an HMM approach. Finally, the use of CRFs to annotate protein coding regions will be described as an alternative.
9.03: Eukaryotic Genes- An Introduction
Within eukaryotic genomes, only a small fraction of the nucleotide content actually consists of protein coding genes (in humans, protein coding regions make up about 1%-1.5% of the entire genome). The rest of the DNA is classified as intergenic regions (See Figure 9.1) and contains things such as regulatory motifs, transposons, integrons and non-protein coding genes.2
Further, of the small fraction of the DNA that is transcribed into mRNA, not all of it is translated into protein. Certain regions known as introns, are removed or “spliced” out of the precursor mRNA. This now processed mRNA, containing only “exons” and some other additional modifications discussed in previous chapters, is translated into protein. (See Figure 9.2) The goal of computational gene identification is thus not only to pick out the few regions of the entire Eukaryotic genome that encode for proteins but also to parse those protein coding regions into identities of exon or intron so that the sequence of the synthesized protein can be known.
2"Intergenic region." http://en.Wikipedia.org/wiki/Intergenic region
9.04: Assumptions for Computational Gene Identification
The general assumptions for computational gene identification are that exons are delineated by a sequence AG at the start of the exon and a sequence of GT at the end of the exon. For protein-coding genes, the start codon (ATG) and the end codons (TAA, TGA, TAG) delineate the open reading frame. (Most of these ideas can be seen in Figure 9.3) These assumptions will be incorporated into more complex HMMs described below.
A toy Hidden Markov Model is a generative approach to model this behavior. Each emission of the HMM is one DNA base/letter. The hidden states of the model are intergenic, exon, intron. Improving upon this model would involve including hidden states DonorG and DonorT. The DonorG and DonorT states utilize the information that exons are delineated by GT at the end of the sequence before the start of an intron. (See Figure 9.4 for inclusion of DonorG and DonorT into the model)
The e in each state represents emission probabilities and the arrows indicate the transition probabilities.
Aside from the initial assumptions, additional evidence such as evolutionary conservation and experi- mental mRNA data can help create an HMM to better model the behavior. (See Figure 9.5)
Combining all the lines of evidence discussed above, we can create an HMM with composite emissions in that each emitted value is a “tuple” of collected values. (See Figure 9.6)
A key assumption of this composite model is that each new emission “feature” is independent of the rest. However, this creates the problem that with each new feature, the tuple increases in length, and the number of states of the HMM increases exponentially, leading to a combinatorial explosion, which thus means poor scaling. (Examples of more complex HMMs that can result in poor scaling can be found in Figure 9.7)
9.06: Conditional Random Fields
Conditional Random Fields, CRFs, are an alternative to HMMs. Being a discriminative approach, this type of model doesn’t take into account the joint distribution of everything, as does a poorly scaling HMM. The hidden states in a CRF are conditioned on the input sequence. (See Figure 9.8)3
A feature function is like a score, returning a real-valued number as a function of its inputs that reflects the evidence for a label at a particular position. (See Figure 9.9) The conditional probability of the emitted sequence is its score divided by the total score of the hidden state. (See Figure 9.10)
Each feature function is weighted, so that during the training, the weights can be set accordingly.
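As a sketch of how these pieces fit together, the function below computes P(y | x) for a tiny linear-chain CRF by exponentiating the weighted feature score and normalizing over all labelings by brute force; a real implementation would compute the normalizer with dynamic programming, and the feature functions and weights here are assumptions for illustration.

```python
import itertools
import math

def crf_conditional_probability(y, x, label_set, feature_functions, weights):
    """P(y | x) for a tiny linear-chain CRF.

    feature_functions : list of f(i, y_i, y_prev, x) -> real-valued score
    weights           : one weight per feature function (set during training)
    The normalizer Z(x) is computed by brute force here, which is only
    feasible for very short x.
    """
    def score(labels):
        return sum(w * f(i, labels[i], labels[i - 1] if i > 0 else None, x)
                   for i in range(len(labels))
                   for f, w in zip(feature_functions, weights))

    Z = sum(math.exp(score(labels))
            for labels in itertools.product(label_set, repeat=len(x)))
    return math.exp(score(tuple(y))) / Z
```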
The feature functions can incorporate vast amounts of evidence without the Naive Bayes assumption of independence, making them both scalable and accurate. However, training is much more difficult with CRFs than HMMs.
3Conditional Random Field. Wikipedia. http://en.Wikipedia.org/wiki/Conditional random field
9.07: Other Methods
Besides HMMs and CRFs, other methods do exist for computational gene identification. Semi-Markov models generate emissions of variable sequence length, meaning that the transitions out of the hidden states are not entirely memoryless.
Max-min models are adaptations of support vector machines. These methods have not yet been applied to mammalian genomes.4
4 For better understanding of SVM: http://dspace.mit.edu/bitstream/hand...663/6-034Fall- 2002/OcwWeb/Electrical-Engineering-and-Computer-Science/6-034Artificial-IntelligenceFall2002/Tools/detail/svmachine.htm
9.08: Conclusion
Computational gene identification, because it entails finding the functional elements encoded within a genome, has a lot of practical significance as well as theoretical significance for the advancement of biological fields.
The two approaches described above are summarized below in Figure 9.11:
HMM
• generative model
• randomly generates observable data, usually with a hidden state
• specifies a joint probability distribution
• P(x,y) = P(x|y)P(y)
• sometimes hard to model dependencies correctly
• hidden states are the labels for each DNA base/letter
• composite emissions are a combination of the DNA base/letter being emitted with additional evidence
CRF
• discriminative model
• models dependence of unobserved variable y on an observed variable x
• P(y|x)
• hard to train without supervision
• more effective when the model doesn’t require the joint distribution
In practice, the resulting gene specification using CONTRAST, a CRF implementation, is about 46.2% at its maximum. This is because in biology, there are a lot of exceptions to the standard model, such as overlapping genes, nested genes, and alternative splicing. Having models include all of those exceptions sometimes yields worse predictions; this is a non-trivial tradeoff. However, technology is improving and within the next five years, there will be more experimental data to fuel the development of computational gene identification, which in turn will help generate a better understanding of the syntax of DNA.
Bibliography
1. R. Guigo (1997). “Computational gene identification: an open problem.” Computers Chem. Vol. 21.
2. “Intergenic region.” http://en.Wikipedia.org/wiki/Intergenic region
3. Conditional Random Field. Wikipedia. http://en.Wikipedia.org/wiki/Conditional random field
4. http://dspace.mit.edu/bitstream/hand.../svmachine.htm
RNA (Ribonucleic acid) as a molecule has been posited as being the origin of life. Though it was long considered nothing more than an intermediary between the code in the DNA and the functional proteins, RNA has been shown to serve many different functions, spanning the entire realm of genomics. Part of the cause for its versatility is the many possible conformations that RNA can be found in. Being made up of a more flexible backbone than DNA, RNA exhibits interesting and varied structures that can inform us on its many purposes. Certain structures of RNA, for example, lend themselves to catalytic activities while others serve as the tRNA and mRNA that are so important during the process of converting the DNA’s code into proteins. The aim for this chapter is to learn methods that can explain, or even predict, the secondary structure of RNA in the hope that they will shed light on the many properties of this versatile molecule.
To accomplish this, we first look at RNA from a biological perspective and explain the known biological roles of RNA. Then, we study the different methods that exist to predict RNA structure. There are two main approaches to the RNA folding problem: 1) predicting the RNA structure based on thermodynamic stability of the molecule, and looking for a thermodynamic optimum 2) probabilistic models which try to find the states of the RNA molecule in a probabilistic optimum.
Finally, we can use evolutionary data in order to increase the confidence of our predictions by these methods.
10.02: Chemistry of RNA
RNA consists of a 5-carbon sugar, ribose, which is attached to an organic base (either adenine, uracil, cytosine or guanine). There are two biochemical differences between DNA and RNA:
1. the 5-carbon sugar of RNA (ribose) has a hydroxyl group at the 2' position, which the deoxyribose of DNA lacks
2. RNA contains uracil, the non-methylated form of thymine, instead of thymine.
The presence of ribose in RNA makes its structure more flexible than DNA, letting the RNA molecule fold and form bonds within itself, which makes single-stranded RNA far more structured than single-stranded DNA.
10.03: Origin and Functions of RNA
People initially believed that RNA only acted as an intermediate between the DNA code and the protein; however, in the early 80s, the discovery of catalytic RNAs (ribozymes) expanded the perspective on what this molecule can actually do in living things. Sidney Altman and Thomas Cech discovered the first ribozyme, RNase P, which is able to cleave off the head of tRNA. Self-splicing introns (group I introns) were also among the first ribozymes discovered. They do not need any protein as catalysts to splice. Single or double stranded RNA also serves as the information storage and replication agent in some viruses.
The RNA World Hypothesis, proposed by Walter Gilbert in 1986, suggests that RNA was the precursor to modern life. It relies on the fact that RNA can have both information storage and catalytic activity at the same time, both of which are fundamental characteristics of a living system. In short, the RNA World hypothesis says that, because RNA can have a catalytic role in cells and there is evidence that RNA can self-replicate without depending on other molecules, an RNA World is a plausible precursor of today’s DNA and protein based world. Although to this day no natural self-replicating RNAs have been found in vivo, self-replicating RNA molecules have been created in the lab via artificial selection. For example, a chimeric construct of a natural ligase ribozyme with an in vitro selected template binding domain has been shown to be able to replicate at least one turn of an RNA helix. For this reason, Gilbert proposed RNA as a plausible origin for life. The theory suggests that through evolution, RNA has passed its information storage role to DNA, a more stable molecule and one less prone to mutation. RNA then assumed the role of intermediate between DNA and proteins, which took over some of RNA’s catalytic role in the cell. Thus, scientists sometimes refer to RNA as molecular fossils. Even though RNA has lost a lot of its information-storage functionality to DNA and its functional properties to proteins, RNA still plays an integral role in living organisms. For instance, the catalytic portion of the ribosome, i.e. the main functional part of the ribosomal complex, consists of RNA. RNA also has regulatory roles in the cell, and basically serves as an agent for the cell to sense and react to the environment.
Riboswitches
Regulatory RNAs have different families, and one of the most important ones are riboswitches. Riboswitches are involved in different levels of gene regulation. In some bacteria, important regulations are done by simple RNA families. One example is the thermosensor in Listeria, a riboswitch that blocks the ribosome at low temperature (since the hydrogen bonds are more stable). The RNA then forms a semi-double-stranded conformation which does not bind to the ribosome and turns translation off. At higher temperatures (37 C), the double strand opens up and allows the ribosome to attach to a certain region in the riboswitch, making translation possible once again. Another famous riboswitch is the adenine riboswitch (and in general purine riboswitches), which regulates protein synthesis. For example, the ydhl mRNA has a terminator stem at the end that blocks it from translation, but when the adenine concentration increases in the cell, adenine binds to the mRNA and changes its conformation such that the terminator stem disappears.
microRNAs
There are other sorts of RNAs such as microRNAs, a (relatively) more modern variant of RNA. Their discovery unveiled a novel non-protein layer of gene regulation (e.g. the EVF-2 and HOTAIR regulatory RNAs). EVF-2 is interesting because its transcribed from an ultra conserved enhancer, and separates from the transcription string by forming a hairpin, and thereafter returns to the very same enhancer (along with a protein Dlx-2) and regulates its activity. HOTAIR RNA induces changes in chromatin state, and regulates the methylation of Histones, which in turn silences the HOX-D cluster.
Other types of RNA
We can also look at types of noncoding RNAs.
• piRNAs are the largest class of small non-coding RNA molecules in animals. They are primarily involved in the silencing of transposons, but likely have a lot of functions. They are also involved in epigenetic modifications and post-transcriptional gene silencing.
• lncRNAs are long transcripts produced that operate functionally as RNAs and are not translated into proteins. Many studies implicate lncRNAs in epigenetic modifications, maybe acting as a targeting mechanism or as a molecular scaffold for Polycomb proteins. lncRNAs are likely to possess numerous functions; many are nuclear, many are cytoplasmic.
10.04: RNA Structure
We have learned about different functions of RNA, and it should be clear by now how fundamental the role of RNA in living systems is. Because it is impossible to understand how RNA actually does all these activities in the cell, without knowing what its structure is, in this part we will look into the structure of RNA.
RNA structure can be studied in three different levels:
1. Primary structure: the sequence in which the bases (U, A, C, G) are aligned.
2. Secondary structure: the 2-D analysis of the [hydrogen] bonds between different parts of RNA. In other words, where RNA becomes double-stranded, where RNA forms a hairpin or a loop or other similar forms.
3. Tertiary structure: the complete 3-D structure of RNA, i.e. how the string bends, where it twists and such.
As mentioned before, the presence of ribose in RNA enables it to fold and create double-helixes with itself. The primary structure is fairly easy to obtain through sequencing the RNA. We are mainly interested in understanding the secondary structure for RNA: where the loops and hydrogen bonds form and create the functional attributes of RNA. Ideally, we would like to study the tertiary structure because this is the final state of the RNA, and what gives it its true functionality. However, the tertiary structure is very hard to compute and beyond the scope of this lecture.
Even though studying the secondary structure can be tricky, there are some simple ideas that work quite well in predicting it. Unlike proteins, in RNA most of the stabilizing free energy for the molecule comes from its secondary structure (rather than tertiary, as is the case for proteins). RNAs initially fold into their secondary structure and then form their tertiary structure, and therefore there are very interesting facts that we can learn about a certain RNA molecule by just knowing its secondary structure.
Finally, another great property of the secondary structure is that it is usually well conserved in evolution, which helps us improve the secondary structure predictions and also to find ncRNA (non-coding RNA)s. There are widely used representations for the secondary structure of RNA:
Formally: A secondary structure is a vertex labeled graph on n vertices with an adjacency matrix A = (aij) fulfilling:
• a_{i,i+1} = 1 for 1 ≤ i ≤ n−1 (continuous backbone)
• For each i, 1 ≤ i ≤ n, there is at most one a_{ij} = 1 with j ≠ i ± 1 (a base forms a pair with at most one other base at a time)
• If a_{ij} = a_{kl} = 1 and i < k < j, then i < l < j (ignore pseudoknots)
Finally, we get to the point where we want to study the RNA structure. The goal here is to predict the secondary structure of the RNA, given its primary structure (or its sequence). The good news is we can find the optimal structure using dynamic programming. Now in order to set up our dynamic programming framework we need a scoring scheme, which we create using the contribution of each base pairing to the physical stability of the molecule. In other words, we want to create a structure with minimum free energy; in our simple model we assign each base pair an energy value (Figure 10.7).
The optimum structure is going to be the one with a minimum free energy and by convention negative energy is stabilizing, and positive energy is non-stabilizing. Using this framework, we can use dynamic programming (DP) to calculate the optimal structure because 1) this scoring scheme is additive 2) we disallowed pseudo knots, which means we can divide the RNA into two smaller ones which are independent, and solve the problem for these smaller RNAs.
We want to find a DP matrix Eij, in which we calculate the minimum free energy for subsequence i to j. The first approach to this is Nussinov’s algorithm.
Nussinov’s algorithm
The recursion formula for this problem was first described by Nussinov in 1978.
The intuition behind this algorithm is as follows: given a subsequence [i,j], there is either no edge connecting to the ith base (meaning it is unpaired) or there is some edge connecting the ith base to the kth base where i < k ≤ j (meaning the ith base is paired to the kth base). In the case were the ith base is unpaired, the energy of the subsequence, Ei,j , simply reduces to the energy of the subsequence from i + 1 to j, Ei+1,j. This is the first term of the Nussinov recurrence relation. If the ith base is paired to the kth base, however, then Ei,j reduces to the energy contribution of the i,k pairing, βi,k, plus the energy of the subsequences formed by dividing [i + 1, j] around k, Ei+1,k−1 and Ek+1,j . Choosing the k which minimizes that value yields the second term of the Nussinov recurrence relation. The optimal subsequence energy, therefore, is the minimum of the subsequence energy when the ith base is paired with the optimal kth base and when the ith base is unpaired. This produces the overall relation described in figure 10.8.
From this recurrence relation, we can see that the DP matrix will contain entries for all i, j where 1 ≤ i ≤ n and i ≤ j ≤ n, and n is the length of the RNA sequence. In other words, the matrix will be n ∗ n and only contain entries in the upper right triangle. The matrix is first initialized such that all values on the diagonal are equal to zero. We then iterate over i = n − 1...1 and j = i + 1...n (bottom to top, left to right) and fill each entry according to the recurrence relation. The overall score is the score of the [1, n] subsequence, which is the upper right corner of the matrix. Figure 10.9 illustrates this procedure.
When we calculate the minimum free energy, we are often interested in the corresponding fold. In order to recover the optimal fold from the DP algorithm, a traceback matrix is used to store pointers from each entry to its parent entry. Figure 10.10 describes the backtracking algorithm.
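A compact implementation of the fill and traceback just described might look like the following; the pairing rules and the placeholder energy of -1 per allowed pair stand in for the β values and are assumptions of this sketch.

```python
def nussinov(seq, min_loop=0):
    """Nussinov-style fill and traceback with a toy scoring: every allowed
    pair contributes -1 (standing in for the beta values), all else 0.
    Returns the minimum free energy and one optimal list of base pairs (i, k).
    """
    allowed = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'), ('G', 'U'), ('U', 'G')}
    n = len(seq)
    E = [[0.0] * n for _ in range(n)]

    # Fill over increasing subsequence length (the diagonal is already zero).
    for length in range(min_loop + 1, n):
        for i in range(n - length):
            j = i + length
            best = E[i + 1][j]                          # base i left unpaired
            for k in range(i + 1, j + 1):               # base i paired with base k
                if (seq[i], seq[k]) in allowed:
                    left = E[i + 1][k - 1] if k - 1 > i else 0.0
                    right = E[k + 1][j] if k + 1 <= j else 0.0
                    best = min(best, -1.0 + left + right)
            E[i][j] = best

    # Traceback: follow the choices that reproduce each entry's value.
    pairs, stack = [], [(0, n - 1)]
    while stack:
        i, j = stack.pop()
        if i >= j:
            continue
        if E[i][j] == E[i + 1][j]:
            stack.append((i + 1, j))
            continue
        for k in range(i + 1, j + 1):
            if (seq[i], seq[k]) in allowed:
                left = E[i + 1][k - 1] if k - 1 > i else 0.0
                right = E[k + 1][j] if k + 1 <= j else 0.0
                if E[i][j] == -1.0 + left + right:
                    pairs.append((i, k))
                    stack.append((i + 1, k - 1))
                    stack.append((k + 1, j))
                    break
    return E[0][n - 1], pairs
```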
This model is very simplistic and there are some limitations to it. Nussinov’s algorithm, as implemented naively, does not take into account some of the limiting aspects of RNA folding. Most importantly, it does not consider stacking interactions between neighboring pairs, a vital factor (even more so than hydrogen bonds) in RNA folding (Figure 10.11).
Therefore, it is desirable to integrate biophysical factors into our prediction. One improvement, for instance, is to assign energies to graph faces (structural elements in figure 10.12), rather than single base pairs. The total energy of the structure then becomes the sum of the energies of the substructures. The stacking energies can be calculated by melting oligonucleotides experimentally.
Zuker Algorithm
Therefore, we use a variant which includes stacking energies to calculate the RNA structure. This is called the Zuker algorithm. Like Nussinov’s, it assumes that the optimal structure is the one with the lowest equilibrium free energy. Nevertheless, it includes the total energy contributions from the various substructures which is partially determined by the stacking energy. Some modern RNA folding algorithms use this algorithm for RNA structure predictions.
In the Zuker algorithm, we have four different cases to deal with. Figure 10.13 shows a graphical outline of the decomposition steps. The procedure requires four matrices. Fij contains the free energy of the overall optimal structure of the subsequence xij. The newly added base can be unpaired or it can form a pair. For
the latter case, we introduce the helper matrix Cij, that contains the free energy of the optimal substructure of xij under the constraint that i and j are paired. This structure closed by a base-pair can either be a hairpin, an interior loop or a multi-loop.
The hairpin case is trivial because no further decomposition is necessary. The interior loop case is also simple because it reduces again to the same decomposition step. The multi-loop step is more complicated. The energy of a multi loop depends on the number of components, i.e. substructures that emanate from the loop. To implicitly keep track of this number, there is a need for two additional helper matrices. Mij holds the free energy of the optimal structure of xij under the constraint that xij is part of a multi loop with at least one component. Mij1 holds the free energy of the optimal structure of xij under the constraint that xij is part of a multi-loop and has exactly one component closed by pair (i,k) with i < k < j. The idea is to decompose a multi loop in two arbitrary parts of which the first is a multi-loop with at least one component and the second a multi-loop with exactly one component and starting with a base-pair.
These two parts, corresponding to M and M1, can further be decomposed into substructures that we already know, i.e. unpaired intervals, substructures closed by a base-pair, or (shorter) multi-loops. (The recursions are also summarized in Figure 10.13.)
In reality, however, at room temperature (or cell temperature), RNA is not actually in one single state, but rather varies within a thermodynamic ensemble of structures. Base pairs can break their bonds quite easily, and although we might find an absolute optimum in terms of free energy, it might be the case that there is another sub-optimal structure which is very different from what we predicted and has an important role in the cell. To fix the problem we can calculate the base pair probabilities to get the ensemble of structures, and then we can have a much better idea of what the RNA structure probably looks like. In order to do this, we utilize the Boltzmann factor:
$\operatorname{Prob}(\mathcal{S})=\frac{\exp (-\Delta G(\mathcal{S}) / R T)}{Z}\nonumber$
This gives us the probability of a given structure in a thermodynamic system. We normalize by the partition function Z, which is the sum of the Boltzmann factors of all structures:
$Z=\sum_{\mathcal{S}} \exp (-\Delta G(\mathcal{S}) / R T)\nonumber$
We can also represent this ensemble graphically, using a dot plot to visualize the base pair probabilities. To calculate the probability of a specific base pair (i, j), we combine the partition functions inside and outside the pair, which gives the following formula:
$p_{i j}=\frac{\hat{Z}_{i j} Z_{i+1, j-1} \exp \left(-\beta_{i j} / R T\right)}{Z}\nonumber$
To calculate Z (the partition function over the whole structure), we use a recursion similar to Nussinov’s algorithm, known as McCaskill’s algorithm. The inner partition function is calculated using the formula:
$Z_{i j}=Z_{i+1, j}+\sum_{\substack{i+1 \leq k \leq j \\ x_{i}, x_{k} \text{ pair}}} Z_{i+1, k-1} Z_{k+1, j} \exp \left(-\beta_{i k} / R T\right)\nonumber$
Each of the added terms corresponds to a different split of our sequence, as the next figure illustrates. Note that the contributions of the sub-intervals are multiplied rather than added, since the energies appear inside exponentials.
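To make the recursion concrete, here is a minimal Python sketch of the inner partition function. The constant pairing energy, the RT value, the example sequence, and the minimum hairpin size are illustrative assumptions; real implementations such as RNAfold use full nearest-neighbor (stacking) parameters rather than a per-pair energy.

```python
import math

RT = 0.616  # kcal/mol at ~37 C (approximate; used only for illustration)
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pair_energy(a, b):
    # Toy model: every allowed pair contributes -2 kcal/mol; real models are
    # sequence- and context-dependent.
    return -2.0 if (a, b) in PAIRS else None

def inner_partition(seq, min_loop=3):
    """Fill Z[i][j] = partition function of the subsequence seq[i..j] (inclusive)."""
    n = len(seq)
    # Empty intervals (j < i) and single bases contribute a factor of 1.
    Z = [[1.0] * (n + 1) for _ in range(n + 1)]
    for span in range(1, n):                  # span = j - i
        for i in range(0, n - span):
            j = i + span
            total = Z[i + 1][j]               # case 1: x_i is left unpaired
            for k in range(i + 1, j + 1):     # case 2: x_i pairs with x_k
                if k - i <= min_loop:         # enforce a minimum hairpin loop size
                    continue
                e = pair_energy(seq[i], seq[k])
                if e is None:
                    continue
                total += Z[i + 1][k - 1] * Z[k + 1][j] * math.exp(-e / RT)
            Z[i][j] = total
    return Z

seq = "GGGAAAUCCC"
Z = inner_partition(seq)
print("partition function over the full sequence:", Z[0][len(seq) - 1])
```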
Similarly, the outer partition function is calculated with the same idea, using an analogous recursion whose terms correspond to different splits in the region outside the base pair (i, j).
It is useful to understand the evolution of RNA structure, because it unveils valuable data and can also give us hints to refine our structure predictions. When we look at functionally important RNAs over time, we see that their nucleotides have changed in some places, but their structure is well conserved.
In RNA there are many compensatory mutations and consistent mutations. In a consistent mutation, the structure doesn’t change; e.g. an AU pair mutates to form a GU pair, which still pairs. In a compensatory mutation there are actually two mutations: one disrupts the structure, but the second restores it. For example, an AU pair changes to a CU, which does not pair well, but in turn the U mutates to a G to restore a CG pair. In an ideal world, if we had this knowledge, it would be the key to predicting RNA structure, because evolution never lies. We can calculate the mutual information content between pairs of positions in an RNA alignment and compare it. In other words, we compare the probability that two positions agree by chance with the probability that they have evolved to conserve the structure.
The mutual information content is calculated via this formula:
$M_{i j}=\sum_{X, Y} f_{i j}(X Y) \log \frac{f_{i j}(X Y)}{f_{i}(X) f_{j}(Y)} \nonumber$
If we normalize these probabilities and express the MI in bits, we can plot it onto a 3D model and track the evolutionary signatures. In fact, this was the method used to determine the structure of ribosomal RNAs long before it was solved by crystallography.
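As a concrete sketch, the following Python snippet estimates $M_{ij}$ for a single pair of alignment columns from their observed nucleotide frequencies. The toy columns are hypothetical and chosen so that the two positions covary perfectly, as compensatory mutations that preserve a base pair would make them.

```python
import math
from collections import Counter

def column_mutual_information(col_i, col_j, base=2):
    """M_ij between two alignment columns (lists of nucleotides, one per sequence).
    Gap positions are skipped; the result is in bits when base=2."""
    pairs = [(x, y) for x, y in zip(col_i, col_j) if x != "-" and y != "-"]
    n = len(pairs)
    f_i = Counter(x for x, _ in pairs)
    f_j = Counter(y for _, y in pairs)
    f_ij = Counter(pairs)
    mi = 0.0
    for (x, y), c in f_ij.items():
        p_xy = c / n
        mi += p_xy * math.log(p_xy / ((f_i[x] / n) * (f_j[y] / n)), base)
    return mi

# Toy alignment columns: perfectly covarying positions
col_i = list("AAGGCC")
col_j = list("UUCCGG")
print(column_mutual_information(col_i, col_j))  # log2(3) ~ 1.585 bits
```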
The real problem is that we usually don’t have that much information, so what we usually do is combine the folding prediction methods with phylogenetic information in order to get a reliable prediction. The most common way to do this is to combine the Zuker algorithm with covariance scores: for example, we add stabilizing energy if we see a compensatory mutation, and destabilizing energy if we see a single (inconsistent) nucleotide mutation.
10.07: Probabilistic Approach to the RNA Folding Problem
RNA-coding sequence inside the genome
Finding RNA-coding sequences inside the genome is a very hard problem. However, there are ways to do it. One way is to combine the thermodynamic stability information with a normalized RNAfold score and then perform a Support Vector Machine (SVM) classification: we compare the thermodynamic stability of the sequence to random sequences of the same GC content and the same length, and see how many standard deviations more stable the given structure is than the expected value.
We can combine this with the evolutionary measure and see whether the RNA is more conserved or not. This gives us (with reasonable accuracy) an idea of whether the genomic sequence actually encodes an RNA.
We have studied only half of the story. Although the thermodynamic approach is a good way (and the classic way) of folding RNAs, part of the community likes to study the problem from a different angle.
Let’s assume for now that we don’t know anything about the physics of RNA or the Boltzmann factor. Instead, we look at the RNA as a string of letters for which we want to find the most probable structure. We have already learned about Hidden Markov Models in previous lectures. They are a nice way to make predictions about the hidden states of a probabilistic system. The question is: can we use Hidden Markov Models for the RNA folding problem? The answer is yes.
We can represent RNA structure as a set of hidden states of dots and brackets (recall the dot-bracket representation of RNA in part 3). There is an important observation to make here: the positions and the pairings inside the RNA are not independent, so we cannot simply have a state of an opening bracket without any considerations of the events that are happening downstream.
Therefore we need to extend the HMM framework to allow for nested correlations. Fortunately, the probabilistic framework to deal with such a problem already exists. It is known as stochastic context-free grammar (SCFG).
Context Free Grammar in a nutshell
You have:
• Finite set of non-terminal symbols (states) e.g. {A, B, C} and terminal symbols e.g. {a, b, c}
• Finite set of production rules, e.g. {A → aB, B → AC, B → aa, C → ab}
• An initial (start) nonterminal
You want to find a way to get from one state to another (or to a terminal). $A→aB→aAC→ aaaC → aaaab \nonumber$
In a stochastic CFG, the only difference is that each production rule has a certain probability, e.g.: $P(B → AC) = 0.25, \quad P(B → aa) = 0.75 \nonumber$
Phylogenetic information is easily combined with SCFGs, since there are many probabilistic models for phylogenetic data. These probabilistic models are not discussed in detail in this lecture, but the following summarizes the analogies between the stochastic models and the methods that we have seen so far in the class.
• Analogies to thermodynamic folding:
– CYK ↔ Minimum Free energy (Nussinov/Zuker)
– Inside/outside algorithm ↔ Partition functions (McCaskill)
• Analogies to Hidden Markov models:
– CYK ↔ Viterbi’s algorithm
– Inside/outside algorithm ↔ Forward/backwards algorithm
• Given a parameterized SCFG (Θ,Ω) and a sequence x, the Cocke-Younger-Kasami (CYK) dynamic programming algorithm finds an optimal (maximum probability) parse tree $\hat{\pi}$:
$\hat{\pi}=\underset{\pi}{\operatorname{argmax}} \operatorname{Prob}(\pi, x \mid \Theta, \Omega)\nonumber$
• The Inside algorithm is used to obtain the total probability of the sequence given the model, summed over all parse trees: $\text{Prob}(x \mid \Theta, \Omega) = \sum_{\pi} \text{Prob}(x, \pi \mid \Theta, \Omega)$. A toy sketch of the CYK recursion follows below.
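The sketch below runs CYK on a small hypothetical SCFG in Chomsky normal form. The grammar is not an RNA grammar (real RNA SCFGs such as the Pfold grammar emit paired bases), but the dynamic program has the same shape; replacing max with a sum over rules and split points would give the Inside algorithm.

```python
from collections import defaultdict

# Toy SCFG in Chomsky normal form: rules are either A -> B C or A -> terminal,
# and the probabilities of rules sharing a left-hand side sum to 1.
binary_rules = {            # (A, (B, C)) : probability
    ("S", ("A", "B")): 0.6,
    ("S", ("B", "A")): 0.4,
    ("A", ("A", "A")): 0.3,
}
unary_rules = {             # (A, terminal) : probability
    ("A", "a"): 0.7,
    ("B", "b"): 1.0,
}

def cyk(seq, start="S"):
    n = len(seq)
    best = defaultdict(float)        # best[(i, j, A)] = max prob of A deriving seq[i:j]
    for i, ch in enumerate(seq):     # length-1 spans: terminal rules
        for (A, t), p in unary_rules.items():
            if t == ch:
                best[(i, i + 1, A)] = max(best[(i, i + 1, A)], p)
    for span in range(2, n + 1):     # longer spans: binary rules over all split points
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, (B, C)), p in binary_rules.items():
                    score = p * best[(i, k, B)] * best[(k, j, C)]
                    best[(i, j, A)] = max(best[(i, j, A)], score)
    return best[(0, n, start)]

print(cyk("aab"))   # 0.6 * (0.3 * 0.7 * 0.7) * 1.0 = 0.0882
```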
Application of SCFGs
• Consensus secondary structure prediction: Pfold – First Phylo-SCFG
• Structural RNA gene finding: EvoFold
– Uses Pfold grammar
– Two competing models:
∗ Non-structural model with all columns treated as evolving independently
∗ Structural model with dependent and independent columns
– Sophisticated parametrization
There still remain a host of other problems that need to be solved by studying RNA structure. This section will profile some of them.
Other problems
Observe some of the problems depicted graphically below:
Relevance
There are plenty of RNAs inside the cell aside from mRNAs, tRNAs and rRNAs. The question is: what is the relevance of all this non-coding RNA? Some believe it is noise resulting from the experiments; some think it is just biological noise that has no meaning for the living organism. On the other hand, some believe this “junk” RNA might actually have important roles as signals inside the cell and that all of it is functional. The truth probably lies somewhere in between.
Current research
There are conserved regions in the genome that do not code for any proteins, and Stefan Washietl and colleagues are now looking into them to see if they have structures that are stable enough to form functional RNAs. It turns out that around 6% of these regions have hallmarks of good RNA structure, which still amounts to roughly 30,000 structural elements. The group has annotated some of these elements, but there is still a long way to go: many miRNAs and snoRNAs have been found, and of course lots of false positives. But there are exciting results coming up in this topic, so the final note is: it’s a very good area to work in!
Summary and key points
1. The functional spectrum of RNAs is practically unlimited
(a) RNAs similar to contemporary Ribozymes and Riboswitches might have existed in an RNA world. Some of them still exist as living fossils in current cells.
(b) Evolutionarily younger RNAs including miRNAs and many long ncRNAs form a non-protein based regulatory layer.
2. RNA structure is critical for their function and can be predicted computationally
(a) Nussinov/Zuker: Minimum Free Energy structure
(b) McCaskill: Partition function and pair probabilities
(c) CYK/Inside-Outside: probabilistic solution to the problem using SCFGs
3. Phylogenetic information can improve structure prediction
4. Computational biology of RNAs is an active field of research with many hard algorithmic problems still open
10.10 Further reading
• Overview
– Washietl S, Will S. et al. Computational analysis of noncoding RNAs. Wiley Interdiscip Rev RNA. 2012, 10.1002/wrna.1134
• RNA function: review papers by John Mattick
• Single sequence RNA folding
– Nussinov R, Jacobson AB, Fast algorithm for predicting the secondary structure of single-stranded RNA.Proc Natl Acad Sci U S A. 1980 Nov; 77:(11)6309-13
– Zuker M, Stiegler P Optimal computer folding of large RNA sequences using thermodynamics and auxiliary information. Nucleic Acids Res. 1981 Jan; 9:(1)133-48
– McCaskill JS The equilibrium partition function and base pair binding probabilities for RNA secondary structure. Biopolymers. 1990; 29:(6-7)1105-19
– Dowell RD, Eddy SR, Evaluation of several lightweight stochastic context-free grammars for RNA secondary structure prediction. BMC Bioinformatics. 2004 Jun; 5:71
– Do CB, Woods DA, Batzoglou S, CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics. 2006 Jul; 22:(14)e90-8
• Consensus RNA folding
– Hofacker IL, Fekete M, Stadler PF, Secondary structure prediction for aligned RNA sequences. J Mol Biol. 2002 Jun; 319:(5)1059-66
– Knudsen B, Hein J, RNA secondary structure prediction using stochastic context-free grammars and evolutionary history. Bioinformatics. 1999 Jun; 15:(6)446-54
• RNA gene finding
– Pedersen JS, Bejerano G, Siepel A, Rosenbloom K, Lindblad-Toh K, Lander ES, Kent J, Miller W, Haussler D. Identification and classification of conserved RNA secondary structures in the human genome. PLoS Comput Biol. 2006 Apr; 2:(4)e33
– Washietl S, Hofacker IL, Stadler PF, Fast and reliable prediction of noncoding RNAs. Proc Natl Acad Sci U S A. 2005 Feb; 102:(7)2454-9
Bibliography
[1] R. Durbin. Biological Sequence Analysis.
[2] W. Gilbert. Origin of life: The RNA world. Nature, 319(6055):618, 1986.
[3] Rachel Sealfon, 2012. Extra information taken from Recitation 5 slides.
[4] Z. Wang, M. Gerstein, and M. Snyder. RNA-seq: a revolutionary tool for transcriptomics. Nat Rev Genet., 10(1):57–63, 2009.
[5] Stefan Washietl, 2012. All pictures/formulas courtesy of Stefan’s slides.
[6] R. Weaver. Molecular Biology. 3rd edition.
Many ideas in biology rely on knowing the protein levels in a cell. Protein abundance is often extrapolated from corresponding mRNA levels. This extrapolation is made as it is relatively easy to measure mRNA levels. In addition, for a long time, it was thought that all of the regulation of expression occurred prior to mRNA formation. Now, it is known that expression continues to be regulated at the translation stage. Figure 1 shows that the data available for post-transcriptional regulation is minimal and illustrates an example of how mRNA levels are not indicative of protein abundance.
There are many factors that may be affecting how mRNA is translated, causing mRNA level to not be directly related to protein levels. These factors include:
1. Translation elongation rates
- depends on codon usage bias, tRNA adaptation, and RNA editing
2. Translation initiation rates
- depends on AUG frequency, TOP presence, type of initiation (cap-dependent/IRES), and secondary structures
3. Translation termination rates
- depends on termination codon identity
4. mRNA degradation rates
- depends on polyA tail length, capping, mRNA editing, and secondary structure
5. Protein degradation rates
- depends on PEST sequences, protein stability, unstructured regions, and the presence of polar amino acids
6. Cis and Trans regulatory elements
- depends on AU-rich elements, miRNAs, ncRNAs, and RNA-binding proteins
11.02: Post-Transcriptional Regulation
Basics of Protein Translation
For the basics of transcription and translation, refer to Lecture 1, sections 4.3 - 4.5.
The genetic code is almost universal.
FAQ
Q: Why is genetic code so similar across organisms?
A: Genomic material is not only transmitted vertically (from parents) but also horizontally between organisms. This gene exchange creates an evolutionary pressure for a universal genetic code.
FAQ
Q: What accounts for the slight differences in the genetic code across organisms?
A: Late/early evolutionary arrival of amino acids can account for the differences. Also, certain species (e.g. bacteria in deep sea vents) have more resources to synthesize specific amino acids, thus they will favor those in the genetic code.
Did You Know?
Threonine and Alanine are often accidentally interchanged by tRNA synthetases because they originated from one amino acid.
Measuring Translation
Translation efficiency is defined as,
$T_{e f f}=\frac{[\mathrm{protein}]}{[\mathrm{mRNA}]}\nonumber$
We are interested in seeing just how much of our mRNA is translated to protein, i.e. the efficiency. However, specifically measuring how much mRNA becomes protein is a difficult task, one that requires a bit of creativity. There are a variety of ways to tackle this problem, but each has its own downfalls:
1. Measure mRNA and protein levels directly
Pitfall: Does not consider rates of synthesis and degradation. This method measures the protein levels for the ’old’ mRNA since there is a time lag from mRNA to protein.
2. Use drugs to inhibit transcription and translation
Pitfall: Drugs have side effects altering translation
3. Artificial fusion of proteins with tags
Pitfall: Protein tags can affect protein stability
4. Pulse label with radioactive nucleosides or amino acids (SILAC) **in use today**
Pitfall: Offers no information on dynamic changes: it is simply a snapshot of the resulting mRNA and protein levels after X hours
Another common technique is using ribosome profiling to measure protein translation at subcodon resolution. This is done by freezing ribosomes in the process of translation and degrading the non-ribosome-protected sequences. At this point, the sequences can be pieced back together and the frequency with which a region is translated can be interpolated. The disadvantage of using these ribosome footprints to see which regions are being translated is that regions in between ribosomes are lost. This technique requires an RNA-seq run in parallel.
The question remains, why is Ribosome profiling advantageous? This technique is a better approach to measuring protein abundance as it:
1. Is a better measure of protein abundance
2. Is independent of protein degradation (compared to the protein abundance/mRNA ratio)
3. Allows us to measure codon-specific translation rates
Using ribosome profiling, it is possible to see which codon is being decoded: this is done by mapping ribosome footprints and then deciphering the translating codon based on footprint length. We can then verify our prediction by mapping translated codon profiles based on periodicity (three bases per codon). The technique can be improved even further by using translation-inhibiting drugs such as harringtonine and cycloheximide. Cycloheximide blocks elongation and harringtonine inhibits initiation. The latter can be used to find the starting points (which genes are about to be translated). Figure 4 shows the effects of the drugs on the ribosome profiles.
This technique has much more to offer than simply quantifying translation. Ribosome profiling allows for:
1. Prediction of alternative isoforms (different places where translation can start)
2. Prediction of previously unidentified ORFs (open reading frames)
(Figure 11.6: Ribosome profile when harringtonine is used vs. no drug; the red peaks mark previously unidentified ORFs.)
3. Comparing translation across different environmental conditions
4. Comparing translation across life stages
Thus, we see that ribosome profiling is a very powerful tool with lots of potential to reveal previously elusive information about the translation of a genome.
Codon Evolution
Basic concepts
Something to make clear is that codons are not used with equal frequencies. In fact, which codons can be considered optimal differs across different species based on RNA stability, strand-specific mutation bias, transcriptional efficacy, GC composition, protein hydropathy, and translational efficiency. Likewise, tRNA isoacceptors are not used with equal frequencies within and across species. The motivation for the next section is to determine how we may measure this codon bias.
Measures of Codon Bias
There are a few methods to accomplish this task:
a) Calculate the frequency of optimal codons, defined as the number of “optimal” codons divided by the sum of “optimal” and “non-optimal” codons. The limitations of this method are that it requires knowing which codon is recognized by each tRNA and that it assumes tRNA abundance is highly correlated with tRNA gene copy number.
b) Calculate a codon bias index. This measures the rate of optimal codons with respect to the total codons encoding for that same amino acid. However, in this case the number of optimal codons is normalized with respect to the expected random usage: $CBI = (o_{opt} - e_{rand})/(o_{tot} - e_{rand})$. The limitation of this method is that it requires a reference set of proteins, such as highly expressed ribosomal proteins.
c) Calculate a codon adaptation index (CAI). This measures the relative adaptiveness or deviation of the codon usage of a gene towards the codon usage of a reference set of proteins, i.e. highly expressed genes. It is defined as the geometric mean of the relative adaptiveness values, measured as weights associated to each codon, over the length of the gene sequence (measured in codons). Each weight is computed as the ratio between the observed frequency of a given codon and the frequency of the most used synonymous codon for the same amino acid. The limitation of this approach is that it requires the definition of a reference set of proteins, just as the last method did. (A code sketch of this index is given below.)
d) Calculate the effective number of codons. This measures the total number of different codons used in a sequence, which captures the bias toward use of a smaller subset of codons, away from equal use of synonymous codons. Nc = 20 if only one codon is used per amino acid, and Nc = 61 when all possible synonymous codons are used equally. The steps of the process are to compute the homozygosity for each amino acid as estimated from the squared codon frequencies, obtain the effective number of codons per amino acid, and compute the overall number of effective codons. This method is advantageous because it does not require any knowledge of tRNA-codon pairing, and it does not require any reference set. However, it is limited in that it does not take into account the tRNA pool.
e) Calculate the tRNA adaptation index. Assume that tRNA gene copy number has a high positive correlation with tRNA abundance within the cell. This index then measures how well a gene is adapted to the tRNA pool.
It is important to know when to use each index. The situation in which a certain index is favorable is very context-dependent, and thus it is often preferable to use one index above the others when the situation calls for it. By carefully choosing an index, one can uncover information about the frequency with which a codon is translated to an amino acid.
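To make the CAI computation from point (c) concrete, here is a minimal Python sketch. The two synonymous codon families, the reference counts, and the pseudo-weight for unseen codons are illustrative assumptions; a real implementation would use the full genetic code and a curated set of highly expressed reference genes.

```python
import math
from collections import Counter

# Illustrative two-codon synonymous families; a real implementation would derive
# these from the full genetic code.
SYNONYMS = {
    "Lys": ["AAA", "AAG"],
    "Glu": ["GAA", "GAG"],
}

def cai_weights(reference_codons):
    """Relative adaptiveness w_c = freq(codon) / freq(most used synonymous codon),
    estimated from a reference set of highly expressed genes."""
    counts = Counter(reference_codons)
    w = {}
    for family in SYNONYMS.values():
        best = max(counts.get(c, 0) for c in family) or 1
        for c in family:
            # small pseudo-weight for codons never seen in the reference set
            w[c] = counts.get(c, 0) / best if counts.get(c, 0) else 0.01
    return w

def cai(gene_codons, weights):
    """Codon adaptation index: geometric mean of the weights over the gene's codons."""
    logs = [math.log(weights[c]) for c in gene_codons if c in weights]
    return math.exp(sum(logs) / len(logs))

reference = ["AAG"] * 9 + ["AAA"] * 3 + ["GAA"] * 8 + ["GAG"] * 2
weights = cai_weights(reference)
print(cai(["AAG", "GAA", "AAA", "GAG"], weights))   # geometric mean of 1, 1, 1/3, 0.25
```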
RNA Modifications
The story becomes more complicated when we consider modifications that can occur to RNA. For instance, some modifications can expand or restrict the wobbling capacity of the tRNA. Examples include inosine modifications and xo5U modifications. These modifications allow tRNAs to decode a codon that they could not read before. One might ask why RNA modification was positively selected in the context of evolution, and the rationale is that it increases the probability that a matching tRNA exists to decode a codon in a given environment.
Examples of applications
There are a few natural applications that result form our understanding of codon evolution.
a) Codon optimization for heterologous protein expression
b) Predicting coding and non-coding regions of a genome
c) Predicting codon read-through
d) Understanding how genes are decoded - studying patterns of codon usage bias along genes
Translational Regulation
There are many known means of regulation at the post-transcriptional level. These include modulation of tRNA availability, changes in mRNA, and cis- and trans-regulatory elements. First, tRNA modulation has a large impact: changes in tRNA isoacceptors, changes in tRNA modifications, and regulation at the tRNA aminoacylation level. Changes in mRNA that affect translation include changes in mRNA modification, polyA tail, splicing, capping, and the localization of mRNA (importing to and exporting from the nucleus). Cis- and trans-regulatory elements include RNA interference (i.e. siRNA and miRNA), frameshift events, and riboswitches. Additionally, many regulatory elements are still yet to be discovered!
11.03: What Have We Learned
Hopefully at the end of this chapter we have come to realize the importance of translational regulation. We see that mRNA levels are not 1:1 with protein levels. Additionally, we saw that the genetic code is not strictly universal, and that what are considered preferred tRNA-codon pairs are dynamic. Likewise, synonymous mutations are not equivalent across species. We have seen how powerful the technique of ribosome profiling is, as it allows us to measure translation with subcodon resolution. Despite all this, it is possible to model translation and codon evolution, using these tools to help increase translation efficiency/folding of proteins in heterologous systems, predict coding regions, understand cell type-specific translation patterns, and compare translation between healthy and disease states. Finally, by analyzing translational regulation, we see how protein levels are tuned, and we see that there are many different ways to achieve post-transcriptional regulation. Perhaps we may come to realize that there is more interconnection between these different regulation strategies than we originally thought.
Bibliography
[1] R.P. Dilworth. A decomposition theorem for partially ordered sets. Annals of Mathematics, 1950.
[2] Mitchell Guttman, Manuel Garber, Joshua Z Levin, Julie Donaghey, James Robinson, Xian Adiconis, Lin Fan, Magdalena J Koziol, Andreas Gnirke, Chad Nusbaum, and et al. Ab initio reconstruction of cell type-specific transcriptomes in mouse reveals the conserved multi-exonic structure of lincRNAs. Nature Biotechnology, 28(5):503–510, 2010.
[3] C. Trapnell.
12.02: Introduction
Epigenetics is the study of heritable changes in gene expression and phenotype that do not result from changes in the underlying DNA sequence. Each cell, despite having an identical copy of the genome, is able to differentiate into a specialized type. There are many biological devices for accomplishing this, including DNA methylation, histone modification, and various types of RNA.
DNA methylation is a binary code that is effectively equivalent to turning a gene ”on” or ”off”. However, often a gene might need to be more highly expressed as opposed to just being turned on. For this, histones have tails that are subject to modification. The unique combination of these two elements on a stretch of DNA can be thought of as a barcode for cell type. Even more important is the method of their preservation during replication. In the case of DNA methylation, one appropriately methylated parental strand is allocated to each daughter DNA duplex. By leaving this one trail behind, the cell is able to fill in the gaps and appropriately methylate the newly synthesized strand.
As the intermediary between DNA sequences and proteins, RNA is arguably the most versatile means of regulation. As such, they will be the focus of this chapter.
Did You Know?
Cell types can be determined by histone modification or DNA methylation (a binary code, which relies on a euchromatic and heterochromatic state). These histone modifications can be thought of as a type of epigenetic barcode that allows cell DNA to be scanned for types. Non-coding RNAs called Large Intergenic Non-Coding RNAs (lincRNAs) are heavily involved in this process.
A quick history of RNA:
• 1975: A lab testing relative levels of RNA and DNA in bull sperm discovers twice as much RNA as DNA.
• 1987: After automated sequencing developed, weird non-coding RNAs are first found.
• 1988: RNA is proved to be important for maintaining chromosome structures, via chromatin architecture
• 1990s: A large number of experiments start to research
• 2000s: Study shows histone methyltransferases depend on RNA, as RNase treatment causes the proteins to delocalize.
Transcription is a good proxy of what’s active in the cell and what will turn into protein. Microarrays led to the discovery of twice as many non-coding genes as coding genes initially; now we know the ratio is even far higher than this.
12.03: Noncoding RNAs from Plants to Mammals
Basic cycle: a large RNA gets chopped up into small RNAs (siRNAs).
RNA use by category:
Protists: RNA is used as a template to splice out DNA (RNA-dependent DNA elimination and splicing)
mRNA and DNA in nucleus: DNA chopped and recombined based on gaps in mRNA (“quirky phenom- ena”)
Plants: RNA-dependent RNA polymerase, where the polymerase takes a template of RNA and makes a copy of it, is available in plants but not humans, and can make small RNAs. Mammals have at most one copy. It is very different from RNA polymerase and DNA polymerase in structure. From this, we know that plants do DNA methylation with non-coding RNA.
Flies: use RNAs for an RNA switch; coordinated regulation of the hox genes requires non-coding RNA.
Mammals: Non-coding RNAs can form triple helices, guide proteins to them; chromatin-modifying complexes; involved in the germ line; guide behaviour of transcription factors.
For the rest of this talk, we focus on specifically lincRNA, which we will define as RNA larger than 200 nucleotides.
Long non-coding RNAs
There are a number of different mechanisms and biological devices by which epigenetic regulation occurs. One of these is long non-coding RNAs which can be thought of as fulfilling an air traffic control function within the cell.
Long non-coding RNAs share many characteristics with mRNAs. They are spliced, contain multiple exons, are capped, and are poly-adenylated. However, they do not have open reading frames. They look just like protein-coding genes, but cannot code for proteins.
They are better classified by their anatomical position:
Antisense: These are encoded on the opposite strand of a protein coding gene.
Intronic: Entirely contained within an intron of a protein coding gene.
Bidirectional: These share the same promoter as a protein coding gene, but are on the opposite side.
Intergenic: These do not overlap with any protein coding genes. Think of them as sitting blindly out in the open. They are much easier targets and will be the focus of this chapter.
RNA-seq is a method that utilizes next-generation sequencing technology to sequence cDNA allowing us to gain insight into the contents of RNA. The two main problems that RNA-seq addresses are (1) discover new genes such as splice isoforms of previously discovered genes and (2) uncover the expression levels of genes and transcripts from the sequencing data. Additionally, RNA-seq is also beginning to replace many traditional sequencing techniques allowing labs to perform experiments more efficiently.
How it works
The RNA-Seq machine grabs a transcript and breaks it into different fragments, where the fragments are normally distributed. With the speed that the RNA-seq can sequence these transcript fragments (or reads), there are an abundant number of reads allowing us to extract expression levels. The basic idea behind this method relies on the fact that the more abundant a transcript is, the more fragments we’ll sequence from it.
The tools used to analyze RNA-Seq data are collectively known as the “Tuxedo Tools”
Aligning RNA-Seq reads to genomes and transcriptomes
Since RNA-Seq produces so many reads, the alignment algorithm must have a fast runtime, approximately of the order of O(n). There are two main strategies for aligning short reads, which require that we already have the transcripts.
1. Spaced seeds indexing
Spaced seeds indexing involves taking each read and breaking it into fragments, or “seeds”. We take every combination of two fragments (“seed pairs”) and compare them to an index of seeds (which will take tens of gigabytes of space) for potential hits. Compare the other seeds to the index to make sure we have a hit.
2. Burrows-Wheeler indexing
Burrows-Wheeler indexing takes the genome and rearranges it in such a way that you can process the read one character at a time and very quickly rule out a huge chunk of the genome as possible alignment positions.
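A minimal sketch of the transform itself follows; real aligners such as Bowtie or BWA build it via suffix arrays and add an FM-index on top so that reads can be matched one character at a time, which this toy version does not show.

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations; '$' marks the end of the text."""
    text = text + "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(transformed):
    """Reconstruct the original text by repeatedly sorting prepended columns."""
    table = [""] * len(transformed)
    for _ in range(len(transformed)):
        table = sorted(c + row for c, row in zip(transformed, table))
    original = next(row for row in table if row.endswith("$"))
    return original.rstrip("$")

genome = "ACAACG"
print(bwt(genome))                # 'GC$AAAC'
print(inverse_bwt(bwt(genome)))   # 'ACAACG'
```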
One major problem with these two general purpose alignment strategies is that they don’t account for large gaps in alignment.
To get around this, TopHat breaks the reads into smaller pieces. These pieces are aligned and reads with pieces that are mapped far apart are flagged for possible intron sites. The pieces that weren’t able to be aligned are used to confirm the splice sites. The reads are then stitched back together to make full read alignments.
There are two strategies for assembling transcripts based on RNA-Seq reads.
1. Genome-guided approach (used in software such as Cufflinks)
The idea behind this approach is that we don’t necessarily know if two reads come from the same transcript, but we will know if they come from different transcripts. The algorithm is as follows: take the alignments and put them in a graph. Add an edge from x → y if x is to the left of y in the genome, x and y overlap consistently, and y is not contained in x. So we have an edge from x → y if they might come from the same transcript.
If we walk across this graph from left to right, we get a potential transcript. Applying Dilworth’s theorem to the partial order of read alignments, we can see that the size of the largest antichain in the graph is the minimum number of transcripts needed to explain the alignments. An antichain is a set of alignments with the property that no two are compatible (i.e. could arise from the same transcript). A sketch of this construction follows after this list.
2. Genome-independent approach (used in software such as trinity)
The genome-independent approach attempts to piece together the transcripts directly from the reads using classical methods for overlap based read assembly, similar to the genome assembly methods.
Calculating expression of genes and transcripts
We want to count the number of reads from each transcript to find the expression level of the transcript. However, since we divide transcripts into equally-sized fragments, we run into the problem that longer transcripts will naturally produce more reads than shorter transcripts. To account for this, we compute expression levels in FPKM: fragments per kilobase of transcript per million fragments mapped.
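Concretely, for a transcript with $X$ mapped fragments, length $l$ (in bases), and $N$ total mapped fragments in the experiment, a common formulation is:

$\mathrm{FPKM}=\frac{X}{\left(l / 10^{3}\right)\left(N / 10^{6}\right)}=\frac{X \cdot 10^{9}}{l \cdot N}\nonumber$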
Likelihood function for a gene
Suppose we sequence a particular read, call it F1.
In order to get this particular read, we need to pick the particular transcript it’s in and then we need to pick this particular read out from the whole transcript. If we define $\gamma_{\text {green }}$ to be the relative abundance of the green transcript, then we have
$P\left(F_{1} \mid \gamma_{\text {green }}\right)=\frac{\gamma_{\text {green }}}{l_{\text {green }}}\nonumber$
where lgreen is the length of the green transcript. Now suppose we look at a different read, F2.
It could have come from either the green transcript of the blue transcript, so:
$P\left(F_{2} \mid \gamma\right)=\frac{\gamma_{\text {green }}}{l_{\text {green }}}+\frac{\gamma_{\text {blue }}}{l_{\text {blue }}} \nonumber$
We can see that the probability of getting both F1 and F2 is just the product of the individual probabilities:
$P(F \mid \gamma)=\frac{\gamma_{\text {green }}}{l_{\text {green }}} \cdot\left(\frac{\gamma_{\text {green }}}{l_{\text {green }}}+\frac{\gamma_{\text {blue }}}{l_{\text {blue }}}\right) \nonumber$
We define this as our likelihood function, L(F|γ). Given an input of abundances, we get a probability of how likely our sequence of reads is. So from a set of reads and transcripts, we can build a likelihood function and calculate the values for gamma that will maximize this function. Cufflinks achieves this using hill climbing or EM on the log-likelihood function.
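Here is a toy Python sketch of such an EM procedure for the model above. The read-to-transcript compatibility sets, transcript lengths, and iteration count are illustrative assumptions; this shows the idea behind Cufflinks-style estimation rather than its actual implementation.

```python
def em_abundances(reads, lengths, n_iter=200):
    """Toy EM for relative transcript abundances gamma.
    `reads` is a list of sets: the transcripts each fragment is compatible with.
    `lengths` maps transcript name -> transcript length."""
    transcripts = list(lengths)
    gamma = {t: 1.0 / len(transcripts) for t in transcripts}      # uniform start
    for _ in range(n_iter):
        counts = {t: 0.0 for t in transcripts}
        for compat in reads:                                      # E-step: fractional assignment
            weights = {t: gamma[t] / lengths[t] for t in compat}
            total = sum(weights.values())
            for t, w in weights.items():
                counts[t] += w / total
        total_counts = sum(counts.values())                       # M-step: re-estimate gamma
        gamma = {t: c / total_counts for t, c in counts.items()}
    return gamma

lengths = {"green": 1000, "blue": 2000}
# 20 green-only fragments, 30 blue-only fragments, 50 ambiguous fragments
reads = [{"green"}] * 20 + [{"blue"}] * 30 + [{"green", "blue"}] * 50
print(em_abundances(reads, lengths))   # estimated relative abundances
```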
Differential analysis with RNA-Seq
Suppose we perform an RNA-Seq analysis for a gene under two different conditions. How can we tell if there is a significant difference in the fragment counts? We calculate expression by estimating the expected number of fragments that come from each transcript. To test for significance, we need to know the variance of that estimate. We model the variance as:
Var(expression) = Technical variability + Biological variability
Technical variability, which is variability arising from uncertainty in mapping reads, can be modeled well with a Poisson distribution (see figure below). However, biological variability, i.e. variability across replicates, is overdispersed relative to a Poisson model.
In the simple case where we have variability across replicates but no mapping uncertainty, we can mix the Poisson distributions from each replicate into a new distribution to model biological variability. We can treat the rate parameter λ of the Poisson distribution as a random variable that follows a gamma distribution:
$\lambda \sim \operatorname{Gamma}(r, p), \quad X \mid \lambda \sim \operatorname{Poisson}(\lambda) \nonumber$
The counts from this model follow a negative binomial distribution. To figure out the parameters for the negative binomial for each gene, we can fit a gamma function through a scatter plot of the mean count vs. count variance across replicates.
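A quick simulation illustrates why this gamma-Poisson mixture captures overdispersion; the parameter values below are arbitrary and chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each replicate draws its own Poisson rate lambda from a Gamma distribution,
# then a count given that rate.  The marginal counts are negative binomial,
# with Var = mean + mean^2 / r, i.e. overdispersed relative to Poisson.
r, theta = 5.0, 20.0                        # illustrative Gamma shape and scale
lam = rng.gamma(shape=r, scale=theta, size=200_000)
counts = rng.poisson(lam)

mean = counts.mean()
print("mean:", mean)                        # ~ r * theta = 100
print("Poisson variance would be:", mean)   # under Poisson, Var = mean
print("observed variance:", counts.var())   # ~ mean + mean^2 / r = 2100
```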
In the simple case where there is read mapping uncertainty but no biological variability, we need to include the mapping uncertainty in our variance estimate. Since we assign reads to transcripts probabilistically, we need to calculate the variance of that assignment.
The two threads of RNA-Seq expression analysis research focus on the problems in these two simple cases. One thread focuses on inferring the abundances of individual isoforms to learn about differential splicing and promoter use, while the other focuses on modeling variability across replicates to create more robust differential gene expression analyses. Cuffdiff unites these two separate threads to handle the case where we have both biological variability and read mapping ambiguity. Since overdispersion can be modeled with a negative binomial distribution and mapping uncertainty can be modeled with a beta distribution, we combine the two and model this case with a beta negative binomial distribution.
Let’s examine human skin as an example of long non-coding RNAs being used in epigenetic regulation. Human skin is huge, in fact it is the largest organ by weight in the body. It is intricate, with specialized features, and it is constantly regenerating to replace old dead cells with new ones. The skin must be controlled so hair only grows on the back of your hand rather than on your palm. Moreover, these boundaries cannot change and are maintained ever since birth.
The skin in all parts of the body is composed of an epithelial layer and a layer of connective tissue made up of cells called fibroblasts. These fibroblasts secrete cytokine signals that control the outer layer, determining properties such as the presence or absence of hair. Fibroblasts all around the body are identical except for the specific epigenetic folding that dictates what type of skin will be formed in a given location. Based on whether the skin is distal or proximal, interior or exterior, posterior or anterior, a different set of epigenetic folds will determine the type of skin that forms.
It has been found that specific HOX genes delineate these anatomical boundaries during development. Just by looking at a cell's HOX gene expression, one can predict where the cell is located. Using ChIP-on-chip (chromatin immunoprecipitation microarrays), diametric chromatin domains have been found among these HOX genes. In the figure below, we can see a clear boundary between the chromatin domains of a cell type located proximally and another located distally. Not only is this boundary precise, but it is maintained across trillions of skin cells.
HOTAIR or HOX transcript antisense intergenic RNA has been investigated as possible RNA regulator that keeps these boundary between the diametric domains in chromatin. When HOTAIR was knocked out in the HOXC locus, it was hypothesized that the chromatin domains might slip through into one another. While it was found that this HOTAIR did not directly affect the epigenetic boundary, researchers did find evidence of RNA based genomic cross talk. The HOTAIR gene affected a different locus called HOXD.
Through a process of ncRNA-dependent Polycomb repression, the HOTAIR transcript can control epigenetic regulation. Polycomb is a protein complex that puts stop marks on the tails of histones so that they can cause specific folds in the genetic material. On their own, histones are undirected, so some mechanism is necessary to dictate how they attach to the genome. This process of discovery has led to great interest in the power of long intergenic non-coding RNAs to affect epigenetic regulation.
12.06: Integergenic Non-coding RNAs- missing lincs in Stem Cancer cells
example: XIST
XIST was one of the first lincRNAs to be characterized. It is directly involved in deactivation of one of the female X chromosomes during embryonic development. It has been described as having the ability to ”crumple an entire chromosome”. This is important because deactivation prevents lethal overexpression of genes found on the X chromosome.
RNA is important for getting the Polycomb complex to the chromosome. ncRNAs can activate downstream genes in cis, and do the opposite in trans; Xist works the same way.
How would we find ncRNAs? We have about 20-30 examples of ncRNAs with evidence of importance, but more are out there. Chromatin state maps (from ENCODE, ChIP-seq) can be used to find transcriptional units that do not overlap proteins. We can walk along the map and look for genes (looking by eye at the chromatin map to find ncRNAs). Nearly 90% of the time such a signature is found, RNA will be transcribed from it. We can validate this with a Northern blot.
When looking at a chromatin map to find ncRNAs, we are essentially looking through the map with a window of a given size and seeing how much signal vs. noise we are getting, compared to what we might expect under a random-chance hypothesis. As both large and small windows have benefits, both should be used on each map section. Larger windows encapsulate more information; smaller windows are more sensitive.
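A hypothetical sketch of such a windowed scan, using a Poisson background as the random-chance null; the signal track, window sizes, background rate, and significance cutoff are all illustrative assumptions, and the naive tail computation is only suitable for modest expected counts.

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), used as a simple random-chance null."""
    term, cdf = math.exp(-lam), math.exp(-lam)
    for i in range(1, k):
        term *= lam / i
        cdf += term
    return max(0.0, 1.0 - cdf)

def scan_windows(signal, window, background_rate):
    """Slide a fixed-size window over per-base read counts and report windows whose
    total signal is unlikely under the background Poisson rate."""
    hits = []
    for start in range(0, len(signal) - window + 1, window // 2):   # half-overlapping windows
        total = sum(signal[start:start + window])
        pval = poisson_sf(total, background_rate * window)
        if pval < 1e-3:
            hits.append((start, start + window, total, pval))
    return hits

# Illustrative per-base chromatin signal with an enriched island in the middle
signal = [1] * 500 + [8] * 200 + [1] * 500
print(scan_windows(signal, window=100, background_rate=1.5))   # small, sensitive windows
print(scan_windows(signal, window=400, background_rate=1.5))   # large, broader windows
```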
After finding intergenic regions, we find conserved regions.
We check whether the new regions are under selective pressure; there are fewer mutations in conserved regions. If a nucleotide never has a mutation between species, it is highly conserved.
lincRNAs are more conserved than introns, but less conserved than protein-coding exons, possibly due to non-conserved sequences in the loop regions of lincRNAs.
Finding what lincRNAs’ functions are: “guilt by association”. We can find proteins whose expression correlates with a particular lincRNA; such lincRNAs are probably tied to a particular pathway. In this way, we acquire a multidimensional barcode for each lincRNA (what it is and is not related to). We can cluster lincRNA signatures and identify common patterns; many have to do with cell cycle genes. (This approach works 60-70% of the time.)
As most lincRNAs are over 3000 bases, many contain sequences for 100-amino-acid open reading frames simply by chance. This results in many false negatives during lincRNA detection.
It has been found that many lincRNAs tend to neighbor developmental regions of the genome. They also tend to be lowly expressed compared to protein coding genes.
Example: p53
Independent validation: we use animal models, where one is wild-type for p53 and one is a knockout. We induce p53, then ask if the lincRNAs turn on. 32 of 39 lincRNAs found associated with p53 were temporally induced upon turning on p53.
One RNA in particular sat next to a protein-coding gene in the p53 pathway. We tried to figure out if p53 bound to the promoter and turned it on. To do this, we cloned the promoter of the lincRNA and asked: does p53 turn it on? We IPed the p53 protein to see if it associated with the lincRNA’s promoter. It turned out that the lincRNA is directly related to p53 - p53 turns it on. p53 also turns genes off - certain lincRNAs act as repressors.
From this example (and others), we start to see that RNAs usually have a protein partner
RNA can bring a myriad of different proteins together, allowing the cell lots of diversity. In this way it is similar to phosphorylation. RNAs bind to important chromatin complexes, and are required for reprogramming skin cells into stem cells.
Large-scale analyses in the 1990s using expressed sequence tags estimated a total of 35,000 - 100,000 genes encoded by the human genome. However, the complete sequencing of the human genome surprisingly revealed that the number of protein-coding genes is likely to be ∼20,000 – 25,000 [12]. While this represents <2% of the total genome sequence, whole genome and transcriptome sequencing and tiling-resolution genomic microarrays suggest that over 90% of the genome is actively transcribed [8], largely as non-protein-coding RNAs (ncRNAs). Although initial speculation was that these are non-functional transcriptional noise inherent in the transcription machinery, there has been rising evidence suggesting the important roles these ncRNAs play in cellular processes and in the manifestation/progression of diseases. Hence these findings challenged the canonical view of RNA serving only as the intermediate between DNA and protein.
ncRNA classifications
The increasing focus on ncRNA in recent years along with the advancements in sequencing technologies (i.e. Roche 454, Illumina/Solexa, and SOLiD; refer to [16] for a more details on these methods) has led to an explosion in the identification of diverse groups of ncRNAs. Although there has not yet been a consistent nomenclature, ncRNAs can be grouped into two major classes based on transcript size: small ncRNAs (<200 nucleotides) and long ncRNAs (lncRNAs) (≥200 nucleotides) (Table 13.1 ) [6, 8, 13, 20, 24]. Among these, the role of small ncRNAs microRNA (miRNA) and small interfering RNA (siRNA) in RNA silencing have been the most well-documented in recent history. As such, much of the discussion in the remainder of this chapter will be focused on the roles of these small ncRNAs. But first, we will briefly describe the other diverse set of ncRNAs.
Table 13.1: ncRNA classifications (based on [6, 8, 13, 20, 24])
Housekeeping RNAs
• Ribosomal RNA (rRNA) – translation
• Transfer RNA (tRNA) – translation
• Small nucleolar RNA (snoRNA, ∼60-220 nt) – rRNA modification
• Small Cajal body-specific RNA (scaRNA) – spliceosome modification
• Small nuclear RNA (snRNA, ∼60-300 nt) – RNA splicing
• Guide RNA (gRNA) – RNA editing
Small ncRNAs (<200 nt)
• MicroRNA (miRNA, ∼19-24 nt) – RNA silencing
• Small interfering RNA (siRNA, ∼21-22 nt) – RNA silencing
• Piwi-interacting RNA (piRNA, ∼26-31 nt) – transposon silencing, epigenetic regulation
• Tiny transcription initiation RNA (tiRNA, ∼17-18 nt) – transcriptional regulation?
• Promoter-associated short RNA (PASR, ∼22-200 nt) – unknown
• Transcription start site antisense RNA (TSSa-RNA, ∼20-90 nt) – transcriptional maintenance?
• Termini-associated short RNA (TASR) – not clear
• Antisense termini-associated short RNA (aTASR) – not clear
• Retrotransposon-derived RNA (RE-RNA) – not clear
• 3’UTR-derived RNA (uaRNA) – not clear
• x-ncRNA – not clear
• Small NF90-associated RNA (snaR) – not clear
• Unusually small RNA (usRNA) – not clear
• Vault RNA (vtRNA) – not clear
• Human Y RNA (hY RNA) – not clear
Long ncRNAs (≥200 nt)
• Large intergenic ncRNA (lincRNA) – epigenetic regulation
• Transcribed ultraconserved region (T-UCR) – miRNA regulation?
• Pseudogene – miRNA regulation?
• Promoter upstream transcript (PROMPT) – transcriptional activation?
• Telomeric repeat-containing RNA (TERRA) – telomeric heterochromatin maintenance
• GAA-repeat containing RNA (GRC-RNA) – not clear
• Enhancer RNA (eRNA) – not clear
• Long intronic ncRNA – not clear
• Antisense RNA (aRNA) – not clear
• Promoter-associated long RNA (PALR) – not clear
• Stable excised intron RNA – not clear
• Long stress-induced non-coding transcript (LSINCT) – not clear
Small ncRNA
For the past decades, there have been a number of well-studied small non-coding RNA species. All of these species are involved either in RNA translation (transfer RNA (tRNA)) or in RNA modification and processing (small nucleolar RNA (snoRNA) and small nuclear RNA (snRNA)). In particular, snoRNAs (grouped into two broad classes: C/D box and H/ACA box, involved in methylation and pseudouridylation, respectively) are localized in the nucleolus and participate in rRNA processing and modification. Another group of small ncRNAs are snRNAs, which interact with other proteins and with each other to form spliceosomes for RNA splicing. Remarkably, these snRNAs are themselves modified (methylation and pseudouridylation) by another set of small ncRNAs - small Cajal body-specific RNAs (scaRNAs), which are similar to snoRNAs (in sequence, structure, and function) and are localized in the Cajal body in the nucleus. In yet another class of small ncRNAs, guide RNAs (gRNAs) have been shown, predominantly in trypanosomatids, to be involved in RNA editing. Many other classes have also been recently proposed (see Table 13.1), although their functional roles remain to be determined. Perhaps the most widely studied ncRNAs in recent years are microRNAs (miRNAs), involved in gene silencing and responsible for the regulation of more than 60% of protein-coding genes [6]. Given the extensive work that has been focused on RNAi and the wide range of RNAi-based applications that have emerged in the past years, the next section (RNA Interference) will be entirely devoted to this topic.
Long ncRNA
Long ncRNAs (lncRNAs) make up the largest portion of ncRNAs [6]. However, the emphasis placed on the study of long ncRNAs has only been realized in recent years. As a result, the terminology for this family of ncRNAs is still in its infancy and oftentimes inconsistent in the literature. This is also in part complicated by cases where some lncRNAs can also serve as precursor transcripts for the generation of short RNAs. In light of these confusions, as discussed in the previous chapter, lncRNAs have been arbitrarily defined as ncRNAs with size greater than 200 nt (based on the cut-off in RNA purification protocols) and can be broadly categorized into: sense, antisense, bidirectional, intronic, or intergenic [19]. For example, one particular class of lncRNA called long intergenic ncRNA (lincRNA) is found exclusively in intergenic regions and possesses chromatin modifications indicative of active transcription (e.g. H3K4me3 at the transcriptional start site and H3K36me3 throughout the gene region) [8].
Despite the recent rise of interest in lncRNAs, the discovery of the first lncRNAs (XIST and H19), based on searching cDNA libraries, dates back to the 1980s and 1990s, before the discovery of miRNAs [3, 4]. Later studies demonstrated the association of lncRNAs with Polycomb group proteins, suggesting potential roles of lncRNAs in epigenetic gene silencing/activation [19]. Another lncRNA, HOX Antisense Intergenic RNA (HOTAIR), was recently found to be highly upregulated in metastatic breast tumors [11]. The association of HOTAIR with the Polycomb complex again supports a potential unified role of lncRNAs in chromatin remodeling/epigenetic regulation (in either a cis-regulatory (XIST and H19) or trans-regulatory (e.g. HOTAIR) fashion) and in disease etiology.
Recent studies have also identified HULC and the pseudogene (a transcript resembling a real gene but containing mutations that prevent its translation into a functional protein) PTENP1, which may function as decoys that bind miRNAs and reduce the overall effectiveness of those miRNAs [18, 25]. Other potential roles of lncRNAs remain to be explored. Nevertheless, it is becoming clear that lncRNAs are less likely to be the result of transcriptional noise, but may rather serve critical roles in the control of cellular processes.
RNA interference has been one of the most significant and exciting discoveries in recent history. The impact of this discovery is enormous with applications ranging from knockdown and loss-of-function studies to the generation of better animal models with conditional knockdown of desired gene(s) to large-scale RNAi-based screens to aid drug discovery.
History of discovery
The discovery of the gene silencing phenomenon dates back to the early 1990s, with Napoli and Jorgensen demonstrating the down-regulation of chalcone synthase following introduction of an exogenous transgene in plants [17]. Similar suppression was subsequently observed in other systems [10, 22]. In another set of unrelated work at the time, Lee et al. identified in a genetic screen that endogenous lin-4 expressed a non-protein-coding product that is complementary to the lin-14 gene and controlled the timing of larval development (from the first to second larval stage) in C. elegans [15]. We now know this as the first miRNA to be discovered. In 2000, another miRNA, let-7, was discovered in the same organism and was found to be involved in promoting the late-larval to adult transition [21]. The seminal work by Mello and Fire in 1998 (for which they were awarded the Nobel Prize in 2006) demonstrated that the introduction of exogenous dsRNA in C. elegans specifically silenced genes via RNA interference, explaining the prior suppression phenomenon observed in plants [7]. Subsequent studies found the conversion of dsRNA into siRNA in the RNAi pathway. In 2001, the term miRNA and the link between miRNA and RNAi were described in three papers in Science [23]. With this, we have come to realize that the gene regulatory machinery is composed predominantly of two classes of small RNAs, with miRNA involved in the regulation of endogenous genes and siRNA involved in defense in response to viral nucleic acids, transposons, and transgenes [5]. Later work revealed the downstream effectors: Dicer (for excision of precursor species) and Argonaute proteins (part of the RNA-induced silencing complex that performs the actual silencing), completing our current understanding of the RNA silencing pathways. The details of the mechanism and the differences among species are further discussed below.
Biogenesis pathways
There is a common theme for both siRNA-mediated and miRNA-mediated silencing. In the biogenesis of both siRNA and miRNA, the double-stranded precursors are cleaved by an RNase into short ∼22 nt fragments. One of the strands (the guide strand) is loaded into an Argonaute protein, a central component of the larger ribonucleoprotein complex RISC that facilitates target RNA recognition and silencing. The mechanism of silencing is either cleavage of the target mRNA or translational repression.
Aside from this common theme, the proteins involved in these processes differ among species, and there exist additional steps in miRNA processing prior to its maturation and incorporation into RISC (Figure 13.1). For the biogenesis of siRNA, the precursors are dsRNAs, oftentimes from exogenous sources such as viruses or transposons. However, recent studies have also found endogenous siRNAs [9]. Regardless of the source, these dsRNAs are processed by the RNase III endonuclease Dicer into ∼22 nt siRNAs. This RNase III-catalyzed cleavage leaves the characteristic 5’ phosphates and 2 nt 3’ overhangs [2]. It is worth noting that different species have evolved with different numbers of paralogs. This becomes important because, as discussed later, the miRNA biogenesis pathway also utilizes Dicer for the processing of miRNA precursors (more specifically pre-miRNAs). For species such as D. melanogaster, there are two distinct Dicer proteins, and as a result there is typically preferential processing of the precursors (e.g. Dicer-1 for miRNA cleavage and Dicer-2 for siRNA cleavage) [5]. In contrast, mammals and nematodes have only a single Dicer protein, and as such both biogenesis pathways converge on the same processing step [5]. In subsequent steps of the siRNA biogenesis pathway, one of the strands in the siRNA duplex is loaded into RISC to silence target RNAs (Figure 13.1C).
In the miRNA biogenesis pathway, the majority of the precursors are Pol II transcripts of intronic regions, some of which encode multiple miRNAs in clusters. These precursors, in the form of a stem-loop structure, are named pri-miRNAs. The pri-miRNAs are first cleaved in the nucleus by an RNase III endonuclease (Drosha in animals and Dcl1 in plants) into ∼60-70 nt stem-loop intermediates, termed pre-miRNAs [2]. In animals, the pre-miRNA is then exported into the cytoplasm by Exportin-5. This is followed by the cleavage of the pre-miRNA intermediate by Dicer to remove the stem loop. One of the strands in the resulting mature miRNA duplex is loaded into RISC, similar to that described for siRNA biogenesis (Figure 13.1B). Interestingly, in plants, the pri-miRNA is processed into mature miRNA through two cleavages by the same enzyme, Dcl1, in the nucleus before export into the cytoplasm for loading (Figure 13.1A).
Functions and silencing mechanism
The classical view of miRNA function, based on the early discoveries of miRNA, was analogous to a binary switch whereby a miRNA represses translation of a few key mRNA targets to initiate a developmental transition. However, subsequent studies have greatly broadened this view. In plants, most miRNAs bind to the coding region of the mRNA with near-perfect complementarity. In contrast, animal miRNAs bind with partial complementarity (except for a seed region, residues 2-8) to the 3' UTR regions of mRNAs. As such, a single miRNA in animals potentially has hundreds of targets rather than just a few [1]. In addition, in mammals, only a small portion of the predicted targets are involved in development, with the rest predicted to cover a wide range of molecular and biological processes [2]. Lastly, miRNA silencing acts through both translational repression and mRNA cleavage (and also destabilization, as discussed below), as shown, for example, by Bartel and coworkers for the miR-196-directed cleavage of HOXB8 [26]. Taken together, the modern view of miRNA function is that miRNAs dampen the expression of many mRNA targets to optimize expression levels, reinforce cell identity, and sharpen transitions.
The mechanism by which miRNA mediates the silencing of target mRNAs is still an area of active research. As previously discussed, RNA silencing can take the form of cleavage, destabilization (leading to subsequent degradation of the mRNA), or translational repression. In plants, it has been found that the predominant mode of RNA silencing is Argonaute-catalyzed cleavage. However, the contribution of these different modes of silencing has been less clear in animals. Recent global analyses from the Bartel group, in collaboration with the Gygi and the Ingolia and Weissman groups, shed light on this question. In a 2008 study, the Bartel and Gygi groups examined the global changes in protein levels using mass spectrometry following miRNA introduction or deletion [1]. Their results revealed the repression of hundreds of genes by individual miRNAs; more importantly, mRNA destabilization accounts for the majority of the highly repressed targets (Figure 13.2).
This is further supported by a subsequent study using both RNA-seq and a novel ribosome-profiling technique, first demonstrated by Ingolia and Weissman in 2009, that enables the interrogation of global translation activity with sub-codon resolution [14]. The results showed that destabilization of target mRNAs is the predominant mechanism through which miRNA reduces protein output.
Bibliography
[1] Daehyun Baek, Judit Villén, Chanseok Shin, Fernando D Camargo, Steven P Gygi, and David P Bartel. The impact of microRNAs on protein output. Nature, 455(7209):64–71, September 2008.
[2] David P Bartel. MicroRNAs: genomics, biogenesis, mechanism, and function. Cell, 116(2):281–97, January 2004.
[3] M S Bartolomei, S Zemel, and S M Tilghman. Parental imprinting of the mouse H19 gene. Nature, 351(6322):153–5, May 1991.
[4] C J Brown, A Ballabio, J L Rupert, R G Lafreniere, M Grompe, R Tonlorenzi, and H F Willard. A gene from the region of the human X inactivation centre is expressed exclusively from the inactive X chromosome. Nature, 349(6304):38–44, January 1991.
[5] Richard W Carthew and Erik J Sontheimer. Origins and Mechanisms of miRNAs and siRNAs. Cell, 136(4):642–55, February 2009.
[6] Manel Esteller. Non-coding RNAs in human disease. Nature Reviews Genetics, 12(12):861–874, November 2011.
[7] A Fire, S Xu, M K Montgomery, S A Kostas, S E Driver, and C C Mello. Potent and specific genetic interference by double-stranded RNA in Caenorhabditis elegans. Nature, 391(6669):806–11, February 1998.
[8] Ewan A Gibb, Carolyn J Brown, and Wan L Lam. The functional role of long non-coding RNA in human carcinomas. Molecular cancer, 10(1):38, January 2011.
[9] Daniel E Golden, Vincent R Gerbasi, and Erik J Sontheimer. An inside job for siRNAs. Molecular cell, 31(3):309–12, August 2008.
[10] S Guo and K J Kemphues. par-1, a gene required for establishing polarity in C. elegans embryos, encodes a putative Ser/Thr kinase that is asymmetrically distributed. Cell, 81(4):611–20, May 1995.
[11] Rajnish A Gupta, Nilay Shah, Kevin C Wang, Jeewon Kim, Hugo M Horlings, David J Wong, Miao-Chih Tsai, Tiffany Hung, Pedram Argani, John L Rinn, Yulei Wang, Pius Brzoska, Benjamin Kong, Rui Li, Robert B West, Marc J van de Vijver, Saraswati Sukumar, and Howard Y Chang. Long non-coding RNA HOTAIR reprograms chromatin state to promote cancer metastasis. Nature, 464(7291):1071–6, April 2010.
[12] Masahira Hattori. Finishing the euchromatic sequence of the human genome. Nature, 431(7011):931–45, October 2004.
[13] Christopher L Holley and Veli K Topkara. An introduction to small non-coding RNAs: miRNA and snoRNA. Cardiovascular Drugs and Therapy, 25(2):151–159, 2011.
[14] Nicholas T Ingolia, Sina Ghaemmaghami, John R S Newman, and Jonathan S Weissman. Genome-wide analysis in vivo of translation with nucleotide resolution using ribosome profiling. Science (New York, N.Y.), 324(5924):218–23, April 2009.
[15] R C Lee, R L Feinbaum, and V Ambros. The C. elegans heterochronic gene lin-4 encodes small RNAs with antisense complementarity to lin-14. Cell, 75(5):843–54, December 1993.
[16] Michael L Metzker. Sequencing technologies - the next generation. Nature Reviews Genetics, 11(1):31–46, January 2010.
[17] C. Napoli, C. Lemieux, and R. Jorgensen. Introduction of a Chimeric Chalcone Synthase Gene into Petunia Results in Reversible Co-Suppression of Homologous Genes in trans. The Plant cell, 2(4):279–289, April 1990.
[18] Laura Poliseno, Leonardo Salmena, Jiangwen Zhang, Brett Carver, William J Haveman, and Pier Paolo Pandolfi. A coding-independent function of gene and pseudogene mRNAs regulates tumour biology. Nature, 465(7301):1033–8, June 2010.
[19] Chris P Ponting, Peter L Oliver, and Wolf Reik. Evolution and functions of long noncoding RNAs. Cell, 136(4):629–41, February 2009.
[20] J. R. Prensner and A. M. Chinnaiyan. The Emergence of lncRNAs in Cancer Biology. Cancer Discovery, 1(5):391–407, October 2011.
[21] B J Reinhart, F J Slack, M Basson, A E Pasquinelli, J C Bettinger, A E Rougvie, H R Horvitz, and G Ruvkun. The 21-nucleotide let-7 RNA regulates developmental timing in Caenorhabditis elegans. Nature, 403(6772):901–6, February 2000.
[22] N Romano and G Macino. Quelling: transient inactivation of gene expression in Neurospora crassa by transformation with homologous sequences. Molecular microbiology, 6(22):3343–53, November 1992.
[23] G Ruvkun. Molecular biology. Glimpses of a tiny RNA world. Science, 294(5543):797–9, October 2001.
[24] Ryan J Taft, Ken C Pang, Timothy R Mercer, Marcel Dinger, and John S Mattick. Non-coding RNAs: regulators of disease. The Journal of pathology, 220(2):126–39, January 2010.
[25] Jiayi Wang, Xiangfan Liu, Huacheng Wu, Peihua Ni, Zhidong Gu, Yongxia Qiao, Ning Chen, Fenyong Sun, and Qishi Fan. CREB up-regulates long non-coding RNA, HULC expression through interaction with microRNA-372 in liver cancer. Nucleic acids research, 38(16):5366–83, September 2010.
[26] Soraya Yekta, I-Hung Shih, and David P Bartel. MicroRNA-directed cleavage of HOXB8 mRNA. Science, 304(5670):594–6, April 2004.
The purpose of mRNA sequencing (RNA-seq) is to measure the levels of mRNA transcripts for every gene in a given cell. mRNA sequencing is a daunting task, requiring approximately 40 million aligned reads to accurately measure mRNA transcript levels. This did not become practical until 2009, when next-generation sequencing technologies became more advanced and efficient.
In this chapter, we will explore the different techniques for using mRNA sequencing data to aid in gene and transcript discovery as well as in expression analysis.
14.02: Expression Microarrays
Prior to the development of mRNA sequencing technology, mRNA levels were measured using expression microarrays. These microarrays function by attaching DNA probes to a slide and measuring the levels of transcripts that undergo complementary hybridization with the DNA, a process that can analyze expression on a gene-by-gene basis (Figure 1).
However, this technology has several limitations: it cannot distinguish mRNA isoforms, it cannot analyze expression at the sequence (digital) level, it can only measure known transcripts, and the expression measurements become less reliable for highly saturated transcript levels.
14.03: The Biology of mRNA Sequencing
The first step in mRNA sequencing is to lyse the cells of interest. This creates a mass of proteins, nucleotides, and other molecules which are then filtered through so that only RNA (or specifically mRNA) molecules remain. The resulting transcripts are then fragmented into reads 200-1000 base pairs long and undergo a reverse transcription reaction to build a strand-specific DNA library. Finally, both ends of these DNA fragments are sequenced. After establishing these sequenced reads, the computational part of RNA-Seq can be divided into three parts: read mapping, reconstruction, and quantification.
14.04: Read Mapping - Spaced Seed Alignment
The idea behind read mapping is to align the sequenced reads to a reference genome. The sequence alignment algorithms discussed in earlier chapters will not work in this case due to the scale of the problem: the goal is to align millions of reads to the genome, which would take too long if each read were aligned individually. Instead, we will introduce the spaced seed alignment approach. This process begins by using the reference genome to create a hash table of 8-mers, which do not have to be contiguous. The positions of these stored spaced seeds are recorded in the hash table. Using these spaced 8-mers, each read is then compared with each candidate position in the reference genome and scored based on the number of base pair matches (Figure 2).
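To make the indexing step concrete, the following is a minimal sketch (in Python, with a made-up reference sequence and an illustrative mask) of how a spaced-seed hash table might be built and queried; real read mappers use heavily optimized variants of this idea.

```python
# A minimal spaced-seed index sketch. The mask keeps 8 "care" positions
# spread over a 10 bp window; positions marked 0 are ignored when hashing.
from collections import defaultdict

def apply_seed(window, mask):
    """Keep only the bases at positions where the mask is 1."""
    return "".join(c for c, m in zip(window, mask) if m)

def build_index(reference, mask):
    """Hash every masked window of the reference to its start positions."""
    k = len(mask)
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[apply_seed(reference[i:i + k], mask)].append(i)
    return index

def candidate_positions(read, index, mask):
    """Candidate alignment positions for the first window of a read."""
    return index[apply_seed(read[:len(mask)], mask)]

reference = "ACGTACGTGGACCTAGGACGTAA"
mask = (1, 1, 1, 0, 1, 1, 0, 1, 1, 1)   # spaced 8-mer over a 10 bp window
index = build_index(reference, mask)
# Candidate positions are then verified and scored by a full alignment.
print(candidate_positions("ACGTACGTGG", index, mask))
```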
More accurately, for each position, it is possible to calculate the score using the equation $q_{MS}=-10 \log _{10}(1-P(i \mid G, q))$, where $P(i \mid G, q)$ represents the probability that the read q is mapped to position i of reference genome G. More details on deriving this score can be found in Box 1 (Figure 14.3).
It is possible to adjust the parameters of this method in order to trade off the sensitivity, speed, and memory of the algorithm. Using smaller k-mer seeds allows for less precise base pair matching (greater sensitivity), but requires more matches to be attempted. Smaller seeds take up less memory, while larger seeds run faster.
There exist methods other than the one described above to perform this alignment. The most popular of these is the Burrows-Wheeler approach. The Burrows-Wheeler transform is an even more efficient algorithm for mapping reads and will be discussed in a later chapter. It is able to speed up the process of finding matches in the large genome by reordering the genome in a very specific permutation. This allows reads to be matched in time that depends only on the length of the read and not on the genome. As better sequencing technology allows for larger read lengths, more algorithms will need to be developed to handle the extra processing.
Read mapping for RNA-seq is more complex than for ChIP-seq, a similar technology. This is because the read mapper needs to worry about small exons interspersed between large introns and must be able to find both sides of an exon. This complexity can be overcome by using the above-mentioned spaced seed matching technique and detecting when two k-mers from the same read are separated by a long distance. This signals a possible intron, which can then be resolved by extending the k-mers to fill in the gaps (SNO methods). Another method is to base the alignment on contiguous reads, which are further fragmented into 20-30 bp regions. These regions are remapped, and the positions with two or more different alignments are marked as splice junctions. Exon-first aligners are faster than the previous methods, but come at a cost: they fail to differentiate pseudogenes, prespliced genes, and transposed genes.
Reconstruction of reads is a largely statistical problem. The goal is to determine a score for each fixed-sized window in the genome. This score represents the probability of seeing the observed number of reads given the window size. In other words, is the number of reads in a particular window unlikely given the genome? The expected number of reads per window is derived from a uniform distribution based on the total number of reads (Figure 3). This score is modeled by a Poisson distribution.
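As a rough sketch of this scoring idea, the snippet below computes the Poisson tail probability of observing at least a given number of reads in a window, with the expected count derived from a uniform null model; the numbers are invented and SciPy is assumed to be available.

```python
# Poisson window scoring under a uniform null model.
from scipy.stats import poisson

def window_pvalue(reads_in_window, total_reads, window_size, genome_size):
    """P-value of seeing at least the observed read count in a window."""
    expected = total_reads * window_size / genome_size   # uniform expectation
    # Survival function: P(X >= k) = 1 - P(X <= k - 1)
    return poisson.sf(reads_in_window - 1, expected)

# Example: 25 reads in a 1 kb window, 1 million reads over a 150 Mb genome
print(window_pvalue(25, 1_000_000, 1_000, 150_000_000))
```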
Figure 14.3: Box 1: How Do We Calculate qMS?
However, this score must account for the problem of multiple hypothesis testing, since there are approximately 150 million bases to test. One option for dealing with this is the Bonferroni correction, where the corrected p-value is n times the nominal p-value. This method leads to low sensitivity, due to its very conservative nature. Another option is to permute the reads observed in the genome, and find the maximum number of reads seen on a single base. This allows for a max count distribution model, but the process is very slow. The scan distribution speeds up this process by computing a closed form for the max count distribution that accounts for the dependency of overlapping windows (Figure 4). The probability of observing k reads in a window of size w in a genome of size L, given a total of N reads, can be approximated in closed form by the scan statistic.
Choosing a window size is also an important decision, as genes exist at different expression levels and span different orders of magnitude. Small windows are better at detecting punctate regions, while larger windows can detect longer spans of moderate enrichment. In most cases, windows of different sizes are used to pick up signals of varying size.
Transcript reconstruction can be seen as a segmentation problem, with several challenges. As mentioned above, genes are expressed at different levels, over several orders of magnitude. In addition, the reads used for reconstruction are obtained from both mature and immature mRNA, the latter still containing introns. Finally, many genes have multiple isoforms, and the short nature of reads makes it difficult to differentiate between these different transcripts. A computational tool called Scripture uses a priori knowledge of fragment connectivity to detect transcripts.
Alternative isoforms can only be detected via exon-junction-spanning reads, which cover the boundary between two exons. Longer reads have a greater chance of spanning these junctions (Figure 5). Scripture works by modeling the reads using a graph structure, where bases are connected to neighboring bases as well as to splice neighbors. This process differs from the string graph technique because it focuses on the whole genome and does not map overlapping sequences directly. When sliding the window, Scripture can jump across splice junctions yet still examine alternative isoforms. From this oriented connectivity graph, the program identifies segments across the graph and looks for significant segments (Box 2).
Direct transcript assembly is another method of reconstruction (as opposed to genome-guided methods like Scripture). Transcript assembly methods are able to reconstruct transcripts from organisms without a reference sequence, while genome-guided approaches are ideal for annotating high-quality genomes and expanding the catalog of expressed transcripts. Hybrid approaches are used for lower-quality assemblies or for transcriptomes that have undergone major rearrangements, such as those of cancer cells. Popular transcript assembly tools include Oases, Trans-ABySS, and Trinity. Another popular genome-guided tool is Cufflinks. Regardless of methodology or software type, any sequencing experiment that produces more genome coverage will yield better transcript reconstruction.
Figure 14.7: Box 2: The Scripture Method
14.06: Quantification
The goal of the quantification step is to score regions in the genome based on the number of reads. Recall that each transcript is fragmented into many smaller reads. Therefore, it is insufficient to simply count the number of reads per region, as this value would be influenced by (1) expression rates and (2) length of transcript. The higher the expression rate of a transcript the more reads we will have for it. Similarly, the longer a transcript is, the more reads we will have. This issue can be solved by normalizing the number of reads by the length of the transcript and the total number of reads in the experiment. This provides the RPKM value, or reads per kilobase of exonic sequence per million mapped reads.
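The RPKM normalization itself is a one-line computation; the following is a minimal sketch with invented numbers.

```python
# Reads per kilobase of exonic sequence per million mapped reads (RPKM).
def rpkm(reads_in_transcript, exonic_length_bp, total_mapped_reads):
    kilobases = exonic_length_bp / 1_000
    millions_of_reads = total_mapped_reads / 1_000_000
    return reads_in_transcript / (kilobases * millions_of_reads)

# Example: 500 reads on a 2 kb transcript out of 20 million mapped reads
print(rpkm(500, 2_000, 20_000_000))   # 12.5 RPKM
```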
This method is robust for genes with only one isoform. However, there is the possibility of overlap between conflicting variants of a transcript. When multiple transcript variants are involved, this problem is known as differential expression analysis. There are a few different methods for handling this complexity. The exon intersection model scores only the constituent exons. The exon union model simply scores based on a merged transcript, but can easily be biased based on the relative ratios of each isoform. A more thorough model is the transcript expression model, which assigns unique reads to different isoforms.
In this chapter, we consider the problem of discerning similarities or patterns within large datasets. Finding structure in such data sets allows us to draw conclusions about the process as well as the structure underlying the observations. We approach this problem through the application of clustering techniques. The following chapter will focus on classification techniques.
Clustering vs Classification
One important distinction to be made early on is the difference between classification and clustering. Classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations or instances whose category membership is known. The training set is used to learn rules that will accurately assign labels to new observations. The difficulty is to find the most important features (feature selection).
In the terminology of machine learning, classification is considered an instance of supervised learning, i.e. learning where a training set of correctly-identified observations is available. The corresponding unsupervised procedure is known as clustering or cluster analysis, and involves grouping data into categories based on some measure of inherent similarity, such as the distance between instances, considered as vectors in a multi-dimensional vector space. The difficulty is to identify the structure of the data. Figure 15.1 illustrates the difference between clustering and classification.
Applications
Clustering was originally developed within the field of artificial intelligence. Being able to group similar objects, in full generality, is a desirable attribute for an artificial intelligence, and one that humans exercise routinely throughout life. As the development of clustering algorithms proceeded apace, it quickly became clear that there was no intrinsic barrier to applying these algorithms to larger and larger datasets. This realization led to the rapid introduction of clustering to computational biology and other fields dealing with large datasets.
Clustering has many applications to computational biology. For example, let's consider expression profiles of many genes taken at various developmental stages. Clustering may show that certain sets of genes line up (i.e. show the same expression levels) at various stages. This may indicate that these genes share common expression or regulation, and we can use this to infer similar function. Furthermore, if we find an uncharacterized gene in such a set of genes, we can reason that the uncharacterized gene also has a similar function through guilt by association.
Chromatin marks and regulatory motifs can be used to predict logical relationships between regulators and target genes in a similar manner. This sort of analysis enables the construction of models that allow us to predict gene expression. These models can be used to modify the regulatory properties of a particular gene, predict how a disease state arose, or aid in targeting genes to particular organs based on regulatory circuits in the cells of the relevant organ.
Computational biology deals with increasingly large and open-access datasets. One such example is the ENCODE project [2]. Launched in 2003, the goal of ENCODE is to build a comprehensive list of functional elements in the human genome, including elements that act at the protein and RNA levels, and regulatory elements that control the cells and circumstances in which a gene is active. ENCODE data are now freely and immediately available for the entire human genome: http://genome.ucsc.edu/ENCODE/. Using all of this data, it is possible to make functional predictions about genes through the use of clustering.
The most intuitive way to investigate a certain phenotype is to measure the expression levels of functional proteins present at a given time in the cell. However, measuring the concentration of proteins can be difficult, due to their varying locations, modifications, and contexts in which they are found, as well as due to the incompleteness of the proteome. mRNA expression levels, however, are easier to measure, and are often a good approximation. By measuring the mRNA, we analyze regulation at the transcription level, without the added complications of translational regulation and active protein degradation, which simplifies the analysis at the cost of losing information. In this chapter, we will consider two techniques for generating gene expression data: microarrays and RNA-seq.
Microarrays
Microarrays allow the analysis of the expression levels of thousands of preselected genes in one experiment. The basic principle behind microarrays is the hybridization of complementary DNA fragments. To begin, short segments of DNA, known as probes, are attached to a solid surface, commonly known as a gene chip. Then, the RNA population of interest, which has been taken from a cell, is reverse transcribed to cDNA (complementary DNA) via reverse transcriptase, which synthesizes DNA from RNA using the poly-A tail as a primer. For intergenic sequences which have no poly-A tail, a standard primer can be ligated to the ends of the mRNA. The resulting DNA has more complementarity to the DNA on the slide than the RNA. The cDNA is then washed over the chip, and the resulting hybridization triggers the probes to fluoresce. This can be detected to determine the relative abundance of the mRNA in the target, as illustrated in figure 15.2.
Two basic types of microarrays are currently used. Affymetrix gene chips have one spot for every gene and have longer probes on the order of 100s of nucleotides. On the other hand, spotted oligonucleotide arrays tile genes and have shorter probes around the tens of bases.
There are numerous sources of error in the current methods and future methods seek to remove steps in the process. For instance, reverse transcriptase may introduce mismatches, which weaken interaction with the correct probe or cause cross hybridization, or binding to multiple probes. One solution to this has been to use multiple probes per gene, as cross hybridization will be different for each gene. Still, reverse transcription is necessary due to the secondary structure of RNA. The structural stability of DNA makes it less probable to bend and not hybridize to the probe. The next generation of technologies, such as RNA-Seq, sequences the RNA as it comes out of the cell, essentially probing every base of the genome.
RNA-seq
RNA-Seq, also known as whole transcriptome shotgun sequencing, attempts to perform the same function that DNA microarrays have been used to perform in the past, but with greater resolution. In particular, DNA microarrays utilize specific probes, and creation of these probes necessarily depends on prior knowledge of the genome and the size of the array being produced. RNA-seq removes these limitations by simply sequencing all of the cDNA produced in microarray experiments. This is made possible by next-generation sequencing technology. The technique has been rapidly adopted in studies of diseases like cancer [4]. The data from RNA-seq is then analyzed by clustering in the same manner as data from microarrays would normally be analyzed.
Gene Expression Matrices
Microarrays and RNA-seq are frequently used to compare the gene expression profiles of cells under various conditions. The amount of data generated from these experiments is enormous. Microarrays can analyze thousands of genes, and RNA-seq can, in principle, analyze every gene that is actively expressed. The expression level of each of those genes is measured across a variety of conditions, including time courses, stages of development, phenotypes, healthy vs. sick, and other factors.
To understand what the heatmap of a gene expression matrix (Figure 15.4) conveys, we have to first understand what the expression data matrix tells us. Using microarrays and RNA-seq, we can obtain gene expression levels in quantitative form. If we have multiple experiments, we can construct a value matrix (Figure 15.5) in which each entry is the log ratio log(T/R), where T is the gene expression level in the test sample and R is the gene expression level in the reference sample.
The Expression Matrix removed due to copyright restrictions.
Figure 15.4: Transforming Figure 4 to a heatmap
If we visualize the matrix as a heatmap, then we obtain the following colored matrix:
These matrices can be clustered hierarchically showing the relation between pairs of genes, pairs of pairs, and so on, creating a dendrogram in which the rows and columns can be ordered using optimal leaf ordering algorithms.
Image in the public domain. This graph was generated using the program Cluster from Michael Eisen, which is available from rana.lbl.gov/EisenSoftware.htm, with data extracted from the StemBase database of gene expression data.
By revealing the hidden structure of a long segment of the genome, we gain insight into what a gene fragment does, and subsequently understand more about the root cause of an unknown disease.
This predictive and analytical power is increased by the ability to bicluster the data; that is, to cluster along both dimensions of the matrix. The matrix allows for the comparison of expression profiles of genes, as well as comparison of the similarity of different conditions such as diseases. A challenge, though, is the curse of dimensionality: as the dimensionality of the data increases, points become sparse and clusters become harder to distinguish. Sometimes, the data can be reduced to lower-dimensional spaces to find structure, using clustering to infer which points belong together based on proximity.
Interpreting the data can also be a challenge, since there may be other biological phenomena in play. For example, protein-coding exons have higher intensity, due to the fact that introns are rapidly degraded. At the same time, not all introns are junk, and there may be ambiguities due to alternative splicing. There are also cellular mechanisms that degrade aberrant transcripts through nonsense-mediated decay.
To analyze the gene expression data, it is common to perform clustering analysis. There are two types of clustering algorithms: partitioning and agglomerative. Partitional clustering divides objects into non- overlapping clusters so that each data object is in one subset. Alternatively, agglomerative clustering methods yield a set of nested clusters organized as a hierarchy representing structures from broader to finer levels of detail.
K-Means Clustering
The k-means algorithm clusters n objects based on their attributes into k partitions. This is an example of partitioning, where each point is assigned to exactly one cluster such that the sum of distances from each point to its correspondingly labeled center is minimized. The motivation underlying this process is to make the most compact clusters possible, usually in terms of a Euclidean distance metric.
The k-means algorithm, as illustrated in figure 15.8, is implemented as follows (a minimal code sketch is given after the steps below):
1. Assume a fixed number of clusters, k
2. Initialization: Randomly initialize the k means $\mu_k$ associated with the clusters and assign each data point $x_i$ to the nearest cluster, where the distance between $x_i$ and $\mu_k$ is given by $d_{i,k} = (x_i - \mu_k)^2$.
3. Iteration: Recalculate the centroid of the cluster given the points assigned to it: $\mu_{k}(n+1)=\sum_{x_{i} \in k} \frac{x_{i}}{\left|x^{k}\right|}$ where $|x^k|$ is the number of points with label k. Reassign data points to the k new centroids using the given distance metric. The new centers are effectively the average of the points assigned to each cluster.
4. Termination: Iterate until convergence or until a user-specified number of iterations has been reached. Note that the iteration may be trapped at some local optima.
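The sketch below is a minimal, illustrative implementation of these steps for one-dimensional points; it is not how production clustering libraries are written, but it mirrors the assignment and update steps directly.

```python
# A minimal k-means sketch (1-D points; the same logic extends to vectors
# with a Euclidean metric).
import random

def kmeans(points, k, n_iter=100):
    centers = random.sample(points, k)                 # random initialization
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center
        labels = [min(range(k), key=lambda j: (x - centers[j]) ** 2) for x in points]
        # Update step: each center moves to the mean of its assigned points
        for j in range(k):
            members = [x for x, l in zip(points, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

expression = [0.1, 0.3, 0.2, 5.1, 4.8, 5.3, 9.9, 10.2]
print(kmeans(expression, k=3))
```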
There are several methods for choosing k: simply looking at the data to identify potential clusters, or iteratively trying values of k while penalizing model complexity. We can always make better clusters by increasing k, but at some point we begin overfitting the data.
We can also think of k-means as trying to minimize a cost criterion associated with the size of each cluster, where the cost increases as the clusters get less compact. However, some points can be almost halfway between two centers, which does not fit well with the hard (binary) assignments of k-means clustering.
Fuzzy K-Means Clustering
In fuzzy clustering, each point has a probability of belonging to each cluster, rather than completely belonging to just one cluster. Fuzzy k-means specifically tries to deal with the problem where points are somewhat in between centers or otherwise ambiguous by replacing distance with probability, which of course could be some function of distance, such as having probability relative to the inverse of the distance. Fuzzy k-means uses a weighted centroid based on those probabilities. Processes of initialization, iteration, and termination are the same as the ones used in k-means. The resulting clusters are best analyzed as probabilistic distributions rather than a hard assignment of labels. One should realize that k-means is a special case of fuzzy k-means when the probability function used is simply 1 if the data point is closest to a centroid and 0 otherwise.
The fuzzy k-means algorithm is the following:
1. Assume a fixed number of clusters k
2. Initialization: Randomly initialize the k means μk associated with the clusters and compute the probability that each data point xi is a member of a given cluster k, P(point xi has label k|xi,k).
3. Iteration: Recalculate the centroid of the cluster as the weighted centroid given the probabilities of membership of all data points $x_i$: $\mu_{k}(n+1)=\frac{\sum_{x_{i} \in k} x_{i} \times P\left(\mu_{k} \mid x_{i}\right)^{b}}{\sum_{x_{i} \in k} P\left(\mu_{k} \mid x_{i}\right)^{b}} \nonumber$ And recalculate the updated memberships $P(\mu_k \mid x_i)$ (there are different ways to define membership; here is just one example): $P\left(\mu_{k} \mid x_{i}\right)=\left(\sum_{j=1}^{k}\left(\frac{d_{i k}}{d_{i j}}\right)^{\frac{2}{b-1}}\right)^{-1} \nonumber$
4. Termination: Iterate until membership matrix converges or until a user-specified number of iterations has been reached (the iteration may be trapped at some local maxima or minima)
The b here is the weighting exponent, which controls the relative weight placed on each partition, i.e. the degree of fuzziness. As b → 1, the partitions that minimize the squared error function become increasingly hard (non-fuzzy), while as b → ∞ the memberships all approach 1/k, which is the fuzziest state. There is no theoretical result on how to choose an optimal b; empirically useful values lie in [1, 30], and in most studies $1.5 \leqslant b \leqslant 3.0$ worked well.
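The following sketch illustrates the fuzzy membership and weighted-centroid updates for one-dimensional points, following the formulas above (with memberships computed from the distances of each point to each center); the data are invented.

```python
# Fuzzy k-means updates; b is the fuzziness exponent discussed above.
def memberships(points, centers, b=2.0):
    weights = []
    for x in points:
        d = [abs(x - c) + 1e-12 for c in centers]        # distances to each center
        row = []
        for k in range(len(centers)):
            # P(center k | x) = 1 / sum_j (d_k / d_j)^(2/(b-1))
            row.append(1.0 / sum((d[k] / d[j]) ** (2.0 / (b - 1.0))
                                 for j in range(len(centers))))
        weights.append(row)
    return weights

def update_centers(points, weights, b=2.0):
    # Weighted centroid: sum_i P^b x_i / sum_i P^b, for each center
    return [sum(w[k] ** b * x for x, w in zip(points, weights)) /
            sum(w[k] ** b for w in weights)
            for k in range(len(weights[0]))]

pts = [0.1, 0.3, 5.0, 5.2]
centers = [0.0, 5.0]
w = memberships(pts, centers)
print(update_centers(pts, w))
```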
K-Means as a Generative Model
A generative model is a model for randomly generating observable-data values, given some hidden parameters. While a generative model is a probability model of all variables, a discriminative model provides a conditional model only of the target variable(s) using the observed variables.
In order to make k-means a generative model, we now look at it in a probabilistic manner, where we assume that data points in cluster k are generated using a Gaussian distribution with its mean at the center of the cluster and a variance of 1, which gives
$P\left(x_{i} \mid \mu_{k}\right)=\frac{1}{\sqrt{2 \pi}} \exp \left\{-\frac{\left(x_{i}-\mu_{k}\right)^{2}}{2}\right\}.$
This gives a stochastic representation of the data, as shown in figure 15.10. The problem now becomes a maximum likelihood problem which, as we show below, is exactly equivalent to the original k-means algorithm described above.
In the generating step, we want to find the most likely partition, or assignment of labels, for each $x_i$ given the means $\mu_k$. Under the assumption that each point is drawn independently, we can look for the maximum likelihood label for each point separately:
$\arg \max _{k} P\left(x_{i} \mid \mu_{k}\right)=\arg \max _{k} \frac{1}{\sqrt{2 \pi}} \exp \left\{-\frac{\left(x_{i}-\mu_{k}\right)^{2}}{2}\right\}=\arg \min _{k}\left(x_{i}-\mu_{k}\right)^{2} \nonumber$
This is totally equivalent to finding the nearest cluster center in the original k-means algorithm.
In the Estimation step, we look for the maximum likelihood estimate of the cluster mean μk, given the partitions (labels):
$\arg \max _{\mu}\left\{\log \prod_{i} P\left(x_{i} \mid \mu\right)\right\}=\arg \max _{\mu} \sum_{i}\left\{-\frac{1}{2}\left(x_{i}-\mu\right)^{2}+\log \left(\frac{1}{\sqrt{2 \pi}}\right)\right\}=\arg \min _{\mu} \sum_{i}\left(x_{i}-\mu\right)^{2} \nonumber$
Note that the solution of this problem is exactly the centroid of the xi, which is the same procedure as the original k-means algorithm.
Unfortunately, since k-means assumes independence between the axes, covariance and variance are not accounted for, so models such as oblong distributions are not possible. However, this issue can be resolved when we generalize the problem into an expectation maximization framework.
Expectation Maximization
K-means can be seen as an example of EM (expectation maximization) algorithms, as shown in figure 15.11, where the E step estimates the hidden labels Q, and the M step maximizes the expected likelihood given the data and Q. Assigning each point the label of the nearest center corresponds to the E step of estimating the most likely label given the previous parameters. Then, using the data produced in the E step as observations, moving the centroid to the average of the points assigned to that center corresponds to the M step of maximizing the likelihood of the center given the labels. This case is analogous to Viterbi learning. A similar comparison can be drawn for fuzzy k-means, which is analogous to Baum-Welch for HMMs. Figure 15.12 compares clustering, HMMs and motif discovery with respect to the expectation maximization algorithm.
It should be noted that using the EM framework, the k-means approach can be generalized to clusters of oblong shape and varying sizes. With k-means, data points are always assigned to the nearest cluster center. By introducing a covariance matrix into the Gaussian probability function, we can allow for clusters of different sizes. By setting the variance to be different along different axes, we can even create oblong distributions.
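As an illustration of this generalization, the sketch below fits a two-component Gaussian mixture with full covariance matrices using scikit-learn (assumed to be available); the correlated covariances produce the oblong clusters described above.

```python
# EM clustering with full covariance (oblong clusters) via a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two elongated 2-D clouds with correlated coordinates
a = rng.multivariate_normal([0, 0], [[3.0, 2.5], [2.5, 3.0]], size=200)
b = rng.multivariate_normal([8, 0], [[3.0, -2.5], [-2.5, 3.0]], size=200)
data = np.vstack([a, b])

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(data)
labels = gmm.predict(data)        # soft memberships via gmm.predict_proba(data)
print(gmm.means_)                 # learned cluster centers
```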
EM is guaranteed to converge and to improve the likelihood at every iteration, at least from an algorithmic point of view. The notable problem with this approach is that local maxima of the probability density can prevent the algorithm from converging to the global maximum. One approach that may avoid this complication is to attempt multiple initializations to better determine the landscape of probabilities.
The limitations of the K-Means algorithm
The k-means algorithm has a few limitations which are important to keep in mind when using it and before choosing it. First of all, it requires a metric. For example, we cannot use the k-means algorithm on a set of words since we would not have any metric.
The second main limitation of the k-means algorithm is its sensitivity to noise. One way to try to reduce the noise is to run a principal component analysis beforehand. Another way is to weight each variable in order to give less weight to the variables affected by significant noise: the weights are calculated dynamically at each iteration of the k-means algorithm [3].
The third limitation is that the choice of initial centers can influence the results. There exist heuristics to select the initial cluster centers, but none of them are perfect.
Lastly, we need to know a priori the number of clusters. As we have seen, there are ways to circumvent this problem, essentially by running the algorithm several times while varying k, or by using the rule of thumb $k \approx \sqrt{n / 2}$ if we are short on computational resources. en.Wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set summarizes the different techniques for selecting the number of clusters. Hierarchical clustering also provides a handy approach to choosing the number of clusters.
Hierarchical Clustering
While the clustering methods discussed thus far often provide valuable insight into the nature of various data, they generally overlook an essential component of biological data, namely the idea that similarity might exist on multiple levels. To be more precise, similarity is an intrinsically hierarchical property, and this aspect is not addressed by the preceding clustering algorithms. Hierarchical clustering specifically addresses this in a very simple manner, and is perhaps the most widely used algorithm for expression data. As illustrated in figure 15.13, it is implemented as follows (a brief sketch using a standard library follows the steps below):
1. Initialization: Initialize a list containing each point as an independent cluster.
2. Iteration: Create a new cluster containing the two closest clusters in the list. Add this new cluster to the list and remove the two constituent clusters from the list.
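Standard libraries already implement agglomerative clustering; the example below (assuming SciPy is available, with made-up points) builds a dendrogram using average-linkage distances and then cuts it to obtain a chosen number of clusters.

```python
# Agglomerative (hierarchical) clustering with SciPy; "average" linkage
# corresponds to the average-distance metric discussed below.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9], [9.0, 0.2]])
tree = linkage(points, method="average")              # build the full dendrogram
labels = fcluster(tree, t=3, criterion="maxclust")    # cut to obtain 3 clusters
print(labels)
```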
One key benefit of using hierarchical clustering and keeping track of the times at which we merge certain clusters is that we can create a tree structure that details the times at which we joined every cluster, as can be seen in figure 15.13. Thus, to get a number of clusters that fits your problem, you simply cut at a cut-level of your choice as in figure 15.13 and that gives you the number of clusters corresponding to that cut-level. However, be aware that one potential pitfall with this approach is that at certain cut-levels, elements that are fairly close in space (such as e and b in figure 15.13), might not be in the same cluster.
Of course, a method for determining distances between clusters is required. The particular metric used varies with context, but (as can be seen in figure 15.14) some common implementations include the maximum, minimum, and average distances between constituent clusters, and the distance between the centroids of the clusters.
Note that when choosing the closest clusters, calculating all pairwise distances is very time- and space-consuming, so a better scheme is needed. One possible way of doing this is: 1) define bounding boxes that divide the feature space into several subspaces; 2) calculate pairwise distances within each box; 3) shift the boundaries of the boxes in different directions and recalculate pairwise distances; 4) choose the closest pair based on the results of all iterations.
Evaluating Cluster Performance
The validity of a particular clustering can be evaluated in a number of different ways. The overrepresentation of a known group of genes in a cluster, or, more generally, correlation between the clustering and confirmed biological associations, is a good indicator of validity and significance. If biological data are not yet available, however, there are ways to assess validity using statistics. For instance, robust clusters will appear from clustering even when only subsets of the total available data are used to generate clusters. In addition, the statistical significance of a clustering can be determined by calculating the probability of a particular distribution having been obtained randomly for each cluster. This calculation utilizes variations on the hypergeometric distribution. As can be seen from figure 15.15, we can do this by calculating the probability that we have more than r +'s when we pick k elements from a total of N elements. http://en.Wikipedia.org/wiki/Cluster...tering_results gives several formulas to assess the quality of a clustering.
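A minimal sketch of this hypergeometric enrichment test is shown below, assuming SciPy is available; the gene counts are invented.

```python
# Probability of drawing at least r "+" genes when picking k of the N genes,
# given M "+" genes overall (hypergeometric tail).
from scipy.stats import hypergeom

def enrichment_pvalue(N, M, k, r):
    """P(at least r '+' genes in a cluster of size k drawn from N genes, M of which are '+')."""
    return hypergeom.sf(r - 1, N, M, k)

# Example: 20 of a cluster's 50 genes carry an annotation held by 200 of 6000 genes
print(enrichment_pvalue(N=6000, M=200, k=50, r=20))
```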
15.04: Current Research Directions
The most significant problems associated with clustering now concern scaling existing algorithms cleanly with two attributes: size and dimensionality. To deal with larger and larger datasets, algorithms such as canopy clustering have been developed, in which datasets are coarsely clustered in a manner intended to pre-process the data, following which standard clustering algorithms (e.g. k-means) are applied to subdivide the various clusters. Increase in dimensionality is a much more frustrating problem, and attempts to remedy this usually involve a two-stage process in which appropriate relevant subspaces are first identified by transformations on the original space and then subjected to standard clustering algorithms.
15.05: Further Reading
• Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second Edition, February 2009. Found online at www-stat.stanford.edu/~tibs/ElemStatLearn/download.html
• Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press.
• McLachlan, G.J. and Basford, K.E. (1988) ”Mixture Models: Inference and Applications to Clustering”, Marcel Dekker.
• Bezdek, J. C., Ehrlich, R., Full, W. (1984). FCM: The fuzzy c-means clustering algorithm. Computers and Geosciences, 10(2), 191-203.
• nlp.stanford.edu/IR-book/html...stering-1.html
• compbio.uthsc.edu/microarray/lecture1.html
15.06: Resources
• Cluster 3.0: open source clustering software that implements the most commonly used clustering methods for gene expression data analysis.
15.07: What Have We Learned?
To summarize, in this chapter we have seen that:
• In clustering, we identify structure in unlabeled data. For example, we might use clustering to identify groups of genes that display similar expression profiles.
– Partitioning clustering algorithms construct non-overlapping clusters such that each item is assigned to exactly one cluster. Example: k-means
– Agglomerative clustering algorithms construct a hierarchical set of nested clusters, indicating the relatedness between clusters. Example: hierarchical clustering
– By using clustering algorithms, we can reveal the hidden structure of a gene expression matrix, which gives us valuable clues for understanding the mechanisms of complicated diseases and for categorizing different diseases
• In classification, we partition data into known labels. For example, we might construct a classifier to partition a set of tumor samples into those likely to respond to a given drug and those unlikely to respond to a given drug based on their gene expression profiles. We will focus on classification in the next chapter.
Bibliography
[1] en.Wikipedia.org/wiki/File:Heatmap.png.
[3] J.Z. Huang, M.K. Ng, Hongqiang Rong, and Zichen Li. Automated variable weighting in k-means type clustering. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 27(5):657–668, May 2005.
[4] Christopher A. Maher, Chandan Kumar-Sinha, Xuhong Cao, Shanker Kalyana-Sundaram, Bo Han, Xiaojun Jing, Lee Sam, Terrence Barrette, Nallasivam Palanisamy, and Arul M. Chinnaiyan. Transcriptome sequencing to detect gene fusions in cancer. Nature, 458(7234):97–101, Mar 05 2009.
In the previous chapter we looked at clustering, which provides a tool for analyzing data without any prior knowledge of the underlying structure. As we mentioned before, this is an example of “unsupervised” learning. This chapter deals with supervised learning, in which we are able to use pre-classified data to construct a model by which to classify more datapoints. In this way, we will use existing, known structure to develop rules for identifying and grouping further information.
There are two ways to do classification, analogous to the two ways in which we perform motif discovery: an HMM, which is a generative model that allows us to describe the probability of a particular designation being valid, and a CRF, which is a discriminative method that allows us to distinguish between objects in a specific context. There is a dichotomy between generative and discriminative approaches. We will use a Bayesian approach to classify mitochondrial proteins, and SVMs to classify tumor samples.
In this lecture we will look at two new algorithms: a generative classifier, Naive Bayes, and a discriminative classifier, Support Vector Machines (SVMs). We will discuss biological applications of each of these models, specifically the use of Naive Bayes classifiers to predict mitochondrial proteins across the genome and the use of SVMs for the classification of cancer based on gene expression monitoring by DNA microarrays. The salient features of both techniques and the caveats of using each technique will also be discussed.
Like with clustering, classification (and more generally supervised learning) arose from efforts in Artificial Intelligence and Machine Learning. Furthermore, much of the motivating infrastructure for classification had already been developed by probability theorists prior to the advent of either AI or ML.
16.02: Classification--Bayesian Techniques
Consider the problem of identifying mitochondrial proteins. If we look at the human genome, how do we determine which proteins are involved in mitochondrial processes, or more generally, which proteins are targeted to the mitochondria? This is particularly useful because if we know the mitochondrial proteins, we can study how these proteins mediate disease processes and metabolic functions. The classification method we will look at considers 7 features for all human proteins:
1. targeting signal
2. protein domains
3. co-expression
4. mass spectrometry
5. sequence homology
6. induction
7. motifs
Our overall approach will be to determine how these features are distributed for both mitochondrial and non-mitochondrial proteins. Then, given a new protein, we can apply probabilistic analysis to these seven features to decide which class it most likely falls into.
Single Features and Bayes Rule
Let's focus on one feature at first. We must assume that there is a class-dependent distribution for the feature, which we first derive from real data. The second thing we need is the a priori chance of drawing a sample of a particular class before looking at the data. The prior probability of a class is simply the relative size of that class. Once we have these probabilities, we can use Bayes' rule to compute the probability that a sample is in a particular class given the data (this is called the posterior). We have forward generative probabilities, and we use Bayes' rule to perform the backwards inference. Note that it is not enough to just consider the probability that the feature was drawn from each class-dependent distribution, because if we knew a priori that one class (say class A) is much more common than the other, then it should take overwhelming evidence that the feature was drawn from class B's distribution for us to believe the feature was indeed from class B. The correct way to find what we need based on both evidence and prior knowledge is to use Bayes' rule:
$P( Class \mid feature )=\left(\frac{P(\text { feature } \mid \text { Class }) P(\text { Class })}{P(\text { feature })}\right) \nonumber$
• Posterior : P(Class|feature)
• Prior : P(Class)
• Likelihood : P(feature|Class)
This formula gives us exactly the connection we need to flip known feature probabilities into class probabilities for our classifying algorithm. It lets us integrate both the likelihood we derive from our observations and our prior knowledge about how common something is. In the case of mitochondrial proteins, for example, we can estimate that mitochondrial genes make up something like 1500/21000 (i.e. less than 10%) of the human genome. Therefore, applying Bayes' rule, our classifier should only classify a gene as mitochondrial if there is very strong likelihood based on the observed features, since the prior probability that any gene is mitochondrial is so low.
With this rule, we can now form a maximum likelihood rule for predicting an object's class based on an observed feature. We want to choose the class that has the highest probability given the observed feature, so we will choose Class1 instead of Class2 if:
$\left(\frac{P(\text { feature } \mid \text { Class } 1) P(\text { Class } 1)}{P(\text { feature })}\right)>\left(\frac{P(\text { feature } \mid \text { Class } 2) P(\text { Class } 2)}{P(\text { feature })}\right) \nonumber$
Notice that P(feature) appears on both sides, so we can cancel that out entirely, and simply choose the class with the highest value of P(feature|Class)P(Class).
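A small sketch of this decision rule is shown below; the likelihood functions and numbers are invented for illustration, but the prior reflects the rough 1500/21000 estimate mentioned above.

```python
# Maximum a posteriori decision: pick the class maximizing P(feature|class)*P(class).
def classify(feature_value, likelihoods, priors):
    return max(priors, key=lambda c: likelihoods[c](feature_value) * priors[c])

priors = {"mito": 1500 / 21000, "not_mito": 19500 / 21000}
likelihoods = {
    "mito":     lambda x: 0.8 if x > 0.5 else 0.2,   # hypothetical feature distributions
    "not_mito": lambda x: 0.1 if x > 0.5 else 0.9,
}
# Even a moderately "mito-like" feature value is outweighed by the small prior here.
print(classify(0.7, likelihoods, priors))
```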
Another way of looking at this is as a discriminant function: By rearranging the formulas above and taking the logarithm, we should select Class1 instead of Class2 precisely when
$\log \left(\frac{P(\mathrm{X} \mid \mathrm{Class} 1) P(\text { Class } 1)}{P(\mathrm{X} \mid \text { Class } 2) P(\text { Class } 2)}\right)>0 \nonumber$
In this case the use of logarithms provides distinct advantages:
1. Numerical stability
2. Easier math (it's easier to add the expanded terms than multiply them)
3. Monotonically increasing discriminators.
This discriminant function does not capture the penalties associated with misclassification (in other words, whether one misclassification is more detrimental than another). In this case, we are essentially minimizing the number of misclassifications we make overall, but not assigning penalties to individual misclassifications. From examples discussed in class and in the problem set: if we are trying to classify a patient as having cancer or not, it could be argued that it is far more harmful to misclassify a patient as being healthy if they have cancer than to misclassify a patient as having cancer if they are healthy. In the first case, the patient will not be treated and would be more likely to die, whereas the second mistake involves emotional grief but no greater chance of loss of life. To formalize the penalty of misclassification we define a loss function, $L_{kj}$, which assigns a loss to the misclassification of an object as class j when the true class is class k (a specific example of a loss function was seen in Problem Set 2).
Collecting Data
The preceding tells us how to handle predictions if we already know the exact probabilities corresponding to each class. If we want to classify mitochondrial proteins based on feature X, we still need ways of determining the probabilities P(mito), P(not mito), P(X|mito) and P(X|not mito). To do this, we need a training set: a set of data that is already classified that our algorithm can use to learn the distributions corresponding to each class. A high-quality training set (one that is both large and unbiased) is the most important part of any classifier. An important question at this point is, how much data do we need about known genes in order to build a good classifier for unknown genes? This is a hard question whose answer is not fully known. However, there are some simple methods that can give us a good estimate: when we have a fixed set of training data, we can keep a holdout set that we don't use for our algorithm, and instead use those (known) data points to test the accuracy of our algorithm when we try to classify them. By trying different sizes of training versus holdout sets, we can check the accuracy curve of our algorithm. Generally speaking, we have enough training data when we see the accuracy curve flatten out as we increase the amount of training data (this indicates that additional data are likely to give only a slight marginal improvement). The holdout set is also called the test set, because it allows us to test the generalization power of our classifier.
Supposing we have already collected our training data, however, how should we model P(X|Class)? There are many possibilities. One is to use the same approach we did with clustering in the last lecture and model the feature as a Gaussian; then we can follow the maximum likelihood principle to find the best center and variance. The one used in the mitochondrial study is a simple density estimate: for each feature, divide the range of possibilities into a set of bins (say, five bins per feature). Then we use the given data to estimate the probability of a feature falling into each bin for a given class. The principle behind this is again maximum likelihood, but for a multinomial distribution rather than a Gaussian. We may choose to discretize an otherwise continuous distribution because estimating a continuous distribution can be complex.
There is one issue with this strategy: what if one of the bins has zero samples in it? A probability of zero will override everything else in our formulas, so that instead of thinking this bin is merely unlikely, our classifier will believe it is impossible. There are many possible solutions, but the one taken here is to apply the Laplace correction: add some small amount (say, one element) to each bin, to draw probability estimates slightly towards uniform and account for the fact that (in most cases) none of the bins are truly impossible. Another way to avoid having to apply the correction is to choose bins that are not too small, so that in practice no bin has zero samples. If you have very many points, you can have more bins, but you run the risk of overfitting your training data.
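The sketch below shows a binned density estimate with a one-pseudocount Laplace correction for a single feature; the training values and bin choices are invented.

```python
# Binned (multinomial) density estimate with a Laplace correction.
import numpy as np

def binned_probs(values, bins, value_range):
    counts, edges = np.histogram(values, bins=bins, range=value_range)
    probs = (counts + 1) / (counts.sum() + bins)        # one pseudocount per bin
    return probs, edges

def feature_likelihood(x, probs, edges):
    i = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(probs) - 1)
    return probs[i]

mito_scores = [0.9, 0.8, 0.85, 0.7, 0.95]               # hypothetical training values
probs, edges = binned_probs(mito_scores, bins=5, value_range=(0, 1))
print(probs, feature_likelihood(0.05, probs, edges))    # empty bins still get nonzero probability
```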
Estimating Priors
We now have a method for approximating the feature distribution for a given class, but we still need to know the relative probability of the classes themselves. There are three general approaches:
1. Estimate the priors by counting the relative frequency of each class in the training data. This is prone to bias, however, since available data is often skewed disproportionately towards less common classes (since those are often targeted for special study). If we have a high-quality (representative) sample for our training data, though, this works very well.
2. Estimate from expert knowledge: there may be previous estimates obtained by other methods independent of our training data, which we can then use as a first approximation in our own predictions. In other words, you might ask experts what the percentage of mitochondrial proteins is.
3. Assume all classes are equally likely; we would typically do this if we have no information at all about the true frequencies. This is effectively what we do when we use the maximum likelihood principle: our clustering algorithm was essentially using Bayesian analysis under the assumption that all priors are equal. This is actually a strong assumption, but when you have no other data, it is the best you can do.
For classifying mitochondrial proteins, we use method (2), since some estimates of the proportion of mitochondrial proteins were already known. But there is a complication: there is more than one feature.
Multiple features and Naive Bayes
In classifying mitochondrial proteins, we were looking at 7 features and not just one. In order to use the preceding methods with multiple features, we would need not just one bin for each individual feature range, but one for each combination of features: if we look at two features with five ranges each, that's already 25 bins. All seven features give us almost 80,000 bins, and we can expect that most of those bins will be empty simply because we don't have enough training data to fill them all. This would cause problems because zeroes cause infinite changes in the probabilities of being in one class. Clearly this approach won't scale well as we add more features, so we need to estimate combined probabilities in a better way.
The solution we will use is to assume the features are independent, that is, that once we know the class, the probability distribution of any feature is unaffected by the values of the other features. This is the Naive Bayes Assumption, and it is almost always false, but it is often used anyway for the combined reasons that it is very easy to manipulate mathematically and it is often close enough to the truth that it gives a reasonable approximation. (Note that this assumption does not say that all features are independent: if we look at the overall model, there can be strong connections between different features, but the assumption says that those connections arise through the different classes, and that within each individual class there are no further dependencies.) Also, if you know that some features are coupled, you could learn the joint distribution for just those pairs of features.
Once we assume independence, the probability of combined features is simply the product of the individual probabilities associated with each feature. So we now have:
$P\left(f_{1}, f_{2}, \ldots, f_{N} \mid \text { Class }\right)=P\left(f_{1} \mid \text { Class }\right) P\left(f_{2} \mid \text { Class }\right) \cdots P\left(f_{N} \mid \text { Class }\right) \nonumber$
where f1 represents feature 1. Similarly, the discriminant function becomes the log of the ratio of these products, weighted by the prior probabilities of the two classes:
$G\left(f_{1}, f_{2}, \ldots, f_{N}\right)=\log \left(\frac{P(\text { Class } 1) \prod_{i} P\left(f_{i} \mid \text { Class } 1\right)}{P(\text { Class } 2) \prod_{i} P\left(f_{i} \mid \text { Class } 2\right)}\right) \nonumber$
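As a small illustration of how the discriminant is computed under the independence assumption, the sketch below sums per-feature log ratios and adds the log prior ratio; all probabilities and priors in it are made up for the example.

```python
import numpy as np

def log_odds_discriminant(feature_bins, p_bins_class1, p_bins_class2, prior1, prior2):
    """Naive Bayes log-odds score G for one object.

    feature_bins:  list of bin indices, one per feature, for the object being classified
    p_bins_classK: list of per-feature bin-probability arrays P(bin | class K)
    Positive G favors class 1, negative G favors class 2.
    """
    g = np.log(prior1 / prior2)
    for f, bin_idx in enumerate(feature_bins):
        g += np.log(p_bins_class1[f][bin_idx] / p_bins_class2[f][bin_idx])
    return g

# Toy example with two features, five bins each (probabilities are made up)
p1 = [np.array([0.05, 0.10, 0.15, 0.30, 0.40]), np.array([0.10, 0.15, 0.25, 0.25, 0.25])]
p2 = [np.array([0.40, 0.30, 0.15, 0.10, 0.05]), np.array([0.30, 0.30, 0.20, 0.10, 0.10])]
print(log_odds_discriminant([4, 2], p1, p2, prior1=0.1, prior2=0.9))
```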
Testing a classifier
A classifier should always be tested on data not contained in its training set. We can imagine in the worst case an algorithm that just memorized its training data and behaved randomly on anything else; a classifier that did this would perform perfectly on its training data, but that indicates nothing about its real performance on new inputs. This is why it's important to use a test, or holdout, set as mentioned earlier. However, a simple error rate doesn't encapsulate all of the possible consequences of an error. For a simple binary classifier (an object is either in or not in a single target class), there are the following four types of errors:
1. True positive (TP)
2. True negative (TN)
3. False positive (FP)
4. False negative (FN)
The frequency of these errors can be encapsulated in the performance metrics of a classifier, which are defined as follows:
1. Sensitivity: what fraction of objects that are in a class are correctly labeled as that class? That is, what fraction have true positive results? High sensitivity means that elements of a class are very likely to be labeled as that class. Low sensitivity means there are too many false negatives.
2. Specificity: what fraction of objects not in a class are correctly labeled as not being in that class? That is, what fraction have true negative results? High specificity means that elements labeled as belonging to a class are very likely to actually belong to it. Low specificity means there are too many false positives.
In most algorithms there is a tradeoff between sensitivity and specificity. For example, we can reach a sensitivity of 100% by labeling everything as belonging to the target class, but we will have a specificity of 0%, so this is not useful. Generally, most algorithms have some probability cutoff they use to decide whether to label an object as belonging to a class (for example, our discriminant function above). Raising that threshold increases the specificity but decreases the sensitivity, and decreasing the threshold does the reverse. The MAESTRO algorithm for classifying mitochondrial proteins (described in this lecture) achieves 99% specificity and 71% sensitivity.
MAESTRO Mitochondrial Protein Classification
They find a class-dependent distribution for each feature by creating several bins and evaluating the proportion of mitochondrial and non-mitochondrial proteins in each bin. This lets you evaluate the usefulness of each feature in classification. You end up with a bunch of medium-strength classifiers, but when you combine them together, you hopefully end up with a stronger classifier. Calvo et al. [1] sought to construct high-quality predictions of human proteins localized to the mitochondrion by generating and integrating data sets that provide complementary clues about mitochondrial localization. Specifically, for each human gene product p, they assign a score si(p), using each of the following seven genome-scale data sets: targeting signal score, protein domain score, cis-motif score, yeast homology score, ancestry score, coexpression score, and induction score (details of the meaning and content of each of these data sets can be found in the manuscript). Each of these scores s1-s7 can be used individually as a weak genome-wide predictor of mitochondrial localization. Each method's performance was assessed using large gold-standard curated training sets: 654 mitochondrial proteins (Tmito) maintained by the MitoP2 database¹ and 2,847 non-mitochondrial proteins (T~mito) annotated to localize to other cellular compartments. To improve prediction accuracy, the authors integrated these approaches using a naive Bayes classifier that was implemented as a program called MAESTRO. So we can take several weak classifiers and combine them to get a stronger classifier.
When MAESTRO was applied across the human proteome, 1451 proteins were predicted to be mitochondrial, and 450 novel protein predictions were made. As mentioned in the previous section, the MAESTRO algorithm achieves 99% specificity and 71% sensitivity for the classification of mitochondrial proteins, suggesting that even with the assumption of feature independence, Naive Bayes classification techniques can prove extremely powerful for large-scale (i.e., genome-wide) classification.
¹ Mitochondria are the energy-producing machinery of the cell. Very early in the history of life, the mitochondrion was engulfed by the predecessor of modern-day eukaryotes, and now we have different compartments in our cells. The mitochondrion has its own genome, but it is very depleted relative to its ancestral genome: only about 11 genes remain. However, there are hundreds of genes that make the mitochondria work, and these proteins are encoded by genes transcribed in the nucleus and then transported to the mitochondria. So the goal is to figure out which proteins encoded in the genome are targeted to the mitochondria. This is important because many diseases are associated with the mitochondria, as are processes such as aging.
The previous section looked at using probabilistic (or generative) models for classification; this section looks at using discriminative techniques: in essence, can we run our data through a function to determine its structure? Such discriminative techniques avoid the inherent cost involved in generative models, which might require more information than is actually necessary.
Support vector machine techniques essentially involve drawing a vector that is perpendicular to the line (hyperplane) separating the training data. The approach is that we look at the training data to obtain a separating hyperplane so that the two classes of data lie on different sides of the hyperplane. There are, in general, many hyperplanes that can separate the data, so we want to draw the hyperplane that separates the data the most; that is, we wish to choose the line that maximizes the distance from the hyperplane to any data point. In other words, the SVM is a maximum margin classifier. You can think of the hyperplane being surrounded with margins of equal size on each side of the line, with no data points inside the margin on either side. We want to draw the line that allows us to draw the largest margin. Note that once the separating line and margin are determined, some data points will be right on the boundary of the margin. These are the data points that keep us from expanding the margin any further, and thus determine the line/margin. Such points are called the support vectors. If we add new data points outside the margin or remove points that are not support vectors, we will not change the maximum margin we can achieve with any hyperplane.
Suppose that the vector perpendicular to the hyperplane is w, and that the hyperplane lies at a distance $\frac{b}{|w|}$ from the origin. Then a point x is classified as being in the positive class if w · x is greater than b, and negative otherwise. It can be shown that the optimal w, that is, the hyperplane that achieves the maximum margin, can actually be written as a linear combination of the data vectors, Σ ai xi. Then, to classify a new data point x, we need to take the dot product of w with x to arrive at a scalar. Notice that this scalar, Σ ai (xi · x), depends only on the dot products between x and the training vectors xi. Furthermore, it can be shown that finding the maximum margin hyperplane for a set of (training) points amounts to solving a constrained optimization problem (a quadratic program) whose objective function depends only on the dot products of the training points with each other. This is good because it tells us that the complexity of solving that optimization problem is independent of the dimension of the data points. If we precompute the pairwise dot products of the training vectors, then it makes no difference what the dimensionality of the data is in regards to the running time of solving the optimization problem.
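A minimal sketch of this idea follows: the classifier never forms w explicitly and only needs dot products between the new point and the training vectors. The support vectors, coefficients, and offset are arbitrary illustrative values, not the output of an actual SVM solver.

```python
import numpy as np

def svm_decision(x_new, support_vectors, coefficients, b):
    """Evaluate w·x - b where w = sum_i a_i * x_i is never formed explicitly;
    only dot products between x_new and the training (support) vectors are needed."""
    score = sum(a_i * np.dot(x_i, x_new) for a_i, x_i in zip(coefficients, support_vectors))
    return score - b

# Hypothetical support vectors and coefficients (signs encode the class labels)
support_vectors = [np.array([1.0, 2.0]), np.array([2.0, 1.0]), np.array([-1.0, -1.5])]
coefficients    = [0.5, 0.3, -0.8]
print(svm_decision(np.array([1.5, 1.5]), support_vectors, coefficients, b=0.0))
```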
Kernels
We see that SVMs depend only on the dot products of the vectors. So, if we call our transformation φ(v), then for two vectors we only care about the value of φ(v1) · φ(v2). The trick to using kernels is to realize that for certain transformations φ, there exists a function K(v1, v2) such that:
K(v1, v2) = φ(v1) · φ(v2)
In the above relation, the right-hand side is the dot product of vectors in a very high-dimensional space, but the left-hand side is a function of the two original, lower-dimensional vectors. For example, with the mapping x → (x, x²) (used again in the worked example below), we get

K(x1, x2) = (x1, x1²) · (x2, x2²) = x1x2 + (x1x2)²

Since we never actually apply the transformation φ, we can do all our calculations in the lower-dimensional space but still get all the power of using a higher dimension.
Example kernels are the following:
1. Linear kernel: K (v1 , v2 ) = v1 · v2 which represents the trivial mapping of φ(x) = x
2. Polynomial kernel: K (v1 , v2 ) = (1 + v1 · v2 )n which was used in the previous example with n = 2.
3. Radial basis kernel: K(v1, v2) = exp(−β|v1 − v2|²). This transformation is actually from a point v1 to a function (which can be thought of as a point in Hilbert space) in an infinite-dimensional space. So what we're actually doing is transforming our training set into functions, and combining them to get a decision boundary. The functions are Gaussians centered at the input points.
4. Sigmoid kernel: K(v1, v2) = tanh[β(v1T v2 + r)]. Sigmoid kernels have been popular for use in SVMs due to their origin in neural networks (e.g., sigmoid kernel functions are equivalent to two-layer perceptron neural networks). It has been pointed out in previous work (Vapnik 1995) that the kernel matrix may not be positive semi-definite for certain values of the parameters β and r. The sigmoid kernel has nevertheless been used in practical applications [2].
Here is a specific example of a kernel function. Consider the two classes of one-dimensional data:
{−5, −4, −3, 3, 4, 5} and {−2, −1, 0, 1, 2}

This data is clearly not linearly separable, and the best separation boundary we can find might be x > −2.5. Now consider applying the transformation x → (x, x²). The data can now be written as new pairs,
{−5, −4, −3, 3, 4, 5} → {(−5, 25), (−4, 16), (−3, 9), (3, 9), (4, 16), (5, 25)}
and
{−2, −1, 0, 1, 2} → {(−2, 4), (−1, 1), (0, 0), (1, 1), (2, 4)}
This data is separable by the rule y > 6.5, and in general, the more dimensions we transform data into, the more separable it becomes.

An alternate way of thinking of this problem is to transform the classifier back into the original low-dimensional space. In this particular example, we would get the rule x² < 6.5, which would split the number line at two points. In general, the higher the dimensionality of the space that we transform to, the more complicated a classifier we get when we transform back to the original space.
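The sketch below checks numerically, on the same one-dimensional example, that the kernel value x1x2 + (x1x2)² equals the dot product of the explicitly mapped vectors, so the separation rule y > 6.5 could be evaluated without ever constructing the higher-dimensional points. The function names are illustrative.

```python
import numpy as np

def phi(x):
    """Explicit feature map x -> (x, x^2) used in the example above."""
    return np.array([x, x ** 2])

def poly_kernel(x1, x2):
    """Kernel that matches phi without ever constructing the mapped vectors."""
    return x1 * x2 + (x1 * x2) ** 2

class_a = [-5, -4, -3, 3, 4, 5]
class_b = [-2, -1, 0, 1, 2]

# The kernel value equals the dot product in the transformed space
for x1, x2 in [(-5, 3), (4, -1), (2, 2)]:
    assert np.isclose(poly_kernel(x1, x2), np.dot(phi(x1), phi(x2)))

# In the transformed space, the rule y > 6.5 (i.e. x^2 > 6.5) separates the classes
print(all(phi(x)[1] > 6.5 for x in class_a), all(phi(x)[1] < 6.5 for x in class_b))
```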
One of the caveats of transforming the input data using a kernel is the risk of overfitting (or over-classifying) the data. More generally, the SVM may generate so many feature vector dimensions that it does not generalize well to other data. To avoid overfitting, cross-validation is typically used to evaluate the fitting provided by each parameter set tried during the grid or pattern search process. In the radial basis kernel, for example, you can essentially increase the value of β until each point is within its own classification region (thereby defeating the classification process altogether). SVMs generally resist this problem of overfitting due to the fact that they maximize margins between data points.
When using difficult-to-separate training sets, SVMs can incorporate a cost parameter C, to allow some flexibility in separating the categories. This parameter controls the trade-off between allowing training errors and forcing rigid margins. It can thereby create a soft margin that permits some misclassifications. Increasing the value of C increases the cost of misclassifying points and forces the creation of a more accurate model that may not generalize well.
Can we use just any function as our kernel? The answer to this is provided by Mercer's Condition, which gives us an analytical criterion for choosing an acceptable kernel. Mercer's Condition states that a kernel K(x, y) is a valid kernel if and only if the following holds: for any g(x) such that $\int g(x)^{2} dx$ is finite, we have:
$\iint K(x, y) g(x) g(y) d x d y \geq 0 \nonumber$ [3]
In all, we have defined SVM discriminators and shown how to perform classification with appropriate kernel mapping functions that allow us to perform computations in a lower dimension while still capturing all the information available at higher dimensions. The next section describes the application of SVMs to the classification of tumors for cancer diagnostics.
A generic approach for classifying two types of acute leukemias, acute myeloid leukemia (AML) and acute lymphoid leukemia (ALL), was presented by Golub et al. [4]. This approach centered on effectively addressing three main issues:
1. Whether there were genes whose expression patterns were strongly correlated with the class distinction to be predicted (i.e. can ALL and AML be distinguished)
2. How to use a collection of known samples to create a “class predictor” capable of assigning a new sample to one of two classes
3. How to test the validity of their class predictors
They addressed (1) by using a “neighbourhood analysis” technique to establish whether the observed correlations were stronger than would be expected by chance. This analysis showed that roughly 1100 genes were more highly correlated with the AML-ALL class distinction than would be expected by chance. To address (2) they developed a procedure that uses a fixed subset of “informative genes” (chosen based on their correlation with the class distinction of AML and ALL) and makes a prediction based on the expression level of these genes in a new sample. Each informative gene casts a “weighted vote” for one of the classes, with the weight of each vote dependent on the expression level in the new sample and the degree of that gene's correlation with the class distinction. The votes are summed to determine the winning class. To address (3), they tested their predictor first by cross-validation on the initial data set and then assessed its accuracy on an independent set of samples. Based on these tests, they were able to correctly classify 36 of the 38 samples (which were part of their training set!), and all 36 predictions were clinically correct. On the independent test set, 29 of 34 samples were strongly predicted with 100% accuracy and 5 were not predicted.
An SVM approach to this same classification problem was implemented by Mukherjee et al. [5]. The output of a classical SVM is a binary class designation. In this particular application it is particularly important to be able to reject points for which the classifier is not confident enough. Therefore, the authors introduced a confidence interval on the output of the SVM that allows for rejection of points with low confidence values. As in the case of Golub et al. [4], it was important for the authors to infer which genes are important for the classification. The SVM was trained on the 38 samples in the training set and tested on the 34 samples in the independent test set (exactly as in the case of Golub et al.). The authors' results are summarized in the following table (where |d| corresponds to the cutoff for rejection).

Genes   Rejects   Errors   Confidence level   |d|
7129    3         0        93%                0.1
40      0         0        93%                0.1
5       3         0        92%                0.1
These results are a significant improvement over previously reported techniques, suggesting that SVMs can play an important role in the classification of large data sets (such as those generated by DNA microarray experiments).
16.05: Semi-Supervised Learning
In some scenarios, we have a data set with only a few labeled data points, a large number of unlabeled data points, and inherent structure in the data. In this type of scenario, both clustering and classification perform poorly, and a hybrid approach is required. This semi-supervised approach could involve the clustering of data first, followed by the classification of the generated clusters.
16.06: Further Reading Resources Bibliography
Further Reading
• Richard O. Duda, Peter E. Hart, David G. Stork (2001) Pattern classification (2nd edition), Wiley, New York
• See previous chapter for more books and articles.
Resources
• Statistical Pattern Recognition Toolbox for Matlab.
• See previous chapter for more tools
Bibliography
[1] Calvo, S., Jain, M., Xie, X., Sheth, S.A., Chang, B., Goldberger, O.A., Spinazzola, A., Zeviani, M., Carr, S.A., and Mootha, V.K. (2006). Systematic identification of human mitochondrial disease genes through integrative genomics. Nat. Genet. 38, 576–582.
[2] Schölkopf, B., et al., 1997. Comparing support vector machines with Gaussian kernels to radial basis function classifiers. IEEE Transactions on Signal Processing.
[3] Christopher J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121–167, 1998.
[4] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, and C. D. Bloomfield. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286:531–537, 1999.
[5] S. Mukherjee, P. Tamayo, D. Slonim, A. Verri, T. Golub, J. P. Mesirov, and T. Poggio. Support vector machine classification of microarray data. Technical report, AI Memo 1677, Massachusetts Institute of Technology, 1998.
Instead of a Profile Matrix, we can also represent motifs using information theory. In information theory, information about a certain event is communicated through a message. The amount of information carried by a message is measured in bits. We can determine the bits of information carried by a message by observing the probability distribution of the event described in the message. Basically, if we don't know anything about the outcome of the event, the message will contain a lot of bits. However, if we are pretty sure how the event is going to play out, and the message only confirms our suspicions, the message carries very few bits of information. For example, the sentence “The sun will rise tomorrow” is not very surprising, so the information content of that sentence is quite low. However, the sentence “The sun will not rise tomorrow” is very surprising, and it has high information content. We can calculate the specific amount of information in a given message with the equation −log₂(p), where p is the probability of the event.
Shannon Entropy is a measure of the expected amount of information contained in a message. In other words, it is the information contained by a message about each event that could possibly occur, weighted by each event's probability. The Shannon entropy is given by the equation:
$H(X)=-\sum_{i} p_{i} \log _{2} p_{i} \nonumber$
Entropy is maximized when all events have an equal probability of occurring. This is because entropy tells us the expected amount of information we will learn. If each event has the same chance of occurring, we know as little as possible about the outcome, so the expected amount of information we will learn is maximized. For example, a coin flip has maximal entropy only when the coin is fair. If the coin is not fair, then we know more about the outcome of the coin flip, and the expected message of the outcome of the coin flip will contain less information.
We can model a motif by how much information we have about each position after applying Gibbs Sampling or EM. In the following figure, the height of each letter represents the number of bits of information we have learned about that base. Higher stacks correspond to greater certainty about what the base is at that position of the motif, while lower stacks correspond to a higher degree of uncertainty. With four bases to choose from, the maximum Shannon entropy of each position is 2 bits. Another way to look at this figure is that the height of a letter is proportional to the frequency of the base at that position.
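A short sketch of the height calculation, assuming a simple four-letter alphabet and uniform background: the total stack height at a position is 2 bits minus the Shannon entropy of that column. The example columns are hypothetical.

```python
import numpy as np

def column_information(pwm_column):
    """Bits of information at one motif position: 2 - H(column) for DNA."""
    p = np.asarray(pwm_column, dtype=float)
    nonzero = p[p > 0]                              # skip zero-probability bases
    entropy = -np.sum(nonzero * np.log2(nonzero))
    return 2.0 - entropy

# Hypothetical PWM columns (order A, C, G, T)
print(column_information([0.25, 0.25, 0.25, 0.25]))  # 0 bits: completely uncertain
print(column_information([0.97, 0.01, 0.01, 0.01]))  # close to 2 bits: nearly fixed base
```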
There is a distance metric on probability distributions known as the Kullback-Leibler distance. This allows us to compare the divergence of the motif distribution to some true distribution. The K-L distance is given by
$D_{K L}\left(P_{\text {motif }} \| P_{\text {background }}\right)=\sum_{i \in\{A, T, G, C\}} P_{\text {motif }}(i) \log \frac{P_{\text {motif }}(i)}{P_{\text {background }}(i)} \nonumber$
In Plasmodium, there is a lower G-C content. If we assume a G-C content of 20%, then we get the following representation for the above motif. C and G bases are much rarer in this background, so their presence in a motif is much more informative. Note that in this representation, we used the K-L distance, so it is possible for a stack to be higher than 2 bits.
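A matching sketch for the Kullback-Leibler representation, with a hypothetical G/C-rich column scored against both a uniform background and an A/T-rich background of roughly 20% G+C; the specific probabilities are illustrative.

```python
import numpy as np

def column_kl(pwm_column, background):
    """Kullback-Leibler distance of one motif column from a background distribution."""
    p = np.asarray(pwm_column, dtype=float)
    q = np.asarray(background, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

uniform_bg = [0.25, 0.25, 0.25, 0.25]      # equal base usage (order A, C, G, T)
at_rich_bg = [0.40, 0.10, 0.10, 0.40]      # roughly 20% G+C, as assumed for Plasmodium

column = [0.05, 0.45, 0.45, 0.05]          # hypothetical G/C-rich motif position
print(column_kl(column, uniform_bg))       # modest height under a uniform background
print(column_kl(column, at_rich_bg))       # much taller: G/C here is highly unusual
```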
Bibliography
[1] Timothy L. Bailey. Fitting a mixture model by expectation maximization to discover motifs in biopolymers. In Proceedings of the Second International Conference on Intelligent Systems for Molecular Biology, pages 28–36. AAAI Press, 1994.
[2] C. E. Lawrence and A. A. Reilly. An expectation maximization (EM) algorithm for the identification and characterization of common sites in unaligned biopolymer sequences. Proteins, 7(1):41–51, 1990.
We have already explored the areas of dynamic programming, sequence alignment, sequence classification and modeling, hidden Markov models, and expectation maximization. In the following chapter, we will look at how these techniques are also useful in identifying novel motifs and elucidating their functions.
The regulatory code: Transcription Factors and Motifs
Motifs are short (6-8 bases long), recurring patterns that have well-defined biological functions. Motifs include DNA patterns in enhancer regions or promoter motifs, as well as motifs in RNA sequences such as splicing signals. As we have discussed, genetic activity is regulated in response to environmental variations. Motifs are responsible for recruiting Transcription Factors, or regulatory proteins, to the appropriate target gene. Motifs can also be recognized by microRNAs, which bind to motifs through complementarity; nucleosomes, which recognize motifs based on their GC content; and other RNAs, which use a combination of DNA sequence and structure. Once bound, these regulators can activate or repress the expression of the associated gene.
Transcription factors (TFs) can use several mechanisms in order to control gene expression, including acetylation and deacetylation of histone proteins, recruitment of cofactor molecules to the TF-DNA complex, and stabilization or disruption of RNA-DNA interfaces during transcription. They often regulate a group of genes that are involved in similar cellular processes. Thus, genes that contain the same motif in their upstream regions are likely to be related in their functions. In fact, many regulatory motifs are identified by analyzing the regions upstream of genes known to have similar functions.
Motifs have become exceedingly useful for defining genetic regulatory networks and deciphering the functions of individual genes. With our current computational abilities, regulatory motif discovery and analysis has progressed considerably and remains at the forefront of genomic studies.
Challenges of motif discovery
Before we can get into algorithms for motif discovery, we must first understand the characteristics of motifs, especially those that make motifs somewhat difficult to find. As mentioned above, motifs are generally very short, usually only 6-8 base pairs long. Additionally, motifs can be degenerate, where only the nucleotides at certain locations within the motif affect the motif’s function. This degeneracy arises because transcription factors are free to interact with their corresponding motifs in manners more complex than a simple complementarity relation. As seen in Figure 17.1, many proteins interact with the motif not by opening up the DNA to check for base complementarity, but instead by scanning the spaces, or grooves, between the two sugar phosphate backbones. Depending on the physical structure of the transcription factor, the protein may only be sensitive to the difference between purines and pyrimidines or weak and strong bases, as opposed to identifying specific base pairs. The topology of the transcription factor may even make it such that certain nucleotides aren’t interacted with at all, allowing those bases to act as wildcards.
This issue of degeneracy within a motif poses a challenging problem. If we were only looking for a fixed k-mer, we could simply search for the k-mer in all the sequences we are looking at using local alignment
tools. However, the motif may vary from sequence to sequence. Because of this, a string of nucleotides that is known to be a regulatory motif is said to be an instance of a motif because it represents one of potentially many different combinations of nucleotides that fulfill the function of the motif.
In our approaches, we make two assumptions about the data. First, we assume that there are no pairwise correlations between bases, i.e. that each base is independent of every other base. While such correlations do exist in real life, considering them in our analysis would lead to an exponential growth of the parameter space being considered, and consequently we would run the risk of overfitting our data. The second assumption we make is that all motifs have fixed lengths; indeed, this approximation simplifies the problem greatly. Even with these two assumptions, however, motif finding is still a very challenging problem. The relatively small size of motifs, along with their great variety, makes it fairly difficult to locate them. In addition, a motif’s location relative to the corresponding gene is far from fixed; the motif can be upstream or downstream, and the distance between the gene and the motif also varies. Indeed, sometimes the motif is as far as 10k to 10M base pairs from the gene.
Motifs summarize TF sequence specificity
Because motif instances exhibit great variety, we generally use a Position Weight Matrix (PWM) to characterize the motif. This matrix gives the frequency of each base at each location in the motif. The figure below shows an example PWM, where pck corresponds to the frequency of base c in position k within the motif, with pc0 denoting the distribution of bases in non-motif regions.
We now define the problem of motif finding more rigorously. We assume that we are given a set of co-regulated and functionally related genes. Many motifs were previously discovered by doing footprint
experiments, which isolate sequences bound by specific transcription factors, and therefore more likely to correspond to motifs. There are several computational methods that can be used to locate motifs:
1. Perform a local alignment across the set of sequences and explore the alignments that resulted in a very high alignment score.
2. Model the promoter regions using a Hidden Markov Model and then use a generative model to find non-random sequences.
3. Reduce the search space by applying prior knowledge for what motifs should look like.
4. Search for conserved blocks between different sequences.
5. Examine the frequency of kmers across regions highly likely to contain a motif.
6. Use probabilistic methods, such as EM, Gibbs Sampling, or a greedy algorithm
Method 5, using relative kmer frequencies to discover motifs, presents a few challenges to consider. For example, there could be many common words that occur in these regions that are in fact not regulatory motifs but instead different sets of instructions. Furthermore, given a list of words that could be a motif, it is not certain that the most likely motif is the most common word; for instance, while motifs are generally overrepresented in promoter regions, transcription factors may be unable to bind if an excess of motifs are present. One possible solution to this problem might be to find kmers with maximum relative frequency in promoter regions as compared to background regions. This strategy is commonly performed as a post processing step to narrow down the number of possible motifs.
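As a rough illustration of the k-mer counting idea, the sketch below computes the frequency of each k-mer in a set of promoter sequences and compares it to the frequency in background sequences; the toy sequences and pseudocount are assumptions for the example.

```python
from collections import Counter

def kmer_frequencies(sequences, k):
    """Count k-mer occurrences across a set of sequences and normalize to frequencies."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def relative_enrichment(kmer, promoters, background, k=6, pseudo=1e-6):
    """Frequency of a k-mer in promoter regions relative to background regions."""
    fg = kmer_frequencies(promoters, k).get(kmer, 0.0) + pseudo
    bg = kmer_frequencies(background, k).get(kmer, 0.0) + pseudo
    return fg / bg

# Toy sequences; real promoter/background sets would contain thousands of regions
promoters  = ["ACGTGCACGTGA", "TTACGTGCGGTA"]
background = ["ATATATATATAT", "GGCCGGCCAATT"]
print(relative_enrichment("ACGTGC", promoters, background))
```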
In the next section, we will talk more about these probabilistic algorithms as well as methods to use kmer frequency for motif discovery. We will also come back to the idea of using kmers to find motifs in the context of using evolutionary conservation for motif discovery.
The key idea behind EM
We are given a set of sequences with the assumption that motifs are enriched in them. The task is to find the common motif in those sequences. The key idea behind the following probabilistic algorithms is that if we were given motif starting positions in each sequence, finding the motif PWM would be trivial; similarly, if we were given the PWM for a particular motif, it would be easy to find the starting positions in the input sequences. Let Z be the matrix in which Zij corresponds to the probability that a motif instance starts at position j in sequence i (a graphical representation of the probability distributions summarized in Z is shown in Figure 17.8). These algorithms therefore rely on a basic iterative approach: given a motif length L and an initial matrix Z, we can use the starting positions to estimate the motif, and in turn use the resulting motif to re-estimate the starting positions, iterating over these two steps until convergence on a motif.
The E step: Estimating Zij from the PWM
Step 1: Initialization The first step in EM is to generate an initial probability weight matrix (PWM). The PWM describes the frequency of each nucleotide at each location in the motif. Figure 17.5 shows an example of a PWM. In this example, we assume that the motif is eight bases long.
If you are given a set of aligned sequences and the location of suspected motifs within them, then finding the PWM is accomplished by computing the frequency of each base in each position of the suspected motif. We can initialize the PWM by choosing starting locations randomly.
We refer to the PWM as pck, where pck is the probability of base c occurring in position k of the motif. Note: if there is 0 probability, it is generally a good idea to insert pseudocounts into your probabilities. The PWM is also called the profile matrix. In addition to the PWM, we also keep a background distribution pc0, the distribution of the bases not in the motif.
Step 2: Expectation In the expectation step, we generate a vector Zij which contains the probability of the motif starting in position j in sequence i. In EM, the Z vector gives us a way of classifying all of the nucleotides in the sequences and tell us whether they are part of the motif or not. We can calculate Zij using Bayes’ Rule. This simplifies to:
$Z_{i j}^{t}=\frac{\operatorname{Pr}^{t}\left(X_{i} \mid Z_{i j}=1\right) \operatorname{Pr}^{t}\left(Z_{i j}=1\right)}{\sum_{k=1}^{L-W+1} \operatorname{Pr}^{t}\left(X_{i} \mid Z_{i k}=1\right) \operatorname{Pr}^{t}\left(Z_{i k}=1\right)} \nonumber$
where $\operatorname{Pr}^{t}\left(X_{i} \mid Z_{i j}=1\right)=\operatorname{Pr}\left(X_{i} \mid Z_{i j}=1, p\right)$ is defined as

$\operatorname{Pr}\left(X_{i} \mid Z_{i j}=1, p\right)=\prod_{k=1}^{j-1} p_{c_{k}, 0} \prod_{k=j}^{j+W-1} p_{c_{k}, k-j+1} \prod_{k=j+W}^{L} p_{c_{k}, 0} \nonumber$

where $c_{k}$ denotes the base at position k of sequence $X_{i}$.
This is the probability of sequence i given that the motif starts at position j. The first and last products correspond to the probability that the sequences preceding and following the candidate motif come from the background probability distribution, whereas the middle product corresponds to the probability that the candidate motif instance came from the motif probability distribution. In this equation, we assume that the sequence has length L and the motif has length W.
The M step: Finding the maximum likelihood motif from starting positions Zij
Step 3: Maximization Once we have calculated Zt, we can use the results to update both the PWM and the background probability distribution. The PWM is updated by computing, for each motif position, the expected number of times each base occurs there (each candidate starting position j in sequence i contributes its weight Zij) and then normalizing these expected counts, with pseudocounts, to obtain the new probabilities.
Step 4: Repeat Repeat steps 2 and 3 until convergence.
One possible way to test whether the profile matrix has converged is to measure how much each element in the PWM changes after the maximization step. If the change is below a chosen threshold, then we can terminate the algorithm. EM is a deterministic algorithm and is entirely dependent on the initial starting points because it uses an average over the full probability distribution. It is therefore advisable to rerun the algorithm with different initial starting positions to try to reduce the chance of converging on a local maximum that is not the global maximum and to get a good sense of the solution space.
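The following is a minimal OOPS-style EM sketch for a single fixed-width motif, with toy sequences, a uniform background, and pseudocounts of one; in practice the algorithm is rerun from many random initializations, as noted above.

```python
import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def sequence_prob(seq, j, pwm, bg):
    """P(sequence | motif starts at j): background outside the window, PWM inside."""
    W = pwm.shape[1]
    p = 1.0
    for k, base in enumerate(seq):
        p *= pwm[IDX[base], k - j] if j <= k < j + W else bg[IDX[base]]
    return p

def em_motif(seqs, W, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    bg = np.full(4, 0.25)
    pwm = rng.dirichlet(np.ones(4), size=W).T      # random initial 4 x W PWM
    for _ in range(n_iter):
        # E-step: Z[i][j] = probability that the motif in sequence i starts at j
        Z = []
        for seq in seqs:
            scores = np.array([sequence_prob(seq, j, pwm, bg)
                               for j in range(len(seq) - W + 1)])
            Z.append(scores / scores.sum())
        # M-step: expected base counts at each motif position, then normalize
        counts = np.ones((4, W))                   # pseudocounts
        for seq, z in zip(seqs, Z):
            for j, zj in enumerate(z):
                for k in range(W):
                    counts[IDX[seq[j + k]], k] += zj
        pwm = counts / counts.sum(axis=0)
    return pwm, Z

seqs = ["TTTTACGTGCTTTT", "AAAAACGTGCAAAA", "GGACGTGCGGGGGG"]
pwm, Z = em_motif(seqs, W=6)
print("".join(BASES[i] for i in pwm.argmax(axis=0)))  # consensus of the learned motif
```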
Sampling motif positions based on the Z vector
Gibbs sampling is similar to EM except that it is a stochastic process, while EM is deterministic. In the expectation step, we only consider nucleotides within the motif window in Gibbs sampling. In the maximization step, we sample from Zij and use the result to update the PWM instead of averaging over all values as in EM.
Step 1: Initialization As with EM, you generate your initial PWM with a random sampling of initial starting positions. The main difference lies in the maximization step. During EM, the algorithm updates the sequence motif by considering all possible starting points of the motif. During Gibbs sampling, the algorithm picks a single starting point for each sequence, sampled according to the starting-point probabilities Z.
Step 2: Remove Remove one sequence, Xi, from your set of sequences. You will change the starting location for this particular sequence.
Step 3: Update Using the remaining set of sequences, update the PWM by counting how often each base occurs in each position, adding pseudocounts as necessary.
Step 4: Sample Using the newly updated PWM, compute the score of each starting point in the sequence Xi. To generate each score, Aij, the following formula is used:
$A_{i j}=\frac{\prod_{k=j}^{j+W-1} p_{c_{k}, k-j+1}}{\prod_{k=j}^{j+W-1} p_{c_{k}, 0}} \nonumber$
This is simply the probability that the sequence was generated using the motif PWM divided by the probability that the sequence was generated using the background PWM.
Select a new starting position for Xi by randomly choosing a position with probability proportional to its score Aij.
Step 5: Iterate Loop back to Step 2 and iterate the algorithm until convergence.
More likely to find global maximum, easy to implement
Because Gibbs sampling updates its sequence motif during maximization based on a single sampled starting position rather than on every position weighted by its score, Gibbs is less dependent on the starting PWM. EM is much more likely to get stuck on a local maximum than Gibbs because of this fact. However, this does not mean that Gibbs will always return the global maximum. Gibbs must be run multiple times to ensure that you have found the global maximum and not a local maximum. Two popular implementations of Gibbs Sampling applied to this problem are AlignACE and BioProspector. A more general Gibbs Sampler can be found in the program WinBUGS. Both AlignACE and BioProspector use the aforementioned algorithm for several choices of initial values and then report common motifs. Gibbs sampling is easier to implement than EM, and in theory, it converges quickly and is less likely to get stuck at a local optimum. However, the search is less systematic.
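For comparison, here is a minimal Gibbs-sampling sketch following the steps above (remove one sequence, rebuild the profile from the rest, score every start position, and sample a new start); the toy sequences, pseudocounts, and iteration count are illustrative.

```python
import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def build_pwm(seqs, starts, W, exclude, pseudo=1.0):
    """Profile from the current motif positions of all sequences except `exclude`."""
    counts = np.full((4, W), pseudo)
    for i, (seq, s) in enumerate(zip(seqs, starts)):
        if i == exclude:
            continue
        for k in range(W):
            counts[IDX[seq[s + k]], k] += 1
    return counts / counts.sum(axis=0)

def gibbs_motif(seqs, W, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    bg = np.full(4, 0.25)
    starts = [rng.integers(0, len(s) - W + 1) for s in seqs]   # random initial positions
    for _ in range(n_iter):
        i = rng.integers(0, len(seqs))                         # remove one sequence
        pwm = build_pwm(seqs, starts, W, exclude=i)            # update profile without it
        seq = seqs[i]
        # score A_ij: motif-model probability over background probability per start
        scores = np.array([
            np.prod([pwm[IDX[seq[j + k]], k] / bg[IDX[seq[j + k]]] for k in range(W)])
            for j in range(len(seq) - W + 1)
        ])
        starts[i] = rng.choice(len(scores), p=scores / scores.sum())  # sample new start
    return starts

seqs = ["TTTTACGTGCTTTT", "AAAAACGTGCAAAA", "GGACGTGCGGGGGG"]
print(gibbs_motif(seqs, W=6))
```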
17.05: De novo motif discovery
As discussed at the beginning of this chapter, the core problem for motif finding is to define the criteria for what is a valid motif and where motifs are located. Since most motifs are linked to important biological functions, one could subject the organism to a variety of conditions in hope of triggering these biological functions. One could then search for differentially expressed genes, and then use those genes as a basis for which genes are functionally related and thus likely to be controlled by the same motif instance. However, this technique not only relies on prior knowledge of interesting biological functions to probe for, but is also subject to biases in the experimental procedure. Alternatively, one could use ChIP-seq to search for motifs, but this method relies not only on having a known transcription factor of interest, but also requires developing antibodies to recognize said transcription factor, which can be costly and time consuming.
Ideally one would be able to discover motifs de novo, or without relying on an already known gene set or transcription factor. While this seems like a difficult problem, it can in fact be accomplished by taking advantage of genome-wide conservation. Because biological functions are usually conserved across species and have distinct evolutionary signatures, one can align sequences from closely related species and search specifically in conserved regions (also known as islands of conservation) in order to increase the rate of finding functional motifs.
Motif discovery using genome-wide conservation
Conservation islands often overlap known motifs, so doing genome-wide scans through evolutionarily conserved regions can help us discover motifs de novo. However, not all conserved regions will be motifs; for instance, nucleotides surrounding motifs may also be conserved even though they are not themselves part of a motif. Distinguishing motifs from background conserved regions can be done by looking for enrichments which will select more specifically for kmers involved in regulatory motifs. For instance, one can find regulatory motifs by searching for conserved sequences enriched in intergenic regions upstream of genes as compared to control regions such as coding sequences, since one would expect motifs to be enriched in or around promoters of genes. One can also expand this model to find degenerate motifs: we can look for conservation of smaller, non-degenerate motifs separated by a gap of variable length, as shown in the figure below. We can also extend this motif through a greedy search in order to move toward the maximum likelihood motif. Finally, the evolution of motifs can also reveal which motifs are degenerate: since a particular kmer is more likely to belong to a degenerate motif if it is often replaced by another kmer throughout evolution, motif clustering can reveal which kmers are likely to correspond to the same motif.
In fact, this strategy has biological relevance. In 2003, Professor Kellis argued that there must be some selective pressure causing a particular sequence to occur in specific places; his Ph.D. thesis covers this topic in detail.
Validation of discovered motifs with functional datasets
These predicted motifs can then be validated with functional datasets. Predicted motifs with at least one of the following features are more likely to be real motifs:
• Enrichment in co-regulated genes. One can extend this further to larger gene groups; for instance, motifs have been found to be enriched in genes expressed in specific tissues.
• Overlap with TF binding experiments.
• Enrichment in genes from the same complex.
• Positional biases with respect to the transcription start site (TSS): motifs are enriched near gene TSSs.
• Upstream vs. downstream of genes, and inter- vs. intra-genic positional biases: motifs are generally depleted in coding sequences.
• Similarity to known transcription factor motifs: some, but not all, discovered motifs may match known motifs (however, not all motifs are conserved and known motifs may not be exactly correct).
Greedy
While the greedy algorithm is not used very much in practice, it is important to know how it functions and, mainly, its advantages and disadvantages compared to EM and Gibbs sampling. The greedy algorithm works just like Gibbs sampling except for a main difference in Step 4. Instead of randomly selecting a new starting location, it always picks the starting location with the highest probability.
This makes the Greedy algorithm slightly faster than Gibbs sampling but reduces its chances of finding a global maximum considerably. In cases where the starting location probability distribution is fairly evenly distributed, the greedy algorithm ignores the weights of every other starting position other than the most likely.
17.07: Comparing different Methods
The main difference between Gibbs sampling, EM, and the greedy algorithm lies in their maximization step after computing the Z matrix, which is then used to recompute the profile matrix until convergence. Some examples of this matrix are graphically represented in Figure 17.8.
Intuitively, the greedy algorithm will always pick the most probable location for the motif. The EM algorithm will take an average over all values, while Gibbs sampling will actually use the probability distribution given by Z to sample a motif start at each step.
17.08: OOPS, ZOOPS, and TCM
The different types of sequence model make differing assumptions about how and where motif occurrences appear in the dataset. The simplest model type is OOPS (One-Occurrence-Per-Sequence), since it assumes that there is exactly one occurrence per sequence of the motif in the dataset. This is the case we have analyzed in the Gibbs sampling section. This type of model was introduced by Lawrence & Reilly (1990) [2]. A generalization of OOPS, called ZOOPS (Zero-or-One-Occurrence-Per-Sequence), assumes zero or one motif occurrences per dataset sequence. Finally, TCM (Two-Component Mixture) models assume that there are zero or more non-overlapping occurrences of the motif in each sequence in the dataset, as described by Bailey & Elkan (1994) [1]. Each of these types of sequence model consists of two components, which model, respectively, the motif and non-motif (background) positions in sequences. A motif is modelled by a sequence of discrete random variables whose parameters give the probabilities of each of the different letters (4 in the case of DNA, 20 in the case of proteins) occurring in each of the different positions in an occurrence of the motif. The background positions in the sequence are modelled by a single discrete random variable.
17.09: Extension of the EM Approach
ZOOPS Model
The approach presented before (OOPS) relies on the assumption that every sequence is characterized by only one motif (i.e., there is exactly one motif occurrence in a given sequence). The ZOOPS model takes into consideration the possibility of sequences not containing motifs.
In this case, let i be a sequence that does not contain a motif. This extra information is added to our previous model using another parameter λ to denote the prior probability that any position in a sequence is the start of a motif. The prior probability that the entire sequence contains a motif is then γ = (L − W + 1)λ.
The E-Step
The E-step of the ZOOPS model calculates the expected value of the missing information–the probability that a motif occurrence starts in position j of sequence Xi. The formulas used for the three types of model are given below.
where λt is the probability that sequence i has a motif, and Prt(Xi|Qi = 0) is the probability that Xi is generated from a sequence i that does not contain a motif.
The M-Step
The M-step of EM in MEME re-estimates the values for λ using the preceding formulas. The math remains the same as for OOPS; we just also update the values for λ and γ.
The model above takes into consideration sequences that do not have any motifs. The challenge is to also take into consideration the situation in which there is more than one motif occurrence per sequence. This can be accomplished with the more general TCM model. TCM (two-component mixture) is based on the assumption that there can be zero or more motif occurrences per sequence.
Finding Multiple Motifs
All the above sequence model types model sequences containing a single motif (notice that the TCM model can describe sequences with multiple occurrences of the same motif). To find multiple, non-overlapping, different motifs in a single dataset, one incorporates information about the motifs already discovered into the current model to avoid rediscovering the same motif. The three sequence model types assume that motif occurrences are equally likely at each position j in sequences Xi. This translates into a uniform prior probability distribution on the missing data variables Zij. A new prior on each Zij had to be used during the E-step that takes into account the probability that a new width-W motif occurrence starting at position Xij might overlap occurrences of the motifs previously found. To help compute the new prior on Zij we introduce variables Vij, where Vij = 1 if a width-W motif occurrence could start at position j in the sequence Xi without overlapping an occurrence of a motif found on a previous pass. Otherwise Vij = 0.
Every cell has the same DNA, but they all have different expression patterns due to the temporal and spatial regulation of genes. Regulatory genomics explains these complex gene expression patterns. The regulators we will be discussing are:
• Transcription Factor (TF) - Regulates transcription of DNA to mRNA. TFs are proteins which bind to DNA before transcription and either increase or decrease transcription. We can determine the specificity of a TF through experimental methods using the protein or antibodies against it. We can find the genes encoding TFs by their similarity to known TFs.
• Micro RNA (miRNA)- Regulates translation of mRNA to Proteins. miRNAs are RNA molecules which bind to mRNA after transcription and can reduce translation. We can determine the specificity of an miRNA through experimental methods, such as cloning, or computational methods, using conservation and structure.
Open Problems
Both TFs and miRNAs are regulators and we can find them through both experimental and computational methods. We will discuss some of these computational methods, specifically the use of evolutionary signatures. These regulators bind to specific patterns, called motifs. We can predict the motifs to which a regulator will bind using both experimental and computational methods. We will be discussing identification of miRNAs through evolutionary and structural signatures and the identification of both TFs and miRNAs through de novo comparative discovery, which theoretically can find all motifs. Given a motif, it is difficult to find the regulator which binds to it.
A target is a place where a factor binds. There are many sequence matches to a motif, but many of them will not be bound; only a subset will be targets. Targets for a specified regulator can be determined using experimental methods. In Lecture 11, methods for finding a motif given a target were discussed. We will also discuss finding targets given a motif.
18.02: De Novo Motif Discovery
Motif Discovery
Transcription Factors influence the expression of target genes as either activators or repressors by binding to the DNA near genes. This binding is guided by TF sequence specificity. The closer the DNA is to the base preference, the more likely it is that the factor will bind. These motifs can be found both computationally and experimentally. There are three main approaches for discovering these motifs.
• Co-Regulation - In Lecture 11, we discussed a co-regulation type of discovery of motifs by finding sequences which are likely to have the motif bound. We can then use enumerative approaches or alignment methods to find these motifs in the upstream regions. We can apply similar techniques to experimental data where you know where motif is bound.
• Factor Centric - There are also factor centric methods for discovering motifs. These are mostly experimental methods which require a protein or antibody. Examples include SELEX, DIP-Chip, and PBMs. All of these methods are in vitro.
• Evolutionary - Instead of focusing on only one factor, evolutionary methods focus on all factors. We can begin by looking at a single factor and determining which properties we can exploit. There are certain sequences which are preferentially conserved (conservation islands). However, these are not always motifs and instead can be due to chance or non-motif conservation. We can then look at many regions, find more conserved motifs, and determine which ones are more conserved overall. By testing conservation in many regions across many genomes, we increase the power. These motifs have certain evolutionary signatures that help us to identify them: motifs are more conserved in intergenic regions than in coding regions, motifs are more likely to be upstream from a gene than downstream. This is a method for taking a known motif and testing if it is conserved.
We now want to find everything that is more conserved than expected. This can be done using a hill climbing approach. We begin by enumerating the motif seeds, which are typically in 3-gap-3 form. Then, each of these seeds is scored and ranked using a conservation ratio corrected for composition and small counts. These seeds are then expanded to fill unspecified bases around the seed using hill climbing. Through these methods, it is possible to arrive at the same, or very similar seeds in different manners. Thus, our final step consists of clustering the seeds using sequence similarity to remove redundancy.
A final method that we can use is recording the frequency with which one sequence is replaced by another in evolution. This produces clusters of k-mers that correspond to a single motif.
Validating Discovered Motifs
There are many ways that we can validate discovered motifs. Firstly, we expect them to match real motifs, which does happen significantly more often than with random motifs. However, this is not a perfect agreement, possibly due to the fact that many known motifs are not conserved and that known motifs are biased and may have missed real motifs. Discovered motifs also show a positional bias: they are biased towards the transcription start site (TSS).
Motifs also have functional enrichments. If a specific TF is expressed in a tissue, then we expect the upstream region will have that factor’s motif. This also reveals modules of cooperating motifs. We also see that most motifs are avoided in ubiquitously expressed genes, so that they are not randomly turned on and off.
Summary
There are disadvantages to all of these approaches. Both TF and region-centric approaches are not comprehensive and are biased. TF centric approaches require a transcription factor or antibody, take lots of time and money, and also have computational challenges. De novo discovery using conservation is unbiased, but it can’t match motifs to factors and requires multiple genomes. | textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/18%3A_Regulatory_Genomics/18.01%3A_Introduction_to_Regulatory_Genomics.txt |
Motif Instance Identification
Once potential motifs are discovered, the next step is to discover which motif matches are real. This can be done by both experimental and computational methods.
• Experimental - Instances can be identified experimentally using ChIP-chip and ChIP-Seq methods. Both of these are in vivo methods. Cells are treated so that proteins are cross-linked to the DNA they are bound to, and the DNA is then broken into fragments. An antibody against the factor (or a tag on the factor) is used to pull out the bound fragments, and the cross-linking is then reversed. This allows us to determine where in the genome the factor was bound. This has a high false positive rate because there are many instances where a factor binds but is not functional. This is a very popular experimental method, but it is limited by the availability of antibodies, which are difficult to obtain for many factors.
• Computational - There are also many computational approaches to identify instances. Single-genome approaches use motif clustering. They look for many matches to increase power and are able to find regulatory regions (CRMs). However, they miss instances of motifs that occur alone and require a set of specific factors that act together. Multi-genome approaches, known as phylogenetic footprinting, face many challenges. They begin by aligning many sequences, but even in functional motifs, sequences can move, mutate, or be missing. The approach taken by Kheradpour handles this by not requiring perfect conservation (by using a branch length score) and by not requiring an exact alignment (by searching within a window).
Branch Length Scores (BLS) are computed by taking a motif match and searching for it in other species. Then, the smallest subtree containing all species with a motif match is found. The percentage of the total tree covered by this subtree is the BLS. Calculating the BLS in this way allows for mutations permitted by motif degeneracy, misalignment and movement within a window, and missing motifs in dense species trees.
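To make the BLS calculation concrete, the sketch below uses a toy four-species phylogeny with made-up branch lengths and computes the fraction of the total tree covered by the smallest subtree connecting the species that contain the motif match.

```python
# Toy phylogeny as nested tuples (name, branch_length_to_parent, children);
# the species set and branch lengths are invented for illustration.
TREE = ("root", 0.0, [
    ("hominini", 0.2, [("human", 0.1, []), ("chimp", 0.1, [])]),
    ("rodents",  0.5, [("mouse", 0.3, []), ("rat",   0.3, [])]),
])

def total_branch_length(node):
    _, _, children = node
    return sum(child[1] + total_branch_length(child) for child in children)

def _collect(node, targets, spanning):
    """Return how many target species lie below this node and record every branch
    that lies on a path between two target species (the minimal spanning subtree)."""
    name, length, children = node
    below = (1 if name in targets else 0) + sum(_collect(c, targets, spanning) for c in children)
    if 0 < below < len(targets):
        spanning.append(length)
    return below

def branch_length_score(species_with_match):
    """Fraction of the total tree covered by the smallest subtree containing
    every species in which the motif match was found."""
    targets, spanning = set(species_with_match), []
    _collect(TREE, targets, spanning)
    return sum(spanning) / total_branch_length(TREE)

print(branch_length_score({"human", "chimp"}))                  # shallow clade: low BLS
print(branch_length_score({"human", "chimp", "mouse", "rat"}))  # whole tree: BLS = 1.0
```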
This BLS is then translated into a confidence score. This enables us to evaluate the likelihood of a given score and to account for differences in motif composition and length. We calculate this confidence score by counting all motif instances and control motif instances at each BLS. We then want to see which fraction of the motif instances seem to be real. The confidence score is then signal/(signal+noise). The control motifs used in this calculation are produced by making 100 shuffles of the original motif and filtering the results by requiring that they match the genome within +/- 20% of the original motif. These are then sorted based on their similarity to known motifs and clustered. At most one motif is taken from each cluster, in increasing order of similarity, to produce our control motifs.
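A minimal sketch of the confidence calculation, following the signal/(signal+noise) definition given above; the instance counts are hypothetical.

```python
def confidence(motif_count, control_count):
    """Confidence at a given BLS threshold: the excess of motif instances over
    control-motif instances, as a fraction of all motif instances at that threshold."""
    signal = max(motif_count - control_count, 0)
    return signal / motif_count

print(confidence(motif_count=500, control_count=100))   # 0.8
```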
Validating Targets
Similar to motif discovery, we can validate targets by seeing where they fall in the genome. Increasing confidence selects for TF motif instances in promoters and miRNA motif instances in 3' UTRs, which is what we expect. TF motifs can occur on either strand, whereas miRNA target sites must fall on only one strand. Thus, although there is no strand preference for TF motif instances, miRNA target instances are found preferentially on the plus (transcribed) strand.
Another method of validating targets is by computing enrichments. This requires a background and a foreground set of regions; these could be the promoters of co-regulated genes vs. all promoters, or regions bound by a factor vs. other intergenic regions. Enrichment is computed by comparing the fraction of motif instances that fall inside the foreground to the fraction of bases in the foreground. Composition and conservation level are corrected for with control motifs. These fractions can be made more conservative using a binomial confidence interval.
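As a rough illustration of this calculation, the sketch below computes a conservative enrichment by replacing the observed foreground fraction with the lower bound of a binomial (Wilson) confidence interval. The function names and the toy numbers are hypothetical, and control-motif correction is omitted.

```python
import math

def wilson_lower(k, n, z=1.96):
    """Lower bound of the Wilson score interval for a binomial proportion."""
    if n == 0:
        return 0.0
    p = k / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom

def motif_enrichment(inst_fg, inst_total, bases_fg, bases_total):
    """Conservative enrichment of motif instances in a foreground region set.

    inst_fg / inst_total   : motif instances in the foreground / everywhere
    bases_fg / bases_total : bases covered by the foreground / by all regions
    """
    frac_inst = wilson_lower(inst_fg, inst_total)   # conservative observed fraction
    frac_bases = bases_fg / bases_total             # fraction expected by chance
    return frac_inst / frac_bases

# toy example: 60 of 200 instances fall in a foreground covering 10% of the bases
print(motif_enrichment(60, 200, 1_000_000, 10_000_000))  # roughly 2.4-fold (conservative)
```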
Targets can then be validated by comparing to experimental instances found using ChIP-seq. This shows that conserved CTCF motif instances are highly enriched in ChIP-seq sites, and increasing confidence also increases enrichment. Using this, many motif instances are verified. ChIP-seq does not always find functional motifs, so these results can be further verified by comparing to conserved bound regions; the enrichment in the intersection is dramatically higher. This shows where factors are binding in a way that has an effect worth conserving in evolution. The two approaches are complementary and are even more effective when used together.
MiRNA Gene Discovery
MiRNAs are post-transcriptional regulators that bind to mRNAs to silence a gene. They are an extremely important regulator in development. They are formed when a miRNA gene is transcribed from the genome; the resulting transcript folds back on itself to form a hairpin at some point. This hairpin is processed, trimmed, and exported to the cytoplasm. Then, another protein trims the hairpin and one strand is incorporated into a RISC complex. By doing this, it tells the RISC complex where to bind, which determines which gene is turned off. The second strand is usually discarded; determining which strand is which is itself a computational problem. The computational problem here is how to find the genes which correspond to these miRNAs.
The first problem is finding hairpins. Simply folding the genome produces approximately 760,000 hairpins, but there are only 60 to 200 true miRNAs, so we need methods to improve specificity. Structural features, including folding energy, loops (number, symmetry), hairpin length and symmetry, and substructures and pairings, can be considered; however, this only increases specificity by a factor of 40, so structure alone cannot predict miRNAs. Evolutionary signatures can also be considered, since miRNAs show characteristic conservation properties. Hairpins consist of a loop, two arms, and flanking regions. In most structural RNAs, the loop is the most well conserved because it is used in binding. In miRNAs, however, the arms are more conserved because they determine where the RISC complex will bind. This increases specificity by a factor of 300. Both these structural features and conservation properties can be combined to better predict potential miRNAs.
These features are combined using machine learning, specifically random forests. This approach trains many weak classifiers (decision trees) on subsets of the positives and negatives, and each tree then votes on the final classification of a given candidate hairpin. Using this technique allows us to reach the desired specificity (increased by 4,500-fold over simply folding the genome).
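A minimal sketch of this kind of classifier, using scikit-learn's random forest on a placeholder feature matrix; the features, labels, and parameter choices here are illustrative rather than those used in the original study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature matrix: one row per candidate hairpin, with structural
# features (folding energy, loop symmetry, hairpin length, ...) and
# conservation features (arm vs. loop conservation, flanking conservation, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))          # placeholder features
y = rng.integers(0, 2, size=1000)       # 1 = known miRNA, 0 = background hairpin

forest = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                                class_weight="balanced", random_state=0)
forest.fit(X, y)

# Each tree votes; predict_proba gives the fraction of trees voting "miRNA",
# which can be thresholded to trade sensitivity against specificity.
scores = forest.predict_proba(X)[:, 1]
```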
Validating Discovered MiRNAs
Discovered miRNAs can be validated by comparing to known miRNAs. An example given in class shows that 81% of discovered miRNAs were already known to exist, which shows that these methods perform well. The remaining putative miRNAs have yet to be tested; however, this can be difficult to do, as testing is done by cloning.
Region specificity is another method for validating miRNAs. In the background, hairpins are fairly evenly distributed between introns, exons, intergenic regions, and repeats and transposons. Increasing confidence in predictions causes almost all miRNAs to fall in introns and intergenic regions, as expected. These predictions also match sequencing reads.
This analysis also recovered genomic properties typical of miRNAs. They have a preference for the transcribed strand, which allows them to piggyback in the intron of a real gene and thus not require separate transcription. They also cluster with known and predicted miRNAs, which indicates that they are in the same family and have a common origin.
MiRNA’s 5’ End Identification
The first seven bases determine where a miRNA binds, so it is important to know exactly where cleavage occurs. If this cleavage point is wrong by even two bases, the miRNA will be predicted to bind to a completely different gene. These cleavage points can be discovered computationally by searching for highly conserved 7-mers which could be targets. These 7-mers also correlate with a lack of anti-targets in ubiquitously expressed genes. Using these features, together with structural and conservation features, it is possible to take a machine learning approach (SVMs) to predict the cleavage site. Some miRNAs have no single high-scoring position, and these also show imprecise processing in the cell. If the star sequence is highly scored, then it also tends to be more expressed in the cell.
Functional Motifs in Coding Regions
Each motif type has distinct signatures. DNA motifs are strand-symmetric, RNA motifs are strand-specific and frame-invariant, and protein-level motifs are strand-specific and frame-biased. This frame-invariance can be used as a signature, and each reading-frame offset can be evaluated separately. Motifs due to di-codon usage biases are conserved in only one frame offset, while motifs due to RNA-level regulation are conserved in all three frame offsets. This makes it possible to distinguish overlapping selective pressures.
The human body contains approximately 210 different cell types, but each cell type shares the same genomic sequence. In spite of having the same genetic code, cells not only develop into distinct types from this same sequence, but also maintain the same cell type over time and across divisions. This information about the cell type and the state of the cell is called epigenomic information. The epigenome (“epi” means above in Greek, so epigenome means above genome) is the set of chemical modifications or marks that influence gene expression and are transferred across cell divisions and, in some limited cases, across generations of organisms.
As shown in Figure 19.1, epigenomic information in a cell is encoded in diverse ways. For example, methylation of DNA (e.g. at CpG dinucleotides) can alter gene expression. Similarly, positioning of nucleosomes (the unit of packing of DNA) determines which parts of DNA are accessible for transcription factors and other enzymes to bind. Almost two decades of work have revealed hundreds of post-translational modifications of histone tails. Since an extremely large number of histone modification states are possible for any given histone tail, the "histone code hypothesis" has been proposed. This hypothesis states that particular combinations of histone modifications encode information. Although a controversial hypothesis, it has guided the epigenetics field. The core of epigenetics is understanding how chemical modifications to chromatin (be they DNA methylation, histone modifications or chromatin architecture) are established and how the cell "interprets" this information to establish and maintain gene expression states.
In this chapter we will explore the experimental and computational techniques used to uncover chromatin states within a cell type. We will learn how chromatin immunoprecipitation can be used to infer the regions of the genome bound by a protein of interest, and how a common algorithm (the Burrows-Wheeler transform) can be used to rapidly map large numbers of short sequencing reads to a reference genome. From this we then abstract a level and use a hidden Markov model (HMM) to segment the genome into regions which share similar chromatin states. We will close by showing how these comprehensive maps of chromatin states can be compared across cell types and can be used to provide information on how cell states are established and maintained and on the impact of genetic variation on gene expression.
19.02: Epigenetic Information in Nucleosomes
In order to fit two meters of DNA into a 5-20 μm diameter cell nucleus and arrange the DNA for easy access to transcriptional machinery, DNA is packaged into chromatin. Nucleosomes form the unit of this packaging. A nucleosome is composed of DNA approximately 150-200 bp long wrapped around an octamer consisting of two copies each of histone proteins H2A, H2B, H3, and H4 (and occasionally a linker histone H1 or H5). While the structure and importance of higher-level packaging of nucleosomes is less known, the lower-level arrangement and modification of nucleosomes is very important to transcriptional regulation and the development of different cell types. Histone proteins H3 and H4 are the most highly conserved proteins in the eukaryotic domain of life.
Nucleosomes encode epigenetic information in two main ways: chromatin accessibility and histone modifications.
First, the nucleosomes’ positions on the DNA determine which parts of DNA are accessible. Nucleosomes are often positioned at the promoters of inactive genes. To initiate transcription of a gene, transcription factors (TFs) and the RNA polymerase complex have to bind to its promoter. Therefore, when a gene becomes active, the nucleosomes located at its promoter are often removed from the promoter to allow RNA polymerase to initiate transcription. Hence, nucleosome positioning on the DNA is stable, yet mutable. This property of stability and mutability is a prerequisite for any form of epigenetic information because cells need to maintain the identity of a particular cell type, yet still be able to change their epigenetic state to respond to environmental circumstances.
Chromatin accessibility can also be modulated by transcribed RNA (specifically, "enhancer RNA," or eRNA) floating around the nucleus. In particular, Mousavi et al. found in 2013 that eRNAs, which are transcribed at extragenic enhancer regions, enhance RNA pol II occupancy (which is rate-limited by chromatin accessibility) and deployment of other transcriptional machinery, leading to enhanced expression of distal target genes [9].
Second, histones contain unstructured tails protruding from the globular core domains that comprise the nucleosome octamer. These tails can undergo post-translational modification such as methylation, acetylation and phosphorylation, each of which affect gene expression. Some proteins involved in transcriptional regulation bind specifically to particular histone modifications or combinations of modifications, and recruit yet more transcription factors which enhance or repress expression of nearby genes. Thus, the "histone code hypothesis" posits that different combinations of histone modifications at specific genomic loci encode biological function via differential transcriptional regulation. In this model, histone modifications are analogous to different readers marking sections of a book with different-colored post-it notes – histone modifications allow the same genome to be interpreted (i.e., transcribed) differently at different times and in different tissues. There are over 100 distinct histone modifications that have been found experimentally. Six of the most well-characterized histone modifications, along with the typical signature widths of their appearances in the genome and their putative associated regulatory elements, are listed in Table 19.1. Note that all of these modifications are on lysines in H3 and H4. Modifications of H3 and H4 are most well-characterized because H3 and H4 are the most highly conserved histones (making modifications of those histones more likely to have conserved regulatory function) and because good antibodies exist for all of the commonly-observed modifications of those histones.
Histone modifications are so commonly referenced that a shorthand has been developed to identify them. This shorthand consists of the name of the histone protein, the amino acid residue on its tail that has been modified, and the type of modification made to this residue. To illustrate, the fourth residue from the N-terminus of histone H3, a lysine, is often methylated at the promoters of active genes. This modification is described as H3K4me3 (if methylated thrice). The first part of the shorthand corresponds to the histone protein, in this case H3; K4 corresponds to the 4th residue from the N-terminus, in this case a lysine; and me3 corresponds to the actual modification, the addition of 3 methyl groups in this case.
| Histone Modification | Signature | Associated Regulatory Element |
| H3K4me1 | focal (wide) | active promoters / enhancers |
| H3K4me3 | focal (wide) | active promoters / enhancers |
| H3K9me3 | wide | repressed regions |
| H3K27ac | focal | active promoters / enhancers |
| H3K27me3 | wide | repressed regions |
| H3K36me3 | wide | transcribed regions |
Table 19.1: Six of the most well-characterized histone modifications along with their typical signature widths and putative associated regulatory elements. “Focal” indicates that each instance of the histone modification has a relatively narrow signature in the genome (peak width < 5kb) whereas “wide” indicates wide signatures.
One example of epigenetic modifications influencing biological function is often seen in enhancer regions of the genome. Often these enhancer regions are far away from the genes and promoters that they regulate. The enhancer is able to come into contact with a specific promoter through histone modification (acetylation and methylation), which causes the DNA to fold upon itself to bring the promoter, enhancer, and recruited transcription factors into contact, activating the previously repressed promoter. This system can be very dynamic: less than a minute after histone modification the cell can show signs of epigenetic influence, while other modifications (mainly those during development) manifest more slowly. This is also an example of how certain types of histone modifications can help us to predict enhancer regions.
It is possible for more than one histone modification to be present at a given genomic locus, and histone modifications thereby can act cooperatively and competitively. It is even possible for the two copies of a given histone protein within the same nucleosome to have different modifications (though usually the histone modification "writers" localize together, thereby creating the same modification on both copies within the nucleosome). Thus, it is necessary to simultaneously take into account all histone modifications in a genomic region in order to accurately call the chromatin state of that region. As described later in this chapter, with the completion of the Roadmap Epigenome Project in 2015, a robust hidden Markov model (with histone modifications as emissions and chromatin states as hidden states) can be used to do so.
Did You Know?
The simplest organisms that have epigenetic modifications are yeasts. Yeast is a single celled organism; thus, epigenetic modifications are not responsible for cell differentiation. As organisms become more complex they tend to have more epigenetic modifications.
Epigenetic Inheritance
The extent to which epigenetic/epigenomic features are heritable is poorly understood and is therefore the subject of much debate and ongoing investigation. In organisms that reproduce sexually, most epigenetic modifications are lost during meiosis and/or at fertilization, but some modifications are sometimes maintained. Additionally, biases exist in the ways in which paternal versus maternal epigenetic marks are removed or remodeled during this process. In particular, maternal DNA methylation is often retained at fertilization, whereas paternal DNA is almost always completely demethylated. Furthermore, for unknown reasons, some genomic elements, such as centromeric satellites, are more likely to evade epigenetic reset. In cases where epigenomic erasure does not occur completely at meiosis and fertilization, trans-generational epigenetic inheritance can occur. See generally [4].
Another mechanism likely to be governed by epigenetic inheritance is the phenomenon of parental imprinting. In parental imprinting, certain autosomal genes are expressed if and only if they are inherited from an individual's mother, and other autosomal genes are expressed if and only if they are inherited from an individual's father. Examples are the Igf2 gene in mice (only expressed if inherited from the father) and the H19 gene in mice (only expressed if inherited from the mother). There are no changes in the DNA sequence of these genes, but extra methyl groups are observed on certain nucleotides within the inactivated copy of the gene. The mechanisms and causality of this imprinting are poorly understood.
ChIP: a method for determining where proteins bind to DNA or where histones are modified
Given the importance of epigenomic information in biology, great efforts have been made to study signals that quantify this information. One common method for epigenomic mark measurement is called chromatin immunoprecipitation (ChIP). ChIP technology yields fragments of DNA whose location in the genome denote the positions of a particular histone modification or transcription factor. The procedures of ChIP are described as follows and are depicted in Figure 19.2:
1. Cells are exposed to a cross-linking agent such as formaldehyde, which causes covalent bonds to form between DNA and its bound proteins (e.g., histones with specific modifications).
2. Genomic DNA is isolated from the cell nucleus.
3. Isolated DNA is sheared by sonication or enzymes.
4. Antibodies are grown to recognize a specific protein, such as those involved in histone modification. The antibodies are grown by exposing the proteins of interest to mammals, such as goats or rats, whose immune response then causes the production of the desired antibodies.
5. Antibodies are added to the solution to immunoprecipitate and purify the complexes.
6. The cross-linking between the protein and DNA is reversed and the DNA fragments specific to the epigenetic marks are purified.
After a ChIP experiment, we have short sequences of DNA that correspond to places where histones were bound to the DNA. To identify the location of these DNA fragments in the genome, one can hybridize them to known DNA segments on an array or gene chip and visualize them with fluorescent marks; this method is known as ChIP-chip. Alternatively, one can do massive parallel next-generation sequencing of these fragments; this is known as ChIP-seq. The latter approach, ChIP-seq, is a newer approach that is used much more frequently. It is preferred because it has a wider dynamic range of detection and avoids problems like cross-hybridization in ChIP-chip.
Each sequence tag is 30 base pairs long. These tags are mapped to unique positions in the reference genome of 3 billion bases. The number of reads depends on sequencing depth, but typically there are on the order of 10 million mapped reads for each ChIP-seq experiment.
There is a fairly standard pipeline used to infer the enrichment of the protein of interest at each site in the genome given a set of short sequencing reads from a ChIP-seq experiment. First, the DNA fragments must be mapped to the reference genome (called read mapping). Next, we must determine which regions of the genome have statistically significant enrichment of the protein of interest (called peak calling). After these preprocessing steps, we can build different supervised and unsupervised models to study chromatin states and their relation to biological function. We look at each of these steps in turn.
Bisulfite Sequencing: a method for determining where DNA is methylated
DNA methylation was the first epigenomic modification to be discovered and is an important transcriptional regulator, in that the methylation of cytosine residues in CpG dinucleotides results in "silencing," or repression, of transcription. Bisulfite sequencing is a method by which DNA is treated with bisulfite before sequencing, allowing the precise determination of the nucleotides at which the DNA had been methylated. Bisulfite treatment converts unmethylated cytosine residues to uracil, but does not affect methylated cytosines. Thus, genomic DNA can be sequenced with or without bisulfite treatment, the sequences can be compared, and the sites at which cytosine has not been converted to uracil in the treated DNA (or, equivalently, sites at which there is no bisulfite-generated difference between the treated and untreated sequences) are sites at which cytosine was methylated. This analysis assumes complete conversion of unmethylated cytosine residues to uracil, so incomplete conversion can result in false positives (i.e., nucleotides identified as methylated but which in fact were not methylated) [11].
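A toy sketch of the comparison step, using the reference genome to stand in for the untreated sequence and assuming the bisulfite-converted read is already aligned with no indels and that conversion is complete. The function name and example sequences are made up.

```python
def call_methylation(reference, bisulfite_read):
    """Naive per-base methylation calls from a bisulfite-converted read aligned
    to its reference (same length, same strand, no indels).

    Unmethylated C is converted to U (read as T after PCR); methylated C stays C.
    """
    calls = {}
    for i, (ref_base, read_base) in enumerate(zip(reference, bisulfite_read)):
        if ref_base != "C":
            continue
        if read_base == "C":
            calls[i] = "methylated"
        elif read_base == "T":
            calls[i] = "unmethylated"   # assumes complete bisulfite conversion
        else:
            calls[i] = "ambiguous"      # sequencing error or SNP
    return calls

print(call_methylation("ACGTCCG", "ACGTTCG"))
# {1: 'methylated', 4: 'unmethylated', 5: 'methylated'}
```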
Read mapping
The problem of read mapping seeks to assign a given read to the best matching location in the reference genome. Given the large number of reads and the size of human genome, one common requirement of all read mapping algorithms is that they be efficient in both space and time. Furthermore, they must allow mismatches due to sequencing errors and SNPs.
Based on previous lectures, we know various ways to perform mapping of reads: sequence alignment (O(mn) time) and hash-based approaches such as BLAST, for example. Other approaches exist as well: linear-time string matching (O(m + n) time) and suffix trees and suffix arrays (O(m) time per query). A problem with these techniques, however, is memory: dynamic-programming alignment requires O(mn) space, and suffix trees and arrays, while linear in the genome size, carry large constant factors. Instead, state-of-the-art techniques based on the Burrows-Wheeler transform [1] are used. These run in O(m) time and require just O(n) space.
The Burrows-Wheeler transform originally arose from the need to compress information. It takes a long string and rearranges it so that identical letters tend to become adjacent. This string can be compressed because, for example, instead of writing 100 A's the computer can now just indicate that there are 100 A's in a row. The Burrows-Wheeler transform also has some other special properties that we will exploit to search in sublinear time.
The Burrows-Wheeler transform produces a string of the same length as the original (it is a permutation of the original characters), but one that is far more compressible. It also can be reversed easily to generate the original string, so no information is lost. Because the transform is derived from the sorted rotations of the string, it also allows for easy searching. The details of the Burrows-Wheeler transformation are described below and are illustrated in Figure 19.3.
First, we produce a transform from an original string by the following steps. In particular, we produce a transform of the reference genome.
1. For a given reference genome, add a special character at the beginning and end of the string (e.g., “BANANA” becomes ^BANANA@). Then generate all the rotations of this string (e.g., one such rotation would be NANA@^BA).
2. Sort the rotations lexicographically — i.e., in alphabetical order — with special characters sorted last.
3. Only keep the last column of the sorted list of rotations. This column contains the transformed string.
Once a Burrows-Wheeler transform has been computed, it is possible to reverse the transform to recover the original string. Briefly, the reverse transformation works as follows: given the transformed string, sort its characters in alphabetical order; this gives the first column of the sorted rotations. Combine the last column with the first to get pairs of characters from the original rotations. Sort the pairs and repeat with progressively longer prefixes until the full rotations are recovered.
By using sorting pointers rather than full strings, it is possible to generate this transform of the reference genome using space that is linear in its size. Furthermore, even with a very large number of reads, it is only necessary to compute the transform once in the forward direction. After counting the reads in the transformed space, it is then only necessary to do the reverse transform once to map the counts to genome coordinates.
In particular, from the Burrows-Wheeler transform we observe that all occurrences of the same suffix are effectively next to each other rather than scattered throughout the genome. Moreover, the ith occurrence of a character in the first column corresponds to the ith occurrence in the last column. Searching for substrings using the transform is also easy. Suppose we are looking for the substring “ANA” in the given string. Then the problem of search is reduced to searching for a prefix “ANA” among all possible sorted suffixes (generated by rotations). The last letter of the substring (“A”) is first searched for in the first letters of the sorted rotations. Then, the one-letter rotations of these matches are considered; the last two letters of the substring (“NA”) are searched for among the first two letters of these one-letter rotations. This process can be continued with increasing length suffixes to find the substring as a prefix of a rotation. Specifically, each read is searched for and is found as a prefix of a rotation of the reference genome; this gives the position of the read in the genome. By doing a reverse transform, it is possible to find the genomic coordinates of the mapped reads.
Note that this idea is no faster in theory than hashing, but it can be faster in practice because it uses a smaller memory footprint.
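The following sketch illustrates both steps on a toy string: building the transform by sorting rotations, and counting occurrences of a query with the backward search described above. For simplicity it appends a single terminator character `$` rather than the `^`/`@` pair used in the text, and the rank computation is done naively rather than with the checkpointed data structures a real aligner would use.

```python
def bwt(text, terminator="$"):
    """Burrows-Wheeler transform: last column of the sorted rotations."""
    text = text + terminator                      # unique, lexicographically small end marker
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def count_occurrences(bwt_str, pattern):
    """Backward search: count occurrences of pattern using only the BWT.

    first[c]  : index of the first sorted rotation starting with c
    occ(c, i) : number of c's in bwt_str[:i], computed naively here (O(n))
    """
    first = {}
    for i, c in enumerate(sorted(bwt_str)):
        first.setdefault(c, i)
    occ = lambda c, i: bwt_str[:i].count(c)

    lo, hi = 0, len(bwt_str)                      # current interval of matching rotations
    for c in reversed(pattern):                   # extend the match one character at a time
        if c not in first:
            return 0
        lo = first[c] + occ(c, lo)
        hi = first[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

genome = "BANANA"
transform = bwt(genome)                           # 'ANNB$AA'
print(transform, count_occurrences(transform, "ANA"))   # 'ANA' occurs 2 times
```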
Quality control metrics
As with all experimental data, ChIP methods contain biases and their output may be of varied quality. As a result, before processing the data, it is necessary to control for these biases, to determine which reads in the data achieve a certain level of quality, and to set target thresholds on the quality of the data set as a whole. In this section we will describe these quality control problems and metrics associated with them.
QC1: Use of input DNA as control
First, the reads given by ChIP are not uniformly scattered across the genome. For example, accessible regions of the genome can be fragmented more easily, leading to non-uniform fragmentation. To control for this bias, we can run the ChIP experiment on the same portion of DNA without using an antibody. This yields input DNA, which can then be fragmented and mapped to give a signal track that can be thought of as a background — i.e., reads we would expect by chance. (Indeed, even in the background we do not see uniformity.) Additionally, we have a signal track for the true experiment, which comes from the chromatin-immunoprecipitated DNA. This is shown in Figure 19.4.
QC2: Read-level sequencing quality score threshold
When sequencing DNA, each base pair is associated with a quality score. Thus, the reads given by ChIP-seq contain quality scores at the base-pair level, where lower quality scores imply a greater probability of mismappings. We can easily use this information in a preprocessing step by simply rejecting any reads whose average quality score falls below some threshold (e.g., only use reads where Q, the average quality score, is greater than 10). This is shown in Figure 19.5.
QC3: Fraction of short reads mapped
Each read that passes the above quality metric may map to exactly one location in the genome, to multiple locations, or to no locations at all. When reads map to multiple locations, there are a number of approaches for handling this:
• A conservative approach: We do not assign the reads to any location because we are so uncertain. Con: we can lose signal
• A probabilistic approach: We fractionally assign the reads to all locations. Con: can add artifacts (unreal peaks)
• A sampling approach: We only select one location at random for a read. Chances are, across many reads, we will assign them uniformly. Con: can add artifacts (unreal peaks)
• An EM approach: We can map reads based on the density of unambiguous reads. That is, many unique reads that map to a region give a high prior probability that a read maps to that region. Note: we must make the assumption that the densities are constant within each region
• A paired-end approach: Because we sequence both ends of a DNA fragment, if we know the mapping of the read from one end, we can determine the mapping of the read at the other end even if it is ambiguous.
Either way, there will likely be reads that do not map to the genome. One quality control metric would be considering the fraction of reads that map; we may set a target of 50%, for instance. Similarly, there may be regions to which no reads map. This may be due to a lack of assembly coverage or too many reads mapping to the region; we treat unmappable regions as missing data.
QC4: Cross-correlation analysis
An additional quality control is cross-correlation analysis. If single-end reads are employed, a DNA-binding protein will generate a peak of reads mapping to the forward strand offset by a distance roughly equal to the DNA fragment length from a peak of reads mapping to the reverse strand. A similar pattern is generated from paired-end reads, in which read ends fall into two groups with a given offset: one read end maps to the forward strand and the other to the reverse strand. The average fragment length can be inferred by computing the correlation between the number of reads mapping to the forward strand and the number of reads mapping to the reverse strand as a function of the distance between the forward and reverse reads. The correlation peaks at the mean fragment length.
The cross-correlation analysis also provides information on the quality of the ChIP-seq data set. Input DNA should not contain any real peaks, but often shows a strong cross-correlation at a distance equal to the read length. This occurs because some reads map uniquely in between regions that are unmappable. If a read can map uniquely at position x in between two unmappable regions on the forward strand, then a read can also map uniquely to the reverse strand at position x + r - 1, where r is the read length. Reads that map in this manner generate the strong cross-correlation at distance equal to the read length in the input DNA. If a ChIP-seq experiment was unsuccessful and did not significantly enrich for the protein of interest, then a large component of the reads will be similar to the unenriched input, which will produce a peak in the cross-correlation at read length. Thus, the strength of the cross-correlation at read length relative to the strength at fragment length can be used to evaluate the quality of the ChIP-seq data set. Acceptable ChIP-seq libraries should have a cross-correlation at fragment length at least as high as at read length, and the higher the ratio between the fragment-length cross-correlation and the read-length cross-correlation, the better.
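A simplified sketch of this computation, assuming we already have the 5' mapping positions of reads on each strand of a single chromosome; the function and variable names are illustrative, and production tools compute this far more efficiently.

```python
import numpy as np

def strand_cross_correlation(fwd_starts, rev_starts, genome_length, max_shift=400):
    """Pearson correlation between forward- and reverse-strand read-start
    coverage as a function of strand shift; the peak estimates fragment length.
    """
    fwd = np.bincount(fwd_starts, minlength=genome_length).astype(float)
    rev = np.bincount(rev_starts, minlength=genome_length).astype(float)
    cc = []
    for shift in range(max_shift + 1):
        # compare forward coverage at position i with reverse coverage at i + shift
        f = fwd[: genome_length - shift]
        r = rev[shift:]
        cc.append(np.corrcoef(f, r)[0, 1])
    return np.array(cc)   # np.argmax(cc) is the estimated mean fragment length
```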
QC5: Library Complexity
As a final quality control metric, we can consider the complexity of the library, or the fraction of reads that are non-redundant. In a region with signal, we might expect reads to come from all positions in that region; however, we sometimes see that only a small number of positions in a region have reads mapping to them. This may be the result of an amplification artifact in which a single DNA fragment is amplified much more than it should be. Consequently, we consider the non-redundant fraction of a library:
$\mathrm{NRF}=\frac{\text { No. of distinct unique-mapping reads }}{\text { No. of unique mapping reads }} \nonumber$
This value measures the complexity of the library. Low values indicate low complexity, which may occur, for example, when there is insufficient DNA or one DNA fragment is over-sequenced. When working with at least 10 million uniquely mapped reads, we typically set a target of at least 0.8 for the NRF.
Peak Calling and Selection
After reads are aligned, signal tracks as shown in Figure 19.6 can be generated. This data can be ordered into a long histogram spanning the length of the genome, which corresponds to the number of reads (or degree of fluorescence in the case of ChIP-chip) found at each position in the genome. More reads (or fluorescence) suggests a stronger presence of the epigenetic marker of interest at this particular location.
In particular, to generate these signal tracks we transform the read counts into a normalized intensity signal. First, we can use the strand cross-correlation analysis to estimate the fragment length distribution f. Since we now know f, as well as the length of each read, we can extend each read (typically just 36 bp) from the 5’ to 3’ direction so that its length equals the average fragment length. Then, rather than just summing the intensity of each base in the original reads, we can sum the intensity of each base in the extended reads from both strands. In other words, even though we only sequence a small read, we are able to use information about an entire segment of which that read is a part. We can do this same operation on the control data. This yields signal tracks for both the true experiment and the control, as shown in Figure 19.7.
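The read-extension step can be sketched as follows, assuming single-end reads with known strands and a fragment length estimated from the cross-correlation analysis; the names and details are illustrative.

```python
import numpy as np

def signal_track(read_starts, strands, fragment_len, genome_length):
    """Build a signal track by extending each mapped read to the estimated
    fragment length and summing per-base coverage over both strands.

    read_starts : 5' mapping position of each read
    strands     : '+' or '-' for each read
    """
    track = np.zeros(genome_length)
    for pos, strand in zip(read_starts, strands):
        if strand == "+":
            start, end = pos, pos + fragment_len          # extend 5' -> 3'
        else:
            start, end = pos - fragment_len + 1, pos + 1  # extend leftward on '-' strand
        track[max(start, 0):min(end, genome_length)] += 1
    return track
```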
To process the data, we are first interested in using these signal tracks to discover regions (i.e., discrete intervals) of enrichment. This is the goal of peak calling. There are many programs that perform peak calling with different approaches. For example, MACS uses a local Poisson distribution as its statistical model, whereas PeakSeq uses a conditional binomial model.
One way to model the read count distribution is with a Poisson distribution. We can estimate the expected read count, $\lambda_{\text {local }}$, from the control data. Then,
$\operatorname{Pr}(\text { count }=x)=\frac{\lambda_{\text {local }}^{x} e^{-\lambda_{\text {local }}}}{x !} \nonumber$
Thus, the Poisson p-value for a read count x is given by Pr(count ≥ x). We specify a threshold p-value (e.g., 0.00001) below which genomic regions are considered peaks.
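A minimal sketch of this per-bin test, using SciPy's Poisson survival function and taking the local rate directly from the input-DNA control; the pseudocount and function name are assumptions rather than part of any particular peak caller.

```python
import numpy as np
from scipy.stats import poisson

def call_peaks(chip_counts, control_counts, p_threshold=1e-5):
    """Flag genomic bins whose ChIP read count is unexpectedly high under a
    local Poisson model whose rate is estimated from the input-DNA control."""
    chip_counts = np.asarray(chip_counts)
    # local expected rate; a small pseudocount avoids a zero rate in empty bins
    lam_local = np.maximum(np.asarray(control_counts, dtype=float), 0.5)
    # Pr(count >= x) = Pr(count > x - 1) under Poisson(lambda_local)
    pvals = poisson.sf(chip_counts - 1, lam_local)
    return pvals < p_threshold, pvals

# swapping the roles of the ChIP and control tracks gives the "control peaks"
# used below to estimate an empirical FDR
```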
We can transform this p-value into an empirical false discovery rate, or eFDR, by swapping the ChIP (true) experiment data with the input DNA (control) tracks. This would yield the locations in the genome where the background signal is higher than the ChIP signal. For each p-value, we can count the peaks called from both the ChIP data and the control data. Then, for each p-value, the eFDR is simply the number of control peaks divided by the number of ChIP peaks. With this, we can then choose which peaks to call based on an eFDR threshold.
A major problem that arises is that no single universal eFDR or p-value threshold can be used. Ideal thresholds depend on a range of factors, including the ChIP, the sequencing depth, and the ubiquity of the target factor. Furthermore, small changes in the eFDR threshold can yield very large changes in the peaks that are discovered. An alternative measure is the Irreproducible Discovery Rate, or IDR, and this measure avoids these FDR-specific issues.
Irreproducible Discovery Rate (IDR)
A major drawback of using traditional statistical methods to evaluate the significance of ChIP-seq peaks is that FDR and p-value-based approaches make particular assumptions regarding the relationship between enrichment and significance. Evaluating the significance of ChIP peaks using IDR rather than a p-value or FDR is advantageous because it allows us to leverage the information present in biological replicates to call peaks without setting a threshold for significance. IDR-based approaches rely upon the idea that real signal is likely to be reproducible between replicates, whereas noise should not be reproducible. Using IDR to call significant peaks returns peaks that satisfy a given threshold for significance. To determine which peaks are significant via IDR, the peaks in each biological replicate are ranked based on their enrichment in descending order. The top N peaks in each replicate are then compared against each other, and the IDR for a given replicate is the fraction of peaks present in the top N peaks in the replicate that are not present in the other replicates (i.e., the fraction of peaks that are not reproducible between replicates). To develop more mathematical intuition, the following (entirely optional) subsection will rigorously introduce the concept of the IDR.
Mathematical Derivation of the IDR
Since the IDR utilizes ranks, the marginal distributions are uniform, and the information is mostly encoded in the joint distribution of the ranks across biological replicates. Specifically, when the marginal distributions are uniform, we can model the joint distribution through a copula model. Simply put, a copula is a multivariate probability distribution in which the marginal probability of each variable is uniform. Sklar's Theorem states that there exists at least one copula function which allows us to express the joint distribution in terms of the marginal distributions.
$F_{k}\left(x_{1}, x_{2}, \ldots, x_{k}\right)=C_{x}\left(F_{X_{1}}\left(x_{1}\right), \ldots, F_{X_{k}}\left(x_{k}\right)\right) \nonumber$
where $C_x$ is the copula function and $F_X(x)$ is the cumulative distribution for a variable $X$. Given this information, we can set a Bernoulli indicator $K_i \sim \operatorname{Bern}(\pi_1)$ that denotes whether the $i$th peak is from the consistent set or the spurious set. We can write $z_1 = (z_{1,1}, z_{1,2})$ if $K_i = 1$ or $z_0 = (z_{0,1}, z_{0,2})$ if $K_i = 0$ (where $z_{0,i}$ means that it comes from the spurious set in biological replicate $i$). Using this, we can model these two components as follows:
$\left(\begin{array}{c} z_{i, 1} \ z_{i, 2} \end{array}\right) \mid K_{i}=k \sim N\left(\left(\begin{array}{c} \mu_{k} \ \mu_{k} \end{array}\right),\left(\begin{array}{cc} \sigma_{k}^{2} & \rho_{k} \sigma_{k}^{2} \ \rho_{k} \sigma_{k}^{2} & \sigma_{k}^{2} \end{array}\right)\right) \nonumber$
We use two different models depending on whether the peak comes from the spurious set (denoted by 0) or the real set (denoted by 1). In the real set we have $\mu_1 > 0$ and $0 < \rho_1 < 1$, whereas in the null set we have $\mu_0 = 0$ and $\sigma_0^2 = 1$. We can model variables $u_{i,1}$ and $u_{i,2}$ with the following formulas:
$u_{i, 1}=G\left(z_{i, 1}\right)=\pi_{1} \Phi\left(\frac{z_{i, 1}-\mu_{1}}{\sigma_{1}}\right)+\pi_{0} \Phi\left(z_{i, 1}\right) \nonumber$
$u_{i, 2}=G\left(z_{i, 2}\right)=\pi_{1} \Phi\left(\frac{z_{i, 2}-\mu_{1}}{\sigma_{1}}\right)+\pi_{0} \Phi\left(z_{i, 2}\right) \nonumber$
where $\Phi$ is the standard normal cumulative distribution function. Then, let the observed $x_{i, 1}=F_{1}^{-1}\left(u_{i, 1}\right)$ and $x_{i, 2}=F_{2}^{-1}\left(u_{i, 2}\right)$, where $F_1$ and $F_2$ are the marginal distributions of the two coordinates. Thus, for a signal i, we have:
$P\left(X_{i, 1} \leq x_{1}, X_{i, 2} \leq x_{2}\right)=\pi_{0} h_{0}\left(G^{-1}\left(F_{1}\left(x_{i, 1}\right)\right), G^{-1}\left(F_{2}\left(x_{i, 2}\right)\right)\right)+\pi_{1} h_{1}\left(G^{-1}\left(F_{1}\left(x_{i, 1}\right)\right), G^{-1}\left(F_{2}\left(x_{i, 2}\right)\right)\right) \nonumber$
We can express h0 and h1 with the following normal distributions, similar to the z1 and z2 that were defined above:
\begin{aligned} &h_{0} \sim N\left(\left(\begin{array}{l} 0 \ 0 \end{array}\right),\left(\begin{array}{ll} 1 & 0 \ 0 & 1 \end{array}\right)\right)\ &h_{1} \sim N\left(\left(\begin{array}{c} \mu_{1} \ \mu_{1} \end{array}\right),\left(\begin{array}{cc} \sigma_{1}^{2} & \rho_{1} \sigma_{1}^{2} \ \rho_{1} \sigma_{1}^{2} & \sigma_{1}^{2} \end{array}\right)\right) \end{aligned} \nonumber
We can now infer the parameters $\theta=\left(\mu_{1}, \rho_{1}, \sigma_{1}, \pi_{0}\right)$ using an EM algorithm, where the inference is based on $P\left(K_{i}=1 \mid\left(x_{i, 1}, x_{i, 2}\right) ; \hat{\theta}\right)$. Thus, we can define the local irreproducible discovery rate as:
$\operatorname{idr}\left(x_{i, 1}, x_{i, 2}\right)=P\left(K_{i}=0 \mid\left(x_{i, 1}, x_{i, 2}\right) ; \hat{\theta}\right) \nonumber$
To control the IDR at some level $\alpha$, we can rank the pairs $(x_{i,1}, x_{i,2})$ by their IDR values. We then select $(x_{(i),1}, x_{(i),2})$, $i = 1 \ldots l$, where
$l=\operatorname{argmax}_{i} \frac{1}{i} \sum_{j=1}^{i} i d r_{(j)} \leq \alpha \nonumber$
IDR is analogous to an FDR control in this copula mixture model. This subsection summarizes the information provided in this lecture: www.biostat.wisc.edu/~kendzi...AT877/SK_2.pdf. The original paper, along with an even more detailed formulation of IDR, can be found in Li et al. [10]
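Given local idr values from such a model fit, the selection rule above can be sketched in a few lines; the function name is illustrative, and the EM fit itself is omitted.

```python
import numpy as np

def select_by_idr(local_idr, alpha=0.05):
    """Select the largest set of peaks whose average local idr is <= alpha.

    local_idr : one value per peak, e.g. P(K_i = 0 | data) from the copula
                mixture model fit by EM
    """
    local_idr = np.asarray(local_idr)
    order = np.argsort(local_idr)                 # most reproducible peaks first
    running_mean = np.cumsum(local_idr[order]) / np.arange(1, len(order) + 1)
    n_selected = int(np.searchsorted(running_mean, alpha, side="right"))
    return order[:n_selected]                     # indices of accepted peaks
```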
Advantages and use cases of the IDR
IDR analysis can be performed with increasing N, until the desired IDR is reached (for example, N is increased until IDR=0.05, meaning that 5% of the top N peaks are not reproducible). Note that N can be different for different replicates of the same experiment, as some replicates may be more reproducible than others due to either technical or biological artifacts.
IDR is also superior to simpler approaches that use the reproducibility between experiments to define significance. One approach might be to take the union of all peaks in both replicates as significant; however, this method will accept both real peaks and the noise in each data set. Another approach is to take the intersection of peaks in both replicates, that is, to only count peaks present in both data sets as significant. While this method will very effectively eliminate spurious peaks, it is likely to miss many genuine peaks. IDR can be thought of as combining both these approaches, as it accepts all peaks, regardless of whether they are reproducible, so long as the peaks have sufficient enrichment to fall within the segment of the data with an overall irreproducibility rate below a given threshold. Another advantage of IDR is that it can still be performed even if biological replicates are not available, which can often be the case for ChIP experiments performed in rare cell types. Pseudo-replicates can be generated from a single data set by randomly assigning half the reads to one pseudo-replicate and half to another pseudo-replicate.
Interpreting Chromatin Marks
We now move on to techniques for interpreting chromatin marks. There are many ways to analyze epigenomic marks, such as aggregating chromatin signals (e.g., H3K4me3) over known feature types (e.g., promoters of genes with high or low expression levels) and applying supervised or unsupervised machine learning methods to derive epigenomic features that are predictive of different types of genomic elements such as promoters, enhancers or large intergenic non-coding RNAs. In particular, in this lecture, we examine in detail the analysis of chromatin marks as done in [7].
The histone code hypothesis suggests that chromatin-DNA interactions are guided by combinatorial histone modifications. These combinatorial modifications, when taken together, can in part determine how a region of DNA is interpreted by the cell (i.e. as a transcription factor binding domain, a splice site, an enhancer region, an actively expressed gene, a repressed gene, or a non-functional region). We are interested in interpreting this "code" (i.e. determining from the histone marks at a region whether the region is a transcription start site, enhancer, promoter, etc.). With an understanding of the combinatorial histone marks, we can annotate the genome into functional regions and predict novel enhancers, promoters, genes, etc. The challenge is that there are dozens of marks and they exhibit complex combinatorial effects.
Stated another way, DNA can take on a series of (hidden) states (coding, noncoding, etc). Each of these states emits a specific combination of epigenetic modifications (H3K4me3, H3K36me3, etc) that the cell recognizes. We want to be able to predict these hidden, biologically relevant states from observed epigenetic modifications.
In this section, we explore a technique for interpreting the “code” and its application to a specific dataset [7], which measured 41 chromatin marks across the human genome.
Data
Data for this analysis consisted of 41 chromatin marks including acetylations, methylations, H2AZ, CTCF and Pol II in CD4+ T cells. First, the genome was divided into 200 bp non-overlapping bins, and the binary presence or absence of each of the 41 chromatin marks was determined in each bin. This data was processed using data binarization, in which each mark in each interval is assigned a value of 0 or 1 depending on whether the enrichment of the mark's signal in that interval exceeds a threshold. Specifically, let $C_{ij}$ be the number of reads detected by ChIP-seq for mark i mapping to the 200 bp bin j, and let $\lambda_{i}$ be the average number of reads mapping to a bin for mark i. Mark i is determined to be present in bin j if $P(X > C_{ij})$ is less than the accepted threshold of $10^{-4}$, where X is a Poisson random variable with mean $\lambda_{i}$, and absent otherwise. The threshold is user-defined, similar to a Poisson p-value. In other words, the read enrichment for a specific bin has to be significantly greater than expected from a random process of putting reads into bins. An example for chromatin states around the CAPZA2 gene on chromosome 7 is shown in Figure 19.8. In this way, for each mark i, we can label each bin j with a 1 if the mark is present and a 0 if it isn't. Looking at the data as a whole, we can think of it as a large binary matrix, where each row corresponds to a mark and each column corresponds to a bin (which is simply a 200 bp region of the genome).
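A compact sketch of this binarization, assuming a marks-by-bins count matrix is already in memory; the function name and the use of SciPy are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def binarize_marks(counts, p_threshold=1e-4):
    """Binarize a (marks x bins) matrix of ChIP-seq read counts.

    Mark i is called present in bin j if P(X > C_ij) < p_threshold,
    where X ~ Poisson(lambda_i) and lambda_i is mark i's mean reads per bin.
    """
    counts = np.asarray(counts)
    lam = counts.mean(axis=1, keepdims=True)      # one lambda per mark
    pvals = poisson.sf(counts, lam)               # P(X > C_ij)
    # require at least one read so that empty bins are never called present
    return ((pvals < p_threshold) & (counts > 0)).astype(int)
```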
Additional data used for analysis included gene ontology data, SNP data, expression data, and others.
HMMs for Chromatin State Annotation
Our goal is to identify biologically meaningful and spatially coherent combinations of chromatin marks. Remember that we broke the genome up into 200bp blocks, so by spatially coherent we mean that if we have a genomic element that is longer than 200bps, we expect the combination of chromatin marks to be consistent on each 200bp bin in the region. We’ll call these biologically meaningful and spatially coherent combinations of chromatin marks chromatin states. In previous lectures, we’ve seen HMMs applied to genome annotation for genes and CpG islands. We would like to apply the same ideas to this situation, but in this case, we don’t know the hidden states a priori (e.g. CpG island region or not), we’d like to learn them de novo. This model can capture both the functional ordering of different states (e.g from promoter to transcribed regions) and the spreading of certain chromatin domains across the genomes. To summarize, we want to learn an HMM where the hidden states of the HMM are chromatin states.
As we learned previously, even if we don't know the emission probabilities and transition probabilities of an HMM, we can use the Baum-Welch training algorithm to learn the maximum likelihood values for those parameters. In our case, we have an added difficulty: we don't even know how many chromatin states exist! In the following subsections, we'll expand on how the data is modeled and how we can choose the number of states for the HMM.
Emission of a Vector
In HMMs from previous lectures, each state emitted either a single nucleotide or a single string of nucleotides at a time. In the HMM for this problem, each state emits a combination of epigenetic marks. Each combination can be represented as an n-dimensional vector where n is the number of chromatin marks being analyzed (n = 41 for our data). For example, assuming you have four possible epigenetic modifications: H3K4me3, H2BK5ac, Methyl-C, and Methyl-A, a sequence containing H3K4me3 and Methyl-C could be represented as the vector (1, 0, 1, 0). One could imagine many different probability distributions on binary n-vectors; for simplicity, we assume that the marks are independent and modeled as Bernoulli random variables. So we are assuming the marks are independent given the hidden state of the HMM (note that this is not the same as assuming the marks are independent).
If there are n input marks, each state k has a vector $(p_{k1}, \ldots, p_{kn})$ of probabilities of observing marks 1 to n. Since the probability is modeled as a set of independent Bernoulli random variables, the probability of observing a set of marks given that we are in hidden state k equals the product of the probabilities of observing the individual marks. For example, if n = 4, the observed marks at bin j were (1, 0, 1, 0), and we were in state k, then the likelihood of that data is $p_{k1}(1-p_{k2})p_{k3}(1-p_{k4})$.
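A small sketch of this emission model, evaluating the (log-)likelihood of one observed mark vector under one state's Bernoulli parameters; the names are illustrative.

```python
import numpy as np

def emission_log_likelihood(obs, p_state):
    """Log-probability of an observed binary mark vector given a chromatin state.

    obs     : length-n 0/1 vector of marks observed in one 200 bp bin
    p_state : length-n vector (p_k1, ..., p_kn) of mark frequencies for state k
    """
    obs = np.asarray(obs, dtype=float)
    p = np.clip(np.asarray(p_state), 1e-10, 1 - 1e-10)   # avoid log(0)
    return float(np.sum(obs * np.log(p) + (1 - obs) * np.log(1 - p)))

# n = 4 marks, observation (1, 0, 1, 0) in state k:
# likelihood = p_k1 * (1 - p_k2) * p_k3 * (1 - p_k4)
print(np.exp(emission_log_likelihood([1, 0, 1, 0], [0.9, 0.2, 0.8, 0.1])))  # 0.5184
```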
The learned emission probabilities for the data are shown in Figure 19.9.
Transition Probabilities
Recall that the transition probabilities represent the frequency of transitioning from one hidden state to another hidden state. In this case, our hidden states are chromatin states. The transition matrix for our data is shown in Figure 19.10. As seen from the figure, the matrix is sparse, indicating that only a few of the possible transitions actually occur. The transition matrix reveals the spatial relationships between neighboring states. Blocks of states in the matrix reveal sub-groups of states and from these higher level blocks, we can see transitions between these meta-states.
Choosing the Number of states to model
As with most machine learning algorithms, increasing the complexity of the model (e.g. the number of hidden states) will allow it to better fit training data. However, the training data is only a limited sample of the true population. As we add more complexity, at some point we are fitting patterns in the training data that only exist due to limited sampling, so that the model will not generalize to the true population. This is called over-fitting training data; we should stop adding complexity to the model before it fits the noise in the training data.
Bayesian Information Criterion (BIC) is a common technique for optimizing the complexity of a model that balances increased fit to the data with complexity of the model. Using BIC, we can visualize the increasing power of the HMM as a function of the number of states. Generally, one will choose a value for k (the number of states) such that the addition of more states has relatively little benefit in terms of predictive power gain. However, there is a tradeoff between model complexity and model interpretability that BIC cannot help with. The optimal model according to BIC is likely to have more states than an ideal model because we are willing to trade some predictive power for a model with fewer states that can be interpreted biologically. The human genome is so big and the chromatin marks so complex that statistically significant differences are easy to find, yet many of these differences are not biologically significant.
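For reference, one standard form of the criterion (exact conventions vary between implementations) is
$\mathrm{BIC}=-2 \ln \hat{L}+p \ln N \nonumber$
where $\hat{L}$ is the maximized likelihood of the model, $p$ is the number of free parameters (which grows with the number of hidden states through the emission and transition probabilities), and $N$ is the number of observations (here, 200 bp bins); lower values are better.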
To solve this problem, we start with a model with more hidden states than we believe are necessary and prune hidden states as long as all states of interest in the larger model are adequately captured. The Baum-Welch algorithm (and EM in general) is sensitive to the initial conditions, so we try several random initializations in our learning. For each number of hidden states from 2 - 80, we generate three random initializations of the parameters and train the model using Baum-Welch. The best model according to BIC had 79 states and states were then iteratively removed from this set of 79 states.
As we mentioned earlier, Baum-Welch is sensitive to the initial parameters, so when we pruned states, we used a nested initialization rather than a random initialization for the pruned model. Specifically, states were greedily removed from the BIC-optimal 79-state model; at each step, the state removed was chosen so that all states from the 237 randomly initialized models remained well captured. When removing a state, its emission probabilities were removed, and any state transitioning to the removed state had that transition probability uniformly redistributed to the remaining states. This was used as the initialization for Baum-Welch training. The number of states for the final model can then be selected by choosing the model trained from such nested initialization with the smallest number of states that sufficiently captures all states offering distinct biological interpretations. The resulting final model had 51 states.
We can also check model fit by looking at how the data violates model assumptions. Given the hidden state, the HMM assumes that each mark is independent. We can test how well the data conforms to this assumption by plotting the dependence between marks. This can reveal states that fit well and those that do not. In particular, repetitive states reveal a case where the model does not fit well. As we add more states, the model is better able to fit the data and hence fit the dependencies. By monitoring the fit on individual states that we are interested in, we can control the complexity of the model.
Results
This multivariate HMM model resulted in a set of 51 biologically relevant chromatin states. However, there was no one-to-one relationship between each state and known classes of genomic elements (e.g. introns, exons, promoters, enhancers, etc.). Instead, multiple chromatin states were often associated with one genomic element. Each chromatin state encoded specific biologically relevant information about its associated genomic element. For instance, three different chromatin states were associated with the transcription start site (TSS), but one was associated with TSSs of highly expressed genes, while the other two were associated with TSSs of medium and lowly expressed genes respectively. Such use of epigenetic marks greatly improved genome annotation, particularly when combined with evolutionary signals discussed in previous lectures. The 51 chromatin states can be divided into five large groups. The properties of these groups are described as follows and further illustrated in Figure 19.11:
1. Promoter-Associated States (1-11):
These chromatin states all had high enrichment for promoter regions: 40-89% of each state was within 2 kb of a RefSeq TSS, compared to 2.7% genome-wide. These states all had a high frequency of H3K4me3, and significant enrichments for DNase I hypersensitive sites, CpG islands, evolutionarily conserved motifs and bound transcription factors. However, these states differed in the levels of associated marks such as H3K79me2/3, H4K20me1, acetylations, etc. These states also differed in their functional enrichment based on Gene Ontology (GO). For instance, genes associated with T cell activation were enriched in state 8 while genes associated with embryonic development were enriched in state 4. Additionally, among these promoter states there were distinct positional enrichments. States 1-3 peaked both upstream and downstream of the TSS; states 4-7 were concentrated right over the TSS, whereas states 8-11 peaked between 400 bp and 1200 bp downstream of the TSS. This suggests that chromatin marks can recruit initiation factors and that the act of transcription can reinforce these marks. The distinct functional enrichment also suggests that the marks encode a history of activation.
2. Transcription-Associated States (12-28):
This was the second largest group of chromatin states and included 17 transcription-associated states; 70-95% of each state was contained in annotated transcribed regions, compared to 36% for the rest of the genome. These states were not predominantly associated with a single mark but rather were defined by a combination of seven marks - H3K79me3, H3K79me2, H3K79me1, H3K27me1, H2BK5me1, H4K20me1 and H3K36me3. These states have subgroups associated with 5'-proximal or 5'-distal locations. Some of these states were associated with spliced exons, transcription start sites or end sites. Of interest, state 28, which was characterized by high frequency of H3K9me3, H4K20me3, and H3K36me3, showed a high enrichment in zinc-finger genes. This specific combination of marks was previously reported as marking regions of KAP1 binding, a zinc-finger-specific co-repressor.
3. Active Intergenic States (29-39):
These states were associated with several classes of candidate enhancer regions and insulator regions and showed higher frequencies of H3K4me1, H2AZ and several acetylation marks but lower frequencies of methylation marks. Moreover, the chromatin marks could be used to distinguish active from less active enhancers. These regions were usually away from promoters and were outside of transcribed genes. Interestingly, several active intergenic states showed a significant enrichment for disease SNPs, i.e. single nucleotide polymorphisms identified in genome-wide association studies (GWAS). For instance, a SNP (rs12619285) associated with plasma eosinophil count levels in inflammatory diseases was found to be located in chromatin state 33, which was enriched for GWAS hits. In contrast, the region surrounding this SNP was assigned to other chromatin states with no significant GWAS association. This can shed light on the possible functional significance of disease SNPs based on their distinct chromatin states.
4. Large-Scale Repressed States (40-45):
These states marked large-scale repressed and heterochromatic regions, representing 64% of the genome. H3K27me3 and H3K9me3 were the two most frequently detected marks in this group.
5. Repetitive States (46-51):
These states showed strong and distinct enrichments for specific repetitive elements. For instance, state 46 had a strong sequence signature of low-complexity repeats such as (CA)n, (TG)n, and (CATG)n. States 48-51 showed seemingly high frequencies for many modifications but also enrichment in reads from non-specific antibody controls. The model was thus also able to capture artifacts resulting from lack of coverage for additional copies of repeat elements.
Since many of the chromatin states were described by multiple marks, the contribution of each mark to a state was quantified. Varying subsets of chromatin marks were tested to evaluate their potential for distinguishing between chromatin states. In general, greedily chosen subsets of increasing size were found to converge to accurate chromatin state assignments.
Predictions of functional elements based on chromatin states consistently outperformed predictions based on individual marks. Such an unsupervised model using combinations of epigenomic marks and spatial genomic information performed as well as many supervised models in genome annotation. It was shown that this HMM model based on chromatin states was able to reveal previously unannotated promoters and transcribed regions that were supported by independent experimental evidence. When chromatin marks were analyzed across the whole genome, some of the properties observed were satellite-enriched states (47-51) enriched in centromeres, the zinc-finger enriched state (state 28) enriched on chromosome 19, etc. Thus, such genome-wide annotation based on chromatin states can help better interpret biological data and potentially discover new classes of functional elements in the genome.
Multiple Cell Types
All of the above work was done in a single cell type (CD4+ T cells). Since epigenomic marks vary over time, across cell types, and with environmental circumstances, it is important to consider the dynamics of chromatin states across different cell types and experimental conditions. The ENCODE project [3] in the Brad Bernstein Chromatin Group has measured nine different chromatin marks in nine human cell lines. In this case, we want to learn a single set of chromatin states for all of the data. There are two approaches to this problem: concatenation and stacking. For concatenation, we could combine all of the 9 cell lines as if they were a single cell line. By concatenating the different cell lines, we ensure that a common set of state definitions is learned. We can do this here because the profiled marks were the same in each experiment; however, if we profiled different marks for different cell lines, we would need to use another approach. Alternatively, for stacking, we can align the 9 cell lines and treat all of the marks as a super-vector. This allows us to learn cell-line-specific activity states; for example, there might be a state for ES-specific enhancers (in that state there would be enhancer marks in ES cells, but no marks in other cell types). Unfortunately, this greatly increases the dimension of the vectors emitted by the HMM, which translates to an increase in the model complexity needed to adequately fit the data.
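To make the distinction between the two data layouts concrete, here is a minimal NumPy sketch (the array names, shapes and random data are hypothetical illustrations, not the actual ENCODE matrices), assuming each cell type is summarized as a (genomic intervals) x (marks) binary matrix:

```python
import numpy as np

# Hypothetical binarized ChIP-seq signal: one matrix per cell type,
# rows = 200-bp genomic intervals, columns = chromatin marks.
n_bins, n_marks, n_cell_types = 1000, 9, 9
rng = np.random.default_rng(0)
per_cell_type = [rng.integers(0, 2, size=(n_bins, n_marks))
                 for _ in range(n_cell_types)]

# Concatenation: treat the cell types as additional genomic intervals.
# Shape (n_cell_types * n_bins, n_marks): one shared set of state
# definitions is learned over the same 9 marks.
concatenated = np.vstack(per_cell_type)

# Stacking: align the cell types and treat all marks as a super-vector.
# Shape (n_bins, n_cell_types * n_marks): states can now be
# cell-type specific, at the cost of a much larger emission dimension.
stacked = np.hstack(per_cell_type)

print(concatenated.shape, stacked.shape)  # (9000, 9) (1000, 81)
```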
Suppose we had multiple cell types where we profiled different marks and we wanted to concatenate them. One approach is to learn independent models and then combine them. We could find corresponding states by matching emission vectors that are similar or by matching states that appear at the same places in the genome. A second approach is to treat the missing marks as missing data. The EM framework allows for unspecified data points, so as long as pairwise relationships are observed between marks in some cell type, we can use EM. Lastly, we can predict the missing chromatin marks based on the observed marks using maximum-likelihood as in the Viterbi algorithm. This is a less powerful approach if the ultimate goal is chromatin state learning because we are only looking at the most likely state instead of averaging over all possibilities as in the second approach.
In the case with 9 marks in 9 human cell lines, the cell lines were concatenated and a model with 15 states was learned [8]. Each cell type was analyzed for class enrichment. It was shown that some chromatin states, such as those encoding active promoters, were highly stable across all cell types. Other states, such as those encoding strong enhancers, were highly enriched in a cell-type-specific manner, suggesting their roles in tissue-specific gene expression. Finally, it was shown that there was significant correlation between the epigenetic marks on enhancers and the epigenetic marks on the genes they regulate, even though these can be thousands of base pairs apart. Such a chromatin state model has proven useful in matching enhancers to their respective genes, a problem that has remained largely unsolved in modern biology. Thus, chromatin states provide a means to study the dynamic nature of chromatin across many cell types. In particular, we can see the activity of a particular region of the genome based on the chromatin annotation. It also allows us to summarize the important information contained in 2.4 billion reads in just 15 chromatin states.
A 2015 Nature publication by the Epigenome Roadmap Project produced an unparalleled reference for human epigenomic signatures across over a hundred different tissues [2]. In their analysis, they make use of several of the concepts we have discussed in depth in this chapter, such as a 15-state or 18-state ChromHMM model to annotate the epigenome. Training over 111 data sets allowed for greater robustness of the HMM models discussed earlier. The Roadmap project explored many interesting directions in their paper, and interested readers are strongly encouraged to read over this publication. Interesting conclusions include that H3K4me1-associated states are the most tissue-specific chromatin marks, and that bivalent promoters and repressed states were also the most highly variable annotations across different tissue types. For enhancers, the Roadmap project found that a significant number of disease-related SNPs are associated with annotated enhancer regions. Active exploration of this connection is ongoing in the Computational Biology Group at MIT.
Several large-scale data production efforts such as the ENCODE, modENCODE and Epigenome Roadmap projects are currently in progress, and therefore there are several opportunities to computationally analyze this new data. Epigenomic data is also being used to study how behavior can alter your genome. There are studies being done that look at diet and exercise and their effects on disease susceptibility.
Another interesting area of research is the analysis of epigenetic changes in disease. Current research in the Computational Biology Group at MIT is looking at the link between chromatin states and Alzheimer’s disease. A selection of papers in epigenetics-disease linkage has been provided below.
19.07: Further Reading Tools and Techniques
There are several interesting papers that are looking at chromatin states and epigenetics in general. Several urls are listed below to begin your exploration:
1. www.nature.com/nmeth/journal/...meth.1673.html
2. www.nature.com/nature/journal...ture09906.html
3. www.nature.com/nbt/journal/v2.../nbt.1662.html
4. http://www.nytimes.com/2012/09/09/op...tter.html?_r=1
5. www.nature.com/doifinder/10.1038/nature14248
These are a few selected publications that deal with epigenetics and disease.
1. www.nature.com/nature/journal...ture02625.html
2. http://www.sciencedirect.com/science...11124712003725
3. www.nature.com/nbt/journal/v2...f/nbt.1685.pdf
Tools and Techniques
ChromHMM is the HMM described in the text. It is available free for download with instructions and examples at: http://compbio.mit.edu/ChromHMM/.
Segway is another method for analyzing multiple tracks of functional genomics data. It uses a dynamic Bayesian network (HMMs are a particular type of dynamic Bayesian network) which enables it to analyze the entire genome at 1-bp resolution. The downside is that it is much slower than ChromHMM. It is available free for download here: http://noble.gs.washington.edu/proj/segway/.
19.08: What Have We Learned Bibliography
In this lecture, we learned how chromatin marks can be used to infer biologically relevant states. The analysis in [7] presents a sophisticated method to apply previously learned techniques such as HMMs to a complex problem. The lecture also introduced the powerful Burrows-Wheeler transform that has enabled efficient read mapping.
Bibliography
[1] Langmead B, Trapnell C, Pop M, and Salzberg S. Ultrafast, memory-efficient alignment of short DNA sequences to the human genome. Genome Biology, 10(3), 2009.
[2] Roadmap Epigenomics Consortium, Kundaje A, Meuleman W, et al. Integrative analysis of 111 reference human epigenomes. Nature, 518(7539):317–330, 2015.
[3] The ENCODE Project Consortium. An integrated encyclopedia of DNA elements in the human genome. Nature, 489(7414):57–74, 2012.
[4] Heard E and Martienssen RA. Transgenerational epigenetic inheritance: Myths and mechanisms. Cell, 157(1):95–109, 2014.
[5] Mardis ER. ChIP-seq: welcome to the new frontier. Nature Methods, 4(8):614–614, 2007.
[6] Herz H-M, Hu D, and Shilatifard A. Enhancer malfunction in cancer. Molecular Cell, 53(6):859–866, 2014.
[7] Ernst J and Kellis M. Discovery and characterization of chromatin states for systematic annotation of the human genome. Nature Biotechnology, 28:817–825, 2010.
[8] Ernst J, Kheradpour P, Mikkelsen TS, et al. Mapping and analysis of chromatin state dynamics in nine human cell types. Nature, 473(7345):43–49, 2011.
[9] Mousavi K, Zare H, Dell’orso S, Grontved L, et al. eRNAs promote transcription by establishing chromatin accessibility at defined genomic loci. Molecular Cell, 51(5):606–17, 2013.
[10] Qunhua Li, James B. Brown, Haiyan Huang, and Peter J. Bickel. Measuring reproducibility of high-throughput experiments. The Annals of Applied Statistics, 5(3):1752–1779, 2011.
[11] Li Y and Tollefsbol TO. DNA methylation detection: Bisulfite genomic sequencing analysis. Methods Molecular Biology, 791:11–21, 2011.
Molecular and cellular biology describe a hugely diverse system of interacting components that is capable of producing intricate and complex phenomena. Interactions within the proteome describe cellular metabolism, signaling cascades, and response to the environment. Networks are a valuable tool to assist in representing, understanding, and analyzing the complex interactions between biological components. Living systems can be viewed as a composition of multiple layers that each encode information about the system. Some important layers are:
1. Genome: Includes coding and non-coding DNA. Genes defined by coding DNA are used to build RNA, and Cis-regulatory elements regulate the expression of these genes.
2. Epigenome: Defined by chromatin configuration. The structure of chromatin is based on the way that histones organize DNA. DNA is divided into nucleosome and nucleosome-free regions, forming its final shape and influencing gene expression.1
3. Transcriptome: RNAs (ex. mRNA, miRNA, ncRNA, piRNA) are transcribed from DNA. They have regulatory functions and manufacture proteins.
4. Proteome: Composed of proteins. This includes transcription factors, signaling proteins, and metabolic enzymes.
Each layer consists of a network of interactions. For example, mRNAs and miRNAs interact to regulate the production of proteins. Layers can also interact with each other, forming a network between networks. For example, a long non-coding RNA called Xist produces epigenomic changes on the X-chromosome to achieve dosage compensation through X-inactivation.
Introducing Biological Networks
Five example types of biological networks:
Regulatory Network – set of regulatory interactions in an organism.
• Nodes represent regulators (ex. transcription factors) and associated targets.
• Edges represent regulatory interaction, directed from the regulatory factor to its target. They are signed according to the positive or negative effect and weighted according to the strength of the reaction.
Metabolic Network – connects metabolic processes. There is some flexibility in the representation, but an example is a graph displaying shared metabolic products between enzymes.
• Nodes represent enzymes.
• Edges represent metabolic reactions, and are weighted according to the strength of the reaction. Edges are undirected.
Signaling Network – represents paths of biological signals.
• Nodes represent proteins called signaling receptors.
• Edges represent transmitted and received biological signals, directed from transmitter to receiver. Edges are directed and unweighted.
Protein Network – displays physical interactions between proteins.
• Nodes represent individual proteins.
• Edges represent physical interactions between pairs of proteins. These edges are undirected and unweighted.
Coexpression Network – describes co-expression functions between genes. Quite general; represents functional rather than physical interaction networks, unlike the other types of nets. Powerful tool in computational analysis of biological data.
• Nodes represent individual genes.
• Edges represent co-expression relationships. These edges are undirected and unweighted.
Today, we will focus exclusively on regulatory networks. Regulatory networks control context-specific gene expression, and thus have a great deal of control over development. They are worth studying because they are prone to malfunction and are associated with disease.
Interactions Between Biological Networks
Individual biological networks (that is, layers) can themselves be considered nodes in a larger network representing the entire biological system. We can, for example, have a signaling network sensing the environment and governing the expression of transcription factors. In this example, the network would display that TFs govern the expression of proteins, proteins can play roles as enzymes in metabolic pathways, and so on.
The general paths of information exchange between these networks are shown in figure 21.1a.
Network Representation
In figure 20.2 we show a number of these networks and their visualizations as graphs. However, how did we decide on these particular networks to represent the underlying biological models? Given a large biological dataset, how can we understand dependencies between biological objects and what is the best way to model these dependencies? Below, we introduce several approaches to network representation. In practice, no model is perfect. Model choice should balance biological knowledge and computability for reasonably efficient analysis.
Networks are typically described as graphs. Graphs are composed of 1. nodes, which represent objects; and 2. edges, which represent connections or interactions between nodes. There are three main ways to think about biological networks as graphs.
Probabilistic Networks – also known as graphical models. They model a probability distribution between nodes.
• Modeling joint probability distribution of variables using graphs.
• Some examples are Bayesian Networks (directed) and Markov Random Fields (undirected). More on Bayesian networks in later chapters.
Physical Networks – In this scheme we usually think of nodes as physically interacting with each other and the edges capture that interaction.
• Edges represent physical interactions among nodes.
• Example: physical regulatory networks.
Relevance Network – Models the correlation between nodes.
• Edge weights represent node similarities.
• Example: functional regulatory networks.
Networks as Graphs
Computer scientists consider subtypes of graphs, each with different properties for their edges and nodes.
Weighted graph: Edges have an associated weight. Weights are generally positive. When all the weights are 1, then we call it an unweighted graph.
Directed graphs: Edges possess directionality. For example, A → B is not the same as B → A. When the edges do not have direction, we call it an undirected graph.
Multigraphs (pseudographs): When we allow more than one edge to go between two nodes (more than two if it’s directed) then we call it a multigraph. This can be useful for modeling multiple interactions between two nodes, each with different weights for example.
Simple graph: All edges are undirected and unweighted. Multiple edges between nodes and self-edges are forbidden.
Matrix Representation of Graphs
Adjacency matrix One way to represent a network is using the so-called adjacency matrix. The adjacency matrix of a network with n nodes is an $n \times n$ matrix A where Aij is equal to one if there is an edge between nodes i and j, and 0 otherwise. For example, the adjacency matrix of the graph represented in figure 21.6b is given by:
$A=\left[\begin{array}{lll} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{array}\right]$
If the network is weighted (i.e., if the edges of the network each have an associated weight), the definition of the adjacency matrix is modified so that Aij holds the weight of the edge between i and j if the edge exists, and zero otherwise.
Another convenience that comes with the adjacency matrix representation is that when we have a binary matrix (an unweighted graph), the sum of row i gives us the degree of node i. In an undirected graph, the degree of a node is the number of edges it has. Since every entry in the row tells us whether node i is connected to another node, by summing all these values we know how many nodes node i is connected to, and thus we get the degree.
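As a quick illustration (a minimal NumPy sketch, not part of the original text), the adjacency matrix above can be stored as an array and the degrees recovered as row sums:

```python
import numpy as np

# Adjacency matrix of the 3-node example graph above.
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 1, 0]])

degrees = A.sum(axis=1)   # row sums give node degrees for an unweighted graph
print(degrees)            # [1 1 2]
```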
1More in the epigenetics lecture.
We discussed in the previous chapter how we can take a biological network and model it mathematically. Now as we visualize these graphs and try to understand them we need some measure for the importance of a node/edge to the structural characteristics of the system. There are many ways to measure the importance (what we refer to as centrality) of a node. In this chapter we will explore these ideas and investigate their significance.
Degree Centrality
The first idea about centrality is to measure importance by the degree of a node. This is probably one of the most intuitive centrality measures, as it is very easy to visualize and reason about: the more edges connected to a node, the more important it is to the network.
Let’s explore a simple example and see how to go about finding these centralities. We have the following graph
And our goal is to find the degree centrality of every node in the graph. To proceed, we first write out the adjacency matrix for this graph. The order of the nodes is A, B, C, D, E
$A=\left[\begin{array}{lllll} 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{array}\right]$
Previously we discussed how to find the degree for a node given an adjacency matrix. We sum along every row of the adjacency matrix.
$D=\left[\begin{array}{l} 2 \\ 4 \\ 2 \\ 1 \\ 1 \end{array}\right]$
Now D is a vector with the degree of every node. This vector gives us a relative centrality measure for the nodes in this network. We can observe that node B has the highest degree centrality.
Although this metric gives us a lot of insight, it has its limitations. Imagine a situation where there is one node that connects two parts of the network together. The node will have a degree of 2, but it is much more important than that.
Betweenness Centrality
Betweenness centrality gives us another way to think about importance in a network. It measures the number of shortest paths in the graph that pass through the node divided by the total number of shortest paths. In other words, this metric computes all the shortest paths between every pair of nodes and determines what percentage of them pass through node k. That percentage gives us the centrality of node k.
• Nodes with high betweenness centrality control information flow in a network.
• Edge betweenness is defined in a similar fashion.
Closeness Centrality
In order to properly define closeness we need to define the term farness. The distance between two nodes is the length of the shortest path between them. The farness of a node is the sum of distances between that node and all other nodes, and the closeness of a node is the inverse of its farness. In other words, it is the normalized inverse of the sum of topological distances in the graph.
The most central node is the node that propagates information the fastest through the network.
The description of closeness centrality makes it sound similar to degree centrality. Is the node with the highest degree centrality always the one with the highest closeness centrality? No. Think of the example where one node connects two components: that node has a low degree centrality but a high closeness centrality.
Eigenvector Centrality
The eigenvector centrality extends the concept of degree. The best way to think of it is as the average of the centralities of the node's network neighbors. The vector of centralities can be written as:
$x=\frac{1}{\lambda} A x \nonumber$
where A is the adjacency matrix. The solution to the above equation is the eigenvector corresponding to the principal (largest) eigenvalue.
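The centrality measures above are easy to compute numerically. The following sketch (Python with NumPy, and NetworkX for the path-based measures; these library choices are illustrative assumptions, not part of the original notes) applies all four to the 5-node example graph, taking the principal eigenvector of A for eigenvector centrality:

```python
import numpy as np
import networkx as nx

# Adjacency matrix of the 5-node example (nodes A, B, C, D, E).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 1],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0]])
labels = list("ABCDE")

# Degree centrality: row sums of the adjacency matrix.
degree = A.sum(axis=1)

# Eigenvector centrality: principal eigenvector of A (x = (1/lambda) A x).
eigvals, eigvecs = np.linalg.eig(A)
principal = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)

# Betweenness and closeness centrality via shortest paths (NetworkX).
G = nx.from_numpy_array(A)
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)

for i, name in enumerate(labels):
    print(name, degree[i], round(principal[i], 3),
          round(betweenness[i], 3), round(closeness[i], 3))
```

For this particular small graph, node B comes out on top under all four measures.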
The following section includes a review of linear algebra concepts including eigenvalues and eigenvectors.
Our goal in this section is to remind you of some concepts you learned in your linear algebra class. This is not meant to be a detailed walkthrough; if you want to learn more about any of the following concepts, we recommend picking up a linear algebra book and reading the relevant section. But this will serve as a good reminder of the concepts that are important for us in this chapter.
Eigenvectors
Given a square matrix A, $(m \times m)$, the eigenvector v is the solution to the following equation.
$A v=\lambda v \nonumber$
In other words, if we multiply the matrix by that vector, we only move parallel to the vector (we get back a scaled version of the vector v).
And $\lambda$ (how much the vector v is scaled) is called the eigenvalue.
So how many eigenvalues are there at most? Let’s take the first steps to solving this equation.
$A v=\lambda v \Rightarrow(A-\lambda I) v=0 \nonumber$
that has non-zero solutions when $|A-\lambda I|=0$. That is an m-th order equation in $\lambda$ which can have at most m distinct solutions. Remember that those solutions can be complex, even though A is real.
Vector decomposition
When the eigenvectors form a basis, they fully span the space. Given that, we can decompose any arbitrary vector x into a combination of eigenvectors.
$x=\sum_{i} c_{i} v_{i} \nonumber$
Thus when we multiply a vector with a matrix A, we can rewrite it in terms of the eigenvectors.
$\begin{aligned} A x &=A\left(c_{1} v_{1}+c_{2} v_{2}+\ldots\right) \\ A x &=c_{1} A v_{1}+c_{2} A v_{2}+\ldots \\ A x &=c_{1} \lambda_{1} v_{1}+c_{2} \lambda_{2} v_{2}+c_{3} \lambda_{3} v_{3}+\ldots \end{aligned} \nonumber$
So the action of A on x is determined by the eigenvalues and eigenvectors of A. And we can observe that small eigenvalues have a small effect on the multiplication.
Did You Know?
• For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal.
• All eigenvalues of a real symmetric matrix are real.
• All eigenvalues of a positive semidefinite matrix are non-negative.
Diagonal Decomposition
Also known as Eigen Decomposition. Let S be a square $m \times m$ matrix with m linearly independent eigenvectors (a non-defective matrix).
Then, there exists a decomposition (matrix diagonalization theorem) $S=U \Lambda U^{-1} \nonumber$
where the columns of U are the eigenvectors of S, and $\Lambda$ is a diagonal matrix with the eigenvalues on its diagonal.
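As a small sanity check (a NumPy sketch with an arbitrary example matrix, not from the original notes), one can compute U and $\Lambda$ and confirm that $U \Lambda U^{-1}$ reproduces S:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, U = np.linalg.eig(S)       # columns of U are eigenvectors of S
Lambda = np.diag(eigvals)           # eigenvalues on the diagonal

reconstructed = U @ Lambda @ np.linalg.inv(U)
print(np.allclose(reconstructed, S))  # True
```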
Singular Value Decomposition
Oftentimes, singular value decomposition (SVD) is used for the more general case of factorizing an $m \times n$ non-square matrix:
$\mathbf{A}=\mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T}$
where U is an $m \times m$ matrix whose columns are orthogonal eigenvectors of $A A^{T}$, V is an $n \times n$ matrix whose columns are orthogonal eigenvectors of $A^{T} A$, and $\boldsymbol{\Sigma}$ is an $m \times n$ diagonal matrix containing the square roots of the eigenvalues of $A^{T} A$ (called the singular values of A):
$\mathbf{\Sigma}=\operatorname{diag}\left(\sigma_{1}, \ldots, \sigma_{r}\right), \sigma_{i}=\sqrt{\lambda_{i}}$
The SVD of any given matrix can be calculated with a single command in Matlab and we will not cover the technical details of computing it. Note that the resulting “diagonal” matrix $\mathbf{\Sigma}$ may not be full-rank, i.e. it may have zero diagonals, and the maximum number of non-zero singular values is min(m, n).
For example, let
$A=\left[\begin{array}{cc} 1 & -1 \\ 0 & 1 \\ 1 & 0 \end{array}\right] \nonumber$
thus m=3, n=2. Its SVD is
$\left[\begin{array}{ccc} 0 & \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{3}} \end{array}\right]\left[\begin{array}{cc} 1 & 0 \\ 0 & \sqrt{3} \\ 0 & 0 \end{array}\right]\left[\begin{array}{cc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{array}\right] \nonumber$
Typically, the singular values are arranged in decreasing order.
SVD is widely utilized in statistical and numerical analysis and in image processing techniques. A typical application of SVD is optimal low-rank approximation of a matrix. For example, if we have a large matrix of data, e.g. 1000 by 500, and we would like to approximate it with a lower-rank matrix without much loss of information, this can be formulated as the following optimization problem:
Find Ak of rank k such that $\mathbf{A}_{k}=\min _{\mathbf{X}: r a n k(\mathbf{X})=k}\|\mathbf{A}-\mathbf{X}\|_{F}$
where the subscript F denotes the Frobenius norm $\|\mathbf{A}\|_{F}=\sqrt{\sum_{i} \sum_{j}\left|a_{i j}\right|^{2}}$. Usually k is much smaller than r. The solution to this problem is given by the SVD of A, $\mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{T}$, with the smallest r-k singular values in $\mathbf{\Sigma}$ set to zero:
$\mathbf{A}_{k}=\operatorname{Udiag}\left(\sigma_{1}, \ldots, \sigma_{k}, \ldots, 0\right) \mathbf{V}^{T} \nonumber$
Such an approximation can be shown to have an error of $\left\|\mathbf{A}-\mathbf{A}_{k}\right\|_{F}=\sigma_{k+1}$. This is also known as the Eckart-Young theorem.
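The following NumPy sketch (illustrative, not from the original text) computes the SVD of the small example matrix above and forms the best rank-1 approximation by zeroing all but the largest singular value; by the Eckart-Young theorem the Frobenius error equals the first discarded singular value:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [0.0,  1.0],
              [1.0,  0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s)  # singular values in decreasing order, approximately [1.732, 1.0]

k = 1
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation

error = np.linalg.norm(A - A_k, ord="fro")
print(np.isclose(error, s[k]))                # True (Eckart-Young)
```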
A common application of SVD to network analysis is using the distribution of singular values of the adjacency matrix to assess whether our network looks like a random matrix. Because the distribution of the singular values (Wigner semicircle law) and the distribution of the largest eigenvalue of a random matrix (Tracy-Widom distribution) have been theoretically derived, it is possible to derive the expected distribution of eigenvalues (singular values in SVD) of an observed network (matrix) and calculate a p-value for each eigenvalue. Then we need only look at the significant eigenvalues (singular values) and their corresponding eigenvectors (singular vectors) to examine significant structures in the network. The following figure shows the distribution of singular values of a random Gaussian unitary ensemble matrix (GUE; see the Wikipedia entry en.Wikipedia.org/wiki/Random_matrix for definition and properties), which form a semicircle according to the Wigner semicircle law (Figure 20.4).
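A quick way to build intuition for this random-matrix null model is the following hedged NumPy sketch (the specific ensemble, scaling and matrix size are illustrative choices, not the exact procedure behind Figure 20.4): draw a large random symmetric matrix and inspect its eigenvalue histogram, which traces out an approximate semicircle.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
G = rng.normal(size=(n, n))
S = (G + G.T) / np.sqrt(2 * n)       # symmetrized, scaled Gaussian random matrix

eigvals = np.linalg.eigvalsh(S)       # real eigenvalues of a symmetric matrix
hist, edges = np.histogram(eigvals, bins=50)   # histogram counts approximate a semicircle
print(eigvals.min(), eigvals.max())   # roughly -2 and +2 for this scaling
```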
An example of using SVD to infer structural patterns in a matrix or network is shown in Figure 20.5. The top-left panel shows a structure (red) added to a random matrix (blue background in the heatmap), spanning the first row and first three columns. SVD detects this by the identification of a large singular value (circled in red on singular value distribution) and corresponding large row loadings (U1) as well as three large column loadings (V1). As more structures are added to the network (top-right and bottom panels), they can be discovered using SVD by looking at the next largest singular values and corresponding row/column loadings, etc.. | textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/20%3A_Networks_I-_Inference_Structure_Spectral_Methods/20.03%3A_Linear_Algebra_Review.txt |
Limitations of Principal Component Analysis
When analyzing microarray-based gene expression data, we are often dealing with data matrices of dimensions $m \times n$ where m is the number of arrays and n is the number of genes. Usually n is in the order of thousands and m is in the order of hundreds. We would like to identify the most important features (genes) that best explain the expression variation, or patterns, in the dataset. This can be done by performing PCA on the expression matrix:
$\mathbf{E}=\mathbf{U D V}^{T} \nonumber$
This is in essence an SVD of the expression matrix E that rotates and scales the feature space so that expression vectors of each gene in the new orthogonal coordinate system are as uncorrelated as possible, where E is the m by n expression matrix, U is the m by m matrix of left singular vectors (i.e. principal components), or “eigen-genes”, V is the n by n matrix of right singular vectors, or “eigen-arrays”, and D is a diagonal matrix of singular values, or “eigen-expressions” of eigen-genes. This is illustrated in Figure 20.6.
In PCA, each principal component (eigen-gene, a column of U) is a linear combination of n variables (genes), which corresponds to a loading vector (column of V) where the loadings are coefficients corresponding to variables in the linear combination.
However, a straightforward application of PCA to expression matrices or any large data matrices can be problematic because the principal components (eigen-genes) are linear combinations of all n variables (genes), which is difficult to interpret in terms of functional relevance. In practice we would like to use a combination of as few genes as possible to explain expression patterns, which can be achieved by a sparse version of PCA.
Sparse PCA
Sparse PCA (SPCA) modifies PCA to constrain the principal components (PCs) to have sparse loadings, thus reducing the number of explicitly used variables (genes in microarray data, etc.) and facilitating interpretation. This is done by formulating PCA as a linear regression-type optimization problem and imposing sparsity constraints.
A linear regression problem takes a set of input variables x = (1, x1, ..., xp) and response variables $\mathbf{y}=\mathbf{x} \beta+\epsilon$, where $\beta$ is a column vector of regression coefficients $\left(\beta_{0}, \beta_{1}, \ldots, \beta_{p}\right)^{T}$ and $\epsilon$ is the error. The regression model for N observations can be written in matrix form:
$\left[\begin{array}{c} y_{1} \\ y_{2} \\ \vdots \\ y_{N} \end{array}\right]=\left[\begin{array}{ccccc} 1 & x_{1,1} & x_{1,2} & \cdots & x_{1, p} \\ 1 & x_{2,1} & x_{2,2} & \cdots & x_{2, p} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{N, 1} & x_{N, 2} & \cdots & x_{N, p} \end{array}\right]\left[\begin{array}{c} \beta_{0} \\ \beta_{1} \\ \vdots \\ \beta_{p} \end{array}\right]+\left[\begin{array}{c} \epsilon_{1} \\ \epsilon_{2} \\ \vdots \\ \epsilon_{N} \end{array}\right]$
The goal of the linear regression problem is to estimate the coefficients $\beta$. There are several ways to do this, and the most commonly used methods include the least squares method, the Lasso method and the elastic net method.
Least Squares method minimizes the residual sum of squared error:
$\hat{\beta}=\operatorname{argmin}_{\beta}\{R S S(\beta) \mid D\}$
where $R S S(\beta) \equiv \sum_{i=1}^{N}\left(y_{i}-X_{i} \beta\right)^{2}$ (Xi is the ith instance of the input variables x). This is illustrated in Figure 20.7 for the 2-D and 3-D cases, where either a regression line or a hyperplane is produced.
The Lasso method not only minimizes the sum of residual errors but at the same time minimizes a Lasso penalty, which is proportional to the L1 norm of the coefficient vector $\beta$:
$\hat{\beta}=\operatorname{argmin}_{\beta}\left\{R S S(\beta)+L_{1}(\beta) \mid D\right\}$
where $L_{1}(\beta)=\lambda \sum_{j=1}^{p}\left|\beta_{j}\right|, \lambda \geq 0$. The ideal penalty for Sparse PCA is the L0 norm, which penalizes each non-zero element by 1, while zero elements are penalized by 0. However, the L0 penalty function is non-convex, and finding the best solution requires exploring an exponential space (the number of possible combinations of non-zero elements), which is NP-hard. The L1 norm provides a convex approximation to the L0 norm. The Lasso regression model in essence continuously shrinks the coefficients toward zero as much as possible, producing a sparse model. It automatically selects the smallest set of variables that explain variations in the data. However, the Lasso method suffers from the problem that if there exists a group of highly correlated variables, it tends to select only one of them. In addition, Lasso selects at most N variables, i.e. the number of selected variables is limited by the sample size.
Elastic Net method removes the group selection limitation of the Lasso method by adding a ridge constraint:
$\hat{\beta}=\operatorname{argmin}_{\beta}\left\{R S S(\beta)+L_{1}(\beta)+L_{2}(\beta) \mid D\right\}$
where $L_{2}(\beta)=\lambda_{2} \sum_{j=1}^{p}\left|\beta_{j}\right|^{2}, \quad \lambda_{2} \geq 0$. In the elastic net solution, a group of highly correlated variables will be selected once one of them is included.
All of the added penalty terms above arise from the theoretical framework of regularization. We skip the mathematics behind the technique and point to an online concise explanation and tutorial of regularization at http://scikit-learn.org/stable/modul...ear_model.html.
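As a small illustration of these penalties in practice, here is a sketch using scikit-learn (the simulated data, penalty strengths and coefficient values are arbitrary illustrative choices, not from the text): fitting ordinary least squares, Lasso and elastic net to the same data and comparing how many coefficients are driven to zero.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, ElasticNet

rng = np.random.default_rng(0)
N, p = 100, 20
X = rng.normal(size=(N, p))
true_beta = np.zeros(p)
true_beta[:3] = [2.0, -1.5, 1.0]          # only 3 variables truly matter
y = X @ true_beta + 0.5 * rng.normal(size=N)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

for name, model in [("OLS", ols), ("Lasso", lasso), ("ElasticNet", enet)]:
    n_zero = np.sum(np.isclose(model.coef_, 0.0))
    print(name, "zero coefficients:", n_zero)
```

The Lasso and elastic net fits should set most of the 17 irrelevant coefficients exactly to zero, whereas ordinary least squares keeps them all (small but) non-zero.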
PCA can be reconstructed in a regression framework by viewing each PC as a linear combination of the p variables. Its loadings can thus be recovered by regressing the PC on the p variables (Figure 20.8). Let $\mathbf{X}=\mathbf{U D V}^{T}$. $\forall i$, denote $Y_{i}=U_{i} D_{i i}$; then $Y_{i}$ is the ith principal component of X. We state without proof the following theorem, which confirms the correctness of the reconstruction:
Theorem 20.4.1. $\forall \lambda>0$, suppose $\hat{\beta}_{\text {ridge}}$ is the ridge estimate given by
$\hat{\beta}_{\text {ridge}}=\operatorname{argmin}_{\beta}\left|Y_{i}-X_{i} \beta\right|^{2}+\lambda|\beta|^{2}\nonumber$
$\text {and let } \hat{\mathbf{v}}=\frac{\hat{\beta}_{\text {ridge}}}{\left|\hat{\beta}_{\text {ridge}}\right|}, \text { then } \hat{\mathbf{v}}=V_{i} \nonumber$
Note that the ridge penalty does not penalize the coefficients but rather ensures the reconstruction of the PCs. Such a regression problem cannot serve as an alternative to naive PCA, as it uses exactly its results U in the model, but it can be modified by adding the Lasso penalty to the regression problem to penalize the absolute values of the coefficients:
$\hat{\beta}_{\text {ridge}}=\operatorname{argmin}_{\beta}\left|Y_{i}-X_{i} \beta\right|^{2}+\lambda|\beta|^{2}+\lambda_{1}|\beta|$
where $\mathbf{X}=\mathbf{U D V}^{T}$ and $\forall i, Y_{i}=U_{i} D_{i i}$ is the ith principal component of X. The resulting $\hat{\beta}$ when scaled by its norm are exactly what SPCA aims at - sparse loadings:
$\hat{V}_{i}=\frac{\hat{\beta}}{|\hat{\beta}|} \approx V_{i}$
with $X \hat{V}_{i} \approx Y_{i}$ being the ith sparse principal component.
Here we give a simulated example dataset and compare the recovery of hidden factors using PCA and SPCA. We have 10 variables for which to generate data points: X = (X1, ..., X10), and a model of 3 hidden factors V1, V2 and V3 is used to generate the data:
$\begin{array}{l} V_{1} \sim N(0,290) \\ V_{2} \sim N(0,300) \\ V_{3} \sim-0.3 V_{1}+0.925 V_{2}+e, e \sim N(0,1) \\ X_{i}=V_{1}+e_{i}^{1}, e_{i}^{1} \sim N(0,1), i=1,2,3,4 \\ X_{i}=V_{2}+e_{i}^{2}, e_{i}^{2} \sim N(0,1), i=5,6,7,8 \\ X_{i}=V_{3}+e_{i}^{3}, e_{i}^{3} \sim N(0,1), i=9,10 \end{array}\nonumber$
From these data we expect two significant structures to arise from a sparse PCA model, governed by hidden factors V1 and V2 respectively (V3 is merely a linear mixture of the two). Indeed, as shown in Figure 20.9, by limiting the number of variables used, SPCA correctly recovers the PCs explaining the effects of V1 and V2, while PCA does not distinguish well among the mixture of hidden factors.
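The simulated dataset above is easy to reproduce. The sketch below (NumPy/scikit-learn; note that scikit-learn's SparsePCA uses an ℓ1-penalized dictionary-learning formulation rather than the exact elastic-net construction described above, so it is only a stand-in, and the interpretation of N(0, 290) as a variance as well as the penalty strength are assumptions) generates data from the three hidden factors and compares the loadings recovered by ordinary PCA and by a sparse PCA:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
n = 1000
V1 = rng.normal(0, np.sqrt(290), n)           # N(0, 290): variance 290
V2 = rng.normal(0, np.sqrt(300), n)
V3 = -0.3 * V1 + 0.925 * V2 + rng.normal(0, 1, n)

X = np.column_stack(
    [V1 + rng.normal(0, 1, n) for _ in range(4)] +    # X1-X4
    [V2 + rng.normal(0, 1, n) for _ in range(4)] +    # X5-X8
    [V3 + rng.normal(0, 1, n) for _ in range(2)])     # X9-X10

Xs = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize before decomposition

pca = PCA(n_components=2).fit(Xs)
spca = SparsePCA(n_components=2, alpha=1, random_state=0).fit(Xs)

np.set_printoptions(precision=2, suppress=True)
print(pca.components_)    # dense loadings: every variable contributes to each PC
print(spca.components_)   # sparse loadings: ideally one component loads on X1-X4, the other on X5-X8
```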
Is it possible to use networks to infer the labels of unlabeled nodes, or data? Assuming that some of the data in a network is labeled, we can use the idea that networks capture relational information through a “Guilt by association” methodology. Simply put, we can look at the labeled “friends” of a node in a network to infer the label of a new node. Even though the “Guilt By Association” way of reasoning is a logical fallacy and insufficient in legal court settings, it is often helpful for predicting labels (e.g. gene functions) for nodes in a network by looking at the labels of a node’s neighbors. Essentially, a node connected to many nodes with the same label is likely to have that label too. In terms of biological networks where nodes represent genes and edges represent interactions (regulation, co-expression, protein-protein interactions etc.; see Figure 20.11), it is possible to predict the function of an unannotated gene based on the functions of the genes that the query gene is connected to. It is easy to see that we can turn this into an iterative algorithm, where we start with a set of labeled and unlabeled nodes, and we iteratively update relational attributes and then re-infer the labels of nodes. We iterate until all nodes are labeled. This is known as the iterative classification algorithm.
“Guilt By Association” implies a notion of association. The definition of association we implicitly considered above is a straightforward definition where we consider all the nodes directly connected to a particular node. Can we give a better definition of association? Considering this question, we arrive naturally at the idea of communities, or modules, in graphs. The term community attempts to capture the notion of a region in a graph with densely connected nodes, linked to other regions in the graph with a sparse number of edges. Graphs like these, with densely connected subgraphs, are often termed as modular. Note that there is no consensus on the exact definition of communities. For practical use, the definition of communities should be biologically motivated and informed by prior knowledge about the system being modeled. In biology, regulatory networks are often modular, with genes in each densely connected subgraph sharing similar functions and co-regulation. However, broad categories of communities have been developed based on different topological features. They can be roughly divided into 4 categories: node-centric, group-centric, network-centric and hierarchy-centric communities. Here we examine a commonly used criterion for each of the first three types and briefly walk through some well-known algorithms that detect these communities.
Node-Centric Communities
Node-centric community criteria usually require that each node in a group satisfies certain properties. A frequently used node-centric community definition is the clique, which is a maximum complete subgraph in which all nodes are adjacent to each other. Figure 20.12 shows an example of a clique (nodes 5, 6, 7 and 8) in a network.
Exactly finding the maximum clique in a network is NP-hard; thus it is very computationally expensive to implement a straightforward algorithm for clique-finding. Heuristics are often used to limit time complexity by trading off a certain fraction of accuracy. A commonly used heuristic for maximum clique finding is based on the observation that in a clique of size k, each node maintains a degree of at least k-1. We can therefore apply the following pruning procedure:
• Sample a sub-network from the given network and find a clique in the subnetwork using an efficient (e.g. greedy) approach
• Suppose the identified clique has size k, to find a larger clique, all nodes with degree less than or equal to k-1 are removed
• Repeat until network is small enough
In practice many nodes will be pruned, as social media networks and many forms of biological networks follow a power-law distribution of node degrees, which results in large numbers of nodes with low degrees.
Take the network in Figure 20.12 as an example of such a clique-finding procedure. Suppose we sampled a subnetwork with nodes numbered 1 to 9 and found a clique {1,2,3} of size 3. In order to find a clique with size larger than 3, we iteratively remove all nodes with degree $\leq$ 2, i.e. nodes {2, 9}, {1, 3} and 4 will be sequentially removed. This leaves us with the 4-clique {5, 6, 7, 8}.
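A minimal implementation of this pruning loop might look as follows (a Python sketch; the greedy seed-clique step is deliberately simplistic and is an illustrative assumption rather than the exact procedure used in practice):

```python
import numpy as np

def greedy_clique(A, nodes):
    """Grow a clique greedily: add nodes that are adjacent to all current members."""
    clique = []
    for v in sorted(nodes, key=lambda u: -A[u].sum()):   # try high-degree nodes first
        if all(A[v, u] for u in clique):
            clique.append(v)
    return clique

def prune_and_find_clique(A):
    """Iteratively prune low-degree nodes that cannot belong to a larger clique."""
    nodes = set(range(A.shape[0]))
    best = greedy_clique(A, nodes)
    while True:
        k = len(best)
        # Nodes with degree <= k-1 in the surviving subgraph cannot be in a clique larger than k.
        degrees = {v: sum(A[v, u] for u in nodes if u != v) for v in nodes}
        to_remove = {v for v in nodes if degrees[v] <= k - 1}
        if not to_remove or to_remove == nodes:
            return best
        nodes -= to_remove
        candidate = greedy_clique(A, nodes)
        if len(candidate) > len(best):
            best = candidate
```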
Group-Centric Communities
Group-centric community criteria consider connections within a group as a whole, and the group has to satisfy certain properties without zooming into the node level, e.g. the group edge density must exceed a given threshold. We call a subgraph $G_{s}\left(V_{s}, E_{s}\right)$ a $\gamma$-dense quasi-clique if
$\frac{2\left|E_{s}\right|}{\left|V_{s}\right|\left(\left|V_{s}\right|-1\right)} \geq \gamma$
where the left-hand side is the edge density of the subgraph: the number of edges divided by the maximum possible number of edges $\left|V_{s}\right|\left(\left|V_{s}\right|-1\right) / 2$. With such a definition, a strategy similar to the heuristic we discussed for finding maximum cliques can be adopted:
• Sample a subnetwork and find a maximal $\gamma$-dense quasi-clique (say, of size $\left|V_{s}\right|$)
• Remove nodes with degree less than the average degree $\left(<\left|V_{s}\right| \gamma \leq \frac{2\left|E_{s}\right|}{\left|V_{s}\right|-1}\right)$
• Repeat until network is small enough
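The density criterion itself is a one-liner to check. Here is a hedged NumPy sketch (the example graph and the function name are illustrative assumptions) for testing whether a candidate node set is a $\gamma$-dense quasi-clique:

```python
import numpy as np

def is_gamma_dense(A, nodes, gamma):
    """Check whether the subgraph induced by `nodes` is a gamma-dense quasi-clique."""
    nodes = list(nodes)
    sub = A[np.ix_(nodes, nodes)]
    n_s = len(nodes)
    n_edges = sub.sum() / 2                       # undirected, unweighted graph
    max_edges = n_s * (n_s - 1) / 2
    return n_edges / max_edges >= gamma

# Example: a triangle plus one extra node attached by a single edge.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(is_gamma_dense(A, [0, 1, 2], gamma=1.0))     # True: the triangle is a clique
print(is_gamma_dense(A, [0, 1, 2, 3], gamma=0.8))  # False: density is 4/6
```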
Network-Centric Communities
Network-centric definitions seek to partition the entire network into several disjoint sets. Several approaches exist for such a goal, as listed below:
• Markov clustering algorithm [6]: The Markov Clustering Algorithm (MCL) works by doing a random walk on the graph and looking at the steady-state distribution of this walk. This steady-state distribution allows us to cluster the graph into densely connected subgraphs.
• Girvan-Newman algorithm [2]: The Girvan-Newman algorithm uses the number of shortest paths going through a node to compute the essentiality of an edge which can then be used to cluster the network.
• Spectral partitioning algorithm
In this section we will look in detail at the spectral partitioning algorithm. We refer the reader to the references [2, 6] for a description of the other algorithms.
The spectral partitioning algorithm relies on a certain way of representing a network using a matrix. Before presenting the algorithm we introduce an important description of a network - its Laplacian matrix.
Laplacian matrix For the clustering algorithm that we will present later in this section, we will need to count the number of edges between the two different groups in a partitioning of the network. For example, in Figure 21.6a, the number of edges between the two groups is 1. The Laplacian matrix which we will introduce now comes in handy to represent this quantity algebraically. The Laplacian matrix L of a network on n nodes is a $n \times n$ matrix L that is very similar to the adjacency matrix A except for sign changes and for the diagonal elements. Whereas the diagonal elements of the adjacency matrix are always equal to zero (since we do not have self-loops), the diagonal elements of the Laplacian matrix hold the degree of each node (where the degree of a node is defined as the number of edges incident to it). Also the off-diagonal elements of the Laplacian matrix are set to be -1 in the presence of an edge, and zero otherwise. In other words, we have:
$L_{i, j}=\left\{\begin{array}{ll}\operatorname{degree}(i) & \text { if } i=j \\ -1 & \text { if } i \neq j \text { and there is an edge between } i \text { and } j \\ 0 & \text { if } i \neq j \text { and there is no edge between } i \text { and } j\end{array}\right.$
For example the Laplacian matrix of the graph of figure 21.6b is given by (we emphasized the diagonal elements in bold):
$L=\left[\begin{array}{ccc} 1 & 0 & -1 \\ 0 & 1 & -1 \\ -1 & -1 & 2 \end{array}\right] \nonumber$
Some properties of the Laplacian matrix The Laplacian matrix of any network enjoys some nice properties that will be important later when we look at the clustering algorithm. We briefly review these here.
The Laplacian matrix L is always symmetric, i.e., Li,j = Lj,i for any i, j. An important consequence of this observation is that all the eigenvalues of L are real (i.e., they have no imaginary part). In fact one can even show that the eigenvalues of L are all nonnegative.2 The final property that we mention about L is that all the rows and columns of L sum to zero (this is easy to verify using the definition of L). This means that the smallest eigenvalue of L is always equal to zero, and the corresponding eigenvector is s = (1,1,...,1).
Counting the number of edges between groups using the Laplacian matrix Using the Laplacian matrix we can now easily count the number of edges that separate two disjoint parts of the graph using simple matrix operations. Indeed, assume that we partitioned our graph into two groups, and that we define a vector s of size n which tells us which group each node i belongs to:
$s_{i}=\left\{\begin{array}{ll} 1 & \text { if node } i \text { is in group } 1 \\ -1 & \text { if node } i \text { is in group } 2 \end{array}\right. \nonumber$
Then one can easily show that the total number of edges between group 1 and group 2 is given by the quantity $\frac{1}{4} s^{T} L s$ where L is the Laplacian of the network.
To see why this is case, let us first compute the matrix-vector product Ls. In particular let us fix a node i say in group 1 (i.e., si = +1) and let us look at the i’th component of the matrix-vector product Ls. By definition of the matrix-vector product we have:
$(L s)_{i}=\sum_{j=1}^{n} L_{i, j} s_{j} \nonumber$

We can decompose this sum into three summands as follows:

$(L s)_{i}=\sum_{j=1}^{n} L_{i, j} s_{j}=L_{i, i} s_{i}+\sum_{j \text { in group } 1} L_{i, j} s_{j}+\sum_{j \text { in group } 2} L_{i, j} s_{j} \nonumber$

Using the definition of the Laplacian matrix we easily see that the first term corresponds to the degree of i, i.e., the number of edges incident to i; the second term is equal to the negative of the number of edges connecting i to some other node in group 1; and the third term is equal to the number of edges connecting i to some node in group 2. Hence we have:

$(L s)_{i}=\operatorname{degree}(i)-(\# \text { edges from } i \text { to group } 1)+(\# \text { edges from } i \text { to group } 2) \nonumber$

Now since any edge from i must either go to group 1 or to group 2, we have degree(i) = (# edges from i to group 1) + (# edges from i to group 2). Thus combining the two equations above we get:

$(L s)_{i}=2 \times(\# \text { edges from } i \text { to group } 2) \nonumber$

Now to get the total number of edges between group 1 and group 2, we simply sum the quantity above over all nodes i in group 1:

$(\# \text { edges between group } 1 \text { and group } 2)=\frac{1}{2} \sum_{i \text { in group } 1}(L s)_{i} \nonumber$

We can also look at nodes in group 2 to compute the same quantity, and we have:

$(\# \text { edges between group } 1 \text { and group } 2)=-\frac{1}{2} \sum_{i \text { in group } 2}(L s)_{i} \nonumber$

Now averaging the two equations above we get the desired result:

$\begin{aligned} (\# \text { edges between group } 1 \text { and group } 2) &=\frac{1}{4} \sum_{i \text { in group } 1}(L s)_{i}-\frac{1}{4} \sum_{i \text { in group } 2}(L s)_{i} \\ &=\frac{1}{4} \sum_{i} s_{i}(L s)_{i} \\ &=\frac{1}{4} s^{T} L s \end{aligned} \nonumber$

where $s^{T}$ is the row vector obtained by transposing the column vector s.

The spectral clustering algorithm We will now see how the linear algebra view of networks given in the previous section can be used to produce a “good” partitioning of the graph. In any good partitioning of a graph the number of edges between the two groups must be relatively small compared to the number of edges within each group. Thus one way of addressing the problem is to look for a partition so that the number of edges between the two groups is minimal. Using the tools introduced in the previous section, this problem is equivalent to finding a vector $s \in\{-1,+1\}^{n}$ taking only values -1 or +1 such that $\frac{1}{4} s^{T} L s$ is minimal, where L is the Laplacian matrix of the graph. In other words, we want to solve the minimization problem:
$\operatorname{minimize}_{s \in\{-1,+1\}^{n}} \frac{1}{4} s^{T} L s \nonumber$
If s* is the optimal solution, then the optimal partioning is to assign node i to group 1 if si = +1 or else to group 2.
This formulation seems to make sense but there is a small glitch unfortunately: the solution to this problem will always end up being s = (+1,...,+1) which corresponds to putting all the nodes of the network in group 1, and no node in group 2! The number of edges between group 1 and group 2 is then simply zero and is indeed minimal!
To obtain a meaningful partition we thus have to consider partitions of the graph that are nontrivial. Recall that the Laplacian matrix L is always symmetric, and thus it admits an eigendecomposition:
$L=U \Sigma U^{T}=\sum_{i=1}^{n} \lambda_{i} u_{i} u_{i}^{T} \nonumber$
where $\Sigma$ is a diagonal matrix holding the nonnegative eigenvalues $\lambda_{1}, \ldots, \lambda_{n}$ of L, and U is the matrix of eigenvectors, which satisfies $U^{T}=U^{-1}$.
The cost of a partitioning $s \in\{-1,+1\}^{n}$ is given by
$\frac{1}{4} s^{T} L s=\frac{1}{4} s^{T} U \Sigma U^{T} s=\frac{1}{4} \sum_{i=1}^{n} \lambda_{i} \alpha_{i}^{2} \nonumber$
where $\alpha=U^{T} s$ gives the decomposition of s as a linear combination of the eigenvectors of L: $s=\sum_{i=1}^{n} \alpha_{i} u_{i}$.
Recall also that $0=\lambda_{1} \leq \lambda_{2} \leq \cdots \leq \lambda_{n}$. Thus one way to make the quantity above as small as possible (without picking the trivial partitioning) is to concentrate all the weight on $\lambda_{2}$, which is the smallest nonzero eigenvalue of L. To achieve this we simply pick s so that $\alpha_{2}=1$ and $\alpha_{k}=0$ for all $k \neq 2$. In other words, this corresponds to taking s to be equal to u2, the second eigenvector of L. Since in general the eigenvector u2 is not integer-valued (i.e., the components of u2 can be different from -1 or +1), we first have to convert the vector u2 into a vector of +1’s or -1’s. A simple way of doing this is to just look at the signs of the components of u2 instead of the values themselves. Our partition is thus given by:
$s=\operatorname{sign}\left(u_{2}\right)=\left\{\begin{array}{ll} 1 & \text { if }\left(u_{2}\right)_{i} \geq 0 \\ -1 & \text { if }\left(u_{2}\right)_{i}<0 \end{array}\right. \nonumber$
To recap, the spectral clustering algorithm works as follows:
Spectral partitioning algorithm
• Input: a network
• Output: a partitioning of the network where each node is assigned either to group 1 or group 2 so that the number of edges between the two groups is small
1. Compute the Laplacian matrix L of the graph given by:
$L_{i, j}=\left\{\begin{array}{ll} \operatorname{degree}(i) & \text { if } i=j \\ -1 & \text { if } i \neq j \text { and there is an edge between } i \text { and } j \\ 0 & \text { if } i \neq j \text { and there is no edge between } i \text { and } j \end{array}\right. \nonumber$
2. Compute the eigenvector u2 for the second smallest eigenvalue of L.
3. Output the following partition: Assign node i to group 1 if (u2)i $\geq$ 0, otherwise assign node i to group 2.
We next give an example where we apply the spectral clustering algorithm to a network with 8 nodes.
Example We illustrate here the partitioning algorithm described above on a simple network of 8 nodes given in figure 21.7. The adjacency matrix and the Laplacian matrix of this graph are given below:
$A=\left[\begin{array}{llllllll} 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \end{array}\right] \quad L=\left[\begin{array}{rrrrrrrr} 3 & -1 & -1 & -1 & 0 & 0 & 0 & 0 \\ -1 & 3 & -1 & -1 & 0 & 0 & 0 & 0 \\ -1 & -1 & 3 & -1 & 0 & 0 & 0 & 0 \\ -1 & -1 & -1 & 4 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 4 & -1 & -1 & -1 \\ 0 & 0 & 0 & 0 & -1 & 3 & -1 & -1 \\ 0 & 0 & 0 & 0 & -1 & -1 & 3 & -1 \\ 0 & 0 & 0 & 0 & -1 & -1 & -1 & 3 \end{array}\right] \nonumber$
Using the eig command of Matlab we can compute the eigendecomposition $L=U \Sigma U^{T}$ of the Laplacian matrix and we obtain:
$\begin{aligned} &U=\left[\begin{array}{rrrrrrrr} 0.3536 & \mathbf{-0.3825} & 0.2714 & -0.1628 & -0.7783 & 0.0495 & -0.0064 & -0.1426 \\ 0.3536 & \mathbf{-0.3825} & 0.5580 & -0.1628 & 0.6066 & 0.0495 & -0.0064 & -0.1426 \\ 0.3536 & \mathbf{-0.3825} & -0.4495 & 0.6251 & 0.0930 & 0.0495 & -0.3231 & -0.1426 \\ 0.3536 & \mathbf{-0.2470} & -0.3799 & -0.2995 & 0.0786 & -0.1485 & 0.3358 & 0.6626 \\ 0.3536 & \mathbf{0.2470} & -0.3799 & -0.2995 & 0.0786 & -0.1485 & 0.3358 & -0.6626 \\ 0.3536 & \mathbf{0.3825} & 0.3514 & 0.5572 & -0.0727 & -0.3466 & 0.3860 & 0.1426 \\ 0.3536 & \mathbf{0.3825} & 0.0284 & -0.2577 & -0.0059 & -0.3466 & -0.7218 & 0.1426 \\ 0.3536 & \mathbf{0.3825} & 0.0000 & 0.0000 & 0.0000 & 0.8416 & -0.0000 & 0.1426 \end{array}\right]\\ &\Sigma=\left[\begin{array}{llllllll} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \mathbf{0.3542} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 4.0000 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4.0000 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 4.0000 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 4.0000 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 4.0000 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5.6458 \end{array}\right] \end{aligned} \nonumber$
We have highlighted in bold the second smallest eigenvalue of L and the associated eigenvector. To cluster the network we look at the sign of the components of this eigenvector. We see that the first 4 components are negative, and the last 4 components are positive. We will thus cluster the nodes 1 to 4 together in the same group, and nodes 5 to 8 in another group. This looks like a good clustering and in fact this is the “natural” clustering that one considers at first sight of the graph.
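The entire worked example can be reproduced in a few lines (a NumPy sketch, using numpy.linalg.eigh in place of Matlab's eig). The sign of the second eigenvector recovers the partition {1, 2, 3, 4} versus {5, 6, 7, 8}, and the quantity $\frac{1}{4} s^{T} L s$ confirms that exactly one edge crosses the cut:

```python
import numpy as np

A = np.array([
    [0, 1, 1, 1, 0, 0, 0, 0],
    [1, 0, 1, 1, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 0, 1, 0, 1, 1],
    [0, 0, 0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 1, 1, 0]])

L = np.diag(A.sum(axis=1)) - A         # Laplacian: degrees on the diagonal, -1 for edges

eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order for symmetric L
u2 = eigvecs[:, 1]                     # eigenvector of the second smallest eigenvalue

# The overall sign of u2 is arbitrary, so the two group labels may come out swapped.
s = np.where(u2 >= 0, 1, -1)
print("group 1:", np.where(s == 1)[0] + 1)
print("group 2:", np.where(s == -1)[0] + 1)
print("edges across the cut:", int(s @ L @ s) // 4)   # (1/4) s^T L s = 1
```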
Did You Know?
The mathematical problem that we formulated as a motivation for the spectral clustering algorithm is to find a partition of the graph into two groups with a minimimal number of edges between the two groups. The spectral partitioning algorithm we presented does not always give an optimal solution to this problem but it usually works well in practice.
Actually it turns out that the problem as we formulated it can be solved exactly using an efficient algorithm. The problem is sometimes called the minimum cut problem since we are looking to cut a minimum number of edges from the graph to make it disconnected (the edges we cut are those between group 1 and group 2). The minimum cut problem can be solved in polynomial time in general, and we refer the reader to the Wikipedia entry on minimum cut [9] for more information. The problem, however, with minimum cut partitions is that they usually lead to partitions of the graph that are not balanced (e.g., one group has only 1 node, and the remaining nodes are all in the other group). In general one would like to impose additional constraints on the clusters (e.g., lower or upper bounds on the size of clusters, etc.) to obtain more realistic clusters. With such constraints, the problem becomes harder, and we refer the reader to the Wikipedia entry on Graph partitioning [8] for more details.
FAQ
Q: How to partition the graph into more than two groups?
A: In this section we only looked at the problem of partitioning the graph into two clusters. What if we want to cluster the graph into more than two clusters? There are several possible extensions of the algorithm presented here to handle k clusters instead of just two. The main idea is to look at the k eigenvectors for the k smallest nonzero eigenvalues of the Laplacian, and then to apply the k-means clustering algorithm appropriately. We refer the reader to the tutorial [7] for more information.
2One way of seeing this is to notice that L is diagonally dominant and the diagonal elements are strictly positive (for more details the reader can look up “diagonally dominant” and “Gershgorin circle theorem” on the Internet).
Earlier, we defined a distance metric between two nodes as the weighted shortest path. This simple distance metric is sufficient for many purposes, but it notably does not use any information about the overall graph structure. Oftentimes, defining distance based on the number of possible paths between two nodes, weighted by the plausibility or likelihood of taking such paths, gives a better representation of the actual system we are modeling. We explore alternative distance metrics in this section.
Diffusion kernel matrices help capture the global network structure of graphs, informing a more complex definition of distance.
Let A be our regular adjacency matrix. D is the diagonal matrix of degrees. We can define L, the Laplacian matrix, as follows:
$L=D-A\nonumber$
We then define a diffusion kernel K as
$K=\exp (-\beta L) \nonumber$
where $\beta$ is the diffusion parameter. Note that we are taking a matrix exponential and not an element-wise exponential; the matrix exponential is based on the Taylor series expansion as follows:
$K=\exp (-\beta L)=\sum_{k=0}^{\infty} \frac{1}{k !}(-\beta L)^{k} \nonumber$
So what does the matrix K represent? There are multiple ways to interpret K; we will list the most relevant to us below:
Random Walks – One way to interpret K is as the result of a random walk. Let's assume we have a graph and, at the node of interest, a probability distribution over the edges representing the probability that we move along each edge, like the figure below:
$\beta$ is the transition probability along a specific edge. And there is also a probability that we don’t move (represented here as a self loop). Note that for the probability distribution to be valid it must sum up to 1.
If we have the setup above, then Kij is equal to the probability of the walk that started at i being at j after infinite time steps. To derive that result, we can write our graph as a Markov model and take the limit as $t \rightarrow \infty$
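To make the kernel concrete, here is a minimal Python/numpy sketch (the toy 4-node graph, the value of the diffusion parameter, and the library choices are our own assumptions, not from the original text). It builds L = D - A, computes K = exp(-beta L) with scipy's matrix exponential, and checks it against a truncated Taylor series; each row of K sums to 1, consistent with the random-walk reading above.

    import numpy as np
    from scipy.linalg import expm   # matrix exponential (not element-wise)

    # Toy undirected 4-node path graph, given as an adjacency matrix A.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    D = np.diag(A.sum(axis=1))      # diagonal degree matrix
    L = D - A                       # graph Laplacian
    beta = 0.5                      # diffusion parameter (assumed value)

    K = expm(-beta * L)             # diffusion kernel K = exp(-beta L)

    # Sanity check: the truncated series sum_k (1/k!)(-beta L)^k approaches K.
    approx = np.zeros_like(L)
    term = np.eye(len(L))
    for k in range(30):
        approx = approx + term
        term = term @ (-beta * L) / (k + 1)
    print(np.allclose(K, approx, atol=1e-8))   # True
    print(K.sum(axis=1))                       # each row sums to 1 (L has zero row sums)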
Stochastic Process – Another way we can interpret the diffusion kernel is through a stochastic process.
• for each node i, consider a random variable Zi(t)
• let Zi(t) be zero-mean with some defined variance.
• covariance for Zi(t) and Zj(t) is zero (independent of each other).
• each variable sends a fraction to the neighbors
$\begin{array}{c} Z_{i}(t+1)=Z_{i}(t)+\beta \sum_{j \neq i}\left(Z_{j}(t)-Z_{i}(t)\right) \ Z(t+1)=(I-\beta L) Z(t) \ Z(t)=(I-\beta L)^{t} Z(0) \end{array} \nonumber$
Let the time evolution operator T(t) be
$T(t)=(I-\beta L)^{t} \nonumber$
then the covariance is equal to
$\operatorname{Cov}_{i j}(t)=\sigma^{2} T_{i j}(2 t) \nonumber$
Then as we take $\Delta t \rightarrow 0$ we get
$\operatorname{Cov}(t)=\sigma^{2} \exp (-2 \beta t L)\nonumber$
20.07: Neural Networks
Neural networks came out of modeling the brain and the nervous system in an attempt to achieve brain-like learning. They are highly parallel, and by learning simple concepts we can achieve very complex behaviors. In relevance to this book, they have also proved to be very good biological models (not surprising given where they came from).
Feed-forward nets
In a neural network we map the input to the output by passing through hidden states whose parameters are learned; a minimal forward pass is sketched after the list below.
• Information flow is unidirectional
• Data is presented to Input layer
• Passed on to Hidden Layer
• Passed on to Output layer
• Information is distributed
• Information processing is parallel
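As a rough illustration of the unidirectional input to hidden to output flow, here is a minimal numpy forward pass. The layer sizes, weight initialization, and activation functions are illustrative assumptions rather than anything prescribed in the text.

    import numpy as np

    def feed_forward(x, W1, b1, W2, b2):
        """One unidirectional pass: input -> hidden layer -> output layer."""
        h = np.tanh(W1 @ x + b1)                    # hidden layer activations
        y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))    # sigmoid output
        return h, y

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(3, 4)) * 0.1, np.zeros(3)   # small random weights
    W2, b2 = rng.normal(size=(1, 3)) * 0.1, np.zeros(1)
    x = np.array([0.2, -1.0, 0.5, 0.3])                   # one input example
    h, y = feed_forward(x, W1, b1, W2, b2)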
Back-propagation
Back-propagation is one of the most influential results for training neural nets and allowing us to easily deal with multi-layer networks.
• Requires training set (input / output pairs)
• Starts with small random weights
• Error is used to adjust weights (supervised learning)
It basically performs gradient descent on the error landscape trying to minimize the error. Thus, back propagation can be slow.
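The following sketch extends the forward pass above into a toy training loop: it computes the error on a single (input, output) pair, propagates it backward through the chain rule, and takes gradient-descent steps on the error landscape. All sizes, activations, the learning rate, and the squared-error loss are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(3, 4)) * 0.1, np.zeros(3)      # small random initial weights
    W2, b2 = rng.normal(size=(1, 3)) * 0.1, np.zeros(1)
    x, t = np.array([0.2, -1.0, 0.5, 0.3]), np.array([1.0])  # one (input, target) training pair
    eta = 0.1                                                 # learning rate (assumed)

    for step in range(100):
        # forward pass
        h = np.tanh(W1 @ x + b1)
        y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
        # backward pass: propagate the squared-error gradient through the chain rule
        delta2 = (y - t) * y * (1 - y)             # output error times sigmoid derivative
        delta1 = (W2.T @ delta2) * (1 - h ** 2)    # hidden error times tanh derivative
        # gradient-descent step down the error landscape
        W2 -= eta * np.outer(delta2, h); b2 -= eta * delta2
        W1 -= eta * np.outer(delta1, x); b1 -= eta * delta1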
Deep Learning
Deep learning is a collection of statistical machine learning techniques used to learn feature hierarchies, often based on artificial neural networks. Deep neural networks have more than one hidden layer. Each successive layer in a neural network uses features in the previous layer to learn more complex features. One of the (relevant) aims of deep learning methods is to perform hierarchical feature extraction. This makes deep learning an attractive approach to modeling hierarchical generative processes as are commonly found in systems biology.
Example: DeepBind (Alipanahi et al. 2015)
DeepBind[1] is a machine learning tool developed by Alipanahi et al. to predict the sequence specificities of DNA- and RNA-binding proteins using deep learning based methods.
The authors point out three difficulties encountered when training models of sequence specificities on the large volumes of sequence data produced by modern high-throughput technologies: (a) the data comes in qualitatively different forms, including protein binding microarrays, RNAcompete assays, ChIP-seq and HT-SELEX, (b) the quantity of data is very large (typical experiments measure ten to a hundred thousand sequences), and (c) each data acquisition technology has its own formats and error profile and thus an algorithm is needed that is robust to these unwanted effects.
The DeepBind method is able to resolve these difficulties by way of (a) parallel implementation on a graphics processing unit, (b) tolerating a moderate degree of noise and mis-classified training data and (c) training predictive models in an automatic fashion while avoiding the need for hand-tuning. The following figures illustrate aspects of the DeepBind pipeline.
To address the concern of overfitting, the authors used several regularizers, including dropout, weight decay and early stopping.
Dropout: Prevention of Over-Fitting
Dropout[5] is a technique for addressing the problem of overfitting on the training data in the context of large networks. Due to the multiplication of gradients in the computation of the chain rule, hidden unit weights are co-adapted, which can lead to overfitting. One way to avoid co-adaptation of hidden unit weights is to simply drop units (randomly). Dropping units effectively trains a large ensemble of thinned networks that would otherwise be too computationally intensive to train.
However, this approach takes a little longer with respect to training. Furthermore, tuning the step size is a bit of a challenge. The authors provide an Appendix, in which they (in part (A)) provide a helpful “Practical Guide for Training Dropout Networks.” They note that typical values for the dropout parameter p (which determines the probability that a node will be retained) are between 0.5 and 0.8 for hidden layers and 0.8 for input layers.
(Figure: the DeepBind pipeline. Source: Alipanahi, Babak, Andrew Delong, et al. "Predicting the Sequence Specificities of DNA- and RNA-binding Proteins by Deep Learning." Nature Biotechnology (2015). Courtesy of Macmillan Publishers Limited; used with permission.)
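A minimal sketch of the dropout idea itself, assuming the common formulation in which p is the retention probability (the array sizes and seed are arbitrary): units are randomly zeroed during training so their weights cannot co-adapt, and activations are rescaled at test time so their expected values match training.

    import numpy as np

    rng = np.random.default_rng(2)
    p_retain = 0.8                        # probability of keeping a unit (see guide above)
    h = rng.normal(size=50)               # hypothetical hidden-layer activations

    # Training: randomly drop units so their weights cannot co-adapt.
    mask = rng.random(50) < p_retain
    h_train = h * mask

    # Test time: keep all units but scale by p so expected activations match training.
    h_test = h * p_retain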
20.08: Open Issues and Challenges
Some of the challenges regarding the previously covered topics are:
• Validation: How do we know the network structure is right?
• How do we know if the network function is right?
• Measuring and modeling protein expression
• Understanding the evolution of regulatory networks
• Mostly it is intractable to compute joint distributions so we focus on marginal distributions.
• Often we have a very large number of regulators or targets, so some of the problems require simplifying assumptions to remain tractable.
20.09: Further Reading What Have We Learned Bibliography
To learn more about the topics discussed in this chapter, you can look for following key terms.
• Probabilistic graphical models
• Network Completion
• Non-negative matrix factorization
• Network Alignment
• Network Integration
20.10: What Have We Learned?
• Networks come in various types and can be represented in probabilistic and algebraic views
• Different centrality measures gauge the importance of nodes/edges from different aspects
• PCA and SVD are useful for uncovering structural patterns in the network by performing matrix decomposition
• Sparse PCA improves upon PCA by selecting a few most representative variables in the data and more accurately recovers community structure
• Network communities have a variety of definitions, each of which has specific algorithms designed for community detection
• Neural networks and deep learning networks are supervised learning machines that capture complex patterns in data.
Bibliography
[1] B. Alipanahi, A. Delong, M.T. Weirauch, and B.J. Frey. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nature Biotechnology, 33:831–838, 2015.
[2] M. Girvan and M.E.J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821–7826, 2002.
[3] O. Hein, M. Schwind, and W. König. Scale-free networks: The impact of fat tailed degree distribution on diffusion and communication processes. Wirtschaftsinformatik, 48(4):267–275, 2006.
[4] T.I. Lee, N.J. Rinaldi, F. Robert, D.T. Odom, Z. Bar-Joseph, G.K. Gerber, N.M. Hannett, C.T. Harbison, C.M. Thompson, I. Simon, et al. Transcriptional regulatory networks in saccharomyces cerevisiae. Science Signalling, 298(5594):799, 2002.
[5] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
[6] S.M. van Dongen. Graph clustering by flow simulation. PhD thesis, University of Utrecht, The Netherlands, 2000.
[7] U. Von Luxburg. A tutorial on spectral clustering. Statistics and computing, 17(4):395–416, 2007.
[8] Wikipedia. Graph partitioning. en.Wikipedia.org/wiki/Graph_partitioning, 2012.
[9] Wikipedia. Minimum cut. en.Wikipedia.org/wiki/Minimum_cut, 2012.
Living systems are composed of multiple layers that encode information about the system. The primary layers are:
1. Epigenome: Defined by chromatin configuration. The structure of chromatin is based on the way that histones organize DNA. DNA is divided into nucleosome and nucleosome-free regions, forming its final shape and influencing gene expression.
2. Genome: Includes coding and non-coding DNA. Genes defined by coding DNA are used to build RNA, and Cis-regulatory elements regulate the expression of these genes.
3. Transcriptome: RNAs (ex. mRNA, miRNA, ncRNA, piRNA) are transcribed from DNA. They have regulatory functions and manufacture proteins.
4. Proteome: Composed of proteins. This includes transcription factors, signaling proteins, and metabolic enzymes.
Interactions between these components are all different, but understanding them can put particular parts of the system into the context of the whole. To discover relationships and interactions within and between layers, we can use networks.
Introducing Biological Networks
Biological networks are composed as follows:
Regulatory Net – set of regulatory interactions in an organism.
• Nodes are regulators (ex. transcription factors) and associated targets.
• Edges correspond to regulatory interaction, directed from the regulatory factor to its target. They are signed according to the positive or negative effect and weighted according to the strength of the reaction.
Metabolic Net – connects metabolic processes. There is some flexibility in the representation, but an example is a graph displaying shared metabolic products between enzymes.
• Nodes are enzymes.
• Edges correspond to metabolic reactions, and are weighted according to the strength of the reaction.
Signaling Net – represents paths of biological signals.
• Nodes are proteins called signaling receptors.
• Edges are transmitted and received biological signals, directed from transmitter to receiver.
Protein Net – displays physical interactions between proteins.
• Nodes are individual proteins.
• Edges are physical interactions between proteins.
Co-Expression Net – describes co-expression functions between genes. Quite general; represents functional rather than physical interaction networks, unlike the other types of nets. Powerful tool in computational analysis of biological data.
• Nodes are individual genes.
• Edges are co-expression relationships.
Today, we will focus exclusively on regulatory networks. Regulatory networks control context-specific gene expression, and thus have a great deal of control over development. They are worth studying because they are prone to malfunction, and such malfunctions can cause disease.
Interactions Between Biological Networks
Individual biological networks (that is, layers) can themselves be considered nodes in a larger network representing the entire biological system. We can, for example, have a signaling network sensing the environment governing the expression of transcription factors. In this example, the network would display that TFs govern the expression of proteins, proteins can play roles as enzymes in metabolic pathways, and so on.
The general paths of information exchange between these networks are shown in figure 21.4.
Studying Regulatory Networks
In general, networks are used to represent dependencies among variables. Structural dependencies can be represented by the presence of an edge between nodes - as such, unconnected nodes are conditionally independent. Probabilistically, edges can be assigned a "weight" that represents the strength or the likelihood of the interaction. Networks can also be viewed as matrices, allowing mathematical operations. These frameworks provide an effective way to represent and study biological systems.
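As a small illustration of the matrix view, a hypothetical signed, weighted regulatory network can be stored as a matrix whose rows are regulators and columns are targets; the gene names and weights below are invented for the example, and the matrix form immediately supports operations such as counting targets per regulator.

    import numpy as np

    # Hypothetical 4-node regulatory net: rows = regulators, columns = targets.
    # Entry W[i, j] is the signed, weighted edge from regulator i to target j
    # (positive = activation, negative = repression, 0 = no interaction).
    genes = ["TF1", "TF2", "geneA", "geneB"]
    W = np.array([[0.0, 0.0,  0.8, -0.3],
                  [0.0, 0.0,  0.0,  0.5],
                  [0.0, 0.0,  0.0,  0.0],
                  [0.0, 0.0,  0.0,  0.0]])

    out_degree = (W != 0).sum(axis=1)   # number of targets per regulator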
These networks are particularly interesting to study because malfunctions can have a large effect. Many diseases are caused by rewirings of regulatory networks. They control context specific expression in development. Because of this, they can be used in systems biology to predict development, cell state, system state, and more. In addition, they encapsulate much of the evolutionary difference between organisms that are genetically similar.
To describe regulatory networks, there are several challenging questions to answer.
Element Identification What are the elements of a network? Elements constituting regulatory networks were identified last lecture. These include upstream motifs and their associated factors.
Network Structure Analysis How are the elements of a network connected? Given a network, structure analysis consists of examination and characterization of important properties. It can be done on biological networks but is not restricted to them.
Network Inference How do regulators interact and turn on genes? This is the task of identifying gene edges and characterizing their actions.
Network Applications What can we do with networks once we have them? Applications include predicting function of regulating genes and predicting expression levels of regulated genes.
1More in the epigenetics lecture.
Key Questions in Structure Inference
How to choose network models? A number of models exist for representing networks, a key problem is choosing between them based on data and predicted dynamics.
How to choose learning methods? Two broad methods exist for learning networks. Unsupervised methods attempt to infer relationships for unlabeled datapoints and will be described in sections to come. Supervised methods take a subset of network edges known to be regulatory, and learn a classifier to predict new ones.2
How to incorporate data? A variety of data sources can be used to learn and build networks including Motifs, ChIP binding assays, and expression. Data sources are always expanding; expanding availability of data is at the heart of the current revolution in analyzing biological networks.
Abstract Mathematical Representations for Networks
Think of a network as a function, a black box. Regulatory networks for example, take input expressions of regulators and spit out output expression of targets. Models differ in choosing the nature of functions and assigning meaning to nodes and edges.
Boolean Network This model discretizes node expression levels and interactions. Functions represented by edges are logic gates.
Differential Equation Model These models capture network dynamics. Expression rate changes are a function of expression levels and rates of change of regulators. For these it can be very difficult to estimate parameters. Where do you find data for systems out of equilibrium?
Probabilistic Graphical Model These systems model networks as a joint probability distribution over random variables. Edges represent conditional dependencies. Probabilistic graphical models (PGMs) are focused on in the lecture.
Probabilistic Graphical Models
Probabilistic graphical models (PGMs) are trainable and able to deal with noise, and thus they are good models for biological networks.3 In PGMs, nodes can be transcription factors or genes and they are modeled by random variables. If you know the joint distribution over these random variables, you can build the network as a PGM. Since this graph structure is a compact representation of the network, we can work with it easily and accomplish learning tasks. Examples of PGMs include:
Bayesian Network Directed graphical technique. Every node is either a parent or a child. Parents fully determine the state of children but their states may not be available to the experimenter. The network structure describes the full joint probablility distribution of the network as a product of individual distributions for the nodes. By breaking up the network into local potentials, computational complexity is drastically reduced.
Dynamic Bayesian Network Directed graphical technique. Static Bayesian networks do not allow cyclic dependencies, but we can try to model them with Bayesian networks allowing arbitrary dependencies between nodes at different time points. Thus cyclic dependencies are allowed as the network progresses through time and the network joint probability itself can be described as a joint over all times.
Markov Random Field Undirected graphical technique. Models potentials in terms of cliques. Allows modelling of general graphs including cyclic ones with higher order than pairwise dependencies.
Factor Graph Undirected graphical technique. Factor graphs introduce “factor” nodes specifying interaction potentials along edges. Factor nodes can also be introduced to model higher order potentials than pairwise.
It is easiest to learn networks for Bayesian models. Markov random fields and factor graphs require determination of a tricky partition function. To encode network structure, it is only necessary to assign random variables to TFs and genes and then model the joint probability distribution.
Bayesian networks provide compact representations of JPD
The main strength of Bayesian networks comes from the simplicity of their decomposition into parents and children. Because the networks are directed, the full joint probability distribution decomposes into a product of conditional distributions, one for each node in the network.4
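As a small worked example of this decomposition (the three-node network here is hypothetical, chosen only to illustrate the bookkeeping): suppose a regulator R is the only parent of two target genes G1 and G2. The joint distribution then factors as
$P\left(R, G_{1}, G_{2}\right)=P(R) P\left(G_{1} \mid R\right) P\left(G_{2} \mid R\right) \nonumber$
so for binary variables we only need $1+2+2=5$ parameters (one marginal and two small conditional tables) instead of the $2^{3}-1=7$ needed to store the full joint table, and the saving grows rapidly with the number of nodes.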
Network Inference From Expression Data
Using expression data and prior knowledge, the goal of network inference is to produce a network graph. Graphs may be undirected or directed: regulatory networks, for example, will often be directed, while expression nets will be undirected.
2Supervised methods will not be addressed today.
3These are Dr. Roy's models of choice for dealing with biological nets.
We have to learn parameters from the data we have. Once we have a set of parameters, we have to use parametrizations to learn structure. We will focus on score based approaches to network building, defining a score to be optimized as a metric for network construction.
Parameter Learning for Bayesian Networks
Maximum Likelihood Chooses parameters to maximize the likelihood of the available data given the model.
In maximum likelihood, we compute data likelihood as scores of each random variable given its parents and note that scores can be optimized independently. Depending on the choice of a model, scores will be maximized in different manners. For a Gaussian distribution it is possible to simply compute parameters optimizing the score. For more complicated model choices it may be necessary to do gradient descent.
Bayesian Parameter Estimation Treats $\theta$ itself as a random variable and chooses the parameters maximizing the posterior probability. These methods require a fixed structure and seek to choose internal parameters maximizing score.
Structure Learning
We can compute best guess parametrizations of structured networks. How do we find structures themselves?
Structure learning proceeds by comparing the likelihood of ML parametrizations across different graph structures in order to seek those structures realizing the optimal ML score.
A Bayesian framework can incorporate prior probabilities over graph structures if given some reason to believe a-priori that some structures are more likely than others.
To perform search in structure learning, we will inevitably have to use a greedy approach because the space of structures is too large to enumerate. Such methods will proceed by an incremental search analogous to gradient descent optimization to find ML parametrizations.
A set of graphs are considered and evaluated according to ML score. Since local optima can exist, it is good to seed graph searches from multiple starting points.
Besides being unable to capture cyclic dependencies as mentioned above, Bayesian networks have certain other limitations.
Indirect Links Since Bayesian networks simply look at statistical dependencies between nodes, it is easy for them to be tricked into putting edges where only indirect relations are in fact present.
Neglected Interactions Especially when structural scores are locally optimized, it is possible that significant biological interactions will be missed entirely. Coexpressed genes may not share proper regulators.
Slow Speed Bayesian methods so far discussed are too slow to work effectively on whole-genome data.
Excluding Indirect Links
How to eliminate indirect links? Information theoretic approaches can be used to remove extraneous links by pruning network structures to remove redundant information. Two methods are described.
ARACNE For every triplet of edges, a mutual information score is computed and the ARACNE algorithm excludes edges with the least information subject to certain thresholds above which minimal edges are kept.
MRNET Maximizes dependence between regulators and targets while minimizing the amount of redundant information shared between regulators by stripping edges corresponding to regulators with low variance.
Alternatively, it is possible to simply look at regulatory motifs and eliminate regulation edges not predicted by common motifs.
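To illustrate the information-theoretic pruning idea, here is a simplified sketch in the spirit of ARACNE rather than the published algorithm: the histogram mutual-information estimator, the zero tolerance, and the toy data are all our own choices. For each triplet of genes, the weakest edge in the triangle is treated as indirect and removed.

    import numpy as np

    def mutual_info(x, y, bins=5):
        """Histogram estimate of mutual information between two expression profiles."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

    def prune_indirect(expr, eps=0.0):
        """For every triplet, drop the weakest edge (data-processing-inequality style)."""
        n = expr.shape[0]
        mi = np.array([[mutual_info(expr[i], expr[j]) if i != j else 0.0
                        for j in range(n)] for i in range(n)])
        keep = mi > 0
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(n):
                    if k in (i, j):
                        continue
                    # edge (i, j) looks indirect if it is the weakest in the triangle
                    if mi[i, j] < min(mi[i, k], mi[k, j]) - eps:
                        keep[i, j] = keep[j, i] = False
        return keep

    expr = np.random.default_rng(3).normal(size=(4, 100))  # 4 genes x 100 conditions (toy data)
    edges = prune_indirect(expr)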
Learning Regulatory Programs for Modules
How to fix omissions for coregulated genes? By learning parameters for regulatory models instead of individual genes, it is possible to exploit the tendency of coexpressed genes to be regulated similarly. Similar to the method of using regulatory motifs to prune redundant edges, by modeling modules at once, we reduce network edge counts while increasing data volume to work with.
With extensions, it is possible to model cyclic dependencies as well. Module networks allow clustering revisitation where genes are reassigned to clusters based on how well they are predicted by a regulatory program for a module.
Modules, however, cannot accommodate genes sharing module membership.
Divide and conquer for speeding up learning
How to speed up learning? Dr. Roy has developed a method to break the large learning problem into smaller tasks using a divide and conquer technique for undirected graphs. By starting with clusters it is possible to infer regulatory networks for individual clusters then cross edges, reassign genes, and iterate.
Conclusions in Network Inference
Regulatory networks are important but hard to construct in general. By exploiting modularity, it is often possible to find reliable structures for graphs and subgraphs.5
Many extensions are on the horizon for regulatory networks. These include inferring causal edges from expression correlations, learning how to share genes between clusters, and others.
4Bayesian networks are parametrized by $\theta$ according to our specific choice of network model. With different choices of random variables, we will have different options for parametrizations $\theta$ and therefore different learning tasks:
Discrete Random variables suggest simple $\theta$ corresponding to parameter choices for a multinomial distribution.
Continuous Random variables may be modelled with $\theta$ corresponding to means and covariances of gaussians or other continuous distribution.
5Dr. Roy notes that many algorithms are available for running module network inference with various distributions. Neural net packages and Bayesian packages among others are available.
Using linear regression and regression trees, we will try to predict expression from networks. Using collective classification and relaxation labeling, we will try to assign function to unknown network elements.
We would like to use networks to:
1. predict the expression of genes from regulators.
In expression prediction, the goal is to parametrize a relationship giving gene expression levels from regulator expression levels. It can be solved in various manners including regression and is related to the problem of finding functional networks.
2. predict functions for unknown genes.
Overview of Functional Models
One model for prediction is a conditional gaussian: a simple model trained by linear regression. A more complex prediction model is a regression tree trained by nonlinear regression.
Conditional Gaussian Models
Conditional gaussian models predict over a continuous space and are trained by a simple linear regression to maximize likelihood of data. They predict targets whose expression levels are means of gaussians over regulators.
Conditional Gaussian learning takes a structured, directed net with targets and regulating transcription factors. You can estimate the Gaussian parameters, $\mu$, from the data by finding parameters maximizing likelihood; after taking a derivative, the ML approach reduces to solving a linear equation.
From a functional regulatory network derived from multiple data sources6, Dr. Roy trained a Gaussian model for prediction using time course expression data and tested it on a hold-out testing set. In comparison to predictions by a model trained from a random network, he found that the network predicted substantially better than random.
The linear model used makes a strong assumption on linearity of interaction. This is probably not a very accurate assumption to make but it appears to work to some extent with the dataset tested.
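A minimal sketch of how the linear-Gaussian fit reduces to a linear system (the synthetic data, variable names, and intercept handling are assumptions for illustration): the maximum-likelihood mean of the target given its regulators is obtained by ordinary least squares.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 3))                   # expression of 3 regulators across 200 samples
    w_true = np.array([1.5, -0.7, 0.2])
    y = X @ w_true + 0.1 * rng.normal(size=200)     # synthetic target gene expression

    # Maximum-likelihood fit of the conditional Gaussian mean = X w + b
    # reduces to ordinary least squares (a linear system).
    Xb = np.hstack([X, np.ones((200, 1))])          # add intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    w_hat, b_hat = coef[:3], coef[3]
    pred = Xb @ coef                                # predicted target expression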
Regression Tree Models
Regression tree models allow the modeler to use a multimodal distribution incorporating nonlinear dependencies between regulator and target gene expression. The final structure of a regression tree describes expression grammar in terms of a series of choices made at regression tree nodes. Because targets can share regulatory programs, notions of recurring motifs may be incorporated. Regression trees are rich models but tricky to learn.
Regression trees in predicting expression
In practice, prediction works its way down a regression tree given regulator expression levels. Upon reaching the leaf nodes of the regression tree, a prediction for gene expression is made.
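As a small illustration of the idea (using scikit-learn's generic regression tree as a stand-in, not the specific model from the lecture, and synthetic data): the tree learns a multimodal, nonlinear rule in which the target is high only when one regulator is high and another is low, and prediction walks each sample down to a leaf.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 2))                        # expression of 2 regulators
    # Nonlinear rule: target is ON only when regulator 0 is high AND regulator 1 is low.
    y = np.where((X[:, 0] > 0) & (X[:, 1] < 0), 2.0, -1.0) + 0.1 * rng.normal(size=300)

    tree = DecisionTreeRegressor(max_depth=3).fit(X, y)  # each split asks about one regulator
    pred = tree.predict(X)                               # walk each sample down to a leaf mean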
Functional Prediction for Unannotated Nodes
Given a network with an incomplete set of labels, the goal of function annotation is to predict labels for unknown genes. We will use methods falling under the broad category of guilt by association. If we know nothing about a node but that its neighbors are involved in a function, assign that function to the unknown node.
Association can include any notion of network relatedness discussed above such as co-expression, protein-protein interactions and co-regulation. Many methods work; two will be discussed, collective classification and relaxation labeling, both of which work for regulatory networks encoded as undirected graphs.
Collective Classification
View functional prediction as a classification problem: Given a node, what is its regulatory class?
In order to use the graph structure in the prediction problem, we capture properties of the neighborhood of a gene in a relational attribute. Since all points are connected in a network, data points are no longer independently distributed - the prediction problem becomes substantially harder than a standard classification problem.
Iterative classification is a simple method with which to solve the classification problem. Starting with an initial guess for unlabeled genes it infers labels iteratively, allowing changed labels to influence node label predictions in a manner similar to Gibbs sampling.7
Relaxation labeling is another approach, originally developed to track terrorist networks. The model uses a suspicion score where nodes are labeled with a suspiciousness according to the suspiciousness of their neighbors. The method is called relaxation labeling because it gradually settles on to a solution according to a learning parameter. It is another instance of iterative learning where genes are assigned probabilities of having a given function.
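A minimal sketch of guilt-by-association propagation in this spirit (the toy graph, initial labels, and relaxation parameter are assumptions; this is not the exact algorithm from either approach): unlabeled genes repeatedly move their scores toward the average score of their neighbors while labeled genes stay fixed.

    import numpy as np

    # Toy undirected association network (hypothetical): adjacency over 5 genes.
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    score = np.array([1.0, np.nan, np.nan, np.nan, 0.0])   # genes 0 and 4 have known labels
    known = ~np.isnan(score)
    score[~known] = 0.5                                    # initial guess for unlabeled genes
    lam = 0.5                                              # relaxation / learning parameter

    for _ in range(50):
        neigh = (A @ score) / A.sum(axis=1)                # average score of neighbors
        score[~known] = (1 - lam) * score[~known] + lam * neigh[~known]
    # score[i] is then read as the probability that gene i has the function of interest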
Regulatory Networks for Function Prediction
For pairs of nodes, compute a regulatory similarity – the interaction quantity – equal to the size of the intersection of their regulators divided by the size of their union. Having this interaction similarity in the form of an undirected graph over network targets, we can use clusters derived from the network in the final functional classification.
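A tiny sketch of this interaction quantity (the regulator sets below are invented for illustration); it is simply the Jaccard index over regulator sets.

    # Regulators of each target gene (hypothetical sets).
    regulators = {"geneA": {"TF1", "TF2", "TF3"},
                  "geneB": {"TF2", "TF3", "TF4"},
                  "geneC": {"TF5"}}

    def regulatory_similarity(g1, g2):
        """Shared regulators / union of regulators (Jaccard index)."""
        r1, r2 = regulators[g1], regulators[g2]
        return len(r1 & r2) / len(r1 | r2)

    sim_ab = regulatory_similarity("geneA", "geneB")   # 2 shared / 4 total = 0.5
    sim_ac = regulatory_similarity("geneA", "geneC")   # 0 shared -> 0.0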
The model is successful in predicting invaginal disk and neural system development. The blue line in Fig. 21.2a shows the score of every gene predicting its participation in neural system development.
Co-expression and co-regulation can be used side by side to augment the set of genes known to participate in neural system development.
6data sources included chromatin, physical binding, expression, motif
7see the previous lecture by Manolis describing motif discovery
21.05: Structural Properties of Networks
Much of the early work on networks was done by scientists outside of biology. Physicists looked at internet and social networks and described their properties. Biologists observed that the same properties were also present in biological networks and the field of biological networks was born. In this section we look at some of these structural properties shared by the different biological networks, as well as the networks that arise in other disciplines as well.
Degree distribution
In a network, the degree of a node is the number of neighbors it has, i.e., the number of nodes it is connected to by an edge. The degree distribution of the network gives the number of nodes having degree d for each possible value of d = 1, 2, 3, . . . . For example figure 21.3 gives the degree distribution of the S. cerevisiae gene regulatory network. It was observed that the degree distribution of biological networks follows a power law, i.e., the number of nodes in the network having degree d is approximately $c d^{-\gamma}$ where c is a normalization constant and $\gamma$ is a positive coefficient. In such networks, most nodes have a small number of connections, except for a few nodes which have very high connectivity.
This property –of power law degree distribution– was actually observed in many different networks across different disciplines (e.g., social networks, the World Wide Web, etc.) and indicates that those networks are not “random”: indeed random networks (constructed from the Erdős-Rényi model) have a degree distribution that follows a Poisson distribution where almost all nodes have approximately the same degree and nodes with higher or smaller degree are very rare [6] (see figure 21.4).
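A quick way to eyeball whether a degree distribution is heavy-tailed is to fit a line on log-log axes; the sketch below, with a made-up degree sample, estimates the exponent $\gamma$ from the slope. This is only a rough diagnostic, not a rigorous power-law test.

    import numpy as np

    # degrees[i] = degree of node i, e.g. taken from an adjacency matrix A as A.sum(axis=1)
    degrees = np.array([1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6, 9, 15])   # toy heavy-tailed sample

    values, counts = np.unique(degrees, return_counts=True)
    # Power law N(d) ~ c * d^(-gamma) is a straight line of slope -gamma on log-log axes.
    slope, intercept = np.polyfit(np.log(values), np.log(counts), 1)
    gamma_hat = -slope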
Networks that follow a power law degree distribution are known as scale-free networks. The few nodes in a scale-free network that have very large degree are called hubs and have very important interpretations. For example in gene regulatory networks, hubs represent transcription factors that regulate a very large number of genes. Scale-free networks have the property of being highly resilient to failures of “random” nodes, however they are very vulnerable to coordinated failures (i.e., the network fails if one of the hub nodes fails, see [1] for more information).
(a) Scale-free graph vs. a random graph (figure taken from [10]) .
In a regulatory network, one can identify four levels of nodes:
1. Influential, master regulating nodes on top. These are hubs that each indirectly control many targets.
2. Bottleneck regulators. Nodes in the middle are important because they have a maximal number of direct targets.
3. Regulators at the bottom tend to have fewer targets but nonetheless they are often biologically essential!
4. Targets.
Network motifs
Network motifs are subgraphs of the network that occur significantly more than random. Some will have interesting functional properties and are presumably of biological interest.
Figure 21.5 shows regulatory motifs from the yeast regulatory network. Feedback loops allow control of regulator levels and feedforward loops allow acceleration of response times among other things.
An important problem in network analysis is to be able to cluster or modularize the network in order to identify subgraphs that are densely connected (see e.g., figure 21.6a). In the context of gene interaction networks, these clusters could correspond to genes that are involved in similar functions and that are co-regulated.
There are several known algorithms to achieve this task. These algorithms are usually called graph partitioning algorithms since they partition the graph into separate modules. Some of the well-known algorithms include:
• Markov clustering algorithm [5]: The Markov Clustering Algorithm (MCL) works by doing a random walk in the graph and looking at the steady-state distribution of this walk. This steady-state distribution makes it possible to cluster the graph into densely connected subgraphs.
• Girvan-Newman algorithm [2]: The Girvan-Newman algorithm uses the number of shortest paths going through a node to compute the essentiality of an edge which can then be used to cluster the network.
• Spectral partitioning algorithm
In this section we will look in detail at the spectral partitioning algorithm. We refer the reader to the references [2, 5] for a description of the other algorithms.
The spectral partitioning algorithm relies on a certain way of representing a network using a matrix. Before presenting the algorithm we will thus review how to represent a network using a matrix, and how to extract information about the network using matrix operations.
Figure 21.6
Algebraic view to networks
Adjacency matrix One way to represent a network is using the so-called adjacency matrix. The adjacency matrix of a network with n nodes is an n $\times$ n matrix A where Ai,j is equal to one if there is an edge between nodes i and j, and 0 otherwise. For example, the adjacency matrix of the graph represented in figure 21.6b is given by:
$A=\left[\begin{array}{lll} 0 & 0 & 1 \ 0 & 0 & 1 \ 1 & 1 & 0 \end{array}\right] \nonumber$
If the network is weighted (i.e., if the edges of the network each have an associated weight), the definition of the adjacency matrix is modified so that Ai,j holds the weight of the edge between i and j if the edge exists, and zero otherwise.
Laplacian matrix For the clustering algorithm that we will present later in this section, we will need to count the number of edges between the two different groups in a partitioning of the network. For example, in Figure 21.6a, the number of edges between the two groups is 1. The Laplacian matrix which we will introduce now comes in handy to represent this quantity algebraically. The Laplacian matrix L of a network on n nodes is an n $\times$ n matrix L that is very similar to the adjacency matrix A except for sign changes and for the diagonal elements. Whereas the diagonal elements of the adjacency matrix are always equal to zero (since we do not have self-loops), the diagonal elements of the Laplacian matrix hold the degree of each node (where the degree of a node is defined as the number of edges incident to it). Also the off-diagonal elements of the Laplacian matrix are set to be -1 in the presence of an edge, and zero otherwise. In other words, we have:
$L_{i, j}=\left\{\begin{array}{ll} \operatorname{degree}(i) & \text { if } i=j \ -1 & \text { if } i \neq j \text { and there is an edge between } i \text { and } j \ 0 & \text { if } i \neq j \text { and there is no edge between } i \text { and } j \end{array}\right. \nonumber$
For example the Laplacian matrix of the graph of figure 21.6b is given by (we emphasized the diagonal elements in bold):
$L=\left[\begin{array}{ccc} 1 & 0 & -1 \ 0 & 1 & -1 \ -1 & -1 & 2 \end{array}\right] \nonumber$
Some properties of the Laplacian matrix The Laplacian matrix of any network enjoys some nice properties that will be important later when we look at the clustering algorithm. We briefly review these here.
The Laplacian matrix L is always symmetric, i.e., Li,j = Lj,i for any i,j. An important consequence of this observation is that all the eigenvalues of L are real (i.e., they have no complex imaginary part). In fact one can even show that the eigenvalues of L are all nonnegative8 The final property that we mention about L is that all the rows and columns of L sum to zero (this is easy to verify using the definition of L). This means that the smallest eigenvalue of L is always equal to zero, and the corresponding eigenvector is s = (1,1,...,1).
Counting the number of edges between groups using the Laplacian matrix Using the Laplacian matrix we can now easily count the number of edges that separate two disjoint parts of the graph using simple matrix operations. Indeed, assume that we partitioned our graph into two groups, and that we define a vector s of size n which tells us which group each node i belongs to:
$s_{i}=\left\{\begin{array}{ll} 1 & \text { if node } i \text { is in group } 1 \ -1 & \text { if node } i \text { is in group } 2 \end{array}\right. \nonumber$
Then one can easily show that the total number of edges between group 1 and group 2 is given by the quantity $\frac{1}{4} s^{T}$ Ls where L is the Laplacian of the network.
To see why this is case, let us first compute the matrix-vector product Ls. In particular let us fix a node i say in group 1 (i.e., si = +1) and let us look at the i’th component of the matrix-vector product Ls. By definition of the matrix-vector product we have:
$(L s)_{i}=\sum_{j=1}^{n} L_{i, j} s_{j} \nonumber$
We can decompose this sum into three summands as follows:
$(L s)_{i}=\sum_{j=1}^{n} L_{i, j} s_{j}=L_{i, i} s_{i}+\sum_{j \text { in group } 1} L_{i, j} s_{j}+\sum_{j \text { in group } 2} L_{i, j} s_{j} \nonumber$
Using the definition of the Laplacian matrix we easily see that the first term corresponds to the degree of i, i.e., the number of edges incident to i; the second term is equal to the negative of the number of edges connecting i to some other node in group 1, and the third term is equal to the number of edges connecting i to some node in group 2. Hence we have:
(Ls)i = degree(i) - (# edges from i to group 1) + (# edges from i to group 2)
Now since any edge from i must either go to group 1 or to group 2 we have:
degree(i) = (# edges from i to group 1) + (# edges from i to group 2).
Thus combining the two equations above we get:
(Ls)i = 2 × (# edges from i to group 2).
Now to get the total number of edges between group 1 and group 2, we simply sum the quantity above
over all nodes i in group 1:
$\text { (# edges between group 1 and group 2) }=\frac{1}{2} \sum_{i \text { in group } 1}(L s)_{i} \nonumber$
We can also look at nodes in group 2 to compute the same quantity and we have:
$\text { (# edges between group 1 and group 2) }=-\frac{1}{2} \sum_{i \text { in group 2 }}(L s)_{i} \nonumber$
Now averaging the two equations above we get the desired result:
\begin{aligned} \text { (# edges between group 1 and group 2) } &=\frac{1}{4} \sum_{i \text { in group 1 }}(L s)_{i}-\frac{1}{4} \sum_{i \text { in group 2 }}(L s)_{i} \ &=\frac{1}{4} \sum_{i} s_{i}(L s)_{i} \ &=\frac{1}{4} s^{T} L s \end{aligned} \nonumber
where sT is the row vector obtained by transposing the column vector s.
The spectral clustering algorithm
We will now see how the linear algebra view of networks given in the previous section can be used to produce a “good” partitioning of the graph. In any good partitioning of a graph the number of edges between the two groups must be relatively small compared to the number of edges within each group. Thus one way of addressing the problem is to look for a partition so that the number of edges between the two groups is minimal. Using the tools introduced in the previous section, this problem is thus equivalent to finding a vector $s \in\{-1,+1\}^{n}$ taking only values -1 or +1 such that $\frac{1}{4} s^{T}$ Ls is minimal, where L is the Laplacian matrix of the graph. In other words, we want to solve the minimization problem:
$\operatorname{minimize}_{s \in\{-1,+1\}^{n}} \frac{1}{4} s^{T} L s \nonumber$
If s* is the optimal solution, then the optimal partioning is to assign node i to group 1 if si = +1 or else to group 2.
This formulation seems to make sense but there is a small glitch unfortunately: the solution to this problem will always end up being s = (+1,...,+1) which corresponds to putting all the nodes of the network in group 1, and no node in group 2! The number of edges between group 1 and group 2 is then simply zero and is indeed minimal!
To obtain a meaningful partition we thus have to consider partitions of the graph that are nontrivial. Recall that the Laplacian matrix L is always symmetric, and thus it admits an eigendecomposition:
$L=U \Sigma U^{T}=\sum_{i=1}^{n} \lambda_{i} u_{i} u_{i}^{T}\nonumber$
where $\Sigma$ is a diagonal matrix holding the nonnegative eigenvalues $\lambda_{1}, \ldots, \lambda_{n}$ of L and U is the matrix of eigenvectors, which satisfies $U^{T}=U^{-1}$.
The cost of a partitioning $s \in\{-1,+1\}^{n}$ is given by
$\frac{1}{4} s^{T} L s=\frac{1}{4} s^{T} U \Sigma U^{T} s=\frac{1}{4} \sum_{i=1}^{n} \lambda_{i} \alpha_{i}^{2}\nonumber$
where $\alpha=U^{T}s$ give the decomposition of s as a linear combination of the eigenvectors of L: $s=\sum_{i=1}^{n} \alpha_{i} u_{i}$.
Recall also that $0=\lambda_{1} \leq \lambda_{2} \leq \cdots \leq \lambda_{n}$. Thus one way to make the quantity above as small as possible (without picking the trivial partitioning) is to concentrate all the weight on $\lambda_{2}$ which is the smallest nonzero eigenvalue of L. To achieve this we simply pick s so that $\alpha_{2}$ =1 and $\alpha_{k}$ = 0 for all k $\neq$ 2. In other words, this corresponds to taking s to be equal to u2 the second eigenvector of L. Since in general the eigenvector u2 is not integer-valued (i.e., the components of u2 can be different from -1 or +1), we first have to convert the vector u2 into a vector of +1’s and -1’s. A simple way of doing this is just to look at the signs of the components of u2 instead of the values themselves. Our partition is thus given by:
$s=\operatorname{sign}\left(u_{2}\right)=\left\{\begin{array}{ll} 1 & \text { if }\left(u_{2}\right)_{i} \geq 0 \ -1 & \text { if }\left(u_{2}\right)_{i}<0 \end{array}\right.\nonumber$
To recap, the spectral clustering algorithm works as follows:
Spectral partitioning algorithm
• Input: a network
• Output: a partitioning of the network where each node is assigned either to group 1 or group 2 so that the number of edges between the two groups is small
1. Compute the Laplacian matrix L of the graph given by:
$L_{i, j}=\left\{\begin{array}{ll} \operatorname{degree}(i) & \text { if } i=j \ -1 & \text { if } i \neq j \text { and there is an edge between } i \text { and } j \ 0 & \text { if } i \neq j \text { and there is no edge between } i \text { and } j \end{array}\right.\nonumber$
2. Compute the eigenvector u2 for the second smallest eigenvalue of L.
3. Output the following partition: Assign node i to group 1 if (u2)i $\geq$0, otherwise assign node i to group 2.
We next give an example where we apply the spectral clustering algorithm to a network with 8 nodes.
Example We illustrate here the partitioning algorithm described above on a simple network of 8 nodes given in figure 21.7. The adjacency matrix and the Laplacian matrix of this graph are given below:
$A=\left[\begin{array}{llllllll} 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \ 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \ 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \ 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \end{array}\right] \quad L=\left[\begin{array}{rrrrrr} 3 & -1 & -1 & -1 & 0 & 0 & 0 & 0 \ -1 & 3 & -1 & -1 & 0 & 0 & 0 & 0 \ -1 & -1 & 3 & -1 & 0 & 0 & 0 & 0 \ -1 & -1 & -1 & 4 & -1 & 0 & 0 & 0 \ 0 & 0 & 0 & -1 & 4 & -1 & -1 & -1 \ 0 & 0 & 0 & 0 & -1 & 3 & -1 & -1 \ 0 & 0 & 0 & 0 & -1 & -1 & 3 & -1 \ 0 & 0 & 0 & 0 & -1 & -1 & -1 & 3 \end{array}\right] \nonumber$
Using the eig command of Matlab we can compute the eigendecomposition $L=U \Sigma U^{T}$ of the Laplacian matrix and we obtain:
$U=\left[\begin{array}{rrrrrrr} 0.3536 & -0.3825 & 0.2714 & -0.1628 & -0.7783 & 0.0495 & -0.0064 & -0.1426 \ 0.3536 & -0.3825 & 0.5580 & -0.1628 & 0.6066 & 0.0495 & -0.0064 & -0.1426 \ 0.3536 & -0.3825 & -0.4495 & 0.6251 & 0.0930 & 0.0495 & -0.3231 & -0.1426 \ 0.3536 & -0.2470 & -0.3799 & -0.2995 & 0.0786 & -0.1485 & 0.3358 & 0.6626 \ 0.3536 & \mathbf{0 . 2 4 7 0} & -0.3799 & -0.2995 & 0.0786 & -0.1485 & 0.3358 & -0.6626 \ 0.3536 & \mathbf{0 . 3 8 2 5} & 0.3514 & 0.5572 & -0.0727 & -0.3466 & 0.3860 & 0.1426 \ 0.3536 & \mathbf{0 . 3 8 2 5} & 0.0284 & -0.2577 & -0.0059 & -0.3466 & -0.7218 & 0.1426 \ 0.3536 & \mathbf{0 . 3 8 2 5} & 0.0000 & 0.0000 & 0.0000 & 0.8416 & -0.0000 & 0.1426 \end{array}\right]\nonumber$
$\Sigma=\left[\begin{array}{llllllll} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0.3542 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 4.0000 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 4.0000 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 4.0000 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 4.0000 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 4.0000 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5.6458 \end{array}\right]\nonumber$
We have highlighted in bold the second smallest eigenvalue of L and the associated eigenvector. To cluster the network we look at the sign of the components of this eigenvector. We see that the first 4 components are negative, and the last 4 components are positive. We will thus cluster the nodes 1 to 4 together in the same group, and nodes 5 to 8 in another group. This looks like a good clustering and in fact this is the “natural” clustering that one considers at first sight of the graph.
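The whole procedure fits in a few lines of numpy; the sketch below reproduces the 8-node example above (up to a possible swap of the two group labels, since an eigenvector is only defined up to sign). The library calls are our own choice, not part of the original text.

    import numpy as np

    # Adjacency matrix of the 8-node example above.
    A = np.array([
        [0, 1, 1, 1, 0, 0, 0, 0],
        [1, 0, 1, 1, 0, 0, 0, 0],
        [1, 1, 0, 1, 0, 0, 0, 0],
        [1, 1, 1, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 1, 1, 1],
        [0, 0, 0, 0, 1, 0, 1, 1],
        [0, 0, 0, 0, 1, 1, 0, 1],
        [0, 0, 0, 0, 1, 1, 1, 0]], dtype=float)

    L = np.diag(A.sum(axis=1)) - A         # Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
    u2 = eigvecs[:, 1]                     # eigenvector of the second smallest eigenvalue
    group = np.where(u2 >= 0, 1, 2)        # sign of u2 gives the partition
    # group places nodes 1-4 in one cluster and nodes 5-8 in the other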
Did You Know?
The mathematical problem that we formulated as a motivation for the spectral clustering algorithm is to find a partition of the graph into two groups with a minimal number of edges between the two groups. The spectral partitioning algorithm we presented does not always give an optimal solution to this problem but it usually works well in practice.
Actually it turns out that the problem as we formulated it can be solved exactly using an efficient algorithm. The problem is sometimes called the minimum cut problem since we are looking to cut a minimum number of edges from the graph to make it disconnected (the edges we cut are those between group 1 and group 2). The minimum cut problem can be solved in polynomial time in general, and we refer the reader to the Wikipedia entry on minimum cut [9] for more information. The problem, however, with minimum cut partitions is that they usually lead to partitions of the graph that are not balanced (e.g., one group has only 1 node, and the remaining nodes are all in the other group). In general one would like to impose additional constraints on the clusters (e.g., lower or upper bounds on the size of clusters, etc.) to obtain more realistic clusters. With such constraints, the problem becomes harder, and we refer the reader to the Wikipedia entry on Graph partitioning [8] for more details.
FAQ
Q: How to partition the graph into more than two groups?
A: In this section we only looked at the problem of partitioning the graph into two clusters. What if we want to cluster the graph into more than two clusters? There are several possible extensions of the algorithm presented here to handle k clusters instead of just two. The main idea is to look at the k eigenvectors for the k smallest nonzero eigenvalues of the Laplacian, and then to apply the k-means clustering algorithm appropriately. We refer the reader to the tutorial [7] for more information.
8One way of seeing this is to notice that L is diagonally dominant and the diagonal elements are strictly positive (for more details the reader can look up “diagonally dominant” and “Gershgorin circle theorem” on the Internet).
Bibliography
[1] R. Albert. Scale-free networks in cell biology. Journal of cell science, 118(21):4947–4957, 2005.
[2] M. Girvan and M.E.J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821–7826, 2002.
[3] O. Hein, M. Schwind, and W. König. Scale-free networks: The impact of fat tailed degree distribution on diffusion and communication processes. Wirtschaftsinformatik, 48(4):267–275, 2006.
[4] T.I. Lee, N.J. Rinaldi, F. Robert, D.T. Odom, Z. Bar-Joseph, G.K. Gerber, N.M. Hannett, C.T. Harbi- son, C.M. Thompson, I. Simon, et al. Transcriptional regulatory networks in saccharomyces cerevisiae. Science Signalling, 298(5594):799, 2002.
[5] S.M. van Dongen. Graph clustering by flow simulation. PhD thesis, University of Utrecht, The Netherlands, 2000.
[6] M. Vidal, M.E. Cusick, and A.L. Barabasi. Interactome networks and human disease. Cell, 144(6):986– 998, 2011.
[7] U. Von Luxburg. A tutorial on spectral clustering. Statistics and computing, 17(4):395–416, 2007.
[8] Wikipedia. Graph partitioning, 2012.
[9] Wikipedia. Minimum cut, 2012.
[10] Wikipedia. Scale-free network, 2012.
In recent years, many subtle and previously disregarded mechanisms for fine genetic regulation have been discovered. Aside from direct regulation by proteins, these mechanisms include the involvement of non-protein coding regions of the genome, epigenomic factors such as histone modifications, and diverse RNA switches. The spatial organization of chromatin inside the nucleus, chromatin modifier complexes and its functional consequences have also become an area of interest. In this chapter, we will delve into the study of 3D chromatin structures, starting with the state of the art in this field, the most relevant terminology and current methods. Specifically, we will focus on the study of DNA regions located in peripheral regions of the nucleus (thus in close contact with the nuclear lamina). Finally, we will discuss the computational methods involved in studying nuclear genome organization.
What’s already known
DNA is locally compacted in nucleosomes, by wrapping around histone octamers. Each nucleosome comprises about 147 bps packed in 1.67 left-handed superhelical turns. DNA is globally compacted as chromosomes (during cell division and mitosis). Chromosomes have been dyed with different colors, and it has been shown that some chromosomes have radial preferences within the cell nucleus, even when the cell is not actively undergoing division and the chromosomes are not condensed. That is, some chromosomes prefer to stay near the center of the nucleus while others tend towards the periphery. These are known as chromosome territories (CT). The territories of homologous chromosomes usually do not lie near one another. It is also known that there is an overarching nuclear ’architecture’ that is observable and conserved even between different cell types.
What we don’t know
While local organization (nucleosome packaging) and global organization (chromosome condensation) of DNA are somehow understood, the intermediate structures of DNA are not yet well characterized - many speculated states have only been observed in vitro. The positioning of genomic regions in the nucleus on a sub-chromosomal level, for example the specific 3D conformation of a certain chromosomal region containing several genes, is also largely unknown.
While it is known that chromosomes retain some general architecture during the entire cell cycle, it is unknown how that location is maintained and how different chromosomes continue to interact over the course of the entire cell cycle.
Together, although we do understand certain parts of the function of chromosomes, we do not have a complete mechanistic understanding of this process.
Why do we study it?
In general, we are interested in understanding the functional characteristics of genomic regions and the molecular mechanisms encoded within, which might have implications in human diseases. Particularly, it has been shown that genes that are encoded in spatially neighboring regions are likely to be co-regulated. Also, the DNA packed inside the nucleus is the equivalent of wrapping 20 km of 20 μm thick thread in something the size of a tennis ball, which would reach from Kendall Square to Harvard and back over 6 and a half times! Isn't this amazing??
22.02: Relevant terminology
Lamina Associated Domains(LADs)
Lamina associated domains (LADs) are the portions of the chromatin that interact with the nuclear lamina. Mapping the interactions between the chromatin and the nuclear lamina provides insight towards mapping chromosome folding. While not much is known about LADs, it is known that these regions are associated with low gene expression and low gene density. Additionally, LAD borders are associated with CTCF binding sites, promoters, and CpG islands.
Histones
Histones are highly alkaline proteins found in eukaryotic cells that comprise the core of nucleosomes, packaging and ordering the nuclear DNA. An octamer formed by two copies each of the core histones H2A, H2B, H3, and H4 forms the nucleosome, which acts as a spool for DNA to wind around.
Chromatin
Chromatin is a complex formed by DNA, proteins, and RNA that generates the global architecture of DNA in eukaryotic nuclei. Its main functions involve packaging DNA, reinforcing the DNA macromolecule to allow mitosis, preventing DNA damage, and regulating gene expression and DNA replication. Most of the mechanisms underlying the formation and regulation of chromatin structure are largely unknown; however, during cell division, chromatin organizes into chromosomes.
Chromosome territories (CT)
Chromosomes are not randomly distributed throughout the nucleus; rather, each chromosome tends to occupy a specific region of the nucleus. These regions are called chromosome territories.
There are two main types of methods for investigating the three-dimensional structure of chromatin in the nucleus.
• The first set of methods, ChIP and DamID, are methods that measure DNA-’landmark’ interactions. That is, they measure interactions of genome loci with relatively fixed nuclear landmarks, and only regions of the genome that come into contact with the nuclear lamina will be identified.
• The second set of methods, the 3C-based methods, are those that measure DNA-DNA interactions. Any two regions of DNA that interact may be identified, regardless of whether they are near the interior or periphery of the nucleus.
Methods for measuring DNA-Nuclear Lamina interactions
The following methods, ChIP and DamID, both examine regions of DNA that specifically come into contact with the nuclear lamina.
ChIP: Chromatin Immuno Precipitation
ChIP is a method for detecting regions of DNA that are bound to proteins of interest. Proteins bound to DNA are cross-linked in place with formaldehyde. The protein-DNA complexes are pulled down using affinity chromatography, typically using specific antibodies that target the protein of interest. The recovered complexes are then dissociated, the cross-links are broken, and the DNA that was bound to the proteins is fragmented and analyzed. DNA fragments can then be analyzed using sequencing (ChIP-Seq) or microarrays (ChIP-chip). However, a big challenge associated with the various ChIP techniques is that it can be difficult to obtain a high-affinity antibody. To study the 3D structure of DNA within the nucleus, ChIP-Seq can be used with antibodies that target lamina proteins.
DamID: DNA adenine methyltransferase IDentification
DamID is used to map the binding sites of chromatin-binding proteins. In the DamID method, DNA adenine methyltransferase (Dam) from E. coli is fused to the LaminB1 protein (the Dam enzyme hangs off the end of the protein and is thus in the vicinity for interactions). In E. coli, the Dam enzyme methylates the adenine in the sequence GATC; bacterial genomes contain proteins with functions like Dam to protect their own DNA from digestion by restriction enzymes, or as part of their DNA repair systems. As this process does not naturally occur in eukaryotes, the methylated adenines in a region can be attributed to an interaction with the protein fused to Dam, thereby implying that that particular region came into close contact with the nuclear lamina. As a control, unfused Dam can be expressed at low levels. This results in a sparse distribution of methylated adenines, for which the precise positions of the methylated adenines can be used to infer variation in DNA accessibility. The methylated adenines are detected using PCR assays that are sensitive to methylation in the template DNA. In one such assay, the genome is digested by DpnI, which only cuts methylated GATC sequences. Adapter sequences are then ligated to the ends of these digested pieces, and PCR is run using primers matching the adapters. Only the regions occurring between proximal GATC positions are amplified. The final measurement is the log of the ratio between test and control lamina association: positive values are preferentially lamina-associated, and are thus identified as LADs. One advantage of using DamID over ChIP is that DamID does not require a specific antibody, which may be difficult to find. However, a disadvantage of using DamID is that the fusion protein must be made and expressed.
FAQ
Q: How close does DNA have to come to DamID to be methylated?
A: It doesn’t have to bind directly to the lamina, but it does have to come pretty close. DamID has a range of about 1.5kb.
Measuring DNA-DNA contacts
All of the following methods are based on Chromosome Conformation Capturing (3C) with certain modifications.
3C
Chromosome Conformation Capturing (3C) is a method that detects which genomic loci are in close vicinity to other loci within the nucleus. Similar to the ChIP method, a cross-linking agent is used to freeze proteins bound to DNA in place, forming protein-DNA complexes. The DNA can then be digested by a restriction enzyme after allowing the bound protein to disassociate. Typically an enzyme with a 6 bp recognition site that leaves sticky ends, like HindIII, is used. The generated fragments are then induced to self-ligate (using a very low concentration of DNA to prevent ligation of a fragment with another random fragment). The result is a pool of linear DNA fragments, known as the 3C library, that may be analyzed via PCR by designing primers specifically for the interaction of interest. 3C can be described as a 'one vs one' method, because the primers used are specifically targeted to amplify the product of the interaction between two regions of interest.
Circularized Chromatin Conformation Capture (4C)
4C methods can be described as 'one vs all' because, for a single region of interest, we can examine its interactions with all other regions in the genome. 4C works similarly to 3C, with the main difference being the restriction enzyme used. In 4C, a common cutter is employed to generate more and smaller fragments. These fragments are then again ligated. Some smaller fragments may be excluded, but the result is a circularized fragment of DNA. Primers can be designed to amplify the 'unknown' fragment of DNA so that all interactions with the region of interest are identified.
Carbon-Copy Chromosome Conformation Capture (5C)
5C is a ’many vs many’ method and allows the identification of interactions between many regions of interest and many other regions, also of interest, to be analyzed at once. 5C works similarly to 3C. However, after obtaining the 3C library, multiplex ligation-mediated amplification (LMA) is performed. LMA is a method in which multiple targets are amplified. The resulting 5C library may be analyzed on a microarray or high-throughput sequencing.
Hi-C
Hi-C can be described as an ’all vs all’ method because it identifies all chromatin interactions. Hi-C works by labeling all DNA fragments with biotin before ligation, which marks all the ligation junctions. Magnetic beads are then used to purify the biotin-marked junctions. This Hi-C library may then be fed into next generation sequencing.
ChIP-loop
ChIP-loop can be described as a 'one vs one' method because, similar to 3C, only an interaction between two regions of interest may be identified. ChIP-loop is a hybrid between ChIP and the 3C methods. DNA-protein complexes are first cross-linked and digested. Then, as in ChIP, the protein of interest and the DNA bound to it are pulled down using an antibody. The protocol then proceeds as in 3C: the free ends of the fragments are ligated, the cross-links are reversed, and sequencing can proceed using primers designed specifically for a 'one vs one' interaction.
ChIA-PET
Chromatin Interaction Analysis by Paired-End Tag Sequencing, or ChIA-PET, combines the ChIP and 3C methodologies to determine long-range chromatin interactions genome-wide. It can be described as an 'all vs all' method because, although a single protein of interest must be chosen, any interactions mediated by that protein will be identified. In ChIA-PET, DNA-protein complexes are cross-linked, as in previously discussed methods. However, sonication is then used to break up chromatin and to reduce non-specific interactions. As in the ChIP protocol, an antibody is used to pull down regions of DNA bound to a protein of interest. Two different oligonucleotide linkers are then ligated to the free ends of the DNA. These linkers both have MmeI cut sites. The linkers are then ligated together so that the free ends are connected, after which the fragments are digested with MmeI. MmeI cuts 20 nt downstream of its recognition sequence, so the result of the digestion is the linker bordered by the sequence of interest on either side. This is a 'tag-linker-tag' structure, and the fragments are known as PETs. The PETs may be sequenced and mapped back to the genome to determine regions of interacting DNA.
In this section, we will present how the DamID and Hi-C methods were used to map lamina-associated domains in the genome.
Interpreting DamID Data
The DamID method (described in section 3) was used to identify regions of DNA that interacted with the lamin protein at the nuclear lamina.
Did You Know?
DamID experiments typically run for 24 hours and the methylation is irreversible. The results are also the average over millions of cells. Therefore, DamID is not suitable for exact time-resolved positioning of the genome, though single cell studies may soon address this issue!
The results of the DamID experiment were plotted as $\log _{2} \frac{\text {Dam fusion protein}}{\text {Dam only}}$, as done in the figure below (black peaks). For the LaminB1 fusion experiment, positive regions (underlined in yellow in the figure below) indicate regions which preferentially associate with the nuclear lamina. These positive regions are defined as Lamina Associated Domains, or LADs. Approximately 1300 LADs were discovered in human fibroblasts. They were surprisingly large, ranging from about 0.1 Mb to 10 Mb, with a median size of 5 Mb.
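The LAD-calling step can be sketched computationally. Below is a minimal Python sketch with made-up probe counts: it computes the log2 ratio of the Dam-fusion signal over the Dam-only control and reports contiguous runs of positive probes as candidate LADs. (In practice a Hidden Markov Model, mentioned later in this chapter, is used rather than this simple run-length rule.)

```python
import numpy as np

# Hypothetical per-probe counts along a chromosome (assumed, illustrative data).
dam_lamin = np.array([12, 30, 45, 40, 8, 5, 50, 60, 55, 4], dtype=float)
dam_only  = np.array([20, 10, 11, 12, 16, 15, 9, 10, 12, 18], dtype=float)

# DamID signal: log2 ratio of fusion over control (pseudocount avoids log of zero).
signal = np.log2((dam_lamin + 1) / (dam_only + 1))

# Call candidate LADs as maximal runs of consecutive probes with positive signal.
lads, start = [], None
for i, s in enumerate(signal):
    if s > 0 and start is None:
        start = i
    elif s <= 0 and start is not None:
        lads.append((start, i - 1))
        start = None
if start is not None:
    lads.append((start, len(signal) - 1))

print("log2 ratios:", np.round(signal, 2))
print("candidate LADs (probe index ranges):", lads)
```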
FAQ
Q: In this representation is a value of 0 significant?
A: No, there is definitely a point where we do not know where the real 0 is. Instead, we can try to make a good estimate of where the 0 value should be, in order to see relative preference (the interior vs exterior of the nucleus)
After LADs have been identified, we can align their boundaries and overlay features such as known gene densities or gene expression levels onto the data to build our LAD model. Experiments have shown that LADs are characterized by low gene density and low gene expression levels. It was noticed that the LAD boundaries are very sharply defined. By aligning the start positions of many LADs, it was discovered that the borders are particularly marked by CpG islands, outward-pointing promoters, and CTCF binding sites.
FAQ
Q: Why CTCF binding sites? What’s so important about them?
A: That’s the question! Perhaps they help maintain LADs. They could perhaps prevent the LADs from ’spreading’.
FAQ
Q: How does organization relate to Polycomb repression?
A: There is certainly something going on; however, Polycomb repression works on a smaller scale than LADs. It occurs outside of LADs, as an additional repression mechanism.
Interpreting Hi-C Data
Hi-C data was collected, and the reads were mapped back to the genome. The read counts were compiled into a matrix O (shown below for chromosome 14) where the element Oi,j indicates the number of reads corresponding to an interaction between positions i and j. A strong diagonal is clearly present, indicating that regions that are close together in 1D are also likely to interact in 3D. Errors in Hi-C data interpretation may occur when the assumptions of the technique are violated: for example, the assumption that the reference genome is correct, which may not be true in the case of a cancerous cell. The matrix was then normalized to account for the genomic distance between two regions, yielding a matrix indicating which interactions are enriched or depleted in the data. In order to compare the data in the matrix, which is two dimensional, to genomic data sets, which are one dimensional, Principal Component Analysis (PCA) is used. After PCA, functional characterization of the data is possible. Hi-C identified two global types of regions:
• Type A, which is characterized by open chromatin, gene richness, and active chromatin marks.
• Type B, which is characterized by closed chromatin, and is gene poor.
Both types of regions are primarily self-interacting, and interactions between the two types are infrequent. Hi-C also confirmed the presence of chromosome territories, as there were far more intra-chromosomal than inter-chromosomal interactions.
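A minimal sketch of this workflow, using a small synthetic contact matrix rather than real Hi-C data: observed counts are divided by the expected counts at each genomic distance, the resulting matrix is converted to a correlation matrix, and the sign of the first principal component splits bins into two compartment types (assigning which is "A" and which is "B" requires an external mark such as gene density, not shown here).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Synthetic symmetric contact matrix: distance decay plus two alternating block types.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) + 1
block = np.where((np.arange(n) // 10) % 2 == 0, 1, -1)
O = 100.0 / dist * (1 + 0.3 * np.outer(block, block)) + rng.random((n, n))
O = (O + O.T) / 2

# Expected contacts at each genomic distance: mean over all bin pairs separated by d.
expected = np.array([O[dist == d].mean() for d in range(1, n + 1)])
E = expected[dist - 1]

# Observed/expected log ratio, then Pearson correlation between bin profiles.
oe = np.log(O / E)
corr = np.corrcoef(oe)

# First principal component = leading eigenvector of the correlation matrix.
vals, vecs = np.linalg.eigh(corr)
pc1 = vecs[:, np.argmax(vals)]
compartment = np.where(pc1 > 0, "type1", "type2")
print(compartment)
```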
Sources of Bias
The three steps that could potentially introduce bias are digestion, ligation, and sequencing. Digestion efficiency is a function of the restriction enzymes used; some regions of the genome may be less prone to digestion because the distribution of the particular recognition site is sparse there, while other regions may be enriched in the recognition site and will therefore be over-represented in the results. One solution is to use several different restriction enzymes and compare the results. Ligation efficiency is a function of the fragment lengths: depending on how the restriction enzymes cut the sequence, some ends may be more or less likely to ligate together. Finally, sequencing efficiency is a function of the composition of the sequence; some DNA strands are more difficult to sequence, based on GC richness and the presence of repeats, which introduces bias.
Bias Correction
To minimize ligation bias, non-specific ligation products are removed. Since non-specific ligation products typically have far-away restriction sites, they produce much larger fragments. In addition, the influence of fragment length on ligation efficiency, $F_{l e n}\left(a_{l e n}, b_{l e n}\right)$, the influence of G/C content on amplification and sequencing, $F_{g c}\left(a_{g c}, b_{g c}\right)$, and the influence of sequence uniqueness on mappability, $M(a) \cdot M(b)$, can all be accounted for and corrected with the equation:
$P\left(X_{a, b}\right)=P_{\text {prior }} \cdot F_{l e n}\left(a_{l e n}, b_{l e n}\right) \cdot F_{g c}\left(a_{g c}, b_{g c}\right) \cdot M(a) \cdot M(b)\nonumber$
Alternatively, the sources of bias can be represented less explicitly by the equation:
$O_{i, j}=B_{i} \cdot B_{j} \cdot T_{i, j}\nonumber$
where the sum of all relative contact probabilities $T_{i, j}$ for each bin equals 1. The biases are assumed to be purely multiplicative. This problem is solved by matrix balancing (proportional fitting) using an iterative correction algorithm.
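A minimal sketch of such an iterative correction, assuming a raw symmetric count matrix O: at each pass every row and column is divided by its (normalized) total, so that the multiplicative biases $B_i$ are progressively absorbed and each bin ends up with approximately equal total contact weight.

```python
import numpy as np

def iterative_correction(O, n_iter=50):
    """Simple ICE-style balancing: returns T with ~equal row sums and the bias vector."""
    T = O.astype(float).copy()
    bias = np.ones(T.shape[0])
    for _ in range(n_iter):
        row_sums = T.sum(axis=1)
        row_sums /= row_sums.mean()          # keep the overall scale fixed
        bias *= row_sums
        T /= np.outer(row_sums, row_sums)    # divide out the current bias estimate
    return T, bias

# Toy example: a 3-bin matrix with one over-represented bin.
O = np.array([[10., 40., 20.],
              [40., 10., 30.],
              [20., 30., 10.]])
T, bias = iterative_correction(O)
print(np.round(T.sum(axis=1), 3))   # row sums become nearly equal
print(np.round(bias, 3))            # estimated multiplicative biases B_i
```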
3D-modeling of 3C-based data
3D-modeling can reveal many general principles of genome organization. Current models are generated using a combination of inter-locus interactions and known spatial distances between nuclear landmarks. However, a lot of uncertainty remains in current 3D-models because the data is gathered from millions of cells. The practical problems affecting 3D-modeling are due to the large amount of data necessary to construct models and the different dynamics between an individual cell and a population, which lead to unstable models. Next generation modeling is trending towards using single cell genomics.
22.06: Architecture of Genome Organization
Multiple cell types influence on determining architecture
Embryonic stem cells (ESCs), neural progenitor cells (NPCs), and astrocytes (ACs) are all isogenic cell types (they all start as embryonic stem cells). Embryonic stem cells are constantly dividing and are completely undifferentiated; they generate the neural progenitor cells, which are still dividing but less so, and are only halfway differentiated. The neural progenitor cells then generate the completely differentiated astrocytes. It was discovered that during this differentiation process, some areas switched from being lamina associated domains to being interior domains. In the embryonic stem cells, there is very little transcription. However, transcription goes up as the cells become more and more differentiated. This matches the relocalization of the domains from being primarily associated with the lamina (and thus not expressed) to being localized to the interior. Even though these cell types each have very different properties, a DamID map shows a high level of similarity between the three isogenic cell types as well as an independent fibroblast cell line. Hidden Markov Models were employed to identify the lamina associated domains in each cell type. A core chromosome architecture was found, with about 70% of the chromosome being constitutive (cLAD/ciLAD) and 30% being facultative (fLAD).
Inter-species comparison of lamina associations
To determine lamina associations between species, a mouse and a human were used. A genome wide alignment was constructed between a mouse and a human. For each genomic region in the mouse, the best reciprocal region was matched in the human. Then the human genome was re-mapped, and used to reconstruct a mouse genome. DamID data was projected onto this map and there was 83% concordance between the two genomes (91% for the constitutive regions; 67% for the facultative regions).
A-T Content Rule
A-T content has been found to be a strong predictor of lamina association within the core architecture. Additional support for this prediction is that the LAD structure that makes up the core architecture is similar to an isochore structure (a large region of DNA with uniform base composition).
The organization of chromosomes, particularly the spatial relation of one part of a chromosome to other parts, is not well understood during the mitotic process. Chromosome conformation is thought to adopt two different states. Highly compartmentalized and cell-type-specific conformations are almost entirely limited to interphase. During the transition into metaphase, chromosomes enter a locus- and tissue-independent folding state.
At any given time, approximately 30% of LADs are positioned along the nuclear periphery. This positioning reflects intermittent protein-lamina contacts rather than permanent attachment, but the contacting regions remain restricted to the periphery of the nucleus. After mitotic division, this laminar positioning is stochastically inherited by daughter cells.
Modeling
Three-dimensional modeling will be increasingly important in understanding chromosomal interactions. Current techniques have modeled the yeast genome and the α-globin locus (Duan et al., Nature (2010); Bau et al., Nature SMB (2011)). From modeling studies it has become clear that we cannot assume a direct relationship between contact probability and spatial distance (i.e. contact probability != spatial distance).
Modeling, however, is an inverse problem: it is harder to go one way than the other. Specifically, it is easier to go from a protein structure to a protein contact map than vice versa. Similarly, inferring chromosomal structure is a hard problem, even when we have a contact map.
22.08: Current Research Directions
LADs
(a) 30% of the genome is variable between cell types; how are these differences established?
(b) How do the lamina and LADs interact? Is there an attraction between LADs and these domains, or is the association based on repulsion from the interior?
(c) Why and how are the genes along the periphery of LADs repressed?
TADs and Other Compartments:
(a) What is the biological basis of compartments? Is there some multifaceted component to the compartments?
(b) How do cohesins work? Are cohesin-extrusion pairs enough to explain all domains?
(c) Are enhancer-promoter loops confined to specific domains? Are these dynamic components, or are they architectural loops mediated by CTCF?
Other/Miscellaneous:
(a) How do we relate the different chromosomal components (i.e. LADs, TADs, Polycomb domains, replication origins, histone modifications, gene expression)?
(b) Evolutionary basis of genomic architecture: was there evolutionary pressure, and when did the folding principles emerge?
(c) When chromosomal changes occur, do changes in localization or changes in expression happen first?
Did You Know?
This question has (partially) been addressed! In investigating cells that go through multiple rounds of differentiation, it has been observed that some regions will localize to the lamina in the first differentiation but won’t become repressed until the second differentiation!
Body Guard Hypothesis
The body guard hypothesis was proposed in 1975 by Hsu TC. It suggests that inactive DNA is localized to the periphery of the nucleus so that it can 'guard' the important, active regions of DNA from foreign dangers like viruses or free radicals. Attempts to test the hypothesis by introducing artificial DNA damage have produced inconclusive results, and the question remains open.
Single Cell Experiments
It is known that cells retain their original organization after mitosis, as shown by chromosome staining experiments. However, recent experiments have shown that there may be a large difference in organization between the parent and daughter cells. Certain global properties, like chromosome territories, are conserved, but organization at a finer detail may greatly differ. Single cell experimentation is an emerging technique that may be able to address this open question.
FAQ
Q: Has anyone tried increasing expression of a gene in the middle of a LAD? What happened?
A: It's unclear if there is a specific example of this; however, several related studies have been conducted. Researchers have tried to 'tether' a region of DNA to the nuclear lamina to see if it spontaneously becomes deactivated. However, the results were inconclusive: in half of the cases the region would become inactive, and in the other half it wouldn't! So far these types of manipulations haven't yielded much, but it was found that if a protein-devoid segment of DNA was digested and mixed with highly purified lamina proteins, the bound fragments revealed a very similar pattern to the LADs. This tells us that the lamina binds DNA directly. However, this does seem to vary between species.
Metabolic modeling allows us to use mathematical models to represent complex biological systems. This lecture discusses the role of modeling the steady state of biological systems in understanding the metabolic capabilities of organisms. We also briefly discuss how well steady state models are able to replicate in-vitro experiments.
What is Metabolism?
According to Matthews and van Holde, metabolism is the totality of all chemical reactions that occur in living matter. This includes catabolic reactions, which are reactions that lead to the breakdown of molecules into smaller components, and anabolic reactions, which are responsible for the creation of more complex molecules (e.g. proteins, lipids, carbohydrates, and nucleic acids) from smaller components. These reactions are responsible for the release of energy from chemical bonds and the storage of this energy. Metabolic reactions are also responsible for the transduction and transmission of information (for example, via the generation of cGMP as a secondary messenger or mRNA as a substrate for protein translation).
Why Model Metabolism?
An important application of metabolic modeling is in the prediction of drug effects. An important subject of modeling is the organism Mycobacterium tuberculosis [15]. The disruption of the mycolic acid synthesis pathways of this organism can help control TB infection. Computational modeling gives us a platform for identifying the best drug targets in this system. Gene knockout studies in Escherichia coli have allowed scientists to determine which genes and gene combinations affect the growth of this important model organism [6]. Both agreements and disagreements between models and experimental data can help us assess our knowledge of biological systems and help us improve our predictions about metabolic capabilities. In the next lecture, we will learn the importance of incorporating expression data into metabolic models. In addition, a variety of infectious disease processes involve metabolic changes at the microbial level.
23.02: Model Building
An overarching goal of metabolic modeling is the ability to take a schematic representation of a pathway and turn it into a mathematical model of the pathway. For example, converting the following pathway into a mathematical model would be incredibly useful.
Chemical Reactions
In metabolic models, we are concerned with modeling chemical reactions that are catalyzed by enzymes. Enzymes work by stabilizing the transition state of the enzyme-substrate complex, which lowers the activation energy of the chemical reaction. The diagram on slide 5 of page 1 of the lecture slides demonstrates this phenomenon. A typical rate equation (which describes the conversion of the substrates S of the enzyme reaction into its products P) can be described by a Michaelis-Menten rate law:
$\frac{V}{V_{\max }}=\frac{[S]}{K_{\mathrm{m}}+[S]}\nonumber$
In this equation, V is the rate of the reaction as a function of the substrate concentration [S]. It is clear that the parameters Km and Vmax are necessary to characterize the equation.
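For concreteness, the rate law can be evaluated directly once Km and Vmax are known; the numbers below are purely illustrative.

```python
import numpy as np

def michaelis_menten(S, Vmax, Km):
    """Reaction rate V as a function of substrate concentration [S]."""
    return Vmax * S / (Km + S)

S = np.array([0.1, 1.0, 5.0, 50.0])   # substrate concentrations (illustrative units)
print(michaelis_menten(S, Vmax=10.0, Km=1.0))
# At [S] = Km the rate is Vmax/2; at [S] >> Km it saturates near Vmax.
```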
The inclusion of multiple substrates, products, and regulatory relationships quickly increases the number of parameters necessary to characterize such equations. The figures on slides 1, 2, and 3 of page 2 of the lecture notes demonstrate the complexity of biochemical pathways. Kinetic modeling quickly becomes infeasible: the necessary parameters are difficult to measure, and also vary across organisms [10]. Thus, we are interested in a modeling method that would allow us to use a small number of precisely determined parameters. To this end, we recall the basic machinery of stoichiometry from general chemistry. Consider the chemical reaction A + 2B → 3C, which says that one unit of reactant A combines with 2 units of reactant B to form 3 units of reactant C. The rate of formation of a compound X is given by the time derivative of [X]. Note that C forms three times as fast as A is consumed. Therefore, due to the stoichiometry of the reaction, we see that the reaction rate (or reaction flux) is given by
$f l u x=-\frac{d[A]}{d t}=-\frac{1}{2} \frac{d[B]}{d t}=\frac{1}{3} \frac{d[C]}{d t} \nonumber$
This will be useful in the subsequent sections. We must now state the simplifying assumptions that make our model tractable.
Steady-State Assumption
The steady-state assumption states that there is no accumulation of any metabolite in the system. This allows us to represent reactions entirely in terms of their chemistry (i.e. the stoichiometric relationships between the components of the enzymatic reaction). Note that this does not imply the absence of flux through any given reaction. Rather, steady state actually implies two assumptions that are critical to simplifying metabolic modeling. The first is that the internal metabolite concentrations are constant, and the second is that the fluxes, i.e. input and output fluxes, are also constant.
An analogy is a series of waterfalls that contribute water to pools. As the water falls from one pool to another, the water levels do not change even though water continues to flow (see page 2 slide 5). This framework prevents us from being hindered by the overly complicated transient kinetics that can result from perturbations of the system. Since we are usually interested in long-term metabolic capabilities (functions on a scale longer than milliseconds or seconds), the steady state dynamics may give us all the information that we need.
The steady-state assumption makes it much more feasible to generalize across species and reuse conserved pathways in models. Reaction stoichiometries are often conserved across species, since they involve only conservation of mass. The biology of enzyme catalysis, and the parameters that characterize it, are not similarly conserved. These include species-dependent parameters such as the activation energy of a reaction, the substrate affinity of an enzyme, and the rate constants for various reactions. However, none of these are required for steady-state modeling.
It is also of interest to note that, since time constants for metabolic reactions are usually on the order of milliseconds, most measurement technologies used today are not able to capture these extremely fast dynamics. This is the case for metabolomics measurements based on mass spectrometry, for example. In this method, the amounts of all the internal metabolites in a system are measured at a given point in time, but measurements can be taken at best every hour. In the majority of circumstances, all that is ever measured is the steady state.
Reconstructing Metabolic Pathways
There are several databases that provide the information necessary to reconstruct metabolic pathways in silico. These databases allow reaction stoichiometry to be accessed using Enzyme Commission numbers. Reaction stoichiometries are the same in all the organisms that utilize a given enzyme. Among the databases of interest are ExPASy [5], MetaCyc [16], and KEGG [14]. These databases often contain pathways organized by function that can be downloaded in SBML format, making pathway reconstruction very easy for well-characterized pathways.
Metabolic flux analysis (MFA) is a way of computing the distribution of reaction fluxes that is possible in a given metabolic network at steady state. We can place constraints on certain fluxes in order to limit the space described by the distribution of possible fluxes. In this section, we will develop a mathematical formulation for MFA. Once again, this analysis is independent of the particular biology of the system; rather, it will only depend on the (universal) stoichiometries of the reactions in question.
Mathematical Representation
Consider a system with m metabolites and n reactions. Let xi be the concentration of substrate i, so that the rate of change of the substrate concentration is given by the time derivative of xi . Let x be the column vector (with m components) with elements xi . For simplicity, we consider a system with m = 4 metabolites A, B, C, and D. This system will consist of many reactions between these metabolites, resulting in a complicated balance between these compounds.
Once again, consider the simple reaction A + 2B $\rightarrow$ 3C. We can represent this reaction in vector form as (-1, -2, 3, 0). Note that the first two metabolites (A and B) have negative signs, since they are consumed in the reaction. Moreover, the elements of the vector are determined by the stoichiometry of the reaction, as in Section 2.1. We repeat this procedure for each reaction in the system. These vectors become the columns of the stoichiometric matrix S. If the system has m metabolites and n reactions, S will be an m × n matrix. Therefore, if we define v to be the n-component column vector of fluxes through each reaction, the vector Sv describes the rate of change of the concentration of each metabolite. Mathematically, this can be represented as the fundamental equation of metabolic flux analysis:
$\frac{d x}{d t}=S v\nonumber$
The matrix S is an extraordinarily powerful data structure that can represent a variety of possible scenarios in biological systems. For example, if two columns c and d of S have the property that c = -d, the columns represent a reversible reaction. Moreover, if a column has the property that only one component is nonzero, it represents an exchange reaction, in which there is a flux into (or from) a supposedly infinite sink (or source), depending on the sign of the nonzero component.
We now impose the steady-state assumption, which says that the left side of the above equation is identically zero. Therefore, we need to find vectors v that satisfy the criterion Sv = 0. Solutions to this equation determine the feasible fluxes for this system.
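A minimal sketch with an assumed toy network: each reaction contributes one column to S, and a candidate flux vector is at steady state exactly when S v = 0.

```python
import numpy as np

# Metabolites: A, B, C, D (rows). Toy reactions (columns):
#   R1:  -> A          (exchange flux into the system)
#   R2: A -> B
#   R3: B -> C + D
#   R4: C ->           (exchange flux out)
#   R5: D ->           (exchange flux out)
S = np.array([
    [ 1, -1,  0,  0,  0],   # A
    [ 0,  1, -1,  0,  0],   # B
    [ 0,  0,  1, -1,  0],   # C
    [ 0,  0,  1,  0, -1],   # D
])

v = np.array([2.0, 2.0, 2.0, 2.0, 2.0])   # candidate flux distribution
print(S @ v)                               # all zeros -> this v is a steady-state flux
```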
Null Space of S
The feasible flux space of the reactions in the model system is defined by the null space of S, as seen above. Recall from elementary linear algebra that the null space of a matrix is a vector space; that is, given two vectors y and z in the nullspace, the vector ay + bz (for real numbers a, b) is also in the null space. Since the null space is a vector space, there exists a basis bi, a set of vectors that is linearly independent and spans the null space. The basis has the property that for any flux v in the null space of S, there exist real numbers $\alpha$i such that
$v=\Sigma_{i} \alpha_{i} b_{i}\nonumber$
How do we find a basis for the null space of a matrix? A useful tool is the singular value decomposition (SVD) [4]. The singular value decomposition of a matrix S is defined as a representation $S=U \Sigma V^{*}$, where U is a unitary matrix of size m, V is a unitary matrix of size n, and $\Sigma$ is an m × n diagonal matrix with the (necessarily non-negative) singular values of S in descending order. (Recall that a unitary matrix is a matrix with orthonormal columns and rows, i.e. $U^{*} U=U U^{*}=I$, the identity matrix.) It can be shown that any matrix has an SVD. Note that the SVD can be rearranged into the equation $S v=\sigma u$, where u and v are columns of the matrices U and V and $\sigma$ is a singular value. Therefore, if $\sigma=0$, v belongs to the null space of S. Indeed, the columns of V that correspond to the zero singular values form an orthonormal basis for the null space of S. In this manner, the SVD allows us to completely characterize the possible fluxes for the system.
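A sketch of this computation using the toy stoichiometric matrix introduced above: the rows of V* (equivalently, columns of V) whose singular values are zero form an orthonormal basis of the null space of S.

```python
import numpy as np

S = np.array([
    [ 1, -1,  0,  0,  0],
    [ 0,  1, -1,  0,  0],
    [ 0,  0,  1, -1,  0],
    [ 0,  0,  1,  0, -1],
], dtype=float)

U, sing, Vt = np.linalg.svd(S)
tol = 1e-10
# Pad the singular values to length n; rows of Vt with a (numerically) zero
# singular value span the null space of S.
padded = np.concatenate([sing, np.zeros(S.shape[1] - len(sing))])
basis = Vt[padded < tol].T
print(basis)                        # each column is a basis flux mode
print(np.allclose(S @ basis, 0))    # True: every basis vector satisfies S v = 0
```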
Constraining the Flux Space
The first constraint mentioned above is that all steady-state flux vectors must be in the null space. Also negative fluxes are not thermodynamically possible. Therefore a fundamental constraint is that all fluxes must be positive. (Within this framework we represent reversible reactions as separate reactions in the stoichiometric matrix S having two unidirectional fluxes.)
These two key constraints form a system that can be solved by convex analysis. The solution region can be described by a unique set of Extreme Pathways. In this region, steady state flux vectors v can be described as a positive linear combination of these extreme pathways. The Extreme Pathways, represented in slide 25 as vectors bi, circumscribe a convex flux cone. Each dimension is a rate for some reaction. In slide 25, the z-dimension represents the rate of reaction for v3 . We can recognize that at any point in time, the organism is living at a point in the flux cone, i.e. is demonstrating one particular flux distribution. Every point in the flux cone can be described by a possible steady state flux vector, while points outside the cone cannot.
One problem is that the flux cone goes out to infinity, while infinite fluxes are not physically possible. Therefore an additional constraint is capping the flux cone by determining the maximum fluxes of any of our reactions (these values correspond to our Vmax parameters). Since many metabolic reactions are interior to the cell, there is no need to set a cap for every flux. These caps can be determined experimentally by measuring maximal fluxes, or calculated using mathematical tools such as diffusivity rules.
We can also add input and output fluxes that represent transport into and out of our cells (Vin and Vout). These are often much easier to measure than internal fluxes and can thus serve to help us to generate a more biologically relevant flux space. An example of an algorithm for solving this problem is the simplex algorithm [1]. Slides 24-27 demonstrate how constraints on the fluxes change the geometry of the flux cone. In reality, we are dealing with problems in higher dimensional spaces.
Linear Programming
Linear programming is a generic solution that is capable of solving optimization problems given linear constraints. These can be represented in a few different forms.
Canonical Form :
• Maximize: $c^{T} x$
• Subject to: $A x \leq b$
Standard Form :
• Maximize: $\Sigma_{i} c_{i} x_{i}$
• Subject to: $\Sigma_{j} a_{i j} x_{j} \leq b_{i}$ for all $i$
• Non-negativity constraint: $x_{i} \geq 0$ for all $i$
A concise and clear introduction to Linear Programming is available here: www.purplemath.com/modules/linprog.htm. The constraints described throughout section 3 give us the linear programming problem described in lecture. Linear programming can be considered a first approximation and is a classic problem in optimization. In order to try to narrow down our feasible flux, we assume that there exists a fitness function which is a linear combination of any number of the fluxes in the system. Linear programming (or linear optimization) involves maximizing or minimizing a linear function over a convex polyhedron specified by linear and non-negativity constraints.
We solve this problem by identifying the flux distribution that maximizes an objective function $Z=c^{T} v$, subject to the stoichiometric constraint $S v=0$ and the capacity bounds on the fluxes.
The key point in linear programming is that our solutions lie on the boundary of the permissible flux space and can be at vertices, along edges, or both. Moreover, at least one optimal solution (if any exists) will lie at a vertex (corner point) of the permissible flux space. This concept is demonstrated on slide 30. In that slide, A is the stoichiometric matrix, x is the vector of fluxes, and b is a vector of maximal permissible fluxes.
Linear programs, when solved by hand, are generally solved by the Simplex method. The simplex method sets up the problem in a matrix and performs a series of pivots based on the basic variables of the problem statement. In the worst case, however, this can run in exponential time. Luckily, if a computer is available, two other algorithms are available: the ellipsoid algorithm and interior point methods are both capable of solving any linear program in polynomial time. It is interesting to note that many seemingly difficult problems can be modeled as linear programs and solved efficiently (or as efficiently as a generic solution can solve a specific problem).
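A minimal flux balance calculation on the toy network introduced earlier, assuming scipy is available as the LP solver: linprog minimizes by convention, so we negate the objective coefficients to maximize an "output" flux subject to S v = 0, non-negativity, and a cap on the uptake flux.

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([
    [ 1, -1,  0,  0,  0],
    [ 0,  1, -1,  0,  0],
    [ 0,  0,  1, -1,  0],
    [ 0,  0,  1,  0, -1],
], dtype=float)

n_rxns = S.shape[1]
c = np.zeros(n_rxns)
c[3] = -1.0                      # maximize flux through R4 (the "objective" flux)

bounds = [(0, 10)] * n_rxns      # non-negativity plus a generic Vmax cap
bounds[0] = (0, 5)               # tighter cap on the uptake flux Vin

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print(res.x)                     # optimal flux distribution
print(-res.fun)                  # maximal objective flux (limited here by the uptake cap)
```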
In microbes such as E. coli, this objective function is often a combination of fluxes that contributes to biomass, as seen in slide 31. However, this function need not be completely biologically meaningful.
For example, we might simulate the maximization of mycolates in M. tuberculosis, even though this isn't happening biologically. It would give us meaningful predictions about what perturbations could be performed in vitro that would perturb mycolate synthesis even in the absence of the maximization of the production of those metabolites. Flux balance analysis (FBA) was pioneered by Palsson's group at UCSD and has since been applied to E. coli, M. tuberculosis, and the human red blood cell [?].
In Silico Deletion Analysis
With the availability of such a powerful tool as FBA, more questions naturally arise. For example, are we able to predict gene knockout phenotypes based on their simulated effects on metabolism? And why would we try to do this, when other methods, like protein interaction map connectivity, exist? Such analysis is actually necessary, since the other methods do not take into direct consideration the metabolic flux or other specific metabolic conditions.
Knocking out a gene in an experiment is simply modeled by removing one of the columns (reactions) from the stoichiometric matrix. (A question during class clarified that a single gene knockout can remove multiple columns/reactions.) These knockout mutations further constrain the feasible solution space by removing fluxes and their related extreme pathways. If the original optimal flux lies outside the new space, then a new optimal flux results, and the FBA analysis will produce a different solution. The solution is a maximal growth rate, which may be confirmed or disproven experimentally. The growth rate at the new solution provides a measure of the knockout phenotype. If a gene knockout is in fact lethal, then the optimal solution will be a growth rate of zero.
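A minimal sketch of this idea on the same toy network: forcing a reaction's flux to zero (equivalent to deleting its column) and re-solving the linear program gives the predicted knockout growth rate.

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[ 1, -1,  0,  0,  0],
              [ 0,  1, -1,  0,  0],
              [ 0,  0,  1, -1,  0],
              [ 0,  0,  1,  0, -1]], dtype=float)
c = np.array([0., 0., 0., -1., 0.])          # maximize the objective flux v4
bounds = [(0, 5), (0, 10), (0, 10), (0, 10), (0, 10)]

wild_type = linprog(c, A_eq=S, b_eq=np.zeros(4), bounds=bounds)

ko_bounds = list(bounds)
ko_bounds[1] = (0, 0)                         # "knockout": flux through R2 forced to zero
knockout = linprog(c, A_eq=S, b_eq=np.zeros(4), bounds=ko_bounds)

print(-wild_type.fun, -knockout.fun)          # e.g. 5.0 vs 0.0 -> this knockout is lethal
```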
Studies by Edwards and Palsson (2000) explored the use of knockout phenotype prediction to predict metabolic changes in response to knocking out enzymes in E. coli, a prokaryote [?]. In other words, an in silico metabolic model of E. coli was constructed to simulate mutations affecting the glycolysis, pentose phosphate, TCA, and electron transport pathways (436 metabolites and 719 reactions included). For each specific condition, the optimal growth of mutants was compared to non-mutants. The in vivo and in silico results were then compared, with 86% agreement. The errors in the model indicate an underdeveloped model (lack of knowledge). The authors discuss 7 errors not modeled by FBA, including mutants inhibiting stable RNA synthesis and producing toxic intermediates.
Quantitative Flux In Silico Model Predictions
Can models quantitatively predict fluxes and growth rates? We demonstrate the ability of FBA to give quantitative predictions about growth rate and reaction fluxes under different environmental conditions. More specifically, prediction refers to externally measurable fluxes as a function of controlled uptake rates and environmental conditions. Since FBA maximizes an objective function, resulting in a specific value for this function, we should in theory be able to extract quantitative information from the model.
An early example by Edwards, Ibarra, and Palsson (2001) predicted the growth rate of E. coli in culture given a range of fixed uptake rates of oxygen and two carbon sources (acetate and succinate), which they could control in a batch reactor [6]. They assumed that E. coli cells adjust their metabolism to maximize growth (using a growth objective function) under given environmental conditions and used FBA to model the metabolic pathways in the bacterium. The input to this particular model is acetate and oxygen, which is labeled as VIN.
The controlled uptake rates fixed the values of the oxygen and acetate/succinate input fluxes into the network, but the other fluxes were calculated to maximize the value of the growth objective.
The growth rate is still treated as the solution to the FBA analysis. In sum, optimal growth rate is predicted as a function of uptake constraints on oxygen versus acetate and oxygen versus succinate. The basic model is a predictive line and may be confirmed in a bioreactor experimentally by measuring the uptake and growth from batch reactors (note: experimental uptake was not constrained, only measured).
This model by Palsson was the first good proof-of-principle in silico model. The authors' quantitative growth rate predictions under the different conditions matched very closely the experimentally observed growth rates, implying that E. coli does have a metabolic network that is designed to maximize growth. It had good true positive and true negative rates. The agreement between the predictions and experimental results is very impressive for a model that does not include any kinetic information, only stoichiometry. Prof. Galagan cautioned, however, that it is often difficult to know what good agreement is, because we don't know the significance of the size of the residuals. The organism was grown on a number of different nutrients, so the investigators were able to predict condition-specific growth. Keep in mind this worked because only certain genes are necessary for some nutrients, like fbp for gluconeogenesis. Therefore, knocking out fbp will only be lethal when there is no glucose in the environment, a condition-specific effect that FBA captures.
Quasi Steady State Modeling (QSSM)
We are now able to describe how to use FBA to predict time-dependent changes in growth rates and metabolite concentrations using quasi steady state modeling. The previous example used FBA to make quantitative growth predictions under specific environmental conditions (point predictions). Now, after growth and uptake fluxes, we move on to another assumption and type of model.
Can we use a steady state model of metabolism to predict the time-dependent changes in the cell or environments? We do have to make a number of quasi steady state assumptions (QSSA):
1. The metabolism adjusts to the environmental/cellular changes more rapidly than the changes themselves
2. The cellular and environmental concentrations are dynamic, but metabolism operates on the condition that the concentration is static at each time point (steady state model).
Is it possible to use QSSM to predict metabolic dynamics over time? For example, if there is less acetate being taken in on a per cell basis as the culture grows, then the growth rate must slow. But now, QSSA assumptions are applied. That is, in effect, at any given point in time, the organism is in steady state.
What values does one get as a solution to the FBA problem? The solution gives the fluxes and the growth rate, with VIN and VOUT included. Up to now we assumed that the inputs and outputs are infinite sources and sinks. To model substrate/growth dynamics, the analysis is performed a bit differently from the prior quantitative flux analysis. We first divide time into slices Δt. At each time point t, we use FBA to predict the cellular substrate uptake (Su) and growth rate (g) during the interval Δt. The QSSA means these predictions are constant over Δt. Then we integrate to get the biomass (B) and substrate concentration (Sc) at the next time point t + Δt. The new VIN is thus recalculated at each time step. In this way we can predict the growth rate and the uptake of glucose and acetate (the nutrients available in the environment) over time. The four-step analysis is:
1. The substrate concentration at time t is given by the concentration from the last step plus any additional substrate provided to the cell culture by an inflow, such as in a fed-batch reactor.
2. The substrate concentration is scaled for time and biomass (X) to determine the substrate availability to the cells. This can exceed the maximum uptake rate of the cells or be less than that number.
3. Use the flux balance model to evaluate the actual substrate uptake rate, which may be more or less than the substrate available as determined by step 2.
4. The concentration for the next time step is then calculated by integrating the standard differential equations:
$\frac{d B}{d t}=g B \rightarrow B=B_{o} e^{g t}\nonumber$
$\frac{d S c}{d t}=-S u \cdot B \rightarrow S c=S c_{o}-\frac{S u \cdot B_{o}}{g}\left(e^{g t}-1\right)\nonumber$
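A minimal sketch of this quasi-steady-state loop, with an assumed toy uptake/growth relation standing in for the full FBA solve at each step: at every Δt the available substrate caps the uptake, the "FBA" step returns an uptake rate and growth rate, and biomass and substrate are then integrated analytically over the interval.

```python
import numpy as np

def fba_step(uptake_bound):
    """Stand-in for a full FBA solve: returns uptake Su and growth rate g given the cap."""
    Su = min(uptake_bound, 10.0)   # assumed maximum uptake rate (Vmax)
    g = 0.05 * Su                  # assumed growth yield per unit of substrate taken up
    return Su, g

dt, B, Sc = 0.1, 0.01, 5.0         # time slice, initial biomass, initial substrate
for step in range(200):
    if Sc <= 1e-9:
        break
    available = Sc / (B * dt)      # substrate available per unit biomass over this slice
    Su, g = fba_step(available)
    B_next = B * np.exp(g * dt)    # B = B0 * exp(g*t)
    if g > 0:
        Sc -= (Su * B / g) * (np.exp(g * dt) - 1.0)
    else:
        Sc -= Su * B * dt
    Sc = max(Sc, 0.0)
    B = B_next

print(f"final biomass = {B:.4f}, remaining substrate = {Sc:.4f}")
```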
The additional work by Varma et al. (1994) specifies the glucose uptake rate a priori [17]. The model simulations work to predict time-dependent changes in growth, oxygen uptake, and acetate secretion. This converse model plots uptake rates versus growth, while still achieving comparable results in vivo and in silico. The researchers used quasi steady state modeling to predict the time-dependent profiles of cell growth and metabolite concentrations in batch cultures of E. coli that had either a limited initial supply of glucose (left) or a slow continuous glucose supply (right diagram). A great fit is evident.
The diagrams above show the results of the model predictions (solid lines) and compare it to the experimental results (individual points). Thus, in E. coli, quasi steady state predictions are impressively accurate even with a model that does not account for any changes in enzyme expression levels over time. However, this model would not be adequate to describe behavior that is known to involve gene regulation. For example, if the cells had been grown on half-glucose/half-lactose medium, the model would not have been able to predict the switch in consumption from one carbon source to another. (This does occur experimentally when E. coli activates alternate carbon utilization pathways only in the absence of glucose.)
Regulation via Boolean Logic
There are a number of levels of regulation through which metabolic flux is controlled: at the metabolite, transcriptional, translational, and post-translational levels. FBA-associated errors may be explained by incorporating gene regulatory information into the models. One way to do this is Boolean logic. The following table describes whether the genes for the associated enzymes are on or off in the presence of certain nutrients (an example of incorporating the E. coli preferences mentioned above):
Acetate-utilization genes: ON when there is no glucose (0) and acetate is present (1); OFF when glucose is present (1), even if acetate is present (1).
Glucose-utilization genes: ON when glucose is present (1).
The natural next step is to incorporate this information into the models. For example, if glucose is present in the environment, the acetate-processing genes are off, and their reactions are therefore absent from the S matrix, which now becomes dynamic as a result of incorporating regulation into the model. At this level the regulatory model is not quantitative; it is essentially Boolean logic based on the presence of enzymes, metabolites, genes, etc. The basic rule is that if one nutrient-processing enzyme is on, the other is off. These Boolean-style rules are applied at every small time step dt to evaluate the growth rate, the fluxes, and related variables. Then, given the predicted fluxes, VIN, VOUT, and the system state, one can use the logic to turn genes off and on, effectively producing a different S matrix for each time step. Putting all of the above analyses together gives a general approach to metabolic modeling; for instance, we can tell that if glycolysis is on, then gluconeogenesis must be off.
The first attempt to include regulation in an FBA model was published by Covert, Schilling, and Palsson in 2001 [7]. The researchers incorporated a set of known transcriptional regulatory events into their analysis of a metabolic regulatory network by approximating gene regulation as a Boolean process. A reaction was said to occur or not depending on the presence of both the enzyme and the substrate(s): if either the enzyme that catalyzes the reaction (E) is not expressed or a substrate (A) is not available, the reaction flux will be zero:
rxn = IF (A) AND (E)
Similar Boolean logic determined whether enzymes were expressed or not, depending on the currently expressed genes and the current environmental conditions. For example, transcription of the enzyme (E) occurs only if the appropriate gene (G) is available for transcription and no repressor (B) is present:
trans = IF (G) AND NOT (B)
The authors used these principles to design a Boolean network that inputs the current state of all relevant genes (on or off) and the current state of all metabolites (present or not present), and outputs a binary vector containing the new state of each of these genes and metabolites. The rules of the Boolean network were constructed based on experimentally determined cellular regulatory events. Treating reactions and enzyme/metabolite concentrations as binary variables does not allow for quantitative analysis, but this method can predict qualitative shifts in metabolic fluxes when merged with FBA. Whenever an enzyme is absent, the corresponding column is removed from the FBA reaction matrix, as was described above for knockout phenotype prediction. This leads to an iterative process:
1. Given the initial states of all genes and metabolites, calculate the new states using the Boolean network;
2. perform FBA with appropriate columns deleted from the matrix, based on the states of the enzymes, to determine the new metabolite concentrations;
3. repeat the Boolean network calculation with the new metabolite concentrations; etc. The above model is not quantitative, but rather a pure simulation of turning genes on and off at any particular time instant.
A few of the metabolic reactions have associated regulatory rules that allow the organism to shift between carbon sources (C1 and C2).
An application of this method from the study by Covert et al.[7] was to simulate diauxic shift, a shift from metabolizing a preferred carbon source to another carbon source when the preferred source is not available. The modeled process includes two gene products, a regulatory protein RPc1, which senses (is activated by) Carbon 1, and a transport protein Tc2, which transports Carbon 2. If RPc1 is activated by Carbon 1, Tc2 will not be transcribed, since the cell preferentially uses Carbon 1 as a carbon source. If Carbon 1 is not available, the cell will switch to metabolic pathways based on Carbon 2 and will turn on expression of Tc2.
The Booleans can represent this information:
RPc1 = IF (Carbon1)
Tc2 = IF NOT (RPc1)
Covert et al. found that this approach gave predictions about metabolism that matched results from experimentally induced diauxic shift. The diauxic shift is well modeled by the in silico analysis (see the figure above). In segment A, C1 is used as a nutrient and there is growth. In segment B, there is no growth: C1 has run out and the C2-processing enzymes have not yet been made, since their genes have only just been turned on, hence the delay at a constant amount of biomass. In segment C, the enzymes for C2 have been turned on and biomass increases as growth continues on the new nutrient source. As C1 runs out, the organism shifts its metabolic activity via genetic regulation and begins to take up C2. Regulation thus predicts diauxie, the use of C1 before C2. Without regulation, the model would grow on both C1 and C2 together to maximize biomass.
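A minimal sketch of these Boolean rules over discrete time steps, assuming arbitrary starting amounts of the two carbon sources and a one-step delay before a newly transcribed transporter becomes active; the real simulation couples each step to an FBA solve rather than the fixed growth increments used here.

```python
carbon1, carbon2 = 3, 4        # assumed units of each carbon source
tc2_present = False            # transporter for Carbon 2 not yet expressed
biomass = 1.0

for t in range(10):
    rpc1 = carbon1 > 0                 # RPc1 = IF (Carbon1)
    tc2_transcribed = not rpc1         # Tc2  = IF NOT (RPc1)
    if carbon1 > 0:
        carbon1 -= 1                   # grow on the preferred source first
        biomass += 0.5
    elif tc2_present and carbon2 > 0:
        carbon2 -= 1                   # grow on Carbon 2 only once Tc2 is available
        biomass += 0.3
    # else: lag phase -- no growth while Tc2 is still being expressed
    tc2_present = tc2_transcribed      # one-step delay for transcription/translation
    print(t, round(biomass, 2), carbon1, carbon2, tc2_present)
```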
So far we have discussed using this combined FBA-Boolean network approach to model regulation at the transcriptional/translational level, and it will also work for other types of regulation. The main limitation is for slow forms of regulation, since this method assumes that regulatory steps are completed within a single time interval (because the Boolean calculation is done at each FBA time step and does not take into account previous states of the system). This is fine for any forms of regulation that act at least as fast as transcription/translation. For example, phosphorylation of enzymes (an enzyme activation process) is very fast and can be modeled by including the presence of a phosphorylase enzyme in the Boolean network.
However, regulation that occurs over longer time scales, such as sequestration of mRNA, is not taken into account by this model. This approach also has a fundamental problem in that it does not allow actual experimental measurements of gene expression levels to be inputted at relevant time points.
We do not need our simulations to artificially predict whether certain genes are on or off. Microarray expression data allows us to determine which genes are being expressed, and this information can be incorporated into our models.
Coupling Gene Expression with Metabolism
In practice, we do not need to artificially model gene levels; we can measure them. As discussed previously, it is possible to measure the expression levels of all the mRNAs in a given sample. Since mRNA expression data correlates with protein expression data, it would be extremely useful to incorporate it into the FBA. Usually, data from microarray experiments is clustered, and unknown genes are hypothesised to have functions similar to those of the known genes with which they cluster. This analysis can be faulty, however, as genes with similar actions may not always cluster together. Incorporating microarray expression data into FBA could allow an alternative method of interpreting the data. This raises a question: what is the relationship between gene expression level and flux through a reaction?
Say the reaction A → B is catalyzed by an enzyme. If a lot of A is present, increased expression of the gene for the enzyme causes an increased reaction rate. Otherwise, increasing the gene expression level will not increase the reaction rate. However, the enzyme concentration can be treated as a constraint on the maximum possible flux, given that the substrate concentration also has a reasonable physiological limit.
The next step, then, is to relate the mRNA expression level to the enzyme concentration. This is more difficult, since cells have a number of regulatory mechanisms to control protein concentrations independently of mRNA concentrations. For example, translated proteins may require an additional activation step (e.g. phosphorylation), each mRNA molecule may be translated into a variable number of proteins before it is degraded (e.g. by antisense RNAs), the rate of translation from mRNA into protein may be slower than the time intervals considered in each step of FBA, and the protein degradation rate may also be slow. Despite these complications, the mRNA expression levels from microarray experiments are usually taken as upper bounds on the possible enzyme concentrations at each measured time point. Given the above relationship between enzyme concentration and flux, this means that the mRNA expression levels are also upper bounds on the maximum possible fluxes through the reactions catalyzed by their encoded proteins. The validity of this assumption is still being debated, but it has already performed well in FBA analyses and is consistent with evidence that cells control metabolic enzyme levels primarily by adjusting mRNA levels. (In 2007, Professor Galagan discussed a study by Zaslaver et al. (2004) that found that genes required in an amino acid biosynthesis pathway are transcribed sequentially as needed [2].) This is a particularly useful assumption for including microarray expression data in FBA, since FBA makes use of maximum flux values to constrain the flux balance cone.
Colijn et al. address the question of algorithmic integration of expression data and metabolic networks [3]. They apply FBA to model the maximum flux through each reaction in a metabolic network. For example, if microarray data are available from an organism growing on glucose and from an organism growing on acetate, significant regulatory differences will likely be observed between the two datasets. Vmax tells us the maximum flux a reaction can reach; the microarray measures transcript levels, which provide an upper bound on Vmax.
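As a rough illustration (an assumption-laden sketch, not the method of Colijn et al.), expression-derived upper bounds can be imposed directly on the flux variables of the FBA linear program. Here `S` is the stoichiometric matrix (a NumPy array), all reactions are treated as irreversible for simplicity, and `expression_to_vmax` is a hypothetical mapping from a gene's measured expression level to a flux bound.

```python
# Minimal sketch of expression-constrained FBA using scipy's LP solver.
import numpy as np
from scipy.optimize import linprog

def expression_constrained_fba(S, biomass_idx, expression, gene_for_reaction,
                               expression_to_vmax, default_vmax=1000.0):
    n_rxns = S.shape[1]
    upper = np.full(n_rxns, default_vmax)
    for j in range(n_rxns):
        gene = gene_for_reaction.get(j)   # which gene encodes this reaction's enzyme
        if gene is not None and gene in expression:
            # Expression level is treated as an upper bound on the reaction flux.
            upper[j] = min(upper[j], expression_to_vmax(expression[gene]))
    # Maximize biomass flux (linprog minimizes, so negate the objective)
    # subject to steady state S v = 0 and 0 <= v <= upper.
    c = np.zeros(n_rxns)
    c[biomass_idx] = -1.0
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(np.zeros(n_rxns), upper)), method="highs")
    return res.x   # optimal flux vector under the expression-derived constraints
```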
In addition to predicting metabolic pathways under different environmental conditions, FBA and microarray experiments can be combined to predict the state of a metabolic system under varying drug treatments. For example, several TB drugs target mycolic acid biosynthesis; mycolic acid is a major cell wall constituent. In a 2004 paper by Boshoff et al., researchers tested 75 drugs, drug combinations, and growth conditions to see what effect different treatments had on mycolic acid synthesis [9]. In 2005, Raman et al. published an FBA model of mycolic acid biosynthesis, consisting of 197 metabolites and 219 reactions [13].
The basic flow of the prediction was to take a control expression value and a treatment expression value for a particular set of genes, feed this information into the FBA, and measure the final effect of the treatment on the production of mycolic acid. To examine predicted inhibitors and enhancers, they examined significance, which tests whether the effect is due to noise, and specificity, which tests whether the effect is specific to mycolic acid or reflects overall suppression/enhancement of metabolism. The results were fairly encouraging: several known mycolic acid inhibitors were identified by the FBA. Interesting results were also found among drugs not specifically known to inhibit mycolic acid synthesis; four novel inhibitors and two novel enhancers of mycolic acid synthesis were predicted. One particular drug, Triclosan, appears to be an enhancer according to the FBA model, whereas it is currently known as an inhibitor. Further study of this particular drug would be interesting. Experimental testing and validation are currently in progress.
Clustering may also be ineffective in identifying the function of various treatments: the predicted inhibitors and predicted enhancers of mycolic acid synthesis do not cluster together. In addition, no labeled training set is required for the FBA-based algorithmic classification, whereas one is necessary for supervised clustering algorithms.
Predicting Nutrient Source
We can also predict the nutrient source that an organism may be using in its environment by looking at expression data for the associated nutrient-processing genes. This is attractive because we cannot easily go into the environment and measure all chemical levels, but we can obtain expression data rather easily. That is, we try to predict the nutrient source through predictions of the metabolic state from expression data, based on the assumption that organisms are likely to adjust their metabolic state to the available nutrients. Candidate nutrients may then be ranked by how well they match the inferred metabolic state.
The reverse direction could work too: can we predict a nutrient given a metabolic state? Such predictions could be useful for determining the nutrient requirements of an organism with an unknown natural environment, or for determining how an organism changes its environment. (TB, for example, is able to live within the environment of a macrophage phagolysosome, presumably by altering the environmental conditions inside the phagolysosome.)
We can use FBA to define a space of possible metabolic states and choose one. The basic steps are to:
• Start with the maximum flux cone (representing the best growth with all nutrients available in the environment), and find the optimal flux for each candidate nutrient.
• Apply the expression data set (still without knowing the nutrient). This constrains the shape of the cone; the predicted nutrient is the one whose optimal solution lies closest to the expression-constrained cone.
In Figure 8, the initial cone contains the optima for several nutrients, so the real nutrient cannot be identified. After the expression data are applied, however, the cone is reshaped: only one nutrient's optimum remains within (or close to) the feasible space, and that nutrient is the one predicted.
As before, the measured expression levels provide constraints on the reaction fluxes, altering the shape of the flux-balance cone (now the expression-constrained flux balance cone). FBA can be used to determine the optimal set of fluxes that maximize growth within these expression constraints, and this set of fluxes can be compared to experimentally-determined optimal growth patterns under each environmental condition of interest. The difference between the calculated state of the organism and the optimal state under each condition is a measure of how sub-optimal the current metabolic state of the organism would be if it were in fact growing under that condition.
Expression data from growth and metabolism may then be applied to predict the carbon source being used. For example, consider nutrient prediction in E. coli: we can simulate this system for glucose versus acetate. The color in the resulting plot indicates the distance from the constrained flux cone to the optimal flux solution for that nutrient combination (the same procedure described above). Multiple nutrients may then be ranked and prioritized according to the expression data. Unpublished data from Desmond Lun and Aaron Brandes provide an example of this approach.
They used FBA to predict which nutrient source E. coli cultures were growing on, based on gene expression data. They compared the known optimal fluxes (the optimal point in flux space) for each nutrient condition to the allowed optimal flux values within the expression-constrained flux-balance cone. Those nutrient conditions with optimal fluxes that remained within (or closest to) the expression-constrained cone were the most likely possibilities for the actual environment of the culture.
Results of the experiment are shown in Figure 9, where each square in the results matrices is colored based on the distance between the optimal fluxes for that nutrient condition and the calculated optimal fluxes based on the expression data. Red values indicate large distances from the expression-constrained flux cone and blue values indicate short distances from the cone. In the glucose-acetate experiments, for example, the results of the experiment on the left indicate that low acetate conditions are the most likely (and glucose was the nutrient in the culture) and the results of the experiment on the right indicate that low glucose/medium acetate conditions are the most likely (and acetate was the nutrient in the culture). When 6 possible nutrients were considered, the model always predicted the correct one, and when 18 possible nutrients were considered, the correct one was always one of the top 4 ranking predictions. These results suggest that it is possible to use expression data and FBA modeling to predict environmental conditions from information about the metabolic state of an organism.
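A minimal sketch of the ranking step might look as follows. Here `constrained_optimum` is the optimal flux vector obtained under the expression-derived constraints, `optimal_flux_for_nutrient` maps each candidate nutrient to its known optimal flux vector, and plain Euclidean distance in flux space is used as a simple stand-in for whatever distance measure the actual analysis used.

```python
# Minimal sketch: rank candidate nutrients by how close their optimal flux
# vectors lie to the expression-constrained optimum.
import numpy as np

def rank_nutrients(constrained_optimum, optimal_flux_for_nutrient):
    distances = {}
    for nutrient, v_opt in optimal_flux_for_nutrient.items():
        distances[nutrient] = float(np.linalg.norm(constrained_optimum - v_opt))
    # Smaller distance = metabolic state more consistent with that nutrient.
    return sorted(distances.items(), key=lambda kv: kv[1])
```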
This is important because TB metabolizes fatty acids inside macrophages of the immune system, but we do not know exactly which ones are utilized. By analyzing which nutrient-processing genes are turned on during different growth phases, we can infer what TB sees in its environment as a food source and proliferation factor. Knowing which nutrients it needs to grow suggests potential ways to kill it off, such as withholding those nutrients or knocking out the corresponding genes.
It is easier to obtain expression data reflecting flux activity than to measure what is being consumed in the environment by direct chemical analysis at such a small scale. Also, some bacteria cannot be grown in the lab; by obtaining expression data from the bacteria growing in their natural environment, we can infer what they are using to grow and then add those nutrients to the laboratory medium to culture them successfully.
• Becker, S. A. and B. O. Palsson (2008). Context-Specific Metabolic Networks Are Consistent with Experiments. PLoS Computational Biology 4(5): e1000082.
– If gene expression is lower than some threshold, turn the gene off in the model.
• Shlomi, T., M. N. Cabili, et al. (2008). Network-based prediction of human tissue-specific metabolism. Nat Biotech 26(9): 1003-1010.
– A nested optimization problem:
– First, run standard FBA.
– Second, maximize the number of enzymes whose predicted flux activity is consistent with their measured expression level.
23.6: Tools and Techniques
• KEGG
• BioCyc
• Pathway Explorer (pathwayexplorer.genome.tugraz.at)
• Palsson's group at UCSD (gcrg.ucsd.edu/)
• www.systems-biology.org
• Biomodels database (www.ebi.ac.uk/biomodels/)
• JWS Model Database (jjj.biochem.sun.ac.za/database/index.html)
Bibliography
[1]
[2] Zaslaver A, Mayo AE, Rosenberg R, Bashkin P, Sberro H, Tsalyuk M, Surette MG, and Alon U. Just-in-time transcription program in metabolic pathways. Nat. Genet., 36:486–491, 2004.
[3] Caroline Colijn. Interpreting expression data with metabolic flux models: Predicting Mycobacterium tuberculosis mycolic acid production. PLoS Computational Biology, 5(8), Aug 2009.
[4] Price N. D., Reed J. L., Papin J.A, Famili I., and Palsson B.O. Analysis of metabolic capabilities using singular value decomposition of extreme pathway matrices. Biophys J., 84(2):794–804, Feb 2003.
[5] Gasteiger E., Gattiker A., Hoogland C., Ivanyi I., Appel R.D., and Bairoch A. ExPASy: The proteomics server for in-depth protein knowledge and analysis. Nucleic Acids Res, 31(13):3784–3788.
[6] J.S. Edwards, R. U. Ibarra, and B.O. Palsson. In silico predictions of E. coli metabolic capabilities are consistent with experimental data. Nat Biotechnology, 19:125–130, 2001.
[7] Covert M et al. Regulation of gene expression in flux balance models of metabolism. Journal of Theoretical Biology, 213:73–88, Nov 2001.
[8] J. Forster, I. Famili, B.O. Palsson, and J. Nielsen. Large-scale evaluation of in silico gene deletions in saccharomyces cerevisiae. OMICS, 7(2):193–202, 2003. PMID: 14506848.
[9] Boshoff H.I., Myers T.G., Copp B.R., McNeil M.R., Wilson M.A., and Barry C.E. The transcriptional response of Mycobacterium tuberculosis to inhibitors of metabolism: novel insights into drug mechanisms of action. J Biol Chem, 279:40174–40184, Sep 2004.
[10] Holmberg. On the practical identifiability of microbial-growth models incorporating michaelis-menten type nonlinearities. Mathematical Biosciences, 62(1):23–43, 1982.
[11] Edwards J.S. and Palsson B.O. Proceedings of the National Academy of Sciences of the United States of America, 97:5528–5533, May 2000. PMC25862.
[12] Edwards J.S., Covert M., and Palsson B. Metabolic modeling of microbes: the flux balance approach. Environmental Microbiology, 4(3):133–140, 2002.
[13] Raman Karthik, Preethi Rajagopalan, and Nagasuma Chandra. Flux balance analysis of mycolic acid pathway: Targets for anti-tubercular drugs. PLoS Computational Biology, 1, Oct 2005.
[14] Kanehisa M., Goto S., Kawashima S., and Nakaya. From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res., 34, 2006.
[15] Jamshidi N. and Palsson B. Investigating the metabolic capabilities of Mycobacterium tuberculosis H37Rv using the in silico strain iNJ661 and proposing alternative drug targets. BMC Systems Biology, 26, 2007.
[16] Caspi R., Foerster H., Fulcher C.A., Kaipa P., Krummenacker M., Latendresse M., Paley S., Rhee S.Y., Shearer A.G., Tissier C., Walk T.C., Zhang P., and Karp P. The MetaCyc database of metabolic pathways and enzymes and the BioCyc collection of pathway/genome databases. Nucleic Acids Res, 36(Suppl), 2008.
[17] A. Varma and B. O. Palsson. Stoichiometric flux balance models quantitatively predict growth and metabolic by-product secretion in wild-type Escherichia coli W3110. Applied and Environmental Microbiology, 60:3724–3731, Oct 1994.
The human genome was sequenced in 2003, an important step in understanding the blueprint of life. However, before this information can be fully utilized, the location, identity, and function of all protein-encoding and non-protein-encoding genes must be determined. Moreover, the human genome contains many other functional elements, ranging from promoters and regulatory sequences to other factors that determine chromatin structure. These must also be determined to fully understand the human genome.
The ENCODE (Encyclopedia of DNA Elements) project aims to solve these problems by delineating all functional elements of the human genome. To accomplish this goal, a consortium was formed to guide the project. The consortium aimed to advance and develop technologies for annotating the human genome with higher accuracy, completeness, and cost-effectiveness, along with more standardization. They also aimed to develop a series of computational techniques to parse and analyze the data obtained.
To accomplish this goal, a pilot project was launched. The ENCODE pilot project aimed to study 1% of the human genome in depth, roughly from 2003 to 2007. From 2007 to 2012, the ENCODE project ramped up to annotate the entire genome. Finally, from 2012 onwards, the ENCODE project aims at further increases along all dimensions: deeper sequencing, more assays, more transcription factors, etc.
This chapter will describe some of the experimental and computational techniques used in the ENCODE project.
24.02: Experimental Techniques
The ENCODE project used a wide range of experimental techniques, including RNA-seq, CAGE-seq, Exon Arrays, MAINE-seq, Chromatin ChIP-seq, DNase-seq, and many more.
One of the most important techniques used was ChIP-seq (chromatin immunoprecipitation followed by sequencing). The first step in a ChIP experiment is to isolate DNA fragments associated with a specific protein. This is done using an antibody that targets the specific protein and is used to immunoprecipitate the DNA-protein complex. The final step is to sequence the recovered DNA, which determines the sequences bound by the protein.
ChIP-seq has several advantages over previous techniques (e.g. ChIP-chip). For example, ChIP-seq has single-nucleotide resolution, and its alignability increases with read length. However, ChIP-seq also has several disadvantages: sequencing errors tend to increase substantially near the ends of reads, and with a low number of reads, sensitivity and specificity tend to decrease when detecting enriched regions. Both of these problems arise when processing the data, and many of the computational techniques seek to rectify them.
24.03: Computational Techniques
This section will focus on techniques on processing raw data from the ENCODE project. Before ENCODE data can be analyzed (e.g. for motif discovery, co-association analysis, signal aggregation over elements, etc), the raw data must be processed.
Even before the data are processed, some quality control is applied. Quality control is needed for several reasons: even without antibodies, reads are not uniformly scattered across the genome. The biological reasons include non-uniform fragmentation of the genome, open chromatin regions fragmenting more easily, and repetitive sequences being over-collapsed in assembled genomes. The ENCODE project corrected for these biases in several ways. Portions of the DNA were removed before the ChIP step, removing large portions of unwanted data. Control experiments were also conducted without the use of antibodies. Finally, sequence reads from fragmented input DNA were used as a background.
Because of inherent noise in the ChIP-seq process, some reads will be of lower quality. Using a read quality metric, reads below a threshold were thrown out.
Shorter reads (and, to a lesser extent, longer reads) can map to exactly one location in the genome (uniquely mapping), to multiple locations (repetitive mapping), or to no location at all (unmappable). There are many potential ways to deal with repetitive mapping, ranging from probabilistically spreading the read to using an EM approach. However, since the ENCODE project aims to be as correct as possible, it does not assign repetitive reads to any location.
If a sample does not contain sufficient DNA and/or if it is over-sequenced, you will simply be repeatedly sequencing PCR duplicates of a restricted pool of distinct DNA fragments. This is known as a low-complexity library and is not desirable. To detect this problem, a histogram of the number of duplicates is created, and samples with a low non-redundant fraction (NRF) are thrown out.
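The NRF itself is a very simple statistic; the sketch below computes it, assuming each mapped read is summarized by its (chromosome, position, strand) start location.

```python
# Non-redundant fraction (NRF): fraction of mapped reads whose
# (chromosome, position, strand) start location is distinct.
def non_redundant_fraction(read_positions):
    """read_positions: iterable of (chrom, pos, strand) tuples for mapped reads."""
    read_positions = list(read_positions)
    if not read_positions:
        return 0.0
    return len(set(read_positions)) / len(read_positions)
```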
ChIP-seq sequences randomly from one end of each fragment, so to determine which reads came from which fragment, strand cross-correlation analysis is typically used [Fig. 04]. To accomplish this, the forward and reverse strand signals are calculated and then sequentially shifted towards each other; at every shift, the correlation between them is calculated. The correlation peaks at the fragment-length offset f, where f is the length at which the ChIP DNA was fragmented. A good experiment should show a high absolute cross-correlation at the fragment length, and a high fragment-length cross-correlation relative to the read-length cross-correlation. The RSC (Relative Strand Correlation) should be greater than 1.
$R S C=\frac{C C_{\text {fragment}}-\min (C C)}{C C_{\text {readlength}}-\min (C C)}$
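A simplified sketch of this analysis is shown below, assuming per-base read-start coverage vectors for the two strands of a single chromosome; real pipelines work genome-wide and handle many details omitted here.

```python
# Minimal sketch of strand cross-correlation and RSC computation.
import numpy as np

def strand_cross_correlation(plus_cov, minus_cov, max_shift=400):
    """Pearson correlation between forward- and reverse-strand read-start
    coverage, for each strand shift from 1 to max_shift."""
    cc = []
    for shift in range(1, max_shift + 1):
        cc.append(np.corrcoef(plus_cov[:-shift], minus_cov[shift:])[0, 1])
    return np.array(cc)

def rsc(cc, fragment_len, read_len):
    """Relative strand correlation; values > 1 indicate good enrichment."""
    return (cc[fragment_len - 1] - cc.min()) / (cc[read_len - 1] - cc.min())
```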
Once quality control is applied, the data are further processed to determine actual regions of enrichment. To accomplish this, the ENCODE project used a modified version of peak calling. There are many existing peak-calling algorithms, but the ENCODE project used MACS and PeakSeq, as they are deterministic. However, it is not possible to set a uniform p-value or false discovery rate (FDR) constant: the FDR and p-value depend on the ChIP and input sequencing depth and on the binding ubiquity of the factor, and they are highly unstable. Moreover, different tools require different values.
The ENCODE project uses replicates (of the same experiment) and combines the data to find more meaningful results. Simple solutions have major issues: taking the union of the peaks keeps garbage from both replicates, the intersection is too stringent and throws away good peaks, and taking the sum of the data does not exploit the independence of the datasets. Instead, the ENCODE project uses the irreproducible discovery rate (IDR). The key idea is that true peaks will be highly ranked in both replicates. Thus, to find significant peaks, the peaks are considered in rank order until the ranks are no longer correlated.
The cutoff can differ between the two replicates, and the actual peaks included may differ between replicates. This is modeled as a Gaussian mixture model, which can be fit via an EM-like algorithm. Using IDR leads to higher consistency between peak callers, because FDR relies only on enrichment over input while IDR exploits the replicates. Also, if there is only one replicate, the IDR pipeline can still be used with pseudo-replicates generated by sampling.
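The real IDR procedure fits a two-component mixture model by an EM-like algorithm; the toy sketch below is only meant to illustrate the underlying intuition that true peaks should be highly ranked in both replicates, and is not the actual IDR method.

```python
# Toy rank-consistency illustration of the IDR idea (NOT the real IDR model):
# walk down the two replicates' ranked peak lists and stop once the overlap
# between the top-k sets decays below a threshold.
def consistent_peak_count(scores_rep1, scores_rep2, min_overlap=0.8):
    """scores_rep*: dict mapping peak id -> enrichment score in that replicate."""
    rank1 = sorted(scores_rep1, key=scores_rep1.get, reverse=True)
    rank2 = sorted(scores_rep2, key=scores_rep2.get, reverse=True)
    n_consistent = 0
    for k in range(1, min(len(rank1), len(rank2)) + 1):
        overlap = len(set(rank1[:k]) & set(rank2[:k])) / k
        if overlap < min_overlap:
            break
        n_consistent = k
    return n_consistent   # number of top-ranked peaks deemed reproducible
```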
24.04: Current Research Directions
The ENCODE project is still ongoing. Using saturation techniques, we believe we have discovered at most 50% of the elements, and this number is likely lower due to inaccessible cell types and other factors. Also, several cell types are extremely rare and difficult to access, so obtaining sequencing data from these cell types is another challenge.
In computational frontiers, the ENCODE project has produced an enormous amount of raw data. Similar to how the full sequence of the human genome unleashed a series of computational projects, the ENCODE data can be used for a variety of computational projects.
24.05: Further Reading Tools and Technique
The Nature site with ENCODE papers is available at http://www.nature.com/encode/.
The official ENCODE portal is http://encodeproject.org/ENCODE/.
To browse ENCODE data, visit http://encodeproject.org/cgi-bin/hgHubConnect.
Data processing tools for ENCODE data are available at http://encodeproject.org/ENCODE/analysis.html.
24.6: Tools and Techniques
ENCODE data mining, http://genome.ucsc.edu/cgi-bin/hgTab...up=regulation&hgta_track=wgEncodeHudsonalphaChipSeq
ENCODE data visualization, http://genome.ucsc.edu/cgi-bin/hgTra...erUser=submit& hgS_otherUserName=Kate&hgS_otherUserSessionName=encodePortalSession
Software and resources for analyzing ENCODE data, http://genome.ucsc.edu/ENCODE/analysisTools.html
Software tools used to create the ENCODE resource, http://genome.ucsc.edu/ENCODE/encodeTools.html
Section 7: What Have We Learned?
This chapter provides an overview of the ENCODE project, which aims to annotate the entire human genome. It collects DNA sequence data using various experimental techniques such as ChIP-seq, RNA-seq, and CAGE-seq. After the data have been obtained, they need to be processed before analysis. The data go through a number of steps: quality control, peak calling, IDR processing, and blacklist filtering. Once the accuracy of the data has been ensured, other analyses can be done in the form of motif discovery, co-association analysis, and signal aggregation over elements.
A cell is like a robot in that it needs to be able to sense its surroundings and internal state, perform computations and make judgments, and complete a task or function. The emerging discipline of synthetic biology aims to make controlling biological entities such as cells and proteins similar to designing a robot. Synthetic biology combines technology, science, and engineering to construct biological devices and systems for useful purposes, including solutions to world problems in health, energy, environment, and security.
Synthetic biology involves every level of biology, from DNA to tissues. Synthetic biologists aim to create layers of biological abstraction, like those in digital computers, in order to build biological circuits and programs efficiently. One of the major goals in synthetic biology is the development of a standard, well-defined set of tools for building biological systems, which would give synthetic biologists the same level of abstraction available to electrical engineers building complex circuits.
Synthetic biology is a relatively new field. The size and complexity of synthetic genetic circuits have so far been small, on the order of six to eleven promoters. Synthetic genetic circuits also remain small in total size ($10^3$–$10^5$ base pairs) compared to the size of the typical genome in a mammal or other animal ($10^5$–$10^7$ base pairs).
One of the first milestones in synthetic biology occurred in 2000 with the repressilator. The repressilator [2] is a synthetic genetic regulatory network which acts like an electrical oscillator system with fixed time periods. A green fluorescent protein was expressed within E. coli and the fluorescence was measured over time. Three genes in a feedback loop were set up so that each gene repressed the next gene in the loop and was repressed by the previous gene.
The repressilator managed to produce periodic fluctuations in fluorescence. It served as one of the first triumphs in synthetic biology. Other achievements in the past decade include programmed bacterial population control, programmed pattern formation, artificial cell-cell communication in yeast, logic gate creation by chemical complementation with transcription factors, and the complete synthesis, cloning, and assembly of a bacterial genome.
25.02: Current Research Directions
Encoding functionality in DNA is one way synthetic biologists program cells. As the price of sequencing and synthesis of DNA continues to decrease, coding DNA strands has become more feasible. In fact, the number of base pairs that can be synthesized per US\$ has increased exponentially, akin to Moore’s Law.
This has made the process of designing, building, and testing biological circuits much faster and cheaper. One of the major research areas in synthetic biology is the creation of fast, automated synthesis of DNA molecules and the creation of cells with the desired DNA sequence. The goal of creating such a system is to speed up the design and debugging of biological systems so that synthetic biological systems can be prototyped and tested in a quick, iterative process.
Synthetic biology also aims to develop abstract biological components that have standard and well-defined behavior like a part an electrical engineer might order from a catalogue. To accomplish this, the Registry of Standard Biological Parts (partsregistry.org) [4] was created in 2003 and currently contains over 7000 available parts for users. The research portion of creating such a registry includes the classification and description of biological parts. The goal is to find parts that have desirable characteristics such as:
Orthogonality Regulators should not interfere with each other. They should be independent.
Composability Regulators can be fused to give composite function.
Connectivity Regulators can be chained together to allow cascades and feedback.
Homogeneity Regulators should obey very similar physics. This allows for predictability and efficiency.
Synthetic biology is still developing, and research can still be done by people with little background in the field. The International Genetically Engineered Machine (iGEM) Foundation (igem.org) [3] created the iGEM competition, where undergraduate and high school students compete to design and build biological systems that operate within living cells. The student teams are given a kit of biological parts at the beginning of the summer and work at their own institutions to create biological systems. Some interesting projects include:
Arsenic Biodetector The aim was to develop a bacterial biosensor that responds to a range of arsenic concentrations and produces a change in pH that can be calibrated in relation to arsenic concentration. The team's goal was to help under-developed countries, in particular Bangladesh, detect arsenic contamination in water. The proposed device was intended to be more economical, portable, and easier to use than other detectors.
BactoBlood The UC Berkeley team worked to develop a cost-effective red blood cell substitute constructed from engineered E. coli bacteria. The system is designed to safely transport oxygen in the bloodstream without inducing sepsis, and to be stored for prolonged periods in a freeze-dried state.
E. Chromi The Cambridge team project strived to facilitate biosensor design and construction. They designed and characterised two types of parts - Sensitivity Tuners and Colour Generators – E. coli engineered to produce different pigments in response to different concentrations of an inducer. The availability of these parts revolutionized the path of future biosensor design.
Synthetic biology combines many fields, and the techniques used are not particular to synthetic biology. Much like the process of solving other engineering problems, the process of creating a useful biological system has designing, building, testing, and improving phases. Once a design or statement of the desired properties of a biological system are created, the problem becomes finding the proper biological components to build such a system.
BioCompiler [1] is a tool developed to allow the programming of biological circuits using a high-level programming language. One can write programs in a language similar to LISP and compile their program into a biological circuit. BioCompiler uses a process similar to that of a compiler for a programming language. It uses a human-written program as a high-level description of the genetic circuit, then generates a formal description of the program. From there, it looks up abstract genetic regulatory network pieces that can be combined to create the genetic circuit and goes through its library of DNA parts to find appropriate sequences to match the functionality of the abstract genetic regulatory network pieces. Assembly instructions can then be generated for creating cells with the appropriate genetic regulatory network.
Figure 26.5: An example of a BioCompiler program and the process of actualizing it (credit to Ron Weiss)
BioBrick standard biological parts (biobricks.org) are another tool used in synthetic biology. Similar to the parts in the Registry of Standard Biological Parts, BioBrick standard biological parts are DNA sequences of defined structure and function. Each BioBrick part is a DNA sequence held in a circular plasmid. Either end of a BioBrick contains a known, well-defined sequence with restriction sites at which enzymes can cut open the plasmid at known positions. This allows the creation of larger BioBrick parts by chaining together smaller ones. Some competitors in the iGEM competition used BioBrick systems to develop an E. coli line that produced scents such as banana or mint.
25.04: What Have We Learned Bibliography
Synthetic biology is an emerging discipline that aims to create useful biological systems to solve problems in energy, medicine, the environment, and many other fields. Synthetic biologists attempt to use abstraction to enable them to build more complex systems from simpler ones, in a similar way to how a software engineer or an electrical engineer would make a computer program or a complex circuit. The Registry of Standard Biological Parts and BioBrick standard biological parts aim to characterize and standardize biological pieces, just as one would a transistor or logic gate, to enable abstraction. Tools such as BioCompiler allow people to describe a genetic circuit using a high-level language and actually build a genetic circuit with the described functionality. Synthetic biology is still new, and research can be done by those unfamiliar with the field, as demonstrated by the iGEM competition.
Bibliography
[1] J. Beal and J. Bachrach. Cells are plausible targets for high-level spatial languages, 2008.
[2] M. Elowitz and S. Leibler. A synthetic oscillatory network of transcriptional regulators. Nature, 403:335– 338, 2000.
[3] iGEM. igem: Synthetic biology based on standard parts, December 2012.
[4] Registry of Standard Biological Parts. Registry of standard biological parts, December 2012.
Phylogenetics is the study of relationships among a set of objects having a common origin, based on the knowledge of the individual traits of the objects. Such objects may be species, genes, or languages, and their corresponding traits may be morphological characteristics, sequences, words etc. In all these examples the objects under study change gradually with time and diverge from common origins to present day objects.
In biology, phylogenetics is particularly relevant because all biological species happen to be descendants of a single common ancestor which existed approximately 3.5 to 3.8 billion years ago. Throughout the passage of time, genetic variation, isolation, and selection have created the great variety of species that we observe today. Not just speciation, however, but extinction has also played a key role in shaping the biosphere as we see it today. Studying the ancestry between different species is fundamentally important to biology because it sheds much light on different biological functions and genetic mechanisms, as well as on the process of evolution itself.
26.02: Basics of Phylogeny
Trees
A tree is a mathematical representation of relationships between objects. A general tree is built from nodes and edges. Each node represents an object, and each edge represents a relationship between two nodes. In the case of phylogenetic trees, we represent evolution using trees. In this case, each node represents a divergence event between two ancestral lineages, the leaves denote the set of present objects and the root represents the common ancestor.
However, sometimes more information is reflected in the branch lengths, such as the time elapsed or the amount of dissimilarity. According to these differences, biological phylogenetic trees may be classified into three categories:
Cladogram: gives no meaning to branch lengths; only the sequence and topology of the branching matters.
Phylogram: Branch lengths are directly related to the amount of genetic change. The longer the branch of a tree, the greater the amount of phylogenetic change that has taken place. The leaves in this tree may not necessarily end on the same vertical line, due to different rates of mutation.
Chronogram (ultrametric tree): Branch lengths are directly related to time. The longer the branches of a tree, the greater the amount of time that has passed. The leaves in this tree necessarily end on the same vertical line (i.e. they are the same distance from the root), since they are all in the present unless extinct species were included in the tree. Although there is a correlation between branch lengths and genetic distance on a chronogram, they are not necessarily exactly proportional because evolution rates / mutation rates are not constant. Some species evolve and mutate faster than others, and some historical time periods foster faster rates of evolution than others.
A trait is any characteristic that an object or species possesses. In humans, an example of a trait may be bipedalism (the ability to walk upright) or the opposable thumb. Another human trait may be a specific DNA sequence that humans possess. The first examples of physical traits are called morphological traits, while the latter DNA traits are called sequence traits. All methods for tree reconstruction rely on studying the occurrence of different traits in the given objects. In traditional phylogenetics, the morphological data of different species were used for this purpose; in modern methods, genetic sequence data are used instead. Each approach has its advantages and disadvantages.
Morphological Traits: Arise from empirical evaluation of physical traits. This can be advantageous because physical characteristics are very easy to quantify and understand for everyone, scientists and children alike. The disadvantages of this approach are that we can only evaluate a small set of traits, such as hair, nails, hooves, teeth, etc. Further, these traits only allow us to build species trees. Finally, it is much easier to be "tricked" by convergent evolution: species that diverged millions of years ago may converge again on the few traits that are observable to scientists, giving a false representation of how closely related the species are.
Sequence Traits: Are discovered by studying the genomes of different species. This approach can be advantageous because it creates much more data and allows scientists to create gene trees in addition to species trees. The primary difficulty with this approach is that DNA is only built from 4 bases, so back mutations are frequent. In this approach, scientists must reconcile the signals of a large number of ill-behaved traits as opposed to that of a small number of well-behaved traits in the traditional approach. The rest of the chapter will focus principally on tree building from gene sequences.
Since this approach deals with comparing between pairs of genes, it is useful to understand the concept of homology: A pair of genes are called paralogues if they diverged from a duplication event, and orthologues if they diverged from a speciation event.
FAQ
Q: Would it be possible to use extinct species’ DNA sequences?
A: Current technologies only allow for the use of extant sequences. However, there have been a few successes in using extinct species' DNA. DNA from frozen mammoths has been collected and is being sequenced, but due to DNA breaking down over time and contamination from the environment, it is very hard to extract correct sequences.
Once we have found genetic data for a set of species, we are interested in learning how those species relate to one another. Since we can, for the most part, only obtain DNA from living creatures, we must infer the existence of ancestors of each species, and ultimately infer the existence of a common ancestor. This is a challenging problem, because very limited data is available. The following sections will explore the modern methods for inferring ancestry from sequence data. They can be classified into two approaches, distance based methods and character based methods.
Distance-based approaches take two steps to solve the problem: first, quantify the amount of mutation that separates each pair of sequences (which may or may not be proportional to the time since they separated), and second, fit the most likely tree according to the pairwise distance matrix. The second step is usually a direct algorithm based on some assumptions, but it may be more complex.
Character-based approaches instead try to find the tree that best explains the observed sequences. As opposed to direct reconstruction, these methods rely on tree proposal and scoring techniques to perform a heuristic search over the space of trees.
Did You Know?
Occam's Razor, as discussed in previous chapters, does not always provide the most accurate hypothesis. In many cases during tree reconstruction, the simplest explanation is not the most probable. For example, several possible ancestries may be consistent with some observed data; in this case, the simplest ancestry may not be correct if a trait arose independently in two separate lineages. This issue will be considered in a later section.
The distance-based models distill the sequence data into pairwise distances. This step loses some information, but sets up the platform for direct tree reconstruction. The two steps of this method are discussed in detail below.
From alignment to distances
In order to understand how a distance-based model works, it is important to think about what distance means when comparing two sequences. There are three main interpretations.
Nucleotide Divergence is the idea of measuring distance between two sequences based on the number of places where nucleotides are not consistent. This assumes that evolution happens at a uniform rate across the genome, and that a given nucleotide is just as likely to evolve into any of the other three nucleotides. Although it has shortcomings, this is often a great way to think about it.
Transitions and Transversions This is similar to nucleotide divergence, but it recognizes that A-G and T-C substitutions are most frequent. Therefore, it keeps two parameters, the probability of a transition and the probability of a transversion.
Synonymous and non-synonymous substitutions This method keeps tracks of substitutions that affect the coded amino-acid by assuming that substitutions that do not change the coded protein will not be selected against, and will thus have a higher probability of occurring than those substitutions which do change the coded amino acid.
The naive way to interpret the separation between two sequences may be simply the number of mismatches, as described by nucleotide divergence above. While this does provide us with a distance metric (i.e. d(a, b) + d(b, c) ≥ d(a, c)), it does not quite satisfy our requirements, because we want additive distances, i.e. those that satisfy d(a, b) + d(b, c) = d(a, c) for a path a → b → c of evolving sequence: the amount of mutation accumulated along a path in the tree should be the sum of that of its individual components. However, the naive mismatch fraction does not always have this property, because this quantity is bounded by 1, while the sum of individual components can easily exceed 1.
The key to resolving this paradox is back-mutations. When a large number of mutations accumulate on a sequence, not all of the mutations introduce new mismatches; some of them may occur at an already-mutated base pair, resulting in the mismatch score remaining the same or even decreasing. For small mismatch scores, however, this effect is statistically insignificant, because there are vastly more identical pairs than mismatching pairs. But for sequences separated by a longer evolutionary distance, we must correct for this effect. The Jukes-Cantor model is one such simple Markov model that takes this into account.
Jukes-Cantor distances
To illustrate this concept, consider a nucleotide in state ’A’ at time zero. At each time step, it has a probability 0.7 of retaining its previous state and probability 0.1 of transitioning to each of the other three states. The probability P(B|t) of observing state (base) B at time t essentially follows the recursion
$P(B \mid t+1)=0.7 P(B \mid t)+0.1 \sum_{b \neq B} P(b \mid t)=0.1+0.6 P(B \mid t)\nonumber$
Figure 26.5: Markov chain accounting for back mutations
If we plot P(B|t) versus t, we observe that the distribution starts off concentrated at the state 'A' and gradually spreads over to the rest of the states, eventually tending towards an equilibrium of equal probabilities. This progression makes sense intuitively: over millions of years, species can evolve so dramatically that they no longer resemble their ancestors. At that extreme, a given base location in the ancestor is just as likely to have evolved into any of the four possible bases in that location over time.
$\begin{array}{lccccr} \text { time } & 0 & 1 & 2 & 3 & 4 \\ \hline \text { A } & 1 & 0.7 & 0.52 & 0.412 & 0.3472 \\ \text { C } & 0 & 0.1 & 0.16 & 0.196 & 0.2176 \\ \text { G } & 0 & 0.1 & 0.16 & 0.196 & 0.2176 \\ \text { T } & 0 & 0.1 & 0.16 & 0.196 & 0.2176 \end{array}\nonumber$
The essence of the Jukes Cantor model is to backtrack t, the amount of time elapsed from the fraction of altered bases. Conceptually, this is just inverting the x and y axis of the green curve. To model this quantitatively, we consider the following matrix S(t) which denotes the respective probabilities P(x|y,t) of observing base x given a starting state of base y in time $\Delta$t.
$S(\Delta t)=\left(\begin{array}{cccc} P(A \mid A, \Delta t) & P(A \mid G, \Delta t) & \cdots & P(A \mid T, \Delta t) \\ P(G \mid A, \Delta t) & \cdots & & \cdots \\ \cdots & & & \cdots \\ P(T \mid A, \Delta t) & \cdots & \cdots & P(T \mid T, \Delta t) \end{array}\right)\nonumber$
We can assume this is a stationary Markov model, implying this matrix is multiplicative, i.e.
$S\left(t_{1}+t_{2}\right)=S\left(t_{1}\right) S\left(t_{2}\right)$
For a very short time $\epsilon$, we can assume that there are no second-order effects, i.e. there isn't enough time for two mutations to occur at the same nucleotide. So the probabilities of cross-transitions are all proportional to $\epsilon$. Further, in the Jukes-Cantor model, we assume that the transition rates from each nucleotide to every other nucleotide are all the same. Hence, for a short time $\epsilon$,
$S(\epsilon)=\left(\begin{array}{cccc} 1-3 \alpha \epsilon & \alpha \epsilon & \alpha \epsilon & \alpha \epsilon \\ \alpha \epsilon & 1-3 \alpha \epsilon & \alpha \epsilon & \alpha \epsilon \\ \alpha \epsilon & \alpha \epsilon & 1-3 \alpha \epsilon & \alpha \epsilon \\ \alpha \epsilon & \alpha \epsilon & \alpha \epsilon & 1-3 \alpha \epsilon \end{array}\right)\nonumber$
At time t, the matrix is given by
$S(t)=\left(\begin{array}{cccc} r(t) & s(t) & s(t) & s(t) \\ s(t) & r(t) & s(t) & s(t) \\ s(t) & s(t) & r(t) & s(t) \\ s(t) & s(t) & s(t) & r(t) \end{array}\right)\nonumber$
From the equation $S(t+\epsilon)=S(t) S(\epsilon)$ we obtain
$r(t+\epsilon)=r(t)(1-3 \alpha \epsilon)+3 \alpha \epsilon s(t) \quad \text { and } \quad s(t+\epsilon)=s(t)(1-\alpha \epsilon)+\alpha \epsilon r(t)\nonumber$
Which rearrange as the coupled system of differential equations
$r^{\prime}(t)=3 \alpha(-r(t)+s(t)) \text { and } s^{\prime}(t)=\alpha(r(t)-s(t))\nonumber$
With the initial conditions r(0) = 1 and s(0) = 0. The solutions can be obtained as
$r(t)=\frac{1}{4}\left(1+3 e^{-4 \alpha t}\right) \text { and } s(t)=\frac{1}{4}\left(1-e^{-4 \alpha t}\right)\nonumber$
Now, in a given alignment, if we have the fraction f of the sites where the bases differ, we have:
$f=3 s(t)=\frac{3}{4}\left(1-e^{-4 \alpha t}\right)\nonumber$
implying
$t \propto-\log \left(1-\frac{4 f}{3}\right)\nonumber$
To agree asymptotically with f, we set the evolutionary distance d to be
$d=-\frac{3}{4} \log \left(1-\frac{4 f}{3}\right)\nonumber$
Note that the distance is approximately proportional to f for small values of f and asymptotically approaches infinity as f → 0.75. Intuitively, this happens because after a very long period of time we would expect the sequence to be completely random, which would imply about three-fourths of the bases mismatching with the original. But the uncertainty of the Jukes-Cantor distance also becomes very large when f approaches 0.75.
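For completeness, a small sketch of the Jukes-Cantor distance computed from a pairwise alignment (ignoring gapped columns) is:

```python
# Jukes-Cantor distance d = -(3/4) ln(1 - 4f/3), where f is the fraction of
# mismatching (gap-free) aligned positions.
import math

def jukes_cantor_distance(seq1, seq2):
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-']
    f = sum(a != b for a, b in pairs) / len(pairs)
    if f >= 0.75:
        return float('inf')   # sequences are saturated with substitutions
    return -0.75 * math.log(1.0 - 4.0 * f / 3.0)
```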
Other Models
The Jukes-Cantor model is the simplest model that gives us a theoretically consistent additive distance. However, it is a one-parameter model that assumes mutations from each base to every other base are equally likely. In reality, changes between A and G or between T and C are more likely than changes across these groups; the first type of substitution is called a transition, while the second type is called a transversion. The Kimura model has two parameters which take this into account. There are also many other modifications of this distance model that take into account different rates of transitions and transversions, etc., as depicted below.
FAQ
Q: Can we use different parameters for different parts of the tree? To account for different mutation rates?
A: Its possible, it is a current area of research.
Distances to Trees
If we have a weighted phylogenetic tree, we can find the total weight (length) of the shortest path between a pair of leaves by summing up the individual branch lengths in the path. Considering all such pairs of leaves, we have a distance matrix representing the data. In distance based methods, the problem is to reconstruct the tree given this distance matrix.
FAQ
Q: In Figure 27.9, the m and r sequence divergence metrics can have some overlap, so the distance between mouse and rat is not simply m + r. Wouldn't that only be the case if there was no overlap?
A: If you model evolution correctly, then you would get evolutionary distance. It’s an inequality rather than an equality and we agree that you can’t exactly infer that the given distance is the precise distance. Therefore, the sequences’ distance between mouse and rat is probably less than m + r because of overlap, convergent evolution, and transversions.
However, note that there is not a one-to-one correspondence between a distance matrix and a weighted tree. Each tree does correspond to one distance matrix, but the opposite is not always true. A distance matrix has to satisfy additional properties in order to correspond to some weighted tree. In fact, there are two models that assume special constraints on the distance matrix:
Ultrametric: For all triplets (a, b, c) of leaves, two pairs among them have equal distance, and the third distance is smaller; i.e. the triplet can be labelled i, j, k such that
$d_{i j} \leq d_{i k}=d_{j k}\nonumber$
Conceptually, this is because the two leaves that are more closely related (say i, j) diverged from the third (k) at exactly the same time, so their separations from the third should be equal, whereas the separation between themselves should be smaller.
Additive: Additive distance matrices satisfy the property that all quartet of leaves can be labelled i, j, k, l such that
$d_{i j}+d_{k l} \leq d_{i k}+d_{j l}=d_{i l}+d_{j k}\nonumber$
This is in fact true for all positive-weight trees. For any 4 leaves in a tree, there can be exactly one topology, i.e.
Then the above condition is term by term equivalent to
$(a+b)+(c+d) \leq(a+m+c)+(b+m+d)=(a+m+d)+(b+m+c)\nonumber$.
This equality corresponds to all pairwise distances that are possible from traversing this tree.
These types of redundant equalities must occur when mapping a tree to a distance matrix, because a tree of n nodes has n − 1 parameters, one for each branch length, while a distance matrix has on the order of $n^2$ parameters. Hence, a tree is essentially a lower-dimensional projection of a higher-dimensional space. A corollary of this observation is that not all distance matrices have a corresponding tree, but all trees map to unique distance matrices.
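Both conditions are easy to check directly on a distance matrix; the sketch below assumes a symmetric matrix D indexed by leaf number and uses a small tolerance to absorb floating-point noise.

```python
# Checks of the ultrametric (three-point) and additive (four-point) conditions.
from itertools import combinations

def is_ultrametric(D, n, tol=1e-9):
    # For every triplet of leaves, the two largest pairwise distances must match.
    for i, j, k in combinations(range(n), 3):
        d = sorted([D[i][j], D[i][k], D[j][k]])
        if abs(d[1] - d[2]) > tol:
            return False
    return True

def is_additive(D, n, tol=1e-9):
    # Four-point condition: the two largest of the three pairings of
    # pairwise-distance sums must match for every quartet of leaves.
    for i, j, k, l in combinations(range(n), 4):
        s = sorted([D[i][j] + D[k][l], D[i][k] + D[j][l], D[i][l] + D[j][k]])
        if abs(s[1] - s[2]) > tol:
            return False
    return True
```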
However, real datasets do not exactly satisfy either ultrametric or additive constraints. This can be due to noise (when the parameters of our evolutionary models are not precise), stochasticity and randomness (due to small samples), fluctuations, different rates of mutation, gene conversion, and horizontal transfer. Because of this, we need tree-building algorithms that are able to handle noisy distance matrices.
Next, two algorithms that directly rely on these assumptions for tree reconstruction will be discussed.
UPGMA - Unweighted Pair Group Method with Arithmetic Mean
This is exactly the same as the method of hierarchical clustering discussed in Lecture 13, Gene Expression Clustering. It forms clusters step by step, from closely related nodes to ones that are further separated; a branching node is formed for each successive level. The algorithm is described by the following steps:
Initialization:
1. Define one leaf i per sequence xi.
2. Place each leaf i at height 0.
3. Define Clusters Ci each having one leaf i.
Iteration:
1. Find the pairwise distances dij between each pair of clusters Ci, Cj by taking the arithmetic mean of the distances between their member sequences.
2. Find two clusters Ci,Cj such that dij is minimized.
3. Let Ck = $C_{i} \cup C_{j}$.
4. Define node k as the parent of nodes i and j, and place it at height dij/2.
5. Delete Ci,Cj.
Termination: When two clusters Ci, Cj remain, place the root at height dij/2 as parent of the nodes i, j
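A compact (and deliberately unoptimized) sketch of these steps is shown below. It assumes the input is a dictionary of pairwise leaf distances; clusters are represented as frozensets of leaves, and the result maps each internal cluster to its two children and its height.

```python
# Minimal sketch of UPGMA on a symmetric leaf-to-leaf distance dictionary.
def upgma(D, leaves):
    """D: dict with D[(a, b)] = distance between distinct leaves a and b."""
    clusters = {frozenset([x]): 0.0 for x in leaves}            # cluster -> height
    dist = {(frozenset([a]), frozenset([b])): D[(a, b)]
            for a in leaves for b in leaves if a != b}
    tree = {}
    while len(clusters) > 1:
        ci, cj = min(dist, key=dist.get)                        # closest pair
        ck = ci | cj
        height = dist[(ci, cj)] / 2.0
        tree[ck] = (ci, cj, height)
        for cm in list(clusters):
            if cm in (ci, cj):
                continue
            # Average distance over all member pairs = size-weighted mean.
            d = (len(ci) * dist[(ci, cm)] + len(cj) * dist[(cj, cm)]) / (len(ci) + len(cj))
            dist[(ck, cm)] = dist[(cm, ck)] = d
        del clusters[ci], clusters[cj]
        clusters[ck] = height
        dist = {p: v for p, v in dist.items() if ci not in p and cj not in p}
    return tree   # maps each internal cluster to (left child, right child, height)
```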
Ultrametrification of non-ultrametric trees
If a tree does not satisfy the ultrametric conditions, we can attempt to find a set of alterations to an n×n symmetric distance matrix that will make it ultrametric. This can be accomplished by constructing a completely connected graph with weights given by the original distance matrix, finding a minimum spanning tree (MST) of this graph, and then building a new distance matrix with elements D(i,j) given by the largest weight on the unique path in the MST from i to j. A spanning tree of the fully connected graph simply identifies a subset of edges that connects all nodes without creating any cycles, and a minimum spanning tree is a spanning tree that minimizes the total sum of edge weights. An MST can be found using, e.g., Prim's algorithm, and then used to correct a non-ultrametric tree.
Weaknesses of UPGMA
Although this method is guaranteed to find the correct tree if the distance matrix obeys the ultrametric property, it turns out to be an inaccurate algorithm in practice. Apart from its lack of robustness, it suffers from the molecular clock assumption that the mutation rate over time is constant for all species. However, this is not true, as certain species such as rats and mice evolve much faster than others. Such differences in mutation rate can lead to long branch attraction: nodes sharing a lower mutation rate but found in distinct lineages may be merged, leaving those nodes with higher mutation rates (long branches) to appear together in the tree. The following figure illustrates an example where UPGMA fails:
Neighbor Joining
The neighbor joining method is guaranteed to produce the correct tree if the distance matrix satisfies the additive property. It may also produce a good tree when there is some noise in the data. The algorithm is described below:
Finding the neighboring leaves: Let
$D_{i j}=d_{i j}-\left(r_{i}+r_{j}\right) \text { where } r_{a}=\frac{1}{n-2} \sum_{k} d_{a k}, a \in\{i, j\}\nonumber$
Here n is the number of nodes in the tree; hence, ri is the average distance of a node to the other nodes. It can be proved that the above modification ensures that Dij is minimal only if i, j are neighbors. (A proof can be found on page 189 of Durbin's book.)
Initialization: Define T to be the set of leaf nodes, one per sequence. Let L = T
Iteration:
1. Pick i, j such that Dij is minimized.
2. Define a new node k, and set $d_{k m}=\frac{1}{2}\left(d_{i m}+d_{j m}-d_{i j}\right) \quad \forall m \in L\nonumber$
3. Add k to T, with edges of lengths $d_{i k}=\frac{1}{2}\left(d_{i j}+r_{i}-r_{j}\right)$ and $d_{j k}=d_{i j}-d_{i k}$ connecting k to nodes i and j.
4. Remove i, j from L
5. Add k to L
Termination: When L consists of two nodes i,j, and the edge between them of length dij, add the root node as parent of i and j.
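A minimal sketch of this procedure, returning an unrooted tree as a list of edges with fresh labels for internal nodes, could look like the following; the distance update and branch-length formulas are the ones given above, and the input is assumed to be a dict-of-dicts distance matrix keyed by leaf name.

```python
# Minimal sketch of neighbor joining on a dict-of-dicts distance matrix.
def neighbor_joining(D, leaves):
    D = {a: dict(D[a]) for a in leaves}              # working copy
    L = list(leaves)
    edges, next_id = [], 0
    while len(L) > 2:
        n = len(L)
        r = {a: sum(D[a][b] for b in L if b != a) / (n - 2) for a in L}
        # Pick the pair minimizing the corrected distance D_ij = d_ij - (r_i + r_j).
        i, j = min(((a, b) for a in L for b in L if a != b),
                   key=lambda p: D[p[0]][p[1]] - r[p[0]] - r[p[1]])
        k = ('node', next_id)
        next_id += 1
        d_ik = 0.5 * (D[i][j] + r[i] - r[j])         # branch lengths to the new node
        d_jk = D[i][j] - d_ik
        edges += [(i, k, d_ik), (j, k, d_jk)]
        D[k] = {}
        for m in L:
            if m not in (i, j):
                d_km = 0.5 * (D[i][m] + D[j][m] - D[i][j])
                D[k][m] = D[m][k] = d_km
        L = [m for m in L if m not in (i, j)] + [k]
    a, b = L
    edges.append((a, b, D[a][b]))                    # final connecting edge
    return edges
```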
Summary of Distance Methods Pros and Cons
The methods described above have been shown to capture many interesting features of phylogenetic relationships, and are typically very fast in the algorithmic sense. However, some information is certainly lost in the distance matrix, and typically only a single tree is proposed. Serious errors, such as long branch attraction, can be made when basic assumptions about mutation rate etc. are violated. Finally, distance methods make no inference about the history of a particular site, and thus do not make suggestions about the ancestral state of a sequence.
In character-based methods, the goal is to first create a valid algorithm for scoring the probability that a given tree would produce the observed sequences at its leaves, and then to search through the space of possible trees for a tree that maximizes that probability. Good algorithms exist for tree scoring, and while searching the space of trees is theoretically NP-Hard (due to the large number of possible trees), tractable heuristic search methods can in many cases find good trees. We'll first discuss tree scoring algorithms, then search techniques.
Scoring
There are two main algorithms for tree scoring. The first approach, which we will call parsimony reconstruction, is based on Occam’s razor, and scores a topology based on the minimum number of mutations it implies, given the (known) sequences at the leaves. This method is simple, intuitive, and fast. The second approach is a maximum likelihood method which scores trees by explicitly modeling the probability of observing the sequences at the leaves given a tree topology.
Parsimony
Conceptually, this method is simple. It assigns a base for each base pair at each ancestral node such that the number of substitutions is minimized. The score is then just the sum over all base pairs of that minimal number of mutations at each base pair. (Recall that the eventual goal is to find a tree that minimizes that score.)
To reconstruct the ancestral sequences at internal nodes on the tree, the algorithm first scans up from the (known) leaf sequences, assigning a set of bases at each internal node based on its children. Next, it iterates down the tree, picking bases out of the allowed sets at each node, this time based on the node’s parents. The following illustrates this algorithm in detail (note that there are 2N-1 total nodes, indexed from the root, such that the known leaf nodes have indices N-1 through 2N-1):
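A minimal Python sketch of the two passes just described is given below (this is Fitch's algorithm applied to a single site; the tree is assumed binary, nodes are referred to by name rather than by index, and all identifiers are our own):

def fitch_one_site(children, leaf_char, root):
    """Fitch parsimony for a single site.  `children[v]` lists the two
    children of internal node v; `leaf_char[v]` gives the observed base at
    each leaf.  Returns (minimum substitutions, base assigned to every node)."""
    sets, assign = {}, {}
    cost = 0

    def up(v):                      # bottom-up pass: candidate base sets
        nonlocal cost
        if v in leaf_char:
            sets[v] = {leaf_char[v]}
            return
        for c in children[v]:
            up(c)
        a, b = (sets[c] for c in children[v])
        if a & b:
            sets[v] = a & b         # non-empty intersection: no forced change
        else:
            sets[v] = a | b         # disjoint sets: one substitution is forced
            cost += 1

    def down(v, parent_base):       # top-down pass: pick one base per node
        base = parent_base if parent_base in sets[v] else sorted(sets[v])[0]
        assign[v] = base
        for c in children.get(v, []):
            down(c, base)

    up(root)
    down(root, None)
    return cost, assign

# Toy example on the tree ((A,A),(C,G)): the minimum is 2 substitutions.
children = {"root": ["x", "y"], "x": ["l1", "l2"], "y": ["l3", "l4"]}
leaves = {"l1": "A", "l2": "A", "l3": "C", "l4": "G"}
print(fitch_one_site(children, leaves, "root"))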
As we mentioned before, this method is simple and fast. However, this simplicity can distort the scores it assigns. For one thing, the algorithm presented here assumes that a given base pair undergoes a substitution along at most one branch from a given node, which may lead it to ignore highly probable internal sequences that violate this assumption. Furthermore, this method does not explicitly model the time represented along each edge, and thus cannot account for the increased chance of a substitution along edges that represent a long temporal duration, or the possibility of different mutation rates across the tree. Maximum likelihood methods largely resolve these shortcomings, and are thus more commonly used for tree scoring.
Maximum Likelihood - Peeling Algorithm
As with general maximum likelihood methods, this algorithm scores a tree according to the (log) joint probability of observing the data and the given tree, i.e. P(D,T). As in the parsimony method, the peeling algorithm considers each base pair independently, assuming that all sites evolve independently: it calculates the probability of observing the given characters at each base pair in the leaf nodes, given the tree, a set of branch lengths, and the maximum likelihood assignment of the internal sequence, and then simply multiplies these probabilities over all base pairs to get the total probability of observing the tree. Note that the explicit modeling of branch lengths is a difference from the previous approach.
Here each node has a character xi and ti is the corresponding branch length from its parent. Note that we already know the values x1, x2···xn, so they are constants, but xn+1,···x2n-1 are unknown characters at ancestral nodes, which are variables to which we will assign maximum likelihood values. (Also note that we have adopted a leaves-to-root indexing scheme for the nodes, the opposite of the scheme we used before.) We want to compute P(x1x2···xn|T). For this we sum over all possible combinations of values at the ancestral nodes; this is called marginalization. In this particular example
$P\left(x_{1} x_{2} x_{3} x_{4} \mid T\right)=\sum_{x_{5}} \sum_{x_{6}} \sum_{x_{7}} P\left(x_{1} x_{2} \cdots x_{7} \mid T\right)\nonumber$
There are $4^{n-1}$ terms in this sum, but we can use the following factorization trick:
$=\sum_{x_{5}} \sum_{x_{6}} \sum_{x_{7}} P\left(x_{7}\right) P\left(x_{5} \mid x_{7}, t_{5}\right) P\left(x_{6} \mid x_{7}, t_{6}\right) P\left(x_{1} \mid x_{5}, t_{1}\right) P\left(x_{2} \mid x_{5}, t_{2}\right) P\left(x_{3} \mid x_{6}, t_{3}\right) P\left(x_{4} \mid x_{6}, t_{4}\right) \nonumber$
Here we assume that each branch evolves independently, and the probability P(b|c,t) denotes the probability of base c mutating to base b in time t, which is essentially obtained from the Jukes-Cantor model or one of the more advanced models discussed earlier. Next we can move the factors that are independent of the summation variable outside the summation. That gives:
$=\sum_{x_{7}}\left[P\left(x_{7}\right)\left(\sum_{x_{5}} P\left(x_{5} \mid x_{7}, t_{5}\right) P\left(x_{1} \mid x_{5}, t_{1}\right) P\left(x_{2} \mid x_{5}, t_{2}\right)\right)\left(\sum_{x_{6}} P\left(x_{6} \mid x_{7}, t_{6}\right) P\left(x_{3} \mid x_{6}, t_{3}\right) P\left(x_{4} \mid x_{6}, t_{4}\right)\right)\right]\nonumber$
Let Ti be the subtree below i. In this case, our (2n-1)$\times$4 dynamic programming array computes L[i,b], the probability P(Ti|xi = b) of observing Ti if node i contains base b. Then we want to compute the probability of observing T = T2n-1, which is
$\sum_{b} P\left(x_{2 n-1}=b\right) L[2 n-1, b]\nonumber$
Note that for each ancestral node i and its children j, k, we have
$L[i, b]=\left(\sum_{c} P\left(c \mid b, t_{j}\right) L[j, c]\right)\left(\sum_{c} P\left(c \mid b, t_{k}\right) L[k, c]\right)\nonumber$
Subject to the initial conditions for the leaf nodes, i.e. for i $\leq$ n:
L[i, b] = 1 if xi = b and 0 otherwise
Note that we still do not have the values P(x2n-1 = b). These are usually assigned uniformly or from some prior distribution, but they do not affect the results greatly. The final step is of course to multiply the probabilities for the individual sites to obtain the probability of observing the set of entire sequences. In addition, once we have assigned the maximum likelihood values for each internal node given the tree structure and the set of branch lengths, we can multiply the resulting score by prior probabilities of the tree structure and the set of branch lengths, which are often generated using explicit modeling of evolutionary processes, such as the Yule process or birth-death models like the Moran process. The result of this final multiplication is called the a posteriori probability, using the language of Bayesian inference. The overall complexity of this algorithm is $O(nmk^2)$ where n is the number of leaves (taxa), m is the sequence length, and k is the number of characters.
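A minimal sketch of the recursion for a single site is shown below, assuming Jukes-Cantor transition probabilities and a uniform root prior (identifiers are our own; real implementations work in log space and reuse computation across sites):

import math

BASES = "ACGT"

def jc_prob(b, c, t, mu=1.0):
    """Jukes-Cantor probability of observing base b given ancestor c after time t."""
    same = 0.25 + 0.75 * math.exp(-4.0 * mu * t / 3.0)
    diff = 0.25 - 0.25 * math.exp(-4.0 * mu * t / 3.0)
    return same if b == c else diff

def peel_one_site(children, branch_len, leaf_char, root):
    """Compute P(observed leaf bases | tree) for one site by the peeling
    (pruning) recursion.  `children[v]` lists children of internal node v,
    `branch_len[v]` is the branch above v, and `leaf_char` gives leaf bases."""
    L = {}                                            # L[v][b] = P(subtree below v | base b at v)

    def fill(v):
        if v in leaf_char:                            # leaf: indicator on the observed base
            L[v] = {b: 1.0 if b == leaf_char[v] else 0.0 for b in BASES}
            return
        for c in children[v]:
            fill(c)
        L[v] = {}
        for b in BASES:                               # product over children of summed transitions
            val = 1.0
            for c in children[v]:
                val *= sum(jc_prob(x, b, branch_len[c]) * L[c][x] for x in BASES)
            L[v][b] = val

    fill(root)
    return sum(0.25 * L[root][b] for b in BASES)      # uniform prior on the root base

# Toy example: two close leaves sharing 'A' plus a more distant 'C'.
children = {"root": ["x", "l3"], "x": ["l1", "l2"]}
branch_len = {"x": 0.1, "l1": 0.05, "l2": 0.05, "l3": 0.3}
print(peel_one_site(children, branch_len, {"l1": "A", "l2": "A", "l3": "C"}, "root"))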
This algorithm has both advantages and disadvantages:
Advantages:
1. Inherently statistical and evolutionary model-based.
2. Usually the most consistent of the methods available.
3. Used for both character and rate analyses.
4. Can be used to infer the sequences of the extinct ancestors.
5. Accounts for branch-length effects in unbalanced trees.
6. Can be applied to nucleotide or amino acid sequences, and to other types of data.
Disadvantages:
1. Not as simple and intuitive as many other methods.
2. Computationally intense (limited by the number of taxa and sequence length).
3. Like parsimony, can be fooled by high levels of homoplasy.
4. Violations of model assumptions can lead to incorrect trees.
Search
A comprehensive search over the space of all trees would be extremely costly. The number of full rooted trees with n + 1 leaves is the n-th Catalan number
$C_{n}=\frac{1}{n+1}\left(\begin{array}{c} 2 n \ n \end{array}\right) \approx \frac{4^{n}}{n^{3 / 2} \sqrt{\pi}}\nonumber$
Moreover, we must compute the maximum likelihood set of branch lengths for each of these trees. Thus, it is an NP-Hard problem to maximize the score absolutely for all trees. Fortunately, heuristic search algorithms can generally identify good solutions in the tree space. The general framework for such search algorithms is as follows:
Initialization: Take some tree as the base of iteration (chosen randomly, according to some other prior, or from the direct distance-based algorithms).
Proposal: Propose a new tree by randomly modifying the current tree slightly.
Score: Score the new proposal according to the methods described above.
Select: Randomly select the new tree or the old tree, with probabilities corresponding to the score (likelihood) ratio.
Iterate: Repeat from the proposal step until some termination criterion is met (e.g., a threshold score or a maximum number of steps is reached).
The basic idea here is the heuristic assumption that the scores of closely related trees are similar, so that good solutions may be obtained by successive local optimization, which is expected to converge towards an overall good solution.
Tree Proposal
One method for modifying trees is the Nearest Neighbor Interchange (NNI), illustrated below.
Figure 27.20: A unit step using the Nearest Neighbor Interchange scheme
Another common method, not described here, is Tree Bisection and Join (TBJ). The important criteria for such proposal rules are that:
1. The tree space should be connected, i.e. any pair of trees should be obtainable from each other by successive proposals.
2. An individual new proposal should be sufficiently close to the original, so that it is more likely to be a good solution by virtue of its proximity to an already discovered good solution. If individual steps are too big, the algorithm may move away from an already discovered solution (this also depends on the selection step). In particular, note that the measure of similarity by which we measure these step sizes is precisely the difference in the likelihood scores assigned to the two trees.
Selection
Choosing whether or not to adopt a given proposal, like the process of generating the proposal itself, is inherently heuristic and varies. Some general rules of thumb are:
1. If the new one has a better score, always accept it.
2. If it has a worse score, there should be some probability of selecting it; otherwise the algorithm will quickly become stuck in a local optimum, ignoring better alternatives a little farther away.
3. There should not be too much probability of selecting a worse new proposal; otherwise the algorithm risks abandoning a known good solution.
It is the trade-off between rules 2 and 3 that determines a good selection rule. Metropolis-Hastings is a Markov Chain Monte Carlo (MCMC) method that defines specific rules for exploring the state space in a way that makes the visited trees a sample from the posterior distribution. These algorithms work reasonably well in practice, but there is no guarantee of finding the appropriate tree. So a method known as bootstrapping is used, which is basically running the algorithm over and over using subsets of the base pairs in the leaf sequences, and then favoring global trees that match the topologies generated by using only these subsequences.
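A skeleton of this propose/score/select loop with a Metropolis-style acceptance rule is sketched below; `propose_nni` and `log_likelihood` are placeholders standing in for an NNI move and one of the scoring methods above, not real library calls:

import math, random

def tree_search(init_tree, propose_nni, log_likelihood, n_steps=10000):
    """Generic heuristic tree search: propose a neighboring tree (e.g. by NNI),
    score it, and accept or reject it with a Metropolis-style rule."""
    current, current_ll = init_tree, log_likelihood(init_tree)
    best, best_ll = current, current_ll
    for _ in range(n_steps):
        candidate = propose_nni(current)              # small random modification
        cand_ll = log_likelihood(candidate)
        # Always accept improvements; accept worse trees with probability equal
        # to the likelihood ratio, so the walk can escape local optima.
        if cand_ll >= current_ll or random.random() < math.exp(cand_ll - current_ll):
            current, current_ll = candidate, cand_ll
        if current_ll > best_ll:
            best, best_ll = current, current_ll
    return best, best_ll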
A special point must be made about distances. Since distances are typically calculated between aligned gene sequences, most current tree reconstruction methods rely on heavily conserved genes, as non-conserved genes would not give information on species that lack those genes. This means otherwise useful data are ignored. Therefore, there are some algorithms that try to take into account less conserved genes in reconstructing trees, but these algorithms tend to take a long time due to the NP-Hard nature of reconstructing trees.
Additionally, aligned sequences are still not explicit in regard to the events that created them. That is, combinations of speciation, duplication, loss, and horizontal gene transfer (HGT) events are easy to mix up because only current DNA sequences are available (see [11] for a commentary on such theoretical issues). A duplication followed by a loss would be very hard to detect. Additionally, a duplication followed by a speciation could look like an HGT event. Even the probabilities of these events happening are still contested, especially for horizontal gene transfer events.
Another issue is that often multiple marker sequences are concatenated and the concatenated sequence is used to calculate distances and create trees. However, this approach assumes that all the concatenated genes had the same history, and there is debate over whether this is a valid approach given that events such as HGT and duplications, as described above, could have occurred differently for different genes. [8] is an article showing how different phylogenetic relationships were found depending on whether the tree was created using multiple genes concatenated together or using each of the individual genes. Conversely, [4] claims that while HGT is prevalent, orthologs used for phylogenetic reconstruction are consistent with a single tree of life. These two issues indicate that there is clearly debate in the field on a non-arbitrary way to define species and to infer phylogenetic relationships to recreate the tree of life.
26.06: Towards Final Project, What Have We Learned, Bibliography
Project Ideas
1. Creating better distance models such as taking into account duplicate genes or loss of genes. It may also be possible to analyze sequences for peptide coding regions and calculate distances based on peptide chains too.
2. Creating a faster/more accurate search algorithm for turning distances into trees.
3. Analyze sequences to calculate probabilities of speciation, duplication, loss, and horizontal gene transfer events.
4. Extending an algorithm that looks for HGTs to look for extinct species. A possible use for HGTs is that if a program were to infer HGTs between different times, it could mean that there was a speciation where one branch is now extinct (or not yet discovered) and that branch had caused an HGT to the other extant branch.
Project Datasets
1. 1000 Genomes Project http://www.1000genomes.org/
2. Microbes Online http://microbesonline.org/
26.7: What Have We Learned
In this chapter, we have learned different methods and approaches for reconstructing phylogenetic trees from sequence data. In the next chapter, we will discuss the application of these methods to gene trees and species trees and the relationship between the two, as well as modelling phylogenies among populations within a species and between closely related species.
Bibliography
[1] 1000 genomes project.
[2] Francesca Ciccarelli et al. Toward automatic reconstruction of a highly resolved tree of life. Science, 311, 2006.
[3] Tal Dagan and William Martin. The tree of one percent. Genome Biology, Nov 2006.
[4] Vincent Daubin, Nancy A. Moran, and Howard Ochman. Phylogenetics and the cohesion of bacterial genomes. Science, 301, 2003.
[5] A.J. Enright, S. Van Dongen, and C. A. Ouzounis. An efficient algorithm for large-scale detection of protein families. Nucleic Acids Research, 30(7):1575–1584, Apr 2002.
[6] Stéphane Guindon and Olivier Gascuel. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood. Systematic Biology, 52(5):696–704, 2003.
[7] Sanderson MJ. r8s: Inferring absolute rates of molecular evolution and divergence times in the absence of a molecular clock. Bioinformatics, 19(2):301–302, Jan 2003.
[8] R. Thane Papke, Olga Zhaxybayeva, Edward J Fiel, Katrin Sommerfeld, Denise Muise, and W. Ford Doolittle. Searching for species in haloarchaea. PNAS, 104(35):14092–14097, 2007.
[9] Pere Puigbo, Yuri I Wolf, and Eugene V Koonin. Search for a ’tree of life’ in the thicket of the phylogenetic forest. Journal of Biology, 8(59), July 2009.
[10] Sagi Snir, Yuri I Wolf, and Eugene V Koonin. Universal pacemaker of genome evolution. PLoS Computational Biology, 8(11), 2012.
[11] Douglas L Theobald. A formal test of the theory of universal common ancestry. Nature, 465:219–222, 2010.
In the previous chapter, we covered techniques for reasoning about evolution in terms of trees of descent. The algorithms we covered for tree-building, UPGMA and neighbor-joining, assumed that we were comparing fully aligned sections of sequences.
In this section, we present additional models for using phylogenetic trees in different contexts. Here we clarify the differences between species and gene trees. We then cover a framework called reconciliation which lets us effectively combine the two by mapping gene trees onto species trees. This mapping gives us a means of inferring gene duplication and loss events.
We will also present a phylogenetic perspective for reasoning about population genetics. Since population genetics deals with relatively recent mutation events, we offer the Wright-Fisher model as a tool for representing changes in whole populations. Unfortunately, when dealing with real-world data, we usually are only able to sequence genes from the current living descendants of a group. As a remedy to this shortcoming, we cover the Coalescent model, which you can think of as a time-reversed Wright-Fisher analog.
By using coalescence, we gain a new means for estimating divergence times and population sizes across multiple species. At the end of the chapter, we touch briefly on the challenges of using trees to model recombination events and summarize recent work in the field along with frontiers open for exploration.
27.02: SPIDIR
Background
As presented in the supplementary information for SPIDIR, a gene family is the set of genes that are descendants of a single gene in the most recent common ancestor (MRCA) of all species under consideration. Furthermore, genetic sequences undergo evolution at multiple scales, namely at the level of base pairs and at the level of genes. In the context of this lecture, two genes are orthologs if their MRCA is a speciation event; two genes are paralogs if their MRCA is a duplication event.
In the genomic era, the species of a modern gene is often known; ancestral genes can be inferred by reconciling gene and species trees. A reconciliation maps every gene-tree node to a species-tree node. A common technique is to perform Maximum Parsimony Reconciliation (MPR), which finds the reconciliation R implying the fewest number of duplications or losses using a recursion over the inner nodes v of a gene tree G. MPR first maps each leaf of the gene tree to the corresponding species leaf of the species tree. Then the internal nodes of G are mapped recursively:
$R(v)=\operatorname{MRCA}(R(\operatorname{right}(v)), R(\operatorname{left}(v)))\nonumber$
If a speciation event and its ancestral node are mapped to the same node on the species tree, then the ancestral node must be a duplication event. Using MPR, the accuracy of the gene tree is crucial. Suboptimal gene trees may lead to an excess of loss and duplication events. For example, if just one branch is misplaced, reconciliation infers 3 losses and 1 duplication event. In [6], the authors show that contemporary gene tree methods perform poorly (60% accuracy) on single genes. But if we have longer concatenated genes, then accuracy may go up towards 100%. Furthermore, very quickly or slowly evolving genes carry less information than moderately diverging sequences (40-50% sequence identity), and perform correspondingly worse. As corroborated by simulations, single genes lack sufficient information to reproduce the correct species tree. Average genes are too short and contain too few phylogenetically informative characters. While many early gene tree construction algorithms ignored species information, algorithms like SPIDIR capitalize on the insight that the species tree can provide additional information which can be leveraged for gene tree construction. Synteny can be used to independently test the relative accuracy of different gene tree reconstructions. This is because syntenic blocks are regions of the genome where recently diverged organisms have the same gene order, and contain much more information than single genes.
There have been a number of recent phylogenomic algorithms, including: RIO [2], which uses neighbor joining (NJ) and bootstrapping to deal with incongruencies; Orthostrapper [7], which uses NJ and reconciles to a vague species tree; and TreeFAM [3], which uses human curation of gene trees, as well as many others. A number of algorithms take a track more similar to SPIDIR [6], including [4], a probabilistic reconciliation algorithm; [8], a Bayesian method with a clock; and [9], a parsimony method using the species tree; as well as more recent developments: [1], a Bayesian method with a relaxed clock, and [5], a Bayesian method with gene- and species-specific relaxed rates (an extension to SPIDIR).
Method and Model
SPIDIR exemplifies an iterative algorithm for gene tree construction using the species tree. In SPIDIR, the authors define a generative model for gene-tree evolution. This consists of a prior for gene-tree topology and branch lengths. SPIDIR uses a birth and death process to model duplications and losses (which informs the prior on topology) and then learns gene-specific and species-specific substitution rates (which inform the prior on branch lengths). SPIDIR is a Maximum a posteriori (MAP) method, and, as such, enjoys several nice optimality criteria.
In terms of the estimation problem, the full SPIDIR model appears as follows:
$\underset{L, T, R}{\operatorname{argmax}} P(L, T, R \mid D, S, \Theta)=\underset{L, T, R}{\operatorname{argmax}} P(D \mid T, L) P(L \mid T, R, S, \Theta) P(T, R \mid S, \Theta)\nonumber$
The parameters in the above equation are: D = alignment data, L = branch lengths, T = gene tree topology, R = reconciliation, S = species tree (expressed in times), and $\Theta$ = (gene- and species-specific rate parameters [estimated using EM training], and the duplication/loss rate parameters $\lambda$, μ). This model can be understood through the three terms in the right hand expression, namely:
1. the sequence model– P(D|T,L). The authors used the common HKY model for sequence substitutions, which unifies Kimura’s two parameter model for transitions and transversions with Felsenstein’s model where substitution rate depends upon nucleotide equilibrium frequency.
2. the first prior term, for the rates model– P(L|T,R,S,$\Theta$), which the authors compute numerically after learning species and gene specific rates.
3. the second prior term, for the duplication/loss model– P(T,R|S,$\Theta$), which the authors describe using a birth and death process.
Having a rates model is very useful, since mutation rates are quite variable across genes. In the lecture, we saw how rates were well described by a decomposition into gene-specific and species-specific rates. In lecture we saw that an inverse gamma distribution appears to parametrize the gene-specific substitution rates, and that a gamma distribution apparently captures species-specific substitution rates. Accounting for gene- and species-specific rates allows SPIDIR to build gene trees more accurately than previous methods. A training set for learning rate parameters can be chosen from gene trees which are congruent to the species tree. An important algorithmic concern for gene tree reconstruction is devising a fast tree search method. In lecture, we saw how the tree search could be sped up by only computing the full argmaxL,T,R P(L,T,R|D,S,$\Theta$) for trees with high prior probabilities. This is accomplished through a computational pipeline where in each iteration hundreds of trees are proposed by some heuristic. The topology prior P(T,R|S,$\Theta$) can be computed quickly. This is used as a filter where only the topologies with high prior probabilities are selected as candidates for the full likelihood computation.
The performance of SPIDIR was tested on a real dataset of 21 fungi. SPIDIR recovered over 96% of the synteny orthologs while other algorithms found less than 65%. As a result, SPIDIR invoked far fewer duplications and losses.
In Figure 28.24 a, the two chromosomes at the top represent the homologous chromosomes of a parent. The red chromosome represents the genetic information from the mother and the blue chromosome represents the genetic information from the father (of the grandparent generation). Without crossing-over (recombination), the parent will either pass on the red or the blue genetic information to the offspring. In reality, recombination happens during meiosis so that a parent will pass on some genetic information from both grandparents, effectively passing on a better representation of the parent genetic information.
At each generation, a recombination event can occur at any locus. The evolutionary history of recombination can be tracked through a sequential graph of trees, such that the ith tree in the graph represents recombination at the ith locus.
Further details on ancestral recombination graphs can be found at www.eecs.berkeley.edu/yss/Pub/SH-JCBO5.pdf and in the course notes from 2012.
The Sequentially Markov Coalescent
The Sequentially Markov Coalescent Model addresses the role of recombination in tree construction. With recombination involved, a sequence may have two parents, which complicates construction. The Sequentially Markov Coalescent Model tells us that moving sequentially from left to right is a simpler and much more efficient approach to analyzing the tree; the approach essentially breaks the tree into local trees and overlays them to describe recombination events. More can be read in the following paper:
http://www.ncbi.nlm.nih.gov/pubmed/21270390, which elaborates on the intricacies of the model itself.
27.04: Conclusion, Further Reading, What Have We Learned
Incorporating species tree information into the gene tree building process by introducing separate gene and species substitution rates allows for accurate, parsimonious gene tree reconstructions. Previous gene tree reconstructions probably vastly overestimated the number of duplication and loss events. Reconstructing gene trees for large families remains a challenging problem.
27.05: Inferring Orthologs, Paralogs, Gene Duplication and Loss
There are two commonly used types of trees: species trees and gene trees. This section explains how these trees can be used and how to fit a gene tree inside a species tree (reconciliation).
Species Tree
Species trees show how different species evolved from one another. These trees are created using morphological characters, fossil evidence, etc. The leaves of each tree are labeled with species and the rest of the tree shows how these species are related. An example of a species tree is shown in Figure 27.1. Note: in lecture it was mentioned that a species can be thought of as a "bag of genes", that is to say the group of common genes among members of a species.
Gene Tree
Gene trees are trees that look at specific genes in different species. The leaves of gene trees are labeled with gene sequences or gene ids associated with specific sequences. Figure 27.2 shows an example of a gene tree that has 4 genes (leaves). The sequences associated with each gene are presented on the right side of Figure 27.2.
Gene Family Evolution
Gene trees evolve inside a species tree. An example of a gene tree contained in a species tree is shown in Figure 27.3 below.
The next sub section explains how we can fit gene trees inside a species trees using Reconciliation.
Reconciliation
Reconciliation is an algorithm that helps compare gene trees to species trees by fitting a gene tree inside a species tree. This is done by mapping the vertices in the gene tree to vertices in the species tree. This subsection will focus on reconciliation, related definitions, algorithms (Maximum Parsimony Reconciliation and SPIDIR) and examples.
Definitions
Two genes are orthologs if their most recent common ancestor (MRCA) is a speciation (splitting into different species).
Paralogs are genes whose MRCA is a duplication.
Figure 27.4 below illustrates how these types of genes can be represented in a gene tree. The tree below has 4 speciation nodes, one duplication and one loss.
A mapping diagram is a diagram that shows the node mapping from the gene tree to the species tree. Figure 27.5 shows an example of a mapping diagram.
A nesting diagram shows how the gene tree can be nested inside the species tree. For every mapping diagram there is a nesting diagram. Figure 27.6 shows an example of a possible nesting diagram for the mapping diagram in Figure 27.5.
Maximum Parsimony Reconciliation (MPR) Algorithm
MPR is an algorithm that fits a gene tree into a species tree while minimizing the number of duplications and deletions.
Given a gene tree and a species tree, the algorithm finds the reconciliation that minimizes the number of duplications and deletions. Figure 27.7 above shows an example of a possible mapping from a gene tree to a species tree. Figure 27.8 presents the pseudocode for the MPR algorithm. The base case involves matching the leaves of the gene tree to the leaves of the species tree; the algorithm then progresses up the vertices of the gene tree, drawing a relationship between the MRCA of all leaves within a given vertex’s sub-tree and the corresponding MRCA vertex in the species tree. In the pseudocode, I(G) represents the internal nodes of the gene tree and L(G) represents its leaves.
We map each gene-tree node as low as possible on the species tree, since lower mappings usually result in fewer events. However, we cannot map too low. Mapping too low means that we’re violating the constraint that the MRCA of a given node is at least as high as the MRCA of its children. We map as low as we can without violating the descendant-ancestor relationships. The algorithm goes recursively from the bottom up, starting from the leaves. Since we sample genes from known species to build the gene tree, there’s a direct mapping between the leaves of the gene tree and the leaves of the species tree. To map the ancestors, for each node (going recursively up the tree) we look at the right child and left child and take the least common ancestor (LCA) of the species that they map to. If a node maps to the same species node as its right or left child, we know there is a duplication. An expected branch that does not exist indicates a loss.
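The recursion can be written compactly as follows (a minimal Python sketch with our own identifiers; the species tree is given via parent pointers and the gene tree via child lists):

def mpr(gene_children, gene_leaf_species, species_parent, gene_root):
    """Maximum parsimony reconciliation sketch.  `gene_children[v]` lists the
    children of gene-tree node v, `gene_leaf_species` maps gene-tree leaves to
    species-tree leaves, and `species_parent` maps each species node to its
    parent (the species root maps to None).  Returns (mapping R, duplication set)."""

    def depth(s):                                     # depth of a species node
        d = 0
        while species_parent[s] is not None:
            s, d = species_parent[s], d + 1
        return d

    def mrca(a, b):                                   # walk up until the two paths meet
        da, db = depth(a), depth(b)
        while da > db:
            a, da = species_parent[a], da - 1
        while db > da:
            b, db = species_parent[b], db - 1
        while a != b:
            a, b = species_parent[a], species_parent[b]
        return a

    R, dups = {}, set()

    def recurse(v):
        if v in gene_leaf_species:                    # base case: leaves map directly
            R[v] = gene_leaf_species[v]
            return
        left, right = gene_children[v]
        recurse(left)
        recurse(right)
        R[v] = mrca(R[left], R[right])
        if R[v] in (R[left], R[right]):               # same species node as a child => duplication
            dups.add(v)

    recurse(gene_root)
    return R, dups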
Reconciliation Examples
In Figure 27.10, we see a parsimonious (minimum number of losses and duplications) reconciliation for a case in which nodes from the gene tree cannot be mapped straight across. This is a result of the swapped locations of h1 and d1 in the gene tree; the least common ancestor for d1, m1, and r1 is now the root vertex of the species tree.
Figure 27.11 shows a non-parsimonious reconciliation. The parsimonious mapping for the same trees is shown in Figure 27.9.
Figure 27.12 shows an invalid reconciliation. This reconciliation is invalid since it does not respect descendant-ancestor relationships. In order for this reconciliation to be possible, the descendant would have to travel back in time and be created before its ancestor. Clearly, such a scenario would be impossible. A valid reconciliation must satisfy the following: If a < b in G, then R[a] $\leq$ R[b] in S.
Interpreting Reconciliation Examples
Gene trees, when reconciled with species trees, offer significant insight into evolutionary events (namely duplications and losses). A duplication means the same gene is found at a separate locus - m2 or r2, in this situation - and duplication is a major mechanism for creating new genes and functions. The evolutionary consequences fall into three categories: nonfunctionalization, neofunctionalization and subfunctionalization. Nonfunctionalization is quite common and causes one of the copies, unsurprisingly, to simply not function. Neofunctionalization is when one of the copies develops an entirely new function. Subfunctionalization is when the copies retain different parts (dividing up the labor, in a way), and together, perform the same function.
In Figure 4, we see that a duplication event occurred before the divergence of mice and rats as species. This is why we see similar genes at both m1 and m2, which represent two separate loci. d2 and h2 are not included in the graph because the gene being considered is not present at those loci (since no duplication event occurred), whereas it is present at both m2 and r2.
If the duplication event were to have occurred one level higher in Figure 4, without seeing a corresponding h2 in the gene tree, this would imply a loss within the h branch of the species tree.
In the previous section we learned how to compare and combine gene trees and species trees. In this section, we will use this information to reconstruct gene trees and species trees.
Species Tree Reconstruction
In the past, it was really hard to identify a marker gene that would give insight into the differentiation of specific species. As sequencing improved, we started having lots of sequencing data on various genes. Based on different sets of loci, people built different trees, which were highly dependent on the set of loci chosen. Possible reasons why trees differ include noise (from statistical estimation error), hidden duplications and losses, and allele sorting in a population.
Species Tree Reconstruction Problem
Given lots of different gene trees that disagree, our goal is to make them into one species tree (as shown in Figure 27.13). There are lots of different algorithms that reconstruct species trees. These algorithms include Supermatrix methods (Rokas 2003, Ciccareli 2006), Supertree methods (Creevey & McInerney 2005), Minimizing Deep Coalescence (Maddison & Knowles 2006) and Modeling coalescence (Liu & Pearl 2007).
One way to do this, which is mostly effective for noisy data, is to pull more data together in order to increase accuracy. This is done by concatenating gene alignments into a super-matrix.
Another method involves building a tree for each one and using a consensus method to summarize these trees. Then we identify analogous branches across the a lot of trees and build a species tree that has the branches that occur most frequently.
There is another way to reconstruct a species tree, which is effective in case the gene trees disagree because of duplications and losses. The goal is to find the species tree that implies the fewest duplications. We build all the gene trees and then propose a species tree. Next, we use reconciliation to determine the number of events each gene tree implies when combined with the proposed species tree. Then, we propose other species trees and move branches around. Wrong species trees tend to imply lots of events that did not actually happen. The correct tree should imply the fewest events.
Improving Gene Tree Reconstruction and Learning Across Gene Trees
We can use methods similar to those described above to build better gene trees. This can be done by using information from a species tree to study a gene tree of interest. For example, species trees can be used to determine when losses and duplications occurred. The idea is that we can use the fact that species trees are often built from the entire genome to obtain more information about related gene trees. We can use both the branch lengths and the number of events to do this.
If we know the species tree, we can develop a model for what kind of branch lengths we can expect. We can also use conserved gene order to identify orthologs and build trees.
When a gene is fast evolving in one species, it tends to be fast evolving in all species. We can therefore model a branch length as the product of two rate components: one is gene-specific (shared across all species) and the other is species-specific, customized to a particular species.
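As a toy illustration of this decomposition (not the actual SPIDIR estimation procedure, which fits parametric rate distributions), one can approximate a matrix of observed branch lengths by a product of per-gene and per-species rates using simple alternating least squares:

import numpy as np

def decompose_rates(B, n_iter=50):
    """Toy decomposition of branch lengths B[g][s] (gene g, species branch s)
    into a gene-specific scale times a species-specific rate, fit by
    alternating least squares on the multiplicative model B ~ gene * species."""
    gene = np.ones(B.shape[0])
    species = B.mean(axis=0)                      # start from average branch lengths
    for _ in range(n_iter):
        # best per-gene scale given the current species rates
        gene = (B * species).sum(axis=1) / (species ** 2).sum()
        # best per-species rate given the current gene scales
        species = (B * gene[:, None]).sum(axis=0) / (gene ** 2).sum()
    species *= gene.mean()                        # fix the overall scale ambiguity
    gene /= gene.mean()
    return gene, species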
This method greatly improves reconstruction accuracy.
With the advent of next-gen sequencing, it is becoming economical to sequence the genomes of many individuals within a population. In order to make sense of how alleles spread through a population, it’s helpful to have a model to compare data against. The Wright-Fisher reproduction model has filled this role for the past 70 years.
The Wright-Fisher Model
Like HMMs, Wright-Fisher is a Markov process: at each step, the system randomly progresses, and the current state of the system depends only on the previous state. In this case, state transitions represent reproduction. By modeling the transmission of chromosomes to offspring, we can study genetic drift.
The model makes a number of simplifying assumptions:
1. Population size, N, is constant at each generation.
2. Only members of the same generation reproduce (no overlap).
3. Reproduction occurs at random.
4. The gene being modeled only has 2 alleles.
5. Genes undergo neutral selection.
Note that Wright-Fisher is not an appropriate choice if you’re trying to model the change in frequency of a gene that is positively or negatively selected for. If we use Wright-Fisher to model the chromosomes of diploid individuals, the population size of the model becomes 2N.
In English, here’s how Wright-Fisher works:
At every generation, for each child, we randomly select one of the parents (with replacement). The allele of the child becomes that of the randomly selected parent.
We repeat this process for many generations, with the children serving as the new parents, ignoring the ordering of chromosomes.
It really is that simple. To determine the probability of k copies of an allele existing in the child generation when it had a frequency of p in the parent generation, we can use this formula:
$\left(\begin{array}{c} 2 N \ k \end{array}\right) p^{k} q^{2 N-k}$
Here, q = (1-p). It is the frequency of non-p alleles in the parent generation.
Now we can begin to explore such questions as: how probable is it and how many generations is it expected to take for a given allele to become fixed, meaning the allele is present in every member of the population?
The expected time (in generations) for fixation, given the assumptions made by Wright-Fisher, is proportional to 4NE, where NE is the effective population size.
Again, it’s important to keep in mind the limitations of this model and ask if it actually makes sense for the system you’re trying to represent. Consider how you could tweak the proposed model to account for a selection coefficient ranging between -1 (lethal negative selection) and 1 (strong positive selection).
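A minimal simulation sketch, with an optional selection coefficient s as suggested above (s = 0 recovers the neutral model; all names here are our own):

import random

def wright_fisher(N, p0, generations, s=0.0):
    """Simulate allele frequency under Wright-Fisher for 2N chromosomes.
    `p0` is the starting frequency of allele A and `s` is an optional
    selection coefficient: each child picks allele A with probability
    proportional to p * (1 + s)."""
    two_n = 2 * N
    count = round(p0 * two_n)
    trajectory = [count / two_n]
    for _ in range(generations):
        p = count / two_n
        # weight allele A by (1 + s) relative to the alternative allele
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        count = sum(random.random() < p_sel for _ in range(two_n))
        trajectory.append(count / two_n)
        if count in (0, two_n):                  # allele lost or fixed
            break
    return trajectory

# Example: neutral drift in a population of N = 50 diploids starting at p = 0.2.
print(wright_fisher(50, 0.2, 200)[-1])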
The Coalescent Model
The problem with the Wright-Fisher model is that it assumes you know the allele frequencies of the ancestral generation. When dealing with the genomes of present species, these quantities are unknown. The Coalescent Model solves this conundrum by thinking retrospectively. That is to say: we start with the alleles of the current generation, and work our way backwards in time. The basic Coalescent Model makes the same assumptions as Wright-Fisher. At each generation, we ask: what is the probability of two alleles coalescing, or sharing a parent, in the previous generation?
We can pose the probability of a coalescence event occurring exactly t generations ago as the probability of coalescence not occurring in any of the t-1 more recent generations, times the probability of it occurring at the t-th generation. This is equivalent to the expression:
$P_{c}(t)=\left(1-\frac{1}{2 N_{e}}\right)^{t-1}\left(\frac{1}{2 N_{e}}\right)$
Where Ne is the effective population size.
By approximating this geometric distribution as an exponential one: $P_{c}(t)=\frac{1}{2 N_{e}} e^{-\left(\frac{t-1}{2 N_{e}}\right)}$, we can determine the expected number of generations back until coalescence, which turns out to be 2Ne, with a standard deviation of 2Ne.
To ask about the coalescence of multiple lineages at a given generation, we must, as in Wright-Fisher, use a binomial distribution. The probability of the first coalescence among k lineages occurring at generation t is:
$P\left(T_{k}=t\right)=\left(1-\left(\begin{array}{l} k \ 2 \end{array}\right) \frac{1}{2 N}\right)^{t-1}\left(\begin{array}{l} k \ 2 \end{array}\right) \frac{1}{2 N}$
And again, this can be approximated with an exponential distribution for sufficiently large N. The individual at which two lineages converge is referred to as the Most Recent Common Ancestor. By continually moving backwards until all lineages coalesce, we end up with a new kind of tree! And by comparing the tree resulting from coalescence with a gene tree we’ve constructed, discrepancies between the two may signal that certain assumptions of the Coalescent Model have been violated. Namely, selection may be occurring.
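The waiting times themselves are easy to simulate directly from the geometric waiting-time distribution given above (a minimal sketch with our own identifiers):

import random

def coalescent_times(k, N):
    """Sample the waiting times (in generations) between successive coalescence
    events for k lineages in a population of 2N chromosomes."""
    times = []
    while k > 1:
        p = (k * (k - 1) / 2) / (2 * N)    # per-generation coalescence probability
        t = 1
        while random.random() >= p:        # geometric waiting time until a coalescence
            t += 1
        times.append(t)
        k -= 1                             # one pair of lineages has merged
    return times

# The total depth of the sampled tree approaches 4N generations for large samples.
sample = coalescent_times(k=10, N=1000)
print(sum(sample), "generations back to the MRCA of the sample")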
The Multispecies Coalescent Model
We can take this idea one step further and track coalescence events across multiple species. Here, each genome of an individual species is treated as a lineage.
Note that there is a lag time between the separation of two populations and the time at which two gene lineages coalesce into a common ancestor. Also note how the rate of coalescence slows down as N gets bigger, and how coalescence may fail to complete along short branches.
In the image above, deep coalescence is depicted in light blue for three lineages. The species and gene trees here are incongruent since C and D are sisters in the gene tree but not in the species tree.
There is a $\frac{2}{3}$ chance that incongruence will occur because once we get to the light blue section, Wright-Fisher is memoryless and there is only a $\frac{1}{3}$ chance that the gene tree will be congruent. The effect of incongruence is called Incomplete Lineage Sorting (ILS). By measuring the frequency at which ILS occurs, we gain insight into unusually large populations or unusually short branch lengths within the species tree.
You can build a maximum parsimony species tree based on the notion of minimizing the number of ILS events rather than minimizing implied duplication/loss events as covered previously. It is even possible to combine these two methods to, ideally, create a phylogeny that is more accurate than either of them would be individually.
27.10 What Have We Learned
In this chapter, we drew conclusions regarding the relationship between gene trees and species trees. We then explored methods using gene trees to develop more accurate species trees and vice versa, involving mutation rates specific to both genes and species. The Wright-Fisher Model, as well as the Coalescent Model, helped us further interpret these mutation rates and understand the dynamics of allele frequencies within a population.
27.9 Further Reading
• Paper on discovering Whole Genome Duplication event in yeast: http://www.nature.com/nature/journal...ature02424.pdf
Bibliography
[1] O. Akerborg, B. Sennblad, L. Arvestad, and J. Lagergren. Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci, 106(14):5714–5719, Apr 2009.
[2] Zmasek C.M. and Eddy S.R. Analyzing proteomes by automated phylogenomics using resampled inference of orthologs. BMC Bioinformatics, 3(14), 2002.
[3] Li H, Coghlan A, Ruan J, Coin LJ, Heriche JK, Osmotherly L, Li R, Liu T, Zhang Z, Bolund L, Wong GK, Zheng W, Dehal P, Wang J, and Durbin R. Treefam: a curated database of phylogenetic trees of animal gene families. Nucleic Acids Res, 34, 2006.
[4] Arvestad L., Berglund A., Lagergren J., and Sennblad B. Bayesian gene/species tree reconciliation and orthology analysis using MCMC. Bioinformatics, 19 Suppl 1, 2003.
[5] M. D. Rasmussen and M. Kellis. A Bayesian approach for fast and accurate gene tree reconstruction. Mol Biol Evol, 28(1):273–290, Jan 2011.
[6] Matthew D. Rasmussen and Manolis Kellis. Accurate gene-tree reconstruction by learning gene and species-specific substitution rates across multiple complete genomes. Genome Res, 17(12):1932–1942, Dec 2007.
[7] C.E.V. Storm and E.L.L. Sonnhammer. Automated ortholog inference from phylogenetic trees and calculation of orthology reliability. Bioinformatics, 18(1):92–99, Jan 2002.
[8] Hollich V., Milchert L., Arvestad L., and Sonnhammer E. Assessment of protein distance measures and tree-building methods for phylogenetic tree reconstruction. Mol Biol Evol, 22:2257–2264, 2005.
[9] I. Wapinski, A. Pfeffer, N. Friedman, and A. Regev. Automatic genome-wide reconstruction of phylogenetic gene trees. Bioinformatics, 23(13):i549–i558, 2007.