Humans share 99.9% of their genetic information with one another, and are 99% similar to chimpanzees. Although humans have less genetic diversity than many other species [? ], polymorphisms in populations can nonetheless lead to differences in disease risk. Learning about the 0.1% difference between humans can be used to understand population history, trace lineages, predict disease, and analyze natural selection trends. In this lecture, Dr. David Reich of Harvard Medical School describes three historic examples of gene flow between human populations: gene flow between Africans and Europeans due to the slave trade, Indian intermixing due to migration, and interbreeding between Neanderthals, Denisovans and modern humans of Western Eurasian descent.

28.02: Quick Survey of Human Genetic Variation

In the human genome, there is generally a polymorphism every 1000 bases, though there are regions of the genome where this rate can quadruple. These Single Nucleotide Polymorphisms (SNPs) are one manifestation of genetic variation. When SNPs occur, they segregate according to recombination rates, advantages or disadvantages of the mutation, and the population structure that exists and continues during the lifespan of the SNP. Following a genetic mixing event, for example, one initially sees entire chromosomes, or close to entire chromosomes, coming from each constituent. As generations pass, recombination splits the SNP haplotype blocks into smaller pieces. The rate of change of the length of these blocks, then, depends on the rate of recombination and the stability of the recombination product. Therefore, the length of conserved haplotypes can be used to infer the age of a mutation or its selection. An important consideration, however, is that the rate of recombination is not uniform across the genome; rather, there are recombination hot spots that can skew the measure of haplotype age or selectivity. This makes the haplotype blocks longer than expected under a uniform model.

Every place in the genome can be thought of as a tree when compared across individuals. Depending on where you look within the genome, the tree obtained from one set of SNPs will differ from the tree obtained from another. The trick is to use the data that we have available on SNPs to infer the underlying trees, and then the overarching phylogenetic relationships. For example, the Y chromosome undergoes little to no recombination and thus can produce a highly accurate tree as it is passed down from father to son. Likewise, we can look at mitochondrial DNA passed down from mother to child. While these trees can have high accuracy, other autosomal trees are confounded by recombination and are thus less accurate predictors of phylogenetic relationships. Gene trees are best made by looking at areas of low recombination, as recombination mixes trees. In general, there are about 1 to 2 recombinations per generation. Humans show linkage over blocks of about 10,000 base pairs, reflecting a history of roughly 10,000 generations. Fruit fly linkage equilibrium blocks, on the other hand, are only a few hundred bases.

Fixation of an allele will occur over time, proportional to the size of the population. For a population of about 10,000, it will take about 10,000 years to reach that point. When a population grows, the effect of genetic drift is reduced. Curiously, the variation observed in humans looks like what would be expected from an effective population size of about 10,000.
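To make the relationship between drift and population size concrete, here is a minimal, hypothetical Wright-Fisher-style simulation (binomial resampling of allele counts each generation); the population sizes, starting frequency, and replicate count are illustrative choices, not values from the text.

import numpy as np

def generations_to_fixation(N, p0=0.5, rng=None, max_gen=1_000_000):
    """Simulate neutral drift of one allele in a diploid population of size N.

    Each generation, the 2N gene copies of the next generation are drawn
    binomially from the current allele frequency.  Returns the number of
    generations until the allele is lost (frequency 0) or fixed (frequency 1).
    """
    rng = rng or np.random.default_rng()
    count = int(2 * N * p0)
    for gen in range(1, max_gen + 1):
        count = rng.binomial(2 * N, count / (2 * N))
        if count == 0 or count == 2 * N:
            return gen
    return max_gen

rng = np.random.default_rng(0)
for N in (100, 1_000, 10_000):
    times = [generations_to_fixation(N, rng=rng) for _ in range(20)]
    # mean time to loss or fixation grows roughly linearly with N,
    # i.e. drift is weaker in larger populations
    print(N, sum(times) / len(times))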
If long haplotypes are mapped to genetic trees, approximately half of the depth is on the first branch; most morphology changes are deep in the tree because there was more time to mutate. One simple model of mutation without natural selection is the Wright-Fisher neutral model, which utilizes binomial sampling. In this model, a SNP will either reach fixation (frequency 1) or die out (frequency 0). In the human genome, there are 10-20 million common SNPs. This is less diversity than chimpanzees have, implying that humans are genetically closer to one another. With this genetic similarity in mind, comparing human sub-populations can give information about common ancestors and suggest historical events.

The similarity between two sub-populations can be measured by comparing allele frequencies in a scatter plot. If we plot the frequencies of SNPs across different populations on a scatterplot, we see more spread between more distant populations. Such a plot shows, for example, the relative dissimilarity of European American and American Indian populations along with the greater similarity of European American and Chinese populations. The plots indicate that there was a divergence in the past between Chinese and Native Americans, evidence for the North American migration bottleneck that has been hypothesized by archaeologists. The spread among different populations within Africa is quite large. We can measure spread by the fixation index (Fst), which describes this variance.

Several studies have shown that unsupervised clustering of genetic data can recover self-selected labels of ethnic identity. [3] Rosenberg's experiment used a Bayesian clustering algorithm. They took a sample of 1000 people (50 populations, 20 people per population) and clustered them by their SNP genotypes, without tagging any individual with their population, so they could see how the algorithm would cluster without knowledge of ethnicity. They tried many different numbers of clusters to find the optimal number. With 2 clusters, East Asians and non-East Asians were separated. With 3 clusters, Africans were separated from everyone else. With 4, East Asians and Native Americans were separated. With 5, the smaller sub-populations began to emerge.

When waves of humans left Africa, genetic diversity decreased; the small numbers of people in the groups that left Africa allowed for serial founder events to occur. These serial founder events led to the formation of sub-populations with less genetic diversity. This founder effect is demonstrated by the fact that genetic diversity decreases moving out of Africa and that West Africans have the highest diversity of any human sub-population.

28.03: African and European Gene Flow

The Atlantic Slave Trade took place from the 16th century to the 19th century, and moved about 5 million people from Africa to the Americas. Most African-Americans today have a mixture of roughly 80% African and 20% European heritage. When two parents of different ethnicities have children, their children will inherit one chromosome from each parent, and their grandchildren will inherit chromosomes that are a mosaic of the two ethnicities due to recombination. As time passes, the increasing number of recombination events will decrease the length of the "African" or "European" stretches of DNA. Recombination events are not spread evenly throughout the chromosomes, but happen at hotspots.
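Because each generation of recombination shortens these ancestry stretches, their mean length can be turned into a rough date for the admixture. The sketch below assumes a single admixture pulse and a uniform recombination map (which, as noted above, hotspots violate); the example tract length is hypothetical.

def generations_since_admixture(mean_tract_cm):
    """Rough admixture-date estimate from mean ancestry-tract length.

    Under a simple single-pulse model, after g generations the ancestry
    tracts are approximately exponentially distributed with mean length
    about 100/g centimorgans, so g is approximately 100 / (mean length in cM).
    """
    return 100.0 / mean_tract_cm

# hypothetical example: tracts averaging ~15 cM suggest admixture
# on the order of 100 / 15 ≈ 7 generations ago
print(round(generations_since_admixture(15.0), 1))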
African and European DNA have different hot spots, which could be due to differences in the amino acid composition of PRDM9, a histone H3K4 trimethyltransferase that is essential for meiosis. Differences in disease susceptibility can be predicted for African and European populations. With sequencing, this knowledge can also be applied to mixed populations. For example, Africans have a higher risk of prostate cancer, which is directly linked to an area on chromosome 8 that maps to a proto-oncogene [? ]. If a mixed individual has the African sequence in that area, he or she will have the increased risk, but if the individual has the European sequence, he or she will not have an increased risk. The same approach can be applied to breast cancer, colon cancer, multiple sclerosis, and other diseases.
Genetic evidence suggests that modern populations on the Indian subcontinent descended from two different ancestral populations that mingled about 4,000 years ago. SNP array data was collected from about 500 people from 73 Indian groups with different language families [? ]. A principal component analysis plot reveals that the Dravidian/Indo-European language groups and the Austro-Asiatic language groups fall in two different clusters, which suggests they have different lineages. Within the Dravidian/Indo-European language groups, there is a gradient of relatedness to West Eurasian groups. The same mosaic technique used in the African/European intermixing study was used to estimate the date of mixture. The Indian population is a mixture of a Central Asian/European group and another group most closely related to the people of the Andaman Islands. The chunk size of the DNA belonging to each group suggests a mixture about 100 generations old, or 2,000 to 4,000 years ago. Many groups have this mixed heritage, but mixture stopped after the creation of the caste system. Knowledge of the heritage of genes can predict diseases. For example, a South Asian mutation in myosin binding protein C causes a seven-fold increase in heart failure. Many ethnic groups are endogamous and have a low genetic diversity, resulting in a higher prevalence of recessive diseases.

Past surveys in India have studied such aspects as anthropometric variation, mtDNA, and the Y chromosome. The anthropometric study looked at significant differences in physical characteristics between groups separated by geography and ethnicity. The results showed variation much higher than that of Europe. The mtDNA study was a survey of maternal lineage, and the results suggested that there was a single Indian tree such that the age of a lineage could be inferred from the number of mutations. The data also showed that Indian populations were separated from non-Indian populations at least 40,000 years ago. Finally, the Y chromosome study looked at paternal lineage and showed a more recent similarity to Middle Eastern men and dependencies on geography and caste. This data conflicts with the mtDNA results. One possible explanation is that there was a more recent male migration. Either way, the genetic studies done in India have served to show its genetic complexity. The high genetic variation, dissimilarity with other samples, and difficulty of obtaining more samples led to India being left out of HapMap, the 1000 Genomes Project, and the HGDP.

In David Reich and collaborators' study of India, 25 Indian groups were chosen to represent various geographies, language roots, and ethnicities. The raw data included five samples for each of the twenty-five groups. Even though this number seems small, the number of SNPs from each sample carries a lot of information: approximately five hundred thousand markers were genotyped per individual. Looking at the data to emerge from the study, if Principal Components Analysis is used on data from West Eurasians and Asians, and the Indian populations are compared using the same components, the India Cline emerges. This shows a gradient of similarity that might indicate a staggered divergence of Indian populations and European populations.

Almost All Mainland Indian Groups are Mixed

Further analysis of the India Cline phenomenon produces interesting results. For instance, some Pakistani sub-populations have ancestry that also falls along the India Cline.
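For readers who want to see what this kind of principal component analysis looks like computationally, here is a minimal sketch; the genotype matrix is randomly simulated for two hypothetical groups and merely stands in for real SNP array data.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical genotype matrix: rows = individuals, columns = SNPs,
# entries = copies of the derived allele (0, 1 or 2).  Two groups are
# simulated with slightly perturbed allele frequencies.
n_snps = 2000
freqs_a = rng.uniform(0.05, 0.95, n_snps)
freqs_b = np.clip(freqs_a + rng.normal(0, 0.1, n_snps), 0.01, 0.99)
geno = np.vstack([rng.binomial(2, freqs_a, (50, n_snps)),
                  rng.binomial(2, freqs_b, (50, n_snps))])

# Standard PCA: mean-centre each SNP, then take the top singular vectors.
centred = geno - geno.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
pcs = u[:, :2] * s[:2]      # coordinates of each individual on PC1 and PC2
print(pcs[:3])              # the two simulated groups separate along PC1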
Populations can be projected onto the principal components of other populations: South Asians projected onto Chinese and European principal components produce a linear effect (the India Cline), while Europeans projected onto South Asian and Chinese principal components do not. One interpretation is that Indian ancestry shows more variability than the other groups. A similar variability assessment appears when comparing African to non-African populations. Two tree hypotheses emerge from this analysis: (1) there were serial founder events in India's history, or (2) there was gene flow between ancestral populations.

The authors developed a formal four population test to test ancestry hypotheses in the presence of admixture or other confounding effects. The test takes a proposed tree topology and sums over all SNPs the product $(p_{1}-p_{2})(p_{3}-p_{4})$, where $p_{i}$ is the allele frequency in population $i$. If the proposed tree is correct, the correlation will be 0 and the populations in question form a clade. This method is resistant to several problems that limit other models. A complete model can be built to fit history. The topology information from the admixture graphs can be augmented with Fst values through a fitting procedure. This method makes no assumptions about population split times, expansions and contractions, or duration of gene flow, resulting in a more robust estimation procedure. Furthermore, estimating the mixture proportions using the 4-population statistic gives error estimates for each of the groups on the tree. Complicated history does not factor into this calculation, as long as the topology as determined by the 4-population test is valid.

These tests and the cline analysis allowed the authors to determine the relative strength of Ancestral North Indian and Ancestral South Indian ancestry in each representative population sample. They found that high Ancestral North Indian ancestry is correlated with traditionally higher caste and certain language groupings. Furthermore, Ancestral North Indian (ANI) and Ancestral South Indian (ASI) ancestry are each as different from Chinese as from European ancestry.

Population structure in India is different from Europe

Population structure in India is much less correlated with geography than in Europe. Even after correcting populations for language, geographic, and social status differences, the Fst value is 0.007, about 7 times that of the most divergent populations in Europe. An open question is whether this could be due to missing (largely India-specific) SNPs on the genotyping arrays. This is because the set of targeted SNPs was identified primarily from the HapMap project, which did not include Indian sources. Most Indian genetic variation does not arise from events outside India. Additionally, consanguineous marriages cannot explain the signal. Many serial founder events, perhaps tied to the castes or precursor groups, could contribute. Analyzing a single group at a time, it becomes apparent that castes and subcastes have a lot of endogamy. The autocorrelation of allele sharing between pairs of samples within a group is used to determine whether a founder event occurred and its relative age. There are segments of DNA inherited from a founder, many indicating events more than 1000 years old. In most groups there is evidence for a strong, ancient founder event and subsequent endogamy. This stands in contrast to the population structure in most of Europe or Africa, where more population mixing occurs (less endogamy).
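A minimal sketch of the four-population statistic described above is given below; the arrays of allele frequencies are assumed inputs, and the block jackknife normally used to obtain standard errors is omitted for brevity.

import numpy as np

def f4_statistic(p1, p2, p3, p4):
    """Four-population statistic for a proposed tree ((Pop1, Pop2), (Pop3, Pop4)).

    p1..p4 are arrays of allele frequencies at the same SNPs in the four
    populations.  Under the proposed topology with no admixture, the
    allele-frequency differences on the two sides of the tree are
    uncorrelated, so the statistic is expected to be 0.
    """
    return float(np.mean((np.asarray(p1) - np.asarray(p2)) *
                         (np.asarray(p3) - np.asarray(p4))))

# toy usage with independent random frequencies (no shared drift): near 0
rng = np.random.default_rng(2)
freqs = rng.uniform(0.05, 0.95, (4, 10_000))
print(round(f4_statistic(*freqs), 4))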
These serial founder events and their resulting structure have important medical implications. The strong founder events followed by endogamy and some mixing have led to groups that have strong propensities for various recessive diseases. This structure means that Indian groups have a collection of prevalent diseases, similar to those already known in other founder groups such as Ashkenazi Jews or Finns. Unique variation within India means that linkages to disease alleles prevalent in India might not be discoverable using only non-Indian data sources. Only a small number of samples is needed from each group, but from more groups, to better map these recessive diseases. These maps can then be used to better predict disease patterns in India.

28.4.3 Discussion

Overall, strong founder events followed by endogamy have given India more substructure than Europe. All surveyed tribal and caste groups show a strong mixing of ANI and ASI ancestry, varying between 35% and 75% ANI identity. Estimating the time and mechanism of the ANI-ASI mixture is currently a high priority. Additionally, future studies will determine whether and how new techniques like the 4-population test and admixture graphs can be applied to other populations.
Dr. Reich worked with the Max Planck Institute as a population geneticist studying Neanderthal genetic data. This section will discuss the background of his research as part of the Neanderthal genome project, the draft sequence that they assembled, and the evidence that has been compiled for gene flow between modern humans and Neanderthals.

Background

Neanderthals are the only other hominid with a brain as large as Homo sapiens. Neanderthal fossils from 200,000 years ago have been found in West Eurasia (Europe and Western Asia), which is far earlier than Homo erectus. The earliest human fossils come from Ethiopia, dating to about 200,000 years ago. However, there is evidence that Neanderthals and humans overlapped in time and space between 135,000 and 35,000 years ago. The first place of contact could have been the Levant, in Israel. There are human fossils from 120,000 years ago, then a gap, Neanderthal fossils from about 80,000 years ago, another gap, and then human fossils again from 60,000 years ago. This is proof of an overlap in place, but not in time. In the upper paleolithic era, there was an explosion of populations leaving Africa (the migration about 60,000 to 45,000 years ago). In Europe after 45,000 years ago, there are sites where Neanderthals and humans exist side by side in the fossil record. Since there is evidence that the two species co-existed, was there interbreeding? This is a question that can be answered by examining population genomics. See Tools and Techniques for a discussion of DNA extraction from Neanderthals.

28.5.2 Evidence of Gene Flow between Humans and Neanderthals

1. A comparison test between Neanderthal DNA and human DNA from African and non-African populations demonstrates that non-African populations are more related to Neanderthals than African populations are. We can look at all the SNPs in the genome and see whether the human SNP from one population matches the Neanderthal SNP. When different human populations were compared to Neanderthals, it was found that French, Chinese, and New Guinean SNPs matched Neanderthal SNPs much more often than Nigerian Yoruba SNPs did. San Bushmen and Yoruba populations from Africa, despite being very distinct genetically, both had the same distance from Neanderthal DNA. This evidence suggests that human populations migrating from Africa interbred with Neanderthals.

2. A long-range haplotype study demonstrates that when the deepest branch of a haplotype tree was in non-African populations, the regions frequently matched Neanderthal DNA. African populations today are the most diverse populations in the world. When humans migrated out of Africa, diversity decreased due to the founder effect. From this history, one would expect that if you built a tree of relations, the deepest split would be African. To look for Neanderthal heritage, Berkeley researchers picked long-range sections of the genome and compared them among randomly chosen humans from various populations. The deepest branch of the tree constructed from such a haplotype is almost always from the African population. However, occasionally non-Africans have the deepest branch. The study found that there were 12 regions where non-Africans have the deepest branch. When this data was used to analyze the Neanderthal genome, it was found that 10 out of 12 of these regions in non-Africans matched Neanderthals more than they matched the human reference sequence (a compilation of sequences from various populations).
This is evidence of that haplotype actually being of Neanderthal origin.

3. Lastly, there is a bigger divergence than expected among humans. The average split between a Neanderthal and a human is about 800,000 years. The typical divergence between two humans is about 500,000 years. When looking at African and non-African sequences, regions of low divergence from Neanderthal material emerged in non-African sequences. The regions found were highly enriched for Neanderthal material (94% Neanderthal), which would increase the average divergence between humans (as the standard Neanderthal-human divergence is about 800,000 years).

Gene Flow between Humans and Denisovans

In 2010, scientists discovered a 50,000 year old finger bone in southern Siberia. The DNA in this Denisovan sample was not like any previously known human DNA. Denisovan mitochondrial DNA is an out-group to both Neanderthals and modern humans. (Mitochondrial DNA was used because it is about 1000 times more abundant per cell than nuclear DNA. The polymorphism rate is also about 10 times higher.) At the level of the nuclear genome, Denisovans are more closely related to Neanderthals than to modern humans. Using the same SNP matching technique from the Neanderthal example, it was discovered that Denisovan DNA matches New Guinean DNA more than Chinese DNA or European DNA. It is estimated that Denisovans contributed about 5% of the ancestry of New Guineans today. A principal component analysis projection (see figure) of relatedness to chimpanzees, Neanderthals, and Denisovans shows that non-African populations are more related to Neanderthals, and New Guinean/Bougainvillian populations are more related to Denisovans. This evidence suggests a model for human migration and interbreeding. Humans migrated out of Africa and interbred with Neanderthals, then spread across Asia and interbred with Denisovans in Southeast Asia. It is less plausible that humans interbred with Denisovans in India because not all of the populations in Southeast Asia have Denisovan ancestry.

Analysis of High Coverage Archaic Genomes

High-coverage archaic genomes can tell us a lot about the history of hominid populations. A high coverage Altai Neanderthal sequence was acquired from a toe bone found in Denisova cave. From this sequence, we can look at the time to coalescence of the two copies of each chromosome to estimate the size of the population. Neanderthal DNA contains many long stretches of homozygosity, indicating a persistently small population size and inbreeding. For the Altai Neanderthal, one eighth of the genome was homozygous, about the expected level of inbreeding for the offspring of half-siblings. Applying the technique to non-African populations shows a bottleneck 50,000 years ago and a subsequent population expansion, which is consistent with the Out of Africa theory. Neanderthals and Denisovans also interbred, demonstrating the remarkable proclivity of hominids towards interbreeding. Although most of the Neanderthal genome is separated from the Denisovan genome by hundreds of thousands of years of divergence, at least 0.5% of the Denisovan genome has a much shorter distance from the Neanderthal genome, especially at immune genes. Denisovans most likely also have ancestry from an unknown archaic population unrelated to Neanderthals. An African sequence has a 23% match with Neanderthal DNA and a 47% match with Denisovan DNA, which is statistically significant.
If you stratify the D-statistic by the frequency of an allele in the population, you see an increasing slope and a sharp jump when you reach fixation, which most closely matches the predictions one would obtain from an unknown population flowing into Denisovans (see figure).

Discussion

The bottleneck caused by the migration from Africa is only one example of many. Scientists usually concentrate on the age and intensity of migration events and not necessarily the duration, but the duration is very important because long bottlenecks create a smaller range of diversity. One way to predict the length of a bottleneck is to determine whether any new variations arose during it, which is more likely during longer bottlenecks. The change in the range of diversity is also what helped create the different human sub-populations that became geographically isolated. This is just another way that population genomics can be useful for helping to piece together historical migrations. Genetic differences between species (here, within primates) can be used to help understand the phylogenetic tree from which we are all derived. We looked at the case study of comparisons with Neanderthal DNA, learned how ancient DNA samples are obtained, how sequences are found and interpreted, and how that evidence shows a high likelihood of interbreeding between modern humans (of Eurasian descent) and Neanderthals. Those very small differences between one species and the next, and within species, allow us to deduce a great deal of human history through population genetics.
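The SNP-matching comparisons in this section are usually summarized by the D-statistic (the "ABBA-BABA" test). Here is a minimal sketch; the site counts in the example are hypothetical.

def d_statistic(abba, baba):
    """D = (ABBA - BABA) / (ABBA + BABA) for a quartet such as
    (African, non-African, archaic genome, chimpanzee outgroup).

    ABBA counts sites where the non-African sample shares the derived allele
    with the archaic genome while the African sample carries the ancestral
    (chimpanzee) allele; BABA counts the reverse configuration.  With no gene
    flow the two classes are equally likely and D is near 0; an excess of
    ABBA sites (D > 0) indicates archaic gene flow into the non-African
    population.
    """
    return (abba - baba) / (abba + baba)

# hypothetical counts for illustration only
print(round(d_statistic(abba=105_000, baba=95_000), 3))   # 0.05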
Techniques for Studying Population Relationships

There are several different methods for studying population relationships with genetic data. The first general type of study utilizes both phylogeny and migration data. It fits phylogenies to Fst values, measures of sub-population heterozygosity (pioneered by Cavalli-Sforza and Edwards in 1967 [? ]). This method also makes use of synthetic maps and Principal Components Analysis. [2] The primary downside to analyzing population data this way is uncertainty about the results. There are mathematical and edge effects in the data processing that cannot be predicted. Also, certain groups have shown that separate, bounded mixing populations can produce significant-seeming principal components by chance. Even when the results of such a study are correct, then, they remain uncertain. The second method of analyzing sub-population relationships is genetic clustering. Clusters can be formed using self-defined ancestry [1] or the STRUCTURE program. [3] This method is overused and can over-fit the data; the composition of the data set can bias the clustering results. Technological advances and increased data collection, though, have produced data sets that are 10,000 times larger than before, meaning that most specific claims can be disproved by some subset of data. So in effect, many models that are predicted either by phylogeny and migration or by genetic clustering will be disproved at some point, leading to large-scale confusion of results. One solution to this problem is to use a simple model that makes a statement that is both useful and has a lower probability of being falsified.

Extracting DNA from Neanderthal Bones

Let's take a look at how you go about finding and sequencing DNA from ancient remains. First, you have to obtain a bone sample with DNA from a Neanderthal. Human DNA and Neanderthal DNA are very similar (we are more similar to them than we are to chimps), so when sequencing short reads from very old DNA, it is impossible to tell from a read alone whether the DNA is Neanderthal or human. The cave where the bones were found is first classified as human or non-human using trash or tools as an identifier, which helps predict the origin of the bones. Even if you have a bone, it is still very unlikely that you have any salvageable DNA. In fact, 99% of the sequence of Neanderthals comes from only three long bones found at one site: the Vindija cave in Croatia (5.3 Gb, 1.3x full coverage). Next, the DNA is sent to an ancient-DNA lab. Since they are 40,000 year old bones, there is very little DNA left in them, so they are first screened for DNA. If DNA is found, the next question is whether it is primate DNA. Usually it is DNA from microbes and fungi that live in soil and digest dead organisms; only about 1-10% of the DNA in old bones is primate DNA. If it is primate DNA, is it contamination from the humans (archeologists or lab techs) handling it? Only about one out of 600 bp differs between human and Neanderthal DNA. The reads from a 40,000 year old bone sample are 30-40 bp long and are almost always identical between a human and a Neanderthal, so it is difficult to distinguish them. In one instance, 89 DNA extracts were screened for Neanderthal DNA, but only 6 bones were actually sequenced (sequencing requires a lack of contamination and a high enough amount of DNA). The process of retrieving the DNA requires drilling beneath the bone surface (to minimize contamination) and taking samples from within.
For the three long bones, less than 1 gram of bone powder could be obtained. The DNA is then sequenced and aligned to a reference chimp genome. It is mapped to the chimp rather than to a particular human because mapping to a human might introduce bias if you are looking to see how the sequence relates to specific human sub-populations. Most successful finds have been in cool limestone caves, where it is dry and cold and perhaps a bit basic. The best chance of preservation occurs in permafrost areas. Very little DNA is recoverable from the tropics: the tropics have a great fossil record, but DNA is much harder to obtain. Since most bones don't yield enough good DNA, scientists have to screen samples over and over again until they eventually find a good one.

Reassembling Ancient DNA

DNA extracted from Neanderthal bones yields short reads, about 37 bp on average. There are many gaps and errors caused by degradation of the DNA over time. It is difficult to tell whether a sequence is the result of contamination because humans and Neanderthals differ at only about one in a thousand bases. However, we can use DNA damage characteristic of ancient DNA to distinguish old and new DNA. Old DNA has a tendency towards C to T and G to A errors. The C to T error is by far the most common, and is seen about 2% of the time. Over time, a C loses its amine group (deamination), which causes it to resemble a U. When PCR is used to amplify the DNA for sequencing, the polymerase sees a U and replaces it with a T. In order to combat this error, scientists use a special enzyme that recognizes the U and cuts the strand instead of replacing it with a T. This helps to identify those sites. The G to A mutations are the result of seeing the same C to T change on the opposite strand. The average fragment size is quite small, and the error rate is still 0.1% - 0.3%. One way to combat the mutations is to note that on a double stranded fragment, the DNA is frayed towards the ends, where it becomes single stranded for about 10 bp. There tend to be high rates of mutations in the first and last 10 bases, but high quality DNA elsewhere, i.e. more C to T mutations at the beginning and G to A at the end. In chimps, the most common mutations are transitions (purine to purine, pyrimidine to pyrimidine), and transversions are much rarer. The same goes for humans. Since the G to A and C to T mutations are transitions, it can be determined that there are about 4x more mutations in the old Neanderthal DNA than if it were fresh, by comparing the number of transitions seen to the number of transversions (comparing Neanderthal to human DNA). Transversions have a fairly stable rate of occurrence, so that ratio helps determine how much error has occurred through C to T mutations.

We are now able to get human contamination of ancient DNA down to below 1%. When the DNA is brought in, as soon as it is removed from the bone it is bar-coded with a 7 bp tag. That tag allows you to avoid contamination at any later point in the experiment, but not earlier. Extraction is also done in a clean room with UV light, after the bone has been washed. Mitochondrial DNA is helpful for determining what percent of the sample is contaminated with human DNA. Mitochondrial DNA is filled with diagnostic sites because humans and Neanderthals are reciprocally monophyletic there. The contamination can be measured by counting the ratio of human-like to Neanderthal-like reads at those sites. In the Neanderthal DNA, contamination was present, but it was below 0.5%.
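As a concrete (and heavily simplified) sketch of that contamination estimate, the function below just takes counts of reads carrying the modern-human base versus the Neanderthal base at such diagnostic mtDNA positions; the read counts in the example are hypothetical.

def contamination_estimate(human_like_reads, neanderthal_like_reads):
    """Estimate present-day human contamination from diagnostic mtDNA sites.

    At positions where all modern humans carry one base and all Neanderthals
    carry another (reciprocally monophyletic sites), reads carrying the
    modern-human base in a Neanderthal library are attributed to
    contamination.
    """
    total = human_like_reads + neanderthal_like_reads
    return human_like_reads / total

# hypothetical read counts summed over all diagnostic sites
print(f"{contamination_estimate(4, 996):.2%}")   # 0.40%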
In sequencing, the error rate is almost always higher than the polymorphism rate. Therefore, most sites in the sequence that are different from humans are caused by sequencing errors. So we can't exactly learn about Neanderthal biology from the raw sequence generated, but we can analyze particular SNPs as long as we know where to look. The probability of a particular SNP being changed due to an error in sequencing is only about 1/300 to 1/1000, so usable data can still be obtained. After aligning the chimp, Neanderthal, and modern human sequences, we can measure the divergence of Neanderthals from humans and chimps. The Neanderthal sequence is only about 12.7% diverged from the human reference sequence, where divergence is expressed as a fraction of the distance back to the human-chimpanzee common ancestor. On the same scale, a French sample measures about 8% from the reference sequence, and a Bushman about 10.3%. What this says is that the Neanderthal DNA is within our range of variation as a species.
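To make clear how those percentages are scaled, here is a minimal sketch under the assumption that divergence is counted as pairwise differences over the same aligned positions and expressed relative to the human-chimp difference count; the numbers are hypothetical.

def divergence_fraction(diffs_sample_vs_human_ref, diffs_human_ref_vs_chimp):
    """Divergence of a sample from the human reference, expressed as a
    fraction of the human-chimpanzee divergence over the same alignment.
    Figures like the "12.7%" above are on this relative scale, not raw
    percent sequence difference.
    """
    return diffs_sample_vs_human_ref / diffs_human_ref_vs_chimp

# hypothetical difference counts over ~1 Mb of aligned sequence
print(f"{divergence_fraction(1_550, 12_200):.1%}")   # ≈ 12.7%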
Research Directions

Currently, the most exciting trend in the field is the availability of more and more data on both ancient and modern population genetics. With more samples, we can devise finer statistical tests and tease out more and more information about population composition and history.

Further Reading

Bibliography

[1] Bowcock AM, Ruiz-Linares A, Tomfohrde J, Minch E, Kidd JR, and Cavalli-Sforza LL. High resolution of human evolutionary history trees with polymorphic microsatellites. Nature, 368:455–457, 1994.

[2] Menozzi P. Synthetic maps of human gene frequencies in Europeans. Science, 201(4358):768–792, Sep 1978.

[3] Rosenberg N. Genetic structure of human populations. Science, 298(5602):2381–2385, 2002.

28.08: European Ancestry and Migrations

Tracing the Origins of European Genetics

Before 2014, it was believed that modern European genetics was primarily a mixture of two ancestral populations. The first population is what is known as the Western hunter-gatherer (WHG) population, and is considered the indigenous European population. The second population is known as the Early European farmer (EEF) population, and represents the rapid migration of farming peoples into Europe and the subsequent mixing of the new farming population with the original WHG population. However, in 2012, Patterson et al. [? ] used the principal component analysis in Figure 29.6 to show that European genetics does not match a mixture of only these two populations. Rather, genetic mixture analysis showed that some Europeans could only be explained as a mix of EEF/WHG populations with a third population whose genetics resembled Native Americans. While this does not mean that Native Americans are ancestral to Europeans, the study concluded that the most likely hypothesis was the mixture of these two known populations with an Ancient North Eurasian (ANE) population which migrated to both Asia and Europe, and is no longer found in North Eurasia. This study called this mystery population the "Ghost of North Eurasia." Two years later, in 2014, a sample was found confirming the existence of this population. Proclaiming that "the ghost is found", Raghavan, Skoglund et al. [? ] studied the newly found "Mal'ta" sample from Lake Baikal (currently in Southern Russia) and determined that it matched the ghost population predicted in 2012, and could explain the two dimensional variation in modern European populations. In particular, modern Europeans were found to be composed of 0-50% WHG, 32-93% EEF, and 1-18% ANE ancestry.

Migration from the Steppe

Given this new population as a source of European ancestry, the natural questions are when and why the members of the ANE population migrated to Europe. The answer, of course, can be teased out of further genetic data about the history of European populations. The first clue was found in mitochondrial DNA data, in a 2013 paper by Brandt, Haak et al. [? ], which found that there were two discontinuities in European mitochondrial DNA: one between the Mesolithic and the early Neolithic ages, and one between the mid Neolithic age and the Late Neolithic and Bronze ages. In 2014, studies of 9, and then 94, samples of ancient European individuals clearly showed the two migration events, visualized in Figure 29.7. The first migration, at roughly 6500 BCE, was a migration of the EEF population, which replaced the existing WHG population at a rate of between 60 and 100%.
The second migration was a migration of steppe pastoralists, known as the Yamnaya, which replaced the existing population at a rate of between 60 and 80%. In both cases, the migrating population takes over a large share of the genetic composition almost immediately, and then the previous population gradually resurges over several thousand years.

Screening for Natural Selection

Another application of DNA data to history is in tracing natural selection events. Essentially, one can look at the frequencies of various alleles in modern European DNA data and find cases where they do not match the ancestral mixing model of the population. Such cases will tend to signify alleles that have been selected for or against since the ancestral mixing events occurred. The easiest to identify and best-known example of such a trait is lactase persistence. The current prevalence of this trait is well above any of the levels seen in the ancestral populations, suggesting that it underwent positive selection (due to the domestication and milking of animals) after the ancestral mixing events. Several other traits can also be detected as candidates for selection. Another straightforward example is skin pigmentation. More interesting is the tale of height selection shown by the genetics of Northern and Southern Europeans. In particular, the data show that two distinct selection effects occurred. First, the early farmers of Southern Europe underwent selection for decreasing height between 8000 and 4000 years ago. Second, the peoples of Northern Europe (modern Scandinavians, etc.) underwent positive selection for height around the same time period and through to the present. While the anthropological explanations of these effects are disputed, the effects themselves are shown clearly in the genetic data.
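A minimal sketch of this kind of screen is shown below: compare an allele's observed modern frequency with the frequency predicted by the ancestral mixture. The ancestral frequencies and mixture weights in the example are hypothetical placeholders, and a real test would also model drift and sampling noise.

import numpy as np

def mixture_residual(observed, ancestral_freqs, weights):
    """Observed modern allele frequency minus the frequency predicted by
    the ancestral mixture model.

    ancestral_freqs: allele frequencies in the source populations
    (e.g. WHG, EEF, ANE); weights: their estimated mixture proportions
    (summing to 1).  A large positive residual, as for the
    lactase-persistence allele, suggests positive selection since admixture.
    """
    predicted = float(np.dot(weights, ancestral_freqs))
    return observed - predicted

# hypothetical numbers for a lactase-persistence-like allele
print(round(mixture_residual(observed=0.70,
                             ancestral_freqs=[0.05, 0.10, 0.20],
                             weights=[0.25, 0.55, 0.20]), 3))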
For centuries, biologists had to rely on morphological and phenotypic properties of organisms in order to infer the tree of life and make educated guesses about the evolutionary history of species. Only recently has the ability to cheaply sequence entire genomes and find patterns in them transformed evolutionary biology. Sequencing and comparing genomes at the molecular level has become a fundamental tool that allows us to gain insight into much older evolutionary history than before, but also to understand evolution at a much finer resolution of time. With these new tools, we can not only learn the relationships between distant clades that separated billions of years ago, but also understand the present and recent past of species and even of different populations within a species.

In this chapter we will discuss the study of human genetic history and recent selection. The methodological framework of this section builds largely on the concepts from previous chapters, most specifically the methods for association mapping of disease, phylogenetic constructs such as tree building among species and genes, and the history of mutations using coalescence. Having learned about these methods in the last chapter, we will now study how their application can inform us about the relationships and differences between human populations. Additionally, we will look at how these differences can be exploited to search for signals of recent natural selection and to identify disease loci. We will also discuss in this chapter what we currently know about the differences between human populations and describe some parameters we can infer that quantify population differences, using only the extant genetic variation we observe.

In the study of human genetic history and recent selection, there are two principal topics of investigation. The first is the history of population sizes. The second is the history of interactions between populations. Questions are often asked about these areas because the answers can often provide knowledge to improve the disease mapping process. Thus far, most research-based knowledge of human history has been obtained by investigating functionally neutral regions of the genome and assuming genetic drift. The reason that neutral regions are employed is that mutations are subject to positive, negative and balancing selection pressure when they take place in a functional region; hence investigating neutral regions provides a selection-unbiased proxy for the drift between species. In this chapter we will delve into some characteristics of the selection process in humans and look for patterns of human variation in terms of cross-species comparisons, comparisons of synonymous and non-synonymous mutations, and haplotype structure.

29.02: Population Selection Basics

Polymorphisms

Polymorphisms are differences in appearance amongst members of the same species. Many of them arise from mutations in the genome. These mutations, or genetic polymorphisms, can be characterized into different types.

Single Nucleotide Polymorphisms (SNPs)

• The mutation of only a single nucleotide base within a sequence. In most cases, these changes are without consequence. However, there are some cases where the mutation of a single nucleotide has a major effect.

• For example, sickle-cell anemia is caused by a mutation from A to T that causes a change from glutamic acid (GAG) to valine (GTG) in hemoglobin.
Variable Number Tandem Repeats

• When a short sequence is repeated multiple times, DNA polymerase can sometimes "slip", causing it to make either too many or too few copies of the repeat. The resulting polymorphism is called a variable number tandem repeat (VNTR).

• For example, Huntington's disease is caused by too many copies of the trinucleotide repeat CAG in the HTT gene. Having more than 36 repeats can lead to gradual muscle control loss and severe neurological degradation. Generally, the more repeats there are, the stronger the symptoms.

Insertion/Deletion

• Through faulty copying or DNA repair, insertions or deletions of one or multiple nucleotides can occur.

• If the insertion or deletion is inside an exon (the protein-coding region of a gene) and does not consist of a multiple of three nucleotides, a frameshift will occur.

• A prime example is deletions in the CFTR gene, which codes for chloride channels in the lungs; these may cause cystic fibrosis, in which the patient cannot clear mucus from the lungs, leading to infection.

Did You Know? DNA profiling is based on short tandem repeats (STRs). DNA is cut with certain restriction enzymes, resulting in fragments of variable length that can be used to identify an individual. Different countries use different (but often overlapping) loci for these profiles. In North America, a system based on 13 loci is used.

Allele and Genotype Frequencies

In order to understand the evolution of a species through analysis of alleles or genotypes, we must have a model of how alleles are passed on from one generation to another. It is of immense importance that the reader has a firm intuition for the Hardy-Weinberg principle and the Wright-Fisher model before continuing, so we provide here a short reminder of modelling the history of mutations via these methods.

First introduced over a hundred years ago, the Wright-Fisher model is a mathematical model of genetic drift in a population. Specifically, it describes the probability of obtaining k copies of a new allele of frequency p within a population of size N (the non-mutant allele having frequency q), and what its expected frequency will be in successive generations.

Hardy-Weinberg Principle

The Hardy-Weinberg principle states that allele and genotype frequencies within a population will remain constant unless there is an outside influence that pushes them away from that equilibrium. The Hardy-Weinberg principle is based on the following assumptions:

• The population observed is very large

• The population is isolated, i.e. there is no introduction of another subpopulation into the general population

• All individuals have equal probability of producing offspring

• All mating in the population is at random

• No random mutations occur in the population from one generation to the next

• Allele frequency drives future genotype frequency (the prevalent allele drives the prevalent genotype)

In a Hardy-Weinberg equilibrium, for two alleles A and a, occurring with probability p and q = 1 - p, respectively, the probabilities of a randomly chosen individual having the homozygous AA or aa genotypes (pp or qq, respectively) or the heterozygous Aa or aA genotype (2pq) are described by the equation:

$p^{2}+2 p q+q^{2}=1\nonumber$

This equation gives a table of probabilities for each genotype, which can be compared with the observed genotype frequencies using statistical tests such as the chi-squared test to determine if the Hardy-Weinberg model is applicable. Figure 29.1 shows the distribution of genotype frequencies at different allele frequencies.
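As a minimal illustration of that chi-squared comparison (with hypothetical genotype counts, and ignoring the usual continuity corrections):

def hardy_weinberg_chi2(n_AA, n_Aa, n_aa):
    """Chi-squared statistic comparing observed genotype counts with the
    Hardy-Weinberg expectations p^2, 2pq, q^2 (1 degree of freedom).
    """
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)       # allele frequency of A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical counts: compare against the 3.84 critical value (5%, 1 d.f.)
print(round(hardy_weinberg_chi2(380, 440, 180), 2))   # ≈ 6.94, equilibrium rejected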
In natural populations, the assumptions made by the Hardy-Weinberg principle will rarely hold. Natural selection occurs, small populations undergo genetic drift, populations are split or merged, etc. In nature, a mutation will always either disappear from the population (frequency = 0) or become prevalent in the species (frequency = 1) - this is called fixation; in general, 99% of mutations disappear. Figure 29.2 shows a simulation of two mutations' prevalence in a finite-sized population over time: both perform random walks, with one mutation disappearing and the other becoming prevalent. Once a mutation has disappeared, the only way for it to reappear is the introduction of a new mutation into the population. For humans, it is believed that a given mutation under no selective pressure should fixate to 0 or 1 (within, e.g., 5%) within a few million years. However, under selection this will happen much faster.

Wright-Fisher Model

Under this model the time to fixation is about 4N generations and the probability of fixation of a new allele is 1/(2N). In general, Wright-Fisher is used to answer questions related to fixation in one way or another. To make sure your intuitions about the method are absolutely clear, consider the following questions:

FAQ

Q: Say you have a total of 5 mutations on a chromosome among a population of size 30; on average, how many mutations will be present in the next generation if each entity produces only one child?

A: If each parent has only one offspring, then there will be, on average, 5 mutations in the next generation, because the expectation of allele frequencies is to remain constant, according to the Hardy-Weinberg equilibrium principle.

FAQ

Q: Is the Hardy-Weinberg equilibrium principle's assumption about constant allele frequency reasonable?

A: No, the reality is far more complex, as there is stochasticity in population size and selection at each generation. A more appropriate way to envision this is to imagine drawing alleles from a set of parents, with the number of alleles in the next generation varying with the size of the population. Hence the frequency in the next generation could very well go up or down. Note here that if the allele frequency goes to zero it will stay at zero. The probability at each successive generation is lower if the allele is under negative selection and higher if it is under positive selection. Hence if it is a beneficial mutation the fixation time will be shorter; if the mutation is deleterious, the fixation time will be longer. If there are no offspring with a given mutation, then there won't be any descendants with that mutation either. If one produces multiple offspring, however, who in turn produce multiple offspring of their own, then there is a greater chance that this allele frequency will rise.

FAQ

Q: Consider that the average human individual carries roughly 100 entirely unique mutations. When an individual produces offspring, we could expect that half (or 50) of those mutations may appear in the child, because in each sperm or egg cell 50 of those mutations will be present, on average. Hence the offspring of an individual are likely to inherit approximately 100 mutations, 50 from one parent and 50 from the other, in addition to their own unique mutations which come from neither parent. With this in mind, one might be interested in understanding what the chances are of some mutations appearing in the next generation if an individual produces, say, n children. How can one do this?
A: Hint: To compute this value, we assume that the allele originates in the founder on some arbitrary chromosome (chromosome 1, for example). Then we ask the question: how many copies of chromosome 1 exist in the entire population? At the moment, the size of the human population is about 7 billion, each person carrying two copies of chromosome 1.

The above questions and answers should make it painfully clear that the standard Hardy-Weinberg assumption of allele frequencies remaining constant from one generation to the next is violated in many natural cases, including migration, genetic mutation, and selection. In the case of selection, this issue is addressed by modifying the formal definition to include a term s, which measures the skew in genotypes due to selection. See Table 29.1 for a comparison of the original and selection-compensated versions:

Behavior | With only drift | With drift and selection
n in next generation | Mean: $n(=2 N p)$, Dist: Binomial$(2 N, p)$ | Mean: $2 N p \frac{1+s}{1+p s}$, Dist: Binomial$\left(2 N, p \frac{1+s}{1+p s}\right)$
Time to fixation | $4 N$ | $\frac{4 N}{1+\frac{3}{4} N|s|}\left(\frac{1+\frac{1}{2}(\ln N)|s|}{1+|s|}\right)$
Probability of fixation | $\frac{1}{2 N}$ | $\frac{1-e^{-2 s}}{1-e^{-4 N s}}$

Table 29.1: Comparison of the Wright-Fisher model with drift only versus drift and selection

The main point to take away from Table 29.1, and from this section of the chapter, is that whether you have selection or not, it is highly unlikely that a single new allele will fixate in a population. If you have a very small population, however, then the chances of an allele fixating are much better. This is often the case in human populations, where there are often small, interbred populations which allow mutations to fix after only a few generations, even if the mutation is deleterious in nature. This is precisely why we tend to see recessive deleterious Mendelian disorders in isolated populations.

Ancestral State of Polymorphisms

How can we determine, for a given polymorphism, which version was the ancestral state and which one is the mutant? The ancestral state can be inferred by comparing the genome to that of a closely related species (e.g. humans and chimpanzees) with a known phylogenetic tree. Mutations can occur anywhere along the phylogenetic tree. Sometimes mutations near the split fix differently in different populations (a "fixed difference"), in which case the entire populations differ in genotype. However, recent mutations will not have had enough time to become fixed, and a polymorphism will be present in one species but fully absent in the other, as simultaneous mutations in both species are very rare. In this case, the "derived variant" is the version of the polymorphism appearing after the split, while the ancestral variant is the version occurring in both species.

29.2.4 Measuring Derived Allele Frequencies

The frequency of the derived allele in the population can be easily calculated if we assume that the population is homogeneous. However, this assumption may not hold when there is an unseen divide between two groups that causes them to evolve separately, as shown in Figure 29.4. In this case the prevalence of the variants among subpopulations differs and the Hardy-Weinberg principle is violated. One way to quantify this difference is to use the fixation index (Fst) to compare subpopulations within a species. In reality, only a portion of the total heterozygosity in a species is found in a given subpopulation.
Fst estimates the reduction in heterozygosity (2pq for a locus with allele frequencies p and q) expected when two different populations are erroneously grouped together. Given a population having n alleles with frequencies $p_{i}$ where $(1 \leq i \leq n)$, the homozygosity G of the population is calculated as:

$G=\sum_{i=1}^{n} p_{i}^{2}\nonumber$

The total heterozygosity in the population is given by 1 - G.

$F_{st}=\frac{\text{Heterozygosity(total)}-\text{Heterozygosity(subpopulation)}}{\text{Heterozygosity(total)}}\nonumber$

In the case shown in Figure 29.4 there is no heterozygosity within either subpopulation, so Fst = 1. In reality the Fst will be small within one species; in humans, for example, it is only 0.0625. In practice, Fst is computed either by clustering sub-populations randomly or by using an obvious characteristic such as ethnicity or origin.
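Here is a minimal sketch of this Fst calculation for a single biallelic SNP, weighting subpopulations equally (a simplifying assumption); the frequencies in the examples are hypothetical.

def heterozygosity(freqs):
    """Expected heterozygosity 1 - sum(p_i^2) for the given allele frequencies."""
    return 1 - sum(p * p for p in freqs)

def fst(subpop_freqs):
    """Fst for one biallelic SNP, given the derived-allele frequency in each
    subpopulation (subpopulations weighted equally for simplicity).
    """
    h_sub = sum(heterozygosity([p, 1 - p]) for p in subpop_freqs) / len(subpop_freqs)
    p_total = sum(subpop_freqs) / len(subpop_freqs)
    h_total = heterozygosity([p_total, 1 - p_total])
    return (h_total - h_sub) / h_total

print(fst([0.0, 1.0]))            # fixed difference between subpopulations -> 1.0
print(round(fst([0.4, 0.6]), 3))  # modest differentiation -> 0.04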
In the simple models we have seen so far, alleles are assumed to be passed on independently of each other. While this assumption generally holds in the long term, in the short term we will generally observe that certain alleles are passed on together more frequently than expected. This is termed genetic linkage.

The law of independent assortment, also known as Mendel's second law, states: alleles of different genes are passed on independently from parent to offspring. When this "law" holds, there is no correlation between different polymorphisms, and the probability of a haplotype (a given set of polymorphisms) is simply the product of the probabilities of each individual polymorphism. In the case where the two genes lie on different chromosomes this assumption of independence generally holds, but if the two genes lie on the same chromosome, they are more often than not passed on together. Without genetic recombination events, in which segments of DNA on homologous chromosomes are swapped (crossing-over), the alleles of the two genes would remain perfectly correlated. With recombination, however, the correlation between the genes will be reduced over several generations. Over a suitably long time interval, recombination will completely remove the linkage between two polymorphisms, at which point they are said to be in linkage equilibrium. When, on the other hand, the polymorphisms are correlated, we have Linkage Disequilibrium (LD). The amount of disequilibrium is the difference between the observed haplotype frequencies and those predicted in equilibrium.

The linkage disequilibrium can be used to measure the difference between observed and expected assortments. If there are two alleles (1 and 2) at each of two loci (A and B), we can calculate haplotype probabilities and find the expected allele frequencies.

• Haplotype frequencies
- P(A1B1) = x11
- P(A1B2) = x12
- P(A2B1) = x21
- P(A2B2) = x22

• Allele frequencies
- P(A1) = x11 + x12
- P(A2) = x21 + x22
- P(B1) = x11 + x21
- P(B2) = x12 + x22

• D = x11 * x22 - x12 * x21

Dmax, the maximum value of D attainable with the given allele frequencies, is related to D in the following equation:

$D^{\prime}=\frac{D}{D_{\max }}\nonumber$

D' expresses the disequilibrium relative to the complete skew possible for the given alleles and allele frequencies. Dmax can be found by taking the smaller of the expected haplotype frequencies P(A1)P(B2) or P(A2)P(B1). If the two loci are in complete equilibrium, then D' = 0. If D' = 1, there is full linkage.

The key point is that relatively recent mutations have not had time to be broken down by crossing-overs. Normally, such a mutation will not be very common. However, if it is under positive selection, the mutation will be much more prevalent in the population than expected. Therefore, by carefully combining a measure of LD and derived allele frequency, we can determine if a region is under positive selection. Decay of linkage disequilibrium is driven by the recombination rate and time (in generations) and is exponential. For a higher recombination rate, linkage disequilibrium will decay faster in a shorter amount of time. However, the background recombination rate is difficult to estimate and varies depending on the location in the genome. Comparison of genomic data across multiple species can help in determining these background rates.

29.3.1 Correlation Coefficient r2

The correlation coefficient r2 answers how predictive an allele at locus A is of an allele at locus B:

$r^{2}=\frac{D^{2}}{P\left(A_{1}\right) P\left(A_{2}\right) P\left(B_{1}\right) P\left(B_{2}\right)}\nonumber$

As the value of r2 approaches 1, the alleles at the two loci become more correlated.
There may be linkage disequilibrium between two loci even if the alleles at those loci are hardly correlated at all. The correlation coefficient is particularly interesting when studying associations of diseases with genes, where knowing the genotype at locus A may not predict a disease whereas locus B does. There is also the possibility that neither locus A nor locus B is predictive of the disease alone, but loci A and B together are.
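The following sketch computes D, D' and r2 directly from the four haplotype frequencies defined above; the example frequencies are hypothetical and illustrate strong but incomplete LD.

def ld_measures(x11, x12, x21, x22):
    """D, D' and r^2 from the haplotype frequencies
    x11 = P(A1B1), x12 = P(A1B2), x21 = P(A2B1), x22 = P(A2B2).
    """
    pA1, pA2 = x11 + x12, x21 + x22
    pB1, pB2 = x11 + x21, x12 + x22
    D = x11 * x22 - x12 * x21
    # Dmax depends on the sign of D
    d_max = min(pA1 * pB2, pA2 * pB1) if D >= 0 else min(pA1 * pB1, pA2 * pB2)
    d_prime = D / d_max if d_max else 0.0
    r2 = D * D / (pA1 * pA2 * pB1 * pB2)
    return D, d_prime, r2

# hypothetical haplotype frequencies
print(ld_measures(0.45, 0.05, 0.05, 0.45))   # D = 0.2, D' = 0.8, r^2 = 0.64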
29.04: Natural Selection
In the mid-1800s the concept of evolution was not an uncommon idea, but it was not until Darwin and Wallace proposed natural selection as the mechanism driving evolution in nature that the theory of evolution gained widespread recognition. It took until 1948 for J.B.S. Haldane's malaria hypothesis to provide the first example of natural selection in humans. He showed a correlation between genetic mutations in red blood cells and the geographic distribution of malaria, and discovered that individuals carrying the specific mutation that causes sickle cell anaemia were also resistant to malaria. Lactose tolerance lasting into adulthood is another example of natural selection. Such explicit examples were hard to prove without genome sequences. With whole-genome sequencing readily available, we can now search the genome for regions with the same patterns as these known examples to identify further regions undergoing natural selection.

Genomic Signals of Natural Selection

• Ka/Ks: the ratio of non-synonymous to synonymous changes per gene
• Low diversity and many rare alleles over a region (e.g., Tajima's D, as with sickle-cell anemia)
• High (or low) derived allele frequency over a region (e.g., Fay and Wu's H)
• Differentiation between populations faster than expected from drift (measured with Fst)
• Long haplotypes: evidence of a selective sweep
• Exponential increase in the prevalence of a feature across successive generations
• Mutations that help a species prosper

Examples of Negative (Purifying) Selection

Across species we see negative selection of new mutations in conserved functional elements (exons, etc.). Within one species, new non-synonymous alleles tend to have lower allele frequencies than synonymous ones. Lethal alleles have very low frequencies.

Examples of Positive (Adaptive) Selection

• Like negative selection, positive selection is more likely in functional elements and non-synonymous alleles.
• Across species, a positively selected mutation in a conserved element might be the same over most mammals but change in a specific species, because the positively selected mutation appeared after speciation or caused speciation.
• Within a species, positively selected alleles are likely to differ in allele frequency (Fst) across populations. Examples include malaria resistance in African populations (29.6) and lactose persistence in European populations (29.7).
• Polygenic selection within a species can arise when a trait that depends on many genes is selected for. An example is human height, where 139 SNPs are known to be related to height. Most are not population-specific mutations but alleles found across all humans that are selected for in some populations more than others. (29.8)

Statistical Tests

• Long-range correlations (iHS, XP-EHH, EHH): if we tag genetic sequences in an individual based on their ancestry, we end up with a broken haplotype, where the number of breaks (color changes) is correlated with the number of recombinations and can tell us how long ago a particular ancestry was introduced.
• SWEEP: a program developed by Pardis Sabeti, Ben Fry, and Patrick Varilly. SWEEP detects evidence of natural selection by analyzing haplotype structure in the genome using the long-range haplotype (LRH) test. It looks for high-frequency alleles with long-range linkage disequilibrium, which hints at large-scale proliferation of a haplotype at a rate greater than recombination could break it away from its markers.
• High-Frequency Derived Alleles: look for large spikes in the frequency of derived alleles at specific positions.
• High Differentiation (Fst): look for large spikes in differentiation at specific positions (a toy per-SNP Fst scan is sketched at the end of this section).

Using these tests, we can find genomic regions under selective pressure. One problem is that a single SNP under positive selection allows nearby SNPs to piggy-back and ride along, so it is difficult to distinguish the SNP under selection from its neighbours with only one test. Under selection, all the tests are strongly correlated; in the absence of selection they are generally independent. Therefore, by employing a composite statistic built from all of these tests, it is possible to isolate the individual SNP under selection. Examples where a single SNP has been implicated in a trait:

• Chr15: skin pigmentation in Northern Europe
• Chr2: hair traits in Asia
• Chr10: unknown trait in Asia
• Chr12: unknown trait in Africa
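To make the differentiation signal concrete, here is a toy per-SNP Fst scan (not from the chapter). It uses the basic Wright formulation Fst = (HT - HS)/HT computed from allele frequencies in two populations, ignores sample-size corrections, and the frequencies and outlier threshold are invented.

```python
# Toy per-SNP Fst scan between two populations from allele frequencies.
# Uses the basic Wright formulation Fst = (H_T - H_S) / H_T and flags outliers;
# frequencies are illustrative, and no sample-size correction is applied.

def fst(p1, p2):
    """Fst for one biallelic SNP given allele frequencies p1, p2 in two populations."""
    p_bar = (p1 + p2) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)                              # pooled heterozygosity
    h_s = (2.0 * p1 * (1.0 - p1) + 2.0 * p2 * (1.0 - p2)) / 2.0    # mean within-population
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t


if __name__ == "__main__":
    # (position, frequency in population 1, frequency in population 2) -- toy values.
    snps = [(101, 0.50, 0.52), (202, 0.10, 0.12), (303, 0.05, 0.85), (404, 0.40, 0.38)]
    cutoff = 0.25  # arbitrary outlier threshold for this toy example
    for pos, p1, p2 in snps:
        f = fst(p1, p2)
        flag = "  <-- candidate for selection" if f > cutoff else ""
        print(f"SNP at {pos}: Fst = {f:.3f}{flag}")
```

In this toy scan only the SNP with a large frequency difference between the two populations stands out, which is exactly the pattern the Fst test looks for.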
29.05: Human Evolution
Not surprisingly, the scientific community has a long and somewhat controversial history of interest in recent population dynamics. Some of this interest was applied toward nefarious aims, such as scientific justifications for racism and eugenics, but these are increasingly the exception rather than the rule. Early studies of population dynamics were primitive in many ways. Quantifying the differences between human populations was originally performed using blood types, as they seemed to be phenotypically neutral, could be tested for outside of the body, and were polymorphic in many different human populations. Fast forward to the present, and the scientific community has realized that there are other glycoproteins beyond the A, B and O blood groups that are far more polymorphic in the population.

As science continued to advance and sequencing became a reality, researchers began sequencing the Y chromosome, mitochondrial DNA, and the microsatellite markers around them. What is special about those two types of genetic data? First and foremost, they are quite short, so they can be sequenced more easily than other chromosomes. Beyond just the size, the reason that the Y chromosome and mitochondrial DNA were of such interest is that they do not recombine and can therefore be used to easily reconstruct inheritance trees. This is precisely what makes them special relative to a short chunk of an autosome: we know exactly where each copy comes from, because we can trace the paternal or maternal lineage backward in time. This type of reconstruction does not work with other chromosomes. If one were to generate a tree using a certain chunk of chromosome 1 in a certain population, for instance, it would indeed form a phylogeny, but that phylogeny would be drawn from random ancestors in each of the family trees.

As sequencing continued to develop and grow more effective, the Human Genome Project was being proposed, and along with it there was a strong push to include some sort of diversity measure in genomic data. Technically speaking, it was easiest to use microsatellites for this diversity measure, because size polymorphisms can be read off a gel rather than requiring inspection of a sequence polymorphism. As a reminder, a microsatellite is a region of variable length in the human genome, often characterised by short tandem repeats. One source of such repetitive elements is retroelements inserting themselves into the genome, such as the Alu elements in the human genome. These elements sometimes become active and retrotranspose as insertion events, and one can trace when those insertion events happened in the human lineage. Hence, there was a push early on to assay these parts of the genome in a variety of different populations. The really attractive thing about microsatellites is that they are highly polymorphic and one can infer their rate of mutation. Hence, we can not only say that there is a certain relationship between populations based on these rates, but we can also say how long they have been evolving, when certain mutations occurred, and how long they have been on certain branches of the phylogenetic tree.

FAQ
Q: Can't this simply be done with SNPs?
A: You can't do it very easily with SNPs. You can get an idea of how old they are based on their allele frequency, but they are also going to be influenced by selection.

After the Human Genome Project came the HapMap (haplotype map) project, which looked at SNPs genome-wide.
We have discussed haplotypes in detail in prior chapters, where we learned the importance of HapMap in designing genotyping arrays that assay SNPs marking common haplotypes in the population.

The Effects of Bottlenecks on Human Diversity

Using this wealth of data across studies and a plethora of mathematical techniques has led to the realization that humans, in fact, have very low diversity given our census population, which implies a small effective population size. Utilizing the Wright-Fisher model, it is possible to work back from the level of diversity and the number of mutations we see in the population today to an estimate of the founding population size. When this computation is performed, it works out to around 10,000 (a toy Wright-Fisher simulation is sketched at the end of this passage).

FAQ
Q: Why is this so much smaller than our census population size?
A: There was a population bottleneck somewhere.

Most of the total variation between humans occurs within continents. One can measure how much diversity is explained by geography and how much is not; it turns out that most of it is not explained by geography. In fact, most common variants are polymorphic in every population, and if a common variant is unique to a given population, there probably has not been enough time for that to happen by drift alone. Recall what an unlikely process it is to reach a high allele frequency over the course of several generations by mere chance. Hence, we may interpret such a variant as a signal of selection. All of the evidence from comparing diversity patterns and tracing trees back to ancestral haplotypes converges on an Out-of-Africa hypothesis, which is the overwhelming consensus in the field and the lens through which we review all population genetic data.

Starting from the African founder population, studies have demonstrated that it is possible to model population growth using the Wright-Fisher model. These studies have shown that the growth rate we see in Asian and European populations is only consistent with large exponential growth after the out-of-Africa event. This helps us understand phenotypic differences between populations, since bottlenecks followed by exponential growth can lead to an excess of rare alleles. The present theory of human diversity states that there were secondary bottleneck events after the founding population migrated out of Africa. These founders were, at some earlier point, subject to an even smaller bottleneck event, which is now reflected in every human genome on the planet, regardless of immediate ancestry. It is possible to estimate how small the original bottleneck was by looking at differences between individuals of African and European origin, inferring the effects of the secondary bottleneck, and the period of exponential growth of the European population.

The other way of approaching bottleneck estimation is to inspect the allele frequency spectrum and build coalescent trees. In this way, one can take haplotypes across the genome and ask what the most recent common ancestor was by observing how the coalescence varies across the genome. For instance, one may guess that some haplotype was positively selected only recently given its length. An example of one such recent mutation in the European population is the lactase gene. Another example for the Asian population is the ER locus.
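The Wright-Fisher reasoning above can be made concrete with a toy simulation (not from the lecture): each generation, the 2N allele copies are resampled binomially from the previous generation's allele frequency, so a smaller effective population size drifts to fixation or loss sooner. The population sizes and starting frequency below are arbitrary.

```python
# Toy Wright-Fisher simulation: binomial resampling of an allele each generation.
# Illustrates why a small effective population size produces the drift and
# fixation behaviour described above; parameters are arbitrary.
import numpy as np

def wright_fisher(n_individuals, p0, max_generations=200_000, seed=0):
    """Return (generations until the allele fixes or is lost, final frequency)."""
    rng = np.random.default_rng(seed)
    two_n = 2 * n_individuals                     # diploid: 2N allele copies
    count = int(round(p0 * two_n))
    for generation in range(1, max_generations + 1):
        p = count / two_n
        if p in (0.0, 1.0):
            return generation, p
        # Each of the 2N copies in the next generation is an independent draw.
        count = rng.binomial(two_n, p)
    return max_generations, count / two_n


if __name__ == "__main__":
    for n in (1_000, 10_000):
        gens, outcome = wright_fisher(n_individuals=n, p0=0.5)
        fate = "fixed" if outcome == 1.0 else "lost"
        print(f"N = {n:>6}: allele {fate} after {gens} generations")
```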
There is a wealth of literature showing that when one draws a coalescence tree for most haplotypes, it reaches back well before the time we think speciation happened. This indicates that certain features have been kept polymorphic for a very long time. One can, however, look at the distribution of coalescence times across the whole genome and infer something about population history from it. If there was a recent bottleneck in a population, it will be reflected by many common ancestors being very recent, whereas the more ancient lineages are those that survived the bottleneck. One can take the distribution of coalescent times and run simulations of how effective population size varied with time. The model for doing this type of study was outlined by Li and Durbin. Figure 29.11 from their study illustrates two such bottleneck events. The first is the bottleneck which occurred in Africa long before migrations out of the continent. This was then followed by population-specific bottlenecks that resulted from groups migrating out of Africa. This is reflected in the diversity of the populations today based on their ancestry, and it can be derived by looking at a pair of chromosomes from any two people in these populations.

Understanding Disease

Understanding that human populations went through bottlenecks has important implications for understanding population-specific disease. A study published by Tennessen et al. looked at exome sequences in many classes of individuals. The study was intended to examine how rare variants might contribute to disease; as a consequence, the authors were able to fit population genetics models to the data and ask what sort of deleterious variants were seen when sequencing exomes from a broad population panel. Using this approach, they were able to estimate parameters describing how long ago exponential growth of the founder and branching populations occurred. See Figure 29.12 below for an illustration of this.

Understanding Recent Population Admixture

In addition to viewing coalescent times, one can also perform Principal Component Analysis on SNPs to gain an understanding of more recent population admixtures. Running this on most populations shows clustering with respect to geographical location. There are some populations, however, that experienced a recent admixture for historical reasons. The two most commonly referred to in the scientific literature are African Americans, who on average are about 20% European in ancestry, and Mexican Americans. There are two major things one can say about the admixture events of African Americans and Mexican Americans. The first and more obvious is inferring the admixture level. The second, and more interesting, is inferring when the admixture event happened based on the actual pattern of mixture. As we have discussed in previous chapters, the ancestry signatures in the genome break down with admixture because of recombination in each generation. If the population is closed, the percentages of European and West African origin should stay the same in each generation, but the segments will get shorter due to the mixing. Hence, the length of the haplotype blocks can be used to date when the mixing originally happened. (When it originally happened, we would expect large chunks, with some chromosomes being entirely of African origin, for instance.)
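As a rough illustration of dating admixture from segment lengths (a back-of-envelope sketch, not the method used in the studies discussed here): under a single-pulse model, ancestry tract lengths in Morgans are approximately exponential with mean 1/((1 - m)T), where m is the admixture fraction and T the number of generations since admixture. The tract lengths and admixture fraction below are invented.

```python
# Back-of-envelope sketch: dating a single admixture pulse from ancestry tract lengths.
# Assumes tract lengths (in Morgans) are roughly exponential with mean 1/((1 - m) * T);
# tract lengths below are illustrative, given in centimorgans.

def generations_since_admixture(tract_lengths_cm, admix_fraction):
    """Crude single-pulse estimate of generations since admixture."""
    mean_morgans = (sum(tract_lengths_cm) / len(tract_lengths_cm)) / 100.0
    return 1.0 / ((1.0 - admix_fraction) * mean_morgans)


if __name__ == "__main__":
    # Toy European-ancestry tract lengths (cM) observed in an admixed genome.
    tracts_cm = [22.0, 15.5, 18.0, 25.0, 12.5, 20.0]
    t = generations_since_admixture(tracts_cm, admix_fraction=0.2)
    print(f"Estimated time since admixture: about {t:.0f} generations "
          f"(~{t * 25:.0f} years at 25 years/generation)")
```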
Using this approach, one can look at the distribution of recent ancestry tracts and then fit a model for when these migrant segments entered the ancestral population, as shown below:

29.06: Current Research

HapMap project
The International HapMap Project aims to catalog the genomes of humans from various countries and regions and to find similarities and differences that will help researchers find genes relevant to advances in disease treatment and health-related technologies.

1000 Genomes project
The 1000 Genomes Project is an international consortium of researchers aiming to establish a detailed catalogue of human genetic variation. Its aim was to sequence the genomes of more than a thousand anonymous participants from a number of different ethnic groups. In October 2012, the sequencing of 1092 genomes was announced in a Nature paper. It is hoped that the data collected by this project will help scientists gain more insight into human evolution, natural selection and rare disease-causing variants.

29.07: Further Reading

• Campbell Biology, 9th edition; Pearson; Chapter 23: The Evolution of Populations
• The Cell, 5th edition, Garland Publishing; Chapter 5: DNA Replication, Repair and Recombination; Chapter 20: Germ Cells and Fertilization
30.01: Bibliography
[1] G.R. Abecasis, S.S. Cherny, W.O. Cookson, and L.R. Cardon. Merlin—rapid analysis of dense genetic maps using sparse gene flow trees. Nature Genetics, 30(1):97–101, 2002.
[2] H.L. Allen et al. Hundreds of variants clustered in genomic loci and biological pathways affect human height. Nature, 467(7317):832–838, 2010.
[3] Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, 57:289–300, 1995.
[4] D. Botstein, R.L. White, M. Skolnick, and R.W. Davis. Construction of a genetic linkage map in man using restriction fragment length polymorphisms. American Journal of Human Genetics, 32:314–331, 1980.
[5] M.S. Brown and J.L. Goldstein. A receptor-mediated pathway for cholesterol homeostasis. Science, 232(4746):34–47, 1986.
[6] Jonathan C. Cohen, Eric Boerwinkle, Thomas H. Mosley, and Helen H. Hobbs. Sequence variations in PCSK9, low LDL, and protection against coronary heart disease. 354(12):1264–1272.
[7] B. Devlin and K. Roeder. Genomic control for association studies. Biometrics, 55:997–1004, 1999.
[8] R.C. Elston and J. Stewart. A general model for the genetic analysis of pedigree data. Human Heredity, 21:523–542, 1971.
[9] Kyle Kai-How Farh, Alexander Marson, Jiang Zhu, Markus Kleinewietfeld, William J. Housley, Samantha Beik, Noam Shoresh, Holly Whitton, Russell J. H. Ryan, Alexander A. Shishkin, Meital Hatan, Marlene J. Carrasco-Alfonso, Dita Mayer, C. John Luckey, Nikolaos A. Patsopoulos, Philip L. De Jager, Vijay K. Kuchroo, Charles B. Epstein, Mark J. Daly, David A. Hafler, and Bradley E. Bernstein. Genetic and epigenetic fine mapping of causal autoimmune disease variants.
[10] Sir R.A. Fisher. The correlation between relatives on the supposition of Mendelian inheritance. Transactions of the Royal Society of Edinburgh, 52:399–433, 1918.
[11] D.F. Gudbjartsson, K. Jonasson, M.L. Frigge, and A. Kong. Allegro, a new computer program for multipoint linkage analysis. Nature Genetics, 25(1):12–13, 2000.
[12] D.F. Gudbjartsson, T. Thorvaldsson, A. Kong, G. Gunnarsson, and A. Ingolfsdottir. Allegro version 2. Nature Genetics, 37(10):1015–1016, 2005.
[13] Joel T. Haas, Harland S. Winter, Elaine Lim, Andrew Kirby, Brendan Blumenstiel, Matthew DeFelice, Stacey Gabriel, Chaim Jalas, David Branski, Carrie A. Grueter, Mauro S. Toporovski, Tobias C. Walther, Mark J. Daly, and Robert V. Farese. DGAT1 mutation is linked to a congenital diarrheal disorder. 122(12):4680–4684.
[14] X. Hu, H. Kim, E. Stahl, R. Plenge, M. Daly, and S. Raychaudhuri. Integrating autoimmune risk loci with gene-expression data identifies specific pathogenic immune cell subsets. The American Journal of Human Genetics, 89(4):496–506, 2011.
[15] R.M. Idury and R.C. Elston. A faster and more general hidden Markov model algorithm for multipoint likelihood calculations. Human Heredity, 47:197–202, 1997.
[16] A. Ingolfsdottir and D. Gudbjartsson. Genetic linkage analysis algorithms and their implementation. In Corrado Priami, Emanuela Merelli, Pablo Gonzalez, and Andrea Omicini, editors, Transactions on Computational Systems Biology III, volume 3737 of Lecture Notes in Computer Science, pages 123–144. Springer Berlin / Heidelberg, 2005.
[17] L. Kruglyak, M.J. Daly, M.P. Reeve-Daly, and E.S. Lander. Parametric and nonparametric linkage analysis: a unified multipoint approach. American Journal of Human Genetics, 58:1347–1363, 1996.
[18] L. Kruglyak and E.S. Lander. Faster multipoint linkage analysis using Fourier transforms. Journal of Computational Biology, 5:1–7, 1998.
[19] P. Kuballa, A. Huett, J.D. Rioux, M.J. Daly, and R.J. Xavier. Impaired autophagy of an intracellular pathogen induced by a Crohn's disease associated ATG16L1 variant. PLoS One, 3(10):e3391, 2008.
[20] E.S. Lander and P. Green. Construction of multilocus genetic linkage maps in humans. Proceedings of the National Academy of Sciences, 84(8):2363–2367, 1987.
[21] E.S. Lander, P. Green, J. Abrahamson, A. Barlow, M.J. Daly, S.E. Lincoln, and L. Newburg. Mapmaker: An interactive computer package for constructing primary genetic linkage maps of experimental and natural populations. Genomics, 1(2):174–181, 1987.
[22] Q. Li, J.B. Brown, H. Huang, and P.J. Bickel. Measuring reproducibility of high-throughput experiments. Annals of Applied Statistics, 5:1752–1797, 2011.
[23] E.Y. Liu, Q. Zhang, L. McMillan, F.P. de Villena, and W. Wang. Efficient genome ancestry inference in complex pedigrees with inbreeding. Bioinformatics, 26(12):i199–i207, 2010.
[24] D.G. MacArthur, S. Balasubramanian, A. Frankish, N. Huang, J. Morris, K. Walter, L. Jostins, L. Habegger, J.K. Pickrell, S.B. Montgomery, et al. A systematic survey of loss-of-function variants in human protein-coding genes. Science, 335(6070):823–828, 2012.
[25] B.P. McEvoy and P.M. Visscher. Genetics of human height. Economics & Human Biology, 7(3):294–306, 2009.
[26] N.E. Morton. Sequential tests for the detection of linkage. The American Journal of Human Genetics, 7(3):277–318, 1955.
[27] N. Patterson, A. Price, and D. Reich. Population structure and eigenanalysis. PLoS Genetics, 2:e190, 2006.
[28] A. Piccolboni and D. Gusfield. On the complexity of fundamental computational problems in pedigree analysis. Journal of Computational Biology, 10:763–773, October 2003.
[29] A. Price et al. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38:904–909, 2006.
[30] J. Pritchard, M. Stephens, N. Rosenberg, and P. Donnelly. Association mapping in structured populations. American Journal of Human Genetics, 67:170–181, 2000.
[31] M.A. Rivas, M. Beaudoin, A. Gardet, C. Stevens, Y. Sharma, C.K. Zhang, G. Boucher, S. Ripke, D. Ellinghaus, N. Burtt, et al. Deep resequencing of GWAS loci identifies independent rare variants associated with inflammatory bowel disease. Nature Genetics, 2011.
[32] Kaitlin E. Samocha, Elise B. Robinson, Stephan J. Sanders, Christine Stevens, Aniko Sabo, Lauren M. McGrath, Jack A. Kosmicki, Karola Rehnström, Swapan Mallick, Andrew Kirby, Dennis P. Wall, Daniel G. MacArthur, Stacey B. Gabriel, Mark DePristo, Shaun M. Purcell, Aarno Palotie, Eric Boerwinkle, Joseph D. Buxbaum, Edwin H. Cook, Richard A. Gibbs, Gerard D. Schellenberg, James S. Sutcliffe, Bernie Devlin, Kathryn Roeder, Benjamin M. Neale, and Mark J. Daly. A framework for the interpretation of de novo mutation in human disease. 46(9):944–950.
[33] Evan A. Stein, Scott Mellis, George D. Yancopoulos, Neil Stahl, Douglas Logan, William B. Smith, Eleanor Lisbon, Maria Gutierrez, Cheryle Webb, Richard Wu, Yunling Du, Therese Kranz, Evelyn Gasparino, and Gary D. Swergold. Effect of a monoclonal antibody to PCSK9 on LDL cholesterol. 366(12):1108–1118.
[34] T. Strachan and A.P. Read. Human Molecular Genetics. Wiley-Liss, New York, 2nd edition, 1999.
[35] S. Yang, Y. Xiao, D. Kang, J. Liu, Y. Li, E. A. B. Undheim, J. K. Klint, M. Rong, R. Lai, and G. F. King. Discovery of a selective NaV1.7 inhibitor from centipede venom with analgesic efficacy exceeding morphine in rodent pain models. 110(43):17534–17539.
30.02: Introduction
Mark J. Daly, Ph.D., is an Associate Professor at the Massachusetts General Hospital/Harvard Medical School and an Associate Member of the Broad Institute. This lecture explains how statistical and computational methods can aid researchers in understanding, diagnosing, and treating disease. Association mapping is the process of identifying genetic variation that can explain phenotypic variation, which is particularly important for understanding disease phenotypes (e.g., susceptibility). Historically, the method of choice for solving this problem was linkage analysis. However, advances in genomic technology have allowed for a more powerful method called genome-wide association. More recent advances in technology and genomic data have allowed for novel integrative analyses which can make powerful predictions about diseases.

Any discussion about the basis of disease must consider both genetic and environmental effects. However, it is known that many traits, for example those in Figure 30.1, have significant genetic components. Formally, the heritability of a phenotype is the proportion of variation in that phenotype which can be explained by genetic variation. The traits in Figure 30.1 are all at least 50% heritable. Accurately estimating heritability involves statistical analyses on samples with highly varied levels of shared genetic variation (e.g., twins, siblings, relatives, and unrelated individuals). Studies on the heritability of Type 2 diabetes, for example, have shown that given that you have diabetes, the risk to the person sitting next to you (an unrelated person) increases by 5–10%; the risk to a sibling increases by 30%; and the risk to an identical twin increases by 85–90%.

30.03: Goals of investigating the genetic basis of disease

Having established that there is a genetic component to disease traits, how can this research help meet outstanding medical challenges? There are two main ways:

Personalized genomic medicine
Variants can be used in genetic screens to test for increased risk for the disease trait and provide individualized medical insights. A large number of companies now provide personalized genomic services through screening for cancer recurrence risk, genetic disorders (including prenatal screening), and common disease. Individualized genomic medicine can help identify the likelihood of benefiting from specific therapeutic interventions, or can predict adverse drug responses.

Informing therapeutic development
Identifying genetic variants which explain the disease trait contributes to our ability to understand the mechanism (the biochemical pathways, etc.) by which the disease manifests. This allows us to engineer drugs that are more effective at targeting the causal pathways in disease. This is of particular interest because our current drug development process makes it difficult to develop drugs for certain disorders. For example, in the last 50 years, no truly novel compounds have been developed to treat various psychiatric disorders such as schizophrenia. The identification of genetically associated genes can help identify targets to start drug development.

Figure 30.2 depicts the cycle of drug development. The drug development process starts with hypothesizing a possible target of interest that might be related to a disease. After biochemical evaluations and drug development, the target is tested in model organisms. If the drug is effective in model organisms, it is tested in humans through clinical trials.
However, the vast majority of drugs which make it through this process end up being ineffective in treating the disease for which they were originally designed. This is mainly a consequence of faulty target selection: the chosen target was not actually part of the basis of the disease in question. Statins are a prominent example of highly effective drugs developed after work on understanding the genetic basis of the disease trait they target. Dr. Michael Brown and Dr. Joseph Goldstein won the Nobel Prize in Physiology or Medicine in 1985 for their work on the regulation of LDL cholesterol metabolism [5]. They were able to isolate the cause of extreme familial hypercholesterolemia (FH), a Mendelian disorder, to mutations of a single gene encoding an LDL receptor. Moreover, they were able to identify the biochemical pathway affected by the mutation to create the disease condition. Statins target that pathway, making them useful not only to individuals suffering from FH, but also as an effective treatment for high LDL cholesterol in the general population.
30.04: Mendelian Traits
Mendel

Gregor Mendel identified the first evidence of inheritance in 1865 using plant hybridization. He recognized discrete units of inheritance related to phenotypic traits, and noted that variation in these units, and therefore variation in phenotypes, was transmissible through generations. However, Mendel ignored a discrepancy in his data: some pairs of phenotypes were not passed on independently. This was not understood until 1913, when linkage mapping showed that genes on the same chromosome are passed along in tandem unless a meiotic cross-over event occurs. Furthermore, the distance between genes of interest determines the probability of a recombination event occurring between the two loci, and therefore the probability of the two genes being inherited together (linkage).

Linkage Analysis

Historically, researchers have used the idea of linkage through linkage analysis to determine genetic variants which explain phenotypic variation. The goal is to determine which variants contribute to the observed pattern of phenotypic variation in a pedigree. Figure 30.3 shows an example pedigree in which squares are male individuals, circles are female individuals, couples and offspring are connected, and individuals in red have the trait of interest.

Linkage analysis relies on the biological insight that genetic variants are not independently inherited (as proposed by Mendel). Instead, meiotic recombination happens a limited number of times (roughly once per chromosome), so many variants cosegregate (are inherited together). This phenomenon is known as linkage disequilibrium (LD). As the distance between two variants increases, the probability that a recombination occurs between them increases. Thomas Hunt Morgan and Alfred Sturtevant developed this idea to produce linkage maps which could not only determine the order of genes on a chromosome, but also their relative distances to each other. The Morgan is the unit of genetic distance they proposed; loci separated by 1 centimorgan (cM) have a 1 in 100 chance of being separated by a recombination. Unlinked loci have a 50% chance of being separated by a recombination (they are separated if an odd number of recombinations happens between them).

Since we usually do not know a priori which variants are causal, we instead use genetic markers which capture other variants due to LD. In 1980, David Botstein proposed using DNA polymorphisms, such as single-base changes detected as restriction fragment length polymorphisms, as genetic markers in humans [4]. If a particular marker is in LD with the actual causal variant, then we will observe its pattern of inheritance contributing to the phenotypic variation in the pedigree and can narrow down our search.

The statistical foundations of linkage analysis were developed in the first part of the 20th century. Ronald Fisher proposed a genetic model which could reconcile Mendelian inheritance with continuous phenotypes such as height [10]. Newton Morton developed a statistical test called the LOD score (logarithm of odds) to test the hypothesis that the observed data result from linkage [26]. The null hypothesis of the test is that the recombination fraction (the probability a recombination occurs between two adjacent markers) is $\theta$ = 1/2 (no linkage), while the alternative hypothesis is that it is some smaller quantity.
The LOD score is essentially a log-likelihood ratio which captures this statistical test: $\mathrm{LOD}=\log _{10} \frac{\text { likelihood of disease given linkage }}{\text { likelihood of disease given no linkage }}\nonumber$ The algorithms for linkage analysis were developed in the latter part of the 20th century. There are two main classes of linkage analysis: parametric and nonparametric [34]. Parametric linkage analysis relies on a model (parameters) of the inheritance, frequencies, and penetrance of a particular variant. Let F be the set of founders (original ancestors) in the pedigree, let gi be the genotype of individual i, let $\Phi_{i}$ be the phenotype of individual i, and let f(i) and m(i) be the father and mother of individual i. Then, the likelihood of observing the genotypes and phenotypes in the pedigree is: $L=\sum_{g_{1}} \ldots \sum_{g_{n}} \prod_{i} \operatorname{Pr}\left(\Phi_{i} \mid g_{i}\right) \prod_{f \in F} \operatorname{Pr}\left(g_{f}\right) \prod_{i \notin F} \operatorname{Pr}\left(g_{i} \mid g_{f(i)}, g_{m(i)}\right)\nonumber$ The time required to compute this likelihood is exponential in both the number of markers being considered and the number of individuals in the pedigree. However, Elston and Stewart gave an algorithm for computing it more efficiently, assuming no inbreeding in the pedigree [8]. Their insight was that, conditioned on parental genotypes, offspring are conditionally independent. In other words, we can treat the pedigree as a Bayesian network to compute the joint probability distribution more efficiently. Their algorithm scales linearly in the size of the pedigree, but exponentially in the number of markers. There are several issues with parametric linkage analysis. First, individual markers may not be informative (give unambiguous information about inheritance). For example, homozygous parents or genotyping error could lead to uninformative markers. To get around this, we could type more markers, but the algorithm does not scale well with the number of markers. Second, coming up with model parameters for a Mendelian disorder is straightforward, but doing the same for non-Mendelian disorders is non-trivial. Finally, estimates of LD between markers are not inherently supported. Nonparametric linkage analysis does not require a genetic model. Instead, we first infer the inheritance pattern given the genotypes and the pedigree. We then determine whether the inheritance pattern can explain the phenotypic variation in the pedigree. Lander and Green formulated an HMM to perform the first part of this analysis [20]. The states of this HMM are inheritance vectors which specify the result of every meiosis in the pedigree. Each individual is represented by 2 bits (one for each parent). The value of each bit is 0 or 1 depending on which of the grand-parental alleles is inherited. Figure 30.4 shows an example of the representation of two individuals in an inheritance vector. Each step of the HMM corresponds to a marker; a transition in the HMM corresponds to some bits of the inheritance vector changing. This means the allele inherited from some meiosis changed, i.e. that a recombination occurred. The transition probabilities in the HMM are then a function of the recombination fraction between adjacent markers and the Hamming distance (the number of bits which differ, or the number of recombinations) between the two states.
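To make the transition model concrete, here is a minimal sketch (an illustrative toy, not code from the chapter): with b meioses and recombination fraction theta between adjacent markers, the probability of moving from inheritance vector v to w is theta^d (1 - theta)^(b - d), where d is their Hamming distance. The two-meiosis example below is only for illustration.

```python
# Minimal sketch of Lander-Green-style HMM transitions between inheritance vectors.
# With b meioses and recombination fraction theta between adjacent markers, the
# probability of moving from vector v to vector w is theta^d * (1 - theta)^(b - d),
# where d is the Hamming distance between v and w. Tiny two-meiosis example.
from itertools import product

def transition_prob(v, w, theta):
    """P(inheritance vector w at the next marker | vector v at this marker)."""
    d = sum(bit_v != bit_w for bit_v, bit_w in zip(v, w))   # number of recombinations
    b = len(v)
    return theta ** d * (1.0 - theta) ** (b - d)


if __name__ == "__main__":
    theta = 0.1                                  # recombination fraction between markers
    states = list(product((0, 1), repeat=2))     # 2 meioses -> 4 inheritance vectors
    for v in states:
        row = [transition_prob(v, w, theta) for w in states]
        print(v, " ".join(f"{p:.3f}" for p in row), f"| row sum = {sum(row):.3f}")
```

Each printed row sums to 1, since summing theta^d (1 - theta)^(b - d) over all vectors w gives (theta + (1 - theta))^b = 1.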
We can use the forward-backward algorithm to compute posterior probabilities on this HMM and infer the probability of every inheritance pattern at every marker. This algorithm scales linearly in the number of markers, but exponentially in the size of the pedigree: the number of states in the HMM is exponential in the length of the inheritance vector, which is linear in the size of the pedigree. In general, the problem is known to be NP-hard (to the best of our knowledge, we cannot do better than an algorithm which scales exponentially in the input) [28]. However, the problem is important not only in this context, but also in the contexts of haplotype inference or phasing (assigning alleles to homologous chromosomes) and genotype imputation (inferring missing genotypes based on known genotypes). There have been many optimizations to make this analysis more tractable in practice [1, 11, 12, 15–18, 21, 23].

Linkage analysis identifies a broad genomic region which correlates with the trait of interest. To narrow down the region, we can use fine-resolution genetic maps of recombination breakpoints. We can then identify the affected gene and causal mutation by sequencing the region and testing for altered function.

30.05: Complex Traits

Linkage analysis has proven to be highly effective in studying the genetic basis of Mendelian (single-gene) diseases. In the past three decades, thousands of genes have been identified as contributing to Mendelian diseases. We have identified the genetic basis of diseases such as sickle cell anemia, cystic fibrosis, and muscular dystrophy, as well as severe forms of common diseases such as diabetes and hypertension. For these diseases, mutations are severe and obvious; the environment, behavior, and chance have little effect. Figure 30.5 shows this explosion in published associations.

However, most diseases (and many other traits of interest) are not Mendelian. These complex traits arise from the interactions of many genes and possibly the environment and behavior. A canonical complex trait is human height: it is highly heritable, but environmental factors can affect it. Recently, researchers have identified hundreds of variants which are associated with height [2, 25]. Linkage analysis is not a viable approach to find these variants. The first complex trait mapping was performed in 1920 by Altenburg and Muller and involved the genetic basis of truncated wings in D. melanogaster.

The polygenicity of a complex trait, its distribution across a large number of genes, presents a fundamental challenge to determining which genes are associated with a phenotype. In complex traits, instead of one gene determining a disease or trait (as in Mendelian inheritance), many genes each exert a small influence. The effects of all of these genes, as well as environmental influences, combine to determine an individual outcome, as the toy simulation below illustrates. Furthermore, most common diseases work this way. This is because selection against each individual genotypic difference is very small, since no one difference is causal for the disease. In this way, complex traits "survive" evolution, because they are not targets for selection.
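Here is a toy simulation (not from the chapter) of why complex traits are hard to map: a quantitative trait is built from many variants of small additive effect plus environmental noise, so even the largest-effect SNP explains only a sliver of the phenotypic variance. The number of loci, allele frequencies, and effect sizes are arbitrary.

```python
# Toy polygenic trait: many SNPs, each with a small additive effect, plus noise.
# Illustrates why each individual variant explains only a tiny fraction of the
# phenotypic variance; all parameters are arbitrary.
import random

random.seed(0)
N_SNPS, N_PEOPLE = 200, 5000
freqs = [random.uniform(0.05, 0.95) for _ in range(N_SNPS)]
effects = [random.gauss(0, 0.05) for _ in range(N_SNPS)]   # small per-allele effects

def genotype(p):
    """Sample a diploid genotype (0/1/2 alternate alleles) at allele frequency p."""
    return (random.random() < p) + (random.random() < p)

genotypes, phenotypes = [], []
for _ in range(N_PEOPLE):
    g = [genotype(p) for p in freqs]
    genetic_value = sum(b * x for b, x in zip(effects, g))
    genotypes.append(g)
    phenotypes.append(genetic_value + random.gauss(0, 0.5))   # environmental noise

def variance(values):
    m = sum(values) / len(values)
    return sum((x - m) ** 2 for x in values) / len(values)

# Variance explained by the single largest-effect SNP versus the whole trait.
top = max(range(N_SNPS), key=lambda j: abs(effects[j]))
top_component = [effects[top] * person[top] for person in genotypes]
print(f"variance explained by the largest-effect SNP: "
      f"{variance(top_component) / variance(phenotypes):.1%}")
```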
30.06: Genome-wide Association Studies
In the 1990s, researchers proposed a methodology called genome-wide association to systematically correlate markers with traits. These studies sample large pools of cases and controls, measure their genotypes at on the order of one million markers, and try to correlate variation (SNPs, CNVs, indels) in their genotypes with variation in phenotype, tracking disease through the population instead of through pedigrees.

Events Enabling Genome-wide Association Studies

Genome-wide association studies (GWASs) are possible due to three advances. First, advances in our understanding of the genome and the creation of genomic resources have allowed us to better understand and catalogue variation in the genome. From these data, we have realized the key biological insight that humans are one of the least genetically diverse species. On the order of tens of millions of SNPs are shared between different human subpopulations. For any particular region of the genome, we observe only a limited number of haplotypes (allele combinations which are inherited together). This is because, as a species, we are relatively young, and mutation has not caught up with our rapid growth. Because of this high redundancy, we only need to measure a fraction of all the variants in the human genome in order to capture them all through LD. We can then adapt the algorithms for inferring inheritance patterns from linkage analysis to impute genotypes at the markers which we did not genotype. Furthermore, genomic resources allow us to carefully choose which markers to measure and to make predictions based on markers which show statistically significant association. We now have the reference sequence of the human genome (allowing for alignments, genotype and SNP calling) and HapMap, a comprehensive catalog of SNPs in humans. We also have genome-wide annotations of genes and regulatory elements.

Second, advances in genotyping technology such as microarrays and high-throughput sequencing have given us the opportunity to compare the genomes of those affected by various phenotypes to controls. Although there are many types of variation in the human genome (Figure 30.6 shows some examples), SNPs are the vast majority, and they are also the easiest and cheapest to measure with these technologies. To account for the other types of variants, DNA microarrays have recently been developed that detect copy-number variation in addition to SNPs, after which we can impute the unobserved data.

The third advance is a new expectation of collaboration between researchers. GWASs rely on large sample sizes to increase the power (probability of detecting a true positive) of statistical tests. The explosion in the number of published GWASs has allowed for a new type of meta-analysis which combines the results of several GWASs for the same phenotype to make more powerful associations. Meta-analysis accounts for various technical and population-genetic biases in individual studies. Researchers who conduct GWASs are expected to collaborate with others who have conducted GWASs on the same trait in order to show replicability of results. By pooling the data, we also gain more confidence in the reported associations, and the genes that are discovered may lead to the recognition of key pathways and processes.

Did You Know?
Modified from the Wellcome Trust Sanger Institute: Crohn's disease and ulcerative colitis have been focuses for complex disease genetics, and the massive collaborative efforts of the International Inflammatory Bowel Disease Genetics Consortium (IIBDGC) strengthen the success of the research. With approximately 40,000 DNA samples from patients with IBD and 20,000 healthy controls, the IIBDGC have discovered 99 definite IBD loci. In all, the 71 Crohn's disease and 47 UC loci account for 23% and 16% of disease heritability, respectively. Key insights into disease biology have already resulted from gene discovery (e.g. autophagy in Crohn's disease, defective barrier function in UC, and IL23 signalling in IBD and immune-mediated disease generally). It is anticipated that of the many novel drug targets identified by gene discovery, a few will ultimately result in improved therapeutics for these devastating conditions. Improved diagnostics, prognostics and therapeutics are all goals, with a view to personalized therapy (the practice of using an individual's genetic profile as a guide for treatment decisions) in future.

Quality Controls

The main problem in conducting GWASs is eliminating confounding factors, and best practices can be used to ensure quality data. First, there is genotyping error, which is common enough to require special treatment regardless of which technology is used. This is a technical quality control: to account for such errors, we use thresholds on metrics like minor allele frequency and deviation from Hardy–Weinberg equilibrium and throw out SNPs which do not meet the criteria. Second, systematic genetic differences between human subpopulations require a genetic quality control. There are several methods to account for this population substructure, such as genomic control [7], testing for Mendelian inconsistencies, structured association [30], and principal component analysis [27, 29]. Third, covariates such as environmental and behavioral effects or gender may skew the data. We can account for these by including them in our statistical model.

Testing for Association

After performing the quality controls, the statistical analysis involved in a GWAS is fairly straightforward, with the simplest tests being single-marker regression or a chi-square test. In fact, association results requiring arcane statistics or complex multi-marker models are often less reliable. First, we assume the effect of each SNP is independent and additive to make the analysis tractable. For each SNP, we perform a hypothesis test whose null hypothesis is that the observed variation in the genotype at that SNP across the subjects does not correlate with the observed variation in the phenotype across the subjects. Because we perform one test for each SNP, we need to deal with the multiple testing problem. Each test has some probability of giving a false positive result, and as we increase the number of tests, the probability of getting a false positive in any of them increases. Essentially, with linkage, p = 0.001 (0.05/50 chromosomal arms) would be considered potentially significant, but a GWAS involves performing on the order of 10^6 tests that are largely independent. Each study would therefore have hundreds of SNPs with p < 0.001 purely by statistical chance, with no real relationship to disease. There are several methods to account for multiple testing, such as Bonferroni correction and measures such as the false discovery rate [3] and the irreproducible discovery rate [22].
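As a minimal illustration of the single-marker test and multiple-testing threshold described above (a sketch with invented counts, not code from the chapter), one can compare allele counts between cases and controls with a chi-square test and then apply a Bonferroni-style genome-wide cutoff.

```python
# Minimal single-SNP association test: chi-square on a 2x2 allele-count table
# (cases vs. controls), compared to a Bonferroni-style genome-wide threshold.
# Counts are invented for illustration; real pipelines use dedicated tools (e.g. PLINK).
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 d.o.f.) for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

def p_value_1dof(chi2):
    """Upper-tail p-value of a chi-square statistic with one degree of freedom."""
    return math.erfc(math.sqrt(chi2 / 2.0))


if __name__ == "__main__":
    n_tests = 1_000_000                 # roughly one test per genotyped SNP
    alpha = 0.05 / n_tests              # Bonferroni-style threshold, 5e-8
    # Alternate/reference allele counts: 2000 case vs. 2000 control chromosomes.
    chi2 = chi2_2x2(700, 1300, 540, 1460)
    p = p_value_1dof(chi2)
    verdict = "genome-wide significant" if p < alpha else "not significant"
    print(f"chi2 = {chi2:.1f}, p = {p:.2e} -> {verdict}")
```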
Typically, genome-wide significance is set at p = 5×10^-8 (= 0.05/1 million tests), first proposed by Risch and Merikangas (1996). In 2008, three groups published empirically derived estimates based on dense genome-wide maps of common DNA variation and estimated appropriate dense-map thresholds to be in the range of 2.5×10^-8 to 7.2×10^-8. These can be visualized in Figure 30.7. Because of these different thresholds, it is important to look at multiple studies to validate associations, as even with strict quality control there can be artifacts that affect one in every thousand or ten thousand SNPs and escape notice. Additionally, strict genome-wide significance is generally not dramatically exceeded, if it is reached at all, in a single study.

In addition to reporting SNPs which show the strongest associations, we typically also use Manhattan plots to show where these SNPs are located in the genome and quantile-quantile (Q-Q) plots to detect biases which have not been properly accounted for. A Manhattan plot is a scatter plot of log-transformed p-values against genomic position (concatenating the chromosomes). In Figure 30.8A, the points in red are those which meet the significance threshold. They are labeled with candidate genes which are close by. A Q-Q plot is a scatter plot of log-transformed observed p-values against log-transformed expected p-values. We use uniform quantiles as the expected p-values: assuming there is no association, we expect p-values to be uniformly distributed. Deviation from the diagonal suggests p-values are more significant than would be expected. However, early and consistent deviation from the diagonal suggests too many p-values are too significant, i.e. there is some bias which is confounding the test. In Figure 30.8B, the plot shows observed test statistics against expected test statistics (which is equivalent). Considering all markers includes the Major Histocompatibility Complex (MHC), the region associated with immune response. This region has a unique LD structure which confounds the statistical analysis, as is clear from the deviation of the black points from the diagonal (the gray area). Throwing out the MHC removes much of this bias from the results (the blue points).

GWAS identifies markers which correlate with the trait of interest. However, each marker captures a neighborhood of SNPs with which it is in LD, making the problem of identifying the causal variant harder. Typically, the candidate gene for a marker is the one which is closest to it. From here, we have to do further study to identify the relevance of the variants which we identify. However, this remains a challenging problem for a few reasons:

• Regions of interest identified by association often implicate multiple genes
• Some of these associations are nowhere near any protein-coding segments and do not have an obviously functional allele as their origin
• Linking these regions to underlying biological pathways is difficult

Interpretation: How can GWAS inform the biology of disease?

Our primary goal is to use these found associations to understand the biology of disease in an actionable manner, as this will help guide therapies to treat these diseases. Most associations do not identify specific genes and causal mutations, but rather are just pointers to small regions with causal influences on disease. In order to develop and act on a therapeutic hypothesis, we must go much further, and answer these questions:

• Which gene is connected to disease?
• What biological process is thereby implicated?
• What is the cellular context in which that process acts and is relevant to disease?
• What are the specific functional alleles which perturb the process and promote or protect from disease?

This can be approached in one of two manners: the bottom-up approach or the top-down approach.

Bottom-up

The bottom-up approach is used to investigate a particular gene that has a known association with a disease and to determine its biological importance within a cell. Kuballa et al. [19] were able to use this bottom-up approach to learn that a particular risk variant associated with Crohn's disease leads to impaired autophagy of certain pathogens. Furthermore, the authors were able to create a mouse model of the same risk variant found in humans. Identifying the biological implications of risk variants at the cellular level and creating these models is invaluable, as the models can be directly used to test new potential treatment compounds.

Top-down

In contrast, the top-down approach looks at all known associations, utilizing the complete set of GWAS results, and tries to link them to shared biological processes and pathways implicated in disease pathogenesis. This approach is based on the idea that many of the genes associated with a disease share relevant biological pathways. This is commonly done by taking existing networks, like protein-protein interaction networks, and layering the associated genes on top of them. However, the resulting disease networks may not be significant due to bias both in the discovery of associations and in the experimental data that the associations are being integrated with. Significance can be estimated by permuting the labels of the nodes in the network many times and then computing how rare the observed level of connectivity is for the given disease network. This process is illustrated in Figure 30.9. As genes connected in the network should be co-expressed, it has been shown that these disease networks can be further validated with gene-expression profiling [14].

Comparison with Linkage Analysis

It is important to note that GWAS captures more variants than linkage analysis. Linkage analysis identifies rare variants which have negative effects, and linkage studies are used when pedigrees of related individuals with phenotypic information are available. They can identify rare alleles that are present in smaller numbers of families, usually due to a founder mutation, and have been used to identify mutations in genes such as BRCA1, associated with breast cancer. Association studies, by contrast, are used to find more common genetic changes that confer smaller influences on susceptibility, as well as variants which have protective effects. Linkage analysis cannot identify protective variants because they are anti-correlated with disease status. Furthermore, linkage analysis relies on the assumption that a single variant explains the disease, an assumption that does not hold for complex traits. Instead, we need to consider many markers in order to explain the genetic basis of these traits.

While genomic medicine promises novel discoveries in disease mechanisms, target genes, therapeutics, and personalized medicine, several challenges remain, including the fact that more than 90% of hits are non-coding. To address this, the non-coding genome has been annotated through ENCODE/Roadmap, and enhancers have been linked to their regulators and target genes.
Once each GWAS locus is expanded using SNP linkage disequilibrium (LD), it can be used to recognize relevant cell types, driver transcription factors, and target genes. This leads to linking traits to their relevant cell and tissue types.

Conclusions

We have learned several lessons from GWAS. First, fewer than one-third of reported associations are coding or obviously functional variants. Second, only some fraction of associated non-coding variants are significantly associated with the expression level of a nearby gene. Third, many are associated with regions containing no nearby coding gene. Finally, the majority of reported variants are associated with multiple autoimmune or inflammatory diseases. These revelations indicate that there are still many mysteries lurking in the genome waiting to be discovered.
30.07: Current Research Directions
One current challenge in medical genetics is that of translation; in particular, we are concerned with whether GWAS can inform the development of new therapeutics. GWAS studies have been successful in identifying disease-associated loci. However, they provide little information about the causal alleles, pathways, complexes or cell types that are involved. Nevertheless, many known druggable targets are associated with GWAS hits. We therefore expect that GWAS has great potential in guiding therapeutic development.

A new tool in our search for greater insight into genetic perturbations is next generation sequencing (NGS). NGS has made sequencing an individual's genome a much less costly and time-consuming task. NGS has several uses in the context of medical genetics, including exome/genome sequencing of rare and severe diseases, as well as exome/genome sequencing to complete the allelic architecture at GWAS loci. However, NGS has in turn brought about new challenges in computation and interpretation.

One application of NGS to the study of human disease is the identification and characterization of loss-of-function (LoF) variants. LoF variants disrupt the reading frame of protein-coding genes and are therefore expected to be of scientific and clinical interest. However, the identification of these variants is complicated by errors in automated variant-calling and gene annotation, so many putative LoF variants are likely to be false positives. In 2012, MacArthur et al. set out to describe a stringent set of LoF variants. Their results suggest that a typical human genome contains about 100 LoF variants. They also presented a method to prioritize candidate genes as a function of their functional and evolutionary characteristics [24]. The MacArthur lab is also involved in an ongoing effort by the Exome Aggregation Consortium to assemble a catalog of human protein-coding variation for data mining. Currently, the catalog includes sequencing data from over 60,000 individuals. Such data allow for the identification of genes that are significantly lacking in functional coding variation. This is important because new functional mutations in genes under exceptional constraint are expected to be deleterious. Based on this principle, Samocha et al. were able to identify about 1000 genes involved in autism spectrum disorders that were significantly lacking in functional coding variation. This was done using a statistical framework that described a model of de novo mutation [32]. Similarly, De Rubeis et al. were able to identify 107 genes under exceptional evolutionary constraint in which disruptive mutations occurred in 5% of autistic subjects. Many of these genes were found to encode proteins involved in transcription and splicing, chromatin remodelling and synaptic function, thus advancing our understanding of the disease mechanism of these variants.

NGS can also be used to study rare and severe diseases, such as in the case of the DGAT1 mutation. In a study by Haas et al., exome sequencing was used to identify a rare splice site mutation in the DGAT1 gene, which had resulted in congenital diarrheal disorders in the children of a family of Ashkenazi Jewish descent [13]. In this case, sequencing not only had therapeutic applications for the surviving child but also provided insight into an ongoing DGAT1 inhibition clinical trial. While NGS allows us to study highly penetrant variants that result in severe Mendelian diseases, there are also genetic studies that deliver hypotheses for intervention. One example of this is the discovery of SCN9A.
The complete loss of function of SCN9A, also known as NaV1.7, results in congenital indifference to pain. This has led to the development of novel analgesics with efficacy exceeding that of morphine, as in the case of μ-SLPTX-Ssm6a, a selective NaV1.7 inhibitor [35]. Another example is the loss-of-function variant of PCSK9, which lowers LDL and protects against coronary artery disease. This has led to the development of the PCSK9 inhibitor REGN727, which has been shown to be safe and effective in phase 1 clinical trials [6].

NGS is also important for fine-mapping loci identified in GWAS studies. For example, GWAS studies from 2010 looking at Crohn's disease implicated a region on chromosome 15 containing multiple genes. After fine-mapping, the International Inflammatory Bowel Disease Genetics Consortium (IIBDGC) was able to refine the association to noncoding functional elements of SMAD3. Another example is a study by Farh et al. that looked at candidate causal variants for 21 autoimmune diseases. They showed that 90% of causal variants are non-coding, but only 10-20% alter transcription factor binding motifs, implying that current gene regulatory models cannot explain the mechanism of these variants [9]. Finally, a study by Rivas et al. that analyzed a deep resequencing of GWAS loci associated with inflammatory bowel disease found not only new risk factors but also protective variants. For example, a protective splice variant in CARD9 that causes premature truncation of the protein was shown to strongly protect against the development of Crohn's disease [31].

30.08: Tools and Techniques
• HapMap, a thorough catalog of human SNPs.
• PLINK, an open-source C/C++ GWAS tool set that can analyze large data sets with hundreds of thousands of markers genotyped for thousands of individuals to examine potential pathways.
• GRASS (Gene set Ridge regression in Association Studies) summarizes the genetic structure for each gene as eigenSNPs and uses group ridge regression to select representative eigenSNPs for each gene, assessing their association with disease risk and reducing the high dimensionality of GWAS data.
• GWAMA (Genome-Wide Association Meta-Analysis) performs meta-analysis of GWAS of dichotomous phenotypes or quantitative traits.

30.09: What Have We Learned
In the past several decades, we have made huge advances in developing techniques to investigate the genetic basis of disease. Historically, we have used linkage analysis to find causal variants for Mendelian disease with great success. More recently, we have used genome-wide association studies to begin investigating more complex traits with some success. However, more work is needed in developing methods to interpret these GWAS and to identify causal variants and their role in disease mechanism. Improving our understanding of the genetic basis of disease will help us develop more effective diagnoses and treatments.
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/30%3A_Medical_Genetics--The_Past_to_the_Present/30.07%3A_Current_Research_Directions.txt
Differences in gene coding regions across different organisms do not completely explain the phenotypic variation we see. For example, although the phenotypic difference is high between humans and chimpanzees and low between different squirrel species, there is more genetic variation among the squirrel species [1]. These observations lead us to conclude that there must be more than just gene-coding variation that accounts for phenotypic variation; specifically, non-coding variation also influences how genes are expressed, and consequently influences the phenotype of an organism. In fact, previous research has shown that most genetic variation occurs in non-coding regions [2]. Furthermore, most expression patterns have been found to be heritable traits.

Understanding how variation in non-coding regions affects co-regulated genes would allow us not only to understand but also to control the expression of these and other related genes. This is especially relevant to the control of undesirable trait expression, such as complex, polygenic diseases (Figure 31.1). In Mendelian disease, the majority of disease risk is predicted by coding variation, whereas in polygenic diseases the vast majority of causal variation is found outside of coding regions. This suggests that variation in the regulation of gene expression may play a greater role than coding variation in these polygenic diseases. Thus, the study of these trait-associated variants is a step in the direction of understanding how genetic sequences both code for and control the expression of such diseases and their associated phenotypes.

eQTLs (expression quantitative trait loci) encapsulate the idea of non-coding regions influencing mRNA expression introduced above: we can define an eQTL as a region of variants in a genome that is quantitatively correlated with the expression of another gene encoded by the organism. Usually, we will see that certain SNPs in certain non-coding regions will either enhance or disrupt the expression of a certain gene. The field of identifying, analyzing, and interpreting eQTLs in the genome has grown immensely over the last couple of years, with hundreds of research papers being published. There are four main mechanisms by which eQTLs influence the expression of their associated genes:
1. Altered transcription factor binding
2. Histone modifications
3. Alternative splicing of mRNA
4. miRNA silencing

FAQ
Q: What is the difference between an eQTL study and a GWAS?
A: There are two fundamental differences. The first is the nature of the phenotype being examined. In an eQTL study, the phenotype is usually at a lower level of biological abstraction (normalized gene expression levels) instead of the higher-level, sometimes visible phenotype used in GWAS, such as “black hair”. Secondly, because the phenotype being correlated with SNPs in GWAS is a higher-level phenotype, we very rarely see tissue-specific GWAS. In eQTL studies, however, the expression patterns of mRNA can vary greatly between tissue types within the same individual, and eQTL studies for a specific tissue type, such as neurons or glial cells, can be performed (Figure 31.2).

31.02: eQTL Basics
Cis-eQTLs
The use of whole-genome eQTL analysis has separated eQTLs into two distinct types of manifestation. The first is a cis-eQTL (Figure 31.3), in which the position of the eQTL maps near the physical position of the gene.
Because of their proximity, cis-eQTL effects tend to be much stronger, and thus can be more easily detected by GWAS and eQTL studies. Often these variants act through promoters, affect methylation and chromatin conformation (thus increasing or decreasing access for transcription), and can manifest as insertions and deletions as well as single-nucleotide changes. Cis-eQTLs are generally classified as variants that lie within 1 million base pairs of the gene of interest. This is, however, an arbitrary cutoff and can differ by an order of magnitude between studies.

Trans-eQTLs
The second distinct type of eQTL is a trans-eQTL (Figure 31.4). A trans-eQTL does not map near the physical position of the gene it regulates. Its functions are generally more indirect in their effect on gene expression (not directly boosting or inhibiting transcription but rather affecting kinetics, signaling pathways, etc.). Since such effects are harder to determine explicitly, they are harder to find in eQTL analysis; in addition, such networks can be extremely complex, further limiting trans-eQTL analysis. However, eQTL analysis has led to the discovery of trans hotspots, which refer to loci that have widespread transcriptional effects [11].

Perhaps the biggest surprise of eQTL research is that, despite the discovery of trans hotspots and cis-eQTLs, no major trans loci for specific genes have been found in humans [12]. This is probably attributable to the current process of whole-genome eQTL analysis itself. As useful and widespread as whole-genome eQTL analysis is, genome-wide significance occurs at $p=5 \times 10^{-8}$, and this must additionally be corrected for multiple testing across about 20,000 genes. Thus, studies generally use an inadequate sample size to establish the significance of many trans-eQTL associations, which begin with much lower prior probabilities than cis-eQTLs [4]. Further, the bias-reduction methods described in earlier sections deflate variance, which is integral to capturing the small-effect associations inherent in trans loci. Finally, non-normal distributions limit the statistical significance of associations between trans-eQTLs and gene expression [4]. This has been somewhat remedied by the use of cross-phenotype meta-analysis (CPMA) [5], which relies on the summary statistics from GWAS rather than individual-level data. This cross-trait analysis is effective because trans-eQTLs affect many genes and thus have multiple associations originating from a single marker. Sample CPMA code can be found in Tools and Resources.

However, while trans loci have not been found, trans-acting variants have been found. Since trans-eQTLs can be inferred to affect many genes, CPMA and ChIP-Seq can be used to detect such cross-trait variants. Indeed, 24 different significant trans-acting transcription factors were determined from a group of 1311 trans-acting SNP variants by observing allelic effects on populations and target gene interactions/connections.
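To make the multiple-testing burden described above concrete, the following back-of-the-envelope Python sketch shows why full trans-eQTL scans demand very large sample sizes; the SNP and gene counts are illustrative assumptions rather than values from any particular study:

    # A full trans scan tests every SNP against every gene, so a Bonferroni
    # correction pushes the per-test threshold far below the usual 5e-8.
    n_snps = 1_000_000        # assumed number of tested SNPs
    n_genes = 20_000          # assumed number of expressed genes
    alpha = 0.05

    n_tests = n_snps * n_genes
    per_test_threshold = alpha / n_tests
    print(f"{n_tests:.1e} tests, per-test threshold {per_test_threshold:.1e}")  # ~2.5e-12

Detecting a trans effect at such a threshold requires either enormous cohorts or aggregation strategies such as CPMA, which pool evidence across the many genes a trans-eQTL affects.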
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/31%3A_Variation_2-_Quantitative_Trait_Mapping_eQTLS_Molecular_Trait_Variation/31.01%3A_Introduction.txt
The basic approach behind an eQTL study is to treat each gene's expression as a quantitative, multi-factor trait and regress it on genotype while controlling for principal components that capture unwanted variance in expression. First, cells of the tissue of interest are collected and their RNA extracted. Expression of the transcripts of interest is measured either by microarray or by RNA-seq analysis. Expression levels of each gene are then regressed on genotypes, controlling for biological and technical noise, such that

$Y_{i}=\alpha+X_{i} \beta+\epsilon_{i}$

where $Y_{i}$ is the expression of gene i, $X_{i}$ is a vector containing the allelic composition of each SNP associated with the gene (each entry taking the value 0, 1, or 2 copies of the non-reference allele), $\alpha$ and $\beta$ are column vectors containing the regression coefficients, and $\epsilon_{i}$ is the residual error (see Figure 31.5) [9]. In concept, such a study is extremely simple. In practice, there are hundreds of potential confounders and statistical uncertainties which must be accounted for at every step of the process. However, the same regression model can be used to account for these covariates. Figure 31.9 contains an example eQTL study conducted on asthma. The key result from the study is the linear model in the upper right: we can see that as the genotype tends more towards the “A” variant, the target gene expression decreases.

Considerations for Expression Data
Quantifying expression of genes is fraught with experimental challenges. For a more detailed discussion of these issues, see Chapter 14. One important consideration for this type of expression analysis is the SNP-under-probe effect: probe sequences that map to regions with common variants give inconsistent results because variation within the probe itself affects binding dynamics. Thus, experiments repeated with multiple sets of probes will produce a more reliable result. Expression analysis should also generally exclude housekeeping genes, which are not differentially regulated across members of a population and/or cell types, since these would only dilute the statistical power of the study.

Considerations for Genomic Data
There are two main considerations for the analysis of genomic data: the minor allele frequency and the search radius. The search radius determines the generality of the effect being considered: an infinite search radius corresponds to a full-genome cis- and trans-eQTL scan, while smaller radii restrict the analysis to cis-eQTLs. The minor allele frequency (MAF) determines the cutoff under which a SNP site is not considered: it is a major determinant of the statistical power of the study. A higher MAF cutoff generally leads to higher statistical power, but MAF and search radius interact in nonlinear ways to determine the number of significant alleles detected (see Figure 31.6).

Covariate Adjustment
There are many possible statistical confounders in an eQTL study, both biological and technical. Many biological factors can affect the observed expression of any given mRNA in an individual; this is exacerbated by the impossibility of controlling the testing circumstances of the large population samples needed to achieve significance. Population stratification and genomic differences between racial groups are additional contributing factors. Statistical variability also exists on the technical side. Even samples run on the same machine at different times show markedly different clustering of expression results (Figure 31.7).
Researchers have successfully used the technique of Principal Component Analysis (PCA) to separate out the effects of these confounders. PCA produces new coordinate axes along which the SNP-associated gene expression data has the highest variance, thereby isolating unwanted sources of consistent variation (see Chapter 20.4 for a detailed description of Principal Component Analysis). After extracting the principal components of the gene expression data, we can extend the linear regression model to account for these confounders and produce a more accurate regression.

FAQ
Q: Why is PCA an appropriate statistical tool to use in this setting and why do we need it?
A: Unfortunately, our raw data has several biases and external factors that make it difficult to infer good eQTLs. However, we can think of these biases as independent influences on the dataset that create artificial variance in the expression levels we see, confounding the factors that give rise to actual variance. Using PCA, we can decompose these sources of variance into their principal components and filter them out appropriately. Also, due to the complex nature of the traits being analyzed, PCA can help reduce the dimensionality of the data and thereby facilitate computational analysis.

FAQ
Q: How do we decide how many principal components to use?
A: This is a tough problem; one possible solution is to try different numbers of principal components, examine the eQTLs found afterwards, and vary this number in future tests by checking whether the outputted eQTLs are viable. Note that it is difficult to “optimize” the different parameters of an eQTL study once and for all, because each dataset will have its own optimal number of principal components, best value for MAF, and so on.

Points to Consider
The following are some points to consider when conducting an eQTL study.
• The optimal strategy for eQTL discovery in a specific dataset, out of all the different ways to conduct normalization procedures, non-specific gene filtering, search radius selection, and minor allele frequency cutoffs, may not be transferable to another eQTL study. Many scientists overcome this by greedy tuning of these parameters, running the eQTL study iteratively until a maximum number of significant eQTLs is found.
• It is important to note that eQTL studies only find correlation between genetic markers and gene expression patterns, and do not imply causation.
• When conducting an eQTL study, note that most significant eQTLs are found within a few kb of the regulated gene.
• Historically, it has been found that eQTL studies are only about 30-40% reproducible; this is a consequence of how each dataset is structured and of the different normalization and filtering strategies the respective researchers use. However, eQTLs that are found in two or more cohorts consistently show a similar influence on expression within each of the cohorts.
• Many eQTLs are tissue-specific; that is, their influence on gene expression may occur in one tissue but not in another. A possible explanation for this is the co-regulation of a single gene by multiple eQTLs, which is dependent on one gene having multiple alleles.
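The sketch below, in Python on synthetic data, puts the two preceding ideas together: the per-gene regression $Y_{i}=\alpha+X_{i} \beta+\epsilon_{i}$ from the previous section, with the top principal components of the expression matrix included as covariates. The sample sizes, effect sizes, and the choice of 10 components are all assumptions for illustration, not recommendations:

    # Per-gene eQTL regression with expression PCs as covariates.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_samples, n_genes = 100, 500
    expression = rng.normal(size=(n_samples, n_genes))      # samples x genes
    genotype = rng.integers(0, 3, size=n_samples)           # 0/1/2 allele counts
    expression[:, 0] += 0.5 * genotype                      # planted cis effect on gene 0

    pcs = PCA(n_components=10).fit_transform(expression)    # proxies for hidden confounders

    y = expression[:, 0]                                     # expression of the gene being tested
    X = sm.add_constant(np.column_stack([genotype, pcs]))    # intercept, SNP, PCs
    fit = sm.OLS(y, X).fit()
    print(fit.params[1], fit.pvalues[1])                     # SNP effect and its p-value

In a real study this regression is repeated for every gene-SNP pair within the chosen search radius, and the resulting p-values are corrected for multiple testing.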
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/31%3A_Variation_2-_Quantitative_Trait_Mapping_eQTLS_Molecular_Trait_Variation/31.03%3A_Structure_of_an_eQTL_Study.txt
Quantifying Trait Variation
Because the study of eQTLs is a study of the level of expression of a gene, the primary step towards conducting an informative study is picking traits that show varying levels of expression rather than binary expression. Examples of such quantitatively viable traits are body mass index (BMI) and height. In the late 1980s and early 1990s, the first studies of gene expression through genome-wide mapping were initiated by Damerval and de Vienne [8] [6]. However, their use of 2-D electrophoresis for protein separation was inefficient and not thoroughly reliable, as it introduced a lot of noise and could not be systematically and quantitatively summarized. It was only in the early 2000s that the introduction of high-throughput array-based methods to measure mRNA abundance accelerated the successful use of this approach, first highlighted in a study by Brem [10].

New Applications
There are two directions in which eQTL studies are headed. First, there is a rush to use whole-genome eQTL analysis to validate associations with expression differences in the human population, such as differences in gene expression among ethnic groups, as the statistical power for doing so is beginning to reach the threshold of significance. A second direction of research seeks to disentangle genetic associations with varying phenotypes from population differences that have a non-genetic basis. These non-genetic factors include environment, cell line preparation, and batch effects.

31.05: What Have We Learned
In summary, most causal variation for complex polygenic diseases that we have discovered so far is noncoding. Moreover, phenotypic differences between species are not well explained by coding variation, while gene expression is highly heritable between generations. Thus, it is proposed that genetic control of expression levels is a crucial factor in determining phenotypic variance. eQTLs are SNP variant loci that are correlated with gene expression levels. They come in one of two forms. Cis-eQTLs are sites whose loci map near the affected genes, are relatively easy to detect due to their proximity, and generally have clear mechanisms of action. Trans-eQTLs map to distant areas of the genome, are more difficult to detect, and their mechanisms are not as direct. eQTL studies combine a whole-genome approach similar to GWAS with an expression assay, either microarray or RNA-seq. Expression levels of each gene are correlated by linear regression with genotypes after using PCA to extract confounding factors. Determining the optimal parameters for MAF, search radius, and confounder normalization is an open research question. Applications of eQTLs include the identification of disease-associated variants as well as variants associated with population subgroups, and the disentangling of the genetic and environmental variance that gives rise to complex traits.

31.06: Further Reading
The following is a very good introductory literature review on eQTLs, including their history and current applications:
The role of regulatory variation in complex traits and disease. Frank W. Albert and Leonid Kruglyak. Nature Reviews Genetics 16, 2015.
There are also some research papers that are trailblazers in what is current in eQTL studies. One such paper is informative on the occurrence of DNA methylation affecting gene expression in the human brain. Another is a study on changes in expression during development in the nematode C. elegans, using age as a covariate during eQTL mapping:
1. Abundant quantitative trait loci exist for DNA methylation and gene expression in human brain. Gibbs JR, van der Brug MP, Hernandez DG, Traynor BJ, Nalls MA, et al. PLoS Genetics 6, 2010.
2. The effects of genetic variation on gene expression dynamics during development. Francesconi M. and Lehner B. Nature 505, 2013.

In addition, eQTL variants have recently been found to be implicated in diseases such as Crohn's disease and multiple sclerosis [4]. As mentioned in Section 4.2, there has also been a recent surge in studies applying eQTL analysis to delineating differences among human subpopulations and characterizing the contributions of the environment toward trait variation:
1. Common genetic variants account for differences in gene expression among ethnic groups. Spielman RS, Bastone LA, Burdick JT, Morley M, Ewens WJ, Cheung VG. Nature Genetics, 2007.
2. Gene-expression variation within and among human populations. Storey JD, Madeoy J, Strout JL, Wurfel M, Ronald J, Akey JM. The American Journal of Human Genetics, 2007.
3. Population genomics of human gene expression [12]. Stranger BE, Nica AC, Forrest MS, et al. Nature Genetics, 2007.
4. Evaluation of genetic variation contributing to differences in gene expression between populations. Zhang W, Duan S, Kistner EO, Bleibel WK, Huang RS, Clark TA, Chen TX, Schweitzer AC, Blume JE, Cox NJ, Dolan ME. The American Journal of Human Genetics, 2008.
5. A genome-wide gene expression signature of environmental geography in leukocytes of Moroccan Amazighs. Idaghdour Y, Storey JD, Jadallah SJ, Gibson G. PLoS Genetics, 2008.
6. On the design and analysis of gene expression studies in human populations. Joshua M Akey, Shameek Biswas, Jeffrey T Leek, John D Storey. Nature Genetics, 2007.
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/31%3A_Variation_2-_Quantitative_Trait_Mapping_eQTLS_Molecular_Trait_Variation/31.04%3A_Current_Research_Directions.txt
• The Cotsapas Lab distributes code to calculate CPMA from GWAS association p-values, which can be found here: www.cotsapaslab.info/index.php/software/cpma/
• The Pritchard lab has several resources (found here: http://eqtl.uchicago.edu/Home.html) for eQTL research and gene regulation, including:
– DNase-seq data from 70 YRI lymphoblastoid cell lines
– Positions of transcription factor binding sites inferred in the HapMap lymphoblastoid cell lines by CENTIPEDE
– Raw and mapped RNA-Seq data from Pickrell et al.
– Assorted scripts for identifying sequencing reads covering genes, polyadenylation sites and exon-exon junctions
– Data and meQTL results for Illumina 27K methylation data in HapMap lymphoblastoid cell lines
– Files to exclude areas of the genome that are prone to causing false positives in ChIP-seq and other sequencing-based functional assays
– A browser for eQTLs identified in recent studies in multiple tissues
• The Wellcome Trust Sanger Institute has developed packaged database and web services (Genevar) that are designed to help integrative analysis and visualization of SNP-gene associations in eQTL studies. This information can be found here: www.sanger.ac.uk/resources/software/genevar/
• The Wellcome Trust Sanger Institute has also developed databases that contain information relevant to eQTL studies, such as finding and identifying functional elements in the human genome sequence and maintaining automatic annotation on selected eukaryotic genomes. This information can be found here: http://www.sanger.ac.uk/resources/databases/.
• Finally, the NIH is making progress on the Genotype-Tissue Expression Project (GTEx). Currently, the project stands at 35 tissues from 50 donors; the aim is to acquire and analyze 20,000 tissues from 900 donors, with the hope of gathering even more data for further genetic analyses, especially for eQTL and trans-eQTL analyses that require larger sample sizes.

31.08: Bibliography
[1] King, Mary-Claire and Wilson, A.C. (April 1975). Evolution at Two Levels in Humans and Chimpanzees. Science Vol. 188, No. 4184.
[2] 1000 Genomes Project Consortium. Nature. 2010; 467:1061-73.
[3] Cheung, Vivian G. and Spielman, Richard S. (2009). Genetics of Human Gene Expression: Mapping DNA Variants that Influence Gene Expression. Nature Reviews Genetics.
[4] C. Cotsapas. Regulatory variation and eQTLs. 2012 Nov 1.
[5] C. Cotsapas, BF Voight, E Rossin, K Lage, BM Neale, et al. (2011). Pervasive Sharing of Genetic Effects in Autoimmune Disease. PLoS Genet 7(8):e1002254. doi:10.1371/journal.pgen.1002254.
[6] Damerval C, Maurice A, Josse JM, de Vienne D (May 1994). Quantitative Trait Loci Underlying Gene Product Variation: A Novel Perspective for Analyzing Regulation of Genome Expression. Genetics 137(1): 289-301. PMC 1205945. PMID 7914503.
[7] Dimas AS, et al. (Sept. 2009). Common regulatory variation impacts gene expression in a cell type-dependent manner. Science 325(5945):1246-50. Epub 2009 Jul 30.
[8] D. de Vienne, A. Leonardi, C. Damerval (Nov 1988). Genetic aspects of variation of protein amounts in maize and pea. Electrophoresis 9(11): 742-750. doi:10.1002/elps.1150091110. PMID 3250877.
[9] Shengjie Yang, Yiyuan Liu, Ning Jiang, Jing Chen, Lindsey Leach, Zewei Luo, Minghui Wang. Genome-wide eQTLs and heritability for gene expression traits in unrelated individuals. BMC Genomics 15(1):13. 2014 Jan 9.
[10] Rachel B. Brem and Leonid Kruglyak. The landscape of genetic complexity across 5,700 gene expression traits in yeast. PNAS 102(5): 1572-1577. 23 Nov 2004.
[11] Michael Morley, Cliona M. Molony, Teresa M. Weber, James L. Devlin, Kathryn G. Ewens, Richard S. Spielman, Vivian G. Cheung. Genetic analysis of genome-wide variation in human gene expression. Nature 430: 743-747. 12 Aug 2004.
[12] Barbara E Stranger, Alexandra C Nica, Matthew S Forrest, Antigone Dimas, Christine P Bird, Claude Beazley, Catherine E Ingle, Mark Dunning, Paul Flicek, Daphne Koller, Stephen Montgomery, Simon Tavaré, Panos Deloukas, Emmanouil T Dermitzakis. Population genomics of human gene expression. Nature Genetics 39: 1217-1224. 16 Sep 2007.
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/31%3A_Variation_2-_Quantitative_Trait_Mapping_eQTLS_Molecular_Trait_Variation/31.07%3A_Tools_and_Resources.txt
George Church discussed a variety of topics that have motivated his past and present research. He first discussed reading and writing genomes, including his own involvement in the development of sequencing and the Human Genome Project. In the latter half, he discussed his more recent endeavor, the Personal Genome Project, which he initiated in 2005.

32.02: Reading and Writing Genomes
As a motivation, consider the following question: Is there any technology that is not biologically motivated or inspired? Biology and our observations of it influence our lives pervasively. For example, within the energy sector, biomass and bioenergy have always existed and are increasingly becoming the focus of attention. Even in telecommunications, the potential of quantum-level molecular computing is promising, and it is expected to be a major player in the future. Church has been involved in molecular computing in his own research, and claims that once harnessed, it has great advantages over its current silicon counterparts. For example, molecular computing can provide at least 10% greater efficiency per joule in computation. More profound, perhaps, is its potential effect on data storage. Current data storage media (magnetic disk, solid-state drives, etc.) are billions of times less dense than DNA. The limitation of DNA as data storage is that it has a high error rate. Church is currently involved in a project exploring reliable storage through the use of error correction and other techniques.

In a 2009 Nature Biotechnology review article [1], Church explores the potential for efficient methods to read and write DNA. He observes that in the past decade there has been a 10$\times$ exponential curve in both sequencing and oligo synthesis, with double-stranded synthesis lagging behind but steadily increasing. Compared to the 1.5$\times$ exponential curve for VLSI (Moore's Law), the increase on the biological side is more dramatic, and there is no theoretical argument yet for why the trend should taper off. In summary, there is great potential for genome synthesis and engineering.

Did You Know?
George Church was an early pioneer of genome sequencing. In 1978, Church was able to sequence plasmids at \$10 per base. By 1984, together with Walter Gilbert, he developed the first direct genomic sequencing method [3]. With this breakthrough, he helped initiate the Human Genome Project in 1984. This proposal aimed to sequence an entire human haploid genome at \$1 per base, requiring a total budget of \$3 billion. This quickly played out into the well-known race between Celera and UCSC-Broad-Sanger. Although the latter barely won in the end, their sequence had many errors and gaps, whereas Celera's version was much higher quality. Celera initially planned on releasing the genome in 50 kb fragments, which researchers could perform alignments on, much like BLAST. Church once approached Celera's founder, Craig Venter, and received a promise to obtain the entire genome on DVD after release. However, questioning the promise, Church decided instead to download the genome directly from Celera by taking advantage of the short fragment releases. Using automated crawl and download scripts, Church managed to download the entire genome in 50 kb fragments within three days!

32.03: Personal Genomes
In 2005, George Church initiated the Personal Genome Project [2].
Now that sequencing costs have rapidly decreased to the point where we can obtain an entire diploid human genome for \$4000 (compared to \$3 billion for a haploid human genome in the Human Genome Project), personal genome and sequence information is becoming increasingly affordable. One important application for this information is personalized medicine. Although many diseases are still complicated to predict, diagnose, and study, we already have a small list of diseases that are highly predictable from genome data. Examples include phenylketonuria (PKU), BRCA-mutation-related breast cancer, and hypertrophic cardiomyopathy (HCM). Many of these and similar diseases are uncertain (sudden onset without warning symptoms) and not normally checked for (due to their relative rareness). As such, they are particularly suitable targets for personalized medicine based on personal genomes, because genomic data provide accurate information that otherwise cannot be obtained. Already, there are over 2500 diseases (due to ~6000 genes) that are highly predictable and medically actionable, and companies such as 23andMe are exploring these opportunities.

As a final remark on the subject, Church commented on some of his personal philosophy regarding personalized medicine. He finds many people reluctant to obtain their genomic information, and attributes this to a negative view among the general public toward GWAS and personalized medicine. He thinks that the media focuses too much on the failures of GWAS. The long-running argument against personalized medicine is that we should focus first on common diseases and variants before studying rare events. Church counters that in fact there is no such thing as a common disease. Phenomena such as high blood pressure or high cholesterol only count as symptoms; many 'common diseases' such as heart disease and cancer have many subtypes and finer categories. Lumping these diseases into one large category only has the benefit of simplifying the teaching of medical students and the selling of pharmaceuticals (e.g., statins, which have fared well commercially but only benefit very few). Church argues that lumping implies a loss of statistical power, and is only useful if it is actually meaningful. Ultimately, everyone dies of their own constellation of genes and diseases, so Church sees splitting (personalized genomics) as the way to proceed.

Personal genomics provides information for planning and research. As a business model, it is analogous to an insurance policy, which provides risk management. As an additional benefit, however, the information received allows for early detection, and consequences may even be avoidable. Access to genomic information allows one to make more informed decisions.

32.04: Further Reading
Personal Genome Project: http://www.personalgenomes.org/

32.05: Bibliography
[1] Peter A. Carr and George M. Church. Genome engineering. Nature Biotechnology, 27(12):1151-1162, December 2009.
[2] G. M. Church. The Personal Genome Project. Molecular Systems Biology, 1(1):msb4100040-E1-msb4100040-E3, December 2005.
[3] G. M. Church and W. Gilbert. Genomic sequencing. Proceedings of the National Academy of Sciences of the United States of America, 81(7):1991-1995, April 1984.
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/32%3A_Personal_Genomes_Synthetic_Genomes_Computing_in_C_vs._Si/32.01%3A_Introduction.txt
Personalized genomics focuses on the analysis of individuals' genomes and their predispositions for diseases, rather than looking at the population level. Personalized medicine is only possible with information about genetics along with information about many other factors such as age, nutrition, lifestyle, or epigenetic markers (such as methylation). To make personalized medicine more of a reality, we need to learn more about the causes and patterns of diseases in populations and individuals.

33.02: Epidemiology - An Overview
Epidemiology is the study of the patterns, causes, and effects of health and disease conditions in defined populations. In order to talk about epidemiology, we first need to understand some basic definitions and terms. Morbidity describes how sick an individual is, whereas mortality refers to whether an individual has died. The incidence is a rate describing the number of new cases of a disease that appear in a population during a period of time. The prevalence is the total steady-state number of cases in the population. The attributable risk is the difference in the rate of a disease between those exposed to a risk factor and those not exposed. Population burden refers to the years of potential life lost (YPLL) and quality-adjusted or disability-adjusted life years (QALY/DALY). A syndrome refers to co-occurring signs or symptoms of a disease that are observed. The prevention challenge is to determine a disease and its cause and to understand whether, when, and how to intervene.

In order to determine disease causes, studies must be designed according to certain principles of experimental design. These principles include control, randomization, replication, grouping, orthogonality, and combinatorics. Control groups are needed so that comparison to a baseline can be done; the placebo effect is real, so having a control group is necessary. The people who get the putative treatment being tested must also be chosen at random so that there is no bias. The study needs to be replicated as well, in order to control for variability in the initial sample. (This is similar to the winner's curse: someone may win a race because they performed outstandingly in that particular round and surpassed their personal average, but in the next round they will probably regress back toward their average.) Understanding variation between different subgroups may also play a large role in the outcomes of experiments. These may include subgroups based on age, gender, or demographics. One subgroup of the population may be contributing in a more profound way than the rest, so looking at each subgroup specifically is important. Orthogonality (the combination of all factors and treatments) and combinatorics (factorial design) must also be taken into account when designing an experiment.

With disease studies in particular, ethics when dealing with human subjects must be taken into account. There are legal and ethical constraints which are overseen by review boards. Clinical trials must be performed either blind (the patient does not know whether they are getting the treatment) or double-blind (the doctor also doesn't know). A patient who knows they have received a treatment may change their habits, causing bias, and a doctor who knows a patient got the treatment may treat them differently or analyze their results differently. Both considerations need to be taken into account to lower the bias that could distort the results of a clinical trial.
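To make these definitions concrete, here is a toy Python calculation using made-up counts from a hypothetical cohort followed for one year (all numbers are assumptions for illustration only):

    # Basic epidemiological quantities from hypothetical counts.
    new_cases_exposed = 30          # new cases among the exposed group
    person_years_exposed = 1_000
    new_cases_unexposed = 10        # new cases among the unexposed group
    person_years_unexposed = 1_000
    existing_cases = 120            # all current cases in the population
    population = 10_000

    incidence_exposed = new_cases_exposed / person_years_exposed
    incidence_unexposed = new_cases_unexposed / person_years_unexposed
    prevalence = existing_cases / population
    attributable_risk = incidence_exposed - incidence_unexposed

    print(incidence_exposed, incidence_unexposed)  # 0.03 and 0.01 cases per person-year
    print(prevalence)                              # 0.012 of the population currently affected
    print(attributable_risk)                       # 0.02 extra cases per person-year due to exposure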
Example
An example of the need for a randomized control trial is the treatment of Ebola. A treatment must be distributed randomly to individuals being treated in different hospitals, and it must be blind. If someone believes they are getting the vaccine, they may alter their habits to protect themselves, which may affect the outcome. If only patients of one hospital get the vaccine, there is a possibility that the effects seen are just from that hospital being more careful.

FAQ
Q: In poorly designed experiments, is there one aspect that is most commonly overlooked?
A: The most commonly missed is subgroup structure. It is sometimes not obvious what the different subgroups could be. To help with this, researchers can look at general properties of a predictor by trying to cluster cases and controls independently and visualizing the clustering. If there is substructure other than case/control in the clustering, researchers can look for variables within each cluster to see what is driving the substructure.
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/33%3A_Personal_Genomics/33.01%3A_Introduction.txt
Genetic epidemiology focuses on the genetic factors contributing to disease. Genome-wide association studies (GWAS), previously described in depth, identify genetic variants that are associated with a particular disease while ignoring everything else that may be a factor. With the decrease in the cost of whole-genome sequencing, these types of studies are becoming much more frequent.

In genetic epidemiology there are many genetic factors that you can test to identify diseases in a particular individual. You can look at family risk alleles, which are inherited with a common trait in specific genes or variants. You can study monogenic, actionable protein-coding mutations, which are the best understood, would have the highest impact, and would be the easiest to interpret. There is the possibility of testing all coding SNPs (single nucleotide polymorphisms) with a known disease association. There are debates over whether a patient needs to know, or would want to know, this information, especially if the disease is not treatable. A person's quality of life may decrease just from knowing they may have an untreatable disease, even if no symptoms are exhibited. You can also test all coding and non-coding associations from GWAS, all common SNPs regardless of association to any disease, or the whole genome.

Did You Know?
23andMe is a personal genomics company that offers saliva-based direct-to-consumer genome tests. 23andMe gives consumers raw genetic data, ancestry-related results, and estimates of predisposition for more than 90 traits and conditions. In 2010, the FDA notified several genetic testing companies, including 23andMe, that their genetic tests are considered medical devices and federal approval is required to market them. In 2013, the FDA ordered 23andMe to stop marketing its Saliva Collection Kit and Personal Genome Service (PGS), as 23andMe had not demonstrated that it had “analytically or clinically validated the PGS for its intended uses” and the “FDA is concerned about the public health consequences of inaccurate results from the PGS device” [? ]. The FDA expressed concerns over both false negative and false positive genetic risk results, saying that a false positive may cause consumers to undergo surgery, intensive screening, or chemoprevention in the case of BRCA-related risk, for example, while a false negative may prevent consumers from getting the care they need.

In class, we discussed whether people should be informed about potential risk alleles they may carry. Often, people may misunderstand the probabilities provided to them and either underestimate or overestimate how concerned they should be. The argument was also raised that people should not be told they are at risk if there is nothing current medicine and technology can do to mitigate the risk. If people are going to be informed about a risk, the risk should be actionable; i.e., they should be able to do something about it instead of just living in worry, as that added stress may cause other health problems for them.

Not only is there the choice of what to test, there is also the question of when to test someone for a particular condition. Diagnostic testing occurs after symptoms are displayed, in order to confirm a hypothesis or distinguish between different possible conditions. You can also test predictive risk, which occurs before symptoms are even shown by a patient. You may test newborns in order to intervene early, or even do pre-natal testing via ultrasound, maternal serum, probes, or chorionic villus sampling.
In order to test which disorders you may pass on to your child, you can do pre-conception testing. You can also do carrier testing to determine whether you are a carrier of a particular mutant allele that may run in your family history.

Testing genetics and biomarkers can be tricky because it may be unknown whether the genetic variant or biomarker observed is causing the disease or is a consequence of having the disease. To interpret disease associations, we need to use epigenomics and functional genomics. The genetic associations are still only probabilistic: if you have a genetic variant, there is still a possibility that you will not get the disease. In Bayesian terms, however, the posterior probability of disease increases as the prior increases. As we find more and more associations and variants, the predictive value will increase.
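A toy Python sketch of this Bayesian point: the same risk variant implies a very different posterior probability of disease depending on the prior (baseline prevalence). All numbers below are hypothetical.

    # Posterior disease risk for a variant carrier under a simple model in
    # which carriers have `relative_risk` times the risk of non-carriers.
    def posterior_risk(prior, relative_risk, carrier_freq):
        # prior = carrier_freq * p_carrier + (1 - carrier_freq) * p_noncarrier,
        # with p_carrier = relative_risk * p_noncarrier; solve for p_carrier.
        p_noncarrier = prior / (carrier_freq * relative_risk + (1 - carrier_freq))
        return relative_risk * p_noncarrier

    print(posterior_risk(prior=0.01, relative_risk=3.0, carrier_freq=0.05))  # ~0.027
    print(posterior_risk(prior=0.10, relative_risk=3.0, carrier_freq=0.05))  # ~0.27

The absolute risk conveyed by the same variant is roughly ten times higher for the common condition than for the rare one, which is one reason reported risks are so easily misinterpreted.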
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/33%3A_Personal_Genomics/33.03%3A_Genetic_Epidemiology.txt
Molecular epidemiology involves looking at the molecular biomarkers of a disease state. This includes looking at gene expression profiles, DNA methylation patterns (i.e., epigenomics), and chromatin structure and organization in specific cell types. In earlier chapters, we discussed the link between gene expression (as RNA or proteins) and SNPs in the context of eQTL studies. As a reminder, eQTLs (expression quantitative trait loci) seek linear correlations between gene expression levels and different variants of a genetic locus. This section will focus on understanding the role of epigenomic markers as molecular indicators of a disease.

It is important to understand that multiple factors, and thus multiple datasets, come into play in understanding the epigenomic basis of disease: methylation patterns of sample patients (M), genomic information (G) for the same individuals, environmental data (E, covering covariates like age, gender, smoking habits, etc.), and phenotype quantifications (P, which can capture multiple phenotypic markers, for example the number of neuronal plaques per patient in Alzheimer's disease). Furthermore, we need to understand the various interconnections and dependencies between these datasets to make meaningful conclusions about the influence of methylation on a certain disease. To remove experimental, technical or environmental covariates, we rely on either known covariates or ICA (independent component analysis)-inferred corrections. To link genetic data to methylation patterns, we look for meQTLs (methylation quantitative trait loci), the methylation equivalent of eQTLs; molecular phenotypes such as expression level or methylation level are also quantitative traits. Finally, to link methylation patterns with diseases, we use EWAS (epigenome-wide association studies).

meQTLs
The discovery of meQTLs follows a process that is highly similar to the methodology used for discovering eQTLs. To discover cis-meQTLs (i.e., meQTLs whose effect on methylation is proximal to the tested locus), we select a genomic window and use a linear model to test whether or not we see a correlation between methylation and SNP variants in that region. We test whether the correlation is significant via an F-test, where the null hypothesis is that the additional model complexity introduced via the genomic information does not explain a significant portion of the variation in methylation patterns (a small sketch of this nested-model test appears below). Other methods of discovering meQTLs include permutation and linear mixed models (LMM).

Example
An example of using meQTLs to discover the connection between methylation, genotype, and disease is the Memory and Aging Project. 750 elderly people enrolled in the project many years ago; today, most have died and donated their brains to science. The genotype and the methylation of the dorsolateral prefrontal cortex were determined in order to study the connection between methylation and the phenotype of Alzheimer's, and how the genotype may affect the methylation profile. SNP data, methylation, environmental factors (such as age, gender, sample batch, smoking status, etc.), and phenotype were taken into account. First, covariates needed to be discovered and excluded to make sure the results obtained are not due to confounding factors. This is done by decomposing the matrix of methylation data using ICA, which enables the discovery of the variables that drive the most variability in the trait. Sample batch and cell-type mixture can have the biggest effect on the variation between individuals.
After this is corrected for, linear models, permutation tests, and linear mixed models are used to determine cis-meQTLs, i.e., how much the genotype explains the methylation level.

EWAS
Epigenome-wide association studies (EWAS) aim to find connections between the methylation pattern of a patient and their phenotype. Much like GWAS, EWAS relies on linear models and p-value testing for finding linkages between epigenomic profiles and disease states. Together with meQTLs, EWAS can also potentially shine light on whether a given methylation pattern is the cause or the result of a disease. Ideally, we would like to generate models that allow us to predict disease states (phenotypes) based on methylation.

There are some drawbacks to EWAS. First, the variance in methylation patterns attributable to phenotype is typically very small, making it difficult to link epigenomic states to disease states, similar to seeking a needle in a haystack. To improve this situation, we need to control for other sources of variance in our methylation data, such as gender and age. Gender, for example, accounts for a large amount of variance in the case of Alzheimer's disease. We additionally need to account for variance due to genotype (in the form of meQTLs). Additionally, variability across samples is a major issue in collecting methylation data for EWAS [? ]. As different cell types in the same individual will have different epigenomic signatures, it is important that relevant tissue samples are collected and that the data is corrected for the different cell/tissue types involved in a study.
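As promised above, here is a minimal Python sketch, on synthetic data, of the nested-model F-test used for cis-meQTL discovery: a covariates-only model of methylation is compared against a model that also includes the SNP genotype. This is not the Memory and Aging Project pipeline; the covariate and effect sizes are assumptions for illustration.

    # cis-meQTL test: does adding genotype to the covariate model explain
    # significantly more of the variance in methylation?
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300
    genotype = rng.integers(0, 3, size=n)                 # 0/1/2 allele counts
    age = rng.normal(80, 6, size=n)                       # an example known covariate
    methylation = 0.5 + 0.05 * genotype + 0.002 * age + rng.normal(0, 0.05, size=n)

    reduced = sm.OLS(methylation, sm.add_constant(age)).fit()
    full = sm.OLS(methylation, sm.add_constant(np.column_stack([age, genotype]))).fit()

    f_stat, p_value, df_diff = full.compare_f_test(reduced)
    print(f_stat, p_value)   # small p-value -> the SNP is a candidate cis-meQTL

An EWAS scan uses the same machinery, except that the term being tested is the phenotype rather than the genotype, and the test is repeated across all measured methylation probes with multiple-testing correction.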
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/33%3A_Personal_Genomics/33.04%3A_Molecular_Epidemiology.txt
A central question for personal genomics is which markers are causal for disease. For example, one might ask whether methylation at a certain locus, or a certain histone modification, increases a person's risk for a certain disease. This question is difficult because we need to separate spurious correlations from causal effects: for example, it is possible that a mutation elsewhere in the genome causes the disease and also increases the chance of observing a particular marker, but that the marker has no causal effect on the disease. In this case, we would find a correlation between the disease phenotype and the presence of the marker despite the lack of any causal effect.

The key insight that allows us to determine causal effects, as opposed to mere correlations, is the observation that while the genotype may influence a person's risk for a particular disease, the disease will not modify a person's genotype. This allows us to use genotype as an instrumental variable for methylation. It limits the number of possible models, so that we can statistically test which model is most consistent with the observed data. There are three possibilities for modeling complex human diseases: the independent associations model, the interaction model, and the causal pathway model, depicted in Figure 33.4. We will use the example of studying the causal relationship between methylation at a certain locus and disease to demonstrate how to test for a causal effect.

Under the independent associations model, the data should contain no correlation between the genotype and the disease, which distinguishes this model from the interaction and causal pathway models. However, there will be correlations between each of the factors and the disease separately. Thus, this model is straightforward to test for. An example of this would be two independent risk genes.

Under the interaction model, factor B's effect on a disease may vary depending on the value of A. For example, a drug's effect on someone can differ based on their genotype. To test for this, we determine the statistical significance of the effect of the interaction term, $\beta_{2}$, in the regression $D=\beta_{0} A+\beta_{1} B+\beta_{2} A \cdot B$. If there is a significant interaction effect, we can isolate the separate effects by stratifying across different levels of A.

The causal pathway model is a little more complex. If we notice a correlation between a risk factor and a disease, we may wonder whether there is a direct link between risk factor A and the disease, or whether risk factor A affects risk factor B, which in turn affects the disease. In the case that risk factor A only has an effect on the disease through B, we will observe that after conditioning on B, the correlation between A and D disappears; that is, B “mediates” this interaction. In reality, the effect of A on a disease is usually only partially mediated through B, so we can instead check whether the effect size of A on the disease decreases when B is included in the model (a toy version of these regression tests is sketched below).

Polygenic Risk Prediction
One of the most central questions of personal genomics is the prediction of genetic predispositions to various traits, using multiple genes to inform our predictions. The basic approach is explained in Figure 33.5. First, the dataset is divided into a training and a test set, and in the training cohort we select which SNPs are most important and their appropriate weightings. Then we use the test set to evaluate the accuracy of our predictions.
Finally, we use this model to predict genetic predispositions for the target cohort, applying the SNP weights learned in the training cohort and validated on the test set (a short sketch of this scoring step also appears below).

33.06: What Have We Learned?
In this section we have learned about the basics of epidemiology, both genetic and molecular. We have learned techniques for designing an epidemiological experiment, and how and when to use genetic screens for identifying diseases. Lastly, we focused on resolving causality vs. correlation between epigenetic markers and diseases using genetics as an instrumental variable.
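The following Python sketch illustrates, on synthetic data, the two regression tests described in this section: the significance of the interaction term $\beta_{2}$, and the mediation check in which the effect of A shrinks once B is conditioned on. The data-generating assumptions (a marker B partly driven by genotype A, and a disease driven only by B) are made up for illustration.

    # Interaction and mediation tests on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 1000
    A = rng.integers(0, 3, size=n).astype(float)     # genotype (instrument)
    B = 0.5 * A + rng.normal(0, 1, size=n)           # marker partly driven by A
    D = 0.8 * B + rng.normal(0, 1, size=n)           # disease driven by B only

    # (1) Interaction model: test beta_2 in D ~ A + B + A*B
    inter = sm.OLS(D, sm.add_constant(np.column_stack([A, B, A * B]))).fit()
    print("interaction p-value:", inter.pvalues[3])

    # (2) Causal pathway: effect of A alone vs. effect of A conditioned on B
    a_only = sm.OLS(D, sm.add_constant(A)).fit()
    a_with_b = sm.OLS(D, sm.add_constant(np.column_stack([A, B]))).fit()
    print("A effect alone:", a_only.params[1])
    print("A effect given B:", a_with_b.params[1])   # shrinks toward zero under mediation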
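And a minimal sketch of the polygenic scoring step itself: given per-SNP weights (for example, effect sizes estimated in the training cohort), an individual's risk score is simply a weighted count of risk alleles. The genotypes and weights below are synthetic; real pipelines additionally handle linkage disequilibrium, p-value thresholding, and ancestry matching.

    # Polygenic risk scores as a weighted sum of risk-allele counts.
    import numpy as np

    rng = np.random.default_rng(3)
    n_individuals, n_snps = 5, 100
    genotypes = rng.integers(0, 3, size=(n_individuals, n_snps))  # 0/1/2 risk alleles
    weights = rng.normal(0, 0.1, size=n_snps)                     # per-SNP effect sizes

    risk_scores = genotypes @ weights    # one score per individual
    print(risk_scores)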
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/33%3A_Personal_Genomics/33.05%3A_Causality_Modeling_and_Testing.txt
What is cancer? Cancer represents a group of diseases or tumors that involve abnormal cell growth and have the potential to spread to many parts of the body. A cancer usually starts with mutations in one or more “driver genes”, genes that can drive tumorigenesis. These mutations are called driver events, meaning that they provide a selective fitness advantage for the cell; other mutations that do not provide a fitness advantage are called passenger mutations.

The main objective of cancer genomics is to generate a comprehensive catalog of cancer genes and pathways. Many cancer genome projects have been started within the last ten years (mainly due to the drop in genome sequencing costs); for example, the Cancer Genome Atlas was started in 2006 with the aim of analyzing 20-25 tumor types with 500 tumor/normal pairs each via a large number of experiments (SNP arrays, whole-exome sequencing, RNA-seq, and others). The ICGC (International Cancer Genome Consortium) is a bigger, umbrella organization that organizes similar projects around the world, with the end goal of studying 50 tumor types with 500 tumor/normal pairs each.

Section 2: Characterization
For each tumor, our aim is to obtain a complete, base-level characterization of that tumor, its evolutionary history and the mechanisms that shaped it. We can use massively parallel sequencing to get the base-level genome characterization, but this approach brings with it some associated challenges.

1. Massive amounts of data
The main challenge with increased amounts of data is an increase in the computational power required to analyze the data, as well as the storage costs associated with keeping track of all of the sequenced genomes. There also needs to be an analysis pipeline (automated, standardized, reproducible) to obtain consistent findings across the different characterization efforts. Finally, we need to come up with new ways of visualizing and reporting on large-scale data.

2. Sensitivity / Specificity
Cancer characterization starts with the proper identification of SNP mutations present in cancer cells, and maximal removal of false positive calls. When selecting tumor samples, the extracted DNA is a mix of normal genomes and complex tumor genomes. The mutational allelic fraction (the fraction of DNA molecules from a locus that carry a mutation) is used to study the significance of a mutation and its prevalence in the cancer subtype. This fraction depends on the purity, local copy number, and multiplicity of the tumor sample, and on the cancer cell fraction (CCF, the fraction of cancer cells that carry the mutation); a toy calculation relating these quantities is sketched after this list. Clonal mutations are carried by all cancer cells, and sub-clonal mutations are carried by a subset of the tumor cells.

As well as detecting the presence of clonal and subclonal mutations, proper analysis requires the removal of false positive mutagenic events. Two types of false positives include sequencing errors and germline mutations. Sequencing errors can come from misread bases, sequencing artifacts, and misaligned reads, while germline mutations usually occur in predictable places in the genome (1000/Mb known, 10-20/Mb novel). By having multiple reads of the same sequence, the likelihood of repeated errors in sequencing drops rapidly, and by knowing where in the genome a germline mutation is likely, a filter can correct for the additional false positive probability. The overall sensitivity of detecting single nucleotide variations depends on the frequency of background mutations and the number of reads supporting the alternative allele.
A third type of false positive can come from cross-patient contamination, if the tumor sample contains DNA from another person. ContEst is a method to accurately detect contamination by comparison to a SNP array. A mutation caller is a classifier asking, at every genomic locus, “Is there a mutation here?”. These classifiers are evaluated using Receiver Operating Characteristic (ROC) curves, which depend on the allele fraction, the coverage of the tumor and normal samples, and sequencing and alignment noise. MuTect is a highly sensitive somatic mutation caller. The MuTect pipeline is as follows: tumor and normal samples are passed into a variant detection statistic (which compares the variant model to the null hypothesis), which is passed through site-based filters (proximal gap, strand bias, poor mapping, triallelic site, clustered position, observed in control), then compared to a panel of normal samples, and finally classified as candidate variants. MuTect can detect low allele fraction mutations and is thus suited for studying impure and heterogeneous tumors.

3. Discovering mutational processes
Instead of detecting the presence of mutations in cancer genes, a different approach is to discover whether there are specific patterns among the mutations in the cancer samples. A “Lego plot” is a way to visualize patterns of mutations, in which the height of each color represents the frequency of one of the 6 types of base pair substitutions, and the frequency of each is plotted relative to the 16 different contexts the mutation could occur in (neighboring nucleotides). The specific types of mutagenic events in each type of cancer can be plotted and analyzed. As an example, a novel mutation pattern (AA > AC) is found in esophageal cancer. Cancers can be grouped by these specific mutational spectra. Dimensionality reduction of Lego plot data using non-negative matrix factorization (NMF) can be used to identify fundamental spectral signatures (a small sketch of this factorization appears after this list).

4. Estimating purity, ploidy and cancer cell fractions
As well as detecting mutations in cancer cells, removing false positives, and detecting patterns of mutations, a proper characterization of each tumor sample is required. Because of heterogeneity and sample impurities, estimating the purity, absolute copy number and cancer cell fraction (CCF) of the tumor sample being sequenced is needed to get the correct total number and prevalence of the mutated alleles.

5. Tumor heterogeneity and evolution
Samples can have large distributions of point mutations and copy number alterations, but a Bayesian clustering algorithm can help identify the mutations and copy number alterations in distinct subpopulations.
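As referenced in item 2 above, the following Python sketch relates the observed mutational allelic fraction to purity, local copy number, multiplicity, and cancer cell fraction. This mixture formula is the standard intuition behind purity/ploidy estimation; treat the exact parameterization as an illustrative assumption rather than the specification of any particular tool.

    # Expected allelic fraction of a somatic mutation in an impure tumor sample.
    def expected_allelic_fraction(purity, tumor_copy_number, multiplicity, ccf,
                                  normal_copy_number=2):
        # Mutant reads come only from cancer cells that carry the mutation.
        mutant = purity * ccf * multiplicity
        # Total reads at the locus come from tumor plus contaminating normal cells.
        total = purity * tumor_copy_number + (1 - purity) * normal_copy_number
        return mutant / total

    # Clonal heterozygous mutation (CCF = 1, multiplicity = 1) in a diploid
    # tumor region at 60% purity: expected allelic fraction 0.3.
    print(expected_allelic_fraction(0.6, 2, 1, 1.0))
    # The same mutation in only 40% of the cancer cells (subclonal): 0.12.
    print(expected_allelic_fraction(0.6, 2, 1, 0.4))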
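And, as referenced in item 3, a short sketch of extracting mutational signatures from Lego-plot-style data by non-negative matrix factorization: a samples-by-contexts count matrix is factored into signatures and per-sample exposures. The counts are synthetic, the 96 categories correspond to the 6 substitution types times the 16 neighboring-base contexts described above, and the number of signatures is an assumption.

    # NMF decomposition of a mutation-count matrix into spectral signatures.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(4)
    n_samples, n_contexts, n_signatures = 50, 96, 3
    counts = rng.poisson(lam=5, size=(n_samples, n_contexts)).astype(float)

    model = NMF(n_components=n_signatures, init="nndsvda", max_iter=500)
    exposures = model.fit_transform(counts)    # samples x signatures
    signatures = model.components_             # signatures x contexts
    print(exposures.shape, signatures.shape)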
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/34%3A_Cancer_Genomics/Section_1%3A_Introduction.txt
The fundamental challenge in interpreting the sequencing results lies in differentiating driver mutations from passenger mutations. In order to accomplish this, we need to model the background mutational processes of the analyzed sequences and identify pathways/regions with more mutations than would have been predicted solely by the background model. Those regions then become our candidate cancer genes. However, we may select an incorrect background model or encounter systematic artifacts in mutation calling. In this case, we have to go back to the drawing board and come up with a better background model before we can proceed with candidate gene identification. Many tools have been developed in an effort to accurately detect candidate cancer genes and pathways (sub-networks), including NetSig, GISTIC, and MutSig. NetSig is used to identify clusters of mutated genes in protein-protein interaction networks. GISTIC can be used to score regions according to the frequency and amplitude of copy-number events. MutSig is used to score genes according to the number and types of mutations. The main analysis steps in finding candidate cancer genes are 1) estimation of the background mutation rate (which varies across samples), 2) calculation of p-values based on statistical models, and 3) correction for multiple hypothesis testing (N genes). As sample size and/or mutation rate increases, the list of significant cancer genes grows and comes to contain many implausible ("fishy") genes. One major breakthrough in reducing these fishy genes has been the proper modeling of background mutations. Standard tools use a constant background rate (with separate rates for CpG, C/G, A/T, and indels) while ignoring heterogeneity across samples, additional sequence contexts, and position in the genome. But it was discovered that the mutation rate across cancers varies more than 1,000-fold, that the mutation rate is lower in highly expressed genes, and that the frequency of somatic mutations correlates with DNA replication time: there are more mutations in areas of the genome that replicate late than in those that replicate early. MutSigCV is a tool which corrects for this variation in background mutation rates.

Section 6: What Have We Learned?

The drop in sequencing costs over the last ten years has led to a need for automated analysis pipelines and more computational and storage power to handle the vast flood of data being generated by a multitude of parallel sequencing efforts. The major tasks of cancer genome projects going forward can be roughly grouped into two areas: characterization and interpretation. For characterization, there still seems to be a need for a systematic benchmark of analysis methods (one example is ROC curves - curves that illustrate the performance of a classifier with a varying discrimination threshold). We saw that cancer mutation rates tend to vary more than 1,000-fold across different tumor types. We also learned that clonal and subclonal mutations can be used for studying tumor evolution and heterogeneity. Running a significance analysis on the sequencing results identified a long-tailed distribution of significantly mutated genes. Since we are dealing with a long-tailed distribution, we can increase the predictive power of our models and detect more cancer genes by integrating multiple sources of evidence. However, we have to take into account that mutation rates differ according to sample, gene, and mutation category in each study.
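The sketch below illustrates the general recipe of steps 1-3 described above (it is a toy illustration, not MutSig or MutSigCV): score each gene by comparing its observed mutation count to a binomial background model, then apply a Benjamini-Hochberg correction across genes. The counts, coverage values and background rate are invented placeholders.

```python
import numpy as np
from scipy.stats import binom

def candidate_genes(observed, covered_bases, background_rate, fdr=0.1):
    """Toy significance analysis (an illustration, not MutSig itself).

    observed[i]      -- somatic mutations seen in gene i across the cohort
    covered_bases[i] -- patient-bases sequenced in gene i (length x #samples)
    background_rate  -- assumed per-base background mutation rate
    """
    observed = np.asarray(observed)
    covered_bases = np.asarray(covered_bases)
    # P(X >= observed) under a binomial background model.
    pvals = binom.sf(observed - 1, covered_bases, background_rate)

    # Benjamini-Hochberg correction across the N tested genes.
    n = len(pvals)
    order = np.argsort(pvals)
    thresholds = fdr * np.arange(1, n + 1) / n
    passed = pvals[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    significant = np.zeros(n, dtype=bool)
    significant[order[:k]] = True
    return pvals, significant

pvals, hits = candidate_genes(
    observed=[12, 3, 40], covered_bases=[150_000, 90_000, 600_000],
    background_rate=1e-5)
print(pvals, hits)
```

Replacing the single `background_rate` with per-sample, per-context, expression- and replication-timing-aware rates is exactly the kind of refinement that MutSigCV introduces to keep fishy genes out of the significant list.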
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/34%3A_Cancer_Genomics/Section_3%3A_Interpretation.txt
What is CRISPR/Cas? The CRISPR/Cas system is the prokaryotic immune system. When a virus or other foreign attacker attempts to infect a prokaryotic cell by injecting its own DNA, the prokaryote's CRISPR/Cas system is responsible for removing the foreign DNA. How does it do this? The CRISPR/Cas system has two parts, CRISPR and Cas. The CRISPR part (a CRISPR array) is responsible for "remembering" the foreign DNA, while the Cas part (the Cas proteins) is responsible for cutting out the recognized foreign DNA. A CRISPR array is made up of segments of short spacer DNA, which are the result of previous exposures to foreign DNA. These spacers are transcribed into RNAs, which can be used to match the foreign DNA that the spacer was built from. These RNAs are then picked up by Cas proteins. When a Cas protein picks up a particular RNA, it becomes sensitive to the matching DNA sequence. The next time the same foreign DNA is inserted into the prokaryote, the Cas proteins sensitive to it will match the foreign DNA and cut it, causing it to become inactive.

Why is CRISPR/Cas important to us? Because nature is giving us an effective way of editing a genome! In order to accurately edit a genome, it is important to be able to cut a sequence at precisely the targeted location. Once a cut is made, repair mechanisms can go in and make a modification at the target site. The CRISPR/Cas system is a naturally occurring, time-tested method of making alterations to DNA sequences. Currently, the ability of researchers to perturb and interrogate the genome lags behind the current techniques for reading it. CRISPR provides a way to write to the genome that is as effective as our ability to read it, allowing us to determine what variations in the genetic code give rise to diseases of interest.

Cas-9

The CRISPR/Cas-9 system has been of particular interest. Cas-9 is an endonuclease that can trigger gene repair by making cuts at specific target sites, guided by a 20-nucleotide sgRNA. When a target site that is complementary to the guide sgRNA is found and is followed by an NGG PAM sequence, the Cas-9 protein will cut the DNA at that target site. By programming Cas-9 with a specific sgRNA, it can be made to create double-stranded breaks at specific targets, while the PAM requirement helps prevent the system from targeting its own genome. Cas-9 has been shown to be much more efficient at targeting than more established methods. Unfortunately, one drawback of Cas-9 is that it can make cuts at off-target sites that are not fully complementary to the RNA guide, which makes accurate genome editing a challenge.

2: Current Research Directions

Improvement of Cas-9

Recent research has produced a variant of Cas-9 that greatly improves its specificity, reducing the likelihood of off-target errors.

Current research being done with CRISPR/Cas-9

The recent improvement of Cas-9 has opened new pathways of research. For example, it can be used to analyze the function of specific genes by using CRISPR/Cas-9 to remove just that gene and observing the effect of the removal. One example of an application of this is in the study of melanoma cancer cells. Vemurafenib is an FDA-approved drug for treating melanoma, and has been shown to be effective on melanoma cells that have a V600E BRAF mutation by interrupting the BRAF pathway and inducing programmed cell death.
Unfortunately, in many cases the cancer will become resistant to the drug by creating alternative survival pathways. CRISPR/Cas-9 can be used to determine the genes that allow the cancer cells to develop these alternative pathways. By programming Cas-9 with guides that target every gene individually, and tagging each guide so that it is possible to determine which gene was disrupted in which cell, it is possible to determine which genes are required for the cells to survive the drug.

3: What Have We Learned?

CRISPR/Cas-9 produces double-stranded breaks in DNA and has two main components: a roughly 20-nucleotide guide RNA that specifies the target sequence, and the Cas-9 nuclease, which cuts only where the matching DNA target is followed by an NGG PAM sequence.
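As a minimal sketch of the targeting rule summarized above (a 20-nucleotide protospacer followed by an NGG PAM), the function below scans a DNA string for candidate Cas-9 target sites on the forward strand. It illustrates the rule only; it is not a real guide-design tool, and the example sequence is an arbitrary placeholder.

```python
import re

def find_cas9_sites(seq, guide_len=20):
    """Return (start, protospacer, PAM) for forward-strand sites of the form
    [20 nt][NGG]. Reverse-strand sites and off-target scoring are deliberately
    omitted in this sketch."""
    seq = seq.upper()
    sites = []
    # Lookahead so overlapping sites are all reported: 20 bases, then N + GG.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        sites.append((m.start(), m.group(1), m.group(2)))
    return sites

example = "TTGACCTGAAGCTGATCGATCGTACGTACGTTAGGCTAGCA"
for start, protospacer, pam in find_cas9_sites(example):
    print(start, protospacer, pam)
```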
textbooks/bio/Computational_Biology/Book%3A_Computational_Biology_-_Genomes_Networks_and_Evolution_(Kellis_et_al.)/35%3A_Genome_Editing/1%3A_Introduction.txt
INTRODUCTION

Energy is the ability to do work. Work is done when a force is applied to an object over a distance. Any moving object has kinetic energy or energy of motion, and it thus can do work. Similarly, work has to be done on an object to change its kinetic energy. The kinetic energy of an object of mass \(m\) and speed \(v\) is given by the relation \[E=\dfrac{1}{2}mv^2.\] Sometimes energy can be stored and used at a later time. For example, a compressed spring and water held back by a dam both have the potential to do work. They are said to possess potential energy. When the spring or water is released, its potential energy is transformed into kinetic energy and other forms of energy such as heat. The energy associated with the gravitational force near the surface of the earth is potential energy. Other forms of energy are really combinations of kinetic and potential energy. Chemical energy, for example, is the electrical potential energy stored in atoms. Heat energy is a combination of the potential and kinetic energy of the particles in a substance.

FORMS OF ENERGY

Mechanical energy puts something in motion. It moves cars and lifts elevators. A machine uses mechanical energy to do work. The mechanical energy of a system is the sum of its kinetic and potential energy. Levers, which need a fulcrum to operate, are the simplest type of machine. Wheels, pulleys and inclined planes are the basic elements of most machines.

Chemical energy is the energy stored in molecules and chemical compounds, and is found in food, wood, coal, petroleum and other fuels. When the chemical bonds are broken, either by combustion or other chemical reactions, the stored chemical energy is released in the form of heat or light. For example, muscle cells contain glycogen. When the muscle does work the glycogen is broken down into glucose. When the chemical energy in the glucose is transferred to the muscle fibers some of the energy goes into the surroundings as heat.

Electrical energy is produced when unbalanced forces between electrons and protons in atoms create moving electrons called electric currents. For example, when we spin a copper wire through the poles of a magnet we induce the motion of electrons in the wire and produce electricity. Electricity can be used to perform work such as lighting a bulb, heating a cooking element on a stove or powering a motor. Note that electricity is a "secondary" source of energy. That means other sources of energy are needed to produce electricity.

Radiant energy is carried by waves. Changes in the internal energy of particles cause the atoms to emit energy in the form of electromagnetic radiation, which includes visible light, ultraviolet (UV) radiation, infrared (IR) radiation, microwaves, radio waves, gamma rays, and X-rays. Electromagnetic radiation from the sun, particularly light, is of utmost importance in environmental systems because biogeochemical cycles and virtually all other processes on earth are driven by it.

Thermal energy or heat energy is related to the motion or vibration of molecules in a substance. When a thermal system changes, heat flows in or out of the system. Heat energy flows from hot bodies to cold ones. Heat flow, like work, is an energy transfer. When heat flows into a substance it may increase the kinetic energy of the particles and thus elevate its temperature. Heat flow may also change the arrangement of the particles making up a substance by increasing their potential energy.
This is what happens to water when it reaches a temperature of 100ºC. The molecules of water move further away from each other, thereby changing the state of the water from a liquid to a gas. During the phase transition the temperature of the water does not change.

Nuclear energy is energy that comes from the binding of the protons and neutrons that make up the nucleus of the atoms. It can be released from atoms in two different ways: nuclear fusion or nuclear fission. In nuclear fusion, energy is released when atoms are combined or fused together. This is how the sun produces energy. In nuclear fission, energy is released when atoms are split apart. Nuclear fission is used in nuclear power plants to produce electricity. Uranium 235 is the fuel used in most nuclear power plants because it undergoes a chain reaction extremely rapidly, resulting in the fission of trillions of atoms within a fraction of a second.

SOURCES AND SINKS

The source of energy for many processes occurring on the earth's surface comes from the sun. Radiating solar energy heats the earth unevenly, creating air movements in the atmosphere. Therefore, the sun drives the winds, ocean currents and the water cycle. Sunlight energy is used by plants to create chemical energy through a process called photosynthesis, and this supports the life and growth of plants. In addition, dead plant material decays, and over millions of years is converted into fossil fuels (oil, coal, etc.).

Today, we make use of various sources of energy found on earth to produce electricity. Using machines, we convert the energies of wind, biomass, fossil fuels, water, heat trapped in the earth (geothermal), nuclear and solar energy into usable electricity. The above sources of energy differ in amount, availability, time required for their formation and usefulness. For example, the energy released by one gram of uranium during nuclear fission is much larger than that produced during the combustion of an equal mass of coal.

Table: US ENERGY PRODUCTION (Quadrillion BTU) (Source: US Department of Energy)
                              1975              2000
Coal                          14.989 (24.4%)    22.663 (31.5%)
Natural Gas (dry)             19.640 (32.0%)    19.741 (27.5%)
Crude Oil                     17.729 (28.9%)    12.383 (17.2%)
Nuclear                        1.900 (3.1%)      8.009 (11.2%)
Hydroelectric                  3.155 (5.1%)      2.841 (4.0%)
Natural Gas (plant liquid)     2.374 (3.9%)      2.607 (3.6%)
Geothermal                     0.070 (0.1%)      0.319 (0.4%)
Other                          1.499 (2.5%)      3.275 (4.6%)
TOTAL                         61.356            71.838

An energy sink is anything that collects a significant quantity of energy that is either lost or not considered transferable in the system under study. Sources and sinks have to be included in an energy budget when accounting for the energy flowing into and out of a system.

CONSERVATION OF ENERGY

Though energy can be converted from one form to another, energy cannot be created or destroyed. This principle is called the "law of conservation of energy." For example, in a motorcycle, the chemical potential energy of the fuel changes to kinetic energy. In a radio, electricity is converted into kinetic energy and wave energy (sound). Machines can be used to convert energy from one form to another. Though ideal machines conserve the mechanical energy of a system, some of the energy always turns into heat when using a machine. For example, heat generated by friction is hard to collect and transform into another form of energy. In this situation, heat energy is usually considered unusable or lost.

ENERGY UNITS

In the International System of Units (SI), the unit of work or energy is the Joule (J).
For very small amounts of energy, the erg is sometimes used. An erg is one ten-millionth of a Joule:

1 Joule = 10,000,000 ergs

Power is the rate at which energy is used. The unit of power is the Watt (W), named after James Watt, who perfected the steam engine:

1 Watt = 1 Joule/second

Power is sometimes measured in horsepower (hp):

1 horsepower = 746 Watts

Electrical energy is generally expressed in kilowatt-hours (kWh):

1 kilowatt-hour = 3,600,000 Joules

It is important to realize that a kilowatt-hour is a unit of energy, not power. For example, an iron rated at 2000 Watts would consume 2 × 3,600,000 J = 7,200,000 J of energy in 1 hour. Heat energy is often measured in calories. One calorie (cal) is defined as the heat required to raise the temperature of 1 gram of water from 14.5 ºC to 15.5 ºC:

1 calorie = 4.189 Joules

An old, but still used, unit of heat is the British Thermal Unit (BTU). It is defined as the heat energy required to raise the temperature of 1 pound of water from 63 ºF to 64 ºF:

1 BTU = 1055 Joules

Physical Quantity    Name      Symbol    SI Unit
Force                Newton    N         kg·m/s²
Energy               Joule     J         kg·m²/s²
Power                Watt      W         kg·m²/s³
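To make these conversions concrete, here is a minimal Python sketch using the definitions above; the constants mirror the text, and the function names are our own.

```python
# Energy-unit conversions based on the definitions given above.
JOULES_PER_ERG = 1e-7
JOULES_PER_KWH = 3_600_000
JOULES_PER_CAL = 4.189
JOULES_PER_BTU = 1055
WATTS_PER_HP = 746

def kwh_to_joules(kwh):
    return kwh * JOULES_PER_KWH

def appliance_energy_joules(power_watts, hours):
    """Energy used by an appliance: power (W) x time (s)."""
    return power_watts * hours * 3600

# A 2000 W iron running for 1 hour uses 2 kWh = 7,200,000 J.
print(appliance_energy_joules(2000, 1))   # 7200000
print(kwh_to_joules(2))                   # 7200000
print(JOULES_PER_BTU / JOULES_PER_CAL)    # ~252 calories per BTU
```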
textbooks/bio/Ecology/AP_Environmental_Science/1.01%3A_Flow_of_Energy.txt
INTRODUCTION The earth's biogeochemical systems involve complex, dynamic processes that depend upon many factors. The three main factors upon which life on the earth depends are: 1. The one-way flow of solar energy into the earth's systems. As radiant energy, it is used by plants for food production. As heat, it warms the planet and powers the weather system. Eventually, the energy is lost into space in the form of infrared radiation. Most of the energy needed to cycle matter through earth's systems comes from the sun. 2. The cycling of matter. Because there are only finite amounts of nutrients available on the earth, they must be recycled in order to ensure the continued existence of living organisms. 3. The force of gravity. This allows the earth to maintain the atmosphere encompassing its surface and provides the driving force for the downward movement of materials in processes involving the cycling of matter. These factors are critical components to the functioning of the earth's systems, and their functions are necessarily interconnected. The main matter-cycling systems involve important nutrients such as water, carbon, nitrogen and phosphorus. WATER CYCLE The earth is sometimes known as the "water planet" because over 70 percent of its surface is covered by water. The physical characteristics of water influence the way life on earth exists. These characteristics include: • Water is a liquid at room temperature, and remains as such over a relatively wide temperature range (0-100° C). This range overlaps the annual mean temperature of most biological environments. • It takes a relatively large amount of energy to raise the temperature of water (i.e., it has a high heat capacity). For this reason, the vast oceans act as a buffer against sudden changes in the average global temperature. • Water has a very high heat of vaporization. Water evaporation thus provides a good means for an organism to dissipate unwanted heat. • Water is a good solvent for many compounds and provides a good medium for chemical reactions. This includes biologically important compounds and reactions. • Liquid water has a very high surface tension, the force holding the liquid surface together. This enables upward transport of water in plants and soil by capillary action. • Solid water (ice) has a lower density than liquid water at the surface of the earth. As a result ice floats on the surface of rivers, lakes, and oceans after it forms, leaving liquid water below where fish and other organisms can continue to live. If ice were more dense than liquid water, it would sink, and bodies of water in cold climates might eventually freeze solid. All living organisms require water for their continued existence. The water cycle (hydrologic cycle) is composed of the interconnections between water reservoirs in the environment and living organisms and the physical processes (e.g., evaporation and condensation) involved in its transport between those reservoirs. The oceans contain about 97 percent of the total water on the planet, which leaves about three percent as fresh water. Most of the fresh water is locked up in glacial and cap ice or buried deep in the earth where it is economically unfeasible to extract it. One estimate gives the amount of fresh water available for human use to be approximately 0.003 percent of the total amount of fresh water. However, this is actually a more than adequate supply, as long as the natural cycle of water is not severely disturbed by an outside force such as human activity. 
There are several important processes that affect the transport of water in the water cycle. Evaporation is the process by which liquid water is converted to water vapor. The source of energy for this process is usually the sun. For example, the sun's radiation heats the surface water in a lake causing it to evaporate. The resulting water vapor is thus added to the atmosphere where it can be transported to another location. Two important effects of the evaporation are cooling and drying. Transpiration is a process by which water evaporates from living plants. Water from the soil is absorbed by a plant's roots and transported to the leaves. There, some is lost as vapor to the atmosphere through small surface openings. When water vapor in the atmosphere cools, it can transform into tiny droplets of liquid water. This process is called condensation, and it can occur as water vapor is transported into the cooler upper atmosphere. Dust and pollen in the atmosphere help to initiate the process by providing condensation centers. If the droplets remain small enough to be supported by air motions, they can group together to form a cloud. Condensation can also occur in the air near the ground as fog or on plant leaves as dew. When condensed water droplets grow so large that the air can no longer support them against the pull of gravity, they fall to the earth. This is the process called precipitation. If the water droplets fall as liquid, it is called rain. If the temperature of the surrounding air mass is cold enough to freeze the water droplets, the resultant precipitation can be called snow, sleet or hail, depending upon its morphology. Water falling on the ground (e.g., as precipitation or irrigation), can move downslope over the surface (e.g., surface runoff) or penetrate the surface (e.g., infiltration). The amount of surface runoff and infiltration depends upon several factors: water infall rate, surface moisture, soil or rock texture, type and amount of surface cover (e.g., leaves and rooted plants), and surface topography. Surface runoff is the predominate process that occurs after precipitation, with most of the water flowing into streams and lakes. On a groundslope unprotected by vegetation, runoff can occur very rapidly and result in severe erosion. Water that infiltrates the surface can move slowly downward through the layers of soil or porous rock in a process known as percolation. During this process, the water can dissolve minerals from the rock or soil as it passes through. The water collects in the pores of rocks as groundwater when it is stopped by an impermeable layer of rock. The upper limit of this groundwater is known as the water table and the region of water-logged rock is known as an aquifer. The groundwater may slowly flow downhill through rock pores until it exits the surface as a spring or seeps into a stream or lake. Water is the essence of life. There would be no life as we know it without water. The vast oceans of water exert a powerful influence on the weather and climate. Water is also the agent by which the landforms are constantly reshaped. Therefore, the water cycle plays an important role in the balance of nature. Human activity can disrupt the natural balance of the water cycle. The buildup of salts that results from irrigating with groundwater can cause soil infertility and irrigation can also deplete underground aquifers causing land subsidence or salt water intrusion from the ocean. 
The clearing of land for farming, construction, or mining can increase surface runoff and erosion, thereby decreasing infiltration. Increasing human populations and their concentration in certain geographic localities will continue to stress water systems. Careful thought is needed on local, regional and global scales regarding the use and management of water resources for wetlands, agriculture, industry and home.

CARBON CYCLE

Carbon is the basic building block of all organic materials, and therefore, of living organisms. However, the vast majority of carbon resides as inorganic minerals in crustal rocks. Other reservoirs of carbon include the oceans and atmosphere. Several physical processes affect carbon as it moves from one reservoir to another. The inter-relationships of carbon and the biosphere, atmosphere, oceans and crustal earth -- and the processes affecting it -- are described by the carbon cycle. The carbon cycle is actually comprised of several inter-connected cycles. The overall effect is that carbon is constantly recycled in the dynamic processes taking place in the atmosphere, at the surface and in the crust of the earth. For example, the combustion of wood transfers carbon dioxide to the atmosphere. The carbon dioxide is taken in by plants and converted to nutrients for growth and sustenance. Animals eat the plants for food and exhale carbon dioxide into the atmosphere when they breathe. Atmospheric carbon dioxide dissolves in the ocean where it eventually precipitates as carbonate in sediments. The ocean sediments are subducted by the actions of plate tectonics, melted and then returned to the surface during volcanic activity. Carbon dioxide gas is released into the atmosphere during volcanic eruptions. Some of the carbon atoms in your body today may long ago have resided in a dinosaur's body, or perhaps were once buried deep in the earth's crust as carbonate rock minerals.

The main carbon cycling processes involving living organisms are photosynthesis and respiration. These processes are actually reciprocal to one another with regard to the cycling of carbon: photosynthesis removes carbon dioxide from the atmosphere and respiration returns it. A significant disruption of one process can therefore affect the amount of carbon dioxide in the atmosphere. During a process called photosynthesis, raw materials are used to manufacture sugar. Photosynthesis occurs in the presence of chlorophyll, a green plant pigment that helps the plant utilize the energy from sunlight to drive the process. Although the overall process involves a series of reactions, the net reaction can be represented by the following:

\[6CO_2 + 6H_2O + \text{light energy} \rightarrow C_6H_{12}O_6 + 6O_2\]

The sugar provides a source of energy for other plant processes and is also used for synthesizing materials necessary for plant growth and maintenance. The net effect with regard to carbon is that it is removed from the atmosphere and incorporated into the plant as organic material. The reciprocal process of photosynthesis is called respiration. The net result of this process is that sugar is broken down by oxygen into carbon dioxide and water. The net reaction is:

\[C_6H_{12}O_6 + 6O_2 \rightarrow 6CO_2 + 6H_2O + \text{energy}\]

This process occurs not only in plants, but also in humans and animals. Unlike photosynthesis, respiration can occur during both the day and night. During respiration, carbon is removed from organic materials and expelled into the atmosphere as carbon dioxide. Another process by which organic material is recycled is the decomposition of dead plants and animals.
During this process, bacteria break down the complex organic compounds. Carbon is released into the soil or water as inorganic material or into the atmosphere as gases. Decomposed plant material is sometimes buried and compressed between layers of sediments. After millions of years, fossil fuels such as coal and oil are formed. When fossil fuels are burned, the carbon is returned to the atmosphere as carbon dioxide.

The carbon cycle is very important to the existence of life on earth. The daily maintenance of living organisms depends on the ready availability of different forms of carbon. Fossil fuels provide an important source of energy for humans, as well as the raw materials used for manufacturing plastics and other industrially important organic compounds. The component processes of the carbon cycle have provided living things with the necessary sources of carbon for hundreds of millions of years. If not for the recycling processes, carbon might long ago have become completely sequestered in crustal rocks and sediments, and life would no longer exist.

Human activity threatens to disrupt the natural cycle of carbon. Two important ways by which humans have affected the carbon cycle, especially in recent history, are: 1) the release of carbon dioxide into the atmosphere during the burning of fossil fuels, and 2) the clearing of trees and other plants (deforestation) that absorb carbon dioxide from the atmosphere during photosynthesis. The net effect of these actions is to increase the concentration of carbon dioxide in the atmosphere. It is estimated that global atmospheric carbon dioxide is increasing by about 0.4% annually. Carbon dioxide is a greenhouse gas (i.e., it prevents infrared radiation from the earth's surface from escaping into space). The heat is instead absorbed by the atmosphere. Many scientists believe that the increased carbon dioxide concentration in the atmosphere is resulting in global warming. This global warming may in turn cause significant changes in global weather, which could negatively affect all life on earth. However, increased photosynthesis (resulting from the increase in the concentration of carbon dioxide) may somewhat counteract the effects. Unfortunately, the issues of fossil fuel burning, deforestation and global warming are intertwined with economic and political considerations. Furthermore, though much studied, the processes are still not well understood and their ramifications cannot be predicted with confidence.

NITROGEN CYCLE

The element nitrogen is important to living organisms and is used in the production of amino acids, proteins and nucleic acids (DNA, RNA). Molecular nitrogen (N2) is the most abundant gas in the atmosphere. However, only a few single-cell organisms are able to utilize this form of nitrogen directly. These include bacteria of the genus Rhizobium, which live in the root nodules of legumes, and cyanobacteria (sometimes called blue-green algae), which are ubiquitous in water and soil environments. In order for multi-cellular organisms to use nitrogen, its molecular form (N2) must be converted to other compounds, e.g., nitrates or ammonia. This process is known as nitrogen fixation. Microbial organisms such as cyanobacteria carry out most of the earth's nitrogen fixation. The industrial manufacture of fertilizers, emissions from combustion engines and the fixation of nitrogen by lightning account for a smaller fraction. The nitrogen cycle is largely dependent on microbial processes.
Bacteria fix nitrogen from the atmosphere in the form of ammonia (NH3) and convert the ammonia to nitrate (NO3-). Ammonia and nitrate are absorbed by plants through their roots. Humans and animals get their nitrogen supplies by eating plants or plant-eating animals. The nitrogen is returned to the cycle when bacteria decompose the waste or dead bodies of these higher organisms and, in the process, convert organic nitrogen into ammonia. In a process called denitrification, other bacteria convert ammonia and nitrate into molecular nitrogen and nitrous oxide (N2O). Molecular nitrogen is thus returned to the atmosphere to start the cycle over again.

Humans have disturbed the nitrogen cycle in recent history by activities involving increased fixation of nitrogen. Most of this increased nitrogen fixation results from the commercial production of fertilizers and the increased burning of fuels (which converts molecular nitrogen to nitric oxide, NO). The use of commercial fertilizers on agricultural lands increases the runoff of nitrates into aquatic environments. This increased nitrogen runoff stimulates the rapid growth of algae. When the algae die, the water becomes depleted in oxygen and other organisms die. This process is known as eutrophication. The excessive use of fertilizers also stimulates the microbial denitrification of nitrate to nitrous oxide. Increased atmospheric levels of nitrous oxide are thought to contribute to global warming. Nitric oxide added to the atmosphere combines with water to form nitric acid (HNO3), and when nitric acid dissolves in water droplets, it forms acid rain. Acid rain damages healthy trees, destroys aquatic systems and erodes building materials such as marble and limestone.

PHOSPHORUS CYCLE

Phosphorus in earth systems is usually in the form of phosphate (PO4^3-). In living organisms it is an essential constituent of cell membranes, nucleic acids and ATP (the carrier of energy for all life forms). It is also a component of bone and teeth in humans and animals. The phosphorus cycle is relatively simple compared to the other cycles of matter, as fewer reservoirs and processes are involved. Phosphorus is not a nominal constituent of the atmosphere, existing there only in dust particles. Most phosphorus occurs in crustal rocks or in ocean sediments. When phosphate-bearing rock is weathered, the phosphate is dissolved and ends up in rivers, lakes and soils. Plants take up phosphate from the soil, while animals ingest phosphorus by eating plants or plant-eating animals. Phosphate is returned to the soil via the decomposition of animal waste or plant and animal materials. This cycle repeats itself again and again. Some phosphorus is washed to the oceans where it eventually finds its way into the ocean-floor sediments. The sediments become buried and form phosphate-bearing sedimentary rocks. When this rock is uplifted, exposed and weathered, the phosphate is again released for use by living organisms. The movement of phosphorus from rock to living organisms is normally a very slow process, but some human activities speed up the process. Phosphate-bearing rock is often mined for use in the manufacture of fertilizers and detergents. This commercial production greatly accelerates the phosphorus cycle. In addition, runoff from agricultural land and the release of sewage into water systems can cause a local overload of phosphate. The increased availability of phosphate can cause overgrowth of algae.
This reduces the oxygen level, causing eutrophication and the destruction of other aquatic species. Marine birds play a unique role in the phosphorus cycle. These birds take up phosphorus from ocean fish. Their droppings on land (guano) contain high levels of phosphorus and are sometimes mined for commercial use.
textbooks/bio/Ecology/AP_Environmental_Science/1.02%3A_Cycling_of_Matter.txt
EARTH'S FORMATION AND STRUCTURE The earth formed approximately 4.6 billion years ago from a nebular cloud of dust and gas that surrounded the sun. As the gas cooled, more solids formed. The dusty material accreted to the nebular midplane where it formed progressively larger clumps. Eventually, bodies of several kilometers in diameter formed; these are known as planetesimals. The largest planetesimals grew fastest, at the expense of the smaller ones. This process continued until an earth-sized planet had formed. Early in its formation, the earth must have been completely molten. The main source of heat at that time was probably the decay of naturally-occurring radioactive elements. As the earth cooled, density differences between the forming minerals caused the interior to become differentiated into three concentric zones: the crust, mantle and core. The crust extends downward from the surface to an average depth of 35 km where the mantle begins. The mantle extends down to a depth of 2900 km where the core begins. The core extends down to the center of the earth, a depth of about 6400 km from the surface. The core makes up 16 percent of the volume of the earth and about 31 percent of the mass. It can be divided into two regions: a solid inner core and a liquid outer core. The inner core is probably mostly metallic iron alloyed with a small amount of nickel, as its density is somewhat greater than that of pure metallic iron. The outer core is similar in composition, but probably also contains small amounts of lighter elements, such as sulfur and oxygen, because its density is slightly less than that of pure metallic iron. The presence of the lighter elements depresses the freezing point and is probably responsible for the outer core's liquid state. The mantle is the largest layer in the earth, making up about 82 percent of the volume and 68 percent of the mass of the earth. The mantle is dominated by magnesium and iron-rich (mafic) minerals. Heat from the core of the earth is transported to the crustal region by large-scale convection in the mantle. Near the top of the mantle is a region of partially melted rock called the asthenosphere. Numerous small-scale convection currents occur here as hot magma (i.e., molten rock) rises and cooler magma sinks due to differences in density. The crust is the thinnest layer in the earth, making up only 1 percent of the mass and 2 percent of the volume. Relative to the rest of the earth, the crust is rich in elements such as silicon, aluminum, calcium, sodium and potassium. Crustal materials are very diverse, consisting of more than 2000 minerals. The less dense crust floats upon the mantle in two forms: the continental crust and the oceanic crust. The oceanic crust, which contains more mafic minerals is thinner and denser than the continental crust which contains minerals richer in silicon and aluminum. The thick continental crust has deep buoyant roots that help to support the higher elevations above. The crust contains the mineral resources and the fossil fuels used by humans. GEOLOGIC TIME SCALE In order to describe the time relationships between rock formations and fossils, scientists developed a relative geologic time scale in which the earth's history is divided and subdivided into time divisions. The three eons (Phanerozoic, Proterozoic, and Archean) represent the largest time divisions (measured in billions of years). They in turn are subdivided into Eras, Periods and Epochs. 
Major discontinuities in the geologic record and in the corresponding biological (fossil) record are chosen as boundary lines between the different time segments. For example, the Cretaceous-Tertiary boundary (65 million years ago) marks a sudden mass extinction of species, including the dinosaurs. Through the use of modern quantitative techniques, some rocks and organic matter can be accurately dated using the decay of naturally-occurring radioactive isotopes. Therefore, absolute ages can be assigned to some parts of the geologic time scale. THE LITHOSPHERE AND PLATE TECTONICS The layer of the mantle above the asthenosphere plus the entire crust make up a region called the lithosphere. The lithosphere, and therefore, the earth's crust, is not a continuous shell, but is broken into a series of plates that independently "float" upon the asthenosphere, much like a raft on the ocean. These plates are in constant motion, typically moving a few centimeters a year, and are driven by convection in the mantle. The scientific theory that describes this phenomenon is called plate tectonics. According to the theory of plate tectonics, the lithosphere is comprised of some seven major plates and several smaller ones. Because these plates are in constant motion, interactions occur where plate boundaries meet. A convergent (colliding) plate boundary occurs when two plates collide. If the convergent boundary involves two continental plates, the crust is compressed into high mountain ranges such as the Himalayas. If an oceanic plate and a continental plate collide, the oceanic crust (because it is more dense) is subducted under the continental crust. The region where subduction takes place is called a subduction zone and usually results in a deep ocean trench such as the "Mariana Trench" in the western Pacific ocean. The subducted crust melts and the resultant magma can rise to the surface and form a volcano. A divergent plate boundary occurs when two plates move away from each other. Magma upwelling from the mantle region is forced through the resulting cracks, forming new crust. The mid-ocean ridge in the Atlantic ocean is a region where new crustal material continually forms as plates diverge. Volcanoes can also occur at divergent boundaries. The island of Iceland is an example of such an occurrence. A third type of plate boundary is the transform boundary. This occurs when two plates slide past one another. This interaction can build up strain in the adjacent crustal regions, resulting in earthquakes when the strain is released. The San Andreas Fault in California is an example of a transform plate boundary. GEOLOGICAL DISTURBANCES VOLCANOES An active volcano occurs when magma (molten rock) reaches the earth's surface through a crack or vent in the crust. Volcanic activity can involve the extrusion of lava on the surface, the ejection of solid rock and ash, and the release of water vapor or gas (carbon dioxide or sulfur dioxide). Volcanoes commonly occur near plate boundaries where the motion of the plates has created cracks in the lithosphere through which the magma can flow. About eighty percent of volcanoes occur at convergent plate boundaries where subducted material melts and rises through cracks in the crust. The Cascade Range was formed in this way. Volcanoes can be classified according to the type and form of their ejecta. The basic types are: composite volcanoes, shield volcanoes, cinder cones, and lava domes. 
Composite volcanoes are steep-sided, symmetrical cones built of multiple layers of viscous lava and ash. Most composite volcanoes have a crater at the summit which contains the central vent. Lavas flow from breaks in the crater wall or from cracks on the flanks of the cone. Mt. Fuji in Japan and Mt. Rainier in Washington are examples of composite volcanoes. Shield volcanoes are built almost entirely of highly fluid (low viscosity) lava flows. They form slowly from numerous flows that spread out over a wide area from a central vent. The resultant structure is a broad, gently sloping cone with a profile like a warrior's shield. Kilauea in Hawaii is an example of a shield volcano. Cinder cones are the simplest type of volcano. They form when lava blown violently into the air breaks into small fragments that solidify and fall as cinders. A steep-sided cone shape is formed around the vent, with a crater at the summit. Sunset Crater in Arizona is a cinder cone that formed less than a thousand years ago, disrupting the lives of the native inhabitants of the region. Lava domes are formed when highly viscous lava is extruded from a vent and forms a rounded, steep-sided dome. The lava piles up around and on the vent instead of flowing away, mostly growing by expansion from within. Lava domes commonly occur within the craters or on the flanks of composite volcanoes.

EARTHQUAKES

An earthquake occurs when built-up strain in a rock mass causes it to rupture suddenly. The region where the rupture occurs is called the focus. This is often deep below the surface of the crust. The point on the surface directly above the focus is called the epicenter. Destructive waves propagate outward from the region of the quake, traveling throughout the earth. The magnitude of an earthquake is a measure of the total amount of energy released. The first step in determining the magnitude is to measure the propagated waves using a device called a seismograph. Based on this information, the earthquake is given a number classification on a modified Richter scale. The scale is logarithmic, so a difference of one unit means a ten-fold difference in wave intensity, which corresponds to an energy difference of about 32-fold. The intensity of an earthquake is an indicator of the effect of an earthquake at a particular locale. The effect depends not only on the magnitude of the earthquake, but also on the types of subsurface materials and the structure and design of surface structures. Earthquakes generally occur along breaks in the rock mass known as faults, and most occur in regions near plate boundaries. Some 80 percent of all earthquakes occur near convergent plate boundaries, triggered by the interaction of the plates. Earthquakes are also often associated with volcanic activity due to the movement of sub-surface magma. When an earthquake occurs under the ocean, it can trigger a destructive tidal wave known as a tsunami.

ROCKS AND THE ROCK CYCLE

The earth's crust is composed of many kinds of rocks, each of which is made up of one or more minerals. Rocks can be classified into three basic groups: igneous, sedimentary, and metamorphic. Igneous rocks are the most common rock type found in the earth's crust. They form when magma cools and crystallizes below the surface (intrusive igneous rocks) or lava cools and crystallizes on the surface (extrusive igneous rocks). Granite is an example of an intrusive igneous rock, whereas basalt is an extrusive igneous rock.
Sedimentary rocks are formed by the consolidation of the weathered fragments of pre-existing rocks, by the precipitation of minerals from solution, or by compaction of the remains of living organisms. The processes involving weathered rock fragments include erosion and transport by wind, water or ice, followed by deposition as sediments. As the sediments accumulate over time, those at the bottom are compacted. They are cemented by minerals precipitated from solution and become rocks. The process of compaction and cementation is known as lithification. Some common types of sedimentary rocks are limestone, shale, and sandstone. Gypsum represents a sedimentary rock precipitated from solution. Fossil fuels such as coal and oil shale are sedimentary rocks formed from organic matter. Metamorphic rocks are formed when solid igneous, sedimentary or metamorphic rocks change in response to elevated temperature and pressure and/or chemically active fluids. This alteration usually occurs subsurface. It may involve a change in texture (recrystallization), a change in mineralogy or both. Marble is a metamorphosed form of limestone, while slate is transformed shale. Anthracite is a metamorphic form of coal. The rock cycle illustrates connections between the earth's internal and external processes and how the three basic rock groups are related to one another. Internal processes include melting and metamorphism due to elevated temperature and pressure. Convective currents in the mantle keep the crust in constant motion (plate tectonics). Buried rocks are brought to the surface (uplift), and surface rocks and sediments are transported to the upper mantle region (subduction). Two important external processes in the rock cycle are weathering and erosion. Weathering is the process by which rock materials are broken down into smaller pieces and/or chemically changed. Once rock materials are broken down into smaller pieces, they can be transported elsewhere in a process called erosion. The main vehicle of erosion is moving water, but wind and glaciers can also erode rock. SOIL FORMATION Soil is one of the earth's most precious and delicate resources. Its formation involves the weathering of parent materials (e.g., rocks) and biological activity. Soil has four principal components: water, eroded inorganic parent material, air, and organic matter (e.g., living and decaying organisms). Soil formation begins with unconsolidated materials that are the products of weathering. These materials may be transported to the location of soil formation by processes such as wind or water, or may result from the weathering of underlying bedrock. The weathering process involves the disintegration and decomposition of the rock. It can be physical (e.g., water seeping into rock cracks and then freezing) or chemical (e.g., dissolution of minerals by acid rain). Physical processes are more prevalent in cold and dry climates, while chemical processes are more prevalent in warm or moist climates. Soil materials tend to move vertically in the formation environment. Organic materials (e.g., leaf litter) and sediments can be added, while other materials (e.g., minerals) can be lost due to erosion and leaching. Living organisms (e.g., bacteria, fungi, worms, and insects) also become incorporated into the developing soil. The living component of the soil breaks down other organic materials to release their nutrients (e.g., nitrogen, potassium and phosphorous). The nutrients are then used and recycled by growing plants and other organisms. 
This recycling of nutrients helps create and maintain a viable soil. Several factors influence soil formation including: climate, parent material, biologic organisms, topography and time. The climate of an area (precipitation and temperature) may be the most important factor in soil formation. Temperature affects the rates of chemical reactions and rainfall affects soil pH and leaching. Parent material or bedrock varies from region to region and can affect the texture and pH of soils. Vegetation type affects the rate at which nutrients in the soil are recycled, the type and amount of organic matter in the soil, soil erosion, and the types and numbers of micro-organisms living in the soil. Humans can also have a profound effect on soils through such activities as plowing, irrigating and mining. The topography of a region affects rainfall runoff, erosion and solar energy intake. Soil formation is a continuous process. Soils change with time as factors such as organic matter input and mineral content change. The process of making a soil suitable for use by humans can take tens of thousands of years. Unfortunately, the destruction of that soil can occur in a few short generations.
textbooks/bio/Ecology/AP_Environmental_Science/1.03%3A_The_Solid_Earth.txt
INTRODUCTION

The atmosphere, the gaseous layer that surrounds the earth, formed over four billion years ago. During the evolution of the solid earth, volcanic eruptions released gases into the developing atmosphere. Assuming the outgassing was similar to that of modern volcanoes, the gases released included: water vapor (H2O), carbon monoxide (CO), carbon dioxide (CO2), hydrochloric acid (HCl), methane (CH4), ammonia (NH3), nitrogen (N2) and sulfur gases. The atmosphere was reducing because there was no free oxygen. Most of the hydrogen and helium that outgassed would have eventually escaped into outer space due to the inability of the earth's gravity to hold on to their small masses. There may have also been significant contributions of volatiles from the massive meteoritic bombardments known to have occurred early in the earth's history. Water vapor in the atmosphere condensed and rained down, eventually forming lakes and oceans. The oceans provided homes for the earliest organisms, which were probably similar to cyanobacteria. Oxygen was released into the atmosphere by these early organisms, and carbon became sequestered in sedimentary rocks. This led to our current oxidizing atmosphere, which is mostly composed of nitrogen (roughly 78 percent) and oxygen (roughly 21 percent). Water vapor, argon and carbon dioxide together comprise a much smaller fraction (roughly 1 percent). The atmosphere also contains several gases in trace amounts, such as helium, neon, methane and nitrous oxide. One very important trace gas is ozone, which absorbs harmful UV radiation from the sun.

ATMOSPHERIC STRUCTURE

The earth's atmosphere extends outward to about 1,000 kilometers, where it transitions to interplanetary space. However, most of the mass of the atmosphere (greater than 99 percent) is located within the first 40 kilometers. The sun and the earth are the main sources of radiant energy in the atmosphere. The sun's radiation spans the infrared, visible and ultraviolet light regions, while the earth's radiation is mostly infrared. The vertical temperature profile of the atmosphere is variable and depends upon the types of radiation that affect each atmospheric layer. This, in turn, depends upon the chemical composition of that layer (mostly involving trace gases). Based on these factors, the atmosphere can be divided into four distinct layers: the troposphere, stratosphere, mesosphere, and thermosphere.

The troposphere is the atmospheric layer closest to the earth's surface. It extends about 8 - 16 kilometers from the earth's surface. The thickness of the layer varies a few kilometers according to latitude and the season of the year. It is thicker near the equator and during the summer, and thinner near the poles and during the winter. The troposphere contains the largest percentage of the mass of the atmosphere relative to the other layers. It also contains some 99 percent of the total water vapor of the atmosphere. The temperature of the troposphere is warm (roughly 17º C) near the surface of the earth. This is due to the absorption of infrared radiation from the surface by water vapor and other greenhouse gases (e.g. carbon dioxide, nitrous oxide and methane) in the troposphere. The concentration of these gases decreases with altitude, and therefore, the heating effect is greatest near the surface. The temperature in the troposphere decreases at a rate of roughly 6.5º C per kilometer of altitude. The temperature at its upper boundary is very cold (roughly -60º C).
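As a quick arithmetic check of the figures above (our own calculation, taking a representative tropopause height of 12 km and the stated lapse rate):

\[T_{\text{top}} \approx 17^\circ\text{C} - \left(6.5\ \tfrac{^\circ\text{C}}{\text{km}}\right)\times 12\ \text{km} \approx -61^\circ\text{C},\]

which is consistent with the roughly -60º C quoted for the upper boundary of the troposphere.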
Because hot air rises and cold air falls, there is a constant convective overturn of material in the troposphere. Indeed, the name troposphere means "region of mixing." For this reason, all weather phenomena occur in the troposphere. Water vapor evaporated from the earth's surface condenses in the cooler upper regions of the troposphere and falls back to the surface as rain. Dust and pollutants injected into the troposphere become well mixed in the layer, but are eventually washed out by rainfall. The troposphere is therefore self-cleaning.

A narrow zone at the top of the troposphere is called the tropopause. It effectively separates the underlying troposphere and the overlying stratosphere. The temperature in the tropopause is relatively constant. Strong eastward winds, known as the jet stream, also occur here.

The stratosphere is the next major atmospheric layer. This layer extends from the tropopause (roughly 12 kilometers) to roughly 50 kilometers above the earth's surface. The temperature profile of the stratosphere is quite different from that of the troposphere. The temperature remains relatively constant up to roughly 25 kilometers and then gradually increases up to the upper boundary of the layer. The amount of water vapor in the stratosphere is very low, so it is not an important factor in the temperature regulation of the layer. Instead, it is ozone (O3) that causes the observed temperature inversion. Most of the ozone in the atmosphere is contained in a layer of the stratosphere from roughly 20 to 30 kilometers. This ozone layer absorbs solar energy in the form of ultraviolet radiation (UV), and the energy is ultimately dissipated as heat in the stratosphere. This heat leads to the rise in temperature. Stratospheric ozone is also very important for living organisms on the surface of the earth, as it protects them by absorbing most of the harmful UV radiation from the sun.

Ozone is constantly being produced and destroyed in the stratosphere in a natural cycle. The basic reactions involving only oxygen (known as the "Chapman Reactions") are as follows:

\[O_2 + h\nu \rightarrow O + O\]
\[O + O_2 + M \rightarrow O_3 + M\]
\[O_3 + h\nu \rightarrow O_2 + O\]
\[O + O_3 \rightarrow 2O_2\]

(where M is any third molecule that carries away excess energy). The production of ozone from molecular oxygen involves the absorption of high-energy UV radiation (UVC) in the upper atmosphere. The destruction of ozone by absorption of UV radiation involves moderate- and lower-energy radiation (UVB and UVA). Most of the production and destruction of ozone occurs in the stratosphere at lower latitudes where the ultraviolet radiation is most intense. Ozone is very unstable and is readily destroyed by reactions with other atmospheric species such as nitrogen, hydrogen, bromine, and chlorine. In fact, most ozone is destroyed in this way. The use of chlorofluorocarbons (CFCs) by humans in recent decades has greatly affected the natural ozone cycle by increasing the rate of its destruction due to reactions with chlorine.

Because the temperature of the stratosphere rises with altitude, there is little convective mixing of the gases. The stratosphere is therefore very stable. Particles that are injected (such as volcanic ash) can stay aloft for many years without returning to the ground. The same is true for pollutants produced by humans. The upper boundary of the stratosphere is known as the stratopause, which is marked by a sudden decrease in temperature.

The third layer in the earth's atmosphere is called the mesosphere. It extends from the stratopause (about 50 kilometers) to roughly 85 kilometers above the earth's surface.
Because the mesosphere has negligible amounts of water vapor and ozone for generating heat, the temperature drops across this layer. It is warmed from the bottom by the stratosphere. The air is very thin in this region with a density about 1/1000 that of the surface. With increasing altitude this layer becomes increasingly dominated by lighter gases, and in the outer reaches, the remaining gases become stratified by molecular weight. The fourth layer, the thermosphere, extends outward from about 85 kilometers to about 600 kilometers. Its upper boundary is ill defined. The temperature in the thermosphere increases with altitude, up to 1500º C or more. The high temperatures are the result of absorption of intense solar radiation by the last remaining oxygen molecules. The temperature can vary substantially depending upon the level of solar activity. The lower region of the thermosphere (up to about 550 kilometers) is also known as the ionosphere. Because of the high temperatures in this region, gas particles become ionized. The ionosphere is important because it reflects radio waves from the earth's surface, allowing long-distance radio communication. The visual atmospheric phenomenon known as the northern lights also occurs in this region. The outer region of the atmosphere is known as the exosphere. The exosphere represents the final transition between the atmosphere and interplanetary space. It extends about 1000 kilometers and contains mainly helium and hydrogen. Most satellites operate in this region. Solar radiation is the main energy source for atmospheric heating. The atmosphere heats up when water vapor and other greenhouse gases in the troposphere absorb infrared radiation either directly from the sun or re-radiated from the earth's surface. Heat from the sun also evaporates ocean water and transfers heat to the atmosphere. The earth's surface temperature varies with latitude. This is due to uneven heating of the earth's surface. The region near the equator receives direct sunlight, whereas sunlight strikes the higher latitudes at an angle and is scattered and spread out over a larger area. The angle at which sunlight strikes the higher latitudes varies during the year due to the fact that the earth's equatorial plane is tilted 23.5º relative to its orbital plane around the sun. This variation is responsible for the different seasons experienced by the non-equatorial latitudes. WIND Convecting air masses in the troposphere create air currents known as winds, due to horizontal differences in air pressure. Winds flow from a region of higher pressure to one of a lower pressure. Global air movement begins in the equatorial region because it receives more solar radiation. The general flow of air from the equator to the poles and back is disrupted, though, by the rotation of the earth. The earth's surface travels faster beneath the atmosphere at the equator and slower at the poles. This causes air masses moving to the north to be deflected to the right, and air masses moving south to be deflected to the left. This is known as the "Coriolis Effect." The result is the creation of six huge convection cells situated at different latitudes. Belts of prevailing surface winds form and distribute air and moisture over the earth. Jet streams are extremely strong bands of winds that form in or near the tropopause due to large air pressure differentials. Wind speeds can reach as high as 200 kilometers per hour. 
In North America, there are two main jet streams: the polar jet stream, which occurs between the westerlies and the polar easterlies, and the subtropical jet stream, which occurs between the trade winds and the westerlies. WEATHER The term weather refers to the short-term changes in the physical characteristics of the troposphere. These physical characteristics include: temperature, air pressure, humidity, precipitation, cloud cover, wind speed and direction. Radiant energy from the sun is the power source for weather. It drives the convective mixing in the troposphere which determines the atmospheric and surface weather conditions. Certain atmospheric conditions can lead to extreme weather phenomena such as thunderstorms, floods, tornadoes and hurricanes. A thunderstorm forms in a region of atmospheric instability, often occurring at the boundary between cold and warm fronts. Warm, moist air rises rapidly (updraft) while cooler air flows down to the surface (downdraft). Thunderstorms produce intense rainfall, lightning and thunder. If the atmospheric instability is very large and there is a large increase in wind strength with altitude (vertical wind shear), the thunderstorm may become severe. A severe thunderstorm can produce flash floods, hail, violent surface winds and tornadoes. Floods can occur when atmospheric conditions allow a storm to remain in a given area for a length of time, or when a severe thunderstorm dumps very large amounts of rainfall in a short time period. When the ground becomes saturated with water, the excess runoff flows into low-lying areas or rivers and causes flooding. A tornado begins in a severe thunderstorm. Vertical wind shear causes the updraft in the storm to rotate and form a funnel. The rotational wind speeds increase and vertical stretching occurs due to the conservation of angular momentum. As air is drawn into the funnel core, it cools rapidly and condenses to form a visible funnel cloud. The funnel cloud descends to the surface as more air is drawn in. Wind speeds in tornadoes can reach several hundred miles per hour. Tornadoes are most prevalent in the Great Plains region of the United States, forming when cold dry polar air from Canada collides with warm moist tropical air from the Gulf of Mexico. A cyclone is an area of low pressure with winds blowing counter-clockwise (Northern Hemisphere) or clockwise (Southern Hemisphere) around it. Tropical cyclones are given different names depending on their wind speed. The strongest tropical cyclones in the Atlantic Ocean (wind speed exceeds 74 miles per hour) are called hurricanes. These storms are called typhoons (Pacific Ocean) or cyclones (Indian Ocean) in other parts of the world. Hurricanes are the most powerful of all weather systems, characterized by strong winds and heavy rain over wide areas. They form over the warm tropical ocean and quickly lose intensity when they move over land. Hurricanes affecting the continental United States generally occur from June through November. OCEAN CURRENTS The surface of the earth is over 71 percent water, so it is not surprising that oceans have a significant effect on the weather and climate. Because of the high heat capacity of water, the ocean acts as a temperature buffer. That is why coastal climates are less extreme than inland climates. Most of the radiant heat from the sun is absorbed by ocean surface waters and ocean currents help distribute this heat. Currents are the movement of water in a predictable pattern. Surface ocean currents are driven mostly by prevailing winds. 
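The latitude dependence of the Coriolis deflection described above can be made explicit. The following is a standard expression offered here as a supplementary note rather than as part of the original text; it gives the magnitude of the Coriolis acceleration acting on air or water moving horizontally at speed $v$ at latitude $\phi$: $a_{C} = 2\,\Omega\, v \sin\phi$, where $\Omega \approx 7.29 \times 10^{-5}$ radians per second is the earth's rotation rate. The deflection vanishes at the equator ($\phi = 0$) and is strongest near the poles, which is why large-scale winds and surface currents curve into the belts and circular patterns described in this section.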
The "Coriolis Effect" causes the currents to flow in circular patterns. These currents help transport heat from the tropics to the higher latitudes. Two large surface currents near the United States are the California current along the west coast and the Gulf Stream along the east coast. Deep ocean currents are driven by differences in water temperature and density. They move in a convective pattern. The less dense (lower salinity) warm water in the equatorial regions rises and moves towards the polar regions, while more dense (higher salinity) cold water in the polar regions sinks and moves towards the equatorial regions. Sometimes this cold deep water moves back to the surface along a coastline in a process known as upwelling. This cold deep water is rich in nutrients that support productive fishing grounds. About every three to seven years, warm water from the western equatorial Pacific moves to the eastern equatorial Pacific due to weakened trade winds. The eastern Pacific Ocean thus becomes warmer than usual for a period of about a year. This is known as El Niño. El Niño prevents the nutrient-rich, cold-water upwellings along the western coast of South America. It also impacts the global weather conditions. Some regions receive heavier than usual rainfall, while other regions suffer drought conditions with lower than usual rainfall. Probably the most important part of weather is precipitation as rainfall or snowfall. Water from the vast salty oceans evaporates and falls over land as fresh water. It is rainfall that provides fresh water for land plants, and land animals. Winter snowfall in mountainous regions provides a stored supply of fresh water which melts and flows into streams during the spring and summer. Atmospheric clouds are the generators of precipitation. Clouds form when a rising air mass cools and the temperature and humidity are right for condensation to occur. Condensation does not occur spontaneously, but instead requires a condensation nuclei. These are tiny (less than 1µm) dust or smoke particles. The condensation droplet is small enough (about 20 µm) that it is supported by the atmosphere against the pull of gravity. The visible result of these condensation droplets is a cloud. Under the right conditions, droplets may continue to grow by continued condensation onto the droplet and/or coalescence with other droplets through collisions. When the droplets become sufficiently large they begin to fall as precipitation. Typical raindrops are about 2 mm in diameter. Depending upon the temperature of the cloud and the temperature profile of the atmosphere from the cloud to the earth's surface, various types of precipitation can occur: rain, freezing rain, sleet or snow. Very strong storms can produce relatively large chunks of ice called hailstones. CLIMATE Climate can be thought of as a measure of a region's average weather over a period of time. In defining a climate, the geography and size of the region must be taken into account. A micro-climate might involve a backyard in the city. A macroclimate might cover a group of states. When the entire earth is involved, it is a global climate. Several factors control large scale climates such as latitude (solar radiation intensity), distribution of land and water, pattern of prevailing winds, heat exchange by ocean currents, location of global high and low pressure regions, altitude and location of mountain barriers. The most widely used scheme for classifying climate is the Köppen System. 
This scheme uses average annual and monthly temperature and precipitation to define five climate types:
1. tropical moist climates: average monthly temperature is always greater than 18°C
2. dry climates: deficient precipitation most of the year
3. moist mid-latitude climates with mild winters
4. moist mid-latitude climates with severe winters
5. polar climates: extremely cold winters and summers
Using the Köppen system and the seasonal dominance of large-scale air masses (e.g., maritime or continental), the earth's climate zones can be grouped as follows:
1. tropical wet
2. tropical wet and dry
3. tropical desert
4. mid-latitude wet
5. mid-latitude dry summer
6. mid-latitude dry winter
7. polar wet and dry
8. polar desert
Los Angeles has a mid-latitude dry summer climate, whereas New Orleans has a mid-latitude wet climate. Data from natural climate records (e.g. ocean sediments, tree rings, Antarctic ice cores) show that the earth's climate has constantly changed in the past, with alternating periods of colder and warmer climates. The most recent ice age ended only about 10,000 years ago. The natural system controlling climate is very complex. It consists of a large number of feedback mechanisms that involve processes and interactions within and between the atmosphere, biosphere and the solid earth. Some of the natural causes of global climate change include plate tectonics (land mass and ocean current changes), volcanic activity (atmospheric dust and greenhouse gases), and long-term variations in the earth's orbit and the angle of its rotation axis (absolute and spatial variations in solar radiation). More recently, anthropogenic (human) factors may be affecting the global climate. Since the late 19th century, the average temperature of the earth has increased about 0.3 to 0.6°C. Many scientists believe this global warming trend is the result of the increased release of greenhouse gases (e.g., CO2) into the atmosphere from the combustion of fossil fuels.
textbooks/bio/Ecology/AP_Environmental_Science/1.04%3A_The_Atmosphere.txt
INTRODUCTION The biosphere is the region of the earth that encompasses all living organisms: plants, animals and bacteria. It is a feature that distinguishes the earth from the other planets in the solar system. "Bio" means life, and the term biosphere was first coined by a Russian scientist (Vladimir Vernadsky) in the 1920s. Another term sometimes used is ecosphere ("eco" meaning home). The biosphere includes the outer region of the earth (the lithosphere) and the lower region of the atmosphere (the troposphere). It also includes the hydrosphere, the region of lakes, oceans, streams, ice and clouds comprising the earth's water resources. Traditionally, the biosphere is considered to extend from the bottom of the oceans to the highest mountaintops, a layer with an average thickness of about 20 kilometers. Scientists now know that some forms of microbes live at great depths, sometimes several thousand meters into the earth's crust. Nonetheless, the biosphere is a very tiny region on the scale of the whole earth, analogous to the thickness of the skin on an apple. The bulk of living organisms actually live within a smaller fraction of the biosphere, from about 500 meters below the ocean's surface to about 6 kilometers above sea level. Dynamic interactions occur between the biotic region (biosphere) and the abiotic regions (atmosphere, lithosphere and hydrosphere) of the earth. Energy, water, gases and nutrients are exchanged between the regions on various spatial and time scales. Such exchanges depend upon, and can be altered by, the environments of the regions. For example, the chemical processes of early life on earth (e.g. photosynthesis, respiration, carbonate formation) transformed the reducing ancient atmosphere into the oxidizing (free oxygen) environment of today. The interactive processes between the biosphere and the abiotic regions work to maintain a kind of planetary equilibrium. These processes, as well as those that might disrupt this equilibrium, involve a range of scientific and socioeconomic issues. The study of the relationships of living organisms with one another and with their environment is the science known as ecology. The word ecology comes from the Greek words oikos and logos, and literally means "study of the home." The ecology of the earth can be studied at various levels: an individual (organism), a population, a community, an ecosystem, a biome or the entire biosphere. The variety of living organisms that inhabit an environment is a measure of its biodiversity. ORGANISMS Life evolved after oceans formed, as the ocean environment provided the necessary nutrients and support medium for the initial simple organisms. It also protected them from the harsh atmospheric UV radiation. As organisms became more complex they eventually became capable of living on land. However, this could not occur until the atmosphere became oxidizing and a protective ozone layer formed which blocked the harmful UV radiation. Over roughly the last four billion years, organisms have diversified and adapted to all kinds of environments, from the icy regions near the poles to the warm tropics near the equator, and from deep in the rocky crust of the earth to the upper reaches of the troposphere. Despite their diversity, all living organisms share certain characteristics: they all replicate and all use DNA to accomplish the replication process. Based on the structure of their cells, organisms can be classified into two types: eukaryotes and prokaryotes. 
The main difference between them is that a eukaryote has a nucleus, which contains its DNA, while a prokaryote does not have a nucleus, but instead its DNA is free-floating in the cell. Bacteria are prokaryotes, and humans are eukaryotes. Organisms can also be classified according to how they acquire energy. Autotrophs are "self feeders" that use light or chemical energy to make food. Plants are autotrophs. Heterotrophs (i.e. “other feeders”) obtain energy by eating other organisms, or their remains. Bacteria and animals are heterotrophs. Groups of organisms that are physically and genetically related can be classified into species. There are millions of species on the earth, most of them unstudied and many of them unknown. Insects and microorganisms comprise the majority of species, while humans and other mammals comprise only a tiny fraction. In an ecological study, a single member of a species or organism is known as an individual. POPULATIONS AND COMMUNITIES A number of individuals of the same species in a given area constitute a population. The number typically ranges anywhere from a few individuals to several thousand individuals. Bacterial populations can number in the millions. Populations live in a place or environment called a habitat. All of the populations of species in a given region together make up a community. In an area of tropical grassland, a community might be made up of grasses, shrubs, insects, rodents and various species of hoofed mammals. The populations and communities found in a particular environment are determined by abiotic and biotic limiting factors. These are the factors that most affect the success of populations. Abiotic limiting factors involve the physical and chemical characteristics of the environment. Some of these factors include: amounts of sunlight, annual rainfall, available nutrients, oxygen levels and temperature. For example, the amount of annual rainfall may determine whether a region is a grassland or forest, which in turn, affects the types of animals living there. Each population in a community has a range of tolerance for an abiotic limiting factor. There are also certain maximum and minimum requirements known as tolerance limits, above and below which no member of a population is able to survive. The range of an abiotic factor that results in the largest population of a species is known as the optimum range for that factor. Some populations may have a narrow range of tolerance for one factor. For example, a freshwater fish species may have a narrow tolerance range for dissolved oxygen in the water. If the lake in which that fish species lives undergoes eutrophication, the species will die. This fish species can therefore act as an indicator species, because its presence or absence is a strict indicator of the condition of the lake with regard to dissolved oxygen content. Biotic limiting factors involve interactions between different populations, such as competition for food and habitat. For example, an increase in the population of a meat-eating predator might result in a decrease in the population of its plant-eating prey, which in turn might result in an increase in the plant population the prey feeds on. Sometimes, the presence of a certain species may significantly affect the community make up. Such a species is known as a keystone species. For example, a beaver builds a dam on a stream and causes the meadow behind it to flood. A starfish keeps mussels from dominating a rocky beach, thereby allowing many other species to exist there. 
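The range-of-tolerance idea described earlier in this section lends itself to a simple quantitative picture. The sketch below is a toy model only: the linear shape of the response curve and the dissolved-oxygen numbers are invented for illustration and do not come from the text.

def population_response(value, lower_limit, upper_limit, optimum):
    """Toy response of a population to a single abiotic factor: zero outside the
    tolerance limits, rising linearly to a peak of 1.0 at the optimum."""
    if value < lower_limit or value > upper_limit:
        return 0.0  # outside the tolerance limits no members of the population survive
    if value <= optimum:
        return (value - lower_limit) / (optimum - lower_limit)
    return (upper_limit - value) / (upper_limit - optimum)

# Hypothetical freshwater fish and dissolved oxygen, in mg per liter
print(population_response(6.0, lower_limit=4.0, upper_limit=12.0, optimum=8.0))  # 0.5
print(population_response(3.0, lower_limit=4.0, upper_limit=12.0, optimum=8.0))  # 0.0 (a eutrophic lake)

In these terms, an indicator species is one whose tolerance range for a factor such as dissolved oxygen is so narrow that a zero response is itself informative about the condition of the habitat.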
ECOSYSTEMS An ecosystem is a community of living organisms interacting with each other and their environment. Ecosystems occur in all sizes. A tidal pool, a pond, a river, an alpine meadow and an oak forest are all examples of ecosystems. Organisms living in a particular ecosystem are adapted to the prevailing abiotic and biotic conditions. Abiotic conditions involve both physical and chemical factors (e.g., sunlight, water, temperature, soil, prevailing wind, latitude and elevation). In order to understand the flow of energy and matter within an ecosystem, it is necessary to study the feeding relationships of the living organisms within it. Living organisms in an ecosystem are usually grouped according to how they obtain food. Autotrophs that make their own food are known as producers, while heterotrophs that eat other organisms, living or dead, are known as consumers. The producers include land and aquatic plants, algae and microscopic phytoplankton in the ocean. They all make their own food by using chemicals and energy sources from their environment. For example, plants use photosynthesis to manufacture sugar (glucose) from carbon dioxide and water. Using this sugar and other nutrients (e.g., nitrogen, phosphorus) assimilated by their roots, plants produce a variety of organic materials. These materials include: starches, lipids, proteins and nucleic acids. Energy from sunlight is thus fixed as food used by themselves and by consumers. The consumers are classed into different groups depending on the source of their food. Herbivores (e.g. deer, squirrels) feed on plants and are known as primary consumers. Carnivores (e.g. lions, hawks, killer whales) feed on other consumers and can be classified as secondary consumers. They feed on primary consumers. Tertiary consumers feed on other carnivores. Some organisms known as omnivores (e.g., bears, rats and humans) feed on both plants and animals. Organisms that feed on dead organisms are called scavengers (e.g., vultures, ants and flies). Detritivores (detritus feeders, e.g. earthworms, termites, crabs) feed on organic wastes or fragments of dead organisms. Decomposers (e.g. bacteria, fungi) also feed on organic waste and dead organisms, but they digest the materials outside their bodies. The decomposers play a crucial role in recycling nutrients, as they reduce complex organic matter into inorganic nutrients that can be used by producers. If an organic substance can be broken down by decomposers, it is called biodegradable. In every ecosystem, each consumer level depends upon lower-level organisms (e.g. a primary consumer depends upon a producer, a secondary consumer depends upon a primary consumer and a tertiary consumer depends upon a secondary consumer). All of these levels, from producer to tertiary consumer, form what is known as a food chain. A community has many food chains that are interwoven into a complex food web. The amount of organic material in a food web is referred to as its biomass. When one organism eats another, chemical energy stored in biomass is transferred from one level of the food chain to the next. Most of the consumed biomass is not converted into biomass of the consumer. Only a small portion of the useable energy is actually transferred to the next level, typically 10 percent. Each higher level of the food chain represents a cumulative loss of useable energy. The result is a pyramid of energy flow, with producers forming the base level. 
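To make the arithmetic of the energy pyramid concrete, assume the 10 percent transfer efficiency mentioned above at each step from producers to primary, secondary and tertiary consumers: $E_{\text{tertiary}} = E_{\text{producer}} \times 0.10 \times 0.10 \times 0.10 = 0.001 \times E_{\text{producer}}$. That is, only 0.1 percent of the energy fixed by producers remains available at the tertiary consumer level, which is the figure quoted immediately below.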
Assuming 10 percent efficiency at each level, the tertiary consumer level would use only 0.1 percent of the energy available at the initial producer level. Because there is less energy available high on the energy pyramid, there are fewer top-level consumers. A disruption of the producer base of a food chain, therefore, has its greatest effect on the top-level consumer. Ecosystem populations constantly fluctuate in response to changes in the environment, such as rainfall, mean temperature, and available sunlight. Normally, such changes are not drastic enough to significantly alter ecosystems, but catastrophic events such as floods, fires and volcanic eruptions can devastate communities and ecosystems. It may be a long time after such a catastrophic event before a new, mature ecosystem can become established. After a severe disturbance, the makeup of the community is changed. The resulting community of species then continues to change, as early, fast-growing, post-disturbance species are out-competed by other species. This natural process is called ecological succession. It involves two types of succession: primary succession and secondary succession. Primary succession is the development of the first biota in a given region where no life is found. An example of this is an area where volcanic lava has completely covered a region or has built up a new island in the ocean. Initially, only pioneer species can survive there, typically lichens and mosses, which are able to withstand poor conditions. They are able to survive in highly exposed areas with limited water and nutrients. Lichen, which is made up of both a fungus and an alga, survives by mutualism. The fungus produces an acid, which acts to further dissolve the barren rock. The alga uses those exposed nutrients, along with photosynthesis, to produce food for both. Grass seeds may land in the cracks, carried by wind or birds. The grass grows, further cracking the rocks, and upon completing its own life cycle, contributes organic matter to the crumbling rock to make soil. In time, larger plants, such as shrubs and trees, may inhabit the area, offering habitats and niches to immigrating animal life. When the maximum biota that the ecosystem can support is reached, the climax community prevails. This occurs after hundreds if not thousands of years depending on the climate and location. Secondary succession begins at a different point, when an existing ecosystem’s community of species is removed by fire, deforestation, or a bulldozer's work in a vacant lot, leaving only soil. The first few centimeters of this soil may have taken 1000 years to develop from solid rock. It may be rich in humus, organic waste, and may be stocked with ready seeds of future plants. Secondary succession is also a new beginning, but one with a much quicker regrowth of organisms. Depending on the environment, succession to a climax community may only require 100 to 200 years with normal climate conditions, with communities progressing through stages of early plant and animal species, mid-species and late successional species. Some ecosystems, however, can never be regained. BIOMES The biosphere can be divided into relatively large regions called biomes. A biome has a distinct climate and certain living organisms (especially vegetation) characteristic of the region and may contain many ecosystems. The key factors determining climate are average annual precipitation and temperature. 
These factors, in turn, depend on the geography of the region, such as the latitude and elevation of the region, and mountainous barriers. The major types of biomes include: aquatic, desert, forest, grassland and tundra. Biomes have no distinct boundaries. Instead, there is a transition zone called an ecotone, which contains a variety of plants and animals. For example, an ecotone might be a transition region between a grassland and a desert, with species from both. Water covers a major portion of the earth's surface, so aquatic biomes contain a rich diversity of plants and animals. Aquatic biomes can be subdivided into two basic types: freshwater and marine. Freshwater has a low salt concentration, usually less than 1 percent, and occurs in several types of regions: ponds and lakes, streams and rivers, and wetlands. Ponds and lakes range in size, and small ponds may be seasonal. They sometimes have limited species diversity due to isolation from other water environments. They can get their water from precipitation, surface runoff, rivers, and springs. Streams and rivers are bodies of flowing water moving in one general direction (i.e., downstream). Streams and rivers start at their upstream headwaters, which could be springs, snowmelt or even lakes. They continue downstream to their mouths, which may be another stream, river, lake or ocean. The environment of a stream or river may change along its length, ranging from clear, cool water near the head, to warm, sediment-rich water near the mouth. The greatest diversity of living organisms usually occurs in the middle region. Wetlands are places of still water that support aquatic plants, such as cattails, pond lilies and cypress trees. Types of wetlands include marshes, swamps and bogs. Wetlands have the highest diversity of species with many species of birds, fur-bearing mammals, amphibians and reptiles. Some wetlands, such as salt marshes, are not freshwater regions. Marine regions cover nearly three-fourths of the earth's surface. Marine bodies are salty, having approximately 35 grams of dissolved salt per liter of water (3.5 percent). Oceans are very large marine bodies that dominate the earth's surface and hold the largest ecosystems. They contain a rich diversity of living organisms. Ocean regions can be separated into four major zones: intertidal, pelagic, benthic and abyssal. The intertidal zone is where the ocean meets the land. Sometimes, it is submerged and at other times exposed, depending upon waves and tides. The pelagic zone includes the open ocean further away from land. The benthic zone is the region below the pelagic zone, but not including the very deepest parts of the ocean. The bottom of this zone consists of sediments. The deepest parts of the ocean are known as the abyssal zone. This zone is very cold (near freezing temperatures), and under great pressure from the overlying mass of water. Mid-ocean ridges occur on the ocean floor in abyssal zones. Coral reefs are found in the warm, clear, shallow waters of tropical oceans around islands or along continental coastlines. They are mostly formed from calcium carbonate produced by living coral. Reefs provide food and shelter for other organisms and protect shorelines from erosion. Estuaries are partially enclosed areas where fresh water and silt from streams or rivers mix with salty ocean water. They represent a transition from land to sea and from freshwater to saltwater. 
Estuaries are biologically very productive areas and provide homes for a wide variety of plants, birds and animals. Deserts are dry areas where evaporation usually exceeds precipitation. Rainfall is low -- less than 25 centimeters per year -- and can be highly variable and seasonal. The low humidity results in temperature extremes between day and night. Deserts can be hot or cold. Hot deserts (e.g. the Sonoran) are very hot in the summer, have relatively high temperatures throughout the year and have seasonal rainfall. Cold deserts (e.g. the Gobi) are characterized by cold winters and low but year-round precipitation. Deserts have relatively little vegetation and the substrate consists mostly of sand, gravel or rocks. The transition regions between deserts and grasslands are sometimes called semiarid deserts (e.g. the Great Basin of the western United States). Grasslands cover regions where moderate rainfall is sufficient for the growth of grasses, but not enough for stands of trees. There are two main types of grasslands: tropical grasslands (savannas) and temperate grasslands. Tropical grasslands occur in warm regions such as Africa and very limited regions of Australia. They have a few scattered trees and shrubs, but their distinct rainy and dry seasons prevent the formation of tropical forests. Lower rainfall, more variable winter-through-summer temperatures and a near lack of trees characterize temperate grasslands. Prairies are temperate grasslands at fairly high elevation. They may be dominated by long or short grass species. The vast prairies originally covering central North America, or the Great Plains, were the result of favorable climate conditions created by their high elevation and proximity to the Rocky Mountains. Because temperate grasslands are treeless, relatively flat and have rich soil, most have been replaced by farmland. Forests are dominated by trees and can be divided into three types: tropical forests, temperate forests and boreal forests. Tropical forests are always warm and wet and are found at lower latitudes. Their annual precipitation is very high, although some regions may have distinct wet and dry seasons. Tropical forests have the highest biodiversity of the three forest types. Temperate forests occur at mid-latitudes (e.g., North America), and therefore have distinct seasons. Summers are warm and winters are cold. The temperate forests have suffered considerable alteration by humans, who have cleared much of the forest land for fuel, building materials and agricultural use. Boreal forests are located in higher latitudes, like Siberia, where they are known as "taiga." They have very long, cold winters and a short summer season when most of the precipitation occurs. Boreal forests represent the largest biome on the continents. Very low temperatures, little precipitation and low biodiversity characterize tundra. Its vegetation is very simple, with virtually no trees. The tundra can be divided into two different types: arctic tundra and alpine tundra. The arctic tundra occurs in polar regions. It has a very short summer growing season. Water collects in ponds and bogs, and the ground has a subsurface layer of permanently frozen soil known as permafrost. Alpine tundra is found at high elevations in tall mountains. The temperatures are not as low as in the arctic tundra, and it has a longer summer growing season. EVOLUTION OF LIFE Wherever they are found in the biosphere, living organisms are necessarily linked to their environment. 
Ecosystems are dynamic and communities change over time in response to abiotic or biotic changes in the environment. For example, the climate may become warmer or colder, wetter or drier, or the food chain may be disrupted by the loss of a particular population or the introduction of a new one. Species must be able to adapt to these changes in order to survive. As they adapt, the organisms themselves undergo change. Evolution is the gradual change in the genetic makeup of a population of a species over time. It is important to note that it is the population that evolves, rather than individuals. A species evolves to a particular niche either by adapting to use a niche’s environment or by adapting to avoid competition with another species. Recall that no two species can occupy the exact same niche in an ecosystem. The availability of resources is pivotal. In the case of five warbler species that all consume insects from the same tree, each species must gather its food in a different part of the tree in order to survive. This avoids competition and the possible extinction of one or more species. Therefore, one of the bird species will adapt to hunting at the treetops; another, the lowest branches; another, the mid-section. In this way, these species have evolved into different, yet similar, niches. All five species in this way can survive by adapting to a narrow niche. Organisms with a narrow niche are called specialized species. Another example is a species that may evolve to a narrow niche by consuming only one type of leaf, such as the Giant Panda, which consumes bamboo leaves. This strategy allows it to co-exist with another consumer by not competing with it. In both cases, species with a narrow niche are often vulnerable to extinction because they typically cannot respond to changes in the environment. Evolving to a new niche would take too much time for the specialized species under the duress of a drought, for example. On the other hand, a species that can use many foods and many locations in which to hunt or gather is known as a generalized species. In the event of a drought, a generalized species such as a cockroach may be more successful in finding alternative forms of food, and will survive and reproduce. Yet another form of evolution is co-evolution, where species adapt to one another by interacting closely. This relationship can be a predator-prey type of interaction. The prey is at risk, but as a species it has evolved chemical defenses or protective behaviors. On the other hand, co-evolution can be a mutualistic relationship, often exemplified by the relationship between ants and an acacia tree of South America. The acacia provides ants with food and a habitat, and its large projecting thorns provide protection from predators. The ants, in turn, protect the tree by attacking any animal landing on it and by clearing vegetation at its base. So closely evolved are the species that neither can exist without the other. Similar ecosystems may offer similar niches to the organisms that have adapted or evolved to fill them. Convergent evolution is the development of similar adaptations in two species occupying different yet similar ecosystems. Two species evolve independently to respond to the demands of their ecosystem, and they develop the same mechanism to do so. What emerge are look-alike adaptations: the wings of birds and bats are similar, but evolved separately to meet the demands of flying through air. 
The dolphin, a mammal, shares with the extinct reptile ichthyosaur adaptations that allow for movement through water. They have similar streamlined shapes of fins, head, and nose, which make their bodies better suited for swimming. Natural selection is another process that depends on an organism’s ability to survive in a changing environment. While evolution is the gradual change of the genetic makeup over time, natural selection is the force that favors a beneficial set of genes. For example, birds migrating to an island face competition for the insects on a tropical tree. The gene pool of a new generation may include a variant for a longer beak, which allows a bird to reach into a tropical flower for its nectar. When high populations of birds compete for insects, this ability to use the niche of collecting nectar favors that bird’s survival. The long-beak gene is passed to the next generation and the next, because long-beaked birds can coexist with the insect-gathering birds by using a different niche. Through the reproduction of the surviving longer-beaked birds, natural selection favors this adaptation. A species, family or larger group of organisms may eventually come to the end of its evolutionary line. This is known as extinction. While bad news for those that become extinct, it's a natural occurrence that has been taking place since the beginning of life on earth. Extinctions of species are constantly occurring at some background rate, which is normally matched by speciation. Thus, in the natural world, there is a constant turnover of species. Occasionally large numbers of species have become extinct over a relatively short geologic time period. The largest mass extinction event in the earth's history occurred at the end of the Permian period, 245 million years ago. As many as 96 percent of all marine species were lost, while on land more than 75 percent of all vertebrate families became extinct. Although the actual cause of that extinction is unclear, the consensus is that climate change, resulting from sea level change and increased volcanic activity, was an important factor. The most famous of all mass extinctions occurred at the boundary of the Cretaceous and Tertiary periods, 65 million years ago. About 85 percent of species became extinct, including all of the dinosaurs. Most scientists believe that the impact of an asteroid near the Yucatan Peninsula in Mexico triggered that extinction event. The impact probably induced a dramatic change in the world climate. The most serious extinction of mammals occurred about 11,000 years ago, as the last Ice Age was ending. Over a period of just a few centuries, most of the large mammals around the world, such as the mammoth, became extinct. While climate change may have been a factor in their extinction, a new force had also emerged on the earth -- modern humans. Humans, aided by new, sharp-pointed weapons and hunting techniques, may have hurried the demise of the large land mammals. Over the years, human activity has continued to send many species to an early extinction. The best-known examples are the passenger pigeon and the dodo bird, but numerous other species, many of them unknown, are killed off by overharvesting and other human-caused habitat destruction, degradation and fragmentation.
textbooks/bio/Ecology/AP_Environmental_Science/1.05%3A_The_Biosphere.txt
INTRODUCTION A population is a group of individuals living together in a given area at a given time. Changes in populations are termed population dynamics. The current human population is made up of all of the people who currently share the earth. The first humans walked the planet millions of years ago. Since that time, the number of humans living on the planet and where they live have constantly changed. Every birth and death is a part of human population dynamics. Each time a person moves from one location to another, the spatial arrangement of the population is changed, and this, too, is an element of population dynamics. While humans are unique in many ways as a species, they are subject to many of the same limiting forces and unexpected events as all populations of organisms. In 1999, the human population crossed the six billion mark. At current growth rates, the population will double within 50 years. Long ago, when the human population was small, the doubling of the population had little impact on the human population or its environment. However, with the size of today's population, the effect of doubling the population is quite significant. Already, most of the people of the world do not have adequate clean water, food, housing and medical care, and these deficiencies are at least partially the result of overpopulation. As the population continues to grow, competition for resources will increase. Natural disasters and political conflicts will exacerbate the problems, especially in the more stressed regions of developing nations. The survivors of this competition will likely be determined by factors such as place of birth and educational opportunities. POPULATION GROWTH Human populations are not static. They naturally change in size, density and predominance of age groups in response to environmental factors such as resource availability and disease, as well as social and cultural factors. The increases and decreases in human population size make up what is known as human population dynamics. If resources are not limited, then populations experience exponential growth. A plot of exponential growth over time resembles a "J" curve. Absolute numbers are relatively small at first along the base of the J curve, but the population rapidly skyrockets when the critical time near the stem of the J curve is reached. For most of the history of modern humans (Homo sapiens), people were hunter-gatherers. Food, especially meat from large mammals, was usually plentiful. However, populations were small because the nomadic life did not favor large family sizes. During those times, the human population was probably not more than a few million worldwide. It was still in the base of the J growth curve. With the end of the last Ice Age, roughly 10,000 years ago, the climates worldwide changed and many large mammals that had been the mainstay of human diet became extinct. This forced a change in diet and lifestyle, from that of the nomadic hunter-gatherer to that of a more stationary agricultural society. Humans began cultivating food and started eating more plants and less meat. Having larger families was possible with the more stationary lifestyle. In fact, having a large family increasingly became an asset, as extra hands were needed for maintaining crops and homes. As agriculture became the mainstay of human life, the population increased. As the population increased, people began living in villages, then in towns and finally in cities. 
This led to problems associated with overcrowded conditions, such as the buildup of wastes, poverty and disease. Large families were no longer advantageous. Infanticide was common during medieval times in Europe, and communicable diseases also limited the human population numbers. Easily spread in crowded, rat-infested urban areas, the Black Death, the first major outbreak of the bubonic plague (1347-1351), drastically reduced the populations in Europe and Asia, possibly by as much as 50 percent. Starting in the 17th century, advances in science, medicine, agriculture and industry allowed rapid growth of human population and infanticide again became a common practice. The next big influence on the human population occurred with the start of the Industrial Revolution in the late 18th century. With the advent of factories, children became valuable labor resources, thereby contributing to survival, and family sizes increased. The resulting population boom was further aided by improvements in agricultural technology that led to increased food production. Medical advancements increased control over disease and lengthened the average lifespan. By the early 19th century, the human population worldwide reached one billion. It was now in the stem of the J curve graph. As the world approached the 20th century, the human population was growing at an exponential rate. During the 20th century, another important event in human population dynamics occurred. The birth rates in the highly developed countries decreased dramatically. Factors contributing to this decrease included: a rise in the standard of living, the availability of practical birth control methods and the establishment of child education and labor laws. These factors made large families economically impractical. In Japan, the birth rate has been so low in recent years that the government and corporations are worried about future labor shortages. Therefore, they are actively encouraging population growth. In contrast, the populations in less well-developed countries continue to soar. Worldwide, the human population currently exceeds six billion and continues to grow exponentially. How much more the world population will grow is a topic of intense speculation. One thing is certain: exponential growth cannot continue forever, as earth's resources are limited. POPULATION DEMOGRAPHICS Human demography (population change) is usually described in terms of the births and deaths per 1000 people. When the births in an area exceed deaths, the population increases. When the births in an area are fewer than deaths, the population decreases. The annual rate at which the size of a population changes is: $\text{Natural Population Change Rate } (\%) = \dfrac{\text{Births} - \text{Deaths}}{1000} \times 100$ During the year 2000, the birth rate for the world was 22 and the death rate was 9. Thus, the world's population grew at a rate of 1.3 percent. The annual rate of population change for a particular city or region is also affected by immigration (movement of people into a region) and emigration (movement out of a region). $\text{Population Change Rate} = (\text{Birth rate} + \text{Immigration rate}) - (\text{Death rate} + \text{Emigration rate})$ Highly industrialized nations, like the United States, Canada, Japan and Germany, generally have low birth and death rates. 
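As a quick numerical check of these formulas, here is a minimal Python sketch. The 22 births and 9 deaths per 1000 people are the year-2000 world figures cited above; the function names and the doubling-time calculation (which assumes steady exponential growth) are illustrative additions, not part of the original text.

import math

def natural_change_rate(births_per_1000, deaths_per_1000):
    """Annual natural population change rate, as a percentage."""
    return (births_per_1000 - deaths_per_1000) / 1000 * 100

def doubling_time_years(rate_percent):
    """Years needed to double a population growing steadily at rate_percent per year."""
    return math.log(2) / (rate_percent / 100)

rate = natural_change_rate(22, 9)                        # world figures for 2000: 1.3 percent per year
print(round(rate, 1), round(doubling_time_years(rate)))  # prints: 1.3 53

The roughly 53-year doubling time is in line with the statement earlier in this chapter that, at current growth rates, the population would double within about 50 years.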
Annual rates of natural population change vary from -0.1% to 0.5%. In some industrial nations (e.g. Germany and Russia) death rates exceed birth rates, so the net population decreases over time. Newly industrialized countries (e.g. South Korea, Mexico and China) have moderate birth rates and low death rates. The low death rates result from better sanitation, better health care and stable food production that accompany industrialization. The annual rates of natural population change are about 1 percent to 2 percent in these countries. Countries with limited industrial development (e.g. Pakistan and Ethiopia) tend to have high birth rates and moderate to low death rates. These nations are growing rapidly with annual rates of natural population change exceeding 2 percent. Several factors influence human fertility. Important factors influencing birth and fertility rates in human populations include affluence, average marriage age, availability of birth control, family labor needs, cultural beliefs, religious beliefs and the cost of raising and educating children. The rapid growth of the world's population over the past 100 years mainly results from a decline in death rates. Reasons for the drop in death rates include: better nutrition, fewer infant deaths, increased average life span and improvements in medical technology. As countries become developed and industrialized, they experience a movement from high population growth to low population growth. Both death and birth rates decline. These countries usually move from rapid population growth, to slow growth, to zero growth and finally to a reduction in population. This shift in growth rate with development is called the "demographic transition." Four distinct stages occur during the transition: pre-industrial, transitional, industrial and post-industrial. During the pre-industrial stage, harsh living conditions result in a high birth rate and a high death rate. The population grows very slowly, if at all. The transitional stage begins shortly after industrialization. During this phase, the death rate drops because of increased food production and better sanitation and health conditions, but the birth rate remains high. Therefore, the population grows rapidly. During the industrial stage, industrialization is well established in the country. The birth rate drops and eventually approaches the death rate. Couples in cities realize that children are expensive to raise and that having large families restricts their job opportunities. The post-industrial stage occurs when the birth rate declines even further to equal the death rate, thus population growth reaches zero. The birth rate may eventually fall below the death rate, resulting in negative population growth. The United States and most European countries have experienced this gradual transition over the past 150 years. The transition moves much faster for today's developing countries. This is because improvements in preventive health and medical care in recent decades have dramatically reduced mortality -- especially infant mortality -- and increased life expectancy. In a growing number of countries, couples are having fewer children than the two they need to "replace" themselves. However, even if the level of "replacement fertility" were reached today, populations would continue to grow for several decades because of the large numbers of people now entering their reproductive years. 
As a result of reduced fertility and mortality, there will be a gradual demographic shift in all countries over the next few decades towards an older population. In developed countries, the proportion of people over age 65 has increased from 8 to 14 percent since 1950, and is expected to reach 25 percent by 2050. Within the next 35 years, those over age 65 will represent 30 percent or more of the populations in Japan and Germany. In some countries, the number of residents over age 85 will more than double. PATTERNS OF RESOURCE USE Humans have always made an impact on the environment through their use of resources. Early humans were primarily hunter-gatherers who used tools to survive. They fashioned wood and stone tools for hunting and food preparation, and used fire for cooking. Early humans developed methods for changing habitat to suit their needs and for herding wild animals. As time passed, humans developed more tools and techniques and came to rely on that technology in their daily lives. Although the tools of early humans were primitive by today's standards, they significantly affected the environment and probably hastened the extinction of some large Ice Age mammals. After the end of the last Ice Age, some 8,000 to 10,000 years ago, humans began domesticating wild animals and plants. The first known instance of farming started in a region extending from southeastern Turkey to western Iran, known as the Fertile Crescent. These early farmers domesticated crops such as chickpea, bitter vetch, grapes, olives, barley, emmer wheat, lentils, and flax. They hybridized bread wheat from wild grass and emmer wheat. They also domesticated animals such as sheep, goats, cattle and pigs. The Fertile Crescent's unique diversity of wild crops and animals offered humans a mix of basic agricultural commodities that allowed a revolution in the development of human society. With a reliable food supply, humans were able to stay in one place and be assured of having a constant supply of carbohydrates, protein, milk and oil. They had animals for transportation and plant and animal materials for producing clothing and rope. Agricultural economies soon displaced hunter-gatherer economies. Within 2,000 years, farming ranged from Pakistan to southern Italy. Most early agriculture was subsistence farming in which farmers grew only enough food to feed their families. Agriculture underwent another important revolution about 5,000 years ago with the invention of the plow. The plow allowed humans to clear and farm larger plots of land than was otherwise possible. This increased the food supply and led to a concomitant increase in human population growth. More efficient farming methods also resulted in urbanization because a few farmers could produce a large surplus of food to feed those in the urban areas. Over the last 10,000 years, land clearing for agriculture has destroyed and degraded the habitats of many species of plants and animals. Today, growing populations in less developed countries are rapidly clearing tropical forests and savannas for agricultural use. These tropical rainforests and savannas provide habitat for most of the earth's species. It has become clear that modern agricultural practices are not sustainable. Once-fertile areas are becoming infertile because of overgrazing, erosion and nutrient depletion. Furthermore, modern agriculture requires large inputs of energy and fertilizers, usually produced from nonrenewable fossil fuels. 
The next major cultural change, the Industrial Revolution, began in England in the mid-18th century. It involved a shift from small-scale production of goods by hand to large-scale production of goods by machines. Industrial production of goods increased the consumption of natural resources such as minerals, fuel, timber and water by cities. After World War I, more efficient mass production techniques were developed, and industrialization became prevalent in the economies of the United States, Canada, Japan and western Europe. Advanced industrialization leads to many changes in human society, and some of those changes negatively affect the supply of natural resources and result in environmental degradation. These changes include: increased production and consumption of goods by humans, dependence on non-renewable resources such as oil and coal, production of synthetic materials (which may be toxic or non-biodegradable) and consumption of large amounts of energy at home and work. Other changes may have positive effects. These include: creation and mass production of useful and affordable products, significant increases in the average Gross National Product per person, large increases in agricultural productivity, sharp rises in average life expectancy and a gradual decline in population growth rates. The information age was born with the invention of miniaturized electronics such as integrated circuits and computer central processing units. This stage in human development has changed and continues to change society as we know it. Information and communication have become the most-valued resources. This shift, in turn, may lessen our influence on the earth's environment through reduced natural resource consumption. For instance, in recent years energy use in the United States has not increased to the extent expected from economic growth. Online shopping, telecommuting and other Internet activities may be lessening human energy consumption. By making good use of information technologies, less developed countries may be able to reduce potential environmental problems as their economies expand in the future. With so much information easily available, developing countries may not repeat the environmental mistakes that more developed countries made as they became industrialized.
textbooks/bio/Ecology/AP_Environmental_Science/1.06%3A_History_and_Global_Distribution.txt
INTRODUCTION The human carrying capacity is a concept explored by many people, most famously Thomas Robert Malthus (1766 - 1834), for hundreds of years. Carrying capacity, "K," refers to the number of individuals of a population that can be sustained indefinitely by a given area. At carrying capacity, the population will have an impact on the resources of the given area, but not to the point where the area can no longer sustain the population. Just as a population of wildebeest or algae has a carrying capacity, so does a human population. Humans, while subject to the same ecological constraints as any other species (a need for nutrients, water, etc.), have some features as individuals and some as a population that make them a unique species. Unlike most other organisms, humans have the capacity to alter their number of offspring, level of resource consumption and distribution. While most women around the world could potentially have the same number of children during their lives, the number they actually have is affected by many factors. Depending upon technological, cultural, economic and educational factors, people around the world have families of different sizes. Additionally, unlike other organisms, humans invent and alter technology, which allows them to change their environment. This ability makes it difficult to determine the human K. EFFECTS OF TECHNOLOGY AND THE ENVIRONMENT When scholars in the 1700s estimated the total number of people the earth could sustain, they were living in a very different world from ours. Today airplanes can transport people and food halfway around the world in a matter of hours, not weeks or months, as was the case with ships in the 1700s. Today we have sophisticated, powered farm equipment that can rapidly plow, plant, fertilize and harvest acres of crops a day. One farmer can cultivate hundreds of acres of land. This is a far cry from the draft-animal plowing, hand planting and hand harvesting performed by farmers in the 1700s. Additionally, synthetic fertilizers, pesticides and modern irrigation methods allow us to produce crops on formerly marginal lands and increase the productivity of other agricultural lands. With the increase in the amount of land that each individual can farm, food production has increased. This increased food production, in turn, has increased the potential human K relative to estimates from the 1700s. Whereas technological advances have increased the human K, changes in environmental conditions could potentially decrease it. For example, a global or even a large regional change in the climate could reduce K below current estimates. Coastal flooding due to rising ocean levels associated with global warming and desertification of agricultural lands resulting from poor farming practices or natural climate variation could cause food production to be less than that upon which the human carrying capacity was originally estimated. There are those who believe that advances in technology and other knowledge will continue to provide the means to feed virtually any human population size. Those who subscribe to this philosophy believe that this continuous innovation will "save us" from ourselves and changes in the environment. Others believe that technology will itself reach a limit to its capabilities. This group argues that resources on earth -- including physical space -- are limited and that eventually we must learn to live within our means. 
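For readers who want the standard formalization of carrying capacity, the logistic growth model (a textbook convention, not part of the original passage) expresses K as the level at which growth stops: $\dfrac{dN}{dt} = rN\left(1 - \dfrac{N}{K}\right)$, where $N$ is the population size, $r$ is the intrinsic (unconstrained) growth rate, and $K$ is the carrying capacity. When $N$ is far below $K$ the population grows almost exponentially; as $N$ approaches $K$ the growth rate falls toward zero, which is the mathematical counterpart of a population being sustained indefinitely by its area.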
Aside from the physical limitations of the earth’s natural resources and food production capabilities, we must consider the conditions we are willing to live with. EFFECT OF STANDARD OF LIVING Given the wherewithal to do so, humans have aesthetic expectations in their daily lives. This is a consideration that is less evident in other species. While the earth might be able to hold many more than the current human population of six billion (estimates of the human K with current technology go as high as 50 billion), at some point people will find it unacceptable to live with the crowding and pollution issues associated with a dramatic increase in population. The qualitative measure of a person’s or population's quality of life is called its standard of living. It is associated not only with aesthetics of surroundings and levels of noise, air and water pollution, but also with levels of resource consumption. Americans have one of the world’s highest standards of living. While there are many who live in poverty in the United States, on average we have relatively small families, large homes, many possessions, plentiful food supplies, clean water and good medical care. This is not the case in most of the developing world. While many nations have larger average family sizes, they have smaller homes, fewer possessions and less food. Supplies of clean water may be scarce and medical care may be inadequate. All people desire to have adequate resources to provide good care for their families, and thus populations in most developing countries are striving for the standard of living of developed nations. Is it possible for all six billion people on earth to live at the same level of resource use as in the United States, Japan and Western Europe? With current technology, the answer is "no." However, this does not mean that the people of one nation are more or less entitled to a given standard of living than those of another. What it does mean for citizens of nations like the United States is that we must reduce our current use of resources. Of all of the food purchased by the average American family, 10 percent is wasted. In addition, because most Americans are not vegetarians, we tend to eat high on the food chain, which requires more resources than a vegetarian diet. Calculations of ecological efficiency indicate that from one trophic level on the food chain to the next, there is only a 10 percent efficiency in the transfer of energy. Thus people who predominantly eat grains, fruits and vegetables are getting more out of the energy required to produce the food than those who eat a lot of meat. The calories that a person gets from beef are much fewer than the calories in the grain required to raise the cattle. The person is better off skipping the middleman -- or middle cow in this case -- and eating the grain. This is why many more people can be sustained on a diet that consists of a larger percentage of rice, millet or wheat, rather than of fish, beef or chicken. In addition to resources used to provide food, Americans use disproportionate amounts of natural resources such as trees (for paper, furniture and building, among other things) and fossil fuels (for automobiles, homes and industry). We also produce a great amount of "quick waste." Packaging that comes on food in the grocery store is a good example of quick waste. The hard plastic packaging used for snack foods that is immediately removed and thrown away and plastic grocery bags are both examples of quick waste. 
Thus, patronizing fast food restaurants increases resource consumption and solid waste production at the same time. The good news for the environment (from both a solid waste and a resource use standpoint) is that we can easily reduce the amount of goods and resources that we use and waste without drastically affecting our standard of living. By properly inflating their car tires, Americans could save millions of barrels of oil annually. If we were to use more renewable energy resources -- like solar and wind power as opposed to petroleum and nuclear energy -- there would be a reduced need to extract non-renewable resources from the earth. The amount of packaging used for goods could also be reduced. Reusable canvas bags could be used for shopping, and plastic and paper grocery bags could be reused. At home, many waste materials could be recycled instead of being thrown away. These relatively easy steps could reduce the overall ecological impact that each person has on the earth. This impact is sometimes termed a person's ecological footprint. The smaller each person's ecological footprint, the greater the standard of living possible for each person.
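The ten percent ecological-efficiency rule described above can be made concrete with a short back-of-the-envelope calculation. The 10 percent transfer efficiency comes from the text; the calorie figures below are round, assumed numbers used only for illustration.

```python
# A minimal back-of-the-envelope calculation of the "10 percent rule"
# described above. The 10% transfer efficiency comes from the text; the
# daily calorie requirement and grain pool are round, assumed numbers.

TRANSFER_EFFICIENCY = 0.10       # fraction of energy passed up each trophic level
DAILY_KCAL_PER_PERSON = 2000     # assumed daily food-energy requirement

grain_kcal_available = 1_000_000  # hypothetical pool of grain energy (kcal)

# Eating the grain directly:
people_fed_on_grain = grain_kcal_available / DAILY_KCAL_PER_PERSON

# Feeding the grain to cattle first, then eating the beef:
beef_kcal_available = grain_kcal_available * TRANSFER_EFFICIENCY
people_fed_on_beef = beef_kcal_available / DAILY_KCAL_PER_PERSON

print(f"People fed for a day on grain: {people_fed_on_grain:.0f}")
print(f"People fed for a day on beef:  {people_fed_on_beef:.0f}")
# With a 10% transfer efficiency, the same grain supports ten times as many
# people when eaten directly as when routed through cattle.
```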
textbooks/bio/Ecology/AP_Environmental_Science/1.07%3A_Carrying_Capacity.txt
INTRODUCTION Families in developing nations are often larger, but less resource intensive (i.e., they use fewer resources per person) than those in more developed nations. However, human populations increasingly wish to have a "western" standard of living. An increase in the world’s average standard of living significantly lowers the potential human carrying capacity of the earth. Therefore, in order to reduce their impact as a species, humans must not only reduce the resources they use per person but also reduce their average family size. Determining ways to reduce family size requires an understanding of the many factors that influence it and of the resultant population dynamics of the region. Many economic and cultural influences affect family size. Depending upon the prevailing cultural values and economic forces, a nation's people can be induced to have larger or smaller families. Although human population dynamics are often considered on a global scale, factors that affect population growth vary in different parts of the world. Therefore, it is essential to understand the different forces acting on people throughout the world. ECONOMIC FACTORS Some of the factors influencing family size -- and therefore population growth -- are economic ones. These factors are probably the most easily understood. For instance, a rural agricultural family in a developing country that relies upon a plow pulled by a water buffalo needs many family members to take care of the planting, harvesting and marketing of crops. A family of three would not provide enough labor to sustain the family business. In contrast, families in developed countries tend to be small for economic reasons. It is expensive to raise children at the relatively high standard of living found in such countries. Considerable resources must be devoted to food, clothes, transportation, entertainment and schooling. A large proportion of children from developed countries attend college, thus adding even more to the expense. Therefore, it is economically prudent in such countries for families to have few children. Obviously, there are technological and educational ways to negate the need for many children. If the farm family in a developing country is able to obtain better farming tools and information, they can improve the farm’s production by irrigating crops and by using techniques such as crop rotation (e.g., planting different crops in different years to maintain soil fertility, prevent erosion and maximize yields). With the acquisition of such new tools and farming techniques, fewer family members are required to work the same amount of land. The land may become more productive, even with less manual labor. Additional economic factors -- such as the cost of medical care and retirement care -- also play a role in family size. If a family is unable to afford adequate medical care, then family planning services and birth control materials may not be attainable. Also, when mortality rates for children are high and significant numbers of children do not live to adulthood, there is a strong motivation to have as many children as possible. Doing so ensures that some of the children will live to help in the family business and provide a link to posterity. Without national social security programs like those in the United States and Sweden, the elderly in developing countries rely on younger, working members of their families to support them in their retirement. A larger family means a more secure future. 
The expense of a national social security program also acts to reduce family size in a country, as the high taxes imposed on workers to support the system make supporting large families difficult. CULTURAL FACTORS Around the globe, cultural factors influence family size and, as a result, affect population growth rates. From a cultural standpoint, religion can have a profound effect on family planning. Many religions promote large families as a way to further the religion or to glorify a higher power. For example, Orthodox Judaism encourages large families in order to perpetuate Judaism. Roman Catholicism promotes large families for the same reason, and forbids the use of any "artificial" means of birth control. Devout followers of a religion with such values often have large families even in the face of other factors, such as economic ones. This can be seen in countries like Israel (Judaism) and Brazil (Catholicism), which have high percentages of religious followers in their populations. Both countries have high birth rates and high population growth rates. Various factors involving women can also affect family sizes. These factors include: education and employment opportunities available to women, the marriage age of women and the societal acceptance of birth control methods. These factors are sometimes strongly influenced by society’s cultural attitudes towards women. Around the world, statistics indicate that women with higher levels of education are more likely to be employed outside the home; in addition, the higher the marriage age of women and the greater the acceptance of birth control methods, the smaller the family size tends to be. It is clear that increasing educational and professional opportunities for women would reduce overall population growth and improve standards of living worldwide.
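The population growth rates discussed in this section are often interpreted through the doubling time of an exponentially growing population, t_d = ln(2) / r. That formula and the sketch below are not part of the chapter text; the growth rates used are example values chosen for illustration.

```python
# Not part of the chapter text: a standard way to interpret the population
# growth rates discussed above is the doubling time of an exponentially
# growing population, t_d = ln(2) / r. The rates below are example values.

import math

def doubling_time_years(annual_growth_rate):
    """Doubling time for continuous exponential growth at rate r per year."""
    return math.log(2) / annual_growth_rate

for rate in (0.005, 0.01, 0.02, 0.03):   # 0.5%, 1%, 2%, 3% per year
    print(f"growth rate {rate:.1%}: doubles in about "
          f"{doubling_time_years(rate):.0f} years")
# A seemingly small difference in growth rate -- 1% versus 3% per year --
# cuts the doubling time from roughly 70 years to roughly 23 years.
```

This is why the economic and cultural factors above matter so much: modest changes in average family size translate into large differences in how quickly a population doubles.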
textbooks/bio/Ecology/AP_Environmental_Science/1.08%3A_Population_Growth.txt
INTRODUCTION Water is an abundant substance on earth and covers 71 percent of the earth's surface. Earth’s water consists of three percent freshwater and 97 percent saltwater. All living organisms require water in order to live. In fact, they are mostly composed of water. Water is also important for other reasons: as an agent of erosion it changes the morphology of the land; it acts as a buffer against extreme climate changes when present as a large body of water, and it helps flush away and dilute pollutants in the environment. The physical characteristics of water influence the way life on earth exists. The unique characteristics of water are: 1. Water is a liquid at room temperature and over a relatively wide temperature range (0-100°C). This wide range encompasses the annual mean temperature of most biological environments. 2. A relatively large amount of energy is required to raise the temperature of water (i.e., it has a high heat capacity). As a result of this property, large bodies of water act as buffers against extreme fluctuations in the climate, water makes an excellent industrial coolant, and it helps protect living organisms against sudden temperature changes in the environment. 3. Water has a very high heat of vaporization. Water evaporation helps distribute heat globally; it provides an organism with the means to dissipate unwanted heat. 4. Water is a good solvent and provides a good medium for chemical reactions, including those that are biologically important. Water carries nutrients to an organism's cells and flushes away waste products, and it allows the flow of ions necessary for muscle and nerve functions in animals. 5. Liquid water has a very high surface tension, the force holding the liquid surface together. This, along with its ability to adhere to surfaces, enables the upward transport of water in plants and soil by capillary action. 6. Solid water (ice) has a lower density than liquid water at the surface of the earth. If ice were denser than liquid water, it would sink rather than float, and bodies of water in cold climates would eventually freeze solid, killing the organisms living in them. Freshwater comprises only about three percent of the earth's total water supply and is found as either surface water or groundwater. Surface water starts as precipitation. That portion of precipitation which does not infiltrate the ground is called runoff. Runoff flows into streams and lakes. The land area that drains into a given stream or lake is called its watershed, or drainage basin. Precipitation that infiltrates the ground and becomes trapped in cracks and pores of the soil and rock is called groundwater. If groundwater is stopped by an impermeable barrier of rock, it can accumulate until the porous region becomes saturated. The top of this accumulation is known as the water table. Porous layers of sand and rock through which groundwater flows are called aquifers. Most freshwater is locked up in frozen glaciers or deep groundwater where it is not useable by most living organisms. Only a tiny fraction of the earth's total water supply is therefore usable freshwater. Still, the amount available is sufficient to maintain life because of the natural water cycle. In the water cycle, water constantly accumulates, becomes purified, and is redistributed. Unfortunately, as human populations across the globe increase, their activities threaten to overwhelm the natural cycle and degrade the quality of available water. AGRICULTURAL WATER USE Agriculture is the single largest user of water in the world. 
Most of that water is used for irrigating crops. Irrigation is the process of transporting water from one area to another for the purpose of growing crops. The water used for irrigation usually comes from rivers or from groundwater pumped from wells. The main reason for irrigating crops is that it increases yields. It also allows the farming of marginal land in arid regions that would normally not support crops. There are several methods of irrigation: flood irrigation, furrow irrigation, drip irrigation and center-pivot irrigation. Flood irrigation involves the flooding of a crop area located on generally flat land. This gravity-flow method is relatively easy to implement, especially if the natural flooding of river plains is utilized, and therefore is cost-effective. However, much of the water used in flood irrigation is lost, either by evaporation or by percolation into soil adjacent to the intended area of irrigation. Because farmland must be flat for flood irrigation to be used, flood irrigation is only practical in certain areas (e.g. river flood plains and bottomlands). In addition, because land is completely flooded, salts from the irrigation water can build up in the soil, eventually rendering it infertile. Furrow irrigation also involves gravity flow of water on relatively flat land. However, in this form of irrigation, the water flow is confined to furrows or ditches between rows of crops. This allows better control of the water and, therefore, less water is needed and less is wasted. Because water can be delivered to the furrows from pipes, the land does not need to be completely flat. However, furrow irrigation involves higher operating costs than flood irrigation due to the increased labor and equipment required. It, too, involves large evaporative loss. Drip irrigation involves delivering small amounts of water directly to individual plants. Water is released through perforated tubing mounted above or below ground near the roots of individual plants. This method was originally developed in Israel for use in arid regions having limited water available for irrigation. It is highly efficient, with little waste of water. Some disadvantages of drip irrigation are the high costs of installation and maintenance of the system. Therefore, it is only practical for use on high-value cash crops. Center-pivot sprinkler systems deliver water to crops from sprinklers mounted on a long boom, which rotates about a center pivot. Water is pumped to the pivot from a nearby irrigation well. This system has the advantage that it is very mobile and can be moved from one field to another as needed. It can also be used on uneven cropland, as the moving boom can follow the contours of the land. Center-pivot systems are widely used in the western plains and southwest regions of the United States. With proper management, properly designed systems can be almost as efficient as drip irrigation systems. Center-pivot systems have high initial costs and require a nearby irrigation well capable of providing a sufficiently high flow. Constant irrigation with well water can also lead to salinization of the soil. DOMESTIC AND INDUSTRIAL WATER USE Water is important for all types of industries (e.g., manufacturing, transportation and mining). Manufacturing sites are often located near sources of water. Among other properties, water is an excellent and inexpensive solvent and coolant. Many manufactured liquid products have water as their main ingredient. 
Chemical solutions used in industrial and mining processes usually have an aqueous base. Manufacturing equipment is cooled by water and cleaned with water. Water is even used as a means of transporting goods from one place to another in manufacturing. Nuclear power plants use water to moderate and cool the reactor core as well as to generate electricity. Industry would literally come to a standstill without water. People use water for domestic purposes such as personal hygiene, food preparation, cleaning, and gardening. Developed countries, especially the United States, tend to use a great deal of water for domestic purposes. Water used for personal hygiene accounts for the bulk of domestic water use. For example, the water used in a single day in sinks, showers, and toilets in Los Angeles would fill a large football stadium. Humans require a reliable supply of potable water; otherwise serious health problems involving water-borne diseases can occur. This requires the establishment and maintenance of municipal water treatment plants in large populated areas. Much clean water is wasted in industrial and domestic use. In the United States this is mainly due to the generally low cost of water. Providing sufficient quantities of clean water in large population areas is becoming a growing problem, though. Conservation measures can minimize the problem: redesigning manufacturing processes to use less water; using vegetation that requires less water for landscaping in arid regions; using water-conserving showers and toilets; and reusing gray water for irrigation purposes. CONTROL OF WATER RESOURCES Households and industry both depend on reliable supplies of clean water. Therefore, the management and protection of water resources is important. Constructing dams across flowing rivers or streams and impounding the water in reservoirs is a popular way to control water resources. Dams have several advantages: they allow long-term water storage for agricultural, industrial and domestic use; they can provide hydroelectric power production and downstream flood control. However, dams disrupt ecosystems, they often displace human populations and destroy good farmland, and eventually they fill with silt. Humans often tap into the natural water cycle by collecting water in man-made reservoirs or by digging wells to remove groundwater. Water from those sources is channeled into rivers, man-made canals or pipelines and transported to cities or agricultural lands. Such diversion of water resources can seriously affect the regions from which water is taken. For example, the Owens Valley region of California became a desert after water projects diverted most of the Sierra Nevada runoff to the Los Angeles metropolitan area. This brings up the question of who owns (or has the rights to) water resources. Water rights are usually established by law. In the eastern United States, the "Doctrine of Riparian Rights" is the basis of rights of use. Anyone whose land is next to a flowing stream can use the water as long as some is left for people downstream. Things are handled differently in the western United States, where a "first-come, first-served" approach known as the "Principle of Prior Appropriation" is used. By using water from a stream, the original user establishes a legal right for the ongoing use of the water volume originally taken. Unfortunately, when there is insufficient water in a stream, downstream users suffer. The case of the Colorado River highlights the problem of water rights. 
The federal government built a series of dams along the Colorado River, which drains a huge area of the southwestern United States and northern Mexico. The purpose of the project was to provide water for cities and towns in this arid area and for crop irrigation. However, as more and more water was withdrawn from these dams, less water was available downstream. Only a limited volume of water reached the Mexican border, and this water was saline and unusable. The Mexican government complained that their country was being denied use of water that was partly theirs, and as a result a desalinization plant was built to provide a flow of usable water. Common law generally gives property owners rights to the groundwater below their land. However, a problem can arise in a situation where several property owners tap into the same groundwater source. The Ogallala Aquifer, which stretches from Wyoming to Texas, is used extensively by farmers for irrigation. However, this use is leading to groundwater depletion, as the aquifer has a very slow recharge rate. In such cases as this, a general plan of water use is needed to conserve water resources for future use. Water Diversion Water is necessary for all life, as well as for human agriculture and industry. Great effort and expense have gone into diverting water from where it occurs naturally to where people need it to be. The large-scale redistribution of such a vital resource has consequences for both people and the environment. The three projects summarized below illustrate the costs and benefits and complex issues involved in water diversion. Garrison Diversion Project The purpose of the Garrison Diversion Project was to divert water from the Missouri River to the Red River in North Dakota, along the way irrigating more than a million acres of prairie, attracting new residents and industries, and providing recreation opportunities. Construction began in the 1940s, and although $600 million has been spent, only 120 miles of canals and a few pumping stations have been built. The project has not been completed due to financial problems and widespread objections from environmentalists, neighboring states, and Canada. Some object to flooding rare prairie habitats. Many are concerned that moving water from one watershed to another will also transfer non-native and invasive species that could attack native organisms, devastate habitats, and cause economic harm to fishing and other industries. As construction and maintenance costs skyrocketed, taxpayers expressed concern that excessive public money was being spent on a project with limited public benefits. Melamchi Water Supply Project The Kathmandu Valley in Nepal is an important urban center with insufficient water supplies. One million people receive piped water for just a few hours a day. Groundwater reservoirs are being drained, and water quality is quite low. The Melamchi Water Supply Project will divert water to Kathmandu through a 28 km tunnel from the Melamchi River in a neighboring valley. Expected to cost half a billion dollars, the project will include improved water treatment and distribution facilities. While the water problems in the Kathmandu Valley are severe, the project is controversial. Proponents say it will improve public health and hygiene and stimulate the local economy without harming the Melamchi River ecosystem. Opponents suggest that the environmental safeguards are inadequate and that a number of people will be displaced. 
Perhaps their biggest objection is that the project will privatize the water supply and raise costs beyond the reach of the poor. They claim that cheaper and more efficient alternatives have been ignored at the insistence of international banks, and that debt on project loans will cripple the economy. South to North Water Diversion Project Many of the major cities in China are suffering from severe water shortages, especially in the northern part of the country. Overuse and industrial discharge have caused severe water pollution. The South to North Water Diversion project is designed to shift enormous amounts of water from rivers in southern China to the dry but populous northern half of the country. New pollution control and treatment facilities to be constructed at the same time should improve water quality throughout the country. The diversion will be accomplished by the creation of three man-made rivers, each more than 1,000 km long. They will together channel nearly 50 billion cubic meters of water annually, creating the largest water diversion project in history. Construction is expected to take 10 years and cost $60 billion, but after 2 years of work, the diversion is already over budget. Such a massive shift in water resources will have large environmental consequences throughout the system. Water levels in rivers and marshes will drop sharply in the south and rise in the north. People and wildlife will be displaced along the courses of the new rivers. Despite its staggering scale, the South to North Project alone will not be sufficient to solve water shortages. China still will need to increase water conservation programs, make industries and agriculture more water efficient, and raise public awareness of sustainable water practices.
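The scarcity figures quoted at the start of this section can be made concrete with a small arithmetic sketch. The 3 percent freshwater share comes from the text; the assumption that roughly 99 percent of that freshwater is locked in ice and deep groundwater is a commonly cited approximation, not a figure from this chapter.

```python
# A small arithmetic sketch of the freshwater scarcity described above.
# The 3% freshwater share comes from the text; the 99% "locked up" figure
# (ice caps, glaciers, deep groundwater) is an assumed approximation.

TOTAL_WATER_FRACTION_FRESH = 0.03     # from the text: ~3% of earth's water is fresh
FRESHWATER_FRACTION_LOCKED = 0.99     # assumed: frozen or too deep to use

usable_fraction_of_all_water = TOTAL_WATER_FRACTION_FRESH * (1 - FRESHWATER_FRACTION_LOCKED)

print(f"Freshwater share of all water:     {TOTAL_WATER_FRACTION_FRESH:.1%}")
print(f"Readily usable share of all water: {usable_fraction_of_all_water:.2%}")
# Under these assumptions only a few hundredths of one percent of the
# planet's water is readily available surface water and shallow groundwater,
# which is why the water cycle's constant purification and redistribution
# matters so much.
```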
textbooks/bio/Ecology/AP_Environmental_Science/1.09%3A_Water.txt
INTRODUCTION The earth's crust is composed of many kinds of rocks, each of which is an aggregate of one or more minerals. In geology, the term mineral describes any naturally occurring solid substance with a specific composition and crystal structure. A mineral’s composition refers to the kinds and proportions of elements making up the mineral. The way these elements are packed together determines the structure of the mineral. More than 3,500 different minerals have been identified. Only 12 elements (oxygen, silicon, aluminum, iron, calcium, magnesium, sodium, potassium, titanium, hydrogen, manganese, phosphorus) occur in the earth's crust at abundances of 0.1 percent or more. All other naturally occurring elements are found in very minor or trace amounts. Silicon and oxygen are the most abundant crustal elements, together comprising more than 70 percent by weight. It is therefore not surprising that the most abundant crustal minerals are the silicates (e.g. olivine, Mg2SiO4), followed by the oxides (e.g. hematite, Fe2O3). Other important types of minerals include: the carbonates (e.g. calcite, CaCO3), the sulfides (e.g. galena, PbS) and the sulfates (e.g. anhydrite, CaSO4). Most of the abundant minerals in the earth's crust are not of commercial value. Economically valuable minerals (metallic and nonmetallic) that provide the raw materials for industry tend to be rare and hard to find. Therefore, considerable effort and skill is necessary for finding where they occur and extracting them in sufficient quantities. ECONOMIC VALUE OF MINERALS Minerals that are of economic value can be classified as metallic or nonmetallic. Metallic minerals are those from which valuable metals (e.g. iron, copper) can be extracted for commercial use. Metals that are considered geochemically abundant occur at crustal abundances of 0.1 percent or more (e.g. iron, aluminum, manganese, magnesium, titanium). Metals that are considered geochemically scarce occur at crustal abundances of less than 0.1 percent (e.g. nickel, copper, zinc, platinum metals). Some important metallic minerals are: hematite (a source of iron), bauxite (a source of aluminum), sphalerite (a source of zinc) and galena (a source of lead). Metallic minerals only rarely occur as a single element (e.g. native gold or copper). Nonmetallic minerals are valuable, not for the metals they contain, but for their properties as chemical compounds. Because they are commonly used in industry, they are also often referred to as industrial minerals. They are classified according to their use. Some industrial minerals are used as sources of important chemicals (e.g. halite for sodium chloride and borax for borates). Some are used for building materials (e.g. gypsum for plaster and kaolin for bricks). Others are used for making fertilizers (e.g. apatite for phosphate and sylvite for potassium). Still others are used as abrasives (e.g. diamond and corundum). MINERAL DEPOSITS Minerals are everywhere around us. For example, the ocean is estimated to contain more than 70 million tons of gold. Yet, it would be much too expensive to recover that gold because of its very low concentration in the water. Minerals must be concentrated into deposits to make their collection economically feasible. A mineral deposit containing one or more minerals that can be extracted profitably is called an ore. Many minerals are commonly found together (e.g. 
quartz and gold; molybdenum, tin and tungsten; copper, lead and zinc; platinum and palladium). Because various geologic processes can create local enrichments of minerals, mineral deposits can be classified according to the concentration process that formed them. The five basic types of mineral deposits are: hydrothermal, magmatic, sedimentary, placer and residual. Hydrothermal mineral deposits are formed when minerals are deposited by hot, aqueous solutions flowing through fractures and pore spaces of crustal rock. Many famous ore bodies have resulted from hydrothermal deposition, including the tin mines in Cornwall, England and the copper mines in Arizona and Utah. Magmatic mineral deposits are formed when processes such as partial melting and fractional crystallization occur during the melting and cooling of rocks. Pegmatite rocks formed by fractional crystallization can contain high concentrations of lithium, beryllium and cesium. Layers of chromite (chrome ore) were also formed by igneous processes in the famous Bushveld Igneous Complex in South Africa. Several mineral concentration processes involve sedimentation or weathering. Water-soluble salts can form sedimentary mineral deposits when they precipitate during evaporation of lake or seawater (evaporite deposits). Important deposits of industrial minerals were formed in this manner, including the borax deposits at Death Valley and Searles Lake, and the marine deposits of gypsum found in many states. Minerals with a high specific gravity (e.g. gold, platinum, diamonds) can be concentrated by flowing water in placer deposits found in stream beds and along shorelines. The most famous gold placer deposits occur in the Witwatersrand basin of South Africa. Residual mineral deposits can form when weathering processes remove water-soluble minerals from an area, leaving a concentration of less soluble minerals. The aluminum ore, bauxite, was originally formed in this manner under tropical weathering conditions. The best known bauxite deposit in the United States occurs in Arkansas. MINERAL UTILIZATION Minerals are not evenly distributed in the earth's crust. Mineral ores are found in relatively few areas, because it takes a special set of circumstances to create them. Therefore, the signs of a mineral deposit are often small and difficult to recognize. Locating deposits requires experience and knowledge. Geologists can search for years before finding an economic mineral deposit. Deposit size, mineral content, extraction efficiency, processing costs and the market value of the processed minerals are all factors that determine if a mineral deposit can be profitably developed. For example, when the market price of copper increased significantly in the 1970s, some marginal or low-grade copper deposits suddenly became profitable ore bodies. After a potentially profitable mineral deposit is located, it is mined by one of several techniques. Which technique is used depends upon the type of deposit and whether the deposit is shallow and thus suitable for surface mining or deep and thus requiring sub-surface mining. Surface mining techniques include: open-pit mining, area strip mining, contour strip mining and hydraulic mining. Open-pit mining involves digging a large, terraced hole in the ground in order to remove a near-surface ore body. This technique is used in copper ore mines in Arizona and Utah and iron ore mines in Minnesota. Area strip mining is used in relatively flat areas. 
The overburden of soil and rock is removed from a large trench in order to expose the ore body. After the minerals are removed, the old trench is filled and a new trench is dug. This process is repeated until the available ore is exhausted. Contour strip mining is a similar technique except that it is used on hilly or mountainous terrains. A series of terraces are cut into the side of a slope, with the overburden from each new terrace being dumped into the old one below. Hydraulic mining is used in places such as the Amazon in order to extract gold from hillsides. Powerful, high-pressure streams of water are used to blast away soil and rock containing gold, which is then separated from the runoff. This process is very damaging to the environment, as entire hills are eroded away and streams become clogged with sediment. If land subjected to any of these surface mining techniques is not properly restored after its use, then it leaves an unsightly scar on the land and is highly susceptible to erosion. Some mineral deposits are too deep to be surface mined and therefore require a sub-surface mining method. In the traditional sub-surface method, a deep vertical shaft is dug and tunnels are dug horizontally outward from the shaft into the ore body. The ore is removed and transported to the surface. The deepest such sub-surface mines (deeper than 3500 m) in the world are located in the Witwatersrand basin of South Africa, where gold is mined. This type of mining is less disturbing to the land surface than surface mining. It also usually produces fewer waste materials. However, it is more expensive and more dangerous than surface mining methods. A newer form of sub-surface mining known as in-situ mining is designed to co-exist with other land uses, such as agriculture. An in-situ mine typically consists of a series of injection wells and recovery wells built with acid-resistant concrete and polyvinyl chloride casing. A weak acid solution is pumped into the ore body in order to dissolve the minerals. Then, the metal-rich solution is drawn up through the recovery wells for processing at a refining facility. This method is used for the in-situ mining of copper ore. Once an ore has been mined, it must be processed to extract pure metal. Processes for extracting metal include smelting, electrowinning and heap leaching. In preparation for the smelting process, the ore is crushed and concentrated by a flotation method. The concentrated ore is melted in a smelting furnace where impurities are either burned off as gas or separated as molten slag. This step is usually repeated several times to increase the purity of the metal. For the electrowinning method, ore or mine tailings are first leached with a weak acid solution to remove the desired metal. An electric current is passed through the solution and pure metal is electroplated onto a starter cathode made of the same metal. Copper can be refined from oxide ore by this method. In addition, copper metal initially produced by the smelting method can be purified further by using a similar electrolytic procedure. Gold is sometimes extracted from ore by the heap leaching process. A large pile of crushed ore is sprayed with a cyanide solution. As the solution percolates through the ore, it dissolves the gold. The solution is then collected and the gold extracted from it. All of the refining methods can damage the environment. Smelters produce large amounts of air pollution in the form of sulfur dioxide which leads to acid rain. 
Leaching methods can pollute streams with toxic chemicals that kill wildlife. MINERAL SUFFICIENCY AND THE FUTURE Mineral resources are essential to life as we know it. A nation cannot be prosperous without a reliable source of minerals, and no country has all the mineral resources it requires. The United States has about 5 percent of the world's population and 7 percent of the world's land area, but uses about 30 percent of the world's mineral resources. It imports a large percentage of its minerals; in some cases sufficient quantities are unavailable in the U.S., and in others they are cheaper to buy from other countries. Certain minerals, particularly those that are primarily imported and considered of vital importance, are stockpiled by the United States in order to protect against embargoes or other political crises. These strategic minerals include: bauxite, chromium, cobalt, manganese and platinum. Because minerals are produced slowly over geologic time scales, they are considered non-renewable resources. The estimated mineral deposits that are economically feasible to mine are known as mineral reserves. The growing use of mineral resources throughout the world raises the question of how long these reserves will last. Most minerals are in sufficient supply to last for many years, but a few (e.g. gold, silver, lead, tungsten and zinc) are expected to fall short of demand in the near future. Currently, reserves for a particular mineral usually increase as the price for that mineral increases. This is because the higher price makes it economically feasible to mine some previously unprofitable deposits, which then shifts these deposits to the reserves. However, in the long term this will not be the case because mineral deposits are ultimately finite. There are ways to help prolong the life of known mineral reserves. Conservation is an obvious method for stretching reserves. If you use less, you need less. Recycling helps increase the amount of time a mineral or metal remains in use, which decreases the demand for new production. It also saves considerable energy, because manufacturing products from recycled metals (e.g. aluminum, copper) uses less energy than manufacturing them from raw materials. Government legislation that encourages conservation and recycling is also helpful. The current "General Mining Act of 1872," however, does just the opposite. It allows mining companies to purchase government land very inexpensively and not pay any royalties for minerals extracted from that land. As a result, mineral prices are kept artificially low which discourages conservation and recycling.
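The link between metal prices and reserves described above can be illustrated with a simple break-even calculation: a deposit counts as ore only if the value of the metal recovered from a tonne of rock exceeds the cost of mining and processing that tonne. All of the numbers in the sketch below (costs, prices, recovery rate) are hypothetical round figures chosen for arithmetic clarity, not real mining data.

```python
# Illustrative sketch of why reserves grow when prices rise, as described
# above. All numbers here (costs, prices, recovery) are hypothetical round
# figures chosen for arithmetic clarity, not real mining data.

def breakeven_grade(cost_per_tonne_ore, metal_price_per_tonne, recovery=0.85):
    """Minimum ore grade (fraction of metal by weight) at which mining and
    processing one tonne of ore pays for itself."""
    return cost_per_tonne_ore / (metal_price_per_tonne * recovery)

COST_PER_TONNE_ORE = 20.0   # assumed mining + processing cost, $ per tonne of ore

for copper_price in (4000, 6000, 10000):   # hypothetical $ per tonne of copper
    grade = breakeven_grade(COST_PER_TONNE_ORE, copper_price)
    print(f"copper at ${copper_price}/t -> break-even grade about {grade:.2%}")
# As the metal price rises, the break-even grade falls, so lower-grade
# deposits become ore and the tally of economic reserves expands.
```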
textbooks/bio/Ecology/AP_Environmental_Science/1.10%3A_Minerals.txt
INTRODUCTION Soil plays an important role in land ecosystems. In order for a community of producers and consumers to become established on land, soil must be present. Furthermore, soil quality is often a limiting factor for population growth in ecosystems. Soil is a complex mixture of inorganic materials, organic materials, microorganisms, water and air. Its formation begins with the weathering of bedrock or the transport of sediments from another area. These small grains of rock accumulate on the surface of the earth. There they are mixed with organic matter called humus, which results from the decomposition of the waste and dead tissue of organisms. Infiltrating rainwater and air also contribute to the mixture and become trapped in pore spaces. This formation process is very slow (hundreds to thousands of years), and thus soil loss or degradation can be very detrimental to a community. SOIL PROFILE Mature soils are layered. These layers are known as soil horizons, and each has a distinct texture and composition. A typical soil has a soil profile consisting of four horizons, which are designated: O, A, B and C. The O horizon is the top layer at the earth's surface. It consists of surface litter, such as fallen leaves (duff), sticks and other plant material, animal waste and dead organisms. A distinct O horizon may not exist in all soil environments (e.g., desert soil). Below the O horizon is the A horizon, which is also known as topsoil. This layer contains organic humus, which usually gives it a distinctive dark color. The B horizon, or sub-soil is the next layer down from the surface. It consists mostly of inorganic rock materials such as sand, silt and clay. The C horizon sits atop bedrock and therefore is made up of weathered rock fragments. The bedrock is the source of the parent inorganic materials found in the soil. The O horizon protects the underlying topsoil from erosion and moisture loss by evaporation. The O and A horizons in typical mature soils have an abundance of microorganisms (e.g. fungi, bacteria), earthworms and insects. These organisms decompose the organic material from dead organisms and animal waste into inorganic nutrients useable by plants. The organic humus in the A horizon aids in holding water and nutrients, making it the most fertile layer. Therefore, plants with shallow roots are anchored in the A horizon. Water seeping through the upper layers may dissolve water-soluble minerals and transport them to lower layers in a process called leaching. Very fine clay particles can also be transported by seeping water and accumulate in the subsoil layer. The accumulation of clay particles and leached minerals can lead to compaction of the B horizon. This compaction can limit the flow of water through the layer and cause the soil above to become waterlogged. The B horizon is not as fertile as the A horizon, but deep-rooted plants can utilize the water and minerals leached into this layer. The C horizon represents a transition zone between the bedrock and the soil. It lacks organic material, but may be saturated with groundwater that is unable to move deeper due to the solid barrier of bedrock below. Different types of soil may have different numbers of horizons, and the composition and thickness of those horizons may vary from soil to soil. The type of soil depends on a number of factors including: the type of parent rock material, the type of vegetation, the availability of organic matter, water and minerals, and the climate. 
Grassland and desert soils lack a significant O horizon as they generally have no leaf litter. Grassland soil may have a very thick, fertile A horizon, while desert and tropical rain forest soils may have very thin, nutrient-poor A horizons. The A horizons in coniferous forests may be severely leached. SOIL CHARACTERISTICS Most soil consists of weathered inorganic rock material. The relative amounts of different sizes and types of rock particles or grains determine the texture of the soil. The three main types of rock grains found in soil are: sand, silt and clay. Sand grains have the largest grain sizes (0.05-2.0 mm) of the three. Silt particles are fine-grained (0.05-0.002 mm) and clay particles are very fine-grained (<0.002 mm). Sand grains give soil its gritty feel, and clay particles make it sticky. Soils are named according to where their sand, silt and clay composition plots on a soil texture triangle. Various regions of the triangle are given different names. A typical loam soil is made up of about a 20:40:40 mixture of clay, silt and sand. If the percentage of sand is a little higher, the soil is called a sandy loam, and if the percentage of silt is a little higher the soil is a silty loam. The texture of the soil determines its porosity and permeability. Soil porosity is a measure of the volume of pore spaces between soil grains per volume of soil and determines the water and air (oxygen) holding capacity of the soil. Coarse grains with large pores provide better aeration and fine grains with small pores provide good water retention. The average pore size determines the soil permeability or ease with which water can infiltrate the soil. Sandy soils have low porosities and high permeabilities (i.e. water is not retained well, but flows through them easily, and aeration is good). On the other hand, clay soils have high porosities and low permeabilities (i.e. water is retained very well, but does not flow through them easily and aeration is poor). Soil texture is therefore important in determining what type of vegetation thrives on a particular soil. The soil structure or "tilth" is related to the soil texture. Soil tilth describes how the various components of the soil cling together into clumps. It is determined by the amount of clay and humus in the soil. The physical and chemical properties of clay and humus enable them to adhere to other particles in the soil, thus forming large aggregates. These same properties also help protect the soil from nutrient leaching. Soils lacking clay and humus are very loose and are easily blown or shifted by the wind (e.g. sand dunes in the desert). SOIL FERTILITY AND pH There are 16 elements essential for plant growth. Plants obtain three of them primarily from air and water: carbon, hydrogen and oxygen. The other 13 elements generally come from the soil. These essential elements for plant growth can be grouped into three types: primary macronutrients (nitrogen, potassium, phosphorus), secondary macronutrients (calcium, magnesium, sulfur) and micronutrients (boron, chlorine, iron, manganese, copper, zinc, molybdenum). The available primary macronutrients in the soil are usually the limiting factor in plant growth. In undisturbed soils, these macronutrients are replenished by the natural cycles of matter. In farmed soils, they are removed from the natural cycle in such large amounts when crops are harvested that they usually must be replaced by supplementary means (e.g. 
fertilizer). Because micronutrients are required by plants in much lower quantities, they are often naturally maintained in the soil in sufficient quantities to make supplementation with fertilizers unnecessary. An important factor affecting soil fertility is soil pH (the negative log of the hydrogen ion concentration). Soil pH is a measure of the acidity or alkalinity of the soil solution. On the pH scale (0 to 14) a value of seven represents a neutral solution; a value less than seven represents an acidic solution and a value greater than seven represents an alkaline solution. Soil pH affects the health of microorganisms in the soil and controls the availability of nutrients in the soil solution. Strongly acidic soils (less than 5.5) hinder the growth of bacteria that decompose organic matter in the soil. This results in a buildup of undecomposed organic matter, which leaves important nutrients such as nitrogen in forms that are unusable by plants. Soil pH also affects the solubility of nutrient-bearing minerals. This is important because the nutrients must be dissolved in solution for plants to assimilate them through their roots. Most minerals are more soluble in slightly acidic soils than in neutral or slightly alkaline soils. Strongly acid soils (pH four to five), though, can result in high concentrations of aluminum, iron and manganese in the soil solution, which may inhibit the growth of some plants. Other plants, however, such as blueberries, thrive in strongly acidic soil. At high pH (greater than 8.5) many micronutrients such as copper and iron become limited. Phosphorus becomes limited at both low and high pH. A soil pH range of approximately six to eight is conducive to the growth of most plants. SOIL DEGRADATION Soil can take hundreds or thousands of years to mature. Therefore, once fertile topsoil is lost, it is not easily replaced. Soil degradation refers to deterioration in the quality of the soil and the concomitant reduction in its capacity to produce. Soils are degraded primarily by erosion, organic matter loss, nutrient loss and salinization. Such processes often arise from poor soil management during agricultural activities. In extreme cases, soil degradation can lead to desertification (conversion of land to desert-like conditions) of croplands and rangelands in semi-arid regions. Erosion is the biggest cause of soil degradation. Soil productivity is reduced as a result of losses of nutrients, water storage capacity and organic matter. The two agents of erosion are wind and water, which act to remove the finer particles from the soil. This leads to soil compaction and poor soil tilth. Human activities such as construction, logging, and off-road vehicle use promote erosion by removing the natural vegetation cover protecting the soil. Agricultural practices such as overgrazing and leaving plowed fields bare for extended periods contribute to farmland erosion. Each year, an estimated two billion metric tons of soil are eroded from farmlands in the United States alone. The soil transported by the erosion processes can also create problems elsewhere (e.g. by clogging waterways and filling ditches and low-lying land areas). Wind erosion occurs mostly in flat, dry areas and moist, sandy areas along bodies of water. Wind not only removes soil, but also dries and degrades the soil structure. 
During the 1930s, poor cultivation and grazing practices -- coupled with severe drought conditions -- led to severe wind erosion of soil in a region of the Great Plains that became known as the "Dust Bowl." Wind stripped large areas of farmlands of topsoil, and formed clouds of dust that traveled as far as the eastern United States. Water erosion is the most prevalent type of erosion. It occurs in several forms: rain splash erosion, sheet erosion, rill erosion and gully erosion. Rain splash erosion occurs when the force of individual raindrops hitting uncovered ground splashes soil particles into the air. These detached particles are more easily transported and can be further splashed down slope, causing deterioration of the soil structure. Sheet erosion occurs when water moves down slope as a thin film and removes a uniform layer of soil. Rill erosion is the most common form of water erosion and often develops from sheet erosion. Soil is removed as water flows through little streamlets across the land. Gully erosion occurs when rills enlarge and flow together, forming a deep gully. When considerable quantities of salt accumulate in the soil in a process known as salinization, many plants are unable to grow properly or even survive. This is especially a problem in irrigated farmland. Groundwater used for irrigation contains small amounts of dissolved salts. Irrigation water that is not absorbed into the soil evaporates, leaving the salts behind. This process repeats itself and eventually severe salinization of the soil occurs. A related problem is water logging of the soil. When cropland is irrigated with excessive amounts of water in order to leach salts that have accumulated in the soil, the excess water is sometimes unable to drain away properly. In this case it accumulates underground and causes a rise in the subsurface water table. If the saline water rises to the level of the plant roots, plant growth is inhibited. SOIL CONSERVATION Because soil degradation is often caused by human activity, soil conservation usually requires changes in those activities. Soil conservation is very important to agriculture, so various conservation methods have been devised to halt or minimize soil degradation during farming. These methods include: construction of windbreaks, no-till farming, contour farming, terracing, strip cropping and agroforestry. Creating windbreaks by planting tall trees along the perimeter of farm fields can help control the effects of wind erosion. Windbreaks reduce wind speed at ground level, an important factor in wind erosion. They also help trap snow in the winter months, leaving soil less exposed. As a side benefit, windbreaks also provide a habitat for birds and animals. One drawback is that windbreaks can be costly to farmers because they reduce the amount of available cropland. One of the easiest ways to prevent wind and water erosion of croplands is to minimize the amount of tillage, or turning over of the soil. In no-till agriculture (also called conservation tillage), the land is disturbed as little as possible by leaving crop residue in the fields. Special seed drills inject new seeds and fertilizer into the unplowed soil. A drawback of this method is that the crop residue can serve as a good habitat for insect pests and plant diseases. Contour farming involves plowing and planting crop rows along the natural contours of gently sloping land. 
The lines of crop rows perpendicular to the slope help to slow water runoff and thus inhibit the formation of rills and gullies. Terracing is a common technique used to control water erosion on more steeply sloped hills and mountains. Broad, level terraces are constructed along the contours of the slopes, and these act as dams trapping water for crops and reducing runoff. Strip cropping involves the planting of different crops on alternating strips of land. One crop is usually a row crop such as corn, while the other is a ground-covering crop such as alfalfa. The cover crop helps reduce water runoff and traps soil eroded from the row crop. If the cover crop is a nitrogen-fixing plant (e.g. alfalfa, soybeans), then alternating the strips from one planting to the next can also help maintain topsoil fertility. Agroforestry is the process of planting rows of trees interspersed with a cash crop. Besides helping to prevent wind and water erosion of the soil, the trees provide shade which helps promote soil moisture retention. Decaying tree litter also provides some nutrients for the interplanted crops. The trees themselves may provide a cash crop. For example, fruit or nut trees may be planted with a grain crop.
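The soil texture discussion earlier in this section (loam, sandy loam, silty loam and so on) can be illustrated with a tiny classifier that reads sand, silt and clay percentages. The cutoffs below are simplified thresholds invented for illustration; they cover only a few named textures and are not the full USDA soil texture triangle.

```python
# A toy illustration of reading the soil texture triangle discussed earlier
# in this section. The thresholds are simplified for illustration and only
# distinguish a few named textures; they are not the full USDA classification.

def rough_texture(sand_pct, silt_pct, clay_pct):
    """Very coarse texture label from sand/silt/clay percentages (sum ~100)."""
    if abs(sand_pct + silt_pct + clay_pct - 100) > 1:
        raise ValueError("percentages should sum to about 100")
    if clay_pct >= 40:
        return "clay"
    if sand_pct >= 70:
        return "sand / loamy sand"
    if silt_pct >= 60:
        return "silty loam (or silt)"
    if sand_pct >= 50:
        return "sandy loam"
    return "loam"

# The text's typical loam: roughly 40% sand, 40% silt, 20% clay.
print(rough_texture(40, 40, 20))   # -> loam
print(rough_texture(55, 25, 20))   # more sand -> sandy loam
print(rough_texture(25, 60, 15))   # more silt -> silty loam (or silt)
```

Because texture controls porosity and permeability, a label like "sandy loam" versus "silty loam" immediately suggests how well a soil will drain, aerate and retain water, which is why the texture triangle is such a common field tool.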
textbooks/bio/Ecology/AP_Environmental_Science/1.11%3A_Soils.txt
INTRODUCTION The needs of humans and the living organisms and processes that comprise the biosphere are inextricably connected. Because of this connection, the proper management of biological resources requires that genetic diversity and suitable habitats be maintained. There is a growing realization that diversity in biological systems is fundamental to agricultural production and food security. Unfortunately, the diversity of plants and animals and of the habitats in which they live is currently being drastically reduced. The predominant methods used in agriculture are seriously eroding the genetic diversity of plants and livestock. The variety of species and genes of living organisms -- and the habitats and ecosystems in which those organisms live -- are important resources that must be utilized in a sustainable fashion through conservation. Conservation is not just a matter of protecting wildlife in nature reserves. It also involves safeguarding the natural systems that purify water, recycle nutrients, maintain soil fertility, yield food, and protect genetic diversity. NATURAL AREAS Natural areas, or wilderness areas, comprise ecosystems in which human activity has not significantly affected the plant and animal populations or their environment. Natural processes predominate. According to the "Wilderness Act of 1964," wilderness areas are defined as being those areas where the nearest road is at least five miles away and where no permanent buildings stand. According to the 1898 writings of naturalist John Muir, "In God's wilderness lies the hope of the world -- the great fresh, unblighted, unredeemed wilderness." More than 100 million acres of land are now preserved as wilderness under this act. Sparsely populated Alaska contains the largest share of designated wilderness, over half of the total. Although wilderness areas are scattered among most of the lower 48 states, the largest percentage is found in the western states. Few undesignated areas in the contiguous states remain that would qualify as wilderness. California contains significant wilderness areas, with over 4 million acres of National Forest Wilderness areas, and 1.5 million acres of mostly desert wilderness in the Mojave Desert National Preserve. Because of the large population of the state, the demand for recreational use of these areas is very high. Heavy demand for the use of relatively few natural areas is a problem throughout the contiguous states. It is not an easy task for natural resource managers to manage these natural areas in a way that conserves biological diversity and ecosystem integrity, while supporting a sustainable and balanced level of human use. It is important to preserve natural areas for several reasons. Some people, especially Native Americans, feel a cultural connection to the wilderness through their ancestors who once lived there. Wilderness areas are also of economic importance. Outdoor recreation activities such as hiking and camping benefit tourist industries and manufacturers of outdoor clothes and equipment. Most importantly, natural areas are worth preserving for their ecological value. Wilderness areas help maintain ecosystem diversity. They protect watersheds, help to improve air quality and provide a natural undisturbed laboratory for scientific study. GENETIC DIVERSITY Whereas ecosystem diversity is a measure of variability among populations of species, genetic diversity refers to variability among individuals within a single species population. 
A gene represents the fundamental physical unit of heredity, and each individual in a species carries a different mix of genes. This genetic diversity -- or variation within a species -- allows populations to adapt to changes in environmental conditions. Millions of years of adaptive change may be encoded in the genes of a species population, and it is those genes that provide the basis for future adaptations. Loss of genetic diversity makes a species less able to reproduce successfully and less adaptable to a changing environment. Small populations of species are especially susceptible to loss of genetic diversity. When a species loses too many individuals, it becomes genetically uniform. Some of the causes of the loss in genetic diversity include inbreeding among closely related individuals and genetic drift, in which the genes of a few individuals eventually dominate a population. Genetic diversity is important to agriculture. Much of the world's agriculture is based on introduced or hybrid crop strains, as opposed to native or wild strains. The main purpose of using hybrid strains is to increase productivity. Unfortunately, this approach results in only a few hybrid crop strains being used for commercial agriculture. These hybrid crops lack the genetic diversity of the many wild strains, and the resistance of hybrids to pests and disease is generally much lower. Therefore, it is necessary to protect and conserve the wild strains as a genetic library, from which one can draw the genetic information necessary for producing improved and more resistant hybrid strains. A similar situation exists in livestock breeding, except that the loss of genetic diversity in livestock has even more severe consequences. Many livestock breeds are near extinction because of the policy of favoring a few specialized breeds. It is clear that human activity is primarily responsible for the genetic erosion of plant and animal populations. FOOD RESOURCES The three major sources of food for humans are: croplands, rangelands and fisheries. Croplands provide the bulk of human food. Even though there are thousands of edible plants in the world, only four staple crops (wheat, rice, corn and potatoes) account for most of the caloric intake of humans. Some animals raised for meat, milk and eggs (e.g. cattle, pigs, poultry) are also fed grain from croplands. Rangelands provide another source of meat and milk from grazing animals (e.g. cattle, sheep, goats). Fisheries provide fish, which are a major source of animal protein in the world, especially in Asia and coastal areas. For mainly economic reasons, the diets of most people in the world consist of staple grains. As people become more affluent, they tend to consume more meat, eggs, milk and cheese. There are two types of food production: traditional agriculture and industrialized agriculture. Industrialized agriculture is known as high-input agriculture because it utilizes large amounts of commercial fertilizers, pesticides, water and fossil fuels. Large fields of single crops (monoculture) are planted, and the plants are selectively bred to produce high yields. The large amounts of grain produced by this method also foster the production of large numbers of livestock animals in feedlots. Most of the food produced by industrialized methods is sold by farmers for income. This type of food production is most common in developed countries because of the technology and high expenses involved. However, large industrialized plantations specializing in a single cash crop (e.g. 
a crop specifically raised for income, such as bananas, cocoa or coffee) are found in some developing countries. Traditional agriculture is the most widely practiced form of food production, occurring mostly in developing countries. It can be classified further as either traditional subsistence or traditional intensive agriculture. The differences between the two involve the relative amounts of resources used and food produced. Subsistence agriculture uses only human and animal labor and produces only enough food for the farmer's family. Traditional intensive agriculture utilizes more human and animal labor, fertilizers and irrigated water. It may also involve growing methods such as intercropping designed to maintain soil fertility. Intercropping involves planting two crops simultaneously (e.g., a nitrogen-fixing legume crop with a grain crop). The increased production resulting from the more intensive methods provides enough food for the farmer's family and for selling to others in the local area. Rangelands tend to be grasslands in semiarid to arid regions that are not suited to growing crops without irrigation. The grasses provide food for grazing animals such as cattle and sheep. These animals not only provide meat for food, but are also a valuable source of leather and wool. In regions with regular rainfall, livestock can be raised in set areas of open range. In more arid climates, nomadic herding of livestock may be necessary in order to find sufficient supplies of grass. Overgrazing of rangeland by livestock can result in desertification of the area. In developed countries, livestock raised on rangeland are often fattened with grain in feedlots before slaughter. The ocean is the site of the world's largest fisheries. Commercial methods used to harvest these fisheries depend upon the types of fish (e.g. surface dwelling, bottom dwelling) being harvested and their tendency to form schools. Trawlers drag nets along the ocean bottom to catch bottom-dwelling (demersal) fish such as cod and shellfish such as shrimp. Large schools of surface-dwelling (pelagic) fish, such as tuna, are caught by purse-seine fishing, in which a net surrounds them and then closes like a drawstring purse. Drift nets up to tens of kilometers long hang like curtains below the surface and entangle almost anything that comes in contact with them. The major problem with all of these fishing methods is that they tend to kill large numbers of unwanted fish and marine mammals that are inadvertently caught. An alternative to ocean fishing is aquaculture, a method in which fish and shellfish are deliberately raised for food. There are two types of aquaculture: fish farming and fish ranching. With fish farming, the fish or shellfish (e.g. carp, catfish, oysters) are raised in closed ponds or tanks with a controlled environment. When they reach maturity they are harvested. Fish ranching is used with species such as salmon that live one part of their lives in freshwater and the other part in salt water (anadromous species). Salmon are raised in captivity for a few years and then released. They are harvested when they return to spawn. Some of the disadvantages of aquaculture include the need to supply large amounts of food and water, and the disposal of the large amounts of waste that are produced. Algae into Oil, Bones into Stones Scientists and textbooks tend to separate biological and geological entities and processes, but the complex cycling of matter on Earth actually blurs those categories.
Indeed, some of our most important energy and mineral resources have biological origins. As a consequence, the location and size of these resources depend upon the distribution and productivity of ancient habitats. Petroleum Petroleum is a generic term for oil and natural gas, and their products. Petroleum doesn’t look organic, but it is derived from the remains of countless marine organisms. It begins with blooms of microscopic algae and other plankton in oceans and large lakes. These organisms sink when they die, and if the seafloor or lakebed they land on has low oxygen and high sedimentation, they can be buried in mud before they decompose. At depth and over time, heat and pressure begin to convert the organic molecules into hydrocarbons. The hydrocarbons begin to liquefy into oil at 50-60° C, and vaporize into methane at 100° C. If the temperature exceeds 200° C, they break down and disappear. Where petroleum is abundant, it can be pumped from below ground and refined into fuels such as gasoline, propane, jet fuel, and heating oil, and into tar and asphalt. Petroleum is also a component of plastics, dyes, synthetic fibers, fertilizers, compact discs, cosmetics, and explosives. Petroleum is extremely useful, but it is unevenly distributed around the world, and reserves are being depleted rapidly. Petroleum formation is a complex process that requires just the right biological conditions to produce sufficient plankton, and just the right geologic conditions to preserve and cook the organic matter. The entire sequence takes a million years or more. Because many countries, like the United States, have very limited deposits, and because all petroleum reserves are being drained rapidly, conservation and alternatives are gaining importance. For instance, worn highway asphalt is now being reprocessed and replaced rather than discarded. Plastic recycling is becoming more widespread. Use of wind, solar, nuclear, geothermal, and hydroelectric power is increasing. Limestone Limestone is a type of rock made of calcium carbonate. Although few things seem less life-like than rocks, most limestone is actually biogenic, formed from the shells and skeletons and excretions of marine invertebrates. In the shallows around tropical islands and continents, warm clear water, strong sunlight, and abundant nutrients allow mollusks, crustaceans, and plankton to flourish. When these creatures die or molt, their hard parts fall to the sea floor. As the remains pile up, the weight of the overlying debris compacts the deepest layers. Cements precipitate out of groundwater, fusing the individual fragments into solid rock. Some limestone goes no further, so that the component shells remain distinct and clearly visible. In other limestone, subject to more intense heat and pressure, the organic material is recrystallized into a featureless mass. Limestone is widely used in industrial processes. Crushed limestone is a component of cement, paper, plastic, and paint, and is used to adjust the pH of soil and water. Whole limestone that retains visible shell material is used for decorative stonework. Common blackboard chalk is a limestone made from microscopic skeletons. Limestone is extremely abundant, making up about 10-15% of all sedimentary rocks on Earth, so that even though it is heavily used, its reserves are not being significantly depleted.
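Returning to the petroleum discussion above: the temperature thresholds quoted there (oil forming at roughly 50-60° C, methane near 100° C, and breakdown above about 200° C) can be summarized as a simple classification. The short Python sketch below only restates those approximate cutoffs; the function name and exact boundary handling are illustrative choices, not part of any geochemical model.

def maturation_stage(burial_temp_c):
    """Rough classification of buried organic matter by temperature,
    using the approximate thresholds quoted in this section."""
    if burial_temp_c < 50:
        return "immature: organic molecules not yet converted to hydrocarbons"
    if burial_temp_c < 100:
        return "oil window: hydrocarbons liquefy into oil"
    if burial_temp_c <= 200:
        return "gas window: hydrocarbons vaporize into methane"
    return "overmature: hydrocarbons break down and are lost"

for temp_c in (40, 75, 150, 250):
    print(temp_c, "C:", maturation_stage(temp_c))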
textbooks/bio/Ecology/AP_Environmental_Science/1.12%3A_Biological.txt
INTRODUCTION Sufficient, reliable sources of energy are a necessity for industrialized nations. Energy is used for heating, cooking, transportation and manufacturing. Energy can be generally classified as non-renewable and renewable. Over 85% of the energy used in the world is from non-renewable supplies. Most developed nations are dependent on non-renewable energy sources such as fossil fuels (coal and oil) and nuclear power. These sources are called non-renewable because they cannot be renewed or regenerated quickly enough to keep pace with their use. Some sources of energy are renewable or potentially renewable. Examples of renewable energy sources are: solar, geothermal, hydroelectric, biomass, and wind. Renewable energy sources are more commonly used in developing nations. Industrialized societies depend on non-renewable energy sources. Fossil fuels are the most commonly used types of non-renewable energy. They were formed when incompletely decomposed plant and animal matter was buried in the earth's crust and converted into carbon-rich material that is usable as fuel. This process occurred over millions of years. The three main types of fossil fuels are coal, oil, and natural gas. Two other less-used sources of fossil fuels are oil shales and tar sands. COAL Coal is the most abundant fossil fuel in the world, with an estimated reserve of one trillion metric tons. Most of the world's coal reserves exist in Eastern Europe and Asia, but the United States also has considerable reserves. Coal formed slowly over millions of years from the buried remains of ancient swamp plants. During the formation of coal, carbonaceous matter was first compressed into a spongy material called "peat," which is about 90% water. As the peat became more deeply buried, the increased pressure and temperature turned it into coal. Different types of coal resulted from differences in the pressure and temperature that prevailed during formation. The softest coal (about 50% carbon), which also has the lowest energy output, is called lignite. Lignite has the highest water content (about 50%) and relatively low amounts of smog-causing sulfur. With increasing temperature and pressure, lignite is transformed into bituminous coal (about 85% carbon and 3% water). Anthracite (almost 100% carbon) is the hardest coal and also produces the greatest energy when burned. Less than 1% of the coal found in the United States is anthracite. Most of the coal found in the United States is bituminous. Unfortunately, bituminous coal has the highest sulfur content of all the coal types. When the coal is burned, the pollutant sulfur dioxide is released into the atmosphere. Coal mining creates several environmental problems. Coal is most cheaply mined from near-surface deposits using strip-mining techniques. Strip-mining causes considerable environmental damage in the forms of erosion and habitat destruction. Sub-surface mining of coal is less damaging to the surface environment, but is much more hazardous for the miners due to tunnel collapses and gas explosions. Currently, the world is consuming coal at a rate of about 5 billion metric tons per year. The main use of coal is for power generation, because it is a relatively inexpensive way to produce power. Coal is used to produce over 50% of the electricity in the United States. In addition to electricity production, coal is sometimes used for heating and cooking in less developed countries and in rural areas of developed countries.
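The reserve and consumption figures above imply a simple reserve lifetime. A minimal Python sketch of that arithmetic, using only the numbers quoted in this section and assuming consumption stays constant:

# Reserve lifetime implied by the figures quoted in this section.
coal_reserve_tons = 1.0e12           # estimated reserve of about one trillion metric tons
consumption_tons_per_year = 5.0e9    # world consumption of about 5 billion metric tons per year

lifetime_years = coal_reserve_tons / consumption_tons_per_year
print(f"Implied coal reserve lifetime: {lifetime_years:.0f} years")   # about 200 years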
If consumption continues at the same rate, the current reserves will last for more than 200 years. The burning of coal results in significant atmospheric pollution. The sulfur contained in coal forms sulfur dioxide when burned. Harmful nitrogen oxides, heavy metals, and carbon dioxide are also released into the air during coal burning. The harmful emissions can be reduced by installing scrubbers and electrostatic precipitators in the smokestacks of power plants. The toxic ash remaining after coal burning is also an environmental concern and is usually disposed of in landfills. OIL Crude oil, or liquid petroleum, is a fossil fuel that is refined into many different energy products (e.g., gasoline, diesel fuel, jet fuel, heating oil). Oil forms underground in rock such as shale, which is rich in organic materials. After the oil forms, it migrates upward into porous reservoir rock such as sandstone or limestone, where it can become trapped by an overlying impermeable cap rock. Wells are drilled into these oil reservoirs to remove the gas and oil. Over 70 percent of oil fields are found near tectonic plate boundaries, because the conditions there are conducive to oil formation. Oil recovery can involve more than one stage. The primary stage involves pumping oil from reservoirs under the normal reservoir pressure. About 25 percent of the oil in a reservoir can be removed during this stage. The secondary recovery stage involves injecting hot water into the reservoir around the well. This water forces the remaining oil toward the area of the well from which it can be recovered. Sometimes a tertiary method of recovery is used in order to remove as much oil as possible. This involves pumping steam, carbon dioxide gas or nitrogen gas into the reservoir to force the remaining oil toward the well. Tertiary recovery is very expensive and can cost up to half of the value of the oil removed. Carbon dioxide used in this method remains sequestered in the deep reservoir, thus mitigating its potential greenhouse effect on the atmosphere. The refining process required to convert crude oil into usable hydrocarbon compounds involves boiling the crude and separating the gases in a process known as fractional distillation. Besides its use as a source of energy, oil also provides base material for plastics, provides asphalt for roads and is a source of industrial chemicals. Over 50 percent of the world's oil is found in the Middle East; sizeable additional reserves occur in North America. Most known oil reserves are already being exploited, and oil is being used at a rate that exceeds the rate of discovery of new sources. If the consumption rate continues to increase and no significant new sources are found, oil supplies may be exhausted in another 30 years or so. Despite its limited supply, oil is a relatively inexpensive fuel source. It is a preferred fuel source over coal. An equivalent amount of oil produces more energy than coal. It also burns cleaner, producing about 50 percent less sulfur dioxide. Oil, however, does cause environmental problems. The burning of oil releases atmospheric pollutants such as sulfur dioxide, nitrogen oxides, carbon dioxide and carbon monoxide. These gases are smog precursors that pollute the air and greenhouse gases that contribute to global warming. Another environmental issue associated with the use of oil is the impact of oil drilling. Substantial oil reserves lie under the ocean. Oil spill accidents involving drilling platforms kill marine organisms and birds.
Some reserves, such as those in northern Alaska, occur in wilderness areas. The building of roads, structures and pipelines to support oil recovery operations can severely impact the wildlife in those natural areas. NATURAL GAS Natural gas production is often a by-product of oil recovery, as the two commonly share underground reservoirs. Natural gas is a mixture of gases, the most common being methane (CH4). It also contains some ethane (C2H6), propane (C3H8), and butane (C4H10). Natural gas is usually not contaminated with sulfur and is therefore the cleanest-burning fossil fuel. After recovery, propane and butane are removed from the natural gas and made into liquefied petroleum gas (LPG). LPG is shipped in special pressurized tanks as a fuel source for areas not directly served by natural gas pipelines (e.g., rural communities). The remaining natural gas is further refined to remove impurities and water vapor, and then transported in pressurized pipelines. The United States has over 300,000 miles of natural gas pipelines. Natural gas is highly flammable and is odorless. The characteristic smell associated with natural gas is actually that of minute quantities of a smelly sulfur compound (ethyl mercaptan), which is added during refining to warn consumers of gas leaks. The use of natural gas is growing rapidly. Besides being a clean-burning fuel source, natural gas is easy and inexpensive to transport once pipelines are in place. In developed countries, natural gas is used primarily for heating, cooking, and powering vehicles. It is also used in a process for making ammonia fertilizer. The current estimate of natural gas reserves is about 100 million metric tons. At current usage levels, this supply will last an estimated 100 years. Most of the world's natural gas reserves are found in Eastern Europe and the Middle East. OIL SHALE AND TAR SANDS Oil shale and tar sands are the least utilized fossil fuel sources. Oil shale is sedimentary rock with very fine pores that contain kerogen, a carbon-based, waxy substance. If shale is heated to 490° C, the kerogen vaporizes and can then be condensed as shale oil, a thick viscous liquid. This shale oil is generally further refined into usable oil products. Production of shale oil requires large amounts of energy for mining and processing the shale. Indeed, about half a barrel of oil is required to extract every barrel of shale oil. Oil shale is plentiful, with estimated reserves totaling 3 trillion barrels of recoverable shale oil. These reserves alone could satisfy the world's oil needs for about 100 years. Environmental problems associated with oil shale recovery include: large amounts of water needed for processing, disposal of toxic waste water, and disruption of large areas of surface lands. Tar sand is a type of sedimentary rock that is impregnated with a very thick crude oil. This thick crude does not flow easily and thus normal oil recovery methods cannot be used to mine it. If tar sands are near the surface, they can be mined directly. In order to extract the oil from deep-seated tar sands, however, steam must be injected into the reservoir to make the oil flow better and push it toward the recovery well. The energy cost of producing a barrel of oil from tar sands is similar to that for oil shale. The largest tar-sand deposit in the world is in Canada and contains enough material (about 500 billion barrels) to supply the world with oil for about 15 years.
However, because of environmental concerns and high production costs, these tar-sand fields are not being fully utilized. NUCLEAR POWER In most electric power plants, water is heated and converted into steam, which drives a turbine-generator to produce electricity. Fossil-fueled power plants produce heat by burning coal, oil, or natural gas. In a nuclear power plant, the fission of uranium atoms in the reactor provides the heat to produce steam for generating electricity. Several commercial reactor designs are currently in use in the United States. The most widely used design consists of a heavy steel pressure vessel surrounding a reactor core. The reactor core contains the uranium fuel, which is formed into cylindrical ceramic pellets and sealed in long metal tubes called fuel rods. Thousands of fuel rods form the reactor core. Heat is produced in a nuclear reactor when neutrons strike uranium atoms, causing them to split in a continuous chain reaction. Control rods, which are made of a material such as boron that absorbs neutrons, are placed among the fuel assemblies. When the neutron-absorbing control rods are pulled out of the core, more neutrons become available for fission and the chain reaction speeds up, producing more heat. When they are inserted into the core, fewer neutrons are available for fission, and the chain reaction slows or stops, reducing the heat generated. Heat is removed from the reactor core area by water flowing through it in a closed pressurized loop. The heat is transferred to a second water loop through a heat exchanger. The water also serves to slow down, or "moderate," the neutrons, which is necessary for sustaining the fission reactions. The second loop is kept at a lower pressure, allowing the water to boil and create steam, which is used to power the turbine-generator and produce electricity. Originally, nuclear energy was expected to be a clean and cheap source of energy. Nuclear fission does not produce atmospheric pollution or greenhouse gases, and its proponents expected that nuclear energy would be cheaper and last longer than fossil fuels. Unfortunately, because of construction cost overruns, poor management, and numerous regulations, nuclear power ended up being much more expensive than predicted. The nuclear accidents at Three Mile Island in Pennsylvania and the Chernobyl Nuclear Plant in Ukraine raised concerns about the safety of nuclear power. Furthermore, the problem of safely disposing of spent nuclear fuel remains unresolved. The United States has not built a new nuclear facility in over twenty years, but with continued energy crises across the country that situation may change.
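The control-rod description in the nuclear power section above can be illustrated with a deliberately simplified toy model: treat each generation of neutrons as the previous generation multiplied by a constant factor, larger than one when the rods are withdrawn and smaller than one when they are inserted. The factor values and the model itself are assumptions for illustration only, not reactor physics from this chapter.

def neutron_population(generations, factor, start=1000.0):
    """Toy model: each neutron generation is the previous one scaled by a
    constant factor (>1 with rods withdrawn, <1 with rods inserted)."""
    population = start
    for _ in range(generations):
        population *= factor
    return population

print(neutron_population(10, factor=1.1))   # rods withdrawn: the chain reaction speeds up
print(neutron_population(10, factor=0.9))   # rods inserted: the chain reaction dies away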
textbooks/bio/Ecology/AP_Environmental_Science/1.13%3A_Non-renewable_energy_sources.txt
INTRODUCTION Renewable energy sources are often considered alternative sources because, in general, most industrialized countries do not rely on them as their main energy source. Instead, they tend to rely on non-renewable sources such as fossil fuels or nuclear power. Because of the energy crisis in the United States during the 1970s, dwindling supplies of fossil fuels, and hazards associated with nuclear power, usage of renewable energy sources such as solar energy, hydroelectric, wind, biomass, and geothermal has grown. Renewable energy comes from the sun (considered an "unlimited" supply) or other sources that can theoretically be renewed at least as quickly as they are consumed. If used at a sustainable rate, these sources will be available for consumption for thousands of years or longer. Unfortunately, some potentially renewable energy sources, such as biomass and geothermal, are actually being depleted in some areas because the usage rate exceeds the renewal rate. SOLAR ENERGY Solar energy is the ultimate energy source driving the earth. Though only one billionth of the energy that leaves the sun actually reaches the earth's surface, this is more than enough to meet the world's energy requirements. In fact, most other sources of energy, renewable and non-renewable, are actually stored forms of solar energy. The process of directly converting solar energy to heat or electricity is considered a renewable energy source. Solar energy represents an essentially unlimited supply of energy, as the sun will long outlast human civilization on earth. The difficulties lie in harnessing the energy. Solar energy has been used for centuries to heat homes and water, and modern technology (photovoltaic cells) has provided a way to produce electricity from sunlight. There are two basic forms of radiant solar energy use: passive and active. Passive solar energy systems are static, and do not require the input of energy in the form of moving parts or pumping fluids to utilize the sun's energy. Buildings can be designed to capture and collect the sun's energy directly. Materials are selected for their special characteristics: glass allows the sun to enter the building to provide light and heat; water and stone materials have high heat capacities. They can absorb large amounts of solar energy during the day, which can then be used during the night. A southern-exposure greenhouse with glass windows and a concrete floor is an example of a passive solar heating system. Active solar energy systems require the input of some energy to drive mechanical devices (e.g., solar panels), which collect the energy and pump fluids used to store and distribute the energy. Solar panels are generally mounted on a south- or west-facing roof. A solar panel usually consists of a glass-faced, sealed, insulated box with a black matte interior finish. Inside are coils full of a heat-collecting liquid medium (usually water, sometimes augmented by antifreeze). The sun heats the water in the coils, which is pumped to coils in a heat transfer tank containing water. The water in the tank is heated and then either stored or pumped through the building to heat rooms or supply hot water to taps in the building. Photovoltaic cells generate electricity from sunlight. Hundreds of cells are linked together to provide the required flow of current. The electricity can be used directly or stored in storage batteries. Because photovoltaic cells have no moving parts, they are clean, quiet, and durable.
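To give a feel for why many photovoltaic cells, grouped into panels, must be linked together, the sketch below sizes a hypothetical rooftop array. The panel rating, sun-hours and household demand are invented example values, not figures from this chapter.

# Hypothetical sizing example; every number here is an assumption for illustration.
daily_household_demand_kwh = 20.0   # assumed daily household electricity use
panel_rating_w = 300.0              # assumed rating of a single panel
peak_sun_hours_per_day = 5.0        # assumed average usable sun-hours per day

energy_per_panel_kwh = panel_rating_w * peak_sun_hours_per_day / 1000.0
panels_needed = daily_household_demand_kwh / energy_per_panel_kwh
print(f"Panels needed in this example: {panels_needed:.0f}")   # about 13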
Early photovoltaic cells were extremely expensive, making the cost of solar electric panels prohibitive. The recent development of inexpensive semiconductor materials has helped greatly lower the cost, to the point where solar electric panels can compete much better cost-wise with traditionally produced electricity. Though solar energy itself is free, large costs can be associated with the equipment. The building costs for a house heated by passive solar energy may initially be higher. The glass, stone materials, and excellent insulation necessary for the system to work properly tend to be more costly than conventional building materials. A long-term comparison of utility bills, though, generally reveals noticeable savings. The solar panels used in active solar energy systems can be expensive to purchase, install and maintain. Leaks can occur in the extensive network of pipes required, thereby causing additional expense. The biggest drawback of any solar energy system is that it requires a consistent supply of sunlight to work. Most parts of the world have less than ideal conditions for a solar-only home because of their latitude or climate. Therefore, it is usually necessary for solar houses to have conventional backup systems (e.g. a gas furnace or hot-water heater). This double-system requirement further adds to the cost. HYDROELECTRIC ENERGY Hydroelectric power is generated by using the energy of flowing water to power generating turbines for producing electricity. Most hydroelectric power is generated by dams across large-flow rivers. A dam built across a river creates a reservoir behind it. The height of the water behind the dam is greater than that below the dam, representing stored potential energy. When water flows down through the penstock of the dam, driving the turbines, some of this potential energy is converted into electricity. Hydroelectric power, like other alternative sources, is clean and relatively cheap over the long term, even with initial construction costs and upkeep. But because the river's normal flow rate is reduced by the dam, sediments normally carried downstream by the water are instead deposited in the reservoir. Eventually, the sediment can clog the penstocks and render the dam useless for power generation. Large-scale dams can have a significant impact on the regional environment. When the river is initially dammed, farmlands are sometimes flooded and entire populations of people and wildlife are displaced by the rising waters behind the dam. In some cases, the reservoir can flood hundreds or thousands of square kilometers. The decreased flow downstream from the dam can also negatively impact human and wildlife populations living downstream. In addition, the dam can act as a barrier to fish that must travel upstream to spawn. Aquatic organisms are frequently caught and killed in the penstock and the out-take pipes. Because of the large surface area of the reservoir, the local climate can change due to the large amount of evaporation occurring. WIND POWER Wind is the result of the sun's uneven heating of the atmosphere. Warm air expands and rises, and cool air contracts and sinks. This movement of the air is called wind. Wind has been used as an energy source for millennia. It has been used to pump water, to power ships, and to mill grains. Areas with constant and strong winds can be used by wind turbines to generate electricity. In the United States, the state of California has about 20,000 wind turbines, and produces the most wind-generated electricity.
Wind energy does not produce air pollution, can be virtually limitless, and is relatively inexpensive to produce. There is an initial cost of manufacturing the wind turbine and there are costs associated with upkeep and repairs, but the wind itself is free. The major drawbacks of wind-powered generators are that they require large areas of open land and a fairly constant wind supply. Less than 15% of the United States is suitable for generating wind energy. Windmills are also noisy, and some people consider them aesthetically unappealing and label them as visual pollution. Migrating birds and insects can be struck and killed by the turning blades. However, the land used for windmill farms can be simultaneously used for other purposes such as ranching, farming and recreation. BIOMASS ENERGY Biomass energy is the oldest energy source used by humans. Biomass is the organic matter that composes the tissues of plants and animals. Until the Industrial Revolution prompted a shift to fossil fuels in the mid 18th century, it was the world's dominant fuel source. Biomass can be burned for heating and cooking, and even for generating electricity. The most common source of biomass energy is the burning of wood, but energy can also be generated by burning animal manure (dung), herbaceous plant material (non-wood), peat (partially decomposed plant and animal tissues), or converted biomass such as charcoal (wood that has been partially burned to produce a coal-like substance). Biomass can also be converted into a liquid biofuel such as ethanol or methanol. Currently, about 15 percent of the world's energy comes from biomass. Biomass is a potentially renewable energy source. Unfortunately, trees that are cut for firewood are frequently not replanted. In order to be used sustainably, one tree must be planted for every one cut down. Biomass is most frequently used as a fuel source in developing nations, but with the decline of fossil fuel availability and the increase in fossil fuel prices, biomass is increasingly being used as a fuel source in developed nations. One example of biomass energy in developed nations is the burning of municipal solid waste. In the United States, several plants have been constructed to burn urban biomass waste and use the energy to generate electricity. The use of biomass as a fuel source has serious environmental effects. When harvested trees are not replanted, soil erosion can occur. The loss of photosynthetic activity results in increased amounts of carbon dioxide in the atmosphere and can contribute to global warming. The burning of biomass also produces carbon dioxide and deprives the soil of nutrients it normally would have received from the decomposition of the organic matter. Burning releases particulate matter (such as ash) into the air, which can cause respiratory health problems. GEOTHERMAL ENERGY Geothermal energy uses heat from the earth's internal geologic processes in order to produce electricity or provide heating. One source of geothermal energy is steam. Groundwater percolates down through cracks in the subsurface rocks until it reaches rocks heated by underlying magma, and the heat converts the water to steam. Sometimes this steam makes its way back to the surface in the form of a geyser or hot spring. Wells can be dug to tap the steam reservoir and bring it to the surface, to drive generating turbines and produce electricity. Hot water can be circulated to heat buildings. Regions near tectonic plate boundaries have the best potential for geothermal activity.
The western portion of the United States is most conducive to geothermal energy production, and over half of the electricity used by the city of San Francisco comes from the Geysers, a natural geothermal field in Northern California. California produces about 50 percent of the world's electricity that comes from geothermal sources. Entire cities in Iceland, which is located in a volcanically active region near a mid-ocean ridge, are heated by geothermal energy. The Rift Valley region of East Africa also has geothermal power plants. Geothermal energy may not always be renewable in a particular region if the steam is withdrawn at a rate faster than it can be replenished, or if the heating source cools off. The energy produced by the Geysers region of California is already in decline because the heavy use is causing the underground heat source to cool. Geothermal energy recovery can be less environmentally invasive than recovery methods for non-renewable energy sources. Although it is relatively environmentally friendly, it is not practical for all situations. Only limited geographic regions are capable of producing geothermal energy that is economically viable. Therefore, it will probably never become a major source of energy. The cost and energy requirements for tapping and transporting steam and hot water are high. Hydrogen sulfide, a toxic air pollutant that smells like rotten eggs, is also often associated with geothermal activity.
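Returning to the hydroelectric section above: the conversion of the stored potential energy behind a dam into electric power is commonly estimated as power = water density x gravity x flow x head x efficiency. That formula is a standard engineering approximation, and the example numbers below are assumptions for illustration, not values given in this chapter.

# Rough hydroelectric power estimate (illustrative assumptions only).
rho = 1000.0            # density of water, kg per cubic meter
g = 9.81                # gravitational acceleration, m/s^2
flow_m3_per_s = 50.0    # assumed flow through the penstock
head_m = 80.0           # assumed height of the water behind the dam
efficiency = 0.9        # assumed combined turbine and generator efficiency

power_mw = rho * g * flow_m3_per_s * head_m * efficiency / 1.0e6
print(f"Estimated output: {power_mw:.1f} MW")   # about 35 MW for these assumptions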
textbooks/bio/Ecology/AP_Environmental_Science/1.14%3A_Renewable_Energy_Sources.txt
INTRODUCTION The concept of land use (i.e., the way a particular piece of land is utilized by humans and other living organisms) seems at first glance to be a simple and straightforward subject. Humans use land to build cities where they live (residential land) and work (commercial land). They use land for growing crops and raising livestock (agricultural land) for food. Forestland provides fuel for energy and lumber for building. Humans use land for play (recreational land) and set some of it aside as exclusive wildlife habitat (wilderness land). But no matter how land is used by humans and other living species, it is humans who ultimately decide how land is used. Given the nature of humans, land use involves a complex interplay of environmental parameters, economic needs and often politics. RESIDENTIAL AND COMMERCIAL LANDS About half of the earth's human inhabitants live in urban areas. These urban areas include residential land for homes and commercial land for businesses. The number of people living in urban areas continues to grow each year, and as a result, the amount of land used for residential and commercial purposes is also increasing. Cities in the United States usually require that residential land be separated from commercial land. This has been a factor in the development of urban sprawl, the low-density housing developments surrounding many cities and towns. A city grows in three basic ways: concentric, sector and multiple nuclei. In the concentric city model, the city develops outward from a central business district in a set of concentric rings (e.g., New York City). Commercial areas are concentrated in the central district, while the outer rings are typically residential areas. A sector city develops outward in pie-shaped wedges or strips (e.g., the Silicon Valley region south of San Francisco). This type of growth results when commercial and residential areas are built up along major transportation routes. A multiple-nuclei city evolves with several commercial centers or satellite communities scattered over the urban region instead of a single central business district. The Los Angeles metropolitan area is a good example of a multiple-nuclei city. Much of the land converted to residential and commercial use in cities was formerly used for agricultural purposes or consisted of ecologically important areas such as wetlands. Cities are built on such land as a result of conventional land use planning, which encourages substantial urban growth for purely economic reasons (i.e., as a means of increasing the tax base). Unfortunately, when economic factors are the only ones considered, degrading effects on the environment are generally disregarded. Some cities now use a smart-growth model, in which development of urban areas is designed to strike a balance between economic needs and safeguarding the environment. One city design approach used to control urban growth is establishing greenbelts around the city peripheries. Greenbelts provide habitat such as forest areas for animals and open space for human recreation, while blocking the outward growth of the city. Another method used to lessen the effects of urban sprawl is the cluster development model for new residential areas. In this design, housing is concentrated in a restricted portion of a tract, leaving the rest of the land in a relatively natural state with trees, open space and waterways. AGRICULTURAL AND FOREST LANDS Less than half of the land area in the world (and in the United States) is used for agriculture.
The majority of agricultural lands are rangeland or pasture. Rangelands are unsuitable for growing grain crops for a variety of reasons: the land may be too rocky or too steep, or the climate may be too cool or too dry. Livestock grazing is the major agricultural use of rangeland and pasture. Together, rangeland and pasture comprise about 35 percent of non-federal land (526 million acres) in the United States. Most of the nation’s rangelands are in vast areas of the western states with arid to semi-arid climates. Pastures, which are smaller managed grassy areas, are found on farms throughout the United States. Croplands are important because they account for the bulk of food production. About 20 percent of the land in the United States (about 400 million acres) is cropland, with the highest concentrations in the central United States. About 70 percent of all cropland in the United States is classified as prime farmland. Prime farmland is land that has an adequate growing season, a water supply from precipitation or irrigation, and soil sufficiently rich to sustain high yields when managed according to modern farming methods. Cropland may become prime farmland with the addition of the irrigation or flooding protection needed to sustain high yields. Farmlands in the eastern and southern United States are generally smaller and produce a greater variety of crops than those in the Corn Belt and Great Plains, where a few major grain crops predominate. In countries throughout the world, agricultural land is being lost for various reasons. Some land is being lost to other uses such as housing developments, commercial developments and roads. Unfortunately, this change in use is claiming much prime agricultural land. In the United States, federal programs exist that encourage farmers to stop farming agricultural lands defined as sensitive, which pose a risk of environmental degradation. In an attempt to help preserve prime farmland in the United States, some local and state governments and private organizations have programs to purchase easements on cropland that restrict nonagricultural use. Such croplands are temporarily or permanently retired from active production and are planted with perennial grasses or trees. Millions of acres of agricultural land in semiarid regions are lost each year due to a phenomenon called desertification. This occurs when once-productive land becomes too arid for agricultural use because of climate change or poor land management (e.g., overgrazing of rangeland, erosion of croplands). Years ago, the standard practice for replacing lost agricultural lands or increasing overall production in many countries was to develop new farmland from formerly uncultivated land. But now, areas of potentially arable land are shrinking in most countries. Most of the uncultivated land that does remain is marginal, with poor soils and either too little rainfall or too much. Tropical rainforests are being logged at a fast rate to provide farmland. However, soils in rainforests are nutrient poor and prone to erosion by frequent tropical rains. Destruction of rainforest regions may also contribute to global environmental problems such as global warming. Forests of all kinds are very important ecologically. As major biomes, they provide a habitat for living species and support the food webs for those species. Forests play an environmental role by recycling nutrients (e.g., carbon, nitrogen) and generating oxygen through photosynthesis.
They even influence local climatic conditions by affecting air humidity through evaporation and transpiration processes. Economically, forests are also very important. Humans have utilized forests for thousands of years as a source of energy (i.e., fuel), building materials (lumber) and pulpwood for paper, and these uses remain important. When forestlands hold valuable mineral resources beneath them, they may be cleared to provide access to the minerals. The United States Forest Service defines forestlands as lands that consist of at least 10 percent trees of any size. They include: transition zones (such as areas between heavily forested and nonforested lands) and forest areas adjacent to urban areas. In the western states they include pinyon-juniper and chaparral areas. Forests cover about one-third of the United States, which is about 70 percent of their extent when European settlement began in the 17th century. About 42 percent of U.S. forestlands are publicly owned. Of these, about 15 percent are in national parks or wilderness areas and are thus protected from timber harvest. Other public forestlands are managed for various uses: recreation, grazing, watershed protection, timber production, wildlife habitat, and mining. Forests in the western states are predominantly publicly owned, while those in eastern states are predominantly privately owned. Forests can be classified by their relative maturity. Old-growth forests have been undisturbed for hundreds of years. They contain numerous dead trees and fallen logs which provide species habitats and are eventually recycled through decay. Second-growth forests are less mature and occur when the original ecological community in a region is destroyed, either by human land-clearing activities or by natural disasters (i.e., fires, storms, volcanic eruptions). Humans sometimes create artificial forests in the form of tree farms. Usually only one tree species is planted in a tree farm. After maturing enough to be of economic value, the trees are harvested and new trees planted in their place. Forest trees can be harvested by different methods: selective cutting, seed-tree cutting, strip cutting and clear cutting. Most of these methods have distinct effects on the ecology of the harvested area. Selective cutting is usually least damaging to the local ecosystem. In this method of harvesting, trees that are moderate to fully mature are cut singly or in small groups. This approach allows most of the trees to remain, which helps maintain habitats and prevent soil erosion and allows uninterrupted recreational use. However, in tropical forests when only the biggest and best trees are removed, selective cutting can lead to significant ecosystem damage. Because the canopy of a tropical forest is thick and intertwined, the removal of one large tree damages a considerable area around it. Other harvesting methods involve removal of most or all of the trees in a given area. Seed-tree cutting removes most of the trees in an area, leaving only a few scattered trees to provide seeds for regrowth. The remaining trees provide some habitat for animals and help reduce soil erosion. However, when seed trees are cut, the forest loses its diversity and is often converted to a tree farm. Clear cutting and strip cutting both remove all trees in an area. Clear-cutting usually involves large areas of land resulting in the concomitant destruction of a large area of wildlife habitat. 
The logged areas are susceptible to severe erosion, especially when the clear cutting occurs on slopes. With strip cutting, trees are removed from consecutive narrow strips of land. The strips are removed over a period of years, and as a result some trees (uncut or regrowth) are always available for animal habitat. The cut area is partially protected from erosion by the uncut or regrowth trees in the adjacent areas. RECREATIONAL AND WILDERNESS LANDS An important human-centered benefit of undeveloped land is its recreational value. Every year, millions of people visit recreational lands such as parks and wilderness areas to experience attractions of the great outdoors: hiking among the giant sequoias in California, traveling on a photo safari in Kenya or just picnicking at a local county park. Besides providing people with obvious health benefits and aesthetic pleasures, recreational lands also generate considerable tourist money for government and local economies. The United States has set aside more land for public recreational use than any other country. Several different federal organizations provide lands for recreational use: the National Forest System, the U.S. Fish and Wildlife Service, the National Park System and the National Wilderness Preservation System. The National Forest System manages more than 170 forestlands and grasslands, which are available for activities such as camping, fishing, hiking and hunting. The U.S. Fish and Wildlife Service manages more than 500 National Wildlife Refuges, which not only protect animal habitats and breeding areas but also provide recreational facilities. The National Park System manages more than 380 parks, recreation areas, seashores, trails, monuments, memorials, battlefields and other historic sites. The National Wilderness Preservation System manages more than 630 roadless areas through the aforementioned government services as well as through the Bureau of Land Management. The National Park System consists of more than 80 million acres nationwide. The largest national park is Wrangell-St. Elias National Park and Preserve in Alaska, with over 13 million acres. California has eight national parks: Channel Islands, Death Valley, Joshua Tree, Lassen, Redwood, Sequoia, Kings Canyon and Yosemite. Many national parks such as Yosemite, Yellowstone and the Grand Canyon are such popular recreation destinations that the ecosystems of those parks are being severely tested by human activities. Every state has also set aside significant amounts of land for recreational use. The California State Park System manages more than one million acres of parklands, including coastal wetlands, estuaries, scenic coastlines, lakes, mountains and desert areas. California's largest state park, Anza-Borrego Desert State Park, is the largest state park in the United States at 600,000 acres. The stated mission of the California State Park System is: "To provide for the health, inspiration and education of the people of California by helping to preserve the state's extraordinary biological diversity, protecting its most valued natural and cultural resources and creating opportunities for high-quality outdoor recreation." This is the basic goal of all recreational lands: to manage and conserve natural ecosystems, while supporting a sustainable and balanced level of human use of those areas. Unfortunately, it is a goal which is sometimes difficult to achieve due to the increasing popularity and use of recreational lands.
The "Wilderness Act of 1964" created the world's first wilderness system in the United States. Presently, the National Wilderness Preservation System contains more than 100 million acres of land that will forever remain wild. A wide range of recreational, scientific and outdoor activities is available in wilderness lands. Mining operations and livestock grazing are permitted to continue in certain wilderness areas where such operations existed prior to an area's designation. Hunting and fishing are also allowed in wilderness areas (except in national parks). For most people, wilderness lands provide a means for various forms of recreation: hiking, horseback riding, bird watching, fishing, and hunting. People can escape the stress of modern-day life and enjoy an undisturbed look at nature. Wilderness lands provide an essential habitat for a wide array of fish, wildlife, and plants, and are particularly important in protecting endangered species. For scientists, wilderness lands serve as natural laboratories, where studies can be performed that would not be possible in developed areas. Several other types of public lands complement the designated wilderness land system. These include: national forest roadless areas, the national trails system, natural research areas and state and private wilderness lands. The national forest roadless areas consist of millions of acres of wild, undeveloped land without roads that exist on National Forest land outside of designated wilderness lands. The "National Trail System," established by Congress in 1968, includes trails in wilderness areas and other public lands. Research Natural Areas located throughout the country on public lands serve as outdoor laboratories to study natural systems. They are intended in part to serve as gene pools for rare and endangered species and as examples of significant natural ecosystems. Some wilderness lands are maintained by states or private organizations. For example, the state of New York has long preserved a region of the Adirondacks as wilderness. On an international level, important wilderness lands have been designated by the United Nations through its "Man and the Biosphere Program." This program was established in 1973 to protect examples of major natural regions throughout the world, and to provide opportunities for ecological research and education. Biosphere reserves are organized into three interrelated zones: the core area, the buffer zone and the transition area. The core area contains the landscape and ecosystems to be preserved. The buffer zone is an area where activities are controlled to protect the core area. The outer transition area contains a variety of agricultural activities, human settlements and other uses. Local communities, conservation agencies, scientists and private enterprises that have a stake in the management of the region work together to make the reserves work. Mt. Kenya in Africa and the Galapagos Islands are examples of wilderness areas protected under this program.
textbooks/bio/Ecology/AP_Environmental_Science/1.15%3A_Land.txt
INTRODUCTION Human activities release a variety of substances into the biosphere, many of which negatively affect the environment. Pollutants discharged into the environment can accumulate in the air, water, or soil. Chemicals discharged into the air that have a direct impact on the environment are called primary pollutants. These primary pollutants sometimes react with other chemicals in the air to produce secondary pollutants. A wide variety of chemicals and organisms are discharged into lakes, rivers and oceans daily. Left untreated, this sewage and industrial waste has a serious impact on the water quality, not only in the immediate area, but also downstream. AIR POLLUTANTS The eight classes of air pollutants are: oxides of carbon, oxides of sulfur, oxides of nitrogen, volatile organic compounds, suspended particulate matter, photochemical oxidants, radioactive substances and hazardous air pollutants. Oxides of carbon include carbon monoxide (CO) and carbon dioxide (CO2). Carbon monoxide, a primary pollutant, is mainly produced by the incomplete combustion of fossil fuels. It is also present in cigarette smoke. The colorless, odorless gas is poisonous to air-breathing animals. Carbon monoxide binds to hemoglobin, impeding delivery of oxygen to cells. This causes dizziness, nausea, drowsiness, and headaches; at high concentrations it can cause death. Carbon monoxide pollution from automobiles can be reduced through the use of catalytic converters and oxygenated fuels. Carbon dioxide is produced by the complete combustion of fossil fuels. It is considered a greenhouse gas because it heats up the atmosphere by absorbing infrared radiation. As a result of this characteristic, excess amounts of carbon dioxide in the atmosphere may contribute to global warming. Carbon dioxide can also react with water in the atmosphere and produce slightly acidic rain. Carbon dioxide emissions can be reduced by limiting the amount of fossil fuels burned. Oxides of sulfur include sulfur dioxide (SO2) and sulfur trioxide (SO3). Sulfur oxides are primarily produced by the combustion of coal and oil. Oxides of sulfur have a sharp, irritating odor, and inhalation of them can lead to respiratory system damage. They react with atmospheric water to produce sulfuric acid, which precipitates as acid rain or acid fog. Acid rain is a secondary pollutant that acidifies lakes and streams, rendering the water unfit for aquatic life. It also corrodes metals, and dissolves limestone and marble structures. Oxides of sulfur can be removed from industrial smokestack gases by "scrubbing" the emissions, by electrostatically precipitating the sulfur, by filtration, or by combining them with water, thereby producing sulfuric acid, which can be used commercially. Oxides of nitrogen include: nitric oxide (NO), nitrogen dioxide (NO2), and nitrous oxide (N2O). Nitric oxide is a clear, colorless gas formed during the combustion of fossil fuels. Nitrogen dioxide forms when nitric oxide reacts with atmospheric oxygen; the reddish-brown pungent gas is considered to be a secondary pollutant. Exposure to oxides of nitrogen can cause lung damage, aggravate asthma and bronchitis, and increase susceptibility to the flu and colds. Nitrogen dioxide can combine with atmospheric water to form nitric acid, which is precipitated as acid rain. Nitrogen dioxide is also a key ingredient in the formation of photochemical smog, and nitrous oxide is a greenhouse gas.
Automobile emissions of these pollutants can be reduced by catalytic converters, which convert them to molecular nitrogen and oxygen. Volatile organic compounds (VOCs) include hydrocarbons such as methane (CH4), propane (C3H8), and octane (C8H18), and chlorofluorocarbons (CFCs) such as dichlorodifluoromethane (CCl2F2). Hydrocarbons are released into the atmosphere in automobile exhaust and from the evaporation of gasoline. They contribute to the formation of photochemical smog. Chlorofluorocarbons were used as propellants for aerosols and as refrigerants until it was discovered that they can cause depletion of the protective ozone layer. Volatile organic compound emissions can be reduced by using vapor-recovery gasoline nozzles at service stations and by burning oxygenated gasoline in automobile engines. Suspended particulate matter consists of tiny particles of dust, soot, asbestos, and salts, and of microscopic droplets of liquids such as sulfuric acid and pesticides. Sources of these pollutants include the combustion of fossil fuel (e.g. diesel engines) and road and building construction activity. Exposure to these particles can lead to respiratory irritation, reduction of lung capacity, lung cancer, and emphysema. Photochemical oxidants are primarily produced during the formation of photochemical smog. Ozone (O3) is a highly reactive, irritating gas that causes breathing problems, as well as eye, nose, and throat irritation. It also aggravates asthma, bronchitis, and heart disease. Ozone and other photochemical oxidants can damage or kill plants, reduce visibility, and degrade rubber, paint, and clothes. Photochemical oxidants are secondary pollutants, and can be controlled by reducing the amount of nitrogen dioxide in the atmosphere. Radioactive substances include radon-222, iodine-131, and strontium-90. Radon is a gas produced during the decay of uranium that is naturally present in rocks and in building materials made with these rocks. It is known to cause lung cancer in humans. The other radioisotopes are produced by nuclear power plants (iodine-131) or are contained in the fallout from atmospheric nuclear testing (strontium-90). They can be introduced into the food chain through plants and become incorporated in the tissues of humans and other animals. Their ionizing radiation can produce cancers, especially those related to the thyroid and bone. Hazardous air pollutants include benzene (C6H6) and carbon tetrachloride (CCl4). Benzene is a common organic solvent with numerous industrial uses. Carbon tetrachloride was formerly used as a solvent in the dry cleaning business. It is still used in industrial processes. Exposure to these compounds can cause cancer, birth defects and central nervous system problems.
Excessive growth of these organisms can deplete the water of dissolved oxygen, which leads to the eventual death of oxygen-consuming aquatic life. Inorganic chemical pollutants include mineral acids, toxic metals such as lead, cadmium, mercury, and hexavalent chromium, and mineral salts. They are found in industrial discharge, chemicals in household wastewater, and seepage from municipal dumps and landfills. The presence of inorganic chemical pollutants in water can render it undrinkable, as well as cause cancer and birth defects. In addition, sufficient concentrations of these chemicals in water can kill fish and other aquatic life, cause lower crop yields due to plant damage, and corrode metals. Organic chemical pollutants encompass a wide variety of compounds including oil, gasoline, pesticides, and organic solvents. They all degrade the quality of the water into which they are discharged. Sources of these pollutants include industrial discharge and runoff from farms and urban areas. Sometimes these chemicals enter aquatic ecosystems directly when sprayed on lakes and ponds (e.g. for mosquito control). These types of chemicals can cause cancer, damage the central nervous system and cause birth defects in humans. Plant nutrient pollutants are found mainly in urban sewage, runoff from farms and gardens, and household wastewater. These chemicals include nitrates (NO3-), phosphates (PO43-) and ammonium (NH4+) salts commonly found in fertilizers and detergents. Too many plant nutrients in the water can cause excessive algae growth in lakes or ponds, a nutrient-enrichment process known as eutrophication. This, in turn, results in the production of large amounts of oxygen-depleting wastes, and the subsequent loss of dissolved oxygen can kill aquatic life in the affected lakes or ponds. Erosion of soils is the main process contributing sediments, or silts, to water bodies. Sediments can cloud the water of streams and rivers, reducing the amount of sunlight available to aquatic plants. The concurrent reduction in photosynthesis can disrupt the local ecosystem. Soil from croplands deposited in lakes and streams can carry pesticides, bacteria, and other substances that are harmful to aquatic life. Sediments can also fill up or clog lakes, reservoirs, and waterways, limiting human use and disrupting habitats. Radioactive materials such as iodine-131 and strontium-90 are found in nuclear power plant effluents and fallout from atmospheric nuclear testing. They can be introduced into the food chain through plants and become incorporated in the body tissues of humans and animals. Their ionizing radiation can produce cancers, especially in the thyroid and bone, where they tend to concentrate. A power generating plant commonly discharges water used for cooling into a nearby river, lake, or ocean. Because the discharged water can be significantly warmer than the ambient environment, it represents a source of thermal pollution. Industrial discharges are also sources of thermal pollution. The increased temperature of the water may locally deplete dissolved oxygen and exceed the range of tolerance of some aquatic species, thus disrupting the local ecosystem. Processing water in treatment plants can reduce the amounts of infectious agents, oxygen-depleting wastes, inorganic chemicals, organic chemicals and plant nutrients. Bans and restrictions on the use of certain chemicals, such as those on DDT and hexavalent chromium compounds, are also very helpful in reducing the amounts of these chemicals in the environment.
By limiting exposure to these harmful substances, their negative effects on humans and local ecosystems can be greatly reduced. SOIL POLLUTANTS The persistence of pesticides in the soil is related to how quickly these chemicals degrade in the environment. There are three ways pesticides are degraded in the soil: biodegradation, chemical degradation, and photochemical degradation. Microorganism activity plays the predominant role in the biodegradation of pesticides. Water plays an important role in the chemical degradation of pesticides (e.g. some pesticides are hydrolyzed on the surfaces of minerals by water). Exposure to sunlight can also degrade some pesticides. A variety of pesticides are used to control insects, weeds, fungi, and mildew in agricultural, garden, and household environments. There are three classes of pesticides: insecticides, which kill insects; herbicides, which kill plants; and fungicides, which kill fungi. Each of these classes includes different types of chemicals. These chemicals differ in chemical composition, chemical action, toxicity, and persistence (residence time) in the environment. Some of these pesticides can bioaccumulate (e.g. they concentrate in specific plant and animal tissues and organs). Pesticides can accumulate in the soil if their structures are not easily broken down in the environment. Besides rendering the soil toxic to other living organisms, these pesticides may leach out into the groundwater, polluting water supplies. The five classes of insecticides are: chlorinated hydrocarbons, organophosphates, carbamates, botanicals and synthetic botanicals. Chlorinated hydrocarbons, such as DDT, are highly toxic to birds and fishes, but have relatively low toxicity in mammals. They persist in the environment, lasting for many months or years. Because of their toxicity and persistence, their use as insecticides has been somewhat restricted. Organophosphates, such as Malathion, are more poisonous than other types of insecticides, but have much shorter residence times in the environment. Thus, they do not persist in the environment and cannot bioaccumulate. Carbamates, such as Sevin, are generally less toxic to mammals than are organophosphates. They also have a relatively low persistence in the environment and usually do not bioaccumulate. Botanicals, such as camphor, are derived from plant sources. Many of these compounds are toxic to mammals, birds, and aquatic life. Their persistence in the environment is relatively low, and as a result bioaccumulation is not a problem. Synthetic botanicals, such as Allethrin, generally have a low toxicity for mammals, birds, and aquatic life, but it is unclear how persistent they are and whether or not they bioaccumulate. The three classes of herbicides are: contact chemicals, systemic chemicals and soil sterilants. Most herbicides do not persist in the soil for very long. Contact chemicals are applied directly to plants, and cause rapid cell membrane deterioration. One such herbicide, Paraquat, received notoriety when it was used as a defoliant on marijuana fields. Paraquat is toxic to humans, but does not bioaccumulate. Systemic chemicals, such as Alar, are taken up by the roots and foliage of plants, and are of low to moderate toxicity to mammals and birds; some systemic herbicides are highly toxic to fishes. These compounds do not have a tendency to bioaccumulate. Soil sterilants, such as Diphenamid, render the soil in which plants live toxic.
These chemicals have a low toxicity in animals, and do not bioaccumulate. Fungicides are used to kill or inhibit the growth of fungi. They can be separated into two categories: protectants and systemics. Protectant fungicides, such as Captan, protect the plant against infection at the site of application, but do not penetrate into the plant. Systemic fungicides, such as Sovran, are absorbed through the plant’s roots and leaves and prevent disease from developing on parts of the plant away from the site of application. Fungicides are not very toxic and are moderately persistent in the environment. Soil can absorb vast amounts of pollutants besides pesticides every year. Sulfuric acid rain is converted in soil to sulfates, and nitric acid rain produces nitrates in the soil. Both of these can function as plant nutrient pollutants. Suspended particulate matter from the atmosphere can accumulate in the soil, bringing with it other pollutants such as toxic metals and radioactive materials. Point and Non-point Pollution Sources Environmental regulations are designed to control the amounts and effects of pollutants released by agricultural, industrial, and domestic activities. These laws recognize two categories of pollution and polluters – point source and non-point source. Point Source Pollution Point sources are single, discrete locations or facilities that emit pollution, like a factory, smokestack, pipe, tunnel, ditch, container, automobile engine, or well. Because point sources can be precisely located, the discharge of pollutants from them is relatively easy to monitor and control. The United States Environmental Protection Agency, or EPA, sets emission standards for particular chemicals and compounds. Then, outflow from the point source is sampled, and the pollutants in it are measured precisely to ensure that discharge levels are in compliance with regulations. New techniques to reduce emissions from point sources are more likely to be developed because their effectiveness can be evaluated quickly and directly and because point source polluters have an obvious financial incentive to reduce waste and avoid regulatory fines. Non-point Source Pollution Non-point sources are diffuse and widespread. Contaminants are swept into waterways by rainfall and snowmelt or blown into the air by the wind. They come from multiple sources, such as vehicles dripping oil onto roads and parking lots, pesticides used on lawns, parks and fields, wastes deposited by livestock and pets, or soil disturbed by construction or plowing. Non-point source pollution is more difficult to regulate than point source emissions. Contamination is measured not at the source, but at the destination. Samples are collected from the air, soil, and water, or from the blood and tissues of organisms in polluted areas. The contribution of various non-point sources to these pollution levels can only be estimated. EPA regulations cannot be directed at specific individuals or businesses and are instead generally directed at municipalities. For example, federal standards are set for allowable levels of chemicals in drinking water, and communities are responsible for treating their water until it meets those standards. It can be difficult to reduce many types of non-point source pollution because most of the people who contribute to it are not directly faced with legal or financial consequences.
Individuals must be persuaded that their activities are causing ecological harm and that they should alter their behavior or spend their money to remedy the situation. Once they do, they may have to wait a long time for noticeable environmental results. Parts per million (ppm) and Micrograms per milliliter (ug/mL) Very small quantities of some chemicals can have a large impact on organisms. Because of this, substances that are present in trace amounts, such as nutrients and contaminants, are usually measured and recorded using very small units. Two of the most common measures are parts per million and micrograms per milliliter. Micrograms per milliliter (ug/mL) Micrograms per milliliter, or ug/mL, measures mass per volume. It is generally used to measure the concentration of a substance dissolved or suspended in a liquid. One microgram is one millionth of a gram (1 ug = 0.000001 g), and one milliliter is one thousandth of a liter. Parts per million (ppm) Parts per million, abbreviated as ppm, is a unitless measure of proportion. It is obtained by dividing the amount of a substance in a sample by the amount of the entire sample, and then multiplying by 10^6 (one million). In other words, if some quantity of gas, liquid, or solid is divided into one million parts, the number of those parts made up of any specific substance is the ppm of that substance. For example, if 1 mL of gasoline is mixed with 999,999 mL of water, the mixture contains 1 ppm of gasoline. Concentration Equivalents Since a microgram is one millionth of a gram, and a milliliter of water has a mass of one gram, ug/mL is equivalent to parts per million in dilute aqueous solutions. Ppm is also equivalent to many other proportional measurements, including milligrams per liter (mg/L), milligrams per kilogram (mg/kg), and pounds per acre (lb/acre). But parts per million is often more useful in describing and comparing trace amounts of chemicals because it eliminates specific units and is applicable to liquids, solids, and gases. Examples Both ppm and ug/mL can be used to describe the amount of suspended particulate matter in a sample of water: if a one liter sample of water contains 5 mg of suspended particulates, its concentration is 5 ppm, since mg/L (milligrams per liter) = ppm for water. How much dye should you add to one gallon of water to achieve a final 500 ppm mixture? (A worked sketch of this calculation appears below.) Concentration Measurements and Environmental Regulations Because many toxins begin to have negative environmental effects at very low levels, their abundance in ppm or ug/mL is used to set the limits of pollutants that are legally permitted in stack smoke, discharge water, soil contamination, and so on. For example, coal-fired power plants may be limited to a discharge of 0.5 ppm of SO2 in the stack smoke. If a plant’s emissions exceed that amount, it may be in violation of local or federal air quality standards and could be subject to a fine. Pollution Effects on Wildlife Not unreasonably, we tend to be most concerned by the impact of pollution on human health and interests. However, there is growing documentation of the harm pollution is inflicting on wildlife. The following are just a small sample. Pesticides The pesticide DDT was banned in the U.S. in 1972 because it caused raptor eggs to thin and break. But residual DDT and other persistent organochlorine pesticides continue to impact wildlife today. Additionally, DDT is still used in many other countries as the most effective control of malaria-bearing mosquitoes.
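Returning briefly to the concentration units above, the dye question can be answered with a few lines of arithmetic. The following is a minimal sketch in Python; it assumes ppm is measured by mass, that the solution is dilute enough for one milliliter of water to weigh one gram, and uses the standard conversion of one U.S. gallon ≈ 3.785 liters. The variable names are illustrative only.

```python
# Worked sketch (illustrative values aside from standard conversions):
# how much dye gives a 500 ppm (by mass) mixture in one US gallon of water?
# Assumes a dilute aqueous solution, so 1 mL of water ~ 1 g and ppm ~ mg/L.

GALLON_IN_LITERS = 3.785        # 1 US gallon is about 3.785 L
target_ppm = 500                # desired concentration, ppm by mass

water_mass_g = GALLON_IN_LITERS * 1000         # about 3785 g of water
dye_mass_mg = target_ppm * GALLON_IN_LITERS    # ppm ~ mg/L, so mg = ppm x L

print(f"Water mass: {water_mass_g:.0f} g")
print(f"Dye needed: {dye_mass_mg:.0f} mg (about {dye_mass_mg / 1000:.2f} g)")
# Roughly 1.9 g of dye per gallon of water gives a 500 ppm mixture.
```

Strictly speaking, 500 ppm by mass means 500 mg of dye per kilogram of final mixture, but at such low concentrations the distinction between "per kilogram of water" and "per kilogram of mixture" is negligible.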
Prescription Drugs Prescription drugs, caffeine, and other medications can pass through both the human body and sewage treatment facilities, and are now present in many waterways. Some of these may be toxic to aquatic life. Others, especially steroids, estrogen, testosterone and similar regulatory hormones, are likely to interfere with the development of organisms. Heavy Metals When hunters shoot animals with lead shot, but do not recover the dead or injured animals, the shot is eventually ingested by other wildlife. The lead is concentrated as it passes up the food chain, and the top predators, especially raptors, get lead poisoning. Many states now require the use of steel shot. Mining wastes also release toxic levels of substances like lead and mercury into waterways. Water Acidification Acid rain and snow is produced from the burning of high-sulfur coals in electrical power plants. Acid mine run-off is caused by the reaction of rainwater with mine tailings. Acidification can sterilize water bodies, killing off all aquatic flora and fauna. When wildfowl and other wildlife ingest this water, they can be poisoned by heavy metals. Dioxin Dioxin is generated by burning wastes and in the production of some papers and plastics. It accumulates in animal fats and concentrates up the food chain, and has been linked to cancers and reproductive issues in a number of species. Oil Spills Oil spills have immediate devastating effects – marine mammals and waterfowl coated with oil drown, are poisoned, or die of hypothermia. Balls of oil that sink to the seafloor can smother organisms. Less obvious effects include tumors and reproductive damage in fishes and crustaceans caused by oil byproducts. Noise Pollution Chronic noise pollution from low-flying aircraft, snowmobiles, motorcycles, and traffic can cause wildlife to abandon habitats, lose reproductive function, and become more vulnerable to predation due to loss of hearing. Light Pollution Light pollution at night disorients bats, insects, and migratory birds. Eutrophication Eutrophication results from the addition of enriching agents – detergents, fertilizers, and organic wastes – to water bodies. Explosive growth and subsequent decay of algae use up available oxygen, which in turn suffocates aquatic animals and plants. The change in water chemistry can also drive out native species. Sedimentation Sediments eroded during construction or agricultural practices are washed into waterways, damaging fish spawning grounds and smothering bottom dwelling organisms. Summary Studies of the effects of pollution on wildlife are of more than academic interest. Like the proverbial canary in the coal mine, disease and damage in the natural world is often a harbinger of similar danger to ourselves.
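Several of the pollutants above (lead shot, dioxin, persistent organochlorine pesticides) share the trait of concentrating as they move up food chains. The snippet below is a minimal sketch of that biomagnification arithmetic; the starting concentration, the food chain, and the ten-fold increase per trophic level are illustrative assumptions rather than measured values.

```python
# Illustrative biomagnification sketch: a persistent pollutant becomes more
# concentrated at each step up a food chain. The starting concentration and
# the ten-fold factor per trophic level are assumed, not measured, values.

water_conc_ppm = 0.000003       # assumed concentration in the water (ppm)
factor_per_level = 10           # assumed concentration increase per level
food_chain = ["plankton", "small fish", "large fish", "fish-eating bird"]

conc = water_conc_ppm
for organism in food_chain:
    conc *= factor_per_level
    print(f"{organism:>16}: {conc:.5f} ppm")

# After four trophic levels the tissue concentration is 10,000 times that of
# the surrounding water, the multiplicative pattern behind the raptor and
# other top-predator poisonings described above.
```

The point of the sketch is only the multiplicative pattern: a pollutant barely detectable in water can reach biologically significant levels in top predators.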
textbooks/bio/Ecology/AP_Environmental_Science/1.16%3A_Air_Water_and_Soil.txt
INTRODUCTION In natural systems, there is no such thing as waste. Everything flows in a natural cycle of use and reuse. Living organisms consume materials and eventually return them to the environment, usually in a different form, for reuse. Solid waste (or trash) is a human concept. It refers to a variety of discarded materials, not liquid or gas, that are deemed useless or worthless. However, what is worthless to one person may be of value to someone else, and solid wastes can be considered to be misplaced resources. Learning effective ways to reduce the amount of wastes produced and to recycle valuable resources contained in the wastes is important if humans wish to maintain a livable and sustainable environment. Solid waste disposal has been an issue facing humans since they began living together in large, permanent settlements. With the migration of people to urban settings, the volume of solid waste in concentrated areas greatly increased. Ancient cultures dealt with waste disposal in various ways: they dumped it outside their settlements, incorporated some of it into flooring and building materials, and recycled some of it. Dumping and/or burning solid waste has been a standard practice over the centuries. Most communities in the United States dumped or burned their trash until the 1960s, when the Solid Waste Disposal Act of 1965 (part of the Clean Air Act) required environmentally sound disposal of waste materials. SOURCES AND TYPES OF SOLID WASTE There are two basic sources of solid wastes: non-municipal and municipal. Non-municipal solid waste is the discarded solid material from industry, agriculture, mining, and oil and gas production. It makes up almost 99 percent of all the waste in the United States. Some common items that are classified as non-municipal waste are: construction materials (roofing shingles, electrical fixtures, bricks); waste-water sludge; incinerator residues; ash; scrubber sludge; oil/gas/mining waste; railroad ties, and pesticide containers. Municipal solid waste is made up of discarded solid materials from residences, businesses, and city buildings. It makes up a small percentage of waste in the United States, only a little more than one percent of the total. Municipal solid waste consists of materials from plastics to food scraps. The most common waste product is paper (about 40 percent of the total). Other common components are: yard waste (green waste), plastics, metals, wood, glass and food waste. The composition of the municipal wastes can vary from region to region and from season to season. Food waste, which includes animal and vegetable wastes resulting from the preparation and consumption of food, is commonly known as garbage. Some solid wastes are detrimental to the health and well-being of humans. These materials are classified as hazardous wastes. Hazardous wastes are defined as materials which are toxic, carcinogenic (cause cancer), mutagenic (cause DNA mutations), teratogenic (cause birth defects), highly flammable, corrosive or explosive. Although hazardous wastes in the United States are supposedly regulated, some obviously hazardous solid wastes are excluded from strict regulation; these include: mining, hazardous household and small business wastes. WASTE DISPOSAL METHODS Most solid waste is either sent to landfills (dumped) or to incinerators (burned). Ocean dumping has also been a popular way for coastal communities to dispose of their solid wastes. In this method, large barges carry waste out to sea and dump it into the ocean. 
That practice is now banned in the United States due to pollution problems it created. Most municipal and non-municipal waste (about 60%) is sent to landfills. Landfills are popular because they are relatively easy to operate and can handle of lot of waste material. There are two types of landfills: sanitary landfills and secure landfills. In a sanitary landfill solid wastes are spread out and compacted in a hole, canyon area or a giant mound. Modern sanitary landfills are lined with layers of clay, sand and plastic. Each day after garbage is dumped in the landfill, it is covered with clay or plastic to prevent redistribution by animals or the wind. Rainwater that percolates through a sanitary landfill is collected in the bottom liner. This liquid leachate may contain toxic chemicals such as dioxin, mercury, and pesticides. Therefore, it is removed to prevent contamination of local aquifers. The groundwater near the landfill is closely monitored for signs of contamination from the leachate. As the buried wastes are decomposed by bacteria, gases such as methane and carbon dioxide are produced. Because methane gas is very flammable, it is usually collected with other gases by a system of pipes, separated and then either burned off or used as a source of energy (e.g., home heating and cooking, generating electricity). Other gases such as ammonia and hydrogen sulfide may also be released by the landfill, contributing to air pollution. These gases are also monitored and, if necessary, collected for disposal. Finally, when the landfill reaches its capacity, it is sealed with more layers of clay and sand. Gas and water monitoring activities, though, must continue past the useful life of the landfill. Secure landfills are designed to handle hazardous wastes. They are basically the same design as sanitary landfills, but they have thicker plastic and clay liners. Also, wastes are segregated and stored according to type, typically in barrels, which prevents the mixing of incompatible wastes. Some hazardous waste in the United States is sent to foreign countries for disposal. Developing countries are willing to accept this waste to raise needed monies. Recent treaties by the U.N. Environment Programme have addressed the international transport of such hazardous wastes. Federal regulation mandates that landfills cannot be located near faults, floodplains, wetlands or other bodies of water. In many areas, finding landfill space is not a problem, but in some heavily populated areas it is difficult to find suitable sites. There are, of course, other problems associated with landfills. The liners may eventually leak and contaminate groundwater with toxic leachate. Landfills also produce polluting gases, and landfill vehicle traffic can be a source of noise and particulate pollutants for any nearby community. About 15 percent of the municipal solid waste in the United States is incinerated. Incineration is the burning of solid wastes at high temperatures (>1000ºC). Though particulate matter, such as ash, remains after the incineration, the sheer volume of the waste is reduced by about 85 percent. Ash is much more compact than unburned solid waste. In addition to the volume reduction of the waste, the heat from the trash that is incinerated in large-scale facilities can be used to produce electric power. This process is called waste-to-energy. There are two kinds of waste-to-energy systems: mass burn incinerators and refuse-derived incinerators. 
In mass burn incinerators all of the solid waste is incinerated. The heat from the incineration process is used to produce steam. This steam is used to drive electric power generators. Acid gases from the burning are removed by chemical scrubbers. Any particulates in the combustion gases are removed by electrostatic precipitators. The cleaned gases are then released into the atmosphere through a tall stack. The ashes from the combustion are sent to a landfill for disposal. It is best if only combustible items (paper, wood products, and plastics) are burned. In a refuse-derived incinerator, non-combustible materials are separated from the waste. Items such as glass and metals may be recycled. The combustible wastes are then formed into fuel pellets which can be burned in standard steam boilers. This system has the advantage of removing potentially harmful materials from waste before it is burned. It also provides for some recycling of materials. As with any combustion process, the main environmental concern is air quality. Incineration releases various air pollutants (particulates, sulfur dioxide, nitrogen oxides, and methane) into the atmosphere. Heavy metals (e.g., lead, mercury) and other chemical toxins (e.g., dioxins) can also be released. Many communities do not want incinerators within their city limits. Incinerators are also costly to build and to maintain when compared to landfills.
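As a rough back-of-the-envelope illustration of the waste-to-energy idea described above, the following sketch estimates the electricity that a mass burn facility might recover from a day's municipal solid waste. The daily tonnage, the heating value of mixed waste (about 10 MJ per kilogram), and the 25 percent conversion efficiency are all assumed, order-of-magnitude figures, not values taken from this chapter.

```python
# Back-of-the-envelope waste-to-energy estimate. The daily tonnage, heating
# value, and plant efficiency are assumed, order-of-magnitude figures.

waste_tonnes_per_day = 1000         # assumed intake of a mass burn facility
heating_value_mj_per_kg = 10.0      # assumed energy content of mixed MSW
plant_efficiency = 0.25             # assumed heat-to-electricity efficiency

heat_mj = waste_tonnes_per_day * 1000 * heating_value_mj_per_kg
electricity_mwh = heat_mj * plant_efficiency / 3600   # 1 MWh = 3600 MJ

print(f"Thermal energy released: {heat_mj:,.0f} MJ per day")
print(f"Electricity generated:  ~{electricity_mwh:,.0f} MWh per day")
# About 700 MWh per day with these numbers, roughly a 30 MW generator
# running around the clock.
```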
textbooks/bio/Ecology/AP_Environmental_Science/1.17%3A_Solid_Waste.txt
INTRODUCTION When environmental conditions are degraded such that the range of tolerance is exceeded, there will be a significant impact on human health. Our industrialized society dumps huge amounts of pollutants and toxic wastes into the earth's biosphere without fully considering the consequences. Such actions seriously degrade the health of the earth's ecosystems, and this degradation ultimately affects the health and well-being of human populations. AGENTS For most of human history, biological agents were the most significant factor in health. These included pathogenic (disease causing) organisms such as bacteria, viruses, protozoa, and internal parasites. In modern times, cardiovascular diseases, cancer, and accidents are the leading killers in most parts of the world. However, infectious diseases still cause about 22 million deaths a year, mostly in undeveloped countries. These diseases include: tuberculosis, malaria, pneumonia, influenza, whooping cough, dysentery and Acquired Immune Deficiency Syndrome (AIDS). Most of those affected are children. Malnutrition, unclean water, poor sanitary conditions and lack of proper medical care all play roles in these deaths. Compounding the problems of infectious diseases are factors such as drug-resistant pathogens, insecticide-resistant carriers and overpopulation. Overuse of antibiotics have allowed pathogens to develop a resistance to drugs. For example, tuberculosis (TB) was nearly eliminated in most parts of the world, but drug-resistant strains have now reversed that trend. Another example is malaria. The insecticide DDT was widely used to control malaria-carrying mosquito populations in tropical regions. However, after many years the mosquitoes developed a natural resistance to DDT and again spread the disease widely. Anti-malarial medicines were also over prescribed, which allowed the malaria pathogen to become drug-resistant. In our industrialized society, chemical agents also have significant effects on human health. Toxic heavy metals, dioxins, pesticides, and endocrine disrupters are examples of these chemical agents. Heavy metals (e.g., mercury, lead, cadmium, bismuth, selenium, chromium, thallium) are typically produced as by-products of mining and manufacturing processes. All of them biomagnify (i.e., they become more concentrated in species with increasing food chain level). Mercury from polluted water can accumulate in swordfish to levels toxic to humans. When toxic heavy metals get into the body, they accumulate in tissues and may eventually cause sickness or death. Studies show that people with above-average lead levels in their bones have an increased risk of developing attention deficit disorder and aggressive behavior. Lead can also damage brain cells and affect muscular coordination. Dioxins are organic compounds, usually produced as a byproduct of herbicide production. They are stable compounds and can accumulate in the environment. Dioxins also biomagnify through the food chain and can cause birth defects and death in wildlife. Although dioxin is known to be extremely toxic to mammals, its low-level effects on the human body are not well known. The infamous Agent Orange used as a defoliant during the Vietnam war contained a dioxin component. Many veterans from that war suffer from a variety of medical problems attributed to Agent Orange exposure. Pesticides are used throughout the world to increase crop yields and as a deterrent to insect-borne diseases. The pesticide DDT was widely used for decades. 
It was seen as an ideal pesticide because it is inexpensive and breaks down slowly in the environment. Unfortunately, the latter characteristic allows it to biomagnify through the food chain. Populations of bird species at the top of the food chain, e.g., eagles and pelicans, are greatly affected by DDT in the environment. When these birds have sufficient levels of DDT, the shells of their eggs are so thin that they break, making reproduction impossible. After DDT was banned in the United States in 1972, affected bird populations made noticeable recoveries. According to the World Health Organization, more than three million people are poisoned by pesticides each year, mostly in undeveloped countries, and about 220,000 of them die. Long-term exposure to pesticides by farm workers and workers in pesticide factories seems to be positively correlated with an increased risk of developing various cancers. Heavy metals, dioxins and pesticides may all be endocrine disrupters. Endocrine disrupters interfere with the functions of hormones in the human body, especially those controlling growth and reproduction. They do this by mimicking certain hormones and sending false messages to the body. Because they are active even in low concentrations, endocrine disrupters may cause problems in relatively low doses. Some of the effects include low sperm count and sterility in males. Since 1940, sperm counts have dropped 50 percent in human males, possibly the result of exposure to endocrine disrupters. EFFECTS An acute effect of a substance is one that occurs rapidly after exposure to a large amount of that substance. A chronic effect of a substance results from exposure to small amounts of a substance over a long period of time. In such a case, the effect may not be immediately obvious. Chronic effects are difficult to measure, as the effects may not be seen for years. Long-term exposure to cigarette smoking, low level radiation exposure, and moderate alcohol use are all thought to produce chronic effects. For centuries, scientists have known that just about any substance is toxic in sufficient quantities. For example, small amounts of selenium are required by living organisms for proper functioning, but large amounts may cause cancer. The effect of a certain chemical on an individual depends on the dose (amount) of the chemical. This relationship is often illustrated by a dose-response curve which shows the relationship between dose and the response of the individual. Lethal doses in humans have been determined for many substances from information gathered from records of homicides and accidental poisonings. Much of the dose-response information also comes from animal testing. Mice, rats, monkeys, hamsters, pigeons, and guinea pigs are commonly used for dose-response testing. A population of laboratory animals is exposed to measured doses under controlled conditions and the effects noted and analyzed. Animal testing poses numerous problems, however. For instance, the tests may be painful to animals, and unrelated species can react differently to the same toxin. In addition, the many differences between test animals and humans makes extrapolating test results to humans very difficult. A dose that is lethal to 50 percent of a population of test animals is called the lethal dose-50 percent or LD-50. Determination of the LD-50 is required for new synthetic chemicals in order to give a measure of their toxicity. 
A dose that causes 50 percent of a population to exhibit any significant response (e.g., hair loss, stunted development) is referred to as the effective dose-50 percent or ED-50. Some toxins have a threshold amount below which there is no apparent effect on the exposed population. Some scientists believe that all toxins should be kept at a zero-level threshold because their effects at low levels are not well known. That is because of the synergy effect in which one substance exacerbates the effects of another. For example, if cigarette smoking increases lung cancer rates 20 times and occupational asbestos exposure also increases lung cancer rates 20 times, then smoking and working in an asbestos plant may increase lung cancer rates up to 400 times. RELATIVE RISKS Risk assessment helps us estimate the probability that an undesirable event will occur. This enables us to set priorities and manage risks in an effective way. The four steps of risk assessment are: 1. Identification of the hazard. 2. Dose-response assessment. Find the relationship between the dose of a substance and the seriousness of its effect on a population. 3. Exposure assessment. Estimate the amount of exposure humans have to a particular substance. 4. Risk characterization. Combine data from the dose-response assessment and the exposure assessment. Risk management of a substance evaluates its risk assessment in conjunction with relevant political, social, and economic considerations in order to make regulatory decisions about the substance. In our society political, social, and economic considerations tend to count more than the risk assessment information. Signs of this are evident everywhere. People listen to loud music even though the levels are known to damage hearing. They smoke cigarettes that they know can cause cancer and heart disease. People are often not logical in making choices. An example of this is a smoker who drinks bottled water because she is afraid tap water is unhealthy. Risk assessments have shown that a person is 1.8 million times more likely to get cancer from smoking than from drinking tap water. One possible explanation for this behavior is that people feel they can control their smoking if they choose to, but risks over which people have no control, such as public water supplies and nuclear wastes, tend to evoke more fearful responses. Because risk management deals with the unknown, it often is only loosely related to science.
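The dose-response ideas above lend themselves to a short numerical sketch. The dose and mortality figures below are entirely hypothetical; the snippet simply interpolates between the two doses that bracket a 50 percent response, on a log-dose scale as is conventional, to estimate an LD-50.

```python
import math

# Hypothetical dose-response data: dose (mg per kg body weight) versus the
# fraction of the test population that died at that dose.
doses = [1, 3, 10, 30, 100]              # mg/kg (hypothetical)
mortality = [0.02, 0.10, 0.35, 0.70, 0.95]

def ld50(doses, response):
    """Estimate the dose giving a 50% response by linear interpolation
    between the two bracketing data points on a log-dose scale."""
    for (d1, r1), (d2, r2) in zip(zip(doses, response),
                                  zip(doses[1:], response[1:])):
        if r1 <= 0.5 <= r2:
            frac = (0.5 - r1) / (r2 - r1)
            return 10 ** (math.log10(d1) +
                          frac * (math.log10(d2) - math.log10(d1)))
    raise ValueError("50% response not bracketed by the data")

print(f"Estimated LD-50: {ld50(doses, mortality):.1f} mg/kg")
# With these made-up numbers the estimate is roughly 16 mg/kg.
```

A real determination would fit a full dose-response curve (for example, a probit or logistic model) to data from many animals, and an ED-50 for a non-lethal response is estimated in the same way.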
textbooks/bio/Ecology/AP_Environmental_Science/1.18%3A_Impact_on_Human_Health.txt
INTRODUCTION The various components of earth's systems interact with one another through the flow of matter and energy. For example, mass (carbon dioxide and oxygen gases) is exchanged between the biosphere and atmosphere during plant photosynthesis. Gases move across the ocean-atmosphere interface. Bacteria in the soil decompose wastes, providing nutrients for plants and returning gases to the atmosphere. Furthermore, studies of Antarctic and Greenland ice cores show a correlation between abrupt climate changes and storm activities in the Atlantic and Pacific oceans during historical times. All of these processes are linked by natural cycles established over billions of years of the earth's history. Humans have only been present for a tiny fraction of earth's history, and for much of that time their presence had little impact on the global environment. However, in recent history, the human population has grown and developed to the point where it is no longer a relatively passive presence in earth's systems. People have greatly increased their use of air, water, land and other natural resources during the last 200 years. Their industrial and agricultural activities have affected the atmosphere, the water cycle, and the climate. Each year large quantities of carbon dioxide and pollutants are added to the atmosphere and water systems due to fossil fuel burning and industrial processes. Ecological systems have been altered as well. The size of natural ecosystems has shrunk as people increase their use of the land. Plants and animals have been changed by human agricultural practices. Clearly humans are changing the global environment and climate. What is unclear is whether earth's systems can adjust to these changes. ATMOSPHERE The earth is much like a big greenhouse. Energy, in the form of sunlight, passes through the atmosphere. Clouds, water and land reflect some of that energy back into space; the rest is absorbed, converted to heat and radiated back into the atmosphere as infrared radiation. Much of this infrared radiation is absorbed by atmospheric carbon dioxide and other gases rather than radiated into space. The process is similar to that of a greenhouse, with infrared-absorbing gases such as carbon dioxide and methane acting as panes of glass to trap the infrared heat. For this reason, these gases are known as greenhouse gases. The net result of this process is that the atmosphere is warmed. For more than a century, scientists have pondered the possible effects that changes in the amounts of greenhouse gases like carbon dioxide would have on the earth's climate. One notable theory that has arisen from this is that of the greenhouse effect. According to this theory, if the concentration of carbon dioxide in the atmosphere steadily increases, then the atmosphere will trap more and more heat. This could cause the earth's mean surface temperature to rise over time. Concerns over possible climate effects led to efforts to monitor carbon dioxide levels. Monitoring began in the late 1950's, with monitoring stations being set up in Alaska, Antarctica and Hawaii. The Mauna Loa, Hawaii, station has been operating since 1958. The data compiled there for more than 40 years show some interesting trends in the concentration of carbon dioxide in the atmosphere. Carbon dioxide concentration varies cyclically by season, with highs occurring in late spring and lows in early fall. This follows the annual cycle of plant growth: photosynthesis during the Northern Hemisphere growing season draws carbon dioxide down to a minimum by early fall, while respiration and decay return it to a maximum by late spring.
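The combination of this seasonal cycle with the long-term rise discussed next gives the Mauna Loa record its familiar "sawtooth on a ramp" shape. The toy model below reproduces that shape qualitatively; the starting concentration, growth rate, and seasonal amplitude are rough illustrative numbers, not actual Mauna Loa data.

```python
import math

# Toy model of the Mauna Loa record: a long-term upward trend with a seasonal
# oscillation superimposed. All parameter values here are illustrative.

start_ppm = 315.0            # rough concentration near the start of the record
trend_ppm_per_year = 1.5     # assumed average annual increase
seasonal_amplitude = 3.0     # assumed seasonal swing about the trend, in ppm
peak_month = 5               # seasonal maximum placed in late spring

def co2(months_since_start):
    """Toy CO2 concentration (ppm): linear trend plus a seasonal cosine."""
    years = months_since_start / 12
    trend = start_ppm + trend_ppm_per_year * years
    season = seasonal_amplitude * math.cos(
        2 * math.pi * (months_since_start - peak_month) / 12)
    return trend + season

for m in range(0, 25, 3):                  # sample every three months
    print(f"month {m:2d}: {co2(m):6.1f} ppm")
# The seasonal wiggle repeats every twelve months while the baseline creeps
# steadily upward, giving the "sawtooth on a ramp" of the Keeling curve.
```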
Superimposed over these seasonal variations is a long-term gradual increase in carbon dioxide concentration. What causes this long-term increase? Will the trend continue? Humans consume large amounts of fossil fuels in order to drive their highly industrialized society. The burning of coal, oil and natural gas releases considerable quantities of carbon dioxide into the atmosphere. In a relatively short time, humans have released organic carbon into the atmosphere that took hundreds of millions of years to store in sedimentary rocks. Deforestation by humans -- especially in tropical areas -- is also a source of net carbon dioxide increase in the atmosphere. The burning of trees produces carbon dioxide directly, and the removal of the trees also results in less carbon dioxide being removed from the atmosphere by photosynthesis. However, the overall role of the terrestrial biosphere in the carbon dioxide problem is not clear. Forests have regrown in some regions of the world (e.g., the northeastern United States). These added forests increase carbon dioxide removal from the atmosphere. Furthermore, some experiments suggest that rising carbon dioxide concentrations in the atmosphere may stimulate plant growth in general. If true, this would also lead to an increase in carbon sequestration by plant life. Models used to predict future levels of carbon dioxide in the atmosphere depend on an accurate knowledge of all relevant carbon sources and sinks. Questions still remain as to the size, location and magnitude of these. Therefore, considerable uncertainty remains as to whether the carbon dioxide concentration in the atmosphere will continue to increase, will instead decrease, or will become constant. Carbon dioxide is not the only greenhouse gas that could significantly affect the global climate. Methane gas could also be a major player. It is released as a by-product of organic decomposition by microbial activity, especially from landfills. It is a pollutant resulting from the use of fossil fuels, and is even produced by cattle. The largest deposits of methane gas, however, may be the oceans and vast tundra wastelands. In cold water, for example, methane can form crystal structures somewhat similar to water ice known as clathrates. Clathrates are known to occur on the edges of the oceans' continental shelves. They also occur in the permafrost of tundra regions. When warmer temperatures occur, the clathrates destabilize, releasing the stored methane. The increase in the greenhouse effect that would result from the release of methane from clathrates on the continental shelves and in permafrost worldwide could equal that from the carbon dioxide produced from the burning of all the world's coal reserves. The buildup of greenhouse gases is not the only atmospheric concern. The concentration of chlorofluorocarbons (CFC's) in the atmosphere has increased since they were first synthesized more than 70 years ago. These compounds have been used as refrigerant gases, aerosol propellants, electronic component cleaners and for blowing bubbles in styrofoam. Most of their uses involve their eventual release into the atmosphere. Because they are chemically very inert and insoluble in water, they are also not easily removed from the atmosphere by normal processes such as rainfall. Therefore, the concentration of CFCs in the atmosphere increases with continued release.
When CFCs eventually rise into the stratosphere, they can be broken down by UV radiation from the sun as follows:
CCl3F + UV energy → Cl + CCl2F
The free chlorine that is produced can react with ozone, which is also present in the stratosphere. This has important consequences for living organisms on the surface of the earth. Ozone in the stratosphere protects living organisms by absorbing most of the harmful UV radiation from the sun. This ozone is constantly produced and destroyed in a natural cycle. The basic reactions involving only oxygen (known as the Chapman Reactions) are as follows:
O2 + UV → 2 O
O + O2 → O3 (ozone production)
O3 + UV → O + O2 (ozone destruction)
O + O2 → O3 (ozone production)
O3 + O → O2 + O2 (ozone destruction)
During the 1960s, measurements of atmospheric ozone showed that it was being destroyed faster than could be accounted for by the natural cycle alone. It was determined that other, faster reactions were controlling the ozone concentrations in the stratosphere. Among the most important of these were those involving the Cl atoms produced from the breakdown of CFC's:
Cl + O3 → ClO + O2
ClO + O → Cl + O2
Because the normal fate of the O atom in the above reaction would be to form another ozone molecule, the net result of both reactions is the elimination of one ozone molecule and one would-be ozone molecule. Furthermore, at the end of the reaction the Cl atom is free to start the destructive cycle over again. By this catalytic chain reaction, one Cl atom can destroy about 100,000 ozone molecules before other processes remove it. Ozone destruction caused by CFCs has resulted in the formation of "holes" in the stratospheric ozone layer over the polar regions, where the layer is thinnest. In 1987, the "Montreal Protocol" set forth a worldwide process to reduce and eventually to eliminate the use of CFC's. It has apparently been successful, as current observations show that the increase in CFCs in the stratosphere is leveling off. Unfortunately, it will be many years before ozone levels will return to normal because of the long atmospheric lifetime (50 to 100 years) of the CFCs already present. Curiously, although ozone in the stratosphere is beneficial to life on earth, ozone in the lower atmosphere (troposphere) can harm life by aggravating respiratory ailments in humans and damaging plants. Ozone in the troposphere is produced naturally by lightning. It is also a secondary pollutant produced by photochemical reactions involving primary pollutants such as nitrogen oxides. Smoggy cities such as Los Angeles suffer from considerable ozone pollution. Research studies have shown that biomass burning is also a major source of ozone pollution. Ozone is produced photochemically from precursor molecules released during the burning of forests and grasslands. Biomass burning is mainly concentrated in tropical regions. Indeed, satellite observations of South America and New Guinea show that tropospheric ozone is increasing in those areas where biomass burning is prevalent. OCEANS Understanding the role the oceans may play in global climate change requires an understanding of the dynamics of ocean circulation changes. Global ocean circulation is controlled by thermohaline circulation. It is driven by differences in the density of seawater, which is determined by the temperature (thermo) and salinity (haline) of the seawater. In the Atlantic, thermohaline circulation transports warm and very saline water to the North. There, the water cools and sinks into the deep ocean.
This newly formed deep water subsequently moves southward. Dense water also sinks near Antarctica. The cold, dense waters from the North Atlantic and Antarctica gradually warm and return to the surface throughout the world's oceans. The entire system moves like a giant conveyor belt. The movement is very slow (roughly 0.1 meters per second), but the flow is equivalent to that of 100 Amazon rivers. This circulation system provides western Europe with comparatively warm sea surface temperatures along the coast and contributes to its mild winters. Ocean circulation models show that the thermohaline circulation is coupled to the carbon dioxide content of the atmosphere, and thus to the greenhouse effect. Increases in carbon dioxide in the atmosphere can lead to a slowing or a complete breakdown of the circulation system. One might expect temperatures over western Europe to decrease in such a scenario. However, any such change would be superimposed on warming from the enhanced greenhouse effect. Therefore, there may be little change in temperature over western Europe, and any cooling could be restricted to the ocean area away from land. The potential effects of such circulation changes on marine ecosystems are largely unknown, but would probably be significant. Furthermore, if circulation in the oceans is reduced, their ability to absorb carbon dioxide will also be reduced. This would make the effect of human-produced carbon dioxide emissions even more pronounced. BIOTA Biodiversity is an important part of any ecosystem. The earth's biodiversity is significantly affected by human activities. These activities often lead to biodiversity loss. This loss can result from a number of factors including: habitat destruction, introduction of exotics, and over-harvesting. Of these, habitat destruction is probably the most important. Humans destroy habitats for many reasons: agricultural expansion, urban expansion, road construction and reservoir construction. Larger regions than those directly destroyed are generally affected because of the resulting habitat fragmentation. Habitat fragmentation results in large populations being broken into smaller populations, which may be isolated from one another and may not be large enough to survive. For example, the Aswan High Dam of Egypt was constructed because the desire to increase the supply of water for irrigation and power was considered paramount. The environmental side effects, however, have been enormous and include the spread of the disease schistosomiasis by snails that live in the irrigation channels; loss of land in the delta of the Nile River from erosion once the former sediment load of the river was no longer available for land building; and a variety of other consequences. The advisability of agencies concerned with international development seeking the best environmental advice is now generally accepted, but implementation of this understanding has been slow. When the rate of exploitation or utilization of a species exceeds its capacity to maintain a viable population, over-harvesting results. Living resources such as forests and wildlife are usually considered renewable resources. However, they can become non-renewable if over-harvested. Over-harvesting and habitat loss often occur together, because the removal of an organism from its environment can have a detrimental impact on the environment itself.
Humans have historically exploited plant and animal species to maximize short-term benefits, usually at the expense of being able to sustain the species in the long-term. A classic example of over-harvesting involves the passenger pigeon. It was once thought to be the most populous bird on earth, with numbers into the billions. Early settlers in North America hunted the bird for food. The hunting was so intense that the bird disappeared from the wild by 1900 and was extinct by 1914. The American buffalo nearly suffered the same fate. Originally numbering in the tens of millions, the species was reduced to fewer than 1000 animals by 1890. The species has, however, made a comeback in reserves and private ranches and is no longer considered threatened. The fishing industry has a long history of over-harvesting its resources. The California sardine industry peaked in the 1930's. By the late 1950s, the sardines were gone as were the canneries in Monterey. The Peruvian anchovy fishery boomed in the 1960s and collapsed in the 1970s. Over-harvesting of fish has only increased over the years, as ships have become bigger and more "efficient" methods of harvesting fish (e.g., the purse-seine net) have been developed. By the mid-1990s, over 40 percent of the species in American fisheries were over-harvested. Over-harvesting of tropical forests is currently a worldwide problem. More efficient methods for harvesting and transporting have made it profitable to remove trees from previously inaccessible areas. Mahogany trees are over-harvested by loggers in the tropical forests of Brazil, Bolivia, Peru, Nicaragua and Guatemala. Many other types of tropical trees once considered worthless are now valuable sources of pulp, chipboard, fiberboard and cellulose for plastics production. Developing nations are often willing to sign over timber rights to foreign companies for needed hard currency. Logging operations also act as a catalyst for tropical deforestation. Farmers use roads built by logging companies to reach remote areas, which are then cleared of forests and used for ranching and agriculture. When a species is transplanted into an environment to which it is not native, it is known as an introduced exotic. Whenever man has settled far away from home, he has tried to introduce his familiar animals and plants. Long ago, European explorers released goats and pigs into their colonies to provide a supply of familiar animal protein. Many exotics are accidentally introduced. Often, the introduction of exotics has disastrous effects on the native flora and fauna. Their new habitat may have fewer predators or diseases that affect them, and as a result their populations can grow out of control. Organisms they prey upon may not have evolved defense mechanisms against them, and native species may not be successful in competing with them for space or food. Some of the most abundant wild animals and plants in the United States are introduced species. For example, starlings, eucalyptus trees and many types of grasses are introduced exotics. Most insect and plant pests are exotic species. The kudzu vine, a Japanese species introduced in 1876 to shade porches of southern mansions and widely planted in the 1940's to control erosion, grows so rapidly (up to one foot per day) that it kills forests by entirely covering trees and shrubs. The gypsy moth was brought from France in 1869 by an entomologist who hoped to interbreed them with silk moths.
They escaped and established a colony that invaded all of the New England states, defoliating trees of many different kinds. Exotics are a factor contributing to the endangered or threatened status of many animals and plants in the U.S. Dangers of Bird Migration All creatures are threatened by habitat degradation and destruction. For migrating birds, the problem is vastly compounded. Birds travel thousands of miles between summer and winter homes, and environmental disruptions anywhere along the route or at either destination can be deadly. Indeed, massive declines in many bird populations have been documented over recent decades. Many of the species common in the United States are Neotropical – they breed in North America in the summer, then over winter in Central or South America. These songbirds, waterfowl, raptors, and shorebirds, who follow the same migration routes their ancestors did, face many hazards along the way. Night-time lighting (light pollution) can disorient them. Collisions with airplanes, wires, and buildings can kill and injure them. Once the birds arrive at their destination, or when they stop in-route, they need food, water, and a place to rest. But urban sprawl is encroaching on bird habitat, and food and water supplies are contaminated by pollution. Recently, a new problem has arisen. For migrating birds, timing is everything – they must arrive at their summer breeding grounds when food supplies are at their peak, so that they can rebuild their body fat and reproduce successfully. Global warming is beginning to upset the delicate balance between the lifecycles of plants and insects and birds. In some areas, birds are showing up early, before flowers open or insects hatch, and finding very little to eat. Fortunately, many people value birds and several conservation efforts are underway, including: • Creation of protective shelter belts and hedgerows around fields and community open space • Easements to provide native habitat for birds in human activity areas • Timing of insecticide applications to avoid loss of the food base during bird movement in the spring and fall • Preservation of the quality and quantity of community wetlands • Minimization of practices that negatively impact birds In addition, many seek to coordinate activities along the migratory flyways to increase the success of the migrating birds. Although humans are working to create natural reserves, the problem of human impact on migratory birds still needs to be addressed to a significant degree.
textbooks/bio/Ecology/AP_Environmental_Science/1.19%3A_First_Order_Effects.txt
INTRODUCTION Although humans have had the capability to monitor earth's systems effectively only relatively recently, previous global environmental events have not gone unrecorded. Climate indicators exist in various forms (e.g., pollen in lake-bottom sediments, patterns in tree-rings, air bubbles frozen in glacial ice and growth rings in coral). These indicators show that significant environmental changes have occurred throughout earth's history. These changes occurred slowly, over relatively long periods of time. However, human activities are altering earth's systems at an accelerated pace. Large-scale pollution, increased natural resource consumption and the destruction of plant and animal species and their habitats by humans are causing significant changes of global proportions. Human-caused global changes include: depletion of stratospheric ozone, increased carbon dioxide concentration in the atmosphere and habitat destruction. The consequences of these changes include: global warming, increased levels of solar UV radiation, increased sea levels and loss of biodiversity. The ramifications of these phenomena are far-reaching and potentially devastating to all life on earth, including humans. Awareness of this has prompted an international effort to increase scientific understanding of global changes and their effects. Most scientists agree on certain points: • Greenhouse gases absorb and then emit infrared radiation. • Atmospheric concentrations of carbon dioxide, methane and chlorofluorocarbons (CFCs) have increased significantly above pre-industrial levels, and the increase is directly attributable to human activities. • Increased concentrations of greenhouse gases produce a net heating effect on the earth. • Globally, average surface air temperatures are about 0.5°C higher than those in the 19th century. • Many centuries will pass before carbon dioxide concentrations will return to normal levels, even if all human-caused emissions are stopped entirely. • The return of CFC concentrations to their pre-industrial levels will take more than a century, even with a halt in human-caused emissions. While a general consensus has been reached on the above points, no such consensus has been reached on the extent to which these changes are affecting the global environment and what course they will follow in the future. The scientific community can only infer what will happen from predictive models based upon their knowledge of relevant environmental processes. This knowledge is often limited because the processes involved and their relationships are exceedingly complex. Moreover, the distinct possibility exists that not all processes are even known. ATMOSPHERE The atmosphere surrounding the earth is both a part and a product of life. Humans have significantly affected the atmosphere. For example, huge amounts of carbon dioxide and methane, among other compounds, are added annually to the atmosphere due to anthropogenic uses of fossil fuels. For many years, CFC's were indiscriminately released into the atmosphere. The addition of these chemical pollutants to the atmosphere raises concerns about how the changes in the atmosphere may affect life on earth. The most immediate effect of increased amounts of greenhouse gases in the atmosphere is global warming. The global mean surface temperature is expected to rise 1 to 3°C by the middle of the 21st century. The extent of the warming will depend in part upon atmospheric water vapor levels and cloud cover feedback processes. 
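The 1 to 3°C range quoted above is obtained by combining an estimate of the extra heat trapped by added greenhouse gases with a climate sensitivity that depends on exactly the water vapor and cloud feedbacks just mentioned. The sketch below shows the shape of such a calculation using a widely used logarithmic approximation for carbon dioxide forcing; the sensitivity parameter and the concentration values are assumed, illustrative numbers, and the calculation ignores ocean lag and other complications.

```python
import math

# Rough sketch of how a warming estimate is formed: radiative forcing from
# added CO2 (a widely used logarithmic approximation) multiplied by a climate
# sensitivity parameter. The sensitivity value is assumed and bundles together
# the water vapor and cloud feedbacks discussed in the text.

def co2_forcing_w_per_m2(c_new_ppm, c_ref_ppm=280.0):
    """Extra radiative forcing (W/m^2) from raising CO2 from c_ref to c_new."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

sensitivity_c_per_w_per_m2 = 0.8    # assumed; real estimates span a wide range

for c_ppm in (420, 560):            # near-present level, and a doubling of 280
    forcing = co2_forcing_w_per_m2(c_ppm)
    warming = sensitivity_c_per_w_per_m2 * forcing
    print(f"CO2 {c_ppm} ppm: forcing ~{forcing:.1f} W/m^2, "
          f"warming ~{warming:.1f} deg C")
# Doubling CO2 gives ~3.7 W/m^2 of forcing; with this assumed sensitivity that
# corresponds to roughly 3 deg C of eventual warming, broadly consistent with
# the range quoted in the text.
```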
Heating of the atmosphere can impact the global climate in several ways. The rate of water evaporation will increase as the environment warms, and this will lead to increases in the global mean precipitation. A warmer, wetter atmosphere may subsequently cause an increase in the frequency of tropical storms, which can cause flooding. In addition to deaths from famine and drowning, floods can bring with them cholera and diseases spread by mosquitoes, such as malaria and yellow fever. Atmospheric heating could also cause severe heat waves, and projections indicate that heat-related deaths may double by 2020. High-altitude cooling, caused by the combination of reduced stratospheric ozone concentrations and increased carbon dioxide concentrations, may lower the upper-stratospheric temperatures by as much as 8 to 20°C. This cooling could change the atmosphere's circulation patterns. In addition, scientists believe that stratospheric ozone depletion could have a serious negative impact on the health of humans, plants and animals. This is due to the concomitant increase in UV radiation, particularly UV-B, that reaches the surface of the earth when stratospheric ozone levels decrease. Humans DNA is susceptible to damage by UV-B radiation, and exposure can cause skin cancer. Studies indicate that a 10 percent reduction in stratospheric ozone could give rise to an additional 20,000 skin cancer cases each year. Other consequences to humans include suppression of the human immune system and increases in the occurrence of eye cataracts. Plants respond adversely to exposure to UV-B radiation, with reduced leaf area, reduced shoot length and decreases in the rate of photosynthesis. Such responses could significantly decrease the yields of agricultural crops. UV-B radiation can kill plankton in the ocean, which in turn could severely impact marine food chains. Increased exposure to UV-B radiation also appears to kill developing embryos in the eggs of some reptiles and amphibians. OCEAN Even a moderate increase in global temperature can melt significant amounts of snow and ice, shrinking glaciers and the polar ice caps. This affects sea levels. Inasmuch as 50 percent of the world's human population lives within 50 kilometers of the sea, the effects of even a moderate rise in sea levels -- on the order of a meter or less -- would be significant. Research suggests that rising sea levels will flood some coastal wetlands and communities, and will amplify the impacts of storm surges, in which sea levels rise because of severe storm winds. Increased precipitation in high northern latitudes may reduce the salinity and density of the ocean waters there, which in turn will influence global ocean (thermohaline) circulation. Coral reefs are directly affected by the amount of carbon dioxide in the atmosphere, global temperature change and increased UV radiation. An increase in atmospheric carbon dioxide leads to a decrease of carbonate ion in the seawater. This decrease can cause a reduction in the rate of coral reef formation, or, in extreme cases, could cause coral reefs to dissolve. A phenomenon known as coral bleaching, which can be fatal to a coral colony, is caused by unusually high or low temperatures, high or low salinity or high amounts of UV radiation. The first two of these are linked to global warming and the last could result from stratospheric ozone depletion. Scientists at the National Center for Atmospheric Research have reported that global warming may accentuate the effects of El Niño events. 
The name El Niño refers to the warm phase of a large oscillation, known as the El Niño/Southern Oscillation (ENSO), in which the surface temperature of the central/eastern part of the tropical Pacific warms. This is accompanied by changes in winds and rainfall patterns. Abnormally dry conditions occur over northern Australia, Indonesia and the Philippines. Drier than normal conditions are also found in southeastern Africa and northern Brazil. Wetter than normal conditions are observed along the west coast of tropical South America, the North American Gulf Coast and southern Brazil. The warm El Niño phase typically lasts for eight to 10 months. The entire ENSO cycle usually lasts about three to seven years. Over the past century, El Niño events have become more frequent and have caused greater climate changes paralleling the rise in global temperature. BIOTA The variety of life on earth is its biodiversity. The number of species of plants, animals, microorganisms, the enormous diversity of genes in these species, the different ecosystems on the planet -- such as deserts, rainforests and coral reefs -- are all part of a biologically diverse earth. There is a link between biodiversity and climate change. Rapid global warming can affect an ecosystem's chances to adapt naturally in several ways. A species may be incapable of migrating far enough to reach a hospitable climate when faced with significant global warming. Existing habitat may be lost during progressive shifts of climatic conditions. Species diversity may be reduced as a result of reductions in habitat size. The fate of many species in a rapidly warming world will likely depend on their ability to migrate from increasingly less favorable climatic conditions to new areas that meet their physical, biological and climatic needs. Human activity plays a major role in the loss of biodiversity. Forests and wetlands are converted to agricultural and urban land use. Logging has cleared most of the virgin forests of the contiguous 48 states. The biologically diverse tropical forests are currently being rapidly destroyed as the land is converted to farming or cleared by logging and mining operations. On agricultural land, large fields of monoculture crops replace the diverse plant life that once was there. The United States has lost nearly all of the original tall-grass prairie that once covered the Great Plains. Hunting has driven species such as wolves and grizzly bears that were once widespread over the western United States to a few isolated reserves. Large land mammals such as rhinoceri and elephants have had their ranges greatly diminished in Asia and Africa by habitat destruction. Selective breeding by farmers has reduced the genetic diversity of livestock animals. Introduced exotic species have driven out native plants and animals. One of the biggest side effects of the loss of biodiversity is the premature extinction of species. Small changes in the competitive ability of a species in one part of a food web may lead to extinctions in other parts, as changes in population density are magnified by predator-prey or host-parasite interactions. Human activities such as habitat destruction, introduction of exotics and over-harvesting are also causing large numbers of premature extinctions. It is estimated that about one-third of the plant species in the United States are threatened by extinction. Countless unknown species of plants and animals are lost every year because of the destruction of tropical forests. 
Plants that might hold the ingredients for new medicines are instead lost forever. High biodiversity contributes to the stability of an ecosystem. Each species, no matter how small, plays an important role. Diversity enables ecosystems to avoid and recover from a variety of disasters. Almost all cultures have in some way recognized the importance that Nature and its biological diversity have upon them.
textbooks/bio/Ecology/AP_Environmental_Science/1.20%3A_Higher_order_interactions.txt
INTRODUCTION Economics is the process by which humans manage their environment and its resources. The process is made up of a system of production, distribution and consumption of goods and services. Natural resources provide the raw materials and energy for producing economic goods, while human resources provide the necessary skill and labor to carry out the process. Different societies manage their economies in different ways. In a traditional economy, people are self-sufficient (i.e., they produce their own goods), but in a pure command economy the government controls all steps in the economic process. Capitalist countries such as the United States have a system that is largely based on a market economy. Buyers and sellers make economic decisions based on the Principle of Supply and Demand. Sellers supply goods and buyers create demand for goods. These two roles are often in conflict: buyers want to buy goods at low prices and sellers want to sell goods at high prices. However, the two sides eventually compromise on a price at which buyers can find sellers willing to sell and sellers can find buyers willing to buy. This is known as the market equilibrium price. The equilibrium price corresponds to the intersection of the supply and demand curves. Most countries strive to increase their capacities to produce goods and services and consider doing so a positive sign of development. Economic growth is stimulated both by population growth and by rising per capita consumption of goods and services, each of which increases the consumption of natural resources. Various indicators are used to measure economic growth. One of them is the Gross National Product (GNP), which represents the total market value of final goods and services produced by a country's residents and companies during a given period (usually one year), regardless of where that production takes place. Unfortunately, because of the global nature of many companies, GNP is a poor guide to economic activity within a country's own borders. If a company produces goods in a foreign country, the "home" country does not really benefit from that production, yet the output is still counted. Thus, if Pepsi bottles and sells soda in Japan, those revenues are included in the GNP of the United States even though they say little about the state of the United States economy itself. The GDP (Gross Domestic Product), which refers to the value of the goods and services produced within the boundaries of an economy during a given period of time, therefore provides a better indicator of the health of a country's economy. Both the GNP and the GDP are economic measures and indicate nothing about social or environmental conditions within a country. They are not measures of the quality of life. In fact, severe environmental problems can actually raise the GNP and GDP, because the funds used to clean up environmental messes (such as hazardous waste sites) help to create new jobs and increase the consumption of natural resources. The United Nations Human Development Index is an estimate of the quality of life in a country based on three indicators: life expectancy, literacy rate and per capita GNP. EXTERNAL COSTS Economic activity generally affects the environment, usually negatively. Natural resources are used, and large amounts of waste are produced. These side effects can be seen as ways in which the actions of a producer affect the well-being of bystanders. The market fails to allocate adequate resources to address such external costs because it is only concerned with buyers and sellers, not with the well-being of the environment. Only direct (or internal) costs are considered relevant.
External costs are harmful social or environmental effects caused by the production or consumption of economic goods. Governments may take action to help alleviate the effects of economic activity. When external costs occur, a company's private production cost and the social cost of production are at odds. The firm does not consider the cost of pollution cleanup to be relevant, while society does. The social costs of production include the negative effects of pollution and the cost of treatment. As a result, the social costs end up exceeding the private production costs. When external pollution and treatment costs are included in the production cost of the product, the supply curve intersects the demand curve at a higher price point. As a result of the higher price, there will be less demand for the product and less pollution produced. For example, exhaust pollutants from automobiles adversely affect the health and welfare of the human population. However, oil companies consider their cost of producing gasoline to include only their exploration and production costs. Therefore, any measures to reduce exhaust pollutants represent an external cost. The government tries to help reduce the problem of exhaust pollutants by setting emissions and fuel-efficiency standards for automobiles. It also collects a gasoline tax that increases the final price of gasoline, which may encourage people to drive less. Sometimes, pollution results from the production process because no property rights are involved. For example, if a paper manufacturer dumps waste in a privately owned pond, the landowner generally takes legal action against the paper firm, claiming compensation for a specific loss in property value caused by the industrial pollution. In contrast, the air and most waterways are not owned by individuals or businesses, but instead are considered to be public goods. Because no property rights are involved, the generation of pollution does not affect supply and demand. Firms have an incentive to use public goods in the production process because doing so does not cost anything. If the paper manufacturer can minimize production costs by dumping wastes for free into the local river, then it will do so. The consequences of this pollution include adverse impacts on the fish and animal populations that depend on the water, degradation of the surrounding environment, decrease in the quality of water used in recreation and business, human health problems and the need for extensive treatment of drinking water by downstream communities. An important role of the government is to protect public goods, especially those with multiple uses, from pollution by companies seeking to minimize company costs and to maximize profits. People desire clean water for recreation and drinking, and the government must act to protect the broad interests of society from the narrow profit-driven focus of companies. One way to "internalize" some of the external costs of pollution is for the government to tax pollution. A pollution tax would require that polluting firms pay a tax based on the air, water and land pollution that they generate. This tax would raise the private production cost of a company to reflect the social cost of production. In addition, the generated tax revenues could be used by the government to help mitigate the effects of pollution. The main drawback of such a tax is that it would discourage economic activity by increasing costs to the companies.
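The effect of internalizing an external cost can be sketched with a small numerical example. The demand curve, supply curve and $1.50-per-unit pollution cost below are hypothetical values chosen only for illustration; the sketch simply re-solves for the market equilibrium once a tax equal to the external cost shifts the supply curve upward.

```python
def equilibrium(d_intercept, d_slope, s_intercept, s_slope):
    """Intersection of a linear demand curve (P = d_intercept - d_slope * Q)
    and a linear supply curve (P = s_intercept + s_slope * Q)."""
    quantity = (d_intercept - s_intercept) / (d_slope + s_slope)
    price = d_intercept - d_slope * quantity
    return quantity, price

# Hypothetical market in which each unit sold imposes $1.50 of pollution
# damage that the producer ignores.
q0, p0 = equilibrium(10.0, 0.5, 2.0, 0.3)
# A pollution tax equal to the external cost shifts the supply curve up,
# because every unit now costs the firm an extra $1.50 to bring to market.
q1, p1 = equilibrium(10.0, 0.5, 2.0 + 1.5, 0.3)

print(f"Without the tax: quantity {q0:.1f}, price ${p0:.2f}")
print(f"With the tax:    quantity {q1:.1f}, price ${p1:.2f}")
# The taxed market settles at a higher price and a lower quantity, and
# therefore produces less pollution, once the external cost is internalized.
```

In a real market the curves are not known with this precision, but the qualitative effect is the same.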
For example, a tax on coal and oil would increase the cost of electricity and gasoline. Taxed companies would be forced to scale back production in response to these higher costs, and investments and employment would suffer. The trick is to set the tax at a level at which economic loss does not exceed the environmental benefits realized. Tradable Pollution Permits (TPPs) are an alternative to pollution taxes. In 1994, the United States government inaugurated a program to reduce sulfur dioxide emissions by requiring that companies have a permit for each ton of sulfur dioxide they emit. Companies were allocated TPPs based on their historical level of sulfur dioxide emissions. The program allows TPPs to be bought and sold among the companies. Therefore, a company can invest in scrubbers or use more expensive low sulfur coal to reduce its sulfur dioxide emissions and then sell its excess permits, offsetting part of the cost of reducing the pollution. COST-BENEFIT ANALYSES Ideally, one would like to live in a perfect world with zero pollution. Unfortunately, this is not possible with current technology. People drive cars and trucks, and most of these vehicles have internal combustion engines, which emit pollutants. Unless gasoline or diesel powered vehicles are completely banned, that pollution will persist. However, a few electric vehicles are starting to appear on the road, although they are impractical for long distance use or heavy hauling. Obviously, most people are not going to give up their internal combustion engine vehicles in the near future. People generally accept that some pollution is a result of living in a modern society. The critical issue, then, is how much pollution control is economically practical. A cost-benefit analysis provides an estimate of the most economically efficient level of pollution reduction that is practical. A cost-benefit analysis looks at the social benefits (e.g., health and environmental benefits) that can be derived from pollution reduction versus the cost of achieving that reduction. As the pollution reduction increases, so does the money required to reduce pollution further. It may not be very expensive to clean up the bulk of most pollutants. However, as the reduction in pollutants approaches 100 percent (i.e., zero emissions), the marginal cost of each additional unit of pollution reduction rises dramatically. If public funds are used for pollution control, there is a limit to how much money can be spent before the budgets of other important public services (e.g., police, fire and parks departments) are negatively impacted. A balance must therefore be found between the social benefits of pollution reduction and the cost of pollution reduction. The proper balance between costs and benefits represents the optimum economic level of pollution reduction. The optimum level is not static, but can change as circumstances change. As technology improves over time, the cost of pollution reduction may decrease. Likewise, as the hazards of pollution become better known, the perceived benefits to be derived from pollution reduction may also increase. In either case, the optimum level of pollution reduction will then increase and a greater level of pollution reduction will be considered economically feasible. The eco-efficiency program at the 3M Corporation is an example of how the optimum level of pollution reduction can be raised through better management and design of manufacturing processes. 
Over the time period 1990 to 2000, the company reduced its air pollution by 88 percent, water pollution by 82 percent and waste generation by 35 percent. One problem with using cost-benefit analyses for determining the optimum level of pollution reduction is that it assumes all benefits can be labeled with a price tag. However, aesthetic benefits from pollution reduction cannot be priced, and yet they are just as important as others. The beauty of a clear-running stream and the quiet solitude of a wilderness area cannot be measured in dollars and cents.
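Leaving aside benefits that cannot be priced, the balancing of marginal costs and marginal benefits described above can be made concrete with a toy calculation. Both curves in the sketch below are purely hypothetical; the point is only that the efficient level of cleanup falls where the cost of removing one more unit of pollution starts to exceed the benefit of removing it, well short of 100 percent.

```python
def marginal_cost(reduction):
    """Hypothetical marginal cost of removing one more unit of pollution,
    in dollars per ton; it rises steeply as cleanup approaches 100 percent."""
    return 5.0 / (1.0 - reduction)

def marginal_benefit(reduction):
    """Hypothetical marginal social benefit of removing one more unit of
    pollution, in dollars per ton; each extra ton removed is worth slightly
    less than the one before."""
    return 60.0 * (1.0 - reduction) + 10.0

# Step through reduction levels and keep the highest level at which one
# more unit of cleanup is still worth at least what it costs.
optimum = 0.0
for step in range(99):
    level = step / 100.0
    if marginal_benefit(level) >= marginal_cost(level):
        optimum = level

print(f"Economically efficient level of pollution reduction: about {optimum:.0%}")
```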
textbooks/bio/Ecology/AP_Environmental_Science/1.21%3A_Economic_Forces.txt
INTRODUCTION The world’s industrialized countries are undergoing many changes as they move to the later stages of the Industrial Revolution. Economies are becoming more information based, and capital is being measured not only in terms of tangible products and human workers, but also in terms of social and intellectual assets. For example, the makeup of the Gross Domestic Product (GDP) for the United States has gradually changed from being mainly manufactured goods to one with services predominating. Computer software and many other services, which are not easily categorized under the old economic system, now represent the largest sector of the United States' economy. This change in economic thinking has brought about a deeper awareness of the natural processes and ecological assets found in nature. Society is slowly shifting to an industrial model that includes recycling. Such closed-loop production encompasses the principles of waste-reduction, re-manufacturing and re-use. Conventional industrial economics considered air, water and the earth's natural cycles to be "free" goods. However, such thought led to considerable external environmental and social costs. With the rise of environmentally responsible economics, there is a movement to change to full-cost pricing of goods, which includes the social and environmental costs of production. Attempts have been made to overhaul economic indicators such as the GDP to take into account intangible assets and intellectual property. In 1994, the Clinton Administration attempted to integrate environmental factors into the GDP. The World Bank in 1995 redefined its Wealth Index. A nation's wealth now consists of 60 percent human capital (social and intellectual assets), 20 percent environmental capital (natural assets), and 20 percent built capital (tangible assets). These green GDP figures are intended to provide a better measure of the quality of life in a country than the traditional GDP, which looked only at tangible economic factors. However, such methods fail to take into account other areas that affect the quality of life in a country, such as human rights, health and education. In attempts to develop a better measure of the quality of life of a region, separate sets of economic, environmental and social indicators have been devised. The reasoning of this is that it is better to consider several separate indicators, rather than try to create a single, catch-all index. This approach does not require the difficult, if not impossible, attempt to place monetary values on all factors. The Calvert-Henderson Group chose twelve separate quality of life indicators: education, employment, energy, environment, health, human rights, income, infrastructure, national security, public safety, recreation and shelter. Although separate, each indicator is related to the others, and all are based on readily available demographic data. CATEGORIZING COUNTRIES Countries are categorized by a variety of methods. During the Cold War period, the United States government categorized countries according to each government’s ideology and capitalistic development. In this system, the "First World" included the capitalist countries; the "Second World" included the communist countries and the poorer countries were labeled as "Third World." With the end of the Cold War, this system has been discarded. Current classification models utilize economic (and sometimes other) factors in their determination. 
One two-tiered classification system developed by the World Bank classifies countries as developing and developed. According to the World Bank classification, developing countries are those with low or middle levels of GNP per capita. More than 80 percent of the world's population lives in the more than 100 developing countries. A few countries, such as Israel, Kuwait and Singapore, are also classified as developing countries, despite their high per capita income. This is either because of the structure of their economies, or because their governments officially classify themselves as such. Developed countries are those that have a large stock of physical capital and in which most people have a high standard of living. Some economists consider middle-income countries as developed countries when they have transitional economies that are highly industrialized. A three-tiered classification system was developed to categorize countries more precisely, especially those that are not easily classified as either developing or developed. These three categories are: less developed country (LDC), moderately developed country (MDC) and highly developed country (HDC). Criteria used to determine a country's category include: GNP per capita, transportation and communication facilities, energy consumption, literacy and unemployment. A country categorized as an LDC has a marginal physical environment. Most African countries and many Asian countries are categorized as LDCs. An LDC has the following characteristics: low energy production and consumption, mostly subsistence farming, a large percentage of the population under 15, a high infant mortality rate, poorly developed trade and transportation, inadequate medical facilities, a low literacy rate, a high unemployment rate and a very low per capita GNP. Countries such as the United States, Japan, and most of the Western European countries are categorized as HDCs. HDCs are characterized by: extensive trade, advanced internal communication systems, highly developed transportation networks, high energy production and consumption, advanced medical facilities, low population growth, political stability and a high per capita GNP. The MDCs have characteristics that fit into both the LDC and HDC categories, but have a moderate per capita GNP. Saudi Arabia, Brazil and Mexico are considered MDCs. To a large extent, the progress of less developed countries is shaped, and sometimes actively undermined, by the developed countries. Because developed countries are the more technologically advanced, they are able to maintain their advantage relative to less developed countries. One way they accomplish this is through "brain drain." With brain drain, the best-educated people in less developed countries move to developed countries where they have better opportunities to improve their standard of living. Another way is for developed countries to exploit the natural and human resources of less developed countries. Developing countries often desperately need the capital that developed countries can provide. Because environmental issues often take a backseat to economic issues, environmental disaster can follow. An example of exploitation by a foreign corporation occurred in Bhopal, India. Because of the availability of cheap labor and lax environmental laws, it was economically advantageous to locate a Union Carbide chemical plant there.
One day in 1984, a cloud of poisonous methyl isocyanate was accidentally released from the plant, killing most of the unprotected people in the adjacent areas. Houses near the plant were mostly of poor families and streets near the plant were populated with many homeless men, women and children. Several thousand people were killed in this disaster. Even after the settlement of lawsuits stemming from the accident, the injured and relatives of the dead received little compensation. Many of the homeless were completely ignored. In its rush toward development, Bangladesh has established a program of intense use of land, forest, fisheries and water resources. This has led to severe environmental degradation: loss of soil fertility, excessive extraction of groundwater for irrigation, and increased air and water pollution. The lowering of water tables throughout the land, in particular, has led to pollution of ground water by arsenic. As many as 40 million people in Bangladesh may be exposed to toxic levels of arsenic present in many of the nation’s six million private and public wells. The country does not have the economic resources for adequate testing of wells to determine which are poisoned and which are safe. Because of this, millions may die of cancer or “arsenicosis.” Some idealistic people believe that a definition of a developed country must include factors such as conservation and quality of life and that a truly developed country would not exploit a large fraction of the world's resources. Accordingly, characteristics of such a developed country might include: economic prosperity of all people, regardless of gender or age, sustainable use of resources and more controlled use of technology to ensure a high quality of life for all people. An economically and technologically developed country such as the United States would not qualify as being a truly developed country by these criteria. ENVIRONMENTAL JUSTICE Whenever a community is faced with the potential of an environmentally undesirable facility, such as the placement of a hazardous waste dump in its midst, the usual response from residents is: "Not in my back yard!" Such a response is known as the NIMBY principle. Such reactions are usually reactions to visions of previous environmental irresponsibility: uncontrolled dumping of noxious industrial wastes and rusty steel drums oozing hazardous chemicals into the environment. Such occurrences were all too real in the past and some are still taking place. It is now possible -- and much more common -- to build environmentally sound, state-of-the-art disposal facilities. However, the NIMBY principle usually prevents the construction of such new facilities. Instead, hazardous waste facilities tend to be built upon pre-existing, already contaminated sites, even though the geology of such locations may be less favorable for containment than potential new sites. During the 1980’s minority groups protested that hazardous waste sites were preferentially sited in minority neighborhoods. In 1987, Benjamin Chavis of the United Church of Christ Commission for Racism and Justice coined the term environmental racism to describe such a practice. The charges generally failed to consider whether the facility or the demography of the area came first. Most hazardous waste sites are located on property that was used as disposal sites long before modern facilities and disposal methods were available. 
Areas around such sites are typically depressed economically, often as a result of past disposal activities. Persons with low incomes are often constrained to live in such undesirable, but affordable, areas. The problem more likely resulted from insensitivity rather than racism. Indeed, the ethnic makeup of the communities surrounding potential disposal sites was most likely not considered when the sites were chosen. Decisions in siting hazardous waste facilities are generally made on the basis of economics, geological suitability and the political climate. For example, a site must have a soil type and geological profile that prevents hazardous materials from moving into local aquifers. The cost of land is also an important consideration. The high cost of buying land would make it economically unfeasible to build a hazardous waste site in Beverly Hills. Some communities have seen a hazardous waste facility as a way of improving their local economy and quality of life. Emelle, Alabama, had illiteracy and infant mortality rates that were among the highest in the nation. A landfill constructed there provided jobs and revenue that ultimately helped to reduce both figures. In an ideal world, there would be no hazardous waste facilities, but we do not live in an ideal world. Unfortunately, we live in a world plagued by years of rampant pollution and hazardous waste dumping. Our industrialized society has necessarily produced wastes during the manufacture of products for our basic needs. Until technology can find a way to manage (or eliminate) hazardous waste, disposal facilities will be necessary to protect both humans and the environment. At the same time, the question of where those facilities are placed must be addressed. Industry and society must become more socially sensitive in the selection of future hazardous waste sites. All humans who help produce hazardous wastes must share the burden of dealing with those wastes, not just the poor and minorities. INDIGENOUS PEOPLE Since the end of the 15th century, most of the world's frontiers have been claimed and colonized by established nations. Invariably, these conquered frontiers were home to peoples indigenous to those regions. Some were wiped out or assimilated by the invaders, while others survived and tried to maintain their unique cultures and ways of life. The United Nations officially classifies indigenous people as those "having an historical continuity with pre-invasion and pre-colonial societies," and who "consider themselves distinct from other sectors of the societies now prevailing in those territories or parts of them." Furthermore, indigenous people are "determined to preserve, develop and transmit to future generations, their ancestral territories, and their ethnic identity, as the basis of their continued existence as peoples in accordance with their own cultural patterns, social institutions and legal systems." A few of the many groups of indigenous people around the world are: the many tribes of Native Americans (e.g., Navajo, Sioux) in the contiguous 48 states; the Eskimos of the arctic region from Siberia to Canada; the rainforest tribes in Brazil and the Ainu of northern Japan. Many problems face indigenous people, including: lack of human rights, exploitation of their traditional lands and themselves, and degradation of their culture. In response to the problems faced by these people, the United Nations proclaimed an "International Decade of the World's Indigenous People" beginning in 1994.
The main objective of this proclamation, according to the United Nations, is "the strengthening of international cooperation for the solution of problems faced by indigenous people in such areas as human rights, the environment, development, health, culture and education." Its major goal is to protect the rights of indigenous people. Such protection would enable them to retain their cultural identity, such as their language and social customs, while participating in the political, economic and social activities of the region in which they reside. Despite the lofty U.N. goals, the rights and feelings of indigenous people are often ignored or minimized, even by supposedly culturally sensitive developed countries. In the United States many of those in the federal government are pushing to exploit oil resources in the Arctic National Wildlife Refuge on the northern coast of Alaska. The “Gwich'in,” an indigenous people who rely culturally and spiritually on the herds of caribou that live in the region, claim that drilling in the region would devastate their way of life. Thousands of years of culture would be destroyed for a few months’ supply of oil. Drilling efforts have been stymied in the past, but mostly out of concern for environmental factors and not necessarily the needs of the indigenous people. Curiously, another group of indigenous people, the “Inupiat Eskimo,” favor oil drilling in the Arctic National Wildlife Refuge. Because they own considerable amounts of land adjacent to the refuge, they would potentially reap economic benefits from the development of the region. In the Canadian region encompassing Labrador and northeastern Quebec, the Innu Nation has battled the Canadian Department of National Defense (DND) to prevent supersonic test flights over their hunting territory. The Innu Nation asserts that such flights are potentially harmful to Innu hunters and wildlife in the path of such flights. The nature of Innu hunting includes travelling over long distances and staying out on the land for long periods of time. The Innu Nation claims that low-level supersonic fly-overs generate shock waves, which can irreversibly damage the ears and lungs of anyone in the direct flight path. They also claim that the DND has made no serious efforts to warn the Innu people of the possible dangers. In the rainforest regions of Brazil, indigenous peoples of several tribes are working together to strengthen their common concern over the impact of large development projects on their traditional lands. Such projects range from the construction of dams and hydroelectric power plants to the alteration of the natural courses of rivers to provide commercial waterways. The government of Brazil touts development of the Tocantins-Araguaia waterway as a means to facilitate river navigation in the eastern Amazon. It will promote agricultural development in Brazil's heartland and in the eastern Amazon by providing access to markets of grains, fuel and fertilizers. However, the waterway will negatively impact fifteen indigenous peoples who object that the changes in the natural rivers will cause the death of the fish and animals upon which they depend for survival. The heart of most environmental conflicts faced by governments usually involves what constitutes proper and sustainable levels of development. For many indigenous peoples, sustainable development constitutes an integrated wholeness, where no single action is separate from others. 
They believe that sustainable development requires the maintenance and continuity of life, from generation to generation and that humans are not isolated entities, but are part of larger communities, which include the seas, rivers, mountains, trees, fish, animals and ancestral spirits. These, along with the sun, moon and cosmos, constitute a whole. From the point of view of indigenous people, sustainable development is a process that must integrate spiritual, cultural, economic, social, political, territorial and philosophical ideals.
textbooks/bio/Ecology/AP_Environmental_Science/1.22%3A_Cultural_and_Aesthetic_Considerations.txt
INTRODUCTION The concept of ethics involves standards of conduct. These standards help to distinguish between behavior that is considered right and that which is considered wrong. As we all know, it is not always easy to distinguish between right and wrong, as there is no universal code of ethics. For example, a poor farmer clears an area of rainforest in order to grow crops. Some would not oppose this action, because the act allows the farmer to provide a livelihood for his family. Others would oppose the action, claiming that the deforestation will contribute to soil erosion and global warming. Right and wrong are usually determined by an individual's morals, and to change the ethics of an entire society, it is necessary to change the individual ethics of a majority of the people in that society. The ways in which humans interact with the land and its natural resources are determined by ethical attitudes and behaviors. Early European settlers in North America rapidly consumed the natural resources of the land. After they depleted one area, they moved westward to new frontiers. Their attitude towards the land was that of a frontier ethic. A frontier ethic assumes that the earth has an unlimited supply of resources. If resources run out in one area, more can be found elsewhere or alternatively human ingenuity will find substitutes. This attitude sees humans as masters who manage the planet. The frontier ethic is completely anthropocentric (human-centered), for only the needs of humans are considered. Most industrialized societies experience population and economic growth that are based upon this frontier ethic, assuming that infinite resources exist to support continued growth indefinitely. In fact, economic growth is considered a measure of how well a society is doing. The late economist Julian Simon pointed out that life on earth has never been better, and that population growth means more creative minds to solve future problems and give us an even better standard of living. However, now that the human population has passed six billion and few frontiers are left, many are beginning to question the frontier ethic. Such people are moving toward an environmental ethic, which includes humans as part of the natural community rather than managers of it. Such an ethic places limits on human activities (e.g., uncontrolled resource use), that may adversely affect the natural community. Some of those still subscribing to the frontier ethic suggest that outer space may be the new frontier. If we run out of resources (or space) on earth, they argue, we can simply populate other planets. This seems an unlikely solution, as even the most aggressive colonization plan would be incapable of transferring people to extraterrestrial colonies at a significant rate. Natural population growth on earth would outpace the colonization effort. A more likely scenario would be that space could provide the resources (e.g. from asteroid mining) that might help to sustain human existence on earth. SUSTAINABLE ETHIC A sustainable ethic is an environmental ethic by which people treat the earth as if its resources are limited. This ethic assumes that the earth’s resources are not unlimited and that humans must use and conserve resources in a manner that allows their continued use in the future. A sustainable ethic also assumes that humans are a part of the natural environment and that we suffer when the health of a natural ecosystem is impaired. 
A sustainable ethic includes the following tenets: • The earth has a limited supply of resources. • Humans must conserve resources. • Humans share the earth’s resources with other living things. • Growth is not sustainable. • Humans are a part of nature. • Humans are affected by natural laws. • Humans succeed best when they maintain the integrity of natural processes and cooperate with nature. For example, if a fuel shortage occurs, how can the problem be solved in a way that is consistent with a sustainable ethic? The solutions might include finding new ways to conserve oil or developing renewable energy alternatives. Under a sustainable ethic, the attitude in the face of such a problem would be that if drilling for oil damages the ecosystem, then that damage will affect the human population as well. A sustainable ethic can be either anthropocentric or biocentric (life-centered). An advocate for conserving oil resources may consider all oil resources as the property of humans. Using oil resources wisely so that future generations have access to them is an attitude consistent with an anthropocentric ethic. Using resources wisely to prevent ecological damage is in accord with a biocentric ethic. LAND ETHIC Aldo Leopold, an American wildlife natural historian and philosopher, advocated a biocentric ethic in his book, A Sand County Almanac. He suggested that humans had always considered land as property, just as ancient Greeks considered slaves as property. He believed that mistreatment of land (or of slaves) makes little economic or moral sense, much as today the concept of slavery is considered immoral. All humans are merely one component of an ethical framework. Leopold suggested that land be included in an ethical framework, calling this the land ethic. “The land ethic simply enlarges the boundary of the community to include soils, waters, plants and animals; or collectively, the land. In short, a land ethic changes the role of Homo sapiens from conqueror of the land-community to plain member and citizen of it. It implies respect for his fellow members, and also respect for the community as such.” (Aldo Leopold, 1949) Leopold divided conservationists into two groups: one group that regards the soil as a commodity and the other that regards the land as biota, with a broad interpretation of its function. If we apply this idea to the field of forestry, the first group of conservationists would grow trees like cabbages, while the second group would strive to maintain a natural ecosystem. Leopold maintained that the conservation movement must be based upon more than just economic necessity. Species with no discernible economic value to humans may be an integral part of a functioning ecosystem. The land ethic respects all parts of the natural world regardless of their utility, and decisions based upon that ethic result in more stable biological communities. “Anything is right when it tends to preserve the integrity, stability and beauty of the biotic community. It is wrong when it tends to do otherwise.” (Aldo Leopold, 1949) Leopold had two interpretations of an ethic: ecologically, it limits freedom of action in the struggle for existence; while philosophically, it differentiates social from anti-social conduct. An ethic results in cooperation, and Leopold maintained that cooperation should include the land.
HETCH HETCHY VALLEY In 1913, the Hetch Hetchy Valley -- located in Yosemite National Park in California -- was the site of a conflict between two factions, one with an anthropocentric ethic and the other, a biocentric ethic. As the last American frontiers were settled, the rate of forest destruction started to concern the public. The conservation movement gained momentum, but quickly broke into two factions. One faction, led by Gifford Pinchot, Chief Forester under Teddy Roosevelt, advocated utilitarian conservation (i.e., conservation of resources for the good of the public). The other faction, led by John Muir, advocated preservation of forests and other wilderness for their inherent value. Both groups rejected the first tenet of frontier ethics, the assumption that resources are limitless. However, the conservationists agreed with the rest of the tenets of frontier ethics, while the preservationists agreed with the tenets of the sustainable ethic. The Hetch Hetchy Valley was part of a protected National Park, but after the devastating fires of the 1906 San Francisco earthquake, residents of San Francisco wanted to dam the valley to provide their city with a stable supply of water. Gifford Pinchot favored the dam. “As to my attitude regarding the proposed use of Hetch Hetchy by the city of San Francisco…I am fully persuaded that… the injury…by substituting a lake for the present swampy floor of the valley…is altogether unimportant compared with the benefits to be derived from it's use as a reservoir. “The fundamental principle of the whole conservation policy is that of use, to take every part of the land and its resources and put it to that use in which it will serve the most people.” (Gifford Pinchot, 1913) John Muir, the founder of the Sierra Club and a great lover of wilderness, led the fight against the dam. He saw wilderness as having an intrinsic value, separate from its utilitarian value to people. He advocated preservation of wild places for their inherent beauty and for the sake of the creatures that live there. The issue aroused the American public, who were becoming increasingly alarmed at the growth of cities and the destruction of the landscape for the sake of commercial enterprises. Key senators received thousands of letters of protest. “These temple destroyers, devotees of ravaging commercialism, seem to have a perfect contempt for Nature, and instead of lifting their eyes to the God of the Mountains, lift them to the Almighty Dollar.” (John Muir, 1912) Despite public protest, Congress voted to dam the valley. The preservationists lost the fight for the Hetch Hetchy Valley, but their questioning of traditional American values had some lasting effects. In 1916, Congress passed the “National Park System Organic Act,” which declared that parks were to be maintained in a manner that left them unimpaired for future generations. As we use our public lands, we continue to debate whether we should be guided by preservationism or conservationism. THE TRAGEDY OF THE COMMONS In his essay, The Tragedy of the Commons, Garrett Hardin (1968) looked at what happens when humans do not limit their actions by including the land as part of their ethic. The tragedy of the commons develops in the following way: Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. 
Such an arrangement may work satisfactorily for centuries, because tribal wars, poaching and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning (i.e., the day when the long-desired goal of social stability becomes a reality). At this point, the inherent logic of the commons remorselessly generates tragedy. As a rational being, each herdsman seeks to maximize his gain. Explicitly or implicitly, more or less consciously, he asks: "What is the utility to me of adding one more animal to my herd?" This utility has both negative and positive components. The positive component is a function of the increment of one animal. Since the herdsman receives all the proceeds from the sale of the additional animal, the positive utility is nearly +1. The negative component is a function of the additional overgrazing created by one more animal. However, as the effects of overgrazing are shared by all of the herdsmen, the negative utility for any particular decision-making herdsman is only a fraction of -1. The sum of the utilities leads the rational herdsman to conclude that the only sensible course for him to pursue is to add another animal to his herd, and then another, and so forth. However, this same conclusion is reached by each and every rational herdsman sharing the commons. Therein lies the tragedy: each man is locked into a system that compels him to increase his herd, without limit, in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in the commons brings ruin to all. Hardin went on to apply the situation to modern commons. The public must deal with the overgrazing of public lands, the overuse of public forests and parks and the depletion of fish populations in the ocean. Individuals and companies are restricted from using a river as a common dumping ground for sewage and from fouling the air with pollution. Hardin also strongly recommended restraining population growth. The "Tragedy of the Commons" is applicable to the environmental problem of global warming. The atmosphere is certainly a commons into which many countries are dumping excess carbon dioxide from the burning of fossil fuels. Although we know that the generation of greenhouse gases will have damaging effects upon the entire globe, we continue to burn fossil fuels. As a country, the immediate benefit from the continued use of fossil fuels is seen as a positive component. All countries, however, will share the negative long-term effects.
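Hardin's arithmetic can be made explicit with a small numerical sketch. The figures below are hypothetical: each added animal is worth one unit to its owner but imposes two units of grazing damage that are split evenly among ten herdsmen. The example shows why adding an animal always looks profitable to the individual, why everyone is worse off when all ten follow that logic, and why unilateral restraint does not protect the one herdsman who shows it; the same structure applies to countries sharing the atmosphere as a dumping ground for carbon dioxide.

```python
def herdsman_payoff(own_added, total_added, base_herd=10,
                    value_per_animal=1.0, damage_per_animal=2.0, n_herdsmen=10):
    """Payoff to one herdsman on a shared pasture (hypothetical units).
    He pockets the full value of each animal he adds, but the grazing damage
    caused by ALL added animals is split evenly among every herdsman."""
    private_gain = own_added * value_per_animal
    shared_loss = total_added * damage_per_animal / n_herdsmen
    return base_herd + private_gain - shared_loss

# Acting alone, adding one animal pays: +1 gained, only 0.2 of the damage borne.
print(herdsman_payoff(own_added=1, total_added=1))     # 10.8
# When all 10 herdsmen follow the same logic and add 5 animals each,
# every one of them ends up poorer than if nobody had added any:
print(herdsman_payoff(own_added=5, total_added=50))    # 5.0
# And unilateral restraint does not help the individual who exercises it:
print(herdsman_payoff(own_added=0, total_added=45))    # 1.0
```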
textbooks/bio/Ecology/AP_Environmental_Science/1.23%3A_Environmental_Ethics.txt
INTRODUCTION Although environmental laws are generally considered a 20th century phenomenon, attempts have been made to legislate environmental controls throughout history. In 2,700 B.C., the middle-eastern civilization in Ur passed laws protecting the few remaining forests in the region. In 80 A.D., the Roman Senate passed a law to protect water stored for dry periods so it could be used for street and sewer cleaning. During American colonial times, Benjamin Franklin argued for "public rights" laws to protect the citizens of Philadelphia against industrial pollution produced by animal hide tanners. Significant environmental action began at the beginning of the 20th century. In 1906, Congress passed the “Antiquities Act,” which authorizes the president to protect areas of federal lands as national monuments. A few years later, Alice Hamilton pushed for government regulations concerning toxic industrial chemicals. She fought, unsuccessfully, to ban the use of lead in gasoline. She also supported the legal actions taken by women who were dying of cancer from their exposure to the radium then used in glow-in-the-dark watch dials. During the early 1960’s, biologist Rachel Carson pointed out the need to regulate pesticides such as DDT to protect the health of wildlife and humans. With the establishment of the Environmental Protection Agency (EPA) in 1970, environmental law became a field substantial enough to occupy lawyers on a full-time basis. Since then, federal and state governments have passed numerous laws and created a vast network of complicated rules and regulations regarding environmental issues. Moreover, international organizations and agencies including the United Nations, the World Bank, and the World Trade Organization have also contributed environmental rules and regulations. Because of the legal and technical complexities of the subjects covered by environmental laws, persons dealing with such laws must be knowledgeable in the areas of law, science and public policy. Environmental laws today encompass a wide range of subjects such as air and water quality, hazardous wastes and biodiversity. The purpose of these environmental laws is to prevent, minimize, remedy and punish actions that threaten or damage the environment and those that live in it. However, some people believe that these laws unreasonably limit the freedom of people, organizations, corporations and government agencies by placing controls on their actions. FEDERAL LAWS Early attempts by Congress to enact laws affecting the environment included the Antiquities Act in 1906, the National Park Service Act in 1916, the Federal Insecticide, Fungicide and Rodenticide Act in 1947 and the Water Pollution Control Act in 1956. The Wilderness Act of 1964, protected large areas of pristine federal lands from development and ushered in the new age of environmental activism that began in the 1960’s. However, it was the National Environmental Policy Act (NEPA) enacted in 1969 and the formation of the Environmental Protection Agency (EPA) in 1970 that started environmental legislation in earnest. The main objective of these two federal enactments was to assure that the environment would be protected from both public and private actions that failed to take into account the costs of damage inflicted on the environment. Many consider NEPA to be the most far-reaching environmental legislation ever passed by Congress. 
The basic purpose of NEPA is to force governmental agencies to comprehensively consider the effects of their decisions on the environment. This is accomplished by requiring agencies to prepare detailed Environmental Impact Statements (EIS) for proposed projects. The EPA is the government's environmental watchdog. It is charged with monitoring and analyzing the state of the environment, conducting research, and working closely with state and local governments to devise pollution control policies. The EPA is also empowered to enforce those environmental policies. Unfortunately, the agency is sometimes caught up in conflicts between the public wanting more regulation for environmental reasons and businesses wanting less regulation for economic reasons. Consequently, the development of a new regulation can take many years. Since 1970, Congress has enacted several important environmental laws, all of which include provisions to protect the environment and natural resources. Some of the more notable laws include: • The Federal Clean Air Act (1970, 1977 & 1990) established national standards for regulating the emission of pollutants from stationary and mobile sources. • The Federal Water Pollution Control Act (1972), amended by the Clean Water Act (1977, 1987), established water quality standards and provided for the regulation of the discharge of pollutants into navigable waters and for the protection of wetlands. • The Federal Safe Drinking Water Act (1974, 1977 & 1986) set drinking water standards for levels of pollutants and authorized the regulation of the discharge of pollutants into underground drinking water sources. • The Toxic Substances Control Act (1976) provided for the regulation of chemical substances by the EPA and the safety testing of new chemicals. • The Resource Conservation and Recovery Act (1976) established cradle-to-grave regulations for the handling of hazardous wastes. • The Comprehensive Environmental Response, Compensation and Liability Act (1980), also known as the Superfund program, provided for the cleanup of the worst toxic waste sites. • The Food Security Act (1985, 1990), later amended by the Federal Agriculture Improvement and Reform Act (1996), discouraged cultivation of environmentally sensitive lands, especially wetlands, and authorized incentives for farmers to withdraw highly erodible lands from production. The application, or enforcement, of an environmental law is not always straightforward, and problems can arise. Often, the biggest problem is that Congress fails to allocate the funds necessary for implementing or enforcing the laws. Administrative red tape may make it impossible to enforce a regulation in a timely manner. It may also be unclear which agency (or branch of an agency) is responsible for enforcing a particular regulation. Furthermore, agency personnel may decline to enforce a regulation for political reasons. STATE LAWS Most states, like California, have enacted their own environmental laws and established agencies to enforce them. California faced some of its first environmental challenges in the mid-1800’s, with regard to debris from the hydraulic mining of gold. Water quality concerns, dangers of flooding, negative impact on agriculture and hazards to navigation prompted the state to act. Some of California's environmental regulations preceded similar federal laws. For example, California established the nation’s first air quality program in the 1950s.
Much of the federal Clean Air Act amendments of 1990 were based upon the California Clean Air Act of 1988. California also pioneered advances in vehicle emission controls, control of toxic air pollutants and control of stationary pollution sources before federal efforts in those areas. The Porter-Cologne Act of 1970, upon which the state’s water quality program is based, also served as the model for the federal Clean Water Act. California's state environmental regulations are sometimes more stringent than the federal laws (e.g., the California Clean Air Act and vehicle emissions standards). In other program areas, no comparable federal legislation exists. For example, the California Integrated Waste Management Act established a comprehensive, statewide system of permitting, inspections, enforcement and maintenance for solid waste facilities and sets minimum standards for solid waste handling and disposal to protect air, water and land from pollution. Also, Proposition 65 (Safe Drinking Water and Toxic Enforcement Act) requires the Governor to publish a list of chemicals that are known to the State of California to cause cancer, birth defects or other reproductive harm. Despite the state’s leadership in environmental programs and laws, the creation of a cabinet-level environmental agency in California lagged more than two decades behind the establishment of the federal EPA. Originally, organization of California's environmental quality programs was highly fragmented. Each separate program handled a specific environmental problem (e.g., the Air Resources Board), with enforcement responsibility falling to both state and local governments. It was not until 1991 that a California EPA was finally established and united the separate programs under one agency. INTERNATIONAL TREATIES AND CONVENTIONS Conventions, or treaties, generally set forth international environmental regulations. These conventions and treaties often result from efforts by international organizations such as the United Nations (UN) or the World Bank. However, it is often difficult, if not impossible, to enforce these regulations because of the sovereign rights of countries. In addition rules and regulations set forth in such agreements may be no more than non-binding recommendations, and often countries are exempted from regulations due to economic or cultural reasons. Despite these shortcomings, the international community has achieved some success via its environmental agreements. These include an international convention that placed a moratorium on whaling (1986) and a treaty that banned the ocean dumping of wastes (1991). The UN often facilitates international environmental efforts. In 1991, the UN enacted an Antarctica Treaty, which prohibits mining of the region, limits pollution of the environment and protects its animal species. The United Nations Environment Program (UNEP) is a branch of the UN that specifically deals with worldwide environmental problems. It has helped with several key efforts at global environmental regulations: • The 1987 Montreal Protocol on Substances that Deplete the Ozone Layer. As a result of this global agreement, industrialized countries have ceased or reduced the production and consumption of ozone-depleting substances such as chlorofluorocarbons. • The Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade. This agreement enhances the world's technical knowledge and expertise on hazardous chemicals management. 
• The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). This agreement protects over 30,000 of the world's endangered species. • In 1995 UNEP and the International Olympic Committee (IOC) signed a partnership agreement to develop environmental guidelines for sports federations and countries bidding to host the Olympic games. • The Rotterdam Convention (1998) addressed the growing trade in hazardous pesticides and chemicals. Importing countries must now give explicit informed consent before hazardous chemicals can cross their borders. • The International Declaration on Cleaner Production (1998). The signatories commit their countries to implement cleaner industrial production and subsequent monitoring efforts. In 1992, the UN member nations committed their resources to limiting greenhouse gas (e.g., carbon dioxide) emissions at or below 1990 levels, as put forth by the UN Framework Convention on Climate Change. Unfortunately, the agreement was non-binding and by the mid-1990’s, it had had no effect on carbon emissions. The 1997 Kyoto Protocol was a binding resolution to reduce greenhouse gases. Although the United States initially supported the resolution, the Senate failed to ratify the treaty, and by 2001 the resolution was opposed by President Bush as threatening the United States economy.
textbooks/bio/Ecology/AP_Environmental_Science/1.24%3A_Environmental_Laws_and_Regulations.txt
INTRODUCTION

Environmental issues are a concern of many, if not most, Americans. However, there is considerable disagreement on how such issues should be handled. Different people can interpret even a very general issue such as conservation very differently. Some believe that conservation means limiting the use of resources to allow a resource to last longer. Others see the conservation of resources as a way to maximize benefits to humans. This utilitarian approach to conservation policy would place no value on saving endangered species that provide no direct benefit to humans. At the other extreme, some envision conservation as meaning the protection of resources without regard to profit or material benefit to humans. This view holds preservation to be of the utmost importance and is sometimes viewed as elitist. Even the simplest strategies for dealing with environmental issues cannot be carried out without the expenditure of time, effort and money. It follows that environmental policy decisions that are adopted by a country are usually made within the context of the level of affluence and education of that country. This is especially true when it comes to conservation issues. A developed country like the United States can afford to set aside and manage wilderness areas or place restrictions on timber cutting, mining and oil drilling on public lands. However, a developing country must contend with insufficient funds to meet the basic needs of its people. This often leads to short-sighted decisions that allow exploitation of its forests and other natural resources. The need for hard cash overrides the need to conserve. The development and promotion of a platform on environmental issues requires careful planning and well-conceived education programs. Political backing is a necessity for implementing such a platform, as well as for garnering the legislative powers to enforce rules and guidelines. Politicians were for the most part uninterested in environmental issues until the 1970's. The main reason for this was that issues such as conservation were perceived as long-term issues, whereas political concerns are mostly short term, changing as administrations change. However, when a rise in international environmental activism forced them to consider these issues, politicians realized that they had to formulate medium- or long-term strategies.

RESOURCE USE

Before the arrival of European settlers, the indigenous people of the North American continent lived in relative harmony with their environment. Although they hunted animals and some raised crops using slash-and-burn techniques, they had little impact on the environment because their populations were relatively small. This situation changed after European colonists settled what is now the eastern coast of the United States. As their numbers grew, they moved westward. The settlers clear-cut forests as they moved through the frontier regions, leaving denuded landscapes. Farmers grew crops until the soil became infertile and then moved on to other locations. People used water resources freely without giving much thought to conservation. The common approach was that of exploitation of the seemingly endless natural resources the country offered. However, this tradition of exploitation began to change as the United States became industrialized and urbanized. As early as the late 18th century, people such as George Washington and Thomas Jefferson began experimenting with crop rotation and soil conservation techniques.
During the 19th century, growing cities developed waterworks to supply clean water. Some people began to realize the importance of conserving natural resources such as water. By 1900, various American scientists, politicians and business leaders voiced concerns about the depletion of the forests, soil and other natural resources. The term conservation was first applied to water resources. Much of the western United States was arid, and government scientists developed the idea of building dams to impound water from spring floods. They reasoned that the water could then be used year round for irrigation and other purposes. Use of the term quickly spread to include all natural resources. Conservation emerged from the 19th century as a form of applied science. It involved the scientific planning of the use of natural resources. Conservation leaders came from fields like forestry, agronomy, geology, and hydrology. An early proponent was Gifford Pinchot, the first head of the United States Forest Service. The conservation principles of that time contrasted with those espoused by proponents of preservation. Preservationists wanted natural areas preserved and protected from any type of human development. The leading preservationist of the time was John Muir. Because of their different views, the preservation movement and the conservation movement were sometimes at odds with each other. The most publicized controversy of the early 20th century concerned the plan to build a dam to flood the beautiful Hetch Hetchy Valley to supply the city of San Francisco with fresh water. The dam, supported by conservationists and opposed by preservationists, was eventually built. President Theodore Roosevelt supported both conservation and preservation. He vigorously expanded the nation's infant system of national parks and monuments in order to protect pristine natural areas from exploitation. The main issues of resource conservation today differ from those at the turn of the 20th century. During the 1960's, the general public became concerned with the problems of pollution. The effects of pesticides such as DDT on wildlife were documented in a book (Silent Spring) by Rachel Carson. There were highly publicized environmental incidents in Lake Erie (severe water pollution), New York City (air pollution) and Santa Barbara (oil spill). Events such as these fueled the start of a new environmental movement. This movement generally supports the concept that resource conservation includes maintaining the quality of those resources. This movement continues today and supports such issues as government clean-up of old areas of pollution, reduction of current emission levels of pollution and protection of remaining pristine environments.

RESTORATION ECOLOGY

Humans have deforested the land, stripped its surface to remove its mineral resources, exploited its grasslands and drained its wetlands, all to sustain the growing human population. Rivers have been straightened, diverted and dammed to provide humans with water, transportation, flood control, electric power and recreational facilities. However, when ecosystems are overexploited they degenerate. Healthy ecosystems are necessary in order to sustain the earth's soil, water and air resources. Some people feel that environmental degradation should be reversed through restoration ecology (i.e., the restoration of degraded environments to healthy ecosystems). However, the concepts involved are varied.
The modern concept of reclamation involves an attempt to return a damaged ecosystem to some kind of productive use that is socially acceptable. For example, a mined area might be converted to pastureland or an orchard. In this process, rehabilitation of the mined area also occurs, making the land more visually pleasing. Historically, the term "reclamation" was used to describe the alteration of a native ecosystem to one of value to humans, such as the filling of a wetland area in order to provide land for urban housing. Today, such an action might be considered environmental degradation. Because of the conflicting definitions, the use of the term reclamation can be confusing. Sometimes, actions can be taken to avoid, reduce or compensate for the effects of environmental damage. Such mitigation efforts have been taken by the Army Corps of Engineers during construction projects. The native plants are removed from a site before construction begins and transplanted at a special holding site. After the construction project is completed, the native plants are restored using those from the holding site. Another example of mitigation might involve the creation or enhancement of wetlands in one area, in order to compensate for permitted wetland losses in another area. Mitigation often goes hand-in-hand with restoration. Texaco, in conjunction with environmental groups and the United States Fish and Wildlife Service, restored 500 acres of agricultural lands in the lower Mississippi Delta to bottomland hardwoods. Texaco received environmental credits for the mitigating effects of the new woodlands on air quality. Restoration involves returning an altered or degraded site to its approximate condition before alteration. This includes restoring related physical, chemical and biological characteristics. Full restoration involves the complete return of a site to its original state. Restoring an ecosystem to its full productive health is not an easy task. It requires a broad interdisciplinary approach involving many different scientific fields of study (e.g., biology, ecology, hydrology and geology). Inherent in restoration projects are important though questionable and often unrealistic assumptions: historical environmental conditions can be recreated, existing ecosystems can be replaced, the physical environment can be altered in order to support the desired plants and animals, the desired plants and animals will become established, and the ecosystem will be able to sustain itself. Besides physical processes, socio-economic factors must also be considered in a restoration project. Actions of humans have historically been important in shaping ecosystems, and are important in determining the success of restoration efforts. Because the cost to restore an individual site can involve millions of dollars, government support is a necessity. Even with the best efforts, restoration projects can sometimes be hampered by unexpected events. An effort by one environmental group to restore a savannah ecosystem in Illinois was blocked by another environmental group that objected to the removal of the trees from the area.

ENVIRONMENTAL INVOLVEMENT

"Never doubt that a small group of thoughtful, committed citizens can change the world: indeed it's the only thing that ever has." - Margaret Mead

The environmental movement had its beginnings in the early 1960's, when biologist Rachel Carson published her book Silent Spring. The book highlighted the harmful effects of pesticides on wildlife.
Soon there was a growing grassroots campaign demanding that the government act to protect the environment. There was also an increase in the popularity of established conservation groups such as the Sierra Club and the Wilderness Society. The early years of the movement led to such milestones as the passage of the "Wilderness Act" in 1964 and the "Land and Water Conservation Act" in 1965, as well as the establishment of the Environmental Protection Agency in 1970. Environmental groups in the United States carry out a variety of activities: lobbying for new environmental laws, lobbying against harmful projects, acting as pollution watchdogs, actively protecting land and wildlife and educating the public on environmental issues. Some more radical groups such as "Earth First!" add civil disobedience and sabotage to their environmental activities. Greenpeace is one of the largest international environmental groups and is probably best known for its efforts to stop continued commercial whaling by Japan and Norway. An anti-environmentalist movement, the "Wise Use Movement," is a coalition of timber and mining companies and cattle ranchers. The members advocate logging, mining, grazing and developing all public lands, regardless of the environmental consequences. Throughout the 1990's, the group attempted to repeal or weaken many environmental laws and discredit environmental groups. Their efforts were largely thwarted; however, they were able to block some proposed environmental legislation. Although strength in numbers is always an effective strategy when taking on environmental issues, individuals can also make significant inroads in environmental activism. In 1978, a lone woman living in the Love Canal area of Niagara Falls, New York, awakened the nation to the dangers of hazardous waste dumps. Working first at the local level, then the state level and finally the national level, she lobbied governments to take action to protect people from the toxic chemicals contained in such dumps. Her efforts led to the creation of a national Superfund in 1980 to clean up and regulate hazardous waste sites. People who want to make their voices heard on environmental issues can do so in a number of ways. Locally, they can send letters to the editors of community newspapers to reach a wide audience. Public hearings and community meetings also provide opportunities to make a strong vocal statement. On a larger political scale, a typed or handwritten letter to a government official is particularly effective. Faxing the letter to the official is another option. Telephone calls to legislators show that the callers care enough to spend a little money, and also offer an unparalleled opportunity for immediate feedback. However, it is not always easy to actually get connected to the recipient. E-mails are less personal than regular letters, but they are very convenient and they have the potential to mobilize hundreds or thousands of messages, making them an indispensable tool for the environmental activist.

Sustainability

Sustainability refers to practices that allow current populations to meet their needs without impacting the ability of future generations to meet their own needs. The idea was developed to describe the long-term use of natural resources but has been expanded to include a diversity of situations, including community structures, economic policies, and social justice. Sustainability is a relatively new concept that is becoming a common ideal but is not yet widely practiced.
Non-renewable Resources

The use of non-renewable resources is, by definition, unsustainable. The use of fossil fuels is a prime example. Industrial societies rely on oil and natural gas to power manufacturing, propel vehicles, heat homes, and cook meals. In addition, many goods, like plastics, are partially made of petroleum products. Ongoing geologic processes are continuing to produce fossil fuels, as they have for millennia, but the rate at which we are using them far outstrips the rate at which natural cycles regenerate them. Some scientists project that oil and gas reserves will be largely drained in 50 – 200 years. Future generations will have to find other sources of energy.

Environmental Degradation

Some practices are not sustainable because they cause severe environmental damage. For example, some modern agricultural methods actually destroy the soil they rely on, so that farms flourish for a time but then must be abandoned. Desert lands can grow crops if they are intensively irrigated. But when irrigation water evaporates in hot climates, the soil becomes more and more salty, until plant growth is stunted. In the tropics, when rainforests are chopped down to make way for crops, soils lose the steady nutrient supply the forest provided and soon become infertile.

Renewable Resources

Renewable resources can be used far into the future. Wind power is a type of renewable energy. Windmills, which turn in the wind to spin turbines that generate electricity, don't use up or diminish the air. And the supply of wind is renewed every day, when uneven solar heating of the Earth causes hot air to rise and cold air to sink.

Best Management Practices

Best management practices are techniques and methods designed to minimize environmental impacts. In agriculture, these practices include growing native crops or those suited to local conditions, rotating crops, minimizing soil tilling, and reducing pesticide use. With proper care, soils can remain fertile and healthy for many years.

Environmental Remediation

For many thousands of years, ever since humans built the first campfire, human activity has generated air, water, and soil pollution. For most of human history, however, these contaminants had relatively little environmental impact. But over the last few centuries, pollution levels skyrocketed as a result of population growth and the Industrial Revolution. As a result, regulations have been enacted to control emissions. Even where these are effective in curbing current pollution sources, high levels of contamination may exist from past activity. And new contamination can occur through industrial accidents or other inadvertent releases of toxic substances. Danger to human health from both historic and modern contamination requires that cleanup measures be implemented. This is the purpose of environmental remediation.

Contamination Sources

Just under 300 million tons of hazardous wastes are produced each year in the United States. Although the safe disposal of wastes is mandated, accidental releases do occur, and sometimes regulations are ignored. Some of the most widespread or dangerous pollutants that require remediation come from mining, fuel spills and leaks, and radioactive materials. Heavy metals (copper, lead, mercury, and zinc) can leach into soil and water from mine tunnels, tailings, and spoil piles. Acid mine drainage is caused by reaction of mine wastes, such as sulfides, with rainfall or groundwater to produce acids, like sulfuric acid.
The Environmental Protection Agency estimates that 40% of the watersheds in the western United States are contaminated by mine run-off. Organic contamination can result from discharge of solvents to groundwater systems, natural gas or fuel spills, and above-ground and underground storage tank leakage. Radioactive contamination of soils, water, and air can result from mining activity, processing of radioactive ores, and improper disposal of laboratory waste and spent fuel rods used at nuclear power plants. The best-known example of radioactive contamination is the Chernobyl disaster. In 1986, workers at a Soviet nuclear power plant (in present-day Ukraine) ignored safety procedures during a reactor test, and the fuel rods superheated the cooling water to cause an explosion that killed 30 people and released a huge cloud of radioactive steam. Although more than 100,000 people were evacuated from around the plant, a dramatic increase in cancer rates among the population has occurred. As the steam cloud dispersed into the atmosphere, increases in radioactivity were measured over much of the northern hemisphere.

Remediation Efforts

Many communities are struggling to find the funds and technological expertise needed to clean up polluted areas. Some settings, such as brownfields, can be reclaimed fairly easily. Other areas, because of their size or the extreme toxicity of their contaminants, require very expensive, complex, and long-term remediation. Many of these have been designated as Superfund sites. Brownfields are abandoned industrial or commercial facilities or blighted urban areas that need to be cleansed of contamination before they can be redeveloped. Superfund sites are areas with the most toxic contamination in the United States. The contamination may not only make the site itself too dangerous to inhabit, but often leaks toxic levels of pollutants into the surrounding soil, water, or air. An example of a Superfund site is Love Canal in Niagara Falls, New York. The canal was a chemical waste dump for many years; then, in the 1950's, it was covered with soil and sold to the city. Over time, many homes and a school were built over the former dump. In the 1970's, heavy rains raised the water table and carried contaminants back to the surface. Residents noticed foul smells, and gardens and trees turned black and died. Soon after, rates of birth defects, cancer, and other illnesses began to rise sharply. In 1977, the State of New York and the federal government began remediation work. Buildings were removed, all residents were bought out and relocated, contaminated deposits and soils were excavated, and remaining soils and groundwater were treated and sealed off to prevent further spread of the contamination. Remediation activities have now been completed at this site.

Remediation Methods

The type of pollution and the medium affected (air, water, or soil) determine remediation methods. Methods include incineration, absorption onto carbon, ion exchange, chemical precipitation, isolation, or bioremediation. Bioremediation is the use of plants, bacteria, or fungi to "digest" the contaminant to a non-toxic or less toxic form. All of these methods tend to be expensive and time-consuming. Remediation is aimed at neutralization, containment, and/or removal of the contaminant. The goal is to prevent the spread of the pollution, or to reduce it to levels that will not appreciably risk human health. Many times, it is physically impossible or financially unfeasible to completely clear all contamination.
Often, experts and the public disagree on how clean is clean enough.
textbooks/bio/Ecology/AP_Environmental_Science/1.25%3A_Issues_and_Options.txt
• 1.1: Atmosphere and Climate Regulation About 3.5 billion years ago, early life forms (principally cyanobacteria) helped create an oxygenated atmosphere through photosynthesis, taking up carbon dioxide from the atmosphere and releasing oxygen. Over time, these organisms altered the composition of the atmosphere, increasing oxygen levels, and paved the way for organisms that use oxygen as an energy source (aerobic respiration), forming an atmosphere similar to that existing today. • 1.2: Land Use Change and Climate Regulation The energy source that ultimately drives the earth's climate is the sun. The amount of solar radiation absorbed by the earth depends primarily on the characteristics of the surface. Although the link between solar absorption, thermodynamics, and ultimately climate is very complex, newer studies indicate that vegetation cover and seasonal variation in vegetation cover affects climate on both global and local scales. • 1.3: Soil and Water Conservation Biodiversity is also important for global soil and water protection. Terrestrial vegetation in forests and other upland habitats maintain water quality and quantity, and controls soil erosion. • 1.4: Nutrient Cycling Nutrient cycling is yet another critical service provided by biodiversity -- particularly by microorganisms. Fungi and other microorganisms in soil help break down dead plants and animals, eventually converting this organic matter into nutrients that enrich the soil. • 1.5: Pollination and Seed Dispersal An estimated 90 percent of flowering plants depend on pollinators such as wasps, birds, bats, and bees, to reproduce. Plants and their pollinators are increasingly threatened around the world. Pollination is critical to most major crops and virtually impossible to replace. For instance, imagine how costly fruit would be (and how little would be available) if its natural pollinators no longer existed and each developing flower had to be fertilized by hand. 1: Global Processes Life on earth plays a critical role in regulating the earth's physical, chemical, and geological properties, from influencing the chemical composition of the atmosphere to modifying climate. About 3.5 billion years ago, early life forms (principally cyanobacteria) helped create an oxygenated atmosphere through photosynthesis, taking up carbon dioxide from the atmosphere and releasing oxygen (Schopf 1983; Van Valen 1971). Over time, these organisms altered the composition of the atmosphere, increasing oxygen levels, and paved the way for organisms that use oxygen as an energy source (aerobic respiration), forming an atmosphere similar to that existing today. Carbon cycles on the planet between the land, atmosphere, and oceans through a combination of physical, chemical, geological, and biological processes (IPCC 2001). One key way biodiversity influences the composition of the earth's atmosphere is through its role in carbon cycling in the oceans, the largest reservoir for carbon on the planet (Gruber and Sarmiento, in press). In turn, the atmospheric composition of carbon influences climate. Phytoplankton (or microscopic marine plants) play a central role in regulating atmospheric chemistry by transforming carbon dioxide into organic matter during photosynthesis. This carbon-laden organic matter settles either directly or indirectly (after it has been consumed) in the deep ocean, where it stays for centuries, or even thousands of years, acting as the major reservoir for carbon on the planet. 
Carbon also reaches the deep ocean through another biological process -- the formation of calcium carbonate, the primary component of the shells of two groups of marine organisms: coccolithophorids (a phytoplankton) and foraminifera (a single-celled, shelled organism that is abundant in many marine environments). When these organisms die, their shells sink to the bottom or dissolve in the water column. This movement of carbon through the oceans removes excess carbon from the atmosphere and regulates the earth's climate. Over the last century, humans have changed the atmosphere's composition by releasing large amounts of carbon dioxide. This excess carbon dioxide, along with other 'greenhouse' gases, is believed to be heating up our atmosphere and changing the world's climate, leading to 'global warming'. There has been much debate about how natural processes, such as the cycling of carbon through phytoplankton in the oceans, will respond to these changes. Will phytoplankton productivity increase and thereby absorb the extra carbon from the atmosphere? Recent studies suggest that natural processes may slow the rate of increase of carbon dioxide in the atmosphere, but it is doubtful that either the earth's oceans or its forests can absorb the entirety of the extra carbon released by human activity (Falkowski et al. 2000).
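The passage above describes carbon moving from the atmospheric reservoir into the deep-ocean reservoir through biological export. As a purely illustrative aid (this model is not part of the text, and the rate constants and starting amounts below are arbitrary), a toy two-box sketch can make the reservoir-and-flux reasoning concrete:

```python
# Toy two-box sketch of the atmosphere / deep-ocean carbon exchange described
# above. All numbers are arbitrary illustrations, not calibrated carbon-cycle
# values; the point is only that a fast downward ("pump") flux and a slow
# return flux leave most of the carbon stored in the deep-ocean box.

def run_two_box(atmos, ocean, k_pump, k_return, dt=1.0, steps=1_000):
    """Euler-step two coupled reservoirs; returns final (atmosphere, ocean)."""
    for _ in range(steps):
        pump_flux = k_pump * atmos      # carbon fixed by phytoplankton and exported downward
        return_flux = k_return * ocean  # slow return to the atmosphere (upwelling, outgassing)
        atmos += dt * (return_flux - pump_flux)
        ocean += dt * (pump_flux - return_flux)
    return atmos, ocean

# Starting with all carbon in the atmosphere box, a pump rate 40x the return
# rate leaves roughly 97% of the carbon in the deep-ocean box at equilibrium.
print(run_two_box(atmos=1000.0, ocean=0.0, k_pump=0.02, k_return=0.0005))
```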
textbooks/bio/Ecology/Biodiversity_(Bynum)/1%3A_Global_Processes/1.1%3A_Atmosphere_and_Climate_Regulation.txt
The energy source that ultimately drives the earth's climate is the sun. The amount of solar radiation absorbed by the earth depends primarily on the characteristics of the surface. Although the link between solar absorption, thermodynamics, and ultimately climate is very complex, newer studies indicate that vegetation cover and seasonal variation in vegetation cover affect climate on both global and local scales. New generations of atmospheric circulation models are increasingly able to incorporate more complex data related to these parameters (Sellers et al. 1997). Besides regulating the atmosphere's composition, the extent and distribution of different types of vegetation over the globe modifies climate in three main ways:

• affecting the reflectance of sunlight (radiation balance);
• regulating the release of water vapor (evapotranspiration); and
• changing wind patterns and moisture loss (surface roughness).

The amount of solar radiation reflected by a surface is known as its albedo; surfaces with low albedo reflect a small amount of sunlight, those with high albedo reflect a large amount. Different types of vegetation have different albedos; forests typically have low albedo, whereas deserts have high albedo. Deciduous forests are a good example of the seasonal relationship between vegetation and radiation balance. In the summer, the leaves in deciduous forests absorb solar radiation through photosynthesis; in winter, after their leaves have fallen, deciduous forests tend to reflect more radiation. These seasonal changes in vegetation modify climate in complex ways, by changing evapotranspiration rates and albedo (IPCC 2001). Vegetation absorbs water from the soil and releases it back into the atmosphere through evapotranspiration, which is the major pathway by which water moves from the soil to the atmosphere. This release of water from vegetation cools the air temperature. In the Amazon region, vegetation and climate are tightly coupled; evapotranspiration of plants is believed to contribute an estimated fifty percent of the annual rainfall (Salati 1987). Deforestation in this region leads to a complex feedback mechanism, reducing evapotranspiration rates, which leads to decreased rainfall and increased vulnerability to fire (Laurance and Williamson 2001). Deforestation also influences the climate of cloud forests in the mountains of Costa Rica. The Monteverde Cloud Forest harbors a rich diversity of organisms, many of which are found nowhere else in the world. However, deforestation in lower-lying lands, even regions over 50 kilometers away, is changing the local climate, leaving the "cloud" forest cloudless (Lawton et al. 2001). As winds pass over deforested lowlands, clouds are lifted higher, often above the mountaintops, so that clouds no longer form at the level of the forest. Removing the clouds from a cloud forest dries the forest, so it can no longer support the same vegetation or provide appropriate habitat for many of the species originally found there. Similar patterns may be occurring in other, less studied montane cloud forests around the world. Different vegetation types and topographies have varying surface roughness, which changes the flow of winds in the lower atmosphere and in turn influences climate. Lower surface roughness also tends to reduce surface moisture and increase evaporation. Farmers apply this knowledge when they plant trees to create windbreaks (Johnson et al. 2003).
Windbreaks reduce wind speed and change the microclimate, increase surface roughness, reduce soil erosion, and modify temperature and humidity. For many field crops, windbreaks increase yields and production efficiency. They also minimize stress on livestock from cold winds.

Glossary

albedo: the amount of solar radiation reflected by a surface
evapotranspiration: the process whereby water is absorbed from soil by vegetation and then released back into the atmosphere
surface roughness: the average vertical relief and small-scale irregularities of a surface
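To make the albedo idea from this section concrete, here is a minimal numerical sketch. The albedo values and the mean solar input are rough, illustrative figures (not taken from the text), and the calculation deliberately ignores the atmosphere: it only shows that a surface absorbs the fraction (1 - albedo) of the sunlight reaching it.

```python
# Minimal sketch of the radiation-balance idea described in this section:
# a surface absorbs the fraction (1 - albedo) of incoming solar radiation.
# Albedo values are rough, illustrative figures, and atmospheric effects
# are ignored entirely.

SOLAR_INPUT = 340.0  # approximate global-mean incoming solar radiation, W/m^2

typical_albedo = {
    "conifer forest": 0.10,
    "deciduous forest (summer, leaves on)": 0.15,
    "deciduous forest (winter, snow on ground)": 0.35,
    "desert sand": 0.40,
    "fresh snow": 0.80,
}

def absorbed_radiation(incoming, albedo):
    """Energy absorbed by the surface, in the same units as `incoming`."""
    return incoming * (1.0 - albedo)

for surface, a in typical_albedo.items():
    print(f"{surface:45s} albedo={a:.2f}  absorbs ~{absorbed_radiation(SOLAR_INPUT, a):.0f} W/m^2")
```

As the printout suggests, replacing a dark forest canopy with a brighter surface (snow or bare sand) sharply reduces the energy absorbed, which is the mechanism behind the seasonal and land-use effects discussed above.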
textbooks/bio/Ecology/Biodiversity_(Bynum)/1%3A_Global_Processes/1.2%3A_Land_Use_Change_and_Climate_Regulation.txt
Biodiversity is also important for global soil and water protection. Terrestrial vegetation in forests and other upland habitats maintains water quality and quantity, and controls soil erosion. In watersheds where vegetation has been removed, flooding prevails in the wet season and drought in the dry season. Soil erosion is also more intense and rapid, causing a double effect: removing nutrient-rich topsoil and leading to siltation in downstream riverine and ultimately oceanic environments. This siltation harms riverine and coastal fisheries as well as damaging coral reefs (Turner and Rabalais 1994; van Katwijk et al. 1993). One of the most productive ecosystems on earth, wetlands have water present at or near the surface of the soil or within the root zone, all year or for a period of time during the year, and the vegetation there is adapted to these conditions. Wetlands are instrumental for the maintenance of clean water and erosion control. Microbes and plants in wetlands absorb nutrients and in the process filter and purify water of pollutants before they can enter coastal or other aquatic ecosystems. Wetlands also reduce flood, wave, and wind damage. They retard the flow of floodwaters and accumulate sediments that would otherwise be carried downstream or into coastal areas. Wetlands also serve as breeding grounds and nurseries for fish and support thousands of bird and other animal species.

Glossary

watersheds: land areas drained by a river and its tributaries
wetlands: areas where water is present at or near the surface of the soil or within the root zone, all year or for a period of time during the year, and where the vegetation is adapted to these conditions

1.4: Nutrient Cycling

Nutrient cycling is yet another critical service provided by biodiversity -- particularly by microorganisms. Fungi and other microorganisms in soil help break down dead plants and animals, eventually converting this organic matter into nutrients that enrich the soil (Pimentel et al. 1995). Nitrogen is essential for plant growth, and an insufficient quantity of it limits plant production in both natural and agricultural ecosystems. While nitrogen is abundant in the atmosphere, only a few organisms (commonly known as nitrogen-fixing bacteria) can use it in this form. Nitrogen-fixing bacteria extract nitrogen from the air and transform it into ammonia; other bacteria then convert this ammonia into nitrogenous compounds that can be absorbed and used by most plants. In addition to their role in decomposition and hence nutrient cycling, microorganisms also help detoxify waste, changing waste products into forms less harmful to humans.

1.5: Pollination and Seed Dispersal

An estimated 90 percent of flowering plants depend on pollinators such as wasps, birds, bats, and bees to reproduce. Plants and their pollinators are increasingly threatened around the world (Buchmann and Nabhan 1995; Kremen and Ricketts 2000). Pollination is critical to most major crops and virtually impossible to replace. For instance, imagine how costly fruit would be (and how little would be available) if its natural pollinators no longer existed and each developing flower had to be fertilized by hand. Many animal species are important dispersers of plant seeds. It has been hypothesized that the loss of a seed disperser could cause a plant to become extinct. At present, there is no example where this has occurred.
A famous example that has often been cited previously is the case of the dodo (Raphus cucullatus) and the tambalacoque (Sideroxylon grandiflorum). The dodo, a large flightless bird that inhabited the island of Mauritius in the Indian Ocean, became extinct due to overhunting in the late seventeenth century. It was once thought that the tambalacoque, a now endangered tree, depended upon the dodo to germinate its hard-cased seeds (Temple 1977). In the 1970s, only 13 trees remained and it was thought the tree had not reproduced for 300 years. The seeds of the tree have a very hard coat, as an experiment they were fed to a turkey; after passing through its gizzard the seeds were viable and germinated. This experiment led scientists to believe that the extinction of the dodo was coupled to the tambalacoque's inability to reproduce. However, this hypothesis has not stood up to further scrutiny, as there were several other species (including three now extinct species, a large-billed parrot, a giant tortoise, and a giant lizard) that were also capable of cracking the seed (Witmar and Cheke 1991; Catling 2001). Thus many factors, including the loss of the dodo, could have contributed to the decline of the tambalacoque. (For further details of causes of extinction see Historical Perspectives on Extinction and the Current Biodiversity Crisis). Unfortunately, declines and/or extinctions of species are often unobserved and thus it is difficult to tease out the cause of the end result, as multiple factors are often operating simultaneously. Similar problems exist today in understanding current population declines. For example, in a given species, population declines may be caused by loss of habitat, loss in prey species or loss of predators, a combination of these factors, or possibly some other yet unidentified cause, such as disease. In the pine forests of western North America, corvids (including jays, magpies, and crows), squirrels, and bears play a role in seed dispersal. The Clark's nutcracker (Nucifraga columbiana) is particularly well adapted to dispersal of whitebark pine (Pinus albicaulis) seeds (Lanner 1996). The nutcracker removes the wingless seeds from the cones, which otherwise would not open on their own. Nutcrackers hide the seeds in clumps. When the uneaten seeds eventually grow, they are clustered, accounting for the typical distribution pattern of whitebark pine in the forest. In tropical areas, large mammals and frugivorous birds play a key role in dispersing the seeds of trees and maintaining tree diversity over large areas. For example, three-wattled bellbirds (Procnias tricarunculata) are important dispersers of tree seeds of members of the Lauraceae family in Costa Rica. Because bellbirds return again and again to one or more favorite perches, they take the fruit and its seeds away from the parent tree, spreading Lauraceae trees throughout the forest (Wenny and Levy 1998).
textbooks/bio/Ecology/Biodiversity_(Bynum)/1%3A_Global_Processes/1.3%3A_Soil_and_Water_Conservation.txt
The diversity of species, ecosystems and landscapes that surround us today is the product of perhaps 3.7 billion (i.e., 3.7 × 10⁹) to 3.85 billion years of evolution of life on Earth (Mojzsis et al., 1996; Fedo and Whitehouse, 2002). Life may have first evolved under harsh conditions, perhaps comparable to the deep-sea thermal vents where chemo-autotrophic bacteria are currently found (these are organisms that obtain their energy only from inorganic, chemical sources). A subterranean evolution of life has also been suggested. Rock layers deep below the continents and ocean floors, which were previously thought to be too poor in nutrients to sustain life, have now been found to support thousands of strains of microorganisms. Types of bacteria have been collected from rock samples almost 2 miles below the surface, at temperatures up to 75 degrees Celsius. These chemo-autotrophic microorganisms derive their nutrients from chemicals such as carbon, hydrogen, iron and sulphur. Deep subterranean communities could have evolved underground or originated on the surface and become buried or otherwise transported down into subsurface rock strata, where they have subsequently evolved in isolation. Either way, these appear to be very old communities, and it is possible that these subterranean bacteria may have been responsible for shaping many geological processes during the history of the Earth (e.g., the conversion of minerals from one form to another, and the erosion of rocks) (Fredrickson and Onstott, 1996). The earliest evidence for photosynthetic bacteria - suspected to be cyanobacteria - is dated at sometime between 3.5 and 2.75 billion years ago (Schopf, 1993; Brasier et al., 2002; Hayes, 2002). These first photosynthetic organisms would have been responsible for releasing oxygen into the atmosphere. (Photosynthesis is the formation of carbohydrates from carbon dioxide and water, through the action of light energy on a light-sensitive pigment, such as chlorophyll, and usually resulting in the production of oxygen). Prior to this, the atmosphere was mainly composed of carbon dioxide, with other gases such as nitrogen, carbon monoxide, methane, hydrogen and sulphur gases present in smaller quantities. It probably took over 2 billion years from the initial advent of photosynthesis for the oxygen concentration in the atmosphere to reach the level it is at today (Hayes, 2002). As oxygen levels rose, some of the early anaerobic species probably became extinct, and others probably became restricted to habitats that remained free of oxygen. Some assumed a lifestyle permanently lodged inside aerobic cells. The anaerobic cells might, initially, have been incorporated into the aerobic cells after those aerobes had engulfed them as food. Alternatively, the anaerobes might have invaded the aerobic hosts and become parasites within them. Either way, a more intimate symbiotic relationship subsequently evolved between these aerobic and anaerobic cells. In these cases, the survival of each cell was dependent on the function of the other cell. The evolution of this symbiotic relationship was an extremely important step in the evolution of more complex cells that have a nucleus, which is a characteristic of the Eucarya or eucaryotes (eu = good, or true; and karyon = kernel, or nucleus). Recent studies of rocks from Western Australia have suggested that the earliest forms of single-celled eucaryotes might be at least 2.7 billion years old (Anon, 2001).
According to contemporary theories, there has been sufficient time, over those 2.7 billion years, for some of the genes of the invading anaerobe to have been lost, or even transferred to the nucleus of the host aerobe cell. As a result, the genomes of the ancestral invader and ancestral host have become mingled and the two entities can now be considered as one from a genetic standpoint. The evolutionary history of the Eucarya is described in various standard references and so is not covered in detail here. Briefly, eucaryotes constitute three well-known groups: the Viridiplantae or green plants, the Fungi, and the Metazoa or animals. There are also many basal groups of eucaryotes that are extremely diverse, many of which are evolutionarily ancient. For example, the Rhodophyta, or red algae, which might be the sister-group to the Viridiplantae, includes fossil representatives dating from the Precambrian, 1.025 billion years ago. The Stramenopiles includes small, single-celled organisms such as diatoms, fungus-like species of water moulds and downy mildews, and extremely large, multicellular brown seaweeds such as kelps. The earliest known green plants are green algae, dating from the Cambrian, at least 500 million years ago. By the end of the Devonian, 360 million years ago, plants had become quite diverse and included representatives similar to modern plants. Green plants have been extremely important in shaping the environment. Fueled by sunlight, they are the primary producers of carbohydrates, sugars that are essential food resources for herbivores that are then prey to predatory carnivores. The evolution and ecology of pollinating insects is closely associated with the evolution of the Angiosperms, or flowering plants, since the Jurassic and Cretaceous periods. Fungi, which date back to Precambrian times, about 650 to 540 million years ago, are also important in shaping and sustaining biodiversity. By breaking down dead organic material and using this for their growth, they recycle nutrients back through ecosystems. Fungi are also responsible for causing several plant and animal diseases. Fungi also form symbiotic relationships with tree species, often in nutrient-poor soils such as are found in the humid tropics, allowing their symbiont trees to flourish in what would otherwise be a difficult environment. Metazoa, which date to over 500 million years ago, have also been responsible for shaping many ecosystems, from the specialized tubeworms of deep-sea hydrothermal vent communities on the ocean floor, to the birds living in the high altitudes of the Himalayas, such as the impeyan pheasant and Tibetan snow cock. Many species of animals are parasitic on other species and can significantly affect the behavior and life-cycles of their hosts. Thus, the evolutionary history of Earth has physically and biologically shaped our contemporary environment. Many existing landscapes are based on the remains of earlier life forms. For example, some existing large rock formations are the remains of ancient reefs formed 360 to 440 million years ago by communities of algae and invertebrates (Veron, 2000).

Glossary

Photosynthesis: the formation of carbohydrates from carbon dioxide and water, through the action of light energy on a light-sensitive pigment, such as chlorophyll, and usually resulting in the production of oxygen
textbooks/bio/Ecology/Biodiversity_(Bynum)/10%3A_A_Brief_History_of_Life_on_Earth.txt
An ecosystem is a community plus the physical environment that it occupies at a given time. An ecosystem can exist at any scale, for example, from the size of a small tide pool up to the size of the entire biosphere. However, lakes, marshes, and forest stands represent more typical examples of the areas that are compared in discussions of ecosystem diversity. Broadly speaking, the diversity of an ecosystem is dependent on the physical characteristics of the environment, the diversity of species present, and the interactions that the species have with each other and with the environment. Therefore, the functional complexity of an ecosystem can be expected to increase with the number and taxonomic diversity of the species present, and the vertical and horizontal complexity of the physical environment. However, one should note that some ecosystems (such as submarine black smokers, or hot springs) that do not appear to be physically complex, and that are not especially rich in species, may be considered to be functionally complex. This is because they include species that have remarkable biochemical specializations for surviving in the harsh environment and obtaining their energy from inorganic chemical sources (e.g., see discussions of Rothschild and Mancinelli, 2001). The physical characteristics of an environment that affect ecosystem diversity are themselves quite complex (as previously noted for community diversity). These characteristics include, for example, the temperature, precipitation, and topography of the ecosystem. Therefore, there is a general trend for warm tropical ecosystems to be richer in species than cold temperate ecosystems (see "Spatial gradients in biodiversity"). Also, the energy flux in the environment can significantly affect the ecosystem. An exposed coastline with high wave energy will have a considerably different type of ecosystem than a low-energy environment such as a sheltered salt marsh. Similarly, an exposed hilltop or mountainside is likely to have stunted vegetation and low species diversity compared to more prolific vegetation and high species diversity in sheltered valleys (see Walter, 1985, and Smith, 1990 for general discussions on factors affecting ecosystems, and comparative ecosystem ecology). Environmental disturbance on a variety of temporal and spatial scales can affect the species richness and, consequently, the diversity of an ecosystem. For example, river systems in the North Island of New Zealand have been affected by volcanic disturbance several times over the last 25,000 years. Ash-laden floods running down the rivers would have extirpated most of the fish fauna in the rivers, and recolonization has been possible only by a limited number of diadromous species (i.e., species, like eels and salmons, that migrate between freshwater and seawater at fixed times during their life cycle). Once the disturbed rivers had recovered, the diadromous species would have been able to recolonize the rivers by dispersal through the sea from other unaffected rivers (McDowall, 1996). Nevertheless, moderate levels of occasional disturbance can also increase the species richness of an ecosystem by creating spatial heterogeneity in the ecosystem, and also by preventing certain species from dominating the ecosystem. (See the module on Organizing Principles of the Natural World for further discussion). 
Ecosystems may be classified according to the dominant type of environment, or dominant type of species present; for example, a salt marsh ecosystem, a rocky shore intertidal ecosystem, a mangrove swamp ecosystem. Because temperature is an important aspect in shaping ecosystem diversity, it is also used in ecosystem classification (e.g., cold winter deserts versus warm deserts) (Udvardy, 1975). While the physical characteristics of an area will significantly influence the diversity of the species within a community, the organisms can also modify the physical characteristics of the ecosystem. For example, stony corals (Scleractinia) are responsible for building the extensive calcareous structures that are the basis for coral reef ecosystems that can extend thousands of kilometers (e.g., the Great Barrier Reef). There are less extensive ways in which organisms can modify their ecosystems. For example, trees can modify the microclimate and the structure and chemical composition of the soil around them. For discussion of the geomorphic influences of various invertebrates and vertebrates see Butler (1995) and, for further discussion of ecosystem diversity, see the module on Processes and functions of ecological systems.

Glossary

Ecosystem: a community plus the physical environment that it occupies at a given time
textbooks/bio/Ecology/Biodiversity_(Bynum)/11%3A_Ecosystem_Diversity.txt
A population is a group of individuals of the same species that share aspects of their genetics or demography more closely with each other than with other groups of individuals of that species (where demography is the statistical characteristic of the population such as size, density, birth and death rates, distribution, and movement of migration). Population diversity may be measured in terms of the variation in genetic and morphological features that define the different populations. The diversity may also be measured in terms of the populations' demographics, such as numbers of individuals present, and the proportional representation of different age classes and sexes. However, it can be difficult to measure demography and genetics (e.g., allele frequencies) for all species. Therefore, a more practical way of defining a population, and measuring its diversity, is by the space it occupies. Accordingly, a population is a group of individuals of the same species occupying a defined area at the same time (Hunter, 2002: 144). The area occupied by a population is most effectively defined by the ecological boundaries that are important to the population (for example, a particular region and type of vegetation for a population of beetles, or a particular pond for a population of fish). The geographic range and distribution of populations (i.e., their spatial structure) represent key factors in analyzing population diversity because they give an indication of likelihood of movement of organisms between populations and subsequent genetic and demographic interchange. Similarly, an estimate of the overall population size provides a measure of the potential genetic diversity within the population; large populations usually represent larger gene pools and hence greater potential diversity. Isolated populations, with very low levels of interchange, show high levels of genetic divergence (Hunter, 2002: 145), and exhibit unique adaptations to the biotic and abiotic characteristics of their habitat. The genetic diversity of some groups that generally do not disperse well - such as amphibians, mollusks, and some herbaceous plants - may be mostly restricted to local populations (Avise, 1994). For this reason, range retractions of species can lead to loss of local populations and the genetic diversity they hold. Loss of isolated populations along with their unique component of genetic variation is considered by some scientists to be one of the greatest but most overlooked tragedies of the biodiversity crisis (Ehrlich & Raven 1969). Populations can be categorized according to the level of divergence between them. Isolated and genetically distinct populations of a single species may be referred to as subspecies according to some (but not all) species concepts. Populations that show less genetic divergence might be recognized as variants or races. However, the distinctions between subspecies and other categories can be somewhat arbitrary (see Species diversity). A species that is ecologically linked to a specialized, patchy habitat may likely assume the patchy distribution of the habitat itself, with several different populations distributed at different distances from each other. This is the case, for example, for species that live in wetlands, alpine zones on mountaintops, particular soil types or forest types, springs, and many other comparable situations. Individual organisms may periodically disperse from one population to another, facilitating genetic exchange between the populations. 
This group of different but interlinked populations, with each different population located in its own, discrete patch of habitat, is called a metapopulation. There may be quite different levels of dispersal between the constituent populations of a metapopulation. For example, a large or overcrowded population patch is unlikely to be able to support much immigration from neighboring populations; it can, however, act as a source of dispersing individuals that will move away to join other populations or create new ones. In contrast, a small population is unlikely to have a high degree of emigration; instead, it can receive a high degree of immigration. A population that requires net immigration in order to sustain itself acts as a sink. The extent of genetic exchange between source and sink populations depends, therefore, on the size of the populations, the carrying capacity of the habitats where the populations are found, and the ability of individuals to move between habitats. Consequently, understanding how the patches and their constituent populations are arranged within the metapopulation, and the ease with which individuals are able to move among them, is key to describing the population diversity and conserving the species. For more discussion, see the module on Metapopulations.

Glossary

Population: a group of individuals of the same species that share aspects of their demography or genetics more closely with each other than with other groups of individuals of that species. A population may also be defined as a group of individuals of the same species occupying a defined area at the same time (Hunter, 2002: 144)
Metapopulation: a group of different but interlinked populations, with each different population located in its own, discrete patch of habitat
Source: a population patch, in a metapopulation, from which individuals disperse to other population patches or create new ones
Sink: a population patch, in a metapopulation, that does not have a high degree of emigration outside its boundaries but, instead, requires net immigration in order to sustain itself
Demography: the statistical characteristics of the population such as size, density, birth and death rates, distribution, and movement or migration.
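The colonization/extinction balance described above can be made concrete with the classic Levins patch-occupancy model. This model is not named in the text; it is offered here only as a minimal sketch, and all parameter values are arbitrary.

```python
# Minimal sketch of patch-occupancy dynamics in a metapopulation, using the
# classic Levins model (not part of the text above; shown only to formalize
# the colonization/extinction balance it describes):
#     dp/dt = c * p * (1 - p) - e * p
# where p is the fraction of habitat patches occupied, c is the colonization
# rate, and e is the local extinction rate. Parameter values are arbitrary.

def simulate_levins(c, e, p0=0.1, dt=0.01, steps=10_000):
    """Integrate the Levins model with simple Euler steps; return final occupancy."""
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

# The model predicts long-term persistence at p* = 1 - e/c only when
# colonization outpaces local extinction (c > e); otherwise occupancy
# collapses toward zero.
print(simulate_levins(c=0.5, e=0.2))  # approaches 1 - 0.2/0.5 = 0.6
print(simulate_levins(c=0.2, e=0.5))  # approaches 0.0
```

Real source and sink patches differ in quality and size, so spatially explicit models are usually needed, but the same balance of colonization against local extinction drives the outcome.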
textbooks/bio/Ecology/Biodiversity_(Bynum)/12%3A_Population_Diversity.txt
Biogeography is "the study of the distribution of organisms in space and through time". Analyses of the patterns of biogeography can be divided into the two fields of historical biogeography and ecological biogeography (Wiley, 1981). Historical biogeography examines past events in the geological history of the Earth and uses these to explain patterns in the spatial and temporal distributions of organisms (usually species or higher taxonomic ranks). For example, an explanation of the distribution of closely related groups of organisms in Africa and South America is based on the understanding that these two land masses were formerly connected as part of a single land mass (Gondwana). The ancestors of those related species which are now found in Africa and South America are assumed to have had a cosmopolitan distribution across both continents when they were connected. Following the separation of the continents by the process of plate tectonics, the isolated populations are assumed to have undergone allopatric speciation (i.e., speciation achieved between populations that are completely geographically separate). This separation resulted in the closely related groups of species on the now separate continents. Clearly, an understanding of the systematics of the groups of organisms (i.e., the evolutionary relationships that exists between the species) is an integral part of these historical biogeographic analyses. The same historical biogeographic hypotheses can be applied to the spatial and temporal distributions of marine biota. For example, the biogeography of fishes from different ocean basins has been shown to be associated with the geological evolution of these ocean basins (see Stiassny and Harrison, 2000 for examples with references). However, we cannot assume that all existing distribution patterns are solely the product of these past geological processes. It is evident, for example, that the existing marine fauna of the Mediterranean is a product of the complex geological history of this marine basin, involving separation from the Indian and Atlantic Oceans, periods of extensive desiccation followed by flooding and recolonization from the Atlantic (Por, 1989). However, there is also good evidence that the eastern end of the Mediterranean has been colonized more recently by species that have dispersed from the Red Sea via the Suez canal. Thus, the field of ecological biogeography first examines the dispersal of organisms (usually individuals or populations) and the mechanisms that influence this dispersal, and then uses this information to explain the spatial distribution patterns of these organisms. For further discussion see the module on "Biogeography" and see Wiley, 1981, and Humphries and Parenti, 1999. Glossary biogeography the study of the distribution of organisms in space and through time allopatric speciation speciation achieved between populations that are completely geographically separated (their ranges do not overlap or are not contiguous). 
Historical biogeography the study of events in the geological history of the Earth and their use to explain patterns in the spatial and temporal distributions of organisms (usually species or higher taxonomic ranks) Ecological biogeography: the study of the dispersal of organisms (usually individuals or populations) and the mechanisms that influence this dispersal, and the use of this information to explain spatial distribution patterns 14: Community Diversity A community comprises the populations of different species that naturally occur and interact in a particular environment. Some communities are relatively small in scale and may have well-defined boundaries. Some examples are: species found in or around a desert spring, the collection of species associated with ripening figs in a tropical forest, those clustered around a hydrothermal vent on the ocean floor, those in the spray zone of a waterfall, or under warm stones in the alpine zone on a mountaintop. Other communities are larger, more complex, and may be less clearly defined, such as old-growth forests of the northwest coast of North America, lowland fen communities of the British Isles, or the community of freshwater species of Lake Baikal. Sometimes biologists apply the term "community" to a subset of organisms within a larger community. For example, some biologists may refer to the "community" of species specialized for living and feeding entirely in the forest canopy, whereas other biologists may refer to this as part of a larger forest community. This larger forest community includes those species living in the canopy, those on the forest floor, and those moving between these two habitats, as well as the functional interrelationships between all of these. Similarly, some biologists working on ecosystem management might distinguish between the community of species that are endemic to an area (e.g. species that are endemic to an island) as well as those "exotic" species that have been introduced to that area. The introduced species form part of the larger, modified community of the area, but might not be considered as part of the regions original and distinctive community. Communities are frequently classified by their overall appearance, or physiognomy. For example, coral reef communities are classified according to the appearance of the reefs where they are located, i.e., fringing reef communities, barrier reef communities, and atoll communities. Similarly, different stream communities may be classified by the physical characteristics of that part of the stream where the community is located, such as riffle zone communities and pool communities. However, one of the easiest, and hence most frequent methods of community classification is based on the dominant types of species present for example, intertidal mussel bed communities, Ponderosa pine forest communities of the Pacific northwest region of the U.S., or Mediterranean scrubland communities. Multivariate statistics provide more complex methods for diagnosing communities, for example, by arranging species on coordinate axes (e.g., x-y axes) that represent gradients in environmental factors such as temperature or humidity. For more information, see the module on "Natural communities in space and time." The factors that determine the diversity of a community are extremely complex. There are many theories on what these factors are and how they determine community and ecosystem diversity. 
Environmental factors, such as temperature, precipitation, sunlight, and the availability of inorganic and organic nutrients are very important in shaping communities and ecosystems. Hunter (2002: 81) notes that, generally speaking, organisms can persist and evolve in places where there are sufficient environmental resources for the organisms to channel energy into growth and reproduction rather than simply the metabolic requirements for survival. In other words, organisms are less likely to thrive in a harsh environment with low energy resources. One way of measuring community diversity is to examine the energy flow through food webs that unite the species within the community; the extent of community diversity can be measured by the number of links in the food web. However, in practice, it can be very difficult to quantify the functional interactions between the species within a community. It is easier to measure the genetic diversity of the populations in the community, and to count the numbers of species present, and use these measures of genetic diversity and species richness as proxies for describing the functional diversity of the community. The evolutionary or taxonomic diversity of the species present is another way of measuring the diversity of a community, for application to conservation biology. Glossary Ecosystem a community plus the physical environment that it occupies at a given time. Community the populations of different species that naturally occur and interact in a particular environment
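As a rough illustration of the point above that community diversity can be gauged by the links in a food web, here is a minimal sketch using a made-up web; all species names and feeding relationships are hypothetical.

```python
# Minimal sketch (hypothetical data): a tiny food web stored as consumer -> set of prey,
# with species richness and the number of trophic links as simple community descriptors.

food_web = {
    "hawk": {"songbird", "mouse"},
    "songbird": {"beetle", "caterpillar"},
    "mouse": {"seeds"},
    "beetle": {"leaves"},
    "caterpillar": {"leaves"},
}

species = set(food_web) | set().union(*food_web.values())
links = sum(len(prey) for prey in food_web.values())

print(f"species richness: {len(species)}")  # 7 species in this toy web
print(f"trophic links:    {links}")         # 7 feeding links
```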
Since the 1980s, there has been an increasing tendency to map biodiversity over "ecosystem regions" or "ecoregions". An ecoregion is "a relatively large unit of land or water containing a geographically distinct assemblage of species, natural communities, and environmental conditions" (WWF, 1999); thus, the ecosystems within an ecoregion have certain distinct characters in common (Bailey, 1998a). Several standard methods of classifying ecoregions have been developed, with climate, altitude, and predominant vegetation being important criteria (Stein et al., 2000). Bailey's (1983, 1998a, b) classification is one of the most widely adopted. It is a hierarchical system with four levels: domains, divisions, provinces and sections. Domains are the largest geographic levels and are defined by climate, e.g., polar domain, dry domain, or humid tropical domain. Domains are split into smaller divisions that are defined according to climate and vegetation, and the divisions are split into smaller provinces that are usually defined by their major plant formations. Some divisions also include varieties of "mountain provinces". These generally have a similar climatic regime to the neighboring lowlands but show some altitudinal zonation, and they are defined according to the types of zonation present. Provinces are divided into sections, which are defined by the landforms present. Because ecoregions are defined by their shared biotic and abiotic characteristics, they represent practical units on which to base conservation planning. Moreover, the hierarchical nature of Bailey's ecoregion classification allows conservation management to be planned and implemented at a variety of geographical levels, from small-scale programs focused on discrete sections to much larger national or international projects that target divisions. Olson and Dinerstein (2002) identified 238 terrestrial or aquatic ecoregions, called the "Global 200", that they considered to be priorities for global conservation. These ecoregions were selected because they harbor exceptional biodiversity and are representative of the variety of Earth's ecosystems. For further discussion of ecoregions see the modules on Landscape ecology and Conservation planning on a regional scale. Glossary Ecoregion a relatively large unit of land or water containing a geographically distinct assemblage of species, natural communities, and environmental conditions (WWF, 1999)
16: Extinction
Table \(1\): Major Extinction Events
Era | Period | Epoch | Approximate duration of era, period or epoch (millions of years before present) | Major extinction events
CENOZOIC | Quaternary | Holocene | present - 0.01 | \(6^{th}\) major extinction (?)
CENOZOIC | Quaternary | Pleistocene | 0.01-1.6 |
CENOZOIC | Tertiary | Pliocene | 1.6-5.3 |
CENOZOIC | Tertiary | Miocene | 5.3-23 |
CENOZOIC | Tertiary | Oligocene | 24-37 |
CENOZOIC | Tertiary | Eocene | 37-58 |
CENOZOIC | Tertiary | Paleocene | 58-65 |
MESOZOIC | Cretaceous | | 65-144 | \(5^{th}\) major extinction (end of Cretaceous; K-T boundary)
MESOZOIC | Jurassic | | 144-208 |
MESOZOIC | Triassic | | 208-245 | \(4^{th}\) major extinction (end of Triassic)
PALEOZOIC | Permian | | 245-286 | \(3^{rd}\) major extinction (end of Permian)
PALEOZOIC | Pennsylvanian (Carboniferous) | | 286-325 |
PALEOZOIC | Mississippian (Carboniferous) | | 325-360 |
PALEOZOIC | Devonian | | 360-408 | \(2^{nd}\) major extinction (Late Devonian)
PALEOZOIC | Silurian | | 408-440 |
PALEOZOIC | Ordovician | | 440-505 | \(1^{st}\) major extinction (end of Ordovician)
PALEOZOIC | Cambrian | | 505-570 |
PRECAMBRIAN | | | 570-4500 |
Each of the first five mass extinctions shown in Table \(1\) represents a significant loss of biodiversity, but recovery has been good on a geologic time scale.
Mass extinctions are apparently followed by a sudden burst of evolutionary diversification on the part of the remaining species, presumably because the surviving species started using habitats and resources that were previously "occupied" by more competitively successful species that went extinct. However, this does not mean that the recoveries from mass extinction have been rapid; they have usually required some tens of millions of years (Jablonski, 1995). It is hypothesized that we are currently on the brink of a "sixth mass extinction," but one that differs from previous events. The five other mass extinctions predated humans and were probably the ultimate products of some physical process (e.g., climate change brought about by meteor impacts), rather than the direct consequence of the action of some other species. In contrast, the sixth mass extinction is the product of human activity over the last several hundred, or even several thousand, years. These mass extinctions, and their historic and modern consequences, are discussed in more detail in the modules on Historical perspectives on extinction and the current biodiversity crisis, and Ecological consequences of extinctions. Glossary Extinct a species is assumed to be extinct when there is no reasonable doubt that the last individual has died (IUCN, 2002) Extinction the complete disappearance of a species from Earth Mass extinction a period when there is a sudden increase in the rate of extinction, such that the rate at least doubles, and the extinctions include representatives from many different taxonomic groups of plants and animals 17: Landscape Diversity A landscape is "a mosaic of heterogeneous land forms, vegetation types, and land uses" (Urban et al., 1987). Therefore, assemblages of different ecosystems (the physical environments and the species that inhabit them, including humans) create landscapes on Earth. Although there is no standard definition of the size of a landscape, landscapes are usually in the hundreds or thousands of square miles. Species composition and population viability are often affected by the structure of the landscape; for example, the size, shape, and connectivity of individual patches of ecosystems within the landscape (Noss, 1990). Conservation management should be directed at whole landscapes to ensure the survival of species that range widely across different ecosystems (e.g., jaguars, quetzals, species of plants that have widely dispersed pollen and seeds) (Hunter, 2002: 83-85, 268-270). Diversity within and between landscapes depends on local and regional variations in environmental conditions, as well as the species supported by those environments. Landscape diversity is often incorporated into descriptions of "ecoregions." Glossary Landscape a mosaic of heterogeneous land forms, vegetation types, and land uses (Urban et al., 1987)
Natural communities are finely tuned systems, where each species has an ecological value to the other species that are part of that ecosystem. Species diversity increases an ecosystem's stability and resilience, in particular its ability to adapt and respond to changing environmental conditions. If a certain number or type of species (such as a keystone species) is lost, this eventually leads to the loss of ecosystem function. Many ecosystems, though, have built-in redundancies, so that the functions of two or more species may overlap. Because of these redundancies, some changes in the number or type of species may not impact an ecosystem. However, not all species within an ecosystem are of the same importance. Species that are important due to their sheer numbers are often called dominant species. These species make up most of the biomass of an ecosystem. Species that have important ecological roles that are greater than one would expect based on their abundance are called keystone species. These species are often central to the structure of an ecosystem; removal of one or several keystone species may have consequences immediately, or decades or centuries later (Jackson et al. 2001). Ecosystems are complex and difficult to study, so it is often difficult to predict which species are keystone species. The impact of removing one or several keystone species from kelp forests in the Pacific is examined in the example below. Northern Pacific Kelp Forests Kelp forests, as their name suggests, are dominated by kelp, a brown seaweed of the family Laminariales. They are found in shallow, rocky habitats from temperate to subarctic regions, and are important ecosystems for many commercially valuable fish and invertebrates. Vast forests of kelp and other marine plants existed in the northern Pacific Ocean prior to the 18th century. The kelp was eaten by herbivores such as sea urchins (Family Strongylocentrotidae), which in turn were preyed upon by predators such as sea otters (Enhydra lutris). Hunting during the 18th and 19th centuries brought sea otters to the brink of extinction. In the absence of sea otters, sea urchin populations burgeoned and grazed down the kelp forests, at the extreme creating "urchin barrens," where the kelp was completely eradicated. Other species dependent on kelp (such as the red abalone Haliotis rufescens) were affected too. Legal protection of sea otters in the 20th century led to partial recovery of the system. More recently, sea otter populations in Alaska seem to be threatened by increased predation from killer whales (Orcinus orca) (Estes et al. 1998). It appears that the whales may have shifted their diet to sea otters when populations of their preferred prey, Steller sea lions (Eumetopias jubatus) and harbor seals (Phoca vitulina), declined. The exact reason for the decline in the sea lion and seal populations is still unclear, but it appears to be due to declines in their prey in combination with increased fishing and higher ocean temperatures. As a result of the loss of sea otters, increased sea urchin populations are grazing down kelp beds again. Southern Californian Kelp Forests Interestingly, a similar scenario in kelp forests in Southern California did not show immediate effects after the disappearance of sea otters from the ecosystem. This is because the system was more diverse initially.
Other predators (California sheephead fish, Semicossyphus pulcher, and spiny lobsters, Panulirus interruptus) and competitors (abalone Haliotis spp) of the sea urchin helped maintain the system. However, when these predators and competitors were over-harvested as well in the 1950s, the kelp forests declined drastically as sea urchin populations boomed. In the 1970s and 1980s, a sea urchin fishery developed which then enabled the kelp forest to recover. However, it left a system with little diversity. The interrelationships among these species and the changes that reverberate through systems as species are removed are mirrored in other ecosystems on the planet, both aquatic and terrestrial. As this example illustrates, biodiversity is incredibly complex and conservation efforts cannot focus on just one species or even on events of the recent past. Glossary ecological value the values that each species has as part of an ecosystem dominant species species that are important due to their sheer numbers in an ecosystem keystone species species that have important ecological roles that are greater than one would expect based on their abundance
Biodiversity, a contraction of the phrase "biological diversity," is a complex topic, covering many aspects of biological variation. In popular usage, the word biodiversity is often used to describe all the species living in a particular area. If we consider this area at its largest scale - the entire world - then biodiversity can be summarized as "life on earth." However, scientists use a broader definition of biodiversity, designed to include not only living organisms and their complex interactions, but also interactions with the abiotic (non-living) aspects of their environment. Definitions emphasizing one aspect or another of this biological variation can be found throughout the scientific and lay literature (see Gaston, 1996: Table 1.1). For the purposes of this module, biodiversity is defined as: the variety of life on Earth at all its levels, from genes to ecosystems, and the ecological and evolutionary processes that sustain it. Genetic diversity is the “fundamental currency of diversity” (Williams and Humphires, 1996) that is responsible for variation between individuals, populations and species. Therefore, it is an important aspect of any discussion of biodiversity. The interactions between the individual organisms (e.g., reproductive behavior, predation, parasitism) of a population or community, and their specializations for their environment (including ways in which they might modify the environment itself) are important functional aspects of biodiversity. These functional aspects can determine the diversity of different communities and ecosystems. There is also an important spatial component to biodiversity. The structure of communities and ecosystems (e.g. the number of individuals and species present) can vary in different parts of the world. Similarly, the function of these communities and ecosystems (i.e. the interactions between the organisms present) can vary from one place to another. Different assemblages of ecosystems can characterize quite diverse landscapes, covering large areas. These spatial patterns of biodiversity are affected by climate, geology, and physiography (Redford and Richter, 1999). The structural, functional, and spatial aspects of biodiversity can vary over time; therefore there is a temporal component to the analysis of biodiversity. For example, there can be daily, seasonal, or annual changes in the species and number of organisms present in an ecosystem and how they interact. Some ecosystems change in size or structure over time (e.g. forest ecosystems may change in size and structure because of the effects of natural fires, wetlands gradually silt up and decrease in size). Biodiversity also changes over a longer-term, evolutionary, time-scale. Geological processes (e.g., plate tectonics, orogenesis, erosion), changes in sea-level (marine transgressions and regressions), and changes in climate cause significant, long-term changes to the structural and spatial characteristics of global biodiversity. The processes of natural selection and species evolution, which may often be associated with the geological processes, also result in changes to local and global flora and fauna. Many people consider humans to be a part of nature, and therefore a part of biodiversity. 
On the other hand, some people (e.g., Redford and Richter, 1999 ) confine biodiversity to natural variety and variability, excluding biotic patterns and ecosystems that result from human activity, even though it is difficult to assess the "naturalness" of an ecosystem because human influence is so pervasive and varied ( Hunter, 1996; Angermeier, 2000; Sanderson et al.,2002). If one takes humans as part of nature, then cultural diversity of human populations and the ways that these populations use or otherwise interact with habitats and other species on Earth are a component of biodiversity too. Other people make a compromise between totally including or excluding human activities as a part of biodiversity. These biologists do not accept all aspects of human activity and culture as part of biodiversity, but they do recognize that the ecological and evolutionary diversity of domestic species, and the species composition and ecology of agricultural ecosystems are part of biodiversity. (For further discussion see the modules on Human evolution and Cultural Diversity; in preparation.) Glossary Biodiversity the variety of life on Earth at all its levels, from genes to ecosystems, and the ecological and evolutionary processes that sustain it Plate Tectonics the forces acting on the large, mobile pieces (or "plates") of the Earth's lithosphere (the upper part of the mantle and crust of the Earth where the rocks are rigid compared to those deeper below the Earth's surface) and the movement of those "plates". Orogenesis the process of mountain building.
Generally speaking, warm tropical ecosystems are richer in species than cold temperate ecosystems at high latitudes (see Gaston and Williams, 1996, for general discussion). A similar pattern is seen for higher taxonomic groups (genera, families). Various hypotheses (e.g., environmental patchiness, solar energy, productivity; see Blackburn and Gaston, 1996) have been raised to explain these patterns. For example, it is assumed that warm, moist tropical environments with long day-lengths provide organisms with more resources for growth and reproduction than harsh environments with low energy resources (Hunter, 2002). When environmental conditions favor the growth and reproduction of primary producers (e.g., aquatic algae, corals, terrestrial flora), these may support large numbers of primary consumers, such as small herbivores, which in turn support a more numerous and diverse fauna of predators. In contrast, the development of primary producers in colder temperate ecosystems is constrained by seasonal changes in sunlight and temperature. Consequently, these ecosystems may support a less diverse biota of consumers and predators. Recently, Allen et al. (2002) developed a model for the effect of ambient temperature on metabolism, and hence generation time and speciation rates, and used this model to explain the latitudinal gradient in biodiversity. However, these authors also noted that the principles that underlie these spatial patterns of biodiversity are still not well understood. Species and ecosystem diversity is also known to vary with altitude (Walter, 1985; Gaston and Williams, 1996: 214-215). Mountainous environments, also called orobiomes, are subdivided vertically into altitudinal belts, such as montane, alpine and nival, that have quite different ecosystems. Climatic conditions at higher elevations (e.g., low temperatures, high aridity) can create environments where relatively few species can survive. Similarly, in oceans and freshwaters there are usually fewer species as one moves to increasing depths below the surface. However, in the oceans there may be a rise in species richness close to the seabed, which is associated with an increase in ecosystem heterogeneity. By mapping spatial gradients in biodiversity we can also identify areas of special conservation interest. Conservation biologists are interested in areas that have a high proportion of endemic species, i.e., species whose distributions are naturally restricted to a limited area. It is obviously important to conserve these areas because much of their flora and fauna, and therefore the ecosystems so formed, are found nowhere else. Areas of high endemism are also often associated with high species richness (see Gaston and Spicer, 1998 for references). Some conservation biologists have focused their attention on areas that have high levels of endemism (and hence diversity) and that are also experiencing a high rate of loss of ecosystems; these regions are biodiversity hotspots. Because biodiversity hotspots are characterized by localized concentrations of biodiversity under threat, they represent priorities for conservation action (Sechrest et al., 2002). A terrestrial biodiversity hotspot is defined quantitatively as an area that has at least 0.5%, or 1,500, of the world's ca. 300,000 species of green plants (Viridiplantae), and that has lost at least 70% of its primary vegetation (Myers et al., 2000; Conservation International, 2002).
Marine biodiversity hotspots are quantitatively defined based on measurements of relative endemism of multiple taxa (species of corals, snails, lobsters, fishes) within a region and the relative level of threat to that region (Roberts et al., 2002). According to this approach, the Philippine archipelago and the islands of Bioko, Sao Tome, Principe and Annobon in the eastern Atlantic Gulf of Guinea are ranked as two of the most threatened marine biodiversity hotspot regions. Conservation biologists may also be interested in biodiversity coldspots; these are areas that have relatively low biological diversity but also include threatened ecosystems (Kareiva and Marvier, 2003). Although a biodiversity coldspot is low in species richness, it can also be important to conserve, as it may be the only location where a rare species is found. Extreme physical environments (low or high temperatures or pressures, or unusual chemical composition) inhabited by just one or two specially adapted species are coldspots that warrant conservation because they represent unique environments that are biologically and physically interesting. For further discussion on spatial gradients in biodiversity and associated conservation practices see the related modules on "Where is the world's biodiversity?" and "Conservation Planning at a Regional Scale." Glossary Biodiversity hotspots in general terms these are areas that have high levels of endemism (and hence diversity) but which are also experiencing a high rate of loss of habitat. This concept was originally developed for terrestrial ecosystems. A terrestrial biodiversity hotspot is an area that has at least 0.5%, or 1,500 of the worlds ca. 300,000 species of green plants (Viridiplantae), and that has lost at least 70% of its primary vegetation (Myers et al., 2000). Marine biodiversity hotspots have been defined for coral reefs, based on measurements of relative endemism of multiple taxa (species of corals, snails, lobsters, fishes) within a region and the relative level of threat to that region (Roberts et al., 2002) Orobiome a mountainous environment or landscape with its constituent ecosystems Species richness the number of different species in a particular area. ecosystem a community plus the physical environment that it occupies at a given time. Area of endemism an areas which has a high proportion of endemic species (i.e., species with distributions that are naturally restricted to that region) Endemic species those species whose distributions are naturally restricted to a defined region Terrestrial Biodiversity hotspots Marine Biodiversity hotspots Biodiversity coldspots areas that have relatively low biological diversity but are also experiencing a high rate of habitat loss
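To make the quantitative terrestrial hotspot definition above concrete, here is a minimal sketch that encodes the two criteria quoted from Myers et al. (2000); the function name and the example figures are hypothetical, not data from any real region.

```python
# Minimal sketch of the terrestrial hotspot criteria described above: at least 1,500
# green plant species (0.5% of ~300,000) and at least 70% of primary vegetation lost.
# The function name and the example figures are hypothetical.

WORLD_PLANT_SPECIES = 300_000

def is_terrestrial_hotspot(plant_species: int, primary_vegetation_lost: float) -> bool:
    """Return True if a region meets both quantitative hotspot criteria."""
    meets_plant_threshold = plant_species >= 0.005 * WORLD_PLANT_SPECIES  # 1,500 species
    meets_loss_threshold = primary_vegetation_lost >= 0.70                # 70% habitat lost
    return meets_plant_threshold and meets_loss_threshold

print(is_terrestrial_hotspot(plant_species=2_300, primary_vegetation_lost=0.85))  # True
print(is_terrestrial_hotspot(plant_species=900, primary_vegetation_lost=0.90))    # False
```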
To effectively conserve biodiversity, we need to be able to define what we want to conserve, determine where it currently occurs, identify strategies to help conserve it, and track over time whether or not these strategies are working. The first of these items, defining what we want to conserve, is complicated by the remarkable diversity of the organisms themselves. This is a product of the genetic diversity of the organisms, that is, variation in the DNA (deoxyribonucleic acid) that makes up the genes of the organisms. Genetic diversity among organisms exists at the following different levels: • within a single individual; • between different individuals of a single population; • between different populations of a single species (population diversity); • between different species (species diversity). It can be difficult, in some cases, to establish the boundaries between these levels of diversity. For example, it may be difficult to interpret whether variation between groups of individuals represents diversity between different species, or represents diversity only between different populations of the same species. Nevertheless, in general terms, these levels of genetic diversity form a convenient hierarchy for describing the overall diversity of organisms on Earth. Similarly, the functional and spatial aspects of biodiversity can also be discussed at a number of different levels; for example, diversity within or between communities, ecosystems, landscapes, biogeographical regions, and ecoregions. Glossary Genetic Diversity refers to any variation in the nucleotides, genes, chromosomes, or whole genomes of organisms. Community the populations of different species that naturally occur and interact in a particular environment. Ecosystem a community plus the physical environment that it occupies at a given time. Landscapes a mosaic of heterogeneous land forms, vegetation types, and land uses (Urban et al., 1987). Ecoregions a relatively large unit of land or water containing a geographically distinct assemblage of species, natural communities, and environmental conditions (WWF, 1999). The ecosystems within an ecoregion have certain distinct characters in common (Bailey, 1998a). 5.1: Objectives Chapter Outline • 5.1: Objectives To explore through classification of life forms the concept of biological diversity as it occurs at various taxonomic levels. • 5.2: Procedures Spiders are a highly species rich group of invertebrates that exploit a wide variety of niches in virtually all the earth's biomes. Some species of spiders build elaborate webs that passively trap their prey whereas others are active predators that ambush or pursue their prey. Given spiders' taxonomic diversity as well as the variety of ecological niches breadth along with the ease of catching them, spiders can represent useful, fairly easily measured indicators of environmental change and commu • 5.3: Level 1: Sorting and Classifying a Spider Collection and Assessing its Comprehensiveness Obtain a paper copy of the spider collection for forest patch "1." The spiders were captured by a biologist traveling along transects through the patch and striking a random series of 100 tree branches. All spiders dislodged that fell onto an outstretched sheet were collected and preserved in alcohol. They have since been spread out on a tray for you to examine. 
• 5.4: Level 2: Contrasting spider diversity among sites to provide a basis for prioritizing conservation efforts In this part of the exercise you are provided with spider collections from 4 other forest patches. The forest patches have resulted from fragmentation of a once much larger, continuous forest. You will use the spider diversity information to prioritize efforts for the five different forest patches (including the data from the first patch which you have already classified). • 5.5: Level 3: Considering evolutionary distinctiveness When contrasting patterns of species diversity and community distinctiveness, we typically treat each species as equally important, yet are they? What if a species-poor area actually is quite evolutionarily distinct from others? Similarly, what if your most species-rich site is comprised of a swarm of species that have only recently diverged from one another and are quite similar to species present at another site? These questions allude to issues of biological diversity at higher taxonomic leve 5: What is Biodiversity A comparison of spider communities Objectives To explore through classification of life forms the concept of biological diversity as it occurs at various taxonomic levels.
Spiders are a highly species-rich group of invertebrates that exploit a wide variety of niches in virtually all the earth's biomes. Some species of spiders build elaborate webs that passively trap their prey, whereas others are active predators that ambush or pursue their prey. Given spiders' taxonomic diversity and the breadth of ecological niches they occupy, along with the ease of catching them, spiders can represent useful, fairly easily measured indicators of environmental change and community-level diversity. This exercise focuses on classifying and analyzing spider communities to explore the concept of biological diversity and experience its application to decision making in biological conservation. The exercise can be undertaken in three parts, depending on your interest level. • You will gain experience in classifying organisms by sorting a hypothetical collection of spiders from a forest patch and determining if the spider collection is adequate to accurately represent the overall diversity of spiders present in the forest patch. • If you wish to explore further, you can sort spider collections made at four other forest patches in the same region and contrast spider communities in terms of their species richness, species diversity, and community similarity. You will apply this information to make decisions about the priority that should be given to protecting each forest patch in order to conserve the regional pool of spider diversity. • If you wish to explore the concepts of biodiversity yet further, you will next take into account the evolutionary relationships among the families of spiders collected. This phylogenetic perspective will augment your decision making about priorities for patch protection by accounting for evolutionary distinctiveness in addition to diversity and distinctiveness at the community level. Once you have worked through these concepts and analyses you will have a much enhanced familiarity with the subtleties of what biological diversity is. 5.3: Level 1: Sorting and Classifying a Spider Collection and Assessing its Comprehensiveness Obtain a paper copy of the spider collection for forest patch "1." The spiders were captured by a biologist traveling along transects through the patch and striking a random series of 100 tree branches. All spiders dislodged that fell onto an outstretched sheet were collected and preserved in alcohol. They have since been spread out on a tray for you to examine. The spider collection is hypothetical but the species pictured are actual spiders that occur in central Africa (illustrations used are from Berland 1955). The next task is for you to sort and identify the spiders. To do this you have to identify all the specimens in the collection. To classify the spiders, look for external characters that all members of a particular group of spiders have in common but that are not shared by other groups of spiders. For example, leg length, hairiness, relative size of body segments, or abdomen patterning and abdomen shape all might be useful characters. Look for groups of morphologically indistinguishable spiders, and describe briefly the set of characters unique to each group. These operational taxonomic units that you define will be considered separate species. To assist you in classifying these organisms, a diagram of key external morphological characters of spiders is provided (Figure \(1\)). Note that most spider identification depends on close examination of spider genitalia.
For this exercise, however, we will be examining gross external characteristics of morphologically dissimilar species. Figure \(1\) Basic external characteristics of spiders useful for identifying individuals to species. Assign each species a working name, preferably something descriptive. For example, you might call a particular species "spotted abdomen, very hairy" or "short legs, spiky abdomen" Just remember that the more useful names will be those that signify to you something unique about the species. Construct a table listing each species, its distinguishing characteristics, the name you have applied to it, and the number of occurrences of the species in the collection (Figure \(2\)). Figure \(2\) Last, ask whether this collection adequately represents the true diversity of spiders in the forest patch at the time of collection. Were most of the species present sampled or were many likely missed? This is always an important question to ask to ensure that the sample was adequate and hence can be legitimately contrasted among sites to, for example, assign areas as low versus high diversity sites. To do this you will perform a simple but informative analysis that is standard practice for conservation biologists who do biodiversity surveys. This analysis involves constructing a so-called collector's curve (Colwell and Coddington 1994). These plot the cumulative number of species observed (y-axis) against the cumulative number of individuals classified (x-axis). The collector's curve is an increasing function with a slope that will decrease as more individuals are classified and as fewer species remain to be identified (Figure \(3\)). If sampling stops while the collector's curve is still rapidly increasing, sampling is incomplete and many species likely remain undetected. Alternatively, if the slope of the collector's curve reaches zero (flattens out), sampling is likely more than adequate as few to no new species remain undetected. Figure \(3\) An example of a collectors curve. Cumulative sample size represents the number of individuals classified. The cumulative number of taxa sampled refers to the number of new species detected. To construct the collector's curve for this spider collection, choose a specimen within the collection at random. This will be your first data point, such that \(X=1\) and \(Y=1\) because after examining the first individual you have also identified one new species! Next move consistently in any direction to a new specimen and record whether it is a member of a new species. In this next step, \(X=2\), but \(Y\) may remain as 1 if the next individual is not of a new species or it may change to 2 if the individual represents a new species different from individual 1. Repeat this process until you have proceeded through all 50 specimens and construct the collector's curve from the data obtained (just plot \(Y\) versus \(X\)). Does the curve flatten out? If so, after how many individual spiders have been collected? If not, is the curve still increasing? What can you conclude from the shape of your collector's curve as to whether the sample of spiders is an adequate characterization of spider diversity at the site?
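A collector's curve of the kind described above can also be tabulated programmatically. The sketch below is not part of the exercise itself; the specimen list is hypothetical and simply stands in for your own classification of the 50 spiders from patch 1.

```python
# Minimal sketch of the collector's curve procedure described above: walk through the
# specimens in order and record the cumulative number of distinct species encountered.
# The specimen identities below are hypothetical placeholders for your own sorted data.

specimens = ["A", "B", "A", "C", "B", "A", "D", "C", "A", "E",
             "B", "A", "C", "D", "A", "B", "E", "A", "C", "B"]

def collectors_curve(specimens):
    """Return (x, y) pairs: cumulative specimens examined vs. cumulative species found."""
    seen = set()
    curve = []
    for i, species in enumerate(specimens, start=1):
        seen.add(species)
        curve.append((i, len(seen)))
    return curve

for x, y in collectors_curve(specimens):
    print(x, y)
# If y stops increasing well before the last specimen, the sample has probably captured
# most of the species present; if y is still climbing, sampling is incomplete.
```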
In this part of the exercise you are provided with spider collections from 4 other forest patches. The forest patches have resulted from fragmentation of a once much larger, continuous forest. You will use the spider diversity information to prioritize protection efforts for the five different forest patches (including the data from the first patch, which you have already classified). Here are the additional spider collections (see Figure $1$, Figure $2$, Figure $3$, and Figure $4$). Again, tally how many individuals belonging to each species occur in each site's spider collection (use your classification of spiders completed for Site 1 during Level 1 of the exercise). Specifically, construct a table of species (rows) by site (columns). In the table's cells put the number of individuals of each species you found in the collection from that patch. You can then analyze these data to generate different measures of community characteristics to help you decide how to prioritize protection of the forest patches. Recall that you need to rank the patches in terms of where protection efforts should be applied, and you need to provide a rationale for your ranking. You will find it most useful to base your decisions on three community characteristics: species richness and species diversity within each forest patch, and the similarity of spider communities between patches. Species richness is simply the tally of different spider species that were collected in a forest patch. Species diversity is a more complex concept. We will use a standard index, Simpson's Reciprocal Index, $\frac{1}{D}$, where $D$ is calculated as follows: $D=\sum_i {p_i}^2$, where $p_i$ is the fractional abundance of the $i^{th}$ species at a site. For example, if you had a sample of two species with five individuals each, $\frac{1}{0.5^2+0.5^2}=2$. The higher the value, the greater the diversity. The maximum value is the number of species in the sample, which occurs when all species are represented by an equal number of individuals. Because this index reflects not only the number of species present but also the relative distribution of individuals among species within a community, it can reflect how balanced communities are in terms of how individuals are distributed across species. As a result, two communities may have precisely the same number of species, and hence species richness, but substantially different diversity measures if individuals in one community are skewed toward a few of the species whereas individuals are distributed more evenly in the other community. Diversity is one thing, distinctiveness is quite another. Thus another important perspective in ranking sites is how different the communities are from one another. We will use the simplest available measure of community similarity, that is, the Jaccard coefficient of community similarity, to contrast community distinctiveness between all possible pairs of sites: $CC_{j}=\frac{c}{S}$, where $c$ is the number of species common to both communities and $S$ is the total number of species present in the two communities. For example, if one site contains only 2 species and the other site 2 species, one of which is held in common by both sites, the total number of species present is 3 and the number shared is 1, so $CC_{j}=\frac{1}{3}$, or about 33%. This index ranges from 0 (when no species are found in common between communities) to 1 (when all species are found in both communities).
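For readers who prefer to check their hand calculations, here is a minimal sketch of both indices defined above; the site names and counts are hypothetical and are not the collections shown in the figures.

```python
# Minimal sketch (hypothetical data): Simpson's Reciprocal Index and the Jaccard
# coefficient computed from a species-by-site table of individual counts.

site_counts = {
    "patch_1": {"sp1": 10, "sp2": 10, "sp3": 5},
    "patch_2": {"sp1": 20, "sp4": 1, "sp5": 1},
}

def simpson_reciprocal(counts):
    """Simpson's Reciprocal Index 1/D, with D = sum of squared fractional abundances."""
    total = sum(counts.values())
    d = sum((n / total) ** 2 for n in counts.values())
    return 1 / d

def jaccard(counts_a, counts_b):
    """Jaccard coefficient: species shared by both sites / total species across both."""
    a, b = set(counts_a), set(counts_b)
    return len(a & b) / len(a | b)

print(round(simpson_reciprocal(site_counts["patch_1"]), 2))  # 2.78: 3 species, fairly even
print(round(simpson_reciprocal(site_counts["patch_2"]), 2))  # close to 1: dominated by sp1
print(round(jaccard(site_counts["patch_1"], site_counts["patch_2"]), 2))  # 0.2: 1 of 5 species shared
```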
Calculate this index to compare each pair of sites separately, that is, compare Site 1 with Site 2, Site 1 with Site 3, …, Site 4 with Site 5 for 10 total comparisons. You might find it useful to determine the average similarity of one community to all the others, by averaging the $CC_j$ values across each comparison a particular site is included. Once you have made these calculations of diversity (species richness and Simpson's Reciprocal Index) you can tackle the primary question of the exercise: How should you rank these sites for protection and why? Making an informed decision requires reconciling your analysis with concepts of biological diversity as it pertains to diversity and distinctiveness. What do you recommend? 5.5: Level 3: Considering evolutionary distinctiveness When contrasting patterns of species diversity and community distinctiveness, we typically treat each species as equally important, yet are they? What if a species-poor area actually is quite evolutionarily distinct from others? Similarly, what if your most species-rich site is comprised of a swarm of species that have only recently diverged from one another and are quite similar to species present at another site? These questions allude to issues of biological diversity at higher taxonomic levels. Only by looking at the underlying evolutionary relationships among species can we gain this additional perspective. We have provided in Figure \(1\) a phylogeny of the spider families that occur in your collections (a genuine phylogeny for these families based in large part on Coddington and Levi 1991). In brief, the more closely related families (and species therein) are located on more proximal branches within the phylogeny. Based on the evolutionary relationships among these families, will you modify any of the conclusions you made on prioritizing forest patches for protection based on patterns of species diversity alone? If so, why? Figure \(1\)
Strictly speaking, species diversity is the number of different species in a particular area (species richness) weighted by some measure of abundance such as number of individuals or biomass. However, it is common for conservation biologists to speak of species diversity even when they are actually referring to species richness. Another measure of species diversity is the species evenness, which is the relative abundance with which each species is represented in an area. An ecosystem where all the species are represented by the same number of individuals has high species evenness. An ecosystem where some species are represented by many individuals, and other species are represented by very few individuals, has a low species evenness. Table shows the abundance of species (number of individuals per hectare) in three ecosystems and gives the measures of species richness (S), evenness (E), and the Shannon diversity index (H).
Shannon's diversity index: $H=-\sum_i \rho_i \ln(\rho_i)$
• $\rho_i$ is the number of individuals of the $i^{th}$ species expressed as a proportion of the total number of individuals of all species in the ecosystem. The product of $\rho_i \ln(\rho_i)$ for each species in the ecosystem is summed, and multiplied by $-1$, to give $H$.
The species evenness index ($E$) is calculated as $E=\frac{H}{H_{max}}$.
• $H_{max}$ is the maximum possible value of $H$, and is equivalent to $\ln(S)$. Thus $E=\frac{H}{\ln(S)}$.
See Gibbs et al. (1998: 157) and Beals et al. (2000) for discussion and examples. Magurran (1988) also gives discussion of the methods of quantifying diversity. In Table, ecosystem A shows the greatest diversity in terms of species richness. However, ecosystem B could be described as being richer insofar as most species present are more evenly represented by numbers of individuals; thus the species evenness (E) value is larger. This example also illustrates a condition that is often seen in tropical ecosystems, where disturbance of the ecosystem causes uncommon species to become even less common, and common species to become even more common. Disturbance of ecosystem B may produce ecosystem C, where the uncommon species 3 has become less common, and the relatively common species 1 has become more common. There may even be an increase in the number of species in some disturbed ecosystems but, as noted above, this may occur with a concomitant reduction in the abundance of individuals or local extinction of the rarer species. Species richness and species evenness are probably the most frequently used measures of the total biodiversity of a region. Species diversity is also described in terms of the phylogenetic diversity, or evolutionary relatedness, of the species present in an area. For example, some areas may be rich in closely related taxa, having evolved from a common ancestor that was also found in that same area, whereas other areas may have an array of less closely related species descended from different ancestors (see further comments in the section on Species diversity as a surrogate for global biodiversity). To count the number of species, we must define what constitutes a species. There are several competing theories, or "species concepts" (Mayden, 1997). The most widely accepted are the morphological species concept, the biological species concept, and the phylogenetic species concept. Although the morphological species concept (MSC) is largely outdated as a theoretical definition, it is still widely used.
According to this concept: species are the smallest groups that are consistently and persistently distinct, and distinguishable by ordinary means. (Cronquist, 1978). In other words, morphological species concept states that "a species is a community, or a number of related communities, whose distinctive morphological characters are, in the opinion of a competent systematist, sufficiently definite to entitle it, or them, to a specific name" (Regan, 1926: 75). The biological species concept (BSC), as described by Mayr and Ashlock (1991), states that "a species is a group of interbreeding natural populations that is reproductively isolated from other such groups". According to the phylogenetic species concept (PSC), as defined by Cracraft (1983), a species : "is the smallest diagnosable cluster of individual organism [that is, the cluster of organisms are identifiably distinct from other clusters] within which there is a parental pattern of ancestry and descent". These concepts are not congruent, and considerable debate exists about the advantages and disadvantages of all existing species concepts (for further discussion, see the module on Macroevolution: essentials of systematics and taxonomy). In practice, systematists usually group specimens together according to shared features (genetic, morphological, physiological). When two or more groups show different sets of shared characters, and the shared characters for each group allow all the members of that group to be distinguished relatively easily and consistently from the members of another group, then the groups are considered different species. This approach relies on the objectivity of the phylogenetic species concept (i.e., the use of intrinsic, shared, characters to define or diagnose a species) and applies it to the practicality of the morphological species concept, in terms of sorting specimens into groups (Kottelat, 1995, 1997). Despite their differences, all species concepts are based on the understanding that there are parameters that make a species a discrete and identifiable evolutionary entity. If populations of a species become isolated, either through differences in their distribution (i.e., geographic isolation) or through differences in their reproductive biology (i.e., reproductive isolation), they can diverge, ultimately resulting in speciation. During this process, we expect to see distinct populations representing incipient species - species in the process of formation. Some researchers may describe these as subspecies or some other sub-category, according to the species concept used by these researchers. However, it is very difficult to decide when a population is sufficiently different from other populations to merit its ranking as a subspecies. For these reasons, subspecific and infrasubspecific ranks may become extremely subjective decisions of the degree of distinctiveness between groups of organisms (Kottelat, 1997). An evolutionary significant unit (ESU) is defined, in conservation biology, as a group of organisms that has undergone significant genetic divergence from other groups of the same species. According to Ryder, 1986 identification of ESUs requires the use of natural history information, range and distribution data, and results from analyses of morphometrics, cytogenetics, allozymes and nuclear and mitochondrial DNA. In practice, many ESUs are based on only a subset of these data sources. 
Nevertheless, it is necessary to compare data from different sources (e.g., analyses of distribution, morphometrics, and DNA) when establishing the status of ESUs. If the ESUs are based on populations that are sympatric or parapatric, then it is particularly important to give evidence of significant genetic distance between those populations. ESUs are important for conservation management because they can be used to identify discrete components of the evolutionary legacy of a species that warrant conservation action. Nevertheless, in evolutionary terms and hence in many systematic studies, species are recognized as the minimum identifiable unit of biodiversity above the level of a single organism (Kottelat, 1997). Thus there is generally more systematic information available for species diversity than for subspecific categories and for ESUs. Consequently, estimates of species diversity are used more frequently as the standard measure of overall biodiversity of a region.
Table $1$: Estimated Numbers of Described Species, Based on Lecointre and Guyader (2001)
Taxon | Common name | Number of species described* | Percentage of total described species*
Bacteria | true bacteria | 9021 | 0.5
Archaea | archaebacteria | 259 | 0.01
Bryophyta | mosses | 15000 | 0.9
Lycopodiophyta | clubmosses | 1275 | 0.07
Filicophyta | ferns | 9500 | 0.5
Coniferophyta | conifers | 601 | 0.03
Magnoliophyta | flowering plants | 233885 | 13.4
Fungi | fungi | 100800 | 5.8
"Porifera" | sponges | 10000 | 0.6
Cnidaria | cnidarians | 9000 | 0.5
Rotifera | rotifers | 1800 | 0.1
Platyhelminthes | flatworms | 13780 | 0.8
Mollusca | mollusks | 117495 | 6.7
Annelida | annelid worms | 14360 | 0.8
Nematoda | nematode worms | 20000 | 1.1
Arachnida | arachnids | 74445 | 4.3
Crustacea | crustaceans | 38839 | 2.2
Insecta | insects | 827875 | 47.4
Echinodermata | echinoderms | 6000 | 0.3
Chondrichthyes | cartilaginous fishes | 846 | 0.05
Actinopterygii | ray-finned bony fishes | 23712 | 1.4
Lissamphibia | living amphibians | 4975 | 0.3
Mammalia | mammals | 4496 | 0.3
Chelonia | living turtles | 290 | 0.02
Squamata | lizards and snakes | 6850 | 0.4
Aves | birds | 9672 | 0.6
Other | | 193075 | 11.0
* The total number of described species is assumed to be 1,747,851. This figure, and the numbers of species for taxa, are taken from LeCointre and Guyader (2001).
Glossary Species diversity the number of different species in a particular area (i.e., species richness) weighted by some measure of abundance such as number of individuals or biomass. Species richness the number of different species in a particular area Species evenness the relative abundance with which each species is represented in an area. Phylogenetic diversity the evolutionary relatedness of the species present in an area. Morphological species concept species are the smallest natural populations permanently separated from each other by a distinct discontinuity in the series of biotypes (Du Rietz, 1930; Bisby and Coddington, 1995). Biological species concept a species is a group of interbreeding natural populations unable to successfully mate or reproduce with other such groups, and which occupies a specific niche in nature (Mayr, 1982; Bisby and Coddington, 1995). Phylogenetic species concept a species is the smallest group of organisms that is diagnosably [that is, identifiably] distinct from other such clusters and within which there is a parental pattern of ancestry and descent (Cracraft, 1983; Bisby and Coddington, 1995). Evolutionary significant unit a group of organisms that has undergone significant genetic divergence from other groups of the same species.
Identification of ESUs is based on natural history information, range and distribution data, and results from analyses of morphometrics, cytogenetics, allozymes and nuclear and mitochondrial DNA. Concordance of those data, and the indication of significant genetic distance between sympatric groups of organisms, are critical for establishing an ESU. Ecosystem a community plus the physical environment that it occupies at a given time. Sympatric occupying the same geographic area. Parapatric occupying contiguous but not overlapping ranges. 6: Species Diversity Global biodiversity is frequently expressed as the total number of species currently living on Earth, i.e., its species richness. Between about 1.5 and 1.75 million species have been discovered and scientifically described thus far (LeCointre and Guyader, 2001; Cracraft, 2002). Estimates for the number of scientifically valid species vary partly because of differing opinions on the definition of a species.For example, the phylogenetic species concept recognizes more species than the biological species concept. Also, some scientific descriptions of species appear in old, obscure, or poorly circulated publications. In these cases, scientists may accidentally overlook certain species when preparing inventories of biota, causing them to describe and name an already known species. More significantly, some species are very difficult to identify. For example, taxonomically "cryptic species" look very similar to other species and may be misidentified (and hence overlooked as being a different species). Thus, several different, but similar-looking species, identified as a single species by one scientist, are identified as completely different species by another scientist. For further discussion of cryptic species, with specific examples of cryptic frogs from Vietnam, see Inger (1999) and Bain et al., (in press). Scientists expect that the scientifically described species represent only a small fraction of the total number of species on Earth today. Many additional species have yet to be discovered, or are known to scientists but have not been formally described. Scientists estimate that the total number of species on Earth could range from about 3.6 million up to 117.7 million, with 13 to 20 million being the most frequently cited range (Hammond, 1995; Cracraft, 2002). The estimation of total number of species is based on extrapolations from what we already know about certain groups of species. For example, we can extrapolate using the ratio of scientifically described species to undescribed species of a particular group of organisms collected from a prescribed area. However, we know so little about some groups of organisms, such as bacteria and some types of fungi, that we do not have suitable baseline data from which we can extrapolate our estimated total number of species on Earth. Additionally, some groups of organisms have not been comprehensively collected from areas where their species richness is likely to be richest (for example, insects in tropical rainforests). These factors, and the fact that different people have used different techniques and data sets to extrapolate the total number of species, explain the large range between the lower and upper figures of 3.6 million and 117.7 million, respectively. While it is important to know the total number of species of Earth, it is also informative to have some measure of the proportional representation of different groups of related species (e.g. 
bacteria, flowering plants, insects, birds, mammals). This is usually referred to as the taxonomic or phylogenetic diversity. Species are grouped together according to shared characteristics (genetic, anatomical, biochemical, physiological, or behavioral) and this gives us a classification of the species based on their phylogenetic, or apparent evolutionary, relationships. We can then use this information to assess the proportion of related species among the total number of species on Earth. Table $1$ contains a selection of well-known taxa.
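As a rough check of the proportional representation, the percentages in Table $1$ can be reproduced directly from the species counts; a minimal sketch in R, using the assumed total of 1,747,851 described species:

```r
# Rough check of the percentages in Table 1: described species in a taxon
# as a share of the assumed total of 1,747,851 described species.
total <- 1747851
taxa  <- c(Insecta = 827875, Magnoliophyta = 233885, Mammalia = 4496)
print(round(100 * taxa / total, 1))   # roughly 47.4, 13.4, and 0.3 percent
```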
textbooks/bio/Ecology/Biodiversity_(Bynum)/6%3A_Species_Diversity/6.1%3A_Species_Diversity_as_a_Surrogate_for_Global_Biodiversity.txt
Whittaker (1972) described three terms for measuring biodiversity over spatial scales: alpha, beta, and gamma diversity. Alpha diversity refers to the diversity within a particular area or ecosystem, and is usually expressed by the number of species (i.e., species richness) in that ecosystem. For example, if we are monitoring the effect that British farming practices have on the diversity of native birds in a particular region of the country, then we might want to compare species diversity within different ecosystems, such as an undisturbed deciduous wood, a well-established hedgerow bordering a small pasture, and a large arable field. We can walk a transect in each of these three ecosystems and count the number of species we see; this gives us the alpha diversity for each ecosystem; see Table \(1\) (this example is based on the hypothetical example given by Meffe et al., 2002; Table 6.1). If we examine the change in species diversity between these ecosystems then we are measuring the beta diversity. We are counting the total number of species that are unique to each of the ecosystems being compared. For example, the beta diversity between the woodland and the hedgerow habitats is 7 (representing the 5 species found in the woodland but not the hedgerow, plus the 2 species found in the hedgerow but not the woodland). Thus, beta diversity allows us to compare diversity between ecosystems. Gamma diversity is a measure of the overall diversity for the different ecosystems within a region. Hunter (2002: 448) defines gamma diversity as "geographic-scale species diversity". In the example in Table \(1\), the total number of species for the three ecosystems is 14, which represents the gamma diversity.

Hypothetical species | Woodland habitat | Hedgerow habitat | Open field habitat
A | X | |
B | X | |
C | X | |
D | X | |
E | X | |
F | X | X |
G | X | X |
H | X | X |
I | X | X |
J | X | X |
K | | X |
L | | X | X
M | | | X
N | | | X
Alpha diversity | 10 | 7 | 3

Beta diversity: woodland vs. hedgerow, 7; hedgerow vs. open field, 8; woodland vs. open field, 13
Gamma diversity: 14

Table \(1\) Alpha, beta and gamma diversity for hypothetical species of birds in three different ecosystems

Glossary
Ecosystem: a community plus the physical environment that it occupies at a given time.
Alpha diversity: the diversity within a particular area or ecosystem; usually expressed by the number of species (i.e., species richness) in that ecosystem.
Beta diversity: a comparison of diversity between ecosystems, usually measured as the amount of species change between the ecosystems.
Gamma diversity: a measure of the overall diversity within a large region; "geographic-scale species diversity" according to Hunter (2002: 448).

8: Introduction to Utilitarian Valuation of Biodiversity Determining the value or worth of biodiversity is complex. Economists typically subdivide utilitarian or use values of biodiversity into direct use value for those goods that are consumed directly, such as food or timber, and indirect use value for those services that support the items that are consumed, including ecosystem functions like nutrient cycling. There are several less tangible values that are sometimes called non-use or passive values, for things that we don't use but would consider as a loss if they were to disappear; these include existence value, the value of knowing something exists even if you will never use it or see it, and bequest value, the value of knowing something will be there for future generations (Moran and Pearce 1994).
Potential or Option value refers to the use that something may have in the future; sometimes this is included as a use value, but we have chosen to include it within the passive values here based on its abstract nature. The components included within the category of "utilitarian" values vary somewhat in the literature. For example, some authors classify spiritual, cultural, and aesthetic values as indirect use values, while others consider them to be non-use values, differentiated from indirect use values -- such as nutrient cycling -- because spiritual, cultural, and aesthetic values for biodiversity are not essential to human survival. Still others consider these values as separate categories entirely. (See also Callicott 1997, Hunter 2002, Moran and Pearce 1994, Perlman and Adelson 1997, Primack 2002, Van Dyke 2003.) In this module, we include spiritual, cultural and aesthetic values as a subset of indirect values or services, as they provide a service by enriching our lives (Table \(1\)).

Direct Use Value (Goods): food, medicine, building material, fiber, fuel.
Indirect Use Value (Services): atmospheric and climate regulation, pollination, nutrient recycling; cultural, spiritual, and aesthetic values.
Non-Use Values: Potential (or Option) Value, the future value either as a good or a service; Existence Value, the value of knowing something exists; Bequest Value, the value of knowing that something will be there for future generations.

Table \(1\) Categories of Values of Biodiversity

Note: Some authors choose to differentiate cultural, spiritual, aesthetic, and non-use values from those services that provide basic survival needs such as the air we breathe.

Glossary
Direct use value: refers to products or goods which are consumed directly, such as food or timber.
Indirect use value: refers to the services that support the products that are consumed; this includes ecosystem functions like nutrient cycling.
Non-use or passive value: refers to the value of things that we don't use but would feel a loss if they were to disappear.
Existence value: the value of knowing something exists even if you will never use it or see it.
Bequest value: the value of knowing something will be there for future generations.
Potential or option value: refers to the use that something may have in the future.

9: Biodiversity over Time The history of life on Earth is described in various publications and web sites (e.g., Speer, B.R. and A.G. Collins, 2000; Tudge, 2000; Lecointre and Guyader, 2001; Maddison, 2001; Eldredge, 2002); it is also discussed in the module on Macroevolution: essentials of systematics and taxonomy. For the current purpose of understanding what biodiversity is, it is only necessary to note that the diversity of species, ecosystems and landscapes that surround us today is the product of perhaps 3.7 billion (i.e., 3.7 × 10^9) to 3.85 billion years of evolution of life on Earth (Mojzsis et al., 1996; Fedo and Whitehouse, 2002). Thus, the evolutionary history of Earth has physically and biologically shaped our contemporary environment. As noted in the section on Biogeography, plate tectonics and the evolution of continents and ocean basins have been instrumental in directing the evolution and distribution of the Earth's biota. However, the physical environment has also been extensively modified by these biota. Many existing landscapes are based on the remains of earlier life forms.
For example, some existing large rock formations are the remains of ancient reefs formed 360 to 440 million years ago by communities of algae and invertebrates (Veron, 2000). Very old communities of subterranean bacteria may have been responsible for shaping many geological processes during the history of the Earth, such as the conversion of minerals from one form to another, and the erosion of rocks (Fredrickson and Onstott, 1996). The evolution of photosynthetic bacteria, sometime between 3.5 and 2.75 billion years ago (Schopf, 1993; Brasier et al., 2002; Hayes, 2002), played an important role in the evolution of the Earth's atmosphere. These bacteria released oxygen into the atmosphere, changing its composition, which formerly consisted mainly of carbon dioxide, with other gases such as nitrogen, carbon monoxide, methane, hydrogen and sulphur gases present in smaller quantities. It probably took over 2 billion years for the oxygen concentration to reach the level it is today (Hayes, 2002), but the process of oxygenation of the atmosphere led to important evolutionary changes in organisms so that they could utilize oxygen for metabolism. The rise of animal and plant life on land was associated with the development of an oxygen-rich atmosphere.
textbooks/bio/Ecology/Biodiversity_(Bynum)/7%3A_Alpha_Beta_and_Gamma_Diversity.txt
Winston Churchill pointed out that "All the great things are simple, and many can be expressed in a single word—freedom, justice, honor, duty, mercy, hope." Should we try to define these? Can we define them? We should at least try to define our subject, ecology; many textbooks start with definitions. But first, for background, consider how we might define life. Marvin Minsky was an artificial intelligence researcher and computer scientist who thought about definitions. When is an object alive? Think about viruses, genes, self-reproducing machines—no one has really been able to give a good definition of "living" that satisfies in general. Some things are clearly living—mice—and some clearly are not—rocks. Lists of what makes something living used to appear in textbooks:
1. Self-reproducing
2. Responds to stimuli
3. Metabolizes
4. Made of protoplasm—protein, carbohydrates, DNA.
But (1) puts out the mule, (2) and (3) put out the spore, while if those conditions are dropped, (4) will admit the frankfurter. One can go on to extend the list with more careful qualifications, but questions remain until the list grows to include special mention of everything we can think of.

1.02: Definitions of ecology With caveats in mind, consider definitions of ecology. In the 1860s, Ernst Haeckel combined the term oikos—a place to live, home, habitat—with logia—discourse, study—to coin the word "ecology." In the 1890s Ellen Richards included humans and harmony, quite a modern view. Variations over the years are shown in Table \(1\).

Table \(1\) Various views of ecology.
Haeckel | 1860s | The total relations of an organism to its organic and inorganic environment
Richards | 1890s | Living in harmony with the environment, first including the human species
Elton | 1920s | Scientific natural history
Odum | 1960s | The study of structure and function of nature, including the human species
Andrewartha | 1960s | The scientific study of the distribution and abundance of organisms
Krebs | | The scientific study of the interactions that determine the distributions and abundance of organisms
Molles | 1990s | The study of relationships between organisms and the environment
Eilts | 2010s | Life in context
Pope Francis | 2015 | The relationship between living organisms and the environment in which they develop

Each of these definitions has merit, but the first two and the last two are closest to the way the term is applied in this book. We humans have become prominent in ecology, locally to globally. No modern treatment of ecology is complete without a strong dose of anthropology. The definition by Andrewartha has been widely quoted, but focusing merely on distribution and abundance reduces ecology to mapping, which is why Krebs modified this definition. The Pope's definition from his 2015 Encyclical includes the interesting idea of development, which can be taken to mean short-term development like embryogenesis and growth, plus long-term development like evolution. Overall, the definition by Eilts is perhaps the most general and engaging. First and foremost, the most important concepts in ecology are about relationships, plus all of life, the whole environment, the processes of living and development, and, above all, context. And in today's world, harmony.
But also consider, “Poetry is the subject of the poem” (Wallace Stevens, 1937) and perhaps “Ecology is what ecologists do.” With these in mind, we strive in the remainder of this book to define a theoretical form of ecology through examples and demonstrations, representative models and symbols, patterns and explanations, and lessons and caveats.
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/01%3A_What_is_Ecology/1.01%3A_Introduction.txt
Our early hominin ancestors needed aspects of ecology. To find blueberries or other fruit, or where to dig wild onions, they had to know where these foods grew—their distribution and abundance. These parts of ecology have thus been part of life for hundreds of thousands of years. Ecology is connected with our species. Some elements of the field of ecology were formalized more than 3000 years ago. The Rhind Papyrus lists a number of ecological exercises for students—mathematics from ancient Egypt. Among these oldest ecological problems is this: Number 27. If a mouse eat 521 ikats of grain each year and a cat kills 96 mice a year, in each of 24 barns, how many cats are required to control the destruction of stored grain? This is a little problem in quantitative ecology! Even 36 centuries ago, mathematical ecology was part of life. Knowing how many grain bins determined how many cats were to be employed. Today, ecology has become a glamour word. A product called “Ecogate,” for example, is part of a central vacuum system that keeps sawdust and sanding dust from being tracked around. But why the word Ecogate? Dust collection per se has nothing to do with ecology. Advertisers, however, have found that consumers respond positively to the term. The term “ecosystem” is frequently used in business and finance, but there it means a collection of companies, customers, and products and their interconnections. For better or worse, ecological terminology is expanding to other domains. 1.04: Methods of ecology How do ecologists do ecology? Often, they start with observation, then move to theory—trying to fit observations together to make sense as a whole. Theory then leads to expectations, which in turn lead to experiments. Commonly, experiments aren’t undertaken until there is some theory to be tested and understood. 1. Observation 2. Theory 3. Experiment 4. Serendipity Observation, theory, and experiment, however, are not the whole story. A large part of science turns out to be serendipity—luck and chance—capitalizing on chance and doing something with it. One example is Alexander Fleming, who discovered penicillin. Some of the bacterial cultures in his lab became contaminated with penicillium mold and the cultures died. That ruined his experiment. He could have written a memo to the laboratory staff ordering “Always keep mold away from our bacterial cultures. It destroys the cultures and will ruin the hypotheses we are trying to test.” But instead he capitalized on the serendipity, wondered what was happening, and found a substance in penicillium mold that kills bacteria. Fungi and bacteria have been archenemies for perhaps a billion years. Fleming’s discovery has helped physicians actually cure disease, rather than being limited to diagnosing and prognosticating. Following up on chance is, then, a large part of science. By the way, for an interesting paper, read the original 1929 report by Fleming about penicillium. It is so understated. He writes “the name ‘penicillin’ has been given to filtrates of broth cultures of the mould.” No one had heard of the word before. Then he suggests that “it may be an efficient antiseptic.” One of the greatest discoveries of all time and only, “it may be an efficient antiseptic.” Cedar Creek is a University of Minnesota research site about thirty miles north of the University’s Saint Paul campus, and is one of the classic ecological research sites in the world. Pictured in Figure \(1\) is an experiment set up by Prof. David Tilman. 
While very carefully designed, it came about because of serendipity—the chance event of a deep two-year drought that altered the abundances of species in a particular way and triggered the idea for this experiment. Keep your eyes open for such chance events; they can crop up anywhere.
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/01%3A_What_is_Ecology/1.03%3A_Ecology_then_and_now.txt
• 2.1: Levels of Ecology Ecology covers a vast range of topics and can be viewed on multiple levels. These levels include (1) individual organism, (2) population ecology, (3) community ecology, (4) ecosystem ecology, and (5) global ecology.
• 2.2: Role of theory From its early days, ecology has been in part a theoretical–mathematical science, and it is now also a computational science. Mathematical theory arises where systems are relatively simple. In our modern era, computation can address somewhat more complex systems, though creating computations on complex systems that satisfy the basic tenets of science is still problematic. For very complex systems, narrative is all we have available.
• 2.3: What is a model? Science strives for simplicity, and models are part of the process. A model is just a simplified view of something more complex.
• 2.4: Present State

02: Ecological Theory Ecology covers a vast range of topics and can be viewed on multiple levels. One level is that of the individual organism—a single bacterium, an individual wolf pup. This includes individual behavior and physiology, with behavior as part of ecology. Population ecology covers groups of organisms of the same species—a bison herd or a grove of maples. Community ecology looks at how different populations interact, and the communities examined can be quite large. Above this level is ecosystem ecology, which examines how different communities interact with their environments. Finally, there is global ecology—ecology of the planetary ecosystem.

Individual ecology | Single organisms, behavior, and physiology
Population ecology | Groups of organisms from a single species
Community ecology | Populations of interacting species
Ecosystem ecology | Multiple communities and the environment
Global ecology | The planet as a biosphere

2.02: Role of theory From its early days, ecology has been in part a theoretical–mathematical science, and it is now also a computational science. Mathematical theory arises where systems are relatively simple. In our modern era, computation can address somewhat more complex systems, though creating computations on complex systems that satisfy the basic tenets of science is still problematic. For very complex systems, narrative is all we have available. Examine the levels in Figure 2.1.1 to think about where theory applies. Subatomic particles and atoms are the realm of quantum mechanics, one of the most sublime and successful theories. Theory applies nicely to the hydrogen atom, a two-particle object. And while it applies to larger atoms, the raw mathematics becomes too complex as the number of particles grows, so computation comes into play. At higher levels like the molecular one, theory is harder to apply. Organic chemistry, for example, is not a strongly mathematical science, and at the level of protoplasm and cells there is no comprehensive mathematical theory or computational equivalent. This level is far too complex—with minuscule molecular machines running along tubules and carrying mitochondria on their backs at high speed relative to their size, it is more complex than any industrial factory. At the level of tissues and organ systems, we have only narratives to guide our understanding. What happens, then, at the level of organisms, at the entry to ecology? Individual organisms are exceedingly complex. There is no complete mathematical theory for the internal operation of individual organisms.
But externally, organisms behave as a unit and populations become simpler than individuals—glossing over heartbeat, neuron firing rates, white blood cell replication, and so on, with all their enormous complexity. Details disappear. Populations can be described with basic mathematics. Communities are more complex, but are still within the reach of mathematics and, particularly, within the reach of computation. And ecosystems are complex, but with some unifying properties. The whole earth thus begins to be simpler, and at the level of planets and solar systems, things once again become nicely mathematical. This is the level where, with Newton, modern science was born. In part, this emerging simplicity is because levels of detail again merge together. At the level of planetary orbits, it does not matter that dinosaurs once dominated the planet or that Mozart ever wrote any concertos. At larger scales still, solar systems are completely describable with computers, although the mathematics becomes difficult, and as we move out into galaxies and the entire universe the descriptions become difficult again. Changing scales thus involves the successive movement in and out of simplicity. Where is the complexity in the universe greatest? It turns out to be at about one meter. In other words, at our scale. A great spike in complexity appears just where we and other forms of life arose. That is no accident. A philosophical idea called the weak anthropic principle suggests that any part of the universe that can sit around and contemplate itself and the larger universe must itself be complex. We are constrained to live at a scale of great complexity, or not to exist at all. That is worth some reflection. But we try to find simplicity among this complexity, to let us feel we understand, and to let us predict what can happen. 2.03: What is a model Science strives for simplicity, and models are part of the process. What is a model? It is just a simplified view of something more complex. The word “model” is used here essentially as it’s used in everyday English. For example, in ordinary English, “modeling clay” can be used to make simplified miniatures of three-dimensional images of animals, automobiles, buildings, or even full-scale three-dimensional images of objects like the human heart. A “model airplane” can be rendered to show at a glance the physical appearance of a large aircraft, and can even be constructed to fly so as to test aerodynamics under proper rescaling. A “model organism” is a simpler organism that may respond to medical tests or treatments in ways similar to those of a more complex organism. Even the fashion model on the runway meets this definition of a simplified view of something more complex. The infinite complexity of the human spirit is not relevant on the runway; all that is relevant in this context is the person as a realistic way to display fashions. This book focuses on computational and mathematical models of ecological systems. What is left out of these models is as important as what is put in. Simplification is key. If you have a complex natural system you don’t understand, and you construct a computer model incorporating everything you can about that natural system, you now have two systems you don’t understand. — after Chris Payola, UMN A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away. 
— Antoine de Saint-Exupery Two different simplifications of time are commonly used in ecological models: • Discrete time — Events happen at periodic time steps, as if time is non-existent in between. • Continuous time — Events happen smoothly and at all times. In addition, there are two different classes of models: • Macroscale — Individual organisms are not tracked, but are measured in aggregate and represented by composite variables such as N. • Microscale — Individual organisms are tracked separately. These are also known as agent-based or individual-based models. Macroscale models can be handled either by computers or mathematics, but microscale models are usually restricted to computers. Keep in mind that all four categories are only approximations of reality. Later in this book we will also explore mechanistic versus phenomenological models. 2.04: Present State As a surprising side note, the standard models commonly taught in ecology courses are not complete, and a main purpose of this book is to help make them more so. One aspect of theory related to simple species, for instance—called orthologistic population growth— is rarely even studied, much less taught, yet is essential for understanding rapidly growing populations, including human populations in millennia past. For two-species interactions, another theory concerning mutualisms and a related kind of population growth is highly under-developed, and the theory of three-species interactions is even less complete. Figure \(1\) The eternal mystery of the universe is its comprehensibility. —A. Einstein
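As a toy illustration of the macroscale versus microscale distinction above, here is one step of a doubling population written both ways; this is only a sketch, with individuals reduced to entries in a vector, not a model from the text.

```r
# Macroscale: the population is summarized by a single aggregate variable N.
N <- 4
N <- N * 2                                  # one time step of doubling
print(N)                                    # 8

# Microscale (individual- or agent-based): each organism is tracked separately.
ages <- c(3, 3, 1, 1)                       # one attribute (age) per individual
ages <- c(ages + 1, rep(0, length(ages)))   # all age by one step; each adds an offspring of age 0
print(length(ages))                         # 8 individuals now
```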
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/02%3A_Ecological_Theory/2.01%3A_Levels_of_Ecology.txt
One way to describe populations is to look at how many individuals they contain at various times. Or, instead of individuals, it may be more reasonable to consider total biomass— the total weight of all individuals in the population combined. For example, the number of trees may not be as important as their total weight, or the total area of their canopy. Density may also be relevant—how many individuals occupy a unit of area, or the percentage covered by the population in an area. All of these are gross properties of populations that can enter models. Additional properties that can be taken into account include the age structure—the portions of the population of various ages, and the size structure—the portions of the population of various sizes. Such detail can be important because juveniles or older individuals may not reproduce. Genetic structure can also be important; it is often left out of ecological models, but evolutionary directions can affect the ecology as well. Another important measure is the rate of change—how fast the population is changing. A population can be constant, increasing, or decreasing, or can fluctuate in complex ways. Remember that a model is just a simpler view of something complex, and that all scientific models are just approximations of nature. When the full complexity cannot be understood, which is almost always, we can try to construct a simplified model in hopes of finding the essence. 3.02: Bacterial Growth The place to start a discussion of basic population models is in discrete-time at the macroscale—the most basic. Consider a hypothetical strain of bacteria reproducing every hour. Suppose a colony is started with a single bacterium when the clock reads zero, symbolized t = 0, and one hour later that bacterium divides into two, by time t = 1. Each of those divides in two one hour after that, by time t = 2. And so forth. Suppose you start this culture on a lab bench on Monday, let it go unchecked, and then come back late on Friday to see the results. If the colony grows unchecked in this way, how many bacteria will there be at the end of the work week, after five full days of growth? You could figure that out on a calculator, or more easily you could use computer code. More easily, that is, once you know computer coding. Because computer coding is becoming embedded in almost every aspect of life, appreciating the basics of coding is meaningful for each educated citizen of this century, and this book will expose you to the basics. 3.03: A First Computer Model Below are two lines of computer code forming a program that models the bacterial colony as it doubles every hour. If you have not seen computer code before, don’t be frightened by this; we will go through it carefully below. `N=1; t=0; ` `while(t<=5*24) { print(N); N=N*2; t=t+1; }` The above program is written in a generic programming language—it could be run as the programming language R, as the language AWK, or, with minor adjustments as the language C, Java, or a number of others. This particular code is essentially identical in many languages. The first “statement” is N=1. That instructs the computer to set the number of bacteria, N, equal to 1. A semicolon (;) ends the statement and separates it from subsequent statements. The next statement is t=0. That instructs the computer to set the time t to 0. 
One bacterium at time zero forms the “initial conditions,” and once the computer has finished that line, the program is said to be “initialized.” The second line of the program is more involved, but can be understood in two parts. The first part on the left, while(t<=5*24), instructs the computer to repeat a set of code for 5 simulated days of 24 hours each. The second part is the code to be repeated, within braces on the right, {...}. Considering the first part, while is a “keyword” that instructs the computer to repeat something until the “condition” in parentheses is no longer true. In this case, inside the parentheses is t<=5*24, which itself consists of three parts, t, <=, and 5*24. The first part, t, represents the time, which has just been initialized to zero in the previous line of code. The second part, <=, is the symbol for “less than or equal to.” Six such “comparison” symbols are possible, ==, <, <=, >, >=, and !=, representing comparison for equal, less than, less than or equal, greater than, greater than or equal, and not equal, respectively. In the third part, the asterisk (*) is a symbol for multiplication, so 5*24 means “five times twenty-four,” a way to represent the number 120, or the number of hours from Monday to Friday—the amount of time the hypothesized bacterial culture is to reproduce. Computer coding is an exacting business, where tiny variations can make huge differences. The computer is the ultimate literal interpreter. An example of this just slipped by in the previous paragraph. In coding, a single equals sign, =, means “change something to be equal to,” whereas two consecutive equals signs, ==, means “compare to see if two things are the same.” If you are accustomed to coding, you will already be familiar with such subtleties; if this is all new to you, it is something to get used to. Various primers on the web can help, but don’t be discouraged if it seems difficult at first; computer coding turns out to be one of the easiest things to jump into but one of the most difficult areas of all human endeavour to get exactly right. Time and patience will assist. Getting back to the code, the phrase while(t<=5*24) means, in this case, to repeat something as long as the time, t, is less than or equal to 120 hours, 5 times 24. And that something to be repeated appears within braces to the right, {...}. (By the way, many programming languages use three main symbols for grouping information—called braces, { }, brackets, [ ], and parentheses, ( ). They are used for various kinds of groupings, but unfortunately their usage is not consistent across all languages.) The first statement within braces is print(N). (Refer back to the two-line program.) “Print” is a term left from the days when computers would communicate largely by printing on paper. Now the term just means “display.” The statement thus means “display the number of individuals in the population, N, at this time.“ That was set to 1 in the previous line, so when the computer runs print(N) for the first time, it will display the number 1, typically on your screen. The next statement, N=N*2, is read “N equals N times two.” It is similar in form to the statement on the first line, N=1, which started things off with a single bacterium. The ‘N=’ part is the same. It tells the computer that the number of bacteria, N, is about to change. (Of course, the computer has no clue what the program is about—that you are running a program about bacteria.) What N will change to is immediately to the right of the equal sign, N*2. 
The asterisk (*) means multiply. This statement thus tells the computer to double the value of N. That is what the hypothesized bacterial population does every hour, so this statement models that doubling. The third statement on the line, t=t+1, is read "t equals t plus one." It is similar in form to the statement on the first line, t=0, which started things off with a clock time of zero. In other words, in this example of letting bacteria grow for a five-day work week, we are taking midnight Monday morning to be hour zero. Five days later, at midnight Friday night, that becomes hour 120 (5 days times 24 hours per day equals 120 hours). So similarly, t= tells the computer that time t is about to change. Following the equals sign is what it should change to, t+1, or one more than what it is at the moment, which advances the time by one hour. This is a discrete time model, so it approximates the real system by modeling only specific moments. Those three statements are run in order, from left to right, first displaying the number of bacteria, then modeling the doubling of the bacterial population, and then advancing to the next hour. By the way, it may have occurred to you that the last two statements could be written in either order, or even run at the same time—they are independent, so the ordering would not matter. After all three statements are run, your display will contain the number 1, N will be 2, and t will be 1. The computer next examines the code inside the parentheses associated with the keyword while to see if the three statements inside the braces should be run again. That condition specifies that as long as the time t is less than or equal to 120, the three statements must be repeated. At this point, t is equal to 1, which certainly is less than 120. Therefore the three statements will be run again. This is called a "loop," and now the computer will begin the second time around the loop, running the three statements again as they ran before, but now with altered values of N and t. First it will display N, which is now equal to 2, so it will display 2. Then it will double N again, changing it from 2 bacteria to 4, and increase the time t by 1, from hour 1 to hour 2. Thus the process repeats. Examining the condition inside the parentheses, the computer finds that 2 is less than or equal to 120, and so the three statements inside braces are run again. This goes on and on until t is greater than 120, at which time the loop is finished. At the end, t will be 121 and N will be whatever number has been reached by the process of doubling. This code illustrates two fundamental aspects of computer coding: "condition testing" and "looping." In larger programs loops are "nested" within other loops and condition tests are nested correspondingly. But this two-line program, with the first line initializing and the second line running a loop, is sufficient for our first model. You will soon see that this is not a trivial model, but one that demonstrates an inviolable law of biology, which Darwin put directly to use in creating his theory of evolution.
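Since the text notes that this generic code runs as R, here is the same two-line program written out in R with one statement per line and comments added; nothing about the model changes, and the `<-` arrows are just R's usual assignment style (`=` also works).

```r
# The two-line bacterial model, spread out and commented.
N <- 1                  # initial condition: one bacterium
t <- 0                  # clock starts at hour zero

while (t <= 5 * 24) {   # repeat through five 24-hour days
  print(N)              # display the current population size
  N <- N * 2            # the population doubles each hour
  t <- t + 1            # advance the clock by one hour
}
```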
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/03%3A_A_Basic_Population_Model/3.01%3A_Characterizing_populations.txt
Here is what the program produces, shortened to fit on a page of this book. 1 2 4 8 16 32 ... 6.64614 × 10^35 1.32923 × 10^36 If you run this program in R or another suitable language, you should see something essentially identical to the above. Between Monday and Friday, 120 bacterial doublings would produce over 10^36 bacteria—that's 1 followed by 36 zeros. That is the computational result. The scientific question is how many individuals this amounts to. Worked out exactly, it is this number: 2^120 = 1,329,227,995,784,915,872,903,807,060,280,344,576. To understand the size of this number, suppose the bacteria are roughly cubical 1 µm on a side—one millionth of a meter, or about four hundred-thousandths of an inch (a suitable order-of-magnitude for a bacterium). What volume will the colony occupy in cubic meters at the end of the work week, after five full days of growing unchecked? You might want to speculate: will it fill the culture plate, overflow onto the lab bench, fill the lab, or what? Work it out and you will see that the answer is 2^120 bacteria times 10^-18 cubic meters per bacterium, which equals about 1.3 × 10^18 cubic meters total. How large is that? Estimate the ocean to be a film averaging 3.7 kilometers deep and coating two-thirds of a sphere with a 6400 kilometer radius (this approximates the amount of the earth's surface that is covered by ocean). This is about 1.3 × 10^18 cubic meters! At the end of five days, the colony unchecked would thus fill all oceans of the earth with a dense microbial mass, from the greatest depths up to the surface! This result has deep-reaching implications. First, even though this bacterial model can be quite accurate for a day or so, it fails completely over the course of a week. All models are approximations to reality, at best applicable over a suitable range. Second, there are lessons in its failure. It illustrates one of the inviolable laws of biology—that no population growth can remain unlimited for long. And third, in a mind like Charles Darwin's, and coupled with other biological principles, it leads to the conclusion that organisms must evolve. That is the story of Darwin's elephants. 3.05: Darwin's elephants With elephants recognized as the slowest breeders of all known animals, Darwin made a laborious calculation, similar to the bacterial calculation above but more detailed, assuming that elephants started breeding at age 30 and continued until age 90, producing 6 young in that time. Of course he had no computers, nor calculators; he apparently kept track of 90 or more age classes and made his calculations by hand on paper, and alas those notes have never been found. But he said it cost him "some pain" to reach the conclusion that at the end of the fifth century, fifteen million elephants would be walking the earth, descended from one original pair. From this, he concluded that unlimited growth is impossible. There is no exception to the rule that every organic being naturally increases at so high a rate that, if not destroyed, the earth would soon be covered by the progeny. — Charles Darwin, 1859 That, he explained in Chapter Three of his Origin of Species. After explaining results of selection by people in the breeding of domestic animals, he introduced the concept of selection by natural causes in the wild, which he called "natural selection." The simplest model of unlimited population growth was thus useful in the extreme, leading to an inviolable law of biology and the theory of evolution as one of its consequences.
Individuals with qualities that allow them to suffer lower mortality or to reproduce slightly faster, and who pass those qualities to their offspring, will be the ones whose qualities predominate.
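Returning to the ocean comparison in the program results above, the arithmetic can be checked in a few lines; a sketch only, using the same rough figures from the text (a 1 µm cubical bacterium, a 3.7 km deep ocean film over two-thirds of a sphere of radius 6400 km).

```r
# Check of the ocean-volume comparison from the program results above.
bacteria      <- 2^120               # population after 120 hourly doublings
cell_volume   <- 1e-18               # cubic meters per 1-micrometer cubical bacterium
colony_volume <- bacteria * cell_volume
print(colony_volume)                 # about 1.3e18 cubic meters

radius <- 6400e3                     # meters
depth  <- 3.7e3                      # meters
ocean_volume <- (2/3) * 4 * pi * radius^2 * depth
print(ocean_volume)                  # also about 1.3e18 cubic meters
```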
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/03%3A_A_Basic_Population_Model/3.04%3A_Program_results.txt
Density, in the sense of population density, refers to how many individuals are present on average per unit area. One could say, "The density of elk in Yellowstone National Park during the summer is about 3 to 6 per square mile." Sometimes, however, you will see density used as the total number in a place. You may see, "The density of elk in Yellowstone National Park during the summer is about 10 to 20 thousand." The symbol N is often used for population density. In the first case above, you would write N = 4.5, the midpoint between 3 and 6. In the second case you would write N = 15000. It should be clear from context what the area is. With this in mind, all of the following statements are equivalent:
1. The population doubles each hour. (As in the bacterial example of the previous chapter.)
2. The population N(t) doubles each hour. Here the number of individuals is represented by the letter N, and N(t) means population at time t. In the bacterial example, there was one bacterium at the beginning of the experiment when the clock started running, so you would write N(0) = 1. One hour later, the hypothetical population had doubled, so you would write N(1) = 2. Doubling successively then gives N(2) = 4, N(3) = 8, and so forth until after five days, or 120 hours, N(120) = 10^36, or slightly more—enough to fill all the oceans of the world.
3. The population N doubles each hour. Often the "(t)" is left off for simplicity, it being understood that the population N is a function of time.
4. N doubles each hour. Since N represents a population in this case, the word "population" will often be dropped for conciseness.
5. N(t+1) = 2N(t). In English this would be read "N of t plus 1 equals two times N of t." That simply means that the population at some time, anytime, t, when multiplied by 2, is the population in the next time step, t plus one.
6. The change in the population each hour is equal to the size of the population that hour. This may sound pretty confusing. But it means that the amount that the population increases in the time step is equal in size to the whole population. Usually the increase is much less than that, perhaps a few percent, but here we are dealing with a rapidly increasing bacterial population.
7. The change in the population each hour is N(t+1) minus N(t), which is to say N(t+1) − N(t) = 2N(t) − N(t) = N(t). Here the population in the next hour, N(t+1), minus the population now, N(t), which is the change in population, is twice the current population, 2N(t), minus the current population, N(t) (not less confusing, perhaps).
8. The change in the population each hour, call it "Delta N" or ∆N, is ∆N/∆t = 2N(t) − N(t) = N(t). Here the symbol delta (∆) means change in or the difference. So ∆N means the change in N, and ∆t means the change in t. So the change in N per unit of time is written ∆N/∆t, where delta t is the time unit being used, such as hour or day. This statement is thus the same as the previous one, but with symbols to shorten it.
9. ∆N/∆t = N. This means the population change in each time unit is equal to the population size itself. That is just because the population is doubling each time step.
10. $\frac{1}{N}\,\frac{∆N}{∆t}\,=\,1$. This is just dividing both sides of the previous equation by N, and perhaps looks even more confusing. However, in what follows, it turns out to be the most useful of all.
To move forward, let's focus on the last equation, with its parts colored in the box below.
In the first row, the “∆N = 1” refers to a change in the population of one individual, because delta (∆) means change. In the second row, the “∆t ” in the denominator modifies this to the change in each time step—in this, case each hour. In the third row, the 1/N modifies it drastically to mean the change in the population per individual in the population. This could mean that one new individual is born while the parent lives on, or that two new individuals are born and the parent dies, or that the parent divides in two, or other equivalent events. In this model, these details are abstractions that do not matter for purposes of projecting the population. The model simply records the number of offspring produced by each member of the population and surviving to reproduce. Multiplied by 100, this becomes the percentage growth of the population. For humans, this is like the number of children per family who survive to adulthood. (Though it has to be divided by two if there are two parents per family.) You have seen how rapidly that blows up, from the calculation in Chapter 3.
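To see statement 10 above numerically, the per-capita change can be computed directly from the doubling sequence; a minimal sketch:

```r
# Per-capita growth for the doubling population: (1/N)(Delta N / Delta t) should equal 1.
N  <- 2^(0:5)                # population at hours 0 through 5: 1 2 4 8 16 32
dN <- diff(N)                # change in N over each one-hour step
dt <- 1                      # the time step, one hour
per_capita <- (1 / N[-length(N)]) * (dN / dt)
print(per_capita)            # 1 1 1 1 1 -- one net offspring per individual per hour
```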
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/04%3A_Modeling_a_Single_Population/4.01%3A_Density-independent_growth.txt
Darwin made unparalleled use of a model that failed, but how can the model be improved so that it does not fail? Think of only three Black-Eyed Susan plants (Rudbeckia hirta) becoming established in Yellowstone National Park, one near the north-east entrance, one in the center, and a third near the south entrance—the plants thus separated by over 30 miles. How often would the same pollinator be able to visit two of the plants so the plants could reproduce? Rarely or never, because these pollinators travel limited distances. The plant’s growth rate will thus be 0. (In fact, it will be negative, since the three plants will eventually die.) Suppose instead that 1000 of these plants were scattered about the park, making them about 2 miles apart. Occasionally a pollinator might happen by, though the chance of it visiting one of the other Black-Eyed Susans would be very low. Still, with 1000 plants in the area, the growth rate could be slightly positive. Now consider 1,000,000 of those plants, making them about 100 meters apart. Pollination would now become relatively frequent. The growth rate of the population thus depends on the number of plants in the vicinity, meaning that this number must be part of the equation used to calculate the population growth rate. We can use the equation introduced earlier to calculate this rate. First, put a parameter in place of the 1, like this. $\frac{1}{N}\, \frac{∆N}{∆t}\, =\,r$, where r formerly was 1 Then attach a term that recognizes the density of other members of the population, N. $\frac{1}{N}\, \frac{∆N}{∆t}\, =\,r\,+\,sN$, Here r is related to the number of offspring each plant will produce if it is alone in the world or in the area, and s is the number of additional offspring it will produce for each additional plant that appears in its vicinity. Suppose r = 0 and s = 1/20, just for illustration, and start with three plants, so N (0) = 3. That is $\frac{1}{N}\, \frac{∆N}{∆t}\, =\,0\,+\,0.05\,N$, For watching the dynamics of this, multiply it out again $\frac{∆N}{∆t}\, =\,(0\,+\,0.05\,N)\,N$, and convert the model to computer code, like this. r=0; s=0.05; dt=1; t=0; N=3; print(N); while(t<=14) { dN=(r+s*N)*N*dt; N=N+dN; t=t+dt; print(N); } If you run this model in R (or other languages in which this code works, like C or AWK), you will see the numbers below. 3 3.45 4.045125 4.863277 6.045850 7.873465 10.97304 16.99341 31.43222 80.83145 407.5176 8,711.049 3,802,830 723,079,533,905 26,142,200,000,000,000,000,000 Graph these, and you will see the numbers expand past all bounds, vertically off the page. The blue line shows the unlimited bacterial growth (exponential growth) that helped lead Darwin to his idea of natural selection. The red line illustrates the new “density-enhanced growth” just being considered, where growth rate increases with density. Because it approaches a line that is orthogonal to the line approached by the logistic model, described later, we call this an “orthologistic model.” It runs away to infinity so quickly that it essentially gets there in a finite amount of time. In physics and mathematics this situation is called a “singularity”—a place where the rules break down. To understand this, it is important to remember that all models are simplifications and therefore approximations, and apply in their specific range. The orthologistic model applies well at low densities, where greater densities mean greater growth. But a different model will take over when the densities get too high. 
In fact, if a population is following an orthologistic model, the model predicts that some great change will occur in the near future—before the time of the singularity. In physics, models with singularities command special attention, for they can reveal previously unknown phenomena. Black holes are one example, while a more mundane one from physics is familiar to all. Consider a spinning coin with one point touching the table, spinning ever more rapidly as friction and gravity compel the angle between the coin and the table to shrink with time. It turns out that the physical equations that quite accurately model this spinning coin include a singularity—a place where the spinning of the coin becomes infinitely fast at a definite calculable time. Of course, the spinning cannot actually become infinitely fast. As the coin gets too close to the singularity—as its angle dips too near the table—it merely switches to a different model. That different model is a stationary coin. The exact nature of the transition between the spinning and stationary states is complex and debated, but the inevitability of the transition is not. It is no different in ecology. Reasonable models leading to singularities are not to be discounted, but rather considered admissible where they apply. They arise inescapably in human population growth, considered in the next chapter.
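To see why the singularity arrives at a finite time, the continuous-time version of the orthologistic example above (r = 0, s = 0.05, N0 = 3) can be solved directly; a sketch of the calculus, not part of the original text:

$\frac{dN}{dt}\,=\,sN^2$ $\Rightarrow\,\int\frac{dN}{N^2}\,=\,\int s\,dt$ $\Rightarrow\,-\frac{1}{N}\,=\,st\,-\frac{1}{N_0}$ $\Rightarrow\,N(t)\,=\frac{N_0}{1\,-\,sN_0\,t}$

The denominator reaches zero at t = 1/(sN0), which for s = 0.05 and N0 = 3 is about 6.7 time units, so the projected population runs off to infinity after only a handful of steps; the discrete simulation above lags the continuous version a little because it updates only once per time step.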
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/04%3A_Modeling_a_Single_Population/4.02%3A_Density-enhanced_growth.txt
What about outside of the range of the orthologistic model? Think of the same Black-Eyed Susans, not only close enough that pollinators can flit fluently from one to another, but also crowded so that they start to shade one another, and their roots start to compete for water and nutrients. What is a suitable model for this? The growth rate will again depend on the number of plants, but now more plants will reduce the growth rate. That just means a minus sign on s. $\frac{1}{N}\, \frac{∆N}{∆t}\, =\,r\,+\,sN$, s < 0 Again, r is the number of offspring each will produce if it is alone in the world, but with s negative, s is the number each plant will be unable to produce for each additional plant that appears in its vicinity. Suppose we have r = 1 and s = −1/1000, and we start with three plants, so N(0) = 3. Here is the code, with the new negative s in red. r=1; s=-0.001; dt=1; t=0; N=3; print(N); while(t<=20) { dN=(r+s*N)*N*dt; N=N+dN; t=t+dt; print(N); } Now, because s is negative, the growth rate $\frac{1}{N}\, \frac{∆N}{∆t}$ will drop as the population increases, so you might surmise that the rate will eventually reach zero and the population will level off. In fact, it levels off to 1000. Figure $1$ Logistic growth (green) contrasted with orthologistic growth (red) and exponential growth (blue). The value at which it levels off is called an "equilibrium," a value where the dynamical system becomes quiescent and stops changing. In the case of the logistic equation, it is also called the "carrying capacity," a level at which the environment cannot "carry" any larger population. But why 1000? What value of $\frac{1}{N}\, \frac{∆N}{∆t}$ will make the population level off? When ∆N is 0, that means "the change in N is zero." And that means N stops growing. And when ∆N is zero, the entire term on the left is zero and algebra proceeds as follows. $\frac{1}{N}\, \frac{∆N}{∆t}\,=\,r\,+\,sN$ $0\,=\,r\,+\,sN$ $-sN\,=\,r$ $N\,=\frac{-r}{s}$ So the carrying capacity is −r/s. In Figure 4.3, −r/s = −1/(−0.001) = 1000. Exactly where it ended up! This is the celebrated "logistic equation," published in 1838 by Pierre Verhulst. It is commonly written $\frac{∆N}{∆t}\,=\,rN(1\,-\frac{N}{K})$ Notice that when N is equal to K, the factor in parentheses on the right becomes 1 − N/N = 1 − 1 = 0, so the whole growth term ∆N/∆t becomes zero and the population stops growing. Thus K is the carrying capacity, and therefore K = −r/s. As an exercise, you might want to substitute −r/s for K in the equation above, then simplify and see if you get the r + sN formulation. 4.04: Parameter combinations Before moving further, consider all possible combinations of the parameters, as determined by their signs. There are six possibilities, ignoring growth rates of exactly zero as infinitely unlikely.
1. r>0, s>0 Orthologistic growth.
2. r<0, s>0 Orthologistic growth with an Allee point.
3. r>0, s=0 Exponential growth.
4. r>0, s<0 Logistic growth with a carrying capacity.
5. r<0, s<0 Inviable population declining to extinction.
6. r<0, s=0 Same as above.
Figure \(1\) shows three of these possibilities pieced together to form a complete population model. On the left in the figure, number 2 above, orthologistic growth with an Allee point, prevails at low densities, where larger numbers of other members of the species in the vicinity enhance growth. In the middle, number 3 above, exponential growth, occurs as a transition phase.
Finally on the right, number 4 above, logistic growth with a carrying capacity, takes over when crowding and other limitations reduce growth rates as larger numbers of other members of the species in the vicinity appear. The vertical axis in Figure \(1\) shows the individual growth rate, and the horizontal axis shows the population density. On the right, where the slope is negative, as the density approaches −r/s from the left the growth rate on the vertical axis drops to zero, so the population stops growing. This is the equilibrium value called the "carrying capacity." If something pushes the population above that value—immigration of animals from another region, for example—then the growth rate on the vertical axis drops below zero. The growth rate then is negative, and therefore the population declines. On the other hand, if something drops the population below that value—such as emigration of animals to another place—the growth rate on the vertical axis rises above zero. That growth rate is positive, and therefore the population grows. The carrying capacity is "stable." A value is said to be stable if it tends to restore itself when it is pushed away by some outside force. 4.05: Generalization In summary, the macroscale model for population dynamics of a single species, in its simplest form, is $\frac{1}{N}\,\frac{∆N}{∆t}\,=\,r\,+\,sN$ This is a straight-line form of a more general form presented by Hutchinson, $\frac{1}{N}\,\frac{∆N}{∆t}\,=\,r\,+\,sN\,+\,s_2N^2\,+\,s_3N^3\,+\,s_4N^4\,+\,...$ and of the most general form proposed by Kolmogorov, where f(N) can be any function of the population density N. $\frac{1}{N}\,\frac{∆N}{∆t}\,=\,f(N)$ The higher-order terms in the second equation could refine population projections if there were enough data to determine them. They are not really needed, however, because straight-line parts can be pieced together to form a general population growth curve, as in Figure 4.4.1. And as human population growth in Figure 6.3.1 will show, a piecewise approach can more closely approximate the real situation. Moreover, blending separate versions of the first equation can generalize to either the Hutchinson or Kolmogorov forms, as you will see in Chapter 18. Figure $1$ Trumpeter swans—the largest North American birds, with wingspans reaching ten feet—were nearing extinction until deliberate protection and reintroduction programs brought their r values back to viable levels.
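The six parameter combinations in the previous section can also be expressed as a small amount of code; the helper below is only an illustrative sketch (the function name `regime` is invented here), returning the regime and, where it applies, the equilibrium at −r/s.

```r
# Illustrative helper: classify the growth regime implied by r and s,
# following the six sign combinations listed in the parameter-combinations section.
regime <- function(r, s) {
  if (r > 0 && s > 0) return("orthologistic growth")
  if (r < 0 && s > 0) return(paste("orthologistic growth with an Allee point at N =", -r / s))
  if (r > 0 && s == 0) return("exponential growth")
  if (r > 0 && s < 0) return(paste("logistic growth with carrying capacity K =", -r / s))
  return("inviable population declining to extinction")   # r < 0 with s <= 0
}

regime(1, -0.001)    # "logistic growth with carrying capacity K = 1000"
regime(-0.5, 0.01)   # "orthologistic growth with an Allee point at N = 50"
```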
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/04%3A_Modeling_a_Single_Population/4.03%3A_Density-limited_growth.txt
$\frac{1}{N}\,\frac{∆N}{∆t}\,=\,r\,+\,sN$ $\leftarrow\,Difference\,equation\,model$ $\frac{1}{N}\,\frac{dN}{dt}\,=\,r\,+\,sN$ $\leftarrow\,Differential\,equation\,model$ Recall that the delta sign (∆) means change in or the difference. Compare the difference equation with the differential form, which uses the terminology $dN$ and $dt$. These represent infinitesimally small time steps, corresponding to our common-sense perception of time as divisible ever more finely without limit. In differential equations populations change smoothly rather than in finite steps—growth approximating that of organisms that can reproduce at any time, such as bacterial or human populations. It turns out that differential equations are harder for computers to solve than difference equations. Computers cannot make infinitely fine time steps, but have to approximate by using very small time steps instead. On the other hand, difference equations can be harder to solve mathematically. r=1; s=-0.001; N=1; t=0; dt=1; print(N); while(t<=20) { dN=(r+s*N)*N; N=N+dN; if (N<0) N=0; t=t+dt; print(N); } Above is computer code for a difference equation presented earlier, which leveled off at 1000, but with an addition in red. If the population is far above its carrying capacity, the calculation could show such a strong decline that the next year's population would be negative—meaning that the population would die out completely. The addition in red just avoids projecting negative populations. Below is similar code for the corresponding differential equation, with the differences again in red. r=1; s=-0.001; N=1; t=0; dt=1/(365*24*60*60); print(N); while(t<=20) { dN=(r+s*N)*N*dt; N=N+dN; if(N<0) N=0; t=t+dt; print(N); } This intends to model infinitely small time steps. Of course it cannot do that exactly, but must settle for very small time steps. Instead of $dt\,=\,1$, for example, representing one year, it is set here to about one second, dividing 1 year by 365 days and each day by 24 hours, 60 minutes, and 60 seconds. This is hardly infinitely small, but for populations of bacteria and humans it is close enough for practical purposes. Still, it is important to check for negative populations in case the time step is not small enough. Figure $1$. Differential logistic growth (maroon) compared with discrete (green). No dots appear on the differential form, since that represents infinitesimal time steps, whereas the difference form has a dot at each point calculated. How small is close enough to infinitely small? That is the question. To find out, you can set the time step to something small and run the code, which will produce a set of population values through time. Then set the step smaller still and run the code again. It will run more slowly because it is calculating more steps, but if essentially the same answer appears—if the answer "converges"—then you can make the step larger again, speeding the calculation. With a few trials you can find a time step that is small enough to give accurate answers but large enough to allow your code to run reasonably fast. Figure $1$ shows the results of running the differential equation version of the program (the second one above, in maroon) versus the difference equation version (the first above, in green). The differential equation has the same parameters and general shape, but the population approaches its carrying capacity more quickly.
Because the differential time steps produce offspring earlier—not waiting for the end of the step—those offspring are available to reproduce earlier, and so forth. This particular method for differential equations is called "Euler's method" (pronounced "Oiler's"), a basic approach not often used in the twentieth century because computers not so long ago were millions of times slower than they are now. Today this method is often fast enough, and is desirable because of its relative simplicity. (By the way, calculating bacterial growth for five days with one-second time steps will be fast enough in any programming language. At the time we are writing this (the second decade of the twenty-first century), calculating growth for a human population second-by-second for 20 years will be too slow in R, tolerable in AWK, and plenty fast in C, Java, or other high-speed languages.)
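The convergence test described above can be scripted. Here is a minimal sketch in the same style as the code in this chapter; the function name simulate, the choice of halving dt, the stopping tolerance, and the decision to compare populations at t = 5 (where the discrete and continuous forms still differ noticeably) are illustrative choices, not part of the original program.

# Sketch: halve dt until the population at t = 5 stops changing appreciably.
# Uses the same logistic parameters as above: r = 1, s = -0.001, N0 = 1.
simulate <- function(dt, r = 1, s = -0.001, N = 1, tmax = 5) {
  t <- 0
  while (t < tmax) {
    dN <- (r + s*N) * N * dt
    N <- N + dN
    if (N < 0) N <- 0
    t <- t + dt
  }
  N                               # population at tmax
}

dt <- 1
previous <- simulate(dt)
repeat {
  dt <- dt / 2
  current <- simulate(dt)
  cat("dt =", dt, " N(5) =", current, "\n")
  if (abs(current - previous) < 0.1) break   # answers have converged
  previous <- current
}

Each halving doubles the work, and the printed values settle toward the exact solution (roughly 130 for these parameters); once successive answers agree closely enough, the larger of the two time steps is already adequate.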
Differential equations can be amenable to mathematical analysis. To repeat, here is the differential population model.

$\frac{1}{N}\,\frac{dN}{dt}\,=\,r\,+\,sN$

It turns out there is something simple about infinity, and when time steps are infinitely small the methods of calculus developed over the centuries can solve this differential equation exactly, mathematically. If you apply a symbolic mathematics computer package, or the methods for integration of functions developed in calculus, you can find the population value N for any future time t. This is called the "solution" to the differential equation.

$N(t)\,=\frac{1}{(\frac{s}{r}\,+\frac{1}{N_0})\,e^{-rt}\,-\frac{s}{r}}$

Most differential equations cannot be solved this way but, fortunately, the basic equations of ecology can. This solution becomes useful in projecting forward or otherwise understanding the behavior of a population. If you know the starting N, s, and r, you can plug them into the formula to find the population size at every time in the future, without stepping through the differential equation.

5.03: Exponential Solution

To think about this mathematically, first set s to zero, meaning no density dependence. The differential equation then reduces to $\frac{1}{N}\,\frac{dN}{dt}\,=\,r$, and if you replace s in the equation above with 0 you get this: $N(t)\,=\frac{1}{(\frac{0}{r}\,+\frac{1}{N_0})\,e^{-rt}\,-\frac{0}{r}}$ $\,=\frac{1}{\frac{1}{N_0}\,e^{-rt}\,}$ $\,=\,N_0e^{rt}$ A measure often used for exponential growth, and that we will apply later in this book, is "doubling time"—the time that must elapse for the population to double. For exponential growth, this is always the same, no matter how large or small the population. For exponential growth, the equation above is $N(t)\,=\,N_0\,e^{rt}$ where N0 is the starting population at time 0, N(t) is the population at any time t, and r is the constant growth rate—the "intrinsic rate of natural increase." How much time, τ, will elapse before the population doubles? At some time t, the population will be N(t), and at the later time t+τ, the population will be N(t+τ). The question to be answered is this: for what τ will the ratio of those two populations be 2? $\frac{N(t\,+\,τ)}{N(t)}\,=\,2$ Substituting the right-hand side of the exponential growth equation gives $\frac{N_0\,e^{r\,(t\,+\,τ)}}{N_0\,e^{rt}}\,=\,2$ The factor N0 cancels out, and taking natural logarithms of both sides gives $\ln \frac{e^{r(t\,+\,τ)}}{e^{rt}}\,=\,\ln \,2$ Since the log of a ratio is the difference of the logs, this yields $\ln \,e^{r(t\,+\,τ)}\,-\,\ln \,e^{rt}\,=\,\ln \,2$ Since logarithms and exponentials are inverse processes—each one undoes the other—the natural logarithm of $e^x$ is simply x. That gives $r(t\,+\,τ)\,-\,rt\,=\,\ln \,2$ $r\,τ\,=\,\ln \,2$ and finally, the doubling time τ is $τ\,=\frac{\ln \,2}{r}$ In other words, the doubling time for exponential growth, where r is positive and s is 0, is just the natural logarithm of 2 (0.69314718...) divided by the growth rate r.

5.04: Logistic solution

Recall that the carrying capacity is −r/s, also called K. So wherever −r/s appears, substitute K, as follows. $N(t)\,=\frac{1}{(\frac{s}{r}\,+\frac{1}{N_0})\,e^{-rt}\,-\frac{s}{r}}$ $\,=\frac{1}{(\,-\frac{1}{K}\,+\frac{1}{N_0})\,e^{-rt}\,+\frac{1}{K}}$ $\,=\frac{K}{(\,-\,1\,+\frac{K}{N_0})\,e^{-rt}\,+\,1}$ $\,=\frac{K}{(\frac{K\,-\,N_0}{N_0})\,e^{-rt}\,+\,1}$ This is the solution given in textbooks for logistic growth. There are slight variations in the ways it is written, but they are equivalent.
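As a quick check on these formulas, here is a minimal sketch that evaluates the closed-form solution and the doubling time, using the same illustrative parameters as the earlier code (r = 1, s = −0.001, N0 = 1); nothing in it is new data.

# Sketch: evaluate the closed-form logistic solution and the doubling-time formula.
r <- 1; s <- -0.001; N0 <- 1

logistic_N <- function(t) {
  1 / ((s/r + 1/N0) * exp(-r*t) - s/r)   # the closed-form solution above
}

logistic_N(c(0, 5, 10, 20))   # starts at N0 = 1 and levels off near K = -r/s = 1000

log(2) / r                    # doubling time while growth is still nearly exponential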
5.05: Orthologistic solution Finally, let s be positive. This creates a vertical asymptote and orthologistic growth. The position in time of the vertical asymptote is the “singularity” mentioned earlier. The interesting question is, when s is positive, what is the time of the singularity? That is, when will the population grow beyond all bounds in this model? What must happen to the denominator for the population to grow to unbounded values? It has to get closer and closer to zero, for then the N (t ) will grow closer and closer to infinity. So to find the singularity, you only have to set the denominator to zero, and then solve for the time t. You can go through the intermediate steps in the algebra below, or use a mathematical equation solver on your computer to do it for you. Setting the denominator in the equation from 5.2 to zero will lead along this algebraic path: $\frac{s}{r}\,=\,(\frac{s}{r}\,+\frac{1}{N_0})\,e^{-rt}$ Multiply through by (r/s)ert, to obtain $e^{rt}\,=\,(\,1\,+\frac{r}{s}\frac{1}{N_0})$ Next take logarithms of both sides $rt\,=\,ln\,(1\,+\frac{r}{s}\frac{1}{N_0})$ Finally, divide through by r to find the time of the singularity. $t\,=\frac{1}{r}\,ln\,(1\,+\frac{r}{s}\frac{1}{N_0})$ In the 1960s, Heinz von Foerster wrote about this in the journal Science. Though the consequences he suggested were deadly serious, his work was not taken very seriously at the time, perhaps in part because the time was so far away (about a human lifetime), but perhaps also because he put the date of the singularity on Friday the 13th, 2026, his 115th birthday. In the title of his paper he called this “doomsday”, when the human population would have demolished itself. Von Foerster used a more complicated model than the r+sN model we are using, but it led to the same result. Some of the ideas were picked up by Paul Ehrlich and others, and became the late-1960s concept of the “population bomb”— which was taken seriously by many.
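The singularity time derived above is easy to evaluate directly; a minimal sketch follows, with placeholder values for r, s, and N0 chosen only for illustration (they are not data from this chapter).

# Sketch: time of the singularity for orthologistic growth (s > 0),
# using the formula t = (1/r) ln(1 + r/(s*N0)) derived above.
r <- 0.001; s <- 0.0001; N0 <- 10      # illustrative values only

t_singularity <- (1/r) * log(1 + r/(s*N0))
t_singularity                           # time at which the model population becomes unbounded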
There is a challenge for this chapter. Coming this far in the book you have learned a little about population growth, and you have access to computer coding, so you are ready for something big. Imagine for fun that you have accepted a new job in Washington as a policy fellow with the United States Geological Survey, or USGS—one of the major research branches of the federal government. This is not farfetched; many recent doctoral graduates land such jobs at reasonably high levels. But suppose that your boss says she wants you to calculate what the world's population will be in 2100. Other agencies have done this, but she wants a separate USGS estimate, presented in an understandable way. She can give you data from the 18th through the 21st centuries. She discloses that she is meeting with the Secretary General of the United Nations tomorrow and hopes you can figure it out today. You tell her "Sure, no problem." Are you crazy? No! The rest of this chapter will walk you through how to do it. We'll start by piecing together the parts, as in Figure 4.4.1: the orthologistic part, if there is one, and any exponential and logistic parts as well.

6.02: Phenomenological Graph

The excerpt of data you have been given includes the world's population in billions, by year. That is all. Figure $1$ shows the data plotted in a phenomenological way—population size versus year, supplemented with a curve going back 2000 years to provide perspective. The blue dots show the range of data you will be using to project the future population, and the black '×' marks a great demographic transition that is not obvious in this graph, but that will become glaringly so in Figure 6.3.1. Can you project global population by simply extending that curve? The population is clearly rising at an enormous rate, expanding most recently from 3 billion to 7 billion in less than half a century. Simply projecting the curve would lead to a prediction of more than 11 billion people by the middle of the 21st century, and more than 15 billion by the century's end. But such an approach is too simplistic. In one sense, the data are all contained in that curve, but are obscured by the phenomena themselves. We need to extract the biology inherent in the changing growth rate r as well as the ecology inherent in the changing density dependence s. In other words, we want to look at data showing $\frac{1}{N}\frac{∆N}{∆t}$ versus N, as in Figure 4.4.1. Table 6.1.1 shows a subset of the original data, t and N, plus calculated values for ∆N, ∆t, and $\frac{1}{N}\frac{∆N}{∆t}$. In row 1, for example, ∆N shows the change in N between row 1 and row 2: 0.795−0.606 = 0.189 billion. Likewise, ∆t in row 1 shows how many years elapse before the time of row 2: 1750 − 1687 = 63 years. The final column in row 1 shows the value of $\frac{1}{N}\frac{∆N}{∆t}$: 1/0.606 × 0.189/63 = 0.004950495..., which rounds to 0.0050. Row 21 has no deltas because it is the last row in the table.

Table $1$. Human population numbers for analysis.

Point | Year t | N (billions) | ∆N | ∆t | $\frac{1}{N}\frac{∆N}{∆t}$
1. | 1687 | 0.606 | 0.189 | 63 | 0.0050
2. | 1750 | 0.795 | 0.174 | 50 | 0.0044
3. | 1800 | 0.969 | 0.296 | 50 | 0.0061
4. | 1850 | 1.265 | 0.391 | 50 | 0.0062
5. | 1900 | 1.656 | 0.204 | 20 | 0.0062
6. | 1920 | 1.860 | 0.210 | 10 | 0.0113
7. | 1930 | 2.070 | 0.230 | 10 | 0.0111
8. | 1940 | 2.300 | 0.258 | 10 | 0.0112
9. | 1950 | 2.558 | 0.224 | 5 | 0.0175
10. | 1955 | 2.782 | 0.261 | 5 | 0.0188
11. | 1960 | 3.043 | 0.307 | 5 | 0.0202
12. | 1965 | 3.350 | 0.362 | 5 | 0.0216
13. | 1970 | 3.712 | 0.377 | 5 | 0.0203
14. | 1975 | 4.089 | 0.362 | 5 | 0.0177
15. | 1980 | 4.451 | 0.405 | 5 | 0.0182
16. | 1985 | 4.856 | 0.432 | 5 | 0.0178
17. | 1990 | 5.288 | 0.412 | 5 | 0.0156
18. | 1995 | 5.700 | 0.390 | 5 | 0.0137
19. | 2000 | 6.090 | 0.384 | 5 | 0.0126
20. | 2005 | 6.474 | 0.392 | 5 | 0.0121
21. | 2010 | 6.866 | | |
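The derived columns of Table 6.1.1 can be recomputed directly from the raw year and N values; a minimal sketch follows (the two data vectors are copied from the table above, and everything else is calculated).

# Sketch: recompute the ∆N, ∆t, and (1/N)(∆N/∆t) columns of Table 6.1.1.
year <- c(1687,1750,1800,1850,1900,1920,1930,1940,1950,1955,1960,1965,
          1970,1975,1980,1985,1990,1995,2000,2005,2010)
N <- c(0.606,0.795,0.969,1.265,1.656,1.860,2.070,2.300,2.558,2.782,3.043,3.350,
       3.712,4.089,4.451,4.856,5.288,5.700,6.090,6.474,6.866)   # billions

dN <- diff(N)                        # change in N between successive rows
dt <- diff(year)                     # years elapsed between successive rows
percap <- (1/head(N, -1)) * dN/dt    # 1/N * dN/dt for each row except the last

round(data.frame(year = head(year, -1), N = head(N, -1), dN, dt, percap), 4)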
Figure \(1\) plots the two green columns of Table 6.1.1 (N and $\frac{1}{N}\frac{∆N}{∆t}$) through line 12—the mid-1960s—in blue dots, with a green line representing the average trend. A line like this can be drawn through the points in various ways—the simplest with a ruler and pen drawing what looks right. This one was done using a statistical "regression" program, with r the point at which the line intersects the vertical axis and s the line's slope—its ∆y/∆x. The intrinsic growth rate r for the modern, global human population is apparently negative and the slope s is unmistakably positive. From the late 1600s to the mid-1960s, then, it's clear that the birth rate per family was increasing as the population increased. Greater population was enhancing the population's growth. Such growth is orthologistic, meaning that the human population has been heading for a singularity for many centuries. The singularity is not a modern phenomenon, and could conceivably have been known before the 20th century. The negative value of r, if it is real, means there is a human Allee point. If the population were to drop below the level of the intersection with the horizontal axis—in this projection, around two hundred million people—the human growth rate would be negative and human populations would decline. The Allee point demonstrates our reliance on modern society; it suggests that we couldn't survive with our modern systems at low population levels—although perhaps if we went back to hunter–gatherer lifestyles, this would change the growth curve. The Allee point thus indicates that there is a minimum human population we must sustain to avoid extinction. We depend on each other.

6.04: A global transition

In Figure \(1\) we add data from the mid-1960s to the present day. People living in the 1960s were completely unaware of the great demographic transition that was developing. For hundreds of years prior to this time, human populations were stuck on an orthologistic path, with a singularity ever looming and guaranteed by the positive slope. In most of the world, however, the slope abruptly turned about and became negative. Not all countries of the world turned about, but on average the world did. Humanity started down a logistic-like path. Where the downward-sloping line crosses the horizontal axis is where population growth would cease. From this simple r + sN model, it appears that the world's population will stabilize between 10 and 12 billion. That is in line with other recently published projections. Prior to the 1960s there were dips in the increasing growth, with World Wars I and II leveling the rate of increase worldwide, though population continued to grow rapidly. The rate also fell in 1960, corresponding to extreme social disruptions in China. What caused this great demographic transition, averaged over the globe? The "Four Horsemen" commonly expected to check human populations were not a primary cause. In many regions birth control became more available. Education slowed reproduction because people got married later. Modern medicine raised survival rates, making large families unnecessary. The space program looked back at Earth and projected a fragile dot suspended in the black of space, viewed by billions. China's one-child policy had a noticeable effect. However, so did HIV, one of the few Horsemen that has made a noticeable comeback. Plants and other animals have logistic growth forced upon them because of overcrowding. In humans, however, logistic growth has been largely voluntary.
And there could be further developments in a lifetime. In many nations, birth rates are presently below replacement rates. In fact, in all nations with a gross national income above 16K dollars per person, the birth rate is at or below the replacement rate of 2.1 lifetime births per female (Figure \(2\)). This change in demographic rates could conceivably allow present and future generations to voluntarily adjust the population to whatever is desired. The new question just may be: what is the minimum world population we dare have? Returning to your supervisor’s questions, you can now tell her that, in 2100, the world’s population will be between 10 and 12 billion. And you can say “The other population projections are not far off. They are slightly different from what we calculate using this method. But they use very complicated methods so you have to cut them a little slack!”
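For a sense of where the 10-to-12-billion figure comes from, here is a minimal sketch of the fit described in this chapter, applied to the post-1965 rows of Table 6.1.1; the exact estimates depend on which rows are included, so treat the numbers as approximate.

# Sketch: fit 1/N dN/dt = r + s N to the rows of Table 6.1.1 after the mid-1960s turn,
# then read off the implied equilibrium K = -r/s. Data are copied from the table.
N      <- c(3.712, 4.089, 4.451, 4.856, 5.288, 5.700, 6.090, 6.474)        # 1970-2005
percap <- c(0.0203, 0.0177, 0.0182, 0.0178, 0.0156, 0.0137, 0.0126, 0.0121)

fit <- lm(percap ~ N)            # straight line: intercept estimates r, slope estimates s
r <- unname(coef(fit)[1])
s <- unname(coef(fit)[2])
c(r = r, s = s, K = -r/s)        # K should come out roughly 10 to 11 billion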
07: Chaos and Randomness

In the density-limited growth examined thus far, the ecological effects of density fed back quickly enough that the population's growth could adjust and the population could reach a carrying capacity, equal to −r/s. But if the growth rate is too fast compared with that feedback, the population can overshoot its carrying capacity, which can lead to highly complex outcomes. Think about feedback in this way. Imagine driving down the road, keeping your eye on the road, instantly correcting any little deviations of your car from your lane, adjusting the steering wheel without even perceiving it, and with only normal blinking of your eyes. In this case there is very little delay in your feedback to the steering wheel, and you stay in the lane. Now suppose you close your eyes for one second at a time, perhaps every ten seconds. (Do not run this experiment; just think about it!) You may have drifted a bit to the left or right in that second and would have to turn the steering wheel further to get back in your lane. And now imagine shutting your eyes for 15 seconds every minute, then opening them and correcting your path down the road. You'll start oscillating in your lane and precariously jerking back and forth, possibly visiting the ditch. The cause? The delay in the feedback between stimulus and response. So it is with populations. Delays in sensing the carrying capacity can start oscillations. For example, a modeled insect population that grows and lays eggs one year and emerges the next year can suffer such oscillations. The insects are, in effect, "keeping their eyes shut" about how many insects will be produced the next year. This is in contrast to species like bacteria or humans, where the population grows more or less continuously.

7.02: Depicting population growth

Figure \(1\) shows four approaches to depicting populations. While not all are equally helpful, each has its use. Let's start with phenomenological graphs for a single species: graphs that merely depict the population phenomena observed without attempting to describe the mechanisms causing the phenomena. Observations might come from successive bacterial plate counts or censuses of people or, in this case, successive insect censuses. Part A in the figure represents the whole population N over time, and is a starting place to view population change. Similarly, Part B represents the whole population's rate of growth, dN/dt, over time, also phenomenological. A touch of biology is introduced in Part C by transforming the vertical axis to per-capita growth, 1/N dN/dt. This transformation recognizes the growth rate that an individual organism achieves in a unit of time—say in a year or a week—under prevailing conditions.
There is a nominal biological limit on the number of offspring produced by an individual in each unit of time—one new bacterium per individual bacterium in twenty minutes, say, or four goslings per family of geese in a year, or one infant per human family each year. This subtle amount of biology can reveal patterns not evident in the phenomenological approaches of Parts A and B—that the number of surviving offspring per individual increases with time for orthologistic growth, does not change for exponential growth, and decreases with time for logistic growth. Finally, in Part D, a touch of ecology is added to the biology of Part C by considering, on the horizontal axis, interactions among organisms. This shows per-capita growth rate versus population density N, rather than versus time t. And it reveals even more clearly the ecological mechanism behind the phenomena and the distinct nature of the three kinds of population growth—orthologistic growth appears as a straight line slanted upward (as in Figure 6.2.1), exponential growth as a straight horizontal line, and logistic growth as a straight line slanted downward (as in Figure 6.3.1). Population density N acts as a proxy for space, food, or other resources or limits.
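Here is a minimal sketch of these four depictions, drawn from a short simulated logistic run; the parameter values and panel titles are illustrative choices, not the data behind Figure \(1\).

# Sketch: the four views of a population's growth for a simulated logistic run.
# Parameters are illustrative (r = 1, s = -0.001, N0 = 1).
r <- 1; s <- -0.001; N <- 1
Nt <- N
for (t in 1:20) { N <- N + (r + s*N)*N; Nt <- c(Nt, N) }

t      <- 0:20
dNdt   <- diff(Nt)              # change per unit time (dt = 1)
percap <- dNdt / head(Nt, -1)   # per-capita growth, 1/N dN/dt

par(mfrow = c(2, 2))
plot(t, Nt, type = "b", main = "A. N versus t")
plot(head(t, -1), dNdt, type = "b", main = "B. dN/dt versus t")
plot(head(t, -1), percap, type = "b", main = "C. 1/N dN/dt versus t")
plot(head(Nt, -1), percap, type = "b", main = "D. 1/N dN/dt versus N")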
For a detailed illustration of the methods used in these four graphs and an illustration of population oscillations, consider the hypothetical insect data in Table \(1\). Insects often have one-year reproductive cycles; these can be prone to oscillations, and also are known for "outbreaks" (e.g. of disease or of pests). The data in Table \(1\) were generated by the difference equation \[\dfrac{1}{N} \dfrac{∆N}{∆t} = r + sN,\] with \(r = 3\) and \(s = −4\). The table shows an initial population of about 11,000 individual organisms. The next year there are about 44,000, then 168,000, then more than 500,000, then more than 900,000. But then something apparently goes wrong, and the population drops to just over 55,000. In nature, this might be attributed to harsh environmental conditions—a drastic change in weather or over-exploitation of the environment. But these data are simply generated from a difference equation, with oscillations induced by overshooting the carrying capacity and getting knocked back to different places, again and again, each time the population recovers.

Table \(1\). Hypothetical insect data.

(A) t | (B) N | (C) ∆N/∆t | (D) $\frac{1}{N}\frac{∆N}{∆t}$
0 | 11,107 | 32,828 | 2.956
1 | 43,935 | 124,082 | 2.824
2 | 168,017 | 391,133 | 2.328
3 | 559,150 | 426,855 | 0.763
4 | 986,005 | −930,810 | −0.944
5 | 55,195 | 153,401 | 2.779
6 | 208,596 | 451,738 | 2.166
7 | 660,334 | 236,838 | 0.359
8 | 897,172 | −528,155 | −0.589
9 | 369,017 | 562,357 | 1.524
10 | 931,374 | −675,708 | −0.725
11 | 255,666 | 505,537 | 1.977
12 | 761,203 | −34,111 | −0.045
13 | 727,092 | 66,624 | 0.092
14 | 793,716 | −138,792 | −0.175
15 | 654,924 | 249,071 | 0.380
16 | 903,995 | −556,842 | −0.616
17 | 347,153 | 559,398 | 1.611
18 | 906,551 | −567,685 | −0.626
19 | 338,866 | 557,277 | 1.645
20 | 896,143 | |

The repeated growth and setbacks are visible in the phenomenological graph of population growth (Figure \(1\), Part A). It's easy to see here that the population grows from low levels through year 4, declines drastically in year 5, then rises again and oscillates widely in years 8 through 12. The next four years show smaller oscillations, and in years 16 through 20 there are two sets of nearly identical oscillations. The next phenomenological graph, Part B, shows not the population over time but the change in population over time. The difference in population size from the first year to the following year is about ∆N = 33,000 (44,000 − 11,000 = 33,000). Similarly, the difference in time between successive years is just ∆t = 1. So ∆N/∆t is about 33,000/1, or in units of the graph, 0.033 million. Year 0 is therefore marked on the graph vertically at 0.033. Review Chapter 5 for why ∆ is used here rather than dN. For the second year, the population grows from about 44,000 to about 168,000, so ∆N/∆t = (168,000−44,000)/1 = 124,000, or 0.124 million. Year 1 is therefore marked on the graph vertically at 0.124. This continues for all years, with the exact results calculated in the ∆N/∆t column of Table \(1\) and plotted in Part B of Figure \(1\). These data are still phenomenological, and simply show the annual changes in population levels rather than the population levels themselves. In Part C we add a bit of biology, showing how many net offspring are produced annually by each individual in the population. This is ∆N/∆t = 33,000/1, the number of new net offspring, divided by about 11,000 parental insects—about three net offspring per insect (more accurately, as shown in the table, 2.956). This can mean that three new insects emerge and the parent lives on, or that four emerge and the parent dies—the model abstracts such details away as functionally equivalent.
All such per-insect (per capita) growth rates are calculated in the $\frac{1}{N}\frac{∆N}{∆t}$ column of Table \(1\) and plotted in Part C of Figure \(1\). Part C shows a little biological information—how the net number of offspring per insect is changing through time. Over the first four years it drops from almost 3 down to almost −1. Again, this could mean that 3 new offspring emerge and survive in year 0 and that the parent survives too, and that by year 4 almost no offspring survive and the parent dies as well. The smallest the change per insect (per capita) can ever be is −1, because that means the individual produces no offspring and dies itself—the worst possible case. And since in this case r = 3, the greatest the change can be per insect is 3—realized most closely when N is very close to 0. In the end, however, even with this touch of biology added to the graph, Part C still oscillates wildly. The order underlying the chaos finally is revealed in Part D by retaining the biology with per capita growth on the vertical axis, but adding ecology with density N on the horizontal axis. Successive years are numbered in red above the corresponding dot. Suddenly, all points fall on a straight line! This line reveals the underlying growth equation. Remember that the growth rate is represented as r+sN, which is a straight line. It is equivalent to the algebraic form y = mx + b, only rewritten with s in place of m, N in place of x, and r in place of b. Remember also that it is a "first-order approximation" to the general form proposed by G. Evelyn Hutchinson, \[r\,+\,sN\,+\,s_2N^2\,+\,s_3N^3\,+\,\dots,\] usable when the parameters $s_2$, $s_3$, and so on are small, so that a straight line is a good approximation. And finally, remember that in terms of human population growth, for which we have reasonably good data, a straight line is indeed a good approximation (Figure 6.3.1). Part D of Figure \(1\) thus exposes these population dynamics as density-limited growth, because the individual growth rate on the vertical axis, 1/N dN/dt, gets smaller as density on the horizontal axis, N, gets larger. And because it is a straight line, it is logistic growth. But it is different in that the finite time steps allow the population to go above its carrying capacity, forcing its growth rate negative and pulling the population back down in the next time step—whereupon the growth rate becomes positive again and the population is pushed up again in a confusing cascade of chaos.
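The claim that the points of Part D fall on the line r + sN can be checked directly from the table; here is a minimal sketch using the first few rows of Table \(1\), with N expressed in millions.

# Sketch: recover r and s from the hypothetical insect data of Table 7.3.1.
# A straight-line fit should return the generating parameters r = 3 and s = -4.
N      <- c(0.011107, 0.043935, 0.168017, 0.559150, 0.986005, 0.055195)
percap <- c(2.956, 2.824, 2.328, 0.763, -0.944, 2.779)

fit <- lm(percap ~ N)
round(coef(fit), 3)    # intercept ~ r = 3, slope ~ s = -4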
Figure $1$. Sensitive dependence on initial conditions. Both parts have r = 3 and s = −(r+1), but Part A starts at 0.011107 million and Part B starts at 0.012107 million.

Below is the computer code that produced the hypothetical data for the graphs of Figure 7.2.1, quite similar to other code you have seen before.

r=3; s=-4; N=0.011107; t=0; print(N);
while(t<=20)
{ dN=(r+s*N)*N; N=N+dN;
  t=t+1; print(N); }

The initial condition is 11,107 insects—0.011107 million in this representation—which produces the time-pattern of Figure 7.2.1, Part A. But change that initial condition just a little, to 0.012107 million, and the time-pattern changes considerably. (Compare Parts A and B of Figure $1$.) Part A is identical to Part A of Figure 7.2.1, but Part B of the new figure has quite a different pattern, of repeated population outbreaks—not unlike those seen in some insect populations. The emergence of very different patterns from slightly different starting points is called "sensitive dependence on initial conditions," and is one of the characteristics of chaos. The carrying capacity in these graphs is $K = −\dfrac{r}{s} = \dfrac{−3}{−4} = 0.75$ which represents 750,000 insects and is marked with a horizontal gray dashed line. It is clear that the population is fluctuating about that equilibrium value. Added to the right of the time line is a distribution of the points, showing the proportion of the time that the population occurs at the corresponding place on the vertical axis, with values to the right in the distribution representing higher proportions of time. In this case, the population spends much of the time at very low or very high values. These distributions can be determined by letting the program run for a hundred million steps, more or less, and keeping track of how many times the population occurred at specific levels. In some cases, however, as in this particular case with r = 3, the distribution can be determined algebraically. Here it is called the arcsine distribution, and its value at population level $y$ is $\dfrac{1}{\pi\sqrt{y\,(1\,-\,y)}}.$ Though it is not particularly important to population ecology, isn't it curious to see the value $\pi$ = 3.14159... emerge from a difference equation developed to understand population growth!

7.05: Dampening of Chaos

If the growth rate r diminishes, the amount that the population can overshoot its carrying capacity also diminishes, meaning that the size and severity of the fluctuations should diminish as well. Figures \(1\) and \(2\) show this happening. When r diminishes from 3 to 2.84, for example, as in Part A of Figure \(1\), chaos vanishes and the oscillations become regular, jumping from a specific low value to a specific medium value to a specific high value, then dropping back to repeat the cycle. This is called "period three." Sensitive dependence on initial conditions has also vanished; slight changes in the starting population will not produce different patterns, as in Figure 7.3.1, but will end up approaching exactly the same three levels as before. The pattern is stable. Moreover, changing parameter r slightly will not change the period-three pattern to something else. The exact values of the three levels will shift slightly, but the period-three pattern will remain. But when r is changed more than slightly—to 2.575, for example, as in Part B—the period-three pattern vanishes and, in this case, a chaos-like pattern appears. The population fluctuates among four distinct bands, with a complex distribution within each, as shown on the right in Part B.
With r somewhat lower—at 2.48, for example, as in Part C—the bands coalesce into a period-four pattern, which is stable like the period-three pattern in Part A. With further reductions in r, the period-four pattern is cut in half to a period-two pattern, as in Part D, and finally to a period-one, or equilibrium, pattern. Figure \(2\) shows the progression from r = 3 downward, as it changes from an oscillation toward the equilibrium, as in Parts A and B, and to a smooth approach, as in Parts C and D. This smooth approach begins when the growth rate is small enough that the population does not overshoot its carrying capacity.

7.06: Bifurcation diagram

The dynamics of all possible values of r can be summarized in a "bifurcation diagram" (Figure \(1\)). In mathematical terminology, a bifurcation is a place where a tiny change in a parameter causes an extensive and discontinuous change in the behavior of the system. Figure \(1\) shows this by amalgamating the distributions on the right in Figures 7.3.1 through 7.4.2, plus distributions for all other possible values of r. Shading shows where the population spends most of its time. Starting at the right of this figure, fully in the domain of chaos, and moving to the left by reducing r, the behavior moves in and out of chaos-like patterns that never repeat and thus have no period, and also hits stable patterns of every possible period from one up toward infinity.

7.07: Properties

Chaos is not precisely defined in mathematics, but it occurs where:

1. Population dynamics appear to oscillate erratically, without outside influence.
2. The population has an unlimited set of different patterns of oscillation, all for the same parameter values.
3. The slightest change in the number of individuals can change the population from one pattern of oscillations to any other pattern.

It is not important that you learn all the details of chaos. The important scientific point here is that complexity can arise from simplicity. Complex behavior of something in nature does not imply complex causes of that behavior. As you have seen, as few as two lines of computer code modeling such systems can generate extremely complex dynamics. The important biological point is that populations can oscillate chaotically on their own, with no outside influences disturbing them, and that their precise future course can be unpredictable. Chaos and randomness in deterministic systems were discovered by mathematician Henri Poincaré late in the 19th century, but the knowledge scarcely escaped the domain of mathematics. In the 1960s, meteorologist Edward Lorenz discovered their effects in models of weather, and in the 1970s theoretical ecologist Robert May made further discoveries, publishing a high-profile paper that landed in scientific fields like a bombshell. The details of chaos were then worked out by a large assortment of mathematicians and scientists during the last quarter of the twentieth century. The discrete-time logistic equation examined in this chapter is now designated by Ian Stewart as number sixteen of seventeen equations that changed the world.

Figure \(1\). The final six of seventeen equations that changed the world, as designated by Ian Stewart.
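A bifurcation diagram along the lines of Figure \(1\) can be sketched by sweeping r and recording the long-run population values. The sketch below keeps the convention s = −(r+1) from the caption of Figure 7.3.1; the transient length, the number of recorded points, and the range of r swept are arbitrary display choices.

# Sketch: bifurcation diagram for the difference equation N <- N + (r + s*N)*N
# with s = -(r+1). Transient iterations are discarded; the remaining values
# show where the population spends its time at each r.
rs <- c(); Ns <- c()
for (r in seq(1.5, 3.0, by = 0.005)) {
  s <- -(r + 1)
  N <- 0.011107
  for (i in 1:200) N <- N + (r + s*N)*N          # discard the transient
  for (i in 1:100) {                             # record the long-run behavior
    N <- N + (r + s*N)*N
    rs <- c(rs, r); Ns <- c(Ns, N)
  }
}
plot(rs, Ns, pch = ".", xlab = "r", ylab = "N (millions)")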
In the first part of this book you’ve seen the two main categories of single-species dynamics—logistic and orthologistic, with exponential growth being an infinitely fine dividing line between the two. And you’ve seen how population dynamics can be simple or chaotically complex. Moving forward you will see three kinds of two-species dynamics—mutualism, competition, and predation—and exactly forty kinds of three-species dynamics, deriving from the parameters of the population equations and their various combinations. To review, the population dynamics of a single species are summarized in the following equation. $\frac{1}{N}\frac{dN}{dt}\,=\,r\,+\,sN$ Here parameter r is the “intrinsic growth rate” of the species— the net rate at which new individuals are introduced to the population when the population is vanishingly sparse, and s is a “density dependence” parameter that reflects how the size of the population affects the overall rate. Parameter s is key. If s is negative, the population grows “logistically,” increasing to a “carrying capacity” of −r /s, or decreasing to that carrying capacity if the population starts above it. If s is positive, then the population grows “orthologistically,” increasing ever faster until it encounters some unspecified limit not addressed in the equation. Exponential growth is the dividing line between these two outcomes, but this would only occur if s remained precisely equal to zero. How should this single-species equation be extended to two species? First, instead of a number N for the population size of one species, we need an N for each species. Call these N1 for species 1 and N2 for species 2. Then, if the two species do not interact at all, the equations could be $\frac{1}{N_1}\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,1}\,N_1$ $\frac{1}{N_2}\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,2}\,N_2$ Here r1 and r2 are the intrinsic growth rates for N1 and N2, respectively, and s1,1 and s2,2 are the density dependence parameters for the two species. (The paired subscripts in the two-species equations help us address all interactions.) There are thus four possible si,j parameters here: • s1,1 : How density of species 1 affects its own growth. • s1,2 : How density of species 2 affects the growth of species 1. • s2,1 : How density of species 1 affects the growth of species 2. • s2,2 : How density of species 2 affects its own growth. With these parameters in mind, here are the two-species equations. The new interaction terms are in blue on the right. $\frac{1}{N_1}\,\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,1}N_1\,+\,\color{blue}{s_{1,2}N_2}$ $\frac{1}{N_2}\,\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,2}N_2\,+\,\color{blue}{s_{2,1}N_1}$ In the single-species equations, the sign of the s term separates the two main kinds of population dynamics—positive for orthologistic, negative for logistic. Similarly, in the two-species equations, the signs of the interaction parameters s1,2 and s2,1 determine the population dynamics. Two parameters allow three main possibilities—(1) both parameters can be negative, (2) both can be positive, or (3) one can be positive and the other negative. These are the main possibilities that natural selection has to work with. Figure $1$. Both interaction parameters negative, competition. Competition. First consider the case where s1,2 and s2,1 are both negative, as in Figure $1$. For a single species, parameter s being negative causes the population to approach a carrying capacity. 
The same could be expected when parameters s1,2 and s2,1 are both negative—one or both species approach a carrying capacity at which the population remains constant, or as constant as external environmental conditions allow. One example is shown in Figure $2$, where the population of each species is plotted on the vertical axis and time on the horizontal axis. Here Species 2, in red, grows faster, gains the advantage early, and rises to a high level. Species 1, in blue, grows more slowly but eventually rises and, because of the mutual inhibition between species in competition, drives back the population of Species 2. The two species eventually approach a joint carrying capacity. In other cases of competition, a “superior competitor” can drive the other competitor to extinction—an outcome called “competitive exclusion.” Or, either species can drive the other to extinction, depending on which gains the advantage first. These and other cases are covered in later chapters. In any case, when both interaction terms s1,2 and s2,1 are negative, in minus–minus interaction, each species inhibits the other’s growth, which ecologists call the “interaction competition”. Mutualism. The opposite of competition is mutualism, where each species enhances rather than inhibits the growth of the other. Both s1,2 and s2,1 are positive. Depicted in Figure $3$ is a form of “obligate mutualism,” where both species decline to extinction if either is not present. This is analogous to a joint Allee point, where the growth curves cross the horizontal axis and become negative below certain critical population levels. If this is not the case and the growth curves cross the vertical axis, each species can survive alone; this is called “facultative mutualism,” and we’ll learn more about it in later chapters. For now, the important point is how mutualistic populations grow or decline over time. A single species whose density somehow enhances its own rate of growth becomes orthologistic, increasing ever more rapidly toward a singularity, before which it will grow so numerous that it will be checked by some other inevitable limit, such as space, predation, or disease. It turns out that the dynamics of two species enhancing each other’s growth are similar to those of a single species enhancing its own growth. Both move to a singularity at ever increasing rates, as illustrated earlier in Figure 4.2.1 and below in Figure $4$. Of course, such growth cannot continue forever. It will eventually be checked by some force beyond the scope of the equations, just as human population growth was abruptly checked in the mid-twentieth century— so clearly visible earlier in Figure 6.3.1. Predation. The remaining possibility for these two-species equations is when one interaction parameter si,j is positive and the other is negative. In other words, when the first species enhances the growth of the second while the second species inhibits the growth of the first. Or vice versa. This is “predation,” also manifested as parasitism, disease, and other forms. Think about a predator and its prey. The more prey, the easier it is for predators to catch them, hence the easier it is for predators to feed their young and the greater the predator’s population growth. This is the right part of Figure $5$. The more predators there are, however, the more prey are captured; hence the lower the growth rate of the prey, as shown on the left of the figure. N1 here, then, represents the prey, and N2 represents the predator. 
Prey can survive on their own, without predators, as reflected on the left in positive growth for N1 when N2 is 0. Predators, however, cannot survive without prey, as reflected on the right in the negative growth for N2 when N1 is 0. This is like an Allee point for predators, which will start to die out if the prey population falls below this point. The question here is this: what will be the population dynamics of predator and prey through time? Will the populations grow logistically and level off at a steady state, as suggested by the negative parameter s1,2, or increase orthologistically, as suggested by the positive parameter s2,1? Actually, they do both. Sometimes they increase faster than exponentially, when predator populations are low and growing prey populations provide ever increasing per capita growth rates for the predator, according to the right part of Figure $5$. In due time, however, predators become abundant and depress prey populations, in turn reducing growth of the predator populations. As shown in Figure $6$, the populations oscillate in ongoing tensions between predator (red line) and prey (blue line). Examine this figure in detail. At the start, labeled A, the prey population is low and predators are declining for lack of food. A steady decline in the number of predators creates better and better conditions for prey, whose populations then increase orthologistically at ever accelerating per capita rates as predators die out and conditions for prey improve accordingly. But then the situation turns. Prey grow abundant, with the population rising above the Allee point of the predator, at B. The number of predators thus starts to increase. While predator populations are low and the number of prey is increasing, conditions continually improve for predators, and their populations grow approximately orthologistically for a time. Then predators become abundant and drive the growth rate of the prey negative. The situation turns again, at C. Prey start to decline and predator growth becomes approximately logistic, leveling off and starting to decline at D. By E it has come full circle and the process repeats, ad infinitum. While Figure $6$ illustrates the classical form for predator-prey interactions, other forms are possible. When conditions are right, the oscillations can dampen out and both predator and prey populations can reach steady states. Or the oscillations can become so wild that predators kill all the prey and then vanish themselves. This assumes some effectively-zero value for N1 and N2, below which they "snap" to zero. Or prey populations can become so low that predators all die out, leaving the prey in peace. Or both can go extinct. Or, in the case of human predators, the prey can be domesticated and transformed into mutualists. More on all such dynamics in later chapters. 8.02: Code for two species Below is computer code for two-species dynamics—a logical expansion of the code for one-species dynamics you saw earlier. The code here produced the graph in Figure 8.1.2 and, with other values for Ni, ri, si,j, also produced the graphs in Figures 8.1.4 and 8.1.6.
`N1=.01; N2=.01;`
`r1=0.5; r2=0.8;`
`s11=-0.08; s12=-0.03; s21=-0.09; s22=-0.06;`
`t=0; dt=.0001; tmax=75; step=0;`
`print(c(t, N1, N2));`
`while(t<tmax)`
`{ dN1=(r1+s11*N1+s12*N2)*N1*dt;`
`dN2=(r2+s21*N1+s22*N2)*N2*dt;`
`N1=N1+dN1; if(N1<0) N1=0;`
`N2=N2+dN2; if(N2<0) N2=0;`
`t=t+dt; step=step+1;`
`if(step==1000)`
`{ print(c(t, N1, N2)); step=0; }`
`}`

This code uses a time step dt of 1/10,000 but displays only every 1000th time step, using the variable named step. This is a reliable way to display output periodically, because step takes on only integer values 0, 1, 2.... Watching variable t for the same purpose may not be reliable; t has rounding errors in its fractional portion, which typically is represented only approximately by the computer. By the way, here is a crucial requirement for coding multi-species dynamics: you must update all of the Ni together. You might be tempted to shorten the code by writing the following, getting rid of the dN1 and dN2 variables.

`N1=N1+(r1+s11*N1+s12*N2)*N1*dt; if(N1<0) N1=0;`
`N2=N2+(r2+s21*N1+s22*N2)*N2*dt; if(N2<0) N2=0;`

This shortened code, however, contains a serious bug that could be difficult to detect. (A "bug" is the computerese term for any mistake in computer code, typically one that goes undetected.) The calculation of N2 on the second line uses the new value of N1, not the present value. This will generate subtly incorrect results that could be costly—if, for example, you were using the code to project the course of an epidemic. Careful code reviews with experienced colleagues are one way to help avoid such bugs. If the problem is important enough, having the code written independently by two or more people not communicating with each other, except by accepting the same specifications and comparing the final results, is a way to approach correctness. This is like independent replication of scientific experiments. Other methods of ensuring that code is correct are addressed later.

8.03: Summary of interactions

In summary, based on the effects of each population on the other, two species can interact mainly in three different ways, as shown in Figure \(1\). Competition is a '−−' combination, mutualism is '++', and predation is '+−', in either order. Sandwiched between the boxes above are special cases where one of the interaction terms is zero, or very close to zero. These are called "commensalism," when one parameter is positive and the other is zero, or "amensalism," when one parameter is negative and the other is zero. We won't focus further on these special cases.
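The classification in this summary can be written down directly from the signs of the interaction parameters; a minimal sketch follows (the function name classify is illustrative).

# Sketch: classify a two-species interaction from the signs of s12 and s21,
# following the summary above.
classify <- function(s12, s21) {
  sg <- c(sign(s12), sign(s21))
  if (all(sg < 0)) "competition"
  else if (all(sg > 0)) "mutualism"
  else if (any(sg == 0)) "commensalism or amensalism"
  else "predation"
}

classify(-0.03, -0.09)   # the parameters in the code above give "competition"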
The absolute simplicity of the three plus–minus combinations for two-species dynamics, explored in the previous chapter, is converted by natural selection into physical forms of splendid complexity—unexpected and subtle, spectacular and sublime, stunning and beautiful. From our human perspective, some details appear to embody timeless altruism and kindness, others unbounded horror and cruelty. But nature seems to implement material possibilities without regard to human values, and we are left merely to observe and wonder, and strive to understand. At this point, to gain perspective before proceeding further, it is well to examine at least a tiny subset of ecological reality. The range of examples of multi-species interactions looks endless, and even within individual organisms are interactions among multiple species. 9.02: Mutualism At the deepest level, the eukaryotic cell seems to have been created of interactions among separate prokaryotic species one or two billion years ago. Mitochondrial genomes are separate genomes within the cell with parallels to bacterial genomes, but mitochondria are no longer able to live on their own except under very special circumstances. Chloroplasts are similar. These are mutualisms at the very basis of complex life. Other mutualisms, like those with gut bacteria, form higher levels. Mutualisms among pollinators and flowers are an ingenious arrangement for the fertilization of immobile organisms. A bee, for example, collects pollen as a food source and in the process spreads pollen from plant to plant. Plants advertise the pollen with bright flowers, and provide sweet nectar as an additional attraction (Figure \(1\), left). A small mammal can act similarly, drinking nectar from one plant, getting pollen on its whiskers, and transferring the pollen when it drinks from another plant, in turn pollinating that plant (Figure \(1\), middle). A fascinating pollination mutualism extends across the wetlands of the North American Upper Midwest, in marsh milkweed, Asclepias incarnata (Figure \(1\), right). Mutualisms are not necessarily perfect, and each member can be exploited in some small way. In this case, pollinators land on the milkweed flower and stand on “landing platforms” while taking nectar. But the platforms are not secure; they are curved and slightly difficult footholds. A pollinator’s legs slip into cracks between the platforms and, when the pollinator pulls its leg out, there are “saddle bags” of pollen (pollinia) wrapped around its leg, which can’t be dislodged. As it flies to a another flower, the saddle bags rotate forward so they easily dislodge, and they’re pulled off when the pollinator slips on another insecure landing platform of the corresponding species. In another example, warthogs (Figure \(2\) left) attract abundant ticks. What better place, then, for Oxpeckers to find morsels of food than in the bristly fur of a water buffalo? A little blood in the tick is probably a bonus. This is good for the water buffalo, good for the oxpecker, and not good for the tick. This three-species interaction is (1) predation on the water buffalo by the tick, (2) predation on the tick by the oxpecker, and thus (3) mutualism between water buffalo and oxpecker. It is an “enemy of my enemy is my friend” interaction, one of the forty kinds of three-species interactions you will see in upcoming chapters. 
Likewise, ants ward off potential predators from aphids (Figure \(2\) middle), and “cleaner fish” swim freely within the mouth of a large fish (Figure \(2\) right) while they remove the fish’s parasites. Sea anemones have stinging tentacles that other fish must avoid, but clown fish can resist the sting (Figure \(3\) left). This mutualism is more complex. The clown fish protect themselves from predators by living among the anemones, and their bright colors may attract predators who become prey for the anemone. Clown fish eat scraps of food missed by the anemone, plus their own food from the water column, and provide nitrogen to the anemone from their digested waste. These are intricate mutualisms, in which specifically matched species of clown fish and anemone have become permanent partners. Pom-pom crabs employ sea anemones as weapons (Figure \(3\), right), carrying two anemones and waving them in a dance to dissuade approaching predators. Crabs are sloppy eaters, so the sea anemones get the benefit of the mess as payment. Early on, our hunter–gatherer predecessors were predators, but later they domesticated some of their prey, changing some predator–prey interactions to mutualisms. From the point of view of domesticated sheep (Figure \(4\), left), humans may not be ideal mutualists. We protect them from wolves, harbor them from disease, and shield them from the worst vagaries of weather. But we also confine them to pens, shear off their wool, and kill and eat their lambs. Yet as agriculture advanced, the more people there were on the planet, and the more sheep there were, and vice versa. This is the ecological making of a mutualism. It is similar with crops. Instead of gathering grain and fruit from forest and field, we cleared areas specifically for domesticated plants. We protect the crops from competition with other plants, work to keep them free of disease, and add water and nutrients to the soil to help them grow. For thousands of years we used beasts of burden to accelerate cultivation, also in mutualisms with those beasts. Now we are entering a wholly synthetic phase, in which teams of satellite-guided tractors navigate fields with the power of a thousand horses (Figure \(4\) right). For a thought exercise, you may want to ponder our relationship with machines. If we were mutualists with draft horses, are we now mutualists with the machines that have replaced them? As you proceed through the next chapters, consider whether our relationships with our machines meet the ecological requirements of mutualisms.
Herbivory is a kind of predation in which the prey is a plant. Upon detecting herbivory, the plant may pump toxins into the leaf to dissuade the herbivore. In response, the herbivore may quickly chew out a pattern to isolate part of the leaf from the toxins and then dine in relative salubrity (Figure \(1\), left). Multitudes of leaf cutter ants can be formidable herbivores (Figure \(1\), middle), cutting luggable-sized pieces of leaves to take back to their nests. The ants do not eat the leaves, but chew them further, feed them to fungus, and then eat the fungus—creating an ant–fungus mutualism, with ants being predators on the trees. At a larger scale, multitudes of bison were formidable herbivores of the prairie (Figure \(1\), right). The native tallgrass prairies were wildflower gardens, with a few dozen species of grasses but hundreds of species of flowers—said to bloom a different color each week. Beavers fell whole aspen trees (Figure \(2\), left) to make their dams and lodges (middle and right), and also for food. This sounds like herbivory. At the same time, however, they girdle and kill other species of trees that they do not use as food, clearing the way for new aspen. This is mutualism. The photo on the right in Figure \(2\) shows a pond colonized by beaver after an absence of more than a century. Though it is midsummer, a few of the trees are defoliated— including one giant bur oak—because beaver have chewed off the cambium layer all around the trees, girdling and killing them. Judiciously followed, this practice would keep the forest in an early successional stage, a condition which favors staple aspen. 9.04: Predation Bears catching fish and a kookaburra ambushing a frog (Figure \(1\), left and middle) are simple kinds of predation. Other ambush strategies are also common. The stonefish (right) is disguised to match its background and waves a lure to attract other fish, who are then instantly swallowed whole. Despite hundreds of millions of years of evolution, the trick still works. Cats typically pursue ambush strategies, lying in wait (Figure \(2\), left), whereas dogs typically pursue sustained chasing strategies. This means that cats must be relatively odorless to avoid detection, but dogs need not be. In a curious ambush strategy only recently achieved, herons wave gathered feathers to attract fish, then drop the feathers and grab the fish (Figure \(2\), right). Such use of feathers is apparently an animal “meme” that has spread rapidly through heron populations after being discovered by some Einstein-heron. Species interactions can lead to remarkable evolutionary adaptations, including the non-messy way of eating an egg (Figure \(3\) left). Nature acknowledges neither kindness nor cruelty, but parasitoidism seems one of the cruelest strategies (Figure \(3\), right). Here an Ammophilia wasp is carrying a caterpillar not to kill and eat, but as a living hatchery for her eggs. Young wasps developing from these eggs consume the caterpillar from within as the caterpillar remains alive, transforming it into wasp larvae. Parasitoidism is widespread. When the predator is small relative to the prey, predator– prey interactions are called “parasitism.” At left in Figure \(4\) is a blood-sucking mosquito attached to an unfortunate human. At right is a beetle seemingly overwhelmed with mites. 
When the predator is much smaller still, it is called a “pathogen” and the interactions are called “infection” and “disease.” 9.05: Predatory plants Peatlands can form over vast areas where the habitat is isolated from a normal supply of nutrients (Figure \(1\), left) and life there must endure low levels of nitrogen. In this situation, some plants become predators. The pitcher plant (Figure \(1\), middle), for example, attracts insects with nectar and then eats them to obtain nitrogen, trapping them in a pool of digestive fluid at the bottom of a tall green vase with slippery, unidirectional sides of downward-pointing hairs. The sundew (Figure \(1\), right) seems simpler, capturing unwary insects in sticky droplets and then consuming them. 9.06: Defense Prey develop remarkable defenses against predators, following the processes of evolution, and provide warnings of the existence of their defenses. Some such defenses and advertisements are real, like the fetid fluid sprayed by skunks (Figure \(1\) left) that can deter even large bears from attacking. The fire-bellied toad (next to left) is filled with toxins and its bright color advertises, “Do not eat me!” Others species benefit from complex deception to keep predators away, such as the harmless clear-winged moth colored to look anything but harmless (next to right), and an edible caterpillar in disguise with a viper-like tail (right). Hiding is a simple, common strategy for escaping predators. On the left in Figure \(2\) is a young grasshopper on pebbles. Can you see it? Zoom in and look just a little below and to the left of center. The grasshopper’s head is downward and its tail is upward and slightly to the left, with one antenna and one leg clearly visible once you see them. The killdeer (Figure \(2\), right) tempts predators who get too close by stumbling away from the nest, feigning an injured wing and making itself look like an easy catch. Once it has lured the predator far from the nest, it lifts quite competently into the air and flies off. 9.07: Competition Examples of competition seem both more subtle and more ordinary than examples of mutualism and predation. But competition is pervasive. North America has many ecosystems—the tundra of the Arctic, the deserts of the Southwest, the giant conifers of the Pacific Northwest—but the three largest ecosystems merge in a triple ecotone in the Upper Midwest, an area which exemplifies competition among plants. Here, the needle-leaf forests stretch north to the arctic, the broad-leaf forests extend east to the Atlantic, and the prairies’ amber waves of grain flow west to the Rockies. Figure \(1\) shows a whirlpool of competition at this broad triple ecotone. White pines stand tall above the deciduous trees in the background, with Big Bluestem and other native prairie grasses setting seed in the foreground. Staghorn Sumac in red fall colors tries to hold its own in the middle, with pines invading behind it. Leaves of Bur Oak are browning for the oncoming winter. While it may seem a peaceful scene, for the plants it is a scene of intense competition for their very existence. Fire is a foe of trees (Figure \(1\) right), killing many instantly, and thus favoring grasses and prairie flowers. Times of moister conditions allow trees to reenter the grasslands, eventually shading the grassland vegetation to death if the moisture persists. 
But complexities of weather and climate have kept this competitive tension zone intact for thousands of years, with no group permanently gaining the competitive advantage.
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/09%3A_Embodied_by_Natural_Selection/9.03%3A_Herbivory.txt
In differential equation models, the basic population dynamics among species become visible at a glance in “phase space.” The concepts and applications of phase spaces were originally worked out late in the nineteenth century by Henri Poincaré and others for the dynamical systems of physics, but the mathematical foundations also apply to the theories of ecology. (At left is Poincaré seated with Marie Curie at the first Solvay Conference in 1911. As a point of interest, Marie Curie is the only person to have won Nobel Prizes in two different sciences—and she was nominated the first time before her doctoral defense!) In a phase space with two species interacting—in competition, predation, or mutualism—the abundance of one species occupies the horizontal axis and that of the other occupies the vertical axis. This makes each possible pair of population values (N1,N2) into a point in the phase space. For example, a measured average abundance of 1.55 individuals per square meter for Species 1 and of 1.1 individuals per square meter for Species 2 corresponds to the point marked with an ‘×’ in Figure $1$ — 1.55 units to the right on the horizontal axis and 1.1 units up the vertical axis. If Species 1 is rare, at 0.05 individuals per square meter, and Species 2 is at 0.85 individuals per square meter, the point is that marked with a ‘+’, near the left in Figure $1$. A phase space, however, is not about the size of populations, but rather about how the populations are changing over time. That change, $\frac{dN_1}{dt}\,=\,f_1(N_1,\,N_2),\;\;\frac{dN_2}{dt}\,=\,f_2(N_1,\,N_2)$, is made visible as arrows emerging from each point. Suppose that, at the time of the measurement of the populations marked by ×, Species 1 is decreasing slightly and Species 2 is increasing relatively strongly. Decreasing for Species 1 means moving to the left in the phase space, while increasing for Species 2 means moving up, as shown in the inset of Figure $1$. The net direction of change would thus be north-northwest. In the opposite direction, for the populations marked by +, if Species 1 is increasing slightly and Species 2 is decreasing relatively strongly, the direction of change would be south-southeast. Arrows in the phase space point in the direction of immediate population change. But as the populations change, ecological conditions also change and the paths curve. Figure $2$ shows in green how the populations change in this example as time passes. The pair of abundances starting from × moves up, with Species 1 decreasing at first and then both species increasing and finally coming to rest at the green dot, which marks a joint carrying capacity. From the +, on the other hand, Species 2 decreases uniformly but Species 1 increases at first and then reverses direction. In this case both species go extinct by arriving at the origin (0,0). Something significant separates the + from the ×. What separates them can be judged by calculating an arrow at many points throughout the phase space (Figure $3$). Following the arrows from any pair of abundances (N1,N2) traces the future abundances that will arise as time progresses, and following the arrows backwards shows how the populations could have developed in the past. Note that the arrows seem to be avoiding the open circle near the lower left (at about 0.5, 0.3). That is an Allee point. Some points in the phase space are exceptional: along certain special curves, the arrows aim exactly horizontally or exactly vertically. 
This means that one of the two populations is not changing—Species 1 is not changing along the vertical arrows, and Species 2 is unchanged along the horizontal arrows. These special curves are the isoclines—from the roots ‘iso-,’ meaning ‘same’ or ‘equal,’ and ‘-cline,’ meaning ‘slope’ or ‘direction.’ The two isoclines of Species 2 are shown in red in Figure $4$, one along the horizontal axis and the other rising and curving to the right. On the horizontal axis, the abundance of Species 2 is zero. Therefore it will always stay zero, meaning it will not change and making that entire axis an isocline. Along the other red isocline, the arrows emerging exactly from the isocline point exactly right or left, because the system is exactly balanced such that the abundance of Species 2 does not change—it has no vertical movement. The situation is similar for the two isoclines of Species 1, shown in blue in Figures $4$ and $5$—one along the vertical axis and the other rising and curving upward. Along the blue curves, the arrows emerging exactly from the isocline point exactly up or down. Again, along the blue isocline the system is exactly balanced such that the abundance of Species 1 does not change—it has no horizontal movement. Understanding the isoclines of a system goes a long way toward understanding the dynamics of the system. Where an isocline of one species meets an isocline of the other, the population of neither species changes and therefore an equilibrium forms. These are marked with circles. Notice that the arrows converge on the filled circles (stable equilibria) and judiciously avoid the open circle (unstable equilibrium). And notice that wherever a population (N1,N2) starts, the arrows carry it to one of two outcomes (except, technically, starting on the Allee point itself, where it would delicately remain until perturbed). For further illustration, four population growth curves are traced in green in Figure $5$ and marked as A, B, C, and D. All start with one of the populations at 2.0 and the other at a low or moderate level. And they head to one of the two stable equilibria, avoiding the unstable equilibrium in between. You can view these four growth curves plotted in the usual way, as species abundances versus time, in Figure $6$. Blue indicates Species 1, while red indicates Species 2. 10.02: Phase Space A good way to understand the arrows of phase spaces is to imagine raindrops falling on a curvilinear rooftop and flowing across its surface. Figure \(1\) shows such a surface. Why should thinking of raindrops on rooftops help us understand phase spaces? It is because the differential equations themselves are situated on mathematical surfaces—albeit sometimes higher-dimensional surfaces—with points flowing dynamically across the surfaces, just like raindrops flowing across a roof. It is not completely the same, of course, but it is a useful aid to thought. Instead of raindrops, it can also be useful to think of a marble rolling on the surface. At the bottom of the basin at Point C in Figure \(1\), a marble is trapped. The surface goes up in every direction from this point, so after any small disturbance the marble will roll to the bottom again. Point B corresponds to the equilibrium at the origin, stable in this case, where both species are extinct. A marble resting on this surface at Point B would have to roll uphill in any direction to leave the origin, so after a small positive disturbance it returns to that equilibrium as well. Point B lies below the two-species Allee point. 
For example, Point A divides rain flowing to the left from rain flowing to the right. The basin at Point C corresponds to the carrying capacity, Point B corresponds to extinction at the origin, and Point A corresponds to the unstable Allee point. A marble could be balanced precariously at Point A, with the slightest breath of air sending it to extinction at B or to the carrying capacity in the basin at C, depending on minuscule variations in the breath. Marbles starting close to either of the axes roll to the origin, equilibrium B. Marbles starting farther from the axes are on the other side of a long ridge and roll to the carrying capacity at C.
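The arrow construction described above is easy to automate. The minimal R sketch below turns a pair of growth functions into an arrow direction at any point of the phase space. The particular functions f1 and f2 are assumed placeholders (the text does not give the equations behind Figure $1$), so the printed directions only illustrate the mechanics, not the figure itself.

```
# A minimal sketch: turning growth rates into a phase-space arrow.
# f1 and f2 are assumed placeholder growth functions (not from the text).
f1 = function(N1, N2) N1*(1.2 - N1 - 1.2*N2)     # dN1/dt for some assumed system
f2 = function(N1, N2) N2*(0.8 - 0.5*N1 - N2)     # dN2/dt for some assumed system

arrow_at = function(N1, N2)
{ dN1 = f1(N1, N2);  dN2 = f2(N1, N2)            # instantaneous change in each species
  c(dN1 = dN1, dN2 = dN2,
    degrees = atan2(dN2, dN1)*180/pi) }          # direction: 0 = east, 90 = north

arrow_at(1.55, 1.10)   # the abundances marked by the 'x' in the text
arrow_at(0.05, 0.85)   # the abundances marked by the '+'
```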
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/10%3A_Phase_Space/10.01%3A_Chapter_Introduction.txt
Think about shapes on which a marble could remain stationary, possibly balanced precariously, on a complicated two-dimensional surface in three-dimensional space, with peaks and valleys in their structure. Figure $1$ shows seven possible configurations for remaining stationary. Configuration A is a summit, a high point with respect to its surroundings. It curves downward in every direction. It is therefore unstable—a marble perched on top can roll away in any direction. Configuration C is the opposite, a basin. The surface curves upward in every direction. It is stable because a marble resting at the bottom rolls back from a disturbance in any direction. Configuration B is like a combination of A and C. It is called a “saddle” for its once-ubiquitous shape (right). It curves upward in some directions and downward in others. A marble resting at its very center is unstable because it can roll away in many directions. Configurations D, E, and F are related to A, B, and C, but are level in at least one direction. A “ridge,” Configuration D, has equilibria all along its very top. A marble balanced precariously there and nudged could conceivably move to a new equilibrium along the ridge, if the nudge were aligned with infinite exactitude; almost certainly, however, the marble would roll off. This configuration has an infinite number of equilibria—all along the ridge—but none of them are stable. A “trough,” Configuration F, is the opposite of a ridge, with equilibria all along its lowest levels. A marble resting there and nudged will move to a new equilibrium position at the base of the trough. Again there are an infinite number of equilibria—all along the base—but none are stable because, pushed along the trough, the marble does not return to its previous location. The equilibria are neutrally stable, however, in the ecological view. An “inflection,” Configuration E, is like a combination of D and F, changing slope and becoming level, but then resuming the same direction as before and not changing to the opposite slope. It too has a level line with an infinite number of equilibria, all unstable, with marbles rolling away from the slightest nudge in half of the possible directions. Configuration G, a perfectly flat “plain,” is perhaps easiest to understand. A marble can rest anywhere, so every point on the flat surface is an equilibrium. But a marble will not return to its former position if nudged, so no equilibrium on the flat surface is stable. In ecology this situation is sometimes called “neutrally stable;” in mathematics it is called “unstable.” With three-dimensional images and human cognitive power, it is possible to visualize a surface at a glance, such as in Figure 10.1.1, and judge the implications for populations growing according to equations that correspond to that surface. You can see at a glance if it is curving up everywhere or down everywhere, if it combines upward and downward directions, or if it has level spots. You can classify each equilibrium into the configurations of Figure $1$. But how can that judgment be quantified, made automatic? The method of eigenvectors and eigenvalues accomplishes this. Think of the prefix eigen- as meaning “proper,” as in “proper vector” or “proper value.” The idea will become clear shortly. The method of eigenvectors and eigenvalues was developed in stages over the course of more than two centuries by some of the best mathematical minds, and now we can apply it intact to ecology. 
Think of a one-dimensional slice through the surface of Figure 10.1.1, successively passing through points B, A, C, and beyond. It would look like the top of Figure $2$. As before, a marble balanced precisely at A would be unstable, ready to roll either toward B or C. The equilibrium points are level points where the slope is zero, as they are on the surface of Figure 10.1.1. From calculus, this is where the derivative is zero, where dy/dx = 0. Those slopes of zero are marked by green horizontal lines in the figure. The middle graph of Figure $2$ shows the sign of the derivative, dy/dx, plus or minus. The sign function on the vertical axis, sgn(u), is equal to zero if u is zero but is equal to plus or minus one if u is positive or negative, respectively. Whether an equilibrium is in a trough or at a summit is determined by how the slope is changing exactly at the equilibrium point. From calculus, that is the second derivative, $\frac{d^2y}{dx^2}$, recording changes in the first derivative, dy/dx—just as the first derivative records changes in the surface itself. The sign of the second derivative is shown in the bottom part of Figure $2$. Wherever the slope is increasing at an equilibrium point—that is, changing from sloping down on the left to sloping up on the right—that is a basin. Wherever it is decreasing at an equilibrium point—changing from sloping up at the left to sloping down at the right—that is a summit. Whether an equilibrium point is stable or not can thus be determined mathematically merely from the sign of the second derivative of the surface at that point! This is easy if there is only one species, as in the models of earlier chapters, with only one direction to consider. But it becomes tricky when two or more species are interacting, for an infinite number of directions become available. It might seem that a configuration will be a basin if the surface curves upward in both the x and y directions, as in Configuration C of Figure $1$. But have a look at the three parts of Figure $3$. Part A is a surface with a trough aligned with the axes. Looking along the x-axis—which would be the N1 axis showing the abundance of Species 1—the surface is curving up on both sides of its minimum (white curve). However, looking along the y-axis—the N2 axis showing the abundance of Species 2—reveals that it is exactly level in that direction (green line), meaning the equilibrium is not stable. But suppose the same surface is rotated 45 degrees, as in part B of the figure. The surface curves upward not only along the x-axis (white curve) but also along the y-axis (green curve). Yet the surface is the same. Contrary to what might have been expected, curving upward in both the x and y directions does not mean the configuration is a basin! Understanding the structure means looking in the proper directions along the surface, not simply along the axes. This is what eigenvalues and eigenvectors do. They align with the “proper” axes for the surface, as illustrated in part C. No matter how twisted, skewed, or rescaled the surface is with respect to the axes, the eigenvectors line up with the “proper” axes of the surface, and the eigenvalues measure whether the slope is increasing or decreasing along those axes at an equilibrium.

Box $1$. Rules of eigenvalues for hill-climbing systems.
1. If all eigenvalues are negative, the equilibrium is stable.
2. If any eigenvalue is positive, the equilibrium is unstable.
3. If some or all of the eigenvalues are zero and any remaining eigenvalues are negative, there is not enough information in the eigenvalues to know whether the equilibrium is stable or not. A deeper look at the system is needed.

In short, if all the eigenvalues are positive, the equilibrium is a basin, as in Figures 10.1.1C and $1$C. If all the eigenvalues are negative, the equilibrium is a summit, as in Figures 10.1.1A and $1$A. And if the eigenvalues are of mixed signs, or if some are zero, then we get one of the other configurations. (See Box $1$.) It turns out that the proper axes at each equilibrium point—the eigenvectors—can be determined exactly from only four numbers, and how much the slope is increasing or decreasing at each equilibrium point—the eigenvalues—can be determined at the same time from the same four numbers. These are the four partial derivatives in what is called the “Hessian matrix” of the surface or, equivalently, in the “Jacobian matrix” of the population growth equations. An understanding of these matrices and their applications has developed in mathematics over the past two centuries. By expending some effort and attention you can work the eigenvalues out mathematically with pencil and paper. However, you will likely employ computers to evaluate the eigenvalues of ecological systems. This can be done with abstract symbols in computer packages such as Mathematica or Maxima, or numerically in programming languages such as R. For standard two-species systems, we have worked out all equilibria and their corresponding eigenvalues. These are recorded in Table $1$ in mathematical notation and in Program $1$ as code, and identify the equilibria and stability for all predation, mutualism, and competition systems represented by the two-species formulae, which are copied into the table for reference.

Table $1$. Equilibria and eigenvalues of the two-species formulae.

| Location | Equilibrium | Eigenvalues |
| --- | --- | --- |
| Origin (both species extinct) | $(0,\,0)$ | $(r_1,\,r_2)$ |
| Horizontal axis (Species 1 at $K_1$) | $\left(-\frac{r_1}{s_{1,1}},\,0\right)$ | $\left(-r_1,\,\frac{q}{s_{1,1}}\right)$ |
| Vertical axis (Species 2 at $K_2$) | $\left(0,\,-\frac{r_2}{s_{2,2}}\right)$ | $\left(-r_2,\,\frac{p}{s_{2,2}}\right)$ |
| Interior (coexistence) | $\left(\frac{p}{a},\,\frac{q}{a}\right)$ | $\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ |

with $a\,=\,s_{1,2}\,s_{2,1}\,-\,s_{1,1}\,s_{2,2}$, $b\,=\,r_1\,s_{2,2}\,(s_{2,1}\,-\,s_{1,1})\,+\,r_2\,s_{1,1}\,(s_{1,2}\,-\,s_{2,2})$, $c\,=\,-pq$, $p\,=\,r_1\,s_{2,2}\,-\,r_2\,s_{1,2}$, and $q\,=\,r_2\,s_{1,1}\,-\,r_1\,s_{2,1}$, in the ecological equations for two interacting species,

$\frac{1}{N_1}\,\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,1}N_1\,+\,s_{1,2}N_2$

$\frac{1}{N_2}\,\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,1}N_1\,+\,s_{2,2}N_2$

where N1, N2 are population abundances of Species 1 and 2, r1, r2 are intrinsic growth rates, s1,1, s2,2 measure species’ effects on themselves, and s1,2, s2,1 measure effects between species.

p = r1*s22 - r2*s12;   # Compute useful sub-
q = r2*s11 - r1*s21;   #  formulae.
a = s12*s21 - s11*s22;
b = r1*s22*(s21-s11) + r2*s11*(s12-s22);
c = -p*q;
                              # Compute the equilibria.
x00=0;        y00=0;          # (at the origin)
x10=-r1/s11;  y10=0;          # (on the x-axis)
x01=0;        y01=-r2/s22;    # (on the y-axis)
x11=p/a;      y11=q/a;        # (at the interior)
                              # Compute the corresponding four
v00= r1;      w00=r2;         #  pairs of eigenvalues
v10=-r1;      w10=q/s11;      #  (real part only).
v01=-r2;      w01=p/s22;
v11=(-b-Sqrt(b^2-4*a*c))/(2*a);
w11=(-b+Sqrt(b^2-4*a*c))/(2*a);

Program $1$. The code equivalent to Table $1$, for use in computer programs. Sqrt(w) is a specially written function that returns 0 if w is negative (that is, it returns the real part of the complex number $0\,+\,\sqrt{w}\,i$). 
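As a small numerical check of the rotated-trough example in part B of Figure $3$, take $z\,=\,(x+y)^2/2$ as one algebraic stand-in for such a surface (an assumption for illustration only). It curves upward along both the x- and y-axes, yet it is perfectly level along the diagonal direction, which is exactly the situation the eigenvalues and eigenvectors are meant to expose:

```
# Hessian of the assumed rotated trough z = (x + y)^2 / 2:
# all of its second partial derivatives equal 1.
H = matrix(c(1, 1,
             1, 1), nrow = 2, byrow = TRUE)
eigen(H)
# $values  : 2 0  -- upward curvature along one proper axis, level along the other
# $vectors : the diagonals (1,1)/sqrt(2) and (-1,1)/sqrt(2), the trough's proper axes
```

The zero eigenvalue flags the level direction that inspection along the x- and y-axes alone would miss.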
The formulae in Table $1$ work for any two-species RSN model—that is, any model of the form $\frac{1}{N_i}\frac{dN_i}{dt}\,=\,r_i\,+\,s_{i,i}N_i\,+\,s_{i,j}N_j$ with constant coefficients—but formulae for other models must be derived separately, either with a software package or by following the methods for Jacobian matrices.

Box $2$. Parameters for a sample competitive system.

$r_1\,=\,1.2$, $r_2\,=\,0.8$ (intrinsic growth rates)
$s_{1,1}\,=\,-1$, $s_{2,2}\,=\,-1$ (self-limiting terms)
$s_{1,2}\,=\,-1.2$, $s_{2,1}\,=\,-0.5$ (cross-limiting terms)
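The next section works through these Box $2$ parameters by hand. As a cross-check, the Program $1$ formulae can be evaluated directly in R; the only addition is a small Sqrt helper written as the Program $1$ caption describes (returning 0 when its argument is negative). This sketch simply plugs the Box $2$ values into the code above.

```
r1  = 1.2;  r2  = 0.8                    # Box 2: intrinsic growth rates
s11 = -1;   s22 = -1                     # Box 2: self-limiting terms
s12 = -1.2; s21 = -0.5                   # Box 2: cross-limiting terms

Sqrt = function(w) if (w < 0) 0 else sqrt(w)   # real part only, as in Program 1

p = r1*s22 - r2*s12;  q = r2*s11 - r1*s21
a = s12*s21 - s11*s22
b = r1*s22*(s21 - s11) + r2*s11*(s12 - s22)
c = -p*q

rbind(origin     = c(0,       0,        r1,  r2),        # columns: N1, N2,
      horizontal = c(-r1/s11, 0,        -r1, q/s11),      #  eigenvalue 1,
      vertical   = c(0,       -r2/s22,  -r2, p/s22),      #  eigenvalue 2
      interior   = c(p/a, q/a, (-b - Sqrt(b^2 - 4*a*c))/(2*a),
                               (-b + Sqrt(b^2 - 4*a*c))/(2*a)))
```

The interior row comes out at about (0.6, 0.5) with eigenvalues near −0.123 and −0.977, matching the hand calculation that follows.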
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/10%3A_Phase_Space/10.03%3A_Surface_generating_the_flow.txt
For an example of finding equilibria and stability, consider two competing species with intrinsic growth rates r1 = 1.2 and r2 = 0.8. Let each species inhibit itself in such a way that s1,1 = −1 and s2,2 = −1, let Species 2 inhibit Species 1 more strongly than Species 1 inhibits itself, with s1,2 = −1.2, and let Species 1 inhibit Species 2 less strongly than Species 2 inhibits itself, with s2,1 = −0.5. These conditions are summarized in Box 10.2.2 for reference. The question is, what are the equilibria in this particular competitive system, and what will their stability be? First, there is an equilibrium at the origin (0,0) in these systems, where both species are extinct. This is sometimes called the “trivial equilibrium,” and it may or may not be stable. From Table 10.2.1, the eigenvalues of the equilibrium at the origin are r1 and r2—in this case 1.2 and 0.8. These are both positive, so from the rules for eigenvalues in Box 10.2.1, the equilibrium at the origin in this case is unstable. If no individuals of either species exist in an area, none will arise. But if any individuals of either species somehow arrive in the area, or if both species arrive, the population will increase. This equilibrium is thus unstable. It is shown in the phase space diagram of Figure $1$, along with the other equilibria in the system.

Box $1$. Calculated results for the sample competitive system.

| Equilibrium | Coordinates | Eigenvalues | Condition |
| --- | --- | --- | --- |
| Origin | (0, 0) | (1.2, 0.8) | Unstable |
| Horizontal axis | (1.2, 0) | (−1.2, 0.2) | Unstable |
| Vertical axis | (0, 0.8) | (−0.8, 0.24) | Unstable |
| Interior | (0.6, 0.5) | (−0.123, −0.977) | Stable |

On the horizontal axis, where Species 2 is not present, the equilibrium of Species 1 is N'1 = −r1/s1,1 = 1.2. That is as expected—it is just like the equilibrium N' = −r/s for a single species—because it indeed is a single species when Species 2 is not present. As to the stability, one eigenvalue is −r1, which is −1.2, which is negative, so it will not cause instability. For the other eigenvalue at this equilibrium, you need to calculate q from Table 10.2.1. You should get q = −0.2, and if you divide that by s1,1, you should get 0.2. This is positive, so by the rules of eigenvalues in Box 10.2.1, the equilibrium on the horizontal axis is unstable. Thus, if Species 1 is at its equilibrium and an increment of Species 2 arrives, Species 2 will increase and the equilibrium will be abandoned. Likewise, on the vertical axis, where Species 1 is not present, the equilibrium of Species 2 is N'2 = −r2/s2,2 = 0.8. Calculate the eigenvalues at this equilibrium from Table 10.2.1: you should get p = −0.24, and the eigenvalues are then −r2 = −0.8 and p/s2,2 = 0.24. With one negative and the other positive, by the rules of eigenvalues in Box 10.2.1 the equilibrium on the vertical axis is also unstable. Finally, for the fourth equilibrium—the interior equilibrium where both species are present—calculate a, b, and c from the table. You should get a = −0.4, b = −0.44, and c = −0.048. Now the interior equilibrium is N'1 = p/a = 0.6 and N'2 = q/a = 0.5. But is it stable? Notice the formula for the eigenvalues of the interior equilibrium in Table 10.2.1, in terms of a, b, and c. It is simply the quadratic formula! This is a clue that the eigenvalues are embedded in a quadratic equation, $ax^2\,+\,bx\,+\,c\,=\,0$. And if you start a project to derive the formula for the eigenvalues with pencil and paper, you will see that indeed they are. In any case, working it out more simply from the formula in the table, you should get −0.123 and −0.977. 
Both are negative, so by the rules of Box 10.2.1 the interior equilibrium for this set of parameters is stable. As a final note, the presence of the square root in the formula suggests that eigenvalues can have imaginary parts, if the square root covers a negative number. The rules of eigenvalues in Box 10.2.1 still apply in this case, but only to the real part of the eigenvalues. Suppose, for example, that the eigenvalues are $\frac{-1\pm\sqrt{-5}}{2}\,=\,-0.5\,\pm\,1.118i$. These would be stable because the real part, −0.5, is negative. But it turns out that because the imaginary part, $\pm\,1.118i$, is not zero, the system would cycle around the equilibrium point, as predator–prey systems do. In closing this part of the discussion, we should point out that eigenvectors and eigenvalues have broad applications. They reveal, for instance, electron orbitals inside atoms (right), alignment of multiple variables in statistics, vibrational modes of piano strings, and rates of the spread of disease, and are used for a bounty of other applications. Asking how eigenvalues can be used is a bit like asking how the number seven can be used. Here, however, we simply employ them to evaluate the stability of equilibria. Program $1$. Sample program in R to generate a phase space of arrows, displaying the locations of the beginnings and ends of the arrows, which are passed through a graphics program for display. The ‘while(1)’ statement means “while forever”, and is just an easy way to keep looping until conditions at the bottom of the loop detect the end and break out.
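The arrow-generating program itself is not listed in this excerpt. Below is a minimal sketch of how such a program might look in R; it uses plain for loops rather than the ‘while(1)’ construction, steps each grid point forward through a short interval dt under the two-species equations, and prints the start and end of each arrow for a plotting program. The parameter values are simply the Box 10.2.2 competition parameters, used here as an assumed example.

```
# A sketch (not the original program): print arrow endpoints across a phase space.
r1  = 1.2;  r2  = 0.8                    # assumed example parameters (Box 10.2.2)
s11 = -1;   s22 = -1;  s12 = -1.2;  s21 = -0.5
dt  = 0.05                               # short time interval defining arrow length

for (N1 in seq(0.1, 1.5, by = 0.1))      # sweep a grid of starting abundances
  for (N2 in seq(0.1, 1.5, by = 0.1))
  { dN1 = (r1 + s11*N1 + s12*N2)*N1*dt   # change in Species 1 over dt
    dN2 = (r2 + s21*N1 + s22*N2)*N2*dt   # change in Species 2 over dt
    cat(N1, N2, N1 + dN1, N2 + dN2, "\n") }   # arrow start and end
```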
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/10%3A_Phase_Space/10.04%3A_A_phase_space_example.txt
11: State Spaces Closely related to “phase spaces” are “state spaces”. While phase spaces are typically used with continuous systems, described by differential equations, state spaces are used with discrete-time systems, described by difference equations. Here the natural system is approximated by jumps from one state to the next, as described in Chapter 7, rather than by smooth transitions. While the two kinds of spaces are similar, they differ in important ways. Inspired by the complexities of ecology, and triggered in part by Robert May’s bombshell paper of 1976, an army of mathematicians worked during the last quarter of the twentieth century to understand these complexities, focusing on discrete-time systems and state spaces. One endlessly interesting state space is the delayed logistic equation (Aronson et al. 1982), an outgrowth of the discrete-time logistic equation described in Chapter 7. For a biological interpretation of the delayed logistic equation, let’s examine the example of live grassland biomass coupled with last year’s leaf litter. Biomass next year (Nt+1) is positively related to biomass this year (Nt), but negatively related to biomass from the previous year (Nt−1). The more biomass in the previous year, the more litter this year and the greater the inhibitory shading of next year’s growth. The simplest approximation here is that all biomass is converted to litter, a fixed portion of the litter decays each year, and inhibition from litter is linear. This is not perfectly realistic, but it has the essential properties for an example. Field data and models have recorded this kind of inhibition (Tilman and Wedin, Nature 1991). Program $1$. A program to compute successive points in the state space of the delayed logistic equation. The basic equation has N1 as live biomass and N2 as accumulated leaf litter. N1 and N2 are thus not two different species, but two different age classes of a single species. $N_1\,(t\,+\,1)\,=\,rN_1(t)\,(1\,-\,N_2(t))$ $N_2\,(t\,+\,1)\,=\,N_1\,(t)\,+\,pN_2\,(t)$ The above is a common way to write difference equations, but subtracting Ni from each side, dividing by Ni, and making p = 0 for simplicity gives the standard form we have been using. $\frac{1}{N_1}\frac{∆N_1}{∆t}\,=\,(r\,-\,1)\,-\,rN_2\,=\,r_1\,+\,s_{1,2}N_2$ $\frac{1}{N_2}\frac{∆N_2}{∆t}\,=\,-1\,+\frac{1}{N_2}\,N_1\,=\,r_2\,+\,s_{2,1}N_1$ Notice something new. One of the coefficients, s2,1, is not a constant at all, but is the reciprocal of a dynamical variable. You will see this kind of thing again at the end of the predator–prey chapter, and in fact it is quite a normal result when blending functions (Chapter 18) to achieve a general Kolmogorov form. So the delayed logistic equation is as follows: $\frac{1}{N_1}\frac{∆N_1}{∆t}\,=\,r_1\,+\,s_{1,2}N_2$ $\frac{1}{N_2}\frac{∆N_2}{∆t}\,=\,r_2\,+\,s_{2,1}N_1$ where r1 = r−1, r2 = −1, s1,2 = −r, and s2,1 = 1/N2. Notice also that ri with a subscript is different from r without a subscript. For small values of r, biomass and litter head to an equilibrium, as in the spiraling path of Figure $2$. 
Here the system starts at the plus sign, at time t = 0, with living biomass N1 =0.5 and litter biomass N2 =0.1. The next year, at time t =1, living biomass increases to N1 =0.85 and litter to N2 = 0.5. The third year, t = 2, living biomass is inhibited slightly to N1 = 0.81 and litter builds up to N2 = 0.85. Next, under a heavy litter layer, biomass drops sharply to N1 =0.22, and so forth about the cycle. The equilibrium is called an “attractor” because populations are pulled into it. For larger values of r, the equilibrium loses its stability and the two biomass values, new growth and old litter, permanently oscillate around the state space, as in the spiraling path of Figure $3$. The innermost path is an attractor called a “limit cycle.” Populations starting outside of it spiral inward, and populations starting inside of it spiral outward— except for populations balanced precariously exactly at the unstable equilibrium point itself. For still larger values of r, the system moves in and out of chaos in a way that itself seems chaotic. By r = 2.15 in Figure $4$, the limit cycle is becoming slightly misshapen in its lower left. By r = 2.27 it has become wholly so, and something very strange has happened. A bulge has appeared between 0 and about 0.5 on the vertical axis, and that bulge has become entangled with the entire limit cycle, folded back on itself over and over again. What happens is shown by magnifying Region 1, inside the red square. Figure $5$ shows the red square of Figure $4$ magnified 50 diameters. The tilted U-shaped curve is the first entanglement of the bulge, and the main part of the limit cycle is revealed to be not a curve, but two or perhaps more parallel curves. Successive images of that bulge, progressively elongated in one direction and compressed in the other, show this limit cycle to be infinitely complex. It is, in fact, not even a one-dimensional curve, but a “fractal,” this one being greater than one-dimensional but less than two-dimensional! Figure $6$ magnifies the red square of Figure $5$, an additional 40 diameters, for a total of 2000 diameters. The upper line looks single, but the lower fatter line from Figure $5$ is resolved into two lines, or maybe more. In fact, every one of these lines, magnified sufficiently, becomes multiple lines, revealing finer detail all the way to infinity! From place to place, pairs of lines fold together in U-shapes, forming endlessly deeper images of the original bulge. In the mathematical literature, this strange kind of attractor is, in fact, called a “strange attractor.” Such strange population dynamics that occur in nature, with infinitely complex patterns, cannot arise in phase spaces of dynamical systems for one or two species flowing in continuous time, but can arise for three or more species in continuous time. And as covered in Chapter 7, they can arise for even a single species approximated in discrete time. What we have illustrated in this chapter is perhaps the simplest ecological system with a strange attractor that can be visualized in a two-dimensional state space.
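Program $1$ of this chapter is described above but not listed in this excerpt. The sketch below is a minimal stand-in: p is set to 0, following the simplification in the text, and r = 1.9 is an assumed value chosen because it reproduces, to the precision quoted, the inward-spiraling trajectory just described (0.5 and 0.1, then about 0.85 and 0.5, then 0.81 and 0.85, and so on).

```
# A sketch of the delayed logistic iteration: N1 = live biomass, N2 = litter.
r  = 1.9                     # assumed growth parameter, small enough to spiral inward
p  = 0                       # litter decays completely each year, as in the text
N1 = 0.5;  N2 = 0.1          # starting biomass and litter at t = 0

for (t in 0:9)
{ cat(t, round(N1, 2), round(N2, 2), "\n")   # one state-space point per year
  N1new = r*N1*(1 - N2)      # next year's live biomass, inhibited by this year's litter
  N2new = N1 + p*N2          # next year's litter is this year's biomass
  N1 = N1new;  N2 = N2new
}
```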
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/11%3A_State_Spaces/11.01%3A_State_Spaces.txt
Competition and mutualism can be understood without much attention to the sizes of the species involved. But predation is quite different. Think of a producer, the prey, and a consumer, the predator. When the consumer is extremely large and the producer very small, as with a whale and krill, the relationship is called filter feeding. There the predator kills the prey. When the producer and consumer are roughly matched in size, they are called predators and prey. When the consumer is much smaller than the producer, but still independently mobile, the relationship is called parasitism. And when the consumer is much smaller still, and has difficulty moving around on its own, the relationship is called infection and disease. Symptoms of disease are part of the ecology of disease. In parasitism and disease, the consumer often does not kill the producer. 12: Predator and Prey Amazing mechanisms for both capturing prey and avoiding predators have been discovered through evolution. Fulmar chicks, for example, can direct “projectile vomit” at predators approaching the nest too closely (Figure \(1\)). This is not just icky for the predator. By damaging the waterproofing of an avian predator’s feathers, this ultimately can kill the predator. The chick’s projectile vomit is thus a lethal weapon. One of the most remarkable predatory weapons is that of pistol shrimp. These shrimp have one special claw adapted for cavitation, and are capable of shooting bullets at their prey; colonies of these shrimp are loud from the sound of these bullets. But where does an underwater crustacean get bullets? Actually, it creates them from nothing at all—from cavitation. If you’ve ever piloted a powerful motorboat and pushed the throttle too hard, or watched a pilot do so, you’ve seen the propellers start kicking out bubbles, which look like air bubbles. But the propellers are well below the water line, where there is no air. The propellers are in fact creating bubbles of vacuum—separating the water so instantly that there is nothing left in between, except perhaps very low density water vapor. Such bubbles collapse back with numerous blasts, each so powerful that it rips pieces of bronze off the propeller itself, leaving a rough surface that is the telltale sign of cavitation. Figure \(2\). Pistol-packing shrimp shoot cavitation bullets. A pistol shrimp snaps its pistol claw together so quickly that it creates a vacuum where water used to be. With the right circulation of water around the vacuum bubble, the bubble can move, and the shrimp can actually project its bullet of vacuum toward its prey. When the bubble collapses, the effect is like thunder attending a lightning bolt, when air snaps together after the lightning has created a column of near-vacuum. But the consequences are quite different in water. While a loud sound might hurt the ears of a terrestrial animal, the sound does not rip apart the fabric of the animal’s body. This is, however, what intense sounds in water can do, traveling through water and through the water-filled bodies of animals. In effect, pistol shrimp shoot bullets that explode near their prey and numb them into immobility. Somehow evolution discovered and perfected this amazing mechanism! The ultimate weapons of predation, however, are those of our own species. Figure \(3\) (right) shows a remnant bison herd, a few hundred of the tens of millions of bison that migrated the plains not many generations ago. 
No matter how vast their numbers, they were no match for gunpowder and lead bullets, and they dropped to near extinction by the beginning of the twentieth century. The image at the left illustrates the epic efficiency of lead bullets by showing a nineteenth-century pile of bison bones, with members of the predator species positioned atop and aside. 12.02: Ecological communities are complex Before proceeding with simplified models of predation, we want to stress that ecological communities are complex (Figure \(1\)). Fortunately, progress in understanding them comes piece by piece. Complex food webs, like the one illustrated in the figure, can be examined in simpler “motifs.” You have seen in earlier chapters that there are two motifs for a single species: logistic and orthologistic, with exponential growth forming a fine dividing line between them. And in the prior chapter, you saw three motifs for two species: predation, competition, and mutualism. Later you will see that there are exactly forty distinct three-species motifs, one of which is two prey pursued by one predator. This is called “apparent competition” because it has properties of competition.
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/12%3A_Predator_and_Prey/12.01%3A_Further_Examples_of_Predation.txt
For the next several chapters we will consider two species, starting with one predator and one prey. Figure $1$ depicts this situation, with one line sloping down and the other up. The graph on the left describes the prey, because its numbers N1 are reduced when the numbers of predators, N2, increase. Likewise, the graph on the right describes the predator, because its numbers, N2, increase with the density of its prey, N1. The equations of growth are revealed by the slopes and intercepts of the two lines. Since these are both straight lines, $y\,=\,mx\,+\,b$, the equations can be written down simply from the geometry. The intercept on the left is +1 and the slope is −1. The intercept on the right is −1/2 and its slope is +1/2. The equivalent equations of the two lines appear below the graphs. These specific equations can be generalized using symbols in place of actual numbers, writing r1, s1,2, r2, and s2,1 for the intercept +1.0 and slope −1.0 on the left and the intercept −0.5 and slope +0.5 on the right, as follows. $\frac{1}{N_1}\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,2}N_2\,\,\,\,\,\,\,with\,r_1\,=\,+\,1.0,\,\,\,s_{1,2}\,=\,-\,1.0$ $\frac{1}{N_2}\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,1}N_1\,\,\,\,\,\,\,with\,r_2\,=\,-\,0.5,\,\,\,s_{2,1}\,=\,+\,0.5$ Merely by writing down the form of these geometric graphs, the classic Lotka–Volterra predator–prey equations have appeared: $\frac{1}{N_1}\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,2}N_2\,\,\,\,\,\,\,with\,r_1\,>\,0,\,\,\,s_{1,2}\,<\,0$ $\frac{1}{N_2}\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,1}N_1\,\,\,\,\,\,\,with\,r_2\,<\,0,\,\,\,s_{2,1}\,>\,0$ Here is how the equations look in many textbooks, with V for prey density and P for predator density: $\frac{dV}{dt}\,=\,rV\,-\alpha\,VP$ $\frac{dP}{dt}\,=\beta\,VP\,-\,qP$ Volterra arrived at the equation rather differently than we did, with a growth rate r for the prey, reduced by a rate $\alpha$ for each encounter between predator and prey, $V\,\cdot\,P$, and with a natural death rate q for predators and a compensatory growth rate $\beta$ for each encounter, $V\,\cdot\,P$, between predator and prey. To see the equivalence, divide the first equation through by V and the second by P, then set $V\,=\,N_1,\,\,P\,=\,N_2,\,\,r\,=\,r_1,\,\,q\,=\,-r_2,\,\,\alpha\,=\,-s_{1,2},\,\,\beta\,=\,s_{2,1}$. The Lotka–Volterra formulation is thus revealed to be just the r + sN equations in disguise. Figure $1$ exposes the basic predator–prey equations from geometry, which reveal the unity of the equations of ecology, as you saw in Chapter 5. That analysis revealed a form of one-dimensional equation not considered in ecological textbooks—the orthologistic equation—which is needed for understanding human and other rapidly growing populations. Now analyze these equations a bit. Suppose predator and prey densities are both 1, say 1 individual per hectare (N1 = N2 = 1). Substitute 1 for both N1 and N2. What are the growth rates? $\frac{1}{1}\frac{dN_1}{dt}\,=\,1.0\,-\,1.0\,\times\,1\,=\,0$ $\frac{1}{1}\frac{dN_2}{dt}\,=\,-0.5\,+\,0.5\,\times\,1\,=\,0$ The population growth is zero for both species, so the populations do not change. This is an equilibrium. This can be seen in the graphs below. The fact that both growth rates, $\frac{1}{N_1}\frac{dN_1}{dt}$ and $\frac{1}{N_2}\frac{dN_2}{dt}$, cross the horizontal axis at N1 = N2 = 1 (position of the dots) means that growth stops for both. This is called an equilibrium, a steady state, or, sometimes, a fixed point. But what will happen if both populations are 2, say 2 individuals per hectare? 
The prey growth rate, $\frac{1}{N_1}\frac{dN_1}{dt}$, is negative at N2 = 2 (the line is below the horizontal axis) and the predator growth rate, $\frac{1}{N_2}\frac{dN_2}{dt}$, is positive at N1 = 2 (the line is above the horizontal axis). So the prey population will decrease and the predator population will increase. Exactly how the populations will develop over time can be worked out by putting these parameters into the program in Chapter 8. Here is what it shows. For comparison, here is what early experimenters such as Gause and Huffaker showed for populations of protozoa, mites, and other small systems in the middle of the twentieth century: The dynamics here are much the same as those shown in the calculated version of Figure $2$ and the experimental version of Figure $3$, but with stochasticity overlayed on the experimental system. Experimenters, however, had difficulty achieving continual cycling. In simple conditions, the predators would find the prey and eat every last one, and then the predators themselves would all die. Continual cycling could be achieved by providing the prey with places to escape, or making it difficult for the predators to move around the environment.
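The Chapter 8 program is not shown in this excerpt. The crude Euler sketch below is a stand-in: it advances the two equations above with their stated parameters, starting from N1 = N2 = 2, and prints the populations once per time unit so the cycling can be seen.

```
# A crude Euler sketch of the predator-prey equations of this section.
r1 =  1.0;  s12 = -1.0       # prey:     1/N1 dN1/dt = r1 + s12*N2
r2 = -0.5;  s21 =  0.5       # predator: 1/N2 dN2/dt = r2 + s21*N1
N1 = 2;  N2 = 2              # both populations start at 2 individuals per hectare
dt = 0.001                   # small time step

for (step in 1:(20/dt))      # simulate 20 time units
{ dN1 = (r1 + s12*N2)*N1*dt
  dN2 = (r2 + s21*N1)*N2*dt
  N1  = N1 + dN1;  N2 = N2 + dN2
  if (step %% (1/dt) == 0) cat(step*dt, round(N1, 2), round(N2, 2), "\n")
}
```

The printed values rise and fall in the cyclical way taken up in the next section.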
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/12%3A_Predator_and_Prey/12.03%3A_Predator-prey_model.txt
The cycling can be understood better in phase space, where the densities of the two species are represented as two-dimensional points. For example, as explained in Chapter 10, if the prey population is 1.5 and the predator population is 0.5, the population will be 1.5 units to the right on the horizontal axis and 0.5 units up on the vertical axis, at the location of the blue plus sign in the graph. Where does the predator population cease to grow? In the equation $\frac{1}{N_2}\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,1}N_1$ it ceases to grow where $0\,=\,r_2\,+\,s_{2,1}N_1$, or $N_1\,=\frac{-r_2}{s_{2,1}}$. With $r_2$ = −0.5 and $s_{2,1}$ = 0.5, this is a vertical line—the predator isocline—at N1 = 1, as in Figure $2$. To the left of the isocline, prey are sparse and predators decline, as indicated by the downward arrows. To the right of the isocline, in contrast, prey are abundant and predators can increase, as indicated by the upward arrows. Likewise, where does the prey population cease to grow? In the equation $\frac{1}{N_1}\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,2}N_2$ it ceases to grow where 0 = $r_1\,+\,s_{1,2}N_2$, which means $N_2\,=\,\frac{-r_1}{s_{1,2}}$. With $r_1$= 1 and $s_{1,2}$ = −1, this is a horizontal line—the prey isocline—at N2 = 1. Below the isocline, predators are sparse so prey can increase, as indicated by the arrows pointing right. Above the isocline, in contrast, predators are abundant and prey decreases, as indicated by the arrows pointing left. Putting Figures $2$ and $3$ together gives Figure $4$, which shows rotation in the combined arrows. Here the rotation can be deduced by thinking about the dynamics of predator and prey. The rotation is corroborated by using Table 10.2.1 to calculate the eigenvalues. The eigenvalues of the interior equilibrium turn out to be 0±0.707$i$, a number with both real and imaginary parts. The existence of an imaginary part, ±0.707$i$, implies cycling. The real part, 0, means that eigenvalues alone cannot determine the stability—it could be stable, unstable, or neutral. In fact, for this particular case with no self-limitation, deeper mathematical examination shows that the stability is neutral. The dynamics will rotate indefinitely, maintaining whatever cycle it started on. Taking all the data from Figure 12.3.2 and plotting N1 versus N2 gives Figure $5$. The process starts at day 0 with N1 = N2 = 2. One day later, prey have dropped to N1 ≈ 0.5 and predators have increased to N2 ≈ 2.2, marked by the red numeral 1 on the cycle. (By the symbol ‘≈’, we mean “approximately equal to.”) Two days later, prey have dropped to N1 ≈ 0.2 and predators have dropped to N2 ≈ 1.0, marked by the numeral 3. With predators at relatively low levels, prey then start to increase and, four days later, have reached N1 ≈ 1.0, while predators have dropped further to N2 ≈ 0.3, marked by the numeral 7. Two days later, prey have increased to N1 ≈ 3.0 and predators have increased to N2 ≈ 1.0, marked by the numeral 9. Finally, one day later the cycle begins to repeat, as marked with the numeral 10. This is another way of showing the cycling of Figure 12.3.2. In Figure $6$—a flow diagram, the entire phase space can be filled with arrows to show how cycling proceeds everywhere. The path of Figure 12.3.2, displayed in Figure $5$, is overlayed in blue. 12.05: Assumption of the basic model This basic model of interacting predators and prey reveals the tension between population growth and decline, and shows the kind of cycling that characterizes predator–prey systems. 
It has, however, a number of simplifying assumptions that can be relaxed in more detailed studies, including the following:
1. The predator lives only on this prey, and perishes without it.
2. The prey has no carrying capacity of its own—only that imposed by the predator.
3. The environment is homogeneous and prey have no hiding places.
4. Growth is continuous, with no age structure, maturation periods, and so forth.
5. The number of prey taken is proportional to the number of prey present. In other words, predators are never satiated.
6. Genetics are uniform within each species, and there is no evolution.
Relaxing all of these assumptions is a book in itself, but we will relax some of them in sections ahead. 12.06: Independent carrying capacities Self-limitation or self-enhancement of population growth fit within the $r\,+\,sN$ framework. Below, these terms are shown in red. $\frac{1}{N_1}\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,2}N_2\,\color{red}{+\,s_{1,1}N_1}$ $\frac{1}{N_2}\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,1}N_1\,\color{red}{+\,s_{2,2}N_2}$ The self-feedback term for the prey, $s_{1,1}$, is typically negative, reflecting a carrying capacity for the prey in the absence of predators, $K_1\,=\,-r_1\,/\,s_{1,1}$. This tends to stabilize the system, dampening oscillations and leading to a joint equilibrium of predator and prey. On the other hand, the self-feedback term for the predator, $s_{2,2}$, is typically zero, meaning the predators vanish in the absence of prey. But it could be positive, indicating benefits from group hunting and the like. A positive value for $s_{2,2}$ tends to destabilize the system, leading to enlarging oscillations.
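The stabilizing effect of a negative $s_{1,1}$ can be checked with the interior-equilibrium eigenvalues of Table 10.2.1. In the sketch below, the first call uses the parameters of Section 12.03 with no self-limitation and recovers the purely imaginary eigenvalues $0\,\pm\,0.707i$ reported in Section 12.04; the second adds an assumed small prey self-limitation, $s_{1,1}\,=\,-0.1$, and the real parts turn negative, the signature of damped oscillations.

```
# Interior-equilibrium eigenvalues from Table 10.2.1 (real or complex).
eig_interior = function(r1, r2, s11, s12, s21, s22)
{ p = r1*s22 - r2*s12;  q = r2*s11 - r1*s21
  a = s12*s21 - s11*s22
  b = r1*s22*(s21 - s11) + r2*s11*(s12 - s22)
  disc = b^2 - 4*a*(-p*q)
  if (disc >= 0) (-b + c(-1, 1)*sqrt(disc))/(2*a)
  else complex(real = -b/(2*a), imaginary = c(-1, 1)*sqrt(-disc)/(2*a))
}

eig_interior(1, -0.5,  0.0, -1, 0.5, 0)   # no self-limitation: 0 +/- 0.707i
eig_interior(1, -0.5, -0.1, -1, 0.5, 0)   # assumed s11 = -0.1: about -0.05 +/- 0.67i
```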
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/12%3A_Predator_and_Prey/12.04%3A_Phase_Space.txt
In the original Lotka–Volterra formulation, doubling the number of prey in the environment doubles the number of prey taken. The same is true in the equivalent $r\,+\,sN$ formulation explored above. While this may be reasonable at low prey densities, eventually the predators become satiated and stop hunting, as in the image at left in Figure $1$. Satiation will therefore truncate the predator growth curve at some maximum rate, as at right in Figure $2$. In the opposite direction, if prey are not available, the predator population starves, as in the sad image at right in Figure $1$. The straight lines of Figure $2$ and earlier figures, with their negative vertical intercepts, imply a fixed maximum rate of decline; a real predator population need not have one. In the complete absence of prey, vertebrate predators decline more and more rapidly, reaching extinction at a definite time in the future, as induced by the increasingly large rates of decline shown in the right part of Figure $3$. This is a different kind of singularity, one that can actually occur in a finite time. As an exercise and illustration, let us create a predator–prey system in which the predators become satiated and reach a maximum growth rate, but for which there is no maximum death rate in the absence of prey, and see where it leads. For predators, we want to mimic the shape at right in Figure $3$. This has the shape of a hyperbola, $y\,=\,1/x$, but reflected about the horizontal axis and shifted upwards. The equation would be $y\,=\,a\,-\,b/x$, where $a$ and $b$ are positive constants. When $x$ approaches infinity, the term $b/x$ goes to zero and $y$ therefore approaches $a$. It crosses the horizontal axis where $x\,=\,b/a$, then heads downward toward minus infinity as $x$ declines to zero. Such a curve has the right general properties. The predator equation can therefore be the following, with $r_2$ for $a$, $s_{2,1}$ for $-b$, and $N_1$ for $x$. $\frac{1}{N_2}\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,1}\frac{1}{N_1}$ When there are ample prey $N_1$ will be large, so the term $s_{2,1}/N_1$ will be small and the predator growth rate will be near $r_2$. As prey decline, the term $s_{2,1}/N_1$ will grow larger and larger without limit and, since $s_{2,1}$ is less than zero, the predator growth rate will get more and more negative, also without limit. What about the prey equation? The important point here is that predators become satiated, so the chance of an individual prey being caught goes down as the number of prey in the environment goes up. So instead of a term like $s_{1,2}N_2$ for the chance that an individual prey will be taken, it would be more like $s_{1,2}\,N_2/N_1$. $\frac{1}{N_1}\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,2}\frac{N_2}{N_1}\,+\,s_{1,1}N_1$ In other words, the rate of prey being taken increases with the number of predators in the environment, but is diluted as there are more and more prey and predators become satiated. Eventually, with an extremely large number of prey in the area relative to the number of predators, the effect of predators on each individual prey becomes negligible. This creates the following predator–prey system, which takes satiation and starvation into account: $\frac{1}{N_1}\frac{dN_1}{dt}\,=\,r_1\,+\,s_{1,2}\frac{N_2}{N_1}\,+\,s_{1,1}N_1$ $\frac{1}{N_2}\frac{dN_2}{dt}\,=\,r_2\,+\,s_{2,1}\frac{1}{N_1}$ This system could be criticized because it is not “mass balanced.” In other words, one unit of mass of prey does not turn directly into a specific amount of mass of predators. 
But this is not a simple molecular system, and it at least fits more closely the realities of predator and prey behavior. In any case, keep in mind that $s_{1,1}$ is less than 0, to reflect limitation of prey due to crowding and other effects; $s_{2,2}$ is equal to 0, assuming the predator is limited only by the abundance of prey; $s_{1,2}$ is less than 0 because the abundance of predators decreases the growth of prey; and $s_{2,1}$ is also less than 0 because as the number of prey decreases there is an increasingly negative effect on the growth of the predator. The next step is to examine the isoclines for this new set of equations, making a phase-space graph with $N_1$ on the horizontal axis versus $N_2$ on the vertical. Where does the prey growth, $\frac{1}{N_1}\frac{dN_1}{dt}$, cease? Working through some algebra, it is as follows: $\frac{1}{N_1}\frac{dN_1}{dt}\,=\,0\,=\,r_1\,+\,s_{1,2}\frac{N_2}{N_1}\,+\,s_{1,1}N_1$ $\Rightarrow\,\,-s_{1,2}\frac{N_2}{N_1}\,=\,r_1\,+\,s_{1,1}N_1$ $\Rightarrow\,\,N_2\,=\,-\frac{r_1}{s_{1,2}}\,N_1\,-\frac{s_{1,1}}{s_{1,2}}\,N_1^2$ Similarly, where does the predator growth, $\frac{1}{N_2}\frac{dN_2}{dt}$, cease for the same phase plane? Let’s follow similar algebra: $\frac{1}{N_2}\frac{dN_2}{dt}\,=\,0\,=\,r_2\,+\,s_{2,1}\frac{1}{N_1}$ $\Rightarrow\,\,-r_2\,=\,s_{2,1}\frac{1}{N_1}$ $\Rightarrow\,\,N_1\,=\,-\frac{s_{2,1}}{r_2}$ This predator isocline is simply a vertical line, as before in Figure 12.4.2 and the following figures. But note that the prey curve has the form of an inverted parabola—a hump, as graphed for two cases in Figure $4$. (A short numerical sketch of these two isoclines appears at the end of Section 12.08 below.) Remarkably, this formulation of predator–prey equations closely matches what earlier researchers deduced logically and graphically, when computers were slow or not yet available. If you want to better understand the shape of the prey curve, read Rosenzweig’s 1969 paper entitled “Why the prey curve has a hump.” For interest, his hand-drawn published figure with experimental data points is reproduced in Figure $5$. Rosenzweig pointed out a paradoxical effect, which he called “the paradox of enrichment.” At left in Figure $4$, the prey have a relatively low carrying capacity, with $K\,=\,-r_1\,/\,s_{1,1}$ about halfway along the horizontal axis. If you analyze the flow around the red dot that marks the equilibrium point to the right of the hump, or run a program to simulate the equations we just derived, you will find that the populations spiral inward. The equilibrium is stable. The paradox is this: if you try to improve conditions for the prey by increasing their carrying capacity—by artificially providing additional food, for example—you can drive the equilibrium to the left of the hump, as in the right part of Figure $4$. Around the equilibrium marked by the red circle, the populations spiral outward. The system has become unstable. This is a warning from ecological theory. In conservation efforts where predators are present, trying to enhance a prey population by increasing its carrying capacity could have the opposite effect. This is not to say that efforts to enhance prey populations should not be undertaken, only that they should proceed with appropriate caution and study. 12.08: Effects of Space In predator–prey systems, especially in confined areas, the predator tends to capture all the prey and then starve, so the systems “crash.” But over large areas it is conceivable that a predator can completely wipe out its prey in one area and not go extinct, because it can simply move to another area where the prey still exist. 
Prey can then repopulate the area from which they had been depleted. Imagine a series of interconnected cells where, with some restrictions, predator and prey can migrate between adjacent cells. Now, even though the system may be locally unstable and crash in individual cells, the entire system across all cells could be stable and persist indefinitely. In the 1930s the Russian ecologist Gause conducted a very famous set of early experiments on competition among protozoa, but he also studied predation of Didinium on Paramecium. The populations he set up would commonly crash and go extinct, with the Didinium eating all the Paramecia and then finding themselves without food. If he made places for the Paramecium prey to hide, however, the systems could persist for many cycles. In the 1960s Krebs noticed that populations of fenced mice, even those with a full half-acre within the fence, would crash and disappear after grossly overgrazing their habitat. But in areas where they were allowed to disperse, the populations would persist. Huffaker also ran extensive experiments, again in the 1960s, with mites and oranges. A single population of mites on a single orange would crash and the whole population would disappear. Using multiple oranges with limited migration paths between them, however, allowed the system to persist for many generations. And in the 1970s Lukinbill did similar work with protozoa in aquatic tubs—larger and larger tubs holding miniature predator–prey systems. He found that the larger the tub, the longer the system persisted. The point to remember here is that the mere presence of spatial structure, in one form or another, can allow a predator–prey system to persist. The basic reason is simply that species can go extinct in some areas while continually recolonizing other areas, always maintaining a population that blinks in and out locally, but persists globally.
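Returning to the satiation–starvation model of Section 12.07, the two isoclines derived there can be drawn directly. The parameter values below are assumptions chosen only for illustration (the text does not give numerical values); note that in this model $r_2$ is the satiated maximum growth rate and is therefore positive. With these particular values the vertical predator isocline falls to the right of the hump’s peak, the stable case in Rosenzweig’s analysis.

```
# Assumed illustrative parameters for the satiation-starvation model.
r1 = 1.0;  s11 = -0.2;  s12 = -1.0     # prey: growth, self-limitation, predation
r2 = 0.5;  s21 = -1.5                  # predator: satiated growth rate, starvation term

K  = -r1/s11                           # prey carrying capacity (here 5)
N1 = seq(0.01, K, length.out = 200)
prey_isocline = -(r1/s12)*N1 - (s11/s12)*N1^2   # the hump, where dN1/dt = 0
pred_isocline = -s21/r2                         # vertical line, where dN2/dt = 0 (here 3)

plot(N1, prey_isocline, type = "l",
     xlab = "N1 (prey)", ylab = "N2 (predators)")
abline(v = pred_isocline, lty = 2)
```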
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/12%3A_Predator_and_Prey/12.07%3A_Predator_satiation_and_starvation.txt
Early humans suffered from predators just as other primates suffer still. Eventually, though, they developed spears longer than the longest teeth and became one of the top predators on land. Even if they did not eat sabre-tooth tigers, they were able to kill them. We know from vivid paintings on rock walls in the protected shelter of caves that our ancestors at least fancied themselves as hunters (Figure \(1\)). Humans are now the dominant large vertebrate on the planet. But on all the continents beyond Africa, large vertebrates were prominent in the ecosystems before our ancestors arrived. In Africa many remain, coevolved with humans and perhaps wiser to our ways. Elsewhere, however, they had little fear when our ancestors arrived (Figure \(2\)). The reasons behind the extinction of so many megafauna are controversial. Archeologist Haynes and others believe humans are responsible. Archeologist Grayson and others blame climate change, though animals had been through many glaciations and deglaciations before. Mammalogist MacPhee and virologist Marx postulate a virulent “hyper- disease” brought by humans. And geologist Kennett and colleagues assign a comet impact as the cause—an interesting theory, as some major human hunters disappeared at about the same time as the larger mammals. In any case, when something extreme happens, several causes may be working in concert. We do know from historical records that only about one human lifetime ago our predecessors in North America hunted bison almost to extinction (Figure \(3\)). Now the large remaining populations of megafauna are in the seas. What is their fate? 13.02: Consciously Controlled Predation Consider predation by humans that is not subject to the cyclical dynamics of natural predator–prey interactions, but is instead consciously controlled to provide steady, reliable returns—from the world’s fisheries, for example. How is this attempted? Recall logistic population growth, where the ecological term $s$ is negative. Figure $1$ shows the individual growth rate on the vertical axis and the population density on the horizontal axis, as you have seen before. Individual growth means the same thing as growth per capita, or relative growth, or percentage growth if multiplied by 100. The intention is to apply this to logistically growing populations of prey. It is the growth rate of the whole population, however, that is of interest in controlled predation—not the per capita growth rate—since it is a fraction of the whole population that is to be taken. So the vertical axis should show $dN/dt$ rather than $1/N\,dN/dt$. Start with the individual growth rate, which reaches its maximum $r$ as $N$ approaches 0. Here the population produces very few individuals because the population itself is almost nonexistent. $\frac{1}{N}\frac{dN}{dt}\,=\,r\,+\,sN$ The goal is to maximize the population growth rate so that the greatest number of prey can be taken. To find that number, multiply both sides of the equation above by the number of individuals, $N$, to get the growth rate of the entire population—in other words, to determine how many individuals are added to the population in a unit of time. The result is $\frac{dN}{dt}\,=\,rN\,+\,sN^2$ The growth of the entire population, $dN/dt$, has the shape of an inverted parabola, shown in Figure $2$, since $s$ is negative. Population growth is lowest when the population is very small, near 0, or when it is high, near its carrying capacity,$−r/s$. 
It reaches its maximum growth rate midway, at half the carrying capacity, $(-r/s)\,/\,2$. So if the population is kept at half its carrying capacity, it will be growing at its fastest and the greatest amount can be "harvested" each year.

What is that maximum rate? To find it, substitute half the carrying capacity, $-r/(2s)$, for $N$ in the equation above, giving

\begin{align*} \frac{dN}{dt}\Biggr\vert_{max} &= r\left(-\frac{r}{2s}\right)\,+\,s\left(-\frac{r}{2s}\right)^2 \\[4pt] &= -\frac{r^2}{2s}\,+\,s\frac{r^2}{4s^2} \\[4pt] &= -\frac{r^2}{4s} \end{align*}

So in this theory the population grows most rapidly at rate $-r^2/(4s)$, producing the greatest number of new individuals if drawn down to half its carrying capacity. This has been called the "maximum sustainable yield."

Let us introduce a harvesting intensity, $H$. When $H$ is zero, there is no harvesting, and when $H$ is 1, harvesting is at the maximum sustainable rate. In between it is proportional.

$\frac{dN}{dt}\,=\,(rN\,+\,sN^2)\,+\,H\frac{r^2}{4s}$

$\frac{dN}{dt}\,\Rightarrow$ The net rate of population growth: the number of individuals per time unit, considering births, deaths, and hunting

$(rN\,+\,sN^2)\,\Rightarrow$ The rate of addition: the number of individuals born per time unit minus those dying from causes other than hunting

$H\frac{r^2}{4s}\,\Rightarrow$ The rate of removal: the number of individuals caught per time unit (a negative quantity, since $s$ is negative)

If individual fishermen predominate (Figure $3$, left), $H$ will be small. This pulls the curve down, as in Figure $4$, lowering the carrying capacity slightly and leaving somewhat fewer fish in the sea. It also introduces an Allee point, though that point is far below the equilibrium and therefore not a significant danger. But with increasingly focused and mechanized fishing (Figure $3$, right), $H$ approaches 1 and the curve is pulled farther down (Figure $5$). The carrying capacity is markedly reduced, the population is producing new individuals at a high rate, and the Allee point is pulled close to the carrying capacity, introducing a danger that unforeseen fluctuations in the population could push it below the Allee point and collapse the fishery.

Finally, with hunting or fishing at the maximum sustainable yield, the Allee point coincides with the carrying capacity and in effect annihilates it (Figure $6$). This introduces a dynamical conflict: the situation is stable from the right but unstable from the left, making it inevitable that the population will eventually fall below the Allee point and collapse. The maximum yield is not sustainable!
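The algebra can be checked numerically. The following is a minimal R sketch, not part of the text, using the illustrative values $r = 1.75$ and $s = -0.00175$ that appear in the program later in the chapter; the function names growth and equilibria are ours, chosen only for illustration.

```
# Minimal sketch (illustrative only): carrying capacity, maximum sustainable
# yield, and the equilibria of dN/dt = (r*N + s*N^2) + H*r^2/(4*s).

r = 1.75; s = -0.00175;

K   = -r/s;                        # carrying capacity with no harvesting, 1000
MSY = -r^2/(4*s);                  # maximum sustainable yield, 437.5 per year

growth = function(N, H) (r*N + s*N^2) + H*r^2/(4*s);

equilibria = function(H)           # roots of s*N^2 + r*N + H*r^2/(4*s) = 0
{ roots = sort(Re(polyroot(c(H*r^2/(4*s), r, s))));
  c(allee=roots[1], capacity=roots[2]); }   # lower root is the Allee point

print(c(K=K, MSY=MSY));
print(equilibria(H=0.2));          # Allee point near 53, capacity near 947
print(equilibria(H=1));            # both equal 500: the two points coincide
```

With $H = 1$ the Allee point and the carrying capacity merge at half the original carrying capacity, which is the dynamical conflict described above: any downward fluctuation leaves the population in a region where its growth is negative everywhere.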
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/13%3A_Humans_as_Predators/13.01%3A_Chapter_Introduction.txt
Program \(1\) simulates harvesting at the so-called maximal sustainable yield. It introduces small random fluctuations in the population, so small that they cannot be discerned in a graph. The slight stochasticity makes the program take a different trajectory each time it runs, with widely different time courses. Inevitably, however, the populations drift below the Allee point and rapidly collapse, as in the sample run of the program shown in Figure \(1\).

In the age of sailing, at the arrow marked "A", fishing was high-effort but low-impact and fisheries stayed approximately at their carrying capacity, \(K\). "Optimal harvesting" was introduced once mathematical ecology combined with diesel technology, and fisheries helped feed the growing human and domestic animal populations, with fish populations near "maximum sustainable yield," as expected. But throughout the 20th century, as shown on either side of the arrow marked "B", fish populations continued to decline, and before 2015, at the arrow marked "C", it becomes clear that something is seriously amiss.

```
# SIMULATE ONE YEAR
#
# This routine simulates a differential equation for optimal harvesting
# through one time unit, such as one year, taking very small time steps
# along the way.
#
# The 'runif' function applies random noise to the population. Therefore it
# runs differently each time and the collapse can be rapid or delayed.
#
# ENTRY: 'N'  is the starting population for the species being simulated.
#        'H'  is the harvesting intensity, 0 to 1.
#        'r'  is the intrinsic growth rate.
#        's'  is the density-dependence term, equal to -r/K for carrying
#             capacity K.
#        'dt' is the duration of each small time step to be taken throughout
#             the year or other time unit.
#
# EXIT:  'N'  is the estimated population at the end of the time unit.
```

```
SimulateOneYear = function(dt)
{ for(v in 1:(1/dt))                       # Advance the time step.
  { dN = ((r+s*N)*N + H*r^2/(4*s))*dt;     # Compute the change over dt.
    N = N+dN; }                            # Update the population value.
  if(N<=0) stop("Extinction");             # Make sure it is not extinct.
  assign("N",N, envir=.GlobalEnv); }       # Export the results.
```

```
r=1.75; s=-0.00175; N=1000; H=0;           # Establish parameters.
```

```
for(t in 1850:2100)                        # Advance to the next year.
{ if(t>=1900) H=1;                         # Begin maximal harvesting in 1900.
  print(c(t,N));                           # Display intermediate results.
  N = (runif(1)*2-1)*10 + N;               # Apply stochasticity.
  SimulateOneYear(1/(365*24)); }           # Advance the year and repeat.
```

Program \(1\). This program simulates maximal harvesting with small fluctuations in the populations.

What happened? A collapse is part of the dynamics of this kind of harvesting. Inevitable stochasticity in harvests combines unfavorably with an unstable equilibrium in the prey population. In some runs it collapses in 80 years, in others it may take 300. The timing is not predictable; the main predictable property of the simulation is that ultimately the system will collapse.

13.04: Present Situation

Many of the world's fisheries are collapsing, and in the oceans we seem to be on a path like the one our predecessor predators took on land. There are better ecological approaches, a "constant effort" approach, for example, rather than the "constant harvest" examined here, but economic, social, and political pressures have kept them from extensive use.
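To see why a "constant effort" policy behaves differently, here is a minimal R sketch, not from the text, using the same illustrative parameters as Program \(1\). It assumes the catch is proportional to the population, $E\,N$, with an effort coefficient $E$ that we introduce only for illustration.

```
# Minimal sketch (assumption: catch proportional to population, E*N),
# contrasting constant-effort harvesting with the constant quota above:
#   dN/dt = r*N + s*N^2 - E*N

r = 1.75; s = -0.00175;            # same illustrative parameters as Program 1

E     = 0.875;                     # an effort that takes the MSY at equilibrium
Nstar = -(r - E)/s;                # equilibrium population, here 500
catch = E*Nstar;                   # yield at that equilibrium, 437.5 per year
print(c(Nstar=Nstar, catch=catch));
```

At this effort the long-run yield equals the maximum sustainable yield computed earlier, but the harvested equation is still logistic, with a stable equilibrium at $N^* = -(r-E)/s$ and no Allee point: if the population drifts below $N^*$, the catch shrinks automatically and the population recovers, which is the essential difference from the fixed-quota policy simulated above.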
We hope this chapter has shown you that insufficient examination of ecological equations applied on a large scale can generate disasters, that equilibria must not be considered apart from their stability, and that management of real ecological systems requires attention to natural history and social conditions.
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/13%3A_Humans_as_Predators/13.03%3A_Stochastic_modeling.txt
Our species no longer suffers large predators as so many species do (Figure 13.0.1). But dangerous predators remain part of the human condition. We call them diseases.

By analogy, think of visible organisms living in a pond, from birds, fish, frogs, insects, and plankton to aquatic plants (Figure \(1\), left). To be successful across a region, the organisms must gain resources in the pond while competing with other species, survive predators, and enable their offspring to disperse to another pond. To be successful they must not destroy the pond in which they live, at least until they or their offspring have dispersed to another pond.

What is the body of a plant or animal to an infectious virus or bacterium? It is like a pond. There may be other pathogens in the body competing for its metabolic resources. There are predators in the body in the form of an immune system. And to be successful the pathogen must disperse to another body without destroying the body in which it lives, at least until it or its offspring has orchestrated a way to disperse into and colonize another body.

14.02: Portals

A pond has entry and exit portals. If an organism is small enough, it can exit a pond on wind and spray. Some organisms can float downstream to another pond, while birds and insects can simply fly. Larger organisms such as amphibians can hop or walk from pond to pond to lay eggs. And a new exit portal has recently appeared in the form of boats and trailers that carry invasive weeds and animals from one pond and deposit them in another.

Think analogously of the entry and exit portals of an animal. How can pathogens leave one body and enter another? While skin covers at most two square meters in humans, the mucous membranes of the respiratory, digestive, and reproductive tracts cover more than 400 square meters. So mucous membranes become good portals. Successful diseases can exploit obligate behaviors. Animals, for example, must breathe continually, so exploiting the respiratory pathway, being breathed out into the air by one animal and breathed in by another, is a reliable and ever-present method of transmission.

How infections can leave and enter the body (mix and match):

Portals of exit:
• Breath
• Droplets (sneezing, coughing)
• Saliva
• Sweat
• Tears
• Feces
• Urine
• Seminal fluids
• Vaginal fluids
• Blisters, boils, zits (via scratching, breaking the skin)
• Blood (mosquitoes, ticks, hemorrhage)

Portals of entry:
• Blood (cuts, wounds, insects, needles)
• Lungs
• Nose
• Eyes (conjunctiva)
• Mouth (mucosa)
• Mouth (tooth/gingiva junction)
• Gut
• Skin (scabies, trichinosis, warts, etc.)
• Urinary tract
• Rectum
• Vagina (mucosa)
• Penis (mucosa)

Animals must also eat frequently and periodically, so exploiting the oral pathway, leaving through urine and feces and getting back through the alimentary canal, is another reliable path, at least under conditions in the wild or with animals that live on and eat grass (Figure 14.0.1, right). Animals must reproduce, so exploiting the genital pathway, where many animals come into direct bodily contact, is a third reliable path. This can be especially productive, as mucosal tissues of high surface area are touching and infected fluids can be transferred from one sex to another. A pathogen that can get into the seminal fluid of a mammal, for example, has a direct path for transmission. Finally, many animals care for young, so exploiting parental care can be a fourth reliable path. A pathogen that can enter a mother's milk has a direct path for infecting her offspring.
Such transmission from parent to offspring is called “vertical transmission,” with other forms called “horizontal transmission.”
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/14%3A_Humans_as_Prey/14.01%3A_Chapter_Introduction.txt
While microscopic pathogens are not independently mobile, their hosts are, and pathogens have evolved ingenious ways of modifying the behavior of a host to enable their transferal to another host. Sneezing and coughing reflexes, for instance, are ancient responses for clearing obstructions from the nose and throat, and some pathogens deceptively induce those responses for a pathway out (Figure \(1\)). What we call "symptoms of disease" are not, then, random effects of a disease, but can often be a pathogen's way of getting out of one pond, so to speak, and into another.

Any of the pathways listed in Chapter 14.1 can be exploited by a pathogen, which in doing so may upset these pathways and cause the host great distress. Some pathogens, for example, get into your eyes and tears and deceptively cause itching and soreness (Figure \(2\), left), inducing you to rub your eyes and transfer the pathogens to your fingers. This in turn can successfully move them to other locations like food, from which they can enter another host by the oral pathway.

Other pathogens are able to break the skin and get out of the body on their own. Cold sores (Figure \(2\), right), a form of oral herpes, form around the mouth and nose, transmitting to what touches the sore. A related genital herpes is transmitted sexually, though evolution has been proceeding and the genital form is now able to infect orally and the oral form genitally. This and other sexually transmitted diseases have been increasing since about the middle of the twentieth century.

One of the most successful pathogens to use the lungs and skin as pathways out of the body is smallpox (Figure \(3\)). It leaves its host with permanent scars, often over the entire body. Smallpox is an ancient disease, dating from the time before the pyramids, that can kill a majority of those it infects and at times can infect a majority of the population. The very success and horror of smallpox was part of its ultimate destruction—its eradication was the first complete victory in the conquest of disease. Thanks to prolonged diligent attention throughout the world, and of course to the invention of vaccine, smallpox has been made extinct in the natural world. (We say "natural world" because laboratory samples are being retained.)

William Foege, a key player in orchestrating the extinction of smallpox, said that we can conquer disease because we evolve so much more rapidly than the disease. This may be startling to hear, given that our physiological evolution is much slower than that of viruses or bacteria. But, he explained, because we evolve socially much more rapidly than a disease can evolve biologically, we are able to "outsmart" the disease.

Rinderpest, a viral disease causing high rates of mortality in cattle and wild mammals, was the second disease declared extinct in the natural world. Others—such as polio and Guinea-worm disease—may soon follow, though the latter may simply be eradicated from human populations by our continual isolation from sources of infection.

Diseases can exploit blood-sucking parasites to move directly from the blood stream of one host to that of another. Lyme disease (Figure \(4\), left), for example, is spread by ticks, which puncture the skin to obtain a blood meal for themselves but in the process can transfer pathogens. And Ebola (Figure \(4\), right) leaves by almost every exit portal listed, destroying those portals by carrying not just the pathogen but chunks of lung, intestine, or skin in the process.
Human populations are so large and dense that even relatively inefficient pathogens can be successful. And diseases, of course, also affect wild and domestic animals as well as crops and other plants. As the next chapter illustrates, many diseases can evolve to be relatively harmless to their hosts, promoting transmission and allowing the disease to become widespread. Rust fungus infections, for example, are common in many plant species, but seldom lead to the death of the host. Powdery mildew on the prairie plant Monarda fistulosa is so widespread that it is used in plant identification books as a way to identify the species (Figure \(5\), right).

Pathogens can dramatically alter animal behavior. Rabies first gets into the salivary glands, and thus into the saliva, of an infected host. Physiological changes then make the host animal salivate profusely, foaming at the mouth, while psychological changes make it appear crazy and angry. The animal then bites through the skin of another animal, transferring the pathogen to that animal's bloodstream, and the cycle continues. Both the physiological and psychological changes are caused by the pathogen and allow it to spread, even after the death of the initial host. "Mad dog behavior" is thus not an accidental consequence of the disease, but precisely the means the pathogen has developed for getting, so to speak, from pond to pond.

It is useful to try to think of all the ways a pathogen might alter the behavior of its host to force the host to transfer the pathogen. This is not just an intellectual exercise, but could help identify the potential for new emerging diseases. For example, what should a sexually transmitted disease do to its host in order to spread faster? It should render its host more active sexually! And indeed this happens. Female chimpanzees would normally mate only every two years or so, after having given birth and nursed their young to the point of weaning. But female chimps infected with SIV (simian immunodeficiency virus) reach estrus every month or so, and do not conceive. The pathogen changes their mating behavior to spread itself more than an order of magnitude faster than it would otherwise spread.

Inspired to think in this way, one student came up with a novel idea: Imagine a disease that can escape through the sweat glands without harming its host. As a behavioral modification, it makes infected hosts want to undergo strenuous exercise in groups, such as in gymnasiums, thus explaining the entire modern exercise phenomenon as a disease! (Gerbils may also harbor this disease.)
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/14%3A_Humans_as_Prey/14.03%3A_Pathogen_Mobility.txt
The tiniest predators—infection and disease—differ from what are usually thought of as predators in a number of ways, some of which you have seen. First, the disease organism is much smaller than its victim and not independently mobile. It must be carried by wind or water, or induce its host to transfer it in one of numerous ways. Second, disease does not necessarily kill its victims. Many diseases, in fact, leave their victims largely intact, the better to transmit the pathogen to another host. And third, after infection, the prey may become forever immune to future infections, both by that pathogen and related ones. This immunity is created by the enormously elaborate "immune system" of vertebrates and other animals—a system recognizing and killing incoming pathogens before they can incubate and do much harm, and as elaborate and complex as the brain and central nervous system.

One of the great discoveries of the last millennium was that the immune system could be primed to recognize a pathogen before it invaded, though what was happening inside the body was not understood until the twentieth century. Vaccination played a central role in the eradication of smallpox. The twentieth century also saw the discovery of antibiotics such as penicillin, which allow doctors to cure disease after an infection has progressed.

These discoveries show us that humans must be considered separately from plants and other animals, for we have developed special powers against disease. We are not passive prey, and do not simply suffer a disease or make behavioral modifications to avoid it. Instead we actively and globally strive to destroy disease, or subdue it. And we extend these efforts to diseases affecting the animals and plants we depend on.

With respect to disease, plants have distinct properties that are in direct contrast with those of animals. Animals, in general, are high-energy organisms—metabolizing rapidly, moving about, and perpetually pumping oxygen throughout the body. Plants are nothing like this. Rather than hearts and rapid fluid flow to distribute food and oxygen and to cleanse waste materials, plants use the passive effects of capillary action and evaporation. This requires the tiniest of veins, or capillary action will fail. And these veins are too small to transport plant cells, or to allow larger pathogens like protozoa and many bacteria to gain access to the entire organism. This also means that plants cannot have the same kind of immune system as animals, with their own cells traveling through their tissues on patrol.

In addition, plants are typically modular. An infected part—leaf, flower, or whole limb—can be discarded and grown again. The apical meristem cells at the tips of branches and roots are capable of developing entirely new plants. While cancer cells can spread through the body of an animal and kill it, such cells would simply plug the veins of plants. So while animals get cancer, plants get cankers. Plants have longevity, while animals have mortality—the cost of being a high-energy organism.

14.05: The Strange Case of Polio

Polio had long been a relatively rare disease of infants, called "infantile paralysis." In the middle of the twentieth century, however, it became more common and started affecting older children and adults. A new form of the disease seemed to be emerging.
Human Health and Hygiene

Which of these people, do you think, performed the greatest service to human health and hygiene in the twentieth century, but inadvertently triggered this mid-century polio epidemic?

1. Louis Pasteur, discoverer of pasteurization
2. Alexander Fleming, discoverer of penicillin
3. Jonas Salk, creator of the polio vaccine
4. Franklin Delano Roosevelt, President of the United States and polio victim
5. Henry Ford, creator of the production line

Answer

This seems a strange question, with industrialist Henry Ford under consideration. But indeed, the answer is Henry Ford!

At the beginning of the twentieth century, most local transportation was by horse, powered, of course, almost entirely by the biofuel hay. While it has now largely left social memory, in the early decades of the twentieth century the streets were a slurry of gravel and horse manure. Flies were everywhere, and little concern was paid, for this was the norm. People's outhouses were ventilated to the open air, and flies laid eggs there and in the streets, then freely entered houses and landed on food. A number of diseases take advantage of the fecal–oral pathway, and polio is one of them.

But automobiles and tractors intervened. As the horse-drawn era closed, manure generally vanished, running water and flush toilets arrived, hygiene improved, sealed screen doors became common, and flies died in vast numbers. Without intending it, Henry Ford became the greatest fly killer of all time. The availability of the fecal–oral pathway diminished, and the infectivity of related diseases fell. Figure \(1\) shows the horse population declining slowly until World War I, then falling rather steadily as the number of cars increased in stages. The first increase in the number of cars ended around 1930 with the Great Depression, when many people could not afford cars. The end of World War II in 1945 brought another boom in car purchases, and by 1975 society had replaced almost every horse per capita with a car.

As the chance of catching polio fell, the average age of catching it increased. To understand this, consider residents of the northern hemisphere living at various latitudes. Because residents of the High Arctic have a chance to see the northern lights, the aurora borealis, every week, children living there will likely see the aurora before their first birthday. Farther south, at 50 degrees north latitude, the aurora may appear only once every few years, especially near the lights of cities, so a child could be 5 or 10 years old before ever seeing it. And finally, say at 35 degrees north latitude, the aurora may appear but once or twice in a lifetime, so many people could be in middle age before viewing it, and others might go an entire lifetime without being touched by its hypnotic display.

So it is with disease. The number of opportunities for catching a highly infectious disease, naturally, is high. If the quantity of pathogens in the environment is such that all individuals encounter them on average once a year, only about one-third of infants will avoid infection in their first year. (Actually the number is 1/e = 0.367..., if the chance of infection is completely random.) The same fraction of the remaining infants will catch the disease during their second year, and the rest will be age two or older when they catch the disease.
Therefore, as the pathways for transmitting polio diminished during the twentieth century, the chances of catching it in any year decreased and the age of onset correspondingly increased.

Polio is like some other diseases that are not usually virulent in infants and young children. A baby infected with polio might have a cold and a runny nose, and the infection might go without particular notice. In an older child, however, it can stop bone growth and muscle development, crippling the child. The polio epidemic of mid-century America was thus not a new disease emerging, but an ancient disease dying out.

Albert Sabin, of polio vaccine fame, suspected a connection with flies. In 1941 he and his colleagues reported in Science on a study they performed in areas of the United States where polio had struck. They captured flies, pureed them in sterile fluid, and gave them to monkeys in feedings, nosedrops, or injections. As they put it, "Down came the monkeys with polio."

With further improvements in hygiene and broad use of vaccines, rates of polio have dropped to nearly zero. Figure 14.8 shows a moderate number of cases of polio before the late 1940s, an outbreak lasting until the early 1960s, and nearly nothing in the years following.
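The reasoning about age of onset can be made concrete with a small calculation. The R sketch below is ours, not from the text; it assumes completely random exposure at a rate of lambda encounters per year, so the chance of escaping infection for a whole year is exp(-lambda), and it shows how the expected age at first infection rises as the encounter rate falls.

```
# Minimal sketch (assumes random exposure at rate lambda per year):
# chance of escaping a whole year is exp(-lambda), so the chance of
# catching the disease in any one year is 1 - exp(-lambda), and the
# mean age at first infection is roughly 1/(1 - exp(-lambda)) years.

lambda  = c(1, 0.5, 0.1, 0.02);             # from common to rare exposure
escape  = exp(-lambda);                     # fraction escaping each year
meanAge = 1/(1 - escape);                   # mean age at first infection

print(data.frame(lambda, escape=round(escape,3), meanAge=round(meanAge,1)));
```

At one encounter per year, most children catch the disease as infants; at one encounter per fifty years, the average age of onset moves well into adulthood, the pattern that appeared for polio as hygiene improved.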
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/14%3A_Humans_as_Prey/14.04%3A_Disease_more_generally.txt
Many ordinary diseases have been subdued since the last half of the twentieth century, some to the point of extinction from the natural world. Smallpox and rinderpest are gone, and polio nearly so; as we are writing this (2016–17), polio workers are anticipating its extinction in the foreseeable future. Diphtheria is on a similar path (Figure \(1\)), with almost no cases in the United States in the twenty-first century. Diseases such as whooping cough and measles (Figure \(1\)) have been subdued but remain with us, with some cycling through periodic outbreaks. The rates of many ordinary diseases are being reduced, and infectious disease is no longer the major cause of deaths in human populations.

Rates of various sexually transmitted diseases, however, are on a different course (Figure \(2\)). Gonorrhea rates have declined but remain considerably above zero, and it is a commonly reported disease in the United States. The incidence of syphilis appears to be cyclic; rates had declined but are now rising again. Rates of chlamydia, which can lead to serious outcomes, including infertility in women, have been increasing steadily, without an end in sight, and rates of genital herpes and other sexually transmitted diseases are rising similarly. Sexually transmitted diseases are a prominent problem to be solved in the twenty-first century.

14.07: An ancient plague perhaps vanishing

We close this chapter with a graph to ponder. Examine Figure \(1\), an epidemiological view of annual deaths per 1000 population during the twentieth century, from a widespread and ancient cause. Imagine what it represents. Is it a sexually transmitted disease, an ordinary disease with an effective treatment introduced around 1946, or something else entirely? The solid blue regression line goes through the average number of deaths over time from 1946 forward, projected back on the dashed line through the outbreaks earlier in the century.

Figure \(1\) is actually an epidemiological view of deaths from warfare, which have been declining per capita over the past 70 years. The vertical line marks the beginning of the atomic era, and the numbers in parentheses indicate (1) World War I, (2) World War II, (3) Korea, (4) Vietnam, (5) Cambodia and Ethiopia, (6) USSR–Afghanistan and Iran–Iraq, and (7) Rwanda.

Why include a chart of war deaths in a book on ecology, and in a chapter on the ecology of disease? First, war is directly connected with ecology and the environment. Throughout human history, warfare has been caused by environmental change, as existing territories became unproductive and new territories were sought, and in turn it has caused environmental change through habitat alteration and other forces. Second, humans are a dominant ecological force, whose impact we examine in this book, and warfare has been a prominent theme in the human condition. And third, warfare has some of the properties of a disease. It can spread from places of origin like a disease, and has analogs of competitors and mutualists in addition to obvious roles of predators and prey. Moreover, it involves an infectious agent, one that replicates not as biological agents do, spreading between the bodies of their hosts, but abstractly, as what Richard Dawkins called memes, spreading between the minds of their hosts.
Warfare has enough abstract similarities to biological agents, and enough tangible effects on the ecology of the planet, that we want to offer these ideas for your future consideration, and with the hope that some progress can be made by enough minds examining them. From discussions with students and colleagues thus far, and from parts of the literature, here are some thoughts for your consideration.

Nuclear weapons. It seems indisputable that these caused an initial collapse in warfare, but they may also have affected the number of war deaths during the rest of the century. Of course, they could have led to unprecedented numbers of deaths had political arrangements worked out differently.

Immediate journalism. Photographic news coverage becoming ever more immediate gave the world a different view of war. Cell-phone cameras, social networks, and the internet expand that indefinitely today.

International law. Most international law may not yet be written, but we have seen its beginnings. How much has the encoding of war crimes since World War II contributed to the decline?

Self-government. The rapid expansion of self-government since the middle of the twentieth century may have contributed to the decline in war-related deaths, as self-governing nations tend to avoid war with other self-governing nations.

International trade. In the same way, nations that trade mutualistically may also tend to avoid war so as to avoid destroying trading partnerships.

Expanding ethics. At the end of 1957 the Soviets launched the space dog "Little Curly" into orbit, intending him to die while orbiting our planet. Though this went largely unchallenged in the twentieth century, would any nation be able to do something like this in the twenty-first? Do expanding ethics in other realms contribute at all to the decline in war deaths?

Women in power. It is worth considering whether the increasing proportion of women in government has an effect on the number of war-related deaths. Among primates such as chimpanzees and baboons, males are the more aggressive sex. If this is true in humans, might it have continued effects in the future?

Improved medicine. Serious wounds once meant infection and death, but now victims can recover. And mortality from diseases which can spread rapidly in wartime, like influenza, has been reduced. The same level of warfare now manifests fewer deaths, making part of the decline an artifact.

Reduced overkill. The percentage of a population killed during a war decreased from nearly 100% in some ancient times to "what is necessary" in more recent cases. Has the development of precision weaponry contributed to a continuing decrease in war deaths? This would also make part of the decline an artifact.

Humanity has already unexpectedly broken the millennia-long rush of ever-accelerating population growth. Could something similar be happening with the millennia-long scourge of war? According to projections along the regression line, and for whatever reasons, if the trends of Figure \(1\) are real and can be understood and continued, humanity may be on a path toward the elimination of background warfare, even before the end of this century. This material is a partial encapsulation of Steven Pinker's 800-page book, "The Better Angels of our Nature" (2011). That downward slope indicates a plausible goal to understand and a plausible hope to maintain. It is plausible, but we cannot know if it is practical without dedicating ourselves to it.
There is a level of self-fulfillment in such things, for if we collectively do not believe a goal like this can be achieved, it likely will not, but if we believe in it and work toward it, we might succeed. Along present trends, you can work with reasonable, rational, data-based hope to make background warfare vanish in your lifetime. And regardless of the outcome, all will be ennobled by the effort.
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/14%3A_Humans_as_Prey/14.06%3A_Modern_disease_trends.txt
Terminology of the theory of disease is not completely consistent in the epidemiological literature, but we will use it consistently as follows.

Virulence: How much or how quickly a pathogen harms its host. Often symbolized as alpha, $\alpha$, in disease equations. Example: if one-tenth of infected organisms die in a particular time period, and everything is random, $\alpha$ = 1/10.

Infectivity: How readily a pathogen arrives at and invades a new host. Often symbolized as beta, $\beta$, in disease equations. Example: in an otherwise uninfected population, if each infected host is expected to infect three others in a particular time period, $\beta$ = 3.

Basic reproductive number: In an otherwise uninfected population, how many new infections an infected individual is expected to produce during the duration of the infection. Often symbolized as $R_0$ in disease equations, and pronounced "R naught." This is a crucial number: if $R_0$ is greater than 1, the disease will spread through the population, while if $R_0$ is less than 1, the disease will die out.

15.02: The SIR flowchart

A standard starting point for examining the theory of disease is the "SIR model" (Figure $1$). In this model individuals are born "susceptible," into the box marked $S$ at the left. They may remain there all their lives, leaving the box only upon their ultimate death, marked by the red arrow pointing downward from the box. The label $\delta\,S$ on this arrow represents the rate of flow from the box, the rate of death of individuals who have never had the disease. The model assumes a per capita death rate of $\delta$ deaths per individual per time unit. If $\delta$ = 1/50, then one-fiftieth of the population will die each year. Multiplying by the number of individuals in the box, $S$, gives the flow out of the box, $\delta\,S$ individuals per year.

The only other way out of the $S$ box is along the red arrow pointing right, indicating susceptible individuals who become infected and move from the left box to the middle box. (The blue arrow pointing up indicates new individuals created by births, not existing individuals moving to a different box.) This rate of flow to the right is more complicated, depending not just on the number of susceptible individuals in the left-hand box but also on the number of infected individuals $I$ in the middle box. In the label on the right-pointing arrow out of the $S$ box is the infectivity coefficient $\beta$, the number of susceptible individuals converted by each infected individual per time unit if all individuals in the whole population are susceptible. This is multiplied by the number of individuals who can do the infecting, $I$, and then by the probability that an "infection propagule" will reach and infect someone who is susceptible, $S\,/\,(S\,+\,I\,+\,R)$. This is just the ratio of the number in the $S$ box to the number in all boxes combined, and in effect "discounts" the maximum rate $\beta$. The entire term, $\beta\,I\,S\,/\,(S\,+\,I\,+\,R)$, indicates the number of individuals per time unit leaving the $S$ box at left and entering the $I$ box in the middle.

All other flows in Figure $1$ are similar. The virulence, symbolized with $\alpha$, is the rate of death of infected individuals, those in the $I$ box. This results in $\alpha\,I$ deaths per year among infected individuals, transferring from the blue $I$ box to the gray box below it.
Note that if infected individuals can also die from other causes, the actual virulence might be more like $\alpha\,-\,\delta$, though the situation is complicated by details of the disease. If a disease renders its victims bedridden, for example, their death rate from other causes, such as being hit by a train, may be reduced. Such refinements can be addressed in detailed models of specific diseases, but are best not considered in an introductory model like this.

The other way out of the blue $I$ box in the middle of Figure $1$ is by recovery, along the red arrow leading to the blue $R$ box on the right. In this introductory model, recovered individuals are permanently immune to the disease, so the only exit from the $R$ box is by death, the downward red arrow, with $\delta\,R$ recovered individuals dying per year. Note that recovered individuals are assumed to be completely recovered, and not to suffer any greater rate of death than susceptible individuals in the $S$ box (both have the same death rate, $\delta$). Again, refinements on this assumption can be addressed in more detailed models of specific diseases.

The blue arrows represent offspring born and surviving, not individuals leaving one box for another. In this introductory model, all individuals have the same birth rate $b$, so that being infected or recovering does not affect the rate. The total number of offspring born and surviving is therefore $b\,(S\,+\,I\,+\,R)$. This is the final blue arrow in Figure $1$, placing newborns immediately into the box of susceptible individuals.
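The flows just described can be collected into a small simulation. The R sketch below is ours, not from the text; it writes the flowchart as difference equations, uses illustrative parameter values only, and labels the recovery flow from $I$ to $R$ with a coefficient gamma, a symbol the excerpt's figure implies but does not name.

```
# Minimal sketch (illustrative parameters; 'gamma' is our label for the
# recovery coefficient on the I-to-R arrow):
#   dS/dt = b*(S+I+R) - delta*S - beta*I*S/(S+I+R)
#   dI/dt = beta*I*S/(S+I+R) - alpha*I - gamma*I
#   dR/dt = gamma*I - delta*R

b = 0.02; delta = 0.02;            # per capita birth and background death rates
beta = 3; alpha = 0.1;             # infectivity and virulence
gamma = 1;                         # assumed recovery rate

S = 999; I = 1; R = 0;             # one infected individual among 1000
dt = 0.001;                        # small Euler time step

for(step in 1:(50/dt))             # simulate 50 time units
{ N = S + I + R;
  newInfections = beta*I*S/N;
  dS = (b*N - delta*S - newInfections)*dt;
  dI = (newInfections - alpha*I - gamma*I)*dt;
  dR = (gamma*I - delta*R)*dt;
  S = S + dS; I = I + dI; R = R + dR; }

print(c(S=S, I=I, R=R));           # state after the epidemic has passed through
```

With these equations the basic reproductive number is $\beta/(\alpha+\gamma)$, about 2.7 for the values above, so the infection spreads until susceptibles are depleted; values below 1 make it die out, matching the terminology described earlier.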
textbooks/bio/Ecology/Book%3A_Quantitative_Ecology_-_A_New_Unified_Approach_(Lehman_Loberg_and_Clark)/15%3A_Theory_of_Disease/15.01%3A_New_terminology.txt