First-order logic uses only variables that range over individuals (e.g., the variable x may represent a person), whereas second-order logic also uses variables that range over sets of individuals. For example, the second-order sentence ∀P ∀x (x ∈ P ∨ x ∉ P) says that for every set P of people and every person x, either x is in P or it is not (an instance of the law of excluded middle). Second-order logic also includes variables quantifying over functions, and other variables as explained in the section Syntax below. Both first-order and second-order logic use the idea of a domain of discourse (often called simply the "domain" or the "universe"). The domain is a set of individual elements which can be quantified over.
Second-order logic is more expressive than first-order logic. For example, if the domain is the set of all real numbers, one can assert in first-order logic the existence of an additive inverse of each real number by writing ∀x ∃y (x + y = 0), but one needs second-order logic to assert the least-upper-bound property for sets of real numbers, which states that every nonempty set of real numbers that is bounded above has a supremum. If the domain is the set of all real numbers, the least-upper-bound property can be expressed by a single second-order sentence.
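One standard rendering is the following (a sketch; the second-order variable A ranges over sets of real numbers and ≤ is the usual order on the reals):

∀A ((∃w (w ∈ A) ∧ ∃z ∀u (u ∈ A → u ≤ z)) → ∃x (∀w (w ∈ A → w ≤ x) ∧ ∀y (∀u (u ∈ A → u ≤ y) → x ≤ y)))

The antecedent says that A is nonempty and bounded above; the consequent says that some x is an upper bound of A and is less than or equal to every upper bound of A.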
In second-order logic, it is possible to write formal sentences which say "the domain is finite" or "the domain is of countable cardinality." To say that the domain is finite, use the sentence that says that every injective function from the domain to itself is surjective. To say that the domain has countable cardinality, use the sentence that says that there is a bijection between every two infinite subsets of the domain. It follows from the compactness theorem and the upward Löwenheim–Skolem theorem, respectively, that neither finiteness nor countability can be characterized in first-order logic.
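For instance, the finiteness sentence can be written along the following lines (a sketch, with f a second-order variable ranging over unary functions on the domain; injectivity of f implies surjectivity of f):

∀f (∀x ∀y (f(x) = f(y) → x = y) → ∀y ∃x (f(x) = y))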
The syntax of second-order logic tells which expressions are well formed formulas. In addition to the syntax of first-order logic, second-order logic includes many new sorts (sometimes called types) of variables. These include variables that range over sets of individuals and, for each natural number k, variables that range over k-ary relations on the domain and variables that range over k-ary functions from the domain to itself.
For each of the sorts of variable just defined, it is permissible to build up formulas by using universal and/or existential quantifiers. Thus there are many sorts of quantifiers, two for each sort of variable.
A sentence in second-order logic, as in first-order logic, is a well-formed formula with no free variables (of any sort).
In monadic second-order logic, only variables for subsets of the domain are added. The second-order logic with all the sorts of variables just described is sometimes called full second-order logic to distinguish it from the monadic version.
Just as in first-order logic, second-order logic may include non-logical symbols in a particular second-order language. These are restricted, however, in that all terms that they form must be either first-order terms (which can be substituted for a first-order variable) or second-order terms (which can be substituted for a second-order variable of an appropriate sort).
The semantics of second-order logic establish the meaning of each sentence. Unlike first-order logic, which has only one standard semantics, there are two different semantics that are commonly used for second-order logic: standard semantics and Henkin semantics. In each of these semantics, the interpretations of the first-order quantifiers and the logical connectives are the same as in first-order logic. Only the new quantifiers over second-order variables need to have semantics defined.
In standard semantics, the quantifiers range over all sets or functions of the appropriate sort. Thus once the domain of the first-order variables is established, the meaning of the remaining quantifiers is fixed. It is these semantics that give second-order logic its expressive power, and they will be assumed for the remainder of this article.
In Henkin semantics, each sort of second-order variable has a particular domain of its own to range over, which may be a proper subset of all sets or functions of that sort. Leon Henkin (1950) defined these semantics and proved that Gödel's completeness theorem and compactness theorem, which hold for first-order logic, carry over to second-order logic with Henkin semantics. This is because Henkin semantics are almost identical to many-sorted first-order semantics, where additional sorts of variables are added to simulate the new variables of second-order logic. Second-order logic with Henkin semantics is not more expressive than first-order logic. Henkin semantics are commonly used in the study of second-order arithmetic.
A deductive system for a logic is a set of inference rules and logical axioms that determine which sequences of formulas constitute valid proofs. Several deductive systems can be used for second-order logic, although none can be complete for the standard semantics (see below). Each of these systems is sound, which means any sentence they can be used to prove is logically valid in the appropriate semantics.
The weakest deductive system that can be used consists of a standard deductive system for first-order logic (such as natural deduction) augmented with substitution rules for second-order terms. This deductive system is commonly used in the study of second-order arithmetic.
The deductive systems considered by Shapiro (1991) and Henkin (1950) add to the augmented first-order deductive scheme both comprehension axioms and choice axioms. These axioms are sound for standard second-order semantics. They are sound for Henkin semantics if only Henkin models that satisfy the comprehension and choice axioms are considered.
One might attempt to reduce the second-order theory of the real numbers, with full second-order semantics, to the first-order theory in the following way. First expand the domain from the set of all real numbers to a two-sorted domain, with the second sort containing all sets of real numbers. Add a new binary predicate to the language: the membership relation. Then sentences that were second-order become first-order, with the former second-order quantifiers ranging over the second sort instead. This reduction can be attempted in a one-sorted theory by adding unary predicates that tell whether an element is a number or a set, and taking the domain to be the union of the set of real numbers and the power set of the real numbers.
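Schematically, the one-sorted translation might proceed along these lines (a sketch; Set and Num are the assumed unary predicates marking sets and numbers, p is the first-order variable standing in for the set variable P, and φ* denotes the recursively translated subformula):

∀P φ  ⇝  ∀p (Set(p) → φ*),   ∀x ψ  ⇝  ∀x (Num(x) → ψ*),   x ∈ P  ⇝  x ∈ p

Each second-order quantifier becomes a first-order quantifier relativized to the "set" part of the domain, and each occurrence of a set variable becomes an atomic formula built from the new membership predicate.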
But notice that the domain was asserted to include all sets of real numbers. That requirement cannot be reduced to a first-order sentence, as the Löwenheim–Skolem theorem shows. That theorem implies that there is some countably infinite subset of the real numbers, whose members we will call internal numbers, and some countably infinite collection of sets of internal numbers, whose members we will call "internal sets", such that the domain consisting of internal numbers and internal sets satisfies exactly the same first-order sentences as the domain of real-numbers-and-sets-of-real-numbers. In particular, it satisfies a sort of least-upper-bound axiom that says, in effect:
Every nonempty internal set that has an internal upper bound has a least internal upper bound.
Countability of the set of all internal numbers (in conjunction with the fact that those form a densely ordered set) implies that that set does not satisfy the full least-upper-bound axiom. Countability of the set of all internal sets implies that it is not the set of all subsets of the set of all internal numbers (since Cantor's theorem implies that the set of all subsets of a countably infinite set is an uncountably infinite set). This construction is closely related to Skolem's paradox.
Thus the first-order theory of real numbers and sets of real numbers has many models, some of which are countable. The second-order theory of the real numbers has only one model, however. This follows from the classical theorem that there is only one Archimedean complete ordered field, along with the fact that all the axioms of an Archimedean complete ordered field are expressible in second-order logic. This shows that the second-order theory of the real numbers cannot be reduced to a first-order theory, in the sense that the second-order theory of the real numbers has only one model but the corresponding first-order theory has many models.
There are more extreme examples showing that second-order logic with standard semantics is more expressive than first-order logic. There is a finite second-order theory whose only model is the real numbers if the continuum hypothesis holds and which has no model if the continuum hypothesis does not hold. This theory consists of a finite theory characterizing the real numbers as a complete Archimedean ordered field plus an axiom saying that the domain is of the first uncountable cardinality. This example illustrates that the question of whether a sentence in second-order logic is consistent is extremely subtle.
Additional limitations of second-order logic are described in the next section.
This corollary (that no sound, effective deductive system for second-order logic with standard semantics can be complete) is sometimes expressed by saying that second-order logic does not admit a complete proof theory. In this respect second-order logic with standard semantics differs from first-order logic, and this is at least one of the reasons logicians have shied away from its use. (Quine occasionally pointed to this as a reason for thinking of second-order logic as not logic, properly speaking.)
As mentioned above, Henkin proved that the standard deductive system for first-order logic is sound, complete, and effective for second-order logic with Henkin semantics, and the deductive system with comprehension and choice principles is sound, complete, and effective for Henkin semantics using only models that satisfy these principles.
It was found that set theory could be formulated as an axiomatized system within the apparatus of first-order logic (at the cost of several kinds of completeness, but nothing so bad as Russell's paradox), and this was done (see Zermelo-Fraenkel set theory), as sets are vital for mathematics. Arithmetic, mereology, and a variety of other powerful logical theories could be formulated axiomatically without appeal to any more logical apparatus than first-order quantification, and this, along with Gödel and Skolem's adherence to first-order logic, led to a general decline in work in second (or any higher) order logic.
This rejection was actively advanced by some logicians, most notably W. V. Quine. Quine advanced the view that in predicate-language sentences like Fx the "x" is to be thought of as a variable or name denoting an object and hence can be quantified over, as in "For all things, it is the case that . . ." but the "F" is to be thought of as an abbreviation for an incomplete sentence, not the name of an object (not even of an abstract object like a property). For example, it might mean " . . . is a dog." But it makes no sense to think we can quantify over something like this. (Such a position is quite consistent with Frege's own arguments on the concept-object distinction). So to use a predicate as a variable is to have it occupy the place of a name which only individual variables should occupy. This reasoning has been rejected by Boolos.
In recent years second-order logic has made something of a recovery, buoyed by George Boolos' interpretation of second-order quantification as plural quantification over the same domain of objects as first-order quantification. Boolos furthermore points to the claimed nonfirstorderizability of sentences such as "Some critics admire only each other" and "Some of Fianchetto's men went into the warehouse unaccompanied by anyone else", which he argues can only be expressed by the full force of second-order quantification. However, generalized quantification and partially ordered (branching) quantification may suffice to express a certain class of purportedly nonfirstorderizable sentences as well, without appealing to second-order quantification.
Relationships among these complexity classes directly affect the relative expressiveness of the corresponding logics; for example, if PH = PSPACE, then adding a transitive closure operator to second-order logic would not make it any more expressive.
Origins of the American Civil War
Historians debating the origins of the American Civil War focus on the reasons seven states declared their secession from the U.S. and joined to form the Confederate States of America (the "Confederacy"). The main explanation is slavery, especially Southern anger at the attempts by Northern antislavery political forces to block the expansion of slavery into the western territories. Southern slave owners held that such a restriction on slavery would violate the principle of states' rights.
Abraham Lincoln won the 1860 presidential election without being on the ballot in ten of the Southern states. His victory triggered declarations of secession by seven slave states of the Deep South, and their formation of the Confederate States of America, even before Lincoln took office. Nationalists in the North and elsewhere refused to recognize the secessions, no foreign government recognized the Confederacy, and the U.S. government in Washington refused to abandon its forts in territory claimed by the Confederacy. War began in April 1861 when Confederates attacked Fort Sumter, a major U.S. fortress in South Carolina, the state that had been the first to declare its independence.
As a panel of historians emphasized in 2011, "while slavery and its various and multifaceted discontents were the primary cause of disunion, it was disunion itself that sparked the war." States' rights and the tariff issue became entangled in the slavery issue, and were intensified by it. Other important factors were party politics, abolitionism, Southern nationalism, Northern nationalism, expansionism, sectionalism, economics and modernization in the Antebellum period.
The United States had become a nation of two distinct regions. The free states in New England, the Northeast, and the Midwest had a rapidly growing economy based on family farms, industry, mining, commerce and transportation, with a large and rapidly growing urban population. Their growth was fed by a high birth rate and large numbers of European immigrants, especially Irish, British and German. The South was dominated by a settled plantation system based on slavery. There was some rapid growth taking place in the Southwest (e.g., Texas), based on high birth rates and high migration from the Southeast, but it had a much lower immigration rate from Europe. The South also had fewer large cities, and little manufacturing except in border areas. Slave owners controlled politics and economics, though about 70% of Southern whites owned no slaves and usually were engaged in subsistence agriculture.
Overall, the Northern population was growing much more quickly than the Southern population, which made it increasingly difficult for the South to continue to influence the national government. By the time of the 1860 election, the heavily agricultural southern states as a group had fewer Electoral College votes than the rapidly industrializing northern states. Lincoln was able to win the 1860 Presidential election without even being on the ballot in ten Southern states. Southerners felt a loss of federal concern for Southern pro-slavery political demands, and the continued domination of the Federal government by the "Slaveocracy" was on the wane. This political calculus provided a very real basis for Southerners' worry about the relative political decline of their region, as the North was growing much faster in both population and industrial output.
In the interest of maintaining unity, politicians had mostly moderated opposition to slavery, resulting in numerous compromises such as the Missouri Compromise of 1820. After the Mexican-American War, the issue of slavery in the new territories led to the Compromise of 1850. While the compromise averted an immediate political crisis, it did not permanently resolve the issue of the Slave power (the power of slaveholders to control the national government on the slavery issue). Part of the 1850 compromise was the Fugitive Slave Law of 1850, requiring that Northerners assist Southerners in reclaiming fugitive slaves, which many Northerners found to be extremely offensive.
Amid the emergence of increasingly virulent and hostile sectional ideologies in national politics, the collapse of the old Second Party System in the 1850s hampered efforts of the politicians to reach yet one more compromise. The compromise that was reached (the 1854 Kansas-Nebraska Act) outraged too many northerners, and led to the formation of the Republican Party, the first major party with no appeal in the South. The industrializing North and agrarian Midwest became committed to the economic ethos of free-labor industrial capitalism.
Arguments that slavery was undesirable for the nation had long existed, and early in U.S. history were made even by some prominent Southerners. After 1840, abolitionists denounced slavery as not only a social evil but a moral wrong. Many Northerners, especially leaders of the new Republican Party, considered slavery a great national evil and believed that a small number of Southern owners of large plantations controlled the national government with the goal of spreading that evil. Southern defenders of slavery, for their part, increasingly came to contend that blacks actually benefited from slavery, an assertion that alienated Northerners even further.
Early Republic
At the time of the American Revolution, the institution of slavery was firmly established in the American colonies. It was most important in the six southern states from Maryland to Georgia, but the total of half a million slaves was spread out through all of the colonies. In the South 40% of the population was made up of slaves, and as Americans moved into Kentucky and the rest of the southwest fully one-sixth of the settlers were slaves. By the end of the war, the New England states provided most of the American ships that were used in the foreign slave trade while most of their customers were in Georgia and the Carolinas.
During this time many Americans found it difficult to reconcile slavery with their interpretation of Christianity and the lofty sentiments that flowed from the Declaration of Independence. A small antislavery movement, led by the Quakers, had some impact in the 1780s, and by the late 1780s all of the states except for Georgia had placed some restrictions on their participation in slave trafficking. Still, no serious national political movement against slavery developed, largely due to the overriding concern over achieving national unity. When the Constitutional Convention met, slavery was the one issue "that left the least possibility of compromise, the one that would most pit morality against pragmatism". In the end, while many would take comfort in the fact that the word slavery never occurs in the Constitution, critics note that the three-fifths clause provided slaveholders with extra representatives in Congress, the requirement of the federal government to suppress domestic violence would dedicate national resources to defending against slave revolts, a twenty-year delay in banning the import of slaves allowed the South to fortify its labor needs, and the amendment process made the national abolition of slavery very unlikely in the foreseeable future.
With the outlawing of the African slave trade on January 1, 1808, many Americans felt that the slavery issue was resolved. Any national discussion that might have continued over slavery was drowned out by the years of trade embargoes, maritime competition with Great Britain and France, and, finally, the War of 1812. The one exception to this quiet regarding slavery was the New Englanders' association of their frustration with the war with their resentment of the three-fifths clause that seemed to allow the South to dominate national politics.
In the aftermath of the American Revolution, the northern states (north of the Mason-Dixon Line separating Pennsylvania and Maryland) abolished slavery by 1804. In the 1787 Northwest Ordinance, Congress (still under the Articles of Confederation) barred slavery from the Mid-Western territory north of the Ohio River, but when the U.S. Congress organized the southern territories acquired through the Louisiana Purchase, the ban on slavery was omitted.
Missouri Compromise
In 1819 Congressman James Tallmadge, Jr. of New York initiated an uproar in the South when he proposed two amendments to a bill admitting Missouri to the Union as a free state. The first barred slaves from being moved to Missouri, and the second would free all Missouri slaves born after admission to the Union at age 25. With the admission of Alabama as a slave state in 1819, the U.S. was equally divided with 11 slave states and 11 free states. The admission of the new state of Missouri as a slave state would give the slave states a majority in the Senate; the Tallmadge Amendment would give the free states a majority.
The Tallmadge amendments passed the House of Representatives but failed in the Senate when five Northern Senators voted with all the Southern senators. The question was now the admission of Missouri as a slave state, and many leaders shared Thomas Jefferson's fear of a crisis over slavery—a fear that Jefferson described as "a fire bell in the night". The crisis was solved by the Compromise of 1820, which admitted Maine to the Union as a free state at the same time that Missouri was admitted as a slave state. The Compromise also banned slavery in the Louisiana Purchase territory north and west of the state of Missouri along the 36°30′ line. The Missouri Compromise quieted the issue until its limitations on slavery were repealed by the Kansas-Nebraska Act of 1854.
In the South, the Missouri crisis reawakened old fears that a strong federal government could be a fatal threat to slavery. The Jeffersonian coalition that united southern planters and northern farmers, mechanics and artisans in opposition to the threat presented by the Federalist Party had started to dissolve after the War of 1812. It was not until the Missouri crisis that Americans became aware of the political possibilities of a sectional attack on slavery, and it was not until the mass politics of the Jackson Administration that this type of organization around this issue became practical.
Nullification Crisis
The American System, advocated by Henry Clay in Congress and supported by many nationalist supporters of the War of 1812 such as John C. Calhoun, was a program for rapid economic modernization featuring protective tariffs, internal improvements at Federal expense, and a national bank. The purpose was to develop American industry and international commerce. Since iron, coal, and water power were mainly in the North, this tax plan was doomed to cause rancor in the South where economies were agriculture-based. Southerners claimed it demonstrated favoritism toward the North.
The nation suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. The highly protective Tariff of 1828 (also called the "Tariff of Abominations"), designed to protect American industry by taxing imported manufactured goods, was enacted into law during the last year of the presidency of John Quincy Adams. Opposed in the South and parts of New England, the expectation of the tariff’s opponents was that with the election of Andrew Jackson the tariff would be significantly reduced.
By 1828 South Carolina state politics increasingly organized around the tariff issue. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and his vice-president John C. Calhoun, the most effective proponent of the constitutional theory of state nullification through his 1828 "South Carolina Exposition and Protest".
Congress enacted a new tariff in 1832, but it offered the state little relief, resulting in the most dangerous sectional crisis since the Union was formed. Some militant South Carolinians even hinted at withdrawing from the Union in response. The newly elected South Carolina legislature then quickly called for the election of delegates to a state convention. Once assembled, the convention voted to declare null and void the tariffs of 1828 and 1832 within the state. President Andrew Jackson responded firmly, declaring nullification an act of treason. He then took steps to strengthen federal forts in the state.
Violence seemed a real possibility early in 1833 as Jacksonians in Congress introduced a "Force Bill" authorizing the President to use the Federal army and navy in order to enforce acts of Congress. No other state had come forward to support South Carolina, and the state itself was divided on willingness to continue the showdown with the Federal government. The crisis ended when Clay and Calhoun worked to devise a compromise tariff. Both sides later claimed victory. Calhoun and his supporters in South Carolina claimed a victory for nullification, insisting that it had forced the revision of the tariff. Jackson's followers, however, saw the episode as a demonstration that no single state could assert its rights by independent action.
Calhoun, in turn, devoted his efforts to building up a sense of Southern solidarity so that when another standoff should come, the whole section might be prepared to act as a bloc in resisting the federal government. As early as 1830, in the midst of the crisis, Calhoun identified the right to own slaves as the chief southern minority right being threatened:
I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness.
The issue appeared again after 1842's Black Tariff. A period of relative free trade after 1846's Walker Tariff reduction followed until 1860, when the protectionist Morrill Tariff was introduced by the Republicans, fueling Southern anti-tariff sentiments once again.
Gag Rule debates
From 1831 to 1836 William Lloyd Garrison and the American Anti-Slavery Society (AA-SS) initiated a campaign to petition Congress in favor of ending slavery in the District of Columbia and all federal territories. Hundreds of thousands of petitions were sent with the number reaching a peak in 1835.
The House passed the Pinckney Resolutions on May 26, 1836. The first of these resolutions stated that Congress had no constitutional authority to interfere with slavery in the states and the second that it "ought not" do so in the District of Columbia. The third resolution, known from the beginning as the "gag rule", provided that:
All petitions, memorials, resolutions, propositions, or papers, relating in any way, or to any extent whatsoever, to the subject of slavery or the abolition of slavery, shall, without being either printed or referred, be laid on the table and that no further action whatever shall be had thereon.
The first two resolutions passed by votes of 182 to 9 and 132 to 45. The gag rule, supported by Northern and Southern Democrats as well as some Southern Whigs, was passed with a vote of 117 to 68.
Former President John Quincy Adams, who was elected to the House of Representatives in 1830, became an early and central figure in the opposition to the gag rules. He argued that they were a direct violation of the First Amendment right "to petition the Government for a redress of grievances". A majority of Northern Whigs joined the opposition. Rather than suppress anti-slavery petitions, however, the gag rules only served to offend Americans from Northern states, and dramatically increase the number of petitions.
Since the original gag was a resolution, not a standing House Rule, it had to be renewed every session, and the Adams faction often gained the floor before the gag could be imposed. However, in January 1840 the House of Representatives passed the Twenty-first Rule, which prohibited even the reception of anti-slavery petitions and was a standing House rule. Now the pro-petition forces focused on trying to revoke a standing rule. The new rule raised serious doubts about its constitutionality and had less support than the original Pinckney gag, passing only by 114 to 108. Throughout the gag period, Adams' "superior talent in using and abusing parliamentary rules" and skill in baiting his enemies into making mistakes enabled him to evade the rule and debate the slavery issues. The gag rule was finally rescinded on December 3, 1844, by a strongly sectional vote of 108 to 80, all the Northern and four Southern Whigs voting for repeal, along with 55 of the 71 Northern Democrats.
Antebellum South and the Union
There had been a continuing contest between the states and the national government over the power of the latter—and over the loyalty of the citizenry—almost since the founding of the republic. The Kentucky and Virginia Resolutions of 1798, for example, had defied the Alien and Sedition Acts, and at the Hartford Convention, New England voiced its opposition to President James Madison and the War of 1812, and discussed secession from the Union.
Southern culture
Although a minority of free Southerners owned slaves (and, in turn, a similarly small minority of those slaveholders owned the vast majority of slaves), Southerners of all classes nevertheless defended the institution of slavery – threatened by the rise of free labor and abolitionist movements in the Northern states – as the cornerstone of their social order.
Based on a system of plantation slavery, the social structure of the South was far more stratified and patriarchal than that of the North. In 1850 there were around 350,000 slaveholders in a total free Southern population of about six million. Among slaveholders, the concentration of slave ownership was unevenly distributed. Perhaps around 7 percent of slaveholders owned roughly three-quarters of the slave population. The largest slaveholders, generally owners of large plantations, represented the top stratum of Southern society. They benefited from economies of scale and needed large numbers of slaves on big plantations to produce profitable labor-intensive crops like cotton. This plantation-owning elite, known as "slave magnates", was comparable to the millionaires of the following century.
In the 1850s as large plantation owners outcompeted smaller farmers, more slaves were owned by fewer planters. Yet, while the proportion of the white population consisting of slaveholders was on the decline on the eve of the Civil War—perhaps falling below around a quarter of free southerners in 1860—poor whites and small farmers generally accepted the political leadership of the planter elite.
Several factors helped explain why slavery was not under serious threat of internal collapse from any moves for democratic change initiated from the South. First, given the opening of new territories in the West for white settlement, many non-slaveowners also perceived a possibility that they, too, might own slaves at some point in their life.
Second, small free farmers in the South often embraced hysterical racism, making them unlikely agents for internal democratic reforms in the South. The principle of white supremacy, accepted by almost all white southerners of all classes, made slavery seem legitimate, natural, and essential for a civilized society. White racism in the South was sustained by official systems of repression such as the "slave codes" and elaborate codes of speech, behavior, and social practices illustrating the subordination of blacks to whites. For example, the "slave patrols" were among the institutions bringing together southern whites of all classes in support of the prevailing economic and racial order. Serving as slave "patrollers" and "overseers" offered white southerners positions of power and honor. These positions gave even poor white southerners the authority to stop, search, whip, maim, and even kill any slave traveling outside his or her plantation. Slave "patrollers" and "overseers" also won prestige in their communities. Policing and punishing blacks who transgressed the regimentation of slave society was a valued community service in the South, where the fear of free blacks threatening law and order figured heavily in the public discourse of the period.
Third, many small farmers with a few slaves and yeomen were linked to elite planters through the market economy. In many areas, small farmers depended on local planter elites for vital goods and services including (but not limited to) access to cotton gins, access to markets, access to feed and livestock, and even for loans (since the banking system was not well developed in the antebellum South). Southern tradesmen often depended on the richest planters for steady work. Such dependency effectively deterred many white non-slaveholders from engaging in any political activity that was not in the interest of the large slaveholders. Furthermore, whites of varying social class, including poor whites and "plain folk" who worked outside or in the periphery of the market economy (and therefore lacked any real economic interest in the defense of slavery) might nonetheless be linked to elite planters through extensive kinship networks. Since inheritance in the South was often inequitable (and generally favored eldest sons), it was not uncommon for a poor white person to be perhaps the first cousin of the richest plantation owner of his county and to share the same militant support of slavery as his richer relatives. Finally, there was no secret ballot at the time anywhere in the United States – this innovation did not become widespread in the U.S. until the 1880s. For a typical white Southerner, this meant that so much as casting a ballot against the wishes of the establishment meant running the risk of social ostracism.
Thus, by the 1850s, Southern slaveholders and non-slaveholders alike felt increasingly encircled psychologically and politically in the national political arena because of the rise of free soilism and abolitionism in the Northern states. Increasingly dependent on the North for manufactured goods, for commercial services, and for loans, and increasingly cut off from the flourishing agricultural regions of the Northwest, they faced the prospects of a growing free labor and abolitionist movement in the North.
Militant defense of slavery
With the outcry over developments in Kansas strong in the North, defenders of slavery— increasingly committed to a way of life that abolitionists and their sympathizers considered obsolete or immoral— articulated a militant pro-slavery ideology that would lay the groundwork for secession upon the election of a Republican president. Southerners waged a vitriolic response to political change in the North. Slaveholding interests sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile" and "ruinous" legislation. Behind this shift was the growth of the cotton industry, which left slavery more important than ever to the Southern economy.
Reactions to the popularity of Uncle Tom's Cabin (1852) by Harriet Beecher Stowe (whom Abraham Lincoln reputedly called "the little woman that started this great war") and the growth of the abolitionist movement (pronounced after the founding of The Liberator in 1831 by William Lloyd Garrison) inspired an elaborate intellectual defense of slavery. Increasingly vocal (and sometimes violent) abolitionist movements, culminating in John Brown's raid on Harpers Ferry in 1859 were viewed as a serious threat, and—in the minds of many Southerners—abolitionists were attempting to foment violent slave revolts as seen in Haiti in the 1790s and as attempted by Nat Turner some three decades prior (1831).
After J. D. B. DeBow established De Bow's Review in 1846, it grew to become the leading Southern magazine, warning the planter class about the dangers of depending on the North economically. De Bow's Review also emerged as the leading voice for secession. The magazine emphasized the South's economic inequality, relating it to the concentration of manufacturing, shipping, banking and international trade in the North. As Southern writers searched for Biblical passages endorsing slavery and framed economic, sociological, historical and scientific arguments, slavery went from being a "necessary evil" to a "positive good". Dr. J. H. Van Evrie's book Negroes and Negro slavery: The First an Inferior Race: The Latter Its Normal Condition – setting out the arguments the title would suggest – was an attempt to apply scientific support to the Southern arguments in favor of race-based slavery.
Latent sectional divisions suddenly activated derogatory sectional imagery which emerged into sectional ideologies. As industrial capitalism gained momentum in the North, Southern writers emphasized whatever aristocratic traits they valued (but often did not practice) in their own society: courtesy, grace, chivalry, the slow pace of life, orderly life and leisure. This supported their argument that slavery provided a more humane society than industrial labor.
In his Cannibals All!, George Fitzhugh argued that the antagonism between labor and capital in a free society would result in "robber barons" and "pauper slavery", while in a slave society such antagonisms were avoided. He advocated enslaving Northern factory workers, for their own benefit. Abraham Lincoln, on the other hand, denounced such Southern insinuations that Northern wage earners were fatally fixed in that condition for life. To Free Soilers, the stereotype of the South was one of a diametrically opposite, static society in which the slave system maintained an entrenched anti-democratic aristocracy.
Southern fears of modernization
According to the historian James M. McPherson, exceptionalism applied not to the South but to the North after the North phased out slavery and launched an industrial revolution that led to urbanization, which in turn led to increased education, which in its own turn gave ever-increasing strength to various reform movements but especially abolitionism. The fact that seven immigrants out of eight settled in the North (and the fact that most immigrants viewed slavery with disfavor), compounded by the fact that twice as many whites left the South for the North as vice versa, contributed to the South's defensive-aggressive political behavior. The Charleston Mercury read that on the issue of slavery the North and South "are not only two Peoples, but they are rival, hostile Peoples." As De Bow's Review said, "We are resisting revolution.... We are not engaged in a Quixotic fight for the rights of man.... We are conservative."
Southern fears of modernity
Allan Nevins argued that the Civil War was an "irrepressible" conflict, adopting a phrase first used by U.S. Senator and Abraham Lincoln's Secretary of State William H. Seward. Nevins synthesized contending accounts emphasizing moral, cultural, social, ideological, political, and economic issues. In doing so, he brought the historical discussion back to an emphasis on social and cultural factors. Nevins pointed out that the North and the South were rapidly becoming two different peoples, a point made also by historian Avery Craven. At the root of these cultural differences was the problem of slavery, but fundamental assumptions, tastes, and cultural aims of the regions were diverging in other ways as well. More specifically, the North was rapidly modernizing in a manner threatening to the South. Historian McPherson explains:
When secessionists protested in 1861 that they were acting to preserve traditional rights and values, they were correct. They fought to preserve their constitutional liberties against the perceived Northern threat to overthrow them. The South's concept of republicanism had not changed in three-quarters of a century; the North's had.... The ascension to power of the Republican Party, with its ideology of competitive, egalitarian free-labor capitalism, was a signal to the South that the Northern majority had turned irrevocably towards this frightening, revolutionary future.
Harry L. Watson has synthesized research on antebellum southern social, economic, and political history. Self-sufficient yeomen, in Watson's view, "collaborated in their own transformation" by allowing promoters of a market economy to gain political influence. Resultant "doubts and frustrations" provided fertile soil for the argument that southern rights and liberties were menaced by Black Republicanism.
J. Mills Thornton III explained the viewpoint of the average white Alabamian. Thornton contends that Alabama was engulfed in a severe crisis long before 1860. Deeply held principles of freedom, equality, and autonomy, as expressed in republican values, appeared threatened, especially during the 1850s, by the relentless expansion of market relations and commercial agriculture. Alabamians were thus, he judged, prepared to believe the worst once Lincoln was elected.
Sectional tensions and the emergence of mass politics
The politicians of the 1850s were acting in a society in which the traditional restraints that had suppressed sectional conflict in the 1820s and 1830s – the most important of which was the stability of the two-party system – were being eroded as the rapid extension of mass democracy went forward in the North and South. It was an era when the mass political party galvanized voter participation to an unprecedented degree, and a time in which politics formed an essential component of American mass culture. Historians agree that political involvement was a larger concern to the average American in the 1850s than today. Politics was, in one of its functions, a form of mass entertainment, a spectacle with rallies, parades, and colorful personalities. Leading politicians, moreover, often served as a focus for popular interests, aspirations, and values.
Historian Allan Nevins, for instance, writes of political rallies in 1856 with turnouts of anywhere from twenty to fifty thousand men and women. Voter turnouts even ran as high as 84% by 1860. An abundance of new parties emerged 1854–56, including the Republicans, People's party men, Anti-Nebraskans, Fusionists, Know-Nothings, Know-Somethings (anti-slavery nativists), Maine Lawites, Temperance men, Rum Democrats, Silver Gray Whigs, Hindus, Hard Shell Democrats, Soft Shells, Half Shells and Adopted Citizens. By 1858, they were mostly gone, and politics divided four ways. Republicans controlled most Northern states with a strong Democratic minority. The Democrats were split North and South and fielded two tickets in 1860. Southern non-Democrats tried different coalitions; most supported the Constitutional Union party in 1860.
Many Southern states held constitutional conventions in 1851 to consider the questions of nullification and secession. With the exception of South Carolina, whose convention election did not even offer the option of "no secession" but rather "no secession without the collaboration of other states", the Southern conventions were dominated by Unionists who voted down articles of secession.
Historians today generally agree that economic conflicts were not a major cause of the war. While an economic basis to the sectional crisis was popular among the “Progressive school” of historians from the 1910s to the 1940s, few professional historians now subscribe to this explanation. According to economic historian Lee A. Craig, "In fact, numerous studies by economic historians over the past several decades reveal that economic conflict was not an inherent condition of North-South relations during the antebellum era and did not cause the Civil War."
When numerous groups tried at the last minute in 1860–61 to find a compromise to avert war, they did not turn to economic policies. The three major attempts at compromise, the Crittenden Compromise, the Corwin Amendment and the Washington Peace Conference, addressed only the slavery-related issues of fugitive slave laws, personal liberty laws, slavery in the territories and interference with slavery within the existing slave states.
Economic value of slavery to the South
Historian James L. Huston emphasizes the role of slavery as an economic institution. In October 1860 William Lowndes Yancey, a leading advocate of secession, placed the value of Southern-held slaves at $2.8 billion. Huston writes:
Understanding the relations between wealth, slavery, and property rights in the South provides a powerful means of understanding southern political behavior leading to disunion. First, the size dimensions of slavery are important to comprehend, for slavery was a colossal institution. Second, the property rights argument was the ultimate defense of slavery, and white southerners and the proslavery radicals knew it. Third, the weak point in the protection of slavery by property rights was the federal government.... Fourth, the intense need to preserve the sanctity of property rights in Africans led southern political leaders to demand the nationalization of slavery– the condition under which slaveholders would always be protected in their property holdings.
The cotton gin greatly increased the efficiency with which cotton could be harvested, contributing to the consolidation of "King Cotton" as the backbone of the economy of the Deep South, and to the entrenchment of the system of slave labor on which the cotton plantation economy depended.
The tendency of monoculture cotton plantings to lead to soil exhaustion created a need for cotton planters to move their operations to new lands, and therefore to the westward expansion of slavery from the Eastern seaboard into new areas (e.g., Alabama, Mississippi, and beyond to East Texas).
Regional economic differences
The South, Midwest, and Northeast had quite different economic structures. They traded with each other and each became more prosperous by staying in the Union, a point many businessmen made in 1860–61. However Charles A. Beard in the 1920s made a highly influential argument to the effect that these differences caused the war (rather than slavery or constitutional debates). He saw the industrial Northeast forming a coalition with the agrarian Midwest against the Plantation South. Critics challenged his image of a unified Northeast and said that the region was in fact highly diverse with many different competing economic interests. In 1860–61, most business interests in the Northeast opposed war.
After 1950, only a few mainstream historians accepted the Beard interpretation, though it was accepted by libertarian economists. As historian Kenneth Stampp, who abandoned Beardianism after 1950, sums up the scholarly consensus: "Most historians...now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united."
Free labor vs. pro-slavery arguments
Historian Eric Foner argued that a free-labor ideology dominated thinking in the North, which emphasized economic opportunity. By contrast, Southerners described free labor as "greasy mechanics, filthy operators, small-fisted farmers, and moonstruck theorists". They strongly opposed the homestead laws that were proposed to give free farms in the west, fearing the small farmers would oppose plantation slavery. Indeed, opposition to homestead laws was far more common in secessionist rhetoric than opposition to tariffs. Southerners such as Calhoun argued that slavery was "a positive good", and that slaves were more civilized and morally and intellectually improved because of slavery.
Religious conflict over the slavery question
Led by Mark Noll, a body of scholarship has highlighted the fact that the American debate over slavery became a shooting war in part because the two sides reached diametrically opposite conclusions based on reading the same authoritative source of guidance on moral questions: the King James Version of the Bible.
After the American Revolution and the disestablishment of government-sponsored churches, the U.S. experienced the Second Great Awakening, a massive Protestant revival. Without centralized church authorities, American Protestantism was heavily reliant on the Bible, which was read in the standard 19th-century Reformed hermeneutic of "common sense", literal interpretation as if the Bible were speaking directly about the modern American situation instead of events that occurred in a much different context, millennia ago. By the mid-19th century this form of religion and Bible interpretation had become a dominant strand in American religious, moral and political discourse, almost serving as a de facto state religion.
The problem that this caused for resolving the slavery question was that the Bible, interpreted under these assumptions, seemed to clearly suggest that slavery was Biblically justified:
- The pro-slavery South could point to slaveholding by the godly patriarch Abraham (Gen 12:5; 14:14; 24:35–36; 26:13–14), a practice that was later incorporated into Israelite national law (Lev 25:44–46). It was never denounced by Jesus, who made slavery a model of discipleship (Mk 10:44). The Apostle Paul supported slavery, counseling obedience to earthly masters (Eph 6:5–9; Col 3:22–25) as a duty in agreement with "the sound words of our Lord Jesus Christ and the teaching which accords with godliness" (1 Tim 6:3). Because slaves were to remain in their present state unless they could win their freedom (1 Cor 7:20–24), he sent the fugitive slave Onesimus back to his owner Philemon (Phlm 10–20). The abolitionist north had a difficult time matching the pro-slavery south passage for passage. [...] Professor Eugene Genovese, who has studied these biblical debates over slavery in minute detail, concludes that the pro-slavery faction clearly emerged victorious over the abolitionists except for one specious argument based on the so-called Curse of Ham (Gen 9:18–27). For our purposes, it is important to realize that the South won this crucial contest with the North by using the prevailing hermeneutic, or method of interpretation, on which both sides agreed. So decisive was its triumph that the South mounted a vigorous counterattack on the abolitionists as infidels who had abandoned the plain words of Scripture for the secular ideology of the Enlightenment.
Protestant churches in the U.S., unable to agree on what God's Word said about slavery, ended up with schisms between Northern and Southern branches: the Methodists in 1844, the Baptists in 1845, and the Presbyterians in 1857. These splits presaged the subsequent split in the nation: "The churches played a major role in the dividing of the nation, and it is probably true that it was the splits in the churches which made a final split of the nation inevitable." The conflict over how to interpret the Bible was central:
- The theological crisis occasioned by reasoning like [conservative Presbyterian theologian James H.] Thornwell's was acute. Many Northern Bible-readers and not a few in the South felt that slavery was evil. They somehow knew the Bible supported them in that feeling. Yet when it came to using the Bible as it had been used with such success to evangelize and civilize the United States, the sacred page was snatched out of their hands. Trust in the Bible and reliance upon a Reformed, literal hermeneutic had created a crisis that only bullets, not arguments, could resolve.
- The question of the Bible and slavery in the era of the Civil War was never a simple question. The issue involved the American expression of a Reformed literal hermeneutic, the failure of hermeneutical alternatives to gain cultural authority, and the exercise of deeply entrenched intuitive racism, as well as the presence of Scripture as an authoritative religious book and slavery as an inherited social-economic relationship. The North– forced to fight on unfriendly terrain that it had helped to create– lost the exegetical war. The South certainly lost the shooting war. But constructive orthodox theology was the major loser when American believers allowed bullets instead of hermeneutical self-consciousness to determine what the Bible said about slavery. For the history of theology in America, the great tragedy of the Civil War is that the most persuasive theologians were the Rev. Drs. William Tecumseh Sherman and Ulysses S. Grant.
There were many causes of the Civil War, but the religious conflict, almost unimaginable in modern America, cut very deep at the time. Noll and others highlight the significance of the religion issue for the famous phrase in Lincoln's second inaugural: "Both read the same Bible and pray to the same God, and each invokes His aid against the other."
The Territorial Crisis and the United States Constitution
Between 1803 and 1854, the United States achieved a vast expansion of territory through purchase, negotiation and conquest. Of the states carved out of these territories by 1845, all had entered the union as slave states: Louisiana, Missouri, Arkansas, Florida and Texas, as well as the southern portions of Alabama and Mississippi. And with the conquest of northern Mexico, including California, in 1848, slaveholding interests looked forward to the institution flourishing in these lands as well. Southerners also anticipated garnering slaves and slave states in Cuba and Central America. Northern free soil interests vigorously sought to curtail any further expansion of slave soil. It was these territorial disputes that the proslavery and antislavery forces collided over.
The existence of slavery in the southern states was far less politically polarizing than the explosive question of the territorial expansion of the institution in the west. Moreover, Americans were informed by two well-established readings of the Constitution regarding human bondage: that the slave states had complete autonomy over the institution within their boundaries, and that the domestic slave trade – trade among the states – was immune to federal interference. The only feasible strategy available to attack slavery was to restrict its expansion into the new territories. Slaveholding interests fully grasped the danger that this strategy posed to them. Both the South and the North believed: “The power to decide the question of slavery for the territories was the power to determine the future of slavery itself.”
By 1860, four doctrines had emerged to answer the question of federal control in the territories, and they all claimed to be sanctioned by the Constitution, implicitly or explicitly. Two of the “conservative” doctrines emphasized the written text and historical precedents of the founding document, while the other two doctrines developed arguments that transcended the Constitution.
One of the “conservative” theories, represented by the Constitutional Union Party, argued that the historical designation of free and slave apportionments in territories should become a Constitutional mandate. The Crittenden Compromise of 1860 was an expression of this view.
The second doctrine of Congressional preeminence, championed by Abraham Lincoln and the Republican Party, insisted that the Constitution did not bind legislators to a policy of balance – that slavery could be excluded altogether in a territory at the discretion of Congress – with one caveat: the due process clause of the Fifth Amendment must apply. In other words, Congress could restrict human bondage, but never establish it. The Wilmot Proviso announced this position in 1846.
Of the two doctrines that rejected federal authority, one was articulated by Senator Stephen A. Douglas of Illinois, a northern Democrat, and the other by the southern Democrats Senator Jefferson Davis of Mississippi and Senator John C. Breckinridge of Kentucky.
Douglas devised the doctrine of territorial or “popular” sovereignty, which declared that the settlers in a territory had the same rights as states in the Union to establish or disestablish slavery – a purely local matter. Congress, having created the territory, was barred, according to Douglas, from exercising any authority in domestic matters. To do so would violate historic traditions of self-government, implicit in the US Constitution. The Kansas-Nebraska Act of 1854 legislated this doctrine.
The fourth in this quartet is the theory of state sovereignty (“states’ rights”), also known as the “Calhoun doctrine” after the South Carolinian political theorist and statesman John C. Calhoun. Rejecting the arguments for federal authority or self-government, state sovereignty would empower states to promote the expansion of slavery as part of the Federal Union under the US Constitution – and not merely as an argument for secession. The basic premise was that all authority regarding matters of slavery in the territories resided in each state. The role of the federal government was merely to enable the implementation of state laws when residents of the states entered the territories. Calhoun asserted that the federal government in the territories was only the agent of the several sovereign states, and hence incapable of forbidding the bringing into any territory of anything that was legal property in any state. State sovereignty, in other words, gave the laws of the slaveholding states extra-jurisdictional effect.
“States’ rights” was an ideology formulated and applied as a means of advancing slave state interests through federal authority. As historian Thomas L. Krannawitter points out, “[T]he Southern demand for federal slave protection represented a demand for an unprecedented expansion of federal power.”
By 1860, these four doctrines comprised the major ideologies presented to the American public on the matters of slavery, the territories and the US Constitution.
Antislavery movements in the North gained momentum in the 1830s and 1840s, a period of rapid transformation of Northern society that inspired a social and political reformism. Many of the reformers of the period, including abolitionists, attempted in one way or another to transform the lifestyle and work habits of labor, helping workers respond to the new demands of an industrializing, capitalistic society.
Antislavery, like many other reform movements of the period, was influenced by the legacy of the Second Great Awakening, a period of religious revival, still relatively fresh in the American memory, that stressed the reform of individuals. Thus, while the reform spirit of the period was expressed by a variety of movements with often-conflicting political goals, most reform movements shared a common feature in their emphasis on the Great Awakening principle of transforming the human personality through discipline, order, and restraint.
"Abolitionist" had several meanings at the time. The followers of William Lloyd Garrison, including Wendell Phillips and Frederick Douglass, demanded the "immediate abolition of slavery", hence the name. A more pragmatic group of abolitionists, like Theodore Weld and Arthur Tappan, wanted immediate action, but that action might well be a program of gradual emancipation, with a long intermediate stage. "Antislavery men", like John Quincy Adams, did what they could to limit slavery and end it where possible, but were not part of any abolitionist group. For example, in 1841 Adams represented the Amistad African slaves in the Supreme Court of the United States and argued that they should be set free. In the last years before the war, "antislavery" could mean the Northern majority, like Abraham Lincoln, who opposed expansion of slavery or its influence, as by the Kansas-Nebraska Act, or the Fugitive Slave Act. Many Southerners called all these abolitionists, without distinguishing them from the Garrisonians. James M. McPherson explains the abolitionists' deep beliefs: "All people were equal in God's sight; the souls of black folks were as valuable as those of whites; for one of God's children to enslave another was a violation of the Higher Law, even if it was sanctioned by the Constitution."
Stressing the Yankee Protestant ideals of self-improvement, industry, and thrift, most abolitionists– most notably William Lloyd Garrison– condemned slavery as a lack of control over one's own destiny and the fruits of one's labor.
The experience of the fifty years… shows us the slaves trebling in numbers—slaveholders monopolizing the offices and dictating the policy of the Government—prostituting the strength and influence of the Nation to the support of slavery here and elsewhere—trampling on the rights of the free States, and making the courts of the country their tools. To continue this disastrous alliance longer is madness.… Why prolong the experiment?
Abolitionists also attacked slavery as a threat to the freedom of white Americans. Defining freedom as more than a simple lack of restraint, antebellum reformers held that the truly free man was one who imposed restraints upon himself. Thus, for the anti-slavery reformers of the 1830s and 1840s, the promise of free labor and upward social mobility (opportunities for advancement, rights to own property, and to control one's own labor), was central to the ideal of reforming individuals.
Controversy over the so-called Ostend Manifesto (which proposed the U.S. annexation of Cuba as a slave state) and the Fugitive Slave Act kept sectional tensions alive before the issue of slavery in the West could occupy the country's politics in the mid-to-late 1850s.
Antislavery sentiment among some groups in the North intensified after the Compromise of 1850, when Southerners began appearing in Northern states to pursue fugitives or often to claim as slaves free African Americans who had resided there for years. Meanwhile, some abolitionists openly sought to prevent enforcement of the law. Violation of the Fugitive Slave Act was often open and organized. In Boston– a city from which it was boasted that no fugitive had ever been returned– Theodore Parker and other members of the city's elite helped form mobs to prevent enforcement of the law as early as April 1851. A pattern of public resistance emerged in city after city, notably in Syracuse in 1851 (culminating in the Jerry Rescue incident late that year), and Boston again in 1854. But the issue did not lead to a crisis until revived by the same issue underlying the Missouri Compromise of 1820: slavery in the territories.
Arguments for and against slavery
William Lloyd Garrison, a prominent abolitionist, was motivated by a belief in the growth of democracy. Because the Constitution had a three-fifths clause, a fugitive slave clause and a 20-year extension of the Atlantic slave trade, Garrison once publicly burned a copy of the U.S. Constitution and called it "a covenant with death and an agreement with hell". In 1854, he said:
I am a believer in that portion of the Declaration of American Independence in which it is set forth, as among self-evident truths, "that all men are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness." Hence, I am an abolitionist. Hence, I cannot but regard oppression in every form—and most of all, that which turns a man into a thing—with indignation and abhorrence.
In contrast, Alexander Stephens, who became Vice President of the Confederacy, rejected Jefferson's premise of human equality and defended slavery in his March 1861 "Cornerstone Speech":
(Thomas Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error.... Our new government is founded upon exactly the opposite idea; its foundations are laid, its corner-stone rests, upon the great truth that the negro is not equal to the white man; that slavery—subordination to the superior race—is his natural and normal condition.
"Free soil" movement
The assumptions, tastes, and cultural aims of the reformers of the 1830s and 1840s anticipated the political and ideological ferment of the 1850s. A surge of working class Irish and German Catholic immigration provoked reactions among many Northern Whigs, as well as Democrats. Growing fears of labor competition for white workers and farmers because of the growing number of free blacks prompted several northern states to adopt discriminatory "Black Codes".
In the Northwest, although farm tenancy was increasing, the number of free farmers was still double that of farm laborers and tenants. Moreover, although the expansion of the factory system was undermining the economic independence of the small craftsman and artisan, industry in the region, still largely one of small towns, remained concentrated in small-scale enterprises. Arguably, social mobility was on the verge of contracting in the urban centers of the North, but long-cherished ideas of opportunity, "honest industry" and "toil" were at least close enough in time to lend plausibility to the free labor ideology.
In the rural and small-town North, the picture of Northern society (framed by the ethos of "free labor") corresponded to a large degree with reality. Propelled by advancements in transportation and communication – especially steam navigation, railroads, and telegraphs – the two decades before the Civil War saw rapid expansion in the population and economy of the Northwest. Combined with the rise of Northeastern and export markets for their products, the social standing of farmers in the region substantially improved. The small towns and villages that emerged as the Republican Party's heartland showed every sign of vigorous expansion. Their vision for an ideal society was of small-scale capitalism, with white American laborers entitled to the chance of upward mobility (opportunities for advancement, rights to own property, and to control their own labor). Many free-soilers demanded that the slave labor system and free black settlers (and, in places such as California, Chinese immigrants) should be excluded from the Great Plains to guarantee the predominance there of the free white laborer.
Opposition to the 1847 Wilmot Proviso helped to consolidate the "free-soil" forces. The next year, Radical New York Democrats known as Barnburners, members of the Liberty Party, and anti-slavery Whigs held a convention at Buffalo, New York, in August, forming the Free-Soil Party. The party supported former President Martin Van Buren and Charles Francis Adams, Sr., for President and Vice President, respectively. The party opposed the expansion of slavery into territories where it had not yet existed, such as Oregon and the ceded Mexican territory.
Relating Northern and Southern positions on slavery to basic differences in labor systems, but insisting on the role of culture and ideology in coloring these differences, Eric Foner's book Free Soil, Free Labor, Free Men (1970) went beyond the economic determinism of Charles A. Beard (a leading historian of the 1930s). Foner emphasized the importance of free labor ideology to Northern opponents of slavery, pointing out that the moral concerns of the abolitionists were not necessarily the dominant sentiments in the North. Many Northerners (including Lincoln) opposed slavery also because they feared that black labor might spread to the North and threaten the position of free white laborers. In this sense, Republicans and the abolitionists were able to appeal to powerful emotions in the North through a broader commitment to "free labor" principles. The "Slave Power" idea had a far greater appeal to Northern self-interest than arguments based on the plight of black slaves in the South. If the free labor ideology of the 1830s and 1840s depended on the transformation of Northern society, its entry into politics depended on the rise of mass democracy, in turn propelled by far-reaching social change. Its chance would come by the mid-1850s with the collapse of the traditional two-party system, which had long suppressed sectional conflict.
Slavery question in territories acquired from Mexico
Soon after the Mexican War started, and long before negotiation of the new US-Mexico border, the question of slavery in the territories to be acquired polarized the Northern and Southern United States in the most bitter sectional conflict up to this time. The resulting deadlock lasted four years, during which the Second Party System broke up, Mormon pioneers settled Utah, the California Gold Rush settled California, and New Mexico under a federal military government turned back Texas's attempt to assert control over territory it claimed as far west as the Rio Grande. Eventually the Compromise of 1850 preserved the Union, but only for another decade. Proposals included:
- The Wilmot Proviso banning slavery in any new territory to be acquired from Mexico, not including Texas which had been annexed the previous year. Passed by the United States House of Representatives in August 1846 and February 1847 but not the Senate. Later an effort to attach the proviso to the Treaty of Guadalupe Hidalgo also failed.
- Failed amendments to the Wilmot Proviso by William W. Wick and then Stephen Douglas extending the Missouri Compromise line (36°30' parallel north) west to the Pacific, allowing slavery in most of present day New Mexico and Arizona, Las Vegas, Nevada, and Southern California, as well as any other territories that might be acquired from Mexico. The line was again proposed by the Nashville Convention of June 1850.
- Popular sovereignty, developed by Lewis Cass and Douglas as the eventual Democratic Party position, letting each territory decide whether to allow slavery.
- William L. Yancey's "Alabama Platform", endorsed by the Alabama and Georgia legislatures and by Democratic state conventions in Florida and Virginia, called for no restrictions on slavery in the territories either by the federal government or by territorial governments before statehood, opposition to any candidates supporting either the Wilmot Proviso or popular sovereignty, and federal legislation overruling Mexican anti-slavery laws.
- General Zachary Taylor, who became the Whig candidate in 1848 and then President from March 1849 to July 1850, proposed after becoming President that the entire area become two free states, called California and New Mexico but much larger than the eventual ones. None of the area would be left as an unorganized or organized territory, avoiding the question of slavery in the territories.
- The Mormons' proposal for a State of Deseret incorporating most of the area of the Mexican Cession but excluding the largest non-Mormon populations in Northern California and central New Mexico was considered unlikely to succeed in Congress, but nevertheless in 1849 President Zachary Taylor sent his agent John Wilson westward with a proposal to combine California and Deseret as a single state, decreasing the number of new free states and the erosion of Southern parity in the Senate.
- The Compromise of 1850, proposed by Henry Clay in January 1850, guided to passage by Douglas over Northern Whig and Southern Democrat opposition, and enacted September 1850, admitted California as a free state including Southern California and organized Utah Territory and New Mexico Territory with slavery to be decided by popular sovereignty. Texas dropped its claim to the disputed northwestern areas in return for debt relief, and the areas were divided between the two new territories and unorganized territory. El Paso where Texas had successfully established county government was left in Texas. No southern territory dominated by Southerners (like the later short-lived Confederate Territory of Arizona) was created. Also, the slave trade was abolished in Washington, D.C. (but not slavery itself), and the Fugitive Slave Act was strengthened.
States' rights
States' rights was an issue in the 19th century for those who held that the authority of the individual states superseded that of the federal government, and that the federal government was violating the role intended for it by the Founding Fathers of the United States. Kenneth M. Stampp notes that each section used states' rights arguments when convenient, and shifted positions when convenient. For example, the Fugitive Slave Act of 1850 was justified by its supporters as a state's right to have its property laws respected by other states, and was resisted by northern legislatures in the form of state personal liberty laws that placed state laws above the federal mandate.
States’ rights and slavery
Arthur M. Schlesinger, Jr. noted that states' rights “never had any real vitality independent of underlying conditions of vast social, economic, or political significance.” He further elaborated:
From the close of the nullification episode of 1832–1833 to the outbreak of the Civil War, the agitation of state rights was intimately connected with the new issue of growing importance, the slavery question, and the principal form assumed by the doctrine was the right of secession. The pro-slavery forces sought refuge in the state rights position as a shield against federal interference with pro-slavery projects.... As a natural consequence, anti-slavery legislatures in the North were led to lay great stress on the national character of the Union and the broad powers of the general government in dealing with slavery. Nevertheless, it is significant to note that when it served anti-slavery purposes better to lapse into state rights dialectic, northern legislatures did not hesitate to be inconsistent.
Echoing Schlesinger, Forrest McDonald wrote that “the dynamics of the tension between federal and state authority changed abruptly during the late 1840s” as a result of the acquisition of territory in the Mexican War. McDonald states:
And then, as a by-product or offshoot of a war of conquest, slavery– a subject that leading politicians had, with the exception of the gag rule controversy and Calhoun’s occasional outbursts, scrupulously kept out of partisan debate– erupted as the dominant issue in that arena. So disruptive was the issue that it subjected the federal Union to the greatest strain the young republic had yet known.
States' rights and minority rights
States' rights theories gained strength from the awareness that the Northern population was growing much faster than the population of the South, so it was only a matter of time before the North controlled the federal government. Acting as a "conscious minority", Southerners hoped that a strict constructionist interpretation of the Constitution would limit federal power over the states, and that a defense of states' rights against federal encroachments or even nullification or secession would save the South. Before 1860, most presidents were either Southern or pro-South. The North's growing population would mean the election of pro-North presidents, and the addition of free-soil states would end Southern parity with the North in the Senate. As the historian Allan Nevins described Calhoun's theory of states' rights, "Governments, observed Calhoun, were formed to protect minorities, for majorities could take care of themselves".
Until the 1860 election, the South’s interests nationally were entrusted to the Democratic Party. In 1860, the Democratic Party split into Northern and Southern factions as the result of a "bitter debate in the Senate between Jefferson Davis and Stephen Douglas". The debate was over resolutions proposed by Davis “opposing popular sovereignty and supporting a federal slave code and states’ rights” which carried over to the national convention in Charleston.
Davis defined equality in terms of the equal rights of states, and opposed the declaration that all men are created equal. Jefferson Davis stated that a "disparaging discrimination" and a fight for "liberty" against "the tyranny of an unbridled majority" gave the Confederate states a right to secede. In 1860, Congressman Laurence M. Keitt of South Carolina said, "The anti-slavery party contend that slavery is wrong in itself, and the Government is a consolidated national democracy. We of the South contend that slavery is right, and that this is a confederate Republic of sovereign States."
Stampp cited Confederate Vice President Alexander Stephens and his A Constitutional View of the Late War Between the States as an example of a Southern leader who called slavery the "cornerstone of the Confederacy" when the war began and then, after Southern defeat, said that the war had been about states' rights rather than slavery. Stampp said that Stephens became one of the most ardent defenders of the Lost Cause.
To the old Union they had said that the Federal power had no authority to interfere with slavery issues in a state. To their new nation they would declare that the state had no power to interfere with a federal protection of slavery. Of all the many testimonials to the fact that slavery, and not states rights, really lay at the heart of their movement, this was the most eloquent of all.
The Compromise of 1850
The victory of the United States over Mexico resulted in the addition of large new territories conquered from Mexico. Controversy over whether these territories would be slave or free raised the risk of a war between slave and free states, and Northern support for the Wilmot Proviso, which would have banned slavery in the conquered territories, increased sectional tensions. The controversy was temporarily resolved by the Compromise of 1850, which allowed the territories of Utah and New Mexico to decide for or against slavery, but also allowed the admission of California as a free state, reduced the size of the slave state of Texas by adjusting the boundary, and ended the slave trade (but not slavery itself) in the District of Columbia. In return, the South got a stronger fugitive slave law than the version mentioned in the Constitution. The Fugitive Slave Law would reignite controversy over slavery.
Fugitive Slave Law issues
The Fugitive Slave Law of 1850 required that Northerners assist Southerners in reclaiming fugitive slaves, which many Northerners found to be extremely offensive. Anthony Burns was among the fugitive slaves captured and returned in chains to slavery as a result of the law. Harriet Beecher Stowe's best-selling novel Uncle Tom's Cabin greatly increased opposition to the Fugitive Slave Law.
Kansas-Nebraska Act (1854)
Most people thought the Compromise had ended the territorial issue, but Stephen A. Douglas reopened it in 1854, in the name of democracy. Douglas proposed the Kansas-Nebraska Bill with the intention of opening up vast new high quality farm lands to settlement. As a Chicagoan, he was especially interested in the railroad connections from Chicago into Kansas and Nebraska, but that was not a controversial point. More importantly, Douglas firmly believed in democracy at the grass roots—that actual settlers have the right to decide on slavery, not politicians from other states. His bill provided that popular sovereignty, through the territorial legislatures, should decide "all questions pertaining to slavery", thus effectively repealing the Missouri Compromise. The ensuing public reaction against this repeal eventually created a firestorm of protest in the Northern states. However, the popular reaction in the first month after the bill's introduction failed to foreshadow the gravity of the situation: Northern papers initially ignored the story, and Republican leaders lamented the lack of a popular response.
Eventually, the popular reaction did come, but the leaders had to spark it. Chase's "Appeal of the Independent Democrats" did much to arouse popular opinion. In New York, William H. Seward finally took it upon himself to organize a rally against the Nebraska bill, since none had arisen spontaneously. Newspapers such as the National Era and the New York Tribune, along with local free-soil journals, condemned the bill. The Lincoln-Douglas debates of 1858 drew national attention to the issue of slavery expansion.
Founding of the Republican Party (1854)
Convinced that Northern society was superior to that of the South, and increasingly persuaded of the South's ambitions to extend slave power beyond its existing borders, Northerners were embracing a viewpoint that made conflict likely; however, conflict required the ascendancy of a political group to express the views of the North, such as the Republican Party. The Republican Party– campaigning on the popular, emotional issue of "free soil" in the frontier– captured the White House after just six years of existence.
The Republican Party grew out of the controversy over the Kansas-Nebraska legislation. Once the Northern reaction against the Kansas-Nebraska Act took place, its leaders acted to advance another political reorganization. Henry Wilson declared the Whig Party dead and vowed to oppose any efforts to resurrect it. Horace Greeley's Tribune called for the formation of a new Northern party, and Benjamin Wade, Chase, Charles Sumner, and others spoke out for the union of all opponents of the Nebraska Act. Gamaliel Bailey of the National Era was involved in calling a caucus of anti-slavery Whig and Democratic Party Congressmen in May.
Meeting in a Ripon, Wisconsin, Congregational Church on February 28, 1854, some thirty opponents of the Nebraska Act called for the organization of a new political party and suggested that "Republican" would be the most appropriate name (to link their cause to the defunct Republican Party of Thomas Jefferson). These founders also took a leading role in the creation of the Republican Party in many northern states during the summer of 1854. While conservatives and many moderates were content merely to call for the restoration of the Missouri Compromise or a prohibition of slavery extension, radicals advocated repeal of the Fugitive Slave Laws and rapid abolition in existing states. The term "radical" has also been applied to those who objected to the Compromise of 1850, which extended slavery in the territories.
But without the benefit of hindsight, the 1854 elections would seem to indicate the possible triumph of the Know-Nothing movement rather than anti-slavery, with the Catholic/immigrant question replacing slavery as the issue capable of mobilizing mass appeal. Know-Nothings, for instance, captured the mayoralty of Philadelphia with a majority of over 8,000 votes in 1854. Even after provoking immense discord with his Kansas-Nebraska Act, Senator Douglas began speaking of the Know-Nothings, rather than the Republicans, as the principal danger to the Democratic Party.
When Republicans spoke of themselves as a party of "free labor", they appealed to a rapidly growing, primarily middle class base of support, not permanent wage earners or the unemployed (the working class). When they extolled the virtues of free labor, they were merely reflecting the experiences of millions of men who had "made it" and millions of others who had a realistic hope of doing so. Like the Tories in England, the Republicans in the United States would emerge as the nationalists, homogenizers, imperialists, and cosmopolitans.
Those who had not yet "made it" included Irish immigrants, who made up a large and growing proportion of Northern factory workers. Republicans often saw the Catholic working class as lacking the qualities of self-discipline, temperance, and sobriety essential for their vision of ordered liberty. Republicans insisted that there was a high correlation between education, religion, and hard work—the values of the "Protestant work ethic"—and Republican votes. "Where free schools are regarded as a nuisance, where religion is least honored and lazy unthrift is the rule", read an editorial of the pro-Republican Chicago Democratic Press after James Buchanan's defeat of John C. Fremont in the 1856 presidential election, "there Buchanan has received his strongest support".
Ethno-religious, socio-economic, and cultural fault lines ran throughout American society, but were becoming increasingly sectional, pitting Yankee Protestants with a stake in the emerging industrial capitalism and American nationalism against those tied to Southern slaveholding interests. For example, the historian Don E. Fehrenbacher, in his Prelude to Greatness: Lincoln in the 1850s, noticed how Illinois was a microcosm of the national political scene, pointing out voting patterns that bore striking correlations to regional patterns of settlement. Areas settled from the South were staunchly Democratic, while those settled by New Englanders were staunchly Republican. In addition, a belt of border counties was known for its political moderation and traditionally held the balance of power. Intertwined with religious, ethnic, regional, and class identities, the issues of free labor and free soil were thus easy to play on.
Events during the next two years in "Bleeding Kansas" sustained the popular fervor originally aroused among some elements in the North by the Kansas-Nebraska Act. Free-State settlers from the North were encouraged by press and pulpit and the powerful organs of abolitionist propaganda. Often they received financial help from such organizations as the Massachusetts Emigrant Aid Company. Those from the South often received financial contributions from the communities they left. Southerners sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile and ruinous legislation".
While the Great Plains were largely unfit for the cultivation of cotton, informed Southerners demanded that the West be open to slavery, often—perhaps most often—with minerals in mind. Brazil, for instance, was an example of the successful use of slave labor in mining. In the middle of the 18th century, diamond mining supplemented gold mining in Minas Gerais and accounted for a massive transfer of masters and slaves from Brazil's northeastern sugar region. Southern leaders knew a good deal about this experience. It was even promoted in the pro-slavery DeBow's Review as far back as 1848.
Fragmentation of the American party system
"Bleeding Kansas" and the elections of 1856
In Kansas around 1855, the slavery issue reached a condition of intolerable tension and violence. But this was in an area where an overwhelming proportion of settlers were merely land-hungry Westerners indifferent to public issues. The majority of the inhabitants were not concerned with sectional tensions or the issue of slavery. Instead, the tension in Kansas began as a contention between rival claimants. During the first wave of settlement, no one held titles to the land, and settlers rushed to occupy newly open land fit for cultivation. While the tension and violence did emerge as a pattern pitting Yankee and Missourian settlers against each other, there is little evidence of any ideological divides on the questions of slavery. Instead, the Missouri claimants, thinking of Kansas as their own domain, regarded the Yankee squatters as invaders, while the Yankees accused the Missourians of grabbing the best land without honestly settling on it.
However, the 1855–56 violence in "Bleeding Kansas" did reach an ideological climax after John Brown – regarded by followers as the instrument of God's will to destroy slavery – entered the melee. His killing of five pro-slavery settlers (the so-called "Pottawatomie Massacre", during the night of May 24, 1856) resulted in some irregular, guerrilla-style strife. Aside from John Brown's fervor, the strife in Kansas often involved only armed bands more interested in land claims or loot.
Of greater importance than the civil strife in Kansas, however, was the reaction against it nationwide and in Congress. In both North and South, the belief was widespread that the aggressive designs of the other section were epitomized by (and responsible for) what was happening in Kansas. Consequently, "Bleeding Kansas" emerged as a symbol of sectional controversy.
Indignant over the developments in Kansas, the Republicans—the first entirely sectional major party in U.S. history—entered their first presidential campaign with confidence. Their nominee, John C. Frémont, was a generally safe candidate for the new party. Although his nomination upset some of their Nativist Know-Nothing supporters (his mother was a Catholic), the nomination of the famed explorer of the Far West with no political record was an attempt to woo ex-Democrats. The other two Republican contenders, William H. Seward and Salmon P. Chase, were seen as too radical.
Nevertheless, the campaign of 1856 was waged almost exclusively on the slavery issue—pitted as a struggle between democracy and aristocracy—focusing on the question of Kansas. The Republicans condemned the Kansas-Nebraska Act and the expansion of slavery, but they advanced a program of internal improvements combining the idealism of anti-slavery with the economic aspirations of the North. The new party rapidly developed a powerful partisan culture, and energetic activists drove voters to the polls in unprecedented numbers. People reacted with fervor. Young Republicans organized the "Wide Awake" clubs and chanted "Free Soil, Free Labor, Free Men, Frémont!" With Southern fire-eaters and even some moderates uttering threats of secession if Frémont won, the Democratic candidate, Buchanan, benefited from apprehensions about the future of the Union.
Dred Scott decision (1857) and the Lecompton Constitution
The Lecompton Constitution and Dred Scott v. Sandford were both part of the Bleeding Kansas controversy over slavery that followed the Kansas-Nebraska Act, Stephen Douglas' attempt to replace the Missouri Compromise ban on slavery in the Kansas and Nebraska territories with popular sovereignty, under which the people of a territory could vote either for or against slavery. The Lecompton Constitution, which would have allowed slavery in Kansas, was the result of massive vote fraud by the pro-slavery Border Ruffians. Douglas defeated the Lecompton Constitution because it had been approved only by the pro-slavery minority in Kansas, and Douglas believed in majority rule. Douglas hoped that both South and North would support popular sovereignty, but the opposite was true. Neither side trusted Douglas.
The Supreme Court decision of 1857 in Dred Scott v. Sandford added to the controversy. Chief Justice Roger B. Taney's decision said that slaves were "so far inferior that they had no rights which the white man was bound to respect", and that slavery could spread into the territories even if the majority of people in the territories were anti-slavery. Lincoln warned that "the next Dred Scott decision" could threaten Northern states with slavery.
Buchanan, Republicans and anti-administration Democrats
President James Buchanan decided to end the troubles in Kansas by urging Congress to admit Kansas as a slave state under the Lecompton Constitution. Kansas voters, however, soundly rejected this constitution – albeit amid widespread fraud on both sides – by more than 10,000 votes. As Buchanan directed his presidential authority to this goal, he further angered the Republicans and alienated members of his own party. Prompting their break with the administration, the Douglasites saw this scheme as an attempt to pervert the principle of popular sovereignty on which the Kansas-Nebraska Act was based. Nationwide, conservatives were incensed, feeling as though the principles of states' rights had been violated. Even in the South, ex-Whigs and border-state Know-Nothings – most notably John Bell and John J. Crittenden (key figures in later sectional controversies) – urged the Republicans to oppose the administration's moves and take up the demand that the territories be given the power to accept or reject slavery.
As the schism in the Democratic party deepened, moderate Republicans argued that an alliance with anti-administration Democrats, especially Stephen Douglas, would be a key advantage in the 1860 elections. Some Republican observers saw the controversy over the Lecompton Constitution as an opportunity to peel off Democratic support in the border states, where Frémont picked up little support. After all, the border states had often gone for Whigs with a Northern base of support in the past without prompting threats of Southern withdrawal from the Union.
Among the proponents of this strategy was The New York Times, which called on the Republicans to downplay opposition to popular sovereignty in favor of a compromise policy calling for "no more slave states" in order to quell sectional tensions. The Times maintained that for the Republicans to be competitive in the 1860 elections, they would need to broaden their base of support to include all voters who for one reason or another were upset with the Buchanan Administration.
Indeed, pressure was strong for an alliance that would unite the growing opposition to the Democratic Administration. But such an alliance was no novel idea; it would essentially entail transforming the Republicans into the national, conservative, Union party of the country. In effect, this would be a successor to the Whig party.
Republican leaders, however, staunchly opposed any attempts to modify the party position on slavery, appalled by what they considered a surrender of their principles when, for example, all the ninety-two Republican members of Congress voted for the Crittenden-Montgomery bill in 1858. Although this compromise measure blocked Kansas' entry into the union as a slave state, the fact that it called for popular sovereignty, rather than outright opposition to the expansion of slavery, was troubling to the party leaders.
In the end, the Crittenden-Montgomery bill did not forge a grand anti-administration coalition of Republicans, ex-Whig Southerners in the border states, and Northern Democrats. Instead, the Democratic Party merely split along sectional lines. Anti-Lecompton Democrats complained that a new, pro-slavery test had been imposed upon the party. The Douglasites, however, refused to yield to administration pressure. Like the anti-Nebraska Democrats, who were now members of the Republican Party, the Douglasites insisted that they – not the administration – commanded the support of most northern Democrats.
Extremist sentiment in the South advanced dramatically as the Southern planter class perceived its hold on the executive, legislative, and judicial apparatus of the central government waning. It also grew increasingly difficult for Southern Democrats to manipulate power in many of the Northern states through their allies in the Democratic Party.
Historians have emphasized that the sense of honor was a central concern of upper class white Southerners. The idea of being treated like a second class citizen was anathema and could not be tolerated by an honorable southerner. The anti-slavery position held that slavery was a negative or evil phenomenon that damaged the rights of white men and the prospects of republicanism. To the white South this rhetoric made Southerners second-class citizens because it trampled their Constitutional rights to take their property anywhere.
Assault on Sumner (1856)
On May 19, 1856, Massachusetts Senator Charles Sumner gave a long speech in the Senate entitled "The Crime Against Kansas", which condemned the Slave Power as the evil force behind the nation's troubles. Sumner said the Southerners had committed a "crime against Kansas", singling out Senator Andrew P. Butler of South Carolina:
- "Not in any common lust for power did this uncommon tragedy have its origin. It is the rape of a virgin Territory, compelling it to the hateful embrace of slavery; and it may be clearly traced to a depraved desire for a new Slave State, hideous offspring of such a crime, in the hope of adding to the power of slavery in the National Government."
Sumner cast the South Carolinian as having "chosen a mistress [the harlot slavery]... who, though ugly to others, is always lovely to him, though polluted in the sight of the world is chaste in his sight." According to Hoffer (2010), "It is also important to note the sexual imagery that recurred throughout the oration, which was neither accidental nor without precedent. Abolitionists routinely accused slaveholders of maintaining slavery so that they could engage in forcible sexual relations with their slaves." Three days later, Sumner, working at his desk on the Senate floor, was beaten almost to death by Congressman Preston S. Brooks, Butler's nephew. Sumner took years to recover; he became a martyr to the antislavery cause and said the episode proved the barbarism of slave society. Brooks was lauded as a hero upholding Southern honor. The episode further polarized North and South, strengthened the new Republican Party, and added a new element of violence on the floor of Congress.
Emergence of Lincoln
Republican Party structure
Despite their significant loss in the election of 1856, Republican leaders realized that, even though they appealed only to Northern voters, they needed to win only two more states, such as Pennsylvania and Illinois, to capture the presidency in 1860.
As the Democrats were grappling with their own troubles, leaders in the Republican party fought to keep elected members focused on the issue of slavery in the West, which allowed them to mobilize popular support. Chase wrote Sumner that if the conservatives succeeded, it might be necessary to recreate the Free Soil Party. He was also particularly disturbed by the tendency of many Republicans to eschew moral attacks on slavery for political and economic arguments.
The controversy over slavery in the West was still not creating a fixation on the issue of slavery. Although the old restraints on the sectional tensions were being eroded with the rapid extension of mass politics and mass democracy in the North, the perpetuation of conflict over the issue of slavery in the West still required the efforts of radical Democrats in the South and radical Republicans in the North. They had to ensure that the sectional conflict would remain at the center of the political debate.
William Seward contemplated this potential in the 1840s, when the Democrats were the nation's majority party, usually controlling Congress, the presidency, and many state offices. The country's institutional structure and party system allowed slaveholders to prevail in more of the nation's territories and to garner a great deal of influence over national policy. With growing popular discontent with the unwillingness of many Democratic leaders to take a stand against slavery, and growing consciousness of the party's increasingly pro-Southern stance, Seward became convinced that the only way for the Whig Party to counteract the Democrats' strong monopoly of the rhetoric of democracy and equality was for the Whigs to embrace anti-slavery as a party platform. To increasing numbers of Northerners, the Southern labor system seemed contrary to the ideals of American democracy.
Republicans believed in the existence of "the Slave Power Conspiracy", which had seized control of the federal government and was attempting to pervert the Constitution for its own purposes. The "Slave Power" idea gave the Republicans the anti-aristocratic appeal with which men like Seward had long wished to be associated politically. By fusing older anti-slavery arguments with the idea that slavery posed a threat to Northern free labor and democratic values, it enabled the Republicans to tap into the egalitarian outlook which lay at the heart of Northern society.
In this sense, during the 1860 presidential campaign, Republican orators even cast "Honest Abe" as an embodiment of these principles, repeatedly referring to him as "the child of labor" and "son of the frontier", who had proved how "honest industry and toil" were rewarded in the North. Although Lincoln had been a Whig, the "Wide Awakes" (members of the Republican clubs), used replicas of rails that he had split to remind voters of his humble origins.
In almost every northern state, organizers attempted to have a Republican Party or an anti-Nebraska fusion movement on ballots in 1854. In areas where the radical Republicans controlled the new organization, the comprehensive radical program became the party policy. Just as they helped organize the Republican Party in the summer of 1854, the radicals played an important role in the national organization of the party in 1856. Republican conventions in New York, Massachusetts, and Illinois adopted radical platforms. These radical platforms in such states as Wisconsin, Michigan, Maine, and Vermont usually called for the divorce of the government from slavery, the repeal of the Fugitive Slave Laws, and no more slave states, as did platforms in Pennsylvania, Minnesota, and Massachusetts when radical influence was high.
Conservatives at the Republican 1860 nominating convention in Chicago were able to block the nomination of William Seward, who had an earlier reputation as a radical (but by 1860 had been criticized by Horace Greeley as being too moderate). Other candidates had earlier joined or formed parties opposing the Whigs and had thereby made enemies of many delegates. Lincoln was selected on the third ballot. However, conservatives were unable to bring about the resurrection of "Whiggery". The convention's resolutions regarding slavery were roughly the same as they had been in 1856, but the language appeared less radical. In the following months, even Republican conservatives like Thomas Ewing and Edward Baker embraced the platform language that "the normal condition of territories was freedom". All in all, the organizers had done an effective job of shaping the official policy of the Republican Party.
Southern slave holding interests now faced the prospect of a Republican President and the entry of new free states that would alter the nation's balance of power between the sections. To many Southerners, the resounding defeat of the Lecompton Constitution foreshadowed the entry of more free states into the Union. Dating back to the Missouri Compromise, the Southern region desperately sought to maintain an equal balance of slave states and free states so as to be competitive in the Senate. Since the last slave state was admitted in 1845, five more free states had entered. The tradition of maintaining a balance between North and South was abandoned in favor of the addition of more free soil states.
Sectional battles over federal policy in the late 1850s
Lincoln-Douglas Debates
The Lincoln-Douglas Debates were a series of seven debates in 1858 between Stephen Douglas, United States Senator from Illinois, and Abraham Lincoln, the Republican who sought to replace Douglas in the Senate. The debates were mainly about slavery. Douglas defended his Kansas-Nebraska Act, which replaced the Missouri Compromise ban on slavery in the Louisiana Purchase territory north and west of Missouri with popular sovereignty, which allowed residents of territories such as Kansas to vote either for or against slavery. Douglas put Lincoln on the defensive by accusing him of being a Black Republican abolitionist, but Lincoln responded by asking Douglas to reconcile popular sovereignty with the Dred Scott decision. Douglas' Freeport Doctrine was that residents of a territory could keep slavery out by refusing to pass a slave code and other laws needed to protect slavery. Douglas' Freeport Doctrine, and the fact that he helped defeat the pro-slavery Lecompton Constitution, made Douglas unpopular in the South, which led to the 1860 split of the Democratic Party into Northern and Southern wings. The Democrats retained control of the Illinois legislature, and Douglas thus retained his seat in the U.S. Senate (at that time United States Senators were elected by the state legislatures, not by popular vote); however, Lincoln's national profile was greatly raised, paving the way for his election as president of the United States two years later.
In The Rise of American Civilization (1927), Charles and Mary Beard argue that slavery was not so much a social or cultural institution as an economic one (a labor system). The Beards cited inherent conflicts between Northeastern finance, manufacturing, and commerce and Southern plantations, which competed to control the federal government so as to protect their own interests. According to the economic determinists of the era, both groups used arguments over slavery and states' rights as a cover.
Recent historians have rejected the Beardian thesis. But their economic determinism has influenced subsequent historians in important ways. Modernization theorists, such as Raimondo Luraghi, have argued that as the Industrial Revolution was expanding on a worldwide scale, the days of wrath were coming for a series of agrarian, pre-capitalistic, "backward" societies throughout the world, from the Italian and American South to India. But most American historians point out the South was highly developed and on average about as prosperous as the North.
Panic of 1857 and sectional realignments
A few historians believe that the serious financial panic of 1857 and the economic difficulties leading up to it strengthened the Republican Party and heightened sectional tensions. Before the panic, strong economic growth was being achieved under relatively low tariffs. Hence much of the nation concentrated on growth and prosperity.
The iron and textile industries were facing acute, worsening trouble each year after 1850. By 1854, stocks of iron were accumulating in each world market. Iron prices fell, forcing many American iron mills to shut down.
Republicans urged western farmers and northern manufacturers to blame the depression on the domination of the low-tariff economic policies of southern-controlled Democratic administrations. However the depression revived suspicion of Northeastern banking interests in both the South and the West. Eastern demand for western farm products shifted the West closer to the North. As the "transportation revolution" (canals and railroads) went forward, an increasingly large share and absolute amount of wheat, corn, and other staples of western producers– once difficult to haul across the Appalachians– went to markets in the Northeast. The depression emphasized the value of the western markets for eastern goods and homesteaders who would furnish markets and respectable profits.
Aside from the land issue, economic difficulties strengthened the Republican case for higher tariffs for industries in response to the depression. This issue was important in Pennsylvania and perhaps New Jersey.
Southern response
Meanwhile, many Southerners grumbled over "radical" notions of giving land away to farmers that would "abolitionize" the area. While the ideology of Southern sectionalism was well-developed before the Panic of 1857 by figures like J.D.B. DeBow, the panic helped convince even more cotton barons that they had grown too reliant on Eastern financial interests.
Thomas Prentice Kettell, former editor of the Democratic Review, was another commentator popular in the South who enjoyed a great degree of prominence between 1857 and 1860. In his book Southern Wealth and Northern Profits, Kettell gathered an array of statistics to show that the South produced vast wealth while the North, dependent on Southern raw materials, siphoned off that wealth. Arguing that sectional inequality resulted from the concentration of manufacturing in the North, and from the North's supremacy in communications, transportation, finance, and international trade, Kettell's ideas paralleled old physiocratic doctrines that all profits of manufacturing and trade come out of the land. Political sociologists, such as Barrington Moore, have noted that these forms of romantic nostalgia tend to crop up whenever industrialization takes hold.
Such Southern hostility to the free farmers gave the North an opportunity for an alliance with Western farmers. After the political realignments of 1857–58—manifested by the emerging strength of the Republican Party and their networks of local support nationwide—almost every issue was entangled with the controversy over the expansion of slavery in the West. While questions of tariffs, banking policy, public land, and subsidies to railroads did not always unite all elements in the North and the Northwest against the interests of slaveholders in the South under the pre-1854 party system, they were translated in terms of sectional conflict—with the expansion of slavery in the West involved.
As the depression strengthened the Republican Party, slave holding interests were becoming convinced that the North had aggressive and hostile designs on the Southern way of life. The South was thus increasingly fertile ground for secessionism.
The Republicans' Whig-style personality-driven "hurrah" campaign helped stir hysteria in the slave states upon the emergence of Lincoln and intensify divisive tendencies, while Southern "fire eaters" gave credence to notions of the slave power conspiracy among Republican constituencies in the North and West. New Southern demands to re-open the African slave trade further fueled sectional tensions.
From the early 1840s until the outbreak of the Civil War, the cost of slaves had been rising steadily. Meanwhile, the price of cotton was experiencing market fluctuations typical of raw commodities. After the Panic of 1857, the price of cotton fell while the price of slaves continued its steep rise. At the 1858 Southern commercial convention, William L. Yancey of Alabama called for the reopening of the African slave trade. Only the delegates from the states of the Upper South, who profited from the domestic trade, opposed the reopening of the slave trade since they saw it as a potential form of competition. The convention in 1858 wound up voting to recommend the repeal of all laws against slave imports, despite some reservations.
John Brown and Harpers Ferry (1859)
On October 16, 1859, radical abolitionist John Brown led an attempt to start an armed slave revolt by seizing the U.S. Army arsenal at Harpers Ferry, Virginia (now West Virginia). Brown and twenty followers, both whites (including two of Brown's sons) and blacks (three free blacks, one freedman, and one fugitive slave), planned to seize the armory and use weapons stored there to arm black slaves in order to spark a general uprising by the slave population.
Although the raiders were initially successful in cutting the telegraph line and capturing the armory, they allowed a passing train to continue on to Washington, D.C., where the authorities were alerted to the attack. By October 17 the raiders were surrounded in the armory by the militia and other locals. Robert E. Lee (then a Colonel in the U.S. Army) led a company of U.S. Marines in storming the armory on October 18. Ten of the raiders were killed, including both of Brown's sons; Brown himself, along with a half dozen of his followers, was captured; four of the raiders escaped immediate capture. Six locals were killed and nine injured; the Marines suffered one dead and one injured. The local slave population failed to join in Brown's attack.
Brown was subsequently hanged for treason (against the Commonwealth of Virginia), as were six of his followers. The raid became a cause célèbre in both the North and the South, with Brown vilified by Southerners as a bloodthirsty fanatic, but celebrated by many Northern abolitionists as a martyr to the cause of freedom.
Elections of 1860
Initially, William H. Seward of New York, Salmon P. Chase of Ohio, and Simon Cameron of Pennsylvania, were the leading contenders for the Republican presidential nomination. But Abraham Lincoln, a former one-term House member who gained fame amid the Lincoln-Douglas Debates of 1858, had fewer political opponents within the party and outmaneuvered the other contenders. On May 16, 1860, he received the Republican nomination at their convention in Chicago, Illinois.
The schism in the Democratic Party over the Lecompton Constitution and Douglas' Freeport Doctrine caused Southern "fire-eaters" to oppose front-runner Stephen A. Douglas' bid for the Democratic presidential nomination. Douglas defeated the proslavery Lecompton Constitution for Kansas because the majority of Kansans were antislavery, and Douglas' popular sovereignty doctrine would allow the majority to vote slavery up or down as they chose. Douglas' Freeport Doctrine alleged that the antislavery majority of Kansans could thwart the Dred Scott decision that allowed slavery by withholding legislation for a slave code and other laws needed to protect slavery. As a result, Southern extremists demanded a slave code for the territories, and used this issue to divide the northern and southern wings of the Democratic Party. Southerners left the party and in June nominated John C. Breckinridge, while Northern Democrats supported Douglas. As a result, the Southern planter class lost a considerable measure of sway in national politics. Because of the Democrats' division, the Republican nominee faced a divided opposition. Adding to Lincoln's advantage, ex-Whigs from the border states had earlier formed the Constitutional Union Party, nominating John Bell for President. Thus, party nominees waged regional campaigns. Douglas and Lincoln competed for Northern votes, while Bell, Douglas and Breckinridge competed for Southern votes.
"Vote yourself a farm– vote yourself a tariff" could have been a slogan for the Republicans in 1860. In sum, business was to support the farmers' demands for land (popular also in industrial working-class circles) in return for support for a higher tariff. To an extent, the elections of 1860 bolstered the political power of new social forces unleashed by the Industrial Revolution. In February 1861, after the seven states had departed the Union (four more would depart in April–May 1861; in late April, Maryland was unable to secede because it was put under martial law), Congress had a strong northern majority and passed the Morrill Tariff Act (signed by Buchanan), which increased duties and provided the government with funds needed for the war.
Split in the Democratic Party
The Alabama extremist William Lowndes Yancey's demand for a federal slave code for the territories split the Democratic Party between North and South, which made the election of Lincoln possible. Yancey tried to make his demand for a slave code moderate enough to get Southern support and yet extreme enough to enrage Northerners and split the party. He demanded that the party support a slave code for the territories if later necessary, so that the demand would be conditional enough to win Southern support. His tactic worked, and lower South delegates left the Democratic Convention at Institute Hall in Charleston, South Carolina and walked over to Military Hall. The South Carolina extremist Robert Barnwell Rhett hoped that the lower South would completely break with the Northern Democrats and attend a separate convention at Richmond, Virginia, but lower South delegates gave the national Democrats one last chance at unification by going to the convention at Baltimore, Maryland before the split became permanent. The end result was that John C. Breckinridge became the candidate of the Southern Democrats, and Stephen Douglas became the candidate of the Northern Democrats.
Yancey's previous 1848 attempt at demanding a slave code for the territories was his Alabama Platform, which was in response to the Northern Wilmot Proviso attempt at banning slavery in territories conquered from Mexico. Both the Alabama Platform and the Wilmot Proviso failed, but Yancey learned to be less overtly radical in order to get more support. Southerners thought they were merely demanding equality, in that they wanted Southern property in slaves to get the same (or more) protection as Northern forms of property.
Southern secession
With the emergence of the Republicans as the nation's first major sectional party by the mid-1850s, politics became the stage on which sectional tensions were played out. Although much of the West – the focal point of sectional tensions – was unfit for cotton cultivation, Southern secessionists read the political fallout as a sign that their power in national politics was rapidly weakening. Before then, the slave system had been buttressed to an extent by the Democratic Party, which was increasingly seen as representing a more pro-Southern position that unfairly permitted Southerners to prevail in the nation's territories and to dominate national policy before the Civil War. But the Democrats suffered a significant reverse in the electoral realignment of the mid-1850s. The election of 1860 was a critical election, one that marked a stark change in existing patterns of party loyalties among groups of voters; Abraham Lincoln's election was a watershed in the balance of power of competing national and parochial interests and affiliations.
Once the election returns were certain, a special South Carolina convention declared "that the Union now subsisting between South Carolina and other states under the name of the 'United States of America' is hereby dissolved", heralding the secession of six more cotton states by February and the formation of an independent nation, the Confederate States of America. Lipset (1960) examined the secessionist vote in each Southern state in 1860–61. In each state he divided the counties into those with a high, medium, or low proportion of slaves. He found that in the 181 high-slavery counties, the vote was 72% for secession; in the 205 low-slavery counties, the vote was only 37% for secession; and in the 153 middle counties, the vote for secession was in the middle, at 60%. Both the outgoing Buchanan administration and the incoming Lincoln administration refused to recognize the legality of secession or the legitimacy of the Confederacy. After Lincoln called for troops, four more slave states of the Upper South seceded.
Disputes over the route of a proposed transcontinental railroad affected the timing of the Kansas–Nebraska Act. The timing of the completion of a railroad from Georgia to South Carolina also was important, in that it allowed influential Georgians to declare their support for secession in South Carolina at a crucial moment. South Carolina secessionists feared that if they seceded first, they would be as isolated as they were during the Nullification Crisis. Support from Georgians was quickly followed by support for secession in the same South Carolina state legislature that had previously preferred a cooperationist approach, as opposed to separate state secession.
The Totten system of forts (including forts Sumter and Pickens) designed for coastal defense encouraged Major Robert Anderson to move federal troops from Fort Moultrie to the more easily defended Fort Sumter in Charleston harbor, South Carolina. Likewise, Lieutenant Adam Slemmer moved U.S. troops from Fort Barrancas to the more easily defended Fort Pickens in Florida. These troop movements were defensive from the Northern point of view, and acts of aggression from the Southern point of view. Also, an attempt to resupply Fort Sumter via the ship Star of the West was seen by secessionists as an attack on a Southern-owned fort, and from the Northern point of view as an attempt to defend U.S. property.
The tariff issue is greatly exaggerated by Lost Cause historians. The tariff had been written and approved by the South, so it was mostly Northerners (especially in Pennsylvania) who complained about the low rates; some Southerners feared that eventually the North would have enough control that it could raise the tariff at will.
As for states' rights, while a states' right of revolution mentioned in the Declaration of Independence was based on the inalienable equal rights of man, secessionists believed in a modified version of states' rights that was safe for slavery.
These issues were especially important in the lower South, where 47 percent of the population were slaves. The upper South, where 32 percent of the population were slaves, considered the Fort Sumter crisis—especially Lincoln's call for troops to march south to recapture it—a cause for secession. The northernmost border slave states, where 13 percent of the population were slaves, did not secede.
Fort Sumter
When South Carolina seceded in December 1860, Major Robert Anderson, a pro-slavery former slave-owner from Kentucky, remained loyal to the Union. He was the commanding officer of United States Army forces in Charleston, South Carolina—the last remaining important Union post in the Deep South. Acting without orders, he moved his small garrison from Fort Moultrie, which was indefensible, to the more modern, more defensible Fort Sumter in the middle of Charleston Harbor. South Carolina leaders cried betrayal, while the North celebrated with enormous excitement at this show of defiance against secessionism. In February 1861 the Confederate States of America was formed and took charge. Jefferson Davis, the Confederate President, ordered that the fort be captured. The artillery attack was commanded by Brig. Gen. P. G. T. Beauregard, who had been Anderson's student at West Point. The attack began April 12, 1861, and continued until Anderson, badly outnumbered and outgunned, surrendered the fort on April 14. The battle began the American Civil War, as an overwhelming demand for war swept both the North and the South, with only Kentucky attempting to remain neutral.
The opening of the Civil War, as well as the modern meaning of the American flag, according to Adam Goodheart (2011), was forged in December 1860, when Anderson, acting without orders, moved the American garrison from Fort Moultrie to Fort Sumter, in Charleston Harbor, in defiance of the overwhelming power of the new Confederate States of America. Goodheart argues this was the opening move of the Civil War, and the flag was used throughout the North to symbolize American nationalism and rejection of secessionism.
- Before that day, the flag had served mostly as a military ensign or a convenient marking of American territory, flown from forts, embassies, and ships, and displayed on special occasions like the Fourth of July. But in the weeks after Major Anderson's surprising stand, it became something different. Suddenly the Stars and Stripes flew – as it does today, and especially as it did after September 11 – from houses, from storefronts, from churches; above the village greens and college quads. For the first time American flags were mass-produced rather than individually stitched and even so, manufacturers could not keep up with demand. As the long winter of 1861 turned into spring, that old flag meant something new. The abstraction of the Union cause was transfigured into a physical thing: strips of cloth that millions of people would fight for, and many thousands die for.
Onset of the Civil War and the question of compromise
Abraham Lincoln's rejection of the Crittenden Compromise, the failure to secure the ratification of the Corwin amendment in 1861, and the inability of the Washington Peace Conference of 1861 to provide an effective alternative to Crittenden and Corwin came together to prevent a compromise that is still debated by Civil War historians. Even as the war was going on, William Seward and James Buchanan were outlining a debate over the question of inevitability that would continue among historians.
Two competing explanations of the sectional tensions inflaming the nation emerged even before the war. Buchanan believed the sectional hostility to be the accidental, unnecessary work of self-interested or fanatical agitators. He also singled out the "fanaticism" of the Republican Party. Seward, on the other hand, believed there to be an irrepressible conflict between opposing and enduring forces.
The irrepressible conflict argument was the first to dominate historical discussion. In the first decades after the fighting, histories of the Civil War generally reflected the views of Northerners who had participated in the conflict. The war appeared to be a stark moral conflict in which the South was to blame, a conflict that arose as a result of the designs of slave power. Henry Wilson's History of The Rise and Fall of the Slave Power in America (1872–1877) is the foremost representative of this moral interpretation, which argued that Northerners had fought to preserve the union against the aggressive designs of "slave power". Later, in his seven-volume History of the United States from the Compromise of 1850 to the Civil War, (1893–1900), James Ford Rhodes identified slavery as the central—and virtually only—cause of the Civil War. The North and South had reached positions on the issue of slavery that were both irreconcilable and unalterable. The conflict had become inevitable.
But the idea that the war was avoidable did not gain ground among historians until the 1920s, when the "revisionists" began to offer new accounts of the prologue to the conflict. Revisionist historians, such as James G. Randall and Avery Craven, saw in the social and economic systems of the South no differences so fundamental as to require a war. Randall blamed the ineptitude of a "blundering generation" of leaders. He also saw slavery as essentially a benign institution, crumbling in the presence of 19th century tendencies. Craven, the other leading revisionist, placed more emphasis on the issue of slavery than Randall but argued roughly the same points. In The Coming of the Civil War (1942), Craven argued that slave laborers were not much worse off than Northern workers, that the institution was already on the road to ultimate extinction, and that the war could have been averted by skillful and responsible leaders in the tradition of Congressional statesmen Henry Clay and Daniel Webster. Two of the most important figures in U.S. politics in the first half of the 19th century, Clay and Webster, arguably in contrast to the 1850s generation of leaders, shared a predisposition to compromises marked by a passionate patriotic devotion to the Union.
But it is possible that the politicians of the 1850s were not inept. More recent studies have kept elements of the revisionist interpretation alive, emphasizing the role of political agitation (the efforts of Democratic politicians of the South and Republican politicians in the North to keep the sectional conflict at the center of the political debate). David Herbert Donald argued in 1960 that the politicians of the 1850s were not unusually inept but that they were operating in a society in which traditional restraints were being eroded in the face of the rapid extension of democracy. The stability of the two-party system kept the union together, but would collapse in the 1850s, thus reinforcing, rather than suppressing, sectional conflict.
Reinforcing this interpretation, political sociologists have pointed out that the stable functioning of a political democracy requires a setting in which parties represent broad coalitions of varying interests, and that peaceful resolution of social conflicts takes place most easily when the major parties share fundamental values. Before the 1850s, the second American two party system (competition between the Democrats and the Whigs) conformed to this pattern, largely because sectional ideologies and issues were kept out of politics to maintain cross-regional networks of political alliances. However, in the 1840s and 1850s, ideology made its way into the heart of the political system despite the best efforts of the conservative Whig Party and the Democratic Party to keep it out.
Contemporaneous explanations
Confederate Vice President Alexander Stephens, in his "Cornerstone Speech" at Savannah, Georgia, on March 21, 1861, declared: "The new [Confederate] Constitution has put at rest forever all the agitating questions relating to our peculiar institutions—African slavery as it exists among us—the proper status of the negro in our form of civilization. This was the immediate cause of the late rupture and present revolution. ... (Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error.... Our new government is founded upon exactly the opposite idea; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery – subordination to the superior race – is his natural and normal condition."
In July 1863, as decisive campaigns were fought at Gettysburg and Vicksburg, Republican senator Charles Sumner re-dedicated his speech The Barbarism of Slavery and said that desire to preserve slavery was the sole cause of the war:
"[T]here are two apparent rudiments to this war. One is Slavery and the other is State Rights. But the latter is only a cover for the former. If Slavery were out of the way there would be no trouble from State Rights.
The war, then, is for Slavery, and nothing else. It is an insane attempt to vindicate by arms the lordship which had been already asserted in debate. With mad-cap audacity it seeks to install this Barbarism as the truest Civilization. Slavery is declared to be the 'corner-stone' of the new edifice."
Lincoln's war goals were reactions to the war, as opposed to causes. Abraham Lincoln explained the nationalist goal as the preservation of the Union on August 22, 1862, one month before his preliminary Emancipation Proclamation:
"I would save the Union. I would save it the shortest way under the Constitution. The sooner the national authority can be restored; the nearer the Union will be 'the Union as it was.' ... My paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that.... I have here stated my purpose according to my view of official duty; and I intend no modification of my oft-expressed personal wish that all men everywhere could be free."
On March 4, 1865, Lincoln said in his Second Inaugural Address that slavery was the cause of the War:
"One-eighth of the whole population were colored slaves, not distributed generally over the Union, but localized in the southern part of it. These slaves constituted a peculiar and powerful interest. All knew that this interest was somehow the cause of the war. To strengthen, perpetuate, and extend this interest was the object for which the insurgents would rend the Union even by war, while the Government claimed no right to do more than to restrict the territorial enlargement of it."
See also
- American Civil War
- Compensated Emancipation
- Conclusion of the American Civil War
- Issues of the American Civil War
- Slavery in the United States
- Timeline of events leading to the American Civil War
- Elizabeth R. Varon, Bruce Levine, Marc Egnal, and Michael Holt at a plenary session of the organization of American Historians, March 17, 2011, reported by David A. Walsh "Highlights from the 2011 Annual Meeting of the Organization of American Historians in Houston, Texas" HNN online
- David Potter, The Impending Crisis, pages 42–50
- The Mason-Dixon Line and the Ohio River were key boundaries.
- Fehrenbacher pp.15–17. Fehrenbacher wrote, "As a racial caste system, slavery was the most distinctive element in the southern social order. The slave production of staple crops dominated southern agriculture and eminently suited the development of a national market economy."
- Fehrenbacher pp. 16–18
- Goldstone p. 13
- McDougall p. 318
- Forbes p. 4
- Mason pp. 3–4
- Freehling p.144
- Freehling p. 149. In the House the votes for the Tallmadge amendments in the North were 86–10 and 80-14 in favor, while in the South the vote to oppose was 66–1 and 64-2.
- Missouri Compromise
- Forbes pp. 6–7
- Mason p. 8
- Leah S. Glaser, "United States Expansion, 1800–1860"
- Richard J. Ellis, Review of The Shaping of American Liberalism: The Debates over Ratification, Nullification, and Slavery. by David F. Ericson, William and Mary Quarterly, Vol. 51, No. 4 (1994), pp. 826–829
- John Tyler, Life Before the Presidency
- Jane H. Pease, William H. Pease, "The Economics and Politics of Charleston's Nullification Crisis", Journal of Southern History, Vol. 47, No. 3 (1981), pp. 335–362
- Remini, Andrew Jackson, v2 pp. 136–137. Niven pg. 135–137. Freehling, Prelude to Civil War pg 143
- Craven pg.65. Niven pg. 135–137. Freehling, Prelude to Civil War pg 143
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights, and the Nullification Crisis (1987), page 193; Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816–1836. (1965), page 257
- Ellis p. 193. Ellis further notes that “Calhoun and the nullifiers were not the first southerners to link slavery with states’ rights. At various points in their careers, John Taylor, John Randolph, and Nathaniel Macon had warned that giving too much power to the federal government, especially on such an open-ended issue as internal improvement, could ultimately provide it with the power to emancipate slaves against their owners’ wishes.”
- Jon Meacham (2009), American Lion: Andrew Jackson in the White House, p. 247; Correspondence of Andrew Jackson, Vol. V, p. 72.
- Varon (2008) p. 109. Wilentz (2005) p. 451
- Miller (1995) pp. 144–146
- Miller (1995) pp. 209–210
- Wilentz (2005) pp. 470–472
- Miller, 112
- Miller, pp. 476, 479–481
- Huston p. 41. Huston writes, "...on at least three matters southerners were united. First, slaves were property. Second, the sanctity of southerners' property rights in slaves was beyond the questioning of anyone inside or outside of the South. Third, slavery was the only means of adjusting social relations properly between Europeans and Africans."
- Brinkley, Alan (1986). American History: A Survey. New York: McGraw-Hill. p. 328.
- Moore, Barrington (1966). Social Origins of Dictatorship and Democracy. New York: Beacon Press. p. 117.
- North, Douglas C. (1961). The Economic Growth of the United States 1790–1860. Englewood Cliffs. p. 130.
- Elizabeth Fox-Genovese and Eugene D. Genovese, Slavery in White and Black: Class and Race in the Southern Slaveholders' New World Order (2008)
- James M. McPherson, "Antebellum Southern Exceptionalism: A New Look at an Old Question", Civil War History 29 (September 1983)
- "Conflict and Collaboration: Yeomen, Slaveholders, and Politics in the Antebellum South", Social History 10 (October 1985): 273–98. quote at p. 297.
- Thornton, Politics and Power in a Slave Society: Alabama, 1800–1860 (Louisiana State University Press, 1978)
- McPherson (2007) pp.4–7. James M. McPherson wrote in referring to the Progressive historians, the Vanderbilt agrarians, and revisionists writing in the 1940s, “While one or more of these interpretations remain popular among the Sons of Confederate Veterans and other Southern heritage groups, few historians now subscribe to them.”
- Craig in Woodworth, ed. The American Civil War: A Handbook of Literature and Research (1996), p.505.
- Donald 2001 pp 134–38
- Huston pp. 24–25. Huston lists other estimates of the value of slaves; James D. B. De Bow puts it at $2 billion in 1850, while in 1858 Governor James Pettus of Mississippi estimated the value at $2.6 billion in 1858.
- Huston p. 25
- Soil Exhaustion as a Factor in the Agricultural History of Virginia and Maryland, 1606–1860
- Encyclopedia of American Foreign Policy – A-D
- Woodworth, ed. The American Civil War: A Handbook of Literature and Research (1996), 145 151 505 512 554 557 684; Richard Hofstadter, The Progressive Historians: Turner, Beard, Parrington (1969); for one dissenter see Marc Egnal. "The Beards Were Right: Parties in the North, 1840–1860". Civil War History 47, no. 1. (2001): 30–56.
- Kenneth M. Stampp, The Imperiled Union: Essays on the Background of the Civil War (1981) p 198
- Also from Kenneth M. Stampp, The Imperiled Union p 198
Most historians... now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united. Beard oversimplified the controversies relating to federal economic policy, for neither section unanimously supported or opposed measures such as the protective tariff, appropriations for internal improvements, or the creation of a national banking system.... During the 1850s, Federal economic policy gave no substantial cause for southern disaffection, for policy was largely determined by pro-Southern Congresses and administrations. Finally, the characteristic posture of the conservative northeastern business community was far from anti-Southern. Most merchants, bankers, and manufacturers were outspoken in their hostility to antislavery agitation and eager for sectional compromise in order to maintain their profitable business connections with the South. The conclusion seems inescapable that if economic differences, real though they were, had been all that troubled relations between North and South, there would be no substantial basis for the idea of an irrepressible conflict.
- James M. McPherson, Antebellum Southern Exceptionalism: A New Look at an Old Question Civil War History – Volume 50, Number 4, December 2004, page 421
- Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War", The American Historical Review Vol. 44, No. 1 (1938), pp. 50–55 full text in JSTOR
- John Calhoun, Slavery a Positive Good, February 6, 1837
- Noll, Mark A. (2002). America's God: From Jonathan Edwards to Abraham Lincoln. Oxford University Press. p. 640.
- Noll, Mark A. (2006). The Civil War as a Theological Crisis. UNC Press. p. 216.
- Noll, Mark A. (2002). The US Civil War as a Theological War: Confederate Christian Nationalism and the League of the South. Oxford University Press. p. 640.
- Hull, William E. (February 2003). "Learning the Lessons of Slavery". Christian Ethics Today 9 (43). Retrieved 2007-12-19.
- Methodist Episcopal Church, South
- Presbyterian Church in the United States
- Gaustad, Edwin S. (1982). A Documentary History of Religion in America to the Civil War. Wm. B. Eerdmans Publishing Co. pp. 491–502.
- Johnson, Paul (1976). History of Christianity. Simon & Schuster. p. 438.
- Noll, Mark A. (2002). America's God: From Jonathan Edwards to Abraham Lincoln. Oxford University Press. pp. 399–400.
- Miller, Randall M.; Stout, Harry S.; Wilson, Charles Reagan, eds. (1998). "The Bible and Slavery". Religion and the American Civil War. Oxford University Press. p. 62.
- Bestor, 1964, pp. 10–11
- McPherson, 2007, p. 14.
- Stampp, pp. 190–193.
- Bestor, 1964, p. 11.
- Krannawitter, 2008, pp. 49–50.
- McPherson, 2007, pp. 13–14.
- Bestor, 1964, pp. 17–18.
- Guelzo, pp. 21–22.
- Bestor, 1964, p. 15.
- Miller, 2008, p. 153.
- McPherson, 2007, p. 3.
- Bestor, 1964, p. 19.
- McPherson, 2007, p. 16.
- Bestor, 1964, pp. 19–20.
- Bestor, 1964, p. 21
- Bestor, 1964, p. 20
- Bestor, 1964, p. 20.
- Russell, 1966, p. 468-469
- Bestor, 1964, p. 23
- Russell, 1966, p. 470
- Bestor, 1964, p. 24
- Bestor, 1964, pp. 23-24
- Holt, 2004, pp. 34–35.
- McPherson, 2007, p. 7.
- Krannawitter, 2008, p. 232.
- Bestor, 1964, pp. 24–25.
- "The Amistad Case". National Portrait Gallery. Retrieved 2007-10-16.
- McPherson, Battle Cry p. 8; James Brewer Stewart, Holy Warriors: The Abolitionists and American Slavery (1976); Pressly, 270ff
- Wendell Phillips, "No Union With Slaveholders", January 15, 1845, in Louis Ruchames, ed. The Abolitionists (1963), p.196.
- Mason I Lowance, Against Slavery: An Abolitionist Reader, (2000), page 26
- "Abolitionist William Lloyd Garrison Admits of No Compromise with the Evil of Slavery". Retrieved 2007-10-16.
- Alexander Stephen's Cornerstone Speech, Savannah; Georgia, March 21, 1861
- Stampp, The Causes of the Civil War, page 59
- Schlessinger quotes from an essay “The State Rights Fetish” excerpted in Stampp p. 70
- Schlessinger in Stampp pp. 68–69
- McDonald p. 143
- Kenneth M. Stampp, The Causes of the Civil War, p. 14
- Nevins, Ordeal of the Union: Fruits of Manifest Destiny 1847–1852, p. 155
- Donald, Baker, and Holt, p.117.
- When arguing for the equality of states, Jefferson Davis said, "Who has been in advance of him in the fiery charge on the rights of the States, and in assuming to the Federal Government the power to crush and to coerce them? Even to-day he has repeated his doctrines. He tells us this is a Government which we will learn is not merely a Government of the States, but a Government of each individual of the people of the United States". – Jefferson Davis' reply in the Senate to William H. Seward, Senate Chamber, U.S. Capitol, February 29, 1860, From The Papers of Jefferson Davis, Volume 6, pp. 277–84.
- When arguing against equality of individuals, Davis said, "We recognize the fact of the inferiority stamped upon that race of men by the Creator, and from the cradle to the grave, our Government, as a civil institution, marks that inferiority". – Jefferson Davis' reply in the Senate to William H. Seward, Senate Chamber, U.S. Capitol, February 29, 1860, – From The Papers of Jefferson Davis, Volume 6, pp. 277–84. Transcribed from the Congressional Globe, 36th Congress, 1st Session, pp. 916–18.
- Jefferson Davis' Second Inaugural Address, Virginia Capitol, Richmond, February 22, 1862, Transcribed from Dunbar Rowland, ed., Jefferson Davis, Constitutionalist, Volume 5, pp. 198–203. Summarized in The Papers of Jefferson Davis, Volume 8, p. 55.
- Lawrence Keitt, Congressman from South Carolina, in a speech to the House on January 25, 1860: Congressional Globe.
- Stampp, The Causes of the Civil War, pages 63–65
- William C. Davis, Look Away, pages 97–98
- David Potter, The Impending Crisis, page 275
- First Lincoln Douglas Debate at Ottawa, Illinois August 21, 1858
- Bertram Wyatt-Brown, Southern Honor: Ethics and Behavior in the Old South (1982) pp 22–23, 363
- Christopher J. Olsen (2002). Political Culture and Secession in Mississippi: Masculinity, Honor, and the Antiparty Tradition, 1830–1860. Oxford University Press. p. 237. footnote 33
- Lacy Ford, ed. (2011). A Companion to the Civil War and Reconstruction. Wiley. p. 28.
- Michael William Pfau, "Time, Tropes, and Textuality: Reading Republicanism in Charles Sumner's 'Crime Against Kansas'", Rhetoric & Public Affairs vol 6 #3 (2003) 385–413, quote on p. 393 online in Project MUSE
- In modern terms Sumner accused Butler of being a "pimp who attempted to introduce the whore, slavery, into Kansas" says Judith N. McArthur; Orville Vernon Burton (1996). "A Gentleman and an Officer": A Military and Social History of James B. Griffin's Civil War. Oxford U.P. p. 40.
- Williamjames Hoffer, The Caning of Charles Sumner: Honor, Idealism, and the Origins of the Civil War (2010) p. 62
- William E. Gienapp, "The Crime Against Sumner: The Caning of Charles Sumner and the Rise of the Republican Party," Civil War History (1979) 25#3 pp. 218-245 doi:10.1353/cwh.1979.0005
- Donald, David; Randal, J.G. (1961). The Civil War and Reconstruction. Boston: D.C. Health and Company. p. 79.
- Nevins, Allan (1947). Ordeal of the Union, vol. 3. New York: Charles Scribner's Sons. p. 218.
- Moore, Barrington, p.122.
- William W, Freehling, The Road to Disunion: Secessionists Triumphant 1854–1861, pages 271–341
- Roy Nichols, The Disruption of American Democracy: A History of the Political Crisis That Led Up To The Civil War (1949)
- Seymour Martin Lipset, Political Man: The Social Bases of Politics (Doubleday, 1960) p. 349.
- Maury Klein, Days of Defiance: Sumter, Secession, and the Coming of the Civil War (1999)
- David M. Potter, The Impending Crisis, pages 14–150
- William W. Freehling, The Road to Disunion, Secessionists Triumphant: 1854–1861, pages 345–516
- Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War", American Historical Review Vol. 44, No. 1 (October 1938), pp. 50–55 in JSTOR
- Daniel Crofts, Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Adam Goodheart, 1861: The Civil War Awakening (2011) ch 2–5
- Adam Goodheart, "Prologue", in 1861: The Civil War Awakening (2011)
- Letter to Horace Greeley, August 22, 1862
- Craven, Avery. The Coming of the Civil War (1942) ISBN 0-226-11894-0
- Donald, David Herbert, Baker, Jean Harvey, and Holt, Michael F. The Civil War and Reconstruction. (2001)
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights and the Nullification Crisis. (1987)
- Fehrenbacher, Don E. The Slaveholding Republic: An Account of the United States Government's Relations to Slavery. (2001) ISBN 1-195-14177-6
- Forbes, Robert Pierce. The Missouri Compromise and Its Aftermath: Slavery and the Meaning of America. (2007) ISBN 978-0-8078-3105-2
- Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816–1836. (1965) ISBN 0-19-507681-8
- Freehling, William W. The Road to Disunion: Secessionists at Bay 1776–1854. (1990) ISBN 0-19-505814-3
- Freehling, William W. and Craig M. Simpson, eds. Secession Debated: Georgia's Showdown in 1860 (1992), speeches
- Hesseltine; William B. ed. The Tragic Conflict: The Civil War and Reconstruction (1962), primary documents
- Huston, James L. Calculating the Value of the Union: Slavery, Property Rights, and the Economic Origins of the Civil War. (2003) ISBN 0-8078-2804-1
- Mason, Matthew. Slavery and Politics in the Early American Republic. (2006) ISBN 13:978-0-8078-3049-9
- McDonald, Forrest. States' Rights and the Union: Imperium in Imperio, 1776–1876. (2000)
- McPherson, James M. This Mighty Scourge: Perspectives on the Civil War. (2007)
- Miller, William Lee. Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress. (1995) ISBN 0-394-56922-9
- Niven, John. John C. Calhoun and the Price of Union (1988) ISBN 0-8071-1451-0
- Perman, Michael, ed. Major Problems in Civil War & Reconstruction (2nd ed. 1998) primary and secondary sources.
- Remini, Robert V. Andrew Jackson and the Course of American Freedom, 1822–1832,v2 (1981) ISBN 0-06-014844-6
- Stampp, Kenneth, ed. The Causes of the Civil War (3rd ed 1992), primary and secondary sources.
- Varon, Elizabeth R. Disunion: The Coming of the American Civil War, 1789–1859. (2008) ISBN 978-0-8078-3232-5
- Wakelyn; Jon L. ed. Southern Pamphlets on Secession, November 1860 – April 1861 (1996)
- Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln. (2005) ISBN 0-393-05820-4
Further reading
- Ayers, Edward L. What Caused the Civil War? Reflections on the South and Southern History (2005). 222 pp.
- Beale, Howard K., "What Historians Have Said About the Causes of the Civil War", Social Science Research Bulletin 54, 1946.
- Boritt, Gabor S. ed. Why the Civil War Came (1996)
- Childers, Christopher. "Interpreting Popular Sovereignty: A Historiographical Essay", Civil War History Volume 57, Number 1, March 2011 pp. 48–70 in Project MUSE
- Crofts Daniel. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989), pp 353–82 and 457-80
- Etcheson, Nicole. "The Origins of the Civil War", History Compass 2005 #3 (North America)
- Foner, Eric. "The Causes of the American Civil War: Recent Interpretations and New Directions". In Beyond the Civil War Synthesis: Political Essays of the Civil War Era, edited by Robert P. Swierenga, 1975.
- Kornblith, Gary J., "Rethinking the Coming of the Civil War: A Counterfactual Exercise". Journal of American History 90.1 (2003): 80 pars. detailed historiography; online version
- Pressly, Thomas. Americans Interpret Their Civil War (1966), sorts historians into schools of interpretation
- SenGupta, Gunja. “Bleeding Kansas: A Review Essay”. Kansas History 24 (Winter 2001/2002): 318–341.
- Tulloch, Hugh. The Debate On the American Civil War Era (Issues in Historiography) (2000)
- Woodworth, Steven E. ed. The American Civil War: A Handbook of Literature and Research (1996), 750 pages of historiography; see part IV on Causation.
"Needless war" school
- Craven, Avery, The Repressible Conflict, 1830–61 (1939)
- The Coming of the Civil War (1942)
- "The Coming of the War Between the States", Journal of Southern History 2 (August 1936): 30–63; in JSTOR
- Donald, David. "An Excess of Democracy: The Civil War and the Social Process" in David Donald, Lincoln Reconsidered: Essays on the Civil War Era, 2d ed. (New York: Alfred A. Knopf, 1966), 209–35.
- Holt, Michael F. The Political Crisis of the 1850s. (1978) emphasis on political parties and voters
- Randall, James G. "The Blundering Generation", Mississippi Valley Historical Review 27 (June 1940): 3–28 in JSTOR
- James G. Randall. The Civil War and Reconstruction. (1937), survey and statement of "needless war" interpretation
- Pressly, Thomas J. "The Repressible Conflict", chapter 7 of Americans Interpret Their Civil War (Princeton: Princeton University Press, 1954).
- Ramsdell, Charles W. "The Natural Limits of Slavery Expansion", Mississippi Valley Historical Review, 16 (September 1929), 151–71, in JSTOR; says slavery had almost reached its outer limits of growth by 1860, so war was unnecessary to stop further growth. online version without footnotes
Economic causation and modernization
- Beard, Charles, and Mary Beard. The Rise of American Civilization. Two volumes. (1927), says slavery was minor factor
- Luraghi, Raimondo, "The Civil War and the Modernization of American Society: Social Structure and Industrial Revolution in the Old South Before and During the War", Civil War History XVIII (September 1972). in JSTOR
- McPherson, James M. Ordeal by Fire: the Civil War and Reconstruction. (1982), uses modernization interpretation.
- Moore, Barrington. Social Origins of Dictatorship and Democracy. (1966). modernization interpretation
- Thornton, Mark; Ekelund, Robert B. Tariffs, Blockades, and Inflation: The Economics of the Civil War. (2004), stresses fear of future protective tariffs
Nationalism and culture
- Crofts Daniel. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Current, Richard. Lincoln and the First Shot (1963)
- Nevins, Allan, author of most detailed history
- Ordeal of the Union 2 vols. (1947) covers 1850–57.
- The Emergence of Lincoln, 2 vols. (1950) covers 1857–61; does not take strong position on causation
- Olsen, Christopher J. Political Culture and Secession in Mississippi: Masculinity, Honor, and the Antiparty Tradition, 1830–1860 (2000), cultural interpretation
- Potter, David The Impending Crisis 1848–1861. (1976), Pulitzer Prize-winning history emphasizing rise of Southern nationalism
- Potter, David M. Lincoln and His Party in the Secession Crisis (1942).
- Miller, Randall M., Harry S. Stout, and Charles Reagan Wilson, eds. Religion and the American Civil War (1998), essays
Slavery as cause
- Ashworth, John
- Slavery, Capitalism, and Politics in the Antebellum Republic. (1995)
- "Free labor, wage labor, and the slave power: republicanism and the Republican party in the 1850s", in Melvyn Stokes and Stephen Conway (eds), The Market Revolution in America: Social, Political and Religious Expressions, 1800–1880, pp. 128–46. (1996)
- Donald, David et al. The Civil War and Reconstruction (latest edition 2001); 700-page survey
- Fellman, Michael et al. This Terrible War: The Civil War and its Aftermath (2003), 400-page survey
- Foner, Eric
- Free Soil, Free Labor, Free Men: the Ideology of the Republican Party before the Civil War. (1970, 1995) stress on ideology
- Politics and Ideology in the Age of the Civil War. New York: Oxford University Press. (1981)
- Freehling, William W. The Road to Disunion: Secessionists at Bay, 1776–1854 1991., emphasis on slavery
- Gienapp William E. The Origins of the Republican Party, 1852–1856 (1987)
- Manning, Chandra. What This Cruel War Was Over: Soldiers, Slavery, and the Civil War. New York: Vintage Books (2007).
- McPherson, James M. Battle Cry of Freedom: The Civil War Era. (1988), major overview, neoabolitionist emphasis on slavery
- Morrison, Michael. Slavery and the American West: The Eclipse of Manifest Destiny and the Coming of the Civil War (1997)
- Ralph E. Morrow. "The Proslavery Argument Revisited", The Mississippi Valley Historical Review, Vol. 48, No. 1. (June 1961), pp. 79–94. in JSTOR
- Rhodes, James Ford History of the United States from the Compromise of 1850 to the McKinley-Bryan Campaign of 1896 Volume: 1. (1920), highly detailed narrative 1850–56. vol 2 1856–60; emphasis on slavery
- Schlesinger, Arthur Jr. "The Causes of the Civil War" (1949) reprinted in his The Politics of Hope (1963); reintroduced new emphasis on slavery
- Stampp, Kenneth M. America in 1857: A Nation on the Brink (1990)
- Stampp, Kenneth M. And the War Came: The North and the Secession Crisis, 1860–1861 (1950).
- Civil War and Reconstruction: Jensen's Guide to WWW Resources
- Report of the Brown University Steering Committee on Slavery and Justice
- State by state popular vote for president in 1860 election
- Tulane course – article on 1860 election
- Onuf, Peter. "Making Two Nations: The Origins of the Civil War" 2003 speech
- The Gilder Lehrman Institute of American History
- CivilWar.com Many source materials, including states' secession declarations.
- Causes of the Civil War Collection of primary documents
- Declarations of Causes of Seceding States
- Alexander H. Stephens' Cornerstone Address
- An entry from Alexander Stephens' diary, dated 1866, reflecting on the origins of the Civil War.
- The Arguments of the Constitutional Unionists in 1850–51
- Shmoop US History: Causes of the Civil War – study guide, dates, trivia, multimedia, teachers' guide
- Booknotes interview with Stephen B. Oates on The Approaching Fury: Voices of the Storm, 1820–1861, April 27, 1997. | http://en.wikipedia.org/wiki/Origins_of_the_American_Civil_War | 13 |
54 | Recall 1. In the past we put quadratic equations in the form
y = ax² + bx + c.
This form makes the equation look tidy and readable. For the purposes of graphing, we would rather alter this form slightly. Recall that we represented lines in standard form as
y = mx + b.
Then we were able to read the critical information about slope and y-intercept directly from the form of the equation. The slope is m and the y-intercept is b. We wish to do the same with the graphs of quadratic functions. We will use several ideas that were touched on in past hours to provide a scheme for knowing how to graph a quadratic function.
Recall 2 (The standard parabola centered at the origin.)
If you sketch the graph of the quadratic equation y = x², you find a curve like the one described below. This curve is called a parabola. The first thing to notice about a parabola is its symmetry. If you flip this parabola about the y-axis, you get the exact same parabola. Every parabola has such a line of symmetry. In the case of the parabola y = x², the y-axis is the line of symmetry. The point at which the parabola meets its line of symmetry is called the vertex of the parabola.
Now, the basic shape and orientation of the graph will be determined by the coefficients of the quadratic y = ax² + bx + c. We take one possibility at a time. Suppose that the coefficients b and c are both 0. In this case our parabola is given by the equation

y = ax².

The question should now be: What role does a play in orienting or shaping the graph? To find out, let a take a few different values and compare the values of a with the shapes of the graphs that we get. Two things stand out:

If the coefficient a is negative, the parabola faces downward.

If the coefficient a is positive, then the parabola faces upward.

The absolute value of the coefficient determines the sharpness of the curve at the vertex--the higher the absolute value, the steeper and sharper the parabola.
Recall 3. (The general case y = ax² + bx + c.)

If we can put the equation into the form

y = a(x - h)² + k,

where h and k are specific real numbers, then we know the orientation and position of the parabola that is the graph of the quadratic equation. It turns out that this parabola has a vertex at the point (h, k). Again, the steepness of the parabola, or the sharpness at the vertex, is determined by the coefficient a--the larger its absolute value, the steeper the parabola.

For example, the curve

y = 2(x - 5)² + 7

is a parabola opening upward with vertex at (5, 7). The line of symmetry in this case is the vertical line x = 5.
Recall 4. (Changing y = ax² + bx + c to the form y = a(x - h)² + k)

An equation of the form y = ax² + bx + c can be changed to the form y = a(x - h)² + k, if the proper values of h and k can be found. The process of finding such h and k is called completing the square.

Here's how it works. Start with y = ax² + bx + c and take a factor of a out from the first two terms of the right-hand side. (Remember a is different from 0, otherwise we would have the linear equation y = bx + c.) We get

y = a(x² + (b/a)x) + c = a(x + b/(2a))² + c - a(b/(2a))².

Notice that this is in the form y = a(x - h)² + k, with h = -(b/(2a)) and k = c - a(b/(2a))².

This means that our parabola has a vertex at (-(b/(2a)), c - a(b/(2a))²).
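To make these formulas concrete, here is a small Python sketch (an addition to the lesson, not part of the original page) that computes the vertex (h, k) straight from the coefficients a, b, c and checks the answer by evaluating the function at x = h. The names vertex and f are just illustrative choices.

```python
def vertex(a, b, c):
    """Return the vertex (h, k) of y = a*x**2 + b*x + c, assuming a != 0."""
    h = -b / (2 * a)                    # x-coordinate of the vertex
    k = c - a * (b / (2 * a)) ** 2      # y-coordinate of the vertex
    return h, k

def f(a, b, c, x):
    """Evaluate the quadratic at x."""
    return a * x ** 2 + b * x + c

# Sanity check: the y-value at x = h should equal k.
h, k = vertex(2, -4, 5)
assert abs(f(2, -4, 5, h) - k) < 1e-12
print(h, k)   # 1.0 3.0
```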
Example 1. Find the axis of symmetry and the vertex of the graph of the function

f(x) = x² - 6x + 11.

Solution: List the coefficients a = 1, b = -6, and c = 11. By completing the square for the equation y = x² - 6x + 11, we find

y = (x² - 6x) + 11
= (x² - 6x + 9) + 11 - 9
= (x - 3)² + 11 - 9
= (x - 3)² + 2.

Therefore, the parabola faces upward (a is positive) and it has a vertex at (3, 2) with a vertical line of symmetry x = 3.
Example 2. Find the axis of symmetry and the vertex of the graph of the function

f(x) = -2x² + 16x - 35.

Solution: List the coefficients a = -2, b = 16, and c = -35. By completing the square for the equation y = -2x² + 16x - 35, we find

y = -2(x² - 8x) - 35
= -2(x² - 8x + 16) - 35 + 2(16)
= -2(x - 4)² - 35 + 32
= -2(x - 4)² - 3.

Therefore, the parabola faces downward (a is negative) and it has a vertex at (4, -3) with a vertical line of symmetry x = 4. Notice that this parabola is steeper at the vertex than the one in the previous example, because |a| = 2.
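As a quick numerical check of the two worked examples (again an illustrative aside, not from the original page), the vertex formulas h = -b/(2a) and k = c - a(b/(2a))² reproduce the vertices found by completing the square:

```python
# Coefficients (a, b, c) for Example 1 and Example 2.
for a, b, c in [(1, -6, 11), (-2, 16, -35)]:
    h = -b / (2 * a)
    k = c - a * (b / (2 * a)) ** 2
    print((h, k))   # (3.0, 2.0) then (4.0, -3.0)
```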
A quadratic function has the form f(x) = ax² + bx + c. A quadratic function graphs into a parabola, a curve shaped like one of the McDonald's arches. Every quadratic function has a vertex and a vertical axis of symmetry, meaning the graph is the same on either side of a vertical line. The axis of symmetry goes through the vertex point. The vertex point is either the maximum or the minimum point of the parabola.
|a > 0||graph opens up and the vertex is a minimum|
|a < 0||graph opens down and the vertex is a maximum|
|b² - 4ac > 0||graph has 2 x-intercepts|
|b² - 4ac = 0||graph has 1 x-intercept|
|b² - 4ac < 0||graph has no x-intercepts|
|Vertex point||at x = -b/(2a); to find y, substitute this value for x in the expression of the function|
|x-intercepts||at x = (-b ± √(b² - 4ac))/(2a); none if b² - 4ac < 0|
|y-intercept||the c value in ax² + bx + c|
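The rules in the table can be bundled into a short, illustrative Python helper (an addition, with a made-up function name describe) that reports the opening direction, the number of x-intercepts from the discriminant b² - 4ac, and the y-intercept:

```python
def describe(a, b, c):
    """Summarize the graph of y = a*x**2 + b*x + c using the rules in the table above."""
    direction = "opens up (vertex is a minimum)" if a > 0 else "opens down (vertex is a maximum)"
    disc = b ** 2 - 4 * a * c
    if disc > 0:
        intercepts = "2 x-intercepts"
    elif disc == 0:
        intercepts = "1 x-intercept"
    else:
        intercepts = "no x-intercepts"
    return direction, intercepts, f"y-intercept at (0, {c})"

print(describe(1, -6, 11))   # opens up; no x-intercepts (disc = 36 - 44 = -8); y-intercept at (0, 11)
```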
1. Plot the points (x, f(x)) at x = -4, -3, -2, -1, 0, 1, 2, 3, and 4 for the function f(x) = x² - 6x + 2. | http://www.marlboro.edu/academics/study/mathematics/courses/graphquadratic | 13
64 | Linear Algebra/Systems of Linear Equations
Systems of Linear Equations
An important problem in Linear Algebra is solving a system of m linear equations in n variables over a field F:

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm

where all of the coefficients aij and the constants bi are elements of the field.
An example of a system of linear equations is a system of two equations in two variables, like the ones we will consider below. A solution of the system is a list (s1, s2, ..., sn) of numbers that makes each equation a true statement when the values s1, ..., sn are substituted for x1, ..., xn respectively.
The set of all possible solutions is called the solution set of the linear system. Two linear systems are called equivalent if they have the same solution set. That is, each solution of the first system is a solution to the second system, and vice versa.
Finding the solution set of a system of two linear equations in two variables is easy because it is the same as finding the intersection of two lines. The graphs of the two equations are lines, which we can denote by L1 and L2. A pair of numbers (x1, x2) satisfies both equations in the system if and only if the point (x1, x2) lies on both L1 and L2. If the two lines cross, the solution set consists of the single point of intersection.
Of course, two lines do not have to intersect in a single point; they could be parallel, or they could coincide and "intersect" at every point on the line.
A system of linear equations is said to be consistent if it has either one solution or infinitely many solutions, and a system is said to be inconsistent if it has no solution.
In Linear Algebra, we are concerned with three problems:
- Is a system of linear equations consistent or not?
- If so, how many elements are in the solution set?
- What is the solution set?
Systems of linear equations are common in science and mathematics. These two examples from high school science give a sense of how they arise.
The first example is from Physics. Suppose that we are given three objects, one with a mass known to be 2 kg and two with unknown masses, and we are asked to find the unknown masses. Suppose further that experimentation with a meter stick produces two balanced arrangements of the objects on a beam.
Since the sum of moments on the left of each balance equals the sum of moments on the right (the moment of an object is its mass times its distance from the balance point), the two balances give a system of two linear equations in the two unknown masses.
The second example of a linear system is from Chemistry. We can mix, under controlled conditions, toluene C7H8 and nitric acid HNO3 to produce trinitrotoluene C7H5O6N3 along with the byproduct water (conditions have to be controlled very well, indeed— trinitrotoluene is better known as TNT). In what proportion should those components be mixed? The number of atoms of each element present before the reaction
must equal the number present afterward. Writing x, y, z, and w for the numbers of molecules of toluene, nitric acid, trinitrotoluene, and water, and applying that principle to the elements C, H, N, and O in turn gives this system:

7x = 7z
8x + y = 5z + 2w
y = 3z
3y = 6z + w
To finish each of these examples requires solving a system of equations. In each, the equations involve only the first power of the variables. This chapter gives an introduction to systems of linear equations. We will solve them later.
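As a sketch of where this is headed (an addition, not part of the original text): with x, y, z, w counting molecules of toluene, nitric acid, TNT, and water as above, the system is solved by the proportions x : y : z : w = 1 : 3 : 1 : 3, that is, 1 C7H8 + 3 HNO3 → 1 C7H5O6N3 + 3 H2O. The following Python snippet simply counts atoms on each side to confirm that these proportions balance; the molecule dictionaries are ad hoc helpers, not a chemistry library.

```python
# Atom counts per molecule: toluene, nitric acid, TNT, water.
toluene = {"C": 7, "H": 8}
nitric_acid = {"H": 1, "N": 1, "O": 3}
tnt = {"C": 7, "H": 5, "O": 6, "N": 3}
water = {"H": 2, "O": 1}

def total(coeffs_and_molecules):
    """Total atoms of each element for a list of (coefficient, molecule) pairs."""
    out = {}
    for coeff, mol in coeffs_and_molecules:
        for element, n in mol.items():
            out[element] = out.get(element, 0) + coeff * n
    return out

left = total([(1, toluene), (3, nitric_acid)])
right = total([(1, tnt), (3, water)])
print(left == right, left)   # True {'C': 7, 'H': 11, 'N': 3, 'O': 9}
```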
The essential information of a linear system can be described in a rectangular array called a matrix. Given a system of three linear equations in the variables x1, x2, and x3, we align the coefficients of each variable in columns and collect them into a matrix called the coefficient matrix (or matrix of coefficients) of the system. This makes sense if you look at the definition of a linear equation (1): if one of the equations is missing a variable, the corresponding entry of the coefficient matrix is a zero, because that equation could be written with a coefficient of 0 on the missing variable.
We also have a matrix called the augmented matrix for the same system. An augmented matrix of a system consists of the coefficient matrix with an added column containing the constants from the right sides of the equations. Again, look at the linear equation definition (1) if it does not make sense.
The size of a matrix tells us how many rows and columns it has. The augmented matrix of a system of three equations in three variables has 3 rows and 4 columns, therefore it is called a 3x4 (read "3 by 4") matrix. If m and n are positive integers, an m x n matrix is a rectangular array of numbers with m rows and n columns. Matrix notation will simplify the calculations of a linear system.
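Because the specific three-equation system used in this passage did not survive in this copy, here is an illustrative NumPy sketch with a made-up system of the same shape (three equations in three unknowns, with no x1 term in the second equation). It shows how the coefficient matrix and the 3x4 augmented matrix are assembled.

```python
import numpy as np

# Hypothetical system (for illustration only):
#    x1 - 2*x2 +   x3 =  0
#         2*x2 - 8*x3 =  8    <- no x1 term, so its coefficient is 0
# -4*x1 + 5*x2 + 9*x3 = -9

coefficient_matrix = np.array([
    [ 1, -2,  1],
    [ 0,  2, -8],
    [-4,  5,  9],
])

constants = np.array([0, 8, -9])

# The augmented matrix appends the constants as a final column: a 3 x 4 matrix.
augmented = np.column_stack([coefficient_matrix, constants])
print(augmented.shape)   # (3, 4)
```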
Elementary Row Operations
There are three elementary row operations: replacement, interchange, and scaling.
The following are an explanation and examples of the three different operations. In the chapter Systems of Linear Equations we have used all of these operations, except Scaling.
An important thing to remember is that all operations can be used on all matrices, not just on matrices derived from linear systems.
Replace one row by the sum of itself and a multiple of another row. A more common paraphrase of row replacement is "Add to one row a multiple of another row."
For example, suppose we are given a linear system, written in matrix notation as an augmented matrix, and we have decided to eliminate the x1 term in equation 2. This can be done by adding the right multiple of equation 1 to equation 2 (here, -2 times equation 1). The result of this calculation is written in place of the original equation 2, giving a matrix whose second row has a zero in the x1 column.
Interchange two rows.
For example, given a matrix, we can interchange two of its rows simply by writing each of the two rows in the other's place.
This is a useful operation when you are trying to solve a linear system and can see that it will be easier to solve after interchanging two rows. It is a widely used operation, even though it may seem odd and not very useful at first.
Multiply all entries in a row by a nonzero constant.
For example, given a matrix, we can perform a scaling operation on its first row by multiplying every entry in that row by -2.
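Since the example matrices themselves were lost from this copy, the following Python sketch (an illustration, using a made-up 3x3 matrix of floats) shows all three elementary row operations expressed with NumPy:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# Replacement: add -4 times row 0 to row 1.
A[1] = A[1] + (-4) * A[0]

# Interchange: swap rows 0 and 2.
A[[0, 2]] = A[[2, 0]]

# Scaling: multiply row 1 by the nonzero constant 1/3.
A[1] = (1 / 3) * A[1]

print(A)
```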
Solving a Linear System
The basic strategy for solving a linear system is to replace one system with an equivalent system (i.e. another system with the same solution set) that is easier to solve. This is done by using the x1 term in the first equation to eliminate the x1 terms in the other equations, then using the x2 term in the second equation to eliminate the x2 terms in the other equations, and so forth, until you get a very simple equivalent system of equations. There are three operations that are used to simplify a system: we can replace one equation by the sum of itself and a multiple of another equation, we can interchange two equations, and we can multiply all the terms in an equation by a nonzero constant. During the example below, we can see why these operations do not change the solution set of a system.
Example 1: Solve a linear system of three equations in the three variables x1, x2, and x3, working from its augmented matrix.

The first thing we want is to keep x1 in the first equation and eliminate it from the other equations.

To achieve this, we add 4 times equation 1 to equation 3.

The result of this calculation is written in place of the original equation 3.

The next thing we want to do is to multiply equation 2 by the constant that makes the coefficient of x2 equal to 1, which will simplify the arithmetic in the next step.

Now, we use the x2 in equation 2 to eliminate the x2 in equation 3.

The new linear system has a triangular form, which is called echelon form.

The next step is to use the x3 in equation 3 to eliminate the x3 terms in equations 1 and 2.

Now we are just one step away from obtaining the solution of the linear system. The last step is to add 2 times equation 2 to equation 1. When doing so we obtain an equivalent linear system in reduced echelon form, in which each equation gives the value of a single variable.

Now that the linear system is solved, reading off those values gives the only solution of the original system. However, because of the many calculations involved, it is a good practice to check your work. This is done by substituting the solution into the original system. If every equation comes out true, the solution we found is right, and therefore is a solution of the original linear system.
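The numbers of the worked example above did not survive in this copy, so here is an illustrative Python sketch of the same strategy applied to a made-up system with a unique solution. It repeatedly applies scaling and replacement operations to reach reduced echelon form, then checks the answer by substitution, just as the text recommends. The helper assumes no pivot entry is ever zero, so no interchanges are needed for this particular system.

```python
import numpy as np

def solve_by_elimination(aug):
    """Reduce an augmented matrix to reduced echelon form with row operations,
    assuming a unique solution and that no pivot is zero (no interchanges)."""
    A = aug.astype(float)
    n = A.shape[0]
    for i in range(n):
        A[i] = A[i] / A[i, i]                 # scaling: make the pivot 1
        for j in range(n):
            if j != i:
                A[j] = A[j] - A[j, i] * A[i]  # replacement: clear the column
    return A[:, -1]                           # last column holds the solution

# Illustrative system (not the book's lost example):
#  2x + y -  z =   8
# -3x - y + 2z = -11
# -2x + y + 2z =  -3
aug = np.array([[ 2,  1, -1,   8],
                [-3, -1,  2, -11],
                [-2,  1,  2,  -3]])
x = solve_by_elimination(aug)
print(x)                                      # [ 2.  3. -1.]
print(np.allclose(aug[:, :3] @ x, aug[:, 3])) # substitute back into the system: True
```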
- Onan, Linear Algebra | http://en.m.wikibooks.org/wiki/Linear_Algebra/Systems_of_Linear_Equations | 13 |
68 | Chapter 12: One-Way, Independent Analysis of Variance
We encounter many situations in our everyday lives in which we are trying to detect something under adverse conditions. For example, you've almost certainly had the experience at a party of trying to understand a conversation and having difficulty doing so because of the loud music and other people talking. You may have experienced so much static on your cell phone that you could not understand the person you were calling. You may even have experienced a pounding rain so loud that you couldn't hear the thunder. In these and similar situations, the background conditions can be considered "static" or "noise."
Likewise, what you're trying to detect--a nearby conversation, a voice on the other end of the phone, or thunder in a rainstorm--can be considered the "message" or "signal." In order for you to hear the message correctly, the signal must be louder than the noise. As you walk through this chapter, think about how signals and noises fit into the type of statistical analysis in this chapter.
Keep these ideas in mind and you will be fine.
First off, ANOVA stands for analysis of variance. We use ANOVA when we wish to compare more than two group averages under the same hypothesis. How many groups? As many as you want. Hopefully, you will have a very good theoretical rationale for comparing 20 or 30 group averages in a single ANOVA. Most of the time, you won't be comparing that many groups.
The implication of this is that ANOVA is always at some level a nondirectional test. Think about it--"direction" implies some prediction as to which group average will be highest. And if we had some previous, theoretical expectation that one group would have a higher average than another group, then we should modify our hypothesis and compare just those two groups--perhaps using a t-test. When we run an ANOVA, we're admitting that we're not sure which of our groups will emerge with the highest group average--and for that matter, which will have the lowest average. It's fair to say that ANOVA is an exploratory procedure. This will make more sense when we get into an example.
Why don't we do a bunch of t tests? One reason for preferring ANOVA over multiple t tests is that this would be a very tedious exercise. Doing t tests over and over again just ain't no fun!
A more important reason is that the more tests we run under the umbrella of the same hypothesis, the more likely we are to get a spurious significant difference. In other words, we're increasing the probability of Type I error. For example, if we set alpha at .05 and run three t tests for three variables that are part of the same hypothesis, the probability of making at least one Type I error climbs to nearly 15% (roughly 5% is contributed by each test). If we didn't adjust our alpha level accordingly (dividing it by 3), we would not be holding our data to a sufficiently high significance-testing standard.
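A two-line calculation (an illustrative aside, not from the original chapter) makes that inflation concrete: with alpha = .05 per test, the chance of at least one false positive across three independent tests is 1 - (1 - .05)^3, which is just over 14%.

```python
alpha = 0.05
tests = 3
familywise = 1 - (1 - alpha) ** tests
print(round(familywise, 3))   # 0.143 -- roughly 15%, versus 5% for a single test
```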
Another reason for using ANOVA is that there are complex types of analyses that can be done with ANOVA and not with the t tests. Additionally, ANOVA is by far the most commonly-used technique for comparing means, and it is important to understand ANOVA in order to understand research reports.
ANOVA is a type of model that can be applied to a variety of data configurations. The ANOVA that we are going to study is the one-way, independent ANOVA. Specifically, a one-way ANOVA is used when there is a single independent variable that has three or more categories. The One‐Way ANOVA tells us if the three (or more) groups differ from one another on a dependent variable.
Imagine a study in which researchers implemented two separate interventions, one designed to improve skill building among youth (Intervention A) and another designed to increase supportive relationships between instructors and youth (Intervention B). In addition, the researchers also employed a control group in their study. The researchers want to know if either of these two interventions improved instruction quality. In this study there are three groups: participants who received Intervention A, those who received Intervention B, and the control group. Below is a graph of what the means for each group might look like. Researchers interested in differences between these three groups in terms of instruction quality would apply a one-way ANOVA. The one-way ANOVA provides information about whether there were statistically significant instruction quality differences between these three groups.
The result of this one-way ANOVA indicates that there are differences between the three means. However, ANOVA on its own does not provide information about where these differences actually are. In this example there could be a difference in instructional quality between the Control and Intervention A groups, between the Control and Intervention B groups, and/or between the Intervention A and Intervention B group averages. To get at these differences, additional analyses must be conducted. More on this later.
Though we aren't going to walk through full examples of other types of ANOVA, it is important to give you at least a brief introduction to a few other ANOVA models. The first of these is what we will call a two-way ANOVA. This one is sometimes referred to as a factorial ANOVA. Unlike a one-way ANOVA, a two-way ANOVA is used when there is more than one independent variable. In the previous example there was only one independent variable with three levels (Intervention A, Intervention B, Control). Now, suppose that a researcher also wanted to know if there were additional group differences between boys and girls in the youth program. When this second independent variable is added to the analysis, a two-way ANOVA must be used.
The results of a two-way ANOVA consist of several parts. The first of these are called main effects. Main effects tell you if there is a difference between groups for each of the independent variables. For example, a main effect of intervention type (Intervention A, Intervention B, Control) would indicate that there is a significant difference between these three groups. That is, somewhere among these three averages, at least two of them are significantly different from one another. Like the one-way ANOVA, a two-way ANOVA does not provide information on where these differences are, and additional analyses are required.
In addition to the main effects, a two-way ANOVA also provides information about interaction effects. Interaction effects tell you whether an observed group difference on one independent variable varies as a function of another independent variable.
The graph shows that there are no differences for boys (blue line) between the Control, Intervention A, or Intervention B. However, there are clear differences for girls (red line) between these three groups. The implication of an interaction such as this one is that differences between groups are dependent on gender.
Moving right along. The last type of ANOVA is referred to as repeated-measures ANOVA, which can be either one-way or two-way. To put it in terms of our t test chapter, we could also call this type of ANOVA a dependent-groups ANOVA. As discussed in a previous chapter, a dependent-samples t test is used when the scores between two groups are somehow dependent on each other. One example of such a dependency is when the same people are given the same measure over time to see whether there is change in that measure. The repeated-measures ANOVA takes the dependent samples t test one step further and allows the researcher to ask the question “Does the difference between the pre‐test and post‐test means differ as a function of group membership?”
Here's an example. Suppose that we are interested in the effect of practice on the ability to solve algebra problems. First we test 20 participants in algebra performance before practice, recording the number of problems they solve correctly out of 10 problems. We then provide the participants with practice on algebra problems and retest their performance after one day and then again after one week. Essentially, we're looking at whether the effects of practice persist. Because we have three groups comprising the same participants, the best analysis would be a repeated-measures ANOVA.
The following are assumptions of one-way, independent ANOVA: (1) the observations are independent of one another, both within and between groups; (2) the dependent variable is approximately normally distributed within each group; and (3) the variances of the groups are roughly equal (homogeneity of variance).
The first and simplest problem to consider is the comparison of three means; though if you're not worried about the math, which we won't be, it doesn't matter how many means we're dealing with. This is done by the analysis of variance (ANOVA). The aim of this chapter is to look at an example in some conceptual detail.
So, here is the example. If we have three fertilizers, and we wish to compare their effectiveness, this could be done by a field experiment in which each fertilizer is applied to 10 plots, and then the 30 plots are later harvested, with the crop yield (in tons) being calculated for each plot. We have 3 groups with 10 scores in each.
Though we won't be calculating our F-comp by hand, it will be helpful to at least know what the raw data are:
When these data are plotted on a graph, it appears that the fertilizers do differ in the amount of yield produced (we'll call this between-groups variation), but there is also a lot of variation from plot to plot within each fertiliser group (we'll call this within-groups variation, also known as error variation). Whilst it appears that fertiliser 1 produces the highest yield on average, a number of plots treated with fertiliser 1 did actually yield less than some of the plots treated with fertilisers 2 or 3.
We now need to compare these three groups to discover if this apparent difference is statistically significant. When comparing two samples, the first step was to compute the difference between the two sample means (see revision section). However, because we have more than two samples, we do not compute the differences between the group means directly. Instead, we focus on the variability in the data. At first this seems slightly counter-intuitive: we are going to ask questions about the means of three groups by analysing the variation in the data. How does this work?
The variability in a set of data quantifies the scatter of the data points around the mean. To calculate a variance, first the mean is calculated, then the deviation of each point from the mean. We know from before that adding up the raw deviations from a mean always yields 0, which is not helpful. If the deviations are squared before summation then this sum is a useful measure of variability, which will increase the greater the scatter of the data points around the mean. This quantity is referred to as a sum of squares (SS), and is central to our analysis.
The SS however cannot be used as a comparative measure between groups, because clearly it will be influenced by the number of data points in the group; the more data points, the greater the SS. Instead, this quantity is converted to a variance by dividing by n − 1, where n equals the number of data points in the group. Doing this gives us an "average" SS that can be compared across groups.
In an ANOVA, it is useful to keep the total measure of variability separated into its two components--within-groups and between-groups--each described by a sum of squares and the degrees of freedom associated with that sum of squares. Returning to the original question: Do the three fertilizers produce equal yields of crops? Numerous factors are likely to be involved: e.g. differences in soil nutrients between the plots, differences in moisture content, many other biotic and abiotic factors, and also the fertiliser applied to the plot. It is only the last of these that we are interested in, so we will divide the variability between plots into two parts: that due to applying different fertilisers (between-groups), and that due to all the other factors (within-groups).
To illustrate the principle behind partitioning the variability, first consider two extreme datasets. If there was almost no variation between the plots due to any of the other factors, and nearly all variation was due to the application of the three fertilisers, then the data would follow this pattern. The first step would be to calculate a grand mean (the mean of all scores from all groups), and there is considerable variation around this mean. The second step is to calculate the three group means that we wish to compare: that is, the means for the plots given fertilisers A, B and C. It can be seen that once these means are fitted, then little variation is left around the group means. In other words, there is little within-group variability in the groups but large between-groups variability across the groups. When all is said and done, we would be almost certain to reject the null hypothesis for this experiment.
Now consider the other extreme, in which the three fertilisers are, in fact, about the same. Once again, the first step is to fit a grand mean and calculate the sum of squares. Second, three group means are fitted, only to find that there is almost as much variability as before. Little variability has been explained. This has happened because the three means are relatively close to each other (compared to the scatter of the data).
The amount of variability that has been explained can be quantified directly by measuring the scatter of the treatment means around the grand mean. In the first of our two examples, the deviations of the group means around the grand mean are considerable, whereas in the second example these deviations are relatively small. When the variability around the grand mean isn't all that much different from the variability around the individual means, we say that we've not explained much variance. Explain it how? Quite simply explain it in terms of our independent variable--type of fertilizer. In this second example, we can't tell whether the variability is coming from the fertilizers, or the soils or the weather, etc. When all is said and done, we would almost certainly fail to reject the null hypothesis for this experiment.
But at what point do we decide that the amount of variation explained by fitting the three means is significant? The word significant, in this context, actually has a technical meaning. It means ‘When is the variability between the group means greater than what we would expect by chance alone?’ At this point it is useful to define the three measures of variability that have been referred to. These are:
Total sum of squares (SST): Sum of squares of the deviations of the data around the grand mean. This is a measure of the total variability in the dataset.
Within-groups or error sum of squares (SSW): Sum of squares of the deviations of the data around their own group means. This is a measure of the variation within each group that is due to factors other than the treatment.
Between-groups sum of squares (SSB): Sum of squares of the deviations of the group means from the grand mean. This is a measure of the variation between plots given different fertilisers.
Variability is measured in terms of sums of squares rather than variances because these three quantities have the simple relationship: SST = SSW + SSB. So the total variability has been divided into two components; that due to differences between plots given different treatments, and that due to differences within plots. Variability must be due to one or other of these two causes. Separating the total SS into its component SS is referred to as partitioning the sums of squares.
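To make the partition concrete, here is a short Python sketch that computes SST, SSB, and SSW for three groups and confirms that they add up. The yields below are made up for illustration only (the chapter's raw data are shown in its figure, not reproduced here), so treat the specific numbers as stand-ins.

```python
import numpy as np

# Made-up crop yields (tons) for three fertilizer groups; illustrative only.
groups = [
    np.array([6.3, 5.8, 5.1, 6.0, 5.5]),   # Fertilizer 1
    np.array([4.2, 3.9, 4.4, 3.7, 4.1]),   # Fertilizer 2
    np.array([4.8, 4.3, 4.6, 4.2, 4.9]),   # Fertilizer 3
]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Total SS: squared deviations of every score from the grand mean.
sst = ((all_scores - grand_mean) ** 2).sum()

# Between-groups SS: squared deviations of each group mean from the grand mean,
# weighted by that group's sample size.
ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Within-groups (error) SS: squared deviations of scores from their own group mean.
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(f"SST = {sst:.3f}")
print(f"SSB + SSW = {ssb:.3f} + {ssw:.3f} = {ssb + ssw:.3f}")   # equals SST
```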
For ANOVA, there are always two degrees-of-freedom values--one for the between-groups source and one for the within-groups source. This is reflected in the Table of Critical F Values. Expand this table (or zoom in) and note the differences between this table and the last one that we used.
Every SS was calculated using a number of independent pieces of information. As with all lists of numbers, some are free to vary and some are not. The first step in any analysis of variance is to calculate SST. It has already been discussed that when looking at the deviations of data around a central grand mean, there are N − 1 independent deviations: i.e. in this case N − 1 = 29 degrees of freedom (df). The second step is to calculate the three treatment means. When the deviations of two of these treatment means from the grand mean have been calculated, the third is not free to vary. Therefore the df for SSB is 2. Finally, SSW measures variation within each of the three groups that are part of this study. Within each of these groups, the ten deviations must sum to zero. Given nine deviations within the group, the last is predetermined. Thus SSW has 3 × 9 = N − 3 = 27 df associated with it.
Just as the SS are additive, so too are the df. Adding the df for both SSW and SSB equals the df associated with SST. Combining the information on SS and df, we can arrive at a measure of variability per df. This is equivalent to a variance, and in the context of ANOVA is called a mean square (MS). The calculations for MS are as follows:
Mean Square Between (MSB) = SSB/dfbetween
Mean Square Within (MSW) = SSW/dfwithin
Essentially, what we now have is all of the variance in the experiment, properly accounted for. And because we have kept the variability separated into variance within and between groups, we can compute a statistic that will allow us to determine whether there is a significant difference among the average crop yields in our study. How do we do that? We do what we always do--we put the signal (between) over the noise (within).
One of the most important considerations in applying and understanding ANOVA is the summary table. For any ANOVA analysis that you conduct, a summary table will be the center of the output. It is from such a table that we decide whether to reject or fail to reject a given null hypothesis. In the table below, you can see how to calculate degrees of freedom, mean square values and even F-comp.
This table helps us keep track of the different parts of our variation. Examination of the summary table reveals some items we have not yet discussed. First, note that the sources of variation (between and within) are indicated on the left side. Next we see places for degrees of freedom. The between-groups df is equal to K - 1, where K is the number of groups we are comparing. Because we are comparing three groups in our example, this is 3 - 1 = 2. The within-groups df equals N - K, which means that we subtract the number of groups from the total number of participants in all of our groups. For our example this is 30 - 3 = 27. Finally total df equals N - 1, or 30 - 1 = 29, for our example. Notice that df-between and df-within add up to equal df-total.
The hardest task in constructing a summary table manually is the calculation of the sums of squares in the SS column. We will not be going into these calculations in this class, but having this information already in the table ensures that we can calculate our mean squares. As you can see, calculating MS is simply a matter of dividing SS by its df. MS-between comes out to 5.41, and MS-within comes out to .949.
And once we have MS-between and MS-within, we can calculate our F-comp or our F ratio. To do this, we simply divide MS-Between by MS-Within. For our example, this is 5.41/.949 = 5.7. And that's it. That's F-comp. Remember that F-comp will always be positive. F is never negative. Now, all we have to do is compare our computed F with the critical F for our alpha and degrees of freedom.
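If you want to verify these numbers with software, the sketch below recomputes F-comp from the mean squares reported above and, assuming SciPy is available, looks up the critical F and the p-value for 2 and 27 degrees of freedom. The critical value it returns should match the 3.35 used in the next section.

```python
from scipy import stats

# Values taken from the summary table in the text.
ms_between, ms_within = 5.41, 0.949
df_between, df_within = 2, 27

f_comp = ms_between / ms_within
f_crit = stats.f.ppf(0.95, df_between, df_within)     # critical F at alpha = .05
p_value = stats.f.sf(f_comp, df_between, df_within)   # P(F >= f_comp) if H0 is true

print(f"F-comp = {f_comp:.2f}")   # about 5.70
print(f"F-crit = {f_crit:.2f}")   # about 3.35
print(f"p      = {p_value:.4f}")  # well below .05
```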
Let's go through this using our steps of hypothesis testing. Remember that if we were conducting research, we would go through the first four steps before we even collected data.
1. H0: µ1 = µ2 = µ3 . The null hypothesis for ANOVA is called an omnibus hypothesis. This type of hypothesis covers multiple groups and assumes that there is equality among all group averages. Stating the null in words: There is no difference in the amount (measured in tons) of crops yielded by the three fertilizers.
2. H1: The crop yields among the three groups are not all the same. Another way to say this is: At least one group average will be significantly different from one other group average. So, for ANOVA, we will reject the null even if only two group averages are significantly different from one another. Even if we run an ANOVA with nine groups, only two of them have to be significantly different from one another in order for our data to yield a significant F-comp.
3. Set α = .05.
4. Reject H0 if F-comp ≥ F-crit; F.05, df = 2, 27 = 3.35.
5. The fifth step in this chapter is a little different from past chapters. Though you won't be performing any extensive calculations, you do need to be able to fill out a summary ANOVA table with relevant information. Filling out the summary table correctly will yield the correct F-comp. Here is the completed summary ANOVA table for the fertilizer study. In the previous section, we walked through the steps for filling in the summary table.
6a. A one-way ANOVA was conducted to determine whether there is a difference in the amount (measured in tons) of crops yielded by the three fertilizers.
6b. There was sufficient evidence to reject the null hypothesis; F(2, 27) = 5.7, p < .05. If our F-comp value was less than F-crit, then we would have said, There was insufficient evidence to reject the null hypothesis...
6c. The average crop yields associated with Fertilizers 1, 2, and 3 are not all the same (5.45, 4.00, and 4.49 tons, respectively). Further post-hoc testing of pairwise differences is necessary to determine which group averages are significantly different from one another. At this point we're unable to state exactly which groups have averages that are significantly different from one another, because we have only one statistic--a single F value. Based on a single, significant F value we can't draw any conclusions or talk about the implications of the study. In order to do this, we have to compare pairs of averages using another type of test.
The Fisher LSD (least significant difference) test, or simply LSD test, is easy to compute and is used to make all pairwise comparisons of group means. By pairwise comparisons, we mean that we make all possible comparisons between groups by looking at one pair of groups at a time. One advantage of the LSD is that it does not require equal sample sizes. Another advantage is that it is a powerful test; that is, we are more likely to be able to reject the null hypothesis with it than with other post-hoc tests.
The LSD is sometimes called a protected t test because it follows a significant F test. If we used the t test to do all pairwise comparisons before the F test, the probability of committing a Type I error would be greater than our α. However, by applying a form of the t test, after the F test has revealed at least one significant comparison, we say the error rate (probability of Type I error) is "protected."
Without going into the formula and computations, it is sufficient to say that running an LSD test consists of first calculating a critical LSD value, which is based on sample sizes, our MSW value, and a critical t value pulled from the t table. More than one LSD value needs to be calculated if group sizes are different. Once the LSD value has been calculated, the differences between group averages are compared to the LSD value. Average differences that are equal to or larger than the LSD value are considered statistically significant and would be interpreted as such. Now, in the table below, you won't see the calculated LSD value, because the computer program from which this output derives handles that behind the scenes. Therefore, interpretation will be a little different than if we were performing all of the calculations manually. Read on.
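For the curious, here is roughly what that behind-the-scenes calculation looks like. The formula below is one common formulation of Fisher's LSD (a critical t value times the standard error of a difference based on MSW); the MSW, degrees of freedom, group sizes, and means are taken from the fertilizer example, but the values your software prints may differ slightly because of rounding.

```python
from math import sqrt
from scipy import stats

def fisher_lsd(ms_within, df_within, n_i, n_j, alpha=0.05):
    """Critical LSD for comparing two group means (one common formulation)."""
    t_crit = stats.t.ppf(1 - alpha / 2, df_within)
    return t_crit * sqrt(ms_within * (1 / n_i + 1 / n_j))

# Fertilizer example: MSW = .949, df-within = 27, 10 plots per group.
lsd = fisher_lsd(0.949, 27, 10, 10)
print(f"LSD = {lsd:.2f}")   # a pair of means must differ by at least this much

# Group means from the example: 5.45, 4.00, and 4.49 tons.
pairs = {
    "Fertilizer 1 vs 2": abs(5.45 - 4.00),
    "Fertilizer 1 vs 3": abs(5.45 - 4.49),
    "Fertilizer 2 vs 3": abs(4.00 - 4.49),
}
for label, diff in pairs.items():
    print(f"{label}: difference = {diff:.2f}, significant = {diff >= lsd}")
```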
Here is the LSD Table of Mean Differences. Note that unimportant columns have been grayed out for easier interpretation. This table comprises a few pieces of information, the most important of which are the Mean Difference and Sig. columns. In the Mean Difference column, we see all of the differences between the group averages (e.g., average of Fertilizer 1 minus the average of Fertilizer 2). The Sig. column just provides the probability associated with each difference. We interpret these group differences as we've always done. If p>.05 (assuming that is the established alpha level for the study), then we accept that the two group averages are not significantly different from one another. If p<.05, then we reject the null hypothesis and conclude that there is a statistically significant difference between two group means.
Now, with the LSD table in front of us, we can finish off this problem. There are basically two steps, here. First, we will clearly identify all statistically significant, pairwise differences. Second, we need to talk about the implications of such findings. Here it is:
7a. Our LSD test revealed that Fertilizer 1 yielded significantly more crops (M=5.45 tons) than did Fertilizers 2 and 3 (M=4.00 and 4.49 tons, respectively).
7b. Given these findings, more extensive usage of Fertilizer 1 is recommended. Further testing on other crops in other locations is also advised, so as to improve methodological representativeness.
Use this interactive animation to get a better understanding of one-way ANOVA. Notice what happens to F-comp when you increase the variability within groups. Notice what happens when you increase the differences between the groups and the grand mean. Also, keep an eye on the bars representing MSB and MSW or the F ratio. Can you figure out how to get the largest F-comp possible? Definitely one of the coolest applets I've found on the web.
Some content adapted from others' work. See home page for specifics.
Fixed Intervals Help (page 2)
Introduction to Fixed Intervals
In descriptive statistics, you can divide up data sets into subsets containing equal (or as nearly equal as possible) numbers of elements, and then observe the ranges of values in each subset. There's another approach: we can define fixed ranges of independent-variable values, and then observe the number of elements in each range.
The Test Revisited
Let's re-examine the test whose results are portrayed in Table 4-1. This time, let's think about the ranges of scores. There are many ways we can do this, three of which are shown in Tables 4-8, 4-9, and 4-10.
In Table 4-8, the results of the test are laid out according to the number of papers having scores in the following four ranges: 0–10, 11–20, 21–30, and 31–40. We can see that the largest number of students have scores in the range 21–30, followed by the ranges 31–40, 11–20, and 0–10.
In Table 4-9, the results are shown according to the number of papers having scores in 10 ranges. In this case, the most "popular" range is 29–32. The next most "popular" range is 21–24. The least "popular" range is 0–4.
Both Tables 4-8 and 4-9 divide the test scores into equal-sized ranges (except the lowest range, which includes one extra score, the score of 0). Table 4-10 is different. Instead of breaking the scores down into ranges of equal size, the scores are tabulated according to letter grades A, B, C, D, and F. The assignment of letter grades is often subjective, and depends on the performance of the class in general, the difficulty of the test, and the disposition of the teacher. (The imaginary teacher grading this test must be a hardnosed person.)
The data in Tables 4-8, 4-9, and 4-10 can be portrayed readily in graphical form using broken-up circles. This is a pie graph, also sometimes called a pie chart. The circle is divided into wedge-shaped sections in the same way a pie is sliced. As the size of the data subset increases, the angular width of the pie section increases in direct proportion.
In Fig. 4-3, graph A portrays the data results from Table 4-8, graph B portrays the results from Table 4-9, and graph C portrays the results from Table 4-10. The angle at the tip or apex of each pie wedge, in degrees, is directly proportional to the percentage of data elements in the subset. Thus if a wedge portrays 10% of the students, its apex angle is 10% of 360°, or 36°; if a wedge portrays 25% of the students, its apex angle is 25% of 360°, or 90°. In general, if a wedge portrays x% of the elements in the population, the apex angle θ of its wedge in a pie graph, in degrees, is 3.6x.
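As a quick illustration of the 3.6x rule, the sketch below converts a set of percentages into apex angles and checks that the wedges close the circle. Only the 30% figure for the 31–40 range comes from the text; the other percentages are hypothetical placeholders.

```python
# Apex angle of a pie wedge, in degrees: theta = 3.6 * x, where x is the
# percentage of elements in the subset.
subsets = {"0-10": 5, "11-20": 25, "21-30": 40, "31-40": 30}   # percentages (mostly hypothetical)

angles = {label: 3.6 * pct for label, pct in subsets.items()}
for label, angle in angles.items():
    print(f"scores {label}: {subsets[label]}% of papers -> apex angle {angle:.0f} degrees")

# The wedges of a complete pie must account for all 360 degrees.
assert abs(sum(angles.values()) - 360) < 1e-9
```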
The sizes of the wedges of each pie can also be expressed in terms of the area percentage. The wedges all have the same radius – equal to the radius of the circle – so their areas are proportional to the percentages of the data elements in the subsets they portray. Thus, for example, in Fig. 4-3A, the range of scores 31–40 represents a slice containing "30% or 3/10 of the pie," while in Fig. 4-3C, we can see that the students who have grades of C represent "25% or 1/4 of the pie."
Histograms were introduced back in Chapter 1. The example shown in that chapter is a bit of an oversimplification, because it's a fixed-width histogram. There exists a more flexible type of histogram, called the variable-width histogram. This sort of graph is ideal for portraying the results of our hypothetical 40-question test given to 1000 students in various ways.
Figure 4-4 shows variable-width histograms that express the same data as that in the tables and pie graphs. In Fig. 4-4, graph A portrays the data results from Table 4-8, graph B portrays the results from Table 4-9, and graph C portrays the results from Table 4-10. The width of each vertical bar is directly proportional to the range of scores. The height of each bar is directly proportional to the percentage of students who received scores in the indicated range.
Percentages are included in the histogram of Fig. 4-4A, because there's room enough to show the numbers without making the graph look confusing or cluttered. In Figs. 4-4B and C, the percentages are not written at the top of each bar. This is a matter of preference. Showing the numbers in graph B would make it look too cluttered to some people. In graph C, showing the percentage for the grade of A would be difficult and could cause confusion, so they're all left out. It's a good idea to include tabular data with histograms when the percentages aren't listed at the tops of the bars.
Fixed Intervals Practice Problems
Imagine a large corporation that operates on a five-day work week (Monday through Friday). Suppose the number of workers who call in sick each day of the week is averaged over a long period, and the number of sick-person-days per week is averaged over the same period. (A sick-person-day is the equivalent of one person staying home sick for one day. If the same person calls in sick for three days in a given week, that's three sick-person-days in that week, but it's only one sick person.) For each of the five days of the work week, the average number of people who call in sick on that day is divided by the average number of sick-person-days per week, and is tabulated as a percentage for that work-week day. The results are portrayed as a pie graph in Fig. 4-5. Name two things that this graph tells us about Fridays. Name one thing that this graph might at first seem to, but actually does not, tell us about Fridays.
The pie graph indicates that more people (on the average) call in sick on Fridays than on any other day of the work week. It also tells us that, of the total number of sick-person-days on a weekly basis, an average of 33% of them occur on Fridays. The pie graph might at first seem to, but in fact does not, indicate that an average of 33% of the workers in the corporation call in sick on Fridays.
Suppose that, in the above described corporation and over the survey period portrayed by the pie graph of Fig. 4-5, there are 1000 sick-person-days per week on average. What is the average number of sick-person-days on Mondays? What is the average number of people who call in sick on Mondays?
Brad teaches the basics of natural harmonics. He gives you tips and advice on practicing them. Natural harmonics produce a wicked squealing sound that can really accent your playing.
Taught by Brad Henecke in the Rock Guitar with Brad Henecke series. Length: 13:16. Difficulty: 3.0 of 5.
“Harmonic” is a term that is often used in the scientific field of acoustics. Harmonics are component frequencies that, together, make up a larger, complex tone. The individual pitches or notes that occur in music are referred to as “fundamentals.” For example, the pitch we hear when the fifth string is struck is a fundamental. This fundamental is called “A”. The pitch that our ears perceive as “A” is actually a sum of several overtone frequencies called harmonics. When we hear a fundamental, our ears cannot distinguish the individual overtones. We only hear their resulting sum or fundamental.
2. Natural Harmonics
A frequency is assigned to every pitch or note that we hear. Frequency is measured in a unit called hertz (abbreviated Hz). For example, the frequency of the open A string is 110 Hz. Harmonics are integer multiples of this fundamental frequency. For example, if the length of the A string is divided in half, the resulting pitch is an octave higher. The 12th fret marks the exact center of a string. If a harmonic is plucked at the 12th fret, the frequency of the pitch doubles. The frequency of this harmonic is 220 Hz. The pitch that results is still A, just one octave higher. If the length of string is divided into an even smaller section, a higher harmonic occurs.
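To see how the integer-multiple rule maps onto the fretboard, here is a small sketch. It assumes the open A string at 110 Hz and uses the equal-temperament relationship between fret position and string length; the fret numbers it prints correspond to the common harmonic locations discussed in this lesson.

```python
from math import log2

# Natural harmonics are integer multiples of the open string's fundamental.
# Lightly touching the string at 1/n of its length isolates the nth harmonic.
fundamental_hz = 110.0   # open A string (A2), assuming standard tuning

for n in range(2, 6):
    freq = n * fundamental_hz
    node = 1 / n                    # node position as a fraction of the string, measured from the nut
    fret = -12 * log2(1 - node)     # equal-tempered fret that sits over that node
    print(f"harmonic {n}: {freq:.0f} Hz, node at 1/{n} of the string (about fret {fret:.1f})")
```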
Listen to the introduction music once again. The high chime-y sounds Brad creates are examples of “natural harmonics.” Many guitarists also refer to the sound produced by harmonics as a high pitched squealing sound. The harmonics he plays occur at either the 5th or 12th fret. Harmonics are a frequently used compositional technique. Compare the sound of a fretted note at the 12th fret to a harmonic played at the 12th fret. The pitch is the same, but the overall tone is quite different. Harmonics are added to a piece of music to add a contrasting tonal color.
3. Where do Natural Harmonics Occur?
The easiest harmonics to produce occur at the 5th, 7th, and 12th frets of all six strings. The upper octaves of these harmonics are relatively easy to produce as well. These harmonics occur at the 17th and 19th frets. If you have a guitar with 24 frets, harmonics can be produced at the final fret. Due to how frequently they are used, Brad refers to these harmonics as the first set of natural harmonics.
4. Performing Natural Harmonics
Note: Open the “Supplemental Content” tab for a fretboard diagram detailing where the first set of harmonics occurs. Brad has labeled the pitch produced by a harmonic at each location on the fretboard. What do you notice about the arrangement of these pitches?
Harmonics actually occur down the length of the entire string. However, many of these harmonics are very difficult to produce. As a result, these harmonics are used rather infrequently. For example, harmonics at the 9th and 3rd frets are occasionally used. These harmonics are typically used in rock and metal.
Note: It is much easier to produce natural harmonics on an electric guitar. Harmonics really jump out when played with a distorted tone. Also, scooping the midrange frequency of a distorted guitar tone increases the projection of harmonics.
Performing a natural harmonic is relatively easy. Begin by practicing harmonics at the 5th fret. Lightly rest the fleshy tip of the index finger on the string directly over the 5th fret. If your finger is not directly over the fret, the harmonic will not sound. Misplacement of a left hand finger while playing a harmonic usually results in a muted string.
5. Combining Harmonics with Other Techniques
Do not press the string down at all. Once you pluck the string with the right hand, release your left hand finger from the string. Many instructors teach that you must immediately remove the left hand finger from the string to produce the harmonic. However, this is not true. You actually have a comfortable amount of time to remove your finger from the string. The tone of the harmonic blossoms and becomes richer once the finger is released from the string.
Practice playing harmonics at the 5th fret on each string. It is much easier to produce a loud and clear harmonic on the larger strings than on the smaller strings. Then, practice harmonics on the 7th fret and 12th fret. Harmonics at these locations are slightly easier to produce than harmonics at the 5th fret. Harmonics that are closer to the nut require a little more effort to produce a clear tone. Memorize the pitch that corresponds with each harmonic location.
Also, practice playing two harmonics simultaneously. Simply barre a left hand finger across the desired strings. Pluck them simultaneously. Then, quickly lift the barre from both strings.
Vibrato and bends are frequently combined with harmonics. These tricks are most commonly performed on guitars equipped with a floating tremolo system. A floating tremolo enables you to pull the whammy bar up. This raises the pitch. Guitarists such as Eddie Van Halen and Dimebag Darrell have exploited these techniques to great effect. Dimebag would frequently force a harmonic at the third fret. Then, he would raise the pitch of the harmonic with the whammy bar. Listen to the end of “Cemetery Gates” for a great example of this combination technique. You can also push the bar down to lower the pitch. Rapidly moving the bar up and down produces a unique form of vibrato.
Chapter 4: (2:18) More Natural Harmonics
The same harmonics produced at the 5th and 7th frets can also be played at the 17th and 19th frets respectively.
However, most guitars are not equipped with a floating tremolo. Brad demonstrates how to play these same techniques on a guitar with a fixed bridge such as a Les Paul. First, produce a harmonic. Then, use a left hand finger to manipulate the strings on the other side of the nut. The middle finger works the best since it is the strongest finger. Pressing a string down on this side of the nut raises the pitch. Jimmy Page uses this technique all of the time. Listen to the introduction of “Dazed and Confused” from the first Led Zeppelin album to hear some examples of harmonics combined with bends.
An electronic amplifier, amplifier, or (informally) amp is an electronic device that increases the power of a signal. It does this by taking energy from a power supply and controlling the output to match the input signal shape but with a larger amplitude. In this sense, an amplifier modulates the output of the power supply.
Numerous types of electronic amplifiers are specialized to various applications. An amplifier can refer to anything from an electrical circuit that uses a single active component, to a complete system such as a packaged audio hi-fi amplifier.
Figures of merit
Amplifier quality is characterized by a list of specifications that includes:
- Gain, the ratio between the magnitude of output and input signals
- Bandwidth, the width of the useful frequency range
- Efficiency, the ratio between the power of the output and total power consumption
- Linearity, the degree of proportionality between input and output
- Noise, a measure of undesired noise mixed into the output
- Output dynamic range, the ratio of the largest and the smallest useful output levels
- Slew rate, the maximum rate of change of the output
- Rise time, settling time, ringing and overshoot that characterize the step response
- Stability, the ability to avoid self-oscillation
Amplifiers are described according to their input and output properties. They have some kind of gain, or multiplication factor that relates the magnitude of the output signal to the input signal. The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases, with input and output in the same unit, gain is unitless (though often expressed in decibels). For others this is not necessarily so. For example, a transconductance amplifier has a gain with units of conductance (output current per input voltage). The power gain of an amplifier depends on the source and load impedances used as well as its voltage gain; while an RF amplifier may have its impedances optimized for power transfer, audio and instrumentation amplifiers are normally employed with amplifier input and output impedances optimized for least loading and highest quality. So an amplifier that is said to have a gain of 20 dB might have a voltage gain of ten times and an available power gain of much more than 20 dB (100 times power ratio), yet be delivering a much lower power gain if, for example, the input is a 600 ohm microphone and the output is a 47 kilohm power amplifier's input socket.
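The decibel bookkeeping in that paragraph is easy to sanity-check. The sketch below is a simplified illustration only: it treats the 600 ohm and 47 kilohm figures as the source and load impedances and ignores matching details, but it shows why a 20 dB voltage gain need not correspond to a 20 dB gain in delivered power.

```python
from math import log10

def voltage_gain_db(v_out, v_in):
    return 20 * log10(v_out / v_in)

def power_gain_db(p_out, p_in):
    return 10 * log10(p_out / p_in)

# A 10x voltage ratio is 20 dB, and a 100x power ratio is also 20 dB.
print(voltage_gain_db(10.0, 1.0))   # 20.0
print(power_gain_db(100.0, 1.0))    # 20.0

# Power depends on impedance (P = V**2 / R). With a 600 ohm source feeding a
# stage whose output drives a 47 kilohm input, the delivered power gain is small
# even though the voltage gain is still 20 dB.
v_in, v_out = 1.0, 10.0
r_source, r_load = 600.0, 47_000.0
p_in = v_in ** 2 / r_source    # power taken from the 600 ohm source (simplified)
p_out = v_out ** 2 / r_load    # power delivered into the 47 kilohm input
print(power_gain_db(p_out, p_in))   # roughly 1 dB
```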
In most cases an amplifier should be linear; that is, the gain should be constant for any combination of input and output signal. If the gain is not constant, e.g., by clipping the output signal at the limits of its capabilities, the output signal is distorted. There are however cases where variable gain is useful.
There are many types of electronic amplifiers, commonly used in radio and television transmitters and receivers, high-fidelity ("hi-fi") stereo equipment, microcomputers and other electronic digital equipment, and guitar and other instrument amplifiers. Critical components include active devices, such as vacuum tubes or transistors. A brief introduction to the many types of electronic amplifier follows.
The term power amplifier is a relative term with respect to the amount of power delivered to the load and/or sourced by the supply circuit. In general a power amplifier is designated as the last amplifier in a transmission chain (the output stage) and is the amplifier stage that typically requires most attention to power efficiency. Efficiency considerations lead to various classes of power amplifier based on the biasing of the output transistors or tubes: see power amplifier classes.
Power amplifiers by application
- Audio power amplifiers
- RF power amplifier, such as for transmitter final stages (see also: Linear amplifier).
- Servo motor controllers, where linearity is not important.
- Piezoelectric audio amplifier includes a DC-to-DC converter to generate the high voltage output required to drive piezoelectric speakers.
Power amplifier circuits
Power amplifier circuits include the following types:
- Vacuum tube/valve, hybrid or transistor power amplifiers
- Push-pull output or single-ended output stages
Vacuum-tube (valve) amplifiers
According to Symons, while semiconductor amplifiers have largely displaced valve amplifiers for low power applications, valve amplifiers are much more cost effective in high power applications such as "radar, countermeasures equipment, or communications equipment" (p. 56). Many microwave amplifiers are specially designed valves, such as the klystron, gyrotron, traveling wave tube, and crossed-field amplifier, and these microwave valves provide much greater single-device power output at microwave frequencies than solid-state devices (p. 59).
Valves/tube amplifiers also have niche uses in other areas, such as
- electric guitar amplification
- in Russian military aircraft, for their EMP tolerance
- niche audio for their sound qualities (recording, and audiophile equipment)
The essential role of this active element is to magnify an input signal to yield a significantly larger output signal. The amount of magnification (the "forward gain") is determined by the external circuit design as well as the active device.
Many common active devices in transistor amplifiers are bipolar junction transistors (BJTs) and metal oxide semiconductor field-effect transistors (MOSFETs).
Applications are numerous; some common examples are audio amplifiers in a home stereo or PA system, RF high-power generation for semiconductor equipment, and RF and microwave applications such as radio transmitters.
Transistor-based amplifiers can be realized using various configurations: for example with a bipolar junction transistor we can realize common base, common collector or common emitter amplifiers; using a MOSFET we can realize common gate, common source or common drain amplifiers. Each configuration has different characteristics (gain, impedance and so on).
Operational amplifiers (op-amps)
An operational amplifier is an amplifier circuit with very high open loop gain and differential inputs that employs external feedback to control its transfer function, or gain. Though the term today commonly applies to integrated circuits, the original operational amplifier design used valves.
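As a small illustration of how external feedback sets the gain, the sketch below assumes the standard non-inverting resistor-feedback configuration (Rf from the output to the inverting input, Rg from that input to ground). With feedback fraction β = Rg/(Rg + Rf), the closed-loop gain A/(1 + Aβ) approaches 1 + Rf/Rg once the open-loop gain A is large, which is why the resistors, rather than the op-amp itself, end up determining the gain. The resistor values are illustrative.

```python
# Closed-loop gain of a non-inverting op-amp stage with open-loop gain A and
# feedback network Rf, Rg (assumed configuration; values are illustrative).
def closed_loop_gain(open_loop_gain, r_f, r_g):
    beta = r_g / (r_g + r_f)                       # fraction of the output fed back
    return open_loop_gain / (1 + open_loop_gain * beta)

r_f, r_g = 90_000.0, 10_000.0                      # ideal gain = 1 + Rf/Rg = 10
for a in (1e3, 1e5, 1e7):
    print(f"open-loop gain {a:.0e} -> closed-loop gain {closed_loop_gain(a, r_f, r_g):.4f}")
```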
Fully differential amplifiers
A fully differential amplifier is a solid state integrated circuit amplifier that uses external feedback to control its transfer function or gain. It is similar to the operational amplifier, but also has differential output pins. These are usually constructed using BJTs or FETs.
Video amplifiers
These deal with video signals and have varying bandwidths depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p etc. The specification of the bandwidth itself depends on what kind of filter is used—and at which point (-1 dB or -3 dB for example) the bandwidth is measured. Certain requirements for step response and overshoot are necessary for an acceptable TV image.
Oscilloscope vertical amplifiers
These deal with video signals that drive an oscilloscope display tube, and can have bandwidths of about 500 MHz. The specifications on step response, rise time, overshoot, and aberrations can make designing these amplifiers difficult. One of the pioneers in high bandwidth vertical amplifiers was the Tektronix company.
Distributed amplifiers
These use transmission lines to temporally split the signal and amplify each portion separately to achieve higher bandwidth than possible from a single amplifier. The outputs of each stage are combined in the output transmission line. This type of amplifier was commonly used on oscilloscopes as the final vertical amplifier. The transmission lines were often housed inside the display tube glass envelope.
Switched mode amplifiers
These nonlinear amplifiers have much higher efficiencies than linear amps, and are used where the power saving justifies the extra complexity.
Negative resistance devices
Travelling wave tube amplifiers
Traveling wave tube amplifiers (TWTAs) are used for high power amplification at low microwave frequencies. They typically can amplify across a broad spectrum of frequencies; however, they are usually not as tunable as klystrons.
Klystrons
Klystrons are specialized linear-beam vacuum devices designed to provide high-power, widely tunable amplification of millimetre and sub-millimetre waves. Klystrons are designed for large-scale operation and, despite having a narrower bandwidth than TWTAs, have the advantage of coherently amplifying a reference signal, so their output may be precisely controlled in amplitude, frequency and phase.
Musical instrument amplifiers
An audio power amplifier is usually used to amplify signals such as music or speech. Several factors are especially important in the selection of musical instrument amplifiers (such as guitar amplifiers) and other audio amplifiers (although the whole sound system, from microphone to loudspeaker, affects these parameters):
- Frequency response – not just the frequency range but the requirement that the signal level varies so little across the audible frequency range that the human ear notices no variation. A typical specification for audio amplifiers may be 20 Hz to 20 kHz ±0.5 dB.
- Power output – the power level obtainable with little distortion, to obtain a sufficiently loud sound pressure level from the loudspeakers.
- Low distortion – all amplifiers and transducers distort to some extent. They cannot be perfectly linear, but aim to pass signals without affecting the harmonic content of the sound more than the human ear can tolerate. That tolerance of distortion, and indeed the possibility that some "warmth" or second harmonic distortion (Tube sound) improves the "musicality" of the sound, are subjects of great debate.
Classification of amplifier stages and systems
Many alternative classifications address different aspects of amplifier designs, and they all express some particular perspective relating the design parameters to the objectives of the circuit. Amplifier design is always a compromise of numerous factors, such as cost, power consumption, real-world device imperfections, and a multitude of performance specifications. Below are several different approaches to classification:
Input and output variables
Electronic amplifiers use one variable presented as either a current or a voltage. Either current or voltage can be used as the input and either as the output, leading to four types of amplifiers. In idealized form they are represented by each of the four types of dependent source used in linear analysis, as shown in the figure, namely:
| Input | Output | Dependent source | Amplifier type |
|-------|--------|------------------|----------------|
| I | I | current-controlled current source (CCCS) | current amplifier |
| I | V | current-controlled voltage source (CCVS) | transresistance amplifier |
| V | I | voltage-controlled current source (VCCS) | transconductance amplifier |
| V | V | voltage-controlled voltage source (VCVS) | voltage amplifier |
Each type of amplifier in its ideal form has an ideal input and output resistance that is the same as that of the corresponding dependent source:
| Amplifier type | Dependent source | Input impedance | Output impedance |
|----------------|------------------|-----------------|------------------|
| current amplifier | CCCS | 0 | ∞ |
| transresistance amplifier | CCVS | 0 | 0 |
| transconductance amplifier | VCCS | ∞ | ∞ |
| voltage amplifier | VCVS | ∞ | 0 |
In practice the ideal impedances are only approximated. For any particular circuit, a small-signal analysis is often used to find the impedance actually achieved. A small-signal AC test current Ix is applied to the input or output node, all external sources are set to AC zero, and the corresponding alternating voltage Vx across the test current source determines the impedance seen at that node as R = Vx / Ix.
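As a rough numerical illustration of this test-source method (the network and component values below are hypothetical, not taken from the article), one can build the nodal equations of a small resistive network, inject a test current, and read the impedance as R = Vx / Ix:

```python
import numpy as np

# Hypothetical two-node resistive network used only to demonstrate the test-source
# method: inject a small AC test current Ix at the input node (with all independent
# sources zeroed) and read the resulting voltage Vx.
R1, R2, R3 = 10e3, 4.7e3, 1e3   # ohms (arbitrary example values)

# Nodal admittance matrix for:  node1 --R1-- ground, node1 --R2-- node2,
#                               node2 --R3-- ground
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])

Ix = 1e-3                        # 1 mA test current into node 1
I = np.array([Ix, 0.0])          # current injected at each node
V = np.linalg.solve(G, I)        # node voltages produced by the test source

Z_in = V[0] / Ix                 # impedance seen at node 1: R = Vx / Ix
print(f"Impedance seen at the test node: {Z_in:.1f} ohms")
# Equals R1 || (R2 + R3) = 10k || 5.7k, about 3630 ohms
```

The same procedure applies to a full small-signal model of an amplifier; only the admittance matrix grows.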
Amplifiers designed to attach to a transmission line at input and/or output, especially RF amplifiers, do not fit into this classification approach. Rather than dealing with voltage or current individually, they ideally couple with an input and/or output impedance matched to the transmission line impedance, that is, match ratios of voltage to current. Many real RF amplifiers come close to this ideal. Although, for a given appropriate source and load impedance, RF amplifiers can be characterized as amplifying voltage or current, they fundamentally are amplifying power.
Common terminal
One set of classifications for amplifiers is based on which device terminal is common to both the input and the output circuit. In the case of bipolar junction transistors, the three classes are common emitter, common base, and common collector. For field-effect transistors, the corresponding configurations are common source, common gate, and common drain; for triode vacuum devices, common cathode, common grid, and common plate. The output voltage of a common-plate amplifier is essentially the same as the input (this arrangement is used because the input presents a high impedance and does not load the signal source, although it does not amplify the voltage): the output at the cathode follows the input at the grid, so the stage was commonly called a cathode follower. By analogy, the terms emitter follower and source follower are sometimes used.
Unilateral or bilateral
When an amplifier has an output that exhibits no feedback to its input side, it is called 'unilateral'. The input impedance of a unilateral amplifier is independent of the load, and the output impedance is independent of the signal source impedance.
If feedback connects part of the output back to the input of the amplifier it is called a 'bilateral' amplifier. The input impedance of a bilateral amplifier is dependent upon the load, and the output impedance is dependent upon the signal source impedance.
All amplifiers are bilateral to some degree; however they may often be modeled as unilateral under operating conditions where feedback is small enough to neglect for most purposes, simplifying analysis (see the common base article for an example).
Negative feedback is often applied deliberately to tailor amplifier behavior. Some feedback, which may be positive or negative, is unavoidable and often undesirable, introduced, for example, by parasitic elements such as the inherent capacitance between input and output of a device such as a transistor, and capacitive coupling due to external wiring. Excessive frequency-dependent positive feedback may cause what is intended to be an amplifier to become an oscillator.
Linear unilateral and bilateral amplifiers can be represented as two-port networks.
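A minimal sketch of why a bilateral amplifier's input impedance depends on its load, using the z-parameter form of a two-port (all parameter values below are made-up illustrations): with reverse transfer z12 = 0 the amplifier is unilateral and Zin stays fixed at z11, while any nonzero z12 couples the load back to the input.

```python
# Input impedance of a two-port amplifier model: Zin = z11 - z12*z21/(z22 + ZL).
# The z-parameter values are invented for illustration; z12 = 0 models a unilateral amplifier.

def input_impedance(z11, z12, z21, z22, z_load):
    """Input impedance of a two-port terminated in z_load (all values in ohms)."""
    return z11 - (z12 * z21) / (z22 + z_load)

z11, z21, z22 = 2e3, 50e3, 10e3       # hypothetical forward parameters

for z12 in (0.0, 5.0):                # unilateral vs. slightly bilateral
    for z_load in (4.0, 8.0, 1e3):
        zin = input_impedance(z11, z12, z21, z22, z_load)
        print(f"z12={z12:4.1f} ohm, ZL={z_load:7.1f} ohm -> Zin={zin:8.1f} ohm")
# With z12 = 0 the input impedance is fixed at z11; with z12 != 0 it shifts with the load.
```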
Inverting or non-inverting
Another way to classify amplifiers is by the phase relationship of the input signal to the output signal. An 'inverting' amplifier produces an output 180 degrees out of phase with the input signal (that is, a polarity inversion or mirror image of the input as seen on an oscilloscope). A 'non-inverting' amplifier maintains the phase of the input signal waveforms. An emitter follower is a type of non-inverting amplifier, indicating that the signal at the emitter of a transistor is following (that is, matching with unity gain, but perhaps with an offset) the input signal.
This description can apply to a single stage of an amplifier, or to a complete amplifier system.
Other amplifiers may be classified by their function or output characteristics. These functional descriptions usually apply to complete amplifier systems or sub-systems and rarely to individual stages.
- A servo amplifier indicates an integrated feedback loop to actively control the output at some desired level. A DC servo indicates use at frequencies down to DC, where the rapid fluctuations of an audio or RF signal do not occur. These are often used in mechanical actuators, or devices such as DC motors that must maintain a constant speed or torque. An AC servo amplifier can do this for some AC motors.
- A linear amplifier responds to different frequency components independently, and does not generate harmonic distortion or intermodulation distortion.
A nonlinear amplifier does generate distortion. For example, it may output to a lamp that must be either fully on or off based on a threshold in a continuously variable input. In other examples, a non-linear amplifier in an analog computer can provide a special transfer function, such as logarithmic—or a following tuned circuit removes harmonics generated by a nonlinear RF amplifier. Even the most linear amplifier has some nonlinearities, since the amplifying devices—transistors or vacuum tubes—follow nonlinear power laws such as square-laws and rely on circuitry techniques to reduce those effects.
- A wideband amplifier has a precise amplification factor over a wide frequency range, and is often used to boost signals for relay in communications systems. A narrowband amp amplifies a specific narrow range of frequencies, to the exclusion of other frequencies.
- An RF amplifier amplifies signals in the radio frequency range of the electromagnetic spectrum, and is often used to increase the sensitivity of a receiver or the output power of a transmitter.
- An audio amplifier amplifies audio frequencies. This category subdivides into small-signal amplification and power amplifiers that are optimised for driving speakers, sometimes with multiple amplifiers grouped together as separate or bridgeable channels to accommodate different audio reproduction requirements. Frequently used terms within audio amplifiers include:
- Preamplifier (preamp), which may include a phono preamp with RIAA equalization, or tape head preamps with CCIR equalisation filters. They may include filters or tone control circuitry.
- Power amplifier (normally drives loudspeakers), headphone amplifiers, and public address amplifiers.
- Stereo amplifiers imply two channels of output (left and right), though the term originally meant "solid" sound (referring to three-dimensional sound), so quadraphonic stereo was used for amplifiers with four channels. 5.1 and 7.1 systems refer to home theatre systems with 5 or 7 normal spatial channels plus a subwoofer channel.
- Buffer amplifiers, which may include emitter followers, provide a high impedance input for a device (perhaps another amplifier, or perhaps an energy-hungry load such as lights) that would otherwise draw too much current from the source. Line drivers are a type of buffer that feeds long or interference-prone interconnect cables, possibly with differential outputs through twisted pair cables.
- A special type of amplifier is widely used in measuring instruments for signal processing, and many other uses. These are called operational amplifiers or op-amps. The "operational" name is because this type of amplifier can be used in circuits that perform mathematical algorithmic functions, or "operations" on input signals to obtain specific types of output signals. A typical modern op-amp has differential inputs (one "inverting", one "non-inverting") and one output.
An idealised op-amp has the following characteristics:
- Infinite input impedance (so it does not load the circuitry at its input)
- Zero output impedance
- Infinite gain
- Zero propagation delay
The performance of an op-amp with these characteristics is entirely defined by the (usually passive) components that form a negative feedback loop around it. The amplifier itself does not affect the output.
Modern op-amps are usually provided as integrated circuits, rather than constructed from discrete components. All real-world op-amps fall short of the idealised specification above—but some modern components have remarkable performance and come close in some respects.
Interstage coupling method
Amplifiers are sometimes classified by the coupling method of the signal at the input, output, or between stages. Different types of these include:
- Resistive-capacitive (RC) coupled amplifier, using a network of resistors and capacitors
- By design these amplifiers cannot amplify DC signals, as the capacitors block the DC component of the input signal. RC-coupled amplifiers were used very often in circuits with vacuum tubes or discrete transistors. In integrated circuits, a few additional transistors on a chip are much cheaper and smaller than a capacitor.
- Inductive-capacitive (LC) coupled amplifier, using a network of inductors and capacitors
- This kind of amplifier is most often used in selective radio-frequency circuits.
- Transformer coupled amplifier, using a transformer to match impedances or to decouple parts of the circuits
- Quite often LC-coupled and transformer-coupled amplifiers cannot be distinguished as a transformer is some kind of inductor.
- Direct coupled amplifier, using no impedance and bias matching components
- This class of amplifier was very uncommon in the vacuum-tube days, when the anode (output) voltage was at greater than several hundred volts and the grid (input) voltage at a few volts negative. So they were used only if the gain was specified down to DC (e.g., in an oscilloscope). In the context of modern electronics, developers are encouraged to use directly coupled amplifiers whenever possible.
Frequency range
Depending on the frequency range and other properties, amplifiers are designed according to different principles.
- Frequency ranges down to DC are used only when this property is needed. DC amplification leads to specific complications that are avoided if possible; DC-blocking capacitors are added to remove DC and subsonic frequencies from audio amplifiers.
- Depending on the frequency range specified, different design principles must be used. Up to the MHz range, only lumped ("discrete") properties need be considered; e.g., a terminal has an input impedance.
- As soon as any connection within the circuit gets longer than perhaps 1% of the wavelength of the highest specified frequency (e.g., at 100 MHz the wavelength is 3 m, so the critical connection length is approximately 3 cm; see the short calculation after this list), design properties change radically. For example, a specified length and width of a PCB trace can be used as a selective or impedance-matching entity.
- Above a few hundred MHz, it gets difficult to use discrete elements, especially inductors. In most cases, PCB traces of very closely defined shapes are used instead.
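The 1%-of-wavelength rule of thumb above is easy to tabulate. The sketch below assumes free-space propagation (a real PCB trace has a velocity factor below 1, shortening the critical length further) and reproduces the 100 MHz example.

```python
# Rule-of-thumb check: a connection longer than ~1% of the wavelength of the highest
# frequency of interest starts behaving as a transmission line. Free-space propagation
# is assumed; on a real PCB the velocity factor (~0.5-0.7) shortens the length further.

C = 3.0e8  # speed of light, m/s

def critical_length(f_hz, fraction=0.01, velocity_factor=1.0):
    wavelength = velocity_factor * C / f_hz
    return fraction * wavelength

for f in (1e6, 100e6, 1e9, 10e9):
    print(f"{f/1e6:8.0f} MHz: wavelength {C/f:8.3f} m, "
          f"critical length ~{critical_length(f)*100:6.2f} cm")
# At 100 MHz the wavelength is 3 m, so the critical length is about 3 cm,
# matching the figure quoted in the text.
```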
The frequency range handled by an amplifier might be specified in terms of bandwidth (normally implying a response that is 3 dB down when the frequency reaches the specified bandwidth), or by specifying a frequency response that is within a certain number of decibels between a lower and an upper frequency (e.g. "20 Hz to 20 kHz plus or minus 1 dB").
Power amplifier classes
Power amplifier circuits (output stages) are classified as A, B, AB and C for analog designs, and class D and E for switching designs, based on the proportion of each input cycle (the conduction angle) during which an amplifying device is passing current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conduction angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency. The various classes are introduced below (a short sketch after the class list restates the boundaries), followed by a more detailed discussion under their individual headings further down.
Conduction angle classes
- Class A
- 100% of the input signal is used (conduction angle Θ = 360°). The active element remains conducting all of the time.
- Class B
- 50% of the input signal is used (Θ = 180°); the active element carries current half of each cycle, and is turned off for the other half.
- Class AB
- Class AB is intermediate between classes A and B; the two active elements each conduct more than half of the time.
- Class C
- Less than 50% of the input signal is used (conduction angle Θ < 180°).
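The class boundaries listed above can be restated compactly; the following sketch simply maps a conduction angle to the conventional letter (the handling of the exact boundary values is a matter of convention):

```python
# Map a conduction angle (in degrees) to the conventional analog class letter,
# following the definitions listed above.

def amplifier_class(conduction_angle_deg):
    if conduction_angle_deg >= 360:
        return "A"    # device conducts for the full cycle
    if conduction_angle_deg > 180:
        return "AB"   # more than half, less than the whole cycle
    if conduction_angle_deg == 180:
        return "B"    # exactly half of each cycle
    return "C"        # less than half of each cycle

for angle in (360, 270, 180, 120, 60):
    print(f"{angle:3d} degrees -> class {amplifier_class(angle)}")
```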
A "Class D" amplifier uses some form of pulse-width modulation to control the output devices; the conduction angle of each device is no longer related directly to the input signal but instead varies in pulse width. These are sometimes called "digital" amplifiers because the output device is switched fully on or off, and not carrying current proportional to the signal amplitude.
- Additional classes
- There are several other amplifier classes, although they are mainly variations of the previous classes. For example, class-G and class-H amplifiers are marked by variation of the supply rails (in discrete steps or in a continuous fashion, respectively) following the input signal. Wasted heat on the output devices can be reduced as excess voltage is kept to a minimum. The amplifier that is fed with these rails itself can be of any class. These kinds of amplifiers are more complex, and are mainly used for specialized applications, such as very high-power units. Also, class-E and class-F amplifiers are commonly described in literature for radio-frequency applications where efficiency of the traditional classes is important, yet several aspects deviate substantially from their ideal values. These classes use harmonic tuning of their output networks to achieve higher efficiency and can be considered a subset of class C due to their conduction-angle characteristics.
Class A
Amplifying devices operating in class A conduct over the whole of the input cycle. A class-A amplifier is distinguished by the output stage being biased into class A (see definition above). Subclass A2 is sometimes used to refer to vacuum-tube class-A stages where the grid is allowed to be driven slightly positive on signal peaks, resulting in slightly more power than normal class A (A1, where the grid is always negative), but incurring more distortion.
Advantages of class-A amplifiers
- Class-A designs are simpler than other classes; for example class-AB and -B designs require two devices (push–pull output) to handle both halves of the waveform; class A can use a single device single-ended.
- The amplifying element is biased so the device is always conducting to some extent, normally implying the quiescent (small-signal) collector current (for transistors; drain current for FETs or anode/plate current for vacuum tubes) is close to the most linear portion of its transconductance curve.
- Because the device is never shut off completely there is no "turn on" time, little problem with charge storage, and generally better high frequency performance and feedback loop stability (and usually fewer high-order harmonics).
- The point at which the device comes closest to being cut off is not close to zero signal, so the problem of crossover distortion associated with class-AB and -B designs is avoided.
Disadvantage of class-A amplifiers
- They are very inefficient. A theoretical maximum of 50% is obtainable with inductive output coupling and only 25% with capacitive coupling, unless deliberate use of nonlinearities is made (such as in square-law output stages); a short worked sketch follows below. In a power amplifier, this not only wastes power and limits battery operation, but also increases costs and may restrict the output devices that can be used (for example, ruling out some audio triodes for driving modern low-efficiency loudspeakers). Inefficiency comes not just from the fact that the device is always conducting to some extent (that happens even with class AB, yet its efficiency can be close to class B). It is that the standing current is roughly half the maximum output current (though this can be less with a square-law output stage), and a large part of the power supply voltage develops across the output device at low signal levels (as with classes AB and B, but unlike output stages such as class D). If high output powers are needed from a class-A circuit, the power waste (and the accompanying heat) becomes significant. For every watt delivered to the load, the amplifier itself, at best, dissipates another watt. For large powers this means very large and expensive power supplies and heat sinking.
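The 25% and 50% ceilings quoted above follow from simple bookkeeping of supply power against sine output power; the sketch below works through both cases for an idealized single-ended stage (the supply voltage and load value are arbitrary examples):

```python
# Idealized single-ended class-A stage, capacitively coupled into a resistive load.
# The quiescent point sits at half the supply, so the standing current is Vcc/(2*RL).
Vcc, RL = 20.0, 8.0                     # hypothetical supply voltage and load

I_q   = Vcc / (2 * RL)                  # quiescent (standing) current
P_sup = Vcc * I_q                       # power drawn from the supply, signal or not
V_pk  = Vcc / 2                         # largest undistorted sine peak at the output
P_out = V_pk ** 2 / (2 * RL)            # corresponding sine power in the load

print(f"capacitive (resistive) coupling: eta = {P_out / P_sup:.0%}")        # -> 25%

# With an ideal output transformer the collector can swing a full Vcc about the
# quiescent point; keeping the same supply power means a reflected load of 2*RL.
P_out_xfmr = Vcc ** 2 / (2 * (2 * RL))  # full-swing sine power into the reflected load
print(f"transformer (inductive) coupling: eta = {P_out_xfmr / P_sup:.0%}")  # -> 50%
```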
Class-A designs have largely been superseded by the more efficient designs for power amplifiers, though they remain popular with some hobbyists, mostly for their simplicity. There is a market for expensive high fidelity class-A amps considered a "cult item" amongst audiophiles mainly for their absence of crossover distortion and reduced odd-harmonic and high-order harmonic distortion.
Single-ended and triode class-A amplifiers
Some aficionados who prefer class-A amplifiers also prefer the use of thermionic valve (or "tube") designs instead of transistors, especially in single-ended triode output configurations, for several claimed reasons:
- Single-ended output stages (be they tube or transistor) have an asymmetrical transfer function, meaning that even order harmonics in the created distortion tend not to be canceled (as they are in push–pull output stages); for tubes, or FETs, most of the distortion is second-order harmonics, from the square law transfer characteristic, which to some produces a "warmer" and more pleasant sound.
- For those who prefer low distortion figures, the use of tubes with class A (generating little odd-harmonic distortion, as mentioned above) together with symmetrical circuits (such as push–pull output stages, or balanced low-level stages) results in the cancellation of most of the even distortion harmonics, hence the removal of most of the distortion.
- Distortion is characteristic of the sound of electric guitar amplifiers.
- Historically, valve amplifiers often used a class-A power amplifier simply because valves are large and expensive; many class-A designs use only a single device.
Transistors are much cheaper, and so more elaborate designs that give greater efficiency but use more parts are still cost-effective. A classic application for a pair of class-A devices is the long-tailed pair, which is exceptionally linear, and forms the basis of many more complex circuits, including many audio amplifiers and almost all op-amps.
Class-A amplifiers are often used in output stages of high quality op-amps (although the accuracy of the bias in low cost op-amps such as the 741 may result in class A or class AB or class B, varying from device to device or with temperature). They are sometimes used as medium-power, low-efficiency, and high-cost audio power amplifiers. The power consumption is unrelated to the output power. At idle (no input), the power consumption is essentially the same as at high output volume. The result is low efficiency and high heat dissipation.
Class B
Class-B amplifiers only amplify half of the input wave cycle, thus creating a large amount of distortion, but their efficiency is greatly improved and is much better than class A. Class-B amplifiers are also favored in battery-operated devices, such as transistor radios. Class B has a maximum theoretical efficiency of π/4 (i.e., about 78.5%). This is because the amplifying element is switched off altogether half of the time, and so cannot dissipate power. A single class-B element is rarely found in practice, though it has been used for driving the loudspeaker in the early IBM Personal Computers with beeps, and it can be used in RF power amplifiers where the distortion levels are less important. However, class C is more commonly used for this.
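The π/4 figure comes from averaging the half-sine current that each supply rail delivers against the sine power reaching the load; a minimal check, assuming ideal devices that swing all the way to the rails:

```python
import math

# Ideal complementary class-B stage running from +/-Vcc rails, full-swing sine output.
Vcc, RL = 20.0, 8.0                  # hypothetical rail magnitude and load (arbitrary values)
V_pk = Vcc                           # ideal devices: the output swings right to the rails

P_out = V_pk ** 2 / (2 * RL)         # average sine power delivered to the load
I_avg = V_pk / (math.pi * RL)        # average of the half-sine current drawn from each rail
P_sup = 2 * Vcc * I_avg              # both rails together

print(f"theoretical class-B efficiency: {P_out / P_sup:.1%}")   # pi/4, about 78.5%
```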
A practical circuit using class-B elements is the push–pull stage, such as the very simplified complementary pair arrangement shown below. Here, complementary or quasi-complementary devices are each used for amplifying the opposite halves of the input signal, which is then recombined at the output. This arrangement gives excellent efficiency, but can suffer from the drawback that there is a small mismatch in the cross-over region – at the "joins" between the two halves of the signal, as one output device has to take over supplying power exactly as the other finishes. This is called crossover distortion. An improvement is to bias the devices so they are not completely off when they're not in use. This approach is called class AB operation.
Class AB
Class AB is widely considered a good compromise for audio power amplifiers, since much of the time the music is quiet enough that the signal stays in the "class A" region, where it is amplified with good fidelity, and by definition a signal passing out of this region is large enough that the distortion products typical of class B are relatively small. The crossover distortion can be reduced further by using negative feedback.
In class-AB operation, each device operates the same way as in class B over half the waveform, but also conducts a small amount on the other half. As a result, the region where both devices simultaneously are nearly off (the "dead zone") is reduced. The result is that when the waveforms from the two devices are combined, the crossover is greatly minimised or eliminated altogether. The exact choice of quiescent current, the standing current through both devices when there is no signal, makes a large difference to the level of distortion (and to the risk of thermal runaway, that may damage the devices); often the bias voltage applied to set this quiescent current has to be adjusted with the temperature of the output transistors (for example in the circuit at the beginning of the article the diodes would be mounted physically close to the output transistors, and chosen to have a matched temperature coefficient). Another approach (often used as well as thermally tracking bias voltages) is to include small value resistors in series with the emitters.
Class AB sacrifices some efficiency over class B in favor of linearity, thus is less efficient (below 78.5% for full-amplitude sinewaves in transistor amplifiers, typically; much less is common in class-AB vacuum-tube amplifiers). It is typically much more efficient than class A.
Sometimes a numeral is added for vacuum-tube stages. If the grid voltage is always negative with respect to the cathode the class is AB1. If the grid is allowed to go slightly positive (hence drawing grid current, adding more distortion, but giving slightly higher output power) on signal peaks the class is AB2.
Class C
Class-C amplifiers conduct less than 50% of the input signal, and the distortion at the output is high, but high efficiencies (up to 90%) are possible. The usual application for class-C amplifiers is in RF transmitters operating at a single fixed carrier frequency, where the distortion is controlled by a tuned load on the amplifier. The input signal is used to switch the active device, causing pulses of current to flow through a tuned circuit forming part of the load.
The class-C amplifier has two modes of operation: tuned and untuned. The diagram shows a waveform from a simple class-C circuit without the tuned load. This is called untuned operation, and the analysis of the waveforms shows the massive distortion that appears in the signal. When the proper load (e.g., an inductive-capacitive filter plus a load resistor) is used, two things happen. The first is that the output's bias level is clamped with the average output voltage equal to the supply voltage. This is why tuned operation is sometimes called a clamper. This allows the waveform to be restored to its proper shape despite the amplifier having only a one-polarity supply. This is directly related to the second phenomenon: the waveform on the center frequency becomes less distorted. The residual distortion is dependent upon the bandwidth of the tuned load, with the center frequency seeing very little distortion, but greater attenuation the farther from the tuned frequency that the signal gets.
The tuned circuit resonates at one frequency, the fixed carrier frequency, and so the unwanted frequencies are suppressed, and the wanted full signal (sine wave) is extracted by the tuned load. The signal bandwidth of the amplifier is limited by the Q-factor of the tuned circuit but this is not a serious limitation. Any residual harmonics can be removed using a further filter.
In practical class-C amplifiers a tuned load is invariably used. In one common arrangement the resistor shown in the circuit above is replaced with a parallel-tuned circuit consisting of an inductor and capacitor in parallel, whose components are chosen to resonate at the frequency of the input signal. Power can be coupled to a load by transformer action with a secondary coil wound on the inductor. The average voltage at the drain is then equal to the supply voltage, and the signal voltage appearing across the tuned circuit varies from near zero to near twice the supply voltage during the RF cycle. The input circuit is biased so that the active element (e.g. transistor) conducts for only a fraction of the RF cycle, usually one third (120 degrees) or less.
The active element conducts only while the drain voltage is passing through its minimum. By this means, power dissipation in the active device is minimised, and efficiency increased. Ideally, the active element would pass only an instantaneous current pulse while the voltage across it is zero: it then dissipates no power and 100% efficiency is achieved. However, practical devices have a limit to the peak current they can pass, and the pulse must therefore be widened, to around 120 degrees, to obtain a reasonable amount of power, and the efficiency is then 60–70%.
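For an idealized stage with full output swing, the theoretical efficiency as a function of conduction angle follows from the Fourier components of the current pulse; the sketch below reproduces that standard result (real class-C stages, with their widened pulses and device losses, land nearer the 60–70% quoted above):

```python
import math

def ideal_efficiency(conduction_angle_deg):
    """Theoretical efficiency of a reduced-conduction-angle stage with full output
    swing: eta = I_fundamental / (2 * I_dc) of the tip-of-sine current pulse."""
    t = math.radians(conduction_angle_deg) / 2.0          # half the conduction angle
    i1  = t - math.sin(t) * math.cos(t)                   # proportional to the fundamental
    idc = math.sin(t) - t * math.cos(t)                   # proportional to the DC component
    return i1 / (2.0 * idc)

for angle in (360, 180, 120, 90, 60):
    print(f"conduction angle {angle:3d} deg -> ideal efficiency {ideal_efficiency(angle):.1%}")
# 360 deg gives 50% (class A), 180 deg gives 78.5% (class B), and 120 deg or less
# pushes the ideal figure toward 90% and beyond (class C).
```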
Class D
In the class-D amplifier the input signal is converted to a sequence of higher-voltage output pulses. The averaged-over-time power values of these pulses are directly proportional to the instantaneous amplitude of the input signal. The frequency of the output pulses is typically ten or more times the highest frequency in the input signal to be amplified. The output pulses contain unwanted spectral components (that is, the pulse frequency and its harmonics), which must be removed by a passive low-pass filter. The resulting filtered signal is then an amplified replica of the input.
These amplifiers use pulse width modulation, pulse density modulation (sometimes referred to as pulse frequency modulation) or a more advanced form of modulation such as Delta-sigma modulation (for example, in the Analog Devices AD1990 class-D audio power amplifier). Output stages such as those used in pulse generators are examples of class-D amplifiers. The term class D is usually applied to devices intended to reproduce signals with a bandwidth well below the switching frequency.
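A toy sketch of the modulate-then-filter idea described above: the input is compared with a fast triangle carrier to produce a two-level PWM waveform, and a low-pass filter recovers a scaled copy. The carrier frequency, rail voltage, and one-pole filter below are arbitrary illustrations, not a practical design.

```python
import numpy as np

# Toy class-D chain: compare the input with a fast triangle carrier to get a
# two-level (PWM) waveform, then low-pass filter to recover an amplified copy.
fs     = 2_000_000        # simulation sample rate, Hz
f_sig  = 1_000            # input tone, Hz
f_carr = 200_000          # PWM carrier, well above the audio band
V_rail = 20.0             # output pulses swing between +V_rail and -V_rail

t = np.arange(0, 0.005, 1 / fs)
signal  = 0.5 * np.sin(2 * np.pi * f_sig * t)              # input, in [-1, 1]
carrier = 2 * np.abs(2 * ((t * f_carr) % 1.0) - 1) - 1     # triangle, in [-1, 1]
pwm     = np.where(signal > carrier, V_rail, -V_rail)      # switched output stage

# Crude one-pole RC low-pass (cutoff ~20 kHz) standing in for the real LC filter.
rc    = 1 / (2 * np.pi * 20_000)
alpha = (1 / fs) / (rc + 1 / fs)
out = np.zeros_like(pwm)
for i in range(1, len(pwm)):
    out[i] = out[i - 1] + alpha * (pwm[i] - out[i - 1])

peak = np.max(np.abs(out[len(out) // 2:]))
print(f"peak of filtered output: about {peak:.1f} V")
# Close to V_rail * 0.5 = 10 V, plus some residual switching ripple that the
# crude one-pole filter lets through.
```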
Class-D amplifiers can be controlled by either analog or digital circuits. The digital control introduces additional distortion called quantization error caused by its conversion of the input signal to a digital value.
The main advantage of a class-D amplifier is power efficiency. Because the output pulses have a fixed amplitude, the switching elements (usually MOSFETs, but valves (vacuum tubes) and bipolar transistors were once used) are switched either completely on or completely off, rather than operated in linear mode. A MOSFET operates with the lowest resistance when fully on and thus (excluding when fully off) has the lowest power dissipation when in that condition. Compared to an equivalent class-AB device, a class-D amplifier's lower losses permit the use of a smaller heat sink for the MOSFETs while also reducing the amount of input power required, allowing for a lower-capacity power supply design. Therefore, class-D amplifiers are typically smaller than an equivalent class-AB amplifier.
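A back-of-the-envelope comparison (all numbers hypothetical) of why a fully-on switch dissipates so much less than a linear device carrying the same load current:

```python
# Compare dissipation in a fully-on switch (class D) with a linear device that is
# dropping half the supply at the same load current. Numbers are illustrative only.
I_load  = 5.0          # amperes through the output device
R_ds_on = 0.05         # ohms, on-resistance of a fully enhanced MOSFET (hypothetical)
V_drop_linear = 10.0   # volts across a linear (class AB/B) device mid-swing (hypothetical)

P_switch = I_load ** 2 * R_ds_on      # conduction loss when switched fully on
P_linear = I_load * V_drop_linear     # dissipation of the linear device at this point

print(f"fully-on switch: {P_switch:.2f} W   linear device: {P_linear:.1f} W")
# 1.25 W versus 50 W for the same load current, before switching losses are added.
```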
Class-D amplifiers have been widely used to control motors, but they are now also used as audio power amplifiers, with some extra circuitry to allow analogue to be converted to a much higher frequency pulse width modulated signal.
High quality class-D audio power amplifiers have now appeared on the market. These designs have been said to rival traditional AB amplifiers in terms of quality. An early use of class-D amplifiers was high-power subwoofer amplifiers in cars. Because subwoofers are generally limited to a bandwidth of no higher than 150 Hz, the switching speed for the amplifier does not have to be as high as for a full range amplifier, allowing simpler designs. Class-D amplifiers for driving subwoofers are relatively inexpensive in comparison to class-AB amplifiers.
The letter D used to designate this amplifier class is simply the next letter after C, and does not stand for digital. Class-D and class-E amplifiers are sometimes mistakenly described as "digital" because the output waveform superficially resembles a pulse-train of digital symbols, but a class-D amplifier merely converts an input waveform into a continuously pulse-width modulated (square wave) analog signal. (A digital waveform would be pulse-code modulated.)
Class E
The class-E/F amplifier is a highly efficient switching power amplifier, typically used at such high frequencies that the switching time becomes comparable to the duty time. As in the class-D amplifier, the transistor is connected via a series LC circuit to the load, and connected via a large inductor to the supply voltage. The supply voltage is connected to ground via a large capacitor to prevent any RF signals leaking into the supply. The class-E amplifier adds a capacitor (C) between the transistor and ground and uses a defined inductor L1 to connect to the supply voltage.
The following description ignores DC, which can be added easily afterwards. The above-mentioned C and L are in effect a parallel LC circuit to ground. When the transistor is on, it pushes current through the series LC circuit into the load and some current begins to flow into the parallel LC circuit to ground. Then the series LC circuit swings back and compensates the current into the parallel LC circuit. At this point the current through the transistor is zero and it is switched off. Both LC circuits are now filled with energy in C and L0. The whole circuit performs a damped oscillation. The damping by the load has been adjusted so that some time later the energy from the inductors has gone into the load, but the energy in C0 peaks at the original value, in turn restoring the original voltage, so that the voltage across the transistor is zero again and it can be switched on.
With load, frequency, and duty cycle (0.5) as given parameters and the constraint that the voltage is not only restored, but peaks at the original voltage, the four parameters (L, L0, C and C0) are determined. The class-E amplifier takes the finite on resistance into account and tries to make the current touch the bottom at zero. This means that the voltage and the current at the transistor are symmetric with respect to time. The Fourier transform allows an elegant formulation to generate the complicated LC networks and says that the first harmonic is passed into the load, all even harmonics are shorted and all higher odd harmonics are open.
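As a very rough orientation only (the component values below are invented, and a real class-E design sets its elements from the published design equations rather than from simple resonance), the series output tank is tuned in the neighbourhood of the switching frequency:

```python
import math

# Hypothetical series output tank for a switching PA stage; values are illustrative only.
L0 = 1.2e-6     # henries
C0 = 470e-12    # farads

f0 = 1 / (2 * math.pi * math.sqrt(L0 * C0))
print(f"series tank resonates near {f0 / 1e6:.2f} MHz")
# The switching frequency is chosen in this neighbourhood so that the tank passes the
# fundamental to the load while the shunt capacitance shapes the drain voltage to
# return to zero just as the transistor is switched on again.
```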
Class E uses a significant amount of second-harmonic voltage. The second harmonic can be used to reduce the overlap with edges with finite sharpness. For this to work, energy on the second harmonic has to flow from the load into the transistor, and no source for this is visible in the circuit diagram. In reality, the impedance is mostly reactive and the only reason for it is that class E is a class F (see below) amplifier with a much simplified load network and thus has to deal with imperfections.
In many amateur simulations of class-E amplifiers, sharp current edges are assumed, nullifying the very motivation for class E; measurements near the transit frequency of the transistors show very symmetric curves, which look very similar to class-F simulations.
The class-E amplifier was invented in 1972 by Nathan O. Sokal and Alan D. Sokal, and details were first published in 1975. Some earlier reports on this operating class have been published in Russian.
Class F
In push–pull amplifiers and in CMOS, the even harmonics of both transistors just cancel. Experiment shows that a square wave can be generated by those amplifiers. Theoretically square waves consist of odd harmonics only. In a class-D amplifier, the output filter blocks all harmonics; i.e., the harmonics see an open load. So even small currents in the harmonics suffice to generate a voltage square wave. The current is in phase with the voltage applied to the filter, but the voltage across the transistors is out of phase. Therefore, there is a minimal overlap between current through the transistors and voltage across the transistors. The sharper the edges, the lower the overlap.
While in class D, transistors and the load exist as two separate modules, class F admits imperfections like the parasitics of the transistor and tries to optimise the global system to have a high impedance at the harmonics. Of course there has to be a finite voltage across the transistor to push the current across the on-state resistance. Because the combined current through both transistors is mostly in the first harmonic, it looks like a sine. That means that in the middle of the square the maximum of current has to flow, so it may make sense to have a dip in the square or in other words to allow some overswing of the voltage square wave. A class-F load network by definition has to transmit below a cutoff frequency and reflect above.
Any frequency lying below the cutoff and having its second harmonic above the cutoff can be amplified, that is, an octave bandwidth. On the other hand, an inductive-capacitive series circuit with a large inductance and a tunable capacitance may be simpler to implement. By reducing the duty cycle below 0.5, the output amplitude can be modulated. The voltage square waveform degrades, but any overheating is compensated by the lower overall power flowing. Any load mismatch behind the filter can act only on the first-harmonic current waveform; clearly only a purely resistive load makes sense, and then the lower the resistance, the higher the current.
Class F can be driven by sine or by a square wave, for a sine the input can be tuned by an inductor to increase gain. If class F is implemented with a single transistor, the filter is complicated to short the even harmonics. All previous designs use sharp edges to minimise the overlap.
Classes G and H
There are a variety of amplifier designs that enhance class-AB output stages with more efficient techniques to achieve greater efficiencies with low distortion. These designs are common in large audio amplifiers since the heatsinks and power transformers would be prohibitively large (and costly) without the efficiency increases. The terms "class G" and "class H" are used interchangeably to refer to different designs, varying in definition from one manufacturer or paper to another.
Class-G amplifiers (which use "rail switching" to decrease power consumption and increase efficiency) are more efficient than class-AB amplifiers. These amplifiers provide several power rails at different voltages and switch between them as the signal output approaches each level. Thus, the amplifier increases efficiency by reducing the wasted power at the output transistors. Class-G amplifiers are more efficient than class AB but less efficient when compared to class D, without the negative EMI effects of class D.
Class-H amplifiers take the idea of class G one step further, creating an infinitely variable supply rail. This is done by modulating the supply rails so that the rails are only a few volts larger than the output signal at any given time. The output stage operates at its maximum efficiency all the time. Switched-mode power supplies can be used to create the tracking rails. Significant efficiency gains can be achieved, but with the drawback of more complicated supply design and reduced THD performance. In common designs, a voltage drop of about 10 V is maintained over the output transistors in class-H circuits. The picture above shows the positive supply voltage of the output stage and the voltage at the speaker output. The boost of the supply voltage is shown for a real music signal.
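A schematic sketch of the distinction (rail voltages and headroom below are purely illustrative): class G snaps between a small set of fixed rails, while class H tracks the output with only a few volts of headroom.

```python
# Illustrative rail-selection logic for class G (discrete rails) versus
# class H (continuously tracking rail). Voltages and headroom are arbitrary.

RAILS_G  = (15.0, 30.0, 60.0)   # available positive supply rails, volts
HEADROOM = 5.0                  # volts kept across the output devices in class H

def class_g_rail(v_out):
    """Pick the lowest fixed rail that still clears the instantaneous output."""
    for rail in RAILS_G:
        if abs(v_out) + HEADROOM <= rail:
            return rail
    return RAILS_G[-1]

def class_h_rail(v_out, v_max=60.0):
    """Track the output, keeping only a small headroom above it."""
    return min(abs(v_out) + HEADROOM, v_max)

for v in (2.0, 12.0, 28.0, 50.0):
    print(f"|Vout|={v:5.1f} V   class G rail: {class_g_rail(v):5.1f} V"
          f"   class H rail: {class_h_rail(v):5.1f} V")
```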
The voltage signal shown is thus a larger version of the input, but has been changed in sign (inverted) by the amplification. Other arrangements of amplifying device are possible, but that given (that is, common emitter, common source or common cathode) is the easiest to understand and employ in practice. If the amplifying element is linear, the output is a faithful copy of the input, only larger and inverted. In practice, transistors are not linear, and the output only approximates the input. Nonlinearity from any of several sources is the origin of distortion within an amplifier. The class of amplifier (A, B, AB or C) depends on how the amplifying device is biased. The diagrams omit the bias circuits for clarity.
Any real amplifier is an imperfect realization of an ideal amplifier. An important limitation of a real amplifier is that the output it generates is ultimately limited by the power available from the power supply. An amplifier saturates and clips the output if the input signal becomes too large for the amplifier to reproduce or exceeds operational limits for the device.
Doherty amplifiers
The Doherty, a hybrid configuration, is receiving new attention. It was invented in 1934 by William H. Doherty for Bell Laboratories, whose sister company, Western Electric, manufactured radio transmitters. The Doherty amplifier consists of a class-B primary or carrier stage in parallel with a class-C auxiliary or peak stage. The input signal splits to drive the two amplifiers, and a combining network sums the two output signals. Phase-shifting networks are used in the inputs and outputs. During periods of low signal level, the class-B amplifier efficiently operates on the signal and the class-C amplifier is cut off and consumes little power. During periods of high signal level, the class-B amplifier delivers its maximum power and the class-C amplifier delivers up to its maximum power. The efficiency of previous AM transmitter designs was proportional to modulation but, with average modulation typically around 20%, transmitters were limited to less than 50% efficiency. In Doherty's design, even with zero modulation, a transmitter could achieve at least 60% efficiency.
As a successor to Western Electric for broadcast transmitters, the Doherty concept was considerably refined by Continental Electronics Manufacturing Company of Dallas, TX. Perhaps the ultimate refinement was the screen-grid modulation scheme invented by Joseph B. Sainton. The Sainton amplifier consists of a class-C primary or carrier stage in parallel with a class-C auxiliary or peak stage. The stages are split and combined through 90-degree phase shifting networks as in the Doherty amplifier. The unmodulated radio frequency carrier is applied to the control grids of both tubes. Carrier modulation is applied to the screen grids of both tubes. The bias point of the carrier and peak tubes is different, and is established such that the peak tube is cut off when modulation is absent (and the amplifier is producing rated unmodulated carrier power) whereas both tubes contribute twice the rated carrier power during 100% modulation (as four times the carrier power is required to achieve 100% modulation). As both tubes operate in class C, a significant improvement in efficiency is thereby achieved in the final stage. In addition, as the tetrode carrier and peak tubes require very little drive power, a significant improvement in efficiency within the driver stage is achieved as well (317C, et al.). The released version of the Sainton amplifier employs a cathode-follower modulator, not a push–pull modulator. Previous Continental Electronics designs, by James O. Weldon and others, retained most of the characteristics of the Doherty amplifier but added screen-grid modulation of the driver (317B, et al.).
The Doherty amplifier remains in use in very-high-power AM transmitters, but for lower-power AM transmitters, vacuum-tube amplifiers in general were eclipsed in the 1980s by arrays of solid-state amplifiers, which could be switched on and off with much finer granularity in response to the requirements of the input audio. However, interest in the Doherty configuration has been revived by cellular-telephone and wireless-Internet applications where the sum of several constant envelope users creates an aggregate AM result. The main challenge of the Doherty amplifier for digital transmission modes is in aligning the two stages and getting the class-C amplifier to turn on and off very quickly.
Recently, Doherty amplifiers have found widespread use in cellular base station transmitters for GHz frequencies. Implementations for transmitters in mobile devices have also been demonstrated.
Various newer classes of amplifier, as defined by the technical details of their topology, have been developed on the basis of previously existing operating classes. For example, Crown's K and I-Tech Series as well as several other models use Crown's patented class I (or BCA) technology. Lab.gruppen use a form of class-D amplifier called class TD or tracked class D that tracks the waveform to more accurately amplify it without the drawbacks of traditional class-D amplifiers.
"Class S" was the name of a design published by A M Sandman in Wireless World (sept. 1982). It had some elements in common with a current dumping design. It comprises a class A input stage coupled to a class B output stage, with a specific feedback design. Technics used a modified design in their class AA (marketed) output stage.
"Class T" is a trademark of TriPath company, which manufactures audio amplifier ICs. This new class T is a revision of the common class-D amplifier, but with changes to ensure fidelity over the full audio spectrum, unlike traditional class-D designs. It operates at different frequencies depending on the power output, with values ranging from as low as 200 kHz to 1.2 MHz, using a proprietary modulator. Tripath ceased operations in 2007, its patents acquired by Cirrus Logic for their Mixed-Signal Audio division. Some Kenwood Recorder use class-W amplifiers.
"Class Z" is a trademark of Zetex Semiconductors (now part of Diodes Inc. of Dallas, TX) and is a direct-digital-feedback technology. Zetex-patented circuits are being used in the latest power amplifiers by NAD Electronics of Canada.
Amplifiers are implemented using active elements of different kinds:
- The first active elements were relays. They were for example used in transcontinental telegraph lines: a weak current was used to switch the voltage of a battery to the outgoing line.
- For transmitting audio, carbon microphones were used as the active element. This was used to modulate a radio-frequency source in one of the first AM audio transmissions, by Reginald Fessenden on Dec. 24, 1906.
- In the 1960s, the transistor started to take over. These days, discrete transistors are still used in high-power amplifiers and in specialist audio devices.
- Up to the early 1970s, most amplifiers used vacuum tubes. Today, tubes are used for specialist audio applications such as guitar amplifiers and audiophile amplifiers. Many broadcast transmitters still use vacuum tubes.
- Beginning in the 1970s, more and more transistors were connected on a single chip therefore creating the integrated circuit. A large number of amplifiers commercially available today are based on integrated circuits.
For special purposes, other active elements have been used. For example, in the early days of satellite communication, parametric amplifiers were used. The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station.
The practical amplifier circuit to the right could be the basis for a moderate-power audio amplifier. It features a typical (though substantially simplified) design as found in modern amplifiers, with a class-AB push–pull output stage, and uses some overall negative feedback. Bipolar transistors are shown, but this design would also be realizable with FETs or valves.
The input signal is coupled through capacitor C1 to the base of transistor Q1. The capacitor allows the AC signal to pass, but blocks the DC bias voltage established by resistors R1 and R2 so that any preceding circuit is not affected by it. Q1 and Q2 form a differential amplifier (an amplifier that multiplies the difference between two inputs by some constant), in an arrangement known as a long-tailed pair. This arrangement is used to conveniently allow the use of negative feedback, which is fed from the output to Q2 via R7 and R8.
The negative feedback into the difference amplifier allows the amplifier to compare the input to the actual output. The amplified signal from Q1 is directly fed to the second stage, Q3, which is a common emitter stage that provides further amplification of the signal and the DC bias for the output stages, Q4 and Q5. R6 provides the load for Q3 (A better design would probably use some form of active load here, such as a constant-current sink). So far, all of the amplifier is operating in class A. The output pair are arranged in class-AB push–pull, also called a complementary pair. They provide the majority of the current amplification (while consuming low quiescent current) and directly drive the load, connected via DC-blocking capacitor C2. The diodes D1 and D2 provide a small amount of constant voltage bias for the output pair, just biasing them into the conducting state so that crossover distortion is minimized. That is, the diodes push the output stage firmly into class-AB mode (assuming that the base-emitter drop of the output transistors is reduced by heat dissipation).
This design is simple, but a good basis for a practical design because it automatically stabilises its operating point, since feedback internally operates from DC up through the audio range and beyond. Further circuit elements would probably be found in a real design that would roll off the frequency response above the needed range to prevent the possibility of unwanted oscillation. Also, the use of fixed diode bias as shown here can cause problems if the diodes are not both electrically and thermally matched to the output transistors – if the output transistors turn on too much, they can easily overheat and destroy themselves, as the full current from the power supply is not limited at this stage.
A common solution to help stabilise the output devices is to include some emitter resistors, typically an ohm or so. Calculating the values of the circuit's resistors and capacitors is done based on the components employed and the intended use of the amp.
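As a rough aid to the kind of calculation mentioned here, the corner frequencies set by the two coupling capacitors and the quiescent current limited by the emitter resistors can be estimated as below. All component values are hypothetical placeholders, not the values of the pictured circuit, and the bias estimate assumes the diode string provides a small excess voltage beyond the output pair's combined base-emitter drops.

```python
import math

# Hypothetical values, standing in for the R and C choices discussed above.
R_in    = 10e3     # effective input/bias resistance seen by C1, ohms
C1      = 2.2e-6   # input coupling capacitor, farads
R_load  = 8.0      # loudspeaker load seen by C2, ohms
C2      = 2200e-6  # output coupling capacitor, farads
R_e     = 0.47     # emitter resistor per output device, ohms
dV_bias = 50e-3    # assumed excess bias voltage beyond the output pair's combined V_BE

f_in  = 1 / (2 * math.pi * R_in * C1)     # low-frequency rolloff at the input
f_out = 1 / (2 * math.pi * R_load * C2)   # low-frequency rolloff at the output

I_q = dV_bias / (2 * R_e)                 # quiescent current limited by the emitter resistors

print(f"input corner ~{f_in:5.1f} Hz, output corner ~{f_out:5.1f} Hz, "
      f"quiescent current ~{I_q * 1000:4.0f} mA")
```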
For the basics of radio frequency amplifiers using valves, see Valved RF amplifiers.
Notes on implementation
Real world amplifiers are imperfect.
- One consequence is that the power supply itself may influence the output, and must itself be considered when designing the amplifier.
- A power amplifier is effectively an input-signal-controlled power regulator, regulating the power sourced from the power supply or mains to the amplifier's load. The power output from a power amplifier cannot exceed the power input to it.
- The amplifier circuit has an "open loop" performance, that can be described by various parameters (gain, slew rate, output impedance, distortion, bandwidth, signal to noise ratio, etc.)
- Many modern amplifiers use negative feedback techniques to hold the gain at the desired value and to reduce distortion. Negative loop feedback has the intended effect of electrically damping loudspeaker motion, thereby damping the mechanical dynamic performance of the loudspeaker.
- When assessing rated amplifier power output it is useful to consider the load to be applied, the form of signal - i.e. speech or music, duration of power output needed - e.g. short-time or continuous, and dynamic range required - e.g. recorded program or live
- In the case of high-powered audio applications requiring long cables to the load - e.g. cinemas and shopping centres - instead of using heavy-gauge cables it may be more efficient to connect to the load at line output voltage, with matching transformers at source and loads.
- To prevent instability and/or overheating, care is needed to ensure solid-state amplifiers are adequately loaded. Most have a rated minimum load impedance.
- All amplifiers generate heat through electrical losses. This heat must be dissipated via natural or forced air cooling. Heat can damage or reduce service life of electronic components. Consideration should be given to the heating effects of or upon adjacent equipment.
Different methods of supplying power result in many different methods of bias. Bias is a technique by which the active devices are set up to operate in a particular region, or by which the DC component of the output signal is set to the midpoint between the maximum voltages available from the power supply. Most amplifiers use several devices at each stage; they are typically matched in specifications except for polarity. Matched inverted polarity devices are called complementary pairs. Class-A amplifiers generally use only one device, unless the power supply is set to provide both positive and negative voltages, in which case a dual device symmetrical design may be used. Class-C amplifiers, by definition, use a single polarity supply.
Amplifiers often have multiple stages in cascade to increase gain. Each stage of these designs may be a different type of amp to suit the needs of that stage. For instance, the first stage might be a class-A stage, feeding a class-AB push–pull second stage, which then drives a class-G final output stage, taking advantage of the strengths of each type, while minimizing their weaknesses.
See also
- Charge transfer amplifier
- Distributed amplifier
- Faithful amplification
- Guitar amplifier
- Instrument amplifier
- Instrumentation amplifier
- Low noise amplifier
- Negative feedback amplifier
- Operational amplifier
- Optical amplifier
- Power added efficiency
- Programmable gain amplifier
- RF power amplifier
- Valve audio amplifier
References
- Robert Boylestad and Louis Nashelsky (1996). Electronic Devices and Circuit Theory, 7th Edition. Prentice Hall College Division. ISBN 978-0-13-375734-7.
- Mark Cherry, Maxim Engineering Journal, volume 62, Amplifier Considerations in Ceramic Speaker Applications, p. 3, accessed 2012-10-01
- Robert S. Symons (1998). "Tubes: Still vital after all these years". IEEE Spectrum 35 (4): 52–63. doi:10.1109/6.666962.
- It is a curiosity to note that this table is a "Zwicky box"; in particular, it encompasses all possibilities. See Fritz Zwicky.
- John Everett (1992). Vsats: Very Small Aperture Terminals. IET. ISBN 0-86341-200-9.
- RCA Receiving Tube Manual, RC-14 (1940) p 12
- ARRL Handbook, 1968; page 65
- Jerry Del Colliano (20 February 2012), Pass Labs XA30.5 Class-A Stereo Amp Reviewed, Home Theater Review, Luxury Publishing Group Inc.
- Ask the Doctors: Tube vs. Solid-State Harmonics
- Volume cranked up in amp debate
- A. P. Malvino, Electronic Principles (2nd ed., 1979, ISBN 0-07-039867-4), p. 299.
- Electronic and Radio Engineering, R.P.Terman, McGraw Hill, 1964
- N. O. Sokal and A. D. Sokal, "Class E – A New Class of High-Efficiency Tuned Single-Ended Switching Power Amplifiers", IEEE Journal of Solid-State Circuits, vol. SC-10, pp. 168–176, June 1975. HVK
- US patent 2210028, William H. Doherty, "Amplifier", issued 1940-08-06, assigned to Bell Telephone Laboratories
- US patent 3314034, Joseph B. Sainton, "High Efficiency Amplifier and Push–Pull Modulator", issued 1967-04-11, assigned to Continental Electronics Manufacturing Company
- Audio and Hi-Fi handbook, Third edition, 1998, ISBN 0-7506-3636-X p. 271
- Kenwood MGR-E8 with Class W Amplifier
- Class Z Direct Digital Feedback Amplifiers, Zetex Semiconductors, 2006.
- Lee, Thomas (2004). The Design of CMOS Radio-Frequency Integrated Circuits. New York, NY: Cambridge University Press. p. 8. ISBN 978-0-521-83539-8.
External links
- Rane audio's guide to amplifier classes
- Design and analysis of a basic class D amplifier
- Conversion: distortion factor to distortion attenuation and THD
- An alternate topology called the grounded bridge amplifier - pdf
- Contains an explanation of different amplifier classes - pdf
- Reinventing the power amplifier - pdf
- Anatomy of the power amplifier, including information about classes
- Tons of Tones - Site explaining non linear distortion stages in Amplifier Models
- Class D audio amplifiers, white paper - pdf
- Class E Radio Transmitters - Tutorials, Schematics, Examples, and Construction Details
Bone - Mineralized Connective Tissue
Bone is a connective tissue with living cells (osteocytes) and collagen fibers distributed throughout a ground substance that is hardened by calcium salts. As bone develops, precursor cells called osteoblasts secrete collagen fibers and a ground substance of proteins and carbohydrates. Eventually, osteocytes reside within lacunae in the ground substance, which becomes mineralized by calcium deposits.
Bones are surrounded by a sturdy membrane called the periosteum. There are two kinds of bone tissue. Compact bone tissue forms the bone’s shaft and the outer portion of its two ends. Compact bone forms in thin, circular layers (osteons or Haversian systems) with small canals at their centers, which contain blood vessels and nerves. Osteocytes in the lacunae communicate by way of canaliculi (little canals). Spongy bone tissue is located inside the shaft of long bones.
Spongy and compact bone tissue in a femur.
Thin, dense layers of compact bone tissue form interconnected arrays around canals that contain blood vessels and nerves. Each array is an osteon (Haversian system). The blood vessel threading through it transports substances to and from osteocytes, living bone cells in small spaces (lacunae) in the bone tissue. Small tunnels called canaliculi connect neighboring spaces.
Animation: Structure of the Human Thigh Bone
How a long bone forms. First, osteoblasts begin to function in a cartilage model in the embryo. The bone-forming cells are active first in the shaft, then at the knobby ends. In time, cartilage is left only in the epiphyses at the ends of the shaft.
A bone develops on a cartilage model. Osteoblasts secrete material inside the shaft of the cartilage model of long bones. Calcium is deposited; cavities merge to form the marrow cavity. Eventually osteoblasts become trapped within their own secretions and become osteocytes (mature bone cells).
In growing children, the epiphyses (ends of bone) are separated from the shaft by an epiphyseal plate (cartilage), which continues to grow under the influence of growth hormone until late adolescence.
Animation: How a Long Bone Forms
Video: Taller and Taller
Normal bone tissue. After the onset of osteoporosis, replacements of mineral ions lag behind withdrawals. In time the tissue erodes, and the bone becomes hollow and brittle.
Bone tissue is constantly “remodeled.” Bone is renewed constantly as minerals are deposited by osteoblasts and withdrawn by osteoclasts during the bone remodeling process. Before adulthood, bone turnover is especially important in increasing the diameter of certain bones. Bone turnover helps to maintain calcium levels for the entire body. A hormone called PTH causes bone cells to release enzymes that will dissolve bone tissue and release calcium to the interstitial fluid and blood; calcitonin stimulates the reverse. Osteoporosis (decreased bone density) is associated with decreases in osteoblast activity, sex hormone production, exercise, and calcium uptake.
The Skeleton: The Body’s Bony Framework
Bones are the main components of the human skeletal system. There are four types of bones: long (arms), short (ankle), flat (skull), and irregular (vertebrae). Bone marrow fills the cavities of bones. In long bones, red marrow is confined to the ends; yellow marrow fills the shaft portion. Irregular bones and flat bones are completely filled with the red bone marrow responsible for blood cell formation.
The skeleton: a preview. The 206 bones of a human are arranged in two major divisions: the axial skeleton and the appendicular skeleton. Bones are attached to other bones by ligaments; bones are connected to muscles by tendons.
Bone functions are vital in maintaining homeostasis. The bones are moved by muscles; thus the whole body is movable. The bones support and anchor muscles. Bones protect vital organs such as the brain and lungs. Bone tissue acts as a depository for calcium, phosphorus, and other ions. Parts of some bones are sites of blood cell production.
Functions of Bone
- Movement: Bones interact with skeletal muscles to maintain or change the position of body parts.
- Support: Bones support and anchor muscles.
- Protection: Many bones form hard compartments that enclose and protect soft internal organs.
- Mineral storage: Bones are a reservoir for calcium and phosphorus. Deposits and withdrawals of these mineral ions help to maintain their proper concentrations in body fluids.
- Blood cell formation: Some bones contain marrow where blood cells are produced.
The Axial Skeleton
Sinuses in bones in the skull and face.
The irregular junctions between different bones are called sutures.
An “inferior,” or bottom-up, view of the skull. The large foramen magnum is situated atop the uppermost cervical vertebra.
The Appendicular Skeleton
Bones of the pectoral girdle, the arm, and the hand.
The pectoral girdle and upper limbs provide flexibility. The pectoral girdle includes the bones of, and is attached to, the shoulder. The scapula is a large, flat shoulder blade with a socket for the upper arm bone. The clavicle (collarbone) connects the scapula to the sternum.
Each upper limb includes some 30 separate bones. The humerus is the bone of the upper arm. The radius and ulna extend from the hingelike joint of the elbow to the wrist. The carpals form the wrist; the metacarpals form the palm of the hand, and the phalanges the fingers.
Structure of a femur (thighbone), a typical long bone. Bones of the pelvic girdle, the leg, and the foot.
The pelvic girdle and lower limbs support body weight. The pelvic girdle includes the pelvis and the legs. The pelvis is made up of coxal bones attaching to the sacrum in the back and forming the pelvic arch in the front. The pelvis is broader in females than males; this is necessary for childbearing.
The legs contain the body’s largest bones. The femur is the longest bone, extending from the pelvis to the knee. The tibia and fibula form the lower leg; the kneecap bone is the patella. Tarsal bones compose the ankle, metatarsals the foot, and phalanges the toes.
Joints—Connections Between Bones
The knee joint, an example of a synovial joint. The knee is the largest and most complex joint in the body. Part (a) shows the joint with muscles stripped away; in (b) you can see where muscles such as the quadriceps attach.
Synovial joints move freely. Synovial joints are the most common type of joint and move freely; they include the ball-and-socket joints of the hips and the hingelike joints such as the knee. These types of joints are stabilized by ligaments. A capsule of dense connective tissue surrounds the bones of the joint and produces synovial fluid that lubricates the joint.
Other joints move little or not at all. Cartilaginous joints (such as between the vertebrae) have no gap, but are held together by cartilage and can move only a little. Fibrous joints also have no gap between the bones and hardly move; flat cranial bones are an example.
Ways body parts move at synovial joints. The synovial joint at the shoulder permits the greatest range of movement.
Disorders of the Skeleton
Inflammation is a factor in some skeletal disorders. In rheumatoid arthritis, the synovial membrane becomes inflamed due to immune system dysfunction, the cartilage degenerates, and bone is deposited into the joint.
In osteoarthritis, the cartilage at the end of the bone degenerates. Tendinitis is the inflammation of tendons and synovial membranes around joints. Carpal tunnel syndrome results from inflammation of the tendons in the space between a wrist ligament and the carpal bones, usually aggravated by chronic overuse.
An x-ray of an arm bone deformed by osteogenesis imperfecta (OI).
A little girl with OI; she had multiple fractures in her arms and legs at birth.
A child with rickets
Joints also are vulnerable to strains, sprains, and dislocations. A strain results from stretching or twisting a joint suddenly or too far. A sprain is a tear of ligaments or tendons. A dislocation causes two bones to no longer be in contact.
In fractures, bones break. A simple fracture is a crack in the bone and is usually not very serious. A complete fracture separates the bone into two pieces, which must be quickly realigned for proper healing. A compound fracture is the most serious, because there are multiple breaks and bone fragments may penetrate the surrounding tissues.
Other bone disorders include genetic diseases, infections, and cancer. Genetic diseases such as osteogenesis imperfecta can leave bones brittle and easily broken. Bacterial and other infections can spread from the blood stream to bone tissue or marrow. Osteosarcoma, bone cancer, usually occurs in long bones.
The Body’s Three Kinds of Muscle
The three kinds of muscle are built and function in different ways. Skeletal muscle, composed of long thin cells called muscle “fibers,” allows the body to move. Smooth muscle is found in the walls of hollow organs and tubes; the cells are smaller than those of skeletal muscle and are not striated. The heart is the only place where cardiac muscle is found.
Cardiac muscle and smooth muscle are considered involuntary muscles because we cannot consciously control their contraction; skeletal muscles are voluntary muscles. Skeletal muscle comprises the body’s muscular system.
Animation: Major Skeletal Muscles
The three kinds of muscle in the body and where each type is found
Some of the major muscles of the muscular system.
The flexor digitorum superficialis, a forearm muscle that helps move the fingers.
The zygomaticus major, which helps you smile.
The Structure and Function of Skeletal Muscles
A tendon sheath. Notice the lubricating fluid inside each of the sheaths sketched here.
A skeletal muscle is built of bundled muscle cells. Inside each cell are threadlike myofibrils, which are critical to muscle contraction. The cells are bundled together with connective tissue that extends past the muscle to form tendons, which attach the muscle to bones.
Bones and skeletal muscles work like a system of levers. The human body’s skeletal muscles number more than 600. The origin end of each muscle attaches to the bone that moves relatively little, whereas the insertion attaches to the bone that moves the most. Because most muscle attachments are located close to joints, only a small contraction is needed to produce considerable movement of some body parts (leverage advantage).
Many muscles are arranged as pairs or in groups. Many muscles are arranged as pairs or grouped for related function. Some work antagonistically (in opposition), so that one reverses the action of the other. Others work synergistically: the contraction of one stabilizes the contraction of another. Reciprocal innervation dictates that only one muscle of an antagonistic pair (e.g., biceps and triceps) can be stimulated at a time.
Animation: Two Opposing Muscle Groups in Human Arms
“Fast” and “slow” muscle. Humans have two general types of skeletal muscles: “Slow” muscle is red in color due to myoglobin and blood capillaries; its contractions are slower but more sustained. “Fast” or “white” muscle cells contain fewer mitochondria and less myoglobin but can contract rapidly and powerfully for short periods.
When athletes train, one goal is to increase the relative size and contractile strength of fast (sprinters) and slow (distance swimmers) muscle fibers.
How Muscles Contract
Zooming down through skeletal muscle, from a biceps to filaments of the proteins actin and myosin. These proteins can contract.
A muscle contracts when its cells shorten. Muscles are divided into contractile units called sarcomeres. Each muscle cell contains myofibrils composed of thin (actin) and thick (myosin) filaments. Each actin filament is actually two beaded strands of protein twisted together. Each myosin filament is a protein with a double head (projecting outward) and a long tail (which is bound together with others). The arrangement of actin and myosin filaments gives skeletal muscles their characteristic striped appearance.
Video: Structure of a Sarcomere
Animation: Structure of Skeletal Muscle
Animation: Banding Patterns and Muscle Contraction
Muscle cells shorten when actin filaments slide over myosin. Within each sarcomere there are two sets of actin filaments, which are attached on opposite sides of the sarcomere; myosin filaments lie suspended between the actin filaments.
During contraction, the myosin filaments physically slide along and pull the two sets of actin filaments toward each other at the center of the sarcomere; this is called the sliding-filament model of contraction. When a myosin head is energized, it forms cross-bridges with an adjacent actin filament and tilts in a power stroke toward the sarcomere’s center. Energy from ATP drives the power stroke as the heads pull the actin filaments along. After the power stroke the myosin heads detach and prepare for another attachment (power stroke).
Animation: Muscle Contraction
When a person dies, ATP production stops, myosin heads become stuck to actin, and rigor mortis sets in, making the body stiff.
Video: Muscle Contraction Overview
How the Nervous System Controls Muscle Contraction
Pathway for signals from the nervous system that stimulate contraction of skeletal muscle.
Calcium ions are the key to contraction. Skeletal muscles contract in response to signals from motor neurons of the nervous system. Signals arrive at the T tubules of the sarcoplasmic reticulum (SR), which wraps around the myofibrils. The SR responds by releasing stored calcium ions; calcium binds to the protein troponin, changing the conformation of actin and allowing myosin cross-bridges to form. Another protein, tropomyosin, is also associated with actin filaments.
When nervous stimulation stops, calcium ions are actively taken up by the sarcoplasmic reticulum and the changes in filament conformation are reversed; the muscle relaxes.
Animation: Pathway from Nerve Signal to Skeletal Muscle Contraction
Animation: Actin, Troponin, and Tropomyosin
Neurons act on muscle cells at neuromuscular junctions. At neuromuscular junctions, impulses from the branched endings (axons) of motor neurons pass to the muscle cell membranes. Between the axons and the muscle cell is a gap called a synapse. Signals are transmitted across the gap by a neurotransmitter called acetylcholine (ACh). When the neuron is stimulated, calcium channels open to allow calcium ions to flow inward, causing a release of acetylcholine into the synapse.
The interactions of actin, tropomyosin, and troponins in a skeletal muscle cell.
How a chemical messenger called a neurotransmitter carries a signal across a neuromuscular junction.
How Muscle Cells Get Energy
ATP supplies the energy for muscle contraction. Initiation of muscle contraction requires much ATP; this will initially be provided by creatine phosphate, which gives up a phosphate to ADP to make ATP. Cellular respiration provides most of the ATP needed for muscle contraction after this, even during the first 5-10 minutes of moderate exercise. During prolonged muscle action, glycolysis alone produces low amounts of ATP; lactic acid is also produced, which hinders further contraction.
Muscle fatigue is due to the oxygen debt that results when muscles use more ATP than cellular respiration can deliver.
Animation: Energy Sources for Muscle Contraction
Three metabolic pathways by which ATP forms in muscles in response to the demands of physical exercise.
Properties of Whole Muscles
(a) Example of motor units present in muscles. (b) The micrograph shows the axon endings of a motor neuron that acts on individual muscle cells in the muscle.
Several factors determine the characteristics of a muscle contraction. A motor neuron and the muscle cells under its control are a motor unit; the number of cells in a motor unit depends on the precision of the muscle control needed. A single, brief stimulus to a motor unit causes a brief contraction called a muscle twitch. Repeated stimulation makes the twitches run together in a sustained contraction called tetanus (tetany).
Animation: Effects of Stimulation on Muscles
Not all muscle cells in a muscle contract at the same time. The number of motor units that are activated determines the strength of the contraction: Small number of units = weak contraction; large number of units at greater frequency = stronger contraction. Muscle tone is the continued steady, low level of contraction that stabilizes joints and maintains general muscle health.
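As a toy illustration of recruitment (not a physiological model), the sketch below treats each motor unit as contributing a fixed force when active, so contraction strength grows with the number of recruited units. The unit forces and counts are invented example values.

```python
def contraction_force(unit_forces, n_recruited):
    """Toy model: total force is the sum of the forces of the recruited motor units."""
    recruited = unit_forces[:n_recruited]     # activate the first n units
    return sum(recruited)

# Hypothetical muscle with 10 motor units of varying size (force in arbitrary newtons)
units = [0.5, 0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0]

for n in (2, 5, 10):
    print(f"{n:2d} units recruited -> {contraction_force(units, n):.1f} N")
# Few units -> weak contraction; many units -> stronger contraction.
```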
Muscle tension is the force a contracting muscle exerts on an object; to contract, a muscle’s tension must exceed the load opposing it. An isotonically contracting muscle shortens and moves a load. An isometrically contracting muscle develops tension but does not shorten.
Tired muscles can’t generate much force. Muscles fatigue when strong stimulation keeps a muscle in a state of tetanus too long. After resting, muscles will be able to contract again; muscles may need to rest for minutes up to a day to fully recover.
Recordings of twitches in muscles artificially stimulated in different ways. (a) A single twitch. (b) Six per second cause a summation of twitches, and (c) about 20 per second cause tetanic contraction. (d) Painting of a soldier dying of the disease tetanus in a military hospital in the 1800s, after the bacterium Clostridium tetani infected a battlefield wound.
(a) An isotonic contraction. The load is less than a muscle’s peak capacity to contract, so the muscle can contract, shorten, and lift the load. (b) In an isometric contraction, the load exceeds the muscle’s peak capacity. It contracts, but can’t shorten.
Strains and tears are muscle injuries. Muscle strains come from movement that stretches or tears muscle fibers; ice, rest and anti-inflammatory drugs (ibuprofen) allow damage to repair. If the whole muscle is torn, scar tissue may develop, shortening the muscle and making it function less effectively.
Sometimes a skeletal muscle will contract abnormally. A muscle spasm is a sudden, involuntary contraction that rapidly releases, while cramps are spasms that don’t immediately release; cramps usually occur in calf and thigh muscles. Tics are minor, involuntary twitches of muscles in the face and eyelids.
Muscular dystrophies destroy muscle fibers. Muscular dystrophies are genetic diseases leading to breakdown of muscle fibers over time. Duchenne muscular dystrophy (DMD) is common in children; a single mutant gene interferes with sarcomere contraction. Myotonic muscular dystrophy is usually found in adults; muscles of the hands and feet contract strongly but fail to relax normally. In these diseases, muscles progressively weaken and shrivel.
Exercise makes the most of muscles. Muscles that are damaged or which go unused for prolonged periods of time will atrophy (waste away). Aerobic exercise improves the capacity of muscles to do work. Walking, biking, and jogging are examples of exercises that increase endurance. Regular aerobic exercise increases the number and size of mitochondria, the number of blood capillaries, and the amount of myoglobin in the muscle tissue. Strength training improves function of fast muscle but does not increase endurance. Even modest activity slows the loss of muscle strength that comes with aging.
- Skeletal and Muscular Pathophysiology
- Epiphyseal plate
- Bone remodeling
- Bone Metabolism
- exchangeable calcium
- loosely bound CaPO4
- quickly released in response to short term increases in PTH
- Parathyroid hormone (PTH)
- when blood levels are elevated for longer periods of time, PTH stimulates:
- promotes renal retention
- promotes intestinal calcium absorption
- Vitamin D (cholecalciferol)
- produced in skin
- transformed into CALCIDIOL
- calcidiol is then transported to the kidney where it is finally converted into CALCITRIOL, the most potent agent.
- at normal physiological concentrations calcitriol promotes bone deposition
- at higher than physiological levels calcitriol works with PTH to activate resorption and calcium release
- calcitriol also regulates the growth and development of other tissues
- secreted by the thyroid gland
- opposes the action of PTH
- Bone pathophysiology
- DiGeorge's syndrome
- multiple endocrine deficiency
- vitamin D deficiency
- genetic disorders
- osteogenesis imperfecta
- achondroplastic dwarfism
- acquired bone disorders
- Paget's disease
- Bone tumors
- Skeletal muscle disorders
- Muscle cell degeneration
- muscular dystrophy
- Motor end plate disorders
- myasthenia gravis
- Motor neuron disorders
- amyotrophic lateral sclerosis
- extrapyramidal disorders
- Parkinson's disease
- Huntington's disease
- Demyelinating disorders
- multiple sclerosis
- Guillain-Barre syndrome
- Joint disorders
- Rheumatoid arthritis
- specific effects
- Other joint disorders
- Ankylosing spondylitis
The Nervous System
The role of the nervous system is to detect and integrate information about external and internal conditions and carry out responses. Neurons form the basis of the system’s communication network. There are three types of neurons:
- Sensory neurons are receptors for specific sensory stimuli (signals).
The functional zones of a motor neuron. The micrograph shows a motor neuron with its plump cell body and branching dendrites.
- Interneurons in the brain and spinal cord integrate input and output signals.
- Motor neurons send information from integrator to muscle or gland cells (effectors).
Neurons have several functional zones. Neurons form extended cells with several zones: The cell body contains the nucleus and organelles. The cell body has slender extensions called dendrites; the cell body and the dendrites form the input zone for receiving information. Next comes the trigger zone, called the axon hillock in motor neurons and interneurons; the trigger zone leads to the axon, which is the neuron’s conducting zone. The axon’s endings are output zones where messages are sent to other cells.
Only 10% of the nervous system consists of neurons; the remaining 90% is composed of support cells called neuroglia, or glia. Neurons function well in communication because they are excitable (they produce electrical signals in response to stimuli).
Properties of a neuron’s plasma membrane allow it to carry signals. The plasma membrane prevents charged substances (K+ and Na+ ions) from moving freely across, but both ions can move through channels. Some channel proteins are always open, others are gated. In a resting neuron, gated sodium channels are closed; sodium does not pass through the membrane, but potassium does. According to the gradients that form, sodium diffuses into the cell, potassium diffuses out of the cell. The difference across the membrane that forms because of the K+ and Na+ gradients results in a resting membrane potential of ‒70 millivolts (cytoplasmic side of the membrane is negative).
Ions and a neuron’s plasma membrane. (a) Gradients of sodium (Na+) and potassium (K+) ions across a neuron’s plasma membrane. (b) How ions cross the plasma membrane of a neuron. They are selectively allowed to cross at protein channels and pumps that span the membrane.
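One standard way to estimate the resting membrane potential from the K+ and Na+ gradients is the Goldman voltage equation, which is not described in the text above; the sketch below uses typical textbook ion concentrations and permeabilities (assumed values) and shows how potassium's high permeability pulls the potential to roughly -65 to -70 millivolts.

```python
import math

def goldman_potential(p_k, p_na, k_out, k_in, na_out, na_in, temp_c=37.0):
    """Estimate the resting membrane potential (volts) from K+ and Na+ gradients
    using the Goldman-Hodgkin-Katz voltage equation (Cl- ignored for simplicity)."""
    R, F = 8.314, 96485.0                       # gas constant, Faraday constant
    T = temp_c + 273.15
    numerator = p_k * k_out + p_na * na_out     # permeability-weighted outside concentrations
    denominator = p_k * k_in + p_na * na_in     # permeability-weighted inside concentrations
    return (R * T / F) * math.log(numerator / denominator)

# Typical mammalian neuron values (mM); K+ is about 20x more permeable than Na+ at rest
v_rest = goldman_potential(p_k=1.0, p_na=0.05,
                           k_out=5.0, k_in=140.0,
                           na_out=145.0, na_in=15.0)
print(f"Estimated resting potential: {v_rest * 1000:.0f} mV")   # about -65 mV
```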
Action Potentials = Nerve Impulses
(1, 2) Steps leading to an action potential. (3, 4) How an action potential propagates.
Chemical Synapses: Communication Junctions
Action potentials can stimulate the release of neurotransmitters. Neurotransmitters diffuse across a chemical synapse, the junction between a neuron and an adjacent cell (between neurons and other neurons, or between neurons and muscle or gland cells). The neuron that releases the transmitter is called the presynaptic cell. In response to an action potential, gated calcium channels open and allow calcium ions to enter the neuron from the synapse. Calcium causes the synaptic vesicles to fuse with the membrane and release the transmitter substance into the synapse. The transmitter binds to receptors on the membrane of the postsynaptic cell.
Neurotransmitters can excite or inhibit a receiving cell. How a postsynaptic cell responds to a transmitter depends on the type and amount of transmitter, the receptors it has, and the types of channels in its input zone. Excitatory signals drive the membrane toward an action potential. Inhibitory signals prevent an action potential.
Examples of neurotransmitters:
Neuromodulators can magnify or reduce the effects of a neurotransmitter. One example includes the natural painkillers called endorphins. Release of endorphins prevents sensations of pain from being recognized. Endorphins may also play a role in memory, learning, and sexual behavior. Competing signals are “summed up.” Excitatory and inhibitory signals compete at the input zone. An excitatory postsynaptic potential (EPSP) depolarizes the membrane to bring it closer to threshold. An inhibitory postsynaptic potential (IPSP) either drives the membrane away from threshold by a hyperpolarizing effect or maintains the membrane potential at the resting level. In synaptic integration, competing signals that reach the input zone of a neuron at the same time are summed; summation of signals determines whether a signal is suppressed, reinforced, or sent onward to other body cells.
Neurotransmitter molecules must be removed from the synapse. Neurotransmitters must be removed from the synaptic cleft to discontinue stimulation. There are three methods of removal: Some neurotransmitter molecules simply diffuse out of the cleft. Enzymes, such as acetylcholinesterase, break down the transmitters. Membrane transport proteins actively pump neurotransmitter molecules back into the presynaptic cells.
Video: Synapse Function
Nerves are long-distance lines. Signals between the brain or spinal cord and body regions travel via nerves. Axons of sensory neurons, motor neurons, or both, are bundled together in a nerve. Within the brain and spinal cord, bundles of interneuron axons are called nerve tracts. Axons are covered by a myelin sheath derived from Schwann cells. Each section of the sheath is separated from adjacent ones by a region where the axon membrane, along with gated sodium channels, is exposed. Action potentials jump from node to node (saltatory conduction); such jumps are fast and efficient. There are no Schwann cells in the central nervous system; here processes from oligodendrocytes form the sheaths of myelinated axons.
Reflex arcs are the simplest nerve pathways. A reflex is a simple, stereotyped movement in response to a stimulus. In the simplest reflex arcs, sensory neurons synapse directly with motor neurons; an example is the stretch reflex, which contracts a muscle after that muscle has been stretched. In most reflex pathways, the sensory neurons also interact with several interneurons, which excite or inhibit motor neurons as needed for a coordinated response.
In the brain and spinal cord, neurons interact in circuits. The overall direction of flow in the nervous system: sensory neurons >>> spinal cord and brain >>> interneurons >>> motor neurons. Interneurons in the spinal cord and brain are grouped into blocks, which in turn form circuits; blocks receive signals, integrate them, and then generate new ones. Divergent circuits fan out from one block into another. Other circuits funnel down to just a few neurons. In reverberating circuits, neurons repeat signals among themselves.
Overview of the Nervous System
The central nervous system (CNS) is composed of the brain and spinal cord; all of the interneurons are contained in this system. Nerves that carry sensory input to the CNS are called the afferent nerves. Efferent nerves carry signals away from the CNS.
The peripheral nervous system (PNS) includes all the nerves that carry signals to and from the brain and spinal cord to the rest of the body. The PNS is further divided into the somatic and autonomic subdivisions. The PNS consists of 31 pairs of spinal nerves and 12 pairs of cranial nerves. At some sites, cell bodies from several neurons cluster together in ganglia.
Major Expressways: Peripheral Nerves and the Spinal Cord
The peripheral nervous system consists of somatic and autonomic nerves. Somatic nerves carry signals related to movement of the head, trunk, and limbs; signals move to and from skeletal muscles for voluntary control. Autonomic nerves carry signals between internal organs and other structures; signals move to and from smooth muscles, cardiac muscle, and glands (involuntary control). The cell bodies of preganglionic neurons lie within the CNS and extend their axons to ganglia outside the CNS. Postganglionic neurons receive the messages from the axons of the preganglionic cells and pass the impulses on to the effectors.
Autonomic nerves are divided into parasympathetic and sympathetic groups. They normally work antagonistically towards each other. Parasympathetic nerves slow down body activity when the body is not under stress. Sympathetic nerves increase overall body activity during times of stress, excitement, or danger; they also call on the hormone norepinephrine to increase the fight-flight response. When sympathetic activity drops, parasympathetic activity may rise in a rebound effect.
The spinal cord is the pathway between the PNS and the brain. The spinal cord lies within a closed channel formed by the bones of the vertebral column. Signals move up and down the spinal cord in nerve tracts. The myelin sheaths of these tracts are white; thus, they are called white matter. The central, butterfly-shaped area (in cross-section) consists of dendrites, cell bodies, interneurons, and neuroglia cells; it is called gray matter. The spinal cord and brain are covered with three tough membranes - the meninges. The spinal cord is a pathway for signal travel between the peripheral nervous system and the brain; it also is the center for controlling some reflex actions. Spinal reflexes result from neural connections made within the spinal cord and do not require input from the brain, even though the event is recorded there. Autonomic reflexes, such as bladder emptying, are also the responsibility of the spinal cord.
Video: New Nerves
The Brain - Command Central
The spinal cord merges with the body’s master control center, the brain. The brain is protected by bone and meninges. The tough outer membrane is the dura mater; it is folded double around the brain and divides the brain into its right and left halves. The thinner middle layer is the arachnoid; the delicate pia mater wraps the brain and spinal cord as the innermost layer. The meninges also enclose fluid-filled spaces that cushion and nourish the brain.
The brain is divided into a hindbrain, midbrain, and forebrain. The hindbrain and midbrain form the brain stem, responsible for many simple reflexes. Hindbrain. The medulla oblongata has influence over respiration, heart rate, swallowing, coughing, and sleep/wake responses. The cerebellum acts as a reflex center for maintaining posture and coordinating limbs. The pons (“bridge”) possesses nerve tracts that pass between brain centers. The midbrain coordinates reflex responses to sight and sound. It has a roof of gray matter, the tectum, where visual and sensory input converges before being sent to higher brain centers. The forebrain is the most developed portion of the brain in humans. The cerebrum integrates sensory input and selected motor responses; olfactory bulbs deal with the sense of smell. The thalamus relays and coordinates sensory signals through clusters of neuron cell bodies called nuclei; Parkinson’s disease occurs when the function of basal nuclei in the thalamus is disrupted. The hypothalamus monitors internal organs and influences responses to thirst, hunger, and sex, thus controlling homeostasis.
Cerebrospinal fluid fills cavities and canals in the brain. The brain and spinal cord are surrounded by the cerebrospinal fluid (CSF), which fills cavities (ventricles) and canals within the brain. A mechanism called the blood-brain barrier controls which substances will pass to the fluid and subsequently to the neurons. The capillaries of the brain are much less permeable than other capillaries, forcing materials to pass through the cells, not around them. Lipid-soluble substances, such as alcohol, nicotine, and drugs, diffuse quickly through the lipid bilayer of the plasma membrane.
A Closer Look at the Cerebrum
There are two cerebral hemispheres. The human cerebrum is divided into left and right cerebral hemispheres, which communicate with each other by means of the corpus callosum. Each hemisphere can function separately; the left hemisphere responds to signals from the right side of the body, and vice versa. The left hemisphere deals mainly with speech, analytical skills, and mathematics; nonverbal skills such as music and other creative activities reside in the right. The thin surface (cerebral cortex) is gray matter, divided into lobes by folds and fissures; white matter and basal nuclei (gray matter in the thalamus) underlie the surface. Each hemisphere is divided into frontal, occipital, temporal, and parietal lobes.
The cerebral cortex controls thought and other conscious behavior. Motor areas are found in the frontal lobe of each hemisphere. The motor cortex controls the coordinated movements of the skeletal muscles. The premotor cortex is associated with learned pattern or motor skills. Broca’s area is involved in speech. The frontal eye field controls voluntary eye movements.
Several sensory areas are found in the parietal lobe: The primary somatosensory cortex is the main receiving center for sensory input from the skin and joints, while the primary cortical area deals with taste. The primary visual cortex, which receives sensory input from the eyes, is found in the occipital lobe. Sound and odor perception arises in primary cortical areas in each temporal lobe. Association areas occupy all parts of the cortex except the primary motor and sensory regions: Each area integrates, analyzes, and responds to many inputs. Neural activity is the most complex in the prefrontal cortex, the area of the brain that allows for complex learning, intellect, and personality.
The limbic system: Emotions and more. Our emotions and parts of our memory are governed by the limbic system, which consists of several brain regions. Parts of the thalamus, hypothalamus, amygdala, and the hippocampus form the limbic system and contribute to producing our “gut” reactions.
Video: Limbic System
Memory and Consciousness
Disorders of the Nervous System
Some diseases attack and damage neurons. Alzheimer’s disease involves the progressive degeneration of brain neurons, while at the same time there is an abnormal buildup of amyloid protein, leading to the loss of memory. Parkinson’s disease (PD) is characterized by the death of neurons in the thalamus that normally make dopamine and norepinephrine needed for normal muscle function. Meningitis is an often fatal inflammatory disease caused by a virus or bacterial infection of the meninges covering the brain and/or spinal cord. Encephalitis is very dangerous inflammation of the brain, often caused by a virus. Multiple sclerosis (MS) is an autoimmune disease that results in the destruction of the myelin sheath of neurons in the CNS.
The CNS can also be damaged by injury or seizure. A concussion can result from a severe blow to the head, resulting in blurred vision and brief loss of consciousness. Damage to the spinal cord can result in lost sensation, muscle weakness, or paralysis below the site of the injury. Epilepsy is a seizure disorder, often inherited but also caused by brain injury, birth trauma, or other assaults on the brain. Headaches occur when the brain registers tension in muscles or blood vessels of the face, neck, and scalp as pain; migraine headaches are extremely painful and can be triggered by hormonal changes, fluorescent lights, and certain foods, particularly in women.
The Brain on Drugs
Drugs can alter mind and body functions. Psychoactive drugs exert their influence on brain regions that govern states of consciousness and behavior. There are four categories of psychoactive drugs: Stimulants (caffeine, cocaine, nicotine, amphetamines) increase alertness or activity for a time, and then depress you.
Drug use can lead to addiction. As the body develops tolerance to a drug, larger and more frequent doses are needed to produce the same effect; this reflects physical drug dependence. Psychological drug dependence, or habituation, develops when a user begins to crave the feelings associated with using a particular drug and cannot function without it. Habituation and tolerance are evidence of addiction.
Video: Brain Stem
Hormones are signaling molecules that are carried in the bloodstream. Signaling molecules are hormones and secretions that can bind to target cells and elicit in them a response. Hormones are secreted by endocrine glands, endocrine cells, and some neurons. Local signaling molecules are released by some cells; these work only on nearby tissues. Pheromones are signaling molecules that have targets outside the body and which are used to integrate behaviors.
Hormone sources: The endocrine system. The sources of hormones (hormone producing glands, cells, and organs) may be collectively called the endocrine system. Endocrine sources and the nervous system function in highly interconnected ways.
Hormones often interact. In an opposing interaction the effect of one hormone opposes the effect of another. In a synergistic interaction the combined action of two or more hormones is necessary to produce the required effect on target cells. In a permissive interaction one hormone exerts its effect only when a target cell has been “primed” to respond by another hormone.
Animation: Major Human Endocrine Glands
Types of Hormones and Their Signals
Hormones come in several chemical forms. Steroid hormones are lipids made from cholesterol. Amine hormones are modified amino acids. Peptide hormones are peptides of only a few amino acids. Protein hormones are longer chains of amino acids. All hormones bind target cells; this signal is converted into a form that works in the cell to change activity. A target cell’s response to a hormone is dependent on two factors: Different hormones activate different cellular response mechanisms. Not all cells have receptors for all hormones; the cells that respond are selected by means of the type of receptor they possess.
Steroid hormones interact with cell DNA. Steroid hormones, such as estrogen and testosterone, are lipid-soluble and therefore cross plasma membranes readily. Once inside the cell, they penetrate the nuclear membrane and bind to receptors in the nucleus, either turning on or turning off genes. Switching genes on or off changes the proteins that are made by the cell, thus effecting a response. Some steroid hormones bind receptors in the cell membrane and change membrane properties to affect change to the target cell’s function.
Video: Mechanism of a Steroid Hormone
Nonsteroid hormones act indirectly, by way of second messengers. Nonsteroid hormones include the amine, peptide, and protein hormones. Nonsteroid hormones cannot cross the plasma membrane of target cells, so they must first bind to a receptor on the plasma membrane. Binding of the hormone to the receptor activates the receptor; it in turn stimulates the production of a second messenger, a small molecule that can relay signals in the cell. Cyclic AMP (cyclic adenosine monophosphate) is one example of a second messenger.
Video: Mechanism of a Peptide Hormone
The Hypothalamus and Pituitary Gland: Major Controllers
The hypothalamus and pituitary gland work jointly as the neural-endocrine control center. The hypothalamus is a portion of the brain that monitors internal organs and conditions. The pituitary is connected to the hypothalamus by a stalk. The posterior lobe consists of nervous tissue and releases two hormones made in the hypothalamus. The anterior lobe makes and secretes hormones that control the activity of other endocrine glands.
The posterior pituitary lobe produces ADH and oxytocin. Neurons in the hypothalamus produce antidiuretic hormone (ADH) and oxytocin, which are released from axon endings in the capillary bed of the posterior lobe. ADH (or vasopressin) acts on the walls of kidney tubules to control the body’s water and solute levels by stimulating reabsorption. Oxytocin triggers uterine muscle contractions to expel the fetus and acts on mammary glands to release milk.
Animation: Posterior Pituitary Function
The anterior pituitary lobe produces six other hormones. Corticotropin (ACTH) stimulates the adrenal cortex. Thyrotropin (TSH) stimulates the thyroid gland. Follicle-stimulating hormone (FSH) causes ovarian follicle development and egg production. Luteinizing hormone (LH) also acts on the ovary to release an egg. Prolactin (PRL) acts on the mammary glands to stimulate and sustain milk production. Somatotropin (STH), also known as growth hormone (GH), acts on body cells in general to promote growth. Most of these hormones are releasers that stimulate target cells to secrete other hormones; other hormones from the hypothalamus are inhibitors and block secretions.
Animation: Anterior Pituitary Function
Video: Hypothalamus and Pituitary
Factors That Influence Hormone Effects
Problems with control mechanisms can result in skewed hormone signals. Endocrine glands in general only release small quantities of hormones and control the frequency of release to make sure there isn’t too much or too little hormone.
Abnormal quantities of hormones can lead to growth problems. Gigantism results from an oversecretion of growth hormone in childhood. Pituitary dwarfism results from an undersecretion of GH. Acromegaly is a condition resulting from an oversecretion of GH in adulthood leading to abnormal thickening of tissues. Diabetes insipidus occurs when ADH secretions fall or stop, leading to dilute urine and the possibility of serious dehydration.
Hormone interactions, feedback, and other factors also influence a hormone’s effects. At least four factors influence the effects of any given hormone. Hormones often interact with one another. Negative feedback mechanisms control secretion of hormones. Target cells may react differently to hormones at different times. Environmental cues can affect release of hormones. Hormones throughout the body are affected in similar ways.
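A minimal sketch of the negative feedback idea above, with made-up rate constants rather than data from any real gland: secretion slows as the hormone level approaches the set point, and the level settles where secretion balances clearance.

```python
def simulate_negative_feedback(set_point=1.0, level=0.2, steps=20,
                               gain=0.5, clearance=0.3):
    """Toy negative-feedback loop: secretion is proportional to how far the
    hormone level sits below the set point; clearance removes a fixed fraction."""
    history = []
    for _ in range(steps):
        secretion = max(0.0, gain * (set_point - level))  # less secretion as level rises
        level += secretion - clearance * level            # clearance pulls the level back down
        history.append(level)
    return history

levels = simulate_negative_feedback()
print(" -> ".join(f"{x:.2f}" for x in levels[:6]))
print(f"Steady level: {levels[-1]:.2f} (secretion now balances clearance, below the 1.00 set point)")
```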
The Thymus, Thyroid, and Parathyroid Glands
Thymus gland hormones aid immunity. Thyroid hormones affect metabolism, growth, and development. The thyroid gland secretes thyroid hormone (TH), which has effects on metabolism, growth, and development; the thyroid gland also secretes calcitonin, which helps regulate calcium levels in the blood.
Animation: Thyroid Hormone Action
Iodine-deficient diets interfere with proper synthesis of thyroid hormones. Simple goiter is an enlargement of one or both lobes of the thyroid gland in the neck; enlargement follows low blood levels of thyroid hormones (hypothyroidism). Graves disease and other forms of hyperthyroidism result from too much thyroid hormone in the blood.
PTH from the parathyroids is the main calcium regulator. Humans have four parathyroid glands, which secrete parathyroid hormone (PTH), the main regulator of blood calcium levels. More PTH is secreted when blood calcium levels drop below a certain point; less is secreted when calcium rises. Calcitonin contributes to processes that pull calcium out of the blood. Rickets in children arises from a vitamin D-deficient diet; vitamin D is needed to aid absorption of calcium from food. In hyperparathyroidism, so much calcium is withdrawn from a person’s bones that the bone tissue is dangerously weakened.
Adrenal Glands and Stress Responses
The adrenal cortex produces glucocorticoids and mineralocorticoids. One adrenal gland is located on top of each kidney; the outer part of each gland is the adrenal cortex, the site of production for two major steroid hormones. Glucocorticoids raise the level of glucose in the blood. The main glucocorticoid, cortisol, is secreted when the body is stressed and blood sugar levels drop; cortisol promotes gluconeogenesis, a mechanism for making glucose from amino acids derived from protein breakdown. Cortisol also dampens the uptake of glucose from the blood, stimulates the breakdown of fats for energy, and suppresses inflammation. Hypoglycemia can result when the adrenal cortex makes too little cortisol; this results in chronically low glucose levels in the blood. Mineralocorticoids regulate the concentrations of minerals such as K+ and Na+ in the extracellular fluid; aldosterone is one example that works in the nephrons of the kidneys. The adrenal cortex also secretes sex hormones in the fetus and at puberty.
Animation: Control of Cortisol Secretion
Hormones from the adrenal medulla help regulate blood circulation. The inner part of the adrenal gland, the adrenal medulla, secretes epinephrine and norepinephrine. Secretion by the adrenal medulla influences these molecules to behave like hormones to regulate blood circulation and carbohydrate use during stress.
Long-term stress can damage health. Stress triggers the fight-flight response and the release of cortisol, epinephrine, and norepinephrine; constant release of these molecules can contribute to hypertension and cardiovascular disease. Excess cortisol suppresses the immune system, making individuals susceptible to disease. Social connections for support and exercise for health can reduce the effects of stress.
The Pancreas: Regulating Blood Sugar
The pancreas has both exocrine and endocrine functions; the endocrine cells are located in clusters called pancreatic islets.
Each pancreatic islet secretes three hormones: glucagon (from alpha cells), which raises blood glucose; insulin (from beta cells), which lowers it; and somatostatin (from delta cells), which inhibits secretion of the other two.
Animation: Hormones and Glucose Metabolism
Disorders of Glucose Homeostasis
Diabetes mellitus is a disease resulting from the secretion of too little insulin. Without insulin, cells can’t remove glucose from the blood; the kidneys remove the excess in urine, creating imbalances in water-solute concentrations. Metabolic acidosis, a lower than optimal blood pH, can result because of this imbalance.
In type 1 diabetes (also known as “juvenile-onset diabetes”) the insulin is no longer produced because the beta cells have been destroyed by an autoimmune response. Only about 1 in 10 diabetics have this form of diabetes. Treatment is by insulin injection.
Type 2 diabetes is a global health crisis. In type 2 diabetes the insulin levels are near normal but the target cells cannot respond to the hormone. Beta cells eventually break down and produce less and less insulin. Excess glucose in the blood damages capillaries. Cardiovascular disease, stroke, heart attack, and other serious complications arise.
Metabolic syndrome is a warning sign. Prediabetes describes individuals with slightly elevated blood sugar levels that have an increased risk for developing type 2 diabetes; about 20 million Americans fall into this category and do not know it. A composite of features collectively called metabolic syndrome also describe risk for diabetes; these features include: “apple shaped” waistline, elevated blood pressure, low levels of HDL, and elevated glucose and triglycerides. Type 2 diabetes can be controlled with a combination of improved diet, exercise, and sometimes drugs.
Some Final Examples of Integration and Control
Light/dark cycles influence the pineal gland, which produces melatonin. Located in the brain, the pineal gland is a modification of a primitive “third eye” and is sensitive to light and seasonal influences; this gland secretes the hormone melatonin. Melatonin is secreted in the dark, and levels change with the seasons. The biological clock seems to tick in synchrony with day length and is apparently influenced by melatonin. Seasonal affective disorder (SAD) affects persons during the winter and may result from an out-of-sync biological clock; melatonin makes it worse; exposure to intense light helps. Melatonin levels may potentially be linked to the onset of puberty.
Hormones also are produced in the heart and GI tract. Atrial natriuretic peptide (ANP) produced by the heart atria regulates blood pressure. Gastrin and secretin from the GI tract stimulate release of stomach and intestinal secretions.
Prostaglandins have many effects. More than 16 prostaglandins have been identified in tissues throughout the body. When stimulated by epinephrine and norepinephrine, prostaglandins cause smooth muscles in blood vessels to constrict or dilate. Allergic responses to dust and pollen may be aggravated by the effects of prostaglandins on airways in the lungs. Prostaglandins have major effects on menstruation and childbirth.
Growth factors influence cell division. Hormonelike proteins called growth factors influence growth by regulating the rate of cellular division. Epidermal growth factor (EGF) influences the growth of many cell types, as does insulinlike growth factor (IGF). Nerve growth factor (NGF) promotes growth and survival of neurons in the developing embryo. The current list of growth factors is expanding rapidly; many of these factors may have applications in medicine.
Pheromones may be important communication molecules in humans. Pheromones are released outside of the body by several animals to serve as sex attractants, territory markers, and communication signals. Recent studies suggest that humans also may communicate using pheromones.
Are endocrine disrupters at work? Endocrine disrupters are proposed to be environmental substances that interfere with reproduction or development. Sperm counts in males in Western countries declined about 40% between the years 1938 and 1990, possibly due to exposure to estrogens in the environment.
Video: Hormone-Induced Adjustments
The Nervous system and the Endocrine System
I. Types of nervous tissue
A) Neurons – conduct nerve impulses
1. sensory neurons
B) Glia – do not conduct impulses, they provide support to neurons
C) Anatomy of a neuron
1. cell body
II. Properties of the Neuronal membranes
A) resting membrane potential
B) action potential
V. Organization of the Nervous System
A) Central Nervous system
B) Peripheral Nervous system – everything outside of the Central Nervous system
VII. Disorders of the Nervous system
A) Parkinson’s disease
VIII. The Brain and Drugs
IX. The Endocrine System
Common Core Math Standards - High School
MathScore aligns to the Common Core Math Standards for High School. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but a challenging game-like experience.
The Real Number System
Extend the properties of exponents to rational exponents.
1. Explain how the definition of the meaning of rational exponents follows from extending the properties of integer exponents to those values, allowing for a notation for radicals in terms of rational exponents. For example, we define 5^(1/3) to be the cube root of 5 because we want (5^(1/3))^3 = 5^((1/3)·3) to hold, so (5^(1/3))^3 must equal 5.
2. Rewrite expressions involving radicals and rational exponents using the properties of exponents. (Simplifying Algebraic Expressions 2 , Simplifying Radical Expressions , Roots Of Exponential Expressions )
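A quick numerical spot-check of these two standards in plain Python (nothing here is specific to MathScore; the particular numbers are arbitrary examples):

```python
# Rational exponents: 5**(1/3) is the cube root of 5, so cubing it recovers 5.
cube_root_of_5 = 5 ** (1 / 3)
print(cube_root_of_5 ** 3)          # 5.000000000000001 (floating-point rounding)

# Radicals rewritten with rational exponents: sqrt(x) = x**(1/2), etc.
x = 7.0
print(x ** 0.5 == x ** (1 / 2))     # True: sqrt(x) and x^(1/2) are the same operation

# Property of exponents: x^(1/2) * x^(1/3) = x^(1/2 + 1/3) = x^(5/6)
lhs = x ** (1 / 2) * x ** (1 / 3)
rhs = x ** (5 / 6)
print(abs(lhs - rhs) < 1e-12)       # True, up to rounding
```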
Use properties of rational and irrational numbers.
3. Explain why the sum or product of two rational numbers is rational; that the sum of a rational number and an irrational number is irrational; and that the product of a nonzero rational number and an irrational number is irrational.
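A sketch of the explanation this standard asks for, written out in standard notation (this is the usual textbook argument, not wording taken from the standard itself):

```latex
% Sum and product of two rationals are rational: with b, d nonzero integers,
\frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd}, \qquad
\frac{a}{b} \cdot \frac{c}{d} = \frac{ac}{bd},
% and integers are closed under addition and multiplication, so both results
% are again ratios of integers with nonzero denominators.
%
% Rational + irrational is irrational (by contradiction): if r is rational,
% x is irrational, and r + x = q were rational, then
x = q - r
% would be a difference of rationals, hence rational, a contradiction.
% The same argument with division shows r \cdot x is irrational for rational r \neq 0.
```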
Quantities*
Reason quantitatively and use units to solve problems.
1. Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in graphs and data displays.
2. Define appropriate quantities for the purpose of descriptive modeling.
3. Choose a level of accuracy appropriate to limitations on measurement when reporting quantities.
The Complex Number System
Perform arithmetic operations with complex numbers.
1. Know there is a complex number i such that i^2 = –1, and every complex number has the form a + bi with a and b real.
2. Use the relation i^2 = –1 and the commutative, associative, and distributive properties to add, subtract, and multiply complex numbers.
3. (+) Find the conjugate of a complex number; use conjugates to find moduli and quotients of complex numbers.
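Python's built-in complex type makes standards 1-3 concrete (1j plays the role of i); the particular numbers below are arbitrary examples:

```python
# i^2 = -1
i = 1j
print(i ** 2)                        # (-1+0j)

z, w = 3 + 4j, 1 - 2j
print(z + w, z - w, z * w)           # addition, subtraction, multiplication

# Conjugate and modulus
print(z.conjugate())                 # (3-4j)
print(abs(z))                        # 5.0, since |3+4i| = sqrt(3^2 + 4^2)

# Quotient via the conjugate: z/w = z * conj(w) / |w|^2
quotient = z * w.conjugate() / (abs(w) ** 2)
print(quotient, z / w)               # the two agree: (-1+2j)
```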
Represent complex numbers and their operations on the complex plane.
4. (+) Represent complex numbers on the complex plane in rectangular and polar form (including real and imaginary numbers), and explain why the rectangular and polar forms of a given complex number represent the same number.
5. (+) Represent addition, subtraction, multiplication, and conjugation of complex numbers geometrically on the complex plane; use properties of this representation for computation. For example, (-1 + √3 i)^3 = 8 because (-1 + √3 i) has modulus 2 and argument 120°.
6. (+) Calculate the distance between numbers in the complex plane as the modulus of the difference, and the midpoint of a segment as the average of the numbers at its endpoints.
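A sketch using the cmath module to check the example in standard 5 and the distance/midpoint idea in standard 6 (the second pair of points is an arbitrary example):

```python
import cmath
import math

# (-1 + sqrt(3) i) has modulus 2 and argument 120 degrees, so its cube is 8
z = -1 + math.sqrt(3) * 1j
print(abs(z))                          # 2.0  (modulus)
print(math.degrees(cmath.phase(z)))    # ~120.0 (argument)
print(z ** 3)                          # approximately (8+0j)

# Rectangular and polar forms describe the same number
r, theta = cmath.polar(z)
print(cmath.rect(r, theta))            # back to about (-1 + 1.732j)

# Distance as the modulus of the difference; midpoint as the average of the endpoints
p, q = 1 + 2j, 5 - 1j
print(abs(p - q))                      # distance = 5.0
print((p + q) / 2)                     # midpoint = (3+0.5j)
```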
Use complex numbers in polynomial identities and equations.
7. Solve quadratic equations with real coefficients that have complex solutions.
8. (+) Extend polynomial identities to the complex numbers. For example, rewrite x^2 + 4 as (x + 2i)(x – 2i).
9. (+) Know the Fundamental Theorem of Algebra; show that it is true for quadratic polynomials.
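A hedged sketch of standards 7-8: the quadratic formula with cmath.sqrt returns complex roots when the discriminant is negative, and the factorization of x^2 + 4 can be spot-checked numerically. The helper name solve_quadratic is made up for this example.

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of ax^2 + bx + c = 0; cmath.sqrt handles negative discriminants."""
    disc = b * b - 4 * a * c
    root = cmath.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

# x^2 + 4 = 0 has the complex solutions 2i and -2i
print(solve_quadratic(1, 0, 4))            # (2j, -2j)

# Spot-check the identity x^2 + 4 = (x + 2i)(x - 2i) at an arbitrary point
x = 3.7 + 0.2j
print(x ** 2 + 4)
print((x + 2j) * (x - 2j))                 # same value
```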
Vector and Matrix Quantities
Represent and model with vector quantities.
1. (+) Recognize vector quantities as having both magnitude and direction. Represent vector quantities by directed line segments, and use appropriate symbols for vectors and their magnitudes (e.g., v, |v|, ||v||, v).
2. (+) Find the components of a vector by subtracting the coordinates of an initial point from the coordinates of a terminal point.
3. (+) Solve problems involving velocity and other quantities that can be represented by vectors.
Perform operations on vectors.
4. (+) Add and subtract vectors.
a. Add vectors end-to-end, component-wise, and by the parallelogram rule. Understand that the magnitude of a sum of two vectors is typically not the sum of the magnitudes.
b. Given two vectors in magnitude and direction form, determine the magnitude and direction of their sum.
c. Understand vector subtraction v – w as v + (–w), where –w is the additive inverse of w, with the same magnitude as w and pointing in the opposite direction. Represent vector subtraction graphically by connecting the tips in the appropriate order, and perform vector subtraction component-wise.
5. (+) Multiply a vector by a scalar.
a. Represent scalar multiplication graphically by scaling vectors and possibly reversing their direction; perform scalar multiplication component-wise, e.g., as c(v_x, v_y) = (cv_x, cv_y).
b. Compute the magnitude of a scalar multiple cv using ||cv|| = |c|·||v||. Compute the direction of cv knowing that when |c|·||v|| ≠ 0, the direction of cv is either along v (for c > 0) or against v (for c < 0).
Perform operations on matrices and use matrices in applications.
6. (+) Use matrices to represent and manipulate data, e.g., to represent payoffs or incidence relationships in a network.
7. (+) Multiply matrices by scalars to produce new matrices, e.g., as when all of the payoffs in a game are doubled.
8. (+) Add, subtract, and multiply matrices of appropriate dimensions.
9. (+) Understand that, unlike multiplication of numbers, matrix multiplication for square matrices is not a commutative operation, but still satisfies the associative and distributive properties.
10. (+) Understand that the zero and identity matrices play a role in matrix addition and multiplication similar to the role of 0 and 1 in the real numbers. The determinant of a square matrix is nonzero if and only if the matrix has a multiplicative inverse.
11. (+) Multiply a vector (regarded as a matrix with one column) by a matrix of suitable dimensions to produce another vector. Work with matrices as transformations of vectors.
12. (+) Work with 2 × 2 matrices as transformations of the plane, and interpret the absolute value of the determinant in terms of area.
Seeing Structure in Expressions
Interpret the structure of expressions.
1. Interpret expressions that represent a quantity in terms of its context.★
a. Interpret parts of an expression, such as terms, factors, and coefficients.
b. Interpret complicated expressions by viewing one or more of their parts as a single entity. For example, interpret P(1+r)^n as the product of P and a factor not depending on P.
2. Use the structure of an expression to identify ways to rewrite it. For example, see x^4 – y^4 as (x^2)^2 – (y^2)^2, thus recognizing it as a difference of squares that can be factored as (x^2 – y^2)(x^2 + y^2).
Write expressions in equivalent forms to solve problems.
3. Choose and produce an equivalent form of an expression to reveal and explain properties of the quantity represented by the expression.★
a. Factor a quadratic expression to reveal the zeros of the function it defines. (Quadratic Zero Equations , Trinomial Factoring )
b. Complete the square in a quadratic expression to reveal the maximum or minimum value of the function it defines.
c. Use the properties of exponents to transform expressions for exponential functions. For example, the expression 1.15^t can be rewritten as (1.15^(1/12))^(12t) ≈ 1.012^(12t) to reveal the approximate equivalent monthly interest rate if the annual rate is 15%.
4. Derive the formula for the sum of a finite geometric series (when the common ratio is not 1), and use the formula to solve problems. For example, calculate mortgage payments.★
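The mortgage example in standard 4 can be made concrete with a short sketch (added for illustration; the loan amount, rate, and term below are made-up numbers). The payment formula follows from summing the finite geometric series of discounted payments: P = M(1+i)^(-1) + ... + M(1+i)^(-n) = M[1 - (1+i)^(-n)]/i.

    def monthly_payment(principal, annual_rate, years):
        # solve the geometric-series relation P = M*(1 - (1+i)**-n)/i for the payment M
        i = annual_rate / 12          # monthly interest rate
        n = years * 12                # number of monthly payments
        return principal * i / (1 - (1 + i) ** (-n))

    # hypothetical loan: 200,000 at 6% annual interest for 30 years
    print(round(monthly_payment(200000, 0.06, 30), 2))   # about 1199.10 per month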
Arithmetic with Polynomials and Rational Expressions
Perform arithmetic operations on polynomials.
1. Understand that polynomials form a system analogous to the integers, namely, they are closed under the operations of addition, subtraction, and multiplication; add, subtract, and multiply polynomials. (Foil Method )
Understand the relationship between zeros and factors of polynomials.
2. Know and apply the Remainder Theorem: For a polynomial p(x) and a number a, the remainder on division by x – a is p(a), so p(a) = 0 if and only if (x – a) is a factor of p(x).
3. Identify zeros of polynomials when suitable factorizations are available, and use the zeros to construct a rough graph of the function defined by the polynomial.
Use polynomial identities to solve problems.
4. Prove polynomial identities and use them to describe numerical relationships. For example, the polynomial identity (x^2 + y^2)^2 = (x^2 – y^2)^2 + (2xy)^2 can be used to generate Pythagorean triples. (A short sketch using this identity follows standard 5 below.)
5. (+) Know and apply the Binomial Theorem for the expansion of (x + y)^n in powers of x and y for a positive integer n, where x and y are any numbers, with coefficients determined for example by Pascal’s Triangle.
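A small Python sketch (added for illustration, not part of the standards text) that uses the identity in standard 4 to generate Pythagorean triples:

    # For integers x > y > 0, (x^2 + y^2)^2 = (x^2 - y^2)^2 + (2xy)^2,
    # so (x^2 - y^2, 2xy, x^2 + y^2) is a Pythagorean triple.
    def triple(x, y):
        return (x * x - y * y, 2 * x * y, x * x + y * y)

    for x in range(2, 5):
        for y in range(1, x):
            a, b, c = triple(x, y)
            assert a * a + b * b == c * c    # confirms the identity numerically
            print(a, b, c)                   # 3 4 5, 8 6 10, 5 12 13, 15 8 17, ...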
Rewrite rational expressions.
6. Rewrite simple rational expressions in different forms; write a(x)/b(x) in the form q(x) + r(x)/b(x), where a(x), b(x), q(x), and r(x) are polynomials with the degree of r(x) less than the degree of b(x), using inspection, long division, or, for the more complicated examples, a computer algebra system.
7. (+) Understand that rational expressions form a system analogous to the rational numbers, closed under addition, subtraction, multiplication, and division by a nonzero rational expression; add, subtract, multiply, and divide rational expressions.
Creating Equations*
Create equations that describe numbers or relationships.
1. Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions. (Mixture Word Problems , Work Word Problems , Integer Word Problems )
2. Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales.
3. Represent constraints by equations or inequalities, and by systems of equations and/or inequalities, and interpret solutions as viable or nonviable options in a modeling context. For example, represent inequalities describing nutritional and cost constraints on combinations of different foods. (Age Problems )
4. Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations. For example, rearrange Ohm’s law V = IR to highlight resistance R. (Two Variable Equations )
Reasoning with Equations and Inequalities
Understand solving equations as a process of reasoning and explain the reasoning.
1. Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation has a solution. Construct a viable argument to justify a solution method.
2. Solve simple rational and radical equations in one variable, and give examples showing how extraneous solutions may arise.
Solve equations and inequalities in one variable.
3. Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters. (Linear Equations , Single Variable Equations , Single Variable Equations 2 , Single Variable Equations 3 , Single Variable Inequalities )
4. Solve quadratic equations in one variable.
(Quadratic Zero Equations , Quadratic Formula )
a. Use the method of completing the square to transform any quadratic equation in x into an equation of the form (x – p)^2 = q that has the same solutions. Derive the quadratic formula from this form.
b. Solve quadratic equations by inspection (e.g., for x^2 = 49), taking square roots, completing the square, the quadratic formula and factoring, as appropriate to the initial form of the equation. Recognize when the quadratic formula gives complex solutions and write them as a ± bi for real numbers a and b. (Quadratic X-Intercepts , Quadratic Zero Equations , Quadratic Formula )
Solve systems of equations.
5. Prove that, given a system of two equations in two variables, replacing one equation by the sum of that equation and a multiple of the other produces a system with the same solutions.
6. Solve systems of linear equations exactly and approximately (e.g., with graphs), focusing on pairs of linear equations in two variables. (System of Equations Substitution , System of Equations Addition )
7. Solve a simple system consisting of a linear equation and a quadratic equation in two variables algebraically and graphically. For example, find the points of intersection between the line y = –3x and the circle x^2 + y^2 = 3.
8. (+) Represent a system of linear equations as a single matrix equation in a vector variable.
9. (+) Find the inverse of a matrix if it exists and use it to solve systems of linear equations (using technology for matrices of dimension 3 × 3 or greater).
Represent and solve equations and inequalities graphically.
10. Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line).
11. Explain why the x-coordinates of the points where the graphs of the equations y = f(x) and y = g(x) intersect are the solutions of the equation f(x) = g(x); find the solutions approximately, e.g., using technology to graph the functions, make tables of values, or find successive approximations. Include cases where f(x) and/or g(x) are linear, polynomial, rational, absolute value, exponential, and logarithmic functions.★
12. Graph the solutions to a linear inequality in two variables as a half-plane (excluding the boundary in the case of a strict inequality), and graph the solution set to a system of linear inequalities in two variables as the intersection of the corresponding half-planes.
Interpreting Functions
Understand the concept of a function and use function notation.
1. Understand that a function from one set (called the domain) to another set (called the range) assigns to each element of the domain exactly one element of the range. If f is a function and x is an element of its domain, then f(x) denotes the output of f corresponding to the input x. The graph of f is the graph of the equation y = f(x). (Domain and Range )
2. Use function notation, evaluate functions for inputs in their domains, and interpret statements that use function notation in terms of a context.
3. Recognize that sequences are functions, sometimes defined recursively, whose domain is a subset of the integers. For example, the Fibonacci sequence is defined recursively by f(0) = f(1) = 1, f(n+1) = f(n) + f(n-1) for n ≥ 1.
Interpret functions that arise in applications in terms of the context.
4. For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship. Key features include: intercepts; intervals where the function is increasing, decreasing, positive, or negative; relative maximums and minimums; symmetries; end behavior; and periodicity.★
5. Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes. For example, if the function h(n) gives the number of person-hours it takes to assemble n engines in a factory, then the positive integers would be an appropriate domain for the function.★ (Domain and Range )
6. Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph.★
Analyze functions using different representations.
7. Graph functions expressed symbolically and show key features of the graph, by hand in simple cases and using technology for more complicated cases.★
a. Graph linear and quadratic functions and show intercepts, maxima, and minima.
b. Graph square root, cube root, and piecewise-defined functions, including step functions and absolute value functions.
c. Graph polynomial functions, identifying zeros when suitable factorizations are available, and showing end behavior.
d. (+) Graph rational functions, identifying zeros and asymptotes when suitable factorizations are available, and showing end behavior.
e. Graph exponential and logarithmic functions, showing intercepts and end behavior, and trigonometric functions, showing period, midline, and amplitude.
8. Write a function defined by an expression in different but equivalent forms to reveal and explain different properties of the function.
a. Use the process of factoring and completing the square in a quadratic function to show zeros, extreme values, and symmetry of the graph, and interpret these in terms of a context.
b. Use the properties of exponents to interpret expressions for exponential functions. For example, identify percent rate of change in functions such as y = (1.02)^t, y = (0.97)^t, y = (1.01)^(12t), y = (1.2)^(t/10), and classify them as representing exponential growth or decay.
9. Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a graph of one quadratic function and an algebraic expression for another, say which has the larger maximum.
Building Functions
Build a function that models a relationship between two quantities.
1. Write a function that describes a relationship between two quantities.★
a. Determine an explicit expression, a recursive process, or steps for calculation from a context.
b. Combine standard function types using arithmetic operations. For example, build a function that models the temperature of a cooling body by adding a constant function to a decaying exponential, and relate these functions to the model.
c. (+) Compose functions. For example, if T(y) is the temperature in the atmosphere as a function of height, and h(t) is the height of a weather balloon as a function of time, then T(h(t)) is the temperature at the location of the weather balloon as a function of time.
2. Write arithmetic and geometric sequences both recursively and with an explicit formula, use them to model situations, and translate between the two forms.★
Build new functions from existing functions.
3. Identify the effect on the graph of replacing f(x) by f(x) + k, k f(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs. Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
4. Find inverse functions.
a. Solve an equation of the form f(x) = c for a simple function f that has an inverse and write an expression for the inverse. For example, f(x) = 2x^3 or f(x) = (x+1)/(x–1) for x ≠ 1.
b. (+) Verify by composition that one function is the inverse of another.
c. (+) Read values of an inverse function from a graph or a table, given that the function has an inverse.
d. (+) Produce an invertible function from a non-invertible function by restricting the domain.
5. (+) Understand the inverse relationship between exponents and logarithms and use this relationship to solve problems involving logarithms and exponents.
Linear, Quadratic, and Exponential Models*
Construct and compare linear, quadratic, and exponential models and solve problems.
1. Distinguish between situations that can be modeled with linear functions and with exponential functions.
a. Prove that linear functions grow by equal differences over equal intervals, and that exponential functions grow by equal factors over equal intervals.
b. Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
c. Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another.
2. Construct linear and exponential functions, including arithmetic and geometric sequences, given a graph, a description of a relationship, or two input-output pairs (include reading these from a table). (Function Tables , Function Tables 2 )
3. Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
4. For exponential models, express as a logarithm the solution to ab^(ct) = d where a, c, and d are numbers and the base b is 2, 10, or e; evaluate the logarithm using technology.
Interpret expressions for functions in terms of the situation they model.
5. Interpret the parameters in a linear or exponential function in terms of a context.
Trigonometric Functions
Extend the domain of trigonometric functions using the unit circle.
1. Understand radian measure of an angle as the length of the arc on the unit circle subtended by the angle.
2. Explain how the unit circle in the coordinate plane enables the extension of trigonometric functions to all real numbers, interpreted as radian measures of angles traversed counterclockwise around the unit circle.
3. (+) Use special triangles to determine geometrically the values of sine, cosine, tangent for π/3, π/4 and π/6, and use the unit circle to express the values of sine, cosine, and tangent for π – x, π + x, and 2π – x in terms of their values for x, where x is any real number.
4. (+) Use the unit circle to explain symmetry (odd and even) and periodicity of trigonometric functions.
Model periodic phenomena with trigonometric functions.
5. Choose trigonometric functions to model periodic phenomena with specified amplitude, frequency, and midline.★
6. (+) Understand that restricting a trigonometric function to a domain on which it is always increasing or always decreasing allows its inverse to be constructed.
7. (+) Use inverse functions to solve trigonometric equations that arise in modeling contexts; evaluate the solutions using technology, and interpret them in terms of the context.★
Prove and apply trigonometric identities.
8. Prove the Pythagorean identity sin^2(θ) + cos^2(θ) = 1 and use it to calculate trigonometric ratios.
9. (+) Prove the addition and subtraction formulas for sine, cosine, and tangent and use them to solve problems.
Modeling is best interpreted not as a collection of isolated topics but rather in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol (★).
Congruence
Experiment with transformations in the plane
1. Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a circular arc.
2. Represent transformations in the plane using, e.g., transparencies and geometry software; describe transformations as functions that take points in the plane as inputs and give other points as outputs. Compare transformations that preserve distance and angle to those that do not (e.g., translation versus horizontal stretch).
3. Given a rectangle, parallelogram, trapezoid, or regular polygon, describe the rotations and reflections that carry it onto itself.
4. Develop definitions of rotations, reflections, and translations in terms of angles, circles, perpendicular lines, parallel lines, and line segments.
5. Given a geometric figure and a rotation, reflection, or translation, draw the transformed figure using, e.g., graph paper, tracing paper, or geometry software. Specify a sequence of transformations that will carry a given figure onto another.
Understand congruence in terms of rigid motions
6. Use geometric descriptions of rigid motions to transform figures and to predict the effect of a given rigid motion on a given figure; given two figures, use the definition of congruence in terms of rigid motions to decide if they are congruent.
7. Use the definition of congruence in terms of rigid motions to show that two triangles are congruent if and only if corresponding pairs of sides and corresponding pairs of angles are congruent.
8. Explain how the criteria for triangle congruence (ASA, SAS, and SSS) follow from the definition of congruence in terms of rigid motions.
Prove geometric theorems
9. Prove theorems about lines and angles. Theorems include: vertical angles are congruent; when a transversal crosses parallel lines, alternate interior angles are congruent and corresponding angles are congruent; points on a perpendicular bisector of a line segment are exactly those equidistant from the segment’s endpoints.
10. Prove theorems about triangles. Theorems include: measures of interior angles of a triangle sum to 180°; base angles of isosceles triangles are congruent; the segment joining midpoints of two sides of a triangle is parallel to the third side and half the length; the medians of a triangle meet at a point.
11. Prove theorems about parallelograms. Theorems include: opposite sides are congruent, opposite angles are congruent, the diagonals of a parallelogram bisect each other, and conversely, rectangles are parallelograms with congruent diagonals.
Make geometric constructions
12. Make formal geometric constructions with a variety of tools and methods (compass and straightedge, string, reflective devices, paper folding, dynamic geometric software, etc.). Copying a segment; copying an angle; bisecting a segment; bisecting an angle; constructing perpendicular lines, including the perpendicular bisector of a line segment; and constructing a line parallel to a given line through a point not on the line.
13. Construct an equilateral triangle, a square, and a regular hexagon inscribed in a circle.
Similarity, Right Triangles, and Trigonometry
Understand similarity in terms of similarity transformations
1. Verify experimentally the properties of dilations given by a center and a scale factor:
a. A dilation takes a line not passing through the center of the dilation to a parallel line, and leaves a line passing through the center unchanged.
b. The dilation of a line segment is longer or shorter in the ratio given by the scale factor.
2. Given two figures, use the definition of similarity in terms of similarity transformations to decide if they are similar; explain using similarity transformations the meaning of similarity for triangles as the equality of all corresponding pairs of angles and the proportionality of all corresponding pairs of sides.
3. Use the properties of similarity transformations to establish the AA criterion for two triangles to be similar.
Prove theorems involving similarity
4. Prove theorems about triangles. Theorems include: a line parallel to one side of a triangle divides the other two proportionally, and conversely; the Pythagorean Theorem proved using triangle similarity.
5. Use congruence and similarity criteria for triangles to solve problems and to prove relationships in geometric figures.
Define trigonometric ratios and solve problems involving right triangles
6. Understand that by similarity, side ratios in right triangles are properties of the angles in the triangle, leading to definitions of trigonometric ratios for acute angles.
7. Explain and use the relationship between the sine and cosine of complementary angles.
8. Use trigonometric ratios and the Pythagorean Theorem to solve right triangles in applied problems.
Apply trigonometry to general triangles
9. (+) Derive the formula A = 1/2 ab sin(C) for the area of a triangle by drawing an auxiliary line from a vertex perpendicular to the opposite side.
10. (+) Prove the Laws of Sines and Cosines and use them to solve problems.
11. (+) Understand and apply the Law of Sines and the Law of Cosines to find unknown measurements in right and non-right triangles (e.g., surveying problems, resultant forces).
Circles
Understand and apply theorems about circles
1. Prove that all circles are similar.
2. Identify and describe relationships among inscribed angles, radii, and chords. Include the relationship between central, inscribed and circumscribed angles; inscribed angles on a diameter are right angles; the radius of a circle is perpendicular to the tangent where the radius intersects the circle.
3. Construct the inscribed and circumscribed circles of a triangle, and prove properties of angles for a quadrilateral inscribed in a circle.
4. (+) Construct a tangent line from a point outside a given circle to the circle.
Find arc lengths and areas of sectors of circles
5. Derive using similarity the fact that the length of the arc intercepted by an angle is proportional to the radius, and define the radian measure of the angle as the constant of proportionality; derive the formula for the area of a sector.
Expressing Geometric Properties with Equations
Translate between the geometric description and the equation for a conic section
1. Derive the equation of a circle of given center and radius using the Pythagorean Theorem; complete the square to find the center and radius of a circle given by an equation.
2. Derive the equation of a parabola given a focus and directrix.
3. (+) Derive the equations of ellipses and hyperbolas given the foci, using the fact that the sum or difference of distances from the foci is constant.
Use coordinates to prove simple geometric theorems algebraically
4. Use coordinates to prove simple geometric theorems algebraically. For example, prove or disprove that a figure defined by four given points in the coordinate plane is a rectangle; prove or disprove that the point (1, √3) lies on the circle centered at the origin and containing the point (0, 2).
5. Prove the slope criteria for parallel and perpendicular lines and use them to solve geometric problems (e.g., find the equation of a line parallel or perpendicular to a given line that passes through a given point). (Applied Linear Equations 2 )
6. Find the point on a directed line segment between two given points that divides the segment in a given ratio.
7. Use coordinates to compute perimeters of polygons and areas for triangles and rectangles, e.g. using the distance formula.★
Geometric Measurement and Dimension
Explain volume formulas and use them to solve problems
1. Give an informal argument for the formulas for the volume of a cylinder, pyramid, and cone. Use dissection arguments, Cavalieri’s principle, and informal limit arguments.
2. (+) Give an informal argument using Cavalieri’s principle for the formulas for the volume of a sphere and other solid figures.
3. Use volume formulas for cylinders, pyramids, cones and spheres to solve problems.★
Visualize the relation between two-dimensional and three-dimensional objects
4. Identify the shapes of two-dimensional cross-sections of three-dimensional objects, and identify three-dimensional objects generated by rotations of two-dimensional objects.
Modeling with Geometry
Apply geometric concepts in modeling situations
1. Use geometric shapes, their measures and their properties to describe objects (e.g., modeling a tree trunk or a human torso as a cylinder).★
2. Apply concepts of density based on area and volume in modeling situations (e.g., persons per square mile, BTUs per cubic foot).★
3. Apply geometric methods to solve design problems (e.g., designing an object or structure to satisfy constraints or minimize cost; working with typographic grid systems based on ratios).★
Interpreting Categorical and Quantitative Data
Summarize, represent, and interpret data on a single count or measurement variable
1. Represent data with plots on the real number line (dot plots, histograms, and box plots).
2. Use statistics appropriate to the shape of the data distribution to compare center (median, mean) and spread (interquartile range, standard deviation) of two or more different data sets.
3. Interpret differences in shape, center, and spread in the context of the data sets, accounting for possible effects of extreme data points (outliers).
4. Use the mean and standard deviation of a data set to fit it to a normal distribution and to estimate population percentages. Recognize that there are data sets for which such a procedure is not appropriate. Use calculators, spreadsheets and tables to estimate areas under the normal curve.
Summarize, represent, and interpret data on two categorical and quantitative variables
5. Summarize categorical data for two categories in two-way frequency tables. Interpret relative frequencies in the context of the data (including joint, marginal and conditional relative frequencies). Recognize possible associations and trends in the data.
6. Represent data on two quantitative variables on a scatter plot and describe how the variables are related.
a. Fit a function to the data; use functions fitted to data to solve problems in the context of the data. Use given functions or choose a function suggested by the context. Emphasize linear, quadratic, and exponential models.
b. Informally assess the fit of a function by plotting and analyzing residuals.
c. Fit a linear function for a scatter plot that suggests a linear association.
Interpret linear models
7. Interpret the slope (rate of change) and the intercept (constant term) of a linear fit in the context of the data.
8. Compute (using technology) and interpret the correlation coefficient of a linear fit.
9. Distinguish between correlation and causation.
Making Inferences and Justifying Conclusions
Understand and evaluate random processes underlying statistical experiments
1. Understand that statistics is a process for making inferences about population parameters based on a random sample from that population.
2. Decide if a specified model is consistent with results from a given data-generating process, e.g. using simulation. For example, a model says a spinning coin falls heads up with probability 0.5. Would a result of 5 tails in a row cause you to question the model?
Make inferences and justify conclusions from sample surveys, experiments and observational studies
3. Recognize the purposes of and differences among sample surveys, experiments and observational studies; explain how randomization relates to each.
4. Use data from a sample survey to estimate a population mean or proportion; develop a margin of error through the use of simulation models for random sampling.
5. Use data from a randomized experiment to compare two treatments; use simulations to decide if differences between parameters are significant.
6. Evaluate reports based on data.
Conditional Probability and the Rules of Probability
Understand independence and conditional probability and use them to interpret data
1. Describe events as subsets of a sample space (the set of outcomes) using characteristics (or categories) of the outcomes, or as unions, intersections, or complements of other events (“or,” “and,” “not”).
2. Understand that two events A and B are independent if the probability of A and B occurring together is the product of their probabilities, and use this characterization to determine if they are independent. (Probability 2 , Object Picking Probability )
3. Understand the conditional probability of A given B as P(A and B)/P(B), and interpret independence of A and B as saying that the conditional probability of A given B is the same as the probability of A, and the conditional probability of B given A is the same as the probability of B.
4. Construct and interpret two-way frequency tables of data when two categories are associated with each object being classified. Use the two-way table as a sample space to decide if events are independent and to approximate conditional probabilities. For example, collect data from a random sample of students in your school on their favorite subject among math, science, and English. Estimate the probability that a randomly selected student from your school will favor science given that the student is in tenth grade. Do the same for other subjects and compare the results.
5. Recognize and explain the concepts of conditional probability and independence in everyday language and everyday situations. For example, compare the chance of having lung cancer if you are a smoker with the chance of being a smoker if you have lung cancer.
Use the rules of probability to compute probabilities of compound events in a uniform probability model
6. Find the conditional probability of A given B as the fraction of B’s outcomes that also belong to A, and interpret the answer in terms of the model.
7. Apply the Addition Rule, P(A or B) = P(A) + P(B) – P(A and B), and interpret the answer in terms of the model.
8. (+) Apply the general Multiplication Rule in a uniform probability model, P(A and B) = P(A)P(B|A) = P(B)P(A|B), and interpret the answer in terms of the model.
9. (+) Use permutations and combinations to compute probabilities of compound events and solve problems.
Using Probability to Make Decisions
Calculate expected values and use them to solve problems
1. (+) Define a random variable for a quantity of interest by assigning a numerical value to each event in a sample space; graph the corresponding probability distribution using the same graphical displays as for data distributions.
2. (+) Calculate the expected value of a random variable; interpret it as the mean of the probability distribution.
3. (+) Develop a probability distribution for a random variable defined for a sample space in which theoretical probabilities can be calculated; find the expected value. For example, find the theoretical probability distribution for the number of correct answers obtained by guessing on all five questions of a multiple-choice test where each question has four choices, and find the expected grade under various grading schemes. (A short numeric sketch of this example follows standard 4 below.)
4. (+) Develop a probability distribution for a random variable defined for a sample space in which probabilities are assigned empirically; find the expected value. For example, find a current data distribution on the number of TV sets per household in the United States, and calculate the expected number of sets per household. How many TV sets would you expect to find in 100 randomly selected households?
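The multiple-choice example in standard 3 can be checked with a short sketch (illustrative only; the five-question, four-choice setup comes from the standard, the grading scheme is left out).

    from math import comb

    n, p = 5, 0.25      # five questions, probability 1/4 of guessing each one correctly
    dist = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

    expected = sum(k * prob for k, prob in dist.items())
    print(dist)         # theoretical probability of 0, 1, ..., 5 correct answers
    print(expected)     # 1.25 correct answers expected when guessing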
Use probability to evaluate outcomes of decisions
5. (+) Weigh the possible outcomes of a decision by assigning probabilities to payoff values and finding expected values.
a. Find the expected payoff for a game of chance. For example, find the expected winnings from a state lottery ticket or a game at a fast-food restaurant.
b. Evaluate and compare strategies on the basis of expected values. For example, compare a high-deductible versus a low-deductible automobile insurance policy using various, but reasonable, chances of having a minor or a major accident.
6. (+) Use probabilities to make fair decisions (e.g., drawing by lots, using a random number generator).
7. (+) Analyze decisions and strategies using probability concepts (e.g., product testing, medical testing, pulling a hockey goalie at the end of a game).
| http://www.mathscore.com/math/standards/Common%20Core/High%20School/ | 13
50 | Domain of a function
To understand what the domain of a function is, it is important to understand what an ordered pair is.
An ordered pair is a pair of numbers inside parentheses such as (5, 6).
Generally speaking, you can write (x, y).
x is called the x-coordinate and y is called the y-coordinate.
If you have more than one ordered pair, the collection is called a set of ordered pairs, or a relation.
Basically, the domain of a function is the set of first coordinates (x-coordinates) of a set of ordered pairs or relation.
For example, take a look at the following relation or set of ordered pairs.
( 1, 2), ( 2, 4), (3, 6), ( 4, 8), ( 5,10), (6, 12), (7,14)
The domain is 1, 2, 3, 4, 5, 6, 7. We will not focus on the range too much here. This lesson is about the domain of a function. However, the range is the set of second coordinates: 2, 4, 6, 8, 10, 12, 14.
Let's say you have a business (selling books) and your business follows the following model:
Sell 3 books, make 12 dollars. (3, 12)
Sell 4 books, make 16 dollars. (4, 16)
Sell 5 books, make 20 dollars. (5, 20)
Sell 6 books, make 24 dollars. (6, 24)
The domain of your business is 3, 4, 5, and 6.
Pretend now that you can sell unlimited books. (3, 4, 5, 6, 7, ........).
Your domain in this case will be all whole numbers
You may then need a more convenient way to represent your business situation
A close look at your business model shows that the y-coordinate equals the x-coordinate × 4
y = 4x
You can write (x, 4x). In this case, the domain is x and x represents all whole numbers or your entire domain for this situation.
In reality, it makes more sense for you to sell unlimited books.
Thus, when the domain is only 3, 4, 5, and 6, we call this type of domain a restricted domain, since you restrict yourself to only a portion of your entire domain
In some cases, some value(s) must be excluded from your domain in order for things to make sense
Consider, for instance, all integers and their reciprocals (multiplicative inverses), shown below as ordered pairs
...,(-4, 1/-4), (-3, 1/-3), (-2, 1/-2), (-1, 1/-1), (0, 1/0),(1, 1/1) (2, 1/2), (3, 1/3), (4, 1/4), ...
One of these domain values will not make sense. Do you know which one?
It is 0. If the domain value is 0, then 1/0 does not make sense, since 1/0 is undefined.
Instead of writing all these ordered pairs, you could just write (x, 1/x) and say that the domain of definition is x such that x is not equal to 0
In general, the domain of definition of any rational expression is any number except those that make the denominator equal to 0
What is the domain of (6x + 7)/(x - 5)?
The denominator equals 0 when x - 5 = 0, that is, when x = 5
The domain for this rational expression is any number except 5
What is the domain of (-x + 5)/(x^2 + 4)?
The denominator equals 0 when x^2 + 4 = 0
However, x^2 + 4 is never equal to 0. Why? Because x^2 is never negative, no matter what number you replace x with, so x^2 + 4 is at least 4
4^2 = 16 and 16 is positive. 16 + 4 is still positive
(-5)^2 = 25 and 25 is positive. 25 + 4 is still positive
However, if you change the denominator to x^2 - 4, the denominator will be 0 for some numbers
x^2 - 4 = 0 when x = -2 and x = 2
2^2 - 4 = 2 × 2 - 4 = 4 - 4 = 0
(-2)^2 - 4 = (-2) × (-2) - 4 = 4 - 4 = 0
The domain in this case will be any number except 2 and -2
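A small sketch of the same idea, assuming the SymPy library is available (the expressions mirror the examples above): the values excluded from the domain are exactly the real roots of the denominator.

    from sympy import symbols, solve

    x = symbols('x')

    print(solve(x - 5, x))       # [5]          -> (6x + 7)/(x - 5): every number except 5
    print(solve(x**2 + 4, x))    # [-2*I, 2*I]  -> no real roots, so no real number is excluded
    print(solve(x**2 - 4, x))    # [-2, 2]      -> a denominator of x^2 - 4 excludes -2 and 2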
Consider now all integers and their square roots as shown below with ordered pairs
...,(-4, √-4), (-3, √-3), (-2, √-2), (-1, √-1), (0, √0),(1, √1) (2, √2), (3, √3), (4, √4), ...
Many of these domain values will not make sense. Do you know which ones?
They are -4, -3, -2, and -1. For any of these domain values, the square root does not exist. At least it does not exist for real numbers. It does exist for complex numbers, but this is a completely different story that we will not consider here
Our assumption here is that we are working with real numbers only when looking for the domain of a function, and the square root of a negative number does not exist in the real numbers!
Instead of writing all these ordered pairs, you could just write (x, √x) and say that the domain of definition is x such that x is greater than or equal to 0
What is the domain of √ (x - 5)?
When you deal with square roots, the number under the square root sign is called a radicand
√ (x - 5) is defined when the radicand x - 5 is greater than or equal to 0
x - 5 ≥ 0
x - 5 + 5 ≥ 0 + 5
x ≥ 5
The domain of definition is any number greater than or equal to 5
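The same kind of check works for square roots: solve the inequality that keeps the radicand greater than or equal to 0. A minimal sketch, again assuming SymPy:

    from sympy import symbols, solve_univariate_inequality

    x = symbols('x')

    # sqrt(x - 5) is defined over the real numbers when the radicand is >= 0
    domain = solve_univariate_inequality(x - 5 >= 0, x)
    print(domain)    # an interval equivalent to x >= 5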
As you can see, a function's formula does not always make sense for every value of x. It is your job to find and exclude such values when you look for the domain of a function
| http://www.basic-mathematics.com/domain-of-a-function.html | 13
68 | In mechanics we are interested in trying to understand the motion of objects. In this chapter, the motion of objects in 1 dimension will be discussed. Motion in 1 dimension is motion along a straight line.
The position of an object along a straight line can be uniquely identified by its distance from a (user chosen) origin. (see Figure 2.1). Note: the position is fully specified by 1 coordinate (that is why this is a 1-dimensional problem).
Figure 2.1. One-dimensional position.
Figure 2.2. x vs. t graphs for various velocities.
For a given problem, the origin can be chosen at whatever point is convenient. For example, the position of the object at time t = 0 is often chosen as the origin. The position of the object will in general be a function of time: x(t). Figure 2.2. shows the position as a function of time for an object at rest, and for objects moving to the left and to the right.
The slope of the curve in the position versus time graph depends on the velocity of the object. See for example Figure 2.3. After 10 seconds, the cheetah has covered a distance of 310 meters, the human 100 meters, and the pig 50 meters. Obviously, the cheetah has the highest velocity. A similar conclusion is obtained when we consider the time required to cover a fixed distance. The cheetah covers 300 meters in 10 s, the human in 30 s, and the pig requires 60 s. It is clear that a steeper slope of the curve in the x vs. t graph corresponds to a higher velocity.
Figure 2.3. x vs. t graphs for various creatures.
An object that changes its position has a non-zero velocity. The average velocity of an object during a specified time interval is defined as:
v_avg = [x(t2) - x(t1)]/(t2 - t1) = Δx/Δt
If the object moves to the right, the average velocity is positive. An object moving to the left has a negative average velocity. It is clear from the definition of the average velocity that it depends only on the position of the object at time t = t1 and at time t = t2. This is nicely illustrated in sample problem 2-1 and 2-2.
Sample Problem 2-1
You drive a beat-up pickup truck down a straight road for 5.2 mi at 43 mi/h, at which point you run out of fuel. You walk 1.2 mi farther, to the nearest gas station, in 27 min (= 0.450 h). What is your average velocity from the time you started your truck to the time that you arrived at the station ?
The pickup truck initially covers a distance of 5.2 miles with a velocity of 43 miles/hour. This takes 7.3 minutes. After the pickup truck runs out of gas, it takes you 27 minutes to walk to the nearest gas station which is 1.2 miles down the road. When you arrive at the gas station, you have covered (5.2 + 1.2) = 6.4 miles, during a period of (7.3 + 27) = 34.3 minutes. Your average velocity up to this point is: v_avg = 6.4 miles / 34.3 minutes = 0.19 miles/minute = 11.2 miles/hour.
Sample Problem 2-2
Suppose you next carry the fuel back to the truck, making the round-trip in 35 min. What is your average velocity for the full journey, from the start of your driving to you arrival back at the truck with the fuel ?
It takes you another 35 minutes to walk back to your car. When you reach your truck, you are again 5.2 miles from the origin, and have been traveling for (34.3 + 35) = 69.3 minutes. At that point your average velocity is: v_avg = 5.2 miles / 69.3 minutes = 0.075 miles/minute = 4.5 miles/hour.
After this episode, you return home. You cover the 5.2 miles again in 7.3 minutes (velocity equals 43 miles/hour). When you arrive home, you are 0 miles from your origin, and obviously your average velocity is 0 miles/hour.
The average velocity of the pickup truck which was left in the garage is also 0 miles/hour. Since the average velocity of an object depends only on its initial and final location and time, and not on the motion of the object in between, it is in general not a useful parameter. A more useful quantity is the instantaneous velocity of an object at a given instant. The instantaneous velocity is the value that the average velocity approaches as the time interval over which it is measured approaches zero:
v(t) = lim(Δt → 0) Δx/Δt = dx/dt
For example: see sample problem 2-5.
The velocity of the object at t = 3.5 s can now be calculated:
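Since sample problem 2-5 itself is not reproduced in these notes, here is a sketch of the limiting procedure with a made-up position function x(t) = 2t^2 + 3 (in meters). The average velocity over a shrinking interval around t = 3.5 s approaches the instantaneous velocity dx/dt = 4t = 14 m/s.

    def x(t):
        return 2 * t**2 + 3                  # hypothetical position in meters, t in seconds

    t = 3.5
    for dt in (1.0, 0.1, 0.01, 0.001):
        v_avg = (x(t + dt) - x(t)) / dt      # average velocity over the interval [t, t + dt]
        print(dt, v_avg)                     # 16.0, 14.2, 14.02, 14.002 -> approaches 14 m/s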
The velocity of an object is defined in terms of the change of position of that object over time. A quantity used to describe the change of the velocity of an object over time is the acceleration a. The average acceleration over a time interval between t1 and t2 is defined as:
a_avg = [v(t2) - v(t1)]/(t2 - t1) = Δv/Δt
Note the similarity between the definition of the average velocity and the definition of the average acceleration. The instantaneous acceleration a is defined as:
a(t) = lim(Δt → 0) Δv/Δt = dv/dt
From the definition of the acceleration, it is clear that the acceleration has the following units: m/s^2 (meters per second squared).
A positive acceleration is in general interpreted as meaning an increase in velocity. However, this is not correct. From the definition of the acceleration, we can conclude that the acceleration is positive if v(t2) > v(t1), that is, if Δv > 0.
This is obviously true if the velocities are positive, and the velocity is increasing with time. However, it is also true for negative velocities if the velocity becomes less negative over time.
Objects falling under the influence of gravity are one example of objects moving with constant acceleration. A constant acceleration means that the acceleration does not depend on time: a(t) = a = constant.
Integrating this equation, the velocity of the object can be obtained:
v(t) = v0 + a t
where v0 is the velocity of the object at time t = 0. From the velocity, the position of the object as a function of time can be calculated:
x(t) = x0 + v0 t + (1/2) a t^2
where x0 is the position of the object at time t = 0.
Note 1: verify these relations by integrating the formulas for the position and the velocity.
Note 2: the equations of motion are the basis for most problems (see sample problem 7).
Sample Problem 2-8
Spotting a police car, you brake a Porsche from 75 km/h to 45 km/h over a distance of 88m. a) What is the acceleration, assumed to be constant ? b) What is the elapsed time ? c) If you continue to slow down with the acceleration calculated in (a) above, how much time would elapse in bringing the car to rest from 75 km/h ? d) In (c) above, what distance would be covered ? e) Suppose that, on a second trial with the acceleration calculated in (a) above and a different initial velocity, you bring your car to rest after traversing 200 m. What was the total braking time ?
a) Our starting points are the equations of motion:
v(t) = v0 + a t    (1)
x(t) = x0 + v0 t + (1/2) a t^2    (2)
The following information is provided:
* v(t = 0) = v0 = 75 km/h = 20.8 m/s
* v(t1) = 45 km/h = 12.5 m/s
* x(t = 0) = x0 = 0 m (Note: origin defined as position of Porsche at t = 0 s)
* x(t1) = 88 m
* a = constant
From eq.(1) we obtain:
t1 = [v(t1) - v0]/a    (3)
Substitute (3) in (2):
x(t1) = v0 [v(t1) - v0]/a + (1/2) [v(t1) - v0]^2/a = [v(t1)^2 - v0^2]/(2a)    (4)
From eq.(4) we can obtain the acceleration a:
a = [v(t1)^2 - v0^2]/[2 x(t1)] = [(12.5)^2 - (20.8)^2]/(2 × 88) ≈ - 1.6 m/s^2    (5)
b) Substitute eq.(5) into eq.(3):
t1 = (12.5 - 20.8)/(- 1.6) ≈ 5.2 s    (6)
c) The car is at rest at time t2:
v(t2) = v0 + a t2 = 0    (7)
Substituting the acceleration calculated using eq.(5) into eq.(3):
t2 = - v0/a = 20.8/1.6 ≈ 13 s    (8)
d) Substitute t2 (from eq.(8)) and a (from eq.(5)) into eq.(2):
x(t2) = v0 t2 + (1/2) a t2^2 ≈ 140 m    (9)
e) The following information is provided:
* v(t3) = 0 m/s (Note: Porsche at rest at t = t3)
* x(t = 0) = x0 = 0 m (Note: origin defined as position of Porsche at t = 0)
* x(t3) = 200 m
* a = constant = - 1.6 m/s^2
Eq.(1) tells us:
v(t3) = v0 + a t3 = 0, so v0 = - a t3    (10)
Substitute eq.(10) into eq.(2):
x(t3) = (- a t3) t3 + (1/2) a t3^2 = - (1/2) a t3^2    (11)
The time t3 can now easily be calculated:
t3 = sqrt[- 2 x(t3)/a] = sqrt(2 × 200/1.6) ≈ 16 s    (12)
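The chain of substitutions above can be verified numerically. The following sketch (not part of the original notes) recomputes parts (a) through (e) directly from the constant-acceleration equations; small differences from the rounded values quoted above come from carrying more digits.

    import math

    v0 = 75 / 3.6        # 75 km/h in m/s
    v1 = 45 / 3.6        # 45 km/h in m/s
    d = 88.0             # braking distance in m

    a = (v1**2 - v0**2) / (2 * d)      # (a) from v1^2 = v0^2 + 2*a*d   -> about -1.6 m/s^2
    t1 = (v1 - v0) / a                 # (b) elapsed time               -> about 5.3 s
    t2 = -v0 / a                       # (c) time to stop from 75 km/h  -> about 13 s
    d2 = v0 * t2 + 0.5 * a * t2**2     # (d) distance covered           -> about 140 m
    t3 = math.sqrt(2 * 200.0 / -a)     # (e) time to stop over 200 m    -> about 16 s

    print(a, t1, t2, d2, t3)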
A special case of constant acceleration is free fall (falling in vacuum). In problems of free fall, the direction of free fall is defined along the y-axis, and the positive position along the y-axis corresponds to upward motion. The acceleration due to gravity (g) equals 9.8 m/s^2 (along the negative y-axis). The equations of motion for free fall are very similar to those discussed previously for constant acceleration:
v(t) = v0 - g t
y(t) = y0 + v0 t - (1/2) g t^2
where y0 and v0 are the position and the velocity of the object at time t = 0.
A pitcher tosses a baseball straight up, with an initial speed of 25 m/s. (a) How long does it take to reach its highest point ? (b) How high does the ball rise above its release point ? (c) How long will it take for the ball to reach a point 25 m above its release point ?
Figure 2.4. Vertical position of baseball as function of time.
a) Our starting points are the equations of motion:
v(t) = v0 - g t
y(t) = y0 + v0 t - (1/2) g t^2
The initial conditions are:
* v(t = 0) = v0 = 25 m/s (upwards movement)
* y(t = 0) = y0 = 0 m (Note: origin defined as position of ball at t = 0)
* g = 9.8 m/s^2
The highest point is obtained at time t = t1. At that point, the velocity is zero:
v(t1) = v0 - g t1 = 0, so t1 = v0/g = 25/9.8 ≈ 2.6 s
The ball reaches its highest point after 2.6 s (see Figure 2.4).
b) The position of the ball at t1 = 2.6 s can be easily calculated:
y(t1) = v0 t1 - (1/2) g t1^2 ≈ 31.9 m
c) The equation for y(t) can be easily rewritten as:
(1/2) g t^2 - v0 t + y = 0
where y is the height of the ball at time t. This equation can be easily solved for t using the quadratic formula:
t = [v0 ± sqrt(v0^2 - 2 g y)]/g
Using the initial conditions specified in (a) this equation can be used to calculate the time at which the ball reaches a height of 25 m (y = 25 m):
t = 1.4 s
t = 3.7 s
Figure 2.5. Velocity of the baseball as function of time.
The velocities of the ball at these times are (see also Figure 2.5):
v(t = 1.4 s) = + 11.3 m/s
v(t = 3.7 s) = - 11.3 m/s
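A numerical check of the baseball problem (a sketch added here, not part of the original notes). Note that the ±11.3 m/s quoted above comes from plugging the rounded times 1.4 s and 3.7 s back into v(t); the exact times give about ±11.6 m/s.

    import math

    v0, g = 25.0, 9.8

    t_top = v0 / g                        # (a) time to the highest point -> about 2.55 s
    y_max = v0**2 / (2 * g)               # (b) maximum height            -> about 31.9 m

    # (c) times when y = 25 m: solve (1/2) g t^2 - v0 t + y = 0
    y = 25.0
    disc = math.sqrt(v0**2 - 2 * g * y)
    t_up, t_down = (v0 - disc) / g, (v0 + disc) / g     # about 1.37 s and 3.74 s

    print(t_top, y_max, t_up, t_down)
    print(v0 - g * t_up, v0 - g * t_down)               # about +11.6 m/s and -11.6 m/s at y = 25 m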
At t = 1.4 s, the ball is at y = 25 m with positive velocity (upwards motion). At t = 2.6 s, the ball reaches its highest point (v = 0). After t = 2.6 s, the ball starts falling down (negative velocity). At t= 3.7 s the ball is located again at y = 25 m, but now moves downwards. | http://teacher.pas.rochester.edu/phy121/LectureNotes/Chapter02/Chapter2.html | 13 |
75 | Students will build a family of cylinders and discover the relation between the dimensions of the generating rectangle and the resulting pair of cylinders. They will order the cylinders by their volumes and draw a conclusion about the relation between a cylinder's dimensions and its volume. They will also calculate the volumes of the family of cylinders with constant area. Finally, they will write the volumes of the cylinders as a function of radius.
8 1/2" by 11" sheets of paper for the class (transparencies work well for the initial experiment), tape, ruler, graph paper, fill material (birdseed, Rice Krispies, Cheerios, packing "peanuts," etc.).
Cylinder, dimension, area, circumference, height, lateral surface area, volume
Take a sheet of paper and join the top and bottom edges to form a cylinder. The edges should meet exactly, with no gaps or overlap. With another sheet of paper the same size and aligned the same way, join the left and right edges to make another cylinder.
Stand both cylinders on a table. One of the cylinders will be tall and narrow; the other will be short and stout. We will refer to the tall cylinder as cylinder A and the short one as cylinder B. Mark each cylinder now to avoid confusion later.
Now pose the following question to the class: "Do you think the two cylinders will hold the same amount? Or will one hold more than the other? If you think that one will hold more, which one will that be?" Have them record their predictions, with an explanation.
Place cylinder B in a large flat box with cylinder A inside it. Fill cylinder A. Ask for someone to restate his or her predictions and explanation. With flair, slowly lift cylinder A so that the filler material falls into cylinder B. (You might want to pause partway through, to allow them to think about their answers.) Since the filler material does not fill cylinder B, we can conclude that cylinder B holds more than cylinder A.
Ask the class: "Was your prediction correct? Do the two cylinders hold the same amount? Why or why not? Can we explain why they don't?" (Note to the teacher: because the volume of the cylinder equals pi*r^2*h, r has more effect than h [because r is squared], and therefore the cylinder with the greater radius will have the greater volume.)
"Let's go back and look at our original sheet of paper. We made two different cylinders from it. What geometric shape is the sheet of paper?" (rectangle) "What are its dimensions?" (8.5" by 11").
"What are the dimensions of the resulting cylinders? That is, what is the height and what is the circumference?" (The height of the cylinder is the length of the side of the paper rectangle that you taped, and the circumference is the length of the other side.)
"Are there any other cylinders that we can make from this same sheet of paper?" (Yes. There are many cylinders that can be made.)
"Let's try to make some other cylinders. If we fold a new sheet of paper lengthwise and cut it in half, we will get two pieces -- each measuring 4.25" by 11" -- which we can tape together to form a rectangle 4.25" by 22". We can repeat the process to create a second rectangle the same size. Now we can roll these rectangles into two different cylinders, one 4.25" high and another 22" high. We will label them cylinder C (4.25" high) and cylinder D (22" high)."
"Now we have four cylinders. Which of them would hold the most? Write down your predictions."
Test by filling. Have a student report the results.
Now have the students arrange the cylinders in order, by volume, from the cylinder that holds the least to the cylinder that holds the most. "Do you see any pattern that relates the size of the cylinder and the amounts they hold?" (As they get taller and narrower the cylinders hold less, and as they get shorter and stouter, they hold more.)
"How many other cylinders could we make from a rectangle with these same dimensions?" (Theoretically, infinitely many. Cylinders could get taller and narrower and taller and narrower until they were infinitely tall and infinitely narrow, or they could get shorter and stouter and shorter and stouter until they were infinitely short and infinitely stout.)
"We think that the taller the cylinder, the smaller the volume, and the shorter the cylinder, the greater the volume. Can we write this in mathematical language that will help us confirm our observations? What formulas relate to this problem?"
C = 2pi*r or pi*d [circumference of a circle]. So if our ultimate goal is to calculate the volume, then the formula we will need to use is V = pi*r^2*h [volume of a cylinder].
"Let's go back to our original sheet of paper. What were its dimensions?" (8.5" by 11")
"Which of these two dimensions represents the height of the cylinder?" (11". The height of the taped edge of the paper is the height of the cylinder.)
"Half way there. We have found h. Now on to r."
"How does the circumference of the cylinder relate to the dimensions of the rectangle?" (The base of the rectangle is the circumference of the cylinder.)
"So, since the circumference is equal to 2pi*r and the circumference equals the base of the rectangle, then C = 2pi*r = 8.5"
So now we can solve for r. How do we do that?" (Divide both sides of the equation by 2pi.)
"What do we get?" (r = 8.5/(2pi) = 1.35282")
"Now we have r and h and we are ready to find the volume. Let's put them both into the volume formula,
V = pi*r^2*h. Using substitution, V = pi*(1.35282)^2*(11) ≈ 63.2 in^3. "Now you do the other cylinder and see what you get. Compare the volumes of the two cylinders. Do your results confirm what we discovered with our physical models?" (Note to teacher: you may need to lead students through the reasoning here as well.)
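A sketch (added for illustration) that computes the volumes of all four cylinders built so far from the same 8.5" by 11" sheet. It confirms the earlier observation: the shorter and stouter the cylinder, the more it holds.

    import math

    # (height, circumference) in inches for cylinders A, B, C, D; all have lateral area 93.5 in^2
    cylinders = {"A": (11.0, 8.5), "B": (8.5, 11.0), "C": (4.25, 22.0), "D": (22.0, 4.25)}

    for name, (h, c) in cylinders.items():
        r = c / (2 * math.pi)                  # from C = 2*pi*r
        v = math.pi * r**2 * h                 # V = pi*r^2*h
        print(name, round(r, 3), round(v, 1))  # A ~63.2, B ~81.8, C ~163.7, D ~31.6 in^3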
Organizing Material: Complete a Table
Remember our conclusion relating the dimensions of the cylinder to its volume? (As cylinders get taller and narrower they hold less, and as they get shorter and stouter they hold more.) Fill out the following table, and confirm that calculation.
Multiply the number in the first column of the above table by the number in the second column. What do you notice? (The products are all equal.) Why is this true? (These products represent the base times the height of the rectangle -- in other words, the area. Since the cylinders were all made from sheets of paper having the same dimensions, they all have the same area. The rectangle area represents the lateral surface area in the cylinder.)
Find a cylinder that has a volume of over 300 in^3 with a lateral surface area of 93.5 in^2. In the above table, fill in all four columns for that cylinder.
Using Algebra: Consider the whole family of cylinders that you could make with a fixed lateral surface area of 93.5 in^2. Remember how many there would be? (Theoretically, infinitely many.) Write an expression for the volume of the cylinders as a function of r.
(Note to the teacher: Students need to learn to see the relation between r and h, so they can rewrite h in terms of r in the formula V = pi*r^2*h. Since the lateral surface area 2pi*r*h equals 93.5, h = 93.5/(2pi*r).)
V = pi*r^2*[93.5/(2pi*r)] = (93.5/2)*r = 46.75r. What kind of function do you get? (linear)
How could you use figures from the table to check your equation? (Pick an r from the table, put it into the equation, and see whether you get the correct V. [How do I know I'm right?])
Using our function, find the volume when r = 50, or when r = .005. Does the function confirm our earlier observation that as the cylinders get taller and narrower they hold less, and as they get shorter and stouter, they hold more? Explain.
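A quick check of the linear model (an illustrative sketch; 93.5 in^2 is the constant lateral surface area shared by this family of cylinders):

    import math

    def volume(r, lateral_area=93.5):
        h = lateral_area / (2 * math.pi * r)    # h in terms of r for a fixed lateral area
        return math.pi * r**2 * h               # simplifies to (lateral_area/2)*r = 46.75*r

    print(volume(1.35282))    # about 63.2 in^3, matching cylinder A above
    print(volume(50))         # 2337.5 in^3: a very short, stout cylinder holds a lot
    print(volume(0.005))      # about 0.23 in^3: a very tall, narrow cylinder holds almost nothing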
Note to the teacher: you may want to assign the Constant Perimeter Project as an out-of-class or a next-day project, depending on the amount of guidance you feel you will need to give.
Evaluate answers from group projects. If you want to do further evaluation, you can add a similar problem on your next test using a different-sized sheet of paper. Example: Find the height of a "baseless" cylinder that would yield the maximum volume given a family of cylinders made from rectangles with a constant perimeter of 40.
After students have completed the Constant Perimeter Project, talk about the concept of maximum volume. Notice that the rectangle that gives the greatest volume is twice as wide as it is high. In other words, p/6 is the height, and p/3 is the circumference that gives the greatest volume. This is the interesting fact described in the activity that follows. Have students explore whether this is a unique occurrence that applies only to this particular perimeter, or whether it seems to be true given any P. | http://mathforum.org/brap/wrap2/highlesson.html | 13 |
113 | Science Fair Project Encyclopedia
In electrodynamics, polarization is a property of waves, such as light and other electromagnetic radiation. Unlike more familiar wave phenomena such as waves on water or sound waves, electromagnetic waves are three-dimensional, and it is their vector nature that gives rise to the phenomenon of polarization.
Basics: plane waves
The simplest manifestation of polarization to visualize is that of a plane wave, which is a good approximation to most light waves. A plane wave is one where the direction of the magnetic and electric fields are confined to a plane perpendicular to the propagation direction. Simply because the plane is two-dimensional, the electric vector in the plane at a point in space can be decomposed into two orthogonal components. Call these the x and y components (following the conventions of analytic geometry). For a simple harmonic wave, where the amplitude of the electric vector varies in a sinusoidal manner, the two components have exactly the same frequency. However, these components have two other defining characteristics that can differ. First, the two components may not have the same amplitude. Second, the two components may not have the same phase, that is they may not reach their maxima and minima at the same time in the fixed plane we are talking about. By considering the shape traced out in a fixed plane by the electric vector as such a plane wave passes over it (a Lissajous figure), we obtain a description of the polarization state. The following figures show some examples of the evolution of the electric field vector (blue) with time, along with its x and y components (red/left and green/right) and the path made by the vector in the plane (purple):
Consider first the special case (left) where the two orthogonal components are in phase. In this case the strength of the two components are always equal or related by a constant ratio, so the direction of the electric vector (the vector sum of these two components) will always fall on a single line in the plane. We call this special case linear polarization. The direction of this line will depend on the relative amplitude of the two components. This direction can be in any angle in the plane, but the direction never varies.
Now consider another special case (center), where the two orthogonal components have exactly the same amplitude and are exactly ninety degrees out of phase. In this case one component is zero when the other component is at maximum or minimum amplitude. Notice that there are two possible phase relationships that satisfy this requirement. The x component can be ninety degrees ahead of the y component or it can be ninety degrees behind the y component. In this special case the electric vector in the plane formed by summing the two components will rotate in a circle. We call this special case circular polarization. The direction of rotation will depend on which of the two phase relationships exists. We call these cases right-hand circular polarization and left-hand circular polarization, depending on which way the electric vector rotates.
All the other cases, that is where the two components are not in phase and either do not have the same amplitude and/or are not ninety degrees out of phase (e.g. right) are called elliptical polarization because the sum electric vector in the plane will trace out an ellipse (the "polarization ellipse").
In nature, electromagnetic radiation is often produced by a large ensemble of individual radiators, producing waves independently of each other. This type of light is termed incoherent. In general there is no single frequency but rather a spectrum of different frequencies present, and even if filtered to an arbitrarily narrow frequency range, there may not be a consistent state of polarization. However, this does not mean that polarization is only a feature of coherent radiation. Incoherent radiation may show statistical correlation between the components of the electric field, which can be interpreted as partial polarization. In general it is possible to describe an observed wave field as the sum of a completely incoherent part (no correlations) and a completely polarized part. One may then describe the light in terms of the degree of polarization, and the parameters of the polarization ellipse.
For ease of visualization, polarization states are often specified in terms of the polarization ellipse, specifically its orientation and elongation. A common parameterization uses the azimuth angle, ψ (the angle between the major semi-axis of the ellipse and the x-axis) and the ellipticity, ε (the ratio of the two semi-axes). Ellipticity is used in preference to the more common geometrical concept of eccentricity, which is of limited physical meaning in the case of polarization. An ellipticity of zero corresponds to linear polarization and an ellipticity of 1 corresponds to circular polarization. The arctangent of the ellipticity, χ = tan⁻¹ ε (the "ellipticity angle"), is also commonly used. An example is shown in the diagram to the right.
Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector):
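In the Cartesian (x, y) basis, writing the two field amplitudes and phases as E_x, E_y and φ_x, φ_y (these particular symbols are one common notational choice), the Jones vector takes the form

\mathbf{e} = \begin{pmatrix} E_x\, e^{i\varphi_x} \\ E_y\, e^{i\varphi_y} \end{pmatrix}

Linear polarization along x corresponds to (1, 0)^T, and circular polarization to (1, ±i)^T/√2, with the sign convention determining which of the two is called right-handed and which left-handed.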
Notice that the product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered but the polarization state itself is independent of absolute phase. Note also that the basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product. A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization.
Regardless of whether polarization ellipses are represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle. On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north. Another coordinate system frequently used relates to the plane made by the propagation direction and a vector normal to the plane of a reflecting surface. This is illustrated in the diagram to the right. The components of the electric field parallel and perpendicular to this plane are termed "p-like" (parallel) and "s-like" (senkrecht, i.e. perpendicular in German). Alternative terms are pi-polarized, tangential plane polarized, vertically polarized, or a transverse-magnetic (TM) wave for the p-component; and sigma-polarized, sagittal plane polarized, horizontally polarized, or a transverse-electric (TE) wave for the s-component.
In the case of partially polarized radiation, the Jones vector varies in time and space in a way that differs from the constant rate of phase rotation of monochromatic, purely polarized waves. In this case, the wave field is likely stochastic, and only statistical information can be gathered about the variations and correlations between components of the electric field. This information is embodied in the coherency matrix:
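With the Jones-vector components written e_x and e_y as above, the coherency matrix has the standard form

\Psi = \left\langle \mathbf{e}\,\mathbf{e}^{\dagger} \right\rangle =
\begin{pmatrix}
\langle e_x e_x^{*} \rangle & \langle e_x e_y^{*} \rangle \\
\langle e_y e_x^{*} \rangle & \langle e_y e_y^{*} \rangle
\end{pmatrix}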
where angular brackets denote averaging over many wave cycles. Several variants of the coherency matrix have been proposed: the Wiener coherency matrix and the spectral coherency matrix of Richard Barakat measure the coherence of a spectral decomposition of the signal, while the Wolf coherency matrix averages over all time/frequencies.
The coherency matrix contains all of the information on polarization that is obtainable using second order statistics. It can be decomposed into the sum of two idempotent matrices, corresponding to the eigenvectors of the coherency matrix, each representing a polarization state that is orthogonal to the other. An alternative decomposition is into completely polarized (zero determinant) and unpolarized (scaled identity matrix) components. In either case, the operation of summing the components corresponds to the incoherent superposition of waves from the two components. The latter case gives rise to the concept of the "degree of polarization", i.e. the fraction of the total intensity contributed by the completely polarized component.
The coherency matrix is not easy to visualize, and it is therefore common to describe incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. An alternative and mathematically convenient description is given by the Stokes parameters, introduced by George Gabriel Stokes in 1852. The relationship of the Stokes parameters to intensity and polarization ellipse parameters is shown in the equations and figure below.
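In the notation of this article (total intensity I, degree of polarization p, azimuth ψ, and ellipticity angle χ), the standard relations are

S_0 = I
S_1 = I\,p\,\cos 2\psi\,\cos 2\chi
S_2 = I\,p\,\sin 2\psi\,\cos 2\chi
S_3 = I\,p\,\sin 2\chi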
Here Ip, 2ψ and 2χ are the spherical coordinates of the polarization state in the three-dimensional space of the last three Stokes parameters. Note the factors of two before ψ and χ corresponding respectively to the facts that any polarization ellipse is indistinguishable from one rotated by 180°, or one with the semi-axis lengths swapped accompanied by a 90° rotation. The Stokes parameters are sometimes denoted I, Q, U and V.
The Stokes parameters contain all of the information of the coherency matrix, and are related to it linearly by means of the identity matrix plus the three Pauli matrices:
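One common convention (authors differ on which Pauli matrix is paired with which of S_1, S_2, S_3) is

\Psi = \tfrac{1}{2}\sum_{k=0}^{3} S_k\,\sigma_k, \qquad S_k = \operatorname{tr}(\sigma_k \Psi),

where σ_0 is the 2×2 identity matrix and σ_1, σ_2, σ_3 are the Pauli matrices.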
Mathematically, the factor of two relating physical angles to their counterparts in Stokes space derives from the use of second-order moments and correlations, and incorporates the loss of information due to absolute phase invariance.
The figure above makes use of a convenient representation of the last three Stokes parameters as components in a three-dimensional vector space. This space is closely related to the Poincaré sphere, which is the spherical surface occupied by completely polarized states in the space of the vector
All four Stokes parameters can also be combined into the four-dimensional Stokes vector, which can be interpreted as four-vectors of Minkowski space. In this case, all physically realizable polarization states correspond to time-like, future-directed vectors.
Propagation, reflection and scattering
In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space in time while the polarization state does not. That is:
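In the Jones-vector notation used above, with angular frequency ω, this can be written

\mathbf{e}(z, t) = \mathbf{e}\, e^{i(kz - \omega t)},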
where k is the wavenumber and positive z is the direction of propagation. As noted above, the physical electric vector is the real part of the Jones vector. When electromagnetic waves interact with matter, their propagation is altered. If this depends on the polarization states of the waves, then their polarization may also be altered.
In many types of media, electromagnetic waves are decomposed into two orthogonal components that encounter different propagation effects. A similar situation occurs in the signal processing paths of detection systems that record the electric field directly. Such effects are most easily characterized in the form of a complex 2×2 transformation matrix called the Jones matrix:
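The Jones matrix J acts on the Jones vector of the incoming wave to give the Jones vector of the outgoing wave:

\mathbf{e}' = J\,\mathbf{e}, \qquad J \in \mathbb{C}^{2\times 2}.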
In general the Jones matrix of a medium depends on the frequency of the waves.
For propagation effects in two orthogonal modes, the Jones matrix can be written as:
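A form consistent with the description that follows (whether T or its inverse appears on the left depends on the direction chosen for the basis change) is

J = T \begin{pmatrix} g_1 & 0 \\ 0 & g_2 \end{pmatrix} T^{-1},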
where g1 and g2 are complex numbers representing the change in amplitude and phase caused in each of the two propagation modes, and T is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors. For those media in which the amplitudes are unchanged but a differential phase delay occurs, the Jones matrix is unitary, while those affecting amplitude without phase have Hermitian Jones matrices. In fact, since any matrix may be written as the product of unitary and positive Hermitian matrices, any sequence of linear propagation effects, no matter how complex, can be written as the product of these two basic types of transformations.
Media in which the two modes accrue a differential delay are called birefringent. Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). An easily visualized example is one where the propagation modes are linear, and the incoming radiation is linearly polarized at a 45° angle to the modes. As the phase difference starts to appear, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) with an azimuth angle perpendicular to the original direction, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360° phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes (this is a consequence of the isomorphism of SU(2) with SO(3)). Examples for linear (blue), circular (red) and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarized images of whatever is viewed through them. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarization state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colors and rainbow-like effects.
Media in which the amplitude of waves propagating in one of the modes is reduced are called dichroic. Devices that block nearly all of the radiation in one mode are known as polarizing filters or simply "polarizers". In terms of the Stokes parameters, the total intensity is reduced while vectors in the Poincaré sphere are "dragged" towards the direction of the favored mode. Mathematically, under the treatment of the Stokes parameters as a Minkowski 4-vector, the transformation is a scaled Lorentz boost (due to the isomorphism of SL(2,C) and the restricted Lorentz group, SO(3,1)). Just as the Lorentz transformation preserves the proper time, the quantity S0² − S1² − S2² − S3² (proportional to det Ψ) is invariant within a multiplicative scalar constant under Jones matrix transformations (dichroic and/or birefringent).
In birefringent and dichroic media, in addition to writing a Jones matrix for the net effect of passing through a particular path in a given medium, the evolution of the polarization state along that path can be characterized as the (matrix) product of an infinite series of infinitesimal steps, each operating on the state produced by all earlier matrices. In a uniform medium each step is the same, and one may write
where J is an overall (real) gain/loss factor. Here D is a traceless matrix such that αDe gives the derivative of e with respect to z. If D is Hermitian the effect is dichroism, while a unitary matrix models birefringence. The matrix D can be expressed as a linear combination of the Pauli matrices, where real coefficients give Hermitian matrices and imaginary coefficients give unitary matrices. The Jones matrix in each case may therefore be written with the convenient construction:
where σ is a 3-vector composed of the Pauli matrices (used here as generators for the Lie group SL(2,C)) and n and m are real 3-vectors on the Poincaré sphere corresponding to one of the propagation modes of the medium. The effects in that space correspond to a Lorentz boost of velocity parameter 2β along the given direction, or a rotation of angle 2φ about the given axis. These transformations may also be written as biquaternions (quaternions with complex elements), where the elements are related to the Jones matrix in the same way that the Stokes parameters are related to the coherency matrix. They may then be applied in pre- and post-multiplication to the quaternion representation of the coherency matrix, with the usual exploitation of the quaternion exponential for performing rotations and boosts taking a form equivalent to the matrix exponential equations above (See: Quaternion rotation).
In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at a (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on the angle of incidence and the angle of refraction. In addition, if the plane of the reflecting surface is not aligned with the plane of propagation of the wave, the polarization of the two parts is altered. In general, the Jones matrices of the reflection and transmission are real and diagonal, making the effect similar to that of a simple linear polarizer. For unpolarized light striking a surface at a certain optimum angle of incidence known as Brewster's angle, the reflected wave will be completely s-polarized.
Certain effects do not produce linear transformations of the Jones vector, and thus cannot be described with (constant) Jones matrices. For these cases it is usual instead to use a 4×4 matrix that acts upon the Stokes 4-vector. Such matrices were first used by Paul Soleillet in 1929, although they have come to be known as Mueller matrices. While every Jones matrix has a Mueller matrix, the reverse is not true. Mueller matrices are frequently used to study the effects of the scattering of waves from complex surfaces or ensembles of particles.
Polarization in nature, science, and technology
Observing polarization effects in everyday life
Light which reflects off a flat surface is generally at least partially polarized. If you hold a polarizing filter with its transmission axis at 90 degrees to the polarization of the reflection, the reflected glare is reduced or eliminated. A polarizing filter blocks light polarized at 90 degrees to its transmission axis, which is why you can lay two polarizers atop one another at 90 degree angles to each other and essentially no light will pass through.
Polarized light can be observed all around you if you know what it is and what to look for (the lenses of Polaroid® sunglasses will work to demonstrate this). While viewing through the filter, rotate it, and if linear or elliptically polarized light is present the degree of illumination will change. Polarization by scattering is observed as light passes through our atmosphere. The scattered light often produces a glare in the skies. Photographers know that this partial polarization of scattered light produces a washed-out sky. An easy first observation is to look, at sunset, at the sky at a 90° angle from the sun, where the scattered light is most strongly polarized. Another easily observed effect is the drastic reduction in brightness of images of the sky and clouds reflected from horizontal surfaces, which is the reason why polarizing filters are often used in sunglasses. Also frequently visible through polarizing sunglasses are rainbow-like patterns caused by color-dependent birefringent effects, for example in toughened glass (e.g. car windows) or items made from transparent plastics. The role played by polarization in the operation of liquid crystal displays (LCDs) is also frequently apparent to the wearer of polarizing sunglasses, which may reduce the contrast or even make the display unreadable.
Many animals are apparently capable of perceiving the polarization of light, which is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization selective reflective tissue. Sky polarization can also be perceived by some vertebrates, including pigeons, for which the ability is but one of many aids to homing.
The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See pleochroism.
In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe.
Technological applications of polarization are extremely widespread. Perhaps the most commonly encountered example is the liquid crystal display. All radio transmitters and receivers are intrinsically polarized, special use of which is made in radar. In engineering, the relationship between strain and birefringence motivates the use of polarization in characterizing the distribution of stress and strain in prototypes. Electronically controlled birefringent devices are used in combination with polarizing filters as modulators in fiber optics. Polarizing filters are also used in photography. They can deepen the color of a blue sky and eliminate reflections from windows.
Sky polarization has been exploited in the "sky compass", which was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g. under daytime cloud or twilight). It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone") in their extensive expeditions across the North Atlantic in the 9th - 11th centuries, before the arrival of the magnetic compass in Europe in the 12th century. Related to the sky compass is the "polar clock", invented by Charles Wheatstone in the late 19th century.
- Principles of Optics, M. Born & E. Wolf, Cambridge University Press, 7th edition 1999, ISBN 0521642221
- Fundamentals of polarized light : a statistical optics approach, C. Brosseau, Wiley, 1998, ISBN 0-471-14302-2
- Polarized Light, Production and Use, William A. Shurcliff, Harvard University Press, 1962.
- Optics, Eugene Hecht, Addison Wesley, 4th edition 2002, hardcover, ISBN 0-8053-8566-5
- Polarised Light in Science and Nature, D. Pye, Institute of Physics Publishing, 2001, ISBN 0750306734
- Polarized Light in Nature, G. P. Können, Translated by G. A. Beerling, Cambridge University Press, 1985, hardcover, ISBN 0-521-25862-6
- polarization.com: Polarized Light in Nature and Technology
- Polarized Light Digital Image Gallery: Microscopic images made using polarization effects
- The relationship between photon spin and polarization
- A virtual polarization microscope
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Polarization | 13
85 | The cosine addition formula calculates the cosine of an angle that is either the sum or difference of two other angles. It arises from the law of cosines and the distance formula. By using the cosine addition formula, the cosine of both the sum and difference of two angles can be found with the two angles' sines and cosines.
I want to derive the cosine of a difference formula but let me explain what that means first. Let's take a look at a picture, this is the unit circle and I have two points a and b and the coordinates of point a are cosine alpha, sine alpha, coordinates of point b are cosine beta, sine beta. I'm interested in finding a formula for the cosine of this angle here beta minus alpha this angle is beta this is alpha so this angle between the two is beta minus alpha I want a cosine for that.
Now let's recall the law of cosines: if I have a triangle and I want to find the length of side c, I need the other two sides and the angle between them, and I'll use this formula: c squared equals a squared plus b squared minus 2ab times the cosine of gamma, the angle between them.
Alright, let's start by using the law of cosines on this picture, so first law of cosines and let's observe I want to solve for this length ab squared that will be our c squared so ab squared equals and then my a and b these two lengths are both 1 because this is the unit circle the radius is 1 so any radius is going to have length 1 both of these have length 1 so it'll be 1 squared plus 1 squared minus 2 times 1 times 1 times the cosine of the angle between them this angle so 1 squared plus 1 squared minus 2 times 1 times 1 times the cosine of beta minus alpha, beta minus alpha again is this angle. Okay now just simplifying a little bit this is ab squared equals 2 minus 2 cosine of beta minus alpha. Okay so that's one formula.
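In symbols, writing AB for the distance between the two points, this first application of the law of cosines gives:
AB² = 1² + 1² − 2(1)(1)·cos(β − α) = 2 − 2·cos(β − α)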
Now the second thing I want to do is use the distance formula. Remember the distance formula is how we find the distance between two points in the plane and I want to find the distance between these two points actually I want the distance squared. But the distance formula would say that length ab is the square root of and you take the difference in the x coordinates and that's cosine beta minus cosine alpha squared plus the difference in the y coordinates that is sine beta minus sine alpha squared. Now I'm actually interested in the square of this right because eventually I'm going to equate what I get with this so I want ab squared equals and then I'm going to have cosine of beta minus cosine alpha squared plus sine beta minus sine alpha squared. Now we got to expand this, okay it's going to be a mess but prepare yourself, I'm expanding this I get cosine squared beta minus 2 cosine beta cosine alpha plus cosine squared alpha right that's the expansion of this term. Now let's do this term, so plus sine squared beta minus twice the product 2 sine beta sine alpha plus sine squared alpha. It looks terrible but something really really nice is about to happen. Check this out, we got cosine squared beta plus sine squared beta very nice that adds up to one by the Pythagorean identity so I put a 1 down here.
We also have cosine squared alpha and sine squared alpha that also adds up to a 1 so it's another 1, so you write this equal 1+1 and then I have the rest of the stuff the minus 2 let me observe that both of the remaining terms have a minus 2 in front of them so I can write minus 2 I can factor that out and I'll be left with cosine beta cosine alpha right factored out of here and out of here I'll have a plus sine beta sine alpha, so this is precisely 2 minus 2 cosine beta cosine alpha plus sine beta sine alpha. That's what ab squared equals from the distance formula. If we equate it to what ab squared equaled from the law of cosines, we get this right we get this thing equals this 2 minus 2 cosine beta minus alpha. Now let's observe that in both sides both sides of this equation we have 2 minus 2 times something we can cancel these 2's and then divide both sides by negative 2 and what we'll get is this cosine beta cosine alpha plus sine beta sine alpha equals this cosine of beta minus alpha that's our formula bring it right up here. The cosine of beta minus alpha equals cosine beta cosine alpha plus sine beta sine alpha. This is the cosine of the difference formula and we'll use it a lot in coming lessons. | http://www.brightstorm.com/math/trigonometry/advanced-trigonometry/the-cosine-addition-formulas/ | 13 |
58 | Glossary items by topic: Physics
The coldest possible temperature, at which all molecular motion stops. On the Kelvin temperature scale, this temperature is the zero point (0 K), which is equivalent to –273° C and –460° F.
A process by which lighter elements capture helium nuclei (alpha particles) to form heavier elements. For example, when a carbon nucleus captures an alpha particle, a heavier oxygen nucleus is formed.
The size of a wave from the top of a wave crest to its midpoint.
A property that an object, such as a planet revolving around the Sun, possesses by virtue of its rotation or circular motion. An object’s angular momentum cannot change unless some force acts to speed up or slow down its circular motion. This principle, known as conservation of angular momentum, is why an object can indefinitely maintain a circular motion around an axis of revolution or rotation.
Matter made up of elementary particles whose masses are identical to their normal-matter counterparts but whose other properties, such as electric charge, are reversed. The positron is the antimatter counterpart of an electron, with a positive charge instead of a negative charge. When an antimatter particle collides with its normal-matter counterpart, both particles are annihilated and energy is released.
The smallest unit of matter that possesses chemical properties. All atoms have the same basic structure: a nucleus containing positively charged protons with an equal number of negatively charged electrons orbiting around it. In addition to protons, most nuclei contain neutral neutrons whose mass is similar to that of protons. Each atom corresponds to a unique chemical element determined by the number of protons in its nucleus.
The positively charged core of an atom consisting of protons and (except for hydrogen) neutrons, and around which electrons orbit.
Celsius (Centigrade) Temperature Scale
A temperature scale on which the freezing point of water is 0° C and the boiling point is 100° C.
A pure substance consisting of atoms or ions of two or more different elements. The elements are in definite proportions. A chemical compound usually possesses properties unlike those of its constituent elements. For example, table salt (the common name for sodium chloride) is a chemical compound made up of the elements chlorine and sodium.
The chemical (i.e., pre-biological) changes that transformed simple atoms and molecules into the more complex chemicals needed for the origin of life. For example, hydrogen atoms in the cores of stars combine through nuclear fusion to form the heavier element helium.
An event involving a collision of objects; for example, the excitation of a hydrogen atom when it is hit by an electron.
The visual perception of light that enables human eyes to differentiate between wavelengths of the visible spectrum, with the longest wavelengths appearing red and the shortest appearing blue or violet.
Conservation of Energy And Mass
A fundamental law of physics, which states that the total amount of mass and energy in the universe remains unchanged. However, mass can be converted to energy, and vice versa.
The transfer of heat through a liquid or gas caused by the physical upwelling of hot matter. The heat transfer results in the circulation of currents from lower, hotter regions to higher, cooler regions. An everyday example of this process is boiling water. Convection occurs in the Sun and other stars.
The ratio of the mass of an object to its volume. For example, water has a density of one gram of mass for every milliliter of volume.
A special form of hydrogen (an isotope called “heavy hydrogen”) that has a neutron as well as a proton in its nucleus.
The change in the wavelength of sound or light waves caused when the object emitting the waves moves toward or away from the observer; also called Doppler Shift. In sound, the Doppler Effect causes a shift in sound frequency or pitch (for example, the change in pitch noted as an ambulance passes). In light, an object’s visible color is altered and its spectrum is shifted toward the blue region of the spectrum for objects moving toward the observer and toward the red for objects moving away.
A fundamental force that governs all interactions among electrical charges and magnetism. Essentially, all charged particles attract oppositely charged particles and repel identically charged particles. Similarly, opposite poles of magnets attract and like magnetic poles repel.
The science dealing with the physical relationship between electricity and magnetism. The principle of an electromagnet, a magnet generated by electrical current flow, is based on this phenomenon.
A negatively charged elementary particle that typically resides outside the nucleus of an atom but is bound to it by electromagnetic forces. An electron's mass is tiny: 1,836 electrons equal the mass of one proton.
Electron Volt (eV)
A unit of energy that is equal to the energy that an electron gains as it moves through a potential difference of one volt. This very small amount of energy is equal to 1.602 × 10⁻¹⁹ joules. Because an electron volt is so small, engineers and scientists sometimes use the terms MeV (mega, or million) and GeV (giga, or billion) electron volts.
A substance composed of a particular kind of atom. All atoms with the same number of protons (atomic numbers) in the nucleus are examples of the same element and have identical chemical properties. For example, gold (with 79 protons) and iron (with 26 protons) are both elements, but table salt is not because it is made from two different elements: sodium and chlorine. The atoms of a particular element have the same number of protons in the nucleus and exhibit a unique set of chemical properties. There are about 90 naturally occurring elements on Earth.
Particles smaller than atoms that are the basic building blocks of the universe. The most prominent examples are photons, electrons, and quarks.
The minimum velocity required for an object to escape the gravity of a massive object.
The spherical outer boundary of a black hole. Once matter crosses this threshold, the speed required for it to escape the black hole’s gravitational grip is greater than the speed of light.
A greater-than-minimum energy state of any atom that is achieved when at least one of its electrons resides at a greater-than-normal distance from its parent nucleus.
Fahrenheit Temperature Scale
A temperature scale on which the freezing point of water is 32° F and the boiling point is 212° F.
A nuclear process that releases energy when heavyweight atomic nuclei break down into lighter nuclei. Fission is the basis of the atomic bomb.
The flow of fluid, particles, or energy through a given area within a certain time. In astronomy, this term is often used to describe the rate at which light flows. For example, the amount of light (photons) striking a single square centimeter of a detector in one second is its flux.
Describes the number of wave crests passing by a fixed point in a given time period (usually one second). Frequency is measured in Hertz (Hz).
A nuclear process that releases energy when light atomic nuclei combine to form heavier nuclei. Fusion is the energy source for stars like our Sun.
Also known as geostationary. An orbit in which an object circles the Earth once every 24 hours, moving at the same speed and direction as the planet’s rotation. The object remains nearly stationary above a particular point, as observed from Earth. The International Ultraviolet Explorer (IUE) and some weather satellites are examples of satellites in geosynchronous orbit.
Gravitational Constant (G)
A value used in the calculation of the gravitational force between objects. In the equation describing the force of gravity, "G" represents the gravitational constant and is equal to 6.672 × 10⁻¹¹ N·m²/kg².
A condition that occurs when an object’s inward-pulling gravitational forces exceed the outward-pushing pressure forces, thus causing the object to collapse on itself. For example, when the pressure forces within an interstellar gas cloud cannot resist the gravitational forces that act to compress the cloud, then the cloud collapses upon itself to form a star.
Gravity (Gravitational Force)
The attractive force between all masses in the universe. All objects that have mass possess a gravitational force that attracts all other masses. The more massive the object, the stronger the gravitational force. The closer objects are to each other, the stronger the gravitational attraction.
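For reference, the equation referred to under Gravitational Constant above is Newton's law of universal gravitation; writing m1 and m2 for the two masses and r for the distance between them (notation chosen here for illustration):
F = G × (m1 × m2) / r²
Doubling either mass doubles the force, while doubling the distance reduces the force to one-fourth, as described under Inverse Square Law below.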
The minimum energy state of an atom that is achieved when all of its electrons have the lowest possible energy and therefore are as close to the nucleus as possible.
The amount, degree, or quantity of energy passing through a point per unit time. For example, the intensity of light that Earth receives from the Sun is far greater than that from any other star because the Sun is the closest star to us.
Inverse Square Law
A law that describes any quantity, such as gravitational force, that decreases with the square of the distance between two objects. For example, if the distance between two objects is doubled, then the gravitational force exerted between them is one-fourth as strong. Likewise, if the distance to a star is doubled, then its apparent brightness is only one-fourth as great.
An atom with one or more electrons removed (or added), giving the atom a positive (or negative) charge.
The process by which ions are produced, typically by collisions with other atoms or electrons, or by absorption of electromagnetic radiation.
An atom of a given element having a particular number of neutrons in the nucleus. Isotopes of a given element differ in the numbers of neutrons within the nucleus. Adding or subtracting a neutron from the nucleus changes an atom's mass but does not affect its basic chemical properties.
The temperature scale most commonly used in science, on which absolute zero is the lowest possible value. On this scale, water freezes at 273 K and boils at 373 K.
A measure of distance in the metric system equal to 1000 meters or about 0.6 of a mile.
The energy that an object has by virtue of its motion.
A specific wavelength (91.2 nm) that corresponds to the energy needed to ionize a hydrogen atom (13.6 eV). Galactic space is opaque at wavelengths shorter than the Lyman limit. Subsequently, light from cosmic objects at wavelengths less than the Lyman limit is exceedingly difficult to detect.
A region of space in which magnetic forces may be detected or may affect the motion of an electrically charged particle. As with gravity, magnetism has a long-range effect and magnetic fields are associated with many astronomical objects.
A measure of the total amount of matter contained within an object.
A highly efficient energy-generation process in which equal amounts of matter and antimatter collide and destroy each other, thus producing a burst of energy.
The average speed of the molecules in a gas of a given temperature.
A tightly knit group of two or more atoms bound together by electromagnetic forces among the atoms’ electrons and nuclei. For example, water (H2O) is two hydrogen atoms bound with one oxygen atom. Identical molecules have identical chemical properties.
A neutral, weakly interacting elementary particle having a very tiny mass. Stars like the Sun produce more than 200 trillion trillion trillion neutrinos every second. Neutrinos from the Sun interact so weakly with other matter that they pass straight through the Earth as if it weren’t there.
A device designed to detect neutrinos.
A neutral (no electric charge) elementary particle having slightly more mass than a proton and residing in the nucleus of all atoms other than hydrogen.
Radiation that is not produced from heat energy — for example, radiation released when a very fast-moving charged particle (such as an electron) interacts with a magnetic force field. Because the electron’s velocity in this case is not related to the gas temperature, this process has nothing to do with heat.
The process by which an atomic nucleus is transformed into another type of atomic nucleus. For example, by removing an alpha particle from the nucleus, the element radium is transformed into the element radon.
The degree to which light is prevented from passing through an object or a substance. Opacity is the opposite of transparency. As an object’s opacity increases, the amount of light passing through it decreases. Glass, for example, is transparent and most clouds are opaque.
Periodic Table (of the Elements)
A chart of all the known chemical elements arranged according to the number of protons in the nucleus (also known as the atomic number). Elements with similar properties are grouped together in the same column.
The release of electrons from a solid material when it is struck by radiant energy, such as visible or ultraviolet light, X-rays, or gamma rays.
A packet of electromagnetic energy, such as light. A photon is regarded as a charge-less, mass-less particle having an indefinitely long lifetime.
The graphical representation of the mathematical relationship between the frequency (or wavelength) and intensity of radiation emitted from an object by virtue of its heat energy.
A substance composed of charged particles, like ions and electrons, and possibly some neutral particles. Our Sun is made of plasma. Overall, the charge of a plasma is electrically neutral. Plasma is regarded as an additional state of matter because its properties are different from those of solids, liquids, and normal gases.
The energy of an object owing to its position in a force field or its internal condition, as opposed to kinetic energy, which depends on its motion. Examples of objects with potential energy include a diver on a diving board and a coiled spring.
A positively charged elementary particle that resides in the nucleus of every atom.
A series of nuclear events occurring in the core of a star whereby hydrogen nuclei (protons) are converted into helium nuclei. This process releases energy.
A basic building block of protons, neutrons, and other elementary particles.
RADAR (Radio Detection and Ranging)
A method of detecting, locating, or tracking an object by using beamed, reflected, and timed radio waves. RADAR also refers to the electronic equipment that uses radio waves to detect, locate, and track objects.
An event involving the emission or absorption of radiation. For example, a hydrogen atom that absorbs a photon of light converts the energy of that radiation into electrical potential energy.
The spontaneous decay of certain rare, unstable, atomic nuclei into more stable atomic nuclei. A natural by-product of this process is the release of energy.
A theory of physics that describes the dynamical behavior of matter and energy. The consequences of relativity can be quite strange at very high velocities and very high densities. A direct result of the theory of relativity is the equation E = mc², which expresses a relationship between mass (m), energy (E), and the speed of light (c).
The orbital motion of one object around another. The Earth revolves around the Sun in one year. The moon revolves around the Earth in approximately 28 days.
The spin of an object around its central axis. Earth rotates about its axis every 24 hours. A spinning top rotates about its center shaft.
A high-pressure wave that travels at supersonic speeds. Shock waves are usually produced by an explosion.
The four-dimensional coordinate system (three dimensions of space and one of time) in which physical events are located.
Speed Of Light (c)
The speed at which light (photons) travels through empty space is roughly 3 × 10⁸ meters per second or 300 million meters per second.
The force that binds protons and neutrons within atomic nuclei and is effective only at distances less than 10⁻¹³ centimeters.
A measure of the amount of heat energy in a substance, such as air, a star, or the human body. Because heat energy corresponds to motions and vibrations of molecules, temperature provides information about the amount of molecular motion occurring in a substance.
Radiation released by virtue of an object's heat, namely, the transfer of heat energy into the radiative energy of electromagnetic waves. Examples of thermal radiation are sunlight, the orange glow of an electric range, and the light from an incandescent light bulb.
Unstable and disorderly motion, as when a smooth, flowing stream becomes a churning rapid.
The speed of an object moving in a specific direction. A car traveling at 35 miles per hour is a measurement of speed. Observing that a car is traveling 35 miles per hour due north is a measurement of velocity.
A vibration in some media that transfers energy from one place to another. Sound waves are vibrations passing in air. Light waves are vibrations in electromagnetic fields.
The distance between two wave crests. Radio waves can have lengths of several feet; the wavelengths of X-rays are roughly the size of atoms.
The force that governs the change of one kind of elementary particle into another. This force is associated with radioactive processes that involve neutrons. | http://www.hubblesite.org/reference_desk/glossary/index.php?topic=topic_physics | 13 |
60 | Lesson 1. Lisp
ACL2 uses a subset of the Lisp programming language. Lisp syntax is one of the simplest ones of any programming language. However, it is different from most other languages, so let's take a moment to get acquainted with it.
First of all, every operation in Lisp is a function, even very basic mathematical operations such as addition. Secondly, functions are called in a different way than they are in mathematics or in most other languages. While in most languages you might write f(x, y), in Lisp you write (f x y).
The addition function is called +. Try adding two numbers now:
(+ 1 2)
Tip: You can click on any code you see in this tutorial to insert it at the prompt.
'Lisp' is short for 'list processing'. Lists, specifically linked lists, are a fundamental part of how Lisp represents and manipulates data. The function used to create a list is called, appropriately, list. Try creating a list:
(list 8 6 7 5 3 0 9)
You can use the single quote mark as a type of shorthand for creating lists. Unlike with (list ...), the single quote marker also "quotes" the items inside of it. While (list 1 2 (+ 1 2)) would be the same as (list 1 2 3), '(1 2 (+ 1 2)) is the same as (list 1 2 (list '+ 1 2)).
The single quoted + sign, '+, is called a "symbol". You can use the single quote to create symbols out of sequences of characters.
append concatenates two (or more) lists together. Try:
(append '(1 2 3) '(4 5 6))
There are two more basic types of values in ACL2 that are important: strings and booleans. Strings are double quoted, "like this". The two boolean values are t (for true) and nil (for false). Actually, any value (except nil) is considered to be true for boolean tests. Use t if no other value makes more sense.
The primary use for boolean functions and values is to branch using the (if ...) function.
(if test if-true if-false)
Try it now with one of the simple boolean values:
(if t "True" "False")
(if ...) doesn't do much good here; you already know that t is true. You'll almost always call (if ...) with a function call for the second parameter. Here are some functions that return boolean values:
(< a b) (> a b) (<= a b) (>= a b) (= a b) (/= a b)
These are the six equality and inequality tests. They should be familiar to you. The final one is 'not equal'; the slash is meant to signify the crossed out equal sign of mathematics.
(integerp x) (rationalp x) (acl2-numberp x) (complex-rationalp x) (natp x)
These functions 'recognize' some of the different types of numbers available in ACL2 (that is, they return t when their argument is of the requested type; (integerp x) returns t when x is an integer). The terminal 'p' is a common idiom from Lisp and means "predicate". You can imagine it as a sort of question mark; (natp i) means "Is i a natural number?".
(if (= (+ 1 3) (+ 5 2 -3)) "Equal" "Not equal")
(endp xs) (listp xs) (true-listp xs) (equal xs ys)
These functions relate to lists. (endp xs) checks to see if xs is an empty list (if we are "at the end" of the list, a phrasing that makes sense if you think about these functions as they are used recursively). (listp xs) is a recognizer for lists. (true-listp xs) is a stronger recognizer for lists that checks to see if the marker used for the empty list is nil, as it is in the system-created lists. Most ACL2 functions are intended to work with "true" lists, like the ones you can construct with (list ...). (equal ...) tests the equality of lists (or more simple elements); (= ...) is intended for numbers.
As a side note, I often use xs, ys, zs, etc., pronounced like English plurals ("exes", "whys", "zees"), for lists and x, y, etc. for numbers or simple values. This is just a convention, not a part of Lisp.
This time, try (true-listp '(a b c)) to verify what I asserted earlier.
By now, if you're an experienced programmer, you're probably expecting a way to assign variables. However, one of the important parts of how ACL2 differs from regular Common Lisp is that it does not allow variables to be modified in place. In essence, it doesn't allow variables at all; only constants and aliases.
Instead, to produce meaningful or complex programs, you'll need to modify and return variables by creating new ones. This may seem unusual and may take some time to get accustomed to if you've never used an applicative programming language, but once you get the hang of it, it's pretty easy.
Let's start with aliases. They are a good way to "save" the result of a complex computation and use it more than once, so you don't have to redo expensive operations. Aliasing is done with the (let ...) function:
(let ((alias1 value1) (alias2 value2) ...) body)
where the aliases are symbols, just like before (but without the ') and the values are either literal values, or, more often, function calls.
(let ((xs (append '(1 2 3) '(4 5 6)))) (and (listp xs) (true-listp xs)))
In the previous example, I snuck in an extra simple operator: (and ...). It does what you'd expect from boolean algebra, and you can also use (implies p q), etc.
There's one final important built-in function that I'd like to introduce. To compute the length of a list, use (len xs):
(len '(1 2 3 4)).
Lesson 2. Functions
Now that you've got a basic understanding of some of the built-in functions of Lisp, let's move on to defining new ones. The function you use to define new functions is (defun ...):
(defun name (param1 param2 ...) body)
Try adding this function that squares numbers:
(defun square (x) (* x x)).
An important thing happened here: in addition to just adding the new function, ACL2 did something interesting: it proved that no matter what input you give to your new function, it will, eventually, complete. It won't loop or call itself forever. Now, that's not too surprising, given how simple (square ...) is. But it can be quite a task for complex functions.
Try out your new function now by squaring some of the types of numbers that ACL2 supports.
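For example, assuming the square definition above has been admitted, the following calls exercise a few of ACL2's number types (the expected values are shown as comments):
(square 4)             ; 16
(square 3/2)           ; 9/4, rationals are exact
(square (complex 1 2)) ; #c(-3 4), since (1 + 2i)^2 = -3 + 4i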
Next, we're going to enter a recursive function: factorial. The factorial of a natural number is the product of the number and all the natural numbers below it. In other words, (factorial n) = (* n (factorial (- n 1))). As a special case (called the "base case"), (factorial 0) = 1. A naive (but wrong) approach might be this:
(defun factorial-wrong (n)
  (if (= n 0)
      1
      (* n (factorial-wrong (- n 1)))))
You can try adding this to ACL2, but it won't work. To see what's wrong, imagine calling factorial-wrong on the (perfectly valid) ACL2 number -1. The recursion expands like this:
(* -1 (factorial -2))
(* -1 (* -2 (factorial -3)))
Uh oh. You can clearly see that this will never terminate. ACL2 functions are 'total', meaning that they must be able to accept any object of any type and return something in a finite amount of time, even if it's not necessarily a useful response. Keep this in mind as you define ACL2 functions.
When defining functions like factorial that recurse "towards" 0 and where you only care about natural numbers, use the function zp (the "zero predicate"). (zp n) is t when n is zero. But it's also t if n is not a natural number. So it returns the "base case" if n is something weird, like a list or a complex number. Try this definition of factorial:
(defun factorial (n)
  (if (zp n)
      1
      (* n (factorial (- n 1)))))
You'll notice that (factorial (list 1 2 3)) returns 1, an inapplicable answer. This is a little messy, but fine. ACL2 is type-free, so it has to give an answer for any arguments. One approach (not covered here) to making this easier to use is to add :guards to functions and turn on guard checking to ensure that users don't execute your functions on irrelevant values. (The prover, on the other hand, ignores guards, so it's best to do the same for now).
Now let's try defining a function on lists: rev, which reverses the list you give it. The base case for rev is the empty list, the reverse of which is just the empty list. For the general case, the reversed list is just the reversed "rest" of the list with the "first" element at the end, like so:
(defun rev (xs)
  (if (endp xs)
      nil
      (append (rev (rest xs))
              (list (first xs)))))
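Once rev has been admitted (ACL2 again proves termination, since the recursive call is on a structurally smaller list), try it on a couple of small inputs (expected values shown as comments):
(rev '(1 2 3)) ; (3 2 1)
(rev nil)      ; nil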
Lesson 3. Theorems
Of course, proving that functions terminate is only a small part of proving that they work. ACL2 also lets you prove arbitrary theorems about functions. We will start out using the (thm ...) function to do this. Try the following pretty obvious theorem:
(thm (= (+ a b) (+ b a)))
The proof for that theorem isn't very interesting. ACL2 just applies the built-in knowledge it has of linear arithmetic. How about a theorem about our previously-defined factorial function:
(thm (> (factorial n) 0))
Again, a relatively simple proof. For this one, ACL2 uses the fact that when it admitted factorial, it determined that the result was always a positive integer and stored that as a special kind of rule known as a :type-prescription rule. (ACL2 supports numerous kinds of rules, but we need not go into that here.)
Let's prove that the built-in append function from earlier is associative; that is, (append (append xs ys) zs) equals (append xs (append ys zs)).
(thm (equal (append (append xs ys) zs)
(append xs (append ys zs))))
This is a long, (but interesting!) proof. If you're interested in the details, there's a good, relatively non-technical discussion of this proof by the authors of ACL2 here.
For theorems that ACL2 can't prove on its own, you'll often have to provide lemmas: theorems that are added to the ACL2 logical world and can then be used in proving future theorems. To add a theorem to the logical world, use (defthm ...) and give it a name:
(defthm append-associative
  (equal (append (append xs ys) zs)
         (append xs (append ys zs))))
Theorems added using this method must be written with care; the prover blindly
replaces the left side with the right side whenever it finds something that
looks like the left side and can prove all of the
implies hypotheses. If we
admitted a different version of append-associative that converted (append xs (append ys zs)) to (append (append xs ys) zs), the theorem prover would loop forever, applying these two rules repeatedly.
A final easy theorem before we move on to more difficult theorems is that reversing a list twice yields the original list. The proof of this one is also interesting. In proving rev-rev, the prover identifies the need for a simpler lemma, namely, that (rev (append xs (list x))) equals (cons x (rev xs)), and proves it using a second induction.
(defthm rev-rev
  (implies (true-listp xs)
           (equal (rev (rev xs)) xs)))
This tutorial is a work in progress
For a more in-depth look at ACL2, see this page with some tutorials curated by the developers of ACL2. You can also try Proof Pad, an ACL2 IDE geared towards beginners.
Feel free to email any comments you have to me at [email protected], and come back for more later. | http://tryacl2.org/ | 13 |
78 | Presentation Transcript
Measuring Perimeter & Area in 4th grade
Perimeter versus Area of Rectangles: Perimeter measures the distance around. To calculate the perimeter, add up the measure of all sides. P = length + length + width + width; P = l + l + w + w; P = (length + length) + (width + width); P = 2(length) + 2(width); P = 2l + 2w. Area measures the space inside. To calculate the area, simply multiply the length times the width. A = length x width; A = l x w. You must be able to recognize the 2 ways to write the equation for perimeter.
Perimeter is a linear measurement, meaning that it is a "line," and it has only 1 dimension. Area is a measure of two dimensions. Therefore, the units of measure for area are written as square units or as units2. Examples: P = 16 feet; P = 24 cm. Examples: A = 20 square feet or 16 ft2; A = 32 square cm or 24 cm2.
Perimeter versus Area: Two rectangles can have the same area and different perimeters. Here's an example: a 2 units by 20 units rectangle has P = 44 units and A = 40 units2, while a 4 units by 10 units rectangle has P = 28 units and A = 40 units2. There will probably be some kind of question that tests your general understanding of this fact.
On the other hand, two rectangles can have the same perimeter and different areas: a 13 units by 2 units rectangle has P = 30 units and A = 26 units2, while a 7 units by 8 units rectangle has P = 30 units and A = 56 units2. There will probably be some kind of question that tests your general understanding of this fact.
Perimeter of polygons that are made up of rectangles or squares: Perimeter can't be calculated until you figure out the length of the unmarked sides. Length A = 14 - (2+2) = 10 units; Length B = 5 units because the opposite side of the rectangle = 5; Length C = 4 units because the opposite side = 4. Now add up all of the sides. There are 8 sides, so there should be 8 values in the equation! Starting at side A, and adding clockwise, here are the measurements: P = 10 + 5 + 14 + 5 + 2 + 4 + 2 + 4; P = 48 units.
Area of polygons that are made up of rectangles or squares: Area can't be calculated until you figure out how many rectangles or squares make up the polygon. In this example, there are two rectangles stuck together. We can see their length and their width. Calculate the area of each rectangle, and add the two areas together: A = (L x W) of the smaller rectangle; A = (L x W) of the bigger rectangle. Then Total A = A + A. We can write one equation for all of these steps: Total A = (4 x 2) + (14 x 5); Total A = 8 + 70; Total A = 78 sq. units or 78 units2.
Perimeter versus Area in Word Problems: Look for key words that tell you which one to calculate. PERIMETER: A farmer is putting up a fence around a field that is 100 feet by 200 feet. How much fencing will he need? A decorator wants to glue a border around the ceiling of a bedroom that is 12 feet by 12 feet. How much border will she need? AREA: A farmer is covering a field that is 100 feet by 200 feet with fertilizer. How much fertilizer will he need? A decorator wants to install new carpeting in the living room, which is 18 ft x 15 ft. How much carpet will she need? If there isn't a key word, then you must draw a picture or visualize the situation. | http://www.authorstream.com/Presentation/bellaonline-169045-perimeter-area-math-education-ppt-powerpoint/ | 13
67 | Besides the while statement just introduced, Python knows the usual control flow statements known from other languages, with some twists.
Perhaps the most well-known statement type is the if statement. For example:
>>> # [Code which sets 'x' to a value...] >>> if x < 0: ... x = 0 ... print 'Negative changed to zero' ... elif x == 0: ... print 'Zero' ... elif x == 1: ... print 'Single' ... else: ... print 'More' ...
There can be zero or more elif parts, and the else part is optional. The keyword `elif' is short for `else if', and is useful to avoid excessive indentation. An if ... elif ... elif ... sequence is a substitute for the switch or case statements found in other languages.
The for statement in Python differs a bit from what you may be used to in C or Pascal. Rather than always iterating over an arithmetic progression of numbers (like in Pascal), or giving the user the ability to define both the iteration step and halting condition (as C), Python's for statement iterates over the items of any sequence (e.g., a list or a string), in the order that they appear in the sequence. For example (no pun intended):
>>> # Measure some strings: ... a = ['cat', 'window', 'defenestrate'] >>> for x in a: ... print x, len(x) ... cat 3 window 6 defenestrate 12
It is not safe to modify the sequence being iterated over in the loop (this can only happen for mutable sequence types, i.e., lists). If you need to modify the list you are iterating over, e.g., duplicate selected items, you must iterate over a copy. The slice notation makes this particularly convenient:
>>> for x in a[:]: # make a slice copy of the entire list ... if len(x) > 6: a.insert(0, x) ... >>> a ['defenestrate', 'cat', 'window', 'defenestrate']
If you do need to iterate over a sequence of numbers, the built-in function range() comes in handy. It generates lists containing arithmetic progressions, e.g.:
>>> range(10) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
The given end point is never part of the generated list; range(10) generates a list of 10 values, exactly the legal indices for items of a sequence of length 10. It is possible to let the range start at another number, or to specify a different increment (even negative):
>>> range(5, 10) [5, 6, 7, 8, 9] >>> range(0, 10, 3) [0, 3, 6, 9] >>> range(-10, -100, -30) [-10, -40, -70]
To iterate over the indices of a sequence, combine range() and len() as follows:
>>> a = ['Mary', 'had', 'a', 'little', 'lamb'] >>> for i in range(len(a)): ... print i, a[i] ... 0 Mary 1 had 2 a 3 little 4 lamb
The break statement, like in C, breaks out of the smallest enclosing for or while loop.
The continue statement, also borrowed from C, continues with the next iteration of the loop.
Loop statements may have an else clause; it is executed when the loop terminates through exhaustion of the list (with for) or when the condition becomes false (with while), but not when the loop is terminated by a break statement. This is exemplified by the following loop, which searches for prime numbers:
>>> for n in range(2, 10): ... for x in range(2, n): ... if n % x == 0: ... print n, 'equals', x, '*', n/x ... break ... else: ... print n, 'is a prime number' ... 2 is a prime number 3 is a prime number 4 equals 2 * 2 5 is a prime number 6 equals 2 * 3 7 is a prime number 8 equals 2 * 4 9 equals 3 * 3
The pass statement does nothing. It can be used when a statement is required syntactically but the program requires no action. For example:
>>> while 1: ... pass # Busy-wait for keyboard interrupt ...
We can create a function that writes the Fibonacci series to an arbitrary boundary:
>>> def fib(n): # write Fibonacci series up to n ... "Print a Fibonacci series up to n" ... a, b = 0, 1 ... while b < n: ... print b, ... a, b = b, a+b ... >>> # Now call the function we just defined: ... fib(2000) 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597
The keyword def introduces a function definition. It must be followed by the function name and the parenthesized list of formal parameters. The statements that form the body of the function start at the next line, indented by a tab stop. The first statement of the function body can optionally be a string literal; this string literal is the function's documentation string, or docstring. There are tools which use docstrings to automatically produce printed documentation, or to let the user interactively browse through code; it's good practice to include docstrings in code that you write, so try to make a habit of it.
The execution of a function introduces a new symbol table used for the local variables of the function. More precisely, all variable assignments in a function store the value in the local symbol table; whereas variable references first look in the local symbol table, then in the global symbol table, and then in the table of built-in names. Thus, global variables cannot be directly assigned a value within a function (unless named in a global statement), although they may be referenced.
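As an illustration of this rule (a sketch that is not part of the original tutorial; the name counter is arbitrary), the global statement lets a function assign to a module-level variable:
>>> counter = 0
>>> def increment():
...     global counter
...     counter = counter + 1
...
>>> increment()
>>> counter
1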
The actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using call by value. When a function calls another function, a new local symbol table is created for that call.
A function definition introduces the function name in the current symbol table. The value of the function name has a type that is recognized by the interpreter as a user-defined function. This value can be assigned to another name which can then also be used as a function. This serves as a general renaming mechanism:
>>> fib <function object at 10042ed0> >>> f = fib >>> f(100) 1 1 2 3 5 8 13 21 34 55 89
You might object that fib is not a function but a procedure. In Python, like in C, procedures are just functions that don't return a value. In fact, technically speaking, procedures do return a value, albeit a rather boring one. This value is called None (it's a built-in name). Writing the value None is normally suppressed by the interpreter if it would be the only value written. You can see it if you really want to:
>>> print fib(0) None
It is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of printing it:
>>> def fib2(n): # return Fibonacci series up to n ... "Return a list containing the Fibonacci series up to n" ... result = [] ... a, b = 0, 1 ... while b < n: ... result.append(b) # see below ... a, b = b, a+b ... return result ... >>> f100 = fib2(100) # call it >>> f100 # write the result [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
It is also possible to define functions with a variable number of arguments. There are three forms, which can be combined.
The most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined, e.g.
def ask_ok(prompt, retries=4, complaint='Yes or no, please!'): while 1: ok = raw_input(prompt) if ok in ('y', 'ye', 'yes'): return 1 if ok in ('n', 'no', 'nop', 'nope'): return 0 retries = retries - 1 if retries < 0: raise IOError, 'refusenik user' print complaint
This function can be called either like this: ask_ok('Do you really want to quit?') or like this: ask_ok('OK to overwrite the file?', 2).
The default values are evaluated at the point of function definition in the defining scope, so that e.g.
i = 5 def f(arg = i): print arg i = 6 f()
will print 5.
Important warning: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list or dictionary. For example, the following function accumulates the arguments passed to it on subsequent calls:
def f(a, l = []): l.append(a) return l print f(1) print f(2) print f(3)
This will print
[1] [1, 2] [1, 2, 3]
If you don't want the default to be shared between subsequent calls, you can write the function like this instead:
def f(a, l = None): if l is None: l = [] l.append(a) return l
Functions can also be called using keyword arguments of the form "keyword = value". For instance, the following function:
def parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'): print "-- This parrot wouldn't", action, print "if you put", voltage, "Volts through it." print "-- Lovely plumage, the", type print "-- It's", state, "!"
could be called in any of the following ways:
parrot(1000) parrot(action = 'VOOOOOM', voltage = 1000000) parrot('a thousand', state = 'pushing up the daisies') parrot('a million', 'bereft of life', 'jump')
but the following calls would all be invalid:
parrot() # required argument missing parrot(voltage=5.0, 'dead') # non-keyword argument following keyword parrot(110, voltage=220) # duplicate value for argument parrot(actor='John Cleese') # unknown keyword
In general, an argument list must have any positional arguments followed by any keyword arguments, where the keywords must be chosen from the formal parameter names. It's not important whether a formal parameter has a default value or not. No argument must receive a value more than once -- formal parameter names corresponding to positional arguments cannot be used as keywords in the same calls.
When a final formal parameter of the form **name is present, it receives a dictionary containing all keyword arguments whose keyword doesn't correspond to a formal parameter. This may be combined with a formal parameter of the form *name(described in the next subsection) which receives a tuple containing the positional arguments beyond the formal parameter list. (*name must occur before **name.) For example, if we define a function like this:
def cheeseshop(kind, *arguments, **keywords): print "-- Do you have any", kind, '?' print "-- I'm sorry, we're all out of", kind for arg in arguments: print arg print '-'*40 for kw in keywords.keys(): print kw, ':', keywords[kw]
It could be called like this:
cheeseshop('Limburger', "It's very runny, sir.", "It's really very, VERY runny, sir.", client='John Cleese', shopkeeper='Michael Palin', sketch='Cheese Shop Sketch')
and of course it would print:
-- Do you have any Limburger ? -- I'm sorry, we're all out of Limburger It's very runny, sir. It's really very, VERY runny, sir. ---------------------------------------- client : John Cleese shopkeeper : Michael Palin sketch : Cheese Shop Sketch
Finally, the least frequently used option is to specify that a function can be called with an arbitrary number of arguments. These arguments will be wrapped up in a tuple. Before the variable number of arguments, zero or more normal arguments may occur.
def fprintf(file, format, *args): file.write(format % args)
By popular demand, a few features commonly found in functional programming languages and Lisp have been added to Python. With the lambda keyword, small anonymous functions can be created. Here's a function that returns the sum of its two arguments: "lambda a, b: a+b". Lambda forms can be used wherever function objects are required. They are syntactically restricted to a single expression. Semantically, they are just syntactic sugar for a normal function definition. Like nested function definitions, lambda forms cannot reference variables from the containing scope, but this can be overcome through the judicious use of default argument values, e.g.
def make_incrementor(n): return lambda x, incr=n: x+incr
There are emerging conventions about the content and formatting of documentation strings.
The first line should always be a short, concise summary of the object's purpose. For brevity, it should not explicitly state the object's name or type, since these are available by other means (except if the name happens to be a verb describing a function's operation). This line should begin with a capital letter and end with a period.
If there are more lines in the documentation string, the second line should be blank, visually separating the summary from the rest of the description. The following lines should be one or more paragraphs describing the object's calling conventions, its side effects, etc.
The Python parser does not strip indentation from multi-line string literals in Python, so tools that process documentation have to strip indentation. This is done using the following convention. The first non-blank line after the first line of the string determines the amount of indentation for the entire documentation string. (We can't use the first line since it is generally adjacent to the string's opening quotes so its indentation is not apparent in the string literal.) Whitespace ``equivalent'' to this indentation is then stripped from the start of all lines of the string. Lines that are indented less should not occur, but if they occur all their leading whitespace should be stripped. Equivalence of whitespace should be tested after expansion of tabs (to 8 spaces, normally). | http://docs.python.org/release/1.5.2/tut/node6.html | 13 |
57 | We now need to discuss the section that most students
hate. We need to talk about applications
to linear equations. Or, put in other
words, we will now start looking at story problems or word problems. Throughout history students have hated
these. It is my belief however that the
main reason for this is that students really don’t know how to work them. Once you understand how to work them, you’ll
probably find that they aren’t as bad as they may seem on occasion. So, we’ll start this section off with a
process for working applications.
Process for Working Story/Word Problems
- READ THE PROBLEM.
- READ THE PROBLEM AGAIN. Okay, this may be a little bit of
overkill here. However, the point
of these first two steps is that you must read the problem. This step is
the MOST important step, but it is also the step that most people don't do properly.
You need to read the problem very carefully and as many times as it
takes. You are only done with
this step when you have completely understood what the problem is asking
you to do. This includes
identifying all the given information and identifying what you are being
asked to find.
Again, it can’t be stressed enough that you’ve got to carefully read the
problem. Sometimes a single word
can completely change how the problem is worked. If you just skim the problem you may
well miss that very important word.
- Represent one of the unknown quantities with a variable and try to relate all the
other unknown quantities (if there are any of course) to this variable.
- If applicable, sketch a figure illustrating the situation. This may seem like a silly step, but
it can be incredibly helpful with the next step on occasion.
- Form an equation that will relate known quantities to the unknown quantities. To do this make use of known formulas
and often the figure sketched in the previous step can be used to
determine the equation.
- Solve the equation formed in the previous step and write down the answer to
all the questions. It is
important to answer all the questions that you were asked. Often you will be asked for several
quantities in the answer and the equation will only give one of them.
- Check your answer. Do this by plugging the answer
into the equation, but also use intuition to make sure that the answer
makes sense. Mistakes can often
be identified by acknowledging that the answer just doesn’t make sense.
Let’s start things off with a couple of fairly basic
examples to illustrate the process. Note
as well that at this point it is assumed that you are capable of solving fairly
simple linear equations and so not a lot of detail will be given for the actual
solution stage. The point of this
section is more on the set up of the equation than the solving of the equation.
Example 1 In
a certain Algebra class there is a total of 350 possible points. These points come from 5 homework sets that
are worth 10 points each and 3 hour exams that are worth 100 points
each. A student has received homework
scores of 4, 8, 7, 7, and 9 and the first two exam scores are 78 and 83. Assuming that grades are assigned according
to the standard scale and there are no weights assigned to any of the grades
is it possible for the student to receive an A in the class and if so what is
the minimum score on the third exam that will give an A? What about a B?
Okay, let’s start off by defining p to be the minimum required score on the third exam.
Now, let’s recall how grades are set. Since there are no weights or anything on
the grades, the grade will be set by first computing the following percentage,
grade percentage = (points earned) / (total possible points)
Since we are using the standard scale if the grade
percentage is 0.9 or higher the student will get an A. Likewise if the grade percentage is between
0.8 and 0.9 the student will get a B.
We know that the total possible points is 350 and the
student has total points (including the third exam) of,
4 + 8 + 7 + 7 + 9 + 78 + 83 + p = 196 + p
The smallest possible percentage for an A is 0.9 and so, if p
is the minimum required score on the third exam for an A, we will have the following equation,
0.9 = (196 + p) / 350
This is a linear equation that we will need to solve for p. Doing so gives,
(0.9)(350) = 196 + p
315 = 196 + p
p = 119
So, the minimum required score on the third exam is
119. This is a problem since the exam
is worth only 100 points. In other
words, the student will not be getting an A in the Algebra class.
Now let’s check if the student will get a B. In this case the minimum percentage is
0.8. So, to find the minimum required
score on the third exam for a B we will need to solve,
0.8 = (196 + p) / 350
Solving this for p gives,
(0.8)(350) = 196 + p
280 = 196 + p
p = 84
So, it is possible for the student to get a B in the
class. All that the student will need
to do is get at least an 84 on the third exam.
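A quick script to check both of these computations (a sketch, not part of the original notes):
scores = [4, 8, 7, 7, 9, 78, 83]    # homework scores and the first two exams
total_possible = 350.0
earned = sum(scores)                # 196

for cutoff in (0.9, 0.8):           # A and B cutoffs on the standard scale
    needed = cutoff * total_possible - earned
    print(cutoff, needed)           # 0.9 -> 119.0, 0.8 -> about 84.0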
Example 2 We
want to build a set of shelves. The
width of the set of shelves needs to be 4 times the height of the set of
shelves and the set of shelves must have three shelves in it. If there are 72 feet of wood to use to
build the set of shelves what should the dimensions of the set of shelves be?
We will first define x
to be the height of the set of shelves.
This means that 4x is the width
of the set of shelves. In this case we
definitely need to sketch a figure so we can correctly set up the
equation. Here it is,
Now we know that there are 72 feet of wood to be used and
we will assume that all of it will be used.
So, we can set up the following word equation.
(length of all vertical pieces) + (length of all horizontal pieces) = 72 ft
It is often a good idea to first put the equation in words
before actually writing down the equation as we did here. At this point, we can see from the figure
there are two vertical pieces; each one has a length of x. Also, there are 4
horizontal pieces, each with a length of 4x.
So, the equation is then,
2x + 4(4x) = 72
2x + 16x = 72
18x = 72
x = 4
So, it looks like the height of the set of shelves should
be 4 feet. Note however that we
haven’t actually answered the question however. The problem asked us to find the dimensions. This means that we also need the width of
the set of shelves. The width is
4(4)=16 feet. So the dimensions will
need to be 4x16 feet.
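The same check for the shelf problem (a sketch, not part of the original notes):
# The equation from the example: 2x + 4(4x) = 72, i.e. 18x = 72
x = 72 / 18.0
print(x, 4 * x)   # height 4.0 ft, width 16.0 ft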
The next couple of problems deal with some basic principles
Example 3 A
calculator has been marked up 15% and is being sold for $78.50. How much did the store pay the manufacturer
of the calculator?
First, let’s define p
to be the cost that the store paid for the calculator. The store's markup on the calculator is
15%. This means that 0.15p has been added on to the original
price (p) to get the amount the
calculator is being sold for. In other
words, we have the following equation
p + 0.15p = 78.50
that we need to solve for p. Doing this gives,
1.15p = 78.50
p = 78.50 / 1.15 = 68.26087
The store paid $68.26 for the calculator. Note that since we are dealing with money
we rounded the answer down to two decimal places.
Example 4 A
shirt is on sale for $15.00 and has been marked down 35%. How much was the shirt being sold for
before the sale?
This problem is pretty much the opposite of the previous
example. Let’s start with defining p to be the price of the shirt before
the sale. It has been marked down by
35%. This means that 0.35p has been subtracted off from the
original price. Therefore, the
equation (and solution) is,
p - 0.35p = 15.00
0.65p = 15.00
p = 15.00 / 0.65 = 23.0769
So, with rounding it looks like the shirt was originally
sold for $23.08.
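Both pricing examples reduce to dividing by the markup or markdown factor; here is a small sketch (the function name is illustrative, not from the notes):
def original_price(final_price, rate, markup=True):
    """Undo a markup (price * (1 + rate)) or a markdown (price * (1 - rate))."""
    factor = 1 + rate if markup else 1 - rate
    return final_price / factor

print(round(original_price(78.50, 0.15, markup=True), 2))    # 68.26
print(round(original_price(15.00, 0.35, markup=False), 2))   # 23.08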
These are some of the standard problems that most people
think about when they think about Algebra word problems. The standard formula that we will be using is,
Distance = (Rate)(Time)
All of the problems that we'll be doing in this set of examples will use this to one degree or another and often more than once as we will see.
Example 5 Two
cars are 500 miles apart and moving directly towards each other. One car is moving at a speed of 100 mph and
the other is moving at 70 mph.
Assuming that the cars start moving at the same time how long does it
take for the two cars to meet?
Let’s let t
represent the amount of time that the cars are traveling before they
meet. Now, we need to sketch a figure
for this one. This figure will help us
to write down the equation that we’ll need to solve.
From this figure we can see that the Distance Car A
travels plus the Distance Car B travels must equal the total distance
separating the two cars, 500 miles.
Here is the word equation for this problem in two separate forms,
(Distance of Car A) + (Distance of Car B) = 500
(Rate of Car A)(Time of Car A) + (Rate of Car B)(Time of Car B) = 500
We used the standard formula here twice, once for each
car. We know that the distance a car
travels is the rate of the car times the time traveled by the car. In this case we know that Car A travels at
100 mph for t hours and that Car B
travels at 70 mph for t hours as
well. Plugging these into the word
equation and solving gives us,
100t + 70t = 500
170t = 500
t = 500/170 = 2.941176 hrs
So, they will travel for approximately 2.94 hours before they meet.
Example 6 Repeat
the previous example except this time assume that the faster car will start 1
hour after slower car starts.
For this problem we are going to need to be careful with
the time traveled by each car. Let’s
let t be the amount of time that
the slower car travels. Now, since the faster car starts out 1 hour after the slower car it will only travel for t - 1 hours.
Now, since we are repeating the problem from above the
figure and word equation will remain identical and so we won’t bother
repeating them here. The only
difference is what we substitute for the time traveled for the faster
car. Instead of t as we used in the previous example we will use t - 1 since it travels for one hour less than the slower car.
Here is the equation and solution for this example.
100(t - 1) + 70t = 500
100t - 100 + 70t = 500
170t = 600
t = 600/170 = 3.529412 hrs
In this case the slower car will travel for 3.53 hours before
meeting while the faster car will travel for 2.53 hrs (1 hour less than the slower car).
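Both meeting-time examples reduce to a single linear equation; a quick check (a sketch, not part of the original notes):
# Example 5: 100t + 70t = 500
t5 = 500 / (100.0 + 70.0)
print(round(t5, 2))                       # 2.94 hours

# Example 6: 100(t - 1) + 70t = 500  ->  170t = 600
t6 = 600 / 170.0
print(round(t6, 2), round(t6 - 1, 2))     # 3.53 hours and 2.53 hours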
Example 7 Two
boats start out 100 miles apart and start moving to the right at the same
time. The boat on the left is moving
at twice the speed as the boat on the right.
Five hours after starting the boat on the left catches up with the
boat on the right. How fast was each boat moving?
Let’s start off by letting r be the speed of the boat on the right (the slower boat). This means that the boat to the left (the
faster boat) is moving at a speed of 2r. Here is the figure for this situation.
From the figure it looks like we've got the following word equation,
(Distance of Boat A) = 100 + (Distance of Boat B)
Upon plugging in the standard formula for the distance this becomes,
(Rate of Boat A)(Time of Boat A) = 100 + (Rate of Boat B)(Time of Boat B)
For this problem we know that the time each boat travels is 5 hours and we know that the rate of Boat A is 2r and the rate of Boat B is r. Plugging these into the word equation and solving gives,
(2r)(5) = 100 + (r)(5)
10r = 100 + 5r
5r = 100
r = 20
So, the slower boat is moving at 20 mph and the faster
boat is moving at 40 mph (twice as fast).
These problems are actually variants of the Distance/Rate
problems that we just got done working.
The standard equation that will be needed for these problems is,
(Portion of job done in given time) = (Work Rate)(Time Spent Working)
As you can see this formula is very similar to the formula
we used above.
Example 8 An
office has two envelope stuffing machines.
Machine A can stuff a batch of envelopes in 5 hours, while Machine B
can stuff a batch of envelopes in 3 hours.
How long would it take the two machines working together to stuff a
batch of envelopes?
Let t be the
time that it takes both machines, working together, to stuff a batch of
envelopes. The word equation for this problem is,
(Portion of job done by Machine A) + (Portion of job done by Machine B) = 1 Job
We know that the time spent working is t however we don’t know the work rate
of each machine. To get these we’ll
need to use the initial information given about how long it takes each
machine to do the job individually. We
can use the following equation to get these rates.
1 Job = (Work Rate)(Time to complete the job alone)
Let's start with Machine A.
1 = (Work Rate of A)(5), so Work Rate of A = 1/5
Now, Machine B.
1 = (Work Rate of B)(3), so Work Rate of B = 1/3
Plugging these quantities into the main equation above
gives the following equation that we need to solve.
(1/5)t + (1/3)t = 1
(8/15)t = 1
t = 15/8 = 1.875 hours
So, it looks like it will take the two machines, working
together, 1.875 hours to stuff a batch of envelopes.
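The combined-rate calculation in a couple of lines (a sketch, not part of the original notes):
# Machine A does 1 job in 5 hours, Machine B does 1 job in 3 hours
rate_a, rate_b = 1 / 5.0, 1 / 3.0
t = 1 / (rate_a + rate_b)
print(t)   # 1.875 hours working together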
Example 9 Mary
can clean an office complex in 5 hours.
Working together John and Mary can clean the office complex in 3.5
hours. How long would it take John to
clean the office complex by himself?
Let t be the
amount of time it would take John to clean the office complex by
himself. The basic word equation for
this problem is,
(Portion of job done by Mary) + (Portion of job done by John) = 1 Job
This time we know that the time spent working together is
3.5 hours. We now need to find the
work rates for each person. We’ll
start with Mary.
1 = (Work Rate of Mary)(5), so Work Rate of Mary = 1/5
Now we’ll find the work rate of John. Notice however, that since we don’t know
how long it will take him to do the job by himself we aren’t going to be able
to get a single number for this. That
is not a problem as we'll see in a second.
1 = (Work Rate of John)(t), so Work Rate of John = 1/t
Notice that we’ve managed to get the work rate of John in
terms of the time it would take him to do the job himself. This means that once we solve the equation
above we’ll have the answer that we want.
So, let’s plug into the work equation and solve for the time it would
take John to do the job by himself.
(1/5)(3.5) + (1/t)(3.5) = 1
0.7 + 3.5/t = 1
3.5/t = 0.3
t = 3.5/0.3 = 11.67 hrs
So, it looks like it would take John 11.67 hours to clean
the complex by himself.
This is the final type of problems that we’ll be looking at
in this section. We are going to be
looking at mixing solutions of different percentages to get a new
percentage. The solution will consist of
a secondary liquid mixed in with water.
The secondary liquid can be alcohol or acid for instance.
The standard equation that we’ll use here will be the
Note as well that the percentage needs to be a decimal. So if we have an 80% solution we will need to
Example 10 How
much of a 50% alcohol solution should we mix with 10 gallons of a 35%
solution to get a 40% solution?
Okay, let x be
the amount of 50% solution that we need.
This means that there will be x + 10 gallons of the 40% solution once we're done
mixing the two.
Here is the basic word equation for this problem.
(Amount of alcohol in 50% solution) + (Amount of alcohol in 35% solution) = (Amount of alcohol in 40% solution)
Now, plug in the volumes and solve for x.
0.5x + 0.35(10) = 0.4(x + 10)
0.5x + 3.5 = 0.4x + 4
0.1x = 0.5
x = 5
So, we need 5 gallons of the 50% solution to get a 40% solution.
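A quick numerical check of the mixing equation (a sketch, not part of the original notes):
# 0.5x + 0.35*10 = 0.4*(x + 10)  ->  (0.5 - 0.4)x = 0.4*10 - 0.35*10
x = (0.4 * 10 - 0.35 * 10) / (0.5 - 0.4)
print(round(x, 2))   # 5.0 gallons of the 50% solution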
Example 11 We
have a 40% acid solution and we want 75 liters of a 15% acid solution. How much water should we put into the 40%
solution to do this?
Let x be the
amount of water we need to add to the 40% solution. Now, we also don't know how much of the 40% solution we'll need. However, since we know the final volume (75 liters) we will know that we will need 75 - x liters of the 40% solution.
Here is the word equation for this problem.
(Amount of acid in the water) + (Amount of acid in 40% solution) = (Amount of acid in 15% solution)
Notice that in the first term we used the “Amount of acid
in the water”. This might look a
little weird to you because there shouldn’t be any acid in the water. However, this is exactly what we want. The basic equation tells us to look at how
much of the secondary liquid is in the water.
So, this is the correct wording.
When we plug in the percentages and volumes we will think of the water
as a 0% solution since that is in fact what it is. So, the new word equation is,
(Amount of acid in 0% solution) + (Amount of acid in 40% solution) = (Amount of acid in 15% solution)
Do not get excited about the zero in the first term. This is okay and will not be a
problem. Let’s now plug in the volumes
and solve for x.
(0)(x) + 0.4(75 - x) = 0.15(75)
30 - 0.4x = 11.25
0.4x = 18.75
x = 46.875
So, we need to add in 46.875 liters of water to 28.125
liters of a 40% solution to get 75 liters of a 15% solution. | http://tutorial.math.lamar.edu/Classes/Alg/LinearApps.aspx | 13 |
122 | Science Fair Project Encyclopedia
In physics, a force acting on a body is that which causes the body to accelerate; that is, to change its velocity. The concept appeared first in Newton's second law of motion of classical mechanics, and is usually expressed by the equation:
- F = m · a
- F is the force, measured in newtons,
- m is the mass, measured in kilograms, and
- a is the acceleration, measured in metres per second squared.
The field of engineering mechanics relies to a large extent on the concept of force. Scientists have developed a more accurate concept, defining force as the derivative of momentum. Force is not a fundamental quantity in physics, despite the tendency to introduce students to physics via this concept. More fundamental are momentum, energy and stress. Force is rarely measured directly and is often confused with related concepts such as tension and stress.
Forces in applications
Types of forces
Many differently named forces (friction, tension, spring forces, and so on) appear in everyday and engineering problems, but scientists consider there to be only four fundamental forces of nature, with which every observed phenomenon can be explained: the strong nuclear force, the electromagnetic force, the weak nuclear force, and the gravitational force. The first three have been accurately modeled using quantum field theory, but a successful theory of quantum gravity has not been developed, although gravity is described accurately on large scales by general relativity.
Some forces are conservative, while others are not. A conservative force can be written as the gradient of a potential. Examples of conservative forces are gravity, electromagnetic forces, and spring forces. Examples of nonconservative forces include friction and drag.
Properties of force
Forces have an intensity and direction.
Forces can be added together using parallelogram of force. When two forces act on an object, the resulting force (called the resultant) is the vector sum of the original forces. The magnitude of the resultant varies from zero to the sum of the magnitudes of the two forces, depending on the angle between their lines of action. If the two forces are equal but opposite, the resultant is zero. This condition is called static equilibrium, and the object moves at a constant speed (possibly, but not necessarily zero).
While forces can be added together, they can also be resolved into components. For example, an horizontal force acting in the direction of northeast can be split into two forces along the north and east directions respectively. The sum of these component forces is equal to the original force.
Units of measure
Imperial units of force and mass
The relationship F=m·a mentioned above may also be used with non-metric units. If those units do not form a consistent set of units, the more general form F=k·m·a must be used, where the constant k is a conversion factor dependent upon the units used.
For example, in imperial engineering units, F is in "pounds force" or "lbf", m is in "pounds mass" or "lb", and a is in feet per second squared. However, in this particular system, you need to use the more general form above, usually written F=m·a/gc with the constant normally used for this purpose gc = 32.174 lb·ft/(lbf·s2) equal to the reciprocal of the k above.
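A small numerical sketch of this relationship (the function and variable names are just for illustration):
GC = 32.174  # lb*ft/(lbf*s^2), the conversion constant given in the text

def force_lbf(mass_lb, accel_ft_s2):
    """Force in pounds-force from mass in pounds and acceleration in ft/s^2."""
    return mass_lb * accel_ft_s2 / GC

# A 1 lb mass accelerated at standard gravity (32.174 ft/s^2) experiences 1 lbf:
print(force_lbf(1.0, 32.174))   # 1.0
# The same mass accelerated at 1 ft/s^2 experiences 1 poundal = 1/32.174 lbf:
print(force_lbf(1.0, 1.0))      # about 0.0311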
As with the kilogram, the pound is colloquially used as both a unit of mass and a unit of force. 1 lbf is the force required to accelerate 1 lb at 32.174 ft per second squared, since 32.174 ft per second squared is the standard acceleration due to terrestrial gravity.
Another imperial unit of mass is the slug, defined as 32.174 lb. It is the mass that accelerates by one foot per second squared when a force of one lbf is exerted on it.
When the acceleration of free fall is equal to that used to define pounds force (now usually 9.80665 m/s²), the magnitude of the mass in pounds equals the magnitude of the force due to gravity in pounds force. However, even at sea level on Earth, the actual acceleration of free fall is quite variable, over 0.53% more at the poles than at the equator. Thus, a mass of 1.0000 lb at sea level at the Equator exerts a force due to gravity of 0.9973 lbf, whereas a mass of 1.000 lb at sea level at the poles exerts a force due to gravity of 1.0026 lbf. The normal average sea level acceleration on Earth (World Gravity Formula 1980) is 9.79764 m/s², so on average at sea level on Earth, 1.0000 lb will exert a force of 0.9991 lbf.
The equivalence 1 lb = 0.453 592 37 kg is always true, anywhere in the universe. If you borrow the acceleration which is official for defining kilograms force to define pounds force as well, then the same relationship will hold between pounds-force and kilograms-force (an old non-SI unit which we still see used). If a different value is used to define pounds force, then the relationship to kilograms force will be slightly different—but in any case, that relationship is also a constant anywhere in the universe. What is not constant throughout the universe is the amount of force in terms of pounds-force (or any other force units) which 1 lb will exert due to gravity.
By analogy with the slug, there is a rarely used unit of mass called the "metric slug". This is the mass that accelerates at one metre per second squared when pushed by a force of one kgf. An item with a mass of 10 kg has a mass of 1.01972661 metric slugs (= 10 kg divided by 9.80665 kg per metric slug). This unit is also known by various other names such as the hyl, TME (from a German acronym), and mug (from metric slug).
Another unit of force called the poundal (pdl) is defined as the force that accelerates 1 lbm at 1 foot per second squared. Given that 1 lbf = 32.174 lb times one foot per second squared, we have 1 lbf = 32.174 pdl.
In conclusion, we have the following conversions:
- 1 kgf (kilopond kp) = 9.80665 newtons
- 1 metric slug = 9.80665 kg
- 1 lbf = 32.174 poundals
- 1 slug = 32.174 lb
- 1 kgf = 2.2046 lbf
Forces in everyday life
Forces are part of everyday life:
- gravity: objects fall, even after being thrown upwards, objects slide and roll down
- friction: floors and objects are not extremely slippery
- spring force, objects resist tensile stress, compressive stress and/or shear stress, objects bounce back.
- electromagnetic force: attraction of magnets
- movement created by force : the movement of objects when force is applied.
Forces in industry
<to be completed>
Forces in the laboratory
- Galileo Galilei uses rolling balls to disprove the Aristotelian theory of motion (1602 - 1607)
- Henry Cavendish's torsion bar experiment measured the force of gravity between 2 masses (1798)
Instruments to measure forces
<to be completed>
Forces in theory
Force, usually represented with the symbol F, is a vector quantity.
Newton's second law of motion can be formulated as follows:
- F = lim T→0 (m·v − m·v0) / T
- F is the force, measured in newtons,
- m is the mass, measured in kilograms,
- v is the final velocity and v0 the initial velocity, measured in metres per second,
- T is the time from the initial state to the final state (measured in seconds);
- Lim T→0 is the limit as T tends towards zero.
Force was so defined in order that its reification would explain the effects of superimposing situations: If in one situation, a force is experienced by a particle, and if in another situation another force is experienced by that particle, then in a third situation, which (according to standard physical practice) is taken to be a combination of the two individual situations, the force experienced by the particle will be the vector sum of the individual forces experienced in the first two situations. This superposition of forces, and the definition of inertial frames and inertial mass, are the empirical content of Newton's laws of motion.
The content of the above definition of force can be further explicated. First, the mass of a body times its velocity is designated its momentum (labeled p). So the above definition can be written:
- F = Δp / Δt
If F is not constant over Δt, then this is the definition of average force over the time interval. To apply it at an instant we apply an idea from Calculus. Graphing p as a function of time, the average force will be the slope of the line connecting the momentum at two times. Taking the limit as the two times get closer together gives the slope at an instant, which is called the derivative:
- F = dp/dt
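A minimal numerical sketch of this slope-of-momentum idea (the mass and velocity values are illustrative, not from the article):
def average_force(p1, p2, t1, t2):
    """Average force = slope of momentum between two nearby times."""
    return (p2 - p1) / (t2 - t1)

m = 2.0                          # kg
def v(t): return 3.0 * t         # velocity grows linearly -> constant acceleration 3 m/s^2
def p(t): return m * v(t)        # momentum

print(average_force(p(1.0), p(1.001), 1.0, 1.001))   # about 6.0 N, i.e. m*a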
With many forces a potential energy field is associated. For instance, the gravitational force acting upon a body can be seen as the action of the gravitational field that is present at the body's location. The potential field is defined as that field whose gradient is minus the force produced at every point:
- F = -∇U, where U is the potential energy.
While force is the name of the derivative of momentum with respect to time, the derivative of force with respect to time is sometimes called yank. Higher order derivatives can be considered, but they lack names, because they are not commonly used.
In most expositions of mechanics, force is usually defined only implicitly, in terms of the equations that work with it. Some physicists, philosophers and mathematicians, such as Ernst Mach, Clifford Truesdell and Walter Noll , have found this problematic and sought a more explicit definition of force.
Non-SI usage of force and mass units
The kilogram-force is a unit of force that was used in various fields of science and technology. In 1901, the CGPM made kilogram-force well defined, by adopting a standard acceleration of gravity for this purpose, making the kilogram-force equal to the force exerted by a mass of 1 kg when accelerated by 9.80665 m/s². The kilogram-force is not a part of the modern SI system, but vestiges of its use can still occur in:
- Thrust of jet and rocket engines
- Spoke tension of bicycles
- Draw weight of bows
- Torque wrenches in units such as "meter kilograms" or "kilogram centimetres" (the kilograms are rarely identified as unit of force)
- Engine torque output (kgf·m expressed in various word order, spelling, and symbols)
- Pressure gauges in "kg/cm²" or "kgf/cm²"
In colloquial, non-scientific usage, the "kilograms" used for "weight" are almost always the proper SI units for this purpose. They are units of mass, not units of force.
The symbol "kgm" for kilograms is also sometimes encountered. This might occasionally be an attempt to disintinguish kilograms as units of mass from the "kgf" symbol for the units of force. It might also be used as a symbol for those obsolete torque units (kilogram-force metres) mentioned above, used without properly separating the units for kilogram and metre with either a space or a centered dot.
Force was first described by Archimedes.
- Calculation: force F - English and American units to metric units
- Online Unit Converter - Conversion of many different units
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Force | 13
50 | The simplest gravitational lens is the lens around a distant star or planet. This lens is a point gravitational lens, because it behaves as though all of the mass of the gravitating object were concentrated at the very center of the lens. The structure of a point gravitational lens depends only on the mass of the central object, and the only role the radius of the central source plays is to set the minimum distance a lens must be from us to produce an image of a more distant object.
Light experiences the largest deflection as it passes through the gravitational lens of a star when the light just grazes the photosphere of the star. This defines a minimum focal length for the gravitational lens, or a minimum distance that an observer must be from a star to see it act as a lens. This distance is given by
D > R^2 c^2 / 4GM ≈ 550 (R/Rsun)^2 (Msun/M) AU
where D is the distance from the observer to the star generating the lens, R is the radius of the star, c is the speed of light, G is the gravitational constant, and M is the mass of the star. The second form of this equation gives the distance in Astronomical Units for the mass and radius in units of the solar values.
For a main-sequence star, the radius of the star is very roughly proportional to the mass, so the minimum distance one must be from a main sequence star to see its lens is about the same for all stars. Given that the nearest star is over a parsec away, and a parsec is 2 × 10^5 AU, potentially the gravitational lens of every star in our Galaxy is visible to us.
For a solid planet, where the mass is proportional to R^3 and the density varies relatively little from planet to planet, the minimum distance required to see its gravitational lens is proportional to M^-1/3, so the smaller the body, the farther away we must be to see its lens. For Earth, we must be farther away than 1.5 × 10^4 AU to see its lens, or 0.074 parsecs. For the bodies found in the Kuiper belt, which are much smaller than Earth, this minimum distance is a parsec or more, so we would never see the effects of a Kuiper Belt object's gravitational lens at Earth.
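A quick numerical check of the minimum-distance formula above (constants in SI units; the function name is illustrative):
G  = 6.674e-11      # m^3 kg^-1 s^-2
C  = 2.998e8        # m/s
AU = 1.496e11       # m

def min_lens_distance_au(radius_m, mass_kg):
    """Minimum observer distance R^2 c^2 / (4 G M), expressed in AU."""
    return radius_m**2 * C**2 / (4.0 * G * mass_kg) / AU

print(min_lens_distance_au(6.96e8, 1.99e30))   # Sun: about 550 AU
print(min_lens_distance_au(6.37e6, 5.97e24))   # Earth: about 1.5e4 AU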
The area on the sky covered by a gravitational lens is roughly the area encircled by its Einstein ring. The larger this ring is on the sky, the more likely another more distant object will be seen through the lens. But most of the stars in our own galaxy are very far from us, and the Einstein ring of each of these stars covers a very small area on the sky. A star at the edge of the plane of our galaxy is about 50 pc away from us; if this star is one solar mass, its Einstein ring would cover only 4 × 10^-15 of the sky. A more typical star in our galaxy is around 10 kpc away from the Sun; if this star is of one solar mass, it would cover 10^-19 of the sky. Placing the 10^12 solar masses worth of stars at 10 kpc, which would constitute the total mass of our galaxy, we would find that altogether they cover only about 10^-7 of the sky. The chances that any one object outside of our galaxy is behind the lens of a star within our galaxy is therefore only about one in ten million.
An important point about this last calculation is that this probability depends only on the distance to the lenses, and not the mass of each individual star. The radius of an Einstein ring is proportional to M^1/2, so the area is proportional to M, so that the total area covered by lenses depends only on the total mass of all of the stars in the galaxy. For this reason, whether our galaxy of 10^12 solar masses is composed of only one solar mass stars, or of 0.1 solar mass stars, the area covered by their gravitational lenses would be the same.
A point gravitational lens produces only two types of image: either a single Einstein ring, or a pair of images. An Einstein ring is created whenever the image source lies on a direct line with the lens and us. If the source is away from this line, the lens creates two separate and distorted images of the source, one on each side of the lens. This second case is much more likely than the first if the size on the sky of the image source is much smaller than the radius of the Einstein ring.
The transition from ring to double image is easy to imagine. Let us assume that the source is a star in another galaxy and that the lens source is a star in our own Galaxy. We start with the image source in a direct line behind the lens. The ring that is produced is uniform all the way around the lens. As we move our extragalactic star to the left, we see the ring appear to shift to the left. The ring remains intact until the right edge of the extragalactic star is on the lens-observer line; when the extragalactic star's edge drops off of this line, the ring splits into two crescents, one facing left, the other facing right, with the points of one nearly touching the points of the other. As we move the star further to the left, the points of the crescent pull back and become rounded, until the crescents transform into two curved ovals, one on the left of the lens, the other on the right of the lens.
When the line to the extragalactic star falls inside the region defined by the Einstein ring, the lens creates images that are larger on the sky, and therefore much more luminous, than the unmagnified source. As our extragalactic star continues to move left, the images created by the lens continue to move left and shrink in size. Once the extragalactic star is outside of the area enclosed by the Einstein ring , the image on the right side of the lens becomes dimmer than the unmagnified source, and the image to the left of the lens becomes as bright as the unmagnified source. Eventually the image on the right of the lens becomes too dim to see, and the image on the left is the unmagnified image of the extragalactic star.
A star at 1 pc produces an Einstein ring of radius 0.29." A star of one solar mass at a more typical distance of 100 pc produce a ring of radius 0.03." The Hubble telescope can point to a position with a 0.01" resolution, so an Einstein ring is at the limit of detectability with this telescope. Realistically, we will never see the image created by a stellar gravitational lens.
The tell-tale signature of a point gravitational lens is the brightening of more distant stars. The maximum magnification is simply the area of the Einstein ring on the sky divided by the area of the unmagnified image, which for a distant star is an increase in luminosity of 2 α/φ, where α is the radius of the Einstein ring on the sky and φ is the radius of the unmagnified image on the sky. Consider for example the magnification of a one solar mass star at 100 pc by the gravitational lens produced by a one solar mass star at 10 pc. The star at 10 pc produces an Einstein ring with radius α = 0.029." The radius on the sky of the star at 100 pc if the lens is absent is φ = 4.7 × 10^-5." The gravitational lens can therefore increase the luminosity of the 100 pc star by a factor of 1200.
The more likely outcome of a lens magnifying a more distant star is that the more distant star falls near the circle defined by the Einstein ring. The ratio of the probability of a star's image having the maximum possible magnification to the probability that it is magnified at all is roughly the ratio of the area of the unmagnified star on the sky to the area enclosed by the Einstein ring. For our last example, this ratio is only 3 × 10^-6. The most likely outcome of magnification is that the luminosity of the distant star increases by less than a factor of two. | http://www.astrophysicsspectator.com/topics/generalrelativity/GravitationalLensPoint.html | 13
55 | The chemical analysis of bones to interpret diet rests on the observation that different foods vary in the composition of different chemical elements or isotopes. Isotopes are different forms of an element that have different numbers of neutrons in their atomic nuclei (if they had different numbers of protons, they would be different elements). The number of neutrons in the nucleus affects the atomic weight of the isotope, and for this reason different isotopes may be taken up differently by different kinds of plants or animals, or they may be more or less abundant from different natural sources, such as the different mineral compositions of local soils. The chemical composition of an animal depends on the foods that it has eaten over the course of its life. Therefore, different kinds of animals may have different chemical signatures based on their preferred diets. If these isotopes are stable, then they may be preserved in fossil remains long after the death of the individual, and paleontologists may be able to access these ratios and make interpretations about the diets of ancient species. The amount of preserved material varies depending on the type of tissue examined, so that chemical analyses are usually expressed in terms of the ratio or proportion of one isotope or element to another. The major stable isotopes that have been examined in fossil hominid remains include the ratio of strontium (Sr) to calcium (Ca) and the ratio of carbon-13 (13C) to carbon-12 (12C).
Strontium and calcium are chemically similar elements that occupy the same column on the periodic table. For this reason, strontium can be taken up by plants in the place of calcium, and the two form a ratio that depends on the environmental abundance of strontium. When herbivores eat the plants, their bodies preferentially incorporate calcium instead of strontium, so that their Sr/Ca ratio is lower than that of the plants. And when carnivores eat the herbivores, they again incorporate more calcium than strontium, so that their Sr/Ca ratio is lower than the herbivores. Taken together, this means that we could infer the general diet composition (trophic level) of a fossil hominid if we knew its ratio of strontium to calcium. This wouldn't tell everything about the diet, and in fact it leaves many blanks. But with respect to the question of whether early humans were significant meat eaters, and whether robust australopithecines and other early hominids significantly differed in diet, this technique has great potential to inform.
One thing that is worth noting about these kinds of chemical ratios is that they reflect the average diet of ancient hominids across a large part of their lifespan. This time probably varies with circumstances, but it must always have included multiple years of dietary intake. This means that these ratios may respond to different aspects of the diet than the anatomy and size of the teeth, especially if the teeth were significantly adapted to fallback foods that did not make up the majority of the dietary intake of the animal.
Analysis of strontium and calcium in fossil bones requires some background work. The amount of strontium available to the food web depends on the local soil composition, so the Sr/Ca ratio may vary among samples of the same kind of animal taken at different sites. This means that different kinds of herbivores, carnivores, and omnivores must be sampled at the same location in order to interpret their Sr/Ca ratios. Additionally, it appears that fossil bone and fossil teeth may vary in their preservation of Sr/Ca ratios. The initial work on early hominid Sr/Ca ratios was done by Andy Sillen (1992) on fossil bone from Swartkrans. But Sillen and others have shown that the process of fossilization alters composition of bones in ways that may skew or erase the endogenous ratio of strontium to calcium.
Sponheimer and colleagues (2005b) examine the ratio of strontium to calcium in the tooth enamel of fossil hominids from South African sites. Enamel is less susceptible to diagenesis (change over time) than bone, and should preserve more accurate estimates of Sr/Ca ratios. A possible issue is that enamel is mainly deposited early in life, and therefore reflects preweaning or early juvenile diets that may not be fully representative of the dietary repertoire of the animal. To examine this, Sponheimer et al. examined comparative samples of mammals from the fossil localities and from recent contexts, finding that the Sr/Ca ratios did differentiate browsers, grazers, and carnivores from each other. The differences between these animal groups are that the grazers have the highest Sr/Ca ratios, while the carnivores and browsers have lower ratios. Browsers eat a high proportion of leafy species that tend to have lower Sr/Ca ratios, and as a consequence their Sr/Ca ratios tend to be slightly lower than those of the carnivores.
The analysis found that the remains of A. africanus from Sterkfontein Member 4 had relatively high Sr/Ca ratios, easily within the range of or even exceeding those of the grazers. A. robustus from Swartkrans Member 1 had substantially lower Sr/Ca ratios than A. africanus, but these were within the range of all the other animals, including browsers, grazers, and carnivores.
Sponheimer and colleagues (2005b) note that the results here were different from those of Sillen (1992), who showed the robust australopithecine bones to have rather low Sr/Ca ratios. Sillen suggested that this meant that the robust australopithecines were significantly omnivorous. The tooth enamel is consistent with a broad range of diets, so it does not disprove the hypothesis that robust australopithecines were omnivores, but it does not specifically disprove the notion that they were exclusive herbivores either.
The bottom line is that it is very difficult to differentiate diets with this kind of information. One problem is the nature of the overlap among the comparison samples. The whisker plots overlap substantially among these, and since they show the 10th and 90th percentiles, the extent of overlap may have been almost complete. This is not to say that the distributions are the same, but that individual fossils that are in the area of overlap (which would include most of the robust specimens and many of the A. africanus specimens) may not be diagnosed. Fortunately the study included a large number of teeth, so that samples may be compared to each other, and these samples are significantly different from each other. Assuming that the teeth have been assigned correctly to samples, this provides some confidence in the idea of a dietary difference between these samples.
A more important problem is that very different dietary compositions may have the same Sr/Ca signature. For example, a leaf browser that included some grass seeds in its diet might have the same Sr/Ca ratio as a fruit eater that included significant meat. And the Sr/Ca ratio does not give any indication of seasonal variations that might have ecological importance.
Carbon stable isotopes
Not all plants photosynthesize in the same way. The majority of plant species use a three carbon photosynthetic pathway. These are called C3 plants. But some plants use instead a four carbon pathway, and these are called C4 plants. The C4 plants are a minority, but include a large proportion of grasses and sedges, and a few other kinds of plants.
There are two stable isotopes of carbon in nature. Most of this carbon has six neutrons, resulting in an atomic weight of 12. But a minority of carbon has seven neutrons, with an atomic weight of 13 (an additional small proportion is the radioactive carbon 14). The C3 photosynthetic pathway preferentially includes carbon 12 (12C), so that C3 plants have a ratio of 13C to 12C that is substantially lower than the 13C/12C ratio in nature. For C4 plants, this discrimination is not as great, so that C3 plants and C4 plants differ in their 13C/12C ratios. Animals obtain their carbon from the foods they eat, so that the 13C/12C ratio of a herbivore marks the proportion of C3 and C4 plants in its diet. Likewise, the 13C/12C ratio of a carnivore reflects the plant diets of its prey species.
For example, grazers tend to eat a high proportion of grasses, which in Africa are predominantly C4 plants. This means that grazers have a relatively high 13C/12C ratio compared to other herbivores. It also means that carnivores who focus on grazers as prey species also have a high 13C/12C ratio. As noted by Sponheimer et al. (2005a:302): "the tissues of zebra, which eat C4 grass, are more enriched in 13C than the tissues of giraffe, which eat leaves from C3 trees."
A number of studies have examined the 13C/12C ratio in early hominid remains, focusing on those from the South African caves (Lee-Thorp et al. 1994; Sponheimer and Lee-Thorp 1999; van der Merwe et al. 2003). These studies are reviewed along with new results by Sponheimer and colleagues (2005a). The two basic results are that A. africanus and A. robustus are indistinguishable from their 13C/12C ratios, and that both australopithecine species have 13C/12C ratios that are elevated compared to C3 consumers and intermediate between them and C4 grazers. The fact that their ratios are lower than C4 grazers is not surprising, since australopithecines clearly did not eat grass. But if they depended largely on fruits, nuts, or other C3 foods, then it is difficult to explain why they should have stable isotope ratios that reflect a partial consumption of C4 foods.
Several hypotheses might explain this observation:
- Australopithecines may have eaten underground storage organs of C4 plants, such as grass corms or tubers of certain sedges.
- They may have eaten seeds from C4 grasses.
- They may have eaten the meat from grazing species.
- They may have eaten termites that relied on grasses and other C4 species.
- Diagenesis of 13C/12C ratios in fossils may have altered the isotopic signature, which actually may have been the same as that of C3 consumers (Schoeninger et al. 2001).
Sponheimer and colleagues (2005a) address the last hypothesis by testing a greater number of australopithecine teeth, finding results consistent with earlier findings. It is not obvious that this eliminates doubt entirely, but more samples provide more confidence that a real phenomenon has been observed. A comparison of all the hominids with recent C3 consumers shows clearly that they are significantly different, with relatively little overlap. They are also very different from the fossil C3 consumers preserved at the same sites (Sterkfontein and Swartkrans), which include browsing antelopes and giraffids. They are also distinct from C4 grazers in having a lower 13C level. From these values, Sponheimer and colleagues (2005a:305, emphasis in original) write:
[T]he data suggest that Australopithecus and Paranthropus ate about 40% and 35% C4-derived foods respectively. Such a significant C4 contribution, whatever its origin, is very distinct from what has been observed for modern chimpanzees (Pan troglodytes). Schoeninger et al. (1999) found no evidence of C4 foods in chimpanzee diets even in open environments with abundant C4-grass cover.
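How do the authors get from measured isotope values to percentages like these? Such figures are conventionally derived from a simple two-end-member linear mixing calculation. The sketch below is only an illustration of that arithmetic, not the calculation actually used by Sponheimer and colleagues; the enamel end-member values for a pure C3 feeder and a pure C4 feeder are assumptions chosen for demonstration.

```python
# Illustrative two-end-member mixing model for carbon isotope data.
# The end-member enamel delta-13C values below are ASSUMED for demonstration
# and are not taken from Sponheimer et al. (2005a).
C3_END = -12.0   # hypothetical enamel value for a pure C3 feeder (browser)
C4_END = 2.0     # hypothetical enamel value for a pure C4 feeder (grazer)

def percent_c4(delta13c_sample):
    """Linear interpolation of a sample between the two end members."""
    fraction = (delta13c_sample - C3_END) / (C4_END - C3_END)
    return 100.0 * max(0.0, min(1.0, fraction))

# A hypothetical hominid enamel value lying between the two end members:
print(round(percent_c4(-7.0)))   # about 36 percent C4-derived carbon
```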
With respect to the termites and sedges, Sponheimer and colleagues (2005a) found that termites in open environments do have a high C4 proportion, while South African sedge species were found to be predominantly C3 plants. This means that termites might have provided part of the C4 component of early hominid diets, but the underground storage organs of sedges most likely did not. This does not mean that hominids may not have used sedges as a resource, but instead that such use would not explain their relatively high C4 proportion. And a diet of 35 to 40 percent termites seems quite high, so even if these were included in the diet, there were likely other C4 sources for the early hominids.
Another feature of the observations is that A. africanus teeth were quite variable in their 13C levels. Sponheimer and colleagues (2005a) suggest that one hypothesis to explain this variability would be if the sample changed over time--for example, in response to environmental change toward more open environments. Such changes in environment may be evidenced by a difference in the stable oxygen isotope ratios of the Sterkfontein and Swartkrans hominids. But when the ages of fossils were compared to 13C/12C ratios, there was no change over time, indicating that whatever dietary changes may have occurred, they evidently did not greatly affect the C4 proportion in the diets of the australopithecines. Sponheimer and colleagues (2005a:308) conclude that the australopithecines with high variability may simply have been "extremely opportunistic primates with wide habitat tolerances that always inhabited a similarly wide range of microhabitats regardless of broad-scale environmental flux."
Combining the data
Can these different sources of evidence be put together into a single picture of ancient hominid diets? The answer is yes, but unfortunately there is more than one hypothesis that may fit the bill. The facts that must be explained are as follows:
- High Sr/Ca ratios in A. africanus
- Moderate Sr/Ca ratios in A. robustus
- High proportion of C4 sources in both A. africanus and A. robustus
- Dental anatomy unsuited to leaf or grass eating in either species
- Tooth wear and anatomy reflecting hard, brittle food consumption by robust australopithecines (Grine 1986; Grine and Kay 1988), and possibly similar but to a lesser degree in other early hominids (Ungar 2004).
Sponheimer et al. (2005b) treat the first of these observations as the most problematic, and try to account for it with hypotheses that are consistent with the other observations. One hypothesis is that early hominids were insectivorous. They indicate that modern insectivores do have higher Sr/Ca ratios than other faunivores (153). In combination with possible evidence for termite digging at Swartkrans (Backwell and d'Errico 2001), this observation might suggest that early hominids used termites and other insects as a significant food source, even more so than living chimpanzees. Sponheimer and colleagues judge this hypothesis as problematic because the fossil hominids differ from recent insectivores in having a low ratio of barium to calcium (154). One may also add that the robust australopithecines from Swartkrans did not have especially high Sr/Ca ratios, while there has not yet been evidence of termite digging for earlier hominids.
A second hypothesis is described as follows:
We have noticed that among the modern fauna that have the unusual combination of high Sr/Ca and low Ba/Ca are warthogs (Phacochoerus africanus) and mole rats (Cryptomys hottentotus) (Sponheimer, unpublished data), both of which eat diets rich in underground resources such as roots and rhizomes. Thus, the possibility of greater exploitation of underground resources by Australopithecus compared to Paranthropus requires consideration. In addition, the slightly enriched Sr/Ca of Paranthropus compared to papionins might also be evidence of increased utilization of underground resources.
This last point about A. robustus may be reaching. On the other hand, this leaves the dietary mix issue somewhat unsettled. For example, what if both A. africanus and A. robustus ate underground resources, but A. robustus also ate meat? Or what if A. africanus also ate insects? And so on. It seems that anything short of a clear identity of diet between an early hominid and a modern analog in the African fauna will leave open the possibility of such exotic mixes.
This leaves us to reflect on the full pattern of evidence more closely. How is the hard, brittle diet inferred from dental anatomy and wear reconcilable with the hypothesis that australopithecines were eating the tough, fibrous underground storage organs of C4 plants?
Peters and Vogel (2005) address the issue of C4 diet proportion by examining the range of C4 plants that may have been available to early hominids. They make a number of observations:
- C4 sedges that produce edible roots, tubers, or stems are water-reliant, and do not compete with grasses in areas where drought occurs seasonally. They are therefore limited to relatively permanent watercourses including areas that are seasonally inundated with water. The South African sites do not represent such wetlands.
- Interestingly, C4 grasses have an evolutionary origin in the late Middle Miocene, and had increased in abundance in the African flora by the origin of the hominids.
- A majority C4 food intake by early hominids seems unlikely because of the wide availability of hominid-edible C3 foods in areas where the relatively rarer C4 hominid-edible plants also exist.
- Mature tubers of C4 sedges appear to have toxicity that may have impeded their edibility by early hominids, and they could probably have been consumed only in very small amounts.
- A number of potential animals may have provided a C4 component for early hominids, beyond the relatively large C4 grazing ungulates. These would include reptiles, birds, and rodents as well as insects. Early hominids would not have competed with other large carnivores for these small animals.
This brings us to a third hypothesis for early hominid diets. Peters and Vogel (2005:232-233) support an interpretation of omnivory for the early hominids, giving the following scenario:
As a starting point we can offer the following theoretical formulation of possibilities for a 30% C4 contribution to a subadult hominid diet based on minor potential C4 food categories:
- 5% C4 input from sedge stem/rootstock, green grass seed, and forb leaves
- 5% C4 input from invertebrates
- 5% C4 input from bird eggs and nestlings
- 5% C4 input from reptiles and micromammals
- 5% C4 input from small ungulates
- 5% C4 input from medium and large ungulates
This type of formulation maximizes the diversity of food species, i.e., both food-species-richness and evenness of contribution. The exact numbers are not as important as the species richness of the formulation.
A couple of things can be noted from this scenario. First, the predominant part of the C4 contribution comes from animal resources rather than plants. This conforms with Peters and Vogel's (2005) examination of C4 plant resources, only a few of which are both edible by hominids and potentially available in quantity in their apparent paleoenvironments. However, this dietary component does not explain the masticatory adaptations of the australopithecines (indeed, it is not intended to explain them, since early Homo does not differ in dietary C4 contribution from earlier hominids). There is certainly nothing about the dentitions and jaw musculature of robust australopithecines to preclude an omnivorous diet of this type, but that invites the question of what fallback foods may explain that adaptation, or may explain the difference between robust and other australopithecines in that respect.
None of these three hypotheses really accounts for the full pattern of evidence about early hominid diets. The consensus so far appears to be that the chemical characteristics of the bones and teeth of early hominids reflect a majority diet that did not require a specialized dental adaptation. Therefore, the dental specializations of early hominids, in particular the enlargement of the postcanine dentition, reduction of the incisors and canines, and the low crowns of the molar teeth probably were adaptations to a minority of dietary intake that nevertheless was extremely important in selective terms. This would be characteristic of fallback foods eaten at times of resource scarcity, and would evidently have consisted of hard, brittle food items that could be effectively pulverized and ground by low-crowned teeth with large surface areas and thick enamel. This interpretation is supported by Ungar (2004) in an analysis of dental topography in early hominids and living hominoids (discussed in another article).
There are some remaining mysteries:
- If australopithecines had basically similar C4 dietary proportions, then what accounts for their differences in Sr/Ca ratios?
- Did any A. africanus-like hominids ever coexist with a robust australopithecine species?
- If early Homo had a C4 proportion that came in large part from hunting or scavenging grazing species, a hypothesis also supported by their dental anatomy (Ungar 2004), then did they abandon any of the C3 resources used by australopithecines?
- If australopithecines were opportunistic omnivores, were there important regional differences in their dietary composition?
These questions and others might be addressed with further sampling of dental chemistry.
Backwell LR, d'Errico F. 2001. Evidence of termite foraging by Swartkrans early hominids. Proc Natl Acad Sci U S A 98:1358-1363.
Grine FE. 1986. Dental evidence for dietary differences in Australopithecus and Paranthropus: a quantitative analysis of permanent molar microwear. J Hum Evol 15:783-822.
Grine FE, Kay RF. 1988. Early hominid diets from quantitative image analysis of dental microwear. Nature 333:765-768.
Peters CR, Vogel JC. 2005. Africa's wild C4 plant foods and possible early hominid diets. J Hum Evol 48:219-236.
Schoeninger MJ, Bunn HT, Murray S, Pickering T, Moore J. 2001. Meat-eating by the fourth African ape. In: Stanford CB, Bunn HT, editors, Meat-eating and human evolution. Oxford, UK: Oxford University Press. p 179-195.
Schoeninger MJ, Moore J, Sept JM. 1999. Subsistence strategies of two savanna chimpanzee populations: The stable isotope evidence. Am J Primatol 49:297-314.
Sillen A. 1992. Strontium-calcium ratios (Sr/Ca) of Australopithecus robustus and associated fauna from Swartkrans. J Hum Evol 23:495-516.
Sponheimer M, de Ruiter D, Lee-Thorp J, Späth A. 2005b. Sr/Ca and early hominin diets revisited: New data from modern and fossil tooth enamel. J Hum Evol 48:147-156.
Sponheimer M, Lee-Thorp J, de Ruiter D, Codron D, Codron J, Baugh AT, Thackeray F. 2005a. Hominins, sedges, and termites: New carbon isotope data from the Sterkfontein valley and Kruger National Park. J Hum Evol 48:301-312.
Sponheimer M, Lee-Thorp JA. 1999. Isotopic evidence for the diet of an early hominid, Australopithecus africanus. Science 283:368-370.
Ungar P. 2004. Dental topography and diets of Australopithecus afarensis and early Homo. J Hum Evol 46:605-622.
van der Merwe NJ, Thackeray JF, Lee-Thorp JA, Luyt J. 2003. The carbon isotope ecology and diet of Australopithecus africanus at Sterkfontein, South Africa. J Hum Evol 44:581-597.
For hundreds of years, seamen yearned to better their lot. From the whip-lashed oarsmen of Roman and Spanish galleys to the crews of modern windjammers, seamen were usually underfed, underpaid and overworked and considered workmen beyond the usual recourses of the law.
Along with the harsh and vigorous nature of their daily labors were the constant hazards of seafaring. Untold thousands of sailors have set out from port never to return, becoming victims of storms, collisions and that most dreaded foe of the ocean voyager--fire at sea. And in the pages of old shipping journals there was always this recurrent notice beside the name of a ship: "missing and presumed lost with all hands."
Much as they wanted to better their condition, seamen had little chance to express their dissatisfaction in any effective way, much less to organize for concerted action. Maritime laws of all nations gave absolute authority to the captain at sea. Quite appropriately was the captain called "master." He was that, in fact. Many protests by seamen during a voyage against poor food, overwork, brutality or unsafe conditions were branded "mutinies" and were suppressed by fists, guns or belaying pins. Only rarely was the seaman's voice heard as far as the courts, and then the masters, mates or owners almost always won the case.
All maritime nations had strict laws against a seaman leaving his ship before the end of the voyage. In 1552, for instance, the Spanish government decreed that any sailor who deserted his ship before the end of a voyage to America could be punished by 100 lashes, a sentence virtually equal to death. As late as the 19th century in England, the United States and other maritime nations, a seaman who left his ship before the end of a trip could be forcefully apprehended and brought back on board. If he wasn't returned, he automatically forfeited his pay and any belongings left on the ship. In the U.S., this law was only rescinded after passage of the "Magna Carta of the American Seamen"-- the Seamen's Act of 1915. This legislation was initiated by Andrew Furuseth, famous champion of seamen's rights and head of the old International Seamen's Union.
The sailor was always at a great disadvantage in organizing into a union because of the nature of his profession. He was at sea most of the time. And when ashore, his meager wages were soon spent, leaving him at the mercy of crimps, shipping masters, owners and the many other harpies of the waterfront.
A seaman with a reputation for protesting his lot would soon find it hard to get a ship. But the seaman has always been an independent fellow, and it is not surprising that the first labor strike in the United States was by the sailors of New York in 1803, when they refused to sail the ships until they received an increase in pay from $10 a month. There is little information available about this strike, but there is a reference to them getting $17 a month later, so the action must have been effective. But the sailors' efforts were only spasmodic and their achievements did not last long. There was a strike in Boston in 1837, when pay was little more than it was in 1803.
It must be remembered, of course, that many shoreside workers were not much better off than the seaman. If the sailor was unhappy with his pay, he did not have much chance of improving himself ashore. Once accustomed to the sea, moreover, the sailor did not take kindly to the boredom and drudgery of jobs ashore.
The first organization of seamen in the United States occurred in January of 1866 when the following notice appeared in a San Francisco paper:
Seamens Friendly Union Society
All seamen are invited to attend at the Turn Verein Hall on Bush Street between Stockton and Powell Streets on Thursday Evening, January 11 at 7 1/2 o'clock to form a Seamens Society for the Pacific Coast.
This meeting resulted in organization of the Seamens Friendly Union and Protective Society. Alfred Enquist was elected president and George McAlpine, secretary. It was the first organization of seamen in this country, perhaps the first in the world. In 1875, the United Seamen's Association was formed in the port of New York, and it sent a delegation to Congress to petition for laws to protect seamen. The delegation, according to a news report in The New York Times of January 21, was "graciously received by the President."
No more was heard of this organization.
The Seamen's Friendly Union and Protective Society in San Francisco did not last long, and the next organization to come along was the Seamen's Protective Union formed in San Francisco in 1878 with 800 members. It, too, had a short life.
When wages on the coasting vessels fell to $25 a month in 1885, seamen met one night on a lumber wharf along the San Francisco waterfront to protest. This was followed a week later by a second meeting, which resulted in formation of the Coast Seamen's Union, with Billy Thompson being elected president. By July, the union had a permanent headquarters and some 2,000 members. Only sailors were allowed to join. Dues were 50 cents a month. In the following year, seamen on steamships formed the Steamship Sailor's Protective Association, which merged in 1891 with the Coast Seamen's Union under the name Sailors Union of the Pacific.
In June of 1886, the SUP had called its first strike, forcing wages up to $30 a month.
With these organizations, the seamen's labor movement was off to a firm start, at least on the West Coast.
Seamen organized on the Great Lakes at about the same time. The Seamen's Benevolent Union of Chicago was formed in 1863 but soon expired, mainly because its main objective was to take care of sick or indigent members rather than to raise wages and improve conditions.
In 1878, this organization was revived with the name Lakes Seamens Benevolent Association, under the leadership of Dan Keefe.
This was a real trade union, with its main commitment being financial betterment and improved living conditions aboard ship. Branches sprang up in the major Lakes ports. Within a few years, the ship owners had broken the union by setting up their own hiring halls and refusing to ship any men with known union proclivities. The Union, however, was revived in the 1890s and survived to become part of the International Seamen's Union.
Longshoremen of the Lakes organized in Chicago in 1877 and then formed the National Longshoremen's Association of the United States in Detroit in 1892. This became the International Longshoremen's Association in 1895. It was also on the Great Lakes that the first union of marine engineers was formed in 1854. It quickly faded away but was revived in 1863 and again in 1875 when it became the National Marine Engineers Beneficial Association. Captains and mates have a history of union activity on the Lakes dating back to 1886.
In 1892, a convention of seamen was held in Chicago, with delegates from the various unions now organized on the West Coast, the Great Lakes and the Gulf of Mexico. There were no delegates from the Atlantic.
At this meeting was born the National Union of Seamen of America, later to be known as the International Seamen's Union. It lasted until the 1930s, and out of its eventual wreckage came the Seafarers International Union and the National Maritime Union.
It was almost 100 years ago that American seamen belonging to various unions realized the need for a strong, single voice to speak for the sailor in the halls of Congress and in attempts to improve his economic situation. Convening in Chicago in April of 1892, representatives from the Pacific and Gulf Coasts and the Great Lakes formed the National Seamen's Union of America, later to become the International Seamen's Union.
A constitution was drafted, national officers were elected, and a chief organizer was appointed. Charles Hagen was the first president; Thomas Elderkin, the first secretary; and James McLaren, the first national organizer.
These officers were not just pie cards. They had solid seagoing backgrounds, a record of labor organizing, and a resounding zeal for the sailor's cause.
A native of Germany, Hagen sailed for 15 years on windjammers under many flags. A man of unusual energy and imagination, he organized the Gulf Coast union of seamen and firemen and the New Orleans Marine Council, an influential group of marine engineers, captains, pilots and other maritime workers ... a close parallel to our important Port Councils today. He was president of the Gulf Coast union.
Secretary Elderkin, a native of England, was also a deep water sailor who had become aroused over the conditions of seamen after making a voyage on the "hellship" Waterloo, notorious for the brutality of its officers. He shipped on the Great Lakes for some years and helped to organize the Lakes Seamen's Union. He also lent his talents to organizing the Chicago building employees. He was president of the LSU.
Organizer McLaren was a Nova Scotian who joined the Sailor's Union of the Pacific in 1887 and served as an officer in various capacities. According to an article in the Coast Seamen's Journal of 1893, McLaren was a man of "shrewd energy and unswerving devotion to the sailors' cause ... feared and respected by all enemies of seamen," especially the crimps.
Seamen enjoying the comparative luxury of today's ships and the good food and high wages won by union efforts in the past 50 years will be amazed by what the seamen of 1915 hailed as the major achievements of the act passed that year.
The seamen's bill provided a two-watch system for the deck force, and a three-watch system for the engine gang, plus a maximum nine-hour working day in port. It set a more liberal schedule for rations and a minimum of 100 cubic feet of space per man in the fo'c's'les. Previously, each man had been allotted 72 cubic feet, which Furuseth described as "too large for a coffin too small for a grave." Also, the law specified that bunks in fo'c's'les could be no more than two high.
The law also decreed that 75 percent of the crew must be able to understand commands given in the English language.
Spurred by the sinking of the Titanic and other marine disasters, the act was also concerned with more safety at sea, better qualified seamen, more and better lifeboats and more seaworthy conditions of ships.
It brought about historic improvements in the life of the sailor.
For one thing, the law decreed that the sailor no longer could allot part of his wages to creditors before signing on a vessel. This sounded the death knell to crimps, shanghaiers and shady boarding housekeepers who had preyed on the sailor, taking a "mortgage" on his wages in exchange for food, lodging, drinks and clothes.
And no longer could the seaman be imprisoned on charges of desertion if he left his ship before the end of a contracted voyage. It also prohibited corporal punishment for offenses aboard ship.
For these reasons, the ISU hailed the seamen's bill as "the emancipation proclamation for seamen of the world."
It was union support that financed the years of effort necessary to arouse congressional and public support for the seamen's cause and successfully guide the seamen's bill on its rocky and often tempestuous course through Congress. Its eventual passage was a tribute to union organization and to Andrew Furuseth, who had devoted 20 years to the seamen's cause in Washington.
The National Seamen's Union was set up as a federation of a number of independent unions, including the Sailors' Union of the Pacific, which was the sparkplug in its organization; the Lake Seamen's Union, the Atlantic Coast Seamen's Union, and the Seamen's and Firemen's union of the Gulf Coast. The Atlantic Coast Seamen's Union had been in existence since 1889 but had not been very effective and was in such poor financial shape that it could not even afford to send a delegate to the Chicago convention of 1892. It could only afford a "good luck" telegram.
The new federation wasted no time . . . nor did organizer McLaren. Within a year, the dues paying membership of the "weak sister," the Atlantic Coast Union, was increased from about 400 to more than 1,000; several branches were reorganized, and wages had been boosted by about $12 a month. By the time of the new federation's second annual convention in New Orleans in 1893, the Atlantic Coast union was considered to be "on a fair way to becoming the largest seamen's union in the world." This prediction was actually realized in World War I.
The ISU supported a determined effort to improve the conditions of seamen through congressional legislation, eliminating abuses which had plagued the seamen's lot for generations. This battle was spearheaded by Andrew Furuseth, Washington representative of the Sailors' Union of the Pacific since 1893 and its longtime secretary. He devoted the better part of a lifetime to fighting sailors' battles in Washington.
Furuseth was elected president of the ISU in 1908 and from that time on was the respected voice of all American seamen, not only in the halls of Congress but in the press and to the hundreds of groups to whom he spoke on behalf of the "sailor's cause."
Over the years, several pieces of legislation were passed by Congress on behalf of seamen, but it was the Seamen's Bill of 1915 that crowned all such efforts for the sailor and has rightly been called "the Magna Carta of the American seaman."
The bill was sponsored for Furuseth and the ISU by Sen. Robert M. LaFollette of Wisconsin and was actively supported by Secretary of Labor William B. Wilson and a number of other congressmen. Furuseth labored for it passionately and untiringly day and night. After a two-year battle in Congress, the bill was signed by President Wilson on March 4, 1915.
The Seafarers International Union was born in the hectic, strike-ridden days of the Great Depression, the worldwide economic slump of the 1930s. The founders and many of the early members of the SIU came out of the International Seamen's Union, founded in 1892 as a federation of a number of seamen's unions on the four coasts of the United States.
The great achievement of the ISU was its support of the long-time battle to improve the legal status of seamen and of safety and living conditions aboard ship. This fight culminated in passage of the Seamen's Act of 1915. But the union's history, unfortunately, was plagued by frequent internal strife, a continually weak financial situation, and the not-always-successful effort to speak for its various autonomous parts, which could not always agree on common objectives.
In 1913, for instance, the ISU revoked the charter of the Atlantic Coast Seamen's Union because it would not support a national legislative program. The Eastern and Gulf Sailors Association, headquartered in Boston, was chartered to replace it.
There was a continual changeover in the makeup and leadership of unions within the ISU. In the space of a few years, as an example, the Atlantic Coast Seamen's Union became the Sailors Union of the Atlantic, and then the Sailors and Fireman's Union of the Atlantic.
Thanks to the shipping boom of World War I, the ISU enrolled more than 115,000 dues-paying members and enjoyed a brief period of financial prosperity. One of its major successes was the strike of 1919, which resulted in a base wage of $65 for ABs and $90 for firemen, an all-time high for deep sea sailors in peace time.
But this war-generated shipping boom soon ended, there was a worldwide shipping depression, and by 1921 membership rolls of the ISU had shrunk to 50,000. Owners refused to renew contracts and decreed wage cuts of up to 25 percent, which the ISU refused to accept. An all-ports strike started on May 1, 1921.
Shipowners set up their own hiring halls and hired non-union men or those who had dropped out of the union, a situation made more favorable for the owners because of the big reservoir of jobless seamen. After two months, the strike collapsed and the wage cuts prevailed.
This defeat weakened the ISU. It was further crippled by the continuing disruption by such radical groups as the Industrial Workers of the World (IWW) and the Marine Workers Industrial Union (MWIU).
For about 10 years after the ill-fated 1921 strike, the ISU was relatively dormant. But it was projected head-first into the violent West Coast longshoremen's strike of 1934 despite the reluctance of its leadership to get involved. The ILA West Coast dockworkers had gone on strike May 9, 1934 for more money, a 30-hour week, union-run hiring halls and a coastwide contract. West Coast seamen walked off their ships in support of the dock workers and presented demands of their own for higher wages, union recognition in collective bargaining and better conditions aboard ship. East Coast officials of the ISU then decided to support the strike in all areas, asserting that 1933 demands for better wages and conditions had been ignored by shipowners.
The owners rejected all demands.
Shipping in San Francisco and other West Coast ports was soon at a standstill. Within a few days, more than 50 ships were idle at their docks or at anchor and piers were filled with cargo that could not move to its destination.
Shipowners and other business interests then determined to open the port, and plans were made through the Industrial Association to run trucks through the gauntlet of pickets and get cargo off the piers, with Pier 38 as a start. Trucks were driven to the pier on the afternoon of the second of July, with the drivers being evacuated from the water-end in a launch. On the morning of Thursday, July 3, more than 5,000 longshoremen, seamen and curious onlookers had gathered on the Embarcadero near Pier 38. At about noon, a convoy of loaded trucks came off the pier under police escort and headed for a warehouse on King Street, passing unmolested through the picket lines.
This operation was repeated several times to the growing discontent of the pickets. Finally, the strikers could stand it no longer, and when the trucks again tried to run the gauntlet, the longshoremen and the sailors bombarded truckers and police with bricks and stones. Police counterattacked with clubs and tear gas. The battle had begun. When it was over, one picketer had been killed and many hurt.
There was no action on Independence Day, but by 8 a.m. on July 5, some 3,000 picketers had gathered on the Embarcadero, and when a Belt Line locomotive came along with cars for the pier, the battle began again. Picketers set cars on fire, hundreds of policemen charged the massed picketers, and a full-scale engagement began, with bricks and bullets, clubs and tear gas on nearby Rincon Hill, a knoll along the waterfront. When police charged up the hill to chase the picketers away, shots were fired and two picketers were killed. Scores were wounded.
When the National Guard moved in that night and took over the waterfront, the Embarcadero became a no-man's land.
The unions retaliated by calling a general strike on July 16. This action paralyzed the city. Nothing moved. Stores closed. Only a few restaurants were permitted to open. Business life came to a standstill. The strike was called off July 19 when the Joint Strike Committee representing 120 striking unions agreed to put all demands to arbitration. The president had designated a National Longshoremen Board to arbitrate the dispute.
The 1934 strike, which lasted 39 days, resulted in substantial gains for both longshoremen and seamen, with the latter obtaining wage increases, a three-watch system onboard ship and better living conditions. Although the strike seemed to end with satisfactory results for all concerned, there were more strikes to come in those troubled days of the Great Depression, with labor unrest only one phase of the social fermentation and upheaval.
Labor unrest included a new form of on-the-job protest called the sit-down strike, in which men literally sat down on the job. There were a number of sit-down actions in the maritime industry, with seamen preventing ships from sailing as a means of getting immediate response from owners on demands for higher wages and union representation. Two new maritime unions, the Seafarers International Union and the National Maritime Union were born in these hectic times. Both sprang out of the old ISU, which faded away as an organization which had served its purpose and had outlived its time.
By 1936, the International Seamen's Union was headed for the rocks, buffeted by forces from within and without.
At a long and stormy Washington convention in February of that year, conservative elements retained control of the union and reelected the venerable Andrew Furuseth as president. More importantly, they pushed through a constitutional amendment giving the union's executive board the power to revoke the charter of any local union at any time.
The board then revoked the charter of the Sailors' Union of the Pacific, which Furuseth charged was being taken over by the Industrial Workers of the World (IWW) and other radicals. The ISU tried briefly in 1938 to set up a competing union, but this attempt soon died for lack of support. The SUP sailors remained faithful to their union.
Another factor in the weakening of the ISU had come about in 1934 with formation of the Maritime Federation of the Pacific, a central labor organization containing some ISU units, principally the SUP, plus longshoremen and other groups. Harry Bridges, the longshoremen's leader, was the principal organizer of the Federation, which Victor Orlander, national secretary of the ISU, claimed was set up to destroy the International.
But it was also being wrecked from within.
Dissidents in the ISU charged that officials were not holding the required elections and had negotiated contracts with shipowners without approval of the membership and demanded their removal. Probably an equally important factor in undermining the union, however, was the general temper for change that was sweeping the country in the 1930s. It is possible that no change within the old union structure would have satisfied the activists who wanted new leaders and a more aggressive program in tune with the times. A coastwide strike started in October of 1936 as seamen demanded a new agreement to replace the 1934 pact with the shipping lines. ISU officials resisted efforts to call a general sympathy strike on the East Coast, and this incited more unrest among the rank and file. Numerous unauthorized sympathy strikes took place.
In March of 1936, crewmen of the liner California went on strike at sailing time in San Pedro, Calif., refusing to cast off the lines unless the Panama Pacific Line met West Coast wage scales and overtime.
Secretary of Labor Frances Perkins persuaded the crew by telephone to sail the ship and promised to look into their grievances when it docked in New York. But Secretary of Commerce Daniel C. Roper branded the action a mutiny, and when the ship docked, the strike leaders were logged and fired. Many ISU men blamed their officials for not backing up the crew in this beef, and the leadership was further weakened.
They were fast losing control over their members.
In October of 1936, ISU crews staged a sit-down strike in sympathy with West Coast seamen and against orders of union officials, starting with a sit-down on the S.S. American Trader in New York. This "sitting down" on the job was a new type of action that was to become common during the labor unrest of the 1930s.
ISU officials called on the men to live up to their agreements and sail the ships and threatened to expel those who didn't, but these threats had little effect.
In November of 1936, ISU men in Boston struck in support of the West Coast and issued a daily mimeographed strike bulletin in which they denounced both union officials and shipowners.
Unhappy about the reluctance of their leaders to call out "all hands" in support of the West Coast, a group of dissidents set up a Seamen's Defense Committee in October of 1936. Joe Curran, a 34-year-old newcomer to the maritime labor scene and a spokesman for strikers on the liner California, became chairman of the strike strategy committee, the beginning of his rapid rise to power. Curran was described by The New York Times as a "young and militant disciple of Harry Bridges" and as a "key man in the rank-and-file of seamen here."
The Seamen's Journal, official publication of the ISU, pointed out the inconsistency of Curran's sudden disenchantment with ISU leadership, saying he had only been a member of the union for one year during his seafaring career. But Curran was aggressive, articulate and ambitious, and the times suited him well.
And it was evident, judging by those who surrounded and supported him, that Curran was willing to front for the strong cadre of left-wingers in the new union. He later repudiated these associates and helped reduce their influence in the NMU.
In November, Curran headed a so-called Insurgent Seamen's Committee, which negotiated contracts with two small steamship lines, Prudential and Transoceanic, this being made possible by support from the Marine Engineers Beneficial Association, the American Radio Telegraphers Association, and the Masters, Mates and Pilots, which were striking these companies at the time.
In May of 1937, a large group of the ISU rebels led by Curran and Jack Laurenson broke away from the old union entirely and formed a new organization called the National Maritime Union, claiming 27,000 members. They filed a petition with the National Labor Relations Board to hold an election and determine which group should be bargaining agent for the more than 70 ISU lines operating out of the East Coast and the Gulf. The voting started in June of 1937. The NMU was victorious on most of the ships, although the crews on some lines, notably the Eastern Steamship Company, remained faithful to the old union. But with the new organization dominating the elections, it was evident that drastic action had to be taken to maintain the AF of L's role in maritime labor.
And so in August of 1937, the AFL took over the remnants of the ISU in order to rebuild a seamen's union within the Federation.
William Green, president of the AFL, requested the resignation of ISU officials, and the Federation's executive council placed the union's affairs in the hands of an executive committee which included Green, ILA President Joe Ryan and AFL organizer Holt Ross.
At Green's request, Harry Lundeberg, head of the SUP, sent a top assistant, Morris Weisberger, to New York to set up a nucleus for this rebuilding, straighten out the union's financial situation, and organize a new dues structure for the Atlantic and Gulf divisions. A Seamen's Reorganization Committee was established for this purpose in December of 1937, with Lundeberg naming Robert Chapdelaine as temporary head of the new union. During this time, it operated under a federal charter.
Once it was stabilized and in firm hands, the executive council of the AFL issued a charter. This was done at the Houston convention on October 15, 1938, the charter being handed to Lundeberg by President Green. By then about 7,000 members had been organized on the East Coast and the Gulf, and Green was predicting that there would soon be 30,000 on all coasts. The new AFL seamen's union, the Seafarers International Union, was now under way and going "full speed ahead."
The ink was scarcely dry on its charter before the new Seafarers International Union began winning benefits for its members and proving its intention to play an aggressive role in maritime labor.
In 1939, SIU crews began a drive for more adequate bonuses on ships sailing into war zones. The union also signed improved contracts with the Savannah Line and other operators.
An 11-day strike against the big Eastern S.S. Co., operator of passenger ships and freighters, resulted in a contract for better wages and working conditions. A strike began against the Peninsular and Occidental Line (P&O), which operated car-ferries and passenger ships between Florida and Cuba. This strike lasted 14 months and was finally successful for the SIU, although the company later put its ships under foreign flags. The P&O beef showed that the new union could "stand together" when the going got rough. The SIU was most effective for its members in the war bonus beefs that began in 1939. These bonuses were for extra "hazardous duty" pay for men sailing ships to South and East Africa and the Red Sea.
The September 18, 1939 issue of the Seafarers LOG carried this headline: "SIU Strikes Ships for Bonus."
Crews walked off the Eastern Steamship liners Acadia and St. John and the Robin Line freighter Robin Adair. The St. John and Acadia had been chartered for returning American citizens from Europe and for carrying American construction workers to air base projects in Bermuda.
These actions resulted in the shipowners agreeing to a 25 percent bonus for voyages to certain Atlantic and Middle East war zones.
In September of 1940, the LOG carried a headline of vital interest to seamen: "SIU Gets Increase to 33-1/3 Percent in Bonus for African Run." There probably would have been no increase if it had not been for militant action taken by SIU crewmen on the Robin Line's S.S. Algic in July of 1940, when they walked off the ship, demanding a bonus of $1 a day from the time the ship left port in the United States until her return home.
The Algic action came after an announcement by the German Navy that it had planted mines in African waters.
As the war spread and both submarine and air attacks were intensified, the SIU pressed for a still more adequate war bonus for seamen endangering their lives in war areas.
SIU men again hit the bricks in July of 1941, tying up the Flomar, Shickshinny and Robin Locksley in New York to show they meant business. The ships were later released and allowed to sail when operators and the government agreed to sit down and negotiate.
The urgent need for action on bonuses was emphasized with the torpedoing of the SIU-manned Robin Moor about 700 miles south of the Azores in May of 1941. She was the first American flag merchant ship sunk in World War II. American ships and seamen were now on the front lines of the war, and there they served through VJ Day in 1945 to the official end of hostilities more than a year later.
When there was no progress in talks with operators and the government, the SIU initiated all-out action in September of 1941, starting with ships in New York that were loaded with cargo for new bases in the Caribbean. The tie-up soon extended to vessels in Boston, New Orleans, Mobile and Tacoma. Within a few days, more than 20 ships were tied up.
The U.S. Maritime Commission struck back by seizing three Alcoa ships and placing government-recruited crews onboard and threatening to requisition all privately-operated merchant vessels.
President Roosevelt told the union that the ships "must move or else." The SIU was up against the federal government, so on September 25, seamen met at 14 SIU ports and voted to release the ships pending negotiations to end the dispute.
Hearings began in Washington which ended in a victory for the seamen, for on October 5, the newly created National Defense Mediation Board (NDMB) recommended increased bonuses and set up a procedure for avoiding future disputes. It also recommended creation of a three-man War Emergency Maritime Board for maritime mediation, which was approved by the president. This board handled bonus matters for the duration of the war.
The NDMB granted an immediate increase in war bonuses for unlicensed personnel from $60 a month to $80 a month and an increase in special bonuses for the port of Suez and other Red Sea and Persian Gulf ports. Needless to say, the West Coast unions and the National Maritime Union were powerful allies with the SIU in its bonus battles, with the NMU respecting SIU picket lines, even though it did walk out of an important union-industry Washington bonus conference in 1941.
If it had not been for strong and militant action by the union before United States entry into the war, American merchant seamen would probably have been sailing dangerous cargoes through hazardous seas for regular pay. In its war bonus fight, the SIU proved that it could pinpoint an issue, "move the troops" and use the power of well organized action to win just compensation for its members.
Members of the Seafarers International Union were on the front lines of battle in World War II. They carried guns, planes, gas and "ammo" to a dozen beachheads and to supply ports and island bases all over the world from the Aleutians to Algiers.
Even before the United States had officially entered the war against Germany, Italy and Japan, SIU sailors knew what it was to be torpedoed and put adrift in open boats hundreds of miles from the nearest land. On May 21 of 1941, long before Pearl Harbor, a submarine stopped the unarmed Robin Moor of the Robin Line en route from New York to South Africa. Capt. William Myers was given 20 minutes to abandon ship, after which the U-boat's gunners put 33 shells into the freighter and sank her. After the sub disappeared, the 45 survivors struck out for land in four boats. Fortunately, all four were picked up but not until the fourth boat had traversed 700 miles of open ocean.
When the first survivors were landed and news of the sinking stirred the nation, President Roosevelt sent a special message to Congress in which he said that American ships would not be intimidated. "We are not yielding," he said, "and we do not propose to yield."
When German U-boats brought the war to the very coasts of the United States early in 1942, SIU seamen were among the first to feel the brunt of it. The SIU-manned City of Atlanta was northbound off Hatteras on January 19, 1942, when it was torpedoed by a German submarine, with the ship going down so fast that there was no time to launch the boats. Only three men survived; 39 were lost.
Less than a week after this, the SIU-manned S.S. Venore, an ore carrier, was torpedoed off Cape Hatteras with the loss of 18 men. Following quickly in the wake of this sinking were a long list of SIU ships, all of them unarmed and unescorted. There were the Robin Hood, the Alcoa Guide, Pipestone County, the Major Wheeler, the Mary and many more as U-boats enjoyed a field day along the Atlantic Coast, in the Gulf of Mexico and in the Caribbean.
Two boats from the Pipestone County were adrift for 16 days before being picked up. The Major Wheeler completely disappeared. The Robert E. Lee, a passenger ship, was sunk almost inside the Mississippi Delta.
Despite this havoc, no SIU ship was held up for lack of a crew. Many crews steamed out to meet almost certain death. The Alcoa Pilgrim, loaded deep with 9,500 tons of bauxite for Mobile, caught a "tin fish" and went down in 90 seconds with heavy loss of life.
SIU men made the hazardous run to Russia, including the famous convoys of July and September, 1942, which were hit by subs and bombers and lost many ships in those cold, Arctic waters.
SIU crews made all the hazardous war runs--all the bloody beachheads. Unsung "heroes," in a way, were the crews who spent months on tedious trips to supply bases behind the tides of battle.
There wasn't a beachhead, from Anzio to the Philippines, from Normandy to Okinawa, where SIU crews were not in the forefront of the war. They took part in the longest battle of the war, too--the four-year-long Battle of the Atlantic--the fight to keep England supplied with food, gas, guns and other war supplies.
They had to run the U-boat gauntlet to get the goods through, and many ships went down trying to do it.
Thousands of SIU seamen took part in the greatest assault and resupply in the history of war--the invasion of the French coast in June of 1944. They had an important role in landing the 2,500,000 troops, the 17 million tons of ammunition and supplies and the half-million trucks and tanks that were put ashore there in the first 109 days after D-Day.
There were myriad tales of heroism as SIU ships steamed their embattled way across sub-infested seas.
Take the case of the S.S. Angelina of the Bull Line. This SIU freighter was westbound in October of 1942 across the North Atlantic when it became separated from the rest of its convoy in a violent storm in which waves were 30 feet high and more. Just before midnight on the 17th, a torpedo exploded in the engine room, killing the black gang and flooding the engine spaces.
Only one boat could be launched and, being overloaded with crewmen and Navy armed guard gunners, it was soon capsized in tremendous seas. Some managed to hold on to the grab rails on the bottom of the boat, but one by one they were swept away by the numbing cold and the battering waves, until only a few remained.
These would have died, too, were it not for the heroic efforts of the ship's carpenter, Gustave Alm. It was Alm who urged the weary, desperate men to "hang on . . . hang on." When one of them would drop away from exhaustion, he would bring him back and help to hold him on until he revived. When someone said, "I've had enough" and wanted to die, Alm would slap him on the face and yell, "Keep on . . . keep on."
When the British rescue vessel SS Bury finally found them many hours later, Alm continued to assist his shipmates while crew members from the Bury climbed down scrambling nets to bring them to safety. Alm was the last to be saved, and later was singled out for praise by the Bury's captain in a report. (The rescue is covered in a 1968 book titled "The Rescue Ships," by Brian Schofield and L.F. Martyn.)
Like many other SIU men in World War II, carpenter Gustave Alm was one of the merchant marine's true "heroes in dungarees".
Code division multiple access (CDMA) is a channel access method utilized by various radio communication technologies. It should not be confused with the mobile phone standards called cdmaOne and CDMA2000 (which are often referred to as simply "CDMA"), which use CDMA as their underlying channel access method.
One of the basic concepts in data communication is the idea of allowing several transmitters to send information simultaneously over a single communication channel. This allows several users to share a bandwidth of frequencies. This concept is called multiplexing. CDMA employs spread-spectrum technology and a special coding scheme (where each transmitter is assigned a code) to allow multiple users to be multiplexed over the same physical channel. By contrast, time division multiple access (TDMA) divides access by time, while frequency-division multiple access (FDMA) divides it by frequency. CDMA is a form of "spread-spectrum" signaling, since the modulated coded signal has a much higher data bandwidth than the data being communicated.
An analogy to the problem of multiple access is a room (channel) in which people wish to communicate with each other. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different directions (spatial division). In CDMA, they would speak different languages. People speaking the same language can understand each other, but not other people. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can understand each other.
Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is achieved by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.
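To make these correlation properties concrete, here is a minimal sketch in Python (an illustration only, not code from any CDMA standard). It generates a length-7 maximal-length pseudorandom sequence with a small shift register and computes its cyclic autocorrelation, which is large only at zero offset; that sharp peak is what allows delayed multi-path copies to be rejected.

```python
# Autocorrelation of a short pseudorandom spreading sequence (illustration).
def m_sequence(length=7):
    """Length-7 maximal-length sequence from a 3-bit Fibonacci shift register
    (feedback is the XOR of the last two stages)."""
    state = [1, 1, 1]
    bits = []
    for _ in range(length):
        bits.append(state[2])                   # output the last stage
        feedback = state[2] ^ state[1]
        state = [feedback, state[0], state[1]]  # shift and insert feedback
    return [1 if b else -1 for b in bits]       # map {0,1} to {-1,+1} chips

def cyclic_correlation(a, b, offset):
    n = len(a)
    return sum(a[i] * b[(i + offset) % n] for i in range(n))

code = m_sequence()
for k in range(len(code)):
    print(k, cyclic_correlation(code, code, k))
# Prints 7 at offset 0 and -1 at every other offset.
```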
In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes).
Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, binary string "1011" is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, which sums the products of their respective components. If the dot product is zero, the two vectors are said to be orthogonal to each other. (Note: If u=(a,b) and v=(c,d), the dot product u.v = a*c + b*d.) Some properties of the dot product help to understand how W-CDMA works. If vectors a and b are orthogonal, then a.b = 0 and:
a.(a + b) = a.a + a.b = |a|^2
a.(-a + b) = -a.a + a.b = -|a|^2
In other words, correlating a received sum with a user's own code recovers that user's contribution (with its sign, and hence the data bit), while the contribution of any orthogonal code drops out.
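A minimal sketch of these dot-product facts, in Python, using the same two codes that appear in the worked example below (purely illustrative):

```python
# Dot-product properties behind synchronous CDMA (illustrative sketch).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

a = (1, -1)   # code of one user
b = (1, 1)    # code of a second user, orthogonal to a

print(dot(a, b))   # 0: the codes are orthogonal
print(dot(a, a))   # 2: |a|^2, the energy of the code

# Correlating a with a sum of both codes recovers a's contribution only:
print(dot(a, (a[0] + b[0], a[1] + b[1])))     # a.(a+b)  =  |a|^2 =  2
print(dot(a, (-a[0] + b[0], -a[1] + b[1])))   # a.(-a+b) = -|a|^2 = -2
```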
Each user in synchronous CDMA uses an orthogonal code to modulate their signal. An example of four mutually orthogonal digital signals is shown in the figure. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal and separate different users. Since each of the 64 Walsh codes is orthogonal to all the others, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded.
Each user is associated with a different code, say v. If the data to be transmitted is a digital zero, then the actual bits transmitted will be –v, and if the data to be transmitted is a digital one, then the actual bits transmitted will be v. For example, if v=(1,–1), and the data that the user wishes to transmit is (1, 0, 1, 1) this would correspond to (v, –v, v, v) which is then constructed in binary as ((1,–1),(–1,1),(1,–1),(1,–1)). For the purposes of this article, we call this constructed vector the transmitted vector.
Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical.
Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they "subtract" and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component.
If sender0 has code (1,–1) and data (1,0,1,1), and sender1 has code (1,1) and data (0,0,1,1), and both senders transmit simultaneously, then this table describes the coding steps:
| Step | Encode sender0 | Encode sender1 |
| 0 | vector0 = (1,–1), data0 = (1,0,1,1) = (1,–1,1,1) | vector1 = (1,1), data1 = (0,0,1,1) = (–1,–1,1,1) |
| 1 | signal0 = data0 spread with vector0 = (1,–1,–1,1,1,–1,1,–1) | signal1 = data1 spread with vector1 = (–1,–1,–1,–1,1,1,1,1) |
Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal:
(1,–1,–1,1,1,–1,1,–1) + (–1,–1,–1,–1,1,1,1,1) = (0,–2,–2,0,2,0,2,0)
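The following Python sketch reproduces the encoding step above; the helper `spread` and the bit-to-±1 mapping are assumptions made for illustration, not a description of any specific CDMA implementation.

    def spread(code, bits):
        """Map bits to +1/-1 (1 -> +1, 0 -> -1) and spread each symbol by the code."""
        symbols = [1 if b == 1 else -1 for b in bits]
        return [s * c for s in symbols for c in code]

    code0, data0 = [1, -1], [1, 0, 1, 1]
    code1, data1 = [1, 1],  [0, 0, 1, 1]

    signal0 = spread(code0, data0)   # [1, -1, -1, 1, 1, -1, 1, -1]
    signal1 = spread(code1, data1)   # [-1, -1, -1, -1, 1, 1, 1, 1]

    # The two signals add in the air, giving the interference pattern quoted above:
    pattern = [x + y for x, y in zip(signal0, signal1)]
    print(pattern)                   # [0, -2, -2, 0, 2, 0, 2, 0]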
This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by correlating the interference pattern with that sender's code. The following table explains how this works and shows that the signals do not interfere with one another:
| Step | Decode sender0 | Decode sender1 |
| 0 | vector0 = (1,–1), pattern = (0,–2,–2,0,2,0,2,0) | vector1 = (1,1), pattern = (0,–2,–2,0,2,0,2,0) |
| 1 | decode0 = pattern correlated with vector0 = (2,–2,2,2), read as (1,0,1,1) | decode1 = pattern correlated with vector1 = (–2,–2,2,2), read as (0,0,1,1) |
Further, after decoding, all values greater than 0 are interpreted as 1 while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2,–2,2,2), but the receiver interprets this as (1,0,1,1).
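Continuing the sketch, decoding is just a per-symbol correlation with the desired sender's code followed by thresholding; `despread` is again an illustrative helper, not part of any standard API.

    def despread(code, pattern):
        """Correlate the pattern with the code, one code-length chunk per data bit."""
        n = len(code)
        chunks = [pattern[i:i + n] for i in range(0, len(pattern), n)]
        corr = [sum(c * p for c, p in zip(code, chunk)) for chunk in chunks]
        return corr, [1 if x > 0 else 0 for x in corr]

    pattern = [0, -2, -2, 0, 2, 0, 2, 0]
    print(despread([1, -1], pattern))   # ([2, -2, 2, 2], [1, 0, 1, 1])  -> sender0's data
    print(despread([1, 1], pattern))    # ([-2, -2, 2, 2], [0, 0, 1, 1]) -> sender1's data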
We can also consider what would happen if a receiver tries to decode a signal when the user has not sent any information. Assume signal0=(1,-1,-1,1,1,-1,1,-1) is transmitted alone. The following table shows the decode at the receiver:
| Step | Decode sender0 | Decode sender1 |
| 0 | vector0 = (1,–1), pattern = (1,–1,–1,1,1,–1,1,–1) | vector1 = (1,1), pattern = (1,–1,–1,1,1,–1,1,–1) |
| 1 | decode0 = (2,–2,2,2), read as (1,0,1,1) | decode1 = (0,0,0,0) — no data |
When the receiver attempts to decode the signal using sender1's code, the result is all zeros; the cross-correlation is zero, making it clear that sender1 did not transmit any data.
The previous example of orthogonal Walsh sequences describes how 2 users can be multiplexed together in a synchronous system, a technique that is commonly referred to as Code Division Multiplexing (CDM). The set of 4 Walsh sequences shown in the figure will afford up to 4 users, and in general, an NxN Walsh matrix can be used to multiplex N users. Multiplexing requires all of the users to be coordinated so that each transmits their assigned sequence v (or the complement, -v) starting at exactly the same time. Thus, this technique finds use in base-to-mobile links, where all of the transmissions originate from the same transmitter and can be perfectly coordinated.
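A sketch of how such a code set can be generated: the Sylvester construction below builds an N×N Hadamard (Walsh) matrix for N a power of two, whose rows are mutually orthogonal. This shows the construction only — it makes no claim about how IS-95 actually assigns or orders its codes.

    def walsh(n):
        """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of two)."""
        w = [[1]]
        while len(w) < n:
            w = [row + row for row in w] + [row + [-x for x in row] for row in w]
        return w

    w4 = walsh(4)
    for row in w4:
        print(row)
    # [1, 1, 1, 1]
    # [1, -1, 1, -1]
    # [1, 1, -1, -1]
    # [1, -1, -1, 1]

    # Any two distinct rows are orthogonal:
    print(sum(a * b for a, b in zip(w4[1], w4[2])))   # 0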
On the other hand, the mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, and require a somewhat different approach. Since it is not mathematically possible to create signature sequences that are orthogonal for arbitrarily random starting points, unique "pseudo-random" or "pseudo-noise" (PN) sequences are used in Asynchronous CDMA systems. A PN code is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These PN codes are used to encode and decode a user's signal in Asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These PN sequences are statistically uncorrelated, and the sum of a large number of PN sequences results in Multiple Access Interference (MAI) that is approximated by a Gaussian noise process (following the "central limit theorem" in statistics). If all of the users are received with the same power level, then the variance (i.e., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to the number of users.
All forms of CDMA use spread spectrum process gain to allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified PN sequence (code) are received, while signals with different codes (or the same code but a different timing offset) appear as wideband noise reduced by the process gain.
Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (Synchronous CDMA), TDMA or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for Asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any Asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power control scheme to tightly control each mobile's transmit power. See Near-far problem for further information on this problem.
Most importantly, Asynchronous CDMA offers a key advantage in the flexible allocation of resources. There are a fixed number of orthogonal codes, timeslots or frequency bands that can be allocated for CDM, TDMA and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an Asynchronous CDMA system, only a practical limit governed by the desired bit error probability, since the SIR (Signal to Interference Ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by Asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2N users that only talk half of the time, then 2N users can be accommodated with the same average bit error probability as N users that talk all of the time. The key difference here is that the bit error probability for N users talking all of the time is constant, whereas it is a random quantity (with the same mean) for 2N users talking half of the time.
In other words, Asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (Synchronous CDMA), TDMA and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are N time slots in a TDMA system and 2N users that talk half of the time, then half of the time there will be more than N users needing to use more than N timeslots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal code, time-slot or frequency channel resources. By comparison, Asynchronous CDMA transmitters simply send when they have something to say, and go off the air when they don't, keeping the same PN signature sequence as long as they are connected to the system.
CDMA can also effectively reject narrowband interference. Since narrowband interference affects only a small portion of the spread spectrum signal, it can easily be removed through notch filtering without much loss of information. Convolution encoding and interleaving can be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread spectrum signal occupies a large bandwidth only a small portion of this will undergo fading due to multipath at any given time. Like the narrowband interference this will result in only a small loss of data and can be overcome.
Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudorandom codes will have poor correlation with the original pseudorandom code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudorandom codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored.
Some CDMA devices use a rake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlator tuned to the path delay of the strongest signal.
Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems frequency planning is an important consideration. The frequencies used in different cells need to be planned carefully in order to ensure that the signals from different cells do not interfere with each other. In a CDMA system the same frequency can be used in every cell because channelization is done using the pseudorandom codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudorandom sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell.
Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft handoffs. Soft handoffs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the handoff is complete. This is different than hard handoffs utilized in other cellular systems. In a hard handoff situation, as the mobile telephone approaches a handoff, signal strength may vary abruptly. In contrast, CDMA systems use the soft handoff, which is undetectable and provides a more reliable and higher quality signal.
| http://www.reference.com/browse/Code+Division+Multiple+Access | 13
95 | Protein Crystallography Course
We've seen that, when waves are diffracted from a crystal, they give rise to diffraction spots. Each diffraction spot corresponds to a point in the reciprocal lattice and represents a wave with an amplitude and a relative phase. But really what happens is that photons are reflected from the crystal in different directions with a probability proportional to the square of the amplitude of this wave. We count the photons, and we lose any information about the relative phases of the different diffraction spots.
The figure below shows again how the phase and amplitude of the overall scattered wave arise from the individual scattered waves. Two Bragg planes are shown, together with four atoms. The relative phase (from 0 to 360 degrees) of each atom's contribution depends on its distance from the Bragg planes, which define a phase angle of zero. The atoms and their contributions to the scattering (represented as vectors) are shown in matching colours. The overall scattered wave is represented by a black vector, which is the sum of the other vectors.
The vector (amplitude and phase or, more properly, the complex number) representing the overall scattering from a particular set of Bragg planes is termed the structure factor, and it is usually denoted F. (The use of bold font indicates that it is a vector or complex number.)
It turns out (for reasons beyond the present discussion) that the structure factors for the various points on the reciprocal lattice correspond to the Fourier transform of the electron density distribution within the unit cell of the crystal. A very convenient property of the Fourier transform is that it is reversible; if you apply an inverse Fourier transform to the structure factors, you get back the electron density. You might want to look at Kevin Cowtan's Interactive Structure Factor Tutorial again to remind yourself how this works.
So we measure a diffraction pattern, take the square roots of the intensities, and we're stuck: if we knew the phases we could simply compute a picture of the molecule, but we've lost the information in the experiment! This is the phase problem, and a large part of crystallography is devoted to solving it.
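A toy one-dimensional illustration of this (using NumPy; the 64-point grid and the three "atoms" are arbitrary choices, and nothing here is a real crystallographic calculation): the Fourier transform of a density gives complex "structure factors", the inverse transform of amplitudes plus phases recovers the density exactly, and the inverse transform of the amplitudes alone does not.

    import numpy as np

    rho = np.zeros(64)
    rho[10], rho[25], rho[40] = 6.0, 8.0, 16.0     # three "atoms" of different sizes

    F = np.fft.fft(rho)                            # complex structure factors
    amps, phases = np.abs(F), np.angle(F)

    rho_back = np.fft.ifft(amps * np.exp(1j * phases)).real
    print(np.allclose(rho_back, rho))              # True: amplitudes + phases recover the density

    rho_no_phase = np.fft.ifft(amps).real          # amplitudes only, phases thrown away
    print(np.allclose(rho_no_phase, rho))          # False: the density is lost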
In the beginning, crystallographers worked on the structures of simple molecules and they could often make a good guess of the conformation of a molecule and even how it might pack in the crystal lattice. The guesses could be tested by calculating a diffraction pattern and comparing it to the observed one. If a guess places the atoms in about the right place, then the calculated phases will be approximately correct and a useful electron density map can be computed by combining the observed amplitudes with the calculated phases. If the model is reasonably accurate, such a map will show features missing from the model so that the model can be improved. You can remind yourself how this works by looking at Kevin Cowtan's cats.
For proteins, we can only guess what the structure will look like if we've already seen a closely-related protein structure before. And then we still have to work out how it is oriented and where it is located in the unit cell. The technique that uses prior structural information, called molecular replacement, is discussed below, after the Patterson function, which provides a way to understand it.
Remember that, if we carry out an inverse Fourier transform of the structure factors (amplitudes and phases), we get a picture of the electron density. Patterson asked the question of what we would get if we took a Fourier transform of the intensities (amplitudes squared) instead, which would only require the measured data. It turns out that the resulting map, which is now called a Patterson function or Patterson map, has some very interesting and useful features.
We won't go into the math here, but it turns out that the Patterson function gives us a map of the vectors between atoms. In other words, if there is a peak of electron density for atom 1 at position x1 and a peak of electron density for atom 2 at position x2, then the Patterson map will have peaks at positions given by x2-x1 and x1-x2. The height of the peak in the Patterson map is proportional to the product of the heights of the two peaks in the electron density map. The figure below illustrates a Patterson map corresponding to a cell with one molecule. It demonstrates that you can think of a Patterson as being a sum of images of the molecule, with each atom placed in turn on the origin. Because for each vector there is one in the opposite direction, the same Patterson map is also a sum of inverted images of the molecule, as shown in the bottom representation.
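Continuing the toy 1D example above, the sketch below computes a "Patterson map" as the inverse Fourier transform of the squared amplitudes. With atoms at positions 10, 25 and 40, the largest peaks appear at the origin and at the interatomic vectors ±15 and ±30 (negative vectors wrap around on the 64-point grid).

    import numpy as np

    rho = np.zeros(64)
    rho[10], rho[25], rho[40] = 6.0, 8.0, 16.0

    F = np.fft.fft(rho)
    patterson = np.fft.ifft(np.abs(F) ** 2).real   # uses intensities only, no phases

    peaks = np.sort(np.argsort(patterson)[-5:])    # positions of the five largest values
    print(peaks)                                   # [ 0 15 30 34 49] -> vectors 0, ±15, ±30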
For relatively small numbers of atoms, it is possible to work out the original positions of the atoms that would give rise to the observed Patterson peaks. This is called deconvoluting the Patterson. But it quickly becomes impossible to deconvolute a Patterson for larger molecules. If we have N atoms in a unit cell and the resolution of the data is high enough, there will be N separate electron density peaks in an electron density map. In a Patterson map, each of these N atoms has a vector to all N atoms, so that there are N^2 vectors. N of these will be self-vectors from an atom to itself, which will accumulate as a big origin peak, but that still leaves N^2 − N non-origin peaks to sort out. If N is a small number, say 10, then we will have a larger but feasible number of non-origin Patterson peaks to deal with (90 for N = 10). But if N were 1000, which would be more in the range seen for protein crystals, then there would be 999,000 non-origin Patterson peaks. And even at high resolution the protein atoms are barely resolved, so there's no chance that the Patterson peaks will be resolved from each other!
Nonetheless, the Patterson function becomes useful as part of other methods to solve structures, as we will soon see.
Molecular replacement can be used when you have a good model for a reasonably large fraction of the structure in the crystal. The level of resemblance of two protein structures correlates well with the level of sequence identity, which means that you can get a good idea of whether or not molecular replacement will succeed before even trying it. As a rule of thumb, molecular replacement will probably be fairly straightforward if the model is fairly complete and shares at least 40% sequence identity with the unknown structure. It becomes progressively more difficult as the model becomes less complete or shares less sequence identity.
To carry out molecular replacement, you need to place the model structure in the correct orientation and position in the unknown unit cell. To orient a molecule you need to specify three rotation angles and to place it in the unit cell you need to specify three translational parameters. So if there is one molecule in the asymmetric unit of the crystal, the molecular replacement problem is a 6-dimensional problem. It turns out that it is usually possible to separate this into two 3D problems. A rotation function can be computed to find the three rotation angles, and then the oriented model can be placed in the cell with a 3D translation function.
An understanding of the rotation and translation functions can be obtained most easily by considering the Patterson function. Even though the vectors are unresolved for a structure the size of a protein, the way that they accumulate can provide a signature for a protein structure. The vectors in the Patterson map can be divided into two categories. Intramolecular vectors (from one atom in the molecule to another atom in the same molecule) depend only on the orientation of the molecule, and not on its position in the cell, so these can be exploited in the rotation function. Intermolecular vectors depend both on the orientation of the molecule and on its position so, once the orientation is known, these can be exploited in the translation function.
(Figure: colour-coded Patterson maps showing the intramolecular vectors before and after rotation.)
On average, the intramolecular vectors will be shorter than the intermolecular vectors, so the rotation function can be computed using only the part of the Patterson map near the origin.
We won't go into detail here, but it turns out that if you assume that a crystal is made up of similarly-shaped atoms that all have positive electron density, then there are statistical relationships between sets of structure factors. These statistical relationships can be used to deduce possible values for the phases. Direct methods exploit such relationships, and can be used to solve small molecule structures. Unfortunately, the statistical relationships become weaker as the number of atoms increases, and direct methods are limited to structures with, at most, a few hundred atoms in the unit cell. Although there are developments that push these limits, particularly for crystals that diffract to very high resolution (1.2Å or better), direct methods are not generally applicable to the vast majority of crystal structures. However, they do become useful in the context of experimental phasing methods, such as isomorphous replacement and anomalous dispersion, as discussed below.
In isomorphous replacement, the idea is to make a change to the crystal that will perturb the structure factors and, by the way that they are perturbed, to make some deductions about possible phase values. It is necessary to be able to explain the change to the crystal with only a few parameters, which means that we have to use heavy atoms (heavy in the sense that they have a large atomic number, i.e. many electrons). The figure below illustrates the effect of adding a heavy atom to the structure considered above.
The introduction of a heavy atom will change the scattered intensity significantly. One reason for this is that "heavy" atoms contribute disproportionately to the overall intensity. As you can see from the figure, the contributions from the lighter atoms will tend to cancel out, because they will scatter with different phase angles. On the other hand, all of the electrons in a heavy atom will scatter essentially in phase with one another. Because of this effect, different atoms contribute to the scattered intensity in proportion to the square of the number of electrons they contain. For example, a uranium atom contains 15 times as many electrons as a carbon atom, so its contribution to the intensity will be equivalent to that of 225 carbon atoms. As a result, the change in intensity from the addition of 1 uranium atom to a protein of 20kDa is easily measured.
If we have two crystals, one containing just the protein (native crystal) and one containing in addition bound heavy atoms (derivative crystal), we can measure diffraction data from both. The differences in scattered intensities will largely reflect the scattering contribution of the heavy atoms, and these differences can be used (for instance) to compute a Patterson map. Because there are only a few heavy atoms, such a Patterson map will be relatively simple and easy to deconvolute. (Alternatively, direct methods can be applied to the intensity differences.) Once we know where the heavy atoms are located in the crystal, we can compute their contribution to the structure factors.
This allows us to make some deductions about possible values for the protein phase angles, as follows. First, note that we have been assuming that the scattering from the protein atoms is unchanged by the addition of heavy atoms. This is what the term "isomorphous" (= "same shape") refers to. ("Replacement" comes from the idea that heavy atoms might be replacing light salt ions or solvent molecules.) If the heavy atom doesn't change the rest of the structure, then the structure factor for the derivative crystal (FPH) is equal to the sum of the protein structure factor (FP) and the heavy atom structure factor (FH), or
FPH = FP + FH
If we remember that the structure factors can be thought of as vectors, then this equation defines a triangle. We know the length and orientation of one side (FH), and the lengths of the other two sides. As shown in the figure below, there are two ways to construct such a triangle, which means that there are two possible phases for FP.
There is another way, called the Harker construction, to show the two possible phases. This ends up being more useful because it generalises nicely when there is more than one derivative. First we draw a circle with a radius equal to the amplitude of FP (denoted |FP|), centered at the origin (blue in the figure below). The circle indicates all the vectors that would be obtained with all the possible phase angles for FP. Next we draw a circle with radius |FPH| centered at the point defined by the vector −FH (magenta in the figure below). All of the points on the magenta circle are possible values for FP (magnitude and phase) that satisfy the equation FPH = FH + FP while agreeing with the measured amplitude |FPH|. There are two possible values for FP that agree with both measured amplitudes and with the heavy atom model.
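The sketch below works through this geometrically for made-up numbers: given the two measured amplitudes and an assumed heavy-atom contribution FH, the law of cosines applied to the triangle FPH = FP + FH yields exactly two candidate phases for FP. All the specific values are hypothetical.

    import cmath, math

    FH = cmath.rect(30.0, math.radians(40.0))          # assumed heavy-atom structure factor
    FP_true = cmath.rect(100.0, math.radians(75.0))    # "true" protein term (unknown in a real experiment)
    FPH = FP_true + FH

    amp_FP, amp_FPH, amp_FH = abs(FP_true), abs(FPH), abs(FH)
    phi_H = cmath.phase(FH)

    # Law of cosines on the vector triangle FPH = FP + FH:
    cos_d = (amp_FPH ** 2 - amp_FP ** 2 - amp_FH ** 2) / (2 * amp_FP * amp_FH)
    delta = math.acos(max(-1.0, min(1.0, cos_d)))
    candidates = [math.degrees(phi_H + delta), math.degrees(phi_H - delta)]
    print(candidates)   # approximately [75.0, 5.0]; one of the two is the true phase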
In principle, the twofold phase ambiguity can be removed by preparing a second derivative crystal with heavy atoms that bind at other sites. The information from the second derivative is illustrated in green below, showing that only one phase choice is consistent with all the observations. The need for multiple derivatives to obtain less ambiguous phase information is the reason for the term "multiple" in "multiple isomorphous replacement".
These figures have all been drawn assuming that there are no errors in the model for the heavy atoms in the derivative crystal, no error in measuring the amplitudes of the structure factors, and also assuming that the two crystals are perfectly isomorphous. The effect of these sources of uncertainty is to smear out the circles, so that the regions of overlap are much more diffuse and much more ambiguity remains.
Most electrons in the atoms that make up a crystal will interact identically with X-rays. If placed at the origin of the crystal, they will diffract with a relative phase of zero. Because of this, pairs of diffraction spots obey Friedel's law, which is illustrated below. On the left, the black arrows indicate a diffraction event from the top of the planes. The atoms contribute to the diffraction pattern with phases determined by their relative distances from the planes, as indicated by the colour-coded arrows on the right. The red arrows on the left indicate a very similar diffraction event, but from the bottom of the same planes. The angles of incidence and reflection are the same, and all that is different is which side of the planes we're looking at. If the black arrows define planes with Miller indices (h k l), the same planes are defined from the other side with Miller indices (-h -k -l). The reflection with indices (-h -k -l) is referred to as the Friedel mate of (h k l). Atoms will contribute with the same phase shift, but where the phase shifts were positive they will now be negative. This is illustrated on the right with the red arrows on the bottom, each of which has the opposite phase of the coloured arrows on the top. The effect of reversing the phases is to reflect the picture across the horizontal axis.
Remember the picture we had of the electric field of the electromagnetic wave inducing an oscillation in the electrons. You may have studied the behaviour of driven oscillators in physics. As long as the frequency of oscillation is very different from the natural frequency of oscillation, the electrons will all oscillate with the same phase. This is true of most electrons in a crystal. But if it is similar to the natural frequency of oscillation, then there will be a small shift in both the amplitude and phase of the induced oscillation. This is true for some inner shell electrons in some atoms, where the X-ray photon energy is close to a transition energy. (Such transitions are used, in fact, to generate X-rays with a characteristic wavelength. We often use a particular transition of electrons in copper.) The shift in amplitude and phase is called anomalous scattering.
The phase shift in anomalous scattering leads to a breakdown of Friedel's law, as illustrated in the figure below. Now we have added a heavy atom with an anomalous scattering component. It is convenient to represent the phase shift by adding a vector at 90 degrees to the normal scattering for the heavy atom. Significantly, this vector is at +90 degrees from the heavy atom's normal scattering contribution, regardless of which of the two Friedel mates we are looking at. And this causes the symmetry to break down.
The effect is easier to see (and to use) if we take the Friedel mate and reverse the sign of its phase, i.e. reflect it across the horizontal axis. (Thinking of the structure factor as a complex number, this means that we reverse the sign of the imaginary component, the result of which is called the complex conjugate, indicated with an asterisk.)
Now we can see that the effect of anomalous scattering has been to make the amplitudes of the Friedel mates different. You can see that, if we have a model for the anomalous scatterers in the crystal, we can draw vectors for their contribution to the structure factors for the Friedel mates and construct a Harker diagram, as in the case of MIR.
The anomalous scattering effect depends on the frequency of oscillation being similar to the natural frequency for the atom. So clearly the strength of the anomalous scattering effect depends on the wavelength of the X-rays, which will change both the normal scattering and the out-of-phase scattering of the anomalous scatterers. By collecting data at several wavelengths near the absorption edge of an element in the crystal, we can obtain phase information analogous to that obtained from MIR. This technique is called MAD, for multiple-wavelength anomalous dispersion. One popular way to use MAD is to introduce selenomethionine in place of methionine residues in a protein. The selenium atoms (which replace the sulfur atoms) have a strong anomalous signal at wavelengths that can be obtained from synchrotron X-ray sources.
Depending on the quality of the phasing experiment (quality of diffraction data, quality of protein model for molecular replacement or heavy atom model for isomorphous replacement or anomalous dispersion), there can be rather large errors in the phases and thus in the electron density maps. Over the last twenty years or so, a variety of techniques have been developed to improve the phases. These methods are mostly based on the idea that we know something about the characteristics of a good electron density map, and if we change the map to look more like a good one, phases computed from this map will be more accurate than the original phases.
The term "density modification" is used to describe a number of techniques in which the density map is modified to have the features we would expect from a good map.
Solvent flattening. It turns out that, in a typical protein crystal, about half of the volume is occupied by well-ordered protein molecules while the other half is occupied by disordered solvent. We know that the disordered solvent should have flat, featureless electron density, so if there are features in the solvent region they are probably the result of phase errors. Intuitively, if the density map is modified so that the solvent region is flattened, the corresponding phases will be more accurate. (As we will see later, another way of looking at this is that solvent flattening uses phase information from other structure factors to improve the phase information of a particular structure factor.) To carry out solvent flattening, the phases have to be at least good enough to see the boundaries between the disordered solvent and the ordered protein. Fortunately, there are algorithms to define the boundaries automatically; the first of these was proposed by B.C. Wang.
Averaging. Frequently proteins crystallise with more than one copy in the unique part (asymmetric unit) of the unit cell of the crystal. In other cases, proteins crystallise in different crystal forms. For the most part, the structure of a protein is fairly fixed and does not depend much on its environment. So we expect that when the same protein appears in different places in an electron density map (or in maps from different crystals), the density should be more or less the same in each copy. As for solvent flattening, if they differ it is probably because of errors in the phases. By averaging the density, we cancel out some of the random errors and thereby increase the accuracy of the corresponding phases.
Histogram matching. This one is slightly less obvious. Proteins are made up of the same atom types with the same sorts of relative distances and, as a result, the same kinds of density values are seen in electron density maps for different proteins. If a map in the protein region does not have the distribution of low and high densities that one expects, this is probably because of phase errors. By altering the distribution of density values to match what we expect (with an algorithm called histogram matching), the corresponding phases are again made more accurate.
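A very schematic 1D sketch of the solvent-flattening idea described above, assuming the solvent region of the cell is known. The grid size, the noise level and the half-and-half protein/solvent split are arbitrary illustrative choices; a real program would handle symmetry, masking and weighting far more carefully.

    import numpy as np

    rng = np.random.default_rng(0)
    true_rho = np.zeros(64)
    true_rho[:32] = rng.uniform(0.0, 1.0, 32)          # "protein" in the first half of the cell
    F_true = np.fft.fft(true_rho)

    amp = np.abs(F_true)                                # observed amplitudes
    phases = np.angle(F_true) + rng.normal(0, 0.8, 64)  # noisy starting phase estimates

    def solvent_flatten_cycle(amp, phases, solvent_mask):
        rho = np.fft.ifft(amp * np.exp(1j * phases)).real   # map from the current phases
        rho[solvent_mask] = rho[solvent_mask].mean()         # flatten the solvent region
        return np.angle(np.fft.fft(rho))                     # phases read back from the modified map

    solvent_mask = np.arange(64) >= 32
    new_phases = solvent_flatten_cycle(amp, phases, solvent_mask)
    # In practice several such cycles are run, recombining with the observed amplitudes each time.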
Model building itself can be thought of as another form of density modification. We know that protein structures are made up of atoms. If the density can be interpreted in terms of an atomic model (and the atoms are put more or less in the right place), the density distribution will be closer to the truth and the corresponding phases will yet again be more accurate. Because we know a lot about how the atoms are arranged relative to each other (bond lengths, bond angles, the chemical connectivity defined by the amino acid sequence), we can exploit a lot of information in building an atomic model. If it is possible to make a good start, then the approach can be applied iteratively: model building into a density map is followed by refinement to gain better agreement with the observed diffraction data, then the new improved phases can be used to compute a new, better density map. Optionally, other density modification techniques (such as averaging and solvent flattening) can be applied before a new cycle is started with the building of a new model.
© 1999-2009 Randy J Read, University of Cambridge. All rights reserved.
Last updated: 26 February, 2010 | http://www-structmed.cimr.cam.ac.uk/Course/Basic_phasing/Phasing.html | 13 |
54 | Multiplication is a wonderful little operation. Depending on the context, it can be used in many different ways.
And today we’ll see yet another use: listing combinations.
Revisiting multiplication has a few uses:
- It demystifies other parts of math. The binomial theorem, Boolean algebra (used in computer circuits) and even parts of calculus become easier with a new interpretation of “multiplication”.
- It keeps our brain fresh. Math gives us models to work with, and it’s good to see how one model can have many uses. Even a wrench can drive nails, once you understand the true nature of “being a hammer” (very Zen, eh?).
The long multiplication you learned in elementary school is quite useful: we can find the possibilities of several coin flips, for example. Let’s take a look.
You’ve Been Making Combinations All Along
How would you find 12 × 34? It’s ok, you can do it on paper:
“Well, let’s see… 4 times 12 is 48. 3 times 12 is 36… but it’s shifted over one place, so it’s 360. Add 48 and 360 and you get… uh… carry the 1… 408. Phew.”
Not bad. But instead of doing 12 × 34 all at once, break it into steps:
What’s happening? Well, 4 × 12 is actually “4 x (10 + 2)” or “40 + 8″, right? We can view that first step (blue) as two separate multiplications (4×10 and 4×2).
We’re so used to combining and carrying that we merge the steps, but they’re there. (For example, 4 × 17 = 4 x (10 + 7) = 40 + 28 = 68, but we usually don’t separate it like that.)
Similarly, the red step of “3 × 12″ is really “30 × 12″ — the 3 is in the tens column, after all. We get “30 x (10 + 2)” or “300 + 60″. Again, we can split the number into two parts.
What does this have to do with combinations? Hang in there, you’ll see soon enough.
Curses, Foiled Again
Take a closer look at what happened: 12 × 34 is really (10 + 2) x (30 + 4) = 300 + 40 + 60 + 8. This breakdown looks suspiciously like the algebra equation (a + b) * (c + d):
And yes, that’s what’s happening! In both cases we’re multiplying one “group” by another. We take each item in the red group (10 and 2) and combine it with every element of the blue group (30 and 4). We don’t mix red items with each other, and we don’t mix blue items with each other.
This combination technique is often called FOIL (first-outer-inner-last), and gives headaches to kids. But it’s not a magical operation! It’s just laying things out in a grid. FOIL is already built into the way we multiply!
When doing long multiplication, we “know” we’re not supposed to multiply across: you don’t do 1 × 2, because they’re in the same row. Similarly, you don’t do a x b, because they’re in the same parenthesis. We only multiply “up and down” — that is, we need an item from the top row (1 or 2, a or b) and an item from the bottom row (3 or 4, c or d).
Everyday multiplication (aka FOIL) gives us a way to crank out combinations of two groups: one from group A, another from group B. And sometimes it’s nice having all the possibilities as an equation.
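A quick sketch of this idea in code: pairing every term of one group with every term of the other reproduces the four partial products of 12 × 34.

    from itertools import product

    group_a = [10, 2]
    group_b = [30, 4]

    partial_products = [a * b for a, b in product(group_a, group_b)]
    print(partial_products)        # [300, 40, 60, 8]
    print(sum(partial_products))   # 408, the same as 12 * 34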
Examples Make It Click
Let’s try an example. Suppose we want to find every combination of flipping a coin twice. There’s a few ways to do it, like using a grid or decision tree:
That’s fine, but let’s be different. We can turn the question into an equation using the following rules:
- addition = OR. We can get heads OR tails: (h+t)
- multiplication = AND. We have a first toss AND a second toss: (h+t) * (h+t)
Wow! How does this work?
Well, we really just want to crank out combinations, just like doing (a+b) * (c+d) = ac + bc + ad + bd. Looking carefully, this format means we pick a OR b, and combine it with one of c OR d.
When we see an addition (a+b), we know it means we must choose one variable: this OR that. When we see a multiplication (group1 * group2), we know it means we take one item from each: this AND that.
The shortcuts “AND = multiply” & “OR = add” are simply another way to describe the relationship inside the equation. (Be careful; when we say three hundred and four, most people think 304, which is right too. This AND/OR trick works in the context of describing combinations).
So, when all’s said and done, we can turn the sentence “(heads OR tails) AND (heads OR tails)” into:
And just for kicks, we can multiply it out:
The result “h^2 + 2ht + t^2” shows us every possibility, just like the grids and decision trees. And the size (coefficient) of each combination shows the number of ways it can happen:
- h^2: There’s one way to get two heads (h^2 = hh = heads AND heads)
- 2ht: There’s two ways to get a head and tails (ht, th)
- t^2: There’s one way to get two tails (tt)
Neato. The sum of the coefficients is 1 + 2 + 1 = 4, the total number of possibilities. The chance of getting exactly one heads and one tails is 2/4 = 50%. We figured this out without a tree or grid — regular multiplication does the trick!
Grids? Trees? I Figured That Out In My Head.
Ok, hotshot, let’s expand the scope. How many ways can we get exactly 2 heads and 2 tails with 4 coin flips? What’s the chance of getting 3 or more heads?
Our sentence becomes: “(heads OR tails) AND (h OR t) AND (h OR t) AND (h OR t)”
Multiplying this out gives h^4 + 4h^3t + 6h^2t^2 + 4ht^3 + t^4 (it looks hard, but there are shortcuts), so there are 6 ways to get 2 heads and 2 tails. There’s 1 + 4 + 6 + 4 + 1 = 16 possibilities, so there’s only a 6/16 = 37.5% chance of having a “balanced” result after 4 coin flips. (It’s a bit surprising that it’s much more likely to be uneven than even.)
And how many ways can we get 3 or more heads? Well, that means any components with h^3 or h^4: 4 + 1 = 5. So we have a 5/16 = 31.25% chance of 3 or more heads.
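If you have the sympy library handy, you can let it do the expanding and check the counts above (the variables h and t are just symbols):

    from sympy import symbols, expand

    h, t = symbols('h t')
    print(expand((h + t) ** 2))   # h**2 + 2*h*t + t**2
    print(expand((h + t) ** 4))   # h**4 + 4*h**3*t + 6*h**2*t**2 + 4*h*t**3 + t**4

    # Coefficients 1, 4, 6, 4, 1 add up to 16 possibilities:
    # 6/16 = 37.5% for exactly two heads, (4 + 1)/16 = 31.25% for three or more heads.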
Sometimes equations are better than grids and trees — look at how much info we crammed into a single line! Formulas work great when you have a calculator or computer handy.
But most of all, we have another tool in our box: we can write possibilities as equations, and use multiplication to find combinations.
There’s a few areas of math that benefit from seeing multiplication in this way:
- Binomial Theorem. This scary-sounding theorem relates (h+t)^n to the coefficients. If you’re clever, you realize you can use combinations and permutations to figure out the exponents rather than having to multiply out the whole equation. This is what the binomial theorem does. We’ll cover more later — this theorem shows up in a lot of places, including calculus.
- Boolean Algebra. Computer geeks love converting conditions like OR and AND into mathematical statements. This type of AND/OR logic is used when designing computer circuits, and expressing possibilities with equations (not diagrams) is very useful. The fancy name of this technique is Boolean Algebra, which we’ll save for a rainy day as well.
- Calculus. Calculus gets a double bonus from this interpretation. First, the binomial theorem makes working with equations like x^n much easier. Second, one view of calculus is an “expansion” of multiplication. Today we got practice thinking that multiplication means a lot more than “repeated addition”. (“12 × 34″ means 12 groups of 34, right?)
- More advanced combinations. Let’s say you have 3 guests (Alice, Bob, and Charlie) and they are bringing soda, ice cream, or yogurt. Someone knocks at the door — what are the possibilities? (a + b + c) * (s + i + y). The equation has it all there.
So you can teach an old dog like multiplication new tricks after all. Well, the tricks have always been there — it’s like discovering Fido has been barking poetry in morse code all this time.
And come to think of it, maybe we’re the animal that learned a new trick. The poetry was there, staring us in the face and we just didn’t recognize it (12 × 34 is based on combinations!). I know I had some forehead-slapping moments after seeing how similar combinations and regular multiplication really were. | http://betterexplained.com/articles/how-to-understand-combinations-using-multiplication/ | 13 |
73 | Stellar rotation is the angular motion of a star about its axis. The rate of rotation can be measured from the spectrum of the star, or by timing the movements of active features on the surface.
The rotation of a star produces an equatorial bulge due to centrifugal force. As stars are not solid bodies, they can also undergo differential rotation. Thus the equator of the star can rotate at a different angular velocity than the higher latitudes. These differences in the rate of rotation within a star may have a significant role in the generation of a stellar magnetic field.
The magnetic field of a star interacts with the stellar wind. As the wind moves away from the star, its angular velocity slows. Because the star's magnetic field threads through the wind, this interaction applies a drag to the stellar rotation. As a result, angular momentum is transferred from the star to the wind, and over time this gradually slows the star's rate of rotation.
Unless a star is being observed from the direction of its pole, sections of the surface have some amount of movement toward or away from the observer. The component of movement that is in the direction of the observer is called the radial velocity. For the portion of the surface with a radial velocity component toward the observer, the radiation is shifted to a higher frequency because of Doppler shift. Likewise the region that has a component moving away from the observer is shifted to a lower frequency. When the absorption lines of a star are observed, this shift at each end of the spectrum causes the line to broaden. However, this broadening must be carefully separated from other effects that can increase the line width.
The component of the radial velocity observed through line broadening depends on the inclination of the star's pole to the line of sight. The derived value is given as ve·sin i, where ve is the rotational velocity at the equator and i is the inclination. However, i is not always known, so the result gives a minimum value for the star's rotational velocity. That is, if i is not a right angle, then the actual velocity is greater than ve·sin i. This is sometimes referred to as the projected rotational velocity.
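As a rough sketch of the idea (all numbers are made up for illustration): the projected rotational velocity follows from the rotational Doppler half-width of a spectral line, and the true equatorial velocity would only follow if the inclination were known.

    import math

    c = 299792.458        # speed of light, km/s
    lambda0 = 500.0       # rest wavelength of the line, nm (assumed)
    half_width = 0.05     # rotational broadening half-width, nm (assumed)

    v_sin_i = c * half_width / lambda0
    print(f"v sin i = {v_sin_i:.1f} km/s")                 # ~30 km/s

    i = math.radians(60.0)                                 # hypothetical inclination
    print(f"v_e     = {v_sin_i / math.sin(i):.1f} km/s")   # larger than the projected value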
For giant stars, the atmospheric microturbulence can result in line broadening that is much larger than the effects of rotation, effectively drowning out the rotational signal. However, an alternate approach can be employed that makes use of gravitational microlensing events. These occur when a massive object passes in front of the more distant star and functions like a lens, briefly magnifying the image. The more detailed information gathered by this means allows the effects of microturbulence to be distinguished from rotation.
If a star displays magnetic surface activity such as starspots, then these features can be tracked to estimate the rotation rate. However, such features can form at locations other than the equator and can migrate across latitudes over the course of their life span, so differential rotation of a star can produce varying measurements. Stellar magnetic activity is often associated with rapid rotation, so this technique can be used for the measurement of such stars. Observation of starspots has shown that these features can actually vary the rotation rate of a star, as the magnetic fields modify the flow of gases in the star.
Physical effects
Equatorial bulge
Gravity tends to contract celestial bodies into a perfect sphere, the shape where all the mass is as close to the center of gravity as possible. But a rotating star is not spherical in shape, it has an equatorial bulge.
As a rotating proto-stellar disk contracts to form a star its shape becomes more and more spherical, but the contraction doesn't proceed all the way to a perfect sphere. At the poles all of the gravity acts to increase the contraction, but at the equator the effective gravity is diminished by the centrifugal force. The final shape of the star after star formation is an equilibrium shape, in the sense that the effective gravity in the equatorial region (being diminished) cannot pull the star to a more spherical shape. The rotation also gives rise to gravity darkening at the equator, as described by the von Zeipel theorem.
An extreme example of an equatorial bulge is found on the star Regulus A (α Leonis A). The equator of this star has a measured rotational velocity of 317 ± 3 km/s. This corresponds to a rotation period of 15.9 hours; the rotational velocity is 86% of the velocity at which the star would break apart. The equatorial radius of this star is 32% larger than its polar radius. Other rapidly rotating stars include Alpha Arae, Pleione, Vega and Achernar.
The break-up velocity of a star is an expression that is used to describe the case where the centrifugal force at the equator is equal to the gravitational force. For a star to be stable the rotational velocity must be below this value.
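A minimal sketch of this estimate, using the simple Keplerian balance v_crit = sqrt(G·M/R) and solar values for scale; this ignores the equatorial bulge of a real near-critical rotator, which lowers the true break-up velocity somewhat.

    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30       # kg
    R_sun = 6.957e8        # m

    def breakup_velocity(mass_kg, radius_m):
        """Equatorial velocity at which centrifugal force balances gravity."""
        return math.sqrt(G * mass_kg / radius_m)

    print(breakup_velocity(M_sun, R_sun) / 1000.0)   # ~437 km/s for a star of solar mass and radius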
Differential rotation
Surface differential rotation is observed on stars such as the Sun when the angular velocity varies with latitude. Typically, the angular velocity decreases with increasing latitude. However, the reverse has also been observed, such as on the star designated HD 31993. The first such star, other than the Sun, to have its differential rotation mapped in detail is AB Doradus.
The underlying mechanism that causes differential rotation is turbulent convection inside a star. Convective motion carries energy toward the surface through the mass movement of plasma. This mass of plasma carries a portion of the angular velocity of the star. When turbulence occurs through shear and rotation, the angular momentum can become redistributed to different latitudes through meridional flow.
The interfaces between regions with sharp differences in rotation are believed to be efficient sites for the dynamo processes that generate the stellar magnetic field. There is also a complex interaction between a star's rotation distribution and its magnetic field, with the conversion of magnetic energy into kinetic energy modifying the velocity distribution.
Rotation braking
Stars are believed to form as the result of a collapse of a low-temperature cloud of gas and dust. As the cloud collapses, conservation of angular momentum causes any small net rotation of the cloud to increase, forcing the material into a rotating disk. At the dense center of this disk a protostar forms, which gains heat from the gravitational energy of the collapse.
As the collapse continues, the rotation rate can increase to the point where the accreting protostar can break up due to centrifugal force at the equator. Thus the rotation rate must be braked during the first 100,000 years to avoid this scenario. One possible explanation for the braking is the interaction of the protostar's magnetic field with the stellar wind in magnetic braking. The expanding wind carries away the angular momentum and slows down the rotation rate of the collapsing protostar.
Most main-sequence stars with a spectral class between O5 and F5 have been found to rotate rapidly. For stars in this range, the measured rotation velocity increases with mass. This increase in rotation peaks among young, massive B-class stars. As the expected life span of a star decreases with increasing mass, this can be explained as a decline in rotational velocity with age.
For main-sequence stars, the decline in rotation can be approximated by a mathematical relation:

Ωe ∝ t^(−1/2)

where Ωe is the angular velocity at the equator and t is the star's age. This relation is named Skumanich's law after Andrew P. Skumanich who discovered it in 1972. Gyrochronology is the determination of a star's age based on the rotation rate, calibrated using the Sun.
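A small sketch of how such a relation can be used, calibrated on approximate solar values (about 2 km/s equatorial velocity at an age of 4.6 billion years); this ignores the dependence on stellar mass and is for illustration only.

    v_sun = 2.0        # km/s, approximate solar equatorial rotation velocity
    t_sun = 4.6e9      # yr, approximate solar age

    def rotation_velocity(age_yr):
        """Skumanich-type spin-down: velocity proportional to t^(-1/2)."""
        return v_sun * (age_yr / t_sun) ** -0.5

    def gyro_age(v_kms):
        """Invert the relation to estimate an age from a measured rotation velocity."""
        return t_sun * (v_kms / v_sun) ** -2

    print(rotation_velocity(1.0e9))   # ~4.3 km/s for a Sun-like star at 1 Gyr
    print(gyro_age(10.0))             # ~1.8e8 yr for a Sun-like star rotating at 10 km/s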
Stars slowly lose mass by the emission of a stellar wind from the photosphere. The star's magnetic field exerts a torque on the ejected matter, resulting in a steady transfer of angular momentum away from the star. Stars with a rate of rotation greater than 15 km/s also exhibit more rapid mass loss, and consequently a faster rate of rotation decay. Thus as the rotation of a star is slowed because of braking, there is a decrease in rate of loss of angular momentum. Under these conditions, stars gradually approach, but never quite reach, a condition of zero rotation.
Close binary systems
A close binary star system occurs when two stars orbit each other with an average separation that is of the same order of magnitude as their diameters. At these distances, more complex interactions can occur, such as tidal effects, transfer of mass and even collisions. Tidal interactions in a close binary system can result in modification of the orbital and rotational parameters. The total angular momentum of the system is conserved, but the angular momentum can be transferred between the orbital periods and the rotation rates.
Each of the members of a close binary system raises tides on the companion star through gravitational interaction. However, the bulges can be slightly misaligned with respect to the direction of gravitational attraction. Thus the force of gravity produces a torque component on the bulge, resulting in the transfer of angular momentum. This causes the system to steadily evolve, although it can approach a stable equilibrium. The effect can be more complex in cases where the axis of rotation is not perpendicular to the orbital plane.
For contact or semi-detached binaries, the transfer of mass from a star to its companion can also result in a significant transfer of angular momentum. The accreting companion can spin up to the point where it reaches its critical rotation rate and begins losing mass along the equator.
Degenerate stars
After a star has finished generating energy through thermonuclear fusion, it evolves into a more compact, degenerate state. During this process the dimensions of the star are significantly reduced, which can result in a corresponding increase in angular velocity.
White dwarf
A white dwarf is a star that consists of material that is the by-product of thermonuclear fusion during the earlier part of its life, but lacks the mass needed to fuse heavier elements. It is a compact body that is supported by a quantum mechanical effect known as electron degeneracy pressure that will not allow the star to collapse any further. Generally, most white dwarfs have a low rate of rotation, most likely as the result of rotational braking or of shedding angular momentum when the progenitor star lost its outer envelope. (See planetary nebula.)
A slow-rotating white dwarf star can not exceed the Chandrasekhar limit of 1.44 solar masses without collapsing to form a neutron star or exploding as a Type Ia supernova. Once the white dwarf reaches this mass, such as by accretion or collision, the gravitational force would exceed the pressure exerted by the electrons. If the white dwarf is rotating rapidly, however, the effective gravity is diminished in the equatorial region, thus allowing the white dwarf to exceed the Chandrasekhar limit. Such rapid rotation can occur, for example, as a result of mass accretion that results in a transfer of angular momentum.
Neutron star
A neutron star is a highly dense remnant of a star that is primarily composed of neutrons—a particle that is found in most atomic nuclei and has no net electrical charge. The mass of a neutron star is in the range of 1.2 to 2.1 times the mass of the Sun. As a result of the collapse, a newly formed neutron star can have a very rapid rate of rotation; on the order of a hundred rotations per second.
Pulsars are rotating neutron stars that have a magnetic field. A narrow beam of electromagnetic radiation is emitted from the poles of rotating pulsars. If the beam sweeps past the direction of the Solar System then the pulsar will produce a periodic pulse that can be detected from the Earth. The energy radiated by the magnetic field gradually slows down the rotation rate, so that older pulsars can require as long as several seconds between each pulse.
Black hole
A black hole is an object with a gravitational field that is sufficiently powerful that it can prevent light from escaping. When they are formed from the collapse of a rotating mass, they retain all of the angular momentum that is not shed in the form of ejected gas. This rotation causes the space within an oblate spheroid-shaped volume, called the "ergosphere", to be dragged around with the black hole. Mass falling into this volume gains energy by this process and some portion of the mass can then be ejected without falling into the black hole. When the mass is ejected, the black hole loses angular momentum (the "Penrose process"). The rotation rate of a black hole has been measured as high as 98.7% of the speed of light.
| http://en.wikipedia.org/wiki/Projected_rotational_velocity | 13
137 | In mathematics, a generating function is a formal power series in one indeterminate, whose coefficients encode information about a sequence of numbers an that is indexed by the natural numbers. Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. One can generalize to formal power series in more than one indeterminate, to encode information about arrays of numbers indexed by several natural numbers.
There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series; definitions and examples are given below. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed.
Generating functions are often expressed in closed form (rather than as a series) by some expression involving operations defined for formal power series. These expressions in terms of the indeterminate x may involve arithmetic operations, differentiation with respect to x and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of x. Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of x, and which has the formal power series as its Taylor series; this explains the designation "generating functions". However, such an interpretation is not required to be possible, because formal power series are not required to give a convergent series when a nonzero numeric value is substituted for x. Also, not all expressions that are meaningful as functions of x are meaningful as expressions designating formal power series; negative and fractional powers of x are examples of this.
- A generating function is a clothesline on which we hang up a sequence of numbers for display.
- —Herbert Wilf, Generatingfunctionology (1994)
Ordinary generating function
The ordinary generating function of a sequence an is
G(an; x) = a0 + a1·x + a2·x^2 + a3·x^3 + ⋯ = Σn≥0 an·x^n.
When the term generating function is used without qualification, it is usually taken to mean an ordinary generating function.
The ordinary generating function can be generalized to arrays with multiple indices. For example, the ordinary generating function of a two-dimensional array am, n (where n and m are natural numbers) is
G(am,n; x, y) = Σm,n am,n·x^m·y^n.
Exponential generating function
The exponential generating function of a sequence an is
EG(an; x) = Σn≥0 an·x^n/n!.
Poisson generating function
The Poisson generating function of a sequence an is
PG(an; x) = Σn≥0 an·e^(−x)·x^n/n!.
Lambert series
The Lambert series of a sequence an is
LG(an; x) = Σn≥1 an·x^n/(1 − x^n).
Note that in a Lambert series the index n starts at 1, not at 0, as the first term would otherwise be undefined.
Bell series
The Bell series of a sequence an, with respect to a given prime p, is
BGp(an; x) = Σn≥0 a(p^n)·x^n,
where a(p^n) denotes the term of the sequence whose index is the prime power p^n.
Dirichlet series generating functions
The Dirichlet series generating function of a sequence an is
DG(an; s) = Σn≥1 an/n^s.
Polynomial sequence generating functions
The idea of generating functions can be extended to sequences of other objects. Thus, for example, polynomial sequences of binomial type are generated by
where pn(x) is a sequence of polynomials and f(t) is a function of a certain form. Sheffer sequences are generated in a similar way. See the main article generalized Appell polynomials for more information.
Ordinary generating functions
Polynomials are a special case of ordinary generating functions, corresponding to finite sequences, or equivalently sequences that vanish after a certain point. These are important in that many finite sequences can usefully be interpreted as generating functions, such as the Poincaré polynomial, and others.
A key generating function is the constant sequence 1, 1, 1, 1, 1, 1, 1, 1, 1, ..., whose ordinary generating function is
1 + x + x^2 + x^3 + ⋯ = Σn≥0 x^n = 1/(1 − x).
The left-hand side is the Maclaurin series expansion of the right-hand side. Alternatively, the right-hand side expression can be justified by multiplying the power series on the left by 1 − x, and checking that the result is the constant power series 1, in other words that all coefficients except the one of x0 vanish. Moreover there can be no other power series with this property. The left-hand side therefore designates the multiplicative inverse of 1 − x in the ring of power series.
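As a quick illustration (not part of the original article), the identity can be checked with a computer algebra system; the sketch below assumes sympy is available, and the truncation order N is an arbitrary choice.

```python
# Illustrative check: multiplying the all-ones power series by (1 - x)
# leaves 1, up to the truncation order.
import sympy as sp

x = sp.symbols('x')
N = 10  # arbitrary truncation order

partial_sum = sum(x**n for n in range(N))      # 1 + x + ... + x^(N-1)
print(sp.expand((1 - x) * partial_sum))        # 1 - x**10: only the tail term survives
print(sp.series(1 / (1 - x), x, 0, N))         # 1 + x + ... + x**9 + O(x**10)
```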
Expressions for the ordinary generating function of other sequences are easily derived from this one. For instance, the substitution x → ax gives the generating function for the geometric sequence 1, a, a2, a3, ... for any constant a:
Σn≥0 a^n·x^n = 1/(1 − ax).
(The equality also follows directly from the fact that the left-hand side is the Maclaurin series expansion of the right-hand side.) In particular,
Σn≥0 (−1)^n·x^n = 1/(1 + x).
One can also introduce regular "gaps" in the sequence by replacing x by some power of x, so for instance for the sequence 1, 0, 1, 0, 1, 0, 1, 0, ... one gets the generating function
Σn≥0 x^(2n) = 1/(1 − x^2).
By squaring the initial generating function, or by finding the derivative of both sides with respect to x and making a change of running variable n → n-1, one sees that the coefficients form the sequence 1, 2, 3, 4, 5, ..., so one has
Σn≥0 (n + 1)·x^n = 1/(1 − x)^2.
More generally, for any positive integer k, it is true that
Σn≥0 C(n + k, n)·x^n = 1/(1 − x)^(k+1).
Note that, since
2·C(n + 2, 2) − 3·C(n + 1, 1) + C(n, 0) = n^2,
one can find the ordinary generating function for the sequence 0, 1, 4, 9, 16, ... of square numbers by linear combination of binomial-coefficient generating sequences:
G(n^2; x) = 2/(1 − x)^3 − 3/(1 − x)^2 + 1/(1 − x) = x(x + 1)/(1 − x)^3.
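A sketch of this linear-combination calculation, using sympy to expand the series (the truncation order of eight terms is arbitrary):

```python
# Verify the closed form and the first few coefficients of the squares' OGF.
import sympy as sp

x = sp.symbols('x')
G = 2/(1 - x)**3 - 3/(1 - x)**2 + 1/(1 - x)

# Same closed form as x(x+1)/(1-x)^3
assert sp.cancel(G - x*(x + 1)/(1 - x)**3) == 0

# First coefficients are the squares 0, 1, 4, 9, ...
poly = sp.series(G, x, 0, 8).removeO()
print([poly.coeff(x, n) for n in range(8)])    # [0, 1, 4, 9, 16, 25, 36, 49]
```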
Rational functions
The ordinary generating function of a sequence can be expressed as a rational function (the ratio of two polynomials) if and only if the sequence is a linear recursive sequence; this generalizes the examples above.
Multiplication yields convolution
Multiplication of ordinary generating functions yields a discrete convolution (the Cauchy product) of the sequences. For example, the sequence of cumulative sums of a sequence with ordinary generating function G(an; x) has the generating function G(an; x)·1/(1 − x), because 1/(1 − x) is the ordinary generating function for the sequence (1, 1, ...).
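For instance, the following sketch (with a made-up example sequence, assuming sympy) checks that dividing an ordinary generating function by 1 − x produces the running totals of the original sequence:

```python
# Dividing an OGF by (1 - x) yields the sequence of partial sums.
import sympy as sp
from itertools import accumulate

x = sp.symbols('x')
a = [3, 1, 4, 1, 5, 9, 2, 6]                         # arbitrary example sequence
G = sum(c * x**n for n, c in enumerate(a))           # its truncated OGF

H = sp.series(G / (1 - x), x, 0, len(a)).removeO()
print([H.coeff(x, n) for n in range(len(a))])        # [3, 4, 8, 9, 14, 23, 25, 31]
print(list(accumulate(a)))                           # the same cumulative sums
```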
Relation to discrete-time Fourier transform
When the series Σn an·e^(−iωn) converges absolutely, G(an; e^(−iω)) is the discrete-time Fourier transform of the sequence a0, a1, ....
Asymptotic growth of a sequence
In calculus, often the growth rate of the coefficients of a power series can be used to deduce a radius of convergence for the power series. The reverse can also hold; often the radius of convergence for a generating function can be used to deduce the asymptotic growth of the underlying sequence.
For instance, if an ordinary generating function G(an; x) that has a finite radius of convergence r can be written as
G(an; x) = (A(x) + B(x)·(1 − x/r)^(−β)) / x^α,
where A(x) and B(x) are analytic beyond the radius of convergence r and B(r) ≠ 0, then the terms of the sequence grow asymptotically as
an ~ (B(r) / (r^α·Γ(β)))·n^(β−1)·(1/r)^n,
using the Gamma function.
Asymptotic growth of the sequence of squares
As derived above, the ordinary generating function for the sequence of squares is x(x + 1)/(1 − x)^3. With r = 1, α = 0, β = 3, A(x) = 0, and B(x) = x(x+1), we can verify that the squares grow as expected, like the squares:
an ~ (B(1)/(1^0·Γ(3)))·n^(3−1)·(1/1)^n = (2/2)·n^2 = n^2.
Asymptotic growth of the Catalan numbers
The ordinary generating function for the Catalan numbers is G(Cn; x) = (1 − √(1 − 4x))/(2x). With r = 1/4, α = 1, β = −1/2, A(x) = 1/2, and B(x) = −1/2, we can conclude that, for the Catalan numbers,
Cn ~ (B(1/4)/((1/4)·Γ(−1/2)))·n^(−3/2)·4^n = (1/√π)·n^(−3/2)·4^n.
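A small numerical check of this asymptotic estimate (illustrative only; the sample values of n are arbitrary):

```python
# Compare Catalan numbers with the estimate 4^n / (n^(3/2) * sqrt(pi)).
import math

def catalan(n: int) -> int:
    return math.comb(2 * n, n) // (n + 1)

for n in (10, 50, 100):                      # arbitrary sample values
    approx = 4**n / (n**1.5 * math.sqrt(math.pi))
    print(n, approx / catalan(n))            # ratio approaches 1 as n grows
```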
Bivariate and multivariate generating functions
One can define generating functions in several variables for arrays with several indices. These are called multivariate generating functions or, sometimes, super generating functions. For two variables, these are often called bivariate generating functions.
For instance, since (1 + x)^n is the ordinary generating function for the binomial coefficients for a fixed n, one may ask for a bivariate generating function that generates the binomial coefficients C(n, k) for all k and n. To do this, consider (1 + x)^n as itself a series, in n, and find the generating function in y that has these as coefficients. Since the generating function for a^n is 1/(1 − ay), the generating function for the binomial coefficients is:
Σn≥0 Σk C(n, k)·x^k·y^n = Σn≥0 (1 + x)^n·y^n = 1/(1 − (1 + x)·y).
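A short sympy sketch confirming that the coefficients of this bivariate generating function are the rows of Pascal's triangle (the truncation order N is arbitrary):

```python
# The coefficient of x^k y^n in 1/(1 - (1 + x) y) is the binomial coefficient C(n, k).
import sympy as sp

x, y = sp.symbols('x y')
F = 1 / (1 - (1 + x) * y)
N = 6  # arbitrary truncation order

expansion = sp.series(F, y, 0, N).removeO()
for n in range(N):
    row = sp.expand(expansion.coeff(y, n))
    print([row.coeff(x, k) for k in range(n + 1)])   # rows of Pascal's triangle
```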
Generating functions for the sequence of square numbers an = n2 are:
Ordinary generating function
G(n^2; x) = x(x + 1)/(1 − x)^3
Exponential generating function
EG(n^2; x) = x(x + 1)·e^x
Bell series
BGp(n^2; x) = 1/(1 − p^2·x)
Dirichlet series generating function
DG(n^2; s) = ζ(s − 2),
using the Riemann zeta function.
The sequence generated by a Dirichlet series generating function corresponding to:
where is the Riemann zeta function, has the ordinary generating function:
Multivariate generating function
Multivariate generating functions arise in practice when calculating the number of contingency tables of non-negative integers with specified row and column totals. Suppose the table has r rows and c columns; the row sums are t1, …, tr and the column sums are s1, …, sc. Then, according to I. J. Good, the number of such tables is the coefficient of x1^t1 ⋯ xr^tr · y1^s1 ⋯ yc^sc in
Πi Πj 1/(1 − xi·yj).
Generating functions are used to
- Find a closed formula for a sequence given in a recurrence relation. For example, consider the Fibonacci numbers (a short sketch of this appears after this list).
- Find recurrence relations for sequences—the form of a generating function may suggest a recurrence formula.
- Find relationships between sequences—if the generating functions of two sequences have a similar form, then the sequences themselves may be related.
- Explore the asymptotic behaviour of sequences.
- Prove identities involving sequences.
- Solve enumeration problems in combinatorics and encode their solutions. Rook polynomials are an example of an application in combinatorics.
- Evaluate infinite sums.
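As an illustration of the first use listed above, the sketch below (assuming sympy) recovers both the Fibonacci coefficients and Binet's closed formula from the generating function x/(1 − x − x^2); the truncation at ten terms is arbitrary.

```python
# Fibonacci numbers from their generating function, plus Binet's formula.
import sympy as sp

x, n = sp.symbols('x n')
G = x / (1 - x - x**2)                     # OGF of 0, 1, 1, 2, 3, 5, 8, ...

ser = sp.series(G, x, 0, 10).removeO()     # ten terms is an arbitrary cut-off
print([ser.coeff(x, k) for k in range(10)])          # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

phi = (1 + sp.sqrt(5)) / 2
psi = (1 - sp.sqrt(5)) / 2
binet = (phi**n - psi**n) / sp.sqrt(5)               # closed formula from partial fractions
print([sp.simplify(binet.subs(n, k)) for k in range(10)])   # the same sequence
```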
Other generating functions
Examples of polynomial sequences generated by more complex generating functions include:
- Appell polynomials
- Chebyshev polynomials
- Difference polynomials
- Generalized Appell polynomials
- Q-difference polynomials
Similar concepts
See also
- Moment-generating function
- Probability-generating function
- Stanley's reciprocity theorem
- Applications to partitions
- Combinatorial principles
- Donald E. Knuth, The Art of Computer Programming, Volume 1 Fundamental Algorithms (Third Edition) Addison-Wesley. ISBN 0-201-89683-4. Section 1.2.9: Generating Functions, pp. 86
- This alternative term can already be found in E.N. Gilbert, Enumeration of Labeled graphs, Canadian Journal of Mathematics 3, 1956, p. 405–411, but its use is rare before the year 2000; since then it appears to be increasing
- Apostol (1976) pp.42–43
- Wilf (1994) p.56
- Wilf (1994) p.59
- Good, I. J. (1986). "On applications of symmetric Dirichlet distributions and their mixtures to contingency tables". The Annals of Statistics 4 (6): 1159–1189. doi:10.1214/aos/1176343649.
- Doubilet, Peter; Rota, Gian-Carlo; Stanley, Richard (1972). "On the foundations of combinatorial theory. VI. The idea of generating function". Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability 2: 267–318. Zbl 0267.05002. Reprinted in Rota, Gian-Carlo (1975). "3. The idea of generating function". Finite Operator Calculus. With the collaboration of P. Doubilet, C. Greene, D. Kahaner, A. Odlyzko and R. Stanley. Academic Press. pp. 83–134. ISBN 0-12-596650-4. Zbl 0328.05007.
- Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001
- Ronald L. Graham, Donald E. Knuth, and Oren Patashnik (1994). Concrete Mathematics. A foundation for computer science (second ed.). Addison-Wesley. pp. 320–380. ISBN 0-201-55802-5. Zbl 0836.00001.
- Wilf, Herbert S. (1994). Generatingfunctionology (2nd ed.). Boston, MA: Academic Press. ISBN 0-12-751956-4. Zbl 0831.05001.
- Flajolet, Philippe; Sedgewick, Robert (2009). Analytic Combinatorics. Cambridge University Press. ISBN 978-0-521-89806-5. Zbl 1165.05001.
- Hazewinkel, Michiel, ed. (2001), "Generating function", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Generating Functions, Power Indices and Coin Change at cut-the-knot
- Generatingfunctionology PDF download page
- (French) 1031 Generating Functions
- Ignacio Larrosa Cañestro, León-Sotelo, Marko Riedel, Georges Zeller, Suma de números equilibrados, newsgroup es.ciencia.matematicas
- Frederick Lecue; Riedel, Marko, et al., Permutation, Les-Mathematiques.net, in French, title somewhat misleading.
- "Generating Functions" by Ed Pegg, Jr., Wolfram Demonstrations Project, 2007. | http://en.wikipedia.org/wiki/Generating_function | 13 |
58 | Moment of inertia, also called mass moment of inertia or the angular mass, (SI units kg m2) is a measure of an object’s resistance to changes in its rotation rate. It is the rotational analog of mass. That is, it is the inertia of a rigid rotating body with respect to its rotation. The moment of inertia plays much the same role in rotational dynamics as mass does in basic dynamics, determining the relationship between angular momentum and angular velocity, torque and angular acceleration, and several other quantities. While a simple scalar treatment of the moment of inertia suffices for many situations, a more advanced tensor treatment allows the analysis of such complicated systems as spinning tops and gyroscope motion.
The symbol I (and sometimes J) is usually used to refer to the moment of inertia.
The moment of inertia of an object about a given axis describes how difficult it is to change its angular motion about that axis. For example, consider two discs (A and B) of the same mass. Disc A has a larger radius than disc B. Assuming that there is uniform thickness and mass distribution, it requires more effort to accelerate disc A (change its angular velocity) because its mass is distributed further from its axis of rotation: mass that is further out from that axis must, for a given angular velocity, move more quickly than mass closer in. In this case, disc A has a larger moment of inertia than disc B.
The moment of inertia has two forms, a scalar form I (used when the axis of rotation is known) and a more general tensor form that does not require knowing the axis of rotation. The scalar moment of inertia I (often called simply the "moment of inertia") allows a succinct analysis of many simple problems in rotational dynamics, such as objects rolling down inclines and the behavior of pulleys. For instance, while a block of any shape will slide down a frictionless decline at the same rate, rolling objects may descend at different rates, depending on their moments of inertia. A hoop will descend more slowly than a solid disk of equal mass and radius because more of its mass is located far from the axis of rotation, and thus needs to move faster if the hoop rolls at the same angular velocity. However, for (more complicated) problems in which the axis of rotation can change, the scalar treatment is inadequate, and the tensor treatment must be used (although shortcuts are possible in special situations). Examples requiring such a treatment include gyroscopes, tops, and even satellites, all objects whose alignment can change.
The moment of inertia can also be called the mass moment of inertia (especially by mechanical engineers) to avoid confusion with the second moment of area, which is sometimes called the moment of inertia (especially by structural engineers) and denoted by the same symbol I. The easiest way to differentiate these quantities is through their units. In addition, the moment of inertia should not be confused with the polar moment of inertia, which is a measure of an object’s ability to resist torsion (twisting).
A simple definition of the moment of inertia of any object, be it a point mass or a 3D-structure, is given by
I = ∫ r^2 dm,
where:
‘dm’ is the mass of an infinitesimally small part of the body
and r is the (perpendicular) distance of the point mass to the axis of rotation.
The (scalar) moment of inertia of a point mass rotating about a known axis is defined by
I = m·r^2.
The moment of inertia is additive. Thus, for a rigid body consisting of N point masses mi with distances ri to the rotation axis, the total moment of inertia equals the sum of the point-mass moments of inertia:
I = Σi mi·ri^2.
For a solid body described by a continuous mass density function ρ(r), the moment of inertia about a known axis can be calculated by integrating the square of the distance (weighted by the mass density) from a point in the body to the rotation axis:
I = ∫V ρ(r)·d(r)^2 dV,
where d(r) is the (perpendicular) distance from the point r to the axis of rotation,
V is the volume occupied by the object.
ρ is the spatial density function of the object, and
x, y, z (the components of r) are coordinates of a point inside the body.
Diagram for the calculation of a disk’s moment of inertia. Here k is 1/2 and r is the radius used in determining the moment.
Based on dimensional analysis alone, the moment of inertia of a non-point object must take the form:
I = k·M·R^2,
where:
M is the mass
R is the radius of the object from the center of mass (in some cases, the length of the object is used instead.)
k is a dimensionless constant called the inertia constant that varies with the object in consideration.
Inertial constants are used to account for the differences in the placement of the mass from the center of rotation. Examples include:
k = 1, thin ring or thin-walled cylinder around its center,
k = 2/5, solid sphere around its center
k = 1/2, solid cylinder or disk around its center.
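The inertia constants listed above can be checked numerically; the following sketch (assuming numpy, with arbitrary test values for the mass and radius) integrates I = ∫ r^2 dm for each shape and divides by M·R^2:

```python
# Recover the inertia constants k from I = k M R^2 by numerical integration.
import numpy as np

M, R = 2.0, 0.5   # arbitrary test mass (kg) and radius (m)

# Thin ring about its axis: all the mass sits at distance R
I_ring = M * R**2

# Solid disk about its axis: sum thin rings of radius r and width dr
r = np.linspace(0.0, R, 100_001)
sigma = M / (np.pi * R**2)                         # surface density
I_disk = np.trapz(sigma * 2 * np.pi * r * r**2, r)

# Solid sphere about a diameter: sum thin disks of thickness dz
z = np.linspace(-R, R, 100_001)
rho = M / (4 / 3 * np.pi * R**3)                   # volume density
I_sphere = np.trapz(0.5 * rho * np.pi * (R**2 - z**2)**2, z)

for name, I in (("ring", I_ring), ("disk", I_disk), ("sphere", I_sphere)):
    print(name, I / (M * R**2))                    # approximately 1, 0.5 and 0.4
```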
Parallel axis theorem
Once the moment of inertia has been calculated for rotations about the center of mass of a rigid body, one can conveniently recalculate the moment of inertia for all parallel rotation axes as well, without having to resort to the formal definition. If the axis of rotation is displaced by a distance R from the center of mass axis of rotation (e.g. spinning a disc about a point on its periphery, rather than through its center,) the displaced and center-moment of inertia are related as follows:
Idisplaced = Icenter + M·R^2.
This theorem is also known as the parallel axes rule and is a special case of Steiner’s parallel-axis theorem.
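A quick numerical check of the parallel axis theorem, using a Monte Carlo estimate for a uniform disk spun about a point on its rim (the mass, radius, sample size, and seed are arbitrary choices; numpy is assumed):

```python
# Monte Carlo check of I = I_cm + M R^2 for a uniform disk about a rim point.
import numpy as np

rng = np.random.default_rng(0)
M, R = 1.0, 1.0
n = 2_000_000

# Points distributed uniformly over the disk
r = R * np.sqrt(rng.random(n))
theta = 2 * np.pi * rng.random(n)
xp, yp = r * np.cos(theta), r * np.sin(theta)

I_cm = M * np.mean(xp**2 + yp**2)                 # about the centre: ~0.5 M R^2
I_rim = M * np.mean((xp - R)**2 + yp**2)          # about a point on the rim

print(I_cm, I_rim, I_cm + M * R**2)               # I_rim ~ I_cm + M R^2 ~ 1.5
```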
Perpendicular Axis Theorem
The perpendicular axis theorem for planar objects can be demonstrated by looking at the contribution to the three axis moments of inertia from an arbitrary mass element. From the point mass moment, the contributions to each of the axis moments of inertia are
dIx = y^2 dm, dIy = x^2 dm, dIz = (x^2 + y^2) dm,
so that, for a planar object lying in the x–y plane, Iz = Ix + Iy.
If a body can be decomposed (either physically or conceptually) into several constituent parts, then the moment of inertia of the body about a given axis is obtained by summing the moments of inertia of each constituent part around the same given axis.
Common Moments of Inertia | http://theconstructor.org/structural-engg/strength-of-materials/moment-of-inertia/2825/ | 13 |
51 | Area is a quantity expressing the size of a region of space. Surface area refers to the summation of the exposed sides of an object. Area (Cx2) is the derivative of volume (Cx3). Area is the antiderivative of length (Cx1). In the case of the perfect closed curve in two dimensions, which is the circle, the area is the integral of the circumference with respect to the radius: the circumference is 2πr, and integrating it from 0 to r gives the area πr2.
any regular polygon: P × a / 2 (where P = the length of the perimeter, and a is the length of the apothem of the polygon [the distance from the center of the polygon to the center of one side])
a parallelogram: B × h (where the base B is any side, and the height h is the distance between the lines that the sides of length B lie on)
a trapezoid: (B + b) × h / 2 (B and b are the lengths of the parallel sides, and h is the distance between the lines on which the parallel sides lie)
a triangle: B × h / 2 (where B is any side, and h is the distance from the line on which B lies to the other vertex of the triangle). This formula can be used if the height h is known. If the lengths of the three sides are known then Heron's formula can be used: √(s×(s-a)×(s-b)×(s-c)) (where a, b, c are the sides of the triangle, and s = (a + b + c)/2 is half of its perimeter)
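As an illustration of the two triangle formulas above (the 3-4-5 right triangle is just a convenient example):

```python
# Triangle area from base and height, and from the three sides via Heron's formula.
import math

def area_base_height(base: float, height: float) -> float:
    return base * height / 2

def area_heron(a: float, b: float, c: float) -> float:
    s = (a + b + c) / 2                      # half the perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(area_base_height(3, 4))    # 6.0
print(area_heron(3, 4, 5))       # 6.0
```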
The area of a polygon in the Euclidean plane is a positive number such that:
The area of the unit square is equal to one.
Equal polygons have equal area.
If a polygon is a union of two polygons which do not have common interior points, then its area is the sum of the areas of these polygons.
But before using this definition one has to prove that such an area indeed exists.
In other words, one can also give a formula for the area of an arbitrary triangle, and then define the area of an arbitrary polygon using the idea that the area of a union of polygons (without common interior points) is the sum of the areas of its pieces. But then it is not easy to show that such an area does not depend on the way you break the polygon into pieces.
Nowadays, the most standard (correct) way to introduce area is through the more advanced notion of Lebesgue measure, but one should note that in general, if one adopts the axiom of choice then it is possible to prove that there are some shapes whose Lebesgue measure cannot be meaningfully defined. Such 'shapes' (they cannot a fortiori be simply visualised) enter into Tarski's circle-squaring problem (and, moving to three dimensions, in the Banach-Tarski paradox). The sets involved do not arise in practical matters. | http://allwebhunt.com/wiki-article-tab.cfm/area | 13 |
63 | Estimating Angles, Area, and Length
Grade Levels: 3 - 7
Math students in middle school will use estimation to approximate values, angle, and area measurements of a triangle.
Explain to students that they are going to work as a class to estimate the measurements of several angles and compare the estimates with measured values. Then, students will work in groups of four to estimate a triangle's angles and area. Explain that this lesson covers two benchmark units, degrees and centimeters.
Draw two triangles on the chalkboard, and write the base and height for each: first triangle, height = 732 and base = 1239; second triangle, height = 128 and base = 985. Have students select an acute angle from the first triangle, and show them that they can visualize whether the angle is less than or greater than 90 degrees. Then have them determine if the angle is less than or greater than 45 degrees. This will help them narrow the angle's range to 45 degrees (0-45 or 45-90). If the angle is less than 45 degrees, students can determine whether the angle is closer to 0 or 45 degrees. Guide them through this process for the first triangle, and then repeat the process for the second triangle. Prompt them with questions about the angle's relation to 0, 45, 90, 135, and 180 degrees to help them narrow the acceptable range, and then have them make their estimate. Finally, have students measure the actual angles and compare the estimates with measured values.
Have students estimate the area for each triangle by estimating the product dictated by the formula for the area of a triangle (area = [base/2] x height) and document their process in their notebooks. Explain that they should choose numbers that are close to the originals, but are easier to work with. For example, with the parameters given for the first triangle, a student might say,
"The base is 1239, which is very close to 1200, so I will divide that by 2 to get 600.
600 x 732 is difficult to calculate, but 732 is very close to 700, and
600 x 700 = 420,000."
Have students calculate the actual area with the exact measurements and compare these measurements to their estimates. The actual area of the first triangle is 453,474, so the estimate is only off by about 7% ((453,474 - 420,000)/453,474 ≈ 0.07, or 7%).
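For reference, the arithmetic in this step can be reproduced with a few lines of code (a sketch; the rounding of the percentage is a presentation choice):

```python
# Exact areas of the two chalkboard triangles and the error of the class estimate.
def triangle_area(base, height):
    return base * height / 2

exact_1 = triangle_area(1239, 732)     # 453474.0
exact_2 = triangle_area(985, 128)      # 63040.0
estimate_1 = (1200 / 2) * 700          # 420000.0

error = (exact_1 - estimate_1) / exact_1
print(exact_1, exact_2, round(error * 100), "%")   # about 7 %
```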
Divide the class into groups of four. Have three of the students face one another, and provide them with a string approximately 10 feet in length. Have each student tie his or her string to another student's so that they form a triangle. Have students estimate the angle they create at their vertex (the point at which they hold the string). Ask the fourth student to record the estimates on a sheet of paper, add the three estimates, and compare the sum to 180 degrees. The sum of the angles of a triangle is equal to 180 degrees, so the addition of the estimates should be somewhere between 170 and 190 degrees. If the sum of the estimates is outside the acceptable range, discuss possible reasons why. Next, have the fourth student use a protractor to measure each of the angles and record the actual angle measurements. The fourth student should share this information with the rest of the group.
Have the four students estimate the base and height of the triangle in centimeters and then estimate the area by performing the calculation in their heads. They should use estimation techniques for the base and height of the triangle as well as for the area.
For example, if the estimate for the base is 68 centimeters, and the estimate for the height is 81 centimeters, students might estimate 68 x 81 is similar to 70 x 80 so the estimated area is (70/2) x 80, or 2800.
Emphasize that the best way to estimate the product of two numbers is to either round the numbers up or down or to use a substitute number that is easier to work with. Answers will not be exact, but the estimates should be reasonable. Have the fourth student record all estimates and then measure the base and height of the triangle (in centimeters). Finally, have the fourth student calculate the area and compare the actual value to the estimates.
Have students write a short paragraph that describes how they arrived at their estimates for the triangle's angles and area. Collect their paragraphs, and evaluate their understanding of the estimation process. As a final evaluation, have students draw two triangles with different measurements on one sheet of paper. Have them estimate both triangles' angles and areas. They should provide estimates for the lengths of all sides as well as a computational estimate of the area. Evaluate the estimates to determine if students are able to estimate proportionally. For example, if one side is obviously longer than another, be sure estimates reflect that. For the angle measurements, evaluate students' ability to estimate angles and their relation to well-known angles in addition to how close the sum of the estimates is to 180 degrees. For the estimate of the area, evaluate students' choices of suitable alternate numbers with which to perform computational estimates.
| http://www.teachervision.fen.com/geometry/lesson-plan/48942.html | 13
109 | Students whose mathematics curriculum has been consistent with the recommendations in Principles and Standards should enter high school having designed simple surveys and experiments, gathered data, and graphed and summarized those data in various ways. They should be familiar with basic measures of center and spread, able to describe the shape of data distributions, and able to draw conclusions about a single sample. Students will have computed the probabilities of simple and some compound events and performed simulations, comparing the results of the simulations to predicted probabilities.
In grades 9–12 students should gain a deep understanding of the issues entailed in drawing conclusions in light of variability. They will learn more-sophisticated ways to collect and analyze data and draw conclusions from data in order to answer questions or make informed decisions in workplace and everyday situations. They should learn to ask questions that will help them evaluate the quality of surveys, observational studies, and controlled experiments. They can use their expanding repertoire of algebraic functions, especially linear functions, to model and analyze data, with increasing understanding of what it means for a model to fit data well. In addition, students should begin to understand and use correlation in conjunction with residuals and visual displays to analyze associations between two variables. They should become knowledgeable, analytical, thoughtful consumers of the information and data generated by others.
As students analyze data in grades 9–12, the natural link between statistics and algebra can be developed further. Students' understandings of graphs and functions can also be applied in work with data.
Basic ideas of probability underlie much of statistical inference. Probability is linked to other topics in high school mathematics, especially counting techniques (Number and Operations), area concepts (Geometry), the binomial theorem, and relationships between functions and the area under their graphs (Algebra). Students should learn to determine the probability of a sample statistic for a known population and to draw simple inferences about a population from randomly generated samples.
Students' experiences with surveys and experiments in lower grades should prepare them to consider issues of design. In high school, students should design surveys, observational studies, and experiments that take into consideration questions such as the following: Are the issues and questions clear and unambiguous? What is the population? How should the sample be selected? Is a stratified sample called for? What size should the sample be? Students should understand the concept of bias in surveys and ways to reduce bias, such as using randomization in selecting samples. Similarly, when students design experiments, they should begin to learn how to take into account the nature of the treatments, the selection of the experimental units, and the randomization used to assign treatments to units. Examples of situations students might consider are shown in figure 7.22.
Nonrandomness in sampling may also limit the conclusions that can be drawn from observational studies. For instance, in the observational study example, it is not certain that the number of people riding trains reflects the number of people who would ride trains if more were available or if scheduling were more convenient. Similarly, it would be inappropriate to draw conclusions about the percentage of the population that ice skates on the basis of observational studies done either in Florida or in Quebec. Students need to be aware that any conclusions about cause and effect should be made very cautiously in observational studies. They should also know how certain kinds of systematic observations, such as random testing of manufacturing parts taken from an assembly line, can be used for purposes of quality control.
In designed experiments, two or more experimental treatments (or conditions) are compared. In order for such comparisons to be valid, other sources of variation must be controlled. This is not the situation in the tire example, in which the front and rear tires are subjected to different kinds of wear. Another goal in designed experiments is to be able to draw conclusions with broad applicability. For this reason, new tires should be tested on all relevant road conditions. Consider another designed experiment in which the goal is to test the effect of a treatment (such as getting a flu shot) on a response (such as getting the flu) for older people. This is done by comparing the responses of a treatment group, which gets treatment, with those of a control group, which does not. Here, the investigators would randomly choose subjects for their study from the population group to which they want to generalize, say, all males and females aged 65 or older. They would then randomly assign these individuals to the control and treatment groups. Note that interesting issues arise in the choice of subjects (not everyone wants to or is able to participate; could this introduce bias?) and in the concept of a control group (are these seniors then at greater risk of getting the flu?).
Describing center, spread, and shape is essential to the analysis of both univariate and bivariate data. Students should be able to use a variety of summary statistics and graphical displays to analyze these characteristics.
The shape of a distribution of a single measurement variable can be analyzed using graphical displays such as histograms, dotplots, stem-and-leaf plots, or box plots. Students should be able to construct these graphs and select from among them to assist in understanding the data. They should comment on the overall shape of the plot and on points that do not fit the general shape. By examining these characteristics of the plots, students should be better able to explain differences in measures of center (such as mean or median) and spread (such as standard deviation or interquartile range). For example, students should recognize that the statement "the mean score on a test was 50 percent" may cover several situations, including the following: all scores are 50 percent; half the scores are 40 percent and half the scores are 60 percent; half the scores are 0 percent and half the scores are 100 percent; one score is 100 percent and 50 scores are 49 percent. Students should also recognize that the sample mean and median can differ greatly for a skewed distribution. They should understand that for data that are identified by categories (for example, gender, favorite color, or ethnic origin), bar graphs, pie charts, and summary tables often display information about the relative frequency or percent in each category.
Students should learn to apply their knowledge of linear transformations from algebra and geometry to linear transformations of data. They should be able to explain why adding a constant to all observed values in a sample changes the measures of center by that constant but does not change measures of spread or the general shape of the distribution. They should also understand why multiplying each observed value by the same constant multiplies the mean, median, range, and standard deviation by the same factor (see the related discussion in the "Reasoning and Proof" section of this chapter).
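A brief sketch of this point, using a small made-up data set (the numbers echo the test-score example above but are otherwise arbitrary; numpy is assumed):

```python
# Shifting a data set changes centre but not spread; rescaling changes both.
import numpy as np

data = np.array([50, 40, 60, 0, 100, 49, 49], dtype=float)   # made-up scores

def summary(x):
    return x.mean(), np.median(x), x.std(), np.ptp(x)

print(summary(data))
print(summary(data + 10))   # mean and median rise by 10; std and range unchanged
print(summary(3 * data))    # every one of these summaries is multiplied by 3
```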
The methods used for representing univariate measurement data also can be adapted to represent bivariate data where one variable is categorical and the other is a continuous measurement. The levels of the categorical variable split the measurement variable into groups. Students can use parallel box plots, back-to-back stem-and-leaf, or same-scale histograms to compare the groups. The following problem from Moore (1990, pp. 1089) illustrates conclusions that can be drawn from such comparisons:
U.S. Department of Agriculture regulations group hot dogs into three types: beef, meat, and poultry. Do these types differ in the number of calories they contain? The three boxplots below display the distribution of calories per hot dog among brands of the three types. The box ends mark the quartiles, the line within the box is the median, and the whiskers extend to the smallest and largest individual observations. We see that beef and meat hot dogs are similar but that poultry hot dogs as a group show considerably fewer calories per hot dog.
Analyses of the relationships between two sets of measurement data are central in high school mathematics. These analyses involve finding functions that "fit" the data well. For instance, students could examine the scatterplot of bivariate measurement data shown in figure 7.23 and consider what type of function (e.g., linear, exponential, quadratic) might be a good model. If the plot of the data seems approximately linear, students should be able to produce lines that fit the data, to compare several such lines, and to discuss what best fit might mean. This analysis includes stepping back and making certain that what is being done makes sense practically.
The dashed vertical line segments in figure 7.23 represent residuals, the differences between the y-values predicted by the linear model and the actual y-values, for three data points. Teachers can help students explore several ways of using residuals to define best fit. For example, a candidate for best-fitting line might be chosen to minimize the sum of the absolute values of residuals; another might minimize the sum of squared residuals. Using dynamic software, students can change the position of candidate lines for best fit and see the effects of those changes on squared residuals. The line illustrated in figure 7.23, which minimizes the sum of the squares of the residuals, is called the least-squares regression line. Using technology, students should be able to compute the equation of the least-squares regression line and the correlation coefficient, r.
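The following sketch shows the kind of computation students might do with technology: it fits a least-squares line to a small made-up screens-versus-revenue data set (these are not the data behind figure 7.23) and reports the residuals and the correlation coefficient r; numpy is assumed.

```python
# Least-squares fit, residuals and correlation for an illustrative data set.
import numpy as np

screens = np.array([500, 800, 1200, 1500, 2000, 2600], dtype=float)
revenue = np.array([80, 160, 300, 410, 570, 760], dtype=float)

slope, intercept = np.polyfit(screens, revenue, 1)
residuals = revenue - (slope * screens + intercept)
r = np.corrcoef(screens, revenue)[0, 1]

print(slope, intercept)
print(residuals)                   # observed minus predicted y-values
print(r, np.sum(residuals**2))     # r near 1; least squares minimises this sum
```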
Students should understand that the correlation coefficient r gives information about (1) how tightly packed the data are about the regression line and (2) about the strength of the relationship between the two variables. Students should understand that correlation does not imply a cause-and-effect relationship. For example, the presence of certain kinds of eye problems and the loss of sensitivity in people's feet can be related statistically. However, the correlation may be due to an underlying cause, such as diabetes, for both symptoms rather than to one symptom's causing the other.
Once students have determined a model for a data set, they can use the model to make predictions and recognize and explain the limitations of those predictions. For example, the regression line depicted in figure 7.23 has the equation y = 0.33x − 93.9, where x represents the number of screens and y represents box-office revenues (in units of $10 000). To help students understand the meaning of the regression line, its role in making predictions and inferences, and its limitations and possible extensions, teachers might ask questions like the following:
A parameter is a single number that describes some aspect of an entire population, and a statistic is an estimate of that value computed from some sample of the population. To understand terms such as margin of error in opinion polls, it is necessary to understand how statistics, such as sample proportions, vary when different random samples are chosen from a population. Similarly, sample means computed from measurement data vary according to the random sample chosen, so it is important to understand the distribution of sample means in order to assess how well a specific sample mean estimates the population mean.
Understanding how to draw inferences about a population from random samples requires understanding how those samples might be distributed. Such an understanding can be developed with the aid of simulations. Consider the following situation:
Suppose that 65% of a city's registered voters support Mr. Blake for mayor. How unusual would it be to obtain a random sample of 20 registered voters in which at most 8 support Mr. Blake?
Here the parameter for the population is known: 65 percent of all registered voters support Mr. Blake. The question is, How likely is a random sample with a very different proportion (at most 8 out of 20, or 40%) of supporters? The probability of such a sample can be approximated with a simulation. Figure 7.24 shows the results of drawing 100 random samples of size 20 from a population in which 65 percent support Mr. Blake.
In the situation just described, a parameter of the population was known and the probability of a particular sample characteristic was estimated in order to understand how sampling distributions work. However, in applications of this idea in real situations, the information about a population is unknown and a sample is used to project what that information might be without having to check all the individuals in the population. For example, suppose that the proportion of registered voters supporting Mr. Blake was unknown (a realistic situation) and that a pollster wanted to find out what that proportion might be. If the pollster surveyed a sample of 20 voters and found that 65 percent of them support the candidate, is it reasonable to expect that about 65 percent of all voters support the candidate? What if the sample was 200 voters? 2000 voters? As indicated above, the proportion of voters who supported Mr. Blake could vary substantially from sample to sample in samples of 20. There is much less variation in samples of 200. By performing simulations with samples of different sizes, students can see that as sample size increases, variation decreases. In this way, they can develop the intuitive underpinnings for understanding confidence intervals.
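A sketch of such a simulation (the random seed and the number of repetitions are arbitrary choices; numpy is assumed):

```python
# Voter-sampling simulation: how rare is a sample of 20 with at most 8 supporters
# when 65% of the population supports the candidate?
import numpy as np

rng = np.random.default_rng(1)
n_samples, sample_size, p = 10_000, 20, 0.65

supporters = rng.binomial(sample_size, p, size=n_samples)
print(np.mean(supporters <= 8))          # roughly 0.01: such a sample is rare

# Larger samples vary less around the population proportion of 65%
for size in (20, 200, 2000):
    proportions = rng.binomial(size, p, size=n_samples) / size
    print(size, proportions.std())
```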
A similar kind of reasoning about
the relationship between the characteristics of a sample and the population
from which it is drawn lies behind the use of sampling for monitoring
process control and quality in the workplace.
In high school, students can apply the concepts of probability to predict the likelihood of an event by constructing probability distributions for simple sample spaces. Students should be able to describe sample spaces such as the set of possible outcomes when four coins are tossed and the set of possibilities for the sum of the values on the faces that are down when two tetrahedral dice are rolled.
High school students should learn to identify mutually exclusive, joint, and conditional events by drawing on their knowledge of combinations, permutations, and counting to compute the probabilities associated with such events. They can use their understandings to address questions such as those in following series of examples.
The diagram below shows the results of a two-question survey administered to 80 randomly selected students at Highcrest High School.
High school students should learn to compute expected values. They can use their understanding of probability distributions and expected value to decide if the following game is fair:
You pay 5 chips to play a game. You roll two tetrahedral dice with faces numbered 1, 2, 3, and 5, and you win the sum of the values on the faces that are not showing.
Teachers can ask students to discuss whether they think the game is fair and perhaps have the students play the game a number of times to see if there are any trends in the results they obtain. They can then have the students analyze the game. First, students need to delineate the sample space. The outcomes are indicated in figure 7.25. The numbers on the first die are indicated in the top row. The numbers on the second die are indicated in the first column. The sums are given in the interior of the table. Since all outcomes are equally likely, each cell in the table has a probability of 1/16 of occurring.
If a player pays a five-chip fee to play the game, on average, the player will win 0.5 chips. The game is not statistically fair, since the player can expect to win.
Students can also use the sample space to answer conditional probability questions such as "Given that the sum is even, what is the probability that the sum is a 6?" Since ten of the sums in the sample space are even and three of those are 6s, the probability of a 6 given that the sum is even is 3/10.
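The analysis of the game can be reproduced by enumerating the sample space; in the sketch below each die is read as the face it lands on, i.e., the one face that is not showing:

```python
# Expected winnings and a conditional probability for the tetrahedral-dice game.
from itertools import product
from fractions import Fraction

faces = (1, 2, 3, 5)
sums = [a + b for a, b in product(faces, repeat=2)]   # 16 equally likely outcomes

expected_win = Fraction(sum(sums), len(sums)) - 5     # subtract the 5-chip fee
print(expected_win)                                   # 1/2 chip per game, so not fair

even = [s for s in sums if s % 2 == 0]
print(Fraction(even.count(6), len(even)))             # P(sum = 6 | sum even) = 3/10
```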
The following situation, adapted from Coxford et al. (1998, p. 469), could give rise to a very rich classroom discussion of compound events.
In a trial in Sweden, a parking officer testified to having noted the position of the valve stems on the tires on one side of a car. Returning later, the officer noted that the valve stems were still in the same position. The officer noted the position of the valve stems to the nearest "hour." For example, in figure 7.26 the valve stems are at 10:00 and at 3:00. The officer issued a ticket for overtime parking. However, the owner of the car claimed he had moved the car and returned to the same parking place.
The judge who presided over the trial made the assumption that the wheels move independently and the odds of the two valve stems returning to their previous "clock" positions were calculated as 144 to 1. The driver was declared to be innocent because such odds were considered insufficient; had all four valve stems been found to have returned to their previous positions, the driver would have been declared guilty (Zeisel 1968).
Students could also explore the effect of more-precise measurements on the resulting probabilities. They could calculate the probabilities if, say, instead of recording markings to the nearest hour on the clockface, the markings had been recorded to the nearest half or quarter hour. This line of thinking could raise the issue of continuous distributions and the idea of calculating probabilities involving an interval of values rather than a finite number of values. Some related questions are, How could a practical method of obtaining more-precise measurements be devised? How could a parking officer realistically measure tire-marking positions to the nearest clock half-hour? How could measurement errors be minimized? These could begin a discussion of operational definitions and measurement processes.
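Under the judge's independence assumption, the probabilities at the various measurement precisions mentioned here can be tabulated directly (a sketch):

```python
# Probability that the recorded valve-stem positions recur by chance, assuming
# each wheel rotates independently, at several measurement precisions.
from fractions import Fraction

def prob_same(wheels: int, positions: int) -> Fraction:
    return Fraction(1, positions) ** wheels

print(prob_same(2, 12))   # 1/144   two wheels, nearest hour
print(prob_same(4, 12))   # 1/20736 all four wheels
print(prob_same(2, 24))   # 1/576   two wheels, nearest half-hour
print(prob_same(2, 48))   # 1/2304  two wheels, nearest quarter-hour
```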
Students should be able to investigate the following question by using a simulation to obtain an approximate answer:
How likely is it that at most 25 of the 50 people receiving a promotion are women when all the people in the applicant pool from which the promotions are made are well qualified and 65% of the applicant pool is female?
Those students who pursue the study of probability will be able to find an exact solution by using the binomial distribution. Either way, students are likely to find the result rather surprising.
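For those students, a sketch of the exact binomial calculation (assuming each of the 50 promotions independently goes to a woman with probability 0.65):

```python
# Exact binomial probability that at most 25 of 50 promotions go to women.
from math import comb

n, p = 50, 0.65
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(26))
print(prob)   # about 0.02, so the outcome described would be quite unusual
```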
Copyright © 2000 by the National Council of Teachers of Mathematics. | http://www.fayar.net/east/teacher.web/Math/Standards/document/chapter7/data.htm | 13 |
60 | The Dutch units of measurement used today are those of the metric system. Before the 19th century, a wide variety of different weights and measures were used by the various Dutch towns and provinces. Despite the country's small size, there was a lack of uniformity. During the Dutch Golden Age, these weights and measures accompanied the Dutch to the farthest corners of their colonial empire, including South Africa, New Amsterdam and the Dutch East Indies. Units of weight included the pond, ons and last. There was also an apothecaries' system of weights. The mijl and roede were measurements of distance. Smaller distances were measured in units based on parts of the body – the el, the voet, the palm and the duim. Area was measured by the morgen, hont, roede and voet. Units of volume included the okshoofd, aam, anker, stoop, and mingel. At the start of the 19th century the Dutch adopted a unified metric system, but it was based on a modified version of the metric system, different from the system used today. In 1869, this was realigned with the international metric system. These old units of measurement have disappeared, but they remain a colourful legacy of the Netherlands' maritime and commercial importance and survive today in a number of Dutch sayings and expressions.
Historical units of measure
When Charlemagne was crowned Holy Roman Emperor in 800 AD, his empire included most of modern-day Western Europe including the Netherlands and Belgium. Charlemagne introduced a standard system of measurement across his domains using names such as "pound" and "foot". At the Treaty of Verdun, the empire was divided between Charlemagne's three grandsons and Lothair received the central portion, stretching from the Netherlands in the north to Burgundy and Provence in the south.
Further fragmentation followed, and with it the various parts of the empire modified their units of measure to suit the local lord. By the start of the religious wars, the territories that made up the Netherlands, still part of the Holy Roman Empire, had passed into the lordship of the King of Spain. Under the Treaty of Westphalia in 1648, the seven Protestant territories that owed a nominal allegiance to the Prince of Orange seceded from the Holy Roman Empire and established their own confederacy, but each kept its own system of measures.
- A pond was divided into sixteen ons. A pond was roughly about the same size as a modern pound. It was generally around 480 grams, but there was much variation from region to region. The most commonly used measure of weight was the Amsterdam pound.
- After the metric system was introduced in 1816, the word pond continued to be used, but for 1 kilogram. This doubling in size of the pond in one fell swoop created a good deal of confusion. The name "kilogram" was adopted in 1869, but the pond was only eliminated as a formal unit of measurement in 1937. Pond is still used today in everyday parlance to refer to 500 g, not far from its historical weight. The word pond is also used when referring to the pound used in English-speaking countries.
- An ons was 1/16 of a pond. An ons was generally around 30 grams, but there was much variation. The figures provided above for the weight of the various pounds used in the Netherlands can be divided by 16 to obtain the weights of the various ounces in use. After the metric system was introduced, the word ons continued to be used, but for 100 g. The ons was eliminated as a formal unit of measurement in 1937, but it is still used today in everyday parlance to refer to 100 g. In the Netherlands today the word ons does not commonly refer to its historical weight of around 30 g (the exact weight depending on where you were), but to 100 g.
Last or Scheepslast
- scheepslast – 4,000 Amsterdam pond = 1976.4 kg (2.1786 short tons)
- Meaning literally a "load", a last was essentially the equivalent of 120 cubic feet of shipping space. A last in the Dutch East India Company (VOC) in the 17th century was about the same as 1,250 kg, becoming later as much as 2,000 kg.
- In the Dutch fishery, a last was a measurement of the fish loaded into the various types of fishing boat in use (e.g. a bomschuit, buis, sloep or logger). The last of these could take 35 to 40 last of fish, the exact amount depending on the location. In the South Holland fishing villages of Scheveningen and Katwijk, it amounted to 17 crans (kantjes) of herring; in Vlaardingen 14 packed tons. A cran (kantje) held about 900 to 1,000 herring. In Flanders a last was about 1,000 kg of herring. The term fell out of use when the herring fishery disappeared.
- In the Netherlands (as in English-speaking countries) there was an apothecaries' system of weights.
|Unit|Symbol|Equal to|Grein|Grams|
|medicinal pound (medicinaal pond)|lb|12 ons|5760|373.241 72|
|medicinal ounce (medicinaal ons)|℥|8 drachmen|480|31.103 477|
|dram (drachme)|ℨ|3 scrupels|60|3.887 9346|
|scruple (scrupel)|℈|20 grein|20|1.295 9782|
|grain (grein)|gr.||1|0.064 79891|
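A small sketch converting these apothecaries' units to grams from the grain value in the table (the formatting choices are arbitrary):

```python
# Dutch apothecaries' weights converted to grams from the grein.
GREIN_G = 0.06479891   # grams per grein, as in the table above

units_in_grein = {
    "medicinaal pond": 5760,
    "medicinaal ons": 480,
    "drachme": 60,
    "scrupel": 20,
    "grein": 1,
}

for name, grein in units_in_grein.items():
    print(f"{name:>15}: {grein * GREIN_G:.6f} g")
# the medicinal pound works out to about 373.24 g, matching the table
```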
- mijl (mile) = about 5 km (with variations)
- The Hollandse mijl was "an hour's walk" (één uur gaans) which makes it equivalent to the English league – about three English miles or five kilometres, though the exact distance varied from region to region. Other equivalents of the various miles in use were the French lieu marine (5,555 m), 20,000 Amsterdam feet (5,660 m) or 20,000 Rijnland feet (6,280 m). Between the introduction of the "Dutch metric system" (Nederlands metriek stelsel) in 1816 and the reforms in 1869, the word "mijl" was used to refer to a kilometre. The word mijl has since fallen into disuse except when referring to the "mile" used in English-speaking countries.
- The roede (literally, "rod") was generally somewhat smaller than the English rod, which is 16.5 feet (or 5.0292 metres). However, the length of a roede, and the number of voeten in a roede, varied from place to place. There could be anywhere from 7 to 21 voeten in a roede. The roede used in the Netherlands for the measurement of long distances was generally the Rijnland rod. Other rods included:
- one Rijnland rod (Rijnlandse roede) (= 12 Rijnland feet) was 3.767 m
- one Amsterdam rod (Amsterdamse roede) (= 13 Amsterdam feet) was 3.68 m
- one Bloois rod (Blooise roede) (= 12 feet) was 3.612 m
- one 's-Hertogenbosch rod ('s-Hertogenbosche roede) (= 20 feet) was 5.75 m
- one Hondsbos and Rijp rod (Hondsbosse en Rijp roede) was 3.42 m
- one Putten rod (Puttense roede) (= 14 feet) was 4.056 m
- one Schouw rod (Schouwse roede) (= 12 feet) was 3.729 m
- one Kings rod (in Friesland) (Konings roede) (= 12 feet) was 3.913 m
- one Gelderland rod (Geldersche roede) (= 14 feet) was 3.807 m
- Today the word roede is not in common use in the Netherlands as a unit of measurement.
- The length represented by the Dutch ell was the distance of the inside of the arm (i.e. the distance from the armpit to the tip of the fingers), an easy way to measure length. The Dutch "ell", which varied from town to town (55–75 cm), was somewhat shorter than the English ell (114.3 cm). A selection of measurements is given below:
- In 1725 the Hague ell was fixed as the national standard for tax purposes, and from 1816 to 1869 the word el was used in the Netherlands to refer to the metre. In 1869 the word meter was adopted and the el disappeared, both as a word and as a unit of measurement.
- The voet ("foot") was of the same order of magnitude as the English foot (30.48 cm), but its exact size varied from city to city and from province to province. There were 10, 11, 12 or 13 duimen (inches) in a voet, depending on the city's local regulations. The Rijnland foot, which had been in use since 1621, was the most commonly used voet in both the Netherlands and parts of Germany. In 1807, de Gelder measured the copy of the Rijnland foot in the Leiden observatory to be 0.3139465 m, while Eytelwein found that the master copy in use in Germany was 0.313853543 m – a difference of 0.03%. In the seventeenth and eighteenth centuries Dutch settlers took the Rijnland foot to the Cape Colony. In 1859, by which time the colony had passed into British control, the Cape foot was calibrated against the English foot and legally defined as 1.033 English feet (0.314858 m).
- The following is a partial list of the various voeten in use in the Netherlands:
- one Rijnland foot (Rijnlandse voet) (=12 Rijnland inches) was 31.4 cm
- one Amsterdam foot (Amsterdamse voet) (= 11 Amsterdam inches) was 28.3133 cm
- one Bloois foot (Blooise voet) was 30.1 cm
- one 's-Hertogenbosch foot ('s-Hertogenbossche voet) was 28.7 cm
- one Hondsbos and Rijp foot (Honsbossche en Rijpse voet) was 28.5 cm
- one Schouw foot (Schouwse voet) was 31.1 cm
- one Gelderland foot (Geldersche voet) was 29.2 cm
- Today the word voet is not in common use in the Netherlands as a unit of measurement, except when referring to the English foot.
- grote palm (large palm) – 9.6 cm; after 1820, 10 cm
- The duim ("thumb", but translated as "inch") was about the width of the top phalanx of the thumb of an adult man. It was very similar to the length of the English inch (2.54 cm). Its exact length and definition varied from region to region, but was usually one twelfth of a voet, though the Amsterdamse duim was one eleventh of an Amsterdamse voet.
- When the "Dutch metric system" (Nederlands metriek stelsel) was introduced in 1820, the word duim was used for the centimetre, but it was dropped in 1870. Today the word duim is not in common use in the Netherlands as a unit of measurement except when referring to the English inch. The word is still used in certain expressions such as "drieduims pijp" (three-inch pipe) and "duimstok" (ruler or gauge).
- morgen was 8,516 square metres (with variations).
- "Morgen" means "morning" in Dutch. A morgen of land represented the amount of land that could be ploughed in a morning. The exact size varied from region to region. The number of roede in a morgen also varied from place to place, and could be anywhere from 150 to 900.
- one Rijnland morgen (Rijnlandse morgen) = 8,516 square metres (Divided into 6 honts. A hont was divided into 100 square Rijnland rods. So there were 600 Rijnland rods in a morgen. A Rijnland rod was divided into 144 square Rijnland feet.)
- one Bilt morgen (Biltse morgen) = 9,200 square metres
- one Gelderland morgen (Gelderse morgen) = 8,600 square metres
- one Gooi morgen (Gooise morgen) = 9,800 square metres
- one 's-Hertogenbosch morgen (Bossche morgen) = 9,930 square metres (divided into 6 loopense = 600 square roede = 240,000 square feet)
- one Veluwe morgen (Veluwse morgen) = 9,300 square metres
- one Waterland morgen (Waterlandse morgen) = 10,700 square metres
- one Zijp or Schermer morgen (Zijper of Schermer morgen) = 8,516 square metres
- During the French occupation, measurements were standardised and regional variations eliminated. Initially, the Napoleonic king Louis Napoleon decreed in 1806 that the Rijnland morgen would be used throughout the country, but this only lasted a few years. It wasn't long before the metric system was introduced. Since then land has been measured in square metres (hectares, ares and centiares).
- A hont was made up of 100 roede. The exact size of a hont of land varied from place to place, but the Rijnland hont was 1,400 square metres. Another name for hont was "honderd", a Dutch word meaning "hundred". The word hond is derived from the earlier Germanic word hunda, which meant "hundred" (or "dog"). After the metric system was introduced in the 19th century, the measurement fell into disuse.
- A square roede was also referred to as a roede. Roede (or roe) was both an area measurement as well as a linear measurement. The exact size of a roede depended on the length of the local roede, which varied from place to place. The most common roede used in the Netherlands was the Rijnland rod.
- one Rijnland rod (Rijnlandse roede) was 14.19 m²
- one Amsterdam rod (Amsterdamse roede) was 13.52 m²
- one 's-Hertogenbosch rod (Bossche roede) was 33.1 m²
- one Breda rod (Bredase roede) was 32.26 m²
- one Groningen rod (Groningse roede) was 16.72 m²
- one Hondsbos rod (Hondsbosse roede) was 11.71 m²
- When the Dutch metric system (Nederlands metriek stelsel) was introduced in 1816, the old names were used for the new metric measures. An are was referred to as a "square rod" (vierkante roede). The rod and the square rod were abandoned by 1937, but the Rijnland rod (Rijnlandse Roede), abbreviated as "RR²", is still used as a measurement of surface area for flowerbulb fields.
- A square voet was also called a voet. The word voet (meaning "foot") could refer to a foot or to a square foot. The exact size of a voet depended on the length of the local voet, which changed from region to region. The most commonly used voet in the Netherlands was the Rijnland foot.
- The Dutch measures of volume, as with all other measures, varied from locality to locality (just as modern US and UK measures of volume differ from one another). The modern-day equivalents are therefore only approximate, and equating litres with quarts will not unduly distort the results (1 litre = 1.057 US quarts = 0.880 UK quarts).
- An okshoofd (earlier spelling: oxhoofd) was a measurement of volume representing the volume held by a large barrel of wine. The measurement was also used for vinegar, tobacco and sugar, and is still used by businesses in the wine and spirits trade. There were six ankers in an okshoofd.
- There is a saying in Dutch: "You can't draw clean wine from an unclean oxhead". (Men kan geen reine wijn uit een onrein okshoofd tappen.)
- aam – 4 ankers = 155 L
- There were four ankers in an aam. It was used for measuring the volume of wine. The size of an aam varied from place to place. It was anything from 141 to 160 litres.
- anker (anchor) = approximately 38.75 L
- An anker was a measure of volume representing the volume held in a small cask holding around 45 bottles.
- stoop – 1/16 anker = 2.4 L
- mingel – 1/2 stoop = approximately 1.21 L
Dutch metric system
In 1792 the southern part of the Netherlands was incorporated into the First French Republic and in 1807 the rest of the Netherlands was incorporated into what had now become the First French Empire and as a result the Netherlands was forced to accept the French units of measurement. In 1812 France replaced the original metric system with the mesures usuelles.
Under the Congress of Vienna in 1815, the Kingdom of the Netherlands which included Belgium and Luxembourg was established as a buffer state against France. Under the Royal decree of 27 March 1817 (Koningklijk besluit van den 27 Maart 1817), the newly-formed Kingdom of the Netherlands abandoned the mesures usuelles in favour of the "Dutch" metric system (Nederlands metrisch stelsel) in which metric units were given the names of units of measure that were then in use. Examples include:
- 1 mijl (mile) = 1 kilometre (1 statute mile = 1.609 km)
- 1 roede (rood) = 10 metres
- 1 el (ell) = 1 metre (1 English ell of 45 in = 1.143 m)
- 1 palm (hand) = 10 centimetres (1 English hand = 10.16 cm)
- 1 duim (thumb/inch) = 1 centimetre (1 inch = 2.54 cm)
- 1 streep (line) = 1 millimetre (1 English line = 2.12 mm)
- 1 bunder = 1 hectare
- 1 vierkante roede (square rod) = 1 are or 100 m2
- 1 wisse or teerling el = 1 cubic metre.
- 1 mud (bushel) = 100 litres
- 1 kop (cup) = 1 litre (1 Australian cup = 250 ml)
- 1 maatje (small measure) = 100 millilitres
- 1 vingerhoed (thimble) = 10 millilitres
- 1 pond (pound) = 1 kilogram (1 pound avoirdupois = 0.454 kg)
- (though in modern colloquial speech, 500 g is also known as a pond)
- 1 ons (ounce) = 100 grams (1 ounce avoirdupois = 28.35 g)
- 1 lood (lead)= 10 grams
- 1 wigtje (small weight) = 1 gram
- 1 korrel (grain) = 0.1 gram
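Since the 1817 system simply renamed metric quantities, the list above amounts to a lookup table. The following minimal Python sketch shows how such a table can be used for conversions; the function name and the selection of units are illustrative only, and cubic measures such as the wisse are omitted.

```python
# Approximate lookup for the 1817 "Dutch metric" names listed above,
# mapped to an SI unit and a multiplying factor.
DUTCH_METRIC_1817 = {
    "mijl": ("m", 1000.0),              # kilometre
    "roede": ("m", 10.0),
    "el": ("m", 1.0),
    "palm": ("m", 0.10),
    "duim": ("m", 0.01),
    "streep": ("m", 0.001),
    "bunder": ("m^2", 10000.0),         # hectare
    "vierkante roede": ("m^2", 100.0),  # are
    "mud": ("L", 100.0),
    "kop": ("L", 1.0),
    "maatje": ("L", 0.1),
    "vingerhoed": ("L", 0.01),
    "pond": ("kg", 1.0),
    "ons": ("kg", 0.1),
    "lood": ("kg", 0.01),
    "wigtje": ("kg", 0.001),
    "korrel": ("kg", 0.0001),
}

def to_si(value, unit):
    """Convert a quantity in an 1817 Dutch unit to its SI equivalent."""
    si_unit, factor = DUTCH_METRIC_1817[unit]
    return value * factor, si_unit

print(to_si(2, "mijl"))   # (2000.0, 'm')
print(to_si(3, "ons"))    # (0.30000000000000004, 'kg'), i.e. 300 g
```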
In 1816, the Netherlands and France were the only countries in the world that were using variations of the metric system. By the late 1860s, the German Zollverein and many other neighbouring countries had adopted the metric system, so in 1869 the modern names were adopted (Wet van 7 April 1869, Staatsblad No.57). A few of the older names remained officially in use, but they were eliminated when the system was further standardised by the 1937 Act on Weights and Measures (IJkwet), though the pond is now used colloquially to mean half a kilogram.
In 1830 the Belgians revolted against Dutch rule and under the Treaty of London of 1839 Belgian independence was recognized. The boundary agreed in 1839 is the current Belgian – Dutch boundary.
Modern metric system
Today the Netherlands uses the International System of Units (SI).
The metric system in the Netherlands has virtually the same nomenclature as in English, except:
On 30 October 2006 the Weights and Measures Act was replaced by the Metrology Act. The organisation currently responsible for weights and measures in the Netherlands is a private company called the Nederlands Meetinstituut (NMi). Literally, this means "Dutch Institute of Measures", but the organisation uses its Dutch name in English. The company was created in 1989 when the Metrology Service (Dienst van het IJkwezen) was privatised. At first the sole shareholder was the Dutch government, but in 2001 the sole shareholder became TNO Bedrijven, a holding company for TNO, the Dutch Organisation for Applied Scientific Research.
See also
- Much of the information on this page was obtained from various unfootnoted articles found on the Dutch version of Wikipedia, including "Metriek stelsel", "Nederlands metriek stelsel", "Pond (massa)", "Ons (massa)", "Last", "Medicinaal pond", "Mijl (Nederland)", "Roede (lengte)", "El (lengtemaat)", "Voet (lengte)", "Duim(lengte)", "Anker", "Aam", "Morgen" and "Roede" and "Hont". Some of the information was also found in other articles on the English Wikipedia, including "Apothecaries' system". In accordance with Wikipedia policy to avoid references to other Wikipedia articles, the source of this information is not footnoted in each sentence.
- Charles Ralph Boxer (1959). The Dutch Seaborne Empire 1600–1800. Hutchinson. OCLC 11348150. Appendix
- VOC Glossarium
- A. Hoogendijk Jz., De grootvisserij op de Noordzee, 1895
- Piet Spaans, Bouwteelt, 2007
- R. Degrijse, Vlaanderens haringbedrijf, 1944
- de VOC site – Woordenlijst – Navigatie (the VOC site – Vocabulary – Navigation) – (in Dutch)
- de Gelder, page 167
- de Gelder, page 169
- de Gelder, page 164
- "Cape Foot". Sizes. Retrieved 2011-12-26.
- "Oude maten en gewichten" [Old measures and weights] (data taken from Mariska van Venetië, Alles wat u beslist over Nederland moet weten, Uitgeverij Bert Bakker, Amsterdam, 2004). Allesopeenrij – Nederland in lijsten [Everything in a row – The Netherlands in lists]. http://www.allesopeenrij.nl/lijsten/wetenschap/oudematen_gewichten.html. Retrieved 2010-02-06. Follow the link "verkeer & ruimte" and then "oude maten en gewichten".
- Universität Heidelberg – Hund
- "Home Page (English)". "De Oude Flesch" (A society dedicated to the collecting of historic Dutch bottles). Retrieved 2010-05-01.
- de Gelder, pages 155–157
- NMi website
- "History". Dutch Metrology Institute/Nederlands Metrologie Instituut (NMI). Retrieved 10 November 2012.
- W.C.H. Staring (1902). De binnen- en buitenlandsche maten, gewichten en munten van vroeger en tegenwoordig, met hunne onderlinge vergelijkingen en herleidingen, benevens vele andere, dagelijks te pas komende opgaven en berekeningen. (in Dutch) (Vierde, herziene en veel vermeerderde druk ed.).
- J.M. Verhoef (1983). De oude Nederlandse maten en gewichten [Old Dutch weights and measures] (in Dutch) (2e druk ed.). P.J. Meertens-Instituut voor dialectologie, volkskunde en naamkunde van de Koninklijke Nederlande Akademie van Wetenschappen.
- Jacob de Gelder (1824). Allereerste Gronden der Cijferkunst [Introduction to Numeracy] (in Dutch). 's Gravenhage and Amsterdam: de Gebroeders van Cleef. pp. 163–176. Retrieved 2011-03-02.
- NMi (Nederlands Meetinstituut). There is some information in English, but very little on the historical system.
- VSL Dutch Metrology Institute
- Cor Snabel's page on Old Dutch Measures (A comprehensive collection of links and information.)
- Pieter Simons' page on "Oude Maten" (Dutch only)
- Oscar van Vlijmen's page on "Historische eenheden Nederland en België" (Dutch only)
- Dutch Weights and Measures Collectors Society
| http://www.digplanet.com/wiki/Dutch_units_of_measurement | 13
80 | The material covered in these notes is designed to span 12 to 16 weeks. Each subpage contains about 3 hours of material to read through carefully, and will require additional time to properly absorb.
Introduction - Linear Equations
In this course we will investigate solutions to linear equations, vectors and vector spaces and connections between them.
Of course you're familiar with basic linear equations from Algebra. They consist of equations such as y = mx + b (a line written in slope-intercept form), and you're hopefully familiar with all the usual problems that go along with this sort of equation, such as how to find the slope and how to place the equation in point-slope form, standard form, or slope-intercept form.
All of these ideas are useful, but we will, at first, be interested in systems of linear equations. A system of equations is some collection of equations that are all expected to hold at the same time. Let us start with an example of such a system.
Usually the first questions to ask is are there any solutions to a system of equations. By this we mean is there a pair of numbers so that both equations hold when you plug in this pair. For example, in the above system, if you take and then you can check that both equations hold for this pair of numbers:
Notice that we are asking for the same pair of numbers to satisfy both equations. The first thing to realize is that just because you ask for two equations to hold at the same time doesn't mean it is always possible. Consider the system:
Clearly there is a problem. If and then you would have , and so . Which is absurd. Just because we would like there to be a pair of numbers that satisfy both equations doesn't mean that we will get what we like. One of the main goals of this course will be to understand when systems of equations have solutions, how many solutions they have, and how to find them.
In the first example above there is an easy way to find that and is the solution directly from the equations. If you add these two equations together, you can see that the y's cancel each other out. When this happens, you will get , or . Substituting back into the above, we find that . Note that this is the only solution to the system of equations.
Frequently one encounters systems of equations with more than just two variables. For example, one may be interested in finding a solution to the system:
Solving Linear Systems Algebraically
One was mentioned above, but there are other ways to solve a system of linear equations without graphing.
If you get a system of equations that looks like this:
You can switch around some terms in the first to get this:
Then you can substitute that into the bottom one so that it looks like this:
Then, you can substitute 2 into an x from either equation and solve for y. It's usually easier to substitute it in the one that had the single y. In this case, after substituting 2 for x, you would find that y = 7.
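The two equations of this example were lost from the text above, so the short Python sketch below uses a stand-in system chosen to be consistent with the x = 2, y = 7 solution just mentioned; the equations y − x = 5 and x + y = 9 are our own illustration, not the original ones.

```python
# Stand-in system, consistent with the solution x = 2, y = 7 described above:
#     y - x = 5        ->  rearrange:   y = x + 5
#     x + y = 9        ->  substitute:  x + (x + 5) = 9  ->  2x = 4  ->  x = 2
def solve_by_substitution():
    x = (9 - 5) / 2   # from 2x + 5 = 9
    y = x + 5         # back-substitute into the rearranged first equation
    return x, y

x, y = solve_by_substitution()
print(x, y)                        # 2.0 7.0
print(y - x == 5 and x + y == 9)   # True: the same pair satisfies both equations
```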
Thinking in terms of matrices
Much of finite elements revolves around forming matrices and solving systems of linear equations using matrices. This learning resource gives you a brief review of matrices.
Suppose that you have a linear system of equations
Matrices provide a simple way of expressing these equations. Thus, we can instead write
An even more compact notation is $\mathbf{A}\mathbf{x} = \mathbf{b}$.
Here $\mathbf{A}$ is an $n \times n$ matrix while $\mathbf{x}$ and $\mathbf{b}$ are $n \times 1$ matrices. In general, an $m \times n$ matrix is a set of numbers arranged in $m$ rows and $n$ columns.
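As a concrete, made-up illustration of the $\mathbf{A}\mathbf{x} = \mathbf{b}$ form, the NumPy sketch below builds a small coefficient matrix and right-hand side and solves the system numerically.

```python
import numpy as np

# A 3x3 coefficient matrix and a 3x1 right-hand side (values chosen arbitrarily
# so that the exact solution is x = [1, 2, 3]).
A = np.array([[2.0,  1.0, -1.0],
              [1.0,  3.0,  2.0],
              [1.0, -1.0,  4.0]])
b = np.array([[1.0], [13.0], [11.0]])

x = np.linalg.solve(A, b)            # solves A x = b
print(A.shape, x.shape, b.shape)     # (3, 3) (3, 1) (3, 1)
print(x.ravel())                     # [1. 2. 3.]
print(np.allclose(A @ x, b))         # True
```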
Practice Exercises
Types of Matrices
Common types of matrices that we encounter in finite elements are:
- a row vector that has one row and $n$ columns.
- a column vector that has $n$ rows and one column.
- a square matrix that has an equal number of rows and columns.
- a diagonal matrix, which is a square matrix with only the diagonal elements ($A_{ii}$) nonzero.
- the identity matrix ($\mathbf{I}$), which is a diagonal matrix with each of its nonzero elements ($I_{ii}$) equal to 1.
- a symmetric matrix, which is a square matrix with elements such that $A_{ij} = A_{ji}$.
- a skew-symmetric matrix, which is a square matrix with elements such that $A_{ij} = -A_{ji}$.
Note that the diagonal elements of a skew-symmetric matrix have to be zero: $A_{ii} = 0$.
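These definitions translate directly into code; the NumPy sketch below, with arbitrary example entries, constructs each type and checks the defining properties.

```python
import numpy as np

I = np.eye(3)                        # identity matrix: diagonal, nonzero entries all equal to 1
D = np.diag([2.0, 5.0, 7.0])         # diagonal matrix: only the A_ii are nonzero
S = np.array([[1.0, 4.0, 5.0],
              [4.0, 2.0, 6.0],
              [5.0, 6.0, 3.0]])      # symmetric: A_ij == A_ji
K = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  3.0],
              [ 1.0, -3.0,  0.0]])   # skew-symmetric: A_ij == -A_ji

print(np.array_equal(S, S.T))        # True
print(np.array_equal(K, -K.T))       # True
print(np.all(np.diag(K) == 0))       # True: the diagonal of a skew-symmetric matrix is zero
```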
Matrix addition
Let $\mathbf{A}$ and $\mathbf{B}$ be two $m \times n$ matrices with components $A_{ij}$ and $B_{ij}$, respectively. Then the sum $\mathbf{A} + \mathbf{B}$ is the $m \times n$ matrix with components $[\mathbf{A} + \mathbf{B}]_{ij} = A_{ij} + B_{ij}$.
Multiplication by a scalar
Let $\mathbf{A}$ be an $m \times n$ matrix with components $A_{ij}$ and let $\lambda$ be a scalar quantity. Then $[\lambda\mathbf{A}]_{ij} = \lambda A_{ij}$.
Multiplication of matrices
Let $\mathbf{A}$ be an $m \times n$ matrix with components $A_{ij}$. Let $\mathbf{B}$ be a $p \times q$ matrix with components $B_{ij}$.
The product $\mathbf{A}\mathbf{B}$ is defined only if $n = p$; it is an $m \times q$ matrix with components $[\mathbf{A}\mathbf{B}]_{ij} = \sum_{k=1}^{n} A_{ik}B_{kj}$.
Similarly, the product $\mathbf{B}\mathbf{A}$ is defined only if $q = m$; it is a $p \times n$ matrix with components $[\mathbf{B}\mathbf{A}]_{ij} = \sum_{k=1}^{q} B_{ik}A_{kj}$.
Clearly, $\mathbf{A}\mathbf{B} \ne \mathbf{B}\mathbf{A}$ in general, i.e., the matrix product is not commutative.
However, matrix multiplication is distributive. That means $\mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}$.
The product is also associative. That means $\mathbf{A}(\mathbf{B}\mathbf{C}) = (\mathbf{A}\mathbf{B})\mathbf{C}$.
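The shape rule and the algebraic properties above are easy to check numerically; the sketch below uses random matrices of arbitrary sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 3))
B = rng.random((3, 4))
C = rng.random((4, 2))

print((A @ B).shape)                            # (2, 4): an (m x n)(n x q) product is m x q
print(np.allclose((A @ B) @ C, A @ (B @ C)))    # True: associative

P = rng.random((3, 3))
Q = rng.random((3, 3))
R = rng.random((3, 3))
print(np.allclose(P @ Q, Q @ P))                # False (in general): not commutative
print(np.allclose(P @ (Q + R), P @ Q + P @ R))  # True: distributive
```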
Transpose of a matrix
Let $\mathbf{A}$ be an $m \times n$ matrix with components $A_{ij}$. Then the transpose of the matrix is defined as the $n \times m$ matrix $\mathbf{A}^T$ with components $A_{ji}$. That is, $[\mathbf{A}^T]_{ij} = A_{ji}$.
An important identity involving the transpose of matrices is $(\mathbf{A}\mathbf{B})^T = \mathbf{B}^T\mathbf{A}^T$.
Determinant of a matrix
The determinant of a matrix is defined only for square matrices.
For a $2 \times 2$ matrix $\mathbf{A}$, we have $\det(\mathbf{A}) = A_{11}A_{22} - A_{12}A_{21}$.
For a $3 \times 3$ matrix, the determinant is calculated by expanding into minors as $\det(\mathbf{A}) = A_{11}(A_{22}A_{33} - A_{23}A_{32}) - A_{12}(A_{21}A_{33} - A_{23}A_{31}) + A_{13}(A_{21}A_{32} - A_{22}A_{31})$.
In short, the determinant of an $n \times n$ matrix $\mathbf{A}$ has the value $\det(\mathbf{A}) = \sum_{j=1}^{n} (-1)^{1+j} A_{1j} M_{1j}$,
where $M_{1j}$ is the determinant of the submatrix of $\mathbf{A}$ formed by eliminating row 1 and column $j$ from $\mathbf{A}$.
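The cofactor expansion described above can be written as a short recursive function; a minimal pure-Python sketch (fine for small matrices, far too slow for large ones) is:

```python
def det(M):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    if n == 2:
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    total = 0.0
    for j in range(n):
        # minor M_1j: delete row 1 (index 0) and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))   # -3.0
```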
Some useful identities involving the determinant are given below.
- If $\mathbf{A}$ is an $n \times n$ matrix, then $\det(\mathbf{A}^T) = \det(\mathbf{A})$.
- If $\lambda$ is a constant and $\mathbf{A}$ is an $n \times n$ matrix, then $\det(\lambda\mathbf{A}) = \lambda^n\det(\mathbf{A})$.
- If $\mathbf{A}$ and $\mathbf{B}$ are two $n \times n$ matrices, then $\det(\mathbf{A}\mathbf{B}) = \det(\mathbf{A})\det(\mathbf{B})$.
Inverse of a matrix
Let $\mathbf{A}$ be an $n \times n$ matrix. The inverse of $\mathbf{A}$ is denoted by $\mathbf{A}^{-1}$ and is defined such that
$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$,
where $\mathbf{I}$ is the identity matrix.
The inverse exists only if $\det(\mathbf{A}) \ne 0$. A singular matrix does not have an inverse.
An important identity involving the inverse is $(\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}$, since this leads to: $(\mathbf{A}\mathbf{B})(\mathbf{B}^{-1}\mathbf{A}^{-1}) = \mathbf{A}(\mathbf{B}\mathbf{B}^{-1})\mathbf{A}^{-1} = \mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$.
Some other identities involving the inverse of a matrix are given below.
- The determinant of a matrix is equal to the multiplicative inverse of the determinant of its inverse: $\det(\mathbf{A}) = 1/\det(\mathbf{A}^{-1})$.
- The determinant of a similarity transformation of a matrix is equal to the determinant of the original matrix: $\det(\mathbf{B}\mathbf{A}\mathbf{B}^{-1}) = \det(\mathbf{A})$.
We usually use numerical methods such as Gaussian elimination to compute the inverse of a matrix.
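A minimal sketch of such a numerical method is the Gauss–Jordan procedure below, which augments $\mathbf{A}$ with the identity matrix and row-reduces; it is written for clarity rather than robustness, and in practice a library routine such as numpy.linalg.inv would normally be used.

```python
import numpy as np

def gauss_jordan_inverse(A, tol=1e-12):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
    A = np.array(A, dtype=float)
    n, m = A.shape
    if n != m:
        raise ValueError("matrix must be square")
    aug = np.hstack([A, np.eye(n)])                        # augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))    # partial pivoting
        if abs(aug[pivot, col]) < tol:
            raise ValueError("matrix is singular (det = 0); no inverse exists")
        aug[[col, pivot]] = aug[[pivot, col]]              # swap rows
        aug[col] /= aug[col, col]                          # scale the pivot row so the pivot is 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]       # eliminate this column in the other rows
    return aug[:, n:]                                      # the right half is now A^-1

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
A_inv = gauss_jordan_inverse(A)
print(A_inv)                              # [[ 0.6 -0.7] [-0.2  0.4]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A A^-1 = I
```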
Eigenvalues and eigenvectors
A thorough explanation of this material can be found at Eigenvalue, eigenvector and eigenspace. However, for further study, let us consider the following examples:
- Let :
Which vector is an eigenvector for ?
We have , and
Thus, is an eigenvector.
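The matrices and vectors of these worked examples were lost from the text, so the NumPy sketch below uses a stand-in 2x2 matrix to show how such a check is done numerically: a nonzero vector v is an eigenvector of A if A v is a scalar multiple of v.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # stand-in matrix (not the one from the lost example)

def is_eigenvector(A, v):
    """Check whether a nonzero 2-component vector v is an eigenvector of the 2x2 matrix A."""
    w = A @ v
    # v is an eigenvector if A v is parallel to v, i.e. the 2x2 "cross term" vanishes
    return bool(np.any(v != 0)) and np.isclose(w[0] * v[1], w[1] * v[0])

print(is_eigenvector(A, np.array([1.0, 1.0])))    # True:  A v = 3 v
print(is_eigenvector(A, np.array([1.0, -1.0])))   # True:  A v = 1 v
print(is_eigenvector(A, np.array([1.0, 0.0])))    # False: A v = (2, 1) is not parallel to v
```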
- Is an eigenvector for ?
We have that since , is not an eigenvector for | http://en.wikiversity.org/wiki/Linear_algebra | 13 |
78 |
Deoxyribonucleic acid (DNA) is a very large biological molecule that is vital in providing information for the development and reproduction of living things. Every living organism has its own DNA sequence that is like a unique 'barcode' or 'fingerprint'. This inheritable variation in DNA is the most important factor driving evolutionary change over many generations. But, beyond these general characteristics, what "exactly" is DNA? What are the precise physical attributes of this molecule that make its role so centrally imposing in understanding life?
DNA is a long polymer of simple units called nucleotides, held together by a sugar phosphate backbone. Attached to each sugar molecule is a molecule of one of four bases; adenine (A), thymine (T), guanine (G) or cytosine (C), and the order of these bases on the DNA strand encodes information. In most organisms, DNA is a double-helix (or duplex molecule) consisting of two DNA strands coiled around each other, and held together by hydrogen bonds between bases. Because of the chemical nature of these bases, adenine always pairs with thymine and guanine always pairs with cytosine. This complementarity forms the basis of semi-conservative DNA replication — it makes it possible for DNA to be copied relatively simply, while accurately preserving its information content.
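Because each base pairs with exactly one partner, the sequence of one strand determines the sequence of the other. The short Python sketch below (the example sequence is arbitrary) computes this "reverse complement" of a strand.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    # The partner strand runs antiparallel, so complement each base and reverse the order.
    return "".join(COMPLEMENT[base] for base in reversed(strand.upper()))

seq = "ATGCGTTA"
print(reverse_complement(seq))                              # TAACGCAT
print(reverse_complement(reverse_complement(seq)) == seq)   # True: complementing twice restores the strand
```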
The entire DNA sequence of an organism is called its genome. In animals and plants, most DNA is stored inside the cell nucleus. In bacteria, there is no nuclear membrane around the DNA, which is in a region called the nucleoid. Some organelles in eukaryotic cells (mitochondria and chloroplasts) have their own DNA with a similar organisation to bacterial DNA. Viruses have a single type of nucleic acid, either DNA or RNA, directly encased in a protein coat.
Overview of biological functions
DNA contains the genetic information that is the basis for living functions including growth, reproduction and evolution. This information is held in segments of the DNA called genes that may span in size from scores of DNA base-pairs to many thousands of base-pairs. In eukaryotes (organisms such as plants, yeasts and animals whose cells have a nucleus) DNA usually occurs as several large, linear chromosomes, each of which may contain hundreds or thousands of genes. Prokaryotes (organisms such as common bacteria) generally have a single large circular chromosome, but often possess other miniature chromosomes called plasmids. The set of chromosomes in a cell makes up its genome; the human genome has about three billion base pairs of DNA arranged into 46 chromosomes and contains 20-25,000 genes.
There are many interactions that happen between DNA and other molecules to coordinate its functions. When cells divide, the genetic information must be duplicated to produce two daughter copies of DNA in a process called DNA replication. When a cell uses the information in a gene, the DNA sequence is copied into a complementary single strand of RNA in a process called transcription. Of the transcribed sequences, some are used to directly make a matching protein sequence by a process called translation (meaning translation from a nucleic acid polymer to an amino acid polymer). The other transcribed RNA sequences may have regulatory, structural or catalytic roles. This article introduces some of the functions and interactions that characterize the DNA molecules in cells, and touches on some of the more technological uses for this molecule.
Our understanding of the various ways in which genes play a role in cells has been continually revised throughout the history of genetics, starting from abstract concepts of inheritable particles whose composition was unknown. This has led to several modern different definitions of a gene. One of the most straightforward ways to define a gene is simply as a segment of DNA that is transcribed into RNA - that is - the gene is a unit of transcription. This definition encompasses genes for non-translated RNAs, such as ribosomal RNA (rRNA) and transfer RNA (tRNA), as well as messenger RNA (mRNA) which is used for encoding the sequences of proteins.
A second approach is to define a gene as a region of DNA that encodes a single polypeptide. By this definition, any particular mRNA transcription unit can cover more than one gene, and thus a mRNA can carry regions encoding one or more polypeptides. Such a multi-genic transcription unit is called an operon.
Other definitions include consideration of genes as units of biological function. This definition can include sites on DNA that are not transcribed, such as DNA sites at which regulatory and catalytically active proteins concerned with gene regulation and expression are located. Examples of such sites (loci, sing. locus) are promoters and operators. (Locus is a genetic term very similar in meaning to gene, and which refers to a site or region on a chromosome concerned with a particular function or trait.)
All of the cells in our body contain essentially the same DNA, with a few exceptions; red blood cells for example do not have a nucleus and contain no DNA. However although two cells may carry identical DNA, this does not make them identical, because the two cells may have different patterns of gene expression; only some genes will be active in each cell, and the level of activity varies between cells, and this is what makes different cell types different. The "level" of gene expression (for a given gene) is used sometimes to refer to the amount of mRNA made by the cell, and sometimes to refer to the amount of protein produced.
Every human has essentially the same genes, but has slightly different DNA; on average the DNA of two individuals differs at about three million bases. These differences are very rarely in the protein-coding sequences of genes, but some affect how particular genes are regulated — they may affect exactly where in the body a gene is expressed, how intensely it is expressed, or how expression is regulated by other genes or by environmental factors; these slight differences help to make every human being unique. By comparison, the genome of our closest living relative, the chimpanzee, differs from the human genome at about 30 million bases.
In eukaryotes, DNA is located mainly in the cell nucleus (there are also small amounts in mitochondria and chloroplasts). In prokaryotes, the DNA is in an irregularly shaped body in the cytoplasm called the nucleoid. The DNA is usually in linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The human genome has about three billion base pairs of DNA arranged into 46 chromosomes and contains 20-25,000 genes; the simple nematode C. elegans has almost as many genes (more than 19,000).
In many species, only a small fraction of the genome encodes protein: only about 1.5% of the human genome consists of protein-coding exons, while over 50% consists of non-coding repetitive sequences. Some have concluded that much of human DNA is "junk DNA", because most of the non-coding elements appear to have no function. Some other vertebrates, including the puffer fish Fugu, have very much more compact genomes, and (for multicellular organisms) there seems to be no consistent relationship between the size of the genome and the complexity of the organism. Some non-coding DNA sequences are now known to have a structural role in chromosomes. In particular, telomeres and centromeres contain few genes, but are important for the function and stability of chromosomes. An abundant form of non-coding DNA in humans is pseudogenes, which are copies of genes that have been disabled by mutation; these are usually just molecular 'fossils', but they can provide the raw genetic material for new genes.
A recent challenge to the long-standing view that the human genome consists of relatively few genes along with a vast amount of "junk DNA" comes from the ENCyclopedia Of DNA Elements (ENCODE) consortium. Their survey of the human genome shows that most of the DNA is transcribed into molecules of RNA. This broad pattern of transcription was unexpected, but whether these transcribed (but not translated) elements have any biological function is not yet clear.
- Further information: Replication of a circular bacterial chromosome
For an organism to grow, its cells must multiply, and this occurs by cell division: one cell splits to create two new 'daughter' cells. When a cell divides it must replicate its DNA so that the two daughter cells have the same genetic information as the parent cell. The double-stranded structure of DNA provides a simple mechanism for this replication. The two strands of DNA separate, and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This makes the complementary strand by finding the correct base (through complementary base pairing), and bonding it to the original strand. DNA polymerases can only extend a DNA strand in a 5' to 3' direction, so other mechanisms are needed to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell (usually) ends up with a perfect copy of its DNA. However, occasionally mistakes (called mutations) occur, contributing to the genetic variation that is the raw material for evolutionary change.
DNA replication requires a complex set of proteins, each dedicated to one of several different tasks needed to replicate this large molecule in an orderly and precise fashion. Further capacity for DNA replication with substantial rearrangement (which has major implications for understanding mechanisms of molecular evolution) is provided by mechanisms for DNA movement, inversion, and duplication, illustrated by various mobile DNAs such as transposons and proviruses.
Transcription and translation
Within most (but not all) genes, the nucleotides define a messenger RNA (mRNA) which in turn defines one or more protein sequences. This is possible because of the genetic code which translates the nucleic acid sequence to an amino-acid sequence. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides. For example, the sequences ACU, CAG and UUU in mRNA are translated to threonine, glutamine and phenylalanine respectively.
In transcription, the codons are copied into mRNA by RNA polymerase. This copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the mRNA to a specific aminoacyl-tRNA: an amino acid carried by a transfer RNA (tRNA). As there are four bases in three-letter combinations, there are 64 possible codons, and these encode the twenty standard amino acids. Most amino acids therefore have more than one possible codon (for example, ACU and ACA both code for threonine). There is one start codon (AUG), which also encodes methionine, and three 'stop' or 'nonsense' codons (UAA, UGA and UAG) that signify the end of the coding region.
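As a rough sketch of how the code is read, the Python fragment below translates a short message using a deliberately partial codon table (only the codons mentioned above plus a few others); a real implementation would use the full 64-codon standard table and handle reading frames properly.

```python
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe",
    "ACU": "Thr", "ACC": "Thr", "ACA": "Thr", "ACG": "Thr",
    "CAA": "Gln", "CAG": "Gln", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Translate an mRNA string, starting at the first AUG and stopping at a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []                                             # no start codon in this message
    protein = []
    for i in range(start, len(mrna) - 2, 3):                  # read one codon (three bases) at a time
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")    # '???' = not in this partial table
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("GGAUGACUCAGUUUUAAGC"))   # ['Met', 'Thr', 'Gln', 'Phe']
```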
Regulation of gene expression
How genes are regulated — how they are turned on or off — is an important topic in modern biology, and research continually yields surprising insights into how phenotypic traits and biological adaptations are determined.
In the 1950's, investigations of the bacterium Escherichia coli led to the recognition that the region upstream of a transcribed region provides a place for the enzyme RNA polymerase to attach to DNA and start transcribing RNA in the 5' to 3' direction of the nucleic acid chain. The site at which this occurs came to be called the promoter. Other regulatory proteins (such as repressors) influence transcription by binding to a region near to or overlapping the promoter, called the operator. In the early years of modern genetics, emphasis was given to transcriptional regulation as an important and common means of modulating gene expression, but today we realize that there are a wide range of mechanisms by which the expression of mRNA and proteins can be modulated by both external and internal signals in cells.
Physical and chemical properties
DNA is a long chain of repeating units called nucleotides (a nucleotide is a base linked to a sugar and one or more phosphate groups). The DNA chain is 22-26 Å (2.2-2.6 nm) wide. The nucleotides themselves are very small (just 3.3 Å long), but DNA can contain millions of them: the DNA in the largest human chromosome (chromosome 1) has 220 million base pairs.
In living organisms, DNA does not usually exist as a single molecule, but as a tightly-associated pair of molecules. These two long strands are entwined in the shape of a double helix. DNA can thus be thought of as an anti-parallel double helix. The nucleotide repeats contain both the backbone of the molecule, which holds the chain together, and a base, which interacts with the other DNA strand. The double helix is held together by hydrogen bonds between the bases attached to the two strands. If many nucleotides are linked together, as in DNA, the polymer is referred to as a polynucleotide.
The backbone of the DNA strand has alternating phosphate and sugar residues: the sugar is the pentose (five carbon) sugar 2-deoxyribose. The sugar molecules are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms in the sugar rings. Because these bonds are asymmetric, a strand of DNA has a 'direction'. In a double helix, the direction of the nucleotides in one strand is opposite to that in the other strand. This arrangement of DNA strands is called antiparallel. The asymmetric ends of a strand of DNA bases are referred to as the 5' (five prime) and 3' (three prime) ends. One of the major differences between DNA and RNA is the sugar: 2-deoxyribose is replaced by ribose in RNA.
The four bases in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T), and these bases are attached to the sugar/phosphate to form the complete nucleotide. Adenine and guanine are fused five- and six-membered heterocyclic compounds called purines, while cytosine and thymine are six-membered rings called pyrimidines. A fifth pyrimidine base, uracil (U), replaces thymine in RNA. Uracil is normally only found in DNA as a breakdown product of cytosine, but bacterial viruses contain uracil in their DNA. In contrast, following synthesis of certain RNA molecules, many uracils are converted to thymines. This occurs mostly on structural and enzymatic RNAs like tRNAs and ribosomal RNA.
The double helix is a right-handed spiral. As the DNA strands wind around each other, gaps between the two phosphate backbones reveal the sides of the bases inside (see animation). Two of these grooves twist around the surface of the double helix: the major groove is 22 Å wide and the minor groove is 12 Å wide. The edges of the bases are more accessible in the major groove, so proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove.
Each type of base on one strand of DNA forms a bond with just one type of base on the other strand, called 'complementary base pairing'. Purines form hydrogen bonds to pyrimidines; for example, A bonds only to T, and C bonds only to G. This arrangement of two nucleotides joined together across the double helix is called a base pair. In a double helix, the two strands are also held together by hydrophobic effects and by a variation of pi stacking. Hydrogen bonds can be broken and rejoined quite easily, so the two strands of DNA in a double helix can be pulled apart like a zipper, either by mechanical force or by high temperatures. Because of this complementarity, the information in the double-stranded sequence of a DNA helix is duplicated on each strand, and this is vital in DNA replication. The reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms.
The two types of base pairs form different numbers of hydrogen bonds; AT forms two hydrogen bonds, and GC forms three, so the GC base-pair is stronger than the AT pair. Thus, long DNA helices with a high GC content have strongly interacting strands, while short helices with high AT content have weakly interacting strands. Parts of the DNA double helix that need to separate easily tend to have a high AT content, making the strands easier to pull apart. The strength of this interaction can be measured by finding the temperature required to break the hydrogen bonds (their 'melting temperature'). When all the base pairs in a DNA double helix melt, the strands separate, leaving two single-stranded molecules in solution. These molecules have no single shape, but some conformations are more stable than others.
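The dependence of strand separation on base composition is why GC content is such a routinely computed quantity. The sketch below calculates the GC fraction and, purely as a rule of thumb, the "Wallace rule" estimate of melting temperature sometimes used for very short oligonucleotides (about 2 °C per A·T pair and 4 °C per G·C pair); the example sequence is arbitrary and the estimate should not be read as a measured value.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rough melting-temperature estimate (deg C) for short oligonucleotides."""
    seq = seq.upper()
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

primer = "ATGCGCGCTA"
print(gc_content(primer))   # 0.6
print(wallace_tm(primer))   # 32
```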
Sense and antisense
DNA is copied into RNA by RNA polymerases. A DNA sequence is called a "sense" sequence if it is copied by these enzymes (which only work in the 5' to 3' direction) and then translated into protein. The sequence on the opposite strand is complementary to the sense sequence and is called the "antisense" sequence. Sense and antisense sequences can co-exist on the same strand of DNA; in both prokaryotes and eukaryotes, antisense sequences are transcribed, and antisense RNAs might be involved in regulating gene expression.. (See Micro RNA, RNA interference, sRNA.)
Many DNA sequences in prokaryotes and eukaryotes (and more in plasmids and viruses) have overlapping genes which may both occur in the same direction, on the same strand (parallel) or in opposite directions, on opposite strands (antiparallel), blurring the distinction between sense and antisense strands. In these cases, some DNA sequences encode one protein when read from 5′ to 3′ along one strand, and a different protein when read in the opposite direction (but still from 5′ to 3′) along the other strand. In bacteria, this overlap may be involved in regulating gene transcription, while in viruses, overlapping genes increase the information that can be encoded within the small viral genome. Another way of reducing genome size is seen in some viruses that contain linear or circular single-stranded DNA.
DNA can be 'twisted' in a process called DNA supercoiling. In its "relaxed" state, a DNA strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted, the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix (positive supercoiling), the bases are held more tightly together; if it is twisted in the opposite direction (negative supercoiling), the bases come apart more easily. Most DNA has slight negative supercoiling that is introduced by topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
The conformation of a DNA molecule depends on its sequence, the amount and direction of supercoiling, chemical modifications of the bases, and also solution conditions, such as the concentration of metal ions. Accordingly, DNA can exist in several possible conformations, but only a few of these ("A-DNA", "B-DNA", and "Z-DNA") are thought to occur naturally. Of these, the "B" form is most common. The A form is a wider right-handed spiral, with a shallow and wide minor groove and a narrower and deeper major groove; this form occurs in dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, as well as in enzyme-DNA complexes. Segments of DNA where the bases have been modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in regulating transcription.
At the ends of the linear chromosomes, specialized regions called telomeres allow the cell to replicate chromosome ends using the enzyme telomerase. Without telomeres, a chromosome would become shorter each time it was replicated. These specialized 'caps' also help to protect the DNA ends from exonucleases and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA that contain several thousand repeats of a TTAGGG sequence. These sequences may stabilize chromosome ends by forming unusual quadruplex structures. Here, four guanine bases form a flat plate, through hydrogen bonding, and these plates then stack on top of each other to form a stable quadruplex. Other structures can also be formed, and the central set of four bases can come from either one folded strand, or several different parallel strands, each contributing one base to the central structure.
The expression of genes is influenced by the chromatin structure of a chromosome, and regions of heterochromatin (with little or no gene expression) correlate with the methylation of cytosine. These structural changes to the DNA are one type of epigenetic change that can alter chromatin structure, and they are inheritable. Epigenetics refers to features of organisms that are stable over successive rounds of cell division but which do not involve changes in the underlying DNA sequence. Epigenetic changes are important in cellular differentiation, allowing cells to maintain different characteristics despite containing the same genomic material. Some epigenetic features can be inherited from one generation to the next.
The level of methylation varies between organisms; the nematode C. elegans lacks any cytosine methylation, while up to 1% of the DNA of vertebrates contains 5-methylcytosine. Despite the biological importance of 5-methylcytosine, it is susceptible to spontaneous deamination, and methylated cytosines are therefore mutation 'hotspots'. Other base modifications include adenine methylation in bacteria and the glycosylation of uracil to produce the "J-base" in kinetoplastids.
DNA can be damaged by many different agents. Mutagens are agents which can produce genetic mutations - these are alterations of one DNA base to another base. Examples of mutagens include oxidizing agents, alkylating agents and high-energy electromagnetic radiation such as ultraviolet light and x-rays. Lesions of DNA, in which residues are changed to a structure that is not a normal feature of DNA, are different from mutations. However, damaged DNA can give rise to mutations, and indeed, some DNA repair processes are error prone and thus themselves generate mutations.
Ultraviolet radiation damages DNA mostly by producing thymine dimers. Oxidants such as free radicals or hydrogen peroxide can cause several forms of damage, including base modifications (particularly of guanosine) as well as double-strand breaks. It has been estimated that, in each human cell, about 500 bases suffer oxidative damage every day. Of these lesions, the most damaging are double-strand breaks, as they can produce point mutations, insertions and deletions from the DNA sequence, as well as chromosomal translocations.
Many mutagens intercalate into the space between two adjacent base pairs. These are mostly polycyclic, aromatic, planar molecules, and include ethidium, proflavin, daunomycin, doxorubicin and thalidomide. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strand by unwinding of the double helix. These structural modifications inhibit transcription and replication, causing both toxicity and mutations; as a result, DNA intercalators are often carcinogens. Nevertheless, because they can inhibit DNA transcription and replication, intercalators are also used in chemotherapy to suppress rapidly-growing cancer cells.
Interactions with proteins
All of the functions of DNA depend on its interactions with proteins. Some of these interactions are non-specific; others are specific, in that the protein can only bind to a particular DNA sequence. Some enzymes can also bind to DNA, and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.
Within chromosomes, DNA is held in complexes between DNA and structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to small basic proteins called histones, while in prokaryotes many types of proteins are involved. The histones form a disk-shaped complex called a nucleosome which has two complete turns of double-stranded DNA wrapped around it. These interactions are formed through basic residues in the histones making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation and acetylation. These changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins found in chromatin include the high-mobility group proteins, which bind preferentially to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into more complex chromatin structures.
Another group of DNA-binding proteins bind single-stranded DNA. In humans, replication protein A is the best-characterised member of this family, and is essential for most processes where the double helix is separated (including DNA replication, recombination and DNA repair). These proteins seem to stabilize single-stranded DNA and protect it from forming stem loops or being degraded by nucleases.
Other proteins bind to particular DNA sequences. The most intensively studied of these are the transcription factors. Each of these proteins binds to a particular set of DNA sequences and thereby activates or inhibits the transcription of genes with these sequences close to their promoters. Transcription factors do this in two ways. Some can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Other transcription factors can bind enzymes that modify the histones at the promoter; this changes the accessibility of the DNA template to the polymerase.
These DNA targets can occur throughout an organism's genome, so changes in the activity of one type of transcription factor in a given cell can affect the expression of many genes in that cell. Consequently, these proteins are often the targets of the signal transduction processes that mediate responses to environmental changes or cellular differentiation and development. The specificity of transcription factor interactions with DNA arises because the proteins make multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these interactions occur in the major groove, where the bases are most accessible.
Nucleases and ligases
Nucleases cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds (nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands). The most frequently-used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GAT|ATC-3′ and makes a cut at the vertical line. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
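The effect of a sequence-specific nuclease on one strand can be mimicked with a few lines of string handling; the sketch below simulates an EcoRV-style blunt cut in the middle of each GATATC site (the input sequence is invented).

```python
def digest(seq, site="GATATC", cut_offset=3):
    """Split a sequence at each occurrence of a recognition site (blunt cut at cut_offset)."""
    fragments, start = [], 0
    while True:
        hit = seq.find(site, start)
        if hit == -1:
            break
        cut = hit + cut_offset          # EcoRV cuts between ...GAT and ATC...
        fragments.append(seq[start:cut])
        start = cut
    fragments.append(seq[start:])
    return fragments

dna = "AAAGATATCTTTTGATATCGG"
print(digest(dna))   # ['AAAGAT', 'ATCTTTTGAT', 'ATCGG']
```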
DNA ligases can rejoin cut or broken DNA strands, using the energy from either adenosine triphosphate or nicotinamide adenine dinucleotide. Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.
Topoisomerases and helicases
Topoisomerases have both nuclease and ligase activity, and they can change the amount of supercoiling in DNA. Some work by cutting the DNA helix and allowing one section to rotate; the enzyme then seals the DNA break. Others can cut one DNA helix and then pass a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.
Helicases are a type of 'molecular motor'; they use the chemical energy in nucleoside triphosphates (predominantly ATP) to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.
Polymerases synthesise polynucleotides from nucleoside triphosphates. They add nucleotides onto the 3′ hydroxyl group of the previous nucleotide in the DNA strand, so all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the nucleoside triphosphate substrate base-pairs to a single-stranded polynucleotide template: this allows polymerases to synthesise the complementary strand of this template accurately.
DNA polymerases copy DNA sequences. Accuracy is very important for this, so many polymerases have a proofreading activity: they can recognize the occasional mistakes that occur during synthesis. These enzymes detect the lack of base pairing between mismatched nucleotides; if a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base is removed. In most organisms, DNA polymerases function in a large complex called the replisome.
RNA-dependent DNA polymerases copy the sequence of an RNA strand into DNA. One example is reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses; another example is telomerase, which is required for the replication of telomeres. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure.
Transcription is carried out by a RNA polymerase that copies a DNA sequence into RNA. The enzyme binds to a promoter and separates the DNA strands. It then copies the gene sequence into a mRNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. RNA polymerase II, which transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.
A DNA helix does not usually interact with other segments of DNA, and in human cells the different chromosomes even occupy different regions of the nucleus (called "chromosome territories"). This physical separation of chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is when they recombine. Recombination is when two DNA helices break, swap a section and then rejoin. In eukaryotes, this usually occurs during meiosis, when two chromatids are paired together in the center of the cell. This allows chromosomes to exchange genetic information and produces new combinations of genes. Genetic recombination can also be involved in DNA repair. The most common form of recombination is homologous recombination, where the two chromosomes involved share very similar sequences. However, recombination can also damage cells, by producing chromosomal translocations and genetic abnormalities. Recombination reactions are catalyzed by recombinases, which have a DNA-dependent ATPase activity. The recombinase makes a 'nick' in one strand of a DNA double helix, allowing the nicked strand to separate from its complementary strand and anneal to one strand of the double helix on the opposite chromatid. A second nick allows the strand in the second chromatid to separate and anneal to the strand in the first helix, forming a cross-strand exchange (also called a Holliday junction). The Holliday junction is a tetrahedral junction structure which can be moved along the pair of chromosomes, swapping one strand for another. The recombination is then halted by cleavage of the junction and re-ligation of the released DNA.
DNA and molecular evolution
As well as being susceptible to largely random mutations (that usually affect just a single base), some regions of DNA are specialised to undergo dramatic, rapid, non-random rearrangements, or to undergo more subtle changes at a high frequency so that the expression of a gene is dramatically altered. Such rearrangements include various versions of site-specific recombination. This depends on enzymes that recognise particular sites on DNA and create novel structures such as insertions, deletions and inversions. For instance, DNA regions in the small genome of the bacterial virus P1 can invert, enabling different versions of tail fibers to be expressed in different viruses. Similarly, site-specific recombinase enzymes are responsible for mating type variation in yeast, and for flagellum type (phase) changes in the bacterium Salmonella enterica Typhimurium.
More subtle mutations can occur in micro-satellite repeats (also called homopolymeric tracts of DNA). Examples of such structures are short DNA intervals where the same base is tandemly repeated, as in 5'-gcAAAAAAAAAAAttg-3', or where a dinucleotide or even a triplet is repeated, as in 5'-ATGATGATGATGATGATGATG-3'. DNA polymerase III is prone to make 'stuttering errors' at such repeats. As a consequence, changes in repeat number occur quite often during cell replication, and when they appear at a position where the spacing of nucleotide residues is critical for gene function, they can cause changes to the phenotype.
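Runs of this kind are easy to locate computationally; a minimal sketch (the threshold of eight bases is an arbitrary choice for illustration) is:

```python
from itertools import groupby

def homopolymer_runs(seq, min_len=8):
    """Return (start index, base, length) for each single-base run of at least min_len."""
    runs, pos = [], 0
    for base, group in groupby(seq.upper()):
        length = len(list(group))
        if length >= min_len:
            runs.append((pos, base, length))
        pos += length
    return runs

print(homopolymer_runs("GCAAAAAAAAAAATTG"))   # [(2, 'A', 11)]
```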
The stomach ulcer bacterium Helicobacter pylori is a good example of how homopolymeric tracts can enable quasi-directed evolution of particular genes. H. pylori has 46 genes that contain homopolymeric runs of nucleotides or dinucleotide repeats that are prone to frequent length changes as a consequence of stuttering errors during replication. These changes can lead to frequent reversible inactivation of these genes, or to changed gene transcription if the repeat is located in a regulatory sequence. This generates highly diverse populations of H. pylori in an individual human host, and this diversity helps the bacterium evade the immune system.
Triplet repeats behave similarly, but are particularly suited to evolution of proteins with differing characteristics. In the clock-like period gene of the fruit fly, triplet repeats 'fine tune' an insect's biological clock in response to changes in environmental temperatures. Triplet repeats are widely distributed in genomes, and their high frequency of mutation is responsible for several genetically determined disorders in humans.
Uses in technology
Forensic scientists can use DNA in blood, semen, skin, saliva or hair to match samples collected at a crime scene to samples taken from possible suspects. This process is called genetic fingerprinting or more formally, DNA profiling. In DNA profiling, the lengths of variable sections of repetitive DNA (such as short tandem repeats and minisatellites) are compared between people. This is usually very reliable for identifying the source of a sample, but identification can be complicated if the samples that are collected include DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and first used in forensic science to convict Colin Pitchfork (and to clear the prime suspect) in the 1988 Enderby murders case. People convicted of certain types of crimes may be required to provide a sample of DNA for a database. This has helped investigators solve old cases where only a DNA sample was obtained from the scene. DNA profiling can also be used to identify victims of mass casualty incidents.
Bioinformatics involves the analysis of DNA sequence data; DNA from hundreds of different organisms has now been partially sequenced, and this information is stored in massive databases. The development of techniques to store and search DNA sequences has led to many applications in computer science. String-searching or 'matching' algorithms, which identify a given sequence of letters inside a larger sequence of letters, are used to search for specific sequences of nucleotides. The related problem of sequence alignment aims to identify homologous sequences; these are sequences that are very similar but not identical. When two different genes in an organism have very similar sequences, this is evidence that, at some stage in evolution, a single gene was duplicated, and the sequences subsequently diverged (under different selection pressures) by incorporating different mutations. Identifying such homologies can give valuable clues about the likely function of novel genes. Similarly, identifying homologies between genes in different organisms can be used to reconstruct the evolutionary relationships between organisms. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without annotations, which label the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have patterns that are characteristic of protein- or RNA-coding genes can be identified by gene finding algorithms, allowing researchers to predict the presence of particular gene products in an organism.
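To make the string-searching idea concrete, here is a minimal sketch in Python (not taken from any particular bioinformatics package; the sequences are invented purely for illustration). It finds exact occurrences of a short motif by a naive scan and scores the similarity of two aligned, equal-length sequences by counting mismatches; real tools use far more efficient algorithms and more sophisticated alignment scoring.

```python
def find_motif(genome, motif):
    """Return the 0-based start positions of every exact occurrence of
    `motif` in `genome`, using a naive string-matching scan."""
    hits = []
    for i in range(len(genome) - len(motif) + 1):
        if genome[i:i + len(motif)] == motif:
            hits.append(i)
    return hits

def hamming_distance(a, b):
    """Count mismatching positions between two equal-length sequences,
    a crude measure of how similar two aligned sequences are."""
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    return sum(x != y for x, y in zip(a, b))

# Made-up sequences, purely for illustration.
genome = "ATGCGAATTCGGATCCGAATTCTTAA"
print(find_motif(genome, "GAATTC"))            # -> [4, 16]
print(hamming_distance("GATTACA", "GACTACA"))  # -> 1 mismatch
```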
The small chromosomes of bacteria have proved extraordinarily useful for analysing genetic mechanisms and genome structure in many organisms, including humans. This utility arose because of several important discoveries made by microbiologists in the 1950s to 1970s that provided new laboratory tools for directly manipulating genes. The principal laboratory tools and techniques discovered in this period were:
- Compact circular plasmid molecules such as ColE1 that were relatively easy to extract and purify from bacterial cells, and when inserted back inside living bacterial cells could serve as genetically stable carriers (vectors) of novel DNA fragments. These vectors were propagated in bacterial cell lines termed "clones". Since each plasmid vector in these clones carried a single DNA molecular fragment, these could be considered to be 'molecular clones'.
- Convenient methods for extracting plasmid DNA from cells, physical analysis of this DNA, and reinsertion of circular plasmid DNA back inside living cells. The main method for DNA re-insertion is termed DNA transformation - that is, direct uptake of naked DNA molecules by cells. One technique that assisted DNA transformation was the use of selectable genetic markers, and plasmid-borne bacterial antibiotic resistance provided such markers in the form of traits like ampicillin resistance and tetracycline resistance.
- Restriction endonucleases, such as EcoRI or HindIII, were found to be useful for specifically digesting DNA at particular sites and creating novel combinations of DNA fragments in the laboratory.
- Restriction enzymes made it possible to re-anneal different DNA fragments, for example circular plasmid DNA that had been linearised at a single EcoRI restriction endonuclease target site together with another fragment, say an EcoRI-generated fragment of a human chromosome. By sealing together two such fragments with the enzyme DNA ligase, novel hybrid-plasmid molecules could be created in which "foreign" DNA inserts were carried in a chimeric or hybrid plasmid, often called a recombinant DNA molecule.
After the mid-1970's, re-insertion of recombinant DNA plasmids into living bacterial cells, such as those of Escherichia coli, opened up many new approaches in biotechnology. These made it possible to manufacture, for example, human hormones such as insulin in microbial cell-based protein factories. When combined with other genetic techniques, the field of "recombinant DNA technology" opened up the possibility of decoding the gene sequence of whole genomes. This latter area is now usually referred to as genomics, and includes the Human Genome Project.
- ↑ 1.0 1.1 Venter J et al. (2001). "The sequence of the human genome". Science 291: 1304–51. PMID 11181995.
- ↑ Pollard KS et al. (2006) Forces shaping the fastest evolving regions in the human genome. PLoS Genet 2(10):e168 PMID 17040131
- ↑ Thanbichler M et al. (2005). "The bacterial nucleoid: a highly organized and dynamic structure". J Cell Biochem 96: 506–21. PMID 15988757.
- ↑ Human Genome Project Information
- ↑ The C. elegans Sequencing Consortium (1998). Genome Sequence of the Nematode C. elegans: A Platform for Investigating Biology. Science 282: 2012-8
- ↑ Wolfsberg T et al. (2001). "Guide to the draft human genome". Nature 409: 824-6. PMID 11236998.
- ↑ See Sydney Brenner's Nobel lecture (2002)
- ↑ 8.0 8.1 Wright W et al. (1997). "Normal human chromosomes have long, G-rich telomeric overhangs at one end". Genes Dev 11: 2801-9. PMID 9353250.
- ↑ Pidoux A, Allshire R (2005). "The role of heterochromatin in centromere function". Philos Trans R Soc Lond B 360: 569-79. PMID 15905142.
- ↑ Harrison P et al. (2002). "Molecular fossils in the human genome: identification and analysis of the pseudogenes in chromosomes 21 and 22". Genome Res 12: 272-80. PMID 11827946.
- ↑ Harrison P, Gerstein M (2002). "Studying genomes through the aeons: protein families, pseudogenes and proteome evolution". J Mol Biol 318: 1155-74. PMID 12083509.
- ↑ Weinstock GM (2007). "ENCODE: More genomic empowerment". Genome Research 17: 667-668. PMID 17567987.
- ↑ 13.0 13.1 ENCODE Project Consortium (2007). "Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project". Nature 447: 799-816. PMID 17571346.
- ↑ Albà M (2001). "Replicative DNA polymerases". Genome Biol 2: REVIEWS3002. PMID 11178285.
- ↑ 15.0 15.1 Alberts, Bruce; Alexander Johnson, Julian Lewis, Martin Raff, Keith Roberts, and Peter Walters (2002). Molecular Biology of the Cell; Fourth Edition. New York and London: Garland Science. ISBN 0-8153-3218-1.
- ↑ Mandelkern M et al. (1981). "The dimensions of DNA in solution". J Mol Biol 152: 153-61. PMID 7338906.
- ↑ Gregory S et al. (2006). "The DNA sequence and biological annotation of human chromosome 1". Nature 441: 315-21. PMID 16710414.
- ↑ Watson J, Crick F (1953). "Molecular structure of nucleic acids; a structure for deoxyribose nucleic acid". Nature 171: 737-8. PMID 13054692.
- ↑ 19.0 19.1 Berg J et al. (2002) Biochemistry. WH Freeman and Co. ISBN 0-7167-4955-6
- ↑ 20.0 20.1 Abbreviations and Symbols for Nucleic Acids, Polynucleotides and their Constituents IUPAC-IUB Commission on Biochemical Nomenclature (CBN) Accessed 03 Jan 2006
- ↑ Ghosh A, Bansal M (2003). "A glossary of DNA structures from A to Z". Acta Crystallogr D Biol Crystallogr 59: 620-6. PMID 12657780.
- ↑ Takahashi I, Marmur J (1963). "Replacement of thymidylic acid by deoxyuridylic acid in the deoxyribonucleic acid of a transducing phage for Bacillus subtilis". Nature 197: 794-5. PMID 13980287.
- ↑ Agris P (2004). "Decoding the genome: a modified view". Nucleic Acids Res 32: 223 – 38. PMID 14715921.
- ↑ Wing R et al. (1980). "Crystal structure analysis of a complete turn of B-DNA". Nature 287: 755 – 8. PMID 7432492.
- ↑ 25.0 25.1 Pabo C, Sauer R. "Protein-DNA recognition". Ann Rev Biochem 53: 293-321. PMID 6236744.
- ↑ Ponnuswamy P, Gromiha M (1994). "On the conformational stability of oligonucleotide duplexes and tRNA molecules". J Theor Biol 169: 419-32. PMID 7526075.
- ↑ Clausen-Schaumann H et al. (2000). "Mechanical stability of single DNA molecules". Biophys J 78: 1997-2007. PMID 10733978.
- ↑ Chalikian T et al. (1999). "A more unified picture for the thermodynamics of nucleic acid duplex melting: a characterization by calorimetric and volumetric techniques". Proc Natl Acad Sci USA 96: 7853-8. PMID 10393911.
- ↑ deHaseth P, Helmann J (1995). "Open complex formation by Escherichia coli RNA polymerase: the mechanism of polymerase-induced strand separation of double helical DNA". Mol Microbiol 16: 817-24. PMID 7476180.
- ↑ 30.0 30.1 Joyce C, Steitz T (1995). "Polymerase structures and function: variations on a theme?". J Bacteriol 177: 6321-9. PMID 7592405.
- ↑ Hüttenhofer A et al. (2005). "Non-coding RNAs: hope or hype?". Trends Genet 21: 289-97. PMID 15851066.
- ↑ Munroe S (2004). "Diversity of antisense regulation in eukaryotes: multiple mechanisms, emerging patterns". J Cell Biochem 93: 664-71. PMID 15389973.
- ↑ Makalowska I et al. (2005). "Overlapping genes in vertebrate genomes". Comput Biol Chem 29: 1–12. PMID 15680581.
- ↑ 34.0 34.1 Johnson Z, Chisholm S (2004). "Properties of overlapping genes are conserved across microbial genomes". Genome Res 14: 2268–72. PMID 15520290.
- ↑ Lamb R, Horvath C (1991). "Diversity of coding strategies in influenza viruses". Trends Genet 7: 261–6. PMID 1771674.
- ↑ Davies J, Stanley J (1989). "Geminivirus genes and vectors". Trends Genet 5: 77–81. PMID 2660364.
- ↑ Benham C, Mielke S. "DNA mechanics". Ann Rev Biomed Eng 7: 21–53. PMID 16004565.
- ↑ 38.0 38.1 Wang J (2002). "Cellular roles of DNA topoisomerases: a molecular perspective". Nat Rev Mol Cell Biol 3: 430–40. PMID 12042765.
- ↑ Basu H et al. (1988). "Recognition of Z-RNA and Z-DNA determinants by polyamines in solution: experimental and theoretical studies". J Biomol Struct Dyn 6: 299-309. PMID 2482766.
- ↑ Lu XJ et al. (2000). "A-form conformational motifs in ligand-bound DNA structures". J Mol Biol 300: 819-40. PMID 10891271.
- ↑ Rothenburg S et al.. "DNA methylation and Z-DNA formation as mediators of quantitative differences in the expression of alleles". Immunol Rev 184: 286–98. PMID 12086319.
- ↑ Oh D et al. (2002). "Z-DNA-binding proteins can act as potent effectors of gene expression in vivo". Proc Natl Acad Sci USA 99: 16666-71. PMID 12486233.
- ↑ 43.0 43.1 Greider C, Blackburn E (1985). "Identification of a specific telomere terminal transferase activity in Tetrahymena extracts". Cell 43: 405-13. PMID 3907856.
- ↑ Burge S et al. (2006). "Quadruplex DNA: sequence, topology and structure". Nucleic Acids Res 34: 5402-15. PMID 17012276.
- ↑ Griffith J et al. (1999). "Mammalian telomeres end in a large duplex loop". Cell 97: 503-14. PMID 10338214.
- ↑ For example, cytosine methylation, to produce 5-methylcytosine, is important for X-chromosome inactivation. Klose R, Bird A (2006). "Genomic DNA methylation: the mark and its mediators". Trends Biochem Sci 31: 89–97. PMID 16403636.
- ↑ Adrian Bird (2007). "Perceptions of epigenetics". Nature 447: 396-398. PMID 17522671
- ↑ V.L. Chandler (2007). "Paramutation: From Maize to Mice". Cell 128: 641-645.
- ↑ Bird A (2002). "DNA methylation patterns and epigenetic memory". Genes Dev 16: 6–21. PMID 11782440.
- ↑ Walsh C, Xu G. "Cytosine methylation and DNA repair". Curr Top Microbiol Immunol 301: 283–315. PMID 16570853.
- ↑ Ratel D et al. (2006). "N6-methyladenine: the other methylated base of DNA". Bioessays 28: 309–15. PMID 16479578.
- ↑ Gommers-Ampt J et al. (1993). "beta-D-glucosyl-hydroxymethyluracil: a novel modified base present in the DNA of the parasitic protozoan T. brucei". Cell 75: 1129–36. PMID 8261512.
- ↑ Douki T et al. (2003). "Bipyrimidine photoproducts rather than oxidative lesions are the main type of DNA damage involved in the genotoxic effect of solar ultraviolet radiation". Biochemistry 42: 9221-6. PMID 12885257.
- ↑ Cadet J et al. (1999). "Hydroxyl radicals and DNA base damage". Mutat Res 424 (1-2): 9-21. PMID 10064846.
- ↑ Shigenaga M et al. (1989). "Urinary 8-hydroxy-2'-deoxyguanosine as a biological marker of in vivo oxidative DNA damage". Proc Natl Acad Sci USA 86: 9697-701. PMID 2602371.
- ↑ Cathcart R et al. (1984). "Thymine glycol and thymidine glycol in human and rat urine: a possible assay for oxidative DNA damage". Proc Natl Acad Sci USA 81: 5633-7. PMID 6592579.
- ↑ Valerie K, Povirk L (2003). "Regulation and mechanisms of mammalian double-strand break repair". Oncogene 22: 5792-812. PMID 12947387.
- ↑ 58.0 58.1 Braña M et al. (2001). "Intercalators as anticancer drugs". Curr Pharm Des 7: 1745-80. PMID 11562309.
- ↑ Sandman K et al. (1998). "Diversity of prokaryotic chromosomal proteins and the origin of the nucleosome". Cell Mol Life Sci 54: 1350-64. PMID 9893710.
- ↑ Luger K et al. (1997). "Crystal structure of the nucleosome core particle at 2.8 A resolution". Nature 389: 251-60. PMID 9305837.
- ↑ Jenuwein T, Allis C (2001). "Translating the histone code". Science 293: 1074-80. PMID 11498575.
- ↑ Ito T. "Nucleosome assembly and remodelling". Curr Top Microbiol Immunol 274: 1-22. PMID 12596902.
- ↑ Thomas J (2001). "HMG1 and 2: architectural DNA-binding proteins". Biochem Soc Trans 29: 395-401. PMID 11497996.
- ↑ Grosschedl R et al. (1994). "HMG domain proteins: architectural elements in the assembly of nucleoprotein structures". Trends Genet 10: 94-100. PMID 8178371.
- ↑ Iftode C et al. (1999). "Replication protein A (RPA): the eukaryotic SSB". Crit Rev Biochem Mol Biol 34: 141-80. PMID 10473346.
- ↑ Myers L, Kornberg R. "Mediator of transcriptional regulation". Ann Rev Biochem 69: 729-49. PMID 10966474.
- ↑ Spiegelman B, Heinrich R (2004). "Biological control through regulated transcriptional coactivators". Cell 119: 157-67. PMID 15479634.
- ↑ Li Z et al. (2003). "A global transcriptional regulatory role for c-Myc in Burkitt's lymphoma cells". Proc Natl Acad Sci USA 100: 8164-9. PMID 12808131.
- ↑ Bickle T, Krüger D (1993). "Biology of DNA restriction". Microbiol Rev 57: 434–50. PMID 8336674.
- ↑ Doherty A, Suh S (2000). "Structural and mechanistic conservation in DNA ligases.". Nucleic Acids Res 28: 4051–8. PMID 11058099.
- ↑ Champoux J. "DNA topoisomerases: structure, function, and mechanism". Ann Rev Biochem 70: 369–413. PMID 11395412.
- ↑ Schoeffler A, Berger J (2005). "Recent advances in understanding structure-function relationships in the type II topoisomerase mechanism". Biochem Soc Trans 33: 1465–70. PMID 16246147.
- ↑ Tuteja N, Tuteja R (2004). "Unraveling DNA helicases. Motif, structure, mechanism and function". Eur J Biochem 271: 1849–63. PMID 15128295.
- ↑ Hubscher U et al.. "Eukaryotic DNA polymerases". Annu Rev Biochem 71: 133–63. PMID 12045093.
- ↑ Tarrago-Litvak L et al. (1994). "The reverse transcriptase of HIV-1: from enzymology to therapeutic intervention". FASEB J 8: 497–503. PMID 7514143.
- ↑ Nugent C, Lundblad V (1998). "The telomerase reverse transcriptase: components and regulation". Genes Dev 12 (8): 1073–85. PMID 9553037.
- ↑ Martinez E (2002). "Multi-protein complexes in eukaryotic gene transcription". Plant Mol Biol 50: 925–47. PMID 12516863.
- ↑ Cremer T, Cremer C (2001). "Chromosome territories, nuclear architecture and gene regulation in mammalian cells". Nat Rev Genet 2: 292-301. PMID 11283701.
- ↑ Pál C et al. (2006). "An integrated view of protein evolution". Nat Rev Genet 7: 337-48. PMID 16619049.
- ↑ O'Driscoll M, Jeggo P (2006). "The role of double-strand break repair - insights from human genetics". Nat Rev Genet 7: 45-54. PMID 16369571.
- ↑ Sung P et al. (2003). "Rad51 recombinase and recombination mediators". J Biol Chem 278: 42729-32. PMID 12912992.
- ↑ Dickman M et al. (2002). "The RuvABC resolvasome". Eur J Biochem 269: 5492-501. PMID 12423347.
- ↑ Suerbaum S, Josenhans C (2007) Helicobacter pylori evolution and phenotypic diversification in a changing host. Nat Rev Microbiol 5:441-52 PMID 17505524
- ↑ Christopher Wills discusses the evolutionary significance of these concepts in The Runaway Brain: The Evolution of Human Uniqueness ISBN 0-00-654672-2 (1995)
- ↑ Collins A, Morton N (1994). "Likelihood ratios for DNA identification". Proc Natl Acad Sci USA 91: 6007-11. PMID 8016106.
- ↑ Weir B et al. (1997). "Interpreting DNA mixtures". J Forensic Sci 42: 213-22. PMID 9068179.
- ↑ Jeffreys A et al. (1985). "Individual-specific 'fingerprints' of human DNA.". Nature 316: 76-9. PMID 2989708.
- ↑ Colin Pitchfork - first murder conviction on DNA evidence also clears the prime suspect Forensic Science Service Accessed 23 Dec 2006
- ↑ DNA Identification in Mass Fatality Incidents. National Institute of Justice (September 2006).
- ↑ Baldi, Pierre. Brunak, Soren. Bioinformatics: The Machine Learning Approach MIT Press (2001) ISBN 978-0-262-02506-5
- ↑ Sjölander K (2004). "Phylogenomic inference of protein molecular function: advances and challenges". Bioinformatics 20: 170-9. PMID 14734307.
- ↑ Mount DM (2004). Bioinformatics: Sequence and Genome Analysis, 2. Cold Spring Harbor Laboratory Press. ISBN 0879697121.
Some content on this page may previously have appeared on Wikipedia.
An Introduction to Molecular Biology/Cell Cycle
The cell cycle, or cell-division cycle (cdc), is the series of events that takes place in a cell leading to its division and duplication. In cells without a nucleus (prokaryotes), the cell cycle occurs via a process termed binary fission. In cells with a nucleus (eukaryotes), the cell cycle can be divided into two main periods: interphase—during which the cell grows, accumulating nutrients needed for mitosis and duplicating its DNA—and the mitosis (M) phase, during which the cell splits itself into two distinct cells, often called "daughter cells". The cell-division cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed.
Phases of cell division
The cell cycle consists of four distinct phases: G1 (Gap1) phase, S phase (synthesis), G2 (Gap2) phase (collectively known as interphase) and M phase (mitosis). M (mitosis) phase is itself composed of two tightly coupled processes: mitosis, in which the cell's chromosomes are divided between the two daughter cells, and cytokinesis, in which the cell's cytoplasm divides in half forming distinct cells. Activation of each phase is dependent on the proper progression and completion of the previous one. Cells that have temporarily or reversibly stopped dividing are said to have entered a state of quiescence called G0 phase.
G0 phase
The G0 phase is a period in the cell cycle in which cells exist in a quiescent state. G0 phase is viewed as either an extended G1 phase, where the cell is neither dividing nor preparing to divide, or a distinct quiescent stage that occurs outside of the cell cycle. G0 is sometimes referred to as a "post-mitotic" state, since cells in G0 are in a non-dividing phase outside of the cell cycle. Some types of cells, such as nerve and heart muscle cells, become post-mitotic when they reach maturity (i.e., when they are terminally differentiated) but continue to perform their main functions for the rest of the organism's life. Multinucleated muscle cells that do not undergo cytokinesis are also often considered to be in the G0 stage. A distinction is sometimes made between truly 'post-mitotic' cells (e.g., heart muscle cells and neurons), which will never re-enter the G1 phase, and other G0 cells, which may.
G1 phase
The first phase of interphase is G1 (G indicating gap), which extends from the end of the previous M phase until the beginning of DNA replication. It is also called the growth phase. During this phase the biosynthetic activities of the cell, which had been considerably slowed down during M phase, resume at a high rate. This phase is marked by synthesis of various enzymes that are required in S phase, mainly those needed for DNA replication. The duration of G1 is highly variable, even among different cells of the same species.
S phase
S phase begins with the initiation of DNA replication; when it is complete, all of the chromosomes have been replicated and each chromosome consists of two (sister) chromatids. Thus, during this phase, the amount of DNA in the cell has effectively doubled, though the ploidy of the cell remains the same. Rates of RNA transcription and protein synthesis are very low during this phase. An exception to this is production of histone protein, which mostly occurs during the S phase.
G2 phase
After S phase, the cell enters the G2 phase, which lasts until the cell enters mitosis. Again, significant biosynthesis occurs during this phase, mainly involving the production of microtubules, which are required during the process of mitosis. Inhibition of protein synthesis during G2 phase prevents the cell from undergoing mitosis.
Analysis of Cell cycle
Cell cycle analysis is a method in cell biology that employs flow cytometry to distinguish cells in different phases of the cell cycle. Before analysis, the cells are permeabilised and treated with a fluorescent dye that stains DNA quantitatively, usually propidium iodide (PI). The fluorescence intensity of the stained cells at certain wavelengths will therefore correlate with the amount of DNA they contain. As the DNA content of cells duplicates during the S phase of the cell cycle, the relative amount of cells in the G0 phase and G1 phase (before S phase), in the S phase, and in the G2 phase and M phase (after S phase) can be determined, as the fluorescence of cells in the G2/M phase will be twice as high as that of cells in the G0/G1 phase. Cell cycle anomalies can be symptoms for various kinds of cell damage, for example DNA damage, which cause the cell to interrupt the cell cycle at certain checkpoints to prevent transformation into a cancer cell (carcinogenesis). Other possible reasons for anomalies include lack of nutrients, for example after serum deprivation. Cell cycle analysis was first described in 1969 at Los Alamos Scientific Laboratory by a group from the University of California, using the Feulgen staining technique. The first protocol for cell cycle analysis using propidium iodide staining was presented in 1975 by Awtar Krishan from Harvard Medical School and is still widely cited today.
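The logic of gating cells by DNA content can be illustrated with a deliberately simplified sketch in Python. The fluorescence values below are simulated, the G0/G1 peak position and the gate boundaries are arbitrary assumptions, and real cell cycle analysis fits a model to the whole DNA-content histogram rather than applying hard cut-offs; the point here is only that cells whose signal is close to twice the G0/G1 value are counted as G2/M, and cells in between as S phase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-cell DNA fluorescence (arbitrary units): G0/G1 cells cluster
# around the 2N DNA content, G2/M cells around 4N (twice the G0/G1 signal),
# and S-phase cells fall in between.
g0g1 = rng.normal(100, 6, 600)
s_phase = rng.uniform(112, 188, 250)
g2m = rng.normal(200, 10, 150)
intensities = np.concatenate([g0g1, s_phase, g2m])

g1_peak = 100.0                          # assumed position of the G0/G1 peak
lo, hi = 1.1 * g1_peak, 1.9 * g1_peak    # crude gates between the two peaks

phase = np.where(intensities < lo, "G0/G1",
                 np.where(intensities > hi, "G2/M", "S"))

for p in ("G0/G1", "S", "G2/M"):
    print(f"{p:6s} {100 * np.mean(phase == p):5.1f}% of cells")
```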
Mitosis is the process by which a eukaryotic cell separates the chromosomes in its nucleus into two identical sets in two nuclei. It is generally followed immediately by cytokinesis, which divides the nuclei, cytoplasm, organelles and cell membrane into two cells containing roughly equal shares of these cellular components. Mitosis and cytokinesis together define the mitotic (M) phase of the cell cycle - the division of the mother cell into two daughter cells, genetically identical to each other and to their parent cell. This accounts for approximately 10% of the cell cycle. Mitosis occurs exclusively in eukaryotic cells, but occurs in different ways in different species. For example, animals undergo an "open" mitosis, where the nuclear envelope breaks down before the chromosomes separate, while fungi such as Aspergillus nidulans and Saccharomyces cerevisiae (yeast) undergo a "closed" mitosis, where chromosomes divide within an intact cell nucleus. Prokaryotic cells, which lack a nucleus, divide by a process called binary fission. The process of mitosis is complex and highly regulated. The sequence of events is divided into phases, corresponding to the completion of one set of activities and the start of the next. These stages are prophase, prometaphase, metaphase, anaphase and telophase. During the process of mitosis the pairs of chromosomes condense and attach to fibers that pull the sister chromatids to opposite sides of the cell. The cell then divides in cytokinesis, to produce two identical daughter cells. Because cytokinesis usually occurs in conjunction with mitosis, "mitosis" is often used interchangeably with "M phase". However, there are many cells where mitosis and cytokinesis occur separately, forming single cells with multiple nuclei. This occurs most notably among the fungi and slime moulds, but is found in various different groups. Even in animals, cytokinesis and mitosis may occur independently, for instance during certain stages of fruit fly embryonic development. Errors in mitosis can either kill a cell through apoptosis or cause mutations that may lead to cancer.
Prophase, from the ancient Greek pro (before) and phase (stage), is a stage of mitosis in which the chromatin condenses (becoming shorter and more compact) into highly ordered structures called chromosomes, in which the chromatin becomes visible. This process, called chromatin condensation, is mediated by the condensin complex. Since the genetic material has been duplicated in an earlier phase of the cell cycle, there are two identical copies of each chromosome in the cell. Identical chromosomes, called sister chromatids, are attached to each other at a DNA element present on every chromosome called the centromere. During prophase, giemsa staining can be applied to elicit G-banding in chromosomes. Prophase accounts for approximately 3% of the cell cycle's duration. An important organelle in mitosis is the centrosome, the microtubule organizing center in metazoans. During prophase, the two centrosomes, which replicate independently of mitosis, have their microtubule-activity increased due to the recruitment of γ-tubulin. The centrosomes will be pushed apart to opposite ends of the cell nucleus by the action of molecular motors acting on the microtubules. The nuclear envelope breaks down to allow the microtubules to reach the kinetochores on the chromosomes, marking the end of prophase. Prometaphase, the next step of mitosis, will see the chromosomes being captured by the microtubules.
Prophase in plant cells
In plant cells, this first phase of mitosis is preceded by an additional stage, found only in plants, called preprophase. In highly vacuolated plant cells, the nucleus has to migrate into the center of the cell before mitosis can begin; this is achieved during the G2 phase of the cell cycle, when a transverse sheet of cytoplasm forms that bisects the cell along the future plane of cell division. Preprophase is also marked by the formation of a ring of microtubules and actin filaments (the preprophase band) underneath the plasma membrane, around the equatorial plane of the future mitotic spindle; this band predicts the position of cell plate fusion during telophase (in animal cells, by contrast, a cleavage furrow forms during telophase). The preprophase band disappears during nuclear envelope disassembly and spindle formation in prometaphase. The cells of higher plants lack centrioles; instead, the nuclear envelope serves as a microtubule organising center, and spindle microtubules aggregate on the surface of the nuclear envelope during preprophase and prophase, forming the prophase spindle.
Metaphase, from the ancient Greek meta (between) and phase (stage), is a stage of mitosis in the eukaryotic cell cycle in which condensed & highly coiled chromosomes, carrying genetic information, align in the middle of the cell before being separated into each of the two daughter cells. Metaphase accounts for approximately 4% of the cell cycle's duration. Preceded by events in prometaphase and followed by anaphase, microtubules formed in prophase have already found and attached themselves to kinetochores in metaphase. The centromeres of the chromosomes convene themselves on the metaphase plate (or equatorial plate), an imaginary line that is equidistant from the two centrosome poles. This even alignment is due to the counterbalance of the pulling powers generated by the opposing kinetochores, analogous to a tug of war between equally strong people. In certain types of cells, chromosomes do not line up at the metaphase plate and instead move back and forth between the poles randomly, only roughly lining up along the middleline. Early events of metaphase can coincide with the later events of prometaphase, as chromosomes with connected kinetochores will start the events of metaphase individually before other chromosomes with unconnected kinetochores that are still lingering in the events of prometaphase. One of the cell cycle checkpoints occurs during prometaphase and metaphase. Only after all chromosomes have become aligned at the metaphase plate, when every kinetochore is properly attached to a bundle of microtubules, does the cell enter anaphase. It is thought that unattached or improperly attached kinetochores generate a signal to prevent premature progression to anaphase, even if most of the kinetochores have been attached and most of the chromosomes have been aligned. Such a signal creates the mitotic spindle checkpoint. This would be accomplished by regulation of the anaphase-promoting complex, securin, and separase.
Anaphase (ana (up) and phase (stage))
Anaphase begins abruptly with the regulated triggering of the metaphase-to-anaphase transition and accounts for approximately 1% of the cell cycle's duration. The transition is triggered by the anaphase-promoting complex (APC), which inactivates the M-phase cyclin-dependent kinases (M-Cdks) by targeting the M-phase cyclins for destruction. The APC also targets securin, a protein that inhibits the protease known as separase; separase then cleaves cohesin, the protein complex responsible for holding sister chromatids together. During early anaphase (or anaphase A), the chromatids abruptly separate and move toward the spindle poles. This is achieved by the shortening of spindle microtubules, with forces mainly being exerted at the kinetochores. When the chromatids are fully separated, late anaphase (or anaphase B) begins. This involves the polar microtubules elongating and sliding relative to each other to drive the spindle poles to opposite ends of the cell. Anaphase B drives the separation of sister centrosomes to opposite poles through three forces: kinesin proteins attached to polar microtubules push the microtubules past one another; cortex-associated cytosolic dynein pulls on the microtubules; and the polar microtubules lengthen at their plus ends. The two stages of anaphase were originally distinguished by their different sensitivities to drugs, and they are mechanically distinct. Early anaphase (anaphase A) involves the shortening of kinetochore microtubules by depolymerization at their plus ends; during this process, a sliding collar allows chromatid movement, and no motor protein is involved, since ATP depletion does not inhibit early anaphase. Late anaphase (anaphase B) involves both the elongation of overlapping microtubules and the use of two distinct sets of motor proteins: one pulls overlapping microtubules past each other, and the other pulls on astral microtubules that have attached to the cell cortex. The contributions of anaphase A and anaphase B to anaphase as a whole vary by cell type. In mammalian cells, anaphase B follows shortly after anaphase A and extends the spindle to approximately twice its metaphase length; by contrast, yeast and certain protozoans use anaphase B as the main means of chromosome separation and, in the process, can extend their spindles to up to 15 times the metaphase length.
Cyclins are a group of proteins that control the progression of cells through the cell cycle by activating cyclin-dependent kinase (Cdk) enzymes. Cyclins were discovered by R. Timothy Hunt in 1982 while studying the cell cycle of sea urchins.
Types of Cyclins
There are several different cyclins that are active in different parts of the cell cycle and that cause the Cdk to phosphorylate different substrates.
There are two groups of cyclins:
G1/S cyclins – essential for the control of the cell cycle at the G1/S transition. Cyclin A/CDK2 is active in S phase; cyclin D/CDK4, cyclin D/CDK6, and cyclin E/CDK2 regulate the transition from G1 to S phase.
G2/M cyclins – essential for the control of the cell cycle at the G2/M transition (mitosis). G2/M cyclins accumulate steadily during G2 and are abruptly destroyed as cells exit from mitosis (at the end of the M-phase). Cyclin B / CDK1 – regulates progression from G2 to M phase.
There are also several "orphan" cyclins for which no Cdk partner has been identified. For example, cyclin F is an orphan cyclin that is essential for G2/M transition.
Cyclin dependent kinases (CDKs)
CDKs are a family of protein kinases. CDKs are present in all known eukaryotes, and their regulatory function in the cell cycle has been evolutionarily conserved. CDKs are also involved in regulation of transcription, mRNA processing, and the differentiation of nerve cells. Remarkably, yeast cells can proliferate normally when their CDK gene is replaced with the homologous human gene. CDKs are relatively small proteins, with molecular weights ranging from 34 to 40 kDa, and contain little more than the kinase domain. A CDK binds to a regulatory protein called a cyclin. Without cyclin, CDK has little kinase activity; only the cyclin-CDK complex is an active kinase. CDKs phosphorylate their substrates on serines and threonines, so they are serine-threonine kinases. The consensus sequence for the phosphorylation site in the amino acid sequence of a CDK substrate is [S/T*]PX[K/R], where S/T* is the phosphorylated serine or threonine, P is proline, X is any amino acid, K is lysine, and R is arginine.
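The consensus sequence translates directly into a pattern search. The sketch below uses a regular expression for the full consensus [S/T]PX[K/R] to flag candidate CDK phosphorylation sites; the protein sequence is invented purely for illustration, and a match indicates only the presence of the sequence motif, not a site that is necessarily phosphorylated in vivo.

```python
import re

# Full CDK consensus: S or T (the phosphoacceptor), then P, then any residue,
# then K or R.
cdk_site = re.compile(r"[ST]P.[KR]")

# Hypothetical protein sequence, for illustration only.
seq = "MKTAYIAKQRSPAKQSPLKTTPGKENLYFQG"

for m in cdk_site.finditer(seq):
    print(f"possible CDK site '{m.group()}' starting at residue {m.start() + 1}")
```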
Table : Cyclin-dependent kinases that control the cell cycle in model organisms.
|Species||Name||Original name||Size (amino acids)||Function|
|Saccharomyces cerevisiae||Cdk1||Cdc28||298||All cell-cycle stages|
|Schizosaccharomyces pombe||Cdk1||Cdc2||297||All cell-cycle stages|
|Drosophila melanogaster||Cdk2||Cdc2c||314||G1/S, S, possibly M|
|Drosophila melanogaster||Cdk4||Cdk4/6||317||G1, promotes growth|
|Xenopus laevis||Cdk2||–||297||S, possibly M|
|Homo sapiens||Cdk2||–||298||G1, S, possibly M|
Functions of cyclin and CDKs
Two key classes of regulatory molecules, cyclins and cyclin-dependent kinases (CDKs), determine a cell's progress through the cell cycle. Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of these central molecules. Many of the genes encoding cyclins and CDKs are conserved among all eukaryotes, but in general more complex organisms have more elaborate cell cycle control systems that incorporate more individual components. Many of the relevant genes were first identified by studying yeast, especially Saccharomyces cerevisiae; genetic nomenclature in yeast dubs many of these genes cdc (for "cell division cycle") followed by an identifying number, e.g., cdc25 or cdc20.
Cyclins form the regulatory subunits and CDKs the catalytic subunits of an activated heterodimer; cyclins have no catalytic activity and CDKs are inactive in the absence of a partner cyclin. When activated by a bound cyclin, CDKs perform a common biochemical reaction called phosphorylation that activates or inactivates target proteins to orchestrate coordinated entry into the next phase of the cell cycle. Different cyclin-CDK combinations determine the downstream proteins targeted. CDKs are constitutively expressed in cells whereas cyclins are synthesised at specific stages of the cell cycle, in response to various molecular signals.
Upon receiving a pro-mitotic extracellular signal, G1 cyclin-CDK complexes become active to prepare the cell for S phase, promoting the expression of transcription factors that in turn promote the expression of S cyclins and of enzymes required for DNA replication. The G1 cyclin-CDK complexes also promote the degradation of molecules that function as S phase inhibitors by targeting them for ubiquitination. Once a protein has been ubiquitinated, it is targeted for proteolytic degradation by the proteasome. Active S cyclin-CDK complexes phosphorylate proteins that make up the pre-replication complexes assembled during G1 phase on DNA replication origins. The phosphorylation serves two purposes: to activate each already-assembled pre-replication complex, and to prevent new complexes from forming. This ensures that every portion of the cell's genome will be replicated once and only once. The reason for prevention of gaps in replication is fairly clear, because daughter cells that are missing all or part of crucial genes will die. However, for reasons related to gene copy number effects, possession of extra copies of certain genes is also deleterious to the daughter cells. Mitotic cyclin-CDK complexes, which are synthesized but inactivated during S and G2 phases, promote the initiation of mitosis by stimulating downstream proteins involved in chromosome condensation and mitotic spindle assembly. A critical complex activated during this process is a ubiquitin ligase known as the anaphase-promoting complex (APC), which promotes degradation of structural proteins associated with the chromosomal kinetochore. APC also targets the mitotic cyclins for degradation, ensuring that telophase and cytokinesis can proceed. Interphase: Interphase generally lasts at least 12 to 24 hours in mammalian tissue. During this period, the cell is constantly synthesizing RNA, producing protein and growing in size. By studying molecular events in cells, scientists have determined that interphase can be divided into 4 steps: Gap 0 (G0), Gap 1 (G1), S (synthesis) phase, Gap 2 (G2).
Cyclin D is the first cyclin produced in the cell cycle, in response to extracellular signals (e.g. growth factors). Cyclin D binds to existing CDK4, forming the active cyclin D-CDK4 complex. The cyclin D-CDK4 complex in turn phosphorylates the retinoblastoma susceptibility protein (Rb). The hyperphosphorylated Rb dissociates from the E2F/DP1/Rb complex (which was bound to the E2F responsive genes, effectively "blocking" them from transcription), activating E2F. Activation of E2F results in transcription of various genes like cyclin E, cyclin A, DNA polymerase, thymidine kinase, etc. Cyclin E thus produced binds to CDK2, forming the cyclin E-CDK2 complex, which pushes the cell from G1 to S phase (G1/S transition). Cyclin B along with cdc2 (cdc2 in fission yeast; CDK1 in mammals) forms the cyclin B-cdc2 complex, which initiates the G2/M transition. Cyclin B-cdc2 complex activation causes breakdown of the nuclear envelope and initiation of prophase, and subsequently, its deactivation causes the cell to exit mitosis.
Dysregulation of the cell cycle
Dysregulation of cell cycle components may lead to tumor formation. As mentioned above, when genes such as the cell cycle inhibitors RB and p53 mutate, they may cause the cell to multiply uncontrollably, forming a tumor. Although the duration of the cell cycle in tumor cells is equal to or longer than that of the normal cell cycle, the proportion of cells that are in active cell division (versus quiescent cells in G0 phase) in tumors is much higher than that in normal tissue. Thus there is a net increase in cell number, as the number of cells that die by apoptosis or senescence remains the same. Cells that are actively progressing through the cell cycle are targeted in cancer therapy, as the DNA is relatively exposed during cell division and hence susceptible to damage by drugs or radiation. This fact is made use of in cancer treatment; by a process known as debulking, a significant mass of the tumor is removed, which pushes a significant number of the remaining tumor cells from G0 to G1 phase (due to increased availability of nutrients, oxygen, growth factors etc.). Radiation or chemotherapy following the debulking procedure kills these cells, which have newly entered the cell cycle. The fastest cycling mammalian cells in culture, crypt cells in the intestinal epithelium, have a cycle time as short as 9 to 10 hours. Stem cells in resting mouse skin may have a cycle time of more than 200 hours. Most of this difference is due to the varying length of G1, the most variable phase of the cycle. M and S do not vary much. In general, cells are most radiosensitive in late M and G2 phases and most resistant in late S. For cells with a longer cell cycle time and a significantly long G1 phase, there is a second peak of resistance late in G1. The pattern of resistance and sensitivity correlates with the level of sulfhydryl compounds in the cell. Sulfhydryls are natural radioprotectors and tend to be at their highest levels in S and at their lowest near mitosis.
Cell cycle checkpoints
The G1/S checkpoint
The G1/S transition, more commonly known as the Start checkpoint in budding yeast (the restriction point in other organisms), regulates cell cycle commitment. At this checkpoint, cells either arrest before DNA replication (due to limiting nutrients or a pheromone signal), prolong G1 (size control), or begin replication and progress through the rest of the cell cycle. The G1/S regulatory network or regulon in budding yeast includes the G1 cyclins Cln1, Cln2 and Cln3, Cdc28 (Cdk1), the transcription factors SBF and MBF, and the transcriptional inhibitor Whi5. Cln3 interacts with Cdk1 to initiate the sequence of events by phosphorylating a large number of targets, including SBF, MBF and Whi5. Phosphorylation of Whi5 causes it to translocate out of the nucleus, preventing it from inhibiting SBF and MBF. Active SBF/MBF drive the G1/S transition by turning on the B-type cyclins and initiating DNA replication, bud formation and spindle body duplication. Moreover, SBF/MBF drive expression of Cln1 and Cln2, which can also interact with Cdk1 to promote phosphorylation of its targets.
This G1/S switch was initially thought to function as a linear sequence of events starting with Cln3 and ending in S phase. However, the observation that any one of the Clns was sufficient to activate the regulon indicated that Cln1 and Cln2 might be able to engage positive feedback to activate their own transcription. This would result in a continuously accelerating cycle that could act as an irreversible bistable trigger. Skotheim et al. used single-cell measurements in budding yeast to show that this positive feedback does indeed occur. A small amount of Cln3 induces Cln1/2 expression and then the feedback loop takes over, leading to rapid and abrupt exit of Whi5 from the nucleus and consequently coherent expression of G1/S regulon genes. In the absence of coherent gene expression, cells take longer to exit G1 and a significant fraction even arrest before S phase, highlighting the importance of positive feedback in sharpening the G1/S switch.
The G1/S cell cycle checkpoint controls the passage of eukaryotic cells from the first gap phase, G1, into the DNA synthesis phase, S. In this switch in mammalian cells, there are two cell cycle kinases that help to control the checkpoint: cell cycle kinases CDK4/6-cyclin D and CDK2-cyclin E. The transcription complex that includes Rb and E2F is important in controlling this checkpoint. In the first gap phase, the Rb-HDAC repressor complex binds to the E2F-DP1 transcription factors, therefore inhibiting the downstream transcription. The phosphorylation of Rb by CDK4/6 and CDK2 dissociates the Rb-repressor complex and serves as an on/off switch for the cell cycle. Once Rb is phosphorylated, the inhibition is released on the E2F transcriptional activity. This allows for the transcription of S phase genes encoding for proteins that amplify the G1 to S phase switch.
Many different stimuli apply checkpoint controls including TGFb, DNA damage, contact inhibition, replicative senescence, and growth factor withdrawal. The first four act by inducing members of the INK4 or Kip/Cip families of cell cycle kinase inhibitors. TGFb inhibits the transcription of Cdc25A, a phosphatase that activates the cell cycle kinases, and growth factor withdrawal activates GSK3b, which phosphorylates cyclin D. This leads to its rapid ubiquitination.
The G2/M checkpoint
This transition is commenced by E2F-mediated transcription of cyclin A, forming the cyclin A-Cdk2 complex, which regulates events in prophase. In order to proceed past prophase, the cyclin B-Cdk1 complex (first discovered as MPF or M-phase promoting factor) is activated by Cdc25, a protein phosphatase. As mitosis starts, the nuclear envelope disintegrates, chromosomes condense and become visible, and the cell prepares for division. Cyclin B-Cdk1 activation results in nuclear envelope breakdown, which is a characteristic of the initiation of mitosis. It is evident that the cyclin A and cyclin B complexes with Cdks help regulate mitotic events at the G2/M transition.
As mentioned above, entry into mitosis is controlled by the Cyclin B-Cdk1 complex (first discovered as MPF or M-phase promoting factor; Cdk1 is also known as Cdc2 in fission yeast and Cdc28 in budding yeast). This complex forms an element of an interesting regulatory circuit in which Cdk1 can phosphorylate and activate its activator, the phosphatase Cdc25 (positive feedback), and phosphorylate and inactivate its inactivator, the kinase Wee1 (double-negative feedback). It was suggested that this circuit could act as a bistable trigger with one stable steady state in G2 (Cdk and Cdc25 off, Wee1 on) and a second stable steady state in M phase (Cdk and Cdc25 active, Wee1 off). Once cells are in mitosis, Cyclin B-Cdk1 activates the Anaphase-promoting complex (APC), which in turn inactivates Cyclin B-Cdk1 by degrading Cyclin B, eventually leading to exit from mitosis. Coupling the bistable Cdk1 response function to the negative feedback from the APC could generate what is known as a relaxation oscillator, with sharp spikes of Cdk1 activity triggering robust mitotic cycles. However, in a relaxation oscillator, the control parameter moves slowly relative to the system's response dynamics, which may be an accurate representation of mitotic entry but not necessarily of mitotic exit.
It is necessary to inactivate the cyclin B-Cdk1 complex in order to exit the mitotic stage of the cell cycle. The cells can then return to the first gap phase G1 and wait until the cycle proceeds yet again.
In 2003 Pomerening et al. provided strong evidence for this hypothesis by demonstrating hysteresis and bistability in the activation of Cdk1 in the cytoplasmic extracts of Xenopus oocytes. They first demonstrated a discontinuous sharp response of Cdk1 to changing concentrations of non-destructible cyclin B (to decouple the Cdk1 response network from APC-mediated negative feedback). However, such a response would be consistent with both a monostable, ultrasensitive transition and a bistable transition. To distinguish between these two possibilities, they measured the steady-state levels of active Cdk1 in response to changing cyclin levels, but in two separate experiments, one starting with an interphase extract and one starting with an extract already in mitosis. At intermediate concentrations of cyclin they found two steady-state concentrations of active Cdk1. Which of the two steady states was occupied depended on the history of the system, i.e. whether they started with interphase or mitotic extract, effectively demonstrating hysteresis and bistability.
In the same year, Sha et al. independently reached the same conclusion, revealing the hysteretic loop also using Xenopus laevis egg extracts. In this article, three predictions of the Novak-Tyson model were tested in an effort to conclude that hysteresis is the driving force for "cell-cycle transitions into and out of mitosis". The predictions of the Novak-Tyson model are generic to all saddle-node bifurcations, which makes them robust: saddle-node bifurcations describe behaviour that persists in imperfect, real biological systems. The first prediction was that the threshold concentration of cyclin to enter mitosis is higher than the threshold concentration of cyclin to exit mitosis, and this was confirmed by supplementing cycling egg extracts with non-degradable cyclin B and measuring the activation and inactivation thresholds after the addition of cycloheximide (CHX), a protein synthesis inhibitor. The second prediction of the Novak-Tyson model was also validated: unreplicated deoxyribonucleic acid, or DNA, increases the threshold concentration of cyclin that is required to enter mitosis. In order to arrive at this conclusion, cytostatic factor released extracts were supplemented with CHX, APH (a DNA polymerase inhibitor), or both, and non-degradable cyclin B was added. The third and last prediction that was tested and proven true in this article was that the rate of Cdc2 activation slows down near the activation threshold concentration of cyclin. These predictions and experiments demonstrate the toggle-like switching behavior that can be described by hysteresis in a dynamical system.
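The hysteresis seen in these experiments can be mimicked qualitatively with a deliberately simplified toy model. The sketch below is not the Novak-Tyson model: it uses a single variable x standing loosely for active Cdk1, a control parameter s standing loosely for the cyclin level, a generic sigmoidal positive-feedback term, and parameter values chosen purely for illustration. Sweeping s up and then back down shows that, over an intermediate range of s, the steady state reached depends on the system's history, which is the signature of bistability.

```python
import numpy as np

def dxdt(x, s, k=1.0, K=0.5, n=4, g=1.2):
    """Toy rate equation: basal drive s, plus sigmoidal positive feedback,
    minus first-order inactivation."""
    return s + k * x**n / (K**n + x**n) - g * x

def steady_state(x0, s, dt=0.01, steps=20000):
    """Integrate (forward Euler) to an approximate steady state from x0."""
    x = x0
    for _ in range(steps):
        x += dt * dxdt(x, s)
    return x

# Sweep the control parameter up, then back down, each time starting from the
# steady state reached at the previous value (as in the extract experiments).
s_values = np.linspace(0.0, 0.4, 41)

x, up = 0.0, []
for s in s_values:
    x = steady_state(x, s)
    up.append(x)

down = []
for s in s_values[::-1]:
    x = steady_state(x, s)
    down.append(x)
down = down[::-1]

# Over an intermediate range of s the two sweeps settle on different branches.
for s, xu, xd in zip(s_values, up, down):
    print(f"s = {s:.2f}   up-sweep x = {xu:.3f}   down-sweep x = {xd:.3f}")
```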
Metaphase-anaphase checkpoint
In the transition from metaphase to anaphase, it is crucial that sister chromatids are properly and simultaneously separated to opposite ends of the cell. Separation of sister chromatids is initially strongly inhibited to prevent premature separation in late mitosis, but this inhibition is relieved through destruction of the inhibitory elements by the anaphase-promoting complex (APC) once sister-chromatid bi-orientation is achieved. One of these inhibitory elements is securin, which prevents the destruction of cohesin, the complex that holds the sister chromatids together, by binding the protease separase, which targets Scc1, a subunit of the cohesin complex, for destruction. In this system, the phosphatase Cdc14 can remove an inhibitory phosphate from securin, thereby facilitating the destruction of securin by the APC, releasing separase. As shown by Uhlmann et al., during the attachment of chromosomes to the mitotic spindle the chromatids remain paired because cohesion between the sisters prevents separation. Cohesion is established during DNA replication and depends on cohesin, a multisubunit complex composed of Scc1, Scc3, Smc1, and Smc3. In yeast, at the metaphase-to-anaphase transition, Scc1 dissociates from the chromosomes and the sister chromatids separate. This action is controlled by the Esp1 protein, which is tightly bound by the anaphase inhibitor Pds1, itself destroyed by the anaphase-promoting complex. To verify that Esp1 does play a role in regulating Scc1 chromosome association, cell strains were arrested in G1 with alpha factor. These cells remained arrested throughout the experiment. The experiment was then repeated with esp1-1 mutant cells, in which Scc1 successfully bound to the chromosomes and remained associated even after synthesis was terminated. This was crucial in showing that Esp1 hinders the ability of Scc1 to become stably associated with chromosomes during G1, and that Esp1 can in fact directly remove Scc1 from chromosomes.
It has been shown by Holt et al. that separase activates Cdc14, which in turn acts on securin, thus creating a positive feedback loop that increases the sharpness of the metaphase-to-anaphase transition and the coordination of sister-chromatid separation. Holt et al. probed the basis for the effect of positive feedback in securin phosphorylation by using mutant securin strains of yeast, and testing how changes in the phosphoregulation of securin affect the synchrony of sister chromatid separation. Their results indicate that interfering with this positive securin-separase-Cdc14 loop decreases sister chromatid separation synchrony. This positive feedback can hypothetically generate bistability in the transition to anaphase, causing the cell to make the irreversible decision to separate sister chromatids.
Cell division in fission yeast
The fission yeast is a single-celled fungus with a simple, fully characterized genome and a rapid growth rate. It has long been used in brewing, baking and molecular genetics. S. pombe is a rod-shaped cell, approximately 3µm in diameter, that grows entirely by elongation at the ends. After mitosis, division occurs by the formation of a septum, or cell plate, that cleaves the cell at its midpoint.
The central events of cell reproduction are chromosome duplication, which takes place in S (Synthetic) phase, followed by chromosome segregation and nuclear division (mitosis) and cell division (cytokinesis), which are collectively called M (Mitotic) phase. G1 is the gap between M and S phases, and G2 is the gap between S and M phases. In the fission yeast, the G2 phase is particularly extended, and cytokinesis (daughter-cell segregation) does not happen until a new S (Synthetic) phase is launched.
Fission yeast governs mitosis by mechanisms that are similar to those in multicellular animals. It normally proliferates in a haploid state. When starved, cells of opposite mating types (P and M) fuse to form a diploid zygote that immediately enters meiosis to generate four haploid spores. When conditions improve, these spores germinate to produce proliferating haploid cells.
Facts to be remembered
|Gap 0||G0||A resting phase where the cell has left the cycle and has stopped dividing.|
|Interphase||Gap 1||G1||Cells increase in size in Gap 1. The G1 checkpoint control mechanism ensures that everything is ready for DNA synthesis.|
|Synthesis||S||DNA replication occurs during this phase.|
|Gap 2||G2||During the gap between DNA synthesis and mitosis, the cell will continue to grow. The G2 checkpoint control mechanism ensures that everything is ready to enter the M (mitosis) phase and divide.|
|Cell division||Mitosis||M||Cell growth stops at this stage and cellular energy is focused on the orderly division into two daughter cells. A checkpoint in the middle of mitosis (Metaphase Checkpoint) ensures that the cell is ready to complete cell division.|
Stages of mitosis
- Cell cycle
- G0 phase
- Cell cycle analysis
- Cyclin-dependent kinase
- Morgan, David O. (2007). The Cell Cycle: Principles of Control. London: New Science Press, 1st ed.
- Skotheim, J.M.; Di Talia, S.; Siggia, E.D.; Cross, F.R. (2008), "Positive feedback of G1 cyclins ensures coherent cell cycle entry", Nature 454 (7202): 291, http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2606905, retrieved 2009-12-11
- Stuart, D.; Wittenberg, C. (1995), "CLN3, not positive feedback, determines the timing of CLN2 transcription in cycling cells.", Genes & development 9 (22): 2780, http://genesdev.cshlp.org/cgi/reprint/9/22/2780.pdf, retrieved 2009-12-11
- Biochemical switches in the cell cycle
- Harper JW. A phosphorylation-driven ubiquitination switch for cell cycle control. Trends Cell Biol. 2002 Mar;12(3):104-7. PMID: 11859016
- Biochemical switches in the cell cycle
- Novak, B.; Tyson, J.J. (1993), "Numerical analysis of a comprehensive model of M-phase control in Xenopus oocyte extracts and intact embryos", Journal of Cell Science 106 (4): 1153, http://jcs.biologists.org/cgi/reprint/106/4/1153.pdf, retrieved 2009-12-11
- Pomerening, J. R., E. D. Sontag, et al. (2003). "Building a cell cycle oscillator: hysteresis and bistability in the activation of Cdc2." Nat Cell Biol 5(4): 346-351.
- Sha, W.; Moore, J.; Chen, K.; Lassaletta, A.D.; Yi, C.S.; Tyson, J.J.; Sible, J.C. (2003), "Hysteresis drives cell-cycle transitions in Xenopus laevis egg extracts", Proceedings of the National Academy of Sciences 100 (3): 975, http://www.pnas.org/cgi/content/full/100/3/975, retrieved 2009-12-11
- Cooper, G. (2000), “The Cell: A Molecular Approach.”, retrieved 2010-11-21
- Uhlmann F.; Lottspeich F.; Nasmyth K. (1999), “Sister-chromatid separation at anaphase onset is promoted by cleavage of the cohesin subunit Scc1,” Nature 400: 37-42, retrieved 2010-9-25
- Biochemical switches in the cell cycle
- Holt, L. J., A. N. Krutchinsky, et al. (2008). "Positive feedback sharpens the anaphase switch." Nature 454(7202): 353-357.
- Schizosaccharomyces pombe
Position, Velocity, and Acceleration in One Dimension
We have already discussed examples of position functions in the previous section. We now turn our attention to velocity and acceleration functions in order to understand the role that these quantities play in describing the motion of objects. We will find that position, velocity, and acceleration are all tightly interconnected notions.
Velocity in One Dimension
In one dimension, velocity is almost exactly the same as what we normally call speed. The speed of an object (relative to some fixed reference frame) is a measure of "how fast" the object is going--and coincides precisely with the idea of speed that we normally use in reference to a moving vehicle. Velocity in one dimension takes into account one additional piece of information that speed, however, does not: the direction of the moving object. Once a coordinate axis has been chosen for a particular problem, the velocity v of an object moving at a speed s will either be v = s, if the object is moving in the positive direction, or v = -s, if the object is moving in the opposite (negative) direction.
More explicitly, the velocity of an object is its change in position per unit time, and is hence usually given in units such as m/s (meters per second) or km/hr (kilometers per hour). The velocity function, v(t), of an object will give the object's velocity at each instant in time--just as the speedometer of a car allows the driver to see how fast he is going. The value of the function v at a particular time t₀ is also known as the instantaneous velocity of the object at time t = t₀, although the word "instantaneous" here is a bit redundant and is usually used only to emphasize the distinction between the velocity of an object at a particular instant and its "average velocity" over a longer time interval. (Those familiar with elementary calculus will recognize the velocity function as the time derivative of the position function.)
Average Velocity and Instantaneous Velocity
Now that we have a better grasp of what velocity is, we can more precisely define its relationship to position.
We begin by writing down the formula for average velocity. The average velocity of an object with position function x(t) over the time interval (t₀, t₁) is given by:

average velocity = [x(t₁) - x(t₀)] / (t₁ - t₀)
In other words, the average velocity is the total displacement divided by the total time. Notice that if a car leaves its garage in the morning, drives all around town throughout the day, and ends up right back in the same garage at night, its displacement is 0, meaning its average velocity for the whole day is also 0.
As the time intervals get smaller and smaller in the equation for average velocity, we approach the instantaneous velocity of an object. The formula we arrive at for the velocity of an object with position function x(t) at a particular instant of time t is thus:

v(t) = limit as Δt → 0 of [x(t + Δt) - x(t)] / Δt
This is, in fact, the formula for the velocity function in terms of the position function! (In the language of calculus, this is also known as the formula for the derivative of x with respect to t .) Unfortunately, it is not feasible, in general, to compute this limit for every single value of t. However, the position functions we will be dealing with in this SparkNote (and those you will likely have to deal with in class) have exceptionally simple forms, and hence it is possible for us to write down their corresponding velocity functions in terms of a single rule valid for all time. In order to do this, we will borrow some results from elementary calculus. These results will also prove useful in our discussion of acceleration.
Some Useful Results from Elementary Calculus
Loosely speaking, the time derivative of a function f(t) is a new function f'(t) that keeps track of the rate of change of f in time. Just as in our formula for velocity, we have, in general:

f'(t) = limit as Δt → 0 of [f(t + Δt) - f(t)] / Δt
Notice that this means we can write: v(t) = x'(t). Similarly, we can also take the derivative of the derivative of a function, which yields what is called the second derivative of the original function:

f''(t) = (f'(t))'
We will see later that this enables us to write: a(t) = x''(t) , since the acceleration a of an object is equal to the time-derivative of its velocity, i.e. a(t) = v'(t) .
It can be shown, from the above definition for the derivative, that derivatives satisfy certain properties:
- (P1) (f + g)' = f' + g'
- (P2) (cf )' = cf' , where c is a constant.
- (F1) if f(t) = t^n, where n is a non-zero integer, then f'(t) = n·t^(n-1).
- (F2) if f(t) = c, where c is a constant, then f'(t) = 0.
- (F3a) if f(t) = cos wt, where w is a constant, then f'(t) = -w sin wt.
- (F3b) if f(t) = sin wt, then f'(t) = w cos wt.
Velocities Corresponding to Sample Position Functions
Since we know that v(t) = x'(t) , we can now use our new knowledge of derivatives to compute the velocities for some basic position functions:
- for x(t) = c , c a constant, v(t) = 0 (using (F2))
- for x(t) = (1/2)at² + vt + c , v(t) = at + v (using (F1), (F2), (P1), and (P2))
- for x(t) = cos wt , v(t) = - w sin wt (using (F3a))
- for x(t) = vt + c , v(t) = v (using (F1),(P2))
Acceleration in One Dimension
Just as velocity is given by the change in position per unit time, acceleration is defined as the change in velocity per unit time, and is hence usually given in units such as m/s² (meters per second squared; do not be bothered by what a second² is, since these units are to be interpreted as (m/s)/s--i.e. units of velocity per second). From our past experience with the velocity function, we can now immediately write by analogy: a(t) = v'(t), where a is the acceleration function and v is the velocity function. Recalling that v, in turn, is the time derivative of the position function x, we find that a(t) = x''(t).
To compute the acceleration functions corresponding to different velocity or position functions, we repeat the same process illustrated above for finding velocity. For instance, in the case of the constant-acceleration position function x(t) = (1/2)at² + vt + c, whose velocity function is v(t) = at + v,
we find a(t) = v'(t) = a! (This suggests some method to the seeming arbitrariness of writing the coefficient of t² in the equation for x(t) as a/2.)
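If you would like to check these derivative rules for yourself, a computer algebra system will do the differentiation for you. The short Python sketch below is only an illustration (it assumes the SymPy library is installed, and the symbol names a, v, c, and w are the ones used in the list above); it differentiates each sample position function and prints the resulting velocity and acceleration functions.

```python
import sympy as sp

t, a, v, c, w = sp.symbols('t a v c w')

# Sample position functions from the list above.
positions = {
    "constant position":     c,
    "constant acceleration": sp.Rational(1, 2) * a * t**2 + v * t + c,
    "cosine":                sp.cos(w * t),
    "constant velocity":     v * t + c,
}

for name, x in positions.items():
    vel = sp.diff(x, t)      # v(t) = x'(t)
    acc = sp.diff(vel, t)    # a(t) = v'(t) = x''(t)
    print(f"{name:22s}  v(t) = {vel},  a(t) = {acc}")
```

For the constant-acceleration case this prints v(t) = a*t + v and a(t) = a, in agreement with the discussion above.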
Relating Position, Velocity, and Acceleration
Combining this latest result with (2) above, we discover that, for constant acceleration a, initial velocity v₀, and initial position x₀,

x(t) = (1/2)at² + v₀t + x₀
This position function represents motion at constant acceleration, and is an example of how we can use knowledge of acceleration and velocity to reconstruct the original position function. Hence the relationship between position, velocity, and acceleration goes both ways: not only can you find velocity and acceleration from the position function x(t), but x(t) can be reconstructed if v(t) and a(t) are known. (Notice that in this particular case, velocity is not constant: v(t) = at + v₀, and so v = v₀ only at t = 0.)
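To see the "both ways" point concretely, here is a minimal numerical sketch in plain Python (the values of a, v₀, and x₀ are made up for the example): it rebuilds the position step by step from the velocity and acceleration, and compares the result with the closed-form expression above. The small discrepancy shrinks as the time step dt is made smaller.

```python
import math

a, v0, x0 = 2.0, 3.0, 1.0     # constant acceleration, initial velocity, initial position
dt, T = 1e-4, 5.0             # time step and total time

x, v = x0, v0
for _ in range(int(T / dt)):
    x += v * dt               # position accumulates the velocity
    v += a * dt               # velocity accumulates the acceleration

closed_form = 0.5 * a * T**2 + v0 * T + x0
print(x, closed_form)         # nearly equal; the gap is the step-size error
assert math.isclose(x, closed_form, rel_tol=1e-3)
```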
A natural question to ask might be, "Why stop at acceleration? If v(t) = x'(t) , and a(t) = x''(t) , why don't we discuss x'''(t) and so forth?" It turns out, the third time derivative of position, x'''(t) , does have a name: it is called the "jerk" (honestly). The nice thing is, however, that these higher derivatives don't seem to come into play in formulating physical laws. They exist and we can compute them, but when it comes to writing down force laws (such as Newton's Laws) which deal with the dynamics of physical systems, they get completely left out. This is why we don't care so much about giving them special names and computing them explicitly. | http://www.sparknotes.com/physics/kinematics/1dmotion/section2.rhtml | 13 |
73 | From Wikipedia, the free encyclopedia
The principle of inertia is one of the fundamental laws of classical physics used to describe the motion of matter and how it is affected by applied forces. Inertia is the tendency of an object to maintain a constant velocity unless acted upon by an outside force. How strongly a body resists changes in its motion depends on its mass (and, for rotational motion, on how that mass is distributed, that is, on its shape). The concept of inertia is today most commonly defined using Sir Isaac Newton's First Law of Motion, which states:
Every body perseveres in its state of being at rest or of moving uniformly straight ahead, except insofar as it is compelled to change its state by forces impressed. [Cohen & Whitman 1999 translation]
The description of inertia presented by Newton's law is still considered the standard for classical physics. However, it has also been refined and expanded over time to reflect developments in understanding of relativity and quantum physics which have led to somewhat different (and more mathematical) interpretations in some of those fields.
Common usage of term
It should be emphasised that 'inertia' is a scientific principle, and thus not quantifiable. Therefore, contrary to popular belief, it is neither a force nor a measure of mass. In common usage, however, people may also use the term "inertia" to refer to an object's "amount of resistance to change in velocity" (which is quantified by its mass), and sometimes its momentum, depending on context (e.g. "this object has a lot of inertia"). The term "inertia" is more properly understood as a shorthand for "the principle of inertia as described by Newton in his First Law."
- In simple terms we can say that "In an isolated system, a body at rest will remain at rest and a body moving with constant velocity will continue to do so, unless disturbed by an unbalanced force"
History and development of the concept
Early understanding of motion
Prior to the Renaissance in the 15th century, the generally accepted theory of motion in western philosophy was that proposed by Aristotle (around 335 BC to 322 BC), which stated that in the absence of an external motive power, all objects (on earth) would naturally come to rest in a state of no movement, and that moving objects only continue to move so long as there is a power inducing them to do so. Aristotle explained the continued motion of projectiles, which are separated from their projector, by the action of the surrounding medium which continues to move the projectile in some way. As a consequence, Aristotle concluded that such violent motion in a void was impossible for there would be nothing there to keep the body in motion against the resistance of its own gravity. Then in a statement regarded by Newton as expressing his Principia's first law of motion, Aristotle continued by asserting that a body in (non-violent) motion in a void would continue moving forever if externally unimpeded:
- [N]o one could say why a thing once set in motion should stop anywhere; for why should it stop here rather than here? So that a thing will either be at rest or must be moved ad infinitum, unless something more powerful get in its way.
Despite its remarkable success and general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over the nearly 2 millennia of its reign. For example, Lucretius (following, presumably, Epicurus) clearly stated that the 'default state' of matter was motion, not stasis. In the 6th century, John Philoponus criticized Aristotle's view, noting the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of the surrounding medium but by some property implanted in the object when it was set in motion. This was not the modern concept of inertia, for there was still the need for a power to keep a body in motion. This view was strongly opposed by Averroës and many scholastic philosophers who supported Aristotle. However this view did not go unchallenged in the Islamic world. For example Ibn al-Haitham seems to have supported Philoponus' views.
Theory of impetus
In the 14th century Jean Buridan rejected the notion that this motion-generating property, which he named impetus, dissipated spontaneously. Instead, Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also maintained that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's thought was followed up by the Oxford Calculators, who performed various experiments that further undermined the classical, Aristotelian view. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs.
Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
- “…[Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path.”
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.
Classical inertia
The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the earth (and everything on it) was in fact never "at rest", but was actually in constant motion around the sun. Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle:
A body moving on a level surface will continue in the same direction at a constant speed unless disturbed.
It is also worth noting that Galileo later went on to conclude that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Einstein to develop the theory of Special Relativity.
Galileo's concept of inertia would later come to be refined and codified by Isaac Newton as the first of his Laws of Motion (first published in Newton's work, Philosophiae Naturalis Principia Mathematica, in 1687):
Unless acted upon by an unbalanced force, an object will maintain a constant velocity.
Note that "velocity" in this context is defined as a vector, thus Newton's "constant velocity" implies both constant speed and constant direction (and also includes the case of zero speed, or no motion). Since initial publication, Newton's Laws of Motion (and by extension this first law) have come to form the basis for the almost universally accepted branch of physics now termed classical mechanics.
The actual term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1618-1621); however, the meaning of Kepler's term (which he derived from the Latin word for "idleness" or "laziness") was not quite the same as its modern interpretation. Kepler defined inertia only in terms of a resistance to movement, once again based on the presumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to these concepts as it is today.
Nevertheless, despite defining the concept so elegantly in his laws of motion, even Newton did not actually use the term "inertia" to refer to his First Law. In fact, Newton originally viewed the phenomenon he described in his First Law of Motion as being caused by "innate forces" inherent in matter, which resisted any acceleration. Given this perspective, and borrowing from Kepler, Newton actually attributed the term "inertia" to mean "the innate force possessed by an object which resists changes in motion"; thus Newton defined "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one which we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon described by Newton's First Law of Motion, and the two concepts are now basically equivalent.
Albert Einstein's theory of Special Relativity, as proposed in his 1905 paper, "On the Electrodynamics of Moving Bodies," was built on the understanding of inertia and inertial reference frames developed by Galileo and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained unchanged from Newton's original meaning (in fact the entire theory was based on Newton's definition of inertia). However, this resulted in a limitation inherent in Special Relativity that it could only apply when reference frames were inertial in nature (meaning when no acceleration was present). In an attempt to address this limitation, Einstein proceeded to develop his theory of General Relativity ("The Foundation of the General Theory of Relativity," 1916), which ultimately provided a unified theory for both inertial and noninertial (accelerated) reference frames. However, in order to accomplish this, in General Relativity Einstein found it necessary to redefine several fundamental aspects of the universe (such as gravity) in terms of a new concept of "curvature" of spacetime, instead of the more traditional system of forces understood by Newton.
As a result of this redefinition, Einstein also redefined the concept of "inertia" in terms of geodesic deviation instead, with some subtle but significant additional implications. The result of this is that according to General Relativity, when dealing with very large scales, the traditional Newtonian idea of "inertia" does not actually apply, and cannot necessarily be relied upon. Luckily, for sufficiently small regions of spacetime, the Special Theory can still be used, in which inertia still means the same (and works the same) as in the classical model. Towards the end of his life it seems as if Einstein had become convinced that space-time is a new form of aether, in some way serving as a reference frame for the property of inertia (Kostro, 2000).
Another profound, perhaps the most well-known, conclusion of the theory of Special Relativity was that energy and mass are not separate things, but are, in fact, interchangeable. This new relationship, however, also carried with it new implications for the concept of inertia. The logical conclusion of Special Relativity was that if mass exhibits the principle of inertia, then inertia must also apply to energy as well. This theory, and subsequent experiments confirming some of its conclusions, have also served to radically expand the definition of inertia in some contexts to apply to a much wider context including energy as well as matter.
According to Isaac Asimov
According to Isaac Asimov in "Understanding Physics": "This tendency for motion (or for rest) to maintain itself steadily unless made to do otherwise by some interfering force can be viewed as a kind of "laziness," a kind of unwillingness to make a change. And indeed, [Newton's] first law of motion is referred to as the principle of inertia, from a Latin word meaning "idleness" or "laziness." With the footnote: "In Aristotle's time the earth was considered a motionless body fixed at the center of the universe; the notion of 'rest' therefore had a literal meaning. What we ordinarily consider 'rest' nowadays is a state of being motionless with respect to the surface of the earth. But we know (and Newton did, too) that the earth itself is in motion about the sun and about its own axis. A body resting on the surface of the earth is therefore not really in a state of rest at all."
As Isaac Asimov goes on to explain, "Newton's laws of motion represent assumptions and definitions and are not subject to proof. In particular, the notion of 'inertia' is as much an assumption as Aristotle's notion of 'natural place.'...To be sure, the new relativistic view of the universe advanced by Einstein makes it plain that in some respects Newton's laws of motion are only approximations...At ordinary velocities and distance, however, the approximations are extremely good."
Mass and 'inertia'
Physics and mathematics appear to be less inclined to use the original concept of inertia as "a tendency to maintain momentum" and instead favor the mathematically useful definition of inertia as the measure of a body's resistance to changes in momentum or simply a body's inertial mass.
This was clear in the beginning of the 20th century, when the theory of relativity was not yet created. Mass, m, denoted something like amount of substance or quantity of matter. And at the same time mass was the quantitative measure of inertia of a body.
The mass of a body determines the momentum P of the body at given velocity v; it is a proportionality factor in the formula:
- P = mv
The factor m is referred to as inertial mass.
But mass as related to 'inertia' of a body can be defined also by the formula:
- F = ma
By this formula, the greater its mass, the less a body accelerates under a given force. The masses m defined by formula (1), P = mv, and formula (2), F = ma, are equal because formula (2) is a consequence of formula (1) if mass does not depend on time and speed. Thus, "mass is the quantitative or numerical measure of a body's inertia, that is of its resistance to being accelerated".
This meaning of a body's inertia therefore is altered from the original meaning as "a tendency to maintain momentum" to a description of the measure of how difficult it is to change the momentum of a body.
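As a quick check of the claim that formula (2) follows from formula (1) when the mass is constant, the short Python sketch below (it assumes the SymPy library is available; the symbol names are only illustrative) differentiates P = mv with respect to time and recovers F = ma.

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)   # mass, assumed constant in time
v = sp.Function('v')(t)              # velocity as an unspecified function of time

p = m * v                            # formula (1): momentum P = m v
F = sp.diff(p, t)                    # force as the rate of change of momentum

print(F)                             # prints m*Derivative(v(t), t), i.e. F = m a
```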
Inertial mass
The only difference there appears to be between inertial mass and gravitational mass is the method used to determine them.
Gravitational mass is measured by comparing the force of gravity of an unknown mass to the force of gravity of a known mass. This is typically done with some sort of balance scale. The beauty of this method is that no matter where, or on what planet, you are, the masses will always balance out because the gravitational acceleration on each object will be the same. This does break down near supermassive objects such as black holes and neutron stars due to the high gradient of the gravitational field around such objects.
Inertial mass is found by applying a known force to an unknown mass, measuring the acceleration, and applying Newton's Second Law, m = F/a. This gives an accurate value for mass, limited only by the accuracy of the measurements. When astronauts need to be weighed in outer space, they actually find their inertial mass in a special chair.
The interesting thing is that, physically, no difference has been found between gravitational and inertial mass. Many experiments have been performed to check the values and the experiments always agree to within the margin of error for the experiment. Einstein used the fact that gravitational and inertial mass were equal to begin his Theory of General Relativity in which he postulated that gravitational mass was the same as inertial mass, and that the acceleration of gravity is a result of a 'valley' or slope in the space-time continuum that masses 'fell down' much as pennies spiral around a hole in the common donation toy at a chain store.
Inertial frames
In a location such as a steadily moving railway carriage, a dropped ball would behave as it would if it were dropped in a stationary carriage. The ball would simply descend vertically. It is possible to ignore the motion of the carriage by defining it as an inertial frame. In a moving but non-accelerating frame, the ball behaves normally because the train and its contents continue to move at a constant velocity. Before being dropped, the ball was traveling with the train at the same speed, and the ball's inertia ensured that it continued to move in the same speed and direction as the train, even while dropping. Note that, here, it is inertia which ensured that, not its mass.
In an inertial frame all the observers in uniform (non-accelerating) motion will observe the same laws of physics. However, observers in other inertial frames can make a simple, and intuitively obvious, transformation (the Galilean transformation) to convert their observations. Thus, an observer from outside the moving train could deduce that the dropped ball within the carriage fell vertically downwards.
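The Galilean transformation mentioned above amounts to a shift by the relative velocity: x_ground = x_train + u·t. The small Python sketch below (the train speed and drop height are made-up values) tabulates the falling ball's horizontal position in both frames; in the train frame the ball falls straight down, while the ground-frame observer sees it carried forward at the train's speed.

```python
g = 9.8      # gravitational acceleration, m/s^2
u = 30.0     # train speed relative to the ground, m/s (illustrative value)
h = 2.0      # height of the drop inside the carriage, m (illustrative value)

t = 0.0
while 0.5 * g * t**2 <= h:
    height = h - 0.5 * g * t**2     # vertical fall is the same in both frames
    x_train = 0.0                   # in the train frame the ball does not move sideways
    x_ground = x_train + u * t      # Galilean transformation to the ground frame
    print(f"t = {t:0.2f} s   train-frame x = {x_train:0.2f} m   "
          f"ground-frame x = {x_ground:0.2f} m   height = {height:0.2f} m")
    t += 0.1
```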
However, in frames which are experiencing acceleration (non-inertial frames), objects appear to be affected by fictitious forces. For example, if the railway carriage was accelerating, the ball would not fall vertically within the carriage but would appear to an observer to be deflected because the carriage and the ball would not be traveling at the same speed while the ball was falling. Other examples of fictitious forces occur in rotating frames such as the earth. For example, a missile at the North Pole could be aimed directly at a location and fired southwards. An observer would see it apparently deflected away from its target by a force (the Coriolis force) but in reality the southerly target has moved because earth has rotated while the missile is in flight. Because the earth is rotating, a useful inertial frame of reference is defined by the stars, which only move imperceptibly during most observations.
Novel interpretations
Lack of consensus as to the true inherent nature of inertia may still challenge a few to further speculate on and research the subject.
If you look at inertia as a manifestation of mass, you may be interested in particle physics' ideas about the Higgs boson. This is an active field of advanced research, and new results are discussed quickly. According to the Standard Model of particle physics, all elementary particles are, by themselves, almost massless. Their masses (and hence their inertia) stem from the Higgs mechanism via interaction with an all-pervading Higgs field. This calls for the existence of a so far undiscovered elementary particle, the Higgs boson.
If you are inclined to see inertia as a property connected to mass, you may work along other routes. The number of researchers delivering new ideas here is small. What has come up recently is still to be regarded as protoscience, but it illustrates how theories in this area are taking shape.
In a paper published in 2006, the Swedish-American physicist Johan Masreliez proposes that the phenomenon of inertia may be explained if the metrical coefficients in the Minkowskian line element were to change as a consequence of acceleration. A certain scale factor was found, which models inertia as a gravitational-type effect. In a following paper for Physica Scripta he explains how special relativity can be compatible with a cosmos with a fixed and unique cosmological reference frame. The Lorentz transformation might model "morphing" of moving particles, which might preserve their properties by changing their local spacetime geometries. With this, the geometry becomes dynamic and an integral part of motion. He claims this changing geometry to be the source of inertia; it is said to generate the inertial force. These new ideas have so far been checked only by the journals' reviewers and a few others in the scientific community. If accepted, inertia might be a property that connects special relativity with general relativity.
Another approach has been suggested by Emil Marinchev (2002).
Rotational inertia
Another form of inertia is rotational inertia, which refers to the fact that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum is unchanged, unless an external torque is applied; this is also called conservation of angular momentum. Rotational inertia often has hidden practical consequences.
See also
- General relativity
- Inertial guidance system
- List of moments of inertia
- Mach's principle
- Newton's laws of motion
- Newtonian physics
- Special relativity
- Kinetic Energy
References and footnotes
- ^ Aristotle, Physics, 8.10, 267a1-21; Aristotle, Physics, trans. by R. P. Hardie and R. K. Gaye.
- ^ Aristotle, Physics, 4.8, 214b29-215a24.
- ^ Aristotle, Physics, 4.8, 215a19-22.
- ^ Lucretius, On the Nature of Things (London: Penguin, 1988), pp, 60-65
- ^ Richard Sorabji, Matter, Space, and Motion: Theories in Antiquity and their Sequel, (London: Duckworth, 1988), pp. 227-8; Stanford Encyclopedia of Philosophy: John Philoponus.
- ^ Jean Buridan: Quaestiones on Aristotle's Physics (quoted at http://brahms.phy.vanderbilt.edu/a203/impetus_theory.html)
- ^ Giovanni Benedetti, selection from Speculationum, in Stillman Drake and I.E. Drabkin, Mechanics in Sixteenth Century Italy (The University of Wisconsin Press, 1969), p. 156.
- ^ Nicholas Copernicus: The Revolutions of the Heavenly Spheres, 1543
- ^ Galileo: Dialogue Concerning the Two Chief World Systems, 1632 (Wikipedia Article)
- ^ Kostro, Ludwik; Einstein and the Ether Montreal, Apeiron (2000). ISBN 0-9683689-4-8
- ^ Masreliez C. J., On the origin of inertial force, Apeiron (2006)
- ^ Masreliez, C.J., Motion, Inertia and Special Relativity – a Novel Perspective, Physica Scripta, (Dec 2006)
- ^ Emil Marinchev (2002) UNIVERSALITY, i.a. a new generalized principle of inertia
External links
Books and papers
- Butterfield, H (1957) The Origins of Modern Science ISBN 0-7135-0160-X
- Clement, J (1982) "Students' preconceptions in introductory mechanics", American Journal of Physics vol 50, pp66-71
- Crombie, A C (1959) Medieval and Early Modern Science, vol 2
- McCloskey, M (1983) "Intuitive physics", Scientific American, April, pp114-123
- McCloskey, M & Carmazza, A (1980) "Curvilinear motion in the absence of external forces: naïve beliefs about the motion of objects", Science vol 210, pp1139-1141
- Masreliez, C.J., Motion, Inertia and Special Relativity – a Novel Perspective, Physica Scripta, accepted (Oct 2006) | http://www.bazpedia.com/en/i/n/e/Inertia.html | 13 |
110 | Polygons are enclosed geometric shapes that
cannot have fewer than three sides. As this definition suggests,
triangles are actually a type of polygon, but they are so important
on the Math IIC that we gave them their own section. Polygons are
named according to the number of sides they have, as you can see
in the chart below.
All polygons, no matter the number of sides they possess,
share certain characteristics:
- The sum of the interior angles of a polygon
with n sides is (n – 2) × 180º. So, for example, the sum of the
interior angles of an octagon is (8 – 2) × 180º = 6 × 180º = 1080º.
- The sum of the exterior angles of any polygon is 360º.
- The perimeter of a polygon is the sum of the lengths of
its sides. The perimeter of the hexagon below, for example, is 35.
Most of the polygons with more than four sides that you’ll
deal with on the Math IIC will be regular polygons—polygons whose
sides are all of equal length and whose angles are all congruent
(neither of these conditions can exist without the other). Below
are diagrams, from left to right, of a regular pentagon, a regular
octagon, and a square (also known as a regular quadrilateral):
Area of a Regular Polygon
There is one more characteristic of polygons with which
to become familiar. It has to do specifically with regular hexagons.
A regular hexagon can be divided into six equilateral triangles,
as the figure below shows:
If you know the length of just one side of a regular hexagon,
you can use that information to calculate the area of the equilateral
triangle that uses the side. To find the area of the hexagon, simply
multiply the area of that triangle by 6.
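This recipe translates directly into a few lines of code. The Python sketch below is only an illustration (the side length 4 and the octagon are arbitrary example inputs); it computes the area of a regular hexagon from the area of one equilateral triangle, and also evaluates the interior-angle-sum formula (n – 2) × 180º from the list near the top of this section.

```python
import math

def equilateral_triangle_area(s):
    # Area of an equilateral triangle with side s: (sqrt(3) / 4) * s^2
    return math.sqrt(3) / 4 * s**2

def regular_hexagon_area(s):
    # A regular hexagon is six equilateral triangles with the same side length.
    return 6 * equilateral_triangle_area(s)

def interior_angle_sum(n):
    # Sum of the interior angles of an n-sided polygon: (n - 2) * 180 degrees
    return (n - 2) * 180

print(regular_hexagon_area(4))   # hexagon with side length 4
print(interior_angle_sum(8))     # 1080 degrees for an octagon
```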
The most frequently seen polygon on the Math IC is the quadrilateral,
which is a general term for a four-sided polygon. In fact, there
are five types of quadrilaterals that pop up on the test: trapezoids,
parallelograms, rectangles, rhombuses, and squares. Each of these
five quadrilaterals has special qualities, as shown in the sections below.
A trapezoid is a quadrilateral with one pair
of parallel sides and one pair of nonparallel sides. Below is an
example of a trapezoid:
In the trapezoid pictured above, AB is
parallel to CD (shown by the arrow marks), whereas AC and BD are not parallel.
The area of a trapezoid is A = [(s1 + s2)/2] × h,
where s1 and s2 are
the lengths of the parallel sides (also called the bases of the
trapezoid), and h is the height. In a trapezoid,
the height is the perpendicular distance from one base to the other.
Try to find the area of the trapezoid pictured below:
To find the area, draw in the height of the trapezoid
so that you create a 45-45-90 triangle. You know that the length
of the leg of this triangle—and the height of the trapezoid—is 4. Thus,
the area of the trapezoid is [(6 + 10)/2] × 4 = 8 × 4 = 32. Check out the figure below,
which includes all the information we know about the trapezoid:
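The arithmetic in this example is simple enough to wrap in a small helper function, sketched below in Python (the arguments 6, 10, and 4 are the two bases and the height from the example above).

```python
def trapezoid_area(s1, s2, h):
    # Average the two parallel sides, then multiply by the height.
    return (s1 + s2) / 2 * h

print(trapezoid_area(6, 10, 4))   # ((6 + 10) / 2) * 4 = 32
```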
A parallelogram is a quadrilateral whose
opposite sides are parallel. The figure below shows an example:
Parallelograms have three very important properties:
Opposite sides are equal.
Opposite angles are congruent.
Adjacent angles are supplementary (they add up to 180º).
To visualize this last property, simply picture the opposite
sides of the parallelogram as parallel lines and one of the other
sides as a transversal:
The area of a parallelogram is given by the formula A = bh,
where b is the length of the base, and h is the height.
In area problems, you will likely have to find the height
using techniques similar to the one used in the previous example
problem with trapezoids.
The next three quadrilaterals that we’ll review—rectangles,
rhombuses, and squares—are all special types of parallelograms.
A rectangle is a quadrilateral in
which the opposite sides are parallel and the interior angles are all
right angles. A rectangle is essentially a parallelogram in which
the angles are all right angles. Also similar to parallelograms,
the opposite sides of a rectangle are equal.
The formula for the area of a rectangle is A = bh,
where b is the length of
the base, and h is the height.
A diagonal through the rectangle cuts the rectangle into
two equal right triangles. In the figure below, the diagonal BD cuts
rectangle ABCD into congruent right triangles ABD and BCD.
Because the diagonal of the rectangle forms right triangles
that include the diagonal and two sides of the rectangle, if you
know two of these values, you can always calculate the third with
the Pythagorean theorem. If you know the side lengths of the rectangle,
you can calculate the diagonal; if you know the diagonal and one
side length, you can calculate the other side.
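Both directions of this calculation are a single application of the Pythagorean theorem, as the small Python sketch below shows (the 3-4-5 and 5-12-13 triples are just convenient test values).

```python
import math

def diagonal_from_sides(b, h):
    # The diagonal is the hypotenuse of the right triangle with legs b and h.
    return math.hypot(b, h)

def side_from_diagonal(d, b):
    # Solve d^2 = b^2 + h^2 for the unknown side h.
    return math.sqrt(d**2 - b**2)

print(diagonal_from_sides(3, 4))    # 5.0
print(side_from_diagonal(13, 5))    # 12.0
```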
A rhombus is a quadrilateral in which the
opposite sides are parallel and the sides are of equal length.
The formula for the area of a rhombus is A = bh,
where b is the length of
the base, and h is the height.
To find the area of a rhombus, use the same methods as
used to find the area of a parallelogram. For example:
If ABCD is a rhombus, BD = 4, and ABD is an equilateral triangle, what is the area of the rhombus?
If ABD is an equilateral triangle, then the length of each side of the rhombus is also 4, and angles ADB and ABD each measure 60º. Draw an altitude from D to AB to create a 30-60-90 triangle; the length of this altitude works out to 2√3. The area of a rhombus is bh, so the area of this rhombus is 4 × 2√3 = 8√3.
A square is a quadrilateral in which all
the sides are equal and all the angles are right angles. A square
is a special type of rhombus, rectangle, and parallelogram:
The formula for the area of a square is A = s²,
where s is the length of a side of the
square. Because all the sides of a square are equal, it is also
possible to provide a simple formula for the perimeter: P = 4s,
where s is, once again, the length of a side.
A diagonal drawn into the square will always
form two congruent 45-45-90 triangles:
From the properties of a 45-45-90 triangle, we know that the diagonal has length d = s√2, where s is the length of a side. In other
words, if you know the length of one side of the square, you can
easily calculate the length of the diagonal. Similarly, if you know
the length of the diagonal, you can calculate the length of the sides
of the square. | http://www.sparknotes.com/testprep/books/sat2/math1c/chapter6section3.rhtml | 13 |
75 | In this final post on the basic four methods of proof (but perhaps not our last post on proof methods), we consider the proof by induction.
Proving Statements About All Natural Numbers
Induction comes in many flavors, but the goal never changes. We use induction when we want to prove something is true about all natural numbers. These statements will look something like this:
For all natural numbers n, 1 + 2 + ... + n = n(n + 1)/2.
Of course, there are many ways to prove this fact, but induction relies on one key idea: if we know the statement is true for some specific number n, then that gives us information about whether the statement is true for n + 1. If this isn't true about the problem, then proof by induction is hopeless.
Let's see how we can apply it to the italicized statement above (though we haven't yet said what induction is, this example will pave the way for a formal description of the technique). The first thing we notice is that indeed, if we know something about the sum of the first n numbers, then we can just add n + 1 to it to get the sum of the first n + 1 numbers. That is,

1 + 2 + ... + n + (n + 1) = [1 + 2 + ... + n] + (n + 1)

Reiterating our key point, this is how we know induction is a valid strategy: the statement written for a fixed n translates naturally into the statement about n + 1. Now suppose we know the theorem is true for n. Then we can rewrite the above sum as follows:

1 + 2 + ... + n + (n + 1) = n(n + 1)/2 + (n + 1)

With some algebra, we can write the right-hand side as a single fraction:

n(n + 1)/2 + (n + 1) = [n(n + 1) + 2(n + 1)] / 2

and factoring the numerator gives

(n + 1)(n + 2) / 2

Indeed, this is precisely what we're looking for! It's what happens when you replace n by n + 1 in the original statement of the problem.
At this point we're very close to being finished. We proved that if the statement is true for n, then it's true for n + 1. And by the same reasoning, it will be true for n + 2 and all numbers after that. But this raises the obvious question: what's the smallest number that it's true for?
For this problem, it's easy to see the answer is n = 1. A mathematician would say: the statement is trivially true for n = 1 (here trivial means there is no thinking required to show it: you just plug in n = 1 and verify). And so by our reasoning, the statement is true for n = 2, n = 3, and so on forever. This is the spirit of mathematical induction.
Now that we've got a taste of how to use induction in practice, let's formally write down the rules for induction. Let's have a statement which depends on a number n, and call it P(n). This is written as a function because it actually is one (naively). It's a function from the set of natural numbers to the set of all mathematical statements. In our example above, P(n) was the statement that the equality 1 + 2 + ... + n = n(n + 1)/2 holds.
We can plug in various numbers into this function, such as P(4) being the statement "1 + 2 + 3 + 4 = 4 · 5/2 holds," or P(100) being the statement "1 + 2 + ... + 100 = 100 · 101/2 holds."
The basic recipe for induction is then very simple. Say you want to prove that P(n) is true for all n ≥ 1. First you prove that P(1) is true (this is called the base case), and then you prove the implication P(n) implies P(n + 1) for an arbitrary n. Each of these pieces can be proved with any method one wishes (direct, contrapositive, contradiction, etc.). Once they are proven, we get that P(n) is true for all n automatically.
Indeed, we can even prove it. A rigorous proof requires a bit of extra knowledge, but we can easily understand the argument: it's just a proof by contradiction. Here's a quick sketch. Let S be the set of all natural numbers n for which P(n) is false. Suppose to the contrary that S is not empty. Every nonempty set of natural numbers has a smallest element, so let m be the smallest number for which P(m) is false. Now m > 1 (since P(1) is true), so P(m - 1) must be true. But we proved that whenever P(n) is true then so is P(n + 1), so P(m) is true, a contradiction.
Besides practicing proof by induction, that's all there is to it. One more caveat is that the base case can be some number other than 1. For instance, it is true that n! > 2^n, but only for n ≥ 4. In this case, we prove P(4) is true, and we prove the implication P(n) implies P(n + 1) with the extra assumption that n ≥ 4. The same inductive result applies.
Here are some exercises the reader can practice with, and afterward we will explore some variants of induction.
- Prove that for all .
- Prove that for all the following equality holds: .
- Prove that for every natural number n, a set of n elements has 2^n subsets (including the empty subset).
This last exercise gives a hint that induction can prove more than arithmetic formulas. Indeed, if we have any way to associate a mathematical object with a number, we can potentially use induction to prove things about those objects. Unfortunately, we don’t have any mathematical objects to work with (except for sets and functions), and so we will stick primarily to proving facts about numbers.
One interesting observation about proof by induction is very relevant to programmers: it's just recursion. That is, if we want to prove a statement P(n), it suffices to prove it for P(n - 1) and do some "extra computation" to arrive at the statement for P(n). And of course, we want to make sure the recursion terminates, so we add in the known result for P(1).
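To make the recursion analogy concrete, here is a small Python sketch (not from the original post): the function computes 1 + 2 + ... + n exactly the way the inductive proof is organized, with a base case at n = 1 and a recursive call standing in for the inductive hypothesis, and then the result is checked against n(n + 1)/2.

```python
def sum_up_to(n):
    # Mirrors the induction: the base case handles n = 1, and the recursive
    # call plays the role of the inductive hypothesis for n - 1.
    if n == 1:
        return 1                      # base case: P(1)
    return sum_up_to(n - 1) + n       # inductive step: from P(n - 1) to P(n)

for n in range(1, 50):
    assert sum_up_to(n) == n * (n + 1) // 2
print("formula verified for n = 1, ..., 49")
```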
Variations on Induction, and Connections to Dynamic Programming
The first variation of induction is simultaneous induction on multiple quantities. That is, we can formulate a statement P(m, n) which depends on two natural numbers independently of one another. The base case is a bit trickier, but paralleling the above remark about recursion, double induction follows the same pattern as a two-dimensional dynamic programming algorithm. The base cases would consist of all P(m, 0) and all P(0, n), and the inductive step to get P(m, n) requires P(m - 1, n) and P(m, n - 1) (and potentially P(m - 1, n - 1) or others; it depends on the problem).
Unfortunately, natural instances where double induction is useful (or anywhere close to necessary) are rare. Here is a tricky but elementary example. Call

C(m, n) = (2m)! (2n)! / (m! n! (m + n)!)

where the exclamation point denotes the factorial function. We will outline a proof that C(m, n) is always an integer for all m, n ≥ 0. If we look at the base cases C(m, 0) (recalling that 0! = 1), we get (2m)! / (m! m!), and this happens to be in the form of a binomial coefficient (here, the number of ways to choose m objects from a collection of 2m objects), and binomial coefficients are known to be integers. Now the inductive step relies on the fact that C(m, n) and C(m + 1, n) are both integers. If this is true then

C(m, n + 1) = 4 C(m, n) - C(m + 1, n)

which is obviously again an integer.
If we write these values in a table, we can see how the induction progresses in a “dynamic programming” fashion:
In order to fill the values in the next column (prove the statement for those values of n), we need to fill the entire previous column (for indeed, we rely on the inductive hypothesis for both the m and m + 1 rows). But since our base case was the entire n = 0 column, we can fill the entire table. In fact, we have just described a dynamic programming algorithm for computing the value of C(m, n) in O(mn) steps. The correctness of the algorithm is indeed an inductive proof of the theorem.
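The table-filling argument can be run directly. The Python sketch below is only an illustration (the bound of 6 on m and n is arbitrary): it seeds the n = 0 column with binomial coefficients, fills later columns with the recurrence C(m, n + 1) = 4 C(m, n) - C(m + 1, n), and checks every entry against the factorial formula using exact rational arithmetic, confirming along the way that each value is an integer.

```python
from fractions import Fraction
from math import comb, factorial

def C_direct(m, n):
    # (2m)! (2n)! / (m! n! (m + n)!) as an exact fraction
    return Fraction(factorial(2 * m) * factorial(2 * n),
                    factorial(m) * factorial(n) * factorial(m + n))

M = N = 6
# Base column n = 0: C(m, 0) = (2m)! / (m! m!) = binomial(2m, m), an integer.
col = [comb(2 * m, m) for m in range(M + N + 1)]
table = {(m, 0): col[m] for m in range(M + 1)}

for n in range(N):
    # Filling column n + 1 at row m needs rows m and m + 1 of column n.
    col = [4 * col[m] - col[m + 1] for m in range(len(col) - 1)]
    for m in range(min(M, len(col) - 1) + 1):
        table[(m, n + 1)] = col[m]

for (m, n), value in table.items():
    assert isinstance(value, int)               # the recurrence only ever produces integers
    assert Fraction(value) == C_direct(m, n)    # and they match the factorial formula
print("C(m, n) is an integer for all m, n up to", M)
```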
Perhaps uninterestingly, this is asymptotically slower than the naive algorithm of computing C(m, n) directly from the factorials; this would take a linear number of steps in n, assuming m ≤ n. In passing, this author wonders if, when the numbers are really large, the lack of division and multiplication in the dynamic program (multiplying by 4 using bit shifting instead) would overtake the naive algorithm. But C(m, n) is certainly not interesting enough in its own right for anyone to care.
At this point, we have noticed that we sometimes use induction and assume that many smaller instances of the statement are true. Indeed, why not inductively assume that the statement holds for all smaller n. This would certainly give the prover more tools to work with. Indeed, this technique is sometimes called strong induction, in the sense that we assume a stronger inductive hypothesis when we're trying to prove P(n + 1). It may not be entirely obvious (especially to one well versed in the minutiae of formal logic) that this is actually equivalent to normal induction, but it is. In fact, the concept of "strong" induction is entirely pedagogical in nature. Most working mathematicians will not mention the difference in their proofs.
The last variant we'll mention about induction is that of transfinite induction. The concept is that if you have any set X which is well-ordered (essentially this means: allowing one to prove induction applies as we did earlier in the post), then we can perform induction on its elements. In this way, we can "parameterize" statements by elements of an arbitrary well-ordered set, so that instead of P being a function from natural numbers to mathematical statements, it's a function from X to mathematical statements. One somewhat common example of when X is something besides the natural numbers is when we use the so-called cardinal numbers. We have already seen two distinct infinite cardinal numbers in this series: the cardinality of the integers and the cardinality of the real numbers (indeed, "cardinal number" just means a number which is the cardinality of a set). It turns out that there are many more kinds of cardinal numbers, and you can do induction on them, but this rarely shows up outside of mathematical logic.
And of course, we should close this post on an example of when induction goes wrong. For this we point the reader to our proof gallery, and the false proof that all horses are the same color. It’s quite an amusing joke, and hopefully it will stimulate the reader’s mind to recognize the pitfalls that can occur in a proof by induction.
So those are the basic four proof techniques! Fortunately for the reader, pretty much all proofs presented on this blog follow one of these four techniques. I imagine many of my readers skip over the proofs entirely (if only I could put proofs in animated gifs, with claims illustrated by grumpy cats!). So hopefully, if you have been intimidated or confused by the structure of the proofs on this blog, this will aid you in your future mathematical endeavors. Butchering an old phrase for the sake of a bad pun, the eating of the pudding is in the proof.
Until next time! | http://jeremykun.com/tag/methods-of-proof/ | 13 |
92 | Illinois Learning Standards
Stage I - Math
Students who meet the standard can demonstrate knowledge and use of numbers and their many representations in a broad range of theoretical and practical settings. (Representations)
- Illustrate the relationship between second and third roots and powers of a number.
- Organize problem situations using matrices.
- Represent, order, and compare real numbers.
- Place real numbers on a number line.
Students who meet the standard can investigate, represent and solve problems using number facts, operations, and their properties, algorithms, and relationships. (Operations and properties)
- Compare and contrast the properties of numbers and number systems, including the rational and the real numbers. **
- Determine an appropriate numerical representation of a problem situation, including roots and powers, if applicable.
- Judge the effects of such operations as multiplication, division, and computing powers and roots on the magnitudes of quantities. *
- Solve problems using simple matrix operations (addition, subtraction, scalar multiplication).
- Develop fluency in operations with real numbers using mental computation or paper-and-pencil calculations for simple cases and technology for more-complicated cases. **
- Judge the reasonableness of numerical computations and their results. *
Students who meet the standard can compute and estimate using mental mathematics, paper-and-pencil methods, calculators, and computers. (Choice of method)
- Develop fluency in operations with real numbers and matrices using mental computation or paper-and-pencil calculations for simple cases and technology for more-complicated cases. **
- Determine and explain whether exact values or approximations are needed in a variety of situations.
- Determine an appropriate number of digits to represent an outcome.
Students who meet the standard can solve problems using comparison of quantities, ratios, proportions, and percents.
- Explain how ratios and proportions can be used to solve problems of percent, growth, and error tolerance.
- Set up and solve proportions for direct and inverse variation of simple quantities.
Students who meet the standard can measure and compare quantities using appropriate units, instruments, and methods. (Performance and conversion of measurements)
- Select units and scales that are appropriate for problem situations involving measurement. **
- Convert between the U.S. customary and metric systems given the conversion factor.
Students who meet the standard can estimate measurements and determine acceptable levels of accuracy. (Estimation)
- Estimate the magnitude and directions of physical quantities (e.g., velocity, force, slope).
- Determine answers to an appropriate degree of accuracy using significant digits.
Students who meet the standard can select and use appropriate technology, instruments, and formulas to solve problems, interpret results, and communicate findings. (Progression from selection of appropriate tools and methods to application of measurements to solve problems)
- Solve problems using indirect measurement by choosing appropriate technology, instruments, and/or formulas.
- Check measurement computations using unit analysis. **
- Describe the general trends of how the change in one measure affects other measures in the same figure (e.g., length, area, volume).
- Determine linear measures, perimeters, areas, surface areas, and volumes of similar figures using the ratio of similitude.
- Determine the ratio of similar figure perimeters, areas, and volumes using the ratio of similitude.
- Calculate by an appropriate method the length, width, height, perimeter, area, volume, surface area, angle measures, or sums of angle measures of common geometric figures, or combinations of common geometric figures.
- Solve problems involving multiple rates, measures, and conversions.
Students who meet the standard can describe numerical relationships using variables and patterns. (Representations and algebraic manipulations)
- Write equivalent forms of equations, inequalities, and systems of equations. **
- Represent and explain mathematical relationships using symbolic algebra. **
- Model and describe slope as a constant rate of change.
- Explain the difference between constant and non-constant rate of change.
- Create an equation of a line of best fit from a set of ordered pairs or set of data points.
- Simplify algebraic expressions using a variety of methods, including factoring.
- Justify the results of symbol manipulations, including those carried out by technology. **
- Identify essential quantitative relationships in a situation and determine the class or classes of functions (e.g., linear, quadratic) that might model the relationships. **
- Represent relationships arising from various contexts using algebraic expression.
- Rewrite absolute value inequalities in terms of two separate equivalent inequalities with the appropriate connecting phrase of "AND" or "OR".
Students who meet the standard can interpret and describe numerical relationships using tables, graphs, and symbols. (Connections of representations including the rate of change)
- Describe the relationships of the independent and dependent variables from a graph.
- Interpret the role of the coefficients and constants on the graph of linear and quadratic functions given a set of equations.
- Relate the effect of translations on linear graphs and equations.
- Create and connect representations that are tabular, graphical, numeric, and algebraic from a set of data.
- Recognize and describe the general shape and properties of the graphs of linear, absolute value, and quadratic functions.
- Approximate and interpret rates of change from graphical and numerical data. *
- Identify slope in an equation and from a table of values.
- Graph absolute values of linear functions on the Cartesian plane.
- Recognize direct variation, inverse variation, linear, and exponential curves from their graphs, a table of values, or equations. **
- Interpret and use functions as a geometric representation of linear and non-linear relationships.
Students who meet the standard can solve problems using systems of numbers and their properties. (Problem solving; number systems, systems of equations, inequalities, algebraic functions)
- Describe and compare the properties of linear and quadratic functions. **
- Solve problems by recognizing how an equation changes when parameters change.
- Interpolate and extrapolate to solve problems using systems of numbers.
- Solve problems using translations and dilations on basic functions.
Students who meet the standard can use algebraic concepts and procedures to represent and solve problems. (Connection of 8A, 8B, and 8C to solve problems)
- Solve equivalent forms of equations, inequalities, and systems of equations with fluency-mentally or with paper-and-pencil in simple cases and using technology in all cases. **
- Create word problems that meet given conditions and represent simple power or exponential relationships, or direct or inverse variation situations.
- Solve simple quadratic equations using algebraic or graphical representations.
- Solve problems of direct variation situations using a variety of methods.
Students who meet the standard can demonstrate and apply geometric concepts involving points, lines, planes, and space. (Properties of single figures, coordinate geometry and constructions)
- Describe and apply properties of a polygon or a circle in a problem-solving situation.
- Classify angle relationships for two or more parallel lines crossed by a transversal.
- Analyze geometric situations using Cartesian coordinates. **
- Represent transformations of an object in the plane using sketches, coordinates, and vectors.
- Design a net that will create a given figure when folded.
- Solve problems using constructions.
- Gain insights into, and answer questions in, other areas of mathematics using geometric models. **
- Calculate distance, midpoint coordinates, and slope using coordinate geometry.
- Visualize a three-dimensional object from different perspectives and describe their cross sections. **
- Identify and apply properties of medians, altitudes, angle bisectors, perpendicular bisectors, and midlines of a triangle.
Students who meet the standard can identify, describe, classify and compare relationships using points, lines, planes, and solids. (Connections between and among multiple geometric figures)
- Solve problems using triangle congruence and similarity of figures.
- Extend knowledge of plane figure relationships to relationships within and between geometric solids.
- Identify relationships among circles, arcs, chords, tangents, and secants.
- Solve problems in, and gain insights into, other disciplines and other areas of interest such as art and architecture using geometric ideas. **
- Analyze and describe the transformations that lead to successful tessellations of one or more figures.
Students who meet the standard can construct convincing arguments and proofs to solve problems. (Justifications of conjectures and conclusions)
- Create and critique arguments concerning geometric ideas and relationships such as properties of circles, triangles and quadrilaterals.
- Develop a formal proof for a given geometric situation on the plane.
- Provide a counter-example to disprove a conjecture.
- Develop conjectures about geometric situations with and without technology.
- Justify constructions using geometric properties.
- Describe the difference between an inductive argument and a deductive argument.
Students who meet the standard can use trigonometric ratios and circular functions to solve problems.
- Determine distances and angle measures using indirect measurement and properties of right triangles.
- Solve problems using 45°-45°-90° and 30°-60°-90° triangles.
Students who meet the standard can organize, describe and make predictions from existing data. (Data analysis)
- Describe the meaning of measurement data and categorical data, of univariate and bivariate data, and of the term variable. **
- Display a scatter plot, describe its shape, and determine regression coefficients, regression equations, and correlation coefficients for bivariate measurement data using technological tools.
- Evaluate published reports that are based on data by examining the design of the study, the appropriateness of the data analysis, and the validity of conclusions. *
- Analyze two-variable data for linear or quadratic fit.
- Make decisions based on data, including the relationships of correlation and causation.
Students who meet the standard can formulate questions, design data collection methods, gather and analyze data and communicate findings. (Data Collection)
- Describe the characteristics of well-designed studies, including the role of randomization in surveys and experiments. **
- Discuss informally different populations and sampling techniques.
- Decide if a survey was "successful" in gathering intended data and justify the decision.
Students who meet the standard can determine, describe and apply the probabilities of events. (Probability including counting techniques)
- Determine geometric probability based on area.
- Calculate probability using Venn diagrams.
- Determine simple probabilities using frequency tables.
- Construct empirical probability distributions using simulations.**
- Describe the concepts of conditional probability.
- Develop an understanding of permutations and combinations as counting techniques. *
* National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va: National Council of Teachers of Mathematics, 2000.
** Adapted from: National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va: National Council of Teachers of Mathematics, 2000. | http://www.isbe.net/ils/math/stage_I/descriptor.htm | 13 |
60 | Changing an Equation into Slope-Intercept Form
Before doing this lesson, you should have a grasp of the concept of slope as well as a good idea of how to use a table to draw lines on a coordinate plane. See the menu of algebra links for lessons on these topics.
The slope-intercept form of an equation is y = mx + b, where m is the slope and b is the y-intercept. However, not all equations are given in this form.
Equations that are not in this form may be more difficult to graph. Before looking at the lesson, consider the equation 8y = 24 – 4x. Can you find any coordinates that work for this equation? Can you determine the slope of this line or the x or y intercept?
Drawing the line of the equation 8y = 24 – 4x can be done, but this line can be graphed more easily if the equation is rewritten in slope-intercept form. In this lesson, you will learn how to change equations into slope-intercept form to allow you to analyze them and draw their graph more easily.
In the introduction, you were asked to take a closer look at the equation 8y = 24 – 4x. Finding coordinates for this equation can be done by “plugging in” values of x.
If x = 0, then 8y = 24 and y = 3. This is the coordinate (0, 3)
If x = 1, then 8y = 24 – (4)1, 8y = 20, and y = 2.5. Coordinate (1, 2.5)
If x = 2, then 8y = 24 – (4)(2), 8y = 16 and y = 2. Coordinate (2, 2)
The graph of the equation 8y = 24 – 4x is shown to the left. The graph makes a straight line and this line appears to have a negative slope and a y-intercept of 3. One can look at the graph and determine the slope and the y-intercept visually, but it is also possible to find these two characteristics of the line using algebra.
The slope-intercept form of an equation is y = mx + b, where m is the slope and b is the y-intercept. To change our original equation into slope-intercept form, simply solve the equation for y.
In the equation above, the y-term has been isolated on the left side of the equation and the right side has been rearranged into slope-intercept form (mx + b). So the equation 8y = 24 – 4x can be changed into y = -½x + 3. The slope is -½ and the y-intercept is 3.
Finding the slope and y-intercept of an equation can often be done without drawing a graph. See if you can find the slope and y-intercept of the equation without drawing a graph.
Example 1: Find the slope and y-intercept of the line 5x + 5y = 10.
Example 2: Find the slope and y-intercept of the line 2y = 6(x + 3)
Examples 1 and 2 result in equations whose slopes and y-intercepts are integers. When simplifying many equations, however, you will often run into fractions for the slope, y-intercept, or both. Example 3 demonstrates fractional results for the slope and y-intercept.
Example 3: Find the slope and y-intercept of the line 5y = 24 + 8x
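Each of these can be checked by isolating y. The short script below is only a sketch for verifying the three examples (Python, not part of the original lesson); it rewrites an equation of the form Ax + By = C as y = mx + b:

# Minimal sketch (not from the lesson): rewrite A*x + B*y = C as y = m*x + b.
def slope_intercept(A, B, C):
    # Solve for y: y = (C - A*x) / B, so m = -A/B and b = C/B (B must not be 0)
    return -A / B, C / B

# Example 1: 5x + 5y = 10                       -> m = -1,  b = 2
print(slope_intercept(5, 5, 10))
# Example 2: 2y = 6(x + 3), i.e. -6x + 2y = 18  -> m = 3,   b = 9
print(slope_intercept(-6, 2, 18))
# Example 3: 5y = 24 + 8x,  i.e. -8x + 5y = 24  -> m = 1.6, b = 4.8
print(slope_intercept(-8, 5, 24))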
You can use the rules of algebra to change any 2-variable equation into slope-intercept form. Remember that the simplified (slope-intercept) form can be useful to quickly identify the slope and y-intercept of the line.
Even though graphing is not covered in this lesson, the purpose of changing an equation into slope-intercept form is often to draw the graph. Drawing the graph of a line is easiest when the equation is in slope-intercept form.
| http://www.freemathresource.com/lessons/algebra/46-changing-an-equation-into-slope-intercept-form | 13
86 | What Was Missing
Dutch astronomer Jan Oort first discovered the 'missing matter' problem in the 1930's. By observing the Doppler red-shift values of stars moving near the plane of our galaxy, Oort assumed he could calculate how fast the stars were moving. Since the galaxy was not flying apart, he reasoned that there must be enough matter inside the galaxy such that the central gravitational force was strong enough to keep the stars from escaping, much as the Sun's gravitational pull keeps a planet in its orbit. But when the calculation was made, it turned out that there was not enough mass in the galaxy. And the discrepancy was not small; the galaxy had to be at least twice as massive as the sum of the mass of all its visible components combined. Where was all this missing matter?
In addition, in the 1960's the radial profile of the tangential velocity of stars in their orbits around the galactic center as a function of their distance from that center was measured. It was found that typically, once we get away from the galactic center all the stars travel with the same velocity independent of their distance out from the galactic center. (See the figure below.) Usually, as is the case with our solar system, the farther out an object is, the slower it travels in its orbit.
Figure 1. A typical star's tangential velocity as a function of its distance from the galactic center.
To visualize the seriousness of the problem cosmologists face, we need to consider just a bit of Newtonian dynamics:
- To change a body's velocity vector - either in direction or magnitude or both, a force must be applied to the mass of the body. The resulting acceleration is equal to the ratio of the applied force divided by the mass of the object; i.e., f = m a, where f is the force applied to the body, m is the mass of the body, and a is the resulting acceleration (change in velocity). Both f and a are vectors; the change in direction of the velocity will be in the direction of the applied force.
- When an Olympic athlete, starting to do the hammer throw, swings the hammer around himself in a circle, the force he feels stretching his arms (the force he is applying to the hammer) is the 'centripetal force'. That force is equal to the product of the hammer's mass, m1, times the centripetal acceleration (which in this case is the acceleration that continually changes only the direction, not the magnitude, of the velocity vector of the hammer - inward - so as to keep it in a circular orbit around the athlete). This acceleration is equal to the square of the hammer's tangential velocity, v, divided by the radius of the circle. So, the inward force the athlete needs to exert to keep the hammer in its circular path is: f = m1 v^2/R.
- Newton's law of gravitational force says that the force between two masses is equal to G (the gravitational 'constant') times the product of the two masses divided by the square of the distance between them. f = G(m1 x m2)/R^2.
Consider the case of a star on the outskirts of a galaxy. Its radius from the galactic center is R. Its mass is m1, and m2 is the total mass of everything else (all the other stars and matter) inside a circle whose radius is R, the distance of the star from the galaxy's center. Newtonian dynamics assumes all that combined mass, m2, acts as if it were located at a single point at the galaxy's center. For the star to remain in a fixed orbit, the necessary inward (centripetal) force, m1 V^2/R, must be exactly equal to the available (gravitational) force, G(m1 x m2)/R^2. Setting these two expressions equal to each other results in the expression:
m2 = (V^2) R /G
This says that for the tangential velocity, V, to remain constant as R increases - as it does in figure 1 (as we look at stars farther and farther out from the galaxy's center) the included mass, m2, must increase proportionally to that radius, R. But we realize that, if we move far out from the center, to the last few stars in any galaxy, included mass will not increase proportionally to the radius. So there seems to be no way the velocity can remain the same for the outermost stars as for the inner stars. Therefore, astrophysicists have concluded that, either some mass is 'missing' in the outer regions of galaxies, or the outer stars rotating around galaxy cores do not obey Newton's law of gravity.
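To see the scale of the problem, the relation m2 = V^2 R / G can be evaluated at a few radii. The numbers below are illustrative only (a roughly Milky-Way-sized flat velocity of 220 km/s is assumed, not a measurement), but they show that a flat rotation curve demands an enclosed mass that grows in direct proportion to R:

# Minimal sketch: enclosed mass required by a flat rotation curve, m2 = V^2 R / G.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
V = 220e3                # assumed constant tangential velocity, m/s
kpc = 3.086e19           # one kiloparsec in metres

for R_kpc in (5, 10, 20, 40):
    R = R_kpc * kpc
    m2 = V**2 * R / G    # mass needed inside radius R to supply the centripetal force
    print("%2d kpc -> %.2e kg" % (R_kpc, m2))
# The required mass doubles each time R doubles, even though the visible
# matter thins out toward the edge of the galaxy.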
There were problems, too, at a larger scale. In 1933 astronomer Fritz Zwicky announced that when he measured the individual velocities of a large group of galaxies known as the Coma cluster, he found that all of the galaxies that he measured were moving so rapidly relative to one another that the cluster should have come apart long ago. The visible mass of the galaxies making up the cluster was far too little to produce enough gravitational force to hold the cluster together. So not only was our own galaxy lacking mass, but so was the whole Coma cluster of galaxies.
MACHOs, WIMPs & MOND
At first, cosmologists decided to leave Newton's laws inviolate and to postulate the existence of some invisible dark entities to make up the missing mass. Apparently it never occurred to anyone to go back and examine the basic assumption that only gravity was at work in these cases. It was easier to patch up the theory with invisible entities. (Remember the invisible gnomes in my garden?) To quote Astronomy magazine (Aug. 2001 p 26):
"What's more, astronomers have gone to great lengths to affectionately name, define, and categorize this zoo of invisible stuff called dark matter. There are the MAssive Compact Halo Objects (MACHOs) - things like ... black holes, and neutron stars that purportedly populate the outer reaches of galaxies like the Milky Way. Then there are the Weakly Interacting Massive Particles (WIMPs), which possess mass, yet don't interact with ordinary matter - baryons such as protons and neutrons - because they are composed of something entirely foreign and unknown. Dark matter even comes in two flavors, hot (HDM) and cold (CDM)....."
1. Cold dark matter - supposedly in dead stars, planets, brown dwarfs ("failed stars") etc.
2. Hot dark matter - postulated to be fast moving particles floating throughout the universe, neutrinos, tachions etc.
"And all the while astronomers and physicists have refined their dark matter theories without ever getting their hands on a single piece of it. But where is all of this dark matter? The truth is that after more than 30 years of looking for it, there's still no definitive proof that WIMPs exist or that MACHOs will ever make up more than five percent of the total reserve of missing dark stuff."
Of course, the second possibility mentioned above (that the outer stars rotating around galaxy cores do not obey Newton's Law of Gravity) was thought to be impossible. But the first alternative - the fanciful notion that 99% of the matter in the universe was invisible - began to be worrisome too. It was stated that WIMPs and MACHOs were in the category of particle known as "Fabricated Ad hoc Inventions Repeatedly Invoked in Efforts to Defend Untenable Scientific Theories" (FAIRIE DUST). Even such an august authority as Princeton University cosmologist Jim Peebles has been quoted as saying,
"It's an embarrassment that the dominant forms of matter in the universe are hypothetical..."
So the second alternative, radical as it is, was chosen by some astrophysicists and called "MOdify Newton's Dynamics" (MOND). This paradigm shaking proposal to alter Newton's Law of Gravity - because it does not seem to give correct answers in the low density regions of galaxies - was first put forward in 1983 by astrophysicist Mordehai Milgrom at the Weizman Institute of Science in Israel. It has recently been given more publicity by University of Maryland astronomer Stacy McGaugh. Milgrom, himself, has recently ("Does Dark Matter Really Exist?", Scientific American, Aug. 2002, p. 42-52) said, "Although people are right to be skeptical about MOND, until definitive evidence arrives for dark matter or for one of its alternatives, we should keep our minds open." One wonders what alternatives he was referring to.
Some other astrophysicists have grasped at the announcement that neutrinos, that permeate the cosmos, have mass. This, they say, must be the previously "missing matter". But the "missing mass" is not missing homogeneously throughout the universe - just in specific places (like the outer reaches of galaxies). The neutrinos are homogeneously distributed. So this last ditch explanation fails as well.
The dilemma presented by the fact that Newton's Law of Gravity does not give the correct (observed) results in most cases involving galaxy rotation can only be resolved by realizing that Newton's Law of Gravity is simply not applicable in these situations. Galaxies are not held together by gravity. They are formed, driven, and stabilized by dynamic electromagnetic effects.
The Real Explanation:
Dynamic Electromagnetic Forces in Cosmic Plasmas
Ninety-nine percent of the universe is made up of tenuous clouds of ions and electrons called electric plasma. Plasmas respond to the electrical physical laws codified by James Clerk Maxwell and Oliver Heaviside in the late 1800's. An additional single law due to Hendrik Lorentz explains the mysterious stellar velocities described above.
d/dt(mv) = q(E + v x B)
Simply stated, this law says that a moving charged particle's momentum (direction) can be changed by application of either an electric field, E, or a magnetic field, B, or both. Consider the mass and charge of a proton for example. The electrostatic force between two protons is 36 orders of magnitude greater than the gravitational force (given by Newton's equation). It's not that Newton's Law is wrong. It is just that in deep space it is totally overpowered by the Maxwell-Lorentz forces of electromagnetic dynamics.
Notice, in the equation in the previous paragraph, that the change in a charged particle's momentum (left hand side of the equation) is directly proportional to the strength of the magnetic field, B, the particle is moving through. The strength of the magnetic field produced by an electric current (e.g., a cosmic sized Birkeland current) falls off inversely as the first power of the distance from the current. Both electrostatic and gravitational forces fall off inversely as the square of the distance. This inherent difference in the spatial distribution of electromagnetic forces as compared to gravitational forces may indeed be the root cause of the inexplicable velocity profiles exhibited by galaxies.
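Both of these numerical claims are easy to check: the ratio of the electrostatic to the gravitational force between two protons (the separation cancels, since both forces go as 1/R^2), and the slower 1/R fall-off of the magnetic field around a line current compared with the 1/R^2 fall-off of gravity. A minimal sketch using standard constants:

# Standard SI constants
k_e = 8.988e9       # Coulomb constant, N m^2 C^-2
G   = 6.674e-11     # gravitational constant, N m^2 kg^-2
q_p = 1.602e-19     # proton charge, C
m_p = 1.673e-27     # proton mass, kg

# Ratio of electrostatic to gravitational force between two protons;
# the distance R cancels because both forces scale as 1/R^2.
ratio = (k_e * q_p**2) / (G * m_p**2)
print("electric / gravitational = %.1e" % ratio)    # about 1.2e36

# Relative fall-off with distance: 1/R (line-current magnetic field)
# versus 1/R^2 (gravity and electrostatics).
for R in (1.0, 10.0, 100.0):
    print("R = %5.0f   1/R = %.4f   1/R^2 = %.6f" % (R, 1/R, 1/R**2))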
Electrical engineer Dr. Anthony L. Peratt, using Maxwell's and Lorentz's equations, has shown that charged particles, such as those that form the intergalactic plasma, will evolve into very familiar galactic shapes under the influence of electrodynamic forces. The results of these simulations fit perfectly with the observed values of the velocity contours in galaxies. No missing matter is needed - and Newton can rest easy in his grave. The electromagnetic force is many orders of magnitude stronger than the force due to gravity and it distributes itself more widely throughout space. But present day astronomy refuses to recognize the existence of any cosmic force other than gravity. That error is the cause of their mystification.
A farmer and his young daughter are driving along a dusty road. They are almost home when the car breaks down. The farmer walks to the barn and gets his horse, Dobbin. He harnesses Dobbin to the front bumper of the car and begins to drag it along the road toward home. The young daughter takes a piece of string and attaches it to the bumper and says, "I'll help drag the car, Daddy."
Anyone who cannot see horses will think the daughter must possess "missing muscle".
Or, as in Moti Milgrom's MOND proposal, they might suggest that Newton's Laws of motion needed "modification" in this case.
In 1986, Nobel laureate Hannes Alfven postulated both an electrical galactic model and an electric solar model. Recently physicist Wal Thornhill has pointed out that Alfven's circuits are really scaled up versions of the familiar homopolar motor that serves as the watt-hour meter on each of our homes. The simple application of the Lorentz force equation ("crossing" the direction, v, of the current into the direction, B, of the magnetic field) yields a rotational force. Not only does this effect explain the mysterious tangential velocities of the outer stars in galaxies, but also (in scaled down version) the observed fact that our Sun rotates faster at its equator than at higher (solar) latitudes.
Up to now astronomers and cosmologists have not given serious consideration to any sort of electrical explanation for any of the above observations. This is puzzling because all these electrical principles have now been known for decades. They have long been applied in the solution of problems in plasma laboratories here on Earth and have been used successfully in the invention of many practical devices - such as industrial electrical arc machining, particle accelerators, etc. The correct, simple, solution to the "mysteries" of galaxy rotation lies in Plasma Electro-Dynamics - not in the invention of imaginary, fanciful entities such as WIMPs and MACHOs or in the trashing of a perfectly valid law of physics as is proposed in MOND.
Present day astronomy/cosmology seems to be on the horns of a very painful dilemma. This dilemma is caused by the fact that Newton's Law of Gravity does not give the correct (observed) results in most cases involving galaxy rotation. The "missing matter" proposal attempts to balance the equation by increasing one of the variables (one of the mass terms). The second proposal (MOND) is to change Newton's equation itself. (If you are losing the game, change the rules.)
But, the ultimate resolution of the dilemma lies in realizing that Newton's Law of Gravity is simply not applicable in these situations. Maxwell’s equations are! Why do astrophysicists grope wildly for solutions in every possible direction except the right one?
| http://electric-cosmos.org/darkmatter.htm | 13
171 | 11. Programmable Logic Controllers (PLCs)
• CONTROL - Using artificial means to manipulate the world with a particular goal.
• System types,
• Continuous - The values to be controlled change smoothly.
e.g. the speed of a car as the gas pedal is pushed
• Logical - The values to be controlled are easily described as on-off.
e.g. The car motor is on-off (like basic pneumatics).
Note: All systems are continuous but they can be treated as logical for simplicity.
• Logical control types,
• Conditional - A control decision is made by looking at current conditions only.
e.g. A car engine may turn on only when the key is in the ignition and the transmission is in park.
• Sequential - The controller must keep track of things that change and/or know the time and/or how long since something happened.
e.g. A car with a diesel engine must wait 30 seconds after the glow plug has been active before the engine may start.
Note: We can often turn a sequential problem into a conditional by adding more sensors.
mixed (continuous and logical) systems:
• A Programmable Logic Controller (PLC) is an input/output processing computer.
• Advantages of PLCs are:
- cost effective for complex systems
- flexible (easy to add new timers/counters, etc)
- computational abilities
- trouble shooting aids
- easy to add new components
• Ladder logic was originally introduced to mimic relay logic.
11.1 Ladder Logic
• The PLC can be programmed like other computers using specialized “languages.”
- Ladder Logic - a programming technique using a ladder-like structure. It was originally adopted because of its similarity to relay logic diagrams to ease its acceptance in manufacturing facilities. The ladder approach is somewhat limited by the lack of loops, etc. (although this is changing).
- Mnemonic - instructions and opcodes, similar to assembly language. It is more involved to program, but also more flexible than ladder logic. This will be used with the hand held programmers.
• There are other methods that are not as common,
- sequential function charts/petri nets
- state space diagrams
11.2 What Does Ladder Logic Do?
11.2.1 Connecting A PLC To A Process
• The PLC continuously scans the inputs and changes the outputs.
• The process can be anything - a large press, a car, a security door, a blast furnace, etc.
• As inputs change (e.g. a start button), the outputs will be changed. This will cause the process to change and new inputs to the PLC will be received.
11.2.2 PLC Operation
• Remember: The PLC is a computer. Computers have basic components, as shown below:
• In fact the computer above looks more like the one below:
• Notice that in this computer, outputs aren’t connected to the CPU directly.
• A PLC will scan a copy of all inputs into memory. After this, the ladder logic program is run once and it creates a temporary table of all outputs in memory. This table is then written to the outputs after the ladder logic program is done. This continues indefinitely while the PLC is running.
• PLC operation can be shown with a time-line -
SELF TEST - Checks to see if all cards are error free, resets the watch-dog timer, etc. (A watchdog timer will cause an error, and shut down the PLC, if not reset within a short period of time - this would indicate that the ladder logic is not being scanned normally).
INPUT SCAN - Reads input values from the chips in the input cards and copies their values to memory. This makes the PLC operation faster and avoids cases where an input changes from the start to the end of the program (e.g., an emergency stop). There are special PLC functions that read the inputs directly and avoid the input tables.
LOGIC SOLVE/SCAN - Based on the input table in memory, the program is executed one step at a time, and outputs are updated. This is the focus of the later sections.
OUTPUT SCAN - The output table is copied from memory to the output chips. These chips then drive the output devices.
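• The scan cycle can be mimicked with a few lines of code. The sketch below is an illustration only (Python, with made-up input values), not how a real PLC is implemented; it shows the frozen input image table, one pass of logic, then the whole output table being written out:

# Minimal sketch of the PLC scan cycle: input scan, logic solve, output scan.
# read_physical_inputs() and write_physical_outputs() are hypothetical stand-ins
# for the I/O hardware.
def read_physical_inputs():
    return {"start": True, "stop": False}      # pretend field wiring

def write_physical_outputs(outputs):
    print("outputs ->", outputs)

def logic_solve(inputs, outputs):
    # one "rung": the motor runs if start is pressed and stop is not
    outputs["motor"] = inputs["start"] and not inputs["stop"]

output_table = {"motor": False}
for scan in range(3):                           # a real PLC loops indefinitely
    input_table = dict(read_physical_inputs())  # INPUT SCAN - frozen copy
    logic_solve(input_table, output_table)      # LOGIC SOLVE
    write_physical_outputs(output_table)        # OUTPUT SCAN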
11.3 More Ladder Logic
• Ladder logic has been developed to mimic relay logic - to make the computer more acceptable to companies and employees.
• Original efforts resisted the use of computers because they required new skills and approaches, but the use of ladder logic allowed a much smaller paradigm shift.
• Original relay ladder logic diagrams show how to hook-up inputs to run outputs.
Relay - An input coil uses a voltage/current to create a magnetic field. As the coil becomes magnetic it pulls a metal switch (or reed) towards it and makes an electrical contact. The contact that closes when the coil is energized is called the normally open contact. The contact that the reed touches when the coil is not energized is called the normally closed contact. Relays are used to let one power source close a switch for another (often high current) power source while keeping them isolated.
Schematic - The drawing below shows the relay above in a symbolic form.
A Circuit - A mix of inputs and outputs allows logical selection of a device.
• We can then imagine this in the context of a PLC. (this idea was suggested by Walt Siedelman of Ackerman Electric)
11.3.1 Relay Terminology
• Contactor - special relays for switching of large loads.
• Motor Starter - Basically a contactor in series with an overload relay to cut off when too much current is drawn.
• Rated Voltage - Suggested operation voltage. Lower voltages can result in failure to operate; higher voltages shorten the relay's life.
• Rated Current - The maximum current before contact damage occurs (welding or melting).
• DC relays require special arc suppression. AC relays have a zero crossing to reduce relay arc problems.
• AC relays require a shading pole to maintain contact. If a DC relay is used with AC power on the coil, it clicks on-and-off at the frequency of the AC (also known as chattering).
11.3.2 Ladder Logic Inputs
• Input contacts are used to connect the PLC power lines through to drive the outputs.
• The inputs can come from electrical inputs or memory locations.
• Note: if we are using normally closed contacts in our ladder logic, this is independent of what the actual device is. The choice between normally open or closed is based on what is logically needed, and not the physical device.
• For the Micrologix PLCs the inputs are labelled ‘I:0.0/x’ where x is the input number 0 to 9.
11.3.3 Ladder Logic Outputs
• The outputs allow switches to close that supply or cut-off power to control devices.
• Ladder logic indicates what to do with the output, regardless of what is hooked up -- The programmer and electrician that connect the PLC are responsible for that.
• Outputs can go to electrical outputs, or to memory.
• Output symbols -
• We can relate these to actual outputs using numbers (look for these on the front of the PLC).
• For the Micrologix PLCs the outputs are labelled ‘O:0.0/x’ where x is the output number 0 to 5.
11.4 Ladder Diagrams
• These diagrams are read from left to right, top to bottom.
• For the ladder logic below the sequence of operations would be B1, B2 on the top first, then the bottom. This would be followed by T1, then F1.
• Power flow can be used to consider how ladder diagrams work. Power must be able to flow from the left to the right.
11.4.1 Ladder Logic Design
eg. Burglar Alarm
1. If alarm is on, check sensors.
2. If window/door sensor is broken (turns off), sound alarm and turn on lights.
3. If motion sensor goes on (detects thief), sound alarm and turn on lights.
A = Alarm and lights switch (1 = on)
W = Window/Door sensor (1 = OK)
M = Motion Sensor (0 = OK)
S = Alarm Active switch (1 = on)
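• Before drawing the rungs, the required behaviour can be written as one Boolean expression, A = S AND (NOT W OR M), and checked against a truth table. A minimal sketch (Python is used here only to tabulate the logic; the variable names follow the list above):

# S = alarm active switch, W = window/door sensor (1 = intact), M = motion sensor (1 = thief)
def alarm(S, W, M):
    return S and ((not W) or M)     # sound alarm if armed and a sensor has tripped

for S in (0, 1):
    for W in (0, 1):
        for M in (0, 1):
            print("S=%d W=%d M=%d -> A=%d" % (S, W, M, alarm(S, W, M)))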
11.4.2 A More Complicated Example of Design
• There are some devices and concepts that are temporal (time based) or sequential. This means that they keep track of events over time, as opposed to conditional logic that decides based on instantaneous conditions.
• Controls that have states or time dependence will require temporal controls (also known as sequential).
• Some devices that are temporal are:
Flip-Flops - These can be latched on or off.
Latches - Will stay on until reset (Similar to flip-flops)
Counters - Keeps a count of events
Timers - Allows inputs and outputs to be delayed or prolonged by a known amount
• We can show how these latches respond with a simple diagram.
• As an example consider the ladder logic:
• In most PLCs, latches will keep their last state even when the PLC is turned off and back on.
(Note: In some other PLCs latches are only used for keeping the state of the PLC when it was turned off; they don't 'stick' on or off.)
• We use timers to do some or all of the following:
- Delay turning on
- Delay turning off
- Accumulate time passed (retentive)
• When using timers (especially retentive) we must reset values when done. The (RES) instruction does this.
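• The behaviour of an on-delay timer can be sketched in a few lines. This is only an illustration (Python), not the PLC's internal implementation; the PRE, ACC and DN names match the timer memory described later:

# Minimal on-delay (TON-like) timer sketch. The timer accumulates while its
# input rung is true and sets DN when ACC reaches PRE; a non-retentive timer
# resets ACC when the rung goes false.
class OnDelayTimer:
    def __init__(self, preset):
        self.PRE = preset       # preset (counted in scans here, time on a real PLC)
        self.ACC = 0
        self.DN = False
    def update(self, rung_in):
        if rung_in:
            self.ACC = min(self.ACC + 1, self.PRE)
        else:
            self.ACC = 0        # a retentive timer (RTO) would keep ACC here
        self.DN = self.ACC >= self.PRE

t = OnDelayTimer(preset=3)
for rung in (1, 1, 1, 1, 0, 1):
    t.update(rung)
    print("in=%d ACC=%d DN=%s" % (rung, t.ACC, t.DN))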
• Count up/count down counters will track input events.
• Count down counters are similar.
• Consider the example below,
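• The counting behaviour itself can be sketched as follows (an illustrative Python model, not the ladder example referred to above); note that the count only changes on the rising edge of the input:

# Minimal count-up counter sketch: increments once per off-to-on transition
# of the input and sets DN when ACC reaches PRE.
class CountUpCounter:
    def __init__(self, preset):
        self.PRE = preset
        self.ACC = 0
        self.DN = False
        self._last = False
    def update(self, rung_in):
        if rung_in and not self._last:   # rising edge only
            self.ACC += 1
        self._last = bool(rung_in)
        self.DN = self.ACC >= self.PRE

c = CountUpCounter(preset=2)
for rung in (0, 1, 1, 0, 1, 0):
    c.update(rung)
    print("in=%d ACC=%d DN=%s" % (rung, c.ACC, c.DN))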
11.9 Design and Safety
11.9.1 Flow Charts
• Good when the PLC only does one thing at a time in a predictable sequence.
• The real advantage is in modeling the process in an orderly manner.
• The case of a device should be tied to ground so that, if a fault energizes the case, the fault current has a path to follow. (Note: fuses or breakers will cut off the power, but the fault may be on long enough to be fatal.)
• Step potential is another problem. Current from a fault spreads out radially through the ground. If a worker has two feet on the ground at different radial distances, there will be a potential difference between the feet that will cause a current to flow through the legs. If there is a fault, do not run or walk toward or away from it.
• Always ground systems first before applying power. (The first time a system is activated it will have a higher chance of failure.)
• Safe current levels are listed below [ref hydro handbooks], but be aware that in certain circumstances very low currents can kill. When in doubt, take no chances.
• Fail-safe wiring should be used so that if wires are cut or connections fail, the equipment should turn off. For example, if a normally closed stop button is used and the connector is broken off, it will cause the machine to stop, as if the stop button has been pressed and broken the connection.
• Programs should be designed so that they check for problems and shut down in safe ways. Some PLC’s also have power interruption sensors; use these whenever danger is present.
• Proper programming techniques will help detect possible problems on paper instead of in operation.
11.10.3 PLC Safety Rules
• Use a fail-safe design.
• Make the program inaccessible to unauthorized persons.
• Use predictable, non-configurable programs.
• Use redundancy in hardware.
• Directly connect emergency stops to the PLC, or the main power supply.
• Check for system OK at start-up.
• Provide training for new users and engineers to reduce careless and uninformed mistakes.
• Use PLC built in functions for error and failure detection.
1. Look at the process and see if it is in a normal state. i.e. no jammed actuators, broken parts, etc. If there are visible problems, fix them and restart the process.
2. Look at the PLC to see which error lights are on. Each PLC vendor will provide documents that indicate which problems correspond to the error lights. Common error lights are given below. If any of the warning lights are on, look for electrical supply problems to the PLC.
HALT - something has stopped the CPU
RUN - the PLC thinks it is OK (and probably is)
ERROR - a physical problem has occurred with the PLC
3. Check indicator lights on I/O cards to see if they match the system. i.e., look at sensors that are on/off, and actuators on/off, and check that the lights on the PLC I/O cards agree. If any of the lights disagree with the physical reality, then the interface electronics/mechanics need inspection.
4. Turn the PLC off and on again. If this fixes the problem it could be a programming mistake, or a grounding problem. Programming mistakes often happen the same way each time. Grounding problems are often random, and have no pattern.
5. Consult the manuals or use software if available. If no obvious problems exist, the problem is not simple and requires a technically skilled approach.
6. If all else fails call the vendor (or the contractor) for help.
11.11 Design Cases
11.11.1 Deadman Switch
A motor will be controlled by two switches. The Go switch will start the motor and the Stop switch will stop it. If the Stop switch was used to stop the motor, the Go switch must be thrown twice to start the motor. When the motor is active a light should be turned on. The Stop switch will be wired as normally closed.
A conveyor is run by switching on or off a motor. We are positioning parts on the conveyor with an optical detector. When the optical sensor goes on, we want to wait 1.5 seconds, and then stop the conveyor. After a delay of 2 seconds the conveyor will start again. We need to use a start and stop button - a light should be on when the system is active.
11.11.3 Accept/Reject Sorting
For the conveyor in the last case we will add a sorting system. Gages have been attached that indicate good or bad. If the part is good, it continues on. If the part is bad, we do not want to delay for 2 seconds, but instead actuate a pneumatic cylinder.
11.11.4 Shear Press
The basic requirements are,
1. A toggle start switch (TS1) and a limit switch on a safety gate (LS1) must both be on before a solenoid (SOL1) can be energized to extend a stamping cylinder to the top of a part.
2. While the stamping solenoid is energized, it must remain energized until a limit switch (LS2) is activated. This second limit switch indicates the end of a stroke. At this point the solenoid should be de-energized, thus retracting the cylinder.
3. When the cylinder is fully retracted a limit switch (LS3) is activated. The cycle may not begin again until this limit switch is active.
4. A cycle counter should also be included to allow counts of parts produced. When this value exceeds 5000 the machine should shut down and a light lit up.
5. A safety check should be included. If the cylinder solenoid has been on for more than 5 seconds, it suggests that the cylinder is jammed or the machine has a fault. If this is the case, the machine should be shut down and a maintenance light turned on.
• To use advanced data functions in a PLC, we must first understand the structure of the data in the PLC memory.
• There are two types of memory used in a PLC-5.
Program Files - these are a collection of 1000 slots to store up to 1000 programs. The main program will be stored in program file 2. SFC programs must be in file 1, and file 0 is used for program and password information. All other program files from 3 to 999 can be used for ‘subroutines’.
Data Files - This is where the variable data is stored that the PLC programs operate on. This is quite complicated, so a detailed explanation follows.
11.12.1 Data Files
• In brief PLC memory works like the memories in a pocket calculator. The values below are for a PLC-5, although most Allen-Bradley PLCs have a similar structure.
• These memory locations are typically word oriented (16 bits, or 2 bytes). This includes the bit memory. But the T4, C5, R6 data files are all three words long.
• All values are stored and used as integers (except when specified, eg. floating point). When integers are stored in binary format 2’s complements are used to allow negative numbers. BCD values are also used.
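• The effect of 2's complement storage on a 16 bit word can be checked with a short routine (illustrative Python only; the PLC does this in its hardware):

# Minimal sketch: interpret a 16-bit word as a signed (2's complement) integer
# and back again. The valid range is -32768 to 32767.
def word_to_int(word):
    word &= 0xFFFF
    return word - 0x10000 if word & 0x8000 else word

def int_to_word(value):
    return value & 0xFFFF

print(word_to_int(0xFFFF))        # -1
print(word_to_int(0x8000))        # -32768
print(hex(int_to_word(-1)))       # 0xffff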
• There are a number of ways the PLC memory can be addressed,
bit - individual bits in memory can be accessed - this is like addressing a single output as a data bit
word/integer - 16 bits can be manipulated as a group
data value - an actual data value can be provided
file level - an array of data values can be manipulated and operated on as a group
indirect - another memory location can be used in the description of a location.
expression - a text string that describes a complex operation
• For the user assigned data files from 9 to 999 different data types can be assigned. These can be one of the data types already discussed, or another data type.
A - ASCII
B - bit
BT - block transfer
C - counter
D - BCD
F - floating point
MG - message
N - integer (signed, unsigned, 2's complement, BCD)
PD - PID controller
R - control
SC - SFC status
ST - ASCII string
T - timer
11.12.1.1 - Inputs and Outputs
• Recall that the inputs and outputs use octal for specific bits. This means that the sequence of output bits is 00, 01, 02, 03, 04, 05, 06, 07, 10, 11, 12, 13, 14, 15, 16, 17
11.12.1.2 - User Numerical Memory
• Bit data file B3 is well suited to use of single bits. the data is stored as words and this allows two different ways to access the same bit.
B3:0/0 = B3/0
B3:0/10 = B3/10
B3:1/0 = B3/16
B3:1/5 = B3/21
B3:2/0 = B3/32
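• The two addressing forms are related by a simple calculation: the flat bit number is the word number times 16 plus the bit number within the word. A quick check (illustrative Python):

# B3:word/bit is equivalent to B3/(word*16 + bit), since each word holds 16 bits.
def flat_bit(word, bit):
    return word * 16 + bit

for word, bit in ((0, 0), (0, 10), (1, 0), (1, 5), (2, 0)):
    print("B3:%d/%d = B3/%d" % (word, bit, flat_bit(word, bit)))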
• The integer file N7 stores words in 2’s complement form. This allows values from -32768 to 32767. These values can be addressed as whole words, and individual bits can also be changed.
• The floating point file F8 will store floating point numbers that can only be used by floating point functions. The structure of these numbers does not allow bit access.
11.12.1.3 - Timer Counter Memory
• Timer T4 values are addressed using the number of the timers, and an associated data type. For example the accumulator value of timer 3 is T4:3.ACC or T4:3/ACC.
EN - timer enabled bit
TT - timer timing bit
DN - timer done bit
PRE - preset word
ACC - accumulated time word
• Counter C5 values are addressed using the number of the counters, and an associated data type. For example the accumulator value of counter 3 is C5:3.ACC or C5:3/ACC.
CU - count up bit
CD - count down bit
DN - counter done bit
OV - overflow bit
UN - underflow bit
PRE - preset word
ACC - accumulated count word
11.12.1.4 - PLC Status Bits (for PLC-5s)
• Some of the more commonly useful status bits in data file S2 are given below. Full listings are given in the manuals.
S2:0/0 carry in math operation
S2:0/1 overflow in math operation
S2:0/2 zero in math operation
S2:0/3 sign in math operation
S2:1/14 first scan of program file
S2:8 the scan time (ms)
S2:28 watchdog setpoint
S2:29 fault routine file number
S2:30 STI (selectable timed interrupt) setpoint
S2:31 STI file number
S2:46-S2:54,S2:55-S2:56 PII (Programmable Input Interrupt) settings
S2:55 STI last scan time (ms)
S2:77 communication scan time (ms)
11.12.1.5 - User Function Memory
• Control file R6 is used by various functions to track progress. Values that are available are listed below. The use of these bits is specific to the function using the control location.
EN - enable bit
EU - enable unload
DN - done bit
EM - empty bit
ER - error bit
UL - unload bit
IN - inhibit bit
FD - found bit
LEN - length word
POS - position word
11.13 Instruction Types
• There are basic categories of instructions,
Basic (discussed before)
- relay instructions
- timer instructions
- counter instructions
- immediate inputs/outputs
- fault/interrupt detection
Basic Data Handling
- computation instructions
- boolean instructions
Advanced Data Handling
- file instructions
- shift registers/stacks
- high speed counters
- ASCII string functions
• The reader should be aware that some functions are positive edge triggered (i.e. they only act on the scan in which their input goes from off to on), while most are active any time the input is active. The functions listed below are not edge triggered - they act whenever their rung is true.
TON, RTO, TOF, ADD, MUL, etc.
11.13.1 Program Control Structures
• These change the flow of execution of the ladder logic.
11.13.2 Branching and Looping
• These functions allow control found in languages like Fortran
IF-THEN is like MCR (Master Control Reset)
GOTO is like JMP (Jump)
SUBROUTINES is like Program Files
• MCR blocks have been used earlier, but they are worth mentioning again.
• Blocks of ladder logic can be bypassed using a jump statement.
• Subroutines allow reusable programs to be written and called as needed. They are different from jump statements because they are not part of the main program (they are other program files), and arguments can be passed and returned.
• For next loops can also be done to repeat blocks of ladder logic inside a single scan. Care must be used for this instruction so that the ladder logic does not get caught in an infinite, or long loop - if this happens the PLC will experience a fault and halt.
• Ladder logic programs always have an end statement, but it is often taken for granted and ignored. Most modern software automatically inserts this. Some PLCs will experience faults if this is not present.
• There is also a temporary end (TND) that for a single ladder scan will skip the remaining portion of a program.
• A one shot contact can be used to turn on a ladder rung for a single scan. When the rung has a positive rising edge the one shot will turn on the rung for a single scan. Bit 'B3:0' is used here to track the rung status.
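• The one shot amounts to detecting a rising edge: the output is true for exactly one scan when the input goes from off to on. A minimal sketch (Python; the stored flag plays the role of B3:0):

# Minimal rising-edge (one shot) sketch: the output is true for a single scan
# when the input changes from 0 to 1. 'stored' is the tracking bit.
stored = False
for scan_input in (0, 1, 1, 1, 0, 1):
    one_shot = bool(scan_input) and not stored
    stored = bool(scan_input)
    print("in=%d one_shot=%s" % (scan_input, one_shot))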
11.13.2.1 - Immediate I/O Instructions
• This approach avoids problems caused by logic setting and resetting outputs before done.
• If we have a problem we may want to update an output immediately, and not wait for the PLC to complete its scan of the ladder logic. To do this we use immediate inputs and outputs.
11.13.2.2 - Fault Detection and Interrupts
• The PLC can be set up to run programs automatically. This is normally done for a few reasons,
- to deal with errors that occur (eg. divide by zero)
- to run a program at a regular timed interval (eg. SPC calculations)
- to respond when a long instruction is complete (eg. analog input)
- when a certain input changed (eg. panic button)
• Two types of errors will occur - terminal (critical) and warnings (non-critical). A critical failure will normally stop the PLC.
• In some applications faults and failures must be dealt with in logic if possible, if not the system must be shut down.
• There are some memory locations that store indications of warning and fatal errors that have occurred. The routine in program file [S:29] needs to be able to detect and clear the fault.
S:29 - program file number to run when a fault occurs
• To set a timed interrupt we will set values in the status memory as indicated below. The program in file [S:31] will be run every [S:30]ms.
S:30 - timed delay between program execution - an integer number of ms
S:31 - the program number to be run
• To cause an interrupt when a bit changes the following bits can be set.
S:46 - the program file to run when the input bit changes
S:47 - the rack and group number (eg. if in the main rack it is 000)
S:48 - mask for the input address (eg. 0000000000000100 watches 02)
S:49 - for positive edge triggered =1 for negative edge triggered = 0
S:50 - the number of counts before the interrupt occurs 1 = always up to 32767
11.13.3 Basic Data Handling
• Some handy functions found in PLC-5’s (similar functions are available in other PLC’s)
11.13.3.1 - Move Functions
• There are two types of move functions,
MOV(value,destination) - moves a value to a memory location
MVM(value,mask,destination) - moves a value to a memory location, but with a mask to select specific bits.
• The following function moves data values between memory locations. The following example moves a floating point number from floating point memory F8:7 to F8:23
• The following example moves a floating point number from floating point memory F8:7 to integer memory N7:23
• The following example puts an integer value 123 in integer memory N7:23
• A more complex example of the move functions follows,
11.14 Math Functions
• These functions use values in memory, and store the results back in memory (Note: these functions do not use variables like normal programming languages.)
• Math functions are quite similar. The following example adds the integer and floating point number and puts the results in ‘F8:36’.
• Basic PLC-5 math functions include,
ADD(value,value,destination) - add two values
SUB(value,value,destination) - subtract
MUL(value,value,destination) - multiply
DIV(value,value,destination) - divide
NEG(value,destination) - reverse sign from positive/negative
CLR(value) - clear the memory location
• Consider the example below,
• As an exercise, try the calculation below with ladder logic,
• Some intermediate math functions include,
CPT(destination,expression) - does a calculation
ACS(value,destination) - inverse cosine
COS(value,destination) - cosine
ASN(value,destination) - inverse sine
SIN(value,destination) - sine
ATN(value,destination) - inverse tangent
TAN(value,destination) - tangent
XPY(value,value,destination) - X to the power of Y
LN(value,destination) - natural log
LOG(value,destination) - base 10 log
SQR(value,destination) - square root
• Examples of some of these functions are given below.
• For practice implement the following function,
• Some functions are well suited to statistics.
AVE(start value,destination,control,length) - average of values
STD(start value,destination,control,length) - standard deviation of values
SRT(start value,control,length) - sort a list of values
• Examples of these functions are given below.
• There are also functions for basic data conversion.
TOD(value,destination) - convert from BCD to binary
FRD(value,destination) - convert from binary to BCD
DEG(value,destination) - convert from radians to degrees
RAD(value,destination) - convert from degrees to radians
• Examples of these functions are given below.
11.15 Logical Functions
11.15.1 Comparison of Values
• These functions act like input contacts. The equivalent to these functions are if-then statements in traditional programming languages.
• Basic comparison functions in a PLC-5 include,
CMP(expression) - compares two values for equality
EQU(value,value) - equal
NEQ(value,value) - not equal
LES(value,value) - less than
LEQ(value,value) - less than or equal
GRT(value,value) - greater than
GEQ(value,value) - greater than or equal
• The comparison function below compares values at locations A and B. If they are not equal, the output is true. The use of the other comparison functions is identical.
• More advanced comparison functions in a PLC-5 include,
MEQ(value,mask,threshold) - compare for equality using a mask
LIM(low limit,value,high limit) - check for a value between limits
• Examples of these functions are shown below.
11.16 Binary Functions
• These functions allow Boolean operations on numbers and values in the PLC memory.
• Binary functions are also available for,
AND(value,value,destination) - Binary and function
OR(value,value,destination) - Binary or function
NOT(value,value,destination) - Binary not function
XOR(value,value,destination) - Binary exclusive or function
• Examples of the functions are,
11.17 Advanced Data Handling
11.17.1 Multiple Data Value Functions
• We can also deal with large ‘chunks’ of memory at once. These will not be covered, but are available in texts. Some functions include,
- move/copy memory blocks
- add/subtract/multiply/divide/and/or/eor/not/etc blocks of memory
• These functions are similar to single value functions, but they also include some matrix operations. For a PLC-5 a matrix, or block of memory is also known as an array.
• The basic functions are,
FAL(control,length,mode,destination,expression) - will perform basic math operations to multiple values.
FSC(control,length,mode,expression) - will do a comparison to multiple values
COP(start value,destination,length) - copies a block of values
FLL(value,destination,length) - copies a single value to a block of memory
• These functions are done on a PLC-5 using file commands. Typical operations include
file to file - copy an array of memory from one location to another.
element to file - one value is copied to a block of memory
file to element - can convert between data types
file add - add arrays
file subtract - subtract arrays
file multiply - multiply arrays
file divide - divide an array by a value
convert to/from BCD
AND/OR/XOR/NOT - perform binary functions.
• Examples of these functions are shown below.
• A useful function not implemented on PLC-5 processors is a memory exchange.
11.17.2 Block Transfer Functions
• Certain PLC cards only have a single address (eg. O:001 or I:001) but multiple data values need to be read or written to it. To do this the block transfer functions are used.
• These will be used in the labs for analog input/output cards.
• These functions will take more than a single scan, and so once activated they will require a delay until they finish.
• To use the write functions we set up a block of memory, the function shows this starting at N9:0, and it is 10 words long (this is determined by the special purpose card). The block transfer function also needs a control block of memory, this is BT10:1
• To read values we use a similar method. In the example below 9 values will be read from the card and be placed in memory locations from N9:4 to N9:11.
11.18 Complex Functions
11.18.1 Shift Registers
• The values can be shifted left or right with the following functions.
BSL - shifts left from the LSB to the MSB. The LSB must be supplied
BSR - similar to the BSL, except the bit is input to the MSB and shifted to the LSB
• These use bit memory blocks of variable length.
• An example of a shift register is given below. In this case it is taking the value of bit B3:1/0 and putting it in the control word bit R6:2/UL. It then shifts the bits once to the right, B3:1/0 = B3:1/1 then B3:1/1 = B3:1/2 then B3:1/2 = B3:1/3 then B3:1/3 = B3:1/4. Then the input bit is put into the most significant bit B3:1/4 = I:000/00.
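• The data movement in that shift can be written out explicitly. The sketch below is only an illustration (Python); the five list entries stand in for B3:1/0 to B3:1/4 and new_bit for the input I:000/00:

# Minimal sketch of the right shift described above: the lowest bit falls out
# into an 'unload' flag, everything moves down one place, and the new input
# bit enters at the top.
def shift_right(bits, new_bit):
    unload = bits[0]                     # -> R6:2/UL
    shifted = bits[1:] + [new_bit]       # B3:1/0 = B3:1/1, ..., B3:1/4 = input
    return shifted, unload

bits = [1, 0, 1, 1, 0]
bits, ul = shift_right(bits, 1)
print(bits, "unloaded:", ul)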
• There are other types of shift registers not implemented in PLC-5s.
• We can also use stack type commands. These allow values to be stored in a ‘pile’. This allows us to write programs that will accumulate values that can be used later, or in sequence.
• The basic concept of a FIFO stack is that the first element in is the first element out.
• The PLC-5 commands are FFL to load the stack, and FFU to unload it.
• The example below shows two instructions to load and unload the stack. The first time FFL is activated it will grab all of the bits from the input card I:001 and store them on the stack, at N7:0. The next value would be at N7:1, and so on until the stack length is met. When FFU is used the value at N7:0 will be moved to set all of the bits on the output card O:003 and the values on the stack will be shifted up so that the value previously in N7:1 is now in N7:0, etc. (note: the source and destination do not need to be inputs and outputs)
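• The FFL/FFU pair behaves like a software queue. A minimal sketch of the idea (Python; a real FFL/FFU also manages the R6 control bits such as EM and DN, which are omitted here):

from collections import deque

# Minimal FIFO sketch: FFL pushes a word onto the back of the queue,
# FFU pops the oldest word off the front (first in, first out).
stack = deque()

def FFL(value):                  # hypothetical stand-ins for the instructions
    stack.append(value)

def FFU():
    return stack.popleft() if stack else 0

FFL(0b0000000000000101)          # e.g. a snapshot of the input card bits
FFL(0b0000000000001100)
print(bin(FFU()))                # the first value in is the first value out
print(bin(FFU()))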
• A Last-In-First-Out stack can also be used with the LFL/LFU functions.
• Basically, sequencers are a method for using predetermined patterns to drive a process
• These were originally based on motor driven rotating cams that made and broke switches. When a number of these cams were put together, they would be equivalent to a binary number, and could control multiple system variables.
• A sequencer can keep a set of values in memory and move these to memory locations (such as an output card) when directed.
• These are well suited to state diagrams/processes with a single flow of execution (like traffic lights)
• The commands are,
SQO(start,mask,source,destination,control,length) - sequencer output from table to memory address
SQI(start,mask,source,control,length) - sequencer input from memory address to table
SQL(start,source,control,length) - sequencer load to set up the sequencer parameters
• An example of a sequencer is given below for traffic light control. The light patterns are stored in memory (entered manually by the programmer). These are then moved out to the output card as the function is activated. The mask (003F = 0000000000111111) is used so that only the 6 LSB are changed.
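• The stepping action of SQO can be sketched as indexing through a table and masking the value before it is written to the output card. The patterns below are made-up 6 bit light codes, not the ones from the original figure (illustrative Python only):

# Minimal SQO-like sketch: each call advances one step through a table of
# patterns, applies the mask, and returns the value destined for the outputs.
table = [0b001100, 0b010100, 0b100001, 0b100010]   # illustrative patterns only
MASK = 0x003F                                      # only the 6 LSBs are driven
position = 0

def sequencer_step():
    global position
    position = (position + 1) % len(table)         # wraps at the table length
    return table[position] & MASK

for _ in range(6):
    print(format(sequencer_step(), "06b"))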
11.19 ASCII Functions
• ASCII functions can be used to interpret and manipulate strings in PLCs.
• These functions include,
ABL(channel, control, )- reports the number of ASCII characters including line endings
ACB(channel, control, ) - reports the numbers of ASCII characters in buffer
ACI(string, dest) - convert ASCII string to integer
ACN(string, string,dest) - concatenate strings
AEX(string, start, length, dest) - this will cut a segment of a string out of a larger string
AIC(integer, string) - convert an integer to a string
AHL(channel, mask, mask, control) - does data handshaking
ARD(channel, dest, control, length) - will get characters from the ASCII buffer
ARL(channel, dest, control, length) - will get characters from an ASCII buffer
ASC(string, start, string, result) - this will look for one string inside another
AWT(channel, string, control, length) - will write characters to an ASCII output
• An example of this function is given below,
• Try the problem below,
11.20 Design Techniques
11.20.1 State Diagrams
• We can implement state diagrams seen in earlier sections using many of the advanced function discussed in this section.
• Most PLCs allow multiple programs that may be used as subroutines. We could implement a block logic method using subroutine programs.
• Consider the state diagram below and implement it in ladder logic. You should anticipate what will happen if both A and C are pushed at the same time.
11.21 Design Cases
• If-then can be implemented different ways, as a simple jump, or as a subroutine call.
• For-next can be implemented as shown below, but recall that PLC programs do not execute one line at a time.
• A For/Next function is also available in the PLC.
• A do-while can be done as a simple variation of this.
• Consider a conveyor where parts enter on one end. They will be checked for left or right orientation with a vision system. If neither left nor right is found, the part will be placed in a reject bin. The conveyor layout is shown below.
11.23 PLC Wiring
• Many configurations and packages are available. But essential components are:
power supply - Provides voltage/current to drive the electronics (often 5V, +/- 12V, +/- 24V)
CPU - Where ladder logic is stored and processed; the main control is executed here.
I/O (Input/Output) - A number of input/output terminals to connect to the actual system
Indicator lights - Indicate mode/power and status. These are essential when diagnosing problems.
• Common Configurations:
Rack/Chassis - A rack or number of racks are used to put PLC cards into. These are easy to change, and expand.
Shoebox - A compact, all-in-one unit (about the size of a shoebox) that has limited expansion capabilities. Lower cost and compactness make these ideal for small applications.
• Criteria for evaluation:
Rack, shoebox or micro
# of inputs/outputs (digital)
Memory - often 1K and up. Need is dictated by size of ladder logic program. A ladder element will take only a few bytes and will be specified in manufacturers documentation.
# of I/O modules - When doing some exotic applications, a large number of special add-on cards may be required.
Scan Time - The time to execute ladder logic elements. Big programs or faster processes will require shorter scan times. The shorter the scan time, the higher the cost. Typical values for this are 1 microsecond per simple ladder instruction.
Communications - Serial and networked connections allow the PLC to be programmed and talk to other PLCs. The needs are determined by the application.
11.23.1 Switched Inputs and Outputs
A PLC is just a computer. We must get information in so that it may make decisions and have outputs so that it can make things happen.
Switches - Contact, deadman, etc. all allow a voltage to be applied or removed from an input.
Relays - Used to isolate high voltages from the PLC inputs, these act as switches.
Encoder - Can keep track of positions.
11.23.1.1 - Input Modules
• Input modules typically accept various inputs depending upon specified values.
• Typical input voltages are:
• DC voltages are usually lower and, therefore, safer (i.e., 12-24V)
• DC inputs are very fast. AC inputs require a longer time (e.g., a 60Hz wave would require 1/60sec for reasonable recognition).
• DC voltages are flexible being able to connect to greater varieties of electrical systems.
• DC input cards typically have more inputs.
• AC signals are more immune to noise than DC, so they are suited to long distances and noisy (magnetic) environments.
• AC signals are very common in many existing automation devices.
11.23.1.2 - Actuators
• Inductive loads - Inductance is caused by a coil building up a magnetic field. When a voltage is removed from the coil, the field starts to collapse. As it does this, the magnetic field is changed back to current/voltage. If this change is too sudden, a large voltage spike is created. One way to overcome this is by adding a surge suppressor. One type of design was suggested by Steel McCreery of Omron Canada Ltd.
11.23.1.3 - Output Modules
• Typical Outputs
Motors - Motors often have their own controllers, or relays because of the high current they require.
Lights - Lights can often be powered directly from PLC output boards,
• WARNING - ALWAYS CHECK RATED VOLTAGES AND CURRENTS FOR PLC’s AND NEVER EXCEED!
• Typical outputs operate in one of two ways:
Dry contacts - A separate relay is dedicated to each output. This allows mixed voltages (AC or DC and voltage levels up to the maximum) as well as isolated outputs to protect other outputs and the PLC. Response times are often greater than 10ms. This method is the least sensitive to voltage variations and spikes.
Switched outputs - A voltage is supplied to the PLC card and the card switches it to different outputs using solid state circuitry (transistors, triacs, etc.) Triacs are well suited to AC devices requiring less than an amp. They are sensitive to power spikes and might inadvertently turn on when there are transient voltage spikes. A resistor may need to be put in parallel with a load to ensure enough current is drawn to turn on the triac. The resistor size can be determined by
Transistor outputs use NPN or PNP transistors up to 1A typically. Their response time is well under 1ms.
11.24 The PLC Environment
11.24.1 Electrical Wiring Diagrams
• PLC’s are generally used to control the supply of power to a system. As a result, a brief examination of electrical supply wiring diagrams is worthwhile.
• Generally electrical diagrams contain very basic circuit elements, such as relays, transformers, motors, fuses, lights, etc.
• Within these systems there is often a mix of AC and DC power. 3 phase AC power is what is delivered universally by electric utilities, so the wiring diagrams focus on AC circuits.
• A relay diagram for a simple motor with a seal-in circuit might look like the one shown below:
• The circuit designed for the motor controller must be laid out so that it may be installed in an insulated cabinet. In the figure below, each box could be a purchased module(s).
• After the Layout for the cabinet is determined, the wire paths must be determined. The figure below lays out the wire paths and modules to be used.
• Discrete inputs - If a group of input voltages are the same, they can be grouped together. An example of this is shown below:
• If the input voltages are different and/or come from different sources, the user might use isolated inputs.
• Analog Inputs - The continuous nature of these inputs makes them very sensitive to noise. More is discussed in the next section, and an example is given below:
11.24.3 Shielding and Grounding
• In any sort of control system, wire still carries most inputs/outputs/communications
• We transmit signals along wires by pushing/pulling electrons in one end of the metal wires. Based upon the push/pull that shows up at the other end, we determine the input/output/communications. *** The key idea is that a signal propagates along the wire.
• There are two problems that occur in these systems.
1. Different power sources in the same system can cause different power supply voltages at opposite ends of a wire. As a result, a current will flow and an unwanted voltage appears. This can destroy components and create false signal levels.
2. Magnetic fields crossing the long conductors or in conductor loops can induce currents, destroy equipment, give false readings, or add unwanted noise to analog data signals.
• General design points
- Choose a good shielding cabinet
- Avoid “noisy” equipment when possible
- Separate voltage levels, and AC/DC wires from each other when possible.
• typical sources of grounding problems are:
- Resistance coupled circuits
- Ground loops
• Shielded wire is one good approach to reducing electrostatic/magnetic interference. The conductors are housed in a conducting jacket, or the circuitry is housed in a conducting metal cabinet.
• Resistance coupled devices can have interference through a common power source, such as power spikes or brownouts caused by other devices in a factory.
• Ground loops are caused when too many separate connections to ground are made creating loops of wire that become excellent receivers for magnetic interference that induces differences in voltage between grounds on different machines. The common solution is to use a common ground bar.
11.24.4 PLC Environment
• Care must be taken to avoid certain environmental factors.
Dirt - Dust and grime can enter the PLC through air ventilation ducts. As dirt clogs internal and external circuitry, it can affect operation. A storage cabinet such as a NEMA 4 or 12 enclosure can help protect the PLC.
Humidity - Humidity is not a problem with the modern plastic construction materials. But if the humidity condenses, the water can cause corrosion, conduct current, etc. Condensation should be avoided at all costs.
Temperature - The semiconductor chips in the PLC have a limited operating temperature range. As the temperature moves out of this range, they will not operate properly and the PLC will shut down. Ambient heat generated in the PLC helps keep it operational at lower temperatures (generally down to 0°C). The upper limit for the devices is about 60°C, which is generally sufficient for sealed cabinets, but warm environments or other heat sources (e.g., direct irradiation from the sun) can raise the temperature above acceptable limits. In extreme conditions, heating or cooling units may be required. (This includes “cold starts” for PLCs before their semiconductors heat up.)
Shock and Vibration - The nature of most industrial equipment is to apply energy to exact changes. As this energy is applied, there are shocks and vibrations induced. Both will travel through solid materials with ease. While PLCs are designed to withstand a great deal of shock and vibration, special elastomer/sprung or other mounting equipment may be required. Also note that careful consideration of vibration is also required when wiring.
- Interference - Discussed in shielding and grounding.
- Power - Power will fluctuate in the factory as large equipment is turned on and off. To avoid this, various options are available, such as an isolation transformer. A UPS (Uninterruptible Power Supply) is also becoming an inexpensive option and is widely available for personal computers.
11.24.5 Special I/O Modules
• each card will have 1 to 16 counters generally.
• typical sample speeds 200KHz
• often allow count up/down
• the counter can be set to zero, or up/down, or gating can occur with an external input.
• High Speed Counter - When pulses are too fast to be counted during normal PLC ladder scans, a special counter can be used that will keep track of the pulses.
• Position controller - A card that will drive a motor (servo motor or stepper motor), and use feedback of the motor position to increase accuracy (feedback is optional with stepper motors).
• PID modules - For continuous systems, for example motor speed.
• There are 2 types of PID modules. In the first, the CPU does the calculation; in the second, a second controller card does the calculation.
- When the CPU does the calculation, the PID loop is slower.
- When a specialized card controls the PID loop, it is faster, but it costs more.
• Typical applications - positioning workpieces.
• Thermocouple - Thermocouples can be used to measure temperature, but these low voltage devices require sensitive electronics to get accurate temperature readings.
• Analog Input/Output - These cards measure voltages in various ranges and allow monitoring of continuous processes. These cards can also output analog voltages to help control external processes, etc.
• Programmers - There are a few basic types of programmers in use. These tend to fall into 3 categories:
1. Hand held units (or integrated) - They allow programming of the PLC using a calculator-type interface; programming is often done using mnemonics.
2. Specialized programming units - Effectively these are portable computers that allow graphical editing of the ladder logic and fast uploading/downloading/monitoring of the PLC.
3. PLC Software for Personal Computers - They are similar to the specialized programming units, but the software runs on a multi-use, user supplied computer. This approach is typically preferred over 2.
• Man Machine Interface (MMI) - To talk to the PLC, the user can use:
• touch screens
• screen and buttons
• LCD/LED and buttons
• a keypad
• PLC CPU’s - A wide variety of CPU’s are available and can often be used interchangeably in the rack systems. The basic formula is price/performance. The table below compares a few CPU units in various criteria.
• Specialty cards for IBM PC interface.
- Siemens/Allen-Bradley/Etc have cards that fit into IBM computers and will communicate with PLC’s. Most modern PLCs will connect directly to a PC using ethernet or serial (RS-232) cables.
• IBM PC computer cards - an IBM compatible computer card that plugs into a PLC bus and allows use of common software
• For example, the Siemens CP580 Simatic AT
- 1 com port (RS-232C)
- 1 serial port (?)
- 1 RS-422 serial port
- RGB monitor driver (VGA)
- 3.5” disk
- TTY interface
- 9 pin RS-232C mouse
• Diagnostic Modules
- Plug in and all they do is watch for trouble.
• ID Tags - Special “tags” can be attached to products and, as they pass within range of pickup sensors, they transmit (via radio) an ID number or a packet of data. This data can then be used, updated and rewritten to the tags by the PLC
• e.g., Omron V600/V620 ID system
• a basic method for transmission of a text based message
• tags on parts carry message
• transceivers that receive and transmit changes
• Voice Recognition/Speech - In some cases verbal I/O can be useful. Speech recognition methods are still very limited: the user must control their speech, and background noise causes problems.
11.25 Practice Problems
1. A switch will turn a counter on when engaged. This counter can be reset by a second switch. The value in the counter should be multiplied by 5, and then displayed as a binary output using (201-208)
2. Develop Ladder Logic for a car door/seat belt safety system. When the car door is open, or the seatbelt is not done up, the ignition power must not be applied. In addition the key must be able to switch ignition power.
1. List of Inputs
2. Draw Ladder
3. TRUE / FALSE -- PLC outputs can be set with Bytes instead of bits.
4. Create a ladder logic program that will start when input ‘A’ is turned on and calculate the series below. The value of ‘n’ will start at 1 and with each scan of the ladder logic ‘n’ will increase until n=100. While the sequence is being incremented, any change in ‘A’ will be ignored.
5. A thumbwheel input card acquires a four digit BCD count. A sensor detects parts dropping down a chute. When the count matches the BCD value the chute is closed, and a light is turned on until a reset button is pushed. A start button must be pushed to start the part feeding. Develop the ladder logic for this controller. Use a structured design technique such as a state diagram.
6. Design and write ladder logic for a simple traffic light controller that has a single fixed sequence of 16 seconds for both green lights and 4 second for both yellow lights. Use either stacks or sequencers.
7. A PLC is to be used to control a carillon (a bell tower). Each bell corresponds to a musical note and each has a pneumatic actuator that will ring it. The table below defines the tune to be programmed. Write a program that will run the tune once each time a start button is pushed. A stop button will stop the song.
8. The following program uses indirect addressing. Indicate what the new values in memory will be when button A is pushed after the first and second instructions.
9. Divide the string in ST10:0 by the string in ST10:1 and store the results in ST10:2. Check for a divide by zero error.
10. Write a number guessing program that will allow a user to enter a number on a terminal that transmits it to a PLC, where it is compared to a value in ’N7:0’. If the guess is above the value, "Hi" will be returned; if below, "Lo" will be returned; when it matches, "ON" will be returned.
11. Write a program that will convert a numerical value stored in ‘F8:0’ and write it out the RS-232 output on a PLC-5 processor.
11.27 Laboratory - Serial Interfacing to a PLC
To write C++ and ladder logic programs that communicate over RS-232.
only transmit a fixed number of characters
line endings important
when the PLC receives the following characters it should (see the sketch after this list),
A - turn on an output
B - turn off an output
C - return ’0’ if output is off, or ’1’ if output is on
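As referenced above, here is a minimal host-side sketch of the A/B/C protocol in Python using the pyserial package; the lab itself asks for C++, and the port name, baud rate, and line ending are assumptions that must be matched to the actual PLC configuration.

```python
# Minimal host-side test of the A/B/C serial protocol described above.
# Assumptions: pyserial is installed, the PLC is on COM1 at 9600 baud,
# and commands are terminated with a carriage return.  Adjust as needed.
import serial

PORT = "COM1"        # assumed port name
BAUD = 9600          # assumed baud rate
EOL = b"\r"          # assumed line ending expected by the ladder logic

def send_command(ser, command):
    """Send a single-character command and return any reply from the PLC."""
    ser.write(command.encode("ascii") + EOL)
    return ser.read(16)  # read up to 16 bytes; '0' or '1' expected for 'C'

ser = serial.Serial(PORT, BAUD, timeout=1)
try:
    send_command(ser, "A")            # turn the output on
    print(send_command(ser, "C"))     # should report b'1...'
    send_command(ser, "B")            # turn the output off
    print(send_command(ser, "C"))     # should report b'0...'
finally:
    ser.close()
```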
1. If necessary review PLC basics and the PLC-5 tutorial.
2. Write a ladder logic program to receive ASCII commands as described in the Overview, and perform the desired action.
3. Write a C++ program to communicate with the ladder logic program using a user menu.
1. Enter the ladder logic program and test it with a terminal program.
2. Enter the C++ program and test it with a terminal emulator.
3. Test the two programs together.
In statistics, a confidence interval (CI) is a type of interval estimate of a population parameter and is used to indicate the reliability of an estimate. It is an observed interval (i.e. it is calculated from the observations), in principle different from sample to sample, that frequently includes the parameter of interest if the experiment is repeated. How frequently the observed interval contains the parameter is determined by the confidence level or confidence coefficient. More specifically, the meaning of the term "confidence level" is that, if confidence intervals are constructed across many separate data analyses of repeated (and possibly different) experiments, the proportion of such intervals that contain the true value of the parameter will match the confidence level; this is guaranteed by the reasoning underlying the construction of confidence intervals. Whereas two-sided confidence limits form a confidence interval, their one-sided counterparts are referred to as lower or upper confidence bounds.
Confidence intervals consist of a range of values (interval) that act as good estimates of the unknown population parameter. However, in infrequent cases, none of these values may cover the value of the parameter. The level of confidence of the confidence interval indicates the probability that the confidence range captures this true population parameter given a distribution of samples. It does not describe any single sample. This value is represented by a percentage, so when we say, "we are 99% confident that the true value of the parameter is in our confidence interval", we express that 99% of the observed confidence intervals will hold the true value of the parameter. After a sample is taken, the population parameter is either in the interval or not; it is no longer a matter of chance. The desired level of confidence is set by the researcher (not determined by data). If a corresponding hypothesis test is performed, the confidence level is the complement of the respective level of significance, i.e. a 95% confidence interval reflects a significance level of 0.05. The confidence interval contains the parameter values that, when tested, should not be rejected with the same sample. Greater levels of variance yield larger confidence intervals, and hence less precise estimates of the parameter. Confidence intervals of difference parameters not containing 0 imply that there is a statistically significant difference between the populations.
In applied practice, confidence intervals are typically stated at the 95% confidence level. However, when presented graphically, confidence intervals can be shown at several confidence levels, for example 50%, 95% and 99%.
Certain factors may affect the confidence interval size including size of sample, level of confidence, and population variability. A larger sample size normally will lead to a better estimate of the population parameter.
A confidence interval does not predict that the true value of the parameter has a particular probability of being in the confidence interval given the data actually obtained. (An interval intended to have such a property, called a credible interval, can be estimated using Bayesian methods; but such methods bring with them their own distinct strengths and weaknesses.)
Interval estimates can be contrasted with point estimates. A point estimate is a single value given as the estimate of a population parameter that is of interest, for example the mean of some quantity. An interval estimate specifies instead a range within which the parameter is estimated to lie. Confidence intervals are commonly reported in tables or graphs along with point estimates of the same parameters, to show the reliability of the estimates.
For example, a confidence interval can be used to describe how reliable survey results are. In a poll of election voting-intentions, the result might be that 40% of respondents intend to vote for a certain party. A 90% confidence interval for the proportion in the whole population having the same intention on the survey date might be 38% to 42%. From the same data one may calculate a 95% confidence interval, which in this case might be 36% to 44%. A major factor determining the length of a confidence interval is the size of the sample used in the estimation procedure, for example the number of people taking part in a survey.
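For illustration, the sketch below computes normal-approximation confidence intervals for the 40% figure in this poll example; the sample size is an assumption (it is not given in the text), so the resulting intervals only roughly resemble the quoted 38-42% and 36-44% ranges.

```python
# Normal-approximation confidence intervals for a poll proportion.
# The observed 40% comes from the example above; the sample size of 1,000
# is an assumption for illustration.
from math import sqrt

p_hat = 0.40      # observed proportion intending to vote for the party
n = 1000          # assumed number of respondents

se = sqrt(p_hat * (1 - p_hat) / n)
for conf, z in ((0.90, 1.645), (0.95, 1.960)):
    low, high = p_hat - z * se, p_hat + z * se
    print(f"{conf:.0%} CI: {low:.1%} to {high:.1%}")
```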
Meaning and interpretation
For users of frequentist methods, various interpretations of a confidence interval can be given.
- The confidence interval can be expressed in terms of samples (or repeated samples): "Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 90% of the time." Note that this does not refer to repeated measurement of the same sample, but repeated sampling.
- The explanation of a confidence interval can amount to something like: "The confidence interval represents values for the population parameter for which the difference between the parameter and the observed estimate is not statistically significant at the 10% level". In fact, this relates to one particular way in which a confidence interval may be constructed.
- The probability associated with a confidence interval may also be considered from a pre-experiment point of view, in the same context in which arguments for the random allocation of treatments to study items are made. Here the experimenter sets out the way in which they intend to calculate a confidence interval and know, before they do the actual experiment, that the interval they will end up calculating has a certain chance of covering the true but unknown value. This is very similar to the "repeated sample" interpretation above, except that it avoids relying on considering hypothetical repeats of a sampling procedure that may not be repeatable in any meaningful sense. See Neyman construction.
In each of the above, the following applies: If the true value of the parameter lies outside the 90% confidence interval once it has been calculated, then an event has occurred which had a probability of 10% (or less) of happening by chance.
The principle behind confidence intervals was formulated to provide an answer to the question raised in statistical inference of how to deal with the uncertainty inherent in results derived from data that are themselves only a randomly selected subset of a population. There are other answers, notably that provided by Bayesian inference in the form of credible intervals. Confidence intervals correspond to a chosen rule for determining the confidence bounds, where this rule is essentially determined before any data are obtained, or before an experiment is done. The rule is defined such that over all possible datasets that might be obtained, there is a high probability ("high" is specifically quantified) that the interval determined by the rule will include the true value of the quantity under consideration. That is a fairly straightforward and reasonable way of specifying a rule for determining uncertainty intervals. The Bayesian approach appears to offer intervals that can, subject to acceptance of an interpretation of "probability" as Bayesian probability, be interpreted as meaning that the specific interval calculated from a given dataset has a certain probability of including the true value, conditional on the data and other information available. The confidence interval approach does not allow this, since in this formulation and at this same stage, both the bounds of the interval and the true values are fixed values and there is no randomness involved.
For example, in the poll example outlined in the introduction, to be 95% confident that the actual number of voters intending to vote for the party in question is between 36% and 44%, should not be interpreted in the common-sense interpretation that there is a 95% probability that the actual number of voters intending to vote for the party in question is between 36% and 44%. The actual meaning of confidence levels and confidence intervals is rather more subtle. In the above case, a correct interpretation would be as follows: If the polling were repeated a large number of times (you could produce a 95% confidence interval for your polling confidence interval), each time generating about a 95% confidence interval from the poll sample, then 95% of the generated intervals would contain the true percentage of voters who intend to vote for the given party. Each time the polling is repeated, a different confidence interval is produced; hence, it is not possible to make absolute statements about probabilities for any one given interval. For more information, see the section on meaning and interpretation.
The questions concerning how an interval expressing uncertainty in an estimate might be formulated, and how such intervals might be interpreted, are not strictly mathematical problems and are philosophically problematic. Mathematics can take over once the basic principles of an approach to inference have been established, but it has only a limited role in saying why one approach should be preferred to another.
Relationship with other statistical topics
Statistical hypothesis testing
Confidence intervals are closely related to statistical significance testing. For example, if for some estimated parameter θ one wants to test the null hypothesis that θ = 0 against the alternative that θ ≠ 0, then this test can be performed by determining whether the confidence interval for θ contains 0.
More generally, given the availability of a hypothesis testing procedure that can test the null hypothesis θ = θ0 against the alternative that θ ≠ θ0 for any value of θ0, then a confidence interval with confidence level γ = 1 − α can be defined as containing any number θ0 for which the corresponding null hypothesis is not rejected at significance level α.
In consequence, if the estimates of two parameters (for example, the mean values of a variable in two independent groups of objects) have confidence intervals at a given γ value that do not overlap, then the difference between the two values is significant at the corresponding value of α. However, this test is too conservative. If two confidence intervals overlap, the difference between the two means still may be significantly different.
While the formulations of the notions of confidence intervals and of statistical hypothesis testing are distinct they are in some senses related and to some extent complementary. While not all confidence intervals are constructed in this way, one general purpose approach to constructing confidence intervals is to define a 100(1 − α)% confidence interval to consist of all those values θ0 for which a test of the hypothesis θ = θ0 is not rejected at a significance level of 100α%. Such an approach may not always be available since it presupposes the practical availability of an appropriate significance test. Naturally, any assumptions required for the significance test would carry over to the confidence intervals.
It may be convenient to make the general correspondence that parameter values within a confidence interval are equivalent to those values that would not be rejected by a hypothesis test, but this would be dangerous. In many instances the confidence intervals that are quoted are only approximately valid, perhaps derived from "plus or minus twice the standard error", and the implications of this for the supposedly corresponding hypothesis tests are usually unknown.
It is worth noting that the confidence interval for a parameter is not the same as the acceptance region of a test for this parameter, as is sometimes thought. The confidence interval is part of the parameter space, whereas the acceptance region is part of the sample space. For the same reason the confidence level is not the same as the complementary probability of the level of significance.
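The correspondence between a 95% confidence interval and a two-sided test at the 5% significance level, described above, can be checked numerically; in the sketch below the sample mean, standard deviation, and sample size are made-up values, and a normal approximation is used.

```python
# Check the duality between a 95% z-interval and a two-sided z-test at alpha = 0.05.
# The sample statistics below are made-up numbers for illustration.
from math import sqrt
from scipy import stats

xbar, sigma, n = 10.4, 2.0, 50        # assumed sample mean, known sd, sample size
se = sigma / sqrt(n)
z = stats.norm.ppf(0.975)             # about 1.96
ci = (xbar - z * se, xbar + z * se)

def p_value(theta0):
    """Two-sided p-value for H0: theta = theta0 under the normal approximation."""
    return 2 * stats.norm.sf(abs(xbar - theta0) / se)

for theta0 in (10.0, 11.2):
    inside = ci[0] <= theta0 <= ci[1]
    print(f"theta0={theta0}: in 95% CI: {inside}, p-value={p_value(theta0):.3f}")
    # theta0 lies inside the interval exactly when the p-value exceeds 0.05
```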
Confidence regions generalize the confidence interval concept to deal with multiple quantities. Such regions can indicate not only the extent of likely sampling errors but can also reveal whether (for example) it is the case that if the estimate for one quantity is unreliable then the other is also likely to be unreliable.
Let X be a random sample from a probability distribution with statistical parameters θ, which is a quantity to be estimated, and φ, representing quantities that are not of immediate interest. A confidence interval for the parameter θ, with confidence level or confidence coefficient γ, is an interval with random endpoints (u(X), v(X)), determined by the pair of random variables u(X) and v(X), with the property: Prθ,φ(u(X) < θ < v(X)) = γ for every value of (θ, φ).
The quantities φ in which there is no immediate interest are called nuisance parameters, as statistical theory still needs to find some way to deal with them. The number γ, with typical values close to but not greater than 1, is sometimes given in the form 1 − α (or as a percentage 100%·(1 − α)), where α is a small non-negative number, close to 0.
Here Prθ,φ indicates the probability distribution of X characterised by (θ, φ). An important part of this specification is that the random interval (u(X), v(X)) covers the unknown value θ with a high probability no matter what the true value of θ actually is.
Note that here Prθ,φ need not refer to an explicitly given parameterised family of distributions, although it often does. Just as the random variable X notionally corresponds to other possible realizations of x from the same population or from the same version of reality, the parameters (θ, φ) indicate that we need to consider other versions of reality in which the distribution of X might have different characteristics.
In a specific situation, when x is the outcome of the sample X, the interval (u(x), v(x)) is also referred to as a confidence interval for θ. Note that it is no longer possible to say that the (observed) interval (u(x), v(x)) has probability γ to contain the parameter θ. This observed interval is just one realization of all possible intervals for which the probability statement holds.
Approximate confidence intervals
In many applications, confidence intervals that have exactly the required confidence level are hard to construct. But practically useful intervals can still be found: the rule for constructing the interval may be accepted as providing a confidence interval at level γ if Prθ,φ(u(X) < θ < v(X)) ≈ γ
to an acceptable level of approximation. Alternatively, some authors simply require that Prθ,φ(u(X) < θ < v(X)) ≥ γ,
which is useful if the probabilities are only partially identified, or imprecise.
When applying standard statistical procedures, there will often be standard ways of constructing confidence intervals. These will have been devised so as to meet certain desirable properties, which will hold given that the assumptions on which the procedure relies are true. These desirable properties may be described as: validity, optimality and invariance. Of these "validity" is most important, followed closely by "optimality". "Invariance" may be considered as a property of the method of derivation of a confidence interval rather than of the rule for constructing the interval. In non-standard applications, the same desirable properties would be sought.
- Validity. This means that the nominal coverage probability (confidence level) of the confidence interval should hold, either exactly or to a good approximation.
- Optimality. This means that the rule for constructing the confidence interval should make as much use of the information in the data-set as possible. Recall that one could throw away half of a dataset and still be able to derive a valid confidence interval. One way of assessing optimality is by the length of the interval, so that a rule for constructing a confidence interval is judged better than another if it leads to intervals whose lengths are typically shorter.
- Invariance. In many applications the quantity being estimated might not be tightly defined as such. For example, a survey might result in an estimate of the median income in a population, but it might equally be considered as providing an estimate of the logarithm of the median income, given that this is a common scale for presenting graphical results. It would be desirable that the method used for constructing a confidence interval for the median income would give equivalent results when applied to constructing a confidence interval for the logarithm of the median income: specifically the values at the ends of the latter interval would be the logarithms of the values at the ends of former interval.
Methods of derivation
For non-standard applications, there are several routes that might be taken to derive a rule for the construction of confidence intervals. Established rules for standard procedures might be justified or explained via several of these routes. Typically a rule for constructing confidence intervals is closely tied to a particular way of finding a point estimate of the quantity being considered.
- Descriptive statistics
- This is closely related to the method of moments for estimation. A simple example arises where the quantity to be estimated is the mean, in which case a natural estimate is the sample mean. The usual arguments indicate that the sample variance can be used to estimate the variance of the sample mean. A naive confidence interval for the true mean can be constructed centered on the sample mean with a width which is a multiple of the square root of the sample variance.
- Likelihood theory
- Where estimates are constructed using the maximum likelihood principle, the theory for this provides two ways of constructing confidence intervals or confidence regions for the estimates.
- Estimating equations
- The estimation approach here can be considered as both a generalization of the method of moments and a generalization of the maximum likelihood approach. There are corresponding generalizations of the results of maximum likelihood theory that allow confidence intervals to be constructed based on estimates derived from estimating equations.
- Via significance testing
- If significance tests are available for general values of a parameter, then confidence intervals/regions can be constructed by including in the 100p% confidence region all those points for which the significance test of the null hypothesis that the true value is the given value is not rejected at a significance level of (1-p).
- Bootstrapping (resampling)
- In situations where the distributional assumptions of the above methods are uncertain or violated, resampling methods allow construction of confidence intervals or prediction intervals. The observed data distribution and its internal correlations are used as the surrogate for the correlations in the wider population.
A machine fills cups with a liquid, and is supposed to be adjusted so that the content of the cups is 250 g of liquid. As the machine cannot fill every cup with exactly 250 g, the content added to individual cups shows some variation, and is considered a random variable X. This variation is assumed to be normally distributed around the desired average of 250 g, with a standard deviation of 2.5 g. To determine if the machine is adequately calibrated, a sample of n = 25 cups of liquid are chosen at random and the cups are weighed. The resulting measured masses of liquid are X1, ..., X25, a random sample from X.
To get an impression of the expectation μ, it is sufficient to give an estimate. The appropriate estimator is the sample mean: X = (X1 + ... + X25)/25.
The sample shows actual weights x1, ..., x25, with mean: x = (x1 + ... + x25)/25 = 250.2 grams.
If we take another sample of 25 cups, we could easily expect to find mass values like 250.4 or 251.1 grams. A sample mean value of 280 grams however would be extremely rare if the mean content of the cups is in fact close to 250 grams. There is a whole interval around the observed value 250.2 grams of the sample mean within which, if the whole population mean actually takes a value in this range, the observed data would not be considered particularly unusual. Such an interval is called a confidence interval for the parameter μ. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves.
In our case we may determine the endpoints by considering that the sample mean X from a normally distributed sample is also normally distributed, with the same expectation μ, but with a standard error of: σ/√n = 2.5/√25 = 0.5 grams.
By standardizing, we get a random variable: Z = (X − μ)/(σ/√n) = (X − μ)/0.5,
dependent on the parameter μ to be estimated, but with a standard normal distribution independent of the parameter μ. Hence it is possible to find numbers −z and z, independent of μ, between which Z lies with probability 1 − α, a measure of how confident we want to be.
We take 1 − α = 0.95, for example. So we have: Pr(−z ≤ Z ≤ z) = 1 − α = 0.95, where z = 1.96 is the 97.5th percentile of the standard normal distribution,
and we get: Pr(X − 1.96·σ/√n ≤ μ ≤ X + 1.96·σ/√n) = 0.95.
In other words, the lower endpoint of the 95% confidence interval is: X − 1.96·σ/√n,
and the upper endpoint of the 95% confidence interval is: X + 1.96·σ/√n.
With the values in this example, the confidence interval is: Pr(X − 0.98 ≤ μ ≤ X + 0.98) = 0.95.
This might be interpreted as: with probability 0.95 we will find a confidence interval in which we will meet the parameter μ between the stochastic endpoints X − 0.98 and X + 0.98.
This does not mean that there is 0.95 probability of meeting the parameter μ in the interval obtained by using the currently computed value of the sample mean, (x − 0.98, x + 0.98).
Instead, every time the measurements are repeated, there will be another value for the mean X of the sample. In 95% of the cases μ will be between the endpoints calculated from this mean, but in 5% of the cases it will not be. The actual confidence interval is calculated by entering the measured masses in the formula. Our 0.95 confidence interval becomes: (250.2 − 0.98, 250.2 + 0.98) = (249.22, 251.18).
In other words, the 95% confidence interval is between the lower endpoint 249.22 g and the upper endpoint 251.18 g.
As the desired value 250 of μ is within the resulted confidence interval, there is no reason to believe the machine is wrongly calibrated.
The calculated interval has fixed endpoints, where μ might be in between (or not). Thus this event has probability either 0 or 1. One cannot say: "with probability (1 − α) the parameter μ lies in the confidence interval." One only knows that by repetition in 100(1 − α) % of the cases, μ will be in the calculated interval. In 100α% of the cases however it does not. And unfortunately one does not know in which of the cases this happens. That is why one can say: "with confidence level 100(1 − α) %, μ lies in the confidence interval."
The maximum error (margin of error) is calculated to be 0.98 grams, since it is the difference between the sample mean and either the upper or the lower endpoint of the interval.
The figure on the right shows 50 realizations of a confidence interval for a given population mean μ. If we randomly choose one realization, the probability is 95% we end up having chosen an interval that contains the parameter; however we may be unlucky and have picked the wrong one. We will never know; we are stuck with our interval.
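The interval in this example can be reproduced in a few lines; the sketch below only uses the values given in the text (σ = 2.5 g, n = 25, observed mean 250.2 g, z = 1.96).

```python
# Reproduce the 95% confidence interval from the cup-filling example.
# All numbers (sigma, n, observed mean, z) are taken from the text above.
from math import sqrt

sigma = 2.5          # known standard deviation of the filling process, in grams
n = 25               # number of cups weighed
xbar = 250.2         # observed sample mean, in grams
z = 1.96             # 97.5th percentile of the standard normal distribution

margin = z * sigma / sqrt(n)              # 1.96 * 0.5 = 0.98 g
low, high = xbar - margin, xbar + margin
print(f"95% CI: ({low:.2f} g, {high:.2f} g)")   # (249.22 g, 251.18 g)
```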
When the population standard deviation σ is unknown, it is estimated by the sample standard deviation S, and the quantity T = (X − μ)/(S/√n) has a Student's t-distribution with n − 1 degrees of freedom. Note that the distribution of T does not depend on the values of the unobservable parameters μ and σ2; i.e., it is a pivotal quantity. Suppose we wanted to calculate a 95% confidence interval for μ. Then, denoting c as the 97.5th percentile of this distribution, Pr(−c ≤ T ≤ c) = 0.95.
(Note: "97.5th" and "0.95" are correct in the preceding expressions. There is a 2.5% chance that T will be less than −c and a 2.5% chance that it will be larger than +c. Thus, the probability that T will be between −c and +c is 95%.)
Consequently, Pr(X − cS/√n ≤ μ ≤ X + cS/√n) = 0.95, and we have a theoretical (stochastic) 95% confidence interval for μ.
After observing the sample we find values x for X and s for S, from which we compute the confidence interval (x − cs/√n, x + cs/√n),
an interval with fixed numbers as endpoints, of which we can no longer say there is a certain probability it contains the parameter μ; either μ is in this interval or isn't.
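A minimal sketch of this t-based interval follows; the sample data are made-up values, and scipy is assumed to be available for the t quantile.

```python
# t-based 95% confidence interval for the mean when the standard deviation
# is estimated from the sample.  The data below are made-up illustration values.
from math import sqrt
from scipy import stats

data = [249.5, 250.9, 251.3, 248.7, 250.1, 249.9, 250.6, 251.0]
n = len(data)
xbar = sum(data) / n
s = sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))   # sample standard deviation

c = stats.t.ppf(0.975, df=n - 1)     # 97.5th percentile of t with n-1 degrees of freedom
margin = c * s / sqrt(n)
print(f"95% CI for the mean: ({xbar - margin:.2f}, {xbar + margin:.2f})")
```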
Alternatives and critiques
Confidence intervals are one method of interval estimation, and the most widely used in frequentist statistics. An analogous concept in Bayesian statistics is credible intervals, while an alternative frequentist method is that of prediction intervals which, rather than estimating parameters, estimate the outcome of future samples. For other approaches to expressing uncertainty using intervals, see interval estimation.
There is disagreement about which of these methods produces the most useful results: the mathematics of the computations are rarely in question–confidence intervals being based on sampling distributions, credible intervals being based on Bayes' theorem–but the application of these methods, the utility and interpretation of the produced statistics, is debated.
Users of Bayesian methods, if they produced an interval estimate, would in contrast to confidence intervals, want to say "My degree of belief that the parameter is in fact in this interval is 90%," while users of prediction intervals would instead say "I predict that the next sample will fall in this interval 90% of the time."
Confidence intervals are an expression of probability and are subject to the normal laws of probability. If several statistics are presented with confidence intervals, each calculated separately on the assumption of independence, that assumption must be honoured or the calculations will be rendered invalid. For example, if a researcher generates a set of statistics with intervals and selects some of them as significant, the act of selecting invalidates the calculations used to generate the intervals.
Comparison to prediction intervals
A prediction interval for a random variable is defined similarly to a confidence interval for a statistical parameter. Consider an additional random variable Y which may or may not be statistically dependent on the random sample X. Then (u(X), v(X)) provides a prediction interval for the as-yet-to-be observed value y of Y if Prθ,φ(u(X) < Y < v(X)) = γ.
Comparison to Bayesian interval estimates
A Bayesian credible interval (u(x), v(x)) is instead required to satisfy Pr(u(x) < Θ < v(x) | X = x) = γ. Here Θ is used to emphasize that the unknown value of θ is being treated as a random variable. The definitions of the two types of intervals may be compared as follows.
- The definition of a confidence interval involves probabilities calculated from the distribution of X for given (θ, φ) (or conditional on these values) and the condition needs to hold for all values of (θ, φ).
- The definition of a credible interval involves probabilities calculated from the distribution of Θ conditional on the observed values of X = x and marginalised (or averaged) over the values of Φ, where this last quantity is the random variable corresponding to the uncertainty about the nuisance parameters in φ.
Note that the treatment of the nuisance parameters above is often omitted from discussions comparing confidence and credible intervals but it is markedly different between the two cases.
In some simple standard cases, the intervals produced as confidence and credible intervals from the same data set can be identical. They are very different if informative prior information is included in the Bayesian analysis; and may be very different for some parts of the space of possible data even if the Bayesian prior is relatively uninformative.
An approximate confidence interval for a population mean can be constructed for random variables that are not normally distributed in the population, relying on the central limit theorem, if the sample sizes and counts are big enough. The formulae are identical to the case above (where the sample mean is actually normally distributed about the population mean). The approximation will be quite good with only a few dozen observations in the sample if the probability distribution of the random variable is not too different from the normal distribution (e.g. its cumulative distribution function does not have any discontinuities and its skewness is moderate).
One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. The mean of such a variable is equal to the proportion that have the variable equal to one (both in the population and in any sample). This is a useful property of indicator variables, especially for hypothesis testing. To apply the central limit theorem, one must use a large enough sample. A rough rule of thumb is that one should see at least 5 cases in which the indicator is 1 and at least 5 in which it is 0. Confidence intervals constructed using the above formulae may include negative numbers or numbers greater than 1, but proportions obviously cannot be negative or exceed 1. Additionally, sample proportions can only take on a finite number of values, so the central limit theorem and the normal distribution are not the best tools for building a confidence interval. See "Binomial proportion confidence interval" for better methods which are specific to this case.
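The sketch below applies the normal-approximation interval to an indicator mean and includes the rule-of-thumb check just described; the counts are made-up values.

```python
# Normal-approximation confidence interval for a proportion (an indicator mean),
# with the rule-of-thumb check of at least 5 successes and 5 failures.
# The counts below are made-up illustration values.
from math import sqrt

successes, n = 18, 60                 # assumed data: 18 ones out of 60 observations
failures = n - successes
if min(successes, failures) < 5:
    print("Rule of thumb violated: use an exact binomial interval instead.")
else:
    p_hat = successes / n
    se = sqrt(p_hat * (1 - p_hat) / n)
    low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
    # Clip to [0, 1], since a proportion cannot lie outside that range.
    print(f"95% CI: ({max(low, 0.0):.3f}, {min(high, 1.0):.3f})")
```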
- CLs upper limits
- Confidence band
- Confidence interval for binomial distribution
- Confidence interval for exponent of the Power law distribution
- Confidence interval for mean of the Exponential distribution
- Confidence interval for mean of the Poisson distribution
- Confidence intervals for mean and variance of the Normal distribution
- Cumulative distribution function-based nonparametric confidence interval
- Error bar
- Estimation statistics
- Robust confidence intervals
- Tolerance interval
Special Relativity assumes time is a dimension, i.e. space-time is Minkowski space. There are thus four coordinates in this space, xi with the index i taking the values 0,1,2,3. Since time has different units than length, to be able to describe space and time as elements of one space-time we have to multiply time by a constant of dimension length/time, i.e. a velocity. This constant is usually denoted c. It is then x0 = c t. We will come back to the meaning of this constant later.
The other ingredient of Special Relativity is that the laws of physics are same for all observers with constant velocity. That means there are sensible and well-defined transformations between observers that preserve the form of the equations.
A Word or Two about Tensors
The way to achieve such sensible transformations is to make the equations "tensor equations", since a tensor does exactly what we want: it transforms in a well-defined way under a change from one to the other observer's coordinate system. The simplest sort of a tensor is a scalar φ, which doesn't transform at all - it's just the same in all coordinate systems. That doesn't mean it has the same value at each point though, so it is actually a scalar field.
The next simplest tensor is a vector Vi which has one index that runs from 0 to 3, corresponding to four entries - three for the spatial and one for the time-component. Again this can be a position dependent quantity, so it's actually a vector field. The next tensor has two indices Tij that run from 0 to 3, so 16 entries, and so on: Uijklmn.... The number of indices is also called the "rank" of a tensor. To transform a tensor from one coordinate system in the other, one acts on it with the transformation matrix, one for every index. We will come to this transformation later.
Note that it is meaningless to say an object defined in only one inertial frame is a tensor. If you have it in only one frame, you can always make it into a tensor by just defining it in every other frame to be the appropriately transformed version.
The Scalar Product
A specifically important scalar for Special Relativity is the scalar product between two vectors. The scalar product is a symmetric bilinear form, which basically means it's given by a rank two tensor gij that doesn't care in which order the indices come, and if you shovel in two vectors out comes a scalar. It goes like this:
gijViUj = scalar,
where sums are taken over indices that appear twice, once up and once down. This is also known as Einstein's summation convention.
I used to have a photo of Einstein with him standing in front of a blackboard cluttered with sum symbols. Unfortunately I can't find it online, a reference would be highly welcome. That photo made really clear why the convention was introduced. Today the sum convention is so common that it often isn't even mentioned. In fact, you will have to tell readers instead not to sum over equal indices if that's what you mean.
The scalar product is a property of the space one operates in. It tells you what the length of a vector is, and the angles between different vectors. That means it describes how to do measurements in that space. The bilinear form you need for this is also called the "metric"; you can use it to raise and lower indices on vectors in the following way: gijVj = Vi. Note how indices on both sides match: if you leave out the indices that appear both up and down, the remaining indices have to be equal on both sides.
Technically, the metric is a map from the tangential to the co-tangential space; it thus transforms row-vectors V into column vectors VT and vice versa, where the T means taking the transpose. A lower index is also called "covariant", whereas upper indices are called "contravariant," just to give you some lingo. The index jiggling is also called "Ricci calculus" and is one of the common ways to calculate in General Relativity. The other possibility is to go indexless via differential forms. If you use indices, here is some good advice: make sure you don't accidentally use an index twice for different purposes in one equation. You can produce all kinds of nonsense that way.
In Special Relativity, the metric is (in Euclidean coordinates) just a diagonal matrix with entries (1,-1,-1,-1), usually denoted with ηij. In the case of a curved space-time it is denoted with gij as I used above, but that General case is a different story and shall be told another time. So for now let us stick with the case of Special Relativity where the scalar product is defined through η.
Now what is a Lorentz transformation? Let us denote it with Λ. As mentioned above, you need one for every index of your tensor that you want to transform. Say we want to get a vector V from one coordinate system to the other: we apply a Lorentz transformation to it, so in the new coordinate system we have V' = VΛ, where V' is the same vector, but as seen in the other coordinate system. With indices that reads V'iΛij = Vj. Similarly, the transposed vector transforms by V'T = ΛT VT.
Lorentz transformations are then just the group of transformations that preserve the length of all vectors, length as defined through the scalar product with η. You can derive it from this requirement. First note that a transformation that preserves the lengths of all vectors also preserves angles. Proof: Draw a triangle. If you fix the length of all sides you can't change the angles either. Lorentz transformations are thus orthogonal transformations in Minkowski space. In particular, since the scalar product between any two vectors has to remain invariant,
VT η U = V'T η U' = VT ΛT η Λ U,
they fulfil (with and without indices)
Λki ηkl Λlj = ηij <=> ΛT η Λ = η (1)
If you forget for a moment that we have three spatial dimensions, you can derive the transformations from (1) as we go along. Just insert that η is diagonal with (in two dimensions) entries (1,-1), name the four entries of Λ and solve for them. You might want to use that if you take the determinant on both sides of the above equation you also find that |det Λ| = 1, from which we will restrict ourselves to the case with det = 1 to preserve orientation. You will be left with a matrix that has one unknown parameter β in the following familiar form: Λ = [[γ, −γβ], [−γβ, γ]],
with γ = 1/√(1 − β²).
Now what about the parameter β? We can determine it by applying the Lorentz transformation to the worldline (cΔt, Δx) of an observer at rest, such that Δx = 0. We apply the Lorentz transformation and ask what his world line (cΔt', Δx') looks like. One finds that Δx'/Δt' = βc. Thus, β is the relative velocity of the observers in units of c.
One can generalize this derivation to three spatial dimensions by noticing that the two-dimensional case represents the situation in which the motion is aligned with one of the coordinate axes. One obtains the general case by doing the same for all three axes, and adding spatial rotations to the group. The full group then has six generators (three boosts, three rotations), and it is called the Lorentz group, named after the Dutch physicist Hendrik Lorentz. Strictly speaking, since we have only considered the case with det Λ = +1, it is the "proper Lorentz group" we have here. It is usually denoted SO(3,1).
Once you have the group structure, you can then go ahead and derive the addition-theorem for velocities (by multiplying two Lorentz-transformations with different velocities), length contraction, and time dilatation (by applying Lorentz transformations to rulers).
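These group properties can be checked numerically. The sketch below builds 1+1 dimensional boost matrices of the form derived above, verifies the defining relation ΛT η Λ = η, and confirms the velocity-addition rule for a composition of two boosts; the two velocities are arbitrary example values.

```python
# Numerical check of the (1+1)-dimensional Lorentz boosts discussed above:
# each boost satisfies L^T eta L = eta, and composing two boosts gives a boost
# whose velocity follows the relativistic addition rule.
# The two beta values are arbitrary example choices.
import numpy as np

eta = np.diag([1.0, -1.0])

def boost(beta):
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[gamma, -gamma * beta],
                     [-gamma * beta, gamma]])

b1, b2 = 0.6, 0.7
L1, L2 = boost(b1), boost(b2)

# Defining property of the Lorentz group: the metric is preserved.
assert np.allclose(L1.T @ eta @ L1, eta)

# Composition of boosts: read off the velocity from the combined matrix.
L12 = L1 @ L2
beta_combined = -L12[0, 1] / L12[0, 0]
beta_expected = (b1 + b2) / (1.0 + b1 * b2)
print(beta_combined, beta_expected)        # both ~0.915
assert np.isclose(beta_combined, beta_expected)
```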
Now let us consider some particles in this space-time with such nice symmetry properties. First, we introduce another important scalar invariant of Special Relativity, which is an observer's proper time τ. τ is the proper length of the particle's world line, and an infinitesimally small step of proper time dτ is consequently
dτ2 = c2 dt2 - dx2
One obtains the proper time of a curve by integrating dτ over this curve. Pull out a factor dt2 and use dx/dt = v to obtain
dτ2 γ2 = dt2
A massive particle's relativistic four-momentum is pi = mui, where ui=dxi/dτ = γ dxi/dt is the four-velocity of the particle, and m is its invariant rest mass (sometimes denoted m0). The rest mass is also a scalar. We then have for the spatial components (a = 1,2,3)
pa = m γ va .
What is c?
Let us finally come back to the parameter c that we introduced in the beginning. Taking the square of the previous expression (summing over the spatial components), inserting γ and solving for v, one obtains the particle's spatial velocity as a function of the momentum: v = pc/√(m²c² + p²).
In the limit of m to zero, one obtains for arbitrary p that v=c. Or the other way round, the only way to get v=c is if the particle is massless m=0.
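A short numerical check of this limit is given below; the momentum value is an arbitrary example and SI units are assumed.

```python
# v = p*c / sqrt(m^2*c^2 + p^2) approaches c as the mass goes to zero.
# The momentum value is an arbitrary example; SI units with c = 299792458 m/s.
from math import sqrt

c = 299792458.0          # m/s
p = 1.0                  # kg*m/s, arbitrary example momentum
for m in (1.0, 1e-3, 1e-6, 1e-9):
    v = p * c / sqrt(m**2 * c**2 + p**2)
    print(f"m = {m:8.0e} kg  ->  v/c = {v / c:.9f}")
```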
So far there is no experimental evidence that photons - the particles that constitute light - have mass. Thus, light moves with speed c. However, note that in the derivation that got us here, there was no mentioning of light whatsoever. There is no doubt that historically Einstein's path to the Special Relativity came from Maxwell's equations, and many of his thought experiments are about light signals. But a priori, arguing from symmetry principles in Minkowski-space as I did here, the constant c has nothing to do with light. Nowadays, this insight can get you an article in NewScientist.
Btw, note that c is indeed a constant. If you want to fiddle around with that, you'll have to mess up at least one step in this derivation.
See also: The Equivalence Principle
[Figure: Map of Earth. Lines of longitude appear curved in this projection, but are actually halves of great circles. Lines of latitude appear horizontal in this projection, but are actually circular with different radii; all locations with a given latitude are collectively referred to as a circle of latitude. The equator divides the planet into a Northern Hemisphere and a Southern Hemisphere, and has a latitude of 0°.]
Latitude, usually denoted symbolically by the Greek letter phi (φ), gives the location of a place on Earth (or other planetary body) north or south of the equator. Lines of latitude are the horizontal lines shown running east-to-west on maps. Technically, latitude is an angular measurement in degrees (marked with °) ranging from 0° at the Equator (low latitude) to 90° at the poles (90° N for the North Pole or 90° S for the South Pole; high latitude). The complementary angle of a latitude is called the colatitude.
Circles of latitude
All locations of a given latitude are collectively referred to as a circle of latitude or line of latitude or parallel, because they are coplanar, and all such planes are parallel to the equator. Lines of latitude other than the Equator are approximately small circles on the surface of the Earth; they are not geodesics since the shortest route between two points at the same latitude involves moving farther away from, then towards, the equator (see great circle).
A specific latitude may then be combined with a specific longitude to give a precise position on the Earth's surface (see satellite navigation system).
Important named circles of latitude
Besides the equator, four other lines of latitude are named because of the role they play in the geometrical relationship with the Earth and the Sun:
- Arctic Circle — 66° 33′ 39″ N
- Tropic of Cancer — 23° 26′ 21″ N
- Tropic of Capricorn — 23° 26′ 21″ S
- Antarctic Circle — 66° 33′ 39″ S
The reason that these lines have the values that they do, lies in the axial tilt of the Earth with respect to the sun, which is 23° 26′ 21.41″.
Note that the Arctic Circle and the Tropic of Cancer, like the Antarctic Circle and the Tropic of Capricorn, are colatitudes of one another, since each pair of angles sums to 90°.
To simplify calculations where the ellipticity of the Earth is not important, the nautical mile was defined as exactly 1852 metres, corresponding to one minute of latitude arc, so that one degree of latitude corresponds to 111.12 km. One minute of latitude can be further divided into 60 seconds. A latitude is thus specified as 13°19'43″ N (for greater precision, a decimal fraction can be added to the seconds). An alternative representation uses only degrees and minutes, where the seconds are expressed as a decimal fraction of minutes; the above example is then expressed as 13°19.717' N. Degrees can also be expressed singularly, with both the minutes and seconds incorporated as a decimal number and rounded as desired (decimal degree notation): 13.32861° N. Sometimes, the north/south suffix is replaced by a negative sign for south (−90° for the South Pole).
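As a small illustration of the conversions just described (this sketch is not part of the original article, and the function name dms_to_decimal is ours):

```python
def dms_to_decimal(degrees, minutes=0.0, seconds=0.0, hemisphere="N"):
    """Convert latitude in degrees/minutes/seconds to signed decimal degrees.
    Southern latitudes are returned as negative numbers."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if hemisphere.upper() == "S" else value

# The example from the text, 13°19'43" N, in decimal degree notation:
print(round(dms_to_decimal(13, 19, 43, "N"), 5))      # 13.32861
# The same point in degrees-and-decimal-minutes form, 13°19.717' N:
print(round(dms_to_decimal(13, 19.717, 0, "N"), 5))   # 13.32862 (minutes were rounded)
```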
Effect of latitude
A region's latitude has a great effect on its climate and weather (see Effect of sun angle on climate). Latitude more loosely determines tendencies in polar auroras, prevailing winds, and other physical characteristics of geographic locations.
Researchers at Harvard's Centre for International Development (CID) found in 2001 that only three tropical economies — Hong Kong, Singapore, and Taiwan — were classified as high-income by the World Bank, while all countries within regions zoned as temperate had either middle- or high-income economies.
Because most planets (including Earth) are ellipsoids of revolution, or spheroids, rather than spheres, both the radius and the length of arc vary with latitude. This variation requires the introduction of elliptic parameters based on the ellipse's angular eccentricity α (which equals arccos(b/a), where a and b are the equatorial and polar radii); sin²α = 1 − b²/a² = e² is the first eccentricity squared; and 1 − cos α = 1 − b/a = f, or 2 sin²(α/2), is the flattening. Utilized in creating the integrands for curvature is the inverse of the principal elliptic integrand, 1/√(1 − e² sin²φ).
The length of an arcdegree of latitude (north-south) is about 60 nautical miles, 111 kilometres or 69 statute miles at any latitude. The length of an arcdegree of longitude (east-west) at the equator is about the same, reducing to zero at the poles.
In the case of a spheroid, a meridian and its anti-meridian form an ellipse, from which an exact expression for the length of an arcdegree of latitude is:

(π/180°) · a (1 − e²) / (1 − e² sin²φ)^(3/2)
This radius of arc (or "arcradius") is in the plane of a meridian, and is known as the meridional radius of curvature, M.
Similarly, an exact expression for the length of an arcdegree of longitude is:

(π/180°) · a cos φ / (1 − e² sin²φ)^(1/2)
The arcradius contained here is in the plane of the prime vertical, the east-west plane perpendicular (or "normal") to both the plane of the meridian and the plane tangent to the surface of the ellipsoid, and is known as the normal radius of curvature, N.
Along the equator (east-west), N equals the equatorial radius. The radius of curvature at a right angle to the equator (north-south), M, is 43 km shorter, hence the length of an arcdegree of latitude at the equator is about 1 km less than the length of an arcdegree of longitude at the equator. The radii of curvature are equal at the poles, where they are about 64 km greater than the north-south equatorial radius of curvature, because the polar radius is 21 km less than the equatorial radius. The shorter polar radii indicate that the northern and southern hemispheres are flatter, making their radii of curvature longer. This flattening also 'pinches' the north-south equatorial radius of curvature, making it 43 km less than the equatorial radius. Both radii of curvature are perpendicular to the plane tangent to the surface of the ellipsoid at all latitudes, directed toward a point on the polar axis in the opposite hemisphere (except at the equator, where both point toward Earth's centre). The east-west radius of curvature reaches the axis, whereas the north-south radius of curvature is shorter at all latitudes except the poles.
The WGS84 ellipsoid, used by all GPS devices, uses an equatorial radius of 6378137.0 m and an inverse flattening, (1/f), of 298.257223563, hence its polar radius is 6356752.3142 m and its first eccentricity squared is 0.00669437999014. The more recent but little used IERS 2003 ellipsoid provides equatorial and polar radii of 6378136.6 and 6356751.9 m, respectively, and an inverse flattening of 298.25642. Lengths of degrees on the WGS84 and IERS 2003 ellipsoids are the same when rounded to six significant digits. An appropriate calculator for any latitude is provided by the U.S. government's National Geospatial-Intelligence Agency (NGA).
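As a rough cross-check of the table below (this sketch is not part of the original article), the standard closed forms for the meridional and normal radii of curvature, M(φ) = a(1 − e²)/(1 − e² sin²φ)^(3/2) and N(φ) = a/√(1 − e² sin²φ), can be evaluated with the WGS84 constants quoted above; the degree lengths are then π/180 times M and N·cos φ respectively:

```python
import math

# WGS84 parameters quoted in the text
A = 6378137.0                # equatorial radius in metres
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared, ~0.00669437999014

def meridional_radius(lat_deg):
    # M(phi) = a (1 - e^2) / (1 - e^2 sin^2 phi)^(3/2), in metres
    s = math.sin(math.radians(lat_deg))
    return A * (1.0 - E2) / (1.0 - E2 * s * s) ** 1.5

def normal_radius(lat_deg):
    # N(phi) = a / sqrt(1 - e^2 sin^2 phi), in metres
    s = math.sin(math.radians(lat_deg))
    return A / math.sqrt(1.0 - E2 * s * s)

def degree_of_latitude_km(lat_deg):
    return math.pi / 180.0 * meridional_radius(lat_deg) / 1000.0

def degree_of_longitude_km(lat_deg):
    return math.pi / 180.0 * normal_radius(lat_deg) * math.cos(math.radians(lat_deg)) / 1000.0

for lat in (0, 15, 30, 45, 60, 75, 90):
    print(f"{lat:2d}°  M = {meridional_radius(lat) / 1000:8.2f} km   "
          f"1° lat = {degree_of_latitude_km(lat):8.3f} km   "
          f"N = {normal_radius(lat) / 1000:8.2f} km   "
          f"1° lon = {degree_of_longitude_km(lat):8.3f} km")
```

The printed values agree with the table below to within rounding.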
|Latitude||Meridional radius of curvature (M)||Length of 1° of latitude||Normal radius of curvature (N)||Length of 1° of longitude|
|0°||6335.44 km||110.574 km||6378.14 km||111.320 km|
|15°||6339.70 km||110.649 km||6379.57 km||107.551 km|
|30°||6351.38 km||110.852 km||6383.48 km||96.486 km|
|45°||6367.38 km||111.132 km||6388.84 km||78.847 km|
|60°||6383.45 km||111.412 km||6394.21 km||55.800 km|
|75°||6395.26 km||111.618 km||6398.15 km||28.902 km|
|90°||6399.59 km||111.694 km||6399.59 km||0.000 km|
Types of latitude
With a spheroid that is slightly flattened by its rotation, cartographers refer to a variety of auxiliary latitudes to precisely adapt spherical projections according to their purpose.
For planets other than Earth, such as Mars, geographic and geocentric latitude are called "planetographic" and "planetocentric" latitude, respectively. Most maps of Mars since 2002 use planetocentric coordinates.
- In common usage, "latitude" refers to geodetic or geographic latitude and is the angle between the equatorial plane and a line that is normal to the reference spheroid, which approximates the shape of Earth to account for flattening of the poles and bulging of the equator.
The expressions following assume elliptical polar sections and that all sections parallel to the equatorial plane are circular. Geographic latitude (with longitude) then provides a Gauss map.
- Reduced or parametric latitude, β, is the latitude of the same radius on the sphere with the same equator; it satisfies tan β = (b/a) tan φ.
- Authalic latitude gives an area-preserving transform to the sphere.
- Rectifying latitude is the surface distance from the equator, scaled so that the pole is at 90°; it involves an elliptic integral.
- Conformal latitude gives an angle-preserving (conformal) transform to the sphere.
- The geocentric latitude, θ, is the angle between the equatorial plane and a line from the centre of Earth to the point in question; it satisfies tan θ = (b²/a²) tan φ (see the conversion sketch following this list).
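Of these, the reduced and geocentric latitudes have the simple closed-form relations to the geodetic latitude φ given above; the authalic, rectifying and conformal latitudes require series expansions or elliptic integrals and are omitted from the following sketch, which is not part of the original article and uses the WGS84 eccentricity quoted earlier:

```python
import math

E2 = 0.00669437999014  # WGS84 first eccentricity squared; (b/a)^2 = 1 - e^2

def reduced_latitude(geodetic_deg):
    # tan(beta) = (b/a) tan(phi) = sqrt(1 - e^2) tan(phi)
    phi = math.radians(geodetic_deg)
    return math.degrees(math.atan(math.sqrt(1.0 - E2) * math.tan(phi)))

def geocentric_latitude(geodetic_deg):
    # tan(theta) = (b/a)^2 tan(phi) = (1 - e^2) tan(phi)
    phi = math.radians(geodetic_deg)
    return math.degrees(math.atan((1.0 - E2) * math.tan(phi)))

phi = 45.0
print(f"geodetic {phi}°: reduced {reduced_latitude(phi):.4f}°, "
      f"geocentric {geocentric_latitude(phi):.4f}°")
# The differences from geodetic latitude peak near 45°: a bit under six
# arcminutes for the reduced latitude and roughly 11-12 arcminutes for the
# geocentric latitude, the same order as the comparison table in the next section.
```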
Comparison of latitudes
The following table shows the approximate differences between the other types of latitude and geographic latitude. Note that the values in the table are in arcminutes, not degrees, and that the conformal and geocentric latitudes are nearly identical in value.
Approximate difference from geographic latitude ("Lat"), in arcminutes
|Lat||Reduced||Authalic||Rectifying||Conformal||Geocentric|
|0°||0.00′||0.00′||0.00′||0.00′||0.00′|
|5°||1.01′||1.35′||1.52′||2.02′||2.02′|
|10°||1.99′||2.66′||2.99′||3.98′||3.98′|
|15°||2.91′||3.89′||4.37′||5.82′||5.82′|
|20°||3.75′||5.00′||5.62′||7.48′||7.48′|
|25°||4.47′||5.96′||6.70′||8.92′||8.92′|
|30°||5.05′||6.73′||7.57′||10.09′||10.09′|
|35°||5.48′||7.31′||8.22′||10.95′||10.96′|
|40°||5.75′||7.66′||8.62′||11.48′||11.49′|
|45°||5.84′||7.78′||8.76′||11.67′||11.67′|
|50°||5.75′||7.67′||8.63′||11.50′||11.50′|
|55°||5.49′||7.32′||8.23′||10.97′||10.98′|
|60°||5.06′||6.75′||7.59′||10.12′||10.13′|
|65°||4.48′||5.97′||6.72′||8.95′||8.96′|
|70°||3.76′||5.01′||5.64′||7.52′||7.52′|
|75°||2.92′||3.90′||4.39′||5.85′||5.85′|
|80°||2.00′||2.67′||3.00′||4.00′||4.01′|
|85°||1.02′||1.35′||1.52′||2.03′||2.03′|
|90°||0.00′||0.00′||0.00′||0.00′||0.00′|
A more obscure measure of latitude is the astronomical latitude, which is the angle between the equatorial plane and the normal to the geoid (i.e. a plumb line). It originated as the angle between the horizon and the pole star.
Astronomical latitude is not to be confused with declination, the coordinate astronomers use to describe the locations of stars north/south of the celestial equator (see equatorial coordinates), nor with ecliptic latitude, the coordinate that astronomers use to describe the locations of stars north/south of the ecliptic (see ecliptic coordinates).
Continents move over time due to continental drift, taking whatever fossils and other features of interest they may have with them. Particularly when discussing fossils, it is often more useful to know where the fossil was when it was laid down than where it is when it was dug up: this is called the palæolatitude of the fossil. The palæolatitude can be constrained by palæomagnetic data. If tiny magnetisable grains are present when the rock is being formed, these will align themselves with Earth's magnetic field like compass needles. A magnetometer can deduce the orientation of these grains by subjecting a sample to a magnetic field, and the magnetic inclination (dip) of the grains can be used to infer the latitude of deposition.
Corrections for altitude
When converting from geodetic ("common") latitude, corrections must be made for altitude for systems which do not measure the angle from the normal of the spheroid. In the figure at right, point H (located on the surface of the spheroid) and point H' (located at some greater elevation) have different geocentric latitudes (angles β and γ respectively), even though they share the same geodetic latitude (angle α). (Note that the flatness of the spheroid and the elevation of point H' are significantly greater than what is found on the Earth, exaggerating the errors commonly found in such calculations.) | http://www.pustakalaya.org/wiki/wp/l/Latitude.htm | 13
70 | The proportions and expressions of the human face are important for identifying origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in the individual's social interaction. Face perception is very complex, as the recognition of facial expressions involves extensive and diverse areas in the brain. Damage to certain parts of the brain can cause specific impairments in understanding faces, a condition known as prosopagnosia.
From birth, infants possess rudimentary facial processing capacities. Infants as young as two days of age are capable of mimicking the facial expressions of an adult, displaying their capacity to note details like mouth and eye shape as well as to move their own muscles in a way that produces similar patterns in their faces. However, despite this ability, newborns are not yet aware of the emotional content encoded within facial expressions. Five-month-olds, when presented with an image of a person making a fearful expression and a person making a happy expression, pay the same amount of attention to both and exhibit similar event-related potentials for both. When seven-month-olds are given the same treatment, they focus more on the fearful face, and their event-related potential for the scared face shows a stronger initial negative central component than for the happy face. This result indicates an increased attentional and cognitive focus toward fear that reflects the threat-salient nature of the emotion. In addition, infants' negative central components were no different for new faces that varied in the intensity of an emotional expression but portrayed the same emotion as a face they had been habituated to, yet were stronger for faces showing a different emotion, showing that seven-month-olds regarded happy and sad faces as distinct emotive categories.
The recognition of faces is an important neurological mechanism that an individual uses every day. Jeffrey and Rhodes said that faces "convey a wealth of information that we use to guide our social interactions." For example, emotions play a large role in our social interactions. The perception of a positive or negative emotion on a face affects the way that an individual perceives and processes that face. A face that is perceived to have a negative emotion is processed in a less holistic manner than a face displaying a positive emotion. The ability of face recognition is apparent even in early childhood. By age five, the neurological mechanisms responsible for face recognition are present. Research shows that the way children process faces is similar to that of adults, however adults process faces more efficiently. The reason for this may be because of improvements in memory and cognitive functioning that occur with age.
Infants are able to comprehend facial expressions as social cues representing the feelings of other people before they have been alive for a year. At seven months, the object of an observed face's apparent emotional reaction is relevant in processing the face. Infants at this age show greater negative central components to angry faces that are looking directly at them than to angry faces looking elsewhere, although the direction of fearful faces' gaze produces no difference. In addition, two ERP components in the posterior part of the brain are differently aroused by the two negative expressions tested. These results indicate that infants at this age can at least partially understand the higher level of threat from anger directed at them as compared to anger directed elsewhere. By at least seven months of age, infants are also able to use others' facial expressions to understand their behavior. Seven-month-olds will look to facial cues to understand the motives of other people in ambiguous situations, as shown by a study in which they watched an experimenter's face longer if she took a toy from them and maintained a neutral expression than if she made a happy expression. Interest in the social world is increased by interaction with the physical environment. Training three-month-old infants to reach for objects with Velcro-covered "sticky mitts" increases the amount of attention that they pay to faces as compared to passively moving objects through their hands and to non-trained control groups.
In keeping with the notion that seven-month-olds have categorical understandings of emotion, they are also capable of associating emotional prosodies with corresponding facial expressions. When presented with a happy or angry face, shortly followed by an emotionally neutral word read in a happy or angry tone, their ERPs follow different patterns. Happy faces followed by angry vocal tones produce more changes than the other incongruous pairing, while there is no such difference between happy and angry congruous pairings; the greater reaction implies that infants held greater expectations of a happy vocal tone after seeing a happy face than of an angry tone following an angry face. Considering an infant's relative immobility and thus its decreased capacity to elicit negative reactions from its parents, this result implies that experience has a role in building comprehension of facial expressions.
Several other studies indicate that early perceptual experience is crucial to the development of capacities characteristic of adult visual perception, including the ability to identify familiar others and to recognize and comprehend facial expressions. The capacity to discern between faces, much like language, appears to have a broad potential in early life that is whittled down to kinds of faces that are experienced in early life. Infants can discern between macaque faces at six months of age, but, without continued exposure, cannot at nine months of age. Being shown photographs of macaques during this three-month period gave nine-month-olds the ability to reliably tell between unfamiliar macaque faces.
The neural substrates of face perception in infants are likely similar to those of adults, but the limits of imaging technology that are feasible for use with infants currently prevent very specific localization of function as well as specific information from subcortical areas like the amygdala, which is active in the perception of facial expression in adults. In a study on healthy adults, it was shown that faces are likely to be processed, in part, via a retinotectal (subcortical) pathway.
However, there is activity near the fusiform gyrus, as well as in occipital areas, when infants are exposed to faces, and it varies depending on factors including facial expression and eye gaze direction.
Adult face perception
Theories about the processes involved in adult face perception have largely come from two sources: research on normal adult face perception and the study of impairments in face perception that are caused by brain injury or neurological illness. Novel optical illusions such as the Flashed Face Distortion Effect, in which scientific phenomenology outpaces neurological theory, also provide areas for research.
One of the most widely accepted theories of face perception argues that understanding faces involves several stages: from basic perceptual manipulations on the sensory information to derive details about the person (such as age, gender or attractiveness), to being able to recall meaningful details such as their name and any relevant past experiences of the individual.
This model (developed by psychologists Vicki Bruce and Andrew Young) argues that face perception might involve several independent sub-processes working in unison.
- A "view centered description" is derived from the perceptual input. Simple physical aspects of the face are used to work out age, gender or basic facial expressions. Most analysis at this stage is on feature-by-feature basis. That initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory, and across views. This explains why the same person seen from a novel angle can still be recognized. This structural encoding can be seen to be specific for upright faces as demonstrated by the Thatcher effect. The structurally encoded representation is transferred to notional "face recognition units" that are used with "personal identity nodes" to identify a person through information from semantic memory. The natural ability to produce someone's name when presented with their face has been shown in experimental research to be damaged in some cases of brain injury, suggesting that naming may be a separate process from the memory of other information about a person.
The study of prosopagnosia (an impairment in recognizing faces which is usually caused by brain injury) has been particularly helpful in understanding how normal face perception might work. Individuals with prosopagnosia may differ in their abilities to understand faces, and it has been the investigation of these differences which has suggested that several stage theories might be correct.
Face perception is an ability that involves many areas of the brain; however, some areas have been shown to be particularly important. Brain imaging studies typically show a great deal of activity in an area of the temporal lobe known as the fusiform gyrus, an area also known to cause prosopagnosia when damaged (particularly when damage occurs on both sides). This evidence has led to a particular interest in this area and it is sometimes referred to as the fusiform face area for that reason.
Neuroanatomy of facial processing
There are several parts of the brain that play a role in face perception. Rossion, Hanseeuw, and Dricot used BOLD fMRI mapping to identify activation in the brain when subjects viewed both cars and faces. The majority of BOLD fMRI studies use blood oxygen level dependent (BOLD) contrast to determine which areas of the brain are activated by various cognitive functions. They found that the occipital face area, located in the occipital lobe, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe, all played roles in contrasting the faces from the cars, with the initial face perception beginning in the fusiform face area and occipital face areas. This entire region links to form a network that acts to distinguish faces. The processing of faces in the brain is known as a "sum of parts" perception. However, the individual parts of the face must be processed first in order to put all of the pieces together. In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces. Furthermore, Arcurio, Gold, and James used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly. The occipital face area is activated by the visual perception of single features of the face, for example, the nose and mouth, and preferred combination of two-eyes over other combinations. This research supports that the occipital face area recognizes the parts of the face at the early stages of recognition. On the contrary, the fusiform face area shows no preference for single features, because the fusiform face area is responsible for "holistic/configural" information, meaning that it puts all of the processed pieces of the face together in later processing. This theory is supported by the work of Gold et al. who found that regardless of the orientation of a face, subjects were impacted by the configuration of the individual facial features. Subjects were also impacted by the coding of the relationships between those features. This shows that processing is done by a summation of the parts in the later stages of recognition.
Facial perception has well identified, neuroanatomical correlates in the brain. During the perception of faces, major activations occur in the extrastriate areas bilaterally, particularly in the fusiform face area (FFA), the occipital face area (OFA), and the superior temporal sulcus (fSTS).
The FFA is located in the lateral fusiform gyrus. It is thought that this area is involved in holistic processing of faces and it is sensitive to the presence of facial parts as well as the configuration of these parts. The FFA is also necessary for successful face detection and identification. This is supported by fMRI activation and studies on prospagnosia, which involves lesions in the FFA.
The OFA is located in the inferior occipital gyrus. Similar to the FFA, this area is also active during successful face detection and identification, a finding that is supported by fMRI activation. The OFA is involved and necessary in the analysis of facial parts but not in the spacing or configuration of facial parts. This suggests that the OFA may be involved in a facial processing step that occurs prior to the FFA processing.
The fSTS is involved in recognition of facial parts and is not sensitive to the configuration of these parts. It is also thought that this area is involved in gaze perception. The fSTS has demonstrated increased activation when attending to gaze direction.
Bilateral activation is generally shown in all of these specialized facial areas. However there are some studies that include increased activation in one side over the other. For instance McCarthy (1997) has shown that the right fusiform gyrus is more important for facial processing in complex situations.
Gorno-Tempini and Price have shown that the fusiform gyri are preferentially responsive to faces, whereas the parahippocampal/lingual gyri are responsive to buildings.
It is important to note that while certain areas respond selectively to faces, facial processing involves many neural networks. These networks include visual and emotional processing systems as well. Emotional face processing research has demonstrated that there are some of the other functions at work. While looking at faces displaying emotions (especially those with fear facial expressions) compared to neutral faces there is increased activity in the right fusiform gyrus. This increased activity also correlates with increased amygdala activity in the same situations. The emotional processing effects observed in the fusiform gyrus are decreased in patients with amygdala lesions. This demonstrates possible connections between the amygdala and facial processing areas.
Another aspect that affects both the fusiform gyrus and the amygdala activation is the familiarity of faces. Having multiple regions that can be activated by similar face components indicates that facial processing is a complex process.
Ishai and colleagues have proposed the object form topology hypothesis, which posits that there is a topological organization of neural substrates for object and facial processing. However, Gauthier disagrees and suggests that the category-specific and process-map models could accommodate most other proposed models for the neural underpinnings of facial processing.
Most neuroanatomical substrates for facial processing are perfused by the middle cerebral artery (MCA). Therefore, facial processing has been studied using measurements of mean cerebral blood flow velocity in the middle cerebral arteries bilaterally. During facial recognition tasks, greater changes in the right middle cerebral artery (RMCA) than the left (LMCA) have been observed. It has been demonstrated that men were right lateralized and women left lateralized during facial processing tasks.
Just as memory and cognitive function separate the abilities of children and adults to recognize faces, the familiarity of a face may also play a role in the perception of faces. Zheng, Mondloch, and Segalowitz recorded event-related potentials in the brain to determine the timing of recognition of faces in the brain. The results of the study showed that familiar faces are indicated and recognized by a stronger N250, a specific wavelength response that plays a role in the visual memory of faces. Similarly, Moulson et al. found that all faces elicit the N170 response in the brain.
Hemispheric asymmetries in facial processing capability
The mechanisms underlying gender-related differences in facial processing have not been studied extensively.
Studies using electrophysiological techniques have demonstrated gender-related differences during a face recognition memory (FRM) task and a facial affect identification task (FAIT). Male subjects used a right-hemisphere neural activation system in the processing of faces and facial affect, while female subjects used a left-hemisphere system. Moreover, in facial perception there was no association with estimated intelligence, suggesting that face recognition performance in women is unrelated to several basic cognitive processes. Gender-related differences may suggest a role for sex hormones. In females there may be variability in psychological functions related to differences in hormonal levels during different phases of the menstrual cycle.
Data obtained in norm and in pathology support asymmetric face processing. Gorno-Tempini and others in 2001, suggested that the left inferior frontal cortex and the bilateral occipitotemporal junction respond equally to all face conditions. Some neuroscientists contend that both the left inferior frontal cortex (Brodmann area 47) and the occipitotemporal junction are implicated in facial memory. The right inferior temporal/fusiform gyrus responds selectively to faces but not to non-faces. The right temporal pole is activated during the discrimination of familiar faces and scenes from unfamiliar ones. Right asymmetry in the mid temporal lobe for faces has also been shown using 133-Xenon measured cerebral blood flow (CBF). Other investigators have observed right lateralization for facial recognition in previous electrophysiological and imaging studies.
The implication of the observation of asymmetry for facial perception would be that different hemispheric strategies would be implemented. The right hemisphere would be expected to employ a holistic strategy, and the left an analytic strategy. In 2007, Philip Njemanze, using a novel functional transcranial Doppler (fTCD) technique called functional transcranial Doppler spectroscopy (fTCDS), demonstrated that men were right lateralized for object and facial perception, while women were left lateralized for facial tasks but showed a right tendency or no lateralization for object perception. Njemanze demonstrated using fTCDS, summation of responses related to facial stimulus complexity, which could be presumed as evidence for topological organization of these cortical areas in men. It may suggest that the latter extends from the area implicated in object perception to a much greater area involved in facial perception.
This agrees with the object form topology hypothesis proposed by Ishai and colleagues in 1999. However, the relatedness of object and facial perception was process based, and appears to be associated with their common holistic processing strategy in the right hemisphere. Moreover, when the same men were presented with facial paradigm requiring analytic processing, the left hemisphere was activated. This agrees with the suggestion made by Gauthier in 2000, that the extrastriate cortex contains areas that are best suited for different computations, and described as the process-map model. Therefore, the proposed models are not mutually exclusive, and this underscores the fact that facial processing does not impose any new constraints on the brain other than those used for other stimuli.
It may be suggested that each stimulus was mapped by category into face or non-face, and by process into holistic or analytic. Therefore, a unified category-specific process-mapping system was implemented for either right or left cognitive styles. Njemanze in 2007, concluded that, for facial perception, men used a category-specific process-mapping system for right cognitive style, but women used same for the left.
||This article's Criticism or Controversy section may compromise the article's neutral point of view of the subject. (September 2011)|
Cognitive neuroscientists Isabel Gauthier and Michael Tarr are two of the major proponents of the view that face recognition involves expert discrimination of similar objects (See the Perceptual Expertise Network). Other scientists, in particular Nancy Kanwisher and her colleagues, argue that face recognition involves processes that are face-specific and that are not recruited by expert discriminations in other object classes (see the domain specificity).
Studies by Gauthier have shown that an area of the brain known as the fusiform gyrus (sometimes called the "fusiform face area, (FFA)" because it is active during face recognition) is also active when study participants are asked to discriminate between different types of birds and cars, and even when participants become expert at distinguishing computer generated nonsense shapes known as greebles. This suggests that the fusiform gyrus may have a general role in the recognition of similar visual objects. Yaoda Xu, then a post doctoral fellow with Nancy Kanwisher, replicated the car and bird expertise study using an improved fMRI design that was less susceptible to attentional accounts.
The activity found by Gauthier when participants viewed non-face objects was not as strong as when participants were viewing faces, however this could be because we have much more expertise for faces than for most other objects. Furthermore, not all findings of this research have been successfully replicated, for example, other research groups using different study designs have found that the fusiform gyrus is specific to faces and other nearby regions deal with non-face objects.
However, these failures to replicate are difficult to interpret, because studies vary on too many aspects of the method. It has been argued that some studies test experts with objects that are slightly outside of their domain of expertise. More to the point, failures to replicate are null effects and can occur for many different reasons. In contrast, each replication adds a great deal of weight to a particular argument. With regard to "face specific" effects in neuroimaging, there are now multiple replications with Greebles, with birds and cars, and two unpublished studies with chess experts.
Although it is sometimes found that expertise recruits the FFA (e.g. as hypothesized by a proponent of this view in the preceding paragraph), a more common and less controversial finding is that expertise leads to focal category-selectivity in the fusiform gyrus—a pattern similar in terms of antecedent factors and neural specificity to that seen for faces. As such, it remains an open question as to whether face recognition and expert-level object recognition recruit similar neural mechanisms across different subregions of the fusiform or whether the two domains literally share the same neural substrates. Moreover, at least one study argues that the issue as to whether expertise-predicated category-selective areas overlap with the FFA is nonsensical in that multiple measurements of the FFA within an individual person often overlap no more with each other than do measurements of FFA and expertise-predicated regions. At the same time, numerous studies have failed to replicate them altogether. For example, four published fMRI studies have asked whether expertise has any specific connection to the FFA in particular, by testing for expertise effects in both the FFA and a nearby but not face-selective region called LOC (Rhodes et al., JOCN 2004; Op de Beeck et al., JN 2006; Moore et al., JN 2006; Yue et al. VR 2006). In all four studies, expertise effects are significantly stronger in the LOC than in the FFA, and indeed expertise effects were only borderline significant in the FFA in two of the studies, while the effects were robust and significant in the LOC in all four studies.
Therefore, it is still not clear in exactly which situations the fusiform gyrus becomes active, although it is certain that face recognition relies heavily on this area and damage to it can lead to severe face recognition impairment.
Differences in own- versus other-race face recognition and perceptual discrimination were first researched in 1914. Humans tend to perceive people of races other than their own as all looking alike:
Other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. Thus, to the uninitiated American all Asiatics look alike, while to the Asiatics, all White men look alike.
This phenomenon is known as the cross-race effect, own-race effect, other-race effect, own-race bias or interracial-face-recognition deficit. The effect occurs as early as 170 ms after stimulus onset, in the N170 brain response to faces.
In a meta-analysis, Mullen found evidence that the other-race effect is larger among White subjects than among African American subjects, whereas Brigham and Williamson (1979, cited in Shepherd, 1981) obtained the opposite pattern. Shepherd also reviewed studies that found a main effect of race of face like that of the present study, with better performance on White faces; other studies in which no difference was found; and yet other studies in which performance was better on African American faces. Overall, Shepherd reports a reliable positive correlation between the size of the effect of target race (indexed by the difference in proportion correct on same- and other-race faces) and self-ratings of amount of interaction with members of the other race, r(30) = .57, p < .01. This correlation is at least partly an artifact of the fact that African American subjects, who performed equally well on faces of both races, almost always responded with the highest possible self-rating of amount of interaction with white people (M = 4.75), whereas their white counterparts both demonstrated an other-race effect and reported less other-race interaction (M = 2.13); the difference in ratings was reliable, t(30) = 7.86, p < .01.
Further research points to the importance of other-race experience in own- versus other-race face processing (O'Toole et al., 1991; Slone et al., 2000; Walker & Tanaka, 2003). In a series of studies, Walker and colleagues showed the relationship between amount and type of other-race contact and the ability to perceptually differentiate other-race faces (Walker & Tanaka, 2003; Walker & Hewstone, 2006a,b; 2007). Participants with greater other-race experience were consistently more accurate at discriminating between other-race faces than were participants with less other-race experience.
In addition to other-race contact, there is suggestion that the own-race effect is linked to increased ability to extract information about the spatial relationships between different features. Richard Ferraro writes that facial recognition is an example of a neuropsychological measure that can be used to assess cognitive abilities that are salient within African-American culture. Daniel T. Levin writes that the deficit occurs because people emphasize visual information specifying race at the expense of individuating information when recognizing faces of other races. Further research using perceptual tasks could shed light on the specific cognitive processes involved in the other-race effect. The question if the own-race effect can be overcome was already indirectly answered by Ekman & Friesen in 1976 and Ducci, Arcuri, Georgis & Sineshaw in 1982. They had observed that people from New Guinea and Ethiopia who had had contact with white people before had a significantly better emotional recognition rate.
Studies on adults have also shown sex differences in face recognition. Men tend to recognize fewer faces of women than women do, whereas there are no sex differences with regard to male faces.
In individuals with autism spectrum disorder
Autism spectrum disorder (ASD) is a comprehensive neural developmental disorder that produces many deficits including social, communicative, and perceptual deficits. Of specific interest, individuals with autism exhibit difficulties in various aspects of facial perception, including facial identity recognition and recognition of emotional expressions. These deficits are suspected to be a product of abnormalities occurring in both the early and late stages of facial processing.
Speed and methods
People with ASD process face and non-face stimuli with the same speed. In typically developing individuals, there is a preference for face processing, resulting in a faster processing speed for faces than for non-face stimuli. These individuals primarily utilize holistic processing when perceiving faces. By contrast, individuals with ASD employ part-based or bottom-up processing, focusing on individual features rather than the face as a whole. When focusing on the individual parts of the face, persons with ASD direct their gaze primarily to the lower half of the face, specifically the mouth, in contrast to the eye-focused gaze of typically developing people. This deviation from holistic face processing does not employ the use of facial prototypes, which are templates stored in memory that make for easy retrieval.
Additionally, individuals with ASD display difficulty with recognition memory, specifically memory that aids in identifying faces. The memory deficit is selective for faces and does not extend to other objects or visual inputs. Some evidence lends support to the theory that these face-memory deficits are products of interference between connections of face processing regions.
Associated difficulties
The atypical facial processing style of people with ASD often manifests in constrained social ability, due to decreased eye contact, joint attention, interpretation of emotional expression, and communicative skills. These deficiencies can be seen in infants as young as 9 months; specifically in terms of poor eye contact and difficulties engaging in joint attention. Some experts have even used the term 'face avoidance' to describe the phenomena where infants who are later diagnosed with ASD preferentially attend to non-face objects over faces. Furthermore, some have proposed that the demonstrated impairment in children with ASD's ability to grasp emotional content of faces is not a reflection of the incapacity to process emotional information, but rather, the result of a general inattentiveness to facial expression. The constraints of these processes that are essential to the development of communicative and social-cognitive abilities are viewed to be the cause of impaired social engagement and responsivity. Furthermore, research suggests that there exists a link between decreased face processing abilities in individuals with ASD and later deficits in Theory of Mind; for example, while typically developing individuals are able to relate others' emotional expressions to their actions, individuals with ASD do not demonstrate this skill to the same extent.
There is some contention about this causation however, resembling the chicken or the egg dispute. Others theorize that social impairment leads to perceptual problems rather than vice versa. In this perspective, a biological lack of social interest inherent to ASD inhibits developments of facial recognition and perception processes due to underutilization. Continued research is necessary to determine which theory is best supported.
Many of the obstacles that individuals with ASD face in terms of facial processing may be derived from abnormalities in the fusiform face area and amygdala, which have been shown to be important in face perception, as discussed above. Typically, the fusiform face area in individuals with ASD has reduced volume compared to normally developed persons. This volume reduction has been attributed to deviant amygdala activity that does not flag faces as emotionally salient and thus decreases activation levels of the fusiform face area. This hypoactivity in the fusiform face area has been found in several studies.
Studies are not conclusive as to which brain areas people with ASD use instead. One study found that, when looking at faces, people with ASD exhibit activity in brain regions normally active when typically developing individuals perceive objects. Another study found that during facial perception, people with ASD use different neural systems, with each one of them using their own unique neural circuitry.
Compensation mechanisms
As individuals with ASD age, scores on behavioral tests assessing ability to perform face-emotion recognition increase to levels similar to controls. Yet it is apparent that the recognition mechanisms of these individuals are still atypical, though often effective. In terms of face identity-recognition, compensation can take many forms, including a more pattern-based strategy, which was first seen in face inversion tasks. Alternatively, evidence suggests that older individuals compensate by using mimicry of others' facial expressions and rely on motor feedback from their facial muscles for face emotion-recognition. These strategies help overcome the obstacles individuals with ASD face in interacting within social contexts.
Artificial face perception
A great deal of effort has been put into developing software that can recognize human faces. Much of the work has been done by a branch of artificial intelligence known as computer vision which uses findings from the psychology of face perception to inform software design. Recent breakthroughs using noninvasive functional transcranial Doppler spectroscopy as demonstrated by Njemanze, 2007, to locate specific responses to facial stimuli have led to improved systems for facial recognition. The new system uses input responses called cortical long-term potentiation (CLTP) derived from Fourier analysis of mean blood flow velocity to trigger target face search from a computerized face database system. Such a system provides for brain-machine interface for facial recognition, and the method has been referred to as cognitive biometrics.
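As a concrete illustration of the more conventional computer-vision side of this work (this sketch is not the fTCDS-based system just described, and the image file name faces.jpg is a placeholder), OpenCV's bundled Haar-cascade classifier can be used to locate faces in an image:

```python
import cv2  # assumes the opencv-python package is installed

# Load OpenCV's bundled frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("faces.jpg")          # placeholder file name
if image is None:
    raise SystemExit("faces.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) bounding box per detected face.
boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"Detected {len(boxes)} face(s)")
cv2.imwrite("faces_annotated.jpg", image)
```

This kind of detector only locates faces; recognizing whose face it is, or estimating attributes such as age, typically requires an additional model trained on labeled face images.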
Another interesting application is the estimation of human age from face images. As an important cue in human communication, facial images contain a great deal of useful information, including gender, expression, and age. Unfortunately, compared with other cognition problems, age estimation from facial images is still very challenging. This is mainly because the aging process is influenced not only by a person's genes but also by many external factors: physical condition, lifestyle, and so on may accelerate or slow the aging process. In addition, because the aging process is slow and extends over a long period, collecting sufficient data for training is fairly demanding work.
See also
- Capgras delusion
- Fregoli syndrome
- Cognitive neuropsychology
- Delusional misidentification syndrome
- Facial recognition system
- Prosopagnosia, or face blindness
- Recognition of human individuals
- Social cognition
- Thatcher effect
- The Greebles
- Pareidolia, perceiving faces in random objects and shapes
- Apophenia, seeing meaningful patterns in random data
- Hollow face illusion
- N170, an event-related potential associated with viewing faces
- Cross-race effect
Further reading
- Bruce, V. and Young, A. (2000) In the Eye of the Beholder: The Science of Face Perception. Oxford: Oxford University Press. ISBN 0-19-852439-0
- Tiffany M. Field, Robert Woodson, Reena Greenberg, Debra Cohen (8 October 1982). "Discrimination and imitation of facial expressions by neonates". Science 218 (4568): 179–181. doi:10.1126/science.7123230. PMID 7123230.
- Mikko J. Peltola, Jukka M. Leppanen, Silja Maki & Jari K. Hietanen (June 2009). "Emergence of enhanced attention to fearful faces between 5 and 7 months of age". Social cognitive and affective neuroscience 4 (2): 134–142. doi:10.1093/scan/nsn046. PMC 2686224. PMID 19174536.
- Leppanen, Jukka; Richmond, Jenny; Vogel-Farley, Vanessa; Moulson, Margaret; Nelson, Charles (May 2009). "Categorical representation of facial expressions in the infant brain". Infancy : the official journal of the International Society on Infant Studies 14 (3): 346–362. doi:10.1080/15250000902839393. PMC 2954432. PMID 20953267.
- Jeffery, L.; Rhodes, G. (2011). "Insights into the development of face recognition mechanisms revealed by face after effects.". British Journal of Psychology 102 (4): 799–815. doi:10.1111/j.2044-8295.2011.02066.x.
- Jeffery, L.; Rhodes, G. (2011). "Insights into the development of face recognition mechanisms revealed by face aftereffects.". British Journal of Psychology 102 (4): 799. doi:10.1111/j.2044-8295.2011.02066.x.
- Curby, K.M.; Johnson, K.J., & Tyson A. (2012). "Face to face with emotion: Holistic face processing is modulated by emotional state". Cognition and Emotion 26 (1): 93–102. doi:10.1080/02699931.2011.555752.
- Jeffery, L.; Rhodes, G. (2011). "Insights into the development of face recognition mechanisms revealed by face aftereffects.". British Journal of Psychology 102 (4): 799–815. doi:10.1111/j.2044-8295.2011.02066.x.
- Stefanie Hoehl & Tricia Striano (November–December 2008). "Neural processing of eye gaze and threat-related emotional facial expressions in infancy". Child development 79 (6): 1752–1760. doi:10.1111/j.1467-8624.2008.01223.x. PMID 19037947.
- Tricia Striano & Amrisha Vaish (2010). "Seven- to 9-month-old infants use facial expressions to interpret others' actions". British Journal of Developmental Psychology 24 (4): 753–760. doi:10.1348/026151005X70319.
- Klaus Libertus & Amy Needham (November 2011). "Reaching experience increases face preference in 3-month-old infants". Developmental science 14 (6): 1355–1364. doi:10.1111/j.1467-7687.2011.01084.x. PMID 22010895.
- Tobias Grossmann, Tricia Striano & Angela D. Friederici (May 2006). "Crossmodal integration of emotional information from face and voice in the infant brain". Developmental science 9 (3): 309–315. doi:10.1111/j.1467-7687.2006.00494.x. PMID 16669802.
- Charles A. Nelson (March–June 2001). "The development and neural bases of face recognition". nfant and Child Development 10 (1–2): 3–18. doi:10.1002/icd.239.
- O. Pascalis, L. S. Scott, D. J. Kelly, R. W. Shannon, E. Nicholson, M. Coleman & C. A. Nelson (April 2005). "Plasticity of face processing in infancy". Proceedings of the National Academy of Sciences of the United States of America 102 (14): 5297–5300. doi:10.1073/pnas.0406627102. PMC 555965. PMID 15790676.
- Emi Nakato, Yumiko Otsuka, So Kanazawa, Masami K. Yamaguchi & Ryusuke Kakigi (January 2011). "Distinct differences in the pattern of hemodynamic response to happy and angry facial expressions in infants--a near-infrared spectroscopic study". NeuroImage 54 (2): 1600–1606. doi:10.1016/j.neuroimage.2010.09.021. PMID 20850548.
- Awasthi B, Friedman J, Williams, MA (2011). "Processing of low spatial frequency faces at periphery in choice reaching tasks". Neuropsychologia 49 (7): 2136–41. doi:10.1016/j.neuropsychologia.2011.03.003. PMID 21397615.
- Bruce V, Young A (August 1986). "Understanding face recognition". Br J Psychology 77 (Pt 3): 305–27. doi:10.1111/j.2044-8295.1986.tb02199.x. PMID 3756376.
- Kanwisher N, McDermott J, Chun MM (1 June 1997). "The fusiform face area: a module in human extrastriate cortex specialized for face perception". J. Neurosci. 17 (11): 4302–11. PMID 9151747.
- Rossion, B.; Hanseeuw, B., & Dricot, L. (2012). "Defining face perception areas in the human brain: A large scale factorial fMRI face localizer analysis.". Brain and Cognition 79 (2): 138–157. doi:10.1016/j.bandc.2012.01.001.
- KannurpattiRypmaBiswal, S.S.B. (March 2012). "Prediction of task-related BOLD fMRI with amplitude signatures of resting-state fMRI". Frontiers in Systems Neuroscience 6: 1–7. doi:10.3389/fnsys.2012.00007.
- Gold, J.M.; Mundy, P.J., & Tjan, B.S. (2012). "The perception of a face is no more than the sum of its parts". Psychological Science 23 (4): 427–434. doi:10.1177/0956797611427407.
- Pitcher, D.; Walsh, V., & Duchaine, B. (2011). "The role of the occipital face area in the cortical face perception network". Experimental Brain Research 209 (4): 481–493. doi:10.1007/s00221-011-2579-1.
- Arcurio, L.R.; Gold, J.M., & James, T.W. (2012). "The response of face-selective cortex with single face parts and part combinations". Neuropsychologia 50 (10): 2454–2459. doi:10.1016/j.neuropsychologia.2012.06.016.
- Arcurio, L.R.; Gold, J.M., & James, T.W. (2012). "The response of face-selective cortex with single face parts and part combinations". Neuropsychologia 50 (10): 2458. doi:10.1016/j.neuropsychologia.2012.06.016.
- Liu J, Harris A, Kanwisher N. (2010). Perception of face parts and face configurations: An fmri study. Journal of Cognitive Neuroscience. (1), 203–211.
- Rossion, B., Caldara, R., Seghier, M., Schuller, A-M., Lazeyras, F., Mayer, E., (2003). A network of occipito-temporal face-sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing. A Journal of Neurology, 126 11 2381-2395
- McCarthy, G., Puce, A., Gore, J., Allison, T., (1997). Face-Specific Processing in the Human Fusiform Gyrus. Journal of Cognitive Neuroscience, 9 5 605-610
- Campbell, R., Heywood, C.A., Cowey, A., Regard, M., and Landis, T. (1990). Sensitivity to eye gaze in prosopagnosic patients and monkeys with superior temporal sulcus ablation. Neuropsychologia, 28(11), 1123-1142
- 8 (2). 1996. pp. 139–46. PMID 9081548.
- Haxby JV, Horwitz B, Ungerleider LG, Maisog JM, Pietrini P, Grady CL (1 November 1994). "The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations". J. Neurosci. 14 (11 Pt 1): 6336–53. PMID 7965040.
- Haxby JV, Ungerleider LG, Clark VP, Schouten JL, Hoffman EA, Martin A (January 1999). "The effect of face inversion on activity in human neural systems for face and object perception". Neuron 22 (1): 189–99. doi:10.1016/S0896-6273(00)80690-X. PMID 10027301.
- Puce A, Allison T, Asgari M, Gore JC, McCarthy G (15 August 1996). "Differential sensitivity of human visual cortex to faces, letterstrings, and textures: a functional magnetic resonance imaging study". J. Neurosci. 16 (16): 5205–15. PMID 8756449.
- Puce A, Allison T, Gore JC, McCarthy G (September 1995). "Face-sensitive regions in human extrastriate cortex studied by functional MRI". J. Neurophysiol. 74 (3): 1192–9. PMID 7500143.
- Sergent J, Ohta S, MacDonald B (February 1992). "Functional neuroanatomy of face and object processing. A positron emission tomography study". Brain 115 (Pt 1): 15–36. doi:10.1093/brain/115.1.15. PMID 1559150.
- Gorno-Tempini ML, Price CJ (October 2001). "Identification of famous faces and buildings: a functional neuroimaging study of semantically unique items". Brain 124 (Pt 10): 2087–97. doi:10.1093/brain/124.10.2087. PMID 11571224.
- Vuilleumier P, Pourtois G, Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia 45 (2007) 174–194
- Ishai A, Ungerleider LG, Martin A, Schouten JL, Haxby JV (August 1999). "Distributed representation of objects in the human ventral visual pathway". Proc. Natl. Acad. Sci. U.S.A. 96 (16): 9379–84. doi:10.1073/pnas.96.16.9379. PMC 17791. PMID 10430951.
- Gauthier I (January 2000). "What constrains the organization of the ventral temporal cortex?". Trends Cogn. Sci. (Regul. Ed.) 4 (1): 1–2. doi:10.1016/S1364-6613(99)01416-3. PMID 10637614.
- Droste DW, Harders AG, Rastogi E (August 1989). "A transcranial Doppler study of blood flow velocity in the middle cerebral arteries performed at rest and during mental activities". Stroke 20 (8): 1005–11. doi:10.1161/01.STR.20.8.1005. PMID 2667197.
- Harders AG, Laborde G, Droste DW, Rastogi E (July 1989). "Brain activity and blood flow velocity changes: a transcranial Doppler study". Int. J. Neurosci. 47 (1–2): 91–102. doi:10.3109/00207458908987421. PMID 2676884.
- Njemanze PC (September 2004). "Asymmetry in cerebral blood flow velocity with processing of facial images during head-down rest". Aviat Space Environ Med 75 (9): 800–5. PMID 15460633.
- Zheng, X.; Mondloch, C.J. & Segalowitz, S.J. (2012). "The timing of individual face recognition in the brain". Neuropsychologia 50 (7): 1451–1461. doi:10.1016/j.neuropsychologia.2012.02.030.
- Eimer, M.; Gosling, A., & Duchaine, B. (2012). "Electrophysiological markers of covert face recognition in developmental prosopagnosia". Brain: A Journal of Neurology 135 (2): 542–554. doi:10.1093/brain/awr347.
- Moulson, M.C.; Balas, B., Nelson, C., & Sinha, P. (2011). "EEG correlates of categorical and graded face perception.". Neuropsychologia 49 (14): 3847–3853. doi:10.1016/j.neuropsychologia.2011.09.046.
- Everhart DE, Shucard JL, Quatrin T, Shucard DW (July 2001). "Sex-related differences in event-related potentials, face recognition, and facial affect processing in prepubertal children". Neuropsychology 15 (3): 329–41. doi:10.1037/0894-418.104.22.1689. PMID 11499988.
- Herlitz A, Yonker JE (February 2002). "Sex differences in episodic memory: the influence of intelligence". J Clin Exp Neuropsychol 24 (1): 107–14. doi:10.1076/jcen.22.214.171.1240. PMID 11935429.
- Smith WM (July 2000). "Hemispheric and facial asymmetry: gender differences". Laterality 5 (3): 251–8. doi:10.1080/135765000406094. PMID 15513145.
- Voyer D, Voyer S, Bryden MP (March 1995). "Magnitude of sex differences in spatial abilities: a meta-analysis and consideration of critical variables". Psychol Bull 117 (2): 250–70. doi:10.1037/0033-2909.117.2.250. PMID 7724690.
- Hausmann M (2005). "Hemispheric asymmetry in spatial attention across the menstrual cycle". Neuropsychologia 43 (11): 1559–67. doi:10.1016/j.neuropsychologia.2005.01.017. PMID 16009238.
- De Renzi E (1986). "Prosopagnosia in two patients with CT scan evidence of damage confined to the right hemisphere". Neuropsychologia 24 (3): 385–9. doi:10.1016/0028-3932(86)90023-0. PMID 3736820.
- De Renzi E, Perani D, Carlesimo GA, Silveri MC, Fazio F (August 1994). "Prosopagnosia can be associated with damage confined to the right hemisphere--an MRI and PET study and a review of the literature". Neuropsychologia 32 (8): 893–902. doi:10.1016/0028-3932(94)90041-8. PMID 7969865.
- Mattson AJ, Levin HS, Grafman J (February 2000). "A case of prosopagnosia following moderate closed head injury with left hemisphere focal lesion". Cortex 36 (1): 125–37. doi:10.1016/S0010-9452(08)70841-4. PMID 10728902.
- Barton JJ, Cherkasova M (July 2003). "Face imagery and its relation to perception and covert recognition in prosopagnosia". Neurology 61 (2): 220–5. doi:10.1212/01.WNL.0000071229.11658.F8. PMID 12874402.
- Sprengelmeyer R, Rausch M, Eysel UT, Przuntek H (October 1998). "Neural structures associated with recognition of facial expressions of basic emotions". Proc. Biol. Sci. 265 (1409): 1927–31. doi:10.1098/rspb.1998.0522. PMC 1689486. PMID 9821359.
- Verstichel P (2001). "[Impaired recognition of faces: implicit recognition, feeling of familiarity, role of each hemisphere]". Bull. Acad. Natl. Med. (in French) 185 (3): 537–49; discussion 550–3. PMID 11501262.
- Nakamura K, Kawashima R, Sato N et al. (September 2000). "Functional delineation of the human occipito-temporal areas related to face and scene processing. A PET study". Brain 123 (Pt 9): 1903–12. doi:10.1093/brain/123.9.1903. PMID 10960054.
- Gur RC, Jaggi JL, Ragland JD et al. (September 1993). "Effects of memory processing on regional brain activation: cerebral blood flow in normal subjects". Int. J. Neurosci. 72 (1–2): 31–44. doi:10.3109/00207459308991621. PMID 8225798.
- Ojemann JG, Ojemann GA, Lettich E (February 1992). "Neuronal activity related to faces and matching in human right nondominant temporal cortex". Brain 115 (Pt 1): 1–13. doi:10.1093/brain/115.1.1. PMID 1559147.
- Bogen JE (April 1969). "The other side of the brain. I. Dysgraphia and dyscopia following cerebral commissurotomy". Bull Los Angeles Neurol Soc 34 (2): 73–105. PMID 5792283.
- Bogen JE (1975). "Some educational aspects of hemispheric specialization". UCLA Educator 17: 24–32.
- Bradshaw JL, Nettleton NC (1981). "The nature of hemispheric specialization in man". Behavioral and Brain Science 4: 51–91. doi:10.1017/S0140525X00007548.
- Galin D (October 1974). "Implications for psychiatry of left and right cerebral specialization. A neurophysiological context for unconscious processes". Arch. Gen. Psychiatry 31 (4): 572–83. doi:10.1001/archpsyc.1974.01760160110022. PMID 4421063.
- Njemanze PC (January 2007). "Cerebral lateralisation for facial processing: gender-related cognitive styles determined using Fourier analysis of mean cerebral blood flow velocity in the middle cerebral arteries". Laterality 12 (1): 31–49. doi:10.1080/13576500600886796. PMID 17090448.
- Gauthier I, Skudlarski P, Gore JC, Anderson AW (February 2000). "Expertise for cars and birds recruits brain areas involved in face recognition". Nat. Neurosci. 3 (2): 191–7. doi:10.1038/72140. PMID 10649576.
- Gauthier I, Tarr MJ, Anderson AW, Skudlarski P, Gore JC (June 1999). "Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects". Nat. Neurosci. 2 (6): 568–73. doi:10.1038/9224. PMID 10448223.
- Grill-Spector K, Knouf N, Kanwisher N (May 2004). "The fusiform face area subserves face perception, not generic within-category identification". Nat. Neurosci. 7 (5): 555–62. doi:10.1038/nn1224. PMID 15077112.
- Xu Y (August 2005). "Revisiting the role of the fusiform face area in visual expertise". Cereb. Cortex 15 (8): 1234–42. doi:10.1093/cercor/bhi006. PMID 15677350.
- Righi G, Tarr MJ (2004). "Are chess experts any different from face, bird, or greeble experts?". Journal of Vision 4 (8): 504–504. doi:10.1167/4.8.504.
- My Brilliant Brain, partly about grandmaster Susan Polgar, shows brain scans of the fusiform gyrus while Polgar viewed chess diagrams.
- Kung CC, Peissig JJ, Tarr MJ (December 2007). "Is region-of-interest overlap comparison a reliable measure of category specificity?". J Cogn Neurosci 19 (12): 2019–34. doi:10.1162/jocn.2007.19.12.2019. PMID 17892386.
- Feingold CA (1914). "The influence of environment on identification of persons and things". Journal of Criminal Law and Police Science 5: 39–51.
- Walker PM, Tanaka JW (2003). "An encoding advantage for own-race versus other-race faces". Perception 32 (9): 1117–25. doi:10.1068/p5098. PMID 14651324.
- Vizioli L, Rousselet GA, Caldara R (2010). "Neural repetition suppression to identity is abolished by other-race faces". Proc Natl Acad Sci U S A 107 (46): 20081–20086. doi:10.1073/pnas.1005751107. PMC 2993371. PMID 21041643.
- Malpass & Kravitz, 1969; Cross, Cross, & Daly, 1971; Shepherd, Deregowski, & Ellis, 1974; all cited in Shepherd, 1981
- Chance, Goldstein, & McBride, 1975; Feinman & Entwistle, 1976; cited in Shepherd, 1981
- Brigham & Karkowitz, 1978; Brigham & Williamson, 1979; cited in Shepherd, 1981
- Other-Race Face Perception D. Stephen Lindsay, Philip C. Jack, Jr., and Marcus A. Christian. Williams College
- Diamond & Carey, 1986; Rhodeset al.,1989
- F. Richard Ferraro (2002). Minority and Cross-cultural Aspects of Neuropsychological Assessment. Studies on Neuropsychology, Development and Cognition 4. East Sussex: Psychology Press. p. 90. ISBN 90-265-1830-7.
- Levin DT (December 2000). "Race as a visual feature: using visual search and perceptual discrimination tasks to understand face categories and the cross-race recognition deficit". J Exp Psychol Gen 129 (4): 559–74. doi:10.1037/0096-34126.96.36.1999. PMID 11142869.
- Rehnman J, Herlitz A (April 2006). "Higher face recognition ability in girls: Magnified by own-sex and own-ethnicity bias". Memory 14 (3): 289–96. doi:10.1080/09658210500233581. PMID 16574585.
- Tanaka, J.W.; Lincoln, S.; Hegg, L. (2003). "A framework for the study and treatment of face processing deficits in autism". In Schwarzer, G.; Leder, H. The development of face processing. Ohio: Hogrefe & Huber Publishers. pp. 101–119. ISBN 9780889372641.
- Behrmann, Marlene; Avidan, Galia; Leonard, Grace L.; Kimchi, Rutie; Beatriz, Luna; Humphreys, Kate; Minshew, Nancy (2006). "Configural processing in autism and its relationship to face processing". Neuropsychologia 44: 110–129. doi:10.1016/j.neuropsychologia.2005.04.002.
- Schreibman, Laura (1988). Autism. Newbury Park: Sage Publications. pp. 14–47. ISBN 0803928092.
- Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy (2012). "Face identity recognition in autism spectrum disorders: A review of behavioral studies". Neuroscience & Biobehavioral Reviews 36: 1060–1084. doi:10.1016/j.neubiorev.2011.12.008.
- Dawson, Geraldine; Webb, Sara Jane; McPartland, James (2005). "Understanding the nature of face processing impairment in autism: Insights from behavioral and electrophysiological studies". Developmental Neuropsychology 27: 403–424. PMID 15843104.
- Kita, Yosuke; Inagaki, Masumi (2012). "Face recognition in patients with Autism Spectrum Disorder". Brain and Nerve 64: 821–831. PMID 22764354.
- Grelotti, David; Gauthier, Isabel; Schultz, Robert (2002). "Social interest and the development of cortical face specialization: What autism teaches us about face processing". Developmental Psychobiology 40: 213–235. doi:10.1002/dev.10028. Retrieved 2/24/2012.
- Riby, Deborah; Doherty-Sneddon Gwyneth (2009). "The eyes or the mouth? Feature salience and unfamiliar face processing in Williams syndrome and autism". The Quarterly Journal of Experimental Psychology 62: 189–203. doi:10.1080/17470210701855629.
- Joseph, Robert; Tanaka, James (2003). "Holistic and part-based face recognition in children with autism". Journal of Child Psychology and Psychiatry 44: 529–542. doi:10.1111/1469-7610.00142.
- Langdell, Tim (1978). "Recognition of Faces: An approach to the study of autism". Journal of Psychology and Psychiatry and Allied Disciplines (Blackwell) 19: 255–265. Retrieved 2/12/2013.
- Spezio, Michael; Adolphs, Ralph; Hurley, Robert; Piven, Joseph (28 Sept 2006). "Abnormal use of facial information in high functioning autism". Journal of Autism and Developmental Disorders 37: 929–939. doi:10.1007/s10803-006-0232-9.
- Revlin, Russell (2013). Cognition: Theory and Practice. Worth Publishers. pp. 98–101. ISBN 9780716756675.
- Triesch, Jochen; Teuscher, Christof; Deak, Gedeon O.; Carlson, Eric (2006). "Gaze following: why (not) learn it?". Developmental Science 9: 125–157. doi:10.1111/j.1467-7687.2006.00470.x.
- Volkmar, Fred; Chawarska, Kasia; Klin, Ami (2005). "Autism in infancy and early childhood". Annual Reviews of Psychology 56: 315–316. doi:10.1146/annurev.psych.56.091103.070159.
- Nader-Grosbois, N.; Day, J.M. (2011). "Emotional cognition: theory of mind and face recognition". In Matson, J.L; Sturmey, R. International handbook of autism and pervasive developmental disorders. New York: Springer Science & Business Media. pp. 127–157. ISBN 9781441980649.
- Pierce, Karen; Muller, R.A., Ambrose, J., Allen, G.,Chourchesne (2001). "Face processing occurs outside the fusiform 'face area' in autism: evidence from functional MRI". Brain 124: 2059–2073. Retrieved 2/13/2013.
- Harms, Madeline; Martin, Alex; Wallace, Gregory (2010). "Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies". Neuropsychology Review 20: 290–322. doi:10.1007/s11065-010-9138-6.
- Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew; Clarke, Paula; Miles, Jermey; Nation, Kate; Clarke, Leesa; Williams, Christine (2008). "Emotion recognition in faces and the use of visual context in young people with high-functioning autism spectrum disorders". Autism 12: 607-. doi:10.1177/1362361308097118.
- Njemanze, P.C. Transcranial doppler spectroscopy for assessment of brain cognitive functions. United States Patent Application No. 20040158155, August 12th, 2004
- Njemanze, P.C. Noninvasive transcranial doppler ultrasound face and object recognition testing system. United States Patent No. 6,773,400, August 10th, 2004
- YangJing Long (2009). "Human age estimation by metric learning for regression problems". Proc. International Conference on Computer Analysis of Images and Patterns: 74–82.
- Face Recognition Homepage
- Are Faces a "Special" Class of Objects?
- Science Aid: Face Recognition
- FaceResearch – Scientific research and online studies on face perception
- Face Blind Prosopagnosia Research Centers at Harvard and University College London
- Face Recognition Tests - online tests for self-assessment of face recognition abilities.
- Perceptual Expertise Network (PEN) Collaborative group of cognitive neuroscientists studying perceptual expertise, including face recognition.
- Face Lab at the University of Western Australia
- Perception Lab at the University of St Andrews, Scotland
- The effect of facial expression and identity information on the processing of own and other race faces by Yoriko Hirose, PhD thesis from the University of Stirling
- Global Emotion Online-Training to overcome Caucasian-Asian other-race effect | http://en.wikipedia.org/wiki/Face_perception | 13 |
50 | Robotics Kinematics and Dynamics/Description of Position and Orientation
Note: Some illustrative figures still have to be added.
In general, a rigid body in three-dimensional space has six degrees of freedom: three rotational and three translational.
A conventional way to describe the position and orientation of a rigid body is to attach a frame to it. After defining a reference coordinate system, the position and orientation of the rigid body are fully described by the position of the frame's origin and the orientation of its axes, relative to the reference frame.
A rotation matrix describes the relative orientation of two such frames. The columns of this 3 × 3 matrix consist of the unit vectors along the axes of one frame, relative to the other, reference frame. Thus, the relative orientation of a frame with respect to a reference frame is given by a rotation matrix R whose columns are the unit vectors of the frame's axes, expressed in the reference frame.
Rotation matrices can be interpreted in two ways:
- As the representation of the rotation of the first frame into the second (active interpretation).
- As the representation of the mutual orientation between two coordinate systems (passive interpretation).
The coordinates, relative to the reference frame, of a point p, of which the coordinates are known with respect to a rotated frame with the same origin, can then be calculated as follows: p_ref = R p.
Some of the properties of the rotation matrix that may be of practical value, are:
- The column vectors of R are normal to each other.
- The length of the column vectors of R equals 1.
- A rotation matrix is a non-minimal description of a rigid body's orientation. That is, it uses nine numbers to represent an orientation instead of just three. (The two above properties correspond to six relations between the nine matrix elements. Hence, only three of them are independent.) Non-minimal representations often have some numerical advantages, though, as they do not exhibit coordinate singularities.
- Since R is orthonormal, R^(-1) = R^T.
Elementary Rotations about Frame Axes
The expressions for elementary rotations about frame axes can easily be derived. From the figure on the right, it can be seen that the rotation of a frame by an angle θ about the z-axis is described by:
R_z(θ) =
[ cos θ   −sin θ   0 ]
[ sin θ    cos θ   0 ]
[   0        0     1 ]
Similarly, it can be shown that the rotation of a frame by an angle θ about the x-axis is given by:
R_x(θ) =
[ 1     0        0     ]
[ 0   cos θ   −sin θ  ]
[ 0   sin θ    cos θ  ]
Derived in exactly the same manner, the rotation of a frame by an angle θ about the y-axis is described by:
R_y(θ) =
[  cos θ   0   sin θ ]
[    0     1     0   ]
[ −sin θ   0   cos θ ]
Compound rotations are found by multiplication of the different elementary rotation matrices.
The matrix corresponding to a set of rotations about moving axes can be found by postmultiplying the rotation matrices, thus multiplying them in the same order in which the rotations take place. The rotation matrix formed by a rotation by an angle α about the z-axis followed by a rotation by an angle β about the moved y-axis is then given by: R = R_z(α) R_y(β).
The composition of rotations about fixed axes, on the other hand, is found by premultiplying the different elementary rotation matrices.
The inverse of a single rotation about a frame axis is a rotation by the negative of the rotation angle about the same axis: R_z(θ)^(-1) = R_z(−θ).
The inverse of a compound rotation follows from the inverse of the matrix product: (R_1 R_2)^(-1) = R_2^(-1) R_1^(-1) = R_2^T R_1^T.
Note: These examples require the Robotics Toolbox to be properly installed.
theta = pi/2;
T_x = rotx(theta);  % Returns a 4x4 pose matrix. The upper-left 3x3 submatrix is the
                    % rotation matrix representing a rotation by theta about the x-axis.
R_x = tr2rot(T_x);  % Returns the 3x3 rotation matrix corresponding with T_x.
T_y = roty(theta);  % A rotation about the y-axis.
T_z = rotz(theta);  % A rotation about the z-axis.
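The same elementary rotations and composition rules can also be sketched without the toolbox. The following Python/NumPy snippet is an addition to the text (the function names are my own); it simply checks the statements above: post-multiplication for rotations about moving axes, pre-multiplication for fixed axes, and inverse equal to transpose.

import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

alpha, beta = 0.3, -0.7
R_moving = rot_z(alpha) @ rot_y(beta)   # rotation about z, then about the moved y-axis
R_fixed  = rot_y(beta) @ rot_z(alpha)   # rotation about z, then about the fixed y-axis
assert np.allclose(np.linalg.inv(R_moving), R_moving.T)   # inverse equals transpose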
Contrary to the rotation matrix, Euler angles are a minimal representation (a set of just three numbers, that is) of relative orientation. This set of three angles describes a sequence of rotations about the axes of a moving reference frame. There are, however, many (12, to be exact) sets that describe the same orientation: different combinations of axes (e.g. ZXZ, ZYZ, and so on) lead to different Euler angles. Euler angles are often used for the description of the orientation of the wrist-like end-effectors of many serial manipulator robots.
Note: Identical axes should not be in consecutive places (e.g. ZZX). Also, the range of the Euler angles should be limited in order to avoid different angles for the same orientation. E.g.: for the case of ZYZ Euler angles, the first rotation about the z-axis should be within (−π, π]. The second rotation, about the moved y-axis, has a range of [0, π]. The last rotation, about the moved z-axis, has a range of (−π, π].
Forward mapping, or finding the orientation of the end-effector with respect to the base frame, follows from the composition of rotations about moving axes. For a rotation by an angle α about the z-axis, followed by a rotation by an angle β about the moved x-axis, and a final rotation by an angle γ about the moved z-axis, the resulting rotation matrix is: R = R_z(α) R_x(β) R_z(γ).
After writing out:
R =
[ cα cγ − sα cβ sγ    −cα sγ − sα cβ cγ     sα sβ ]
[ sα cγ + cα cβ sγ    −sα sγ + cα cβ cγ    −cα sβ ]
[ sβ sγ                sβ cγ                 cβ    ]
Note: Notice the shorthand notation: cα stands for cos α, sα stands for sin α, and so on.
In order to drive the end-effector, the inverse problem must be solved: given a certain orientation matrix, which are the Euler angles that accomplish this orientation?
For the above case, the Euler angles α, β and γ are found by inspection of the rotation matrix:
β = arccos(r33),
α = atan2(r13, −r23),
γ = atan2(r31, r32),
where rij denotes the element of the rotation matrix in row i and column j.
In the above example, a coordinate singularity exists for β = 0. The above equations are badly numerically conditioned for small values of β: the first and last equations become undefined. This corresponds with an alignment of the first and last axes of the end-effector. The occurrence of a coordinate singularity involves the loss of a degree of freedom: in the case of the above example, small rotations about the y-axis require impossibly large rotations about the x- and z-axes.
No minimal representation of orientation can globally describe all orientations without coordinate singularities occurring.
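As an illustration of the inverse problem, the ZXZ extraction above can be written as a short Python/NumPy sketch. This is an added example, not part of the original text; it assumes the ZXZ matrix written out earlier and picks an arbitrary split between the first and last angles at the singularity.

import numpy as np

def euler_zxz_from_matrix(R):
    beta = np.arccos(np.clip(R[2, 2], -1.0, 1.0))
    if np.isclose(np.sin(beta), 0.0):
        # Coordinate singularity: only the sum of alpha and gamma is determined.
        alpha = np.arctan2(R[1, 0], R[0, 0])
        gamma = 0.0
    else:
        alpha = np.arctan2(R[0, 2], -R[1, 2])
        gamma = np.arctan2(R[2, 0], R[2, 1])
    return alpha, beta, gamma

# Round trip using the rot_x and rot_z helpers from the previous sketch:
# R = rot_z(0.4) @ rot_x(1.1) @ rot_z(-0.2)
# euler_zxz_from_matrix(R)  ->  approximately (0.4, 1.1, -0.2)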
The orientation of a rigid body can equally well be described by three consecutive rotations about fixed axes. This leads to a notation with Roll-Pitch-Yaw (RPY) angles.
The forward mapping of RPY angles to a rotation matrix is similar to that of Euler angles. Since the frame now rotates about fixed axes instead of moving axes, the order in which the different rotation matrices are multiplied is reversed: for a roll α about the fixed x-axis, a pitch β about the fixed y-axis and a yaw γ about the fixed z-axis, R = R_z(γ) R_y(β) R_x(α).
After writing out:
R =
[ cγ cβ    cγ sβ sα − sγ cα    cγ sβ cα + sγ sα ]
[ sγ cβ    sγ sβ sα + cγ cα    sγ sβ cα − cγ sα ]
[ −sβ      cβ sα               cβ cα            ]
The inverse relationships are found from inspection of the rotation matrix above:
β = arcsin(−r31),
α = atan2(r32, r33),
γ = atan2(r21, r11).
Note: The above equations are badly numerically conditioned for values of β near π/2 and −π/2.
Unit quaternions (quaternions of which the absolute value equals 1) are another representation of orientation. They can be seen as a compromise between the advantages and disadvantages of rotation matrices and Euler angle sets.
The notations above describe only relative orientation. If the coordinates of a point are known with respect to a frame that is rotated and translated with respect to a reference frame, its coordinates relative to the reference frame are given by: p_ref = R p + t, where t is the position of the rotated frame's origin expressed in the reference frame.
This can be compacted into the form of a homogeneous transformation matrix or pose (matrix). It is defined as follows:
T =
[ R   t ]
[ 0   1 ]
that is, a 4 × 4 matrix whose upper-left 3 × 3 block is the rotation matrix R, whose upper-right 3 × 1 block is the position vector t, and whose bottom row is [0 0 0 1].
This matrix represents the position and orientation of a frame whose origin, relative to a reference frame, is described by the position vector t, and whose orientation, relative to the same reference frame, is described by the rotation matrix R.
The pose matrix T is, thus, the representation of a frame in three-dimensional space. If the coordinates of a point p are known with respect to a frame, then its coordinates relative to the reference frame are found by: p_ref = T p.
This is the same as writing: p_ref = R p + t.
Note that the above vectors are extended with a fourth coordinate equal to one: they're made homogeneous.
As was the case with rotation matrices, homogeneous transformation matrices can be interpreted in an active ("displacement"), and a passive ("pose") manner. It is also a non-minimal representation of a pose, that does not suffer from coordinate singularities.
If the pose of a frame {c} is known relative to a frame {b}, whose pose is in turn known with respect to a third frame {a}, the resulting pose is found by multiplying the two pose matrices: T_ac = T_ab T_bc.
x = 1; y = 1.3; z = 0.4;
T = transl(x,y,z);  % Returns the pose matrix corresponding with a translation over
                    % the vector (x, y, z)'.
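The pose matrix can likewise be built and used directly in Python/NumPy. This sketch is an addition (the helper names are my own); it assembles a 4 × 4 pose from a rotation and a translation, transforms a homogeneous point, and composes two poses by matrix multiplication.

import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pose(R, t):
    T = np.eye(4)
    T[:3, :3] = R      # orientation of the frame
    T[:3, 3] = t       # position of the frame's origin
    return T

T_ab = pose(rot_z(np.pi / 4), np.array([1.0, 1.3, 0.4]))   # pose of {b} in {a}
p_b = np.array([0.5, 0.0, 0.2, 1.0])                       # homogeneous point in {b}
p_a = T_ab @ p_b                                           # the same point in {a}
T_bc = pose(rot_z(-0.3), np.array([0.0, 0.1, 0.0]))        # pose of {c} in {b}
T_ac = T_ab @ T_bc                                         # composition of poses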
Finite Displacement Twist
A pose matrix is a non-minimal way of describing a pose. A frequently used minimal alternative is the finite displacement twist: t_d = [x, y, z, α, β, γ]^T.
Here, α, β and γ are any valid set of Euler angles, while x, y and z are the coordinates of a reference point on the rigid body. | http://en.m.wikibooks.org/wiki/Robotics_Kinematics_and_Dynamics/Description_of_Position_and_Orientation | 13
80 | Area of a Triangle - box method (Coordinate Geometry)
The area of a triangle can be found by subtracting the area of simpler shapes from its bounding box.
Drag any point A,B,C. The area of the triangle ABC is continuously recalculated using the box method.
You can also drag the origin point at (0,0).
This method works by first drawing the
bounding box of the triangle. This is the smallest rectangle that can contain the triangle.
It must have sides that are vertical and horizontal.
This leaves easy shapes (right triangles
and rectangles) around it whose area can be easily calculated.
These areas are then subtracted from the area of the bounding box to give the desired result.
In the figure above, (click 'reset' if necessary), we are trying to find the area of the yellow triangle ABC.
The triangle's bounding box is the large gray rectangle surrounding the triangle.
You can see that there are three right triangles
formed around the yellow one.
By subtracting their areas from the area of the bounding box, we are left with the area of the yellow triangle.
These gray triangles are easy to calculate because they are orthogonal - they have two sides that are vertical or horizontal
and so their area can be found by the usual
'half base times height' method.
When one vertex is inside the box
In the figure above, click 'reset' and then drag point A to the left until it is inside the box. You should get a shape like the one on the right.
Now we have an extra rectangle in the corner.
We simply subtract the
area of this rectangle
along with the three triangles from the bounding box in the usual way.
Step by step
1. Draw the bounding box. This is the smallest orthogonal rectangle that will enclose the triangle.
(An orthogonal rectangle is one where all four sides are either vertical or horizontal.) Calculate the area of this box.
2. Calculate the area of the three right triangles formed between the triangle and the box (shown in gray above).
This is easy since they are all orthogonal (one side vertical or horizontal). The simple
'half base times height' method is used.
3. If one of the vertices is inside the box, calculate the
area of the small rectangle that forms in the corner.
4. Subtract the area of the three triangles, and possibly the extra rectangle, from the area of the bounding box,
thus giving the area of the triangle.
If one vertex was inside the box, we must also subtract the area of the resulting extra rectangle from the box.
Worked example: in the diagram above, click on "reset".
Draw the bounding box - a rectangle with vertical and horizontal sides. In this particular case:
- The left side is determined by the x coordinate of point B (10)
- The right side is determined by the x coordinate of point A (55)
- The top side is determined by the y coordinate of point B (35)
- The lower side is determined by the y coordinate of point C (5)
The area of this box is its width times its height: 45×30 = 1350 square units.
Calculate the area of the three gray right triangles using "half base times height". Taking the upper triangle,
we choose the horizontal line from B to be the base. Subtracting its X coordinates the base length is 45.
Its height is the vertical distance from A up to the corner, so subtracting its y coordinates gives 10. Its area is thus
half of 45×10 = 225. Using similar methods we find the area of all three triangles which are 225, 100, and 525 square units.
Subtracting the area of these three triangles from the area of the bounding box we get
1350-225-525-100 = 500 square units, the desired area of the triangle ABC.
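The same bookkeeping is easy to script. The short Python sketch below is an addition to the page; the coordinates A = (55, 25) and C = (45, 5) are reconstructed from the box dimensions and triangle areas quoted above, so treat them as assumed values rather than given ones.

# Vertices of the worked example (B is stated; A and C are reconstructed).
A, B, C = (55, 25), (10, 35), (45, 5)

xs, ys = zip(A, B, C)
box_area = (max(xs) - min(xs)) * (max(ys) - min(ys))   # 45 * 30 = 1350

# The three orthogonal right triangles between the triangle and the box,
# each computed by the 'half base times height' rule.
t_top    = 0.5 * (A[0] - B[0]) * (B[1] - A[1])         # 0.5 * 45 * 10 = 225
t_left   = 0.5 * (C[0] - B[0]) * (B[1] - C[1])         # 0.5 * 35 * 30 = 525
t_bottom = 0.5 * (A[0] - C[0]) * (A[1] - C[1])         # 0.5 * 10 * 20 = 100

print(box_area - t_top - t_left - t_bottom)            # 500 square units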
You can also calculate the area by formula. See Area of a triangle, formula method.
The box method also works with irregular quadrilaterals. The general approach also works with any polygon, although
you need to get a little creative sometimes to find the collection of simple orthogonal shapes to surround it.
Things to try
- In the diagram at the top of the page, drag the points A, B or C around and notice how the area calculation
uses the areas of the simple surrounding shapes.
- Try points that are negative in x and y. You can drag the origin point to move the axes.
- Click "hide details". Drag the triangle to some random new shape. Calculate its area using the box method and then click
"show details" to check your result.
- After the above, estimate the area by counting the grid squares inside the triangle. (Each square is 5 by 5 so
has an area of 25).
- Once you have done the above, you can click on "print" and it will print the diagram exactly as you set it.
(C) 2009 Copyright Math Open Reference. All rights reserved | http://www.mathopenref.com/coordtriangleareabox.html | 13 |
57 | In AC electrical theory every power source supplies a voltage that is either a sine wave of one particular frequency or can be considered as a sum of sine waves of differing frequencies. The neat thing about a wave such as V(t) = Asin(ωt + φ) is that it can be considered to be directly related to a vector of length A revolving in a circle with angular velocity ω - in fact just the y component of the vector. The phase constant φ is the starting angle at t = 0. In Figure 1, an animated GIF shows this relation.
Since a pen and paper drawing cannot be animated so easily, a diagram of a rotating vector shows the vector inscribed in the centre of a circle, as indicated in Figure 2 below. The angular frequency ω may or may not be indicated.
When two sine waves are produced on the same display, one wave is said to be leading or lagging the other. This terminology makes sense in the revolving vector picture as shown in Figure 3. The blue vector is said to be leading the red vector or, conversely, the red vector is lagging the blue vector.
Considering sine waves as the vertical components of rotating vectors has further important uses. For instance, adding or subtracting two sine waves directly requires a great deal of algebraic manipulation and the use of trigonometric identities. However, if we consider the sine waves as vectors, we have a simple problem of vector addition if we ignore ω. For example, consider
Asin(ωt + φ) = 5sin(ωt + 30°) + 4sin(ωt + 140°) ;
the corresponding vector addition is:
Ax = 5cos(30°) + 4cos(140°) = 1.26595
Ay = 5sin(30°) + 4sin(140°) = 5.07115
So the Pythagorean theorem and simple trigonometry produce the result
5.23sin(ωt + 76.0°) .
Make a note not to forget to put ωt back into the final sinusoidal expression.
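The same addition can be done with complex numbers, since the phasor A∠φ is just the complex number A·e^(jφ). The following Python lines are an added illustration and reproduce the 5.23∠76.0° result:

import cmath, math

def phasor(amplitude, phase_deg):
    # represent A*sin(wt + phase) by the complex number A*exp(j*phase)
    return cmath.rect(amplitude, math.radians(phase_deg))

total = phasor(5, 30) + phasor(4, 140)
print(abs(total))                          # ~5.23
print(math.degrees(cmath.phase(total)))    # ~76.0 degrees
# The sum is therefore about 5.23*sin(wt + 76.0 degrees); remember to restore the wt.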
Phasors and Resistors, Capacitors, and Inductors
The basic relationship in electrical circuits is between the current through an element and the voltage across it. For resistors, the famous Ohm's Law gives
VR = IR . (1)
For capacitors, the voltage is determined by the charge stored on the plates:
VC = q/C . (2)
For inductors, the voltage is proportional to the rate of change of the current:
VL = L dI/dt . (3)
These three equations also provide a phase relationship between the current entering the element and the voltage over it. For the resistor, the voltage and current will be in phase. That means if I has the form Imaxsin(ωt + φ) then VR has the identical form Vmaxsin(ωt + φ) where Vmax = ImaxR.
For capacitors and inductors it is a little more complicated. Consider the capacitor. Imagine the current entering the capacitor has the form Imaxsin(ωt + φ). The voltage, however, depends on the charge on the plates as indicated in Equation (2). The current and charge are related by I = dq/dt. Since we know the form of I, simple calculus tells us that q should have the form − (Imax/ω)cos(ωt + φ) or (Imax/ω)sin(ωt + φ - 90°). Thus VC has the form (Imax/ωC)sin(ωt + φ - 90°) = Vmaxsin(ωt + φ - 90°). The capacitor current leads the capacitor voltage by 90°. Also note that Vmax = Imax/ωC. The quantity 1/ωC is called the capacitive reactance XC and has the unit of Ohms.
For the inductor, we again assume that the current entering the inductor has the form Imaxsin(ωt + φ). The voltage, however, depends on the time derivative of the current as seen in Equation (3). Since we assumed the form of I, the voltage over the inductor will have the form ωLImaxcos(ωt + φ) or Vmaxsin(ωt + φ + 90°). The inductor current lags the inductor voltage. Here note that the quantity ωL is called the inductive reactance XL. It also has units of Ohms.
The phase relationship of the three elements is summed up in the following diagram, Figure 5.
Note that in all three cases, resistor, capacitor, and inductor, the relationship between the maximum voltage and the maximum current was of the form
Vmax = ImaxZ . (4)
We call Z the impedance of the circuit element.
Equation (4) is just an extension of Ohm's Law to AC circuits. For circuits containing any combination of circuit elements, we can define a unique equivalent impedance and phase angle that will allow us to find the current leaving the battery. We show how to do so in the next section.
Phasors and AC Circuit Problems
Phasors reduce AC Circuit problems to simple, if often tedious, vector addition and subtraction problems and provide a nice graphical way of thinking of the solution. In these problems, a power supply is connected to a circuit containing some combination of resistors, capacitors, and inductors. It is common for the characteristics of the power supply, Vmax and frequency ω, to be given. The unknown quantity would be the characteristics of the current leaving the power supply, Imax and the phase angle φ relative to the power supply. To solve, one needs only to follow a few simple rules, illustrated in the example below.
The emf for the circuit in Figure 6 is ε = 10sin(1000t). Find the current delivered to the circuit. Find the equivalent impedance of the circuit. Find the equation of the current and voltage drop for each element of the circuit.
XC = 1/ωC = 1/[1000 rad/s× 10 μF] = 100 Ω ,
XL = ωL = 1000 rad/s× 40 mH = 40 Ω .
Next we assign a current to each branch of the circuit.
V2 = (20 Ω)I2 .
The branch carrying I1 needs more work.
Since the current is common, we draw a diagram that indicates the appropriate phase relationships. We need to find the equivalent impedance Z1 and the phase angle φ1 that we can use to replace the series combination of the 50 Ω resistor and the capacitor in this branch.
Using the Pythagorean Theorem and trigonometry, we find the impedance of the branch
Z1 = sqrt(R^2 + XC^2) = sqrt((50 Ω)^2 + (100 Ω)^2) = 111.8 Ω
and the phase angle
φ1 = arctan(XC/R) = arctan(100/50) = 63.4° .
As we see from Figure 8, the current I1 leads V1 by 63.4°.
Now these two branches containing the 20 Ω resistor and Z1 are in parallel, that is V1 = V2 = V. Since the voltage is common, we draw a diagram like the following to find the equivalent impedance Z12 and phase angle φ12. To do the vector addition, we will treat the voltage vector as the x-axis. Then the branch currents add component by component: along the voltage axis, Ix = V[1/20 + cos(63.4°)/111.8] = 0.0540V, and perpendicular to it, Iy = V[sin(63.4°)/111.8] = 0.0080V, giving a total current of magnitude 0.0546V. Hence the equivalent impedance of the two arms together is Z12 = 18.319 Ω. The phase angle is φ12 = arctan(0.0080/0.0540) = 8.43° .
As we see from Figure 9, the current leads the voltage.
Next the current I12 equals I, and this current passes through the 100 Ω resistor, the impedance Z12, and XL. Figure 10 shows the appropriate diagram for finding the total circuit's equivalent impedance Zeq and phase angle φ. To do the vector addition, we will treat the current vector as the x-axis. Then the voltage drops per unit current add: along the current axis, 100 + 18.319cos(8.43°) = 118.1 Ω, and perpendicular to it, 40 − 18.319sin(8.43°) = 37.3 Ω. Hence the equivalent impedance of the circuit together is Zeq = sqrt(118.1^2 + 37.3^2) = 123.9 Ω. The phase angle is φ = arctan(37.3/118.1) = 17.5° .
As can be seen from Figure 10, the voltage leads the current. Since εmax = 10 Volts, we have Imax = εmax/Z = 10/123.875 A = 80.73 mA. The requested equation for the current is
I = (80.73 mA)sin(ωt − 17.53°) .
V100 = IR = (8.073 V) sin(ωt − 17.53°) .
For the inductor
VLmax = ImaxXL = 80.73 mA × 40 Ω = 3.229 Volts .
The phase relation between VL and I yields
VL = VLmaxsin(ωt − 17.53° + 90°) = (3.229 V) sin(ωt + 72.47°) .
The maximum voltage drop across Z12 is
Vmax = ImaxZ12 = 80.73 mA × 18.319 Ω = 1.479 Volts.
Since the voltage lags I by φ12, we find
V = Vmaxsin(ωt − 17.53° - φ12) = (1.479 V) sin(ωt − 25.96°) .
From here on we reverse the steps we took to find Z12 in the first place.
By definition, the voltage drop across Z12 is also the voltage across the 20 Ω resistor. The maximum current through the resistor will be
I2max = Vmax/R = 1.479 V / 20 Ω = 73.95 mA .
The equation for this current is
I2 = (73.95 mA) sin(ωt − 25.96°) .
The voltage V is also the potential drop across Z1. The maximum current in this branch is
I1max = Vmax/Z1 = 1.479 V / 111.803 Ω = 13.23 mA .
Recalling the phase information we derived for Z1, the current formula will be
I1 = I1maxsin(ωt − 25.96° + φ1) = (13.23 mA) sin(ωt + 37.48°) .
This in turn is the current through the capacitor and 50 Ω resistor. The maximum voltage drop over the capacitor is
VCmax = I1maxXC = 13.23 mA × 100 Ω = 1.323 Volts .
We know that VC must lag I1 by 90°. Hence the equation for the voltage will be
VC = VCmaxsin(ωt + 37.48° - 90°) = (1.323 V) sin(ωt − 52.52°) .
Finally the maximum voltage drop over the 50 Ω resistor will be
V50max = I1maxR50 = 13.23 mA × 50 Ω = 0.661 Volts.
Current and voltage are in phase for a resistor, so the equation will be
V50 = (0.661 V) sin(ωt + 37.48°) .
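As a cross-check of the worked example, the whole circuit can also be solved in a few lines using complex impedances. The Python sketch below is an addition; it assumes the topology described above (the source feeding the 100 Ω resistor and the 40 mH inductor in series with the parallel combination of the 20 Ω resistor and the 50 Ω-plus-10 µF branch) and reproduces Zeq ≈ 123.9 Ω and the 17.5° phase angle. The summary table follows below.

import cmath, math

w = 1000.0                      # rad/s
Z_C = 1 / (1j * w * 10e-6)      # 10 uF capacitor: -j100 ohms
Z_L = 1j * w * 40e-3            # 40 mH inductor:  +j40 ohms

Z1  = 50 + Z_C                  # 50 ohm resistor in series with the capacitor
Z12 = 1 / (1 / 20 + 1 / Z1)     # in parallel with the 20 ohm resistor
Zeq = 100 + Z_L + Z12           # in series with the 100 ohm resistor and the inductor

print(abs(Z1), abs(Z12), abs(Zeq))        # ~111.8, ~18.3, ~123.9 ohms
print(math.degrees(cmath.phase(Zeq)))     # ~17.5 degrees (voltage leads current)
print(10 / abs(Zeq))                      # ~0.0807 A peak current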
|Circuit element||Voltage||Current|
|Power Supply||(10 V)sin(ωt)||(80.73 mA)sin(ωt − 17.53°)|
|100 Ω Resistor||(8.073 V) sin(ωt − 17.53°)||(80.73 mA)sin(ωt − 17.53°)|
|40 mH Inductor||(3.229 V) sin(ωt + 72.47°)||(80.73 mA)sin(ωt − 17.53°)|
|Z12||(1.479 V) sin(ωt − 25.96°)||(80.73 mA)sin(ωt − 17.53°)|
|20 Ω Resistor||(1.479 V) sin(ωt − 25.96°)||(73.94 mA) sin(ωt − 25.96°)|
|Z1||(1.479 V) sin(ωt − 25.96°)||(13.23 mA) sin(ωt + 37.48°)|
|10 μF Capacitor||(1.323 V) sin(ωt − 52.52°)||(13.23 mA) sin(ωt + 37.48°)|
|50 Ω Resistor||(0.661 V) sin(ωt + 37.48°)||(13.23 mA) sin(ωt + 37.48°)| | http://www.kwantlen.ca/science/physics/faculty/mcoombes/P2421_Notes/Phasors/Phasors.html | 13 |
53 | Editor's Note: This article first appeared in Xcell Journal Issue 80 Third Quarter 2012, and is reproduced here with the kind permission of Xcell's publisher, Mike Santarini.
One of the many benefits of an FPGA-based solution is the ability to implement a mathematical algorithm in the best possible manner for the problem at hand. For example, if response time is critical, then we can pipeline the stages of mathematics. But if accuracy of the result is more important, we can use more bits to ensure we achieve the desired precision. Of course, many modern FPGAs also provide the benefit of embedded multipliers and DSP slices, which can be used to obtain the optimal implementation in the target device.
Let's take a look at the rules and techniques that you can use to develop mathematical functions within an FPGA or other programmable device.
Representation of numbers
There are two methods of representing numbers within a design, fixed- or floating-point number systems. Fixed-point representation maintains the decimal point within a fixed position, allowing for straightforward arithmetic operations. The major drawback of the fixed-point system is that to represent larger numbers or to achieve a more accurate result with fractional numbers, you will need to use a larger number of bits. A fixed-point number consists of two parts, integer and fractional.
Floating-point representation allows the decimal point to "float" to different places within the number, depending upon the magnitude. Floating-point numbers, too, are divided into two parts, the exponent and the mantissa. This scheme is very similar to scientific notation, which represents a number as A times 10 to the power of B, where A is the mantissa and B is the exponent. However, the base of the exponent in a floating-point number is base 2, that is, A times 2 to the power of B. The floating-point number is standardized by IEEE/ANSI standard 754-1985. The basic IEEE floating-point number utilizes an 8-bit exponent and a 24-bit mantissa.
Due to the complexity of floating-point numbers, we as designers tend wherever possible to use fixed-point representations. A fixed-point number with 8 integer bits and 8 fractional bits (an 8,8 format), for example, is capable of representing an unsigned number between 0.0 and 255.99609375 or a signed number between –128.0 and 127.99609375 using two's complement representation. Within a design you have the choice – typically constrained by the algorithm you are implementing – to use either unsigned or signed numbers. Unsigned numbers are capable of representing a range of 0 to 2^n – 1, and always represent positive numbers. By contrast, the range of a signed number depends upon the encoding scheme used: sign and magnitude, one's complement, or two's complement.
The sign-and-magnitude scheme utilizes the left-most bit to represent the sign of the number (0 = positive, 1 = negative). The remainder of the bits represent the magnitude. Therefore, in this system, positive and negative numbers have the same magnitude but the sign bit differs. As a result, it is possible to have both a positive and a negative zero within the sign-and-magnitude system.
One's complement uses the same unsigned representation for positive numbers as sign and magnitude. However, for negative numbers it uses the inversion (one's complement) of the positive number.
Two's complement is the most widely used encoding scheme for representing signed numbers. Here, as in the other two schemes, positive numbers are represented in the same manner as unsigned numbers, while negative numbers are represented as the binary number you add to a positive number of the same magnitude to get zero. You calculate a negative two's complement number by first taking the one's complement (inversion) of the positive number and then adding one to it. The two's complement number system allows you to subtract one number from another by performing an addition of the two numbers. The range a two's complement number can represent is given by:
– 2^(n-1) to + (2^(n-1) – 1)
One way to convert a number to its two's complement format is to work right to left, leaving the number the same until you encounter the first "1." After this point, each bit is inverted.
Fixed-point mathematics
The normal way of representing the split between integer and fractional bits within a fixed-point number is x,y where x represents the number of integer bits and y the number of fractional bits. For example, 8,8 represents 8 integer bits and 8 fractional bits, while 16,0 represents 16 integer and 0 fractional. In many cases, you will determine the correct number of integer and fractional bits required at design time, normally following conversion from a floating-point algorithm. Thanks to the flexibility of FPGAs, we can represent a fixed-point number of any bit length; the number of integer bits required depends upon the maximum integer value the number is required to store, while the number of fractional bits will depend upon the accuracy of the final result. To determine the number of integer bits required, use the following equation:
Integer bits = ceil(log2(maximum value))
For example, the number of integer bits required to represent a value between 0.0 and 423.0 is given by:
ceil(log2(423.0)) = ceil(8.725) = 9
That means you would need 9 integer bits, allowing a range of 0 to 511 to be represented. Representing the number using 16 bits would allow for 7 fractional bits. The accuracy this representation would be capable of providing is given by:
1 / 2^7 = 0.0078125
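These sizing rules are easy to script. The small Python helper below is an added illustration (not from the original article); it returns the integer bits, fractional bits and resolution for a given maximum value and word length.

import math

def fixed_point_format(max_value, word_length=16):
    integer_bits = math.ceil(math.log2(max_value))    # bits needed for the integer part
    fractional_bits = word_length - integer_bits
    resolution = 2.0 ** -fractional_bits              # smallest representable step
    return integer_bits, fractional_bits, resolution

print(fixed_point_format(423.0))   # (9, 7, 0.0078125)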
You can increase the accuracy of a fixed-point number by using more bits to store the fractional number. When designing, there are times when you may wish to store only fractional numbers (0,16), depending upon the size of the number you wish to scale up. Scaling up by 2^16 may yield a number that still does not provide an accurate enough result. In this case you can multiply up by a larger power of 2, such that the number can still be represented within a 16-bit number. You can then remove this scaling at a further stage within the implementation. For example, to represent the number 1.45309806319x10^-4 in a 16-bit number, the first step is to multiply it by 2^16:
65536 • 1.45309806319x10^-4 = 9.523023
Storing the integer of the result (9) will result in the number being stored as 1.37329101563x10^-4 (9 / 65536). This difference between the number required to be stored and the stored number is substantial and could lead to an unacceptable error in the calculated result. You can obtain a more accurate result by scaling the number up by a larger power of 2, chosen so that the result lies between 32768 and 65535 and therefore still allows storage in a 16-bit number. Using the earlier example of storing 1.45309806319x10^-4, multiplying by a factor of 2^28 will yield a number that can be stored in 16 bits and will be a highly accurate representation of the desired number.
268435456 • 1.45309806319x10^-4 = 39006.3041205
The integer of the result will give you a stored number of 1.45308673382x10^-4, which will provide for a much more accurate calculation, assuming you can address the scaling factor of 2^28 at a later stage within the calculation. For example, multiplying the scaled number with a 16-bit number scaled 4,12 will produce a result of 4,40 (28 + 12 fractional bits). The result, however, will be stored in a 32-bit result.
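The effect of the chosen scale factor on the stored constant can be checked numerically. This short Python sketch is an addition; it quantizes the constant with a 2^16 and a 2^28 scale factor and prints what is actually stored in each case.

value = 1.45309806319e-4

for shift in (16, 28):
    stored = int(value * 2**shift)     # integer actually held in the 16-bit word
    recovered = stored / 2**shift      # the value that integer represents
    print(shift, stored, recovered)
# shift 16 stores 9     -> 1.3733e-4   (large quantization error)
# shift 28 stores 39006 -> 1.45309e-4  (close to the desired constant)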
Fixed-point rules
To add, subtract or divide, the decimal points of both numbers must be aligned. That is, you can only add to, subtract from or divide an x,8 number by a number that is also in an x,8 representation. To perform arithmetic operations on numbers of a different x,y format, you must first ensure the decimal points are aligned. To align a number to a different format, you have two choices: either multiply the number with more integer bits by 2^X or divide the number with the fewest integer bits by 2^X. Division, however, will reduce your accuracy and may lead to a result that is outside the allowable tolerance. Since all numbers are stored in base-two scaling, you can easily scale the number up or down in an FPGA by shifting one place to the left or right for each power of 2 required to balance the two decimal points. To add together two numbers that are scaled 8,8 and 9,7, you can either scale up the 9,7 number by a factor of 2^1
or scale the 8,8 format down to a 9,7 format, if the loss of a least-significant bit is acceptable.
For example, say you want to add 234.58 and 312.732, which are stored in an 8,8 and a 9,7 format respectively. The first step is to determine the actual 16-bit numbers that will be added together.
234.58 • 2^8 = 60052.48
312.732 • 2^7 = 40029.69
The two numbers to be added are 60052 and 40029. However, before you can add them you must align the decimal points. To align the decimal points by scaling up the number with the largest number of integer bits, you must scale up the 9,7-format number by a factor of 2^1:
40029 • 2^1 = 80058
You can then calculate the result by performing an addition of:
80058 + 60052 = 140110
This represents 547.3046875 in a 10,8 format (140110 / 2^8).
You can multiply by a fractional number instead of using division within an equation through multiplying by the reciprocal of the divisor. This approach can reduce the complexity of your design significantly. For example, to divide the number 312.732, represented in 9,7 (40029) format, by 15, the first stage is to calculate the reciprocal of the divisor:
1 / 15 = 0.06666
This reciprocal must then be scaled up, to be represented within a 16-bit number.
65536 • 0.06666 = 4369
This step will produce a result that is formatted 9,23 when the two numbers are multiplied together.
4369 • 40029 = 174886701
The result of this multiplication is thus:
174886701 / 2^23 = 20.848
While the expected result is 20.8488, if the result is not accurate enough, then you can scale up the reciprocal by a larger factor to produce a more accurate result. Therefore, never divide by a number when you can multiply by the reciprocal.
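The multiply-by-reciprocal trick can be modelled the same way. The Python lines below are an added sketch that reproduces the numbers above and shows how a larger reciprocal scale factor (with a matching final shift) reduces the error.

x = 40029                        # 312.732 stored in 9,7 format

recip = int((1 / 15) * 2**16)    # 4369, the reciprocal scaled by 2^16
print((x * recip) / 2**23)       # ~20.8482 (expected 312.732 / 15 = 20.8488)

recip = int((1 / 15) * 2**20)    # a larger scale factor for the reciprocal
print((x * recip) / 2**27)       # ~20.8484, closer to the expected value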
Issues of overflow
When implementing algorithms, the result must not be larger than what is capable of being stored within the result register. Otherwise a condition known as overflow occurs. When that happens, the stored result will be incorrect and the most significant bits are lost. A very simple example of overflow would be if you added two 16-bit numbers, each with a value of 65535, and the result was stored within a 16-bit register.
65535 + 65535 = 131070
The above calculation would result in the 16-bit result register containing a value of 65534, which is incorrect. The simplest way to prevent overflow is to determine the maximum value that will result from the mathematical operation and use the following equation to determine the size of the result register required:
Result register bits = ceil(log2(maximum result))
If you were developing an averager to calculate the average of up to fifty 16-bit inputs, the size of the required result register could be calculated.
50 • 65535 = 3276750
Using this same equation, this would require a 22-bit result register to prevent overflow occurring. You must also take care, when working with signed numbers, to ensure there is no overflow when using negative numbers. Using the averager example again, taking 10 averages of a signed 16-bit number returns a 16-bit result.
10 • -32768 = -327680
Since it is easier to multiply the result by a scaled reciprocal of the divisor, you can multiply this number by: 1/10 • 65536 = 6554 to determine the average.
-327680 • 6554 = -2147614720
This number when divided by 2^16 equals -32770, which cannot be represented correctly within a 16-bit output. The module design must therefore take the overflow into account and detect it to ensure you don't output an incorrect result.
Real-world implementation
Let's say that you are designing a module to implement a transfer function that is used to convert atmospheric pressure, measured in millibars, into altitude, measured in meters.
-0.0088x^2 + 1.7673x + 131.29
The input value will range between 0 and 10 millibars, with a resolution of 0.1 millibar. The output of the module is required to be accurate to +/-0.01 meters. As the module specification does not determine the input scaling, you can figure it out by the following equation:
Input integer bits = ceil(log2(10)) = 4
Therefore, to ensure maximum accuracy you should format the input data as 4 integer and 12 fractional bits. The next step in the development of the module is to use a spreadsheet to calculate the expected result of the transfer function across the entire input range using the unscaled values. If the input range is too large to reasonably achieve this, then calculate an acceptable number of points. For this example, you can use 100 entries to determine the expected result across the entire input range.
Once you have calculated the initial unscaled expected values, the next step is to determine the correct scaling factors for the constants and calculate the expected outputs using the scaled values. To ensure maximum accuracy, each of the constants used within the equation will be scaled by a different factor.
The scaling factor for the first polynomial constant (A = 131.29) is given by:
2^(16 − ceil(log2(131.29))) = 2^(16 − 8) = 2^8
The second polynomial constant (B = 1.7673) scaling factor is given by:
2^(16 − ceil(log2(1.7673))) = 2^(16 − 1) = 2^15
The final polynomial constant (C = −0.0088) can be scaled up by a factor of 2^16, as it is completely fractional.
These scaling factors allow you to calculate the scaled spreadsheet, as shown in Table 1. The results of each stage of the calculation will produce a result that will require more than 16 bits.
Table 1. Real results against the fixed-point mathematics
The calculation of x^2 will produce a result that is 32 bits long formatted 4,12 + 4,12 = 8,24. This is then multiplied by the constant C, producing a result that will be 48 bits long formatted 8,24 + 0,16 = 8,40. For the accuracy required in this example, 40 bits of fractional representation is excessive. Therefore, the result will be divided by 2^32 to produce a result with a bit length of 16 bits formatted 8,8. The same reduction to 16 bits is carried out upon the calculation of Bx to produce a result formatted 5,11.
The result is the addition of columns Cx^2, Bx and A. However, to obtain the correct result you must first align the radix points, either by shifting up A and Cx^2 to align the numbers in an x,11 format, or shifting down the calculated Bx to a format of 8,8, aligning the radix points with the calculated values of A and Cx^2.
In this example, we shifted down the calculated value by 2^3 to align the radix points in an 8,8 format. This approach simplified the number of shifts required, thus reducing the logic needed to implement the example. Note that if you cannot achieve the required accuracy by shifting down to align the radix points, then you must align the radix points by shifting up the calculated values of A and Cx^2. In this example, the calculated result is scaled up by a power of 2^8. You can then scale down the result and compare it against the result obtained with unscaled values. The difference between the calculated result and the expected result is then the accuracy; the spreadsheet commands MAX() and MIN() give the maximum and minimum error of the calculated result across the entire range of spreadsheet entries.
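The spreadsheet model can equally well be written as a few lines of Python. The sketch below is an addition; it assumes the 4,12 input format derived earlier and the scaled constants A = 33610, B = 57910 and C = -577 used in the RTL that follows, mirrors the shift-and-add arithmetic, and compares it with the floating-point transfer function.

A, B, C = 33610, 57910, -577           # 131.29 (8,8), 1.7673 (1,15), -0.0088 (0,16)

def altitude_fixed(pressure_mbar):
    x = int(pressure_mbar * 2**12)     # input in 4,12 format
    cx2 = (x * x) * C                  # 8,24 * 0,16 = 8,40
    bx = x * B                         # 4,12 * 1,15 = 5,27
    result = A + (cx2 >> 32) + (bx >> 19)   # all terms aligned to 8,8
    return result / 2**8

def altitude_float(p):
    return -0.0088 * p**2 + 1.7673 * p + 131.29

for p in (0.0, 2.5, 5.0, 10.0):
    print(p, altitude_fixed(p), altitude_float(p))
# Compare the two columns against the +/-0.01 m accuracy target; Python's >> on
# negative numbers rounds toward minus infinity, matching the VHDL bit slicing.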
Once the calculated spreadsheet confirms that you can achieve the required accuracy, you can write and simulate the RTL code. If desired, you could design the testbench such that the input values are the same as those used in the spreadsheet. This allows you to compare the simulation outputs against the spreadsheet-calculated results to ensure the correct RTL implementation.
RTL implementation
The RTL example uses signed parallel mathematics to calculate the result within four clock cycles. Because of the signed parallel multiplication, you must take care to correctly handle the extra sign bits generated by the multiplications.
ENTITY transfer_function IS PORT(
sys_clk : IN std_logic;
reset : IN std_logic;
data : IN std_logic_vector(15 DOWNTO 0);
new_data : IN std_logic;
result : OUT std_logic_vector(15 DOWNTO 0);
new_res : OUT std_logic);
END ENTITY transfer_function;
ARCHITECTURE rtl OF transfer_function IS
-- this module performs the following transfer function -0.0088x2 + 1.7673x + 131.29
-- input data is scaled 4,12, while the output data will be scaled 8,8.
-- this module utilizes signed parallel mathematics
TYPE control_state IS (idle, multiply, add, result_op);
CONSTANT c : signed(16 DOWNTO 0) := to_signed(-577,17);
CONSTANT b : signed(16 DOWNTO 0) := to_signed(57910,17);
CONSTANT a : signed(16 DOWNTO 0) := to_signed(33610,17);
SIGNAL current_state : control_state;
SIGNAL buf_data : std_logic; --used to detect rising edge upon the new_data
SIGNAL squared : signed(33 DOWNTO 0); -- register holds input squared.
SIGNAL cx2 : signed(50 DOWNTO 0); --register used to hold Cx2
SIGNAL bx : signed(33 DOWNTO 0); -- register used to hold bx
SIGNAL res_int : signed(16 DOWNTO 0); --register holding the temporary result
BEGIN
fsm : PROCESS(reset, sys_clk)
BEGIN
IF reset = '1' THEN
buf_data <= '0';
squared <= (OTHERS => '0');
cx2 <= (OTHERS => '0');
bx <= (OTHERS => '0');
result <= (OTHERS => '0');
res_int <= (OTHERS => '0');
new_res <= '0';
current_state <= idle;
ELSIF rising_edge(sys_clk) THEN
buf_data <= new_data;
CASE current_state IS
WHEN idle =>
new_res <= '0';
IF (new_data = '1') AND (buf_data = '0')
THEN --detect rising edge new data
squared <= signed( '0'& data) * signed('0'& data);
current_state <= multiply;
ELSE
squared <= (OTHERS =>'0');
current_state <= idle;
END IF;
WHEN multiply =>
new_res <= '0';
cx2 <= (squared * c);
bx <= (signed('0'& data)* b);
current_state <= add;
WHEN add =>
new_res <= '0';
res_int <= a + cx2(48 DOWNTO 32) +
("000"& bx(32 DOWNTO 19));
current_state <= result_op;
WHEN result_op =>
result <= std_logic_vector(res_int (res_int'high -1 DOWNTO 0));
new_res <= '1'; -- flag that the result output now holds a new value
current_state <= idle;
END CASE;
END IF;
END PROCESS fsm;
END ARCHITECTURE rtl;
The architecture of FPGAs makes them ideal for implementing mathematical functions, although the implementation of your algorithm may take a little more initial thought and modeling in system-level tools such as MATLAB or Excel. You can quickly implement mathematical algorithms once you have mastered some of the basics of FPGA mathematics.
About the author
Adam Taylor is Principal Engineer at EADS Astrium. Adam can be contacted at [email protected]
| http://www.eetimes.com/design/programmable-logic/4391894/The-basics-of-FPGA-mathematics?Ecosystem=eda-design | 13
94 | Lasers can emit electromagnetic radiation of a single frequency, and therefore a single wavelength, in a highly directional beam. The light from a laser is temporally coherent because all the electromagnetic waves are in phase. It is also spatially coherent because the emissions at different transverse positions in the beam have a fixed phase relationship. The spatial and temporal coherence of a laser permit all of the energy of the beam to be focused by a lens to an extremely small spot, consequently delivering all of the power of the beam to a very small area. The goal of this experiment is to explore what characteristics of a laser determine the frequency and intensity pattern it emits. In addition, I investigated a variety of the transverse patterns that can be found for a laser beam, and what changes in the characteristics of the design of the laser cause changes in those transverse patterns. Last but not least, I also investigated how a laser can be made from discrete pieces: mirrors and a gas of atoms.
As a one-dimensional example, a string fixed at both ends can vibrate in many different spatial modes. The first mode has nodes only at the fixed ends. This mode has the longest wavelength and the lowest frequency. The second mode has three nodes, and each successive mode adds one more node. These one-dimensional modes are analogous to the longitudinal modes of a laser. For a two-dimensional mode, one can think of the surface of a drum, where the circumference of the vibrating material is a node.
A laser has to have the ability to confine light and to amplify light. One way to confine the light is with two plane, mutually parallel mirrors. Light that bounces back and forth between the mirrors forms a standing wave that satisfies the boundary conditions. If we take the z-axis to be perpendicular to the two mirrors, and place the two mirrors at z=0 and z=L respectively, the spatially varying part of the electric field of the standing wave can be written as E(z,t)=E0sin(kz)cos(wt), where k is the wave number and w is the angular frequency. An integer number of half wavelengths must fit into the distance L. Therefore, L=qλ/2, where q is an integer. Because c=fλ, the possible frequencies can be written as f=qc/2L. A discrete set of allowed frequencies is separated by Δf=c/2L. The graph below shows a schematic curve of the emission of the lasing transition. Besides being confined, the light also has to be amplified. Electromagnetic waves can be created from the oscillating dipole moments in atoms when the electrons of the atoms are in a mixture of two appropriate atomic states. The energy difference between the two atomic states determines the frequency of the oscillation of the dipole moment: ΔE=hf. Typically, observed atomic dipole transitions have transition rates on the order of 10^7 to 10^9 Hz. Laser emission involves a process of stimulated emission, in which atoms are stimulated by some of the light that is present to emit their energy in phase with the initial radiation. This process amplifies an incident beam of light.
The Gauss-Hermite modes and the Gauss-Laguerre modes are the most common transverse modes. In these modes, both electric and magnetic fields are transverse to the direction of propagation. These modes are generally denoted TEM_mnq, where q is the longitudinal number of half wavelengths, and m and n are the integer numbers of transverse nodal lines in the x- and y-directions across the beam. Generally, for each of the longitudinal modes q, there is an infinite number of transverse modes (m,n), with the frequency increasing as the complexity of the modes increases. For the Gauss-Hermite modes, the transverse intensity can be written as:
I_mn(x,y,z) = I_0 [H_m(√2 x / w(z))]^2 [H_n(√2 y / w(z))]^2 exp(−2(x^2 + y^2) / w^2(z))
where x and y are the transverse coordinates and H_m and H_n are the Hermite polynomials. When z=0, the phase fronts are planar in the resonator. Also, w(z) then has its minimum value, which is called the "beam waist". In addition, note that the intensity expression involves the product of Hermite polynomials and a Gaussian function. When x^2 + y^2 = w^2(z), the Gaussian factor falls to e^(−2) of its on-axis value, which is why w(z) is taken as the transverse size of the beam.
The Gauss-Laguerre modes involve both the Hermite polynomials for x and y and the Laguerre polynomials for r and θ, the polar coordinates in the transverse plane. The patterns are circularly symmetric.
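To see what these patterns look like, the Gauss-Hermite intensity can be evaluated numerically. The following Python/NumPy sketch is my addition to the report; it uses the physicists' Hermite polynomials from NumPy to compute a TEM_mn transverse intensity on a grid (an overall constant is ignored).

import numpy as np
from numpy.polynomial.hermite import hermval

def tem_intensity(m, n, x, y, w):
    cm = np.zeros(m + 1); cm[m] = 1.0       # coefficients selecting H_m
    cn = np.zeros(n + 1); cn[n] = 1.0       # coefficients selecting H_n
    hx = hermval(np.sqrt(2) * x / w, cm)
    hy = hermval(np.sqrt(2) * y / w, cn)
    return (hx * hy) ** 2 * np.exp(-2 * (x**2 + y**2) / w**2)

w = 1.0
x = np.linspace(-2.5, 2.5, 201)
X, Y = np.meshgrid(x, x)
I = tem_intensity(2, 1, X, Y, w)   # TEM_21: two nodal lines in x, one in y
# I can be displayed with matplotlib's imshow to compare with the camera images.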
The apparatus for this experiment includes two lasers: one is the white Metrologic neon laser (a commercial laser), and the other is an open-cavity laser. There are also two Fabry-Perot interferometers. One is an old brass one; the other is sealed inside a silver tube and mounted in a black disk. A CCD camera is also used to view the output and is connected to the computer.
I first used this apparatus to study the longitudinal modes of a laser. First, I used the old-style Fabry-Perot interferometer. I aligned the interferometer with the laser and the two mirrors, adjusting the orientation of the cavity so that the laser beam is normal to the mirrors and the reflected dots on the wall fall on top of each other. Then I inserted a diverging lens between the laser and the cavity and saw the ring pattern that is created in transmission. When I slightly tuned the adjustable mirror of the Fabry-Perot interferometer, circular rings showed up on the wall. When the distance between the Fabry-Perot mirrors increases or decreases by half of the laser wavelength, the rings decrease or increase in size by one ring spacing. I also noticed that when I first turned on the laser, the rings on the wall changed their spacing even when I did not turn the adjustable mirror. This is because the laser heats up internally and the cavity expands with the rising temperature.
Nest, I used the scanning Fabry-Perot Spectrum Analyzer to display the optical spectrum of the commercial laser. This analyzer has the same function as the old Fabry-Perot interferometer that I just used, except it has a detector placed at the center of the transmitted pattern and has curved mirror so that the light is almost always limited to the fundamental transverse spatial mode. A voltage supply controls and changes the voltage on a piezoelectric crystal mounted on one of the Analyzer mirrors. The crystal changes its length in proportion to the voltage, and thereby scanning the separation between the two mirrors of the interferometers. Display the output of the commercial laser beam, and adjust the tilt of the analyzer, I saw some sharp peaks displayed on the first channel in the oscilloscope. The sharp peaks correspond to the spectral range and the smaller peaks inside the sharp peaks correspond to the longitudinal modes. Channel2 corresponds to the ramp voltage of the crystal. I measured the space between two spectral ranges and the distance between the two longitudinal modes. Knowing that the Fabry-Perot has a free spectral range (FSR) of about 7.5 GHz, I calculated the spacing between frequency, Δf, of the longitudinal modes. Therefore, I calculated the actual distance between the mirrors of the commercial lasers to be 2.1cm. I also did this for the open cavity HeNe laser, where I found the distance between the two mirrors of the open cavity laser is 0.26cm.
Then, I inserted a polarizer between the laser and the Fabry-Perot Spectrum Analyzer. By rotating the polarizer, I expected to see some peaks corresponding to the transverse peaks to vanish at certain angles because some peaks displayed on the oscilloscope correspond to the transverse waves polarizing in y direction and some corresponds to the transverse waves polarizing in the x direction. Unfortunately, I did not see the peaks vanishing with I rotated the polarizer.
To study the transverse modes of the laser, I placed a camera to capture different patterns created by the open-cavity laser. By slightly tuning the adjustable mirror, I got different patterns recorded by camera. Some of the images are shown below:
For an ideal laser, the separation between the two spots is constant. But in reality, most lasers have some small divergence. In order to calculate the divergence of the open-cavity laser, I measured the separation between the center of the two spots, as well as the distance from the output of the laser to the camera. I can see that as the distance from the output end mirror of the laser increase, the separation of the bright points in a transverse mode also increases. The two has a linear relationship. When the line hits x-axis (when the distance from the output of the laser to the camera is 0), the separation of bright points in a transverse mode is 0.1113cm. Therefore, the beam waist of my laser is 0.1113cm.
In this lab, I explored Fourier transform using an optical system.
Theoretically, an object shows up as its Fourier transform in the far field. It also shows up as its Fourier transform if it passes through a normal lens.
The formal Fourier transform could be expressed as:
The first equation tells the time dependent of function f(t) as a continuous sum of sine waves. It tells what frequency is needed to produce a signal; the second equation tells how much each sine wave is needed in the sum. It tells how to find the weighting function g(w) for a given time dependent signal. In quantum mechanics, Fourier transform is used to switch between position and momentum space.
The experimental setup, as shown below, involves a Neon-Helium laser, three lenses and a camera connected to a computer program called uEye. The laser has a “spatial filter” on front. A spatial filter is a kind of optical processor that eliminates high frequency noise on the leaser beam. Therefore, a uniform diverging beam comes out of the laser. The light then hits a collimating lens. The light that comes out of the collimating lens is then parallel. There are two lenses. The first lens takes the Fourier transform of the object. The second lens takes an inverse Fourier transform of the original object. We are essentially interested in three planes: object plane, Fourier Plane and Image plane. Object plane shows the original image. Fourier Plane shows the Fourier transformation of the original plane. Image plane transforms shows the Fourier transform of the image on Fourier plane, which transforms the image back to the original but in the up-side-down direction. If a filter is placed after the Fourier plane, the image plane will show the Fourier transform of the image after the filter. In other words, all frequencies which make up the image of the object separate in the Fourier plane. By filtering out certain frequencies, we could remove some components of the object which cannot be easily removed by blocking parts of the object itself.
Firstly, I put nothing to be my object. Theoretically, the input is a uniform brightness beam, which is a constant. The Fourier transform tells that the output should be a delta function. In fact, what I saw was a bright dot in the middle, with diminishing size dots going towards each sides. This is because I did not have a strictly uniform beam with a big enough diameter; therefore, my input is in fact a rec function. And the output of a rec function is a sinc function, which is the Fourier transform of a rec function.
Then, I used grid as my object. A grid is the superposition of vertical and horizontal lines, which form rectangular apertures. Image that I got at the Fourier plane is the superposition of vertical and horizontal sinc function. This is what I expected because according to the previous experiment, the Fourier transform of vertical or horizontal stripes should be sinc function in vertical or horizontal direction. By the Fourier transformation principal:
If h(x,y)=f(x,y) +g(x,y), where f and g are horizontal stripes and vertical strips;
The Fourier transform is just the superposition of the Fourier transform of vertical stripes and the Fourier transform of the horizontal stripes.
Knowing the Fourier transform of the grid, I could use filter to eliminate either the vertical strips or horizontal stripes. To eliminate the vertical strips and keep the horizontal strips, I had to block out the horizontal component of the Fourier transform. In order to do so, I used the vertical slit so that only vertical component could pass through after the Fourier plane. To eliminate the horizontal strips and keep the vertical strips, I had to block out the vertical components of the Fourier transform. In order to do so, I used the horizontal slit placed after the Fourier plane so that only horizontal component could pass through. Below is the vertical and component strips that I got at the image plane:
Vertical strips (only):
Horizontal strips (only):
The next thing that I did was to put some “text” on the grid as my object. My image on the object plane consists text overlapping slanted lines:
I wanted to eliminate the slanted lines, so that I adjusted the slanted lines so that they are oriented vertically. What I saw at the Fourier plane is:
The horizontal component shown up in the Fourier plane is caused by the vertical slanted lines in the object plane. In order to get rid of it, I placed a vertical slit after the Fourier plane so that the horizontal component could not pass through. This is what I got in the image plane:
The last experiment that I did was to put a fingerprint on a glass as my object. The purpose of this experiment is to get a clear image of the fingerprint. To make a periodic function with sharp edges, the series has to go to very high frequencies because the sharp edges cannot be achieved without rapidly varying intensities. The high frequencies contribute the most to the corner of an object. The frequencies which contribute more to the middle of the pulse are longer wavelength/ shorter frequencies. Without a filter, there is no apparent features showing up in the image plane because it was too bright. To get the edge of a aperture, I put a constant block after the Fourier plane. This filter only allows frequencies above certain threshold to pass through. The high frequencies which make up the edges lining passed through the filter. Below is what I got in the image plane:
In quantum mechanics, spin is an intrinsic property of electrons. An electron can either “spin up” or “spin down” depends on its energy level. When apply an external magnetic field, the spin of an electron interacts with an external magnetic field and gives up energy to or takes energy from its surroundings. This phenomenon is called Electron Spin Resonance (ESR). ESR is useful for the electrostatic and magnetic studies of complicated molecules.
There are two energy levels of an electron, differing by ΔE=2μB, where μ is the magnetic moment of the electron, and B is the external magnetic field. When one provides photons of energy E=hf, there will be a resonance when hf=2μB. When the resonance happens, a quantum mechanical spin ½ particle interacts with magnetic field and makes a transition from one state to another, corresponding to a photon of a specific frequency emission or absorption. Therefore, by setting different frequency f, one could measure the value of B that produces resonance.If we plot B on y-axis and f on x-axis, the slope of the plot will therefore be B/f=h/2μ, which shows the magnetic moment of the electron. A magnetic moment of dipole moment has an energy interaction of E=-μB=-μBzcosθ. The lower energy bound to the energy is E=-μBz for θ=0. This corresponds to when the moment is aligned with the field. The upper energy bound to the energy is E=μBz for θ=π. This corresponds to when the moment is aligned against the field. By studying the frequency of the photon as well as the energy difference between the two energy states of electron, we can investigate the technique of electron spin resonance and learn about the physics of spin.
The picture below shows the setup of the experiment. There are an oscilloscope, a 1 Ohm resistor, Helmholtz coils, the sample, a ESR adaptor box, one 120V transformer and a frequency counter. The frequency counter controls the frequency of photons produced by the coil. The 120V transformer sends current into the Helmoholz coil, which provides an external magnetic field to the sample. The ESR adapter box has +12V and -12V supplied to it by the oscillator. The unpaired electron used is a molecule called disphenylpierylhydrazil, abbreviated as DPPH. One of the electrons in this molecule behaves like an isolated electron. This sample was placed in a coil that acted as an inductor in an LRC resonance circuit. The coil was plugged into an oscillator that contained the rest of the resonance circuit. The oscilloscope displays the electric signal from both ESR and the field. When the resonance happens, paired electric signal peaks from the channel of ESR adapter could be detected.
There are two experiments in total. The first experiment is called The Resonant Circuit. The goal of this experiment is to investigate the sharpness of this resonance circuit as a function of frequency. With the spectrometer set-up, I added a passive LC circuit. Then, I display the output of the passive circuit on the oscilloscope. I found that the induced current has nothing to do with the two coils. Even if I removed the coils, I could still see the induced voltage. But if I turn the power of LRC down, there were no induced voltage anymore. The signal is due to the change of current in LRC circuit. When I adjusted the variable capacitor on the passive circuit, the voltages gets higher and then lower. This is because the maximum magnitude of voltage displayed at the point when the frequency of an LRC circuit matches with the passive LC circuit. This is how a transformer works. If the input to a transformer is V0sin(wt+theta). V0 is changed by transformer but w is not.
The second experiment is the ESR experiment. A magnetic field B is produced by a set of Helmholtz coils. The magnetic field B determines the energy difference, and in turn, determines the frequency of photons, which could provide the resonance, i.e. to flip spins from E=-μB to E=μB. There are a variety of ways to understand how the sample leads to an observed electrical signal. From one point of view, if the sample absorbs the magnetic energy created by the current in the coil, the inductance L of the coil is changed. The voltage across the coil changes when L changes. From another point of view, the alternating current in the coil is producing an ac magnetic field in the inductor. This alternating field can be thought of as a photon field. When the resonance condition is satisfied, the spins will absorb the photons and go from E=-μB to E=μB. They will also use these photons for stimulated emission and go from μB to –μB. Thus, the electron spins are constantly changing states. Since they behave like magnetic dipoles, they give rise to an oscillating magnetic field of their own and this induces a current in the coil by Faraday’s Law.
Again, the resonance condition is when 2μB=hf. To determine the magnetic dipole moment, we fixed the external frequency, changed the external magnetic field and noted where in the cycle the ESR signal occurs. Knowing that the Helmholtz pair of coils have 320 turns, the radius of the coils are 6.8cm, the distance from the center of the loop along an axis perpendicular to the plan of loop of N turns is 3.4cm, one can write the magnetic field B=(4.269*10^(-3) tesla/ampere)I, or B=(4.269*10^(-3) tesla/ampere)*(V/R). The resonance condition tells us that B=hf/2μ. Therefore, we can equate the two equations for B and calculate the magnetic moment of electron:
μ=(h*R)/[2*4.269*10^(-3)*V/f], where h is the Planck’s constant, R is the resistance of the resistor.
The slope of the plot is V/f. Plot everything into the expression for magnetic moment, we get:
μ=(h6.626*10^(-34)*1.04)/[2*4.269*10^(-3)*8/6462*10^(-9)]=(9.335±1.63)*10^(-24). The accepted value is 9.284 763 77(23)*10^(-24). Therefore, my experimental value agrees with the theoretical value within standard error.
Overall, in this experiment, by studying the frequency of the photon as well as the energy difference between the two energy states of electron, we can investigate the technique of electron spin resonance and learn about the physics of spin.
This time, the drink that I make for you is called e/k ratio and the band gap of a semiconductor.
Semiconductor is an artificial substance, which has more number of free electrons than insulator and less free electrons than conductor. When Silicon is converted to semiconductor, it is mixed with some other elements. The process of mixing impurity in Silicon is called doping. When a doped semiconductor contains excess hole, it is called “p-type”; when it contains excess free electrons, it is known as “n-type”. A single semiconductor crystal can have multiple p- and n-type regions. The p-n junctions between these regions have many useful electronic properties. In this experiment, a n-p-n type Silicon transistor TIP3055 was used to measure the current-voltage relationship of the p-n junction at different temperatures. The current-voltage relationship of a p-n junction is measured at different temperatures, permitting a determination of the ratio of e, which is the magnitude of the charge of an electron, to k, which is Boltzmann’s constant.
The theory of the experiment depends on the equation: I = Io[exp(eV/kT)-1]. In the case when eV»kT, the equation can be reduced to I = Io*exp(eV/kT). The parameters Io and e/kT can then be determined from a measurement of current versus voltage.
Below is the picture of the circuits connection:
The circuits include a negative dc voltage source using a 1kΩ and a 1.5V battery. The currents is measured with a picoammeter. The voltage is measured with a voltmeter and temperature is measure by a digital thermometer. E, C and B refer to emitter, collector and base. Below is the picture of the real setup:
In the actual experiment, I kept track of the current and voltage under 5 different temperatures, which include room temperature, water/ice mixture, boiling water, dry ice/ isopropyl alcohol and liquid nitrogen. In order to find the e/k ratio, I took the natural log of the current, and plotted it against voltage. The slope of the plot then corresponds to (e/kT), and the intercept of the plot corresponds to the natural log of Io. Below is the plot I get for dry ice:
For example, in this case, I calculated the e/k ratio by multiplying the slope(60.086) with the temperature(199K). The estimated e/k is therefore 60.086*199=11957.114. Under the other four temperature, the estimated e/k are 11323.728 for room temperature, 11397.75 for water/ice mixer, 11397.75 for boiling water, and 11853.54 for liquid nitrogen. The mean value for e/k for the five measurements are 11585.9764. This roughly agrees with the standard value, 11604.51. (e = 1.602 176 487 (40) *10^(-19) and k = 1.380 6504(24) *10^(-23). So I concluded that my experimental value agrees with the theoretical value.
To find the transistor band gap, I plotted ln(I0) versus 1/T:
The slope of the plot is is -e_gap/k and has a value of (-1.582 +- 0.0025) *10^(4). Therefore, the experimental value of energy bank gap is 1.357 +-0.02 eV. The accepted value is 1.11eV to 1.13eV. The experimental value is different from the theoretical value because the omitted temperature dependence factor.
The second piece of drink that I will make for you is called Alpha Spectroscopy. The first drink that I made in nuclear physics lab!
The purpose of this experiment is to study the detection of charged-particle radiation and the attenuation of such radiation during interaction with matters. Specifically, this experiment studies alpha particles which are helium-4 nuclei that contains 4 protons and two neutrons. Alpha particles produced from nuclear decays generally have energies between 4 and 10 MeV, and can only travel a few centimeters in air at atmosphere pressure. Therefore, this experiment needs to be done by varying degrees of vacuum. There are mainly two things that we care about the alpha particle: (1) range in which the total distance of an alpha particle travels in air which depends on the rate at which it loses energy in the medium; (2) variation in energy transfer to the medium per unit distance.
The setup of this experiment contains a surface barrier detector, an oscilloscope, a delay line amplifier, a Hastings Model 760 vacuum gauge, a multichannel analyzer(MCA), and Am-241 alpha source. The Am-241 alpha source, together with the surface barrier detector, are placed in the stainless steel vacuum chamber. In order to vary the degrees of vacuum, I need to evacuate the vacuum chamber by different extent. There are three valves along the chamber, and the chamber is open when the valve is aligned with the chamber. When evacuating the chamber, I closed all three valves at the beginning, turn on the pump, and successively opened the first and the second valve next to the pump, but always kept the third valve closed which connects to the air. When I want to raise the pressure in the chamber, I kept the the first and the second valve closed, and open the third valve until the pressure raised up to the number that I wanted. The MCA, just like what it does in the Poisson experiment, acquires the data that I need. The pulse hight is an indication of the energy deposited in the detector. The energy is displayed along the horizontal axis and the frequency of occurrence of a pulse height is displayed along the vertical axis. The frequency (number of counts in each energy channel) can be increased by acquiring data for a longer time interval. Below is the picture of the setup:
There are in total 4 experiments. The first experiment studies the Am-241 spectrum and the using of the MCA. In this experiment, I collected data for 300 seconds with 8192 bins. The data that I collected are (1)channel number with the highest peak (2) number of counts at the peak (3) the full width at the half maximum and (4) total number of particles represented in the spectrum. In this 5-minute run, I saw the three peaks in the spectrum. They are not very well resolved and each individual peak is a bit hard to be distinguished.
The second experiment studies the surface barrier detector operations. The purpose of this experiment is to determine the effect of the bias voltage on the operation of the surface barrier detector. During the experiment, I set the bias supply voltage to be 25V, 60V, 95V and 125V, and recorded the location of the largest peak, number of counts for the largest peak, full width at the half maximum and the total number of alpha particles represented. I found that the channel number of the peak height, the total number of alpha particles as well as the total number of alpha particles increase as the bias voltage increases. But they all only increase in a small extent. The full width at the half maximum decreases by a great extent as the bias voltage increases. A small FWHM gives a good resolution. The manufacturer recommends operation of the detector at 125V. This makes sense because the higher bias voltage we have, the higher channel of the peak height, the smaller full width at the half maximum and the bigger total counts of alpha particles we have. This can make the peak better resolved.
The third experiment studies data acquisition techniques. The purpose of this experiment is to become familiar with apparatus, to perform energy calibration, and to investigate the resolution of the apparatus to perform a detailed analysis of the spectrum. In the experiment I set the bias voltage to be 125V, as suggested from the result of the second experiment. I collected data for 30,000 seconds, which takes an overnight run, with 8192 channels. The spectrum that I got has three peaks. I fitted three Gaussian functions in there to describe the three peaks. The problem was that even though the dominant high energy peak can be characterized fairly accurately, each of the lower energy peaks are superimposed on there large peaks on the right. Therefore, the best way is to simultaneously fit three Gaussians at once. This takes multiple tires and adjustment. Luckily, I successfully fitted in three Gaussian functions at last. Both of the amplitude and the mean of the best fits that I got are less than what I would determine by simply looking at the data. You can see the graph as below:
The last experiment studies the energy loss of charged particles. The purpose of this experiment is to investigate the process that alpha particles are easily stopped by air, and to determine all the appropriate properties of the spectrum as a function of air pressure. The alphas from the Am-241 source are in the 5MeV range. When these massive, high-energy particle hit oxygen and nitrogen molecules in the air, they will continue on large undeflected, at least until they have lost most of their energy. In this experiment, I collected the pressure and the channel number. The channel number is related with energy of an alpha particle at a specific pressure. I plotted the energy versus pressure and found that as pressure goes up, the energy goes down. The plot of several pressures are shown below:
This is what I would expect because when there is no air in the chamber, alpha particles go straightly through the chamber, without losing any kinetic energy. When there is air in the chamber, the alpha particles collides with the air molecules. So the mean of the energy moves leftwards on the plot. In addition, since the collision of the loss of energy is random, the plot gets a larger spread. You can see the energy vs. pressure graph as shown below:
Taking the derivative of the plot above, I got the plot for the stopping power of alpha particle:
The first cocktail that I will make for you is called Poisson Statistics in Radioactive Decay.
To make this cocktail, I used the detected gamma ray emitted by one microcurie Cobalt-60 to study Poisson statistics, Gaussian statistics and Binomial statistics. Before actually making the drink, let me give you a general introduction of the mixers that we use, which are three kinds of statistical distribution and nuclear physics.
Poisson distribution is the first mixer. It is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since last event. In physics, Poisson statistics have been used to study phenomenas such as the the shot noise in electric circuits, charge hopping from one site to another in a solid conduction, etc. It has also been used to social science and all branches of natural sciences. For example, the birth defects, rare desease, traffic flow, etc.
Gaussian distribution is a continuous probability distribution that describes the distribution of real-valued random variables that are distributed around some mean value. Physical quantities that are expected to be the sum of many independent processes, for example, measurement errors, often follows Gaussian distribution.
Binomial distribution is used when there are exactly two mutually exclusive outcomes of a trial, “success” and “failure”. In our experiment, a success corresponds to one detected gamma ray; a failure corresponds to nothing is detected.
If the three statistical distributions are the juice mixers for the drink, nuclear decay must be the spirit. It is because the randomness so encountered is an intrinsic property of the quantum process underlying nuclear decay, and the decay of each individual atom is completely independent of the fate of other atoms, we can use Cobalt-60 as our source and the detected gamma rays emitted from the source as our data.
Keeping in mind of the ingredients of the drink, I will now take you to make the drink. The setup that I use to make the drink includes a scintillation, a photo multiplier tube, a preamplifier, a voltage supply, a delay line amplifier, and a multichannel analyzer (MCA). You can see the picture as below:
When making the drink, I place the Cobalt-60 at one end of the scintillator. Starting at time t=0, the MCA counts the decays from Cobalt-60 in successive intervals of length T (dwell time) that I set. In other words, the MCA establishes bins with 0 < t < T, and T < t < 2T, and so on, and it counts the number of decays in each bins.
To study the three distributions, I measured the number of counts detected when T is set to be 0.00001s, 0.0009s, 0.005s, 0.05s, 5s, and 50s, corresponding to the mean of counts ranging from (2.00 ± 140) × 10−4 and (4.37 ± 0.0002) × 104.
When dwell time is 0.0009s, 0.005s, and 0.05s, I compared the experimental data with the theoretical values. In all of the three cases, the experimental data fits the theoretical values within the range of the error bar. For example, the graph below shows the comparison between theoretical value and experimental data of Poisson distribution when mean of counts is 4.50 ± 2.12. Therefore, I concludes that my experimental values of mean equals (7.80 ± 8.8) × 10−1, 4.50±2.12 and (4.50 ± 0.67) × 10 are well distributed according to Poisson distribution.
When the mean of Poisson distribution gets large, it approaches Gaussian distribution. For my experimental data of means equal (4.50±0.70)×103 and 4.40±0.21)×104 , I compared it with the theoretical value of Gaussian distribution. For example, the graph of the comparison between experimental data and theoretical value of both Poisson distribution and Gaussian distribution when mean counts is (4.50 ± 0.70) × 103 is shown below. In both cases, I can see that the experimental data fit the theoretical value within range of the error bar. When the mean is 4.5 × 103, there is a discrepancy between the Poisson distribution and Gaussian distribution. When the mean gets to 4.4 × 10^4 , the theoretical curve of Poisson and Gaussian distribution merge to a single line. The “Poisson noise” disappears. Therefore, we can see that Gaussian distribution is a special case of Poisson distribution when mean gets very big.
When there is only 1 count detected, or no counts detected at all in all trials, the distribution goes to Binomial distribution. For my experimental data of mean equals (2.00±140)×10−4, I plotted the experimental data and compared it with the logarithm of the theoretical value of Binomial distribution. As shown in the graph below, the experimental data first the theoretical value very well. So we can say that when dwell time is 0.00001s, the distribution follows the definition of Binomial distribution, where we only get 0 and 1 counts.
Overall, we can see that my data with different means agrees with theoretical Poisson distribution, Gaussian distribution and Binomial distribution within statistical error or my experimental data. Therefore, we can conclude that the radioactive decay of Cobalt-60 is a Poisson process. In addition, Poisson distribution is a limiting case of Binomial distribution, where the number of trials is large, and the probability of getting a success in any given one trial is small; when the mean is large, Poisson distribution approaches Gaussian distribution.
Hi friend, welcome to cocktail physics! I am Elisa, a junior physics major at Bryn Mawr College. From today on, I will be your bartender at Cocktail Physics. Every week, I will share with you some physics fun that I have inside and outside my classes. The cocktail will be made of a wide variety of ingredients, ranging from condensed matter physics to electromagnetic radiation and optics, from non-linear dynamics and chaos to chemical physics. It will be imported from the experiments that I do in the lab, physics articles that I read, and videos and cartoons that I watch. I hope you will enjoy the cocktails that I make.
Let’s have fun together with Cocktail Physics! | http://cocktailphysics.tumblr.com/ | 13 |
73 | Inertia is the resistance an object has to a change in its state of motion. The principle of inertia is one of the fundamental principles of classical physics which are used to describe the motion of matter and how it is affected by applied forces. Sir Isaac Newton defined inertia in Definition 3 of his Philosophiæ Naturalis Principia Mathematica, which states:
The vis insita, or innate force of matter is a power of resisting, by which every body, as much as in it lies, endeavors to preserve in its present state, whether it be of rest, or of moving uniformly forward in a right line.
In common usage, however, people may also use the term "inertia" to refer to an object's "amount of resistance to change in velocity" (which is quantified by its mass), and sometimes its momentum, depending on context (e.g. "this object has a lot of inertia"). The term "inertia" is more properly understood as a shorthand for "the principle of inertia as described by Newton in Newton's First Law of Motion which, expressed simply, says: "An object that is not subject to any outside forces moves at a constant velocity, covering equal distances in equal times along a straight-line path." In even simpler terms, inertia means "A body in motion tends to remain in motion, a body at rest tends to remain at rest." On the surface of the Earth the nature of inertia is often masked by the effects of friction which brings moving objects to rest relatively quickly unless they are coasting on wheels, well lubricated or perhaps falling or going downhill, being accelerated by gravity. This is what misled classical theorists such as Aristotle who believed objects moved only so long as force was being applied to them.
History and development of the concept
Early understanding of motion
Prior to the Renaissance in the 15th century, the generally accepted theory of motion in Western philosophy was that proposed by Aristotle (around 335 BC to 322 BC), which stated that in the absence of an external motive power, all objects (on earth) would naturally come to rest in a state of no movement, and that moving objects only continue to move so long as there is a power inducing them to do so. Aristotle explained the continued motion of projectiles, which are separated from their projector, by the action of the surrounding medium which continues to move the projectile in some way. As a consequence, Aristotle concluded that such violent motion in a void was impossible for there would be nothing there to keep the body in motion against the resistance of its own gravity. Then in a statement regarded by Newton as expressing his Principia's first law of motion, Aristotle continued by asserting that a body in (non-violent) motion in a void would continue moving forever if externally unimpeded:
- No one could say why a thing once set in motion should stop anywhere; for why should it stop here rather than here? So that a thing will either be at rest or must be moved ad infinitum, unless something more powerful gets in its way.
Despite its remarkable success and general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over the nearly 2 millennia of its reign. For example, Lucretius (following, presumably, Epicurus) clearly stated that the 'default state' of matter was motion, not stasis. In the 6th century, John Philoponus criticized Aristotle's view, noting the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of the surrounding medium but by some property implanted in the object when it was set in motion. This was not the modern concept of inertia, for there was still the need for a power to keep a body in motion. This view was strongly opposed by Averroes and many scholastic philosophers who supported Aristotle. However this view did not go unchallenged in the Islamic world, where Philoponus did have several supporters.
Mozi (Chinese: 墨子; pinyin: Mòzǐ; ca. 470 BCE–ca. 390 BCE), a philosopher who lived in China during the Hundred Schools of Thought period (early Warring States Period), composed or collected his thought in the book Mozi, which contains the following sentence: 'The cessation of motion is due to the opposing force ... If there is no opposing force ... the motion will never stop. This is as true as that an ox is not a horse.' which, according to Joseph Needham, is a precursor to Newton's first law of motion.
- Main article: Islamic science - Mechanics
Several Muslim scientists from the medieval Islamic world wrote Arabic treatises on theories of motion. In the early 11th century, the Islamic scientist Ibn al-Haytham (Arabic:ابن الهيثم) (Latinized as Alhacen) hypothesized that an object will move perpetually unless a force causes it to stop or change direction. Alhacen's model of motion thus bears resemblance to the law of inertia (now known as Newton's first law of motion) later stated by Galileo Galilei in the 16th century.
Alhacen's contemporary, the Persian scientist Ibn Sina (Latinized as Avicenna), developed an elaborate theory of motion, in which he made a distinction between the inclination and force of a projectile, and concluded that motion was a result of an inclination (mayl) transferred to the projectile by the thrower, and that projectile motion in a vacuum would not cease. He viewed inclination as a permanent force whose effect is dissipated by external forces such as air resistance. Avicenna also referred to mayl to as being proportional to weight times velocity, which was similar to Newton's theory of momentum. Avicenna's concept of mayl was later used in Jean Buridan's theory of impetus.
Abū Rayhān al-Bīrūnī (973-1048) was the first physicist to realize that acceleration is connected with non-uniform motion. The first scientist to reject Aristotle's idea that a constant force produces uniform motion was the Arabic Muslim physicist and philosopher Hibat Allah Abu'l-Barakat al-Baghdaadi in the early 12th century. He was the first to argue that a force applied continuously produces acceleration, which is considered "the fundamental law of classical mechanics", and vaguely foreshadows Newton's second law of motion.
In the early 16th century, al-Birjandi, in his analysis on the Earth's rotation, developed a hypothesis similar to Galileo's notion of "circular inertia", which he described in the following observational test:
"The small or large rock will fall to the Earth along the path of a line that is perpendicular to the plane (sath) of the horizon; this is witnessed by experience (tajriba). And this perpendicular is away from the tangent point of the Earth’s sphere and the plane of the perceived (hissi) horizon. This point moves with the motion of the Earth and thus there will be no difference in place of fall of the two rocks."
Theory of impetus
- See also: Conatus
In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also maintained that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle.
Buridan's thought was followed up by his pupil Albert of Saxony (1316-1390) and the Oxford Calculators, who performed various experiments that further undermined the classical, Aristotelian view. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs.
Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
"…[Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path."
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.
The law of inertia states that it is the tendency of an object to resist a change in motion.The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the earth (and everything on it) was in fact never "at rest", but was actually in constant motion around the sun. Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle:
A body moving on a level surface will continue in the same direction at a constant speed unless disturbed.
It is also worth nothing that Galileo later went on to conclude that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Einstein to develop the theory of Special Relativity.
Galileo's concept of inertia would later come to be refined and codified by Isaac Newton as the first of his Laws of Motion (first published in Newton's work, Philosophiae Naturalis Principia Mathematica, in 1687):
Unless acted upon by an unbalanced force, an object will maintain a constant velocity.
Note that "velocity" in this context is defined as a vector, thus Newton's "constant velocity" implies both constant speed and constant direction (and also includes the case of zero speed, or no motion). Since initial publication, Newton's Laws of Motion (and by extension this first law) have come to form the basis for the almost universally accepted branch of physics now termed classical mechanics.
The actual term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1618-1621); however, the meaning of Kepler's term (which he derived from the Latin word for "idleness" or "laziness") was not quite the same as its modern interpretation. Kepler defined inertia only in terms of a resistance to movement, once again based on the presumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to these concepts as it is today.
Nevertheless, despite defining the concept so elegantly in his laws of motion, even Newton did not actually use the term "inertia" to refer to his First Law. In fact, Newton originally viewed the phenomenon he described in his First Law of Motion as being caused by "innate forces" inherent in matter, which resisted any acceleration. Given this perspective, and borrowing from Kepler, Newton actually attributed the term "inertia" to mean "the innate force possessed by an object which resists changes in motion"; thus Newton defined "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one which we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon described by Newton's First Law of Motion, and the two concepts are now basically equivalent.
Albert Einstein's theory of Special Relativity, as proposed in his 1905 paper, "On the Electrodynamics of Moving Bodies," was built on the understanding of inertia and inertial reference frames developed by Galileo and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained unchanged from Newton's original meaning (in fact the entire theory was based on Newton's definition of inertia). However, this resulted in a limitation inherent in Special Relativity that it could only apply when reference frames were inertial in nature (meaning when no acceleration was present). In an attempt to address this limitation, Einstein proceeded to develop his theory of General Relativity ("The Foundation of the General Theory of Relativity," 1916), which ultimately provided a unified theory for both inertial and noninertial (accelerated) reference frames. However, in order to accomplish this, in General Relativity Einstein found it necessary to redefine several fundamental aspects of the universe (such as gravity) in terms of a new concept of "curvature" of spacetime, instead of the more traditional system of forces understood by Newton.
As a result of this redefinition, Einstein also redefined the concept of "inertia" in terms of geodesic deviation instead, with some subtle but significant additional implications. The result of this is that according to General Relativity, when dealing with very large scales, the traditional Newtonian idea of "inertia" does not actually apply, and cannot necessarily be relied upon. Luckily, for sufficiently small regions of spacetime, the Special Theory can still be used, in which inertia still means the same (and works the same) as in the classical model. Towards the end of his life it seems as if Einstein had become convinced that space-time is a new form of aether, in some way serving as a reference frame for the property of inertia.
Another profound, perhaps the most well-known, conclusion of the theory of Special Relativity was that energy and mass are not separate things, but are, in fact, interchangeable. This new relationship, however, also carried with it new implications for the concept of inertia. The logical conclusion of Special Relativity was that if mass exhibits the principle of inertia, then inertia must also apply to energy as well. This theory, and subsequent experiments confirming some of its conclusions, have also served to radically expand the definition of inertia in some contexts to apply to a much wider context including energy as well as matter.
According to Isaac Asimov
According to Isaac Asimov in "Understanding Physics": "This tendency for motion (or for rest) to maintain itself steadily unless made to do otherwise by some interfering force can be viewed as a kind of "laziness," a kind of unwillingness to make a change. And indeed, Newton's first law of motion as Isaac Asimov goes on to explain, "Newton's laws of motion represent assumptions and definitions and are not subject to proof. In particular, the notion of 'inertia' is as much an assumption as Aristotle's notion of 'natural place.'...To be sure, the new relativistic view of the universe advanced by Einstein makes it plain that in some respects Newton's laws of motion are only approximations...At ordinary velocities and distance, however, the approximations are extremely good."
Mass and 'inertia'
Physics and mathematics appear to be less inclined to use the original concept of inertia as "a tendency to maintain momentum" and instead favor the mathematically useful definition of inertia as the measure of a body's resistance to changes in momentum or simply a body's inertial mass.
This was clear in the beginning of the 20th century, when the theory of relativity was not yet created. Mass, m, denoted something like amount of substance or quantity of matter. And at the same time mass was the quantitative measure of inertia of a body.
The mass of a body determines the momentum P of the body at given velocity v; it is a proportionality factor in the formula:
- P = mv
The factor m is referred to as inertial mass.
But mass as related to 'inertia' of a body can be defined also by the formula:
- F = ma
By this formula, the greater its mass, the less a body accelerates under given force. Masses m defined by the formula (1) and (2) are equal because the formula (2) is a consequence of the formula (1) if mass does not depend on time and speed. Thus, "mass is the quantitative or numerical measure of body’s inertia, that is of its resistance to being accelerated".
This meaning of a body's inertia therefore is altered from the original meaning as "a tendency to maintain momentum" to a description of the measure of how difficult it is to change the momentum of a body.
The only difference there appears to be between inertial mass and gravitational mass is the method used to determine them.
Gravitational mass is measured by comparing the force of gravity of an unknown mass to the force of gravity of a known mass. This is typically done with some sort of balance scale. The beauty of this method is that no matter where, or on what planet you are, the masses will always balance out because the gravitational acceleration on each object will be the same. This does break down near supermassive objects such as black holes and neutron stars due to the high gradient of the gravitational field around such objects.
Inertial mass is found by applying a known force to an unknown mass, measuring the acceleration, and applying Newton's Second Law, m = F/a. This gives an accurate value for mass, limited only by the accuracy of the measurements. When astronauts need to be weighed in outer space, they actually find their inertial mass in a special chair.
The interesting thing is that, physically, no difference has been found between gravitational and inertial mass. Many experiments have been performed to check the values and the experiments always agree to within the margin of error for the experiment. Einstein used the fact that gravitational and inertial mass were equal to begin his Theory of General Relativity in which he postulated that gravitational mass was the same as inertial mass, and that the acceleration of gravity is a result of a 'valley' or slope in the space-time continuum that masses 'fell down' much as pennies spiral around a hole in the common donation toy at a chain store.
In a location such as a steadily moving railway carriage, a dropped ball (as seen by an observer in the carriage) would behave as it would if it were dropped in a stationary carriage. The ball would simply descend vertically. It is possible to ignore the motion of the carriage by defining it as an inertial frame. In a moving but non-accelerating frame, the ball behaves normally because the train and its contents continue to move at a constant velocity. Before being dropped, the ball was traveling with the train at the same speed, and the ball's inertia ensured that it continued to move in the same speed and direction as the train, even while dropping. Note that, here, it is inertia which ensured that, not its mass.
In an inertial frame all the observers in uniform (non-accelerating) motion will observe the same laws of physics. However observers in another inertial frame can make a simple, and intuitively obvious, transformation (the Galilean transformation), to convert their observations. Thus, an observer from outside the moving train could deduce that the dropped ball within the carriage fell vertically downwards.
However, in frames which are experiencing acceleration (non-inertial frames), objects appear to be affected by fictitious forces. For example, if the railway carriage was accelerating, the ball would not fall vertically within the carriage but would appear to an observer to be deflected because the carriage and the ball would not be traveling at the same speed while the ball was falling. Other examples of fictitious forces occur in rotating frames such as the earth. For example, a missile at the North Pole could be aimed directly at a location and fired southwards. An observer would see it apparently deflected away from its target by a force (the Coriolis force) but in reality the southerly target has moved because earth has rotated while the missile is in flight. Because the earth is rotating, a useful inertial frame of reference is defined by the stars, which only move imperceptibly during most observations.
Another form of inertia is rotational inertia (→ moment of inertia), which refers to the fact that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum is unchanged, unless an external torque is applied; this is also called conservation of angular momentum. Rotational inertia often has hidden practical consequences.
- ↑ Isaac Newton, Mathematical Principles of Natural Philosophytranslated into English by Andrew Motte, First American Edition, New York, 1846, page 72.
- ↑ Pages 2 to 4, Section 1.1, "Skating", Chapter 1, "Things that Move", Louis Bloomfield, Professor of Physics at the University of Virginia, How Everything Works: Making Physics Out of the Ordinary, John Wiley & Sons (2007), hardcover, 720 pages, ISBN 978-0-471-74817-5
- ↑ Aristotle, Physics, 8.10, 267a1-21; Aristotle, Physics, trans. by R. P. Hardie and R. K. Gaye.
- ↑ Aristotle, Physics, 4.8, 214b29-215a24.
- ↑ Aristotle, Physics, 4.8, 215a19-22.
- ↑ Lucretius, On the Nature of Things (London: Penguin, 1988), pp, 60-65
- ↑ Richard Sorabji, Matter, Space, and Motion: Theories in Antiquity and their Sequel, (London: Duckworth, 1988), pp. 227-8; Stanford Encyclopedia of Philosophy: John Philoponus.
- ↑ Abdus Salam (1984), "Islam and Science". In C. H. Lai (1987), Ideals and Realities: Selected Essays of Abdus Salam, 2nd ed., World Scientific, Singapore, p. 179-213.
- ↑ 9.0 9.1 Fernando Espinoza (2005). "An analysis of the historical development of ideas about motion and its implications for teaching", Physics Education 40 (2), p. 141.
- ↑ A. Sayili (1987), "Ibn Sīnā and Buridan on the Motion of the Projectile", Annals of the New York Academy of Sciences 500 (1), p. 477–482:
"It was a permanent force whose effect got dissipated only as a result of external agents such as air resistance. He is apparently the first to conceive such a permanent type of impressed virtue for non-natural motion."
- ↑ A. Sayili (1987), "Ibn Sīnā and Buridan on the Motion of the Projectile", Annals of the New York Academy of Sciences 500 (1), p. 477–482:
"Thus he considered impetus as proportional to weight times velocity. In other words, his conception of impetus comes very close to the concept of momentum of Newtonian mechanics."
- ↑ O'Connor, John J; Edmund F. Robertson "Al-Biruni". MacTutor History of Mathematics archive.
- ↑ Pines, Shlomo (1970). "Abu'l-Barakāt al-Baghdādī , Hibat Allah". Dictionary of Scientific Biography 1. New York: Charles Scribner's Sons. 26-28. ISBN 0684101149.
(cf. Abel B. Franco (October 2003). "Avempace, Projectile Motion, and Impetus Theory", Journal of the History of Ideas 64 (4), p. 521-546 .)
- ↑ (Ragep 2001b, pp. 63-4)
- ↑ (Ragep 2001a, pp. 152-3)
- ↑ Jean Buridan: Quaestiones on Aristotle's Physics (quoted at http://brahms.phy.vanderbilt.edu/a203/impetus_theory.html)
- ↑ Giovanni Benedetti, selection from Speculationum, in Stillman Drake and I.E. Drabkin, Mechanics in Sixteenth Century Italy (The University of Wisconsin Press, 1969), p. 156.
- ↑ Nicholas Copernicus: The Revolutions of the Heavenly Spheres, 1543
- ↑ Galileo: Dialogue Concerning the Two Chief World Systems, 1631 (Wikipedia Article)
- ↑ Kostro, Ludwik; Einstein and the Ether Montreal, Apeiron (2000). ISBN 0-9683689-4-8
- Ragep, F. Jamil (2001a), "Tusi and Copernicus: The Earth's Motion in Context", Science in Context 14(1-2): 145–163
- Ragep, F. Jamil (2001b), "Freeing Astronomy from Philosophy: An Aspect of Islamic Influence on Science", Osiris, 2nd Series 16(Science in Theistic Contexts: Cognitive Dimensions): 49-64 & 66-71
Books and papers
- Butterfield, H (1957) The Origins of Modern Science ISBN 0-7135-0160-X
- Clement, J (1982) "Students' preconceptions in introductory mechanics", American Journal of Physics vol 50, pp66-71
- Crombie, A C (1959) Medieval and Early Modern Science, vol 2
- McCloskey, M (1983) "Intuitive physics", Scientific American, April, pp114-123
- McCloskey, M & Carmazza, A (1980) "Curvilinear motion in the absence of external forces: naïve beliefs about the motion of objects", Science vol 210, pp1139-1141
- Masreliez, C.J., Motion, Inertia and Special Relativity – a Novel Perspective, Physica Scripta, (dec 2006)
Template:Physics-footerar:عطالة ast:Inercia be:Інерцыя bs:Inercija bg:Инертност ca:Moment d'inèrcia cs:Setrvačnost da:Inerti de:Trägheit et:Inerts el:Αδράνειαko:관성 hr:Tromost io:Inerteso it:Inerzia he:עקרון ההתמדה la:Inertia lv:Inerce ms:Inersia nl:Traagheidno:Treghet nov:Inertiasimple:Inertia sk:Zotrvačnosť sl:Vztrajnost sr:Инерција sv:Tröghetuk:Інерція 09:09, 19 May 2008 (UTC)JamMan (talk)≈
There is no pharmaceutical or device industry support for this site and we need your viewer supported Donations | Editorial Board | Governance | Licensing | Disclaimers | Avoid Plagiarism | Policies | http://www.wikidoc.org/index.php/Inertia | 13 |
54 | Step Three: Determine the Baseline
Any benchmark or baseline should be expressed as a pollution-to-production ratio. It will also be used to determine the cost of the pollution per unit of product.
A baseline needs a relevant unit of product for each product that is manufactured with the chemicals being studied. The unit of product must be an accurate measure of a characteristic of the product. If a process is used for the same part at all times, then number of pieces will make a good unit of product. However, if the process works on several parts, then a more specific measure will be needed to determine units of product, such as surface area or weight.
Units of Measure
How much waste is produced per product? Identifying the correct means of measuring the performance of a manufacturing process is one of the most important steps in pollution prevention planning. The measurement accurately portrays what is happening in the process and provides meaningful data to use in the options analysis step. Pinpointing and solving problems would be difficult without measurement, as would be documenting the impact of pollution prevention. Feedback from measurement will also help in making decisions on facility policies, developing new technologies, and choosing additional pollution prevention options.
The unit of product must be carefully chosen. Generally, valid units of product are count (numbers of pieces), surface area (square feet), volume (cubic feet), etc. Examples of units that are not valid are sales and run time. The unit of product must relate directly to the product or service being measured. In addition, in order to obtain accurate data on the amount of pollution generated during a production run or during a measured time period, rejected product must be included in the calculation of the production volume. This is why sales are not a good indicator of production rate. Conversely, run time is not a good indicator of production because a machine or a process may be operating, but the product is not necessarily being produced nor is waste being generated. Sales underestimates production volume and run time overestimates it.
The Production Ratio
It is necessary to develop a basis of comparison for chemical waste generated in the production process over time. Simply comparing waste generated from year to year can be misleading if there was a significant change in the levels of production involving the chemical being targeted. Production ratio (PR) is used to normalize changes in production levels. It is calculated by dividing the production level for the reporting year by the production level for the previous year. Once a production ratio is determined, it is used as a factor when comparing target chemical waste generated between the two years.
A facility paints 1,600 parts in Year A. It paints 1,800 parts in Year B. The production ratio is:
1,800/1,600 = 1.13
Simply using the units to determine the PR may not give an accurate result if the parts are not identical. In that case, a more specific attribute must be used such as surface area, weight or other relevant measure.
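For anyone who wants to automate this bookkeeping, the calculation is a one-liner; the following minimal Python sketch (the function and variable names are invented here, not part of the guidance) uses the part counts from the example above.

```python
def production_ratio(reporting_year_units, prior_year_units):
    """Production ratio (PR) = reporting-year production / prior-year production."""
    return reporting_year_units / prior_year_units

# Figures from the example: 1,600 parts painted in Year A, 1,800 in Year B.
pr = production_ratio(1800, 1600)
print(pr)  # 1.125, which the text reports rounded to 1.13
```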
Production Ratio Example
Either during or after the team is organized, the performance of the current manufacturing processes must be determined. At a minimum, the processes that use or generate Toxic Release Inventory (TRI) chemicals are targeted for pollution prevention. This determination is critical for the team to calculate a baseline for future comparisons and must be done prior to options analysis. An important first step is to decide on accurate and relevant units of measurement for the processes involved. The next section provides more details on measuring waste and pollution generation.
Data Gathering for Current Operations
For each and every process that uses a chemical reportable on the TRI Form R, gather and verify information related to the chemical’s waste generation and releases. This information must be comprehensive in order to be as accurate and useful as possible. It should include information related to the product being manufactured, the process, the volume produced, and all associated costs.
There should be a description of the product(s) or service(s) related to the chemical being addressed. This may include information about desired quality and the reason why the product manufacturer requires the use of a TRI chemical. Customer input may be desired or required for specifications. Pollution prevention planning is a good way to question the design of a product and ask why the chemical is needed. Are there customer specifications or product quality issues that need to be considered? These will be factors when options are analyzed for pollution prevention.
In order to further pinpoint how and why a chemical waste is being generated, process information must be gathered. Data on the process should include a description of the major steps.
Finding out how employees are involved in the process is often helpful. This can include information on employee function, training and safety/health considerations. Also, obtain whatever documentation is available about the process such as vendor literature, chemical analysis, preventive maintenance schedules, equipment specifications, etc. Any or all of the information will be needed for the options analysis step that studies the alternatives for making the process more efficient, thus using less raw material or generating less waste or pollution.
Chemical Handling Data
Because waste can be generated as a result of transfers and spills, data should be gathered on how chemicals are stored, transferred, packaged and otherwise dispensed. These operations may be a part of the manufacturing process or they may be auxiliary operations that occur elsewhere in the facility.
During options analysis, in order to calculate the costs, savings and payback of any pollution prevention changes, cost data must be gathered on all operations that involve the TRI chemicals in question. Many hidden costs in the use of a chemical are buried in overhead or department charges. However, these numbers must be isolated and identified in order for the options analysis to be comprehensive.
Some costs to consider are those related to environmental compliance. This includes compliance issues such as analysis of waste, treatment of waste, license fees and the cost of disposal. As burdensome as these costs might be, they are only a fraction of the cost to manage TRI chemicals.
Many of these environmental compliance functions can be done externally or internally. If they are internal costs, remember to include the cost of the time it takes staff to perform these tasks.
Another cost is the purchase of the chemical. Add to this the cost to transport the chemical. This must include not only any external charges to get the chemical to the facility but the internal cost to transport it within the facility. Then add the cost to store the material, including the cost of the space it occupies.
Auxiliary costs to properly store and maintain the chemical must be included. Add any cost for temperature or humidity controls required for the chemicals storage and use. In addition, there might be costs to maintain the equipment that stores or transports the chemical, including preventive maintenance. Costs for risk management include the following: insurance to protect against losses caused by accidental release and injury; health and safety equipment and training requirements so employees can work with the chemical as safely as possible; and for some chemicals significant costs due to absenteeism caused by perceived or real health effects of the chemical.
From Example 3-1, toluene is used to thin the paint at one pound of toluene per gallon of paint. This toluene is released to the air as the paint dries. In Year A, 100 pounds of toluene was released in this way when the 1,600 parts were painted. So if Year A is the baseline year, the pollution to production ratio is 100 divided by 1,600 or 0.063 pound of toluene released per part painted. If the toluene costs a dollar per pound, the cost is 6.3 cents per part painted.
During Year A, tests were performed and it was discovered that paint quality did not deteriorate by using 0.80 pound of toluene per gallon of paint. This reduced use of toluene per gallon of paint released 90 pounds in Year B. The pollution/production ratio is 90 divided by 1,800 or 0.05 pound of toluene released per part painted, and the cost is 5 cents per part. So compared to the baseline year, this is a savings of 1.3 cents per part, or $23.40 for 1,800 parts.
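The same arithmetic can be scripted. This short Python sketch (the variable names are invented; the figures are the ones in the toluene example) reproduces the baseline and Year B comparison; note that the text rounds the baseline ratio to 0.063 before computing the $23.40 savings, while the unrounded figures give roughly $22.50.

```python
# Year A (baseline): 100 lb of toluene released while painting 1,600 parts.
# Year B: 90 lb released while painting 1,800 parts. Toluene costs $1.00 per lb.
cost_per_lb = 1.00

baseline_ratio = 100 / 1600   # 0.0625 lb released per part (reported as 0.063)
year_b_ratio = 90 / 1800      # 0.05 lb released per part

baseline_cost = baseline_ratio * cost_per_lb   # about 6.3 cents per part
year_b_cost = year_b_ratio * cost_per_lb       # 5 cents per part

savings_per_part = baseline_cost - year_b_cost
print(savings_per_part)         # ≈ 0.0125 dollars per part
print(savings_per_part * 1800)  # ≈ 22.5 dollars; rounding the per-part figure
                                # to 0.013 first gives the $23.40 quoted above
```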
Finally, intangible costs should be assessed and recorded by asking:
- Are there any community concerns?
- Are there employee health or safety concerns about using the chemical?
- Are there emergency response concerns regarding the use of the chemical?
- Does the chemical contribute to unpleasant production work areas (i.e. odors)?
- Are there product marketing disadvantages?
In order to obtain a baseline of the present situation, all this information must be gathered and be effectively organized. This can be done with charts, graphs, matrices, etc. Each facility will have a unique system to organize the data to fit its needs.
Production ratios and baselines must be determined for each process that generates the chemical being studied.
In addition to determining a baseline for measuring the cost of waste generation per unit of product, it is also essential to identify and document current and past pollution prevention efforts. Documentation of efforts will allow the pollution prevention team to avoid repeating work unnecessarily and also provides the groundwork for future feasibility studies if changes in technology or increasing costs of environmental management make yesterday’s discarded ideas more attractive today.
The next step is to sum all of the chemical waste generation data and divide it by the amount of production that generated that waste. The result of this operation is the amount of waste or pollution that is generated per unit of product.
Sources for Data Gathering of Waste and Pollution Information
Waste generated from production processes can assume a variety of forms. Most notable among these are air emissions, process wastewaters, hazardous waste and scrap. It is important to be aware of all forms of waste that are produced through manufacturing to ensure an accurate assessment of a production process. One good approach for gathering this information is to develop a material balance or process map for target chemicals to account for each waste stream that comes from the process. This can start with a sketch showing the flow of raw materials, products, wastes and releases involving the target chemical. Make sure to include streams for wastes that are recycled, treated or otherwise managed on-site.
A common engineering principle is that what goes into a system must come out in some form or another. By measuring the material inputs, the total outputs that must be accounted for can be identified and through process of elimination, the unknowns can be determined. In some cases, the data needed to fully measure the amount of each waste stream may not be available. In these cases, it becomes necessary to use engineering judgment and knowledge of the production process to develop reasonable estimates of how the system is operating. This occurs more often with water and air releases, particularly “fugitive” (non-stack) air releases.
The primary information source for waste shipped off-site, whether to be recycled, treated, or disposed, is the hazardous waste manifest. The manifest provides the type and quantities of hazardous wastes shipped. For mixed wastes or sludge that contains target chemicals, a useful tool for determining the fraction of the mixture that consists of the target chemical is to review the waste profile submitted to the off-site hazardous waste management firm when the waste stream was approved for acceptance. The waste management firm your facility is contracted with should supply, upon request, copies of the results of waste analysis that was performed when a shipment was received.
Information for scrap waste can be found on the bill of lading for each shipment. These are often used in place of the hazardous waste manifest for wastes such as scrap metals, scrap circuit boards or spent lead-acid batteries that are sent to a metals recycler. Similar to the hazardous waste manifest, the bill of lading will provide the type and quantities of scrap materials shipped. Product design specifications may be needed to help estimate the amount of the target chemical contained in the total waste shipped.
Wastewater Discharged to POTW
To discharge wastewater to a publicly-owned treatment works (POTW) generally requires an Industrial Discharge permit, which will include limits on the pollutant concentrations allowed in the wastewater discharge. Facilities are required to perform periodic sampling and analysis of their wastewater discharge to ensure compliance with the limits set. This information can also be used to estimate annual levels of a target chemical that is discharged to a POTW by using the concentration levels determined in sampling along with the cumulative volume of wastewater discharge from the facility. Some facilities perform in-house sampling and analysis on a more frequent basis than required by their permit. These results provide a good tool for estimating the volume of a target chemical that is discharged to a POTW.
Stack Air Emissions
Facilities that are required to hold air emissions permits should find that their permit application contains a great deal of information to help estimate a target chemical’s volume of releases through stack air emissions. Each manufacturing process that vents emissions through a stack is required to be thoroughly described in the air permit application, with information regarding the chemicals used, the throughput of the process and the emissions associated with the process. The calculations contained in an air permit application are performed on a basis for potential to emit, which assumes constant operation of the manufacturing process equipment and does not include emissions reductions due to pollution control equipment. Therefore, any use of air permit application data must include appropriate changes to reflect the actual operating conditions of the process.
Facilities that are not required to hold air emissions permits may estimate their stack air emissions using their knowledge of process conditions and materials balances. Quarterly or annual tests of stack emissions may be worthwhile to perform to provide data to compare to estimates.
Fugitive Air Emissions
Fugitive (non-stack) air emissions can be difficult to determine directly. They are commonly estimated through a materials balance with fugitive emissions representing the last remaining unknown after all other outputs have been directly measured or estimated. If a facility employs an industrial hygienist, he or she may have information on employee exposure levels that can also be used in estimating fugitive air emissions.
On-site Waste Management
There are several ways that wastes are managed on-site. Some wastes can be recycled, such as spent solvents or used oils and lubricants. Most facilities keep track of how many batches are processed by the recycling equipment or of the amount of regenerated material. Also track the amounts of solvents, used oils, or other flammable materials that are incinerated on-site. These should be identified in the air emissions permit application. Other wastes are treated on-site prior to disposal, such as spent acids and caustics or polymer waste. Information for measuring the amounts of waste generated should be obtained either from the treatment process description, or from direct observation of the process.
Some employees may be hesitant to take all of the necessary steps involved in gathering the information needed for a complete material balance, as it can initially appear to be a daunting task. A recommended first step in performing the material balance is to simply document material inputs minus the materials included in the product stream. This result will show the amount of waste that is generated and can serve as a driving force for finding the specific sources of waste in a process.
| http://www.mntap.umn.edu/prevention/P2_chapter3step3.html | 13 |
120 | Triple Integrals 1 Introduction to the triple integral
- Let's say I wanted to find the volume of a cube, where the
- values of the cube-- let's say x is between-- x is greater
- than or equal to 0, is less than or equal to,
- I don't know, 3.
- Let's say y is greater than or equal to 0, and is
- less than or equal to 4.
- And then let's say that z is greater than or equal to 0 and
- is less than or equal to 2.
- And I know, using basic geometry you could figure out--
- you know, just multiply the width times the height times
- the depth and you'd have the volume.
- But I want to do this example, just so that you get used to
- what a triple integral looks like, how it relates to a
- double integral, and then later in the next video we could do
- something slightly more complicated.
- So let's just draw that, this volume.
- So this is my x-axis, this is my z-axis, this is the y.
- x, y, z.
- So x is between 0 and 3.
- So that's x is equal to 0.
- This is x is equal to-- let's see, 1, 2, 3.
- y is between 0 and 4.
- 1, 2, 3, 4.
- So the x-y plane will look something like this.
- The kind of base of our cube will look something like this.
- And then z is between 0 and 2.
- So 0 is the x-y plane, and then 1, 2.
- So this would be the top part.
- And maybe I'll do that in a slightly different color.
- So this is along the x-z axis.
- You'd have a boundary here, and then it would
- come in like this.
- You have a boundary here, come in like that.
- A boundary there.
- So we want to figure out the volume of this cube.
- And you could do it.
- You could say, well, the depth is 3, the base, the width is 4,
- so this area is 12 times the height.
- 12 times 2 is 24.
- You could say it's 24 cubic units, whatever
- units we're doing.
- But let's do it as a triple integral.
- So what does a triple integral mean?
- Well, what we could do is we could take the volume of a very
- small-- I don't want to say area-- of a very small volume.
- So let's say I wanted to take the volume of a small cube.
- Some place in this-- in the volume under question.
- And it'll start to make more sense, or it starts to become a
- lot more useful, when we have variable boundaries and
- surfaces and curves as boundaries.
- But let's say we want to figure out the volume of this
- little, small cube here.
- That's my cube.
- It's some place in this larger cube, this larger rectangle,
- cubic rectangle, whatever you want to call it.
- So what's the volume of that cube?
- Let's say that its width is dy.
- So that length right there is dy.
- It's height is dx.
- Sorry, no, it's height is dz, right?
- The way I drew it, z is up and down.
- And it's depth is dx.
- This is dx.
- This is dz.
- This is dy.
- So you can say that a small volume within this larger
- volume-- you could call that dv, which is kind of the
- volume differential.
- And that would be equal to, you could say, it's just
- the width times the length times the height.
- dx times dy times dz.
- And you could switch the orders of these, right?
- Because multiplication is associative, and order
- doesn't matter and all that.
- But anyway, what can you do with it in here?
- Well, we can take the integral.
- All integrals help us do is help us take infinite sums of
- infinitely small distances, like a dz or a dx or
- a dy, et cetera.
- So, what we could do is we could take this cube and
- first, add it up in, let's say, the z direction.
- So we could take that cube and then add it along the up and
- down axis-- the z-axis-- so that we get the
- volume of a column.
- So what would that look like?
- Well, since we're going up and down, we're adding-- we're
- taking the sum in the z direction.
- We'd have an integral.
- And then what's the lowest z value?
- Well, it's z is equal to 0.
- And what's the upper bound?
- Like if you were to just take-- keep adding these cubes, and
- keep going up, you'd run into the upper bound.
- And what's the upper bound?
- It's z is equal to 2.
- And of course, you would take the sum of these dv's.
- And I'll write dz first.
- Just so it reminds us that we're going to
- take the integral with respect to z first.
- And let's say we'll do y next.
- And then we'll do x.
- So this integral, this value, as I've written it, will
- figure out the volume of a column given any x and y.
- It'll be a function of x and y, but since we're dealing with
- all constants here, it's actually going to be
- a constant value.
- It'll be the constant value of the volume of one
- of these columns.
- So essentially, it'll be 2 times dy dx.
- Because the height of one of these columns is 2,
- and then its width and its depth is dy and dx.
- So then if we want to figure out the entire volume-- what
- we did just now is we figured out the height of a column.
- So then we could take those columns and sum them
- in the y direction.
- So if we're summing in the y direction, we could just take
- another integral of this sum in the y direction.
- And y goes from 0 to what? y goes from 0 to 4.
- I wrote this integral a little bit too far to the
- left, it looks strange.
- But I think you get the idea.
- y is equal to 0, to y is equal to 4.
- And then that'll give us the volume of a sheet that is
- parallel to the zy plane.
- And then all we have left to do is add up a bunch of those
- sheets in the x direction, and we'll have the volume
- of our entire figure.
- So to add up those sheets, we would have to sum
- in the x direction.
- And we'd go from x is equal to 0, to x is equal to 3.
- And to evaluate this is actually fairly straightforward.
- So, first we're taking the integral with respect to z.
- Well, we don't have anything written under here, but we
- can just assume that there's a 1, right?
- Because dz times dy times dx is the same thing as
- 1 times dz times dy dx.
- So what's the value of this integral?
- Well, the antiderivative of 1 with respect to
- z is just z, right?
- Because the derivative of z is 1.
- And you evaluate that from 2 to 0.
- So then you're left with-- so it's 2 minus 0.
- So you're just left with 2.
- So you're left with 2, and you take the integral of that from
- y is equal to 0, to y is equal to 4 dy, and then
- you have the x.
- From x is equal to 0, to x is equal to 3 dx.
- And notice, when we just took the integral with respect to
- z, we ended up with a double integral.
- And this double integral is the exact integral we would have
- done in the previous videos on the double integral, where you
- would have just said, well, z is a function of x and y.
- So you could have written, you know, z, is a function of x
- and y, is always equal to 2.
- It's a constant function.
- It's independent of x and y.
- But if you had defined z in this way, and you wanted to
- figure out the volume under this surface, where the surface
- is z is equal to 2-- you know, this is a surface, is z
- is equal to 2-- we would have ended up with this.
- So you see that what we're doing with the triple
- integral, it's really, really nothing different.
- And you might be wondering, well, why are we
- doing it at all?
- And I'll show you that in a second.
- But anyway, to evaluate this, you could take the
- antiderivative of this with respect to y, you get 2y-- let
- me scroll down a little bit.
- You get 2y evaluating that at 4 and 0.
- And then, so you get 2 times 4.
- So it's 8 minus 0.
- And then you integrate that from, with respect
- to x from 0 to 3.
- So that's 8x from 0 to 3.
- So that'll be equal to 24 units cubed.
- So I know the obvious question is, what is this good for?
- Well, when you have a kind of a constant value within
- the volume, you're right.
- You could have just done a double integral.
- But what if I were to tell you, our goal is not to figure out
- the volume of this figure.
- Our goal is to figure out the mass of this figure.
- And even more, this volume-- this area of space or
- whatever-- its mass is not uniform.
- If its mass was uniform, you could just multiply its uniform
- density times its volume, and you'd get its mass.
- But let's say the density changes.
- It could be a volume of some gas or it could be even some
- material with different compounds in it.
- So let's say that its density is a variable function
- of x, y, and z.
- So let's say that the density-- this row, this thing that looks
- like a p is what you normally use in physics for density-- so
- its density is a function of x, y, and z.
- Let's-- just to make it simple-- let's make
- it x times y times z.
- If we wanted to figure out the mass of any small volume, it
- would be that volume times the density, right?
- Because density-- the units of density are like kilograms
- per meter cubed.
- So if you multiply it times meter cubed, you get kilograms.
- So we could say that the mass-- well, I'll make up notation, d
- mass-- this isn't a function.
- Well, I don't want to write it in parentheses, because it
- makes it look like a function.
- So, a very differential mass, or a very small mass, is going
- to equal the density at that point, which would be xyz,
- times the volume of that of that small mass.
- And that volume of that small mass we could write as dv.
- And we know that dv is the same thing as the width times
- the height times the depth.
- dv doesn't always have to be dx times dy times dz.
- If we're doing other coordinates, if we're doing
- polar coordinates, it could be something slightly different.
- And we'll do that eventually.
- But if we wanted to figure out the mass, since we're using
- rectangular coordinates, it would be the density function
- at that point times our differential volume.
- So times dx dy dz.
- And of course, we can change the order here.
- So when you want to figure out the volume-- when you want to
- figure out the mass-- which I will do in the next video, we
- essentially will have to integrate this function.
- As opposed to just 1 over z, y and x.
- And I'm going to do that in the next video.
- And you'll see that it's really just a lot of basic taking
- antiderivatives and avoiding careless mistakes.
- I will see you in the next video.
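The integrals worked through above are easy to verify symbolically. Here is a small SymPy sketch (not part of the original video) that evaluates both the volume integral and the mass integral with the density ρ = xyz that the video sets up for the next part.

```python
from sympy import symbols, integrate

x, y, z = symbols('x y z')

# Volume of the box 0<=x<=3, 0<=y<=4, 0<=z<=2: triple integral of 1, dz dy dx
volume = integrate(1, (z, 0, 2), (y, 0, 4), (x, 0, 3))
print(volume)  # 24

# Mass with the variable density rho(x, y, z) = x*y*z from the follow-up example
mass = integrate(x * y * z, (z, 0, 2), (y, 0, 4), (x, 0, 3))
print(mass)    # 72
```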
| http://www.khanacademy.org/math/calculus/double_triple_integrals/triple_integrals/v/triple-integrals-1 | 13 |
89 | - This article is about the arithmetic operation. For other uses, see Division (disambiguation).
If c multiplied by b equals a, written c × b = a, where b is not zero, then
a divided by b equals c, written a ÷ b = c. For instance, 6 ÷ 3 = 2 since 2 × 3 = 6.
In the above expression, a is called the dividend, b the divisor and c the quotient.
Division by zero (i.e. where the divisor is zero) is usually not defined.
Division is most often shown by placing the dividend over the divisor with a horizontal line between them. For example, a divided by b is written with a above the line and b below it. This can be read out loud as "a divided by b".
A way to express division all on one line is to write the dividend, then a slash, then the divisor, like this: a/b. This is the usual way to specify division in most computer programming languages since it can easily be typed as a simple sequence of characters.
A typographical variation which is halfway between these two forms uses a slash but elevates the dividend, and lowers the divisor: a⁄b
Any of these forms can be used to display a fraction. A fraction is a division expression where both dividend and divisor are integers (although typically called the numerator and denominator), and there is no implication that the division needs to be evaluated further.
A less common way to show division is to use the obelus (or division sign) in this manner: a ÷ b. This form is infrequent except in elementary arithmetic. The obelus is also used alone to represent the division operation itself, as for instance as a label on a key of a calculator.
With a knowledge of multiplication tables, two integers can be divided on paper using the method of long division. If the dividend has a fractional part (expressed as a decimal fraction), one can continue the algorithm past the ones place as far as desired. If the divisor has a fractional part, one can restate the problem by moving the decimal to the right in both numbers until the divisor has no fraction.
Division can be calculated with an abacus by repeatedly placing the dividend on the abacus, and then subtracting the divisor at the offset of each digit in the result, counting the number of divisions possible at each offset.
In modular arithmetic, some numbers have a multiplicative inverse with respect to the modulus. In such a case, division can be calculated by multiplication. This approach is useful in computers that do not have a fast division instruction.
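As an illustration of that idea (not part of the original article), Python 3.8+ can compute a modular multiplicative inverse directly with the built-in pow, turning division modulo m into a multiplication:

```python
# Divide 5 by 3 modulo 7: multiply 5 by the inverse of 3 (mod 7).
m = 7
inv3 = pow(3, -1, m)      # 5, because 3 * 5 = 15 ≡ 1 (mod 7)
print((5 * inv3) % m)     # 4, and indeed 4 * 3 = 12 ≡ 5 (mod 7)
```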
Division of integers
Division of integers is not closed. Apart from division by zero being undefined, the quotient will not be an integer unless the dividend is an integer multiple of the divisor; for example 26 cannot be divided by 10 to give an integer. In such a case there are four possible approaches.
- Say that 26 cannot be divided by 10.
- Give the answer as a decimal fraction or a mixed number, so 26 ÷ 10 = 2.6 or 2 3/5. This is the approach usually taken in mathematics.
- Give the answer as a quotient and a remainder, so 26 ÷ 10 = 2 remainder 6.
- Give the quotient as the answer, so 26 ÷ 10 = 2. This is sometimes called integer division.
One has to be careful when performing division of integers in a computer program. Some programming languages, such as C, will treat division of integers as in case 4 above, so the answer will be an integer. Other languages, such as MATLAB, will first convert the integers to real numbers, and then give a real number as the answer, as in case 2 above.
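A minimal Python sketch of the distinction (Python happens to expose both behaviours with separate operators, which makes it a convenient illustration; in C, / applied to two ints truncates toward zero, as in case 4):

```python
print(26 // 10)   # 2    -> integer (floor) division, as in case 4
print(26 % 10)    # 6    -> the remainder, as in case 3
print(26 / 10)    # 2.6  -> true division to a floating-point number, as in case 2
```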
Division of rational numbers
The result of dividing two rational numbers is another rational number when the divisor is not 0. We may define division of two rational numbers p/q and r/s by (p/q) ÷ (r/s) = (p × s)/(q × r).
All four quantities are integers, and only p may be 0. This definition ensures that division is the inverse operation of multiplication.
Division of real numbers
Division of two real numbers results in another real number when the divisor is not 0. It is defined such that a/b = c if and only if a = cb and b ≠ 0.
Division of complex numbers
Dividing two complex numbers results in another complex number when the divisor is not 0, defined thus: (p + iq) ÷ (r + is) = (pr + qs)/(r² + s²) + i (qr − ps)/(r² + s²).
All four quantities are real numbers. r and s may not both be 0.
Division for complex numbers expressed in polar form is simpler and easier to remember than the definition above: (p e^(iq)) ÷ (r e^(is)) = (p/r) e^(i(q − s)).
Again all four quantities are real numbers. r may not be 0.
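Languages with a built-in complex type implement exactly the rectangular-form rule above; the following Python snippet (an illustration, not from the original article) checks the formula against the built-in division:

```python
numerator = complex(1, 2)     # 1 + 2i  (p = 1, q = 2)
denominator = complex(3, -4)  # 3 - 4i  (r = 3, s = -4)

# Built-in complex division
print(numerator / denominator)                     # (-0.2+0.4j)

# Same result from the formula (pr + qs)/(r² + s²) + i(qr − ps)/(r² + s²)
p, q, r, s = 1.0, 2.0, 3.0, -4.0
d = r * r + s * s
print(complex((p * r + q * s) / d, (q * r - p * s) / d))   # (-0.2+0.4j)
```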
Division of polynomials
One can define the division operation for polynomials. Then, as in the case of integers, one has a remainder. See polynomial long division.
Division in abstract algebra
In abstract algebras such as matrix algebras and quaternion algebras, fractions such as a/b are typically defined as a · b⁻¹ or b⁻¹ · a, where b is presumed to be an invertible element (i.e. there exists a multiplicative inverse b⁻¹ such that bb⁻¹ = b⁻¹b = 1, where 1 is the multiplicative identity). In an integral domain where such elements may not exist, division can still be performed on equations of the form ab = ac or ba = ca by left or right cancellation, respectively. More generally "division" in the sense of "cancellation" can be done in any ring with the aforementioned cancellation properties. By a theorem of Wedderburn, all finite division rings are fields, hence every nonzero element of such a ring is invertible, so division by any nonzero element is possible in such a ring. To learn about when algebras (in the technical sense) have a division operation, refer to the page on division algebras. In particular Bott periodicity can be used to show that any real normed division algebra must be isomorphic to either the real numbers R, the complex numbers C, the quaternions H, or the octonions O.
Division and calculus
There is no general method to integrate the quotient of two functions.
- Division (electronics)
- Rational number
- Vulgar fraction
- Inverse element
- Division by two
- Division by zero
- Field (algebra)
- Division algebra
- Division ring
- Long division
- Method for Dividing Decimals
- Division on PlanetMath.
- Division on a Japanese abacus selected from Abacus: Mystery of the Bead
- Chinese Short Division Techniques on a Suan Pan
| http://www.exampleproblems.com/wiki/index.php/Division_(mathematics) | 13 |
59 | A derivative, one of the fundamental concepts of calculus, measures how quickly a function changes as its input value changes. Given a graph of a real curve, the derivative at a specific point will equal the slope of the line tangent to that point. For example, the derivative of y = x2 at the point (1,1) tells how quickly the function is increasing at that point. If a function has a derivative at some point, it is said to be differentiable there. If a function has a derivative at every point where it is defined, we say it is a differentiable function. Differentiability implies continuity.
One of the main applications of differential calculus is differentiating a function, or calculating its derivative. The First Fundamental Theorem of Calculus explains that one can find the original function, given its derivative, by integrating, or taking the integral of, the derivative.
The derivative of the function f(x), denoted f'(x) or df/dx, is defined as: f'(x) = lim (h → 0) [f(x + h) − f(x)] / h
In other words, it is the limit of the slope of the secant line to f(x) as it becomes a tangent line. If the tangent line is increasing (which it is if the original function is increasing), the derivative is positive; if the function is decreasing, the derivative is negative.
For example, if f(x) = mx + b is a line, then f'(x) = m; that is, the derivative of any line is equal to its slope.
Higher order derivatives
A higher order derivative is obtained by repeatedly differentiating a function. Thus, the second derivative of f(x), written d²f/dx², is the derivative of the first derivative f'(x),
and so forth.
A common alternative notation is f''(x), f'''(x), and f^(n)(x) for the second, third or nth derivative.
A partial derivative is obtained by differentiating a function of multiple variables with respect to one variable while holding the rest constant. For example, the partial derivative of F(x,y) with respect to x, written ∂F/∂x, represents the rate of change of F with respect to x while y is constant. Thus, F could be windchill, which depends both on wind velocity and actual temperature. The partial derivative with respect to wind velocity represents how much windchill changes with respect to wind velocity for a given temperature.
Partial derivatives are calculated just like full derivatives, with the other variables being treated as constants.
Example: Let . Then there are two partial derivatives of first order:
Note that the two partial derivatives f1(x1,x2) and f2(x1,x2) in this example are again differentiable functions of x1 and x2, so higher derivatives can be calculated:
Note that f12(x1,x2) equals f21(x1,x2), so that the order of taking the derivative doesn't matter. Though this doesn't hold in general, it is true for a large class of important functions, specifically those whose second partial derivatives are continuous.
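A concrete illustration of first- and second-order partial derivatives, computed with SymPy; the function f(x1, x2) = x1²·x2 + x2³ is a stand-in chosen for this sketch rather than the example above.

```python
from sympy import symbols, diff

x1, x2 = symbols('x1 x2')
f = x1**2 * x2 + x2**3          # stand-in function for illustration only

f1 = diff(f, x1)                # 2*x1*x2
f2 = diff(f, x2)                # x1**2 + 3*x2**2

f12 = diff(f1, x2)              # 2*x1
f21 = diff(f2, x1)              # 2*x1  -> the mixed partials agree, as noted above
print(f1, f2, f12, f21)
```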
In mathematics, derivatives are helpful in determining the maximum and minimum of a function. For example, taking the derivative of a quadratic function will yield a linear function. The points at which this function equals zero are called critical points. Maxima and minima can occur at critical points, and can be verified to be a maximum or minimum by the second derivative test. The second derivative is used to determine the concavity, or curved shape of the graph. Where the concavity is positive, the graph curves upwards, and could contain a relative minimum. Where the concavity is negative, the graph curves downwards, and could contain a relative maximum. Where the concavity equals zero is said to be a point of inflection, meaning that it is a point where the concavity could be changing.
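A short SymPy sketch of that procedure; the cubic used here is an arbitrary illustration, not a function discussed above.

```python
from sympy import symbols, diff, solve

x = symbols('x')
f = x**3 - 3*x**2 + 1

fprime = diff(f, x)                   # 3*x**2 - 6*x
critical_points = solve(fprime, x)    # [0, 2]

fsecond = diff(f, x, 2)               # 6*x - 6
for c in critical_points:
    concavity = fsecond.subs(x, c)
    kind = 'relative maximum' if concavity < 0 else 'relative minimum'
    print(c, kind)                    # 0 -> relative maximum, 2 -> relative minimum
```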
Derivatives are also useful in physics, under the "rate of change" concept. For example, acceleration is the derivative of velocity with respect to time, and velocity is the derivative of distance with respect to time.
Another important application of derivatives is in the Taylor series of a function, a way of writing certain functions like ex as a power series. | http://www.conservapedia.com/Derivative | 13 |
58 | Basic Math for Adults/Fractions
Introduction to Fractions
When you divide (fractionate) something into parts, two or more, you have what is known as a common fraction. A common fraction is usually written as two numbers; a top number and a bottom number. Common fractions can also be expressed in words. The number that is on top is called the numerator, and the number on the bottom is called the denominator (the prefix 'de-' is Latin for reverse) or divisor.
These two numbers are always separated by a line, which is known as a fraction bar. This way of representing fractions is called display representation. Common fractions are, more often than not, simply known as fractions in everyday speech.
The numerator in any given fraction tells you how many parts of something you have on hand. For example, if you were to slice a pizza for a party into six equal pieces, and you took two slices of pizza for yourself, you would have 2/6 (pronounced two-sixths) of that pizza. Another way to look at it is by thinking in terms of equal parts; when that pizza was cut into six equal parts, each part was exactly 1/6 (one-sixth) of the whole pizza.
The denominator tells you how many parts are in a whole, in this case your pizza. Your pizza was cut into six equal parts, and therefore the entire pizza consists of six equal slices. So when you took two slices for yourself, only four slices of pizza remain, or 4/6 (four-sixths).
Also keep in mind that while the numerator can be zero — the fraction 0/6 is simply equal to zero, because six equal shares of nothing are still nothing — the denominator can never be zero, since it makes no sense to divide something into zero parts. If the denominator is zero, then the fraction has no meaning or is considered undefined, since the treatment may depend upon the mathematical setting one is working in; for the purpose of this chapter, we will consider that it has no meaning.
Another way of representing fractions is by using a diagonal line between the numerator and the denominator.
In this case, the separator between the numerator and the denominator is called a slash, a solidus or a virgule. This method of representing fractions is called in-line representation, meaning that the fraction is lined up with the rest of the text. You will often see in-line representations in texts where the author does not have any way to use display representation.
The fraction 2/6 in the pizza analogy we just used is known as a proper fraction. In a proper fraction, the numerator (top number) is always smaller than the denominator (bottom number). Thus, the value of a proper fraction is always less than one. Proper fractions are generally the kind you will encounter most often in mathematics.
When the numerator of a fraction is greater than, or equal to, the denominator, you have an improper fraction. For example, the fractions 6/6 and 5/3 are considered improper fractions. Improper fractions always have a value of one whole or more. So with 6/6, the numerator says you have 6 pieces, but 6 is also the number of pieces in the whole, so the value of this fraction is one whole. It is as if no one took a slice of pizza after you cut it.
In the case of 5/3, one whole of something is divided into three equal pieces, but on hand you have five pieces (you had two pizzas, each divided into three slices, and you ate one slice). This means you have two pieces extra, or two pieces greater than one whole. This concept may seem rather confusing and strange at first, but as you become better in math you will eventually put two and two together to get the whole picture (okay, bad pun).
When a whole number is written next to a fraction, such as 2 1/3 (two and one-third), you are seeing what is called a mixed fraction. A mixed fraction is understood as being the sum, or total, of both the whole number and the fraction. The number two in 2 1/3 stands for two wholes - you also have a third more of something, which is the 1/3.
Sometimes in mathematics you will need to rewrite a fraction in smaller numbers, while also keeping the value of the fraction the same. This is known as simplifying, or reducing to lowest terms. It should be mentioned that a fraction which is not reduced is not intrinsically incorrect, but it may be confusing for others reviewing your work. There are two ways to simplify fractions, and both will be useful anytime you work with fractions, so it is recommended you learn both methods.
To reiterate, reducing fractions is essentially replacing your original fraction with another one of equal value, called an equivalent fraction. For example, 1/2, 2/4, and 4/8 are all equivalent fractions.
When the fraction 4/8 is reduced to lowest terms, it then becomes 1/2, because four pieces out of a total of eight is exactly one-half of all available pieces. A fraction is also in its lowest terms when both the numerator and denominator cannot be divided evenly by any number other than one.
To reduce a fraction to lowest terms, you must divide the numerator and denominator by the largest whole number that divides evenly into both. For example, to reduce the fraction 3/9 to lowest terms, divide the numerator (3) and denominator (9) by three, which gives 1/3.
If the largest whole number is not obvious, and many times it is not, divide the numerator and denominator by any number (except one) that divides evenly into each, and then repeat the process until the fraction is in lowest terms.
For clarity, below are a few examples of reducing fractions using this method.
Reduce to lowest terms.
In this problem, the largest whole number is difficult to see, so we first divide the numerator and denominator by two, as shown:
Next divide by two again:
There are no whole numbers left which can divide evenly into , so the problem is finished.
reduced to lowest terms is
Reduce to lowest terms.
Divide the numerator and denominator by six, as shown:
reduced to lowest terms is .
Reduce to lowest terms.
In this problem, the largest whole number is not immediately apparent, so we first divide the numerator and denominator by two, as shown:
Next divide by seven:
reduced to lowest terms is .
Greatest Common Factor Method
The second method of simplifying fractions involves finding the greatest common factor between the numerator and the denominator. For 4/8, we do this by breaking up both the numerator and the denominator into their prime factors (greatest common factor = 2 × 2 = 4): 4/8 = (2 × 2)/(2 × 2 × 2).
It is implied that any part is multiplied by one. If we divide the common 2s out of the factorized fraction, we are left with one 2 in the denominator, so the result is 1/2.
It is best to practice these skills of reducing fractions until you feel confident enough to do them on your own. Remember, practice makes perfect.
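For checking by-hand reductions, Python's standard library can carry out both methods; this is only a checking aid, not part of the workbook exercises.

```python
from fractions import Fraction
from math import gcd

# Fraction reduces to lowest terms automatically:
print(Fraction(4, 8))     # 1/2
print(Fraction(3, 9))     # 1/3

# The greatest-common-factor method, step by step:
numerator, denominator = 4, 8
factor = gcd(numerator, denominator)               # 4
print(numerator // factor, denominator // factor)  # 1 2, i.e. 1/2
```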
Raising Fractions to Higher Terms
To raise a fraction to higher terms is to rewrite it in larger numbers while keeping the fraction equivalent to the original in value. This is, for all intents and purposes, the exact opposite of simplifying a fraction.
Convert 2/5 to a fraction with a denominator of 15.
Ask yourself “what number multiplied by 5 equals 15?” To find the answer simply divide 5 into 15.
After dividing the two denominators, you must take the answer and multiply it by the numerator you already have. So in this case, we multiply 2 by 3 to find the missing numerator, giving 6/15.
Changing Improper Fractions to Mixed Numbers
Oftentimes you will encounter fractions in their improper form. While this may be useful in some instances, it is usually best to convert the fraction into simplest form, or mixed fraction.
To convert an improper fraction into a mixed fraction, divide the denominator into the numerator.
Change 13/2 into a mixed fraction.
Divide 13 by 2, use long division to obtain quotient and a remainder.
To form the proper fraction part of the answer, we use the divisor (2) as the denominator, and the remainder (1) as the numerator. Finally, we take the whole-number part of the answer to the division problem, in this case 6, and use that as the whole number.
Hence 13/2 in mixed fraction form is 6 1/2.
Adding Fractions With The Same Denominator
In order to add fractions with the same denominator, you only need to add the numerators while keeping the original denominator for the sum.
Adding fractions with the same denominator is the rule but it begs the question why? Why can’t (or shouldn't) I add both numerators and denominators?
To make sense of this try taking a 12 inch ruler and drawing a 3 inch horizontal line (1/4 of a foot) and then on the end add another 3 inch line (1/4 of a foot). What is the total length of the line? It should be 6 inches (1/2 a foot) and not 2/8 of a foot (3 inches). In essence it seems we can only add like items and like items are terms that have the same denominator and we add them up by adding up numerators.
Adding Fractions With Different Denominators
When adding fractions that do not have the same denominator, you must make the denominators of all the terms the same. We do this by finding the least common multiple of the two denominators.
- Least common multiple of 4 and 5 is 20; therefore, make the denominators 20:
- Now that the common denominators are the same, perform the usual addition:
Subtracting Fractions With The Same Denominator
To subtract fractions sharing a denominator, take their numerators and subtract them in order of appearance. If the numerator's difference is zero, the whole difference will be zero, regardless of the denominator.
Subtracting Fractions With Different Denominators
To subtract one fraction from another, you must again find the least common multiple of the two denominators.
- Least common multiple of 4 and 6 is 12; therefore, make the denominator 12:
- Now that the denominator is same, perform the usual subtraction.
Multiplying fractions is very easy. Simply multiply the numerators of the fractions to find the numerator of the answer. Then multiply the denominators of the fractions to find the denominator of the answer. In other words, it can be said “top times top equals top”, and “bottom times bottom equals bottom.” This rule is used to multiply both proper and improper fractions, and can be used to find the answer to more than two fractions in any given problem.
Multiply the numerators to find the numerator of the answer.
Multiply the denominators to find the denominator of the answer.
Make sure to reduce your answer if possible.
Whole and Mixed Numbers
When you need to multiply a fraction by a whole number, you must first convert the whole number into a fraction. This, fortunately, is not as difficult as it may sound; just put the whole number over the number one. Then proceed to multiply as you would with any two fractions. An example is given below.
If a problem contains one or more mixed numbers, you must first convert all mixed numbers into improper fractions, and multiply as before. Finally, convert any improper fraction back to a mixed number.
To divide fractions, simply exchange the numerator and the denominator of the second term in the problem, then multiply the two fractions.
Invert the second fraction:
Always check to see whether the resulting fraction can be simplified.
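The same module will add, subtract, multiply and divide fractions using the rules described in this chapter, which makes it handy for checking practice problems (again, only a checking aid, not part of the workbook).

```python
from fractions import Fraction

print(Fraction(1, 4) + Fraction(1, 4))   # 1/2   (same denominator)
print(Fraction(3, 4) + Fraction(2, 5))   # 23/20 (common denominator 20)
print(Fraction(2, 3) * Fraction(3, 5))   # 2/5   (top times top, bottom times bottom)
print(Fraction(2, 3) / Fraction(4, 7))   # 7/6   (invert the second fraction and multiply)
```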
71 | Quadratic Equation
Presentation transcript: Quadratic Equation, by 10 'C' Students
What is a Quadratic Equation
A quadratic equation in standard form is ax² + bx + c = 0 (a, b, and c can have any value, except that a can't be 0). The letters a, b and c are coefficients. The letter "x" is the variable or unknown.
The x² term is what makes it a quadratic. The name Quadratic comes from "quad" meaning square, because of x² (in other words x squared). It can also be called an equation of degree 2.
Some other examples: in 2x² + 5x + 3 = 0, a = 2, b = 5 and c = 3. This one is a little more tricky: x² − 3x = 0. Where is a? In fact a = 1, because we don't usually write "1x²"; b = −3. And where is c? Well, c = 0, so it is not shown.
We can solve a quadratic equation by 3 methods:
1. by factorisation
2. by completing the square
3. by the quadratic formula
Solving Quadratic Equations by Factorisation
The following idea is used when solving quadratics by factorisation: if the product of two numbers is 0, then one (or both) of the numbers must be 0. So if xy = 0, either x = 0 or y = 0. Considering some specific numbers: if 8 × x = 0 then x = 0; if y × 15 = 0 then y = 0.
Some quadratic equations ax² + bx + c = 0 (a ≠ 0) can be solved by factorising, and it is normal to try this method first before resorting to the other two methods discussed. The first step is to rearrange them (if necessary) into the standard form above.
Example 1: Solve x² = 4x. Rearrange: x² − 4x = 0. Factorise: x(x − 4) = 0, so either x = 0 or x − 4 = 0, and if x − 4 = 0 then x = 4. The solutions (roots) are x = 0 and x = 4.
Example 2: Solve 6x² = −9x. Rearrange: 6x² + 9x = 0. Factorise: 3x(2x + 3) = 0, so either 3x = 0 or 2x + 3 = 0, giving x = 0 or x = −1½.
Solving Quadratic Equations by Completing the Square
Creating a perfect square trinomial: in the trinomial x² + 14x + ____ the constant term is missing. Find the constant term by squaring half the coefficient of the linear term: (14/2)² = 49, giving x² + 14x + 49.
To solve an equation by completing the square:
Step 1: Move the quadratic term and the linear term to the left side of the equation.
Step 2: Find the term that completes the square on the left side of the equation. Add that term to both sides.
Step 3: Factor the perfect square trinomial on the left side of the equation. Simplify the right side of the equation.
Step 4: Take the square root of each side.
Step 5: Set up the two possibilities and solve.
Solving Quadratic Equations by the Quadratic Formula
The quadratic formula allows you to find the roots of a quadratic equation (if they exist) even if the quadratic equation does not factorise. For a quadratic equation of the form ax² + bx + c = 0, the roots are given by x = (−b ± √(b² − 4ac)) / (2a).
Example: Use the quadratic formula to solve the equation x² + 5x + 6 = 0. Here a = 1, b = 5, c = 6, and the roots are x = −2 and x = −3.
Thank you. Made and presented by Amar, Abhishek Rawat, Abhishek Solanki, Abhishek Sharma, Ajay and Anirudh.
| http://www.authorstream.com/Presentation/54655-1317762-quadratic-equation/ | 13 |
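A small Python function implementing the quadratic formula from the slides above; the function name and the error handling are choices made for this sketch rather than anything in the presentation.

```python
import cmath

def quadratic_roots(a, b, c):
    """Return the two roots of a*x**2 + b*x + c = 0 (a must be nonzero)."""
    if a == 0:
        raise ValueError("a must not be zero for a quadratic equation")
    disc = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt also handles negative discriminants
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# Example from the slides: x^2 + 5x + 6 = 0  ->  roots -2 and -3
print(quadratic_roots(1, 5, 6))   # ((-2+0j), (-3+0j))
```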
92 | Vector Dot Product and Vector Length Definitions of the vector dot product and vector length
- We've already made a few definitions of operations that
- we can do with vectors.
- We've defined addition in the context of vectors
- and you've seen that.
- If you just have two vectors, say a1, a2, all the
- way down to a n.
- We defined the addition of this vector and let's say some
- other vector, b1, b2, all the way down to
- bn as a third vector.
- If you add these two, we defined the addition operation
- to be a third-- you will result in a third vector where
- each of its components are just the sum of the
- corresponding components of the two vectors you're adding.
- So it's going to be a1 plus b1, a2 plus b2, all the way
- down to a n plus bn.
- We knew this and we've done multiple videos where we use
- this definition of vector addition.
- We also know about scalar multiplication.
- Maybe we should just call it scaling multiplication.
- And that's the case of look, if I have some real number c
- and I multiply it times some vector, a1, a2, all the way
- down to a n, we defined scalar multiplication of a vector to
- be-- some scalar times its vector will result in
- essentially, this vector were each of its components are
- multiplied by the scalar.
- ca1, ca2, all the way down to c a n.
- And so after seeing these two operations, you might be
- tempted to say, gee, wouldn't it be nice if there was some
- way to multiply two vectors.
- This is just a scalar times a vector, just scaling it up.
- And that's actually the actual effect of what it's doing if
- you visualize it in three dimensions or less.
- It's actually scaling the size of the vector.
- And we haven't defined size, very precisely just yet.
- But you understand at least this operation.
- For multiplying vectors or taking the product, there's
- actually two ways.
- And I'm going to define one of them in this video.
- And that's the dot product.
- And you signify the dot product by saying a dot v.
- So they borrowed one of the types of multiplication
- notations that you saw, but you can't write a cross here.
- That'll actually be a different type of vector multiplication.
- So the dot product is-- it's almost fun to take because
- it's mathematically pretty straightforward, unlike the
- cross product.
- But it's fun to take and it's interesting because it
- results-- so this is a1, a2, all the way down to a n.
- That vector dot my b vector: b1, b2, all the way down to bn
- is going to be equal to the product of each of their
- corresponding components.
- So a1 b1 added together plus a2 b2 plus a3 b3 plus all the
- way to a n, bn.
- So what is this?
- Is this a vector?
- Well no, this is just a number.
- This is just going to be a real number.
- You're just taking the product and adding together a bunch of
- real numbers.
- So this is just going to be a scalar, a real scalar.
- So this is just going to be a scalar right there.
- So in the dot product you multiply two vectors and you
- end up with a scalar value.
- Let me show you a couple of examples just in case this was
- a little bit too abstract.
- So let's say that we take the dot product of the vector 2, 5
- and we're going to dot that with the vector 7, 1.
- Well, this is just going to be equal to 2 times 7 plus 5
- times 1 or 14 plus 6.
- No, sorry.
- 14 plus 5, which is equal to 19.
- So the dot product of this vector and this vector is 19.
- Let me do one more example, although I think this is a
- pretty straightforward idea.
- Let me do it in mauve.
- Say I had the vector 1, 2, 3 and I'm going to dot that with
- the vector minus 2, 0, 5.
- So it's 1 times minus 2 plus 2 times 0 plus 3 times 5.
- So it's minus 2 plus 0 plus 15.
- Minus 2 plus 15 is equal to 13.
- That's the dot product by this definition.
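To check these two worked examples numerically, here is a minimal sketch in Python (the helper name dot is just an illustrative choice, not something defined in the video):

```python
def dot(a, b):
    """Dot product: multiply corresponding components and add the results."""
    assert len(a) == len(b), "vectors must have the same number of components"
    return sum(x * y for x, y in zip(a, b))

print(dot([2, 5], [7, 1]))         # 2*7 + 5*1 = 19
print(dot([1, 2, 3], [-2, 0, 5]))  # 1*(-2) + 2*0 + 3*5 = 13
```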
- Now, I'm going to make another definition.
- I'm going to define the length of a vector.
- And you might say, Sal, I know what the length
- of something is.
- I've been measuring things since I was a kid.
- Why do I have to wait until what is now considered a
- college-level course-- hopefully you're taking this
- before college-- to have length defined for me?
- And the answer is because we're abstracting things to
- well beyond just R3 or just three-dimensional space is
- what you're used to.
- We're abstracting that these vectors could have 50 components.
- And our definition of length will satisfy, will work, even
- for these 50 component vectors.
- And so my definition of length-- but it's also going
- to be consistent with what we know length to be.
- So if I take the length of a and the notation we use is
- just these double lines around the vector.
- The length of my vector a is equal to-- and this is a
- definition.
- It equals the square root of each of my components
- squared, added up: a1 squared plus a2 squared, all the
- way to plus a n squared.
- And this is pretty straightforward.
- If I wanted to take-- let's say this was vector b.
- If I want to take the magnitude of vector b right
- here, this would be what?
- This would be the square root of 2 squared plus 5 squared,
- which is equal to the square root of-- what is this?
- This is 4 plus 25.
- The square root of 29.
- So that's the length of this vector.
- And you might say look, I already knew that.
- That's from the Pythagorean theorem.
- If I were to draw my vector b-- let me draw it.
- Those are my axes.
- My vector b if I draw it in standard
- form, looks like this.
- I go to the right 2.
- 1, 2.
- And I go up 5.
- 1, 2, 3, 4, 5.
- So it looks like this.
- My vector b looks like that.
- And from the Pythagorean theorem you know look, if I
- wanted to figure out the length of this vector in R2,
- or if I'm drawing it in kind of two-dimensional space, I
- take this side which is length 2, I take that side which is
- length 5; this is going to be the square root from the
- Pythagorean theorem of 2 squared plus 5 squared.
- Which is exactly what we did here.
- So this definition of length is completely consistent with
- your idea of measuring things in one-, two- or
- three-dimensional space.
- But what's neat about it is that now we can start thinking
- about the length of a vector that maybe has 50 components.
- Which you know, really to visualize it in our
- traditional way, makes no sense.
- But we can still apply this notion of length and start to
- maybe abstract beyond what we traditionally
- associate length with.
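As a quick numerical check of this length definition, here is a minimal sketch in Python (the helper name length is an illustrative choice); it reproduces the square root of 29 above and works just as well for a vector with many components:

```python
import math

def length(a):
    """Length (norm) of a vector: square root of the sum of squared components."""
    return math.sqrt(sum(x * x for x in a))

print(length([2, 5]))           # sqrt(2^2 + 5^2) = sqrt(29), about 5.385
print(length([1, 2, 3, 4, 5]))  # same definition, five components
```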
- Now, can we somehow relate length with the dot product?
- Well what happens if I dot a with itself?
- What is a dot a?
- So that's equal to-- I'll just write it out again.
- a1, all the way down to a n dotted with a1 all the way
- down to a n.
- Well that's equal to a1 times a1, which is a1 squared.
- Plus a2 times a2.
- a2 squared.
- Plus all the way down, keep doing that all the way down to
- a n times a n, which is a n squared.
- But what's this?
- This is the same thing as the thing you
- see under the radical.
- These two things are equivalent.
- So we could write our definition of length, of
- vector length, we can write it in terms of the dot product,
- of our dot product definition.
- It equals the square root of the vector dotted with itself.
- Or, if we square both sides, we could say that our new
- length definition squared is equal to the dot product of a
- vector with itself.
- And this is a pretty neat-- it's almost trivial to
- actually prove it, but this is a pretty neat outcome and
- we're going to use this in future videos.
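A minimal sketch of that relationship, reusing the hypothetical helpers from the earlier sketches: the squared length of a vector equals the vector dotted with itself (up to floating-point rounding).

```python
import math

def dot(a, b):
    """Dot product: sum of componentwise products."""
    return sum(x * y for x, y in zip(a, b))

def length(a):
    # Length defined through the dot product: ||a|| = sqrt(a . a)
    return math.sqrt(dot(a, a))

a = [1, 2, 3]
print(dot(a, a))       # 14
print(length(a) ** 2)  # about 14, up to floating-point rounding
```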
- So this is an introduction to what the dot product
- is, what length is.
- In the next video I'm going to show a bunch of
- properties of it.
- It will almost be mundane, but I want to get all those
- properties out of the way, so we can use
- them in future proofs.
In fluid dynamics, wind waves or, more precisely, wind-generated waves are surface waves that occur on the free surface of oceans, seas, lakes, rivers, and canals or even on small puddles and ponds. They usually result from the wind blowing over a vast enough stretch of fluid surface. Waves in the oceans can travel thousands of miles before reaching land. Wind waves range in size from small ripples to huge waves over 30 m high.
When directly being generated and affected by the local winds, a wind wave system is called a wind sea. After the wind ceases to blow, wind waves are called swell. Or, more generally, a swell consists of wind generated waves that are not—or are hardly—affected by the local wind at that time. They have been generated elsewhere, or some time ago. Wind waves in the ocean are called ocean surface waves.
Wind waves have a certain amount of randomness: subsequent waves differ in height, duration and shape, with a limited predictability. They can be described as a stochastic process, in combination with the physics governing their generation, growth, propagation and decay—as well as governing the interdependence between flow quantities such as: the water surface movements, flow velocities and water pressure. The key statistics of wind waves (both seas and swells) in evolving sea states can be predicted with wind wave models.
Tsunamis are a specific type of wave not caused by wind but by geological effects. In deep water, tsunamis are not visible because they are small in height and very long in wavelength. They may grow to devastating proportions at the coast due to reduced water depth.
Wave formation
The great majority of large breakers one observes on a beach result from distant winds. Five factors influence the formation of wind waves:
- Wind speed
- Distance of open water that the wind has blown over (called the fetch)
- Width of area affected by fetch
- Time duration the wind has blown over a given area
- Water depth
All of these factors work together to determine the size of wind waves. The greater each of the variables, the larger the waves. Waves are characterized by:
- Wave height (from trough to crest)
- Wavelength (from crest to crest)
- Wave period (time interval between arrival of consecutive crests at a stationary point)
- Wave propagation direction
Waves in a given area typically have a range of heights. For weather reporting and for scientific analysis of wind wave statistics, their characteristic height over a period of time is usually expressed as significant wave height. This figure represents an average height of the highest one-third of the waves in a given time period (usually chosen somewhere in the range from 20 minutes to twelve hours), or in a specific wave or storm system. The significant wave height is also the value a "trained observer" (e.g. from a ship's crew) would estimate from visual observation of a sea state. Given the variability of wave height, the largest individual waves are likely to be somewhat less than twice the reported significant wave height for a particular day or storm.
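As an illustration of that definition (not code from any operational weather service), here is a minimal sketch that estimates the significant wave height as the mean of the highest one-third of a list of measured wave heights:

```python
def significant_wave_height(heights):
    """Mean of the highest one-third of the individual wave heights (H_1/3)."""
    if not heights:
        raise ValueError("need at least one wave height")
    ranked = sorted(heights, reverse=True)
    top_third = ranked[:max(1, len(ranked) // 3)]
    return sum(top_third) / len(top_third)

# ten hypothetical wave heights in metres
print(significant_wave_height([0.8, 1.1, 0.9, 1.6, 2.1, 1.3, 0.7, 1.9, 1.0, 1.2]))  # about 1.87
```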
Types of wind waves
Three different types of wind waves develop over time:
Ripples appear on smooth water when the wind blows, but will die quickly if the wind stops. The restoring force that allows them to propagate is surface tension. Seas are the larger-scale, often irregular motions that form under sustained winds. These waves tend to last much longer, even after the wind has died, and the restoring force that allows them to propagate is gravity. As waves propagate away from their area of origin, they naturally separate into groups of common direction and wavelength. The sets of waves formed in this way are known as swells.
Individual "rogue waves" (also called "freak waves", "monster waves", "killer waves", and "king waves") much higher than the other waves in the sea state can occur. In the case of the Draupner wave, its 25 m (82 ft) height was 2.2 times the significant wave height. Such waves are distinct from tides, caused by the Moon and Sun's gravitational pull, tsunamis that are caused by underwater earthquakes or landslides, and waves generated by underwater explosions or the fall of meteorites—all having far longer wavelengths than wind waves.
Yet, the largest ever recorded wind waves are common—not rogue—waves in extreme sea states. For example: 29.1 m (95 ft) high waves have been recorded on the RRS Discovery in a sea with 18.5 m (61 ft) significant wave height, so the highest wave is only 1.6 times the significant wave height. The biggest recorded by a buoy (as of 2011) was 32.3 m (106 ft) high during the 2007 typhoon Krosa near Taiwan.
Wave shoaling and refraction
As waves travel from deep to shallow water, their shape alters (wave height increases, speed decreases, and length decreases as wave orbits become asymmetrical). This process is called shoaling.
Wave refraction is the process by which wave crests realign themselves as a result of decreasing water depths. Varying depths along a wave crest cause the crest to travel at different phase speeds, with those parts of the wave in deeper water moving faster than those in shallow water. This process continues until the crests become (nearly) parallel to the depth contours. Rays—lines normal to wave crests between which a fixed amount of energy flux is contained—converge on local shallows and shoals. Therefore, the wave energy between rays is concentrated as they converge, with a resulting increase in wave height.
Because these effects are related to a spatial variation in the phase speed, and because the phase speed also changes with the ambient current – due to the Doppler shift – the same effects of refraction and altering wave height also occur due to current variations. In the case of meeting an adverse current the wave steepens, i.e. its wave height increases while the wave length decreases, similar to the shoaling when the water depth decreases.
Wave breaking
Some waves undergo a phenomenon called "breaking". A breaking wave is one whose base can no longer support its top, causing it to collapse. A wave breaks when it runs into shallow water, or when two wave systems oppose and combine forces. When the slope, or steepness ratio, of a wave is too great, breaking is inevitable.
Individual waves in deep water break when the wave steepness—the ratio of the wave height H to the wavelength λ—exceeds about 0.17, so for H > 0.17 λ. In shallow water, with the water depth small compared to the wavelength, the individual waves break when their wave height H is larger than 0.8 times the water depth h, that is H > 0.8 h. Waves can also break if the wind grows strong enough to blow the crest off the base of the wave.
- Spilling, or rolling: these are the safest waves on which to surf. They can be found in most areas with relatively flat shorelines. They are the most common type of shorebreak.
- Plunging, or dumping: these break suddenly and can "dump" swimmers—pushing them to the bottom with great force. These are the preferred waves for experienced surfers. Strong offshore winds and long wave periods can cause dumpers. They are often found where there is a sudden rise in the sea floor, such as a reef or sandbar.
- Surging: these may never actually break as they approach the water's edge, as the water below them is very deep. They tend to form on steep shorelines. These waves can knock swimmers over and drag them back into deeper water.
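The deep-water and shallow-water breaking criteria quoted above translate into a simple check; a minimal sketch, assuming the rule-of-thumb thresholds of 0.17 for steepness and 0.8 for the height-to-depth ratio:

```python
def breaks_in_deep_water(height, wavelength):
    """Deep-water criterion: breaking when the steepness H/lambda exceeds about 0.17."""
    return height / wavelength > 0.17

def breaks_in_shallow_water(height, depth):
    """Shallow-water criterion: breaking when H exceeds about 0.8 times the depth h."""
    return height > 0.8 * depth

print(breaks_in_deep_water(2.0, 10.0))    # True: steepness 0.2 exceeds 0.17
print(breaks_in_shallow_water(1.5, 2.0))  # False: 1.5 m is less than 0.8 * 2.0 m
```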
Science of waves
Wind waves are mechanical waves that propagate along the interface between water and air; the restoring force is provided by gravity, and so they are often referred to as surface gravity waves. As the wind blows, pressure and friction forces perturb the equilibrium of the water surface. These forces transfer energy from the air to the water, forming waves. The initial formation of waves by the wind is described in the theory of Phillips from 1957, and the subsequent growth of the small waves has been modeled by Miles, also in 1957.
In the case of monochromatic linear plane waves in deep water, particles near the surface move in circular paths, making wind waves a combination of longitudinal (back and forth) and transverse (up and down) wave motions. When waves propagate in shallow water (where the depth is less than half the wavelength), the particle trajectories are compressed into ellipses.
As the wave amplitude (height) increases, the particle paths no longer form closed orbits; rather, after the passage of each crest, particles are displaced slightly from their previous positions, a phenomenon known as Stokes drift.
As the depth below the free surface increases, the radius of the circular motion decreases. At a depth equal to half the wavelength λ, the orbital movement has decayed to less than 5% of its value at the surface. The phase speed (also called the celerity) of a surface gravity wave is – for pure periodic wave motion of small-amplitude waves – well approximated by

c = √( (gλ / 2π) · tanh(2πd / λ) )

where:
- c = phase speed;
- λ = wavelength;
- d = water depth;
- g = acceleration due to gravity at the Earth's surface.
In deep water, where the depth d is larger than half the wavelength (so 2πd/λ > π and the hyperbolic tangent approaches 1), the speed approximates

c_deep = √( gλ / 2π )

In SI units, with c_deep in m/s, c_deep ≈ 1.25 √λ when λ is measured in metres. This expression tells us that waves of different wavelengths travel at different speeds. The fastest waves in a storm are the ones with the longest wavelength. As a result, after a storm, the first waves to arrive on the coast are the long-wavelength swells.

If the wavelength is very long compared to the water depth, the phase speed (by taking the limit of c when the wavelength approaches infinity) can be approximated by

c_shallow = √( gd )
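A minimal sketch of these relations, using the linear-wave phase speed formula reconstructed above (values are illustrative only), together with its deep- and shallow-water limits:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def phase_speed(wavelength, depth):
    """Linear (small-amplitude) phase speed: c = sqrt((g*lambda/2pi) * tanh(2pi*d/lambda))."""
    k = 2 * math.pi / wavelength  # wavenumber
    return math.sqrt((g / k) * math.tanh(k * depth))

wavelength = 100.0  # metres
print(phase_speed(wavelength, 1000.0))            # deep water: about 12.5 m/s
print(math.sqrt(g * wavelength / (2 * math.pi)))  # deep-water limit sqrt(g*lambda/2pi)
print(phase_speed(wavelength, 2.0))               # shallow water: about 4.4 m/s
print(math.sqrt(g * 2.0))                         # shallow-water limit sqrt(g*d)
```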
When several wave trains are present, as is always the case in nature, the waves form groups. In deep water the groups travel at a group velocity which is half of the phase speed. Following a single wave in a group one can see the wave appearing at the back of the group, growing and finally disappearing at the front of the group.
As the water depth decreases towards the coast, this will have an effect: wave height changes due to wave shoaling and refraction. As the wave height increases, the wave may become unstable when the crest of the wave moves faster than the trough. This causes surf, a breaking of the waves.
The movement of wind waves can be captured by wave energy devices. The energy density (per unit area) of regular sinusoidal waves depends on the water density ρ, the gravity acceleration g and the wave height H (which, for regular waves, is equal to twice the amplitude, a):

E = (1/8) ρ g H² = (1/2) ρ g a²
The velocity of propagation of this energy is the group velocity.
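A minimal sketch of that energy-density formula as reconstructed above (the seawater density of 1025 kg/m³ is an assumed illustrative value):

```python
def wave_energy_density(wave_height, rho=1025.0, g=9.81):
    """Energy per unit sea-surface area of a regular sinusoidal wave: E = (1/8)*rho*g*H^2."""
    return rho * g * wave_height ** 2 / 8.0

print(wave_energy_density(2.0))  # about 5000 J per square metre for a 2 m high regular wave
```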
Wind wave models
Surfers are very interested in the wave forecasts. There are many websites that provide predictions of the surf quality for the upcoming days and weeks. Wind wave models are driven by more general weather models that predict the winds and pressures over the oceans, seas and lakes.
Wind wave models are also an important part of examining the impact of shore protection and beach nourishment proposals. For many beach areas there is only patchy information about the wave climate, therefore estimating the effect of wind waves is important for managing littoral environments.
Seismic signals
Ocean water waves generate land seismic waves that propagate hundreds of kilometers into the land. These seismic signals usually have a period of 6 ± 2 seconds. Such recordings were first reported and understood in about 1900.
There are two types of seismic "ocean waves". The primary waves are generated in shallow waters by direct water wave-land interaction and have the same period as the water waves (10 to 16 seconds). The more powerful secondary waves are generated by the superposition of ocean waves of equal period traveling in opposite directions, thus generating standing gravity waves – with an associated pressure oscillation at half the period, which does not diminish with depth. The theory for microseism generation by standing waves was provided by Michael Longuet-Higgins in 1950, after Pierre Bernard had suggested this relation with standing waves in 1941 on the basis of observations.
References
- Tolman, H.L. (2008), "Practical wind wave modeling", in Mahmood, M.F., CBMS Conference Proceedings on Water Waves: Theory and Experiment, Howard University, USA, 13–18 May 2008: World Scientific Publ. (published (in press)), ISBN 978-981-4304-23-8
- Holthuijsen (2007), page 5.
- Young, I. R. (1999). Wind generated ocean waves. Elsevier. ISBN 0-08-043317-0. p. 83.
- Weisse, Ralf; von Storch, Hans (2008). Marine climate change: Ocean waves, storms and surges in the perspective of climate change. Springer. p. 51. ISBN 978-3-540-25316-7.
- Munk, Walter H. (1950), "Origin and generation of waves", Proceedings 1st International Conference on Coastal Engineering, Long Beach, California: ASCE, pp. 1–4
- Holliday, Naomi P.; Yelland, Margaret J.; Pascal, Robin; Swail, Val R.; Taylor, Peter K.; Griffiths, Colin R.; Kent, Elizabeth (2006), "Were extreme waves in the Rockall Trough the largest ever recorded?", Geophysical Research Letters 33 (L05613), Bibcode:2006GeoRL..3305613H, doi:10.1029/2005GL025238
- P. C. Liu; H. S. Chen; D.-J. Doong; C. C. Kao; Y.-J. G. Hsu (11 June 2008), "Monstrous ocean waves during typhoon Krosa", Annales Geophysicae (European Geosciences Union) 26: 1327–1329, Bibcode:2008AnGeo..26.1327L, doi:10.5194/angeo-26-1327-2008
- Longuet-Higgins, M.S.; Stewart, R.W. (1964), "Radiation stresses in water waves; a physical discussion, with applications", Deep Sea Research 11 (4): 529–562, doi:10.1016/0011-7471(64)90001-4
- R.J. Dean and R.A. Dalrymple (2002). Coastal processes with engineering applications. Cambridge University Press. ISBN 0-521-60275-0. p. 96–97.
- Phillips, O. M. (1957), "On the generation of waves by turbulent wind", Journal of Fluid Mechanics 2 (5): 417–445, Bibcode:1957JFM.....2..417P, doi:10.1017/S0022112057000233
- Miles, J. W. (1957), "On the generation of surface waves by shear flows", Journal of Fluid Mechanics 3 (2): 185–204, Bibcode:1957JFM.....3..185M, doi:10.1017/S0022112057000567
- Figure 6 from: Wiegel, R.L.; Johnson, J.W. (1950), "Elements of wave theory", Proceedings 1st International Conference on Coastal Engineering, Long Beach, California: ASCE, pp. 5–21
- For the particle trajectories within the framework of linear wave theory, see for instance:
Phillips (1977), page 44.
Lamb, H. (1994). Hydrodynamics (6th edition ed.). Cambridge University Press. ISBN 978-0-521-45868-9. Originally published in 1879, the 6th extended edition appeared first in 1932. See §229, page 367.
L. D. Landau and E. M. Lifshitz (1986). Fluid mechanics. Course of Theoretical Physics 6 (Second revised edition ed.). Pergamon Press. ISBN 0-08-033932-8. See page 33.
- A good illustration of the wave motion according to linear theory is given by Prof. Robert Dalrymple's Java applet.
- For nonlinear waves, the particle paths are not closed, as found by George Gabriel Stokes in 1847, see the original paper by Stokes. Or in Phillips (1977), page 44: "To this order, it is evident that the particle paths are not exactly closed … pointed out by Stokes (1847) in his classical investigation".
- Solutions of the particle trajectories in fully nonlinear periodic waves and the Lagrangian wave period they experience can for instance be found in:
J.M. Williams (1981). "Limiting gravity waves in water of finite depth". Philosophical Transactions of the Royal Society of London, Series A 302 (1466): 139–188. Bibcode:1981RSPTA.302..139W. doi:10.1098/rsta.1981.0159.
J.M. Williams (1985). Tables of progressive gravity waves. Pitman. ISBN 978-0-273-08733-5.
- Carl Nordling, Jonny Östermalm (2006). Physics Handbook for Science and Engineering (8th ed.). Studentliteratur. p. 263. ISBN 978-91-44-04453-8.
- In deep water, the group velocity is half the phase velocity.
- Peter Bormann. Seismic Signals and Noise
- Bernard, P. (1941), "Sur certaines proprietes de la boule etudiees a l'aide des enregistrements seismographiques", Bull. Inst. Oceanogr. Monaco 800: 1–19
- Longuet-Higgins, M.S. (1950), "A theory of the origin of microseisms", Phil. Trans. Roy. Soc. London A 243 (857): 1–35, Bibcode:1950RSPTA.243....1L, doi:10.1098/rsta.1950.0012
- Stokes, G.G. (1847). "On the theory of oscillatory waves". Transactions of the Cambridge Philosophical Society 8: 441–455.
Reprinted in: G.G. Stokes (1880). Mathematical and Physical Papers, Volume I. Cambridge University Press. pp. 197–229.
- Phillips, O.M. (1977). The dynamics of the upper ocean (2nd ed.). Cambridge University Press. ISBN 0-521-29801-6.
- Holthuijsen, Leo H. (2007). Waves in oceanic and coastal waters. Cambridge University Press. ISBN 0-521-86028-8.
- Janssen, Peter (2004). The interaction of ocean waves and wind. Cambridge University Press. ISBN 978-0-521-46540-3.
- Rousmaniere, John (1989). The Annapolis Book of Seamanship (2nd revised ed.). Simon & Schuster. ISBN 0-671-67447-1.
- Carr, Michael (Oct 1998). "Understanding Waves". Sail: 38–45.
- "Anatomy of a Wave" Holben, Jay boatsafe.com captured 5/23/06
- NOAA National Weather Service
- ESA press release on swell tracking with ASAR onboard ENVISAT
- Introductory oceanography chapter 10 – Ocean Waves
- HyperPhysics – Ocean Waves | http://en.wikipedia.org/wiki/Wind_wave | 13 |
Facts, information and articles about Secession, one of the causes of the Civil War
Secession summary: the secession of Southern states led to the establishment of the Confederacy and ultimately the Civil War. It was the most serious secession movement in the United States and was defeated when the Union armies defeated the Confederate armies in the Civil War, 1861-65.
Causes Of Secession
Before the Civil War, the country was dividing between North and South. Issues included states' rights but centered mostly on the issue of slavery, which was prominent in the South but increasingly banned by Northern states.
With the election in 1860 of Abraham Lincoln, who ran on a message of anti-slavery, the Southern states felt it was only a matter of time before the institution was outlawed completely. South Carolina became the first state to officially secede from the United States on December 20, 1860. Within the next six weeks, Mississippi, Florida, Alabama, Georgia, Louisiana and Texas seceded as well. Later Virginia, Arkansas, North Carolina, and Tennessee joined them, and soon afterward, the people of these states elected Jefferson Davis as president of the newly formed Confederacy.
Secession Leads To War
The Civil War officially began with the Battle of Fort Sumter. Fort Sumter was a Union fort in the harbor of Charleston, South Carolina. After the U.S. Army troops inside the fort refused to vacate it, Confederate forces opened fire on the fort with cannons. It was surrendered without casualty, but led to the bloodiest war in the nation’s history.
A Short History of Secession
From Articles of Confederation to "A More Perfect Union." Arguably, the act of secession lies deep within the American psyche. When the 13 colonies rebelled against Great Britain in the War for American Independence, it was an act of secession, one that is celebrated by Americans to this day.
During that war, each of the rebelling colonies regarded itself as a sovereign nation that was cooperating with a dozen other sovereigns in a relationship of convenience to achieve shared goals, the most immediate being independence from Britain. On Nov. 15, 1777, the Continental Congress passed the Articles of Confederation—"Certain Articles of Confederation and Perpetual Union"—to create "The United States of America." That document asserted that "Each State retains its sovereignty, freedom and independence" while entering into "a firm league of friendship with each other" for their common defense and to secure their liberties, as well as to provide for "their mutual and general welfare."
Under the Articles of Confederation, the central government was weak, without even an executive to lead it. Its only political body was the Congress, which could not collect taxes or tariffs (it could ask states for "donations" for the common good). It did have the power to oversee foreign relations but could not create an army or navy to enforce foreign treaties. Even this relatively weak governing document was not ratified by all the states until 1781. It is an old truism that "All politics are local," and never was that more true than during the early days of the United States. Having just seceded from what they saw as a despotic, powerful central government that was too distant from its citizens, Americans were skeptical about giving much power to any government other than that of their own states, where they could exercise more direct control. However, seeds of nationalism were also sown in the war: the war required a united effort, and many men who likely would have lived out their lives without venturing from their own state traveled to other states as part of the Continental Army.
The weaknesses of the Articles of Confederation were obvious almost from the beginning. Foreign nations, ruled to varying degrees by monarchies, were inherently contemptuous of the American experiment of entrusting rule to the ordinary people. A government without an army or navy and little real power was, to them, simply a laughing stock and a plum ripe for picking whenever the opportunity arose.
Domestically, the lack of any uniform codes meant each state established its own form of government, a chaotic system marked at times by mob rule that burned courthouses and terrorized state and local officials. State laws were passed and almost immediately repealed; sometimes ex post facto laws made new codes retroactive. Collecting debts could be virtually impossible.
George Washington, writing to John Jay in 1786, said, "We have, probably, had too good an opinion of human nature in forming our confederation." He underlined his words for emphasis. Jay himself felt the country had to become "one nation in every respect." Alexander Hamilton felt "the prospect of a number of petty states, with appearance only of union," was something "diminutive and contemptible."
In May 1787, a Constitutional Convention met in Philadelphia to address the shortcomings of the Articles of Confederation. Some Americans felt it was an aristocratic plot, but every state felt a need to do something to improve the situation, and smaller states felt a stronger central government could protect them against domination by the larger states. What emerged was a new constitution "in order to provide a more perfect union." It established the three branches of the federal government—executive, legislative, and judicial—and provided for two houses within the legislature. That Constitution, though amended 27 times, has governed the United States of America ever since. It failed to clearly address two critical issues, however.
It made no mention of the future of slavery. (The Northwest Ordinance, not the Constitution, prohibited slavery in the Northwest Territories, that area north of the Ohio River and along the upper Mississippi River.) It also did not include any provision for a procedure by which a state could withdraw from the Union, or by which the Union could be wholly dissolved. To have included such provisions would have been, as some have pointed out, to have written a suicide clause into the Constitution. But the issues of slavery and secession would take on towering importance in the decades to come, with no clear-cut guidance from the Founding Fathers for resolving them.
First Calls for Secession
Following ratification by 11 of the 13 states, the government began operation under the new U.S. Constitution in March 1789. In less than 15 years, states of New England had already threatened to secede from the Union. The first time was a threat to leave if the Assumption Bill, which provided for the federal government to assume the debts of the various states, were not passed. The next threat was over the expense of the Louisiana Purchase. Then, in 1812, President James Madison, the man who had done more than any other individual to shape the Constitution, led the United States into a new war with Great Britain. The New England states objected, for war would cut into their trade with Britain and Europe. Resentment grew so strong that a convention was called at Hartford, Connecticut, in 1814, to discuss secession for the New England states. The Hartford Convention was the most serious secession threat up to that time, but its delegates took no action.
Southerners had also discussed secession in the nation’s early years, concerned over talk of abolishing slavery. But when push came to shove in 1832, it was not over slavery but tariffs. National tariffs were passed that protected Northern manufacturers but increased prices for manufactured goods purchased in the predominantly agricultural South, where the Tariff of 1828 was dubbed the "Tariff of Abominations." The legislature of South Carolina declared the tariff acts of 1828 and 1832 were "unauthorized by the constitution of the United States" and voted them null, void and non-binding on the state.
President Andrew Jackson responded with a Proclamation of Force, declaring, "I consider, then, the power to annul a law of the United States, assumed by one state, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed." (Emphasis is Jackson’s). Congress authorized Jackson to use military force if necessary to enforce the law (every Southern senator walked out in protest before the vote was taken). That proved unnecessary, as a compromise tariff was approved, and South Carolina rescinded its Nullification Ordinance.
The Nullification Crisis, as the episode is known, was the most serious threat of disunion the young country had yet confronted. It demonstrated both continuing beliefs in the primacy of states rights over those of the federal government (on the part of South Carolina and other Southern states) and a belief that the chief executive had a right and responsibility to suppress any attempts to give individual states the right to override federal law.
The Abolition Movement, and Southern Secession
Between the 1830s and 1860, a widening chasm developed between North and South over the issue of slavery, which had been abolished in all states north of the Mason-Dixon line. The Abolition Movement grew in power and prominence. The slaveholding South increasingly felt its interests were threatened, particularly since slavery had been prohibited in much of the new territory that had been added west of the Mississippi River. The Missouri Compromise, the Dred Scott decision, the issue of Popular Sovereignty (allowing residents of a territory to vote on whether it would be slave or free), and John Brown‘s Raid On Harpers Ferry all played a role in the intensifying debate. Whereas once Southerners had talked of an emancipation process that would gradually end slavery, they increasingly took a hard line in favor of perpetuating it forever.
In 1850, the Nashville Convention met from June 3 to June 12 "to devise and adopt some mode of resistance to northern aggression." While the delegates approved 28 resolutions affirming the South’s constitutional rights within the new western territories and similar issues, they essentially adopted a wait-and-see attitude before taking any drastic action. Compromise measures at the federal level diminished interest in a second Nashville Convention, but a much smaller one was held in November. It approved measures that affirmed the right of secession but rejected any unified secession among Southern states. During the brief presidency of Zachary Taylor, 1849-50, he was approached by pro-secession ambassadors. Taylor flew into a rage and declared he would raise an army, put himself at its head and force any state that attempted secession back into the Union.
The potato famine that struck Ireland and Germany in the 1840s–1850s sent waves of hungry immigrants to America’s shores. More of them settled in the North than in the South, where the existence of slavery depressed wages. These newcomers had sought refuge in the United States, not in New York or Virginia or Louisiana. To most of them, the U.S. was a single entity, not a collection of sovereign nations, and arguments in favor of secession failed to move them, for the most part.
The Election Of Abraham Lincoln And Nullification
The U.S. elections of 1860 saw the new Republican Party, a sectional party with very little support in the South, win many seats in Congress. Its candidate, Abraham Lincoln, won the presidency. Republicans opposed the expansion of slavery into the territories, and many party members were abolitionists who wanted to see the "peculiar institution" ended everywhere in the United States. South Carolina again decided it was time to nullify its agreement with the other states. On Dec. 20, 1860, the Palmetto State approved an Ordinance of Secession, followed by a declaration of the causes leading to its decision and another document that concluded with an invitation to form "a Confederacy of Slaveholding States."
The South Begins To Secede
South Carolina didn’t intend to go it alone, as it had in the Nullification Crisis. It sent ambassadors to other Southern states. Soon, six more states of the Deep South—Georgia, Florida, Alabama, Mississippi, Texas and Louisiana—renounced their compact with the United States. After Confederate artillery fired on Fort Sumter in Charleston Harbor, South Carolina, on April 12, 1861, Abraham Lincoln called for 75,000 volunteers to put down the rebellion. This led four more states— Virginia, Arkansas, North Carolina, and Tennessee—to secede; they refused to take up arms against their Southern brothers and maintained Lincoln had exceeded his constitutional powers by not waiting for approval of Congress (as Jackson had done in the Nullification Crisis) before declaring war on the South. The legislature of Tennessee, the last state to leave the Union, waived any opinion as to "the abstract doctrine of secession," but asserted "the right, as a free and independent people, to alter, reform or abolish our form of government, in such manner as we think proper."
In addition to those states that seceded, other areas of the country threatened to. The southern portions of Northern states bordering the Ohio River held pro-Southern, pro-slavery sentiments, and there was talk within those regions of seceding and casting their lot with the South.
A portion of Virginia did secede from the Old Dominion and formed the Union-loyal state of West Virginia. Its creation and admittance to the Union raised many constitutional questions—Lincoln’s cabinet split 50–50 on the legality and expediency of admitting the new state. But Lincoln wrote, “It is said that the admission of West-Virginia is secession, and tolerated only because it is our secession. Well, if we call it by that name, there is still difference enough between secession against the constitution, and secession in favor of the constitution.”
The Civil War: The End Of The Secession Movement
Four bloody years of war ended what has been the most significant attempt by states to secede from the Union. While the South was forced to abandon its dreams of a new Southern Confederacy, many of its people have never accepted the idea that secession was a violation of the U.S. Constitution, basing their arguments primarily on the Tenth Amendment to that constitution: "The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people."
Ongoing Calls For Secession & The Eternal Question: "Can A State Legally Secede?"
The ongoing debate continues over the question that has been asked since the forming of the United States itself: "Can a state secede from the Union of the United States?" Whether it is legal for a state to secede from the United States is a question that was fiercely debated before the Civil War (see the article below), and even now, that debate continues. From time to time, new calls have arisen for one state or another to secede, in reaction to political and/or social changes, and organizations such as the League of the South openly support secession and the formation of a new Southern republic.
Articles Featuring Secession From History Net Magazines
Was Secession Legal
Southerners insisted they could legally bolt from the Union.
Northerners swore they could not.
War would settle the matter for good.
Over the centuries, various excuses have been employed for starting wars. Wars have been fought over land or honor. Wars have been fought over soccer (in the case of the conflict between Honduras and El Salvador in 1969) or even the shooting of a pig (in the case of the fighting between the United States and Britain in the San Juan Islands in 1859).
But the Civil War was largely fought over equally compelling interpretations of the U.S. Constitution. Which side was the Constitution on? That’s difficult to say.
The interpretative debate—and ultimately the war—turned on the intent of the framers of the Constitution and the meaning of a single word: sovereignty—which does not actually appear anywhere in the text of the Constitution.
Southern leaders like John C. Calhoun and Jefferson Davis argued that the Constitution was essentially a contract between sovereign states—with the contracting parties retaining the inherent authority to withdraw from the agreement. Northern leaders like Abraham Lincoln insisted the Constitution was neither a contract nor an agreement between sovereign states. It was an agreement with the people, and once a state enters the Union, it cannot leave the Union.
It is a touchstone of American constitutional law that this is a nation based on federalism—the union of states, which retain all rights not expressly given to the federal government. After the Declaration of Independence, when most people still identified themselves not as Americans but as Virginians, New Yorkers or Rhode Islanders, this union of “Free and Independent States” was defined as a “confederation.” Some framers of the Constitution, like Maryland’s Luther Martin, argued the new states were “separate sovereignties.” Others, like Pennsylvania’s James Wilson, took the opposite view that the states “were independent, not Individually but Unitedly.”
Supporting the individual sovereignty claims is the fierce independence that was asserted by states under the Articles of Confederation and Perpetual Union, which actually established the name “The United States of America.” The charter, however, was careful to maintain the inherent sovereignty of its composite state elements, mandating that “each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated.” It affirmed the sovereignty of the respective states by declaring, “The said states hereby severally enter into a firm league of friendship with each other for their common defence [sic].” There would seem little question that the states agreed to the Confederation on the express recognition of their sovereignty and relative independence.
Supporting the later view of Lincoln, the perpetuality of the Union was referenced during the Confederation period. For example, the Northwest Ordinance of 1787 stated that “the said territory, and the States which may be formed therein, shall forever remain a part of this confederacy of the United States of America.”
The Confederation produced endless conflicts as various states issued their own money, resisted national obligations and favored their own citizens in disputes. James Madison criticized the Articles of Confederation as reinforcing the view of the Union as “a league of sovereign powers, not as a political Constitution by virtue of which they are become one sovereign power.” Madison warned that such a view could lead to the “dissolving of the United States altogether.” If the matter had ended there with the Articles of Confederation, Lincoln would have had a much weaker case for the court of law in taking up arms to preserve the Union. His legal case was saved by an 18th-century bait-and-switch.
A convention was called in 1787 to amend the Articles of Confederation, but several delegates eventually concluded that a new political structure—a federation—was needed. As they debated what would become the Constitution, the status of the states was a primary concern. George Washington, who presided over the convention, noted, “It is obviously impracticable in the federal government of these states, to secure all rights of independent sovereignty to each, and yet provide for the interest and safety of all.” Of course, Washington was more concerned with a working federal government—and national army—than resolving the question of a state’s inherent right to withdraw from such a union. The new government forged in Philadelphia would have clear lines of authority for the federal system. The premise of the Constitution, however, was that states would still hold all rights not expressly given to the federal government.
The final version of the Constitution never actually refers to the states as “sovereign,” which for many at the time was the ultimate legal game-changer. In the U.S. Supreme Court’s landmark 1819 decision in McCulloch v. Maryland, Chief Justice John Marshall espoused the view later embraced by Lincoln: “The government of the Union…is emphatically and truly, a government of the people.” Those with differing views resolved to leave the matter unresolved—and thereby planted the seed that would grow into a full civil war. But did Lincoln win by force of arms or force of argument?
On January 21, 1861, Jefferson Davis of Mississippi went to the well of the U.S. Senate one last time to announce that he had “satisfactory evidence that the State of Mississippi, by a solemn ordinance of her people in convention assembled, has declared her separation from the United States.” Before resigning his Senate seat, Davis laid out the basis for Mississippi’s legal claim, coming down squarely on the fact that in the Declaration of Independence “the communities were declaring their independence”—not “the people.” He added, “I have for many years advocated, as an essential attribute of state sovereignty, the right of a state to secede from the Union.”
Davis’ position reaffirmed that of John C. Calhoun, the powerful South Carolina senator who had long viewed the states as independent sovereign entities. In an 1833 speech upholding the right of his home state to nullify federal tariffs it believed were unfair, Calhoun insisted, “I go on the ground that [the] constitution was made by the States; that it is a federal union of the States, in which the several States still retain their sovereignty.” Calhoun allowed that a state could be barred from secession by a vote of two-thirds of the states under Article V, which lays out the procedure for amending the Constitution.
Lincoln’s inauguration on March 4, 1861, was one of the least auspicious beginnings for any president in history. His election was used as a rallying cry for secession, and he became the head of a country that was falling apart even as he raised his hand to take the oath of office. His first inaugural address left no doubt about his legal position: “No State, upon its own mere motion, can lawfully get out of the Union, that resolves and ordinances to that effect are legally void, and that acts of violence, within any State or States, against the authority of the United States, are insurrectionary or revolutionary, according to circumstances.”
While Lincoln expressly called for a peaceful resolution, this was the final straw for many in the South who saw the speech as a veiled threat. Clearly when Lincoln took the oath to “preserve, protect, and defend” the Constitution, he considered himself bound to preserve the Union as the physical creation of the Declaration of Independence and a central subject of the Constitution. This was made plain in his next major legal argument—an address where Lincoln rejected the notion of sovereignty for states as an “ingenious sophism” that would lead “to the complete destruction of the Union.” In a Fourth of July message to a special session of Congress in 1861, Lincoln declared, “Our States have neither more, nor less power, than that reserved to them, in the Union, by the Constitution—no one of them ever having been a State out of the Union. The original ones passed into the Union even before they cast off their British colonial dependence; and the new ones each came into the Union directly from a condition of dependence, excepting Texas. And even Texas, in its temporary independence, was never designated a State.”
It is a brilliant framing of the issue, which Lincoln proceeds to characterize as nothing less than an attack on the very notion of democracy:
Our popular government has often been called an experiment. Two points in it, our people have already settled—the successful establishing, and the successful administering of it. One still remains—its successful maintenance against a formidable [internal] attempt to overthrow it. It is now for them to demonstrate to the world, that those who can fairly carry an election, can also suppress a rebellion—that ballots are the rightful, and peaceful, successors of bullets; and that when ballots have fairly, and constitutionally, decided, there can be no successful appeal, back to bullets; that there can be no successful appeal, except to ballots themselves, at succeeding elections. Such will be a great lesson of peace; teaching men that what they cannot take by an election, neither can they take it by a war—teaching all, the folly of being the beginners of a war.
Lincoln implicitly rejected the view of his predecessor, James Buchanan. Buchanan agreed that secession was not allowed under the Constitution, but he also believed the national government could not use force to keep a state in the Union. Notably, however, it was Buchanan who sent troops to protect Fort Sumter six days after South Carolina seceded. The subsequent seizure of Fort Sumter by rebels would push Lincoln on April 14, 1861, to call for 75,000 volunteers to restore the Southern states to the Union—a decisive move to war.
Lincoln showed his gift as a litigator in the July 4th address, though it should be noted that his scruples did not stop him from clearly violating the Constitution when he suspended habeas corpus in 1861 and 1862. His argument also rejects the suggestion of people like Calhoun that, if states can change the Constitution under Article V by democratic vote, they can agree to a state leaving the Union. Lincoln’s view is absolute and treats secession as nothing more than rebellion. Ironically, as Lincoln himself acknowledged, that places the states in the same position as the Constitution’s framers (and presumably himself as King George).
But he did note one telling difference: “Our adversaries have adopted some Declarations of Independence; in which, unlike the good old one, penned by Jefferson, they omit the words ‘all men are created equal.’”
Lincoln’s argument was more convincing, but only up to a point. The South did in fact secede because it was unwilling to accept decisions by a majority in Congress. Moreover, the critical passage of the Constitution may be more important than the status of the states when independence was declared. Davis and Calhoun’s argument was more compelling under the Articles of Confederation, where there was no express waiver of withdrawal. The reference to the “perpetuity” of the Union in the Articles and such documents as the Northwest Ordinance does not necessarily mean each state is bound in perpetuity, but that the nation itself is so created.
After the Constitution was ratified, a new government was formed by the consent of the states that clearly established a single national government. While, as Lincoln noted, the states possessed powers not expressly given to the federal government, the federal government had sole power over the defense of its territory and maintenance of the Union. Citizens under the Constitution were guaranteed free travel and interstate commerce. Therefore it is in conflict to suggest that citizens could find themselves separated from the country as a whole by a seceding state.
Moreover, while neither the Declaration of Independence nor the Constitution says states can not secede, they also do not guarantee states such a right nor refer to the states as sovereign entities. While Calhoun’s argument that Article V allows for changing the Constitution is attractive on some levels, Article V is designed to amend the Constitution, not the Union. A clearly better argument could be made for a duly enacted amendment to the Constitution that would allow secession. In such a case, Lincoln would clearly have been warring against the democratic process he claimed to defend.
Neither side, in my view, had an overwhelming argument. Lincoln’s position was the one most likely to be upheld by an objective court of law. Faced with ambiguous founding and constitutional documents, the spirit of the language clearly supported the view that the original states formed a union and did not retain the sovereign authority to secede from that union.
Of course, a rebellion is ultimately a contest of arms rather than arguments, and to the victor goes the argument. This legal dispute would be resolved not by lawyers but by more practical men such as William Tecumseh Sherman and Thomas “Stonewall” Jackson.
Ultimately, the War Between the States resolved the Constitution’s meaning for any states that entered the Union after 1865, with no delusions about the contractual understanding of the parties. Thus, 15 states from Alaska to Colorado to Washington entered in the full understanding that this was the view of the Union. Moreover, the enactment of the 14th Amendment strengthened the view that the Constitution is a compact between “the people” and the federal government. The amendment affirms the power of the states to make their own laws, but those laws cannot “abridge the privileges or immunities of citizens of the United States.”
There remains a separate guarantee that runs from the federal government directly to each American citizen. Indeed, it was after the Civil War that the notion of being “American” became widely accepted. People now identified themselves as Americans and Virginians. While the South had a plausible legal claim in the 19th century, there is no plausible argument in the 21st century. That argument was answered by Lincoln on July 4, 1861, and more decisively at Appomattox Court House on April 9, 1865.
Jonathan Turley is one of the nation’s leading constitutional scholars and legal commentators. He teaches at George Washington University.
Article originally published in the November 2010 issue of America’s Civil War.
Second: Secession – Revisionism or Reality
Secession fever revisited
We can take an honest look at history, or just revise it to make it more palatable
Try this version of history: 150 years ago this spring, North Carolina and Tennessee became the final two Southern states to secede illegally from the sacred American Union in order to keep 4 million blacks in perpetual bondage. With Jefferson Davis newly ensconced in his Richmond capital just a hundred miles south of Abraham Lincoln’s legally elected government in Washington, recruiting volunteers to fight for his “nation,” there could be little doubt that the rebellion would soon turn bloody. The Union was understandably prepared to fight for its own existence.
Or should the scenario read this way? A century and a half ago, North Carolina and Tennessee joined other brave Southern states in asserting their right to govern themselves, limit the evils of unchecked federal power, protect the integrity of the cotton market from burdensome tariffs, and fulfill the promise of liberty that the nation’s founders had guaranteed in the Declaration of Independence. With Abraham Lincoln’s hostile minority government now raising militia to invade sovereign states, there could be little doubt that peaceful secession would soon turn into bloody war. The Confederacy was understandably prepared to fight for its own freedom.
Which version is true? And which is myth? Although the Civil War sesquicentennial is only a few months old, questions like this, which most serious readers believed had been asked and answered 50—if not 150—years ago, are resurfacing with surprising frequency. So-called Southern heritage Web sites are ablaze with alternative explanations for secession that make such scant mention of chattel slavery that the modern observer might think shackled plantation laborers were dues-paying members of the AFL-CIO. Some of the more egregious comments currently proliferating on the new Civil War blogs of both the New York Times (“Disunion”) and Washington Post (“A House Divided”) suggest that many contributors continue to believe slavery had little to do with secession: Lincoln had no right to serve as president, they argue; his policies threatened state sovereignty; Republicans wanted to impose crippling tariffs that would have destroyed the cotton industry; it was all about honor. Edward Ball, author of Slaves in the Family, has dubbed such skewed memory as “the whitewash explanation” for secession. He is right.
As Ball and scholars like William Freehling, author of Prelude to Civil War and The Road to Disunion, have pointed out, all today’s readers need to do in order to understand what truly motivated secession is to study the proceedings of the state conventions where separation from the Union was openly discussed and enthusiastically authorized. Many of these dusty records have been digitized and made available online—discrediting this fairy tale once and for all.
Consider these excerpts. South Carolina voted for secession first in December 1860, bluntly citing the rationale that Northern states had “denounced as sinful the institution of slavery.”
Georgia delegates similarly warned against the “progress of anti-slavery.” As delegate Thomas R.R. Cobb proudly insisted in an 1860 address to the Legislature, “Our slaves are the most happy and contented of workers.”
Mississippians boasted, “Our position is thoroughly identified with the institution of slavery—the greatest material interest of the world…. There is no choice left us but submission to the mandates of abolition, or a dissolution of the Union.” And an Alabama newspaper opined that Lincoln’s election plainly showed the North planned “to free the negroes and force amalgamation between them and the children of the poor men of the South.”
Certainly the effort to “whitewash” secession is not new. Jefferson Davis himself was maddeningly vague when he provocatively asked fellow Mississippians, “Will you be slaves or will you be independent?…Will you consent to be robbed of your property [or] strike bravely for liberty, property, honor and life?” Non-slaveholders—the majority of Southerners—were bombarded with similarly inflammatory rhetoric designed to paint Northerners as integrationist aggressors scheming to make blacks the equal of whites and impose race-mixing on a helpless population. The whitewash worked in 1861—but does that mean that it should be taken seriously today?
From 1960-65, the Civil War Centennial Commission wrestled with similar issues, and ultimately bowed too deeply to segregationists who worried that an emphasis on slavery—much less freedom—would embolden the civil rights movement then beginning to gain national traction. Keeping the focus on battlefield re-enactments, regional pride and uncritical celebration took the spotlight off the real cause of the war, and its potential inspiration to modern freedom marchers and their sympathizers. Some members of the national centennial commission actually argued against staging a 100th anniversary commemoration of emancipation at the Lincoln Memorial. Doing so, they contended, would encourage “agitators.”
In a way, it is more difficult to understand why so much space is again being devoted to this debate. Fifty years have passed since the centennial. The nation has been vastly transformed by legislation and attitude. We supposedly live in a “post-racial era.” And just two years ago, Americans (including voters in the former Confederate states of Virginia and North Carolina), chose the first African-American president of the United States.
Or is this, perhaps, the real underlying problem—the salt that still irritates the scab covering this country’s unhealed racial divide?
Just as some Southern conservatives decried a 1961 emphasis on slavery because it might embolden civil rights, 2011 revisionists may have a hidden agenda of their own: Beat back federal authority, reinvigorate the states’ rights movement and perhaps turn back the re-election of a black president who has been labeled as everything from a Communist to a foreigner (not unlike the insults hurled at the freedom riders half a century ago).
Fifty years from now, Americans will either celebrate the honesty that animated the Civil War sesquicentennial, or subject it to the same criticisms that have been leveled against the centennial celebrations of the 1960s. The choice is ours. As Lincoln once said, “The struggle of today is not altogether for today—it is for a vast future also.”
Harold Holzer is chairman of the Abraham Lincoln Bicentennial Foundation.
A bird's-eye view of pre-war New York displays the shipping commerce that made the city rich. Image courtesy of Library of Congress.
A NOTE FROM THE EDITOR: Because of a production problem, a portion of this article was omitted from …
Confederate soldiers under the command of Gen. Robert E. Lee camp on the outskirts of Hagerstown, Maryland, in September of 1862. Image courtesy of Weider History Group archive.
War seemed far away to the editors of a Maryland weekly newspaper–until …
A Louisiana youth wages a personal war with the Yankees on his doorstep
Aleck Mouton was 10 years old, barefoot and Confederate to the core when he confronted Maj. Gen. Nathaniel Banks, who had just invaded the tiny south Louisiana …
Simmering animosities between North and South signaled an American apocalypse
Any man who takes it upon himself to explain the causes of the Civil War deserves whatever grief comes his way, regardless of his good intentions. Having acknowledged …
Americans who lived through the Civil War established four great interpretive traditions regarding the conflict. The Union Cause tradition framed the war as preeminently an effort to maintain a viable republic in the face of secessionist actions that threatened both …
Vicksburg 1863, by Winston Groom, Alfred A. Knopf
Winston Groom is a first-rate spinner of yarns, and like the tales of his most famous fictional character, Forrest Gump, his accounts seamlessly transport readers into the story. Vicksburg 1863 is …
It's perfectly feasible to imagine that if the South had successfully left the Union, the West would also have split away
Did Confederate soldiers lose the will to fight as the outlook began to appear bleak for the South late …
Missouri in the Balance Struggle for St. Louis
By Anthony Monachello
The dark clouds of civil war gathered over the nation as two aggressive factions–the Wide-Awakes and the Minutemen–plotted to gain political control of Missouri and its most important city, St. …
Amid Bedbugs and Drunken Secessionists
BY JACK D. FOWLER
William Woods Averell was a man on a mission–at least he wanted to be. He had come to Washington, D.C., from his New York home to attend President Abraham Lincoln's inauguration on …
Suave, gentlemanly Lt. Col. Arthur Fremantle of Her Majesty's Coldstream Guards picked an unusual vacation spot: the Civil War-torn United States.
By Robert R. Hodges, Jr.
After graduating from Sandhurst, Great Britain's West Point, Arthur James Lyon Fremantle entered the … | http://www.historynet.com/secession | 13 |
53 | Binomial vs Normal Distribution
Probability distributions of random variables play an important role in the field of statistics. Out of those probability distributions, the binomial distribution and the normal distribution are two of the most commonly occurring ones in real life.
What is binomial distribution?
Binomial distribution is the probability distribution corresponding to the random variable X, which is the number of successes of a finite sequence of independent yes/no experiments each of which has a probability of success p. From the definition of X, it is evident that it is a discrete random variable; therefore, binomial distribution is discrete too.
The distribution is denoted as X ~ B(n,p) where n is the number of experiments and p is the probability of success. According to probability theory, we can deduce that B(n,p) follows the probability mass function P(X = k) = nCk p^k (1−p)^(n−k), for k = 0, 1, ..., n. From this equation, it can be further deduced that the expected value of X, E(X) = np, and the variance of X, V(X) = np(1−p).
For example, consider a random experiment of tossing a coin 3 times. Define success as obtaining H, failure as obtaining T and the random variable X as the number of successes in the experiment. Then X ~ B(3, 0.5) and the probability mass function of X is given by P(X = k) = 3Ck (0.5)^k (0.5)^(3−k). Therefore, the probability of obtaining at least 2 H’s is P(X ≥ 2) = P(X = 2 or X = 3) = P(X = 2) + P(X = 3) = 3C2(0.5²)(0.5¹) + 3C3(0.5³)(0.5⁰) = 0.375 + 0.125 = 0.5.
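A minimal Python sketch, using the same n = 3 and p = 0.5, reproduces this result directly from the probability mass function:

```python
# Minimal sketch: P(X >= 2) for X ~ B(3, 0.5), computed from the binomial
# probability mass function nCk * p^k * (1-p)^(n-k).
from math import comb

n, p = 3, 0.5

def pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

prob_at_least_two = pmf(2) + pmf(3)
print(prob_at_least_two)   # 0.5, matching the hand calculation above
```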
What is normal distribution?
Normal distribution is the continuous probability distribution defined by the probability density function f(x) = [1/(σ√(2π))] e^(−(x−μ)²/(2σ²)). The parameters μ and σ denote the mean and the standard deviation of the population of interest. When μ = 0 and σ = 1, the distribution is called the standard normal distribution.
This distribution is called normal since most of the natural phenomena follow the normal distribution. For example, the IQ of the human population is normally distributed. As seen from the graph, it is unimodal, symmetric about the mean and bell shaped. The mean, mode, and median coincide. The area under the curve corresponds to the portion of the population satisfying a given condition.
The portions of the population in the intervals (μ−σ, μ+σ), (μ−2σ, μ+2σ) and (μ−3σ, μ+3σ) are approximately 68.3%, 95.4% and 99.7% respectively.
What is the difference between Binomial and Normal Distributions?
- Binomial distribution is a discrete probability distribution whereas the normal distribution is a continuous one.
- The probability mass function of the binomial distribution is P(X = k) = nCk p^k (1−p)^(n−k), whereas the probability density function of the normal distribution is f(x) = [1/(σ√(2π))] e^(−(x−μ)²/(2σ²)).
- Binomial distribution is approximated with normal distribution under certain conditions but not the other way around. | http://www.differencebetween.com/difference-between-binomial-and-vs-normal-distribution/ | 13 |
54 | Video: Common Terms For Calculating Area, with Bassem Saad
Calculating area is a fundamental math skill that all students must learn. Here are common terms associated with calculating area, as well as their definitions. See Transcript
Transcript: Common Terms For Calculating Area
Hi, my name is Bassem Saad. I'm an associate math instructor and a Ph.D. candidate, and I'm here today for About.com to define common terms when calculating area.
Area is a way of understanding the size of an object. There are four geometric objects that we commonly want to know the size of -- that is a rectangle, a parallelogram, a triangle and a circle. We have a formula for each of their areas, based on measurements of certain edges or lengths.
Terms for Calculating Area of Rectangles
Ok, so the area of a rectangle is the base times the height. The base is just one edge of the rectangle and the height is just the adjacent edge, because the edges are perpendicular.
Terms for Calculating Area of a Parallelogram
The same is true for a parallelogram: here we have the base of the parallelogram, which is just one of the edges of the parallelogram, and the height. Note that the height is not the other edge -- the perpendicular edge -- it's actually a perpendicular line from the base to the highest point on the parallelogram.
Terms for Calculating Area of a Triangle
The same is true for the triangle, but the triangle is one half times the base, times height. So again, the base is just one edge of the triangle and the height is a perpendicular distance from the base to the highest point on the triangle.
Terms for Calculating Area of a Circle
There you have the circle: it is just pi times the radius squared. The radius is r and it's any line that starts from the center of the circle and goes out to the edge of the circle.
Units of Measurement for Calculating Area
So when we measure a length, say with a ruler, we may use different units: centimeters, meters, inches, or feet. When calculating the area, we have a corresponding set of units: a length in centimeters corresponds to centimeters squared for area; a length in meters corresponds to meters squared for area; a length in inches corresponds to inches squared for area; and a length in feet corresponds to feet squared for area.
Examples of Measurement Units for Calculating Area
To see how this works, let's look at a couple of examples. Say we had a rectangle with the base measured to be one foot and the height measured to be two feet. The area will be one foot, times two feet, resulting in two feet squared.
So for a circle, we could measure the radius to be one foot. So the area of a circle is pi, one foot squared. So that would just be pi feet squared, which is approximately equal to 3.14 feet squared. So now we know some common terms when calculating area.
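A minimal sketch, collecting the four formulas with the same example numbers:

```python
# Minimal sketch of the four area formulas discussed in the video.
import math

def rectangle_area(base, height):      # also works for a parallelogram,
    return base * height               # where height is the perpendicular height

def triangle_area(base, height):
    return 0.5 * base * height

def circle_area(radius):
    return math.pi * radius**2

print(rectangle_area(1, 2), "feet squared")   # 2 feet squared, as in the example
print(circle_area(1), "feet squared")         # about 3.14 feet squared
```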
Thanks for watching, and to learn more, visit us on the web at About.com.
| http://video.about.com/math/Common-Terms-For-Calculating-Area.htm | 13
52 | Maxwell’s Equations and the Lorentz Force Law together comprise the e/m field equations; i.e., those equations determining the interactions of charged particles in the vicinity of electric and magnetic fields and the resultant effect of those interactions on the values of the e/m field.
For ease of explanation, the following will refer to “fields” as though they possess some independent physical reality. They do not. The use of fields is an aid in understanding the forces exerted by and upon real particles, and how the positions (coordinates) of those particles may exist at a given time, or may vary over an interval of time. Field lines are also convenient notational devices to aid in understanding what physically is going on, and are not “real”. An oft-used example is the set of lines or contours of equal elevation relative to some fixed reference value, often found on topographic maps of land areas, and the varying pressure distributions on weather charts. Such lines do not exist as real physical entities; they can be used for calculation and visualization of simple or complex phenomena, but they do not effect changes of position or exert force or anything else, themselves. Imagining that they are real is called reification; it can be a convenient aid to better understanding, but it is incorrect to say that field lines of any type are real or “do” anything.
The implications of Maxwell’s Equations and the underlying research are:
- A static electric field can exist in the absence of a magnetic field; e.g., a capacitor with a static charge Q has an electric field without a magnetic field.
- A constant magnetic field can exist without an electric field; e.g., a conductor with constant current I has a magnetic field without an electric field.
- Where electric fields are time-variable, a non-zero magnetic field must exist.
- Where magnetic fields are time-variable, a non-zero electric field must exist.
- Magnetic fields can be generated in two ways other than by permanent magnets: by an electric current, or by a changing electric field.
- Magnetic monopoles cannot exist; all lines of magnetic flux are closed loops.
The Lorentz Force Law
The Lorentz Force Law expresses the total force on a charged particle exposed to both electric and magnetic fields. The resultant force dictates the motion of the charged particle by Newtonian mechanics.
F = Q(E + U×B) (remember, vectors are given in bold text)
where F is the Lorentz force on the particle; Q is the charge on the particle; E is the electric field intensity (and direction); and B is the magnetic flux density and direction.
Note that the force due to the electric field is constant and in the direction of E, so will cause constant acceleration in the direction of E. However, the force due to the combination of the particle’s velocity and the magnetic field is orthogonal to the plane of U and B due to the cross product of the two vectors in vector algebra (Appendix I). The magnetic field will therefore cause the particle to move in a circle (to gyrate) in a plane perpendicular to the magnetic field.
If B and E are parallel (as in a field-aligned current situation) then a charged particle approaching radially toward the direction of the fields will be constrained to move in a helical path aligned with the direction of the fields; that is to say, the particle will spiral around the magnetic field lines as a result of the Lorentz force, accelerating in the direction of the E field.
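A minimal sketch, assuming normalized units (Q = m = 1) and parallel E and B fields along z, shows this helical motion numerically:

```python
# Minimal sketch: integrating F = Q(E + U x B) for a charged particle with E
# parallel to B (both along z), in normalized units with Q = m = 1.
# The particle circles in the x-y plane while accelerating along z, tracing a helix.
import numpy as np

E = np.array([0.0, 0.0, 0.1])    # electric field along z (assumed value)
B = np.array([0.0, 0.0, 1.0])    # magnetic flux density along z (assumed value)

dt, steps = 0.001, 50_000        # small step so the simple explicit update stays accurate
r = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])    # initial velocity directed across B

for _ in range(steps):
    a = E + np.cross(v, B)       # acceleration from the Lorentz force (Q/m = 1)
    v = v + a * dt
    r = r + v * dt

print("final position:", r)      # x and y stay near the gyration circle; z grows steadily
```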
Further Discussion of Maxwell’s Equations
The Maxwell Equations are the result of combining the experimental results of various electric pioneers into a consistent mathematical formulation, whose names the individual equations still retain. They are expressed in terms of vector algebra and may appear, with equal validity, in either the point (differential) form or the integral form.
The Maxwell Equations can be expressed as a General Set, applicable to all situations; and as a "Free Space" set, a special case applicable only where there are no charges and no conduction currents. The General Set is the one which applies to plasma:
∇·D = ρ (Gauss' Law)
∇·B = 0 (Gauss' Law for Magnetism)
∇×H = Jc + ∂D/∂t (Ampere's Law with Maxwell's correction)
∇×E = −∂B/∂t (Faraday's Law)
where:
- E is the electric field intensity vector in newtons/coulomb (N/C) or volts/meter (V/m)
- D is the electric flux density in C/m²; D = εE for an isotropic medium of permittivity ε
- H is the magnetic field strength and direction in amperes/meter (A/m)
- B is the magnetic flux density in A/N・m, or tesla (T); B = μH for an isotropic medium of permeability μ
- Jc is the conduction current density in A/m²; Jc = σE for a medium of conductivity σ
- ρ is the charge density, C/m³
Gauss’ Law states that “the total electric flux (in coulombs) out of a closed surface is equal to the net charge enclosed within the surface”.
By definition, electric flux ψ originates on a positive charge and terminates on a negative charge. In the absence of a negative charge flux “terminates at infinity”. If more flux flows out of a region than flows into it, then the region must contain a source of flux; i.e., a net positive charge.
Gauss’ Law equates the total (net) flux flowing out through the closed surface of a 3D region (i.e., a surface which fully encases the region) to the net positive charge within the volume enclosed by the surface. A net flow into a closed surface indicates a net negative charge within it.
Note that it does not matter what size the enclosing surface is – the total flux will be the same if the enclosed charge is the same. A given quantity of flux emanates from a unit of charge and will terminate at infinity in the absence of a negative charge. In the case of an isolated single positive charge, any sphere, for example, drawn around the charge will receive the same total amount of flux. The flux density D will reduce in proportion (decrease per unit area) as the area of the sphere increases.
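A minimal sketch, assuming a 1 nC point charge, illustrates this: the flux density D falls as 1/r², but the total flux out of any enclosing sphere equals the enclosed charge.

```python
# Minimal sketch (assumed charge value): for a point charge, the flux density D
# falls off as 1/r^2 while the sphere's area grows as r^2, so the total flux out
# of ANY enclosing sphere equals the enclosed charge Q, which is Gauss' Law.
import math

Q = 1e-9                                   # enclosed charge, coulombs (assumed)
for r in (0.1, 1.0, 10.0):                 # radii of enclosing spheres, metres
    D = Q / (4 * math.pi * r**2)           # flux density in C/m^2 at radius r
    total_flux = D * (4 * math.pi * r**2)  # flux density times sphere area
    print(f"r = {r:5.1f} m   D = {D:.3e} C/m^2   total flux = {total_flux:.3e} C")
```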
Gauss’ Law for Magnetism states that “the total magnetic flux out of a closed surface is zero”.
Unlike electric flux which originates and terminates on charges, the lines of magnetic flux are closed curves with no starting point or termination point. This is a consequence of the definition of magnetic field strength, H, as resulting from a current (see Ampere’s Law, below), and the definition of the force field associated with H as the magnetic flux density B = μH in teslas (T) or newtons per amp meter (N/Am).
Therefore all magnetic flux lines entering a region via a closed surface must leave the region elsewhere on the same surface. A region cannot have any sources or sinks. This is equivalent to stating that magnetic monopoles do not exist.
Ampere’s Law with Maxwell’s Correction
Ampere’s Law is based on the Biot-Savart Law
dH = (I dl×ar) / (4πR²)
which states that "a differential (i.e., tiny segment of) magnetic field strength dH at any point results from a differential current element I dl of a closed current path (of current I). The magnetic field strength varies inversely with the square of the distance R from the current element and has a direction given by the cross-product of I dl and the unit vector ar of the line joining the current element to the point in question. The magnetic field strength is also independent of the medium in which it is measured."
As current elements have no independent existence, all elements making up the complete current path, i.e., a closed path, must be summed to find the total value of the magnetic field strength at any point. Thus:
H = ∫ (I dl×ar) / (4πR²)
where the integral is a closed line integral which may close at infinity.
Thence, for example, an infinitely long straight filamentary current I (closing at infinity) will produce a concentric cylindrical magnetic field circling the current in accordance with the right-hand rule, with strength decreasing with the radial distance r from the wire, or:
H = (I/2πr) ar
(note the vector notation in cylindrical coordinates; the direction of H is everywhere tangential to the circle of radius r)
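A minimal sketch, assuming a 10 A current, tabulates H (and the corresponding B = μ0H in free space) at a few radial distances:

```python
# Minimal sketch (assumed current value): magnetic field strength around an
# infinitely long straight wire, H = I / (2*pi*r), and the corresponding flux
# density B = mu_0 * H in free space.
import math

I = 10.0                      # current in amperes (assumed)
mu_0 = 4 * math.pi * 1e-7     # permeability of free space, H/m

for r in (0.01, 0.05, 0.10):  # radial distances from the wire, metres
    H = I / (2 * math.pi * r)         # field strength, A/m (tangential direction)
    B = mu_0 * H                      # flux density, teslas
    print(f"r = {r:.2f} m   H = {H:8.2f} A/m   B = {B:.3e} T")
```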
Ampere’s Law effectively inverts the Biot-Savart Law and states that “the line integral of the tangential component of the magnetic field strength around a closed path is equal to the current enclosed by the path”, or
∫H・dl = Ienc where the integral is a closed line integral
Alternatively, by definition of curl, Curl H or ∇×H = J, the current density.
This effectively means that a magnetic field will be generated by an electric current.
However, this only applies to time-invariant currents and static magnetic fields. As Jc = σE, this implies that the electric field is constant as well.
To overcome these restraints so as to allow for time-varying charge density and to allow for the correct interpretation of the propagation of e/m waves, Maxwell introduced a second term based on the Displacement Current, JD, where
JD = ∂D/∂t
arising from the rate of change of the electric field E.
Maxwell’s correction, as included in the revised Law, dictates that a magnetic field will also arise due to a changing electric field.
Faraday’s Law states that “if the magnetic flux Φ, linking (i.e., looping through) an open surface S bounded by a closed curve C, varies with time then a voltage v around C exists”; specifically
v = -dΦ/dt
or, in integral form,
∫cE・dl = -d(∫s B・dS)/dt for a plane area S and B normal to S
Thus if B varies with time there must be a non-zero E present, or, a changing magnetic field generates an electric field.
The minus sign in the equation above indicates Lenz’s Law, namely “the voltage induced by a changing flux has a polarity such that the current established in a closed path gives rise to a flux which opposes the change in flux”.
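A minimal sketch, assuming a small loop in a 50 Hz sinusoidal field, evaluates the induced voltage from Faraday's Law with the Lenz sign convention:

```python
# Minimal sketch (assumed values): Faraday's Law for a plane loop of area S in a
# uniform sinusoidal field B(t) = B0*sin(2*pi*f*t) normal to the loop.  The induced
# voltage is v = -d(Phi)/dt = -B0*S*2*pi*f*cos(2*pi*f*t); the minus sign is Lenz's Law.
import math

B0 = 0.2        # peak flux density, teslas (assumed)
S  = 0.01       # loop area, m^2 (assumed)
f  = 50.0       # frequency, Hz (assumed)

for t in (0.0, 0.005, 0.010):                       # sample times, seconds
    v = -B0 * S * 2 * math.pi * f * math.cos(2 * math.pi * f * t)
    print(f"t = {t:.3f} s   induced voltage = {v:+.3f} V")
```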
In the special case of a conductor moving through a time-invariant magnetic field, the induced polarity is such that the conductor experiences magnetic forces which oppose its motion. | http://www.thunderbolts.info/wp/2012/06/20/appendix-ii-the-electro-magnetic-field-equations/ | 13 |
82 | Spacecraft propulsion is used to change the velocity of spacecraft and artificial satellites, or in short, to provide delta-v. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research. Most spacecraft today are propelled by heating the reaction mass and allowing it to flow out the back of the vehicle. This sort of engine is called a rocket engine.
All current spacecraft use chemical rocket engines (bipropellant or solid-fuel) for launch. Most satellites have simple reliable chemical rockets (often monopropellant rockets) or resistojet rockets to keep their station, although some use momentum wheels for attitude control. A few use some sort of electrical engine for stationkeeping. Interplanetary vehicles mostly use chemical rockets as well, although a few have experimentally used ion thrusters with some success.
The necessity for propulsion systems
Artificial satellites must be launched into orbit, and once there they must accelerate to circularize their orbit. Once in the desired orbit, they often need some form of attitude control so that they are correctly pointed with respect to the Earth, the Sun, and possibly some astronomical object of interest. They are also subject to drag from the thin atmosphere, so that to stay in orbit for a long period of time some form of propulsion is occasionally necessary to make small corrections. Many satellites need to be moved from one orbit to another from time to time, and this also requires propulsion. When a satellite has exhausted its ability to adjust its orbit, its useful life is over.
Spacecraft designed to travel further also need propulsion methods. They need to be launched out of the Earth's atmosphere just as do satellites. Once there, they need to leave orbit and move around.
For interplanetary travel, a spacecraft must use its engines to leave Earth orbit. Once it has done so, it must somehow make its way to its destination. Current interplanetary spacecraft do this with a series of short-term orbital adjustments. In between these adjustments, the spacecraft simply falls freely along its orbit. The simplest fuel-efficient means to move from one circular orbit to another is with a Hohmann transfer orbit: the spacecraft begins in a roughly circular orbit around the Sun. A short period of thrust in the direction of motion accelerates or decelerates the spacecraft into an elliptical orbit around the Sun which is tangential to its previous orbit and also to the orbit of its destination. The spacecraft falls freely along this elliptical orbit until it reaches its destination, where another short period of thrust accelerates or decelerates it to match the orbit of its destination. Special methods such as aerobraking are sometimes used for this final orbital adjustment.
Some spacecraft propulsion methods such as solar sails provide very low but inexhaustible thrust; an interplanetary vehicle using one of these methods would follow a rather different trajectory, either constantly thrusting against its direction of motion in order to decrease its distance from the Sun or constantly thrusting along its direction of motion to increase its distance from the Sun.
Spacecraft for interstellar travel also need propulsion methods. No such spacecraft has yet been built, but many designs have been discussed. Since interstellar distances are very great, a tremendous velocity is needed to get a spacecraft to its destination in a reasonable amount of time. Acquiring such a velocity on launch and getting rid of it on arrival will be a formidable challenge for spacecraft designers.
Effectiveness of propulsion systems
When in space, the purpose of a propulsion system is to change the velocity v of a spacecraft. Since this is more difficult for more massive spacecraft, designers generally discuss momentum, mv. The amount of change in momentum is called impulse. So the goal of a propulsion method in space is to create an impulse.
When launching a spacecraft from the Earth, a propulsion method must overcome the Earth's gravitational pull in addition to providing acceleration.
The rate of change of velocity is called acceleration, and the rate of change of momentum is called force. To reach a given velocity, one can apply a small acceleration over a long period of time, or one can apply a large acceleration over a short time. Similarly, one can achieve a given impulse with a large force over a short time or a small force over a long time. This means that for maneuvering in space, a propulsion method that produces tiny accelerations but runs for a long time can produce the same impulse as a propulsion method that produces large accelerations for a short time. When launching from a planet, tiny accelerations cannot overcome the planet's gravitational pull and so cannot be used.
The law of conservation of momentum means that in order for a propulsion method to change the momentum of a space craft it must change the momentum of something else as well. A few designs take advantage of things like magnetic fields or light pressure in order to change the spacecraft's momentum, but in free space the rocket must bring along some mass to accelerate away in order to push itself forward. Such mass is called reaction mass.
In order for a rocket to work, it needs two things: reaction mass and energy. The impulse provided by launching a particle of reaction mass having mass m at velocity v is mv. But this particle has kinetic energy mv2/2, which must come from somewhere. In a conventional solid fuel rocket, the fuel is burned, providing the energy, and the reaction products are allowed to flow out the back, providing the reaction mass. In an ion thruster, electricity is used to accelerate ions out the back. Here some other source must provide the electrical energy (perhaps a solar panel or a nuclear reactor) while the ions provide the reaction mass.
When discussing the efficiency of a propulsion system, designers often focus on the reaction mass. After all, energy can in principle be produced without much difficulty, but the reaction mass must be carried along with the rocket and irretrievably consumed when used. A way of measuring the amount of impulse that can be obtained from a fixed amount of reaction mass is the specific impulse. This is the impulse per unit mass in newton seconds per kilogram (Ns/kg). This corresponds to metres per second (m/s), and is the effective exhaust velocity ve.
A rocket with a high exhaust velocity can achieve the same impulse with less reaction mass. However, the kinetic energy is proportional to the square of the exhaust velocity, so that more efficient engines (in the sense of having a large specific impulse) require more energy to run.
A second problem is that if the engine is to provide a large amount of thrust, that is, a large amount of impulse per second, it must also provide a large amount of energy per second. So highly efficient engines require enormous amounts of energy per second to produce high thrusts. As a result, most high-efficiency engine designs also provide very low thrust.
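A minimal sketch, assuming a fixed 1 N of thrust, makes the trade-off concrete: the propellant flow rate falls with exhaust velocity while the required jet power rises.

```python
# Minimal sketch (assumed thrust level): for a fixed thrust F, the propellant flow
# rate is F / ve while the minimum jet power is F * ve / 2, so raising the exhaust
# velocity saves reaction mass but demands ever more power.
F = 1.0                                # desired thrust in newtons (assumed)
for ve in (3_000, 10_000, 50_000):     # effective exhaust velocities, m/s
    mdot = F / ve                      # propellant consumption, kg/s
    power = 0.5 * F * ve               # minimum jet power, watts
    print(f"ve = {ve:6d} m/s   mdot = {mdot:.2e} kg/s   power = {power:8.0f} W")
```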
Burning the entire usable propellant of a spacecraft through the engines in a straight line would produce a net velocity change to the vehicle; this number is termed 'delta-v'.
The total Δv of a vehicle can be calculated using the rocket equation, where M is the mass of fuel, P is the mass of the payload (including the rocket structure), and Isp is the specific impulse of the rocket. This is known as the Tsiolkovsky rocket equation:
Δv = Isp ln((M + P) / P)
(with Isp expressed in m/s, i.e. as the effective exhaust velocity).
For a long voyage, the majority of the spacecraft's mass may be reaction mass. Since a rocket must carry all its reaction mass with it, most of the first reaction mass goes towards accelerating reaction mass rather than payload. If we have a payload of mass P, the spacecraft needs to change its velocity by Δv, and the rocket engine has exhaust velocity ve, then the mass M of reaction mass which is needed can be calculated using the rocket equation and the formula for Isp:
M = P (e^(Δv / ve) − 1)
For Δv much smaller than ve, this equation is roughly linear, and little reaction mass is needed. If Δv is comparable to ve, then there needs to be about twice as much fuel as combined payload and structure (which includes engines, fuel tanks, and so on). Beyond this, the growth is exponential; speeds much higher than the exhaust velocity require very high ratios of fuel mass to payload and structural mass.
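A minimal sketch, assuming a chemical-rocket exhaust velocity of 4.5 km/s, tabulates the required propellant mass per unit of payload-plus-structure mass:

```python
# Minimal sketch: propellant mass M needed per unit payload-plus-structure mass P,
# from M = P*(exp(delta_v/ve) - 1).  The growth is roughly linear for
# delta_v << ve and exponential beyond it, as described above.
import math

ve = 4_500.0                                   # exhaust velocity, m/s (assumed)
for delta_v in (1_000, 4_500, 9_000, 13_500):  # desired velocity change, m/s
    ratio = math.exp(delta_v / ve) - 1.0       # propellant mass per unit of P
    print(f"delta_v = {delta_v:6d} m/s   M/P = {ratio:6.2f}")
```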
In order to achieve this, some amount of energy must go into accelerating the reaction mass. Every engine will waste some energy, but even assuming 100% efficiency, the engine will need energy amounting to
E = ½ M ve²
but even with 100% engine efficiency, certainly not all of this energy ends up in the vehicle; some of it, indeed usually most of it, ends up as kinetic energy of the exhaust.
For a mission, for example, when launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must be overcome by using fuel. It is typical to combine these and other effects into an effective mission delta-v. For example, a launch mission to low Earth orbit requires about 9.3-10 km/s delta-v. These mission delta-vs are typically numerically integrated on a computer.
Suppose we want to send a 10,000-kg space probe to Mars. The required Δv from LEO is approximately 3000 m/s, using a Hohmann transfer orbit. (A manned probe would need to take a faster route and use more fuel). For the sake of argument, let us say that the following thrusters may be used:
Instead, a much smaller, less powerful generator may be included which will take much longer to generate the total energy needed. This lower power is only sufficient to accelerate a tiny amount of fuel per second, but over long periods the velocity will be finally achieved. However, since in any case transit times for this Hohmann transfer orbit are on the order of years, this need not slow down the mission. In fact, the best orbit will probably no longer be a Hohmann transfer orbit at all, but instead some constant-acceleration orbit which might be faster or slower. But the launched mass is often lower, which can lower cost.
Interestingly, for a given mission delta-v, there is a fixed Isp that minimises the overall energy used by the rocket. This comes to an exhaust velocity of about 2/3 of the delta-v (see also the energy computed from the rocket equation). Drives such as VASIMR, and to a lesser extent other ion thrusters, have exhaust velocities that can be enormously higher than this ideal, and thus end up power-source limited and give very low thrust. Since most of the energy ends up in the exhaust in these situations, and the vehicle performance is limited by available generator power, lower specific impulse engines are often more appropriate for lower delta-v missions.
Propulsion methods can be classified based on their means of accelerating the reaction mass. There are also some special methods for launches, planetary arrivals, and landings.
A rocket engine accelerates its reaction mass by heating it, giving hot high-pressure gas or plasma. The reaction mass is then allowed to escape from the rear of the vehicle by passing through a de Laval nozzle, which dramatically accelerates the reaction mass, converting thermal energy into kinetic energy. It is this nozzle which gives a rocket engine its characteristic shape.
Rockets emitting gases are limited by the fact that their exhaust temperature cannot be so high that the nozzle and reaction chamber are damaged; most large rockets have elaborate cooling systems to prevent damage to either component. Rockets emitting plasma can potentially carry out reactions inside a magnetic bottle and release the plasma via a magnetic nozzle, so that no solid matter need come in contact with the plasma. Of course, the machinery to do this is complex, but research into nuclear fusion has developed methods.
Rocket engines that could be used in space (all emit gases unless otherwise noted):
When launching a vehicle from the Earth's surface, the atmosphere poses problems. For example, the precise shape of the most efficient de Laval nozzle for a rocket depends strongly on the ambient pressure. For this reason, various exotic nozzle designs such as the plug nozzle, the expanding nozzle and the aerospike have been proposed, each having some way to adapt to changing ambient air pressure.
On the other hand, rocket engines have been proposed that take advantage of the air in some way (as do jet engines and other air-breathing engines):
Electromagnetic acceleration of reaction mass
Rather than relying on high temperature and fluid dynamics to accelerate the reaction mass to high speeds, there are a variety of methods that use electrostatic or electromagnetic forces to accelerate the reaction mass directly. Usually the reaction mass is a stream of ions. Such an engine requires electric power to run, and high exhaust velocities require large amounts of power to run.
It turns out that to a reasonable approximation, for these drives, that fuel use, energetic efficiency and thrust are all inversely proportional to exhaust velocity. Their very high exhaust velocity means they require huge amounts of energy and provide low thrust; but use hardly any fuel.
For some missions, solar energy may be sufficient, but for others nuclear energy will be necessary; engines drawing their power from a nuclear source are called nuclear electric rockets. With any current source of power, the maximum amount of power that can be generated limits the maximum amount of thrust that can be produced while adding significant mass to the spacecraft.
Some electromagnetic methods:
The Biefeld-Brown effect is a somewhat exotic electrical effect. In air, a voltage applied across a particular kind of capacitor produces a thrust. There have been claims that this also happens in a vacuum due to some sort of coupling between the electromagnetic field and gravity, but recent experiments show no evidence of this hypothesis.
Systems without reaction mass
The law of conservation of momentum states that any engine which uses no reaction mass cannot move the center of mass of a spaceship (changing orientation, on the other hand, is possible). But space is not empty, especially space inside the Solar System; there is a magnetic field and a solar wind. Various propulsion methods try to take advantage of this; since all these things are very diffuse, propulsion structures need to be large.
Space drives that need no (or little) reaction mass:
High thrust is of vital importance for launch, the thrust per unit mass has to be well above g, see also gravity drag. Many of the propulsion methods above do not provide that much thrust. Exhaust toxicity or other side effects can also have detrimental effects on the environment the spacecraft is launching from, ruling out other propulsion methods.
One advantage that spacecraft have in launch is the availability of infrastructure on the ground to assist them. Proposed ground-assisted launch mechanisms include:
Planetary arrival and landing
When a vehicle is to enter orbit around its destination planet, or when it is to land, it must adjust its velocity. This can be done using all the methods listed above (provided they can generate a high enough thrust), but there are a few methods that can take advantage of planetary atmospheres.
Gravitational slingshots can also be used to carry a probe onward to other destinations.
Methods requiring new principles of physics
In addition, a variety of hypothetical propulsion techniques have been considered that would require entirely new principles of physics to realize. As such, they are currently highly speculative:
Table of methods and their efficiencies
Below is a summary of some of the more popular, proven technologies, followed by increasingly speculative methods.
Three numbers are shown. The first is the specific impulse: the amount of thrust that can be produced using a unit of fuel. This is the most important characteristic of the propulsion method as it determines the velocity at which exorbitant amounts of fuel begin to be necessary (see the section on calculations, above).
The second and third are the typical amounts of thrust and the typical burn times of the method. Outside a gravitational potential small amounts of thrust applied over a long period will give the same effect as large amounts of thrust over a short period.
This result does not apply when the object is influenced by gravity. | http://askfactmaster.com/Rocket_engine | 13 |
71 | Archimedes' principle states that the buoyant force on an object immersed in a fluid is equal to the weight of the displaced fluid. To calculate the buoyant force, we use the equation buoyant force = density of fluid x volume of displaced fluid x acceleration due to gravity. For a completely submerged object, the volume of displaced fluid equals the volume of the object. If the object is floating, the volume of the displaced fluid is less than the volume of the object but the buoyant force = the weight of the object.
Alright let's talk about Archimedes Principle. Archimedes Principle has been around for a really, really long time and it was very important to rulers of like Greece and Egypt years and years and years ago, and we'll talk about why in just a minute. So what Archimedes said was that the buoyant force on something is equal to the weight of the displace fluid. Now what does a buoyant force mean? The buoyant force, is the net force that is exerted on an object that's immersed in a fluid. Alright so we've got a force from the pressure on the bottom and a force from the pressure on the top we add them together, we want the net force and that's the buoyant force. And so what Archimedes said was that, that buoyant force is the weight of how much fluid the object is displacing. So buoyant force equals weight of the displaced fluid. Well what is that? Well it's the density of the fluid times the volume of the volume of the fluid that's benen displaced. So that's the volume of the object that's immersed in the fluid times the acceleration due to gravity.
Alright now we have 2 major situations in which we can use Archimedes Principle, if the object is completely submerged, so the entire object is immersed in the fluid then the volume displaced equals the volume of the object. Alright, now so that makes it really, really simple. All I need to know is what's the volume of the object and I'm done. If the object is floating on the other hand, then what that means, is that it's not all the way immersed. What that means is that the volume displaced is actually less than the whole volume, because some of it, is floating on the top, we said it was floating. So in the floating situation we always have the buoyant force is equal to the weight of the object because the object is floating. And that means that its weight has to be canceled by an upward force, what's that upward force? That's the buoyant force, so these are the 2 separate situations and they're entirely separate.
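A minimal sketch, with assumed densities and volumes, works one example of each of those two situations:

```python
# Minimal sketch (assumed values): Archimedes' principle for a fully submerged
# object and for a floating one.  Submerged: buoyant force = rho_fluid * V_object * g.
# Floating: buoyant force = weight, so the submerged fraction equals
# rho_object / rho_fluid.
g = 9.8              # m/s^2
rho_water = 1000.0   # kg/m^3

# fully submerged object (a 0.002 m^3 metal block, assumed numbers)
V_object = 0.002
buoyant_force = rho_water * V_object * g
print(f"submerged: buoyant force = {buoyant_force:.1f} N")

# floating object (wood with density 600 kg/m^3, assumed number)
rho_object = 600.0
submerged_fraction = rho_object / rho_water
print(f"floating: fraction of volume under water = {submerged_fraction:.2f}")
```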
We'll see that there're equations that you can derive that are perfectly valid when the object is fully submerged but that are not true for floating objects. So you just need to be careful about that distinction. Alright, so why is Archimedes Principle true? Well we can look at a situation in which we've got an object immersed in a fluid like this, alright now what are the forces acting on this object we're going to draw a free body diagram because we're good Physicists so we've got weight and then we've got 2 forces that are acting from the fluid. The fluid has a pressure in it, so there's a pressure at the bottom and a pressure at the top. Now pressure is force per unit area, and it points in any direction. So the pressure at the bottom, the force that's acting on the object is pushing up because that's what direction, I mean if it's going to push on the object well what's, it's going to push up right? So the upward force is pressure at the bottom times the area, because force is pressure times area.
Same way pressure at the top is pushing down. So the downward force is pressure at the top times the area. Now the buoyant force is the net force exerted on the object by the fluid, so that means that it's equal to pressure at the bottom times the area minus pressure at the top times the area. So the area I can take out because it's the same for top and the bottom, so that means that it's the change in pressure times the area. Now we know that as you descend into a fluid the pressure increases. How much is the increase? Well the change in pressure is given by the density of the fluid times acceleration due to gravity times how far you went down. Now here we went down a distance h, so look what we got here. h times a that's the volume displaced, so that means that the buoyant force is equal to density of the fluid times acceleration due to gravity times the amount of volume that was displaced. So Archimedes Principle alright why do people care? Well one of the most enigmatic uses of this principle in history has been to determine the density of valuable objects like crowns and vase and things like that. Are they made of gold or is somebody trying to cheat me?
Alright here's the idea, all you need to do is weigh the object in air and then weigh the same object when it's emerged in water. So why does this work? Well here's the idea, when I weigh an object basically what I'm doing is I've got a scale and I'm trying to measure how much force it requires for me to suspend the object? So I've got a tension force going up from my scale and then I've got a weight going down and this needs to equal each other. So when I weigh it in air, the tension force is equal to mg, is equal to the weight. What about when I weigh it in water? Well when I weigh it in water I've got a buoyant force now and this buoyant force depends on the volume of the crown and the density of water. So that means that this buoyant force inside of it has information about the density. So when I look this tension here, this tension is the weight of the crown in water. It's going to be smaller than the weight of the crown in air and that difference is this buoyant force. And that buoyant force contains information that I can pull out about the density of the crown. And that's Archimedes Principle. | http://www.brightstorm.com/science/physics/solids-liquids-and-gases/archimedes-principle/ | 13 |
175 | Picture of Saturn V Launch for Apollo 15 Mission. Source: NASA
Rocket physics is basically the application of Newton's Laws to a system with variable mass. A rocket has variable mass because its mass decreases over time, as a result of its fuel (propellant) burning off.
A rocket obtains thrust by the principle of action and reaction (Newton's third law). As the rocket propellant ignites, it experiences a very large acceleration and exits the back of the rocket (as exhaust) at a very high velocity. This backwards acceleration of the exhaust exerts a "push" force on the rocket in the opposite direction, causing the rocket to accelerate forward. This is the essential principle behind the physics of rockets, and how rockets work.
The equations of motion of a rocket will be derived next.
Rocket Physics — Equations Of Motion
To find the equations of motion, apply the principle of impulse and momentum to the "system", consisting of rocket and exhaust. In this analysis of the rocket physics we will use Calculus to set up the governing equations. For simplicity, we will assume the rocket is moving in a vacuum, with no gravity, and no air resistance (drag).
To properly analyze the rocket physics, consider the figure below which shows a schematic of a rocket moving in the vertical direction. The two stages, (1) and (2), show the "state" of the system at time t and time t + dt, where dt is a very small (infinitesimal) time step. The system (consisting of rocket and exhaust) is shown as inside the dashed line.
where:
m is the mass of the rocket (including propellant), at stage (1)
me is the total mass of the rocket exhaust (that has already exited the rocket), at stage (1)
v is the velocity of the rocket, at stage (1)
Je is the linear momentum of the rocket exhaust (that has already exited the rocket), at stage (1). This remains constant between (1) and (2)
dme is the mass of rocket propellant that has exited the rocket (in the form of exhaust), between (1) and (2)
dv is the change in velocity of the rocket, between (1) and (2)
ve is the velocity of the exhaust exiting the rocket, at stage (2)
Note that all velocities are measured with respect to ground (an inertial reference frame).
The sign convention in the vertical direction is as follows: "up" is positive and "down" is negative.
Between (1) and (2), the change in linear momentum in the vertical direction of all the particles in the system, is due to the sum of the external forces in the vertical direction acting on all the particles in the system.
We can express this mathematically using Calculus and the principle of impulse and momentum:
(m − dme)(v + dv) + dme ve + Je − (mv + Je) = ΣFy dt
where ΣFy is the sum of the external forces in the vertical direction acting on all the particles in the system (consisting of rocket and exhaust).
Expand the above expression. In the limit as dt → 0 we may neglect the "second-order" term dme dv. Divide by dt and simplify. We get
m(dv/dt) + (ve − v)(dme/dt) = ΣFy          (1)
Since the rocket is moving in a vacuum, with no gravity, and no air resistance (drag), then ΣFy = 0 since no external forces are acting on the system. As a result, the above equation becomes
m(dv/dt) = (v − ve)(dme/dt)          (2)
The left side of this equation must represent the thrust acting on the rocket, since a = dv/dt is the acceleration of the rocket, and ΣF = ma (Newton's second law).
Therefore, the thrust T acting on the rocket is equal to
T = (dme/dt)(v − ve)          (3)
The term (v − ve) is the velocity of the exhaust gases relative to the rocket. This is approximately constant in rockets. The term dme/dt is the burn rate of rocket propellant.
As the rocket loses mass due to the burning of propellant, its acceleration increases (for a given thrust T). The maximum acceleration is therefore just before all the propellant burns off.
From equation (2),
m(dv/dt) = (v − ve)(dme/dt)
The mass of the ejected rocket exhaust equals the negative of the mass change of the rocket. Thus, dme = −dm and
m(dv/dt) = −(v − ve)(dm/dt)
Again, the term (v − ve) is the velocity of the exhaust gases relative to the rocket, which is approximately constant. For simplicity set u = v − ve, so that
m dv = −u dm
Integrate the above equation using Calculus. We get
v − vi = u ln(mi / m)
This is a very useful equation coming out of the rocket physics analysis, shown above. The variables are defined as follows: vi is the initial rocket velocity and mi is the initial rocket mass. The terms v and m are the velocity of the rocket and its mass at any point in time thereafter (respectively).
Note that the change in velocity (delta-v) is always the same no matter what the initial velocity vi is. This is a very useful result coming out of the rocket physics analysis. The fact that delta-v is constant is useful for those instances where powered gravity assist is used to increase the speed of a rocket. By increasing the speed of the rocket at the point when its speed reaches a maximum (during the periapsis stage), the final kinetic energy of the rocket is maximized. This in turn maximizes the final velocity of the rocket. To visualize how this works, imagine dropping a ball onto a floor. We wish to increase the velocity of the ball by a constant amount delta-v at some point during its fall, such that it rebounds off the floor with the maximum possible velocity. It turns out that the point at which to increase the velocity is just before the ball strikes the floor. This maximizes the total energy of the ball (gravitational plus kinetic) which enables it to rebound off the floor with the maximum velocity (and the maximum kinetic energy). This maximum ball velocity is analogous to the maximum possible velocity reached by the rocket at the completion of the gravity assist maneuver. This is known as the Oberth Effect (opens in new window). It is a very useful principle related to rocket physics.
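A minimal sketch, with assumed numbers, shows why the timing of the burn matters even though the delta-v itself does not change:

```python
# Minimal sketch (assumed numbers): the same delta-v applied at different initial
# speeds always changes the speed by the same amount, but the kinetic-energy gain
# is largest when the burn happens at the highest speed (the Oberth effect).
dv = 1_000.0                      # velocity change from the burn, m/s (assumed)
m = 1_000.0                       # spacecraft mass, kg (assumed)

for v_initial in (2_000.0, 8_000.0, 30_000.0):   # speed at the moment of the burn, m/s
    v_final = v_initial + dv
    ke_gain = 0.5 * m * (v_final**2 - v_initial**2)
    print(f"burn at {v_initial:8.0f} m/s -> gains {ke_gain/1e6:8.1f} MJ of kinetic energy")
```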
In the following discussion on rocket physics we will look at staging and how it can also be used to obtain greater rocket velocity.
Rocket Physics — Staging
Often times in a space mission, a single rocket cannot carry enough propellant (along with the required tankage, structure, valves, engines and so on) to achieve the necessary mass ratio to achieve the desired final orbital velocity (v
). This presents a unique challenge in rocket physics. The only way to overcome this difficulty is by "shedding" unnecessary mass once a certain amount of propellant is burned off. This is accomplished by using different rocket stages and ejecting spent fuel tanks plus associated rocket engines used in those stages, once those stages are completed.
The figure below illustrates how staging works.
The picture below shows the separation stage of the Saturn V rocket for the Apollo 11 mission.
The picture below shows the separation stage of the twin rocket boosters of the Space Shuttle.
In the following discussion on rocket physics we will look at rocket efficiency and how it relates to modern (non-rocket powered) aircraft.
Rocket Physics — Efficiency
Rockets can accelerate even when the exhaust relative velocity is moving slower than the rocket (meaning ve is in the same direction as the rocket velocity). This differs from propeller engines or air breathing jet engines, which have a limiting speed equal to the speed at which the engine can move the air (while in a stationary position). So when the relative exhaust air velocity is equal to the engine/plane velocity, there is zero thrust. This essentially means that the air is passing through the engine without accelerating, therefore no push force (thrust) is possible. Thus, rocket physics is fundamentally different from the physics in propeller engines and jet engines.
Rockets can convert most of the chemical energy of the propellant into mechanical energy (as much as 70%). This is the energy that is converted into motion, of both the rocket and the propellant/exhaust. The rest of the chemical energy of the propellant is lost as waste heat. Rockets are designed to lose as little waste heat as possible. This is accomplished by having the exhaust leave the rocket nozzle at as low a temperature as possible. This maximizes the Carnot efficiency, which maximizes the mechanical energy derived from the propellant (and minimizes the thermal energy lost as waste heat). Carnot efficiency applies to rockets because rocket engines are a type of heat engine, converting some of the initial heat energy of the propellant into mechanical work (and losing the remainder as waste heat). Thus, rocket physics is related to heat engine physics. For more information see Wikipedia's article on heat engines (opens in new window).
In the following discussion on rocket physics we will derive the equation of motion for rocket flight in the presence of air resistance (drag) and gravity, such as for rockets flying near the earth's surface.
Rocket Physics — Flight Near Earth's Surface
The rocket physics analysis in this section is similar to the previous one, but we are now including the effect of air resistance (drag) and gravity, which is a necessary inclusion for flight near the earth's surface.
For example, let’s consider a rocket moving straight upward against an atmospheric drag force FD and against gravity g (equal to 9.8 m/s² on earth). The figure below illustrates this.
The point shown in the figure is the center of mass of the rocket at the instant shown. The weight of the rocket acts through this point.
From equations (1) and (3) and using the fact that acceleration a = dv/dt, we can write the following general equation:
m(dv/dt) = ΣFy + T
The sum of the external forces acting on the rocket is the gravity force plus the drag force. Thus, from the above equation,
ΣFy = −mg − FD
As a result,
m(dv/dt) = T − mg − FD
This is the general equation of motion resulting from the rocket physics analysis accounting for the presence of air resistance (drag) and gravity. Looking closely at this equation, you can see that it is an application of ΣF = ma (Newton's second law). Note that the mass m of the rocket changes with time (due to propellant burning off), and T is the thrust given from equation (3). An expression for the drag force FD can be found on the page on drag force. Due to the complexity of the drag term FD, the above equation must be solved numerically to determine the motion of the rocket as a function of time.
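A minimal sketch of such a numerical solution, with assumed parameter values and a simple quadratic drag model (not a model taken from this article), steps the equation of motion forward in time:

```python
# Minimal sketch (all parameter values assumed): numerically integrating
# m*(dv/dt) = T - m*g - FD for a rocket climbing vertically, with a quadratic
# drag model FD = 0.5*rho*Cd*A*v^2 and a constant burn rate.
g = 9.8
rho, Cd, A = 1.2, 0.5, 0.8          # air density, drag coefficient, frontal area (assumed)
u = 2_500.0                         # exhaust velocity relative to rocket, m/s (assumed)
burn_rate = 5.0                     # propellant burn rate dme/dt, kg/s (assumed)
m, m_dry = 1_000.0, 400.0           # initial and dry mass, kg (assumed)

v, h, t, dt = 0.0, 0.0, 0.0, 0.01
while m > m_dry:
    thrust = burn_rate * u          # equation (3), assuming optimal nozzle expansion
    drag = 0.5 * rho * Cd * A * v * abs(v)
    a = (thrust - m * g - drag) / m
    v += a * dt
    h += v * dt
    m -= burn_rate * dt
    t += dt

print(f"burnout at t = {t:.1f} s: v = {v:.0f} m/s, altitude = {h:.0f} m")
```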
Another important analysis in the study of rocket physics is designing a rocket that experiences minimal atmospheric drag. At high velocities, air resistance is significant. So for purposes of energy efficiency, it is necessary to minimize the atmospheric drag experienced by the rocket, since energy used to overcome drag is energy that is wasted. To minimize drag, rockets are made as aerodynamic as possible, such as with a pointed nose to better cut through the air, as well as using stabilizing fins at the rear of the rocket (near the exhaust), to help maintain steady orientation during flight.
In the following discussion on rocket physics we will take a closer look at thrust.
Rocket Physics — A Closer Look At Thrust
The thrust given in equation (3) is valid for an optimal nozzle expansion. This assumes that the exhaust gas flows in an ideal manner through the rocket nozzle. But this is not necessarily true in real-life operation. Therefore, in the following rocket physics analysis we will develop a thrust equation for non-optimal flow of the exhaust gas through the rocket nozzle.
To set up the analysis consider the basic schematic of a rocket engine, shown below.
where:
Pi is the internal pressure inside the rocket engine (which may vary with location)
Pa is the ambient pressure outside the rocket engine (assumed constant)
Pe is the pressure at the exit plane of the rocket engine nozzle (this is taken as an average pressure along this plane)
Ae is the cross-sectional area of the opening at the nozzle exit plane
The arrows along the top and sides represent the pressure acting on the wall of the rocket engine (inside and outside). The arrows along the bottom represent the pressure acting on the exhaust gas, at the exit plane.
Gravity and air resistance are ignored in this analysis (their effect can be included separately, as shown in the previous section).
Next, isolate the propellant (plus exhaust) inside the rocket engine. It is useful to do this because it allows us to fully account for the contact force between rocket engine wall and propellant (plus exhaust). This contact force can then be related to the thrust experienced by the rocket, as will be shown. The schematic below shows the isolated propellant (plus exhaust). The dashed blue line (along the top and sides) represents the contact interface between the inside wall of the rocket engine and the propellant (plus exhaust). The dashed black line (along the bottom) represents the exit plane of the exhaust gas, upon which the pressure Pe acts.
Fw is the resultant downward force exerted on the propellant (plus exhaust) due to contact with the inside wall of the rocket engine (represented by the dashed blue line). This force is calculated by: (1) Multiplying the local pressure Pi at a point on the inside wall by a differential area on the wall, (2) Using Calculus, integrating over the entire inside wall surface to find the resultant force, and (3) Determining the vertical component of this force (Fw). The details of this calculation are not shown here.
(Note that, due to geometric symmetry of the rocket engine, the resultant force acts in the vertical direction, and there is no sideways component).
Now, sum all the forces acting on the propellant (plus exhaust) and then apply Newton's second law:
PeAe − Fw = mp·a − u(dme/dt)          (5)
where:
mp is the mass of the propellant/exhaust inside the rocket engine
a is the acceleration of the rocket engine
u is the velocity of the exhaust gases relative to the rocket, which is approximately constant
dme/dt is the burn rate of the propellant
The right side of the above equation is derived using the same method that was used for deriving equation (1). This is not shown here.
(Note that we are defining "up" as positive and "down" as negative).
Next, isolate the rocket engine, as shown in the schematic below.
where Frb is the force exerted on the rocket engine by the rocket body.
Now, sum all the forces acting on the rocket engine and then apply Newton's second law:
is the mass of the rocket engine. The term PaAe
is the force exerted on the rocket engine due to the ambient pressure acting on the outside of the engine.
Now, by Newton's third law, the reaction force Frb acting on the rocket body points up (positive). Therefore, by Newton's second law we can write a corresponding equation for the rocket body, in which the mass of the rocket body appears (this mass excludes the mass of the rocket engine and the mass of propellant/exhaust inside the rocket engine).
Combine the above two equations and we get
Combine equations (5) and (6) and we get
where the term in parentheses, mp plus the mass of the rocket engine plus the mass of the rocket body, is the total mass of the rocket at any point in time. Therefore the left side of the equation must be the thrust acting on the rocket (since force equals mass times acceleration, by Newton's second law).
Thus, the thrust T acting on the rocket is T = (dm/dt)u + (Pe − Pa)Ae, where dm/dt is the burn rate of the propellant.
This is the most general equation for thrust coming out of the rocket physics analysis shown above. The first term on the right is the momentum thrust term, and the last term on the right is the pressure thrust term due to the difference between the nozzle exit pressure and the ambient pressure. In deriving equation (3) we assumed that Pe = Pa, which means that the pressure thrust term is zero. This is true if there is optimal nozzle expansion and therefore maximum thrust in the rocket nozzle. However, the pressure thrust term is generally small relative to the momentum thrust term.
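As a small numerical illustration of this general thrust equation, here is a minimal Python sketch; the burn rate, exhaust velocity, pressures, and exit area below are invented example values, not figures from this article.

```python
# Hypothetical illustration of the general thrust equation described above:
#   thrust = (burn rate) * (exhaust velocity relative to rocket)
#            + (exit pressure - ambient pressure) * (nozzle exit area)
# All numbers below are made-up example values.

def rocket_thrust(burn_rate, u, p_exit, p_ambient, a_exit):
    """Return total thrust in newtons: momentum thrust plus pressure thrust."""
    momentum_thrust = burn_rate * u                  # (kg/s) * (m/s) = N
    pressure_thrust = (p_exit - p_ambient) * a_exit  # (Pa) * (m^2) = N
    return momentum_thrust + pressure_thrust

if __name__ == "__main__":
    T = rocket_thrust(burn_rate=250.0,       # kg/s
                      u=3000.0,              # m/s, exhaust velocity relative to rocket
                      p_exit=70_000.0,       # Pa, nozzle exit pressure Pe
                      p_ambient=101_325.0,   # Pa, ambient pressure Pa
                      a_exit=0.8)            # m^2, nozzle exit area Ae
    print(f"Total thrust: {T / 1000:.1f} kN")
```

With these made-up numbers the pressure thrust term comes out negative (the exit pressure is below ambient), which is one way to see why the term vanishes only when Pe equals Pa.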
A subtle point regarding Pe is that Je (the momentum of the rocket exhaust, described in the first section) remains constant. This is because the pressure force pushing on the exhaust at the rocket nozzle exit (due to the pressure Pe) exactly balances the pressure force pushing on the remainder of the exhaust due to the ambient pressure Pa. Again, we are ignoring gravity and air resistance (drag) in the derivation of equations (3) and (7). To include their effect we simply add their contribution to the thrust force, as shown in the previous section on rocket physics.
The rocket physics analysis in this section is basically a force and momentum analysis. But to do a complete thrust analysis we would have to look at the thermal and fluid dynamics of the expansion process, as the exhaust gas travels through the rocket nozzle. This analysis (not discussed here) enables one to optimize the engine design plus nozzle geometry such that optimal nozzle expansion is achieved during operation (or as close to it as possible).
The flow of the exhaust gas through the nozzle falls under the category of compressible supersonic flow, and its treatment is somewhat complicated. For more information on this, see Wikipedia's page on rocket engines, and see the summary of the rocket thrust equations provided by NASA.
In the following discussion on rocket physics we will look at the energy consumption of a rocket moving through the air at constant velocity.
Rocket Physics — Energy Consumption For Rocket Moving At Constant Velocity
An interesting question related to rocket physics is, how much energy is used to power a rocket during its flight? One way to answer that is to consider the energy use of a rocket moving at constant velocity, such as through the air. Now, in order for the rocket to move at constant velocity the sum of the forces acting on it must equal zero. For purposes of simplicity let's assume the rocket is traveling horizontally against the force of air resistance (drag), and where gravity has no component in the direction of motion. The figure below illustrates this schematically.
The thrust force T acting on the rocket is equal to the air drag FD, so that T = FD.
Let's say the rocket is moving at a constant horizontal velocity v, and it travels a horizontal distance d. We wish to find the amount of mechanical energy it takes to move the rocket this distance. Note that this is not the same as the total amount of chemical energy in the spent propellant, but rather the amount of energy that was converted into motion. This amount of energy is always less than the total chemical energy in the propellant, due to naturally occurring losses, such as waste heat generated by the rocket engine.
Thus, we must look at the energy used to push the rocket (in the forward direction) and add it to the energy it takes to push the exhaust (in the backward direction). To do this we can apply the principle of work and energy.
For the exhaust gas:
Here me is the total mass of the exhaust ejected from the rocket over the flight distance d, u is the velocity of the exhaust gases relative to the rocket (which is approximately constant), and the remaining quantity is the work required to change the kinetic energy of the rocket propellant/exhaust (ejected from the rocket) over the flight distance d.
For the rocket:
Here T is the thrust acting on the rocket (this is constant if we assume the burn rate is constant), and the remaining quantity is the work required to push the rocket a distance d. With a constant burn rate, and assuming optimal nozzle expansion, we have
Now, the time t it takes for the rocket to travel a horizontal distance d is t = d/v. The total mass of exhaust me ejected during this time is the burn rate multiplied by t.
Substitute this into equation (8), then combine equations (8) and (9) to find the total mechanical energy used to propel the rocket over a distance d. Since the rocket moves at constant velocity, the thrust equals the drag force, which means we can substitute T with the drag force FD in the above equation.
Here FD follows the standard drag equation, FD = (1/2)CdρAv², where Cd is the drag coefficient (which can vary with the speed of the rocket, but typical values range from 0.4 to 1.0), ρ is the density of the air, A is the projected cross-sectional area of the rocket perpendicular to the flow direction (that is, perpendicular to v), and v is the speed of the rocket relative to the air.
Substitute the above equation into the previous equation (for T) and we get
As you can see, the higher the velocity v, the greater the energy required to move the rocket over a distance d, even though the time it takes is less. Furthermore, rockets typically have a large relative exhaust velocity u, which makes the energy expenditure large, as evident in the above equation. (The relative exhaust velocity can be in the neighborhood of a few kilometers per second.)
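The bookkeeping described above can be collected into a short Python sketch. It assumes, as in the text, constant velocity (so thrust equals drag), a constant burn rate, and optimal nozzle expansion (so thrust equals burn rate times u); all numerical values are invented for illustration.

```python
# Mechanical energy to move a rocket a horizontal distance d at constant speed v,
# following the bookkeeping described above:
#   work on the rocket     = T * d, with T equal to the drag force
#   exhaust kinetic energy = 0.5 * m_exhaust * u**2, with m_exhaust = burn_rate * t
# All numbers are invented example values.

def drag_force(c_d, rho, area, v):
    """Standard drag equation: 0.5 * Cd * rho * A * v^2."""
    return 0.5 * c_d * rho * area * v**2

def energy_for_distance(d, v, u, c_d, rho, area):
    t = d / v                               # flight time at constant speed
    thrust = drag_force(c_d, rho, area, v)  # thrust needed to balance drag
    burn_rate = thrust / u                  # assumes thrust = burn_rate * u (optimal expansion)
    m_exhaust = burn_rate * t               # propellant ejected over the flight
    work_on_rocket = thrust * d
    exhaust_kinetic_energy = 0.5 * m_exhaust * u**2
    return work_on_rocket + exhaust_kinetic_energy

if __name__ == "__main__":
    E = energy_for_distance(d=100_000.0,  # m
                            v=300.0,      # m/s, rocket speed relative to the air
                            u=2500.0,     # m/s, exhaust velocity relative to the rocket
                            c_d=0.5, rho=1.2, area=1.0)
    print(f"Mechanical energy used: {E / 1e9:.2f} GJ")
```

Doubling v in this sketch quadruples the drag force, which is why the energy grows with speed even though the flight time drops.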
This tells us that rockets are inefficient for earth-bound travel, due to the effects of air resistance (drag) and the high relative exhaust velocity u. It is only their high speed that makes them attractive for earth-bound travel, because they can "get there" sooner, which is particularly important for military and weapons applications.
For rockets that are launched into space (such as the Space Shuttle) the density of the air decreases as the altitude of the rocket increases. This decrease, combined with the increase in rocket velocity, means that the drag force will reach a maximum at some altitude (typically several kilometers above the surface of the earth). This maximum drag force must be withstood by the rocket body and as such is an important part of rocket physics analysis, and design.
In the following discussion on rocket physics we will look at the energy consumption of a rocket moving through space.
Rocket Physics — Energy Consumption For Rocket Moving Through Space
A very useful piece of information in the study of rocket physics is how much energy a rocket uses for a given increase in speed (delta-v) while traveling in space. This analysis is simplified considerably, since gravity and air drag are absent. Once again, we are assuming optimal nozzle expansion.
We will need to apply the principle of work and energy to the system (consisting of rocket and propellant/exhaust), to determine the required energy.
The initial kinetic energy of the system is:
Here the two quantities are the initial rocket (plus propellant) mass and the initial rocket (plus propellant) velocity vi.
The final kinetic energy of the rocket is:
Here the two quantities are the final rocket mass and the final rocket velocity.
The exhaust gases are assumed to continue traveling at the same velocity as they did upon exiting the rocket. Therefore, the final kinetic energy of the exhaust gases is:
Here the infinitesimal mass is the mass of rocket propellant that has exited the rocket (in the form of exhaust) over a very small time duration, and the associated velocity is the velocity of the exhaust exiting the rocket, at stage (2), at a given time.
The last term on the right represents an integration, in which you have to sum over all the exhaust particles for the whole burn time.
Therefore, the final kinetic energy of the rocket (plus exhaust) is:
Now, apply the principle of work and energy to all the particles in the system, consisting of rocket and propellant/exhaust:
Substituting the expressions for T1 and T2 into the above equation, we get
Here U is the mechanical energy used by the rocket between the initial and final velocity (in other words, for a given delta-v). This amount of energy is less than the total chemical energy in the propellant, due to naturally occurring losses, such as waste heat generated by the rocket engine. U must be solved for in the above equation.
Here we also use equation (4), in which u is the velocity of the exhaust gases relative to the rocket (constant).
Note that all velocities are measured with respect to an inertial reference frame.
After a lot of algebra and messy integration we find that,
This answer is very nice and compact, and it does not depend on the initial velocity vi. This is perhaps a surprising result coming out of this rocket physics analysis.
If we want to find the mechanical power P generated by the rocket, we differentiate the above expression with respect to time. This gives an expression in which the burn rate of rocket propellant appears.
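The final expressions for U and P are not reproduced in the text above (they appeared as images in the original page). The standard textbook result that matches the stated properties (independence from the initial velocity vi, and a time derivative involving the burn rate) is U = (1/2)(propellant mass burned)u² and P = (1/2)(burn rate)u²; treat this as a reconstruction offered for illustration rather than a quotation of the article. Below is a minimal Python sketch, which also uses the ideal rocket equation (not derived in this excerpt) to relate delta-v to propellant mass; all numbers are invented example values.

```python
# Energy and power for a given delta-v in free space.
# U = 0.5 * (propellant burned) * u^2 and P = 0.5 * (burn rate) * u^2 are the
# standard textbook forms assumed here; the article's own equation images are
# not reproduced above. The ideal rocket equation used below is likewise an
# outside assumption, not something derived in this excerpt.

import math

def propellant_needed(m_initial, delta_v, u):
    """Propellant mass for a given delta-v, from the ideal rocket equation."""
    m_final = m_initial * math.exp(-delta_v / u)
    return m_initial - m_final

def mechanical_energy(m_propellant, u):
    """U = 0.5 * (propellant burned) * u^2 (assumed standard form)."""
    return 0.5 * m_propellant * u**2

def mechanical_power(burn_rate, u):
    """P = 0.5 * (burn rate) * u^2 (assumed standard form)."""
    return 0.5 * burn_rate * u**2

if __name__ == "__main__":
    u = 3000.0  # m/s, exhaust velocity relative to the rocket (example value)
    m_p = propellant_needed(m_initial=10_000.0, delta_v=2000.0, u=u)
    print(f"Propellant burned: {m_p:.0f} kg")
    print(f"Mechanical energy: {mechanical_energy(m_p, u) / 1e9:.2f} GJ")
    print(f"Power at a 50 kg/s burn rate: {mechanical_power(50.0, u) / 1e6:.1f} MW")
```

Note that, consistent with the text, nothing in this sketch depends on the initial velocity of the rocket.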
Note that, in the above energy calculations, the rocket does not have to be flying in a straight line. The required energy U is the same regardless of the path taken by the rocket between the initial and final velocity. However, it is assumed that any angular rotation of the rocket stays constant (and can therefore be excluded from the energy equation), or is small enough to be negligible. Thus, the energy bookkeeping in this rocket physics analysis accounts only for the translational velocity of the rocket.
This concludes the discussion on rocket physics.
| http://www.real-world-physics-problems.com/rocket-physics.html | 13
66 | Linear regression models are used to show or predict the relationship between two variables or factors. The factor that is being predicted (the factor that the equation solves for) is called the dependent variable. The factors that are used to predict the value of the dependent variable are called the independent variables.
In simple linear regression, each observation consists of two values. One value is for the dependent variable and one value is for the independent variable.
Simple Linear Regression Model
The simple linear regression model is represented like this: y = β0 + β1x + ε.
By mathematical convention, the two factors that are involved in a simple linear regression analysis are designated x and y. The equation that describes how y is related to x is known as the regression model. The linear regression model also contains an error term that is represented by ε, the Greek letter epsilon. The error term is used to account for the variability in y that cannot be explained by the linear relationship between x and y. There are also parameters that represent the population being studied. These parameters of the model are β0 and β1, which appear in the term (β0 + β1x).
Simple Linear Regression Equation
The simple linear regression equation is represented like this: E(y) = β0 + β1x.
The simple linear regression equation is graphed as a straight line.
β0 is the y intercept of the regression line.
β1 is the slope.
E(y) is the mean or expected value of y for a given value of x.
A regression line can show a positive linear relationship, a negative linear relationship, or no relationship. If the graphed line in a simple linear regression is flat (not sloped), there is no relationship between the two variables. If the regression line slopes upward, with the lower end of the line at the y intercept (axis) of the graph and the upper end of the line extending upward into the graph field, away from the x intercept (axis), a positive linear relationship exists. If the regression line slopes downward, with the upper end of the line at the y intercept (axis) of the graph and the lower end of the line extending downward into the graph field, toward the x intercept (axis), a negative linear relationship exists.
Estimated Linear Regression Equation
If the parameters of the population were known, the simple linear regression equation (shown below) could be used to compute the mean value of y for a known value of x.
E(y) = β0 + β1x.
However, in practice, the parameter values are not known, so they must be estimated by using data from a sample of the population. The population parameters are estimated by using sample statistics. The sample statistics are represented by b0 and b1. When the sample statistics are substituted for the population parameters, the estimated regression equation is formed.
The estimated regression equation is shown below.
ŷ = b0 + b1x
ŷ is pronounced y hat.
The graph of the estimated simple regression equation is called the estimated regression line.
The b0 is the y intercept.
The b1 is the slope.
The ŷ is the estimated value of y for a given value of x.
Important Note: Regression analysis is not used to interpret cause-and-effect relationships between variables. Regression analysis can, however, indicate how variables are related or to what extent variables are associated with each other. In so doing, regression analysis tends to make salient relationships that warrant a knowledgeable researcher taking a closer look.
The Least Squares Method is a statistical procedure for using sample data to find the values of b0 and b1 in the estimated regression equation. The Least Squares Method was proposed by Carl Friedrich Gauss (1777–1855), and it is still widely used.
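As a concrete illustration of fitting the estimated regression equation ŷ = b0 + b1x with the Least Squares Method, here is a minimal Python sketch using the closed-form formulas; the small data set is invented for illustration.

```python
# Least squares estimates for simple linear regression:
#   b1 = sum((x - x_mean) * (y - y_mean)) / sum((x - x_mean)^2)
#   b0 = y_mean - b1 * x_mean
# The data below are invented example values.

def least_squares_fit(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    b1 = sxy / sxx             # slope of the estimated regression line
    b0 = y_mean - b1 * x_mean  # y intercept of the estimated regression line
    return b0, b1

def predict(b0, b1, x):
    """Estimated value y-hat for a given value of x."""
    return b0 + b1 * x

if __name__ == "__main__":
    xs = [1, 2, 3, 4, 5]
    ys = [2.1, 3.9, 6.2, 8.1, 9.8]
    b0, b1 = least_squares_fit(xs, ys)
    print(f"b0 = {b0:.2f}, b1 = {b1:.2f}")   # a positive b1 means an upward-sloping line
    print(f"y-hat at x = 6: {predict(b0, b1, 6):.2f}")
```

A positive b1 here corresponds to the upward-sloping, positive linear relationship described earlier; a negative b1 would slope the estimated regression line downward.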
Anderson, D. R., Sweeney, D. J., and Williams, T. A. (2003). Essentials of Statistics for Business and Economics (3rd ed.) Mason, Ohio: Southwestern, Thompson Learning.
______. (2010). Explained: Regression Analysis. MIT News. Retrieved http://web.mit.edu/newsoffice/2010/explained-reg-analysis-0316.html
McIntyre, L. (1994). Using Cigarette Data for An Introduction to Multiple Regression. Journal of Statistics Education, 2(1). Retrieved http://www.amstat.org/publications/jse/v2n1/datasets.mcintyre.html
Mendenhall, W., and Sincich, T. (1992). Statistics for Engineering and the Sciences (3rd ed.), New York, NY: Dellen Publishing Co.
Panchenko, D. 18.443 Statistics for Applications, Fall 2006, Section 14, Simple Linear Regression. (Massachusetts Institute of Technology: MIT OpenCourseWare) Retrieved http://ocw.mit.edu License: Creative Commons BY-NC-SA | http://marketresearch.about.com/od/Market_Research_Basics/g/Simple-Linear-Regression.htm | 13 |
56 | From Wikipedia, the free encyclopedia
Hypertension is a chronic medical condition in which the blood pressure is elevated. It is also referred to as high blood pressure or shortened to HT, HTN or HPN. The word "hypertension", by itself, normally refers to systemic, arterial hypertension.
Hypertension can be classified as either essential (primary) or secondary. Essential or primary hypertension means that no medical cause can be found to explain the raised blood pressure and represents about 90-95% of hypertension cases. Secondary hypertension indicates that the high blood pressure is a result of (i.e., secondary to) another condition, such as kidney disease or tumours (adrenal adenoma or pheochromocytoma).
Persistent hypertension is one of the risk factors for strokes, heart attacks, heart failure and arterial aneurysm, and is a leading cause of chronic renal failure. Even moderate elevation of arterial blood pressure leads to shortened life expectancy. At severely high pressures, defined as mean arterial pressures 50% or more above average, a person can expect to live no more than a few years unless appropriately treated. Beginning at a systolic pressure (which is peak pressure in the arteries, which occurs near the end of the cardiac cycle when the ventricles are contracting) of 115 mmHg and diastolic pressure (which is minimum pressure in the arteries, which occurs near the beginning of the cardiac cycle when the ventricles are filled with blood) of 75 mmHg (commonly written as 115/75 mmHg), cardiovascular disease (CVD) risk doubles for each increment of 20/10 mmHg.
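As a worked illustration of that doubling relationship, read it as one factor of two for each full 20/10 mmHg step above the 115/75 mmHg baseline (this is simply an arithmetic reading of the sentence above, not an additional clinical claim):

    relative CVD risk ≈ 2^((systolic − 115) / 20)
    at 135/85 mmHg: 2^((135 − 115)/20) = 2 times the baseline risk
    at 155/95 mmHg: 2^((155 − 115)/20) = 4 times the baseline risk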
Figure: The variation in pressure in the left ventricle (blue line) and the aorta (red line) over two cardiac cycles ("heart beats"), showing the definitions of systolic and diastolic pressure.
A recent classification recommends blood pressure criteria for defining normal blood pressure, prehypertension, hypertension (stages I and II), and isolated systolic hypertension, which is a common occurrence among the elderly. These readings are based on the average of seated blood pressure readings that were properly measured during 2 or more office visits. In individuals older than 50 years, hypertension is considered to be present when a person's blood pressure is consistently at least 140 mmHg systolic or 90 mmHg diastolic. Patients with blood pressures over 130/80 mmHg along with Type 1 or Type 2 diabetes, or kidney disease require further treatment.
Source: American Heart Association (2003).
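A minimal Python sketch of the decision thresholds quoted in the paragraph above (at least 140/90 mmHg for hypertension in individuals older than 50, and over 130/80 mmHg with diabetes or kidney disease as the level requiring further treatment). The function name and structure are invented for illustration, and this is not clinical guidance; in practice the classification relies on properly measured, repeated readings as described above.

```python
# Encodes only the two thresholds quoted in the paragraph above:
#   - hypertension: consistently at least 140 mmHg systolic or 90 mmHg diastolic
#     (quoted for individuals older than 50 years, based on repeated readings)
#   - further treatment: over 130/80 mmHg with type 1/2 diabetes or kidney disease
# Illustration only, not clinical guidance.

def assess_blood_pressure(systolic, diastolic, has_diabetes=False, has_kidney_disease=False):
    if systolic >= 140 or diastolic >= 90:
        return "hypertension"
    if (has_diabetes or has_kidney_disease) and (systolic > 130 or diastolic > 80):
        return "requires further treatment"
    return "below the thresholds quoted above"

if __name__ == "__main__":
    print(assess_blood_pressure(150, 85))                     # hypertension
    print(assess_blood_pressure(134, 82, has_diabetes=True))  # requires further treatment
    print(assess_blood_pressure(118, 76))                     # below the thresholds quoted above
```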
Resistant hypertension is defined as the failure to reduce blood pressure to the appropriate level after taking a three-drug regimen (including a thiazide diuretic). Guidelines for treating resistant hypertension have been published in the UK and US.
Excessive elevation in blood pressure during exercise is called exercise hypertension. The upper normal systolic values during exercise reach levels between 200 and 230 mm Hg. Exercise hypertension may be regarded as a precursor to established hypertension at rest.
Signs and symptoms
Mild to moderate essential hypertension is usually asymptomatic. Accelerated hypertension is associated with headache, somnolence, confusion, visual disturbances, and nausea and vomiting (hypertensive encephalopathy). Retinas are affected, with narrowing of arterial diameter to less than 50% of venous diameter, copper or silver wire appearance, exudates, hemorrhages, or papilledema. Some signs and symptoms are especially important in infants and neonates, such as failure to thrive, seizure, irritability or lethargy, and respiratory distress. In children, hypertension may cause headache, fatigue, blurred vision, epistaxis, and Bell's palsy.
Some signs and symptoms are especially important in suggesting a secondary medical cause of chronic hypertension. Centripetal obesity, a "buffalo hump," and/or wide purple abdominal striae, and perhaps a recent onset of diabetes, suggest glucocorticoid excess, either due to Cushing's syndrome or other causes. Hypertension due to other secondary endocrine diseases such as hyperthyroidism, hypothyroidism, or growth hormone excess shows symptoms specific to those diseases; for example, in hyperthyroidism there may be weight loss, tremor, tachycardia or atrial arrhythmia, palmar erythema and sweating. Signs and symptoms associated with growth hormone excess, such as coarsening of facial features, prognathism, macroglossia, hypertrichosis, hyperpigmentation, and hyperhidrosis, may occur in these patients. Other endocrine causes such as hyperaldosteronism may cause less specific symptoms such as numbness, polyuria, polydipsia, hypernatraemia, and metabolic alkalosis. A systolic bruit heard over the abdomen or in the flanks suggests renal artery stenosis. Also, radiofemoral delay or diminished pulses in the lower versus upper extremities suggests coarctation of the aorta. Hypertension in patients with pheochromocytomas is usually sustained but may be episodic. The typical attack lasts from minutes to hours and is associated with headache, anxiety, palpitation, profuse perspiration, pallor, tremor, and nausea and vomiting. Blood pressure is markedly elevated, and angina or acute pulmonary edema may occur. In primary aldosteronism, patients may have muscular weakness, polyuria, and nocturia due to hypokalemia. Chronic hypertension often leads to left ventricular hypertrophy, which can present with exertional and paroxysmal nocturnal dyspnea. Cerebral involvement causes stroke due to thrombosis or hemorrhage from microaneurysms of small penetrating intracranial arteries. Hypertensive encephalopathy is probably caused by acute capillary congestion and exudation with cerebral edema, which is reversible.
Signs and symptoms associated with pre-eclampsia and eclampsia include proteinuria and edema; the hallmark of eclampsia is convulsions. Other cerebral signs may precede the convulsion, such as nausea, vomiting, headaches, and blindness.
Essential hypertension, while one of the most common disorders, is by definition idiopathic and has no single known cause. It is the most prevalent hypertension type, affecting 90-95% of hypertensive patients. Although no direct cause has been identified, many factors are associated with it, such as a sedentary lifestyle, stress, visceral obesity, potassium deficiency (hypokalemia), obesity (more than 85% of cases occur in those with a body mass index greater than 25), salt (sodium) sensitivity, alcohol intake, and vitamin D deficiency. Risk also increases with aging, some inherited genetic mutations, and family history. An elevation of renin, an enzyme secreted by the kidney, is another risk factor, as is sympathetic nervous system overactivity. Insulin resistance, which is a component of syndrome X (the metabolic syndrome), is also thought to contribute to hypertension. Recent studies have implicated low birth weight as a risk factor for adult essential hypertension.
Secondary hypertension by definition results from an identifiable cause. This type is important to recognize since it is treated differently from the essential type, namely by treating the underlying cause.
Many conditions can cause secondary hypertension. Some are common and well recognized, such as Cushing's syndrome, a condition in which the adrenal glands overproduce the hormone cortisol. Hypertension results from the interplay of several pathophysiological mechanisms regulating plasma volume, peripheral vascular resistance and cardiac output, all of which may be increased. More than 80% of patients with Cushing's syndrome have hypertension. Another important cause is the congenital abnormality coarctation of the aorta.
A variety of adrenal cortical abnormalities can cause hypertension. In primary aldosteronism there is a clear relationship between the aldosterone-induced sodium retention and the hypertension. Another related disorder that causes hypertension is apparent mineralocorticoid excess syndrome, an autosomal recessive disorder resulting from mutations in the gene encoding 11β-hydroxysteroid dehydrogenase, the enzyme that normally inactivates circulating cortisol to the less-active metabolite cortisone. Cortisol at high concentrations can cross-react and activate the mineralocorticoid receptor, leading to aldosterone-like effects in the kidney, causing hypertension. The same effect can be produced by prolonged ingestion of liquorice (which can be of potent strength in liquorice candy), since this inhibits the 11β-hydroxysteroid dehydrogenase enzyme and causes a secondary apparent mineralocorticoid excess syndrome. Frequently, if liquorice is the cause of the high blood pressure, a low blood level of potassium will also be present. Yet another related disorder causing hypertension is glucocorticoid remediable aldosteronism (GRA), an autosomal dominant disorder in which the increase in aldosterone secretion produced by ACTH is no longer transient, causing primary hyperaldosteronism; the mutated gene results in an aldosterone synthase that is ACTH-sensitive, which it normally is not. GRA appears to be the most common monogenic form of human hypertension. Compare these effects to those seen in Conn's disease, an adrenocortical tumor that causes excess release of aldosterone and leads to hypertension.
Another adrenal-related cause is Cushing's syndrome, a disorder caused by high levels of cortisol. Cortisol is a hormone secreted by the cortex of the adrenal glands. Cushing's syndrome can be caused by taking glucocorticoid drugs, or by tumors that produce cortisol or adrenocorticotropic hormone (ACTH). More than 80% of patients with Cushing's syndrome develop hypertension, which is accompanied by distinct symptoms of the syndrome, such as central obesity, buffalo hump, moon face, sweating, hirsutism and anxiety.
Other well known causes include diseases of the kidney. These include polycystic kidney disease (PKD), a cystic genetic disorder of the kidneys characterized by the presence of multiple cysts (hence, "polycystic") in both kidneys; it can also damage the liver, pancreas, and, rarely, the heart and brain. PKD can be autosomal dominant or autosomal recessive, with the autosomal dominant form being more common and characterized by progressive cyst development and bilaterally enlarged kidneys with multiple cysts, with concurrent development of hypertension, renal insufficiency and renal pain. Another kidney disease is chronic glomerulonephritis, which is characterized by inflammation of the glomeruli, the small blood vessels in the kidneys. Hypertension can also be produced by diseases of the renal arteries supplying the kidney. This is known as renovascular hypertension; it is thought that decreased perfusion of renal tissue due to stenosis of a main or branch renal artery activates the renin-angiotensin system. Also, some renal tumors can cause hypertension. The differential diagnosis of a renal tumor in a young patient with hypertension includes Juxtaglomerular cell tumor, Wilms' tumor, and renal cell carcinoma, all of which may produce renin.
Neuroendocrine tumors are also a well known cause of secondary hypertension. Pheochromocytoma (most often located in the adrenal medulla) increases secretion of catecholamines such as epinephrine and norepinephrine, causing excessive stimulation of adrenergic receptors, which results in peripheral vasoconstriction and cardiac stimulation. This diagnosis is confirmed by demonstrating increased urinary excretion of epinephrine and norepinephrine and/or their metabolites (vanillylmandelic acid).
Medication side effects
Certain medications, especially NSAIDs (Motrin/Ibuprofen) and steroids can cause hypertension. High blood pressure that is associated with the sudden withdrawal of various antihypertensive medications is called rebound hypertension. The increases in blood pressure may result in blood pressures greater than when the medication was initiated. Depending on the severity of the increase in blood pressure, rebound hypertension may result in a hypertensive emergency. Rebound hypertension is avoided by gradually reducing the dose (also known as "dose tapering"), thereby giving the body enough time to adjust to reduction in dose. Medications commonly associated with rebound hypertension include centrally-acting antihypertensive agents, such as clonidine and beta-blockers.
Although few women of childbearing age have high blood pressure, up to 11% develop hypertension of pregnancy. While generally benign, it may herald three complications of pregnancy: pre-eclampsia, HELLP syndrome and eclampsia. Follow-up and control with medication is therefore often necessary.
Another common and under-recognized cause of hypertension is sleep apnea, which is often best treated with nocturnal nasal continuous positive airway pressure (CPAP), but other approaches include the Mandibular advancement splint (MAS), UPPP, tonsillectomy, adenoidectomy, septoplasty, or weight loss. Another cause is an exceptionally rare neurological disease called Binswanger's disease, causing dementia; it is a rare form of multi-infarct dementia, and is one of the neurological syndromes associated with hypertension.
Because of the ubiquity of arsenic in ground water supplies and its effect on cardiovascular health, low dose arsenic poisoning should be considered as part of the pathogenesis of idiopathic hypertension. Idiopathic and essential are both somewhat synonymous with primary hypertension. Arsenic exposure also shares many of the signs of primary hypertension, such as headache, somnolence, confusion, proteinuria, visual disturbances, and nausea and vomiting.
Due to the role of intracellular potassium in the regulation of cellular pressures related to sodium, establishing potassium balance has been shown to reverse hypertension.
Most of the mechanisms associated with secondary hypertension are generally fully understood. However, those associated with essential (primary) hypertension are far less understood. What is known is that cardiac output is raised early in the disease course, with total peripheral resistance (TPR) normal; over time cardiac output drops to normal levels but TPR is increased. Three theories have been proposed to explain this:
It is also known that hypertension is highly heritable and polygenic (caused by more than one gene) and a few candidate genes have been postulated in the etiology of this condition.
Recently, work related to the association between essential hypertension and sustained endothelial damage has gained popularity among hypertension scientists. It remains unclear however whether endothelial changes precede the development of hypertension or whether such changes are mainly due to long standing elevated blood pressures.
Initial assessment of the hypertensive patient should include a complete history and physical examination to confirm a diagnosis of hypertension. Most patients with hypertension have no specific symptoms referable to their blood pressure elevation. Although popularly considered a symptom of elevated arterial pressure, headache generally occurs only in patients with severe hypertension. Characteristically, a "hypertensive headache" occurs in the morning and is localized to the occipital region. Other nonspecific symptoms that may be related to elevated blood pressure include dizziness, palpitations, easy fatiguability, and impotence.
Measuring blood pressure
Main article: Blood pressure
Diagnosis of hypertension is generally on the basis of a persistently high blood pressure. Usually this requires three separate measurements at least one week apart. Exceptionally, if the elevation is extreme, or end-organ damage is present then the diagnosis may be applied and treatment commenced immediately.
Obtaining reliable blood pressure measurements relies on following several rules and understanding the many factors that influence blood pressure reading.
For instance, measurements in control of hypertension should be at least 1 hour after caffeine, 30 minutes after smoking or strenuous exercise and without any stress. Cuff size is also important. The bladder should encircle and cover two-thirds of the length of the (upper) arm. The patient should be sitting upright in a chair with both feet flat on the floor for a minimum of five minutes prior to taking a reading. The patient should not be on any adrenergic stimulants, such as those found in many cold medications.
When taking manual measurements, the person taking the measurement should be careful to inflate the cuff suitably above anticipated systolic pressure. The person should inflate the cuff to 200 mmHg and then slowly release the air while palpating the radial pulse. After one minute, the cuff should be reinflated to 30 mmHg higher than the pressure at which the radial pulse was no longer palpable. A stethoscope should be placed lightly over the brachial artery. The cuff should be at the level of the heart and the cuff should be deflated at a rate of 2 to 3 mmHg/s. Systolic pressure is the pressure reading at the onset of the sounds described by Korotkoff (Phase one). Diastolic pressure is then recorded as the pressure at which the sounds disappear (K5) or sometimes the K4 point, where the sound is abruptly muffled. Two measurements should be made at least 5 minutes apart, and, if there is a discrepancy of more than 5 mmHg, a third reading should be done. The readings should then be averaged. An initial measurement should include both arms. In elderly patients who particularly when treated may show orthostatic hypotension, measuring lying sitting and standing BP may be useful. The BP should at some time have been measured in each arm, and the higher pressure arm preferred for subsequent measurements.
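The two-reading rule in the paragraph above (average two readings taken at least 5 minutes apart, and take a third if they differ by more than 5 mmHg) can be written as a short sketch. The function below is a hypothetical illustration of that bookkeeping only, not a clinical protocol.

```python
# Implements only the averaging rule described above: use two readings,
# request a third if they differ by more than 5 mmHg, then average.
# Hypothetical illustration of the bookkeeping, not a clinical protocol.

def average_readings(first, second, third=None):
    """Each reading is a (systolic, diastolic) pair in mmHg."""
    readings = [first, second]
    needs_third = (abs(first[0] - second[0]) > 5 or abs(first[1] - second[1]) > 5)
    if needs_third:
        if third is None:
            raise ValueError("Readings differ by more than 5 mmHg; a third reading is needed.")
        readings.append(third)
    systolic = sum(r[0] for r in readings) / len(readings)
    diastolic = sum(r[1] for r in readings) / len(readings)
    return round(systolic), round(diastolic)

if __name__ == "__main__":
    print(average_readings((142, 88), (138, 86)))             # no third reading needed
    print(average_readings((150, 92), (138, 84), (144, 88)))  # discrepancy > 5 mmHg
```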
BP varies with time of day, as may the effectiveness of treatment, and records of the measurements should therefore include the time they were taken. Analysis of this is rare at present.
Automated machines are commonly used and reduce the variability in manually collected readings. Routine measurements done in medical offices of patients with known hypertension may incorrectly diagnose 20% of patients with uncontrolled hypertension.
Home blood pressure monitoring can provide a measurement of a person's blood pressure at different times throughout the day and in different environments, such as at home and at work. Home monitoring may assist in the diagnosis of high or low blood pressure. It may also be used to monitor the effects of medication or lifestyle changes taken to lower or regulate blood pressure levels. Home monitoring of blood pressure can also assist in the diagnosis of white coat hypertension. The American Heart Association states, "You may have what's called 'white coat hypertension'; that means your blood pressure goes up when you're at the doctor's office. Monitoring at home will help you measure your true blood pressure and can provide your doctor with a log of blood pressure measurements over time. This is helpful in diagnosing and preventing potential health problems."
Some home blood pressure monitoring devices also make use of blood pressure charting software. These charting methods provide printouts for the patient's physician and reminders to take a blood pressure reading. However, a simple and cheap way is simply to manually record values with pen and paper, which can then be inspected by a doctor.
Systolic hypertension is defined as an elevated systolic blood pressure. If systolic blood pressure is elevated with a normal diastolic blood pressure, it is called isolated systolic hypertension. Systolic hypertension may be due to reduced compliance of the aorta with increasing age.
Once the diagnosis of hypertension has been made, it is important to attempt to exclude or identify reversible (secondary) causes. Secondary hypertension is more common in preadolescent children, with most cases caused by renal disease. Primary or essential hypertension is more common in adolescents and has multiple risk factors, including obesity and a family history of hypertension. Tests are undertaken to identify possible causes of secondary hypertension, and to seek evidence of end-organ damage to the heart itself or to the eyes (retina) and kidneys. Diabetes and raised cholesterol levels, being additional risk factors for the development of cardiovascular disease, are also tested for, as they will also require management. Tests done are classified as follows:
Renal: Microscopic urinalysis, proteinuria, serum BUN (blood urea nitrogen) and/or creatinine
Endocrine: Serum sodium, potassium, calcium, TSH (thyroid-stimulating hormone)
Metabolic: Fasting blood glucose, total cholesterol, HDL and LDL cholesterol, triglycerides
Other: Hematocrit, electrocardiogram, and chest X-ray
Sources: Harrison's Principles of Internal Medicine and others
Creatinine (renal function) testing is done to identify both the underlying renal disease as a cause of hypertension and, conversely, hypertension causing the onset of kidney damage. It is a baseline for monitoring the possible side-effects of certain antihypertensive drugs later. Glucose testing is done to identify diabetes mellitus. Additionally, testing of urine samples for proteinuria detection is used to pick up an underlying kidney disease or evidence of hypertensive renal damage. Electrocardiogram (EKG/ECG) testing is done to check for evidence of the heart being under strain from working against a high blood pressure. It may also show a resulting thickening of the heart muscle (left ventricular hypertrophy) or of the occurrence of a previously silent cardiac disease (either a subtle electrical conduction disruption or even a myocardial infarction). A chest X-ray might be used to observe signs of cardiac enlargement or evidence of cardiac failure.
The degree to which hypertension can be prevented depends on a number of features including: current blood pressure level, sodium/potassium balance, detection and omission of environmental toxins, changes in end/target organs (retina, kidney, heart - among others), risk factors for cardiovascular diseases and the age at presentation. Unless the presenting patient has very severe hypertension, there should be a relatively prolonged assessment period within which repeated measurements of blood pressure should be taken. Following this, lifestyle advice and non-pharmacological options should be offered to the patient, before any initiation of drug therapy.
The process of managing hypertension according to the guidelines of the British Hypertension Society suggests that non-pharmacological options should be explored in all patients who are hypertensive or pre-hypertensive. These measures include:
- Weight reduction and regular aerobic exercise (e.g., walking) are recommended as the first steps in treating mild to moderate hypertension. Regular exercise improves blood flow and helps to reduce resting heart rate and blood pressure. Several studies indicate that low intensity exercise may be more effective in lowering blood pressure than higher intensity exercise. These steps are highly effective in reducing blood pressure, although drug therapy is still necessary for many patients with moderate or severe hypertension to bring their blood pressure down to a safe level.
- Reducing dietary sugar intake.
- Reducing sodium (salt) in the diet may be effective: It decreases blood pressure in about 33% of people (see above). Many people use a salt substitute to reduce their salt intake.
- Additional dietary changes beneficial to reducing blood pressure include the DASH diet (dietary approaches to stop hypertension) which is rich in fruits and vegetables and low-fat or fat-free dairy foods. This diet has been shown to be effective based on research sponsored by the National Heart, Lung, and Blood Institute. In addition, an increase in dietary potassium, which offsets the effect of sodium has been shown to be highly effective in reducing blood pressure.
- Discontinuing tobacco use and alcohol consumption has been shown to lower blood pressure. The exact mechanisms are not fully understood, but blood pressure (especially systolic) always transiently increases following alcohol or nicotine consumption. Besides, abstention from cigarette smoking is important for people with hypertension because it reduces the risk of many dangerous outcomes of hypertension, such as stroke and heart attack. Note that coffee drinking (caffeine ingestion) also increases blood pressure transiently but does not produce chronic hypertension.
- Reducing stress, for example with relaxation therapy (such as meditation and other mind-body relaxation techniques) or by reducing environmental stress such as high sound levels and over-illumination, can be an additional method of ameliorating hypertension. Jacobson's Progressive Muscle Relaxation and biofeedback are also used, particularly device-guided paced breathing, although meta-analysis suggests it is not effective unless combined with other relaxation techniques.
Lifestyle changes such as the DASH diet, physical exercise, and weight loss have been shown to significantly reduce blood pressure in people with high blood pressure. If hypertension is high enough to justify immediate use of medications, lifestyle changes are initiated concomitantly.
A series of UK guidelines advocate treatment initiation thresholds and desirable targets to be reached, as set out in the following table. Of particular note is that for patients with blood pressures between 140-159/80-99 mmHg and without additional risk factors, only lifestyle actions and regular blood pressure and risk-factor review are proposed.
Biofeedback devices can be used alone or in conjunction with lifestyle changes or medications to monitor and possibly reduce hypertension. One example is Resperate, a portable, battery-operated personal therapeutic medical device, sold over the counter (OTC) in the United States. However, claims of efficacy are not supported by scientific studies. Testimonials are used to promote such products, while no real evidence exists that the use of resperate like devices lowers any morbidity associated with hypertension.
There are many classes of medications for treating hypertension, together called antihypertensives, which — by varying means — act by lowering blood pressure. Evidence suggests that reduction of the blood pressure by 5–6 mmHg can decrease the risk of stroke by 40%, of coronary heart disease by 15–20%, and reduces the likelihood of dementia, heart failure, and mortality from vascular disease.
The aim of treatment should be blood pressure control to <140/90 mmHg for most patients, and lower in certain contexts such as diabetes or kidney disease (some medical professionals recommend keeping levels below 120/80 mmHg). Each added drug may reduce the systolic blood pressure by 5–10 mmHg, so multiple drugs are often necessary to achieve blood pressure control.
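As a rough worked example using only the figures quoted above, and treating the per-drug reductions as simply additive (a simplification, not a clinical rule), consider a hypothetical patient starting at a systolic pressure of 165 mmHg with the <140 mmHg target:

    required reduction = 165 − 140 = 25 mmHg
    drugs needed ≈ 25/10 ≈ 3 (at 10 mmHg per drug) up to 25/5 = 5 (at 5 mmHg per drug)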
Commonly used drugs include the typical groups of:
- ACE inhibitors such as captopril, enalapril, fosinopril (Monopril), lisinopril (Zestril), quinapril, ramipril (Altace)
- Angiotensin II receptor antagonists may be used where ACE inhibitors are not tolerated: eg, telmisartan (Micardis, Pritor), irbesartan (Avapro), losartan (Cozaar), valsartan (Diovan), candesartan (Amias), olmesartan (Benicar, Olmetec)
- Calcium channel blockers such as nifedipine (Adalat) amlodipine (Norvasc), diltiazem, verapamil
- Diuretics: eg, bendroflumethiazide, chlorthalidone, hydrochlorothiazide (also called HCTZ).
Other additionally used groups include:
Finally several agents may be given simultaneously:
- Combination products (which usually contain HCTZ and one other drug). The advantage of fixed combinations is that they increase compliance with treatment by reducing the number of pills taken by the patients. A fixed combination of the ACE inhibitor perindopril and the calcium channel blocker amlodipine has recently been shown to be very effective even in patients with additional impaired glucose tolerance and in patients with the metabolic syndrome.
Choice of initial medication
For mild blood pressure elevation, consensus guidelines call for medically-supervised lifestyle changes and observation before recommending initiation of drug therapy. However, according to the American Hypertension Association, evidence of sustained damage to the body may be present even prior to observed elevation of blood pressure. Therefore the use of hypertensive medications may be started in individuals with apparently normal blood pressures but who show evidence of hypertension-related nephropathy, proteinuria, atherosclerotic vascular disease, or other evidence of hypertension-related organ damage.
If lifestyle changes are ineffective, then drug therapy is initiated, often requiring more than one agent to effectively lower hypertension. Which type of many medications should be used initially for hypertension has been the subject of several large studies and various national guidelines.
The largest study, Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT), concluded that thiazide-type diuretics are better and cheaper than other major classes of drugs at preventing cardiovascular disease, and should be preferred as the starting drug. ALLHAT used the thiazide diuretic chlorthalidone. (ALLHAT showed that doxazosin, an alpha-adrenergic receptor blocker, had a higher incidence of heart failure events, and the doxazosin arm of the study was stopped.)
A subsequent smaller study (ANBP2) did not show the slight advantages in thiazide diuretic outcomes observed in the ALLHAT study, and actually showed slightly better outcomes for ACE-inhibitors in older white male patients.
Thiazide diuretics are effective, recommended as the best first-line drug for hypertension by many experts, and are much more affordable than other therapies, yet they are not prescribed as often as some newer drugs. Hydrochlorothiazide is perhaps the safest and most inexpensive agent commonly used in this class and is very frequently combined with other agents in a single pill. Doses in excess of 25 milligrams per day of this agent incur an unacceptable risk of low potassium or Hypokalemia. Patients with an exaggerated hypokalemic response to a low dose of a thiazide diuretic should be suspected to have Hyperaldosteronism, a common cause of secondary hypertension.
Other drugs have a role in treating hypertension. Adverse effects of thiazide diuretics include hypercholesterolemia, and impaired glucose tolerance with increased risk of developing Diabetes mellitus type 2. The thiazide diuretics also deplete circulating potassium unless combined with a potassium-sparing diuretic or supplemental potassium. Some authors have challenged thiazides as first line treatment. However as the Merck Manual of Geriatrics notes, "thiazide-type diuretics are especially safe and effective in the elderly."
Current UK guidelines suggest starting patients over the age of 55 years, and all those of African/Afrocaribbean ethnicity, on calcium channel blockers or thiazide diuretics first, whilst younger patients of other ethnic groups should be started on ACE-inhibitors. Subsequently, if dual therapy is required, an ACE-inhibitor is used in combination with either a calcium channel blocker or a (thiazide) diuretic. Triple therapy then combines all three groups, and should a fourth agent be needed, either a further diuretic (e.g. spironolactone or furosemide), an alpha-blocker or a beta-blocker is considered. Prior to the demotion of beta-blockers as first line agents, the UK sequence of combination therapy used the first letter of the drug classes and was known as the "ABCD rule".
Diagram illustrating the main complications of persistent high blood pressure.
The prognosis of hypertension is based upon several factors including genetics, dietary habits, and overall lifestyle choices. If individuals conscious of their condition take the necessary preventive measures to lower their blood pressure, they are more likely to have a much better outcome than those who do not.
Hypertension is a risk factor for all clinical manifestations of atherosclerosis since it is a risk factor for atherosclerosis itself. It is an independent predisposing factor for heart failure, coronary artery disease, stroke, renal disease, and peripheral arterial disease. It is the most important risk factor for cardiovascular morbidity and mortality in industrialized countries. The risk is increased for:
Graph showing the prevalence of awareness, treatment and control of hypertension compared between the four NHANES studies.
It is estimated that nearly one billion people are affected by hypertension worldwide, and this figure is predicted to increase to 1.5 billion by 2025. The level of blood pressure regarded as deleterious has been revised down during years of epidemiological studies. A widely quoted and important series of such studies is the Framingham Heart Study carried out in an American town: Framingham, Massachusetts. The results from Framingham and of similar work in Busselton, Western Australia have been widely applied. To the extent that people are similar this seems reasonable, but there are known to be genetic variations in the most effective drugs for particular sub-populations. Recently (2004), the Framingham figures have been found to overestimate risks for the UK population considerably. The reasons are unclear. Nevertheless the Framingham work has been an important element of UK health policy.
Over 90-95% of adult hypertension is of the essential hypertension type. It is estimated that 43 million people in the United States have hypertension or are taking antihypertensive medication, which is almost 24% of the adult population. This proportion changes with race, being higher in blacks and lower in whites and Mexican Americans. Second, it changes with age: in industrialized countries systolic BP rises throughout life, whereas diastolic BP rises until age 55 to 60 years, and thus the greater increase in prevalence of hypertension among the elderly is mainly due to systolic hypertension. There are also geographic patterns, hypertension being more prevalent in the southeastern United States. Gender is another important factor, because hypertension is more prevalent in men (though menopause tends to abolish this difference). Finally, socioeconomic status, which is an indicator of lifestyle attributes, is inversely related to the prevalence, morbidity, and mortality rates of hypertension. A series of studies and surveys conducted by the National Health and Nutrition Examination Survey (NHANES) between 1976 and 2004 to assess the trends in hypertension prevalence, blood pressure distributions and mean levels, and hypertension awareness, treatment, and control among US adults aged more than 18 years, showed that there is an increasing pattern of awareness, control and treatment of hypertension, and that the prevalence of hypertension is increasing, reaching 28.9% as of 2004, with the largest increases among non-Hispanic women.
For secondary hypertension, it is known that primary aldosteronism is the most frequent endocrine form. The incidence of exercise hypertension is reported to range from 1 to 10% of the total population.
Hypertension is often part of the metabolic "syndrome X", co-occurring with the other components of the syndrome: diabetes mellitus, combined hyperlipidemia, and central obesity. This is especially common among women, and the co-occurrence increases the risk of cardiovascular disease and cardiovascular events.
Children and adolescents
As with adults, blood pressure is a variable parameter in children. It varies between individuals and within individuals from day to day and at various times of the day, and the population prevalence of high blood pressure in the young is increasing. The epidemic of childhood obesity, the risk of developing left ventricular hypertrophy, and evidence of the early development of atherosclerosis in children would make the detection of and intervention in childhood hypertension important to reduce long-term health risks. Most childhood hypertension, particularly in preadolescents, is secondary to an underlying disorder. Renal parenchymal disease is the most common (60 to 70%) cause of hypertension. Adolescents usually have primary or essential hypertension, making up 85 to 95% of cases. Medical students commonly suffer from hypertension, especially mature students.
Some cite the writings of Sushruta in the 6th century BC as being the first mention of symptoms like those of hypertension.
Our modern understanding of hypertension began with the work of the physician William Harvey (1578–1657). It was recognized as a disease a century later by the physician Richard Bright (1789–1858). The first report of elevated blood pressure in a patient without kidney disease was made by Frederick Mahomed (1849–1884).
Society and culture
The National Heart, Lung, and Blood Institute (NHLBI) estimated in 2002 that hypertension cost the United States $47.2 billion.
High blood pressure is the most common chronic medical problem prompting visits to primary health care providers, yet it is estimated that only 34% of the 50 million American adults with hypertension have their blood pressure controlled to a level of <140/90 mm Hg. Thus, about two thirds of Americans with hypertension are at increased risk for cardiovascular events. The medical, economic, and human costs of untreated and inadequately controlled high blood pressure are enormous. Adequate management of hypertension can be hampered by inadequacies in the diagnosis, treatment, and/or control of high blood pressure. Health care providers face many obstacles to achieving blood pressure control among their patients, including a limited ability to adequately lower blood pressure with monotherapy and a typical reluctance to increase therapy (either in dose or number of medications) to achieve blood pressure goals. Patients also face important challenges in adhering to multidrug regimens and accepting the need for therapeutic lifestyle changes. Nonetheless, the achievement of blood pressure goals is possible, and, most importantly, lowering blood pressure significantly reduces cardiovascular morbidity and mortality, as proved in clinical trials. The medical and human costs of treating preventable conditions such as stroke, heart failure, and end-stage renal disease can be reduced by antihypertensive treatment. The recurrent and chronic morbidities associated with hypertension are costly to treat. Pharmacotherapy for hypertension therefore offers a substantial potential for cost savings. Recent studies proved that the use of angiotensin receptor blockers for treatment of hypertension is cost-saving and cost-effective treatment compared with other conventional treatment.
The World Health Organization attributes hypertension, or high blood pressure, as the leading cause of cardiovascular mortality. The World Hypertension League (WHL), an umbrella organization of 85 national hypertension societies and leagues, recognized that more than 50% of the hypertensive population worldwide are unaware of their condition. To address this problem, the WHL initiated a global awareness campaign on hypertension in 2005 and dedicated May 17 of each year as World Hypertension Day (WHD). Over the past three years, more national societies have been engaging in WHD and have been innovative in their activities to get the message to the public. In 2007, there was record participation from 47 member countries of the WHL. During the week of WHD, all these countries – in partnership with their local governments, professional societies, nongovernmental organizations and private industries – promoted hypertension awareness among the public through several media and public rallies. Using mass media such as Internet and television, the message reached more than 250 million people. As the momentum picks up year after year, the WHL is confident that almost all the estimated 1.5 billion people affected by elevated blood pressure can be reached.
In vector calculus, divergence is a vector operator that measures the magnitude of a vector field's source or sink at a given point, in terms of a signed scalar. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
For example, consider air as it is heated or cooled. The relevant vector field for this example is the velocity of the moving air at a point. If air is heated in a region it will expand in all directions such that the velocity field points outward from that region. Therefore the divergence of the velocity field in that region would have a positive value, as the region is a source. If the air cools and contracts, the divergence is negative and the region is called a sink.
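As a quick numerical illustration of this sign convention, here is a minimal sketch assuming only NumPy (the helper divergence_at and the sample fields are illustrative choices, not taken from the text): the outward-pointing field v(x, y, z) = (x, y, z) models expanding air and has divergence +3 everywhere, while the inward-pointing field −v models contracting air and has divergence −3.

```python
import numpy as np

def divergence_at(v, p, h=1e-5):
    """Central-difference estimate of div v at the point p, for v: R^3 -> R^3."""
    p = np.asarray(p, dtype=float)
    total = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        # d(v_i)/d(x_i), approximated by a symmetric difference
        total += (v(p + e)[i] - v(p - e)[i]) / (2 * h)
    return total

expanding = lambda p: p        # velocity of air expanding away from the origin (source)
contracting = lambda p: -p     # velocity of air contracting toward the origin (sink)

print(divergence_at(expanding, [0.3, -0.2, 1.0]))    # ~ +3.0
print(divergence_at(contracting, [0.3, -0.2, 1.0]))  # ~ -3.0
```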
Definition of divergence
In physical terms, the divergence of a three dimensional vector field is the extent to which the vector field flow behaves like a source or a sink at a given point. It is a local measure of its "outgoingness"—the extent to which there is more exiting an infinitesimal region of space than entering it. If the divergence is nonzero at some point then there must be a source or sink at that position. (Note that we are imagining the vector field to be like the velocity vector field of a fluid (in motion) when we use the terms flow, sink and so on.)
More rigorously, the divergence of a vector field F at a point p is defined as the limit of the net flow of F across the smooth boundary of a three-dimensional region V, divided by the volume of V, as V shrinks to p. Formally,

\operatorname{div}\,\mathbf{F}(p) \;=\; \lim_{V \to \{p\}} \frac{1}{|V|} \oint_{S(V)} \mathbf{F} \cdot \mathbf{n} \, dS ,

where |V| is the volume of V, S(V) is the boundary of V, and the integral is a surface integral with n being the outward unit normal to that surface. The result, div F, is a function of p. From this definition it also becomes clear that div F can be seen as the source density of the flux of F.
In light of the physical interpretation, a vector field with constant zero divergence is called incompressible or solenoidal – in this case, no net flow can occur across any closed surface.
The intuition that the sum of all sources minus the sum of all sinks should give the net flow outwards of a region is made precise by the divergence theorem.
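The limit definition above, together with this flux intuition, can be checked numerically. The following sketch (assuming NumPy; the sample field F = (x³, y³, z³) and the cube-shaped region are arbitrary choices) integrates the outward flux over the boundary of a small cube around p with a midpoint rule and divides by the cube's volume; the ratio approaches div F(p) = 3(x² + y² + z²) evaluated at p as the cube shrinks.

```python
import numpy as np

def F(x, y, z):
    # sample field with div F = 3*(x**2 + y**2 + z**2)
    return np.array([x**3, y**3, z**3])

def flux_over_cube(center, a, n=60):
    """Midpoint-rule surface integral of F . n over the boundary of a cube of half-width a."""
    s = (np.arange(n) + 0.5) / n * 2 * a - a       # midpoints of an n-by-n grid on each face
    u, v = np.meshgrid(s, s, indexing="ij")
    dA = (2 * a / n) ** 2
    flux = 0.0
    for axis in range(3):                          # the pair of faces normal to this axis
        for sign in (+1.0, -1.0):
            others = [i for i in range(3) if i != axis]
            pt = [None, None, None]
            pt[axis] = np.full_like(u, center[axis] + sign * a)
            pt[others[0]] = center[others[0]] + u
            pt[others[1]] = center[others[1]] + v
            # outward unit normal on this face is sign * e_axis
            flux += sign * np.sum(F(*pt)[axis]) * dA
    return flux

p = (1.0, 2.0, 3.0)                                # div F(p) = 3*(1 + 4 + 9) = 42
for a in (0.5, 0.1, 0.02):
    print(a, flux_over_cube(p, a) / (2 * a) ** 3)  # ~42.75, ~42.03, ~42.0012: tends to 42
```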
Application in Cartesian coordinates
Let x, y, z be a system of Cartesian coordinates in three-dimensional Euclidean space, and let i, j, k be the corresponding basis of unit vectors. For a continuously differentiable vector field F = F_x i + F_y j + F_z k, the divergence is the scalar-valued function

\operatorname{div}\,\mathbf{F} \;=\; \nabla \cdot \mathbf{F} \;=\; \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z} .

Although expressed in terms of coordinates, the result is invariant under orthogonal transformations, as the physical interpretation suggests.
The common notation for the divergence ∇ · F is a convenient mnemonic, where the dot denotes an operation reminiscent of the dot product: take the components of ∇ (see del), apply them to the components of F, and sum the results. Because applying an operator is different from multiplying the components, this is considered an abuse of notation.
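As a concrete check on both the limit definition and the coordinate formula above, the short Python sketch below (the vector field, the evaluation point, and the box size are arbitrary choices made for this illustration, not part of the article) approximates the divergence at a point as the net outward flux through a small cube divided by its volume, and compares it with the sum of the partial derivatives.

def F(x, y, z):
    # An arbitrary smooth vector field chosen for the example.
    return (x * y, y * z, x * z)

def divergence_from_flux(field, p, h=1e-4):
    # Net outward flux through a cube of side 2h centered at p, divided by its volume.
    x, y, z = p
    face_area = (2 * h) ** 2
    flux = (field(x + h, y, z)[0] - field(x - h, y, z)[0]) * face_area   # x-faces
    flux += (field(x, y + h, z)[1] - field(x, y - h, z)[1]) * face_area  # y-faces
    flux += (field(x, y, z + h)[2] - field(x, y, z - h)[2]) * face_area  # z-faces
    return flux / (2 * h) ** 3

def divergence_from_partials(field, p, h=1e-4):
    # Central-difference approximation of dFx/dx + dFy/dy + dFz/dz at p.
    x, y, z = p
    return ((field(x + h, y, z)[0] - field(x - h, y, z)[0]) / (2 * h)
            + (field(x, y + h, z)[1] - field(x, y - h, z)[1]) / (2 * h)
            + (field(x, y, z + h)[2] - field(x, y, z - h)[2]) / (2 * h))

p = (1.0, 2.0, 3.0)
print(divergence_from_flux(F, p))      # approximately 6.0
print(divergence_from_partials(F, p))  # approximately 6.0 (exact value x + y + z = 6 here)

Both estimates agree because, for a small axis-aligned cube, the flux calculation reduces to exactly the same central differences as the coordinate formula.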
Cylindrical coordinates
For a vector expressed in cylindrical coordinates as

F = F_r e_r + F_θ e_θ + F_z e_z,

where e_a is the unit vector in direction a, the divergence is

div F = (1/r) ∂(r F_r)/∂r + (1/r) ∂F_θ/∂θ + ∂F_z/∂z.
Spherical coordinates

In spherical coordinates, with θ the angle with the z axis and φ the rotation around the z axis, the divergence reads

div F = (1/r²) ∂(r² F_r)/∂r + (1/(r sin θ)) ∂(sin θ F_θ)/∂θ + (1/(r sin θ)) ∂F_φ/∂φ.
Decomposition theorem
It can be shown that any stationary flux v(r) which is at least twice continuously differentiable in R3 and vanishes sufficiently fast for |r| → ∞ can be decomposed into an irrotational part E(r) and a source-free part B(r). Moreover, these parts are explicitly determined by the respective source densities (see above) and circulation densities (see the article Curl):

v(r) = E(r) + B(r)

For the irrotational part one has

E(r) = −∇Φ(r), with the scalar potential Φ(r) = ∫ d³r′ div v(r′) / (4π |r − r′|), the integral taken over all of R3.
The source-free part, B, can be similarly written: one only has to replace the scalar potential Φ(r) by a vector potential A(r) and the terms −∇Φ by +∇×A, and the source-density div v by the circulation-density ∇×v.
This "decomposition theorem" is in fact a by-product of the stationary case of electrodynamics. It is a special case of the more general Helmholtz decomposition which works in dimensions greater than three as well.
The divergence is a linear operator, i.e.

div(aF + bG) = a div(F) + b div(G)

for all vector fields F and G and all real numbers a and b.
There is a product rule of the following type: if φ is a scalar-valued function and F is a vector field, then

div(φF) = grad(φ) · F + φ div(F),

or in more suggestive notation

∇ · (φF) = (∇φ) · F + φ (∇ · F).
The divergence of the curl of any vector field (in three dimensions) is equal to zero:

∇ · (∇ × F) = 0.
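The identity can also be verified symbolically; here is a minimal sketch using the SymPy library (the particular field is an arbitrary choice made for the check).

import sympy as sp

x, y, z = sp.symbols('x y z')
F = (x**2 * y, sp.sin(y) * z, x * z**3)  # arbitrary smooth vector field

# curl F = (dFz/dy - dFy/dz, dFx/dz - dFz/dx, dFy/dx - dFx/dy)
curl = (sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y))

# divergence of the curl
div_curl = sp.simplify(sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z))
print(div_curl)  # prints 0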
If a vector field F with zero divergence is defined on a ball in R3, then there exists some vector field G on the ball with F = curl(G). For regions in R3 more complicated than this, the latter statement might be false (see Poincaré lemma). The degree of failure of the truth of the statement, measured by the homology of the chain complex

{scalar fields on U} → {vector fields on U} → {vector fields on U} → {scalar fields on U}
(where the first map is the gradient, the second is the curl, the third is the divergence) serves as a nice quantification of the complicatedness of the underlying region U. These are the beginnings and main motivations of de Rham cohomology.
Relation with the exterior derivative
One can express the divergence as a particular case of the exterior derivative, which takes a 2-form to a 3-form in R3. Define the current two-form

j = F_1 dy ∧ dz + F_2 dz ∧ dx + F_3 dx ∧ dy.

It measures the amount of "stuff" flowing through a surface per unit time in a "stuff fluid" of density ρ = 1 dx ∧ dy ∧ dz moving with local velocity F. Its exterior derivative dj is then given by

dj = (div F) dx ∧ dy ∧ dz.

Thus, the divergence of the vector field F can be expressed as:

div F = ⋆ d ⋆ (F♭).

Here the superscript ♭ is one of the two musical isomorphisms, and ⋆ is the Hodge dual. Note however that working with the current two-form itself and the exterior derivative is usually easier than working with the vector field and divergence, because unlike the divergence, the exterior derivative commutes with a change of (curvilinear) coordinate system.
The divergence of a vector field can be defined in any number of dimensions. If

F = (F_1, F_2, ..., F_n)

in a Euclidean coordinate system with coordinates x_1, x_2, ..., x_n, define

div F = ∇ · F = ∂F_1/∂x_1 + ∂F_2/∂x_2 + ... + ∂F_n/∂x_n.
The appropriate expression is more complicated in curvilinear coordinates.
In the case of one dimension, a "vector field" is simply a regular function, and the divergence is simply the derivative.
For any n, the divergence is a linear operator, and it satisfies the "product rule"

∇ · (φF) = (∇φ) · F + φ (∇ · F)

for any scalar-valued function φ.
The divergence can be defined on any manifold of dimension n with a volume form (or density) μ, e.g. a Riemannian or Lorentzian manifold. Generalising the construction of a two-form for a vector field on R3, on such a manifold a vector field X defines an (n−1)-form j = i_X μ obtained by contracting X with μ. The divergence is then the function div X defined by

dj = (div X) μ.

Standard formulas for the Lie derivative allow us to reformulate this as

L_X μ = (div X) μ.
This means that the divergence measures the rate of expansion of a volume element as we let it flow with the vector field.
On a Riemannian or Lorentzian manifold the divergence with respect to the metric volume form can be computed in terms of the Levi-Civita connection ∇:

div X = ∇ · X = X^a_;a

where the second expression is the contraction of the vector-field-valued 1-form ∇X with itself and the last expression is the traditional coordinate expression used by physicists.
If T is a (p,q)-tensor (p for the contravariant vector and q for the covariant one), then we define the divergence of T to be the (p,q−1)-tensor obtained by tracing the covariant derivative ∇T over the first two covariant indices.
| http://en.wikipedia.org/wiki/Divergence | 13
55 | USING EXCEL TO RECORD AND CALCULATE GRADES
Dr. Richard L. Bowman
A spreadsheet is simply a collection of cells of information in columns and rows. In each cell one can place a number, some text or an equation. Information stored in the cells of a spreadsheet can also be displayed using a variety of graph types. This tutorial will use this format to set up a simple system for recording and calculating students' grades in a course.
To complete this tutorial, you will need the sample Excel workbook with grades for a fictitious physics course. To save this sample spreadsheet, "101-F_F07.xls", right-click on the file name and select the option, "Save Target As." Save the file in an appropriate directory under your computer account, e.g., x:\Grades. Note: This file is in Excel 2003 format but should also load fine in Excel 2007.
II. Start Excel
1. Begin by clicking on the Windows "Start" button, selecting "Programs" and locating the link to Excel. (Usually it will be in a Microsoft Office folder.) Once Excel is running, a blank spreadsheet page will appear ready for entering data, as shown below.
2. Move around the spreadsheet using the arrow keys. As you do notice that the cell identifier at the corner just above the spreadsheet area changes to show which cell you are visiting.
3. Close this instance of Excel before exploring the sample workbook.
III. Loading and Exploring the Sample Workbook
1. To load the sample workbook (which you had saved from the web page), use My Computer to locate the spreadsheet file, "101-F_F07.xls", that you saved earlier. Then open it by double-clicking it.
2. Use the cursor keys to move around the first sheet of this workbook. Notice that in the last two columns, the cells contain equations which generate the text or numerical values seen on the spreadsheet. One of these is a mathematical-type equation and the other column contains a programming-type equation that actually results in text as an answer.
3. Now move to the other sheets of the workbook by clicking on the appropriate tab at the bottom of the screen.
4. Open a new blank workbook. To do this either press the New button or use the New option on the File menu.
5. Compare the width and height of the cells in the blank workbook with those in this sample workbook. To do this, select a cell and then from the Format menu select Row/Height or Column/Width.
Alternately, select a row or column indicator by clicking on the respective number or letter. Then right-click in a cell in the highlighted group and select either Row Height or Column Width.
III. Entering Data and Formulas
Note: At any time that you need more pages to your spreadsheet, select the Insert menu and choose "Worksheet." A new page will be placed in front of the current worksheet, and it will have the name, "Sheet#," where # will be one digit greater than the largest numbered sheet.
1. Numerical values are entered into a cell by moving to the cell in which you want to place the number, typing it in, and pressing return. Numbers are automatically right-justified.
2. To enter some text into a cell, move to that cell and type in the text, pressing return at the end of the text you want entered. Text is automatically left-justified.
Note: If you wish to enter a number as text, simply beginning by typing a single quotation mark(') and then the numerical digits.
3. Equations need to begin with an equals sign.
4. To try data entry, create a new workbook with the New button. In cell A3, enter your name, last name first. Move to B3 and then C3, etc. Enter a series of 5 quiz scores based on a total possible score in each quiz of 5, e.g., 5, 4, 3, 5, and 3.
Note: You can enter the data and move to an adjacent cell all by one press of a cursor arrow key! Try it.
Notice that once data is entered in cell B3, not all of the data in A3 (your name) will be visible. To solve this problem, make column A wider. Click on the "A" and then right-click in a highlighted cell and select Column Width. Enter a larger number. Some trial and error may be necessary.
5. Now sum these up using the SUM function. Move to the cell just to the right of the last quiz grade, press the "fx" button to the left of the window displaying the contents, if any, of the cell where the cursor is located. This area is referred to as the "formula bar." An Insert Function window will appear. From the list of functions select the "SUM" function by clicking it. (If your particular function is not displayed, you may choose a different Category, such as All.)
Proceed by following the directions in the resulting windows. If the cell or range of cells is not the one desired, select the correct cells by making certain that the cells you wish to sum are visible. You may have to drag the Formula Palette window around to clear the cells needed. Select the cells you want to sum over by pressing the left mouse button in the first cell and, while holding down this button, moving through the cells you want to sum. Notice how the cell addresses show up in the Formula Palette as B3:F3. Then press the "OK" button to enter the completed formula in the appropriate cell. A Function Arguments dialog box may pop up in which the cells being summed may be changed.
6. Find the average instead of the sum. Click on the "fx" button again. The Function Arguments dialog box will pop up, and now to the left of the Formula Bar will be an area showing SUM. In place of SUM click on the arrow and from the drop-down menu choose AVERAGE. Then click OK in the Function Arguments dialog box. The number in the equation cell will change from 20 to 4, if you have used the numbers I used.
Note: The functions and the cell addresses can be typed using lowercase letters, and Excel will automatically convert them to uppercase during execution.
7. Some teachers like to assign a certain weight to the different categories that go into making up the final grade. For example, quizzes may be worth 25% of the total, homework might be worth 20%, a paper could be worth 15%, and the exams will be worth 40%. Assume the grade for each of these components is listed as so many points out of 100 and is entered in columns B, C, D, and E. Then in column F the following equation should be entered to calculate the grade for the student in row 3:

=0.25*B3+0.20*C3+0.15*D3+0.40*E3
Do not use the "fx" button but type the equation exactly as it is, beginning with the equals sign.
Note: In a case such as this, the sum of all of the percentages should equal 100% and the coefficients used should be entered as decimals.
8. Let's exclude the lowest two scores from the average computed in #6. In place of the AVERAGE equation, enter a formula along these lines:

=(SUM(B3:F3)-SMALL(B3:F3,1)-SMALL(B3:F3,2))/(COUNT(B3:F3)-2)
IV. Copying and Pasting Formulas
Today's spreadsheet programs, by default, copy and paste formulas in a "relative" sense. That is, you only have to type in a formula once to do a particular task on a series of cells. For example, to find the average of each row in a block of cells, enter the formula once; the cell with the equation can then be copied and pasted into a whole column, and the appropriate cells will be used relative to the cell into which the formula was pasted.
1. As an example, make three more rows of quiz scores similar to the one row you did earlier. Do not include the average for each student.
2. Enter the AVERAGE formula in the cell to the right of the first row. (This will probably be G3, if you are following the given example.) Now copy and paste the cell with the averaging formula in it by moving to the cell and clicking, holding and dragging on the square at the lower right corner of the cell. Drag it down through all three cells. (See the two figures below.)
Note: If you are creating and copying a formula to a cell not adjacent to the original cell, then select the cell or group of cells. Press the Copy button on the toolbar. Move to the new location. Press the paste button. The cell will be copied and pasted. If a group of cells are copied, then only the one cell at the top left of the ending location needs to be selected, the Copy function in Excel will figure out how many rows and columns are involved.
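To make the relative copying concrete, here is roughly what the column of formulas looks like after the drag described above (the cell addresses assume the layout used in this tutorial, with quiz scores in columns B through F beginning at row 3):

G3: =AVERAGE(B3:F3)
G4: =AVERAGE(B4:F4)
G5: =AVERAGE(B5:F5)
G6: =AVERAGE(B6:F6)

Excel adjusts the row numbers automatically because the references are relative. If you ever need a reference to stay fixed when copied, anchor it with dollar signs, e.g. $B$3.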
3. To import these average quiz scores into the sheet where the final grades will be calculated, select the whole column of averages and copy them to the clipboard using the Copy button. Move to the correct sheet and the first cell of the column where these averages are to appear. Now from the "Edit" menu select "Paste Special". Press the "Paste Link" button to bring all of the averages to the main page.
Note: The beauty of this method is that any changes to the individual quiz grades will automatically be reflected on the main page and thus in the final grade. The sample grade workbook has used this method of copying and pasting links. To examine how this actually works, change one of the homework grades for one of the students and notice how the average is updated automatically on that page and on the final grade page.
V. Changing the Width and Height of Cells and the Position and Display of Cell Contents
1. Remember that to change the width or height of cells, one can select a whole column or row and from there right-click and select the appropriate option. Or select a cell and then, from the "Format" menu, select Column and then Width, or Row and then Height.
2. To change the font used in cells, select the cells to be involved and then press the arrow by the font size on the toolbar (by default showing Arial and 10) and select a new font type and/or size.
3. To change the position of data in a cell, use the "Center", "Left" and "Right" justification buttons on the toolbar just as is done in a word processing program such as Word.
4. To round-off numbers displayed in cells, select the cell(s) and then from the "Format" menu select "Cell" and then the "Number" tab and finally the "Number" category. Change the number of digits after the decimal to the desired value and press "OK".
Note: The number is not actually rounded off, only the display has been rounded. Internally, Excel still retains the value of the number to the precision that it can (usually up to 15 digits).
VI. Creating Attendance Sheets
Sometimes it is useful to have a list of student names beside a grid of boxes in which to record attendance or grades before copying them into the spreadsheet. Excel carries out this task rather easily.
Note: A grading sheet is included in the sample workbook.
1. On a blank worksheet in Excel, paste or type a list of the students in the class. Adjust column widths and row heights to allow for easy entry of hand-written data.
2. Click in the upper left-most cell, of the region you wish to make into a grid, and drag to the bottom right-most cell of the grid.
3. Outline the cells of the grid using the border button on the toolbar. If the type of border shown is not what you desire, press the small arrow just to the left of the border button. A list of pictures demonstrating the available bordering options will appear. Select the most appropriate one, or for even more control, choose the Format/Cells/Border menu item.
4. Finally, preview the page you have generated, as described below, and print it out.
VII. Previewing and Printing the Workbook
1. To rename any sheet of the workbook, just double-click on the appropriate name tab near the bottom of the screen. Once the name is highlighted, simply type a new name.
2. To edit or choose headers and footers for the selected sheet, click on "Page Setup" from the "File" menu. Then select the "Headers/Footers" tab. If you want to type in your own text, press the "Custom" button.
3. To see how the layout looks, select as many sheets as you want to preview by holding down the "Ctrl" key while clicking on the name tab of each sheet. Finally select "Print Preview" from the "File" menu and scroll through the sheets.
Caution: Before moving on to do more editing, make sure to go back and click a second time on all of the selected tabs to unselect them. That way changes will be made only to one sheet.
4. To print a sheet, press the Print Button.
5. To print more than one sheet or the whole workbook, select all sheets of interest by the procedure outlined in #3, or select Print from the File menu and choose the pages to be printed.
In addition to this tutorial, the reader will probably find several other resources helpful.
1. Help in Excel
From the Help menu select "Microsoft Excel Help" or simply click on the associated button on the toolbar. This button has a question mark in a blue circle. One can then choose to either search Microsoft Office Online or Office Help (within Excel itself).
2. Web Authoring Tutorials at Bridgewater College's Academic Computing Web Site
3. Other Tutorials
Return to the
Go to Academic Computing at Bridgewater College
|©1997, 2007, Richard L. Bowman
Last modified: 30-Aug-07; by R. Bowman, [email protected] | http://people.bridgewater.edu/~rbowman/acadcomp/ExcelGrades-print.html | 13 |
50 | Elastic modulus is a quantitative measure of how much something wants to return to its original shape and size. Generally it can be thought of as stress over strain; we calculate the elastic modulus using the formula applied pressure / fractional change in size. Young's modulus applies to an elastic rod stretched in one dimension, expanding only in length; such a rod behaves like a spring and can be described using Hooke's law.
Okay so today we're going to talk about elastic modulus. The elastic modulus is a property of solids that tells us how much the solid wants to come back to its original shape and size if we try to deform it. So elastic modulus is always associated with some sort of restorative force which acts to return that solid to its original shape and size. Now the way that we're going to define this is we're going to say e is equal to stress divided by strain. So it's the amount of stress that we put on the material, which is associated with a force that we're putting on the material. So we're going to write that as an applied pressure divided by the amount of strain that the material shows as a result of that stress that we put it under.
The amount of strain we're going to call the fractional change in size. Alright so let's look at these 2 things. Pressure is equal to applied force divided by area as always, so if I'm going to apply a force I've got to apply it over a certain area of that solid. I don't apply it at a pinpoint; I have to apply it over a big area, so the numerator of this fraction is the force divided by the area. Now what's the unit of that? Well we're doing Physics so it's got to be SI, so force is measured in newtons, area is measured in square meters. So the unit of pressure is newtons per square meter, which we'll also call pascals, okay. Alright what about the denominator? Well the denominator is the fractional change in size, so this is defined as how much did the size change divided by how big is it. Now what's the unit of this? Well a change in size divided by its size, it looks like kind of the same thing. So this guy is going to be dimensionless, so that means that the unit of my elastic modulus will be the same as the unit of pressure, pascals.
Alright let's look at a specific example of an elastic modulus. There are actually several of them, but a lot of times when people refer to an elastic modulus and don't say anything else, they really mean Young's modulus. Young's modulus is the tensile modulus; it has to do with what happens when we stretch a rod of material out, okay. So it's like a one-dimensional stretching. So here I've got a rod of material; it's got cross-sectional area a, it's got length l, and I'm going to put a force on it and pull it this way and make it longer by the amount delta l. The way that I'm going to do that is by applying a force f, okay, so that means my elastic modulus must be the applied pressure, force over area, divided by the fractional change in size, delta l over l. So this is my elastic modulus and I can actually look up in a table: what's the elastic modulus of copper? Or what's the elastic modulus of tungsten? Or what's the elastic modulus of steel?
Now what we'll usually use this for is to solve for the force I need to give me a certain change in length of a rod of steel or iron or whatever. So we'll solve this for f: the a is going to go over next to the e and then the delta l over l will come up. So it'll be f equals e a delta l over l, just like that. Now this is kind of interesting because we've seen it before. Let's see if maybe you can remember: we've got a force that we're applying that's proportional to the change in length, because look at all this business here, that's just a constant. Remember, I looked up that number, that number I measured, that's a cross-sectional area and this is just how long it started off being, right? So we've got force equal to a number times how much we changed the length. But this is Hooke's law, so that means that this solid actually behaves like a spring as long as we're only making a small change in its length; it will come back just like a spring.
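As a quick numerical illustration of f = e a delta l / l, the Python sketch below computes the force needed for a small stretch. The modulus, rod dimensions, and stretch are made-up example values for illustration, not numbers taken from the lesson.

def stretch_force(elastic_modulus_pa, area_m2, length_m, delta_length_m):
    # Hooke's-law form of Young's modulus: f = E * A * (delta L) / L, in newtons.
    return elastic_modulus_pa * area_m2 * delta_length_m / length_m

# Assumed example values: a steel-like rod with E about 2.0e11 Pa,
# cross-sectional area 1 cm^2 (1e-4 m^2), length 1 m, stretched by 1 mm.
print(stretch_force(2.0e11, 1e-4, 1.0, 1e-3))  # about 2.0e4 N, roughly 20 kN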
Now of course if I try to pull it too hard I'm going to break the thing. Alright, and that's what's called the elastic limit, so as long as we're only pulling by a small amount, within the elastic limit, then Young's modulus is going to tell us that this rod of material will behave like a spring. Alright so that's elastic modulus. | http://www.brightstorm.com/science/physics/solids-liquids-and-gases/elastic-modulus/ | 13
64 | Temperature, heat, and related concepts belong to the world of physics rather than chemistry; yet it would be impossible for the chemist to work without an understanding of these properties. Thermometers, of course, measure temperature according to one or both of two well-known scales based on the freezing and boiling points of water, though scientists prefer a scale based on the virtual freezing point of all matter. Also related to temperature are specific heat capacity, or the amount of energy required to change the temperature of a substance, and also calorimetry, the measurement of changes in heat as a result of physical or chemical changes. Although these concepts do not originate from chemistry but from physics, they are no less useful to the chemist.
HOW IT WORKS
The area of physics known as thermodynamics, discussed briefly below in terms of thermodynamics laws, is the study of the relationships between heat, work, and energy. Work is defined as the exertion of force over a given distance to displace or move an object, and energy is the ability to accomplish work. Energy appears in numerous manifestations, including thermal energy, or the energy associated with heat.
Another type of energy—one of particular interest to chemists—is chemical energy, related to the forces that attract atoms to one another in chemical bonds. Hydrogen and oxygen atoms in water, for instance, are joined by chemical bonding, and when those bonds are broken, the forces joining the atoms are released in the form of chemical energy. Another example of chemical energy release is combustion, whereby chemical bonds in fuel, as well as in oxygen molecules, are broken and new chemical bonds are formed. The total energy in the newly formed chemical bonds is less than the energy of the original bonds, but the energy that makes up the difference is not lost; it has simply been released.
Energy, in fact, is never lost: a fundamental law of the universe is the conservation of energy, which states that in a system isolated from all other outside factors, the total amount of energy remains the same, though transformations of energy from one form to another take place. When a fire burns, then, some chemical energy is turned into thermal energy. Similar transformations occur between these and other manifestations of energy, including electrical and magnetic (sometimes these two are combined as electromagnetic energy), sound, and nuclear energy. If a chemical reaction makes a noise, for instance, some of the energy in the substances being mixed has been dissipated to make that sound. The overall energy that existed before the reaction will be the same as before; however, the energy will not necessarily be in the same place as before.
Note that chemical and other forms of energy are described as “manifestations,” rather than “types,” of energy. In fact, all of these can be described in terms of two basic types of energy: kinetic energy, or the energy associated with movement, and potential energy, or the energy associated with position. The two are inversely related: thus, if a spring is pulled back to its maximum point of tension, its potential energy is also at a maximum, while its kinetic energy is zero. Once it is released and begins springing through the air to return to the position it maintained before it was stretched, it begins gaining kinetic energy and losing potential energy.
Thermal energy is actually a form of kinetic energy generated by the movement of particles at the atomic or molecular level: the greater the movement of these particles, the greater the thermal energy. When people use the word “heat” in ordinary language, what they are really referring to is “the quality of hotness”—that is, the thermal energy internal to a system. In scientific terms, however, heat is internal thermal energy that flows from one body of matter to another— or, more specifically, from a system at a higher temperature to one at a lower temperature.
Two systems at the same temperature are said to be in a state of thermal equilibrium. When this state exists, there is no exchange of heat. Though in everyday terms people speak of “heat” as an expression of relative warmth or coldness, in scientific terms, heat exists only in transfer between two systems. Furthermore, there can never be a transfer of “cold”; although coldness is a recognizable sensory experience in human life, in scientific terms, cold is simply the absence of heat.
If you grasp a snowball in your hand, the hand of course gets cold. The mind perceives this as a transfer of cold from the snowball, but in fact exactly the opposite has happened: heat has moved from your hand to the snow, and if enough heat enters the snowball, it will melt. At the same time, the departure of heat from your hand results in a loss of internal energy near the surface of the hand, experienced as a sensation of coldness.
Just as heat does not mean the same thing in scientific terms as it does in ordinary language, so “temperature” requires a definition that sets it apart from its everyday meaning. Temperature may be defined as a measure of the average internal energy in a system. Two systems in a state of thermal equilibrium have the same temperature; on the other hand, differences in temperature determine the direction of internal energy flow between two systems where heat is being transferred.
This can be illustrated through an experience familiar to everyone: having one’s temperature taken with a thermometer. If one has a fever, the mouth will be warmer than the thermometer, and therefore heat will be transferred to the thermometer from the mouth. The thermometer, discussed in more depth later in this essay, measures the temperature difference between itself and any object with which it is in contact.
Temperature and Thermodynamics
One might pour a kettle of boiling water into a cold bathtub to heat it up; or one might put an ice cube in a hot cup of coffee “to cool it down.” In everyday experience, these seem like two very different events, but from the standpoint of thermodynamics, they are exactly the same. In both cases, a body of high temperature is placed in contact with a body of low temperature, and in both cases, heat passes from the high-temperature body to the low-temperature body.
The boiling water warms the tub of cool water, and due to the high ratio of cool water to boiling water in the bathtub, the boiling water expends all its energy raising the temperature in the bathtub as a whole. The greater the ratio of very hot water to cool water, of course, the warmer the bathtub will be in the end. But even after the bath water is heated, it will continue to lose heat, assuming the air in the room is not warmer than the water in the tub—a safe assumption. If the water in the tub is warmer than the air, it will immediately begin transferring thermal energy to the lower-temperature air until their temperatures are equalized.

Because of water's high specific heat capacity, cities located next to large bodies of water tend to stay warmer in the winter and cooler in the summer. During the early summer months, for instance, Chicago's lakefront stays cooler than areas further inland. This is because the lake is cooled from the winter's cold temperatures and snow runoff.
As for the coffee and the ice cube, what happens is opposite to the explanation ordinarily given. The ice does not “cool down” the coffee: the coffee warms up, and presumably melts, the ice. However, it expends at least some of its thermal energy in doing so, and, as a result, the coffee becomes cooler than it was.
The Laws of Thermodynamics
These situations illustrate the second of the three laws of thermodynamics. Not only do these laws help to clarify the relationship between heat, temperature, and energy, but they also set limits on what can be accomplished in the world. Hence British writer and scientist C. P. Snow (1905-1980) once described the thermodynamics laws as a set of rules governing an impossible game.
The first law of thermodynamics is essentially the same as the conservation of energy: because the amount of energy in a system remains constant, it is impossible to perform work that results in an energy output greater than the energy input. It could be said that the conservation of energy shows that “the glass is half full”: energy is never lost. By contrast, the first law of thermodynamics shows that “the glass is half empty”: no system can ever produce more energy than was put into it. Snow therefore summed up the first law as stating that the game is impossible to win.
The second law of thermodynamics begins from the fact that the natural flow of heat is always from an area of higher temperature to an area of lower temperature—just as was shown in the bathtub and coffee cup examples above. Consequently, it is impossible for any system to take heat from a source and perform an equivalent amount of work: some of the heat will always be lost. In other words, no system can ever be perfectly efficient: there will always be a degree of breakdown, evidence of a natural tendency called entropy.
Snow summed up the second law of thermodynamics, sometimes called “the law of entropy,” thus: not only is it impossible to win, it is impossible to break even. In effect, the second law compounds the “bad news” delivered by the first with some even worse news. Though it is true that energy is never lost, the energy available for work output will never be as great as the energy put into a system.
The third law of thermodynamics states that at the temperature of absolute zero—a phenomenon discussed later in this essay—entropy also approaches zero. This might seem to counteract the second law, but in fact the third states in effect that absolute zero is impossible to reach. The French physicist and engineer Sadi Carnot (1796-1832) had shown that a perfectly efficient engine is one whose lowest temperature was absolute zero; but the second law of thermodynamics shows that a perfectly efficient engine (or any other perfect system) cannot exist. Hence, as Snow observed, not only is it impossible to win or break even; it is impossible to get out of the game.
Evolution of the Thermometer
A thermometer is a device that gauges temperature by measuring a temperature-dependent property, such as the expansion of a liquid in a sealed tube. The Greco-Roman physician Galen (c. 129-c. 199) was among the first thinkers to envision a scale for measuring temperature, but development of a practical temperature-measuring device—the thermoscope—did not occur until the sixteenth century.
The great physicist Galileo Galilei (1564-1642) may have invented the thermoscope; certainly he constructed one. Galileo's thermoscope consisted of a long glass tube planted in a container of liquid. Prior to inserting the tube into the liquid—which was usually colored water, though Galileo's thermoscope used wine—as much air as possible was removed from the tube. This created a vacuum (an area devoid of matter, including air), and as a result of pressure differences between the liquid and the interior of the thermoscope tube, some of the liquid went into the tube.
But the liquid was not the thermometric medium—that is, the substance whose temperature-dependent property changes were measured by the thermoscope. (Mercury, for instance, is the thermometric medium in many thermometers today; however, due to the toxic quality of mercury, an effort is underway to remove mercury thermometers from U.S. schools.) Instead, the air was the medium whose changes the thermoscope measured: when it was warm, the air expanded, pushing down on the liquid; and when the air cooled, it contracted, allowing the liquid to rise.
Early Thermometers: The Search For a Temperature Scale
The first true thermometer, built by Ferdinand II, Grand Duke of Tuscany (1610-1670) in 1641, used alcohol sealed in glass. The latter was marked with a temperature scale containing 50 units, but did not designate a value for zero. In 1664, English physicist Robert Hooke (1635-1703) created a thermometer with a scale divided into units equal to about 1/500 of the volume of the thermometric medium. For the zero point, Hooke chose the temperature at which water freezes, thus establishing a standard still used today in the Fahrenheit and Celsius scales.
Olaus Roemer (1644-1710), a Danish astronomer, introduced another important standard. Roemer’s thermometer, built in 1702, was based not on one but two fixed points, which he designated as the temperature of snow or crushed ice on the one hand, and the boiling point of water on the other. As with Hooke’s use of the freezing point, Roemer’s idea of designating the freezing and boiling points of water as the two parameters for temperature measurements has remained in use ever since.
The Fahrenheit Scale
Not only did he develop the Fahrenheit scale, oldest of the temperature scales still used in Western nations today, but in 1714, German physicist Daniel Fahrenheit (1686-1736) built the first thermometer to contain mercury as a thermometric medium. Alcohol has a low boiling point, whereas mercury remains fluid at a wide range of temperatures. In addition, it expands and contracts at a very constant rate, and tends not to stick to glass. Furthermore, its silvery color makes a mercury thermometer easy to read.
Fahrenheit also conceived the idea of using “degrees” to measure temperature. It is no mistake that the same word refers to portions of a circle, or that exactly 180 degrees—half the number of degrees in a circle—separate the freezing and boiling points for water on Fahrenheit’s thermometer. Ancient astronomers first divided a circle into 360 degrees, as a close approximation of the ratio between days and years, because 360 has a large quantity of divisors. So, too, does 180—a total of 16 whole-number divisors other than 1 and itself.
Though today it might seem obvious that 0 should denote the freezing point of water, and 180 its boiling point, such an idea was far from obvious in the early eighteenth century. Fahrenheit considered a 0-to-180 scale, but also a 180-to-360 one, yet in the end he chose neither—or rather, he chose not to equate the freezing point of water with zero on his scale. For zero, he chose the coldest possible temperature he could create in his laboratory, using what he described as “a mixture of sal ammoniac or sea salt, ice, and water.” Salt lowers the melting point of ice (which is why it is used in the northern United States to melt snow and ice from the streets on cold winter days), and thus the mixture of salt and ice produced an extremely cold liquid water whose temperature he equated to zero.
On the Fahrenheit scale, the ordinary freezing point of water is 32°, and the boiling point exactly 180° above it, at 212°. Just a few years after Fahrenheit introduced his scale, in 1730, a French naturalist and physicist named Rene Antoine Ferchault de Reaumur (1683-1757) presented a scale for which 0° represented the freezing point of water and 80° the boiling point. Although the Reaumur scale never caught on to the same extent as Fahrenheit’s, it did include one valuable addition: the specification that temperature values be determined at standard sea-level atmospheric pressure.
The Celsius Scale
With its 32° freezing point and its 212° boiling point, the Fahrenheit system lacks the neat orderliness of a decimal or base-10 scale. Thus when France adopted the metric system in 1799, it chose as its temperature scale not the Fahrenheit but the Celsius scale. The latter was created in 1742 by Swedish astronomer Anders Celsius (1701-1744).
Like Fahrenheit, Celsius chose the freezing and boiling points of water as his two reference points, but he determined to set them 100, rather than 180, degrees apart. The Celsius scale is sometimes called the centigrade scale, because it is divided into 100 degrees, cent being a Latin root meaning “hundred.” Interestingly, Celsius planned to equate 0° with the boiling point, and 100° with the freezing point; only in 1750 did fellow Swedish physicist Martin Stromer change the orientation of the Celsius scale. In accordance with the innovation offered by Reaumur, Celsius’s scale was based not simply on the boiling and freezing points of water, but specifically on those points at normal sea-level atmospheric pressure.
In SI, a scientific system of measurement that incorporates units from the metric system along with additional standards used only by scientists, the Celsius scale has been redefined in terms of the triple point of water. (Triple point is the temperature and pressure at which a substance is at once a solid, liquid, and vapor.) According to the SI definition, the triple point of water—which occurs at a pressure considerably below normal atmospheric pressure—is exactly 0.01°C.
The Kelvin Scale
French physicist and chemist J. A. C. Charles (1746-1823), who is credited with the gas law that bears his name (see below), discovered that at 0°C, the volume of gas at constant pressure drops by 1/273 for every Celsius degree drop in temperature. This suggested that the gas would simply disappear if cooled to -273°C, which of course made no sense.
The man who solved the quandary raised by Charles’s discovery was William Thompson, Lord Kelvin (1824-1907), who, in 1848, put forward the suggestion that it was the motion of molecules, and not volume, that would become zero at -273°C. He went on to establish what came to be known as the Kelvin scale. Sometimes known as the absolute temperature scale, the Kelvin scale is based not on the freezing point of water, but on absolute zero—the temperature at which molecular motion comes to a virtual stop. This is -273.15°C (-459.67°F), which, in the Kelvin scale, is designated as 0K. (Kelvin measures do not use the term or symbol for “degree.”)
Though scientists normally use metric units, they prefer the Kelvin scale to Celsius because the absolute temperature scale is directly related to average molecular translational energy, based on the relative motion of molecules. Thus if the Kelvin temperature of an object is doubled, this means its average molecular translational energy has doubled as well. The same cannot be said if the temperature were doubled from, say, 10°C to 20°C, or from 40°C to 80°F, since neither the Celsius nor the Fahrenheit scale is based on absolute zero.
Conversions between scales
The Kelvin scale is closely related to the Celsius scale, in that a difference of one degree measures the same amount of temperature in both. Therefore, Celsius temperatures can be converted to Kelvins by adding 273.15. Conversion between Celsius and Fahrenheit figures, on the other hand, is a bit trickier.
To convert a temperature from Celsius to Fahrenheit, multiply by 9/5 and add 32. It is important to perform the steps in that order, because reversing them will produce a wrong figure. Thus, 100°C multiplied by 9/5 or 1.8 equals 180, which, when added to 32 equals 212°F. Obviously, this is correct, since 100°C and 212°F each represent the boiling point of water. But if one adds 32 to 100°, then multiplies it by 9/5, the result is 237.6°F—an incorrect answer.
For converting Fahrenheit temperatures to Celsius, there are also two steps involving multiplication and subtraction, but the order is reversed. Here, the subtraction step is performed before the multiplication step: thus 32 is subtracted from the Fahrenheit temperature, then the result is multiplied by 5/9. Beginning with 212°F, when 32 is subtracted, this equals 180. Multiplied by 5/9, the result is 100°C—the correct answer.
One reason the conversion formulae use simple fractions instead of decimal fractions (what most people simply call "decimals") is that 5/9 is a repeating decimal fraction (0.55555...).

Furthermore, the symmetry of 5/9 and 9/5 makes memorization easy. One way to remember the formula is that Fahrenheit is multiplied by a fraction—since 5/9 is a real fraction, whereas 9/5 is actually a mixed number, or a whole number plus a fraction.
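The two-step conversions described above are easy to mechanize; here is a minimal Python sketch (the function names are my own, not part of any standard library).

def celsius_to_fahrenheit(c):
    # Multiply by 9/5, then add 32.
    return c * 9 / 5 + 32

def fahrenheit_to_celsius(f):
    # Subtract 32, then multiply by 5/9.
    return (f - 32) * 5 / 9

def celsius_to_kelvin(c):
    # Celsius and Kelvin degrees are the same size; only the zero point differs.
    return c + 273.15

print(celsius_to_fahrenheit(100))  # 212.0
print(fahrenheit_to_celsius(212))  # 100.0
print(celsius_to_kelvin(-273.15))  # 0.0, absolute zero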
For a thermometer, it is important that the glass tube be kept sealed; changes in atmospheric pressure contribute to inaccurate readings, because they influence the movement of the thermometric medium. It is also important to have a reliable thermometric medium, and, for this reason, water—so useful in many other contexts—was quickly discarded as an option.
Water has a number of unusual properties: it does not expand uniformly with a rise in temperature, or contract uniformly with a lowered temperature. Rather, it reaches its maximum density at 39.2°F (4°C), and is less dense both above and below that temperature. Therefore alcohol, which responds in a much more uniform fashion to changes in temperature, soon took the place of water, and is still used in many thermometers today. But for the reasons mentioned earlier, mercury is generally considered preferable to alcohol as a thermometric medium.
In a typical mercury thermometer, mercury is placed in a long, narrow sealed tube called a capillary. The capillary is inscribed with figures for a calibrated scale, usually in such a way as to allow easy conversions between Fahrenheit and Celsius. A thermometer is calibrated by measuring the difference in height between mercury at the freezing point of water, and mercury at the boiling point of water. The interval between these two points is then divided into equal increments—180, as we have seen, for the Fahrenheit scale, and 100 for the Celsius scale.
Volume Gas Thermometers
Whereas most liquids and solids expand at an irregular rate, gases tend to follow a fairly regular pattern of expansion in response to increases in temperature. The predictable behavior of gases in these situations has led to the development of the volume gas thermometer, a highly reliable instrument against which other thermometers—including those containing mercury—are often calibrated.
In a volume gas thermometer, an empty container is attached to a glass tube containing mercury. As gas is released into the empty container, the column of mercury moves upward. The difference between the earlier position of the mercury and its position after the introduction of the gas shows the difference between normal atmospheric pressure and the pressure of the gas in the container. It is then possible to use the changes in the volume of the gas as a measure of temperature.
All matter displays a certain resistance to electric current, a resistance that changes with temperature; because of this, it is possible to obtain temperature measurements using an electric thermometer. A resistance thermometer is equipped with a fine wire wrapped around an insulator: when a change in temperature occurs, the resistance in the wire changes as well. This allows much quicker temperature readings than those offered by a thermometer containing a traditional thermometric medium.
Resistance thermometers are highly reliable, but expensive, and primarily are used for very precise measurements. More practical for everyday use is a thermistor, which also uses the principle of electric resistance, but is much simpler and less expensive. Thermistors are used for providing measurements of the internal temperature of food, for instance, and for measuring human body temperature.
Another electric temperature-measurement device is a thermocouple. When wires of two different materials are connected, this creates a small level of voltage that varies as a function of temperature. A typical thermocouple uses two junctions: a reference junction, kept at some constant temperature, and a measurement junction. The measurement junction is applied to the item whose temperature is to be measured, and any temperature difference between it and the reference junction registers as a voltage change, measured with a meter connected to the system.
Other Types of Thermometer
A pyrometer also uses electromagnetic properties, but of a very different kind. Rather than responding to changes in current or voltage, the pyrometer is gauged to respond to visible and infrared radiation. As with the thermocouple, a pyrometer has both a reference element and a measurement element, which compares light readings between the reference filament and the object whose temperature is being measured.
Still other thermometers, such as those in an oven that register the oven's internal temperature, are based on the expansion of metals with heat. In fact, there are a wide variety of thermometers, each suited to a specific purpose. A pyrometer, for instance, is good for measuring the temperature of an object with which the thermometer itself is not in physical contact.
The measurement of temperature by degrees in the Fahrenheit or Celsius scales is a part of daily life, but measurements of heat are not as familiar to the average person. Because heat is a form of energy, and energy is the ability to perform work, heat is therefore measured by the same units as work. The principal SI unit of work or energy is the joule (J). A joule is equal to 1 newton-meter (N • m)—in other words, the amount of energy required to accelerate a mass of 1 kilogram at the rate of 1 meter per second squared across a distance of 1 meter.
The joule’s equivalent in the English system is the foot-pound: 1 foot-pound is equal to 1.356 J, and 1 joule is equal to 0.7376 ft • lbs. In the British system, Btu, or British thermal unit, is another measure of energy, though it is primarily used for machines. Due to the cumbersome nature of the English system, contrasted with the convenience of the decimal units in the SI system, these English units of measure are not used by chemists or other scientists for heat measurement.
Specific Heat Capacity
Specific heat capacity (sometimes called specific heat) is the amount of heat that must be added to, or removed from, a unit of mass for a given substance to change its temperature by 1°C. Typically, specific heat capacity is measured in units of J/g • °C (joules per gram-degree Celsius).
The specific heat capacity of water is measured by the calorie, which, along with the joule, is an important SI measure of heat. Often another unit, the kilocalorie—which, as its name suggests, is 1,000 calories—is used. This is one of the few confusing aspects of SI, which is much simpler than the English system. The dietary Calorie (capital C) with which most people are familiar is not the same as a calorie (lowercase c)—rather, a dietary Calorie is the same as a kilocalorie.
Comparing specific heat capacities
The higher the specific heat capacity, the more resistant the substance is to changes in temperature. Many metals, in fact, have a low specific heat capacity, making them easy to heat up and cool down. This contributes to the tendency of metals to expand when heated, and thus affects their malleability. On the other hand, water has a high specific heat capacity, as discussed below; indeed, if it did not, life on Earth would hardly be possible.
One of the many unique properties of water is its very high specific heat capacity, which is easily derived from the value of a kilocalorie: it is 4.184, the same number of joules required to equal a calorie. Few substances even come close to this figure. At the low end of the spectrum are lead, gold, and mercury, with specific heat capacities of 0.13, 0.13, and 0.14 respectively. Aluminum has a specific heat capacity of 0.89, and ethyl alcohol of 2.43. The value for concrete, one of the highest for any substance other than water, is 2.9.
As high as the specific heat capacity of concrete is, that of water is more than 40% higher. On the other hand, water in its vapor state (steam) has a much lower specific heat capacity—2.01. The same is true for solid water, or ice, with a specific heat capacity of 2.03. Nonetheless, water in its most familiar form has an astoundingly high specific heat capacity, and this has several effects in the real world.
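Because specific heat capacity is defined per gram and per degree Celsius, the heat required for a given change follows directly as heat = capacity × mass × temperature change. The Python sketch below compares water and lead using the capacity values quoted above (in J/g·°C); the mass and temperature change are arbitrary example numbers.

def heat_required_joules(specific_heat_j_per_g_c, mass_g, delta_t_c):
    # Heat (in joules) needed to change the temperature of mass_g grams by delta_t_c degrees Celsius.
    return specific_heat_j_per_g_c * mass_g * delta_t_c

# Warm 100 g of each substance by 10 degrees Celsius.
print(heat_required_joules(4.184, 100, 10))  # water: 4184 J
print(heat_required_joules(0.13, 100, 10))   # lead: 130 J, far easier to heat up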
Effects of water’s high specific heat capacity
For instance, water is much slower to freeze in the winter than most substances. Furthermore, due to other unusual aspects of water—primarily the fact that it actually becomes less dense as a solid—the top of a lake or other body of water freezes first. Because ice is a poor medium for the conduction of heat (a consequence of its specific heat capacity), the ice at the top forms a layer that protects the still-liquid water below it from losing heat. As a result, the water below the ice layer does not freeze, and the animal and plant life in the lake is preserved.
Conversely, when the weather is hot, water is slow to experience a rise in temperature. For this reason, a lake or swimming pool makes a good place to cool off on a sizzling summer day. Given the high specific heat capacity of water, combined with the fact that much of Earth’s surface is composed of water, the planet is far less susceptible than other bodies in the Solar System to variations in temperature.
The same is true of another significant natural feature, one made mostly of water: the human body. A healthy human temperature is 98.6°F (37°C), and, even in cases of extremely high fever, an adult’s temperature rarely climbs by more than 5°F (2.7°C). The specific heat capacity of the human body, though it is of course lower than that of water itself (since it is not entirely made of water), is nonetheless quite high: 3.47.
The measurement of heat gain or loss as a result of physical or chemical change is called calorimetry (pronounced kal-or-IM-uh-tree). Like the word “calorie,” the term is derived from a Latin root word meaning “heat.” The foundations of calorimetry go back to the mid-nineteenth century, but the field owes much to the work of scientists about 75 years prior to that time.
In 1780, French chemist Antoine Lavoisier (1743-1794) and French astronomer and mathematician Pierre Simon Laplace (1749-1827) had used a rudimentary ice calorimeter for measuring heat in the formations of compounds. Around the same time, Scottish chemist Joseph Black (1728-1799) became the first scientist to make a clear distinction between heat and temperature.
By the mid-1800s, a number of thinkers had come to the realization that—contrary to prevailing theories of the day—heat was a form of energy, not a type of material substance. (The belief that heat was a material substance, called “phlogiston,” and that phlogiston was the part of a substance that burned in combustion, had originated in the seventeenth century. Lavoisier was the first scientist to successfully challenge the phlogiston theory.) Among these were American-British physicist Benjamin Thompson, Count Rumford (1753-1814) and English chemist James Joule (1818-1889)—for whom, of course, the joule is named.
Calorimetry as a scientific field of study actually had its beginnings with the work of French chemist Pierre-Eugene Marcelin Berthelot (1827-1907). During the mid-1860s, Berthelot became intrigued with the idea of measuring heat, and, by 1880, he had constructed the first real calorimeter.
Essential to calorimetry is the calorimeter, which can be any device for accurately measuring the temperature of a substance before and after a change occurs. A calorimeter can be as simple as a styrofoam cup. Its quality as an insulator, which makes styrofoam ideal both for holding in the warmth of coffee and protecting the human hand from scalding, also makes styrofoam an excellent material for calorimetric testing. With a styrofoam calorimeter, the temperature of the substance inside the cup is measured, a reaction is allowed to take place, and afterward, the temperature is measured a second time.
The most common type of calorimeter used is the bomb calorimeter, designed to measure the heat of combustion. Typically, a bomb calorimeter consists of a large container filled with water, into which is placed a smaller container, the combustion crucible. The crucible is made of metal, with thick walls into which is cut an opening to allow the introduction of oxygen. In addition, the combustion crucible is designed to be connected to a source of electricity.
In conducting a calorimetric test using a bomb calorimeter, the substance or object to be studied is placed inside the combustion crucible and ignited. The resulting reaction usually occurs so quickly that it resembles the explosion of a bomb—hence the name “bomb calorimeter.” Once the “bomb” goes off, the resulting transfer of heat creates a temperature change in the water, which can be readily gauged with a thermometer.
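In either kind of calorimeter, the heat released or absorbed by the process is inferred from the temperature change of the surrounding water, using water's specific heat capacity discussed earlier. A simplified Python sketch follows; the mass and temperature readings are invented example values, and heat absorbed by the container itself is ignored.

def heat_absorbed_by_water_joules(mass_g, t_initial_c, t_final_c, c_water=4.184):
    # Heat gained by the water, in joules; a positive result means the process released heat.
    return c_water * mass_g * (t_final_c - t_initial_c)

# Example run: 150 g of water in the calorimeter warms from 21.0 to 27.5 degrees Celsius.
q = heat_absorbed_by_water_joules(150, 21.0, 27.5)
print(q)         # about 4079 J
print(q / 1000)  # about 4.1 kJ released by the reaction (exothermic)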
To study heat changes at temperatures higher than the boiling point of water, physicists use substances with higher boiling points. For experiments involving extremely large temperature ranges, an aneroid (without liquid) calorimeter may be used. In this case, the lining of the combustion crucible must be of a metal, such as copper, with a high coefficient or factor of thermal conductivity—that is, the ability to conduct heat from molecule to molecule.
Temperature in Chemistry
The Gas Laws
A collection of statements regarding the behavior of gases, the gas laws are so important to chemistry that a separate essay is devoted to them elsewhere. Several of the gas laws relate temperature to pressure and volume for gases. Indeed, gases respond to changes in temperature with dramatic changes in volume; hence the term “volume,” when used in reference to a gas, is meaningless unless pressure and temperature are specified as well.
Among the gas laws, Boyle’s law holds that in conditions of constant temperature, an inverse relationship exists between the volume and pressure of a gas: the greater the pressure, the less the volume, and vice versa. Even more relevant to the subject of thermal expansion is Charles’s law, which states that when pressure is kept constant, there is a direct relationship between volume and absolute temperature.
Chemical equilibrium and changes in temperature
Just as two systems that exchange no heat are said to be in a state of thermal equilibrium, chemical equilibrium describes a dynamic state in which the concentration of reactants and products remains constant. Though the concentrations of reactants and products do not change, note that chemical equilibrium is a dynamic state—in other words, there is still considerable molecular activity, but no net change.
Calculations involving chemical equilibrium make use of a figure called the equilibrium constant (K). According to Le Chatelier’s principle, named after French chemist Henri Le Chatelier (1850-1936), whenever a stress or change is imposed on a chemical system in equilibrium, the system will adjust the amounts of the various substances in such a way as to reduce the impact of that stress. An example of a stress is a change in temperature, which changes the equilibrium equation by shifting K (itself dependent on temperature).
Using Le Chatelier’s principle, it is possible to determine how the equilibrium will shift when the temperature is changed. For an exothermic reaction (a reaction that releases heat), an increase in temperature shifts the equilibrium to the left, in the direction of the reverse reaction, and K decreases. For an endothermic reaction (a reaction that absorbs heat), an increase in temperature shifts the equilibrium to the right, in the direction of the forward reaction, and K increases.
Temperature and reaction rates
Another important function of temperature in chemical processes is its role in speeding up chemical reactions. An increase in the concentration of reacting molecules, naturally, leads to a sped-up reaction, because there are simply more molecules colliding with one another. But it is also possible to speed up the reaction without changing the concentration.
By definition, wherever a temperature increase is involved, there is always an increase in average molecular translational energy. When temperatures are high, more molecules are colliding, and the collisions that occur are more energetic. The likelihood is therefore increased that any particular collision will result in the energy necessary to break chemical bonds, and thus bring about the rearrangements in molecules needed for a reaction.
Absolute zero: The temperature, defined as 0K on the Kelvin scale, at which the motion of molecules in a solid virtually ceases. The third law of thermodynamics establishes the impossibility of actually reaching absolute zero.
Calorie: A unit of heat in the metric system, equal to the heat that must be added to or removed from 1 gram of water to change its temperature by 1°C. The dietary Calorie (capital C), with which most people are familiar, is the same as the kilocalorie.
Calorimetry: The measurement of heat gain or loss as a result of physical or chemical change.
Celsius scale: The metric scale of temperature, sometimes known as the centigrade scale, created in 1742 by Swedish astronomer Anders Celsius (1701-1744). The Celsius scale establishes the freezing and boiling points of water at 0° and 100° respectively. To convert a temperature from the Celsius to the Fahrenheit scale, multiply by 9/5 and add 32. Though the worldwide scientific community uses the metric or SI system for most measurements, scientists prefer the related Kelvin scale of absolute temperature.
Conservation of energy: A law of physics which holds that within a system isolated from all other outside factors, the total amount of energy remains the same, though transformations of energy from one form to another take place. The first law of thermodynamics is the same as the conservation of energy.
Energy: The ability to accomplish work—that is, the exertion of force over a given distance to displace or move an object.
Entropy: The tendency of natural systems toward breakdown, and specifically the tendency for the energy in a system to be dissipated. Entropy is closely related to the second law of thermodynamics.
Fahrenheit scale: The oldest of the temperature scales still in use, created in 1714 by German physicist Daniel Fahrenheit (1686-1736). The Fahrenheit scale establishes the freezing and boiling points of water at 32° and 212° respectively. To convert a temperature from the Fahrenheit to the Celsius scale, subtract 32 and multiply by 5/9.
First law of thermodynamics: A law which states that the amount of energy in a system remains constant, and that it is therefore impossible to perform work that results in an energy output greater than the energy input. This is the same as the conservation of energy.
Heat: Internal thermal energy that flows from one body of matter to another.
Joule: The principal unit of energy— and thus of heat—in the SI or metric system, corresponding to 1 newton-meter (N • m). A joule (J) is equal to 0.7376 foot-pounds in the English system.
Kelvin scale: Established by William Thomson, Lord Kelvin (1824-1907), the Kelvin scale measures temperature in relation to absolute zero, or 0 K. (Units in the Kelvin system, known as kelvins, do not include the word or symbol for degree.) The Kelvin scale, which is the system usually favored by scientists, is directly related to the Celsius scale; hence Celsius temperatures can be converted to kelvins by adding 273.15.
Kilocalorie: A unit of heat in the metric system, equal to the heat that must be added to or removed from 1 kilogram of water to change its temperature by 1°C. As its name suggests, a kilocalorie is 1,000 calories. The dietary Calorie (capital C) with which most people are familiar is the same as the kilocalorie.
Kinetic energy: The energy that an object possesses by virtue of its motion.
Molecular translational energy: The kinetic energy in a system produced by the movement of molecules in relation to one another. Thermal energy is a manifestation of molecular translational energy.
Second law of thermodynamics: A law of thermodynamics which states that no system can simply take heat from a source and perform an equivalent amount of work. This is a result of the fact that the natural flow of heat is always from a high-temperature reservoir to a low-temperature reservoir. In the course of such a transfer, some of the heat will always be lost—an example of entropy. The second law is sometimes referred to as “the law of entropy.”
Specific heat capacity: The amount of heat that must be added to, or removed from, a unit of mass of a given substance to change its temperature by 1°C. It is typically measured in J/g • °C (joules per gram-degree Celsius). The specific heat capacity of water is 1 calorie per gram per degree Celsius, or about 4.184 J/g • °C.
System: In chemistry and other sciences, the term “system” usually refers to any set of interactions isolated from the rest of the universe. Anything outside of the system, including all factors and forces irrelevant to a discussion of that system, is known as the environment.
Thermal energy: Heat energy resulting from internal kinetic energy.
Thermal equilibrium: A situation in which two systems have the same temperature. As a result, there is no exchange of heat between them.
Thermodynamics: The study of the relationships between heat, work, and energy.
Thermometer: A device that gauges temperature by measuring a temperature-dependent property, such as the expansion of a liquid in a sealed tube, or resistance to electric current.
Thermometric medium: A substance whose physical properties change with temperature. A mercury or alcohol thermometer measures such changes.
Third law of thermodynamics: A law of thermodynamics stating that as the temperature approaches absolute zero, entropy also approaches zero. Because zero entropy would contradict the second law of thermodynamics, absolute zero is impossible to reach. | http://what-when-how.com/chemistry/temperature-and-heat/ | 13
77 | Demonstration of Math Applets
Instructions and a detailed help menu are provided with each applet. This demonstration module illustrates the use of two of them: the Java Math Pad Applet and the Plot-Solve Applet.
Java Math Pad Applet
This applet allows the user to evaluate mathematical expressions. A mathematical expression is made of numbers, variables, built-in and user defined functions together with operations. For more information on the use of this applet, click on the scroll down menu under "Help Topics". The following examples will illustrate some of the features of the applet.
Example 1 Entering and Evaluating Mathematical Expressions
Evaluate the expression 3ab - 4c³ when
a = -2, b = 3, and c = -5.
Begin by assigning values to the variables. On the Math Pad screen,
type a = -2, then press enter;
type b = 3, then press enter;
type c = -5, then press enter;
Next we type the expression using * to indicate multiplication and ^ for exponents and press enter. Java Math Pad will return the numerical value of our expression. See Figure 1.
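If the applet itself is not available, the same evaluation can be reproduced in plain Python; a minimal sketch:

# Example 1 without the applet: evaluate 3ab - 4c^3 at a = -2, b = 3, c = -5.
a, b, c = -2, 3, -5
print(3 * a * b - 4 * c**3)  # 482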
Example 2 Defining and Evaluating Functions
Evaluate the function f(x) = -3x² + 17x - 33 when x = -2.4
Click on the Clear button to clear the Java Math Pad screen. Then type f(x)=-3*x^2+17*x-33 and press enter to define the function. Now type f(-2.4) and enter. Java Math Pad will return the value of the function. See Figure 2.
The function f(x) will be stored in Math Pad's memory until you clear it. So, you can evaluate the function for as many different values of x as you desire. To clear a defined function from Math Pad's memory, type "clear function name". For example, to clear the function f(x) that we just defined, type "clear f ". You can also simply define a new function named f.
Example 3 Evaluating Trigonometric Functions
Find the value of each of the following trigonometric expressions.
(b) cos (π / 4)
The trigonometric and inverse trigonometric functions
are built-in functions in Java Math Pad. In the case of the sine and
cosine functions, the argument must be a number expressed in radians
and must be enclosed in parentheses. For the inverse sine, the argument
should be a number between -1 and 1 and should also be enclosed in
parentheses; Math Pad will give the answer in radians. See Figure
3. For more information about the built-in functions, go to "built-in
functions" in Java Math Pad's help menu.
Example 4 Combining Functions to Form a New Function
Given the functions f(x) = 2x + 5, g(x) = 3x² + 2, and h(x) = √(x - 1), evaluate each of the following:
(a) f (-2) + g(-2)
(b) f(1) / g(1)
(c) h (f(6))
Click the Clear button to clear the Math Pad screen. Next define the three functions. Use sqrt to denote the square root. Type f(x)=2*x+5 and press enter, then type g(x)=3*x^2+2 and press enter, and finally type h(x)=sqrt(x-1) and press enter once again.
(a) Simply type f(-2)+g(-2) and press enter. Math Pad will return the numerical value.
(b) For this problem, type f(1)/g(1) and press enter.
(c) To evaluate the composite function h(f(x)) when x = 6, you must first assign the value 6 to the variable x. To do this type x = 6 and press enter. Then type h(f(x)) and press enter. See Figure 4.
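For comparison, a minimal Python sketch of the three parts of Example 4:

import math

def f(x): return 2 * x + 5
def g(x): return 3 * x**2 + 2
def h(x): return math.sqrt(x - 1)

print(f(-2) + g(-2))  # (a) 1 + 14 = 15
print(f(1) / g(1))    # (b) 7 / 5 = 1.4
print(h(f(6)))        # (c) h(17) = sqrt(16) = 4.0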
Plot-Solve Applet
The purpose of this applet is to plot functions. It can plot up to 10 different functions at the same time. It can also be used to solve equations numerically. This demo includes instructions for using the applet and examples that will illustrate some of the features.
1. The Viewing Window Parameters area is used to set the area of the graph the user wishes to see. Use it as follows:
Specify the desired values for XMin, XMax, YMin, YMax. The applet will do basic error checking. If an entry other than a number is typed in, an error will be displayed, and the focus will remain in the field which caused the error.
When Use y-range is checked, the supplied values for the y-range will be used. Otherwise, the applet will find YMin and YMax from the supplied functions.
Any change in this area will take effect only after the Plot button is pressed.
2. The Function Information area is where
the functions to be plotted are entered. Use this area as follows:
- To enter a new function, always press the New
Function button. Then, enter the expression defining the function.
The syntax is similar to the syntax used in the Java Math Engine.
For example, to define the sine function, you would type in sin(x).
Make sure you use x for the variable in your definition.
- A maximum of 10 functions can be defined at
the same time.
- Checking Active means the function will show
on the graph. Unselecting it means the function will not show.
- Use the + and - buttons to scroll through the
list of defined functions.
- Each defined function has a different color
assigned to it. The selection is automatic.
- When a function is defined, it is automatically
assigned a name of the form fi where i is a number which starts
at 1 and is incremented every time a new function is defined.
The name a function will be saved under appears to the left of
the field where it is defined.
- Once a function is defined, its name can be
used in the definition of other functions. For example, if two
functions have been defined, the definition of function 3 could
be f1(x)+x*f2(x). Note that we use f1(x) and not just f1.
- The functions defined will plot only after Plot has been pressed.
- Del Function will delete the function currently
showing. When deleting a function, make sure that it is not used
in the definition of another function. An error would occur in
this case. For example, if f2(x) = x + f1(x) and f1 is deleted,
then the definition of f2 contains an unknown symbol, f1.
3. The Zoom and Trace area is where zooming and tracing take place. Use this area as follows:
- To zoom in, picture in your mind the rectangular
region you would like to zoom in on. Left click on one of the corners
of this imaginary region. While holding the mouse button down,
move it to the opposite corner, then release it. As you move the
mouse, a rectangle will be drawn to help you visualize the region.
Once the mouse button is released, the graph will redraw, and XMin,
XMax, YMin, and YMax will be updated.
- To go back to the original view (XMin = -10,
XMax = 10, YMin = -10, YMax = 10), press Reset Zoom.
- To trace, simply single left click in the graph
area. A point will be generated by taking the x-coordinate of
the point where you clicked and the corresponding y-coordinate on the function currently showing.
- To move the point, use the Right or Left buttons.
The point will move on the function currently showing. By selecting
a different function, you can select which function you want your
point to follow.
- The coordinates of the point will be displayed
under X: and Y:.
4. The Control Buttons area contains
buttons which have a global effect for the applet.
- The Clear All button erases all the function definitions.
- The Plot button is used to update the plot area
after changes in the other areas have been made.
5. The Messages area is where the applet
communicates with the user. Error messages as well as user information
are displayed there. Errors are displayed in red, while information
is displayed in black.
Example 1:
Graph the function f(x) = 3x³ - 2x² + x + 5.
Step 1) On the Plot-solve applet, click
the New Function button and type the expression for the function (do
not type f(x)=) using * to indicate multiplication and ^ to indicate
exponents, just as in the Java Math Pad applet.
Step 2) Set the viewing window. The default
window is x-min -10, x-max 10,
y-min -10, y-max 10. You can change these values to give the best view of the function. In general, you can either specify the y-min and y-max or you can let the applet choose appropriate y-values for the function. If you wish to specify the y-min and y-max, "Use y-Range" must be checked. For this example, set the following values:
x-min -5, x-max 5, y-min -5, y-max 12.
Step 3) Click the Plot button.
Figure 1 below shows the graph.
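If you want the same picture outside the applet, a short matplotlib sketch (not part of the applet) reproduces Example 1 with the viewing window chosen above:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 400)
y = 3 * x**3 - 2 * x**2 + x + 5

plt.plot(x, y)
plt.xlim(-5, 5)    # x-min, x-max from the example
plt.ylim(-5, 12)   # y-min, y-max from the example
plt.title("f(x) = 3x^3 - 2x^2 + x + 5")
plt.show()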
Example 2:
Solve the equation x = x³ + 1 graphically.
Begin by graphing both the left side and the right side of the equation on the same coordinate axes. To do this, click the New Function button and enter x. Next click New Function again and enter x^3+1. Click the Plot button. The resulting graph is shown in Figure 2. Notice that the default window, x-min -10, x-max 10, y-min -10, y-max 10 gives a good view of the graphs and their point of intersection.
The graph in Figure 2 shows one point of intersection.
The x-coordinate of this point is the solution to the equation. Follow
the steps outlined below to estimate this solution.
Step 1) Place the cursor on the point of intersection and click the left mouse button. The coordinates of the point will appear in the Zoom and Trace portion of the applet. See Figure 3.
Step 2) Figure 3 shows that the x-value of the point is approximately -1.27. You can get a better estimate of this value by zooming in on the portion of the graph containing that point. To zoom-in on the graph, place the cursor on the graph near the point, but a little to the left and above it; then hold the left mouse button down and drag it to the right and down to form a "box" around the point of intersection. When you are satisfied with the "box", let go of the mouse button and the graph will be redrawn in that viewing window. See Figure 4.
Step 3) Notice that the close-up view shows that we were not actually at the point of intersection. Now move the point closer to the actual point of intersection using the left (or right) button under Point Motion. See Figure 5.
Just as before, the coordinates of the point are displayed by the applet. You can see that a good estimate for the solution to this equation is x = -1.32. You can repeat this procedure as often as necessary to get closer and closer estimates for the point of intersection.
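The graphical estimate can also be cross-checked numerically. A minimal sketch using bisection on x³ + 1 - x (no external libraries assumed):

def residual(x):
    return x**3 + 1 - x   # zero exactly where x = x^3 + 1

lo, hi = -2.0, 0.0        # residual(-2) < 0 and residual(0) > 0, so a root lies between
for _ in range(60):       # bisection
    mid = (lo + hi) / 2
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(round((lo + hi) / 2, 4))  # about -1.3247, consistent with the estimate x = -1.32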
This material is based upon work supported by the National Science Foundation under Grant GEO-0355224. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. | http://keyah.asu.edu/lessons/Demo.html | 13 |
58 | Power rule introduction Determining the derivatives of simple polynomials.
Power rule introduction
- Welcome back.
- In the last presentation I showed you that if I had the
- function f of x is equal to x squared, that the derivative of
- this function, which is denoted by f-- look at that, my pen
- is already malfunctioning.
- The derivative of that function, f prime of
- x, is equal to 2x.
- And I used the limit definition of a derivative.
- I used, let me write it down here.
- This pen is horrible.
- I need to really figure out some other tool to use.
- The limit as h approaches 0 -- sometimes you'll see delta x
- instead of h, but it's the same thing-- of f of x plus
- h minus f of x over h.
- And I used this definition of a derivative, which is really
- just the slope at any given point along the curve,
- to figure this out.
- That if f of x is equal to x squared, that
- the derivative is 2x.
- And you could actually use this to do others.
- And I won't do it now, maybe I'll do it in a
- future presentation.
- But it turns out that if you have f of x is equal to x to
- the third, that the derivative is f prime of x is
- equal to 3x squared.
- If f of x is equal to x to the fourth, well then the
- derivative is equal to 4x to the third.
- I think you're starting to see a pattern here.
- If I actually wrote up here that if f of x -- let me see
- if I have space to write it neatly.
- If I wrote f of x -- I hope you can see this -- f
- of x is equal to x.
- Well you know this.
- I mean, y equals x, what's the slope of y equals x?
- That's just 1, right?
- y equals x, that's a slope of 1.
- You didn't need to know calculus to know that.
- f prime of x is just equal to 1.
- And then you can probably guess what the next one is.
- If f of x is equal to x to the fifth, then the derivative is--
- I think you could guess-- 5 x to the fourth.
- So in general, for any expression within a polynomial,
- or any degree x to whatever power-- let's say f of x is
- equal to-- this pen drives me nuts.
- f of x is equal to x to the n, right?
- Where n could be any exponent.
- Then f prime of x is equal to nx to the n minus 1.
- And you see this is what the case was in all
- these situations.
- That 1 didn't show up.
- n minus 1.
- So if n was 25, x to the 25th power, the derivative
- would be 25 x to the 24th.
- So I'm going to use this rule and then I'm going to show
- you a couple of other ones.
- And then now we can figure out the derivative of pretty much
- any polynomial function.
- So just another couple of rules.
- This might be a little intuitive for you, and if you
- use that limit definition of a derivative, you could
- actually prove it.
- But if I want to figure out the derivative of, let's say, the
- derivative of-- So another way of-- this is kind of, what is
- the change with respect to x?
- This is another notation.
- I think this is what Leibniz uses to figure out the
- derivative operator.
- So if I wanted to find the derivative of A f of x, where A
- is just some constant number.
- It could be 5 times f of x.
- This is the same thing as saying A times the
- derivative of f of x.
- And what does that tell us?
- Well, this tells us that, let's say I had f of x.
- f of x is equal to-- and this only works with the constants--
- f of x is equal to 5x squared.
- Well this is the same thing as 5 times x squared.
- I know I'm stating the obvious.
- So we can just say that the derivative of this is just 5
- times the derivative of x squared.
- So f prime of x is equal to 5 times, and what's the
- derivative of x squared?
- Right, it's 2x.
- So it equals 10x.
- Similarly, let's say I had g of x, just using
- a different letter.
- g of x is equal to-- and my pen keeps malfunctioning.
- g of x is equal to, let's say, 3x to the 12th.
- Then g prime of x, or the derivative of g, is equal
- to 3 times the derivative of x to the 12th.
- Well we know what that is.
- It's 12 x to the 11th.
- Which you would have seen.
- 12x to the 11th.
- This equals 36x to the 11th.
- Pretty straightforward, right?
- You just multiply the constant times whatever the
- derivative would have been.
- I think you get that.
- Now one other thing.
- If I wanted to apply the derivative operator-- let me
- change colors just to mix things up a little bit.
- Let's say if I wanted to apply the derivative of operator-- I
- think this is called the addition rule.
- It might be a little bit obvious.
- f of x plus g of x.
- This is the same thing as the derivative of f of x plus
- the derivative of g of x.
- That might seem a little complicated to you, but all
- it's saying is that you can find the derivative of each of
- the parts when you're adding up, and then that's the
- derivative of the whole thing.
- I'll do a couple of examples.
- So what does this tell us?
- This is also the same thing, of course.
- This is, I believe, Leibniz's notation.
- And then Lagrange's notation is-- of course these were the
- founding fathers of calculus.
- That's the same thing as f prime of x plus g prime of x.
- And let me apply this, because whenever you apply it, I think
- it starts to seem a lot more obvious.
- So let's say f of x is equal to 3x squared plus 5x plus 3.
- Well, if we just want to figure out the derivative, we say f
- prime of x, we just find the derivative of each
- of these terms.
- Well, this is 3 times the derivative of x squared.
- The derivative of x squared, we already figured
- out, is 2x, right?
- So this becomes 6x.
- Really you just take the 2, multiply it by the 3, and
- then decrement the 2 by 1.
- So it's really 6x to the first, which is the same thing as 6x.
- Plus the derivative of 5x is 5.
- And you know that because if I just had a line that's y equals
- 5x, the slope is 5, right?
- Plus, what's the derivative of a constant function?
- What's the derivative of 3?
- Well, I'll give you a hint.
- Graph y equals 3 and tell me what the slope is.
- Right, the derivative of a constant is 0.
- I'll show other times why that might be more intuitive.
- Plus 0.
- You can just ignore that.
- f prime of x is equal to 6x plus 5.
- Let's do some more.
- I think the more examples we do, the better.
- And I want to keep switching notations, so you don't get
- daunted whenever you see it in a different way.
- Let's say y equals 10x to the fifth minus 7x to the
- third plus 4x plus 1.
- So here we're going to apply the derivative operator.
- So we say dy-- this is I think Leibniz's
- notation-- dy over dx.
- And that's just the change in y over the change in x,
- over very small changes.
- That's kind of how I view this d, like a very small delta.
- Is equal to 5 times 10 is 50 x to the fourth minus 21 --
- right, 3 times 7-- x squared plus 4.
- And then the 1, the derivative of 1 is just 0.
- So there it is.
- We figured out the derivative of this very
- complicated function.
- And it was pretty straightforward.
- I think you'll find that derivatives of polynomials are
- actually more straightforward than a lot of concepts that you
- learned a lot earlier in mathematics.
- That's all the time I have now for this presentation.
- In the next couple I'll just do a bunch of more examples, and
- I'll show you some more rules for solving even more
- complicated derivatives.
- See you in the next presentation.
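As an optional check on the examples worked in this video (not part of the video itself), a short SymPy session confirms the same derivatives:

import sympy as sp

x = sp.symbols('x')

print(sp.diff(3 * x**2 + 5 * x + 3, x))              # 6*x + 5
print(sp.diff(10 * x**5 - 7 * x**3 + 4 * x + 1, x))  # 50*x**4 - 21*x**2 + 4
print(sp.diff(x**25, x))                              # 25*x**24, the general pattern n*x**(n-1)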
| http://www.khanacademy.org/math/calculus/differential-calculus/power_rule_tutorial/v/calculus--derivatives-3 | 13
72 | Understanding the OSI Model
Reference: Cisco: Internetworking Basics
The Open Systems Interconnection (OSI) reference model describes how information from a software application in one computer moves through a network medium to a software application in another computer. The OSI reference model is a conceptual model composed of seven layers, each specifying particular network functions. The model was developed by the International Organization for Standardization (ISO) in 1984, and it is now considered the primary architectural model for intercomputer communications. The OSI model divides the tasks involved with moving information between networked computers into seven smaller, more manageable task groups. A task or group of tasks is then assigned to each of the seven OSI layers. Each layer is reasonably self-contained, so that the tasks assigned to each layer can be implemented independently. This enables the solutions offered by one layer to be updated without adversely affecting the other layers. The following list details the seven layers of the Open Systems Interconnection (OSI) reference model:
Layer 7 - Application
Layer 6 - Presentation
Layer 5 - Session
Layer 4 - Transport
Layer 3 - Network
Layer 2 - Data link
Layer 1 - Physical
Figure 1-2 illustrates the seven-layer OSI reference model.
Figure 1-2: The OSI reference model contains seven independent layers.
The seven layers of the OSI reference model can be divided into two categories: upper layers and lower layers.
The upper layers of the OSI model deal with application issues and generally are implemented only in software. The highest layer, application, is closest to the end user. Both users and application-layer processes interact with software applications that contain a communications component. The term upper layer is sometimes used to refer to any layer above another layer in the OSI model.
The lower layers of the OSI model handle data transport issues. The physical layer and data-link layer are implemented in hardware and software. The other lower layers generally are implemented only in software. The lowest layer, the physical layer, is closest to the physical network medium (the network cabling, for example), and is responsible for actually placing information on the medium.
Figure 1-3 illustrates the division between the upper and lower OSI layers.
Figure 1-3: Two sets of layers make up the OSI layers.
The OSI model provides a conceptual framework for communication between computers, but the model itself is not a method of communication. Actual communication is made possible by using communication protocols. In the context of data networking, a protocol is a formal set of rules and conventions that governs how computers exchange information over a network medium. A protocol implements the functions of one or more of the OSI layers. A wide variety of communication protocols exist, but all tend to fall into one of the following groups: LAN protocols, WAN protocols, network protocols, and routing protocols. LAN protocols operate at the physical and data-link layers of the OSI model and define communication over the various LAN media. WAN protocols operate at the lowest three layers of the OSI model and define communication over the various wide-area media. Routing protocols are network-layer protocols that are responsible for path determination and traffic switching. Finally, network protocols are the various upper-layer protocols that exist in a given protocol suite.
Information being transferred from a software application in one computer system to a software application in another must pass through each of the OSI layers. If, for example, a software application in System A has information to transmit to a software application in System B, the application program in System A will pass its information to the application layer (Layer 7) of System A. The application layer then passes the information to the presentation layer (Layer 6), which relays the data to the session layer (Layer 5), and so on down to the physical layer (Layer 1). At the physical layer, the information is placed on the physical network medium and is sent across the medium to System B. The physical layer of System B removes the information from the physical medium and then passes the information up to the data link layer (Layer 2), which passes it to the network layer (Layer 3), and so on until it reaches the application layer (Layer 7) of System B. Finally, the application layer of System B passes the information to the recipient application program to complete the communication process.
A given layer in the OSI model generally communicates with three other OSI layers: the layer directly above it, the layer directly below it, and its peer layer in other networked computer systems. The data link layer in System A, for example, communicates with the network layer of System A, the physical layer of System A, and the data link layer in System B. Figure 1-4 illustrates this example.
Figure 1-4: OSI model layers communicate with other layers.
One OSI layer communicates with another layer to make use of the services provided by the second layer. The services provided by adjacent layers help a given OSI layer communicate with its peer layer in other computer systems. Three basic elements are involved in layer services: the service user, the service provider, and the service access point (SAP).
In this context, the service user is the OSI layer that requests services from an adjacent OSI layer. The service provider is the OSI layer that provides services to service users. OSI layers can provide services to multiple service users. The SAP is a conceptual location at which one OSI layer can request the services of another OSI layer.
Figure 1-5 illustrates how these three elements interact at the network and data link layers.
Figure 1-5: Service users, providers, and SAPs interact at the network and data link layers.
The seven OSI layers use various forms of control information to communicate with their peer layers in other computer systems. This control information consists of specific requests and instructions that are exchanged between peer OSI layers.
Control information typically takes one of two forms: headers and trailers. Headers are prepended to data that has been passed down from upper layers. Trailers are appended to data that has been passed down from upper layers. An OSI layer is not required to attach a header or trailer to data from upper layers.
Headers, trailers, and data are relative concepts, depending on the layer that analyzes the information unit. At the network layer, an information unit, for example, consists of a Layer 3 header and data. At the data link layer, however, all the information passed down by the network layer (the Layer 3 header and the data) is treated as data.
In other words, the data portion of an information unit at a given OSI layer potentially can contain headers, trailers, and data from all the higher layers. This is known as encapsulation. Figure 1-6 shows how the header and data from one layer are encapsulated in to the header of the next lowest layer.
Figure 1-6: Headers and data can be encapsulated during information exchange.
The information exchange process occurs between peer OSI layers. Each layer in the source system adds control information to data and each layer in the destination system analyzes and removes the control information from that data.
If System A has data from a software application to send to System B, the data is passed to the application layer. The application layer in System A then communicates any control information required by the application layer in System B by prepending a header to the data. The resulting information unit (a header and the data) is passed to the presentation layer, which prepends its own header containing control information intended for the presentation layer in System B. The information unit grows in size as each layer prepends its own header (and in some cases a trailer) that contains control information to be used by its peer layer in System B. At the physical layer, the entire information unit is placed onto the network medium.
The physical layer in System B receives the information unit and passes it to the data-link layer. The data link layer in System B then reads the control information contained in the header prepended by the data link layer in System A. The header is then removed, and the remainder of the information unit is passed to the network layer. Each layer performs the same actions: The layer reads the header from its peer layer, strips it off, and passes the remaining information unit to the next highest layer. After the application layer performs these actions, the data is passed to the recipient software application in System B, in exactly the form in which it was transmitted by the application in System A.
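As a toy illustration of this wrapping and unwrapping, the sketch below nests a piece of application data inside made-up transport, network, and data-link headers (plus a link trailer). The header strings are invented for illustration only and do not correspond to any real protocol format.

# Toy model of encapsulation: each lower layer treats everything handed down
# to it as data and wraps it in its own header (and, at the data link layer, a trailer).
# Requires Python 3.9+ for bytes.removeprefix / removesuffix.

def encapsulate(app_data: bytes) -> bytes:
    transport_unit = b"T-HDR|" + app_data               # transport header + data
    network_unit = b"N-HDR|" + transport_unit            # network header + (transport unit as data)
    link_frame = b"L-HDR|" + network_unit + b"|L-TRL"    # link header + data + link trailer
    return link_frame

def decapsulate(frame: bytes) -> bytes:
    network_unit = frame.removeprefix(b"L-HDR|").removesuffix(b"|L-TRL")
    transport_unit = network_unit.removeprefix(b"N-HDR|")
    return transport_unit.removeprefix(b"T-HDR|")        # original application data

frame = encapsulate(b"hello from System A")
print(frame)
print(decapsulate(frame))  # b'hello from System A'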
The physical layer defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and deactivating the physical link between communicating network systems. Physical layer specifications define characteristics such as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, and physical connectors. Physical-layer implementations can be categorized as either LAN or WAN specifications. Figure 1-7 illustrates some common LAN and WAN physical-layer implementations.
Figure 1-7: Physical-layer implementations can be LAN or WAN specifications.
The data link layer provides reliable transit of data across a physical network link. Different data link layer specifications define different network and protocol characteristics, including physical addressing, network topology, error notification, sequencing of frames, and flow control. Physical addressing (as opposed to network addressing) defines how devices are addressed at the data link layer. Network topology consists of the data-link layer specifications that often define how devices are to be physically connected, such as in a bus or a ring topology. Error notification alerts upper-layer protocols that a transmission error has occurred, and the sequencing of data frames reorders frames that are transmitted out of sequence. Finally, flow control moderates the transmission of data so that the receiving device is not overwhelmed with more traffic than it can handle at one time.
The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data-link layer into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC). Figure 1-8 illustrates the IEEE sublayers of the data-link layer.
Figure 1-8: The data link layer contains two sublayers.
The Logical Link Control (LLC) sublayer of the data-link layer manages communications between devices over a single link of a network. LLC is defined in the IEEE 802.2 specification and supports both connectionless and connection-oriented services used by higher-layer protocols. IEEE 802.2 defines a number of fields in data-link layer frames that enable multiple higher-layer protocols to share a single physical data link. The Media Access Control (MAC) sublayer of the data link layer manages protocol access to the physical network medium. The IEEE MAC specification defines MAC addresses, which enable multiple devices to uniquely identify one another at the data link layer.
The network layer provides routing and related functions that enable multiple data links to be combined into an internetwork. This is accomplished by the logical addressing (as opposed to the physical addressing) of devices. The network layer supports both connection-oriented and connectionless service from higher-layer protocols. Network-layer protocols typically are routing protocols, but other types of protocols are implemented at the network layer as well. Some common routing protocols include Border Gateway Protocol (BGP), an Internet interdomain routing protocol; Open Shortest Path First (OSPF), a link-state, interior gateway protocol developed for use in TCP/IP networks; and Routing Information Protocol (RIP), an Internet routing protocol that uses hop count as its metric.
The transport layer implements reliable internetwork data transport services that are transparent to upper layers. Transport-layer functions typically include flow control, multiplexing, virtual circuit management, and error checking and recovery.
Flow control manages data transmission between devices so that the transmitting device does not send more data than the receiving device can process. Multiplexing enables data from several applications to be transmitted onto a single physical link. Virtual circuits are established, maintained, and terminated by the transport layer. Error checking involves creating various mechanisms for detecting transmission errors, while error recovery involves taking an action, such as requesting that data be retransmitted, to resolve any errors that occur.
Some transport-layer implementations include Transmission Control Protocol, Name Binding Protocol, and OSI transport protocols. Transmission Control Protocol (TCP) is the protocol in the TCP/IP suite that provides reliable transmission of data. Name Binding Protocol (NBP) is the protocol that associates AppleTalk names with addresses. OSI transport protocols are a series of transport protocols in the OSI protocol suite.
The session layer establishes, manages, and terminates communication sessions between presentation layer entities. Communication sessions consist of service requests and service responses that occur between applications located in different network devices. These requests and responses are coordinated by protocols implemented at the session layer. Some examples of session-layer implementations include Zone Information Protocol (ZIP), the AppleTalk protocol that coordinates the name binding process; and Session Control Protocol (SCP), the DECnet Phase IV session-layer protocol.
The presentation layer provides a variety of coding and conversion functions that are applied to application layer data. These functions ensure that information sent from the application layer of one system will be readable by the application layer of another system. Some examples of presentation-layer coding and conversion schemes include common data representation formats, conversion of character representation formats, common data compression schemes, and common data encryption schemes.
Common data representation formats, or the use of standard image, sound, and video formats, enable the interchange of application data between different types of computer systems. Conversion schemes are used to exchange information with systems by using different text and data representations, such as EBCDIC and ASCII. Standard data compression schemes enable data that is compressed at the source device to be properly decompressed at the destination. Standard data encryption schemes enable data encrypted at the source device to be properly deciphered at the destination.
Presentation-layer implementations are not typically associated with a particular protocol stack. Some well-known standards for video include QuickTime and Motion Picture Experts Group (MPEG). QuickTime is an Apple Computer specification for video and audio, and MPEG is a standard for video compression and coding.
Among the well-known graphic image formats are Graphics Interchange Format (GIF), Joint Photographic Experts Group (JPEG), and Tagged Image File Format (TIFF). GIF is a standard for compressing and coding graphic images. JPEG is another compression and coding standard for graphic images, and TIFF is a standard coding format for graphic images.
The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application.
This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication.
When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.
Two key types of application-layer implementations are TCP/IP applications and OSI applications. TCP/IP applications are protocols, such as Telnet, File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP), that exist in the Internet Protocol suite. OSI applications are protocols, such as File Transfer, Access, and Management (FTAM), Virtual Terminal Protocol (VTP), and Common Management Information Protocol (CMIP), that exist in the OSI suite.
| http://www.idevelopment.info/data/Networking/Networking_Tips/BASICS_Understanding_OSI_Model.shtml | 13
61 | Middle School Astronomy Activity
Written by Judy Young and Bill Randolph
Lesson 5: MEASURING THE ANGLE OF THE SUN
Note: This activity has many intersections with the National and State Science Education Standards.
NATIONAL SCIENCE EDUCATION STANDARDS, Grades 5-8 (NRC, 1996)
SCIENCE AND INQUIRY STANDARDS:
Answer questions that can be solved by scientific investigations.
Design and conduct scientific investigations.
Use appropriate tools and techniques to gather, analyze and interpret data.
Think critically and logically to make relationships between evidence and explanations.
EARTH SCIENCE STANDARDS:
Most objects in the solar system are in regular and predictable motion.
Explain how the tilt of the Earth and its revolution around the Sun results in uneven heating of the Earth, which in turn causes the seasons. Those motions explain such phenomena as the day, the year, phases of the moon and eclipses.
Seasons result from variations in the amount of the sun's energy hitting the surface, due to the tilt of the earth's rotation on it axis and the length of the day.
MASSACHUSETTS STATE SCIENCE STANDARDS, Grades 6-8 (11/00)
EARTH SCIENCE STANDARDS:
Compare and contrast properties and conditions of objects in the solar system to those on Earth.
Explain how the tilt of the Earth and its revolution around the Sun results in uneven heating of the Earth, which in turn causes the seasons.
A. Learning Objectives:
Collect and graph data throughout the year representing the height or angle of the Sun in relation to the horizon.
Collect and interpret data which verifies that the Earth's axis is tilted at approximately 23°.
Understand the angle of the Sun is responsible for climatic and seasonal changes.
Understand the Sun is usually not due South at exactly noon.
Gnomon (fixed on a flat surface and accessible throughout the year)
Ruler or tape measurer
Booklet for recording observations (class or individual)
Pencil or pen
Protractor or graph paper
This lesson complements lesson 1 (Using Shadows to Find the Four Directions) and lesson 2 (Measuring Shadows). The success of this lesson depends on two factors: 1) measuring the length of the shadow at the same gnomon throughout the year and 2) taking measurements of the gnomon's shadow when the Sun is directly south, i.e., when the shadow aligns with the N-S direction. You will need to have completed Lesson 1 (Finding the Four Directions) to locate the N-S line.
You might begin this lesson by asking your class some pre-assessment questions that relate to the tilt of the Earth's axis:
Encourage the class to develop their ideas into data collecting experiments. Then conduct as many of these experiments as feasible. Student-centered experiments can lead to: creating a common vocabulary, insight into student misconceptions, and cognitive readiness for alternative explanations, such as the scientific viewpoint.
Because of the abstract nature of astronomy, the more measurements the students record, the more likely they are to be successful at interpreting their data. Ideally, this activity will allow individual students or the whole class to collect data once a week throughout the school year. A student or a pair of students could take turns collecting data from a shared gnomon on campus; the actual time needed to measure the gnomon's shadow only takes a few minutes. Alternatively, students could also create their own gnomon at home and collect data on the weekends.
There are two distinct parts of this activity, either or both of which can be done. It is very instructive to measure the changing angle to the Sun at noon throughout the year. This alone makes a coherent lesson.
For interested classes and teachers, an additional part of this activity is keeping track of when the Sun is due South, and learning about the "Equation of Time." If you are familiar with the analemma (the figure 8 traced by marking the position of the tip of the gnomon's shadow at noon throughout the year), it incorporates the equation of time because shadow measurements are always made at noon. However, because of the Earth's slightly elliptical orbit around the Sun, and our changing speed in that orbit, the Sun deviates from due South at noon (EST) by as much as 15 minutes throughout the year. The graph of this deviation is called the "Equation of Time" (not an equation in the true sense) and is reproduced here to guide you as to when the Sun is due South each day for measuring the gnomon's shadow.
Figure 5-1: Equation of Time
This graph shows the time when the Sun is seen due South throughout the year.
In making the graph of time when shadows are measured, you will need to account for Daylight Savings Time. If you begin this activity in Sept./Oct., before we go off Daylight Savings Time, remember that times have been shifted and use the right side in Figure 5-1. This also applies for measurements made after April. Between November and April, when we are on Eastern Standard Time, no adjustments are needed, so use the left side in Figure 5-1.
1. Recording Data
During this activity the students will use a gnomon as explained in Lesson 1. Students will record three pieces of data each time they make observations: the length of the gnomon's shadow, the date and time when the gnomon's shadow lines up on the north-south line. Enter the data collected onto a table which includes the yearly observations, such as Table 1. This table also includes a space for entering each determination of the angle to the Sun.
2. Data Analysis
Lesson 3 describes the procedures for using the gnomon's shadow to measure the angle of the Sun above the horizon. Determine the angle to the Sun for each measurement of the gnomon's shadow and enter the result in Table 1 and in the graph below.
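Whichever method Lesson 3 uses (a protractor on a scale drawing, or graph paper), the underlying relationship is the right triangle formed by the gnomon and its shadow: the tangent of the Sun's altitude equals the gnomon height divided by the shadow length. A minimal sketch, with made-up example measurements:

import math

def sun_altitude_deg(gnomon_height_cm, shadow_length_cm):
    """Angle of the Sun above the horizon, from a gnomon and its noon shadow."""
    return math.degrees(math.atan(gnomon_height_cm / shadow_length_cm))

# Hypothetical measurement: a 30 cm gnomon casting a 27 cm shadow.
print(round(sun_altitude_deg(30, 27), 1))  # about 48.0 degrees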
Record your observations in Figure 5-3, indicating the time when the Sun is due South (i.e., when the gnomon's shadow points North). Notice that this time can be as much as 15 minutes before or after 12:00 noon EST or 1 p.m. EDT.
Figure 5-3. Equation of Time (corrected to Standard Time)
Eastern Standard Time (EST) appears in bold; Eastern Daylight Time is italics.
3. Illustrating the Data by Month:
It is useful to illustrate the changing angle to the Sun throughout the year in a single graph showing the gnomon, the shadow, and the angle to the Sun. An example is shown in Figure 5-3. The date, time, shadow length and angle to the Sun can be listed, and the shadow lengths can be labeled with dates and drawn accurately to scale. The angle to the Sun is illustrated effectively next to the raw data columns.
4.) Interpreting the Data and Results:
Understanding the Significance of the Angle to the Sun
Over the course of the school year, students have recorded and observed how the angle to the Sun changes from month to month or season to season. It is relatively easy for the students to use their data to calculate that the Earth's axis is tilted by approximately 23°.
Here are the steps your students can do to arrive at this conclusion. Let's assume your students have made the following recordings:
On September 21 the Sun was 48° above the horizon
On December 21 the Sun was 25° above the horizon
On March 21 the Sun was 50° above the horizon
On June 21 the Sun was 70° above the horizon
At a minimum, the students will see that the angle to the Sun changes throughout the year. Hopefully they associate a higher angle to the Sun and longer daylight hours with warmer temperatures. The highest noon-time angle of the Sun corresponds to the longest day of the year, around the summer solstice, and the smallest angle occurs near the winter solstice. Additionally, the angle to the Sun near the two equinoxes is roughly in between the angles at the two solstices. Using the example angles given above, the average angle to the Sun on the equinoxes is approximately 49°. Students should calculate how much higher the angle to the Sun is in June than this average, as well as how much lower it is in December.
A simple example of these would look like:
70° in June minus the equinox average of 49° = 21° above the average in June.
The equinox average of 49° minus 25° in December = 24° below the average in December. If you take the average of 21° and 24°, it equals 22 1/2°, which is very close to the actual tilt of the Earth's axis of 23 1/2°.
The angle to the Sun at our latitude of 42° is never straight up, or 90°, from the horizon. It reaches its highest angle at the summer solstice (June 21), about 71°, and its lowest angle at the winter solstice (December 21), about 24°. The angle to the Sun at the two equinoxes is 48°. Therefore, the difference between the angle to the Sun at a solstice and at an equinox is 23°. Hopefully the students understand that the summer solstice angle is 48° + 23° and the winter solstice angle is 48° - 23°, so the difference between the summer and winter solstice angles is twice the tilt, or 46°. Again, 48° is the angle to the Sun at the two equinoxes.
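These solstice and equinox values follow from the standard relation for the Sun's noon altitude, altitude ≈ 90° - latitude + solar declination, where the declination is roughly +23.5° at the summer solstice, 0° at the equinoxes, and -23.5° at the winter solstice. A quick check for latitude 42°:

def noon_altitude_deg(latitude_deg, declination_deg):
    """Approximate altitude of the Sun at local solar noon."""
    return 90 - latitude_deg + declination_deg

LATITUDE = 42  # degrees north
for label, decl in [("summer solstice", 23.5), ("equinox", 0.0), ("winter solstice", -23.5)]:
    print(label, noon_altitude_deg(LATITUDE, decl))
# summer solstice 71.5, equinox 48.0, winter solstice 24.5 -- close to the 71, 48, and 24 degrees above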
Understanding the Importance of the Equation of Time:
The equation of time is the difference between noon and the time the Sun is actually due south. Again, this is measured by recording the time when the gnomon's shadow lines up on the North-South line. The Earth's elliptical orbit and varying speed in the orbit cause the Sun to be due south alternately ahead of or behind mean solar time (noon). The difference between the two kinds of time can accumulate to about 15 minutes. Often the equation of time is plotted on globes of the Earth as a figure 8, called an analemma, and is placed in the region of the South Pacific Ocean (because there is space).
Last Update: 7/15/2003 | http://donald.astro.umass.edu/~young/lesson5.html | 13 |
154 | A chain of functions starts with y = g(x). Then it finds z = f(y). So z = f(g(x)).
Very many functions are built this way, g inside f . So we need their slopes.
The Chain Rule says : MULTIPLY THE SLOPES of f and g.
Find dy/dx for g(x). Then find dz/dy for f(y).
Since dz/dy is found in terms of y, substitute g(x) in place of y !!!
The way to remember the slope of the chain is dz/dx = dz/dy times dy/dx.
Remove y to get a function of x ! The slope of z = sin (3x) is 3 cos (3x).
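A quick symbolic check (not part of the course materials) of the slope quoted here, plus the (x³)² example worked later in the lecture, using SymPy:

import sympy as sp

x = sp.symbols('x')

print(sp.diff(sp.sin(3 * x), x))   # 3*cos(3*x)
print(sp.diff((x**3)**2, x))       # 6*x**5, the same as differentiating x**6 directly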
Professor Strang's Calculus textbook (1st edition, 1991) is freely available here.
Subtitles are provided through the generous assistance of Jimmy Ren.
Lecture summary and Practice problems (PDF)
PROFESSOR: Hi. Well, today's the chain rule. Very, very useful rule, and it's kind of neat, natural. Can I explain what a chain of functions is? There is a chain of functions. And then we want to know the slope, the derivative.
So how does the chain work? So there x is the input. It goes into a function g of x. We could call that inside function y. So the first step is y is g of x. So we get an output from g, call it y. That's halfway, because that y then becomes the input to f. That completes the chain. It starts with x, produces y, which is the inside function g of x. And then let me call it z is f of y.
And what I want to know is how quickly does z change as x changes? That's what the chain rule asks. It's the slope of that chain. Can I maybe just tell you the chain rule? And then we'll try it on some examples. You'll see how it works.
OK, here it is. The derivative, the slope of this chain dz dx, notice I want the change in the whole thing when I change the original input. Then the formula is that I take-- it's nice. You take dz dy times dy dx. So the derivative that we're looking for, the slope, the speed, is a product of two simpler derivatives that we probably know. And when we put the chain together, we multiply those derivatives. But there's one catch that I'll explain. I can give you a hint right away.
dz dy, this first factor, depends on y. But we're looking for the change due to the original change in x. When I find dz dy, I'm going to have to get back to x. Let me just do an example with a picture. You'll see why I have to do it.
So let the chain be cosine of-- oh, sine. Why not? Sine of 3x. Let me take sine of 3x. So that's my sine of 3x. I would like to know if that's my function, and I can draw it and will draw it, what is the slope?
OK, so what's the inside function? What's y here? Well, it's sitting there in parentheses. Often it's in parentheses so we identify it right away. y is 3x. That's the inside function. And then the outside function is the sine of y.
So what's the derivative by the chain rule? I'm ready to use the chain rule, because these are such simple functions, I know their separate derivative. So if this whole thing is z, the chain rule says that dz dx is-- I'm using this rule. I first name dz dy, the derivative of z with respect to y, which is cosine of y. And then the second factor is dy dx, and that's just a straight line with slope 3, so dy dx is 3. Good. Good, but not finished, because I'm getting an answer that's still in terms of y, and I have to get back to x, and no problem to do it. I know the link from y to x.
So here's the 3. I can usually write it out here, and then I wouldn't need parentheses. That's just that 3. Now the part I'm caring about: cosine of 3x. Not cosine x, even though this was just sine. But it was sine of y, and therefore, we need cosine 3x. Let me draw a picture of this function, and you'll see what's going on.
If I draw a picture of-- I'll start with a picture of sine x, maybe out to 180 degrees pi. This direction is now x. And this direction is going to be-- well, there is the sine of x, but that's not my function. My function is sine of 3x, and it's worth realizing what's the difference.
How does the graph change if I have 3x instead of x? Well, things come sooner. Things are speeded up. Here at x equal pi, 180 degrees, is when the sine goes back to 0. But for 3x, it'll be back to 0 already when x is 60 degrees, pi over 3. So 1/3 of the way along, right there, my sine 3x is this one. It's just like the sine curve but faster. That was pi over 3 there, 60 degrees.
So this is my z of x curve, and you can see that the slope is steeper at the beginning. You can see that the slope-- things are happening three times faster. Things are compressed by 3. This sine curve is compressed by 3. That makes it speed up so the slope is 3 at the start, and I claim that it's 3 cosine of 3x. Oh, let's draw the slope. All right, draw the slope.
All right, let me start with the slope of sine-- so this was just old sine x. So its slope is just cosine x along to-- right? That's the slope starts at 1. This is now cosine x. But that's going out to pi again. That's the slope of the original one, not the slope of our function, of our chain.
So the slope of our chain will be-- I mean, it doesn't go out so far. It's all between here and pi over 3, right? Our function, the one we're looking at, is just on this part. And the slope starts out at 3, and it's three times bigger, so it's going to be-- well, I'll just about get it on there. It's going to go down. I don't know if that's great, but it maybe makes the point that I started up here at 3, and I ended down here at minus 3 when x was 60 degrees because then-- you see, this is a picture of 3 cosine of 3x. I had to replace y by 3x at this point.
OK, let me do two or three more examples, just so you see it. Let's take an easy one. Suppose z is x cubed squared. All right, here is the inside function. y is x cubed, and z is-- do you see what z is? z is x cubed squared. So x cubed is the inside function.
What's the outside function? It's a function of y. I'm not going to write-- it's going to be the squaring function. That's what we do outside. I'm not going to write x squared. It's y squared. This is y. It's y squared that gets squared. Then the derivative dz dx by the chain rule is dz dy. Shall I remember the chain rule? dz dy, dy dx. Easy to remember because in the mind of everybody, these dy's, you see that they're sort of canceling.
So what's dz dy? z is y squared, so this is 2y, that factor. What's dy dx? y is x cubed. We know the derivative of x cubed. It's 3x squared.
There is the answer, but it's not final because I've got a y here that doesn't belong. I've got to get it back to x. So I have all together 2 times 3 is making 6, and that y, I have to go back and see what y was in terms of x. It was x cubed. So I have x cubed there, and here's an x squared, altogether x to the fifth.
Now, is that the right answer? In this example, we can certainly check it because we know what x cubed squared is. So x cubed is x times x times x, and I'm squaring that. I'm multiplying by itself. There's another x times x times x. Altogether I have x to the sixth power. Notice I don't add those. When I'm squaring x cubed, I multiply the 2 by the 3 and get 6. So z is x to the sixth, and of course, the derivative of x to the sixth is 6 times x to the fifth, one power lower.
OK, I want to do two more examples. Let me do one more right away while we're on a roll. I'll bring down that board and take this function, just so you can spot the inside function and the outside function. So my function z is going to be 1 over the square root of 1 minus x squared. Such things come up pretty often so we have to know its derivative. We could graph it. That's a perfectly reasonable function, and it's a perfect chain. The first point is to identify what's the inside function and what's the outside.
So inside I'm seeing this 1 minus x squared. That's the quantity that it'll be much simpler if I just give that a single name y. And then what's the outside function? What am I doing to this y? I'm taking its square root, so I have y to the 1/2. But that square root is in the denominator. I'm dividing, so it's y to the minus 1/2. So z is y to the minus 1/2.
OK, those are functions I'm totally happy with. The derivative is what? dz dy, I won't repeat the chain rule. You've got that clearly in mind. It's right above. Let's just put in the answer here. dz dy, the derivative, that's y to some power, so I get minus 1/2 times y to what power? I always go one power lower. Here the power is minus 1/2. If I go down by one, I'll have minus 3/2. And then I have to have dy dx, which is easy. dy dx, y is 1 minus x squared. The derivative of that is just minus 2x.
And now I have to assemble these, put them together, and get rid of the y. So the minus 2 cancels the minus 1/2. That's nice. I have an x still here, and I have y to the minus 3/2. What's that? I know what y is, 1 minus x squared, and so it's that to the minus 3/2. I could write it that way. x times 1 minus x squared-- that's the y-- to the power minus 3/2. Maybe you like it that way. I'm totally OK with that. Or maybe you want to see it as-- this minus exponent down here as 3/2. Either way, both good.
OK, so that's one more practice. and I've got one more in mind. But let me return to this board, the starting board, just to justify where did this chain rule come from. OK, where do derivatives come from? Derivative always start with small finite steps, with delta rather than d. So I start here, I make a change in x, and I want to know the change in z. These are small, but not zero, not darn small.
OK, all right, those are true quantities, and for those, I'm perfectly entitled to divide and multiply by the change in y because there will be a change in y. When I change x, that produces a change in g of x. You remember this was the y. So this factor-- well, first of all, that's simply a true statement for fractions. But it's the right way. It's the way we want it. Because now when I show it, and in words, it says when I change x a little, that produces a change in y, and the change in y produces a change in z. And it's the ratio that we're after, the ratio between the original change and the final change. So I just put the inside change up and divide and multiply.
OK, what am I going to do? What I always do, what everybody does with derivatives at an instant, at a point. Let delta x go to 0. Now as delta x goes to 0, delta y will go to 0, delta z will go to 0, and we get a lot of zeroes over 0. That's what calculus is prepared to live with. Because it keeps this ratio. It doesn't separately think about 0 and then later 0. It's looking at the ratio as things happen. And that ratio does approach that. That was the definition of the derivative. This ratio approaches that, and we get the answer. This ratio approaches the derivative we're after. That in a nutshell is the thinking behind the chain rule.
OK, I could discuss it further, but that's the essence of it. OK, now I'm ready to do one more example that isn't just so made up. It's an important one. And it's one I haven't tackled before. My function is going to be e to the minus x squared over 2. That's my function. Shall I call it z? That's my function of x. So I want you to identify the inside function and the outside function in that change, take the derivative, and then let's look at the graph for this one. The graph of this one is a familiar important graph. But it's quite an interesting function.
OK, so what are you going to take? This often happens. We have e to the something, e to some function. So that's our inside function up there. Our function y, inside function, is going to be minus x squared over 2, that quantity that's sitting up there. And then z, the outside function, is just e to the y, right? So two very, very simple functions have gone into this chain and produced this e to the minus x squared over 2 function.
OK, I'm going to ask you for the derivative, and you're going to do it. No problem. So dz dx, let's use the chain rule. Again, it's sitting right above. dz dy, so I'm going to take the slope, the derivative of the outside function dz dy, which is e to the y. And that has that remarkable property, which is why we care about it, why we named it, why we created it. The derivative of that is itself.
And the derivative of minus x squared over 2 is-- that's a picnic, right?-- is a minus. x squared, we'll bring down a 2. Cancel that 2, it'll be minus x. That's the derivative of minus x squared over 2. Notice the result is negative. This function is at least out where-- if x is positive, the whole slope is negative, and the graph is going downwards.
And now what's-- everybody knows this final step. I can't leave the answer like that because it's got a y in it. I have to put in what y is, and it is-- can I write the minus x first? Because it's easier to write it in front of this e to the y, which is e to the minus x squared over 2. So that's the derivative we wanted. Now I want to think about that function a bit.
OK, notice that we started with an e to the minus something, and we ended with an e to the minus something with other factors. This is typical for exponentials. Exponentials, the derivative stays with that exponent. We could even take the derivative of that, and we would again have some expression. Well, let's do it in a minute, the derivative of that.
OK, I'd like to graph these functions, the original function z and the slope of the z function. OK, so let's see. x can have any sign. x can go for this-- now, I'm graphing this. OK, so what do I expect? I can certainly figure out the point x equals 0. At x equals 0, I have e to the 0, which is 1. So at x equals 0, it's 1.
OK, now at x equals to 1, it has dropped to something. And also at x equals minus 1, notice the symmetry. This function is going to be-- this graph is going to be symmetric around the y-axis because I've got x squared. The right official name for that is we have an even function. It's even when it's same for x and for minus x.
OK, so what's happening at x equal 1? That's e to the minus 1/2. Whew! I should have looked ahead to figure out what that number is. Whatever. It's smaller than 1, certainly, because it's e to the minus something. So let me put it there, and it'll be here, too.
And now rather than a particular value, what's your impression of the whole graph? The whole graph is--It's symmetric, so it's going to start like this, and it's going to start sinking. And then it's going to sink. Let me try to get through that point. Look here. As x gets large, say x is even just 3 or 4 or 1000, I'm squaring it, so I'm getting 9 or 16 or 1000000. And then divide by 2. No problem.
And then e to the minus is-- I mean, so e to the thousandth would be off that board by miles. e to the minus 1000 is a very small number and getting smaller fast. So this is going to get-- but never touches 0, so it's going to-- well, let's see. I want to make it symmetric, and then I want to somehow I made it touch because this darn finite chalk. I couldn't leave a little space. But to your eye it touches. If we had even fine print, you couldn't see that distance.
So this is that curve, which was meant to be symmetric, is the famous bell-shaped curve. It's the most important curve for gamblers, for mathematicians who work in probability. That bell-shaped curve will come up, and you'll see in a later lecture a connection between how calculus enters in probability, and it enters for this function.
OK, now what's this slope? What's the slope of that function? Again, symmetric, or maybe anti-symmetric, because I have this factor x. So what's the slope? The slope starts at 0. So here's x again. I'm graphing now the slope, so this was z. Now I'm going to graph the slope of this.
OK, the slope starts out at 0, as we see from this picture. Now we can see, as I go forward here, the slope is always negative. The slope is going down. Here it starts out-- yeah, so the slope is 0 there. The slope is becoming more and more negative. Let's see. The slope is becoming more and more negative, maybe up to some point. Actually, I believe it's that point where the slope is becoming-- then it becomes less negative. It's always negative. I think that the slope goes down to that point x equals 1, and that's where the slope is as steep as it gets. And then the slope comes up again, but the slope never gets to 0. We're always going downhill, but very slightly.
Oh, well, of course, I expect to be close to that line because this e to the minus x squared over 2 is getting so small. And then over here, I think this will be symmetric. Here the slopes are positive. Ah! Look at that! Here we had an even function, symmetric across 0. Here its slope turns out to be-- and this could not be an accident. Its slope turns out to be an odd function, anti-symmetric across 0. Now, it just was. This is an odd function, because if I change x, I change the sign of that function.
OK, now if you will give me another moment, I'll ask you about the second derivative. Maybe this is the first time we've done the second derivative. What do you think the second derivative is? The second derivative is the derivative of the derivative, the slope of the slope. My classical calculus problem starts with function one, produces function two, height to slope. Now when I take another derivative, I'm starting with this function one, and over here will be a function two.
So this was dz dx, and now here is going to be the second derivative. Second derivative. And we'll give it a nice notation, nice symbol. It's not dz dx, all squared. That's not what I'm doing. I'm taking the derivative of this. So I'm taking-- well, the derivative of that, I could-- I'm going to give a whole return to the second derivative. It's a big deal. I'll just say how I write it: dz dx squared. That's the second derivative. It's the slope of this function.
And I guess what I want is would you know how to take the slope of that function? Can we just think what would go into that, and I'll put it here? Let me put that function here. minus x e to the minus x squared over 2. Slope of that, derivative of that. What do I see there? I see a product. I see that times that. So I'm going to use the product rule. But then I also see that in this factor, in this minus x squared over 2, I see a chain. In fact, it's exactly my original chain. I know how to deal with that chain.
So I'm going to use the product rule and the chain rule. And that's the point that once we have our list of rules, these are now what we might call four simple rules. We know those guys: sum, difference, product, quotient. And now we're doing the chain rule, but we have to be prepared as here for a product, and then one of these factors is a chain.
All right, can we do it? So the derivative, slope. Well, slope of slope, because this was the original slope. OK, so it's the first factor times the derivative of the second factor. And that's the chain, but that's the one we've already done. So the derivative of that is what we already computed, and what was it? It was that. So the second factor was minus x e to the minus x squared over 2. So this is-- can I just like remember this is f dg dx in the product rule. And the product, this is-- here is a product of f times g. So f times dg dx, and now I need g times-- this was g, and this is df dx, or it will be. What's df dx? Phooey on this old example. Gone.
OK, df dx, well, f is minus x. df dx is just minus 1. Simple. All right, put the pieces together. We have, as I expected we would, everything has this factor e to the minus x squared over 2. That's controlling everything, but the question is what's it-- so here we have a minus 1; is that right? And here, we have a plus x squared. So I think we have x squared minus 1 times that.
OK, so we computed a second derivative. Ha! Two things I want to do, one with this example. The second derivative will switch sign. If I graph the darn thing-- suppose I tried to graph that? When x is 0, this thing is negative. What is that telling me? So this is the second group. This is telling me that the slope is going downwards at the start. I see it.
But then at x equal 1, that second derivative, because of this x squared minus 1 factor, is up to 0. It's going to take time with this second derivative. That's the slope of the slope. That's this point here. Here is the slope. Now, at that point, its slope is 0. And after that point, its slope is upwards. We're getting something like this. The slope of the slope, and it'll go evenly upwards, and then so on.
Ha! You see that we've got the derivative code, the slope, but we've got a little more thinking to do for the slope of the slope, the rate of change of the rate of change. Then you really have calculus straight. And a challenge that I don't want to try right now would be what's the chain rule for the second derivative? Ha! I'll leave that as a challenge for professors who might or might not be able to do it.
OK, we've introduced the second derivative here at the end of a lecture. The key central idea of the lecture was the chain rule to give us that derivative. Good! Thank you.
NARRATOR: This has been a production of MIT OpenCourseWare and Gilbert Strang. Funding for this video was provided by the Lord Foundation. To help OCW continue to provide free and open access to MIT courses, please make a donation at ocw.mit.edu/donate. | http://ocw.mit.edu/resources/res-18-005-highlights-of-calculus-spring-2010/derivatives/chains-f-g-x-and-the-chain-rule/ | 13 |
67 | Satellite orbits lie in planes that bisect the orbited body. If the Earth were not rotating, each orbiting satellite would pass over the same point on the Earth with each orbit, crossing the equator repeatedly at the same longitude.
Because the Earth is constantly rotating, each orbital pass of the satellite (as indicated by the model described in this activity) appears to be to the west of the previous one. In reality, the Earth is rotating eastward as the orbital plane remains fixed.
Students will examine the factors determining the length of a satellite's orbit around Earth
Recognize that the Earth rotates 360 degrees in 24 hours, or:
60 minutes   24 hours   1440 minutes
---------- X -------- = ------------
  1 hour      1 day        1 day
Dividing 360 degrees by 1440 minutes shows that the Earth is rotating 0.25 degrees every minute. Here's the math:
 360 degrees    .25 degrees
------------ = ------------
1440 minutes     1 minute
The satellites that we will want to track travel around the Earth in approximately 102 minutes. Thus, we can see that if the satellite crossed the equator at 0 degrees longitude on one orbit, it would cross the equator at 25.5 degrees west longitude 102 minutes later.

.25 degrees   25.5 degrees
----------- = ------------
 1 minute      102 minutes
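For classes with computer access, the same arithmetic can be checked with a few lines of code. This is a sketch added for illustration; the 102-minute period is the NOAA-class value used throughout this activity:

# Earth's rotation rate and the satellite's westward shift per orbit.
MINUTES_PER_DAY = 24 * 60                    # 1440 minutes
deg_per_minute = 360 / MINUTES_PER_DAY       # 0.25 degrees of rotation per minute

orbital_period_min = 102                     # NOAA-class polar orbiter
shift_per_orbit = deg_per_minute * orbital_period_min

print(deg_per_minute, shift_per_orbit)       # 0.25 degrees/minute, 25.5 degrees west per orbit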
The extremely large size of the Earth in relation to the very modest thickness of the atmosphere leads to frequent, intentional distortions of scale in map projections. Constructing a true scale physical model of an orbiting satellite's path will lay the groundwork for insights into geographical configurations on a three-dimensional sphere, and the physical characteristics of the satellite's orbit.
Given the following information answer the questions that follow.
If you are tracking the NOAA series of weather satellites, the following figures will be a close approximation.
                         Km.    Miles
Mean orbital altitude     860     534
Width of field of view   2900    1800
Orbital period ......... 102 minutes
Determine the scale of your globe by measuring either its diameter or circumference and comparing that to the Earth's actual diameter or circumference.
What follows is a series of questions to test for understanding. The answers to each question follow on an accompanying page.
- What is the Earth's diameter? What is the diameter of the globe?
- What is the ratio of the model diameter compared to the Earth's diameter? This is your scale measure.
Using a piece of wire (#10 works well), position the wire in such a way as to center the wire over your location on the globe. For our location here in Maine, the wire should cross the equator at approximately 60 degrees west longitude, and continue up to the left of the north pole by about 8 degrees. Experiment with different ways of supporting the wire slightly above the globe. With our globe, we were able to rig a support from the globe support bar already in place. The height of the globe support bar was almost at the exact height as our orbital plane, which we will be discussing shortly. The globe should be able to rotate under the wire. You may want to add a piece of plastic transparency material to the wire. This will represent the width of the Earth that the satellite will image on a typical pass. Because this width is approximately 1800 miles, the scale plastic strip will be approximately 2.5" wide.
- Just how high should we position the piece of wire above the globe?
On a large sheet of paper, draw a circle of the same diameter as your globe.
This circle will represent the surface of the Earth. Write the scale of measure in the lower right-hand corner as a legend. Draw a circle having the same center as the first, but with a radius of 534 miles more than your first circle. This will represent the orbit of the satellite over the Earth's surface. Label point H on the inner circle (Earth's surface) as your town or city. Draw a straight line through this point. That will represent the horizon as it appears from your location.
The satellite we wish to examine can only be received while in an unobstructed straight line from the antenna; thus, it can only be received while above the horizon. The point at which we first receive a satellite's signal is known as Acquisition of Signal (AOS), and the point at which we lose the signal is referred to as Loss of Signal (LOS). A good analogy would be to think of sunrise as AOS and sunset as LOS.
On your drawing, label two points that lie directly under the points at which the satellite will come into (Acquisition of Signal, AOS) and go out of (Loss of Signal, LOS) receiving range.
Refer to your diagram and answer the following questions:
- How many miles from your location are the points on Earth over which the satellite will come into or leave receiving range?
You may wish to draw a circle on your globe to represent this range, known as the acquisition circle.
- Knowing the period of a complete orbit, find a way to calculate the amount of time that the satellite will be in range if it passes directly overhead as you've illustrated.
Rotate the globe to position the wire so that the northbound orbit will cross the equator at 0 degrees longitude. If the Earth were not rotating, the satellite would always follow the path illustrated by the wire. Would this path ever bring the satellite over your school?
Polar orbiting weather satellites have an orbital period of about 102 minutes. This means they complete a trip around the world in approximately 1 hour and 42 minutes.
The Earth rotates 360 degrees in 24 hours.
- How many degrees does the Earth rotate in 1 minute?
- How many degrees does the Earth rotate in 1 hour?
- How many degrees does the Earth rotate in 102 minutes?
- If the satellite crossed the equator at 0 degrees longitude at 0000 UTC, at what longitude would it cross the equator 102 minutes later? 204 minutes later?
- How many orbits will the satellite complete in one day?
- How many miles does the satellite travel during one orbit?
- How many miles does the satellite travel during each day?
- How many times will your location be viewed by this satellite in one day?
The sample calculations given here are based on the use of a 12-inch diameter globe and should be proportionally adjusted for use of other materials. The diameter of a globe can be determined by first finding the circumference using a tape measure.
                 Earth          Model
Diameter         8100 miles     12 inches
Circumference    25,500 miles   37.75 inches
Path width       1800 miles     2.5 inches
Orbit altitude   534 miles      .75 inches
Using the circumference, a ratio can be established as follows:
37.75 inches     1 inch
------------ = ---------
25,500 miles   675 miles
By the scale established above, the orbital height of 500 miles is represented by a scale distance of:
  1 inch      .75 inch
--------- = ----------
675 miles    500 miles
Thus, on this model of the Earth, using a 12-inch diameter globe, the wire representing the orbital plane should be placed approximately 3/4" above the surface of the globe.
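The same unit conversion can be written as a short helper if students want to scale other distances onto the model. This is a sketch only, using the 12-inch-globe figures from the table above:

# Scale real-world distances down to the 12-inch globe model.
EARTH_CIRCUMFERENCE_MI = 25_500              # value used in this activity
GLOBE_CIRCUMFERENCE_IN = 37.75

miles_per_inch = EARTH_CIRCUMFERENCE_MI / GLOBE_CIRCUMFERENCE_IN   # about 675 miles per inch

def to_model_inches(miles):
    """Convert a real distance in miles to inches on the model globe."""
    return miles / miles_per_inch

print(round(to_model_inches(534), 2))    # orbital altitude -> about 0.79 inch (the text rounds to 3/4 inch)
print(round(to_model_inches(1800), 2))   # image swath width -> about 2.66 inches (the text uses 2.5 inches)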
Assume the satellite pass is directly overhead of point H (your home location). R equals the radius of our reception area (acquisition circle) and D equals the diameter of the reception area.
Having established the AOS and LOS points on the orbital curve, project lines from both the AOS and LOS points down to the center of the Earth. Label this point G (Earth's Geocenter). With these lines in place, go back and label the two points where these two lines intersect the Earth's surface. Appropriately label these Points A and L. If we measure the angle formed by points AGL, we find it to be 55 degrees. This is angle D (diameter of acquisition circle). Angle R is half of angle D (R = radius of acquisition circle). Why is it important to know this angle? We are interested in knowing the size of our acquisition circle, that is, the distance from our home location (point H) that we can expect to receive the satellite signal.
We've previously determined that the circumference of the Earth is approximately 25,500 miles. Thus:
 360 degrees     55 degrees
------------ = -------------
25,500 miles   ~3,900 miles
We know that the satellite signal will be present for 55/360 of this distance. Using your calculator, 55/360 represents .1528 of the total circle. Thus, if we multiply the Earth's circumference (25,500 miles) by .1528, we can determine the distance (diameter) of our acquisition circle, which in this case is equal to 3896 miles. Half of that, or R, is 1948 miles. For a receiver in Maine, a satellite following the path as indicated on Figure A would be somewhere south of Cuba when the signal is first heard (AOS), and to the north of Hudson Bay when the signal is lost (LOS).
Let's look at some numbers. Remember that it takes 102 minutes for a NOAA-class satellite to make one complete orbit around the Earth. We now want to determine the fractional part of the orbit, the exact time inside our acquisition circle, that the signal will be usable to us. Using the same math as before, the satellite will be available for 55/360 of one complete orbit. As previously defined, 55/360 = .1528. This number times the orbital period of 102 minutes yields 15.6 minutes. This means that on an overhead pass, we can expect to hear the satellite's signal for approximately 15.6 minutes. Using a reliable receiver with outside antenna will in fact yield the above reception time.
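Those acquisition-circle numbers are easy to reproduce in code. A sketch, using the 55-degree AOS-to-LOS arc measured from the drawing:

# Acquisition-circle size and visible time for an overhead pass.
EARTH_CIRCUMFERENCE_MI = 25_500
ARC_DEG = 55                         # angle AGL measured from the scale drawing
ORBITAL_PERIOD_MIN = 102

fraction = ARC_DEG / 360                          # about 0.1528 of a full circle
diameter_mi = fraction * EARTH_CIRCUMFERENCE_MI   # ground distance from AOS to LOS
radius_mi = diameter_mi / 2                       # acquisition-circle radius R
visible_min = fraction * ORBITAL_PERIOD_MIN       # time the signal can be heard

print(round(diameter_mi), round(radius_mi), round(visible_min, 1))   # 3896 1948 15.6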
The Earth rotates 360 degrees in 1440 minutes. Thus:
360 / 1440 = .25 degrees/minute
If the Earth rotates .25 degrees each minute, then:
.25 x 60 = 15 degrees/ hour
If the Earth rotates .25 degrees each minute, then:
.25 x 102 = 25.5 degrees/orbit
The satellite crosses the equator at 0 degrees west longitude. The next equator crossing will be 102 minutes later, and located 25.5 degrees west. Thus, the next equator crossing will take place at 25.5 degrees west longitude. For the next orbit, it will again be 25.5 degrees further west, which would place this crossing at 51 degrees west longitude (25.5 + 25.5).
The satellite will be on its 15th orbit at the end of 24 hours since it completes 14 orbits in one day and begins a 15th. More precisely:
 1 orbit    1440 minutes   14.12 orbits
-------- X ------------ = ------------
102 min.       1 day          1 day
The satellite will travel the equivalent of the circumference of each orbit, or 25,500 Earth miles. However, the circumference at orbital altitude is approximately 28,850 miles.
25,500 miles/orbit x 14.12 = 360,060 miles/day (approx.). These are miles traveled at ground level. More precisely, the circumference of the orbital circle is about 28,850 miles, thus:
28,850 miles/orbit x 14.12 = 407,362 miles/day
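A quick computational check of the orbit-per-day and distance-per-day figures (a sketch; small differences from the text come only from rounding 14.1176 to 14.12):

# Orbits per day and distance traveled per day.
MINUTES_PER_DAY = 1440
ORBITAL_PERIOD_MIN = 102
GROUND_CIRCUMFERENCE_MI = 25_500     # length of the ground track per orbit
ORBIT_CIRCUMFERENCE_MI = 28_850      # circumference at orbital altitude

orbits_per_day = MINUTES_PER_DAY / ORBITAL_PERIOD_MIN          # about 14.12
ground_miles = GROUND_CIRCUMFERENCE_MI * orbits_per_day        # about 360,000 miles/day
orbital_miles = ORBIT_CIRCUMFERENCE_MI * orbits_per_day        # about 407,000 miles/day

print(round(orbits_per_day, 2), round(ground_miles), round(orbital_miles))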
At least four times/day. The satellite will usually come within range on two consecutive orbits, sometimes three. Usually figure on a pass to the east of your location, nearly overhead, and then to the west. Remember that at one part of the day the satellite will be on an ascending pass (crossing the equator going north) and at another time of the day the satellite will be on a descending pass (crossing the equator going south).
Here is one final activity which will test your understanding of orbital parameters. Assume you are tracking a NOAA-class satellite with an orbital period of 102 minutes, and a longitudinal increment of 25.5 degrees west/orbit.
Complete the chart for the remaining five orbits.
Orbit #   Equator Crossing (EQX)   Time
   1      0 degrees                1200 UTC
   2
   3
   4
   5
   6
Here's what you should have:
   2      25.5 degrees             1342 UTC
   3      51.0 degrees             1524 UTC
   4      76.5 degrees             1706 UTC
   5      102.0 degrees            1848 UTC
   6      127.5 degrees            2030 UTC
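The whole chart can also be generated programmatically, which makes a nice extension for students with computer access. A sketch; the start date chosen for the datetime object is arbitrary since only the time of day matters:

# Generate the EQX chart: each orbit is 102 minutes later and 25.5 degrees further west.
from datetime import datetime, timedelta

period = timedelta(minutes=102)
shift_deg = 25.5
t = datetime(2000, 1, 1, 12, 0)      # first equator crossing at 1200 UTC
lon = 0.0

print("Orbit #   EQX longitude   Time")
for orbit in range(1, 7):
    print(f"{orbit:<9} {lon:>5.1f} deg W     {t:%H%M} UTC")
    t += period
    lon += shift_deg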
- 12-inch globe
- length of #10 wire (20" to 40") to project orbital plane
- clear plastic strip to project width of Earth image
- data for a NOAA-class satellite | http://www.gma.org/surfing/satellites/orbit.html | 13 |
70 | Teaching Math Through Art
University of Phoenix on-line
Math Textual & Pictorial Journal with drawings, pictures or visual
features that the student finds which relate daily to math and the other
subjects studied in school, will be utilized. Math symbols and symbols used to
write English will be used. Lessons
on a/symmetry will be implemented by using student or verbal description of an
item or topic that will be translated into pictures.
Patterns are a part of math and by writing a poem with a pattern we
explore various visual and verbal aspects of rhythm and rhyme.
During its 3-dimensional construction, an Icosahedron will challenge
several areas of abilities and biological systems, such as small and large
muscular motor skills. Also a fun
way of learning multiplication will be done with String Art and lines and
is aimed at use for fifth grade students but may be used in higher grades also.
Since everyone is different, different aspects of this unit will speak to
different people differently.
Each lesson will take from one to several 20-40 minute periods. The journal will be maintained throughout the school year and will touch on a wide scope of topics, conceivably all secular school
disciplines, including Math, Science, Social Studies- history, cultures, geography; English Language- literature, writing, reading, spelling, punctuation; even Physical Fitness, Gym or Phys Ed will
be food for creative expression and learning via these math through art lessons.
The underlying goal for the following lessons will be to recognize symmetry and asymmetry in art through doing art, math, construction projects, writing poetry and descriptive or narrative
writing. Working with Cubes to learn about different kinds of spatial arrangements, i.e., symmetry, asymmetry and rotational symmetry, is the first unit. Drawing ability changes as our brain
develops and we ‘learn to see’. Sometimes with our fertile imaginations we can create meaning by drawing. This may be called creative seeing and increasing this ability is a goal of this lesson.
Another goal is to improve communication abilities by setting up a consistent, organized format in a Math Art Journal. Unit 2 is Describing A/Symmetry in Words by Drawing from Someone’s
Description. The goal of this lesson is to strengthen decision-making abilities by using everyday experiences and using the focus areas fabricated by the teachers, causing students to improve
their ability to listen and express themselves graphically and verbally. Various formats, English, Mathematical Language, Pictorial ‘Vocabulary’ will be used to expand student’s knowledge base.
Additionally, making connections between creative seeing and pragmatic definitions may also strengthen decision-making abilities. For instance, a student may claim to know what parallel is.
Here she can prove it by describing and drawing some parallel lines.
Unit 3 is Poetic Pattern. “What is the point of writing A Poem in the Shape of a Lamp?” one may ask.
Commonly known as a Japanese lantern, the goal is to make an uncommon
unique work of art for its own sake. Additionally
we will be able to answer if it is symmetrical; determine what makes it good;
and learn some things about the areas of math fellow students are currently
learning. Unit 4 is String Art.
How is making a pretty design part of Math?
For one thing, the goal of achievement of rote memorization can
improve calculation and problem solving skills in the future.
This can be attained while students are, in a sense, distracted doing
art. By observing and analyzing the patterns created, symmetrical
aspects of shapes and numbers will be learned.
Unit 5, A Multifaceted Activity is Literally an Icosahedron.
Concepts in science, such as mechanics, optics and motion will also be
studied from observations of student’s creation.
In a certain population of students, very frequently “verbally smart” girls claim
they hate math, which becomes a self-fulfilling prophecy.
They see a problem, or a page of information that makes no sense to them,
and from their experience, they immediately give up trying.
These art activities help the student see how so much of math is already
a part of their experience and they can use these activities to broaden and
deepen their math knowledge and find connections between curriculum areas they
did not know existed. The more they
work with it, the more it becomes their own, the fear lessens and confidence
builds confidence, and good self esteem.
Some insights shared by the author of Arts with the Brain in Mind, Eric Jensen, are
as follows. He asks, “How do the
arts stack up as a Major Discipline?” 1) They have minimal risk involved [when
it is well supervised]. (All brackets are my own thoughts.) 2) They are
inclusive [because no one is forced to do something at a level they cannot or do
not want to]. 3) They are culturally necessary [because our limited human
capabilities (or differential potentials) cross all cultural lines]. 4) They are
brain-based and they may promote self-discipline and motivation. 5) They are
wide ranging. 6) They have survival value [when the skills taught are not
distracting from any serious or more necessary ones]. 7) They are assessable.
[They can be measured or analyzed to provide feedback, not necessarily
That sums it up. However, other researchers, Kellah M. Edens and Ellen F. Potter of the University of South Carolina, sought to obtain evidence that would strengthen Art’s position as a
Discipline and to enhance its perceived value. They examined an isolated descriptive drawing task and their findings suggest that descriptive drawing provides a viable way for students to learn
scientific concepts and supports the processes of selection, organization, and integration that underpin the cognitive processes necessary for meaningful learning. Art is not just a fun filler
activity when investigated within the cognitive model of learning, an approach that considers the learner’s cognitive system. Furthermore, they maintain it is possible to promote conceptual
understanding with specific types of drawing and any kind of drawing can be an instructional strategy to facilitate learning. Discussion and critiques of the drawings will allow students to make
connections across symbol systems
and build meaning. (Edens, Potter 2001)
There is yet another aspect of arts in education to consider.
According to Phoenix University’s Instructor
of MED522, Sonia McKenzie, who posted this on a class forum, “Effective
integrated instruction blends, harmonizes, coordinates, and unifies concepts to
lead to more authentic real tasks. Also,
integrated instruction allows students to develop understanding and find
connections to what they know and value. As a result of an integrated curriculum
students are more likely to understand and feel confident in their learning.”
Unit 1 Title: Making Cubes to Learn about Symmetry (& other mathematical principles such as volume)
Form, a three-dimensional object; space, the empty area between, around, above, below, or within an object-- are Elements of Design, some of the building blocks of visual art. Balance, the arrangement of equal parts, stable; contrast, the difference between two or more things; repetition, the parts used over and over in a pattern; proportion, the relation of one part to another; unity, all parts working together-- are Principles of Design, how the blocks are used or put together.
Math/Art Topics involved in this lesson are: Polyhedra, Space forms, Symmetrical designs, Geometric sculptures, Rotational symmetry, Constructions, Artists who use math, Technical drawings
Corresponds with National Art Standards:
Understanding and applying media, techniques, and processes; using knowledge of
structures and functions (elements and principles of art); choosing and
evaluating a range of subject matters, symbols, and ideas; reflecting upon and
assessing the characteristics and merits of their work and the work of others;
making connections between visual arts and other disciplines
Corresponds with National Math Standards:
Geometry and Spatial Sense, Measurement, Patterns and Relationships, Number
Sense and Numeration, Mathematics as Problem Solving, Communication, Reasoning,
and Mathematical Connections are several National Math Standards involved in this lesson.
Students will learn about symmetry of a cube, balance and weight, volume of a
cube and be able to recognize them in their surroundings.
Students will learn computer and computer navigational skills by looking
for cubes on especially, http://randisart.com/miscellaneous.htm
information and motivation: The
symmetry and volume of a cube can be depicted in a schematic type drawing.
But it does not necessarily translate in a student’s mind to the
3-dimensional structure it is meant to convey.
Students will explore different ways of representing a cube, making a
cube and making measurements and making a mobile to demonstrate and increase
their mathematical knowledge and artistic creativity.
Art, Science, Language Arts
Materials: pencils or crayons, sticks, string, scissors, Math Art Journal, tape, foam board, cardboard, clear plastic
Vocabulary: 2-dimensional objects, 3-dimensional, cube, diagonal, bisect, rhombus, vertex or vertices, rotational symmetry, symmetrical, asymmetrical, area, volume
1. Warm-up: Draw, shade (consider directions of light source)
and color squares and then cubes on paper or in journal. Glue “good” ones into Journal.
Identify what makes them “good cubes”, i.e., symmetrical, six faces,
parallel sides, perpendicular lines, right (90 degree) angles.
Record Process in Math Art Journal like a Laboratory Experiment with following
Purpose or Goal: Find out all about cubes, how many ways we can make a cube and do it.
Materials: paper, foam board, cardboard, clear plastic, writing and coloring
utensils, scissors, Math Art Journal, tape
Procedure: 1.Copy possible shapes needed to make a cube onto paper, cardboard,
foam board and plastic. 2.Cut them
out and decorate with 30, 60 90 degree lines or other geometric shapes.
3. Tape sides together to make cubes.
Record all results and answer questions.
Results (or data and drawings) Record all information in Math Art Journal:
Make a table with Characteristics of Cubes with headings:
1. face shape, 2. number of faces, 3. number of edges, 4. number of
vertices, 5-8. volume and 9-12. weight of one inch3 piece of paper,
piece of cardboard, piece of foam board, plastic.
Conclusion: Explain how many ways there are to make a cube and what difficulties,
if any were encountered and how they were overcome. What are the characteristics of a cube and how are they
different from another polyhedra such as a tetrahedron or a icosahedron?
Make cubes out of paper by folding them origami style:
Make a box by folding paper by starting with a square piece of paper. (1st
challenge: How can we make a square out of a 8.5x11” piece of rectangular
paper?). Fold diagonally twice.
What shape do we get and what is a diagonal line?
Does it bisect the angle? Make a tent. (2nd challenge--see
Instructor for help) Has anyone
ever made a real tent? What were
its parts? How did you do it? What
did you use? Fold up all four
‘legs’. Fold 4 corners into
center. Fold down 4 flaps. Tuck in
four flaps (just like putting children to bed-not always easy).
Crease top and bottom edges or you will get a rhomboidal shape, not a
cube. (A rhombus is a diamond)
Hold it, spread out gently in both hands. Blow into hole to make box.
Answer questions in Math Art Journal.
Add the cubes to the Mobile with at least two cubes, either in a balanced,
symmetrical aesthetically pleasing way; in a line or attached together to make
an animal or other object yet to be known, perhaps an object sold in student’s
Store, such as jewelry box or a bar of soap. Decide if it is symmetrical,
asymmetrical or has rotational symmetry. (yes, no, yes) If a line can be drawn
and you see the same thing on both sides, it is symmetrical, if not, it is
asymmetrical. If you can spin it on
a point and see the same thing, it has rotational symmetry. Where else do you
see these shapes? Answer these
questions in Math Art Journal.
Follow directions on drawing a square, then a square as it would look not
straight on, from various angles. Use
a square object as a model.
Draw a cube several times, shade them and decorate them or make item
labels out of them. For
example, say, Draw a 1-inch square. Use
a ruler or straight edge. Rotate it
about 450. Give it depth
(or height) by drawing lines down, also 1 inch long.
Now it looks like a table. How
would you make it look like a solid (or see through) cube?
What is its volume? How much
space does it take up? How many
cubic inches or in3 does it have?
Answer questions in Math Art Journal or write it like a Laboratory
Experiment with the components of a lab exercise in mind (Purpose, Material,
Procedure or steps, Results or data and Conclusion or further questions.
Assessment and/or evaluation: Students
complete procedures listed, problems on volume in textbook, workbooks and
teacher’s worksheets. Explain, write in Math Art Journal, or type what the
project involved and what they learned and how they could or did make it better.
(also see Culmination)
Optional variations: 1. Combine cubes with the Icosahedra Mobile. 2. Use
the drawings and measurements of the cubes to improve Store Drawings. 3. Make
measurements that may be realistic and label dimensions on drawings of Stores.
4. Make a separate mobile with just cubes, 2-d, 3-d colored or plain. 5.
Design a cover for a cube shaped student’s Store commodity (soap, perfume,
jewelry box). 6. Include identifying features that are found on real items such
as weight. 7. Innovate and add dimensions to label. 8. Describe it in Math Art
Journal. 9. Add on shapes or thematic decorations to the boxes according to the
Jewish Holiday or topic currently studying. 10. Make boxes out of clay.
Unit 2 Title: Symmetrical or Asymmetrical Drawing Through a Verbal Description
Geometry, Symmetry, Communication through Description, Objects (nouns, materials
or tools) studied in Science, Social Studies or other school subjects.
Corresponds with National Art Standards:
reflecting upon and assessing the characteristics and merits of their work and
the work of others
Corresponds with National Math Standards: Geometry and Spatial
Sense, Measurement, Patterns and Relationships, Communication, Reasoning, and
Mathematical Connections are the most prominent National Math Standards involved
in this lesson.
Improve students' abilities and confidence in communicating in
mathematical language, assessing characteristics, making connections, spatial
sense, measurement and patterns and relationships (between things and people).
And students will learn about symmetry.
information and motivation:
There are many mathematical abilities, which can be expressed verbally.
When Language Arts is a student’s strong point or when writing and presenting
their written work for other students and parents is part of the lessons, Math
and Art can be integrated into the curriculum with this activity.
Math, Art, English Language, Literature
Materials: Paper, pencils, pens, objects to describe and draw.
Vocabulary: parallel lines, perpendicular lines, 3-dimensional, 2-dimensional, spatial sense, pattern relationship, relative size, precise or exact size, descriptive writing, noun, pronoun, adjective
One student in a pair or group describes some object without telling the
others what exactly it is (unless it is absolutely necessary to name it in order
to come close to the correct object).
She can write it out, type it or describe it orally, using as many or as
few words as needed; at least one 4 –5 sentence paragraph.
More information may be better than not enough information, for the
person drawing to get a nearly accurate idea of what it is.
The one describing the object has to state, among other features, whether
it is symmetrical or not.
If the one describing the object does not give enough information then
the Artist can ask specific questions in order to get a good enough visual
image, to create a drawing.
Changes can be made to the drawing as necessary, for the one describing
to be satisfied that her description was understood.
Switch roles so everyone gets to be the Artist and the Writer.
Discuss and write in math Art Journal whether it was an easy task or
difficult and why.
Was the communication between the pairs smooth?
Did the object need to be identified to get it right? Was it classified
as symmetrical or asymmetrical correctly? How my adjectives were used to get it
done? (also see Culmination)
Optional variations: Limit the items being described to certain objects in the room or
geometric shapes or 2-dimensional objects or 3-dimensional objects:
Shorten or extend the time; make it a homework assignment; limit the
materials to a drawing pencil and nothing else; extend it to something they have
never actually seen:
Make it a character in a book, an object or tool used in Science or
Place the drawing in a cartoon or time-line depiction.
by the Instructor.
Examples: One teacher had the 4th grade girls do a Treasures
Project. They described a
favorite family item for Language Arts. They
wanted a picture of it in the pamphlet. The
student described it to me. I drew
it. The child’s Mother was amazed
at the “real communication between” us. I did not think that it was such an amazing thing.
How different can a ‘round, brown teapot with silver base and silver
ball shaped handle on the top and swirley handle’ come out? My Mother later
reminded me, “For you it was easy but it is not easy for everyone.”
from a description (below)
Unit 3 Title: Poetic Pattern
concepts: geometric shapes, repetition of motif to create
pattern (where the motif is the rhythm or shape of the poem), 2-d space
forms, symmetrical designs
Corresponds with National Art Standards:
Making connections between visual arts and other disciplines;
Understanding the visual arts in relation to history and cultures;
Using knowledge of structures and functions (elements and principles of
art-shape, space, balance, etc.)
Corresponds with National Math Standards:
Mathematical connections, Geometry and spatial sense, Measurement,
Patterns and relationships
objectives: Students will find words to arrange into a designated pattern
or poem and learn about symmetry.
information and motivation:
Choosing and evaluating a range of subject matters, symbols, and ideas is
an exercise that is sometimes considered a Visual Standard.
This lesson stretches to accomplish that task because it has more to do
with Language Arts than Visual Arts, however, combining ideas about math and
molding one’s writing into quantified or measured steps, as is required in
order to make lines consisting of a certain number of syllables, can challenge
the math and linguistic areas of talent. A lantern is a Japanese poem written in the shape of a
lantern. Lantern poems have a
pattern, which resembles the profile of the kind of lantern that gives off
The pattern is:
Line 1-one syllable
Line 2-two syllables
Line 3-three syllables
Line 4-four syllables
Line 5-one syllable
areas: Language Arts; writing, syllable counting; Mathematics;
mathematical concept, number or geometric shape, Art: symmetry, pattern
Materials: Paper, writing utensils or computer
Vocabulary: Lantern, shape, syllable
Procedure: Brainstorm with students for ideas about what to write about,
such as geometric shapes, then write a lantern about a mathematical concept, a
number, or a geometric shape. Add
the result and the steps taken to accomplish this to Math Art Diary in a
narrative description. Keep track
of syllables with tally marks. Add
Assessment and/or evaluation: Objective assessment: Completion
of a Lantern to the specified dimensions, meaningful use of words.
Subjective Assessment: Positive
attitude, joy and contentment with creation; rhythm and tone aesthetically
pleasing. (and see Culmination)
Optional variations: Write a lantern about any other topic touched upon in
Language Arts, Social Studies or Science class:
Compose and perform it to a melody.
In order to help students brainstorm, do these two activities:
fun: Fold a piece of graph
paper in half. The fold line is the line of symmetry, and each side is a mirror
image of the other. Open the paper.
On one side of the line, color several squares to make a pattern.
Another student copies the other side of the line. Fold the paper with
the patterns inside. Do the squares
of the same color cover each other? If
so, you have created a symmetric design. Describe
how you could create a design with more than one line of symmetry.
Scale it up: On a sheet of graph paper, draw the x-axis near the bottom of
the paper and the y-axis near the left-hand side of the paper to show a quadrant
I grid. Graph the following ordered
pairs, connecting the first point to the second point, then continue as each
point is plotted: (2,1),(1,2),(3,2),
(3,8),(4,8),(3,9),(4,10),(5,9)(4,8),(5,8),(5,2),(7,2),(6,1),(2,1). Double each
number in the ordered pairs, and graph each new point on another sheet of graph
paper, again using quadrant I. What
happened to the drawing? These
drawings are similar. They have the
same shape and proportional dimensions. Create
a simple, small drawing and write a list of ordered pairs that could be plotted
to copy your drawing. “Scale”
the drawing to a larger size by multiplying each number in the order pairs by
the same factor. What scale would
your object be if you divided the ordered pair number by the same factor? (Half)
Record exercises in Math Art Journal.
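For classrooms with computers, the same scaling of ordered pairs can be done in a few lines. A sketch; the point list is the one given in the activity above:

# Scale every ordered pair by the same factor to make a similar (same-shape) drawing.
points = [(2, 1), (1, 2), (3, 2), (3, 8), (4, 8), (3, 9), (4, 10),
          (5, 9), (4, 8), (5, 8), (5, 2), (7, 2), (6, 1), (2, 1)]

def scale(pts, factor):
    """Multiply both coordinates of every point by the same factor."""
    return [(x * factor, y * factor) for x, y in pts]

doubled = scale(points, 2)      # twice as large, same shape
halved = scale(points, 0.5)     # dividing by 2 is the same as multiplying by 1/2
print(doubled[:3])              # [(4, 2), (2, 4), (6, 4)]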
Adapted from Gayle Cloke, Nola Ewing, Dory Stevens. (2001). The
fine art of mathematics, Teaching Children Mathematics, 8(2), 108-110.
by Mrs. Waxman
Unit 4 Title: String Art to Learn About Symmetry (and other Math Facts like Multiplication)
concepts: Math/Art Topics involved in this lesson are: Polyhedra, Space forms, Symmetrical designs, Geometric
sculptures, Rotational symmetry, Constructions, Artists who use math, Technical
drawings and see unit 1
Corresponds with National Art Standards:
Form, a three-dimensional object; space, the empty area
between, around, above, below, or within an object-- are Elements of Design,
some of the building blocks of visual art.
Balance, the arrangement of equal parts, stable; contrast,
the difference between two or more things; repetition, the parts used
over and over in a pattern; proportion, the relation of one part to
another; unity, all parts working together-- are Principles of Design,
how the blocks are used or put together.
Corresponds with National Math Standards:
Mathematics as Problem Solving: How
will we get it to look how we want it to look? We may have to add, subtract,
multiply or divide and we will have to decide what to do with those
calculations, where to apply our answers.
Mathematics as Communication: We
will be talking to each other as we work, about the math-artwork, using the
necessary math and art vocabulary. Mathematics
as Reasoning: For something to be
done in an organized way we will use reasonable methods which logic, and some
call math as a part. Mathematical Connections:
We will discuss and log where these shapes are seen besides here and what
else in the universe they look like. Measurement
and Estimations of length of yarns and lines will be made.
Geometry and Spatial Sense and Patterns and Relationships, Number Sense
and Numeration, Concepts of Whole Number Operations, Whole Number
Computation will be standards addressed in the Procedure or Optional Variations. Of the National Council Teachers Math Association Standards, Statistics and
probability and Fractions and decimals may be the only ones not touched on in
this lesson design. Changing the
scales to fractions and decimals would take care of that, leaving only
Statistics and probability which, with some thinking we can include also, if we
stay innovative with the lesson.
Students will learn about patterns (Mathematics
has been called “the science of patterns”).
Students will learn how to use patterns to make beautiful line art while
practicing number and operation sense, geometry, measurement estimation, whole
number operations and computation.
information and motivation: Students
may be surprised at the dramatic artwork they can create if they measure
carefully and follow a pattern and it can be fun to put the separate parts of
something together, hang it on the ceiling or wall to make a moving sculpture or
a mobile. Multiplication can be demonstrated to themselves by making squares or
four sided polygons from crossed yarns lines. The more free form activity is the Procedure while the
more structured ones are Optional Variations.
number sense, computation, Art
Materials: pencils or crayons, sticks, string, scissors, hole punch or needle, foam board or clear plastic sheets, Math Art Journal
Vocabulary: 2-dimensional objects, 3-dimensional, cube, angles, right angle, obtuse, acute, equilateral triangles, scalene, isosceles, intersecting lines, symmetrical, asymmetrical, rotational symmetry, polygon
Procedure: Draw two line segments (or use the edges of foam board or clear plastic sheets)
that meet at a right angles or cut small, equally spaced marks on the edge,
about 3″ line segments with 1/4″ marks.
Sequentially connect the pairs of marks with straight lines, starting at the
first mark on each segment so that the lines cross as they are shown or, wrap
colored yarns to create different patterns. Students can design their own arrangements of line segments
similar to these (see Instructor for help):
Demonstrate, identify and define the vocabulary words in the artwork you are
doing: 2-dimensional objects
3-dimensional, angles, right angle, obtuse, acute, equivalent triangles,
scalene, isosceles, intersecting lines, symmetrical, asymmetrical, rotational
symmetry, polygon. Discuss where
you see them and write all about it in Math Art Journal.
Add line art or string art pieces to the Icosahedra Mobile or Cube structure in
an aesthetically pleasing way, balanced by color or shape or in a the shape of
and animal or other object yet to be known, perhaps an object in ‘Student’s
Store’, such as combs or hair brushes.
Assessment and/or evaluation: Students
complete multiplication problems in Math Journal. Explain, write or type what
the project involved. (also see Culmination)
Optional Variations: A digit circle is a circle with digits 0-9 equally spaced
around the outside. Patterns can be
created while practicing number operations.
Use a black line master from www.TeachingK8.com
or draw your own circle and add numbers 1-9 as shown.
Choose a multiplying number, then multiply each of the digits from 0-9 by
that multiplier. Draw an arrow from that digit to the number, which is the last
digit of the product on the circle. For
example, suppose your multiplier (multiplying number) is 7.
1x7=7, so draw an arrow from 1 to 7.
2x7=14, so draw an arrow from 2 to 4 (4 is the last digit of 14).
3x7=21, so 3 connects to 1,
4x7=28, so 4 connects to 8, and so on.
Create designs for each of the multipliers, 0-9, and then compare the designs and look
for connections. There are similarities between pairs of designs of multipliers
that sum to 10 and some can come out pretty. Use different colors. This is good
for practicing multiplication facts.
The same products can be recorded in a Times Table chart (refer to a calculator or the rear cover of the Math Art Journal, where a Conversion Table and Grammar Rules are pre-printed).
Do this with all digits from 0-9, and then compare
the designs to look for similarities and differences. There are some striking
patterns that emerge; for example, the designs are identical for pairs that add
to 10, so 1 and 9 make the same design, as do 2 and 8, 3 and 7 and 4 and 6. The
designs are created in the opposite direction, though. Ask your students to come
up with ideas as to why this might be. One way to think about it is that adding
3 gives the same last digit as subtracting 7 and vice versa.
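One quick way to check the observation about multipliers that add to 10 is to list the arrows each multiplier produces. A small Python sketch (the function name and printout are just for illustration):

```python
# Arrows on the digit circle: from each digit d to the last digit of d x m.
def arrows(m):
    return {(d, (d * m) % 10) for d in range(10)}

for m in range(10):
    print(m, sorted(arrows(m)))

# Multipliers that add to 10 draw the same picture, traced in reverse:
reversed_sevens = {(b, a) for (a, b) in arrows(7)}
print(arrows(3) == reversed_sevens)   # True
```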
Problem: If there are four people at a party and everyone shakes hands
with everyone else, how many handshakes are there? What if there are five people? Six people?
100 people? A string art
picture like the one shown can help solve this problem:
Patterns students may find: With six people, the first person needs to shake five hands. The second person shakes four new hands (they already shook with person #1), the third person shakes three new hands, the fourth shakes two, and the fifth shakes one hand. Everyone will then have shaken hands with the sixth person. The total number of handshakes is 5+4+3+2+1+0=15. With 100 people, the total is 99+98+97+…+2+1+0 = ?
Another way is to see that in a group of six people, each person shakes
hands with five others for a total of 30 hands shaken. But "a
handshake" is two people shaking hands, so 30 is exactly two times too
many: 30 ÷ 2 = 15 handshakes. So with 100 people, it should be 99 x 100 ÷ 2 = 4950 handshakes. Can you spot the pattern?
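In general (this is the standard handshake counting formula, written in LaTeX):

$$ \text{handshakes among } n \text{ people} \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2}, \qquad \frac{100 \times 99}{2} = 4950. $$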
Adapted from Naylor, M. (2006, March). Do you see a pattern? Teaching PreK-8.
Examples: see www.randisart.com/pottery/String_Art_mobile_orange_sample.jpg
A Multifaceted Activity: A Polyhedron, the Icosahedron (a 20-sided figure)
see unit 4
with National Art Standards: see unit 1
with National Math Standards: see unit 1
objectives: Students will learn the basic geometric shapes of the equilateral triangle, icosahedron, and tetrahedron; they will be able to identify functional aspects of these shapes and their unique mechanical property of strength; learn about rotational symmetry; learn about graphic design by creating horizontal, vertical and alternating patterns and studying their motion and how distortions occur; and learn that art is made from shapes and that some shapes occur naturally, others are invented by humans, and shapes have specific names and, sometimes, purposes.
information and motivation: It is commonly thought that the principle of twisting and release was first used by the ancient Greeks to power catapults, which tossed heavy stones great distances. Although the icosahedron in this project is made of one piece of paper, when properly constructed it can support a heavy book without being crushed. An
icosahedron’s structural qualities are demonstrated by triangulation.
The triangle is a shape used to make things (like bridges and buildings)
that need to withstand a lot of weight or force.
Triangles spread out the force so it is not focused at one point, where it could cause something to break or fracture.
A zoetrope is one of several animation toys that were invented in the 19th century. When viewed in motion through the slits, the images appear thinner than their actual sizes; these toys were precursors to animation and film.
This multifaceted project incorporates elements from several academic areas. It requires varied tasks and satisfies artistic, technical and hands-on personal preferences while providing success for students of all artistic skill levels.
areas: This project combines geometry, structure, physical science, graphic design, animation, motion, and mechanical free-hand drawing with catapult mechanics. The project is an icosahedron, a geometric figure with 20 faces made of equilateral triangles; it is, therefore, a multifaceted lesson. It has three distinct surface areas: five triangles at each end and a central band of ten triangles.
18" x 24" paper, ruler, a sharp-edged instrument or scissors, coloring utensils (crayons, markers, etc.), tape, string, glue, pipe cleaner, T-square and a 30-60 degree triangle (optional) to make 60° angles
icosahedron, zoetrope, rotational symmetry, torsion, static, stationary,
Trace the notched template pattern of triangles with 3” sides.
Cut it out, being careful to leave the hems. The hems will not be seen and are not decorated.
Design, draw and color the surfaces, possibly with the form of motion in mind
since the static drawing will look different in motion.
One end will have generally vertical lines or alternating color circling
the structure, the other will have horizontal lines that waver. The middle can
be designed freely by the student, using simple geometric and free forms, or
elaborate representational drawings. Drawing skill is not a necessity, and the outcome is a mystery until the icosahedron is in motion.
Score the edges: Hold a ruler on the line. Hold the knife like a pencil. Press with the sharp edge along the lines (or the teacher will do it to ensure sharp, crisp, straight lines). Fold the edges to make a creased form.
The ends are assembled first. Starting
at one end, each hem is glued to its neighbor from the inside. The form begins to take shape as the ends come together.
The center follows automatically. The
last two hems of each end should be left unattached. This will also leave two unattached hems in the center
creating three openings that are connected end to end. A pipe cleaner axle with
looped ends (bend and twist the loops around a pencil) is inserted into this
opening. The three edges of the opening are then glued together.
and/or evaluation: Students write an essay on the project, including the physical science and historical information learned, in the Math Art Journal. Students read it to other students or parents and demonstrate the properties discussed, such as strength, by putting a heavy book on the icosahedron. (See also Culmination.)
1. Add catapult torsion using string threaded through each loop of the pipe cleaner. Tie the ends together. Hold the ends, stretch the string, and spin the icosahedron. As it spins, the string loops twist around themselves. Pull gently and release; the string will unwind and rewind. Each pull and release keeps the icosahedron in motion, animating the surface designs. The horizontal lines move up and down the surface, the colors in the vertical pattern optically mix, and the shapes and colors in the center mix and move.
2. Hang the polyhedra from the ceiling.
3. Leave out the string or coloring.
4. Prepare the shape ahead of time or have the student actually use the template.
5. Make a tetrahedron with four triangles.
6. Attach other shapes from the other lessons to the mobile.
Adapted from Strazdin, R. (2000, May). Icosahedrons: A multifaceted project. Arts & Activities, 127(4), 38.
Units 1-5: Invite parents. Everyone views the display of drawings and projects, including the Icosahedron Mobile (or Combination of Shapes Mobile), in a gallery-style exhibit in the classroom or the school's hallways. Students may share their poems by reading them out loud or matting them on a nice background to post on the wall; add pictures or decorations with crayons, colored pencils or pens.
[Image captions: 'Centagon', a hundred-sided figure; Dragon box template sketch] | http://randisart.com/Math%20Through%20Art.htm | 13
103 | Calculus finds the relationship between the distance traveled and the speed — easy for constant speed, not so easy for changing speed. Professor Strang is finding the "rate of change" and the "slope of a curve" and the "derivative of a function."
Professor Strang's Calculus textbook (1st edition, 1991) is freely available here.
Subtitles are provided through the generous assistance of Jimmy Ren.
PROFESSOR: OK, hi. This is the second in my videos about the main ideas, the big picture of calculus. And this is an important one, because I want to introduce and compute some derivatives. And you'll remember the overall situation is we have pairs of functions, distance and speed, function 1 and function 2, height of a graph, slope of the graph, height of a mountain, slope of a mountain.
And it's the connection between those two that calculus is about. And so our problem today is, you could imagine we have an airplane climbing. Its height is y as it covers a distance x. And its flight recorder will-- Well, probably it has two flight recorders. Let's suppose it has. Or your car has two recorders.
One records the distance, the height, the total amount achieved up to that moment, up to that time, t, or that point, x. The second recorder would tell you at every instant what the speed is. So it would tell you the speed at all times. Do you see the difference? The speed is like what's happening at an instant.
The distance or the height, y, is the total accumulation of how far you've gone, how high you've gone. And now I'm going to suppose that this speed, this second function, the recorder is lost. But the information is there, and how to recover it. So that's the question. How, if I have a total record, say of height-- I'll say mostly with y of x. I write these two so that you realize that letters are not what calculus is about. It's ideas.
And here is a central idea. If I know the height-- as I go along, I know the height, it could go down-- how can I recover from that height what the slope is at each point? So here's something rather important, that's the notation that Leibniz created, and it was a good, good idea for the derivative. And you'll see where it comes from.
But somehow I'm dividing distance up by distance across, and that ratio of up to across is a slope. So let me develop what we're doing. So the one thing we can do and now will do is, for the great functions of calculus, a few very special, very important functions. We will actually figure out what the slope is.
These are given by formulas, and I'll find a formula for the slope, dy/dx equals. And I won't write it in yet. Let me keep a little suspense. But this short list of the great functions is tremendously valuable. The process that we go through takes a little time, but once we do it it's done. Once we write in the answer here, we know it.
And the point is that other functions of science, of engineering, of economics, of life, come from these functions by multiplying-- I could multiply that times that-- and then I need a product rule, the rule for the derivative, the slope of a product. I could divide that by that. So I need a quotient rule.
I could do a chain. And you'll see that's maybe the best and most valuable. e to the sine x, so I'm putting e to the x together with sine x in a chain of functions, e to the sine of x. Then we need a chain rule. That's all coming.
Let me start-- Well, let me even start by giving away the main facts for these three examples, because they're three you want to remember, and might as well start now. The x to the nth, so that's something if n is positive. x to the nth climbs up. Let me draw a graph here of y equals x squared, because that's one we'll work out in detail, y equals x squared.
So this direction is x. This direction is y. And I want to know the slope. And the point is that that slope is changing as I go up. So the slope depends on x. The slope will be different here. So it's getting steeper and steeper. I'll figure out that slope.
For this example, x squared went in as 2, and it's climbing. If n was minus 2, just because that's also on our list, n could be negative, the function would be dropping. You remember x to the minus 2, that negative exponent means divide by x squared. x to the minus 2 is 1 divided by x squared, and it'll be dropping. So n could be positive or negative here. So I tell you the derivative.
The derivative is easy to remember, the set number n. It's another power of x, and that power is one less, one down. You lose one power. I'm going to go through the steps here for n is equal to 2, so I hope to get the answer 2 times x to the 2 minus 1 will just be 1, 2x. But what does the slope mean? That's what this lecture is really telling you.
I'll tell you the answer for if it's sine x going in, beautifully, the derivative of sine x is cos x, the cosine. The derivative of the sine curve is the cosine curve. You couldn't hope for more than that. And then we'll also, at the same time, find the derivative of the cosine curve, which is minus the sine curve. It turns out a minus sine comes in because the cosine curve drops at the start.
And would you like to know this one? e to the x, which I will introduce in a lecture coming very soon, because it's the function of calculus. And the reason it's so terrific is, the connection between the function, whatever it is, whatever this number e is to whatever the xth power means, the slope is the same as the function, e to the x. That's amazing. As the function changes, the slope changes and they stay equal.
Really, my help is just to say, if you know those three, you're really off to a good start, plus the rules that you need. All right, now I'll tackle this particular one and say, what does slope mean? So I'm given the recorder that I have. This is function 1 here. This is function 1, the one I know. And I know it at every point.
If I only had the trip meter after an hour or two hours or three hours, well, calculus can't do the impossible. It can't know, if I only knew the distance reached after each hour, I couldn't tell what it did over that hour, how often you had to break, how often you accelerated. I could only find an average speed over that hour. That would be easy. So averages don't need calculus.
It's instant stuff, what happens at a moment, what is that speedometer reading at the moment x, say, x equal 1. What is the slope? Yeah, let me put in x equals 1 on this graph and x equals 2. And now x squared is going to be at height 1. If x is 1, then x squared is also 1. If x is 2, x squared will be 4. What's the average? Let me just come back one second to that average business. The average slope there would be, in a distance across of 1, I went up by how much? 3. I went up from 1 to 4. I have to do a subtraction. Differences, derivatives, that's the connection here.
So it's 4 minus 1. That is 3. So I would say the average is 3/1. But that's not calculus. Calculus is looking at the instant thing. And let me begin at this instant, at that point.
What does the slope look like to you at that point? At x equals 0, here's x equals 0, and here's y equals 0. We're very much 0. You see it's climbing, but at that moment, it's like it just started from a red light, whatever. The speed is 0 at that point. And I want to say the slope is 0. That's flat right there. That's flat.
If I continued the curve, if I continued the x squared curve for x negative, it would be the same as for x positive. Well, it doesn't look very the same. Let me improve that. It would start up the same way and be completely symmetric. Everybody sees that, at that 0 position, the curve has hit bottom.
Actually, this will be a major, major application of calculus. You identify the bottom of a curve by the fact that the slope is 0. It's not going up. It's not going down. It's 0 at that point. But now, what do I mean by slope at a point?
Here comes the new idea. If I go way over to 1, that's too much. I just want to stay near this point. I'll go over a little bit, and I call that little bit delta x. So that letter, delta, signals to our minds small, some small. And actually, probably smaller than I drew it there. And then, so what's the average? I'd like to just find the average speed, or average slope.
If I go over by delta x, and then how high do I go up? Well, the curve is y equals x squared. So how high is this? So the average is up first, divided by across. Across is our usual delta x. How far did it go up?
Well, if our curve is x squared and I'm at the point, delta x, then it's delta x squared. That's the average over the first piece, over short, over the first piece of the curve. Out is-- from here out to delta x. OK.
Now, again, that's still only an average, because delta x might have been short. I want to push it to 0. That's where calculus comes in, taking the limit of shorter and shorter and shorter pieces in order to zoom in on that instant, that moment, that spot where we're looking at the slope, and where we're expecting the answer is 0, in this case. And you see that the average, it happens to be especially simple. Delta x squared over delta x is just delta x.
So the average slope is extremely small. And I'll just complete that thought. So the instant slope-- instant slope at 0, at x equals 0, I let this delta x get smaller and smaller. I get the answer is 0, which is just what I expected it. And you could say, well, not too exciting. But it was an easy one to do. It was the first time that we actually went through the steps of computing.
This is a, like, a delta y. This is the delta x. Instead of 3/1, starting here I had delta x squared over delta x. That was easy to see. It was delta x. And if I move in closer, that average slope is smaller and smaller, and the slope at that instant is 0. No problem.
The travel, the climbing began from rest, but it picked up speed. The slope here is certainly not 0. We'll find that slope. We need now to find the slope at every point. OK. That's a good start.
Now I'm ready to find the slope at any point. Instead of just x equals 0, which we've now done, I better draw a new graph of the same picture, climbing up. Now I'm putting in a little climb at some point x here. I'm up at a height, x squared. I'm at that point on the climb. I'd like to know the slope there, at that point. How am I going to do it?
I will follow this as the central dogma of calculus, of differential calculus, function 1 to function 2. Take a little delta x, go as far as x plus delta x. That will take you to a higher point on the curve. That's now the point x plus delta x squared, because our curve is still y equals x squared in this nice, simple parabola. OK.
So now I you look at distance across and distance up. So delta y is the change up. Delta x is the across. And I have to put what is delta y. I have to write in, what is delta y? It's this distance up. It's x plus delta x squared. That's this height minus this height. I'm not counting this bit, of course. It's that that I want. That's the delta y, is this piece.
So it's up to this, subtract x squared. That's delta y. That's important. Now I divide by delta x. This is all algebra now. Calculus is going to come in a moment, but not yet.
For algebra, what do I do? I multiply this out. I see that thing squared. I remember that x is squared. And then I have this times this twice. 2x delta x's, and then I have delta x squared. And then I'm subtracting x squared. So that's delta y written out in full glory.
I wrote it out because now I can simplify by canceling the x squared. I'm not surprised. Now, in this case, I can actually do the division. Delta x is just there. So it leaves me with a 2x. Delta x over delta x is 1. And then here's a delta x squared over a delta x, so that leaves me with one delta x. As you get the hang of calculus, you see the important things is like this first order, delta x to the first power. Delta x squared, that, when divided by delta x, gives us this, which is going to disappear. That's the point.
This was the average over a short but still not instant range, distance. Now, what happens? Now dy/dx. So if this is short, short over short, this is darn short over darn short. That d is, well, it's too short to see. So I don't actually now try to separate a distance dy. This isn't a true division, because it's effectively 0/0. And you might say, well, 0/0, what's the meaning? Well, the meaning of 0/0, in this situation, is, I take the limit of this one, which does have a meaning, because those are true numbers. They're little numbers but they're numbers.
And this was this, so now here's the big step, leaving algebra behind, going to calculus in order to get what's happening at a point. I let delta x go to 0. And what is that? So delta y over delta x is this. What is the dy/dx? So in the limit, ah, it's not hard. Here's the 2x. It's there. Here's the delta x. In the limit, it disappears.
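For reference, the computation described here can be written compactly as:

$$ \frac{\Delta y}{\Delta x} \;=\; \frac{(x+\Delta x)^2 - x^2}{\Delta x} \;=\; 2x + \Delta x \;\longrightarrow\; \frac{dy}{dx} = 2x \quad \text{as } \Delta x \to 0. $$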
So the conclusion is that the derivative is 2x. So that's function two. That's function two here. That's the slope function. That's the speed function. Maybe I should draw it. Can I draw it above and then I'll put the board back up?
So here's a picture of function 2, the derivative, or the slope, which I was calling s. So that's the s function, against x. x is still the thing that's varying, or it could be t, or it could be whatever letter we've got. And the answer was 2x for this function.
So if I graph it, it starts at 0, and it climbs steadily with slope 2. So that's a graph of s of x. And for example-- yeah, so take a couple of points on that graph-- at x equals 0, the slope is 0. And we did that first. And we actually got it right. The slope is 0 at the start, at the bottom of the curve.
At some other point on the curve, what's the slope here? Ha, yeah, tell me the slope there. At that point on the curve, an average slope was 3/1, but that was the slope of this, like, you know-- sometimes called a chord. That's over a big jump of 1. Then I did it over a small jump of delta x, and then I let delta x go to 0, so it was an instant infinitesimal jump.
So the actual slope, the way to visualize it is that it's more like that. That's the line that's really giving the slope at that point. That's my best picture. It's not Rembrandt, but it's got it. And what is the slope at that point? Well, that's what our calculation was.
It found the slope at that point. And at the particular point, x equals 1, the height was 2. The slope is 2. The actual tangent line is only-- is there. You see? It's up. Oh, wait a minute. Yeah, well, the slope is 2. I don't know. This goes up to 3. It's not Rembrandt, but the math is OK.
So what have we done? We've taken the first small step and literally I could say small step, almost a play on words because that's the point, the step is so small-- to getting these great functions. Before I close this lecture, can I draw this pair, function 1 and function 2, and just see that the movement of the curves is what we would expect.
So let me, just for one more good example, great example, actually, is let me draw. Here goes x. In fact, maybe I already drew in the first letter, lecture that bit out to 90 degrees. Only if we want a nice formula, we better call that pi over 2 radians. And here's a graph of sine x. This is y. This is the function 1, sine x.
And what's function 2? What can we see about function 2? Again, x. We see a slope. This is not the same as x squared. This starts with a definite slope. And it turns out this will be one of the most important limits we'll find.
We'll discover that the first little delta x, which goes up by sine of delta x, has a slope that gets closer and closer to 1. Good. Luckily, cosine does start at 1, so we're OK so far. Now the slope is dropping.
And what's the slope at the top of the sine curve? It's a maximum. But we identify that by the fact that the slope is 0, because we know the thing is going to go down here and go somewhere else. The slope there is 0. The tangent line is horizontal. And that is that point. It passes through 0. The slope is dropping. So this is the slope curve.
And the great thing is that it's the cosine of x. And what I'm doing now is not proving this fact. I'm not doing my delta x's. That's the job I have to do once, and it won't be today, but I only have to do it once. But today, I'm just saying it makes sense the slope is dropping. In that first part, I'm going up, so the slope is positive but the slope is dropping.
And then, at this point, it hits 0. And that's this point. And then the slope turns negative. I'm falling. So the slope goes negative, and actually it follows the cosine. So I go along here to that point, and then I can continue on to this point where it bottoms out again and then starts up.
So where is that on this curve? Well, I'd better draw a little further out. This bottom here would be the-- This is our pi/2. This is our pi, 180 degrees, everybody would say. So what's happening on that curve? The function is dropping, and actually it's dropping its fastest. It's dropping its fastest at that point, which is the slope is minus 1.
And then the slope is still negative, but it's not so negative, and it comes back up to 0 at 3 pi/2. So this is the point, 3 pi/2. And this has come back to 0 at that point. And then it finishes the whole thing at 2 pi. This finishes up here back at 1 again. It's climbing.
All right, climbing, dropping, faster, slower, maximum, minimum, those are the words that make derivatives important and useful to learn. And we've done, in detail, the first of our great list of functions. Thanks.
FEMALE SPEAKER: This has been a production of MIT OpenCourseWare and Gilbert Strang. Funding for this video was provided by the Lord Foundation. To help OCW continue to provide free and open access to MIT courses, please make a donation at ocw.mit.edu/donate. | http://ocw.mit.edu/resources/res-18-005-highlights-of-calculus-spring-2010/highlights_of_calculus/big-picture-derivatives/ | 13 |
61 | Rotation Curve of Galaxy:
Dynamical studies of the Universe began in the late 1950's. This meant that instead of just looking at and classifying galaxies, astronomers began to study their internal motions (rotation for disk galaxies) and their interactions with each other, as in clusters. The question soon arose of whether we were observing the mass or the light in the Universe. Most of what we see in galaxies is starlight. So clearly, the brighter the galaxy, the more stars, and therefore the more massive the galaxy. By the early 1960's, there were indications that this was not always true, a discrepancy called the missing mass problem.
The first indications that there is a significant fraction of missing matter in the Universe were from studies of the rotation of our own Galaxy, the Milky Way. The orbital period of the Sun around the Galaxy gives us a mean mass for the amount of material inside the Sun's orbit. But a detailed plot of the orbital speed of the Galaxy as a function of radius reveals the distribution of mass within the Galaxy. The simplest type of rotation is wheel rotation, shown below.
Rotation following Kepler's 3rd law is shown above as planet-like or differential rotation. Notice that the orbital speed falls off as you go to greater radii within the Galaxy. This is called a Keplerian rotation curve.
To determine the rotation curve of the Galaxy, stars are not used due to interstellar extinction. Instead, 21-cm maps of neutral hydrogen are used. When this is done, one finds that the rotation curve of the Galaxy stays flat out to large distances, instead of falling off as in the figure above. This means that the mass of the Galaxy increases with increasing distance from the center.
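In rough terms (treating the orbits as circular and ignoring the detailed mass distribution), the enclosed mass follows from the orbital speed:

$$ \frac{v^2}{r} = \frac{G\,M(r)}{r^2} \quad\Longrightarrow\quad M(r) = \frac{v^2 r}{G}, $$

so a rotation speed v that stays flat with radius implies an enclosed mass M(r) that grows in proportion to r.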
The surprising thing is there is very little visible matter beyond the Sun's orbital distance from the center of the Galaxy. So, the rotation curve of the Galaxy indicates a great deal of mass, but there is no light out there. In other words, the halo of our Galaxy is filled with a mysterious dark matter of unknown composition and type.
Most galaxies occupy groups or clusters with membership ranging from 10 to hundreds of galaxies. Each cluster is held together by the gravity from each galaxy. The more mass, the higher the velocities of the members, and this fact can be used to test for the presence of unseen matter.
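A rough sketch of the idea is a virial-type estimate (the exact numerical factor depends on the assumed cluster model):

$$ M_{\text{cluster}} \;\sim\; \frac{\sigma^2 R}{G}, $$

where σ is the typical velocity spread of the member galaxies and R is the size of the cluster; measuring σ and R therefore weighs the cluster, whether its matter shines or not.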
When these measurements were performed, it was found that up to 95% of the mass in clusters is not seen, i.e. dark. Since the physics of the motions of galaxies is so basic (pure Newtonian physics), there is no escaping the conclusion that a majority of the matter in the Universe has not been identified, and that the matter around us that we call `normal' is special. The question that remains is whether dark matter is baryonic (normal) or a new substance, non-baryonic.
Exactly how much of the Universe is in the form of dark matter is a mystery and difficult to determine, obviously because it's not visible. It has to be inferred by its gravitational effects on the luminous matter in the Universe (stars and gas) and is usually expressed as the mass-to-luminosity ratio (M/L). A high M/L indicates lots of dark matter; a low M/L indicates that most of the matter is in the form of baryonic matter: stars and stellar remnants plus gas.
An important point in the study of dark matter is how it is distributed. If it is distributed like the luminous matter in the Universe, then most of it is in galaxies. However, studies of M/L over a range of scales show that dark matter becomes more dominant on larger scales.
Most importantly, on very large scales of 100 Mpc (Mpc = megaparsec, one million parsecs; kpc = 1000 parsecs), the amount of dark matter inferred is near the value needed to close the Universe. Thus, the dark matter problem is important for two reasons: first, to determine the nature of dark matter (is it a new, undiscovered form of matter?), and second, to determine whether the amount of dark matter is sufficient to close the Universe.
Baryonic Dark Matter:
We know of the presence of dark matter from dynamical studies. But we also know from the abundance of light elements that there is a problem in our understanding of the fraction of the mass of the Universe that is in normal matter or baryons. The fraction of light elements (hydrogen, helium, lithium, boron) indicates that the density of the Universe in baryons is only 2 to 4% of what we measure as the observed density.
It is not too surprising to find that at least some of the matter in the Universe is dark since it requires energy to observe an object, and most of space is cold and low in energy. Can dark matter be some form of normal matter that is cold and does not radiate any energy? For example, dead stars?
Once a normal star has used up its hydrogen fuel, it usually ends its life as a white dwarf star, slowly cooling to become a black dwarf. However, the timescale to cool to a black dwarf is thousands of times longer than the age of the Universe. High mass stars will explode and their cores will form neutron stars or black holes. However, this is rare and we would need 90% of all stars to go supernova to explain all of the dark matter.
Another avenue of thought is to consider low mass objects. Stars that are very low in mass fail to produce their own light by thermonuclear fusion. Thus, many, many brown dwarf stars could make up the dark matter population. Or, even smaller, numerous Jupiter-sized planets, or even plain rocks, would be completely dark outside the illumination of a star. The problem here is that making up the mass of all the dark matter requires huge numbers of brown dwarfs, and even more Jupiters or rocks. We do not find many of these objects nearby, so to presume they exist in the dark matter halos is unsupported.
Non-Baryonic Dark Matter:
An alternative idea is to consider forms of dark matter not composed of quarks or leptons, rather made from some exotic material. If the neutrino has mass, then it would make a good dark matter candidate since it interacts weakly with matter and, therefore, is very hard to detect. However, neutrinos formed in the early Universe would also have mass, and that mass would have a predictable effect on the clustering of galaxies, which is not seen.
Another suggestion is that some new particle exists similar to the neutrino, but more massive and, therefore, more rare. This Weakly Interacting Massive Particle (WIMP) would escape detection in our modern particle accelerators, but no other evidence of its existence has been found.
The more bizarre proposed solutions to the dark matter problem require the use of little understood relics or defects from the early Universe. One school of thought believes that topological defects may have appeared during the phase transition at the end of the GUT era. These defects would have had a string-like form and, thus, are called cosmic strings. Cosmic strings would contain the trapped remnants of the earlier dense phase of the Universe. Being high density, they would also be high in mass but are only detectable by their gravitational radiation.
Lastly, the dark matter problem may be an illusion. Rather than missing matter, gravity may operate differently on scales the size of galaxies. This would cause us to overestimate the amount of mass, when in fact weaker gravity is to blame. There is, however, no evidence of modified gravity in our laboratory experiments to date.
Current View of Dark Matter:
The current observations and estimates of dark matter are that 20% of dark matter is probably in the form of massive neutrinos, even though that mass is uncertain. Another 5 to 10% is in the form of stellar remnants and low-mass brown dwarfs. However, the combination of these mixtures only makes up about 30% of the mass necessary to close the Universe.
The rest of dark matter is called CDM (cold dark matter), of unknown origin, but probably cold and heavy. The combination of all these mixtures only makes up 20 to 30% of the mass necessary to close the Universe. Thus, the Universe appears to be open, i.e. Ω_M is 0.3.
With the convergence of our measurements of Hubble's constant and Ω_M, the end appeared in sight for the determination of the geometry and age of our Universe. However, all was thrown into turmoil recently with the discovery of dark energy. Dark energy is implied by the fact that the Universe appears to be accelerating, rather than decelerating, as measured by distant supernovae.
This new observation implies that something else is missing from our understanding of the dynamics of the Universe; in math terms, this means that something is missing from Friedmann's equation. That missing something is the cosmological constant, Λ.
In modern cosmology, the different classes of Universes (open, flat or closed) are known as Friedmann universes and described by a simple equation:
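In one common convention (with ρ the average mass density, G Newton's gravitational constant, k the curvature constant, c the speed of light, and Λ the cosmological constant), the Friedmann equation reads:

$$ H^2 \;=\; \left(\frac{\dot R}{R}\right)^2 \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{R^2} \;+\; \frac{\Lambda c^2}{3}. $$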
In this equation, `R' represents the scale factor of the Universe (think of it as the radius of the Universe in 4D spacetime), and H is Hubble's constant, i.e. how fast the Universe is expanding. Everything else in this equation is a constant, i.e. to be determined from observations. These observables can be broken down into three parts: gravity (matter density), curvature, and pressure or negative energy given by the cosmological constant.
Historically, we assumed that gravity was the only important force in the Universe, and that the cosmological constant was zero. Thus, if we measure the density of matter, then we could extract the curvature of the Universe (and its future history) as a solution to the equation. New data has indicated that a negative pressure, or dark energy, does exist and we no longer assume that the cosmological constant is zero.
Each of these parameters can close the Universe in terms of turn-around and collapse. Instead of thinking about the various constants as raw numbers, we prefer to consider the ratio of each parameter to the critical value that separates open and closed Universes. For example, if the density of matter exceeds the critical value, the Universe is closed. We refer to these ratios as Ω (subscript M for matter, k for curvature, Λ for the cosmological constant). For various reasons due to the physics of the Big Bang, the sum of the various Ω's must equal one. And for reasons we will see in a later lecture, the curvature term is expected to be zero, allowing the rest to be shared between matter and the cosmological constant.
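Written out with the standard definitions (using the critical density ρ_crit that just closes the Universe):

$$ \rho_{\text{crit}} = \frac{3H^2}{8\pi G}, \qquad \Omega_M = \frac{\rho}{\rho_{\text{crit}}}, \qquad \Omega_\Lambda = \frac{\Lambda c^2}{3H^2}, \qquad \Omega_M + \Omega_k + \Omega_\Lambda = 1. $$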
The search for the value of the matter density is a much more difficult undertaking. The luminous mass of the Universe is tied up in stars. Stars are what we see when we look at a galaxy, and it is fairly easy to estimate the amount of mass tied up in stars, gas, planets and assorted rocks. This gives an estimate of what is called the baryonic mass of the Universe, i.e. all the stuff made of baryons (protons and neutrons). When these numbers are calculated, it is found that Ω for baryons is only 0.02, a very open Universe. However, when we examine the motion of objects in the Universe, we quickly realize that most of the mass of the Universe is not seen, i.e. dark matter, which makes this estimate of Ω much too low. So we must account for this dark matter in our estimate.
Einstein first introduced Λ in his original equations to produce a static Universe. However, until the supernova data, there was nothing to support its existence in other than a mathematical way.
The implication here is that there is some sort of pressure in the fabric of the Universe that is pushing the expansion faster. A pressure is usually associated with some sort of energy, which we have named dark energy. Like dark matter, we do not know its origin or characteristics, only that it produces a contribution of 0.7 to Ω, called Ω_Λ, so that matter plus dark energy equals an Ω of 1, a flat Universe.
With a cosmological constant, the possible types of Universes are numerous. Almost any kind of massive or light, open or closed curvature, open or closed history is possible. Also, with a high Λ, the Universe could race away.
Fortunately, observations, such as the SN data and measurements of Ω, allow us to constrain the possible models for the Universe. In terms of the Ω values for k (curvature), M (mass) and Λ (where the critical values are Ω=1), the new cosmology is given by the following diagram.
SN data gives Ω_Λ=0.7 and Ω_M=0.3. This results in Ω_k=0, or a flat curvature. This is sometimes referred to as the Benchmark Model, which gives an age of the Universe of 12.5 billion years. | http://abyss.uoregon.edu/~js/ast123/lectures/lec16.html | 13
56 | A monochromator is an optical device that transmits a mechanically selectable narrow band of wavelengths of light or other radiation chosen from a wider range of wavelengths available at the input. The name is from the Greek roots mono-, single, and chroma, colour, and the Latin suffix -ator, denoting an agent.
A device that can produce monochromatic light has many uses in science and in optics because many optical characteristics of a material are dependent on color. Although there are a number of useful ways to produce pure colors, there are not as many other ways to easily select any pure color from a wide range. See below for a discussion of some of the uses of monochromators.
A monochromator can use either the phenomenon of optical dispersion in a prism, or that of diffraction using a diffraction grating, to spatially separate the colors of light. It usually has a mechanism for directing the selected color to an exit slit. Usually the grating or the prism is used in a reflective mode. A reflective prism is made by making a right triangle prism (typically, half of an equilateral prism) with one side mirrored. The light enters through the hypotenuse face and is reflected back through it, being refracted twice at the same surface. The total refraction, and the total dispersion, is the same as would occur if an equilateral prism were used in transmission mode.
The dispersion or diffraction is only controllable if the light is collimated, that is if all the rays of light are parallel, or practically so. A source, like the sun, which is very far away, provides collimated light. Newton used sunlight in his famous experiments. In a practical monochromator however, the light source is close by, and an optical system in the monochromator converts the diverging light of the source to collimated light. Although some monochromator designs do use focusing gratings that do not need separate collimators, most use collimating mirrors. Reflective optics are preferred because they do not introduce dispersive effects of their own.
Czerny-Turner monochromator
In the common Czerny-Turner design, the broad band illumination source (A) is aimed at an entrance slit (B). The amount of light energy available for use depends on the intensity of the source in the space defined by the slit (width * height) and the acceptance angle of the optical system. The slit is placed at the effective focus of a curved mirror (the collimator, C) so that the light from the slit reflected from the mirror is collimated (focused at infinity). The collimated light is diffracted from the grating (D) and then is collected by another mirror (E) which refocuses the light, now dispersed, on the exit slit (F). In a prism monochromator, a reflective prism takes the place of the diffraction grating, in which case the light is refracted by the prism.
At the exit slit, the colors of the light are spread out (in the visible this shows the colors of the rainbow). Because each color arrives at a separate point in the exit slit plane, there are a series of images of the entrance slit focused on the plane. Because the entrance slit is finite in width, parts of nearby images overlap. The light leaving the exit slit (G) contains the entire image of the entrance slit of the selected color plus parts of the entrance slit images of nearby colors. A rotation of the dispersing element causes the band of colors to move relative to the exit slit, so that the desired entrance slit image is centered on the exit slit. The range of colors leaving the exit slit is a function of the width of the slits. The entrance and exit slit widths are adjusted together.
Stray light
The ideal transfer function of such a monochromator is a triangular shape. The peak of the triangle is at the nominal wavelength selected. The intensity of the nearby colors then decreases linearly on either side of this peak until some cutoff value is reached, where the intensity stops decreasing. This is called the stray light level. The cutoff level is typically about one thousandth of the peak value, or 0.1%.
Spectral bandwidth
Spectral bandwidth is defined as the width of the triangle at the points where the light has reached half the maximum value (Full Width at Half Maximum, abbreviated as FWHM). A typical spectral bandwidth might be one nanometer; however, different values can be chosen to meet the need of analysis. A narrower bandwidth does improve the resolution, but it also decreases the signal-to-noise ratio.
The dispersion of a monochromator is characterized as the width of the band of colors per unit of slit width, 1 nm of spectrum per mm of slit width for instance. This factor is constant for a grating, but varies with wavelength for a prism. If a scanning prism monochromator is used in a constant bandwidth mode, the slit width must change as the wavelength changes. Dispersion depends on the focal length, the grating order and grating resolving power.
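As a rough sketch for a grating instrument (exact factors depend on the mounting), the reciprocal linear dispersion at the exit slit is approximately

$$ \frac{d\lambda}{dx} \;\approx\; \frac{d\,\cos\beta}{m\,f}, $$

where d is the groove spacing, β the diffraction angle, m the grating order, and f the focal length of the focusing mirror; finer grooves, a higher order, or a longer focal length all spread the spectrum out more (fewer nanometers per millimeter of slit).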
Wavelength range
A monochromator's adjustment range might cover the visible spectrum and some part of both or either of the nearby ultraviolet (UV) and infrared (IR) spectra, although monochromators are built for a great variety of optical ranges, and to a great many designs.
Double monochromators
It is common for two monochromators to be connected in series, with their mechanical systems operating in tandem so that they both select the same color. This arrangement is not intended to improve the narrowness of the spectrum, but rather to lower the cutoff level. A double monochromator may have a cutoff about one millionth of the peak value, the product of the two cutoffs of the individual sections. The intensity of the light of other colors in the exit beam is referred to as the stray light level and is the most critical specification of a monochromator for many uses. Achieving low stray light is a large part of the art of making a practical monochromator.
Diffraction gratings and blazed gratings
Grating monochromators disperse ultraviolet, visible, and infrared radiation typically using replica gratings, which are manufactured from a master grating. A master grating consists of a hard, optically flat, surface that has a large number of parallel and closely spaced grooves. The construction of a master grating is a long, expensive process because the grooves must be of identical size, exactly parallel, and equally spaced over the length of the grating (3 to 10 cm). A grating for the ultraviolet and visible region typically has 300-2000 grooves/mm, however 1200-1400 grooves/mm is most common. For the infrared region, gratings usually have 10-200 grooves/mm. When a diffraction grating is used, care must be taken in the design of broadband monochromators because the diffraction pattern has overlapping orders. Sometimes extra, broadband filters are inserted in the optical path to limit the width of the diffraction orders so they do not overlap. Sometimes this is done by using a prism as one of the monochromators of a dual monochromator design.
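The overlapping orders mentioned above come from the grating equation (standard form, with the angles of incidence α and diffraction β measured from the grating normal):

$$ m\lambda \;=\; d\,(\sin\alpha + \sin\beta), \qquad m = 0, \pm 1, \pm 2, \ldots $$

so, for example, light of wavelength λ in first order leaves the grating at the same angle as light of wavelength λ/2 in second order, which is why order-sorting filters or a prism pre-disperser are needed.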
The original high-resolution diffraction gratings were ruled. The construction of high-quality ruling engines was a large undertaking (as well as exceedingly difficult, in past decades), and good gratings were very expensive. The slope of the triangular groove in a ruled grating is typically adjusted to enhance the brightness of a particular diffraction order. This is called blazing a grating. Ruled gratings have imperfections that produce faint "ghost" diffraction orders that may raise the stray light level of a monochromator. A later photolithographic technique allows gratings to be created from a holographic interference pattern. Holographic gratings have sinusoidal grooves and so are not as bright, but have lower scattered light levels than blazed gratings. Almost all the gratings actually used in monochromators are carefully made replicas of ruled or holographic master gratings.
Prisms have higher dispersion in the UV region. Prism monochromators are favored in some instruments that are principally designed to work in the far UV region. Most monochromators use gratings, however. Some monochromators have several gratings that can be selected for use in different spectral regions. A double monochromator made by placing a prism and a grating monochromator in series typically does not need additional bandpass filters to isolate a single grating order.
Focal length
The narrowness of the band of colors that a monochromator can generate is related to the focal length of the monochromator collimators. Using a longer focal length optical system also unfortunately decreases the amount of light that can be accepted from the source. Very high resolution monochromators might have a focal length of 2 meters. Building such monochromators requires exceptional attention to mechanical and thermal stability. For many applications a monochromator of about 0.4 meter focal length is considered to have excellent resolution. Many monochromators have a focal length less than 0.1 meter.
Slit height
The most common optical system uses spherical collimators and thus contains optical aberrations that curve the field where the slit images come to focus, so that slits are sometimes curved instead of simply straight, to approximate the curvature of the image. This allows taller slits to be used, gathering more light, while still achieving high spectral resolution. Some designs take another approach and use toroidal collimating mirrors to correct the curvature instead, allowing higher straight slits without sacrificing resolution.
Wavelength vs energy
Monochromators are often calibrated in units of wavelength. Uniform rotation of a grating produces a sinusoidal change in wavelength, which is approximately linear for small grating angles, so such an instrument is easy to build. Many of the underlying physical phenomena being studied are linear in energy though, and since wavelength and energy have a reciprocal relationship, spectral patterns that are simple and predictable when plotted as a function of energy are distorted when plotted as a function of wavelength. Some monochromators are calibrated in units of reciprocal centimeters or some other energy units, but the scale may not be linear.
Dynamic range
A spectrophotometer built with a high quality double monochromator can produce light of sufficient purity and intensity that the instrument can measure a narrow band of optical attenuation of about one million fold (6 AU, Absorbance Units).
Monochromators are used in many optical measuring instruments and in other applications where tunable monochromatic light is wanted. Sometimes the monochromatic light is directed at a sample and the reflected or transmitted light is measured. Sometimes white light is directed at a sample and the monochromator is used to analyze the reflected or transmitted light. Two monochromators are used in many fluorometers; one monochromator is used to select the excitation wavelength and a second monochromator is used to analyze the emitted light.
An automatic scanning spectrometer includes a mechanism to change the wavelength selected by the monochromator and to record the resulting changes in the measured quantity as a function of the wavelength.
If an imaging device replaces the exit slit, the result is the basic configuration of a spectrograph. This configuration allows the simultaneous analysis of the intensities of a wide band of colors. Photographic film or an array of photodetectors can be used, for instance to collect the light. Such an instrument can record a spectral function without mechanical scanning, although there may be tradeoffs in terms of resolution or sensitivity for instance.
An absorption spectrophotometer measures the absorption of light by a sample as a function of wavelength. Sometimes the result is expressed as percent transmission and sometimes it is expressed as the inverse logarithm of the transmission. The Beer-Lambert law relates the absorption of light to the concentration of the light-absorbing material, the optical path length, and an intrinsic property of the material called molar absorptivity. According to this relation the decrease in intensity is exponential in concentration and path length. The decrease is linear in these quantities when the inverse logarithm of transmission is used. The old nomenclature for this value was Optical Density (OD), current nomenclature is Absorbance Units (AU). One AU is a tenfold reduction in light intensity. Six AU is a millionfold reduction.
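In symbols, the Beer-Lambert relation can be written as

$$ T = \frac{I}{I_0} = 10^{-\varepsilon \ell c}, \qquad A = -\log_{10} T = \varepsilon \ell c, $$

where ε is the molar absorptivity, ℓ the path length, and c the concentration; 1 AU therefore means T = 0.1, and 6 AU means T = 10^{-6}.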
Absorption spectrophotometers often contain a monochromator to supply light to the sample. Some absorption spectrophotometers have automatic spectral analysis capabilities.
Absorption spectrophotometers have many everyday uses in chemistry, biochemistry, and biology. For example, they are used to measure the concentration or change in concentration of many substances that absorb light. Critical characteristics of many biological materials, many enzymes for instance, are measured by starting a chemical reaction that produces a color change that depends on the presence or activity of the material being studied. Optical thermometers have been created by calibrating the change in absorbance of a material against temperature. There are many other examples.
Spectrophotometers are used to measure the specular reflectance of mirrors and the diffuse reflectance of colored objects. They are used to characterize the performance of sunglasses, laser protective glasses, and other optical filters. There are many other examples.
In the UV, visible and near IR, absorbance and reflectance spectrophotometers usually illuminate the sample with monochromatic light. In the corresponding IR instruments, the monochromator is usually used to analyze the light coming from the sample.
Monochromators are also used in optical instruments that measure other phenomena besides simple absorption or reflection, wherever the color of the light is a significant variable. Circular dichroism spectrometers contain a monochromator, for example.
Lasers produce light which is much more monochromatic than the optical monochromators discussed here, but only some lasers are easily tunable, and these lasers are not as simple to use.
Monochromatic light allows for the measurement of the quantum efficiency (QE) of an imaging device (e.g. a CCD or CMOS imager). Light from the exit slit is passed either through diffusers or an integrating sphere onto the imaging device while a calibrated detector simultaneously measures the light. Coordination of the imager, calibrated detector, and monochromator allows one to calculate the number of carriers (electrons or holes) generated per photon of a given wavelength, i.e. the QE.
See also
- Atomic absorption spectrometers use light from hollow cathode lamps that emit light generated by atoms of a specific element, for instance iron or lead or calcium. The available colors are fixed, but are very monochromatic and are excellent for measuring the concentration of specific elements in a sample. These instruments behave as if they contained a very high quality monochromator, but their use is limited to analyzing the elements they are equipped for.
- A major IR measurement technique, Fourier Transform IR, or FTIR, does not use a monochromator. Instead, the measurement is performed in the time domain, using the field autocorrelation technique.
- Wien filter - a technique for producing "monochromatic" electron beams, where all the electrons have nearly the same energy
| http://en.wikipedia.org/wiki/Monochromator | 13
53 | The surface area of a solid is the area of each surface added together.
There are few formulas to memorize (w00t!). The keys to success: make sure that you don't forget a surface and that you have the correct measurements.
Surface area is often used in construction. If you need to paint any 3‐D object you need to know how much paint to buy.
If we "unfold" the box, we get something that is called – in the geometry world – a "net".
Using the net we can see that there are six rectangular surfaces.
|Side 1||4 x 8||32 cm2|
|Side 2||8 x 6||48 cm2|
|Side 3||4 x 8||32 cm2|
|Side 4||8 x 6||48 cm2|
|Side 5||4 x 6||24 cm2|
|Side 6||4 x 6||24 cm2|
If we study the table we will see that there are two of each surface. That's because the top and bottom of a rectangular prism are congruent, as are the two sides, and the front and back.
If we break down our triangular prism into a net, it looks like this:
In a triangular prism there are five sides, two triangles and three rectangles.
|Side 1||½(9 × 4)||18 cm2|
|Side 2||½(9 × 4)||18 cm2|
|Side 3||4.5 x 8.1||36.45 cm2|
|Side 4||9 x 8.1||72.9 cm2|
|Side 5||7.2 x 8.1||58.32 cm2|
Imagine a can of soup.
If we use a can opener and cut off the top and bottom, and unroll the middle section, we would get:
Now you can see that we have two congruent circles, each with a radius of 4.4 cm and a rectangle with a width of 7.2 cm. The only measurement we are missing is the length. Remember when we unrolled the center section. Well, its length was wrapped around the circles, so it's the perimeter of the circle, i.e., the circumference. Therefore, we must find the circumference of a circle with radius 4.4 cm.
Circumference of a circle = dπ = (4.4 × 2)π = 8.8π ≈ 27.63 cm
Now, we can solve for surface area:
|Circle 1||4.42 × π||≈ 60.79 cm2|
|Circle 2||4.42 × π||≈ 60.79 cm2|
|Center||27.63 x 7.2||≈ 198.94 cm2|
|TOTAL||≈ 320.52 cm2|
That great mathematician Archimedes, the one who gave us the formula for the volume of a sphere, spent many hours plugging away by candlelight to bring you this: the surface area of a sphere is 4 times the area of the center circle.
To find the surface area of a cone we need to find the area of the circular base and the area of the curved section. This one involves a new measurement, s, which is the length of the slanted part.
If you take apart the cone, you get two surfaces, the circular base and the curved sides. The area of the base is just πr2, and the area of the curved section is πrs.
Look Out: surface area is only two-dimensional and is expressed as units squared, not units cubed. This is because we are only dealing with the flat surfaces, not the inside space.
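To double-check the arithmetic in the examples, here is a small Python sketch of the formulas above (the function names are our own):

```python
import math

def box(l, w, h):
    # six rectangles, in congruent pairs
    return 2 * (l * w + w * h + l * h)

def cylinder(r, h):
    # two circles plus the unrolled rectangle (circumference x height)
    return 2 * math.pi * r ** 2 + 2 * math.pi * r * h

def sphere(r):
    # 4 times the area of the center circle
    return 4 * math.pi * r ** 2

def cone(r, s):
    # circular base plus curved side (s is the slant length)
    return math.pi * r ** 2 + math.pi * r * s

print(round(box(4, 8, 6), 2))        # the 4 x 8 x 6 cm box: 208.0 cm^2
print(round(cylinder(4.4, 7.2), 2))  # the soup can: about 320.7 cm^2
                                     # (320.52 above because pi was rounded to 3.14)
```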
This cube has six congruent faces, each with a length and width of 3 cm.
Area of one face = 3 cm x 3 cm = 9 cm2
Surface area = 6 sides x 9 cm2 = 54 cm2
This trapezoidal prism has six sides, two congruent trapezoids and four rectangles.
This cylinder has two circles (each with a radius of 2 cm) and one rectangle (with a length of 5.8 cm and a width the circumference of the circles).
This pyramid is made up of four equilateral triangles.
Here we just need to find the area of one triangle and multiply it by four sides:
Area of 1 triangle = ½bh = ½(8 x 6.9) = 27.6 cm2
Now, multiply that by four sides, and we're done.
The diameter of this sphere is 11.9 cm, so the radius is half of that, 5.95 cm.
The area of the circular base is equal to:
Find the surface area of this sphere:
Find the surface area of this cone. | http://www.shmoop.com/basic-geometry/surface-area-help.html | 13 |
70 | Elements of Feedback Control
3.1 OBJECTIVES AND INTRODUCTION
1. Know the definition of the following terms: input, output, feedback, error, open loop, and closed loop.
2. Understand the principle of closed-loop control.
3. Understand how the following processes are related to the closed-loop method of control: position feedback, rate feedback, and acceleration feedback.
4. Understand the principle of damping and its effect upon system operation.
5. Be able to explain the advantages of closed-loop control in a weapon system.
6. Be able to model simple first-order systems mathematically.
The elements of feedback control theory may be applied to a wide range of physical systems. However, in engineering the term control system is usually applied only to those systems whose major function is to dynamically or actively command, direct, or regulate themselves or other systems. We will further restrict our discussion to weapons control systems, which encompass the series of measurements and computations beginning with target detection and ending with target interception.
3.2 CONTROL SYSTEM TERMINOLOGY
To discuss control systems, we must first define several key terms.
- Input. Stimulus or excitation applied to a control system from an external source, usually in order to produce a specified response from the system.
- Output. The actual response obtained from the system.
- Feedback. That portion of the output of a system that is returned to modify the input and thus serve as a performance monitor for the system.
- Error. The difference between the input stimulus and the output response. Specifically, it is the difference between the input and the feedback.
A very simple example of a feedback control system is the thermostat. The input is the temperature that is initially set
into the device. Comparison is then made between the input and the temperature of the outside world. If the two are different, an error results and an output is produced that activates a heating or cooling device. The comparator within the thermostat continually samples the ambient temperature, i.e., the feedback, until the error is zero; the output then turns off the heating or cooling device. Figure 3-1 is a block diagram of a simple feedback control system.
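As a rough illustration of the loop just described, here is a minimal Python sketch of a thermostat-style on/off controller. The room model, deadband, and step sizes are invented for illustration and are not part of the original text.

```python
def simulate_thermostat(setpoint, ambient, steps=20):
    """Repeatedly compare feedback (measured temperature) with the input (setpoint)."""
    for _ in range(steps):
        error = setpoint - ambient                    # error = input - feedback
        heater_on = error > 0.5                       # on/off output with a small deadband
        ambient += (1.0 if heater_on else 0.0) - 0.3  # crude room model: heating vs. heat loss
    return ambient

print(round(simulate_thermostat(20.0, 15.0), 1))  # hovers near the 20-degree setpoint
```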
Other examples are:
(1) Aircraft rudder control system
(2) Gun or missile director
(3) Missile guidance system
(4) Laser-guided projectiles
(5) Automatic pilot
3.3 CLOSED AND OPEN-LOOP SYSTEMS
Feedback control systems employed in weapons systems are classified as closed-loop control systems. A closed-loop system is one in which the control action is dependent on the output of the system. It can be seen from figure 3-1 and the previous description of the thermostat that these represent examples of closed-loop control systems. Open-loop systems are independent of the output.
3.3.1 Characteristics of Closed-Loop Systems
The basic elements of a feedback control system are shown in figure 3-1. The system measures the output and compares the measurement with the desired value of the output as prescribed by the input. It uses the error (i.e., the difference between the actual output and desired output) to change the actual output and to bring it into closer correspondence with the desired value.
Since arbitrary disturbances and unwanted fluctuations can occur at various points in the system, a feedback control system must be able to reject or filter out these fluctuations and perform its task with prescribed accuracies, while producing as faithful a representation of the desired output as feasible. This function of filtering and smoothing is achieved by various electrical and mechanical components, gyroscopic devices, accelerometers, etc., and by using different types of feedback. Position feedback is that type of feedback employed in a system in which the output is either a linear distance or an angular displacement, and a portion of the output is returned or fed back to the input. Position feedback is essential in weapons control systems and is used to make the output exactly follow the input. For example: if, in a missile-launcher control system, the position feedback were lost, the system response to an input signal to turn clockwise 10° would be a continuous turning in the clockwise direction, rather than a matchup of the launcher position with the input order.
Motion smoothing by means of feedback is accomplished by the use of rate and acceleration feedback. In the case of rate (velocity) feedback, a portion of the output displacement is differentiated and returned so as to restrict the velocity of the output. Acceleration feedback is accomplished by differentiating a portion of the output velocity, which when fed back serves as an additional restriction on the system output. The result of both rate and acceleration feedback is to aid the system in achieving changes in position without overshoot and oscillation.
The most important features that negative feedback imparts to a control system are:
(1) Increased accuracy--An increase in the system's ability to reproduce faithfully in the output that which is dictated by an input.
(2) Reduced sensitivity to disturbance--When fluctuations in the relationship of system output to input caused by changes within the system are reduced. The values of system components change constantly throughout their lifetime, but by using the self-correcting aspect of feedback, the effects of these changes can be minimized.
(3) Smoothing and filtering--When the undesired effects of noise and distortion within the system are reduced.
(4) Increased bandwidth--The bandwidth of any system is defined as that range of frequencies or changes to the input to which the system will respond satisfactorily; feedback widens this range.
3.3.2 Block Diagrams
Because of the complexity of most control systems, a shorthand pictorial representation of the relationship between input and output was developed. This representation is commonly called the block diagram. Control systems are made up of various combinations of the following basic blocks.
Element. The simplest representation of system components. It is a labeled block whose transfer function (G) is the output divided by the input.
Summing Point. A device to add or subtract the value of two or more signals.
Splitting Point. A point where the entering variable is to be transmitted identically to two points in the diagram. It is sometimes referred to as a "take off point."
Control or Feed Forward Elements (G). Those components directly between the controlled output and the referenced input.
Reference variable or Input (r). An external signal applied to a control system to produce the desired output.
Feedback (b). A signal determined by the output, as modified by the feedback elements, used in comparison with the input signal.
Controlled Output (c). The variable (temperature, position, velocity, shaft angle, etc.) that the system seeks to guide or regulate.
Error Signal (e). The algebraic sum of the reference input and the feedback.
Feedback Elements (H). Those components required to establish the desired feedback signal by sensing the controlled output.
Figure 3-3 is a block diagram of a simple feedback control system using the components described above.
In the simplified approach taken, the blocks are filled with values representative of component values. The output (c) can be expressed as the product of the error (e) and the control element (G).
c = eG (3-1)
Error is also the combination of the input (r) and the feedback (b).
e = r - b (3-2)
But feedback is the product of the output and of the feedback element (H).
b = cH (3-3)
Hence, by substituting equation (3-3) into equation (3-2)
e = r - cH
and from equation (3-1)
e = c/G
c/G = r - cH (3-4)
c = Gr - cGH
c + cGH = Gr
c = Gr / (1 + GH) (3-5)
It has then been shown that figure 3-3 can be reduced to an equivalent simplified block diagram with transfer function G / (1 + GH), shown below.
c = rG / (1 + GH) (3-6)
In contrast to the closed loop-system, an open-loop system does not monitor its own output, i.e., it contains no feedback loop. A simple open-loop system is strictly an input through a control element. In this case:
c = rG
The open-loop system does not have the ability to compensate for input fluctuations or control element degradation.
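The practical difference can be seen numerically. The short Python sketch below compares the open-loop output c = rG with the closed-loop output c = rG/(1 + GH) when the control element G degrades by 20%; the particular values of r, G, and H are invented for illustration.

```python
def open_loop(r, G):
    return r * G

def closed_loop(r, G, H):
    return r * G / (1 + G * H)

r, H = 1.0, 0.1
for G in (100.0, 80.0):  # the control element degrades by 20%
    print(round(open_loop(r, G), 2), round(closed_loop(r, G, H), 2))
# open-loop output drops from 100.0 to 80.0 (a 20% change),
# while closed-loop output only drops from 9.09 to 8.89 (about 2%)
```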
3.3.3 Motor Speed Control System
If the speed of a motor is to be controlled, one method is to use a tachometer that senses the speed of the motor, produces an output voltage proportional to motor speed, and then subtracts that output voltage from the input voltage. This system can be drawn in block diagram form as shown in figure 3-5. In this example
r = input voltage to the speed control system
G = motor characteristic of 1,000 rpm per volt of input
c = steady state motor speed in rpm
H = the tachometer characteristic of 1 volt per 250 rpm motor speed
Example. This example assumes that the input signal does not change over the response time of the system. Neglecting transient responses, the steady state motor speed can be determined as follows:
r = 10 volts
c = (e)(1000) rpm
e = c/1000 volts
b = c/250 volts
e = r - b
= 10 - c/250 volts
Equating the two expressions for e and solving for c as in equation (3-4)
c/1000 = 10 - c/250
c + 4c = 10,000 rpm
c = 2,000 rpm
Finally the error voltage may be found
e = c/1000 = 2 volts
or by using the simplified equivalent form developed earlier as equation (3-6):
c = rG / (1 + GH) = (10 V)(1000 rpm/V) / [1 + (1000 rpm/V)(1 V / 250 rpm)] = 10,000 rpm / 5 = 2000 rpm
e = c/G = 2000 rpm / (1000 rpm/V) = 2 volts
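The same arithmetic can be checked with a few lines of Python using the closed-loop relation c = rG/(1 + GH); the numbers are the ones given in the example above.

```python
r = 10.0          # input, volts
G = 1000.0        # motor characteristic, rpm per volt
H = 1.0 / 250.0   # tachometer characteristic, volts per rpm

c = r * G / (1 + G * H)   # steady-state motor speed, equation (3-6)
e = c / G                 # error voltage driving the motor

print(c, e)  # 2000.0 rpm, 2.0 volts
```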
3.4 RESPONSE IN FEEDBACK CONTROL SYSTEMS
In weaponry, feedback control systems are used for various purposes and must meet certain performance requirements. These requirements not only affect such things as speed of response and accuracy, but also the manner in which the system responds in carrying out its control function. All systems contain certain errors. The problem is to keep them within allowable limits.
Weapons system driving devices must be capable of developing sufficient torque and power to position a load in a minimum rise time. In a system, a motor and its connected load have sufficient inertia to drive the load past the point of the desired position as governed by the input signal. This overshooting results in an opposite error signal reversing the direction of rotation of the motor and the load. The motor again attempts to correct the error and again overshoots the desired point, with each reversal requiring less correction until the system is in equilibrium with the input stimulus. The time required for the oscillations to die down to the desired level is often referred to as settling time. The magnitude of settling time is greatly affected by the degree of viscous friction in the system (commonly referred to as damping). As the degree of viscous friction or damping increases, the tendency to overshoot is diminished, until finally no overshoot occurs. As damping is further increased, the settling time of the system begins to increase again.
Consider the system depicted in figure 3-6. A mass is attached to a rigid surface by means of a spring and a dashpot and is free to move left and right on a frictionless slide. A free body diagram of the forces is drawn in figure 3-7.
Newton's laws of motion state that any finite resultant of external forces applied to a body must result in the acceleration of that body, i.e.:
F = Ma
Therefore, the forces are added, with the frame of reference carefully noted to determine the proper signs, and are set equal to the product of mass and acceleration.
F(t) - Fspring - Fdashpot = Ma (3-7)
The force exerted by a spring is proportional to the difference between its rest length and its instantaneous length. The proportionality constant is called the spring constant and is usually designated by the letter K, with the units of Newtons per meter (N/m).
Fspring = Kx
The force exerted by the dashpot is referred to as damping and is proportional to the relative velocity of the two mechanical parts. The proportionality constant is referred to as the damping constant and is usually designated by the letter B, with units of newtons per meter per second (N·s/m).
Fdashpot = Bv
Noting that velocity is the first derivative of displacement with respect to time and that acceleration is the second derivative of displacement with respect to time, equation (3-7) becomes
F(t) - Kx - B(dx/dt) = M(d²x/dt²) (3-8)
M(d²x/dt²) + B(dx/dt) + Kx = F(t)
d²x/dt² + (B/M)(dx/dt) + (K/M)x = F(t)/M (3-9)
This equation is called a second-order linear differential equation with constant coefficients.
Using the auxiliary equation method of solving a linear differential equation, the auxiliary equation
of (3-9) is:
s² + (B/M)s + (K/M) = 0 (3-10)
and has two roots
s = [ -(B/M) ± √((B/M)² - 4(K/M)) ] / 2 (3-11)
and the general solution of equation (3-9) is of the form
x(t) = C1·e^(s1·t) + C2·e^(s2·t) (3-12)
where s1 and s2 are the roots determined in equation (3-10) and C1 and C2 are coefficients that can be determined by evaluating the initial conditions.
It is convenient to express B in terms of a damping coefficient as follows:
B = 2ζ√(MK)
Then equation (3-10) can be written in the form:
0 = s² + 2ζω_n·s + ω_n² (3-13)
ω_n = √(K/M) and is the natural frequency of the system.
For the particular value of B such that ζ = 1, the system is critically damped. The roots of equation (3-10) are real and equal (s1 = s2), and the response of the system to a step input is of the form
x(t) = A(1 - C1·t·e^(s1·t) - C2·e^(s2·t)) (3-14)
The specific response is shown in figure 3-8a.
For large values of B such that ζ > 1, the system is overdamped. The roots of equation (3-10) are real and unequal, and the response of the system to a step input is of the form
x(t) = A(1 - C1·e^(s1·t) - C2·e^(s2·t))
Since one of the roots is larger than in the case of critical damping, the response will take more time to reach its final value. An example of an overdamped system response is shown in figure 3-8b.
For small values of B such that ζ < 1, the system is underdamped. The roots of equation (3-10) are complex conjugates, and the general solution is of the form
x(t) = A[1 - e^(-σt)·sin(ωt + φ)]
where ω is the imaginary part of the complex roots and σ = B/(2M) is the real portion of s.
The system oscillates at the frequency ω. For small values of ζ, ω is very nearly the same as ω_n in equation (3-13).
Examples of critically damped, overdamped, and underdamped system response are depicted in figures 3-8a-c respectively.
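The three responses can also be reproduced numerically. The Python sketch below integrates M·x'' + B·x' + K·x = F for a unit step force with simple Euler steps; the mass, spring constant, and damping-ratio values are invented for illustration and are chosen to give ζ = 0.3, 1, and 2.

```python
def step_response(M, B, K, F=1.0, dt=0.001, t_end=10.0):
    """Euler integration of M*x'' + B*x' + K*x = F for a step input."""
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (F - B * v - K * x) / M
        v += a * dt
        x += v * dt
    return x

M, K = 1.0, 4.0                    # natural frequency of 2 rad/s
for zeta in (0.3, 1.0, 2.0):       # underdamped, critically damped, overdamped
    B = 2 * zeta * (M * K) ** 0.5  # B = 2*zeta*sqrt(M*K), as in the text
    print(zeta, round(step_response(M, B, K), 3))
# all three settle toward the static value F/K = 0.25;
# the underdamped case overshoots it on the way, the others do not
```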
3.4.2 System Damping
The previous illustrations are characteristic of the types of motion found in most weapons tracking systems. In the case where the system is underdamped, in that the oscillations of overshoot are allowed to continue for a relatively long period of time, the system responds rapidly to an input order, but has relative difficulty in settling down to the desired position dictated by that input. Rapid initial response is a desirable characteristic in weapon control tracking systems if the system is to keep up with high-speed targets. However, the long settling time is an undesirable trait in a dynamic tracking system because during the settling time, the target will have moved, thus initiating a change in the system input prior to the system's responding adequately to the previous stimulus. It should be easy to extrapolate this condition over time to the point where the system can no longer follow the target and the track is lost.
Some of the more common methods of achieving damping are the employment of friction (viscous damping), feeding back electrical signals that are 180° out of phase with the input, or returning a DC voltage that is of opposite polarity to that of a driving voltage.
When the damping in a system becomes too great, the system will not overshoot, but its initial response time will become excessive. This condition is known as overdamped. It is generally an undesirable condition in weapons systems because of the relatively slow initial response time associated with it.
When a system responds relatively quickly with no overshoot, the system is critically damped. In actual practice, systems are designed to be slightly underdamped, but approaching the critically damped condition. This accomplishes the functions of minimizing the system response time while at the same time minimizing over-shoot. It is called realistic damping. Figure 3-9 is a graphical representation of the relationship among the different conditions of damping.
Example. To illustrate the basic concepts of feedback control, consider the simple mechanical accelerometer that is employed in various configurations in missile guidance systems and inertial navigation systems. In this example, the accelerometer is employed in a guided-missile direction-control system. It is desired to hold the missile on a straight-line course. Lateral accelerations must be measured and compensation made by causing the steering-control surfaces to be actuated to produce a counter acceleration, thus assisting the overall guidance system in maintaining a steady course. This example depicts the system for left and right steering only; however, the up/down steering control is identical.
The accelerometer consists of a spring, a mass, and damping fluid all contained within a sealed case, with the entire assembly mounted in the missile.
The position, x, of the mass with respect to the case, and thus with respect to the potentiometer, is a function of the acceleration of the case. As the mass is moved by the results of lateral acceleration, motor drive voltage is picked off by the wiper arm of the potentiometer. The system is calibrated so that when no acceleration exists, the wiper arm is positioned in the center of the potentiometer and no drive voltage is fed to the motor.
As lateral accelerations of the missile occur, the mass is moved by the resulting force in a manner so as to pick off a voltage to steer the missile in a direction opposite to that of the input accelerations. As the missile is steered, an acceleration opposite to that initially encountered tends to move the mass in the opposite direction. The motion of the mass is opposed by the damping action of the fluid and the spring. To achieve a relatively rapid response and a minimum of overshoot, the choice of the viscosity of the fluid and the strength of the spring is critical.
This chapter has presented a broad overview of the basic concepts of feedback control systems. These concepts are employed over a wide range of applications, including automatic target tracking systems, missile and torpedo homing systems, and gun and missile-launcher positioning systems.
A feedback control system consists of an input, an output, a controlled element, and feedback. System response should be as rapid as possible with minimum overshoot. To accomplish this, some means of damping is employed. If damping is weak, then the system is underdamped. This condition is characterized by rapid initial response with a long settling time. If damping is too strong or the system is overdamped, then the system response time is excessively long with no overshoot. The critically damped condition occurs when damping is weak enough to permit a relatively rapid initial response and yet strong enough to prevent overshoot. Control systems employed in modern weaponry are designed to be slightly underdamped but approaching the critically damped case. The result of this compromise in design is the achievement of rapid initial response with a minimum of overshoot and oscillation.
Bureau of Naval Personnel. Aviation Fire Control Technician 3 2. NAVPERS 10387-A. Washington, D.C.: GPO, 1971.
Commander, Naval Ordnance Systems Command. Weapons Systems Fundamentals. NAVORD OP 3000, vols. 2 & 3, 1st Rev. Washington, D.C.: GPO, 1971.
H.Q., U.S. Army Material Command: Servomechanisms. Pamphlet 706-136, Sec. I. Washington, D.C.: GPO, 1965.
Spiegel, Murry R. Applied Differential Equations. Englewood Cliffs, N.J.: Prentice-Hall Inc., 1967.
Weapons and Systems Engineering Dept. Weapons Systems Engineering. Vol. 2, Annapolis, Md.: USNA, 1983. | http://www.fas.org/man/dod-101/navy/docs/fun/part03.htm | 13 |
60 | Exemplars & Common Pitfalls
1: Requiring Automaticity with Basic Number Facts
The four arithmetic operations for whole numbers cannot be mastered if the single-digit addition and multiplication facts (and corresponding subtraction and division facts) have not been learned to automaticity. For multiplication and division, only eleven states (plus Common Core) use key words or phrases such as: automaticity, memorize, instant, or quick recall. Another fifteen states either fail to mention these "math facts" or specify only that students be able to compute them. But "fluency" with calculating the basic facts is not the same as instant recall.
These states specifically require fluency of multiplication and division facts.
- Develop quick recall of multiplication facts and related division facts and fluency with whole-number multiplication (grade 4)
- Develop and demonstrate quick recall of basic addition facts to 20 and related subtraction facts (grades K-2)
- Extend their work with multiplication and division strategies to develop fluency and recall of multiplication and division facts (grades 3-5)
- Immediately recall and use addition and subtraction facts (grade 3)
- Immediately recall and use multiplication and corresponding division facts (products to 144) (grade 4)
In the development of arithmetic, students are expected to be able to use different methods of computing, but fluency is not required.
- Use flexible methods of computing, including student-generated strategies and standard algorithms (grade 3)
- Use flexible methods of computing including standard algorithms to multiply and divide multi-digit numbers by two-digit factors or divisors (grade 5)
- Demonstrate fluency with basic addition and subtraction facts to sums of 20 (grade 2) (This can be interpreted as either computational fluency or instant recall. This lack of specificity means that some students might not be required to actually internalize the basic facts.)
Kentucky only requires students to use “computational procedures” and fails to require instant recall.
- Students will develop and apply computational procedures to add, subtract, multiply and divide whole numbers using basic facts and technology as appropriate (grade 5)
2: Mandating Fluency with Standard Algorithms
Arithmetic forms the foundation of K-16 mathematics, and whole-number arithmetic forms the foundation of arithmetic. The proper goal for whole-number arithmetic is fluency with (and understanding of) the standard algorithms.
The Following states specifically require students to learn and use the standard algorithms:
Demonstrate in the classroom an understanding of and the ability to use the conventional algorithms for addition (two 3-digit numbers and three 2-digit numbers) and subtraction (two 3-digit numbers) (grade 2)
- Multiply multi-digit whole numbers through four digits fluently, demonstrating understanding of the standard algorithm, and checking for reasonableness of results, including solving real-world problems (grade 4)
- Demonstrate an understanding of, and the ability to use, standard algorithms for the addition and subtraction of multi-digit numbers (grade 4)
Only seven states explicitly expect students to know the standard algorithm for whole-number multiplication as their capstone standard for multiplication of whole numbers. But twenty-four states explicitly undermine this goal by offering, even expecting, alternatives to the standard algorithm:
- Use a variety of strategies to multiply three-digit by three-digit numbers (grade 5)
This standard fails even to mention the standard algorithm, and thus leaves little confidence that students across the state will master this essential content.
Other states pay homage to the standard algorithm while still avoiding the goal:
- Solve multi-digit whole number multiplication problems using a variety of strategies, including the standard algorithm, justify methods used (grade 4)
Here, while the standard algorithm is mentioned, students can clearly move on without having mastered it.
3: Getting Fractions Right
After the foundation of whole-number arithmetic, fractions form the core of mathematics. Only fifteen states even mention common denominators, something essential in the development for adding and subtracting fractions. Likewise, standards specifying fractions as division or requiring mastery of the standard algorithms are rare.
The often-confused concept of fractions as numbers is introduced early and clearly.
- Understand a fraction as a number on the number line; represent fractions on a number line diagram
a. Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line
b. Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line (grade 3)
The arithmetic of fractions is carefully developed using mathematical reasoning.
- Understand a fraction a/b as a multiple of 1/b. For example, use a visual fraction model to represent 5/4 as the product 5 × (1/4), recording the conclusion by the equation 5/4 = 5 × (1/4) (grade 4)
- Add and subtract fractions with unlike denominators (including mixed numbers) by replacing given fractions with equivalent fractions in such a way as to produce an equivalent sum or difference of fractions with like denominators. For example, 2/3 + 5/4 = 8/12 + 15/12 = 23/12. (In general, a/b + c/d = (ad + bc)/bd) (grade 5)
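The general rule quoted in the last bullet above, a/b + c/d = (ad + bc)/bd, is easy to check mechanically. Here is a minimal Python sketch (the helper name is just an illustrative choice) that applies the rule and compares the result with Python's exact Fraction type.

```python
from fractions import Fraction

def add_by_common_denominator(a, b, c, d):
    """a/b + c/d rewritten over the common denominator b*d."""
    return Fraction(a * d + c * b, b * d)

print(add_by_common_denominator(2, 3, 5, 4))                                      # 23/12, as in the example
print(add_by_common_denominator(2, 3, 5, 4) == Fraction(2, 3) + Fraction(5, 4))   # True
```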
Similarly, in Connecticut fractions are well developed.
Common denominators are introduced explicitly:
- Compare fractions by finding a common denominator (grade 5)
When fractions are introduced, they are not explicitly introduced as parts of a set or a whole.
- Identify and model representations of fractions (halves, thirds, fourths, fifths, sixths, and eighths) (grade 3)
In the case of adding and subtracting fractions, standard procedures and fluency are not required, nor are common denominators developed.
- Add and subtract fractions having like and unlike denominators that are limited to 2, 3, 4, 5, 6, 8, 10, and 12 (grade 4)
- Solve single-step and multistep practical problems involving addition and subtraction with fractions and with decimals (grade 4)
The standard algorithms are not required. Students are instead encouraged to develop their own algorithms. And common denominators are never mentioned.
- Develop and analyze algorithms for computing with fractions (including mixed numbers) and decimals and demonstrate, with and without technology, computational fluency in their use and justify the solution [sic] (grade 6)
4: Allowing Calculator Use in the Early Grades
Standards should require that students master basic computation in the early grades without the use of technology.
Recall and use basic multiplication and division facts orally and with paper and pencil and without a calculator (grades 4-6)
Technology is introduced early and included often in the standards, undermining students’ mastery of arithmetic. Standards also seem to give students the choice to always use a calculator.
- Select pencil-and-paper, mental math, or a calculator as the appropriate computational method in a given situation depending on the context and numbers (grades 2-6)
- Select and use an appropriate method of computation from mental math, paper and pencil, calculator, or a combination of the three (grades 3-6)
5: Including Axiomatic Geometry in High School
The best standards directly address and require students to prove theorems, and they should mention postulates and axioms.
- Prove theorems about triangles. Theorems include: measures of interior angles of a triangle sum to 180°; base angles of isosceles triangles are congruent; the segment joining midpoints of two sides of a triangle is parallel to the third side and half the length; the medians of a triangle meet at a point (high school)
- Write simple proofs of theorems in geometric situations, such as theorems about congruent and similar figures, parallel or perpendicular lines. Distinguish between postulates and theorems. Use inductive and deductive reasoning, as well as proof by contradiction. Given a conditional statement, write its inverse, converse, and contrapositive (Geometry)
Classical theorems of geometry are not specifically included. If proof is mentioned, the foundations are not well covered, and such basic theorems as the Pythagorean Theorem are not proven.
Congruence and similarity are frequently missing, as in:
- Determine congruence and similarity among geometric objects (grades 9-10)
- Determine angle measures and side lengths of right and similar triangles using trigonometric ratios and properties of similarity, including congruence (grade 10)
Our review of State Standards | http://standards.educationgadfly.net/best/math | 13
58 | Carbon nanotubes (CNTs) are allotropes of carbon with a cylindrical nanostructure. Nanotubes have been constructed with length-to-diameter ratio of up to 132,000,000:1, significantly larger than for any other material. These cylindrical carbon molecules have unusual properties, which are valuable for nanotechnology, electronics, optics and other fields of materials science and technology. In particular, owing to their extraordinary thermal conductivity and mechanical and electrical properties, carbon nanotubes find applications as additives to various structural materials.
Nanotubes are members of the fullerene structural family, which also includes the spherical buckyballs, and the ends of a nanotube may be capped with a hemisphere of the buckyball structure. Their name is derived from their long, hollow structure with the walls formed by one-atom-thick sheets of carbon, called graphene. These sheets are rolled at specific and discrete (“chiral”) angles, and the combination of the rolling angle and radius decides the nanotube properties; for example, whether the individual nanotube shell is a metal or semiconductor. Nanotubes are categorized as single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs). Individual nanotubes naturally align themselves into “ropes” held together by van der Waals forces, more specifically, pi-stacking.
Applied quantum chemistry, specifically, orbital hybridization best describes chemical bonding in nanotubes. The chemical bonding of nanotubes is composed entirely of sp2 bonds, similar to those of graphite. These bonds, which are stronger than the sp3 bonds found in alkanes and diamond, provide nanotubes with their unique strength.
Types of carbon nanotubes and related structures
There is no consensus on some terms describing carbon nanotubes in scientific literature: both “-wall” and “-walled” are being used in combination with “single”, “double”, “triple” or “multi”, and the letter C is often omitted in the abbreviation; for example, multi-walled carbon nanotube (MWNT).
Single-walled Carbon nanotubes
Most single-walled nanotubes (SWNT) have a diameter of close to 1 nanometer, with a tube length that can be many millions of times longer. The structure of a SWNT can be conceptualized by wrapping a one-atom-thick layer of graphite called graphene into a seamless cylinder. The way the graphene sheet is wrapped is represented by a pair of indices (n,m) . The integers n and m denote the number of unit vectors along two directions in the honeycomb crystal lattice of graphene. If m = 0, the nanotubes are called zigzag nanotubes, and if n = m, the nanotubes are called armchair nanotubes. Otherwise, they are called chiral. The diameter of an ideal nanotube can be calculated from its (n,m) indices as follows
d = (a/π)·√(n² + n·m + m²), where a = 0.246 nm.
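A short Python sketch of that calculation, applied for illustration to a few common chiralities (the example indices are chosen arbitrarily):

```python
import math

A = 0.246  # graphene lattice constant, in nanometers

def swnt_diameter(n, m):
    """Ideal SWNT diameter from its (n, m) indices: d = (a/pi) * sqrt(n^2 + n*m + m^2)."""
    return A / math.pi * math.sqrt(n * n + n * m + m * m)

for n, m in [(10, 10), (17, 0), (12, 6)]:
    print(n, m, round(swnt_diameter(n, m), 3))  # diameters of roughly 1.2-1.4 nm
```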
SWNTs are an important variety of carbon nanotube because most of their properties change significantly with the (n,m) values, and this dependence is non-monotonic (see Kataura plot). In particular, their band gap can vary from zero to about 2 eV and their electrical conductivity can show metallic or semiconducting behavior. Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is the electric wire, and SWNTs with diameters of an order of a nanometer can be excellent conductors.
Multi-walled Carbon Nanotubes
Multi-walled nanotubes (MWNT) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.
Double-walled carbon nanotubes (DWNT) form a special class of nanotubes because their morphology and properties are similar to those of SWNT but their resistance to chemicals is significantly improved. This is especially important when functionalization is required (this means grafting of chemical functions at the surface of the nanotubes) to add new properties to the CNT.
Properties of Carbon Nanotubes
Mechanical properties of carbon nanotubes
Carbon nanotube is one of the strongest materials in nature. Carbon nanotubes (CNTs) are basically long hollow cylinders of graphite sheets. Although a graphite sheet has a 2D symmetry, carbon nanotubes by geometry have different properties in axial and radial directions. It has been shown that CNTs are very strong in the axial direction. Young’s modulus on the order of 270 – 950 GPa and tensile strength of 11 – 63 GPa were obtained.
On the other hand, there was evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even the van der Waals forces can deform two adjacent nanotubes. Later, nanoindentations with atomic force microscope were performed by several groups to quantitatively measure radial elasticity of multiwalled carbon nanotubes and tapping/contact mode atomic force microscopy was recently performed on single-walled carbon nanotubes. Young’s modulus of on the order of several GPa showed that CNTs are in fact very soft in the radial direction.
Radial direction elasticity of CNTs is important especially for carbon nanotube composites where the embedded tubes are subjected to large deformation in the transverse direction under the applied load on the composite structure.
One of the main problems in characterizing the radial elasticity of CNTs is the knowledge about the internal radius of the CNT; carbon nanotubes with identical outer diameter may have different internal diameter (or the number of walls). Recently a method using atomic force microscope was introduced to find the exact number of layers and hence the internal diameter of the CNT. In this way, mechanical characterization is more accurate.
Optical properties of Carbon Nanotubes
Within materials science, the optical properties of carbon nanotubes refer specifically to the absorption, photoluminescence, and Raman spectroscopy of carbon nanotubes. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of the nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality. As shown below, optical absorption, photoluminescence and Raman spectroscopies allow quick and reliable characterization of this “nanotube quality” in terms of non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. Those features determine nearly any other properties such as optical, mechanical, and electrical properties.
Van Hove singularities
Optical properties of carbon nanotubes derive from electronic transitions within one-dimensional density of states (DOS). A typical feature of one-dimensional crystals is that their DOS is not a continuous function of energy, but it descends gradually and then increases in a discontinuous spike. In contrast, three-dimensional materials have continuous DOS. The sharp peaks found in one-dimensional materials are called Van Hove singularities.
The band structure of carbon nanotubes having certain (n, m) indices can be easily calculated. A theoretical graph based on these calculations was designed in 1999 by Hiromichi Kataura to rationalize experimental findings. A Kataura plot relates the nanotube diameter and its bandgap energies for all nanotubes in a diameter range. The oscillating shape of every branch of the Kataura plot reflects the intrinsic strong dependence of the SWCNT properties on the (n, m) index rather than on its diameter. For example, (10, 1) and (8, 3) tubes have almost the same diameter, but very different properties: the former is a metal, but the latter is a semiconductor.
Optical absorption of Carbon Nanotubes
Optical absorption in carbon nanotubes differs from absorption in conventional 3D materials by presence of sharp peaks (1D nanotubes) instead of an absorption threshold followed by an absorption increase (most 3D solids). Absorption in nanotubes originates from electronic transitions from the v2 to c2 (energy E22) or v1 to c1 (E11) levels, etc.
Raman scattering of Carbon Nanotubes
Raman spectroscopy has good spatial resolution (~0.5 micrometers) and sensitivity (single nanotubes); it requires only minimal sample preparation and is rather informative. Consequently, Raman spectroscopy is probably the most popular technique of carbon nanotube characterization. Raman scattering in SWCNTs is resonant, i.e., only those tubes are probed which have one of the bandgaps equal to the exciting laser energy. Several scattering modes dominate the SWCNT spectrum, as discussed below.
Similar to photoluminescence mapping, the energy of the excitation light can be scanned in Raman measurements, thus producing Raman maps. Those maps also contain oval-shaped features uniquely identifying (n, m) indices. Contrary to PL, Raman mapping detects not only semiconducting but also metallic tubes, and it is less sensitive to nanotube bundling than PL. However, requirement of a tunable laser and a dedicated spectrometer is a strong technical impediment.
Electrical properties of Carbon Nanotubes
Because of the symmetry and unique electronic structure of graphene, the structure of a nanotube strongly affects its electrical properties. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3, then the nanotube is semiconducting with a very small band gap, otherwise the nanotube is a moderate semiconductor. Thus all armchair (n = m) nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting.
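As a rough illustration of that band-structure rule (ignoring the small-diameter curvature effects discussed next), a classification helper might look like this:

```python
def electronic_type(n, m):
    """Zone-folding rule of thumb for an (n, m) nanotube; curvature effects ignored."""
    if n == m:
        return "metallic (armchair)"
    if (n - m) % 3 == 0:
        return "semiconducting with a very small band gap"
    return "moderate semiconductor"

for indices in [(10, 10), (9, 3), (6, 4), (9, 1)]:
    print(indices, electronic_type(*indices))
```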
However, this rule has exceptions, because curvature effects in small-diameter carbon nanotubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting is in fact metallic according to the calculations. Conversely, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 10⁹ A/cm², which is more than 1,000 times greater than that of metals such as copper, where current densities in interconnects are limited by electromigration.
There have been reports of intrinsic superconductivity in carbon nanotubes. Many other experiments, however, found no evidence of superconductivity, and the validity of these claims of intrinsic superconductivity remains a subject of debate.
Thermal properties of Carbon Nanotubes
All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as “ballistic conduction”, but good insulators laterally to the tube axis. Measurements show that a SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m−1·K−1; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m−1·K−1. A SWNT has a room-temperature thermal conductivity across its axis (in the radial direction) of about 1.52 W·m−1·K−1, which is about as thermally conductive as soil. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.
Defects of Carbon Nanotubes
As with any material, the existence of a crystallographic defect affects the material properties. Defects can occur in the form of atomic vacancies. High levels of such defects can lower the tensile strength by up to 85%. An important example is the Stone Wales defect, which creates a pentagon and heptagon pair by rearrangement of the bonds. Because of the very small structure of CNTs, the tensile strength of the tube is dependent on its weakest segment in a similar manner to a chain, where the strength of the weakest link becomes the maximum strength of the chain.
Crystallographic defects also affect the tube’s electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monoatomic vacancies induce magnetic properties.
Crystallographic defects strongly affect the tube’s thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to scattering of high-frequency optical phonons. However, larger-scale defects such as Stone Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.
Toxicity of Carbon Nanotubes
The toxicity of carbon nanotubes has been an important question in nanotechnology. Such research has just begun. The data are still fragmentary and subject to criticism. Preliminary results highlight the difficulties in evaluating the toxicity of this heterogeneous material. Parameters such as structure, size distribution, surface area, surface chemistry, surface charge, and agglomeration state as well as purity of the samples, have considerable impact on the reactivity of carbon nanotubes. However, available data clearly show that, under some conditions, nanotubes can cross membrane barriers, which suggests that, if raw materials reach the organs, they can induce harmful effects such as inflammatory and fibrotic reactions.
Synthesis of Carbon Nanotubes
Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, high-pressure carbon monoxide (HiPco), and chemical vapor deposition (CVD). Most of these processes take place in vacuum or with process gases. CVD growth of CNTs can occur in vacuum or at atmospheric pressure. Large quantities of nanotubes can be synthesized by these methods; advances in catalysis and continuous growth processes are making CNTs more commercially viable.
Nanotubes were observed in 1991 in the carbon soot of graphite electrodes during an arc discharge, by using a current of 100 amps, that was intended to produce fullerenes. However the first macroscopic production of carbon nanotubes was made in 1992 by two researchers at NEC’s Fundamental Research Laboratory. The method used was the same as in 1991. During this process, the carbon contained in the negative electrode sublimates because of the high-discharge temperatures. Because nanotubes were initially discovered using this technique, it has been the most widely used method of nanotube synthesis.
The yield for this method is up to 30% by weight and it produces both single- and multi-walled nanotubes with lengths of up to 50 micrometers with few structural defects.
In the laser ablation process, a pulsed laser vaporizes a graphite target in a high-temperature reactor while an inert gas is bled into the chamber. Nanotubes develop on the cooler surfaces of the reactor as the vaporized carbon condenses. A water-cooled surface may be included in the system to collect the nanotubes.
This process was developed by Dr. Richard Smalley and co-workers at Rice University, who at the time of the discovery of carbon nanotubes, were blasting metals with a laser to produce various metal molecules. When they heard of the existence of nanotubes they replaced the metals with graphite to create multi-walled carbon nanotubes. Later that year the team used a composite of graphite and metal catalyst particles (the best yield was from a cobalt and nickel mixture) to synthesize single-walled carbon nanotubes.
The laser ablation method yields around 70% and produces primarily single-walled carbon nanotubes with a controllable diameter determined by the reaction temperature. However, it is more expensive than either arc discharge or chemical vapor deposition.
Chemical vapor deposition (CVD)
During CVD, a substrate is prepared with a layer of metal catalyst particles, most commonly nickel, cobalt, iron, or a combination. The metal nanoparticles can also be produced by other ways, including reduction of oxides or oxides solid solutions. The diameters of the nanotubes that are to be grown are related to the size of the metal particles. This can be controlled by patterned (or masked) deposition of the metal, annealing, or by plasma etching of a metal layer. The substrate is heated to approximately 700°C. To initiate the growth of nanotubes, two gases are bled into the reactor: a process gas (such as ammonia, nitrogen or hydrogen) and a carbon-containing gas (such as acetylene, ethylene, ethanol or methane). Nanotubes grow at the sites of the metal catalyst; the carbon-containing gas is broken apart at the surface of the catalyst particle, and the carbon is transported to the edges of the particle, where it forms the nanotubes. This mechanism is still being studied. The catalyst particles can stay at the tips of the growing nanotube during the growth process, or remain at the nanotube base, depending on the adhesion between the catalyst particle and the substrate. Thermal catalytic decomposition of hydrocarbon has become an active area of research and can be a promising route for the bulk production of CNTs. Fluidised bed reactor is the most widely used reactor for CNT preparation. Scale-up of the reactor is the major challenge.
CVD is a common method for the commercial production of carbon nanotubes. For this purpose, the metal nanoparticles are mixed with a catalyst support such as MgO or Al2O3 to increase the surface area for higher yield of the catalytic reaction of the carbon feedstock with the metal particles.
Applications of Carbon Nanotubes
Current use and application of nanotubes has mostly been limited to the use of bulk nanotubes, which is a mass of rather unorganized fragments of nanotubes. Bulk nanotube materials may never achieve a tensile strength similar to that of individual tubes, but such composites may, nevertheless, yield strengths sufficient for many applications. Bulk carbon nanotubes have already been used as composite fibers in polymers to improve the mechanical, thermal and electrical properties of the bulk product.
Potential applications of carbon Nanotubes
The strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength measured for an individual multi-walled carbon nanotube is 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it.
Structural applications of CNTs
Because of the carbon nanotube’s superior mechanical properties, many structures have been proposed ranging from everyday items like clothes and sports gear to combat jackets and space elevators. However, the space elevator will require further efforts in refining carbon nanotube technology, as the practical tensile strength of carbon nanotubes can still be greatly improved.
Application of CNTs In electrical circuits
Nanotube-based transistors, also known as carbon nanotube field-effect transistors (CNTFETs), have been made that operate at room temperature and that are capable of digital switching using a single electron. However, one major obstacle to realization of nanotubes has been the lack of technology for mass production. In 2001 IBM researchers demonstrated how metallic nanotubes can be destroyed, leaving semiconducting ones behind for use as transistors. Their process is called “constructive destruction,” which includes the automatic destruction of defective nanotubes on the wafer. This process, however, only gives control over the electrical properties on a statistical scale.
As paper batteries
A paper battery is a battery engineered to use a paper-thin sheet of cellulose (which is the major constituent of regular paper, among other things) infused with aligned carbon nanotubes. The nanotubes act as electrodes; allowing the storage devices to conduct electricity. The battery, which functions as both a lithium-ion battery and a supercapacitor, can provide a long, steady power output comparable to a conventional battery, as well as a supercapacitor’s quick burst of high energy—and while a conventional battery contains a number of separate components, the paper battery integrates all of the battery components in a single structure, making it more energy efficient.
One of the promising applications of single-walled carbon nanotubes (SWNTs) is their use in solar panels, due to their strong UV/Vis-NIR absorption characteristics. Research has shown that they can provide a sizeable increase in efficiency, even at their current unoptimized state.
In addition to being able to store electrical energy, there has been some research in using carbon nanotubes to store hydrogen to be used as a fuel source. By taking advantage of the capillary effects of the small carbon nanotubes, it is possible to condense gases at high density inside single-walled nanotubes. This allows gases, most notably hydrogen (H2), to be stored at high densities without being condensed into a liquid. Potentially, this storage method could be used on vehicles in place of gas fuel tanks for a hydrogen-powered car. A current issue regarding hydrogen-powered vehicles is the onboard storage of the fuel. Current storage methods involve cooling and condensing the H2 gas to a liquid state for storage, which causes a loss of between 25% and 45% of the potential energy when compared to the energy associated with the gaseous state. Storage using SWNTs would allow the H2 to be kept in its gaseous state, thereby increasing the storage efficiency. This method allows for a volume-to-energy ratio slightly smaller than that of current gas-powered vehicles, allowing for a slightly lower but comparable range.
Discovery of Carbon Nanotubes
In 1952 L. V. Radushkevich and V. M. Lukyanovich published clear images of 50 nanometer diameter tubes made of carbon in the Soviet Journal of Physical Chemistry. This discovery was largely unnoticed, as the article was published in the Russian language, and Western scientists’ access to Soviet press was limited during the Cold War. It is likely that carbon nanotubes were produced before this date, but the invention of the transmission electron microscope (TEM) allowed direct visualization of these structures.
Carbon nanotubes have been produced and observed under a variety of conditions prior to 1991. A paper by Oberlin, Endo, and Koyama published in 1976 clearly showed hollow carbon fibers with nanometer-scale diameters using a vapor-growth technique. Additionally, the authors show a TEM image of a nanotube consisting of a single wall of graphene. Later, Endo has referred to this image as a single-walled nanotube.
In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.
In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytical disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their “carbon multi-layer tubular crystals” were formed by rolling graphene layers into cylinders. They speculated that by rolling graphene layers into a cylinder, many different arrangements of graphene hexagonal nets are possible. They suggested two possibilities of such arrangements: circular arrangement (armchair nanotube) and a spiral, helical arrangement (chiral tube).
In 1987, Howard G. Tennett of Hyperion Catalysis was issued a U.S. patent for the production of “cylindrical discrete carbon fibrils” with a “constant diameter between about 3.5 and about 70 nanometers…, length 10² times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core….”
Iijima’s discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods in 1991 and Mintmire, Dunlap, and White’s independent prediction that if single-walled carbon nanotubes could be made, then they would exhibit remarkable conducting properties helped create the initial buzz that is now associated with carbon nanotubes. Nanotube research accelerated greatly following the independent discoveries by Bethune at IBM and Iijima at NEC of single-walled carbon nanotubes and methods to specifically produce them by adding transition-metal catalysts to the carbon in an arc discharge. The arc discharge technique was well-known to produce the famed Buckminster fullerene on a preparative scale, and these results appeared to extend the run of accidental discoveries relating to fullerenes. The original observation of fullerenes in mass spectrometry was not anticipated, and the first mass-production technique by Krätschmer and Huffman was used for several years before realizing that it produced fullerenes.
The discovery of nanotubes remains a contentious issue. Many believe that Iijima’s report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole. | http://www.nanoacademia.com/nanotechnology/carbon-nanotube/ | 13 |
50 | Circulatory System
All animals must be able to circulate nutrients, fluids, and gases in order to properly function under specified conditions. There are three ways that animals may do this. Simple, sac-like animals, such as jellyfish and flatworms, have a gastrovascular cavity that serves as an area for digestion and helps bring the nutrients from digested foods into close proximity to many cells in the animal's simple body. More complex animals have either an open or a closed circulatory system.
Circulation is also closely tied to the immune response: all animals must be able to move antibodies to the areas of the body that need them. The means of circulation for these molecules is the circulatory system and, in particular, the lymphatic system.
Differences between open and closed circulatory systems and the advantages and disadvantages of each
An open circulatory system is one in which the heart pumps blood into the hemocoel, a cavity positioned between the ectoderm and endoderm. The fluid in such a system is called hemolymph, or blood. Hemolymph flows into an interconnected system of sinuses so that the tissues receive nutrients, fluid, and oxygen directly. In animals with an open circulatory system, blood volume makes up a relatively high percentage of body volume, and blood pressure tends to be low, although in some species the contractions of the heart or of the muscles surrounding the heart can attain higher pressures.
In a closed circulatory system, blood flows from arteries to capillaries and through veins, but the tissues surrounding the vessels are not directly bathed by blood. Some invertebrates and all vertebrates have closed circulatory systems. A closed circulatory system allows more of a complete separation of function than an open circulatory system does. The blood volume in these animals is considerably lower than that of animals with open circulatory systems. In animals with closed circulatory systems, the heart is the chambered organ that pushes the blood into the arterial system. The heart also sustains the high pressure necessary for the blood to reach all of the extremities of the body.
In the closed circulatory system of mammals, there are two subdivisions—the systemic circulation and the pulmonary circulation. The pulmonary circulation involves circulation of deoxygenated blood from the heart to the lungs, so that it may be properly oxygenated. Systemic circulation takes care of sending blood to the rest of the body. Once the blood flows through the system of capillaries at the body’s tissues, it returns through the venous system. The pressure in the venous system is considerably lower than the pressure in the arterial system. It contains a larger portion of blood than the arterial system does, for the venous system is thought to be the blood reservoir of the body.
Open circulatory systems have more apparent disadvantages, yet they suit the animals that have them well. Such animals have only a limited capability to increase or decrease the distribution and velocity of blood flow, and oxygen uptake can change only slowly. Because of the limits to diffusion, animals with open circulatory systems usually have relatively low metabolic rates.
There are a variety of advantages to having a closed circulatory system. Every cell of the body is, at most, only two or three cells' distance from a capillary. Such animals can exert precise control over oxygen delivery to tissues. A characteristic unique to closed circulatory systems is the capability to include ultrafiltration as part of blood circulation. The lymphatic system, which is included as part of the circulatory system because it circulates excess fluid and large molecules, relieves the pressure that accumulating fluid would otherwise create in the tissues. One of the most important advantages of the closed circulatory system is that the systemic and pulmonary branches of the system can maintain their respective pressures.
Describe blood flow through the mammalian heart
The human heart is a four-chambered double pump, which creates sufficient blood pressure to push the blood in vessels to all the cells in the body. The heart has a route which the blood takes in order to achieve this blood pressure, and to become oxygenated. Systemic venous blood is brought to the heart from the superior vena cava and the inferior vena cava into the right atrium. From the right atrium, the blood passes through the tricuspid valve and into the right ventricle. When the ventricle contracts, the tricuspid closes to prevent a flow of blood back into the atrium. At the same time, the pulmonary semilunar valve opens and blood passes into the left and right pulmonary arteries. These arteries lead the blood into the left and right lungs where the blood gives off its carbon dioxide and picks up oxygen. The oxygenated blood returns to the heart through pulmonary veins, two from each lung and enters the left atrium. The blood then flows from the left atrium to the left ventricle through the bicuspid valve (also known as mitral valve). This valve is open when the left ventricle is relaxed. When the left ventricle contracts, the bicuspid valve closes preventing backflow into the atrium. At the same time, the aortic semilunar valve opens letting blood pass through from the left ventricle into the aorta. Once the blood passes, the left ventricle relaxes and the aortic semilunar valve closes thus preventing backflow from the aorta into the left ventricle.
Heart rate and stroke volume and how they are affected by exercise
Stroke volume is the volume of blood ejected by each beat of the heart, or more precisely, the difference between the volume of the ventricle before contraction (end-diastolic volume) and the volume of the ventricle at the end of a contraction (end-systolic volume). A change in the end-diastolic or end-systolic volume changes the stroke volume and therefore the cardiac output, which is the volume of blood pumped per unit time by a ventricle. For example, an increase in venous filling pressure will increase the end-diastolic volume and therefore the stroke volume. Under some circumstances, however, the heart rate increases while the stroke volume remains the same. This happens when pacemaker cells are stimulated, increasing the heart rate; the rate of ATP production and other processes in the ventricular cells increases as well, quickening ventricular work so that the rate of ventricular emptying during systole rises and the same stroke volume can be maintained at a higher heart rate. Exercise is one such circumstance: it is associated with large increases in heart rate with little change in stroke volume, because an increase in sympathetic activity ensures more rapid ventricular emptying while the elevated venous pressure fills the heart more quickly as heart rate increases.
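As a quick illustration of these definitions, the sketch below computes stroke volume and cardiac output; the volumes and heart rates are assumed, typical textbook values for a human and are not taken from the text above.

```python
# Illustrative sketch only: stroke volume and cardiac output from assumed example values.

def stroke_volume(end_diastolic_ml, end_systolic_ml):
    """Stroke volume = ventricular volume before contraction minus volume at the end of contraction."""
    return end_diastolic_ml - end_systolic_ml

def cardiac_output(stroke_volume_ml, heart_rate_bpm):
    """Cardiac output (mL/min) = stroke volume (mL/beat) x heart rate (beats/min)."""
    return stroke_volume_ml * heart_rate_bpm

sv = stroke_volume(end_diastolic_ml=120, end_systolic_ml=50)          # 70 mL per beat
print("Resting cardiac output: ", cardiac_output(sv, 70), "mL/min")   # 4900 mL/min
# During exercise the heart rate rises while stroke volume stays roughly constant,
# so cardiac output increases mainly through the higher rate.
print("Exercise cardiac output:", cardiac_output(sv, 150), "mL/min")  # 10500 mL/min
```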
One of the most important factors that helps regulate stroke volume during exercise is the sympathetic nervous system, which raises the heart rate while maintaining stroke volume, keeping the heart operating at or near its optimal stroke volume for efficiency of contraction.
Changes in blood pressure and blood flow that occur during contraction of the mammalian heart
The contraction of the mammalian heart causes fluctuations in the cardiac pressure and the volume of blood in the heart. It is very important for the heart to maintain very specific blood pressure to ensure that the blood is being transferred all over the body and that the heart can repeat both stages of relaxation and contraction. If the blood pressure in the heart never dropped, there would be no relaxed state and the heart would not fill with blood returning from the body and the lungs. Therefore every change in blood pressure and flow is designed to pump blood.
A quick review of the diastole stage, or relaxed state, of the heart will give you a better understanding of what happens in the systole stage, or contracted state. In the diastole phase of the heartbeat, the aortic valve is closed. The pressure in the ventricles is therefore lower than in the pulmonary arteries and aorta, and the atrioventricular valves open, allowing blood returning through the venous system to fill the ventricles.
Once this has happened, the heart begins its contraction by increasing the pressure in the atria so that blood from the venae cavae and the pulmonary veins flows into the ventricles. The ventricles then contract until their pressure exceeds that of the atria, which closes the atrioventricular valves (the tricuspid valve on the right and the bicuspid, or mitral, valve on the left side of the heart). This prevents the backflow of blood into the atria and allows pressure to build up in the ventricles. The semilunar valves are also closed at this stage, so the volume of blood does not change; the ventricular contraction can therefore be considered isovolumetric (isometric). The pressure in the ventricles rises rapidly until it exceeds the pressure in the aorta and the pulmonary artery. The semilunar valves then open and the blood is ejected, causing a drop in pressure and volume in the ventricles and leading to another relaxed phase.
Changes that occur in the mammalian fetus after birth
In the fetus, the lungs contain no air and offer a high resistance to blood flow. In addition, blood returning to the heart is already oxygenated because it comes from the placenta. Therefore, there is no reason for blood to go to the lungs for gas exchange. Two features of the fetal heart help to direct oxygenated blood returning from the placenta to the systemic circulation. These are the foramen ovale and the ductus arteriosus.
The foramen ovale is a hole in the interatrial septum that is covered by a flap valve that allows blood to flow from the inferior vena cava through the right atrium and into the left atrium. Therefore, much of the oxygenated blood returning from the placenta goes from the right atrium to the left atrium (via the foramen ovale) to the left ventricle and finally to the body via the aorta. The ductus arteriosus shunts blood from the pulmonary artery to the aorta, thereby bypassing the lungs and sending the oxygenated blood to the rest of the body. In the fetus, most of the blood flow is pumped by the right ventricle to the body and is returned to the systemic pathways through the ductus arteriosus.
At birth, the lungs inflate and there is a sudden increase in pulmonary blood flow. This increases pressure in the left atrium, closing the flap over the foramen ovale. Eventually, this flap seals shut. In addition, the ductus arteriosus closes off, thereby preventing further shunting of blood between the pulmonary artery and the aorta. These changes make sense, because the blood returning to the right atrium of the heart is now deoxygenated, and must be sent to the lungs for gas exchange. If the foramen ovale fails to close after birth, there will be some leaking of deoxygenated blood from the right atrium into the left atrium. This may correct itself in time, or may require surgery. If the ductus arteriosus fails to close off at birth, blood is shunted from the high-pressure aorta back into the pulmonary artery, so that some oxygenated blood recirculates through the lungs instead of reaching the body. This decreases the amount of oxygen delivered to the body, thereby decreasing the capacity for exercise or any other strenuous activity. If the condition is not corrected by surgery, the left ventricle of the heart must work harder to pump blood to the body and brain. Over time, the left ventricle can become enlarged due to this additional strain. In addition, the increased blood pressure in the lungs due to the left ventricle working harder can increase the amount of fluid leaving the capillaries in the lungs and lead to pulmonary congestion from fluid build-up.
What is an electrocardiogram and what are its visible components when one is printed out?
An electrocardiogram is a record of the electrical activity of the heart. Its components vary among the hearts of different animal species, and the human heart is the most thoroughly characterized. The changes in the duration of the plateau of the action potential and the rates of depolarization and repolarization of the heart are what the electrocardiogram records. The duration of the action potential in an animal is directly related to the maximum frequency of its heartbeat. Atrial cells have shorter action potentials than ventricular cells. In smaller mammals, the duration of the ventricular action potential is shorter and the heart rates are higher.
All of this electrical activity of the heart can be recorded in an electrocardiogram. Electrodes are placed on the patient so that the trace that appears on the screen is an electrical view across the heart. Each of the peaks on an electrocardiogram is given one or more initials. The first wave is the P wave, which represents atrial depolarization. It is a small wave that is slow to rise and fall. The QRS complex comes next and is the summation of two events, ventricular depolarization and atrial repolarization. The T wave comes after the QRS complex and represents ventricular repolarization. The P-R interval is the time between the beginning of the P wave and the beginning of the QRS complex. This interval represents the time the impulse takes to travel from the sinoatrial node to the bundle of His. Changes in this interval can signal a conduction problem. For example, an increase in the P-R interval, which could be due to damage to the AV node, might cause too long a delay between the contraction of the atria and the ventricles, resulting in decreased efficiency in pumping blood to the lungs and the rest of the body.
What are the effects of sodium and potassium influx on cardiac action potential and how does cardiac action potential differ from action potential of other muscles or nerves?
The cardiac action potential begins with a rapid depolarization due to the influx of sodium ions, followed by a sustained depolarization supported by an influx of calcium ions. This depolarization spreads rapidly across the heart through the interconnected cardiac muscle cells, causing the heart to contract. To ensure that the contraction pushes all of the blood from the heart, the action potential remains at a plateau phase, sustained by delaying the efflux of potassium. This "pause" in a fully contracted state before relaxation ensures that most of the blood that was in the heart is pumped out, thereby increasing the efficiency of each heart beat. The plateau phase seen in cardiac muscle is not seen in other muscles or nerves, both of which repolarize much more quickly due to the rapid efflux of potassium immediately after the sodium influx has stopped. This permits these cells to be ready to generate a second action potential almost immediately after the first.
Another difference between cardiac cells and the cells of other muscles or nerves is that cardiac cells of vertebrates exhibit a pacemaker potential. This is characterized by a slow depolarization toward threshold due to a constant leaking of sodium into the cells. Therefore, cardiac cells do not exhibit a true resting potential as seen in resting skeletal muscles or nerve cells. For this reason, cardiac cells generate action potentials on their own, whereas skeletal muscle cells require nervous stimulation.
Excess extracellular K+ will depolarize the cardiac cell membranes. An increase in extracellular K+ concentration will decrease the rate at which K+ diffuses out of a cell. This loss of K+ from inside the cell to outside the cell is a significant factor in establishing a negative resting potential. If this rate of K+ efflux is decreased, normal membrane potential will not be established and the cell may lose the ability to generate an action potential. Therefore, excess extracellular K+ can inhibit heart function, and could be fatal.
Cardiac cells also have a relatively long refractory period after contraction which prevents another contraction before the heart fully relaxes. This ensures that the heart chambers fill with blood before the heart contracts. Skeletal muscle cells, however, can be stimulated again immediately after contraction, resulting in summation of contractions from repetitive nervous stimulation.
Action potentials in nerves are characterized by rapid depolarization followed by rapid repolarization and a very brief refractory period, which ensures that the nerve can quickly produce another action potential if stimulated.
Describe the function of pacemaker cells and tell what makes them automatic.
A pacemaker is an excitable cell or group of cells whose firing is spontaneous and rhythmic. Electrical activity begins in the pacemaker portion of the heart and spreads from cell to cell through membrane junctions over the rest of the heart. It synchronizes systole (contractions) and diastole (relaxations).
In vertebrates, the pacemaker is the sinoatrial (SA) node, a remnant of the sinus venosus. It contains specialized muscle cells that do not require external stimulation. These cells are considered myogenic (of muscle origin) as opposed to neurogenic (of neuronal origin). They have an unstable resting potential and therefore steadily depolarize to their threshold voltage, at which point an action potential is generated and the muscle contracts. This capacity for spontaneous activity lies within all cardiac cells, so more than one pacemaker can exist in the heart, but only one group of cells, the group with the fastest inherent rhythm, determines the rate of heart contraction. Slower pacemakers allow the heart to continue functioning properly if the main pacemaker malfunctions. An ectopic pacemaker develops if a slower (latent) pacemaker falls out of sync with the rest of the pacemakers and leads one chamber to beat irregularly.
In invertebrates, it is not always clear whether an animal's heart is myogenic or neurogenic. The hearts of decapod crustaceans are neurogenic, and the pacemaker within their hearts is called the cardiac ganglion. If the ganglion is removed, the heart ceases to beat, although the isolated ganglion itself continues to show rhythmic activity. The ganglion does not alter its firing pattern on its own, but certain nerves from the central nervous system can modify the pattern of pacemaker firing and thereby change the rate of the heart.
The four main functions of arteries
The four main functions of arteries are that they:
- act as a conduit for blood between the heart and the capillaries;
- act as a pressure reservoir for forcing blood into the small-diameter arterioles;
- damp the oscillations in pressure and flow generated by the heartbeat, producing a more even flow of blood into the capillaries; and
- control the distribution of blood to different capillary networks via selective constriction of their terminal branches.
Explain the importance of blood pressure in the arterial system and how it is regulated.
Blood is carried to all parts of the body via the arteries, so it is important that enough blood reaches both the fingers and the gut, even though the first is much farther from the heart. To do this, the arterial system must maintain a sufficient blood pressure to keep blood moving through the whole body. The heart produces this pressure by ejecting blood into the arteries; an artery receiving ejected blood expands slightly as the pressure rises. The heart, however, also has a relaxed state in which the pressure it generates drops, so the artery must have a way to stay pressurized and keep the blood moving even while the heart is not pushing any. It does this by elastic recoil: the less blood in the artery, the more it narrows to keep pressure on the blood. If the arteries simply relaxed and allowed the pressure to drop, the blood would stop flowing and would not have enough pressure to reach the entire body.
Another control that the arteries are designed for is to keep the blood flowing evenly in all parts of the body. For example a human, when lying down, has its heart at the same level as the rest of the body so the arteries can all produce the same pressure. However, when we stand up the heart is now above most of the body and therefore the lower arteries have to produce a higher pressure to keep the blood flowing evenly. If the arteries did not do this, then there would be no blood pressure in the legs and that would cause several problems. This is all controlled by the arteries’ ability to expand and contract as to keep an even blood pressure and flow.
One very important function of blood pressure is to ensure the exchange of the interstitial fluids that continually bathe the cells of the body. Because blood pressure at the arterial end of a capillary is greater than the colloid osmotic pressure of the surrounding tissues, water leaves the capillary and flows into the interstitial space among the surrounding cells. At the venous end of the capillary, the colloid osmotic pressure exceeds blood pressure, and fluid is drawn back into the plasma from the surrounding extracellular space. This helps to exchange the fluids surrounding the cells and remove metabolic wastes.
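The balance described here amounts to a simple comparison of pressures at each end of the capillary. The sketch below uses assumed, illustrative pressure values (not taken from the text) to show how the direction of fluid movement follows from that comparison.

```python
# Illustrative sketch of capillary fluid exchange; pressures are assumed example values in mmHg.

def net_fluid_movement(blood_pressure, colloid_osmotic_pressure):
    """Positive result: fluid is pushed out of the capillary (filtration).
    Negative result: fluid is drawn back into the capillary (absorption)."""
    return blood_pressure - colloid_osmotic_pressure

arterial_end = net_fluid_movement(blood_pressure=35, colloid_osmotic_pressure=25)
venous_end = net_fluid_movement(blood_pressure=15, colloid_osmotic_pressure=25)

print("Arterial end:", arterial_end, "mmHg ->", "filtration" if arterial_end > 0 else "absorption")
print("Venous end:  ", venous_end, "mmHg ->", "filtration" if venous_end > 0 else "absorption")
```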
Another very important function of blood pressure is to maintain proper kidney function. The kidneys are important in filtering wastes and removing some potentially dangerous chemicals from the blood. The filtering process in the kidneys is driven by blood pressure in the renal arteries. If blood pressure drops too low, the kidneys can no longer function.
The kidneys and the heart both play important roles in regulating blood pressure. When blood pressure is low, renal filtrate moves slowly through the nephron, resulting in a low sodium concentration in the filtrate in the distal convoluted tubule. This is sensed by the cells of the macula densa, and results in the release of renin from the secretory cells of the juxtaglomerular apparatus. This begins a chain of physiological events that includes the formation of angiotensin II. Angiotensin II helps bring blood pressure back up by (1) causing vasoconstriction in arterioles throughout much of the body, and (2) promoting increased synthesis of antidiuretic hormone (ADH), which increases resorption of water in the collecting ducts of the kidney, thereby increasing blood volume. Angiotensin II also promotes the release of aldosterone from the adrenal cortex, which promotes retention of both sodium and water, thereby helping to bring blood pressure back up.
Stretch receptors in the heart monitor the volume of blood returning to the atria. If blood volume and blood pressure get a bit too high, the atria release atrial natriuretic peptide (ANP), which inhibits the release of renin, ADH, and aldosterone. This reduces water resorption in the kidneys, thereby increasing urine production and reducing blood volume and blood pressure.
The three types of capillaries
Capillaries are used to transport gases, nutrients and waste products into the blood. They are normally about 1 mm long and 3-10 micrometers in diameter. There is not one cell in the entire body that is more than three or four cells away from a capillary. This is very important because every cell needs to be able to absorb oxygen and nutrients and get rid of metabolic wastes. In a mammal there are three types of capillaries: continuous, fenestrated and sinusoidal.
Continuous capillaries are made up of an endothelium 0.2-0.4 micrometers thick that rests on a basement membrane. The endothelial cells are separated by clefts. Each cell has many vesicles that can be used for transporting substances into and out of the capillary. The transfer of products through the wall occurs either through or between the endothelial cells. Lipid-soluble substances can be transferred through the cells, while water and ions have to be transported between cells in the clefts. The role of the vesicles is still being studied, but certain studies have shown that the brain uses vesicles as a mechanism of transport.
Fenestrated capillaries are found in the glomerulus and the gut. They consist of an endothelial wall with vesicles. The difference is that fenestrated capillaries have pores that perforate the cells instead of clefts. There is still a basement membrane, and it is complete, which enables the wall to move certain substances across. Transport occurs through the pores, which can handle all types of materials except large proteins and red blood cells. There is no evidence that vesicles play a role in transport in these cells.
Sinusoidal capillaries have paracellular gaps in the endothelial cells. This, combined with the basement membrane not being complete, allows many materials to be transported across. There are no vesicles in the cells so the paracellular gaps are the only area where substances can be transported. These capillaries are found in the liver and bone.
Adaptations of heart function that have evolved in air-breathing and water-breathing fishes
The structure of the heart varies among vertebrates, although its basic function remains the same: to deliver pressurized blood to the body. The hearts of air-breathing fishes differ from those of water-breathing fishes in ways that suit their respective environmental conditions. In water-breathing fishes, such as elasmobranchs, the heart consists of four chambers in series, all of which are contractile. These chambers are the sinus venosus, atrium, ventricle, and conus (bulbus in some fishes). Blood flows unidirectionally through the heart; this is maintained by valves at the sinoatrial and atrioventricular junctions and at the exit of the ventricle. In elasmobranchs, for example, the four chambers are interconnected but have many valves between them. Blood flows through the sinus venosus into the atrium while the heart is relaxed. When the atrium contracts, the atrioventricular valves open, as does the conus valve, and blood flows into the ventricle and conus. The valve in the conus most distal to the ventricle remains closed so that blood does not enter the aorta before enough pressure is gained. Once enough pressure builds up, ventricular contraction occurs and the atrioventricular valves close. At the onset of ventricular contraction, conal contraction begins to let the blood flow into the ventral aorta. The valves in the conus proximal to the heart close to prevent backflow into the ventricle. The still-deoxygenated blood then travels through the ventral aorta to the gills. This flow to the gills, where gas and ionic exchange occur, is known as the gill circulation. After this, the oxygenated blood goes through the dorsal aorta to the rest of the body; this is known as the systemic circulation. The gill circulation is under higher pressure than the systemic circulation, although the consequences of the higher blood pressure there are not clear.
In contrast, air breathing fishes do not use their gills as the only method of oxygen intake and gas exchange. They must rise to the surface to take in air bubbles to supplement the intake of oxygen. In some species, the gills of air-breathing fishes are so small that only 20% of the oxygen is obtained through the gills. Thus the gills’ main purpose is not for oxygen intake but for carbon dioxide excretion, ammonia excretion, and ion exchange. Fishes will use structures such as part of the gut or mouth, skin surface or even gas bladder to take up oxygen from the air, but cannot use gills when they are exposed to air because they collapse and stick together and thus cannot function. Because these fishes must use other structures for respiration, oxygenated and deoxygenated blood has to be directed to obtain maximum oxygen intake. In order to do this, oxygenated and deoxygenated blood must be separated so that the deoxygenated blood can be directed to the correct part; either the gills or the air-breathing organ. An example is in Channa argus, a fish which has a division in the ventral aorta. The anterior ventral aorta supplies blood to the first two gill arches and the air breathing organ, while the posterior ventral aorta supplies blood to the posterior arches. Thus deoxygenated blood can go to the first arches and air breathing organ, while oxygenated blood goes to the posterior arches and then the rest of the body. The blood does not mix thanks to some features such as arrangement of veins bringing blood to the heart, and muscular ridges of the bulbus. Other fish have a more divided heart to prevent mixing of blood. The lungfish has a partial septum in both the atrium and ventricle and spiral folds in the bulbus that allows this to take place. Thus deoxygenated blood will flow into the gills then into the lungs, back to the heart again and then into the dorsal aorta. However if the lung is not being used then the blood will flow from the gills through the ductus into the dorsal aorta without passing through the lung or going back to the heart.
All of these are adaptations that suit air-breathing fishes to their low-oxygen aquatic environments.
Describe some adaptations of air-breathing diving animals which allow them to stay submerged for long periods of time.
Diving air-breathing animals have evolved features that allow them to remain submerged for extended periods of time. All diving animals rely on oxygen stores in the blood, because the animal stops breathing while in a dive. The cardiovascular system delivers the stored oxygen supply to the brain, heart, and some endocrine tissues that cannot withstand a lack of oxygen. When an animal dives, the continued utilization of oxygen causes a decrease in blood oxygen levels and a rise in carbon dioxide levels. This in turn stimulates arterial chemoreceptors, which cause peripheral vasoconstriction, bradycardia, and a fall in cardiac output. Blood flow to many tissues, such as muscles and kidneys, is thus reduced, and consequently more blood and oxygen are conserved for the brain and heart. Sometimes during a dive arterial pressure increases, and in order for bradycardia to be maintained, chemoreceptor and baroreceptor discharge frequency increases. The decrease in blood oxygen levels, the rise in carbon dioxide levels, and the decrease in pH drive the discharge of these receptors, which maintains a sufficient oxygen supply to the brain and heart during the dive. The effect of arterial chemoreceptor discharge is different in diving animals than in non-diving animals. In non-diving animals, stimulation of arterial chemoreceptors results in an increase in lung ventilation; the high carbon dioxide and low oxygen levels cause vasodilation, which leads to an increase in cardiac output, and the vasodilation helps the newly acquired oxygen reach all parts of the body faster. As we have seen, hypoxia caused by cessation of breathing (as in a dive) is associated with bradycardia and a decrease in cardiac output, while hypoxia in an animal that continues to breathe is associated with increased heart rate and cardiac output.
The lymphatic system and its uses
The lymphatic system is a system of vessels that returns excess fluid and proteins to the blood and transports large molecules to the blood. Lymph vessels also absorb the end products of fat digestion in the small intestine. Lymph is a transparent, yellowish fluid that is gathered from the interstitial fluid and returned to the blood via the lymphatic system. Lymph contains many white blood cells, but because the fluid is nearly transparent, lymph vessels are quite hard to see. The lymphatic capillaries drain the fluid in the interstitial spaces and converge into larger vessels, much as blood capillaries do. The larger lymphatic vessels are somewhat like veins; they empty into the blood circulation at low pressure via a duct near the heart, which in mammals is called the thoracic duct.
Fluid flows easily into lymph vessels because the pressure inside them is lower. The vessels are valved to prevent backflow into the capillaries. Pressure in the vessels can become higher if they are surrounded by smooth muscle cells under autonomic control. All movements of the body promote lymph flow. If lymph production exceeds lymph flow, edema (swelling) results. In severe, chronic cases, elephantiasis can develop, in which the affected tissues swell and harden.
In reptiles and amphibians, lymph hearts help move the lymphatic fluid; in these animals, lymph output is more comparable to cardiac output than it is in mammals. In fish, it appears that the lymphatic system is either absent or very rudimentary.
The lymphatic system also participates in circulation and the body’s immune response. Leukocytes (white blood cells) are in blood and lymph. Lymphocytes are prevalent in lymph nodes (along lymph vessels) and these nodes filter lymph and bring antigens in contact with lymphocytes. Leukocytes leave the lymphatic and circulatory systems by extravasation at sites of infection. They roll past infected tissues, adhere to cells and are able to pass between them. | http://www.cartage.org.lb/en/themes/Sciences/Zoology/AnimalPhysiology/CirculatorySystem/CirculatorySystem.htm | 13 |
98 | Torque, also called moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis, fulcrum, or pivot. Just as a force is a push or a pull, a torque can be thought of as a twist.
Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a torque (turning force) that loosens or tightens the nut or bolt.
The terminology for this concept is not straightforward: In physics, it is usually called "torque", and in mechanical engineering, it is called "moment". However, in mechanical engineering, the term "torque" means something different, described below. In this article, the word "torque" is always used in the physics sense, synonymous with "moment" in engineering.
The magnitude of torque depends on three quantities: First, the force applied; second, the length of the lever arm connecting the axis to the point of force application; and third, the angle between the two. In symbols:
The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a mechanical advantage.
The SI unit for torque is the newton metre (N·m). In Imperial and U.S. customary units, it is measured in foot pounds (ft·lbf) (also known as 'pound feet') and for smaller measurement of torque: inch pounds (in·lbf) or even inch ounces (in·ozf). For more on the units of torque, see below.
In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. "Moment" is the general term for the tendency of one or more applied forces to rotate an object about an axis (the concept which in physics is called torque). "Torque" is a special case of this: If the applied force vectors add to zero (i.e., their "resultant" is zero), then the forces are called a "couple" and their moment is called a "torque".
For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a moment (called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque".
This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple.
The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia, and angular acceleration, respectively.
A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.
More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product:
τ = r × F
where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by
τ = r F sin θ
where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively,
τ = r F⊥
where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque.
It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is determined by the right-hand rule.
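As an illustration of the cross-product definition, here is a minimal numerical sketch (not part of the original article); the position and force vectors are arbitrary example values, and NumPy is assumed to be available.

```python
import numpy as np

# Position of the point where the force is applied, relative to the pivot (metres).
r = np.array([0.5, 0.0, 0.0])
# Applied force (newtons), chosen here to be perpendicular to r.
F = np.array([0.0, 10.0, 0.0])

tau = np.cross(r, F)             # torque vector: perpendicular to both r and F
magnitude = np.linalg.norm(tau)  # equals |r| |F| sin(theta)

print("Torque vector:", tau)           # [0. 0. 5.] N·m, along +z by the right-hand rule
print("Torque magnitude:", magnitude)  # 5.0 N·m, the same as the spanner example given later
```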
The torque on a body determines the rate of change of the body's angular momentum,
τ = dL/dt
where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum:
τnet = dL/dt
For rotation about a fixed axis,
τ = Iα
where I is the moment of inertia of the body and α is its angular acceleration, measured in rad/s².
The definition of angular momentum for a single particle is:
L = r × p
This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear momentum p = mv,
Then by definition, torque τ = r × F.
If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that
The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected.
Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian. The unit newton metre is properly denoted N·m or N m. This avoids ambiguity—for example, mN is the symbol for millinewton.
The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of using different unit names for them helps avoid mistakes and misunderstandings. The dimensional equivalence of these units, of course, is not simply a coincidence: A torque of 1 N·m applied through a full revolution will require an energy of exactly 2π joules. Mathematically,
Other non-SI units of torque include "pound-force-feet", "foot-pounds-force", "inch-pounds-force", "ounce-force-inches", and "metre-kilograms-force". For all these units, the word "force" is often left out, for example abbreviating "pound-force-foot" to simply "pound-foot". (In this case, it would be implicit that the "pound" is pound-force and not pound-mass.)
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
torque = (moment arm) × (force)
The construction of the "moment arm" is shown in the figure below, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force:
For example, if a person places a force of 10 N on a spanner (wrench) which is 0.5 m long, the torque will be 5 N m, assuming that the person pulls the spanner by applying force perpendicular to the spanner.
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations: ΣH = 0 and ΣV = 0, and the torque a third equation: Στ = 0. That is, to solve statically determinate equilibrium problems in two-dimensions, we use three equations.
When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of your point of reference.
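To make the equilibrium conditions concrete, the short sketch below checks ΣV = 0 and Στ = 0 for an invented example: a massless beam resting on a pivot with two downward loads. The numbers are assumptions chosen purely for illustration.

```python
# Static equilibrium of a massless beam pivoted at x = 0.
# Each load is (position in metres, vertical force in newtons); downward is negative.
loads = [(-1.0, -30.0),   # 30 N weight, 1.0 m to the left of the pivot
         (+0.5, -60.0)]   # 60 N weight, 0.5 m to the right of the pivot

support_reaction = -sum(force for _, force in loads)       # upward force supplied by the pivot
net_vertical = support_reaction + sum(force for _, force in loads)
net_torque = sum(x * force for x, force in loads)          # the reaction acts at x = 0, so it adds no torque

print("Net vertical force:", net_vertical, "N")           # 0.0 -> translational equilibrium
print("Net torque about the pivot:", net_torque, "N·m")   # 0.0 -> rotational equilibrium
```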
Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by its rotational speed of the axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer, and shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak. The torque peak cannot, by definition, appear at higher rpm than the power peak.
Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the wheels. Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive wheels is equal to engine power less mechanical losses regardless of any gearing between the engine and drive wheels.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam engines can start heavy loads from zero RPM without a clutch.
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass,
W = ∫ τ dθ, integrated from θ1 to θ2,
where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body. It follows from the work-energy theorem that W also represents the change in the rotational kinetic energy Krot of the body, given by
Krot = ½ I ω²
where I is the moment of inertia of the body and ω is its angular speed.
The power delivered by a torque is given by power = torque × angular speed. Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any).
In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation.
Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar.
For different units of power, torque, or angular speed, a conversion factor must be inserted into the equation. Also, if rotational speed (revolutions per time) is used in place of angular speed (radians per time), a conversion factor of 2π must be added because there are 2π radians in a revolution:
power = torque × 2π × rotational speed,
where rotational speed is in revolutions per unit time.
A useful formula in SI units:
power (kW) = torque (N·m) × 2π × rotational speed (rpm) / 60,000,
where 60,000 comes from 60 seconds per minute times 1000 watts per kilowatt.
Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm (revolutions per minute) for angular speed. This results in the formula changing to:
power (hp) = torque (lbf·ft) × 2π × rotational speed (rpm) / 33,000
The constant below in, ft·lbf./min, changes with the definition of the horsepower; for example, using metric horsepower, it becomes ~32,550.
Use of other units (e.g. BTU/h for power) would require a different custom conversion factor.
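The conversion formulas above can be collected into two small helper functions. This is only a sketch of the relationships quoted in the text (SI units and imperial mechanical horsepower); the engine figures fed into them below are made-up example values.

```python
import math

def power_kw(torque_nm, rpm):
    """Power in kW from torque in N·m and rotational speed in rpm (60,000 = 60 s/min x 1,000 W/kW)."""
    return torque_nm * 2 * math.pi * rpm / 60_000

def power_hp(torque_lbf_ft, rpm):
    """Power in imperial mechanical horsepower from torque in lbf·ft and rpm (33,000 ft·lbf/min per hp)."""
    return torque_lbf_ft * 2 * math.pi * rpm / 33_000

# Assumed example: an engine producing 250 N·m (about 184 lbf·ft) at 4,000 rpm.
print(round(power_kw(250.0, 4000), 1), "kW")    # ~104.7 kW
print(round(power_hp(184.4, 4000), 1), "hp")    # ~140.4 hp, the same power in different units
```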
For a rotating object, the linear distance covered at the circumference in a radian of rotation is the product of the radius with the angular speed. That is: linear speed = radius × angular speed. By definition, linear distance=linear speed × time=radius × angular speed × time.
By the definition of torque: torque = force × radius. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power:
power = force × linear speed = (torque ÷ radius) × (radius × angular speed) = torque × angular speed
The radius r and time t have dropped out of the equation. However, angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give:
power = torque × 2π × rotational speed
If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft·lbf/min per horsepower:
power (hp) = torque (lbf·ft) × 2π × rotational speed (rpm) / 33,000
The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name), states that the sum of torques due to several forces applied to a single point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from:
r × F1 + r × F2 + … = r × (F1 + F2 + …)
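A quick numerical check of the principle of moments, using arbitrary example vectors (this sketch is an illustration only and assumes NumPy is available):

```python
import numpy as np

r = np.array([1.0, 2.0, 0.0])                 # common point of application of all the forces
forces = [np.array([3.0, 0.0, 0.0]),
          np.array([0.0, -4.0, 1.0]),
          np.array([2.0, 5.0, -3.0])]

sum_of_torques = sum(np.cross(r, F) for F in forces)   # torque of each force, then summed
torque_of_resultant = np.cross(r, sum(forces))         # torque of the resultant force

print(sum_of_torques)
print(torque_of_resultant)
print(np.allclose(sum_of_torques, torque_of_resultant))  # True, as the theorem states
```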
The force applied to a lever, multiplied by the distance from the lever's fulcrum, is described as torque.
The equation for torque is:
τ = r × F
where, F is the force vector, and r is the vector from the axis of rotation to the point where the force is acting.
Scalar - quantities with magnitudes. Vector - quantities with magnitudes and direction. | http://www.thefullwiki.org/Torque | 13 |
57 | Friction is the force that opposes the relative motion or tendency of such motion of two surfaces in contact. It is not, however, a fundamental force, as it originates from the electromagnetic forces between atoms. Friction between solid objects and gases or liquids is called fluid friction.
The classical approximation of the force of friction, known as Coulomb friction (named after Charles-Augustin de Coulomb), is expressed as Ff = μN, where μ is the coefficient of friction, N is the force normal to the contact surface, and Ff is the force exerted by friction. This force is exerted in the direction opposite the object's motion.
This simple (although incomplete) representation of friction is adequate for the analysis of many physical systems.
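As a minimal sketch of the Coulomb approximation Ff = μN (the coefficients and load below are assumed example values, not measurements):

```python
def coulomb_friction(mu, normal_force_n):
    """Friction force in newtons under the Coulomb approximation Ff = mu * N.
    The force acts opposite to the motion (or attempted motion)."""
    return mu * normal_force_n

# Assumed values: a 1,000 N load on rubber/pavement (high mu) versus ice/metal (low mu).
print(coulomb_friction(1.0, 1000.0), "N")   # rubber on pavement resists sliding strongly
print(coulomb_friction(0.03, 1000.0), "N")  # ice on metal slides far more easily
```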
Coefficient of friction
The coefficient of friction (also known as the frictional coefficient) is a dimensionless scalar value which describes the ratio of the force of friction between two bodies and the force pressing them together. The coefficient of friction depends on the materials used -- for example, ice on metal has a low coefficient of friction (they slide past each other easily), while rubber on pavement has a high coefficient of friction (they do not slide past each other easily). Coefficients of friction need not be less than 1 - under good conditions, a tire on concrete may have a coefficient of friction of 1.7.
Sliding (dynamic) friction and static friction are distinct concepts. For sliding friction, the force of friction does not vary with the area of contact between the two objects. This means that sliding friction does not depend on the size of the contact area.
When the surfaces are adhesive, Coulomb friction becomes a very poor approximation (for example, Scotch tape resists sliding even when there is no normal force, or a negative normal force). In this case, the frictional force may depend on the area of contact. Some drag racing tires are adhesive in this way.
The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces. For example, a curling stone sliding along the ice experiences a kinetic friction force slowing it down. For an example of potential movement, the drive wheels of an accelerating car experience a frictional force pointing forward; if they did not, the wheels would spin, and the rubber would slide backwards along the pavement. Note that it is not the direction of movement of the vehicle they oppose, it is the direction of (potential) sliding between tire and road.
The coefficient of friction is an empirical measurement -- it has to be measured experimentally, and cannot be found through calculations. Rougher surfaces tend to have higher values. Most dry materials in combination give friction coefficient values from 0.3 to 0.6. It is difficult to maintain values outside this range. A value of 0.0 would mean there is no friction at all. Rubber in contact with other surfaces can yield friction coefficients from 1.0 to 2.0. A system with "interlocking teeth" between surfaces may be indistinguishable from friction, if the "teeth" are small, such as the grains on two sheets of sandpaper or even molecule-sized "teeth".
Types of friction
Static friction (informally known as stiction) occurs when the two objects are not moving relative to each other (like a desk on the ground). The coefficient of static friction is typically denoted as μs. The initial force required to get an object moving is often dominated by static friction.
- Rolling friction occurs when two objects are moving relative to each other and one "rolls" on the other (like a car's wheels on the ground). This is classified under static friction because the patch of the tire in contact with the ground, at any point while the tire spins, is stationary relative to the ground. The coefficient of rolling friction is typically denoted as μr.
Kinetic (or dynamic) friction occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as μk, and is usually less than the coefficient of static friction. From the mathematical point of view, however, the difference between static and kinetic friction is of minor importance: suppose we have a coefficient of friction which depends on the displacement velocity and is such that its value at 0 (the static friction μs) is the limit of the kinetic friction μk for the velocity tending to zero. Then a solution of the contact problem with such Coulomb friction also solves the problem with the original μk and any static friction greater than that limit.
Examples of kinetic friction:
- Sliding friction is when two objects are rubbing against each other. Putting a book flat on a desk and moving it around is an example of sliding friction
- Fluid friction is the friction between a solid object as it moves through a liquid or a gas. The drag of air on an airplane or of water on a swimmer are two examples of fluid friction.
When an object is pushed along a surface with coefficient of friction μk and a perpendicular (normal) force of magnitude N acting on that object directed towards the surface, then the energy lost by the object is given by:
energy lost = μk N d,
where d is the distance travelled by the object whilst in contact with the surface. This equation is identical to Energy Loss = Force x Distance as the frictional force is a non-conservative force. Note, this equation only applies to kinetic friction, not rolling friction.
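Continuing the same sketch, the energy dissipated while dragging an object follows directly from the formula above; the load, coefficient, and distance here are again just assumed examples.

```python
def friction_energy_loss(mu_k, normal_force_n, distance_m):
    """Energy lost to kinetic friction in joules: force (mu_k * N) times the distance slid (d)."""
    return mu_k * normal_force_n * distance_m

# Assumed example: dragging a crate pressing on the floor with 200 N over 5 m, with mu_k = 0.4.
print(friction_energy_loss(0.4, 200.0, 5.0), "J")  # 400 J, dissipated mostly as heat
```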
Physical deformation is associated with friction. While this can be beneficial, as in polishing, it is often a problem, as the materials are worn away, and may no longer hold the specified tolerances.
The work done by friction can translate into deformation and heat that in the long run may affect the surface's specification and the coefficient of friction itself. Friction can in some cases cause solid materials to melt.
Limiting friction is the maximum value of static friction, or the force of friction that acts when a body is just on the verge of motion on a surface.
Devices such as ball bearings can change sliding friction into the less significant rolling friction.
One technique used by railroad engineers is to back up the train to create slack in the linkages between cars. This allows the train to pull forward and only take on the static friction of one car at a time, instead of all cars at once, thus spreading the static frictional force out over time.
Generally, when moving an object over a distance: To minimize work against static friction, the movement is performed in a single interval, if possible. To minimize work against kinetic friction, the movement is performed at the lowest velocity that's practical. This also minimizes frictional stress.
A common way to reduce friction is by using a lubricant, such as oil, that is placed between the two surfaces, often dramatically lessening the coefficient of friction. The science of friction and lubrication is called tribology. Lubricant technology is the application of this science to the formulation and use of lubricants, especially for industrial or commercial purposes.
Superlubricity, a recently-discovered effect, has been observed in graphite: it is the substantial decrease of friction between two sliding objects, approaching zero levels - a very small amount of frictional energy would be dissipated due to electronic and atomic vibrations.
Energy of friction
According to the law of conservation of energy, no energy is destroyed by friction. The kinetic energy lost is transformed primarily into heat or motion of other objects. In some cases, the "other object" to be accelerated may be the Earth. A sliding hockey puck comes to rest due to friction both by changing its energy into heat and by accelerating the Earth in its direction of travel (by an immeasurably tiny amount). Since heat quickly dissipates, and the change in velocity of the Earth cannot be seen, many early philosophers, including Aristotle, wrongly concluded that moving objects lose energy without a driving force.
- Tipler, Paul (1998). Physics for Scientists and Engineers, Vol. 1 (4th ed.). W. H. Freeman. ISBN 1572594926.
|This page uses Creative Commons Licensed content from Wikipedia (view authors).| | http://engineering.wikia.com/wiki/Friction | 13 |
73 | Quantity vs Numbers
An important distinction must be made between "Quantities" and "Numbers". A quantity is simply some amount of "stuff"; five apples, three pounds, and one automobile are all quantities of different things. A quantity can be represented by any number of different representations. For example, tick marks on a piece of paper, beads on a string, or stones in a pocket can all represent some quantity of something. One of the most familiar representations is the base-10 (or "decimal") number system, which consists of 10 digits, from 0 to 9. When more than 9 objects need to be counted, we make a new column with a 1 in it (which represents a group of 10), and we continue counting from there.
Computers, however, cannot count in decimal. Computer hardware uses a system where values are represented internally as a series of voltage differences. For example, in most computers, a +5V level is interpreted as a "1" digit, and a 0V level is interpreted as a "0" digit. There are no other digits possible! Thus, computers must use a numbering system that has only two digits (0 and 1): the "Binary", or "base-2", number system.
Binary Numbers
Understanding the binary number system is difficult for many students at first. It may help to start with a decimal number, since that is more familiar. It is possible to write a number like 1234 in "expanded notation," so that the value of each place is shown:
1234 = 1×10³ + 2×10² + 3×10¹ + 4×10⁰ = 1000 + 200 + 30 + 4
Notice that each digit is multiplied by successive powers of 10, since this is a decimal, or base-10, system. The "ones" digit ("4" in the example) is multiplied by 10⁰, or "1". Each digit to the left of the "ones" digit is multiplied by the next higher power of 10, and that is added to the preceding value.
Now, do the same with a binary number; but since this is a "base 2" number, replace powers of 10 with powers of 2. For example:
1011₂ = 1×2³ + 0×2² + 1×2¹ + 1×2⁰ = 8 + 0 + 2 + 1 = 11₁₀
The subscripts indicate the base. Note that in the above equations, the two sides are simply different ways of writing the same quantity.
Binary numbers are the same as their equivalent decimal numbers; they are just a different way to represent a given quantity. To be very simplistic, it does not really matter whether you have 1011₂ or 11₁₀ apples: you can still make a pie.
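The expansions above are easy to check programmatically. The short sketch below is only an illustration of the place-value idea (it is not part of the original text) and uses Python's built-in conversions for comparison.

```python
def binary_to_decimal(bits):
    """Expand a binary string digit by digit: each bit is worth twice the place to its right."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)   # shift the digits gathered so far, then add the new one
    return value

print(binary_to_decimal("1011"))   # 11
print(int("1011", 2))              # 11 -- Python's built-in conversion does the same expansion
print(bin(11))                     # '0b1011' -- and converts back again
```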
The term Bits is short for the phrase Binary Digits. Each bit is a single binary value: 1 or zero. Computers generally represent a 1 as a positive voltage (5 volts or 3.3 volts are common values), and a zero as 0 volts.
Most Significant Bit and Least Significant Bit
In the decimal number 48723, the "4" digit represents the largest power of 10 (or 10⁴), and the "3" digit represents the smallest power of 10 (10⁰). Therefore, in this number, 4 is the most significant digit and 3 is the least significant digit. Consider a situation where a caterer needs to prepare 156 meals for a wedding. If the caterer makes an error in the least significant digit and accidentally makes 157 meals, it is not a big problem. However, if the caterer makes a mistake on the most significant digit, 1, and prepares 256 meals, that will be a big problem!
Now, consider a binary number: 101011. The Most Significant Bit (MSB) is the left-most bit, because it represents the greatest power of 2 (2⁵). The Least Significant Bit (LSB) is the right-most bit and represents the least power of 2 (2⁰).
Notice that MSB and LSB are not the same as the notion of "significant figures" that is used in other sciences. The decimal number 123000 has only 3 significant figures, but the most significant digit is 1 (the left-most digit), and the least significant digit is 0 (the right-most digit).
Standard Sizes
- a Nibble is 4 bits long. Nibbles can hold values from 0 to 15 (in decimal).
- a Byte is 8 bits long. Bytes can hold values from 0 to 255 (in decimal).
- a Word is 16 bits, or 2 bytes long. Words can hold values from 0 to 65535 (in Decimal). There is occasionally some confusion between this definition and that of a "machine word". See Machine Word below.
- a Double-word is 2 words long, or 4 bytes long. These are also known simply as "DWords". DWords are also 32 bits long; 32-bit computers therefore manipulate data that is the size of DWords.
- a Quad-word is 2 DWords long, 4 words long, and 8 bytes long. They are known simply as "QWords". QWords are 64 bits long, and are therefore the default data size in 64-bit computers.
- Machine Word
- A machine word is the length of the standard data size of a given machine. For instance, a 32-bit computer has a 32-bit machine word. Likewise 64-bit computers have a 64-bit machine word. Occasionally the term "machine word" is shortened to simply "word", leaving some ambiguity as to whether we are talking about a regular "word" or a machine word.
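The ranges quoted above follow directly from the number of bits: an n-bit unsigned value can hold 0 through 2ⁿ − 1. A small illustrative Python sketch (not part of the original text) prints the maximum unsigned value for each standard size:

sizes = {"nibble": 4, "byte": 8, "word": 16, "dword": 32, "qword": 64}
for name, bits in sizes.items():
    # an n-bit unsigned field holds values 0 .. 2**n - 1
    print(f"{name:6s} {bits:2d} bits  max = {2**bits - 1}")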
Negative Numbers
It would seem logical that to create a negative number in binary, the reader would only need to prefix the number with a "–" sign. For instance, the binary number 1101 can become negative simply by writing it as "–1101". This seems all fine and dandy until you realize that computers and digital circuits do not understand the minus sign. Digital circuits only have bits, and so bits must be used to distinguish between positive and negative numbers. With this in mind, there are a variety of schemes that are used to make binary numbers negative or positive: Sign and Magnitude, One's Complement, and Two's Complement.
Sign and Magnitude
Under a Sign and Magnitude scheme, the MSB of a given binary number is used as a "flag" to determine if the number is positive or negative. If the MSB = 0, the number is positive, and if the MSB = 1, the number is negative. This scheme seems awfully simple, except for one simple fact: arithmetic of numbers under this scheme is very hard. Let's say we have 2 nibbles: 1001 and 0111. Under sign and magnitude, we can translate them to read: -001 and +111. In decimal then, these are the numbers –1 and +7.
When we add them together, the sum of –1 + 7 = 6 should be the value that we get. However:
  001
+ 111
-----
  000
And that isn't right. What we need is a decision-making construct to determine if the MSB is set or not, and if it is set, we subtract, and if it is not set, we add. This is a big pain, and therefore sign and magnitude is not used.
One's Complement
Let's now examine a scheme where we define a negative number as being the logical inverse of a positive number. We will use the "!" operator to express a logical inversion on multiple bits. For instance, !001100 = 110011. 110011 is binary for 51, and 001100 is binary for 12, but in this case we are saying that 001100 = –110011, or that 110011 (binary) = –12 (decimal). Let's perform the addition again:
  001100  (12)
+ 110011  (−12)
--------
  111111
We can see that if we invert 000000₂ we get the value 111111₂, and therefore 111111₂ is negative zero! What exactly is negative zero? It turns out that in this scheme, positive zero and negative zero represent the same quantity, even though their bit patterns differ.
However, one's complement notation suffers because it has two representations for zero: all 0 bits, or all 1 bits. As well as being clumsy, this will also cause problems when we want to check quickly to see if a number is zero. This is an extremely common operation, and we want it to be easy, so we create a new representation, two's complement.
Two's Complement
Two's complement is a number representation that is very similar to one's complement. We find the negative of a number X using the following formula:
-X = !X + 1
Let's do an example. If we have the binary number 11001 (which is 25 in decimal), and we want to find the representation for –25 in two's complement, we follow two steps:
- Invert the numbers:
- 11001 → 00110
- Add 1:
- 00110 + 1 = 00111
Therefore –11001 = 00111. Let's do a little addition:
  11001
+ 00111
-------
  00000
Now, there is a carry from adding the two MSBs together, but this is digital logic, so we discard the carries. It is important to remember that digital circuits have capacity for a certain number of bits, and any extra bits are discarded.
Most modern computers use two's complement.
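As a sketch of the rule -X = !X + 1 (assuming a fixed 5-bit width to match the example above; the function name is just illustrative), the mask below models a digital circuit that simply discards any carry beyond the available bits:

BITS = 5
MASK = (1 << BITS) - 1            # 0b11111: keep only the lowest 5 bits

def twos_complement_negate(x):
    """Negate x in a fixed 5-bit two's complement representation."""
    return (~x + 1) & MASK        # invert the bits, add 1, discard extra bits

x = 0b11001                       # 25
neg = twos_complement_negate(x)
print(format(neg, "05b"))                # -> 00111
print(format((x + neg) & MASK, "05b"))   # -> 00000: the sum wraps to zero, as above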
Below is a diagram showing the representation held by these systems for all four-bit combinations:
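The diagram itself is not reproduced here, but the same information can be generated with a short illustrative Python script (the helper names are ours, not a standard library API) that interprets every four-bit pattern under the schemes discussed above:

def sign_magnitude(bits):
    sign = -1 if bits[0] == "1" else 1
    return sign * int(bits[1:], 2)

def ones_complement(bits):
    if bits[0] == "0":
        return int(bits, 2)
    # negative values: invert the bits and negate (note: 1111 is "negative zero")
    return -int("".join("1" if b == "0" else "0" for b in bits), 2)

def twos_complement(bits):
    value = int(bits, 2)
    return value - (1 << len(bits)) if bits[0] == "1" else value

print("bits  unsigned  sign/mag  one's  two's")
for i in range(16):
    b = format(i, "04b")
    print(f"{b}  {int(b, 2):8d}  {sign_magnitude(b):8d}  {ones_complement(b):5d}  {twos_complement(b):5d}")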
Signed vs Unsigned
One important fact to remember is that computers are dumb. A computer doesn't know whether a given set of bits represents a signed number or an unsigned number (or, for that matter, any number of other data objects). It is therefore important for the programmer (or the programmer's trusty compiler) to keep track of this information for us. Consider the bit pattern 100110:
- Unsigned: 38 (decimal)
- Sign+Magnitude: -6
- One's Complement: -25
- Two's Complement: -26
See how the representation we use changes the value of the number! It is important to understand that bits are bits, and the computer doesn't know what the bits represent. It is up to the circuit designer and the programmer to keep track of what the numbers mean.
Character Data
We've seen how binary numbers can represent unsigned values, and how they can represent negative numbers using various schemes. But now we have to ask ourselves, how do binary numbers represent other forms of data, like text characters? The answer is that there exist different schemes for converting binary data to characters. Each scheme acts like a map to convert a certain bit pattern into a certain character. There are 3 popular schemes: ASCII, UNICODE and EBCDIC.
The ASCII code (American Standard Code for Information Interchange) is the most common code for mapping bits to characters. ASCII uses only 7 bits, although since computers can only deal with 8-bit bytes at a time, ASCII characters have an unused 8th bit as the MSB. ASCII codes 0-31 are "control codes", characters that are not printable to the screen and are used by the computer to handle certain operations. Code 32 is a single space (hit the space bar). The character code for the character '1' is 49, '2' is 50, etc. Notice that in ASCII '2' = '1' + 1 (the character '1' plus the integer number 1). This is difficult for many people to grasp at first, so don't worry if you are confused.
Capital letters start with 'A' = 65 to 'Z' = 90. The lower-case letters start with 'a' = 97 to 'z' = 122.
Almost all the rest of the ASCII codes are different punctuation marks.
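A quick illustrative check of these code points, using Python's built-in ord() and chr():

print(ord("A"), ord("Z"))   # -> 65 90
print(ord("a"), ord("z"))   # -> 97 122
print(ord("1"), ord("2"))   # -> 49 50
print(chr(ord("1") + 1))    # -> '2'  (the character '1' plus the integer 1)
print(ord(" "))             # -> 32   (the space character)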
Extended ASCII
Since computers use data that is the size of bytes, it made no sense to have ASCII only contain 7 bits of data (which is a maximum of 128 character codes). Many companies therefore incorporated the extra bit into an "Extended ASCII" code set. These extended sets have a maximum of 256 characters to use. The first 128 characters are the original ASCII characters, but the next 128 characters are platform-defined. Each computer maker could define their own characters to fill in the last 128 slots.
When computers began to spread around the world, other languages began to be used by computers. Before too long, each country had its own character code sets, to represent their own letters. It is important to remember that some alphabets in the world have more than 256 characters! Therefore, the UNICODE standard was proposed. There are many different representations of UNICODE. Some of them use 2-byte characters, and others use different representations. The first 128 characters of the UNICODE set are the original ASCII characters.
For a more in-depth discussion of UNICODE, see this website.
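As an illustration of the point that UNICODE has several concrete representations (the example string here is arbitrary), Python's str.encode shows the same text occupying different numbers of bytes; UTF-8 is variable-width, while UTF-16 uses 2-byte units:

text = "héllo"                        # five characters, one of them non-ASCII
print(len(text.encode("utf-8")))      # -> 6  ('é' takes two bytes in UTF-8)
print(len(text.encode("utf-16-le")))  # -> 10 (two bytes per character here)
print(text.encode("utf-8"))           # -> b'h\xc3\xa9llo'  (ASCII letters keep their 1-byte values)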
EBCDIC (Extended Binary Coded Decimal Interchange Code) is a character code that was originally proposed by IBM, but was passed over in favor of ASCII. IBM however still uses EBCDIC in some of its supercomputers, mainframes, and server systems.
Octal is just like decimal and binary in that once one column is "full", you move on to the next. It uses the digits 0−7, and because the number of available digits is a power of two (8 = 2³), it has a useful property: it is easy to convert between octal and binary numbers. Consider the binary number 101110000. To convert this number to octal, we must first break it up into groups of 3 bits: 101, 110, 000. Then we simply add up the values of the bits in each group:

101₂ = 5, 110₂ = 6, 000₂ = 0

And then we string all the octal digits together:

- 101110000₂ = 560₈.
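A short illustrative conversion in Python (oct() and int() with an explicit base are built-ins):

b = "101110000"
print(oct(int(b, 2)))   # -> '0o560'
print(int(b, 2))        # -> 368
print(int("560", 8))    # -> 368, the same quantity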
Hexadecimal is a very common data representation. It is more common than octal, because it represents four binary digits per digit, and many digital circuits use multiples of four as their data widths.
Hexadecimal uses a base of 16. However, there is a difficulty: it requires 16 digits, and the common decimal number system only has ten digits to play with (0 through 9). So, to get the necessary number of digits, we use the letters A through F in addition to the digits 0−9. After the units column is full, we move on to the "16s" column, just as in binary and decimal.
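Likewise for hexadecimal, where each hex digit corresponds to exactly four bits (an illustrative Python sketch):

print(hex(0b101110000))        # -> '0x170'
print(int("AA11", 16))         # -> 43537
print(format(0xAA11, "016b"))  # -> '1010101000010001'  (A=1010, A=1010, 1=0001, 1=0001)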
Hexadecimal Notation
Depending on the source code you are reading, hexadecimal may be notated in one of several ways:
- 0xaa11: ANSI C notation. The 0x prefix indicates that the remaining digits are to be interpreted as hexadecimal. For example, 0x1000 is equal to 4096 in decimal.
- \xaa11: "C string" notation.
- 0aa11h: Typical assembly language notation, indicated by the h suffix. The leading 0 (zero) ensures the assembler does not mistakenly interpret the number as a label or symbol.
- $aa11: Another common assembly language notation, widely used in 6502/65816 assembly language programming.
- #AA11: BASIC notation.
- $aa11$: Business BASIC notation.
- aa1116: Mathematical notation, with the subscript indicating the number base.
- 16'hAA11: Verilog notation, where the "16" is the total length in bits.
Both uppercase and lowercase letters may be used. Lowercase is generally preferred in a Linux, UNIX or C environment, while uppercase is generally preferred in a mainframe or COBOL environment. | http://en.wikibooks.org/wiki/Digital_Circuits/Representations | 13 |
72 | Introduction to Function Inverses
- Let's think about what functions really do, and then
- we'll think about the idea of an inverse of a function.
- So let's start with a pretty straightforward function.
- Let's say f of x is equal to 2x plus 4.
- And so if I take f of 2, f of 2 is going to be equal to 2 times
- 2 plus 4, which is 4 plus 4, which is 8.
- I could take f of 3, which is 2 times 3 plus 4,
- which is equal to 10.
- 6 plus 4.
- So let's think about it in a little bit more
- of an abstract sense.
- So there's a set of things that I can input into this function.
- You might already be familiar with that notion.
- It's the domain.
- The set of all of the things that I can input into that
- function, that is the domain.
- And in that domain, 2 is sitting there, you have 3 over
- there, pretty much you could input any real number
- into this function.
- So this is going to be all real, but we're making it a
- nice contained set here just to help you visualize it.
- Now, when you apply the function, let's think about
- what it means to take f of 2.
- We're inputting a number, 2, and then the function is
- outputting the number 8.
- It is mapping us from 2 to 8.
- So let's make another set here of all of the possible values
- that my function can take on.
- And we can call that the range.
- There are more formal ways to talk about this, and there's a
- much more rigorous discussion of this later on, especially in
- the linear algebra playlist, but this is all the different
- values I can take on.
- So if I take the number 2 from our domain, I input it into the
- function, we're getting mapped to the number 8.
- So let's let me draw that out.
- So we're going from 2 to the number 8 right there.
- And it's being done by the function.
- The function is doing that mapping.
- That function is mapping us from 2 to 8.
- This right here, that is equal to f of 2.
- Same idea.
- You start with 3, 3 is being mapped by the function to 10.
- It's creating an association.
- The function is mapping us from 3 to 10.
- Now, this raises an interesting question.
- Is there a way to get back from 8 to the 2, or is there a
- way to go back from the 10 to the 3?
- Or is there some other function?
- Is there some other function, we can call that the inverse
- of f, that'll take us back?
- Is there some other function that'll take
- us from 10 back to 3?
- We'll call that the inverse of f, and we'll use that as
- notation, and it'll take us back from 10 to 3.
- Is there a way to do that?
- Will that same inverse of f, will it take us back from--
- if we apply 8 to it-- will that take us back to 2?
- Now, all this seems very abstract and difficult.
- What you'll find is it's actually very easy to solve for
- this inverse of f, and I think once we solve for it, it'll
- make it clear what I'm talking about.
- That the function takes you from 2 to 8, the inverse will
- take us back from 8 to 2.
- So to think about that, let's just define-- let's just
- say y is equal to f of x.
- So y is equal to f of x, is equal to 2x plus 4.
- So I can write just y is equal to 2x plus 4, and this once
- again, this is our function.
- You give me an x, it'll give me a y.
- But we want to go the other way around.
- We want to give you a y and get an x.
- So all we have to do is solve for x in terms of y.
- So let's do that.
- If we subtract 4 from both sides of this equation-- let me
- switch colors-- if we subtract 4 from both sides of this
- equation, we get y minus 4 is equal to 2x, and then if we
- divide both sides of this equation by 2, we get y over 2
- minus 2-- 4 divided by 2 is 2-- is equal to x.
- Or if we just want to write it that way, we can just swap the
- sides, we get x is equal to 1/2y-- same thing as
- y over 2-- minus 2.
- So what we have here is a function of y that
- gives us an x, which is exactly what we wanted.
- We want a function of these values that map back to an x.
- So we can call this-- we could say that this is equal to--
- I'll do it in the same color-- this is equal to f inverse
- as a function of y.
- Or let me just write it a little bit cleaner.
- We could say f inverse as a function of y-- so we can have
- 10 or 8-- so now the range is now the domain for f inverse.
- f inverse as a function of y is equal to 1/2y minus 2.
- So all we did is we started with our original function, y
- is equal to 2x plus 4, we solved for-- over here, we've
- solved for y in terms of x-- then we just do a little bit of
- algebra, solve for x in terms of y, and we say that that is
- our inverse as a function of y.
- Which is right over here.
- And then, if we, you know, you can say this is-- you could
- replace the y with an a, a b, an x, whatever you want to do,
- so then we can just rename the y as x.
- So if you put an x into this function, you would get f
- inverse of x is equal to 1/2x minus 2.
- So all you do, you solve for x, and then you swap the y and the
- x, if you want to do it that way.
- That's the easiest way to think about it.
- And one thing I want to point out is what happens when you
- graph the function and the inverse.
- So let me just do a little quick and dirty
- graph right here.
- And then I'll do a bunch of examples of actually solving
- for inverses, but I really just wanted to give
- you the general idea.
- Function takes you from the domain to the range, the
- inverse will take you from that point back to the original
- value, if it exists.
- So if I were to graph these-- just let me draw a little
- coordinate axis right here, draw a little bit of a
- coordinate axis right there.
- This first function, 2x plus 4, its y intercept is going to be
- 1, 2, 3, 4, just like that, and then its slope will
- look like this.
- It has a slope of 2, so it will look something like-- its graph
- will look-- let me make it a little bit neater than that--
- it'll look something like that.
- That's what that function looks like.
- What does this function look like?
- What does the inverse function look like, as a function of x?
- Remember we solved for x, and then we swapped the x
- and the y, essentially.
- We could say now that y is equal to f inverse of x.
- So we have a y-intercept of negative 2, 1, 2, and
- now the slope is 1/2.
- The slope looks like this.
- Let me see if I can draw it.
- The slope looks-- or the line looks something like that.
- And what's the relationship here?
- I mean, you know, these look kind of related, it looks
- like they're reflected about something.
- It'll be a little bit more clear what they're reflected
- about if we draw the line y is equal to x.
- So the line y equals x looks like that.
- I'll do it as a dotted line.
- And you could see, you have the function and its inverse,
- they're reflected about the line y is equal to x.
- And hopefully, that makes sense here.
- Because over here, on this line, let's take
- an easy example.
- Our function, when you take 0-- so f of 0 is equal to 4.
- Our function is mapping 0 to 4.
- The inverse function, if you take f inverse of 4, f
- inverse of 4 is equal to 0.
- Or the inverse function is mapping us from 4 to 0.
- Which is exactly what we expected.
- The function takes us from the x to the y world, and then we
- swap it, we were swapping the x and the y.
- We would take the inverse.
- And that's why it's reflected around y equals x.
- So this example that I just showed you right here, function
- takes you from 0 to 4-- maybe I should do that in the function
- color-- so the function takes you from 0 to 4, that's the
- function f of 0 is 4, you see that right there, so it goes
- from 0 to 4, and then the inverse takes us
- back from 4 to 0.
- So f inverse takes us back from 4 to 0.
- You saw that right there.
- When you evaluate 4 here, 1/2 times 4 minus 2 is 0.
- The next couple of videos we'll do a bunch of examples so you
- really understand how to solve these and are able to do
- the exercises on our application for this.
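A small check of the worked example, added here for illustration and not part of the video: define f(x) = 2x + 4 and the inverse derived above, and verify that each undoes the other.

def f(x):
    return 2 * x + 4

def f_inverse(y):
    return 0.5 * y - 2

print(f(2), f(3))                        # -> 8 10
print(f_inverse(8), f_inverse(10))       # -> 2.0 3.0
print(f_inverse(f(0)), f(f_inverse(4)))  # -> 0.0 4.0  (round trips return the input)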
| http://www.khanacademy.org/math/algebra/algebra-functions/function_inverses/v/introduction-to-function-inverses | 13
92 |
The Big Bang theory is the prevailing cosmological model that describes the early development of the Universe. According to the theory, the Big Bang occurred approximately 13.798 ± 0.037 billion years ago, which is thus considered the age of the universe. At that time, the Universe was in an extremely hot and dense state and began expanding rapidly. After the initial expansion, the Universe cooled sufficiently to allow energy to be converted into various subatomic particles, including protons, neutrons, and electrons. Though simple atomic nuclei could have formed quickly, thousands of years were needed before the appearance of the first electrically neutral atoms. The first element produced was hydrogen, along with traces of helium and lithium. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies, and the heavier elements were synthesized either within stars or during supernovae.
The Big Bang is a well-tested scientific theory and is widely accepted within the scientific community. It offers a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background, large scale structure, and the Hubble diagram for Type Ia supernovae. The core ideas of the Big Bang—the expansion, the early hot state, the formation of helium, and the formation of galaxies—are derived from these and other observations that are independent of any cosmological model. As the distance between galaxy clusters is increasing today, it is inferred that everything was closer together in the past. This idea has been considered in detail back in time to extreme densities and temperatures, and large particle accelerators have been built to experiment in such conditions, resulting in further development of the model. On the other hand, these accelerators have limited capabilities to probe into such high energy regimes. There is little evidence regarding the absolute earliest instant of the expansion. Thus, the Big Bang theory cannot and does not provide any explanation for such an initial condition; rather, it describes and explains the general evolution of the universe going forward from that point on.
Georges Lemaître first proposed what became the Big Bang theory in what he called his "hypothesis of the primeval atom". Over time, scientists built on his initial ideas to form the modern synthesis. The framework for the Big Bang model relies on Albert Einstein's general relativity and on simplifying assumptions such as homogeneity and isotropy of space. The governing equations had been formulated by Alexander Friedmann. In 1929, Edwin Hubble discovered that the distances to far away galaxies were generally proportional to their redshifts—an idea originally suggested by Lemaître in 1927. Hubble's observation was taken to indicate that all very distant galaxies and clusters have an apparent velocity directly away from our vantage point: the farther away, the higher the apparent velocity.
While the scientific community was once divided between supporters of the Big Bang and those of the Steady State theory, most scientists became convinced that some version of the Big Bang scenario best fit observations after the discovery of the cosmic microwave background radiation in 1964, and especially when its spectrum (i.e., the amount of radiation measured at each wavelength) was found to match that of thermal radiation from a black body. Since then, astrophysicists have incorporated a wide range of observational and theoretical additions into the Big Bang model, and its parametrization as the Lambda-CDM model serves as the framework for current investigations of theoretical cosmology.
Timeline of the Big Bang
A graphical timeline is available at Graphical timeline of the Big Bang.
Extrapolation of the expansion of the Universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This singularity signals the breakdown of general relativity. How closely we can extrapolate towards the singularity is debated—certainly no closer than the end of the Planck epoch. This singularity is sometimes called "the Big Bang", but the term can also refer to the early hot, dense phase itself,[notes 1] which can be considered the "birth" of our Universe. Based on measurements of the expansion using Type Ia supernovae, measurements of temperature fluctuations in the cosmic microwave background, and measurements of the correlation function of galaxies, the Universe has a calculated age of 13.772 ± 0.059 billion years. The agreement of these three independent measurements strongly supports the ΛCDM model that describes in detail the contents of the Universe. In 2013 new Planck data corrected this age to 13.798 ± 0.037 billion years.
The earliest phases of the Big Bang are subject to much speculation. In the most common models the Universe was filled homogeneously and isotropically with an incredibly high energy density and huge temperatures and pressures and was very rapidly expanding and cooling. Approximately 10⁻³⁷ seconds into the expansion, a phase transition caused a cosmic inflation, during which the Universe grew exponentially. After inflation stopped, the Universe consisted of a quark–gluon plasma, as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present Universe.
The Universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form. After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle physics experiments. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was now no longer high enough to create new proton–antiproton pairs (similarly for neutrons–antineutrons), so a mass annihilation immediately followed, leaving just one in 10¹⁰ of the original protons and neutrons, and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the Universe was dominated by photons (with a minor contribution from neutrinos).
A few minutes into the expansion, when the temperature was about a billion (one thousand million; 10⁹; SI prefix giga-) kelvin and the density was about that of air, neutrons combined with protons to form the Universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis. Most protons remained uncombined as hydrogen nuclei. As the Universe cooled, the rest mass energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years the electrons and nuclei combined into atoms (mostly hydrogen); hence the radiation decoupled from matter and continued through space largely unimpeded. This relic radiation is known as the cosmic microwave background radiation.
Over a long period of time, the slightly denser regions of the nearly uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the Universe. The four possible types of matter are known as cold dark matter, warm dark matter, hot dark matter, and baryonic matter. The best measurements available (from WMAP) show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold (warm dark matter is ruled out by early reionization), and is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. In an "extended model" which includes hot dark matter in the form of neutrinos, if the "physical baryon density" Ωbh² is estimated at about 0.023 (this is different from the 'baryon density' Ωb expressed as a fraction of the total matter/energy density, which as noted above is about 0.046) and the corresponding cold dark matter density Ωch² is about 0.11, then the corresponding neutrino density Ωvh² is estimated to be less than 0.0062.
Independent lines of evidence from Type Ia supernovae and the CMB imply that the Universe today is dominated by a mysterious form of energy known as dark energy, which apparently permeates all of space. The observations suggest 73% of the total energy density of today's Universe is in this form. When the Universe was very young, it was likely infused with dark energy, but with less space and everything closer together, gravity had the upper hand, and it was slowly braking the expansion. But eventually, after numerous billion years of expansion, the growing abundance of dark energy caused the expansion of the Universe to slowly begin to accelerate. Dark energy in its simplest formulation takes the form of the cosmological constant term in Einstein's field equations of general relativity, but its composition and mechanism are unknown and, more generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both observationally and theoretically.
All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the ΛCDM model of cosmology, which uses the independent frameworks of quantum mechanics and Einstein's General Relativity. As noted above, there is no well-supported model describing the action prior to 10⁻¹⁵ seconds or so. Apparently a new unified theory of quantum gravitation is needed to break this barrier. Understanding this earliest of eras in the history of the Universe is currently one of the greatest unsolved problems in physics.
The Big Bang theory depends on two major assumptions: the universality of physical laws and the cosmological principle. The cosmological principle states that on large scales the Universe is homogeneous and isotropic.
These ideas were initially taken as postulates, but today there are efforts to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine structure constant over much of the age of the universe is of order 10⁻⁵. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars.[notes 2]
If the large-scale Universe appears isotropic as viewed from Earth, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the CMB.[notes 3] The Universe has been measured to be homogeneous on the largest scales at the 10% level.
General relativity describes spacetime by a metric, which determines the distances that separate nearby points. The points, which can be galaxies, stars, or other objects, themselves are specified using a coordinate chart or "grid" that is laid down over all spacetime. The cosmological principle implies that the metric should be homogeneous and isotropic on large scales, which uniquely singles out the Friedmann–Lemaître–Robertson–Walker metric (FLRW metric). This metric contains a scale factor, which describes how the size of the Universe changes with time. This enables a convenient choice of a coordinate system to be made, called comoving coordinates. In this coordinate system the grid expands along with the Universe, and objects that are moving only due to the expansion of the Universe remain at fixed points on the grid. While their coordinate distance (comoving distance) remains constant, the physical distance between two such comoving points expands proportionally with the scale factor of the Universe.
The Big Bang is not an explosion of matter moving outward to fill an empty universe. Instead, space itself expands with time everywhere and increases the physical distance between two comoving points. Because the FLRW metric assumes a uniform distribution of mass and energy, it applies to our Universe only on large scales—local concentrations of matter such as our galaxy are gravitationally bound and as such do not experience the large-scale expansion of space.
An important feature of the Big Bang spacetime is the presence of horizons. Since the Universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not had time to reach us. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our Universe. Our understanding of the Universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the Universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the Universe continues to accelerate, there is a future horizon as well.
Fred Hoyle is credited with coining the term Big Bang during a 1949 radio broadcast. It is popularly reported that Hoyle, who favored an alternative "steady state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models.
The Big Bang theory developed from observations of the structure of the Universe and from theoretical considerations. In 1912 Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from Albert Einstein's equations of general relativity, showing that the Universe might be expanding in contrast to the static Universe model advocated by Einstein at that time. In 1924 Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the inferred recession of the nebulae was due to the expansion of the Universe.
In 1931 Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the Universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence.
Starting in 1924, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the 100-inch (2,500 mm) Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929 Hubble discovered a correlation between distance and recession velocity—now known as Hubble's law. Lemaître had already shown that this was expected, given the Cosmological Principle.
In the 1920s and 1930s almost every major cosmologist preferred an eternal steady state Universe, and several complained that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated by supporters of the steady state theory. This perception was enhanced by the fact that the originator of the Big Bang theory, Monsignor Georges Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, thought that
If the world has begun with a single quantum, the notions of space and time would altogether fail to have any meaning at the beginning; they would only begin to have a sensible meaning when the original quantum had been divided into a sufficient number of quanta. If this suggestion is correct, the beginning of the world happened a little before the beginning of space and time.
During the 1930s other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory Universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard Tolman) and Fritz Zwicky's tired light hypothesis.
After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady state model, whereby new matter would be created as the Universe seemed to expand. In this model the Universe is roughly the same at any point in time. The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced big bang nucleosynthesis (BBN) and whose associates, Ralph Alpher and Robert Herman, predicted the cosmic microwave background radiation (CMB). Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949.[notes 4] For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over Steady State. The discovery and confirmation of the cosmic microwave background radiation in 1964 secured the Big Bang as the best theory of the origin and evolution of the cosmos. Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang, understanding the physics of the Universe at earlier and earlier times, and reconciling observations with the basic theory.
Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as COBE, the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the Universe appears to be accelerating.
The earliest and most direct kinds of observational evidence are the Hubble-type expansion seen in the redshifts of galaxies, the detailed measurements of the cosmic microwave background, the relative abundances of light elements produced by Big Bang nucleosynthesis, and today also the large scale distribution and apparent evolution of galaxies predicted to occur due to gravitational growth of structure in the standard theory. These are sometimes called "the four pillars of the Big Bang theory".
Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently subjected to the most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models.[notes 5] Viable, quantitative explanations for such phenomena are still being sought. These are currently unsolved problems in physics.
Hubble's law and the expansion of space
Observations of distant galaxies and quasars show that these objects are redshifted—the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission lines or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed:
- v = H0D,
- v is the recessional velocity of the galaxy or other distant object,
- D is the comoving distance to the object, and
- H0 is Hubble's constant, measured to be 70.4 +1.3/−1.4 km/s/Mpc by the WMAP probe.
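As a rough illustrative calculation (the distance is a made-up example; the conversion 1 Mpc ≈ 3.086 × 10¹⁹ km is a standard value), Hubble's law gives the recession velocity of a galaxy at a given comoving distance, and the reciprocal of H0 gives a characteristic expansion timescale comparable to the quoted age of the Universe:

H0 = 70.4               # km/s per Mpc (the WMAP value quoted above)
D = 100.0               # comoving distance in Mpc (hypothetical galaxy)
print(H0 * D, "km/s")   # v = H0 * D -> 7040.0 km/s

km_per_Mpc = 3.086e19
hubble_time_s = km_per_Mpc / H0              # seconds
print(hubble_time_s / 3.156e7 / 1e9, "Gyr")  # roughly 13.9 billion years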
Hubble's law has two possible explanations. Either we are at the center of an explosion of galaxies—which is untenable given the Copernican principle—or the Universe is uniformly expanding everywhere. This universal expansion was predicted from general relativity by Alexander Friedmann in 1922 and Georges Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang theory as developed by Friedmann, Lemaître, Robertson, and Walker.
The theory requires the relation v = HD to hold at all times, where D is the comoving distance, v is the recessional velocity, and v, H, and D vary as the Universe expands (hence we write H0 to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable Universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity v. However, the redshift is not a true Doppler shift, but rather the result of the expansion of the Universe between the time the light was emitted and the time that it was detected.
That space is undergoing metric expansion is shown by direct observational evidence of the Cosmological principle and the Copernican principle, which together with Hubble's law have no other explanation. Astronomical redshifts are extremely isotropic and homogeneous, supporting the Cosmological principle that the Universe looks the same in all directions, along with much other evidence. If the redshifts were the result of an explosion from a center distant from us, they would not be so similar in different directions.
Measurements of the effects of the cosmic microwave background radiation on the dynamics of distant astrophysical systems in 2000 proved the Copernican principle, that, on a cosmological scale, the Earth is not in a central position. Radiation from the Big Bang was demonstrably warmer at earlier times throughout the Universe. Uniform cooling of the cosmic microwave background over billions of years is explainable only if the Universe is experiencing a metric expansion, and excludes the possibility that we are near the unique center of an explosion.
Cosmic microwave background radiation
In 1964 Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the general CMB predictions: the radiation was found to be consistent with an almost perfect black body spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded a Nobel Prize in 1978.
The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. Peaking at around 372±14 kyr, the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent.
In 1989 NASA launched the Cosmic Background Explorer satellite (COBE). Its findings were consistent with predictions regarding the CMB, finding a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.725 K) and providing the first evidence for fluctuations (anisotropies) in the CMB, at a level of about one part in 10⁵. John C. Mather and George Smoot were awarded the Nobel Prize for their leadership in this work. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001 several experiments, most notably BOOMERanG, found the shape of the Universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies.
In early 2003 the first results of the Wilkinson Microwave Anisotropy Probe (WMAP) were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon based cosmic microwave background experiments are ongoing.
Abundance of primordial elements
Using the Big Bang model it is possible to calculate the concentration of helium-4, helium-3, deuterium, and lithium-7 in the Universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by number) are about 0.25 for ⁴He/H, about 10⁻³ for ²H/H, about 10⁻⁴ for ³He/H and about 10⁻⁹ for ⁷Li/H.
The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and off by a factor of two for ⁷Li; in the latter two cases there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by Big Bang nucleosynthesis is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed there is no obvious reason outside of the Big Bang that, for example, the young Universe (i.e., before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products) should have more helium than deuterium or more deuterium than ³He, and in constant ratios, too.
Galactic evolution and distribution
Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current state of the Big Bang theory. A combination of observations and theory suggest that the first quasars and galaxies formed about a billion years after the Big Bang, and since then larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early Universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures agree well with Big Bang simulations of the formation of structure in the Universe and are helping to complete details of the theory.
Primordial gas clouds
In 2011 astronomers found what they believe to be pristine clouds of primordial gas, by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. These two clouds of gas contain no elements heavier than hydrogen and deuterium. Since the clouds of gas have no heavy elements, they likely formed in the first few minutes after the Big Bang, during Big Bang nucleosynthesis. Their composition matches the composition predicted from Big Bang nucleosynthesis. This provides direct evidence that there was a period in the history of the universe before the formation of the first stars, when most ordinary matter existed in the form of clouds of neutral hydrogen.
Other lines of evidence
The age of Universe as estimated from the Hubble expansion and the CMB is now in good agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars.
The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult.
Related issues in physics
It is not yet understood why the Universe has more matter than antimatter. It is generally assumed that when the Universe was young and very hot, it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the Universe, including its most distant parts, is made almost entirely of matter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated and that the Universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effect is not strong enough to explain the present baryon asymmetry.
Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the Universe has been accelerating since the Universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the Universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the Universe is very nearly spatially flat, and therefore according to general relativity the Universe must have almost exactly the critical density of mass/energy. But the mass density of the Universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the Universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic ruler.
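For orientation, an illustrative calculation not taken from the article: the critical density referred to here is given by ρc = 3H0²/(8πG), and with the H0 value quoted earlier it comes out to roughly 9 × 10⁻²⁷ kg/m³, only a few hydrogen atoms per cubic metre:

import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.4 * 1000 / 3.086e22      # 70.4 km/s/Mpc converted to 1/s
rho_critical = 3 * H0**2 / (8 * math.pi * G)
print(rho_critical)              # about 9.3e-27 kg per cubic metre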
Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Possible candidates include a cosmological constant and quintessence. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the Universe, but the dark energy density remains constant (or nearly so) as the Universe expands. Therefore matter made up a larger fraction of the total energy of the Universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant.
During the 1970s and 1980s, various observations showed that there is not sufficient visible matter in the Universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the Universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the Universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the Universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters.
Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway.
Globular cluster age
In the mid-1990s observations of globular clusters appeared to be inconsistent with the Big Bang theory. Computer simulations that matched the observations of the stellar populations of globular clusters suggested that they were about 15 billion years old, which conflicted with the 13.8 billion year age of the Universe. This issue was partially resolved in the late 1990s when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. There remain some questions as to how accurately the ages of the clusters are measured, but it is clear that observations of globular clusters no longer appear inconsistent with the Big Bang theory.
There are generally considered to be three outstanding problems with the Big Bang theory: the horizon problem, the flatness problem, and the magnetic monopole problem. The most common answer to these problems is inflationary theory; however, since this creates new problems, other options have been proposed, such as the Weyl curvature hypothesis.
The horizon problem results from the premise that information cannot travel faster than light. In a Universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the Universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature.
A resolution to this apparent inconsistency is offered by inflationary theory in which a homogeneous and isotropic scalar energy field dominates the Universe at some very early period (before baryogenesis). During inflation, the Universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable Universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation.
Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to cosmic scale. These fluctuations serve as the seeds of all current structure in the Universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been accurately confirmed by measurements of the CMB.
If inflation occurred, exponential expansion would push large regions of space well beyond our observable horizon.
The flatness problem (also known as the oldness problem) is an observational problem associated with a Friedmann–Lemaître–Robertson–Walker metric. The Universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density, positive if greater, and zero at the critical density, in which case space is said to be flat. The problem is that any small departure from the critical density grows with time, and yet the Universe today remains very close to flat.[notes 6] Given that a natural timescale for departure from flatness might be the Planck time, 10^−43 seconds, the fact that the Universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the Universe density must have been within one part in 10^14 of its critical value, or it would not exist as it does today.
A resolution to this problem is offered by inflationary theory. During the inflationary period, spacetime expanded to such an extent that its curvature would have been smoothed out. Thus, it is theorized that inflation drove the Universe to a very nearly spatially flat state, with almost exactly the critical density.
The magnetic monopole objection was raised in the late 1970s. Grand unification theories predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early Universe, resulting in a density much higher than is consistent with observations, given that searches have never found any monopoles. This problem is also resolved by cosmic inflation, which removes all point defects from the observable Universe in the same way that it drives the geometry to flatness.
The future according to the Big Bang theory
Before observations of dark energy, cosmologists considered two scenarios for the future of the Universe. If the mass density of the Universe were greater than the critical density, then the Universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the Universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out leaving white dwarfs, neutron stars, and black holes. Very gradually, collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the Universe would asymptotically approach absolute zero—a Big Freeze. Moreover, if the proton were unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the Universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death.
Modern observations of accelerating expansion imply that more and more of the currently visible Universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the Universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the Universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip.
Speculative physics beyond Big Bang theory
While the Big Bang model is well established in cosmology, it is likely to be refined in the future. Little is known about the earliest moments of the Universe's history. The equations of classical general relativity indicate a singularity at the origin of cosmic time, although this conclusion depends on several assumptions. Moreover, general relativity must break down before the Universe reaches the Planck temperature, and a correct treatment of quantum gravity may avoid the would-be singularity.
Some proposals, each of which entails untested hypotheses, are:
- Models including the Hartle–Hawking no-boundary condition in which the whole of space-time is finite; the Big Bang does represent the limit of time, but without the need for a singularity.
- Big Bang lattice model states that the Universe at the moment of the Big Bang consists of an infinite lattice of fermions which is smeared over the fundamental domain so it has rotational, translational, and gauge symmetry. The symmetry is the largest symmetry possible and hence the lowest entropy of any state.
- Brane cosmology models in which inflation is due to the movement of branes in string theory; the pre-Big Bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically. In the latter model the Big Bang was preceded by a Big Crunch and the Universe endlessly cycles from one process to the other.
- Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point leading to a bubble universe expanding from its own big bang.
Proposals in the last two categories see the Big Bang as an event in either a much larger and older Universe, or in a multiverse.
Religious and philosophical interpretations
As a theory relevant to the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.
- There is no consensus about how long the Big Bang phase lasted. For some writers this denotes only the initial singularity, for others the whole history of the Universe. Usually, at least the first few minutes (during which helium is synthesized) are said to occur "during the Big Bang". (see also Big Bang nucleosynthesis)
- Detailed information of and references for tests of general relativity are given in the article tests of general relativity.
- This ignores the dipole anisotropy at a level of 0.1% due to the peculiar velocity of the solar system through the radiation field.
- It is commonly reported that Hoyle intended this to be pejorative. However, Hoyle later denied that, saying that it was just a striking image meant to emphasize the difference between the two theories for radio listeners.
- If inflation is true, baryogenesis must have occurred, but not vice versa.
- Strictly, dark energy in the form of a cosmological constant drives the Universe towards a flat state; however, our Universe remained close to flat for several billion years, before the dark energy density became significant.
| http://en.wikipedia.org/wiki/Big_Bang | 13
120 | 1. Parametric Curves
In Section 6.1.6 Part 2 we discussed the equations of the trajectory of the projectile motion. The graph of the trajectory is
reproduced here in Fig. 1.1.
Trajectory Of Projectile Motion.
The arrow head indicates the direction of motion.
First we found the set of 2 equations where t is the independent variable:
The 1st 2 equations are of the form x
= f(t), y = g(t), where f and g are continuous functions. We saw that these
are called parametric equations, t is called the parameter, and t is non-negative. In Example 3.1 of the same section we
learned that t has a maximum value denoted by tmax. So the common domain of f and g is I = set of all real numbers between
0 and tmax inclusive. Here the parameter t is time. The last equation is of the form y = F(x). It's a Cartesian equation. It was
obtained from the parametric equations by eliminating t. The graph of y = F(x) shown in Fig. 1.1 is the graph of the
corresponding parametric equations x = f(t), y = g(t). It's called a parametric curve.
The parametric curve is the path of the motion of the
object, because each point (x, y) on it is a position (x,
y) of the object in
the plane at time t, where x = f(t) and y = g(t). The direction of the motion is indicated by the arrow head. It of course
corresponds to increasing values of t. This direction is also treated as the direction of the curve.
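The two trajectory equations referred to above are not reproduced in this excerpt, so the sketch below (my addition) uses the standard projectile parametrization x = (v0 cos α)t, y = (v0 sin α)t − (1/2)g t^2 as a stand-in of the same form x = f(t), y = g(t); the launch speed, angle, and g are made-up values.

```python
import math

# Assumed stand-in for the projectile parametrization (the original
# equations are not shown in this excerpt); v0, alpha, grav are made up.
v0, alpha, grav = 20.0, math.radians(60), 9.8

def f(t):                      # x = f(t)
    return v0 * math.cos(alpha) * t

def g(t):                      # y = g(t)
    return v0 * math.sin(alpha) * t - 0.5 * grav * t**2

tmax = 2 * v0 * math.sin(alpha) / grav   # y returns to 0 at t = tmax

# Each t in [0, tmax] gives exactly one position (x, y) on the path,
# and increasing t traces the curve in the direction of the motion.
for i in range(6):
    t = tmax * i / 5
    print(f"t = {t:5.2f}  ->  ({f(t):6.2f}, {g(t):6.2f})")
```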
Fig. 1.2 is obtained by adding the t-axis
to Fig. 1.1 and changing a few labels. The t-axis is
different from the x-axis and y-axis.
At time t = 0 the object is at the origin; at time t = t1 it's at position P1 = (x1, y1), at time t = tmax it's at position xmax on the
Position of object is function of time.
x-axis. Position of the object is a function of time.
In this section we take a closer look at parametric curves.
Definitions 1.1 – Parametric Curves
The set of equations:
x = f(t), y = g(t),
where f and g are continuous functions on a common domain being some interval I of the real line R, and t is an independent variable called the parameter.
The parametric equations are an expression of x and y in terms of
the parameter. Since f and g are functions, for each value
of t in the common domain of f and g there corresponds exactly 1 value of x and exactly 1 value of y, thus exactly 1 point (x,
y) on the curve, where x = f(t) and y = g(t).
A Function Described By f And g
By the parametric equations x = f(t), y = g(t), for each value of t in the common domain I of f and g there corresponds
exactly 1 point (x, y) in the plane, where x = f(t) and y = g(t). See Fig. 1.3, where I = [a, b] with a < b. Let C be the curve
formed by such points. We can view the equations as describing a function say h (not f or g, and not a function or an equation
obtained by eliminating t) that maps each real number t in I to an ordered pair of real numbers (x, y) on C as follows: h(t) =
(x, y) = ( f(t), g(t)). It has domain I on the real line and range C in the plane. It's a function from (a subset of) R to (a subset
of) R^2 (= R × R). Note that x = f(t), y = g(t), and y = F(x) are functions from R to R.
Suppose that from the parametric equations x = f(t), y = g(t) we get the
Cartesian equation y = F(x) or G(x, y) = 0 by
eliminating t. The parametric curve of the parametric equations x = f(t), y = g(t) is the graph of the function y = F(x) or of
the equation G(x, y) = 0, because y = F(x) or G(x, y) = 0 is derived from the parametric equations. If there are restrictions
on x or y by the nature of the parametric equations as will be shown in some examples that follow, the curve may be just a
part of the graph of the Cartesian equation. It's called parametric because of the parametric equations. Refer to Fig. 1.3. As for
the axes, the t-axis is different from the x-axis and the y-axis of the xy-coordinate system of the plane of the curve. Remark
that the curve in Fig. 1.3 is the graph of an xy-equation G(x, y) = 0 that's not a function y = F(x). We'll see such curves in
some examples below.
If the Cartesian equation G(x, y) = 0 isn't recognized or if 1 such equation isn't obtained from the parametric equations, then to
sketch the parametric curve in this section (another method is introduced in the next section) we'll have to rely on the “lo-tech”
table of values to get a number of points, and we'll also use properties of the curve if any, and then we'll join the points
together by a curve, as will be illustrated in an example below.
Curve C of the parametric equations x = f(t), y = g(t).
Let P be the point (x, y) where x = f(t) and y = g(t), as in Fig. 1.3. At t = a, P is Pa. As t increases, P moves along the curve
C. At any t in [a, b], P is at a position (x, y) on C. At t = b, P is at Pb. The parametric equations x = f(t), y = g(t) specify
the coordinates (x, y) of the point P at parameter value t, and P represents an object moving in the plane. The parametric
curve C is the path of the moving object. Since f and g are continuous, C is continuous (has no breaks in it).
The direction of the curve C
is the direction of the motion of the object, and so it corresponds to
increasing values of t. To
determine the direction of the curve let's take an example. Suppose t is in [0, 5]. We calculate the positions (x, y) = ( f(t),
g(t)) at t = 0, 1, 2, 3, 4, and 5. From this we get the direction of the motion of the object P(x, y) and thus the direction of the
curve, which we indicate on the curve by arrow head(s). Clearly we have to rely on the parametric equations to determine the
direction, which is lost in the Cartesian equation because it doesn't contain t.
The parameter t is
sometimes referred to as time, because it often represents time. As
illustrated in the examples that follow,
it can represent other quantities, and letters other than t can be used as parameter.
a. Identify the parametric curve x = t + 1, y = t^2 – 1, for all t
b. Sketch it.
c. Label the points corresponding to t = –1, t = 0, and t = 1.
d. Determine its direction and indicate the direction on it.
e. Give an example quantity that the parameter t can represent.
Parabola For Example 2.1.
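The worked solution to Example 2.1 does not appear in this excerpt; the following is my own check. Eliminating t from x = t + 1, y = t^2 − 1 gives t = x − 1 and the Cartesian equation y = (x − 1)^2 − 1 = x^2 − 2x, a parabola; the points at t = −1, 0, 1 show that the curve is traced from left to right as t increases, and t could, for instance, represent time.

```python
# Example 2.1 check (my own): x = t + 1, y = t**2 - 1.
def point(t):
    return (t + 1, t**2 - 1)

for t in (-1, 0, 1):
    x, y = point(t)
    # Each point satisfies the eliminated-parameter form y = x**2 - 2*x.
    assert abs(y - (x**2 - 2*x)) < 1e-12
    print(f"t = {t:+d}  ->  ({x}, {y})")
# Output: (0, 0), (1, -1), (2, 0) -- x increases with t, so the parabola
# y = x^2 - 2x is traced from left to right.
```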
x^2 + y^2 = 25 cos^2 t + 25 sin^2 t = 25(cos^2 t + sin^2 t) = 25(1) = 5^2,
the curve is the circle with centre at the origin and radius 5, sketched in Fig. 2.2.
Circle For Example 2.2.
An interpretation of t is that
it's the central angle associated with the point P(x, y).
Here the elimination of the parameter t
is easier by using a trigonometric identity, sin^2 x + cos^2 x = 1, than by direct
substitution for t as in Example 2.1. The Cartesian equation x^2 + y^2 = 25 is recognized as that of a circle and isn't a function.
The parameter t is interpreted as an angle, since we're talking about its trigonometric functions. Remark that the circle isn't
the unit circle; its radius is 5, not 1. For example, the perpendicular projection x of P on the horizontal axis is x = ((segment
joining 0 to P) times (cos t)) = 5 cos t, not just cos t.
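A quick numerical confirmation of Example 2.2 (my addition): every point of x = 5 cos t, y = 5 sin t satisfies x^2 + y^2 = 25, and listing the points at increasing t shows the counterclockwise direction.

```python
import math

# Example 2.2 check: x = 5 cos t, y = 5 sin t lies on x**2 + y**2 = 25.
for k in range(5):
    t = k * math.pi / 2              # t = 0, pi/2, pi, 3pi/2, 2pi
    x, y = 5 * math.cos(t), 5 * math.sin(t)
    assert abs(x**2 + y**2 - 25) < 1e-9
    print(f"t = {t:4.2f}  ->  ({x:5.2f}, {y:5.2f})")
# (5,0) -> (0,5) -> (-5,0) -> (0,-5) -> (5,0): counterclockwise.
```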
Partial Ellipse For Example 2.3.
In Example 2.2 the coefficients of cos and
sin are equal, producing a circle. Here they're different,
producing an ellipse, or a
part of it due to restrictions on the parameter. The restrictions on the parameter are specified with the parametric equations
and produce a partial ellipse.
Sketch the curve having parametric representation x = sin t,
y = 2 sin t,
t in R. As t increases in R,
describe the motion of an
object whose position in the plane at time t is given by these equations.
“Curve” For Example 2.4.
Semi-Circle For Example 2.5.
The restrictions that may be placed on x
or y by the nature of the parametric
equations may not be contained or apparent in
the Cartesian equation derived from them by eliminating the parameter. We must examine this possibility carefully.
In Example 2.3, the restrictions are on the parameter and
are stated explicitly in the problem. The restrictions on x
and y also
exist, but they're also contained in the Cartesian equation. Remark that, for that particular example, only a partial ellipse forms
the curve because of the restrictions on the parameter, not because of those on x and y.
In Example 2.5, the direction of the curve is clockwise,
while in Example 2.2 it's counterclockwise. Clearly the direction of the
curve depends on the functions f and g.
Keep in mind that there are 2 kinds of restrictions: those
on the parameter, which are specified with the parametric equations,
and those on x and y, which are implied by the nature of the parametric equations and aren’t contained or apparent in the
Sketch the curve defined by the parametric equations x = t^3 – 3t, y = t^2, t in [–2, 2].
Use its properties if any. Indicate its
Symmetry. Now x is an odd
function of t and y
an even function of t. So at
opposite values of t, x has opposite values and y
has the same value. Thus the curve is symmetric about the y-axis.
Curve For Example 2.6.
The curve isn't recognized from its Cartesian equation, so
we had to rely on the “lo-tech” table of values to get some points.
We also use the symmetry and self-intersection properties of the curve. A curve self-intersects at a point if it passes thru that
point 2 or more times from different directions, ie there are 3 or more different branches of it that are joined to that point,
which is passed thru for 2 or more different values of the parameter; so we look for self-intersection at points (x, y) that each
correspond to 2 or more different values of the parameter. (A circle with central-angle parameter in R passes thru every point
of it infinitely many times for different values of the parameter, but it doesn't self-intersect, because the repeated passing is
done from the same single direction; there are only 2 branches of the circle that are joined to each point.) Then we join the
points together by a curve. For clarity, points corresponding to fractional values of t in the table of values aren't labelled in the figure.
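As a cross-check on Example 2.6 (my addition, not the text's own table of values), the points below confirm the symmetry about the y-axis (opposite values of t give opposite x and equal y) and locate the self-intersection: x = 0 at t = 0 and t = ±√3, and the two nonzero roots both give the point (0, 3).

```python
import math

# Example 2.6 check: x = t**3 - 3*t, y = t**2, t in [-2, 2].
def point(t):
    return (t**3 - 3*t, t**2)

for t in (-2, -math.sqrt(3), -1, 0, 1, math.sqrt(3), 2):
    x, y = point(t)
    xo, yo = point(-t)
    assert abs(x + xo) < 1e-9 and abs(y - yo) < 1e-9   # y-axis symmetry
    print(f"t = {t:+.3f}  ->  ({x:+.3f}, {y:.3f})")
# t = +sqrt(3) and t = -sqrt(3) both land on (0, 3): the curve passes
# through that point twice from different directions, a self-intersection.
```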
Above, from parametric equations we derive corresponding
Cartesian equations. Now we're going to perform the reverse.
From Cartesian equations we're going to derive corresponding parametric equations. We're also going to determine parametric
equations of some curves whose equations, Cartesian or otherwise, aren't known to us yet. To determine the parametric
equations of a curve is referred to as to parametrize it. The process of determining parametric equations of a curve and the
parametric equations themselves are each referred to as parametrization of the curve.
Since f and g in x = f(t) and y = g(t) are functions, where the parameter t is a particular quantity, for each value of t in the
common domain of f and g there corresponds exactly 1 value of x and exactly 1 value of y, thus exactly 1 point (x, y) on the
curve, where x = f(t) and y = g(t). If this rule is violated, then f or g or both don't exist, then the set of the 2 parametric
equations doesn't exist, then the curve can't be parametrized using the particular quantity t as the parameter.
In Example 2.2 we derive from the
parametric equations x = 5 cos
t, y = 5 sin
t the Cartesian equation x^2 + y^2 = 25. Now
suppose we're given the circle x^2 + y^2 = 25 and we're asked to parametrize it. To do this we recall Example 2.2 and let x =
5 cos t and y = 5 sin t, then to verify we have x^2 + y^2 = 25 cos^2 t + 25 sin^2 t = 25(cos^2 t + sin^2 t) = 25(1) = 25. Or we let x =
5 cos t and calculate the t-expression for y. So a parametrization of the circle x^2 + y^2 = 25 is x = 5 cos t, y = 5 sin t, where t
is the central angle, as shown in Fig. 3.1.
A parametrization of circle x^2 + y^2 = 25 is x = 5 cos t, y = 5 sin t,
where t is central
Now let's see if there are other parametrizations of the circle x^2 + y^2 = 25. The following are its valid parametrizations:
The choice of a t-expression
for x must of course be such that x2 + y2 = 25 and the
selection for a t-interval must as a matter
of fact be such that x and y each assumes every value in [–5, 5] to get the full circle. The cosine and sine functions can
produce infinitely many parametrizations of such a circle: x = 5 cos h(t), y = 5 sin h(t), or x = 5 sin h(t), y = 5 cos h(t),
where h is a function of the form h(t) = tm or h(t) = mt and with a t-interval such that x and y have every value in [–5, 5]
(domain of h, which is the same as that of x and y because cosine and sine are defined everywhere on R, is such that the
ranges of x and y both are [–5, 5]).
In general, there are infinitely many parametrizations of the circle x^2 + y^2 = r^2. One of them where t is the central angle is:
You should memorize it as it's often utilized:
A parametrization of the circle x^2 + y^2 = r^2 using the central angle as parameter is:
x = r cos t, y = r sin t,
start and finish point: (r, 0).
Parametrization of circle x^2 + y^2 = r^2 using central angle is x = r cos t, y = r sin t.
(Notice the t-interval.
We must have the upper semi-circle only.) In this case the central angle is the
quantity specified to be
used as parameter.
Parametrize the parabola y = x^2 using its slope m at each point of it as parameter.
The slope of the parabola at each point (x, y) of it is m = y' = 2x. Then x = m/2 and y = x^2 = (m/2)^2 = m^2/4. So the desired
parametrization is x = m/2, y = m^2/4, m in R.
We express both x and y in terms of the slope m.
For this purpose we determine an equation that relates x
to m. To obtain
the entire parabola, the m-interval must be the entire set of real numbers, R. This example is simple enough that we don't
have to draw a picture for help if not asked to.
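A quick check of Example 3.1 (my addition): for each slope m, the point (m/2, m^2/4) lies on y = x^2 and the slope of the parabola there is indeed m.

```python
# Example 3.1 check: x = m/2, y = m**2/4 parametrizes y = x**2 by slope m.
for m in (-4, -1, 0, 2, 6):
    x, y = m / 2, m**2 / 4
    assert abs(y - x**2) < 1e-12     # the point is on the parabola
    assert abs(2 * x - m) < 1e-12    # dy/dx = 2x equals m at that point
    print(f"m = {m:+d}  ->  ({x:+.1f}, {y:.2f})")
```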
In general, to use a specified quantity as parameter we have to determine an equation that relates x or y to that quantity.
Can you parametrize the curve y = x^2 using as parameter the distance d from the general point (x, y) on the curve to the
origin (0, 0)? Why or why not?
So no we can't, because each non-0 value of d corresponds to opposite values of x, thus to 2 different points (x, y) and (–x,
(–x)^2) = (–x, x^2) = (–x, y) on the curve.
The right-most expression gives the same value of d from opposite values of x.
Each non-0 value of d corresponds to 2
different values of x, so to 2 different points on the curve, violating the rule that each value of the parameter corresponds to
exactly 1 point.
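A two-line numerical illustration of the failure (my addition): x = 1 and x = −1 are different points of y = x^2, yet their distances from the origin are identical, so one value of d does not pick out one point.

```python
import math

# Distance from the origin to the point (x, x**2) on y = x**2.
def d(x):
    return math.hypot(x, x**2)

print(d(1.0), d(-1.0))   # same value for two different points on the curve
```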
A string is wound around a circle with equation x^2 + y^2 = r^2. See Fig. 3.3. One end at the point A
= (r, 0) is unwound in such a
way that the part of the string not lying on the circle is extended in a straight line. The curve I followed by this free end of the
string is called an involute of the circle. Let P be the position of the free end of the string at some subsequent time and let T
be the point where the string leaves the circle. Clearly PT is tangent to the circle at T. The path of P is the involute. Let O be
the origin (0, 0). Parametrize the involute employing the central angle TOA, denoted by s, as parameter.
Curve I is involute of circle.
Parametrization Of Involute Of Circle Employing Central
Refer to Fig. 3.4. Let (x, y) be the coordinates of P,
M the perpendicular projection of T on OA, and N the perpendicular
projection of P on TM. Then:
x = OM + NP, y = MT – NT,
OM = OT cos s = r cos s,
angle NTP = angle MTP = angle MOT = s, as arms of angle MTP are perpendicular to those of angle MOT,
PT = arc AT = rs, as s rad = (arc AT )/r,
NP = PT sin s = rs sin s,
x = r cos s + rs sin s;
MT = OT sin s
= r sin s,
NT = PT cos s = rs cos s,
y = r sin s – rs cos s.
The required parametrization is: x = r cos s + rs sin s, y = r sin s – rs cos s.
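A small numerical sanity check of this result (my addition): the straight, unwound part of the string, i.e. the segment from P back to the tangent point T = (r cos s, r sin s), must have length equal to the unwound arc, r·s, and the parametrization above satisfies this exactly.

```python
import math

r = 1.0

def involute(s):
    # x = r cos s + r s sin s,  y = r sin s - r s cos s  (from above)
    return (r*math.cos(s) + r*s*math.sin(s),
            r*math.sin(s) - r*s*math.cos(s))

for s in (0.5, 1.0, 2.0, math.pi):
    x, y = involute(s)
    tx, ty = r*math.cos(s), r*math.sin(s)      # tangent point T
    unwound = math.hypot(x - tx, y - ty)       # length of segment PT
    assert abs(unwound - r*s) < 1e-9           # equals arc AT = r*s
    print(f"s = {s:.3f}  ->  P = ({x:+.3f}, {y:+.3f}),  PT = {unwound:.3f}")
```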
Example 3.4 – The Cycloid
When a circle rolls along a straight line, the path traced
by a point on it is called a cycloid. See Fig. 3.5. Suppose the circle
radius r, lies above the x-axis, rolls along the x-axis starting from the origin O(0, 0) and rolling to the right. Let P be a point on
the circle and suppose it's originally at the origin.
2. Determine the x-intercepts,
the x-coordinates corresponding to the
maximum y-value, and the maximum y-value of the
cycloid by using:
a. Properties of the circle.
b. The parametric equations.
Indicate these values on the graph.
3. Show that (the horizontal component of the motion
of) any point P on the circle never moves
back as the circle rolls along
a. Geometry. Consider only the lower semi-circle, as it's only there that points seem to move back.
Curve C is cycloid.
Parametrizing The Cycloid In
Refer to Fig. 3.6. Take q
to be positive as the circle rolls along to the right. Let (x, y) be the
coordinates of P, M
the centre of
the circle, N the point where the circle touches the x-axis, and Q the perpendicular projection of P on MN. Then:
x = ON – PQ, y = NM + MQ = r + MQ,
As circle rolls along, any point on it never moves back.
Contrary to what we may intuitively think, (the horizontal
component of the motion of) any point P on the circle
never moves back as the circle rolls along. An experiment to generate the cycloid is easy and cheap to set up and carry out.
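A numerical companion to parts 2 and 3 (my addition), using the standard cycloid parametrization x = r(q − sin q), y = r(1 − cos q) that the construction above leads to: the x-intercepts fall at x = 0, 2πr, 4πr, …, the maximum y-value is 2r at x = πr, 3πr, …, and dx/dq = r(1 − cos q) is never negative, so the horizontal motion of P never reverses.

```python
import math

r = 1.0

def cycloid(q):
    # Standard cycloid parametrization (assumed form of the result above).
    return (r * (q - math.sin(q)), r * (1 - math.cos(q)))

for k in range(3):
    x0, y0 = cycloid(2 * math.pi * k)          # an x-intercept: y0 = 0
    xm, ym = cycloid(math.pi * (2 * k + 1))    # a crest: ym = 2r
    print(f"intercept at x = {x0:6.3f} (y = {y0:.3f}); "
          f"max y = {ym:.3f} at x = {xm:6.3f}")

# dx/dq = r*(1 - cos q) >= 0 for every q: the point never moves back.
assert all(r * (1 - math.cos(q / 100)) >= 0 for q in range(0, 1257))
```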
The word “brachistochrone” comes from 2 Greek words that
mean “shortest time”. Let Q be a given
point in the 4th quadrant.
Suppose a small object slides without friction from the origin O(0, 0) along a curve to Q subject only to the downward force
mg due to gravity. What's the shape of the curve that causes the object to slide from O to Q in the least or shortest possible
time? Initially we might think that it should be the straight line, as the straight-line segment joining O and Q represents the
shortest distance between the 2 points. However the straight line isn't the answer. It can be shown that the answer is a portion
of an arc of an inverted cycloid generated by a point P on a circle of radius r rolling on the under side of the x-axis along the
x-axis, with P starting at O, r being such that the cycloid contains Q, and the portion being between O and Q. This answer
appears to be reasonable: the curve should drop more steeply at first to allow the object to gain speed more quickly.
Physical Properties Of Inverted Cycloid.
The word “tautochrone” comes from 2 Greek words that mean
“same time”. Let an object be placed at a point other than the
low point on an arc of an inverted cycloid. It can be shown that the time required for it to slide to the low point is the same
for every initial point where it's placed. In other words, the time required for it to slide to the low point is independent of the
initial point where it's placed.
4. Plane Curves
A plane curve is a continuous set of points in the plane
that can be described by an xy-Cartesian-equation
or a set of 2
parametric equations, as distinguished from plane regions. Clearly the parabola y = x^2 and the circle x^2 + y^2 = 1 are plane
curves. They have Cartesian and parametric equations. Also clearly the involute of a circle and the cycloid are plane curves.
They have parametric equations. So we use the parametric equations to define plane curves. A plane curve is a curve that can
be described by a set of 2 parametric equations.
Definition 4.1 – Plane Curves
A set of points
in the plane is said to be a plane curve if it's the parametric curve x = f(t), y = g(t), t in I, where f and g are continuous functions on the interval I.
a. Sketch the parametric curve x = t^2, y = t^3, for all t in R.
b. Label the points corresponding to t = –1, t = 0, and t = 1.
c. Determine the direction of the curve and indicate the direction on it.
d. Give an example quantity that the parameter t can represent.
The curve is formed by the graphs of y = x^(3/2) and y = –x^(3/2).
The parameter t can represent time.
Without examining the t-interval
which implies that y takes on
negative values as well as positive ones we may not realize that
obtaining only the upper half of the curve is incorrect. Thus it's a good idea to always examine it and its implications.
its direction is clockwise.
4. Sketch the curve having parametric representation x = t^2, y = t^2 + 1, t in R. As t increases in R, describe the motion of an
object whose position in the plane at time t is given by these equations.
y = t^2 + 1 = x + 1.
6. Can you parametrize the graph of y = x^4 employing as
parameter the following quantities at the general point (x,
y) of the
graph? Why or why not?
a. The first derivative m.
b. The second derivative s.
c. The third derivative t.
d. The fourth derivative f.
So yes we can, because each value of t corresponds to exactly 1 point (x, y) on the graph.
f = y^(4) = 24.
So no we can't, because the single value of f corresponds to (infinitely) many points (x, y) on the graph.
If a = 0 then the curve is the horizontal line y = r.
If a = r then the curve is a cycloid. | http://www.phengkimving.com/calc_of_one_real_var/13_plane_curves/13_01_param_curves/13_01_01_param_curves.htm | 13 |
71 | Trig Circle Help (page 2)
Introduction to Trig Circles and Primary Circular Functions
Consider a circle in rectangular coordinates with the following equation:
x^2 + y^2 = 1
This equation, as defined earlier in this chapter, represents the unit circle. Let θ be an angle whose apex is at the origin, and that is measured counterclockwise from the x axis, as shown in Fig. 1-5. Suppose this angle corresponds to a ray that intersects the unit circle at some point P = ( x 0 , y 0 ). We can define three basic trigonometric functions, called circular functions, of the angle θ in a simple and elegant way.
Suppose you swing a glowing ball around and around at the end of a string, at a rate of one revolution per second. The ball describes a circle in space (Fig. 1-6A). Imagine that you make the ball orbit around your head so it is always at the same level above the ground or the floor; that is, so that it takes a path that lies in a horizontal plane. Suppose you do this in a dark gymnasium. If a friend stands several meters away, with his or her eyes right in the plane of the ball’s orbit, what will your friend see?
Close your eyes and use your imagination. You should be able to envision that the ball, seen from a few meters away, will appear to oscillate back and forth in a straight line (Fig. 1-6B). It is an illusion: the glowing dot seems to move toward the right, slow down, then stop and reverse its direction, going back toward the left. It moves faster and faster, then slower again, reaching its left-most point, at which it stops and turns around again. This goes on and on, at the rate of one complete cycle per second, because you are swinging the ball around at one revolution per second.
The Sine Function
The ray from the origin (point O ) passing outward through point P can be called ray OP. Imagine ray OP pointing right along the x axis, and then starting to rotate counterclockwise on its end point O, as if point O is a mechanical bearing. The point P, represented by coordinates ( x 0 , y 0 ), therefore revolves around point O, following the perimeter of the unit circle.
Imagine what happens to the value of y 0 (the ordinate of point P ) during one complete revolution of ray OP. The ordinate of P starts out at y 0 = 0, then increases until it reaches y 0 = 1 after P has gone 90° or π/2 rad around the circle (θ = 90° = π/2). After that, y 0 begins to decrease, getting back to y 0 = 0 when P has gone 180° or π rad around the circle (θ = 180° = π). As P continues on its counterclockwise trek, y 0 keeps decreasing until, at θ = 270° = 3π/2, the value of y 0 reaches its minimum of –1. After that, the value of y 0 rises again until, when P has gone completely around the circle, it returns to y 0 = 0 for θ = 360° = 2π.
The value of y 0 is defined as the sine of the angle θ . The sine function is abbreviated sin, so we can state this simple equation:
sin θ = y 0
The Sine Wave
If you graph the position of the ball, as seen by your friend, with respect to time, the result is a sine wave (Fig. 1-7), which is a graphical plot of a sine function. Some sine waves are “taller” than others (corresponding to a longer string), some are “stretched out” (corresponding to a slower rate of rotation), and some are “squashed” (corresponding to a faster rate of rotation). But the characteristic shape of the wave is the same in every case. When the amplitude and the wavelength are multiplied and divided by the appropriate numbers (or constants), any sine wave can be made to fit exactly along the curve of any other sine wave.
You can whirl the ball around faster or slower than one revolution per second. The string can be made longer or shorter. These adjustments alter the height and/or the frequency of the sine wave graphed in Fig. 1-7. But the fundamental rule always applies: the sine wave can be reduced to circular motion. Conversely, circular motion in the ( x,y ) plane can be defined in terms of a general formula:
y = a sin b θ
where a is a constant that depends on the radius of the circle, and b is a constant that depends on the revolution rate.
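As a small illustration of y = a sin bθ (my addition, not part of the article), the following prints the side-to-side position of the whirling ball as seen edge-on, for a made-up string length and the one-revolution-per-second rate described above:

```python
import math

a = 1.0                          # "string length" (circle radius)
b = 2 * math.pi * 1.0            # one revolution per second, in rad/s

for i in range(9):               # sample the first second of motion
    t = i * 0.125
    y = a * math.sin(b * t)      # apparent position seen edge-on
    print(f"t = {t:.3f} s  ->  y = {y:+.3f}")
# The values rise to +1, fall through 0 to -1 and return to 0:
# one full sine cycle per revolution of the ball.
```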
The Cosine Function
Look again at Fig. 1-5.
Imagine, once again, a ray from the origin outward through point P on the circle, pointing right along the x axis, and then rotating in a counterclockwise direction.
What happens to the value of x 0 (the abscissa of point P ) during one complete revolution of the ray? The abscissa of P starts out at x 0 = 1, then decreases until it reaches x 0 = 0 when θ = 90° = π/2. After that, x 0 continues to decrease, getting down to x 0 = –1 when θ = 180° = π. As P continues counterclockwise around the circle, x 0 begins to increase again; at θ = 270° = 3π/2, the value gets back up to x 0 = 0. After that, x 0 increases further until, when P has gone completely around the circle, it returns to x 0 = 1 for θ = 360° = 2π.
The value of x 0 is defined as the cosine of the angle θ. The cosine function is abbreviated cos. So we can write this:
cos θ = x 0
The Tangent Function
Once again, look at Fig. 1-5.
The tangent (abbreviated tan) of an angle θ is defined using the same ray OP and the same point P = ( x 0 , y 0 ) as is done with the sine and cosine functions. The definition is:
tan θ = y 0 /x 0
Because we already know that sin θ = y 0 and cos θ = x 0 , we can express the tangent function in terms of the sine and the cosine:
tan θ = sin θ /cos θ
This function is interesting because, unlike the sine and cosine functions, it “blows up” at certain values of θ. Whenever x 0 = 0, the denominator of either quotient above becomes zero. Division by zero is not defined, and that means the tangent function is not defined for any angle θ such that cos θ = 0. Such angles are all the odd multiples of 90° (π/2 rad).
Primary Circular Functions Practice Problems
What is tan 45°? Do not perform any calculations. You should be able to infer this without having to write down a single numeral.
Draw a diagram of a unit circle, such as the one in Fig. 1-5, and place ray OP such that it subtends an angle of 45° with respect to the x axis. That angle is the angle of which we want to find the tangent. Note that the ray OP also subtends an angle of 45° with respect to the y axis, because the x and y axes are perpendicular (they are oriented at 90° with respect to each other), and 45° is exactly half of 90°. Every point on the ray OP is equally distant from the x and y axes; this includes the point ( x 0 , y 0 ). It follows that x 0 = y 0 , and neither of them is equal to zero. From this, we can conclude that y 0 / x 0 = 1. According to the definition of the tangent function, therefore, tan 45° = 1.
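A quick numerical confirmation of both points (my addition): tan 45° evaluates to 1 (up to floating-point rounding), and as θ approaches 90° the value grows without bound because cos θ approaches 0.

```python
import math

print(math.tan(math.radians(45)))            # ~1.0, since x0 = y0 at 45 deg

for deg in (89, 89.9, 89.99):                # cos -> 0, so tan "blows up"
    print(deg, round(math.tan(math.radians(deg)), 1))
```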
Practice Problems for these concepts can be found at: The Circle Model Practice Test
| http://www.education.com/study-help/article/trigonometry-help-primary-circular/?page=2 | 13
60 | You're designing an electromagnet, for example, and need to calculate the number of turns, or you might already have a coil and want to know how hard it will pull on a nearby piece of iron. These are simple questions. Although the answers are elusive this page outlines some general principles and pointers towards specific solutions.
One method of calculating the force produced by a magnetic field involves an understanding of the way in which the energy represented by the field changes. To derive an expression for the field energy we'll look at the behaviour of the field within a simple toroidal inductor. We equate the field energy to the electrical energy needed to establish the coil current.
When the coil current increases so does the magnetic field strength, H. That, in turn, leads to an increase in magnetic flux, Φ. The increase in flux induces a voltage in the coil. It's the power needed to push the current into the coil against this voltage which we now calculate.
We choose a toroid because over its cross-sectional area, A, the flux density should be approximately uniform (particularly if the core radius is large compared with it's cross section). We let the flux path length around the core be equal to Lf and the cross-sectional area be equal to Ax. We assume that the core is initially unmagnetized and that the electrical energy (W) supplied to the coil will all be converted to magnetic field energy in the core (we ignore eddy currents).
Faraday's law gives the voltage as
v = N×dΦ/dt volts
W = ∫ N(dΦ/dt)i dt
W = ∫ N×i dΦ
Now, N×i = Fm and H = Fm/Lf so N×i = H×Lf. Substituting:
W = ∫ H×Lf dΦ
Also, from the definition of flux density, Φ = Ax×B, so dΦ = Ax×dB. Substituting:
W = ∫ H×Lf×Ax dB joules
This gives the total energy in the core. If we wish to find the energy density then we divide by the volume of the core material:
Wd = ( ∫ H×Lf×Ax dB ) / ( Lf×Ax )
|Wd = ∫ H dB joules m^-3||Equation EFH|
If the magnetization curve is linear (that is we pretend B against H is a straight line, not a curve) then there is a further simplification. Substituting H = B/μ
Wd = ∫ B/μ dB
|Wd = B^2/(2μ) joules m^-3||Equation EFB|
Compare this result with the better known formula for the energy stored by a given inductance, L:
WL = L×I^2/2 joules
Another squared term, you notice.
A 'hand-waving' explanation might help clarify the physics. Take an initially uniform magnetic field in free space and introduce into it an iron sphere. The flux lines will bend in the vicinity of the iron so that they will converge upon it. Inside the iron the lines will be quite concentrated (though parallel to the original field).
Now, the point is that there will be no net force on the iron, no matter how strong the field. A sphere has perfect symmetry, so rotation will not change the picture in any way. As for translational movement, all that can happen if the sphere moves is that the distortion of the field around its original position disappears and the same distortion is re-established around the new position; the total system energy remains unchanged.
OK, instead of the sphere let's try an iron rod. This is different because we've lost symmetry. What happens is that the axis of the rod will be drawn into alignment with the field - like a compass needle. The flux lines prefer the iron to the air because of the higher permeability. Equation EFB has μ in the denominator, so the field energy is lower here than in the air, and the further the flux can go through the iron the lower the energy. Think of current flow through a resistor; the current has an easier time going through a low resistance than a high resistance. Flux goes more easily through high permeability than through low. When the rod is aligned with the field the flux can go further through a high permeability region. Note that we still don't have a translational force (provided that the field is uniform on the scale of the rod). Think about the famous experiment with iron filings sprinkled onto a piece of cardboard above a bar magnet. The filings tend to line up with the field but don't generally move much because they are so small that the field appears uniform to them.
So for there to be a force on a piece of iron, a displacement of the iron must result in an alteration to the field energy. The electromagnet you are using will have an opinion about changes to the field it generates: it will say that its inductance is changing. This is the basis of one solution to the problem:
|F = (I²/2) dL/dx newtons||Equation EFS|
where I is the coil current and x is displacement in metres. This result is proved in textbooks such as Hammond, and also Smith. Unfortunately, it might be tricky to calculate how the inductance changes unless the system you have is particularly simple to analyze. You might need computer software such as described by Hammond in order to do it.
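If you can measure (or compute with field-solving software) the inductance at two nearby positions of the iron, Equation EFS can be applied with a finite-difference estimate of dL/dx. The sketch below is only an illustration; the inductance values, displacement step and current are invented numbers, not measurements from this page:

```python
def force_from_inductance(L1, L2, dx, current):
    """Estimate the force (newtons) from Equation EFS, F = (I^2/2)*dL/dx,
    using a finite difference for dL/dx.

    L1, L2  : inductance (H) with the iron at position x and at x + dx
    dx      : displacement between the two positions (m)
    current : coil current (A)
    """
    dL_dx = (L2 - L1) / dx          # finite-difference slope, H/m
    return 0.5 * current**2 * dL_dx

# Hypothetical example: inductance rises from 12 mH to 13 mH as the iron
# moves 1 mm closer, with 0.5 A flowing in the coil.
F = force_from_inductance(12e-3, 13e-3, 1e-3, 0.5)
print(f"Estimated force: {F:.3f} N")   # about 0.125 N
```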
Some problems of practical importance can be solved when the air gap between the electromagnet and the work piece is small in comparison with the field cross section. This is the situation found in most electromechanical relays.
Equation EFB gives the energy density (joules per metre cubed). Assuming that the field inside the air gap is uniform you can use EFB to get the total field energy simply by multiplying by the volume of the field, V = A×g,
where g is the gap length and A is the cross sectional area of the coil's core. The total energy is then
W = B²×A×g/(2μ0) joules
We need the force on the armature. That is given by the rate of change of energy with gap length:
F = dW/dg = B²×A/(2μ0) newtons
We next need to find the flux density, B. It's assumption time again. Well designed relays use such high permeability material for the core and armature that most of the field strength produced by the coil will appear across the air gap between the core and the armature and we can ignore the reluctance of the core, pivot and armature. Substituting equation TMH into equation TMD we get
B = μ0×N×I/g teslas
Substituting into Maxwell's force formula gives
|F = μ0×N²×I²×A/(2g²) newtons||Equation FRS|
If you have ever tried to bring a piece of iron into contact with a magnet manually then you will quite literally have a feel for the g² term!
Example: A relay has a coil of 1200 turns. The diameter of the coil core is 6 millimetres and the air gap is 1.8 millimetres. The spring exerts a force on the armature of 0.15 newtons at the part of it opposite the air gap. What coil current will operate the relay?
The core cross sectional area, A = π×(0.006/2)² = 2.83×10⁻⁵ m². Substituting into equation FRS: 0.15 = (4π×10⁻⁷ × 1200² × I² × 2.83×10⁻⁵) / (2 × 0.0018²)
Therefore I = 0.138 amps. The flux density will be 0.116 teslas. This should be well below saturation for iron. As the gap closes, and g goes to zero, equation FRS predicts that the force on the armature becomes infinite. Of course it won't do so because our assumptions about the field production will go down the tubes first. Under those conditions it might be far harder to calculate the force precisely. One point to note, though, is that flux density is limited by saturation to below about 1.6 teslas. Maxwell's force formula therefore sets a limit on the force of one million newtons per square metre (about 100 tons).
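As a check on the arithmetic of the relay example, here is a short Python sketch of the same calculation (a sketch only; the rearrangement of Equation FRS for the current is mine, the numbers are those given above):

```python
import math

MU0 = 4 * math.pi * 1e-7      # permeability of free space, H/m

N = 1200                      # turns on the relay coil
d = 0.006                     # core diameter, m
g = 0.0018                    # air gap length, m
F_spring = 0.15               # spring force to be overcome, N

A = math.pi * (d / 2)**2      # core cross-sectional area, m^2

# Rearranging Equation FRS, F = mu0*N^2*I^2*A/(2*g^2), for the current:
I = math.sqrt(2 * F_spring * g**2 / (MU0 * N**2 * A))

# Flux density across the gap, B = mu0*N*I/g
B = MU0 * N * I / g

print(f"A = {A:.2e} m^2")     # about 2.83e-5 m^2
print(f"I = {I:.3f} A")       # about 0.138 A
print(f"B = {B:.3f} T")       # about 0.116 T, well below saturation
```

Setting B to the saturation limit of about 1.6 teslas in B²/(2μ0) reproduces the force ceiling of roughly one million newtons per square metre quoted above.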
Equation MPU relates the torque on a magnetic dipole to the field. Because this torque drops to zero as the dipole rotates through 90 degrees we can find the energy (the torque integrated over that angle):
W = m×B joules
So, in a vacuum, substituting equation TMD -
|W = μ0 × m × H joules||Equation FRK|
In non-ferromagnetic materials, where the field internal to the specimen is much the same as the externally applied field, the force is given by
where l is distance and v is the volume of the material. It needs emphasizing that this formula will give significant overestimates for ferromagnetic materials. For them the internal 'demagnetizing field' leads to lower values of force than equation FRL would suggest. Demagnetizing fields only have exact analytical solutions for spheroidally shaped specimens. Consult a text such as Jiles for details on correcting for demagnetizing fields.
A 'solenoid' is the term used to describe the type of electromagnet supplied with an iron piston or plunger pulled in by the field generated by current in a coil. Solenoids are frequently used to operate valves, release locks or operate ratchets and so on. Equation EFS above suggests that the pull of a solenoid should be related to the square of the coil current. Take a medium sized 12 volt solenoid (having a plunger about 13 mm diameter) and test this out by attaching it to a spring balance as shown in the figure here. Measurements are made as follows:
Repeat the sequence, slackening the cord on the balance each time to obtain a lower force. This gave the line, shown below, which has a slope of about 1.1.
Hmmm ... what may be happening is that non-linearities in the permeability of the iron are affecting the field. If I add a 2 mm thick piece of brass on the end of the plunger then I get:
Notice that the retaining force is now much lower even though a higher coil current has been used. Well, this line has a slope of about 2.2 - a bit closer to theory. In an air gap the flux density is exactly proportional to field strength (and thus current). As far as a static magnetic field is concerned brass behaves just the same as air: the permeability is a steady μ0 at any value of B.
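A convenient way to extract the slope from such measurements is to fit a straight line to log(force) against log(current); the exponent in F ∝ I^n is then the fitted slope. The data points below are invented placeholders (the actual readings are not tabulated on this page), so treat this purely as a sketch of the method:

```python
import numpy as np

# Illustrative readings: coil current (A) and retaining force (N).
# Substitute your own spring-balance measurements.
current = np.array([0.5, 0.7, 1.0, 1.4, 2.0])
force   = np.array([0.55, 1.1, 2.2, 4.3, 8.8])   # roughly F proportional to I^2

# Fit log(F) = n*log(I) + c; the slope n is the exponent in F proportional to I^n.
n, c = np.polyfit(np.log(current), np.log(force), 1)

print(f"Fitted exponent n = {n:.2f}")
# n near 2 supports the I-squared law of Equation EFS; a value nearer 1
# suggests the iron's non-linear permeability is limiting the flux density.
```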
It would be nice to extend this experiment by a measurement of coil inductance against force in order to test Equation EFS. The difficulty is that inductance meters use AC test signals. Without laminated iron (which is only found in solenoids designed for AC operation) the reading will be affected by large eddy current losses.
Last revised: 2010 July 8th. | http://info.ee.surrey.ac.uk/Workshop/advice/coils/force.html | 13 |
63 | From Wikipedia, the free encyclopedia
Population density (in agriculture standing stock and standing crop) is a measurement of population per unit area or unit volume. It is frequently applied to living organisms, and particularly to humans. It is a key geographic term.
Biological population densities
Population density is population divided by total land area or water volume, as appropriate.
Low densities may cause an extinction vortex and lead to further reduced fertility. This is called the Allee effect after the scientist who identified it. Examples of the problems associated with low population densities include:
- Increased problems with locating sexual mates
- Increased inbreeding
Different species have different expected densities. R-selected species commonly have high population densities, while K-selected species may have lower densities. Low densities may be associated with specialized mate location adaptations such as specialized pollinators, as found in the orchid family (Orchidaceae).
Human population density
For humans, population density is the number of people per unit of area, usually per square kilometer or square mile (which may include or exclude cultivated or potentially productive area). Commonly this may be calculated for a county, city, country, another territory, or the entire world.
The world's population is about 6.8 billion, and Earth's total area (including land and water) is 510 million square kilometers (197 million square miles). Therefore the worldwide human population density is 6.8 billion ÷ 510 million = 13.3 per km² (34.5 per sq. mile). If only the Earth's land area of 150 million km² (58 million sq. miles) is taken into account, then human population density increases to 45.3 per km² (117.2 per sq. mile). This calculation includes all continental and island land area, including Antarctica. If Antarctica is also excluded, then population density rises to 50 people per km² (129.28 per sq. mile). Considering that over half of the Earth's land mass consists of areas inhospitable to human habitation, such as deserts and high mountains, and that population tends to cluster around seaports and fresh water sources, this number by itself does not give any meaningful measurement of human population density.
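The arithmetic above is easy to reproduce; the short Python sketch below uses the same rounded figures, and the Antarctica-excluded land area is an approximation implied by the quoted 50 people per km² rather than a figure from this article:

```python
population = 6.8e9            # world population figure used in the text above

areas_km2 = {
    "total surface (land and water)": 510e6,
    "land only, incl. Antarctica":    150e6,
    "land only, excl. Antarctica":    136e6,   # approximate, implied by ~50 per km^2
}

for label, area in areas_km2.items():
    print(f"{label}: {population / area:.1f} people per km^2")
# total surface ~13.3, land incl. Antarctica ~45.3, excl. Antarctica ~50
```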
Several of the most densely populated territories in the world are city-states, microstates, or dependencies. These territories share a relatively small area and a high urbanization level, with an economically specialized city population drawing also on rural resources outside the area, illustrating the difference between high population density and overpopulation.
Cities with high population densities are, by some, considered to be overpopulated, though the extent to which this is the case depends on factors like quality of housing and infrastructure and access to resources. Most of the most densely populated cities are in southern and eastern Asia, though Cairo and Lagos in Africa also fall into this category.
City population is, however, heavily dependent on the definition of "urban area" used: densities are often higher for the central municipality itself, than when more recently developed and administratively separate suburban communities are included, as in the concepts of agglomeration or metropolitan area, the latter including sometimes neighboring cities. For instance, Milwaukee has a greater population density when just the inner city is measured, and not the surrounding suburbs as well.
As a comparison, based on a world population of seven billion, the world's inhabitants, as a loose crowd taking up ten square feet (about one square metre) per person (Jacobs Method), would occupy a space a little larger than Delaware's land area.
Most densely populated countries
Other methods of measurement
While arithmetic density is the most common way of measuring population density, several other methods have been developed which aim to provide a more accurate measure of population density over a specific area; a short computational sketch of the first few follows the list below.
- Arithmetic density: The total number of people / area of land (measured in km² or sq miles).
- Physiological density: The total population / area of arable land.
- Agricultural density: The total rural population / area of arable land.
- Residential density: The number of people living in an urban area / area of residential land.
- Urban density: The number of people inhabiting an urban area / total area of urban land.
- Ecological optimum: The density of population which can be supported by the natural resources.
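A minimal sketch of the first three measures; the country figures are invented placeholders used only to show the arithmetic:

```python
def arithmetic_density(total_population, land_area_km2):
    """People per square kilometre of total land."""
    return total_population / land_area_km2

def physiological_density(total_population, arable_area_km2):
    """People per square kilometre of arable land."""
    return total_population / arable_area_km2

def agricultural_density(rural_population, arable_area_km2):
    """Rural residents per square kilometre of arable land."""
    return rural_population / arable_area_km2

# Invented example: 10 million people on 50,000 km^2 of land,
# of which 12,000 km^2 is arable, with 3 million rural residents.
print(arithmetic_density(10e6, 50_000))      # 200.0 per km^2
print(physiological_density(10e6, 12_000))   # ~833.3 per km^2
print(agricultural_density(3e6, 12_000))     # 250.0 per km^2
```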
- Human geography
- Idealized population
- Optimum population
- Population bottleneck
- Population genetics
- Population health
- Population momentum
- Population pyramid
- Rural transport problem
- Small population size
- Distance sampling
- List of cities by population
- List of cities by population density
- List of European cities proper by population density
- List of islands by population density
- List of countries by population density
- List of U.S. states by population density