The number of adolescents infected with HIV has jumped by one-third over the past decade, according to the World Health Organisation.
"More than two million adolescents between the ages of 10 and 19 years are living with HIV," marking a 33 per cent rise since 2001, the WHO said.
"Many do not receive the care and support that they need to stay in good health and prevent transmission. In addition, millions more adolescents are at risk of infection."
The big rise is most marked in sub-Saharan Africa where many born with the virus are now adolescents.
Girls there are the worst affected, with many having unprotected sex.
In Asia the most vulnerable groups are drug users.
"Adolescent girls, young men who have sex with men, those who inject drugs or are subject to sexual coercion and abuse are at highest risk," said Craig McClure, head of HIV programs at the UN children's agency UNICEF.
"They face many barriers, including harsh laws, inequalities, stigma and discrimination which prevent them from accessing services that could test, prevent, and treat HIV.
"About one-seventh of all new HIV infections occur during adolescence. Unless the barriers are removed, the dream of an AIDS-free generation will never be realised."
The increase in infections is reflected in a rise in AIDS-related deaths among adolescents, from 70,000 deaths in 2005 to more than 100,000 in 2012.
That is in stark contrast to a 30 per cent decline in deaths during the same period in the general population.
"Adolescents face difficult and often confusing emotional and social pressures as they grow from children into adults," said Gottfried Hirnschall, head of the WHO's HIV/AIDS department.
"Adolescents need health services and support, tailored to their needs. They are less likely than adults to be tested for HIV and often need more support than adults to help them maintain care and to stick to treatment."
Among the measures needed, the WHO said, is an end to the requirement for parental permission to have an HIV test.
In sub-Saharan Africa, it is estimated that in the 15-24 age bracket, only 10 per cent of young men and 15 per cent of young women know their HIV status.
In other regions, although data are scarce, access to HIV testing and counselling by vulnerable adolescents is consistently reported as being very low, the WHO said.
http://mobile.abc.net.au/news/2013-11-26/number-of-adolescents-with-hiv-jumps-by-one-third-in-decade3a-/5116232?pfm=sm
Crypt::DH - Diffie-Hellman key exchange system
    use Crypt::DH;
    my $dh = Crypt::DH->new;
    $dh->g($g);
    $dh->p($p);

    ## Generate public and private keys.
    $dh->generate_keys;

    $my_pub_key = $dh->pub_key;

    ## Send $my_pub_key to "other" party, and receive "other"
    ## public key in return.

    ## Now compute shared secret from "other" public key.
    my $shared_secret = $dh->compute_secret( $other_pub_key );
Crypt::DH is a Perl implementation of the Diffie-Hellman key exchange system. Diffie-Hellman is an algorithm by which two parties can agree on a shared secret key, known only to them. The secret is negotiated over an insecure network without the two parties ever passing the actual shared secret, or their private keys, between them.
The algorithm generally works as follows: Party A and Party B choose a property p and a property g; these properties are shared by both parties. Each party then computes a random private key integer priv_key, where the length of priv_key is at most (number of bits in p) - 1. Each party then computes a public key based on g, priv_key, and p; the exact value is
g ^ priv_key mod p
The parties exchange these public keys.
The shared secret key is generated based on the exchanged public key, the private key, and p. If the public key of Party B is denoted pub_key_B, then the shared secret is equal to
pub_key_B ^ priv_key mod p
The mathematical principles involved ensure that both parties will generate the same shared secret key.
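As a rough illustration of the arithmetic described above, the following minimal sketch uses Math::BigInt directly rather than Crypt::DH, with toy-sized values of p and g chosen only for demonstration:

    use strict;
    use warnings;
    use Math::BigInt;

    my $p = Math::BigInt->new(23);        # shared prime (toy-sized; real exchanges use large primes)
    my $g = Math::BigInt->new(5);         # shared base

    my $priv_a = Math::BigInt->new(6);    # Party A's private key
    my $priv_b = Math::BigInt->new(15);   # Party B's private key

    ## Public keys: g ^ priv_key mod p
    my $pub_a = $g->copy->bmodpow($priv_a, $p);
    my $pub_b = $g->copy->bmodpow($priv_b, $p);

    ## Shared secret: other party's public key ^ own priv_key mod p
    my $secret_a = $pub_b->copy->bmodpow($priv_a, $p);
    my $secret_b = $pub_a->copy->bmodpow($priv_b, $p);

    print "A computes $secret_a, B computes $secret_b\n";   # both print the same value (2)

Crypt::DH performs this same kind of modular exponentiation behind generate_keys and compute_secret, using properly sized primes.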
More information can be found in PKCS #3 (Diffie-Hellman Key Agreement Standard):
Crypt::DH implements the core routines needed to use Diffie-Hellman key exchange. To actually use the algorithm, you'll need to start with values for p and g; p is a large prime, and g is a base which must be larger than 0 and less than p.
Crypt::DH uses Math::BigInt internally for big-integer calculations. All accessor methods (p, g, priv_key, and pub_key) thus return Math::BigInt objects, as does the compute_secret method. The accessors, however, allow setting with a scalar decimal string, a hex string (prefixed with "0x"), a Math::BigInt object, or a Math::Pari object (for backwards compatibility).
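For illustration only (the values below are toy examples, not taken from the module's documentation), the same value of p can be supplied in any of the accepted forms:

    use Math::BigInt;
    $dh->p("23");                    # decimal string
    $dh->p("0x17");                  # hex string (0x17 == 23)
    $dh->p(Math::BigInt->new(23));   # Math::BigInt object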
$dh = Crypt::DH->new([ %param ])
Constructs a new Crypt::DH object and returns the object. %param may include none, some, or all of the keys p, g, and priv_key.
$dh->p([ $p ])
Given an argument $p, sets the p parameter (large prime) for this Crypt::DH object.
Returns the current value of p (as a Math::BigInt object).
$dh->g([ $g ])
Given an argument $g, sets the g parameter (base) for this Crypt::DH object.
Returns the current value of g.
$dh->generate_keys
Generates the public and private key portions of the Crypt::DH object, assuming that you've already filled p and g with appropriate values.
If you've provided a priv_key, it's used, otherwise a random priv_key is created using either Crypt::Random (if already loaded), or /dev/urandom, or Perl's rand, in that order.
$dh->compute_secret( $public_key )
Given the public key $public_key of Party B (the party with which you're performing key negotiation and exchange), computes the shared secret key, based on that public key, your own private key, and your own large prime value (p).
The historical method name "compute_key" is aliased to this for compatibility.
$dh->priv_key([ $priv_key ])
Returns the private key. Given an argument $priv_key, sets the priv_key parameter for this Crypt::DH object.
$dh->pub_key
Returns the public key.
Benjamin Trott (cpan:BTROTT) <[email protected]>
Brad Fitzpatrick (cpan:BRADFITZ) <[email protected]>
BinGOs - Chris Williams (cpan:BINGOS) <[email protected]>
Mithaldu - Christian Walde (cpan:MITHALDU) <[email protected]>
This library is free software and may be distributed under the same terms as perl itself.
https://metacpan.org/pod/Crypt::DH
“We investigated the mass concentration, mineral composition and morphology of particles resuspended by children during scheduled physical education in urban, suburban and rural elementary school gyms in Prague (Czech Republic). Cascade impactors were deployed to sample the particulate matter. Two fractions of coarse particulate matter (PM(10-2.5) and PM(2.5-1.0)) were characterized by gravimetry, energy dispersive X-ray spectrometry and scanning electron microscopy. Two indicators of human activity, the number of exercising children and the number of physical education hours, were also recorded. Lower mass concentrations of coarse particulate matter were recorded outdoors (average PM(10-2.5) 4.1-7.4 μg m(-3) and PM(2.5-1.0) 2.0-3.3 μg m(-3)) than indoors (average PM(10-2.5) 13.6-26.7 μg m(-3) and PM(2.5-1.0) 3.7-7.4 μg m(-3)). The indoor concentrations of coarse aerosol were elevated during days with scheduled physical education with an average indoor-outdoor (I/O) ratio of 2.5-16.3 for the PM(10-2.5) and 1.4-4.8 for the PM(2.5-1.0) values. Under extreme conditions, the I/O ratios reached 180 (PM(10-2.5)) and 19.1 (PM(2.5-1.0)). The multiple regression analysis based on the number of students and outdoor coarse PM as independent variables showed that the main predictor of the indoor coarse PM concentrations is the number of students in the gym. The effect of outdoor coarse PM was weak and inconsistent. The regression models for the three schools explained 60-70% of the particular dataset variability. X-ray spectrometry revealed 6 main groups of minerals contributing to resuspended indoor dust. The most abundant particles were those of crustal origin composed of Si, Al, O and Ca. Scanning electron microscopy showed that, in addition to numerous inorganic particles, various types of fibers and particularly skin scales make up the main part of the resuspended dust in the gyms. In conclusion, school gyms were found to be indoor microenvironments with high concentrations of coarse particulate matter, which can contribute to increased short-term inhalation exposure of exercising children.”
http://blogs.discovermagazine.com/discoblog/2011/11/25/ncbi-rofl-characterization-of-coarse-particulate-matter-in-school-gyms/
Biologists concerned about dead round goby in Susquehanna
A breeding population of the voracious invader would threaten the watershed's sport fish, especially trout and bass.
A dead round goby — an aggressive fish native to the Black Sea that has become a major problem in the Great Lakes — was recently found in the Susquehanna River near Binghamton, NY.
An angler discovered the fish floating in the river not far from the Pennsylvania state line and reported it to the New York Department of Environmental Conservation. Biologists who examined the fish concluded it most likely had been used as bait, as a hook had been run through its side.
As a result, David Lemon, the DEC’s regional fisheries manager, said biologists do not believe the fish came from a breeding population in the river, but they are concerned that whoever used the goby for bait may have released others.
No special sampling is planned to look for other gobies in the river because finding one would be “a needle in a haystack,” Lemon said, but he added that biologists would look for the exotic invader when they do routine sampling.
“If they do establish themselves, they are going to make their presence known in coming years,” Lemon said. “We do some sampling in the river each year.”
A breeding round goby population could have a significant impact on other species in the Susquehanna — including smallmouth bass which are already struggling in the river — and would have the potential to eventually reach the Chesapeake.
“It’s hard to overstate the impact of these gobies,” said James Grazio, a Great Lakes biologist with the Pennsylvania Department of Environmental Protection. “These are very successful invading fishes, and they have a proven track record of significant damage.”
Round gobies are native to Eurasia, where they originally lived in the Black, Caspian and Azov seas and their tributaries, consuming large numbers of zebra mussels. In 1990, they were discovered in the St. Clair River, which drains Lake Huron. Biologists believe the fish were inadvertently transported to the Great Lakes in the ballast water of ocean-going ships. They have also invaded other lake and river systems in Europe in recent decades.
Gobies are bottom feeders that eat benthic dwellers such as mollusks, clams, insects, snails, small fish and fish eggs. They are soft-bodied and can reach 10 inches in length, though adults in the Great Lakes typically are less than 7 inches. Young gobies are slate gray, while older fish are mottled with black and brown, and have a greenish dorsal fin with a black spot. Males may be entirely black when spawning. They have a large head with protruding eyes, like those of a frog.
They outcompete native fish in bottom habitats with their aggressive nature, chasing them from nesting sites and consuming their eggs. They are able to detect movement and consume food in complete darkness, unlike native species, and can tolerate poorer water quality. They have directly impacted populations of sculpins, logperch and darters, and biologists blame round gobies for hampering efforts to restore native lake trout populations.
They have a complex relationship with larger predators such as smallmouth bass, yellow perch and walleye. Round gobies are an abundant and popular source of food for many game fish. But they also raid the egg nests of species such as smallmouth bass — a single round goby can eat 4,000 smallmouth bass eggs in 15 minutes. While smallmouth bass guard their nests, abundant round gobies are often ready to take advantage of any absence. “They can be very detrimental,” Grazio said. Ohio has had to periodically close its smallmouth bass fishery in Lake Erie as the result of low numbers of fish.
Even one of the good things they do can become a problem. Round gobies are major consumers of zebra mussels, one of their main food sources in their native Black Sea region. They can eat up to 78 a day. The mussels have also infested the Great Lakes, where in many areas the filter-feeding mussels build up toxins, which then accumulate in the gobies that eat them. These toxins are ultimately passed on to sports fish, creating a potential health problem for people who consume them.
Round gobies are also able to live in rivers, having already spread to the Mississippi basin. “I think the question is, does the Susquehanna have suitable, slow, deep water habitat for the goby?” Grazio said. “They don’t do well in swift currents.” Gobies don’t have swim bladders, so they have trouble passing obstacles to move upstream, he added, though they can move downstream past dams and other impediments.
If they made it to the Chesapeake, they are capable of tolerating low-salinity waters, like those found in the Black Sea.
“A lot of people consider the round goby to be a model of a perfect invading species because it has such a wide range of tolerances,” Grazio said. Once they establish, they can be highly productive — a female can produce six batches of eggs in a spawning season.
The Susquehanna borders the Great Lakes watershed in New York. “It is pretty concerning,” said Matthew Shank, a biologist with the Susquehanna River Basin Commission. “There is obviously something going on where it jumped from the Great Lakes inland to the Susquehanna.”
Three years ago, round gobies were discovered in a gravel pit near Erie, PA. Their presence was thought to have stemmed from a release of bait fish.
Although the transfer of round gobies as bait is prohibited, Grazio said it is not uncommon to see them in bait buckets. “They are small, they are excellent bait for a lot of bottom-dwelling fish like smallmouth bass, and people know that and they move them around,” he said. “There is a great likelihood of them being spread to other watersheds by bait buckets.”
Help prevent the spread of the round goby
Preventing the spread of round gobies to new areas is the best way to prevent further ecological and economic damage. Anglers are often the first to discover new infestations because they are commonly caught by hook and line. Here’s how to help:
- Always check for and remove any plants, mud and debris from boats, trailers, clothing and equipment before leaving a water body.
- Young goby can resemble bait fish, so it’s important to drain water from bait buckets, bilges and live wells before going to another area.
- Clean and dry all equipment after fishing.
- Never use round gobies as bait.
Pennsylvania Sea Grant
http://www.bayjournal.com/article/biologists_concerned_about_dead_round_goby_in_susquehanna
The first U.S. cheese factory opened in New York in 1851. Cheese factories relieved farmers of the burden of small-scale cheesemaking, a complicated and labor-intensive process. Small farms began selling their milk to local cheese factories rather than making their own cheese, which proved far more lucrative for the farmer. Factories were owned either by one or a few proprietors or cooperatively by farm patrons.
Factory cheesemaking produced a consistent, high-quality product under the direction of a skilled cheesemaker. Their job was to create good quality cheese across the board. The significant income of factories allowed a certain level of marketing, which in turn resulted in brand recognition and steeper prices. Local factories did not often market locally, but to large, distant cities.
Commercial-farm cheesemaking equipment continued to be used in factory cheesemaking, though it was improved and utilized on a larger scale. The cheese vat was one such tool, becoming larger and more numerous than in the past. In commercial farm cheesemaking, farmers used vats that were self-heated; in a factory setting, the vats were heated by steam from a dairy boiler. Early boilers were powered by wood and later, by coal. These boilers not only heated the cheese vats with their steam, but also powered centrifugal milk testers, and pumps for milk or water, and heated the building itself.
Cheese vats were vital to the cheesemaking process. Milk was poured into the cheese vat, where it continued to ripen overnight. A mechanical stirrer was employed to constantly stir the milk, ensuring that the fat didn’t separate out as cream. In cheese factories, a “carefully controlled lactic culture” was often used to expedite the ripening (as opposed to the buttermilk, sour milk, or whey used on the commercial farm). The whey was then siphoned out and the curd was cut using a large-scale curd knife. Once the curd was ready, it was removed from the cheese vat to the cheese press and the process continued.
http://culturecheesemag.com/blog/tools-of-the-trade-the-cheese-vat
Cinderella.2, an interactive geometry software program, has just been released. Besides extensions to the geometry part of the software, the integrated physics simulation engine is particularly interesting for classroom use in mathematics and science education. Furthermore, an API allows for even more customized virtual experiments. The solid mathematical foundation of Cinderella.2 makes it suitable not only for K-12 education, but also for university teaching and research.
Cinderella has been developed by Prof. Dr. Jürgen Richter-Gebert (Technical University, Munich) and Prof. Dr. Ulrich Kortenkamp (University of Education, Schwäbisch Gmünd).
More information about Cinderella.2, as well as a demo version, is available at the web site. Order before July 1st and get 10% off the regular price. Currently, Cinderella.2 is a download product only. A book version, including a CD-ROM, to be published by Springer-Verlag is in preparation.
Respond to Ulrich Kortenkamp's recent post announcing the release of Cinderella.2.
SAT Prep Plan
Kyle Wendling has created an SAT preparation site with help
from Paul Rodney. This site contains SAT study content
including streaming video, a custom study plan creator, and
many practice problems.
The Free SAT Practice area includes:
SAT Practice Problems: Math Practice
- Algebra Problems
- Geometry Problems
- Data Analysis Problems
- Solving a Math Problem Video
Practice Exercises: Math Practice
- Algebra 1
- Algebra 2
- Algebra 3
- Numbers and Data Analysis 1
- Numbers and Data Analysis 2
- Geometry 1
- Geometry 2
- Geometry 3
Toshiba America Foundation Grants
Start thinking and planning now to apply for a grant for your classroom. The mission of Toshiba America Foundation is to contribute to the quality of science and mathematics education in U.S. communities by investing in projects designed by classroom teachers.
- Recent Grants
- How to Apply
- For Teachers
- Celebrating Science & Math
- Press Center
Kindergarten - 6th Grade Program
Applications are due October 1st.
7th - 12th Grade Program
Applications accepted year round for grants under $5,000.
Applications are due August 1st and February 1st for grants over $5,000.
http://mathforum.org/electronic.newsletter/mf.intnews11.24.html
- harpy (n.)
- winged monster of ancient mythology, late 14c., from Old French harpie (14c.), from Latin harpyia, from Greek Harpyia (plural), literally "snatchers," which is probably related to harpazein "to snatch" (see rapid (adj.)). Metaphoric extension to "repulsively greedy person" is c. 1400.
In Homer they are merely personified storm winds, who were believed to have carried off any person that had suddenly disappeared. In Hesiod they are fair-haired and winged maidens who surpass the winds in swiftness, and are called Aello and Ocypete; but in later writers they are represented as disgusting monsters, with heads like maidens, faces pale with hunger, and claws like those of birds. The harpies ministered to the gods as the executors of vengeance. ["American Cyclopædia," 1874]
http://etymonline.com/index.php?term=harpy&allowed_in_frame=0
Fadia M. Nasser
Tel Aviv University and Beit Berl College
Journal of Statistics Education Volume 12, Number 1 (2004), www.amstat.org/publications/jse/v12n1/nasser.html
Copyright © 2004 by Fadia M. Nasser, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.
Key Words: Achievement; Affective variable; Attitude toward mathematics; Cognitive variable; Mathematical aptitude; Mathematics anxiety; Statistics anxiety.
This study examined the extent to which statistics and mathematics anxiety, attitudes toward mathematics and statistics, motivation and mathematical aptitude can explain the achievement of Arabic speaking pre-service teachers in introductory statistics. Complete data were collected from 162 pre-service teachers enrolled in an academic teacher-training program for elementary and middle schools in Israel. The data, except for the two achievement tests, were collected during statistics classes prior to the midterm examination. The majority (96%) of participants were female students with a mean age of 21. As regards variables examined in this study, only the hypothesized effect of mathematical aptitude on achievement in statistics was relatively large. The results also indicated that mathematical aptitude, mathematics anxiety, attitudes toward mathematics and statistics, and motivation, together accounted for 36% of the variance in achievement in introductory statistics for the current sample.
Statistics courses have become an essential part of many programs in higher education. The rationale for teaching statistics at the college level is to enable students to handle, use, and interpret research or statistical data in their field of study. An additional goal of teaching statistics is to prepare students to deal effectively with statistical aspects of the world outside the classroom (Gal and Ginsburg 1994; Gal and Garfield 1997). Despite the effort that instructors of introductory statistics devote to simplifying the subject, many students encounter difficulties in their introductory statistics courses (Del Vecchio 1995; Glencross and Cherian 1992; Ossola 1970; Thompson and Smith 1982; Vidal-Madjar 1978; Watts 1991; Zeidner 1991). Perney and Ravid (1991, p. 2) demonstrated how critical these difficulties are by stating “statistics courses are viewed by most college students as an obstacle standing in the way of attaining their desired degree.” Cognitive factors (such as mathematical ability, mathematical background, and cognitive dimensions of attitudes towards mathematics and statistics) and affective factors (such as mathematics and statistics anxiety, motivation, and affective dimensions of attitudes toward mathematics and statistics) are some of the variables thought of as related to performance in statistics (Feinberg and Halprin 1978; Nasser 1999). Even though it has been recognized that affective factors may have long-term effects on students’ use of knowledge acquired in the classroom, statistics educators and researchers have primarily focused on cognitive skills and knowledge and paid much less attention to non-cognitive factors such as beliefs, feelings, attitudes and motivations (Gal and Ginsburg 1994; Krathwohl, Bloom, and Masia 1964). This study aimed to gain greater understanding of the nature and the potential influence of key cognitive and affective variables on the achievement of Arabic speaking pre-service teachers in introductory statistics. The importance of investigating the structure of achievement in statistics with this sample is twofold. First, research on the Arabic speaking population, notably in Israel, is sporadic, so this study can shed light on the structure of achievement in statistics in this population. Second, studying statistics achievement of pre-service teachers is of particular importance because as teachers they will be expected to instill positive attitudes in their students regarding various aspects of statistics, such as design of experiments, data collection, data analysis and interpretation.
Many students experience anxiety when they are required to take statistics courses. Cruise, Cash, and Bolton (1985) argued that anxious students’ image of statistics is generally not a very positive one. Furthermore, students often enter their first statistics class with negative attitudes about learning quantitative subjects. These students experience mathematics anxiety (McLeod 1992), apprehension about taking tests (Hunsley 1987), and/or negative attitudes with respect to the relevance of statistics for their future careers (Roberts and Saxe 1982).
Cruise, et al. (1985) defined statistics anxiety as the feeling of anxiety encountered when taking a statistics course or doing statistics, that is, when gathering, processing and interpreting data. Furthermore, they indicated that statistics anxiety may stem, in part, from the need to use mathematics and therefore correlates with mathematics anxiety (Gal and Ginsburg 1994; Galagedera 1998; Galagedera, Woodward, and Degambodo 2000; Wooten 1998). They also contended that since statistics involves more than manipulating numbers, statistics anxiety might be a broader construct than mathematics anxiety. Other researchers (e.g., Del Vecchio 1995; Lalonde and Gardner 1993) argued that statistics anxiety might result in impaired performance, mental distress and avoidance of statistics courses needed for professional advancement. Furthermore, Ramsden (1992) suggested that any anxiety that students have about a subject affects their learning style.
The effect of anxiety on achievement is not agreed upon in the literature. For example, in the context of mathematics, Llabre and Suarez (1985) stated that mathematics anxiety had little to do with performance once anxious students were already enrolled in the course. Adams and Holcomb (1986) found that while mathematics anxiety was negatively related to performance in statistics, there was no significant relationship between performance in statistics and traditional measures of state and trait anxiety. In contrast, Lalonde and Gardner (1993) found an indirect negative relationship between what they referred to as “situational anxiety” and performance in statistics. More recently, Onwuegbuzie (1998, 2000) reported findings indicating that low achievement of college students was related to higher levels of statistics anxiety and low computation self-concept.
Theoretical and empirical issues related to attitudes have received much attention; however, no single definition of attitudes has emerged over the years (Gable and Wolf 1993). Currently used definitions of attitudes incorporate common elements from several definitions. For example, Aiken (1980) merged several definitions to assert that:
Attitudes may be conceptualized as learned predispositions to respond positively or negatively to certain objects, situations, concepts, or persons. As such they possess cognitive (beliefs or knowledge), affective (emotional, motivational), and performance (behavior or action tendencies) dimensions (1980, p. 2).
Simon and Bruce (1991) indicated that attitudes related to mathematics might play an effective role in students’ affective responses to statistics because students expect that learning statistics requires strong knowledge of mathematics. They also argued that by the time students start a formal statistics class, they will all have studied some level of mathematics at high school. Consequently, their affective reactions to the latter experience might affect their engagement and performance in statistics learning.
Consistent with Aiken’s general definition of attitudes, Olson and Zanna (1993) defined attitudes toward statistics as a multidimensional concept, which is composed of affective, cognitive and behavioral dimensions. Gal, Ginsburg, and Schau (1997) indicated that attitudes towards statistics have a potential role in influencing the learning process, students’ statistical behavior outside the classroom, and their willingness to attend statistics courses in the future. Furthermore, several researchers have discussed the relationship between attitudes and other personality traits. For example, Gal and Ginsburg (1994) noted that students’ preconceived ideas about the nature of statistics could produce anxiety. In support of this argument, Wisenbaker and Scott (1997) asserted that in their more intense form, negative attitudes might translate into anxiety. However, the literature makes little if any distinction between the concepts of attitudes and anxiety and the terms are often used interchangeably. The overlap between attitudes and anxiety is evident in measures of these two constructs, which often share similar items and even measure the same dimensions.
Several researchers examined the relationship between attitudes toward mathematics and achievement in statistics and found conflicting results. For example, Adams and Holcomb (1986) found no significant relationship between attitudes toward mathematics and achievement in statistics while Feinberg and Halprin (1978) did find a relationship between the two. As to the relationship between attitudes toward statistics and achievement in statistics, a substantial part of the knowledge about this relationship came from validation of measures of attitudes toward statistics. In general, researchers (e.g., Lalonde and Gardner 1993; Nasser 1999; Roberts and Bilderback 1980; Schau, Dauphinee, Del Vecchio, and Stevens 1995; Wise 1985; Wisenbaker, Nasser, and Scott 1998; Wisenbaker, Scott, and Nasser 2000) have reported a small to moderate positive relationship between attitudes toward, and performance in, statistics. Wisenbaker, Scott, and Nasser (2000) further stated that this relationship appears to be fairly consistent regardless of the instrument used, the time of administration of either the attitudes or performance measure, or the level of the students.
Psychologists concerned with learning and education usually use the word motivation to describe processes that (a) stimulate and induce behavior, (b) give direction or purpose to behavior, (c) continue to allow a certain behavior to persist, and (d) lead to choosing or preferring a certain behavior (Wlodkowski 1993). In research on young learners, where the relationship between motivation and learning has been frequently reviewed and analyzed, there is substantial evidence indicating that motivation is consistently and positively related to educational achievement (Uguroglu and Walberg 1979). Man, Nygärd, and Gjesme (1994) found a positive, albeit low, relationship between the motive-for-success score and school performance level as indicated by school grades in mathematics and English. In his study with 6th graders, Fontaine (1991) found that the more motivated students showed higher facilitating, lower debilitating anxiety, and expressed higher success expectations. It appears reasonable to assume that if motivation bears such a significant relationship to learning for young learners, it has a similar relationship to the learning of older students.
It is difficult to explain scientifically how motivation improves learning and achievement. However, Levin and Long (1981) claimed that the time that motivated students spend actively involved in learning is definitely related to achievement. They suggested that greater concentration and care are probably characteristics of the process for such students. Motivated students also tend to be more cooperative, which would make them more psychologically open to the learning material and enhance their information processing. Keller (1983) asserted that people work longer, harder and with more vigor and intensity when they are motivated than when they are not. Moreover, motivated learners get more out of the instruction than unmotivated learners. The literature we reviewed indicated that the distinct effect of motivation on achievement in statistics was not previously examined. Only the combined effect of motivation and attitudes on achievement in statistics through effort was assessed by Lalonde and Gardner (1993) who found this effect to be positive and significant.
Mathematical ability and background are frequently discussed in relation to achievement in statistics. Although it is often argued that understanding and applying statistics in empirical research does not require advanced mathematics, a significant and positive relationship between mathematical ability and performance has been consistently reported in the literature (Galagedera 1998; Galagedera, Woodward, and Degamboda 2000; Lalonde and Gardner 1993; Nasser 1998, 1999; Wooten 1998). For example, Galagedera (1998) found that first-year business mathematics and statistics students who were successful in mathematics at the university entry-level examination were more likely to do better in elementary statistics than poor performers at matriculation level. Wisenbaker, et al. (2000) argued that mathematical ability affects the acquisition of statistical skills and the two share a negative relationship with mathematics anxiety.
The relationships among statistics and mathematics anxiety, mathematical aptitude, attitudes toward mathematics and statistics, and achievement in statistics were examined in a number of studies (Lalonde and Gardner 1993; Nasser 1998, 1999; Wisenbaker et al. 1998), and these relationships were not always clear. Over the past seven years Wisenbaker and his colleagues have been conducting research in which they used path analysis to predict statistics achievement from mathematical aptitude, mathematics anxiety and attitudes toward mathematics and statistics (Scott and Wisenbaker 1994; Wisenbaker and Scott 1995, 1997; Wisenbaker, Nasser, and Scott 1998; Wisenbaker, Scott, and Nasser 2000). The major finding from their studies has been that students’ attitudes toward statistics at the end of the statistics course were predictive of their achievement, while students’ attitudes toward statistics at the beginning of the course were not. Furthermore, they found a moderately positive relationship between mathematical aptitude and achievement in statistics. The correlation between mathematics anxiety and achievement in statistics was also moderate but negative. It should be stressed that Wisenbaker and his colleagues did not include statistics anxiety in the models they tested. These researchers also did not address teacher effect or class variables, which may have contributed to the positive relationship found at the end of the statistics course.
Lalonde and Gardner (1993) used a version of their socio-educational structural model of second language acquisition (Lalonde and Gardner 1984) for predicting achievement in statistics. This version of their model included situational anxiety (statistics and number anxiety), attitudes, motivation intensity, mathematical aptitude, and effort as predictors of achievement in statistics. They found that a direct path between situational anxiety and achievement was not significant when the path between mathematical aptitude and achievement was present. Their results also suggested that the level of anxiety and the combination of attitudes and motivation could have indirect effects on achievement through effort.
Overall, the relationships among cognitive and affective variables underlying achievement in introductory statistics are not fully established in the literature. I used the available theoretical and empirical findings to hypothesize one of the potential structural models of achievement in statistics (see Figure 1). The hypothesized model included 19 manifest and seven latent variables. The assumption was that four of the seven latent variables (mathematical aptitude, mathematics anxiety, attitudes toward mathematics, and motivation) were precursors of learning statistics and therefore they were treated as exogenous, while the remaining three latent variables (attitudes toward statistics, statistics anxiety, and achievement in statistics) were treated as endogenous. As to structural relationships among the examined constructs, it was assumed that statistics anxiety decreases achievement in statistics (Onwuebuzie 1998, 2000) while high mathematical aptitude, high motivation, and positive attitudes toward statistics increase achievement in statistics (Gal and Ginsburg 1994; Gerson 1999; Man et al. 1994). It was assumed that negative attitudes toward statistics intensify statistics anxiety (Wisenbaker and Scott, 1997) and that greater motivation to succeed intensifies positive attitudes toward statistics (Auzmendi, 1991). According to the theorized model, attitudes toward mathematics and mathematical aptitude positively affect attitudes toward statistics, while mathematics anxiety inversely affects attitudes towards statistics (Gal and Ginsburg 1994; Wisenbaker et al. 1998).
Figure 1. The Hypothesized Structural Model of Achievement in Statistics
Complete data were collected from 162 Arabic speaking pre-service teachers enrolled in a teacher-training program for elementary and middle schools in an academic institution in Israel. The sample consisted predominantly of female students (96%) with a mean age of 21 years. Participants in this study represented the top 15% of applicants who met the strict entry criteria to the teacher education program. Therefore their academic level as measured by the matriculation (high school) scores and scores on the college entrance examination was above the average level of pre-service teachers in teacher training colleges in Israel. For all participants, introductory statistics was a required course and the same instructor taught it. No dropouts from the course were reported, which is to say that all participants completed the course including the midterm and the final examinations.
Seven latent variables were included in the proposed structural model. These variables and the corresponding measures are described below.
Achievement in Statistics
Scores on the midterm and final examinations (0-100 scale) in the introductory statistics course were used as measures of statistics achievement. Each of these achievement tests consisted of ten open-ended questions related to descriptive statistics (frequency tables, measures of central tendency, measures of dispersion, types of distributions, and measures of association) and inferential statistics (basic concepts in inferential statistics, estimation, hypothesis testing, t-test, Chi-square test, and types I and II errors).
The following is an example question taken from the midterm test:
The following table summarizes the results for two groups of college students in a midterm examination in introductory statistics.
| Statistic | Group 1 | Group 2 |
Both achievement tests were timed, and open books and notes were allowed during the test. The total scores on the two statistics achievement tests were provided by the introductory statistics instructor, who, as the only examiner, graded each of the examination questions. Cronbach's α reliability of the scores on the midterm and the final tests (Score I and II) was .78 and .58, respectively.
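For reference, Cronbach's α reported throughout this article is the standard internal-consistency coefficient; for a k-item scale with item variances \sigma_i^2 and total-score variance \sigma_X^2 it is

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)$$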
The Arabic version of Cruise et al.’s (1985) Statistical Anxiety Rating Scale (STARS) was used to measure statistics anxiety. The English version consists of 51 statements organized in two parts. The first part includes 23 situations related to statistics anxiety. Participants responded to each possible anxiety-inducing situation (e.g., reading a journal article that includes some statistical analysis) by using a 1 to 5 scale, where 1 indicates that the situation causes no anxiety and 5 indicates that it causes a great deal of anxiety. The second part includes 28 statements describing respondents' feelings towards statistics (e.g., I feel statistics is a waste). Responses to each statement ranged from 1 to 5, where 1 indicates strong disagreement with the content of the statement and 5 indicates strong agreement. High scores on the STARS mean more anxiety. Exploratory factor analysis performed by Cruise et al. (1985) led them to conclude that STARS measures six factors to which they referred as worth of statistics, interpretation anxiety, test/class anxiety, computation self-concept, fear of asking for help, and fear of statistics teachers. In effect, worth of statistics and computation self-concept measure worthlessness of statistics and lack of computation self-concept, and that is how they are referred to hereafter. Based on pilot results from an Arabic speaking sample of 170 pre-service teachers (Nasser 1999), twenty-four items were eliminated from the Arabic version due to departure from normality and minor item-total correlation coefficients (within each subscale). Exploratory factor analysis with principal axis factoring as the extraction method and oblique rotation of the scores on the retained 27 items of the Arabic version of the STARS (Nasser 1999) indicated six factors, which accounted for 61% of the total variance. The number of items per factor ranged from three to seven, and the reliability coefficients of the factor scores as measured by Cronbach's α for the current sample ranged from .64 to .89. STARS was used in this study because it was believed, in light of the documented information, that it measures aspects that are specific to learning statistics, such as fear of statistics teachers. Furthermore, Cruise et al. (1985) documented the development and validation of the STARS, and the scale was successfully used for measuring statistics anxiety in several studies (Bell 1998; Onwuegbuzie 1998, 2000).
Attitudes Toward Statistics
The Arabic version of Schau, Dauphinee, and Del Vecchio's (1995) Survey of Attitudes toward Statistics (SATS) was used to assess attitudes toward statistics. The original version of SATS (© 1995 Schau et al.) contained 32 Likert-type items; the current English version contains 28 items with a seven-point response scale ranging from 1 (strongly disagree) to 7 (strongly agree). Higher scores indicate more positive attitudes. Schau et al. (1995) indicated that SATS measures four dimensions of attitudes toward statistics to which they referred as affect (e.g., I feel insecure when I have to do statistics), cognitive competence (e.g., I can understand statistics equations), value (e.g., statistical thinking is not applicable in my life outside my job), and difficulty (e.g., statistics is a subject quickly learned by most people). The Arabic version used in the current study consisted of 24 items. Four items were eliminated from the original due to minor item-total correlation coefficients and departure from the factor structure proposed by the SATS (© 1995 Schau et al.) developers. The number of items per factor ranged from four to eight, and the internal consistency coefficients (Cronbach's α) of the factor scores from the present sample ranged from .65 to .80. It should be noted that some revisions were made to the instrument to suit a single administration midway through the statistics course, prior to the midterm test (the original SATS offers pre- and post-course measures of attitudes).
Motivation was assessed by the Arabic version of the Motive for Success (MS) subscale of Nygärd and Gjesme’s (1973) more comprehensive Achievement Motivation Scale (AMS). The MS is a Likert-type scale, which consists of 15 positively phrased items with four response points ranging from 1 (almost never) to 4 (almost always). Higher scores indicate more motivation to succeed. These items were devised to measure the capacity of individuals to anticipate positive feelings in achievement situations, which in turn was expected to affect their level of motivation to engage themselves in the achievement situation (Man, Nygärd, and Gjesme 1994) (sample item: I feel pleasure when working on tasks that are somewhat difficult to me). The reliability coefficient associated with this scale for the present sample is .83. The MS scale was translated into different languages, such as German, Russian, Chinese and Arabic, and was evaluated and characterized as well-tuned to achievement motivation theory (Man, Nygärd, and Gjesme 1994).
Attitudes Toward Mathematics
The Arabic version of Fennema and Sherman's (1976) Mathematics Attitude Scale (MAS) was used to measure attitudes toward mathematics. The MAS is a ten-item Likert-type scale with seven response points, where 1 indicates strong disagreement and 7 strong agreement (sample item: Mathematics is very interesting and fun). Higher scores indicate more positive attitudes toward mathematics. Factor analysis with principal axis factoring as an extraction method yielded one factor when it was used with data from a previous sample of 170 Arabic speaking pre-service teachers drawn from the same population as the current sample (Nasser 1999). Cronbach's α for the mathematics attitudes scores in the present study is .93. This scale was selected for use in this study because it has a well-documented development and validation procedure (Fennema and Sherman 1976; Broadbrooks, Elmore, Pedersen, and Bleyer 1981) and it was extensively used to measure attitudes toward mathematics among American students and students from other countries (e.g., Nasser 1999; Wisenbaker et al., 1997, 1998).
The Arabic version of Parker and Plake’s (1982) Revised Mathematics Anxiety Rating Scale (RMARS) was used to measure mathematics anxiety. This Likert-type scale includes 24 statements designed to identify respondents' anxiety concerning a variety of activities related to mathematics in a statistics-related situation. Responses to the RMARS statements range from 1 (causes little anxiety) to 5 (causes high anxiety). Higher scores indicate higher levels of mathematics anxiety. Factor analysis conducted by Parker and Plake (1982) indicated two factors, which accounted for 60% of the total variance and to which they referred as “mathematics evaluation anxiety” (8 items) and “mathematics learning anxiety” (16 items). Parker and Plake (1982) reported an internal consistency coefficient (Cronbach's α) of .98 for the RMARS scores. Exploratory factor analysis with principal axis factoring as the extraction method and oblique rotation of RMARS scores from a sample of 170 Arabic speaking pre-service teachers (Nasser 1999) indicated three factors, which accounted for 52% of the total variance. These three factors were referred to as mathematics evaluation anxiety (8 items, e.g., taking an examination in a math course), mathematics learning anxiety (12 items, e.g., listening to a lecture in a math class), and mathematics interpretation anxiety (4 items, e.g., reading and interpreting graphs or charts). Internal consistency estimates (Cronbach's α) of scores on each of the three subscales and on the total scale, based on the present sample, were .84, .91, .84, and .94, respectively. RMARS was used for measuring mathematics anxiety in the current sample because it has a well-documented development process, high reliability and high predictive validity, based on a graduate sample (Parker and Plake 1982). RMARS also includes items that measure anxiety in a statistics-related context and was widely used in various studies (Wisenbaker et al., 1997, 1998, 2000).
The number of mathematics units studied by the student and his/her rescaled high school mathematics grade (0-120 scale) were used to measure mathematical aptitude. Israeli high school students study mathematics at one of three different levels as indicated by the number of so-called units they take: low (3 units), intermediate (4 units), and high (5 units). These mathematics levels differ in content (mathematics topics covered) and depth of mathematics studies. Topics studied at the 3-unit level include basic algebra, geometry, trigonometry, and elective topics in introductory calculus and statistics. In practice, the latter two topics are rarely taught to students at this level. Topics included within the 4- and 5-unit mathematics curriculum are calculus (at different levels), probability and descriptive statistics, alongside advanced algebra, trigonometry and geometry (especially at the 5-unit level). The high school mathematics grade is comparable across schools because it is obtained from a standardized national examination (matriculation) that is administered concurrently to all students who have studied mathematics at the same level. Higher education institutes rescale grades at different mathematics levels to make them comparable. The final numbers of items in each scale and the associated reliability coefficients (Cronbach's α) are presented in Table 1.
Participants responded to all instruments, except the two achievement tests, during their statistics class prior to the midterm examination. The course instructor who taught the introductory statistics course used traditional instructional strategies and provided minimal evaluative feedback to students during the course (assignments were not graded and students were not given any credit for doing them). In addition, no computer software or computer assignments were involved in this introductory statistics course. The instructor’s teaching and evaluation styles are outlined here because they may have an effect on students’ attitudes, anxiety and achievement in relation to statistics and hence they should be taken into consideration in any interpretation of the results of this study.
The means, the standard deviations, as well as the values of skewness and kurtosis for participants' scores on each of the research variables, and the intercorrelations among these scores, were obtained and summarized in Table 1 and Table 2.
As shown in Table 1, the means of the two statistics scores were relatively high, yet, on average, the final test scores (score II) were higher and less variable than those on the midterm test (score I). In general, pre-service teachers expressed positive attitudes toward mathematics and statistics, were not anxious about mathematics and statistics and were moderately motivated to succeed. Furthermore, they demonstrated a relatively high mathematical aptitude as reflected by the mean number of mathematics units and high school mathematics scores. Participants' impressive mathematics grades are not surprising given the strict criteria for acceptance to the teacher-training program. The variability of students’ mathematics and statistics scores and their responses to the cognitive and affective measures as indicated by the standard deviations were reasonable given the way they were measured.
Screening data for nonnormality prior to conducting the analysis is an important step in every multivariate study, because a significant departure from normality, if not addressed, can distort the results of the data analysis (Fan, Thompson, and Wang 1999; Fidell and Tabachnick 2003; Nasser and Wisenbaker 2002; Tabachnick and Fidell 2001; West, Finch, and Curran 1995). Several methods for assessing nonnormality have been proposed in the literature. The most common are testing the significance of the skewness and kurtosis values by means of a Z test with a conservative p value (.01 or .001), and examining frequency histograms and/or normal probability plots (Bollen 1989; Tabachnick and Fidell 2001).
(Table 1, which reported the number of items, means, standard deviations, skewness, kurtosis, and score reliabilities for each measure, is not reproduced here. Its footnotes indicated that transformed scores were shown in a second line and that the mathematics statistics were reported separately for the 3-, 4-, and 5-unit levels.)
In general, skewness or kurtosis values greater than 1 indicate a distribution that differs significantly from the normal symmetric distribution (SPSS 11, Results Coach); this criterion was used to detect nonnormality in the present study.
Stat score II, lack of computation self-concept, and fear of statistics teachers scores had skewness and kurtosis that exceeded 1. The first measure was negatively skewed while the latter two were positively skewed. These three measures were corrected for nonnormality using Fox’s (1997, p. 67) method, whereby stat score II (negatively skewed) was transformed using a power transformation (x to x^2), while lack of computation self-concept and fear of statistics teachers scores (positively skewed) were transformed using a log transformation (x to log x). The skewness and kurtosis values for the remaining measures were all below 1 and were assumed to be within the expected range of chance fluctuation in these statistics.
Pearson product moment correlation coefficients between each of the affective and cognitive measures and the two measures of achievement in statistics are presented in the left two columns of Table 2. As can be seen in Table 2, these correlation coefficients ranged from nil to moderate (.04-.42). Two observations are worth mentioning as regards these correlations. First, the magnitude of the correlations between the cognitive and affective variables examined in the current study and the first statistics score were, in general, somewhat higher than their counterparts with the second statistics score. Second, the two statistics test scores correlated .30 to .42 with the two mathematical aptitude measures. These correlations are moderate (Cohen 1988). Medium correlations were also found between the two statistics test scores and the measure of attitudes towards mathematics (.35 and .34). Correlations equal to or larger than .20 (in absolute value) were found between the first statistics score and each of: the lack of computation self-concept component of statistics anxiety, the cognitive component of attitudes toward statistics, and the learning and interpretation components of mathematics anxiety. The counterpart correlations with the second statistics score were similar, except the one with the interpretation component of mathematics anxiety, which was smaller than .20. The correlations between the two statistics scores and the remaining cognitive and affective factors ranged from about zero to small (r ≤ .20). All nonzero correlations were in the expected direction, and those equal to or greater than .16 (absolute value) were statistically significant.
As to the correlations between measures of statistics and mathematics anxiety, attitudes towards mathematics and statistics, the most salient correlations were observed between the cognitive component of attitudes towards mathematics and the lack of computation self-concept component of statistics anxiety (-.62), between the worthlessness of statistics component of statistics anxiety and the value component of attitudes towards statistics (-.76), between mathematics attitudes and measures of learning, evaluation, and interpretation mathematics anxiety (-.76, -.57, -.61, respectively), and between test/class anxiety and mathematics evaluation anxiety (.62).
|Statistics Achievement (0-100)|
|1. Stat score 1|
|2. Stat score 2||0.68|
|Statistics Anxiety (1-5)|
|3. Worthlessness of statistics||-0.14||-0.10|
|4. Fear of asking for help||0.09||0.04||0.20|
|5. Test class/anxiety||-0.10||-0.07||0.42||0.20|
|6. Interpretation anxiety||-0.12||-0.06||0.22||0.31||0.10|
|7. Lack of computation self-concept||-0.24||-0.21||0.51||0.15||0.33||0.10|
|8. Fear of statistics teachers||-0.04||-0.10||0.37||0.23||0.14||0.13||0.39|
|Attitudes toward Statistics (1-7)|
|13. Motive to succeed (1-4)||0.11||0.05||-0.34||-0.11||-0.23||-0.12||-0.29||-0.06||0.24||0.32||-0.09||0.28|
|Attitudes towards Mathematics|
|14. Math attitudes (1-7)||0.35||0.34||-0.41||-0.09||-0.44||-0.07||-0.53||-0.24||0.47||0.37||0.18||0.49||0.32|
|Mathematics Anxiety (1-5)|
|15. Learning anxiety||-0.28||-0.25||0.40||0.12||0.41||0.19||0.43||0.20||-0.46||-0.37||-0.21||-0.52||-0.27||-0.76|
|16. Evaluation anxiety||-0.13||-0.14||0.33||0.01||0.62||0.02||0.37||0.11||-0.37||-0.31||-0.18||-0.51||-0.24||-0.57||0.65|
|17. Interpretation anxiety||-0.20||-0.16||0.38||0.25||0.42||0.37||0.38||0.22||-0.41||-0.35||-0.19||-0.43||-0.23||-0.61||0.70||0.54|
|18. Math units (3-5)||0.32||0.30||-0.07||0.10||-0.01||-0.04||-0.18||-0.06||0.20||0.14||0.23||0.11||-0.02||0.30||-0.28||-0.07||-0.14|
|19. Math score (0-120)||0.42||0.34||-0.16||0.06||-0.08||-0.11||-0.25||-0.11||0.28||0.21||0.17||0.15||0.05||0.43||-0.34||-0.16||-0.30||0.51|
Note. Correlation coefficients equal to or greater than .16 (in absolute value) are statistically significant at α = .05.
The correlations between the transformed values of stat score 2, lack of computation self-concept, and fear of statistics teachers and the other variables are reported in the table.
In order to examine the relationship between achievement in statistics and cognitive and affective variables, a structural equation modeling (SEM) analysis via EQS (version 5.7b) procedure was employed. This procedure allows an assessment of the adequacy-of-fit of a theoretical (hypothesized) model to the data, as indicated by the degree to which the specified model leads to an exact reproduction of the population covariance matrix of the manifest variables (Bollen and Long 1993). The structural modeling approach tends to serve two general purposes in research: evaluating the degree to which the hypothesized model fits the data at hand, and estimating the parameters. A structural model with acceptable fit informs the researcher that the empirical relationships among the variables are consistent with those implied by the model whereas model parameter estimates indicate the magnitude and the direction of the relationships among the variables.
The first indicator of each multi-indicator construct was selected as the reference indicator (its loading fixed to 1) to set the scale of the latent variable, and the error variance for each single-indicator construct was fixed to e = the variance of the associated variable multiplied by (1 - Cronbach's α) (McDonald and Seifert 1999).
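The error-variance fix described above reduces to a one-line formula. The sketch below is a minimal illustration in Python, with an assumed reliability value; it is not taken from the study's EQS setup.

```python
# Error variance for a single-indicator latent variable:
# var(x) * (1 - Cronbach's alpha), per the approach cited above.
import numpy as np

def fixed_error_variance(x, cronbach_alpha):
    return np.var(x, ddof=1) * (1.0 - cronbach_alpha)

# Example with assumed numbers: an indicator with sample variance 0.36 and alpha = .75
# gets its error variance fixed to 0.36 * 0.25 = 0.09 instead of being freely estimated.
```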
The hypothesized structural model (Figure 1) yielded a very poor fit. Furthermore, a warning in the output indicated that the test results might not be appropriate due to a condition code whereby statistics anxiety and mathematics anxiety are linearly dependent on other variables. The structural coefficient between attitudes and statistics anxiety, moreover, was very large (b = -.99), indicating that the scores on these constructs are highly redundant. This diagnosis led to the conclusion that the initial intent of assessing statistics anxiety separately from attitudes toward statistics and mathematics anxiety using the SATS (© 1995 Schau et al.), RMARS, and STARS scales was unsuccessful. The practical translation of this conclusion was to remove one of the anxiety constructs from the model. Because the mathematics anxiety measures for the current sample were more reliable than four of the six measures of statistics anxiety, because the relationship of mathematics anxiety with attitudes toward statistics (and therefore with achievement in statistics) is better established in the literature than the parallel relationships involving statistics anxiety, and because the RMARS includes items that measure anxiety in a statistics-related context, it was decided to retain mathematics anxiety in the model while leaving out statistics anxiety.
Because the original model did not fit the data well, an alternative model (Model I) was tested. Model I was similar to the original one except for excluding statistics anxiety, and it fit the data well (see Table 3). Nonetheless, three structural paths in Model I were not statistically significant (p > 0.05). These corresponded to the effect of motivation on achievement in statistics, in the presence of the effects of attitudes toward statistics and mathematical aptitude, and the effects of attitudes toward mathematics and mathematical aptitude on attitudes toward statistics, in the presence of the effects of motivation and mathematics anxiety. Consequently, three additional alternative models were tested (Models II to IV). Model II resulted from discarding (fixing to zero) the nonsignificant path between motivation and achievement in statistics from Model I. Model III was created by removing the nonsignificant path between attitudes toward mathematics and attitudes toward statistics from Model II. Model IV resulted from discarding the path between mathematical aptitude and attitudes toward statistics from Model III (see Table 3). The chi-square difference (Δχ², df = 1) between successive alternative models was not statistically significant. The goodness-of-fit results for the originally hypothesized model and the four alternative ones are shown in Table 3, and the most parsimonious model (Model IV) is displayed in Figure 2.
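The nested-model comparisons reported above rest on the chi-square difference test. A minimal sketch is given below, assuming only that each model's chi-square is available; it simply evaluates whether fixing one path to zero worsens fit significantly.

```python
# Chi-square difference test for nested models (df = 1 when a single path is fixed to zero).
from scipy.stats import chi2

def chi_square_difference_p(chi2_restricted, chi2_full, df_diff=1):
    delta = chi2_restricted - chi2_full
    return chi2.sf(delta, df_diff)

# If the returned p-value exceeds .05, the more parsimonious (restricted) model is retained.
```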
All the measurement coefficients in the final model (Model IV) are in the expected direction (positive) and statistically significant. Of more interest for understanding the structure of achievement in statistics was the structural part of the model. Strong mathematical aptitude and more positive attitudes toward statistics increased achievement in statistics. Although both effects were statistically significant, the hypothesized effect of mathematical aptitude on achievement in statistics was substantially larger than that of attitude toward statistics. The hypothesized effects of motivation and mathematics anxiety on attitudes toward statistics were statistically significant (p < 0.05). Results indicated that stronger motivation to succeed and lower level of mathematics anxiety intensified positive attitudes toward statistics. However, the effect of mathematics anxiety on attitudes toward statistics is relatively large as compared with the counterpart effect of motivation.
As for amount of variance accounted for, it was found that mathematical aptitude, mathematics anxiety, attitudes toward mathematics, attitudes toward statistics and motivation, together, accounted for 36% of the variance in achievement in the introductory statistics course. In other words, 36% of the variance in achievement in statistics can be explained by the cognitive and affective factors included in the model. It was also shown that mathematical aptitude alone accounted for 22% of the variance in achievement in statistics, while the remaining affective and cognitive variables accounted for 14% of the variance in achievement in statistics.
Figure 2. The Final Structural Model of Achievement in Statistics (significant structural paths only).
It is important to note that the combination of mathematical aptitude, mathematics anxiety, attitudes toward mathematics and motivation accounted for a substantial (51%) amount of the variance in attitudes toward statistics.
Consistent with findings from previous studies (Lalonde and Gardner 1993; Nasser 1998, 1999; Wisenbaker, Nasser, and Scott 1998) this study revealed that the correlations between the two measures of statistics achievement and measures of mathematical aptitude, attitudes towards mathematics, the cognitive component of attitudes toward statistics, learning anxiety and lack of computation self-concept, were all moderate (.21 to .42 in absolute values) (Cohen 1988). Meanwhile, the correlations between the two measures of achievement in statistics and the remaining variables were small (|r| < .2). On the one hand, it might be that these results reflect the actual magnitude of the relationships between the examined variables. On the other hand, it is possible that the unique characteristics of the sample (mostly female students with strong mathematical aptitude and limited variability in their responses to some of the manifest variables) and the low reliabilities (below .70) of five of the 19 measures are responsible for the modest (attenuated) correlations.
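The attenuation argument in the paragraph above can be made concrete with the classical correction formula, in which an observed correlation is divided by the square root of the product of the two reliabilities. The numbers in the sketch below are illustrative assumptions, not values from the study.

```python
# Classical correction for attenuation: r_true = r_observed / sqrt(rel_x * rel_y).
def disattenuate(r_observed, rel_x, rel_y):
    return r_observed / (rel_x * rel_y) ** 0.5

print(round(disattenuate(0.30, 0.65, 0.80), 2))  # an observed .30 becomes roughly .42
```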
The two measures of mathematics and statistics anxiety and the two measures of attitudes toward mathematics and statistics were correlated in the expected direction. The nontrivial correlations among several measures of anxiety and attitudes related to statistics lend some support to the widely known argument that feelings about statistics may stem from feelings about mathematics. This is known to be especially so with students who have either limited prior experience in statistics or none, as was the case with the sample in this study. These results also raise a question regarding the extent to which attitudes and anxiety as measured in the current study are distinct constructs. Furthermore, Wisenbaker and Scott’s (1997) claim that negative attitudes may convert to anxiety provides a reasonable interpretation for the negative significant correlation between measures of these two constructs. The results of the present study also provide some confirmation for Gal and Ginsburg’s (1994) argument that students’ preconceptions regarding the nature of statistics (as requiring high cognitive ability) could produce anxiety.
The unsuccessful attempt to measure statistics anxiety separately from attitudes toward statistics and from mathematics anxiety resulted in a poor fit of the hypothesized model to the data. The alternative model, in which statistics anxiety was removed, yielded an adequate fit although it included three nonsignificant structural coefficients. The three other alternative models, from which an ascending number of nonsignificant paths were removed, one at a time, also fit the data well. The most parsimonious model (Model IV) explained about the same amount of variance in achievement in statistics as the other, less parsimonious, alternative models (36% and 37%, respectively).
As to parameter estimates, particularly the structural coefficients, the small but positive (β = .15) significant effect of attitudes toward statistics on achievement in statistics, in the presence of the path between mathematical aptitude and achievement in statistics, is consistent with findings from previous research (Nasser 1999; Wisenbaker et al. 1998, 2000). Motivation had a significant, albeit small, positive effect on attitudes toward statistics and a minor mediated positive effect (through attitudes toward statistics) on achievement in statistics. The existing literature yields no empirical basis for interpreting the link between motivation and achievement in statistics. Despite theoretical, and some empirical, findings linking motivation with achievement in different subjects (Man et al. 1994), to our knowledge the direct link between these two constructs has not been examined. Although motivation was one of the variables that Lalonde and Gardner (1993) examined in relation to achievement in statistics, only the combined effect of motivation and attitudes on achievement in statistics, mediated through effort, was examined; thus the distinct effect of motivation on achievement in statistics has not been examined so far. In contrast, the positive but modest effect of motivation on attitudes accords with findings reported by Auzmendi (1991). The modest and minor effects that motivation has on attitudes toward statistics and on achievement in statistics may also be associated with validity problems caused by using a measure of general motive for success as an indicator of achievement motivation in a specific context such as statistics. To be realistic, it is important to point out that although motivation is a necessary condition for learning, other factors, such as quality of instruction, type of evaluative feedback, amount of effort, expectations, self-efficacy, and self-regulation, are required as well, and they are therefore recommended as subjects for future research.
Despite the frequently heard argument that understanding and applying statistics to empirical data does not require advanced mathematics (e.g., Galagedera, Woodward, and Degamboda 2000; Lalonde and Gardner 1993), mathematical aptitude accounted for 22% of the variance in statistics achievement, while the remaining cognitive and affective variables together explained only 14% of the variance. Thus mathematical aptitude turned out to be the most important of the variables used in the current study for modeling the structure of achievement in statistics. That is, mathematical aptitude was the best predictor of achievement in statistics for the current sample and for the way achievement was defined in this study. The dominance of mathematical aptitude in predicting achievement in statistics confirms findings reported in previous research (Lalonde and Gardner 1993; Nasser 1998, 1999; Wisenbaker and Scott 1997). It was shown that stronger mathematical aptitude is associated with more positive attitudes toward mathematics and statistics, lower levels of mathematics anxiety, and higher achievement in statistics. Therefore, any remedial plans for improving students' feelings about, and achievement in, statistics should improve their mathematics ability as well as their feelings toward mathematics and statistics. It is reasonable to believe that strengthening one of these components while ignoring the other will compromise the outcomes of teaching statistics.
In spite of the modest effects of some of the affective variables on achievement in introductory statistics, they can, alongside mathematical aptitude, explain a considerable amount of the variance in achievement in statistics (36%). It should be reiterated that the substantial improvement in the goodness of model fit as a result of removing statistics anxiety implies that the attempt using STARS to measure statistics anxiety separately from mathematics anxiety and attitudes toward statistics was unsuccessful. It is plausible that the measures of statistics anxiety and attitudes toward statistics, which were used in the current study, targeted similar dimensions of the two constructs. However, if one believes that these are indeed two separate constructs, more careful work should be done to refine their definitions and to develop instruments that yield more accurate and valid scores. Furthermore, the temporal spacing between the collection of the attitude, anxiety, motivation data, the midterm, and then the final test, might have affected the variability of the responses and their relationships as well.
The study was limited by the fact that the sample consisted overwhelmingly of able female students in a teacher-training college, so the structural model reported here may not generalize to different samples. Another limitation of the study is rooted in the assessment method by which achievement was measured. Two points are noteworthy in this regard. First, achievement was assessed by a limited number of open-ended questions. Given the limited ability of this kind of question to adequately represent the learned material and skills, and given the subjectivity involved in this type of evaluation, the ability of the two scores to adequately represent achievement in statistics is uncertain. Second, reliance on only one method of assessment (written examinations with open-ended questions), as was the case in the current study, does not allow for adequate evaluation of student performance (Gal and Garfield 1997). Colvin and Vos (1997), Gal and Garfield (1997), and Schau and Mattern (1997), among others, have argued that in order to properly assess student performance in statistics, various assessment methods should be applied to reveal students' understanding of the major ideas in statistics and their ability to select and adequately apply statistical tools when making sense of realistic data. This was not done in the current study. Further research is called for to test the validity and the generalizability of the current results and conclusions. This study is also limited by the fact that it did not address changes over time. Attitudes and feelings about statistics can change in the course of learning the subject, so the magnitude and/or the direction of the relationships in the structural model can also change.
Also at issue are the sample size and the low reliability of some of the measures, especially that of the second statistics test scores. As indicated by results from previous studies, a small sample size, especially with highly parameterized models, and low reliability of indicators, as was the case in the present study, can have an adverse effect on the variance of parameter estimates (Gerbing and Anderson 1985; Jackson 2001) and on values of summary fit indexes (Anderson and Gerbing 1984; Marsh, Hau, Balla, and Grayson 1998; Nasser and Wisenbaker 2002). In order to assess the impact of these two factors and to test the accuracy and validity of the results reported here, further research with a larger sample and more reliable measures is imperative.
Regardless of the above limitations, it is reasonable to conclude that the direction and the magnitude of the correlations between affective and cognitive factors and their effects on achievement in statistics of Arabic speaking pre-service teachers are, in general, consistent with findings reported in the literature. However, this conclusion should be empirically tested in a cross-cultural study, which is recommended for future research. It is also important to indicate that although modest, the results of the current study contributed to the scant knowledge about the structure of achievement in statistics and furnished the basis for the further research that is needed to address unresolved questions involving learning statistics. Findings also have implications for planning and teaching this subject. For example, it might be necessary and beneficial to design remedial mathematics courses to be taken prior to or concurrently with introductory statistics courses in order to improve mathematical aptitude, decrease mathematics anxiety, intensify positive attitudes toward mathematics, and consequently strengthen positive motivation. This in turn, will improve attitudes towards and achievement in statistics. However, there is still a long way to go if we want to gain a comprehensive understanding of correlates of students’ achievement in statistics, and if we wish to make statistical studies pleasant as well as successful.
The author wishes to thank Dr. Joseph Wisenbaker, Dr. Barbara Fresko, the Editor, the Associate Editor, and the two anonymous reviewers for their comments.
Adams, N. A., and Holcomb, W. R. (1986), "Analysis of the relationship between anxiety about mathematics and performance," Psychological Reports, 59, 943-948.
Aiken, L. R. (1980), "Attitudes measurement research," in Recent developments in affective measurement, ed. D. A. Payne, San Francisco: Jossey-Bass, pp. 1-24.
Anderson, J. C., and Gerbing, D. W. (1984), "The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis," Psychometrika, 49, 155-173.
Auzmendi, E. (1991, April), "Factors related to attitudes toward statistics: A study with a Spanish sample," paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL.
Bell, A. J. (1998), "International students have statistics anxiety too," Education, 118, 634-636.
Bollen, K. A. (1989), Structural equations with latent variables, New York: Wiley.
Bollen, K. A., and Long, J. S. (1993), Testing structural equation models, Newbury Park, CA: Sage.
Broadbrooks, J. W., Elmore, B. P., Pederson, K., and Bleyer, R. D. (1981), "A construct validation study of the Fennema-Sherman Mathematics Attitudes Scales," Educational and Psychological Measurement, 41, 551-557.
Cohen, J. (1988), Statistical power analysis for the behavioral sciences (2nd ed.), Hillsdale, NJ: Erlbaum.
Colvin, S., and Vos, E. K. (1997), "Authentic assessment models for statistics education," in The assessment challenges in statistics education, eds. I. Gal and J. B. Garfield, Amsterdam: IOS, pp. 27-36.
Cruise, J. R., Cash, R.W., and Bolton, L. D. (1985), "Development and validation of an instrument to measure statistical anxiety," in Proceedings of the Section on Statistical Education, American Statistical Association, pp. 92-98.
Del Vecchio, A. (1995), "A psychological model of introductory statistics course completion," paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
Fan, X., Thompson, B., and Wang, L. (1999), "The effects of sample size, estimation methods, and model specification on SEM fit indices," Structural Equation Modeling, 6, 56-83.
Feinberg, F., and Halprin, S. (1978), "Affective and cognitive correlates of course performance in introductory statistics," Journal of Experimental Education, 46, 11-18.
Fennema, L., and Sherman, J. A. (1976), "Fennema-Sherman mathematics attitude scales: Instruments designed to measure attitude toward the learning of mathematics by females and males," Journal for Research in Mathematics Education, 7, 324-326.
Fidell, L. S., and Tabachnick, B. G. (2003), "Preparatory data analysis," in Handbook of psychology: Research methods in psychology (Vol. 2), eds. J. A. Schinka and W. F. Velicer, New York: John Wiley & Sons, pp. 115-141.
Fontaine, M. A. (1991), "Impact of social context on the relationship between achievement motivation and anxiety, expectation or social conformity," Personality and Individual Differences, 12, 457-466.
Gable, K. R. and Wolf, B. M. (1993), Instrument development in the affective domain (2nd ed.), Boston: Kluwer.
Gal, I., and Garfield, J. B., (1997), "Curricular goals and assessment challenges in statistics education," in The assessment challenges in statistics education, eds. I. Gal and J. B. Garfield, Amsterdam: IOS, pp. 1-13.
Gal, I., and Ginsburg, L. (1994), "The role of beliefs and attitudes in learning statistics: Towards an assessment framework," Journal of Statistics Education [Online], 2(2). www.amstat.org/publications/jse/v2n2/gal.html
Gal, I., Ginsburg, L., and Schau, C. (1997), "Monitoring attitudes and beliefs in statistics education," in The assessment challenges in statistics education, eds. I. Gal and J. B. Garfield, Amsterdam: IOS, pp. 37-51.
Galagedera, D. (1998), "Is remedial mathematics a real remedy? Evidence from learning statistics at tertiary level," International Journal of Mathematics Education, Sciences and Technology, 29, 475-480.
Galagedera, D., Woodward, G., and Degamboda, S. (2000), "An investigation of how perceptions of mathematics ability can affect elementary statistics performance," International Journal of Mathematics Education, Sciences and Technology, 31, 679-689.
Gerbing, D. W., and Anderson, J. C. (1985), "The effects of sampling error and model characteristics on parameter estimation for maximum likelihood confirmatory factor analysis," Multivariate Behavioral Research, 20, 255-271.
Gerson, F. R. (1999), "The people side of performance improvement," Performance Improvement, 38, 19-23.
Glencross, M. J., and Cherian, V. I. (1992), "Attitudes toward applied statistics of postgraduate students in Transkei," Psychological Reports, 70, 67-75.
Hunsley, J. (1987), "Cognitive processes in mathematics anxiety and test anxiety: the role of appraisals, internal dialogue and attributions," Journal of Educational Psychology, 79, 388-392.
Keller, L. M. (1983), "Motivational design of instruction," in Instructional design theories and models: An overview of their current status, ed. C. M. Reigeluth, Hillsdale, N. J.: Erlbaum.
Krathwohl, D. R. Bloom, B. S., and Masia, B. B. (1964), Taxonomy of educational objectives: The classification of educational goals, Handbook II: Affective domain, New York: David.
Lalonde, R, N., and Gardner, R. C. (1984), "Investigating a causal model of second language acquisition: Where does personality fit?," Canadian Journal of Behavioral Science, 16, 224-237.
Lalonde, R. N., and Gardner, R. C. (1993), "Statistics as a second language: Predicting performance of psychology students," Canadian Journal of Behavioral Science, 25, 108-125.
Levin, T., and Long, R. (1981), Effective instruction, Alexandria, VA: Association for Supervision and Curriculum Development.
Llabre, M., and Suarez, E. (1985), "Predicting math anxiety and course performance in college women and men," Journal of Counseling Psychology, 32, 283-287.
Man, F., Nygärd, R., and Gjesme, T. (1994), "The achievement motives scale (AMS): theoretical basis results from a first try-out of Czech form," Scandinavian Journal of Educational Research, 38, 3-4.
Marsh, H. W., Hau, K.-T., Balla, J. R., and Grayson, D. (1998), "Is more ever too much? The number of indicators per factor in confirmatory factor analysis," Multivariate Behavioral Research, 33, 181-220.
McDonald, A. R., and Seifert, F. C. (1999, October), "Full and limited information strategies for incorporating measurement error in regression models," paper presented at the Southern Management Association Meeting. Atlanta GA.
McLeod, D. B. (1992), "Research on affect in mathematics learning in the JRME: 1970 to present," Journal of Research in Mathematics Education, 25, 637-647.
Nasser, F. (1998, July), "Attitude toward statistics and statistics anxiety among college students: Structure and relationship to prior mathematics experience and performance in introductory statistics course," paper presented at the Annual Meeting of the Stress and Anxiety Society (STAR), Istanbul, Turkey.
Nasser, F. (1999), "Prediction of statistics achievement," in Proceedings of the International Statistical Institute 52nd Conference, Helsinki, Finland, (3), pp. 7-8.
Nasser, F., and Wisenbaker, J. (2002, April), "A Monte Carlo study investigating the impact of item parceling strategies on measures of fit in confirmatory factor analysis," paper presented at the Annual Meeting of the American Educational Research Association. New Orleans, LA.
Nygärd, R., and Gjesme, T. (1973), "Assessment of achievement motives: comments and suggestions," Scandinavian Journal of Educational Research, 17, 39-46.
Olson, J. M., and Zanna, M. P. (1993), "Attitude and attitude change," Annual Review of Psychology, 44, 117-154.
Onwuegbuzie, J. A. (1998), "Statistics anxiety: A function of learning style?," Research in Schools, 5, 43-52.
Onwuegbuzie, J. A. (2000), "Statistics anxiety and the role of self-perception," Journal of Educational Research, 93, 323-330.
Ossola, Y. (1970), "Attitude toward statistics of Flemish students in psychology and education," Psychological Belgica, 10, 83-98.
Parker, C. S., and Plake, B. S. (1982), "The development and validation of the Revised Version of the Mathematics Anxiety Rating Scale," Educational and Psychological Measurement, 42, 551-557.
Perney. J., and Ravid, R. (1991), "The relationship between attitude toward statistics, mathematics self-concept, test anxiety and graduate students' achievement in introductory statistics course," unpublished manuscript, National College of Education, Evanston, IL.
Ramsden, M. J. (1992), "If it's enjoyable, is it science?," School Science Review, 73, 65-71.
Roberts, D. M., and Bilderback, E. W. (1980), "Reliability and validity of a statistics attitude survey," Educational and Psychological Measurement, 40, 235-238.
Roberts, D. M., and Saxe, J. E. (1982), "Validity of statistics attitude survey: A follow up study," Educational and Psychological Measurement, 42, 907-912.
Schau, C., and Mattern, N. (1997), "Assessing students' connected understanding of statistical relationship," in The assessment challenges in statistics education, eds. I. Gal and J. B. Garfield, Amsterdam: IOS, pp. 91-104.
Schau, C., Stevens, J., Dauphinee, T., and Del Vecchio, A. (1995), "The development and validation of the Survey of Attitude toward Statistics," Educational and Psychological Measurement, 55, 868-875.
Scott, J. S., and Wisenbaker, J. M. (1994, July), "A multiple method study of student attitude toward statistics," paper presented at the Fourth International Conference on Teaching Statistics, Marrakech, Morocco.
Simon, J. L., and Bruce, P. (1991), "Resampling: a tool for everyday statistical work," Chance, 4, 22-32.
Tabachnick, B. G., and Fidell, L. S. (2001), Using multivariate statistics (4th ed.), Boston, MA: Allyn & Bacon.
Thompson, A. P., and Smith, L. M. (1982), "Conceptual, computational, and attitudinal correlates of student performance in introductory statistics," Australian Psychologist, 17, 191-197.
Uguroglu, M., and Walburg, H. J. (1979), "Motivation and achievement: A quantitative synthesis," American Educational Research Journal, 16, 375-389.
Vidal-Madjar, A. (1978), "Teaching mathematics and statistics to adults who are keen on psychology," Educational Studies in Mathematics, 9, 381-390.
Velicer, F. W., and Fava, L. J. (1998), "Effects of variable and subject sampling on factor pattern recovery," Psychological Methods, 3(2), 231-251.
Watts, D. G. (1991), "Why is introductory statistics difficult to learn? And what can we do to make it easier?," The American Statistician, 45, 290-291.
West, S. G., Finch, J. F., and Curran, P. J. (1995), "Structural equation models with nonnormal variables: problems and remedies," in Structural equation modeling: concepts, issues and applications, ed. R. H. Hoyle, Thousand Oaks, CA: Sage, pp. 56-75.
Wise, S. L. (1985), "The development and validation of a scale measuring attitude toward statistics," Educational and Psychological Measurement, 45, 401-405.
Wisenbaker, J. M., and Scott, J. S. (1995, April), "Attitude about statistics and achievement in introductory statistics course," paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.
Wisenbaker, J. M., and Scott, J. S. (1997, March), "Modeling aspects of students’ attitude and achievement in introductory statistics course," paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL.
Wisenbaker, J., Nasser, F., and Scott, J. (1998, June), "A multicultural exploration of the interrelation among attitude about and achievement in introductory statistics," paper presented at the Annual Meeting of the International Conference of Teaching Statistics, Singapore.
Wisenbaker, J., Scott, J., and Nasser, F. (2000, August), "Structural equation models relating attitude about and achievement in introductory statistics courses: a comparison of results from U.S. and Israel," paper presented at the Annual Meeting of the International Group for the Psychology of Mathematics Education, Akito, Japan.
Wlodkowski, J. R. (1993), Enhancing adult motivation to learn, A guide to improving instruction and increasing learner achievement. San Francisco: Jossey-Bass.
Wooten, C. T. (1998), "Factors influencing student learning in introductory accounting classes: A comparison of traditional and nontraditional students," Issues Accounting Education, 13, 357-373.
Zeidner, M. (1991), "Statistics and mathematics anxiety in social science students: some interesting parallels," British Journal of Educational Psychology, 61, 319-328.
Fadia M. Nasser
Tel Aviv University and Beit Berl College
School of Education - P.O. Box 26
Ramat Aviv - Tel Aviv
Facts About Drought
Some of the most interesting facts about drought are the different types, the causes, and the overall impact on social, environmental, and economical levels.
Drought is considered a natural disaster because it is hazardous to human beings: it results in water shortages, damage to crops, and increased death rates among livestock and wild animals.
Droughts can also destroy lake communities that depend on tourism for survival, such as Lake Travis in Central Texas. The lake has completely dried up, resulting in a loss of 3.8 billion dollars for the local community.
Types of Drought
Hydrological - impact is seen in river systems and reservoirs that are necessary for supporting hydroelectric power and hydrologic storage systems.
Meteorological - based on atmospheric conditions: precipitation levels that lead to dry spells, the length of a dry period, and the overall amount of dryness.
Agricultural - rainfall shortages reduce soil moisture, resulting in crop stress, which affects food production and farming.
Socioeconomic - when demand exceeds supply. Water shortages create a strain on products that depend on the water supply for production, such as hydroelectric power, fisheries, food grains, etc.
What Causes Water Shortages?
According to Elizabeth Kitchen, droughts can be caused by a number of things. The most important cause relates to how much water vapor is in the atmosphere, because atmospheric water vapor is what produces precipitation.
When moist, low-pressure systems are present, precipitation such as rain, hail, sleet, and snow can occur. When dry, high-pressure systems are present more often than average, there is less moisture available to create precipitation, resulting in a water deficit in the areas those systems move over.
Drought can also occur when air masses are shifted by winds, so that dry, warm continental air moves over an area instead of moist, cooler oceanic air. El Niño, which affects the temperature of the ocean's water, also influences precipitation levels: during years in which this temperature cycle is present, air masses can be shifted above the ocean, often making typically wet places dry and typically dry places wet.
More Facts About Drought
Hunger and famine - These conditions often provide too little water to support food crops, through either natural precipitation or irrigation using reserve water supplies. The same problem affects grass and grain used to feed livestock and poultry. When it undermines or destroys food sources, people go hungry. When it is severe and continues over a long period, famine may occur.
Thirst - All living things must have water to survive. People can live for weeks without food, but only a few days without water.
Disease - It often creates a lack of clean water for drinking, public sanitation and personal hygiene, which can lead to a wide range of life-threatening diseases.
Wildfires - The low moisture and precipitation that often characterize droughts can quickly create hazardous conditions in forests and across range lands, setting the stage for wildfires that may cause injuries or deaths as well as extensive damage to property and already shrinking food supplies.
Social conflict and war - When a precious commodity like water is in short supply due to these conditions, and the lack of water creates a corresponding lack of food, people will compete—and eventually fight and kill—to secure enough water to survive.
Migration or relocation - Faced with the other impacts of these extreme conditions, many people will flee the area in search of a new home with a better supply of water, enough food, and freedom from the disease and conflict present in the place they are leaving, as was seen during the 1930s Dust Bowl.
Facts about drought in the 1930s
The Dust Bowl - in the 1930s, drought covered virtually the entire Plains for almost a decade (Warrick, 1980). The drought’s direct effect is most often remembered as agricultural.
Many crops were damaged by deficient rainfall, high temperatures, and high winds, as well as insect infestations and dust storms that accompanied these conditions. The resulting agricultural depression contributed to the Great Depression’s bank closures, business losses, increased unemployment, and other physical and emotional hardships.
Although records focus on other problems, the lack of precipitation would also have affected wildlife and plant life, and would have created water shortages for domestic needs.
Facts about drought in China - How it affects everyone
Staying Up to Date on the Facts About Drought
For the current U.S. drought map, click the link below. The map is updated weekly.
Facts About Drought Sources
National Weather Service - http://www.drought.noaa.gov/
What is drought? - http://www.drought.unl.edu/whatis/dustbowl.htm
Elizabeth C. Kitchen • Writing for Education Updated Jul 8, 2011 • Bright Hub
South Atlantic Water Science Center - Georgia
The Apalachicola-Chattahoochee-Flint (ACF) River National Water Quality Assessment (NAWQA) Program study
The Apalachicola-Chattahoochee-Flint (ACF) River basin, located in the southeastern United States (fig. 1), was among the first 20 NAWQA study units selected for study in 1991 (Wangsness and Frick, 1991). The ACF River basin drains about 19,800 mi² in western Georgia, eastern Alabama, and the Florida panhandle, and comprises the Chattahoochee and Flint Rivers, which converge at Lake Seminole to form the Apalachicola River. The Apalachicola River flows south through the Florida panhandle into Apalachicola Bay, which discharges into the Gulf of Mexico. Basin hydrology is influenced by 16 reservoirs that cause about 50 percent of the mainstem river miles to be in backwater and that play a major role in controlling flow and influencing the quality of water in the basin. The basin is underlain by five major aquifer systems: crystalline rock aquifers in the Blue Ridge and Piedmont physiographic provinces in the northern part of the basin, and four aquifers in the Coastal Plain physiographic province in the southern part of the basin. For more detailed information on the environmental setting of the ACF River basin, see Couch and others (1996).
The goal of the ACF River basin study design is to compare and contrast the effects of predominant land uses on surface- and ground-water quality. Forest and agriculture are dominant land uses and land covers within the ACF River basin, accounting for 59 and 29 percent of the study area, respectively. Most agricultural land in the upper and middle Chattahoochee and upper Flint River subbasins is used for livestock grazing and poultry production, while most agricultural land in the southern ACF River basin is used for row crops and vegetables; and to a lesser extent, orchards. Urban land use accounts for 5.3 percent of the study area. In 1990, the population of the ACF River basin was about 2.64 million people, 60 percent of which lived in the Metropolitan Atlanta area. Wetland areas account for about 5.4 percent of the entire basin. Agricultural and urban land uses are of particular interest within the ACF River basin, because they have the greatest potential impact on the physical, chemical, and biological quality of the surface- and ground-water resources.
The National Water Quality Assessment Program
The National Water-Quality Assessment Program (NAWQA) provides an understanding of water-quality conditions; whether conditions are getting better or worse over time; and how natural features and human activities affect those conditions. Regional and national assessments are possible because of a consistent study design and uniform methods of data collection and analysis. Monitoring data are integrated with geographic information on hydrological characteristics, land use, and other landscape features in models to extend water-quality understanding to unmonitored areas. Local, State, Tribal, and national stakeholders use NAWQA information to design and implement strategies for managing, protecting, and monitoring water resources in many different hydrologic and land-use settings across the Nation.
The USGS implemented the National Water-Quality Assessment (NAWQA) Program in 1991 to develop long-term consistent and comparable information on streams, rivers, ground water, and aquatic systems in support of national, regional, State, and local information needs and decisions related to water-quality management and policy. The NAWQA program is designed to address the following objectives and answer these questions:
USGS scientists collect and interpret data about surface- and ground-water chemistry, hydrology, land use, stream habitat, and aquatic life in parts or all of nearly all 50 States using a nationally consistent study design and uniform methods of sampling analysis (access NAWQA protocols).
From 1991-2001, the NAWQA Program conducted interdisciplinary assessments and established a baseline understanding of water-quality conditions in 51 of the Nation's river basins and aquifers, referred to as Study Units. Descriptions of water-quality conditions in streams and ground water were developed in more than a thousand reports (access NAWQA publications). Non-technical Summary Reports, written primarily for those interested or involved in resource management, conservation, regulation, and policymaking, were completed for each of the 51 Study Units. Non-technical national summary reports on pesticides, nutrients, and volatile organic compounds (VOCs) also were completed, in which water-quality conditions were compared to national standards and guidelines related to drinking water, protection of aquatic life, and nutrient enrichment.
NAWQA activities during the second decade (2001-2012) focus in large part on national and regional assessments, all of which build on continued monitoring and assessments in 42 of the 51 Study Units completed in the first cycle (USGS Fact Sheet 071-01).
Selected major activities during the second decade include:
NAWQA is planning activities for its third decade (2013-2023) (access a summary of the Program's progress through 2008 and setting the stage for the future).
World Fish Migration Day 2014
Connected by a common purpose, and sharing ideas and lessons across the world.
24th of May 2014
World Fish Migration Day 2014 calls attention to the need to restore connections between rivers and the sea worldwide, creating safe migration routes for fish. Free migration of fish is necessary to achieve healthy fish stocks and productive rivers. Many species, like salmon, trout, dorado, shad, giant catfish, lamprey, sturgeon and eel, migrate between the sea and the rivers. These species are particularly threatened by barriers such as weirs, dams and sluices built for water management, hydropower and land drainage. In many places globally, like the Mekong River, people rely on migratory fish as their primary source of protein. Water and resource managers around the world are striving to find ways to improve migration possibilities for fish in and out of rivers, deltas and the oceans, all of which fish need to survive. For more information on international fish migration and best practices, see www.fromseatosource.com
World Fish Migration Day has been developed to improve the public's understanding of migratory fish and their needs. Raising awareness, sharing ideas, helping develop commitments and building communities around river basins across the world are essential aspects of fish passage and river restoration work. We are connected by a common purpose and are already sharing ideas and lessons across the world.
On World Fish Migration Day 2014, we will connect through celebrations and (field) events that start in New Zealand, follow the sun around the world, and end as the sun sets on the west coast of North America. We have already found more than 35 locations worldwide that can be visited, and we are still looking for other organizations that want to join us. With all these events we will show and educate citizens around the world about the importance of fish migration and healthy rivers. We will also highlight all projects through the website, social media and the press to draw attention to our purpose.
How do we work?
Participating organizations will organize their own events and arrange their own outreach communication under the umbrella of the World Fish Migration Day. The organizing hub is Wanningen Water Consult & LINKit Consult, partnering with WWF, The Nature Conservancy and the IUCN/Wetlands International Freshwater Fish Specialist Group. This partnership will take care of the central coordination, develop and maintain the main website where events are posted, and organize the communication and publicity worldwide by collaborating with worldwide organizations like IUCN and other existing networks. The web address of our central website is www.worldfishmigrationday.com.
You can contact WWC if you are interested to participate or need more information.
New York Evening Post
January 24, 1812
“Tricks upon Travellers,” or “More Ways than one to kill a Cat.” — Old saws. We are certainly now to have a war, for Congress have voted to have an army. But let me tell you, there is all the difference in the world between an army on paper, and an army in the field. An army on paper is voted in a whiff, but to raise an army, you must offer men good wages. The wages proposed to be given to induce men to come forward and enlist for five years, leave their homes and march away to take Canada, is a bounty of $16, and $5 a month; and at the end of the war, if they can get a certificate of good behavior, 160 acres of wild land and three months’ pay; for the purpose, I presume, of enabling the soldier to walk off and find it, if he can. Now I should really be glad to be informed, whether it is seriously expected that, in a country where a stout able-bodied man can earn $15 a month from May to November, and a dollar a day during mowing and harvesting, he will go into the army for a bounty of $16, $5 a month for five years, if the war should last so long, and 160 acres of wild land, if he happens to be on such good terms with his commanding officer as to obtain a certificate of good behavior? Let the public judge if such inducements as these will ever raise an army of 25,000 men, or ever were seriously expected to do it? If not, can anything be meant more than “sound and fury signifying nothing?” This may be called humbugging on a large scale.
Bacteria may be having a renaissance. Back in the days of the discovery of penicillin, doctors gleefully handed out antibiotics like they were candy and patients were more than happy to munch them down. They were quite effective too, but bacteria rapidly became resistant.
Doctors and scientists worry that we are approaching a time when, if we don't come up with novel antibiotic mechanisms, we will face an epidemic of untreatable bacterial infections. MRSA, methicillin-resistant Staphylococcus aureus, is probably one of the biggest fears.
John Rennie wrote about this issue in the PLoS blog The Gleaming Retort. He describes two strategies scientists are using to try to come up with new weapons in the great antibacterial war. So, naturally one of the first things they turned to was cockroach brains.
A group from the University of Nottingham reported a 90 percent MRSA kill rate utilizing compounds extracted from cockroach and locust brains that were not harmful to human cells. The logic behind their research is that insects have no adaptive immune system (antibodies, lymphocytes etc…), but they are able to survive extremely harsh, contaminated and frankly disgusting environments. Researchers theorize they must rely on extremely potent anti-microbial compounds in order to survive. It is unclear though why these compounds would only be in the nervous system, and the study has not yet been subject to peer review.
The other strategy is to study a cannibalistic species of bacteria, Bacillus subtilis. Under harsh conditions, the bacteria releases a compound called SDP that causes neighboring bacteria to commit suicide and release precious nutrients. Researchers at UCSD were able to use this compound to neutralize MRSA at a concentration similar to the popular antibiotic Vancomycin.
John Rennie, however, had some reservations, which he sums up very nicely below:
Also, although the idea of novel antibiotics derived from insects that live in germ-ridden circumstances sounds appealingly sensible, I can’t help but be reminded of this story from a couple of weeks ago about novel antibiotic compounds found in frog skin. Which also makes perfect sense, doesn’t it, because frogs, too, need special resources to help them survive in filthy, microbe-rich water.
Unfortunately, that story also reminded me about this story from 2008 about antibiotics from frog skin. Or this one from 1999. Or the stories I wrote about Michael Zasloff and Magainin Pharmaceuticals, which was trying to develop novel antibiotics from frog skin more than 20 years ago.
The stories behind certain drug candidate molecules are so fun and compelling and sensible that you can’t help but think they will work out. And sometimes they do. But more often, they don’t, no matter how great the stories are.
In three paragraphs he summarizes a major problem facing medical technology and public perception nowadays. There is so much to be excited about and to spend money on, but it is very difficult and rare for exciting medical technology to make it to market and become useful to people. We’ll take the glass as a half-full approach, though — half full of delicious lifesaving cockroach brains.
The Gleaming Retort: Filthy Places for Antibiotics
Image credit: Matt Reinbold
(hat tip: SCOPE Blog)
*This blog post was originally published at Medgadget*
On July 19, 2013, in an event celebrated the world over, NASA's Cassini spacecraft slipped into Saturn's shadow and turned to image the planet, seven of its moons, its inner rings -- and, in the background, our home planet, Earth.
With the sun's powerful and potentially damaging rays eclipsed by Saturn itself, Cassini's onboard cameras were able to take advantage of this unique viewing geometry. They acquired a panoramic mosaic of the Saturn system that allows scientists to see details in the rings and throughout the system as they are backlit by the sun. This mosaic is special as it marks the third time our home planet was imaged from the outer solar system; the second time it was imaged by Cassini from Saturn's orbit; and the first time ever that inhabitants of Earth were made aware in advance that their photo would be taken from such a great distance.
With both Cassini's wide-angle and narrow-angle cameras aimed at Saturn, Cassini was able to capture 323 images in just over four hours. This final mosaic uses 141 of those wide-angle images. Images taken using the red, green and blue spectral filters of the wide-angle camera were combined and mosaicked together to create this natural-color view. A brightened version with contrast and color enhanced and an unannotated version are also available.
This image spans about 404,880 miles (651,591 kilometers) across.
This photo is simply unreal I thought I should share. If you go to the link they have several photos of different resolutions they are pretty cool to look at for the detail it makes me wish I had a bigger monitor.
It IS unreal... looking. In a way. How come I can see the rings thru the body of the planet? It really looks like pretty computer graphics! This is an actual photo of Saturn?
Nasa release new picture of planet Saturn taken from the Cassini spacecraft
I simply refuse to believe this is genuine, along with many other
of the 'real' photographs NASA have treated us to.
Just say No.
Posted by rfvv on Friday, April 9, 2010 at 11:26pm.
1. A doctor is a person who takes care of sick peole.
2. A doctor is a person who is qualified in medicine and treats patients.
3. A doctor is a person whojo job is dealing with sick people.
4. A fashion designer is a person who designs clothes.
5. A fashion designer is a person who makes clothing.
6. A fasion designer is a person whose job is desgning people's wears.
(Would yoiu check the sentences? What about the definitions? If there are some errors, correct them,please?)
- English - Anna, Friday, April 9, 2010 at 11:53pm
They all see to be fine, however you have a lot of spelling errors. Im not accurately sure of the second one.
- English - SraJMcGin, Saturday, April 10, 2010 at 2:17am
- English - E.G., Saturday, April 10, 2010 at 10:34am
5. I am not sure that a fashion designer actually MAKES the clothes he/she designs.
- English - christine, Wednesday, November 14, 2012 at 6:25am
hi how are you
- English - christine, Wednesday, November 14, 2012 at 6:27am
how peple come from
http://www.jiskha.com/display.cgi?id=1270869970
Definition of undercut
Categorized under "General"
Definition as written by Sis:
A cut - or a cutting away - underneath; a notch cut in a tree to determine the direction in which the tree is to fall and to prevent its splitting.
To cut under or beneath; to cut away material from so as to leave a portion overhanging as in carving or sculpture.
http://davesgarden.com/guides/terms/go/1125/
Color blindness is the inability to see certain colors in the usual way.
Color deficiency; Blindness - color
Color blindness occurs when there is a problem with the color-sensing granules (pigments) in certain nerve cells of the eye. These cells are called cones. They are found in the retina, the light-sensitive layer of tissue that lines the back of the eye.
If just one pigment is missing, you may have trouble telling the difference between red and green. This is the most common type of color blindness. If a different pigment is missing, you may have trouble seeing blue-yellow colors. People with blue-yellow color blindness usually have problems identifying reds and greens, too.
The most severe form of color blindness is achromatopsia. A person with this rare condition cannot see any color, so they see everything in shades of gray. Achromatopsia is often associated with lazy eye, nystagmus (small, jerky eye movements), severe light sensitivity, and extremely poor vision.
Most color blindness is due to a genetic problem. (See: X-linked recessive) About 1 in 10 men have some form of color blindness. Very few women are color blind.
The drug hydroxychloroquine (Plaquenil) can also cause color blindness. It is used to treat rheumatoid arthritis, among other conditions.
Symptoms vary from person to person, but may include trouble seeing colors and their brightness in the usual way, and difficulty telling the difference between shades of the same or similar colors.
Often, the symptoms may be so mild that some people do not know they are color blind. A parent may notice signs of color blindness when a child is learning his or her colors.
Rapid, side-to-side eye movements (nystagmus) and other symptoms may occur in severe cases.
Your doctor or eye specialist can check your color vision in several ways. Testing for color blindness is commonly done during an eye exam.
There is no known treatment. However, there are special contact lenses and glasses that may help people with color blindness tell the difference between similar colors.
Color blindness is a lifelong condition. Most people are able to adjust to it without difficulty or disability.
People who are colorblind may not be able to get a job that requires the ability to see colors accurately. For example, electricians (color-coded wires), painters, fashion designers (fabrics), and cooks (using the color of meat to tell whether it's done) need to be able to see colors accurately.
Make an appointment with your health care provider or ophthalmologist if you think you (or your child) have color blindness.
http://www.northside.com/HealthLibrary/?Path=HIE+Multimedia%5C1%5C001002.htm
The earth has three layers. They are the core, mantle, and crust.
The earth's core is its center. It is made of hot iron mixed with other metals and rock. The core has two parts. The very center is solid. The outer layer is hot liquid metal. Around the core is the mantle. This is a layer of rock. It is 1,800 miles thick.
The mantle has two parts. The inside part is solid rock. The outside part sometimes melts. This melted rock is called magma. When a volcano explodes, magma flows to the earth's surface. The top layer of the earth is the crust. It is thinner than the other layers. It is about 31 to 62 miles deep. The ocean floors are part of the crust. The crust is thinner there. The crust also includes the continents. These are seven huge land areas. The crust is thicker below these land areas.
Plate tectonics is a theory about the earth. It states that the crust is not a solid shell. Instead, it is made up of plates. These plates float on the mantle's liquid rock. They often move in different directions. Oceans and continents sit on these giant plates. Millions of years ago the continents used to fit together, but they moved apart. The plates are still moving; they move a few inches a year. Sometimes plates pull apart. Sometimes they push together. When two continental plates smash together, they make mountains.
A continental plate is thicker than an ocean plate. When these two kinds of plates hit, the continental plate will slide over the ocean plate. The edge of the lower plate melts. The liquid rock may erupt in a volcano. The two sliding plates may also cause the earth's crust to move suddenly. This is an earthquake. Earthquakes can destroy buildings. Earthquakes under the ocean can cause huge waves called tsunamis. These waves can flood towns next to the ocean.
Sometimes two plates do not hit head-on. They rub their sides together as they move in different directions. This causes faults. These are cracks in the earth's crust. Earthquakes can happen near faults. Forces inside the earth cause volcanoes and earthquakes. These change the earth's landforms. Forces on the earth's surface keep changing these landforms. Weathering is the process of breaking rocks into smaller and smaller pieces. Huge rocks become gravel. Gravel becomes sand. Sand becomes soil.
Water and frost cause this to happen. Water drips into cracks in rocks and freezes. Ice gets bigger as it freezes. As the ice gets bigger in the crack, it splits the rock. Chemicals and plants also cause weathering. Chemicals in dirty air mix with rain. The rain falls to the earth. The chemicals eat away the rocks. Plant seeds fall into the cracks. The plants spread their roots. In time, the roots cause huge rocks to break apart.
Erosion is the process of wearing away or moving weathered material. Water, wind, and ice cause erosion. They carry away rocks and soil. Rain picks up sand and dirt as it runs downhill. Rivers pick up sand and soil along their banks. Wind also blows soil and sand to other places. Sand in the wind works like sandpaper. It hits rocks and rubs them smooth. Ice is the third cause of erosion. Glaciers are giant sheets of ice. They form high in mountains. As they move, they change the land. They carry rocks down the mountains. The rocks are like sandpaper, too. They grind everything below them as they move. In time, the weight of the ice cuts valleys at the mountains' base.
1. How many layers does the earth have? Name them.
2. What is plate tectonics?
3. What do oceans and continents sit on?
4. What happened to the continents over the years?
5. What causes an earthquake?
6. What are tsunamis? What happens when these occur?
7. What causes faults?
8. What changes the earth's landforms?
9. What is weathering?
10. Describe one thing that occurs in the weathering process?
11. What is erosion?
12. What are glaciers made up of?
13. What do they (glaciers) form?
http://montgomeryla.blogspot.com/2009/09/6th-grade-forces-shaping-earth-chapter.html
Pyrrhonism: The doctrine that all knowledge is uncertain, that you can't trust anything you think you know. More commonly used today simply to indicate extreme skepticism. Pyrrhonism comes from the doctrines of Pyrrho the Skeptic, a Greek thinker of the fourth and third centuries BC.
If you need an adjective, use Pyrrhonic, not (as I mistakenly have done) Pyrrhic. A Pyrrhic victory is a victory that comes at too great a cost, not a victory whose winners can be doubted. Pyrrhic (as a lowercase common noun) is also a measure of meter in writing indicating a metrical foot made up of two short or unaccented syllables.
http://logophilius.blogspot.com/2009_09_01_archive.html
To give credit to Geraldine Ferraro for the gains women have made in big-time politics is a simplification. Sarah Palin, U.S. Sen. Debbie Stabenow or former Gov. Jennifer Granholm do not owe their success to the trail-blazing New York congresswoman, who died Saturday.
Still, there has to be a first when it comes to punching through any glass ceiling, and Ferraro made history in a big way. She was the first woman nominated as vice president on a major party ticket in 1984, running in a losing campaign against President Ronald Reagan.
Reading Ferraro’s obituaries, it is striking how much society’s attitudes changed in her lifetime. She was born only 15 years after women earned the right to vote. When she entered law school, an admissions officer warned her she was taking a seat from a man.
Only 15 women had ever been elected to the U.S. Senate in history when Ferraro ran for vice president. Today, 17 serve in the Senate, including Michigan’s Stabenow.
There has never been a female president, but most of the hurdles that Ferraro and other female politicians once faced have fallen away with time. That is progress, and let’s remember Ferraro for her role in making that happen.
http://www.mlive.com/opinion/jackson/index.ssf/2011/03/editorial_geraldine_ferraro_se.html
Where there’s smoke, there’s a choir.
On Thursday, students from the O’Connor Method Camp honored the strike of Charleston tobacco workers 70 years ago. With bows raised, a group of about 50 from the camp played an orchestral version of “We Shall Overcome” at the old East Bay Street Cigar Factory.
Mark O’Connor, for which the group is named, said the song carries a deep historical significance, especially in the shadow of the Cigar Factory.
“One of the first places that ‘We Shall Overcome’ was heard and launched was right here at this cigar factory in 1945,” O’Connor said.
Playing barefoot in the grass, violin instructor Pam Wiley assembles the students in an arc to teach them about the 1,200 striking black workers who were striving for better pay. The song and the strike are largely credited with kicking off the civil rights movement and change in America.
All the performing students in the method camp have had three to five years of experience in violin, viola, cello or bass. The weeklong method camp focuses mostly on American music, teaching the students about jazz, folk and spirituals.
While there is a tendency to believe that European music is more important in educational circles, O’Connor disagrees and tries to make sure students are not “stuck in the Baroque era with Vivaldi.”
“I think American music in the past 100 years has become some of the most important in the world,” O’Connor said.
With each song in the method book, the history and meaning of the song is displayed on the neighboring pages. Teaching music from the children’s own heritage, O’Connor said, is more likely to get children excited about the arts.
“It’s going to be music like this that will motivate children to participate and learn,” O’Connor said.
O’Connor begins with one round of the chorus solo, as the group readies their bows to join.
Several performers played along with the young orchestra, including Lonnie Root on cello. Root, who plays with rock and folk bands around town, said he first began playing around the same age as these kids today.
“I can relate a lot to what they’re learning,” he said. “I wish I had learned earlier.”
Although the method focuses mostly on American music, Root said he enjoys the diversity in styles.
“It totally doesn’t get away from classical, which is great,” Root said.
For Wiley, the cultural learning is the greatest takeaway from a day filled with music.
“It’s really what the method is all about: connecting the music to our American history,” she said.
Reach Nick Watson at 937-4810.
http://www.postandcourier.com/article/20130801/PC16/130809948/student-orchestra-plays-x2018-we-shall-overcome-x2019-at-cigar-factory&source=RSS
Stephen Fry traces the evolution of the mobile phone, from hefty executive bricks that required a separate briefcase to carry the battery, to the smartphones available today. There are more mobile phones in the world than there are people on the planet. Stephen Fry talks to the backroom boys who made it all possible, and hears how the technology succeeded in ways that the geeks had not necessarily intended. For example, the engineers who designed the early texting facilities didn't imagine that anyone might want to reply. (Just in case, they added a short list of possible pre-set answers: yes, no, and maybe).
They also thought taxi phones and fax machines for your car would be winners. In the early 90s, Nokia, then famous for toilet paper and rubber boots, was on the brink of collapse, until the new CEO made a bold decision to focus solely on mobile phones. Thanks to Margaret Thatcher opening up the airwaves, Britain became a world leader in mobile phone technology. And today, 85% of the silicon chips inside all mobiles are designed by just one Cambridge-based company. Series produced by Anna Buckley.
©2011 Stephen Fry (P)2011 AudioGO Ltd
http://www.audible.com/pd/Radio-TV/Stephen-Fry-on-the-Phone-Complete-Series-Audiobook/B006IFA6NW
According to the Center for Universal Design: The intent of universal design (UD)is to simplify life for everyone by making products, communications, and the built environment more usable by as many people as possible at little or no extra cost. Universal design benefits people of all ages and abilities. (1997 NC State University)
Some Key Principles of Universal Design include:
- Equitable Use: The design is useful and marketable to people with diverse abilities;
- Flexibility in Use: The design accommodates a wide range of individual preferences and abilities;
- Simple and Intuitive: Use of the design is easy to understand regardless of the user's experience, knowledge, language skills, or concentration level;
- Perceptible Information: The design communicates necessary information effectively to the user, regardless of the user's sensory abilities; and
- Low Physical Effort: The design can be used efficiently and comfortably and with a minimum of fatigue.
Moving Toward the Vision of the Universally Designed Classroom. Explanation of how universal design can be used in a classroom and the benefits to students and teachers.
Universal Design for Learning: Implications for Large-Scale Assessment describes how to create an education system that works for all students, including those with learning disabilities, by applying the concept of universal design to learning and assessment.
UDL in Classroom Practice details the practical application of universal design in learning in educational settings.
Trace Center, University of Wisconsin-Madison conducts research on making information technology accessible and usable to as many people as possible.
The Center for Universal Design is a national research, information, and technical assistance center that evaluates, develops, and promotes universal design in housing, public and commercial facilities, and related products.
Education Programs with Universal Design. From Universal Design Education Online, teachers have submitted information about using universal design in the classroom.
North Carolina State University's School of Design. In partnership with the Center for Universal Design, the School of Design has implemented several strategies to promote universal design in undergraduate education.
Center on Postsecondary Education and Disability, University of Connecticut educates and supports faculty in acquiring the knowledge and skills they need to fully include adolescents and adults with disabilities in education.
Universal Design Education Online supports educators and students in their teaching and study of universal design.
Universal Design of Instruction describes how the principles of universal design can be used in instruction to include individuals with "wide differences in their abilities."
ABLEDATA provides information on assistive technology and rehabilitation equipment available from domestic and international sources to consumers, organizations, professionals, and caregivers within the United States.
Alliance for Technology Access. Network of community-based centers providing information and services to children and adults with disabilities that will increase their use of standard, assistive, and information technologies.
Bobby. A comprehensive Internet accessibility software designed to help expose and repair barriers to accessibility and encourage compliance with existing guidelines.
The Center for Applied Special Technology (CAST). An organization that uses technology to expand opportunities for all people, especially those with disabilities. The site includes resources, examples, and information about professional development opportunities.
National Center for Accessible Media. Association dedicated to ensuring equal access to media for people with disabilities, with numerous projects on access to educational media in particular.
http://www2.ed.gov/about/offices/list/ovae/pi/AdultEd/disaccess.html?exp=4
This lab science course studies the interrelationships of organisms with their environment. This course is intended for either science or non-science majors in fulfillment of the general education lab science requirements. Through an understanding of general ecological principles contemporary problems such as pollution, endangered species, energy shortages, and over-population are addressed. Field trips and lab exercises support lecture discussions. Formerly BIO 130. Prerequisite: Appropriate placement score or grade of C or higher in ENGL 087; or permission of the Science Division Chair or designee. Recommended: READ 088 or higher.
http://www.wwcc.edu/cat/course_details2.cfm?DC=BIOL&CC=200&CN=130
Parental Physical Discipline Linked to Behavior Problems in Teens
Jim Liebelt
Jim Liebelt's Blog
- 2009 Sep 15
Two new studies explore how discipline changes during childhood and adolescence, and what family factors affect those changes. They conclude that when parents use physical discipline through childhood, their children experience more behavior problems in adolescence.
They find that parents typically adjust the way they discipline their children in response to their children's growing cognitive abilities, using less physical discipline (spanking, slapping, hitting with an object) over time. As children grow older, physical discipline becomes less developmentally appropriate. However, when parents' use of physical discipline continues through childhood, by the time their children are teens, they're more likely to have behavior problems. Teens of parents who stop using physical discipline when their children are young are less likely to have these behavior problems.
The studies were conducted by researchers at Duke University, Oklahoma State University, the University of Pittsburgh, Auburn University, and Indiana University. They appear in the September/October 2009 issue of Child Development.
http://www.crosswalk.com/blogs/liebelt/parental-physical-discipline-linked-to-behavior-problems-in-teens-11608555.html
Boris Yeltsin, the first elected president of Russia in all its long history, died Monday at 76.
He may have been a flawed character, a leader who will leave an ambiguous legacy. Ultimately, however, as Leon Aron, author of “Yeltsin: A Revolutionary Life” and a resident scholar at the American Enterprise Institute, wrote, “the leitmotif of Yeltsin’s political life … is likely to be the furtherance of liberty.”
The touchstone picture, of course, is Boris Yeltsin atop a tank on Aug. 19, 1991, standing up in the face of an attempted coup. The future of Russia was hanging in the balance as a coterie of hard-line communists tried to take over the liberalizing government Mikhail Gorbachev had spent several years establishing.
Showing great moral and physical courage, Yeltsin leapt atop a tank outside the Russian White House, which the coup leaders controlled, and rallied the people against them.
He could easily have been killed then, but he prevailed, and in December presided as president over the formal dissolution of the Soviet Union.
It is easy to forget — and younger people never knew — the sense of permanence and inevitability that surrounded the communist regime in the Soviet Union in the early 1980s. Gorbachev, who became general secretary of the Communist Party in 1985, understood some of the weaknesses of the system and gently pushed perestroika (restructuring) and glasnost (openness), reforms designed to preserve the Soviet system.
But Boris Yeltsin, who had likewise come up through the Communist Party leadership system, became dissatisfied with the slow pace of Gorbachev’s reforms and eventually with communism itself, publicly quitting the party in 1990.
As Russia’s first elected president, Yeltsin voluntarily granted independence to Ukraine, which Russia had ruled for centuries. He institutionalized freedom of speech and of the press — it’s hard to imagine the still-unsolved murder, in 2006, of investigative journalist Anna Politovskaya during Yeltsin’s time — and presided over the first free, multicandidate elections in Russia’s history.
He began the process of liberating Russia’s economy from the death grip of total state control.
As Arnold Beichman, a research fellow at Stanford’s Hoover Institution said, Yeltsin was a larger-than-life character who “really opened up the Soviet Union, just as Vladimir Putin has begun to close Russia down.”
John G. Dunlop, monitor for Russian elections in 1995 and 1996, said Yeltsin’s first term — right up to the invasion of Chechnya in December 1994 — “can be assessed very positively.”
As a leader, Boris Yeltsin had an innate grasp of the big picture and a charismatic personality. If the privatization of the Russian economy was often accompanied by corruption and favoritism, if Yeltsin had health problems and a taste for the bottle and was sometimes out of commission, if the attempt by force to prevent Chechnya from leaving Russian rule was a mistake that set the stage for a return to the old ways, if he anointed Vladimir Putin, who has shown marked authoritarian tendencies — well, perhaps it’s an illustration of the old saw that revolutionaries should not be rulers.
Was Boris Yeltsin an imperfect hero? No doubt. But he was an authentic hero, one of the more notable of our time.
http://pntonline.com/2007/04/24/boris-yeltsin-true-hero-of-his-time/
Web-based Distributed Authoring and Versioning (WebDAV) is a set of methods based on the Hypertext Transfer Protocol (HTTP) that facilitates collaboration between users in editing and managing documents and files stored on World Wide Web servers. WebDAV was defined in RFC 4918 by a working group of the Internet Engineering Task Force (IETF).
Microsoft Windows has supported WebDAV since Windows 98. It is also available in Windows 2000, XP and Windows 7.
WebDAV Service: Apache HTTPD
The following shows two simple WebDAV configurations for the Apache httpd server. The "DAV On" directive indicates that the URL is a WebDAV service.
Configuration: Basic Authentication using LDAP
<Location /setup>
    DAV On
    Options All
    Order deny,allow
    Allow from all
    AuthType Basic
    AuthName "DAV"
    AuthBasicProvider ldap
    AuthzLDAPAuthoritative off
    AuthLDAPURL ldap://ldap.estream.com.my/ou=user,dc=example,dc=com?uid?sub?(objectclass=posixAccount)
    Require valid-user
</Location>
Configuration: Digest Authentication using password file
Digest authentication sends an MD5-hashed password to the httpd server and thus provides a bit more security compared to Basic authentication. However, Digest authentication is still not a secure mechanism for an HTTP service.
<Location /setup>
    DAV On
    <LimitExcept GET OPTIONS>
        Options All
        Order deny,allow
        Allow From all
        AuthType Digest
        AuthName "DAV"
        AuthDigestProvider file
        AuthUserFile /etc/httpd/conf.d/digest
        Require valid-user
    </LimitExcept>
</Location>
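If you want to verify either setup from a non-Windows client first, a short script can exercise the share over HTTP. The following is only a minimal sketch using Python's requests library; the URL, username and password are placeholders for your own server, and HTTPDigestAuth can be swapped in when testing the digest configuration.

# Minimal WebDAV smoke test with the third-party "requests" library.
# The URL and credentials below are placeholders, not real values.
import requests
from requests.auth import HTTPBasicAuth, HTTPDigestAuth

url = "http://webdav.example.com/setup/hello.txt"   # hypothetical share + file
auth = HTTPBasicAuth("alice", "secret")             # use HTTPDigestAuth(...) for the digest setup

# Upload a small file (WebDAV PUT), then read it back (GET).
put = requests.put(url, data=b"hello from webdav", auth=auth)
print("PUT status:", put.status_code)        # 201 Created or 204 No Content on success

get = requests.get(url, auth=auth)
print("GET status:", get.status_code, "body:", get.text)

# List the collection with PROPFIND (Depth: 1 returns the immediate children).
propfind = requests.request(
    "PROPFIND",
    "http://webdav.example.com/setup/",
    headers={"Depth": "1"},
    auth=auth,
)
print("PROPFIND status:", propfind.status_code)   # 207 Multi-Status on success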
In Windows XP, and perhaps Windows 2000 onwards, WebClient is the service that communicates with WebDAV servers. Microsoft Windows doesn't provide any GUI tools to configure WebClient. All the configuration is done via Registry settings under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WebClient.
You should restart the WebClient service after modifying any WebClient items in the Registry for the changes to take effect.
Windows XP and WebDAV
Both Basic and Digest Authentication work with Windows XP's WebClient without much configuration.
You may connect to a WebDAV share via:
- File | Open of Internet Explorer (check Open as Web Folder):
- “Add a network place” in My Network Places:
Once the connection is authenticated and authorised, you should able to access the WebDAV share just like normal network share in Windows Explorer.
Windows 7 and WebDAV
It is not as easy to make WebDAV work in Windows 7 as it is in Windows XP. You need extra care to get WebDAV working in Windows 7.
The Windows 7 WebClient service supports Digest Authentication by default. This restriction leads to two failing use cases:
- Any WebDAV share using Basic Authentication will fail, no matter how you configure the WebDAV URL.
- Any WebDAV share using Digest Authentication with LDAP as the backend authentication will fail. The LDAP service is unable to perform authentication against a digest password.
If the WebDAV URL supports digest authentication using a file as the AuthDigestProvider, Windows 7 should establish the WebDAV connection successfully. During frequent trial-and-error testing between the httpd server and the Windows 7 WebClient service, you might need to restart the WebClient service before starting a new test.
You may change an entry in the registry, HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WebClient\Parameters\BasicAuthLevel, to allow Basic Authentication to work in the WebClient service. BasicAuthLevel's default value is 1. The meaning of BasicAuthLevel is as follows:
0 - Basic authentication disabled
1 - Basic authentication enabled for SSL shares only
2 or greater - Basic authentication enabled for SSL shares and for non-SSL shares
You may set BasicAuthLevel to 2 for Basic Authentication to work on a non-SSL WebDAV share. You may then use Basic Authentication with LDAP as the backend authentication service for the WebDAV share.
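If you prefer to script the change rather than edit the registry by hand, a small sketch like the following sets BasicAuthLevel to 2 from Python. It assumes it is run as Administrator on the Windows client; the key path and value name are the ones discussed above.

# Sketch: set WebClient's BasicAuthLevel to 2 (Basic auth allowed on non-SSL shares).
# Run as Administrator; restart the WebClient service afterwards.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\services\WebClient\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "BasicAuthLevel", 0, winreg.REG_DWORD, 2)

print("BasicAuthLevel set to 2 - restart the WebClient service to apply it.")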
To add a WebDAV share in Windows 7, you may use "Add a network location" in Windows Explorer.
The rest of the configuration is straightforward: just enter the WebDAV share URL, supply valid credentials if necessary, and you can start accessing the WebDAV share as usual.
Error 0x800700DF: The file size exceeds the limit allowed and cannot be saved
When you use Windows 7 to access a WebDAV share and copy a large file of more than 50 MB, you may encounter the 0x800700DF error shown above.
There is a setting for the WebClient service in the registry that restricts the transferable file size (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WebClient\Parameters\FileSizeLimitInBytes):
Modifying this value to something like 0xFFFFFFFF will allow transferring files of up to 4GB.
Restart the WebClient service to refresh the setting.
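The same pattern can be scripted together with the service restart. This is only a sketch and again assumes administrator rights; the 0xFFFFFFFF value mirrors the 4 GB limit mentioned above.

# Sketch: raise WebClient's transfer limit to ~4 GB and restart the service.
import subprocess
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\services\WebClient\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "FileSizeLimitInBytes", 0, winreg.REG_DWORD, 0xFFFFFFFF)

# Restart the service so the new limit is picked up.
subprocess.run(["net", "stop", "WebClient"], check=False)
subprocess.run(["net", "start", "WebClient"], check=True)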
- You may receive an error message when you try to download a file that is larger than 50000000 bytes from a Web folder on a computer that is running Windows Vista or that is running Windows XP with Service Pack 1 or with Service Pack 2.
- Howto Fix Webdav On Windows 7 64bit. URL: http://shon.org/blog/2010/03/04/howto-fix-windows-7-64bit-webdav/
http://chee-yang.blogspot.com/2010/09/microsoft-windows-and-webdav.html
Email spam, also known as junk email or unsolicited bulk email (UBE), is a subset of electronic spam involving nearly identical messages sent to numerous recipients by email. The messages may contain disguised links that appear to be for familiar websites but in fact lead to phishing web sites or sites that are hosting malware. Spam email may also include malware as scripts or other executable file attachments. Definitions of spam usually include the aspects that email is unsolicited and sent in bulk. One subset of UBE is UCE (unsolicited commercial email). The opposite of "spam", email which one wants, is sometimes called "ham". Like other forms of unwanted bulk messaging, it is named for Spam luncheon meat by way of a Monty Python sketch in which Spam is depicted as ubiquitous and unavoidable.
Email spam has steadily grown since the early 1990s. Botnets, networks of virus-infected computers, are used to send about 80% of spam. Since the expense of the spam is borne mostly by the recipient, it is effectively postage due advertising.
The legal status of spam varies from one jurisdiction to another. In the United States, spam was declared to be legal by the CAN-SPAM Act of 2003 provided the message adheres to certain specifications. ISPs have attempted to recover the cost of spam through lawsuits against spammers, although they have been mostly unsuccessful in collecting damages despite winning in court.
Spammers collect email addresses from chatrooms, websites, customer lists, newsgroups, and viruses which harvest users' address books, and are sold to other spammers. They also use a practice known as "email appending" or "epending" in which they use known information about their target (such as a postal address) to search for the target's email address. Much of spam is sent to invalid email addresses. According to the Message Anti-Abuse Working Group, the amount of spam email was between 88–92% of email messages sent in the first half of 2010.
From the beginning of the Internet (the ARPANET), sending of junk email has been prohibited. Gary Thuerk sent the first email spam message in 1978 to 600 people. He was reprimanded and told not to do it again. The ban on spam is enforced by the Terms of Service/Acceptable Use Policy (ToS/AUP) of internet service providers (ISPs) and peer pressure. Even with a thousand users junk email for advertising is not tenable, and with a million users it is not only impractical, but also expensive. It was estimated that spam cost businesses on the order of $100 billion in 2007. As the scale of the spam problem has grown, ISPs and the public have turned to government for relief from spam, which has failed to materialize.
Spam has several definitions varying by source.
- Unsolicited bulk email (UBE)—unsolicited email, sent in large quantities.
- Unsolicited commercial email (UCE)—this more restrictive definition is used by regulators whose mandate is to regulate commerce, such as the U.S. Federal Trade Commission.
Many spam emails contain URLs to a website or websites. According to a Cyberoam report in 2014, there are an average of 54 billion spam messages sent every day. "Pharmaceutical products (Viagra and the like) jumped up 45% from last quarter’s analysis, leading this quarter’s spam pack. Emails purporting to offer jobs with fast, easy cash come in at number two, accounting for approximately 15% of all spam email. And, rounding off at number three are spam emails about diet products (such as Garcinia gummi-gutta or Garcinia Cambogia), accounting for approximately 1%."
Advance fee fraud spam such as the Nigerian "419" scam may be sent by a single individual from a cybercafé in a developing country. Organized "spam gangs" operate from sites set up by the Russian mafia, with turf battles and revenge killings sometimes resulting.
Spam is also a medium for fraudsters to scam users into entering personal information on fake Web sites using emails forged to look like they are from banks or other organizations, such as PayPal. This is known as phishing. Targeted phishing, where known information about the recipient is used to create forged emails, is known as spear-phishing.
If a marketer has one database containing names, addresses, and telephone numbers of customers, they can pay to have their database matched against an external database containing email addresses. The company then has the means to send email to people who have not requested email, which may include people who have deliberately withheld their email address.
Image spam, or image-based spam, is an obfuscating method in which the text of the message is stored as a GIF or JPEG image and displayed in the email. This prevents text-based spam filters from detecting and blocking spam messages. Image spam was reportedly used in the mid-2000s to advertise "pump and dump" stocks.
Often, image spam contains nonsensical, computer-generated text which simply annoys the reader. However, new technology in some programs tries to read the images by attempting to find text in these images. These programs are not very accurate, and sometimes filter out innocent images of products, such as a box that has words on it.
A newer technique, however, is to use an animated GIF image that does not contain clear text in its initial frame, or to contort the shapes of letters in the image (as in CAPTCHA) to avoid detection by optical character recognition tools.
Blank spam is spam lacking a payload advertisement. Often the message body is missing altogether, as well as the subject line. Still, it fits the definition of spam because of its nature as bulk and unsolicited email.
Blank spam may be originated in different ways, either intentional or unintentionally:
- Blank spam can have been sent in a directory harvest attack, a form of dictionary attack for gathering valid addresses from an email service provider. Since the goal in such an attack is to use the bounces to separate invalid addresses from the valid ones, spammers may dispense with most elements of the header and the entire message body, and still accomplish their goals.
- Blank spam may also occur when a spammer forgets or otherwise fails to add the payload when he or she sets up the spam run.
- Often blank spam headers appear truncated, suggesting that computer glitches may have contributed to this problem—from poorly written spam software to malfunctioning relay servers, or any problems that may truncate header lines from the message body.
- Some spam may appear to be blank when in fact it is not. An example of this is the VBS.Davinia.B email worm which propagates through messages that have no subject line and appears blank, when in fact it uses HTML code to download other files.
Backscatter is a side-effect of email spam, viruses and worms, where email servers receiving spam and other mail send bounce messages to an innocent party. This occurs because the original message's envelope sender is forged to contain the email address of the victim. A very large proportion of such email is sent with a forged From: header, matching the envelope sender.
Since these messages were not solicited by the recipients, are substantially similar to each other, and are delivered in bulk quantities, they qualify as unsolicited bulk email or spam. As such, systems that generate email backscatter can end up being listed on various DNSBLs and be in violation of internet service providers' Terms of Service.
Sending spam violates the acceptable use policy (AUP) of almost all Internet service providers. Providers vary in their willingness or ability to enforce their AUPs. Some actively enforce their terms and terminate spammers' accounts without warning. Some ISPs lack adequate personnel or technical skills for enforcement, while others may be reluctant to enforce restrictive terms against profitable customers.
As the recipient directly bears the cost of delivery, storage, and processing, one could regard spam as the electronic equivalent of "postage-due" junk mail. Due to the low cost of sending unsolicited email and the potential profit entailed, some believe that only strict legal enforcement can stop junk email. The Coalition Against Unsolicited Commercial Email (CAUCE) argues "Today, much of the spam volume is sent by career criminals and malicious hackers who won't stop until they're all rounded up and put in jail."
All the countries of the European Union have passed laws that specifically target spam.
Article 13 of the European Union Directive on Privacy and Electronic Communications (2002/58/EC) provides that the EU member states shall take appropriate measures to ensure that unsolicited communications for the purposes of direct marketing are not allowed either without the consent of the subscribers concerned or in respect of subscribers who do not wish to receive these communications, the choice between these options to be determined by national legislation.
In the United Kingdom, for example, unsolicited emails cannot be sent to an individual subscriber unless prior permission has been obtained or unless there is a previous relationship between the parties. The regulations can be enforced against an offending company or individual anywhere in the European Union. The Information Commissioner's Office has responsibility for the enforcement of unsolicited emails and considers complaints about breaches. A breach of an enforcement notice is a criminal offence subject to a fine of up to £500,000.
In Australia, the relevant legislation is the Spam Act 2003, which covers some types of email and phone spam and took effect on 11 April 2004. The Spam Act provides that "Unsolicited commercial electronic messages must not be sent." Whether an email is unsolicited depends on whether the sender has consent. Consent can be express or inferred. Express consent is when someone directly instructs a sender to send them emails, e.g. by opting in. Consent can also be inferred from the business relationship between the sender and recipient or if the recipient conspicuously publishes their email address in a public place (such as on a website). Penalties are up to 10,000 penalty units, or 2,000 penalty units for a person other than a body corporate.
Spam is legally permissible according to CAN-SPAM, provided it meets certain criteria: a "truthful" subject line, no forged information in the technical headers or sender address, and other minor requirements. If the spam fails to comply with any of these requirements it is illegal. Aggravated or accelerated penalties apply if the spammer harvested the email addresses using methods described earlier.
A review of the effectiveness of CAN-SPAM in 2005 by the Federal Trade Commission (the agency charged with CAN-SPAM enforcement) stated that the amount of sexually explicit spam had significantly decreased since 2003 and the total volume had begun to level off. Senator Conrad Burns, a principal sponsor, noted that "Enforcement is key regarding the CAN-SPAM legislation." In 2004, less than one percent of spam complied with CAN-SPAM. In contrast to the FTC evaluation, many observers view CAN-SPAM as having failed in its purpose of reducing spam.
Accessing privately owned computer resources without the owner's permission is illegal under computer crime statutes in most nations. Deliberate spreading of computer viruses is also illegal in the United States and elsewhere. Thus, some common behaviors of spammers are criminal regardless of the legality of spamming per se. Even before the advent of laws specifically banning or regulating spamming, spammers were successfully prosecuted under computer fraud and abuse laws for wrongfully using others' computers.
The use of botnets can be perceived as theft. The spammer consumes a zombie owner's bandwidth and resources without any cost. In addition, spam is perceived as theft of services. The receiving SMTP servers consume significant amounts of system resources dealing with this unwanted traffic. As a result, service providers have to spend large amounts of money to make their systems capable of handling these amounts of email. Such costs are inevitably passed on to the service providers' customers.
Other laws, not only those related to spam, have been used to prosecute alleged spammers. For example, Alan Ralsky was indicted on stock fraud charges in January 2008, and Robert Soloway pleaded guilty in March 2008 to charges of mail fraud, fraud in connection with email, and failing to file a tax return.
Deception and fraud
Spammers may engage in deliberate fraud to send out their messages. Spammers often use false names, addresses, phone numbers, and other contact information to set up "disposable" accounts at various Internet service providers. They also often use falsified or stolen credit card numbers to pay for these accounts. This allows them to move quickly from one account to the next as the host ISPs discover and shut down each one.
Senders may go to great lengths to conceal the origin of their messages. Large companies may hire another firm to send their messages so that complaints or blocking of email falls on a third party. Others engage in spoofing of email addresses (much easier than IP address spoofing). The email protocol (SMTP) has no authentication by default, so the spammer can pretend to originate a message apparently from any email address. To prevent this, some ISPs and domains require the use of SMTP-AUTH, allowing positive identification of the specific account from which an email originates.
Senders cannot completely spoof email delivery chains (the 'Received' header), since the receiving mailserver records the actual connection from the last mailserver's IP address. To counter this, some spammers forge additional delivery headers to make it appear as if the email had previously traversed many legitimate servers.
Spoofing can have serious consequences for legitimate email users. Not only can their email inboxes get clogged up with "undeliverable" emails in addition to volumes of spam, they can mistakenly be identified as a spammer. Not only may they receive irate email from spam victims, but (if spam victims report the email address owner to the ISP, for example) a naive ISP may terminate their service for spamming.
Theft of service
Spammers frequently seek out and make use of vulnerable third-party systems such as open mail relays and open proxy servers. SMTP forwards mail from one server to another—mail servers that ISPs run commonly require some form of authentication to ensure that the user is a customer of that ISP. Open relays, however, do not properly check who is using the mail server and pass all mail to the destination address, making it harder to track down spammers.
Increasingly, spammers use networks of malware-infected PCs (zombies) to send their spam. Zombie networks are also known as botnets (such zombifying malware is known as a bot, short for robot). In June 2006, an estimated 80 percent of email spam was sent by zombie PCs, an increase of 30 percent from the prior year. An estimated 55 billion spam emails were sent each day in June 2006, an increase of 25 billion per day from June 2005.
For the first quarter of 2010, an estimated 305,000 newly activated zombie PCs were brought online each day for malicious activity. This number is slightly lower than the 312,000 of the fourth quarter of 2009.
Brazil produced the most zombies in the first quarter of 2010. Brazil was the source of 20 percent of all zombies, up from 14 percent in the fourth quarter of 2009. India had 10 percent, with Vietnam at 8 percent, and the Russian Federation at 7 percent.
To combat the problems posed by botnets, open relays, and proxy servers, many email server administrators pre-emptively block dynamic IP ranges and impose stringent requirements on other servers wishing to deliver mail. Forward-confirmed reverse DNS must be correctly set for the outgoing mail server and large swaths of IP addresses are blocked, sometimes pre-emptively, to prevent spam. These measures can pose problems for those wanting to run a small email server off an inexpensive domestic connection. Blacklisting of IP ranges due to spam emanating from them also causes problems for legitimate email servers in the same IP range.
Statistics and estimates
The total volume of email spam has been consistently growing, but in 2011 the trend seems to have reversed. The amount of spam users see in their mailboxes is only a portion of total spam sent, since spammers' lists often contain a large percentage of invalid addresses and many spam filters simply delete or reject "obvious spam".
The first known spam email, advertising a DEC product presentation, was sent in 1978 by Gary Thuerk to 600 addresses, which was all the users of ARPANET at the time, though software limitations meant only slightly more than half of the intended recipients actually received it. As of August 2010, the amount of spam was estimated to be around 200 billion spam messages sent per day. More than 97% of all emails sent over the Internet are unwanted, according to a Microsoft security report. MAAWG estimates that 85% of incoming mail is "abusive email", as of the second half of 2007. The sample size for the MAAWG's study was over 100 million mailboxes.
A 2010 survey of US and European email users showed that 46% of the respondents had opened spam messages, although only 11% had clicked on a link.
Cost of spam
A 2004 survey estimated that lost productivity costs Internet users in the United States $21.58 billion annually, while another reported the cost at $17 billion, up from $11 billion in 2003. The worldwide productivity cost of spam has been estimated at $50 billion in 2005. An estimate of the percentage cost borne by the sender of marketing junk mail (snail mail) is 88 percent, whereas in 2001 one spam was estimated to cost $0.10 for the receiver and $0.00001 (0.01% of the cost) for the sender.
Origin of spam
Origin or source of spam refers to the geographical location of the computer from which the spam is sent; it is not the country where the spammer resides, nor the country that hosts the spamvertised site. Because of the international nature of spam, the spammer, the hijacked spam-sending computer, the spamvertised server, and the user target of the spam are all often located in different countries. As much as 80% of spam received by Internet users in North America and Europe can be traced to fewer than 200 spammers.
When grouped by country, spam comes mostly from:
- The United States (the origin of 19.8% of spam messages, up from 18.9% in Q3)
- China (9.9%, up from 5.4%)
- Russia (6.4%, down from 8.3%)
- Brazil (6.3%, up from 4.5%)
- Turkey (4.4%, down from 8.2%)
When grouped by continents, spam comes mostly from:
- Asia (37.8%, down from 39.8%)
- North America (23.6%, up from 21.8%)
- Europe (23.4%, down from 23.9%)
- South America (12.9%, down from 13.2%)
In terms of number of IP addresses: the Spamhaus Project (which measures spam sources in terms of number of IP addresses used for spamming, rather than volume of spam sent) ranks the top three as the United States, China, and Russia, followed by Japan, Canada, and South Korea.
In terms of networks: As of 5 June 2007, the three networks hosting the most spammers are Verizon, AT&T, and VSNL International. Verizon inherited many of these spam sources from its acquisition of MCI, specifically through the UUNet subsidiary of MCI, which Verizon subsequently renamed Verizon Business.
Some popular methods for filtering and refusing spam include email filtering based on the content of the email, DNS-based blackhole lists (DNSBL), greylisting, spamtraps, enforcing technical requirements of email (SMTP), checksumming systems to detect bulk email, and putting some sort of cost on the sender via a proof-of-work system or a micropayment. Each method has strengths and weaknesses and each is controversial because of its weaknesses. For example, one company's offer to "[remove] some spamtrap and honeypot addresses" from email lists defeats the ability of those methods to identify spammers.
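To make the DNSBL mechanism concrete: a receiving mail server reverses the octets of the connecting IPv4 address and looks the result up under the blocklist's DNS zone; an answer means the address is listed, while NXDOMAIN means it is not. The sketch below uses the well-known zen.spamhaus.org zone purely as an example; a production server would follow the list operator's usage policy.

# Sketch of a DNSBL check: reverse the IPv4 octets and query them under the list's zone.
import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    reversed_ip = ".".join(reversed(ip.split(".")))   # 192.0.2.99 -> 99.2.0.192
    query = f"{reversed_ip}.{zone}"
    try:
        socket.gethostbyname(query)   # any A record (127.0.0.x) means "listed"
        return True
    except socket.gaierror:           # NXDOMAIN / lookup failure means "not listed"
        return False

print(dnsbl_listed("127.0.0.2"))  # 127.0.0.2 is the conventional "always listed" test address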
Outbound spam protection combines many of the techniques to scan messages exiting out of a service provider's network, identify spam, and taking action such as blocking the message or shutting off the source of the message.
In one study, 95 percent of the spam revenues examined cleared through just three banks.
How spammers operate
Gathering of addresses
In order to send spam, spammers need to obtain the email addresses of the intended recipients. To this end, both spammers themselves and list merchants gather huge lists of potential email addresses. Since spam is, by definition, unsolicited, this address harvesting is done without the consent (and sometimes against the expressed will) of the address owners. As a consequence, spammers' address lists are inaccurate. A single spam run may target tens of millions of possible addresses – many of which are invalid, malformed, or undeliverable.
Sometimes, if the sent spam is "bounced" or sent back to the sender by various programs that eliminate spam, or if the recipient clicks on an unsubscribe link, that may cause that email address to be marked as "valid", which is interpreted by the spammer as "send me more". This is illegal with the passage of anti-spam legislation, however. Thus a recipient should not automatically assume the unsubscribe link is an invitation to be sent more messages. If the originating company is legitimate and the content of the message is legitimate, then individuals should unsubscribe to messages they no longer wish to receive.
Delivering spam messages
Obfuscating message content
Many spam-filtering techniques work by searching for patterns in the headers or bodies of messages. For instance, a user may decide that all email they receive with the word "Viagra" in the subject line is spam, and instruct their mail program to automatically delete all such messages. To defeat such filters, the spammer may intentionally misspell commonly filtered words or insert other characters, often in a style similar to leetspeak, as in the following examples: V1agra, Via'gra, Vi@graa, vi*gra, \/iagra. This also allows for many different ways to express a given word, making identifying them all more difficult for filter software.
The principle of this method is to leave the word readable to humans (who can easily recognize the intended word for such misspellings), but not likely to be recognized by a literal computer program. This is only somewhat effective, because modern filter patterns have been designed to recognize blacklisted terms in the various iterations of misspelling. Other filters target the actual obfuscation methods, such as the non-standard use of punctuation or numerals into unusual places. Similarly, HTML-based email gives the spammer more tools to obfuscate text. Inserting HTML comments between letters can foil some filters, as can including text made invisible by setting the font color to white on a white background, or shrinking the font size to the smallest fine print. Another common ploy involves presenting the text as an image, which is either sent along or loaded from a remote server. This can be foiled by not permitting an email-program to load images.
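As a toy illustration of this cat-and-mouse game, the sketch below shows a literal keyword filter being evaded by "V1agra" and the kind of normalization step filters apply in response; the blacklist and character map are invented for illustration, not taken from any real filter.

# Toy illustration: a literal keyword match misses obfuscated spellings,
# while a simple character normalization recovers some of them.
BLACKLIST = {"viagra"}
LEET_MAP = str.maketrans({"1": "i", "!": "i", "@": "a", "0": "o", "$": "s"})

def naive_hit(text: str) -> bool:
    return any(word in text.lower() for word in BLACKLIST)

def normalized_hit(text: str) -> bool:
    cleaned = text.lower().translate(LEET_MAP)
    cleaned = cleaned.replace("'", "").replace("*", "").replace(".", "")
    return any(word in cleaned for word in BLACKLIST)

sample = "Cheap V1agra and Via'gra here"
print(naive_hit(sample))       # False - the literal filter is evaded
print(normalized_hit(sample))  # True  - normalization recovers the blacklisted term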
As Bayesian filtering has become popular as a spam-filtering technique, spammers have started using methods to weaken it. To a rough approximation, Bayesian filters rely on word probabilities. If a message contains many words that are used only in spam, and few that are never used in spam, it is likely to be spam. To weaken Bayesian filters, some spammers, alongside the sales pitch, now include lines of irrelevant, random words, in a technique known as Bayesian poisoning. A variant on this tactic may be borrowed from the Usenet abuser known as "Hipcrime"—to include passages from books taken from Project Gutenberg, or nonsense sentences generated with "dissociated press" algorithms. Randomly generated phrases can create spoetry (spam poetry) or spam art. The perceived credibility of spam messages by users differs across cultures; for example, Korean unsolicited email frequently uses apologies, likely to be based on Koreans’ modeling behavior and a greater tendency to follow social norms.
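The word-probability idea, and why padding a message with innocuous words works against it, can be sketched in a few lines; the per-word probabilities below are invented for illustration only.

# Minimal sketch of the word-probability idea: each word has an assumed
# probability of appearing in spam (numbers are made up), and per-word
# probabilities are combined assuming independence. Padding a message with
# low-probability "hammy" words drags the score down, which is exactly the
# effect Bayesian poisoning relies on.
import math

WORD_SPAM_PROB = {"viagra": 0.99, "pills": 0.90, "meeting": 0.05, "gutenberg": 0.10}

def spam_score(text: str) -> float:
    probs = [WORD_SPAM_PROB.get(word, 0.4) for word in text.lower().split()]
    log_spam = sum(math.log(p) for p in probs)          # evidence for spam
    log_ham = sum(math.log(1.0 - p) for p in probs)     # evidence for ham
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))   # normalized spam probability

print(spam_score("viagra pills"))                     # ~0.999 -> very likely spam
print(spam_score("viagra pills meeting gutenberg"))   # ~0.84  -> diluted by benign words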
Another method used to masquerade spam as legitimate messages is the use of autogenerated sender names in the From: field, ranging from realistic ones such as "Jackie F. Bird" to (either by mistake or intentionally) bizarre attention-grabbing names such as "Sloppiest U. Epiglottis" or "Attentively E. Behavioral". Return addresses are also routinely auto-generated, often using unsuspecting domain owners' legitimate domain names, leading some users to blame the innocent domain owners. Blocking lists use IP addresses rather than sender domain names, as these are more accurate. A mail purporting to be from example.com can be seen to be faked by looking for the originating IP address in the email's headers; also Sender Policy Framework, for example, helps by stating that a certain domain will send email only from certain IP addresses.
A number of other online activities and business practices are considered by anti-spam activists to be connected to spamming. These are sometimes termed spam-support services: business services, other than the actual sending of spam itself, which permit the spammer to continue operating. Spam-support services can include processing orders for goods advertised in spam, hosting Web sites or DNS records referenced in spam messages, or a number of specific services as follows:
Some Internet hosting firms advertise bulk-friendly or bulletproof hosting. This means that, unlike most ISPs, they will not terminate a customer for spamming. These hosting firms operate as clients of larger ISPs, and many have eventually been taken offline by these larger ISPs as a result of complaints regarding spam activity. Thus, while a firm may advertise bulletproof hosting, it is ultimately unable to deliver without the connivance of its upstream ISP. However, some spammers have managed to get what is called a pink contract (see below) – a contract with the ISP that allows them to spam without being disconnected.
A few companies produce spamware, or software designed for spammers. Spamware varies widely, but may include the ability to import thousands of addresses, to generate random addresses, to insert fraudulent headers into messages, to use dozens or hundreds of mail servers simultaneously, and to make use of open relays. The sale of spamware is illegal in eight U.S. states.
So-called millions CDs are commonly advertised in spam. These are CD-ROMs purportedly containing lists of email addresses, for use in sending spam to these addresses. Such lists are also sold directly online, frequently with the false claim that the owners of the listed addresses have requested (or "opted in") to be included, and they often contain large numbers of invalid addresses. In recent years these CDs have fallen almost entirely out of use, both because of the low quality of the email addresses on them and because address lists have grown so large (some exceed 20 GB) that a CD can no longer hold a meaningful share of them.
A number of DNS blacklists (DNSBLs), including the MAPS RBL, Spamhaus SBL, SORBS and SPEWS, target the providers of spam-support services as well as spammers. DNSBLs blacklist IPs or ranges of IPs to persuade ISPs to terminate services with known customers who are spammers or resell to spammers.
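The query convention DNSBLs use is simple enough to sketch: reverse the octets of the IP address, append the blocklist zone, and do an ordinary DNS lookup; an answer means the address is listed, while a failed lookup (NXDOMAIN) means it is not. The zone name below is only an example, and whether a query succeeds in practice depends on the list's access policy and your resolver.

    import socket

    def dnsbl_listed(ip, zone="zen.spamhaus.org"):
        """Check one IPv4 address against a DNSBL zone via a reversed-octet lookup."""
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # any A record in the answer counts as "listed"
            return True
        except socket.gaierror:           # NXDOMAIN or lookup failure: treat as not listed
            return False

    # 127.0.0.2 is the conventional test entry that blocklists are expected to publish.
    print(dnsbl_listed("127.0.0.2"))

Mail servers typically perform this lookup at connection time and reject or flag messages coming from listed addresses.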
- Unsolicited bulk email (UBE)
- A synonym for email spam.
- Unsolicited commercial email (UCE)
- Spam promoting a commercial service or product. This is the most common type of spam, but it excludes spam that consists of hoaxes (e.g. virus warnings), political advocacy, religious messages, and chain letters sent by a person to many other people. The term UCE may be most common in the USA.
- Pink contract
- A pink contract is a service contract offered by an ISP which offers bulk email service to spamming clients, in violation of that ISP's publicly posted acceptable use policy.
- Spamvertising is advertising through the medium of spam.
- Opt-in, confirmed opt-in, double opt-in, opt-out
- Opt-in, confirmed opt-in, double opt-in, and opt-out refer to whether the people on a mailing list are given the option to be put on, or taken off, the list. Confirmation ("double" opt-in, in marketing speak) means that an email address submitted, for example, through a web form is verified as actually requesting to join the mailing list before it is added, instead of being added without verification (a minimal sketch of this confirmation flow follows this glossary).
- Final, Ultimate Solution for the Spam Problem (FUSSP)
- An ironic reference to naïve developers who believe they have invented the perfect spam filter, which will stop all spam from reaching users' inboxes while deleting no legitimate email accidentally.
- Bacn is email that has been subscribed to and is therefore solicited. Bacn has been described as "email you want but not right now." Some examples of common bacn messages are news alerts, periodic messages from e-merchants from whom one has made previous purchases, messages from social networking sites, and wiki watch lists. The name bacn is meant to convey the idea that such email is "better than spam, but not as good as a personal email". It was originally coined in August 2007 at PodCamp Pittsburgh 2, and since then has been used amongst the blogging community.
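As a minimal sketch of the confirmed (double) opt-in flow described above, the snippet below records an address as pending, mails out a single-use token, and only adds the address to the list when that token comes back; the in-memory storage, the URL and all names are stand-ins invented for illustration.

    import secrets

    pending = {}        # token -> address awaiting confirmation (stand-in for a database)
    subscribers = set()

    def request_subscription(email):
        """Step 1: remember the address as pending and send a confirmation link."""
        token = secrets.token_urlsafe(16)
        pending[token] = email
        print(f"mail to {email}: visit https://lists.example.org/confirm?t={token}")
        return token

    def confirm_subscription(token):
        """Step 2: only a valid token, proof that the mailbox owner asked, adds the address."""
        email = pending.pop(token, None)
        if email is None:
            return False   # unknown or reused token: nothing is added
        subscribers.add(email)
        return True

    t = request_subscription("alice@example.org")
    print(confirm_subscription(t))          # True: the owner confirmed
    print(confirm_subscription("guessed"))  # False: a third party cannot add the address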
- Address munging
- Anti-spam techniques
- Boulder Pledge
- The Canadian Coalition Against Unsolicited Commercial Email
- CAN-SPAM Act of 2003
- Chain email
- Direct Marketing Associations
- Disposable email address
- Email address harvesting
- Gordon v. Virtumundo, Inc.
- Junk fax
- List poisoning
- Make money fast, the infamous Dave Rhodes chain letter that jumped to email.
- news.admin.net-abuse.email newsgroup
- Nigerian spam
- Project Honey Pot
- Pump and dump stock fraud
- Spider trap
- SPIT (SPam over Internet Telephony)
- Farmer, James John (2003-12-27). "3.4 Specific Types of Spam". An FAQ for news.admin.net-abuse.email; Part 3: Understanding NANAE. Spam FAQ. Archived from the original (FAQ) on 2004-02-12. Retrieved 2008-08-19.
- "You Might Be An Anti-Spam Kook If...". Rhyolite Software. 2006-11-25. Retrieved 2007-01-05.
- "On what type of email should I (not) use SpamCop?" (FAQ). SpamCop FAQ. IronPort Systems. Retrieved 2007-01-05.
- Scott Hazen Mueller. "What is spam?". Information about spam. Abuse.net. Retrieved 2007-01-05.
- "Spam Defined". Infinite Monkeys & Co. 2002-12-22. Retrieved 2007-01-05.
- Bradley, David (2009-05-13). "Spam or Ham?". Sciencetext. Retrieved 2011-09-28.
- "Merriam Webster Dictionary". Merriam-Webster.
- Rebecca Lieb (July 26, 2002). "Make Spammers Pay Before You Do". The ClickZ Network. Archived from the original on 2007-08-07. Retrieved 2010-09-23.
- Clinton Internet provider wins $11B suit against spammer, QC Times
- AOL gives up treasure hunt, Boston Herald
- Email metrics report, MAAWG, Nov 2010
- Opening Pandora's In-Box. Archived June 28, 2008, at the Wayback Machine.
- "alt.spam FAQ". Gandalf.home.digital.net. Retrieved 2012-12-10.
- "Why is spam bad?". Spam.abuse.net. Retrieved 2012-12-10.
- Ferris Research: Cost of Spam
- Spam's Cost To Business Escalates
- "Q1 2014 Internet Threats Trend Report" (PDF) (Press release). Sophos Cyberoam. Retrieved 2015-11-01.
- "Q1 2010 Internet Threats Trend Report" (PDF) (Press release). Commtouch Software Ltd. Retrieved 2010-09-23.
- Brett Forrest (August 2006). "The Sleazy Life and Nasty Death of Russia’s Spam King". Issue 14.08 (Wired Magazine). Retrieved 2007-01-05.
- "Only one in 28 emails legitimate, Sophos report reveals rising tide of spam in April–June 2008" (Press release). Sophos. 2008-07-15. Retrieved 2008-10-12.
- Bob West (January 19, 2008). "Getting it Wrong: Corporate America Spams the Afterlife". Clueless Mailers. Retrieved 2010-09-23.
- Giorgio Fumera, Ignazio Pillai, Fabio Roli,"Spam filtering based on the analysis of text information embedded into images". Journal of Machine Learning Research (special issue on Machine Learning in Computer Security), vol. 7, pp. 2699-2720, 12/2006.
- Battista Biggio, Giorgio Fumera, Ignazio Pillai, Fabio Roli,"A survey and experimental evaluation of image spam filtering techniques, Pattern Recognition Letters". Volume 32, Issue 10, 15 July 2011, Pages 1436-1446, ISSN 0167-8655.
- Eric B. Parizo (2006-07-26). "Image spam paints a troubling picture". Search Security. Retrieved 2007-01-06.
- "Dealing with blank spam". CNET. September 2, 2009. Retrieved August 17, 2015.
- "symantec.com". symantec.com. Retrieved 2012-12-10.
- The Carbon Footprint of Email Spam Report (PDF), McAfee/ICF: over 95% of the energy consumed by spam is on the receiver side.
- Privacy and Electronic Communications (EC Directive) Regulations 2003
- Enforcement, ICO
- Fighting Internet and Wireless Spam Act, CA: GC
- Canada's Anti-spam Bill C-28 is the Law of the Land, Circle ID, 2010-12-15
- "Commonwealth Consolidated Acts: Spam Act 2003 – Schedule 2". Sydney, AU: AustLII, Faculty of Law, University of Technology. Retrieved 2010-09-23.
- But see, e.g., Hypertouch v. ValueClick, Inc. et al., Cal.App.4th (Google Scholar: January 18, 2011).
- Effectiveness and Enforcement of the CAN-SPAM Act (PDF), USA: FTC, archived from the original (PDF) on January 10, 2006
- Is the CAN-SPAM Law Working?, PC World
- Ken Fisher (December 2005), US FTC says CAN-SPAM works, Ars Technica
- Six years later, Can Spam act leaves spam problem unresolved, USA: SC Magazine
- You've Got Spam, Find Law
- Carter, Mike (2008-03-15), "Spam king" pleads guilty to felony fraud, Seattle Times
- "Spammers Continue Innovation: IronPort Study Shows Image-based Spam, Hit & Run, and Increased Volumes Latest Threat to Your Inbox" (Press release). IronPort Systems. 2006-06-28. Retrieved 2007-01-05.
- Charlie White (2011-07-04). "Spam Decreased 82.22% Over The Past Year". Mashable.com. Retrieved 2012-12-10.
- "Spam" (in Dutch). Symantec.cloud. Retrieved 2012-12-10.
- Brad Templeton (8 March 2005). "Reaction to the DEC Spam of 1978". Brad Templeton. Retrieved 2007-01-21.
- Josh Halliday (10 January 2011). "Email spam level bounces back after record low". guardian.co.uk. Retrieved 2011-01-11.
- Waters, Darren (2009-04-08). "Spam overwhelms email messages". BBC News. Retrieved 2012-12-10.
- "Email Metrics Program: The Network Operators' Perspective" (PDF). Report No. 7 – Third and Fourth quarters 2007. Messaging Anti-Abuse Working Group. April 2008. Retrieved 2008-05-08.
- "Email Metrics Program: The Network Operators' Perspective" (PDF). Report No. 1 – 4th quarter 2005 Report. Messaging Anti-Abuse Working Group. March 2006. Archived from the original (PDF) on December 8, 2006. Retrieved 2007-01-06.
- "Email Metrics Program: The Network Operators' Perspective" (PDF). Report No. 2 – 1st quarter 2006. Messaging Anti-Abuse Working Group. June 2006. Archived from the original (PDF) on 2006-09-24. Retrieved 2007-01-06.
- "2010 MAAWG Email Security Awareness and Usage Report, Messing Anti-Abuse Working Group/Ipsos Public Affairs" (PDF). Retrieved 2012-12-10.
- Staff (18 November 2004). "Bill Gates 'most spammed person'". BBC News. Retrieved 2010-09-23.
- Mike Wendland (December 2, 2004). "Ballmer checks out my spam problem". ACME Laboratories republication of an article appearing in the Detroit Free Press. Retrieved 2010-09-23 (the date given is for the original article; the republication was revised 8 June 2005).
- Jef Poskanzer (2006-05-15). "Mail Filtering". ACME Laboratories. Retrieved 2010-09-23.
- Spam Costs Billions
- Register of Known Spam Operations (ROKSO).
- "Sophos reveals 'Dirty Dozen' spam producing countries, August 2004" (Press release). Sophos. 2004-08-24. Retrieved 2007-01-06.
- "Sophos reveals 'dirty dozen' spam relaying countries" (Press release). Sophos. 2006-07-24. Retrieved 2007-01-06.
- "Sophos research reveals dirty dozen spam-relaying nations" (Press release). Sophos. 2007-04-11. Retrieved 2007-06-15.
- "Sophos reveals 'Dirty Dozen' spam producing countries, July 2007" (Press release). Sophos. 2007-07-18. Retrieved 2007-07-24.
- "Sophos reveals 'Dirty Dozen' spam producing countries for Q3 2007" (Press release). Sophos. 2007-10-24. Retrieved 2007-11-09.
- "Sophos details dirty dozen spam-relaying countries for Q4 2007" (Press release). Sophos. 2008-02-11. Retrieved 2008-02-12.
- "Sophos details dirty dozen spam-relaying countries for Q1 2008" (Press release). Sophos. 2008-04-14. Retrieved 2008-06-07.
- "Eight times more malicious email attachments spammed out in Q3 2008" (Press release). Sophos. 2008-10-27. Retrieved 2008-11-02.
- "Spammers defy Bill Gates's death-of-spam prophecy" (Press release). Sophos. 2009-01-22. Retrieved 2009-01-22.
- "Spamhaus Statistics: The Top 10". Spamhaus Blocklist (SBL) database. The Spamhaus Project Ltd. dynamic report. Retrieved 2007-01-06. Check date values in:
- Shawn Hernan; James R. Cutler; David Harris (1997-11-25). "I-005c: E-Mail Spamming countermeasures: Detection and prevention of E-Mail spamming". Computer Incident Advisory Capability Information Bulletins. United States Department of Energy. Retrieved 2007-01-06.
- Kirill Levchenko; Andreas Pitsillidis; Neha Chachra; Brandon Enright; Márk Félegyházi; Chris Grier; Tristan Halvorson; Chris Kanich; Christian Kreibich; He Liu; Damon McCoy; Nicholas Weaver; Vern Paxson; Geoffrey M. Voelker; Stefan Savage (May 2011), Click Trajectories: End-to-End Analysis of the Spam Value Chain (PDF), Oakland, CA: Proceedings of the IEEE Symposium and Security and Privacy
- Park, Hee Sun; Hye Song; Jeong An (2005). ""I Am Sorry to Send You SPAM": Cross-cultural differences in use of apologies in email advertising in Korea and the U.S.". Human Communication Research 31 (3): 365. doi:10.1093/hcr/31.3.365.
- Sapient Fridge (2005-07-08). "Spamware vendor list". Spam Sights. Retrieved 2007-01-06.
- "SBL Policy & Listing Criteria". The Spamhaus Project. 2006-12-22. Retrieved 2007-01-06. original location was at SBL rationale; the referenced page is an auto-redirect target from the original location
- "Spamware – Email Address Harvesting Tools and Anonymous Bulk Emailing Software". MX Logic (abstract hosted by Bit Pipe). 2004-10-01. Retrieved 2007-01-06. the link here is to an abstract of a white paper; registration with the authoring organization is required to obtain the full white paper.
- "Definitions of Words We Use". Coalition Against Unsolicited Bulk Email, Australia. Retrieved 2007-01-06.
- "Vernon Schryver: You Might Be An Anti-Spam Kook If". Rhyolite.com. Retrieved 2012-12-10.
- Tips for your new anti-spam idea.
- "PodCamp Pittsburgh 2 cooks up Bacn". PodCamp Pittsburgh. August 23, 2007. Archived from the original on 30 March 2010. Retrieved 2010-03-15.
- Barrett, Grant (2007-12-23). "All We Are Saying". New York Times. Retrieved 2007-12-24.
Bacn: Impersonal e-mail messages that are nearly as annoying as spam but that you have chosen to receive: alerts, newsletters, automated reminders etcetera. Popularised at the PodCamp conference in Pittsburgh in August.
- Email overload? Try Priority Inbox - Google Gmail Blog, 30 Aug 2010
- NPR: Move Over, Spam: 'Bacn' Is the E-Mail Dish du Jour
- "PCPGH invented BACN". Viddler. October 16, 2008. Retrieved 2011-03-23.
- Dow, K; Serenko, A; Turel, O; Wong, J (2006), "Antecedents and consequences of user satisfaction with email systems", International Journal of e-Collaboration (PDF) 2 (2), pp. 46–64.
- Sjouwerman, Stu; Posluns, Jeffrey, Inside the spam cartel: trade secrets from the dark side, Elsevier/Syngress; 1st edition, November 27, 2004. ISBN 978-1-932266-86-3.
Wikimedia Commons has media related to spam email.
- Spam Links
- "Can the Spam: How Spam is Bad for the Environment", The Economist, June 15, 2009.
- Worldwide Email Threat Activity, Barracuda Central.
Government reports and industry white papers
- Email Address Harvesting and the Effectiveness of Anti-SPAM Filters (PDF), United States: FTC, retrieved 13 Oct 2007.
- The Electronic Frontier Foundation's spam page which contains legislation, analysis and litigation histories
- Why Am I Getting All This Spam? Unsolicited Commercial Email Research Six Month Report by Center for Democracy & Technology from the author of Pegasus Mail & Mercury Mail Transport System – David Harris
- Spam White Paper – Drowning in Sewage (PDF), Pegasus Mail.
Note: In compliance with local Indian law, all donors donating to Give2Habitat India must register. Habitat for Humanity India wishes to thank all donors for their past, present and future support.
Habitat for Humanity International was founded in 1976 by Millard and Linda Fuller who developed the concept of ‘partnership housing’ that centered on those in need of adequate shelter working side by side with volunteers to build simple, decent houses. Homes were built and sold to families in need at no profit and no interest - and the basic model of Habitat for Humanity was established.
The Fullers decided to apply the ‘Fund for Humanity’ concept in developing countries. Thus, Habitat for Humanity International as an organization was born. In 1984, former U.S. President Jimmy Carter and his wife Rosalynn took their first Habitat work trip, the Jimmy Carter Work Project, to New York City. Their personal involvement in Habitat's work brought the organization national visibility and sparked interest in Habitat's work around the world.
Today, Habitat for Humanity has built more than 500,000 houses across 80 countries, sheltering 2.5 million people worldwide.
Habitat India began operations in 1983 in Khammam, Andhra Pradesh. It is among Habitat's largest programmes in the Asia-Pacific region. After the Jimmy Carter Work Project in 2006 - where one hundred homes were built by more than 3,000 volunteers, including celebrities like Brad Pitt, Steve Waugh and John Abraham, during a five-day blitz build in Lonavala - Habitat India launched the IndiaBUILDS - A World of Hope Campaign to support 100,000 families to live in safe and decent homes by 2015.
Habitat for Humanity works in partnership with local, grassroots, non-government organizations, and micro finance institutions throughout India to provide decent housing. Habitat India operates through its resource centers known as Habitat Resource Centers (HRC's) in Bangalore, Chennai, New Delhi and Mumbai. Till date, Habitat has served over 45,000 families across 17 states in India.
Housing crisis in India
India faced a housing shortage of 74 million housing units by the end of 2011, according to the National Housing Bank. The majority of the housing shortfall is in rural areas. One in every five rural dwellers lives in a kutcha home made of mud, thatch, grass or other non-lasting natural materials. In urban areas, the poor live under bridges, on pavements, along train tracks, highways and canals, as well as in crowded slums.
Housing must become a priority
The percentage of people without access to decent, stable housing is rising
Increasing the housing supply across the globe is essential
Adequate housing is vital to the health of the world's economies, communities, and populations
If we are to succeed in the fight against poverty, we must support the expansion of housing both as policy and practice
Housing as a catalyst
Research has shown that one's health is directly linked to housing and housing-related basics such as water and sanitation. Researchers at the World Bank found that replacing dirt floors with concrete floors improved the health of children, facilitating a 20 percent reduction in parasitic infections, a 13 percent reduction in diarrhea and a 20 percent reduction in anemia. Housing provides:
Stability for families and children
A sense of dignity and pride
Health, physical safety, and security
An increase in educational and job prospects
Prevention of poverty induced diseases such as HIV/AIDS, malaria, tuberculosis and diarrhea
An increase in access to credit through the provision of interest-free loans
Safe homes and neighborhoods that in turn help to build social stability and security
HOW HABITAT FOR HUMANITY INDIA WORKS
Habitat for Humanity works in partnership with local, grassroots non-government organizations, microfinance institutions and other partners throughout India to provide decent housing. Home partners contribute their own labor (sweat equity) and construction materials, and repay toward the cost of building their homes. Regular repayments go into a Fund for Humanity which helps Habitat to build more homes. Habitat uses a Save & Build housing microfinance concept in India in order to reach more communities in need. Home partner families usually form groups – often led by women – to save part of the cost of each house while Habitat, non-governmental organizations or corporate partners invest the remaining portion of the amount.
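To make the revolving Fund for Humanity idea concrete, here is a deliberately simplified sketch; every figure in it is invented for illustration and does not describe any actual Habitat budget.

    def revolving_fund(initial_fund, house_cost, monthly_repayment, months):
        """Count how many houses a revolving fund can finance as repayments flow back in."""
        fund, houses = initial_fund, 0
        for month in range(1, months + 1):
            fund += houses * monthly_repayment      # existing homeowners repay into the fund
            new_houses = int(fund // house_cost)    # build again whenever enough has accumulated
            houses += new_houses
            fund -= new_houses * house_cost
            if month % 12 == 0:
                print(f"year {month // 12}: {houses} houses financed so far")
        return houses

    revolving_fund(initial_fund=100_000, house_cost=2_500, monthly_repayment=20, months=60)

The only point of the model is that each repayment finances further construction, which is why the same initial donation keeps building homes year after year.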
Habitat houses in India range in size from 20 sq. m. to 33.5 sq. m. Each house usually comprises a living room, kitchen and toilet. Houses constructed under the post-tsunami reconstruction program are earthquake-resistant and feature stairs to the roof to aid evacuation in the event of floods. To extend the reach of its programs, HFH India operates Resource Centres in Bengaluru, Chennai, Mumbai and the capital, New Delhi.
Habitat India engages in the following construction related activities:
1. Construction of a new house
2. Rehabilitation of houses: Restoration of a dwelling that once met the required housing standards
3. Incremental constructions: A construction intervention that addresses a build in stages
4. Home repairs: Repairs include patching, restoration or minor replacement of materials and building components to keep the house in good or sound condition
5. Home improvements: To reduce the vulnerability of the family in the areas of health or safety. The improvement is permanently attached to the dwelling or property
Habitat for Humanity International fact sheet (frequently asked questions)
What is Habitat for Humanity International?
A nonprofit, ecumenical Christian housing ministry that has helped to build over 500,000 decent, affordable houses and served 2.5 million people worldwide.
Our vision: a world where everyone has a decent place to live. Founded in 1976 by Millard Fuller and his wife, Linda.
How does it work?
Through volunteer labor and donations of money and materials, Habitat builds and rehabilitates simple, decent houses alongside our homeowner partner families. In addition to zero-interest loan repayments, homeowners invest hundreds of hours of their own labor into building their Habitat house and the houses of others. Habitat houses are sold to partner families at no profit and financed with affordable loans. The homeowners' repayments of home loans are used to build still more Habitat houses.
How are partner families selected?
Families in need of decent shelter apply to local Habitat affiliates.
The affiliate’s family selection committee chooses homeowners based on their level of need, their willingness to become partners in the program and their ability to repay the loan.
Every affiliate follows a nondiscriminatory policy of family selection.
Neither race nor religion is a factor in choosing the families who receive Habitat houses.
What are Habitat affiliates?
Community-level Habitat for Humanity offices that act in partnership with and on behalf of Habitat for Humanity International. Each affiliate coordinates all aspects of Habitat home building in its local area.
Where does Habitat for Humanity operate?
Worldwide. Our operational headquarters are located in Americus, Georgia and our administrative headquarters are in Atlanta, Georgia, USA.
How are donations distributed and used?
As designated by the donor.
Gifts designated to a specific affiliate or building project are forwarded to that affiliate or project.
Un-designated gifts are used where most needed and for administrative expenses.
Habitat’s most recent audited financial statement is available online.
Who controls and manages Habitat for Humanity International?
An ecumenical, international board of directors.
Board members are dedicated volunteers who are deeply concerned about the problems of poverty housing around the world.
The Habitat headquarters are operated by an administrative staff, professional and support employees, and volunteers.
How does Habitat work with the government?
We ask legislators and housing regulators to increase support for affordable home ownership and eliminate poverty housing.
We monitor public policies related to housing, community and international development.
We advocate policy choices that increase access to decent, affordable housing for people around the world.
We accept government funds as long as they have no conditions that would violate Habitat’s principles or limit its ability to proclaim its Christian identity.
- Standard Chartered Bank: Donate to Nepal Earthquake Disaster 1,567,528 INR
- GENPACT Support to Earthquake Victims/Nepal Disaster Response 471,763 INR
- Build homes in India with Eric 282,353 INR
- Nepal & India Earthquake Relief Fund 199,000 INR
- Help Repair & Build Homes in Uttarakhand 120,409 INR
Write better anchor text
Posted by Ouali Rezouali on 12 December 2012 01:40 PM
Suitable anchor text makes it easy to convey the content of the linked page
Anchor text is the clickable text that users will see as a result of a link, and is placed within the anchor tag <a href="..."></a>.
This text tells users and Google something about the page you're linking to. Links on your page may be internal—pointing to other pages on your site—or external—leading to content on other sites. In either case, the better your anchor text is, the easier it is for users to navigate and for Google to understand what the page you're linking to is about.
Links and Anchor Text on your Cabanova Website
Internal, external and download links can be created easily within the Cabanova Sitebuilder, and it is just as easy to define the anchor text:
- Left-click the text box containing the text you'd like to set as a link to edit it
- Using the left mouse button, select the text which will be set as a link
- Click on the 'Link' icon and the 'Link type' window will pop up
1. To set the selected text as an External Link pointing to an external Website:
2. To set the selected text as an Internal Link pointing to a Webpage of your Website:
3. To set the selected text as a Download Link:
Choose descriptive text
The anchor text you use for a link should provide at least a basic idea of what the page linked to is about.
Avoid:
- writing generic anchor text like "page", "article", or "click here"
- using text that is off-topic or has no relation to the content of the page linked to
- using the page's URL as the anchor text in most cases (although there are certainly legitimate uses of this, such as promoting or referencing a new website's address)
Write concise text
Aim for short but descriptive text, usually a few words or a short phrase.
Avoid:
- writing long anchor text, such as a lengthy sentence or a short paragraph of text
Format links so they're easy to spot
Make it easy for users to distinguish between regular text and the anchor text of your links. Your content becomes less useful if users miss the links or accidentally click them.
Avoid:
- making links look just like regular text
Think about anchor text for internal links too
You may usually think about linking in terms of pointing to outside websites, but paying more attention to the anchor text used for internal links can help users and Google navigate your site better.
Avoid:
- using excessively keyword-filled or lengthy anchor text just for search engines
Ancient Greek religion was, by definition, public and communal. Worship was inextricably tied to the polis, the city-state that was one’s community. Religion was expressed publicly through music and dance, especially at the processionals that were often part of sacred festivals honoring the gods. There were also loudly spoken prayers, ritual sacrifice, and the display of votive gifts in the temple, all of which were witnessed by the people as part of worship in a god’s sanctuary. When an individual asked favors of the gods, it was on behalf of the community, not the individual making the request. The polis’ relationship with the gods was reciprocal–the community provided worship and respect, the gods provided favors and gifts of blessings.
However, Greek religion was so complex that throughout it ran a trend counter to this public, communal worship: the mystery cult. Only initiates could take part in the rites of the mystery cult, and they were forbidden to ever speak of what occurred. They followed this precept so faithfully that even today we know very little about what was involved in these cults.
Quick aside on words: “cult” today has various negative connotations. When used in describing ancient Greek religion it merely means the many different practices dedicated to one god or another. There was a “cult of Athena,” a “cult of Dionysos,” a “cult of Zeus,” etc. The mystery cults did not exist entirely outside the public religion of the polis, but were rather a special subset of the worship of particular gods.
Many of the mystery cults dealt specifically with the issue of the afterlife. The basic Greek idea of the afterlife was not pleasant–you existed as a shade in the underworld. In The Odyssey, when Odysseus passes through the underworld and meets the ghost of the greatest of warriors, Achilles, Odysseus says to him that he was so great in life and his everlasting renown so assured, that, basically, he shouldn’t take being dead so hard. Achilles answers, “Say not a word in death’s favor; I would rather be a paid servant in a poor man’s house and be above ground than king of kings among the dead.”
In the ancient world, magical spells and prayers were frequently written on tiny sheets of paper or beaten metal, which would then be rolled up and worn in a pendant around the neck. Our most beautiful example of this sort of object, I think, is an Orphic prayer sheet barely one inch by two inches. This video offers a reading of the translated English text.
This prayer sheet would have belonged to a member of a cult of Orpheus, the mythological poet-musician who was the son of Calliope, the muse of epic poetry, and was often said to also be the son of Apollo, god of music and reason as well. Orphic cults became popular in the 500s B.C. The Orphic cult was one that promised a way to a better afterlife. The prayer on this sheet is giving instructions to the dead soul, which is thirsty, as all dead souls are. Most drink from Lethe, the river of forgetfulness, and lose all memory of their lives. This prayer says to go past the river and drink from the spring of Memory, marked by the cypress tree, because truly all souls are half-earthly and half-celestial, and the celestial half is your true nature. Follow these instructions, and your afterlife will be one of bliss rather than ghostly misery.
Another mystery cult promising a better afterlife was the cult of the Egyptian goddess Isis, which spread throughout the Mediterranean during the Hellenistic period (323–146 B.C.), remaining popular even into Roman times. A core myth of Isis’s cult was her resurrection of her husband Osiris, who had been murdered and dismembered by his brother, the god Seth. Dedication to her cult allowed access to her powerful magic, which could defeat even death.
Isis was identified with Demeter in the Greek world, and Demeter and her daughter Persephone, often called Kore, were the focus of one of the most famous mystery cults. More is known, from more sources (archeological, written, and artistic), of the mysteries of Eleusis than of any other ancient mystery cult. Furthermore, this cult was widely known and famous for its ability to bring happiness and comfort to its members in the ancient world, where it lasted from the time of Homer until the fall of the Western Roman Empire, nearly a thousand years. The cult was remarkably accepting–women, slaves, even foreigners could join.
Scholars find echoes in the mystery cults of practices going back to the Neolithic Period (about 8000–3000 B.C.). Both the mystery cults of Demeter and Dionysos show relationships to the Mother Goddess, specifically as she was worshipped in Anatolia (today, Turkey). This goddess is indeed a mystery to us today, mostly known through figures that would have been ancient even to the Greeks. (More about the sculpture below, including views of it in the gallery, in a video from earlier this week.) And of course, that’s all part of the appeal of the mystery cults–if they were ancient and mysterious to the Greeks, they are even more so to us today.
with-open-stream (ANSI Common Lisp, 21 Streams, 21.2 Dictionary of Streams)
- Arguments and Values:
var - a variable name.
stream - a form; evaluated to produce a stream.
declaration - a declare expression; not evaluated.
forms - an implicit progn.
results - the values returned by the forms.
with-open-stream performs a series of operations on stream, returns a value, and then closes the stream.
Var is bound to the value of stream, and then forms are executed as an implicit progn.
The stream is automatically closed on exit from with-open-stream, no matter whether the exit is normal or abnormal. The stream has dynamic extent; its extent ends when the form is exited.
The consequences are undefined if an attempt is made to assign the variable var with the forms.
(with-open-stream (s (make-string-input-stream "1 2 3 4"))
  (+ (read s) (read s) (read s)))
=>  6
- Side Effects:
The stream is closed (upon exit).
- See Also:
- Allegro CL Implementation Details:
The site on which Piazza della Signoria was built was occupied in the period of the Roman Florentia by a large theatre. In the Middle Ages, modest houses and alleyways sprang up there. The land was the property of the powerful Ghibelline family of the Uberti, and when the Guelphs took power, they razed the Ghibelline properties to the ground and established that nothing should be built on the site again; this is the reason why Palazzo Vecchio, which faces onto the piazza, is irregularly shaped and both the main entrance and the tower are somewhat eccentric.
Work on the piazza, which was designed and supervised by Arnolfo di Cambio, started in 1299 and since then has witnessed all the major changes in the history of the city. Like Palazzo Vecchio, its name has changed a number of times: initially called "Piazza dei Priori" or "Piazza dei Signori", it was renamed "Piazza del Granduca" during the government of Cosimo I, and maintained this name till 1859 when the Grand Duchy finally collapsed. Since the unification of Italy, it has had its current name of Piazza della Signoria.
Besides the name, the perimeter and the paving of the piazza have also changed several times. When Palazzo Vecchio was enlarged, a number of houses were destroyed or moved back in order to maintain the size of the piazza. At the beginning there was simply a dirt surface; then it was paved in red brick and finished with pietra serena. This lasted until the nineteenth century, when it was resurfaced entirely in pietra serena. This nineteenth-century surface was largely replaced with new pietra serena paving a few years ago, which sparked off a polemical dispute between the Florentines and the bodies responsible for the work, not only because of the appearance of the 'patch job' but also because the old stones mysteriously disappeared.
The piazza was embellished in 1563 with the Neptune Fountain by Ammannati (known by Florentines as the "Fontana del Biancone"), and in 1590 the Equestrian Monument to Cosimo I de' Medici by Giambologna was positioned to the right of Palazzo Vecchio.
In 1420, on the extreme right of the main facade of Palazzo Vecchio, the Marzocco by Donatello (now a copy) was displayed as a symbol of the strength and liberty of the comune, while the famous David of Michelangelo (now also a copy) was positioned to the side of the main doorway to symbolise the victory of democracy over tyranny. Bandinelli's Hercules and Cacus, dated 1534, was placed near the David to recall the victory of the Medici over their internal enemies. Close to the centre of the piazza there is a circular plaque with an inscription which indicates the point where Savonarola was burnt at the stake.
Picture by Sandro Santioli
Translated by Jeremy Carden
Grégoire-Isidore Flachéron (French, 1806–1873)
Shepherds With Sheep and Goats in a Valley, 1857
Oil on canvas, 67 3/4 x 105 1/2 in.
Signed and dated lower left: Ire. Flacheron/Roma 1857
Grégoire-Isidore Flachéron was the son of a prominent architect, and studied first at the École des Beaux-Arts in Lyon under Pierre Révoil, and later, possibly with Jean-Auguste-Dominique Ingres in Paris. In 1833, he moved to Rome, where he spent several years before eventually settling in the south of France. He specialized in landscape painting, especially of the Roman countryside, which he regularly exhibited in both Paris and Lyon, and from 1861 increasingly painted views of the Cote d'Azur and Algeria.
Shepherds with Sheep and Goats in a Valley is a painting that clearly demonstrates the influence of Nicolas Poussin and the landscape style of the 17th century, not only because of the carefully composed landscape view, the pastoral, Italian setting, and the idealized figures, but also because of the absence of any signs of modern life. In the 1850s, critics viewed Flachéron as a defender of a traditional form of landscape painting that was then under attack by Realism. An anonymous writer for Le Courrier de Lyon wrote, in 1853, that Flachéron’s landscape paintings were a “conscientious work where nothing is neglected, where the trees are not sacrificed for the fields, the sky for the earth….” Contemporary readers would have understood this as an obvious reference to the way that Realist painters such as Gustave Courbet controversially focused on individual details at the expense of the overall composition.
It took a massive team of researchers a decade to sequence the genome of the tick that carries Lyme disease. The results “provide a foundation for a whole new era in tick research,” says Catherine Hill, a professor of medical entomology at Purdue University who led the project.
“Now that we’ve cracked the tick’s code, we can begin to design strategies to control ticks, to understand how they transmit disease and to interfere with that process.”
“Ticks are underappreciated as vectors—until you get Lyme disease.”
The genome of Ixodes scapularis, known as the deer tick or blacklegged tick, decodes the biology of an arachnid with millions of years of successful parasitism. It also sheds light on how ticks acquire and transmit pathogens and offers tick-specific targets for control.
I. scapularis is the first tick species to have its genome sequenced.
Not just Lyme disease
Tick-borne illnesses cause thousands of human and animal deaths annually, and ticks transmit a wider variety of pathogens and parasites than any other arthropod. They primarily spread disease by creating a feeding wound in the skin of their hosts, regurgitating infected saliva into the wound as they ingest blood.
Despite ticks’ capacity to acquire and pass on an array of pathogens, research on ticks has lagged behind that of other arthropod vectors, such as mosquitoes, largely because of a lack of genetic and molecular tools and resources.
“Ticks are underappreciated as vectors—until you get Lyme disease,” says Hill.
About 30,000 cases of Lyme disease cases are reported in the US annually, most concentrated in the Northeast and upper Midwest. But the Centers for Disease Control estimates the actual number of cases is 329,000 a year, many of which are unreported or misdiagnosed.
While not fatal, Lyme disease can be permanently debilitating if the infection is not treated before it reaches the chronic phase.
The deer tick also vectors human granulocytic anaplasmosis, babesiosis, and the potentially lethal Powassan virus. Other tick species transmit a number of flaviviruses, including some that cause hemorrhaging and inflammation of the brain and the membrane that covers the brain and spinal cord.
Less is known about the tick-borne flaviviruses than Lyme disease, Hill says, but they are particularly important diseases in Europe and parts of Asia and represent global threats to human health.
“Genomic resources for the tick were desperately needed,” she adds. “These enable us to look at tick biology in a systems way.”
The secret proteins
The genome provides two lines of valuable biological resources, Hill says: the genes and proteins that make ticks successful parasites and excellent vectors of parasites and pathogens.
Identifying the proteins involved in the transmission of tick-borne diseases could help researchers develop strategies to halt this process.
Researchers pinpointed some of the proteins that play key roles in the interactions between deer ticks and the bacterium that causes Lyme disease and proteins associated with the transmission of human granulocytic anaplasmosis, an emerging disease.
A companion paper published in PLOS Neglected Tropical Diseases identified proteins and biochemical pathways associated with infection and replication of the encephalitis-causing Langat virus, another pathogen transmitted by Ixodes ticks. These proteins could be candidates for drugs and vaccines and give clues to how the virus affects the tick.
“This study opens the door to understanding how tick-borne viruses exploit their hosts and offers unique insights from ticks that could be applicable to humans,” says Richard Kuhn, professor and head of the biological sciences department at Purdue and lead author of the virus study. “Once you know which host proteins are critical for virus replication, you can manipulate those proteins to interfere with the growth and development of the virus.”
Tick saliva and how they digest blood
The genome also provides insights into unique aspects of tick biology.
Tick saliva, for example, teems with antimicrobials, pain inhibitors, cement, anticoagulants, and immune suppressors, all designed to help the tick feed on its host undetected for days or weeks.
The genome reveals that tick saliva contains thousands of compounds—compared with mere hundreds in mosquito saliva—a diversity that presumably allows ticks to exploit a wide range of hosts and stay attached for a long time, Hill said.
The researchers also identified genes that could be linked to ticks’ ability to synthesize new armorlike cuticle as they feed, allowing them to expand over 100 times.
The team searched for clues to how ticks digest blood, a toxic food source due to its high concentrations of iron. The genome points to a number of proteins that link with iron-containing heme molecules, the byproducts of blood digestion, to make them less toxic.
“Ticks have an amazing number of detoxification enzymes, and we don’t know why.”
“Ticks have an amazing number of detoxification enzymes, and we don’t know why,” Hill says. “We’ve got our eye on this because these enzymes are also involved in detoxifying insecticides. As we develop new chemicals to control ticks, we’ll be going up against this massive arsenal of detoxification enzymes, far more than insects have.”
One of the major findings of the genome project is that about 20 percent of the genes appear to be unique to ticks. These genes could provide researchers with tick-specific targets for control.
“We don’t see the equivalent of these genes in a mosquito or human,” Hill says. “That’s a fascinating collection of molecules, and as a scientist, I can’t wait to get into that pot of gold and find out what these are and what they do.”
Lots of duplicate genes
One of the main challenges the research team faced was the complexity of the tick genome, one of the larger arthropod genomes sequenced to date. Another obstacle was the unusual amount of repetitive DNA, which comprises about 70 percent of the genome, an aspect further explored in a companion paper published in BMC Genomics.
While copies of duplicated genes are often eliminated, the tick genome has retained these repeated genes. Many of them have mutated, suggesting that the two copies of a gene are associated with different functions and give the tick an evolutionary advantage. These duplicated genes could also be targets for new tick control measures.
“We estimate those gene duplications took place probably just after the last Ice Age when tick populations would have been expanding into new habitats,” Hill says.
The project also included the first genome-wide analysis of tick population structure in North America, resolving a long-standing debate over whether deer ticks in the North and South are actually two different species.
According to Hill, the genome offers convincing evidence that the two populations are the same species, despite their genetic differences. Because the majority of Lyme disease cases occur in the North, there might be a genetic component to ticks’ ability to transmit Lyme disease that a comparison of the two populations could illuminate.
“Now we’ve got the script to help us work out what proteins the tick’s genes are making, what these proteins do, and whether we can exploit them to control the tick,” Hill says.
The National Institutes of Health and the US Department of Health and Human Services provided principal funding for the project, which includes 93 authors from 46 institutions.
The principal genome paper was published in Nature Communications.
Source: Purdue University
Dust blows out over the Atlantic Ocean off of western North Africa and reaches to the Cape Verde Islands in this true-color Terra MODIS image from January 18, 2003. The dust appears as a light tan veil over the land and water, slightly blurring everything underneath. Even so, MODIS was able to "see" through the dust and detect a fire (lower right edge), marked in red, in southern Senegal just below the southern border of The Gambia.
On the left side of the image, past the reach of the dust cloud, white water-clouds form interesting patterns as they move with the wind. The low-level winds moving over the Cape Verde Islands create vortex streets in the clouds, which look like lines of swirls and curves moving toward the southwest. The term "streets" refers to the wind lining the clouds up in the same direction, and "vortices" refers to the patterns formed when the winds move around the islands.
Also visible is a bank of closed-cell clouds at the bottom left in the image. These cells, or parcels of air, often occur in roughly hexagonal arrays in a layer of air that behaves like a fluid (as often occurs in the atmosphere) and begins to convect due to heating at the base or cooling at the top. In these closed cell clouds, warm air is rising at their centers and sinking around the edges to create this honeycomb-like pattern.
AS SEEN IN:
"Pocahontas and Captain John Smith - Love and Survival in Jamestown"
AS PLAYED BY:
Jessica May Foss
Pocahontas (c.1595 – March 21, 1617) was a Virginia Indian woman notable for having assisted colonial settlers at Jamestown in present-day Virginia. She was converted to Christianity and married the English settler John Rolfe. After they traveled to London, she became famous in the last year of her life. She was a daughter of Wahunsunacawh, better known as Chief or Emperor Powhatan (to indicate his primacy), who headed a network of tributary tribal nations in the Tidewater region of Virginia (called Tenakomakah by the Powhatan). These tribes made up what is known as the Powhatan Chiefdom and were part of the Algonquian language family.
Religious Practices of the Diegueño Indians, by T.T. Waterman, , at sacred-texts.com
In the beginning there was no earth or land. There was nothing except salt water. This covered everything like a big sea. Two brothers lived under this water. The oldest one was Tcaipakomat. 148
Both of them kept their eyes closed, for the salt would blind them. The oldest brother after awhile went up on top of the salt water and looked around. He could see nothing but water. Soon the younger brother too came up. He opened his eyes on the way and the salt water blinded him. When he got to the top he could see nothing at all, so he went back. When the elder brother saw
that there was nothing, he made first of all little red ants, miskiluwi (or ciracir). They filled the water up thick with their bodies and so made land. Then Tcaipakomat caused certain black birds with flat bills, xanyil, to come into being. There was no sun or light when he made these birds. So they were lost and could not find their roost. So Tcaipakomat took three kinds of clay, red, yellow, and black, and made a round, flat object. This he took in his hand and threw up against the sky. It stuck there. It began to give a dim light. We call it the moon now, halya. The light was so poor that they could not see very far. So Tcaipakomat was not satisfied, for he had it in mind to make people. He took some more clay and made another round, flat object and tossed that up against the other side of the sky. It also stuck there. It made everything light. It is the sun, inyau. Then he took a light-colored piece of clay, mutakwic, and split it up part way. He made a man of it. That is the way he made man. Then he took a rib 149 from the man and made a woman. This woman was Sinyaxau, First Woman. 150 The children of this man and this woman were people, ipai. They lived in the east at a great mountain called Wikami. 151 If you go there now you will hear all kinds of singing in all languages. If you put your ear to the ground you will hear the sound of dancing. This is caused by the spirits of all the dead people. They go back there when they die and dance just as they do here. That is the place where everything was created first.
A big snake lived out in the ocean over in the west. He was called Maihaiowit. 152 He was the same as Tcaipakomat but had taken another form. This big snake had swallowed all learning. All the arts were inside his bodysinging, dancing, basket-making, and all the others. The place where the snake lived was
called Wicuwul (Coronado Islands?) The people at this time at Wikami wished to have an Image Ceremony. They had made a wokeruk, ceremonial house, but did not know what else to do. They could neither dance nor make speeches. One man knew more than the others. He told them they ought to do more than just build the house, so that the people who came after them would have something to do. So they made up their minds to send to Maihaiowit and ask him to give them the dances. Another sea monster, Xamilkotat, was going to swallow everyone who tried to go out to Maihaiowit. So the people said the man who went had better change himself into a bubble.
So the man who had first spoken about the matter changed himself into a bubble. The monster swallowed him anyway. When he found himself down inside he first went north, but he could find no way out. Then he went south, east, and west but could find no way out. Then he reached his hand toward the northhe was a wonderful medicine-manand got a blue flint, awi-haxwa. He broke this so as to get a sharp edge. Then he cut a hole through the monster and got out. Then he went on and on till he got to the place where Maihaiowit lived. The snake had a big circular house, with the door in the top. The man went in there. When the snake saw him he called out:
Mamapitc inyawa maxap meyo (Who-are-you my-house hole comes-in?)
The man answered:
Inyatc eyon enuwi (I it-is, Uncle) .
"Tell me what you want," said the snake.
"I came over from Wikami," said the man. "They are trying to make a wukeruk ceremony there, but they don't know how to sing or dance."
"All right," said the snake, "I will come and teach them. You go ahead and I will come slowly."
So the man went back. The monster came after him reaching from mountain to mountain. He left a great white streak over the country where he went along. You can still see it. The people at Wikami were expecting him, so they cleared a space. He came travelling fast as a snake travels. He went to the wukeruk. First he put his head in. Then he began slowly pulling
his length in after him. He coiled and coiled, but there was no end to his length. After he had been coiling a long time the people became afraid at his size. So they threw fire on top of the house and burned him. When they put the fire on him he burst. All the learning inside of him came flying out. It was scattered all around. Each tribe got some one thing. That is the reason one tribe knows the wildcat dance and another the wukeruk and a third are good at peon. Some people got to be witches or medicine-men (kwusiyai), and orators, but not many.
The head of Maihaiowit was burned to a cinder. The rest of his body went back west. It did not go very far. In the Colorado river there is a great, white ridge of rock. That is his body. A black mountain near by is his head. The people go to the white rock and make spearheads.
After the house was burned up, the people were not satisfied, so they scattered in all directions. The people who went south were the oldest. They are called Akwal, Kwiliyeu, and Axwat. The rocks were still soft when the people scattered abroad over the earth. Wherever one of them stepped he left a footprint. The hollows around in all the rocks are where they set down their loads when they rested. 152a
Even a hasty reading of this myth makes evident its dissimilarity with the ordinary Luiseño and Mohave accounts of creation. It may be well to add in this place that a systematic comparison of the narratives in detail confirms the impression of dissimilarity conveyed at first blush by the general structure and underlying idea of the story. 153 A certain external relation between the myth outlined above and the Mohave story 154 is of course apparent. The mountain Wikami, for instance in the
present story, and the monster Maihaiowit, correspond to the Mohave "Avikwame" and the monster "Humasareha." This relationship does not seem to extend down into the story-elements proper.
It is of course impossible to determine at this time, either from the myth just quoted or from other versions, just what elements enter properly into the Diegueño myth. All the evidence extant, however, points quite unmistakably to the conclusion that as far as the mythology of Creation is concerned, the Diegueño are thoroughly independent of the Shoshonean peoples north of them.
It must be noted in passing that the "meteor" or electric fireball, Diegueño Tcaup or Kwiyaxomar (Cuyahomarr), Luiseño Takwish, Mohave Kwayu, is also prominent in all the mythologies of the Mission area. 155 As a corollary to the theme discussed just above, it is to be observed that the Diegueño give this subject, too, a characteristic treatment of their own. The physical phenomenon which is the basis of the stories is apparently the same everywhere, namely, ball-lightning. A certain confusion has arisen in this regard, owing to the use in various papers of the word "meteor" to describe the manifestation. The presence of this word in the literature of the subject is in all likelihood to be charged to a loose employment of the term, in the first place, by uneducated native informants. The being described in the myths is widely thought to be accompanied by thunderings, to have a "bright" or "beaming" appearance, and to fly about close to the surface of the ground. These traits unmistakably characterize ball-lightning rather than meteors. 156 The terrific action of the electric fireball would, at least in the mind of the present writer, account in part for the terror in which the being is held by all the Mission peoples. However this may be, the Luiseño and Mohave "cannibal meteor" stories offer almost no similarity (outside of concerning the same subject) to the corresponding Diegueño tale. This being, who as we have seen is
the culture hero of the Diegueño, is apparently regarded as a malevolent demon among the Luiseño and Mohave.
It is perhaps too early to say that the Diegueño have no myths other than the Chaup and Creation stories. We may safely conclude however that these two are by far the most important types of myth. It is also safe to say concerning Diegueño mythology that while it seems to be restricted in scope, its affiliations are to be sought, not among the mythology of the Shoshoneans as has at times been suggested, but among that of the peoples, related linguistically to the Diegueño, who live to the south and east.
338:148 Miss DuBois gives Tuchaipa as the elder and Yokomat or Yokomatis as the younger, but says (Journ. Am. Folk-Lore, XXI, 229, 1908; and Congr. Intern. American., XV, Quebec, II, 131, 1906) that the two names are sometimes given in one: Chaipakomat.
339:149 This may be an original element and not a gloss from the Biblical myth. The informant is a "bronco" (unbaptized) Indian, who has never been under the influence of the missionaries.
339:150 From siny, woman, and axau, first; apparently the same as Miss DuBois Sinyohauch (Journ. Am. Folk-Lore, XVII, 222, 1904), in which the final ch is guttural.
339:151 Cf. present series, VIII, 123, 1908; Journ. Am. Folk-Lore, XIX, 315, 1906; Am. Anthropologist, n.s. VII, 627, 1905.
339:152 Journ. Am. Folk-Lore, XIX, 315, 1906; XXI, 235, 1908; Am. Anthr., n.s., VII, 627, 1905.
341:152a A full account of the Yuma creation story has been contributed by Mr. John P. Harrington to the Journal of American Folk-Lore, XXI, 324, 1908. The relationship between the above schematic account and Mr. Harrington's full version of the Yuma story is at once evident.
341:153 See Am. Anthr., n.s. XI, 41-55, 1909. Thirteen prominent story elements are there chosen for study. Of these, it develops that the Mohave and Luiseño myths have nine in common. The Diegueño story, on the other hand, has only three elements in common with the Luiseño, and but two in common with the Mohave. This is quite insignificant, since any two totally unrelated mythologies might to this limited extent be similar.
341:154 Journ. Am. Folk-Lore, XIX, 314, 1906.
342:155 Ibid., 316. Ibid., XVII, 217, 1904. Ibid., XIX, 147, 1906.
342:156 The present writer has never met the word "meteor" in this connection among native informants, and has found the being in question identified both in Luiseño and Diegueño territory with the electric fireball.
What Can I Do to Combat Fatigue?
The best way to combat fatigue is to treat the underlying medical cause. Unfortunately, the exact cause is often unknown, or there may be multiple causes.
There are some treatments that may help improve fatigue caused by an under-active thyroid or anemia. Other causes of fatigue must be managed on an individual basis. The following guidelines should help you combat fatigue.
Keep a diary for one week to identify the time of day when you are either most fatigued or have the most energy. Note what you think may be contributing factors.
Be alert to your personal warning signs of fatigue. Fatigue warning signs may include tired eyes, tired legs, whole-body tiredness, stiff shoulders, decreased energy or a lack of energy, inability to concentrate, weakness or malaise, boredom or lack of motivation, sleepiness, increased irritability, nervousness, anxiety, or impatience.
There are several ways to conserve your energy. Here are some suggestions:
Plan ahead and organize your work
- Change storage of items to reduce trips or reaching.
- Delegate tasks when needed.
- Combine activities and simplify details.
- Balance periods of rest and work.
- Rest before you become fatigued -- frequent, short rests are beneficial.
- A moderate pace is better than rushing through activities.
- Reduce sudden or prolonged strains.
- Alternate sitting and standing.
Practice proper body mechanics
- When sitting, use a chair with good back support. Sit up with your back straight and your shoulders back.
- Adjust the level of your work -- work without bending over.
- When bending to lift something, bend your knees and use your leg muscles to lift, not your back. Do not bend forward at the waist with your knees straight.
- Carry several small loads instead of one large one, or use a cart.
Limit work that requires reaching over your head
- Use long-handled tools.
- Store items lower.
- Delegate activities when possible.
Limit work that increases muscle tension
- Breathe evenly; do not hold your breath.
- Wear comfortable clothes to allow for free and easy breathing.
Identify effects of your environment
- Avoid temperature extremes.
- Eliminate smoke or harmful fumes.
- Avoid long, hot showers or baths.
Prioritize your activities
- Decide what activities are important to you, and what could be delegated.
- Use your energy on important tasks.
|
<urn:uuid:c35c9008-039f-47e5-b89d-9b5f07e9e3d9>
|
CC-MAIN-2016-26
|
http://www.medicinenet.com/cancer_fatigue/page3.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00157-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.88424 | 545 | 2.984375 | 3 |
(In North America) a publicly funded independent school established by teachers, parents, or community groups under the terms of a charter with a local or national authority.
- High Tech High is a public charter school founded by a business coalition that raised more than $6 million for start-up.
- The firm also recently designed its first school - a charter school in Escondido.
- Attracting students - 25 families came to the first open house - was easy compared with getting the charter school approved, hiring teachers and paying rent.
Syllabification: char·ter school
Concorde Airport Noise
Engine noise during takeoff and landing has been a problem ever since turbojet aircraft were first introduced to the airlines. Noise suppression in jet aircraft has been a concern virtually since their inception. This is particularly true with commercial aircraft which, of necessity, must take-off, pass over, and land at or near populated areas. When new airports are built, where possible they are built away from populated areas and in a manner that the take-off and landing patterns avoid causing noise problems. In airport facilities such as San Diego, Calif. where the airport is very near the city, elaborate measures are mandated in an attempt to reduce the noise impact on the urban area. Even Los Angeles, Calif. has restrictions on aircraft performance upon take-off until the immediate urban area has been sufficiently cleared in distance and altitude. Many airports require engines to be throttled back as soon as it is safe to do so and restrict the rate of climb in the immediate vicinity of the airport.
Exhaust noise led to the development of turbojet-noise suppressors and also contributed later to the introduction of dual-flow (turbofan) engines having reduced exhaust noise. Noise reduction efforts were then oriented towards attenuation of acoustical emission from the fan and compressor.
The situation changed once SST studies began, and the exhaust noise of the required high-thrust, low-frontal-area engines is again a major factor. Unfortunately, the presence of a noise suppressor in the exhaust produces thrust losses which, in general, become greater with increased acoustical effectiveness. The thrust loss is not great when the suppressor is designed for an engine with only a convergent primary nozzle. But in the case of an ejector or convergent-divergent nozzle, a considerable thrust loss of 10 to 15% can occur in the subsonic and transonic stages of flight.
The Concorde exhaust noise suppressor makes a radial injection of mixing air in the primary jet stream using ten lobes in the form of triangular prisms hinged to the divergent section of the ejector nozzle. When suppression is no longer necessary, a feedback linkage allows the lobes to retract so as to eliminate all thrust losses in the cruise position. Model tests were used to develop detailed geometry of the design.
A noise evaluation process began formally with an advance notice of proposed rulemaking in 1970, and involved three notices of proposed rulemaking ("NPRM"), numerous public hearings, demonstration of the Concorde at Dulles and J.F.K. Airports, the preparation of two comprehensive environmental impact statements, and the consideration of over 11,300 comments from airport neighbors and other concerned citizens, airport proprietors, aircraft operators, aircraft manufacturers, and Federal, State, and local governmental agencies. These comments greatly assisted the effort to develop requirements that are balanced in their responsiveness to divergent public concerns, and are effective in terms of public relief from the noise of civil supersonic air transportation. These rules were developed over the course of a year in close consultation between Secretary of Transportation Brock Adams and FAA Administrator Langhorne Bond. The rules reflected the Secretary's responsibility for overall national transportation policy and his concern that these final rules properly take into account all aspects of that policy - including environmental, economic, and international aviation considerations.
On August 4, 1970, the FAA issued advance notice of proposed rulemaking No. 70-33, published in the Federal Register (35 FR 12555) on August 6, 1970. That notice initiated the public process of determining the nature and scope of the factors that must be considered in the development of noise ceilings for SST's. Notice No. 70-33 requested public comment on a number of issues and stated FAA's intent to ensure that SST's like subsonic airplanes, are subject to type certification standards that require the application of all economically reasonable noise reduction technology. Many public comments were received in response to this early invitation to public participation in the FAA's rulemaking on this matter and were considered in the adoption of these rules.
In early 1975, EPA proposed noise rules for supersonic transports (SSTs) applying FAA's standards for subsonic jets to future SSTs. On February 27, 1975, EPA transmitted to the FAA proposed regulations for the control and abatement of SST noise. These proposals were developed and submitted pursuant to sec. 611(c)(1) of the Federal Aviation Act of 1958, as amended. The 1975 EPA proposal would have required: (1) future design SSTs to meet noise standards applicable to new type subsonic airplanes; (2) existing types of supersonic airplanes (the Concorde and Russian TU-144) upon which "substantive productive effort" had not commenced before the date of the EPA Notice to meet the Stage 2 requirements of Part 36; and (3) SSTs already under production (at least 9, possibly 16, Concordes and an unknown number of TU-144's) to be treated separately.
The Port Authority of New York and New Jersey, the operator of JFK, banned the Concorde from landing because of its higher noise levels and low frequency vibrations. The British and French airlines subsequently filed suit to invalidate the ban. Numerous lawsuits ensued, but landings were ultimately approved.
This would have effectively banned the Anglo-French Concorde, so in January 1976, EPA reversed its stand, exempting the Concorde. President Ford responded to the controversial question of whether the Concorde should be permitted to operate in the U.S. by ordering a thorough investigation and study. The Secretary of Transportation decided to allow Concorde landings at Kennedy and Dulles Airports for a 16-month trial period.
On application of British Airways and Air France to operate the Concorde into the United States, Secretary of Transportation William T. Coleman, Jr., completed extensive hearings and authorized a 16-month trial of Concorde operations at New York and Washington, DC, Airports. On February 4, 1976, deciding an issue that had rekindled America's own SST debate, Coleman permitted, for a 16-month demonstration period, a limited number of Concorde supersonic flights between Europe and Dulles Airport. "Through these operations . . . we can get specific technical information . . . on . . . noise or any interference with the environment . . . and at the end of that 16-month trial period there will be an evaluation made by the Secretary of Transportation . . . But the only way you can find out is to actually undertake them on a limited basis for a limited period of time, and I fully support Secretary Coleman's decision" President Ford said on April 23, 1976.
On May 24, 1976, following a 3-hour 35-minute flight from London, the first Concorde supersonic commercial airliner landed at Dulles Airport. The French Concorde arrived from Paris approximately two minutes later. Although Concorde operations accounted for less than one percent of the take-offs and landings at Dulles International Airport, they resulted in 1,387 complaints or 79 percent of the total noise complaints received. The greatest percentage of Concorde complaints concerned take-off. Complaints were also made about structural vibrations. Studies of low frequency noise vibrations during the Dulles test period showed that, although the vibrations generated by the Concorde were greater than those of subsonic aircraft, they did not result in structural damage. Monitoring confirmed that, compared to the loudest jet subsonic transports, the Concorde was twice as noisy on takeoff and approximately as loud on approach. The 100 EPNdB contour from a Concorde departure may extend 20 miles or more from the start of takeoff roll. In terms of practical effects, outdoor communication at a distance of 2 feet could require shouting for those persons within the 100 EPNdB single-event contour.
On September 23, 1977, at the end of its 16-month trial at Dulles Airport, Adams proposed that the Concorde SST could land in eleven additional U.S. cities, unless banned by fair and nondiscriminatory local standards. In view of its exceptional loudness, however, he retained the ban on Concorde operations between 10:00 p.m. and 7:00 a.m., as well as the absolute prohibition on supersonic flight over land. On October 17, 1977, the U.S. Supreme Court lifted the ban by New York's JFK Airport on the Concorde SST, clearing the way for immediate trial flights. And on November 22, 1977, the first Concorde flights landed at New York City's John F. Kennedy International Airport.
Except for the 16 Concordes which were expected to have flight time before January 1, 1980, all SSTs are required by these rules to comply with the noise limits of Part 36 in effect on January 1, 1977 ("Stage 2 noise limits") in order to operate in the United States. The first 16 Concordes, the maximum number that Britain and France were expected to manufacture before January 1, 1980, are excepted from compliance with the Stage 2 noise limits of Part 36. There was no expiration date on this exception. However, under these rules, the excepted Concordes may not be operated on flights scheduled, or otherwise planned, for takeoff or landing at U.S. airports after 10 p.m. and before 7 a.m. local time. Moreover, these rules subject the excepted Concordes that operate in the U.S. to an "acoustical change" requirement identical to that applied to U.S. type certificated subsonic airplanes that have not been shown to comply with Stage 2 noise limits. Like those subsonic airplanes (which are called "Stage 1 airplanes" in Part 36), the noncomplying Concordes may not be operated in the U.S. if their design is changed in a way that increases their noise levels.
English Bill of Rights (1 Will. and Mary, sess. 2, c. 2): "10. That excessive baile ought not to be required nor excessive fines imposed nor cruell and unusuall punishment inflicted."

United States Bill of Rights (U.S. Constitution, Amendment VIII): "VIII. Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted."
Thus, those who assert that simple corporal or capital punishment is "cruel and unusual" within the meaning of the 8th amendment -- as understood by those who drafted and ratified it -- are ignorant fools.
33,000 sharks, 2,000 dolphins & 2,000 turtles killed to boost beach tourism in South Africa
16/06/2009
Remove the Nets: Join the Shark Angels' Campaign against Shark Nets!
June 2009. It is difficult to believe in this day in age, with all that we know about sharks' plummeting populations, their critical role in ocean ecosystems and the minimal risk they pose to humans, that the archaic and destructive practice of installing shark nets for "bather protection" still exists. But in KwaZulu-Natal (KZN), South Africa, a province ironically known around the world as one of the few places left where sharks and the ecosystems they keep healthy still thrive, untold numbers of harmless sharks, turtles, dolphins, and rays meet an untimely and senseless death each year by entanglement in the approximately 28 kilometres of ‘shark' nets that are installed just off the beaches.
What are shark nets?
Shark nets are essentially gill nets: long rectangular nylon mesh nets, 200-300 metres in length, that are positioned near the surface of the water and kept afloat with buoys. Sharks swim into these nets and are caught by their gills. The squares of mesh are designed to be just large enough for sharks to become entangled, but not escape. The more a shark or any other animal struggles in these nets, the more hopeless their situation becomes, and the more impossible their chances of escape and survival. The vast majority of these animals die an agonizing death by suffocation. Gill nets are widely considered to be one of the greatest threats to the survival of many species of marine animals.
In South Africa, the shark nets are installed in tiered patterns by the KwaZulu-Natal Sharks Board (KZNSB). Just beneath the surface, they do not fully extend to either the top or the bottom and do not even come close to fully enclosing the beach areas. The result is that sharks can easily swim around or under the nets and into the shallow waters in which humans swim and surf. In fact, the KZNSB acknowledges on its own website that at least 33% of the sharks killed in these nets were actually on their way OUT from the beaches, rather than on their way in, and other sources estimate that this number is closer to 70%.
Bait is set to attract sharks
You see, the goal is not to provide a physical barrier to keep sharks away from the beaches, but rather to control shark populations by culling them. In many cases, the KZNSB places baited drumlines just outside the shark nets, which are designed to attract sharks in towards the beaches and kill them, either by biting the baited hooks on the drumlines or by entanglement in the nearby gill nets.
Nets installed in Marine Protected Areas!
The process is entirely unselective, with nets installed all along the coast, including in Marine Protected Areas! The sole purpose of these nets is to kill all sharks in the area, including highly endangered species that would otherwise enjoy stringent legal protection, such as whale sharks and the great white shark.
According to the KZNSB's own website, "The Marine Living Resources Act (Act 18 of 1998) controls the exploitation of marine plants and animals in South African waters. . . . The great white shark is totally protected; in 1991 South Africa being the first country in the world to do so." And yet, the KZNSB, which is governed by the KZN Department of Arts, Culture and Tourism, is exempted from these important conservation regulations in the interest of making tourists feel safe.
Brutal, indiscriminate killers
Sea Shepherd's Director of Shark Conservation, Kim McCoy, a founding member of the Shark Angels alliance, was outraged to witness first-hand the carnage caused by South African shark nets. "Sharks and other animals don't stand a chance against these nets," said McCoy. "They are brutal, indiscriminate killers designed to systematically cull a species for no other reason than to boost tourism by giving beachgoers a false sense of security against a severely sensationalized threat."
Shark Angels co-founder, Julie Andersen, who frequently leads groups of people on diving trips with the tiger sharks of Aliwal Shoal, clearly illustrates the irony of using shark nets to increase tourism, noting the number of tourists who come to South Africa each year specifically to dive with sharks. "Sharks in South Africa contribute a significant amount of revenue to the South African economy and provide countless jobs," said Andersen. "Live sharks mean tourists, jobs, and money. And that is recurring income-not the one-time income generated when a shark is killed."
33,000 sharks, 2,000 turtles, 8,000 rays and 2,000 dolphins killed in shark nets
Over the last three decades, more than 33,000 sharks have been killed in the KZNSB shark nets. And if that's not alarming enough, 2,000+ turtles, 8,000+ rays, and 2,000+ dolphins were also ensnared and killed.
In addition to the countless deaths of sharks and other species caused directly by the shark nets, their impact on our collective psyches is damaging to shark conservation efforts worldwide. The very existence of shark nets perpetuates the myth that sharks are bloodthirsty man-eaters, and that humans require some form of protection from them. The installation of shark nets reinforces our misguided and often irrational fears of sharks by legitimizing these concerns as valid. This in turn fuels the biggest issue faced in shark conservation: the public's apathy, or even loathing, towards sharks.
It could be said there was once a time and a place for shark nets. Perhaps decades ago, when the public knew little about sharks, the fear of shark attacks was running high, and shark populations were far healthier than they are today. The practice of installing shark nets in South Africa began in 1952, when little was known about sharks, and humans had yet to spend the next 50+ years ravaging our oceans, causing irreparable damage and the collapse of species after species. The public wanted "protection" from sharks, and shark nets served this purpose.
In a recent study, which appeared in the October 2005 issue of the Journal of Applied Physiology, researchers at Duke University investigated what amount and what intensity of exercise is needed to prevent gaining abdominal fat.
The 175 participants were sedentary, overweight adults with bad lipid levels. They were randomly assigned to a control group for six months or to low amount/moderate intensity (the equivalent of walking 12 miles per week), low amount/vigorous intensity (like jogging 12 miles per week) or high amount/vigorous intensity activity (like jogging 20 miles per week) for eight weeks.
Low amounts of exercise prevented further gain of visceral fat regardless of exercise intensity, while high amount/vigorous exercise decreased both visceral and subcutaneous fat. The control group gained significant amounts of visceral fat.
by Staff Writers
Brisbane, Australia (SPX) Nov 15, 2012
Australian marine scientists have unearthed evidence of an historic coral collapse in Queensland's Palm Islands following development on the nearby mainland. Cores taken through the coral reef at Pelorus Island confirm a healthy community of branching Acropora corals flourished for centuries before European settlement of the area, despite frequent floods and cyclone events. Then, between 1920 and 1955, the branching Acropora collapsed and failed to recover.
Scientists from the ARC Centre of Excellence for Coral Reef Studies at the University of Queensland say the rapid collapse of the coral community is potential evidence of the link between man-made changes in water quality and the loss of corals on the Great Barrier Reef.
It adds weight to evidence that human activity is implicated in the recent loss of up to half of the corals on the Great Barrier Reef, says Professor John Pandolfi of CoECRS and UQ.
The destruction of branching corals coincided with widespread land clearing for grazing and agriculture which took place in the nearby Burdekin River catchment in the late 19th Century, causing an increase in the amount of mud and nutrients flowing into the GBR lagoon, says the lead author of a new study on the collapse, Dr George Roff, of CoECRS and UQ.
"Corals have always died from natural events such as floods and cyclones, but historically have shown rapid recovery following disturbance. Our results suggest that the chronic influence of European settlement on the Queensland coastline may have reduced the corals' ability to bounce back from these natural disturbances," he says.
The team took cores from dead coral beds on the western side of Pelorus Island and then analysed their coral species composition and their age, using high-precision uranium dating methods pioneered by a team led by one of the study's co-authors, Jian-xin Zhao, at the University of Queensland's Radio Isotope Facility. They then aligned this with records of cyclones, floods and sea surface temperatures over the same period.
"Our results imply ... a previously undetected historical collapse in coral communities coinciding with increased sediment and nutrient loading following European settlement of the Queensland coastline," the researchers report in their paper.
"Significantly, this collapse occurred before the onset of the large-scale coral bleaching episodes seen in recent decades, and also before detailed surveys of GBR coral began in the 1980s.
"And, even more significantly, we found no similar collapse occurring at any time in the previous 1700 years covered by our cores. Throughout this period the branching corals continued to flourish - despite all the cyclones and natural impacts they endured."
At two sites the Acropora corals vanished completely while at a third there was a marked shift in coral species from Acropora to Pavona, which the researchers say parallels similar observations of human impacts in the Caribbean.
"On a global scale, our results are consistent with a recent report from the Caribbean region, where land use changes prior to 1960 were implicated in a significant decline in Acropora corals in near-shore reefs."
The research has raised another realistic possibility - that current coral surveys may significantly underestimate the possibility of major 'unseen' shifts such as these having taken place in the period before effective coral records began, the researchers suggest. In other words, the GBR may be more degraded than it appears to today's eyes.
"We know that at some sites in the region, branching Acropora was the dominant reef builder until recent times. This raises the question of why some inshore reefs appear resilient, while others failed to recover from disturbance" says Dr Roff.
"The research underlines that there is a very strong link between what we do on land - and what will happen to the Great Barrier Reef in future. It encourages us to take greater and more rapid steps to control runoff and other impacts on land," says Prof. Pandolfi.
Their paper "Palaeoecological evidence of a historical collapse of corals at Pelorus Island, inshore Great Barrier Reef, following European settlement" by George Roff, Tara R. Clark, Claire Reymond, Jian-xin Zhao, Yuexing Feng, Laurence J. McCook, Terence J. Done and John M. Pandolfi appears in the latest issue of Proceedings of the Royal Society B.
ARC Centre Of Excellence For Coral Reef Studies
Kangaroos and their kin are characterized by large, powerful hind limbs. (This is the inspiration for the scientific name of the family: Macropodidae.) Some of the large kangaroo species are capable of speeds up to 88 km/h (55 MPH) for short distances. Species in this family can be as small as a hare or as big as an adult human.
Found primarily in Australia (including Tasmania) and New Guinea, kangaroos occupy the same ecological niche as large grazing animals such as antelopes, deer and bison do in North America, South America and Africa.
Kangaroo teeth are particularly adapted to their diet of tough grasses. The grinding molars erupt in slow succession over the life of the animal and move forward along the jaw, eventually falling out. This process allows the kangaroo to cope with its highly abrasive diet by bringing new teeth into action over time. If the teeth were retained for a long period of time, eventually the grinding required to chew the plants would wear the teeth down.
Kangaroos are not greatly bothered by predators, apart from humans and occasional dingoes. As a defensive tactic, a larger kangaroo will often lead its pursuer into water where, standing submerged to the chest, the kangaroo will attempt to drown the attacker under water. In other adversarial circumstances a kangaroo will back against a tree and kick with clawed hind feet at the adversary, sometimes with enough force to kill an adult human.
Referencing is an important skill not only for academic work, but also when writing reports for your workplace. A large number of employers provide standardised templates to ensure employees properly reference their work.
The most common form of referencing is known as the Harvard system and in the resources section there is an Excel spreadsheet for easy referencing.
1. Print out a referencing sheet for each student
2. Bring a variety of books to class / tell the library that your students will be visiting
1. Elicit students favourite subjects
2. Elicit how they would buy a book on their favourite subject
3. Write down example format of a book to reference on the board
4. Give each student a sheet
5. Ask them to go to the library and reference five books using the sheet. (Or use books in class) Perhaps make it a race if that isn't too disruptive.
This then leads into a piece of writing or a quiz on the Harvard system.
In our 24-hour society, it’s no surprise that there’s a sleep condition called Excessive Daytime Sleepiness (EDS). This condition, which is exactly what it sounds like, affects up to 13% of the population and is growing — especially among young people. The reason: Self-imposed sleep deprivation.
The primary causes of sleep deprivation are advanced technology and electronics as well as work and family demands. The availability of television, radio, the internet, smartphones, tablets, texting and video gaming provides constant stimuli that may interfere with sleep. Making a habit of staying up late (or getting up very early) and "pulling all-nighters" is viewed as a badge of honor in some circles, but provides no health benefit. Ever notice how many people fall asleep at their desks or on a plane or train during the day? People who get an adequate amount of sleep should not fall asleep during daytime hours.
How much sleep do you need?
Sleep requirements vary from person to person, can change throughout one’s lifetime, and can be influenced by age, gender and genetic factors. For example, newborns need more sleep than adults. Some adults function well with 7 hours of sleep while others need 9 hours. The general rule of thumb is to identify the amount of sleep that you need to function properly during daytime hours. If you sleep for 6 hours and feel exhausted, but feel refreshed with 8 hours of sleep, then this would indicate the amount of sleep that you require.
Sleep deprivation and your health
Lack of sleep can have a significant impact on our longevity and quality of life as it is commonly linked to:
- Increased risk of heart disease and diabetes;
- Increased risk for depression, anxiety and substance abuse;
- Poor performance at work or school;
- Decreased attention, impaired memory or cognition;
- Delayed reaction time and sub-par sports performance;
- Increased risk for weight gain and obesity;
- Increased risk of life-threatening driving, domestic, or work-related accidents.
Getting a better night’s sleep
Sleep occupies nearly a third of our lives, yet we continue to sacrifice and undervalue it. Take the first step to improving your sleep with these tips:
- Keep a regular sleep routine. Try to go to bed and wake up the same time each day---even on weekends.
- Make sleep a priority. Plan to get enough sleep every night.
- Exercise regularly.
- Avoid caffeine and alcohol at least several hours before bedtime.
- Quit smoking.
- Try not to eat within 3 hours of bedtime.
- Make the bedroom a place for sleep and sex only. Keep things that prevent you from sleeping in another room: TV, tablets, computer, e-reader, Smartphone
- Create a comfortable and relaxing sleep environment. Use a comfortable mattress and pillow. Keep the bedroom quiet, dark and cool.
- Relaxation techniques. If you have a hard time falling asleep or “shutting your mind down”, try to have a routine of relaxation techniques before bedtime. For example, a warm bath or meditation.
- Make lists. If you tend to worry about things that you need to accomplish the next day, make a list. Instead of worrying all night, set a goal for things that need to be done. Once they are done, cross them off the list and go to bed. Try to have realistic expectations of what can be accomplished each day.
If you are still tired no matter how long you sleep each night, you may have a sleep disorder. Snoring, gasping, choking and abnormal breathing patterns at night may be a sign of sleep apnea. If you suffer from fatigue, snoring, sleep apnea, or insomnia and these tips have not relieved your symptoms, talk to your health care professional.
Sleep is critical to your health and well-being. Making it a priority will improve your health, mood, memory and daily performance.
Dr. Donald M. Sesso, the Director of The Pennsylvania Snoring and Sleep Institute, is the only triple certified snoring doctor in the tri-state area. He specializes in the surgical treatment of obstructive sleep apnea and sinus disorders and is a Board Certified ENT Otolaryngologist in Head and Neck Surgery, Facial Plastic Surgery, and Sleep Medicine.
Medusa is a character in Greek mythology. Her story has been told and retold by ancient and modern writers and artists.
Medusa has been depicted in the visual arts for centuries. Many interpretations surround the myth, including one by Sigmund Freud. For the ancients, an image of Medusa's head was a device for averting evil. This device was called the Gorgoneion.
Myth
Medusa was one of three sisters. They were known as The Gorgons. Medusa's sisters were Stheno and Euryale. Medusa was mortal, but her sisters were immortal. They were all children of the sea deities, Phorkys and his sister Keto.
Any man or animal who looked upon her was turned to stone.
The hero Perseus beheaded Medusa. After using the dreadful head to defeat his enemies, he presented it to Athena. She put it on her shield.
Medusa in art
Medusa was a subject for ancient vase painters, mosaicists, and sculptors. She appears on the breastplate of Alexander the Great in the Alexander Mosaic at the House of the Faun in Pompeii, Italy (about 200 BC).
Baroque depictions include Head of Medusa by Peter Paul Rubens (1618); the marble bust Medusa by Bernini (1630s); and Perseus Turning Phineus and his Followers to Stone, an oil painting by Luca Giordano from the early 1680s.
Romantic and modern depictions include Perseus with the Head of Medusa by Antonio Canova (1801) and Perseus, a sculpture by Salvador Dalí. Twentieth-century artists who tackled the Medusa theme include Paul Klee, John Singer Sargent, Pablo Picasso, Pierre et Gilles, and Auguste Rodin.
Published: Jul 1979
The spread of fire from a compartment is considered as spread through a window, through a doorway, through openings associated with entry conduits, or ultimately through openings caused by deterioration of the structure shell. The literature is reviewed for each mode of spread. Emphasis is placed on the interaction of building geometry and building materials and their relationship to fire spread from a compartment. Examples are given in which mathematical design procedures or analysis could determine a fire safe design. Recommendations are presented for continued studies and practices in this area.
compartments, design, doorways, fire spread, literature review, services, windows
Mechanical engineer, Center for Fire Research, National Engineering Laboratory, National Bureau of Standards, Washington, D.C.
In days of yore, one could mine Bitcoin without much more than an AMD graphics card. Now, without specialized hardware it’s unlikely that you’ll make any appreciable headway in the bitcoin world. This latest project, however, goes completely in the other direction: [Ken] programmed a 55-year-old IBM mainframe to mine Bitcoin. Note that this is technically the most powerful rig ever made… if you consider the power usage per hash.
Engineering wordplay aside, the project is really quite fascinating. [Ken] goes into great detail about how Bitcoin mining actually works, how to program an assembly program for an IBM 1401 via punch cards, and even a section about networking a computer from this era. (Bonus points if he can get retro.hackaday.com to load!) The IBM boasts some impressive stats for the era as well: It can store up to 16,000 characters in memory and uses binary-coded decimal. All great things if you are running financial software in the early ’60s or demonstrating Bitcoin in the mid-2010s!
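For readers who want the gist of what the 1401 is actually grinding through: a miner repeatedly hashes an 80-byte block header with SHA-256 applied twice, then checks whether the result falls below the network's target. The sketch below is not [Ken]'s 1401 assembly; it is a minimal Python illustration, and the toy header prefix and target are made-up values chosen so the loop finishes quickly.

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int, max_nonce: int = 2**32):
    """Append nonces to a 76-byte header prefix until the double
    SHA-256 of the full 80-byte header falls below the target."""
    for nonce in range(max_nonce):
        header = header_prefix + struct.pack("<I", nonce)  # 4-byte little-endian nonce
        digest = double_sha256(header)
        # The hash is compared against the target as a little-endian integer.
        if int.from_bytes(digest, "little") < target:
            return nonce, digest.hex()
    return None

# Toy values only: a fake header prefix and a deliberately easy target.
fake_header_prefix = b"\x00" * 76
easy_target = 1 << 240
print(mine(fake_header_prefix, easy_target))
```

On real hardware the only differences are the header contents and the target, both of which the pool or network supplies.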
If it wasn’t immediately obvious, this rig will probably never mine a block. At 80 seconds per hash, it would take longer than the lifetime of the universe to do, but it is quite a feat of computer science to demonstrate that it is technically possible. This isn’t the first time we’ve seen one of [Ken]’s mainframe projects, and hopefully there are more gems to come!
The DEFCON badge this year was an impressive piece of hardware, complete with mind-bending puzzles, cap sense buttons, LEDs, and of course a Parallax Propeller. [mike] thought a chip as cool as the Propeller should be put to better use than just sitting around until next year so he turned it into a Bitcoin miner, netting him an astonishing 40 hashes per second.
Mining Bitcoins on hardware that doesn’t have much processing power to begin with (at least compared to the FPGAs and ASIC miners commonly used) meant [mike] would have to find some interesting ways to compute the SHA256 hashes that mining requires. He turned to RetroMiner, the Bitcoin miner made for an original Nintendo. Like the NES miner, [mike] is offloading the communication with the Bitcoin network to a host computer, but all of the actual math is handled by a single core on the Propeller.
Saving one core for communication with the host computer, a DEFCON badge could conceivably manage 280 hashes/second, meaning the processing power of all the badges made for DEFCON is about equal to a seven-year-old graphics card.
After hearing about cryptocurrencies like Bitcoin, Litecoin, and Dogecoin, [Eric] decided he would have a go at designing his own mining rig. The goals of the project were to have a self-contained and stackable mining rig that had all the parts easily accessible. The result is this awesome computer enclosure, where GPU mining and traditional woodworking collide.
For mining all those coins, [Eric] is using five R9 280x GPUs. That’s an impressive amount of processing power that ended up being too much for the 1500W power supply he initially planned to use. With a few tweaks, though, he’s managing about 2.8 Mh/s out of his rig, earning him enough dogecoins to take him to the moon.
In the video below, you can see [Eric] building his rig out of 4×8 framing lumber. This isn’t a slipshod enclosure; [Eric] built this thing correctly by running the boards through a jointer, doing proper box joints with this screw and gear-based jig, and other proper woodworking techniques we don’t usually see.
[Adrian] came across a treasure trove of 507 mechanical device designs. It didn’t seem quite right for a Retrotechtacular post, but we wanted to share it as it’s a great place to come up with ideas for your next Rube Goldberg machine.
Biking with headphones is dangerous. That’s why [J.R.] built a handlebar enclosure for his Jambox Bluetooth speaker.
While dumpster diving [Mike] found a Macbook pro. It was missing a few things, like a keyboard, touchpad, battery, ram, and storage. He borrowed a power supply to test it out but without the keyboard there’s no power button. He figured out the traces on the motherboard which turn it on when shorted.
[Mateusz] want to let us know about the Hercules LaunchPad. Like the other TI Launchpad offerings it’s an all-in-one dev board. The Hercules line features a couple of flavors of dual-core ARM chips. Can you believe the dev boards you can get for under $20 these days?
After seeing the ammo can sound system about a month ago [Ilpo] was inspired to share his ammo can PC case with us.
And finally, here’s a way to display your Bitcoin mining rig for all to see. This system was laid out in an antique frame and hung on the wall.
The name of the game in mining Bitcoins isn't CPUs, GPUs, or even FPGAs. Now, hardcore miners are moving on to custom ASIC chips like the Block Erupter. For around $100 USD, you too can mine Bitcoins at 300 MH/s with 2.5 Watts of power and a single USB port. This speed isn't enough for some people, like [Jeremy], who overclocked his Block Erupter to nearly twice the speed.
[Jeremy] begins his tutorial with a teardown of the Block Erupter hardware. Inside, he found a custom ASIC chip, an ATtiny2313, a USB UART converter, and a voltage regulator for the ASIC. By changing out the 12 MHz crystal connected to the ASIC and fiddling with the voltage with a trim pot, [Jeremy] was able to overclock the ASIC core from 336 MHz to 560 MHz. Effectively, he's running two Block Erupters for the price of one, with the potential to actually make back the purchase price of his hardware.
It must be noted the 560 MHz figure comes from replacing the 12 MHz crystal with a 20 MHz one, and this mod only lasted about 20 minutes on [Jeremy]’s bench until the magic blue smoke was released. He recommends a 14 or 16 MHz crystal, netting a new speed of either 392 MHz or 448 MHz for a stable mod.
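The overclock figures follow directly from the crystal swap if you assume the ASIC core clock scales linearly with the reference crystal, which is how the numbers in the write-up line up. The 28x multiplier below is inferred from the stock figures, not taken from a datasheet.

```python
STOCK_CRYSTAL_MHZ = 12
STOCK_CORE_MHZ = 336
MULTIPLIER = STOCK_CORE_MHZ / STOCK_CRYSTAL_MHZ   # 28x, inferred from the stock clocks

def core_clock_mhz(crystal_mhz: float) -> float:
    """Core clock assuming it scales linearly with the crystal frequency."""
    return crystal_mhz * MULTIPLIER

for crystal in (12, 14, 16, 20):
    print(f"{crystal:2d} MHz crystal -> {core_clock_mhz(crystal):.0f} MHz core")
# 12 -> 336, 14 -> 392, 16 -> 448, 20 -> 560 (the 20 MHz run released the magic smoke)
```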
Mining bitcoins is becoming a fool's errand, but there's always some new piece of hardware coming out that allows those hard-core miners to keep ahead of the curve. One such piece of hardware is the new breed of custom ASIC devices that are just as fast as an FPGA while being much less expensive. A lot of these ASIC devices come in interesting packages that look just like a large USB thumb drive. Of course this is the perfect opportunity to show off what the Raspberry Pi can do by mining Bitcoins at rates comparable to the best graphics cards used in mining today.
The Raspberry Pi simply doesn’t have enough horsepower to mine bitcoins at any worthwhile rate. There are, however, USB ASIC devices that will mine for you at about the same speed as a high-end graphics card. Since multiple ASIC devices can be controlled through a USB hub, it’s simply a matter of plugging a USB hub into a Raspberry Pi, loading up CGminer, and letting your new PiMiner loose on a mining pool.
The Adafruit Pi Miner uses one of their really cool LCD character displays and keypad to display the current mining rate, accepted shares, and enough information for you to calculate how long it will take to break even with your Pi powered mining rig. How long that will be for this four device rig we’ll leave to the comments section.
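If you want to run the break-even math the LCD is hinting at, it only takes your hardware cost, hash rate, and whatever the pool currently pays per unit of hashing. Every number in the example below is a placeholder; payout per GH/s moves with difficulty and exchange rate, so plug in current values rather than trusting these.

```python
def days_to_break_even(hardware_cost_usd: float,
                       hashrate_ghs: float,
                       usd_per_ghs_per_day: float,
                       power_watts: float = 0.0,
                       usd_per_kwh: float = 0.12) -> float:
    """Days until cumulative mining profit covers the hardware cost."""
    daily_revenue = hashrate_ghs * usd_per_ghs_per_day
    daily_power_cost = (power_watts / 1000.0) * 24 * usd_per_kwh
    daily_profit = daily_revenue - daily_power_cost
    if daily_profit <= 0:
        return float("inf")   # the rig never pays for itself
    return hardware_cost_usd / daily_profit

# Placeholder figures for a Pi, a hub, and a handful of USB ASIC sticks.
print(days_to_break_even(hardware_cost_usd=250,
                         hashrate_ghs=1.3,
                         usd_per_ghs_per_day=0.05,
                         power_watts=15))
```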
His friends know [gbg] as an aficionado of just about anything with a 6502 processor in it. He’s also interested in bitcoins. A while back, a friend asked if it would be possible to mine bitcoins with an old Nintendo Entertainment System. While this suggestion was made in jest, it’s not one of those ideas anyone can let go of easily. Yes, it is possible to mine bitcoins with an NES, and [gbg] is here to show us how.
Mining bitcoins is simply just performing a SHA256 hash on a random value from the bitcoin network and relaying the result of that calculation back to the Internet. Of course this requires an Internet to NES bridge; [gbg] brought in a Raspberry Pi for this task. There’s the problem of actually getting data into an NES, though, and that’s something only a USB CopyNES can handle. After doing some 32-bit math, the NES sends this out to the Raspberry Pi and onto the bitcoin network.
When you consider that even a high-end gaming computer has little chance of mining a bitcoin in any reasonable amount of time, there’s little chance RetroMiner will ever be able to mine a bitcoin. It’s all random, though, so while it’s possible, we’ll just appreciate the awesome build for now.
The Man Who Can't Get His Lies StraightThe man is Mitt Romney and the lies concern two stories he has told about his and his father's engagement in the civil rights movement. In a speech earlier this month, Romney said he "saw" his father, Michigan Gov. George Romney, march with Martin Luther King, Jr., the famed civil rights leader.
But fact-checkers have challenged this claim and now Romney has admitted that he used the word "saw" only a figurative sense. (That's exactly what we thought! We always use "saw" figuratively, never literally. Who would?)
To make matters worse for Mitt, the Boston Globe is reporting that in a 1978 interview with the paper, Mitt claimed that he and his father had marched with King. A Romney spokesman has acknowledged that this statement was also untrue.
The civil rights issue is a tricky one for Romney, member of a prominent Mormon family, because the LDS church did not allow blacks to serve in church leadership until 1978.
|
<urn:uuid:4c9c862f-cb4e-4626-9534-98a914ee2bde>
|
CC-MAIN-2016-26
|
http://alternativetulsa.blogspot.com/2007/12/romneys-stories-dont-pass-truth-test.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00174-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.981529 | 205 | 2.625 | 3 |
- Which is more musical, a truck passing by a factory or a truck passing by a music school?
- John Cage, "Communication", the third of the Composition as a Process lectures given in Darmstadt in 1958 and published in Silence. Many of Cage's works use sounds traditionally regarded as unmusical (radios not tuned to any particular station, for instance): he really did believe that the sound of a truck and the sounds made in a factory had just as much musical worth as the sounds made in a music school. There is also a suggestion expressed in the quote that in order to determine the artistic worth of something, it is necessary to examine the context in which it exists.
- A sound does not view itself as thought, as ought, as needing another sound for its elucidation, as etc.; it has not time for any consideration--it is occupied with the performance of its characteristics: before it has died away it must have made perfectly exact its frequency, its loudness, its length, its overtone structure, the precise morphology of these and of itself.
- Master: It's everywhere. Listen, listen, listen. Here come the drums. Here come the drums.
- Music has no subject beyond the combinations of notes we hear, for music speaks not only by means of sounds, it speaks nothing but sound.
- You know the sound of two hands clapping; tell me, what is the sound of one hand?
- His feet were like fine copper when glowing in a furnace; and his voice was as the sound of many waters.
- Whenever you wash dishes, cook, or clean, if you make no sound, this is smartness itself. A person who enters a house and makes a lot of noise is revealing a lack of spirituality; even cats and dogs do not make unnecessary sounds, and man as he naturally is does not make any either.
- Michio Kushi (1926), Spiritual Journey (1994), p. 4.
- Doctor: It plays music. What's the point of that? Oh, with music, you can dance to it, sing with it, fall in love to it. Unless you're a Dalek of course. Then it's all just noise.
- Doctor Who Evolution of the Daleks written by Helen Raynor
- Could we not imagine that noise...is itself nothing more than the sum of a multitude of different sounds which are being heard simultaneously?
- Jean-Jacques Rousseau, Dictionnaire de Musique (1767).
- If a tree falls in a forest, and no-one is around to hear it, does it make a noise?
- Source unknown, but apparently originating in the twentieth century; a 1910 physics book asks "When a tree falls in a lonely forest, and no animal is near by to hear it, does it make a sound? Why?" Charles Riborg Mann, George Ransom Twiss, Physics (1910), p. 235. See also: If a tree falls in a forest.
Hoyt's New Cyclopedia Of Practical Quotations
- Quotes reported in Hoyt's New Cyclopedia Of Practical Quotations (1922), p. 740.
- A thousand trills and quivering sounds
In airy circles o'er us fly,
Till, wafted by a gentle breeze,
They faint and languish by degrees,
And at a distance die.
- Joseph Addison, An Ode for St. Cecilia's Day, VI.
- A noise like of a hidden brook
In the leafy month of June,
That to the sleeping woods all night
Singeth a quiet tune.
- Samuel Taylor Coleridge, The Rime of the Ancient Mariner (1798; 1817), Part V, Stanza 18.
- By magic numbers and persuasive sound.
- William Congreve, Mourning Bride, Act I, scene 1.
- I hear a sound so fine there's nothing lives
'Twixt it and silence.
- James Sheridan Knowles, Virginius, Act V, scene 2.
- Parent of sweetest sounds, yet mute forever.
- Thomas Babington Macaulay, 1st Baron Macaulay, Enigma. "Cut off my head, etc." Last line.
- Sonorous metal blowing martial sounds,
At which the universal host up sent
A shout that tore hell's concave, and beyond
Frighted the reign of Chaos and old Night.
- Their rising all at once was as the sound
Of thunder heard remote.
- To all proportioned terms he must dispense
And make the sound a picture of the sense.
- Christopher Pitt, translation of Vida's Art of Poetry.
- The murmur that springs
From the growing of grass.
- Edgar Allen Poe, Al Aaraaf, Part II, line 124.
- The empty vessel makes the greatest sound.
- What's the business,
That such a hideous trumpet calls to parley
The sleepers of the house? Speak, speak!
- Hark! from the tombs a doleful sound.
- Isaac Watts, Hymns and Spiritual Songs, Book II. Hymn 63.
- My eyes are dim with childish tears,
My heart is idly stirred,
For the same sound is in my ears
Which in those days I heard.
- William Wordsworth, The Fountain.
How Many People Experience Same-sex Attractions?
You may have heard the claim that 10% of the population has a homosexual orientation. More conservative estimates place the figure at 1–3%. A 2012 Gallup Poll found that 3.4% of adult Americans surveyed identified as gay, lesbian, bisexual or transgender. Among younger Americans (ages 18-29), the percentage is 6.4. Among single adults who have never married the statistic is 7%.
Given these estimates, the following are the approximate number of people who may be in your church congregation:
If you have 400 members in a typical family congregation, there are likely 13 people who experience same-sex attraction.
If you have 40 teenagers, there are likely 2-4 who experience same-sex attraction.
In a congregation of 400 college-aged single people, there are likely 25 who experience same-sex attraction.
In a congregation of 400 older single adults, there are likely 28 people who experience same-sex attraction.
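The arithmetic behind these figures is simply the congregation size multiplied by the relevant survey percentage. Here is a minimal sketch; the pairing of each congregation type with a particular percentage is an assumption made for illustration, based on the Gallup categories quoted above:

# Approximate head-counts implied by the survey percentages quoted above.
# The percentage assigned to each group is an assumption for illustration.
groups = {
    "family congregation of 400 members": (400, 3.4),  # all adults, 3.4%
    "40 teenagers":                       (40, 6.4),   # younger Americans, 6.4%
    "400 college-aged single people":     (400, 6.4),  # ages 18-29, 6.4%
    "400 older single adults":            (400, 7.0),  # never-married singles, 7%
}
for name, (size, pct) in groups.items():
    print(f"{name}: about {size * pct / 100:.0f} who may experience same-sex attraction")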
In addition to these numbers, consider that each person with same-sex attraction has family members who are affected by the issues, including parents, spouses, siblings, and children.
Note that estimates are problematic not only because it is hard to get accurate information, but also because it is difficult to define what same-sex attraction is. Do you include in the numbers everyone who has had a same-sex thought or just those who have had a homosexual experience? How many experiences or thoughts qualify? Some people are reluctant to admit homosexual experiences, while others exaggerate the numbers. Further, since it is to the political advantage of those who seek to normalize homosexuality to establish the practice as widespread, you must be cautious about how the studies are reported.
Alfred C. Kinsey conducted research on human sexuality in the late 1940s and early 1950s and published his findings in Sexual Behavior in the Human Male (Kinsey, 1948) and Sexual Behavior in the Human Female (1953). Kinsey ranked his findings on a seven-point scale with exclusive heterosexuality at zero and exclusive homosexuality at six. (Kinsey, 1948, p. 638) Among twenty-five–year-old males in the United States, he claimed that 79% were at zero (exclusively heterosexual) and 2.9% were at six (exclusively homosexual). (Kinsey, 1948, p. 651) He claimed the following about white American males between the ages of sixteen and fifty-five (Kinsey, 1948, p. 651):
- 10% were "more or less exclusively homosexual (i.e., rate 5 or 6) for at least three years."
- 8% were "exclusively homosexual (i.e., rate 6) for at least three years."
- 4% were "exclusively homosexual throughout their lives, after the onset of adolescence."
His findings showed that 10% of the males had seven or more homosexual experiences. Further, he claimed that as many as 37% had some kind of homosexual experience after adolescence.
Kinsey's research methodologies have been questioned. Although he used a large number of subjects—he and his team took sex histories from more than 18,000 people and used data from 5,000 men and 6,000 women—he did not use the methods of random sampling that scientists commonly use today. His subjects came from boarding houses, college fraternities, prisons, mental wards, and wherever else he could get them. As many as 20–25% had prison experience and 5% may have been male prostitutes. Since one would expect that this group would have higher-than-average homosexual experience, the findings of Kinsey's studies may not be representative of the population as a whole. (American Family Association, pp. 9–10). Also see Kinsey, Sex and Fraud: The Indoctrination of a People by Judith A. Reisman and Edward W. Eichel, Huntington House, LaFayette, LA, 1990.
There has been significant research since the 1950s to indicate that the occurrence of same-sex attraction in America and in other countries is much lower than the Kinsey statistics would indicate. (Burtoft, p. 23) Milton Diamond of the John A. Burns School of Medicine at the University of Hawaii analyzed studies of populations in the United States, Scandinavia, Asia, and Europe, and found that including all individuals who have ever engaged in any kind of same-sex behavior, the numbers would be "5–6 percent for males and 2–3 percent for females." (Diamond, p. 303)
A large study by the Alan Guttmacher Institute reported in 1993 that of sexually-active men aged 20–39, only 2.3% had any same-gender sexual activity and only 1.1% reported exclusive homosexual contact during the last ten years. (Billy, pp. 52–60)
Perhaps the largest and most scientifically-based modern survey was concluded in 1994 by academics at the University of Chicago’s National Opinion Research Center. (U.S. News & World Report) They asked 210 pages of questions of 3,432 Americans, ages eighteen to fifty-nine, and published their findings in The Social Organization of Sexuality. (University of Chicago) On the subject of homosexuality, this survey found the following:
Have you had sex with someone of your gender?
- 2.7% of men (and 1.3% of women) had sex in the past year
- 7.1% of men (and 3.8% of women) had sex since puberty
Are you sexually attracted to people of the same gender?
- 6.2% of men said yes
- 4.4% of women said yes
The survey also showed larger percentages in urban areas. The twelve largest cities in the United States showed more than 9% of men identifying themselves as homosexual, as opposed to only 1% in rural areas. Since people who experience same-sex attraction tend to migrate from the rural areas and suburbs to larger cities, these larger urban groups feed the perception that a larger percentage of the total population experiences same-sex attraction.
Conclusions on Existing Research
Different studies show different findings. Kinsey claimed that 4–10% of the male population was more or less exclusively homosexual for at least three years. Other research since that time shows the figure to be a more conservative 1–3%. However, if you consider everyone who has had homosexual contact since puberty, the numbers are more in the neighborhood of 5–10%. In addition to individuals, same-sex attraction also affects parents, spouses, brothers and sisters, grandparents, uncles, aunts, and friends.
|
<urn:uuid:439f4596-c29f-4749-9940-a06bf0d5b64c>
|
CC-MAIN-2016-26
|
http://www.samesexattraction.org/how-many-are-gay.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00060-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.969579 | 1,368 | 2.546875 | 3 |
1. How does Lanser request his meeting with the Mayor?
To get his needed meeting with the Mayor, Lanser sends a courier to him with a message requesting one.
2. What does Lanser beg the Mayor to do when he first invades the town and why?
During their first meeting after the town was invaded, Colonel Lanser begs Mayor Orden to help him keep peace and order in the town so that they can avoid any further bloodshed.
3. What does Annie do when she gets mad at some soldiers?
When Annie gets upset about the soldiers making eyes at her in an inappropriate manner, she reacts by throwing boiling water on them and going to their leader to explain the situation.
4. What does Dr. Winter say the townspeople will do after the invasion, and why does he think this?
Dr. Winter states that he does not think the townspeople will simply lie down and allow the invasion to happen. He is certain that they will find a way to fight back even if it means death. He thinks this because he is a historian and has read over the past history of the town.
This section contains 2,594 words
(approx. 9 pages at 300 words per page)
|
<urn:uuid:3b927b84-d9ee-4ad5-9893-aa12a6db56a3>
|
CC-MAIN-2016-26
|
http://www.bookrags.com/lessonplan/the-moon-is-down/shortessaykey.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393146.70/warc/CC-MAIN-20160624154953-00183-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.97437 | 257 | 2.546875 | 3 |
Testudo (Agrionemys) horsfieldii
- Family: Testudinidae
- Adult Size: 1 to 2 lbs
- Range: Iran, east to China, north to Russia, south to the Gulf of Oman and Pakistan.
- Habitat: Steppes (desert grassland) and rocky desert.
- Captive Lifespan: More than 20 Years
- Care Level: Beginner
The Russian tortoise is one of the most common tortoises in the pet trade today because it is currently heavily imported. Captive-bred Russian tortoises are becoming more common as some of these imported groups become established. Before political relations with Russia were what they are today, the Russian tortoise was not often imported and as a result was far less common than it is now. Caution should be exercised when acquiring an imported Russian tortoise, because imported animals are usually stressed and heavily parasitized. A veterinary visit is a must when a wild-caught Russian tortoise is obtained.
Russian tortoises make great pets because they are relatively small and very active in captivity. They are very cold-tolerant but do not do well in cold, damp environments: kept under cool and damp conditions for any extended length of time, they will develop respiratory problems rather quickly. They handle cool, dry conditions very well, however, and under those conditions they will be inclined to hibernate.
These tortoises are opportunistic feeders taking dark leafy greens and fibrous vegetables. Fruits such as pears and apples in addition to berries can be fed to add variety but should be fed sparingly. Russian tortoises will also take insects and carrion.
A water dish with clean fresh water should be provided at all times although they do not use it very often seeming to get their hydration from their food.
|
<urn:uuid:a5561bb3-cf56-40a0-8227-3c0d3cf88055>
|
CC-MAIN-2016-26
|
http://www.reptilesmagazine.com/Turtle-Tortoise-Species/Russian-Tortoise/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00192-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.967517 | 372 | 2.546875 | 3 |
The M16 (more formally Rifle, Caliber 5.56 mm, M16) is the United States military designation for the AR-15 rifle. Colt purchased the rights to the AR-15 from ArmaLite and currently uses that designation only for semi-automatic versions of the rifle. The M16 rifle fires the 5.56x45mm cartridge and can produce massive wounding and hydrostatic shock effects when the bullet impacts at high velocity and yaws in tissue leading to fragmentation and rapid transfer of energy. However, terminal effects can be unimpressive when the bullet fails to yaw or fragment in tissue.
The M16 entered United States Army service as the M16 and was put into action for jungle warfare in South Vietnam in 1963, becoming the standard U.S. rifle of the Vietnam War by 1969, replacing the M14 rifle in that role. The U.S. Army retained the M14 in CONUS, Europe, and South Korea until 1970. Since the Vietnam War, the M16 rifle family has been the primary infantry rifle of the U.S. military. With its variants, it has been in use by 15 NATO countries, and is the most produced firearm in its caliber.
|
<urn:uuid:968d5015-1d97-4c1d-8550-10ae9d47f782>
|
CC-MAIN-2016-26
|
http://www.turbosquid.com/FullPreview/Index.cfm/ID/547324
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00172-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.952027 | 246 | 2.640625 | 3 |
There are different ways to create the list of items displayed by a combo box or a list box on a VBA UserForm. One way is to "hard code" the list into the UserForm's Initialize event procedure using the .AddItem method. This is fine if you know what the contents of the list should be, and if it is not going to change regularly.
In Excel, you can set the control's RowSource property to a range of cells containing the list (the best way is to name the range and enter that name as the property value). This also allows you to change the list if you need to without having to edit the VBA code. You can even use a dynamic range name so that you don't have to redefine the range each time you add a new item.
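If you have not defined a dynamic range name before, one common approach (assuming, for example, that the list occupies column A of Sheet1 with no blank cells and no heading) is to enter a formula like the following in the Refers To box of Excel's Define Name dialog:

=OFFSET(Sheet1!$A$1,0,0,COUNTA(Sheet1!$A:$A),1)

The named range then grows and shrinks automatically as items are added to or removed from the list, so the combo box always shows the current list.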
But if you are working in a program other than Excel you have to generate the list with code. A UserForm in Word or PowerPoint doesn't have a range of cells it can refer to. And even if you are working in Excel maybe you would like to get your list from somewhere else.
This tutorial explains how to build a UserForm's combo box or list box list (they are both treated the same way) by retrieving the list items from a table in an Access database.
Let's assume that you have a UserForm to which you have added a combo box or a list box, and that you also have a database that contains a table from which you can retrieve the list items. The code that retrieves the information from the database uses ADO (ActiveX Data Objects), an object library used from Visual Basic code that is designed specifically for communicating with databases. Microsoft Access, being a database program, already knows about ADO but if you are using any other Microsoft Office program you have to set a reference to ADO so that your program knows how to speak to the database.
In the Visual Basic Editor go to Tools > References to open the References dialog. In the list of Available References you will see that some already have a tick against them. Unless ADO is already selected, scroll down the list and find the entry for Microsoft ActiveX Data Objects 2.x Library (where x is the highest available number - unless you are programming for an earlier version of Office). Place a tick in the adjacent checkbox and click the OK button...
If you reopen the References dialog you will see that the ADO reference has moved to join the other selected ones near the top of the list.
Since the code needs to interact with the database file it needs to know the exact path and filename. As you will see below it uses this to create a Connection String to open a connection to the database. The Connection String also specifies the appropriate driver to use. This example is appropriate for a Microsoft Access database. If you are working with something else (such as a database on Microsoft SQL Server) you will have to make changes. Search for help on ADO Connection Strings to find out what to use. Here is a typical example of a connection to an Access database:
cnn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
         "Data Source=C:\Databases\StaffDatabase.mdb"   ' substitute the path and filename of your own database
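For comparison, a connection to a SQL Server database would typically use the SQLOLEDB provider instead; the server and database names below are placeholders that you would replace with your own:

Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;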
After successfully connecting to the database ADO uses an SQL statement to open a recordset which is held in the computer's memory. Even if your database table contains just a single field containing each of the list items in the order you want them, you still have to use an SQL statement to build the recordset. The SQL statement I use in this example retrieves a unique list of Department names from a field named Department contained in a table called tblStaff. I have also chosen to sort the list into ascending alphabetical order:
"SELECT DISTINCT [Department] FROM tblStaff ORDER BY [Department];"
If you are not confident to write your own SQL statement you can use the query tool in Access to create a query that returns the list you need, then copy the resulting SQL from the query's SQL View.
The code should be placed in the UserForm's Initialize event procedure. This event fires each time the form is opened so the list will always be up-to-date. If necessary, right-click the UserForm and choose View Code to open its code module then choose UserForm and Initialize from the drop-down lists (left and right respectively) at the top of the code window to create an empty procedure. The finished code, tailored to your own requirements, should look like this:
Private Sub UserForm_Initialize()
On Error GoTo UserForm_Initialize_Err
    Dim cnn As New ADODB.Connection
    Dim rst As New ADODB.Recordset
    ' Open the connection (substitute the path and filename of your own database)
    cnn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
             "Data Source=C:\Databases\StaffDatabase.mdb"
    ' Open the recordset that supplies the list items
    rst.Open "SELECT DISTINCT [Department] FROM tblStaff ORDER BY [Department];", _
             cnn, adOpenStatic
    rst.MoveFirst
    With ComboBox1
        .Clear
        Do
            .AddItem rst![Department]
            rst.MoveNext
        Loop Until rst.EOF
    End With
UserForm_Initialize_Exit:
    On Error Resume Next
    rst.Close
    cnn.Close
    Set rst = Nothing
    Set cnn = Nothing
    Exit Sub
UserForm_Initialize_Err:
    MsgBox Err.Number & vbCrLf & Err.Description, vbCritical, "Error!"
    Resume UserForm_Initialize_Exit
End Sub
Remember to edit the cnn.Open and rst.Open statements to suit your own requirements. Change the name of the combo box or list box to match yours (here it is called ComboBox1), and enter the name of the field that contains the list items into the AddItem statement. If you have coded everything correctly your UserForm will build the list as it opens:
Note that I have included an error handler and exit routine into the code. It is good practice to include an error handler in any procedure during which something might go wrong. This is particularly important when working with databases.
Following the error handling instruction and the necessary variable declarations, the procedure starts by opening a connection to the database. It then opens a recordset based on the supplied SQL statement and moves to the first record. It clears any existing items from the combo box list, then proceeds to loop through the recordset. For each record it adds a new item to the combo box list, getting the item's value from the specified field (here it is the Department field) before moving to the next record. When the loop reaches the end of the recordset (EOF = End Of File) it closes the recordset and the connection to the database, then sets their variables to Nothing to clear the computer's memory.
The code required to create a multi-column list is slightly different. Remember to set the ColumnCount and ColumnWidths properties of your combo box or list box to the appropriate values. You will also need to modify the SQL statement to return as many columns as you need.
In this example two fields (Code and Country) are brought from a table named tblISOCountryCodes. I have also declared a variable i to act as a counter to keep track of the index number of each row as it is added to the list. This listing shows how the code differs from the previous example (the unchanged code is not shown):
    Dim i As Integer
    ...
    rst.Open "SELECT [Code], [Country] FROM tblISOCountryCodes ORDER BY [Country];", _
             cnn, adOpenStatic
    ...
    i = 0
    With ListBox1
        .Clear
        Do
            .AddItem                        ' add a new, empty row to the list
            .List(i, 0) = rst![Code]        ' fill the first column of that row
            .List(i, 1) = rst![Country]     ' fill the second column
            i = i + 1
            rst.MoveNext
        Loop Until rst.EOF
    End With
As the procedure loops through the recordset it adds a new item to the list as before, but this time, since it has to write into several columns, it is a bit more complicated. The .AddItem method adds a new empty row to the list but, unlike the previous example, does not specify what it contains. The variable i is keeping track of the index number of each new row. The .List property of the (in this example) list box is then set for each column. It gives the row index (i) and the column index (0, 1, etc. numbering from zero) and the value to be written into the list (the appropriate field from the recordset). Before moving to the next record the value of i is incremented by one ready for the next item.
This example has just two columns but you can have as many as you like. Remember to set the control's properties to accept the additional columns, adjust the SQL statement to return the required number of fields, and add an extra .List statement for each additional column.

©2007 Martin Green - www.fontstuff.com - All rights reserved.
|
<urn:uuid:81011816-6e7b-43cc-8ee4-15628972d992>
|
CC-MAIN-2016-26
|
http://www.fontstuff.com/vba/vbatut10pfv.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00133-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.879823 | 1,773 | 2.71875 | 3 |
It has been fun over the past weeks writing about Mavila — or Mabila as it is sometimes spelled.
The coastal Alabama city of Mobile is named for that battle. It has also been an easy undertaking because, though the event is neglected in American history textbooks, it is well documented by the four principal chroniclers of the De Soto expedition. Given the fact that the event occurred five centuries ago, those who care to do the research have available to them some fairly good resources.
What happened on Oct. 18, 1540, in what is today Alabama is beyond dispute. The violent confrontation between Tuscaloosa's warriors and De Soto's knights was a monumental encounter altering the destinies of Native Americans and Europeans alike. However, even though the event was so noteworthy, we have no idea where exactly it took place. Professional archaeologists sifting through the debris of mid-16th century native towns say that the battle site remains elusive.
Finding the location of the battle depends in large measure on how accurate scholars are in reconstructing the overall route taken by De Soto from his winter camp in Florida to his next winter camp in Mississippi. Some 75 years ago when many were observing the 400th anniversary of the expedition, anthropologist John Swanton published an official report on the likely route. The problem is that archaeologists today, with the help of new technologies and discoveries, question Swanton’s line of march.
I’ve been fortunate to have met some of the most distinguished scholars at the forefront of the search for Mavila. Back in the late 90s when I was riding my horse along trails through the Southeastern United States, I based my travels on the route mapped out by anthropologist Charles Hudson of the University of Georgia. On the two occasions we met, all he would say is that Mavila could well have been at the Old Cahawba site southwest of Selma which has a mound in a sixteenth-century palisaded town.
Another scholar I met more recently is Professor Vernon Knight of the University of Alabama. He co-chaired a 2006 conference called “The Search for Mabila,” edited a book of the same name published in 2009 and leads of a multidisciplinary team looking for the battle site. The pool of talent excites high expectations. He is working with anthropologists, historians, linguists, geographers, geologists and NASA scientists using satellite technology. The collaboration, he believes, will yield results.
Knight told me that one of the team’s projects has been to hold periodic area artifact shows in which farmers in the Alabama floodplain bring in their finds to have them identified and catalogued. The hope is that eventually a farmer will bring in triangular arrowheads and some Spanish artifacts. Problematic, he lamented, is that fewer people are farming and that lands once worked by the plow are now planted in pine.
Knight believes that Mavila is located somewhere in the agriculturally rich floodplain of the Alabama River because the chroniclers describe clusters of towns around Mavila. There is archaeological evidence here of population density based on intensive corn cultivation. And there is another imprint on the region that could bear fruit. The chronicler Garcilaso de la Vega wrote that during the month in which the Spaniards camped around the ruins of Mavila, detachments went out into the surrounding countryside in all directions for four leagues (about twelve miles) and torched the towns. "Any legitimate candidate for Mavila," says Knight, "must be surrounded by the remains of other settlements, which will superficially look alike because they have all been burned."
The same trip that took me to see Professor Knight in Tuscaloosa also led me to Dr. Jack Bergshesser who oversees Old Cahawba State Historical Park and who suspects that this could be the site of Mabila. He walked me over to a map on the wall of the visitor center and pointed out the configuration of the old Indian village with the palisade, the ditch and the mound. The problem for me, I told him, was that none of the four chroniclers mentions a mound at Mabila or a river nearby. Rather a chronicler wrote that the “Christians” drank from a nearby pond whose water was tainted by the blood of the dead and dying. Why drink from a dirty pond when the Alabama River is just a few yards away? Bergshesser admitted that there were other plausible candidates for Mavila and recommended that I speak to Ned Jenkins who is a respected member of Dr. Knight’s search team. Jenkins, he informed me, was director of the Fort Toulouse National Historic Park near Montgomery.
That opportunity to visit Ned Jenkins came a few days ago when I made a detour to Fort Toulouse on my way to participate in the 150th anniversary of the Civil War battle of Olustee. He was good enough to see me even after the park officially closed. Not only is he an active participant in Dr. Knight’s team of scholars looking for Mavila, his masterly summary of what archaeological signatures are likely to be found at the site constituted chapter six of Knight’s book “The Search for Mabila: The Decisive Battle between Hernando de Soto and Chief Tascalusa.”
According to Ned Jenkins, the site of the battle, whether Old Cahawba or not, should give up certain artifacts that one would expect from a place of violent conflagration. There should be a plethora of human bones, some charred and some showing signs of trauma from sharp implements of steel. There should be the bones of pigs and butchered horses. There should be an abundance of discolored freshwater pearls and Spanish metal objects. And, because the palisade and houses were said to be covered in a thick daub, one of the most important archaeological signatures should be a vast amount of orange-colored, brick-like fired clay and daub. When I asked Jenkins about Cahawba, he said simply that it should not be discounted.
All the seekers with whom I spoke in Alabama are of one mind on an important point. They agree with Vernon Knight that the lost battle site of Mavila is the predominant historical mystery of the Deep South. And they persevere not to satisfy the agenda of local pride but to seek knowledge for its own sake. Such knowledge will inevitably lead to a greater understanding of what happened that fateful October day in 1540.
Bill Andrews is a retired college history professor who lives in Columbia.
|
<urn:uuid:364cf4bd-2df2-4223-9458-8c647b1e1183>
|
CC-MAIN-2016-26
|
http://columbiadailyherald.com/opinion/columns/search-mavila-seekers
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00177-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.969105 | 1,371 | 3.125 | 3 |
The purpose of the Higher Education Act of 1965 was to make post-secondary education more financially accessible. It created more opportunities for students studying in colleges and universities to obtain financial assistance through new scholarships and low-interest loans. It also supported the institutions themselves by increasing the dollar amount of government funds available to them.
Since 1965 several changes and additions have been made to the act. As of 1998, persons convicted of drug-related crimes are ineligible to receive government financial assistance, and in 2003 the funding available to institutions was increased and the mandatory grace period that regulated when institutions could ask for more loan money was waived, making it easier for institutions to grant more loans.
The effect has been overwhelmingly positive, with a gradual increase in enrollment and in graduates exiting the post-secondary system in the United States.
|
<urn:uuid:28a28a8f-5a1a-41e2-a677-4313b84ccce5>
|
CC-MAIN-2016-26
|
http://www.enotes.com/homework-help/how-does-this-act-affect-private-institutions-438492
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00152-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.962969 | 191 | 3.234375 | 3 |
The use of modern harmonic-balance simulators and electromagnetic analysis software has been instrumental in the design of modern mixers. In particular, it has allowed the development of new types of balun structures, without which broadband monolithic balanced mixers would be impossible. Design techniques, however, must be adjusted to make most efficient use of these technologies. This paper describes the current state of the art in the design, analysis, and computer modeling of microwave and RF mixers - and offers some pointers for improving designs.
We show how modern computer analysis (CAD) tools, especially general-purpose harmonic-balance simulators and planar electromagnetic simulators, have improved both the quality of mixer designs and the efficiency of the design process. Simultaneously, new approaches to the design of baluns and passive structures have resulted in high-performance, broadband designs. The result is a new level of sophistication in mixer technology.
Since the invention of the superheterodyne receiver by Edwin Armstrong in 1917, mixers have been essential parts of radio communication systems. Mixer design has traditionally been an approximate process, at best using special-purpose computer programs. The development of general-purpose harmonic-balance simulators and electromagnetic simulators, however, has improved the accuracy of the design process enormously, and it has even made the design of a wide variety of new balun structures possible. These have been particularly valuable in monolithic circuits.
Figure 1. A common type of commercial, suspended-substrate diode mixer
The composite, low-dielectric-constant substrate is very thin (typically 125-250 microns) and is mounted in a housing or carrier. An open area under the substrate is essential.
Mixers can be broadly categorized as active or passive. Passive mixers primarily use Schottky-barrier diodes, although a relatively new type of passive mixer, the FET resistive mixer [1], recently has become popular. FET resistive mixers use the resistive channel of a MESFET to provide low-distortion mixing, with approximately the same conversion loss as a diode mixer. Active mixers use either FET or bipolar devices. FETs (either MESFETs or HEMTs) are used for most microwave and RF applications where active mixers are employed; bipolar junction transistors (BJTs) and occasionally heterojunction bipolar transistors (HBTs) are used most frequently as Gilbert multipliers [2] for modulation, phase detection, and similar purposes. The theory of both active and passive mixers has been well known for some time [3]-[8].
Mixer Types and Technologies
Although single-device mixers occasionally are used, most practical mixers are balanced. Balanced mixers require baluns or hybrids, and these largely determine the bandwidth and overall performance of the mixer. Thus, they are the subject of considerable research interest. In this article, we shall consider only balanced mixers.
In spite of the maturity of FET circuits, diode mixers are still widely used in microwave circuits. Diode mixers have an important advantage over FETs and bipolar devices: a Schottky-barrier diode is inherently a resistive device, and as such has very wide bandwidth. The bandwidths of diode mixers are limited primarily by the bandwidths of the baluns, not the diodes. FETs, in contrast, have a high-Q gate-input impedance, causing difficulties in achieving flat, wide bandwidth.
Diode mixers usually have 5-8 dB conversion loss, while active mixers usually can achieve at least a few dB of gain. Although properly designed active mixers can achieve somewhat lower noise figures than diode mixers, most systems can tolerate a relatively noisy mixer, so the diode mixer's loss and noise are rarely a significant disadvantage. Broadband diode mixers usually do not require more local-oscillator (LO) power than active mixers, but narrowband active mixers may have an LO-power advantage. Finally, balanced active mixers always require an IF hybrid or balun; diode mixers generally do not. When the IF frequency is low, the resulting large size of the IF balun may be troublesome, especially in monolithic circuits. Finally, even balanced active mixers require matching and filtering circuits, while balanced diode mixers largely do not.
Active mixers have a few important advantages over diode mixers besides their superior gain and noise figure. High-quality diodes are often difficult to produce in FET monolithic circuit technologies, so active FET mixers often are easier to integrate. Diodes in such technologies usually consist of a FET gate-to-channel junction, which usually is a very poor diode. Dual-gate FET mixers offer inherent LO-RF isolation, even in single-device circuits, although noise figure and gain usually are slightly worse than in single-gate FET mixers.
The design of balanced mixers-passive or active-involves two fundamental tasks: (1) design of the baluns and passive matching circuits, and (2) design and analysis of the complete mixer.
The design of baluns and passive circuits for discrete-component mixers is very mature. Figure 1 shows a common structure. In this mixer, the baluns consist of simple, parallel-coupled strips mounted on a suspended substrate. Often, the lower strip (which is connected to the ground surface of the housing) is tapered to improve the balun's performance.
But such baluns - widely used in hybrids - are impractical for new-generation RF ICs, and attempts to "translate" suspended-substrate baluns into planar monolithic form have been largely unsuccessful. The fundamental problem is in the extra capacitance between the monolithic circuit's microstrips and ground. Because the substrate is thin (usually 100 microns) and has a high dielectric constant (12.9), this capacitance is unavoidably large. It allows an even mode impedance to exist on the balun. The even mode unbalances the mixer and allows input-to-output coupling, which reduces port-to-port isolation. Unless special efforts are made to reduce it, the imbalance is severe.
Practical approaches to the design of on-chip baluns for broadband circuits are still scarce. The Marchand balun offers some hope as a building block for broadband, planar monolithic mixers. Although its even-mode characteristic impedance is no higher than that of other structures, its performance tolerates low even-mode impedance much better.
Figure 2 shows a planar Marchand balun, and Figure 3 shows its calculated performance. Clearly, the Marchand balun is intrinsically capable of good performance over a multioctave band. In less idealized cases, we find that an octave bandwidth, or slightly greater, is practically achievable.
Figure 2. A planar Marchand balun consists of two quarter-wavelength coupled-line sections
The odd-mode characteristic impedance is chosen so that the structure acts as a transformer between the source and load, and the even-mode impedance is made as great as possible.
Figure 3. Performance of a somewhat idealized Marchand balun with Z0o = 25 ohms, Z0e = 180 ohms, and ZL = 60 ohms
The output terminals are each treated as separate ports. The even- and odd-mode phase velocities are equal, causing the balance to be (theoretically) perfect.
We have experimented extensively with Marchand baluns and Marchand-like balun structures. Inevitably we find that a three-strip structure gives the best trade-off between odd-mode and even-mode impedances. Unfortunately, such asymmetrical coupled-line structures are not simple to analyze.
Our approach to analysis of these structures is as follows. We use a quasistatic, moment-method electromagnetic simulator called LINPAR [9] to determine the current and voltage modes on the coupled-line structure used in the balun. We then import these data into our circuit simulator, where length information is introduced and a Y matrix for the coupled-line structure is created. The circuit can then be analyzed directly in the linear-circuit simulator or as part of a complete mixer by harmonic-balance simulation. A coupled-line structure having arbitrary line widths and spacings can be analyzed in this manner.
The coupled-line structure's admittance matrix can be determined from its length, its modal matrices, and the modal phase velocities. The vector of input currents I0 of a set of coupled lines with a short-circuited output is

I0 = SI (1 + Λ2L) (1 − Λ2L)^-1 SV^-1 V0

where V0 is the excitation vector. The output current vector IL is

IL = 2 SI ΛL (1 − Λ2L)^-1 SV^-1 V0

where SI is the modal current matrix, SV is the modal voltage matrix, 1 is the identity matrix, and ΛL is the diagonal matrix

ΛL = diag{ exp(−γn L) }

where γn are the propagation constants of each mode and L is the length of the coupled-line structure. Λ2L is a similar matrix having 2L instead of L. These expressions realize the first column of the admittance matrix; the rest of the matrix can be filled in from the obvious symmetries.
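As a concrete illustration of this bookkeeping, the short sketch below (written in Python with NumPy purely for illustration; the two-conductor modal data, frequency, and length are invented stand-ins for the simulator output, not values from this work) assembles the 2n-port admittance matrix of a coupled-line section from its modal matrices, per-mode propagation constants, and length:

import numpy as np

# Illustrative two-conductor modal data (stand-ins for quasistatic simulator output).
SV = np.array([[1.0,  1.0],               # modal voltage matrix, columns = modes
               [1.0, -1.0]])
SI = np.array([[1/120.0,  1/50.0],        # modal current matrix: even mode ~120 ohms,
               [1/120.0, -1/50.0]])       # odd mode ~50 ohms (assumed values)
f = 20e9                                  # analysis frequency in Hz (assumed)
gammas = 1j * 2 * np.pi * f / 3.0e8 * np.ones(2)   # lossless, equal phase velocities
length = 1.5e-3                           # section length in metres (assumed)

def coupled_line_y(SV, SI, gammas, length):
    """2n-port Y matrix of an n-conductor coupled-line section."""
    n = len(gammas)
    lam_L = np.diag(np.exp(-gammas * length))         # Lambda_L
    lam_2L = np.diag(np.exp(-2.0 * gammas * length))  # Lambda_2L
    one = np.eye(n)
    SV_inv = np.linalg.inv(SV)
    # Input currents with the far end short-circuited: I0 = YA @ V0
    YA = SI @ (one + lam_2L) @ np.linalg.inv(one - lam_2L) @ SV_inv
    # Far-end port currents (defined into the network) under the same excitation
    YB = -2.0 * SI @ lam_L @ np.linalg.inv(one - lam_2L) @ SV_inv
    # The remaining blocks follow from the symmetry of the uniform section
    return np.block([[YA, YB], [YB, YA]])

Y = coupled_line_y(SV, SI, gammas, length)
print(np.round(Y, 4))

Note that the length enters only in this final step, which is what allows it to be optimized inside the circuit simulator.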
This process has two important advantages compared to a general-purpose planar electromagnetic simulator using spectral-domain moment methods or other full-wave approaches. First, it is much faster, and more variations of the coupled-line geometry can be studied in limited time. Second, the length of the structure is not specified until the circuit analysis is performed, so the length can be optimized within the circuit simulator. This results in a very efficient design process.
A disadvantage of this method is the quasistatic nature of the electromagnetic analysis. This is less of a difficulty than one might initially imagine, since non-TEM dispersion effects are generally insignificant in monolithic baluns at frequencies below ~50 GHz, and probably, in many cases, higher.
Harmonic-balance analysis is the method of choice for designing RF and microwave mixers. Time-domain analysis (for example, SPICE [10]) may also be acceptable in some cases.
In "classical" harmonic-balance analysis 5, only a single excitation tone is used. The method has been extended, however, to allow two or more noncommensurate excitation frequencies. These methods increase the number of frequency components in the analysis and slow the analysis significantly. Several methods can be used to improve the efficiency of mixer analysis by multitone harmonic balance. One is to select the frequencies in the analysis so they include only the LO harmonics and sidebands around each harmonic. This reduces the size of the frequency set considerably, and thereby improves efficiency. Another is to use conversion-matrix analysis. In this method, the mixer is first analyzed under LO excitation alone, and then a noniterative calculation, treating the RF as a small deviation on the LO voltage, follows. This process is very efficient, because the computation time required for the conversion-matrix analysis is usually insignificant, and the harmonic-balance analysis is single-tone. Conversion-matrix analysis is applicable to both active and passive mixers.
Numerical optimization of mixer designs is possible in most harmonic-balance simulators, but the time required for such optimization is usually prohibitive. A more intelligent design process usually obviates such optimization, or at least reduces considerably the amount needed. We begin with an idealized circuit, using only lumped or simple distributed components, and baluns are replaced by transformers. We then determine input and optimum load impedances, and we design simple matching networks, usually lumped- element. The circuit is again optimized, the ideal elements are replaced one-by-one with real structures, and the mixer's performance is recalculated, reoptimized, and maintained throughout the process. When the finished circuit emerges, it needs little or no numerical optimization.
Figure 4. A planar star mixer uses three-strip Marchand baluns in a CPW-like configuration
This mixer exhibits low conversion loss, high isolation, and excellent intermodulation performance from 26-40 GHz. The IF frequency range is DC-12 GHz.
Figure 5. This planar ring-diode mixer operates from 18 to 40 GHz, with a 12-GHz IF
It consists of Marchand baluns for both the RF and LO, and a second "horseshoe" balun for IF extraction and further even-mode rejection.
Figure 4 shows a planar star mixer using three-strip Marchand baluns in a coplanar-waveguide (CPW) structure. We have designed a large number of mixers of this type, most operating over octave bandwidths between 12 and 45 GHz. The mixer shown in the figure operates over a 26-40 GHz RF and LO band and a DC-12 GHz IF band. Conversion loss is 7 to 9 dB over this frequency range. The RF-to-LO isolation, probably the best indication of the balun's effectiveness, is greater than 40 dB. This is the first mixer of this type that we developed; subsequent mixers have exhibited 18 GHz IF bandwidth, 20 to 40 GHz RF and LO bandwidth, and lower conversion loss. These mixers typically exhibit input third-order intercept points above 20 dB.
Figure 5 shows a rather unusual mixer that makes extensive use of coupled-line baluns. The RF and LO baluns are multistrip, asymmetrical Marchands. One of the quarter-wave sections of each balun is the usual three-strip structure, while the other has six equal-width, equally spaced strips. The large number of strips gives the section a very low odd-mode impedance, which improves the bandwidth considerably.
The RF balun excites a curved, coupled-line section which we have come to call the horseshoe. This section has two purposes: first, it provides an approximate virtual-ground point for an IF connection, always a difficulty in microwave ring mixer designs. Second, it improves the balun's balance. This mixer exhibits low conversion loss (~7 dB) and high RF-LO isolation (~35 dB) over an 18-40 GHz band. Unfortunately, the LO-to-IF and RF- to-IF isolations are only modest, approximately 13 dB. Subsequent designs used a stub in the IF connection to improve the rejection.
The use of modern harmonic-balance simulators and electromagnetic analysis software has been instrumental in the design of modern mixers. Especially, it has allowed the development of new types of balun structures, without which broadband monolithic balanced mixers would be impossible. Design techniques, however, must be adjusted to make most efficient use of these technologies. The result is high-performance, low-cost circuits operating into the millimeter-wave region.
*A version of this paper was presented at the 1999 Wireless Symposium.
1 S. Maas, "A GaAs MESFET Mixer with Very Low Intermodulation," IEEE Trans. Microwave Theory Tech., vol. MTT-35, no. 4, p. 425, April, 1987.
2 B. Gilbert, "A Precise Four-Quadrant Multiplier with Subnanosecond Response," IEEE J. Solid-State Circuits, vol. SC-3, p. 365, Dec., 1968.
3 A. A. M. Saleh, Theory of Resistive Mixers, MIT Press, Cambridge, MA 1971.
4 S. Egami, "Nonlinear, Linear Analysis and Computer-Aided Design of Resistive Mixers," IEEE Trans. Microwave Theory Tech., vol. MTT-22, p. 270, 1974.
5 S. Maas, Nonlinear Microwave Circuits, Artech House, Norwood, MA, 1988.
6 S. Maas, Microwave Mixers, Second Edition, Artech House, Norwood, MA, 1992.
7 S. Maas, "Theory and Analysis of GaAs MESFET Mixers," IEEE Trans. Microwave Theory Tech., vol. MTT-32, no. 10, p. 1402, Oct., 1984.
8 R. A. Pucel, D. Massé, and R. Bera, "Performance of GaAs MESFET Mixers at X Band," IEEE Trans. MTT, vol. MTT-24, no. 6, p. 351, June, 1976.
9 A. R. Djordjevic et al., LINPAR for Windows, ver. 2.0, Artech House, Norwood, MA 1999.
10 SPICE3, Electronics Research Laboratory, University of California, Berkeley, CA USA 94720.
|
<urn:uuid:b920fbe2-cdf4-4e96-bea0-ceb3a5fefcfc>
|
CC-MAIN-2016-26
|
http://www.eetimes.com/document.asp?doc_id=1272174
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00161-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.916956 | 3,475 | 2.96875 | 3 |
Explores the rocks, plate tectonics, and other geologic features, and evolution of the Pacific Northwest, including the Cascades, Columbia Plateau, Olympic Mountains, and Yellowstone. Laboratory includes rock identification and interpretation of topographic and geologic maps of the Northwest. Offered: Sp.
TEXT AND MATERIALS: We will read all of both books as assignments. Alt, D. and D.W. Hyndman. 1995. Northwest Exposures: A Geologic Story of the Northwest. Missoula: Mountain Press. 443 p.( ISBN 0-87842-323-0)
Figge, J. 2009. Evolution of the Pacific Northwest: An Introduction to the Historical Geology of Washington State and Southern British Columbia. Seattle: The Northwest Geological Institute. 355 p. http://northwestgeology.com/. Available free online only.
COURSE DESCRIPTION: Pacific Northwest Geology explores the rocks, plate tectonics, geomorphology and evolution of the Pacific Northwest, including the Puget Sound Trough, Cascades, Columbia Plateau, Olympic Mountains, and Yellowstone. Activities include rock identification, application of stratigraphic and tectonic principles, and interpretation of topographic and geologic maps of the Northwest. There is a GREAT optional one-week field trip for 1 lab credit in the summer, June 14-20 (Monday-Sunday), to the John Day Fossil Beds National Monument in central Oregon. It is ably led by Julie Masura, who used to teach this course. I may be on the trip as well. Your good participation and grade in this course will be considered when selecting students for the summer field course.
Student learning goals
General method of instruction
Lecture, discussion, and both in-class and outside-class assignments involving interactive learning.
Recommended preparation
Basic study skills.
Class assignments and grading
|
<urn:uuid:a22d9746-59b9-4835-a3dc-0d425665bd26>
|
CC-MAIN-2016-26
|
https://www.washington.edu/students/icd/T/tesc/316sandywil.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00135-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.847621 | 381 | 3.671875 | 4 |
The city-state of Argos, or Hera's City as it was known, is one of the lesser-known Greek city-states. Although it is not known extensively in our time, in its time it was a great state. This is why.
When researching the government of Argos, the name of one leader comes up quite often. A man named Temenus was largely responsible for the success of Argos. Shortly after, a tyrant known as Pheidon took control of the city-state. Argos continued to flourish under his rule, with advancements in nearly every facet of society. It was the successive leaders that weakened Argos until it was a shadow of how it had once been.
Argos traded extensively with the other city-states. The leading crops grown in Argos were vegetables, tobacco, wheat and corn. These crops were traded to Corinth, for more crops that could support the Argive population.
The economy of Argos was probably the most advanced of its time. In place of using barter to exchange goods, the Argives used coins crafted of silver. These coins were supported by the government, and helped foster a thriving marketplace in Argos.
The social classes of Argos were fairly defined. In order, from people with the most rights to people with the least: Gods and Goddesses, the rich, citizens, non-citizens (women), freed slaves, and last, slaves. A citizen was defined as someone who participated in government, and since women were barred from participating in the government, they could not be considered citizens.
The artisans of Argos were quite skilled at what they did, which included metal work, sculpture, pottery, and paintings. They crafted unique armor for the military of the city, and created intricate silver coins for use in trade.
Another art of Greek times, the creation of myths, was also practiced in Argos. The Argives told a story of Argus, a gigantic monster with 100 eyes. Argus was also called Panoptes, which means all-seeing. The goddess Hera assigned Argus to guard her hated rival, the beautiful Princess Io, a mistress of her husband Zeus. For this reason, the term Argus is still sometimes used to describe a watchful guardian. Acting on orders from Zeus, the god Hermes killed Argus. Hera used Argus' 100 eyes to decorate the tail of her peacock. This tale was widely known throughout Greece.
ROLE OF WOMEN
Women had no political power in Argos. They had required tasks of burying dead children, taking care of the household for her husband, and bearing more children. This was not uncommon throughout Greece, but things were a little worse for the women of Argos.
Like all of Greece, the people of Argos were polytheistic, believing in flawed human-like gods. A few of the deities were Zeus, Athena and Hermes. The majority of people believed these gods to control nature and fate.
Argos was a city-state more concerned with what went on within Greece, as opposed to looking beyond, to the Persians, for instance. When Athens and Sparta called for help in the fight against the Persians in 480 BC, Argos refused, and for this reason the city-state was disgraced before all of Greece. When battling within Greece's borders, Argos' main enemy was Sparta.
Argos had shifting alliances. At times, they were aligned with Arkadia, Sikyon, Pisa and the Messenians. The most significant ally of all was Athens. They aligned with Athens from time to time to keep the city-state of Sparta from becoming too powerful.
Like other city-states, Argive farmers of the city did not grow grain, instead they grew cash-crops such as olives and wine. Argos established colonies in places more suited to growing grain. These colonies sent back a surplus to support Argos.
Argive architecture was as great as any to be found in Greece, with possibly the exception of Athens. Argos is thought to be the oldest city-state in all of Greece, having its roots in the Bronze Age. This long-standing history influenced the architecture of the city. Walking down one of the avenues, one would see architecture old and new alike. The older would be less advanced, of course. The newer would have more of an Athenian influence, using columns and arches.
Science in Argos is not what the city-state is known for. The Argives were more concerned with their military and arts.
CRAFTS AND INDUSTRY
The Argive coins of silver required considerable skill to produce. Artisans created these coins one at a time, a labor-intensive task.
Argos is not known for its great intellect, or its great military. Some historians think of Argos as a mixture of Athens and Sparta. Athens is known for the great thinking of its citizens, but Argos is not. Boys were sent to school, while girls were not. Basically, the more wealthy your parents were, the more schooling you received. The boys were taught by a slave, called a “paedagogus”, to whom the parents paid a fee.
CONTRIBUTIONS TO GREEK CULTURE
There are two things that Argos gave to Greece that are most remembered. The first is the little-known system of weights and measures that Argos created. This system was used throughout Greece for most of the empire's existence. The second contribution has already been discussed, the Argive system of currency. Arguably, you could also say that the various alliances in which Argos took part changed Greek history.
All told, Argos deserves the place it has earned in history. It is a great city-state, but is often over shadowed by the great accomplishments of Athens, Sparta, and Corinth.
node your homework!
Mrs. Judy Spencer
Buena High School
note: I originally wrote this as a sophomore in high school, details and facts should be double checked before use in anything serious. Ever BS a paper? =D
|
<urn:uuid:08a89208-d0a6-4eec-bfaf-15f45890ba36>
|
CC-MAIN-2016-26
|
http://www.everything2.com/title/Argos
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00192-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.983436 | 1,278 | 3.5625 | 4 |
Thermal radiation is the energy radiated from hot surfaces as electromagnetic waves. It does not require a medium for its propagation. Heat transfer by radiation occurs between solid surfaces, although radiation from gases is also possible. Solids radiate over a wide range of wavelengths, while some gases emit and absorb radiation on certain wavelengths only.

When thermal radiation strikes a body, it can be absorbed by the body, reflected from the body, or transmitted through the body. The fraction of the incident radiation which is absorbed by the body is called the absorptivity (symbol α). The other fractions of incident radiation, which are reflected and transmitted, are called the reflectivity (symbol ρ) and the transmissivity (symbol τ), respectively. The sum of these fractions must be unity, i.e.

α + ρ + τ = 1
|
<urn:uuid:2762d2f1-da39-4fe4-af37-dfed48de9ff8>
|
CC-MAIN-2016-26
|
http://www.taftan.com/thermodynamics/RADIAT.HTM
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00026-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948521 | 156 | 4 | 4 |
Allies in name, France and Russia were never real friends.

Russia's economy was being hurt by Napoleon Bonaparte's Continental System, which banned trade with Britain, and internal pressures forced Tsar Alexander to turn a blind eye to those who broke it.

Bonaparte decided to bring the Russians back into line and gathered a Grande Armee of more than 500,000 men - including contingents from all France's allies - to frighten the tsar into submission.

The implied threat did not work, and the tsar ordered two Russian armies to protect the Motherland.

Led by General Barclay de Tolly and General Bagration, the Russians retreated as Bonaparte's troops swarmed across the frontier on the River Niemen on 24 June 1812.

Uniting at Smolensk, the Russian armies fought at Smolensk and Valutino, but the overall strategy was to trade space for time and continue to avoid a major battle with the French. Finally the retreat stopped some 110 kilometres west of Moscow.

Under the command of General Mikhail Kutusov, the Russians set up strong defensive positions for his 120,000 troops at Borodino and waited for Bonaparte's men to come on.

They did so, 133,000 strong, and the fighting was brutal, even in Napoleonic terms, with little quarter being given.

Although advised by Marshal Davout to manoeuvre around the defences and attack from another direction, Bonaparte threw his men into a series of bloody attacks on the Russian positions.

By the end of the day - and at the cost of 44,000 Russian casualties and 30,000 French losses - the battle was indecisive, as Bonaparte withheld his Imperial Guard in a move that probably saved Kutusov's army from destruction. But, so far from friendly territory, Bonaparte said he could not take the risk.

The Russians retreated again and the French occupied a burning Moscow - set on fire by the Russians themselves.

Hoping for a Russian surrender that never came, Bonaparte waited in Moscow for five weeks - far too long - and then began what would become one of the greatest disasters in military history.

Ignoring good advice from Davout to take a different, better-supplied route to the one they had advanced on, Bonaparte sent his men back to Smolensk through already-plundered country.

To make a bad situation worse, the snows came early in 1812, and the cold, together with hunger and Cossack attacks, doomed what had been one of the most impressive armies ever to be formed.

Protected by a magnificent fighting rearguard led by Marshal Ney, the French struggled on. They were almost destroyed during the crossing of the River Beresina, where a two-day battle to hold off the Russians allowed what was left of the army to limp across two fragile bridges.

Bonaparte left the army on 5 December to return to Paris, where a coup had been foiled, and to raise another army. His troops dragged themselves on and on 7 December finally crossed the Niemen out of Russian territory. They had survived, but only 20,000 of them.
|
<urn:uuid:3679c0e0-bc84-42bf-afc4-598ada3d103a>
|
CC-MAIN-2016-26
|
http://www.napoleonguide.com/campaign_russia.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00029-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.952138 | 678 | 3.3125 | 3 |
The color of our skin... hmmmm
This "description" was borrowed from a white newspaper in Tennessee, It was written in 1931, WRITER UNKNOWN. but most likely was white, this shows you how naive people were not that long ago...
"WHAT IS THE COLOR OF NEGRO BABIES AT BIRTH?~~That the negro enters the world with a skin as light as that of a caucasin is a common notion,and frequently travelers refer to the faintly colored newly born children of the black race. Many people believe that negro babies are born white (with the exception of a black band around the
body) and that a short time after birth their skins turn black.As a matter of fact newly born infants even of the white race are not really light in color.Generally caucasian children are reddish at birth,although the exact hue varies widely. The children of the darker races are lighter than their parents,and the colored child of very light parents may be indistingguishable from a white child so far as color is concerned,but ordinarily colored children enter the worls noticeably pigmented and many of them are quite dark from the
As a rule the coloe deepens for several years,this is especially true in those born light.It seldom happens,however,that a newly born colored child has the deep black color of the typical full blooded adult.Decided pigmentation first appears on the ears,breast and
a belt across the lower part of the back.Often it is necessary to these characteristic markings to distinguish the newly born children of Caucasians from those of the darker races."
Now you know! Yours truly,
February 12, 2014 (KHARTOUM) – Changing attitudes in Sudan surrounding female genital mutilation (FGM) is helping to reduce the prevalence of the practice, UNICEF says.
- Nearly nine out of 10 Sudanese women aged 15 to 49 have undergone some form of FGM, UNICEF says (ST)
According to the agency, more than 50 per cent of Sudanese women believe the practice should be discontinued amid growing awareness about its health dangers.
UNICEF child protection specialist Abdelraouf Elsiddig Ahmed said a comparison of the 2006 and 2010 Sudanese Household Surveys shows a notable decrease in the practice of cutting.
“For instance, in the 5 to 9 age bracket, 34.5 per cent had been cut in 2010, compared to 41 per cent in 2006. In the next Household Survey, we expect to see a further decrease”, he said.
Nearly nine out of 10 Sudanese women aged 15 to 49 have undergone some form of cutting to various degrees of severity. The procedure, also known as infibulation, or ‘pharaonic circumcision’, is usually performed on underage girls by traditional practitioners, who have no medical training.
Women’s health advocates say the procedure has life-long implications for women, including ongoing infection, infertility, psychological trauma and complications during childbirth.
In Eastern Sudan’s Kassala state, 78.9 per cent of girls and women have undergone the procedure – the third-highest prevalence in the country, according to the 2010 Sudan Household Health Survey.
The origins of the practice are steeped in traditional and societal ideals of beauty and cleanliness, religion and morality, and it is also used as a method of stifling female libido.
The Sudanese government has introduced stiff penalties for those who continue to perform the procedure; however, the practice, which is still not criminalised by law in Sudan, remains widespread, particularly in rural communities.
The eradication of FGM is further complicated by cultural and societal pressures, as well as religious sensitivities surrounding the issue.
UNICEF is providing support for a national strategy to abolish FGM known as the Saleema initiative. Conceived in 2008, the campaign is being supervised by the National Council on Child Welfare.
The campaign slogan ‘Saleema’, meaning ‘whole’ or ‘intact’, is used to signify that a girl should remain the way she was born and attempts to promote the positive aspects and health benefits of not performing FGM on girls.
However, despite an extensive media campaign, the strategy has been criticised by some advocates for being too vague as it does not directly refer to FGM.
An estimated 125 million women and girls are thought to have undergone FGM in 29 countries in Africa and the Middle East.
Thousands of miles above the atmosphere, a gigantic sculpture surrounds our planet.
The sculpture is not solid, but is made of electrically-charged, sub-atomic particles — bits and pieces of busted-up atoms.
Magnetize an iron needle and hang it by a thread to make a compass. The needle swings to point, roughly, north/south. The needle seeks, not Earth’s geographic poles — true north and south — but her magnetic poles. The magnetized needle aligns with Earth’s magnetic field.
Rub a stick-pen briskly through your very dry hair, then move the pen just over your arm, without touching a hair. Rubbing the pen gives it an electric charge, and you can see — even feel — the hair on your arm rise in response to the pen’s electric field.
Electrically charged sub-atomic particles also respond to electric and magnetic fields. The picture on an old-fashioned TV tube is painted by a beam of such particles, steered by such fields.
The charged particles that make up the giant sculpture surrounding Earth are worked and re-worked by electric and magnetic fields generated by Earth’s electromagnetic metal core — by the sun — by lightning bolts — by electric currents that course over our heads.
Working with and against each other, the fields mold Earth’s magnetosphere, the region of space dominated by Earth’s magnetic field, a region that stretches out millions of miles.
Even as they are molded by these fields, the charged particles — the “clay” — push and pull back on the fields (think of pushing two magnets together). The “clay” re-shapes itself, and re-shapes the fields, tweaking the sculptors that sculpt it.
Electric and magnetic fields don’t just shape the magnetosphere; they pump energy into its particles, accelerating them to velocities close to the speed of light — relativistic velocities.
Electric and magnetic fields focus these relativistic particles into a region that extends some thousands of miles above the atmosphere — the Van Allen Radiation Belts.
Much as a fast bullet penetrates armor, relativistic particles penetrate the skin of a spacecraft, and anything or anyone within that spacecraft.
Mostly, the International Space Station orbits below the radiation belts. But Earth’s magnetic field is off-center, pushing the radiation belts low over the South Atlantic. As they pass through the South Atlantic Anomaly, ISS astronauts hunker down in a part of the space station that provides the most shielding.
The radiation belts are a death trap — even for things that never lived. Satellites passing through the South Atlantic Anomaly regularly turn off their electronics, lest the radiation damage their logic circuits.
If possible, spacecraft avoid the radiation belts entirely.
The sun’s magnetic cycle should peak, shortly after its sunspots peak, sometime in the next couple of years. During that peak, the sun’s magnetic field most actively sculpts Earth’s magnetosphere.
A pair of twin spacecraft has just launched to study how Earth's radiation belts are sculpted and how their particles are energized, and especially to learn why sun-triggered geomagnetic storms — major disruptions of electric and magnetic fields — sometimes energize the belts, sometimes de-energize them, and sometimes do nothing.
For the next two years, the Radiation Belt Storm Probes — their electronics shielded by extra-thick walls — will not avoid the radiation belts, but will live in them.
Al Stahler’s science programs can be heard on KVMR (89.5 FM). He teaches classes to students of all ages, and may be reached at [email protected]
It's estimated that a total of 20 million Americans have a thyroid disorder, but 60% of those people are not diagnosed.
Every cell in the human body needs thyroid hormone to function properly. It effects every system of the body and helps control metabolism. Proper testing is sorely needed.
Unfortunately there are three major problems with modern testing conventions. The normal ranges are inaccurate, testing is incomplete, and antibody tests are rarely administered.
Normal TSH Range
TSH is an abbreviation for thyroid stimulating hormone. It is released by the pituitary gland in response to the levels of thyroid hormones in the body. Increased TSH tells the thyroid gland to increase hormone production.
Most TSH ranges are far too broad. The original range was created in 1973 by testing a group of 200 people and making a bell curve. Those in the tall portion of the bell curve are considered “normal”. The problem is that this group did not exclude people already diagnosed as hypothyroid, or those who may have had it undiagnosed. This makes the resulting range inaccurate.
The range they came up with was around 0.5-5.0 mU/L (milliunits per liter). In 2002 the American Association of Clinical Endocrinologists updated it to 0.3-3.0 mU/L. Some places have adopted the new standard, but others have not. Different places use different ranges causing much confusion.
Some studies found a range of 0.5-2.5 was normal in healthy adults. The TSH normal range needs to reflect the true normal and needs to be standardized.
Incomplete Hormone Testing
If you walk into your doctor's office and ask them to test your thyroid, they will most likely just test your TSH. TSH goes up in response to low thyroid hormone levels to tell your gland to make more thyroid hormone. So high levels mean that your thyroid is low. This does make sense; however, that is only part of what is going on.
The thyroid gland makes a hormone called T4. This is the inactive form of the thyroid hormone. The liver then converts T4 to T3, which can be used in the cells. If you have trouble with the conversion you could be hypothyroid with high-normal TSH levels. In these cases you have enough thyroid hormone, but your body can't use it.
Another level of complexity is total versus free T3 tests. Total T3 is exactly what it sounds like, the total amount of thyroid hormones in the body. In order to be transported through the body it has to be attached to a carrier protein. Free T3 is the amount not attached to one of these proteins. The hormone then has to separate from the protein to cross the cell membrane and be used.
Think of the carrier protein as a car. You get in your car and drive to work. You need to get out of your car to walk through the door. You can’t fit your car through the door. If the hormone fails to separate it can’t go to work. Normal total T3 levels but low Free T3 causes hypothyroidism because the hormone is not available inside the cell.
In order to get the big picture all of the above tests need to be done. TSH is only one component of a very complex system.
Two thyroid diseases are caused by autoimmune disease. Hashimoto’s thyroiditis is hypothyroidism caused by the immune system attacking the thyroid gland. It is the most prevalent cause of hypothyroidism in the industrialized countries. Similarly Graves disease is the leading cause of hyperthyroidism. It causes an antibody to be made that mimics TSH.
Hashimoto’s is diagnosed by testing for thyroid peroxidase (TPO) and antithyroglobulin antibodies (TG). Testing for Graves disease checks for thyroid stimulating immunoglobulin (TSI).
Unfortunately, despite their prevalence, the tests for these diseases are rarely done.
Many people are going to their doctor complaining of thyroid disease symptoms and being told they are fine. I was one of them, until I was finally diagnosed with Hashimoto’s myself. If you suspect your thyroid may be the root of your problems, talk to your doctor about these tests. If they don’t listen, find someone who will.
I am not a medical professional and cannot give medical advice. Find a naturopath or holistic doctor who will help you look at your big picture.
A study led by researchers at The University of Nottingham has identified a gene that protects the body from lung cancer.
The research, published in the journal Proceedings of the National Academy of Sciences, USA and funded by a 72,000 grant from the British Lung Foundation, has found that the tumour suppressor gene, LIMD1, is responsible for protecting the body from developing lung cancer paving the way for possible new treatments and early screening techniques.
Lead researcher Dr Tyson Sharp and his University of Nottingham team, together with US collaborator Dr Greg Longmore, set out to examine if loss of the LIMD1 gene correlated with lung cancer development.
The University of Nottingham team examined lung cancer tissue from patients with the disease and compared it to healthy lung tissue. They found that the LIMD1 gene was missing in the majority of lung cancer samples, indicating that the presence of the LIMD1 gene protects the body against lung cancer.
Dr Greg Longmore's team in the USA supported these findings using a mouse lacking the LIMD1 gene, which developed lung cancer.
Dr Sharp said: "The LIMD1 gene studied in this research is located on part of chromosome 3, called 3p21.
"Chromosome 3p21 is often deleted very early on in the development of lung cancer due to the toxic chemicals in cigarettes, which implies that inactivation of LIMD1 could be a particularly important event in early stages of lung cancer development.
"We are now going to extend these finding by developing LIMD1 as a novel prognostic tool for detection of early stage lung cancer."
Lung cancer is the UK's biggest cancer killer, claiming around 33,600 lives a year. Ninety per cent of cases are caused by smoking. At present lung cancer is often detected late, meaning that 80 per cent of patients die within a year of being diagnosed.
Dame Helena Shovelton, Chief Executive of the British Lung Foundation said: "This is very exciting research which could lead to the development of early screening techniques and treatments for lung cancer. We are very proud to have made this breakthrough possible".
|Contact: Dr. Tyson Sharp|
University of Nottingham
What is the ALS Prediction Prize and why are the solutions so important?
The DREAM-Phil Bowen ALS Prediction Prize4Life Challenge aims to confront a basic puzzling question in ALS: most patients are like Lou Gehrig, with a rapidly progressing disease course. Some patients, however, turn out to be more like Stephen Hawking, where the disease progression is delayed. What separates the Lou Gehrigs from the Stephen Hawkings? Within the ALS patient population there is enormous variability with some people living for many years or even decades, while others die much sooner. This makes it extremely difficult to develop new and effective treatments for this as-yet incurable disease. Solving this mystery is important, for patients and their families and for those planning clinical trials of potential new treatments. The winners of the ALS Prediction Prize may hold the key.
On average, people diagnosed with ALS, a fatal disease, live about 1,000 days (around three years). Therefore, it is extremely difficult to develop new and effective treatments for this as-yet incurable disease. Currently, there is only one FDA approved drug available for ALS patients. The drug does not improve quality of life and only extends life by about three months.
How were the winners of the ALS Prediction Prize able to determine their solutions?
The winning solvers of the ALS Prediction Prize have developed algorithms that predict a given patient's disease status within a year's time based on three months of data. This solution is important because it could impact how clinical trials for ALS therapies are designed and conducted, fostering faster breakthroughs in effective treatments for the disease.
Registered solvers were provided a small subset of data from the PRO-ACT database, the largest database of clinical data from ALS patients ever created. The fully anonymized data includes patient demographics, medical and family history data, functional measures, vital signs and lab results. The full set of data from over 8,500 ALS patients will be globally available for research purposes beginning December 5, 2012.
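As a rough illustration of the kind of task the challenge posed, the sketch below fits a straight line to three months of hypothetical ALSFRS functional scores and extrapolates it to one year. It is not the winners' algorithm, and the visit days and scores used here are invented.

    # Illustrative sketch only: a naive slope-extrapolation baseline for the kind of
    # prediction task posed by the challenge. Not the winning algorithms; the visit
    # schedule and scores are hypothetical.
    import numpy as np

    def predict_month12_alsfrs(visit_days, alsfrs_scores):
        """Fit a straight line to scores observed in roughly the first three months
        and extrapolate it to day 365."""
        days = np.asarray(visit_days, dtype=float)
        scores = np.asarray(alsfrs_scores, dtype=float)
        early = days <= 92                      # keep only the first three months
        slope, intercept = np.polyfit(days[early], scores[early], 1)
        return slope, intercept + slope * 365.0

    # Toy patient: four clinic visits in the first three months
    slope, predicted = predict_month12_alsfrs([0, 30, 61, 92], [38, 37, 35, 34])
    print("estimated decline per day:", round(slope, 3))
    print("extrapolated ALSFRS at one year:", round(predicted, 1))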
The Prediction Prize is a powerful example of how "Big Data" can lead to improved advances in medicine. Anyone with quantitative abilities, be they an engineer or atmospheric chemist, can help in the fight against ALS.
Why did we use a crowdsourcing approach for the ALS Prediction Prize?
Currently, ALS trials must include large numbers of patients to account for the enormous variance in the course of the disease progression within the ALS patient population, making these trials costly, slow and difficult to interpret. By making clinical trial data available to a global community of data scientists, researchers, and computer mavens, we are speeding up the process while driving down the costs of discovery, which is good news for both the scientific and patient communities we serve.
Prize4Life's mission is to accelerate the development of treatments for ALS using a prize-for-breakthrough model. The ALS Prediction Prize follows the success of the $1 million ALS Biomarker Prize4Life awarded earlier this year; both are examples of our organization successfully using crowdsourcing to encourage scientific and medical breakthroughs.
Who are the winners?
Two teams have secured first place in the ALS Prediction Prize: a duo from Stanford University, postdoctoral candidate in mathematics and statistics Lester Mackey, and recent JD and Master's Degree recipient Lilly Fang; and the team of Liuxia Wang, Principal Scientist, and her colleague Guang Li, Quantitative Modeler at Washington, DC-based scientific marketing company, Sentrana. Each team will receive $20,000 for generating the top-performing solutions to predict disease progression in ALS patients.
In addition, Torsten Hothorn, a distinguished statistics professor from Germany, was awarded a second-place prize of $10,000 for his unique solution which included an alternative approach to assessing disease progression to that specified in the challenge criteria. The ALS Prediction Prize judging panel found that using this alternative method could potentially yield highly impactful results, so the organization created a second-place prize to recognize Hothorn's innovative thinking.
Why did Prize4Life choose three winners?
We saw an overwhelming response to the competition, and the efforts from data scientists worldwide resulted in at least three viable (and potentially complementary) solutions. For ALS patients and their families, this is a huge win and a very promising step toward effective treatments for ALS.
Among the many proposed solutions submitted over the Innocentive platform, the solutions offered by the Wang/Li and Mackey/Fang teams scored virtually identically, even though both the statistical methods and the parameters chosen by each team were different; in addition, the solution offered by Hothorn scored extremely closely to that of the other two teams.
Over 1,000 individuals and teams registered to participate in the challenge, and 25 of them submitted complete algorithms. Given the quality of the results submitted, our judges' panel realized it was impossible to award just one prize as we had originally planned. With the help of a generous donor deeply committed to helping find a cure for ALS, we decided to double the prize amount we had initially allocated for the winning solutions.
What do these solutions mean to the search for treatments and a cure for ALS?
The solutions to the ALS Prediction Prize bring us one step closer to effective treatments for ALS. Currently, it is impossible to know upon being diagnosed with ALS how long a patient will live. New prediction tools, such as those developed by the winners of the ALS Prediction Prize, give scientists and medical experts another weapon in their arsenal to use in the fight against ALS.
Prize4Life is a non-profit organization and our mission is to accelerate the development of treatments for ALS using a prize-for-breakthrough model. Prize4life awarded $1M to the winner/solver of the ALS Biomarker Prize4Life Challenge and is now awarding $50,000 to the winners/solvers/ of the ALS Prediction Prize; both are examples of the organization successfully using crowdsourcing to encourage scientific and medical breakthroughs.
Who did Prize4Life partner with?
Prize4Life partnered with three organizations for the ALS Prediction Prize:
- InnoCentive - the world's largest prize platform, which has an international solver network numbering in the millions
- The DREAM Project - an international organization of computational biologists dedicated to open access data and challenges addressing important scientific problems (organizers of RECOMB conference). http://www.the-dream-project.org/
- The family of Phil Bowen - Mr. Bowen's son Peter, who was also involved in the start-up phase of creating the PRO-ACT database, raised a large amount of funding and played an integral role in garnering attention and awareness for the prize in honor of his father, who died of ALS http://fundraise.prize4life.org/e/pbowen
Who are the ALS Prediction Prize Judges?
Merit Cudkowicz M.D. MSc. is the Julianne Dorn Professor of Neurology at Massachusetts General Hospital, at Harvard Medical School. Dr. Cudkowicz completed medical training at the Health Science and Technology program of Harvard Medical School, and she was a resident in Neurology at MGH. She obtained a Master's degree in Clinical Epidemiology from the Harvard School of Public Health. Dr. Cudkowicz's research and clinical activities are dedicated to the study and treatment of patients with neurodegenerative disorders, in particular amyotrophic lateral sclerosis (ALS). Dr. Cudkowicz directs the MGH ALS clinic and the Neurology Clinical Trials Unit. She is one of the founders and co-directors of the Northeast ALS Consortium (NEALS), a group of 92 clinical sites in the United States and Canada dedicated to performing collaborative academic-led clinical trials in ALS. In conjunction with the NEALS consortium, she planned and completed 7 multi-center clinical trials in ALS and is currently leading three new trials in ALS. Dr. Cudkowicz received the American Academy of Neurology 2009 Sheila Essey ALS award. She is actively mentoring young neurologists in clinical investigation. Dr. Cudkowicz is on the Research Council of the American Academy of Neurology and the medical advisory boards for the Muscular Dystrophy Association.
Orla Hardiman BSc MB BCh BAO MD FRCPI FAAN is a consultant neurologist. She is an HRB Clinician Scientist, Clinical Professor of Neurology at the University of Dublin and a Consultant Neurologist at the National Neuroscience Center of Ireland at Beaumont Hospital, Dublin. Hardiman has become a prominent advocate for neurological patients in Ireland, and for patients within the Irish health system generally. She is co-founder of the Neurological Alliance of Ireland and the Doctors Alliance for Better Public Healthcare. Hardiman is the current Dean of the Irish Institute of Clinical Neuroscience. In the past, she established the bi-annual Diaspora Meeting, a forum for Irish neurologists based overseas to present and discuss their research findings with neurologists working in Ireland.
Robert Küffner habilitated in informatics in 2010 and is currently a group leader for computer science and bioinformatics at the Ludwig-Maximilians Universität München. He received his PhD in molecular biology in 1998 at the Heinrich-Heine Universität in Düsseldorf, Germany. His main interests include the analysis of biological networks via Petri Nets as well as the areas of text mining, expression analysis, gene regulation, and systems biology. Recently, Dr. Küffner's team was recognized as the best performer in two international community-wide computational challenges where comprehensive blinded assessments of network inference approaches have been conducted.
Raquel Norel is part of the Functional Genomics and System Biology Group in IBM Research, where she uses math and computing to bring insight to complex biological problems. Recently she has been working on collaboration-by-competition projects; since 2010 she has contributed to the DREAM project as an organizer and scorer. She contributes regularly to Faculty of 1000. Raquel holds a PhD in Computer Science from Tel Aviv University, where she wrote her thesis on "Algorithms for Protein Docking" while co-authoring 10 papers on the subject. She also has an MSc in Computer Science from the Weizmann Institute of Science, with a thesis on "A Model for the Adjustment of the Mitotic Clock by Cyclin and PMF levels"; this work was published in Science. She earned a BSc in Engineering Sciences from Universidad de Chile.
Gustavo Stolovitzky received his M.Sc. in Physics (with honors), from the University of Buenos Aires (1987) and his Ph.D. in Mechanical Engineering from Yale University (1994), which awarded him the Henry Prentiss Becton Prize for Excellence in Engineering and Applied Sciences. In 1998 he joined the IBM Computational Biology center at IBM Research where he is the manager of the IBM Functional Genomics & System Biology Group.
Gustavo has had an active role in organizing the systems biology community. He founded and leads the DREAM project, an international effort that has nucleated thousands of participants to assess the performance of systems biology methods. He also co-organizes the RECOMB Systems and Regulatory Genomics and DREAM challenge conferences, which have attracted around 500 attendees every year for the last 4 years. He has co-authored more than 100 scientific publications, has 9 issued patents, and has edited 2 books. His work has been highlighted in The New York Times, The Economist, Technology Review, and Scientific American (where his DNA transistor project was chosen as one of the 10 world-changing ideas of 2010), among other media. Gustavo is a member of the PLoS ONE and OMICS editorial boards, and has been elected fellow of the NY Academy of Sciences, fellow of the American Physical Society, fellow of the American Association for the Advancement of Science, and fellow of the World Technology Network. He holds a position as an adjunct Associate Professor at Columbia University.
His most recent scientific interests are in the field of high-throughput biological-data analysis, reverse engineering biological circuits, the mathematical modeling of biological processes, and next generation technologies for DNA sequencing.
Palm oil is on a lot of people’s minds. In Indonesia, the industry is booming, with US$19.7 billion of crude palm oil exports in 2011. But expanding oil palm plantations have taken their toll on remaining forests and other natural habitats in tropical regions and led to conflict over land with local people.
The world’s top scientists are also raising concerns. According to a recent study in Nature Climate Change, from 1990 to 2010, 90 percent of lands converted to oil palm plantations in Kalimantan were forested.
There need not, however, be a trade-off between palm oil, forests and communities. It is possible to grow more crops, including oil palm, while keeping forests, and also cutting rural poverty.
In order to do so, companies and investors must lead by supporting sustainable production on land that has already been cleared, while also ensuring that local people benefit and consent to new plantations. Global markets and the governments of major producer countries should give stronger support to such efforts.
It was therefore encouraging to see that last week in Singapore, at the Roundtable on Sustainable Palm Oil (RSPO) 10th annual meeting, the UK government and 14 major industry associations pledged to buy only certified sustainable palm oil by 2015.
Big buyers like the British Retail Consortium, Food and Drink Federation, and Seed Crushers and Oil Processors Association have all signed on.
And the US Environmental Protection Agency (EPA) has indicated that emissions associated with deforestation could keep Indonesian palm oil from meeting US renewable biofuel standards.
The RSPO, which manages a voluntary market standard against which production can be certified, requires new plantations to obtain free, prior and informed consent of any affected local communities and avoid loss of biodiversity-rich forests.
It also excludes plantations that have resulted in recent forest clearing. Major international buyers and traders, such as Walmart, Unilever and Nestlé, as well as mainstream environmental groups like WWF, endorse the approach.
This market-driven strategy helps to bolster efforts by the Indonesian government.
Further government involvement is essential in making the shift to responsible investment more attractive and less burdened by excessive red tape and slow approvals processes, for example when approving “land swaps” that allow companies to exchange prospective plantation sites within forests for already cleared or degraded land nearby.
New research by the global environment and development think tank, the World Resources Institute (WRI), highlights the opportunity that is being missed to expand production onto already-deforested land in order to spare forests.
WRI has published a method enabling rapid identification of already-deforested land that could be suitable for sustainable oil palm cultivation.
The method and a suite of online applications, launched at the RSPO meeting last week, enable planners, investors and communities to quickly find land where oil palm can be grown without contributing to forest clearing and burning.
Use of these applications must be combined with field visits and community consultations to ensure local community land and resource rights are respected.
WRI’s tools suggest that more than 14 million hectares of land in Kalimantan may be suitable for sustainable palm oil production. Not all of these hectares should necessarily become plantations; local people may not want particular tracts to go into oil palm.
But the scale of potential is significant. For comparison, experts have predicted a total of 3 to 7 million hectares of oil palm cultivation expansion in all of Indonesia.
These numbers suggest that palm oil production targets could be met without clearing another hectare of forest or draining more peatland.
Oil palm is a remarkable crop. It creates much-needed employment and opportunities for smallholder farmers in some of the poorest and most remote rural regions.
Financial returns for larger investors have also been outstanding and seem likely to remain buoyant as demand grows.
But in order for Indonesian palm oil to maintain global market access, land use planning to reduce forest clearing and community consent will need to become the norm.
The resulting expansion in sustainable palm oil production would be a huge boost to the Indonesian economy, and also set an example for the rest of the world of how to grow a nation’s economy while also conserving its forest and reducing poverty.
The writer leads the forests team at the World Resources Institute.
Plants 2013, 2(1), 16-49; doi:10.3390/plants2010016
Abstract: Selection and adaptation of individuals to their underlying environments are highly dynamical processes, encompassing interactions between the individual and its seasonally changing environment, synergistic or antagonistic interactions between individuals and interactions amongst the regulatory genes within the individual. Plants are useful organisms to study within systems modeling because their sedentary nature simplifies interactions between individuals and the environment, and many important plant processes such as germination or flowering are dependent on annual cycles which can be disrupted by climate behavior. Sedentism makes plants relevant candidates for spatially explicit modeling that is tied in with dynamical environments. We propose that in order to fully understand the complexities behind plant adaptation, a system that couples aspects from systems biology with population and landscape genetics is required. A suitable system could be represented by spatially explicit individual-based models where the virtual individuals are located within time-variable heterogeneous environments and contain mutable regulatory gene networks. These networks could directly interact with the environment, and should provide a useful approach to studying plant adaptation.
There is an increasing awareness of how our climate is changing due to continuing urbanization and industrialization of our planet, and of the possible conservational, ecological and sociological implications. With research studies demonstrating that climate change can affect crops on a genotypic and a phenotypic level , it is desirable to improve our understanding for plant adaptation so that it may be exploited to produce crops more resilient to shifting climates, pests and disease, which in turn can be grown to produce larger yields. A related field is the study of genotype-by-environment (GxE) interaction. GxE interactions are widely studied within epidemiological studies [3,4,5], and are of particular relevance to agronomy. These studies are concerned with finding significant correlations between crop genotypes and non-genetic factors, such as climate or environments, in the interests of increasing crop yield [6,7,8,9,10]. In order for such GxE interaction studies to be performed, a comprehensive knowledge of genotypes and of polymorphic non-neutral loci is required. Single-nucleotide polymorphism (SNP) data extracted from using amplified fragment length polymorphisms (AFLPs) are useful markers for demonstrating such genetic differentiation and AFLPs combined with whole genome-scans have previously demonstrated adaptation at different environments such as temperature mediated selection in trees . Mega-bases of sequence data belonging to non-model organisms may now be obtained from next-generation sequencing (NGS) technologies with such studies having been used to demonstrate population differentiation [14,15] and adaptation of individuals to different environments [16,17]. As the environments that organisms reside in are highly dynamic due to regular events such as seasons, night and day cycles, or due to unexpected events such as droughts or floods, it is desirable to quantify the differential gene expression of the alleles of interest. Experimental techniques for accurately identifying protein-DNA interactions such as chromatin-immunoprecipitation coupled with microarrays (ChIP-chip) [18,19] or in more recent years the use of RNA-Seq combined with ChIP-Seq has allowed differential expression patterns at different conditions and the inference of gene-regulatory networks (GRNs) to be determined. Complete GRNs when used in a predictive capacity will provide a useful tool for agronomists to improve crops [22,23]. In recent years, there has also been an interest in merging together the different disciplines in order to assess the differential gene expression data in segregating populations, in the form of eQTL’s . However, despite the numerous mathematical modeling studies that have developed GRNs from expression data, and the numerous programs and tools developed for population and landscape genetics used to model the evolution of individuals with neutral and adaptive loci, to our knowledge no studies have been made to combine the two disciplines and develop models that contain simulated individuals with GRNs that can adapt through a landscape. Such models would be capable of simulating a system that contains the hierarchical levels of regulation that are intricately involved in the adaptation of organisms to their environment. These levels include the gene, genome, individual, population and environment, Figure 1. In such models, a gene may interact with other genes and up-regulate or down-regulate its downstream target genes, eventually inducing a phenotype. 
The genome accumulates mutations and recombines its comprising homologous chromosomes to produce new genotypic variants, some of which may be adaptive or deleterious. The individual undergoes different life histories, generates gametes, reproduces, and either synergistically or antagonistically interacts with other individuals. The (sub)population collectively adapts to its local environment and may undergo range expansions and admixture with other populations within the meta-population, sometimes outcompeting these populations or forming hybrid zones when speciation events have occurred. Finally, the environment contains dynamic abiotic and biotic factors that may interact with the individuals. Abiotic factors include light or temperature that can change cyclically or unexpectedly, directly impacting on the needs of the individuals (such as facilitating or inhibiting their dispersal for instance), whereas biotic factors from other organisms interact with the individuals of interest. Plants are interesting organisms to model due to their sedentary nature. For example, their dispersal is more limited and often reliant upon environmental features in the case of anemophily and anemochory (wind dispersal of pollen and seeds), hydrophily and hydrochory (water dispersal) or is reliant upon other organisms in the case of entomophily and zoochory, or through cultivation by humans. Unlike animals, they are unable to migrate away from their environments and often exhibit phenotypic plasticity as a result. Furthermore, many plants are allopolyploids and autopolyploids with the potential for providing more of an insight into the underlying genetics, although at a greater complexity. In this review, we discuss previous population and landscape genetics simulation models, including simulations from our laboratory and current methodologies to simulation GRNs. We then move onto examples where population genetics models making use of GRNs will be beneficial within evolutionary biology.
2. Current Tools in Evolutionary Biology, Population Genetic and Landscape Genetic Simulation Models
2.1. Fisherian Population Genetics Models
Simulation models in population genetics classically are based on a number of simplifying assumptions, such as panmixia, non-overlapping generations and constant population sizes. These assumptions allow the mathematics behind these principles to be described formally and allow the simulated populations to behave in computationally tractable and deterministic ways, such as Hardy-Weinberg equilibria (HWE). Often these assumptions are biologically reasonable: for instance, it is not uncommon for plant species to be found exhibiting HWE [26,27,28,29,30], especially when pollen dispersal may be distributed via entomophily or hydrophily and seed dispersal via zoochory. Often in these cases, a departure from neutrality can indicate selection. Many population genetics simulation models are based on genealogical trees, with many being backwards-in-time coalescent simulations. In coalescent simulations, sampled alleles are traced back via the simulation of gametogenesis until the most recent common ancestor (MRCA) has been found. A tree-based forward-time simulation system, TreeSimJ, has also been developed, however. Programs such as ms and simCoal are coalescent simulation programs able to simulate genealogies and infer demography and population structure amongst a number of populations. simCoal has three mutation models: a two-allele finite-sites model for simulating RFLP data, a stepwise mutation model for microsatellite data and several finite-sites models for simulating mutation of DNA sequence data. The program simCoal has also been further developed to allow for diploid individuals, heterogeneous recombination rates between adjacent loci, multiple coalescent events per generation and the use of multiple time points as with ancient DNA data. The program ms has been further developed to process input recombination hotspots and to use elements of a forward-time simulator to model selection at a single diploid locus.
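For readers unfamiliar with the distinction, the following minimal sketch (not taken from ms, simCoal or TreeSimJ) simulates neutral drift forward in time under Wright-Fisher assumptions; a coalescent simulator would instead reconstruct only the genealogy of a present-day sample backwards in time.

    # Minimal sketch: forward-in-time Wright-Fisher drift at a single neutral
    # di-allelic locus, the kind of history that coalescent simulators summarize
    # backwards in time for sampled alleles only. Parameters are arbitrary.
    import random

    def wright_fisher(pop_size=100, p0=0.5, generations=200, seed=1):
        random.seed(seed)
        p = p0
        trajectory = [p]
        for _ in range(generations):
            # Each of the 2N gene copies in the next generation is drawn at random
            # from the current allele pool (binomial sampling).
            copies = sum(1 for _ in range(2 * pop_size) if random.random() < p)
            p = copies / (2 * pop_size)
            trajectory.append(p)
        return trajectory

    freqs = wright_fisher()
    print("final allele frequency after 200 generations:", freqs[-1])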
2.2. Landscape Genetics
In reality, geographic landmarks such as lakes, mountains and even roads can provide barriers to gene flow sufficient enough to induce population differentiation, and biotic, climactic and edaphic factors can induce adaptation of individuals at different geographical locations during range expansions. Such biogeographic effects concern the developing fields of Landscape Genetics [40,41], which broadly speaking can be described as a combination of the fields of population genetics and landscape ecology (the field concerned with the interactions between ecological processes and the underlying spatial contexts in which these processes reside). MS and simCoal for example are able to take into account spatial information by the use of migration matrices between subpopulations with either the stepping-stone or island models. Another notable program, SPLATCHE (SPatiaL And Temporal Coalescences in Heterogeneous Environments) along with SPLATCHE2 has been developed in mind to simulate the expansion of a population through an arena comprised of heterogeneous environments. Each SPLATCHE simulation is comprised of two simulations: The first being a forward-in-time simulation of the demographic and spatial expansion, and the second step being a coalescent simulation based on simCoal for reconstructing the genealogies throughout the simulated subpopulations. Here the input terrain files (input from a “vegetation” and a “roughness” ascii raster file, the format used in most geographical information systems (GIS)) are used to represent geographic regions with variable carrying capacities and friction values, a parameter used to represent the difficulty of migration from one deme to another. SPLATCHE allows dynamic simulations such that carrying capacities and friction values may change throughout a simulation according to an input file, and can generate DNA, STR, RFLP and standard genetic data as an output. SPLATCHE has been used in previous studies on range expansions [44,45]. A number of other simulation studies have included demic information [46,47] and the use of population units within simulations lends itself conveniently to the calculation of population-based measures of differentiation, such as Fst. Such simulations could be described as being spatially implicit and are often biologically reasonable, as populations can be found within discrete units. For instance Manel et al. gives fish in isolated ponds or birds nesting on separate islands within archipelagos as examples . However, many populations are found to exhibit continuous genetic differences across space, as is the case with Arabidopsis thaliana over Eurasia and North America . When individuals are distributed across an area exhibiting a gradient of a certain influencing environmental variable, spatial autocorrelations of the genotypes and the variable magnitude can reveal clinal variation: This has been seen with the flowering times of Barley latitudinally across Europe . Such high-resolution genetic data may be obtained by the explicit simulation of individuals rather than populations whose interaction is spatially constrained within a two or a three dimensional arena. Such simulation models are termed spatially explicit individual-based models (SIBMs).
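The sketch below illustrates one way a SPLATCHE-style friction raster could be converted into migration weights between neighboring demes on a grid; the raster values and the weighting scheme are invented for illustration and this is not SPLATCHE's actual algorithm.

    # Hypothetical sketch of how a friction raster might modulate migration between
    # neighboring demes on a grid; not SPLATCHE's actual algorithm, and the raster
    # values are invented for illustration.
    import numpy as np

    friction = np.array([[1.0, 1.0, 5.0],
                         [1.0, 2.0, 5.0],
                         [1.0, 1.0, 1.0]])   # higher value = harder to enter the cell

    base_migration = 0.1                      # per-generation emigration rate

    def migration_weights(friction, r, c):
        """Split a deme's emigrants among its four neighbors, penalising
        high-friction destinations."""
        rows, cols = friction.shape
        neighbors = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
        raw = {n: 1.0 / friction[n] for n in neighbors}
        total = sum(raw.values())
        return {n: base_migration * w / total for n, w in raw.items()}

    print(migration_weights(friction, 1, 1))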
2.3. Spatially Explicit Individual-Based Models and Their Use in Simulation Studies
Interest in forward-time individual-based models (IBMs) has arisen from the potential for increased individual heterogeneity and stochasticity within the system. Within IBMs, the individual becomes the fundamental modeling unit within the system, unlike mean-field models, where populations are represented as homogeneous collections of individuals with identical attributes based on summary statistics. The various states that the individual may occupy can therefore be modeled explicitly, allowing for different life histories and other behaviors to be incorporated that may provide more biological realism to the model. These models are generally less efficient than coalescent models, as the coalescent will only simulate genealogies from surviving offspring that have made it to the present, and not the entire evolutionary history as with IBMs. However, the greater flexibility posed by forward-simulation models may make them more desirable in some studies and it has been suggested that a tradeoff between the two modeling approaches exists in terms of efficiency and flexibility [50,51].
2.3.1. Semi-Spatial Models
A number of software tools using IBMs have been developed. These include EasyPop, a population genetics simulator to simulate neutral loci datasets under various mating schemes and migration models; IBDSim, a program for simulating isolation by distance between individuals; QuantiNemo, an individual-based model for simulating quantitative traits amongst individuals within heterogeneous “patches”; and SimuPop, a flexible simulator that consists of a library of Python functions that are required by the user to be “glued together” within a Python script, which again has various different mating schemes and migration models at the user's disposal [55,56]. GenomePop is an IBM that utilizes Markovian nucleotide or codon models of DNA mutation, such as the Jukes-Cantor or general time reversible mutation model, to generate synonymous and non-synonymous mutations. GenomePop thus provides an IBM that can simulate more information at the nucleotide level. GenomePop can also simulate recombination, allow constant or variable population sizes and provides different migration models such as the island model and the stepping-stone model.
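As a small illustration of the Markovian nucleotide models mentioned above, the following sketch applies a single Jukes-Cantor-style mutation step to a sequence; it is not code from GenomePop, and the mutation rate is arbitrary.

    # Sketch of a single Jukes-Cantor mutation step, the simplest of the Markovian
    # nucleotide models (every substitution equally likely); not code from GenomePop.
    import random

    def jukes_cantor_mutate(sequence, mu=0.01, seed=5):
        """Return a copy of the sequence in which each site mutates with probability
        mu, choosing uniformly among the three alternative bases."""
        random.seed(seed)
        bases = "ACGT"
        out = []
        for b in sequence:
            if random.random() < mu:
                out.append(random.choice([x for x in bases if x != b]))
            else:
                out.append(b)
        return "".join(out)

    print(jukes_cantor_mutate("ACGTACGTACGTACGTACGT", mu=0.2))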
2.3.2. Spatially Explicit Models
The programs listed in [42,43,52,53,54] have been described as being semi-spatial . However, due to the flexibility of IBMs they can readily have a fully spatial element incorporated within them to become spatially explicit. Broadly speaking, spatially-explicit individual-based models (SIBMs) contain individuals that are distributed across an area, such as a lattice or matrix (although non-lattice models have been proposed ) and may interact with other individuals in a spatially constrained way rather than purely at random. A number of plant-based SIBMs simulation studies have also emerged [59,60,61] in which the spatial element of these models is of particular importance, due to the sedentary nature of plants. The spatial element is of increased importance in anemophilous crops and trees due to their limited dispersal, which follows a “leptokurtic” curve . Doligez et al. compared their simulated plant populations, when permitted to form a uniform distribution throughout their matrix, with the clumped populations that readily formed through limited dispersal. They found that the clumped populations exhibited greater spatial genetic structure than the continuously distributed populations, particularly when selfing was allowed. Kitchen and Allaby developed a plant-based SIBM to study the effects of spatial extension between individuals upon the heterozygosity of the plant populations when compared to mean-field HWE expectations. They showed that when plant-mating systems approximated mean-field assumptions (i.e., the density was such that the individuals were approximately randomly mating) the observed and expected heterozygosities were largely equivalent. However, the heterozygosity of individuals decreased from mean-field expectations as sparseness amongst individuals increased. AMELIE is a SIBM with a rather more direct application towards food-security and GM crops, and was used to study the amount of introgression from GM forests to conventional forests. It can also allow various life histories and mating systems and can provide demographic and environmental stochasticity. These simulations, however, are only simulating neutral markers and do not attempt to model selection. It is relatively straightforward to take an IBM or SIBM framework and then hard-code a specific adaptive trait, such as one that may influence selection through the perturbation of mortality, or reproductive rate, at a di-allelic or perhaps even a multi-allelic locus if necessary. However, the goal is to be able to account for a possible continuum in the range of landscape heterogeneity and on the strength of the selection inferred from the landscape. One emergent approach is to utilize the concept of resistance surfaces [63,64] and modify the surface in such a way as to produce a “fitness landscape”.
2.4. Resistance Surfaces
Resistance surfaces are essentially matrices that contain variables relating to different environmental or landscape features that may impede or facilitate connectivity between individuals in the form of migration or gene flow. They can be parameterized through field data as obtained from GIS systems and are useful for providing hypotheses on how spatial genetic structure may have formed through migration, introgression and dispersal. One notable SIBM that utilizes resistance surfaces is CD-POP (Cost Distance POPulations), which contains cost distance matrices for representing resistance to movement through the landscape. The program uses gradients of cumulative cost to impede dispersal between grid cells and can facilitate reproduction according to four different functions: linear, inverse square, nearest neighbor and random mixing. The initial version of CD-POP could be used only with neutral loci. However, this was improved upon in an important follow-up paper where CD-POP made use of a fitness landscape in order to simulate selection. CD-POP was upgraded to include a di-allelic single or multi-locus system with any number of neutral loci and up to two unlinked, di-allelic, selective loci (with alleles A, a, B, and b). Selection is then implemented according to the grid value where generated offspring reside and the genotypes of the selective loci that they contain. This represents an important step towards providing a general model for simulating selection. More recently, CD-POP's selection model has been used in another study to assess the role of adaptive and neutral markers towards population differentiation. Another open-source software tool that uses resistance surfaces is Circuitscape, which is based upon resistance paths that are analogous to those within an electrical circuit. It may be used to predict dispersal of animals or plants and patterns of genetic differentiation among populations in heterogeneous landscapes.
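The following sketch shows, in the spirit of CD-POP's approach, how survival could depend jointly on a di-allelic genotype and the value of an environmental surface at the offspring's grid cell; the fitness function and surface values are invented for illustration.

    # Sketch in the spirit of CD-POP's fitness-landscape selection (the particular
    # fitness function below is invented): offspring survival depends on the local
    # value of an environmental surface and the genotype at one locus.
    import random

    environment = [[0.0, 0.2, 0.5],
                   [0.3, 0.6, 0.8],
                   [0.5, 0.9, 1.0]]   # e.g. scaled temperature per grid cell

    def fitness(genotype, env_value):
        """'AA' favoured at high environmental values, 'aa' at low, heterozygote
        intermediate."""
        optimum = {"AA": 1.0, "Aa": 0.5, "aa": 0.0}[genotype]
        return max(0.0, 1.0 - abs(env_value - optimum))

    def survives(genotype, row, col):
        return random.random() < fitness(genotype, environment[row][col])

    random.seed(3)
    for g in ("AA", "Aa", "aa"):
        print(g, "fitness in cell (2,2):", fitness(g, environment[2][2]))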
These efforts in landscape genetics simulations represent the first steps towards relating genotype to environment and the resulting effects on selection and adaptation. As with CD-POP, different genotypes at unlinked loci may produce different effects on the fitness of an individual according to the spatial grid point on which it is located. However, in reality genes do not exist in isolation but exist in networks, and through cis-acting and trans-acting regulatory effects can up-regulate or down-regulate each other, ultimately affecting the expressed phenotype in a dynamic way. It has therefore been suggested that, in the interest of genotype-to-phenotype mapping, genes should be considered in the context of networks. We discuss genes within networks in the next section.
3. GRNs, Network Motifs and Inference
Efforts to ascertain all the interacting genes with regards to the expression of a particular phenotype is an area of which is highly relevant to most, if not all, disciplines within biology. Such information, for instance, could provide biologists with potential molecular targets, be they genes, proteins or metabolites, whose function may be altered through gene silencing, catabolism, or through agonistic or antagonistic ligands. The identification of GRNs has multiple uses ranging from developing drug targets in complex disease, understanding stress response (with clear uses in developing drug targets and in agronomy), decreasing antibiotic, herbicide or pesticide resistance and identifying key developmental genes. One application of a GRN can be to model transcriptional networks within a cell, although interactions at the proteomic and metabolomic level and other areas of the “interactome” may also be modeled. Transcription factors (TFs) may behave as transcriptional activators that up-regulate other TFs or behave as transcriptional repressors that can down-regulate their targets. The crosstalk between the up- and down-regulation of transcription allows dynamicity to the amount of protein product that is expressed, which ultimately, will have an effect on the phenotype of the individual. One example is the GRN regarding photoperiodicity and vernalization of barley as described by Fuller and Allaby (Figure 2), which is closely related to the GRNs of wheat and Arabidopsis thaliana, a model organism widely used in GRN related studies . In this, relatively simple, pathway, gene Vrn2 down-regulates Vrn1, which through a series of upstream interactions indirectly promotes flowering. The increased cold and lower amount of light from the shorter days in winter down-regulates Vrn2, thus limiting the repressive effect Vrn2 has upon Vrn3 and Vrn1. This lack of repression is insufficient, however, to promote flowering alone and a period of long days during summer is required to activate gene Ppd1 and the remaining cascade, which leads to flowering. This simple example emphasizes the role that cyclical environmental patterns have upon expression of the phenotype. Indeed, through mutation the sensitivity of these genes to their environmental inputs may become altered. For example, a loss of function mutation of PPD1 renders the plant less sensitive to sunlight and delays flowering, whereas a loss of function mutation in VRN1, VRN2 or VRN3 results in an early flowering phenotype due to increased sensitivity. These mutations have been shown to be the cause for clinal variation of Barley across Europe , with late flowering plants being more prevalent in darker northern Europe, and the early flowering phenotype more common in southern Europe.
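A deliberately coarse Boolean caricature of this pathway is given below; it is intended only to illustrate how environmental inputs (vernalization, day length) and a loss-of-function allele can propagate through a small regulatory network, not as a quantitative model of VRN/PPD regulation.

    # Coarse Boolean caricature of the barley pathway sketched above, for illustration
    # only; real VRN/PPD regulation is quantitative and considerably more complex.
    def flowering_state(vernalized, long_days, ppd1_functional=True):
        vrn2 = not vernalized            # winter cold and short days shut VRN2 down
        ppd1 = long_days and ppd1_functional
        vrn3 = ppd1 and not vrn2         # VRN3 needs long days and release from VRN2
        vrn1 = vrn3 and not vrn2         # VRN1 is induced once repression is lifted
        return vrn1 and vrn3             # both are needed to commit to flowering

    for vern in (False, True):
        for days in (False, True):
            print("vernalized=%5s long_days=%5s -> flowers: %s"
                  % (vern, days, flowering_state(vern, days)))
    # A loss-of-function ppd1 allele makes the plant less responsive to day length:
    print("vernalized, long days, ppd1 mutant ->", flowering_state(True, True, False))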
3.1. The GRN Topologies Observed in Nature
The genes within a network may be visualized as directed graphs containing a set of nodes, representing the genes, protein and/or metabolites, connected by a set of edges, which represent the interactions between these nodes. The number of edges that belongs to a node is its degree, and the distribution of the number of edges across networks is the degree distribution. Intuitively it may be assumed that the degree distribution would approximate a Poisson distribution, however, conversely they tend to approximate a power-law distribution, where most nodes are sparsely connected and a small number has a much larger degree [72,73]. When auto-regulation of genes is not permitted, the maximum number of edges within a network of size N must necessarily be N(N-1) edges, however, many genes do regulate themselves as in single-gene positive or negative feedback loops. Expression data obtained from technologies such as Yeast 2-Hybrid, ChIP-chip or ChIP-Seq can provide relationships such as correlative relationships between sets of expression data. The resultant expression data can be processed by software and mathematical models can be inferred (reviewed in [74,75]). An interesting paradigm emergent from this data is the existence of common network topologies that are observed across different taxa and even different types of networks (i.e., non-GRNs). This paradigm was first observed by Milo et al. [76,77] who generated null distributions of network sub-graphs through randomizing the edges of networks with the same degrees, and selected motifs that were found to be in numbers significantly higher than at random . A follow up study used z-scores to calculate a significance profile for comparison of network local structure when compared with random structures . Both studies found commonly occurring motifs not only within transcriptional networks, but also within protein-signaling networks, neuronal networks and non-biological networks, such as those found in social networks, power-grids and within the World Wide Web. These methods did receive some criticism, however. For example, it was stated that C. elegans neuronal pathways are spatially dependent with networks being formed between spatially closer nodes and that these spatial dependencies were not included by Milo et al. in their network inference . Common examples of the motifs observed are illustrated in Figure 3. These include the single and multi-input modules, the positive feedback loop, the negative feedback loop, the three-cycle positive feedback loop, the feed-forward loop (FFL) and the bi-fan motif.
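The flavor of such motif counting can be conveyed with a short sketch: feed-forward loops are counted in a small invented network and compared with counts from randomized networks having the same number of edges (Milo et al. used degree-preserving randomization; simple resampling is used here to keep the example short).

    # Toy motif-counting sketch: count feed-forward loops (A->B, B->C, A->C) in a small
    # invented network and compare with random networks having the same edge count.
    import itertools
    import random
    import statistics

    def count_ffl(edges):
        eset = set(edges)
        nodes = {n for e in edges for n in e}
        return sum(1 for a, b, c in itertools.permutations(nodes, 3)
                   if (a, b) in eset and (b, c) in eset and (a, c) in eset)

    real = [("x", "y"), ("y", "z"), ("x", "z"), ("z", "w"), ("y", "w"), ("w", "v")]
    nodes = sorted({n for e in real for n in e})

    random.seed(11)
    null_counts = []
    for _ in range(200):
        possible = [(a, b) for a in nodes for b in nodes if a != b]
        null_counts.append(count_ffl(random.sample(possible, len(real))))

    mean, sd = statistics.mean(null_counts), statistics.pstdev(null_counts)
    z = (count_ffl(real) - mean) / sd if sd else float("inf")
    print("FFLs in network:", count_ffl(real), " z-score vs random:", round(z, 2))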
3.2. Motif Function
Putative functions of the motifs illustrated in Figure 3 have been described by Alon, and it has been suggested that certain motifs can facilitate one of two roles: either as sensory networks, which respond to nutrient levels and facilitate stress responses; or as memory-based networks, which act as irreversible switches with putative roles in organism development or cellular differentiation. The FFL is an extremely common motif and has been shown to have either coherent (where the sign of the direct pathway equals the overall sign of the indirect pathway) or incoherent (the signs of the two paths differ) behavior (Figure 3). The coherent type-1 FFL has been shown in studies using E. coli to be a “sign-sensitive delay” element and a “persistence detector” [79,80]: For example, when both paths need to be active for activation of the final gene in the pathway (“AND” behavior), time is required for transcripts of the intermediate gene in the indirect pathway to accumulate sufficiently to become active, delaying up-regulation of the final gene. Conversely, when either pathway is sufficient to activate the final gene (“OR” behavior), up-regulation of the final gene by the intermediate gene of the indirect pathway will persist, even after the initial gene in the pathway has been deactivated. The incoherent type-1 FFL has been described as a pulse-generator, with such behavior observed in E. coli. Here, activation of the initial gene will immediately activate the final gene. However, once the indirect pathway’s intermediate gene is activated, transcription of the final gene is halted, generating the pulse. An example of a memory-based motif is the double-positive feedback loop motif (Figure 3). In this motif, activation of the top gene will activate both of its target genes. The reciprocity amongst these two genes will keep them locked into being constantly activated even when the top gene is no longer activated; hence they retain a “memory” of having been activated. This sort of behavior would be appropriate for irreversible processes that can decide the fate of a cell, such as differentiation, reproduction or apoptosis.
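The sign-sensitive delay of the coherent type-1 FFL with AND logic can be illustrated with a short discrete-time sketch: the output gene Z switches on only after the intermediate gene Y has accumulated past a threshold, but switches off immediately when the input X is removed. The parameter values and gene names below are purely illustrative and are not taken from the E. coli studies cited above.

```python
def simulate_c1_ffl_and(x_signal, k_y=0.2, decay=0.1, theta=0.5):
    """Discrete-time sketch of a coherent type-1 FFL with AND logic.
    x_signal: list of 0/1 values for the input gene X over time.
    Y accumulates while X is on and decays otherwise; Z is expressed
    only when X is on AND Y exceeds the threshold theta."""
    y, trace = 0.0, []
    for x in x_signal:
        y = y + k_y * x - decay * y               # production/decay of Y
        z = 1 if (x == 1 and y > theta) else 0    # AND gate on the output
        trace.append((x, round(y, 3), z))
    return trace

# X switches on at t=5 and off at t=25: Z turns on only after a delay
# (once Y has accumulated), but switches off immediately with X.
signal = [0] * 5 + [1] * 20 + [0] * 10
for t, (x, y, z) in enumerate(simulate_c1_ffl_and(signal)):
    print(t, x, y, z)
```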
The apparent commonness of many of these motifs has attracted much attention, and a number of studies have been made to help explain this paradigm. One explanation is that these motifs are dynamically stable and robust to small perturbations in signal, and that this robustness could account for the motifs’ apparent abundance. Mutational robustness, or insensitivity of the genes within the motif to mutations, could also lead to an abundance of these motifs in nature. However, Widder et al. recently studied the kurtosis of the probability distributions for the FFL to perform a range of different functions and computationally studied the effects of repeated mutations on the functional robustness of the motifs. Their results suggested that the abundance is more influenced by the plasticity of the FFL in performing a wide range of functions and that mutational insensitivity was unlikely to account for the abundance. A wide range in function of the bi-fan motif has also been reported, with a caution from the authors of the study that the particular structure of a motif should not necessarily be expected to guarantee a particular function. Furthermore, a study by Konagurthu and Lesk reported that, through their implementation of a random-edge search algorithm, the frequencies of common motifs within natural networks were similar to those within random networks. They also noted that random connectivity within a three-node network, such as the FFL or a three-member positive feedback loop (3-cyc), would naturally form an FFL due to the search space involved (of the 2³ = 8 possible conformations, six are consistent with FFL architecture, and two with the 3-cyc) and that the search space may account more for the abundance than the function.
3.3. Mathematical Modeling of GRNs
Developing GRNs from experimental data is often described as reverse engineering, or network inference, and comprises a particularly large field within the discipline of systems biology. Although major advances in experimental techniques and in modern computing power have no doubt assisted efforts in network inference, it remains a non-trivial task. Ultimately the quality of an inferred network model is highly dependent upon the quality of the data, and this can come at a considerable cost with large networks, as the amount of required data is proportional to the number of network nodes. Perturbation experiments such as gene knock-outs, stress experiments or RNAi experiments can provide an informative insight into the dynamics of a particular network. However, the large amount of noise within expression data often requires that experiments be repeated in order to determine the extent of the noise. Constraints can be placed on the GRN to alleviate the model’s complexity and data requirements, however. These include limits on the number of nodes in the inferred network (thereby generating a sparser network) and restrictions on the model parameters, e.g., through connectivity limitations. It is also often desirable when inferring a network to make use of prior biological knowledge (such as molecule-binding sequence motifs, post-translational modification sites or molecular interactions), which may assist with model validation or with constraining the model complexity. A number of online repositories of such information are available, such as the Gene Ontology (GO) or the Kyoto Encyclopedia of Genes and Genomes (KEGG).
3.3.1. Boolean Networks
The activation of some genes within a network may depend upon the activities of other genes, such as in the “AND” and “OR” behavior described previously. Thus, it is possible to represent genes in a similar manner to logic gates, where a gene may belong to one of two discrete states, namely “ON” or “OFF”, and hold a set of discrete dependencies in terms of activation with other genes in the network, such as “AND”, “OR” and “NOT” relationships (Figure 4). Boolean representations of genes were first described by Kauffman and are widely used today. An example piece of software for the inference of Boolean networks from experimental data is REVEAL (REVerse Engineering ALgorithm), which enumerates all possible Boolean networks from the input data and uses mutual information to score each network, with the sparsest network that best describes the data being given as the optimal network. Boolean networks, although able to represent dynamical networks, do have quite clear limitations, however. Firstly, the transcriptional levels of a gene are continuous values and cannot simply be discretized into a binary variable such as “on” or “off”. Multiple discrete states, however, such as “gene product present” or “gene product absent” as well as “on” or “off”, have been proposed. Furthermore, Boolean networks are intrinsically deterministic and may be inadequate for describing the various stochastic effects within a network. To this end, probabilistic and, more recently, stochastic Boolean network variants have been proposed, which retain the rule-based determinism of Boolean networks yet can better model uncertainty.
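As a minimal illustration of the Boolean formalism (not of the REVEAL inference algorithm itself), the sketch below defines a hypothetical three-gene network with "AND", "OR" and "NOT" rules, updates it synchronously, and reports the attractor reached from every possible initial state. The rules and gene names are invented for this example only.

```python
from itertools import product

# Hypothetical three-gene synchronous Boolean network:
# A is held ON (e.g., by an external input), B = A AND NOT C, C = B OR C.
rules = {
    "A": lambda s: 1,
    "B": lambda s: s["A"] and not s["C"],
    "C": lambda s: s["B"] or s["C"],
}

def step(state):
    """One synchronous update: every gene reads the previous state."""
    return {g: int(bool(f(state))) for g, f in rules.items()}

def attractor(state, max_steps=64):
    """Iterate until a previously visited state recurs; return the cycle
    (a fixed point is a cycle of length one)."""
    seen, trajectory = {}, []
    for i in range(max_steps):
        key = tuple(sorted(state.items()))
        if key in seen:
            return trajectory[seen[key]:]
        seen[key] = i
        trajectory.append(state)
        state = step(state)
    return trajectory

# Enumerate all 2^3 initial states and report where each one ends up.
for bits in product([0, 1], repeat=3):
    init = dict(zip("ABC", bits))
    print(init, "->", attractor(init))
```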
3.3.2. Continuous GRN Models and Bayesian Networks
Genes are not simply active or inactive, but are transcribed at continuous rates, so that the amount of transcript for one gene is dependent upon the rate of transcription of another gene (although this may be discretized through the use of “gene thresholds” for activation in modeling efforts). This lends itself conveniently to using ordinary differential equations (ODEs) to represent GRNs. The modeling functions used may be linear or non-linear, with an example software tool for inferring linear models from expression data being EXAMINE (Expression Array MINing Engine). Another approach for the mathematical modeling of GRNs is to describe gene expression values as random variables following probability distributions, such as in Bayesian inference. Bayesian networks form a directed acyclic graph (DAG) and may represent dynamic or static (i.e., representing a GRN once a steady state has been reached) networks using continuous or discrete data, and are readily able to model the randomness and stochastic effects that may exist amongst GRNs. This makes them more robust in the presence of noise or missing data than Boolean networks. Another benefit of Bayesian networks is that they provide a framework that allows researchers to incorporate prior knowledge for network inference. However, caution should be exercised when little or no information is available, as the use of uninformative priors (e.g., uniform priors) can make Bayesian network inference inefficient. As Bayesian networks are formed with a DAG, static networks cannot represent cycles such as those in feedback loops. However, this limitation is not present with dynamic Bayesian networks, as they avoid cyclical representations by using discrete time steps to separate input nodes (e.g., at time t) from output nodes (e.g., at time t + ∆t). BANJO (BAyesian Networks with Java Objects) is a software tool that has been developed for the inference of static and dynamic Bayesian networks.
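A minimal continuous (ODE-based) sketch of a two-gene network is shown below: gene 1 is activated by an external signal via a Hill function and gene 2 is repressed by gene 1. The equations, rate constants and Hill coefficients are illustrative assumptions only, not a model taken from the software tools cited above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_gene_grn(t, y, s=1.0):
    """Toy continuous GRN: y = [x1, x2] are concentrations of two gene
    products. Gene 1 is activated by an external signal s (Hill activation);
    gene 2 is repressed by gene 1 (Hill repression). Both products decay."""
    x1, x2 = y
    k, n, d = 1.0, 2.0, 0.5                   # max rate, Hill coefficient, decay
    dx1 = k * s**n / (1 + s**n) - d * x1      # activation of gene 1 by the signal
    dx2 = k / (1 + x1**n) - d * x2            # repression of gene 2 by gene 1
    return [dx1, dx2]

sol = solve_ivp(two_gene_grn, (0, 30), [0.0, 0.0],
                t_eval=np.linspace(0, 30, 31))
print(sol.y[:, -1])   # approximate steady-state levels of the two genes
```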
4. Synthesis: Spatial Individual-Based Models with Gene Networks: Approaches, Applications to Plant Science and Potential Pitfalls
Within this review we have discussed theory within the fields of population and landscape genetics and systems biology, and have described software and approaches to simulating adaptation. We believe that modeling efforts within evolutionary biology have reached a stage where it is now feasible to couple systems of genes to SIBMs, so that the genes can interact with the surrounding environment and induce phenotype in a more complex and perhaps more biologically reasonable way. Ultimately, a unified approach based upon stochastic elements of GRN evolution, migration and range expansion could allow emergent paradigms in how phenotype relates to GRN topology to be observed, and raise questions as to how this relates to different abiotic and biotic interactions at different spatial locations. Thus, such systems could direct research into a number of previously unanswered questions in evolutionary biology and evolutionary systems biology, including:
How does a functional (non-neutral) mutation to the sensitivity (as in threshold) or output of a GRN node affect the expressed phenotype or the fitness of an individual? How do the phenotypic effects differ from simulating single non-interacting loci?
How do perturbations of the edges within a network (such as edge deletion, addition and rewiring) or node duplications impact on the fitness of an individual within different environments?
How does the conformation of a GRN affect the quantitative trait that is ultimately expressed? Can population models or SIBMs show that certain motifs may be selected for within different environments?
What role do evolutionary forces such as gene flow and range expansion play on the diversity of GRN topologies?
Considering the effects of gene flow, can certain environments (i.e., abiotic factors) favor specific GRN topologies? Similarly, can biotic interaction select for certain GRN topologies?
Which choice of GRN representations (such as static-edge, Boolean, Bayesian, ODE-based networks) is a better fit to the system in question?
The first two questions require the use of GRNs, whereas the last two require a spatial element and a landscape genetics approach to provide sufficient environmental heterogeneity. We discuss these elements in the next two subsections.
4.1. GRN Evolution and the Resulting Phenotypic Effects
4.1.1. Simulating GRNs in Population Models Instead of Quantitative Trait Loci
GRNs have so far received little attention within evolutionary studies at the population genetics level. Studies of the genotype-phenotype relationship commonly involve the analysis of quantitative traits, such as seed size or petal color, that are influenced by one or more loci. Therefore the modeling of quantitative traits, or even quantitative trait loci (QTLs), may be a viable alternative to explicitly modeling GRNs and may benefit a model in terms of efficiency or when there is insufficient data with which to infer a GRN. However, QTLs themselves may interact with cis-acting or trans-acting elements on the transcriptomic and proteomic levels, and may code for catalytic proteins that interact with substrates on the metabolomic level, before the quantitative trait is expressed. It has also been suggested that all genes are not equivalent regarding their evolutionary role, as assumed in standard population genetics models, but that it is a gene’s position within a network that determines its evolutionary role [96,97,98]. Therefore differential effects on phenotypic variation may arise from mutation of the genes in a network. For instance, we have already discussed an example found in nature, with mutation of the nodes within the photoperiodicity system (Figure 2) causing either late or early flowering times. Allelic variants of these elements may also be under selection: for example, we know that 6% of the human genome is currently under selection, yet only 1.5% of the genome is protein coding, with the rest of the purifying selection possibly acting on regulatory elements. If selection favors co-inheritance of a collection of alleles which interact with each other within a GRN, then these alleles may also be placed in linkage disequilibrium and not become segregated by recombination. Therefore simulation of GRNs may provide researchers with a better understanding of the specific alleles that need to be in a network to fully take advantage of a given set of environmental conditions.
4.1.2. Simulating Network Evolution
Evolution in the context of GRNs has been receiving more interest in recent years [96,101], especially in the field of evolutionary developmental biology [102,103], or Evo-Devo, which is concerned with the comparative analysis of the developmental processes of species and with the evolutionary relationships between those developmental processes. The bioinformatics community is also becoming increasingly interested in the study of the ancestral relationships between biomolecular networks, with algorithms being developed for network alignment [104,105]. In their review, Knight and Pinney describe seven mechanistic perturbations of biological networks, including rewiring, or new edges being introduced between nodes; node duplication; node loss and entire network duplication. It has been shown that a single point mutation is sufficient to induce entire proteomic network rewiring. It is also understood that duplication may lead to sub- and neo-functionalization within networks, where either the resulting paralogs take on separate functions from each other (where the ancestral gene was capable of all functions) or one paralog takes on a new function, respectively. The concept that single gene and whole genome duplication could lead to evolutionary diversification has existed for decades and is still commonly under study [108,109]. We have already discussed the widely documented examples of network motifs found within biological networks, their potential roles and how their structure may relate to function, if at all. Whether the structure of a motif necessarily relates to function may currently be a topic of debate; however, it is conceivable that selection for a particular phenotype may require a specific structural motif, and this has been suggested for the positive feedback loop. There have also been a number of studies of how motif structure may influence stochastic fluctuations, or noise, from a network motif, and it has been suggested that noise itself can be placed under selection. Noise may control organism stress responses such as persistence in bacteria, where the cell may enter a state of dormancy in harsh environmental conditions at the cost of cellular growth rate. Through mathematical modeling of the HipBA toxin-antitoxin system in E. coli, Koh and Dunlop showed that by altering the architecture of the network (through removing feedback and placing the two genes on separate operons), they were able to alter the frequency of persistence, a trait that could be selected for in different environmental conditions. Interestingly, a study from Tsong et al. demonstrated that, for the two species S. cerevisiae and C. albicans, a particular network shared by the two species had been reversed in structure (one regulated by a repressor, the other by an activator). The “logical output”, or phenotype, remained the same, however, due to several changes in cis- and trans-regulatory elements. Therefore network evolution may converge to the same outcome as well as diverge.
4.1.3. Choice of GRN Model within the Context of a Spatially Explicit Individual-Based Model
The GRN reverse engineering approaches described in Section 3.3 can be conceptualized as “top-down” processes, where we begin with a phenotype of an individual (i.e., after being subjected to stress or after a gene knock-out procedure), observe the expression patterns, and infer a genetic model from the data using statistical and mathematical approaches. However, the inferred networks and the modeling paradigms used to describe them (such as Boolean or continuous GRNs) could readily be used in a “bottom-up” approach to demonstrate the range in expression and/or the resulting phenotype once subjected to different environmental inputs. We therefore believe that SIBMs parameterized with resistance surfaces or landscape patches provide an excellent framework for producing such models. The GRN could be represented in a Boolean network form or in a continuous form, using linear or non-linear ODEs, and would take its input from the surrounding environment, interact with the other nodes in the network and produce a phenotype. Gene threshold parameters could be used to define the criteria needed for activation, and genes at the top of the network could directly interact with the environment. Whereas Boolean or ODE-based GRNs would classically represent deterministic networks, the output of each gene could instead be a random variable generated from a certain probability distribution, providing a network that more closely approximates a Bayesian network. A potentially interesting study could be: given genetic network data within a real environmental system (such as the distribution of flowering times latitudinally across Europe), which GRN model best explains the data and provides the maximum likelihood?
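One way such a "bottom-up" individual could look in code is sketched below: a hypothetical three-gene cascade whose top node reads an environmental value (day length), whose thresholds are heritable and mutable, and whose output determines a simple flowering phenotype. Class design, thresholds and noise levels are assumptions made for illustration; this is not a description of any existing SIBM implementation.

```python
import random

class Individual:
    """Sketch of an individual carrying a small GRN whose sensor gene reads an
    environmental input and whose downstream genes determine the phenotype."""

    def __init__(self, thresholds=None):
        # one activation threshold per gene in a simple three-gene cascade
        self.thresholds = thresholds or {"sensor": 12.0, "integrator": 0.5, "effector": 0.5}

    def express(self, day_length, noise_sd=0.05, rng=random):
        """Propagate the environmental input through the cascade; each gene
        is active (1) if its noisy input exceeds its threshold."""
        sensor = int(day_length + rng.gauss(0, noise_sd) > self.thresholds["sensor"])
        integrator = int(sensor + rng.gauss(0, noise_sd) > self.thresholds["integrator"])
        effector = int(integrator + rng.gauss(0, noise_sd) > self.thresholds["effector"])
        return "flowering" if effector else "vegetative"

    def mutate(self, rate=0.01, scale=0.5, rng=random):
        """Perturb the thresholds, altering the GRN's sensitivity to its input."""
        for gene in self.thresholds:
            if rng.random() < rate:
                self.thresholds[gene] += rng.gauss(0, scale)

plant = Individual()
print(plant.express(day_length=14.0))   # long days -> likely "flowering"
print(plant.express(day_length=9.0))    # short days -> likely "vegetative"
```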
4.2. Benefit through Using a Spatially-Explicit System
In this review we propose that research should be directed towards the phenotypic effects of network evolution in the context of populations located within patchy landscapes. The addition of spatially explicit heterogeneous landscapes will add another layer of complexity to any model, and adding any intra-annual variation in environmental parameters will increase this complexity. Although it is not the goal of modeling to accurately represent nature in all of its complexity, we argue that such extra detail is necessary in order to fully understand how phenotypic variation (through mutation of GRNs) may emerge and become selected for or against. Firstly, we need to adequately model gene flow, which provides the homogenizing force between subpopulations that would otherwise ultimately differentiate through a process of mutation and genetic drift. Although the flow of chromosomes containing genes that may interact with one another in a GRN may be modeled within a mean-field system, gene flow itself is often spatially constrained and may be influenced by geographic features, such as mountains, rivers or roads. Impeding gene flow can lead to increased population differentiation, which can lead towards speciation. The explicit modeling of space is a convenient way to allow the simulation of range expansions and their limiting effects on allelic diversity through the resulting founder effects. Incorporating heterogeneous environments into the spatially explicit arena will also allow abiotic interaction to select for different alleles, and possibly for different GRN conformations. For example, the GRN conformations for barley, wheat and Arabidopsis have been shown to be quite different, despite sharing many of the same components [71,115]. A particularly fundamental question to be addressed in evolutionary systems biology is why certain GRN conformations exist in different environments and why they are favored in some way. One possible way to answer such a question could be to keep GRN topologies constant and randomize environmental parameters according to given prior distributions, as in a Bayesian analysis.
4.2.1. Biotic Interaction
We have described how the resistance surfaces that may be explicitly incorporated into an SIBM may represent climatic or edaphic factors that can impede dispersal or influence selection of the simulated individuals. However, in a similar vein, they may also represent biotic interactions from animals or plants. Biodiversity varies latitudinally across the globe, and biotic interaction is thought to be of particular importance in the tropics. One example of biotic interaction is seed predation, and this has famously been proposed, in what is collectively termed the Janzen-Connell hypothesis [118,119], to prevent competitive exclusion. Seed predation can be represented in simulations as probabilities of predation for dispersed seeds, either throughout the entirety of the simulation or at individual grid points, for example. It may be difficult in this approach, however, to simulate the dynamics of predator-prey co-evolution, unless some form of dependency were incorporated between the modeled individuals and the resistance surfaces. Another approach is to have multiple classes of individuals within a simulation that could represent “species”. Individuals belonging to different species could then be modeled with different GRNs, as has been seen in nature with the barley, wheat and Arabidopsis photoperiodicity networks. Individuals may then compete for space (in order to germinate). If the model is specific and growth and nutrient uptake are explicitly modeled (see Section 4.6.3 on functional-structural plant modeling), then different species could potentially compete for resources.
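A minimal sketch of these two landscape layers is given below: seeds take a biased random walk over a grid in which each cell carries a movement resistance, and the cell where a seed lands carries its own predation probability. Both grids, the walk length and the probabilities are hypothetical placeholders for the landscape data an SIBM would actually hold.

```python
import random

# Hypothetical per-cell resistance to movement and per-cell seed predation risk.
resistance = [[1, 1, 5, 1],
              [1, 3, 5, 1],
              [1, 1, 1, 1]]
predation_p = [[0.1, 0.1, 0.4, 0.1],
               [0.1, 0.2, 0.4, 0.1],
               [0.1, 0.1, 0.1, 0.1]]

def disperse(seed_pos, steps=5, rng=random):
    """Biased random walk: neighbouring cells are chosen with probability
    inversely proportional to their resistance value."""
    r, c = seed_pos
    rows, cols = len(resistance), len(resistance[0])
    for _ in range(steps):
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols]
        weights = [1.0 / resistance[i][j] for i, j in nbrs]
        r, c = rng.choices(nbrs, weights=weights, k=1)[0]
    return r, c

def survives_predation(cell, rng=random):
    """The seed escapes predation with the cell-specific probability."""
    i, j = cell
    return rng.random() > predation_p[i][j]

landing = disperse((1, 0))
print(landing, "survives:", survives_predation(landing))
```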
4.2.2. Analyzing Past and Future Events on Adaptation
GRNs are dynamic, and with this comes the necessity of incorporating time-dependent environmental variation when GRNs are simulated within the context of SIBMs. A natural extension of this is that it becomes convenient to study the effects of past shifts in the environment on the genotypic and phenotypic characteristics of a population (for example, through the effects of bottlenecks and migration). Hypothesized future effects could also be studied in a similar manner.
4.3. Producing Complex Modeling Systems in a Step-Wise Manner
Complicated models with multiple levels of regulation could be developed in a step-wise manner, yet there is no one correct path a researcher may take. The model should be validated as each level of regulation (Figure 1) is added. Deterministic systems based on mean-field assumptions such as Hardy-Weinberg equilibria may provide a means of model validation. Complex models may require time-consuming simulations, and if there is much stochasticity in the system, it could become difficult to interpret their results. Therefore a suitable strategy might be to start with simple models, such as mean-field models and/or single-population models. For example, the initial stage of a modeling study could be to begin with a population of limited spatial structure, single genes or QTLs and only neutral, non-selective abiotic parameters, where the only sources of genetic variation are mutation and genetic drift. After validation, extra elements could be added, including a more heterogeneous environment and a rudimentary GRN, and so on. If a modeling system is designed to be modular, that is, allowing certain features to be enabled or disabled, it may be convenient to begin with simple systems and avoid the need to develop new models for each step of the study.
The relevant question here is at which level of regulation the modeler begins, which will be highly influenced by the hypothesis that the researcher intends to address. One possible hypothesis could be that certain environmental parameters would select for a particular GRN variant, for example, and so a study might involve analyzing the effects of GRN conformation on individual fitness. GRN conformation could refer to the shape of its degree distribution, or simply to the choice of structural motif, for example. In a first step, simulations could provide data on fitness (in the form of population growth curves, for example) for different GRN configurations that are kept constant (i.e., no mutation or rewiring) throughout the simulation. In a subsequent step, GRN reconfiguration could be enabled and the final configuration recorded, to determine whether GRNs have evolved into an “optimal” configuration. Final simulations could allow populations containing evolving GRNs to expand throughout a heterogeneous landscape, and the spatial genetic structure could be analyzed.
Another study might attempt to explain the spatial genetic structure of a population found in nature, for which GRN data exist, by attempting to recreate data observed in nature (such as allele frequencies or selection coefficients). Initial simulations could be within mean-field systems, with non-stochastic migration rates between subpopulations and only single gene nodes or QTLs being simulated. Subsequent simulations could add spatial explicitness, abiotic and biotic factors and GRNs. At each step of the study, likelihood densities could be generated to determine which models best explain the observed data. Our research group has previously applied Approximate Bayesian Computation (ABC) to our SIBM (currently unpublished). ABC can be a powerful numerical technique within population genetics: it allows approximate likelihood densities to be generated from the parameter subsets that simulate summary statistics sufficiently close to the data observed in reality. It has been widely used within a number of population genetics studies thus far (for example, see [121,122]).
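The logic of rejection ABC can be shown in a few lines. In the sketch below, a crude stand-in function plays the role of the full spatial simulator, returning a single summary statistic (a proxy for among-deme differentiation) for a candidate migration rate; parameters are drawn from a uniform prior and kept when their simulated statistic falls within a tolerance of the "observed" value. All numbers, the prior, and the stand-in simulator are assumptions for illustration, not our SIBM or any published ABC analysis.

```python
import random
import statistics

def simulate_summary(migration_rate, n_demes=5, n_gen=200, rng=random):
    """Stand-in for a full simulation run: returns one summary statistic
    (variance in allele frequency among demes) for a given migration rate."""
    freqs = [0.5] * n_demes
    for _ in range(n_gen):
        mean_f = sum(freqs) / n_demes
        freqs = [min(1.0, max(0.0,
                 f + migration_rate * (mean_f - f) + rng.gauss(0, 0.02)))
                 for f in freqs]
    return statistics.pvariance(freqs)

def abc_rejection(observed, n_draws=5000, tolerance=0.001, rng=random):
    """Rejection ABC: draw migration rates from a uniform prior and keep those
    whose simulated summary statistic lies within `tolerance` of the data."""
    accepted = []
    for _ in range(n_draws):
        m = rng.uniform(0.0, 1.0)              # prior on the migration rate
        if abs(simulate_summary(m, rng=rng) - observed) < tolerance:
            accepted.append(m)
    return accepted                            # an approximate posterior sample

posterior = abc_rejection(observed=0.005)
print(len(posterior), "accepted; posterior mean migration rate ~",
      sum(posterior) / max(1, len(posterior)))
```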
4.4. Adaptive Dynamics
When simulating selection in models it is important to consider the role of evolutionary trade-offs and how they may influence the adaptation of a species. Antagonistic pleiotropic effects [123,124,125], as first proposed by Williams, occur when a mutation with a beneficial change in fitness on one trait has a detrimental effect upon another trait. This can lead to the emergence of evolutionary fitness costs [126,127,128,129,130,131,132,133], where increased resource allocation to one function leaves more limited allocation to another function. One plant example of a trade-off in the literature is increased transposable element silencing despite deleterious effects on the expression of nearby genes in Arabidopsis thaliana. Another study showed that increased investment in female and male reproductive structures limited the quantity and nitrogen content of clonal propagules, respectively, in Sagittaria latifolia. A further example exists in Arabidopsis thaliana, where a mutation in the EMBRYONIC FLOWER (EMF) genes EMF1 and EMF2 induces very early flowering but also a reduction in seed production. Thus evolutionarily “perfect” organisms are not trivial to obtain. Trade-offs may also exist according to the ecological characteristics of the geographical area in which a population resides. To give an example: selection for increased plant size may increase the rate of depletion of the nutrient resource within the soil; thus, adaptation of the plant population to its surrounding environment in turn influences the environment. In order to help study such dynamic genotype by environment interactions, the 1990s saw the emergence of the field of adaptive dynamics (AD, reviewed in [135,136]), which, through mathematical modeling, allows the researcher to gain an insight into the long-term dynamics of the evolutionary and ecological processes within a given system. AD developed from evolutionary game theory and the study of evolutionarily stable strategies, which may describe the payoffs associated with a mutant, m, of strategy A invading a resident population, r, with strategy B. It makes four assumptions: clonal reproduction, separation of ecological and evolutionary time scales, small mutational steps and a small initial invading mutant frequency within the monomorphic resident population r. The invasion fitness, f, is given as the exponential growth rate of the mutant m within r. Positive values of f indicate that m will successfully invade and replace r, and negative values indicate that the mutant will be unsuccessful in invading the resident. Using the invasion fitness function, f, pairwise invasion plots (PIPs) may be plotted. PIPs are two-dimensional plots in which the zero contour line is plotted across the quantitative values of the m and r phenotypes, allowing potential regions of invasion success and failure to be identified graphically. Intersection of the isocline with the 45-degree line from the origin (where m = r) allows identification of possible evolutionary end points at certain values of the resident phenotype. Using the AD framework, Geritz et al. produced a model to study the evolutionary dynamics of seed size, which contained a trade-off between seed size and seed number. They were able to adjust the influence of the seed size on the competitive ability of their seeds (which they called competitive asymmetry), the resources per germination site and the type of precompetitive environment in which their seeds resided (a continuum from favorable to unfavorable).
They found that strong competitive asymmetry, high resource levels, and intermediate harshness of the precompetitive environment favored a polymorphic population containing the coexistence of plants with different seed sizes, where, although a single large seed may outcompete a single small seed, the higher numbers of smaller seeds were also competitive. Boudsocq et al. presented an AD study that investigated the trade-off between plant size (due to increased nutrient uptake), where larger plants are fitter, and increased plant mortality with greater nutrient uptake. The authors set out to determine whether natural selection could lead to “evolutionary suicide” or Hardin’s “tragedy of the commons”, where resources become too depleted to allow plant survival, or whether Tilman’s R* rule, where the plant with the lowest steady-state resource level is selected for, will apply. In their model, Boudsocq et al. found that evolution leads to a minimization of soil mineral nutrient content, yet the nutrient resource was not intensely depleted, supporting Tilman’s R* rule.
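The PIP construction described above can be sketched numerically with a standard Lotka-Volterra toy model of resource competition along a one-dimensional trait axis. The carrying-capacity and competition kernels below are generic textbook choices, not the Geritz et al. seed-size model or the Boudsocq et al. nutrient model; the example only illustrates how the sign of the invasion fitness is mapped over mutant and resident trait values.

```python
import math

def carrying_capacity(z, sigma_k=1.0):
    """Equilibrium population size for a monomorphic resident with trait z."""
    return math.exp(-z**2 / (2 * sigma_k**2))

def competition(m, r, sigma_a=0.7):
    """Competition experienced by a mutant with trait m from a resident r."""
    return math.exp(-(m - r)**2 / (2 * sigma_a**2))

def invasion_fitness(m, r):
    """Growth rate of a rare mutant m in the environment set by resident r
    at its demographic equilibrium; zero on the diagonal m = r."""
    return 1.0 - competition(m, r) * carrying_capacity(r) / carrying_capacity(m)

# Text-mode pairwise invasion plot: '+' where the mutant invades, '-' where it fails.
traits = [i / 10.0 - 1.0 for i in range(21)]          # trait values -1.0 .. 1.0
for m in reversed(traits):                            # mutant trait on the vertical axis
    row = "".join("+" if invasion_fitness(m, r) > 1e-9 else "-" for r in traits)
    print(f"{m:+.1f} {row}")
```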
Simulation of Evolutionary Tradeoffs with GRNs
Such example AD studies have the benefit of allowing researchers to quantify the effects of certain trade-offs on an evolutionary system. We believe the modeling framework proposed in this review, of coupling GRNs to SIBMs, could also allow for such trade-offs through interactions between one gene and numerous target genes or traits. When considering complex interconnected networks, it becomes clear that potential trade-offs could be programmed into the system. For instance, a simple example would be where mutation of a gene causes up-regulation of one or more of its target genes with beneficial fitness effects on trait A, whilst indirectly having a negative impact on the fitness provided by trait B. As with the AD framework, however, such interactions have to be hypothesized. This may not be the case if a model is complicated enough to allow GRN re-wiring. Through stochastic GRN re-wiring by mutation, combined with movement through a heterogeneous landscape, emergent trade-offs may be observed that had not previously been hypothesized. This may provide opportunities to document such trade-offs and analyze their evolutionary impact.
4.5.1. Algorithmic and Programming Complexity
The complexity of SIBMs is not trivial, and the development of a large simulation software tool may run into problems if inadequate care is put into the development process, or if there is ambiguity in its function, as this may make the tool difficult to communicate or reproduce. To this end a few authors have proposed protocols that can be used in the design and development of IBMs [51,139]. SIBMs are generally less efficient than aspatial IBMs due to the processing of spatial distances or landscape values, if landscape information is incorporated. The use of a quadtree structure, which breaks the two-dimensional space down into nodes that are stored hierarchically (as in a tree-like data structure), may provide some optimization over brute-force searches when individuals interact over space. A further approach for optimization in landscape genetics, based on the quadtree, was to use a hierarchical system of patches within an irregular grid. Although the efficiency of developed software tools poses one problem, the implementation of complex systems within a model can be non-trivial, especially if interacting genes and environmental information are to be incorporated. Software engineering approaches to the design of a system provide a more thoroughly planned design process that allows greater transparency of the system specification to non-developers and may prevent design flaws or other complications during the development phase. These include the use of process-management models such as the waterfall or iterative model, the analysis and design of behavior using data flow diagrams, and the use of diagrams specified within the Unified Modeling Language (UML), such as class hierarchy diagrams for object design and use-case diagrams for system interaction analysis and design. Object-oriented programming languages, such as C++, Java, C# and Python, provide a number of concepts, including object inheritance, polymorphism, abstraction and interfaces, which can greatly facilitate the design and implementation of IBMs. For example, classes such as Individual, Gene, Genome, Chromosome and Patch could be implemented, and a number of individual-based modeling studies have taken similar object-oriented approaches [54,55,56,60,141,143]. However, it has been suggested that certain features within SIBMs, such as environmental or terrain features, may best not be represented as objects. Furthermore, the implementation of an IBM using an object-oriented approach in Java and C++ was shown to be less efficient than an implementation with a procedural approach in Fortran 95. Object creation can be computationally costly; therefore, excessive use of objects when unnecessary should be cautioned against.
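As an illustration of the kind of class design mentioned above, the following Python sketch declares minimal Individual, Gene, Genome, Chromosome and Patch classes. The fields and the placeholder phenotype calculation are assumptions for illustration only; a real SIBM would flesh these out considerably and might, as noted, deliberately avoid representing terrain as objects.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Gene:
    """One locus: an activation threshold and the indices of its target genes."""
    threshold: float
    targets: List[int] = field(default_factory=list)

@dataclass
class Chromosome:
    genes: List[Gene]

@dataclass
class Genome:
    chromosomes: List[Chromosome]

@dataclass
class Patch:
    """A landscape cell holding an environmental value and its occupants."""
    environment: float
    occupants: List["Individual"] = field(default_factory=list)

@dataclass
class Individual:
    genome: Genome

    def phenotype(self, patch: Patch) -> int:
        # placeholder: count genes activated by the patch's environmental value
        return sum(1 for chrom in self.genome.chromosomes
                     for gene in chrom.genes if patch.environment > gene.threshold)

# Minimal usage: one individual with two genes evaluated in one patch.
genome = Genome([Chromosome([Gene(threshold=0.3), Gene(threshold=0.8)])])
print(Individual(genome).phenotype(Patch(environment=0.5)))   # -> 1
```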
4.5.2. Accurate Representations of GRNs
Arguably the most obvious pitfall with using such models is the high computational cost associated with the large ranges in scale required, from subcellular processes within the simulated individuals to the dynamical environment in which they reside. It is generally required that SIBM simulations be run with thousands of individuals; therefore, large GRNs with many nodes and many edges may become intractable. Furthermore, sensory-based GRNs, such as the delay-response element and the persistence detector mentioned in Section 3.2, may become difficult to implement within simulated individuals, as they represent time-dependent processes at a microscopic scale, with a requirement for continuous transcript levels that build up or break down over a period of time. The level of detail required for such processes could greatly slow down the rest of the simulation at the individual, population and environmental scales. If the simulation model were also run using discrete time steps (such as generations or months), a particularly fine-grained time step, such as hours or even minutes, may realistically be required, confounding the tractability of running the simulation for a meaningful length of time at the population level (1,000 generations, for example). However, discrete GRN models such as Boolean networks or discrete Bayesian networks cannot represent these sorts of sensory networks themselves. GRNs representing the memory-based motifs used for cell-fate determination, as previously described, may be more suitable, as they could guide differentiation events at the individual level. These could act as switches to ensure that individuals change from one life cycle stage to another, and would therefore have important implications for the fitness of an individual.
4.6. Applications to Plant Science
As previously discussed, selection of individuals for certain traits may occur under a number of different selection regimes. In some populations, selection may be driven by edaphic or other climatic effects, such as light in the case of flowering time, whereas in others it may arise more from biotic interactions with pests or predators. Another example of a selection regime upon plants is crop domestication, a topic of considerable debate, where selection is imposed upon populations of crops by humans, who provide the biotic interaction. Interestingly, the nature of the human biotic interaction is so important that crop traits acquired through the domestication process are deleterious in the wild. We believe that the described system of coupling GRNs with SIBMs is equally applicable to modeling selection imposed by human cultivators as to modeling selection in the wild. This is an ongoing research effort within our group. For modeling domestication, however, specific models may be required for simulating cultivator involvement, such as harvesting and sowing of crops, and removal of pests, for example.
4.6.1. Domestication as a Selection Regime
Domestication represents an important model of evolution where all of the aforementioned levels of regulation played a role, including interactions at the genic level, the population level and the roles of abiotic and biotic factors (such as local climatic effects on crops and the effects of weeds and pests on crop yield). Through domestication our crops have developed traits that better serve human needs in agriculture. These traits include the non-shattering phenotype within cereals, where wind is insufficient to mediate dispersal of seeds from the ears and human intervention is necessary; increased seed size, which enables seeds to be sown deeper within the soil due to the larger endosperm, therefore preventing seeds from blowing away from the farmer's field; a loss of hooks and awns, helping to prevent loss of seed from the field; and enhanced culinary chemistry, allowing superior food products to be produced (for reviews, see [71,100,147,148,149]). All of the aforementioned domestication traits are heavily relied upon today. It is understood that the non-shattering phenotype is a monogenic trait that occurs within double-recessive homozygotes, whereas the larger seed size phenotype is a polygenic trait. Understanding how such genes interact and the evolutionary processes behind the selection of these traits is an area that warrants further study. Intra-annual variation has also played important roles in the domestication process, as crops were sown and harvested at certain times of the year, and some crops have since developed a lack of sensitivity to environmental cues for flowering or germination (hence a loss of dormancy amongst seeds). A meta-analysis conducted by Munguía-Rosa et al. found that flowering time is still under selection in many plants, and increased fitness amongst populations has been seen to be associated with local alleles of flowering time in Arabidopsis lyrata. A model for the simulation of vernalization in onion has been developed by Streck, which demonstrated a response in flowering to the temperature and to the duration of vernalization (in days), using statistical functions. However, this simulation was not at a population or a genetic level. Developing models that can incorporate a landscape genetics element and a GRN element could greatly improve our understanding of such phenotypic variation. Dormancy and germination are other complex plant processes where regulation exists at a population genetics level, where periods of dormancy will have important effects on the fitness of the emergent seedlings, and where regulation exists at a systems-based level. Dormancy has been described as having a number of categories: morphological, physiological deep, physiological non-deep and physical dormancy. Morphological dormancy arises due to an underdeveloped seed embryo that requires time to mature, whereas physical dormancy involves the development of a water-impermeable seed coat that requires scarification. Physiological dormancy, however, arises due to an imbalance in the ratio of abscisic acid and gibberellins, with abscisic acid promoting dormancy. Moisture and temperature (specifically thermoinhibition) are important environmental conditions that may induce germination, and hydrothermal models have been developed (including one from Watt et al.) for the simulation of germination under different environmental conditions. These models lack the population, landscape and genetic elements of selection, however, which could be simulated with the use of SIBMs and incorporated GRNs.
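The basic logic of a hydrothermal time model can be sketched very simply: germination is predicted once a seed has accumulated a fixed amount of hydrothermal time, which accrues only while water potential and temperature exceed their base values. The sketch below is a generic illustration of this idea under assumed parameter values, not a re-implementation of the Watt et al. or Streck models cited above.

```python
def hydrothermal_germination_time(psi, temp, theta_ht=60.0, psi_b=-1.0, t_b=5.0):
    """Days to germination under constant conditions, assuming a hydrothermal
    time requirement theta_ht (MPa degree-days), base water potential psi_b
    (MPa) and base temperature t_b (deg C). Accumulation per day is
    (psi - psi_b) * (temp - t_b) when both exceed their base values;
    returns None if the conditions never permit germination."""
    daily = max(0.0, psi - psi_b) * max(0.0, temp - t_b)
    if daily <= 0:
        return None
    return theta_ht / daily

# Wetter and warmer conditions shorten the predicted time to germination.
print(hydrothermal_germination_time(psi=-0.2, temp=15.0))   # ~7.5 days
print(hydrothermal_germination_time(psi=-0.8, temp=8.0))    # ~100 days
```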
4.6.2. Simulation Models Accounting for Polyploidy amongst Plants
It is not uncommon for flowering plants to exhibit polyploidy. Examples of triploid plants are apple and banana, tetraploids include durum wheat and cotton, and bread wheat is an example of a hexaploid. The polyploidy of many flowering plants arose relatively recently, whereas some flowering plants, such as tetraploid brassicas, are paleopolyploids resulting from ancient genome duplication events. Simulation models that simulate independent assortment of chromosomes may not be able to accurately reflect the gametogenesis of allopolyploids, as homologous chromosomes tend to pair preferentially during meiosis. However, a recent simulation model of meiosis developed by Voorrips and Maliepaard, called PedigreeSim, allows varying degrees of preferential pairing and the formation of different quadrivalent chromosomal configurations, which can be used for the study of allotetraploids. Future simulation studies will have to take similar approaches into account if polyploid plants or other organisms are to be accurately simulated.
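A deliberately simplified sketch of preferential pairing is given below for a single chromosome set of an allotetraploid with two subgenomes: with a given probability the homologues of each subgenome pair with each other (disomic inheritance), otherwise the four chromosomes pair at random. This is an assumption-laden toy inspired by, but far simpler than, the PedigreeSim model, and the chromosome labels and pairing probability are hypothetical.

```python
import random

def tetraploid_gamete(chroms=("A1", "A2", "B1", "B2"), pref=0.9, rng=random):
    """With probability `pref`, homologues pair within their own subgenome
    (A1-A2 and B1-B2); otherwise the four chromosomes form bivalents at
    random. Each gamete receives one chromosome from each bivalent."""
    if rng.random() < pref:
        bivalents = [(chroms[0], chroms[1]), (chroms[2], chroms[3])]
    else:
        shuffled = list(chroms)
        rng.shuffle(shuffled)
        bivalents = [(shuffled[0], shuffled[1]), (shuffled[2], shuffled[3])]
    return tuple(rng.choice(pair) for pair in bivalents)

# With strong preferential pairing, most gametes carry one A and one B chromosome.
gametes = [tetraploid_gamete() for _ in range(10000)]
balanced = sum(1 for g in gametes if {c[0] for c in g} == {"A", "B"})
print("fraction of balanced (A + B) gametes:", balanced / len(gametes))
```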
4.6.3. Functional-Structural Plant Modeling and Efforts in the Simulation of Plant Growth and Morphology
Understanding plant growth habit and morphology is of particular importance to agronomic and ecological studies, as plants react to their environment by adjusting their growth and morphology to maximize the benefits gained from nutrient acquisition. Thus, modeling efforts that predict plant growth and morphology according to simulated environmental conditions could be useful for determining the impact of changes to the availability of light, temperature or moisture, etc. A currently developing field within the plant science and computational biology disciplines is functional-structural plant modeling (FSPM) [159,160,161]. Modeling efforts within this field are concerned with the acquisition of resources such as light, carbon, water and soil minerals and how this impacts upon the growth and morphology of the resulting plants. Complex plant architectures comprising organs such as stalks, leaves and meristems are simulated, often in three dimensions, and these take on mass and form complex morphologies. Widely used algorithmic concepts behind these models are fractal-like rewriting systems called L-systems, where, in the case of plants, the plant architecture is represented by a text string of components (or phytomers) representing the building blocks that comprise the plant, such as the stalks, branches, flowers and meristems. This systematic approach enables virtual plants to be simulated with realistic morphologies that grow and develop new morphologies over time. Such studies have been used to simulate leaf development according to light input in Arabidopsis thaliana, carbon-water acquisition in orange trees, carbon and nitrogen acquisition and light competition in general virtual plants, and hormone biosynthesis and photosynthate in poplar, where graph-rewriting systems called relational growth grammars (RGGs), based on L-systems, were used to model a metabolic regulatory network to simulate biosynthesis. The aforementioned studies do not attempt to simulate the population genetics of these plants. However, a notable study by Buck-Sorlin et al. developed an FSPM of barley using RGGs, where a GRN of seven genes was used to synthesize gibberellic acid, which played a role in the growth and morphology of the virtual barley plants. The genes were able to cross over; therefore, sexual reproduction was simulated, allowing the resulting genotypes to influence the resultant barley phenotypes. Only five individuals were simulated per generation, however. A follow-up study simulated rice morphologies, and the model was parameterized with quantitative trait loci taken from a cultivated population, allowing the phenotypic effects of the morphologies to be influenced by the input genotypes. Another more recent rice FSPM study simulated growth rates and was parameterized with different genotypes, with different effects on the growth rate. These studies represent important modeling efforts with application towards G × E interactions. Bornhofen et al. provided an interesting FSPM study that utilized an evolutionary L-systems approach that allowed plant strategies to evolve. Their simulations began with the distribution of 1,000 seed individuals throughout a heterogeneous environment (consisting of five patches), which grew into mature virtual plants according to the procurement of biomass from the surrounding environment.
Their individuals contained a mutating genotype that comprised a set of parameters involved with the life history of the individuals, their dispersal and the system rules concerned with biomass acquisition and distribution. The individuals were able to reproduce asexually. Interdisciplinary work involving FSPM and evolutionary biology or landscape genetics is an interesting avenue of research, although, due to the scale required by landscape genetics studies, the computational costs involved may impede the development of such models at present, a limitation discussed by Bornhofen et al. However, assuming that computational power continues to increase, larger populations of simulated plants within FSPM studies may provide an interesting insight into the demography and adaptation of a population according to nutrient resource availability. They may also provide an important way to study biotic interactions with competitors such as weeds.
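The string-rewriting principle behind L-systems can be demonstrated in a few lines. The rules below are a classic branching example chosen only to illustrate the mechanism; they are not taken from the FSPM studies cited above, and a real FSPM would attach growth, mass and resource-acquisition rules to each symbol.

```python
def expand_lsystem(axiom, rules, iterations):
    """Apply the rewriting rules of a deterministic, context-free L-system
    `iterations` times to the axiom and return the resulting string."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic branching L-system: "F" is an internode, "X" a meristem/apex,
# "[" and "]" push/pop the turtle state, "+"/"-" turn the drawing direction.
rules = {"X": "F[+X][-X]FX", "F": "FF"}
plant_string = expand_lsystem("X", rules, iterations=3)
print(plant_string)
print("internode (F) symbols:", plant_string.count("F"))
```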
In this review we have covered a number of topics, from the need for plant genetic modeling and simulation, through concepts in population and landscape genetics, to network motifs and inference within the field of systems biology. We believe that modeling systems that incorporate regulation at the genic, genome, individual, population and environmental scales would provide flexible systems for studying adaptation within highly dynamical environments. Population genetics simulations are often based upon simplifying assumptions that may not necessarily represent the complexity within a real population. Conversely, the complexities within large inferred GRNs may often be to such a degree that the noise arising from the numerous nodes adds ambiguity to each of their roles. By merging the two modeling systems, each could complement the other, as the complexity of large GRNs could be culled to the minimum set of nodes that is necessary to represent the system (obtaining this set could be another application of such models), and the GRNs at the population genetic level would provide an interface for the interaction of genetic material with the environment. We are not describing an unnecessarily complex modeling approach intended to accurately represent nature in all its detail. Instead we are describing a general approach containing the requisite components to be able to simulate adaptation of individuals in certain environments through networks of interacting genes. The genes stimulate or repress each other according to their input, and this invokes expression of a phenotype. When we consider the role of mutation of the interacting nodes, this system provides greater dynamism in the simulation of phenotypic expression than simply simulating QTLs, and it therefore requires a dynamic environment, such as that seen throughout the year. Such a model immediately has application to the simulation of time-dependent processes such as germination or flowering. Furthermore, by coupling GRNs to SIBMs and allowing GRN evolution, through network rewiring or duplication, for example, we will gain an insight into how networks evolve as populations expand throughout a landscape, a field that to our knowledge remains largely unexplored. Simulation of the movement of gene networks through heterogeneous landscapes, combined with stochastic evolutionary forces such as gene flow, mutation and genetic drift, will allow emergent properties of GRN evolution and phenotypic diversity to be observed. Such observations may be difficult to achieve without a unified simulation model. In this way we envision an approach to modeling GRN evolution that incorporates all levels of biological organization.
This work is supported by the Leverhulme Trust F/00 215/BC.
- Fitzgerald, T.L.; Shapter, F.M.; McDonald, S.; Waters, D.L.; Chivers, I.H.; Drenth, A.; Nevo, E.; Henry, R.J. Genome diversity in wild grasses under environmental stress. Proc. Natl. Acad. Sci. USA 2011, 108, 21140–21145. [Google Scholar]
- Nevo, E.; Fu, Y.B.; Pavlicek, T.; Khalifa, S.; Tavasi, M.; Beiles, A. Evolution of wild cereals during 28 years of global warming in Israel. Proc. Natl. Acad. Sci. USA 2012, 109, 3412–3415. [Google Scholar]
- Aschard, H.; Lutz, S.; Maus, B.; Duell, E.J.; Fingerlin, T.E.; Chatterjee, N.; Kraft, P.; van Steen, K. Challenges and opportunities in genome-wide environmental interaction (GWEI) studies. Hum. Genet. 2012, 131, 1591–1613. [Google Scholar] [CrossRef]
- Amato, R.; Pinelli, M.; D’Andrea, D.; Miele, G.; Nicodemi, M.; Raiconi, G.; Cocozza, S. A novel approach to simulate gene-environment interactions in complex diseases. BMC Bioinformatics 2010, 11, 8. [Google Scholar] [CrossRef]
- Pinelli, M.; Scala, G.; Amato, R.; Cocozza, S.; Miele, G. Simulating gene-gene and gene-environment interactions in complex diseases: Gene-Environment iNteraction Simulator 2. BMC Bioinformatics 2012, 13, 132. [Google Scholar] [CrossRef]
- Gunasekera, C.P.; Martin, L.D.; Siddique, K.H.M.; Walton, G.H. Genotype by environment interactions of Indian mustard (Brassica juncea L.) and canola (B. napus L.) in Mediterranean-type environments: 1. Crop growth and seed yield. Eur. J. Agron. 2006, 25, 1–12. [Google Scholar] [CrossRef]
- Helgadottir, A.; Kristjansdottir, T.A. Simple Approach to the Analysis of Gxe Interactions in a Multilocational Spaced Plant Trial with Timothy. Euphytica 1991, 54, 65–73. [Google Scholar] [CrossRef]
- Haji, H.M.; Hunt, L.A. Genotype x environment interactions and underlying environmental factors for winter wheat in Ontario. Can. J. Plant Sci. 1999, 79, 497–505. [Google Scholar] [CrossRef]
- DeLacy, I.H.; Kaul, S.; Rana, B.S.; Cooper, M. Genotypic variation for grain and stover yield of dryland (rabi) sorghum in India: 1. Magnitude of genotype x environment interactions. Field Crops Res. 2010, 118, 228–235. [Google Scholar] [CrossRef]
- Kang, M.S. Using genotype-by-environment interaction for crop cultivar development. Adv. Agron. 1998, 62, 199–252. [Google Scholar] [CrossRef]
- Holderegger, R.; Herrmann, D.; Poncet, B.; Gugerli, F.; Thuiller, W.; Taberlet, P.; Gielly, L.; Rioux, D.; Brodbeck, S.; Aubert, S.; et al. Land ahead: Using genome scans to identify molecular markers of adaptive relevance. Plant Ecol. Div. 2008, 1, 273–283. [Google Scholar] [CrossRef]
- Cox, K.; Broeck, A.V.; van Calster, H.; Mergeay, J. Temperature-related natural selection in a wind-pollinated tree across regional and continental scales. Mol. Ecol. 2011, 20, 2724–2738. [Google Scholar]
- Schuster, S.C. Next-generation sequencing transforms today’s biology. Nat. Methods 2008, 5, 16–18. [Google Scholar] [CrossRef]
- Cannon, C.H.; Kua, C.-S.; Zhang, D.; Harting, J.R. Assembly free comparative genomics of short-read sequence data discovers the needles in the haystack. Mol. Ecol. 2010, 19, 147–161. [Google Scholar] [CrossRef]
Index of fantasai.tripod.com/www-style/2003/ruby
I haven't read the CSS3 Ruby drafts, so I don't have any comments. But I did find some interesting examples:
- Footnote Ruby and Underlined Ruby - Footnotes in Japanese texts are typically written in between lines, like ruby. You can see this in the third line.
Underlined text is top left. Note that the underline is along the "before" edge, and also that it's between the ruby and the main text.
- Kanji Ruby 1 - Ruby is usually a phonetic expansion of the associated word. Here, the ruby text is kanji (Han ideographs). Note the parentheses.
- Kanji Ruby 2
- Kanji Ruby 3
- Ruby on English - Last line in the first column.
Authors: E. Fallahi, B. Fallahi, G.H. Neilsen, D. Neilsen, F.J. Peryea
Keywords: Malus × domestica, mineral nutrition, postharvest, quality prediction, storage
Several mineral nutrients can influence fruit quality and disorders of apple.
Among these, nitrogen (N), potassium (K), phosphorus (P), calcium (Ca), and boron (B) are most often correlated to apple fruit quality and disorders.
Leaf mineral analysis is a useful tool to diagnose apple tree deficiencies but often is poorly related to fruit quality.
Using fruit analysis alone or in combination with leaf analysis often permits more precise prediction of fruit quality.
Over the last several years, we have developed several models for predicting apple fruit quality.
In addition, we have examined the effects of various orchard factors and cultural practices, such as irrigation, rootstocks, and fertigation and foliar application of nutritional sprays, on apple fruit mineral composition and quality.
A ranking of major minerals has been developed that predicts fruit quality within a year and between years.
Increasing fruit N is inversely related to fruit yellow or red colour and positively associated with fruit respiration and ethylene.
Fruit Ca tends to be imprecisely related to bitter pit and fruit firmness.
Potassium fertigation in four apple cultivars increased fruit size, yield, acidity, and colour, but decreased firmness at harvest.
Multiple sprays of soluble Ca often reduce bitter pit and usually but not always increase Ca concentrations in subdermal cortical tissue.
Early-season Ca sprays often are more effective than later sprays at reducing bitter pit; however, later applications of Ca have a greater influence on fruit Ca concentration.
The B concentration of apple fruit is much more strongly affected by early-season B sprays than is the B concentration of leaves.
Fruit from B-sprayed trees may exhibit quality loss due to B excess even though leaf B appears normal.
Water stress reduced leaf and fruit K but increased leaf Mg.
An overview of the effects of several orchard factors on mineral nutrition, fruit quality and disorders will be presented.
Axolotls are large aquatic salamanders only found in parts of Mexico. They are easy to keep and grow to an impressive 30cm, making the Axolotl a popular exotic pet.
Axolotls are available in around five colour variations; black, white, wild type, albino and a golden colour. They grow to approx 30cm (12in) and will live for approx 12 years in captivity. Axolotls are studied around the world, due to their ability to regenerate a new fully functional limb within two months of losing it.
Axolotls are only found in certain parts of Mexico, the canal systems of the former Lake Xochimilco. Officially an endangered species, they have been captive-bred since the 1800s.
A good set-up for one Axolotl would consist of an aquarium of 60 x 38 x 30cm (24 x 15 x 12in). The water should be around 10-20 C (50-68 F) and shallow, as deep as the Axolotl is long. Decorate the aquarium with a mixture of plastic plants and oxygenating plants, with maybe a couple of large pebbles. However, don't overcrowd your set-up - make sure your Axolotl has plenty of space. Keep your set-up out of direct light as Axolotls don't have eyelids and are sensitive to too much light.
You can use an internal water filter to help keep the water clean (keeping in mind Axolotls like the water relatively still). Weekly cleaning of pebbles/plants and a partial water change is recommended. Remember, do not use water straight from the tap, let it stand for 48hrs before introducing to your aquarium. Avoid using any size gravel that can be swallowed; causing ill health and can even be fatal.
Although best kept alone, a few Axolotls of a similar size can be kept together. If not fed regularly or if they don't have enough room, Axolotls have been known to bite off each other's limbs. They feed on a varied diet from crickets to worms, eating nearly anything that will fit in their mouth (they love freshwater shrimp and will eat pellets).
For further reading we recommend Keeping Axolotls by Linda Adkins.
From time-to-time we sell imperfect Axolotls at a reduced price, this usually means one or more limbs are missing/damaged. A missing limb should grow back within a couple of months.
Imperfect Axolotls are healthy, and only sold when we are happy that any missing limbs are healing well.
We are regularly asked for pellets for Axolotls, so we are now offering the sinking pellets we feed our Axolotls. Buy Axolotl Pellets.
We do sell Axolotl Spawn; it is only on site for a limited period and sells fast. If you're interested in some Axolotl eggs, please request an e-mail update via the E-mail Notifications link below. For some helpful tips, read this article on rearing Axolotl Spawn.
Caution: Please Note! When providing water for your amphibians, this MUST be treated with an aquarium de-chlorinating solution. The chlorine will harm and possibly kill your amphibians after a period of time. Alternatively, you can use fresh, clean rainwater!
Can an adult learn to speak a second language with the accent of a native? Not likely, but new research suggests that we would make better progress, and be understood more easily by our conversational partners, if we abandoned a perfect accent as our goal in the language learning process.
For decades, traditional language instruction held up native-like pronunciation as the ideal, enforced by doses of “fear, embarrassment and conformity,” in the words of Murray J. Munro, a professor of linguistics at Simon Fraser University in Canada. Munro and a co-author, University of Alberta linguist Tracy Derwing, argue that this ideal is “clearly unrealistic,” leading to disappointment and frustration on the part of most adult language learners. Indeed, a growing body of evidence points to a “critical period” in childhood for acquiring correctly accented fluency in a given language; even as research on neuroplasticity has pushed the limits of what adults can learn, this boundary has remained stubbornly in place. In light of these findings, a newer generation of adult foreign-language teachers has given up pronunciation instruction altogether, assuming it is a futile effort.
Both of these assumptions are wrongheaded, contend Munro and Derwing. Pronunciation can be learned—but it should be learned with the goal of communicating easily with others, not with achieving a textbook-perfect accent. Adult students of language should be guided by the “intelligibility principle,” not the old “nativeness principle.” As Derwing and Munro note, “even heavily accented speech can be highly comprehensible.” (In a 2009 article published in the journal Language Teaching, the two warn against the “charlatanism and quackery” of the “accent reduction industry.” Such books, tapes and classes claim to be able “to eliminate a foreign accent within specific periods of time; 28 days is a popular number,” the authors observe. “There is no empirical evidence that this ever actually happens.”)
Learners guided by the intelligibility principle focus less attention on individual vowels and consonants, and more attention to the “macro” aspects of language, such as general speaking habits, volume, stress, and rhythm. A study by Derwing and colleagues showed that this approach can work. The investigators divided subjects into three groups: the first received foreign language instruction with no particular focus on pronunciation; the second received instruction with a focus on pronouncing the individual segments of language; and the third received “global” pronunciation instruction on the general way the foreign tongue should sound. After 12 weeks of classes, the students were asked to tell a story in their new language, and their efforts were rated by native-speaking listeners. Only the global group, the listeners reported, showed significant improvement in comprehensibility and fluency.
The intelligibility principle may be behind the acknowledged effectiveness of immersion-learning programs: when we immerse ourselves in a foreign language, particularly as spoken by natives, we’re picking up more than specific vocabulary words: we’re getting the gist of how the language is spoken, and our own attempts reflect this expansive awareness. Few of us have the time or money to engage in complete immersion, but a good tip is to limit your conversational practice with other native English speakers. The speech of second language learners, research shows, tends to “converge” toward a version of the foreign tongue that is more like the speakers’ native language. Instead, seek out someone who grew up talking the way you want to talk, and practice, practice, practice. You won’t sound perfectly like a native, but the natives will understand you perfectly well.
A hazardous waste is a waste with a chemical composition or other properties that make it capable of causing illness, death, or some other harm to humans and other life forms when mismanaged or released into the environment.
A waste is a hazardous waste if it is a listed waste, a characteristic waste, used oil, or a mixed waste. Specific procedures determine how waste is identified, classified, listed, and delisted.
TYPES OF HAZARDOUS WASTE
Hazardous waste is divided into different types (e.g., universal waste) or categories, including RCRA hazardous waste and non-RCRA hazardous waste. Properly categorizing a hazardous waste is necessary for land disposal restrictions, treatment standards and fees, amongst other things.
Hazardous waste generators are divided into two categories (Small Quantity Generators and Large Quantity Generators) based on the amount of waste they produce each month. Different regulations apply to each generator category.
HAZARDOUS WASTE RECYCLING
DTSC implements hazardous waste recycling laws and developed the hazardous waste recycling regulations to promote the reuse and reclamation of useful materials in a manner that is safe and protective of human health and the environment. Hazardous waste laws define recycling differently.
The 2008 Nobel Prizes were awarded to the newest group of laureates at Stockholm's Concert Hall in Sweden on Dec. 10. Among this year's winners is Columbia professor Martin Chalfie, who shares the Nobel Prize in Chemistry with Roger Tsien of the University of California San Diego and Osamu Shimomura of the Woods Hole Oceanographic Institution, for their individual work on the green fluorescent protein (GFP) and its use in biological science research.
Image credit: Hans Mehlin
Introducing the Nobel Laureates in Chemistry, Professor Måns Ehrenberg credited the researchers for having "radically changed the scientific agenda." (Watch the full ceremony.)
"Improved variants of GFP and GFP-like proteins in synergy with high-resolution microscopes, computational technology, and powerful theoretical approaches are currently fueling a scientific revolution...in biological, medical and pharmaceutical research," said Ehrenberg. GFP naturally occurs in the Pacific-dwelling jellyfish, aequorea Victoria.
Chalfie, chair of Columbia's department of biological sciences and the William R. Kenan Jr. Professor of Biological Sciences, was honored for demonstrating that GFP could be expressed in other organisms.
Few researchers in the scientific community believed that the protein could be expressed in other organisms, but Chalfie held on to his differing opinion, believing in the promise of GFP's potential impact on scientific research.
In 1993 and 1994, Chalfie was able to prove that GFP could indeed be expressed in two living organisms: a small roundworm, C. elegans, and the bacterium E. coli. "Professor Chalfie's results not only show the power of experiment over scientific prejudice," said Ehrenberg, "but also make it clear to many that GFP was destined to become a universal genetic marker."
Following the introduction, Chalfie received the Nobel Prize Medal, Nobel Prize Diploma and the document confirming the Nobel Prize amount from King Carl XVI Gustaf of Sweden. After shaking hands with the King, Chalfie, with a broad smile on his face, bowed and blew a kiss to the audience.
Chalfie shares this prestigious honor with Shimomura, who was awarded the Nobel for his discovery of GFP in the Aequorea victoria jellyfish, and Tsien, whose research has led to the development of engineered variants of GFP.
Browser security warnings can work to protect users from phishing and malware sites – but “warning fatigue” means important alerts over site security can be completely ignored.
Users of Google’s Chrome ignored SSL warnings (relating to a secure protocol used for passwords, internet transactions and banking) 70.2% of the time, a study of 25 million real-life warnings found. Overall, a study using metrics from Firefox and Chrome found that the effectiveness of warnings varies widely.
“Google Chrome’s SSL warning had a clickthrough rate of 70.2%. Such a high clickthrough rate is undesirable: either users are not heeding valid warnings, or the browser is annoying users with invalid warnings and possibly causing warning fatigue,” said the U.C. Berkeley researchers. The study, Alice in Warningland, was part-funded by Google.
“During our field study, users continued through a tenth of Mozilla Firefox’s malware and phishing warnings, a quarter of Google Chrome’s malware and phishing warnings, and a third of Mozilla Firefox’s SSL warnings,” the researchers said.
The researchers analysed the size, type and frequency of warning messages and found that users tended to click rapidly through warnings about “untrusted issuers” and name and date errors – both common warnings, and ignored by nearly half of users.
The researchers say that “warning fatigue” has significant impact – “users click through more-frequent errors more quickly,” they say.
The researchers concluded that previous studies – showing that browser warnings simply did not work – relied on outdated data, harvested in a period between 2002 and 2009 when browsers were rapidly evolving. In particular, the large phishing warnings now delivered by modern browsers were much more effective than previous, more discreet warnings.
“Phishing toolbars have been replaced with browser-provided, full-page interstitial warnings. As a result, studies of passive indicators and phishing toolbars no longer represent the state of modern browser technology. In contrast, a majority of users heeded five of the six types of browser warnings that we studied,” the researchers said.
Users with high levels of technical knowledge – such as Linux users – might be even more likely to ignore warnings, the researchers said. Warnings should be tailored to their audience, the paper concludes.
“Technically advanced users might feel more confident in the security of their computers, be more curious about blocked websites, or feel patronized by warnings,” the researchers said. “Studies of these users could help improve their warning responses. Designers of new warning mechanisms should always perform an analysis of the number of times the system is projected to raise a warning, and security practitioners should consider the effects that warning architectures have on warning fatigue.”
Author Rob Waugh, We Live Security
Structure and diversity in Atlantic and Pacific communities of the toxigenic diatom Pseudo-nitzschia
Thursday, July 28, 2011
Species in the diatom genus Pseudo-nitzschia are distributed throughout the world’s oceans. Several Pseudo-nitzschia species are known to produce the neurotoxin domoic acid (DA), which accumulates in bivalves and planktivorous fish and causes the human syndrome Amnesic Shellfish Poisoning (ASP). Like other diatom genera, Pseudo-nitzschia species exhibit cryptic morphology and can be difficult to identify with microscopy. Here, molecular approaches were developed and used to describe Pseudo-nitzschia species distributions in Pacific and Atlantic waters. The detection of highly structured communities across local and regional spatial scales, and during different seasons, demonstrates the importance of coastal and estuarine processes in the assemblage of phytoplankton species. We are currently using biophysical models and the Environmental Sample Processor, an in situ genomic sensor, to better characterize the dynamics of toxic Pseudo-nitzschia species in the Gulf of Maine.
Redfield Auditorium - 12:00 Noon
Dr. Kate Hubbard
Postdoctoral Scholar Biology Department
Woods Hole Oceanographic Institution
co-sponsored event with CINAR
In the cash basis of accounting, an expense is recorded when paid, and revenue is recorded when received. In the accrual basis of accounting, an expense is recorded when it is incurred, and revenue is recorded when earned. The accrual basis requires adjusting entries.
Adjusting entries are made at the end of an accounting period to apply the matching principle and to more accurately state the amount of assets and liabilities. Most adjusting entries can be grouped into two categories: accruals and deferrals.
Accruals are the accumulation of expenses or revenue over a period of time. At the end of an accounting period, there are usually some items that have accrued but have not been recorded. In order to show the proper amount of items, adjusting entries should be made for all accruals.
Deferrals are the advance payment of expense items that benefit more than one accounting period or the advance receipt of revenue that will not be fully earned at the end of an accounting period. Adjusting entries should be made for deferrals to allocate the appropriate amount of expenses or revenue to the appropriate accounting period.
Accrued expenses represent both an expense and a liability. Thus, accrued expenses can also be referred to as accrued liabilities. Accrued expenses are those expenses that build up during the current accounting period but will not be paid until the next accounting period. A common example of an accrued expense is unpaid salaries at the end of an accounting period. An adjusting entry must be made to debit the Salaries Expense account and credit the Salaries Payable account. This has the effect of recording all salaries incurred in a period in an expense account and recognizing the liability for unpaid salaries. All accrued expenses involve a debit to an expense account and a credit to a liability account.
Accrued revenue represents both an asset and revenue. Thus, accrued revenue can also be referred to as accrued assets. Accrued revenue occurs when revenue has been earned but not collected at the time the accounting period ends. The adjusting entry for accrued revenue requires a debit to an asset account (such as Accounts Receivable, Rent Receivable, Interest Receivable, etc.) and a credit to a revenue account.
When an accrued expense is paid in the next accounting period - or when accrued revenue is received in the next accounting period - the entry must be split between the part of the accrual that pertains to the previous accounting period and the part that pertains to the current accounting period. Some accountants, however, do not like to split an entry between two accounting periods. In Chapter 10 we discussed a technique known as reversing entries that allows the accountant to make routine entries as if an accrual had not taken place. A reversing entry is made as of the first day of the next accounting period and is the exact reverse of the adjusting entry for the accrual.
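To make the mechanics concrete, here is a minimal sketch in Perl of the salary accrual described above, followed by its reversing entry and the routine payment entry. The amounts ($3,000 accrued in the old period, $5,000 paid in the new period) and the account names are hypothetical, chosen only to illustrate how each entry balances; they are not taken from the text.

    use strict;
    use warnings;

    # Hypothetical figures: $3,000 of salaries accrue by the end of the old
    # period; a further $2,000 is earned in the new period, and the full
    # $5,000 is paid on the next payday.

    # Adjusting entry at period end (accrued expense):
    #   debit Salaries Expense, credit Salaries Payable.
    my @adjusting = (
        { account => 'Salaries Expense', debit => 3000, credit => 0    },
        { account => 'Salaries Payable', debit => 0,    credit => 3000 },
    );

    # Reversing entry on the first day of the new period:
    # the exact reverse of the adjusting entry.
    my @reversing = (
        { account => 'Salaries Payable', debit => 3000, credit => 0    },
        { account => 'Salaries Expense', debit => 0,    credit => 3000 },
    );

    # Routine payment entry, recorded as if no accrual had taken place
    # (possible only because the reversing entry was made).
    my @payment = (
        { account => 'Salaries Expense', debit => 5000, credit => 0    },
        { account => 'Cash',             debit => 0,    credit => 5000 },
    );

    # Every journal entry must balance: total debits equal total credits.
    for my $entry (\@adjusting, \@reversing, \@payment) {
        my ($dr, $cr) = (0, 0);
        for my $line (@$entry) {
            $dr += $line->{debit};
            $cr += $line->{credit};
        }
        die "Entry does not balance\n" unless $dr == $cr;
    }

    # Net effect on Salaries Expense in the new period: 5000 - 3000 = 2000,
    # exactly the salaries earned in that period.
    print "All entries balance.\n";

Run as an ordinary script, the sketch simply confirms that each entry balances and that the reversing entry leaves only the new period's $2,000 of salaries in the expense account.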
A deferred expense is an advance payment for goods or services that benefits more than one accounting period. Deferred expenses are also referred to as prepaid expenses or deferred charges.
Prepaid expenses can be accounted for in two ways. The prepayment can be recorded (1) as an asset or (2) as an expense. Both methods yield identical results, but the end-of-period adjusting entry depends on how the prepayment was first recorded.
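As an illustration of the two treatments of a prepaid expense, the following sketch assumes a hypothetical $1,200 rent payment covering twelve months, of which three months have expired by the end of the period; the account names and figures are invented for the example. Whichever way the prepayment was first recorded, the adjusting entry leaves the same $300 in Rent Expense and $900 in Prepaid Rent.

    use strict;
    use warnings;

    # Hypothetical prepayment: $1,200 of rent paid in advance for twelve
    # months, three of which have expired by the end of the period.
    my $prepaid        = 1200;
    my $months_expired = 3;

    my $expired   = $prepaid * $months_expired / 12;   # 300 -> expense for the period
    my $unexpired = $prepaid - $expired;                # 900 -> asset carried forward

    # Method 1: prepayment first recorded as an asset (Prepaid Rent).
    # Adjusting entry: debit Rent Expense $expired, credit Prepaid Rent $expired.
    my %method1 = ( 'Rent Expense' => $expired, 'Prepaid Rent' => $prepaid - $expired );

    # Method 2: prepayment first recorded as an expense (Rent Expense).
    # Adjusting entry: debit Prepaid Rent $unexpired, credit Rent Expense $unexpired.
    my %method2 = ( 'Rent Expense' => $prepaid - $unexpired, 'Prepaid Rent' => $unexpired );

    # Both methods leave identical period-end balances.
    printf "Method 1: expense %d, asset %d\n", $method1{'Rent Expense'}, $method1{'Prepaid Rent'};
    printf "Method 2: expense %d, asset %d\n", $method2{'Rent Expense'}, $method2{'Prepaid Rent'};

The same reasoning applies to deferred revenue, discussed next, with the liability account Unearned Revenue taking the place of the prepaid asset.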
Deferred revenue is the advance receipt of revenue that will not be fully earned until a later period. Common examples of deferred revenue include sales of season tickets by an athletic team, subscriptions received in advance by a magazine, and rent collected at the beginning of a lease period. With regard to subscriptions, the Subscriptions Income account shows the amount earned from subscription sales, while the Unearned Subscriptions Income account shows the dollar amount of subscriptions due to subscribers.
Revenue that is deferred for a shorter period (less than a year) is referred to as unearned revenue and is listed on the balance sheet as a current liability. Revenue that is deferred for a longer period (in excess of a year) is referred to as deferred credits and is reported on the balance sheet under the heading Deferred Credits.
Deferred revenue can be accounted for in two ways. The advance receipt can be recorded (1) as a liability or (2) as revenue. Both methods yield identical results, but the end-of-period adjusting entry depends on the initial recording.
As humans spread into prime bruin habitat, some bears are becoming "suburban guerrillas." But a team of wildlife managers in northwestern Montana is working as hard to retrain such "problem" bears as the bears have to work to put on the 20,000 to 30,000 calories per day they need before winter.
In True Grizz: Glimpses of Fernie, Stahr, Easy, Dakota and Other Real Bears in the Modern World, wildlife biologist Douglas Chadwick, a self-proclaimed "grizzly groupie," rides along with bear educators who wear caps emblazoned with the slogan "Teach Your Bears Well." They range about the Rockies in a pickup truck, trying to improve the odds that the 1,000 to 1,300 grizzlies that live south of Canada will continue their modest comeback.
The grizzly educators’ lesson plan relies on a stiff course of negative reinforcement. Tactics include shooting bruins with rubber bullets, shepherding the bears with a pack of imported Karelian dogs, and — their most controversial teaching tool — feeding roadkill to a particularly vexing specimen, so he’ll put on enough fat to den, and quiet down.
The bear educators teach humans, too. (Note to self: Keep the 50-pound bag of dog food off the back porch when a hungry sow and her two cubs have been spotted in the neighborhood.)
Some of the bears are good students, and return to a diet heavy in wild huckleberries, instead of half-eaten hamburgers. Others, however, just won’t learn, and either end up in captivity on a diet of human handouts, or are killed.
Despite its corny title, True Grizz goes a long way toward clearing away rip-snorting tales to explain what most encounters today between Montana’s two top predators — grizzlies and humans — are actually like. Sadly, today, it’s not just circus bears that have to be trained.
True Grizz: Glimpses of Fernie, Stahr, Easy, Dakota and Other Real Bears in the Modern World By Douglas H. Chadwick, 176 pages, hardcover: $24.95 Sierra Club Books, 2003.
Galaxies and the Universe
Misconceptions and Educational Research
Common misconceptions include:
- Other galaxies may be inside our Solar System or the Milky Way.
- All galaxies are the same.
- Most galaxies can easily be seen without a telescope.
How do visitors understand the universe?
This article, which was published in the May/June 1999 Association of Science-Technology Centers Newsletter, presents an overview of several front-end visitor studies from museums and science centers.
The Solar System in Its Universal Context: Ideas, Misconceptions, Strategies, and Programs to Enhance Learning
J.A. Grier, E.L. Reinfeld, M.E. Dussault, S.J. Steel and R. Gould, Universe Forum, presented at LPSC XXXVI
The study's data suggest that some misconceptions relating to the size of the solar system and to the placement, distance, scale and hierarchy of astronomical objects are introduced or reinforced by not including the solar system in a consistent, coherent picture within the rest of the galaxy and universe. If these ideas and misconceptions are not addressed, they can form barriers to developing new and more accurate internal models, and impede the assimilation of any new evidence or ideas within those models.
Beyond the Solar System: Expanding the Universe in the Classroom
How can teachers and students explore some of the biggest questions about our place in space and time? This professional development DVD is filled with video, print, and online resources for educators of students and adults alike.
|
<urn:uuid:a69d7993-ef2a-4e6b-93b7-ef04ca33732e>
|
CC-MAIN-2016-26
|
http://www.lpi.usra.edu/education/pre_service_edu/GalaxiesMisconceptions.shtml
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00001-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.881487 | 312 | 3.921875 | 4 |
The First Century 1840 to 1940
Barossa is founded by a wealthy, philanthropic English shipping merchant, George Fife Angas, soon after South Australia is settled in 1836. The free colony’s first Surveyor General, Colonel William Light, names the fertile valley after the Barrosa Ridge in the Spanish region of Andalusia, where he fought a famous battle in the Peninsular War in 1811. However, there is an error in the registration process and a new Australian name is born: Barossa.
Back in London, Angas welcomes a proposition by a dissenting Lutheran leader, Pastor August Kavel, who wants to re-settle his flock of Silesian peasant farmers and tradesmen in the New World; they arrive in 1842 and settle at Bethany. The Silesian settlers find fruit growing, especially grapes, ideally suited to the Mediterranean climate, and by the 1890s dozens of wineries have been established, including Oscar Seppelt’s Seppeltsfield, Johann Gramp’s Orlando, Samuel Smith’s Yalumba, William Salter’s Saltram and Johann Henschke’s Henschke Wines.
Fortified wines such as Port, Sherry, Muscat and Tokay become popular overseas thanks to the Mother Country’s policy of Imperial Preference, and by 1929, 25% of Australia’s total wine production comes from Barossa. However, the Great Depression and World War II slash demand for wine, and wineries and growers struggle to sell their fruit.
Read next story: Post-War Reconstruction 1940
|
<urn:uuid:222ddd18-3201-416a-9664-5ba39642b8e7>
|
CC-MAIN-2016-26
|
http://www.barossa.com/wine/a-wine-history-from-1842/the-first-century-1840-to-1940/the-first-century-1840-to-1940
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00010-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.918348 | 332 | 3.234375 | 3 |
Dancing robots may be better at improving the communication skills of children with autism than human teachers, researchers believe.
Evidence emerging from a trial suggests that pupils who are on the autistic spectrum learn better from the automatons than human teachers.
Researchers say that if the robots can be proven to help difficult-to-reach youngsters, then in the future they could also be used to help pupils in mainstream classes.
Robots are being used as classroom buddies for autistic children in a groundbreaking initiative that aims to improve social interaction and communication.
Max and Ben, two knee-high humanoid robots that can dance to Thriller, play games and emulate Tai Chi, are to be showcased by researchers at the University of Birmingham as part of the ESRC festival of Social Sciences.
The Aldebaran robots, which have been trialled by pupils at Topcliffe Primary School in Castle Vale, Birmingham since March, are the latest in a range of innovative technologies that was on display at the school.
The event is about using technologies to help children with autism. Children, teachers and researchers will demonstrate the latest technologies and share their experiences of using them in the classroom.
"We have been looking at how technology can support pupils with autism to communicate more effectively," Dr Karen Guldberg, from the University of Birmingham's School of Education, said.
"Pupils and teachers are experimenting with the robots and other technologies in a developmental way and they are showing significant benefits for the classroom. The robots have been modelling good behaviour and acting as buddies," Guldberg said.
Research shows that children with autism often find computers and technology safe, motivating and engaging, particularly in the areas of social interaction and communication.
Aldebaran Robotics is a world leader in the development of humanoid robots. Topcliffe has hosted two of the robots in its classrooms since March 2012.
|
<urn:uuid:7ef1c5dd-831b-445d-8a78-1aef79e154a8>
|
CC-MAIN-2016-26
|
http://www.rediff.com/business/report/tech-now-dancing-robots-to-teach-autistic-kids/20121109.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00029-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.960148 | 379 | 3.25 | 3 |
Dangerous Road Ahead
Technology makes our lives easier by allowing us to stay connected to the world around us. But, at what point does technology make life more dangerous? Lately, newspapers and magazines have been filled with articles about people who were injured because they were sending text messages to friends or using their cell phones to access the Internet while driving. One recent incident involved celebrity plastic surgeon Frank Ryan. According to the California Highway Patrol, Dr. Ryan died while "tweeting" about his dog.
Here are some disturbing statistics:
- 72% of adults use text messaging.
- 47% of adults who use text messaging say they have sent or read messages while driving, according to a Pew Research Center survey.
- 49% of adults said they have been in a car when the driver was sending or reading text messages, according to the Pew survey.
- 54% of workers who have smart phones – including 66% of sales workers and 59% of professional business services workers – have admitted to checking messages while driving, according to a CareerBuilder survey.
- Text messaging while driving makes an accident or other driving-related problem 23.2 times more likely, according to a Virginia Tech Transportation Institute study.
- A person who sends a text message while driving at 35 mph will travel an additional 25 feet before reacting and beginning to stop, compared to an additional 4 feet for a drunk driver, also according to the Virginia Tech study.
Many state and federal legislators have decided to take action against this problem. In Michigan, it is against the law for drivers to read, write or send text messages while they drive. Specifically, House Bill 4394 states "a person shall not read, manually type, or send a text message on a wireless 2-way communication device that is located in the person's hand or . . . lap . . . while operating a motor vehicle that is moving on a highway or street in this state." Drivers who violate the law will receive a $100 fine for the first offense, a $200 fine for subsequent violations and points on their driving records.
The U.S. Department of Transportation (DOT), partnering with the Occupational Safety and Health Administration (OSHA), announced a rule prohibiting commercial bus and truck drivers from sending text messages while driving, and barring train operators from using cell phones and other electronic devices while on the job.
Within the DOT, the Federal Motor Carrier Safety Administration (FMCSA) has prepared a final rule that will allow the FMCSA to fine drivers up to $2,750 and motor carriers up to $11,000 for violations. Additionally, states would be required to disqualify commercial licenses for 60 days for drivers who violate the rule twice within three years and 120 days for drivers who violate the rule three times within three years.
Secretary of Labor Hilda Solis stated the reasoning: "OSHA is clear, employers must provide a workplace free of serious recognized hazards. It is imperative that employers eliminate financial incentives that encourage workers to text while driving."
So, what does this mean for you? It means that you may be liable if one of your employees sends a work-related text message or e-mail while driving. In Michigan, an employer may be held liable for the negligence of its employees, agents and contractors – even when the employer did not act negligently. Generally, it applies when an employee is acting within the scope of employment or for the benefit of the employer.
So, what can you do? You can update your policy to specifically address texting and e-mailing while driving and warn employees of its dangers. While it may be easier or more convenient to allow employees to continue to text and drive at the same time, it is a better practice to put an end to it – before an accident occurs.
|
<urn:uuid:9bfda144-99e2-4a41-b478-aef23aeef745>
|
CC-MAIN-2016-26
|
http://www.wnj.com/Publications/Dangerous-Road-Ahead
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00054-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.957386 | 783 | 2.65625 | 3 |
What does the UN climate solutions report tell us about efforts to tackle climate change and the role that we can play to address it? A new set of infographics from Climasphere and the United Nations Foundation breaks the latest climate report down for you.
The UN’s panel of science experts has told us that they are more certain than ever before that humans are causing the climate to change and that climate change will impact everything from ecosystems and species to hunger, poverty, development, and global conflict. Now it’s time to embrace the climate solutions.
On April 13, the UN’s Intergovernmental Panel on Climate Change (IPCC) released its Working Group III report on climate solutions and the practices we can adopt to address climate change and avoid the most severe future consequences. From policymakers working to end deforestation and investing in cleaner forms of energy, to citizens urging leaders to take action on climate and making behavioral shifts, we all have a role to play in addressing climate change.
What will yours be?
TAKE ACTION: Share these infographics on your social media channels and urge your friends and family to learn about how we can address climate change!
|
<urn:uuid:b78559cc-a279-48c7-abb4-87ab17fe1200>
|
CC-MAIN-2016-26
|
http://unfoundationblog.org/infographic-un-climate-solutions-report-in-numbers/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00122-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.929711 | 236 | 2.875 | 3 |
February 1, 2003
Imagine shrinking all the information at the Library of Congress into a device the size of a sugar cube or detecting cancerous tumors as tiny as a few cells. That was President Clinton's vision when he announced the National Nanotechnology Initiative in 2000.
Federal scientists, together with researchers at universities and in industry, have been working to harness the vast world of microscopic materials and structures to develop breakthrough technologies that include computers as small as the head of a pin. The $700 million program to support cutting-edge research in nanotechnology is moving ahead surprisingly well, considering the difficulties in management and coordination among 16 agencies.
Despite the ultra-small working scale, government officials and researchers expect the nanotechnology initiative to produce macro-sized payoffs. The multi-agency effort is designed to pave the way for revolutionary breakthroughs in advanced materials and manufacturing, computers and electronics, aerospace technology, medicine and health care, environment, energy, biotechnology, agriculture and national security. Research goals include developing materials with 10 times the strength of steel, but only a fraction of the weight, and tiny "quantum dots"-crystals that emit different wavelengths of light, depending on their size. Such dots offer applications for advanced lasers and computers, as well as for possible biological markers of cellular activity.
In the emerging world of nanotechnology, scientists and engineers are manipulating matter atom by atom, molecule by molecule. Dimensions are measured in billionths of a meter-or approximately 1/100,000th the diameter of a human hair.
Funded by 10 agencies, the nanotechnology initiative was launched in fiscal 2001 with a budget of $464 million. Funding jumped to $604 million for fiscal 2002, and $710 million has been requested for fiscal 2003. Some researchers say federal support should be ramped up even more rapidly to take advantage of new developments in the field and keep pace with nanotechnology efforts in Europe and Japan.
The Bush administration is enthusiastic about providing continued support and funding for the nanotechnology initiative. John Marburger, the president's science adviser and chief of the White House Office of Science and Technology Policy, has made nanotechnology one of six research priorities.
The nanotechnology initiative "holds great promise across many scientific fields and most sectors of the economy," Marburger and Office of Management and Budget Director Mitch Daniels said in a May 30 memo to department and agency heads involved in the program. "Of particular importance are nanostructures that more effectively collect and deliver samples to sophisticated sensors (chemical, biological, radiological, electromagnetic, photonic, acoustic, or magnetic)," they noted. The enhanced sensors are being developed for such applications as rapid detection of chemical and biological weapons on the battlefield as well as in terrorist attacks.
Physicist Neal Lane, who was President Clinton's science adviser and a leader in the nanotechnology initiative, is pleased with the government's progress.
"The excitement that we noted in the research community for work in nanoscale science and engineering has simply continued to grow. And, of course, that's very important, because a lot of these questions still remain-very fundamental research questions. So you've got to have the best people in fields like chemistry, physics, materials, biology working on these problems," says Lane, now a professor at Rice University in Houston.
Major players in the nanotechnology initiative include the White House Office of Science and Technology Policy, the National Science Foundation, the Defense Department, the Energy Department, NASA, the National Institute of Standards and Technology and the National Institutes of Health. Also funding some research are the Environmental Protection Agency and the Agriculture, Justice and Transportation departments. Other agencies with input are the Food and Drug Administration,the National Oceanographic and Atmospheric Administration, the Nuclear Regulatory Commission, and the State and Treasury departments.
At the White House, the nanotechnology initiative is managed by the National Science and Technology Council's Subcommittee on Nanoscale Science, Engineering and Technology (NSET). The subcommittee's 41 members include representatives of the White House and the 16 agencies involved. The council's National Nanotechnology Coordination Office handles technical and administrative support.
The National Academy of Sciences praised the initiative's progress in a June 2002 report. "The leadership and investment strategy established by NSET has set a positive tone for the NNI [National Nanotechnology Initiative]," the report said. "The initial success of the NNI can also be measured by the number of foreign governments that have established similar . . . programs in response."
But Samuel Stupp, chairman of the academy's review committee, sees room for management improvements. "I wouldn't say that there have been serious shortcomings. That's maybe a bit too strong. I would say that they have been doing a fair job, but they could do better." Stupp is director of the Institute for Bioengineering and Nanoscience in Advanced Medicine at Northwestern University in Evanston, Ill.
The review panel pointed to problems with both interagency coordination and the development of interagency partnerships.
"NSET forms a solid foundation on which to build an NNI that adds up to more than the sum of its parts. However, more is needed to achieve meaningful interagency coordination and collaboration," the report said. The panel recommended formation of a nanoscience and nanotechnology advisory board "capable of identifying research opportunities that do not fit within any single agency's mission.
"NSET member agencies have done a much better job of encouraging federal partnerships with industry, universities and local government than they have of encouraging meaningful interagency partnerships," the report said. "While the NNI implementation plan lists major interagency collaborations, the committee has no sense that there is much common strategic planning in those areas, any significant interagency communication between researchers working in those areas, or any significant sharing of results before they are published in the open literature."
Managing the nanotechnology initiative can be "like herding cats," says James Murday, executive secretary of the nanoscale science subcommittee and director of the coordination office. "You've got 16 different groups. They have very diverse interests, very diverse ways of doing things," Murday says.
In developing effective interagency partnerships, Murday says, part of the challenge is dealing with different agency cultures and operating methods. For example, he says, the NSF disperses university research grants through peer review panels, while the Defense Department leaves the decisions to its program managers.
Another challenge is the rapidly expanding workload. Murday, superintendent of the chemistry division at the Naval Research Laboratory, says his job as director of the coordination office was supposed to be half time, but he puts in 70-hour workweeks. The coordination office, based in Arlington, Va., plans to replace Murday with a full-time director and bring aboard a third contract staffer.
Researchers are upbeat about the potential benefits of nanotechnology, but there is also an undercurrent of concern about the risks inherent in such research. One of the most outspoken critics of the technology is Bill Joy, co-founder of Sun Microsystems.
"I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil," Joy wrote in the April 2000 issue of Wired magazine. "Unfortunately, as with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones. Nanotechnology has clear military and terrorist uses." He proposed that the government and scientific community set stringent limits on how far nanotechnology research will be pursued.
Richard Russell, associate director for technology at the White House Office of Science and Technology Policy, acknowledges that there are dangers with any scientific endeavor.
But with nanotechnology, Russell says, "the kind of research that we're talking about is in areas that are directly helpful to both our economy and things like human health. So we don't view research in these areas to be something that we should be afraid of. It's research that is going to be beneficial to the human race."
Here are the fiscal 2003 budget requests for nanotechnology at the major agencies involved in the nanotechnology initiative:
- National Science Foundation: $221 million
- National Institute of Standards and Technology: $43.8 million
- National Institutes of Health: $43.2 million
|
<urn:uuid:377e56e9-6b7b-44fd-b851-3386de5cfc00>
|
CC-MAIN-2016-26
|
http://www.govexec.com/magazine/magazine-managing-technology/2003/02/think-tiny-think-big/13439/print/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00077-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.949142 | 1,690 | 3.265625 | 3 |
Scarecrows are a tried and true piece of human technology, but nowadays birds harass more than just crops; they also bother planes. That's why South Korea has developed a new kind of overkill scarecrow for airports: an unmanned tank with laser and acoustic weapons.
The Korean Atomic Energy Group and a subsidiary of LG have been working together on the project for years, and now actual bird-scaring unmanned ground vehicles are rolling out to select Korean airports and air bases alike. Eight feet long and 1.2 tons, each tank is outfitted with cameras to track birds in daylight and at night, acoustic sensors, directional acoustic transmission hardware, and green lasers, all in the interest of harassing birds with 100 dB popping noises and frenetic laser patterns.
While unmanned, the vehicles are monitored and piloted from remote control stations, though they can avoid obstacles and travel to and from pre-programmed locations on their own. The designers of the tank claim it's 20 percent more effective than other solutions when it comes to keeping birds from hitting planes or getting sucked into engines. It's certainly at least 20 percent cooler. It's likely this technology will go on to be used in unmanned landmine detection systems, combat, and supply vehicles, but its first job as a robotic scarecrow will always be the most bizarrely awesome. Where can I take one for a test drive? [KBS News via Gizmag]
|
<urn:uuid:485d7e8b-134e-48d2-887a-04c51443d5e9>
|
CC-MAIN-2016-26
|
http://gizmodo.com/5961619/scarecrow-laser-tanks-are-as-awesome-as-they-are-overkill?tag=Overkill
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00139-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.945227 | 288 | 2.65625 | 3 |
Polygons are closed plane figures whose edges are straight lines. Polygons are regular if all of their sides and angles are equal. This means that all the corners, or vertices, of a regular polygon will lie on a circle. Usually the simplest method, then, to construct a regular polygon is to inscribe it in a circle.
Inscribing an equilateral triangle and a hexagon
Procedure: The radius of a circle can be struck exactly six times around the circle. Connecting the intersections of every other arc yields an equilateral triangle; connecting each successive intersection produces a six-sided figure or hexagon.
Inscribing a dodecagon
Procedure: Set the compass to the radius of the circle and strike six equidistant arcs about its perimeter. Connect two neighboring intersections to the center of the circle. Bisect the resulting angle. Beginning at the intersection of the bisector and the circle, strike six more arcs around the circle. There will be twelve equidistant intersections on the circle. These will mark the vertices of a dodecagon.
Inscribing a square
The tilted square, or diamond, was inscribed by connecting the ends of the horizontal and vertical diameters of a circle. A vertical diameter can be constructed as the perpendicular bisector of the horizontal diameter and vice-versa. The normal square was inscribed by connecting the diagonal diameters of the circle. These diameters were constructed by bisecting the right angles created by the horizontal and vertical diameters.
Inscribing an octagon
Procedure: Construct horizontal and vertical diameters and then bisect the quadrants of the circle to divide it into eight segments. Connect the endpoints of the four diameters to create an octagon.
The number of sides of any inscribed polygon may be doubled by further bisecting the segments of the circle. All of the polygons above are doublings of the relatively simple constructions of the equilateral triangle and the square. Much more complex is the construction of figures like the pentagon (five sides). This is covered in Part II.
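For readers who want to check a construction numerically, the same vertices can be computed directly, since a regular n-sided polygon inscribed in a circle of radius r spaces its vertices at equal angles of 360°/n. Below is a minimal Python sketch; the function name and the choice of the horizontal diameter as the x-axis are illustrative assumptions, not part of the classical compass-and-straightedge method.

    import math

    def inscribed_polygon(n, radius=1.0, start_angle=0.0):
        """Return the n vertices (x, y) of a regular n-gon inscribed in a circle."""
        step = 2 * math.pi / n                      # equal arcs, like equal compass strikes
        return [(radius * math.cos(start_angle + k * step),
                 radius * math.sin(start_angle + k * step))
                for k in range(n)]

    hexagon   = inscribed_polygon(6)     # radius stepped six times around the circle
    triangle  = inscribed_polygon(3)     # every other hexagon intersection
    dodecagon = inscribed_polygon(12)    # hexagon arcs bisected: twelve equidistant points
    diamond   = inscribed_polygon(4)     # ends of the horizontal and vertical diameters
    square    = inscribed_polygon(4, start_angle=math.pi / 4)   # diagonal diameters
    octagon   = inscribed_polygon(8)     # quadrants bisected into eight segments

Doubling the number of sides is just halving the step angle, which mirrors the bisection procedure described above.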
|
<urn:uuid:2b54b8f6-0f5d-4b6c-8275-4890d30aac28>
|
CC-MAIN-2016-26
|
http://condor.depaul.edu/slueckin/inscribereg1.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00152-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.881627 | 429 | 4.09375 | 4 |
|Name: _________________________||Period: ___________________|
This test consists of 5 short answer questions, 10 short essay questions, and 1 (of 3) essay topics.
Short Answer Questions
1. What biochemist does von Däniken cite in Chapter 1 as hypothesizing that the conditions for life may have developed more quickly on other planets?
2. Piri Reis is an admiral from where?
3. How many stars does the telescope of even a small observatory make visible, according to von Däniken in Chapter 1?
4. In Chapter 2, how many tons is the payload in von Däniken's hypothetical spaceship?
5. In Chapter 3, von Däniken writes that our historical past is pieced together through what kind of knowledge?
Short Essay Questions
1. What does von Däniken suggest about the Piri Reis maps in Chapter 3?
2. What does von Däniken cite from Genesis regarding giants in Chapter 4?
3. Why does von Däniken suggest our planet is not the only one which is capable of sustaining life in Chapter 1?
4. What does von Däniken suggest that scientists shed in their examinations?
5. Why would the crew choose the planet they would in the scenario in Chapter 2?
6. In Chapter 1, what question does von Däniken pose? What is his theory for an answer?
7. What hypothetical scenario does von Däniken present in Chapter 2?
8. How are the "Vimanas" described in Chapter 6? What evidence is related to Kunti?
9. Who is Jules Verne and why does von Däniken mention him in Chapter 2?
10. How fast does the rocket in Chapter 2 travel? How?
Write an essay for ONE of the following topics:
Essay Topic 1
What examples does von Däniken cite involving Noah and Moses from the Holy Bible? What is his theory on the Ark of the Covenant?
Essay Topic 2
Explain the connections between the Easter Island statues and Tiahuanaco. What explanation does von Däniken supply in Chariots of the Gods? Do you agree or disagree? Why?
Essay Topic 3
Describe the Easter Island statues. What is their importance to von Däniken's theories? How does he hypothesize they were created and erected?
This section contains 854 words
(approx. 3 pages at 300 words per page)
|
<urn:uuid:edfdcf97-08b0-4f81-b21c-d38cc41e8b4c>
|
CC-MAIN-2016-26
|
http://www.bookrags.com/lessonplan/chariots-of-the-gods/test5.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00168-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.911941 | 535 | 2.890625 | 3 |
This volume contains the text of eight poems by the third-century BC Greek poet Theocritus, together with an introduction and extensive commentary. This is the first full-scale commentary on the work of Theocritus since Gow's edition of 1950, and is the first to exploit the recent revolution in the study of Hellenistic and Roman poetry. It offers new readings of all the poems, which show both how Theocritus differs from subsequent pastoral poetry, and how his poems, through their influence on Virgil, established the Western pastoral tradition.
Rent Theocritus 1st edition today, or search our site for other textbooks by Theocritus. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Cambridge University Press.
|
<urn:uuid:d74786b8-9688-4417-bd38-62461dceb6d1>
|
CC-MAIN-2016-26
|
http://www.chegg.com/textbooks/theocritus-1st-edition-9780521574204-052157420x?ii=10&trackid=08ad220a&omre_ir=1&omre_sp=
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00017-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.953389 | 190 | 2.609375 | 3 |
Cervical spondylolisthesis is a vertebral misalignment condition located in the neck, most typically at C5, C6 or C7. Spondylolisthesis is usually seen in the lumbar spine, at L4 or L5, but can occur anywhere in the spinal anatomy in less typical circumstances.
Although not a common condition, severe cases of vertebral slippage in the neck can have dire and horrific effects. Luckily, most cases are minor and not even inherently symptomatic. In fact, many minor cases are unfairly vilified as the source of pain when there is absolutely no evidence of a pathological or pain-inducing process anywhere to be found in the surrounding neurological anatomy.
This essay examines vertebral displacement in the cervical spine and its potential effects.
Anterolisthesis is a vertebral misalignment in which the affected bone moves forward, towards the anterior aspect of the body, and out of usual alignment with the remainder of the vertebral bodies.
Retrolisthesis is the exact opposite diagnosis, in which one of the vertebra moves rearwards, towards the posterior aspect of the body, and out of typical alignment with the other cervical bones.
In either case profile, the extent of the slippage can range greatly. To simplify the diagnostic process, spondylolisthesis is graded using 4 categories of slippage and further quantified using a percentage scale.
Spondylolisthesis in the neck is diagnosed by the percentage of slippage endured by the affected bone. This will range from 1% to over 100% and will be further detailed using a grading scale of 1 to 4:
Grade 1 spondylolisthesis in the neck is minor and rarely cause for alarm or symptoms. Most cases involve a misalignment of less than 10%, although technically, grade 1 cervical vertebral displacement is defined as slippage between 1% and 25%.
Grade 2 spondylolisthesis is considered moderate and many cases are still not symptom generating. The degree of misalignment is rated between 26% and 50%
Grade 3 spondylolisthesis in the neck is rated at 51% to 75% slippage and is considered severe. Many cases will cause pain and some variety of neurological impairment, but this is not an absolute rule.
Grade 4 spondylolisthesis is extreme and is described as vertebral misalignment ranging from 76% to over 100%. Most cases are symptomatic and some may even enact a terribly affective condition known as an unstable spine.
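For illustration only (this is not a diagnostic tool), the grading bands described above translate directly into a small lookup. A minimal Python sketch, with the function name chosen here purely for clarity:

    def slippage_grade(percent_slippage):
        """Map a measured vertebral slippage percentage onto the 1-4 grading scale above."""
        if percent_slippage <= 0:
            raise ValueError("slippage should be a positive percentage")
        if percent_slippage <= 25:
            return 1   # minor; rarely symptomatic
        if percent_slippage <= 50:
            return 2   # moderate; often still symptom-free
        if percent_slippage <= 75:
            return 3   # severe; pain and neurological impairment are likely
        return 4       # extreme; 76% to over 100%, possible spinal instability

The grade is only a shorthand for the measured percentage; the clinical picture still depends on symptoms and imaging, as the surrounding text explains.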
Spondylolisthesis can result from a range of possible causative or contributory processes:
Congenital cases are very common and are typically mild and not problematic. Most patients will not even know they have vertebral slippage unless it is discovered through coincidental spinal imaging later in life.
Traumatic neck injury can dislodge vertebral bones and cause lasting misalignment issues.
The normal degenerative processes can cause the spinal joints to lose structural integrity and contribute to minor vertebral migration. Only in the worst cases of spinal deterioration will these cases progress to grade 3 or 4 levels.
A particular fracture of one of the spinal joint structures, called the pars interarticularis, can cause vertebral migration, especially in elderly patients. This fracture is called spondylolysis.
Minor to moderate cervical spondylolisthesis issues are usually not overly concerning, but should always be monitored for continued progression or degeneration. Grade 3 and 4 vertebral slippage issues are likely to cause pain to a mild or severe degree and may also enact significant functional impairment from neurological compression.
As the vertebrae drift further apart, the central spinal canal will no longer line up, possibly causing spinal stenosis in the neck.
The neuroforaminal openings will also not line up correctly, causing foraminal stenosis in the neck and possible cervical pinched nerves.
The vertebral interactions are also possibly affected, causing the potential for purely mechanical pain to exist from facet joints and other connecting structures which no longer interact properly.
|
<urn:uuid:1ae8f4f3-015a-43dd-9811-0b3f2061f2f9>
|
CC-MAIN-2016-26
|
http://www.neck-pain-treatment.org/cervical-spondylolisthesis.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00021-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.927977 | 849 | 2.5625 | 3 |
There are some major evolutionary jumps that seem to have occurred only once. Eukaryotic cells contain membrane-enclosed structures to perform different functions, and they comprise all forms of multicellular life on Earth. They arose from prokaryotes only once in four billion years, and no prokaryotic cells have been found that show intermediate levels of complexity.
Why only once? A recent "Hypothesis" paper in Nature posits that the answer lies in bioenergetics. The mitochondria that produce much of a eukaryotic cell's energy were once free-living prokaryotes; they still carry their own genomes, but those genomes now retain only the genes essential for energy production. To get an equivalent dose of energy-producing genes, a prokaryote has to make extra copies of its entire genome, a hurdle that keeps it from evolving a complex genome.
|
<urn:uuid:44a6f821-d8a2-4775-91f5-788c485744f3>
|
CC-MAIN-2016-26
|
http://www.microbeworld.org/index.php?option=com_jlibrary&view=article&id=5118
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00029-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.943726 | 183 | 3.453125 | 3 |
African Methodist Episcopal Zion Church, Methodist denomination. It was founded in 1796 by black members of the Methodist Episcopal Church in New York City and was organized as a national body in 1821. The church operates in the United States, Africa, South America, and the West Indies and maintains Livingstone College in Salisbury, N.C. The U.S. membership of the church in 1998 was about 1.2 million, making it one of the largest African Methodist bodies.
See D. H. Bradley, A History of the A.M.E. Zion Church (2 vol., 1956–70).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
<urn:uuid:bb323882-73cf-42b3-9d50-e7a120f51b6b>
|
CC-MAIN-2016-26
|
http://www.factmonster.com/encyclopedia/society/african-methodist-episcopal-zion-church.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00073-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.935449 | 146 | 2.703125 | 3 |
Beginnings of the Liberal Republican Party
In 1870, a faction of the Republican Party in Missouri bolted because it felt that the Republican Party was being too vindictive in its treatment of former Confederate sympathizers. The group called for a repeal of all legislation which “discriminated” against ex-Confederates. In the election of 1870, Missouri’s Liberal Republicans (with the support of state Democrats) elected B. Gratz Brown Governor and won two of the state’s nine seats in the U.S. House.
Following this dramatic victory, a movement began to take the party to a national level. A nascent LRP had existed in New York since 1870, when four candidates contested races for the U.S. House there. Carl Schurz, a U.S. Senator from Missouri, began to travel throughout the nation, urging the creation of a new national party which addressed the issues of concern to the average person [e.g., his speech in Nashville TN was covered by the New York Times on 9/21/1871]. One month later, the St. Louis Republican stated in an editorial that since the Democratic Party had no contender who could defeat President Grant in 1872, the Democrats should not offer a candidate but should allow the Liberal Republican Party to field a candidate [New York Times 10/17/1871]. The immediate names raised as presidential material included U.S. Senator Lyman Trumbull (R-IL), Charles F. Adams, and U.S. Supreme Court Associate Justice David Davis.
|
<urn:uuid:480ddb80-b28c-4efe-9d1c-5c3d19404650>
|
CC-MAIN-2016-26
|
http://www.ourcampaigns.com/RaceDetail.html?RaceID=58521
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00159-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.966366 | 319 | 3.53125 | 4 |
There are a number of techniques that fall into the category of East Asian Physical Medicine. The techniques that I use most often include Gua Sha, Cupping, Massage, and Moxibustion.
Gua Sha is an East Asian folk medicine. Gua means to scrape or rub. Sha is a ‘reddish, elevated, millet-like skin rash’ (aka petechiae); the term also describes the Blood stasis in the subcutaneous tissue before and after it is raised as petechiae. Gua Sha is a technique that raises the Sha rash, or petechiae, and is very therapeutic. Gua Sha can be used for acute or chronic pain. If normal finger pressure on the area causes blanching that does not fade quickly, Gua Sha is indicated. In addition to resolving pain, Gua Sha can help prevent upper respiratory infections and asthma.
Lubricate the area to be Gua Sha-ed with a thick oil. A round-edged instrument is then applied to the treatment area with firm downward strokes. Stroke one area until the petechiae are completely raised, then move on to the next area. If there is no Blood stasis, the petechiae will not form and the skin will only turn pink. For lubrication I use Badger Balm or another type of thick salve; often the lid of the container can be used to administer Gua Sha.
What kind of instrument is used to Gua Sha?
A soup spoon, coin, or slice of water buffalo horn is traditionally used in Asia. I use a simple metal cap with a rounded lip or ceramic Chinese soup spoon. In two to four days the Sha petechiae should fade. In cases of Blood deficiency it may take longer to fade and further support is necessary.
Often patients feel immediate relief from their physical discomfort. Gua Sha circulates Qi and Blood, mimics sweating, and moves Fluids; these fluids carry metabolic waste that congests the body. Gua Sha improves circulation and metabolic processes, and it is a useful remedy for external and internal pain of both acute and chronic nature.
Gua Sha is a safe and effective form of medicine. After receiving Gua Sha, activity should be moderate and resemble rest. Drugs, alcohol, overexertion, overeating and fasting are not recommended (for general health, and especially after Gua Sha).
This link is an excellent resource about Gua Sha: http://www.guasha.com/whatis.html
Cupping is another type of treatment I often use. It stimulates acupuncture points by applying suction through metal, wood or glass jars in which a partial vacuum has been created. This technique forces blood into the area and pulls out toxins and congestion. Cupping is used for low backache, sprains, soft tissue injuries, and to help relieve fluid from the lungs in chronic bronchitis.
There are many forms of East Asian Massage. I am trained in Shiatsu, Tui Na, and Shonishin. With my patients, I most often focus on the Hua Tuo Jia Ji points that lie on either side of the vertebrae. By clearing the energy in the spine, the whole body can more easily attain balance.
In East Asian medicine, moxibustion is primarily used for people who have a cold or stagnant condition. Burning moxa expels cold and warms the meridians. This leads to smoother flow of blood and qi. In Western medicine, moxibustion has successfully been used to turn breech babies into a normal head-down position prior to childbirth. A landmark study published in the Journal of the American Medical Association in 1998 found that up to 75% of women suffering from breech presentations before childbirth had fetuses that rotated to the normal position after receiving moxibustion at an acupuncture point on the Bladder meridian. Other studies have shown that moxibustion increases the movement of the fetus in pregnant women, and may reduce the symptoms of menstrual cramps when used in conjunction with traditional acupuncture.
|
<urn:uuid:543736a2-2288-4a26-b084-3371eeedeb8f>
|
CC-MAIN-2016-26
|
http://www.newmoonacupuncture.com/services/asian-physical-medicine/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00035-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.914768 | 836 | 2.703125 | 3 |
If you brushed your teeth this morning, you used calcium carbonate (a type of limestone rock), zeolites, and sodium bicarbonate (baking soda) with clay. If you used a fluoride gel, then you used sand and feldspar.

The calcium carbonate and sand act as an abrasive. The sodium bicarbonate acts as a cleansing agent.

Grind up some antacids like TUMS into a powder. Place 1/2 teaspoon of the calcium carbonate into a plastic cup. Add 1/4 teaspoon baking soda to the cup. Add a drop or two of water to make a paste. Stir.

Now, go brush your teeth!

This article was adapted from Wherever You Are On Earth . . . You're On Rock! by WHAM, the minerals information company. If you would like additional educational information and materials, stop by www.aggman.com.
|
<urn:uuid:fddc4f90-8225-492b-8385-ed982dbd6215>
|
CC-MAIN-2016-26
|
http://www.rogersgroupincint.com/IndustryResources/Rockology101Activities/ActivityHomemadeToothpaste/tabid/112/Default.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00025-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.841152 | 202 | 2.96875 | 3 |
Encryption in Cloud Computing
This article makes the important argument that encryption -- where the user and not the cloud provider holds the keys -- is critical to protect cloud data. The problem is, it upsets cloud providers' business models:
In part it is because encryption with customer controlled keys is inconsistent with portions of their business model. This architecture limits a cloud provider's ability to data mine or otherwise exploit the users' data. If a provider does not have access to the keys, they lose access to the data for their own use. While a cloud provider may agree to keep the data confidential (i.e., they won't show it to anyone else) that promise does not prevent their own use of the data to improve search results or deliver ads. Of course, this kind of access to the data has huge value to some cloud providers and they believe that data access in exchange for providing below-cost cloud services is a fair trade.
Also, providing onsite encryption at rest options might require some providers to significantly modify their existing software systems, which could require a substantial capital investment.
That second reason is actually very important, too. A lot of cloud providers don't just store client data, they do things with that data. If the user encrypts the data, it's an opaque blob to the cloud provider -- and a lot of cloud services would be impossible.
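To make the distinction concrete, here is a minimal client-side encryption sketch in Python using the cryptography package's Fernet recipe; the data and the commented-out upload call are placeholder assumptions. Because the key never leaves the customer, everything the provider stores is an opaque token it cannot mine, index, or ad-target, and by the same logic any cloud feature that needs to "do things with" the plaintext cannot run on the provider's side.

    from cryptography.fernet import Fernet

    # Generated and kept by the customer; never uploaded to the provider.
    key = Fernet.generate_key()
    f = Fernet(key)

    plaintext = b"quarterly forecast spreadsheet contents"
    blob = f.encrypt(plaintext)      # this ciphertext is all the provider ever sees

    # upload_to_cloud(blob)          # hypothetical call; storage works fine on opaque data

    # Only the key holder can recover the original bytes.
    assert f.decrypt(blob) == plaintext

Services that only store and return bytes are unaffected by this pattern; services that search, deduplicate, or analyze the data are exactly the ones it rules out.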
Posted on November 12, 2012 at 5:47 AM • 59 Comments
|
<urn:uuid:4a6bb31f-8aec-47f2-9705-14d192425a19>
|
CC-MAIN-2016-26
|
https://www.schneier.com/blog/archives/2012/11/encryption_in_c.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00064-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.936277 | 292 | 2.859375 | 3 |