What is epiglottitis?
Epiglottitis is an acute, life-threatening bacterial or viral infection that causes swelling and inflammation of the epiglottis, the elastic cartilage structure at the root of the tongue that prevents food from entering the windpipe (trachea) during swallowing. The swelling causes breathing problems, including stridor, that can progressively worsen and may ultimately lead to airway obstruction. When there is so much swelling that air cannot move in or out of the lungs, the condition becomes a medical emergency.
What causes epiglottitis?
The primary cause of epiglottitis is a bacterial infection spread through the upper respiratory tract. The bacterium is usually Haemophilus influenzae type B (HIB). Why some children develop the disease while others do not is not completely understood. Epiglottitis can also be caused by group A β-hemolytic streptococci.
The Centers for Disease Control and Prevention recommends three to four doses of the HIB vaccine. Primary doses are given at 2 and 4 months of age, or at 2, 4, and 6 months of age, depending on the brand used by the physician's office. A booster dose is given at 12 to 15 months of age.
The HIB vaccine protects against this bacterium, thereby decreasing the chance of developing epiglottitis.
Facts about epiglottitis:
- The use of the HIB vaccine has significantly decreased the risk of developing the disease.
- The disease usually occurs in children 2 to 6 years of age, but has also occurred in adults.
- The disease can occur at any time; there is no one season that it is more prevalent.
What are the symptoms of epiglottitis?
The symptoms of epiglottitis are similar, regardless of the organism causing the inflammation. The following are the most common symptoms of epiglottitis. However, each child may experience symptoms differently. Symptoms may include:
- upper respiratory infections (In some children, symptoms of epiglottitis begin with symptoms of an upper respiratory infection.)
- quick onset of a very sore throat
- muffled voice
- no cough
- cyanosis (blue skin coloring)
As the disease worsens, the following symptoms may appear:
- difficulty breathing
- unable to talk
- the child sits leaning forward
- the child keeps his/her mouth open
How is epiglottitis diagnosed?
Because of the severity of the disease and the need for immediate intervention, the diagnosis is usually made from physical appearance and a thorough medical history. If epiglottitis is suspected at this point, the child will immediately be transferred to the hospital. As the disease progresses, there is a chance of the child's entire airway becoming occluded (blocked), which can cause the child to stop breathing.
At the hospital, the following additional tests may be performed to confirm the diagnosis:
- X-ray of the neck - a diagnostic test that uses invisible electromagnetic energy beams to produce images of internal tissues, bones, and organs on film
- blood tests
- visualization of the airway - direct examination of the airway, under optimal safety conditions by a surgeon in the operating room, may be necessary
- blood culture - test to identify the bacteria
- throat culture - test to identify the bacteria
Treatment for epiglottitis:
The treatment for epiglottitis requires immediate emergency care to prevent complete airway occlusion. The child's airway will be closely monitored, and, if needed, the child's breathing will be assisted with machines.
Also, intravenous (IV) therapy with antibiotics will be started immediately. This will help treat the infection by the bacteria. Treatment may also include:
- steroid medication (to reduce airway swelling)
- intravenous (IV) fluids, until the child can swallow again
- humidified oxygen
- breathing tube
What is the prognosis of epiglottitis?
How well the child recovers from this disease is related to how quickly treatment begins in the hospital setting. Once the child is being monitored, the airway is safe, and antibiotics are started, the disease usually stops progressing within 24 hours. Complete recovery takes longer and depends on each child's condition.
Prevention of epiglottitis:
As mentioned above, epiglottitis caused by the bacterium HIB can be prevented with vaccines that start at the age of 2 months. Epiglottitis caused by other organisms cannot be prevented at this time, but is much less common.
If a child is diagnosed with epiglottitis, the child's family or other close contacts are usually treated with a medication called rifampin to prevent the disease in those who might have been exposed.
Disclaimer - This content is reviewed periodically and is subject to change as new health information becomes available. The information provided is intended to be informative and educational and is not a replacement for professional evaluation, advice, diagnosis or treatment by a healthcare professional. © 2009 Staywell Custom Communications.
The United States Flag
History of the Flag
The Flag of the United States is the third oldest of the national standards of the world; older than the Union Jack of Britain or the Tricolor of France.
The Flag was first authorized by Congress June 14, 1777. This date is now observed as Flag Day throughout America.
The colors of the Flag may be thus explained: the red is for valor, zeal and fervency; the white for hope, purity, cleanliness of life and rectitude of conduct; the blue, the color of heaven, for reverence of God, loyalty, sincerity, justice and truth.
It was decreed that there should be a star and a stripe for each state, making 13 of both, for the states at that time had just been erected from the original 13 colonies.
The star (an ancient symbol of India, Persia and Egypt) symbolizes dominion and sovereignty, as well as aspirations. The constellation of the stars within the union---one star for each state---is emblematic of our Federal Constitution, which reserves to the states their individual sovereignty, except as to rights delegated by them to the Federal Government.
The symbolism of the Flag was thus interpreted by Washington: "We take the stars from heaven, the red from our mother country, separating it by white stripes, thus showing that we have separated from her, and the white stripes shall go down in posterity representing liberty."
In 1791, Vermont, and in 1792, Kentucky were admitted to the Union, and the number of stars and stripes was raised to 15 in correspondence. As other states came into the Union, it became evident there would be too many stripes. So in 1818, Congress enacted that the number of stripes be reduced and restricted to 13, representing the 13 original states, while a star should be added for each succeeding state. That is the law of today. The name 'Old Glory' was given to the Flag on August 10, 1831, by Captain William Driver of the brig Charles Doggett.
The Flag was first flown from Fort Stanwix, on the site of the present city of Rome, New York, on August 3, 1777. It was first under fire three days later in the Battle of Oriskany, August 6, 1777.
The Flag was first carried in battle at the Brandywine, September 11, 1777. It first flew over foreign territory January 28, 1778, at Nassau, Bahama Islands, Fort Nassau having been captured by the Americans in the course of the war for independence. The first foreign salute to the Flag was rendered by the French admiral LaMotte, off Quiberon Bay, February 13, 1778.
The United States Flag is unique in the deep and noble significance of its message to the entire world---a message of national independence, of individual liberty, of idealism, of patriotism.
It symbolizes national independence and popular sovereignty. It is not the Flag of a reigning family or royal house, but of over two hundred million free people welded into one Nation, one and inseparable, united not only by community of interest, but by vital unity of sentiment and purpose, a Nation distinguished for the clear individual conception of its citizens alike of their duties and their privileges, their obligations and their rights. It incarnates for all mankind the spirit of Liberty and the glorious ideal of human Freedom; not the freedom of unrestraint or the liberty of license, but a unique ideal of equal opportunity for life, liberty and the pursuit of happiness, safeguarded by the stern and lofty principles of duty, of righteousness and of justice, and attainable by obedience to law.
Floating from the lofty pinnacle of American idealism, it is a beacon of enduring hope, like the famous Bartholdi Statue of Liberty Enlightening the World, to the oppressed of all lands. It floats over a wondrous assemblage of people from every racial stock of the earth whose united hearts constitute an indivisible and invincible force for the defense and succor of the downtrodden.
It embodies the essence of patriotism. Its spirit is the spirit of the American nation. Its history is the history of the American people. Emblazoned upon its folds in letters of living light are the names and fame of our heroic dead, the Fathers of the Republic who devoted upon its altars their lives, their fortunes and their sacred honor. Twice-told tales of national honor and glory cluster thickly about it. Ever victorious, it has emerged triumphant from nine great national conflicts. It bears witness to the immense expansion of our national boundaries, the development of our natural resources, and the splendid structure of our civilization. It prophesies the triumph of popular government, of civic and religious liberty and of national righteousness throughout the world.
The Flag first rose over thirteen states along the Atlantic seaboard, with a population of some three million people. Today it flies over fifty states, extending across the continent, and over great islands of the two oceans; and millions owe it allegiance. It has been brought to this proud position by love and sacrifice.
Citizens have advanced it and heroes have died for it. It is the sign made visible of the strong spirit that has brought liberty and prosperity to the people of America. It is the Flag of all of us alike. Let us accord it honor and loyalty.
Definition of Injection snoreplasty
Injection snoreplasty: An injection of a chemical called sodium tetradecyl sulfate that promotes stiffening of the soft palate by creating scar tissue, in order to relieve snoring. The soft palate is the area above your throat in the back of your mouth. Snoring is typically caused by the fluttering of tissues at the back of the throat. The most common form of snoring is in fact called palatal flutter snoring.
Source: MedTerms™ Medical Dictionary
Last Editorial Review: 6/14/2012
The Safe@School website is a combination resource centre and teacher training module, created by OTF and COPA to promote the prevention of bullying in schools. The website is composed of three main sections:
About Safe@School identifies the project partners and the main components of our bullying prevention project.
The Resources section contains a listing of useful websites, books and other materials on bullying prevention gathered by OTF and COPA. These have been organized into four sections for easy use: Professional Resources, Resources for Classroom Use and Student Use, Community Interest Resources, and a sample listing of the Ministry of Education Registry of bullying prevention resources.
The e-learning Teacher Training Module provides current information about the latest research on, and best practices for, assessing and preventing bullying in the school utilizing COPA's approach to prevention education as the foundation. The Teacher Training Module became available in September 2007, and is open to teachers and other school staff.
The Teacher Training Module identifies appropriate intervention strategies as well as ways to establish healthy communication throughout the school. A section on steps to change explains ways to move through specific stages in planning for and implementing bullying prevention programs. A further section contains methods to integrate bullying prevention strategies into the curriculum along with sample surveys to allow schools to assess their individual school climates and monitor the success of their intervention strategies.
Individual school reports based on the CD-ROM e-learning materials are filed and posted within the website module. These reports are available to schools as resources for modification and updating of program implementation and for other schools to use as a reference.
(TORONTO, Canada – Dec. 13, 2012 ) – Cancer scientists led by Dr. John Dick at the Princess Margaret Cancer Centre have found a way to follow single tumour cells and observe their growth over time. By using special immune-deficient mice to propagate human colorectal cancer, they found that genetic mutations, regarded by many as the chief suspect driving cancer growth, are only one piece of the puzzle. The team discovered that biological factors and cell behaviour – not only genes – drive tumour growth, contributing to therapy failure and relapse.
The findings, published today online ahead of print in Science, are "a major conceptual advance in understanding tumour growth and treatment response," says Dr. Dick, who holds a Canada Research Chair in Stem Cell Biology and is a Senior Scientist at University Health Network's McEwen Centre for Regenerative Medicine and Ontario Cancer Institute, the research arm of the Princess Margaret Cancer Centre. He is also a Professor in the Department of Molecular Genetics, University of Toronto. The research work was primarily carried out in Toronto by Antonija Kreso, Catherine O'Brien, and other members of the Dick lab with support from clinician-scientists at Mount Sinai Hospital and at the Ontario Institute for Cancer Research, and from genome scientists at St Jude Research Hospital, Memphis, and the University of Southern California, Los Angeles.
By tracking individual tumour cells, they found that not all cancer cells are equal: only some cancer cells are responsible for keeping the cancer growing. Within this small subset of propagating cancer cells, some kept the cancer growing for long time periods (up to 500 days of repeated tumour transplantation), while others were transient and stopped within 100 days. They also discovered a class of propagating cancer cells that could lie dormant before being activated. Importantly, the mutated cancer genes were identical for all of these different cell behaviours.
When chemotherapy was given to mice in which the human tumours were growing, the team made the unexpected finding that the long-term propagating cells were generally sensitive to treatment. Instead, the dormant cells were not killed by drug treatment and became activated, causing the tumour to grow again. The cancer cells that survived therapy had the same mutations as the sensitive cancer cells proving that cellular factors not linked to genetic mutation can be responsible for therapy failure.
The research challenges conventional wisdom in the cancer research field that the variable growth properties and resistance to therapy of cancer cells are solely based on the spectrum of genetic mutations within a tumour, says Dr. Dick. Instead, the scientists have validated a developmental view of cancer growth where other biological factors and cell functions outside genetic mutations are very much at play in sustaining disease and contributing to therapy failure.
The research published today builds on decades of experience by Dr. Dick, who focuses on understanding the cellular processes that maintain tumour growth. In 2004, Dr. Dick published related findings in leukaemia, but in the present study his team was able to compare the importance of genetic events with cellular mechanisms for the first time. It is also the first study of its kind in a solid tumour system.
Dr. Dick says the findings convinced him that the conventional view that only explores gene mutations is no longer enough in the quest to accelerate delivery of personalized cancer medicine to patients – targeted, effective treatments customized for individuals.
"The data show that gene sequencing of tumours to find the spectrum of their mutations is definitely not the whole story when it comes to determining which therapies will be most effective," says Dr. Dick.
"This is a paradigm shift that shows research also needs to focus on the biological properties of cells. For example, finding a way to put dormant cells into growth cycles could make them more sensitive to chemotherapy treatment. Targeting the biology and growth properties of cancer cells could expand the repertoire of usable therapeutic agents and provide better outcomes for patients."
Dr. Dick is renowned for pioneering the cancer stem cell field by identifying leukemia stem cells in 1994 and colon cancer stem cells in 2007. Also in 2011, Dr. Dick isolated the normal human blood stem cell in its purest form – as a single stem cell capable of regenerating the entire blood system. Collectively, Dr. Dick's research is paving the way for better clinical cancer therapy.
Poison ivy is a woody vine that is well-known for its ability to produce urushiol, a skin irritant which for most people will cause an agonizing, itching rash.
Poison ivy grows vigorously throughout much of North America.
It can grow as a shrub up to about 1.2 m (4 ft) tall, as a groundcover 10-25 cm (4-10 in) high, or as a climbing vine on various supports.
Older vines on substantial supports send out lateral branches that may at first be mistaken for tree limbs.
The reaction caused by poison ivy, urushiol-induced contact dermatitis, is an allergic reaction.
For this reason some people do not respond to the "poison" because they simply do not have an allergy to urushiol.
However, sensitivity can develop over time.
For those who are affected by it, it causes a very irritating rash.
If poison ivy is burned and the smoke then inhaled, this rash will appear on the lining of the lungs, causing extreme pain and possibly fatal respiratory difficulty.
If poison ivy is eaten, the digestive tract and airways will be affected, in some cases causing death.
Urushiol oil can remain viable on dead poison ivy plants and other surfaces for up to 5 years and will cause the same effect.
For more information about the topic Poison ivy, read the full article at Wikipedia.org.
Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.
Jan. 26, 2004 As the nation recently celebrated the 100th anniversary of human flight, an internationally recognized University of Delaware robotics expert turned his attention to the skies.
Sunil K. Agrawal, UD professor of mechanical engineering, is working on the design and construction of small robotic devices that mimic the flight of birds and insects, in particular, the hummingbird and the hawkmoth.
Agrawal said that once fully developed, the devices will be able to carry miniature cameras and fly in flock-like formations to send surveillance data back to a central computer for processing.
Such detailed information would be of value in industrial and military applications and also could be used in rescue operations to map the interiors of collapsed buildings.
While the need for such devices in surveillance and telemetry has existed for some time, Agrawal said the technology to enable such miniaturization is relatively new and still evolving.
"We are quite enthusiastic about being able to build these machines," according to Agrawal, whose research team is focused on the design, fabrication and control of a variety of devices in addition to the birds.
Early versions of the robotic birds were made of balsa wood and powered by rubber band engines that made the wings flap, and the first successful flight was outside Spencer Laboratory.
A subsequent design, with battery-powered wings, took flight on the University's Green, where the researchers noticed an unexpected reaction. "When it flew, birds from nearby came and circled around it," Agrawal said. That robotic bird spent two minutes in flight but lacked a means for remote control.
Current designs have replaced the balsa components with carbon fiber composites and paper wings with Mylar, dropping the total weight from 50 to 15 grams and strengthening the frame to withstand crashes.
Agrawal said the research team is now working to optimize the design so that the mass and power required can be kept to a minimum. He said he hopes to further miniaturize the birds to the point that they are small enough to fit in the palm of a hand, while at the same time working to integrate controls to guide flight.
When it comes time to control a group of birds in flight, Agrawal will turn to technologies he has developed to make land-bound robots work in unison.
"We want to demonstrate that the flapping wing machines can be built and optimized and, eventually, we would like to expand from a single flying machine to a group of cooperative flying machines," Agrawal said. "This will be in the future from where we are now, but it is where I think we would like to go."
At the moment, Agrawal says he simply wants to build a better bird. The research team is studying individual wing motions, and looking at birds and insects to better understand how they get lift. The hummingbird is a valuable model, he said, because it can hover, and the ability to do that is key to effective surveillance.
"Making things mimic nature is much more difficult than it might seem," Agrawal said. "It is scientifically fascinating but also extremely challenging."
Agrawal said the research team plans to take new designs to a wind tunnel, where the birds will be put in various flying attitudes to gather data on force and torque. That information will be used to predict how to improve and control the movement of the birds, and future designs will then be refined using computer models.
Agrawal said the idea for robotic birds came to him two years ago, and he found support from U.S. Air Force officials at Eglin Air Force Base in Valparaiso, Fla.
The military uses were readily apparent because if the robotic birds can provide a stable platform for cameras, they can create detailed maps of nearly any environment.
Industrial uses also are possible, with the birds compiling important information on large factory floors.
Further, there are police and rescue applications, with SWAT teams able to gather valuable data and rescue teams able to send the birds in to map the interiors of collapsed buildings.
Agrawal's laboratory receives funding from the National Science Foundation, the U.S. Air Force, the National Institute of Standards and Technology and the National Institutes of Health.
The Alexander von Humboldt Foundation honored Agrawal as one of 10 researchers worldwide to receive a 2001-03 Friedrich Wilhelm Bessel Research Award at a ceremony held in Berlin, Germany, June 27.
The above story is reprinted from materials provided by the University of Delaware.
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
Note: If no author is given, the source is cited instead.
Feb. 1, 2009 Biology exists in a physical world. That’s a fact cancer researchers are beginning to recognize as they look to include concepts of physics and mathematics in their efforts to understand how cancer develops -- and how to stop it.
The movement, led by researchers at the University of Michigan Comprehensive Cancer Center, has come to a head with a new section in one of the top cancer research journals and a new grant program from the National Cancer Institute.
Traditional cancer biology involves taking a sample of cells and holding them in time so they can be studied. Then the researchers look at that slice of cells to understand what signals and pathways are involved. But that doesn’t capture the full picture, says Sofia Merajver, M.D., Ph.D., co-director of the Breast Oncology Program at the U-M Comprehensive Cancer Center.
“The living cell is really a dynamic process. We need to consider the properties of physics to help us understand these data. In order to develop a drug directed against a given molecule that has real hope of treating cancer, we need to understand how that molecule is sitting in the cell, interacting with other molecules,” says Merajver, professor of internal medicine at the U-M Medical School.
Merajver and her team have developed a sophisticated mathematical model to help researchers apply these concepts to cancer. The mathematical model is designed to help give researchers a complete picture of how a cell interacts with its surrounding environment. By understanding the full complexity of signaling pathways, researchers can better target treatments and identify the most promising potential new drugs.
Researchers have learned from this modeling that a well-known and major type of signaling pathway naturally transmits information not just in a forward direction, but also backwards. That implies new considerations for developing drugs to inhibit major growth and metastasis pathways in cancer.
This crosstalk was missed by conventional methods. Typically, when scientists begin to look at a cell, they must make assumptions to simplify the picture of what is happening in cells.
“When you make simplifying assumptions, you always run the risk of eliminating critical aspects of your system, but you have no way of knowing what was discarded. When you simplify, you don’t know exactly what you’re throwing away because you never looked at the complex case,” Merajver says. Mathematical modeling allows researchers to look at the complex case more thoroughly.
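The effect Merajver describes can be sketched with a deliberately simple example. The following Python snippet is a hypothetical illustration, not the group's published model: it integrates a toy fully reversible signaling chain A ↔ B ↔ C (all rate constants equal to k) with forward Euler steps, and shows that changing only the downstream species C shifts the steady state of the upstream species A, the kind of "backwards" information flow that a strictly feed-forward simplification would discard.

```python
# Toy illustration (assumed model, not the U-M group's): a reversible
# three-species pathway A <-> B <-> C with identical rate constants k.
# Mass is conserved, so the system relaxes to a uniform steady state
# A = B = C = (A0 + B0 + C0) / 3.

def simulate(c0, k=1.0, dt=0.001, steps=20_000):
    """Forward-Euler integration of
       dA/dt = -k*A + k*B
       dB/dt =  k*A - 2*k*B + k*C
       dC/dt =  k*B - k*C
    starting from (A, B, C) = (1, 0, c0)."""
    a, b, c = 1.0, 0.0, c0
    for _ in range(steps):
        da = (-k * a + k * b) * dt
        db = (k * a - 2 * k * b + k * c) * dt
        dc = (k * b - k * c) * dt
        a, b, c = a + da, b + db, c + dc
    return a, b, c

# Identical upstream start (A = 1), different downstream "load" C:
a_low, _, _ = simulate(c0=0.0)   # A relaxes toward 1/3
a_high, _, _ = simulate(c0=2.0)  # A relaxes toward 1.0
print(round(a_low, 3), round(a_high, 3))  # approximately 0.333 and 1.0
```

The perturbation never touches A directly, yet A's steady state triples; a model that assumed purely forward signal flow would predict no change at all, which is the sense in which simplifying assumptions can silently discard crosstalk.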
“To understand how the laws of physics can be applied to biological systems is a new frontier,” she says.
Merajver and her colleagues were successful in getting the journal Cancer Research to add a new regular section to the twice-monthly journal precisely focused on mathematical modeling. The journal has also added new editors to its board who have expertise in this discipline. Merajver and Trachette Jackson, Ph.D., professor of mathematics at U-M, will lead this effort as senior editors.
A review article about mathematical modeling appears in the Jan. 15 issue of Cancer Research, authored by Merajver, Jackson and Alejandra Ventura, Ph.D., a senior postdoctoral fellow in internal medicine at U-M.
Funding for this work is primarily from the Breast Cancer Research Foundation.
Reference: Cancer Research, Vol. 69, No. 2, pp. 400-402
July 18, 2012 Scientists have recreated the extreme conditions at the boundary between Earth's core and its mantle, 2,900 km beneath the surface. Using the world's most brilliant beam of X-rays, they probed speck-sized samples of rock at very high temperature and pressure to show for the first time that partially molten rock under these conditions is buoyant and should segregate towards Earth's surface. This observation is a strong evidence for the theory that volcanic hotspots like the Hawaiian Islands originate from mantle plumes generated at Earth's core-mantle boundary.
The results are published in Nature (July 19, 2012).
The group of scientists was led by Denis Andrault from the Laboratoire Magmas et Volcans of University Blaise Pascal in Clermont, and included scientists from the CNRS in Clermont and the European Synchrotron Radiation Facility (ESRF) in Grenoble, France.
Most volcanoes are situated where continental plates are pushed or pulled against each other. Here, the continental crust is weakened, and the magma can break through to the surface. The Pacific "Ring of Fire," for example, exhibits such plate movements, resulting in powerful earthquakes and numerous active volcanoes.
Volcanic hotspots are of a completely different nature because most of them are far away from plate boundaries. The Hawaiian Islands, for example, are a chain of volcanoes thought to have their origin in a mysterious hot spot beneath the Pacific ocean floor. Every island in the chain starts as an active volcano fed by the hot spot that eventually rises above the ocean surface. As plate tectonics move the volcano away from the hotspot, it becomes extinct. The hot spot will in the meantime create another volcano: the next island in the chain. The Hawaiian Islands are one of many examples of this process, like the Canary Islands, La Réunion or the Azores.
The nature of the hot-spot source and its location in the mantle have remained elusive to the present day. One explanation is narrow streams of magma conveyed to Earth's surface from the boundary between Earth's core of liquid iron and the solid mantle of silicate rock. Whether the lowermost mantle expels such streams of magma called mantle plumes is one of today's major controversies among geologists.
What material can be stored at the core-mantle boundary and become sufficiently light to rise through 2900 km of thick solid mantle? This was the question Denis Andrault and his colleagues addressed when they set out to recreate in a laboratory the conditions found at the core-mantle boundary. They compressed tiny pieces of rock, the size of a speck of dust and ten times thinner than a human hair, between the tips of two conical diamonds to a pressure of more than one million bar. A laser beam then heated these samples to temperatures between 3000 and 4000 degrees Celsius, which scientists believe is representative of the 200km-thick core-mantle boundary. The samples are extremely small compared to the natural processes occurring in Earth. However, the melting processes are very well reproduced experimentally. Therefore, the observations can be confidently transferred from micron scale in the experiments to kilometre scale in the deep mantle.
Beams of X-rays at the ESRF, focused to a diameter of one 1000th of a millimetre, were used to map these samples and identify where the solid rock had melted. "Obviously, these tiny samples produce weak interaction signals, and this is why it is important to have the most brilliant X-ray beams for this type of experiments," says Mohammed Mezouar, the scientist responsible for the high-pressure beamline ID27 at the ESRF.
Once regions with molten rock had been identified, another X-ray technique was used at the ESRF to compare the chemical compositions of previously molten and solid parts. "It is the iron content which is decisive for the density of molten rock at the core-mantle boundary. Its accurate knowledge allowed us to determine that molten rock under these conditions is actually lighter than solid," says Denis Andrault.
Gravity makes the light liquid rock from a hotspot move slowly upwards like a bubble in water until it reaches the surface where the magma plume will form a volcano. The hotspots of liquid occur in the relatively thin boundary region between the solid lower mantle and the liquid outer core of Earth where the temperature rises over a distance of just 200 kilometres from 3000 to 4000 degrees. This steep rise is caused by the vicinity of the much hotter core and induces a partial melting of the rocks.
The results of the experiment are also of great significance for the understanding of the early history of Earth, as they provide an explanation why many chemical elements playing a key role in our daily life gradually accumulated from Earth's inside to its thin crust, close to the surface.
"We know less about the Earth's mantle than about the surface of Mars. It is impossible to drill a hole of even 100 kilometres into the Earth, so we have to recreate it in the laboratory. This is important knowledge, because active hot spot volcanoes like those in Iceland can be dangerous and disruptive for the daily lives of people far away," concludes Denis Andrault.
- Denis Andrault, Sylvain Petitgirard, Giacomo Lo Nigro, Jean-Luc Devidal, Giulia Veronesi, Gaston Garbarino, Mohamed Mezouar. Solid–liquid iron partitioning in Earth’s deep mantle. Nature, 2012; 487 (7407): 354 DOI: 10.1038/nature11294
Dawson City, Yukon—After revving up with a roar, a core drill designed to punch holes in concrete begins digging into ice more than 100,000 years old. Here in the Klondike, the drill serves as a kind of gas-powered, handheld time machine, bringing up frozen earth from the Pleistocene, when mammoths and other megafauna once ruled. In a land where miners still hunt for gold, paleomammalogist Ross MacPhee of the American Museum of Natural History in New York City and his colleagues seek a different kind of treasure—DNA from extinct titans.
Millennia ago, as the earth in the Klondike cracked during the springtime thaw, water leaked in, only to freeze again during winter to form wedges of ice, explains geologist Duane Froese of the University of Alberta. Dripping in with this water was sediment from the surface, which might hold DNA from mammoths, as well as that of the plants, bacteria and other life once found in the region, MacPhee says. Nothing is known about the genetics of mammoths from the middle Pleistocene, and such DNA could elucidate their evolution. The researchers hope to find clear evidence that two species of mammoth, not just one, roamed the Americas at the end of the last ice age.
This area, dominated today by spruce forest mixed with paper birch and aspen trees, was once part of Beringia, the grassland steppe ranging from North America to Asia that nowadays lies submerged under the icy Bering Strait. Froese has worked in the Klondike for the past 15 or so field seasons, aiming to reconstruct a full picture of Beringia over the past few million years. Sampling trapped sediment for DNA could prove a far easier way to analyze how Beringia’s ecosystems shifted over time as compared with attempting to collect hundreds of fossils from different taxa.
In joining the team for seven days in June, I learn that ancient DNA molecules are not the only clues the researchers seek here. Paleoentomologist Svetlana Kuzmina of the University of Alberta sifts through sediment for fossil insects—by studying where modern examples of these now dwell, she can extrapolate what the climate might have been like back then. Lee Arnold of the University of Wollongong in Australia will scan crystalline grains to pinpoint the ages of all the finds, thus helping to reveal the proper sequence of events—which is as important as having words in the right order in a sentence. And later the scientists will head north by plane, helicopter and boat to dig for bones.
The fact that gold mining continues in the Klondike has proved invaluable. We can drive over mining roads right up to sites, as opposed to lugging heavy equipment a mile or more by foot. The miners have also been very supportive, even using excavators to scrape off tons of surface material, called overburden, from the frozen earth at a rocky site named Paradise Hill. Their help makes research far more cost-effective, Froese explains. MacPhee agrees: “You’d be lucky to get one site done in Siberia in a week.”
Still, fieldwork remains a hard, dirty task. The giant wedge of ice we mine at Gold Run Creek on the fourth day of our expedition was hidden under a slope of powdery muck—silt loaded with ancient, decomposing organic material, which smells much like manure. As we expose the ice to the sun, water mixes with the muck to form a slippery ooze that occasionally traps us up to our thighs, much to our chagrin. Field time also can unpredictably vanish, as we discover when the rough, gravel roads take their toll on the rental SUV, which suffers three flats in just two days.
In the end, all the hard-won scientific treasure could help solve key mysteries. MacPhee hopes, for instance, that the DNA could explain why so many megafauna went extinct in the Americas. Did rapid swings in climate kill them off? Or was it the cunning of human hunters? Or was it species-jumping plagues that humans brought over, as MacPhee suggests?
The emergence of disease-causing bacteria which are resistant to known antibiotics is one of the most important current global health challenges. Drug-resistant "superbugs" kill thousands of people every year. This is a growing problem, because new antibiotics are not being discovered fast enough to keep up with the rate of evolution of resistance.

Using a simple theoretical model of a bacterial population which expands to colonize a new territory, Philip Greulich, Bartlomiej Waclaw and Rosalind Allen of the University of Edinburgh show that a non-uniform concentration of antibiotic can greatly speed up the evolution of resistance, compared to the case where the drug is evenly distributed. Non-uniform drug distributions are expected to be very common: for example, drugs in our body accumulate to different levels in different organs. Importantly, the speedup in evolution of resistance that is predicted by the model depends on the sequence of genetic mutations by which the bacteria become drug resistant. It only happens if all the mutations along the pathway increase the drug resistance. Unfortunately, this seems to be the case for many commonly-used antibiotics.

This research shows that simple, statistical physics models can provide important insights into biological problems. The theory developed by the Edinburgh researchers may also be relevant to the evolution of cancer cells resistant to chemotherapeutic drugs, suggesting that the highly non-uniform microenvironments found inside tumours may present a major obstacle to the successful treatment of the tumour before drug resistance emerges.
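The effect described above can be illustrated with a toy Monte Carlo simulation. The sketch below is a heavily simplified caricature loosely inspired by the Edinburgh model, not the published model itself: the compartment layout, mutation probability, and migration rule are all illustrative assumptions. It tracks which resistance levels are present in each compartment of a habitat and measures how long a fully resistant mutant takes to appear, comparing a drug gradient against a habitat where the drug sits at peak concentration everywhere outside a drug-free refuge.

```python
import random

def time_to_full_resistance(drug_levels, mu=0.01, seed=0, max_gen=10000):
    """Toy staircase model. Compartment i carries drug level drug_levels[i];
    a cell of resistance r reproduces only where r >= the local level.
    Each division yields a +1 resistance mutant with probability mu, and
    offspring may hop to a neighbouring compartment. Returns the number
    of generations until a cell resistant to the highest level appears."""
    rng = random.Random(seed)
    n, top = len(drug_levels), max(drug_levels)
    pop = [set() for _ in range(n)]  # resistance levels present per compartment
    pop[0].add(0)                    # one sensitive cell in the drug-free end
    for gen in range(1, max_gen + 1):
        new = [set(s) for s in pop]
        for i in range(n):
            for r in pop[i]:
                if r < drug_levels[i]:
                    continue         # too much drug here: no growth
                child = r + 1 if rng.random() < mu else r
                dest = min(n - 1, max(0, i + rng.choice((-1, 0, 1))))
                new[dest].add(child)
        pop = new
        if any(top in s for s in pop):
            return gen
    return max_gen

gradient = [0, 1, 2, 3]   # concentration rises smoothly along the habitat
uniform = [0, 3, 3, 3]    # peak concentration everywhere beyond a refuge
print("gradient:", time_to_full_resistance(gradient))
print("uniform:", time_to_full_resistance(uniform))
```

Averaged over many random seeds, the gradient habitat reaches full resistance markedly sooner: each intermediate mutant can expand into a newly accessible compartment, multiplying the lineages in which the next mutation can occur — the qualitative effect the researchers describe.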
Peter Parks/AFP/Getty Images
Visitors look at a display celebrating Chinese lunar new year in Shanghai on February 8, 2013. Preparations continue in China for the Lunar New Year which will celebrate the Year of the Snake on February 10.
This year, Lunar New Year is on Feb. 10. But those who celebrate the yearly tradition start much earlier than that. The cultural holiday involves many traditions, some of which date back to ancient times, and though there are many variations on each, there are many that are widely celebrated.
1. Year of the Snake
There are twelve animals of the Zodiac. This year will be the Year of the Snake. Professor and Chair of the Department of Religious Studies at UC Riverside, Vivian-Lee Nyitray says that there are many variations on the snake’s characteristics, but overall people agree that a person of the snake year is charming, rational, intuitive and lucky with finances. She says the year is likely to be unpredictable and unstable, requiring people to prepare for the unexpected.
UC Irvine History Professor Yong Chen says the snake resembles the dragon, which is a symbol of power, dignity and prosperity. According to Chinese legend, ancient ancestors had a human face, but a body in the shape of a snake, so the snake represents ancestral worship. Chen says that some believe that the snake symbolizes longevity not only because of its shape, but also that it can regenerate life itself.
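The twelve-animal cycle repeats with the calendar year, so the animal for a given year can be estimated with simple modular arithmetic. The sketch below uses the common convention that anchors the cycle so that 4 CE is a Rat year; this is only an approximation, since for dates in January or early February, before the lunar year turns, the previous year's animal still applies.

```python
# Twelve zodiac animals in cycle order, starting from Rat.
ZODIAC = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
          "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]

def zodiac_animal(gregorian_year):
    """Approximate zodiac animal for a Gregorian year, anchoring the
    cycle at 4 CE = Rat. Dates before the lunar new year (late January
    to mid-February) actually belong to the previous animal."""
    return ZODIAC[(gregorian_year - 4) % 12]

print(zodiac_animal(2013))  # → Snake
print(zodiac_animal(2012))  # → Dragon
```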
2. New Year’s Eve Dinner and Food
The New Year’s Eve Dinner takes place the night before the New Year, just like Christmas Eve the night before Christmas. Chen says it’s also like a Thanksgiving dinner, a time when the family gets together to celebrate. Along with the dinner comes a multitude of traditional foods.
Although there are regional variations, certain dishes in the meal often have symbolic meaning, Nyitray says.
The main dessert is the New Year’s Cake, made out of ground sticky rice. In Chinese, it is called “Nian Gao.” The “Nian” means “year,” while the “Gao” means “cake” and also “high” or “tall.” The overall meaning: A wish that the coming year will fulfill one’s high hopes. The cake may have fruits and nuts or seeds in it.
Fish is a common dish for the dinner. The word fish in Chinese is “yu,” which is a homophone for the word “plenty.” This symbolizes prosperity and surplus.
Oranges or tangerines are popular fruits for the new year. Since these fruits include segmented parts that make a whole, they symbolize a whole family made of individuals.
Nuts or seeds symbolize the hope for many children in the family. Some typical seeds used in the New Years dinner are sunflower, melon and pumpkin seeds.
Dumplings are also a common dish; its shape represents unity.
3. Cleaning the House
Prior to the New Year, the house needs to be thoroughly cleaned. The significance is in sweeping away any negativity from the past year. It also signifies a fresh start, a new beginning. This is the origin, in Chinese cultural areas, of “spring cleaning,” since New Year’s is also known as the “Spring Festival,” Nyitray says.
4. Decorations
The characters for auspicious words such as “spring,” “happiness,” “longevity,” “prosperity,” are written on squares of red paper to be pasted on one’s door.
One of the main characters used is “fu,” which means good fortune. Sometimes the character is pasted upside down to indicate to spirits looking down from above that the family’s fortunes have not been good and could use a change. In some interpretations, it is inverted because the word for “upside down” is a homophone for “arrive,” and thus expresses a hope for the arrival of fortune. For some, the “fu” is pasted upside down so that devils would not know its meaning, thereby warding off evil spirits.
Couplets, a pair of often rhyming good wishes, are written on red paper to paste on either side of the front door.
New images of “door gods,” who are deified warriors that once guarded an emperor’s private chambers, are pasted on doors to guard the entrance to people’s homes.
Paper decorations could also include other auspicious images. Bats are one of them; the word for “bat” is a homophone for “fortune.” Ships carry the meaning of “smooth sailing” or “good fortune arriving easily.” Peaches are associated with Daoist notions of immortality and thus “long life.”
5. New Year’s Day
Visiting family is a major tradition on New Year’s Day. Some families stay awake for the whole night, while some retire after the midnight firecrackers. If they stayed up all night, they would wake up later to visit family and spend the day together. New Year’s Day is also when families distribute red envelopes. After doing so, people will visit other family members and friends for “a convivial start of the New Year,” Nyitray says.
6. Red Envelopes and New Money
Red envelopes are a meaningful gift for children and a way to pay respects to elders, all the while sharing good fortune.
Nyitray says red is an auspicious color, a color of vitality and life. Red envelopes are given by elders to the youth. They are filled with cash, always in even, not odd, amounts. “It might be two $10 bills for a total of $20, not a single bill and never an odd amount such as $15,” Nyitray says.
The bills should be crisp and new, just like the New Year.
Children will line up in front of their grandparents and parents on New Year’s morning, or after midnight and the setting off of firecrackers, to pay their respects by bowing, after which they receive their red envelope, Nyitray says.
Chen says that red envelopes are a more acceptable way to give money. It is not acceptable to give cash alone.
7. New Year’s Greeting
There is one main greeting for the New Year. In Mandarin Chinese, it is “Gong Xi Fa Cai.” In Cantonese Chinese, it is “Gong Hay Fat Choy.” In Vietnamese, it is “Chuc Mung Nam Moi.” Its meaning is “Congratulations on reaching the New Year, May you be prosperous.”
8. Taboos
The New Year is a cheerful time, and a number of taboos are observed to keep it that way.
One should avoid the word “death” and any other word that may sound like it. Instead, good luck wishes should make up most of the conversation.
One should not use a broom or sweep on New Year’s Day, as this might sweep away good luck.
Some believe that holding scissors on New Year’s might “cut” the New Year’s luck. Nyitray says it may vary depending on the region. The tradition might also have carried over from pregnancy taboos: “pregnant women were not supposed to carry scissors for fear of symbolically cutting the fetus and hurting it,” Nyitray says.
Similarly, it is widely believed that one should not cut their hair on New Year’s Day. It should be done prior to the New Year. "Weapon-like objects are not supposed to be used in the holiday season," Chen says. "In some places, cutting hair on the first day of the New Year jeopardizes the life of mother's brothers."
Cussing is not allowed for some as well. Nyitray says it would make sense to have a “clean” mouth.
People will also try to repay all debts by the start of the New Year. Similar to cleaning the house, paying all debts signifies starting the New Year with a clean slate.
9. Length of Holiday
The New Year starts the night of New Year’s Eve and lasts for fifteen days. The 15th Day is celebrated as the Lantern Festival, where red lanterns decorate the scene. The festival begins with the new moon and ends when the moon reaches fullness. The New Year ends with families eating glutinous dumplings on the 15th Day.
Chen says the fifteen days are a time of idleness, “a time to recreate, regenerate, relax, visit friends and family.”
Many businesses also close for three days to a week after the New Year and then reopen.
10. Temples or other prayer
Whether one visits a temple depends on their regional and national background. It may or may not be done, depending on family affiliation and tradition, says Nyitray.
"People do go to temples to hear the great bells (or gongs) and drums sounded to welcome in the new year and chase out the old," says Nyitray. "But the suppression of religion in China during the Cultural Revolution led to the cessation of this tradition in areas where it had been prominent."
People also observe New Year’s at home. During New Year’s Day, many set up and pay respect to shrines, lighting up incense and burning paper money for their ancestors.
11. Firecrackers
Firecrackers are lit at midnight, just as New Year’s Day begins. They're used to announce the ending of the old year and to usher in the new. The noise and flashes of light also drive away any lingering evil spirits.
12. Lion Dance
The Lion Dance serves to bring the community together every New Year. It is believed to bring good fortune and scare away devils.
Lion dances were traditionally performed at the re-opening of businesses. Nowadays their performances could happen anytime during the year. The lions are believed to chase away evil spirits and to ensure a good opening and prosperity for the business.
According to Chen, the Cantonese brought this tradition to the U.S. in the late 19th century. Today, the Lion Dance is part of the tradition in many major American cities.
A Model of Trade oriented toward Labor and the Environment
by William McGaughey, Minnesota Fair Trade Campaign
The ideology of free trade is based on a model of world society which no longer exists. This model features a multitude of nation states which faithfully represent the economic interests of their citizens. It includes business organizations within each nation or community whose fortunes are closely tied to the fortunes of the nation or community. The world economy would reflect various geographical, cultural, economic, and other circumstances that allow business enterprises in particular nations to produce certain kinds of goods more efficiently than business enterprises in other nations. The doctrine of comparative advantage maintains that it is better to allow national economies to specialize in the kinds of production which they are better able to produce and to trade the surplus for other products which they are less able to produce.
The free-trade agenda is inappropriate for today's world economy because the businesses which produce for trade are no longer so closely identified with particular communities. More than one third of world trade is intra-company trade. Between 50% and 70% of trade between Mexico and the United States is of this sort. Intra-company trade, which is trade between different facilities belonging to the same company, means that the corporation is operating in at least two different countries and, therefore, cannot identify exclusively with either one. The company has both nations' or neither nations' interests at heart. Also, because it is the same company operating in both countries, one cannot plausibly argue that the operations in one of the countries enjoy a comparative advantage due to better management, technology, financing, etc.
The multinational corporations which produce the bulk of goods and services traded in the world economy are no longer national entities, but ones which, operating in several different countries, have outgrown restrictions placed upon them by national governments. Interested in cutting costs, they shop around the world for the best deal. They, of course, want cheap labor, low taxes, environmental permissiveness, public subsidies, and ineffective regulation. Inevitably, one government or another is willing to oblige them. In this new environment, when we speak of "comparative advantage", we are no longer just talking about natural endowments for production but, more importantly, about a government's willingness to shortchange the interests of its own citizens to accommodate business demands. Conversely, we must discard the idea that corporations are loyal citizens of the communities where they operate. While a few corporate executives may show some lingering attachment to particular communities where their companies historically operated, the business community in general has come to regard this attitude as an emotional extravagance.
Yet, the government is still negotiating trade deals as if the national interest were synonymous with that of business firms headquartered in the US. It has, for instance, made a priority of strengthening protection of intellectual-property rights because "US" companies, such as pharmaceutical manufacturers or Hollywood film producers, sell products in other countries whose commercial value depends on enforcing patent or copyright laws. The corruption of policy in the trade area has progressed to the point that government is abetting corporate efforts which are in direct conflict with its citizens' interests. The government is actively helping firms headquartered in the US to arrange a transfer of jobs out of the US. That is what the North American Free Trade Agreement is about.
To their horror, trade researchers discovered in the text of NAFTA and GATT provisions that would require the US to invalidate numerous laws and regulations designed to protect the environment, consumer safety, or public health. Such laws and regulations are considered potential non-tariff trade barriers. The NAFTA and GATT agreements would require the federal government to pressure state and local governments to change their laws to bring them into conformity with the minimalist approach to regulation preferred by international advisory groups such as the Codex Alimentarius. In effect, these so-called "trade" agreements would allow unelected international officials, deliberating in secret, to override US political decisions reached openly and in accordance with law on many other matters besides trade. This "Stealth" agenda of international business represents a severe setback to American democracy. With respect to NAFTA and GATT, the only adequate response would be to recommend that the Congress vote "no" when President Clinton submits the enabling legislation.
Some contend that government simply lacks the power to regulate international business. If government comes down too hard on business, then business will move production to another political jurisdiction and jobs will be lost. That argument ignores an important base of government power. Government can effectively regulate business by restricting the sale of products within its own territory. If General Motors moves its production operations to Mexico to escape US regulation, the US could intercept GM products at the border and deny permission for those products to be sold in the US market. It could further make it unlawful for US-based dealers to sell GM cars and trucks. Now, of course, the US Government would not do that to General Motors. But, if this example seems far-fetched, substitute illegal drugs from Colombia for GM products. The US Government has, indeed, gone to great lengths to flex its regulatory muscles against certain kinds of products supplied by business entrepreneurs.
The alternative to an unregulated international economy is a regulated one. Government needs to create a structure of laws and enforcement procedures that will cause business firms selling in the world market to act in a socially and environmentally responsible way. If business refuses to comply, then government can and should restrict access to markets. A theoretical model of this regulation would be the Fair Labor Standards Act of 1938, which, among other things, sets minimum wages and maximum hours of work.
The Constitution gave Congress the power to regulate foreign commerce. Congress could ban from the US market any goods or services that were not produced in accordance with labor or environmental standards. Alternatively, it could burden those products with tariffs. One must recognize, however, that NAFTA and GATT both include features that would prevent government from exercising that power. NAFTA would phase out tariffs on products traded between Mexico, Canada, and the US. GATT contains a provision that countries may not consider how goods were produced or harvested in restricting certain types of imports. Although environmental concerns relating to the slaughter of dolphins underlay a US ban on imported tuna from Mexico, a GATT panel in August 1991 ruled that US enforcement of the Marine Mammal Protection Act unfairly restricted trade. The same principle, forbidding process-related evaluation of products, could apply to child labor, slave labor, or other kinds of regulatory objectives.
I want now to spell out how government might effectively regulate trade to protect labor and the environment. Congressional initiatives undertaken in the 1980s linked access to US markets to respect for worker rights. The 1983 Caribbean Basin Initiative and the 1984 Trade and Tariff Act allowed products from certain developing countries to enter the United States duty free on the condition that those countries observed internationally recognized worker rights. The list of worker rights included workers' rights of association (in free trade unions) and collective bargaining, prohibition against convict labor and child labor, and the right to enjoy reasonable wages, hours, and occupational safety and health. The US suspended Paraguay, Nicaragua, and Romania from the Generalized System of Preferences trade program because their governments had violated worker rights. The Omnibus Trade Act of 1988 required the President to attempt to include worker-rights criteria in the GATT.
This worker-rights approach, while a step in the right direction, contains a fundamental shortcoming. The current structure of trade assumes an adversarial relationship between national governments. A nation's government is supposed to negotiate with the governments of other nations for a better position in world trade. Yet, if the principal conflict is between business and government, then national governments ought to cooperate with each other in regulating business regardless of the business firms' "nationality." We need a structure of world trade that will regulate international business firms to promote the well-being of humanity. As state governments cooperate with each other and with the federal government to promote the general welfare, so national governments should work together to set standards of business conduct and to punish violations of them. It makes little sense to accuse Mexico of abusing labor when the violations occurred at Mexican companies named RCA, Zenith, or Ford. Evaluations of conduct should be targeted to particular employers rather than to nations.
Current discussions between Mexican and US officials about repairing damage to the environment in the border region illustrate what is wrong with the present structure of trade relations. Maquiladora employers have created an environmental "cesspool" by dumping untreated industrial waste and by refusing to help pay for community infrastructure to accommodate their burgeoning work force. When the Mexican government proposed in 1988 to levy a 2% tax on maquiladora wages to pay for infrastructure improvements, companies protested. "Several (employers) say that they are in Mexico to make profits and that infrastructure is Mexico's problem," explained a Wall Street Journal article. Now the Salinas government is arguing that Mexico is too poor to clean up the border environment and so the US should provide the funding. The same US corporations that created the environmental mess would be allowed to escape its financial consequences under the Salinas plan as would the Mexican government which used environmental permissiveness to lure those corporations to Mexico. Instead, the US taxpayer, whose employment opportunities have been eroded by the flight of jobs to Mexico, will bankroll the cleanup. Obviously, economic justice requires that the cleanup costs be targeted more accurately to those whose environmentally irresponsible actions caused a need for them.
What approach can be taken?
The US should not eliminate its tariff system on goods and services traded between Mexico, Canada, and the US; but, instead, should retain this system and convert it into an employer-specific method of screening imports according to social and environmental criteria. The degree of business adherence to certain standards would be reflected in a numerical compilation which would, in turn, drive the amount of tariff imposed upon a firm's products as they entered the US. The higher the degree of compliance with social and environmental standards, the lower the tariff. The lower the degree of compliance, the higher the tariff. So these tariffs would be designed to offset the cost advantage of "social or environmental dumping". Specifically, they would be designed to recover certain costs which the multinationals hoped to avoid by moving production to unregulated economies. The tariffs could be compiled to reflect the following three areas of concern:
(1) Environmentally responsible production. A multinational corporation producing goods in Mexico (or another foreign country) would be expected to discharge its industrial wastes according to "world class" standards for disposal of wastes into the water or air or for handling hazardous or toxic materials. If the producing company observed those standards, nothing would be added to the tariff. If the company did not observe the standards, then the regulatory authority would develop a plan for constructing waste-water or sewage-treatment facilities, for installing scrubbers in smokestacks, or for disposing of hazardous waste properly, and would determine the cost of implementing the plan. This total cost would be allocated to the units of production which the company expected to export to the US during a specified time period such as five years. The per-unit costs would be translated into a percentage mark-up to the product price. That mark-up would become the basis of the tariff which the US Government would collect as goods are shipped from Mexico to the United States. The US Government might then use the proceeds to assist the Mexican government in building sewage-treatment facilities and other infrastructure improvements needed to maintain a clean natural environment.
(2) Socially responsible production. Multinational corporations operating in Mexico would be expected to pay their employees an hourly wage equal to the highest level of prevailing wage in their industry by Mexican standards as well as to give their employees the maximum amount of paid leisure or other benefits which they would be entitled to receive by the highest Mexican standards. If the company compensated its employees according to this standard, nothing would be added to the tariff. Otherwise, the US would collect a tariff, through a percentage markup to product price, which would be equal to the difference between actual and expected labor costs per unit of product spread over a number of units of goods exported to the US during a specified time period. The proceeds might be used for services for dislocated workers in the US who were injured by the relocation of production to Mexico. In addition, I would propose that the "highest prevailing" Mexican wage be escalated upwards by a certain percentage each year as part of a development plan for the world economy.
(3) Production that respects human rights. This third category would identify certain corporate activities which are considered to be humanly intolerable. Among them would be production in an unsafe work environment or production with child or convict labor. If a company is discovered to be violating any of these basic standards, its products would be assessed a fine that would be collected through tariffs levied by the US in the same manner as that described above. Alternatively, severe violations of human-rights standards might warrant an outright ban on importation of the offending company's products into the US.
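The arithmetic behind the proposed markup is simple: spread the remediation cost over the units a firm expects to export during the plan period, then express the per-unit share as a fraction of product price. The numbers below are purely hypothetical — the essay proposes the mechanism but supplies no figures — and the wage shortfall in category (2) would follow the same per-unit arithmetic.

```python
def compliance_tariff_rate(remediation_cost, expected_units, unit_price):
    """Fractional tariff markup that recovers a one-time compliance cost
    (e.g. building waste-water treatment) over a firm's expected exports
    during the plan period. All inputs here are illustrative assumptions."""
    per_unit_cost = remediation_cost / expected_units
    return per_unit_cost / unit_price

# Hypothetical example: a $50M treatment plan, 5M units exported over
# five years, at a $200 unit price -> a 5% tariff on each unit.
rate = compliance_tariff_rate(50_000_000, 5_000_000, 200.0)
print(f"{rate:.1%}")  # → 5.0%
```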
It is obvious that a tariff-based system of enforcing labor and environmental standards in world trade would be dealt a crippling setback if the US Congress approves NAFTA. Such a free-trade agreement requires that government surrender this important tool for regulating business activity. Tariffs, however, represent a less severe regulatory technique than litigation leading to bans on the sale of products. While the mechanism of inspection and evaluation and application to particular products might seem to increase bureaucratic red tape, existing product classifications in world trade, computer technology, and the use of bar codes and optical scanners could make the process quite manageable. Harder to achieve would be the political consensus that government ought to undertake this kind of business regulation.
Compounding the problem is the prospect that evaluating corporate performance according to legal definitions of "internationally recognized worker rights" may not be adequate to prevent the real damage likely to be done if Congress approves NAFTA or the latest GATT agreement. While governments might punish employers for violating such rights of workers as the right of association, US workers would still suffer enormously from free trade with Mexico even if employers there scrupulously observed all the regulations. The proposed safeguards still do not adequately address the problem that US factory workers earning perhaps $15 an hour are made to compete on cost with Mexican workers earning $4 or $5 a day. Such disparities of wage have little to do with production efficiencies or the virtues of educational systems, but, instead, reflect factors relating to the two countries' differing levels of economic development.
The only way that government can effectively regulate wages and protect living standards is by intervening directly in the labor market. Such intervention would take the form of limiting the labor supply. The best way to limit the labor supply is by reducing the hours of work. When supply is reduced relative to demand, the price of the commodity sold rises. So the free market for labor would ultimately provide a higher hourly wage if work hours were reduced.
Government can induce employers to cut work schedules by enacting legislation that prescribes a lower standard number of work hours per week and requires that work done beyond the standard be compensated at a higher rate of pay. The federal government can make this change in the context of amending the Fair Labor Standards Act. About ten years ago, Rep. John Conyers of Michigan introduced a bill in Congress that proposed to reduce the standard workweek gradually from 40 hours to 32 hours and to increase overtime pay from time-and-one-half to double-time.
But the US economy is not a closed system; shorter work hours would not necessarily reduce the labor supply. The amount of shrinkage could be made up by increased importation of foreign products. And since employers, especially in the United States, are generally phobic about granting shorter hours, one would anticipate that unilateral moves by government to cut work hours would stimulate a new effort by business to shift production to other countries. A solution, therefore, might be to internationalize the campaign for shorter work hours. Working people in several countries, through unions and other socially conscious organizations, need to build a fire under their own governments to persuade those governments to cut working hours in their national economies. Each nation might thereby do its part in shrinking the global labor supply by reducing work hours according to a cooperative world development plan.
The industrially and financially more advanced nations, especially ones that enjoy a trade surplus, can contribute more to reducing labor supply than the industrially or financially weaker nations can. Fortunately, the government of Japan has developed an initiative to do just that. MITI's latest trade and industrial plan proposes to harmonize trade relations between Japan and its trading partners by encouraging Japanese workers, in journalists' language, to "work less and play more." Specifically, this plan calls for annual work hours in the Japanese economy to fall to around 1,800 hours by the mid-1990s.
Environmentalists, too, have a stake in globally reduced work hours for this would mean breaking the historic link between employment and ecologically damaging economic "growth." No longer would it be necessary to force-feed production through the natural environment just to have jobs. More people could become gainfully employed on a given volume of productive work. Moreover, with more free time, people would have more time to mend and repair consumer products instead of throwing broken products away and purchasing replacement items. The "throwaway" culture could become a thing of the past. Consumers would have more time for recycling. Given more time for spiritual growth, people could turn to a less materialistic type of personal satisfaction that treads more lightly on the environment. With a little imagination, the extra days off could be staggered to reduce traffic jams and, of course, cut down on work-related commuting trips. Happily the interests of labor and the environment combine in a requirement that work time be reduced.
Today we stand at a fork in the road in the world's economic history, contemplating whether to take the "free-trade" path that leads to cheap labor and environmental degradation or the path of social and environmental responsibility. If we choose the latter, government will need to rise to the occasion, reform itself, and assume a new economic role as a necessary regulator of the free market.
William McGaughey is author of "A US-Mexico-Canada Free-Trade Agreement: Do We Just Say No?" (Thistlerose Publications, 1992).
Note: This article appeared in Synthesis/ Regeneration, a publication of the U.S. Green Party, in its sixth issue, spring 1993
Low-Light Photo Tips
The basic problems facing the low-light photographer are being able to use a fast enough shutter speed to permit hand-held shooting, and being able to use a small enough lens aperture to provide the required depth of field. The tools to solve these problems include tripods and other camera supports (which hold the camera steady so you can use long exposure times with non-moving subjects), fast lenses (which let you use faster shutter speeds in any given light level), and higher ISO speeds (which let you use faster shutter speeds and smaller lens apertures in a given light level). Image-stabilizers (built into lenses or cameras, or separate units) are also quite helpful.
One problem with long exposures is that film loses sensitivity as exposure times increase. This is popularly known as "reciprocity failure," and it means that at long exposure times, your photos likely will be underexposed if you go by the meter reading. Since color films have three or four emulsion layers, not all of which lose speed at the same rate, there's a color shift as well as a speed loss as exposure times increase. (Reciprocity failure affects extremely short exposure times, too, but those are not encountered in low-light shooting.) Film manufacturers provide reciprocity compensation data for specific films; this information is often available on their websites.
With digital cameras, long exposures result in increased image noise ("digital grain"). Some digital cameras have noise-reduction features; activate these to improve image quality when making long exposures (see the camera manual for details).
It's a good idea to bracket your low-light exposures whenever possible. Shoot one photo at the exposure setting you think is correct, then shoot additional images, giving more exposure than that. (Normally, you'd bracket in both directions, but overexposure is rarely a problem in low-light photography.) If your camera offers automatic exposure bracketing, you can shoot the bracketed series very quickly, but with practice, you'll learn to do it quickly manually, too.
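The one-direction bracketing described above is easy to express numerically. As a rough sketch (the function name and the 4-second base exposure are illustrative, not from the article), each full stop of extra exposure doubles the shutter time:

```python
def bracket_exposures(base_seconds, stops_up=2):
    """Return the metered exposure time plus `stops_up` longer
    exposures, one full stop apart (each stop doubles the time)."""
    return [base_seconds * (2 ** s) for s in range(stops_up + 1)]

# A metered 4-second exposure, bracketed two stops toward overexposure:
print(bracket_exposures(4.0))  # [4.0, 8.0, 16.0]
```

You would shoot each of these frames in turn; with film, remember that reciprocity failure means the longer frames may still need additional compensation per the manufacturer's published data.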
1. Get Some Support
If the conditions require a long exposure time, it's a good idea to find something to support the camera. The best support is a tripod. It offers the advantages of holding the camera still to prevent image blur due to camera movement (although it won't help with subject movement) and locking in your composition so you can examine it carefully and won't accidentally change it as you squeeze off the shot. The disadvantages are that you have to buy the tripod and cart it around with you, and some venues don't permit the use of tripods.
A monopod is essentially a tripod leg with a camera mount at the top. Sometimes monopods are permitted where tripods aren't, and monopods are much easier to cart around. In fact, I often use mine as a walking stick on hikes. The drawback to the monopod is that it isn't as steady as a tripod.
A beanbag is a great low-light photography tool. Set the beanbag on any handy support (I often use tree branches), and nestle the camera into the form-fitting beanbag.
You also can brace the camera on walls, tables, the floor, or any handy solid object. Just take care not to damage the support surface.
A tripod held the camera solidly for this pre-sunrise city-skyline image on ISO 64 slide film. This was shot from a rooftop; be aware that buildings aren't quite the solid photo platforms you might think: they vibrate a bit, what with air conditioners running, winds, trucks rumbling by on adjacent streets, and the like.
2. Fast Lenses
A fast lens lets you shoot at a faster shutter speed in any given light level, and as a bonus, provides a brighter viewfinder image (with SLR cameras) for easier composition and manual focusing. The drawbacks are that faster lenses cost more than slower ones and are much bulkier, and very wide apertures provide little depth of field. But if you plan to do a lot of low-light shooting, you should consider investing in a fast lens of the appropriate focal length(s). Even zoom lenses are available with maximum apertures of f/2.8 or faster.
A fast lens will let you shoot at a faster shutter speed in any given light level, and allows you to stop the lens down for increased depth of field.
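The aperture/shutter-speed trade-off comes down to simple stop arithmetic. In this sketch (the helper names are invented for illustration), light gathered scales with the inverse square of the f-number, so opening up from f/5.6 to f/2.8 gains two stops and allows a shutter speed four times faster:

```python
import math

def stops_gained(f_slow, f_fast):
    """Full stops gained by opening up from f_slow to f_fast.
    Light gathered scales with 1 / f_number**2."""
    return 2 * math.log2(f_slow / f_fast)

def new_shutter(base_seconds, f_slow, f_fast):
    """Shutter time allowed at the wider aperture for the same exposure."""
    return base_seconds / (2 ** stops_gained(f_slow, f_fast))

print(stops_gained(5.6, 2.8))       # 2.0 stops
print(new_shutter(1/15, 5.6, 2.8))  # 1/60 s, fast enough to hand-hold
```

The same arithmetic applies to ISO: doubling the ISO speed buys one stop, which can go toward a faster shutter speed or a smaller aperture.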
This image was shot at ISO 1600 with a pro digital SLR, and while noisier than a lower-ISO image, quality is still quite good.
Reducing Your Salt Intake
It's often a good idea to reduce the amount of salt (sodium) in your diet if you are diagnosed with certain conditions, such as nephrotic syndrome, Cushing's syndrome, or heart failure. Exactly how much daily salt is needed varies from person to person.
Try some of these tips for lowering your salt intake:
- Flavor your foods with herbs and spices such as basil, tarragon, or mint, or use salt-free sauces or lemon juice. Try plain or flavored vinegar to flavor soups and stews. Use about 1 tsp (4.9 mL) of vinegar for every 2 qt (1.9 L) of soup or stew.
- Choose fresh or frozen vegetables and fruits.
- Include more grains and beans in your diet.
- Choose foods marked "low-salt" or "low-sodium." Foods labeled this way must contain less than 140 mg of sodium in a serving.
- Do not use salt during cooking or at the table. Talk to your doctor before using a salt substitute. It may not be recommended, because most salt substitutes contain potassium. Potassium can build up in the bodies of people who have kidney disease and cause severe illnesses and even death.
- Avoid fast foods, prepackaged foods (such as TV dinners and frozen entrees), and processed foods (such as lunch meats and cheeses). Always check the serving size on processed food. Eating more than the single serving size may increase your sodium beyond a healthy level.
- Avoid foods that contain monosodium glutamate (MSG) and disodium phosphate.
- Avoid canned foods.
- Avoid salted ham, potato chips, pretzels, salted nuts, and other salty snack foods.
By: Healthwise Staff | Last Revised: July 12, 2012
Medical Review: Kathleen Romito, MD - Family Medicine; Rhonda O'Brien, MS, RD, CDE - Certified Diabetes Educator
Curiosity Really Starts to Click
Curiosity is on a mission to find evidence of life on Mars -- but scientists aren't expecting to find fossilized purple people eaters or anything of the kind. "If life did arise on Mars, and if evolution had the same pace on Mars as it did here on Earth, only single-celled creatures would have had time to evolve before Mars lost its atmosphere," explained James R. Webb, director of FIU's SARA Observatory.
NASA's Curiosity rover has begun satisfying the curiosity of mission scientists by sending high-quality images of Mars' surface back to Earth. Although it's only had since Sunday night to collect data, following its touchdown in the Gale Crater, Curiosity has sent a batch of snapshots that are already allowing the NASA team to garner a good deal of information.
The rover's first pictures show where its hardware -- including the sky crane, a parachute, a heat shield and a back shell -- landed, giving scientists new insights about the Martian surface.
"Next to the rover, you can see where the rocket thrust blew away some of the soil and revealed a harder material underneath," said Mike Malaska, a NASA Jet Propulsion Laboratory solar system ambassador. "That tells us that the firm material layer might not be very thick."
The rover wheel itself also helps scientists better gauge Mars' topography.
"The wheel is resting on the surface -- it hasn't sunk in at all -- so the surface must be pretty firm," Malaska explained. "It looks like the materials are a pretty uniform size. That's also a clue that it has been geologically sorted -- maybe either by wind or water in the past."
Images from Curiosity show a massive mound in the distance, dubbed "Mount Sharp," as well as dark dunes that scientists guess are made of volcanic sands. The plan is for Curiosity to explore the dramatic scenery as its mission progresses.
Looking for Life
Towering higher than Mount Rainier, Mount Sharp stores layers of rock and minerals that have accumulated over more than 2 billion years. Scientists handpicked the Gale Crater for Curiosity's mission because the massive rock layers at Mount Sharp can give the best clues about the water -- and possibly life -- that may have existed in Martian history.
"Curiosity's mission is to look for signs of one of the most significant things imaginable -- signs that life exists or has existed somewhere other than Earth," said James R. Webb, professor of physics and director of the SARA Observatory at Florida International University.
"Astronomers have known for many years that Mars used to have a much thicker atmosphere and that it undoubtedly had surface water, lakes and rivers," Webb told TechNewsWorld.
Now, liquid water would evaporate immediately on the surface of Mars, he said, but beneath its permafrost and polar ice caps, water still exists. Scientists knew that much about Mars before Curiosity's mission, from previous images gathered by orbiting crafts.
Curiosity will sniff around the planet for more clues about past Martian life -- although that life probably won't look like anything out of a sci-fi film, according to Webb.
"Curiosity is a large, car-sized rover which is fine-tuned to search for signs that Mars once, or perhaps still, supported some form of life," he noted. "If life did arise on Mars, and if evolution had the same pace on Mars as it did here on Earth, only single-celled creatures would have had time to evolve before Mars lost its atmosphere."
Curiosity is geared to explore that possibility of life over the course of its mission, transmitting clues about the planet's past back to Earth, little by little.
"As the mission progresses, we'll climb up that layer stack and read the stories in the rock like chapters in a book, one after another," said Malaska. "Each rock layer will give us clues how Mars developed. We'll start at the bottom, which are the oldest layers, and work our way upwards towards younger deposits."
Curiosity's Thrilling Arrival
The relatively relaxed photo-snapping follows what was perhaps the most grueling episode in Curiosity's adventure so far.
After traveling for eight months through space, the craft had just seven minutes -- dubbed the "seven minutes of terror" -- to touch down in a never-before-tried landing strategy called the "Skycrane maneuver."
Many of the landing's technical challenges hadn't been used before, but everything worked smoothly.
"This engineering success paves the way for the next and future missions," said Malaska.
Living Up to the Hype?
Curiosity's landing was unique in that it generated buzz even outside the world of science and exploration. It was broadcast through a live online stream, and also for the general public on large screens across the country, including at Times Square for a crowd that didn't seem to mind the middle-of-the-night timing.
Simply launching Curiosity and mastering the landing at a time when initiatives from NASA are some of the first to get slashed due to budget cuts is a major achievement, said Webb.
"This mission is already a success on so many levels, some that have nothing to do with the actual science it will perform. It represents a success for NASA, an agency that is being starved for funds," he said.
The collective spirit surrounding Curiosity's mission is a triumph on another level, said Malaska.
"From a human perspective, this is a wonderful accomplishment," he remarked. "It is really neat to turn on the news and see a bunch of people excited because we've done something magnificent that moves human progress and exploration forward. And it is a true 'we.' MSL Curiosity contains instruments from several nations, and the images are available to everyone through the Internet. We are watching exploration history."
Women and HIV
By 1997, women accounted for almost 20% of all diagnosed AIDS cases in the United States and more than 50% worldwide. The U.S. numbers may under-represent the real percentage since many women are not tested for HIV unless they become pregnant or ill. Over the past several years, the clinician and researcher perception of individuals "at risk" for HIV infection has begun to change to include women. However, this change in thinking is a slow one and research specific to and inclusive of women with HIV is just starting in many arenas. Fortunately, there are many similarities in the treatment and care of both men and women, and many of the recent advances in our understanding of HIV and the disease process apply equally well to both.
A common rumor that many HIV-positive women have heard is that women with AIDS die faster than men. This is simply not true. What is true is that, in general, people with HIV who do not access services and lack competent medical care die faster than people who take an active role in their health care and work with a doctor or health care provider experienced in managing HIV disease. In fact, in the study that originally showed this difference, women appeared to die faster until the researchers went back and figured out who had access to health care and other services. Those (men and women) who had health care and support services were less likely to become ill or die, primarily because they knew their HIV status earlier and were able to prevent illness rather than treat it. Unfortunately, many women find out about their HIV status later in the disease process than men and thus miss the opportunity to take many of these preventative health measures. The good news here is that, biologically, women are not at greater risk for progressing to AIDS or dying. Women can and should have the same chance to survive and thrive as men living with HIV and AIDS.
The goal of this discussion paper is to provide readers with some of the known health-related issues uniquely affecting women living with HIV disease. There are many topics common to both men and women that will be mentioned but not covered here, primarily because these are addressed in other Project Inform materials. To help sort out the general distress and confusion which often accompanies a new HIV diagnosis, Project Inform provides a Discussion Paper called Day One. Day One helps readers understand the basics of HIV disease and what being HIV-positive means, while also introducing topics ranging from antiviral strategies to specific drug information that are later covered in great depth in other Fact Sheets and Discussion Papers. For further information or Fact Sheets on these and other topics, contact the Project Inform Hotline.
As soon as a person receives an HIV diagnosis, he or she is confronted with many choices. Some of the most complicated decisions center on HIV treatments. The world of treatments for HIV is a big one, and it is getting bigger every day. It can be so intimidating many people choose not to approach it until they become ill. But treatments are tools, not enemies, in this battle. In the long run, it is important to be informed about the various treatment options because this kind of knowledge gives you the power to decide for yourself how and when you will begin treatment. As long as you are aware that not using therapy today may reduce the ability to rebuild the immune system tomorrow, that can be an informed and empowered decision. There is no one absolutely `right' way to treat HIV, only the way that is right for you.
One of the biggest hurdles for a person with HIV can be changing his or her mentality about treatment. In our society, taking treatments is something that is done when one is ill or has bothersome symptoms. HIV disease calls for a different medical response, however. People in the early stages of HIV disease often have few if any obvious symptoms, but their immune systems are nonetheless suffering a gradual decline. Up to a point, the immune system suffers in silence, giving no sign of its distress. Eventually, however, the damage becomes serious and dangerous infections begin to break through our immune defenses. Some of the damage to the immune system may be beyond repair, at least today. Most researchers believe the best way to treat HIV disease is to take action early enough to prevent serious immune system damage, and thus prevent the risk of secondary, or "opportunistic", infections and severe damage to the immune system itself. Initiating therapy to slow or halt this damage is one important step you can take to prevent or delay progression of HIV-disease (getting sick). For the most part, this means getting on a treatment program before, not after, serious symptoms occur.
In addition to preventing damage to the immune system, acting directly to prevent or treat opportunistic infections (OIs) is also important. Preventative medications are available for some of the most common OIs.
Some information suggests that there are gender differences in rates of certain infections associated with HIV disease. Gynecologic manifestations are clearly unique to women. Compared to men, women may experience more frequent candidiasis (vaginal, esophageal, and oral thrush or yeast infections), herpes infections and types of cytomegalovirus (CMV) disease.
Table 1 lists the most prevalent OIs by gender. The information comes from a large community-based trials database covering 1990-1994. The table lists the major infections that occurred in the six months prior to death of 1,883 people living with AIDS, including 253 women. Although this information is dated, it illustrates some of the differences in infections between men and women.
Not all of the gender differences, or the reasons they happen, are known. In addition to physical differences between men and women, there are psychosocial and lifestyle issues that may impact disease rates. For example, it may be that a large percentage of HIV-positive women in this database have a history of injection drug use, which has been associated with a higher incidence of bacterial pneumonia, as shown in the Table. Although such statistics are interesting, it is important to know they can only serve as sources to help guide your decision-making. They cannot tell you what infections you are at risk for. The more you know, the better able you will be to decide which therapies to use and when. Preventing OIs should not take a back seat to anti-HIV treatment. Planning your treatment strategy should include consideration of potential risks for OIs and preventative measures that can be taken. Project Inform hotline volunteers can help you through some issues to consider as you formulate ideas around your own strategy for managing HIV disease, including prevention of OIs.
It is important to feel comfortable with a treatment strategy. If a clinician does not explain his or her thoughts in an understandable way, it is your responsibility to ask questions. (See Building a Doctor/Patient Relationship, available from the PI Hotline.) At a time when you are expected to alter your lifestyle to commit to a complex, multi-drug regimen, your doctor needs to be clear, comprehensive and forthright with the rationale and reasoning behind any therapy recommendations. In the end, it's your decision and you should make sure you have all the information you want. It is also important to participate in building a long-term treatment strategy that you feel comfortable with and empowered by.
There are several possible reasons why a drug may work differently in a woman than in a man. The issue of gender differences in medicine is not unique to HIV. Overall, the data that have been presented on gender analysis have identified differences in toxicity, side effects and blood levels of drug, but not differences in effectiveness of therapy. Perhaps the most striking study illustrating this thus far is a delavirdine (Rescriptor) + AZT study in which 19% (or 139) of the volunteers were women. In this study, the level of drug which accumulated in the blood of women volunteers was 1.8 times higher than the amount observed in men, even though both were taking exactly the same doses. Interestingly, this did not make a difference in the effectiveness of the drug. It is still unclear what caused this difference; however, the effect of hormone levels on drug metabolism (breakdown) has been one suggestion. This at least suggests that women may absorb drugs differently than men in some cases and that drug companies should be careful to watch for this effect.
In addition to higher blood levels of drug, some studies have reported increased or varied side effects associated with other anti-HIV drug use in women. A study looking at ritonavir (Norvir®), a protease inhibitor, showed that women experienced more nausea, vomiting and malaise (depression, fatigue, etc.) than men. It's not that these side effects were unique to women, but rather that they experienced them more often than men did. This may also be due to a metabolism problem caused by hormone levels, amount of drug or some other unknown variable.
Another obvious difference between men and women is their average weight. Some drugs work best when the dose given is partially determined by the weight of the person. It is unclear, for example, whether a 120-pound woman should be assigned to receive the same dose of potent anti-HIV drugs as a 240-pound man. Yet, this is exactly what happens. Little research has been done to determine the optimum dosing in women, or even in men of different weights.
Unfortunately, far too little is known about gender differences and their causes. For women making treatment decisions now, it is important to gather information about therapies. Discuss all therapies being used in a regimen, including complementary therapies (e.g. herbs and vitamins), with a clinician to make certain that there are no serious drug interactions or reasons not to consider a specific therapy. Be sure to monitor and report to your clinician any symptoms, body changes or side effects that you may experience. There may be steps, such as dose modification or treatments for symptoms, which may help with problems you are experiencing. Studies are being designed and as more women participate in clinical trials, more of these puzzling issues will be explored.
The use of hormone replacement therapies in both men and women for issues such as symptom management and weight maintenance have become common, even though there is little data from studies to guide such decisions. Hormones are chemical substances that the body secretes to help regulate metabolism, activity/energy level, reproductive capability and sex drive. There are many types of hormones. Estrogens and progesterone are the female sex hormones most commonly referred to. Testosterone is the most commonly discussed male sex hormone. All of these hormones are present in everyone, however, just at different levels based on gender.
Because hormones regulate many bodily functions, it makes sense that HIV disease, among other things, affects them and vice versa. For example, in men with advanced HIV-disease, testosterone levels are frequently deficient and replacement therapy is used to increase energy levels and libido (sex drive), manage depression and promote weight maintenance and gain. In women, reports of abnormal menstrual cycles, weight loss, gynecological infections, headaches and fatigue are also common and may be related to decreased estrogen levels.
In addition to general health issues affected by hormone levels, for women there are the added gynecological manifestations, menstrual cycle, and pregnancy issues that are clearly tied to hormone activity. Unfortunately, most of the conversations about hormone use and function focus on the use of hormone therapy as birth control. However, for many women living with HIV, pregnancy issues may play little or no part in their decision to use hormone therapy. Hormone replacement therapy (HRT) is used to regulate menstrual flow, to manage menopause or pre-menstrual syndrome (PMS) or to stabilize or reverse body composition changes. These applications, as well as the impact of HIV and the therapies used to treat it, have not been well studied thus far.
The occurrence and frequency of abnormal menstrual cycles and premature menopause in HIV-positive women have long been debated. Studies comparing menstrual cycle issues in HIV-positive and HIV-negative women have often produced conflicting results. Many doctors view abnormal menstrual cycles as a mere inconvenience rather than a serious medical condition and thus don't address them aggressively. However, one recent study reported that the use of HRT in HIV-positive post-menopausal women was correlated with longer survival. If this survival benefit is confirmed in other studies, hormone regulation may have much broader implications than previously assumed on the health and well being of women living with HIV disease.
Aside from the gynecologic implications of hormone levels, there are many unanswered questions about the relationship between hormone levels and the immune system, drug metabolism and body composition. Little is known about how hormone therapies commonly used by women interact with the many anti-HIV regimens currently being used. The few studies of oral contraceptives that have been done have only looked at how HIV medications affect the levels of contraceptives needed to prevent pregnancy or how the contraceptive affects the HIV medication blood levels. For instance, one of the protease inhibitors, nelfinavir (Viracept®), decreases the levels of ethinyl-estradiol (the most commonly used birth control pill) by 50%. Most doctors recommend that women trying to prevent pregnancy, therefore, increase their dose. However, many important questions have not been answered. For instance, is there a difference between naturally occurring estrogens and synthetic estrogens (like the birth control pill)? Will taking a drug like nelfinavir that decreases synthetic hormone levels also impact natural hormone levels? If yes, then what is the impact? And what, if any, are the risks of increasing the intake of these synthetic substances, even though the amount of estrogen in the system is being normalized?
Not only are there questions about birth control and hormone replacement therapy levels, but also the reverse. Which anti-HIV drugs are metabolized differently because of the use of hormone therapies? Since many of these questions remain unanswered, how do you decide to use hormone replacement therapy or hormonal contraceptives? Look for information at your local AIDS service organization or clinic (see Resource List, below). Talk with your doctor or health care provider. If you are experiencing abnormal periods (unusually heavy, light, irregular, or painful), or if you need additional contraceptive coverage, you may want to consider hormonal contraceptives. If you are menopausal or post-menopausal, you may want to consider estrogen replacement therapy. If you are experiencing body composition changes (weight loss, gain, or redistribution), fatigue, depression, decreased sex drive, or energy loss, then you may want to discuss checking your estrogen levels with your clinician to make sure you are not becoming menopausal prematurely. Be aware that estrogen levels go up and down in a monthly cycle, so to get an accurate picture you will probably need to get at least three measurements (weeks 0, 2, and 4). These are taken with a simple blood draw. Unfortunately, even if you get a normal measurement, it may not tell you whether or not to use estrogen replacement. An isolated set of numbers may not reflect what is "normal" for you. Some researchers suggest that rather than checking estrogen levels, clinicians should look at markers of pituitary function. Pituitary hormones, FSH (follicle-stimulating hormone) and LH (luteinizing hormone), stimulate progesterone and estrogen.
There have been anecdotal reports that despite 'normal' estrogen levels on laboratory reports, some women have symptoms, including fatigue, improve after initiating hormone therapy. The problem with this is that the use of estrogen replacement therapy has been linked to an increase risk of breast and uterine cancers. On the other hand, estrogen replacement for post-menopausal women has been linked to a decrease risk of heart disease and osteoporosis (a degeneration of the disks in the spinal column that causes older women to be slumped over).
Overall, it is important to recognize the role hormones play in our everyday health and well-being. If hormone replacement therapy will reduce symptoms and improve quality of life without adding long-term risks or side effects, it is probably a viable choice. Discussing all your symptoms and body changes with your clinician may be one way to help identify appropriate therapy options for you. Remember, in most cases, it is easier to prevent illness or degeneration than to treat it.
This article was provided by Project Inform. Visit Project Inform's website to find out more about their activities, publications and services.
The "next big push" in combating hunger is creating a sustainable system of agriculture that empowers communities around the world. Here UN Secretary-General Ban Ki-moon gives the five objectives of the "Zero Hunger Challenge."
The five objectives of the "Zero Hunger Challenge" are:
1. A world where everyone has access to enough nutritious food all year round.
2. No more malnutrition in pregnancy and early childhood: an end to the tragedy of childhood stunting.
3. All food systems sustainable - everywhere.
4. Greater opportunity for smallholder farmers - especially women - who produce most of the world's food - so that they are empowered to double their productivity and income.
5. Cut losses of food after production, stop wasting food and consume responsibly.
Courtesy of the Food and Agriculture Organization of the United Nations (FAO).
DEADWOOD, TEXAS. Deadwood, previously known as Linus, is on Farm Road 2517 some ten miles east of Carthage in eastern Panola County. The area was first settled in 1837 by Adam LaGrone and his family, who built a homestead not far from Socogee Creek. Around 1860 LaGrone's son, H. C., built a mill and gin that became the nucleus of the later town. The small settlement was originally known as Linus, but when residents applied for a post office in 1882, another town already had that name, and the new name Deadwood was chosen at a town meeting. By 1885 Deadwood had an estimated population of fifty, two churches, a district school, and a steam cotton gin and gristmill. A hotel was built there around 1900 but went out of business a few years later; the local post office was discontinued in 1917. In the mid-1930s Deadwood had a church, a school, and two stores; its reported population in 1936 was 125. After World War II the community's school was consolidated with the Carthage district, and the remaining businesses at Deadwood closed. In 1990 Deadwood was a dispersed rural community with a reported population of 106. The population remained unchanged in 2000.
History of Panola County. (Carthage, Texas: Carthage Circulating Book Club, 1935?). Leila B. LaGrone, ed., History of Panola County (Carthage, Texas: Panola County Historical Commission, 1979). Historical Marker Files, Texas Historical Commission, Austin. John Barnette Sanders, Index to the Cemeteries of Panola County (Center, Texas, 1964).
The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article. Christopher Long, "DEADWOOD, TX," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/hld09), accessed June 20, 2013. Published by the Texas State Historical Association.
The overall goal of IFAP is to help UNESCO Member States develop and implement national information policies and knowledge strategies in a world increasingly using information and communication technologies (ICT). In order to achieve this goal, the Programme concentrates its efforts on the five priority areas listed below.
- Information for Development focuses on the value of information for addressing development issues.
- Information Literacy empowers people in all walks of life to seek, evaluate, use and create information effectively to achieve their personal, social, occupational and educational goals.
- Information Preservation will be predominantly executed by strengthening the underlying principles of the Memory of the World Programme, beyond its registers, which serve as catalysts to alert decision makers and the public at large.
- Information Ethics cover the ethical, legal and societal aspects of the applications of ICT and derive from the Universal Declaration of Human Rights.
- Information Accessibility encompasses the many issues surrounding availability, accessibility and affordability of information, as well as the special needs of people with disabilities.
Researchers are searching the ocean for Dead Zones, areas with extremely low levels of oxygen that cannot sustain life. Last summer, a huge Dead Zone settled in on the coast of Oregon, causing fish and crustaceans to die. It disappeared in the fall, but now it's back. Oceanographer Jack Barth says, "What I think we are seeing is a tipping of the balance of the ecosystem. We don't fully understand what the cause of that is."
Jeff Barnard writes that there are more than 30 man-made Dead Zones, including Hood Canal in Puget Sound, the Mississippi River delta and Chesapeake Bay. These are places where nitrogen fertilizer from farm fields has washed into the water, causing excess growth of tiny plants called phytoplankton. When they die, the bacteria that decompose them use up all the oxygen in the water, meaning fish, crabs and other marine life suffocate.
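The chain described here (nutrient runoff, bloom, die-off, bacterial oxygen consumption) can be caricatured as a toy depletion model. Everything below is an illustrative assumption, not an oceanographic model:

```python
# Toy sketch of the mechanism: a nutrient-driven phytoplankton bloom dies,
# and bacterial decomposition of the dead biomass consumes dissolved oxygen
# until the water is hypoxic. All rates and thresholds are invented.

def oxygen_after_bloom(o2_mg_per_l, dead_biomass, o2_per_unit_biomass,
                       days, decay_per_day=0.2):
    """Deplete dissolved O2 as a fixed fraction of biomass decomposes each day."""
    for _ in range(days):
        decomposed = dead_biomass * decay_per_day
        dead_biomass -= decomposed
        o2_mg_per_l = max(0.0, o2_mg_per_l - decomposed * o2_per_unit_biomass)
    return o2_mg_per_l

# Hypothetical numbers: well-oxygenated water (8 mg/L) after a large die-off.
final = oxygen_after_bloom(8.0, dead_biomass=10.0, o2_per_unit_biomass=1.0, days=14)
print(final, final < 2.0)  # below roughly 2 mg/L is commonly described as hypoxic
```

The point of the sketch is only that the oxygen loss is driven by how much dead biomass there is to decompose, which is why fertilizer runoff, rather than anything intrinsic to the water, creates these zones.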
Naturally caused Dead Zones in open water, like the one off Oregon, are rare and less well understood. Others have been found off the coasts of Peru and South Africa. Marine biologist Jane Lubchenco says, "[Oregon's Dead Zone] might be a window into possibly important larger-scale changes in the Pacific."
"Because we think it is potentially a long-term change, to be absolutely certain we need many years of observations," Barth says. "We are still at the fundamental research level, but the impacts could be quite large."
When we probe the mysteries of the past, we find there's more to learn than conventional history books tell us.
Generations of literary students have wondered how Sir Arthur Conan Doyle the credulous spiritualist could have created the brilliantly deductive Sherlock Holmes. Doyle was so credulous he actually believed Harry Houdini could dematerialize to escape from confinement and refused to believe Houdini himself when he explained that he used conventional magicians' techniques. Believers in spiritualism and paranormalism have argued that Doyle's creation of Sherlock Holmes demonstrates his rationality so clearly that there must be a rational basis for his belief in spiritualism. We can gain some insight by watching Holmes at work in The Adventure of the Blue Carbuncle.
From a lost hat, Holmes deduced that the wearer was intelligent, preferred a certain type of hair dressing, had grizzled hair, was once prosperous but had fallen on hard times, had no gas lines in his home, and had marital problems. The inference of intelligence came from the popular 19th century notion that intelligence correlated with brain size, but a large hat may signify nothing more than bushy hair. The inferences about hair style are based on bits of hair and hair cream on the hat.
The elaborate scenario involving the man's life style was based essentially on the hat being a recent and expensive style but now in poor condition. Holmes never really considered the very real possibilities that the hat might have been stolen, lost and then found by someone else, or given away. The man's marital problems were explained by the hat's poor maintenance. Unless, of course, the man were single.
I have to insert my personal heresy here. I have never been particularly impressed with Sherlock Holmes. Most of the stories I have read involve banal and inconsequential mysteries. Furthermore, the stories are rarely mysteries in the modern sense, where clues are presented that challenge the reader to solve the problem as well. Mostly the evidence appears without warning, Holmes explains what it means, and follows it to a conclusion of Doyle's own choosing while myriad other possible interpretations of the evidence are simply ignored.
Holmes is infallible because Doyle writes him that way. He scans the evidence, zeroes in unerringly on the correct interpretation, and rarely has to revise his hypotheses. That's part of his immense appeal. Holmes invariably arrives at the correct solutions, rarely examines alternative explanations except to dispose of them, never encounters evidence that is so ambiguous it cannot be used, and generally views formulating a plausible hypothesis as the solution to the problem.
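The essay's complaint, that a single clue is compatible with many hypotheses, can be made concrete with a Bayesian update. All the priors and likelihoods below are invented purely for illustration:

```python
# One observation (an expensive hat in poor condition) is compatible with
# several hypotheses, and without strong priors no single one dominates.
# Every number here is made up to illustrate the point.

def posterior(priors, likelihoods):
    """Bayes' rule over competing hypotheses for a single observation."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

priors = {"fallen on hard times": 0.25, "hat was stolen": 0.25,
          "hat was lost and found": 0.25, "hat was a gift": 0.25}
# P(worn expensive hat | hypothesis): guesses, not data.
likelihoods = {"fallen on hard times": 0.6, "hat was stolen": 0.4,
               "hat was lost and found": 0.5, "hat was a gift": 0.3}
post = posterior(priors, likelihoods)
print(max(post, key=post.get))  # Holmes's favorite wins, but only at ~1/3
```

Even on numbers chosen to favor Holmes's interpretation, it carries only about a third of the probability mass, which is the gap between deduction as Doyle wrote it and inference as it actually works.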
Given this essentially mystical view of the scientific method, where intuitive methods are infallible and never need correction, it is no mystery at all how Doyle could be a credulous spiritualist. Holmes embodies Conan Doyle's fantasies of omnipotent scientific intuition, which Doyle acted out himself in his investigations of spiritualism. The contrast between Holmes and Doyle is the contrast between how well this approach works in fantasy versus how well it works in real life.
George Orwell's novel of a totalitarian future, 1984, has been claimed to have over 200 accurate predictions of future events or trends. How was Orwell able to achieve such incredible accuracy?
Simple. Every single correct "prediction" in 1984 describes something that existed in 1948, when the book was written. Some, like thought control, secret police systems, nuclear weapons or television, really existed, others, like two way visual communications devices, were common themes in predictive literature.
Isaac Asimov pointed out some of the holes in 1984. Although the society of 1984 is physically decrepit, the omnipresent view screens of the Thought Police never break down. Now we might expect a totalitarian state to devote more resources to its police system than to quality of life, like the former Soviet Union did, but to expect that kind of perfection is unrealistic. Far more likely to happen is what actually did in the Soviet Union: the system becomes corrupt and inert and eventually crumbles. Furthermore, if everyone is being watched (at least among the upper classes), there have to be as many watchers as people under surveillance, and to guard against fatigue or lapses in attention, they'd have to be replaced frequently. Moreover, the watchers themselves would have to be watched, lest they collude with the people they are watching or with each other. The vast majority of the population would have to be watching video screens.
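Asimov's staffing objection can be put in numbers. If each watcher monitors k screens and the watchers are themselves watched in turn, the chain of watchers sums to a geometric series of roughly N/(k-1) staff per shift; with one watcher per person the chain never shrinks at all. The parameters below are assumptions for illustration:

```python
# Illustrative staffing arithmetic for the watchers-watching-watchers chain:
# total staff = N/k + N/k^2 + ... ≈ N / (k - 1) for k > 1.
# Screens-per-watcher and shift structure are invented assumptions.

def watchers_needed(population, screens_per_watcher):
    """Sum the chain of watchers watching watchers (requires k > 1)."""
    total, layer = 0, population
    while layer >= 1:
        layer = layer / screens_per_watcher
        total += layer
    return total

# With one watcher per 10 screens, watching 1,000,000 people takes
# roughly 1,000,000 / 9 ≈ 111,111 full-time watchers per shift,
# before accounting for replacements against fatigue.
print(round(watchers_needed(1_000_000, 10)))
```

The arithmetic is forgiving when each watcher covers many screens, but Orwell's premise of continuous, attentive, per-person surveillance pushes k toward 1, which is where the staffing requirement explodes, exactly as the essay argues.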
Of course, a modern reader objects, computers could do the job far more effectively. Yes, they could. Britain, in particular, is far down the road to having viewers everywhere, monitored by computers that never get tired and never have dubious loyalties. Nothing so completely reveals the myth of Orwell's predictive powers as his utter failure to predict the rise of computers. Julia, the illicit lover of protagonist Winston Smith, worked in a section of the Propaganda Ministry that turned out junk literature for the working class. There were half a dozen or so plot lines, which were rearranged - how? By computer? No, by mechanical rearrangement of blocks of type which were then cleaned up by writers. Orwell utterly and completely failed to foresee word processing. Now fair enough, nobody in the early days of computing foresaw word processing, and that's my point precisely. Orwell showed no more prescience than anyone else in predicting the future.
Nor is there any mention of space flight in 1984. At one point O'Brien, the secret police officer, told Winston that the stars are only a few hundred miles away. We might suspect that space travel was kept secret from the masses, but given that the novel mentions many other kinds of military technology, about which the state was openly boastful, it's hard to believe they would fail to brag about orbiting weapons platforms or spy satellites if they really had them - if Orwell had actually predicted them, that is.
1984, like Animal Farm, was a deep embarrassment to leftists. Orwell, a socialist disgusted and disillusioned by the excesses of Stalin's regime, wrote both works in protest. Despite many attempts to re-spin 1984 as being "really about the alienation in all modern societies," the references to socialism in 1984 are pervasive. Oceania (the Americas and British Empire) is ruled by a system called Ingsoc (English Socialism), and Eurasia (Russia and Europe) is ruled by Neo-Bolshevism. The lessons of 1984 might be applicable to any totalitarian system, but the novel is first, last, and foremost about socialism.
Created 03 December 2002, Last Update 02 June 2010
Not an official UW Green Bay site
The national flower of Colombia is the orchid Cattleya trianae, which was named after the Colombian naturalist José Jerónimo Triana. The orchid was chosen by the botanist Emilio Robledo, acting on behalf of the Colombian Academy of History, when he was asked to determine the most representative flowering plant of Colombia. He described it as one of the most beautiful flowers in the world and selected Cattleya trianae as the national symbol.
The national tree of Colombia is the palm Ceroxylon quindiuense (Quindío wax palm), which was named after the Colombian department of Quindío, home of the Cocora Valley, the only habitat of this restricted-range species. The wax palm was selected as the national tree by the government of Belisario Betancur and was the first tree officially declared a protected species in Colombia. C. quindiuense is the only palm that grows at high altitudes and is the tallest monocot in the world.
According to the Colombian Ministry of Environment, the following ecoregions have the highest percentage of botanical endemism:
Globally, the varroa mite, Varroa destructor, is the most serious threat to the western honeybee, Apis mellifera. Varroa is a parasite that feeds on the bee and acts as a vector for viruses. Untreated, colonies will die in just a few years. Varroa is thought to be at the core of unexplained bee losses (Colony Collapse Disorder) across the world.
Identifying Varroa infestations
The presence of varroa is easily overlooked by beekeepers because its red-brown colour and small size (1.5mm by 1mm) make it so difficult to see on the adult bee. Oval in shape, it is able to conceal itself in places on the adult bee where the bee finds it difficult to groom.
Infested hives may seem strong and even give high honey yields – heavily infested bee colonies can bring in a good honey crop and yet be dead within weeks.
Varroa watch list:
- Examine hive floor debris for mites – purpose-made varroa floors with screens help.
- In heavy infestations, mites can be seen on adult bees, on wax combs and in cells.
- As varroa is more attracted to drone brood than to worker brood, uncapping and examining samples of drone brood may be used as a diagnostic tool for varroa infestation.
- A sudden crash in adult bee numbers may be an indication of varroa.
- Bees with twisted or shrivelled wings, small abdomens or other deformities may be the result of varroa plus viral infections.
- Poor general colony health and irregular brood pattern may be attributed to varroa plus attack by other disease organisms (viruses, bacteria, fungi) sometimes referred to as Parasitic Mite Syndrome or PMS.
- Diagnostic treatment of the honeybee colony may be performed with approved acaricides and methods.
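The first item on the watch list, examining floor debris, is usually quantified as a daily mite-drop count compared against a treatment threshold. A minimal sketch follows; thresholds vary widely by region and season, so the numbers here are placeholders, not official guidance:

```python
# Minimal sketch of mite-drop monitoring from a screened varroa floor.
# Treatment thresholds differ by region and season; the values below
# are illustrative placeholders only.

def daily_mite_drop(mites_counted, days_on_board):
    """Average natural mite fall per day from a floor-insert count."""
    return mites_counted / days_on_board

def needs_treatment(drop_per_day, threshold_per_day):
    """Flag a colony whose natural daily mite fall exceeds the threshold."""
    return drop_per_day >= threshold_per_day

# Example: 63 mites collected on a screened floor insert over 7 days,
# against an assumed late-season threshold of 5 mites per day:
drop = daily_mite_drop(63, 7)
print(drop, needs_treatment(drop, threshold_per_day=5))  # 9.0 True
```

In practice the count is only one signal, combined with drone-brood sampling and colony observation as the list above describes.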
How Varroa spreads
The Origins of Varroa
The original host of the varroa mite was the Asian honeybee, Apis cerana, which can tolerate varroa infestations because the reproductive rate of the mite is not too high (it can only reproduce in drone brood) and because the adult A. cerana bees remove mites by grooming and cleaning behaviours.
In the European honeybee, Apis mellifera, however, the varroa mite can infest both drone and worker brood, and A. mellifera shows little grooming or hygienic behaviour to get rid of the mites.
Varroa jumped hosts to A. mellifera probably sometime in the early 1900s and has spread rapidly to almost all the world’s beekeeping areas.
The spread of varroa within and between colonies
Shortly before the brood cells are to be capped, the varroa mites detach themselves from the adult bees, enter the cells and hide in the brood food provided by nurse bees. Once the cells are capped, the young larva ingests the brood food, liberating the mite or mites.
The varroa mites then pierce the cuticle of the bee larvae and feed off the haemolymph. Only after the first blood meal can the female mite lay her eggs, which quickly hatch and infest the cell with mites.
A single brood cell can contain as many as ten mites of different generations. These sucking parasites weaken the bee brood, impairing normal development. When varroa infestation is severe, worker bees and drones emerge with shortened abdomens, misshapen wings or other deformities. Young bees such as these have a brief life expectancy and are generally immediately rejected by the colony.
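The multi-generation infestation of cells described above produces roughly exponential mite growth at the colony level, which is why an apparently healthy hive can crash within weeks. A toy discrete-cycle projection (the growth factor per brood cycle is an assumption; real dynamics depend on season and brood availability):

```python
# Toy discrete-generation model of varroa growth. Real population dynamics
# depend on brood availability and season; the growth factor per cycle
# used here is an illustrative assumption.

def mite_population(start, growth_per_cycle, cycles):
    """Project mite numbers over successive brood cycles."""
    mites = start
    history = [mites]
    for _ in range(cycles):
        mites = mites * growth_per_cycle
        history.append(round(mites))
    return history

# e.g. 10 founding mites, population doubling each brood cycle:
print(mite_population(10, 2.0, 8))  # [10, 20, 40, 80, 160, 320, 640, 1280, 2560]
```

The compounding is the key point: a few founding mites can reach colony-threatening numbers within a single season, even while honey yields still look normal.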
USDA has made an excellent 10-minute video of the lifecycles of the honeybee and the varroa mite:
Transfer of varroa between colonies of bees
Attached to flying bees: Varroa mites attach themselves to the abdomen or thorax of adult bees. Spines on their legs also entwine with hairs on the body surface of the bee. Varroa mites can achieve wide geographic distribution by securing themselves underneath or between the sclerites of the bee and being carried in flight.
Carried by a robber bee: A robber bee that has been infested with varroa mites can transfer them to previously uninfested hives during robbing. A robbing bee may also become the unsuspecting host when stealing stores from an infested hive.
Drifting Bees: Varroa mites can also be transmitted during swarming or by drifting bees. Drones, in particular, can carry mites from one hive to another, sometimes over large distances.
The spread of the varroa mite can be accelerated by:
- migratory beekeeping
- transfer of bees between colonies
- where a colony social structure has already been weakened by varroa, hives are more vulnerable to robber bees, which pick up and then disperse the mites to their own and other colonies.
The rise of resistant varroa
The development of resistance is an almost-inevitable biological phenomenon, but its progress has been hastened by poor beekeeping practices.
Using Integrated Pest Management, and with strict adherence to instructions on approved treatments, the risks of encouraging resistant varroa will be minimised.
Vita’s varroa mite control products are Apistan and Apiguard. Despite the existence of varroa resistant to pyrethroids, Apistan is still effective in many areas. A simple test will show if you have pyrethroid-resistant varroa: if many mites fall during a 24-hour hive treatment with Apistan, any resistance present will be low-level, allowing good control with Apistan over a 6-week treatment period. Varroa resistant to Apiguard are unknown.
Vita’s research and development continues to investigate new varroa control treatments.
Currently Vita is focusing on two new acaricides, one of which is a near-natural product.
The stress and physical damage caused by varroa can be devastating, but their associated viral and bacterial infections are often the real cause of colony demise. Varroa acts as a vector for viruses while stress, such as poor foraging weather, lack of food, water and space, increases the vulnerability of bees to viruses and bacterial infections.
While there are currently no treatments known for viral disorders in honeybees it is possible to limit the effects of these diseases by controlling the mite populations and the contributory stress factors.
There are several key strategies for the effective control of varroa:
1. Monitoring the infestation level of the colony.
This will indicate whether the mite population is building up to levels that will harm the colony. It will also indicate if the current method of control is not proving effective.
2. Use a combination of methods.
The most effective control of varroa can be gained by using a combination of both biomechanical methods and chemical methods. These work in different ways and can be practised at different times of the year.
These are proven to work and to be safe for bee and the user. It is also important to follow manufacturers’ instructions. Incorrect use may result in residues in the hive products and it may promote the development of mite resistance.
4. Use essential oil or organic acid treatments with great care.
If legal to do so, in rotation with registered acaricide products in a concerted Integrated Pest Management strategy.
5. Use biomechanical methods.
Drone trapping and restricting queen movement can be useful diagnostic and secondary control measures.
6. Use a co-ordinated approach.
Developing a treatment programme with other beekeepers in the area will help reduce the likelihood of re-infestation.
Testicular cancer: Symptoms, diagnosis and treatment
What is testicular cancer?
Testicular cancer is a type of cancer that begins within the testicles. The cancer cells no longer follow normal growth patterns, multiplying uncontrollably. If untreated, the cancer can spread, which can be fatal.
This type of cancer starts within the cells of a testicle. The two testicles, or testes, are glands that produce male hormones and sperm. They hang beneath and behind a man's penis in a pouch of skin called the scrotum. The spermatic cord, composed of the sperm duct, nerves, and blood vessels, connects each testicle to the body.
Although testicular cancer is rare, it is the most common type of cancer in men aged 15 to 40.
The basics -- testicles
Testicular cancers begin in the testicles themselves. Testicular cancer may spread slowly or rapidly through the lymphatic system or blood vessels, depending on its type, but the path is consistent: Once the cancer cells are free to spread to nearby lymph or blood vessels, they could be carried to the lungs, to the liver, to the bones, and possibly to the brain.
Thanks to advances in diagnosis and treatment, testicular cancer is among the most curable of cancers, if detected early. Over 90% of patients are diagnosed with small, localised cancers that are highly treatable. Improved detection and treatment techniques have raised the overall five-year survival rate above 90%. Even if cancer has spread to nearby organs at diagnosis, patients still have a good chance of long-term survival.
What causes testicular cancer?
Doctors don't know why a man develops testicular cancer. However, doctors have found links between testicular cancer and other factors. These are described here.
Testicular cancer is more likely to occur in men who also had a condition called an undescended testicle (cryptorchidism). The testicles normally develop within the abdominal/pelvic cavity, and in most cases they migrate to the groin and scrotum prior to birth. With cryptorchidism, an abnormality within the testicle itself keeps the testicle from making its way into the scrotum. The undescended testicle then remains somewhere along the normal path, within the abdomen or groin.
Even if an undescended testicle is surgically brought down into the scrotum, it is still at greater risk of developing testicular cancer. However, the normal position allows for better and closer examination.
Testicular cancer is more common in those who have close relatives with the condition.
Men with fertility problems are more likely to be diagnosed with testicular cancer. All men with fertility problems should be checked for cancer of the testicle.
Other factors that increase the risk of testicular cancer include having HIV/AIDS, being Caucasian, and being taller in height than average. Men whose mothers had bleeding in pregnancy are also at greater risk of developing testicular cancer.
In the past it was thought that testicular injury and vasectomy increased the risk of testicular cancer but this is no longer believed to be true.
Non-cancerous growths in the testicle are rare, so it’s important that all masses are checked by a GP to determine if it is cancer or something else.
2002 Research Fair Archive - Chemistry Abstracts
Soil-Lead Contamination in Salt Lake Valley Play Areas
The United States Geological Survey has estimated that lead, a bluish-gray metal, occurs naturally in soil at a national geometric mean concentration of 16 ppm. Soil-lead concentrations may be elevated due to anthropogenic sources, primarily lead-based paint, leaded gasoline emissions, and point source emitters. In many regions, the elevated soil-lead levels are due to a combination of sources. Lead contamination of soil is cumulative; additional sources simply increase the extent of the contamination.
In light of the established threat of lead poisoning to children's health, this study explores whether lead soil contamination is present in child play areas in the Salt Lake Valley. Despite the renovation of many play areas, in which metal equipment was replaced with plastic counterparts, lead contamination may persist in the soil due to previous leaded gas emissions and/or dust from lead paint used on the removed equipment or surrounding buildings.
In this study, soil samples were analyzed from play areas located throughout the Valley. Specifically, high traffic areas of the play zone were targeted for sampling, such as the base of slippery slides or the home base of baseball diamonds, where children are particularly at risk for inhaling disturbed surface soil. Standards adopted by the Environmental Protection Agency were used to determine that soil from the play areas under study is contaminated with relatively low levels of lead, and thus do not pose a significant threat to children for health effects associated with lead poisoning.
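The USGS background value cited in the first abstract is a geometric mean, which damps the influence of a few high readings far more than an arithmetic mean does, a sensible choice for skewed environmental data. A short sketch with invented sample data:

```python
# How a geometric mean (the statistic cited for background soil lead)
# is computed, and why it damps the effect of a few high samples.
# The sample concentrations below are invented for illustration.

import math

def geometric_mean(values):
    """nth root of the product, computed stably via logarithms."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

samples_ppm = [8, 12, 16, 20, 160]          # one high outlier
gm = geometric_mean(samples_ppm)
am = sum(samples_ppm) / len(samples_ppm)
print(round(gm, 1), round(am, 1))           # the geometric mean sits well below the arithmetic mean
```

With these invented numbers the single 160 ppm reading drags the arithmetic mean to 43.2 while the geometric mean stays near 22, which is why national background levels are reported the geometric way.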
Selenium and the Great Salt Lake
Selenium is a trace element essential for life, but it is also toxic at relatively low concentrations. If present in water, selenium is known to bioaccumulate in aquatic flora and fauna resulting in the decimation of fish and bird populations. The seleniferous soils of the Western deserts combined with anthropogenic activities, e.g., copper mining, in the vicinity of the Great Salt Lake make it a likely repository for selenium. The presence of bird sanctuaries makes it necessary to monitor the concentration of selenium in this unique environment so that potential disasters may be avoided. In this study, we have analyzed selenium levels in water and brine shrimp collected from the Great Salt Lake, Utah, using fluorescence spectroscopy. In addition, we have investigated the role of high concentrations of NaCl on the extraction and proper quantification of Se using this analytical technique.
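Quantification by fluorescence spectroscopy, as used in this abstract, typically rests on a calibration curve fitted to known standards, with the fit inverted to read off an unknown. The least-squares sketch below uses invented standards and readings, not data from the study:

```python
# Minimal sketch of the calibration step behind fluorescence quantification:
# fit intensity vs. known standard concentrations, then invert the fit
# to estimate an unknown. Standards and readings are invented numbers.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

std_ppb   = [0, 5, 10, 20, 40]          # selenium standards (hypothetical)
intensity = [2, 52, 103, 201, 402]      # fluorescence readings (hypothetical)
slope, intercept = linear_fit(std_ppb, intensity)

unknown_intensity = 150
concentration = (unknown_intensity - intercept) / slope
print(round(concentration, 1))          # estimated Se concentration in ppb
```

A real analysis would also correct for matrix effects, which is precisely the concern the abstract raises about high NaCl concentrations interfering with extraction and quantification.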
The treatment initiative
Chapter 1 showed the magnitude of the threat posed by HIV/AIDS. This chapter describes the magnitude of the task of responding to it and explains how WHO and its partners are supporting countries in one of the most ambitious endeavours in the history of public health. A comprehensive approach to HIV/AIDS links prevention, treatment and long-term care and support. In much of the developing world, however, treatment has until very recently been the most neglected component. It now needs to be rapidly expanded, along with accelerated prevention efforts, in the countries hardest hit by the pandemic.
Since 1996, more than 20 million people in the developing world have died of AIDS. If antiretroviral therapy had been rapidly deployed, most of these people would probably be alive today. Despite mounting political pressure and evidence that AIDS treatment works in resource-poor settings, by late 2003 less than 7% of people in developing countries in urgent need of antiretroviral drugs were receiving them (see Figure 2.1). In September 2003, LEE Jong-wook, Director-General of WHO, joined Peter Piot, Executive Director of UNAIDS, and Richard Feachem, Director of the Global Fund to Fight AIDS, Tuberculosis and Malaria, to declare the lack of access to antiretroviral therapy a global health emergency. In response, WHO, UNAIDS and a wide range of partners launched the "Treat 3 million by 2005" initiative - known as 3 by 5. Treating 3 million people by the end of 2005 is a necessary target on the way to the goal of universal access to antiretroviral therapy for everyone who needs it.
To reach this goal, major obstacles must be overcome. With few exceptions, HIV/AIDS has struck hardest in countries whose health systems were already weak. Many countries working to expand HIV/AIDS treatment face significant deficits in areas such as health sector human resources, HIV counselling and testing, drug procurement and supply management, health information systems, and laboratory capacity (including the ability to monitor drug resistance).
Delivering the results called for under 3 by 5 will challenge countries' capacities and test the will of the global health community. But it is an essential task whose implications go far beyond the immediate aim of saving millions of lives in the coming years. It may also be the key to saving some of the world's most fragile health systems from further decline, and thereby offering whole societies a healthier future. Seen in this context, the 3 by 5 initiative is a vital opportunity to ensure that the new global resources flowing into HIV/AIDS are invested in ways that strengthen health systems for the long-term benefit of everyone.
This chapter examines public health, economic and social arguments for scaling up antiretroviral treatment in resource-poor settings. It then presents WHO's strategy for working with countries and partners, and provides an estimate of the global investment required. The opportunities and challenges facing countries are explored, highlighting the need to ensure that antiretroviral treatment reaches the poorest and most marginalized people. Finally, the chapter considers the wider importance of 3 by 5 as a new way of working across the global health community for improved health outcomes and equity.
WHO's commitment to support countries is guided by a broad assessment of resources and needs in global public health. Global investment in health has risen in recent years while many other sectors of international development assistance have stagnated, but the bulk of the new health investment is in HIV/AIDS. As the international agency charged with seeking the highest possible level of health for all people, WHO has the responsibility both to support expanded access to antiretroviral therapy and to work with countries and international partners to ensure that the new resources flowing into HIV/AIDS are invested so as to build sustainable health system capacities. Only an international public health agency can fulfil this technical cooperation and stewardship function. Health systems strengthening is the key both to sustainable provision of antiretroviral treatment and to reaching other public health objectives, including the health-related Millennium Development Goals and containment of the expanding epidemics of chronic diseases in the developing world.
Field Guide to Edible Wild Plants: Eastern and Central North America
Author(s): Peterson, L.
Publisher: Boston : Houghton Mifflin
Subject: Edible Plants
Comments: More than 370 edible wild plants, plus 37 poisonous look-alikes, are described here, with 400 drawings and 78 color photographs showing precisely how to recognize each species. Also included are habitat descriptions, lists of plants by season, and preparation instructions for 22 different food uses.
Last Updated: 2007-01-01
Fuel Cells: Problems and Solutions, 2nd Edition
The comprehensive, accessible introduction to fuel cells, their applications, and the challenges they pose
Fuel cells, electrochemical energy devices that produce electricity and heat, present a significant opportunity for cleaner, easier, and more practical energy. However, the excitement over fuel cells within the research community has led to such rapid innovation and development that it can be difficult for those not intimately familiar with the science involved to figure out exactly how this new technology can be used. Fuel Cells: Problems and Solutions, Second Edition addresses this issue head on, presenting the most important information about these remarkable power sources in an easy-to-understand way.
Comprising four important sections, the book explores:
The fundamentals of fuel cells, how they work, their history, and much more
The major types of fuel cells, including proton exchange membrane fuel cells (PEMFC), direct liquid fuel cells (DLFC), and many others
The scientific and engineering problems related to fuel cell technology
The commercialization of fuel cells, including a look at their uses around the world
Now in its second edition, this book features fully revised coverage of the modeling of fuel cells and small fuel cells for portable devices, and all-new chapters on the structural and wetting properties of fuel cell components, experimental methods for fuel cell stacks, and nonconventional design principles for fuel cells, bringing the content fully up to date.
Designed for advanced undergraduate and graduate students in engineering and chemistry programs, as well as professionals working in related fields, Fuel Cells is a compact and accessible introduction to the exciting world of fuel cells and why they matter.
American Heritage® Dictionary of the English Language, Fourth Edition
- adj. Of or belonging to the geologic time, system of rocks, or sedimentary deposits of the fourth period of the Paleozoic Era, characterized by the development of lobe-finned fishes, the appearance of amphibians and insects, and the first forests. See Table at geologic time.
- n. The Devonian Period or its system of deposits.
Century Dictionary and Cyclopedia
- Of or pertaining to Devonshire in England.
- The term was applied specifically, in geology, by Murchison to a great part of the Paleozoic strata of North and South Devon, and used by him as synonymous with Old Red Sandstone, for which term he substituted it, “because the strata of that age in Devonshire—lithologically very unlike the old red sandstone of Scotland, Hereford, and the South Welsh counties—contain a much more copious and rich fossil fauna, and were shown to occupy the same intermediate position between the Silurian and Carboniferous rocks.” Later geologists, however, do not use the terms as identical, the conditions under which the strata were deposited being very different.
- This term was first applied in geology by Sedgwick and Murchison to a series of rocks in North and South Devon and Cornwall in which fossils had been found which were recognized by Lonsdale as intermediate in character between Silurian and Carboniferous. The lower and upper limits of the formation were not defined in Britain, but were more precisely determined by the same geologists in the Rhineland. So uncertain, however, were the bounds assigned to the base of the formation that more recent study in various countries has added to the lower part of this system considerable beds that had before been assigned to the Silurian system. Strictly applied, the term Devonian implies the rocks bearing the marine faunas of that time and is contrasted with the Old Red Sandstone, which is a formation sometimes different lithologically and which represents the lake, lagoon, or delta deposits of the same age.
- n. A native or inhabitant of Devonshire.
- n. In geology, the Devonian series.
- adj. geology of a geologic period within the Paleozoic era; comprises lower, middle and upper epochs from about 415 to 360 million years ago
- adj. Of or pertaining to the English region of Devon.
- n. A native or inhabitant of the English region of Devon.
- n. geology the Devonian period
GNU Webster's 1913
- adj. (Geol.) Of or pertaining to Devon or Devonshire in England.
- n. The Devonian age or formation.
- n. from 405 million to 345 million years ago; preponderance of fishes and appearance of amphibians and ammonites
- After Devon, a county of southwest England. (American Heritage® Dictionary of the English Language, Fourth Edition)
“Now, this oil is found in what they call the Devonian limestone, and it is our first production from Devonian.”
“The bet was placed thousands of miles away in south-eastern United States, on a sandstone rock formation called Devonian shale.”
“Certain ancient strata, known as the Devonian black shale, occupying the Ohio valley and the neighbouring parts of North America to the east and north of that basin, appear to be accumulations which were made beneath an ancient Sargassum sea.”
“The other stowaway, whom I will call the Devonian -- it was noticeable that neither of them told his name -- had both been brought up and seen the world in a much smaller way.”
“The researchers suggest that low global oxygen levels during this period, known as the Devonian, may explain the evolution of air-gulping characteristics.”
“The thick Thunderhead Sandstone (Upper Precambrian Great Smoky Group) in the Great Smoky Mountains along the Tennessee/North Carolina border was deformed and regionally metamorphosed during formation of the Appalachian Highlands, beginning in the so-called Devonian (that is, early in the Flood year).12-14 With increasing temperatures and pressures from northwest to southeast, the regional metamorphism produced in these sandstone layers a series of chemically and mineralogically distinct zones of schists and gneisses.15 These zones are named according to the first appearance of the distinctive metamorphic minerals which characterize them as the intensity of the metamorphism increased laterally—the biotite, garnet, staurolite, and kyanite zones.”
“According to Mr H.B. Woodward (_History of the Geological Society of London_, 1907, p. 107) "Lonsdale's 'important and original suggestion of the existence of an intermediary type of Palæozoic fossils, since called Devonian,' led to a change which was then”
“Any year with a description of a Devonian anomalocarid is worth remembering.”
“* In order from oldest to youngest, the extinction events considered by biologists and paleontologists to be the most severe in the earth's history are the Ordovician – Silurian extinction event, the Late Devonian extinction, the Permian – Triassic extinction event, the Triassic – Jurassic extinction event, the Cretaceous – Tertiary extinction event, and the Quaternary-Holocene extinction event (currently underway).”
(WXYZ) - Friday morning at 6:11 a.m. local time, the season will officially change from Fall to Winter with the arrival of the Winter Solstice.
The solstice means the north pole is tipped as far away from the sun as it can go (23.5 degrees). Conversely, the south pole is tipped as far toward the sun as it can go.
The sun's direct rays drop as far south as the Tropic of Capricorn which is 23.5 degrees south of the equator.
All areas north of the Arctic Circle will be in darkness for 24 hours while south of the Antarctic Circle will have 24 hours of sunshine.
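The tilt geometry described above can be checked with a rough back-of-the-envelope model. The cosine formula below is a common textbook simplification of solar declination, not an astronomical-grade calculation, and the function name is illustrative:

```python
import math

def solar_declination(day_of_year):
    """Approximate solar declination in degrees (crude cosine model).

    Declination is the latitude at which the sun's rays strike the
    Earth directly; it swings between +23.44 (Tropic of Cancer) and
    -23.44 (Tropic of Capricorn) over the year.
    """
    return -23.44 * math.cos(2 * math.pi * (day_of_year + 10) / 365)

# December 21 is day 355 of a non-leap year: the declination comes out
# near -23.4 degrees, i.e. the sun is over the Tropic of Capricorn.
```

Plugging in day 172 (around June 21) gives roughly +23.4 degrees, the mirror-image summer solstice.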
In the northern hemisphere this is the shortest day of sunlight.
There is some discrepancy in certain places because of their location within a time zone, but it generally holds true.
Detroit's sunrise is 7:59 a.m. and sunset is 5:04 p.m. tomorrow, December 21st. The solstice is not always on December 21st.
The solstice can occur anytime between December 20th and December 23rd.
So, if you are a Winter lover tomorrow is your day! If you prefer a warmer time of year, take note that after tomorrow the days are only getting longer.
Copyright 2012 Scripps Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
Dominica: Mountain Chicken
Chytridiomycosis, a fungal skin condition, is threatening remaining populations of unique amphibian species in the Caribbean. One of the most cherished of these is the large 'Mountain chicken' that holds a special cultural importance in Dominica, but it is now Critically Endangered.
See the fantastic video of the mountain chicken tadpoles feeding on infertile cloacal eggs their mother is releasing for them:
Currently, the mountain chicken is only found in Dominica and Montserrat, and was the traditional national dish of Dominica before the chytrid fungus reached the Caribbean. The name mountain chicken comes from the fact that the frog's meat tastes like chicken; the frog actually lives mainly in the lowlands, not the mountains. Its importance to Dominican culture is also reflected by its inclusion in the national Coat of Arms.
Since chytrid arrived, the population of mountain chickens has plunged by 80%, and it is now critically endangered. As part of conservation work in Dominica and Montserrat, ZSL conservationists carried out a rescue expedition. They were able to track down seven of the frogs and remove them from their native habitat before they succumbed to chytrid. The animals are now kept at ZSL London Zoo.
Conservation Breeding at ZSL London Zoo
Mountain Chickens are one of BIAZA's 'Top ten species dependent on BIAZA zoos' for 2012.
The mountain chicken frogs at the zoo are one of only two groups from Dominica that have been taken into captivity. The other population of 12 frogs is held by a private collector in the United States.
Mountain chicken frogs breed by laying eggs in a foam-filled burrow. The mother stays near the burrow to feed the tadpoles with infertile eggs until they are ready to fend for themselves.
Mountain chickens had never before been bred at ZSL London Zoo, but this year we had breeding success. Housed in a bio-secure, temperature-controlled breeding unit at the Zoo, a female laid eggs in her self-made foam nest and guarded them closely as they developed into tadpoles. Demonstrating fantastic mothering skills, she then fed the tadpoles every three to five days with unfertilised eggs.
This success gives hope that the mountain chicken can be sustained in captivity, with a breeding population maintained until their habitat can be made suitable again and the risk of chytrid has diminished.
Find out the latest from this vital in-situ conservation project, from chytrid research to community building projects.
Dominica's very special amphibian species need protecting. Find out about the mountain chicken, Gounouj, Tink frog and Johnstone’s whistling frog.
Find out what you can do to help prevent the spread of Chytrid in Dominican amphibians through responsible behaviour and getting in contact.
Since the 1980s, chytridiomycosis has caused dramatic declines in amphibians globally. Dominica's amphibians have been severely affected.
This project’s aim is to build capacity within the Caribbean region to protect against the impacts of chytrid fungus in Dominican amphibians.
Follow ZSL’s amphibian experts as they investigate everything from the African pet trade to frogs and toads in our garden ponds.
Volume 16, Number 8—August 2010
West Nile Virus Range Expansion into British Columbia
In 2009, an expansion of West Nile virus (WNV) into the Canadian province of British Columbia was detected. Two locally acquired cases of infection in humans and 3 cases of infection in horses were detected by ELISA and plaque-reduction neutralization tests. Ten positive mosquito pools were detected by reverse transcription PCR. Most WNV activity in British Columbia in 2009 occurred in the hot and dry southern Okanagan Valley. Virus establishment and amplification in this region was likely facilitated by above average nightly temperatures and a rapid accumulation of degree-days in late summer. Estimated exposure dates for humans and initial detection of WNV-positive mosquitoes occurred concurrently with a late summer increase in Culex tarsalis mosquitoes (which spread western equine encephalitis) in the southern Okanagan Valley. The conditions present during this range expansion suggest that temperature and Cx. tarsalis mosquito abundance may be limiting factors for WNV transmission in this portion of the Pacific Northwest.
West Nile virus (WNV) is a vector-borne flavivirus that is transmitted in an enzootic cycle between birds by mosquitoes; incidental transmission to humans occurs during periods of intense amplification, typically in late summer in the Northern Hemisphere (1). WNV activity is inherently dependent on environmental and ecologic conditions that affect avian and vector populations because of the role these groups play in WNV transmission (2). Environmental factors such as temperature (3,4), precipitation (4), and drought (5), and ecologic conditions such as vector abundance (6) have been identified as possible determinants of WNV activity.
Canada represents the northernmost range of WNV in North America. The first positive WNV indicators appeared in Canada in 2001 when the virus was detected in birds and mosquitoes in Ontario (7). A total of 394 human cases occurred in Ontario and 20 in Quebec during 2002 (7). The virus quickly spread westward into the prairie provinces: 947 confirmed cases in Saskatchewan in 2003, of which 63 were West Nile neurologic syndrome (WNNS) (8), 144 in Manitoba (35 WNNS) (9), and 275 (48 WNNS) in Alberta (10). A second major outbreak occurred in Canada in 2007; a total of 1,456 (113 WNNS) cases were confirmed in Saskatchewan (8), 587 (72 WNNS) in Manitoba (9), and 318 (21 WNNS) in Alberta (10). Although most WNV activity has occurred in the southern parts of the country, the virus has been detected as far north as Meadow Lake, Saskatchewan (54°08′N) (11).
Despite this widespread activity, no local WNV transmission was detected in Canada’s westernmost province, British Columbia, during the WNV seasons (May to October) of 1999–2008 (7,11). The absence of WNV in British Columbia during this period puzzled provincial public health experts, who had been preparing for the virus’s arrival since 2002; some speculated that British Columbia did not contain the prerequisite environmental and ecologic conditions essential for WNV activity. However, in August 2009, a long-delayed range expansion of WNV into British Columbia was confirmed; 2 locally acquired cases in humans, 10 positive mosquito pools, and 3 cases in horses were detected by provincial surveillance.
The official arrival of WNV in British Columbia puts to rest the question of whether the province can sustain within-season WNV activity. However, new questions have been raised relating to the mechanism of viral introduction, the environmental conditions that limited previous WNV activity in the province, the focus of WNV activity in the southern Okanagan Valley, and whether British Columbia can sustain activity between seasons. We examined spatial and temporal patterns of WNV activity in British Columbia in relation to mosquito abundance and temperature conditions present during the observed range expansion of 2009. Our goal was to identify potential determinants of WNV activity along this portion of British Columbia’s northern and western ranges and to provide additional information regarding factors that facilitate the spread of WNV in North America.
Material and Methods
The province of British Columbia is an ecologically, climatically, and geomorphologically diverse area covering 947,000 km2 that contains a lengthy coastline, high mountain ranges, and a desert region (Figure 1). British Columbia has the most geological, climatic, and biological diversity in Canada (12). This province is dominated by vast regions of temperate forests in mountainous areas >1,000 m above sea level (13). Temperatures in the coastal regions of British Columbia are among the mildest in Canada; daily average temperatures are above freezing year-round (14). The coastal regions receive >1,100 mm of rain per year as moisture-laden air from the Pacific Ocean rises above the Coast Mountain Range, resulting in orographic precipitation. In contrast, the southern interior of the province is part of the semiarid steppe highlands ecoregion, which has near desert-like conditions including hot dry summers, cool winters, and average rainfall of 260 mm per year (14).
Provincial WNV Surveillance
The British Columbia Centre for Disease Control (BCCDC) and the BCCDC Public Health Microbiology and Reference Laboratory (PHMRL), in partnership with regional health authorities, municipalities, and regional governments, have conducted human surveillance, mosquito sampling, and dead corvid surveillance and testing since 2003. During the WNV seasons of 2003–2007, mosquito surveillance covered the southern extent of the province and extended as far north as 55°N latitude. However, in response to the prolonged absence of the virus, mosquito surveillance was reduced in 2008; mosquito traps were placed only at or below 50°N latitude (Table 1; Figure 1). An additional 16 traps were placed in the southern Okanagan Valley in 2009 as part of a research project to supplement the 91 traps operated by the province, effectively acting as targeted surveillance in this area. CDC light traps (Model 512; John W. Hock Company, Gainsville, FL, USA) baited with dry ice were run 1 or 2 nights per week from June through September.
Collected mosquitoes were sent to the BCCDC PHMRL where they were sorted by gender, identified to the genus and/or species level, and pooled to a maximum of 50 mosquitoes per pool. All pools of female Culex spp. mosquitoes were homogenized, and RNA was extracted by using a QIAamp Viral RNA Mini Kit (QIAGEN, Valencia, CA, USA). RNA extracts were subjected to an in-house–developed TaqMan real-time reverse transcription–PCR (RT-PCR) screening specific for the 3′ noncoding region and nonstructural protein 5 of the WNV genome. Positive pools were then confirmed by using a second TaqMan real-time RT-PCR specific for the WNV envelope protein (15,16).
Passive dead corvid surveillance in British Columbia is conducted by regional health authorities and includes 1) online reporting of dead bird sightings by the public, and/or 2) collection of dead corvids, which are then submitted for testing at the British Columbia Ministry of Agriculture and Lands Animal Health Centre (AHC). Oropharyngeal swabs from dead birds are screened for WNV by using the VecTest (Microgenics Corporation, Fremont, CA, USA); RT-PCR was used as the confirmatory test on pooled tissues from suspected positive birds (17).
WNV infection is a reportable disease in British Columbia, and information about probable human cases is communicated to the requesting physician and to public health officials; a case questionnaire is then administered to collect information on symptoms, travel history, and likely mode of transmission. Cases are classified as West Nile nonneurologic syndrome or WNNS according to the case definitions of the Public Health Agency of Canada (7). Cases are further categorized as probable or confirmed, depending on the level of specificity associated with the laboratory testing. All potential human case-patients are tested for WNV immunoglobulin M (IgM) and IgG by using ELISA (FOCUS Technologies, Cypress, CA, USA) and acute-phase and convalescent-phase serum samples; in-house hemagglutination inhibition (HI) tests are conducted when needed (16). Positive test results from the BCCDC are sent to the National Microbiology Laboratory in Winnipeg, Manitoba, Canada, for confirmatory plaque reduction neutralization testing.
Equine surveillance in British Columbia is passive. Positive equine cases are reported by local veterinarians to the AHC and the provincial chief veterinarian. WNV in horses is identified by using ELISA, serum neutralization, and/or plaque-reduction neutralization test. Horses suspected to have died of WNV are brought to the AHC for diagnostic necropsy. Although equine vaccinations are available in British Columbia, coverage is not widespread with the exception of horses that travel to the United States.
Temperature Analysis and Degree-Day Calculations
Development of WNV vectors and of the virus within an infected mosquito depends on temperature (3,4). Degree-day calculations use the product of temperature and time to estimate the cumulative energy required for an organism to develop (18). An estimated 109 degree-days are required for the completion of the extrinsic incubation period of WNV in Cx. tarsalis mosquitoes; the virus is unable to develop in this species at temperatures <14.3°C (3). We used the single-sign method (19) with a 14.3°C base to calculate the accumulated degree-days between January 1 and August 1 during 2003–2009 for select British Columbia communities. This method combines a 24-h sine wave with daily minimum and maximum thresholds to calculate the accumulated degree-days over 24 hours. The single-sine method provides the most accurate degree-day quantification when daily temperatures are below the minimum development threshold (20), and has been used in other WNV studies to estimate risk (21). Daily minimum temperature was evaluated for the southern Okanagan community of Osoyoos because it was the closest center to the focal point of WNV activity in British Columbia that also contains an official weather station. The daily minimum temperature for 2009 was compared with the 10-year average by using data from the Canadian National Weather Service, Environment Canada (14).
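The single-sine (Baskerville-Emin) accumulation described above can be sketched as follows. The 14.3°C base is taken from the text; the function name and the structure of the daily-record loop are illustrative, not taken from the BCCDC's actual tooling:

```python
import math

def single_sine_degree_days(t_min, t_max, base=14.3):
    """Degree-days above `base` for one day, single-sine method.

    Daily temperature is modeled as a sine wave between t_min and
    t_max; degree-days are the area of that wave above the base.
    """
    if t_max <= base:              # whole day below threshold
        return 0.0
    mean = (t_max + t_min) / 2.0   # midpoint of the sine wave
    amp = (t_max - t_min) / 2.0    # half the daily range
    if t_min >= base:              # whole day above threshold
        return mean - base
    # Threshold is crossed twice during the day; integrate only the
    # portion of the sine wave that lies above the base temperature.
    theta = math.asin((base - mean) / amp)
    return ((mean - base) * (math.pi / 2 - theta)
            + amp * math.cos(theta)) / math.pi

# Season total (e.g. January 1 - August 1) from daily (min, max) records:
# accumulated = sum(single_sine_degree_days(lo, hi) for lo, hi in daily_temps)
```

Note that a day crossing the threshold can contribute more than `mean - base` would suggest, because hours below the base contribute zero rather than negative degree-days.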
Provincial and Regional WNV Activity
In early August 2009, serum samples from 2 residents of Kelowna (49°55′N, 119°30′W) were positive for WNV (Table 1; Figure 1). Travel histories indicated that neither person had been outside of interior British Columbia during the period of potential exposure and that each had recently traveled in the southern Okanagan Valley, which is 70–80 km south of Kelowna (Figure 1). During the same week, provincial surveillance detected a positive mosquito pool; 9 more were detected over the subsequent 2 weeks. All positive pools came from the southern Okanagan Valley and were located up to 35 km apart. Three WNV-positive horses were reported to the chief veterinarian and the AHC in early September: 2 from the southern Okanagan Valley and 1 from the more eastern Fraser Valley (Figure 1). None of the horses had traveled during their exposure period.
With the exception of British Columbia, WNV activity in Canada in 2009 (only 8 human cases nationwide) was among the lowest recorded (7). Washington, however, had its greatest WNV activity on record in 2009 (38 cases in humans and 73 cases in horses), up from previous highs of 3 cases in humans and 41 cases in horses or other mammals in 2008 (22).
Mosquito Abundance and Infection Rates
A total of 181,942 mosquitoes were collected in 2009 from 107 traps (Table 1). The most common mosquitoes in British Columbia are Coquillettidia purturbans and members of the genus Aedes. British Columbia is the only area in western Canada that has Cx. tarsalis and Cx. pipiens mosquitoes; the former are rare east of the Mississippi River, and the latter are absent in the prairie provinces (23). However, the abundance of these species is typically lower than in the prairie provinces of Saskatchewan and Manitoba, which experience the most intense WNV transmission in Canada (8,9). Cx. pipiens mosquito abundance in the Fraser Valley in 2009 increased relative to previous years; an average of 36.1 mosquitoes were caught per trap-night. An average of 33.1 Cx. tarsalis mosquitoes were caught per trap-night in the provincial interior, which was the highest abundance of this species reported in the previous 5 years (Table 1). This average from the southern Okanagan Valley includes data from 16 novel traps placed as part of a targeted research project. However, the average provincial count was still the highest since 2006 when these traps were excluded (Table 1). Peaks in the abundance of Cx. tarsalis mosquitoes have been observed previously in British Columbia in late June, but a second substantial increase in the abundance of this species was observed in early August 2009. Several locations in the southern Okanagan Valley showed maximum nightly trap counts >800 Cx. tarsalis mosquitoes (Figure 2). The first WNV-positive mosquito pools were collected during this period of elevated Cx. tarsalis mosquito abundance; the estimated exposure period for both human cases also occurred at this time (Figure 2). Cx. pipiens mosquitoes consistently increased in the Fraser region throughout the summer; some traps caught up to 750 mosquitoes in a single night in 2009. 
Although the average trap catch of this species has been increasing continuously in this area since 2003, the abundance of WNV vectors in British Columbia remains much lower than in areas of Canada that have experienced large WNV outbreaks (8,9).
Cx. tarsalis was the only vector species in British Columbia that was positive for WNV in 2009. However, only Cx. tarsalis and Cx. pipiens mosquitoes are tested for WNV in British Columbia. Bias-corrected maximum-likelihood estimates (MLEs) of vector infection rates were calculated by using the Centers for Disease Control and Prevention’s (CDC’s) PooledInfRate Microsoft Excel add-on (24). The virus reached detectable levels in late July and peaked in the latter half of August (Table 2) with 2-week MLEs of mosquito infection rates reaching 4.97/1,000 (95% confidence interval [CI] 0.89–16.63) for the weeks of August 23, 2009, to September 5, 2009 (Table 2). Low mosquito abundance has, however, limited the precision of these estimates. Minimum infection rates are larger than comparable MLEs from July 26–August 8 and smaller than MLEs thereafter (Table 2), indicating that >1 infected mosquito may be present per pool as is common when infection rates are high (24).
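The relationship between the minimum infection rate (MIR) and the pooled MLE can be illustrated with a simplified calculation for equal pool sizes. This is a sketch only: the CDC PooledInfRate add-on additionally applies a bias correction and handles variable pool sizes, and the function names here are illustrative:

```python
def pooled_mle(n_pools, n_positive, pool_size):
    """MLE of the per-mosquito infection rate p from equal-size pools.

    A pool tests negative only if every mosquito in it is uninfected,
    so P(pool negative) = (1 - p) ** pool_size; inverting the observed
    fraction of negative pools gives the estimate of p.
    """
    negative_fraction = 1 - n_positive / n_pools
    return 1 - negative_fraction ** (1 / pool_size)

def minimum_infection_rate(n_positive, total_mosquitoes):
    """MIR: assumes exactly one infected mosquito per positive pool."""
    return n_positive / total_mosquitoes

# Hypothetical example: 100 pools of 50 mosquitoes, 2 pools positive.
mle = pooled_mle(100, 2, 50)               # ~0.000404, i.e. ~0.40 per 1,000
mir = minimum_infection_rate(2, 100 * 50)  # 0.0004, i.e. 0.40 per 1,000
```

With equal pool sizes the MLE is always at least as large as the MIR, and the two diverge as infection rates rise, since a positive pool becomes increasingly likely to contain more than one infected mosquito.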
A total of 6,681 corvids were tested for WNV during 2003–2009; none were positive (Table 1). The decreasing number of dead corvids reported since 2006 likely results from a combination of changes to regional surveillance strategies, decreases in the frequency of education campaigns, and changing public perception given the prolonged absence of the virus. We do not believe that the observed decrease resulted from a die-off of WNV-infected birds.
More degree-days were accumulated in 2009 for most locations in the province, including Osoyoos, than in any year since 2003–2004 (Table 3). Daily minimum temperatures in the winter and spring in Osoyoos in 2009 were below the 10-year average yet quickly increased in late May and early June and remained above the 10-year-average for much of the summer (Figure 3). The average minimum temperature in July 2009 in Osoyoos (15.5°C) was nearly a full degree higher than the 20-year average; average minimum temperatures in August (15.3°C) were the highest seen in 20 years (14). Maximum temperatures reached 34.9°C, 38.6°C, and 39.5°C in June, July, and August, respectively (14).
The delayed establishment of WNV in British Columbia may stem from 1) limited or failed introduction of the virus from adjacent areas with WNV activity before 2009, and/or 2) previously unsuitable environmental or ecologic conditions that prevented establishment, persistence, and amplification of WNV to detectable levels. Although both factors have contributed to the delayed establishment of WNV in British Columbia, they should be clearly separated because virus introduction and persistence are unique events (25). The comparative role of these processes in explaining the prolonged absence of WNV in British Columbia is difficult to determine, but the timing and location of British Columbia’s initial WNV activity do provide clues as to potential drivers of this range expansion.
WNV activity in British Columbia in 2009 was centered in the south-central part of the province in the southern Okanagan Valley (Figure 1). Provincial risk maps created by the BCCDC identified this area as having relatively high WNV risk for reasons other than its proximity to the United States. WNV activity is negatively associated with mountainous landscapes (26), and the Okanagan Valley is one of the few nonmountainous areas in southern British Columbia. Valleys may act as paths of least resistance for local vector- or reservoir-mediated introduction of the virus into British Columbia from Washington (27,28) or by migrating birds along the Pacific Flyway (29). The southern Okanagan Valley also contains abundant irrigated landscapes that are clustered along with human habitation near the rivers and lakes in the southern Okanagan Valley. This aggregation of favorable habitats brings vectors, reservoirs, and humans into close proximity and may facilitate virus amplification and transmission to humans and horses (30).
Not only does the southern Okanagan Valley contain favorable habitats, but it also has a climate that, unlike much of British Columbia, is favorable for WNV amplification and transmission. The southern Okanagan Valley is the hottest region in British Columbia during the summer months. Temperature is positively related to mosquito development rates and the frequency with which mosquitoes take blood meals (31). Rates of virus development within mosquito vectors also increase with temperature, and such relationships have consequences for disease transmission because failure of the virus to replicate before mosquito death can halt virus amplification (3).
Temperatures in 2009 were above average for much of British Columbia, including the southern Okanagan Valley. WNV outbreaks in the United States and Canada have occurred primarily during years with above-normal temperatures (7,32), and a recent case-crossover study (an epidemiologic study design in which each case serves as its own control, allowing comparison of exposure at the time of disease onset to exposure at another time point) of 16,298 WNV cases in the United States showed that a 5°C increase in mean maximum weekly temperature is associated with a 32%–50% increase in WNV incidence (4). The first positive mosquito pool in the southern Okanagan Valley was detected ≈1 week after heavy rainfall and immediately after a period of extreme heat during which nightly temperatures were well above the 14.3°C limit for virus replication in Cx. tarsalis mosquitoes (3) (Figure 3). This rainfall likely increased the number of vector development sites; the ensuing period of high temperatures facilitated rapid mosquito development, virus amplification, and subsequent transmission in avian and mosquito populations.
The above-average abundance of Cx. tarsalis mosquitoes observed in 2009 is likely another key driver of the observed WNV range expansion (Table 1). Cx. tarsalis mosquitoes are bridge vectors that feed on birds and mammals (33). The elevated abundance of this species in 2009, especially the large peaks observed at the end of July and beginning of August (Figure 2), likely facilitated virus transmission from avian populations into humans and horses. However, Cx. tarsalis mosquitoes are much less common in British Columbia compared with other areas of Canada that experience large WNV outbreaks (8,9); this rarity may be 1 factor that has prevented past WNV activity in this region. Little is currently known about the ecology of Cx. tarsalis mosquitoes in the southern Okanagan Valley, and specific information is needed regarding the habitat preferences and overwintering practices of this species to enable more focused prevention efforts.
The detection of WNV in British Columbia in 2009 proves that the southern portion of the province contains the prerequisite environmental and ecologic conditions for within-season WNV amplification and transmission, at least in some years. What is less certain is whether the observed range expansions along the virus’s northern limit will lead to yearly endemic activity or to rare instances of sporadic disease as is typical in Washington State. WNV can overwinter in adult mosquitoes (34), thereby increasing the probability of future virus transmission in areas that had positive WNV indicators in 2009. Historic trends in some areas of the United States show marked increases in outbreak severity in the year after WNV introduction (32). Furthermore, the presence of a WNV-positive horse in the Fraser Valley is concerning given its proximity to British Columbia’s populated urban areas where Cx. pipiens mosquitoes are a consistent presence between June and August. Urban transmission of WNV in British Columbia in 2010 could lead to an increase in human cases and identifies a need for continued surveillance programs and appropriate prevention.
Outbreaks in human populations do require specific sequential weather conditions that may not be met in 2010 despite predictions for an El Niño year (35). In addition, the historically low abundance of key WNV vectors in British Columbia may limit WNV transmission in this region, and a return to provincial norms for Cx. tarsalis mosquito abundance could disrupt WNV transmission in the rural areas of the southern Okanagan Valley. St. Louis encephalitis virus, an arbovirus that shares vectors and reservoirs with WNV, was detected in southern British Columbia in mammals and humans in 1968 (36). Yet St. Louis encephalitis virus has caused no locally acquired human cases since, which indicates that arboviruses can circulate endemically in animal populations in the area without resulting in human cases.
In summary, the introduction and within-season amplification of WNV in 2009 represent a long-delayed range expansion. Although reasons for the delay remain unknown, we hypothesize that WNV activity in Washington State in 2009 provided, for the first time, a sufficient nearby source of WNV for northward introduction of the virus into British Columbia through cross-border mountain valleys. This introduction likely combined with uniquely warm nightly temperatures and elevated numbers of Cx. tarsalis mosquitoes in the southern Okanagan Valley; this combination of factors presented a convergence of favorable events that facilitated establishment and amplification in mosquito and avian populations. WNV activity levels in British Columbia in 2010 will provide valuable insight into the nature of WNV expansion and transmission along British Columbia’s northern and western borders. The presence of WNV activity in 2010, despite a return to normal temperatures and vector abundance, would suggest that ineffective virus introduction may be responsible for the prolonged absence of WNV in the province. Conversely, a return to normal temperatures and vector abundance combined with a lack of WNV activity in 2010 would suggest that environmental and ecologic conditions in this part of the Pacific Northwest are typically unsuitable for yearly WNV establishment, amplification, and transmission. Regardless, surveillance and ongoing consideration of appropriate prevention strategies are required to lessen the possibility of future WNV transmission to human populations in the region.
Mr Roth is a PhD candidate at the School of Population and Public Health at the University of British Columbia, Vancouver, Canada. He also works with the British Columbia Centre for Disease Control Vector Borne and Emerging Zoonotic Pathogens Team. His research interests focus on the effects of environment and ecology on the transmission and control of vector-borne and zoonotic diseases.
We thank British Columbia’s regional health authorities, municipalities, regional governments, and the Osoyoos Indian Band for their help with WNV surveillance in the province; the Parasitology and Zoonotic and Emerging Pathogen Sections of BCCDC PHMRL for their role in morphologic and molecular identification of samples submitted for WNV testing; the Ministry of Agriculture and Lands and the Animal Health Centre for their help in WNV testing of collected birds; the National Microbiology Laboratory in Winnipeg, Manitoba, Canada, for its help with confirmatory human testing; and Phil Curry for his valuable input on WNV surveillance and control in Canada.
Partial funding for mosquito surveillance on First Nations lands in the southern Okanagan Valley was provided by the First Nations and Inuit Health branch of Health Canada and the BC First Nations Environmental Contaminants Program. Personal funding for D.R. came from a Canadian graduate scholarship from the Canadian Institute for Health Research (CIHR) and a CIHR/Michael Smith Foundation for Health Research Bridge Program Fellowship.
- Campbell GL, Marfin AA, Lanciotti RS, Gubler DJ. West Nile virus. Lancet Infect Dis. 2002;2:519–29.
- Eisenberg JNS, Desai MA, Levy K, Bates SJ, Liang S, Naumoff K, et al. Environmental determinants of infectious disease: a framework for tracking causal links and guiding public health research. Environ Health Perspect. 2007;115:1216–23.
- Reisen WK, Fang Y, Martinez VM. Effects of temperature on the transmission of West Nile virus by Culex tarsalis (Diptera: Culicidae). J Med Entomol. 2006;43:309–17.
- Soverow JE, Wellenius GA, Fisman DN, Mittleman MA. Infectious disease in a warming world: how weather influenced West Nile virus in the United States (2001–2005). Environ Health Perspect. 2009;117:1049–52.
- Landesman WJ, Allan BF, Langerhans RB, Knight TM, Chase JM. Inter-annual associations between precipitation and human incidence of West Nile virus in the United States. Vector Borne Zoonotic Dis. 2007;7:337–43.
- Kilpatrick AM, Kramer LD, Campbell SR, Alleyne EO, Dobson AP, Daszak P. West Nile virus risk assessment and the bridge vector paradigm. Emerg Infect Dis. 2005;11:425–9.
- Public Health Agency of Canada. West Nile virus monitor [cited 2009 Dec 14]. http://www.phac-aspc.gc.ca/index-eng.php
- Government of Saskatchewan. West Nile virus: surveillance results [cited 2010 Mar 23]. http://www.health.gov.sk.ca/wnv-surveillance-results
- Manitoba Health. West Nile virus: surveillance statistics [cited 2010 Mar 23]. http://www.gov.mb.ca/health/wnv/stats.html
- Government of Alberta. Health and wellness: West Nile virus—surveillance evidence in Alberta [cited 2010 Mar 23]. http://www.health.alberta.ca/health-info/WNv-evidence.html
- British Columbia Centre for Disease Control. West Nile virus activity in British Columbia: 2009 surveillance program results [cited 2009 March 1]. http://www.bccdc.ca/NR/rdonlyres/73AB78E6-6D61-454C-8113-512A99A59B1E/0/WNVSurveillanceresults2009v2.pdf
- Farley AL. Atlas of British Columbia. Vancouver (Canada): University of British Columbia Press; 1979. p. 30.
- Campbell RW, Branch B. The birds of British Columbia. Vancouver (Canada): University of British Columbia Press; 1990. p. 55.
- Government of Canada. Canada’s national climate archive [cited 2009 Dec 14]. http://www.climate.weatheroffice.gc.ca
- Eisler DL, McNabb A, Jorgensen DR, Isaac-Renton JL. Use of an internal positive control in a multiplex reverse transcription-PCR to detect West Nile virus RNA in mosquito pools. J Clin Microbiol. 2004;42:841–3.
- Lanciotti RS, Kerst AJ, Nasci RS, Godsey MS, Mitchell CJ, Savage HM, et al. Rapid detection of West Nile virus from human clinical specimens, field-collected mosquitoes, and avian samples by a TaqMan reverse transcriptase-PCR assay. J Clin Microbiol. 2000;38:4066–71.
- Stone WB, Okoniewski JC, Therrien JE, Kramer LD, Kauffman EB, Eidson M. VecTest as diagnostic and surveillance tool for West Nile virus in dead birds. Emerg Infect Dis. 2004;10:2175–81.
- Wilson LT, Barnett WW. Degree-days: an aid in crop and pest management. Calif Agric. 1983;37:4–7.
- Allen JC. A modified sine wave method for calculating degree-days. Environ Entomol. 1976;5:388–96.
- Pruess KP. Day-degree methods for pest management. Environ Entomol. 1983;12:613–9.
- Zou L, Miller SN, Schmidtmann ET. A GIS tool to estimate West Nile virus risk based on a degree-day model. Environ Monit Assess. 2007;129:413–20.
- Washington State Department of Health. West Nile virus in Washington [cited 2009 May 12]. http://www.doh.wa.gov/ehp/ts/zoo/wnv/Surveillance09.html
- Conly JM, Johnston BL. Why the west in West Nile virus infections? Can J Infect Dis Med Microbiol. 2007;18:285–8.
- Biggerstaff BJ. PooledInfRate: a Microsoft Excel add-in to compute prevalence estimates from pooled samples. Fort Collins (CO): Centers for Disease Control and Prevention; 2006 [cited 2009 Dec 14]. http://www.cdc.gov/ncidod/dvbid/westnile/software.htm
- Hudson P, Perkins S, Cattadori I. The emergence of wildlife disease and the application of ecology. In: Ostfeld R, Keesing F, Eviner VT, editors. Infectious disease ecology: the effect of ecosystems on disease and of disease on ecosystems. Princeton (NJ): Princeton University Press; 2008. p. 347–67.
- Gibbs SEJ, Wimberly MC, Madden M, Masour J, Yabsley MY, Stallknecht DE. Factors affecting the geographic distribution of West Nile virus in Georgia, USA: 2002–2004. Vector Borne Zoonotic Dis. 2006;6:73–82.
- Bailey SF, Eliason DA, Hoffmann BL. Flight and dispersal of the mosquito Culex tarsalis coquillett in the Sacramento Valley of California. Hilgardia. 1965;37:73–113.
- Rappole JH, Compton BW, Leimgruber P, Robertson J, King DI, Renner SC. Modeling movement of West Nile virus in the Western Hemisphere. Vector Borne Zoonotic Dis. 2006;6:128–39.
- Rappole JH, Derrickson SR, Hubalek Z. Migratory birds and West Nile virus. J Appl Microbiol. 2003;94(Suppl):47S–58S.
- Shaman J, Day JF, Stieglitz M. Drought-induced amplification and epidemic transmission of West Nile virus in southern Florida. J Med Entomol. 2005;42:134–41.
- Becker N. Influence of climate change on mosquito development and mosquito-borne diseases in Europe. Parasitol Res. 2008;103:19–28.
- Reisen W, Brault AC. West Nile virus in North America: perspectives on epidemiology and intervention. Pest Manag Sci. 2007;63:641–6.
- Kent R, Juliusson L, Weissmann M, Evans S, Komar N. Seasonal blood-feeding behavior of Culex tarsalis (Diptera: Culicidae) in Weld County, Colorado, 2007. J Med Entomol. 2009;46:380–90.
- Nasci RS, Savage HM, White DJ, Miller JR, Cropp BC, Godsey MS, et al. West Nile virus in overwintering Culex mosquitoes, New York City, 2000. Emerg Infect Dis. 2001;7:742–4.
- National Weather Service. Climate Prediction Center. El Niño/southern oscillation (ENSO) diagnostic discussion [cited 2009 Dec 14]. http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_advisory/index.shtml
- McLean DM, Chernesky MA, Chernesky SJ, Goddard EJ, Ladyman SR, Peers RR, et al. Arbovirus prevalence in the East Kootenay region, 1968. Can Med Assoc J. 1969;100:320–6.
Suggested citation for this article: Roth D, Henry B, Mak S, Fraser M, Taylor M, Li M, et al. West Nile virus range expansion into British Columbia. Emerg Infect Dis [serial on the Internet]. 2010 Aug [date cited]. http://wwwnc.cdc.gov/eid/article/16/8/10-0483.htm
1Members of the British Columbia WNV Surveillance Team: Lucy Beck, Victoria Bowes, Elizabeth Brodkin, Steve Chong, Ken Christian, Dalton Cross, Murray Fyfe, Roland Guasparini, Paul Hasselback, Randy Heilbron, Mira Leslie, James Lu, Craig Nowakowski, Robert Parker, Tim Shum, Kevin Touchet, and Eric Young.
The compact disk, commonly known as the CD, is an optical disk used to store digital data. It was originally developed to hold only audio, but it was later adapted to store video, software, text, and graphics. The compact disk carries a transparent coating that allows information to be read from it by a laser beam. A compact disk can not only deliver information but, in recordable formats, take it in as well. Compact disks store large amounts of data by packing it densely on the disk's surface.
Compact disks come in different sizes and formats.
One such format is the compact disk read-only memory, commonly known as the CD-ROM. It is about twelve centimeters in diameter. Data on a CD-ROM can be read, but it cannot be erased, and additional data cannot be added. Many kinds of content can be stored on a CD-ROM, such as audio or video. A disk can hold as much as 900 megabytes of data, equivalent to 99 minutes of audio or video.
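The 99-minute capacity quoted above follows roughly from the disk's sector arithmetic: a CD is read at 75 sectors per second, and in the common Mode 1 data format each sector carries 2,048 bytes of user data. A back-of-the-envelope sketch (actual capacities vary by format and error-correction mode):

```python
# Rough CD-ROM capacity from sector arithmetic (Mode 1 data sectors).
SECTORS_PER_SECOND = 75       # standard CD playback rate
DATA_BYTES_PER_SECTOR = 2048  # Mode 1 user data per sector

def capacity_megabytes(minutes):
    """Approximate user-data capacity of a disk with the given playing time."""
    sectors = minutes * 60 * SECTORS_PER_SECOND
    return sectors * DATA_BYTES_PER_SECTOR / 1_000_000

print(round(capacity_megabytes(99)))  # -> 912, close to the ~900 MB quoted above
print(round(capacity_megabytes(74)))  # -> 682, the classic 650-700 MB disk
```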
Next is the Digital Video Disk, or DVD for short. A DVD can hold 4.7 gigabytes on a single layer on one side of the optical disk. It has many uses, such as storing video: insert one into a DVD player and it plays back. Some DVDs can also record video; a DVD-R can record video only once, and after the first recording the data cannot be erased or changed. A DVD-RW, by contrast, can be erased and rewritten.
Then there is the Universal Media Disk, or UMD. It is about 6 centimeters in diameter and differs from other optical disks in that it is enclosed in a case with a protective shutter. It can hold movies, music, games, and even TV shows, and it is designed to be played on the PlayStation Portable, a handheld video game system. It is a read-only disk, so its data cannot be erased or rewritten. It can store 900 megabytes on a single layer or 1.8 gigabytes on dual layers, and it uses the MPEG-4 AVC codec to play videos.
Finally there is Blu-ray, an optical disk that can hold more information than a standard DVD. The disk is the same size as standard DVDs and CD-ROMs. Blu-ray uses a shorter laser wavelength, which is why it can hold more information than a standard DVD, and it can play high-definition video. Blu-ray gets its name from its use of a blue-violet laser rather than the red laser commonly used for DVDs. Over time, manufacturers have succeeded in increasing Blu-ray's capacity, allowing still more information to be stored on the optical disk.
One benefit of owning compact disks is that they take up less space than Video Home System (VHS) tapes. Compact disks also offer better quality than other media such as videotapes. Unlike a videotape, whose quality degrades every time it is played, a CD's quality remains the same no matter how many times it has been played. CDs are also cheaper to produce than ROM cartridges; even though CD-ROMs have longer load times than ROM cartridges, they can hold far more data. CDs are easy to use as well: put one in an appropriate media player and the media will usually play automatically, depending on its format.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
July 5, 1999
Explanation: As Mars rotates, most of its surface becomes visible. During Earth's recent pass between Mars and the Sun, the Hubble Space Telescope was able to capture the most detailed time-lapse pictures of Mars ever taken from the Earth. Dark and light sand and gravel create an unusual blotted appearance for the red planet. Winds cause sand-tinted features on the Martian surface to shift over time. Visible in the above pictures are the north polar cap, made of water ice and dry ice, clouds including an unusual cyclone, and huge volcanoes left over from ancient times. The Mars Global Surveyor satellite orbiting Mars continues to scan the surface for good places to land future robot explorers.
Authors & editors:
Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
& Michigan Tech. U.
Determination of melting point
The melting point is determined in a capillary tube. The expression "melts about..." means that the temperature at which the substance is completely melted, as indicated by the disappearance of the solid, will be in the range ± 4°C from the stated value, unless otherwise indicated.
Details of the procedure
The following technique is adequate for the determination of melting point:
Grind about 50 mg of the substance to be tested in a small mortar. Place the ground substance in a vacuum desiccator over silica gel or phosphorus pentoxide at room temperature and dry for about 24 hours (unless another drying procedure is given in the test sheet). Place the substance in a dry capillary tube of 1-mm internal diameter forming a column about 3 mm high. Heat the melting-point apparatus to a temperature 5-10°C below the expected temperature of melting and adjust the heating so that the temperature in the chamber rises about 1°C per minute. Introduce the capillary with the substance into the heated chamber and note the temperature when the sintered substance becomes completely transparent; this is considered to be the melting point.
The difference between the purely theoretical definition of the melting temperature and the results obtained in practice is now widely recognized. A precise physical definition exists only for the so-called triple point, i.e., the temperature at which all three phases (solid, liquid and gaseous) are in equilibrium. The measurement of the triple point is achieved in a highly complicated experiment. Many compendia do not use this temperature, but describe melting intervals as observed in practice, when the formation of droplets, the softening of the substance or its sintering are considered to be the beginning of the melting process, while the formation of a clear and transparent drop of liquid is taken to be the end of the melting process.
In the case of pure substances that melt without decomposition, the beginning of melting can be observed with some certainty. For impure substances, the beginning of the melting process will vary, depending on the nature of the impurities. Therefore it has been proposed that in the basic tests the following definition of melting point be used. This definition is similar to that used in The International Pharmacopoeia, to describe melting temperature:
The melting point denotes the temperature at which the substance has just completely melted; this is indicated by the disappearance of the solid phase and complete transparency of the melt.
This approach has the disadvantage that, if impurities are present, their presence can only be deduced from the lowering of the melting-point value, as no observation is made of the melting interval. An increase in the latter usually indicates low purity of a substance. These considerations, however, have less importance for basic test identification, where this disadvantage is fully offset by increased reproducibility of the values of melting point determined according to the above procedure.
The expression "melting behaviour" used in the basic tests denotes the melting point of substances that melt with decomposition. It is also used for melting points above 250°C to indicate that the reproducibility of the value may be low.
It is necessary to bear in mind that a difference exists between true melting points (or melting ranges) and the temperature of decomposition. Ideally, in the case of a true melting point, no chemical change occurs in the substance. However, when some substances are heated, decomposition takes place either before or during the process of melting, being indicated by a change in the colour of the substance or by the evolution of a gas. In such situations, the observed temperature of melting is not a true melting point of the substance but the melting point of a mixture with decomposition products. It is obvious that the temperature of decomposition cannot be considered as a physical property of a substance as the amount of decomposition products, and consequently the temperature of decomposition, depend on the length of the period of heating and therefore have low reproducibility, even if a standardized procedure is used.
Determination of eutectic temperature
Eutectic temperature is given as a single value only and designates the beginning of melting, i.e., the temperature at which the solid collapses or forms drops on the wall of the capillary tube. The mixture to be used in the test is usually prepared by thorough mixing of approximately equal parts of the test substance and the accessory substance, unless the use of strictly equal amounts of both substances is specially indicated in the test procedure.
Details of the procedure
The following technique is adequate for the determination of eutectic temperature:
Grind equal parts (by weight) of the substance to be tested and the accessory substance, both of them previously dried for about 24 hours at room temperature in a vacuum desiccator over silica gel or phosphorus pentoxide. Fill a dry capillary tube, of 1-mm internal diameter, with the mixture, forming a column about 3 mm high. Heat the melting-point apparatus to a temperature 5-10°C below the expected temperature of melting and adjust the heating so that the temperature in the chamber rises about 1°C per minute. Introduce the capillary with the mixture into the heated chamber, and note the temperature at which the solid collapses or forms droplets on the wall of the capillary tube.
The measurement of eutectic temperature has been introduced in the basic tests as an additional criterion of identity. An exact determination of the eutectic melting point requires a set of measurements carried out on mixtures prepared in different ratios. The eutectic melting point thus measured is thermodynamically exactly defined and may be used as a criterion of both identity and purity. Such a procedure is not, however, practical for the basic test project, as it requires a long time and adequate laboratory facilities. For the purpose of basic tests, the determination is carried out at a constant ratio of 1:1. However, this has the disadvantage that in some cases the melt will not become transparent, so that the reproducibility of the measurement is low owing to individual errors. Nevertheless, the eutectic temperatures given in the basic tests are usually reproducible to within ±5°C.
It should be noted that during eutectic temperature determination the beginning of the process of melting is observed, whereas during melting-point determination it is the end of the process that is surveyed.
Mixed melting point
The determination of a mixed melting point is carried out in a glass capillary as described under "Determination of melting point", page 6. Equal amounts of the substance to be tested and the authentic substance are mixed and placed in a capillary. A separate capillary is filled with the substance to be tested and a further capillary with the authentic substance. All three capillaries are simultaneously heated in the melting-point apparatus. The melting point of the mixture should not differ by more than ±4°C from the melting points of the single substances.
Although mixed melting-point determinations are not included in the basic tests, this procedure is a highly reliable criterion in deciding whether two substances are identical. The general introduction of mixed melting-point determination as an identity test would require a wide accessibility of appropriate reference substances, which can sometimes only be arranged on a national basis. Each laboratory can, however, gradually create for itself a collection of authentic substances from incoming consignments of materials of good quality and can then use the mixed melting point as a strong additional criterion of identity. Such a collection, once established, may further be used in identity tests using the thin-layer chromatography technique.
Type of apparatus
A number of types of melting-point apparatus are produced. A review of those that are commercially available is given by Büchi & Hasler.a
a Büchi, J. & Hasler, C. Pharmaceutica acta Helvetiae, 49:47 (1974).
The apparatus employed in the determination should be equipped with a magnifying glass, have a controlled heating arrangement that permits a heating rate of 1-2°C/min around the temperature of melting, and be equipped to be used with capillaries of 1-mm inner diameter.
The heating arrangement can take the form of a stirred bath, such as the Thiele apparatus and its modifications,b or a heated block, e.g., the Lindström or Culatti modifications.c
b Skan, E.L. & Arthur, J.C. Jr. In: Weissberger, A., ed. Technique of organic chemistry, New York, Interscience, 1971, Vol. 1, p. 105.
c Kienitz, H. In: Houben-Weyl, Methoden der organischen Chemie, Stuttgart, Georg Thieme Verlag, 1953, Vol. 2, p. 788.
Calibration of thermometers
For the various measurements of melting characteristics to be of any value, it is essential to use accurate thermometers. The thermometer used should preferably be certified by a duly recognized body. Alternatively, it could be calibrated against such a thermometer. Another method of checking the accuracy of the thermometer is by measuring the melting points of a set of WHO melting-point reference substances using a 1-mm capillary; if the observed melting points of the reference substances lie within ±2°C of the melting temperature indicated for that substance, the thermometer may be considered satisfactory. An important requisite, however, is that the geometrical arrangement of the thermometer and capillaries in the apparatus is practically identical in every determination. The length of the column of mercury in the thermometer exposed to room temperature can introduce significant error particularly at high temperature. It is therefore desirable to use thermometers with narrow ranges of temperature such as 0-110°C, 110-210°C or 200-300°C. If this is not possible, a correction factor should be introduced according to the formula given in The International Pharmacopoeia, third edition (volume 1, page 22).
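The stem-correction factor mentioned above can be sketched numerically. This is a minimal illustration using the common emergent-stem correction for mercury-in-glass thermometers (apparent expansion coefficient 0.00016 per °C); the reading and column values below are hypothetical, and the exact formula to apply should be taken from The International Pharmacopoeia itself.

```python
# Emergent-stem correction for a mercury-in-glass thermometer (hypothetical values).
# correction = k * n * (t_observed - t_stem), where
#   k      = 0.00016 per C (apparent expansion of mercury in glass),
#   n      = number of degrees of mercury column emergent from the heated zone,
#   t_stem = mean temperature of the emergent column.
K_MERCURY_GLASS = 0.00016

def stem_correction(t_observed, n_emergent_degrees, t_stem):
    return K_MERCURY_GLASS * n_emergent_degrees * (t_observed - t_stem)

# Hypothetical reading: 250 C observed, 150 degrees of emergent column, stem at 35 C.
corr = stem_correction(250.0, 150, 35.0)
print(round(corr, 1))          # -> 5.2
print(round(250.0 + corr, 1))  # -> 255.2 (corrected temperature)
```

The size of the correction grows with both the emergent column length and the temperature difference, which is why narrow-range thermometers, with little exposed mercury, are preferred for high-temperature determinations.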
The expression "heating behaviour" used in the basic tests denotes the behaviour of the substance (such as colour changes or evolution of gas) when heated in an open test-tube in a flame or in an electrical heater.
Make Whey for Progress
New Uses for Dairy Byproducts
Food technologist Charles Onwulata inspects molded dairy bioplastic made from surplus whey proteins.
The average American consumes more than 30 pounds of cheese every year. Every pound produced creates an estimated 9 pounds of whey, the liquid byproduct that remains after the curds, or solids, coagulate.
Where does all the whey go? It’s used in a range of products such as candy, pasta, baked goods, animal feed—and even pharmaceuticals.
Since its inception, ARS’s Dairy Processing and Products Research Unit at the Eastern Regional Research Center (ERRC) in Wyndmoor, Pennsylvania, has investigated uses for whey and other dairy byproducts. Today, thanks in part to ERRC research, cheesemakers have markets for over 1 billion pounds of whey every year.
New research shows that whey can also be used to create eco-friendly products. For example, using a process called “reactive extrusion,” food technologist Charles Onwulata supplements polyethylene—a common nonbiodegradable plastic—with whey proteins.
Reactive extrusion involves forcing plastic material through a heating chamber, where it melts and combines with a chemical agent that strengthens it before it’s molded into a new shape. Onwulata showed that by combining dairy proteins with starch during this process, it’s possible to create a biodegradable plastic product that can be mixed with polyethylene and molded into utensils.
Working with laboratory chief Seiichiro Isobe, of the Japanese National Food Research Institute, Onwulata created a bioplastic blend by combining whey protein isolate, cornstarch, glycerol, cellulose fiber, acetic acid, and the milk protein casein and molded the material into cups. Onwulata observed that dairy-based bioplastics were more pliable than other bioplastics, making them easier to mold.
Bioplastic blends can replace only about 20 percent of the polyethylene in a product, so resulting materials are only partially biodegradable. But Onwulata and his colleagues are currently applying this process to polylactide (PLA), a biodegradable polymer.
“Blending dairy-based bioplastics with PLA could eventually allow producers to make completely biodegradable materials,” he says.
In a separate project, research leader Peggy Tomasula and her colleagues have developed technology to create biodegradable films from byproducts of both dairy processing and biofuels production. Tomasula found that combining casein with water and glycerol—a byproduct of biodiesel production—produces a water-resistant film that can be used as an edible coating for groceries and other products.
“We use carbon dioxide as an environmentally friendly solvent to isolate dairy proteins from milk, instead of harsh chemicals or acids, which can be difficult to dispose of,” Tomasula says. Carbon dioxide is a byproduct of the glucose fermentation that is used to make ethanol, and she says it makes the edible film more water resistant.
The resulting food coatings are glossy, transparent, and completely edible. Like traditional food packaging, edible films can extend the shelf life of many foods, protect products from damage, prevent exposure to moisture and oxygen, and improve appearance. By using renewable resources instead of petrochemicals, the scientists can create more biodegradable products and reduce waste.—By Laura McGinnis, Agricultural Research Service Information Staff.
This research is part of Quality and Utilization of Agricultural Products, an ARS National Program (#306) described on the World Wide Web at www.nps.ars.usda.gov.
Peggy M. Tomasula and Charles I. Onwulata are in the USDA-ARS Dairy Processing and Products Research Unit, Eastern Regional Research Center, 600 E. Mermaid Ln., Wyndmoor, PA 19038-8598; phone (215) 233-6703 [Tomasula], (215) 233-6497 [Onwulata], fax (215) 233-6795.
"New Uses for Dairy Byproducts" was published in the May/June 2007 issue of Agricultural Research magazine.
On the morning of July 16, 1969, Buzz Aldrin, Neil Armstrong, and Michael Collins sat in the command module, Columbia, atop the Saturn V SA-506. Launch occurred at 9:32 a.m., and the trio entered orbit a scant 12 minutes later thanks to the power of the five F-1 engines of the first stage and the five J-2 engines of the second stage. After one-and-a-half orbits, the crew fired the single J-2 engine of the third stage to initiate their translunar injection and began traversing the void between the Earth and the Moon.
Three days later, on July 19, the command module flew behind the moon and entered lunar orbit. After 30 trips around the moon, on July 20, 1969 the lunar module Eagle detached from Columbia and began its 50,000 ft descent onto the surface of the moon. The early parts of the descent were marked by communication outages, and Aldrin and Armstrong noticed that they were seeing landmarks on the lunar surface early in their flight; this suggested that they were landing long, overshooting the planned landing site.
As the descent continued, the navigation and guidance computer began reporting program alarms. The program alarms 1202 and 1201 signified that the computer, which had less power than a modern desktop calculator, could not keep up with the requisite computations—a potentially mission-ending error. Computer analysts, led by Jack Garman, in a back room at NASA's Mission Control determined that as long as the alarms were intermittent, there was no need for abort due to computer problems. It was later learned that the error was the result of the computer processing data from both the landing and rendezvous radar, thus overloading the processor with unnecessary calculations.
With Aldrin acknowledging the intermittent alarms, Armstrong was looking out the window at the surface trying to find an acceptable place to land. It was noted that the landing site the computer had picked out was strewn with boulders between one and two meters across, and was on the edge of what would come to be known as "West crater." Landing here could have severely damaged the LM, or made lunar ascent impossible. Armstrong later said that he felt he could land if he could pilot the LM to come up short of the boulder and rock field, but it soon became obvious that this wouldn't be a possibility.
Unable to land short, Armstrong and Aldrin had to fly over the field, further overshooting their landing target. Because they had been flying longer than anticipated, the fuel level was approaching the point where an abort decision might need to be called. As they neared 60 feet in altitude, Houston informed them that they had 80 seconds of fuel remaining before an abort decision needed to be made. As they passed 20 feet, there were 50 seconds of fuel left. Touchdown on the lunar surface came 102 hours, 45 minutes, and 45 seconds after the mission began, at 4:17pm EDT on July 20, 1969. Thirteen seconds later, Neil Armstrong uttered some of the first words ever spoken on another celestial body: "Houston, Tranquility Base here. The Eagle has landed." They had 25 seconds' worth of fuel remaining onboard.
Shortly after landing, while the pair was prepping for the first lunar extravehicular activity, Buzz Aldrin sent the following broadcast, "This is the LM pilot. I'd like to take this opportunity to ask every person listening in, whoever and wherever they may be, to pause for a moment and contemplate the events of the past few hours and to give thanks in his or her own way." Aldrin then privately partook in the sacrament of communion, an event that was not made public until years after the landing. The chalice he used on the moon now resides in Webster Presbyterian Church in Webster, TX where the kit was prepared.
After planning locations for the American flag and the Early Apollo Scientific Experiment Package—an instrument kit that included the Laser Ranging Retroreflector and the Passive Seismic Experiment Package—the two began getting ready to take a walk outside. Getting out of the LM turned out not to be as easy as they had thought. At some point in the design process, the hatch leading into and out of the LM had been redesigned, but the portable life support systems the astronauts wore had not, resulting in a tighter than ideal fit through the door. After some trouble opening the hatch and some squeezing by Armstrong with Aldrin's help, at 10:56pm EDT on July 20, 1969, Neil Armstrong became the first man to walk on the surface of the moon, uttering his now famous line, "That's one small step for [a] man, one giant leap for mankind." This event was heard and seen live in over 600 million households around the world.
Fifteen minutes later, Buzz Aldrin joined Neil Armstrong on the surface, becoming the second human in history to take such steps. He described the lunar landscape as a "magnificent desolation." The two spent a total of two hours and thirty-six minutes on their lunar EVA and collected over 47 pounds of moon rocks. They traveled only about 400 feet from the LM, to what is known as "East crater." As each task took longer to accomplish than expected, the pair was not able to complete all their planned tasks in the short time they spent on the moon. By comparison, the Apollo 17 astronauts would spend over 22 hours on the lunar surface over the course of three days and three EVAs. Before leaving the lunar surface, Aldrin and Armstrong left a memorial package dedicated to deceased cosmonauts Yuri Gagarin and Vladimir Komarov (the first man to die on a spaceflight) and to Apollo 1 astronauts Gus Grissom, Ed White, and Roger Chaffee.
After their EVA, the pair climbed back into the lunar module and began prepping for the trip back to Columbia, where they would rejoin Collins. During this time, they discovered that the switch that controlled the main circuit breaker that armed the ascent stage rocket was broken. In Aldrin's own words, "Houston, Tranquility. Do you have a way of showing the configuration of the engine arm circuit breaker? Over. (Pause) The reason I'm asking is because the end of it appears to be broken off. I think we can push it back in again. I'm not sure we could pull it out if we pushed it in, though. Over." The solution to this was simply to force a felt-tipped pen into the slot.
Returning to Earth
After a few hours of rest, the pair launched from the lunar surface and rejoined Michael Collins aboard Columbia. They brought with them the 47 pounds of moon rocks, and left behind the experiments, the memorial bag, and the descent stage of the LM which had a plaque on it that read:
"Here Men From The Planet Earth First Set Foot Upon the Moon, July 1969 A.D. We Came in Peace For All Mankind."

It was signed by all three astronauts and President Nixon and contained images of the Eastern and Western hemispheres of Earth. It was also noted that during the ascent, the force from the engine knocked the American flag over, something fixed in future missions by placing it further from the spacecraft.
The night before the scheduled splash-down back on Earth, the crew made a final TV broadcast in which each gave a synopsis of his thoughts on the mission. Command Module Pilot Michael Collins spoke on the efforts to create the machine they traveled in: "...The Saturn V rocket which put us in orbit is an incredibly complicated piece of machinery, every piece of which worked flawlessly... We have always had confidence that this equipment will work properly. All this is possible only through the blood, sweat, and tears of a number of people... All you see is the three of us, but beneath the surface are thousands and thousands of others, and to all of those, I would like to say, 'Thank you very much.'"
Lunar Module Pilot Buzz Aldrin continued this line and quoted from the book of Psalms, "...This has been far more than three men on a mission to the Moon; more, still, than the efforts of a government and industry team; more, even, than the efforts of one nation. We feel that this stands as a symbol of the insatiable curiosity of all mankind to explore the unknown... Personally, in reflecting on the events of the past several days, a verse from Psalms comes to mind. 'When I consider the heavens, the work of Thy fingers, the Moon and the stars, which Thou hast ordained; What is man that Thou art mindful of him?'"
Commander Neil Armstrong closed by saying, "The responsibility for this flight lies first with history and with the giants of science who have preceded this effort; next with the American people, who have, through their will, indicated their desire; next with four administrations and their Congresses, for implementing that will; and then, with the agency and industry teams that built our spacecraft, the Saturn, the Columbia, the Eagle, and the little EMU, the spacesuit and backpack that was our small spacecraft out on the lunar surface. We would like to give special thanks to all those Americans who built the spacecraft; who did the construction, design, the tests, and put their hearts and all their abilities into those craft. To those people tonight, we give a special thank you, and to all the other people that are listening and watching tonight, God bless you. Good night from Apollo 11."
On July 24th, 1969 the mission ended with the command module Columbia splashing down safely in the Pacific Ocean. The trio were picked up by rescue teams stationed aboard the USS Hornet, and at that moment, President Kennedy's challenge to the American people earlier in the decade had been fulfilled. After three weeks in quarantine, to ensure they had not brought anything back with them, the three were welcomed back as heroes, with awards and accolades heaped on them. As a science fiction fan, I find it interesting (but not especially surprising) that the three were awarded a special Hugo award for "The Best Moon Landing Ever" in 1969.
As Armstrong's team acknowledged before their return to Earth, Apollo 11, and project Apollo in general, was not the achievement of three men atop a rocket. It was the work of the employees at NASA; the workers, engineers, and designers at North American, Grumman, Boeing, and Douglas; the people who built the tools that were used to build the spacecraft; the individuals who picked up the astronauts when they returned to Earth; and the imagination of people the world over throughout time. For a moment in time, the world came together to marvel at the achievement not of a few individuals, not of a single country, but of mankind as a whole.
On the 40th anniversary of this feat, we at Nobel Intent and Ars Technica would like to add our voices to the worldwide chorus congratulating Michael Collins, Buzz Aldrin, Neil Armstrong, and all those who helped make this momentous achievement possible. Since that monumental day 40 years ago, 10 other men have walked on the surface of the moon, but we have not been back since 1972, when Eugene Cernan took the last step off the moon and onto the Challenger lunar lander for ascent and rendezvous with the command module America. I was not alive in 1969 and did not get to witness these events as they transpired; however, I do hope that at some point in my own life the world can come together and witness one of its own setting foot on another terra firma within the heavens.
This regional assessment examines the impacts of temperature change from 1951-2006 on natural resources in Arizona, New Mexico, Colorado, and Utah. It documents that warming has already affected habitats, watersheds, and species in the Southwest, by influencing the timing of seasonal events or amplifying the impacts of natural disturbances such as wildfire and drought. The report concludes that to begin adapting to climate change, natural resource managers should reevaluate the effectiveness of current restoration tools, modify resource objectives, learn from climate-smart adaptive management and monitoring, and share information across boundaries.
Select a keyword below to find all associated reports and data sets, or browse all of our reports and data.
Every religion and culture around the world has its own way of defining and celebrating the new year. For example, the Chinese have the Imlek year and celebrate it with what they call, in their own language, "Gong Xi Fat Choy". Muslim societies have their Muharram year, and people all over the world using the Gregorian calendar celebrate the New Year on January 1st.
The same thing also occurs in Bali, though the Balinese use several different calendar systems. They have adopted the Gregorian calendar for business and government purposes, but for the endless procession of holy days, temple anniversaries, celebrations, sacred dances, house building, wedding ceremonies, death and cremation processes, and the other activities that define Balinese life, they have two calendar systems. The first is the Pawukon (from the word wuku, which means week) and the second is the Sasih (which means month). The Pawukon consists of 30 wuku, starting with Sinta, the first wuku, and ending with Watugunung, the last. The Pawukon, a 210-day ritual calendar brought over from Java in the 14th century, is a complex cycle of numerological conjunctions that provides the basic schedule for ritual activities on Bali. The Sasih, a parallel system of Indian origin, is a twelve-month lunar calendar that starts with the vernal equinox and is equally important in determining when to pay respect to the Gods.
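The arithmetic of the 210-day Pawukon (30 wuku of 7 days each) is easy to sketch. In the snippet below, the epoch date anchoring the cycle is a placeholder assumption chosen purely for illustration, not an authoritative reference point, and only the wuku's position number (1 for Sinta through 30 for Watugunung) is returned rather than its name.

```python
# Sketch: locate a date within the 210-day Pawukon cycle (30 wuku x 7 days).
# The epoch below is a hypothetical anchor for illustration only.
from datetime import date

WUKU_COUNT = 30            # Sinta is the 1st wuku, Watugunung the 30th
DAYS_PER_WUKU = 7
CYCLE = WUKU_COUNT * DAYS_PER_WUKU   # the 210-day ritual cycle

def pawukon_position(d: date, epoch: date = date(2000, 1, 1)) -> tuple[int, int]:
    """Return (wuku number 1-30, day-of-wuku 1-7) for date d,
    counting from an assumed epoch on which a cycle begins."""
    offset = (d - epoch).days % CYCLE
    return offset // DAYS_PER_WUKU + 1, offset % DAYS_PER_WUKU + 1

print(pawukon_position(date(2000, 1, 8)))   # (2, 1): first day of the second wuku
```

With a correctly calibrated epoch, the same modular arithmetic would give the real wuku for any date, since the cycle simply repeats every 210 days.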
Westerners open the New Year in revelry, however, in contrast, the Balinese open their New Year in silence. This is called Nyepi Day, the Balinese day of Silence, which falls on the day following the dark moon of the spring equinox, and opens a new year of the Saka Hindu era which began in 78 A.D.
Nyepi is a day to make and keep the balance of nature. It is based on the story of King Kaniska I of India, who was chosen as king in 78 A.D. and was famous for his wisdom and his tolerance of both Hindu and Buddhist societies. In that age, Aji Saka did Dharma Yatra (a missionary tour to promote and spread Hinduism) to Indonesia and introduced the Saka year.
The lead-up to Nyepi day is as follows:
- Melasti or Mekiyis or Melis (three days before Nyepi)
Melasti is meant to clean the pratima or arca or pralingga (statues), symbols that help to concentrate the mind in order to become closer to God. The ceremony is aimed at cleaning all of nature and its contents, and also at taking the Amerta (the source of eternal life) from the ocean or other water sources (i.e. lakes, rivers, etc.). Three days before Nyepi, all the effigies of the Gods from all the village temples are taken to the river in long and colourful ceremonies. There they are bathed by the Balinese Neptune, the god Baruna, before being taken back home to their shrines.
- Tawur Kesanga (the day before Nyepi)
Exactly one day before Nyepi, all villages in Bali hold a large exorcism ceremony at the main village crossroad, the meeting place of demons. They usually make ogoh-ogoh (fantastic monsters or evil spirits, the Butha Kala, made of bamboo) for carnival purposes. The ogoh-ogoh monsters symbolize the evil spirits surrounding our environment, which have to be driven out of our lives. The carnivals themselves are held all over Bali following sunset, with Bleganjur, a Balinese gamelan music, accompanying the processions. Some monsters are giants taken from classical Balinese lore. All have fangs, bulging eyes, and scary hair, and are illuminated by torches. The procession is usually organised by the Seka Teruna, the youth organisation of the banjar. When the ogoh-ogoh are paraded by the Seka Teruna, everyone enjoys the carnival. In order to create a harmonious relationship between human beings and God, between people, and between humans and their environment, Tawur Kesanga is performed at every level of society, from the family house upward. In the evening, the Hindus celebrate Ngerupuk: they make noise, light burning torches, and set fire to the ogoh-ogoh in order to drive the Bhuta Kala, the evil spirits, out of their lives.
On Nyepi day itself, every street is quiet; nobody goes about their normal daily activities. Pecalang (traditional Balinese security men) control and check street security. A Pecalang wears a black uniform and a udeng or destar (a traditional Balinese headdress usually worn in ceremonies). The Pecalang's main task is not only to control street security but also to stop any activities that disturb Nyepi. No traffic is allowed: not only cars, but also people, who have to stay in their own houses. Lights are kept to a minimum or off altogether, radios and TVs are turned down, and, of course, no one works. Even lovemaking, this ultimate activity of all leisure times, is not supposed to take place, nor even be attempted. The whole day is simply filled with the barking of a few dogs and the shrill of insects: one long quiet day in the calendar of this otherwise hectic island. On Nyepi the world is expected to be clean and everything starts anew, with Man showing his symbolic control over himself and the "force" of the World, hence the mandatory religious control.
- Ngembak Geni (the day after Nyepi)
Ngembak is the day when the Catur Berata Penyepian is over and Hindu communities usually visit one another to ask forgiveness and to perform Dharma Canthi: activities such as reading Sloka, Kekidung, Kekawin, etc. (ancient scripts containing songs and lyrics).
From a religious and philosophical point of view, Nyepi is meant to be a day of self-introspection, a day to reflect on values such as humanity, love, patience, and kindness that should be kept forever. Balinese Hindus have many kinds of celebrations (sacred days), but Nyepi is perhaps the most important of the island's religious days, and the prohibitions are taken seriously, particularly in villages outside Bali's southern tourist belt. Hotels are exempt from Nyepi's rigorous practices, but the streets outside will be closed to both pedestrians and vehicles (except for airport shuttles or emergency vehicles), and village wardens (Pecalang) will be posted to keep people off the beach. So wherever you happen to be staying on Nyepi Day in Bali, this is a good day to spend indoors. Indeed, Nyepi day has made Bali a unique island.
* The Social Security program itself is race blind; the benefits it pays are a function of a worker’s earnings history and family situation.
* Studies show African Americans receive modestly more in Social Security benefits for each dollar they pay in payroll taxes than whites do.
* African Americans earn 73 percent as much as whites, on average, but because of Social Security’s progressive benefit structure, their average retirement benefit is about 85 percent as much as whites’.
* Social Security Administration study: Dean Leimer of the Social Security Administration reported that “the results generally support the findings of closely related previous research, confirming that… the ‘Other Races’ group fared better by these measures than the ‘White’ race group in most of the cohorts considered.” Leimer found that males of “other races” received a 0.4 percent higher annual rate of return, on average, than white males, and females of other races received a 0.7 percent higher average rate of return than white females.
* African Americans benefit disproportionately from Social Security’s disability and survivors benefits, since they are more likely than other workers to become disabled or die before retiring. This is reflected in Social Security statistics. African Americans constitute 11.5 percent of all workers who are covered by Social Security but 17.6 percent of Social Security disability beneficiaries. While 15 percent of all U.S. children are African American, 23 percent of the children receiving Social Security survivors benefits are.
* Nearly five million African Americans receive Social Security benefits; roughly half of them are retired workers, and the other half are either disabled workers or the spouses or children of disabled, retired, or deceased workers.
* African Americans also benefit from the fact that Social Security benefits are based on a worker's highest 35 years of earnings. (Earnings in other years are disregarded.) Because African Americans have double the unemployment rate of whites and experience longer average spells of unemployment, they have more years with no earnings than whites do, on average. By not counting some years of little or no earnings when calculating benefits, this rule works to African Americans' advantage.
* A major Treasury Department study of Social Security retirement and survivors benefits, based on nearly 40,000 actual earnings histories, found that African Americans’ average annual rate of return was half a percentage point higher than whites’.
* A Government Accountability Office study found that, on average, non-Hispanic blacks receive nearly 10 percent more in Social Security retirement, disability, and survivors benefits for every tax dollar contributed to the program than non-Hispanic whites do.
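The progressive benefit structure cited in the points above can be illustrated with a small sketch. The 90%/32%/15% tiers are the statutory replacement rates in the Primary Insurance Amount (PIA) formula, but the dollar bend points below are hypothetical placeholders, since the real amounts are adjusted every year.

```python
# Illustrative sketch of Social Security's progressive PIA formula.
# 90%/32%/15% are the statutory tier rates; the bend-point dollar
# amounts below are hypothetical placeholders (the real ones change yearly).
BEND1, BEND2 = 1100, 6600   # assumed monthly bend points, for illustration only

def pia(aime: float) -> float:
    """Primary Insurance Amount for a given Average Indexed Monthly Earnings."""
    amount = 0.90 * min(aime, BEND1)
    if aime > BEND1:
        amount += 0.32 * (min(aime, BEND2) - BEND1)
    if aime > BEND2:
        amount += 0.15 * (aime - BEND2)
    return amount

# A worker earning 73% as much ends up with more than 73% of the
# higher earner's benefit, because the lower tiers replace more of each dollar.
high = pia(5000)
low = pia(5000 * 0.73)
print(round(low / high, 2))   # noticeably above the 0.73 earnings ratio
```

The exact ratio depends on where the two earners fall relative to the bend points, but the direction of the effect (lower earners get a higher replacement rate) is what produces the 73%-earnings, 85%-benefit pattern described above.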
Proto-Indo-European *sleiH- "bluish" became Serbo-Croatian šljìva "plum" and šljivovica "plum brandy", borrowed as slivovitz. *sleiH- also became Old English slāh and English sloe, blackthorn fruit.
According to the AHD, *sleiH- became Latin līuēre "to be bluish" and līuidus "bluish", borrowed thru French as livid.
Lavender is from Anglo-French lavendre from medieval Latin lauendula. It was thought to be connected to lauare "to wash", either because the plant was used in perfuming baths or laid among linen. But the OED notes that "on the ground of sense-development this does not seem plausible; a word literally meaning 'washing' would hardly without change of form come to denote a non-essential adjunct to washing". Another suggestion is that lauendula is from *līuendula, from līuidus. Lavender is bluish.
The library staff is teaching research skills throughout all elementary grade levels. As children learn to list their sources, they also learn about plagiarism and how to avoid it.
You can help your child learn the importance of always listing the sources for information used. The following is the progression we are using at Waterford.
1st Grade: Students will name the book they used for information.
2nd Grade: Students will list the title and author of the book(s) they used and the word internet if they used a website.
3rd Grade: Students will list the title, author, publisher of books used, and/or the name of the internet site they used.
4th Grade: Students will list the title, author, publisher, copyright date of all books used, and/or the name and URL of the internet sites used.
5th Grade: Students will list all of the above and put it in the MLA format used in Middle School.
ARE WE ATTRACTED?
We are obviously attracted to the Earth; few people have ever ventured off of it. The second obvious attraction is to the sun.
For the last few hundred years, the first physics lesson children have learned is that the Earth goes around the sun. The next attraction is a little more obscure, because it is located 28,000 light-years away from us: the center of the Milky Way galaxy, a great center of gravitational attraction for most objects visible to the naked eye. The last "Great Attractor" known to us is more obscure still. It lies 400,000,000 light-years away and seems to attract our entire local group. There are, however, many things obscuring our view of it: the interstellar medium blocks 20% of our visible sky, and the Great Attractor happens to lie within that 20%. It is a conglomeration of perhaps 100,000 galaxies beyond the local group.
WHERE ARE WE GOING?
The strongest attraction in this neck of the woods of the universe is believed to be a cluster of galaxies, the center of which is thought to be Abell 3627.
It appears that the Earth is moving in the direction of the constellation Leo (RA: 11.2h, dec: -7deg) at around 380 km/s. This figure, however, includes the revolution of the Sun around the galaxy and the movement of the Milky Way about the center of the local group, which together sum to around 300 km/s in the direction of the constellation Cygnus. After correcting for this, one finds the local group is moving at around 600 km/s relative to the cosmic microwave background (measured via doppler shift) in the direction of the Hydra-Centaurus supercluster. This is the reason for Dressler and collaborators' idea that there should be something pulling us in that direction.
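As a rough illustration of the doppler-shift measurement mentioned above, our speed relative to the cosmic microwave background can be estimated from the CMB dipole anisotropy via v ≈ c·(ΔT/T). The dipole amplitude and mean temperature used below are approximate published values, so the result is only a ballpark figure.

```python
# Sketch: estimating the Sun's speed relative to the CMB from the
# dipole anisotropy, using the Doppler relation v ~ c * (dT / T).
# The dipole amplitude and mean temperature are approximate values.
C_KM_S = 299_792.458     # speed of light, km/s
T_CMB = 2.725            # mean CMB temperature, K
DIPOLE = 3.35e-3         # CMB dipole amplitude, K (approximate)

v = C_KM_S * DIPOLE / T_CMB
print(round(v))          # roughly 370 km/s, consistent with the ~380 km/s quoted above
```

Measuring the dipole's direction on the sky is what pins down the "toward Leo" heading quoted in the text.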
Table 12.11. Arithmetic Operators
The usual arithmetic operators are available. The result is determined according to the following rules:
If both operands are integers and any of them are unsigned, the result is an unsigned integer. For subtraction, if the NO_UNSIGNED_SUBTRACTION SQL mode is enabled, the result is signed even if any operand is unsigned.
In division performed with /, the scale of the result when using two exact-value operands is the scale of the first operand plus the value of the div_precision_increment system variable (which is 4 by default). For example, the result of the expression 5.05 / 0.014 has a scale of six decimal places (360.714286).
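As an illustration (not part of the MySQL manual itself), the exact-value division scale rule can be mimicked with Python's decimal module; the rounding mode below is an assumption chosen to match the documented example.

```python
# Sketch reproducing MySQL's exact-value division scale rule:
# result scale = scale of the first operand + div_precision_increment (4 by default).
from decimal import Decimal, ROUND_HALF_UP

DIV_PRECISION_INCREMENT = 4   # MySQL's documented default

def mysql_div(a: str, b: str) -> Decimal:
    a_dec = Decimal(a)
    scale = -a_dec.as_tuple().exponent + DIV_PRECISION_INCREMENT
    quantum = Decimal(1).scaleb(-scale)      # e.g. Decimal('1E-6') for scale 6
    return (a_dec / Decimal(b)).quantize(quantum, rounding=ROUND_HALF_UP)

print(mysql_div("5.05", "0.014"))   # 360.714286: scale 2 + 4 = six decimal places
```

The same function shows why the first operand's scale matters: `mysql_div("1", "3")` has scale 0 + 4 and yields 0.3333.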
These rules are applied for each operation, such that nested calculations imply the precision of each component. Hence, (14620 / 9432456) / (24250 / 9432456) resolves first to (0.0015) / (0.0026), with the final result having 8 decimal places.
Because of these rules and the way they are applied, care should be taken to ensure that components and subcomponents of a calculation use the appropriate level of precision. See Section 12.10, “Cast Functions and Operators”.
For information about handling of overflow in numeric expression evaluation, see the section "Out-of-Range and Overflow Handling".
Arithmetic operators apply to numbers. For other types of values, alternative operations may be available. For example, to add date values, use DATE_ADD(); see Section 12.7, "Date and Time Functions".
+
Addition:
mysql> SELECT 3+5;
        -> 8

-
Subtraction:
mysql> SELECT 3-5;
        -> -2

-
Unary minus. This operator changes the sign of the operand.
mysql> SELECT - 2;
        -> -2

*
Multiplication:
mysql> SELECT 3*5;
        -> 15
mysql> SELECT 18014398509481984*18014398509481984.0;
        -> 324518553658426726783156020576256.0

/
Division:
mysql> SELECT 3/5;
        -> 0.60

Division by zero produces a NULL result:
mysql> SELECT 102/(1-1);
        -> NULL

DIV
Integer division. A division is calculated with BIGINT arithmetic only if performed in a context where its result is converted to an integer.
mysql> SELECT 5 DIV 2;
        -> 2
Truncate a file to a specified length
#include <unistd.h>

int truncate( const char* path,
              off_t length );

int truncate64( const char* path,
                off64_t length );
- The path name of the file that you want to truncate.
- The new size of the file.
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The truncate() function causes the regular file named by path to have a size of length bytes. The truncate64() function is a large-file version of truncate().
The effect of truncate() on other types of files is unspecified. If the file previously was larger than length, the extra data is lost. If it was previously shorter than length, bytes between the old and new lengths are read as zeroes. The process must have write permission for the file.
If the request would cause the file size to exceed the soft file size limit for the process, the request fails and the implementation generates the SIGXFSZ signal for the process.
This function doesn't modify the file offset for any open file descriptions associated with the file. On successful completion, if the file size is changed, truncate() marks for update the st_ctime and st_mtime fields of the file, and if the file is a regular file, the S_ISUID and S_ISGID bits of the file mode may be cleared.
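The shrink-and-grow semantics described above can be demonstrated through Python's os.truncate() wrapper around the same POSIX call; this is an illustrative sketch, not QNX-specific code.

```python
# Demonstration of POSIX truncate() semantics via Python's os.truncate wrapper:
# shrinking discards the extra data, growing pads the file with zero bytes.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")
os.close(fd)

os.truncate(path, 5)                  # shrink: data past the new length is lost
with open(path, "rb") as f:
    shrunk = f.read()

os.truncate(path, 8)                  # grow: bytes between old and new length read as zeroes
with open(path, "rb") as f:
    grown = f.read()

print(shrunk, grown)   # b'hello' b'hello\x00\x00\x00'
os.remove(path)
```

Note that, as the man page says, the file offset of any already-open descriptors is unaffected; only the file's size changes.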
These functions return 0 on success, or -1 if an error occurred (errno is set). The possible errno values are:
- EACCES: A component of the path prefix denies search permission, or write permission is denied on the file.
- EFAULT: The path argument points outside the process's allocated address space.
- EFBIG: The length argument was greater than the maximum file size.
- EINTR: A signal was caught during execution.
- EINVAL: The length argument is invalid, or the path argument isn't an ordinary file.
- EIO: An I/O error occurred while reading from or writing to a filesystem.
- EISDIR: The named file is a directory.
- ELOOP: Too many symbolic links were encountered in resolving path.
- EMFILE: The maximum number of file descriptors available to the process has been reached.
- EMULTIHOP: Components of path require hopping to multiple remote machines and the filesystem type doesn't allow it.
- ENAMETOOLONG: The length of the specified pathname exceeds PATH_MAX bytes, or the length of a component of the pathname exceeds NAME_MAX bytes.
- ENFILE: Additional space couldn't be allocated for the system file table.
- ENOENT: A component of path doesn't name an existing file, or path is an empty string.
- ENOLINK: The path argument points to a remote machine and the link to that machine is no longer active.
- ENOTDIR: A component of the path prefix of path isn't a directory.
- EROFS: The named file resides on a read-only filesystem.
See Google's What is Android? page for an overview of Android components, and a diagram of the architecture.
The diagram on that page appears in every presentation I have ever seen about Android technical topics (with the exception of my own).
Here is the Android Architecture Diagram, obtained from here.
See also Android internals diagram
Basically Android has the following layers:
- applications (written in java, executing in Dalvik)
- framework services and libraries (written mostly in java)
- applications and most framework code execute in a virtual machine
- native libraries, daemons and services (written in C or C++)
- the Linux kernel, which includes
- drivers for hardware, networking, file system access and inter-process-communication
- Android is not just Java on Linux
- Great presentation by Tetsuyuki Kobayashi giving an overview of Android
- See this Android Internals presentation by Karim Yaghmour
- You'll find both the video and the slides there
- Mythbusters_Android.pdf Presentation by Matt Porter at ELC Europe
- Has bits and pieces showing problematic Android code and policies
Breakdown of running Android system
A quick look at Android contents and programs running when Android starts is at:
Relation to the Linux kernel
Here is Greg Kroah-Hartman's presentation on Android from the 2010 CELF conference, discussing how Google/Android works (or doesn't work) with the Linux community.
Java is used as the language for application programming, but it is converted into non-Java bytecode for runtime interpretation by a custom interpreter (Dalvik).
Java/Object Oriented Philosophy
Practicality is more important than purity in implementing the Android system.
Dianne Hackborn, one of the principal engineers working on Android, wrote:
Computational phylogenetics is the application of computational algorithms, methods and programs to phylogenetic analyses. The goal is to assemble a phylogenetic tree representing a hypothesis about the evolutionary ancestry of a set of genes, species, or other taxa. For example, these techniques have been used to explore the family tree of hominid species and the relationships between specific genes shared by many types of organisms. Traditional phylogenetics relies on morphological data obtained by measuring and quantifying the phenotypic properties of representative organisms, while the more recent field of molecular phylogenetics uses nucleotide sequences encoding genes or amino acid sequences encoding proteins as the basis for classification. Many forms of molecular phylogenetics are closely related to and make extensive use of sequence alignment in constructing and refining phylogenetic trees, which are used to classify the evolutionary relationships between homologous genes represented in the genomes of divergent species. The phylogenetic trees constructed by computational methods are unlikely to perfectly reproduce the evolutionary tree that represents the historical relationships between the species being analyzed. The historical species tree may also differ from the historical tree of an individual homologous gene shared by those species.
Producing a phylogenetic tree requires a measure of homology among the characteristics shared by the taxa being compared. In morphological studies, this requires explicit decisions about which physical characteristics to measure and how to use them to encode distinct states corresponding to the input taxa. In molecular studies, a primary problem is in producing a multiple sequence alignment (MSA) between the genes or amino acid sequences of interest. Progressive sequence alignment methods produce a phylogenetic tree by necessity because they incorporate new sequences into the calculated alignment in order of genetic distance.
Types of phylogenetic trees
Phylogenetic trees generated by computational phylogenetics can be either rooted or unrooted depending on the input data and the algorithm used. A rooted tree is a directed graph that explicitly identifies a most recent common ancestor (MRCA), usually an imputed sequence that is not represented in the input. Genetic distance measures can be used to plot a tree with the input sequences as leaf nodes and their distances from the root proportional to their genetic distance from the hypothesized MRCA. Identification of a root usually requires the inclusion in the input data of at least one "outgroup" known to be only distantly related to the sequences of interest.
By contrast, unrooted trees plot the distances and relationships between input sequences without making assumptions regarding their descent. An unrooted tree can always be produced from a rooted tree, but a root cannot usually be placed on an unrooted tree without additional data on divergence rates, such as the assumption of the molecular clock hypothesis.
The set of all possible phylogenetic trees for a given group of input sequences can be conceptualized as a discretely defined multidimensional "tree space" through which search paths can be traced by optimization algorithms. Although counting the total number of trees for a nontrivial number of input sequences can be complicated by variations in the definition of a tree topology, it is always true that there are more rooted than unrooted trees for a given number of inputs and choice of parameters.
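The size of this tree space grows super-exponentially with the number of inputs: for n labeled leaves there are (2n-5)!! distinct unrooted binary trees and (2n-3)!! rooted ones. These standard formulas can be checked with a short sketch:

```python
from math import prod

def double_factorial(k):
    """Product k * (k-2) * (k-4) * ... down to 1 (defined as 1 for k <= 0)."""
    return prod(range(k, 0, -2)) if k > 0 else 1

def num_unrooted(n):
    """Number of unrooted binary trees on n >= 3 labeled leaves: (2n-5)!!"""
    return double_factorial(2 * n - 5)

def num_rooted(n):
    """Number of rooted binary trees on n >= 2 labeled leaves: (2n-3)!!"""
    return double_factorial(2 * n - 3)

# The counts explode quickly, which is why exhaustive search is hopeless:
for n in range(3, 11):
    print(n, num_unrooted(n), num_rooted(n))
```

Already at ten taxa there are over two million unrooted topologies, and each rooted tree on n leaves corresponds to an unrooted tree on n+1 leaves, so there are always more rooted than unrooted trees for a given n.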
Coding characters and defining homology
Morphological analysis
The basic problem in morphological phylogenetics is the assembly of a matrix representing a mapping from each of the taxa being compared to representative measurements for each of the phenotypic characteristics being used as a classifier. The types of phenotypic data used to construct this matrix depend on the taxa being compared; for individual species, they may involve measurements of average body size, lengths or sizes of particular bones or other physical features, or even behavioral manifestations. Of course, since not every possible phenotypic characteristic could be measured and encoded for analysis, the selection of which features to measure is a major inherent obstacle to the method. The decision of which traits to use as a basis for the matrix necessarily represents a hypothesis about which traits of a species or higher taxon are evolutionarily relevant. Morphological studies can be confounded by examples of convergent evolution of phenotypes. A major challenge in constructing useful classes is the high likelihood of inter-taxon overlap in the distribution of the phenotype's variation. The inclusion of extinct taxa in morphological analysis is often difficult due to absent or incomplete fossil records, but has been shown to have a significant effect on the trees produced; in one study only the inclusion of extinct species of apes produced a morphologically derived tree that was consistent with that produced from molecular data.
Some phenotypic classifications, particularly those used when analyzing very diverse groups of taxa, are discrete and unambiguous; classifying organisms as possessing or lacking a tail, for example, is straightforward in the majority of cases, as is counting features such as eyes or vertebrae. However, the most appropriate representation of continuously varying phenotypic measurements is a controversial problem without a general solution. A common method is simply to sort the measurements of interest into two or more classes, rendering continuous observed variation as discretely classifiable (e.g., all examples with humerus bones longer than a given cutoff are scored as members of one state, and all members whose humerus bones are shorter than the cutoff are scored as members of a second state). This results in an easily manipulated data set but has been criticized for poor reporting of the basis for the class definitions and for sacrificing information compared to methods that use a continuous weighted distribution of measurements.
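The cutoff scheme described above amounts to a simple thresholding step. A minimal sketch, in which the trait values and the 10 cm cutoff are purely hypothetical:

```python
def discretize(values, cutoffs):
    """Assign each continuous measurement to a discrete state index.

    cutoffs must be sorted ascending; a value is scored as the number
    of cutoffs it meets or exceeds, so one cutoff yields two states,
    two cutoffs yield three states, and so on.
    """
    return [sum(1 for c in cutoffs if v >= c) for v in values]

# Hypothetical humerus lengths (cm) for five taxa, one cutoff at 10 cm:
lengths = [7.2, 9.8, 10.5, 14.1, 6.3]
print(discretize(lengths, [10.0]))  # [0, 0, 1, 1, 0]
```

The criticism noted above is visible even here: the 9.8 and 10.5 specimens end up in different states despite being nearly identical, while the information separating 10.5 from 14.1 is discarded entirely.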
Because morphological data is extremely labor-intensive to collect, whether from literature sources or from field observations, reuse of previously compiled data matrices is not uncommon, although this may propagate flaws in the original matrix into multiple derivative analyses.
Molecular analysis
The problem of character coding is very different in molecular analyses, as the characters in biological sequence data are immediate and discretely defined - distinct nucleotides in DNA or RNA sequences and distinct amino acids in protein sequences. However, defining homology can be challenging due to the inherent difficulties of multiple sequence alignment. For a given gapped MSA, several rooted phylogenetic trees can be constructed that vary in their interpretations of which changes are "mutations" versus ancestral characters, and which events are insertion mutations or deletion mutations. For example, given only a pairwise alignment with a gap region, it is impossible to determine whether one sequence bears an insertion mutation or the other carries a deletion. The problem is magnified in MSAs with unaligned and nonoverlapping gaps. In practice, sizable regions of a calculated alignment may be discounted in phylogenetic tree construction to avoid integrating noisy data into the tree calculation.
Distance-matrix methods
Distance-matrix methods of phylogenetic analysis explicitly rely on a measure of "genetic distance" between the sequences being classified, and therefore they require an MSA as an input. Distance is often defined as the fraction of mismatches at aligned positions, with gaps either ignored or counted as mismatches. Distance methods attempt to construct an all-to-all matrix from the sequence query set describing the distance between each sequence pair. From this is constructed a phylogenetic tree that places closely related sequences under the same interior node and whose branch lengths closely reproduce the observed distances between sequences. Distance-matrix methods may produce either rooted or unrooted trees, depending on the algorithm used to calculate them. They are frequently used as the basis for progressive and iterative types of multiple sequence alignments. The main disadvantage of distance-matrix methods is their inability to efficiently use information about local high-variation regions that appear across multiple subtrees.
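A minimal sketch of such an all-to-all matrix, using the fraction of mismatched aligned positions (the p-distance) and ignoring gap columns by default; the aligned sequences are invented for illustration:

```python
def p_distance(a, b, count_gaps=False):
    """Fraction of mismatched positions between two aligned sequences.
    Gap columns are skipped unless count_gaps is True, in which case
    any column containing a gap counts as a mismatch."""
    assert len(a) == len(b), "sequences must come from the same alignment"
    used = mismatches = 0
    for x, y in zip(a, b):
        if '-' in (x, y):
            if not count_gaps:
                continue
            used += 1
            mismatches += 1
        else:
            used += 1
            mismatches += x != y
    return mismatches / used

def distance_matrix(seqs):
    """All-to-all matrix of pairwise p-distances for a query set."""
    n = len(seqs)
    return [[p_distance(seqs[i], seqs[j]) for j in range(n)] for i in range(n)]

msa = ["ACGTACGT", "ACGTACGA", "ACGAAC-T"]
for row in distance_matrix(msa):
    print(row)
```

A matrix like this is exactly the input consumed by the clustering methods below; note that the raw p-distance undercounts multiple substitutions at the same site, which is why corrected distances (see the Fitch-Margoliash section) are used in practice.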
Neighbor-joining methods apply general data clustering techniques to sequence analysis using genetic distance as a clustering metric. The simple neighbor-joining method produces unrooted trees, but it does not assume a constant rate of evolution (i.e., a molecular clock) across lineages. Its relative, UPGMA (Unweighted Pair Group Method with Arithmetic mean) produces rooted trees and requires a constant-rate assumption - that is, it assumes an ultrametric tree in which the distances from the root to every branch tip are equal.
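A naive UPGMA implementation along these lines; the input matrix is illustrative, and branch lengths (which under the ultrametric assumption would be half of each merge distance) are omitted for brevity:

```python
def upgma(dist, labels):
    """Naive UPGMA: repeatedly merge the two closest clusters and
    average their distances to every other cluster, weighted by
    cluster size (the 'arithmetic mean' of the name).
    Returns the rooted tree as nested tuples."""
    clusters = {i: (labels[i], 1) for i in range(len(labels))}  # id -> (subtree, size)
    D = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    next_id = len(labels)
    while len(clusters) > 1:
        (i, j), dij = min(D.items(), key=lambda kv: kv[1])
        del D[(i, j)]
        ti, ni = clusters.pop(i)
        tj, nj = clusters.pop(j)
        for k in list(clusters):
            dik = D.pop((min(i, k), max(i, k)))
            djk = D.pop((min(j, k), max(j, k)))
            D[(k, next_id)] = (ni * dik + nj * djk) / (ni + nj)
        clusters[next_id] = ((ti, tj), ni + nj)
        next_id += 1
    (tree, _), = clusters.values()
    return tree

# A and B are close; C is equidistant from both:
print(upgma([[0.0, 2.0, 6.0], [2.0, 0.0, 6.0], [6.0, 6.0, 0.0]], ["A", "B", "C"]))
```

Because UPGMA always halves the merge distance to place the new node, it silently assumes the molecular clock; neighbor-joining replaces the plain minimum-distance criterion with a rate-corrected one precisely to drop that assumption.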
Fitch-Margoliash method
The Fitch-Margoliash method uses a weighted least squares method for clustering based on genetic distance. Closely related sequences are given more weight in the tree construction process to correct for the increased inaccuracy in measuring distances between distantly related sequences. The distances used as input to the algorithm must be normalized to prevent large artifacts in computing relationships between closely related and distantly related groups. The distances calculated by this method must be linear; the linearity criterion for distances requires that the expected values of the branch lengths for two individual branches must equal the expected value of the sum of the two branch distances - a property that applies to biological sequences only when they have been corrected for the possibility of back mutations at individual sites. This correction is done through the use of a substitution matrix such as that derived from the Jukes-Cantor model of DNA evolution. The distance correction is only necessary in practice when the evolution rates differ among branches. Another modification of the algorithm can be helpful, especially in the case of concentrated distances (see the concentration of measure phenomenon and the curse of dimensionality): the modification described by Lespinats et al. (2011) has been shown to improve the efficiency of the algorithm and its robustness.
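Under the Jukes-Cantor model the back-mutation correction has a closed form: d = -(3/4) ln(1 - (4/3)p), where p is the observed fraction of differing sites. A small sketch:

```python
from math import log

def jukes_cantor(p):
    """Correct an observed p-distance for unobserved multiple hits
    under the Jukes-Cantor model. Defined only for p < 0.75, the
    saturation point at which sequences look random with respect
    to each other."""
    if p >= 0.75:
        raise ValueError("p-distance too large for Jukes-Cantor correction")
    return -0.75 * log(1.0 - (4.0 / 3.0) * p)

# The correction grows with divergence, restoring linearity:
for p in (0.05, 0.25, 0.45):
    print(f"p = {p:.2f}  ->  corrected d = {jukes_cantor(p):.3f}")
```

For closely related sequences the corrected distance barely differs from p, while near saturation it diverges sharply - exactly the nonlinearity the least-squares criterion needs removed.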
The least-squares criterion applied to these distances is more accurate but less efficient than the neighbor-joining methods. An additional improvement that corrects for correlations between distances that arise from many closely related sequences in the data set can also be applied at increased computational cost. Finding the optimal least-squares tree with any correction factor is NP-complete, so heuristic search methods like those used in maximum-parsimony analysis are applied to the search through tree space.
Using outgroups
Independent information about the relationship between sequences or groups can be used to help reduce the tree search space and root unrooted trees. Standard usage of distance-matrix methods involves the inclusion of at least one outgroup sequence known to be only distantly related to the sequences of interest in the query set. This usage can be seen as a type of experimental control. If the outgroup has been appropriately chosen, it will have a much greater genetic distance and thus a longer branch length than any other sequence, and it will appear near the root of a rooted tree. Choosing an appropriate outgroup requires the selection of a sequence that is moderately related to the sequences of interest; too close a relationship defeats the purpose of the outgroup and too distant adds noise to the analysis. Care should also be taken to avoid situations in which the species from which the sequences were taken are distantly related, but the gene encoded by the sequences is highly conserved across lineages. Horizontal gene transfer, especially between otherwise divergent bacteria, can also confound outgroup usage.
Maximum parsimony
Maximum parsimony (MP) is a method of identifying the potential phylogenetic tree that requires the smallest total number of evolutionary events to explain the observed sequence data. Some ways of scoring trees also include a "cost" associated with particular types of evolutionary events and attempt to locate the tree with the smallest total cost. This is a useful approach in cases where not every possible type of event is equally likely - for example, when particular nucleotides or amino acids are known to be more mutable than others.
The most naive way of identifying the most parsimonious tree is simple enumeration - considering each possible tree in succession and searching for the tree with the smallest score. However, this is only possible for a relatively small number of sequences or species because the problem of identifying the most parsimonious tree is known to be NP-hard; consequently a number of heuristic search methods for optimization have been developed to locate a highly parsimonious tree, if not the best in the set. Most such methods involve a steepest descent-style minimization mechanism operating on a tree rearrangement criterion.
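The scoring step that such searches repeat for every candidate topology is the Fitch small-parsimony algorithm, which counts the minimum number of state changes for one character on a fixed tree. A minimal sketch, with a hypothetical four-taxon tree and observed states:

```python
def fitch_score(tree, site):
    """Fitch small-parsimony: minimum number of state changes needed
    on a fixed rooted binary tree to explain one character.
    tree: nested tuples with leaf names (strings) at the tips;
    site: dict mapping each leaf name to its observed state."""
    changes = 0

    def state_set(node):
        nonlocal changes
        if isinstance(node, str):              # leaf: its observed state
            return {site[node]}
        left, right = node
        a, b = state_set(left), state_set(right)
        if a & b:
            return a & b                       # intersection: no change implied
        changes += 1                           # disjoint sets: one change below
        return a | b

    state_set(tree)
    return changes

tree = ((('A', 'B'), 'C'), 'D')
site = {'A': 'G', 'B': 'G', 'C': 'T', 'D': 'T'}
print(fitch_score(tree, site))  # 1
```

Summing this score over all alignment columns gives the tree's total parsimony length; the search problem is then to find the topology minimizing that sum.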
Branch and bound
The branch and bound algorithm is a general method used to increase the efficiency of searches for near-optimal solutions of NP-hard problems first applied to phylogenetics in the early 1980s. Branch and bound is particularly well suited to phylogenetic tree construction because it inherently requires dividing a problem into a tree structure as it subdivides the problem space into smaller regions. As its name implies, it requires as input both a branching rule (in the case of phylogenetics, the addition of the next species or sequence to the tree) and a bound (a rule that excludes certain regions of the search space from consideration, thereby assuming that the optimal solution cannot occupy that region). Identifying a good bound is the most challenging aspect of the algorithm's application to phylogenetics. A simple way of defining the bound is a maximum number of assumed evolutionary changes allowed per tree. A set of criteria known as Zharkikh's rules severely limit the search space by defining characteristics shared by all candidate "most parsimonious" trees. The two most basic rules require the elimination of all but one redundant sequence (for cases where multiple observations have produced identical data) and the elimination of character sites at which two or more states do not occur in at least two species. Under ideal conditions these rules and their associated algorithm would completely define a tree.
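A toy version of this idea for parsimony, using as the bound the fact that adding a taxon can never decrease a tree's parsimony score; the four-taxon data are invented, and real implementations use far better bounds and data structures:

```python
def fitch(tree, column):
    """Fitch parsimony: (state set, minimum changes) for one character."""
    if isinstance(tree, str):
        return {column[tree]}, 0
    (a, ca), (b, cb) = fitch(tree[0], column), fitch(tree[1], column)
    return (a & b, ca + cb) if a & b else (a | b, ca + cb + 1)

def score(tree, columns):
    return sum(fitch(tree, col)[1] for col in columns)

def attachments(tree, leaf):
    """Every tree made by attaching `leaf` to one branch of `tree`
    (the branching rule of the search)."""
    yield (tree, leaf)
    if not isinstance(tree, str):
        left, right = tree
        for t in attachments(left, leaf):
            yield (t, right)
        for t in attachments(right, leaf):
            yield (left, t)

def branch_and_bound(taxa, columns):
    """Exact most-parsimonious search: grow trees one taxon at a time,
    discarding any partial tree whose score already meets the best
    complete score, since the score can only rise as taxa are added."""
    best_score, best_tree = float('inf'), None
    def grow(tree, remaining):
        nonlocal best_score, best_tree
        s = score(tree, columns)
        if s >= best_score:
            return                     # bound: prune this region of tree space
        if not remaining:
            best_score, best_tree = s, tree
        else:
            for t in attachments(tree, remaining[0]):
                grow(t, remaining[1:])
    grow((taxa[0], taxa[1]), taxa[2:])
    return best_score, best_tree

# Four taxa scored on three hypothetical characters:
data = {'A': 'GGT', 'B': 'GGT', 'C': 'TGC', 'D': 'TAC'}
columns = [{t: data[t][i] for t in data} for i in range(3)]
print(branch_and_bound(list(data), columns))
```

Here the bound is just the best score seen so far; the rules described above tighten the search much further by removing redundant sequences and uninformative character sites before any tree is built.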
Sankoff-Morel-Cedergren algorithm
The Sankoff-Morel-Cedergren algorithm was among the first published methods to simultaneously produce an MSA and a phylogenetic tree for nucleotide sequences. The method uses a maximum parsimony calculation in conjunction with a scoring function that penalizes gaps and mismatches, thereby favoring the tree that introduces a minimal number of such events. The imputed sequences at the interior nodes of the tree are scored and summed over all the nodes in each possible tree. The lowest-scoring tree sum provides both an optimal tree and an optimal MSA given the scoring function. Because the method is highly computationally intensive, an approximate method in which initial guesses for the interior alignments are refined one node at a time is often used instead. Both the full and the approximate versions are in practice calculated by dynamic programming.
MALIGN and POY
More recent phylogenetic tree/MSA methods use heuristics to isolate high-scoring, but not necessarily optimal, trees. The MALIGN method uses a maximum-parsimony technique to compute a multiple alignment by maximizing a cladogram score, and its companion POY uses an iterative method that couples the optimization of the phylogenetic tree with improvements in the corresponding MSA. However, the use of these methods in constructing evolutionary hypotheses has been criticized as biased due to the deliberate construction of trees reflecting minimal evolutionary events.
Maximum likelihood
The maximum likelihood method uses standard statistical techniques for inferring probability distributions to assign probabilities to particular possible phylogenetic trees. The method requires a substitution model to assess the probability of particular mutations; roughly, a tree that requires more mutations at interior nodes to explain the observed phylogeny will be assessed as having a lower probability. This is broadly similar to the maximum-parsimony method, but maximum likelihood allows additional statistical flexibility by permitting varying rates of evolution across both lineages and sites. In fact, the method requires that evolution at different sites and along different lineages must be statistically independent. Maximum likelihood is thus well suited to the analysis of distantly related sequences, but because it formally requires search of all possible combinations of tree topology and branch length, it is computationally expensive to perform on more than a few sequences.
The "pruning" algorithm, a variant of dynamic programming, is often used to reduce the search space by efficiently calculating the likelihood of subtrees. The method calculates the likelihood for each site in a "linear" manner, starting at a node whose only descendants are leaves (that is, the tips of the tree) and working backwards toward the "bottom" node in nested sets. However, the trees produced by the method are only rooted if the substitution model is irreversible, which is not generally true of biological systems. The search for the maximum-likelihood tree also includes a branch length optimization component that is difficult to improve upon algorithmically; general global optimization tools such as the Newton-Raphson method are often used. Searching tree topologies defined by likelihood has not been shown to be NP-complete, but remains extremely challenging because branch-and-bound search is not yet effective for trees represented in this way.
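A sketch of the pruning recursion for a single alignment column under the Jukes-Cantor model; the two-leaf tree, the branch lengths, and the uniform root base frequencies are illustrative assumptions:

```python
from math import exp

BASES = "ACGT"

def jc_prob(i, j, t):
    """JC69 probability that base i is observed as base j after branch length t."""
    same = 0.25 + 0.75 * exp(-4.0 * t / 3.0)
    return same if i == j else (1.0 - same) / 3.0

def branch_length(node):
    # leaf nodes are ('leaf', name, t); internal nodes are ('node', left, right, t)
    return node[2] if node[0] == 'leaf' else node[3]

def partials(node, column):
    """Felsenstein pruning: the vector of partial likelihoods at this
    node (one entry per base), conditioned on the tips below it."""
    if node[0] == 'leaf':
        return [1.0 if b == column[node[1]] else 0.0 for b in BASES]
    _, left, right, _ = node
    L, R = partials(left, column), partials(right, column)
    out = []
    for i in range(4):
        pl = sum(jc_prob(BASES[i], BASES[j], branch_length(left)) * L[j] for j in range(4))
        pr = sum(jc_prob(BASES[i], BASES[j], branch_length(right)) * R[j] for j in range(4))
        out.append(pl * pr)
    return out

# Two sequences joined at the root, both observed as G at this column:
tree = ('node', ('leaf', 'A', 0.1), ('leaf', 'B', 0.1), 0.0)
root = partials(tree, {'A': 'G', 'B': 'G'})
site_likelihood = sum(0.25 * x for x in root)   # uniform root base frequencies
print(site_likelihood)
```

The full likelihood of a tree is the product (in practice, the sum of logs) of such site likelihoods over all alignment columns; the efficiency gain comes from computing each subtree's vector once and reusing it as the recursion works toward the root.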
Bayesian inference
Bayesian inference can be used to produce phylogenetic trees in a manner closely related to the maximum likelihood methods. Bayesian methods assume a prior probability distribution of the possible trees, which may simply be the probability of any one tree among all the possible trees that could be generated from the data, or may be a more sophisticated estimate derived from the assumption that divergence events such as speciation occur as stochastic processes. The choice of prior distribution is a point of contention among users of Bayesian-inference phylogenetics methods.
Implementations of Bayesian methods generally use Markov chain Monte Carlo sampling algorithms, although the choice of move set varies; selections used in Bayesian phylogenetics include circularly permuting leaf nodes of a proposed tree at each step and swapping descendant subtrees of a random internal node between two related trees. The use of Bayesian methods in phylogenetics has been controversial, largely due to incomplete specification of the choice of move set, acceptance criterion, and prior distribution in published work. Bayesian methods are generally held to be superior to parsimony-based methods; they can be more prone to long-branch attraction than maximum likelihood techniques, although they are better able to accommodate missing data.
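The accept/reject logic these samplers share is ordinary Metropolis sampling. The sketch below applies it to a single scalar with a symmetric Gaussian proposal, standing in for the tree-rearrangement moves a real phylogenetics sampler would use; the target, an exponential prior on a branch length, is a toy assumption:

```python
import math
import random

def metropolis(log_post, x0, proposal_sd=0.5, steps=10000, seed=1):
    """Plain Metropolis sampling on one scalar parameter: propose a
    Gaussian perturbation and accept with probability min(1, posterior
    ratio). In a phylogenetics sampler the proposal would instead be a
    move through tree space (e.g. a subtree swap)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, proposal_sd)
        lq = log_post(y)
        if math.log(rng.random()) < lq - lp:   # accept with prob min(1, ratio)
            x, lp = y, lq
        samples.append(x)
    return samples

# Toy target: an exponential(1) prior on a branch length (zero density below 0)
log_post = lambda t: -t if t > 0 else -math.inf
draws = metropolis(log_post, 1.0)
print(sum(draws) / len(draws))   # should settle near 1, the exponential mean
```

The controversies mentioned above concern exactly the pieces this sketch makes explicit: the proposal (move set), the acceptance criterion, and the prior encoded in log_post.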
Model selection
Molecular phylogenetics methods rely on a defined substitution model that encodes a hypothesis about the relative rates of mutation at various sites along the gene or amino acid sequences being studied. At their simplest, substitution models aim to correct for differences in the rates of transitions and transversions in nucleotide sequences. The use of substitution models is necessitated by the fact that the genetic distance between two sequences increases linearly only for a short time after the two sequences diverge from each other (alternatively, the distance is linear only shortly before coalescence). The longer the amount of time after divergence, the more likely it becomes that two mutations occur at the same nucleotide site. Simple genetic distance calculations will thus undercount the number of mutation events that have occurred in evolutionary history. The extent of this undercount increases with increasing time since divergence, which can lead to the phenomenon of long branch attraction, or the misassignment of two distantly related but convergently evolving sequences as closely related. The maximum parsimony method is particularly susceptible to this problem due to its explicit search for a tree representing a minimum number of distinct evolutionary events.
Types of models
All substitution models assign a set of weights to each possible change of state represented in the sequence. The most common model types are implicitly reversible because they assign the same weight to, for example, a G>C nucleotide mutation as to a C>G mutation. The simplest possible model, the Jukes-Cantor model, assigns an equal probability to every possible change of state for a given nucleotide base. The rate of change between any two distinct nucleotides will be one-third of the overall substitution rate. More advanced models distinguish between transitions and transversions. The most general possible time-reversible model, called the GTR model, has six mutation rate parameters. An even more generalized model known as the general 12-parameter model breaks time-reversibility, at the cost of much additional complexity in calculating genetic distances that are consistent among multiple lineages. One possible variation on this theme adjusts the rates so that overall GC content - an important measure of DNA double helix stability - varies over time.
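The one-third relationship in the Jukes-Cantor model can be made explicit by writing out its instantaneous rate matrix:

```python
def jc_rate_matrix(mu=1.0):
    """Jukes-Cantor instantaneous rate matrix: every off-diagonal
    change gets rate mu/3, so each row sums to zero and the total
    substitution rate away from any base is mu. The matrix is
    symmetric, which is what makes the model time-reversible."""
    off = mu / 3.0
    return [[-mu if i == j else off for j in range(4)] for i in range(4)]

Q = jc_rate_matrix()
for row in Q:
    print(row, " row sum:", sum(row))
```

Models such as Kimura's two-parameter model split the single off-diagonal rate into transition and transversion rates, and GTR generalizes further to six independent exchange rates.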
Models may also allow for the variation of rates with positions in the input sequence. The most obvious example of such variation follows from the arrangement of nucleotides in protein-coding genes into three-base codons. If the location of the open reading frame (ORF) is known, rates of mutation can be adjusted for position of a given site within a codon, since it is known that wobble base pairing can allow for higher mutation rates in the third nucleotide of a given codon without affecting the codon's meaning in the genetic code. A less hypothesis-driven example that does not rely on ORF identification simply assigns to each site a rate randomly drawn from a predetermined distribution, often the gamma distribution or log-normal distribution. Finally, a more conservative estimate of rate variations known as the covarion method allows autocorrelated variations in rates, so that the mutation rate of a given site is correlated across sites and lineages.
Choosing the best model
The selection of an appropriate model is critical for the production of good phylogenetic analyses, both because underparameterized or overly restrictive models may produce aberrant behavior when their underlying assumptions are violated, and because overly complex or overparameterized models are computationally expensive and the parameters may be overfit. The most common method of model selection is the likelihood ratio test (LRT), which produces a likelihood estimate that can be interpreted as a measure of "goodness of fit" between the model and the input data. However, care must be taken in using these results, since a more complex model with more parameters will always have a higher likelihood than a simplified version of the same model, which can lead to the naive selection of models that are overly complex. For this reason model selection computer programs will choose the simplest model that is not significantly worse than more complex substitution models. A significant disadvantage of the LRT is the necessity of making a series of pairwise comparisons between models; it has been shown that the order in which the models are compared has a major effect on the one that is eventually selected.
An alternative model selection method is the Akaike information criterion (AIC), formally an estimate of the Kullback-Leibler divergence between the true model and the model being tested. It can be interpreted as a likelihood estimate with a correction factor to penalize overparameterized models. The AIC is calculated on an individual model rather than a pair, so it is independent of the order in which models are assessed. A related alternative, the Bayesian information criterion (BIC), has a similar basic interpretation but penalizes complex models more heavily.
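Both criteria are simple functions of the maximized log-likelihood and the parameter count. In the sketch below the two model fits are made up purely to illustrate how BIC's k ln n penalty can reverse AIC's choice; the parameter counts are likewise illustrative:

```python
from math import log

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k ln n - 2 ln L. The k ln n term
    penalizes extra parameters more heavily than AIC whenever n > ~7."""
    return k * log(n) - 2 * log_likelihood

# Hypothetical fits of two substitution models to the same 1000-site alignment:
models = {"JC69": (-2450.0, 1), "GTR": (-2440.0, 9)}
n_sites = 1000
for name, (lnL, k) in models.items():
    print(f"{name}: AIC = {aic(lnL, k):.1f}  BIC = {bic(lnL, k, n_sites):.1f}")
```

With these (invented) numbers AIC prefers the richer GTR fit, while BIC's heavier penalty favors JC69 - the kind of disagreement that makes the choice of criterion itself a modeling decision.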
See also
- List of phylogenetics software
- Phylogenetic comparative methods
- Phylogenetic tree
- Microbial phylogenetics
- Evolutionary dynamics
- Strait DS, Grine FE. (2004). Inferring hominoid and early hominid phylogeny using craniodental characters: the role of fossil taxa. J Hum Evol 47(6):399-452.
- Hodge T, Cope MJ. (2000). A myosin family tree. J Cell Sci 113: 3353-3354.
- Mount DM. (2004). Bioinformatics: Sequence and Genome Analysis 2nd ed. Cold Spring Harbor Laboratory Press: Cold Spring Harbor, NY.
- Felsenstein J. (2004). Inferring Phylogenies Sinauer Associates: Sunderland, MA.
- Swiderski DL, Zelditch ML, Fink WL. (1998). Why morphometrics is not special: coding quantitative data for phylogenetic analysis. 47(3):508-19.
- Gaubert P, Wozencraft WC, Cordeiro-Estrela P, Veron G. (2005). Mosaics of convergences and noise in morphological phylogenies: what's in a viverrid-like carnivoran? Syst Biol 54(6):865-94.
- Wiens JJ. (2001). Character analysis in morphological phylogenetics: problems and solutions. Syst Biol 50(5):689-99.
- Jenner RA. (2001). Bilaterian phylogeny and uncritical recycling of morphological data sets. Syst Biol 50(5): 730-743.
- Fitch WM, Margoliash E. (1967). Construction of phylogenetic trees. Science 155: 279-84.
- Lespinats S, Grando D, Maréchal E, Hakimi MA, Tenaillon O, Bastien O. (2011). How Fitch-Margoliash Algorithm can benefit from Multi Dimensional Scaling. Evolutionary Bioinformatics 7:61-85.
- Day, WHE. (1986). Computational complexity of inferring phylogenies from dissimilarity matrices. Bulletin of Mathematical Biology 49:461-7.
- Hendy MD, Penny D. (1982). Branch and bound algorithms to determine minimal evolutionary trees. Math Biosci 60: 133-42.
- Ratner VA, Zharkikh AA, Kolchanov N, Rodin S, Solovyov S, Antonov AS. (1995). Molecular Evolution Biomathematics Series Vol 24. Springer-Verlag: New York, NY.
- Sankoff D, Morel C, Cedergren RJ. (1973). Evolution of 5S RNA and the non-randomness of base replacement. Nature New Biology 245:232-4.
- Wheeler WC, Gladstein DG. (1994). MALIGN: a multiple nucleic acid sequence alignment program. J Heredity 85: 417-18.
- Simmons MP. (2004). Independence of alignment and tree search. Mol Phylogenet Evol 31(3):874-9.
- Mau B, Newton MA. (1997). Phylogenetic inference for binary data on dendrograms using Markov chain Monte Carlo. J Comp Graph Stat 6:122-31.
- Yang Z, Rannala B. (1997). Bayesian phylogenetic inference using DNA sequences: a Markov chain Monte Carlo method. Mol Biol Evol 46:409-18.
- Kolaczkowski, B.; Thornton, J. W. (2009). "Long-Branch Attraction Bias and Inconsistency in Bayesian Phylogenetics". In Delport, Wayne. PLoS ONE 4 (12): e7891. doi:10.1371/journal.pone.0007891. PMC 2785476. PMID 20011052.
- Simmons, M. P. (2012). "Misleading results of likelihood-based phylogenetic analyses in the presence of missing data". Cladistics 28 (2): 208–222. doi:10.1111/j.1096-0031.2011.00375.x.
- Sullivan, Jack; Joyce, Paul (2005). "Model Selection in Phylogenetics". Annual Review of Ecology Evolution and Systematics 36 (1): 445. doi:10.1146/annurev.ecolsys.36.102003.152633.
- Galtier N, Gouy M. (1998). Inferring pattern and process: maximum-likelihood implementation of a nonhomogeneous model of DNA sequence evolution for phylogenetic analysis. Mol Biol Evol 15:871-79.
- Fitch WM, Markowitz E. (1970). An improved method for determining codon variability in a gene and its application to the rate of fixation of mutations in evolution. Biochemical Genetics 4:579-593.
- Pol D. (2004). Empirical problems of the hierarchical likelihood ratio test for model selection. Syst Biol 53:949-62.
Further reading
- Charles Semple and Mike Steel (2003), Phylogenetics, Oxford University Press, ISBN 978-0-19-850942-4
- Barry A. Cipra (2007), Algebraic Geometers See Ideal Approach to Biology, SIAM News, Volume 40, Number 6
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 16.4. Hierarchical Clustering by Phylogenetic Trees". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
Islam in Albania
During Ottoman rule, the majority of Albanians converted to Islam in its Sunni and Bektashi forms. However, decades of state atheism, which ended in 1991, brought a decline in religious practice in all traditions.
A recent Pew Research Center demographic study put the percentage of Muslims in Albania at 79.9%. However, a recent Gallup poll gives percentages of religious affiliations with only 43% Muslim, 19% Eastern Orthodox, 15% Catholic and 23% atheist or nonreligious. In the 2011 census the declared religious affiliation of the population was: 56.70% Muslims, 2.09% Bektashis, 10.03% Catholics, 6.75% Orthodox, 0.14% Evangelists, 0.07% other Christians, 5.49% believers without denomination, 2.50% Atheists, 13.79% undeclared.
Ottoman period
Islam came to Albania through Ottoman rule in the 14th century and confronted Christianity. In the north, the spread of Islam was slower due to resistance from the Roman Catholic Church and the mountainous terrain, both of which helped curb Muslim influence. In the center and south, however, by the end of the seventeenth century the urban centers had largely adopted the religion of the growing Albanian Muslim elite. The existence of an Albanian Muslim class of pashas and beys who played an increasingly important role in Ottoman political and economic life made conversion an attractive career option for most Albanians.
The Muslims of Albania were divided into two main communities: those associated with Sunni Islam and those associated with the Bektashi Sufis, a mystic Dervish order that came to Albania during the Ottoman period, primarily during the 18th and 19th centuries. The Bektashi sect is considered heretical by most mainstream Muslims. Historically Sunni Islam found its strongest base in northern and central Albania, while Bektashis were found primarily in the Tosk lands of the south.
During Ottoman rule the Albanian population gradually began to convert to Islam through the teachings of Bektashism, in order to gain considerable advantages in the Ottoman trade networks, bureaucracy and army. Many Albanians were recruited into the Ottoman Janissary and Devşirme and 42 Grand Viziers of the Ottoman Empire were of Albanian origin. The most prominent Albanians during Ottoman rule were: Davud Pasha, Hamza Kastrioti, Iljaz Hoxha, Nezim Frakulla, Köprülü Mehmed Pasha, Ali Pasha, Edhem Pasha, Haxhi Shehreti, Ali Pasha of Gucia, Ibrahim Pasha of Berat, Köprülü Fazıl Ahmed, Muhammad Ali of Egypt, Kara Mahmud Bushati, Ahmet Kurt Pasha.
The country won its independence from the Ottoman Empire in 1912. Following the National Renaissance tenets and the general lack of religious convictions, during the 20th century, the democratic, monarchic and later the communist regimes followed a systematic dereligionization of the nation and the national culture. Due to this policy, as all other faiths in the country, Islam underwent radical changes.
In 1923, following the government program, the Albanian Muslim congress convened at Tirana decided to break with the Caliphate, established a new form of prayer (standing, instead of the traditional salah ritual), banished polygamy and the mandatory use of veil (hijab) by women in public, practices forced on the urban population by the Ottomans.
The Muslim clergy, following suit with the Catholic and Orthodox clergy, was totally eradicated during the communist regime of Enver Hoxha who declared Albania the only non-religious country in the world, banning all forms of religious practice in public in 1967.
See also
- Miller, Tracy, ed. (October 2009), Mapping the Global Muslim Population: A Report on the Size and Distribution of the World’s Muslim Population (PDF), Pew Research Center, retrieved 2009-10-08
- Albanian census 2011
- John Hutchinson, Anthony D. Smith, "Nationalism: Critical Concepts in Political Science"
- Albania dispatch, Time magazine, April 14, 1923
- Official website of the OIC
- The Muslim Forum of Albania
- Albanian Institute of Islamic Thought & Civilization
- The Bektashi Community
- Muslim Albania
Temporal range: Pliocene–Recent
Palpigrades are tiny cousins of the uropygids, or whip scorpions, no more than 3 millimetres (0.12 in) in length, and averaging 1–1.5 mm (0.04–0.06 in). They have a thin, pale, segmented integument, and a segmented abdomen that terminates in a whip-like flagellum. This is made up of 15 segment-like parts, or "articles", and may make up as much as half the animal's length. Each article of the flagellum bears bristles, giving the whole flagellum the appearance of a bottle brush. The carapace is divided into two plates between the third and fourth pairs of legs. They have no eyes.
As in some other arachnids, the first pair of legs are modified to serve as sensory organs, and are held clear of the ground while walking. Unusually, however, palpigrades use their pedipalps for locomotion, so that the animal appears to be walking on five pairs of legs.
Some palpigrades have three pairs of abdominal lung-sacs, although these are not true book lungs as there is no trace of the characteristic leaflike lamellae which defines book lungs. However, many species have no respiratory organs at all and breathe directly through the cuticle.
Ecology and behaviour
Species of Palpigradi live interstitially in wet tropical and subtropical soils. A few species have been found in shallow coral sands and on tropical beaches. They need a damp environment to survive, and they always hide from light, so they are commonly found in the moist earth under buried stones and rocks. They can be found on every continent, except in Arctic and Antarctic regions. Terrestrial Palpigradi have hydrophobic cuticles, but littoral (beach-dwelling) species are able to pass through the water surface easily.
Very little is known about palpigrade behaviour. They are believed to be predators like their larger relatives, feeding on minuscule animals in their habitat. Their mating habits are unknown, except that they lay only a few relatively large eggs at a time.
By 2003, approximately 79 species of palpigrades had been described worldwide, in two families, containing a total of 7 genera. The two families are differentiated by the presence of ventral sacs on sternites IV–VI in Prokoeneniidae, and their absence in Eukoeneniidae.
A single fossil palpigrade species has been described from the Onyx Marble of Arizona, which is probably of Pliocene age. Its familial position is uncertain. Older publications refer to a fossil palpigrade (or palpigrade-like animal) from the Jurassic of the Solnhofen limestone in Germany, but this has now been shown to be a misidentified fossil insect.
See also
- Peter Ax (2000). "Palpigradi – Holotracheata". Multicellular animals. The phylogenetic system of the Metazoa. Volume II. Springer. pp. 120–121. ISBN 978-3-540-67406-1.
- James B. Nardi (2007). Life in the soil: a guide for naturalists and gardeners. Chicago Lectures in Mathematics Series. University of Chicago Press. ISBN 978-0-226-56852-2.
- Barnes, Robert D. (1982). Invertebrate Zoology. Philadelphia, PA: Saunders College. p. 614. ISBN 0-03-056747-5.
- Olav Geire (2009). "Palpigradi (Arachnidae)". Meiobenthology: the microscopic motile fauna of aquatic sediments. Springer. pp. 205–206. ISBN 978-3-540-68657-6.
- Mark S. Harvey (2003). "Order Palpigradi Thorell". Catalogue of the smaller arachnid orders of the world: Amblypygi, Uropygi, Schizomida, Palpigradi, Ricinulei and Solifugae. CSIRO Publishing. pp. 151–174. ISBN 978-0-643-06805-6.
- Joel Cracraft & Michael J. Donoghue (2004). "Palpigrades (Palpigradi)". Assembling the tree of life. Oxford University Press. p. 302. ISBN 978-0-19-517234-8.
- J. Mark Rowland & W. David Sissom (1980). "Report on a fossil palpigrade from the Tertiary of Arizona, and a review of the morphology and systematics of the order (Arachnida: Palpigradida)". Journal of Arachnology 8: 69–86.
- Haase, E. 1890. Beitrag zur Kenntniss der fossilen Arachniden. Zeitschrift der Deutsche geologische Gesellschaft, 1890: 629–657
- Xavier Delclòs, André Nel, Dany Azar, Günter Bechly, Jason A. Dunlop, Michael S. Engel & Sam W. Heads (2008). "The enigmatic Mesozoic insect taxon Chresmodidae (Polyneoptera): New palaeobiological and phylogenetic data, with the description of a new species from the Lower Cretaceous of Brazil" (PDF). Neues Jahrbuch für Geologie und Paläontologie, Abhandlungen 247: 353–381. doi:10.1127/0077-7749/2008/0247-0353.
In all modern states, some land is held by central or local governments. This is called public land. The system of tenure of public land, and the terminology used, varies between countries. The following examples illustrate some of the range.
Commonwealth countries
Portugal
In Portugal the land owned by the State, by the two autonomous regions (Azores and Madeira) and by the local governments (municipalities (Portuguese: municípios) and freguesias) can be of two types: public domain (Portuguese: domínio público) and private domain (Portuguese: domínio privado). The latter is owned like any private entity (and may be sold), while public domain land cannot be sold and is expected to be used by the public (although it can be leased to private entities for up to 75 years in certain cases). Examples of public domain land are the margins of the sea and of the rivers, roads, streets, railways, ports, military areas and monuments. The State's private domain is managed by the Direção-Geral do Tesouro e Finanças and the State's public domain is managed by various entities (state companies and state institutes, such as Agência Portuguesa do Ambiente, I.P., Estradas de Portugal, E.P.E., Refer - Rede Ferroviária Portuguesa, E.P.E., APL - Administração do Porto de Lisboa, S.A., etc.).
West Bank
Israeli land laws in the West Bank are based on Ottoman Empire law, under which land not worked for over ten years becomes 'state land'. This became the basis for deciding cases brought by Arabs when certain Israeli settlements were created on presumably barren land (see Halamish).
United States
In the United States, governmental entities including cities, counties, states, and the federal government all manage lands that are referred to as either public lands or the public domain.
The majority of public lands in the United States are held in trust for the American people by the federal government and managed by the Bureau of Land Management (BLM), the United States National Park Service, Bureau of Reclamation, or the Fish and Wildlife Service under the Department of the Interior, or the United States Forest Service under the Department of Agriculture. Other federal agencies that manage public lands include the National Oceanic and Atmospheric Administration and the United States Department of Defense, which includes the U.S. Army Corps of Engineers.
In general, Congress must legislate the creation of new public lands, such as national parks; however, under the 1906 Antiquities Act, the President may designate new national monuments without congressional authorization.
Each western state also received federal "public land" as trust lands designated for specific beneficiaries, which the states are to manage as a condition of acceptance into the union. Those trust lands can no longer be considered public lands, as allowing any benefits to the "public" would be a breach of loyalty to the specific beneficiaries. The trust lands (two sections, or about 1,280 acres (5.2 km2) per township) are usually managed extractively (grazing or mining) to provide revenue for public schools. All states have some lands under state management, such as state parks, state wildlife management areas, and state forests.
Wilderness is a special designation for public lands which have been completely undeveloped. The concept of wilderness areas was legislatively defined by the 1964 Wilderness Act. Wilderness areas can be managed by any of the above Federal agencies, and some parks and refuges are almost entirely designated wilderness. A wilderness study area is a tract of land that has wilderness characteristics, and is managed as wilderness, but has not received a wilderness designation from Congress.
Typically each parcel is governed by its own set of laws and rules that explain the purpose for which the land was acquired, and how the land may be used.
The private use of public lands continues to be a challenging issue in the United States. Environmental groups have used the Public Trust Doctrine to re-establish rights to common resources such as water in the arid West and in Hawaii. An expanded vision of the Public Trust Doctrine that includes soils, air and other species has also been argued for. Recently there have also been increasing efforts to privatize many public lands through land trades and other privatization schemes.
Recreation on U.S. public lands
Most state- and federally managed public lands are open for recreational use. Recreation opportunities depend on the managing agency, and run the gamut from the free-for-all, undeveloped wide open spaces of BLM lands to the highly developed and controlled national and state parks. Wildlife refuges and state wildlife management areas, managed primarily to improve habitat, are generally open to wildlife watching, hiking, and hunting, except for closures to protect mating and nesting, or to reduce stress on wintering animals. National forests generally have a mix of maintained trails and roads, wilderness and undeveloped portions, and developed picnic and camping areas.
In an attempt to present a balanced view of the history and uses of America's public lands, two teams trekked the US, from the Canadian and Mexican borders, in a project known as American Frontiers: A Public Lands Journey.
Grazing on U.S. public lands
Historically in the western United States, most public land is leased for grazing by cattle or sheep. This includes vast tracts of National Forest and BLM land, as well as land on Wildlife Refuges. National Parks are the exception. This use became controversial in the late 20th century as it was examined by environmentalists.
See also
- Public Lands Information Center: state and federal lands in the Western U.S.
- Recreation on Federal Lands: United States/ nationwide
- Public Land / Water Access Assoc.: Montana/ United States
- Our Public Lands: Western States/ United States
Further reading
- Nancy Ferguson, Sacred Cows at the Public Trough, Maverick Publications (December, 1983), trade paperback, ISBN 0-89288-091-0
- Hoberman, Haggai (2008). Keneged Kol HaSikuim [Against All Odds] (in Hebrew) (1st ed.). Sifriat Netzarim. p. 169.
- Western States Data Public Land Acreage
- Pages 14-73, "The Public Lands Debate", Sharman Apt Russell, Kill the Cowboy: A Battle of Mythology in the New West, Addison-Wesley (May, 1993), hardcover, 218 pages, ISBN 0-201-58123-X
Cartographer's ruling pen
A ruling pen is a drawing instrument for drawing with ink or with other drawing fluids.
A ruling pen contains ink in a slot between two flexible metal jaws, which are tapered to a point. It enables precise rendering of the thinnest lines. The line width can be adjusted by an adjustment screw connecting the jaws. The adjustment screw can optionally have a number dial.
Illustration of ruling pen use from A Textbook on Ornamental Design
Originally used for technical drawings in engineering and cartography together with straight rulers and French curves, it is today used for specific uses, such as picture framing or calligraphy.
See also
- ^ a b Cicale, Ann (2006). The Art & Craft of Hand Lettering. Sterling Publishing Company, Inc. p. 96. ISBN 978-1-57990-809-6.
- ^ Kirby, Richard Shelton (1918). The Fundamentals of Mechanical Drawing. J. Wiley & Sons, Inc. pp. 8–9.
Stereoscopy (also called stereoscopics or 3D imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from the Greek "στερεός" (stereos), "firm, solid" + "σκοπέω" (skopeō), "to look", "to see".
Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth. This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to increase information about the 3-dimensional objects being displayed by head and eye movements.
Stereoscopy creates the illusion of three-dimensional depth from given two-dimensional images. Human vision, including the perception of depth, is a complex process which only begins with the acquisition of visual information taken in through the eyes; much processing ensues within the brain, as it strives to make intelligent and meaningful sense of the raw information provided. One of the very important visual functions that occur within the brain as it interprets what the eyes see is that of assessing the relative distances of various objects from the viewer, and the depth dimension of those same perceived objects. The brain makes use of a number of cues to determine relative distances and depth in a perceived scene, including:
- Accommodation of the eye
- Overlapping of one object by another
- Subtended visual angle of an object of known size
- Linear perspective (convergence of parallel edges)
- Vertical position (objects higher in the scene generally tend to be perceived as further away)
- Haze, desaturation, and a shift to bluishness
- Change in size of textured pattern detail
(All the above cues, with the exception of the first two, are present in traditional two-dimensional images such as paintings, photographs, and television.)
Stereoscopy is the production of the illusion of depth in a photograph, movie, or other two-dimensional image by presenting a slightly different image to each eye, and thereby adding the first of these cues (stereopsis) as well. Both of the 2D offset images are then combined in the brain to give the perception of 3D depth. It is important to note that since all points in the image focus at the same plane regardless of their depth in the original scene, the second cue, focus, is still not duplicated and therefore the illusion of depth is incomplete. There are also primarily two effects of stereoscopy that are unnatural for the human vision: first, the mismatch between convergence and accommodation, caused by the difference between an object's perceived position in front of or behind the display or screen and the real origin of that light and second, possible crosstalk between the eyes, caused by imperfect image separation by some methods.
Although the term "3D" is ubiquitously used, it is also important to note that the presentation of dual 2D images is distinctly different from displaying an image in three full dimensions. The most notable difference is that, in the case of "3D" displays, the observer's head and eye movement will not increase information about the 3-dimensional objects being displayed. Holographic displays or volumetric display are examples of displays that do not have this limitation. Similar to the technology of sound reproduction, in which it is not possible to recreate a full 3-dimensional sound field merely with two stereophonic speakers, it is likewise an overstatement of capability to refer to dual 2D images as being "3D". The accurate term "stereoscopic" is more cumbersome than the common misnomer "3D", which has been entrenched after many decades of unquestioned misuse. Although most stereoscopic displays do not qualify as real 3D display, all real 3D displays are also stereoscopic displays because they meet the lower criteria as well.
Wheatstone originally used his stereoscope (a rather bulky device) with drawings because photography was not yet available, yet his original paper seems to foresee the development of a realistic imaging method:
For the purposes of illustration I have employed only outline figures, for had either shading or colouring been introduced it might be supposed that the effect was wholly or in part due to these circumstances, whereas by leaving them out of consideration no room is left to doubt that the entire effect of relief is owing to the simultaneous perception of the two monocular projections, one on each retina. But if it be required to obtain the most faithful resemblances of real objects, shadowing and colouring may properly be employed to heighten the effects. Careful attention would enable an artist to draw and paint the two component pictures, so as to present to the mind of the observer, in the resultant perception, perfect identity with the object represented. Flowers, crystals, busts, vases, instruments of various kinds, &c., might thus be represented so as not to be distinguished by sight from the real objects themselves.
Stereoscopy is used in photogrammetry and also for entertainment through the production of stereograms. Stereoscopy is useful in viewing images rendered from large multi-dimensional data sets such as are produced by experimental data. An early patent for 3D imaging in cinema and television was granted to physicist Theodor V. Ionescu in 1936. Modern industrial three-dimensional photography may use 3D scanners to detect and record three-dimensional information. The three-dimensional depth information can be reconstructed from two images using a computer by corresponding the pixels in the left and right images. Solving the correspondence problem in the field of computer vision aims to create meaningful depth information from two images.
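For a rectified stereo pair, the depth reconstruction mentioned above follows the standard pinhole relation Z = f·B/d, where f is the focal length, B is the camera baseline, and d is the pixel disparity between the two views. A minimal sketch — the function name and all numeric values are illustrative, not from any particular library:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Depth of a scene point from its disparity between a rectified
    left/right image pair, using the pinhole model: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# A point shifted 35 px between views, with f = 700 px and a 65 mm baseline,
# lies about 1.3 m from the cameras:
print(round(depth_from_disparity(700, 0.065, 35), 3))
```

The hard part in practice is the correspondence matching itself (deciding which left-image pixel pairs with which right-image pixel); this sketch assumes the disparity is already known.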
Visual requirements
Anatomically, there are 3 levels of binocular vision required to view stereo images:
- Simultaneous perception
- Fusion (binocular 'single' vision)
- Stereopsis
These functions develop in early childhood. In some people, strabismus disrupts the development of stereopsis; however, orthoptic treatment can be used to improve binocular vision. A person's stereoacuity determines the minimum image disparity they can perceive as depth. It is believed that approximately 12% of people are unable to properly see 3D images, due to a variety of medical conditions. According to another experiment, up to 30% of people have very weak stereoscopic vision, preventing them from depth perception based on stereo disparity. This nullifies or greatly decreases the immersive effect of stereo for them.
Traditional stereoscopic photography consists of creating a 3D illusion starting from a pair of 2D images, a stereogram. The easiest way to enhance depth perception in the brain is to provide the eyes of the viewer with two different images, representing two perspectives of the same object, with a minor deviation equal or nearly equal to the perspectives that both eyes naturally receive in binocular vision.
If eyestrain and distortion are to be avoided, each of the two 2D images preferably should be presented to each eye of the viewer so that any object at infinite distance seen by the viewer should be perceived by that eye while it is oriented straight ahead, the viewer's eyes being neither crossed nor diverging. When the picture contains no object at infinite distance, such as a horizon or a cloud, the pictures should be spaced correspondingly closer together.
The principal advantages of side-by-side viewers is that there is no diminution of brightness so images may be presented at very high resolution and in full spectrum color. The side-by-side method is simple to create. Little or no additional image processing is required. Under some circumstances, such as when a pair of images is presented for crossed or parallel eye viewing, no device or additional optical equipment is needed. But it can be difficult or uncomfortable to view without optical aids.
Freeviewing is viewing a side-by-side image without using a viewer.
- The parallel view method uses two images with not more than 65 mm between corresponding image points, the average distance between the two eyes. The viewer looks through the image while keeping the lines of sight parallel; this can be difficult with normal vision, since eye focus and binocular convergence normally work together.
- The cross-eyed view method uses the right and left images exchanged and views the images cross-eyed with the right eye viewing the left image and vice-versa. Prismatic, self-masking glasses are now being used by cross-view advocates. These reduce the degree of convergence and allow large images to be displayed.
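The two freeviewing layouts differ only in the order of the half-images: parallel viewing keeps the left view on the left, while cross-eyed viewing swaps the two. A minimal sketch, treating images as nested lists of pixel rows (the helper name is made up for illustration):

```python
def side_by_side(left, right, method="parallel"):
    """Join two equal-height images row by row for freeviewing.
    'parallel': left view on the left; 'cross': views swapped, so the
    right eye looks at the left-hand image and vice versa."""
    if method == "cross":
        left, right = right, left
    elif method != "parallel":
        raise ValueError("method must be 'parallel' or 'cross'")
    return [lrow + rrow for lrow, rrow in zip(left, right)]

L = [["L1", "L2"], ["L3", "L4"]]
R = [["R1", "R2"], ["R3", "R4"]]
print(side_by_side(L, R)[0])           # ['L1', 'L2', 'R1', 'R2']
print(side_by_side(L, R, "cross")[0])  # ['R1', 'R2', 'L1', 'L2']
```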
An autostereogram is a single-image stereogram (SIS), designed to create the visual illusion of a three-dimensional (3D) scene within the human brain from an external two-dimensional image. In order to perceive 3D shapes in these autostereograms, one must overcome the normally automatic coordination between focusing and vergence.
Stereoscope and stereographic cards
The stereoscope is essentially an instrument in which two photographs of the same object, taken from slightly different angles, are simultaneously presented, one to each eye. A simple stereoscope is limited in the size of the image that may be used. A more complex stereoscope uses a pair of horizontal periscope-like devices, allowing the use of larger images that can present more detailed information in a wider field of view.
Transparency viewers
Pairs of stereo views are printed on translucent film which is then mounted around the edge of a cardboard disk, images of each pair being diametrically opposite. An advantage offered by transparency viewing is that a wider field of view may be presented since images, being illuminated from the rear, may be placed much closer to the lenses. The practice of viewing film-based transparencies in stereo via a viewer dates to at least as early as 1931, when Tru-Vue began to market filmstrips that were fed through a handheld device made from Bakelite. In the 1940s, a modified and miniaturized variation of this technology was introduced as the View-Master.
Head-mounted displays
The user typically wears a helmet or glasses with two small LCD or OLED displays with magnifying lenses, one for each eye. The technology can be used to show stereo films, images or games, but it can also be used to create a virtual display. Head-mounted displays may also be coupled with head-tracking devices, allowing the user to "look around" the virtual world by moving their head, eliminating the need for a separate controller. Performing this update quickly enough to avoid inducing nausea in the user requires a great amount of computer image processing. If six-axis position sensing (direction and position) is used, then the wearer may move about within the limitations of the equipment used. Owing to rapid advancements in computer graphics and the continuing miniaturization of video and other equipment, these devices are beginning to become available at more reasonable cost.
Head-mounted or wearable glasses may be used to view a see-through image imposed upon the real world view, creating what is called augmented reality. This is done by reflecting the video images through partially reflective mirrors. The real world view is seen through the mirrors' reflective surface. Experimental systems have been used for gaming, where virtual opponents may peek from real windows as a player moves about. This type of system is expected to have wide application in the maintenance of complex systems, as it can give a technician what is effectively "x-ray vision" by combining computer graphics rendering of hidden elements with the technician's natural vision. Additionally, technical data and schematic diagrams may be delivered to this same equipment, eliminating the need to obtain and carry bulky paper documents.
Virtual retinal displays
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), not to be confused with a "Retina Display", is a display technology that draws a raster display (like a television) directly onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them.
3D viewers
There are two categories of 3D viewer technology, active and passive. Active viewers have electronics which interact with a display.
Shutter systems
A Shutter system works by openly presenting the image intended for the left eye while blocking the right eye's view, then presenting the right-eye image while blocking the left eye, and repeating this so rapidly that the interruptions do not interfere with the perceived fusion of the two images into a single 3D image. It generally uses liquid crystal shutter glasses. Each eye's glass contains a liquid crystal layer which has the property of becoming dark when voltage is applied, being otherwise transparent. The glasses are controlled by a timing signal that allows the glasses to alternately darken over one eye, and then the other, in synchronization with the refresh rate of the screen.
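The alternation can be pictured as a simple timing schedule: on each display refresh, one eye's shutter is open while the other is darkened, so each eye effectively sees half the refresh rate. A toy model of this timing (not any vendor's API; the 120 Hz figure is illustrative):

```python
def shutter_schedule(refresh_hz, n_frames):
    """Which eye's shutter is open on each display refresh, with the
    time (in seconds) at which that refresh starts. On a 120 Hz panel
    each eye receives 60 images per second."""
    frame_time = 1.0 / refresh_hz
    return [("left" if i % 2 == 0 else "right", round(i * frame_time, 6))
            for i in range(n_frames)]

for eye, t in shutter_schedule(120, 4):
    print(f"{t:.6f}s  {eye} eye open")
```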
Polarization systems
To present stereoscopic pictures, two images are projected superimposed onto the same screen through polarizing filters or presented on a display with polarized filters. For projection, a silver screen is used so that polarization is preserved. The viewer wears low-cost eyeglasses which also contain a pair of opposite polarizing filters. As each filter only passes light which is similarly polarized and blocks the opposite polarized light, each eye only sees one of the images, and the effect is achieved.
Interference filter systems
This technique uses specific wavelengths of red, green, and blue for the right eye, and different wavelengths of red, green, and blue for the left eye. Eyeglasses which filter out the very specific wavelengths allow the wearer to see a full color 3D image. It is also known as spectral comb filtering, wavelength multiplex visualization, or super-anaglyph. Dolby 3D uses this principle. The Omega 3D/Panavision 3D system also used an improved version of this technology. In June 2012 the Omega 3D/Panavision 3D system was discontinued by DPVO Theatrical, who marketed it on behalf of Panavision, citing "challenging global economic and 3D market conditions". Although DPVO dissolved its business operations, Omega Optical continues promoting and selling 3D systems to non-theatrical markets. Omega Optical's 3D system contains projection filters and 3D glasses. In addition to the passive stereoscopic 3D system, Omega Optical has produced enhanced anaglyph 3D glasses. The Omega red/cyan anaglyph glasses use complex metal oxide thin film coatings and high quality annealed glass optics.
Color anaglyph systems
Anaglyph 3D is the name given to the stereoscopic 3D effect achieved by means of encoding each eye's image using filters of different (usually chromatically opposite) colors, typically red and cyan. Anaglyph 3D images contain two differently filtered colored images, one for each eye. When viewed through the "color-coded" "anaglyph glasses", each of the two images reaches one eye, revealing an integrated stereoscopic image. The visual cortex of the brain fuses this into perception of a three dimensional scene or composition.
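The red/cyan encoding itself is a per-pixel channel mix: the red channel of each output pixel comes from the left-eye view, and the green and blue (cyan) channels from the right-eye view. A minimal sketch on images represented as nested lists of (r, g, b) tuples — real implementations work on image arrays, but the channel logic is the same:

```python
def red_cyan_anaglyph(left, right):
    """Build a red/cyan anaglyph: red from the left-eye image,
    green and blue (cyan) from the right-eye image."""
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

left_img = [[(200, 10, 10)]]    # mostly red pixel from the left view
right_img = [[(10, 180, 220)]]  # cyan-ish pixel from the right view
print(red_cyan_anaglyph(left_img, right_img)[0][0])  # (200, 180, 220)
```

Viewed through red/cyan glasses, the red filter passes only the left-derived channel to one eye and the cyan filter passes only the right-derived channels to the other.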
Chromadepth system
The ChromaDepth procedure of American Paper Optics is based on the fact that colors are separated by varying degrees when passing through a prism. The ChromaDepth eyeglasses contain special viewing foils consisting of microscopically small prisms, which shift the image by an amount that depends on its color. If a prism foil is placed in front of one eye but not the other, the two perceived images are separated to a degree that depends on color, and the brain produces the spatial impression from this difference. The chief advantage of this technique is that ChromaDepth pictures can also be viewed without eyeglasses, simply as two-dimensional images, without problems (unlike two-color anaglyphs). However, the colors can be chosen only within limits, since they carry the depth information of the picture: changing the color of an object also changes its perceived distance.
Pulfrich method
The Pulfrich effect is based on the phenomenon of the human eye processing images more slowly when there is less light, as when looking through a dark lens. Because the Pulfrich effect depends on motion in a particular direction to instigate the illusion of depth, it is not useful as a general stereoscopic technique. For example, it cannot be used to show a stationary object apparently extending into or out of the screen; similarly, objects moving vertically will not be seen as moving in depth. Incidental movement of objects will create spurious artifacts, and these incidental effects will be seen as artificial depth not related to actual depth in the scene.
Over/under format
Stereoscopic viewing is achieved by placing an image pair one above the other. Special viewers made for the over/under format tilt the right line of sight slightly up and the left line of sight slightly down. The most common viewer using mirrors is the View Magic. Another, using prismatic glasses, is the KMQ viewer. A recent use of this technique is the openKMQ project.
Other display methods without viewers
Autostereoscopic display technologies use optical components in the display, rather than worn by the user, to enable each eye to see a different image. Because headgear is not required, it is also called "glasses-free 3D". The optics split the images directionally into the viewer's eyes, so the display viewing geometry requires limited head positions that will achieve the stereoscopic effect. Automultiscopic displays provide multiple views of the same scene, rather than just two. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left-right in front of the display and see the correct view from any position. The technology includes two broad classes of displays: those that use head-tracking to ensure that each of the viewer's two eyes sees a different image on the screen, and those that display multiple views so that the display does not need to know where the viewers' eyes are directed. Examples of autostereoscopic displays technology include lenticular lens, parallax barrier, volumetric display, holography and light field displays.
Research into holographic displays has produced devices which are able to create a light field identical to that which would emanate from the original scene, with both horizontal and vertical parallax across a large range of viewing angles. The effect is similar to looking through a window at the scene being reproduced; this may make computer-generated holography (CGH) the most convincing of the 3D display technologies, but as yet the large amounts of calculation required to generate a detailed hologram largely prevent its application outside the laboratory.
Volumetric displays
Volumetric displays use some physical mechanism to display points of light within a volume. Such displays use voxels instead of pixels. Volumetric displays include multiplanar displays, which have multiple display planes stacked up, and rotating panel displays, where a rotating panel sweeps out a volume.
Other technologies have been developed to project light dots in the air above a device. An infrared laser is focused on the destination in space, generating a small bubble of plasma which emits visible light.
Integral imaging
Integral imaging is an autostereoscopic or multiscopic 3D display, meaning that it displays a 3D image without the use of special glasses on the part of the viewer. It achieves this by placing an array of microlenses (similar to a lenticular lens) in front of the image, where each lens looks different depending on viewing angle. Thus rather than displaying a 2D image that looks the same from every direction, it reproduces a 4D light field, creating stereo images that exhibit parallax when the viewer moves.
Wiggle stereography
Wiggle stereoscopy is an image display technique achieved by quickly alternating the display of the left and right sides of a stereogram. It is often found in animated GIF format on the web; online examples are visible in the New York Public Library stereogram collection. The technique is also known as "Piku-Piku".
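Since wiggle stereoscopy just alternates the two views in a loop, the frame sequence for such an animation can be sketched in a few lines (the file names are placeholders; actually writing the GIF would be done with an imaging library):

```python
def wiggle_frames(left, right, cycles=3):
    """Frame sequence for a wiggle-stereoscopy loop: the left and right
    views simply alternate, so the apparent parallax motion hints at
    depth without any glasses."""
    return [left, right] * cycles

print(wiggle_frames("view_left.png", "view_right.png", cycles=2))
```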
Stereo photography techniques
Film photography
It is necessary to take two photographs for a stereoscopic image. This can be done with two cameras, with one camera moved quickly to two positions, or with a stereo camera incorporating two or more side-by-side lenses.
In the 1950s, stereoscopic photography regained popularity when a number of manufacturers began introducing stereoscopic cameras to the public. The new cameras were developed to use 135 film, which had gained popularity after the close of World War II. Many of the conventional cameras used the film for 35 mm transparency slides, and the new stereoscopic cameras utilized the film to make stereoscopic slides. The Stereo Realist camera was the most popular, and its 5P picture format became a standard. The stereoscopic cameras were marketed with special viewers that allowed for the use of such slides. With these cameras the public could easily create their own stereoscopic memories. Although their popularity has waned, some of these cameras are still in use today.
The 1980s saw a minor revival of stereoscopic photography when point-and-shoot stereo cameras were introduced. Most of these cameras suffered from poor optics and plastic construction, and were designed to produce lenticular prints, a format which never gained wide acceptance, so they never achieved the popularity of the 1950s stereo cameras.
Digital photography
The beginning of the 21st century marked the coming of the age of digital photography. Stereo lenses were introduced which could turn an ordinary film camera into a stereo camera, using a special double lens to take two images and direct them through a single lens so that they are captured side by side on the film. Although current digital stereo cameras cost hundreds of dollars, cheaper models also exist, for example those produced by the company Loreo. It is also possible to create a twin-camera rig by mounting two cameras on a bracket, spaced slightly apart, together with a "shepherd" device to synchronize the shutters and flashes of the two cameras. Newer cameras are even being used to shoot "step video" 3D slide shows, with many pictures viewed in sequence almost like a 3D motion picture. A modern camera can take ten pictures per second, with images that greatly exceed HDTV resolution.
If anything is in motion within the field of view, it is necessary to take both images at once, either through use of a specialized two-lens camera, or by using two identical cameras, operated as close as possible to the same moment.
A single camera can also be used if the subject remains perfectly still (such as an object in a museum display). Two exposures are required. The camera can be moved on a sliding bar for offset, or with practice, the photographer can simply shift the camera while holding it straight and level. This method of taking stereo photos is sometimes referred to as the "Cha-Cha" or "Rock and Roll" method. It is also sometimes referred to as the "astronaut shuffle" because it was used to take stereo pictures on the surface of the moon using normal monoscopic equipment.
For the most natural-looking stereo, most stereographers move the camera about 65mm, the distance between the eyes, but some experiment with other distances. A good rule of thumb is to shift sideways 1/30th of the distance to the closest subject for side-by-side display, or just 1/60th if the image is also to be used for color anaglyph or anachrome display. For example, when enhanced depth beyond natural vision is desired and a photo is being taken of a person standing thirty feet in front of a house, the camera should be moved 1 foot between shots.
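The rule of thumb above can be expressed as a short calculation. This is a minimal sketch; the function name and parameters are illustrative, not from any standard library:

```python
def stereo_base(nearest_subject_distance, anaglyph=False):
    """Rule-of-thumb camera shift for a stereo pair.

    Uses 1/30 of the distance to the closest subject for
    side-by-side display, or 1/60 if the pair may also be
    shown as a color anaglyph or anachrome image.
    """
    divisor = 60 if anaglyph else 30
    return nearest_subject_distance / divisor

# The person thirty feet away, as in the example above:
print(stereo_base(30.0))        # 1.0 foot shift for side-by-side viewing
print(stereo_base(30.0, True))  # 0.5 foot, halved for anaglyph use
```

The same function works in any unit, since the rule is a pure ratio of baseline to subject distance.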
The stereo effect is not significantly diminished by slight pan or rotation between images. In fact, slight rotation inwards (also called 'toe-in') can be beneficial. Bear in mind that both images should show the same objects in the scene (just from different angles): if a tree is on the edge of one image but out of view in the other, it will appear in a ghostly, semi-transparent way to the viewer, which is distracting and uncomfortable. Therefore, the images are either cropped so they completely overlap, or the cameras are toed in so that the images completely overlap without having to discard any of the image area. However, too much toe-in can cause keystone distortion ('keystoning') and eye strain.
Digital stereo bases (baselines)
Consumer (non-professional) 3D digital cameras, used for video and also for stills, are available with a range of stereo bases (the distance between the two camera lenses):
10 mm Panasonic 3D Lumix H-FT012 lens (for the GH2, GF2, GF3, GF5 cameras and also for the hybrid W8 camera).
12 mm Praktica and Medion 3D (two clones of the DXG-5D8 cam).
20 mm Sony Bloggie 3D.
23 mm Loreo 3D Macro lens.
25 mm LG Optimus 3D and LG Optimus 3D MAX smartphones and the close-up macro adapter for the W1 and W3 Fujifilm cams.
28 mm Sharp Aquos SH80F smartphone and the Toshiba Camileo z100 camcorder.
30 mm Panasonic 3D1 camera.
32 mm HTC EVO 3D smartphone.
35 mm JVC TD1, DXG-5G2V and Vivitar 790 HD (anaglyph stills and video only) camcorders.
40 mm Aiptek I2, Aiptek IS2, Aiptek IH3 and Viewsonic 3D cams.
50 mm Loreo for full frame cams, and the 3D FUN cam of 3dInlife.
55 mm SVP dc-3D-80 cam (parallel & anaglyph, stills & video).
60 mm Vivitar 3D cam (anaglyph pictures only).
75 mm Fujifilm W3 cam.
77 mm Fujifilm W1 cam.
88 mm Loreo 3D lens for digital cams.
140 mm Cyclopital3D base extender for the JVC TD1 and Sony TD10.
200 mm Cyclopital3D base extender for the Panasonic AG-3DA1.
225 mm Cyclopital3D base extender for the Fujifilm W1 and W3 cams.
Base line selection
For general purpose stereo photography, where the goal is to duplicate natural human vision and give a visual impression as close as possible to actually being there, the correct baseline (distance between where the right and left images are taken) would be the same as the distance between the eyes. When images taken with such a baseline are viewed using a viewing method that duplicates the conditions under which the picture is taken then the result would be an image pretty much the same as what would be seen at the site the photo was taken. This could be described as "ortho stereo."
An example would be the Realist format that was so popular in the late 1940s to mid-1950s and is still being used by some today. When these images are viewed using high quality viewers, or seen with a properly set up projector, the impression is, indeed, very close to being at the site of photography.
The baseline used in such cases will be about 50mm to 80mm. This is what is generally referred to as a "normal" baseline, used in most stereo photography. There are, however, situations where it might be desirable to use a longer or shorter baseline. The factors to consider include the viewing method to be used and the goal in taking the picture. Note that the concept of baseline also applies to other branches of stereography, such as stereo drawings and computer generated stereo images, but it involves the point of view chosen rather than actual physical separation of cameras or lenses.
Longer base line for distant objects "Hyper Stereo"
If a stereo picture is taken of a large, distant object such as a mountain or a large building using a normal base, it will appear to be flat. This is in keeping with normal human vision; it would look flat if one were actually there. But if the object looks flat, there doesn't seem to be any point in taking a stereo picture, as it will simply seem to sit behind a stereo window with no depth in the scene itself, much like looking at a flat photograph from a distance.
One way of dealing with this situation is to include a foreground object to add depth interest and enhance the feeling of "being there", and this is the advice commonly given to novice stereographers. Caution must be used, however, to ensure that the foreground object is not too prominent, and appears to be a natural part of the scene, otherwise it will seem to become the subject with the distant object being merely the background. In cases like this, if the picture is just one of a series with other pictures showing more dramatic depth, it might make sense just to leave it flat, but behind a window.
For making stereo images featuring only a distant object (e.g., a mountain with foothills), the camera positions can be separated by a larger distance (called the "interaxial" or stereo base, often mistakenly called "interocular") than the adult human norm of 62–65mm. This will effectively render the captured image as though it was seen by a giant, and thus will enhance the depth perception of these distant objects, and reduce the apparent scale of the scene proportionately. However, in this case care must be taken not to bring objects in the close foreground too close to the viewer, as they will show excessive parallax and can complicate stereo window adjustment.
There are two main ways to accomplish this. One is to use two cameras separated by the required distance, the other is to shift a single camera the required distance between shots.
The shift method has been used with cameras such as the Stereo Realist to take hypers, either by taking two pairs and selecting the best frames, or by alternately capping each lens and recocking the shutter.
It is also possible to take hyperstereo pictures using an ordinary single lens camera aiming out an airplane. One must be careful, however, about movement of clouds between shots.
It has even been suggested that a version of hyperstereo could be used to help pilots fly planes.
In such situations, where an ortho stereo viewing method is used, a common rule of thumb is the 1:30 rule. This means that the baseline will be equal to 1/30 of the distance to the nearest object included in the photograph.
This technique can be applied to 3D imaging of the Moon: one picture is taken at moonrise and the other at moonset, as the face of the Moon is centered towards the center of the Earth and the diurnal rotation carries the photographer around its perimeter. However, the results are rather poor, and much better results can be obtained using alternative techniques.
This is why high quality published stereos of the moon are done using libration, the slight "wobbling" of the moon on its axis relative to the earth. Similar techniques were used late in the 19th century to take stereo views of Mars and other astronomical subjects.
Limitations of hyperstereo
Vertical alignment can become a big problem, especially if the terrain on which the two camera positions are placed is uneven.
Movement of objects in the scene can make syncing two widely separated cameras a nightmare. When a single camera is moved between two positions even subtle movements such as plants blowing in the wind and the movement of clouds can become a problem. The wider the baseline, the more of a problem this becomes.
Pictures taken in this fashion take on the appearance of a miniature model, taken from a short distance, and those not familiar with such pictures often cannot be convinced that it is the real object. This is because we cannot see depth when looking at such scenes in real life and our brains aren't equipped to deal with the artificial depth created by such techniques, and so our minds tell us it must be a smaller object viewed from a short distance, which would have depth. Though most eventually realize it is, indeed, an image of a large object from far away, many find the effect bothersome. This doesn't rule out using such techniques, but it is one of the factors that need to be considered when deciding whether or not such a technique should be used.
In movies and other forms of "3D" entertainment, hyperstereo may be used to simulate the viewpoint of a giant, with eyes a hundred feet apart. The miniaturization would be just what the photographer (or designer in the case of drawings/computer generated images) had in mind. On the other hand, in the case of a massive ship flying through space the impression that it is a miniature model is probably not what the film makers intended!
Hyper stereo can also lead to cardboarding, an effect that creates stereos in which different objects seem well separated in depth, but the objects themselves seem flat. This is because parallax is quantized.
Illustration of the limits of parallax multiplication (ortho viewing method assumed; the line described here represents the Z axis, laid flat and stretching into the distance). Suppose the camera is at X. Point A is on an object at 30 feet; point B is on an object at 200 feet; point C is on the same object but 1 inch behind B; point D is on an object 250 feet away. With a normal baseline, point A is clearly in the foreground, with B, C, and D all at stereo infinity. With a one-foot baseline, which multiplies the parallax, there is enough parallax to separate all four points, though the depth in the object containing B and C is still subtle. If this object is the main subject, a baseline of 6 feet 8 inches might be considered, but then the object at A would need to be cropped out. Now suppose the camera is at point Y: the object at A is at 2,000 feet, point B is on an object at 2,170 feet, point C is on the same object 1 inch behind B, and point D is on an object at 2,220 feet. With a normal baseline, all four points are now at stereo infinity. With a 67-foot baseline, the multiplied parallax shows that the three objects are on different planes, yet points B and C, on the same object, appear to be on the same plane, and all three objects appear flat. This is because parallax comes in discrete units: at 2,170 feet the parallax between B and C is zero, and zero multiplied by any number is still zero.
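The quantization effect can be illustrated numerically. The sketch below is an assumption-laden toy model, not a photographic formula from the text: it treats disparity as proportional to baseline over distance and rounds it to whole pixels (the illustrative `focal_px` value stands in for focal length expressed in pixels). With the camera far away, two points an inch apart on the same object round to the same pixel disparity, so the object renders flat:

```python
def disparity_px(baseline_ft, distance_ft, focal_px=1000):
    """Approximate stereo disparity, rounded to whole pixels.

    Disparity is proportional to baseline / distance; focal_px is
    an illustrative scale factor, not a measured camera parameter.
    """
    return round(baseline_ft * focal_px / distance_ft)

# Camera at Y: objects at 2,000 ft, 2,170 ft, 2,170 ft + 1 inch, 2,220 ft,
# with the 67-foot baseline from the example above.
for d in [2000.0, 2170.0, 2170.0 + 1 / 12, 2220.0]:
    print(d, disparity_px(67.0, d))
# Points B and C (2,170 ft apart by one inch) print the same disparity,
# while A and D land on separate planes.
```

The three objects separate into distinct depth planes, but the one-inch depth within the object at B/C vanishes below the quantization step.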
A practical example
In the red-cyan anaglyph example below, a ten-meter baseline atop the roof ridge of a house was used to image the mountain. The two foothill ridges are about four miles (6.5 km) distant and are separated in depth from each other and the background. The baseline is still too short to resolve the depth of the two more distant major peaks from each other. Owing to various trees that appeared in only one of the images the final image had to be severely cropped at each side and the bottom.
In the wider image, taken from a different location, a single camera was walked about one hundred feet (30 m) between pictures. The images were converted to monochrome before combination.(below)
Shorter baseline for ultra closeups "Macro stereo"
Closeup stereo of a cake photographed using a Fuji W3, taken by backing off several feet and then zooming in.
When objects are photographed from closer than about 6 1/2 feet, a normal base will produce excessive parallax and thus exaggerated depth when using ortho viewing methods. At some point the parallax becomes so great that the image is difficult or even impossible to view. For such situations, it becomes necessary to reduce the baseline in keeping with the 1:30 rule.
When still life scenes are stereographed, an ordinary single lens camera can be moved using a slide bar or similar method to generate a stereo pair. Multiple views can be taken and the best pair selected for the desired viewing method.
For moving objects, a more sophisticated approach is required. In the early 1970s, Realist, Inc. introduced the Macro Realist, designed to stereograph subjects 4 to 5 1/2 inches away for viewing in Realist-format viewers and projectors. It featured a 15mm base and fixed focus, and was invented by Clarence G. Henning.
In recent years cameras have been produced which are designed to stereograph subjects 10" to 20" using print film, with a 27mm baseline. Another technique, usable with fixed base cameras such as the Fujifilm FinePix Real 3D W1/W3 is to back off from the subject and use the zoom function to zoom to a closer view, such as was done in the image of a cake. This has the effect of reducing the effective baseline. Similar techniques could be used with paired digital cameras.
Another way to take images of very small objects, "extreme macro", is to use an ordinary flatbed scanner. This is a variation on the shift technique in which the object is turned upside down, placed on the scanner, scanned, moved over, and scanned again. This produces stereos of a range of objects, from about 6" across down to something as small as a carrot seed. This technique goes back to at least 1995. See the article Scanography for more details.
In stereo drawings and computer generated stereo images a smaller than normal baseline may be built into the constructed images to simulate a "bug's eye" view of the scene.
Baseline tailored to viewing method
The distance from which a picture is viewed determines the required separation between the cameras. This separation, called the stereo base or stereo baseline, follows from the ratio of the viewing distance to the distance between the eyes (usually about 2.5 inches). In any case, the farther the screen is viewed from, the more the image will pop out; the closer the screen is viewed from, the flatter it will appear. Personal anatomical differences can be compensated for by moving closer to or farther from the screen.
To provide close emulation of natural vision for images viewed on a computer monitor, a fixed stereo base of 6 cm might be appropriate, varying with the size of the monitor and the viewing distance. For hyper stereo, a ratio smaller than 1:30 could be used. For example, if a stereo image is to be viewed on a computer monitor from a distance of 1000 mm, there will be an eye-to-view ratio of 1000/63, or about 16. To set the cameras the appropriate distance apart for the desired effect, the distance to the subject (say a person 3 meters from the cameras) is divided by 16, which yields a stereo base of about 188 mm between the cameras.
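The worked figures above reduce to one line of arithmetic. A minimal sketch, with illustrative names (the text's 188 mm comes from first rounding the ratio to 16; the unrounded result is 189 mm):

```python
def baseline_mm(subject_distance_mm, viewing_distance_mm, eye_separation_mm=63.0):
    """Camera separation from the viewing geometry.

    ratio = viewing distance / eye separation; the stereo base is
    the subject distance divided by that ratio.
    """
    ratio = viewing_distance_mm / eye_separation_mm
    return subject_distance_mm / ratio

# Person 3 m from the cameras, monitor viewed from 1 m:
print(round(baseline_mm(3000.0, 1000.0)))  # 189 mm
```

Equivalently, base = subject distance × eye separation / viewing distance, which makes clear why a closer viewing distance calls for a wider camera separation.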
However, images optimized for a small screen viewed from a short distance will show excessive parallax when viewed with more ortho methods, such as a projected image or a head mounted display, possibly causing eyestrain and headaches, or doubling, so pictures optimized for this viewing method may not be usable with other methods.
Where images may also be used for anaglyph display a narrower base, say 40mm will allow for less ghosting in the display.
Variable base for "geometric stereo"
As mentioned previously, the goal of the photographer may be a reason for using a baseline that is larger than normal. Such is the case when, instead of trying to achieve a close emulation to natural vision, a stereographer may be trying to achieve geometric perfection. This approach means that objects are shown with the shape they actually have, rather than the way they are seen by humans.
Objects at 25 to 30 feet, instead of having the subtle depth that an observer on the spot would see, or that a normal baseline would record, will have the much more dramatic depth that would be seen from 7 to 10 feet. Instead of seeing objects as one would with eyes 2 1/2" apart, they are seen as they would appear with eyes 12" apart. In other words, the baseline is chosen to produce the same depth effect regardless of the distance from the subject. As with true ortho stereo, this effect is impossible to achieve in a literal sense, since different objects in the scene are at different distances and thus show different amounts of parallax; but the geometric stereographer, like the ortho stereographer, attempts to come as close as possible.
Achieving this could be as simple as using the 1:30 rule to find a custom base for every shot, regardless of distance, or it could involve using a more complicated formula.
This could be thought of as a form of hyperstereo, but less extreme. As a result, it has all of the same limitations of hyperstereo. When objects are given enhanced depth, but not magnified to take up a larger portion of the view, there is a certain miniaturization effect. Of course, this may be exactly what the stereographer has in mind.
While geometric stereo neither attempts nor achieves a close emulation of natural vision, there are valid reasons for this approach. It does, however, represent a very specialized branch of stereography.
Precise stereoscopic baseline calculation methods
Recent research has led to precise methods for calculating the stereoscopic camera baseline. These techniques consider the geometry of the display/viewer and scene/camera spaces independently and can be used to reliably calculate a mapping of the scene depth being captured to a comfortable display depth budget. This frees up the photographer to place their camera wherever they wish to achieve the desired composition and then use the baseline calculator to work out the camera inter-axial separation required to produce the desired effect.
This approach means there is no guess work in the stereoscopic setup once a small set of parameters have been measured, it can be implemented for photography and computer graphics and the methods can be easily implemented in a software tool.
Multi-rig stereoscopic cameras
The precise methods for camera control have also allowed the development of multi-rig stereoscopic cameras where different slices of scene depth are captured using different inter-axial settings, the images of the slices are then composed together to form the final stereoscopic image pair. This allows important regions of a scene to be given better stereoscopic representation while less important regions are assigned less of the depth budget. It provides stereographers with a way to manage composition within the limited depth budget of each individual display technology.
Stereo Window
For any branch of stereoscopy the concept of the stereo window is important. If a scene is viewed through a window, the entire scene would normally be behind the window: if the scene is distant, it would be some distance behind the window; if it is nearby, it would appear to be just beyond the window. An object smaller than the window itself could even pass through the window and appear partially or completely in front of it. The same applies to a part of a larger object that is smaller than the window.
The goal of setting the stereo window is to duplicate this effect.
To truly understand the concept of window adjustment it is necessary to understand where the stereo window itself is. In the case of projected stereo, including "3D" movies, the window would be the surface of the screen. With printed material the window is at the surface of the paper. When stereo images are seen by looking into a viewer the window is at the position of the frame. In the case of Virtual Reality the window seems to disappear as the scene becomes truly immersive.
In the case of paired images, moving the images further apart will move the entire scene back, moving the images closer together will move the scene forward. Note that this does not affect the relative positions of objects within the scene, just their position relative to the window. Similar principles apply to anaglyph images and other stereoscopy techniques.
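The effect of moving the paired images apart or together can be shown with simple arithmetic on per-object disparities. This is an illustrative sketch (disparities in pixels are made-up figures): a uniform window shift adds the same offset to every object, moving the whole scene relative to the window while leaving the differences between objects, their relative depth, unchanged.

```python
def shift_window(disparities_px, shift_px):
    """Apply a uniform window shift to per-object disparities.

    Positive shift_px moves the two halves further apart, pushing
    the whole scene back behind the stereo window; negative brings
    it forward. Relative depth within the scene is unchanged.
    """
    return [d + shift_px for d in disparities_px]

# One object in front of, one at, and one behind the window:
scene = [-5, 0, 12]
print(shift_window(scene, 6))  # [1, 6, 18]: all now behind the window
```

Note that the spread between nearest and farthest object (17 units here) is the same before and after the shift; only the placement relative to the window changes.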
There are several considerations in deciding where to place the scene relative to the window.
First, in the case of an actual physical window, the left eye will see less of the left side of the scene and the right eye will see less of the right side of the scene, because the view is partly blocked by the window frame. This principle is known as "less to the left on the left" or 3L, and is often used as a guide when adjusting the stereo window where all objects are to appear behind the window. When the images are moved further apart, the outer edges are cropped by the same amount, thus duplicating the effect of a window frame.
Another consideration involves deciding where individual objects are placed relative to the window. It would be normal for the frame of an actual window to partly overlap or "cut off" an object that is behind the window. Thus an object behind the stereo window might be partly cut off by the frame or side of the stereo window, so the stereo window is often adjusted to place any object cut off by its edges behind the window. If an object, or part of an object, is not cut off by the window, then it can be placed in front of it, and the stereo window may be adjusted with this in mind. This effect is how swords, bugs, flashlights, etc. often seem to "come off the screen" in 3D movies.
If an object which is cut off by the window is placed in front of it, an effect results that is somewhat unnatural and is usually considered undesirable, this is often called a "window violation". This can best be understood by returning to the analogy of an actual physical window. An object in front of the window would not be cut off by the window frame but would, rather, continue to the right and/or left of it. This can't be duplicated in stereography techniques other than Virtual Reality so the stereo window will normally be adjusted to avoid window violations. There are, however, circumstances where they could be considered permissible.
A third consideration is viewing comfort. If the window is adjusted too far back, the right and left images of distant parts of the scene may be more than 2.5" apart, requiring the viewer's eyes to diverge in order to fuse them. This results in image doubling and/or viewer discomfort. In such cases a compromise is necessary between viewing comfort and the avoidance of window violations.
In stereo photography, window adjustment is accomplished by shifting/cropping the images; in other forms of stereoscopy, such as drawings and computer-generated images, the window is built into the design of the images as they are generated. It is by design that in CGI movies certain images appear behind the screen whereas others appear in front of it.
The Bhagavad Gita (Sanskrit in Devanagari script: भगवद् गीता, in transliteration: Bhagavad Gītā) is a religious text within the Mahābhārata, located in the Bhishma-Parva, chapters 23–40. A core text of Hinduism and Indian philosophy, often referred to simply as "the Gita", it is a summation of many aspects of the Vedic, Yogic, Vedantic and Tantric philosophies. The Bhagavad Gita, meaning "Song of the Lord", refers to itself as an 'Upanishad' and is sometimes called Gītopanişad. In the course of the Gita, Krishna proclaims that he is an Avatar, or Bhagavat, a manifestation of the all-embracing God. To help Arjuna believe this, he reveals to him his divine form, which is described as timeless and leaves Arjuna shaking with awe and fear.
- You grieve for those who should not be grieved for;
yet you speak wise words.
Neither for the dead nor those not dead do the wise grieve.
Never was there a time when I did not exist
nor you nor these lords of men.
Neither will there be a time when we shall not exist;
we all exist from now on.
As the soul experiences in this body
childhood, youth, and old age,
so also it acquires another body;
the sage in this is not deluded.
- When one's mind dwells on the objects of Senses, fondness for them grows on him, from fondness comes desire, from desire anger.
Anger leads to bewilderment, bewilderment to loss of memory of true Self, and by that intelligence is destroyed, and with the destruction of intelligence he perishes
- Ch. II, 62-63
- Of the Vrishnis, I am Vasudeva; of the sons of Pandu, Arjuna; of the sages, moreover, I am Vyasa; of poets, the poet Ushana.
- Krishna, Chapter X, verse 37; Winthrop Sargeant translation
- Thou seest Me as Time who kills, Time who brings all to doom,
The Slayer Time, Ancient of Days, come hither to consume;
Excepting thee, of all these hosts of hostile chiefs arrayed,
There shines not one shall leave alive the battlefield! Dismayed
No longer be! Arise! obtain renown! destroy thy foes!
Fight for the kingdom waiting thee when thou hast vanquished those.
By Me they fall—not thee! the stroke of death is dealt them now,
Even as they stand thus gallantly; My instrument art thou!
Strike, strong-armed Prince! at Drona! at Bhishma strike! deal death
To Karna, Jayadratha; stay all this warlike breath!
’Tis I who bid them perish! Thou wilt but slay the slain.
Fight! they must fall, and thou must live, victor upon this plain!
Quotes about the Bhagavad Gita
- Strength founded on the Truth and the dharmic use of force are thus the Gita's answer to pacifism and non-violence. Rooted in the ancient Indian genius, this third way can only be practised by those who have risen above egoism, above asuric ambition or greed. The Gita certainly does not advocate war; what it advocates is the active and selfless defence of dharma. If sincerely followed, its teaching could have altered the course of human history. It can yet alter the course of Indian history.
- "Greatest Gospel of Spiritual Works" in New Indian Express (10 December 2000). These have sometimes mistakenly been quoted as the words of Sri Aurobindo, because they appear after a quotation by him in the essay.
- We knew the world would not be the same. Few people laughed, few people cried, most people were silent. I remembered the line from the Hindu scripture, the Bhagavad-Gita. Vishnu is trying to persuade the Prince that he should do his duty and to impress him takes on his multi-armed form and says, "Now I am become Death, the destroyer of worlds." I suppose we all thought that, one way or another.
- Robert Oppenheimer, in an interview about the Trinity nuclear explosion, first broadcast as part of the television documentary The Decision to Drop the Bomb (1965), produced by Fred Freed, NBC White Paper; Oppenheimer is quoting from the 1944 Vivekananda - Isherwood translation of the Gita. The line is spoken to Arjuna by Krishna, who is revered in Hindu traditions as one of the major incarnations of Vishnu; some assert that the passage would be better translated "I am become Time, the destroyer of worlds."
- In the morning I bathe my intellect in the stupendous and cosmogonal philosophy of the Bhagvat-Geeta, since whose composition years of the gods have elapsed, and in comparison with which our modern world and its literature seem puny and trivial; and I doubt if that philosophy is not to be referred to a previous state of existence, so remote is its sublimity from our conceptions. I lay down the book and go to my well for water, and lo! there I meet the servant of the Bramin, priest of Brahma and Vishnu and Indra, who still sits in his temple on the Ganges reading the Vedas, or dwells at the root of a tree with his crust and water jug. I meet his servant come to draw water for his master, and our buckets as it were grate together in the same well. The pure Walden water is mingled with the sacred water of the Ganges.
- Six commentaries - Adi Sankara, Ramanuja, Sridhara Swami, Madhusudana Sarasvati, Visvanatha Chakravarti and Baladeva Vidyabhusana (Roman transliteration of Sanskrit)
- Bhagavad Gita introduction lecture by A.C. Bhaktivedanta Swami Prabhupada
- Commentary on the Gita by Swami Nirmalananda Giri
- Bhagavad Gita with Commentaries by Vladimir Antonov
- Spiritual Quotes from Bhagavad Gita
English translations and commentaries
- Sowmya's Gitaaonline Bhagavad Gita verses in Real Audio, various discourses on Gita chapters, Summaries of different chapters and a unique FAQ. Includes Gita Dhyaanam (Invocation)in Real Audio, Downloads of Bhagavad Gita for Busy People and Gita Arati.
- Audio recitations of the Bhagavad-Gita in MP3 spoken in 15 languages and sung in Sanskrit, plus introductions of the Bhagavad-Gita from the four authorised samparadayas. Also articles on Bhagavad-Gita from the Brahma Madhva Gaudiya Vaisnava Sampradaya disciplic succession.
- Gita Supersite Multilingual Bhagavadgita with translations, classical and contemporary commentaries and much more.
- Bhagavad Gita As It Is by His Divine Grace Sri Srimad A.C. Bhaktivedanta Swami Prabhupada
- Online Bhagavad Gita by His Divine Grace Sri Srimad A.C. Bhaktivedanta Swami Prabhupada
- Srimad Bhagavad-Gita Overview by Jagannath Das
- Swami Chinmayananda translation and commentary
- Sir Edwin Arnold translation
- Kashinath Trimbak Telang translation
- Swami Nirmalananda Giri translation in metered verse for singing.
- Dr. Ramanand Prasad translation
- Sanderson Beck translation
- Swami Tapasyananda translation
- William Quan translation
- Sowmya's Gitaaonline A sloka a Day, Discourses on Bhagavad Gita, The Essence of Bhagavad Gita, The Gita Way, Sanskrit verses of selected chapters of the Bhagavad Gita, Links to other web sites with Bhagavad Gita Audio.
- Verses in Sanskrit Devnagari, transliteration, word-for-word translations, verse translations and accompanying chants in Realaudio
- Recitation of verses in Sanskrit (downloadable mp3s)
- One little angel
- Devanagri Sanskrit transliterations and Hare Krishna-influenced Sanskrit-to-English translations for all 700 verses
- Gita excerpt from the Mahabharata by Kisari Mohan Ganguly (published between 1883 and 1896), the most comprehensive English translation to date
Eknath Easwaran's poetic translation
1911 Encyclopædia Britannica/Refectory
1911 Encyclopædia Britannica, Volume 23
REFECTORY (med. Lat. refectorium, from reficere, to refresh), the hall of a monastery, convent, &c., where the religious took their chief meals together. There frequently was a sort of ambo, approached by steps, from which to read the legenda sanctorum, &c., during meals. The refectory was generally situated by the side of the S. cloister, so as to be removed from the church but contiguous to the kitchen; sometimes it was divided down the centre into two aisles, as at Fountains Abbey in England, Mont St Michel in France and at Villiers in Belgium, and into three aisles as in St Mary’s, York, and the Bernardines, Paris. The refectory of St Martin-des-Champs in Paris is in two aisles, and is now utilized as the library of the École des Arts et Métiers. Its wall pulpit, with an arcaded staircase in the thickness of the wall, is still in perfect preservation.
Capybaras are the largest of rodents, weighing from 35 to 66 kg and standing up to 0.6 meters at the shoulder, with a length of about 1.2 meters. Females of this species are slightly larger than males. Their fur is coarse and thin, and is reddish brown over most of the body, turning yellowish brown on the belly and sometimes black on the face. The body is barrel-shaped, sturdy, and tailless. The front legs are slightly shorter than the hind legs, and the feet are partially webbed. This, in addition to the location of the eyes, ears, and nostrils on top of the head, makes capybaras well-suited to semi-aquatic life.
Range mass: 35 to 66 kg.
Range length: 106 to 134 cm.
Other Physical Features: endothermic ; homoiothermic; bilateral symmetry
Sexual Dimorphism: female larger
Brief Summary
Sipuncula are marine invertebrate worms commonly known as peanut worms (or star worms), with approximately 150 recognized species (Cutler 1994). They are widely distributed throughout the world's oceans from the tropical intertidal to cold deep-water habitats. These little-known marine invertebrates are often confused with holothurians, echiurans or nemerteans and are easily overlooked by inexperienced observers. However sipunculans have several characteristics that separate them easily from these other groups. The body consists of a cylindrical trunk and an introvert that invaginates completely inside the trunk. The mouth is located at the tip of the introvert and may be surrounded by digitiform tentacles. The peculiar position of the anus in the antero-dorsal region of the trunk is an easily seen external character that distinguishes the sipunculans from other worm-like invertebrates. Sipunculans are dioecious but sexes are not distinguishable externally. Fertilization is external and development may be either direct with no larval form or indirect, usually with a trochophore larva followed by a pelagosphera, a larval type unique to sipunculans. Pelagosphera larvae of many species can spend long periods of time in the water column; consequently they are capable of long distance dispersal. At least one species (Aspidosiphon elegans) is able to reproduce asexually by fission of small pieces from the posterior. As infaunal animals, sipunculans burrow into the substrate or they are cryptic inhabitants of coral rubble or empty gastropod shells and are therefore not readily observed or collected.
The phylogenetic position of sipunculans has been contentious. This group has been ranked at differing taxonomic levels such as family, order, class or phylum (Saiz Salinas, 1993; Cutler, 1994). Phylum status for this group was established only in the middle of the 20th century (Hyman, 1959) and the current name, Sipuncula, was proposed by Stephen (1964) and restated by Stephen and Edmonds (1972). More recently, molecular phylogenetic studies have provided strong evidence that sipunculans are either within, or closely related to, annelids (e.g. Boore & Staton, 2002; Struck et al. 2007; Dunn et al. 2008; Dordel et al. 2010).
Research Project Search
Poisonous Paint Cleaned in a Flash
(New Scientist Magazine - December 14, 2006) - As a researcher on Ronald Reagan's "Star Wars" anti-ballistic missile programme in the 1980s, Ray Schaefer had to learn all about how laser beams interact with surfaces. Now he is applying his extensive knowledge to a very different task: making old houses safe for children.
Instead of trying to blast holes in Soviet missiles, powerful pulses from a new type of light source Schaefer has developed can vaporise lead paint that might poison incautious youngsters.
Once widely used in housing, lead paint is now banned in most countries because of its toxicity. However, it can still be found in the US in many houses built before 1978, putting children at risk of ingesting lead from dust and by chewing painted objects. Removing paint by scraping or by dipping items in paint remover is costly, time-consuming and creates extra contamination risks.
Full Text of Article (subscription to New Scientist required)
Project Abstract for SBIR Contract 68D03046: Paint Removal From Architectural Surfaces With an Innovative Pulsed Light Source
What is a Source?
Examples of sources include:
The following points emphasize certain aspects of the HRS definition of a source. First, the following are NOT sources for purposes of HRS scoring:
- Ground water plumes originating from known sources (such as a landfill);
- Surface water plumes originating from known sources (such as discharge pipes or overland runoff discharge areas);
- Areas of contaminated surface water sediments arising from discharges from known sources; and
- Volumes of contaminated ambient air.
The following ARE considered sources for purposes of HRS scoring:
- Ground water and surface water plumes of unknown origin;
- Areas of contaminated surface water sediments arising from direct placement (other than discharge) of waste materials into surface water bodies when the origin is unknown;
- Cylinders containing confined, gaseous hazardous substances; and
- Soils contaminated as a result of overland runoff, volatilization of ground water contaminants, or atmospheric deposition (from non-vehicular sources); and
- Areas of observed soil contamination.
Sources need not contain waste materials. Materials that might not be considered a source if undisturbed may become a source if excavated and moved (e.g., contaminated dredge disposal materials).
Maternal feeding practices and feeding behaviors of Australian children aged 12 to 36 months
Chan, Lily, Magarey, Anthea, & Daniels, Lynne (2010) Maternal feeding practices and feeding behaviors of Australian children aged 12 to 36 months. Matern Child Health Journal.
To explore parents' perceptions of the eating behaviors and related feeding practices of their young children.
Mothers (N=740) of children aged 12 to 36 months and born in South Australia were randomly selected by birth date in four 6-month age bands from a centralized statewide database and invited to complete a postal questionnaire.
Valid completed questionnaires were returned for 374 children (51% response rate; 54% female). Although mothers generally reported being confident and happy in feeding their children, 23% often worried that they gave their child the right amount of food. Based on a checklist of 36 specified items, 15% of children consumed no vegetables in the previous 24 hours, 11% no fruit and for a further 8% juice was the only fruit. Of 12 specified high fat/sugar foods and drinks, 11% of children consumed none, 20% one, 26% two, and 43% three or more. Six of eight child-feeding practices that promote healthy eating behaviors were undertaken by 75% parents 'often' or 'all of the time'. However, 8 of 11 practices that do not promote healthy eating were undertaken by a third of mothers at least ‘sometimes’.
In this representative sample, dietary quality issues emerge early and inappropriate feeding practices are prevalent thus identifying the need for very early interventions that promote healthy food preferences and positive feeding practices. Such programs should focus not just on the 'what', but also the 'how' of early feeding, including the feeding relationship and processes appropriate to developmental stage.
Key words: Maternal feeding practices, infants, obesity
Citation counts are sourced monthly from Scopus and Web of Science citation databases.
These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science generally from 1980 onwards.
Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.
Full-text downloads displays the total number of times this work’s files (e.g., a PDF) have been downloaded from QUT ePrints as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.
|Item Type:||Journal Article|
|Keywords:||Maternal feeding practices, Infants , Obesity|
|Subjects:||Australian and New Zealand Standard Research Classification > MEDICAL AND HEALTH SCIENCES (110000) > NUTRITION AND DIETETICS (111100) > Nutrition and Dietetics not elsewhere classified (111199)|
Australian and New Zealand Standard Research Classification > MEDICAL AND HEALTH SCIENCES (110000) > PAEDIATRICS AND REPRODUCTIVE MEDICINE (111400) > Paediatrics (111403)
Australian and New Zealand Standard Research Classification > MEDICAL AND HEALTH SCIENCES (110000) > PUBLIC HEALTH AND HEALTH SERVICES (111700) > Public Health and Health Services not elsewhere classified (111799)
|Divisions:||Current > QUT Faculties and Divisions > Faculty of Health|
Current > Institutes > Institute of Health and Biomedical Innovation
Current > Schools > School of Public Health & Social Work
|Copyright Owner:||Copyright 2010 Springer Science+Business Media, LLC|
|Copyright Statement:||The original publication is available at SpringerLink http://www.springerlink.com|
|Deposited On:||08 Oct 2010 10:28|
|Last Modified:||01 Mar 2012 00:21|
Johan Galtung has shown that there are several different ways of classifying the phenomenon of violence. Here I will summarize the three main types of violence: (1) personal or direct, (2) structural or indirect, and (3) cultural or symbolic.
In his paper “Violence, Peace, and Peace Research,” Galtung made his highly significant — and now widely accepted — distinction between the two fundamental types of violence:
We shall refer to the type of violence where there is an actor that commits the violence as personal or direct, and to violence where there is no such actor as structural or indirect. In both cases individuals may be killed or mutilated, hit or hurt in both senses of these words [i.e., physical and psychological], and manipulated by means of stick or carrot strategies. But whereas in the first case these consequences can be traced back to concrete persons as actors, in the second case this is no longer meaningful. There may not be any person who directly harms another in the structure. The violence is built into the structure and shows up as unequal power and consequently as unequal life chances. (1969: 170-171)
In the follow-up paper, Galtung introduced his third category, cultural violence:
By ‘cultural violence’ we mean those aspects of culture, the symbolic sphere of our existence . . . that can be used to justify or legitimize direct or structural violence. (1990: 291)
For Galtung, simplistic stereotypes that identify entire cultures as violent are not very helpful; it’s much more preferable to say, instead, that a particular aspect of a particular culture is an example of cultural violence. Explaining further, Galtung notes:
Cultural violence makes direct and structural violence look, even feel, right — at least not wrong. . . . One way cultural violence works is by changing the moral color of an act from red/wrong to green/right or at least to yellow/acceptable; an example being ‘murder on behalf of the country as right, on behalf of oneself wrong’. Another way is by making reality opaque, so that we do not see the violent act or fact, or at least not as violent. (1990: 291-292)
Galtung suggests that the three types of violence can be represented by the three corners of a violence triangle. The image is meant to emphasize that the three types are causally connected to each other.
Among the three types of violence represented in the above diagram, the most obvious type is direct or personal. Everything from threats and psychological abuse to rape, murder, war, and genocide belong to this category. It is called personal violence because the perpetrators are human beings, i.e., persons.
The second type, structural violence, is much less obvious, though it can be as deadly, or deadlier, than direct violence. Typically, no particular person or persons can be held directly responsible as the cause behind structural violence. Here, violence is an integral part of the very structure of human organizations — social, political, and economic.
Structural violence is usually invisible — not because it is rare or concealed, but because it is so ordinary and unremarkable that it tends not to stand out. Such violence fails to catch our attention to the extent that we accept its presence as a “normal” and even “natural” part of how we see the world.
Galtung explains the distinction as follows:
Violence with a clear subject-object relation is manifest because it is visible as action. . . . Violence without this relation is structural, built into structure. Thus, when one husband beats his wife there is a clear case of personal violence, but when one million husbands keep one million wives in ignorance there is structural violence. Correspondingly, in a society where life expectancy is twice as high in the upper as in the lower classes, violence is exercised even if there are no concrete actors one can point to directly attacking others, as when one person kills another. (1969: 171)
Even though structural violence has real victims, it has no real perpetrators. And because there are no real perpetrators, the question of intention does not arise. To identify structural violence, it is imperative to focus on consequences rather than intentions. Galtung points out that Western legal and ethical systems have been preoccupied with intentional harm because of their concern with punishing (or holding accountable) the guilty party. This concern is appropriate for direct violence, but quite irrelevant for structural violence. In fact, too much concern with catching the perpetrators keeps our attention focused on one kind of violence, allowing the other, more pervasive kind to go unnoticed. According to Galtung:
This connection is important because it brings into focus a bias present in so much thinking about violence, peace, and related concepts: ethical systems directed against intended violence will easily fail to capture structural violence in their nets — and may hence be catching the small fry and letting the big fish loose. (1969: 172)
Finally, there is the issue of cultural violence.
Violence, whether direct or structural, is a human phenomenon. As such, it poses for human beings not only a physical or existential problem but also a problem of meaning. Both types of violence, therefore, need to be justified or legitimated in one form or another. This occurs in the arena of culture, in the realm of beliefs, attitudes, and symbols. It would be erroneous to say that culture is the root cause of violence, since the causal influence runs bilaterally among the three corners of the violence triangle. Yet, neither direct nor structural violence can go on for long without at least some support from the culture. In any given culture, the justification or legitimation of violence can come from a variety of directions — most significantly from religion, ideology, and cosmology, but also from the arts and sciences.
Also see: German Word Order - Part 2
The fancy word for it is syntax. Any way you express it, word order (die Wortstellung) in German sentences is both more variable and more flexible than in English.
In many cases, German word order is identical to English. This is true for simple subject + verb + other elements sentences: "Ich sehe dich." ("I see you.") or "Er arbeitet zu Hause." ("He works at home."). This "normal" word order places the subject first, the verb second, and any other elements third.
Throughout this guide, it is important to understand that when we say verb, we mean the conjugated or finite verb, i.e., the verb that has an ending that agrees with the subject (er geht, wir gehen, du gehst, etc.). Also, when we say "in second position" or "second place," that means the second element, not necessarily the second word. For example, in the following sentence, the subject (in blue) consists of three words and the verb (in red) comes second, but it is the fourth word:
With compound verbs, the second part of the verb phrase (past participle, separable prefix, infinitive) goes last, but the conjugated element is still second:
"Der alte Mann ist gestern angekommen."
"Der alte Mann will heute nach Hause kommen."
However, German often prefers to begin a sentence with something other than the subject, usually for emphasis or for stylistic reasons. Only one element can precede the verb, but it may consist of more than one word (e.g., "vor zwei Tagen" below). In such cases, the verb remains second and the subject must immediately follow the verb:
"Vor zwei Tagen habe ich mit ihm gesprochen."
No matter which element begins a German declarative sentence (a statement), the verb is always the second element. The subject will either come first or immediately after the verb if the subject is not the first element. This is a simple, hard and fast rule. In a statement (not a question) the verb always comes second. If you don't remember anything else about word order, remember that the verb is always in second place.
If you don't remember anything else about word order, remember that the verb is always in second place.
This rule applies to sentences and phrases that are independent clauses. The only verb-second exception is for dependent or subordinate clauses. In subordinate clauses the verb always comes last. (Although in today's spoken German, this rule is often ignored.) We'll discuss word order in subordinate clauses in Part 2 of this lesson.
One other exception to this rule: interjections, exclamations, names, certain adverbial phrases - usually set off by a comma. Here are some examples:
"Maria, ich kann heute nicht kommen."
"Wie gesagt, das kann ich nicht machen."
In the sentences above, the initial word or phrase (set off by a comma) comes first, but does not alter the verb-second rule.
TIME, MANNER, PLACE
Another area where German syntax may vary from that of English is the position of expressions of time (wann?), manner (wie?) and place (wo?). In English we would say, "Erik is coming home on the train today." English word order in such cases is place, manner, time... the exact opposite of German. In English it would sound odd to say, "Erik is coming today on the train home," but that is precisely how German wants it said: time, manner, place. "Erik kommt heute mit der Bahn nach Hause."
Wann - Wie - Wo
Time - Manner - Place
The only exception would be if you want to start the sentence with one of these elements for emphasis. Zum Beispiel: "Heute kommt Erik mit der Bahn nach Hause." (Emphasis on "today.") But even in this case, the elements are still in the prescribed order: time ("heute"), manner ("mit der Bahn"), place ("nach Hause"). If we start with a different element, the elements that follow remain in their usual order, as in: "Mit der Bahn kommt Erik heute nach Hause." (Emphasis on "by train" - not by car or plane.)
These are the essential rules for German word order. We'll discuss a few more details and the verb-last rule for dependent clauses in a future article.
Can Cats Speak?
Many domestic cats speak more with their body language than their vocal chords. While cats will communicate with their eyes, ears and fur, they also use their tail in a variety of ways. This can be a good indication of what your cat is up to.
In general, if a cat is rather content and relaxed its tail will be in a relaxed position as well. Curled gently around themselves is a sign your feline is comfortable and undisturbed. If they are walking, the tail will gently follow with the walking motion in an upward position.
The opposite is true if your cat is angry with you, so if the tail is low and wagging like a happy dog it may be time to take a step back. The more agitated the cat is the more their tail will briskly swipe back and forth. Interacting with them may result in a quick claw swipe, so be warned.
You can also tell when your cat is on a stalking mission, whether it is a toy or a shadow it is planning to attack. The more focused they become on their prey, the tail will stiffen and only the tip will twitch slightly. This is done in preparation of the attack and may not be quite so obvious in long-haired cats.
The confrontation mode of a cat’s tail is one that includes the entire body. This will happen when they feel a possible battle with other animals, or a need to defend their territory. A cat will fluff up its tail (and all of its other fur) to the biggest size it can. This is done in an effort to appear larger to the opponent. Depending upon the size of the cat, some can appear twice their usual size.
These are common ways that cats make their feelings known with their tails. If you pay attention, it may give you a bit of insight into what your pet is thinking. This can help you keep your beloved cat happy and understood.
One consequence of IT standardization and commodification has been Google’s datacenter is the computer view of the world. In that view all compute resources (memory, CPU, storage) are fungible. They are interchangeable and location independent, individual computers lose identity and become just a part of a service.
Thwarting that nirvana has been the abysmal performance of commodity datacenter networks which have caused the preference of architectures that favor the collocation of state and behaviour on the same box. MapReduce famously ships code over to storage nodes for just this reason.
Change the network and you change the fundamental assumption driving collocation based software architectures. You are then free to store data anywhere and move compute anywhere you wish. The datacenter becomes the computer.
On the host side with an x8 slot running at PCI-Express 3.0 speeds able to push 8GB/sec (that’s bytes) of bandwidth in both directions, we have enough IO to feed Moore’s progeny, wild packs of hungry hungry cores. And in the future System on a Chip architectures will integrate the NIC into the CPU and even faster speeds will be possible. Why we are still using TCP and shoving data through OS stacks in the datacenter is a completely separate question.
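A quick back-of-envelope check of that 8 GB/sec figure: PCI-Express 3.0 signals at 8 GT/s per lane with 128b/130b line encoding, so usable bandwidth is just under one gigabyte per second per lane, per direction. A sketch:

```python
# Rough check of the article's PCIe 3.0 x8 figure.
# PCIe 3.0 runs each lane at 8 GT/s with 128b/130b line encoding,
# so payload bandwidth per lane is 8 * 128/130 Gb/s in each direction.
def pcie3_bandwidth_gbytes(lanes):
    gbits_per_lane = 8.0 * 128 / 130   # effective Gb/s per lane, per direction
    return lanes * gbits_per_lane / 8  # bits -> bytes

print(f"x8 slot: ~{pcie3_bandwidth_gbytes(8):.1f} GB/s per direction")  # ~7.9 GB/s
```

That is where the (rounded) 8 GB/sec in each direction comes from; protocol headers shave a bit more off in practice.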
The next dilemma is how to make the network work. The key to bandwidth nirvana is explained by Microsoft in MinuteSort with Flat Datacenter Storage, which shows how in a network with enough bisectional bandwidth every computer can send data at full speed to every computer, which allows data to be stored remotely, which means data doesn’t have to be stored locally anymore.
What the heck is bisectional bandwidth? If you draw a line somewhere in a network bisectional bandwidth is the rate of communication at which servers on one side of the line can communicate with servers on the other side. With enough bisectional bandwidth any server can communicate with any other server at full network speeds.
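To make the definition concrete, here is a toy check (my own illustration, not from the papers): full bisectional bandwidth means that wherever you draw the line, the links crossing it can carry the combined line rate of all the servers on one side.

```python
# Full bisection check: N servers on host links of speed C can all
# talk across the cut at full rate only if the cut carries (N/2) * C.
def has_full_bisection(n_servers, host_link_gbps, cut_gbps):
    demand_gbps = (n_servers / 2) * host_link_gbps  # one half sending flat out
    return cut_gbps >= demand_gbps

# 1,000 servers on 10 Gb/s NICs need 5 Tb/s across any bisection.
print(has_full_bisection(1000, 10, 5000))  # True
print(has_full_bisection(1000, 10, 500))   # False -- 10:1 oversubscribed
```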
Wait, don’t we have high bisectional bandwidth in datacenters now? Why no, no we don’t. We typically have had networks optimized for sending traffic North-South rather than East-West. North-South means your server is talking to a client somewhere out in the Internet. East-West means you are talking to another server within the datacenter. Pre cloud software architectures communicated mostly North-South, to clients located outside in the Internet. Post cloud most software functionality is implemented by large clusters that talk mostly to each other, that is East-West, with only a few tendrils of communication shooting North-South. Recall how Google has pioneered large fanout architectures where creating a single web page can take a 1000 requests. Large fanout architectures are the new normal.
Datacenter networks have not kept up with the change in software architectures. But it’s even worse than that. To support mostly North-South traffic with a little East-West traffic, datacenters used a tree topology with core, aggregation, and access layers. The idea being that the top routing part of the network has enough bandwidth to handle all the traffic from all the machines lower down in the tree. Economics made it highly attractive to highly oversubscribe, like 240-1, the top layer of the network. So if you want to talk to a machine in some other part of the datacenter you are in for a bad experience. Traffic has to traverse highly oversubscribed links. Packets go drop drop fizz fizz.
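Those oversubscription ratios compound layer by layer: each tier contributes its downlink capacity divided by its uplink capacity, and the end-to-end East-West figure is the product. A sketch with hypothetical port counts:

```python
# End-to-end oversubscription of a tree network is the product of
# each layer's (downlink capacity / uplink capacity) ratio.
def oversubscription(layers_gbps):
    ratio = 1.0
    for down, up in layers_gbps:
        ratio *= down / up
    return ratio

# Hypothetical tree: a ToR at 480G down / 40G up (12:1) beneath an
# aggregation tier at 20:1 yields the kind of 240:1 figure cited.
print(oversubscription([(480, 40), (800, 40)]))  # 240.0
```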
Creating an affordable high bisectional bandwidth network requires a more thoughtful approach. The basic options seem to be to change the protocols, change the routers, or change the hosts. The approach Microsoft came up with was to change the host and add a layer of centralized control.
Their creation is fully described in VL2: A Scalable and Flexible Data Center Network:
A practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane.
The general idea is to create a flat L2 network using a CLOS topology. VMs keep their IP addresses forever and can move anywhere in the datacenter. L2 ARP related broadcast problems are sidestepped by changing ARP to use a centralized registration service to resolve addresses. No more broadcast storms.
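The "registration service instead of ARP broadcast" idea can be sketched as a simple lookup table. The class and method names below are invented for illustration; they are not VL2's actual API.

```python
class DirectoryService:
    """Toy stand-in for a centralized address-resolution service:
    hosts register an application address -> locator mapping, and
    resolving is a unicast lookup rather than an ARP broadcast."""

    def __init__(self):
        self._mappings = {}

    def register(self, app_addr, locator_addr):
        # Called when a VM boots or migrates: its IP address never
        # changes, only the locator behind it does.
        self._mappings[app_addr] = locator_addr

    def resolve(self, app_addr):
        # A miss is just a miss, not a broadcast storm.
        return self._mappings.get(app_addr)

directory = DirectoryService()
directory.register("10.0.0.7", "ToR-42")
directory.register("10.0.0.7", "ToR-17")  # VM moved; same IP, new locator
```

The design point is that mobility becomes an update to one table instead of a flood to every host on the L2 segment.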
This seems strange, but I attended a talk at Hot Interconnects on VL2 and the whole approach is quite clever and seems sensible. The result delivers the low cost, high bandwidth, low latency East-West flows needed by modern software architectures. A characteristic that seems to be missing in Route Anywhere vSwitch type approaches. You can’t just overlay in performance when the underlying topology isn’t supportive.
Now that you have this super cool datacenter topology what do you do with it? Microsoft implemented a version of the MinuteSort benchmark that was 3 times faster than Hadoop, sorting nearly three times the amount of data with about one-sixth the hardware resources (1,033 disks across 250 machines vs. 5,624 disks across 1,406 machines).
Microsoft built the benchmark code on top of the Flat Datacenter Storage (FDS) system, which is a distributed blob storage system:
Notably, no compute node in our system uses local storage for data; we believe FDS is the first system with competitive sort performance that uses remote storage. Because files are all remote, our 1,470 GB runs actually transmitted 4.4 TB over the network in under a minute.
FDS always sends data over the network. FDS mitigates the cost of data transport in two ways. First, we give each storage node network bandwidth that matches its storage bandwidth. SAS disks have read performance of about 120MByte/sec, or about 1 gigabit/sec, so in our FDS cluster a storage node is always provisioned with at least as many gigabits of network bandwidth as it has disks. Second, we connect the storage nodes to compute nodes using a full bisection bandwidth network—specifically, a CLOS network topology, as used in projects such as Monsoon. The combination of these two factors produces an uncongested path from remote disks to CPUs, giving the system an aggregate I/O bandwidth essentially equivalent to a system such as MapReduce that uses local storage. There is, of course, a latency cost. However, FDS by its nature allows any compute node to access any data with equal throughput.
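The provisioning rule in the quote is plain arithmetic: 120 MByte/sec is about 0.96 gigabits/sec, so a storage node needs roughly one gigabit of network per disk. A sketch, with a helper name that is ours rather than the paper's:

```python
import math

DISK_READ_MBYTES_PER_SEC = 120      # SAS sequential read rate, from the quote
GBIT_IN_MBYTES_PER_SEC = 1000 / 8   # 1 gigabit/sec = 125 MByte/sec

def min_network_gbits(num_disks):
    """Gigabits of network a storage node needs so the network never
    bottlenecks its disks, per the FDS provisioning rule."""
    per_disk = DISK_READ_MBYTES_PER_SEC / GBIT_IN_MBYTES_PER_SEC  # ~0.96
    return math.ceil(num_disks * per_disk)
```

A node with ten disks therefore needs at least 10 gigabits of network, which matches the paper's "at least as many gigabits as it has disks" rule.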
Details are in the paper, but as distributed file systems have become key architectural components it’s important for bootstrapping purposes to have one that takes advantage of this new datacenter topology.
With 10/100 Gbps networks on the way and technologies like VL2 and FDS, we’ve made good progress at making CPU, RAM, and storage fungible pools of resources within a datacenter. Networks still aren’t fungible, though I’m not sure what that would even mean. Software Defined Networking will help networks to become first class objects, which seems close, but for performance reasons networks can never really be disentangled from their underlying topology.
What can we expect from these developments? As fungibility is really a deeper level of commoditization, we should expect to see the destruction of approaches based on resource asymmetry, even higher levels of organization, greater levels of consumption, and the development of new best practices; even greater levels of automation should drive even more competition in the ecosystem space.
- A Guided Tour through Data-center Networking
- Data in the Fast Lane
- Minutesort with Flat Datacenter Storage
- Minutesort with Flat Datacenter Storage by Andrew Wang (On Reddit)
- A quick summary of the VL2 data-center network scheme
- FULL MESH IS THE WORST POSSIBLE FABRIC ARCHITECTURE
- Microsoft's FDS data-sorter crushes Hadoop
- Explaining L2 Multipath in Terms of North/South, East West Bandwidth
- VL2: A Scalable and Flexible Data Center Network by MS Research
- VL2: A Scalable and Flexible Data Center Network by Murat Demirbas

The major event that would lead to the start of World War I was known as the Assassination at Sarajevo. This assassination occurred on June 28th, 1914. On this date, Archduke Franz Ferdinand, heir to the throne of the Austro-Hungarian Empire, and his wife were visiting Bosnia, the Empire's newest acquired territory. However, they were unaware that a group of Serbian revolutionaries, known as "Young Bosnia", waited to fire upon them as they drove through the streets. At first, the couple escaped unscathed, but when their car made the mistake of turning, unknowingly, towards one of the revolutionaries, Gavrilo Princip, they were shot dead, and soon anger over the deaths of the Archduke and his wife boiled over on the empire and war began in a matter of a month.
Much to the encouragement of the German Empire, formed years before under the Prussian prime minister Otto von Bismarck, the Austro-Hungarian Empire declared war on Serbia on July 28th, 1914, holding it supposedly fully responsible for the assassination of Ferdinand. Then, fearing that Serbia would fall, Russia joined the battle on Serbia's side. The German Empire, fearing the same for Austria, entered the war with its ally. Germany, however, split its army into two main groups. One group marched through Belgium into France, so that Germany would not be defenseless when France, it assumed, would surprise attack Germany's western borders; the other group marched eastwards against the Russian borders. After France was attacked by the Germans, Britain joined the French side, against Germany's hopes that it would remain neutral.
The Germans, at first, struggled to even pass through Belgium's defenses, giving the French troops time to prepare against their enemy. When Germany finally passed Belgium, they very nearly reached Paris, passing many of the French defenses easily. However, the French army regrouped and chased the Germans north, towards the English Channel. By this time, both the Germans and the French had developed a Trench System in order to increase their defenses.
The trench system consisted of hand-dug trench lines that were heavily protected. An average trench would have barbed wire covering the top and machine guns ready to fire at any unwelcome guests. The Germans, however, took this further when they began using chlorine as a poisonous gas against the Allies, or Entente. This deadly advancement would contribute greatly to the high death rate of World War I. Then again, the trenches were already very unsanitary, even before chlorine was used; epidemics of disease weren't uncommon in the trench system.
The war pushed on into the year 1917, when neither side had any major advantage and the Allies and the Central Powers were losing or winning about the same number of battles. In Eastern Europe, however, the war seemed to be going the Central Powers' way. The Austro-Hungarian and German Empires both won many successful battles against Russia and took large amounts of territory from the massive but declining country. The Russians regrouped and took back the lost territory, but then refused to fight any longer in the war, due to the Russian people's dissatisfaction with the Czar, the ruler of Russia. Russia, however, also suffered attacks from another threat, the Ottoman Empire.
The sultan once had supreme control over the whole Empire. In the years before World War I, however, the empire had suffered a revolution by a group known as the "Young Turks". The Young Turks succeeded in forcing the sultan to agree to a constitution and, by doing this, formed the CUP. The CUP made real progress in improving education and the economy but chose very poorly in whom to support during World War I. In favor of the Central Powers, the Ottomans successfully destroyed many Russian Black Sea ports and crushed the Allied force that attacked them at the Gallipoli Peninsula. The Allies only found a weakness in the Ottoman Empire when the British officer T. E. Lawrence persuaded Sharif Hussein ibn Ali of Mecca to start a revolution against the Ottoman rulers. The Arab Revolt of 1916 then helped the Allies capture Jerusalem, and the fighting in this region ended with an armistice in late 1918, after which the Allies occupied Istanbul, the capital. In 1922, however, Mustafa Kemal Atatürk drove the Allies out of Istanbul and all of Turkey and became the president of the new Turkish Republic in 1923.
The German navy included many heavily armored warships, known as dreadnoughts. These dreadnoughts, however, could not win a decisive victory against the Allies, so the Germans decided to use submarine warfare instead. The submarines, or German U-boats, proved quite successful, disrupting much of the British and French trade. The U-boats, however, also contributed to the U.S. entering the war when the U-boat U-20 sank the world-famous ocean liner Lusitania, which was carrying many innocent American passengers.
The stalemate of 1917 was finally broken by reckless German action: the unrestricted U-boat attacks on foreign shipping, the sinking of the Lusitania, and, most threatening of all, Germany's encouragement of Mexico to attack the U.S. The U.S. at last sent troops to assist the Allies in Europe and crushed the German forces outside of their home country. By this time, however, the Russian economy was near collapse, and on November 7th, 1917, Vladimir Lenin seized power. In March 1918, Russia and the German Empire signed the Treaty of Brest-Litovsk, effectively ending Russian participation in the war. The Allies then surrounded Germany and began pressuring the German trench defenses, and Germany slowly began to succumb. Its major ally, Austria-Hungary, had signed an armistice (an agreement to stop fighting) with Italy and the Allies after losing a series of battles against them, and now the German people had started a revolution against Kaiser (Emperor) Wilhelm II in response to the Allies' presence. Finally, the Allies broke through the German trenches and began marching eastwards towards the German capital, Berlin. Germany then gave in and signed an armistice on November 11th, 1918, ending the war for good.
The Treaty of Versailles aimed to punish the Central Powers, especially Germany, for the destruction they had caused during the war. The policy the Allies imposed on Germany and Austria could certainly be considered strict: Germany and Austria could never again be allies with one another, and all future alliances of either country had to conform to the Allies' policy. The Central Powers were also made to pay extremely large reparations, eventually set at 132 billion gold marks. This policy and fee sent both Austria and Germany into bankruptcy and marked the dissolution of both empires. The Germans could never let go of their grudge against the Allies, and because of this, World War II was just waiting to occur.
Table of Contents
Origins of the War
Triple Entente (list)
Triple Alliance (list)
Major Battles of World War I
The Western Front, or Western Europe
- The Battle of 1st Marne, occurred in 1914, land battle, Entente/Ally Victory
- The Battle of 1st Aisne, occurred in 1914, land battle, indecisive
- The Battle of 1st Ypres, occurred in 1914, land battle, Entente/Ally Victory
- The Battle of Verdun, occurred in 1916, land battle, Entente/Ally (French) Victory
- The Battle of 1st Somme, occurred in 1916, land battle, indecisive
- The Battle of Vimy Ridge, occurred in 1917, land battle, Entente/Ally Victory
- The Battle of Messines, occurred in 1917, land battle, Entente/Ally Victory
- The Battle of Passchendaele, occurred in 1917, land battle, Entente/Ally Victory
- The Battle of Cambrai, occurred in 1917, land battle, indecisive
- The Battle of 2nd Marne, occurred in 1918, land battle, Entente/Ally Victory
The Eastern Front, or Eastern Europe and Asia
- The Battle of Tannenberg, occurred in 1914, land battle, Central Powers Victory (against Russia)
- The Battle of Gorlice-Tarnow, occurred in 1915, land battle, Central Powers Victory (against Russia)
The Middle East
- The Battle of Kut-al-Amara, 1915-1916, Siege, Turkish Victory (against British, Indians)
- The Battle of Gallipoli, 1915-1916, land battle, Turkish Victory (against British, Australians, New Zealanders)
- The Battle of Megiddo, 1918, land battle, Entente/Ally Victory (against Turks)
The Countries of Both Sides
The Triple Entente/Allies
- British Empire
- New Zealand
Schools in poor communities in many parts of the world, including South Africa, still suffer from the legacy of large classes, deplorable physical conditions, and the absence of learning resources, and yet the teachers and learners in these poor schools are expected to achieve the same levels of teaching and learning as those in well-resourced schools in largely well-developed urban areas. A model that combines the salient features of class-based activities with VBSRL (Video-Based Self-Regulated Learning) provides a low-risk and low-cost approach to serve such high-risk learners. When information is difficult for poor learners to master, there is a greater need to create opportunities for learners to learn how to learn. Video-based learning is regarded as a means to equip the learner to avoid and prevent failing an educational task. Self-regulated learning implies activities directed at acquiring information, skills, and knowledge that involve cognitive, metacognitive, management, motivational, and behavioural strategies. The VBSRL approach as presented in this paper paves the way for poor learners to assume responsibility for their learning and enjoy success. Under these circumstances they tend to demonstrate more intelligence by getting to know what to do when the odds are against them!
Keywords: Video-based Learning, Self-regulated Learning, Video-Based Self-regulated Learning, Cognitive Dissonance, Metacognition
Professor of Advanced Studies in Education, Research, Technology & Innovation Unit, Faculty of Education, Nelson Mandela Metropolitan University, Port Elizabeth, Eastern Cape, South Africa
Before we begin writing code for this lab, we need to introduce one more Python module. The random module allows us to generate random numbers. It's easy to use:
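The interactive example this lab originally showed isn't reproduced in this copy; a minimal equivalent, reconstructed from the description below, would be:

```python
import random

# pick a pseudo-random integer from 1 up to (but not including) 10
prob = random.randrange(1, 10)
print(prob)
```

Run it a few times and you will see a different number from 1 to 9 each time.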
The randrange function, as called in the example above, generates a random number from 1 to 9. Even though we said 10, the randrange function works just like the range function when it comes to starting and stopping points. Now if you run the program over and over again you should see that a different number is generated each time. Random numbers are the basis of all kinds of interesting programs we can write, and the randrange function is just one of many functions available in the random module.
In this lab we are going to work step by step through the problem of racing turtles. The idea is that we want to create two or more turtles and have them race across the screen from left to right. The turtle that goes the farthest is the winner.
There are several different, and equally plausible, solutions to this problem. Let's look at what needs to be done, and then look at some of the options for the solution. To start, let's think about a solution to the simplest form of the problem, a race between two turtles. We'll look at more complex races later.
When you are faced with a problem like this in computer science it is often a good idea to find a solution to a simple problem first and then figure out how to make the solution more general.
Here is a possible sequence of steps that we will need to accomplish:
Here is the Python code for the first 4 steps above
Now, you have several choices for how to fill in code for step 5. Here are some possibilities to try. Try coding each of the following in the box above to see the different kinds of behavior.
So, which of these programs is better? Which of these programs is most correct? These are excellent questions. Program 1 is certainly the simplest, but it isn't very satisfying as far as a race is concerned: each turtle simply moves its whole distance on its turn. Program 2 ends up looking a lot like Program 1 when you run it. Program 3 is probably the most 'realistic', assuming realism is very important when we're talking about a simulated race of virtual turtles.
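Since the graphical code isn't reproduced in this copy, here is a non-graphical sketch of the Program 3 style of race, where both turtles alternate many small random moves. The names, turn count, and step sizes are made up for illustration:

```python
import random

def race(names=("lance", "maria"), turns=20, max_step=10):
    """Each 'turtle' alternates small random moves; farthest wins."""
    positions = {name: 0 for name in names}
    for _ in range(turns):
        for name in names:
            # one small random move per turtle per turn, like Program 3
            positions[name] += random.randrange(1, max_step)
    winner = max(positions, key=positions.get)
    return positions, winner

positions, winner = race()
```

Swapping the loop structure gives you Programs 1 and 2: move the whole distance in a single turn, or give each turtle all its turns at once.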
You may be thinking why can’t each turtle just move forward until they cross some artificial finish line? Good question! We’ll get to the answer to this, and look at the program in a later lesson when we learn about something called the while loop.
What about Muslim Women?
Islam sees a woman, whether single or married, as an individual in her own right, with the right to own and dispose of her property and earnings. A marriage dowry is given by the groom to the bride for her own personal use, and she keeps her own family name rather than taking her husband's in marriage.
Both men and women are expected to dress in a way which is modest and dignified; the traditions of female dress found in some Muslim countries are often the expression of local customs.
At a time when the rest of the world, from Greece and Rome to India and China, considered women as no better than children or even slaves, with no rights whatsoever, Islam acknowledged women's equality with men in a great many respects. The Qur'an states:
"And among His signs is this: that He created mates for you
from yourselves that you may find rest and peace of mind in them,
and He ordained between you love and mercy. Lo, herein
indeed are signs for people who reflect." (Qur'an 30:21)
Prophet Muhammad said:
"The most perfect in faith amongst believers is he who is best
in manners and kindest to his wife."
(Authenticated by: Abu Dawud)
Muslims believe that Adam and Eve were created from the same soul. Both were equally guilty of their mistake and fall from grace, and both were forgiven by Allah. Many women in Islam have held high status; consider that the first person to convert to Islam was Khadijah, the wife of Muhammad, whom he both loved and respected. After the death of Khadijah he married Aisha, who became renowned as a scholar and is considered one of the most significant sources of Hadith literature. Many of the female Companions accomplished great deeds and achieved fame, and throughout Islamic history there have been famous and influential female scholars and jurists.
With regard to education, both women and men have the same rights and obligations. This is clear in Prophet Muhammad's saying:
"Seeking knowledge is mandatory for every believer," (Authenticated by: Ibn Majah)
This implies men and women.
A woman is to be treated with the utmost respect, as God has endowed her with rights: to be treated as an individual, to own and dispose of her own property and earnings, and to enter into contracts, even after marriage. She has the right to be educated and to work outside the home if she so chooses. She has the right to inherit from her father, mother, and husband. A very interesting point to note is that in Islam, unlike any other religion, a woman can be an imam, a leader of communal prayer, for a group of women.
A Muslim woman also has obligations. All the laws and regulations pertaining to prayer, fasting, charity, pilgrimage, doing good deeds, etc, apply to women, albeit with minor differences having mainly to do with female physiology.
Before marriage, a woman has the right to choose her husband. Islamic law is very strict regarding the necessity of having the woman's consent for marriage. A marriage dowry is given by the groom to the bride for her own personal use, and she keeps her own family name rather than taking her husband's. As a wife, a woman has the right to be supported by her husband even if she is already rich. She also has the right to seek divorce and custody of young children, and she does not return the dowry except in a few unusual situations. Despite the fact that in many places and times Muslim communities have not always adhered to all or even many of the foregoing in practice, the ideal has been there for fourteen hundred years, while virtually all other major civilizations did not begin to address these issues or change their negative attitudes until the 20th century, and there are still many contemporary civilizations which have yet to do so.
Vomiting, feeding difficulties, and trouble swallowing are often seen in patients with gastroesophageal reflux disease (GERD). However, when your child’s symptoms do not improve with reflux therapies, it is important to consider other causes.
What is eosinophilic esophagitis?
Eosinophilic esophagitis (EoE) is an emerging disease in children and adults with symptoms very similar to reflux. Esophagitis is inflammation of the esophagus (aka food pipe), and eosinophils are the allergic type of white blood cells we see causing this inflammation. EoE is more common in patients with personal or family history of food allergies, eczema, asthma, or environmental allergies.
The main symptoms can include:
- Poor weight gain (failure to thrive)
- Refusal to eat
- Vomiting with meals
- Difficulty swallowing (dysphagia)
- Pain or discomfort with swallowing (odynophagia)
- Food becoming lodged within the esophagus (food impaction)
Index of suspicion must be high as symptoms can be very similar to reflux (but do not resolve with reflux treatments) or often attributed to behavioral issues. In addition, patients often minimize their symptoms or develop compensating behaviors such as:
- Drinking large amounts of liquids with meals
- Cutting food into very small pieces
- Prolonged chewing
- Slow eating, or
- Avoiding foods such as meats and bread which are more likely to cause dysphagia.
What to do if you think your child may have EoE:
If you are concerned that your child may have eosinophilic esophagitis, discuss this with your primary care physician. Referral to a pediatric gastroenterologist is important to evaluate for other causes of the patients’ symptoms. If EoE is suspected, the only way to confirm the diagnosis is with upper endoscopy (EGD) by a gastroenterologist to look for increased numbers of eosinophils in the esophagus.
How is EoE treated?
Food allergens are the most common trigger of EoE. When allergenic foods are removed from the diet, EoE symptoms can resolve. Allergists can play a central role by trying to identify these triggers with allergy testing. There are also elimination diets which do not require allergy testing. Dieticians familiar with food allergy and EoE are extremely helpful to educate families and patients on diet while maintaining adequate nutrition. Other treatments for EoE include topical (swallowed) steroids such as budesonide and fluticasone.
Once the diagnosis has been confirmed, the patient can be seen in our pediatric Eosinophilic Esophagitis Clinic* at IU Health North or Riley Hospital for Children (downtown) to meet with our dedicated team of:
- pediatric gastroenterologists (Sandeep Gupta, MD and Emily Contreras, MD)
- pediatric allergist (Girish Vitalpur, MD), and
- pediatric allergy dietician (Laura Dean, RD)
* Patients unable to go to Eosinophilic Esophagitis clinic can still be followed by any of our pediatric gastroenterologists, allergists, and dieticians.
Pub. date: 2005 | Online Pub. Date: September 15, 2007 | DOI: 10.4135/9781412952514 | Print ISBN: 9780761927310 | Online ISBN: 9781412952514 | Publisher: SAGE Publications, Inc.
Probation is one of the most widely used sanctions in misdemeanor and felony courts, yet its purpose and methods of operation are largely unclear to most citizens. Probation is often called an “alternative to incarceration,” but in recent years it has more likely been used as an adjunct to jail and prison terms. Originally, probation diverted alcoholic men and women from local jail sentences and offered them a chance to reform themselves under the guidance of a volunteer overseer. Historically, probation cases have come to involve increasingly serious offenses, including various degrees of felony crimes. Nowadays, probation officers have case-loads that comprise a diverse group of offenders who have committed a wide range of offenses. Ever since the Middle Ages, sanctions have been developed to mitigate the harshness and punitiveness of usual penalties such as corporal punishment, death, and social exclusion. Benefit of clergy, judicial reprieve, sanctuary, and abjuration are ...
Followup to: Hand vs. Fingers
Fundamental physics—quarks 'n stuff—is far removed from the levels we can see, like hands and fingers. At best, you can know how to replicate the experiments which show that your hand (like everything else) is composed of quarks, and you may know how to derive a few equations for things like atoms and electron clouds and molecules.
At worst, the existence of quarks beneath your hand may just be something you were told. In which case it's questionable in what sense you can be said to "know" it at all, even if you repeat back the same word "quark" that a physicist would use to convey knowledge to another physicist.
Either way, you can't actually see the identity between levels—no one has a brain large enough to visualize avogadros of quarks and recognize a hand-pattern in them.
But we at least understand what hands do. Hands push on things, exert forces on them. When we're told about atoms, we visualize little billiard balls bumping into each other. This makes it seem obvious that "atoms" can push on things too, by bumping into them.
Now this notion of atoms is not quite correct. But so far as human imagination goes, it's relatively easy to imagine our hand being made up of a little galaxy of swirling billiard balls, pushing on things when our "fingers" touch them. Democritus imagined this 2400 years ago, and there was a time, roughly 1803-1922, when Science thought he was right.
But what about, say, anger?
How could little billiard balls be angry? Tiny frowny faces on the billiard balls?
Put yourself in the shoes of, say, a hunter-gatherer—someone who may not even have a notion of writing, let alone the notion of using base matter to perform computations—someone who has no idea that such a thing as neurons exist. Then you can imagine the functional gap that your ancestors might have perceived between billiard balls and "Grrr! Aaarg!"
Forget about subjective experience for the moment, and consider the sheer behavioral gap between anger and billiard balls. The difference between what little billiard balls do, and what anger makes people do. Anger can make people raise their fists and hit someone—or say snide things behind their backs—or plant scorpions in their tents at night. Billiard balls just push on things.
Try to put yourself in the shoes of the hunter-gatherer who's never had the "Aha!" of information-processing. Try to avoid hindsight bias about things like neurons and computers. Only then will you be able to see the uncrossable explanatory gap:
How can you explain angry behavior in terms of billiard balls?
Well, the obvious materialist conjecture is that the little billiard balls push on your arm and make you hit someone, or push on your tongue so that insults come out.
But how do the little billiard balls know how to do this—or how to guide your tongue and fingers through long-term plots—if they aren't angry themselves?
And besides, if you're not seduced by—gasp!—scientism, you can see from a first-person perspective that this explanation is obviously false. Atoms can push on your arm, but they can't make you want anything.
Someone may point out that drinking wine can make you angry. But who says that wine is made exclusively of little billiard balls? Maybe wine just contains a potency of angerness.
Clearly, reductionism is just a flawed notion.
(The novice goes astray and says "The art failed me"; the master goes astray and says "I failed my art.")
What does it take to cross this gap? It's not just the idea of "neurons" that "process information"—if you say only this and nothing more, it just inserts a magical, unexplained level-crossing rule into your model, where you go from billiards to thoughts.
But an Artificial Intelligence programmer who knows how to create a chess-playing program out of base matter, has taken a genuine step toward crossing the gap. If you understand concepts like consequentialism, backward chaining, utility functions, and search trees, you can make merely causal/mechanical systems compute plans.
The trick goes something like this: For each possible chess move, compute the moves your opponent could make, then your responses to those moves, and so on; evaluate the furthest position you can see using some local algorithm (you might simply count up the material); then trace back using minimax to find the best move on the current board; then make that move.
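That game-tree recipe fits in a few lines. Here is a generic minimax sketch over an abstract game; it is our illustrative reconstruction of the idea in the paragraph above, not Deep Blue's actual code, and the moves/evaluate callables are placeholders for whatever game you plug in:

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Look ahead `depth` plies: try each legal move, recurse on the
    opponent's replies, and back up the best achievable score."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None   # leaf: apply the local evaluator
    best_score, best_move = None, None
    for move in legal:
        score, _ = minimax(move(state), depth - 1, not maximizing,
                           moves, evaluate)
        better = (best_score is None or
                  (score > best_score if maximizing else score < best_score))
        if better:
            best_score, best_move = score, move
    return best_score, best_move
```

None of the values being shuffled around here "want" anything; the planning behavior falls out of the structure of the search and the evaluation function.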
More generally: If you have chains of causality inside the mind that have a kind of mapping—a mirror, an echo—to what goes on in the environment, then you can run a utility function over the end products of imagination, and find an action that achieves something which the utility function rates highly, and output that action. It is not necessary for the chains of causality inside the mind, that are similar to the environment, to be made out of billiard balls that have little auras of intentionality. Deep Blue's transistors do not need little chess pieces carved on them, in order to work. See also The Simple Truth.
All this is still tremendously oversimplified, but it should, at least, reduce the apparent length of the gap. If you can understand all that, you can see how a planner built out of base matter can be influenced by alcohol to output more angry behaviors. The billiard balls in the alcohol push on the billiard balls making up the utility function.
But even if you know how to write small AIs, you can't visualize the level-crossing between transistors and chess. There are too many transistors, and too many moves to check.
Likewise, even if you knew all the facts of neurology, you would not be able to visualize the level-crossing between neurons and anger—let alone the level-crossing between atoms and anger. Not the way you can visualize a hand consisting of fingers, thumb, and palm.
And suppose a cognitive scientist just flatly tells you "Anger is hormones"? Even if you repeat back the words, it doesn't mean you've crossed the gap. You may believe you believe it, but that's not the same as understanding what little billiard balls have to do with wanting to hit someone.
So you come up with interpretations like, "Anger is mere hormones, it's caused by little molecules, so it must not be justified in any moral sense—that's why you should learn to control your anger."
Or, "There isn't really any such thing as anger—it's an illusion, a quotation with no referent, like a mirage of water in the desert, or looking in the garage for a dragon and not finding one."
These are both tough pills to swallow (not that you should swallow them) and so it is a good deal easier to profess them than to believe them.
I think this is what non-reductionists/non-materialists think they are criticizing when they criticize reductive materialism.
But materialism isn't that easy. It's not as cheap as saying, "Anger is made out of atoms—there, now I'm done." That wouldn't explain how to get from billiard balls to hitting. You need the specific insights of computation, consequentialism, and search trees before you can start to close the explanatory gap.
All this was a relatively easy example by modern standards, because I restricted myself to talking about angry behaviors. Talking about outputs doesn't require you to appreciate how an algorithm feels from inside (cross a first-person/third-person gap) or dissolve a wrong question (untangle places where the interior of your own mind runs skew to reality).
Going from material substances that bend and break, burn and fall, push and shove, to angry behavior, is just a practice problem by the standards of modern philosophy. But it is an important practice problem. It can only be fully appreciated, if you realize how hard it would have been to solve before writing was invented. There was once an explanatory gap here—though it may not seem that way in hindsight, now that it's been bridged for generations.
Explanatory gaps can be crossed, if you accept help from science, and don't trust the view from the interior of your own mind.
Part of the sequence Reductionism
Next post: "Heat vs. Motion"
Previous post: "Hand vs. Fingers"
S.O.A.R. (Stop Over and Read) is a mentor project started several years ago in the Poway Unified School District in the San Diego area by Bonnie Humes and Reba van Benthem. The program became a model for the Rolling Reader Program. Its motto is "Highlight my strengths and my weaknesses will disappear." SOAR is a one-on-one opportunity for children, mostly fourth and fifth graders, to be tutored by an adult volunteer. The focus is on problem-solving text, with almost daily practice to allow students to become successful readers. Most importantly, students are guided in the use of reading strategies that promote independent reading. Students read with their assigned tutor for fifteen minutes on as many days each week as tutors are available, preferably four or five. Tutors use the cueing systems (meaning, structure, and visual) to lead students to the use of strategies. Praise is an important part of the tutoring experience for the student. Tutors also spend a few minutes at the end of a session discussing the story to see how well the student has comprehended the text. Most volunteers gain as much from this program as their students do.
- Animal Nutrition & Health
- Future of Farming
- Feeding the World
- About Alltech
Maintaining gut stability is critical for raising profitable dairy herds. It all starts with the young animals that need support in order to maintain healthy microflora in their intestinal tract.
Optimum rumen function is achieved when the nutrients supplied by the ration maximize the growth and activity of the rumen microflora. The ration must have the right range of particle size to form a good rumen mat, which is essential to retain the finer ingredients in the rumen to be fully fermented and also to encourage “cudding”, a sign that the rumen is working properly.
Fermentable energy is the “driver” of rumen activity. A range of fermentable energy sources, along with a yeast culture selected for its action in the rumen, will help resist the buildup of lactic acid. This should be complemented by protein sources with differing rates of rumen degradability so that the microbes can “capture” all the energy and turn it into microbial growth, increasing both the rate of fermentation and the yield of microbial protein.
Some microbes, particularly the “fiber digesters”, can utilize non-protein nitrogen, and a slow release form of this will help maintain the pool of rumen ammonia at the right level throughout the digestive cycle. This should be used along with more complex immediately-available nitrogen sources.
Early shipbuilding was quite different from modern shipbuilding, the biggest difference being the materials used. In years past, local shipbuilders would normally use local wood, which would be cut and hauled during the winter months. The ships would then be constructed over the winter and into the spring, to be used in the coming fishing season.

Most of the wealthy merchants would hire their own master shipbuilders, along with other labourers, to complete this task for them. The common folk, on the other hand, would have to complete the task themselves, perhaps calling on help from a family member or friend.

In this section you will read some diary entries from wealthy merchants of Trinity, Newfoundland. You will also get to read a document referring to the Newhook family, known by many Newfoundlanders as the greatest shipbuilding family in the history of Newfoundland.
The next breath you take could have come from beyond Pluto. Researchers studying the composition of ancient gases trapped in deep wells in New Mexico have found convincing evidence that some of our planet's atmosphere originated in the far reaches of the solar system. The discovery could change long-standing thinking about how Earth's atmosphere evolved.
Since the 1950s, scientists have thought that as Earth congealed from the primordial cloud of dust and gases that also formed the sun and other planets, it trapped some of those gases within its mantle. Then, over hundreds of millions of years, volcanic eruptions returned the gases to Earth's surface, where gravity kept them from drifting off into space. The mixing of these gases--along with the oxygen and other molecules added by life--created the atmosphere we have today.
A reasonable idea, at least until researchers began collecting samples from the New Mexico wells several years ago. Their original purpose was to study the pristine bubbles of volcanic gases trapped underground for billions of years. These bubbles had never made it to the surface before and therefore remained uncontaminated by the modern atmosphere. In particular, the researchers were studying the isotope ratios--or the comparative abundances of the different forms of a particular element--of the gaseous elements krypton and xenon.
The krypton and xenon isotope ratios in the gas samples provided a surprising result: They did not match the ratios found in the primordial cloud that spawned the solar system. Those ratios persist today in the solar wind and have been measured by satellites. Instead, they matched the ratios contained within certain types of meteorites that formed 4.5 billion years ago at the beginning of the solar system, which have been measured from meteorites that have landed on Earth. The conclusion, the team reports today in Science, is that part of Earth's atmosphere must have arrived after the planet had fully formed--possibly carried by comets that hit the planet and whose ice would have evaporated upon impact, leaving water vapor and traces of krypton, xenon, and other elements in the atmosphere.
"This was an unexpected find," says isotope geochemist and co-author Christopher Ballentine of the University of Manchester in the United Kingdom. "The first gases captured by the Earth and trapped in its interior cannot [have contributed] to the gases now in the atmosphere," he says. "This means the atmosphere arrived far later than expected." To confirm that comets contributed to the atmosphere, scientists will need to analyze the krypton and xenon isotope ratios in the cometary samples recently captured by NASA's Stardust mission, says geochemist and lead author Greg Holland, also at Manchester.
The data the researchers have collected on the krypton and xenon isotope ratios are "superb," says physicist Bob Pepin of the University of Minnesota, Twin Cities. "They're of a quality not seen before." But Pepin notes that when early Earth was struck by the cataclysmic impact that resulted in the formation of the moon, primordial gases in the mantle could have been completely released to the atmosphere, leaving behind no evidence that they had ever existed there. So until new work can model the exact process involved, Pepin says, "the jury will probably stay out."
Planet that may be habitable is a mere 12 light-years away
12/19/2012
In-laws driving you crazy at Christmas? Don't despair. Now you can move so far away they'll never drop in again. Astronomers have discovered five planets circling Tau Ceti, a star almost identical to our sun, and one of those orbiting orbs is in the "Goldilocks zone" — meaning it's neither too hot nor too cold but just right to allow for surface water, making it potentially habitable. Of course, the planet currently known as "Tau Ceti e" may be a water world, since the scientists don't believe it has a rocky surface, but Kevin Costner already showed us how to survive on one of those. [Source]
What would another habitable planet mean for us here on Earth?
The Endoscopy Department assists physicians in diagnosing and treating disorders of the digestive and respiratory tracts. State-of-the-art technology allows physicians to use flexible video cameras to directly view and perform interventions such as:
Barrett's esophagus is the abnormal growth of intestinal-type cells above the border of the stomach, into the esophagus. Although this disorder may be a defense mechanism to protect the esophagus from gastroesophageal reflux disease (GERD), it is considered precancerous and requires careful treatment after it is diagnosed.
An ulcer is an open sore in the lining of the stomach or intestine, much like mouth or skin ulcers. Peptic ulcers are caused by acid and pepsin, a digestive stomach enzyme. These ulcers can occur in the stomach, where they are called gastric ulcers. They can also occur in the first portion of the intestine, where they are called duodenal ulcers. "Peptic ulcer" is the term used to describe either or both of these two types of ulcers.
The colon is home to many beneficial bacteria -- helpful as long as they stay in the colon. However, these bacteria can seep through the thin walls of small pouches that project through weak areas of the colon wall, called diverticula, and cause infection. Infection around diverticula is called diverticulitis. It can be mild, with only slight discomfort in the left lower abdomen, or it can be quite severe, with extreme tenderness and fever. Treatment for diverticulitis requires antibiotics and resting of the bowel by avoiding food or, at times, even liquids. For severe cases, the patient must be hospitalized.
Cancer of the colon is a major health problem in the United States. It ranks as a leading form of cancer, along with lung and breast cancer. Importantly, colon cancer is also one of the most curable forms of cancer. When detected early, more than 90 percent of patients can be cured.
This disease begins in the cells that line the colon. The complete cause of polyp formation and colon cancer is unknown, but it is known that heredity plays a key role. The cells in the polyp eventually become uncontrolled and turn into a cancer. Colon cancer also can develop with other conditions, such as ulcerative colitis, a chronic inflammation in the colon.
What is a Colon Polyp?
A polyp is a growth that occurs in the colon and other organs. These growths, or fleshy tumors, are shaped like a mushroom or a dome-like button, and occur on the inside lining of the colon. They may be as small as a tiny pea or larger than a plum. Colon polyps start out as benign tumors but in time may become malignant. The larger the polyp, the more likely it is to contain cancer cells.
Types of procedures:
- A flexible high-definition videoscope allows the physician to directly view the digestive tract, from the mouth to the duodenum.
- Surgeons can perform biopsies, interventions involving the gallbladder and bile duct, and even stone removal.
- Allows physicians to view the intestinal tract from the rectum to the colon.
- Surgeons can perform biopsies, polypectomies, and diagnostic exams.
- Allows physicians to view the respiratory tract.
- Physicians can perform biopsies, washings, and diagnose certain types of disease.
Endoscopic Retrograde Cholangio-Pancreatography (ERCP)
- A dye is injected into the bile and pancreatic ducts using a flexible video endoscope.
- Real-time x-ray is used to outline the bile ducts and pancreas. The procedure is typically performed for blockages of the bile duct from either stones or stricture.
Sea Floor Mapping
Bob Embley, Geophysicist
NOAA, Pacific Marine Environmental Laboratory
The first primitive maps of the sea floor came from soundings which involved lowering weighted lines into the water and noting when the tension on the line slackened. The depth was then measured by the amount of line that had payed out. These early maps gave only the most general picture of the ocean floor and only the larger features could be identified by looking for patterns of many such soundings. Most of these surveys were conducted to identify near-shore hazards to shipping. Only in the late 19th century did expeditions begin to take large numbers of soundings in deep water.
The first modern breakthrough in sea floor mapping came with the use of underwater sound projectors, called sonar, which was first used in World War I. By the 1920s, the Coast and Geodetic Survey (an ancestor of the National Oceanic and Atmospheric Administration's National Ocean Service) was using sonar to map deep water. The team of A. C. Veatch and P. A. Smith produced one of the first detailed maps of the ocean floor. This map showed that the canyons off the U.S. East Coast extended into very deep water. During World War II, advances in sonar and electronics led to improved systems that provided precisely timed measurements of the sea floor in great water depths. These systems provided the data with which scientists constructed the first real maps of important features such as deep-sea trenches and mid-ocean ridges, and led to the discovery of many new sea floor features of smaller scales. The publication of Heezen and Tharp's Physiographic Map of the North Atlantic in 1957 was the first map of the sea floor that enabled the general public to begin to visualize what the ocean floor really looked like. These early maps, based on hundreds of thousands of hand-picked depths, provided the context for the plate tectonics revolution of the 1960s, which finally provided scientific explanations for the formation of mid-ocean ridges, trenches, and the ring of fire around the Pacific.
Still, these systems only produced depth soundings immediately below the ships' tracks. To produce maps of the shape of the sea floor, one had to laboriously contour an area by connecting lines of equal depth. Although the advent of digital computers in the 1960s provided much-needed automation for the plotting of such data, the same basic technology was still being used by the civilian scientific community until the 1970s. In the 1960s, the U.S. Navy began using a new technology called multibeam sonar. Arrays of sonar projectors produced soundings not only along the track, but also for significant distances perpendicular to the ship's track. Instead of lines of soundings, these new multibeam systems produced a swath of soundings. Combined with automated contouring, multibeam systems made it possible to produce detailed, complete maps of large areas of the sea floor, which became available to the scientific community for the first time in the late 1970s, after the Navy declassified the technology. Since these first multibeam systems, the technology has steadily improved, and modern systems can map swaths up to several times the water depth. Combined with positional information provided by the GPS navigation systems now in common use, these systems provide a whole new view of the sea floor.
This is a critical part of our knowledge of the ocean. It is also important to map changes in the composition of the sea floor. Whereas depth can be measured using the timing of the signals going to and from the sea floor, a precise measurement of the strength of the sonar return is required to discern texture. For example, a sound pulse impinging on a mud sea floor will be mostly absorbed, with only a small percentage returning to the receiver, whereas a rock bottom will absorb very little sound and return most of it. In this way, modern sea floor survey systems measure the relative strength of return signals as well as the depth.
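The depth-from-timing part of that description is simple enough to sketch: the sonar pulse travels down and back, so depth is half the sound speed times the two-way travel time. The nominal sound speed of about 1500 m/s in seawater is an assumption for illustration (it varies with temperature, salinity, and pressure, and real survey systems correct for this).

```python
# Echo-sounding depth from two-way travel time:
#     depth = (sound speed * travel time) / 2
# because the pulse makes a round trip to the sea floor and back.
# A nominal 1500 m/s sound speed in seawater is assumed here.

def depth_from_echo(two_way_time_s, sound_speed_m_s=1500.0):
    # Halve the round-trip distance to get the one-way depth.
    return sound_speed_m_s * two_way_time_s / 2.0

print(depth_from_echo(4.0))  # 3000.0 -- a 4 s echo implies ~3 km of water
```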
In deeper water, a deeptow sidescan sonar system provides the best view of sea floor texture. These systems are towed on long cables and send and receive sound signals over a broad swath of sea floor. The resulting data consist of a pattern of signal returns of variable strength, which provides a picture of the sea floor's composition. Depending on the pitch of the system (low-pitch sound travels farthest), this type of survey can cover a swath up to several hundred kilometers wide.
For the Astoria survey in early June, the team will use a multibeam system mounted on the vessel Auriga and a deeptow sidescan system towed behind the vessel on a long electrically conducting cable. The deeptow system, provided by Search, Survey and Recover of Florida, will be towed at a speed of several knots some 200 to 300 m above the sea floor. At the same time, the multibeam system mounted on the ship will record detailed depth data at the same time. This will produce a map of the canyon that is detailed enough that will enable the team to choose the most interesting targets to investigate with the ROV (remotely operated vehicle).
Even the most detailed maps of the sea floor cannot answer the tantalizing question of what the sea floor really looks like. Over land areas, one can take detailed photographs from airplanes and spacecraft and actually walk around at sites of interest. In the ocean, light only penetrates about 100 m, and it is difficult, with current technology, to take usable sea floor photographs from more than about 10 m (33 ft) away.
01 August 2012
Posted in Our Planet, Our Universe
Nearly eight months ago, NASA launched Curiosity - the latest Mars rover - into space. With the rover set to land on Monday, August 6 at 1:31 a.m. EST, NASA scientists and observers around the world anxiously wait to see whether Curiosity will be able to maneuver through the landing process and successfully set down on the Red Planet.
NASA scientists and engineers spend so much time working with the Mars Laboratory rovers that the robots become almost like pets, and just like pets, the rovers get names that often say a lot about their "personalities." The name "Curiosity" explains exactly the nature of this rover’s mission, which is to act as a mobile science laboratory on Mars to investigate whether life could ever exist on the planet.
The rover will begin by studying Gale Crater to see if the area contains any of the necessary ingredients that could sustain life. NASA scientists considered 60 different landing sites and spent considerable time analyzing all possibilities before deciding upon Gale Crater as the designated landing location for Curiosity. About as large as Rhode Island, the site was chosen because it provides a variety of interesting places for the rover to explore and is clear of hazards, which will help with a safe landing. The rover, which is no larger than a small SUV, will spend the majority of its time examining rocks and soils in the remote areas of Gale Crater.
While Curiosity is not the first rover ever sent to Mars, it will certainly be the most advanced. Like a mobile science laboratory, Curiosity is packed with special instruments and cameras for doing all kinds of studies while on Mars. It is equipped with 17 cameras that will act as the rover's "eyes" helping Curiosity get where it needs to go and investigate objects it comes across. The rover also has 10 science instruments, some of which include the cameras, to do many of the tasks scientists do in a lab. By utilizing such elite equipment and technology, instead of sending the samples back to Earth for humans to analyze, the Curiosity rover will be able to do laboratory tests right from the Mars surface. The tests and research conducted by Curiosity will not only promote the study of life on Mars, but help scientists and engineers plan for future human missions to our planet's nearest neighbor.
Curiosity is planned to explore and operate on Mars for approximately one Martian year, the time Mars takes to orbit the sun. A Martian year lasts about 687 Earth days (roughly 98 weeks), and a Martian day is itself about 39 minutes longer than an Earth day. NASA also has two other spacecraft, the Mars Reconnaissance Orbiter and Mars Odyssey, which will be orbiting Mars to assist with communication back to Earth during the rover's exploration and to help with the overall success of the mission.
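The arithmetic behind those figures is a quick conversion. The sol length of about 24 h 39.6 min and the roughly 669-sol Martian year used below are standard approximations, not numbers taken from the article.

```python
# Rough conversion between Martian sols and Earth days, assuming a sol of
# about 24 h 39.6 min. One Martian year is roughly 669 sols.

SOL_MINUTES = 24 * 60 + 39.6   # minutes in one Martian sol (approximate)
DAY_MINUTES = 24 * 60          # minutes in one Earth day

def sols_to_earth_days(sols):
    # Each sol is slightly longer than a day, so the Earth-day count is larger.
    return sols * SOL_MINUTES / DAY_MINUTES

print(round(sols_to_earth_days(669)))  # ~687 Earth days in one Martian year
```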
Curious about the rover? Follow the Curiosity Rover on Twitter @MarsCuriosity to track the remainder of the trip and to hear the most recent news and striking updates!
A new study by a team of researchers shows that for zebra finches, bonding trumps sex. Postdoctoral fellow Julie Elie of the University of California and her team describe, in the journal Behavioural Ecology and Sociobiology, how male finches, in the absence of females, chose to bond with other males and then to maintain such relationships even when females were introduced afterwards.
To find out how strong the bonds are between zebra finches, who normally form male/female life-long relationship bonds, the team raised a group of all male birds to adulthood, at which point nearly half of them paired up and bonded, which the team describe as perching next to each other, singing, preening and nuzzling beaks.
Once the bonds were formed, the team then introduced females to the group. They found that of the eight male-male pairs that had bonded, five of them disregarded the females entirely, choosing instead to continue with their male partners.
Elie, in an interview with the BBC, noted that selecting a social partner, regardless of gender, could be a bigger priority. In other words, for zebra finches it appears that it is more important for a bird to find a mate for cohabitation and socialization than for reproduction. One interesting side note: though the authors mention the types of activities the birds engage in once they form bonds, no mention is made of whether the male birds attempt to mate with one another, a rather critical factor, it would seem, in labeling the birds as homosexual rather than as just life-long pals. Also not mentioned is whether female-female bonds ever occur.
Elie adds that her findings demonstrate that pair-bonding, even in animals, can be more complex than just a male and a female meeting to reproduce. She also suggests that for zebra finches, at least, finding a suitable partner is more than just fun and games; it is also likely a key to survival, as the birds team up to defend food they have obtained or to fight off predators.
Elie also noted that there are many examples of same-sex pairings in nature, such as with gulls and albatrosses, where males pair up but still mate with a female. She also mentions the apparently gay chin-strap penguins that lived in New York's Central Park Zoo last year, who went so far as to hatch an artificially fertilized egg together.
A University of Louisville scientist has determined for the first time how the bacterium that causes Legionnaires' disease manipulates our cells to generate the amino acids it needs to grow and cause infection and inflammation in the lungs. The results are published online today (Nov. 17) in Science.
Yousef Abu Kwaik, Ph.D., the Bumgardner Endowed Professor in Molecular Pathogenesis of Microbial Infections at UofL, and his team believe their work could help lead to development of new antibiotics and vaccines.
"It is possible that the process we have identified presents a great target for new research in antibiotic and vaccine candidates, not only for Legionnaires' disease but in other bacteria that cause illness," he said.
According to the Centers for Disease Control and Prevention, Legionnaires' disease is a lung infection caused by the bacterium called Legionella. The bacterium got its name in 1976, when many people who went to a Philadelphia convention of the American Legion suffered from an outbreak of pneumonia of unknown causes that was later determined to be caused by the bacterium. Each year, between 8,000 and 18,000 people are hospitalized with Legionnaires' disease in the U.S. There is no vaccine currently available for it.
For two years, the researchers examined Legionella, which is an intracellular bacterium that exists in amoebae in water systems; it is transmitted to humans through inhalation of water droplets. Cooling towers and whirlpools are the major sources of transmission. The bacterium uses the amoeba's cellular processes to "tag" proteins, causing them to degrade into their basic elements, amino acids. These amino acids are used by the bacteria as the main source of energy to grow and cause disease.
"The bacteria live on an 'Atkins diet' of low carbs and high protein, and they trick the host cell to provide that specialized diet," Abu Kwaik said.
The same process occurs in a host animal or human who inhales the bacterium and is diagnosed with Legionnaires' disease. However, the bacteria do not tag the proteins, but rather trick the host into tagging the proteins for degradation to generate the amino acids.
In the laboratory, Abu Kwaik and his team saw that by inactivating the bacterial virulence factor responsible for tricking the cell into tagging proteins for degradation in mice models, the pulmonary disease was totally prevented. This was totally due to disabling the bacteria from generating amino acids, he said.
The process was then reversed, and the disease became evident when the mice, infected by the disabled bacteria, were injected with amino acids to compensate for the inability of the altered bacteria.
"Bacteria need to live on high protein and amino acids as sources of nutrition and energy in order to replicate in a host. This is what causes pulmonary disease," Abu Kwaik said. "No one has known how they generate sufficient sources of nutrients from the host to proliferate. Our work is the first to identify this process for any bacteria that cause disease."
He added that the type of host infected does not appear to affect the process. "Whether in a single-cell amoeba or a multi-cellular mammal, Legionella seems to know what to do; the process is the same, and is highly conserved through evolution. By interfering with the bacterium's sources of nutrients, we can stop it from thriving and causing disease."
Examining nutrient sources for organisms, with the goal of stopping them from acquiring nutrients, is a relatively new arena of basic research that deserves further study, he said. "We went after the basics: the food and energy sources, which are prerequisites for the bacteria to grow and cause disease. It is not a process that is well understood yet, but by first discovering how an organism gets nutrients by tricking the host into degrading proteins, and then interfering with that process, we can, in effect, starve it to death and prevent or treat the disease."
Men consistently outperform women on spatial tasks, including mental rotation, which is the ability to identify how a 3-D object would appear if rotated in space. Now, a University of Iowa study shows a connection between this sex-linked ability and the structure of the parietal lobe, the brain region that controls this type of skill.
The parietal lobe was already known to differ between men and women, with women's parietal lobes having proportionally thicker cortexes or "grey matter." But this difference was never linked back to actual performance differences on the mental rotation test.
UI researchers found that a thicker cortex in the parietal lobe in women is associated with poorer mental rotation ability, and in a new structural discovery, that the surface area of the parietal lobe is increased in men, compared to women. Moreover, in men, the greater parietal lobe surface area is directly related to better performance on mental rotation tasks. The study results were published online Nov. 5 by the journal Brain and Cognition.
"Differences in parietal lobe activation have been seen in other studies. This study represents the first time we have related specific structural differences in the parietal lobe to sex-linked performances on a mental rotation test," said Tim Koscik, the study's lead author and a graduate student in the University of Iowa Neuroscience Graduate Program. "It's important to note that it isn't that women cannot do the mental rotation tasks, but they appear to do them slower, and neither men nor women perform the tasks perfectly."
The study was based on tests of 76 healthy Caucasian volunteers -- 38 women and 38 men, all right-handed except for two men. The groups were matched for age, education, IQ and socioeconomic upbringing. When tested on mental rotation tasks, men averaged 66 percent correct compared to 53 percent correct for women. Magnetic resonance imaging (MRI) revealed an approximately 10 percent difference between men and women in the overall amount of parietal lobe surface area: 43 square centimeters for men and 40 square centimeters for women.
"It's likely that the larger surface area in men's parietal lobes leads to an increase in functional columns, which are the processing unit in the cortex," said Koscik. "This may represent a specialization for certain spatial abilities in men."
The findings underscore the fact that not only is the brain structure different between men and women but also the way the brain performs a task is different, said Peg Nopoulos, M.D., a study co-author and professor of psychiatry and pediatrics at the University of Iowa Carver College of Medicine.
"One possible explanation is that the different brain structures allow for different strategies used by men and women. While men appear able to globally rotate an object in space, women seem to do it piecemeal. The strategy is inefficient but it may be the approach they need to take," said Nopoulos, who also is a psychiatrist with University of Iowa Hospitals and Clinics.
"The big question remains whether this is nature or nurture. On the one hand, boys, compared to girls, may have opportunities to cultivate this skill, but if we eventually see both a strong performance and parietal lobe structural difference in children, it would support a biological, not just environmental, effect," Nopoulos added.
Source: University of Iowa
I'm reading about how the soon-to-be-launched NuSTAR is on the cutting edge of focusing x-rays: it captures 5 to 80 keV radiation by focusing it with optics that have a 10.15-meter focal length onto 2 sets of 4 32×32-pixel detector arrays. These are particularly "hard" (high-energy) x-rays, which is part of what makes the task difficult and the NuSTAR telescope novel.
If I understand correctly, imaging gets particularly difficult with electromagnetic radiation beyond a certain energy, as true gamma rays (above 100 keV) are detected with a family of radiation detectors that sense the Compton scatter or photoelectric absorption with an electrical pulse that is (in a naive sense) insensitive to the originating direction or location within the detector. It should be obvious that imaging can still be done with the use of an array of detectors, each constituting a single pixel, and these capabilities may improve with time as semiconductor detector technology evolves.
So the critical distinction I'm trying to establish is between x-rays and gamma rays. It would seem that we focus x-rays and do not focus gamma rays. For a very good example of researchers not focusing gamma rays, consider Dr. Zhong He's Radiation Measurement Group at UM, who do actual imaging of a gamma ray environment (the UM Polaris detector). They use a grid of room temperature semiconductors laid out bare in a room and use back-processing of the signals to triangulate a sequence of scatter-scatter-absorption reactions in 3D space. This is a lot of work that would be completely unnecessary if you could focus the gamma rays like we do for a large portion of the EM spectrum.
Both of the technologies I reference, the NuSTAR telescope and the UM Polaris detector, use CdZnTe detectors. Functionally, however, they are very different, in that the telescope uses optics to capture light from just a few arc-seconds of the sky.
My question is what is the specific limitation that prevents us from focusing photons above a certain energy? It seems this cutoff point is also suspiciously close to the cutoff between the definition of x-rays and gamma rays. Was this intended? Could future technology start using optics to resolve low-energy gamma rays?
Starting with Frege, the semantics (and pragmatics) of quotation has received a steady flow of attention over the last one hundred years. It has not, however, been subject to the same kind of intense debate and scrutiny as, for example, both the semantics of definite descriptions and propositional attitude verbs. Many philosophers probably share Davidson's experience: ‘When I was initiated into the mysteries of logic and semantics, quotation was usually introduced as a somewhat shady device, and the introduction was accompanied by a stern sermon on the sin of confusing the use and mention of expressions’ (Davidson 1979, p. 79). Those who leave it at that, however, miss out on one of the most difficult and interesting topics in the philosophy of language.
Quotation interests philosophers and linguists for various reasons, including these:
- When language is used to attribute properties to language or
otherwise theorize about it, a linguistic device is needed that
‘turns language on itself’. Quotation is one such device.
It is our primary meta-linguistic tool. If you don't understand
quotation, then you can't understand sentences like
1. ‘Snow is white’ is true in English iff snow is white.
2. ‘Aristotle’ refers to Aristotle.
3. ‘The’ is the definite article in English.
4. ‘bachelor’ has eight letters.
Those who are in the business of theorizing about language are particularly interested in understanding the mechanisms that render (1)–(4) intelligible.
- Theories of quotation address questions not just about how quotations refer, but also about what they refer to. In this regard, theories of quotation tell us what we are talking about in (1)–(4).
- Quotation is a paradigmatic opaque context, i.e. a context in which substitution of synonymous or co-referential expressions can fail to preserve truth-value. To understand the nature of opacity you must understand how quotation functions.
- Quotation is a device for talking about language, but it does so in a particularly tricky way: somehow quotation manages to use its referent to do (or at least to participate in) the referring; the referent of “Aristotle” is part of “Aristotle”. As such, it is a particularly interesting referential device.
- As with all issues in the philosophy of language, theories of quotation harbor assumptions about how best to draw the distinction between semantics and pragmatics, and they do so in a particularly illuminating way.
- Theories of quotation raise a range of important questions about how indexicals should be interpreted and about the nature of context sensitivity.
More generally, quotation presents semanticists with a particularly challenging range of puzzles. Those interested in such puzzles tend to find the study of quotation intrinsically interesting.
- 1. How to Characterize Quotation
- 2. Basic Quotational Features
- 3. Five Theories of Quotation
- 3.1 Proper Name Theory
- 3.2 Description Theory
- 3.3 Paratactic/Demonstrative Theory
- 3.4 Disquotational Theory
- 3.5 Identity Theory (or: Use Theory)
- 4. Mixed Quotation: Semantic or Pragmatic?
- 5. What Kinds of Entities do Quotations Refer to?
- 6. Alternative Quotational Devices
- 7. Formal-Material Modes
- Academic Tools
- Other Internet Resources
- Related Entries
Problems arise right at the outset since quotation is not an easy category to characterize. We start with reflections on how one might go about doing so.
There's an easy and relatively non-controversial way to identify quotation: it is the sort of linguistic phenomenon exemplified by the subject in (4) and the direct object in (5); these are instances of pure and direct quotation, respectively.
5. Quine said, ‘Quotation has a certain anomalous feature’.
That leaves open the question of which semantic and syntactic devices belong to that sort. Any characterization of a more specific nature, either of a syntactic or a semantic sort, moves into controversial territory immediately.
A syntactic characterization might go something like this: Take two quotation marks — single apostrophes in Britain, double in the United States, double angles in parts of Europe — and put, for example, a letter, a word, or a sentence between the two. What results is a quotation, as in (4)–(5). There are two problems with this identification:
- In spoken language, no obvious correlates of quotation marks
exist. Spoken utterances of (6) seem often to be unaccompanied
by lexical items corresponding to ‘quote/unquote’.
6. My name is ‘Donald’.
- Even if attention is restricted to written language, quotation is
not invariably indicated by the use of quotation marks. Sometimes, for
example, italicization is used instead, as in (7):
7. Bachelor has eight letters
Other devices employed as substitutes for quotation marks include bold face, indentation, and line indentation (cf. Quine 1940, pp. 23–24; Geach 1957, p. 82). There's no clear limit on the range of distinct written options, other than that they are used as quotation marks, but this renders the syntactic characterization incomplete, and thus, unsatisfactory.
Another tempting strategy is to say that an expression is quoted if it is mentioned. There are two problems with this characterization.
- Several theorists want to distinguish between mention and quotation (see Section 3.5). This definition would rule their theories out by stipulation.
- This characterization is no clearer than the intuitive distinction between use and mention, and matters become even more complicated as soon as we do try to characterize ‘mention’ and ‘use’. Isn't ‘bachelor’ in (4) in some sense used to refer to itself? If the response is that it is used, but not with its normal semantic value, then we are left with the challenge of defining ‘normal’ and ‘abnormal’ semantic values. That, again, leads immediately to controversy.
In order to remain as neutral as possible, we will stick with a simple identification-through-examples strategy, and emphasize that it is an open question as to how to identify the sort of linguistic devices to which the subject in (4) belongs.
Quotation is a subject matter that brings together a rather spectacular array of linguistic and semantic issues. Here are six basic quotational features of particular importance (BQ1-BQ6, for short) that will guide our search for an adequate account:
BQ1. In quotation you cannot substitute co-referential or synonymous terms salva veritate. An inference from (4) to (8), for example, fails to preserve truth-value.
4. ‘bachelor’ has eight letters
8. ‘unmarried man’ has eight letters
No theory of quotation is adequate unless it explains this feature (and no theory of opacity is complete before it explains why quotation has this feature).
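The failure of substitution described in BQ1 has a loose analogue in programming languages, where string literals create an opaque context of their own. The following Python sketch is purely an illustrative analogy (it is not part of any theory discussed in this entry): a claim about a quoted expression, such as its letter count, is not preserved when a synonym is substituted inside the quotes.

```python
# Rough programming analogue of BQ1 (illustrative only): quotation, like
# a string literal, is about the expression itself, not its semantic
# value, so substituting a synonym can change the truth-value.
synonym_of = {"bachelor": "unmarried man"}

# "'bachelor' has eight letters" is true of the quoted expression:
assert len("bachelor") == 8

# Substituting the synonymous expression yields a falsehood, since it
# is a claim about a different expression (13 characters, not 8):
assert len(synonym_of["bachelor"]) == 13
```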
BQ2. It is not possible to quantify into quotation. (9), for example, does not follow from (4):
9. (∃x)(‘x’ has eight letters)
An adequate theory of quotation must explain why not. The product of quoting ‘x’ is an expression that refers to the 24th letter of the Roman alphabet. The point is that quotation marks, at least in natural language, cannot be quantified into because they trap the variable; what results is a quotation that refers to that very variable.
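The way quotation marks "trap" a variable, as described above, again has a rough analogue in programming (an illustrative analogy only): a variable name occurring inside a string literal is not bound by anything outside the quotes; it is just a character of the quoted expression.

```python
# Rough analogue of BQ2 (illustrative only): inside a string literal,
# 'x' is merely the 24th letter of the alphabet; the variable x defined
# outside the quotes is "trapped" out, just as quotation marks trap a
# variable in natural language.
x = "bachelor"

assert len(x) == 8     # the variable x, used with its value
assert len("x") == 1   # the quoted 'x' names a single letter, not the variable
```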
BQ3. Quotation can be used to introduce novel words, symbols and alphabets; it is not limited to the extant lexicon of any one language. Both (10) and (11) are true English sentences:
10. ‘Φ’ is not a part of any English expression.
11. ‘’ is not an expression in any natural language.
An adequate theory of quotation must explain what makes this practice possible.
BQ4. There's a particularly close relationship between quotations and their semantic values.
“lobsters” and its semantic value are more intimately related than ‘lobsters’ and its semantic value, i.e., the relationship between “lobsters” and ‘lobsters’ is closer than that which obtains between ‘lobsters’ and lobsters. Whereas the quotation (i.e., “lobsters”), in some way to be further explained, has its referent (i.e., ‘lobsters’) contained in it, the semantic value of ‘lobsters’, i.e., lobsters, is not contained in ‘lobsters’. One way to put it is that an expression e is in the quotation of e. No matter how one chooses to spell this out, any theory of quotation must explain this relationship.
BQ5. To understand quotation is to have an infinite capacity, a capacity to understand and generate a potential infinity of new quotations.
We don't learn quotations one by one. Never having encountered the quotation in (10) or (11) does nothing to prohibit comprehending them (Christensen 1967, p. 362) and identifying their semantic values.
Similarly, there doesn't seem to be any upper bound on a speaker's ability to generate novel quotations. One natural explanation for this is that quotation is a productive device in natural language.
BQ6. Quoted words can be simultaneously used and mentioned. This is an important observation due to Davidson, as exemplified in (12).
12. Quine said that quotation ‘has a certain anomalous feature’.
(12) is called a ‘mixed quotation’ because it mixes direct quotation (as in (5)) and indirect quotation (as in (13)).
5. Quine said ‘Quotation has a certain anomalous feature’.
13. Quine said that quotation has a certain anomalous feature.
In this regard, the quotation in (12) is, in an intuitive sense, simultaneously used and mentioned. It is used to say what Quine said (viz. that quotation has a certain anomalous feature), and also to say that Quine used the words ‘has a certain anomalous feature’ in saying it.
Mixed quotation had not been much discussed prior to Davidson (1979) but it has recently taken center stage in discussions of theories of quotation. For those who believe themselves unfamiliar with the data, we point out that mixed quotation is one of the most frequently used forms of quotation. Casually peruse any newspaper and passages like the following from the New York Times are ubiquitous:
NYT Dec 7, 2004: The court ruled that the sentence was invalid because the document signed into law by President Bill Clinton contained a phrase that was illogical. The law said that defendants like Mr. Pabon, who was convicted two years ago of advertising to receive or distribute child pornography over the Internet, should be fined or receive a mandatory minimum sentence of 10 years ‘and both.’ The appeals court said this language ‘makes no sense.’
An adequate theory of quotation must account for how such dual use and mention is possible. (For further discussion of how to understand this requirement see Section 4 below).
In what follows we will refer back to these six features and make the following assumption:
It is a necessary adequacy condition on a theory of quotation that it either explains how quotations can exhibit features (BQ1)–(BQ6), or, if it fails to do so, then it must present an argument for why the unexplained feature(s) doesn't require explanation.
BQ1-BQ6 play an important role because theories of quotation are attempts to answer certain questions, and those questions won't have satisfactory answers unless BQ1-BQ6 are accounted for. Three questions can be thought of as the guiding questions for a theory of quotation:
Q1. In a quotation, what does the referring? There are three options:
- The quotation marks
- The expression between the quotation marks
- A complex of the expression and the quotation marks
Alternatively, one might hold that quotations fail to refer at all, but rather that speakers refer contingent upon the intentions with which they use an expression—with or without quotation marks.
Q2. How do quotations refer?
Are they names, descriptions, demonstratives, functors or some sui generis linguistic category?
In addition to Q1 and Q2, theories of quotation often try to answer a third question:
Q3. What do quotations refer to?
What kinds of objects are picked out? Is it always the same object or are quotations ambiguous?
Our primary focus in what follows will be on Q1 and Q2, but along the way we will also address Q3. (Section 5 is entirely devoted to Q3.)
It is standard practice in philosophy to distinguish the use of an expression from the mentioning of it. Confusing these two is often taken to be a philosophical mortal sin. Despite its ubiquitous appeal, it is controversial exactly how to draw the distinction. The initial thought is easy enough. Consider (D1) and (D2):
D1. Jim went to Paris
D2. ‘Jim’ has three letters
In (D1) the word ‘Jim’ is used to talk about (or signify, or denote) a person, i.e. Jim, and the sentence says about that person that he went to Paris. In (D2) the word is not used in that way. Instead, ‘Jim’ is used to talk about (or signify or denote) a word, i.e. ‘Jim’, and the sentence says about that word that it has three letters. In (D1), ‘Jim’ is being used and in (D2) it is being mentioned.
Other attempts to characterize the use-mention distinction quickly run into difficulties. Here are two familiar characterizations often found in philosophical introductions of the distinction:
- Expression E is mentioned in sentence S just in case E is quoted in S.
Problem: According to some theories, an expression can be mentioned without being quoted (see Objection 3 in section 3.3.2).
- Expression E is mentioned in sentence S just in case it is used to refer to itself in S.
Problems: First, notice that according to (ii), E must be used in order to mention it; that's potentially puzzling. More significantly, it is controversial whether standard meta-linguistic devices such as quotation are referring expressions. The theories presented in sections 3.2 and 3.3 treat quotations as descriptions. If descriptions are quantified expressions, then quotations are quantifiers, and quantifiers are typically not treated as referring expressions. Proposals along the lines of (ii) would also have to ensure that ‘the first seven words in this sentence’ in (D3) doesn't end up referring to ‘the first seven words in this sentence’:
D3. The first seven words in this sentence contain thirty letters.
In sum (and this isn't recognized often enough), any attempt to characterize the distinction between use and mention more sophisticated than the initial characterization in this section will need to address at least some of the tricky issues that face the various theories of quotation we describe in what follows.
There are, roughly, five kinds of theories of quotation that have been central to the discussion of quotation: the Proper Name Theory, the Description Theory, the Demonstrative/Paratactic Theory, the Disquotational Theory, and the Use/Identity Theory. In the following sections we discuss each of these and review their strengths and weaknesses. The first two we discuss primarily for historical and heuristic purposes. The last three are the central live options in contemporary discussion of quotation.
It is now almost a tradition in the literature on quotation to include a brief dismissive discussion of the Proper Name Theory of Quotation. This view is found in passages in Quine and Tarski (e.g. Quine 1940, pp. 23–26; 1961, p.140; Tarski 1933, p.159ff), and comments in passing in both Reichenbach (1947, p. 335) and Carnap (1947, p. 4) strongly suggest they too were adherents. It no longer is defended by anyone and there is even some debate about whether Quine and Tarski ever held the view (see, e.g., Bennett 1988, Richard 1986, Saka 1998, and Gomez-Torrente 2001).
Today the view is discussed in part because of its distinguished pedigree, but primarily for heuristic purposes. One common view is that the reasons why it is ‘an utter failure’ (Saka 1998, p. 114) reveal something about how to go about constructing an acceptable theory of quotation. Following this tradition, we begin our discussion of classical theories of quotation by presenting the Proper Name Theory and some of the reasons why the unanimous consensus is that it fails miserably.
According to the Proper Name Theory, quotations are unstructured proper names of the quoted expressions. Quine writes:
From the standpoint of logical analysis each whole quotation must be regarded as a single word or sign, whose parts count for no more than serifs or syllables. (Quine 1940, p. 26)
The personal name buried within the first word of the statement ‘Cicero’ has six letters, e.g., is logically no more germane to the statement than is the verb ‘let’ which is buried within the last word. (Quine 1940, p. 26)
Quotation-mark names may be treated like single words of a language, and thus like syntactically simple expressions. The single constituents of these names—the quotation marks and the expressions standing between them—fulfill the same function as the letters and complexes of successive letters in single words. Hence they can possess no independent meaning. Every quotation-mark name is then a constant individual name of a definite expression (the expression enclosed by the quotation marks) and is in fact a name of the same nature as the proper name of a man. (Tarski 1933, p. 159)
The Proper Name Theory nicely accommodates (BQ1)–(BQ3); that is, on this theory we see why co-referential expressions cannot be substituted for one another. According to the Proper Name Theory, the name ‘Cicero’ does not occur in ‘‘Cicero’’; hence, from the fact that Cicero = Tully, it does not follow that ‘Tully’ can be substituted for ‘Cicero’ in ‘‘Cicero’’. As Quine puts it, ‘[t]o make substitution upon a personal name, with such a context, would be no more justifiable than to make a substitution upon the term ‘cat’ within the context ‘cattle’’ (Quine 1961, p. 141). The Proper Name Theory permits the creation of new quotations much as natural languages permit the introduction of new names. And it prohibits quantifying in, since each quotation is a single word, and so there is nothing to quantify into. To see that this is so, think of the left and right quotation marks as the 27th and 28th letters of the Roman alphabet; then quantifying into (4) in deriving (9) makes as much sense as deriving (16) from (14) and (15) by quantifying into (14):
4. ‘bachelor’ has eight letters.
9. (∃x)(‘x’ has eight letters)
14. There is a birth dearth in Europe.
15. Earth is the third planet from the sun.
16. (∃x)(x is the third planet from the sun & there is a birth dx in Europe)
The occurrence of the 24th letter of the alphabet in (9), as Quine notes with regards to a similar sentence, ‘is as irrelevant to the quantifier that precedes it as is the occurrence of the same letter in the context ‘six’’ (Quine 1961, p. 147).
Here are three objections to the Proper Name Theory of Quotation. (The main objections are in Davidson (1979, pp. 81–83), though some were anticipated by Geach (1957, p.79ff).)
Objection 1: The Proper Name Theory cannot explain how we can generate and interpret an indefinite number of novel quotations (see BQ5).
If quotations were proper names and lacked semantic structure altogether, then there would be no rule for determining how to generate or interpret novel quotations. To understand one would be to learn a new name. (Remember, the quotation marks, according to Quine, carry no more significance than the serifs you see on these letters.) But (11), e.g., can be understood by someone who has never encountered its quoted symbol before.
11. ‘’ is not a letter in any language
Understanding (11) is not like understanding a sentence with a previously unknown proper name. Upon encountering (11), it would seem that you know exactly which symbol is being referenced in a way that you do not with a name you’ve never before encountered.
This is the most obvious flaw in the Proper Name Theory and its obviousness has caused some philosophers to doubt whether Quine or Tarski ever held this view (see references above).
Objection 2: There's a special relationship between quotations and their semantic values (see BQ4).
According to the Proper Name Theory, the relationship between “lobsters” and ‘lobsters’ is no closer than the relationship between ‘lobsters’ and lobsters. That seems to miss the fundamental aspect of quotation spelled out in (BQ4).
Davidson summarizes these first two objections succinctly:
If quotations are structureless singular terms, then there is no more significance to the category of quotation-mark names than to the category of names that begin and end with the letter ‘a’ (‘Atlanta’, ‘Alabama’, ‘Alta’, ‘Athena’, etc.). On this view, there is no relation, beyond an accident of spelling, between an expression and the quotation-mark name of that expression. (Davidson 1979, pp. 81–82; cf., also, Garcia-Carpintero 1994, pp. 254–55)
Objection 3: Proper Name Theory leaves no room for dual use and mention (see BQ6).
If quotations were proper names, and if their interiors lacked significant structure, there would seem to be no room for dual usage of the kind found in (12); indeed, on the Proper Name Theory (12) has the same interpretive form as (17):
12. Quine said that quotation ‘has a certain anomalous feature’.
17. Quine said that quotation Ted.
That is, the Proper Name Theory fails to account for (BQ6).
Other objections have been raised against the Proper Name Theory of Quotation. Since the view has no proponents today, we will not pursue these objections here. For further discussion, see Davidson (1979), Cappelen and Lepore (1997b) and Saka (1998).
The Description Theory of Quotation was introduced in order to guarantee that ‘a quoted series of expressions is always a series of quoted expressions’ (Geach 1957, p. 82) and not ‘a single long word, whose parts have no separate significance’ (ibid., p. 82). According to this theory, there is a set of basic units in each language: words, according to Geach (ibid., Ch. 18 and 1970); letters, according to Tarski (1956, p. 160) and Quine (1960, p. 143, p. 212).
This view retains the Proper Name Theory for basic quotations, e.g. according to Quine, “a” is a name of one letter, “b” a name of another, etc. For Geach, each word has a quotation name. Complex quotations, i.e., quotations with more than one basic unit, are understood as descriptions of concatenations of the basic units. Here is an illustration from Geach (where ‘-’ is his sign for concatenation):
…the quotation ‘‘man is mortal’’ is rightly understood only if we read it as meaning the same as ‘‘man’-‘is’-‘mortal’’, i.e., read it as describing the quoted expression in terms of the expressions it contains and their order. (Geach 1957, pp. 82–83)
For Quine and Tarski, (4) gets analyzed as (18):
4. ‘Bachelor’ has eight letters.
18. ‘B’-‘a’-‘c’-‘h’-‘e’-‘l’-‘o’-‘r’ has eight letters
where ‘-’ is their sign for concatenation and the individual quotations are names of the letters.
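The analysis of (4) as (18) can be modeled with a short, purely illustrative sketch (the function names below are invented for this toy model, not drawn from any source): each letter gets a basic quotation name, and a complex quotation is treated as a description of the concatenation of those names.

```python
# Toy model of the Quine/Tarski letter-based version of the Description
# Theory (illustrative sketch only). A complex quotation is analyzed as
# a concatenation of basic letter-names joined by the sign '-'.

def describe_quotation(expression: str) -> str:
    """Rewrite a quotation of `expression` as a concatenation description.
    (Hyphens occurring inside the quoted expression are not handled.)"""
    return "-".join(f"'{ch}'" for ch in expression)

def referent_of(description: str) -> str:
    """Recover the quoted expression from a concatenation description
    by stripping each basic name's quote marks and rejoining."""
    return "".join(part[1:-1] for part in description.split("-"))

desc = describe_quotation("Bachelor")
print(desc)  # 'B'-'a'-'c'-'h'-'e'-'l'-'o'-'r'
assert referent_of(desc) == "Bachelor"
assert len(referent_of(desc)) == 8  # "'Bachelor' has eight letters"
```

Unlike the Proper Name Theory, this compositional analysis needs only finitely many basic names plus one concatenation rule, which is why the Description Theory can in principle accommodate (BQ5).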
Davidson characterizes the difference between the two versions as follows:
In primitive notation, which reveals all structure to the eye, Geach has an easier time writing (for only each word needs quotation marks) but a harder time learning or describing the language (he has a much larger primitive vocabulary—twice normal size if we disregard iteration). (Davidson 1979, p. 84)
In one respect the Description Theory is an immense improvement over the Proper Name Theory: it deals with no more than a finite set of basic names, thus potentially accommodating (BQ5). In other respects, however, the theory is, by a wide consensus, not much of an improvement over the Proper Name Theory. At the basic level, the theory still treats quotations as names. So, at that level, it inherits all of the problems confronting the simpler Proper Name Theory and is for that reason not considered much more attractive. Some of the most obvious objections are these (we mention only the most obvious ones, since the theory is not central to contemporary discussions):
- At the basic level (i.e., the level of words or letters), there's no rule for determining how to interpret and generate novel quotations (BQ5). This is so because there is no a priori reason to believe there are finitely many basic expressions (cf., Lepore 1999).
- At the basic level, it doesn't explain the special relationship between the expression and the quotation of that expression (BQ4). It's obvious to us that “Sam” and “Alice” do not refer to the same expression, but how can Geach explain this triviality if both are just proper names; ditto for Quine with respect to “a” and “b” (cf., Davidson 1979, p. 87).
- At the basic level, it fails to account for dual use and mention (BQ6). This is particularly a problem for Geach's version. Any account, including the Proper Name Theory, according to which the semantic function of word-tokens inside quotation marks is just to refer to word-types (or some other type of linguistic entity) fails to assign correct truth-conditions to (12). What we have seen is that in order to account for mixed cases as in (12) a theory of quotation must do two things: it must account for how the complement clause of (12) can be employed to effect simultaneously a report that Quine uttered the words ‘has a certain anomalous feature’ and one that Quine said that quotation has a certain anomalous feature.
- According to Davidson, the Description theory can't explain why we can't quantify into quotation (i.e. fails to account for BQ2). The argument to this effect is intriguing, but not entirely easy to unpack. The interested reader should look at Davidson (1979, pp. 86–87).
The seminal paper on quotation in the twentieth century is, almost by universal consensus, Davidson's ‘Quotation’ (1979). It is without comparison the most discussed and influential paper on the subject. The view Davidson defends is called the Demonstrative Theory. (It is also called the Paratactic Theory; though we shall use the former label in our discussion.) The Demonstrative Theory is presented in the final pages of ‘Quotation’ and the key passages are these:
…quotation marks…help refer to a shape by pointing out something that has it…The singular term is the quotation marks, which may be read ‘the expression a token of which is here’. (Davidson 1979, p. 90)
On my theory which we may call the demonstrative theory of quotation, the inscription inside does not refer to anything at all, nor is it part of any expression that does. Rather it is the quotation marks that do all the referring, and they help to refer to a shape by pointing out something that has it. (Davidson 1979, p. 90)
Quotation marks could be warped so as to remove the quoted material from a sentence in which they play no semantic role. Thus instead of:
‘Alice swooned’ is a sentence.
we could write:
Alice swooned. The expression of which this is a token is a sentence. (Davidson 1979, p. 90)
The Demonstrative Theory has three central components:
- The quotation marks are treated as contributing a definite
description containing a demonstrative to sentences in which they
occur, i.e., the quotation marks in (4) become ‘The expression of
which this is a token’, as in (19):
4. ‘Bachelor’ has eight letters.
19. Bachelor. The expression of which that is a token has eight letters.
- In the logical form of a sentence containing a quotation, the token that occurs between the two quotation marks in the surface syntax is discharged, so to speak, from the sentence containing the quotation. What occurs between the quotation marks in the surface syntax is not part of the sentence in which those quotation marks occur. It is demonstrated by a use of the quoting sentence.
- Utterances of quotation marks, by virtue of having a demonstrative/indexical ingredient, refer to the expression instantiated by the demonstrated token, i.e., the expression instantiated by the token that in surface syntax sits between the quotation marks.
The Demonstrative Theory is attractive for at least five reasons:
- To grasp the function of quotation marks is to acquire a capacity with infinite applications (BQ5). The Demonstrative Theory explains why: there's no limit to the kinds of entities we can demonstrate. Hence, (BQ5) is explained without making quotation a productive device (for elaboration see Cappelen and Lepore 1997b).
- Opacity is explained (BQ1): There's no reason to think that two
sentences demonstrating different objects will have the same
truth-value. (4) and (8) demonstrate different objects, so there's no
more reason to think the move from (4) to (8) is truth preserving than
there is to think that the move from (20) to (21) is:
20. That's nice.
21. That's nice.
- We have an elegant explanation of mixed quotation, i.e., we can
explain (BQ6). Davidson says:
I said that for the demonstrative theory the quoted material was no part, semantically, of the quoting sentence. But this was stronger than necessary or desirable. The device of pointing can be used on whatever is in range of the pointer, and there is no reason why an inscription in active use can't be ostended in the process of mentioning an expression. (Davidson 1979, p. 91)
This, according to Davidson, is what goes on in (12). A token that is being used for one purpose is at the same time demonstrated for another: ‘Any token may serve as target for the arrows of quotation, so in particular a quoting sentence may after all by chance contain a token with the shape needed for the purposes of quotation’ (Davidson 1979, pp. 90–91; cf., also, Cappelen and Lepore 1997b). On this view, (12) is understood as (22):
12. Quine said that quotation ‘has a certain anomalous feature’
22. Quine says, using words of which these are a token, that quotation has a certain anomalous feature.
(Here the ‘these’ is accompanied by a pointing or indexing to the token of Quine's words.)
- There is no mystery about how to introduce new vocabulary; since there's no limit to what can be demonstrated, there's no limit to what can be quoted. (BQ3) is explained.
- Quantifying-in is obviously ruled out, since the quoted token is placed outside the quoting sentence, i.e., the Demonstrative Theory can explain (BQ2).
The Demonstrative Theory is both bold and radical. It triggered an entire cottage industry devoted to criticizing and defending it. For proponents, see, for example, Partee (1973), Garcia-Carpintero (1994), and Cappelen and Lepore (1997b); for critics, see just about anyone else writing on quotation after 1979.
In what follows we present five criticisms of the Demonstrative Theory. Needless to say, the list is not exhaustive (e.g., see Sorensen 2008 and Saka 2011 on the problem of empty quotation), and indeed, each objection has triggered lively discussion which space limitations prohibit our taking up here.
Objection 1. If the Demonstrative Theory were correct, it should be possible for (4) to demonstrate, e.g., a penguin.
Here is an argument that mimics a range of objections raised against Davidson's account of the semantics of indirect reports (Burge 1986, Stainton 1999).
Recall that according to Davidson the logical form of (4) is (19).
4. ‘Bachelor’ has eight letters
19. Bachelor. The expression of which this is a token has eight letters.
(19) contains a demonstrative and demonstratives refer to whatever is demonstrated with their use. What is demonstrated on a given occasion depends on the speaker (either the demonstration or the intention or some combination of the two). It should be possible, then, for a speaker to utter (4) and not demonstrate the exhibited token of ‘bachelor’. That is to say, if there really is a demonstrative in (19), that demonstrative should have the same kind of freedom that other demonstratives have: it should be able to reference, for example, a nearby penguin. Of course, no utterance of (4) makes reference to a penguin. So the Demonstrative Theory is wrong. (For replies to this objection see Cappelen and Lepore 1999b.)
Objection 2. The Problem of Relevant Features
According to Davidson, a quotation refers to an expression indirectly, by referring to a token that instantiates that expression. Davidson thinks expressions are shapes or patterns (see Davidson 1979, p. 90). A problem for this view is that any one token instantiates indefinitely many distinct shapes or patterns, i.e. many different expressions. So how, on Davidson's view, do we get from a particular token to a unique type, i.e., from a token to an expression?
Jonathan Bennett formulates the problem as follows:
Any displayed token has countless features, and so it is of countless different kinds. Therefore, to say ‘the inscription-type instantiated here: Sheep’ or, what amounts to the same thing, ‘the inscription-type each token of which is like this: Sheep’ is to leave things open to an intolerable degree. How do we narrow it down? That is what I call the problem of relevant features. It urgently confronts the demonstrative theory which must be amplified so as to meet it. (Bennett 1988, p. 403; see also Washington 1992, pp. 595–7.)
A related worry is this: Read (4) out loud. It seems obvious that a spoken utterance says (makes) the same claim as a written utterance of (4). On the Demonstrative Theory it is unclear why this should be so: the spoken utterance demonstrates a vocal pattern, and the written utterance a graphemic pattern. They seem to be attributing properties to different objects. (Several suggestions are on offer for how to amend the Demonstrative Theory in this respect: cf., Garcia-Carpintero 1994, Cappelen and Lepore 1997b, 1999c, and Davidson 1999.)
Objection 3. The Problem of Missing Quotation Marks
According to Davidson, quotation marks are what are used to do the referring. They are descriptions containing demonstratives whose uses refer to whatever pattern is instantiated by the demonstrated token. This makes the presence of quotation marks essential. Much recent work on quotation argues that we can quote (or do something quote-like) without quotation marks and that a theory of quotation should be capable of explaining how quotation can take place in the absence of quotation marks. Here is Reimer's version of this objection:
Consider the following sentence:
(3) Cat has three letters
Here, we have a case in which an expression is quoted—not by means of quotation marks—but by means of italicization. But surely it would be absurd to suppose (consistently with Davidson's view) that the italicization of (3)'s subject term is itself a demonstrative expression! (Reimer 1996, p. 135)
The same idea is expressed by Washington (1992):
In conversation, oral promptings (‘Quote-unquote’) or finger-dance quotes can often be omitted without impairing the intelligibility or well-formedness of the utterance. When I introduce myself, I do not say ‘My name is quote-unquote Corey,’ nor do I make little finger gestures or even use different intonation in order to show that it is my name and not myself that is being talked about. (Washington 1992, p. 588; Saka 1998, pp. 118–19; Recanati 2001; and Benbaji 2004a, 2004b.)
The Demonstrative Theory depends on the presence of quotation marks (inasmuch as they are what get used to do the referring), so if quotation can occur without quotation marks (as in the Reimer and Washington cases), it's hard to see how the Demonstrative Theory is adequate.
Proponents have been unimpressed. Several possible replies spring to mind. So, consider an utterance of (6):
6. My name is Donald
- One thing a Demonstrative Theorist might say is that there are no missing quotation marks in (6): they are in the logical form of the sentence, not in its surface syntax.
- Alternatively, quotation marks for an utterance of (6) could be generated as conversational or conventional implicatures. (He can't be saying that Donald, the person, is his name, since he knows that that is false… so he must be conversationally implicating that the expression of which he used a token is a name; cf., Garcia-Carpintero 1994, pp. 262–63.) Or one might appeal to the distinction between semantic reference and speaker reference: (6) is grammatically correct but false; nonetheless, someone can succeed in communicating something true about Donald's name if he succeeds in conveying to his audience his intention to refer to it (cf., Gomez-Torrente 2001).
- Finally, a Demonstrative Theorist can argue that these other quotation-like phenomena are just that—quotation-like. They require a separate treatment. There's no need for a unified theory (cf., Cappelen and Lepore 2003).
Objection 4. The Problem of Iteration
The Demonstrative Theory seems to have difficulty dealing with iterated quotation. The quotation in (24) refers to the quotation in (23):
23. ‘smooth’ is an English expression.
24. “smooth” is an English expression.
The Demonstrative Theory's account for (23) is (25).
25. Smooth. The expression of which that is a token is an English expression
How, then, can the account accommodate (24)? (24), after all, includes two sets of quotation marks. It might seem like the Demonstrative Theory would have to treat it as the ungrammatical (26) or the unintelligible (27).
26. Smooth. That that is an English expression.
27. Smooth. That. That is an English expression.
This objection has been raised by Saka (1998, pp. 119–20), Reimer (1996), and Washington (1992).
In response, Demonstrative Theorists insist that quotations are not iterative. Cappelen and Lepore write:
However, it does follow, on the demonstrative account, that quotation is not, contrary to a common view, genuinely iterative. Quoted expressions are exhibited so that speakers can talk about the patterns (according to Davidson) they instantiate. The semantic properties of the tokens are not in active use; they are semantically inert…So, quotation marks within quotation marks are semantically inert. (Cappelen and Lepore 1997b, pp. 439–40)
For further discussion of whether quotation is a genuinely iterative device, see Cappelen and Lepore (1999a) and Saka (forthcoming).
Objection 5. The Problem of Open Quotation: Dangling Singular Terms
Recanati (2001) focuses on cases where quoted expressions do not serve as noun phrases in sentences. He has in mind cases like (29) and (30):
- Stop that John! ‘Nobody likes me’, ‘I am miserable’… Don't you think you exaggerate a bit?
- The story-teller cleared his throat and started talking. ‘Once upon a time, there was a beautiful princess named Arabella. She loved snakes and always had a couple of pythons around her…’
In these cases it looks like the Demonstrative Theory would have to postulate a dangling singular term, something like (31) or (32):
- Stop that John. That. Nobody likes me. That. I am miserable.…Don't you think you exaggerate a bit?
- The story-teller cleared his throat and started talking. That. Once upon a time, there was a beautiful princess named Arabella. She loved snakes and always had a couple of pythons around her…
In response to the idea that (29) is elliptical for (33),
- Stop that John! You say ‘Nobody likes me’, ‘I am miserable’… Don't you think you exaggerate a bit?
Recanati replies: ‘I deny that [(29) and (33)] are synonymous. Nor are there any grounds for postulating ellipsis here except the desire to save the theory in the face of obvious counterexamples’ (Recanati 2001, p. 654).
Less baroque than the Demonstrative Theory, the Disquotational Theory is probably the simplest, most natural and obvious account of quotation. It is endorsed by a wide range of authors, often in passing, as if completely obvious. A simple version of it can be found in Richard's Disquotational Schema (DQR):
DQR: For any expression e, the left quote (lq) followed by e followed by the right quote (rq) denotes e (Richard 1986, p. 397)
Ludwig and Ray write:
Its semantic function is given by the following reference clause in the theory: ref(┌‘E’┐) = E (Ludwig and Ray 1998, p. 163, note 43)
where the ‘┌’ and ‘┐’ are the left and right corner quotes. (See also Mates 1972, p. 21; Wallace 1972, p. 237; Salmon 1986, p. 6; Smullyan 1957.)
On this account, quotations are not proper names, or descriptions or demonstratives but rather they are functors that take an expression as their argument and return it as value.
The two most obvious strengths of the Disquotational Theory are its simplicity and intuitiveness. If asked how quotation functions, the obvious reply is something along the lines of (DQR). It is also an axiom (or axiom schema) that's pleasingly simple and requires no complicated assumptions about the surface structure of the sentence (in this respect, it has a clear edge on the Demonstrative Theory).
In addition to being exceedingly simple and intuitive, this theory easily accounts for three of the Basic Facts about Quotation:
- It explains opacity: ‘bachelor’ and ‘unmarried man’ have different semantic values because what is between the quotation marks are distinct expressions. An expression's semantic value is irrelevant for determining the semantic value of the quotation of that expression, thus accounting for (BQ1).
- Since quotations are functor expressions without internal structure, (BQ2) is explained: there's no possibility for quantifying into a quotation on this view.
- Since quotations, as functors, map all expressions onto themselves, this account can explain the special relationship between a quotation and the quoted expression—namely, identity—thus explaining (BQ4).
Even with these advantages, at least three serious difficulties confront the Disquotational Theory:
- (BQ3) says that we can use quotation to refer to symbols that are not in the English lexicon, as in (9) and (10):
9. ‘Φ’ is not a part of any English expression.
10. ‘’ is not an expression in any language.
(DQR) says we can take any expression e, put quotation marks around e, and what results is an expression that refers to e. What exactly is meant by ‘any expression’ in (DQR)? Richard offers the following answer:
It is easy enough to come up with a finite list of elements (the letters, punctuation symbols, the digits, the space, etc.) and an operation (concatenation) with which one can generate all of the concatenates…If we are formalizing a grammar for a language with quotation names, we would include, as part of the specification of the lexicon, a proviso to the effect that, for each concatenate e, the left quote (lq), followed by e, followed by the right quote (rq) is a singular term (Richard 1986, pp. 386–89, our emphasis).
If this is how expressions are generated, how then are we to account for the truth of (9) and (10)? More generally, the worry is this: (DQR) needs to specify in some manner or other the domain of expressions over which it quantifies. How can it do this without unreasonably limiting the kinds of symbols that can be quoted? (See Lepore (1999) for elaboration on this point.)
- According to (BQ6), a theory of quotation should leave room for dual use and mention. It is hard to see how (DQR) leaves such room. If quotes are referring expressions (as they are according to DQR), then if we let ‘Ted’ name the expression ‘has a certain anomalous feature’, (12) should say the same as, express the same proposition as, (17).
12. Quine said that quotation ‘has a certain anomalous feature’.
17. Quine said that quotation Ted.
Not only should (12) and (17) express the same proposition, according to DQR, they should also have the same logical form. That's an obviously incorrect account of (12). (For further discussion of mixed quotation see Section 4 below.)
- Those who raise Objection 3 against the Demonstrative Theory would probably raise the same objection here: the Disquotational Theory fails to account for quotation without quotation marks. All the work is being done by quotation marks, so there's no room for quotation without quotation marks. (DQR) proponents would presumably be as unimpressed as Demonstrative Theorists, and their replies would be much the same (see above).
The label ‘the Identity Theory of Quotation’ was first used, as far as we know, by Washington (1992), though he attributes the view to Frege (1892) and Searle (1969). However, the passages in which this view was allegedly offered prior to Washington hardly count as presenting a theory; they read more like dogmatic pronouncements devoid of argumentation. Since Washington's paper was published, related views have been developed in more detail by Saka (1998, 2004), Reimer (1996), Recanati (2000, 2001) and others.
Washington's presentation of the Identity Theory is somewhat compressed—the key passage is this:
The quotation as a whole is analyzed into the marks that signify quotational use of the quoted expression and the quoted expression itself used to mention an object. All expressions, even those whose standard uses are not as mentioning expressions, become mentioning expressions in quotation…a quoted expression is related to its value by identity: a quoted expression mentions itself. (Washington 1992, p. 557)
- The use of quotation marks (as in (4)) is a derivative phenomenon. The basic phenomenon is what Washington calls quotational use (Washington 1992, p. 557)
- The primary function of quotation marks is to indicate that words are used quotationally (or mentioned) and not (merely) used with their regular extensions.
- Quotation marks do not refer, according to Washington; that which is doing the referring in the quotation is the expression itself, so in (2), for example, it is ‘Aristotle’, not “Aristotle”, that refers to ‘Aristotle’, i.e., ‘Aristotle’ refers to itself. It refers to itself because it is being used quotationally (not with its regular extension).
One way to get a handle on this kind of view is to consider a spoken utterance of (6)
6. My name is Donald
According to the Use/Identity theory, (6), when spoken, is grammatical (Washington 1992, pp. 588–90). There are no missing (or implicit) quotation marks. When uttered by a person whose name is ‘Donald’ it is true (if ‘Donald’ is used quotationally). The function of quotation marks in written language is simply to indicate that words are being used in this special, quotational, way, i.e. not (only) with their regular extensions. In other words, ‘Donald’ can be used in two different ways: with its usual semantic value (its regular referent) or quotationally. In the latter case, its semantic value is an expression.
We wish that the Identity Theory were not called ‘the Identity Theory’. (In private communication, Washington has expressed the same sentiment.) The ‘identity’ component is picked up from the formulation, ‘a quoted expression is related to its value by identity: a quoted expression mentions itself’. This formulation, however, is deeply misleading (as Washington himself has pointed out to us), since according to both Washington (and later, for example, Saka (1998)), quotations are ambiguous. They can, according to Washington, refer to types, tokens, or shapes (see Washington 1992, p. 594) and according to another proponent of this kind of view, Saka, they are even more flexible (Saka 1998, see further presentation of Saka's view below). A better label would be ‘the Use-Theory of Quotation’, since this emphasizes the point that a proper understanding of quotation requires appeal to a special way of using language. In what follows, we use the ungainly compromise ‘the Use/Identity Theory’.
It is useful to contrast Washington's account with the Disquotational Theory. Recall that, according to the Disquotational Theory, quotation marks have a semantic function, and that function is spelled out in the disquotational schema (DQR). (DQR) is a semantic axiom. It treats quotation marks as identity functions. Speakers' intentions figure not at all in this axiom (other than the intention to speak English). There is no need, on the Disquotational Theory, to appeal to a special kind of quotational usage. For Washington, the quotation marks have no genuine semantic function. They are no more than a heuristic device for indicating that expressions are used in a special way, i.e. quotationally.
Several recent views share important components of Washington's. Reimer (2003) combines versions of the Demonstrative Theory and the Identity Theory. Recanati (2000, 2001, 2010) doesn't explicitly discuss any version of the Identity Theory, but his theory incorporates some of its components; it is distinctive in focusing on what he calls ‘Open Quotation’ (see Objection 5 to the Demonstrative Theory above) and the iconic aspects of quotation (see Recanati 2001 for elaboration). Two theories might be worth a closer look for those interested in exploring Use/Identity Theories further (the second of these is mostly of historical interest):
- Saka (1998, 1999, 2003) has developed a theory that has much in common with Washington's (though it also differs in important respects). He agrees with Washington in emphasizing ‘quotational use’ (Saka calls it ‘mentioning’), but Saka has more to say about mentioning than Washington has to say about quotational use (see Saka 1998 and 2003). Saka also goes further than Washington in claiming that quotation marks are not required for mentioning even in written language. (For Washington, it is only in spoken language that we can quote without quotation marks.) According to Saka, (34) ‘is a grammatical and true sentence’ (Saka 1998, p. 118).
34. Cats is a noun.
Even though Saka agrees with Washington that quotation marks ‘announce “I am not (merely) using expression X; I am also mentioning it”’ (Saka 1998, p. 127), he differs from Washington in that he assigns them a genuine syntactic and semantic function (Saka 1998, p. 128). In this respect he incorporates components of what we above called the Disquotational Theory.
Saka's account also differs from Washington's in that he emphasizes that quotations are ambiguous (or indeterminate) and that what they refer to depends on the speaker's intentions (see Saka 1998, pp. 123–4). For further discussion of this point, see Section 5 below.
- There is one view we have not discussed and which might (admittedly with some difficulty) be squeezed into the category of the Use/Identity Theory. Quine said a number of things about quotation; in one passage he writes: ‘…a quotation is not a description but a hieroglyph; it designates its object not by describing it in terms of other objects, but by picturing it’ (Quine 1940, p. 26). Davidson (1979), taking his cue from this passage and others, baptized the view intimated in Quine's passage as ‘the Picture Theory of Quotation’ (cf., also, Christensen 1967, p. 362). As Davidson notes, on this view:
…it is not the entire quotation, that is, expression named plus quotation marks, that refers to the expression, but rather the expression itself. The role of the quotation marks is to indicate how we are to take the expression within: the quotation marks constitute a linguistic environment within which expressions do something special… (Davidson 1979, pp. 83–84)
Notice that according to Davidson's description of this view, the quotation marks per se have no semantic function; rather, they indicate that the words are being used in a special way. They are being used ‘autonomously’, that is, to name themselves. So understood, the Picture Theory has at least this much in common with a Use/Identity Theory: they agree that quotation marks are inessential; they only indicate a special use. They indicate that expressions are being used as a picture (or as a hieroglyph). The Picture Theory—if it's even appropriate to call it a theory—is never elaborated in any great detail and we suspect that if it were, it would become obvious that it is a version of the Use/Identity Theory.
Use/Identity Theorists claim that their theories are explanatorily more powerful than traditional semantic theories. By seeing quotation marks as a parasitic phenomenon, they are able to explain the semantics of both quotation and this more general phenomenon in a unified way.
There's a great deal of specific data that these theories claim to be able to explain. The most important of these is the possibility of mentioning without quotation marks (as in spoken language and in written language when no quotation marks are used). If we take appearances at face value, that means meta-linguistic discourse (call it mentioning or quotational use) can take place in the absence of quotation marks. Hence, an account of meta-linguistic discourse must proceed independently of an account of the semantics (or pragmatics) for quotation marks.
The following are additional claims made on behalf of the Use/Identity Theory (we take it to be an open question at this point whether these points are sustainable):
- Whatever can be mentioned can be quoted. If new symbols and signs can be mentioned, then they can also be quoted; hence, (BQ3) is satisfied.
- There is, on this view, often a particularly close relationship between the quoted material and the referent; sometimes it is identity, sometimes it is instantiation, etc, so in various ways we might say that (BQ4) is satisfied.
- If our capacity for mentioning is limitless, then we have an account of how quotation can be too (i.e., we have at least the beginning of an account of (BQ1)).
Discussion of the Use/Identity Theories is not yet as extensive as discussion of the Demonstrative Theory, so there are fewer objections to report. We discuss four concerns that have surfaced in various discussions. (See, however, Johnson and Lepore 2011.)
Question about the Relevance of Quotational Use/Mention: Use/Identity Theories put a great deal of weight on the idea that the semantics for quotation cannot be developed without an account of quotational usage (mention in Saka's terminology.) There are several reasons for doubting this, two of which are these:
- It is not at all clear that the alleged phenomenon of mention without quotation marks is genuine. It might very well be, as mentioned in connection with Objection 3 to the Demonstrative Theory, that it is not possible to mention without using quotation marks. The cases appealed to might all turn out to be cases in which a conversational or conventional implicature is generated and where that implicature contains quotation marks. Alternatively, the quotation marks might actually be in the logical form of the sentence through some form of ellipsis. (See Garcia-Carpintero 1994 and Cappelen and Lepore 1999.)
- Even if we suppose that the phenomenon of mentioning without quotation marks is genuine, it is not at all clear why we should consider it relevant to the semantics for quotation. Suppose that you're in the business of trying to develop a semantic theory of sentences with quotation marks. Suppose it also turns out that it is possible to talk about language by mentioning without the use of quotation marks. This might just be a different way of talking about language. An interesting phenomenon, no doubt, but not one that needs to have anything to do with the semantics for sentences containing quotation marks. It does not follow from there being a variety of ways in which language can be used to talk about language, that all of these ways are relevant to the semantics of quotation.
Question about the Semantics-Pragmatics Divide: On the Use/Identity Theory, a lot of work is done by pragmatic mechanisms. The appeal to speaker intentions plays a central role on all levels of analysis. On Saka's view, for example, what a quoted expression refers to is largely up to the speaker's intentions (and, maybe, what's salient in the context of utterance). A consequence of this view is that there is no guarantee, for example, that an utterance of (35) or (36) will be true:
35. ‘a’ = ‘a’
36. ‘run’ is a verb in English
The two occurrences of ‘a’ in (35) could refer to different objects; the ‘run’ in (36) might refer to, for example, a concept (see Cappelen and Lepore 1999b and Saka 1999, 2003, 2011).
Over-generation Problems: According to Use/Identity Theories, you can do a lot with quotation. The question is whether this results in such theories running into problems of over-generation. Take, for example, Saka's claim that a quotation refers to an item associated with the expression. There's only one restriction: this item cannot be the expression's regular extension. If this is the sole restriction on what quotations can be used to refer to, we should be able to do things with quotation that there's no evidence that we can. It could, for example, be the case that in a particular context, the (regular) extension of ‘love’, call it love, was associated with the expression ‘money’; maybe, for some reason, that association was contextually salient. Nonetheless, (37) cannot be used to say that love plays a central role in many people's lives.
37. ‘Money’ plays a central role in many people's lives.
Use/Identity Theories have to explain what blocks such readings (or show that they are possible). (See Saka 2003, and Cappelen and Lepore 2003.)
Dual Use-Mention (BQ6): Washington says that quotation marks indicate quotational usage and that expressions used quotationally refer to themselves (or some related entity). If so, the logical form of (12) should be that of (17), clearly not a correct result.
12. Quine said that quotation ‘has a certain anomalous feature’.
17. Quine said that quotation Ted.
It is at least a challenge to Identity/Use theorists to explain how the theory can accommodate simultaneous use and mention.
In addition to the above points, there's now a lively debate about the specifics of identity/use theories. For discussions of Saka, cf., Cappelen and Lepore (2003), Reimer (2003), and for discussions of Recanati, cf., Cappelen and Lepore (2003), Reimer (2003), Benbaji (2003, 2004a, 2004b), and Cumming (2003).
We now turn from discussions of large-scale theories of quotation to discussions of how to understand specific aspects of our quotational practice. Two issues have been particularly important in the recent discussions: Mixed Quotation and the alleged ambiguity of quotations. One's view of these issues has wide reaching implications for which theory of quotation one favors. We discuss these in turn.
There is currently a great deal of discussion devoted to the correct understanding of the phenomenon Cappelen and Lepore in their 1997a paper labeled ‘Mixed Quotation’ (see (BQ6) above.) Much of the discussion concerns whether mixed quotation is a semantic or pragmatic phenomenon, i.e., whether a theory of quotation should treat mixed quotation as a genuinely semantic phenomenon. How one comes down on this issue will significantly shape one's overall theory of quotation.
By ‘a semantic account of mixed quotation’, we mean any theory that accepts all of (a)–(c):
- The semantic truth conditions for (12) require that Quine used the locution ‘has a certain anomalous feature’ (in proposition talk: the proposition semantically expressed by an utterance of (12) can't be true unless Quine used the locution ‘has a certain anomalous feature’).
12. Quine said that quotation ‘has a certain anomalous feature’.
- The semantic truth conditions for (12) require that Quine used the locution ‘has a certain anomalous feature’ because (12) contains “has a certain anomalous feature”, i.e., it is a part of the semantic truth conditions for (12) that arise as the result of its compositional structure, in particular, as the result of the presence, and position, of “has a certain anomalous feature” in (12).
- As a corollary to (b), the requirement specified in (a) arises independently both of whatever intentions a speaker might happen to have when uttering (12) and also independently of the context that she happens to find herself in when she makes her utterance.
By a ‘pragmatic account of mixed quotation’ we mean any theory that denies one or more of (a)–(c). Versions of the pragmatic account have been presented by, among others, Recanati (2001), Clark & Gerrig (1990), Wilson (2000), Sperber & Wilson (1981), Tsohatzidis (1998), Stainton (1999), and Saka (2003), and are discussed (though not fully endorsed) by Reimer (2003).
Here's Stainton's version of the pragmatic account:
A speaker could report parts of Alice's conversation in a squeaky voice, or with a French accent, or with a stutter, or using great volume. In none of these cases would the speech reporter say, assert, or state that Alice spoke in these various ways. […] In these cases, the truth conditions of the speech report are exhausted by the meaning of the words, and how the words are put together; as far as truth conditions are concerned, the tone, volume, accent etc. add nothing whatever. Ditto, say I, for the quotation marks in mixed quotation. In which case, ‘Alice said that life “is difficult to understand”’ isn't false where Alice actually speaks the words ‘is tough to understand’. It may, of course, be infelicitous and misleading. (Stainton 1999: 273–274; italics added; similar quotes can be found in Clark & Gerrig 1990).
There's a lively debate about these issues (see, for example, Cappelen and Lepore 2003, and Reimer 2003). The debate has a lot in common with other discussions about whether certain phenomena are semantic or pragmatic in nature. (It is, for example, analogous in many ways to discussions about the significance of referential uses of definite descriptions.)
One aspect of this discussion, however, is worth particular mention: the behavior of indexicals within mixed quotes.
Here is an example of a journalist's mixed quotation from Cappelen and Lepore (1997b):
Mr. Greenspan said he agreed with Labor Secretary R. B. Reich ‘on quite a lot of things’. Their accord on this issue, he said, has proved ‘quite a surprise to both of us’. (Cappelen and Lepore 1997b, p. 429)
Notice the occurrence of ‘us’ in the last sentence of this passage. It refers to Greenspan and Reich and not to the journalist and someone else. If the quotation marks in this sentence were semantically superfluous (as they are according to non-semanticists such as Recanati and Stainton), then this occurrence of ‘us’ should be read as spoken by the journalist (i.e. the reporter).
Two examples from Cumming (2003) make this point even clearer:
(C1) Bush also said his administration would ‘achieve our objectives’ in Iraq. (New York Times, November 4, 2004) (C2) He now plans to make a new, more powerful absinthe that he says will have ‘a more elegant, refined taste than the one I'm making now.’
According to Recanati, ‘the proposition expressed by the complement sentence is the same with or without the quotation marks’ (Recanati 2001, p. 660). According to Stainton, ‘as far as truth conditions are concerned, the [quotation marks in mixed quotation] add nothing whatever’. Now try to remove them. What results are (C1*) and (C2*):
(C1*) Bush also said his administration would achieve our objectives in Iraq. (C2*) He now plans to make a new, more powerful absinthe that he says will have a more elegant, refined taste than the one I'm making now.
These are obviously mistaken renderings of (C1) and (C2). Cappelen and Lepore (2003) claim that this is an argument in favor of a semantic account of mixed quotation because it shows that the quotation marks cannot just be dropped without semantic consequences. For discussion of this point see also Recanati (2001), Cumming (2003) and Geurts and Maier (2003).
Kaplan (1989) defines ‘a monstrous operator’ as one that shifts the context of evaluation of an indexical away from the context of the actual speech act. He claims that monsters not only do not exist, but that they could not exist in a natural language. The data above make it extremely tempting to understand mixed quotation as monsters. (For monstrous interpretations, see, for example, Cumming 2003 and Geurts and Maier 2003.) Some think this temptation should be resisted, and the account in Cappelen and Lepore (2003) does not introduce monsters.
Running parallel to the debate about how quotations refer (how they manage to hook up with their semantic values) is a debate about what quotations refer to. One view that is widespread is that quotations are ambiguous or indeterminate. That is, one and the same quotation, e.g. ‘lobster’, can, on this view, refer to different objects on different occasions of use, all depending on the context of utterance. Garcia-Carpintero (1994, p. 261) illustrates this kind of view and the kind of argument typically given for it. He says that ‘gone’ can refer to any of the following:
- The expression (“‘gone’ is dissyllabic”);
- Different types instantiated by the tokens (“‘gone’ is cursive”);
- Different types somehow related to the token (say, the graphic version of the uttered quoted material, or the spoken version of the inscribed quoted material, as in “‘gone’ sounds nice”);
- Different tokens somehow related to the quoted token (“What was the part of the title of the movie which, by falling down, caused the killing? ‘gone’ was”);
- The quoted token itself (“At least one of these words is heavier than ‘gone’”, which you should imagine written in big wooden letters).
Others think quotation can pick out contents or concepts. Goldstein says:
For when Elvis says ‘Baby, don't say ‘don't’,’ he is not just requiring his baby to refrain, when confronted with a certain request, from uttering tokens of the same phonetic shape as ‘don't’, but from uttering any tokens that mean the same. (Goldstein 1984, p. 4)
Saka (1998, p. 124) concurs and claims that “premise” and “premiss” in (38) pick out concepts:
38. The concept ‘premise’ is the same as the concept ‘premiss’.
Tsohatzidis (1998) claims that since T1 is true, even though Descartes didn't speak English, “is a thinking substance” in T1 can't refer to an English expression.
(T1) In one of the greatest philosophy books ever written in Latin, Descartes said that man ‘is a thinking substance’.
These arguments all take the same form: first, they identify a sentence S that we are inclined to interpret as true and suggest that the only way to understand how S can be true is to assume that quotations can refer to some kind of object O. This is then alleged to be evidence that quotations can be used to refer to objects of kind O.
If quotation has this kind of flexibility, the five theories discussed above will all have to be evaluated with respect to whether they can accommodate it. The Proper Name Theory, the Description Theory, and the Disquotational Theory all seem to have particular difficulties in this respect.
Not all, however, are convinced that quotations are flexible in just this way. Some have expressed skepticism both about the form of argument (see Cappelen and Lepore 1999a) and about the specific examples. Cappelen and Lepore (1999a) also argue that the multiple ambiguity view over-generates, i.e., it predicts that quotation sentences can be used to express propositions that such sentences cannot in fact express. For further discussion, see also Saka (2003).
A number of authors over the years have thought that our standard practice of quotation is not suitable for all purposes and have, in effect, introduced new technical devices. They cannot all be surveyed here, but here is a brief sketch of some of them.
Reichenbach (1947, p. 284) writes: ‘Whereas the ordinary-quotes operation leads from a word to the name of that word, the token-quotes operation leads from a token to a token denoted by that token’. Reichenbach uses little arrows ( ) for token quotes, so that the sign (39):
represents not a name for the token of ‘a’, but a token for it. That is, the token in (39) cannot be repeated. (40), for example, is not only a token different from (39) but refers to a different token (Reichenbach 1947, pp. 285–86).
Token quotes then are much like writing a demonstrative expression like ‘this’ and fastening to it an object to produce a symbol of that object. (As noted above, some authors (Bennett 1988, Saka 1998, Washington 1992) opine that this is how ordinary quotation sometimes functions.)
As Quine (1940, §6) notes, the quotation:
designates only the specific expression therein depicted, containing a specific Greek letter. In order to effect reference to the unspecified expression, he introduces a new notation of corners (namely, ‘┌’ and ‘┐’). So, for example, if we take the expression ‘Quine’ as μ, then ┌(μ)┐ is ‘(Quine)’. The quasi-quotation is synonymous with the following verbal description: the result of writing ‘(’ and then μ and ‘)’ (Quine 1940, p. 36).
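Quine's verbal description has a close analogue in programming: ordinary quotation behaves like a string literal naming one specific expression, while quasi-quotation builds an expression around a metavariable. The sketch below is purely illustrative (the function names are my own, not Quine's notation):

```python
def quote(expr):
    # ordinary quotation: a name for the specific expression itself
    return "'" + expr + "'"

def quasi_quote(mu):
    # Quine's corners: the result of writing '(' and then mu and ')'
    return "(" + mu + ")"

# Taking 'Quine' as the metavariable mu:
print(quasi_quote("Quine"))  # (Quine)
print(quote("gone"))         # 'gone'
```

The point of the analogy is that `mu` ranges over expressions, so the corners describe a whole family of results, one per substitution, whereas an ordinary quotation names exactly one expression.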
Someone who uses the word ‘red’ in speaking or thinking would generally be held to be employing the same concept as a French person who uses ‘rouge’. Assuming that ‘rouge’ is a good translation of ‘red’, Sellars (1963) thought it convenient to have a general term by which to classify words that are functional counterparts in this way. Such a term is provided by Sellars' dot quotes. Dot quotes form a common noun true of items in any language that play the role performed in our language by the tokens exhibited between them. So, any expression that is a functional counterpart of ‘red’ can be described as a ·red·. In Sellars' terminology, the concept red is something that is common and peculiar to ·red·s.
41. ‘a’ concatenated with ‘b’ is an expression.
Michael Ernst claims this sentence is ambiguous. Read one way, (41) means that the concatenation of the first letter of the Roman alphabet with the second letter is an expression; read another way, it means that the expression between the two outer quote marks is an expression. Confronted with this ambiguity, Boolos (1995) introduced a notation for quotation in which every quotation mark ‘knows its name’. So understood, there are denumerably many distinct quotation marks, each formed by prefixing a natural number n of strokes to a small circle. To enclose an expression meaningfully with Boolos quotation marks, we must choose quotation marks of a ‘higher order’ than any quotation marks occurring in the expression to be quoted (if there are any), and each quotation mark in a grammatical sentence is ‘paired’ with the next identical quotation mark (Boolos 1995, p. 291).
Carnap introduced in Logical Syntax of Language (1937) a distinction between formal and material modes. The material mode is generally used to describe the non-linguistic world; the formal mode is generally used to discuss the language that is used to describe the material world. Thus ‘one is a number’ is a sentence in material mode; and ‘ ‘one’ is a number word’ is its sentential counterpart in formal mode. This distinction is mentioned in passing here since some authors have thought it corresponds to the use/mention distinction. Since the use/mention distinction isn't particularly about quotation, there isn't much that needs to be said about the material/formal mode here either. Revealing that a statement is basically about the use of language may succeed in removing some of its metaphysical mystery. And it may even be that semantic ascent (that is, the device of making a sentence the topic, instead of what the sentence purports to refer to) succeeds in demystifying much of what goes on in certain quarters of philosophy. But as we've said, this distinction, and the various philosophical moves surrounding it, doesn't have much to do with quotation per se.
- Benbaji, Yitzhak, 2003. ‘Who needs semantics of quotation marks?’ Belgian Journal of Linguistics, 17: 27–49.
- –––, 2004a. ‘A demonstrative analysis of “open quotation”’, Mind and Language, 19: 534–547.
- –––, 2004b. ‘Using others' words’, Journal of Philosophical Research, 29: 93–112.
- Bennett, Jonathan, 1988. ‘Quotation’, Noûs, 22: 399–418.
- Boolos, G., 1995. ‘Quotational ambiguity’, in On Quine, P. Leonardi and M. Santambrogio (eds.), Cambridge: Cambridge University Press, pp. 283–296.
- Burge, T., 1986. ‘On Davidson's ‘Saying that’’, in Truth and Interpretation, E. Lepore (ed.), Oxford: Basil Blackwell, pp. 190–208.
- Cappelen, H., 1997. Signs, Doctoral Dissertation, Philosophy Department, University of California-Berkeley.
- Cappelen, H. and E. Lepore, 1997a. ‘On an alleged connection between semantic theory and indirect quotation’, Mind and Language, 12: 278–296.
- –––, 1997b. ‘Varieties of quotation’, Mind, 106: 429–50.
- –––, 1998. ‘Using, mentioning, and quoting: reply to Tsohatzidis’, Mind, 107: 665–666.
- –––, 1999a. ‘Reply to Saka’, Mind, 108: 741–50.
- –––, 1999b. ‘Reply to Stainton’, in Murasugi & Stainton, pp. 279–283.
- –––, 1999c. ‘Reply to Pietroski’, in Murasugi & Stainton, pp. 283–285.
- –––, 1999d. ‘Semantics of quotation’, in Zeglen, pp. 90–99.
- –––, 2003. ‘Varieties of quotation revisited’, Belgian Journal of Linguistics, 17: 51–75.
- –––, 2004. Insensitive Semantics, Oxford: Basil Blackwell Publishers.
- Carnap, R., 1937. Logical Syntax of Language, London: Routledge and Kegan Paul.
- –––, 1947. Meaning and Necessity, Chicago: University of Chicago Press.
- Christensen, Niels, 1967. ‘The alleged distinction between use and mention’, Philosophical Review, 76: 358–67.
- Clark, Herbert & Richard Gerrig, 1990. ‘Quotations as demonstrations’, Language, 66(4): 764–805.
- Cumming, Sam, 2003. ‘Two accounts of indexicals in mixed quotation’, Belgian Journal of Linguistics, 17: 77–88.
- Elugardo, Reinaldo, 1999. ‘Mixed quotation’, in Murasugi & Stainton.
- Davidson, D., 1968. ‘On saying that’, in Inquiries Into Truth and Interpretation, Oxford: Oxford University Press, pp. 93–108.
- –––, 1975. ‘Thought and talk’, in Inquiries Into Truth and Interpretation, Oxford: Oxford University Press, pp. 155–170.
- –––, 1979. ‘Quotation’, in Inquiries Into Truth and Interpretation, Oxford: Oxford University Press, pp. 79–92. Originally published in Theory and Decision, 11 (1979): 27–40.
- –––, 1999. ‘Reply to Cappelen and Lepore’, in Zeglen, pp. 100–102.
- Frege, Gottlob, 1892. ‘On sense and reference’, in Translations from the Philosophical Writings of Gottlob Frege, P. Geach and M. Black (eds.), 3rd edition, Oxford: Basil Blackwell, 1980, pp. 56–78.
- Garcia-Carpintero, Manuel, 1994. ‘Ostensive signs: against the identity theory of quotation’, Journal of Philosophy, 91: 253–64.
- –––, 2004. ‘The deferred ostension theory of quotation’, Noûs, 38(4): 674–692.
- Geach, P., 1957. Mental Acts, London: Routledge Kegan Paul.
- Geach, P., 1970. ‘Quotation and quantification’, in Logic Matters, Oxford: Basil Blackwell.
- Geurts, B. and E. Maier, 2003. ‘Quotation in context’, Belgian Journal of Linguistics, 17: 109–128.
- Goddard, L. and R. Routley, 1966. ‘Use, mention, and quotation’, Australasian Journal of Philosophy, 44: 1–49.
- Goldstein, Laurence, 1984. ‘Quotation of types and types of quotation’, Analysis, 44: 1–6.
- Gomez-Torrente, Mario, 2001. ‘Quotation revisited’, Philosophical Studies, 102: 123–53.
- Johnson, Michael and Ernest Lepore, 2011. ‘Quotation and demonstration’, in E. Brendal, J. Meibauer, M. Steinbach (eds.), Understanding Quotation, Berlin: Mouton De Gruyter, pp. 231–248.
- Kaplan, D., 1973. ‘Bob, Ted, Carol and Alice’, in Approaches to Natural Language, J. Hintikka, et al. (eds.), pp. 490–518.
- –––, 1989. ‘Demonstratives’, in Themes from Kaplan, J. Almog, J. Perry, and H. Wettstein (eds.), Oxford: Oxford University Press, pp. 481–564.
- Lepore, E., 1999. ‘The scope and limits of quotation’, in The Philosophy of Donald Davidson, L. E. Hahn (ed.), Open Court Publishers, pp. 691–714.
- Lepore, E., and B. Loewer, 1989. ‘You can say that again’, Midwest Studies in Philosophy, 14: 338–356.
- Ludwig, K. and G. Ray, 1998. ‘Semantics for opaque contexts’, Philosophical Perspectives, 12: 141–166.
- Mates, B., 1972. Elementary Logic, 2nd edition, Oxford: Oxford University Press.
- Munro, Pamela, 1982. ‘On the transitivity of say-verbs’ in P. Hopper & S. Thompson (eds.), Studies in Transitivity. Syntax and Semantics, New York, San Francisco, London: Academic Press, pp. 301–318.
- Murasugi, Kumiko & Robert Stainton (eds.), 1999. Philosophy and Linguistics, Boulder CO: Westview.
- Parsons, T., 1982. ‘What do quotation marks name? Frege's theories of quotations and that-clauses’, Philosophical Studies, 42: 315–328.
- Partee, B., 1973. ‘The syntax and semantics of quotation’, in A Festschrift for Morris Halle, S.R. Anderson and P. Kiparsky (eds.), New York: Holt, Rinehart and Winston, pp. 410–418.
- Pietroski, Paul, 1999. ‘Compositional quotation’, in Murasugi & Stainton, pp. 245–58.
- Predelli, S., 2003. ‘Scare quotes and their relation to other semantic issues’, Linguistics and Philosophy, 26(1): 1–28.
- –––, 2003. ‘“Subliminable” messages, scare quotes, and the use hypothesis’, Belgian Journal of Linguistics, 17: 153–166.
- Prior, Arthur, 1971. Objects of Thought, Oxford: Oxford University Press.
- Quine, W.V.O., 1940. Mathematical Logic, Boston, MA: Harvard University Press.
- –––, 1956. ‘Quantifiers and propositional attitudes’, Journal of Philosophy, 53: 177–86.
- –––, 1960. Word and Object, Cambridge, MA: MIT Press.
- –––, 1961. ‘Reference and modality’, in Quine 1961a, pp. 139–159.
- –––, 1961a. From a Logical Point Of View, Cambridge, MA: Harvard University Press.
- Read, Stephen, 1997. ‘Quotation and Geach's puzzle’, Acta Analytica, 19(S): 9–20.
- Recanati, F., 2000. Oratio Obliqua, Oratio Recta: An Essay on Metarepresentation, Cambridge, MA: MIT Press.
- –––, 2001. ‘Open quotation’, Mind, 110: 637–87.
- –––, 2010. Truth-conditional Pragmatics, Oxford: Oxford University Press.
- Reichenbach, H., 1947. Elements of Symbolic Logic, New York: Free Press.
- Reimer, Marga, 1996. ‘Quotation marks: demonstratives or demonstrations?’ Analysis, 56: 131–42.
- –––, 2003. ‘Too counter-intuitive to believe? Pragmatic accounts of mixed quotation’, Belgian Journal of Linguistics, 17: 167–186.
- Richard, Mark, 1986. ‘Quotation, grammar, and opacity’, Linguistics and Philosophy, 9: 383–403.
- Saka, Paul, 1998. ‘Quotation and the use-mention Distinction’, Mind, 107: 113–35.
- –––, 1999. ‘Quotation: a reply to Cappelen & Lepore’, Mind 108 (432): 751–54.
- –––, 2003. ‘Quotational constructions’, Belgian Journal of Linguistics, 17: 187–212.
- –––, 2011. ‘The act of quotation’, in E. Brendal, J. Meibauer, M. Steinbach (eds.), Understanding Quotation, Berlin: Mouton De Gruyter, 303–22.
- Salmon, N., 1986. Frege's Puzzle, Cambridge, MA: MIT Press.
- Searle, John, 1969. Speech Acts, §4.1. Cambridge: Cambridge University Press.
- Sellars, W., 1963. ‘Abstract entities’, Review of Metaphysics, 16(4): 627–671; reprinted in Sellars, Philosophical Perspectives, Springfield, IL, Charles Thomas, 1967, pp. 229–269.
- Seymour, M., 1994. ‘Indirect discourse and quotation’, Philosophical Studies, 74: 1–38.
- Smullyan, R.M., 1957. ‘Languages in which self-reference is possible’, Journal of Symbolic Logic, 22: 55–67.
- Sorensen, Roy, 2008. ‘Empty quotation’, Analysis, 68(297): 57–61.
- Sperber, D. and D. Wilson, 1981. ‘Irony and the use-mention distinction’, in P. Cole (ed.), Radical Pragmatics, New York: Academic Press, pp. 295–318.
- –––, 1986. Relevance: Communication and Cognition, Oxford: Blackwell, 2nd edition, 1995.
- Stainton, R., 1999. ‘Remarks on the syntax and semantics of mixed quotation’, in Murasugi & Stainton, 259–278.
- Tarski, A., 1933. ‘The concept of truth in formalized languages’, in A. Tarski, Logic, Semantics, Metamathematics, 2nd edition, Indianapolis: Hackett, 1983, pp. 152–278.
- Tsohatzidis, Savas, 1998. ‘The hybrid theory of mixed quotation’, Mind, 107: 661–64.
- Wallace, J., 1972. ‘On the frame of reference’, in Semantics of Natural Language, D. Davidson and G. Harman (eds.), Dordrecht: D. Reidel, pp. 219–252.
- Washington, C., 1992. ‘The identity theory of quotation’, Journal of Philosophy, 89: 582–605.
- Wertheimer, R., 1999. ‘Quotation apposition’, Philosophical Quarterly, 49(197): 514–19.
- Wilson, D., 2000. ‘Metarepresentation in linguistic communication’, in D. Sperber (ed.), Metarepresentations: A Multidisciplinary Perspective, New York: Oxford University Press, pp. 411–448.
- Zeglen, U., 1999. Donald Davidson: Truth, Meaning and Knowledge, London: Routledge.
Problem 163. Published on Saturday, 13th October 2007, 02:00 am; solved by 915.
Consider an equilateral triangle in which straight lines are drawn from each vertex to the middle of the opposite side, such as in the size 1 triangle in the sketch below.
Sixteen triangles of either different shape or size or orientation or location can now be observed in that triangle. Using size 1 triangles as building blocks, larger triangles can be formed, such as the size 2 triangle in the above sketch. One-hundred and four triangles of either different shape or size or orientation or location can now be observed in that size 2 triangle.
It can be observed that the size 2 triangle contains 4 size 1 triangle building blocks. A size 3 triangle would contain 9 size 1 building blocks, and a size n triangle would thus contain n² size 1 building blocks.
If we denote T(n) as the number of triangles present in a triangle of size n, then
T(1) = 16
T(2) = 104
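The building-block count stated above is easy to check in code. Note that this counts only the size-1 tiles, not the full tally T(n) of all observable triangles, which is what the problem actually asks for (a sketch, not a solution):

```python
def building_blocks(n):
    # A size-n triangle is tiled by n^2 size-1 building blocks:
    # row k (from the top, 1-indexed) holds 2*k - 1 of them.
    return sum(2 * k - 1 for k in range(1, n + 1))  # equals n * n

print(building_blocks(2))  # 4
print(building_blocks(3))  # 9
```

Counting T(n) itself is much harder, since the cevians drawn from each vertex create many triangles of different shapes and orientations beyond the building blocks.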
Written by NIWA oceanographer and voyage leader Dr Mike Williams.
Location: 58.600213°S, 158.239401°E
Weather: Cloudy, rain, 20-30 knots wind
Sea state: 2-3 m swell
Both on our transit to Antarctica and on our way home, we have repeated lines of CTD stations known as sections. On our way south we occupied the southern part of a section known as SR3 (from Tasmania to Antarctica along 139°51′E), and on the way north a section called 150°E (unsurprisingly, along longitude 150°E).
This is not the first time either of these sections has been measured. SR3 was first occupied in 1991, and has seen 9 full repeats (all the stations along the section between Tasmania and Antarctica) and 6 partial repeats. 150°E has only been measured a couple of times.
As for the section names, they are historical. In the late 1980s and throughout the 1990s a bold attempt was made to understand the ocean circulation around the whole globe. Named WOCE (World Ocean Circulation Experiment), it consisted of a set of oceanographic sections along which standard measurements would be made. There were too many sections to do all at once, so they were undertaken over a 10 year period. To understand if there were any changes some sections were repeated. This is what gives SR3 its name – S for Southern Ocean, R for repeat, and 3 for the third section in the Southern Ocean.
For oceanographers these sections provide us with a way to understand the ocean. Some of these sections form the side of a box in the ocean, and we are able to compare the sides and learn something about the changes within the box. By repeating the sections we can also monitor changes in the ocean over time.
Our two sections have been chosen as they lie to the east and west of the Mertz Polynya region. Between them we are looking for changes in the deepest waters in the ocean, a water mass called Antarctic bottom water (see blog post 21: The formation of the Antarctic bottom water). Water masses form at the surface of the ocean where interaction with the atmosphere sets their temperature, salinity, and other chemical properties. These properties allow the water mass to be tracked through the ocean.
On 150°E we expect to see bottom water that has formed in the Ross Sea. While along SR3 we should see a combination of Ross Sea water and bottom water formed in the Mertz Polynya Region. These can be identified by their subtle differences in salinity and temperature, as well as dissolved oxygen and CFCs.
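As a toy illustration of how a water mass might be flagged from its measured properties, consider the sketch below. The temperature and salinity cutoffs are hypothetical placeholders of my own, not the voyage's actual criteria; real classification uses carefully chosen density surfaces plus tracers such as dissolved oxygen and CFCs:

```python
def is_candidate_bottom_water(temp_c, salinity_psu):
    """Flag a CTD sample as possible Antarctic bottom water.

    The thresholds here are illustrative assumptions only:
    bottom water is very cold and relatively salty.
    """
    return temp_c < 0.0 and salinity_psu > 34.6

# a cold, salty abyssal sample vs. a warm, fresher upper-ocean sample
print(is_candidate_bottom_water(-0.4, 34.66))  # True
print(is_candidate_bottom_water(8.0, 34.50))   # False
```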
A few years ago these repeats of SR3 and 150°E found that the properties of Antarctic bottom water, the coldest, densest water in the deep ocean, are changing. It is not as dense, or as salty, as it was 10 years earlier, and is much fresher than it was in the 1970s. This suggests there have been major changes around Antarctica. Whether the cause is less sea ice or more melt water is hard to tell; teasing that apart is the challenge ahead of us.
The Correct Use of Acronyms
When and when not to use acronyms
There is a time and place for everything and the use of acronyms is no exception. The whole point of using acronyms in your business writing is to make your writing clearer. However, if you misuse or abuse acronyms, you'll accomplish just the opposite, turning your memos and manuals into a confusing brew.
What is an acronym?
Essentially, acronyms are shorter forms of words or phrases that can come in handy when you need to repeat the same word or phrase a number of times throughout the same piece of writing. For example, "World Trade Organization" is often written as "WTO." You can see how writing the three-letter acronym can save you a lot of time and keep your business document from sounding repetitive.
Important things to consider before using an acronym
Outline what the acronym means
Short forms aren't always the best way to avoid redundancies. So, if you're going to use acronyms in your business writing, remember: The first time you use an acronym in your document, the words should be written out with the short form placed in parentheses immediately after. This way, it's clear to the readers exactly what the letters mean. Here's an example:
A New World Order (NWO) came into effect after 9/11.
Readers will then be aware that any future reference to the "NWO" in your document really refers to the New World Order. After you've established an acronym in your paper, you must consistently use that acronym in place of the words.
Stick to one definition of the acronym
Always clarify in your own mind the exact definition of each acronym you use. If you define SEM as "scanning electron microscopy" (which is a process), your acronym should refer only to the process throughout your paper. For example, the following sentence would be incorrect if included in the same paper:
We used an SEM in our experiments.
If you've already defined SEM as standing for the process, you cannot use SEM to refer to the item (i.e., a scanning electron microscope, which you use to perform the process of scanning electron microscopy), even though the first letters of each word are the same. In short, the same acronym can only refer to one thing in a document.
Don’t forget about using articles
Remember that many acronyms still require articles (i.e., "a," "an," or "the"). Let's use the New World Order again:
Incorrect: NWO has emerged in the 21st century.
Correct: An NWO has emerged in the 21st century.
Remember that NWO stands for a noun "New World Order," and nouns require articles before them.
If you're confused about whether to use "a" or "an" in front of an acronym that begins with a consonant, remember to speak the acronym out loud. If the first letter of the acronym makes a vowel sound (regardless of whether or not the first letter is actually a vowel), you should use "an." The acronym "NWO" is a perfect example. While "N" is a consonant, it makes the short e sound (i.e., a vowel sound) when you say it. Consequently, "an" should be used.
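For initialisms read letter by letter (like "NWO"), the vowel-sound rule above can be captured in a few lines of code. The letter set below reflects the spoken names of English letters ("en", "ef", "aitch", "ar", and so on) and is my own approximation; acronyms pronounced as whole words (e.g., "NATO") would need different handling:

```python
# letters whose spoken names begin with a vowel sound
VOWEL_SOUND_INITIALS = set("AEFHILMNORSX")

def article_for_initialism(initialism):
    """Choose 'a' or 'an' for an initialism read letter by letter."""
    first = initialism[0].upper()
    return "an" if first in VOWEL_SOUND_INITIALS else "a"

print(article_for_initialism("NWO"))  # an ('en' starts with a vowel sound)
print(article_for_initialism("WTO"))  # a  ('double-u' starts with a consonant sound)
```

Note that "U" is deliberately absent from the set: although it is a vowel letter, its spoken name ("you") begins with a consonant sound, so we write "a UFO".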
Check to see there is already an established acronym for your phrase
It's also important to remember that while you can sometimes make up acronyms, there are many words/phrases that require acronyms that are established and universal. There are a number of online acronym dictionaries you can use to search for commonly used acronyms.
Acronyms in academic writing
If you're using acronyms in academic writing, remember that some scientific journals require you to introduce acronyms once in the abstract of your article and then again upon first use in the body of the article. Should you be unsure about how to use acronyms when writing an academic article, please refer to your journal's specific requirements.
Too many acronyms can turn your business writing into alphabet soup
Please remember that acronyms should only be used for words or phrases that are repeated a number of times throughout your document. If you use too many acronyms, readers will become confused. Here's an example of extreme acronym usage in a press release:
In the US, the notion of an NWO became popular after the terrorist attacks on the WTC. However, officials in NATO and the WTO rarely refer to an NWO in proceedings relating to the GATT, and it can be said that the MVTO, the MFN clause, and SROs have little to do with an NWO.
As you can see, too many acronyms can make your writing more difficult to understand. If numerous acronyms are necessary, we recommend including a glossary of acronyms; your readers may then refer to it if they become confused.
TTYL—Save your casual acronyms for text messages
Finally, while you may often ROFL with your bff about the Chem hw that you need to get done ASAP, please remember that acronyms used in instant messaging are rarely, if ever, appropriate for business or professional writing.
Remember that while using acronyms correctly may help readers understand your work more easily, the incorrect use of acronyms could turn your work into a mess. When in doubt, submit your work to our business editors for a fast, professional opinion.
Mar 20, 2009
Posted Jan. 30, 2009 – Our grasp of the structure and immensity of the cosmos is hand-me-down knowledge that started 5,000 years ago, with the Babylonians and Egyptians, who accurately noted the cycles of the sun and moon. But through it all, only the ancient Greeks went beyond merely observing celestial patterns. They were the first to come up with correct original explanations. Let’s pay some of them a tribute now, in chronological order.
Anaxagoras (450 BC) correctly believed that moon reflects light from the sun, and therefore understood why the moon darkens during an eclipse.
Eudoxus (375 BC) originated a geometric method of calculating the distances from the Earth to the Sun and Moon.
Heracleides (350 BC) was the first to propose that, since Mercury and Venus stay so close to the sun, they might orbit it, and that the Earth might rotate on an axis.
Aristotle (340 BC) is famous, but he set back science for 2000 years with his geocentric model of the universe, which went unquestioned until the time of Galileo. But some of his other writings were correct, like when he said that the Earth was not flat, but spherical.
Aristarchus (265 BC) was among the greatest of the great, the first to correctly determine the relative sizes of the Earth, Sun, and Moon. And once he realized that the sun is far larger than the Earth, he proposed that the Sun, and not Earth, was the center of the solar system. Aristarchus wins the cigar for the heliocentric model.
Eratosthenes (200 BC) was another genius – the first person to correctly measure the size of the Earth. He used the angle of the sun's shadow at noon in two different towns and the distance between those towns to set up a proportion: the shadow angle is to the 360 degrees in a circle as the distance between the towns is to the unknown circumference of our planet.
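Using the figures commonly attributed to Eratosthenes (roughly a 7.2° shadow angle at Alexandria and about 800 km between the two towns; both numbers are assumptions added here, not stated in the article), the proportion works out like this:

```python
shadow_angle_deg = 7.2   # noon shadow angle at Alexandria (commonly cited; an assumption here)
distance_km = 800.0      # approximate distance between the towns (an assumption here)

# shadow angle : 360 degrees  ::  distance : circumference
circumference_km = distance_km * 360.0 / shadow_angle_deg
print(round(circumference_km))  # 40000, close to the modern value of ~40,075 km
```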
Hipparchus (130 BC) discovered the 26-century wobble of Earth's axis (now called precession) and created the first accurate star catalog, using a system of dividing stars into six magnitudes of brightness that is still used today. He also determined the length of the year to within 6 minutes.
Ptolemy (170 AD) is famous, despite being wrong about nearly everything. He supported a geocentric universe, which unfortunately became a religious principle for 1,700 years.
In other words, astronomy may be Greek to you. But it wasn’t to those guys.
Audience: K - 3rd Grade
This updated volume has all the information needed for an early report on Neptune. We learn that there is no water on Neptune, that the clouds are made up of methane gas, and that Neptune is one of the largest planets in our solar system. We also learn that Neptune was discovered by scientists who were studying Uranus. They used mathematics, and started with Uranus' unusual orbit to predict where another planet might be. Two different diagrams show Neptune in relationship to the rest of the solar system. It is shown as the eighth, and last, planet thus reflecting the recent scientific decision that Pluto is not really a planet.
Date read: 4/13/2009
June is National Dairy Month, a time that America has set aside to celebrate the bounty of milk produced across the country. Summer months experience a surplus of milk after the brief Spring months of live births and the coming in of the milk. At this time animals are pastured and milked twice a day. At The Cheese Traveler, we love cheese and celebrating all things cheese-related. Milk is the number one ingredient in the cheesemaking process along with salt, culture, and rennet. It is also the official beverage of New York State. In our research on the history of National Dairy Month, we had some surprising discoveries.
The auspicious date – 1937, the first “National Milk Month,” renamed “National Dairy Month” in 1939 – coincides with one of the largest labor strikes in New York State history: that of the Dairy Farmers’ Union. As milk production increased with the aid of mechanical and scientific advancements in the early decades of the twentieth century, the Depression era significantly decreased the demand for milk and dairy products, while the cost of transporting milk increased. Retailers and large-scale cooperatives responded by slashing prices, engaging in a price war, and developing a monopoly in the state that undercut small family farms’ cost of production. So, as the National Milk Month campaign advertised at local shops to boost demand for a surplus supply of milk, farmers were waging a battle on the farm front to stabilize milk prices, respond to the increased cost of production, and secure their small farms.
The Dairy Farmers’ Union strike was not the first dairy strike in New York State, nor the first instance of corruption in New York’s dairy industry. In 1858, the “swill milk” scandal of watered-down, contaminated, or doctored milk was uncovered in New York City, which necessitated standardized practices in the industry for public health safety. Contaminated and diseased milk – the product of poor milk handling and animal cruelty, such as feeding distilled whiskey mash to cows or lifting and milking a dying cow – was often, and unknowingly, the cause of transmission of infectious disease. In 1933, as commodity prices fell, New York State’s milk strikes spread like wildfire and grew quite violent, bringing the state close to martial law, as one New York Times reporter noted. The 1937 strike, following the largest drop in milk prices in fifteen years, was eventually successful: small family farmers shut down two of the largest milk cooperatives in the state through persistent and surreptitious means, from picketing with long boards studded with exposed nails to protect their picket lines from anti-strike motorists, to greasing the train rails to prevent milk shipments from leaving the facility.
Some memory of the battle persists today as small farmers still bemoan the large-scale factories’ hold over pricing and the market. Small scale dairy farming continues to be difficult to near impossible to sustain on only commodity production.
To celebrate National Dairy Month, we at The Cheese Traveler see cheese production as the natural response to summer’s increased milk supply. It takes approximately ten pounds of milk to make a pound of cheese. A gallon of milk is about 8.6 pounds, so to make one lovely ten pound wheel of Madeleine for example, Sprout Creek Farm uses over twelve gallons of goat’s milk. Likewise, cheesemaking has been the historical solution to excess milk supply. Other countries with a long history of incorporating cheese in their diet such as Greece and France experience lower rates of hypertension and obesity in the population than those in the U.S. The health benefits of cheese – offering a high-quality protein as well as calcium, phosphorus, and Vitamin A – provide a strong support for the continued development of cheese production and its ties to local and regional food culture.
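The yield arithmetic in this paragraph can be sketched as a tiny calculation. The ten-pounds-of-milk-per-pound-of-cheese figure is the rough rule of thumb quoted above, not an exact ratio; actual yields vary with the milk and the style of cheese:

```python
def gallons_of_milk(cheese_lb, milk_lb_per_cheese_lb=10.0, milk_lb_per_gallon=8.6):
    """Rough estimate of the milk needed to make a wheel of cheese."""
    return cheese_lb * milk_lb_per_cheese_lb / milk_lb_per_gallon

# A ten-pound wheel such as the Madeleine works out to roughly a dozen gallons:
print(round(gallons_of_milk(10), 1))  # -> 11.6
```

Since cheese yields vary, a slightly richer milk-to-cheese ratio lands the figure at the “over twelve gallons” quoted above.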
In New York State, home of The Cheese Traveler and the third largest dairy-producing state in the country, small farms have turned toward farmstead and artisan cheesemaking as a value-added option to increase their viability. Value-added products are those that take a commodity such as milk and add labor, time, and craftsmanship to it to make it more valuable. The art of cheesemaking adds value in several ways: a low price commodity becomes an economically viable agricultural product, a perishable becomes an “aged” product, saving the cost of freezing or keeping milk cooled through the winter months of low milk production, and a commodity with little variation becomes highly diversified in form, taste, and craft.
The Cheese Traveler is deeply committed to selling the cheeses of these small producers who either use their own milks produced on their farms or use locally sourced milks from natural, grass-fed, pastured, or organically fed goats, sheep, and cows. So, as we commemorate June as National Dairy Month, let us also remember the efforts of our forbears who have fought to make food safe, affordable, and delicious. Cheese is a wonderful addition to any meal and can be added to enhance the flavor of many summer dishes. We have been enjoying the classic Mediterranean beans-n-greens with white beans, radicchio, mizuna, fresh oregano, rosemary, thyme, and garlic scapes, onion, balsamic vinegar; sautéed in butter; finished with olive oil, salt, pepper, and Toma Pepato from Cooperstown Cheese Company.
European scientists have recently announced plans to build the tallest structure ever created by man, and the second-largest after the Great Wall of China. Located over 3,000 feet below the surface of the Mediterranean Sea, the structure will serve as a way “to find astrophysical neutrinos originating in cosmic cataclysms,” reports Popsci.
The neutrino detector, named KM3NeT, “will stare at the seafloor in an effort to see neutrinos making their way through the Earth.” The detector, which spans three cubic kilometers, “will also serve as a new oceanography observatory in one of the world’s busiest bodies of water, helping biologists listen to whales and study bioluminescent organisms.” More from Popsci:
The goal is to find astrophysical neutrinos originating in cosmic cataclysms, Riccobene said. They could help explain the origin of cosmic rays, the proton flux that rains down on the Earth from unknown sources.
Much of the general public probably had never heard the word “neutrino” until the still-controversial faster-than-light claims made by a separate group of Italian physicists made news this fall. The supposedly speedy neutrinos glimpsed by the OPERA experiment were created in a beam of protons, and hurled underneath the Alps from Geneva to Gran Sasso, an Italian mountain that sits atop a physics lab.
The heart of KM3NeT is a ‘stand-alone sensor module’ with 31 three-inch photomultiplier tubes in a 17-inch glass sphere. These spheres contain hypersensitive light detectors that can register a single photon.
To discover more about this feat, take some time to read Rebecca Boyle’s far more articulate full article. You’ll be blown away.
MIT develops way to bank solar energy at home
CAMBRIDGE, Massachusetts |
CAMBRIDGE, Massachusetts (Reuters) - A U.S. scientist has developed a new way of powering fuel cells that could make it practical for home owners to store solar energy and produce electricity to run lights and appliances at night.
A new catalyst produces the oxygen and hydrogen that fuel cells use to generate electricity, while using far less energy than current methods.
With this catalyst, users could rely on electricity produced by photovoltaic solar cells to power the process that produces the fuel, said the Massachusetts Institute of Technology professor who developed the new material.
"If you can only have energy when the sun is shining, you're in deep trouble. And that's why, in my opinion, photovoltaics haven't penetrated the market," Daniel Nocera, an MIT professor of energy, said in an interview at his Cambridge, Massachusetts, office. "If I could provide a storage mechanism, then I make energy 24/7 and then we can start talking about solar."
Solar has been growing as a power source in the United States -- last year the nation's solar capacity rose 45 percent to 750 megawatts. But it is still a tiny power source, producing enough energy to meet the needs of about 600,000 typical homes, and only while the sun is shining, according to data from the Solar Energy Industries Association.
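A quick back-of-the-envelope check of the figures quoted above (750 megawatts of capacity serving about 600,000 typical homes) can be sketched as follows. Note that installed capacity is a peak figure under full sun, not continuous output, which is exactly the storage gap Nocera is addressing:

```python
# Rough sanity check of the quoted solar figures: installed capacity per home.
capacity_w = 750e6   # 750 megawatts of U.S. solar capacity, expressed in watts
homes = 600_000      # homes the article says this capacity can serve

watts_per_home = capacity_w / homes
print(watts_per_home)  # -> 1250.0 W of installed capacity per home
```

Average delivered power is considerably lower than this peak figure, since the sun does not always shine; that shortfall is the motivation for banking the energy as fuel.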
Most U.S. homes with solar panels feed electricity into the power grid during the day, but have to draw back from the grid at night. Nocera said his development would allow homeowners to bank solar energy as hydrogen and oxygen, which a fuel cell could use to produce electricity when the sun was not shining.
"I can turn sunlight into a chemical fuel, now I can use photovoltaics at night," said Nocera, who explained the discovery in a paper written with Matthew Kanan published on Thursday in the journal Science.
Companies including United Technologies Corp produce fuel cells for use in industrial sites and on buses. Automakers including General Motors Corp and Honda Motor Co are testing small fleets of fuel-cell powered vehicles.
POTENTIAL FOR CLEAN ENERGY
Fuel cells are appealing because they produce electricity without generating the greenhouse gases associated with global climate change. But producing the hydrogen and oxygen they run on typically requires burning fossil fuels.
That has prompted researchers to look into cleaner ways of powering fuel cells. Another researcher working at Princeton University last year developed a way of using bacteria that feed on vinegar and waste water to generate hydrogen, with minimal electrical input.
James Barber, a biochemistry professor at London's Imperial College, said in a statement Nocera's work "opens up the door for developing new technologies for energy production, thus reducing our dependence on fossil fuels and addressing the global climate change problem."
Nocera's catalyst is made from cobalt, phosphate and an electrode that produces oxygen from water by using 90 percent less electricity than current methods, which use the costly metal platinum.
The system still relies on platinum to produce hydrogen -- the other element that makes up water.
"On the hydrogen side, platinum works well," Nocera said. "On the oxygen side ... it doesn't work well and you have to put way more energy in than needed to get the (oxygen) out."
Current methods of producing hydrogen and oxygen for fuel cells operate in a highly corrosive environment, Nocera said, meaning the entire reaction must be carried out in an expensive highly-engineered container.
But at MIT this week, the reaction was going on in an open glass container about the size of two shot glasses that researchers manipulated with their bare hands, with no heavy safety gloves or goggles.
"It's cheap, it's efficient, it's highly manufacturable, it's incredibly tolerant of impurity and it's from earth-abundant stuff," Nocera explained.
Nocera has not tried to construct a full-sized version of the system, but suggested that the technologies to bring this into a typical home could be ready in less than a decade.
The idea, which he has been working on for 25 years, came from reflecting on the way plants store the sun's energy.
"For the last six months, driving home, I've been looking at leaves, and saying, 'I own you guys now,'" Nocera said.
(Editing by Vicki Allen)
On Tuesdays at 9:00pm on WGBY, starting May 7, you can breathe new life into the traditional civics lesson with Constitution USA with Peter Sagal. Traveling across the country on a Harley-Davidson to find out where the U.S. Constitution lives, Sagal looks at how it works and doesn’t work, how it unites us as a nation, and how it has nearly torn us apart. Watch a preview.
A vast digital library of classroom resources, PBS LearningMedia is continuing to add new content from Constitution USA. Here are just a few highlights for grades 9-12:
Separation of Powers The framers of the Constitution feared too much centralized power, adopting the philosophy of divide and conquer.
Federalism Federalism is one of the most important and innovative concepts in the U.S. Constitution, although the word never appears there. Federalism is the sharing of power between national and state governments.
Rights What is a right, and where does it come from? A right is a power or privilege that is recognized by tradition or law.
The World Heritage Centre expresses its deep concern regarding the cargo spill off the British Coast.
The MSC Napoli is grounded near Sidmouth's coastline, part of the Dorset and East Devon Coast World Heritage site, inscribed on the World Heritage List in 2001. The chemical containers and oil released from the ship represent a potential environmental disaster. If the information provided is correct, the most serious damage could result from the oil spill, which could affect the coastline, its biodiversity, and wildlife. UNESCO will consult with the British authorities and the site management to assess any damage to the outstanding universal value of the site and any urgent measures to be taken.
The cliff exposures of the World Heritage site provide an almost continuous sequence of rock formations spanning some 185 million years. The area's important fossil sites and classic coastal geomorphologic features have contributed to the study of earth sciences for over 300 years and are outstanding from a scientific and natural point of view.
River restoration with highway removal
In megacity Seoul, Korea, the restoration of a culturally important river teaches key lessons. A river can be recovered and restored even in a large and dense central business district, with multiple positive effects. These include providing habitat for nature, preserving cultural heritage, providing public access to nature, flood control, and microclimate regulation. The river restoration also promoted more sustainable transport modes over the roads and cars removed to make space for the river and its surrounding parks.
Keywords: river restoration, highway removal, sustainable transport, habitat
Seoul is classified as a megacity, with a population of circa 23 million in the greater metropolitan region. The Cheonggyecheon river (pronounced “chung-gye-chun”) is a seasonal river that bisects the city and was long highly appreciated and used. After increasing pollution and canalisation, it was concreted over, and a six-lane highway was built over it during the 1970s.
But a politician, Lee Myung-bak, led a campaign as Seoul’s mayor to restore the river – and was elected South Korea’s president a few years after. The success of the Cheonggyecheon river restoration has had massive ripple effects: in East Asia and North America, cities are studying the project to gain its benefits for ecology, environmental quality, and urban sustainability. Within South Korea, nearly 100 other elevated roads have been scheduled for removal.
Clearing the transportation hurdle
The strongest objection to the project was that the highway, carrying 160,000 cars per day, was vital to the city’s transportation and economy, even though perpetually congested. In fact the project provided transportation improvements of many kinds. With reduced road capacity in the centre, Seoul radically expanded its bus rapid transit (BRT) service, and better integrated it into other public transport, e.g. underground rail, buses, as well as improved infrastructure for non-motorised transport (see also Guangzhou). Cars disappeared, buses ran faster and were better utilised, subway use increased. Walking was also facilitated.
Many benefits to environment
The environmental benefits of the restored river are multiple. Air quality improved: one report cites small-particle air pollution decreasing from 74 micrograms per cubic metre to 48 in the vicinity of the river. Microclimate benefits come from the river acting as a natural air-conditioner: temperatures in the river corridor are 3-4 degrees C lower than areas only 400 metres away, and wind speeds are on average 50% higher than before the river was recovered. These are important benefits for climate adaptation, in addition to the increased resilience against flooding when a city has open watercourses.
The natural ecology has benefited, e.g. birds, fish, insects, and plants. According to a 2009 report, the number of bird species in the river corridor increased from 6 to 36, fish species from 4 to 25, and insect species from 15 to 192. The green corridor is eight km long and 730 metres wide, with a 400 ha park, and features waterfalls, bridges, and running tracks.
There are remaining critical questions about the sustainability tradeoffs of pumping water artificially into a newly restored river. Critics of the river restoration question the overall benefits to citizens’ quality of life and whether the process had sufficient input from civil society and environmental actors.
Jeffrey R. Kenworthy, 2006, “The eco-city: ten key transport and planning dimensions for sustainable city development”, Environment and Urbanization, April 2006, vol. 18 no. 1, 67-85
Andrew C. Revkin, 2009, “Peeling Back Pavement to Expose Watery Havens”, The New York Times, July 16 http://www.nytimes.com/2009/07/17/world/asia/17daylight.html?_r=2&pagewanted=all
Michael Replogle, Walter Hook, 2006, “What dynamic local leaders can teach us about environmental stewardship”, The Sacramento Bee, January 24, http://www.itdp.org/news/local-leaders-teach-stewardship/
John Vidal, 2007, “U-TURN”, Resurgence Magazine, http://www.resurgence.org/magazine/article204-u-turn.html
Preston L. Schiller, Eric C. Bruun, Jeffrey R. Kenworthy, 2010, An introduction to sustainable transportation: policy, planning and implementation, London: Earthscan
Key data are retrieved from the UN World Urbanization Prospects, the 2009 Revision, http://esa.un.org/unpd/wup/index.htm
Ancient Amazon home to large 'cities'
Brazil's northern Amazon region, once thought to have been pristine until the encroachment of modern development, actually hosted sophisticated networks of towns and villages hundreds of years ago, according to a new study by U.S. and Brazilian researchers.
A report by Dr Michael Heckenberger of the University of Florida and colleagues, published in today's issue of the journal Science, suggests the society was advanced and complex, and had worked out ways of using the Amazon forest without destroying it.
The researchers used archeological evidence and satellite images to show the area was densely settled long before Columbus and European settlers arrived, with towns featuring plazas, roads up to 50 m wide, deep moats and bridges.
Nineteen evenly spaced villages were linked by straight roads, and the cluster could have supported between 2,500 and 5,000 people, said the researchers. The villages were all laid out in a similar manner - and the roads were mathematically parallel: "This really blew us away," Heckenberger said. "It's fantastic stuff."
Heckenberger, who worked with indigenous chiefs from the Upper Xingu region as well as a team at the Universidade Federal do Rio de Janeiro, said the settlements dated to between 1200 A.D. and 1600 A.D.
"Every 3 km to 5 km there is another village or town," he said. "Some of these villages are 50 hectares in size ... maybe 150 or so acres in total size," he added.
"In the villages sometimes the roads are 50 m wide. Why 50 m? There were no wheeled vehicles. They were not having car races up and down these things and certainly you were not moving Incan armies."
Heckenberger believes the wide boulevards and plazas were the early Xinguano society's version of monuments - akin to the pyramids of the Maya: "Clearly it is an aesthetic thing," he said. "It speaks of very sophisticated astronomical knowledge and mathematical knowledge and the kind of things that we associate with pyramids. It is a different human alternative to social complexity."
It would have taken a productive economy to fund such works, he added. But the civilization was not as large and urbanised as better known South American civilizations.
"Everyone loves the 'lost civilization in the Amazon story'. What the Upper Xingu and middle Amazon stuff shows us is that Amazon people organised in an alternative way to urbanisation. We shouldn't be expecting to find lost cities. But that doesn't mean they were primitive tribes, either."
The agriculture was clearly sophisticated, too, the researchers said, and probably very unlike modern clear-cutting strategies. The inhabitants did, however, alter the forest, Heckenberger said.
"What it does show is there are alternatives to what is commonly presented as an all-or-nothing scenario," he said.
The Amazon was not primordial when European colonists arrived - bringing with them the diseases such as smallpox and measles that virtually wiped out indigenous populations.
"I firmly believe that the majority of what is now forested landscape would have been converted into some other type of environment - secondary forest or fields of grass or orchards of fruit trees or manioc gardens," he said.
Xinguano people still live in the region and are certainly descended from whoever built the cities, he said - but the populations are considerably sparser.
The $260 million mission, the first to Venus since NASA sent up the Magellan mission in 1989, aims to study the greenhouse effect on the planet, where the atmosphere is intensely hot and crushingly dense.
It also hopes to learn what caused volcanic activity some 500 million years ago and whether there is any taking place today.
Venus Express was launched Nov. 9 atop a Russian booster rocket from the Baikonur Cosmodrome in Kazakhstan.
Under current plans, the mission is to last 500 days, with the possibility of extending it for another 500.
On the Net: http://www.esa.int
Hang a Right at Jupiter
For space navigators, the best course to a distant object is never a straight line.
- By Michael Milstein
- Air & Space magazine, January 2001
NASA/JHU Applied Physics Laboratory
Bob Farquhar feels lucky. And that’s good, since there’s nothing he or anyone else can do now but hope for the best. His spacecraft is out there on its own, 119 million miles away, and whatever’s going to happen next has already been programmed into the onboard computers.
Farquhar, whose mild manner seems more like that of a high school teacher than a space explorer, watches and waits from an unlikely place—not the Jet Propulsion Laboratory in Pasadena, California, headquarters for almost all past U.S. interplanetary missions, but a nondescript building at Johns Hopkins University’s Applied Physics Laboratory, outside Baltimore. It could be any office park in the country, except that the room in which Farquhar sits—at the head of a long conference table—is linked to NASA’s Deep Space Network. At the moment, one of the network’s giant dish antennas is relaying signals from a boxy little spacecraft called NEAR Shoemaker in orbit around a potato-shaped asteroid known as Eros.
NEAR stands for Near Earth Asteroid Rendezvous, the first spacecraft ever to match orbits with an asteroid and hang around for an extended study of its chemical and physical makeup (“Shoemaker” was added to the name in memory of the late astrogeologist Eugene Shoemaker). While Farquhar and his team monitor the signals, NEAR orbits 62 miles above the slowly tumbling asteroid’s surface. The satellite is about to go in for a closer look, firing thrusters to cut the orbital altitude in half.
Farquhar has just learned that the engine burn, the instructions for which were long ago loaded into the spacecraft’s computers, should last 144 seconds, nudging the craft from its piddling 5 mph to a whopping 6.5 mph. “Oh, that’s great!” he exclaims, drawing quizzical looks from others watching a screen full of numbers charting NEAR’s position. “144 is 12 squared. 12 is a lucky number. I was born September 12. [My first space mission] launched 12 minutes and 12 seconds after the hour and can you guess what the date was?”
Farquhar’s faith in lucky numbers should not be easily dismissed, considering that he and his colleagues have sent spacecraft where no one thought possible, on less fuel and in less time than most people would have guessed. He calls himself an astrodynamicist, but he’s the unofficial king of the space navigators, a cadre of behind-the-scenes engineers who direct shiny, expensive spacecraft from here to there, with here being Earth and there being an asteroid, comet, planet, or moon.
Space navigation is like threading a needle, only a thousand times harder. With today’s space missions aiming at ever smaller targets (like asteroids), the eye of the needle gets so narrow, with so little room for error, that those threading it either succeed spectacularly, like Farquhar and his team have done so far at Eros, or they miss.
And in space, there’s no way to miss but spectacularly.
By Giorgio Carboni, November 1997
Translation edited by Donald Desaulniers, Ph.D.
Why build a microscope? Why not! The following is a guide on how to build a stereoscopic microscope. This guide allows you to assemble, at a relatively low cost, an instrument that is useful for observing natural objects such as insects, plants, minerals, fossils and flowers. It can also be used for home projects like examining art or antique objects, repairing watches and thin gold chains, looking for defects in electronic printed circuit boards, or even finding and extracting an annoying splinter from your finger.
At first glance, the construction of such an instrument may seem to be a particularly complex and difficult endeavor, since costly precision machine tools are usually required to build this kind of instrument. As you will see below, the approach we have adopted makes its fabrication relatively simple without having to resort to expensive machine tools.
As a school project, the construction of this stereoscopic microscope can be a good exercise in optics, in mechanics (preparing drawings with CAD programs and working the pieces), as well as opening new horizons in biology and natural sciences. In fact, we have deliberately omitted the dimensions in the drawings provided in this article to simplify their presentation and to allow you to adapt the plans to the materials and components available to you. As presented, this microscope will cost less than a tenth of the price of a commercially available instrument of the same performance.
THE STEREOSCOPIC MICROSCOPE
So, what is a stereoscopic microscope anyway? In biological sciences, the two main types of microscope commonly used are the "conventional" type, usually referred to as the compound microscope, and the stereoscopic microscope. The main difference between these two types of microscope is that the compound microscope sees the sample from a single direction, whereas the stereoscopic type sees the object from two slightly different angles which provides the two images needed for the stereoscopic vision. The stereoscopic microscope gives a three-dimensional view of the object, while the same object appears flat when viewed through a compound microscope. This holds true even if the compound microscope has a binocular head because each eye sees exactly the same image.
Also, conventional compound microscopes are used to observe objects that are transparent or translucent and typically have a magnification ranging from 50 to 1,200 times. The stereoscopic microscope, however, views objects mainly by means of reflected light, and its power, typically ranging from 8 to 50 times, is much less than that of the compound microscope. Conventional compound microscopes are often so powerful that you cannot even see the specimen with the naked eye. Stereoscopic microscopes, on the other hand, are generally not as powerful as compound microscopes. Although the stereoscopic microscope does not allow you to observe objects as small as microbes, there is nonetheless an amazingly large number of things to observe with this type of microscope.
While this apparent limitation of magnifying power may appear to be a drawback, the lower power does have its advantages. Indeed, the relative proportions of what you see with the naked eye compared to what you see through the stereoscopic microscope are maintained, thereby making it easier to use. With the compound microscope you must often follow relatively complex procedures to prepare the specimens for viewing. This is not necessary with stereoscopic microscopes. The greater ease of use of the stereoscopic microscope makes it particularly well suited for initiating children to nature and the sciences. This does not mean that it is an instrument for children only. Stereoscopic microscopes are widely used by researchers all over the world in fields such as entomology, botany, and mineralogy, and in applications like micro-surgery, micro-electronics, and many other commercial and industrial activities.
The simplicity and ease of use of the stereoscopic microscope is also attractive for the adult naturalist or amateur scientist who does not always have the time nor patience to prepare specimens for viewing. For example, when I come home, tired from a full day of work, I am not usually inclined to undertake complex tasks. On the contrary, I enjoy observing objects under the microscope just to relax. With a stereoscopic microscope, a short stroll in the garden will provide numerous and amazing objects for you to observe under the microscope. You can, for example, observe ants bringing out little grains of soil from their underground colonies, bees collecting nectar on flowers or even protozoa swimming in a pond of water.
As a young boy, not having much money and with a great interest in microscopy, I tried building my own instruments. I succeeded in making a compound microscope without much difficulty, but making a stereoscopic microscope proved to be a more challenging project. The commercial models I had seen in shops were made of two separate microscopes that were kept convergent, as shown in Figure 1. The problem was not so much building the two microscopes that comprise the stereoscopic microscope, but keeping them perfectly aligned as you vary the interpupillary distance or the power of the instrument.
I pondered this problem for a long time, but the mechanical challenges proved to be too difficult and discouraged me from venturing any further in this project. For several years, the problem remained unresolved, until finally one day I came upon a brochure showing a cross-section of a stereoscopic microscope. I then realized that this instrument was not simply made up of two separate and convergent microscopes, but rather an optical device in which the light passes through a common objective and then follows two distinct paths. In this scheme, I reasoned that the common objective, which comprises several lenses, can be reduced to a single lens. This schematic drawing allowed me to understand that it was possible, and optically correct, to collect the two images needed for stereoscopic vision using a single objective. More importantly, the problem of aligning the two separate microscopes, which is so important and so difficult to solve, was eliminated. Indeed, as discussed later, this makes it possible to construct an orthogonal and parallel mechanical structure, instead of the convergent one of the original model shown in Figure 1 (A).
The next step in this project was to recognize that the optical components needed to construct the stereoscopic microscope could be obtained from a normal pair of binoculars. This is important since binoculars are widely available and can be obtained at a relatively low cost.
Also, using a pair of 8 x 30 binoculars as the "eyepiece" was important in order to simplify the project. At this point, many problems were resolved. Nonetheless, the question of adjusting the interpupillary distance puzzled me for a while. In the beginning, I thought of designing a stereoscopic microscope with a central pivot that could move together with the "eyepiece" of the binoculars. This arrangement was cumbersome and did not satisfy me. The idea of using the large prisms of a dismantled set of binoculars to accommodate the variations of the optical path solved every problem. In fact, simply moving the upper pair of binoculars allows you to adjust the interpupillary distance. After having eliminated all other problems, the focusing movement is the main problem left, but, as you will see later, you can resolve it without great difficulty.
THEORY OF OPERATION
Let us examine how the common-objective stereoscopic microscope works. If you have a pair of binoculars, you can try this simple experiment. Unscrew one of the objective lenses and place it in front of the other. Approach an object until its image is in focus. You will see the object magnified. What is happening? Consider a specimen placed at the focal distance from the objective. The divergent light rays coming from the object pass through the objective lens and emerge parallel. This light then passes through the second objective of the binoculars. But binoculars are specifically designed to observe distant objects, whose light arrives as essentially parallel rays. The second objective therefore forms an image of the object at the correct position for the eyepiece, which makes it appear distinct.
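A rough estimate of the magnification of this improvised single-objective arrangement can be sketched as follows. The 250 mm near-point convention and the roughly 120 mm focal length assumed for a salvaged 8 x 30 objective are my assumptions for illustration, not figures from the text:

```python
def improvised_magnification(binocular_power=8.0,
                             objective_focal_mm=120.0,
                             near_point_mm=250.0):
    # An object of height h at the focus of the spare objective subtends an
    # angle of about h / f_objective; the binoculars multiply that angle by
    # their power. Compared with the naked eye viewing the same object at the
    # conventional 250 mm near point (angle h / 250), the overall
    # magnification is power * near_point / f_objective.
    return binocular_power * near_point_mm / objective_focal_mm

print(round(improvised_magnification(), 1))  # -> 16.7 with these assumptions
```

That lands comfortably in the 8x to 50x range the text gives for stereoscopic microscopes; a shorter-focal-length objective would raise the power.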
In this simple experiment we obtained a monocular microscope; however, we want to build a stereoscopic unit. To accomplish this, it is necessary to collect two distinct optical paths from a single objective and to pass them through the binoculars, which serve as the eyepieces. This can be easily done by means of four prisms, as illustrated in Figures 2 and 3.
Let us examine more closely the optical schematic of the common-objective stereo microscope. The lower part of the microscope includes an achromatic positive lens and four prisms. You can obtain all of the required components from an old pair of binoculars (distal binoculars) which you can afford to dismantle. The upper part of the stereoscopic microscope comprises a second pair binoculars (proximal binoculars). This second pair is used without any alteration as if it were an "eyepiece". When the microscope is not in use, the later pair of binoculars can be removed and used for its original purpose: to view distant objects.
If you keep the common objective at its focal distance from the object you want to observe, the divergent light rays from the specimen pass through the objective and emerge as parallel rays, which are then suitable for observation through the binoculars serving as the eyepieces. The four intermediate prisms, as shown in Figures 2 and 3, allow the proximal binoculars to view the two optical paths passing through the common objective.
This design of stereoscopic microscope is not only optically simple, it also has some important advantages from a mechanical point of view. The fact that the light rays emerging from the common objective are parallel allows you to build a parallel and orthogonal mechanical structure, thereby avoiding the problems of optical convergence and alignment that would arise if the instrument were made up of two distinct microscopes as shown in Figure 1. Moreover, the parallel arrangement of the two optical paths is well suited to the structure of the proximal binoculars, which is also parallel. These features simplify the construction of the instrument considerably, without requiring precision machining equipment such as lathes or boring and milling machines.
With this simple design, you can also adjust the interpupillary distance without any mechanical movement of the microscope: simply make the adjustment on the proximal binoculars, since they are already designed to do this.
In summary, the key features of this stereoscopic microscope design are the following:
- collects the two images required for stereoscopic vision from the same objective;
- uses a series of prisms which allows a pair of proximal binoculars to focus on the optical paths emerging through the common objective;
- exploits the relatively large size of distal prisms to allow the adjustment of the interpupillary distance without any mechanical movement of the microscope;
- the optical components are readily obtained from a dismantled pair of binoculars;
- uses a pair of 8 x 30 binoculars as an integral part of the microscope.
Thus, what we now have is an optically correct structure that is also simple from a mechanical point of view. The optical design we have adopted makes the construction of a stereoscopic microscope reasonably accessible even to the amateur scientist working at home without any specialized tools.
COMPONENTS OF THE MICROSCOPE
Let us first distinguish the optical part of the microscope from the stage. The optical part comprises all of the optical components and the metal parts which hold them together, as illustrated in Figure 8. The stage includes the pedestal, the column and the focusing system (Figure 5).
In the construction of this microscope, we recommend that you progress from the bottom upward, just as you would when building a house. Start with the pedestal, then the column, followed by the focusing device and finally the optical part.
SELECTING THE BINOCULARS
If you do not have any binoculars, you will need to obtain two sets. You can find low cost binoculars in second hand shops, flea markets, bazaars or weekend garage sales. First, you will need a set of 8 x 30 binoculars, where 8 is the magnification and 30 is the diameter in mm of the objectives. This type of binoculars is widely available. This pair will not be altered and will not be permanently fixed to the microscope, so you can continue to use them as a regular pair of binoculars. When buying your binoculars, be sure that they have good chromatic correction. They should not produce double or blurred images, as occurs when the images are not perfectly superimposed. Keep in mind that these binoculars should have as wide a field of view as possible. This makes viewing more comfortable, thereby enhancing the spectacular three-dimensional effect of stereoscopic vision.
The set of binoculars to be dismantled must have objectives of 50 mm in diameter in order to handle the displacement of the optical paths (Figs. 4 and 9). Their power is not important since the eyepieces will not be used. These binoculars can be 7 x 50, 8 x 50, 10 x 50, 16 x 50, etc. They typically cost about 70 to 150 US$ new, but used pairs can often be purchased for much less. It is important to verify that the objectives have metal rather than plastic locknuts, as these will later be used to connect the objective to the microscope.
When selecting this pair of binoculars, it is important to verify their quality. It is usually sufficient to check the amount of chromatic aberration. To do this, point the binoculars toward a TV antenna or tree branches with adequate backlighting, or with the sky as a background. Good quality optics will not produce any orange or blue fringes along the edges of the antenna or branches; rather, the image should appear dark and sharp.
- all dimensions are in mm
- # refers to thickness
- M refers to the metric screws system
- Ø means diameter
- black anodized aluminum square tube # 2 x 45 x 45 x 170 (prisms-housing tube)
- aluminum square tube # 2 x 45 x 45 x 140 (linkage tube)
- black anodized aluminum "U" shaped bar # 2 x 50 x 10 x 170 used to support the binoculars
(These three forms can be usually obtained from companies which produce or install aluminum windows and door frames)
- Plexiglas or rigid black plastic plate # 8 x 30 x 166 (prism-holder plate)
- 3 small black plastic plates # 2 x 41 x 41 (plugs for prism-holder tube and linkage tube)
- stainless steel sheet # 1 x 65 x 105 (support for second objective)
- drawn metal rod Ø 12 x 36 (support for the second objective)
- 1 cylindrical head screw M 3 x 8 (for the prisms-holder plate)
- 2 conical head screws M 3 x 5 (for the "U" shaped bar)
- 4 conical head screws M 2 x 5 (to fix the first objective)
- 2 cap socket head screws M 4 x 7 (for the mounting of the second objective)
- 4 prisms and two 50 mm-diameter objectives. You can scavenge these from an old set of binoculars.
- rack: pitch=1, 15 x 15 x 275
- pinion: pitch=1 z=15 (z=number of teeth)
- drawn steel rod Ø 6 x 80 (pinion shaft)
- 2 aluminum plates #5 x 40 x 40 (wings)
- steel plate #5 x 50 x 80 (rack baseplate)
- steel square tube #2 x 25 x 25 x 100 (focusing system carriage)
- "L" shaped aluminum bar #2 x 15 x 15 x 100 (focusing system carriage)
- Teflon or Nylon sheet #1 x 43 x 98 (focusing system carriage)
- 6 cap socket head screws M 4 x 7 (for wings)
- 4 flat tip set screws M 4 x 8 (to push against the "L" shaped bar)
- 1 spring pin Ø2 x 12 (to fix the pinion on the shaft)
- 2 knobs Ø50 about (for the focusing system movement)
- 2 flat tip set screws M 4 x 8 (for knobs)
- 4 cap socket head screws M 4 x 5 (to fix the linkage tube)
- 1 black covered chipboard panel or wood board #15 x180 x 200
- white plastic trim h=15
- contact cement to fix the plastic trimming in place around the base of the pedestal
- 4 hexagonal head screws M 5 x 35 + 4 washer Ø5 + 4 nuts M5 (for baseplate)
- 4 white rubber feet Ø20 x 10
- 4 self-tapping screws Ø3.5 x 20 + 4 washer Ø4 (for the rubber feet)
- epoxy cement to fix prisms in place
- mat black aerosol paint can to coat the inner parts of the optical section
- wood board for the carrying case
- vinyl glue for the wood case
- sheet of methacrylate #3 for the bell
CONSTRUCTION OF THE STAGE
You can make the pedestal as shown in Figure 5 using a black Formica-covered chipboard panel 15-mm thick with sides of 180 x 200 mm. Round off the four corners and apply a white plastic trim around the border of the panel. Instead of the chipboard panel, you can also use a wood board of the same dimensions. Place four white rubber pads at the four corners on the underside of the pedestal.
Sometimes one can find rack and pinion mechanisms in shops selling used photographic equipment. Look for old tripods, photographic enlargers or bellows supports that have simple rack and pinion devices. Otherwise, buy a rack with pitch = 1 mm and a cross-section of about 15 x 15 mm. Make sure that its sides are smooth and free of rust, dents and other defects, as the carriage of the focusing system slides along these surfaces. Cut the rack to a length of 275 mm. If necessary, dress the surfaces with a file and sandpaper. The rack also serves to support the column. Weld a steel base plate to the bottom to connect it to the pedestal. When you mount the rack on the pedestal, check with a square that it stands at a right angle. If necessary, place shims under the base plate to ensure that the column is perpendicular to the base.
Two parts form the focusing system: the carriage and the movement device. As detailed in Figure 7, the carriage allows displacement in the vertical direction. The knobs of the movement device allow you to precisely adjust the displacement of the optical section. The coupling of the carriage to the rack ensures that the movement of the carriage is vertical and stays in alignment with the object. As illustrated in Figure 7, because the square tube of the focusing system fits closely over the rack, it is forced to follow a straight path up and down the rack.
But how can the tube be made to contact the rack firmly? This is accomplished by means of an L-shaped metal bar which sits in the space between the inner wall of the square tube and the smooth sides of the rack. A light pressure is applied to the bar by means of four setscrews on adjacent sides of the tube. To give the device a smooth movement, a piece of Teflon or Nylon sheet 0.5 or 1 mm thick is inserted along the three smooth faces of the central rack. You should be able to find commercially available racks with a very good surface finish, which will provide a very regular and smooth movement of the mechanism. Note that because Teflon sheeting should not be folded around a sharp edge, you will have to bevel the edges of the rack as shown in Figure 7.
At this point, the carriage slides along the rack and the fine movement needed for adjusting the focus is provided by the pinion which engages in the rack. As depicted in Figure 7, you will have to cut an opening in the middle of the carriage tube to allow the pinion to engage with the rack.
On two opposite sides of the carriage tube, fix two plates through which the pinion shaft passes. To drill the holes for the shaft, you can use the pinion itself to guide the drill bit. When you are done, fix the pinion to the shaft with a spring pin Ø2 x 12.
A square aluminum tube is used to link the optical part to the focusing carriage and to space them apart (see Figures 5 and 8). The optical part is kept higher, which allows you to use a shorter column.
FABRICATION OF THE OPTICAL SECTION
The optical section, illustrated in Figure 8, consists of a square aluminum tube. This tube houses the four prisms, which are simply glued to a rigid plastic plate. This plate is then fastened to the prism housing tube with one screw. A "U" shaped aluminum bar is fastened over the prism housing with two screws; it provides a support and guide for the proximal binoculars. The common objective, left in the tube of the dismantled binoculars, is fixed to the underside of the prism housing. At the back of the prism housing tube, two threaded holes serve to affix it to the rack mechanism.
MAKING PIECES OF THE OPTICAL SECTION
To allow light from the common objective to reach the binoculars, you must cut slots into the prism housing and on the "U" shaped bar. As illustrated in Figures 8 and 9, the central slot is cut into the bottom part of the prism housing tube. Figure 9 is a plan view showing the diameter of the objective, the two central prisms and four light beams crossing them. The two innermost light beams correspond to their position when the binoculars are adjusted at the minimal interpupillary distance, whereas the two outer beams correspond to the maximum interpupillary distance.
Imagine collecting the light through the objective of the microscope as two beams 12 mm in diameter (we will explain later where this value comes from). Due to the size of the bevels on the prisms used in this design, the light beams cannot have a clearance smaller than 16 mm. Given that the objective has a diameter of 50 mm, the maximum clearance of the two beams is 50 - 12 = 38 mm. From these values, the maximum displacement of the beams is 38 - 16 = 22 mm. This range allows you to adjust the interpupillary distance from about 50 to 76 mm.
Based on the above calculations, we deduce that all of the slots should be 12 mm in width; however, it is necessary to make them 2 mm larger, hence 14 mm wide. In fact, what determines the diameter of the light beams passing through the microscope are two diaphragms of 12 mm in diameter which we will insert in front of both objectives of the proximal binoculars, as explained later. The lower central slot should be as long as the diameter of the objective, in this case 50 mm.
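As a sanity check, the clearance arithmetic above can be recomputed in a few lines. This is only a sketch using the dimensions stated in the text (50 mm objective, 12 mm beams, 16 mm minimum clearance); substitute your own measurements if your prisms or objective differ:

```python
# Beam-clearance arithmetic for the common-objective design (all in mm).
OBJECTIVE_DIA = 50.0   # diameter of the common objective
BEAM_DIA = 12.0        # diameter of each light beam (set by the diaphragms)
MIN_CLEARANCE = 16.0   # minimum beam clearance allowed by the prism bevels

max_clearance = OBJECTIVE_DIA - BEAM_DIA   # widest the two beams can sit apart
travel = max_clearance - MIN_CLEARANCE     # available adjustment range
slot_width = BEAM_DIA + 2.0                # slots are cut 2 mm wider than the beams

print(max_clearance, travel, slot_width)   # 38.0 22.0 14.0
```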
Figure 11 shows the distance between two adjacent beams of light as they pass through the prisms. From this we can determine that the upper slots should be 7 + 11 + 7 = 25 mm in width.
What remains to be determined is the distance between the two upper slots. To accomplish this, adjust the proximal binoculars to their maximum interpupillary distance and measure the clearance between their objectives (iobmax). Figure 10 shows how to measure the distance between the objectives and the eyepieces of a pair of binoculars.
As depicted in Figure 11, the outer edges of the slots have to be separated by a distance equal to iobmax + 14 mm (the slot width). For example, in the case of my binoculars iobmax = 124 mm, so the separation between the outer edges of the two slots is 124 + 14 = 138 mm. To ensure proper alignment of the slots on the upper "U" shaped bar with the corresponding slots on the top of the prism housing, it is advantageous to fasten these two pieces together before cutting the slots.
To install the objective, you can use four small screws under the front metal ring of the objective, as shown in Figure 8. Note that the objective lenses in a pair of binoculars are designed to focus incoming parallel light rays into converging light on the other side. The objective lens of your microscope must have the opposite orientation to that in the binoculars; otherwise the image will not form correctly. This means that the surface of the objective lens that faced the observer in the binoculars must instead be oriented toward the sample, as depicted in Figure 2. To do this, you can simply leave the objective in its tube and mount it as shown in Figure 8.
Figures 3 and 8 show a mask with two openings inserted between the objective and the central prisms. In addition to serving as a dust protector, this mask is used to reduce parasitic light that would otherwise reduce the contrast of the image. You can simply cut this mask from a black card. The dimensions of the openings can be obtained from Figure 9.
When you observe an object with the microscope, the points that are out of focus are not circular, but elongated along the horizontal axis. This is due to the shape of the slots, which are not circular but rectangular. To correct this problem, you need to insert a circular aperture in front of both objectives of the proximal binoculars as shown in Figure 12. What essentially determines the diameter of the light beams passing through the microscope are these diaphragms. Their diameter should be 12 mm. Why 12 mm? As shown in Figure 9, an aperture of this diameter allows light to enter the optical section but prevents any reflection against the borders of the prisms and of the objective when the beams are at their minimum or maximum distance. However, if you do not expect to use the proximal binoculars at their minimum or maximum interpupillary distance, you can make these diaphragms 14 mm in diameter. These apertures can be made by simply cutting a hole in a disk of black cardboard. You can also use the two lens covers of your binoculars and perforate the holes with a socket punch.
Cut the slots 2 mm larger than the apertures, thus 14 mm (or 16 mm). The front mask placed between the objective lens and the prisms, however, must have openings no greater than 12 mm (or 14 mm) in diameter to minimize parasitic light.
MOUNTING THE PRISMS
Before you start cementing the prisms to the base plate, you must first complete all of the essential components of the microscope. In fact, to complete the adjustment of the prisms, the mechanical part of the instrument should be fully functional.
To assemble the prisms, use a rigid base plate made of a material like Plexiglas or Bakelite. Its thickness has to be such that the prisms align symmetrically under the slots. The thickness of the base plate is not critical, because the width of the slots is smaller than that of the prisms. On the base plate, trace a centerline that will serve as a reference for positioning the prisms. Before you cement the prisms, you will need to make a threaded hole to fasten the base plate. Be sure to carefully clean the prisms.
Next, cement the internal prisms to the base plate. Mount them very close to one another, as shown in Figure 11. Place a few drops of two-component epoxy resin on the appropriate faces of the two prisms to be cemented. Following the arrangement shown in Figure 14, place the prisms in the right position using the centerline you traced on the base plate as a reference. To prevent scratching of the central prisms during their assembly inside the square tube, keep them a few tenths of a millimeter above the plate. You can do this during the gluing operation by inserting strips of paper between the prisms and the supporting surface.
Depending on the curing time of the epoxy cement, the resin should have hardened enough to prevent the prisms from moving under their own weight, but it should still be soft enough to allow the prisms to be adjusted by applying a little force. At this point, you have to check and adjust the alignment of the prisms to ensure they are coplanar. If their bottom surfaces are coplanar, the reflected images will coincide perfectly, as if they came from a single mirror. Wait until the epoxy has properly set before proceeding any further with the assembly.
From Figure 11, you can easily obtain the mounting position of the external prisms. Their position is defined by the dimension "d", the distance between the diagonals of a pair of prisms measured in the horizontal direction. It also corresponds to the horizontal segment of the light path. Calculating this value is simple, referring to the maximum objective clearance of the proximal binoculars (iobmax):
d = (iobmax-38)/2
For example, if iobmax is 124 mm, then d = (124-38)/2 = 43 mm.
As the light crossing the prisms is parallel, the "d" dimension is not critical. You can verify this distance by measuring it with a ruler. In any case, try to reduce the error to a minimum.
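If you prefer to avoid the mental arithmetic, the position of the external prisms can be computed directly from your measured iobmax. A minimal sketch of the formula above (the 124 mm value is just the worked example from the text):

```python
def prism_offset(iobmax_mm):
    """Horizontal distance d (mm) between the diagonals of a prism pair,
    given the maximum objective clearance of the proximal binoculars."""
    return (iobmax_mm - 38.0) / 2.0

print(prism_offset(124))  # the worked example in the text: 43.0 mm
```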
To glue the external prisms, first place the base plate on a table. Then apply the epoxy resin to the base plate and set the prisms on it. With a ruler, verify the position of the prisms and correct it if needed. For this operation, you can also use a plastic or wooden shim. To align the prisms with the upper edge of the plate, you can use a metal guide.
ALIGNMENT OF THE PRISMS
The difficulty in setting the prisms comes mainly from the fact that a small departure from the correct position produces a large misalignment of the two images. Even if you mount the prisms carefully, the images produced will not overlap perfectly. To ensure that the prisms are well aligned, adjust them until the images are correctly superimposed. You must complete this operation within a few minutes, before the epoxy resin sets.
The errors in the positioning of the prisms are of two types: displacements along the XYZ directions and rotations around the three spatial axes. Avoiding rotational error is the most critical for obtaining superimposed images, so try to be as accurate as possible when positioning the prisms. If the mounting of the prisms is done with reasonable care, the remaining alignment errors should be quite small. To correct them, it should be sufficient to adjust only one of the prisms.
Setting the alignment of the prisms is tricky and requires care and patience. To facilitate this operation, we have divided it into several steps. Before proceeding, we need to define a few terms. Horizontal alignment refers to relative displacement of the images along the horizontal direction (<--->), vertical alignment to relative displacement along the vertical direction (v ^), and angular alignment to errors in the parallelism of the images ( \ / ).
A) Alignment of each pair of prisms. Make this adjustment a few minutes after gluing the external prisms. Start with the pair of prisms on the right side. Referring to Figure 15, look at a distant object (about 20 m away). If the object is too close, you will see a significant horizontal misalignment between the images. In any case, during this adjustment do not pay too much attention to the horizontal alignment, because it will be corrected in a later step, but carefully correct any vertical misalignment. If the pair of prisms is well aligned, the image seen through the prisms and the one seen directly are continuous. If needed, correct the position of the external prism. Do the same with the pair of prisms on the left side. Finally, verify that a horizontal line appears at the same height through the two central prisms. If necessary, raise one corner of a prism slightly. Make these fine adjustments by inserting thin wooden wedges under the prisms. This will prevent the elastic recovery of the epoxy resin from moving the prisms out of alignment.
Before continuing, let the epoxy resin begin to cure while the plate is held in the vertical position. This will prevent the prisms from moving under their own weight, while still allowing you to move them by applying a slightly greater force.
B) Alignment of the prisms as a whole. Now that you have adjusted each pair of prisms independently, we can proceed to align the prisms as a whole. First, mount the base plate with the prisms in its square tube. Then put the 8 x 30 binoculars on the microscope and look at some fine print. This is the "word test": a severe test of the alignment. Make the needed corrections, always using wooden wedges (Fig. 16). In this step, the adjustments concern mainly the horizontal alignment of the prisms, but correct any other misalignment you are able to detect. Keep in mind that our eyes tend to compensate for alignment errors, mainly in the horizontal direction.
During the final adjustments, I found it useful to mask one eye with a card, let it rest for a while, and then quickly uncover it. In this way, I could better detect whether the images were well aligned. To spare the users of your microscope from having to cross their eyes too much, you may need to adjust the horizontal alignment. To do so, look now and then at an object about a meter away, then look immediately into the microscope while trying to maintain roughly the same convergence of your eyes. If necessary, rotate one of the prisms until the horizontal misalignment disappears. These steps require patience, but do not despair: you will see that your microscope yields well-aligned images.
When you have finished with these adjustments, lay the microscope down in such a way that the prisms exert their weight on the plate. Let the parts rest for about 30 minutes. Check that the prisms have maintained their correct alignment and wait a couple more hours for the epoxy resin to harden properly, at which time the microscope will be ready to use. The following day, remove the base plate and pull out the wedges under the prisms, or cut them off with a sharp blade.
PROTECTION AGAINST PARASITIC LIGHT
If you are near a window or lamp, stray light can enter through the openings at each end of the tube housing the prisms, causing the images to lose contrast. To overcome this problem, you need to cover the ends of the tube with plugs. The plugs on the linkage tube, on the other hand, have only an aesthetic function. These plugs are simply pressed into the tubes.
BLACKENING THE INSIDE PARTS
Before the final assembly of the microscope, you should blacken the inner surfaces of the tubes used to hold the prisms, the sides of the slots and the inner surfaces of the linkage tube. You can do this quite easily by applying a mat black spray paint. Remember to cover the surfaces you want to protect from the paint with paper and adhesive tape. Allow the painted parts to dry for 24 hours before reassembling the microscope.
FIXING THE BINOCULARS
Finally, to view the image, the proximal binoculars are simply placed on top of the microscope. In this position, however, the binoculars could fall, so they should be secured to the instrument. This can be accomplished by means of a fork-shaped support.
MAGNIFYING POWER AND ITS VARIATION
The magnification power of this stereoscopic microscope is given by:
Im = 250 x In/Fd
Im = magnification of the microscope
In = nominal magnification of the proximal binoculars
Fd = focal length of the common objective
If as proximal binoculars you have an "8 x 30", and if Fd is 200 mm, then:
Im = (250 x 8)/200
Im = 10 X
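For readers who want to try other combinations, the magnification formula can be put into a small helper. A minimal sketch; the 8 x 30 binoculars and the 200 mm focal length are just the example values from the text:

```python
def microscope_magnification(binocular_power, objective_focal_mm):
    """Im = 250 * In / Fd, where 250 mm is the conventional
    near-point (reference viewing) distance."""
    return 250.0 * binocular_power / objective_focal_mm

print(microscope_magnification(8, 200))  # the example in the text: 10.0x
```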
To determine the focal length of your lens, place the lens between a lamp and a screen, then adjust the position of the lamp to get a sharply focused image on the screen. Measure the distance A from the lens center to the lamp and the distance B from the lens center to the screen. The focal length is given by:
F = (A x B)/(A + B)
From the F value, you have to subtract the distance between the nodal points of the lens. You can take this to be roughly one half of the lens thickness (I hope that no opticians read this!).
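The lamp-and-screen measurement above reduces to one line of arithmetic. A quick sketch of the thin-lens formula; the 400 mm distances are purely illustrative measurements, and the nodal-point correction mentioned in the text is not included:

```python
def focal_length_mm(a_mm, b_mm):
    """Thin-lens estimate F = A*B/(A+B) from the measured lamp
    distance A and screen distance B (both in mm)."""
    return a_mm * b_mm / (a_mm + b_mm)

# Hypothetical measurement: lamp and screen each 400 mm from the lens.
print(focal_length_mm(400, 400))  # 200.0 mm
```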
Many users will surely like having more than one magnification, just as commercial microscopes usually do. To double the magnification of the microscope, you can add the other objective lens of the binoculars that you dismantled for this project. The performance of the microscope at double magnification is good, although not as sharp as with a single objective. You can mount the second objective on a rotating support (Fig. 17). Note that the tube of the first objective has to be cut in order to keep the distance between the two lenses as short as possible.
To change the magnification, you could also replace the eyepieces of the proximal binoculars with more powerful ones. Another approach is to use proximal binoculars with a zoom feature, but these are often costly and their quality can be poor.
I examined the possibility of using zoom photographic objectives to make a zoom stereo microscope. Unfortunately, these objectives have a small working aperture, which hinders the possibility of adjusting the interpupillary distance. To use such zoom lenses would require changing the structure of the microscope. Work on this project is in progress.
SURFACE FINISHING OF PIECES
When you choose the materials, try to avoid metals that are prone to corrosion. You can use nickel-plated steel for the square tube of the focusing system. Performing this galvanic treatment on the rack would improve its appearance, but differences in the thickness of the nickel layer would hinder the engagement of the pinion. For this reason, do not plate the rack, the pinion or its shaft. These parts of the focusing system should be made of quenched and tempered steel; they are quite resistant to rusting as long as you keep the microscope indoors. Aluminum surfaces are also fine untreated, but they can be black anodized. This treatment is suitable for used aluminum parts that show signs of wear. Consult your local "yellow pages" or business directory for companies that can do the anodizing for you.
It is springtime, and it's time to get outdoors for a nature excursion! You cannot very well bring your instrument with you as it is. It has been sitting at home all winter, and now it is time to work those lenses and grease up the focusing gears. As it is a precious and fragile instrument, it must be protected. One way of protecting your microscope is to house it in a wooden case. The case can also store the components of the microscope, including a light source and accessories, in special drawers or compartments. The case can be made of plywood or any suitable wood. A wooden case with tongue and groove joints is delightful to see, and a nicely finished case will give your microscope a certain prestige and esthetic appeal.
The microscope that you have built with your own hands is something in which you can take pride. But if you leave it in its case, nobody will ever notice it. On the other hand, keeping it on your desk or workbench will expose it to dust. You can cover it with a plastic hood, but this neither displays your microscope very well nor protects it well against dust.
A good solution is a bell cover made of Plexiglas, a commonly available transparent and rigid plastic that also comes in a variety of colors, including smoky gray. To make the bell cover, use a 3-mm thick sheet of this material. As illustrated in Figure 18, you have to bend it in two places, taking advantage of its thermoplastic properties: the material softens at about 100 °C. To bend it, use a metal bar heated in an oven. Apply the heated bar along the bending line on one side of the plastic sheet and then on the other side. To prevent scratching the surfaces, place a thin sheet of paper or thin cotton cloth between the plastic sheet and the heated bar. When the plastic softens, remove the bar and fold the sheet. Once you have the correct angles, cut two pieces of Plexiglas to close the sides of the bell. Glue them with chloroform, which is a solvent for this plastic, or use an adhesive for methacrylate. This Plexiglas bell is a simple and elegant solution: it shows off the microscope while protecting it from dust.
Below the microscope, or on a side of the tube housing the prisms, you can place a little plate with your name and the date of construction. Consult your local business directory for companies that make or engrave nameplates.
USE OF THE MICROSCOPE
The set screws which push against the "L" shaped bar of the focusing device have to be tightened slightly: just enough to stop the focusing carriage from dropping by itself. To prevent squeaking of the pinion shaft, put a few drops of oil into the holes through which it passes.
Place the proximal binoculars symmetrically over the slots. If they are not parallel to the "U" shaped bar, you will tend to see images doubled in the vertical direction. This can also be used to compensate for any slight errors in the alignment of the prisms.
The microscope is quite tall with the binoculars in the vertical position. If you place the microscope on a normal table, you will have to stretch your neck or make your observations standing up. Unfortunately, it can be tiring to maintain this posture for a long time. A possible solution is to place the microscope on a low table, a small bench or a stool. With a suitable table, you greatly increase your comfort during prolonged observations.
When you are outdoors, you can use direct sunlight. But if you want to see fine details, you have to diffuse this light by means of a suitable translucent or white reflecting screen. At home, you can use the microscope with a table lamp, but you will obtain better results with a powerful directional light. With good illumination, you will see a fine play of colors and shades that enhances the relief and the colors of the specimens. For this, you can use a 20-Watt halogen lamp; choose a model that you can easily orient. Unfortunately, these lamps produce heat, which is fatal to many insects, since these creatures are accustomed to living in a cool and moist environment. Under a strong light rich in infrared, live specimens risk dying of desiccation after prolonged exposure. In these cases, use this type of light for only short periods and then release the specimen. You can also filter the light with a heat-absorbing filter, which lowers the transmission of the infrared and gives a cooler light. You can find such filters in slide projectors or buy them in a shop.
The stereo microscope is a research instrument. When using the microscope, you may need to manipulate the samples you observe. Your use of the microscope can be made easier if you have some of the following accessories:
- a few petri dishes to contain liquids or insects for viewing
- a pair of tweezers with thin tip
- a black card on which to put samples and displace them easily
- plastic jars or vials with screw cap to collect samples of water from ponds
- a glass Pasteur pipette
- transparent boxes to collect insects
- a box to collect plants, lichens and mushrooms without crushing them
- a bag to collect humus
- a heat absorbing glass for the spot light
- a camera adapter for taking photographs through the microscope
- a daylight filter (to take pictures under artificial light)
- screwdriver and spanners to adjust the microscope
Great! You have finished your instrument. Finally you can use it and enjoy the results of your hard work. The images produced by your microscope will be of excellent quality, comparable to those produced by instruments available on the market priced at a thousand dollars or more.
As for the choice of objects to observe, there is too much to say for this document. I have found that insects, flowers, and minerals are inexhaustible sources of amazement. I have noticed that the microscope reveals things that are often completely unexpected. Because of this, it is a good idea to invite some friends or family members to share your adventure. It is often quite astounding to see the way in which nature has adapted to resolve certain problems. Consider for example a little yellow spider on a flower. Like all arachnids, it does not have compound eyes like insects. Each of its eight eyes is made up of a hemispherical lens. But these eyes do not rotate like ours. Then, how does a spider direct its sight? If you observe closely behind the two central eyes, which are the most important ones, you may see two dark colored shapes moving from side to side. These are the retinas!
You will rarely find something that is not worth viewing. One of your fingers, especially in summer, may show little beads of sweat, which twinkle under the light and then dry up as the water evaporates. A rose bud infested with aphids is a spectacular sight, showing winged individuals, adults, pupae, females giving birth, and molting. If you are patient in your observations, you can also watch a mosquito hatch from the pupa stage. During winter, a snowfall provides ample opportunities to admire the wonderful structure of snow crystals.
Take some frog eggs, for example. If you are lucky enough to collect them just after they are laid, you can view the first phases of the egg cell division (Fig. 19). In fact, the fertilized cell divides first into two parts, then four, then eight, and so on until the cells become so small that you cannot see them even with the stereoscopic microscope. In the blood vessels of tadpoles you can see the movement of the blood cells. The passage of the blood through the branchiae of a newt is really quite spectacular.
Figure 19. Cell division in a frog's egg.
The study of fresh water aquatic life can be fascinating, especially when viewed under the stereoscopic microscope. In pond water, you can see a variety of insect larvae, little crustaceans, small colonies such as Volvox and Vorticella, protists, etc. Some protists are large enough to be observed even with low magnification. The shapes of rotifers, paramecia, and diatoms, and the way they swim, are fascinating. Foraminifers are microscopic shelled creatures that live in fresh water environments such as ponds, lakes and rivers, as well as in brackish and marine waters. Foraminifers have also been around for a long time, and fossilized foraminifers are even used by paleontologists to determine the relative age of sedimentary rock formations and the environmental conditions of the distant past. In figure 20, you can see some fossil foraminifera collected from a deposit of Pliocene clay in Val di Zena, near Pianoro (Bologna), Italy.
Try putting a snail on a sheet of glass and observe it. When it opens its pulmonary orifice to breathe, you can also see some of its internal organs. An ant is a very fine specimen to view, but it runs around tirelessly and is not easy to follow with the microscope. To get the ant to stay still, place a drop of water sweetened with honey in a petri dish. Gently capture the ant and put it in the petri dish. As soon as it finds the honey water, it should stop to drink, which will allow you to examine it. A caterpillar nibbling on a leaf is a lot of fun to observe.
Figure 20: Foraminifera as seen under the stereoscopic microscope.
Once, when I was observing a caterpillar, I noticed some black grains on the table. After a while, these grains increased in number. I did not understand what was happening until, at a lower magnification, I saw the caterpillar raise the posterior part of its body to shoot small pill-shaped excrement. In a pond, you can find many life forms to observe with your microscope. If you go into a wooded area, pick up a few soil samples: you will be able to see many small insects, most of them of primitive species. A mushroom quickly becomes the prey of several parasites. Even when it is fresh, among its gills you can find numerous small insects feeding on the spores. What about the wonderful structure of butterfly wings, with all their little colored scales? For a mineral collection, you need only obtain small mineral samples. When viewed under the microscope, these samples will show many perfect crystals of all different shapes and colors, which often intersect each other.
Let us mention briefly another aspect of this project, which would be the object of a book in itself. With this microscope, you can also take pictures. Its vertical layout makes it ideal for mounting a camera. You can use the camera either with or without its objective. In both cases, adjust focus with the microscope. If the field you see is small and circular, adjust the vertical position of the camera so that the pupil of the eyepiece is in the same plane as the camera's diaphragm. When you use the camera without an objective, you can obtain a larger magnification simply by moving the camera away from the eyepiece. In both cases, you need to make an adapter to center your camera on the eyepiece of the microscope and to prevent parasitic light from entering the camera. If your adapter is linked to a bellows, you can easily adjust the magnification. To determine the right exposure, your camera should have a TTL exposure meter; otherwise you will need to calibrate your exposures by doing tests. To do time-lapse photography you will have to use a flash unit. You can also link a television camera to the microscope and record to videotape. The methods to set up the camera are the same as for a regular camera. You can record interesting events on videotape, or simply connect the camera to a video monitor or television to show your family and friends what is happening under the microscope.
You should keep the microscope in a wooden case or under a dust cover to protect it from dust. If you see rust on any steel pieces, lightly rub the surfaces with an oily cloth. From time to time you may need to tighten some of the screws. If you have to clean optical surfaces, use a clean cotton cloth or optical cleaning paper. Before doing this, remove the dust with a brush, because grains of sand can scratch the surfaces of the lenses. In any case, do this operation sparingly. In fact, what affects microscopes is not so much dust particles as films of material which accumulate on the lenses after prolonged exposure to the ambient air. To prevent this, avoid smoking near the microscope.
Until now, the construction of a stereoscopic microscope was beyond the reach of the amateur naturalist. Through an innovative optical approach, the mechanical structure of the microscope is simplified, making its fabrication accessible to anyone without the need for specialized machining tools. The construction of this microscope is beneficial from an educational point of view, since it provides the young scientist an opportunity to learn some of the basic principles of optics; it also requires one to work out the mechanical details of the project, to obtain the necessary components, and to assemble the microscope. A handmade instrument is something one can take pride in having built.
For parents, this project provides an opportunity to share in the learning experience with their children. This microscope can be a means to encourage the discovery of the natural sciences, including botany, ecology, entomology, geology, mineralogy and paleontology. While this instrument can be a precious tool for the amateur naturalist, what makes the microscope such a fantastic instrument is your curiosity and ability to be amazed by the microscopic world around us. Without this curiosity, your instrument will be destined to collect dust.
| 3.721465 |
The previous two chapters discussed how Java's architecture deals with the two major challenges presented to software developers by a networked computing environment. Platform independence deals with the challenge that many different kinds of computers and devices are usually connected to the same network. The security model deals with the challenge that networks represent a convenient way to transmit viruses and other forms of malicious or buggy code. This chapter describes not how Java's architecture deals with a challenge, but how it seizes an opportunity made possible by the network.
One of the fundamental reasons Java is a useful tool for networked software environments is that Java's architecture enables the network mobility of software. In fact, it was primarily this aspect of Java technology that was considered by many in the software industry to represent a paradigm shift. This chapter examines the nature of this new paradigm of network-mobile software, and how Java's architecture makes it possible.
Prior to the advent of the personal computer, the dominant computing model was the large mainframe computer serving multiple users. By time-sharing, a mainframe computer divided its attention among several users, who logged onto the mainframe at dumb terminals. Software applications were stored on disks attached to the mainframe computer, allowing multiple users to share the same applications while they shared the same CPU. A drawback of this model was that if one user ran a CPU-intensive job, all other users would experience degraded performance.
The appearance of the microprocessor led to the proliferation of the personal computer. This change in the hardware status quo changed the software paradigm as well. Rather than sharing software applications stored at a mainframe computer, individual users had individual copies of software applications stored at each personal computer. Because each user ran software on a dedicated CPU, this new model of computing addressed the difficulties of dividing CPU-time among many users attempting to share one mainframe CPU.
Initially, personal computers operated as unconnected islands of computing. The dominant software model was of isolated executables running on isolated personal computers. But soon, personal computers began to be connected to networks. Because a personal computer gave its user undivided attention, it addressed the CPU-time sharing difficulties of mainframes. But unless personal computers were connected to a network, they couldn't replicate the mainframe's ability to let multiple users view and manipulate a central repository of data.
As personal computers connected to networks became the norm, another software model began to increase in importance: client/server. The client/server model divided work between two processes running on two different computers: a client process ran on the end-user's personal computer, and a server process ran on some other computer hooked to the same network. The client and server processes communicated with one another by sending data back and forth across the network. The server process often simply accepted data query commands from clients across the network, retrieved the requested data from a central database, and sent the retrieved data back across the network to the client. Upon receiving the data, the client processed, displayed, and allowed the user to manipulate the data. This model allowed users of personal computers to view and manipulate data stored at a central repository, while not forcing them to share a central CPU for all of the processing of that data. Users did share the CPU running the server process, but to the extent that data processing was performed by the clients, the burden on the central CPU hosting the server process was lessened.
The client/server architecture was soon extended to include more than two processes. The original client/server model began to be called 2-tier client/server, to indicate two processes: one client and one server. More elaborate architectures were called 3-tier, to indicate three processes, 4-tier, to indicate four processes, or N-tier, to indicate people were getting tired of counting processes. Eventually, as more processes became involved, the distinction between client and server blurred, and people just started using the term distributed processing to encompass all of these schemes.
The distributed processing model leveraged the network and the proliferation of processors by dividing processing work loads among many processors while allowing those processors to share data. Although this model had many advantages over the mainframe model, there was one notable disadvantage: distributed processing systems were more difficult to administer than mainframe systems. On mainframe systems, software applications were stored on a disk attached to the mainframe. Even though an application could serve many users, it only needed to be installed and maintained in one place. When an application was upgraded, all users got the new version the next time they logged on and started the application. By contrast, the software executables for different components of a distributed processing system were usually stored on many different disks. In a client/server architecture, for example, each computer that hosted a client process usually had its own copy of the client software stored on its local disk. As a result, a system administrator had to install and maintain the various components of a distributed software system in many different places. When a software component was upgraded, the system administrator had to physically upgrade each copy of the component on each computer that hosted it. As a result, system administration was more difficult for the distributed processing model than for the mainframe model.
The arrival of Java, with an architecture that enabled the network-mobility of software, heralded yet another model for computing. Building on the prevailing distributed processing model, the new model added the automatic delivery of software across networks to computers that ran the software. This addressed the difficulties involved in system administration of distributed processing systems. For example, in a client/server system, client software could be stored at one central computer attached to the network. Whenever an end-user needed to use the client software, the binary executable would be sent from the central computer across the network to the end-user's computer, where the software would run.
So network-mobility of software represented another step in the evolution of the computing model. In particular, it addressed the difficulty of administering a distributed processing system. It simplified the job of distributing any software that was to be used on more than one CPU. It allowed data to be delivered together with the software that knows how to manipulate or display the data. Because code was sent along with data, end-users would always have the most up-to-date version of the code. Thus, because of network-mobility, software can be administered from a central computer, reminiscent of the mainframe model, but processing can still be distributed among many CPUs.
The shift away from the mainframe model towards the distributed processing model was a consequence of the personal computer revolution, which was made possible by the rapidly increasing capabilities and decreasing costs of processors. Similarly, lurking underneath the latest software paradigm shift towards distributed processing with network-mobile code is another hardware trend--the increasing capabilities and decreasing costs of network bandwidth. As bandwidth, the amount of information that can be carried by a network, increases, it becomes practical to send new kinds of information across a network; and with each new kind of information a network carries, the network takes on a new character. Thus, as bandwidth grows, simple text sent across a network can become enhanced with graphics, and the network begins to take on an appearance reminiscent of newspapers or magazines. Once bandwidth expands enough to support live streams of audio data, the network begins to act like a radio, a CD-player, or a telephone. With still more bandwidth, video becomes possible, resulting in a network that competes with TV and VCRs for the attention of couch potatoes. But there is still one other kind of bandwidth-hungry content that becomes increasingly practical as bandwidth improves: computer software. Because networks by definition interconnect processors, one processor can, given enough bandwidth, send code across a network for another processor to execute. Once networks begin to move software as well as data, the network begins to look like a computer in its own right.
As software begins to travel across networks, not only does the network begin to take on a new character, but so does the software itself. Network-mobile code makes it easier to ensure that an end-user has the necessary software to view or manipulate some data sent across the network, because the software can be sent along with the data. In the old model, software executables from a local disk were invoked to view data that came across the network, thus the software application was usually a distinct entity, easily discernible from the data. In the new model, because software and data are both sent across the network, the distinction between software and data is not as stark--software and data blur together to become "content."
As the nature of software evolves, the end-user's relationship to software evolves as well. Prior to network-mobility, an end-user had to think in terms of software applications and version numbers. Software was generally distributed on media such as tapes, floppy disks, or CD-ROMs. To use an application, an end-user had to get the installation media, physically insert them into a drive or reader attached to the computer, and run an installation program that copied files from the installation media to the computer's hard disk. Moreover, the end-user often did this process multiple times for each application, because software applications were routinely replaced by new versions that fixed old bugs and added new features (and usually added new bugs too). When a new version was released, end-users had to decide whether or not to upgrade. If an end-user decided to upgrade, the installation process had to be repeated. Thus, end-users had to think in terms of software applications and version numbers, and take deliberate action to keep their software applications up-to-date.
In the new model, end-users think less in terms of software applications with discrete versions, and more in terms of self-evolving "content services." Whereas installing a traditional software application or an upgrade was a deliberate act on the part of the end-user, network-mobility of software enables installation and upgrading that is more automatic. Network-delivered software need not have discrete version numbers that are known to the end-user. The end-user need not decide whether to upgrade, and need not take any special action to upgrade. Network-delivered software can just evolve of its own accord. Instead of buying discrete versions of a software application, end-users can subscribe to a content service--software that is delivered across a network along with relevant data--and watch as both the software and data evolve automatically over time.
Once you move away from delivering software in discrete versions towards delivering software as self-evolving streams of interactive content, your end-user loses some control. In the old model, if a new version appeared that had serious bugs, an end-user could simply opt not to upgrade. But in the new model, an end-user can't necessarily wait until the bugs are worked out of a new version before upgrading to the new version, because the end-user may have no control over the upgrading process.
For certain kinds of products, especially those that are large and full-featured, end-users may prefer to retain control over whether and when to upgrade. Consequently, in some situations software vendors may publish discrete versions of a content service over the network. At the very least, a vendor can publish two branches of a service: a beta branch and a released branch. End-users that want to stay on the bleeding edge can subscribe to the beta service, and the rest can subscribe to the released service that, although it may not have all the newest features, is likely more robust.
Yet for many content services, especially simple ones, most end-users won't want to have to worry about versions, because worrying about versions makes software harder to use. The end-user has to have knowledge about the differences between versions, make decisions about when and if to upgrade, and take deliberate action to cause an upgrade. Content services that are not chopped up into discrete versions are easier to use, because they evolve automatically. Such a content service, because the end-user doesn't have to maintain it but can just simply use it, takes on the feel of a "software appliance."
Many self-evolving content services will share two fundamental characteristics with common household appliances: a focused functionality and a simple user-interface. Consider the toaster. A toaster's functionality is focused exclusively on the job of preparing toast, and it has a simple user-interface. When you walk up to a toaster, you don't expect to have to read a manual. You expect to put the bread in at the top and push down a handle until it clicks. You expect to be able to peer in and see orange wires glowing, and after a moment, to hear that satisfying pop and see your bread transformed into toast. If the result is too light or too dark, you expect to be able to slide a knob to indicate to the toaster that the next time, you want your toast a bit darker or lighter. That's the extent of the functionality and user-interface of a toaster. Likewise, the functionality of many content services will be as focused and the user-interface will be as simple. If you want to order a movie through the network, for example, you don't want to worry whether you have the correct version of movie-ordering software. You don't want to have to install it. You just want to switch on the movie-ordering content service, and through a simple user-interface, order your movie. Then you can sit back and enjoy your network-delivered movie as you eat your toast.
A good example of a content service is a World Wide Web page. If you look at an HTML file, it looks like a source file for some kind of program. But if you see the browser as the program, the HTML file looks more like data. Thus, the distinction between code and data is blurred. Also, people who browse the World Wide Web expect web pages to evolve over time, without any deliberate action on their part. They don't expect to see discrete version numbers for web pages. They don't expect to have to do anything to upgrade to the latest version of a page besides simply revisiting the page in their browser.
In the coming years, many of today's media may to some extent be assimilated by the network and transformed into content services. (As with the Borg from Star Trek, resistance is futile.) Broadcast radio, broadcast and cable TV, telephones, answering machines, faxes, video rental stores, newspapers, magazines, books, computer software--all of these will be affected by the proliferation of networks. But just as TV didn't supplant radio entirely, network-delivered content services will not entirely supplant existing media. Instead, content services will likely take over some aspects of existing media, leaving the existing media to adjust accordingly, and create some new forms that didn't previously exist.
In the computer software domain, the content service model will not completely replace the old models either. Instead, it will likely take over certain aspects of the old models that fit better in the new model, add new forms that didn't exist before, and leave the old models to adjust their focus slightly in light of the newcomer.
This book is an example of how the network can affect existing media. The book was not entirely replaced by a content service counterpart, but instead of including resource pointers (sources where you can find further information on topics presented in the book) as part of the book, they were placed on a web page. Because resource pointers change so often, it made sense to let the network assimilate that part of the book. Thus, the resource pointers portion of the book has become a content service.
The crux of the new software paradigm, therefore, is that software begins to act more like appliances. End-users no longer have to worry about installation, version numbers, or upgrading. As code is sent along with data across the network, software delivery and updating become automatic. In this way, simply by making code mobile, Java unleashes a whole new way to think about software development, delivery, and use.
Java's architectural support for network-mobility begins with its support for platform independence and security. Although they are not strictly required for network-mobility, platform independence and security help make network-mobility practical. Platform independence makes it easier to deliver a program across the network because you don't have to maintain a separate version of the program for different platforms, and you don't have to figure out how to get the right version to each computer. One version of a program can serve all computers. Java's security features help promote network-mobility because they give end-users confidence to download class files from untrusted sources. In practice, therefore, Java's architectural support for platform independence and security facilitate the network-mobility of its class files.
Beyond platform independence and security, Java's architectural support for network-mobility is focused on managing the time it takes to move software across a network. If you store a program on a server and download it across a network when you need it, it will likely take longer for your program to start than if you had started the same program from a local disk. Thus, one of the primary issues of network-mobile software is the time it takes to send a program across a network. Java's architecture addresses this issue by rejecting the traditional monolithic binary executable in favor of small binary pieces: Java class files. Class files can travel across networks independently, and because Java programs are dynamically linked and dynamically extensible, an end-user needn't wait until all of a program's class files are downloaded before the program starts. The program starts when the first class file arrives. Class files themselves are designed to be compact, so that they fly more quickly across networks. Therefore, the main way Java's architecture facilitates network-mobility directly is by breaking up the monolithic binary executable into compact class files, which can be loaded as needed.
The execution of a Java application begins at a main() method of some class, and other classes are loaded and dynamically linked as they are needed by the application. If a class is never actually used during one session, that class won't ever be loaded during that session. For example, if you are using a word processor that has a spelling checker, but during one session you never invoke the spelling checker, the class files for the spelling checker will not be loaded during that session.
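This lazy behavior is easy to observe, because a class's static initializer runs only on the class's first active use. A minimal sketch (SpellChecker is a hypothetical stand-in for the spelling checker classes, not part of any real word processor):

```java
// Demonstrates dynamic linking: the JVM initializes SpellChecker only
// when main() first actively uses it, not when the application starts.
public class LazyLoadingDemo {

    static class SpellChecker {
        // Runs when the class is initialized, i.e. on first active use.
        static {
            System.out.println("SpellChecker class initialized");
        }
        static boolean check(String word) {
            return !word.isEmpty();
        }
    }

    public static void main(String[] args) {
        System.out.println("word processor started");
        // SpellChecker has not been initialized yet; this call triggers it.
        boolean ok = SpellChecker.check("hello");
        System.out.println("spelling ok: " + ok);
    }
}
```

Running the program prints "word processor started" before "SpellChecker class initialized", showing that the checker's code was not touched until it was needed; if check() were never called, the class would never be initialized at all.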
In addition to dynamic linking, Java's architecture also enables dynamic extension. Dynamic extension is another way the loading of class files (and the downloading of them across a network) can be delayed in a Java application. Using user-defined class loaders or the forName() method of class Class, a Java program can load extra classes at run-time, which then become a part of the running program. Therefore, dynamic linking and dynamic extension give a Java programmer some flexibility in designing when class files for a program are loaded, and as a result, how much time an end-user must spend waiting for class files to come across the network.
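A minimal sketch of dynamic extension through Class.forName(). For the example to be self-contained, the class name here is a JDK class; in a real program the name might come from a configuration file or arrive across the network:

```java
import java.util.List;

// Loads a class by name at run time and instantiates it reflectively.
// The loaded class becomes part of the running program even though it
// was never named in the source at compile time.
public class DynamicExtensionDemo {
    public static void main(String[] args) throws Exception {
        String className = "java.util.ArrayList"; // could come from config or network
        Class<?> cls = Class.forName(className);
        Object instance = cls.getDeclaredConstructor().newInstance();
        System.out.println("loaded: " + cls.getName());
        System.out.println("is a List: " + (instance instanceof List));
    }
}
```

A user-defined ClassLoader works the same way but lets the program fetch the class bytes itself, for example from a network connection.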
Besides dynamic linking and dynamic extension, another way Java's architecture directly supports network mobility is through the class file format itself. To reduce the time it takes to send them across networks, class files are designed to be compact. In particular, the bytecode streams they contain are designed to be compact. They are called "bytecodes" because each instruction occupies only one byte. With only two exceptions, all opcodes and their ensuing operands are byte aligned to make the bytecode streams smaller. The two exceptions are opcodes that may have one to three bytes of padding after the opcode and before the start of the operands, so that the operands are aligned on word boundaries.
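You can peek at this compact format directly. Every class file begins with the four-byte magic number 0xCAFEBABE, followed by two-byte minor and major version numbers; the sketch below reads its own class file's header (assuming the class file is reachable as a resource, which is the normal case):

```java
import java.io.DataInputStream;
import java.io.InputStream;

// Reads the first bytes of this class's own class file, showing the
// fixed header that precedes the constant pool and bytecode streams.
public class ClassFileHeaderDemo {
    public static void main(String[] args) throws Exception {
        try (InputStream raw = ClassFileHeaderDemo.class
                 .getResourceAsStream("ClassFileHeaderDemo.class");
             DataInputStream in = new DataInputStream(raw)) {
            int magic = in.readInt();           // u4 magic
            int minor = in.readUnsignedShort(); // u2 minor_version
            int major = in.readUnsignedShort(); // u2 major_version
            System.out.printf("magic: 0x%X%n", magic);
            System.out.println("magic ok: " + (magic == 0xCAFEBABE));
            System.out.println("class file version: " + major + "." + minor);
        }
    }
}
```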
One of the implications of the compactness goal for class files is that Java compilers are not likely to do much local optimization. Because of binary compatibility rules, Java compilers can't perform global optimizations such as inlining the invocation of another class's method. (Inlining means replacing the method invocation with the code performed by the method, which saves the time it takes to invoke and return from the method as the code executes.) Binary compatibility requires that a method's implementation can be changed without breaking compatibility with pre-existing class files that depend on the method. Inlining could be performed in some circumstances on methods within a single class, but in general that kind of optimization is not done by Java compilers, partly because it goes against the grain of class file compactness. Optimizations are often a tradeoff between execution speed and code size. Therefore, Java compilers generally leave optimization up to the Java virtual machine, which can optimize code as it loads classes for interpreting, just-in-time compiling, or adaptive optimization.
Beyond the architectural features of dynamic linking, dynamic extension and class file compactness, there are some strategies that, although they are really not necessarily part of the architecture, help manage the time it takes to move class files across a network. Because HTTP protocols require that each class file of a Java applet be requested individually, it turns out that often a large percentage of applet download time is due not to the actual transmission of class files across the network, but to the network handshaking of each class file request. The overhead for a file request is multiplied by the number of class files being requested. To address this problem, Java 1.1 included support for JAR (Java ARchive) files. JAR files enable many class files to be sent in one network transaction, which greatly reduces the overhead time required to move class files across a network compared with sending one class file at a time. Moreover, the data inside a JAR file can be compressed, which results in an even shorter download time. So sometimes it pays to send software across a network in one big chunk. If a set of class files is definitely needed by a program before that program can start, those class files can be more speedily transmitted if they are sent together in a JAR file.
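A sketch of building such an archive programmatically with the java.util.jar API (the entry names and bytes here are placeholders, not real compiled classes):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;
import java.util.zip.ZipEntry;

// Bundles several entries into one compressed JAR so they can cross the
// network in a single transaction instead of one request per class file.
public class JarBundleDemo {
    public static void main(String[] args) throws Exception {
        File jar = File.createTempFile("applet", ".jar");
        Manifest manifest = new Manifest();
        manifest.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");

        try (JarOutputStream out =
                 new JarOutputStream(new FileOutputStream(jar), manifest)) {
            // Entries are deflate-compressed by default.
            for (String name : new String[] {"Greeter.class", "Greeter$Helper.class"}) {
                out.putNextEntry(new ZipEntry(name));
                out.write(("placeholder bytecode for " + name).getBytes());
                out.closeEntry();
            }
        }

        // Read the archive back to confirm everything arrived in one file.
        try (JarFile jf = new JarFile(jar)) {
            jf.stream().forEach(e -> System.out.println(e.getName()));
        }
        jar.deleteOnExit();
    }
}
```

A real applet deployment would instead run the jar tool over the compiled class files, but the effect is the same: one file, one network transaction.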
One other strategy to minimize an end-user's wait time is to not download class files on-demand. Through various techniques, such as the subscription model used by Marimba Castanet, class files can be downloaded before they are needed, resulting in a program that starts up faster. You can obtain more information about such approaches from the resource page for this chapter.
Therefore, other than platform independence and security, which help make network-mobility practical, the main focus of Java's architectural support for network-mobility is managing the time it takes to send class files across a network. Dynamic linking and dynamic extension allow Java programs to be designed in small functional units that are downloaded as needed by the end-user. Class file compactness helps reduce the time it takes to move a Java program across the network. The JAR file enables compression and the sending of multiple class files across the network in a single network file-transfer transaction.
Java is a network-oriented technology that first appeared at a time when the network was looking increasingly like the next revolution in computing. The reason Java was adopted so rapidly and so widely, however, was not simply because it was a timely technology, but because it had timely marketing. Java was not the only network-oriented technology being developed in the early to mid 1990s. And although it was a good technology, it wasn't necessarily the best technology--but it probably had the best marketing. Java was the one technology to hit a slim market window in early 1995, resulting in such a strong response that many companies developing similar technologies canceled their projects. Companies that carried on with their technologies, such as AT&T did with a network-oriented technology named Inferno, saw Java steal much of their potential thunder.
There were several important factors in how Java was initially unleashed on the world that contributed to its successful marketing. First, it had a cool name--one that could be appreciated by programmers and non-programmers alike. Second, it was, for all practical purposes, free--always a strong selling point among prospective buyers. The most critical factor contributing to the successful marketing of Java, however, was that Sun's engineers hooked Java technology to the World Wide Web at the precise moment Netscape was looking to transform their web browser from a graphical hypertext viewer to a full-fledged computing platform. As the World Wide Web swept through the software industry (and the global consciousness) like an ever-increasing tidal wave, Java rode with it. Therefore, in a sense Java became a success because Java "surfed the web." It caught the wave at just the right time and kept riding it as one by one, its potential competitors dropped uneventfully into the cold, dark sea. The way the engineers at Sun hooked Java technology to the World Wide Web--and therefore, the key way Java was successfully marketed--was by creating a special flavor of Java program that ran inside a web browser: the Java applet.
The Java applet showed off all of Java's network-oriented features: platform independence, network-mobility, and security. Platform independence was one of the main tenets of the World Wide Web, and Java applets fit right in. Java applets can run on any platform so long as there is a Java-capable browser for that platform. Java applets also demonstrated Java's security capabilities, because they ran inside a strict sandbox. But most significantly, Java applets demonstrated the promise of network-mobility. As shown in Figure 4-1, Java applets can be maintained on one server, from which they can travel across a network to many different kinds of computers. To update an applet, you only need to update the server. Users will automatically get the updated version the next time they use the applet. Thus, maintenance is localized, but processing is distributed.
Java-capable browsers fire off a Java application that hosts the applets the browser displays. To display a web page, a web browser requests an HTML file from an HTTP server. If the HTML file includes an applet, the browser will see an HTML tag such as this:
<applet CODE="HeapOfFish.class" CODEBASE="gcsupport/classes" WIDTH=525 HEIGHT=360></applet>
This "applet" tag provides enough information to enable the browser to display the applet. The CODE attribute indicates the name of the applet's starting class file, in this case, HeapOfFish.class. The CODEBASE attribute gives the location of the applet's class files relative to the base URL of the web page. The WIDTH and HEIGHT attributes indicate the size in pixels of the applet's panel, the visible portion of the applet that is displayed as part of the web page.
When a browser encounters a web page that includes an applet tag, it passes information from the tag to the running Java application. The Java application either creates a new user-defined class loader object, or re-uses an existing one, to download the starting class file for the applet. It then initializes the applet by invoking first the init() and then the start() method of the applet's starting class. The other class files for the applet are downloaded on an as-needed basis, by the normal process of dynamic linking. For example, when a new class is first used by the applet's starting class, the symbolic reference to the new class must be resolved. During resolution, if the class has not already been loaded, the Java virtual machine will ask the same user-defined class loader that loaded the applet's starting class to load the new class. If the user-defined class loader is unable to load the class from the local trusted repository through the bootstrap class loader, the user-defined class loader will attempt to download the class file across the network from the same location it retrieved the applet's starting class. Once initialization of the applet is complete, the applet appears as part of the web page inside the browser.
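The delegation behavior just described can be sketched with a minimal user-defined class loader. The class below is hypothetical; it only records which classes it is asked for before delegating to its parent chain, with the network-download step left as a comment:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal user-defined class loader that records each class it is
// asked for before delegating up the parent chain (ultimately to the
// bootstrap loader). A real applet class loader would download the
// class file across the network when delegation fails; that step is
// only sketched in a comment here.
public class LoggingLoader extends ClassLoader {
    final List<String> requested = new ArrayList<>();

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        requested.add(name);
        // First try the local, trusted repository via the parent chain.
        // A network class loader would catch ClassNotFoundException here
        // and fetch name.replace('.', '/') + ".class" from its codebase.
        return super.loadClass(name, resolve);
    }

    public static void main(String[] args) throws Exception {
        LoggingLoader loader = new LoggingLoader();
        Class<?> c = loader.loadClass("java.lang.String");
        // Core classes resolve through the trusted parent loaders, so
        // every loader in the JVM sees the one java.lang.String.
        System.out.println(c == String.class);
        System.out.println(loader.requested);
    }
}
```

Because core classes always resolve through the trusted parent loaders, untrusted network code cannot substitute its own version of a class like java.lang.String.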
Although the network-mobility of code made possible by Java's architecture and demonstrated by Java applets represented an important step in the evolution of computing models, Java's architecture held one other promise: the network mobility of objects. An object can move across a network as a combination of code, which defines the object's class, plus data that gives a snapshot of the object's state. Where network mobility of code can help simplify the work of systems administrators, network mobility of objects can help simplify the work of software developers designing and deploying distributed systems. Through object serialization and Remote Method Invocation (RMI), the Java API supplies a distributed object model that extends Java's local object model beyond the boundaries of the Java virtual machine. The distributed object model enables objects in one virtual machine to hold references to objects in other virtual machines, to invoke methods on those remote objects, and to exchange objects between virtual machines as parameters, return values, and exceptions of those method invocations. These capabilities, which are made practical by Java's underlying network-oriented architecture, can simplify the task of designing a distributed system because they in effect bring object-oriented programming to the network.
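A minimal sketch of the serialization half of this model, using the standard java.io API and a made-up Point class, shows how an object's state becomes a byte stream that could cross a network and be rebuilt on the other side:

```java
import java.io.*;

// Object serialization turns an object's state into a byte stream;
// ObjectInputStream rebuilds an equivalent object from that stream.
// The Point class here is an invented example.
public class SerializationSketch {
    static class Point implements Serializable {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(o);  // snapshot of state, class name included
        }
        return bytes.toByteArray();
    }

    static Object fromBytes(byte[] data)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Point original = new Point(3, 4);
        Point copy = (Point) fromBytes(toBytes(original));
        System.out.println(copy.x + "," + copy.y);  // 3,4
    }
}
```

RMI builds on exactly this mechanism: parameters, return values, and exceptions are serialized on one virtual machine and deserialized on another, while the receiving side loads the class (the code half of the object) if it doesn't already have it.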
One technology that takes full advantage of the network mobility of objects made possible by Java's underlying network-friendly architecture, object serialization, and RMI, is Sun's Jini. Jini is a set of protocols and APIs that support the building and deployment of distributed systems, targeted at the emerging proliferation of diskless embedded devices connected to networks. One particular piece of the Jini architecture, the service object, provides a good illustration of the way in which network-mobility of objects can be useful.
A Jini system is centered on a lookup service in which services register themselves by sending, among other objects, a service object. The service object represents the service to clients. Clients who want to access the service retrieve a copy of the service object from the lookup service and then interact with the service by invoking methods on the service object. The service object is responsible for implementing the service, either locally or by talking across the network to a software process or piece of hardware that implements the service.
To the Jini way of thinking, the network is made up of "services" that can be used by clients or other services. A service can be anything that sits on the network ready to perform a useful function. Hardware devices, software servers, communications channels -- even human users themselves -- can be services. A Jini-enabled disk drive, for example, could offer a "storage" service. A Jini-enabled printer could offer a "printing" service.
To perform a task, a client enlists the help of services. For example, a client program might upload pictures from the image storage service in a digital camera, download the pictures to a persistent storage service offered by a disk drive, and send a page of thumbnail-sized versions of the images to the printing service of a color printer. In this example, the client program builds a distributed system consisting of itself, the image storage service, the persistent storage service, and the color-printing service. The client and services of this distributed system work together to perform the task: to offload and store images from a digital camera and print out a page of thumbnails.
Jini provides a runtime infrastructure that enables service providers to offer their services to clients, and enables clients to locate and access services. The runtime infrastructure resides on the network in three places: in lookup services that sit on the network; in the service providers (such as Jini-enabled devices); and in clients. Lookup services are the central organizing mechanism for Jini-based systems. When new services become available on the network, they register themselves with a lookup service. When clients wish to locate a service to assist with some task, they consult a lookup service.
The runtime infrastructure uses one network-level protocol, called discovery, and two object-level protocols, called join and lookup. Discovery enables clients and services to locate lookup services. Join enables a service to register itself in a lookup service. Lookup enables a client to query a lookup service for services that can help the client accomplish its goals.
The discovery process begins automatically when a service provider, such as a Jini-enabled disk drive that offers a storage service, is plugged into the network. When a service provider is connected to the network, it broadcasts a presence announcement by dropping a multicast packet onto a well-known port. Included in the presence announcement is an IP address and port number where the service provider can be contacted by a lookup service.
Lookup services monitor the well-known port for presence announcement packets. When a lookup service receives a presence announcement, it opens and inspects the packet. The packet contains information that enables the lookup service to determine whether or not it should contact the sender of the packet. If so, it contacts the sender directly by making a TCP connection to the IP address and port number extracted from the packet. Using RMI, the lookup service sends an object, called a service registrar, across the network to the originator of the packet. The purpose of the service registrar object is to facilitate further communication with the lookup service. By invoking methods on the service registrar object, the sender of the announcement packet can perform join and lookup on the lookup service. In the case of a disk drive, the lookup service would make a TCP connection to the disk drive and would send it a service registrar object, through which the disk drive would then register its storage service via the join process.
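As a toy illustration (the real Jini discovery packet format is more involved than this, and the addresses below are only examples), the payload of such an announcement can be modeled as a port plus a host address that a lookup service extracts before calling the announcer back:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy model of a presence announcement: the payload carries the host
// and port where a lookup service can contact the announcer. This is
// NOT the real Jini discovery packet format -- it only shows the idea
// of announcing a contact address on a well-known multicast port.
public class AnnouncementSketch {
    static byte[] encode(String host, int port) {
        byte[] hostBytes = host.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + hostBytes.length);
        buf.putInt(port);                // where to call back
        buf.putInt(hostBytes.length);    // length prefix for the host
        buf.put(hostBytes);
        return buf.array();
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = encode("192.168.1.20", 4160);
        // The packet would be dropped onto a well-known multicast
        // address; here we only construct it, we do not send it.
        DatagramPacket packet = new DatagramPacket(
                payload, payload.length,
                InetAddress.getByName("224.0.1.84"), 4160);

        // What a lookup service would do on receipt: open the packet
        // and extract the callback address.
        ByteBuffer buf = ByteBuffer.wrap(packet.getData());
        int port = buf.getInt();
        byte[] hostBytes = new byte[buf.getInt()];
        buf.get(hostBytes);
        System.out.println(
            new String(hostBytes, StandardCharsets.UTF_8) + ":" + port);
    }
}
```

The important idea is the direction of the handshake: the announcement carries just enough information for the lookup service to open a direct TCP connection back to the service provider.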
Once a service provider has a service registrar object, the end product of discovery, it is ready to do a join -- to become registered in the lookup service. To do a join, the service provider invokes the register() method on the service registrar object, passing as a parameter an object called a service item, a bundle of objects that describe the service. The register() method sends a copy of the service item up to the lookup service, where the service item is stored. Once this has completed, the service provider has finished the join process: its service has become registered in the lookup service.
The service item is a container for several objects, including the object called the service object, which clients can use to interact with the service. The service item can also include any number of attributes, which can be any kind of object. Some potential attributes are icons, classes that provide GUIs for the service, and objects that give more information about the service.
Service objects usually implement one or more interfaces through which clients interact with the service. For example, a lookup service is a Jini service, and its service object is the service registrar. The register() method invoked by service providers during join is declared in the ServiceRegistrar interface, which all service registrar objects implement. Clients and service providers talk to the lookup service through the service registrar object by invoking methods declared in the ServiceRegistrar interface. Likewise, a disk drive would provide a service object that implemented some well-known storage service interface. Clients would look up and interact with the disk drive through this storage service interface.
Once a service has registered with a lookup service via the join process, that service is available for use by clients that query that lookup service. To build a distributed system of services that will work together to perform some task, a client must locate and enlist the help of the individual services. To find a service, clients query lookup services via a process called lookup.
To perform a lookup, a client invokes the lookup() method on a service registrar object. (A client, like a service provider, gets a service registrar through the process of discovery, described previously.) The client passes as an argument to lookup() a service template, an object that serves as search criteria for the query. The service template can include a reference to an array of Class objects. These indicate to the lookup service the Java type (or types) of the service object desired by the client. The service template can also include a service ID, which uniquely identifies a service, and attributes, which must exactly match the attributes uploaded by the service provider in the service item. The service template can also contain wildcards for any of these fields. A wildcard in the service ID field, for example, will match any service ID. The lookup() method sends the service template to the lookup service, which performs the query and sends back zero to many matching service objects. The client gets a reference to the matching service objects as the return value of the lookup() method.
In the general case, a client looks up a service by Java type, usually an interface. For example, if a client needed to use a printer, it would compose a service template that included a Class object for a well-known interface to printer services. All printer services would implement this well-known interface. The lookup service would return a service object (or objects) that implemented this interface. Attributes can be included in the service template to narrow the number of matches for such a type-based search. The client would use the printer service by invoking on the service object methods declared in the well-known printer service interface.
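The join and lookup interactions can be modeled with a toy, in-memory registry. This is not the net.jini API -- just a sketch of type-based matching against registered service objects, with an invented PrinterService interface:

```java
import java.util.ArrayList;
import java.util.List;

// A toy, in-memory model of join and lookup -- not the real net.jini
// API. Services register service objects; clients query by Java
// interface type and get back every registered object of that type.
public class ToyLookupService {
    private final List<Object> serviceObjects = new ArrayList<>();

    // join: a service provider registers its service object
    public void register(Object serviceObject) {
        serviceObjects.add(serviceObject);
    }

    // lookup: a client asks for services by well-known interface
    public <T> List<T> lookup(Class<T> type) {
        List<T> matches = new ArrayList<>();
        for (Object o : serviceObjects) {
            if (type.isInstance(o)) {
                matches.add(type.cast(o));
            }
        }
        return matches;
    }

    // A hypothetical well-known service interface.
    interface PrinterService {
        String print(String page);
    }

    public static void main(String[] args) {
        ToyLookupService lookup = new ToyLookupService();
        lookup.register((PrinterService) page -> "printed: " + page);
        lookup.register("not a printer");  // won't match the query below
        List<PrinterService> printers =
            lookup.lookup(PrinterService.class);
        System.out.println(printers.size());                     // 1
        System.out.println(printers.get(0).print("thumbnails"));
    }
}
```

The real system adds service IDs, attribute matching, wildcards, and network transport, but the essential contract is the same: the client asks for a type and receives live service objects it can invoke directly.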
In Jini systems, network-mobile objects fly all over the place. When a client or service performs discovery, for example, it receives a service registrar object from a lookup service. When a service registers itself with a lookup service through the join process, it sends to the lookup service a service item object, which itself is a container of many objects, including attributes and the service object. When a client performs lookup, it sends a service template object, a bundle of objects that serves as a search criteria for the lookup query. If the lookup is successful, the client receives either the service object or the entire service item for the service or services that matched the query.
How do all the objects flying across the network between clients, services, and the lookup service actually make distributed programming easier? In short, Jini's use of network-mobile objects (in particular, the network-mobile service object) raises the level of abstraction for distributed systems programming, effectively turning network programming into object-oriented programming.
Jini's architecture brings object-oriented programming to the network by enabling network services to take advantage of one of the fundamentals of object-oriented programming: the separation of interface and implementation. For example, a service object can grant clients access to the service in many ways. The object can actually represent the entire service, which is downloaded to the client during lookup and then executed locally. Alternatively, the service object can serve merely as a proxy to a remote server. When the client invokes methods on the service object, it sends the requests across the network to the server, which does the real work. The local service object and a remote server could also share the work.
One important consequence of Jini's architecture is that the network protocol used to communicate between a proxy service object and a remote server does not need to be known to the client. As illustrated in Figure 4-2, the network protocol is part of the service's implementation. This protocol is a private matter decided upon by the developer of the service. The client can communicate with the service via this private protocol because the service injects its own service object into the client's address space -- the service object moves across the network from service to client. The injected service object could communicate with a back-end server via RMI, CORBA, DCOM, some home-brewed protocol built on top of sockets and streams, or anything else. The client simply doesn't need to care about network protocols, because it can talk to the well-known interface that the service object implements. The service object takes care of any necessary communication on the network.
Different implementations of the same service interface can use completely different implementation approaches and completely different network protocols. A service can use specialized hardware to fulfill client requests, or it can do all its work in software. Different implementations of a service can be tuned for different environments. In addition, the implementation approach taken by a single service provider can evolve over time. The client can be sure it has a service object that understands the current implementation of the service, because the client receives the service object (by way of the lookup service) from the service provider itself. To the client, a service looks like the well-known interface, regardless of how the service is implemented.
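A sketch of this separation, with hypothetical interface and implementations: the client code is identical whether the service object does the work locally or stands in for a proxy that would forward calls to a remote server over some private protocol.

```java
// The client codes only against a well-known interface; the service
// provider chooses the implementation strategy behind it. Both
// implementations below are invented for this example.
public class ServiceObjectSketch {
    interface StorageService {
        String store(String data);
    }

    // Service object that is the entire service, executed locally.
    static class LocalStorage implements StorageService {
        public String store(String data) {
            return "stored locally: " + data;
        }
    }

    // Service object acting as a proxy; a real one would open a socket
    // or use RMI here -- the client neither knows nor cares which.
    static class ProxyStorage implements StorageService {
        public String store(String data) {
            return "forwarded to remote server: " + data;
        }
    }

    static String client(StorageService service) {
        // Identical client code works against either implementation.
        return service.store("image.jpg");
    }

    public static void main(String[] args) {
        System.out.println(client(new LocalStorage()));
        System.out.println(client(new ProxyStorage()));
    }
}
```

Because the service object travels from provider to client, the provider can swap LocalStorage for ProxyStorage (or any other strategy) without the client's code changing at all.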
Thus, Jini attempts to raise the level of abstraction for distributed systems programming, from the network protocol level to the object interface level. In the emerging proliferation of embedded devices connected to networks, many pieces of a distributed system may come from different vendors. Jini makes it unnecessary for vendors of devices to agree on low-level network protocols that allow their devices to interact. Instead, vendors will need to agree on high-level Java interfaces through which their devices can interact. Raising the level of discourse, from the network protocol level to the object interface level, will allow vendors to focus more on high-level concepts and less on low-level details. The higher level of discourse made possible by Jini should facilitate the process by which vendors of similar products come to an agreement on how their services will interact with clients.
In addition, Jini's architecture enables software developers to enjoy the benefits of separation of interface and implementation when they develop distributed systems. One such benefit is that well-defined object interfaces can help software developers work together effectively on large distributed systems projects. Similar to the way in which object interfaces define the contract between the parts of any object-oriented program, object interfaces can also serve to clarify the contract between the various members and teams of a large project who are responsible for individual pieces of the program. Another benefit of the separation of interface and implementation is that programmers can use it to reduce the impact of change by minimizing coupling. The only point of coupling of a well-designed object is its interface; the implementation of an object can change without affecting code in any other object.
Jini brings the object-oriented benefits resulting from raising the level of abstraction and clearly separating interface from implementation to distributed systems programming by taking advantage of Java's support for network-mobile objects. The network-mobility of objects made possible by the combination of Java's underlying architecture, object serialization, and RMI, and demonstrated by the Jini service object, enables Jini to bring the benefits of object-oriented programming to the network.
Network-mobile Java software can come in many forms besides just the two forms, Java applets and Jini service objects, described in the previous sections. Yet although network-mobile Java is not limited to applets and service objects, the framework that supports any other form of network-mobile Java will likely look similar to the framework that supports applets and service objects. Like applets and service objects, for example, other forms of network-mobile Java will execute in the context of a host Java application, and the class files for the network-mobile Java code will be loaded by user-defined class loaders. User-defined class loaders can download class files across a network in custom ways that bootstrap class loaders usually can't. And because network-mobile class files are not always known to be trustworthy, the separate name-spaces provided by user-defined class loaders are needed to protect malicious or buggy code from interfering with code loaded from other sources. Lastly, because network-mobile class files can't always be trusted, there will generally be a security manager or access controller establishing a security policy for the network-mobile code.
The key to understanding Java's architecture, then, is to see enabling the network mobility of code and objects as the design center of Java. Although Java can offer valuable benefits, such as increased programmer productivity and increased program robustness, in situations that don't even remotely involve a network, the main focus of Java's architecture is the network. The Java virtual machine, Java class file, Java API, and Java programming language work in concert to make network mobile software both possible and practical. Through its support for network mobile code and objects, Java helps software developers meet challenges and seize opportunities presented by the ever-progressing network age.
For links to information about other examples of network-mobile Java, see the resources page: