What Are the Duties of a Litigation Paralegal?
by Louis Kroeck
Paralegals perform functions similar to those of a lawyer, but they are not permitted to actually practice law. Litigation paralegals assist lawyers who specialize in court appearances. The general duties of a litigation paralegal include conducting client interviews, performing legal research, organizing documents and assisting with document production for trial, helping with pleadings and court filings, and preparing for trial.
Litigation is the process of taking a case to court. Some lawyers do not engage in trial practice at all; for example, lawyers who specialize in document preparation, estates and corporate law. A litigation paralegal is familiar with the court process, the rules of civil procedure and the nuances of filing documents with the court.
Legal and Factual Research
One of the main job duties for litigation paralegals is legal and factual research. Factual research often includes looking up background information on clients and opposing parties, researching and clarifying the facts of the case and assisting in locating key documents related to the case. Legal research includes researching previous court decisions and statutory law in order to determine how the law applies to the particular facts of the case. Most paralegals use services such as LexisNexis or Westlaw to perform legal research.
Drafting Documents
Although a licensed lawyer must sign any document that is filed before the court, litigation paralegals often draft these documents. Examples of some of the documents that litigation paralegals might draft include complaints, answers, motions before the court and legal response briefs. In addition to drafting these documents, litigation paralegals may be responsible for submitting documents to the court, distributing copies to other parties and indexing documents in the case file.
Discovery
Discovery is the process of requesting documents from the other party and producing the documents the other party requests. Because discovery is a major portion of litigation, litigation paralegals spend a large amount of their time on it. Common discovery tasks for litigation paralegals include drafting discovery requests, drafting discovery responses, speaking with the client to determine what documents are available and creating a system for categorizing and organizing all vital case documents.
Trial Support
Prior to trial litigation, paralegals will assist their employer by helping with depositions and preparing for all pre-trial disclosures. Litigation paralegals also engage in a variety of support functions during trial such as attending trial, preparing trial exhibits and binders, managing trial documents, assisting witnesses, helping with jury selection, interacting with clients and taking notes during trial.
|
The World Remembers Neil Armstrong
Aug 31, 2012
On his first day as a test pilot, his plane lost three engines. He and his commander landed it on one. Or practicing the lunar landing, again, a malfunctioning rocket threatened to end his life and he ejected moments before impact. But it was during the Gemini mission that Gene Kranz, formerly flight director at NASA, saw that Armstrong had the right stuff.
FLATOW: So on Apollo 11, as the Eagle lander descended towards a dangerous lunar landscape, it came as no surprise that a computer malfunction in the final, crucial seconds would find Neil Armstrong manually taking over control of the lander and guiding it to a safer spot with but 15 seconds of fuel left. Then he walked on the moon. Today, a service is being held in Ohio for Neil Armstrong, who passed away last week. After spending decades evading death as an aviator, test pilot and astronaut, it finally caught him at the age of 82. He was a true American hero, celebrated in song, print and ticker tape parade.
JOHN STEWART: (Singing) Black boy in Chicago, playing in the street, not enough to wear, not near enough to eat. And don't you know he saw it on a July afternoon? He saw a man named Armstrong walk upon the moon. The young girl in Calcutta, barely eight years old, and the flies that swarm the market place will see she don't get old. But don't you know she heard it on that July afternoon? She heard a man named Armstrong had walked upon the moon.
FLATOW: And that's it for SCIENCE FRIDAY this week. Transcript provided by NPR, Copyright National Public Radio. |
GMAT Preparation Courses
C. Subject-Verb Agreement: Collective Nouns
Collective nouns, such as family, majority, audience, and committee, are singular when they act in a collective fashion or represent one group. They are plural when the members of the collective body act as individuals. Collective nouns will usually be singular in Sentence Correction sentences. The difficulty of these questions lies in identifying a noun as a collective noun.
A majority of the shareholders wants the merger.
These nouns usually look plural, but are in fact singular. Confused? If you're having trouble determining singularity or plurality, it might be helpful to visualize what's actually going on in the sentence. Ask yourself these questions:
Is the sentence talking about something that acts as a singular entity?
Or, is it talking about the individual elements within that entity?
In the sentence above we are presented with the noun "majority". The "majority of shareholders" likely contains several shareholders; however, they are only spoken of as a group, not as individuals. There is no indication that the sentence is referring to the individuals within the majority – even though it comprises several people, the "majority" acts as one – as a singular entity - and therefore requires a singular verb, "wants."
The flock of birds is flying south.
This sentence presents another ambiguous noun – "flock" – followed by a plural noun, "birds". Again, the confusing noun is referred to as a singular group: even though a flock comprises many birds, we're not talking about each bird's direction of flight, but the direction of the flock as a whole. And because the flock as a whole is singular, it therefore requires a singular verb to accompany it: the singular verb "is," not the plural verb "are."
Here is an example of a collective noun that requires a plural verb. Even though you will not see this very often on the GMAT, it helps illustrate the importance of reading the entire sentence and visualizing what it describes every time you come across a confusing noun.
The team are fighting amongst themselves.
The sentence above describes the fighting that occurs between the individual members of the team. Because "team" here refers to several individual members, it is a plural noun, and therefore requires a plural verb - "are" - as a result.
The key to these questions is simplicity:
1. recognize the collective noun
2. visualize what's going on in the sentence to make sure it is a collective noun
3. proceed.
These questions are included in the GMAT not because they are especially difficult, but because test writers expect most students to be unfamiliar with the rules governing collective nouns. But if you know to look out for those tricky collective nouns, then you have no reason to worry, because you're already ahead of the game.
List of Common Collective Nouns
army, audience, band (musical), board (political), cabinet (political), choir, class, clergy, committee, company, corporation, council, crowd, department, enemy, faculty, family, government, group, herd, jury, majority, minority, public, school, senate, society, team
|
How Things Work: The Ouija Board
Think of a shipboard chess game with airplanes instead of pawns.
Airman Timothy Johnson (at left) and Lieutenant Alan Proctor update the "ouija board" after a launch from the USS Dwight D. Eisenhower. (MCSN David Danals, USN)
Air & Space Magazine
Guys gathered around a table, playing with toy airplanes: It’s a scene you’d expect to find in the back of a comic book store frequented by geeky teenagers. But it’s happening 24 hours a day on U.S. aircraft carriers around the globe. The guys standing around the table are U.S. Navy officers, and the little models they’re playing with represent multi-million-dollar aircraft. They’re manning the “ouija board,” a system they use to track every move of every airplane on a carrier.
“The ouija board is one of the most critical tools we have in coordinating flight operations,” says Lieutenant Commander Ray Spradlin, aircraft handler aboard the USS Enterprise. It’s a replica of the carrier’s flight deck and hangar deck, on a scale of 1/16 inch to one foot. The board is about six feet long and two and a half feet wide, about the size of a large coffee table, with the flight deck on top and the hangar bay underneath, like a second shelf. Scattered over both surfaces are small templates representing aircraft, made to the same scale, “so in theory, anything that’ll fit on the ouija board in flight deck control will fit out on the flight deck or in the hangar bay,” Spradlin says.
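As a rough sanity check on the scale Spradlin describes, 1/16 inch to one foot means the board is 1/192 the size of the real deck, so every 16 real feet shrink to one inch on the table. The short Python sketch below runs that conversion; the roughly 1,100-foot flight deck and 40-foot aircraft wingspan are assumed example figures, not numbers from the article.

```python
# Scale conversion for the carrier's "ouija board": 1/16 inch on the board
# represents 1 real foot, i.e. a 1:192 scale (12 inches x 16 = 192).
def board_inches(real_feet: float) -> float:
    """Size on the board, in inches, of a dimension given in real feet."""
    return real_feet / 16.0  # 1/16 inch per real foot

# Assumed example figures (hypothetical, for illustration only):
flight_deck_length_ft = 1100  # roughly the length of a large carrier's flight deck
wingspan_ft = 40              # roughly the wingspan of a typical carrier jet

deck_on_board = board_inches(flight_deck_length_ft)
print(f"Flight deck on the board: {deck_on_board:.1f} in (~{deck_on_board / 12:.1f} ft)")
print(f"Aircraft template wingspan: {board_inches(wingspan_ft):.1f} in")
```

At that scale an 1,100-foot deck works out to just under six feet on the table, which lines up with the board dimensions Spradlin gives.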
A carrier flight deck is a dangerous place, with huge machines in constant motion, screaming jet exhausts, spinning rotors, flexing steel cables, powerful catapults, and men and women working amid it all. To avoid disaster, it’s crucial to know what’s happening where and when. The ouija board provides a real-time snapshot of the whereabouts of the approximately 70 aircraft on board.
To represent crucial data on each airplane, such as its armament, maintenance needs, and mission status, “we use low-tech gadgets like thumbtacks, nuts and bolts, wing nuts, and washers,” Spradlin says. There’s no standard system of marking the airplane templates. “Every carrier has their own plan. What means something on one carrier may mean something different on another carrier. For the most part, we all keep track of the same information; we just may use a green pin for a first-go aircraft on one carrier, and the green pin on another carrier may mean something else.”
The ouija board is the centerpiece of Flight Deck Control, located on the flight deck level of the “island,” the structure that towers above the starboard carrier deck amidships. “It’s one of the busiest spaces on the ship during flight operations,” says Spradlin. Air crew, maintenance personnel, “everybody that works on the flight deck is constantly in and out of there, keeping track of information,” all of which the aircraft handler records on the board.
Information includes how airplanes are parked on the flight deck. “Most aircraft are parked along the outer edge of the flight deck with their tails extending out over the water to conserve deck space,” says Spradlin. “Sometimes crews have to do maintenance on the rudder or the elevator or something that would normally be out over the water. Putting an orange tack on an aircraft template on the ouija board tells us that an aircraft needs to be parked with its tail over the flight deck.”
Using the ouija board, the airplane handler oversees everyone involved in moving aircraft, including the “blue shirts,” who chock airplane tires and chain them down, the “yellow shirts,” who direct airplanes taxiing on deck, and the elevator operators, who move aircraft back and forth between the flight deck and the hangar deck. “We have four elevators, and we’re capable of taking two aircraft at a time on each elevator,” Spradlin says. “The hangar bay on this ship is divided into two bays. On the new Nimitz-class carriers they have three hangar bays. We can store about 27 to 29 aircraft in our hangar bay.” The rest, of course, are either out on the flight deck or on a mission. On the flight deck, airplanes are moved with small tractors, while “in the hangar bay we move the aircraft around with what we call spotting dollies, three-wheeled tractor-type contraptions with hydraulic arms.” And back in Flight Deck Control, every move is recorded on the ouija board.
The aircraft handler keeps the “air boss” updated with the data he needs to run flight operations from Primary Flight Control (Pri-Fly), several decks above, atop the island. “We have people in the hangar bay and flight deck physically controlling the aircraft, but the person in charge of all that and making sure that the aircraft get where we need them to be is the handler,” says the Enterprise’s air boss, Captain Ryman Shoaf. “If I see something on the deck and I don’t understand why it’s happening, then I can call down to Ray and he’ll just look at the ouija board and tell me.”
The ouija board system has been around since World War II, when the aircraft carrier came into its own as a warship, and hasn’t changed much since then. While practically everything else aboard the Navy’s warships is operated with state-of-the-art computers and digital technology, there’s a compelling reason that the ouija board remains so low-tech.
“Computers are nice, having electronic equipment is nice, but if you ever take any sort of battle damage, the first thing that’s going to go out is all those powered systems,” says Shoaf. With the ouija board, “if ship’s power goes down, you don’t lose a thing. It’s still right there in front of you. It’s cheap, it’s reliable, and it’s been working for the last 60 years. It’s an effective system, there’s no real reason to update it and make it computerized, so nobody has.” Anyone who has lost a document on a computer can appreciate that thinking.
|
Bulawayo, Zimbabwe - On February 21, Africa's oldest sitting head of state, President Robert Gabriel Mugabe of Zimbabwe, turns 90. At the helm since the country's independence in April 1980, Mugabe - once a shy and studious boy who kept company with Catholic priests - became Zimbabwe's most renowned freedom fighter whose distinct brand of nationalism, pan-Africanism and authoritarianism has enabled him to rule the country for 34 years.
Armed with revolutionary zeal and degrees in education, economics and law earned during his 11-year incarceration, Mugabe's early policies sought to improve the lives of the disadvantaged. However, as time wore on, the chaotic struggle unleashed by Mugabe's more controversial policies on land reform, black empowerment and war veterans brought the country to its knees.
The president's birthday is a day reserved to celebrate Mugabe's role in the anti-colonial war. Vintage footage of Mugabe negotiating with the colonial government for a ceasefire and giving speeches as prime minister in the early 1980s has already started looping on state media. The 21st February Movement, created in 1986 to honour the president, now has a presence on social networks like Facebook.
However, beyond the parades of children in red sashes and Mugabe tribute songs, reports of forced attendance and donations have cast a shadow over the president's birthday in recent years.
"In my view, 21 February cannot be an important date in Zimbabwe because it celebrates authoritarianism premised on one-man rule, and its celebration is associated with the use of coercive force, abuse of public resources in a country experiencing high levels of poverty," said Pedzisai Ruhanya, a political analyst and rights activist.
In the pan-African pantheon
Commenting on Mugabe's historicised portrait, Sabelo Ndlovu-Gatsheni, author and professor at the University of South Africa, told Al Jazeera that the Zimbabwean leader still aspired to be held in the same regard as other pan-African liberators from the Democratic Republic of Congo, Tanzania and Ghana.
Mugabe wants to be remembered as a consistent anti-colonial revolutionary who led Zimbabwe to independence.
- Sabelo Ndlovu-Gatsheni, professor at the University of South Africa
"Mugabe wants to be remembered as a consistent anti-colonial revolutionary who led Zimbabwe to independence and continued to rail against neo-colonialism and global imperialism, while safeguarding Zimbabwe's sovereignty and territorial integrity. It's clear he wishes to be in the same league legacy-wise with such stalwarts of decolonisation as Patrice Lumumba, Julius Nyerere, Kwame Nkrumah and Nelson Mandela," he said.
Gatsheni added that although Mugabe's pan-Africanist ideals and anti-Western stance were admired by many across the continent, the use of violence and intimidation as a political tactic hurt his popularity at home and his relations with leaders such as President Ian Khama of Botswana and Tanzania's President Jakaya Kikwete.
At times, the dark past of state-sponsored violence and regional marginalisation has meant Mugabe and Zanu-PF have struggled to gain popularity in the southern Matebeleland provinces, home to many leaders of the rival liberation movement, the Zimbabwe African People's Union (Zapu). In the early 1980s, longstanding political and ethnic rivalry between the military units of Zanu-PF and Zapu exploded into conflict in the Matebeleland and Midlands provinces, resulting in the massacre, carried out by the elite Fifth Brigade, of more than 20,000 Ndebele and Kalanga people.
Since the founding of Zimbabwe's main opposition party, the Movement for Democratic Change (MDC) in 1999, Mugabe and Zanu-PF have been repeatedly challenged at the polls. But the elections have all been marred by numerous electoral irregularities and grave human rights abuses, with the majority of casualties being MDC supporters. In 2013, a fairly peaceful poll was held on July 31, but the opposition rejected the election results as being rigged.
Mugabe and the West
Now in his seventh term in office, Mugabe - the man once knighted by the Queen of England - has remained under sanctions since his falling-out with the British government and the European Union. In 2002, during the most intense period of land confiscations and state-sponsored repression of the opposition, the EU imposed targeted sanctions on Zanu-PF officials. Travel bans and asset freezes were enforced, but as Zimbabwe has become more stable, the EU has removed some individuals from the lists. EU bans against the remaining officials were eased at an annual review on February 17, but not those on Mugabe and his wife, Grace.
Zanu-PF spokesperson Rugare Gumbo told Al Jazeera that while the easing of sanctions was a welcome move, the continued targeting of Mugabe remained unacceptable. "You don't lift sanctions on some people and not on others... What has Mugabe done that others have not done in Zimbabwe?" said Gumbo.
Knox Chitiyo, an associate fellow in the Africa Programme at London-based think tank Chatham House, described sanctions as a futile solution to Zimbabwe's political crisis. "Sanctions have not been effective. They've been at best a blunt instrument, and at worst counter-productive, for the EU. They've not changed behaviours," Chitiyo said.
He added that the opposition's support of Western sanctions gave Zanu-PF a way to malign the MDC. "Zanu-PF has used sanctions as a tool with which to undermine the opposition by accusing them of serving foreign interests, but the opposition has struggled to find an effective counter-narrative to win hearts and minds."
In charge too long?
Endless rumours of Mugabe's failing health abound, and false death reports often make the rounds, but Mugabe remains confident he will steer the country and his party through to the next elections in 2018. A Wikileaks cable released in 2011 claimed Mugabe had prostate cancer, but Gumbo brushed away concerns about Mugabe's health. "There may be views about his age and his health, but as long as he has the support of the people, there is nothing wrong. As it was proved in the harmonised elections held on July 31, he leads with the wishes of the people."
With the majority of Zimbabwe's 13.7 million people under the age of 35, Mugabe is the only ruler many Zimbabweans have ever known. Joanna Moyo, 24 - a university graduate from Chegutu, a small town near Harare - questioned Mugabe's lengthy rule. "I don't think it's right that he's still in power considering his age. He's been there since 1980. So does it mean that he is the only person capable of being the president? We need rotation, just as we rotate crops in the field."
Celebrating his 90th birthday with a party costing $1m, Mugabe has earned his place in history as an African leader and a freedom fighter. But history will also remember the struggles of ordinary Zimbabweans, as well as the subtle and stark brutalities of his long rule.
Source: Al Jazeera |
Air conditioning
From All Car Wiki - Car Specification Wiki
The concept of air conditioning is known to have been applied in Ancient Rome, where aqueduct water was circulated through the walls of certain houses to cool them down. Other techniques in medieval Persia involved the use of cisterns and wind towers to cool buildings during the hot season. Modern air conditioning emerged from advances in chemistry during the 19th century, and the first large-scale electrical air conditioning was invented and used in 1902 by Willis Haviland Carrier.
Pre-industrial cooling
The 2nd-century Chinese inventor Ding Huan (fl. 180) of the Han Dynasty invented a rotary fan for air conditioning, with seven wheels 3 m (9.8 ft) in diameter and manually powered.[2] In 747, Emperor Xuanzong (r. 712–762) of the Tang Dynasty (618–907) had the Cool Hall (Liang Tian) built in the imperial palace, which the Tang Yulin describes as having water-powered fan wheels for air conditioning as well as rising jet streams of water from fountains.[3] During the subsequent Song Dynasty (960–1279), written sources mentioned the air-conditioning rotary fan as even more widely used.[4]
In the 17th century, Cornelius Drebbel demonstrated "turning Summer into Winter" for James I of England by adding salt to water.[5]
In 1758, Benjamin Franklin and John Hadley, a chemistry professor at Cambridge University, conducted an experiment to explore the principle of evaporation as a means to rapidly cool an object. Franklin and Hadley confirmed that evaporation of highly volatile liquids such as alcohol and ether could be used to drive down the temperature of an object past the freezing point of water. They conducted their experiment with the bulb of a mercury thermometer as their object and with a bellows used to "quicken" the evaporation; they lowered the temperature of the thermometer bulb down to −14 °C (7 °F) while the ambient temperature was 18 °C (64 °F). Franklin noted that, soon after they passed the freezing point of water 0 °C (32 °F), a thin film of ice formed on the surface of the thermometer's bulb and that the ice mass was about a quarter-inch thick when they stopped the experiment upon reaching −14 °C (7 °F). Franklin concluded, "From this experiment, one may see the possibility of freezing a man to death on a warm summer's day".[6]
Mechanical cooling
Three-quarters scale model of Gorrie's ice machine. John Gorrie State Museum, Florida.
In 1820, British scientist and inventor Michael Faraday discovered that compressing and liquefying ammonia could chill air when the liquefied ammonia was allowed to evaporate. In 1842, Florida physician John Gorrie used compressor technology to create ice, which he used to cool air for his patients in his hospital in Apalachicola, Florida.[7] He hoped eventually to use his ice-making machine to regulate the temperature of buildings. He even envisioned centralized air conditioning that could cool entire cities.[8] Though his prototype leaked and performed irregularly, Gorrie was granted a patent in 1851 for his ice-making machine. His hopes for its success vanished soon afterwards when his chief financial backer died; Gorrie did not get the money he needed to develop the machine. According to his biographer, Vivian M. Sherlock, he blamed the "Ice King," Frederic Tudor, for his failure, suspecting that Tudor had launched a smear campaign against his invention. Dr. Gorrie died impoverished in 1855, and the idea of air conditioning faded away for 50 years.
James Harrison's first mechanical ice-making machine began operation in 1851 on the banks of the Barwon River at Rocky Point in Geelong (Australia). His first commercial ice-making machine followed in 1854, and his patent for an ether vapor-compression refrigeration system was granted in 1855. This novel system used a compressor to force the refrigeration gas to pass through a condenser, where it cooled down and liquefied. The liquefied gas then circulated through the refrigeration coils and vaporised again, cooling down the surrounding system. The machine employed a 5 m (16 ft.) flywheel and produced 3,000 kilograms (6,600 lb) of ice per day.
Though Harrison had commercial success establishing a second ice company back in Sydney in 1860, he later entered the debate over how to compete against the American advantage of unrefrigerated beef sales to the United Kingdom. He wrote "Fresh Meat frozen and packed as if for a voyage, so that the refrigerating process may be continued for any required period", and in 1873 prepared the sailing ship Norfolk for an experimental beef shipment to the United Kingdom. His choice of a cold room system instead of installing a refrigeration system upon the ship itself proved disastrous when the ice was consumed faster than expected.
Electromechanical cooling
In 1902, the first modern electrical air conditioning unit was invented by Willis Haviland Carrier in Buffalo, New York. After graduating from Cornell University, Carrier, a native of Angola, New York, found a job at the Buffalo Forge Company. While there, Carrier began experimenting with air conditioning as a way to solve an application problem for the Sackett-Wilhelms Lithographing and Publishing Company in Brooklyn, New York, and the first "air conditioner," designed and built in Buffalo by Carrier, began working on 17 July 1902.
Designed to improve manufacturing process control in a printing plant, Carrier's invention controlled not only temperature but also humidity. Carrier used his knowledge of the heating of objects with steam and reversed the process. Instead of sending air through hot coils, he sent it through cold coils (ones filled with cold water). The air blowing over the cold coils cooled the air, and one could thereby control the amount of moisture the colder air could hold. In turn, the humidity in the room could be controlled. The low heat and humidity helped maintain consistent paper dimensions and ink alignment. Later, Carrier's technology was applied to increase productivity in the workplace, and The Carrier Air Conditioning Company of America was formed to meet rising demand. Over time, air conditioning came to be used to improve comfort in homes and automobiles as well. Residential sales expanded dramatically in the 1950s.
In 1906, Stuart W. Cramer of Charlotte, North Carolina was exploring ways to add moisture to the air in his textile mill. Cramer coined the term "air conditioning," using it in a patent claim he filed that year as an analogue to "water conditioning," then a well-known process for making textiles easier to process. He combined moisture with ventilation to "condition" and change the air in the factories, controlling the humidity so necessary in textile plants. Willis Carrier adopted the term and incorporated it into the name of his company. The evaporation of water in air, to provide a cooling effect, is now known as evaporative cooling.
Refrigerant development
"Freon" is a trademark name owned by DuPont for any Chlorofluorocarbon (CFC), Hydrogenated CFC (HCFC), or Hydrofluorocarbon (HFC) refrigerant, the name of each including a number indicating molecular composition (R-11, R-12, R-22, R-134A). The blend most used in direct-expansion home and building comfort cooling is an HCFC known as R-22. It is to be phased out for use in new equipment by 2010 and completely discontinued by 2020.
R-12 was the most common blend used in automobiles in the US until 1994, when most designs changed to R-134A. R-11 and R-12 are no longer manufactured in the US for this type of application, the only source for air-conditioning repair purposes being the cleaned and purified gas recovered from other air-conditioner systems. Several non-ozone-depleting refrigerants have been developed as alternatives, including R-410A, invented by Honeywell (formerly AlliedSignal) in Buffalo, and sold under the Genetron (R) AZ-20 name. It was first commercially used by Carrier under the brand name Puron.
Innovation in air-conditioning technologies continues, with much recent emphasis placed on energy efficiency and on improving indoor air quality. Reducing climate-change impact is an important area of innovation because, in addition to greenhouse-gas emissions associated with energy use, CFCs, HCFCs, and HFCs are, themselves, potent greenhouse gases when leaked to the atmosphere. For example, R-22 (also known as HCFC-22) has a global warming potential about 1,800 times higher than CO2.[9] As an alternative to conventional refrigerants, natural alternatives, such as carbon dioxide (CO2, R-744), have been proposed.[10]
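To make the global-warming-potential comparison concrete, the CO2-equivalent impact of a refrigerant leak is simply the mass leaked multiplied by the refrigerant's GWP. The minimal sketch below uses the roughly 1,800x figure for R-22 quoted above; the one-kilogram leak is a hypothetical example value.

```python
# CO2-equivalent of a refrigerant leak: mass leaked (kg) multiplied by the
# refrigerant's global warming potential (GWP) relative to CO2.
GWP_R22 = 1800  # approximate figure for R-22 (HCFC-22) cited in the text

def co2_equivalent_kg(mass_leaked_kg: float, gwp: float) -> float:
    """Climate impact of a leak, expressed as kilograms of CO2."""
    return mass_leaked_kg * gwp

# Hypothetical example: a 1 kg leak of R-22.
leak_kg = 1.0
print(f"{leak_kg} kg of R-22 leaked is roughly "
      f"{co2_equivalent_kg(leak_kg, GWP_R22):,.0f} kg of CO2-equivalent")
```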
Air-conditioning applications
Air-conditioning engineers broadly divide air-conditioning applications into what they call comfort and process applications.
Comfort applications aim to provide a building indoor environment that remains relatively constant despite changes in external weather conditions or in internal heat loads.
Air conditioning makes deep plan buildings feasible, for otherwise they would have to be built narrower or with light wells so that inner spaces received sufficient outdoor air via natural ventilation. Air conditioning also allows buildings to be taller, since wind speed increases significantly with altitude, making natural ventilation impractical for very tall buildings. Comfort applications are quite different for various building types and may be categorized as:
• Low-Rise Residential buildings, including single family houses, duplexes, and small apartment buildings
• High-Rise Residential buildings, such as tall dormitories and apartment blocks
• Commercial buildings, which are built for commerce, including offices, malls, shopping centers, restaurants, etc.
• Institutional buildings, which include government buildings, hospitals, schools, etc.
• Industrial spaces where thermal comfort of workers is desired.
• Sports Stadiums – recently, stadiums have been built with air conditioning, such as the University of Phoenix Stadium[11] and in Qatar for the 2022 FIFA World Cup.[12]
The structural impact of an air conditioning unit will depend on the type and size of the unit.[13] In addition to buildings, air conditioning can be used for many types of transportation – motor-cars, buses and other land vehicles, trains, ships, aircraft, and spacecraft.
Humidity control
Refrigeration air-conditioning equipment usually reduces the absolute humidity of the air processed by the system. The relatively cold (below the dewpoint) evaporator coil condenses water vapor from the processed air (much like an ice-cold drink will condense water on the outside of a glass), sending the water to a drain and removing water vapor from the cooled space and lowering the relative humidity in the room. Since humans perspire to provide natural cooling by the evaporation of perspiration from the skin, drier air (up to a point) improves the comfort provided. The comfort air conditioner is designed to create a 40% to 60% relative humidity in the occupied space. In food-retailing establishments, large open chiller cabinets act as highly effective air dehumidifying units.
A specific type of air conditioner that is used only for dehumidifying is called a dehumidifier. A dehumidifier is different from a regular air conditioner in that both the evaporator and condenser coils are placed in the same air path, and the entire unit is placed in the environment that is intended to be conditioned (in this case dehumidified), rather than requiring the condenser coil to be outdoors. Having the condenser coil in the same air path as the evaporator coil produces warm, dehumidified air. The evaporator (cold) coil is placed first in the air path, dehumidifying the air exactly as a regular air conditioner does. The air next passes over the condenser coil, re-warming the now dehumidified air. Note that the terms "condenser coil" and "evaporator coil" do not refer to the behavior of water in the air as it passes over each coil; instead they refer to the phases of the refrigeration cycle. Having the condenser coil in the main air path rather than in a separate, outdoor air path (as with a regular air conditioner) results in two consequences – the output air is warm rather than cold, and the unit is able to be placed anywhere in the environment to be conditioned, without a need to have the condenser outdoors.
Unlike a regular air conditioner, a dehumidifier will actually heat a room just as an electric heater that draws the same amount of power (watts) as the dehumidifier would. A regular air conditioner transfers energy out of the room by means of the condenser coil, which is outside the room (outdoors). That is, the room can be considered a thermodynamic system from which energy is transferred to the external environment. Conversely, with a dehumidifier, no energy is transferred out of the thermodynamic system (room) because the air conditioning unit (dehumidifier) is entirely inside the room. Therefore all of the power consumed by the dehumidifier is energy that is input into the thermodynamic system (the room) and remains in the room (as heat). In addition, if the condensed water has been removed from the room, the amount of heat needed to boil that water has been added to the room. This is the inverse of adding water to the room with an evaporative cooler.
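The reasoning above amounts to a simple energy balance: everything the dehumidifier draws from the wall ends up in the room as heat, and condensing water vapor additionally releases its latent heat into the room air. A minimal sketch, with the power draw, run time, and amount of water removed as assumed example values:

```python
# Energy balance for a dehumidifier operating entirely inside the room it serves:
# all electrical input becomes heat, plus the latent heat released when water
# vapor condenses out of the air (assuming the condensate is removed from the room).
LATENT_HEAT_OF_VAPORIZATION = 2.26e6  # J per kg of water, approximate

def heat_added_to_room_joules(power_watts: float, hours: float,
                              water_removed_kg: float) -> float:
    electrical_heat = power_watts * hours * 3600.0  # W x s = J
    latent_heat = water_removed_kg * LATENT_HEAT_OF_VAPORIZATION
    return electrical_heat + latent_heat

# Hypothetical example: a 300 W unit running for 8 hours and removing 4 kg of water.
q = heat_added_to_room_joules(300, 8, 4.0)
print(f"Heat added to the room: {q / 3.6e6:.1f} kWh equivalent")
```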
Dehumidifiers are commonly used in cold, damp climates to prevent mold growth indoors, especially in basements. They are also sometimes used in hot, humid climates for comfort because they reduce the humidity which causes discomfort (just as a regular air conditioner does, but without cooling the room). They are also used to protect sensitive equipment from the adverse effects of excessive humidity in tropical countries.
The engineering of physical and thermodynamic properties of gas–vapor mixtures is called psychrometrics.
Energy use
It is typical for air conditioners to operate at "efficiencies" significantly greater than 100%, because they move heat from one place to another rather than generating it.[15] However, it may be noted that the input electrical energy is of higher thermodynamic quality (lower entropy) than the output thermal energy (heat energy).
Health issues
Air-conditioning systems can promote the growth and spread of microorganisms, such as Legionella pneumophila, the infectious agent responsible for Legionnaires' disease, or thermophilic actinomycetes; however, this is only prevalent in water cooling towers. As long as the cooling tower is kept clean (usually by means of a chlorine treatment), these health hazards can be avoided. Conversely, air conditioning, including filtration, humidification, cooling, disinfection, etc., can be used to provide a clean, safe, hypoallergenic atmosphere in hospital operating rooms and other environments where an appropriate atmosphere is critical to patient safety and well-being. Air conditioning can have a negative effect on skin, drying it out,[16] and a positive effect on sufferers of allergies and asthma. Air conditioning can also cause dehydration.[17]
Refrigerant environmental issues
Prior to 1994 most automotive air conditioning systems used Dichlorodifluoromethane (R-12) as a refrigerant. It was usually sold under the brand name Freon-12 and is a chlorofluorocarbon halomethane (CFC). The manufacture of R-12 was banned in many countries in 1994 because of environmental concerns, in compliance with the Montreal Protocol. The R-12 was replaced with R-134a refrigerant, which has a lower ozone depletion potential. Old R-12 systems can be retrofitted to R-134a by a complete flush and filter/dryer replacement to remove the mineral oil, which is not compatible with R-134a.
Portable air conditioners
A portable air conditioner is one on wheels that can be easily transported inside a home or office. They are currently available with capacities of about 6,000–60,000 BTU/h (1,800–18,000 W output) and with and without electric-resistance heaters. Portable air conditioners are either evaporative or refrigerative.
Portable refrigerative air conditioners come in two forms, split and hose. These compressor-based refrigerant systems are air-cooled, meaning they use air to exchange heat, in the same way as a car or typical household air conditioner does. Such a system dehumidifies the air as it cools it. It collects water condensed from the cooled air and produces hot air which must be vented outside the cooled area; doing so transfers heat from the air in the cooled area to the outside air.
A portable split system has an indoor unit on wheels connected to an outdoor unit via flexible pipes, similar to a permanently fixed installed unit.
Hose systems, which can be monoblock or air-to-air, are vented to the outside via air ducts. The monoblock type collects the water in a bucket or tray and stops when full. The air-to-air type re-evaporates the water and discharges it through the ducted hose and can run continuously.
A single-duct unit uses air from within the room to cool its condenser, and then vents it outside. This air is replaced by hot air from outside or other rooms, thus reducing the unit's effectiveness. Modern units might have a coefficient of performance (COP, sometimes called "efficiency") of approximately 3 (i.e., 1 kW of electricity will produce 3 kW of cooling). A dual-duct unit draws air to cool its condenser from outside instead of from inside the room, and thus is more effective than most single-duct units.
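Since the coefficient of performance is simply cooling output divided by electrical input, the quoted COP of about 3 lets you estimate a unit's cooling capacity from its power draw. A minimal sketch (the 1 kW draw is the example value used above; the conversion uses the standard figure of roughly 3,412 BTU/h per kW):

```python
# Coefficient of performance (COP) = cooling delivered / electrical power consumed.
def cooling_output_kw(electrical_kw: float, cop: float) -> float:
    return electrical_kw * cop

def kw_to_btu_per_hour(kilowatts: float) -> float:
    return kilowatts * 3412.0  # 1 kW is roughly 3,412 BTU/h

# Example from the text: COP of about 3, so 1 kW of electricity gives ~3 kW of cooling.
cooling_kw = cooling_output_kw(1.0, 3.0)
print(f"{cooling_kw:.1f} kW of cooling, about {kw_to_btu_per_hour(cooling_kw):,.0f} BTU/h")
```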
Evaporative air coolers, sometimes called "swamp air conditioners", do not have a compressor or condenser. Liquid water is evaporated on the cooling fins, releasing the vapour into the cooled area. Evaporating water absorbs a significant amount of heat, the latent heat of vaporisation, cooling the air — humans and other animals use the same mechanism to cool themselves by sweating. They have the advantage of needing no hoses to vent heat outside the cooled area, making them truly portable; and they are very cheap to install and use less energy than refrigerative air conditioners. Disadvantages are that unless ambient humidity is low (as in a dry climate) cooling is limited and the cooled air is very humid and can feel clammy. Also, they use a lot of water, which is often at a premium in the dry climates where they work best.
Heat pumps
Heat pump is a term for a type of air conditioner in which the refrigeration cycle can be reversed, producing heat instead of cold in the indoor environment. They are also commonly referred to, and marketed as, a reverse cycle air conditioner. Using an air conditioner in this way to produce heat is significantly more efficient than electric resistance heating. Some home-owners elect to have a heat pump system installed, which is actually simply a central air conditioner with heat pump functionality (the refrigeration cycle is reversed in the winter). When the heat pump is enabled, the indoor evaporator coil switches roles and becomes the condenser coil, producing heat. The outdoor condenser unit also switches roles to serve as the evaporator, and produces cold air (colder than the ambient outdoor air).
Heat pumps are more popular in milder winter climates where the temperature is frequently in the range of 40–55°F (4–13°C), because heat pumps become inefficient in more extreme cold. This is due to the problem of the outdoor unit's coil forming ice, which blocks air flow over the coil. To compensate for this, the heat pump system must temporarily switch back into the regular air conditioning mode to switch the outdoor evaporator coil back to being the condenser coil, so that it can heat up and de-ice. A heat pump system will therefore have a form of electric resistance heating in the indoor air path that is activated only in this mode in order to compensate for the temporary air conditioning, which would otherwise generate undesirable cold air in the winter. The icing problem becomes much more prevalent with lower outdoor temperatures, so heat pumps are commonly installed in tandem with a more conventional form of heating, such as a natural gas or oil furnace, which is used instead of the heat pump during harsher winter temperatures. In this case, the heat pump is used efficiently during the milder temperatures, and the system is switched to the conventional heat source when the outdoor temperature is lower.
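The defrost behaviour described above boils down to a simple control rule: when the outdoor coil ices up while heating, briefly reverse into cooling mode so the outdoor coil becomes the condenser and sheds its ice, and cover the indoor side with electric resistance heat in the meantime. The sketch below is a deliberately simplified, hypothetical illustration of that logic; real units rely on timers, coil temperature sensors, and manufacturer-specific thresholds.

```python
# Simplified heat-pump mode selection illustrating the defrost logic described above.
# The single boolean "outdoor_coil_iced" stands in for real sensor and timer logic.
def select_mode(call_for_heat: bool, outdoor_coil_iced: bool) -> tuple[str, bool]:
    """Return (refrigeration_mode, resistance_heat_on)."""
    if not call_for_heat:
        return "off", False
    if outdoor_coil_iced:
        # Reverse to cooling so the outdoor coil heats up and de-ices; resistance
        # heat covers the indoor side so the occupants don't get cold supply air.
        return "cooling (defrost)", True
    return "heating", False

print(select_mode(call_for_heat=True, outdoor_coil_iced=True))   # ('cooling (defrost)', True)
print(select_mode(call_for_heat=True, outdoor_coil_iced=False))  # ('heating', False)
```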
Absorption heat pumps are actually a kind of air-source heat pump, but they do not depend on electricity to power them. Instead, gas, solar power, or heated water is used as a main power source. Additionally, refrigerant is not used at all in the process. To extract heat, an absorption pump absorbs ammonia into water. Next, the water and ammonia mixture is pressurized to induce boiling, and the ammonia is boiled off.[18]
Some more expensive window air conditioning units have the heat pump function. However, a window unit that has a "heat" selection is not necessarily a heat pump, because some units use electric resistance heat when heating is desired. A unit with true heat pump functionality will be indicated in its literature by the term "heat pump."
References
1. ASHRAE Terminology of HVAC&R, ASHRAE, Inc., Atlanta, 1991,
2. Needham, Joseph (1991). Science and Civilisation in China, Volume 4: Physics and Physical Technology, Part 2, Mechanical Engineering. Cambridge University Press. pp. 99, 151, 233. ISBN 978-0-521-05803-2.
3. Needham, pp. 134 & 151.
4. Needham, p. 151.
5. Laszlo, Pierre (2001-06). Salt: Grain of Life. ISBN 978-0-231-12198-9.
6. Cooling by Evaporation (Letter to John Lining). Benjamin Franklin, London, June 17, 1758
7. History of Air Conditioning Source: Jones Jr., Malcolm. "Air Conditioning". Newsweek. Winter 1997 v130 n24-A p42(2). Retrieved 1 January 2007.
8. The History of Air Conditioning Lou Kren, Properties Magazine Inc. Retrieved 1 January 2007.
9. "Chapter.2_FINAL.indd" (PDF). Retrieved 2010-08-09.
10. "The current status in Air Conditioning – papers & presentations". Retrieved 2010-08-09.
11. "Qatar promises air-conditioned World Cup". CNN. 2010-12-03.
12. [1]
13. Oakland Air Conditioning. "Structural Impact of Air Conditioning Installation". Retrieved 2012-01-23.
14. Jan F. Kreider. Handbook of heating, ventilation, and air conditioning. CRC press. ISBN 0-8493-9584-4.
15. Winnick, J (1996). Chemical engineering thermodynamics. John Wiley and Sons. ISBN 0-471-05590-5.
16. "What your skin is telling you", air conditioning section.
17. "Is your office killing you?", air conditioning section.
18. "Common Heat Pumps". Retrieved 2010-08-09.
|
Tag Archives: God of Longevity (Shou)
According to tradition, in the New Year people wish each other happiness, wealth and longevity. These three Chinese characters are seen on horizontal lacquered boards, New Year calendars and the signboards of jewelry shops and restaurants.
As reported on http://english.vovnews.vn/Home/New-Year-wishes-for-Happiness-Wealth-and-Longevity/20112/123630.vov
These New Year greetings are formed with images of the Three Abundances, or Fu Lu Shou. The God of Happiness (Fu) holds a baby. The God of Wealth (Lu) wears a mandarin costume and bonnet. The God of Longevity (Shou) is a short and bald-headed old man. One of his hands holds a stick and the other, a peach. They all have long, white beards and ruddy, smiling expressions.
Where were these Gods from?
According to Chinese legends, these gods had been leaders of three different dynasties.
The God of Blessing is Guo Ziyi, who had been a leader of the Tang dynasty. He was known as a clean-fingered mandarin, and his family lived in harmony: for five generations they lived together under the same roof. He and his wife both lived to the age of 83. After their deaths they were buried side by side.
During his lifetime, he always worked for charity to bring blessings to people. He left the world with mercy, family happiness and respect. People worshipped him with their respect and wishes to be blessed as he had been.
The God of Prosperity is Zhao Gongming, who had been a leader in the Chin dynasty in China. He was said to be a corrupt official who enriched himself through bribery. He lived a wealthy and glorious life. However, when he reached 80, he still had no grandchildren to maintain the continuity of his family line. He was so sad that he fell ill and died in loneliness. Before dying, he constantly complained that he had no heirs and lamented that his great wealth was of no use to him.
The God of Longevity is Dongfang Shuo who had been a scholar and leader during the Han dynasty. He was said to be a flatterer and therefore received many loaves and fishes from the king. He used the money to buy young and beautiful concubines in order to use the Yin to nourish the Yang. He lived till 125 years old.
The Chinese people made these three men the Three Abundances as a warning. At the same time, their good sides became people's wishes: blessing, prosperity and longevity.
The concept of Happiness, Wealth and Longevity at present
Every person wants to be happy, wealthy and long-lived. However, the concept changes with the passage of time.
In the past, people held that happiness came from having many children, particularly boys. At present, both boys and girls bring happiness to their parents as long as they are good, dutiful, healthy and successful in life. What is the use of being rich if one dies in loneliness, as Zhao Gongming did? A long-lived person should be healthy, joyful and helpful.
In short, happiness, wealth and longevity are people's long-standing wishes. That's why they send each other New Year greetings expressing these three wishes. |
You've got family at Ancestry.
Find more Aechternacht relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 2 more people named Aechternacht in the United States — and some of them are likely related to you.
Start a tree and connect with your family.
Create, build, and explore your family tree.
What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 6 people named Aechternacht in the 1930 U.S. Census. In 1940, there were 33% more people named Aechternacht in the United States. What was life like for them?
Picture the past for your ancestors.
In 1940, 8 people named Aechternacht were living in the United States. In a snapshot:
• They typically took 4 weeks of vacation a year
• The average annual income was $1,320
• 1 was a child
• Although 38% were female, the most common name for males was Henry
Learn where they came from and where they went.
As Aechternacht families continued to grow, they left more tracks on the map:
• They most commonly lived in Texas |
You've got family at Ancestry.
Find more Seckula relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 3 fewer people named Seckula in the United States — and some of them are likely related to you.
Start a tree and connect with your family.
Create, build, and explore your family tree.
What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 11 people named Seckula in the 1930 U.S. Census. In 1940, there were 27% fewer people named Seckula in the United States. What was life like for them?
Picture the past for your ancestors.
In 1940, 8 people named Seckula were living in the United States. In a snapshot:
• The average annual income was $1,360
• 100% rented out rooms to boarders
• The typical household was 2 people
Learn where they came from and where they went.
As Seckula families continued to grow, they left more tracks on the map:
• 45% were born in foreign countries
• 6 were first-generation Americans
• 5 were born in foreign countries
• Most immigrants originated from Poland |
You've got family at Ancestry.
Find more Swacus relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 6 fewer people named Swacus in the United States — and some of them are likely related to you.
Start a tree and connect with your family.
Create, build, and explore your family tree.
What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 12 people named Swacus in the 1930 U.S. Census. In 1940, there were 50% fewer people named Swacus in the United States. What was life like for them?
Picture the past for your ancestors.
In 1940, 6 people named Swacus were living in the United States. In a snapshot:
• 6 rented out rooms to boarders
• They typically took 7 weeks of vacation a year
Learn where they came from and where they went.
As Swacus families continued to grow, they left more tracks on the map:
• 8 were first-generation Americans
• 17% migrated within the United States from 1935 to 1940
• 2 were born in foreign countries
• Most immigrants originated from Hungary |
Best Binocular Guide
How to Buy the Right Binoculars
After eyeglasses, binoculars are among the most widely used optical instruments in the world, and they serve many different purposes. Choosing the right pair can be a big task, and this guide is designed to help you buy the right binoculars.
Understand the numbers
You have to understand the two numbers used to describe binoculars, for example, 7 x 35. The first number is the power, or magnification factor: 7 x 35 means the lenses make an object appear 7 times closer. The second number, 35, is the diameter of the objective lenses in millimeters. The exit pupil value is calculated by dividing the second number by the first.
The bigger the magnification, the dimmer the image will be and the smaller your field of view, making it harder to hold a steady, focused image. The larger the objective lens, the more light it gathers, which is ideal for low-light activities, though it also increases the weight of the binoculars. A bigger exit pupil means more light reaches your eyes; ideally, choose an exit pupil close to the width your own pupils dilate to.
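As a concrete illustration of the arithmetic above, the short sketch below parses a specification such as "7 x 35" into its magnification, objective diameter, and exit pupil (the second number divided by the first); the extra specifications are simply common example configurations.

```python
# Parse a binocular specification like "7 x 35" and derive its exit pupil.
def binocular_specs(spec: str) -> dict:
    magnification, objective_mm = (float(part) for part in spec.lower().split("x"))
    return {
        "magnification": magnification,                # how many times closer objects appear
        "objective_diameter_mm": objective_mm,         # diameter of the front (objective) lenses
        "exit_pupil_mm": objective_mm / magnification  # second number divided by the first
    }

for spec in ("7 x 35", "8 x 42", "10 x 25"):
    s = binocular_specs(spec)
    print(f"{spec}: {s['magnification']:.0f}x magnification, exit pupil {s['exit_pupil_mm']:.1f} mm")
```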
Most binocular lenses are made of glass, which gives a better image but costs more than plastic. However, a set of plastic lenses that matches the image quality of glass lenses will actually cost you more.
Evaluate the eyepiece and check whether its lenses rest a comfortable distance from your eyes. This distance is referred to as "eye relief" and usually ranges from 5 to 20 millimeters. If you wear glasses, look for an eye relief of 14 to 15 millimeters or greater, because most eyeglasses sit 9 to 13 millimeters from the eye.
Test its ability
While in the shop, test how closely the binoculars can focus by finding the nearest object you can bring into sharp focus and estimating the distance between it and you.
Look at the design of the prism
In most binoculars, the main lenses are spaced more widely than the eyepieces because of the prisms used. This makes nearby objects look larger and more three-dimensional. Binoculars with roof prisms align the main lenses directly with the eyepieces, which makes the binoculars more compact at some cost in picture quality.
Consider water-resistant versus waterproof
If you will not be using your binoculars in bad weather or anywhere they can easily get wet, water-resistant binoculars are enough. If you will be taking them along on something like whitewater rafting, buy waterproof binoculars instead. |
• Safety First. And Congestion Reduction Last.
by • August 21, 2013 • Economics, Safety
For the last two years that federal statistics are available (2012 and 2013), Delaware had the highest per capita pedestrian fatality rate in America.
As the Delaware Department of Transportation and the Council on Transportation revise the Department’s process about deciding which transportation projects should get built, it is a good time for a reminder about the differing magnitudes of the cost of congestion and the cost of crashes.
Researchers at the Georgia Institute of Technology looked into the data in 2011 and quantified the (per capita) costs of both. The red bars are the cost of crashes and the blue bars are the cost of congestion:
Source: “Economic Cost of Traffic Crashes: Establishing a System Performance Measure for Safety” (Transportation Research Board)
The costs vary depending on whether you live in a big city or a smaller city.
America’s largest cities are both more congested and safer – from a traffic point of view – than smaller cities. But as you go to smaller cities (and then presumably to suburban and rural areas), the per capita cost of congestion goes down while the per capita cost of crashes rise. For small cities, the total cost of crashes is nearly 10 times more than the total cost of congestion.
(What is a “small” city by the way? The Georgia Tech study defines “small” as less than 500,000….Wilmington, Delaware’s largest city, has less than 80,000 people in it.)
For Delaware (with no big cities at all) the cost of crashes vastly outweighs the costs associated with congestion. And that does not even take into account another important issue: reducing congestion can actually reduce safety. This report from Portland Metro (2012) documents this phenomenon. The "KSI" ("killed or seriously injured") crash rate in "uncongested" conditions was nearly 50% higher than in "congested" conditions (page 33 of the report). The data reflect that reducing vehicle congestion by widening arterial roads is not an effective long-term safety measure. The problems introduced by widening are faster vehicle operating speeds, complicated lane transitions and more difficult multimodal crossings. Clearly, crashes do occur during congested periods, but they are not the majority of crashes, because the transportation system offers significant exposure throughout the other 23 hours outside the peak hour. (For example, in Delaware most fatal pedestrian crashes happen after the evening rush hour, when there is little or no congestion.)
From a strict cost-benefit point-of-view that only compares the economic costs of congestion and safety, we need to prioritize safety. In the context of a fixed amount of funds for transportation improvement, that also means we need to de-prioritize projects that reduce congestion without a safety benefit (or worse, actually make deadly crashes more likely).
James Wilson is the executive director of Bike Delaware.
• Economic Cost of Traffic Crashes: Establishing a System Performance Measure for Safety (Transportation Research Board)
Metro State of Safety Report
|
Being Basque
The United States has been called a "melting pot," a colorful stew of race, culture and ideology that borrows from all over the world. As a result, ancestral lines have blurred over the years, breeding affection for many lands and people without full understanding or appreciation of what it means to be Irish, Mexican, German or Greek.
There are pockets within the U.S. where people live the traditions of an ancient way, and Boise is home to one of the rarest and most remarkable: Basque. General knowledge of Basque culture seems to stop with chorizo and ethnic dance. Non-Basques see names like Urquidi, Enchausti and Yzaguirre and wonder what brings these people together to celebrate, eat, drink, dance and worship in a way that honors the unusual purity of their lineage and richness of their heritage. In trying to answer this question, we tracked down local Basques of all ages and professions to ask what it means to "be Basque."
Phil Goodson (Echevarria), 30,
Basque Center bartender
I have a deep sense of the history of my culture and a strong connection to my ancestors. There's the old country where my grandparents grew up and the community they started when they came over here; each has its own identity. I've never been to the Basque country, so all I know is the subculture. It's very close and tight-knit with a strong emphasis on the family unit and the extension of that family unit. We take pride in the fact that we consider third, fourth and fifth cousins "family." Sometimes you can't even remember how you're related to someone, but it doesn't matter. I'm not 100 percent Basque, but it's the one part of my heritage that I relate to more than anything else. When my grandpa came over from the old country, he didn't want my dad to speak any Basque; he wanted to Americanize him as quickly as possible because they had been so discriminated against. It has come full circle since then, with an influx of people wanting to learn the language and cultivate ties to the Basque country. I was fortunate enough to grow up here and be a part of that community, but it's almost intuitive-my pride in the Basque culture is in me. When I took this job two years ago, I had the overwhelming feeling of being back in a family. I moved all over the country, but you only have one home.
Susan Gamboa, 49,
wife and mother
I'm Italian and my husband John is Basque, and the cultures are very similar in the way they celebrate. So when we got married, we had an amazing party. It really was like a scene from My Big Fat Greek Wedding. We moved away for about 12 years and didn't get back into the Basque scene until our kids started dancing with Oinkari. We feel really lucky to be part of it, and we have so many friends that we wouldn't have known otherwise. I think one of the reasons Basques are so hospitable is that they had so many problems in Spain. They were repressed and not allowed to speak their language, so it's really important to them to preserve their heritage and way of life. It's such a great community of people. Everybody just has fun; it's a big huge family.
Dave Bieter, 45,
Mayor of Boise
It has been a profound part of my growing up. I remember going down to the Basque Center when I was 6 or 7, but things really transformed when I was living in the Basque country. My whole family lived there for two different years and I went again after law school to teach English and study Basque. Spanish dictator Franco was still alive the first time I went, and it was amazing-I gained a whole different perspective. You grow up with this folkloric view of your heritage, and being there provided a very striking contrast between America and Spain at that time and a sense of the political setting. I was 14, and there were roadblocks and sub-machine guns everywhere. It makes a big impression on you, and the combination of growing up here and living there made me appreciate my roots. There's a real communal aspect to the culture that has kept it alive all these years. Despite the influence of history, they've maintained a language that is unlike any other and transferred here a stubbornness that lends to the preservation of things. My grandfather came over in 1913, and it's still a very active community that's very well versed in having fun. At this point, I have real clarity on what and who I am-an American with Basque roots, not hyphenated.
Dave Lachiondo, 58,
President of Bishop Kelly High School
My father emigrated from Spain, so Basque culture has always been part of my life. It's like air or water: integral. Basques as a people are adventuresome, inclusive, intense, and I have to admit, competitive-particularly in games. Our music reflects our life spirit; we take life head on and are not really subtle about a lot of things. We have pride about being who we are and our unique genetic make-up. There are pockets of Basques in other parts of Idaho, central California, Nevada, Wyoming, Colorado, New York, South America and even Australia. They're linked, all of these "Basque clubs," and that's what Jaialdi is all about-trying to bring the people of the Basque diaspora together. I'm happy that I am who I am, happy with my culture, but I feel that way about people who are Irish. I love anyone who feels like he has a connection to that expression of pride in culture. But I don't speak Basque; I speak English. I'm an American first.
Jake Murgoitio, 18,
student/accordion player
It's an honor to be Basque-to be part of a culture so vibrant and full of life. We express ourselves through music and dance and conversation. The personality is welcoming and friendly, and we have good, solid morals and know what hard work is. We're proud of who we are and proud to show our culture to others and all the things that make us unique. But being Basque is part of our normal routine, so when you ask what it means to be Basque, I had to take a moment to really think about exactly what makes us stand out: honor, welcoming spirit, work ethic, strong morals and pride.
Christina Gamboa, 19,
student/Oinkari dancer
Being Basque is really cool because living in Boise allows me to be part of a culture that is so unique. Not everybody gets to be involved in the language and dancing and singing of their ancestors, so that separates you and makes you feel kind of special. It's about keeping tradition going and understanding your heritage and roots. It's important to keep the culture alive because it's a big part of where you come from and who you are.
|
Creating Databases in your Android Application
written by: Jbeerdev • edited by: Wendy Finn • updated: 7/30/2011
We are going to create an Android database in our application to store data. This is one of several ways of adding persistence to our system. Let's take a look at the steps of Android database development with SQLite3.
• slide 1 of 5
As Android developers, sometimes we need to store persistent data in our applications. Maybe a simple preferences system is enough, but if the data we are going to store is more complex than a set of key-value pairs, we should use a database. There are several different ways of working with databases: one is Content Providers; another is the approach explained in this article.
I think this approach is a bit "classical" and fits the Model View Controller (MVC) pattern, but it is very flexible and useful when you have a complex system with lots of interconnected tables in mind. I have used it in an Android application with 40 tables. The model I'm going to use is explained in the next image.
Database Model
From Activities we will access each table manager (or table handler). Each table manager will contain methods to access the table it represents, and each of them uses the DatabaseHandler -- the class that manages the connection to the database and provides the storing, retrieving and deleting functionality to every table manager. We are going to start developing from the bottom up, so let's define the DatabaseHandler.
• slide 2 of 5
Creating the Database Handler
Our DatabaseHandler class will extend SQLiteOpenHelper, the Android class that helps us create and manage databases. So we have the following piece of code:
public class DatabaseHandler extends SQLiteOpenHelper
Extending SQLiteOpenHelper forces us to implement two methods:
public void onCreate(SQLiteDatabase db)
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion)
The onCreate method is called when the database is created for the first time. The onUpgrade method is called when we release a new version of the database. You don't have to call these methods explicitly; Android invokes them for you.
In this class I use the following two attributes:
private static final String DATABASE_NAME = "brightHubDatabase";
private static final int DATABASE_VERSION = 1;
The first one is the database name and the second one is the current version of the database as defined in the code.
So the full code of this class will be as follows:
Full Code
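Putting the pieces from this section together, a minimal version of the class looks roughly like the sketch below. The drop-and-recreate behaviour in onUpgrade is only an illustration, and it assumes the STUDENT_TABLE_CREATE constant lives in the StudentTableManager class defined in the next section.

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class DatabaseHandler extends SQLiteOpenHelper {

    private static final String DATABASE_NAME = "brightHubDatabase";
    private static final int DATABASE_VERSION = 1;

    public DatabaseHandler(Context context) {
        super(context, DATABASE_NAME, null, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // Called the first time the database is created: build every table here,
        // one execSQL() call per table.
        db.execSQL(StudentTableManager.STUDENT_TABLE_CREATE);
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // Called when DATABASE_VERSION is increased. The simplest (illustrative)
        // policy is to drop and recreate; a real app would migrate its data.
        db.execSQL("DROP TABLE IF EXISTS students");
        onCreate(db);
    }
}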
I have added a constructor with the Android Context as input parameter.
public DatabaseHandler(Context context) {
    super(context, DATABASE_NAME, null, DATABASE_VERSION);
}
Here I call the SQLiteOpenHelper constructor, passing the context, the database name and the database version as parameters, so we get an instance of the database to work with. If the database doesn't exist yet, it is created.
As you can see in the onCreate method, I have added a line that executes the table-creation SQL (a db.execSQL() call using the STUDENT_TABLE_CREATE constant).
STUDENT_TABLE_CREATE is a constant that contains the "CREATE TABLE ..." SQL string that creates the "students" table. It will be explained in the next section; just don't forget that this is where I create the tables. Right now we are using just one table, but imagine that you have 10 of them: you can add one line per table. This way, when the database is created, the table structure is created too.
I have also added some code to the onUpgrade method, but it's not very relevant here. Imagine that you want to modify your database after your application has already been deployed: you can increase the database version in your code and handle the change in this method. Do you need to add new columns? Add new tables? Delete old tables? All of this can be done here.
• slide 3 of 5
Creating the Table Manager
You can create a "table manager" for each table you have in your application. In this example we have only one table, called "students", so I have created a class called "StudentTableManager": an ordinary class with no extends or implements clauses.
Here are the variables and constants I like to place in this class:
private static final String STUDENTS_TABLE_NAME = "students";
The name of the table
public static final String KEY_NAME = "name";
public static final String KEY_ROWID = "_id";
Here I add a constant for every column in the table. Our table has only two columns, "name" and "_id".
private DatabaseHandler mDbHelper;
private SQLiteDatabase mDb;
These objects will help us to manage the database. Notice that DatabaseHandler is the class we created before. The SQLiteDatabase is an object that represents the database itself.
public static final String STUDENT_TABLE_CREATE =
"create table students (_id integer primary key autoincrement, "
+ "name text not null);";
This string is used to create the table at the very beginning, in the DatabaseHandler onCreate method. As you can see, it is just a "create table" SQL statement.
private final Context mCtx;
We need the Android Context to create the database.
The basic methods I use to create this class are:
public StudentTableManager(Context ctx) {
    this.mCtx = ctx;
}
The constructor takes the Context as its input value, so we can initialize the class's Context field (mCtx).
public StudentTableManager open() throws SQLException {
    mDbHelper = new DatabaseHandler(mCtx);
    mDb = mDbHelper.getWritableDatabase();
    return this;
}
Here I have created the "open" method. Maybe it's not the best place to put this functionality, but you can improve the design from here as you wish. In this method we initialize the DatabaseHandler and the SQLiteDatabase object (mDb) using the getWritableDatabase() method.
Now we have all variables initialized, what can we do with them? We can use them to create a new row in the table:
public long createStudent(String title) {
    ContentValues initialValues = new ContentValues();
    initialValues.put(KEY_NAME, title);
    return mDb.insert(STUDENTS_TABLE_NAME, null, initialValues);
}
Using the mDb (SQLiteDatabase) object and its "insert" method, we add the new row to the table. Notice that we use a "ContentValues" object to store the key-value pairs we are going to insert: just create the ContentValues object, pair each value with its column key and pass it to the insert method. Right now we have only one field, but if you have more, just add them all to the ContentValues object (think of it as a set of column-value pairs).
initialValues.put(ONE_COLUMN_KEY, "WhatEver");
initialValues.put(OTHER_COLUMN_KEY, "More here");
initialValues.put(LAST_COLUMN_KEY, 5);
In the Git repository you can find examples of how to delete, update and fetch data from the database, as well as the article code; a sketch of what such methods can look like follows. With this basis in Android database development with SQLite3, you should be able to carry on on your own.
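As a rough idea of what those examples look like, here is a minimal sketch of fetch, update and delete helpers that could sit inside StudentTableManager, using the mDb field and the constants defined above (android.database.Cursor would also need to be imported). These are illustrative, not the exact methods from the repository.

public Cursor fetchAllStudents() {
    // Returns a cursor over every row, exposing the _id and name columns.
    return mDb.query(STUDENTS_TABLE_NAME,
            new String[] { KEY_ROWID, KEY_NAME },
            null, null, null, null, null);
}

public boolean updateStudent(long rowId, String name) {
    // Overwrites the name column of one row; update() returns the row count.
    ContentValues args = new ContentValues();
    args.put(KEY_NAME, name);
    return mDb.update(STUDENTS_TABLE_NAME, args, KEY_ROWID + "=" + rowId, null) > 0;
}

public boolean deleteStudent(long rowId) {
    // delete() also returns the number of rows affected.
    return mDb.delete(STUDENTS_TABLE_NAME, KEY_ROWID + "=" + rowId, null) > 0;
}

public void close() {
    // Release the helper (and its database connection) when finished.
    mDbHelper.close();
}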
• slide 4 of 5
Using the library
And how do we use all this in our application? Here is the basic functionality that you have to place in your Activity:
StudentTableManager mDbHelper = new StudentTableManager(this);
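The remaining steps could look roughly like the following sketch (the student name is made up, and close() is the helper sketched in the previous section):

mDbHelper.open();                                // open (or create) the database
long newRowId = mDbHelper.createStudent("Ada");  // insert one row
mDbHelper.close();                               // release the connection when done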
We create the table manager, we open it and we use it! That simple.
• slide 5 of 5
Source: Author's own experience.
SQLite Open Helper,
Android Persistence
A series of articles with code and examples showing how to use persistence in your Android application.
1. Creating Databases in your Android Application
2. Learn About Developing Preference Screens for Your Android App |
February 17, 2012
Moore Foundation Gives Caltech $6 Million for Chemistry of Cellular Signaling Center
PASADENA, Calif.—Chemists have become extremely adept at characterizing biology's molecules—at determining structures and investigating the roles of individual membrane proteins or cell receptors, for example. But molecules in a living cell rarely work alone. They constantly receive chemical and physical signals from other molecules and, in turn, communicate with still others. If such interactions fail to take place, bad things can happen to the cell and, often, to the organism.
Acknowledging the importance of that interactivity, and with $6 million of funding from the Gordon and Betty Moore Foundation, the California Institute of Technology (Caltech) has established a new center dedicated to understanding the intricacies of cellular signaling. The Chemistry of Cellular Signaling Center will build on Caltech's successes at the interface of chemistry and biology, and will focus on determining how complex systems of molecules interact to create the pathways that regulate the lives of cells and allow them to respond to their environments.
"At the center, we're trying to go to this next level where we don't just study a complex biomolecule in isolation, but we try to understand how it is part of a larger path of molecules," says Dennis Dougherty, the George Grant Hoag Professor of Chemistry at Caltech and director of the new center. "We want to understand molecule-to-molecule interactions, and then, how they create signaling paths."
The new center draws on the expertise of six faculty members from the Division of Chemistry and Chemical Engineering: Dougherty; division chair Jacqueline Barton, the Arthur and Marian Hanisch Memorial Professor; Peter Dervan, the Bren Professor of Chemistry; Linda Hsieh-Wilson, a professor of chemistry who is also an investigator with the Howard Hughes Medical Institute; Shu-ou Shan, a professor of chemistry; and Long Cai, an assistant professor of chemistry.
These researchers' groups have already been working to understand the various aspects of the cellular machinery that make signaling possible—from the delivery of a message, to the changes such a message initiates within the cell, to the propagation of the message to other cells. The new center is looking to draw on common research strategies and equipment to produce a more holistic view of cellular signaling.
The goal is for the center to become a hub for all things related to the chemistry of signaling, says Dougherty, and to get the different groups talking to each other more consistently. "That cross-fertilization of ideas always leads to new opportunities," he says. "The real action happens when the students get together and start to talk about what they're working on. That's always a good way to get things going."
"We are so grateful to the Gordon and Betty Moore Foundation for supporting this new center," says Jacqueline Barton, chair of the Division of Chemistry and Chemical Engineering. "Bringing together these groups and especially these students gives us a chance to look at the interplay of chemistry within the cell and between cells in a completely new way. What will emerge will definitely be more than the sum of the parts."
What may emerge from an enhanced understanding of cellular signaling, says Barton, are new targets and approaches for therapeutic interventions. Many pathological states result from a malfunction of signaling within cellular pathways. For example, cystic fibrosis—the most common genetic disease among Caucasians—is typically caused by defects in the way a receptor is folded and moved through the cell, not by problems with the receptor itself. By understanding how networks of molecules produce and process the information needed for cells to function properly, researchers have a better chance of figuring out what happens when such functions fail—and, ultimately, of fixing those problems.
# # #
The Gordon and Betty Moore Foundation, established in 2000, seeks to advance environmental conservation and scientific research around the world and improve the quality of life in the San Francisco Bay Area. The foundation's science program aims to make a significant impact on the development of provocative, transformative scientific research and increase knowledge in emerging fields. For more information, visit
Written by Kimm Fesenmaier |
A study is considering how Ottawa's white winter bounty could be used in future summers as a form of green energy, saving the city money.
Ottawa council asked city staff and Hydro Ottawa Wednesday to go ahead with the study, which will examine how to use the "cold energy" stored in tonnes of snow collected from city streets in the winter to cool Ottawa buildings such as hospitals, universities and government complexes during the hot summer months.
The report back to council about the potential application of the idea is due in February of next year.
Last winter, Ottawa was buried in more than 400 centimetres of snow. Clearing it off the roads and trucking it to the city's snow dumps cost $88 million.
Insulated snow
According to Frederick Michel, director of the Institute of Environmental Science at Carleton University, snow that is collected during the winter usually melts by early June, and will last until September if it is insulated using a material such as wood chips.
The cold melt-water could be treated and run in pipes through buildings during the hot summer months, cooling them.
A similar system has been used in Sweden to cool a hospital complex since 2000, and one is soon to be installed at an airport in Japan. Related technology that pumps cold water from Lake Ontario is already used to cool buildings in downtown Toronto.
Diane Deans, the city councillor who brought the proposal forward, said it could save the city money that would otherwise be spent powering air conditioners.
"I think our taxpayers would be a lot happier knowing that that snow, instead of just piling up and melting and costing them money, was actually being reused," she said. |
Lawyer Dustin Milligan is the author of a children's series about the Charter of Rights and Freedoms. (CBC)
A Prince Edward Island man has written the final instalments in a children’s series illustrating the various rights protected under the Charter of Rights and Freedoms.
The final four books in the Charter for Children series by lawyer Dustin Milligan are due out July 1.
Each story features a particular province and highlights a Charter right while focusing on the local culture.
The book on Prince Edward Island is called Two Two-Eyed Potatoes and discusses equality of sexual orientation.
The series includes 14 books. Milligan, who now works for a Toronto law firm, launched the project seven years ago when he realized there were few resources to help children learn about the Charter.
"I had been going out to schools in the Montreal area teaching children about human rights," he said. "And I was doing it with a group of other students. And every time we went out, we realized that we didn't have any resources and were relying on makeshift materials." |
Gorilla Carries Toddler as Father Records Encounter (Video)
The extraordinary footage was reportedly released by conservation activist Damian Aspinall, and is thought to be more than two decades old.
In the video it is in fact Aspinall's own daughter playing with the gorilla when she was just a young girl.
Gorillas are often portrayed as strong and sometimes violent creatures, which can attack humans. However, Aspinall has stated his belief that they are at heart gentle creatures.
The British conservation activist shot the video footage 22 years ago at Howletts Wild Animal Park in Kent, but has only just released it. The video shows the 300-pound gorilla easily picking up his daughter and then carrying her around playfully. His daughter was just 18 months old at the time.
According to ABC News, Aspinall has decided to release the footage of his daughter Tansy now to reveal to everyone how gentle gorillas can in fact be.
Tansy seems very comfortable with the huge gorilla and can be seen patting it on its head. The gorilla in response seems quite content and never looked like a threat, apart from its sheer size compared with the tiny girl.
Aspinall is a devoted conservationist, and the organization he works for returns endangered gorillas to the wild in their native habitat.
Gorillas have been known to attack humans, but instances of this are very rare. |
Class warfare
From Conservapedia
Class warfare is a Marxist notion that people in different social classes must necessarily be in conflict with each other, the rich seeking to keep the poor down and the poor seeking to take away what belongs to the rich.
"'Class warfare' first entered the political lexicon primarily as an attack by liberals against conservatives." [1]
The era of Obama has ushered in a new push by Democrats to make class warfare a centerpiece of the 2012 political cycle. Obama makes the most noise when it comes to class warfare but his congressional allies and those in the media very much help facilitate the message.
Weeks before America heard of the Buffett Rule (Tax the rich), Democrats in San Francisco were trying out their class warfare message.[2]
The Occupy Wall Street movement is fundamentally a classic Marxist class warfare direct action movement.
Marxist theory
The apocalyptic language of Marx's class warfare argument is articulated in Volume I of Das Kapital. Few modern economists believe there is any scientific basis for Marx's dark forebodings, yet some sociologists and political scientists remain dedicated to varying twists of Marx's emotional appeal.
Hand in hand with this centralization, or this expropriation of many capitalists by few, develops…the entanglement of all nations in the net of the world market, and with this, the international character of the capitalist régime. Along with the constantly diminishing number of the magnates of capital, who usurp and monopolize all advantages of this process of transformation, grows the mass of misery, oppression, slavery, degradation, exploitation; but with this too grows the revolt of the working class, a class always increasing in numbers, and disciplined, united, organized by the very mechanism of the process of capitalist production itself. The monopoly of capital becomes a fetter upon the mode of production, which has sprung up and flourished along with it, and under it. Centralization of the means of production and socialization of labor at last reach a point where they become incompatible with their capitalist integument. This integument bursts. The knell of capitalist private property sounds. The expropriators are expropriated.[3]
Why is class warfare problematic?
At first glance, the idea of "taxing the rich and giving to the poor" appears "fair". However, there are many problems with this idea. Under a capitalist system, one becomes "rich" through hard work, and raising taxes on this group signals that this hard work is not appreciated. In addition, the demonized top 1% of Americans pay 22% of all revenue,[4] while the bottom 50% of Americans pay nothing in income tax, clearly indicating that those who are wealthy already pay a "fair" share of revenue. The richest Americans are also known to be in control of job-creating industries; taking too much from this group will yield job losses, which ends up hurting the average American.
|
Duncan Copp's Barbican Space
The scientist and filmmaker presents a set of stunning visuals of recent space voyages alongside Holst's orchestral suite, 'The Planets'
A collection of Roman deities, the controllers of emotional drive, or simply vast spheres of gas and light, the planets have always been a source of great power and wonderment, so it’s not a surprise that a visual and musical collaboration between NASA and the Houston Symphony Orchestra has become a glowing success this year. This Saturday, the Barbican centre will present the multimedia event as part of their Great Performers season. Created by scientist and filmmaker Duncan Copp, Holst’s popular orchestral suite ‘The Planets’ will be played alongside new and visually astounding space imagery taken from recent voyages into the solar system. Dazed caught up with the man behind the production, Duncan Copp to find out more about this exciting astronomical performance...
Dazed Digital: How did you come to create the project? What was the inspiration behind it?
Duncan Copp: Shortly after I produced ‘In the Shadow of the Moon’ I received a call from the Houston Symphony. They’d paid a visit to the Johnson Space Centre with the idea of updating a production they had shown before: a visual accompaniment to Holst’s ‘The Planets’ suite. A good friend of mine at the space centre put them in contact with me, and the rest is history!
DD: As an ancient study, why do you think astronomical and astrological activity is still so fascinating to us today as a modern audience?
Duncan Copp: Well, astronomy is such a broad science, when you think about it, that it can’t help but touch all of us at some stage during our lives. A full Moon, a shooting star, the latest Hubble Space Telescope image - when you see these, you can’t help but engage with them. That prompts the same questions our ancestors asked: how did it form? How long has it been there? And before long the questions become more profound: how big is the universe? How did we come to be?
Astrology, of course, is a pseudo-science lacking the rigor of scientific thinking. But today horoscopes have more significance in the newspapers than financial information – most people read their stars rather than share prices. That, I guess, is testament to the fact that we all want to be guided to some degree.
DD: How did you bring the performance into the present day? What was the process of selecting the HD footage?
Duncan Copp: A good score is timeless. You can listen to Bach or the Beatles, some of those tunes transcend time and will be with us forever. I think Holst’s ‘The Planets’ falls within this genre, it’s why this piece of music is so often the basis for film scores past and present, so I didn’t really have to work hard to bring the performance into the present day, it’s always been with us.
After studying planetary geology for my Ph.D. thesis, I was pretty familiar with all the wonderful imagery that was out there, or so I thought! When it came to actually selecting the images, I hadn’t fully appreciated just how many incredible images existed. The exploration of the solar system has been so successful since I left university that the number of images to choose from was bewildering, and there was no real substitute for sitting down and starting to sift. I reckon I looked at many thousands.
DD: The ‘concept’ of the planets can induce fear, wonderment, dread and pure fascination. Where does the fascination lie for you? Why did you become a scientist?
Duncan Copp: I think wonderment is instilled in you at a very early age, I guess fear and dread is too. With me I simply got a feeling in my stomach. It sounds corny I know, but that’s the best way to describe what I can recall when looking up at the sky as a young boy. Getting up at 2 am to see a meteor shower, waking the parents at four in the morning to see an eclipse of the Moon, or finding a fossil which looks like a toenail in a rock, and your Dad recounting it’s been there for millions of years! These things, simple as they sound, spark a curiosity, which at that age you aren’t eloquent enough to put into words or explain - it just happens. That led to me wanting to find out more and study science, which eventually led to my participating in helping create the first high resolution geological maps of Venus.
DD: Are there any aspects of the planets, either scientific or mythological that you feel particularly drawn to?
Duncan Copp: I’m not drawn to the planets in a mythological sense since I don’t really identify with that. But I can’t help but wonder what it must be like to actually be there. I love the incredible scientific statistics, for example it’s hot enough to melt lead on the surface of Venus, why so? Or the Valles Marineris on Mars would dwarf the Grand Canyon, so how was that formed then? It’s a difficult question to answer, but I guess it boils down to curiosity again, and the ever present ‘What if’ question, What if we could walk, fly over, touch, these other worlds, that’s what draws me to them.
DD: What effect are you trying to create by pairing Holst’s music to space imagery?
Duncan Copp: Holst’s ‘The Planets’ is a wonderful score, and individually, the images beamed back by robotic spacecraft from other worlds are fascinating. But I hope by putting the music and images together one enhances the other – there’s a synergy which creates even more enjoyment. I don’t think you could do this with every piece of music, but I think it works with ‘The Planets’.
DD: Each movement in the suite is intended to convey ideas and emotions associated with the influence of the planets on the psyche. Do you think the planets affect the psyche, and if so, how?
Duncan Copp: This is going to sound rather dull, but no, I don’t think planets affect the psyche. It’s true that in past times they were personified and thought to affect us – but today, sadly, I don’t think many people would be able to point out a planet in the sky and name it, let alone say that its presence there affects the way they feel, if that’s what you mean by the question.
DD: You previously shot the much acclaimed film ‘In the Shadow of the Moon’ - where next in the solar system do you think you might explore?
Duncan Copp: I’d like to make a documentary about the Sun. It’s a very fertile area of research at the moment. We take the Sun for granted, and yet if we stop and think about it, it’s incredible. It’s a real live star, and as far as the distance of stars goes, we’re practically staring it in the face. We owe our existence to it and depend on it entirely. I think it represents the largest and most amazing physics lab in the solar system.
‘The Planets’ will be performed at the Barbican on 16 October (a family matinee at 3pm and an evening concert at 7.30pm), starting off the Barbican's "Great Performers" season.
|
Definitions for diatom (ˈdaɪ ə təm, -ˌtɒm)
This page provides all possible meanings and translations of the word diatom
Princeton's WordNet
1. diatom(noun)
1. diatom(Noun)
One of the Diatomaceae, a family of minute unicellular algae having a siliceous covering of great delicacy.
2. Origin: From διά + τέμνειν, i.e., "cut in half"
Webster Dictionary
1. Diatom(noun)
2. Diatom(noun)
a particle or atom endowed with the vital principle
3. Origin: [Gr. dia`tomos cut in two. See Diatomous.]
1. Diatom
Diatoms are a major group of algae, and are among the most common types of phytoplankton. Most diatoms are unicellular, although they can exist as colonies in the shape of filaments or ribbons, fans, zigzags, or stars. Diatoms are producers within the food chain. A unique feature of diatom cells is that they are encased within a cell wall made of silica called a frustule. These frustules show a wide diversity in form, but are usually almost bilaterally symmetrical, hence the group name. The symmetry is not perfect since one of the valves is slightly larger than the other allowing one valve to fit inside the edge of the other. Fossil evidence suggests that they originated during, or before, the early Jurassic Period. Only male gametes of centric diatoms are capable of movement by means of flagella. Diatom communities are a popular tool for monitoring environmental conditions, past and present, and are commonly used in studies of water quality.
Chambers 20th Century Dictionary
1. Diatom
dī′a-tom, n. one of an order of microscopic unicellular algæ, of the Diatomaceæ.—adj. Diatomā′ceous.—n. Diat′omite, diatomaceous earth. [Gr. diatomos—dia, through, temnein, to cut.]
1. Chaldean Numerology
The numerical value of diatom in Chaldean Numerology is: 3
2. Pythagorean Numerology
The numerical value of diatom in Pythagorean Numerology is: 8
|
Brain cancer risk from hypertension
People with high blood pressure may be twice as likely to develop a brain tumour, according to the Daily Mail. The newspaper said a new study had found an association between the two factors, although crucially it could not show that high blood pressure actually caused the tumour to develop.
The research followed more than half a million Norwegian, Swedish and Austrian people for an average of about 10 years, looking at how several factors related to their risk of developing a brain tumour. After dividing people into five bands according to their blood pressure, the researchers found that people with the highest 20% of blood pressure readings were between 45% and 84% more likely to have a brain tumour. However, they found that having high blood pressure while the heart is at rest was only associated with an 18% risk increase once adjustments were made to account for other factors, such as age, gender and smoking status. After these adjustments, there was no increased risk for people who had higher systolic blood pressure (pressure while the heart contracts and pumps blood).
While some news sources have suggested that high blood pressure is associated with a doubling in risk for brain tumours, most of the study’s results suggested the associated risk was much lower. Brain tumours were also still extremely uncommon in the group, regardless of the subject’s blood pressure. This study has various other limitations and is a single study, which means that further study is warranted.
The study was carried out by researchers from the Innsbruck Medical University, Austria and researchers from other institutes in Norway, Sweden and the US. It was funded by the World Cancer Research Fund International and published in the peer-reviewed Journal of Hypertension.
News sources were correct to highlight that this study did not show that high blood pressure causes brain tumours, although some of the statistics they quoted may be misinterpreted. For example, some reports quoted figures suggesting that the risk of a certain type of tumour called meningioma more than doubled, but the risk increase was actually much lower than this. The researchers also produced a model adjusting their results to account for important factors such as age, smoking status and gender. It would have been more appropriate for the newspapers to quote these adjusted figures.
The research also separately analysed two types of blood pressure measurements (diastolic and systolic), which were each associated with different risks. Systolic measurements express blood pressure at the point the heart is contracting and forcing blood out into the body, while diastolic is the blood pressure between beats, when the heart is at rest.
This was a prospective cohort study that assessed whether there was an association between the risk of brain tumour and metabolic syndrome. Metabolic syndrome is a combination of medical conditions (such as raised cholesterol, raised blood pressure, obesity and high blood sugar) that increases the risk of heart disease and diabetes.
Cancer Research UK reports that there are around 8,000 brain tumours each year in the UK. As brain tumours are relatively rare, the researchers needed to follow a large number of people over time to see which factors were associated with developing a brain tumour. This type of study can only show an association between a factor and brain tumours. It cannot determine whether the factor caused the tumour to develop.
The cohort study involved is called the Metabolic Syndrome and Cancer Project. It included 578,462 participants with ages ranging from 15 to 99 at the point at which they entered the study, known as the “baseline”. Participants were recruited between 1972 and 2005. The study population was from Austria, Norway and Sweden. When each person entered the cohort, information about their height, weight, blood pressure, blood glucose, cholesterol and blood fats were recorded. Each participant’s smoking status was also noted: whether they had never smoked or were a former smoker or current smoker.
The researchers used nationwide cancer and cause-of-death registries to identify patients who had developed both benign and cancerous brain tumours. In their analyses, the researchers adjusted for sex, birth year, baseline age and smoking status. They did this in a way that took into account how certain factors, such as smoking, influence both blood pressure and cancer.
The average age of the cohort at baseline was 41. Nearly half of the participants were overweight and nearly a third had hypertension. People in the cohort were followed for 9.6 years on average, and in this time there were 1,312 diagnoses of primary brain tumours (where the cancer originated in the brain rather than spreading from another part of the body affected by cancer). The average age of diagnosis with a brain tumour was 56.
A third of the tumours were classified as a type called a 'high grade glioma', and 8% were 'low grade gliomas'. In the Swedish and Norwegian cohorts, further diagnostic details were available, and in these groups 29% of people with brain tumours had a 'meningioma', a tumour of the meninges (the membranes that envelop the brain).
The researchers used the participants’ baseline data to divide people into five groups of the same size. Group allocation was dependent on body mass index (BMI), so people with BMIs in the top 20% would be in the top group (or 'quintile'), and people with BMIs in the lowest 20% would be in the bottom quintile. They also grouped the participants into quintiles according to cholesterol levels, fat content in the blood, blood pressure (both systolic blood pressure and diastolic blood pressure) and blood glucose levels to analyse how these factors were associated with tumour risk.
The researchers found that when they compared the risk of brain tumours in the top quintile with the bottom quintile, BMI, cholesterol and blood fat levels were not associated with a risk of developing a brain tumour.
The researchers then looked at blood pressure and found that the group with the highest systolic blood pressure measurements (average 157mmHg) were 45% more likely to have a brain tumour than people in the quintile with the lowest blood pressure measurements (average 109mmHg) [hazard ratio (HR) 1.45; 95% confidence interval (CI) 1.01 to 2.09].
People in the quintile with the highest diastolic blood pressure measurements (average 95mmHg) were 84% more likely to have a brain tumour than people in the quintile with the lowest blood pressure measurements (average 65mmHg) [HR 1.84, 95% CI 1.24 to 2.72].
The researchers then repeated the same analysis, this time looking at whether there was an association between blood pressure and the risk of developing particular types of brain tumour.
Finally, the researchers performed analysis in which the data was adjusted for gender, age, age at baseline and smoking status. Using this model, diastolic blood pressure (but not systolic blood pressure) was associated with a greater risk of having a brain tumour of any type (HR 1.18, 95% CI 1.05 to 1.32).
The researchers said that increased blood pressure was related to the risk of primary tumour, particularly of meningioma and high-grade glioma.
This large prospective cohort study comprising more than 500,000 people from Austria, Norway and Sweden suggested an association between high blood pressure and some types of brain tumour. It should be noted, however, that even among the group of people with highest blood pressure the overall incidence of brain cancers was low.
Furthermore, there were several limitations to this study to bear in mind.
A strength of this study is that it followed a large number of people for a long period of time. However, further validation of these results is needed in other populations and the reasons for the association need to be followed up. |
On the eve of Independence Day, Massachusetts Gov. Deval Patrick signed legislation dealing with energy that may increase work for electrical contractors. The Green Communities Act, a comprehensive energy reform bill, offers a combination of heightened standards and incentives to help the state achieve a more diverse energy economy based on efficiency and alternative energy sources.
The new law doubles the rate of increase in the state’s existing Renewable Portfolio Standard from 0.5 percent per year to 1 percent per year with no cap. As a result, utilities and other electricity suppliers will be required to obtain renewable power equal to 4 percent of sales in 2009, rising to 15 percent in 2020, 25 percent in 2030 and more thereafter.
Toward that goal, the law requires utility companies to enter into 10- to 15-year contracts with renewable energy developers to help developers of clean energy technology obtain financing to build their projects. The agreements will target Massachusetts-based projects. The measure also authorizes utility companies to own solar electric installations they put on their customers’ roofs—a practice that was previously prohibited—up to 50 MW apiece after two years.
The act also makes it possible for people who generate their own wind or solar power to sell their excess electricity back to the grid (net metering) at favorable rates, for installations of up to 2 MW (up from 60 kW).
Finally, the new law will make energy-efficiency programs compete in the market with traditional energy supply. Utility companies will be required to purchase all available energy-efficiency improvements that cost less than it does to generate power. Utility companies will offer rebates and other incentives for customers to upgrade lighting, air conditioning and industrial equipment to more efficient models, whenever those incentives cost less than generating the electricity it would take to power their older, less-efficient equipment. |
What Makes an Engine Run Lean?
Engines are surprisingly delicate things -- even the best on Earth consistently tread a very fine line between peak performance and complete meltdown. Just keeping an engine running means constantly balancing counter-destructive forces with nano-metric precision. Your engine's air-fuel ratio is a perfect example of carefully controlled balance; just a bit too much -- or too little -- of either air or fuel can turn your powerhouse into a time bomb. And that knocking you hear under your hood when it runs "lean?" Think of it as a ticking timer, counting down the bomb's final moments.
Disturbing the Balance
• An engine requires a very precise mixture of fuel and air: ideally, about 14.7 parts air to 1 part fuel by mass -- the stoichiometric ratio. The ratio can typically be as low as 10-to-1 for performance applications, or as high as 16-to-1 for maximum fuel economy, but most engines will run at about a 12- to 15-to-1 ratio of air to fuel. A "lean" condition is one where there's either too much air or not enough fuel in the mix. This is the opposite of a "rich" condition, in which there's too much fuel and not enough air. Most engines are calibrated to run slightly rich -- about 13-to-1 -- under cruise conditions; a rich mixture makes for a cooler and more stable fuel burn, which prevents "detonation" and keeps the engine from self-destructing.
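To make the arithmetic concrete, here is a small illustrative calculation (not code from any real engine controller; the airflow figure and ratios are made-up example numbers) showing how a target air-fuel ratio turns a measured airflow into a required fuel flow:

public class AirFuelRatioExample {
    public static void main(String[] args) {
        double airflowGramsPerSecond = 20.0;                // example airflow reading
        double[] targetRatios = { 12.0, 13.0, 14.7, 16.0 }; // rich through lean

        for (double afr : targetRatios) {
            // Required fuel is simply the air mass divided by the air-fuel ratio.
            double fuelGramsPerSecond = airflowGramsPerSecond / afr;
            System.out.printf("AFR %.1f:1 needs %.2f g/s of fuel%n", afr, fuelGramsPerSecond);
        }
    }
}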
Fuel System
• Lean conditions often happen because there's not enough fuel for the amount of air going in, so a malfunctioning fuel system is a prime suspect when the engine runs lean. A clogged fuel filter can reduce both fuel delivery and fuel pressure; low fuel pressure will reduce the fuel flow rate at the fuel injectors and cut the amount of fuel available in the float bowl in a carburetor. Either way, you're looking at a deficit of fuel and a lean condition.
Oxygen Sensor
• Oxygen sensors are used to monitor the amount of oxygen in the engine's exhaust. Your computer uses information from the oxygen sensors to tell the fuel injectors how long to stay open, and thus how much fuel to inject. If the oxygen sensor malfunctions, it can send the computer incorrect information, and send the engine into a lean condition. Bad oxygen sensors will almost always trigger a check-engine light on anything made since 1996, but not all O2 sensors do the same job. Only the first set of oxygen sensors -- the ones before the catalytic converter -- directly monitor the engine. The second O2 sensor monitors the converter.
Mass Airflow Sensor
• The mass airflow -- MAF -- sensor monitors and tells the on-board computer how much air is entering the engine. The MAF sensor uses a heated wire hanging down into the intake system to monitor airflow. Air flowing over the sensor wire cools it down by a certain amount, and the computer uses that information to determine how much air is going in. MAF sensors will malfunction over time, often because a layer of dirt and grime builds up on the sensor wire. The grime coating insulates it like a sweater, so the computer thinks there's less air going in than there is. These sensors are usually easy to clean, and spray-on MAF sensor cleaner solutions are available at most auto parts stores.
Other Sensors
• Almost any sensor that monitors airflow or fuel pressure can cause a lean condition. This includes not just the O2 and MAF sensors, but the manifold absolute pressure -- MAP -- sensor, the intake air temperature sensor and even the sensor that monitors the exhaust gas recirculation system. An EGR valve stuck in the open position will act just like a massive vacuum leak, allowing excess air from the exhaust to re-enter the engine in an uncontrolled manner. Any of these should trigger a check-engine light.
Air Leaks
• Legitimate vacuum leaks aren't as common as they used to be, but they still happen. Vacuum leaks happen anywhere intake manifold vacuum has a chance to pull air in from the outside. Any number of hoses and lines could leak, but so could loose air intake hoses and leaking intake manifold gaskets. The old mechanic's trick is to spray ether starting fluid in short bursts at the suspected vacuum leak. If a vacuum leak is present, the engine will suck the starting fluid in, smooth out, briefly increase rpm and run properly for a few seconds. Be careful with this though -- starting fluid is extremely flammable, and doesn't get along well with electrical sensors and connections.
|
Through the application of natural language processing or NLP, electronic health records are being improved in terms of physician use. Point and click templates have always been resisted by physicians to document visits by patients within EHRs.
As an alternative, physicians prefer speech recognition software that transcribes their dictation directly into the EHR. To demonstrate meaningful use, however, physicians are required to enter a certain amount of structured data. In an effort to make this easier, electronic health record vendors are trying to improve data input by embedding natural language processing software directly into their products.
Let’s look at a brief overview of what natural language processing is and how it works. NLP refers to technology that allows computers to work with communication in the form humans naturally produce it: spoken language or free text. New studies suggest that NLP, a branch of computer science that draws on linguistics and ordinary speech, may make record keeping more effective and improve patient care. This is why physicians are taking notice of NLP and asking that their EHRs include it.
Researchers report that natural language processing improves the accuracy of records and is more effective than other automated systems. Why is this? Harvard Medical School offers some insight, noting that clinical data can finally be used to measure patient safety systematically and methodically. The approach supports a level of accuracy not previously seen in the medical community, and it brings great benefits to managed care, a niche that particularly needs these kinds of tools.
The Need for NLP
Throughout the managed care community there is a need for electronic health records to provide computerized tracking of both patients and the associated networks of institutions. This tracking is needed to detect whether a patient is at risk for complications, whether that is one specific complication or an array of several.
Tracking also shows whether or not a specific department or hospital is performing at a lower standard than others. It gives administrators and other key players what they need to support quality assurance and improvement throughout the network, something greatly needed in managed care.
The need for this sort of application keeps growing. It is becoming more and more difficult to track these measures without an automated system, such as one built on NLP. When the information cannot be accessed, or does not exist at all, it is impossible for a facility to take notice or improve.
Tackling these critical issues is a must, and with natural language processing algorithms in place it is now possible across the medical community. These algorithms build the rules of speech and language directly into the analysis. This is exciting news for medicine, and particularly for doctors: improvement is needed, and without this kind of technology healthcare risks taking a turn for the worse.
My father has stage IV colon cancer. He has a large mass in his hepatic section and polyps from his rectum on up, included in every section of his colon. He also has emphysema and is on 4 liters of oxygen. He is in poor health and cannot walk across a room without difficulty. He is 72 years old. Would surgery be right for him? I am concerned that the surgeons are only doing what they normally do in most cases of colon cancer and not looking at the risk of him not making it through surgery. If he were to make it through surgery, would this lengthen his life? Thank you.
It is difficult to answer your questions without having complete details of your father’s case. However, based on the information you have provided, it sounds like your father would be a high-risk patient for surgery. Moreover, surgery is rarely an appropriate treatment in stage IV colon cancer and is not done unless patients are suffering because the colon tumor is obstructing some other organ and interfering with normal bodily function.
In a case like this, doing surgery would not prolong the life of the patient because the tumor has already spread. Rather, extending the patient’s life would depend more on the success of chemotherapy in controlling the disease. |
Crustose lichen stage - xerarch, Biology
Crustose Lichen Stage - Xerarch
On bare rocks, conditions are inhospitable for life: there is an extreme deficiency of water and nutrients, great exposure to the sun, and extremes of temperature. Usually only crustose lichens are able to grow in such situations. Some examples of these pioneering species are Rhizocarpon, Rhinodina, Lecidea and Lecanora. These plants flourish during periods of wet weather and remain in a state of desiccation for very long periods during drought. During wet weather they rapidly absorb moisture by their sponge-like action. Mineral nutrients are obtained through the secretion of carbon dioxide, which with water forms a weak acid that slowly eats into the rock, into which the rhizoids sometimes penetrate to a depth of several millimetres.
Nitrogen is brought in by rain or by wind-blown dust. Thus all the life requirements of these simple, crust-like species are met. Lichens help corrode and decompose the rock, supplementing the other forces of weathering, and by mixing the rock particles with their own remains they make conditions favourable for the growth of other organisms. In this way a thin layer of soil is formed. How rapidly that small amount of soil forms is controlled largely by the nature of the rock and by the climate. On quartzite or basalt rocks in a dry climate, the crustose-lichen stage might persist for hundreds of years, but on limestone or sandstone in a moist climate, sufficient changes to permit the invasion of foliose lichens may occur within a lifetime.
Posted Date: 1/21/2013 12:24:31 AM | Location : United States
Example of Distance-Rate Problems - Algebra
Two cars are 500 miles apart and moving directly toward each other. One car travels at a speed of 100 mph and the other at 70 mph. Assuming that the cars start at the same time, how long will it take for the two cars to meet?
Let t represent the amount of time the cars are traveling before they meet. Now, let's sketch a figure for this one. This figure will help us write the equation we'll need to solve.
[Figure: the two cars approaching each other, with the 500-mile separation divided into the distance Car A travels and the distance Car B travels.]
From this figure we can see that the distance Car A travels plus the distance Car B travels must equal the total distance separating the two cars, 500 miles.
Following is the word equation for this problem, in two equivalent forms:
(Distance Car A travels) + (Distance Car B travels) = 500 miles
(Rate of Car A)(Time of Car A) + (Rate of Car B)(Time of Car B) = 500 miles
We used the standard distance formula here twice, once for each car: the distance a car travels is the rate of the car times the time traveled by the car. In this case we know that Car A travels at 100 mph for t hours and that Car B travels at 70 mph for t hours as well. Plugging these into the word equation and solving gives us:
100t + 70t = 500
170t = 500
t = 500/170 ≈ 2.941176 hrs
Thus, they will travel for approximately 2.94 hours before meeting.
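The same reasoning can be expressed in a few lines of code as a quick check. This is only a minimal sketch in Python; the function and variable names are ours, not part of the original problem:

# Two objects closing on each other meet after
# (total separation) / (combined closing speed).
def time_to_meet(distance, rate_a, rate_b):
    return distance / (rate_a + rate_b)

t = time_to_meet(500, 100, 70)  # 500 / 170
print(round(t, 2))              # 2.94 (hours)

The key observation the code encodes is that, because both cars travel for the same time t, their speeds simply add into a single closing speed of 170 mph.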
3 Ways to Make a New Plant - Science of Gardening
What do a tomato and a bacterium have in common? They both rely on the DNA molecule to direct their functions and characteristics. While each species has its own unique combination of DNA, called its genome, the molecule works by the same rules in all living organisms. What does this mean for a high-tech plant breeder? That useful genes from one organism can potentially be transferred to the genome of another.
Scientists have done this with the bacterium Bacillus thuringiensis, or Bt for short. Bt, which is found naturally in soil and on the leaves of many plants, is toxic to many plant pests. The genes that give it that toxicity were first put into potatoes in 1997. Now many vegetables grown commercially have the Bt genes in them.
This gene-adding process is called “genetic engineering” (GE). GE is one form of what’s called “genetic modification” (GM). Though GE and GM are related, they are not the same. “Genetically modified” refers to any plant (or other organism) whose DNA has been manipulated by humans. This could happen through artificial selection and hybridization or through engineering. “Genetically engineered” refers specifically to one or more extra genes that have been inserted into a plant’s genome from an organism it can’t mingle genes with via sexual reproduction. The products of such engineering feats— like Bt tomatoes—are called “transgenic.”
Intuitively, GE doesn’t seem like a very “natural” process. After all, genes in nature don’t cross so many boundaries. But the line between natural and unnatural may be grayer than you think.
In the wild, most plants (actually, most living things) share genes only with members of their own or very closely related species living in nearby environments. Hybridization allows plant breeders to go beyond these limitations that exist in nature. Suppose a wild tomato exists that seems particularly resistant to cold. A breeder can cross this plant with one that produces store-bought tomatoes in order to impart these useful ancestral genes. The resulting tomatoes may look and taste like the ones we’re familiar with, and their DNA doesn’t contain any “nontomato” genes. But this cross isn’t natural: these two plants would never have found each other in the wild. In fact, the grocer’s tomato doesn’t even exist in nature. It’s the product of hundreds of similar hybridizations.
The main difference between a hybridized tomato and a transgenic one is that the latter has additional bits of DNA that originally came from another species. It has an altered genome. Critics of genetic engineering have argued that this process may have unintended consequences: They are concerned that seeds from GE plants could carry their genetic alterations into the environment and the food supply, or that untested GE products may pose health risks for consumers. Supporters of genetic engineering counter that the benefits of GE—plants with added nutrition, higher yields, or the ability to produce pesticides or drugs—outweigh the risks, and that a world with rapidly increasing population can’t be fed without them.
Different countries have adopted different policies toward the growing, selling, and labeling of genetically engineered foods. For example, GE ingredients in packaged foods have to be identified as such in Europe, but not in the United States. Until March 2005, farmers in Brazil were not allowed to grow GE crops, while farmers in the United States have long been encouraged to do so. In addition, the companies that develop genetically-engineered crop plants claim ownership of the seeds and any plants they produce as offspring for several generations. This has met with resistance from farmers around the world, some of whom have had pollen from GE plants blown or carried into their fields, and others in developing countries who would like the benefits of genetic engineering without being dependent on seed producers for their seed supply.
As a backyard gardener, you're not likely to be planting any transgenics. These engineered plants have been developed for agriculture. They are patented, and the companies that sell them would make you aware of that when you buy them. That doesn't mean, however, that the plants you grow in your garden are the products of completely natural processes. There are manipulations everywhere out there.
Links presenting various views on transgenics, from the Action Bioscience Web site, produced by the American Institutes of Biological Sciences:
Biotechnology and the Green Revolution: An Interview with Norman Borlaug
Borlaug has been involved in crop improvement since the 1940s, and won the Nobel Peace Prize in 1970 for his work. He is a staunch advocate of genetic engineering to increase crop yields and provide more food in developing nations.
The Ecological Impacts of Agricultural Biotechnology
By Miguel A. Altieri
Altieri teaches agroecology at the University of California at Berkeley, and also works on sustainable agriculture issues for the United Nations. He has been outspoken about his concerns regarding the ecological risks of plant biotechnology.
The Debate Over Genetically Modified Foods
By Kerryn Sakko
This article describes some of the differences between breeding and genetically engineering crops, and presents arguments on both sides of the issue. Written by an undergraduate student from Australia who represented her country at the 2004 Youth Science Festival in Singapore, sponsored by Asia-Pacific Economic Cooperation Forum.
Other Web site resources:
Food Future
Produced on behalf of the food and drink industry in the United Kingdom, this site presents many perspectives on questions such as “Who owns GM technology” and “What about diversity?”
How do you make a transgenic plant?
Through animations, illustrations, and text, this page gives background on DNA and describes how scientists move genes from one organism to another.
B Cell Biology
B cells are white blood cells that are produced in the bone marrow and migrate to the spleen and other tissues. They make immunoglobulin, a protein used by the immune system to identify and protect against foreign objects such as bacteria and viruses. Defects in the development of B cells or in their function may lead to an overproduction of B cells, which causes a form of leukemia, or to the production of autoantibodies, which leads to autoimmune diseases like lupus or rheumatoid arthritis.
Feinstein Institute researchers are advancing our understanding about these defects in the development of B cells with the aim of identifying new treatments and diagnostics for diseases such as chronic lymphocytic leukemia and lupus.
Feinstein Institute investigators conducting B cell research include Nicholas Chiorazzi, Anne Davidson, Betty Diamond, Kanti R. Rai and Ping Wang.
Why Sharing Is a Common Cause That Unites Us all
By Adam Parsons and Rajesh Makwana / sharing.org
Given that a call for sharing is already a fundamental (if often unacknowledged) demand of engaged citizens and progressive organisations, there is every reason why we should embrace this common cause that unites us all.
Across the world, millions of campaigners and activists refuse to sit idly by and watch the world’s crises escalate, while our governments fail to provide hope for a more just and sustainable future. The writing is on the wall: climate chaos, escalating conflict over scarce resources, growing impoverishment and marginalisation in the rich world as well as the poor, the looming prospect of another global financial collapse. In the face of what many describe as a planetary emergency, there has never been such a widespread and sustained mobilisation of citizens around efforts to challenge global leaders and address critical social and environmental issues. A worldwide ‘movement of movements’ is on the rise, driven by an awareness that the multiple crises we face are fundamentally caused by an outmoded economic system in need of wholesale reform.
But despite this growing awareness of the need for massive combined action to reverse ongoing historical trends, clearly not enough is being done to tackle the systemic causes of the world’s interrelated problems. What we still lack is a truly unified progressive movement that comprises the collective actions of civil society organisations, grassroots activists and an engaged citizenry. A fusion of progressive causes is urgently needed under a common banner, one that can create a consensus among a critical mass of the world population about the necessary direction for transformational change. As many individuals and groups within the progressive community both recognise and proclaim, this is our greatest hope for bringing about world renewal and rehabilitation.
A new report by Share The World’s Resources (STWR) demonstrates how a call for sharing wealth, power and resources is central to the formation of this growing worldwide movement of global citizens. As more and more people raise their voices for governments to put human needs and ecological preservation before corporate greed and profit, this demand for sharing is consistently at the heart of civil society demands for a better world. In fact, the principle of sharing is often central to efforts for progressive change in almost every field of endeavour. But this basic concern is generally understood and couched in tacit terms, without acknowledging the versatility and wide applicability of sharing as a solution to the world’s problems. For this reason, STWR argues that the call for sharing should be more widely perceived and promoted as a common cause that can help connect the world’s peace, justice, pro-democracy and environmental movements under a united call for change.
How is the call for sharing expressed?
In many ways the need for greater sharing in society is longstanding and self-evident, as there can be no social or economic justice when wealth and income inequalities continue to spiral out of control, increasingly to the benefit of the 1% (or indeed the 0.001%). There is now an almost continuous and high-profile discussion on the need to tackle growing extremes of inequality, which is a debate that is often framed entirely – if not always explicitly – around the need for a just sharing of wealth and power across society as a whole.
At the same time, advocacy for new development paradigms or economic alternatives is increasingly being framed and discussed in terms of sharing. This is most apparent in the international debate on climate change and sustainable development, in which many policy analysts and civil society organisations (CSOs) are calling for ‘fair shares’ in a constrained world – in other words, for all people to have an equal right to share the Earth’s resources without transgressing the planet’s environmental limits. Furthermore, some prominent CSOs - including Christian Aid, Oxfam International and Friends of the Earth - clearly espouse the principle of sharing as part of their organisational strategies and objectives, and call for dramatic changes in how power and resources are shared in order to transform our unjust world.
The renewed concept of the commons has also fast become a well-recognised global movement of scholars and activists who frame all the most pressing issues of our time – from unsustainable growth to rising inequality – in terms of our need to cooperatively protect the shared resources of Earth. On a more local and practical level, there is also a flourishing sharing economy movement that is empowering people to share more in their everyday lives through the use of online platforms and sharing-oriented business models, as well as through gift economies and shared community projects.
In most other instances, however, the basic demand for sharing is implicitly discussed or inadvertently promoted in popular calls for change. For example, millions of people across the world are struggling for democracy and freedom in manifold ways, from people-led uprisings against corrupt governments to those who are actively participating in new democracy movements within communities and workplaces. But there can be no true form of democracy - and no securing of basic human rights for all - without a just sharing of political power and economic resources, as outlined in a section of STWR’s report on participative democracy.
Similarly, the principle of sharing underlies many of the campaigns and initiatives for peaceful co-existence, whether it’s in terms of redirecting military spending towards essential public goods, or ending the scramble for scarce resources through cooperative international agreements. From both a historical and common sense perspective, it is clear that competition over resources causes conflict – and there is no sense in perpetuating an economic paradigm where all nations are pitted against each other to try and own what could easily be shared.
Yet the basic necessity of sharing is often not recognised as an underlying cause for all those who envision a more equitable and peaceful world without insecurity or deprivation. This is despite the fact that the mass protest movements that have swiftly emerged in recent years, including the Arab Spring demonstrations and Occupy movements, are also invariably connected by their implicit call for greater economic sharing across society, not least in their reaction to enormous and growing socio-economic divisions.
Why advocate for sharing?
Given that a call for sharing is already a fundamental (if often unacknowledged) demand of a diverse group of progressive individuals and organisations, there are a number of reasons why we should embrace this common cause and advocate more explicitly for sharing in our work and activities. In particular, a call for sharing holds the potential to connect disparate campaign groups, activists and social movements under a common theme and vision. Such a call represents the unity in diversity of global civil society and can provide an inclusive rallying platform, which may help us to recognise that we are all ultimately fighting the same cause. It also offers a way of moving beyond separate silos and single-issue platforms, but without needing to abandon any existing focuses or campaign priorities.
A call for sharing can also engage a much broader swathe of the public in campaign initiatives and movements for transformative change. Many people feel disconnected from political issues owing to their technical complexity, or else they feel overwhelmed by the enormity of the challenges that face us and ill equipped to take action. But everyone understands the human value of sharing, and by upholding this universal principle in a political context we can point the way towards an entirely new approach to economics – one that is inherently based on a fair and sustainable distribution of resources. In this way, the principle of sharing represents a valuable advocacy and educational tool that can help to generate widespread public engagement with critical global issues.
In addition, a popular demand for governments to adopt the principle of sharing has radical implications for current economic and political arrangements, both within countries and internationally. This is clear when we examine the influence of the neoliberal approach to economics that continues to dominate policymaking in both the Global North and South, and which is in many ways the antithesis of an economic approach based on egalitarian values and the fulfilment of long-established human rights. In an increasingly unequal and unsustainable world in which all governments need to drastically re-order their priorities, a call for sharing embodies the need for justice, democracy and sound environmental stewardship to guide policymaking at every level of society.
A global movement for sharing
Ultimately, only a collective demand for a fairer sharing of wealth, power and resources is likely to unify citizens across the world in a common cause. Unless individuals and organisations in different countries align their efforts in more concrete ways (a process that is already underway), it may remain impossible to overcome the vested interests and entrenched structures that maintain business-as-usual. While we face the increasing prospect of social, economic and ecological collapse, there is no greater urgency for establishing a broad-based global movement that upholds the principle of sharing as a basic guide for restructuring our societies and tackling the multiple crises of the 21st century. In the end, this may represent our greatest hope for influencing economic reforms that are based on the needs of the world as a whole, and guided by basic human and ecological values.
If the case for promoting sharing as our common cause seems convincing, then it compels us to acknowledge that we are all part of this emerging movement that holds the same values and broad concerns. Without doubt, a dramatic shift in public debate is needed if the principle of sharing is to be understood as integral to any agenda for a more just and sustainable world. If you agree with the need to catalyse a global movement of citizens that embrace sharing as a common cause, please sign and promote STWR’s campaign statement. By joining this ‘global call’, any individual or organisation can influence the development of this emerging theme and vision, and help spark public awareness and a wider debate on the importance of sharing in economic and political terms.
Gardening Tips & Ideas
In the past, most gardening was done out of necessity. Today many people garden because it's refreshing and the rewards are fresher than anything one could purchase at the grocery store. These gardening tips and ideas will help you make better use of your gardening endeavors, so you can spend more time doing the gardening chores you love and less time preparing and organizing supplies.
Easy-to-Transplant Seed Starters
Reuse corrugated egg cartons to start seeds indoors. Collect egg cartons and when it's time to start seeds indoors, simply fill with soil and use each egg holder to plant seeds in. By the time the seedlings are ready to transplant, the carton will pull apart with ease and each seedling can be planted directly in the ground without disruption to the tender plant or its developing roots.
Monthly Garden Tips
Take advantage of monthly gardening to-do lists. Most newspapers, magazines and websites offer monthly tips to help prepare for the following months' gardening chores, often in correlation with your particular climate zone. There is much that can be done long before planting season to help eliminate cramming everything into one or two weekends prior to planting. A few gardening tasks that can be accomplished during the resting period include: test your garden soil; start a compost bin; rake leaves to add to compost; check tools and equipment (winterize if necessary); read through plant and seed catalogs and plan your spring garden on paper; and purchase supplies such as organic fertilizer, organic compost and tools ahead of time. The Master Gardener Volunteer Program at Ohio State University Extension has put together a comprehensive month-by-month garden guide that covers all of these gardening tasks.
Help Your Hands
Gardening will take its toll on a gardener's hands in a short period of time; that's why a good pair of garden gloves is recommended. A great pair of gardening gloves does not have to be expensive. Gardening gloves that fit snugly, are made of durable cloth and have a light grip added to the fingers and palms will meet your gardening needs. Another gardening tip that works wonders: apply a generous amount of lotion to your hands before putting on gardening gloves, and your hands will be soft and supple when your work is done.
Rainwater Harvesting
Harvesting rainwater is an inexpensive solution to providing necessary water to thirsty gardens. For as little as $25 at most gardening centers, you can walk away with a rain barrel that will catch rainwater from the gutter of your home or other building that you can later use to water your garden. Additionally, the North Carolina State University Cooperative Extension recommends using rainwater for plants as it does not contain added salts or minerals and is highly oxygenated.
About this Author
Patricia Hill is a freelance writer who contributes to several sites and organizations, including eHow, Associated Content, Break Studios and various private sectors. She also contributes to the online magazine, |
Steller's Sea Cow – driven to extinction
Background - 8 May, 2009
The Steller's sea cow (Hydrodamalis gigas) was a large sirenian mammal, which grew up to 10 metres long and weighed between 6 and 8 tonnes. It was discovered in 1741 near the Asiatic coast of the Bering Sea by German biologist Georg Steller, who was travelling with the explorer Vitus Bering. Just 28 years later, the species was extinct. It is the first recorded example of humans driving a marine species to extinction.
Drawing of a Steller's Sea Cow - (Mid 18th century).
In November 1741, the weary crew of the Bering voyage, plagued by scurvy and hunger, were 5 months into their journey home from North America. They had all but given up on survival when they spotted an unknown island (Bering Island) and ran aground.
Initially the hungry crew fed on sea otters, which were so abundant hundreds could be found just 3 kilometres from their camp. Six months later, the crew had hunted so many otters that they were now forced to travel up to 40 kilometres over difficult terrain to hunt them. Facing another winter on the island, they began to look for alternative food sources. Soon sea cows replaced otters as the staple food in the crew's diet.
The following spring Steller and his companions built a new ship from the wreck of the old, and left Bering Island on August 14, 1742. Though forced to leave behind much of Steller's painstakingly gathered research, they nevertheless told the world about the sea cow. As the news of their travels initiated a 'gold rush' among adventurers seeking to profit from the newly-discovered lands, virtually all ships on their way to the new world stopped at Bering to load sea cow meat.
Just 28 years after it was first discovered, Steller's sea cow was extinct.
Today, archaeological evidence tells us that the sea cow was formerly widespread in the seas between Japan and California, long before Steller 'discovered' it. But overexploitation by indigenous peoples and loss of its kelp forest habitats had forced the species to a retreat in the Bering Sea.
Today, history is being repeated in the Bering Sea region. Another magnificent marine mammal, the Steller sea lion, is being pushed to the point of extinction. The sea lion and its close relative the northern fur seal are being starved of Alaska Pollock, their main food source. Alaska Pollock is heavily overfished by the world's largest single species fishery - primarily to provide whitefish fillets for the fast-food industry. |
Fire sprinkler
Fire sprinklers can be open orifice or automatic. Automatic sprinklers are activated by heat, which triggers the sensing element that keeps the sprinkler closed. Water from the pipe then passes through the sprinkler, strikes the deflector and is distributed as a spray.
Automatic fire sprinklers operate at a predetermined temperature, using either a fusible element that melts or a glass bulb containing liquid that bursts, allowing the plug in the orifice to be pushed out by the water pressure. This results in a water flow from the orifice.
The water hits a deflector, which produces a specific spray pattern designed according to the sprinkler type. Most automatic fire sprinklers are activated individually by heat from a fire. Automatic fire sprinklers have glass bulbs that follow a standardized color coding indicating their activation temperature. Fire sprinklers are selected according to the building's hazard classification.
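As an illustration of that color coding, the sketch below maps common glass-bulb colors to the nominal activation temperatures usually quoted for them. These figures are typical published ratings rather than values taken from this article, so treat them as an assumption and confirm against the governing standard (for example NFPA 13 or the VdS guidelines) before using them in a real design.

# Commonly cited nominal activation temperatures (degrees C) for
# glass-bulb sprinklers -- illustrative values only; always confirm
# against the applicable standard for an actual installation.
BULB_COLOR_RATINGS_C = {
    "orange": 57,
    "red": 68,
    "yellow": 79,
    "green": 93,
    "blue": 141,
    "mauve": 182,
    "black": 204,  # higher black-bulb ratings (e.g. 260 degrees C) also exist
}

def activation_temperature_c(color):
    # Look up the nominal activation temperature for a bulb color.
    return BULB_COLOR_RATINGS_C[color.lower()]

print(activation_temperature_c("red"))  # 68

In practice the rating is chosen with a margin above the expected ambient ceiling temperature, which is one reason the standards tie sprinkler selection to the building hazard and environment.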
The design of sprinkler systems is described by fire standards authorities such as NFPA and VdS.
Deluge fire sprinkler systems use open orifice sprinklers, so all sprinklers in the system operate at the same time. Open orifice sprinklers are similar to automatic sprinklers but lack the heat-sensitive element.
For occupancies where a fast response can be lifesaving, such as residential buildings, fast-response sprinklers are used. A related type is the ESFR sprinkler (Early Suppression Fast Response), which has a very fast response time and opens quickly compared to standard sprinklers.
The Collapse Of The Theory Of Evolution In 20 Questions
7. Why Is The Claim That Dinosaurs Evolved Into Birds An Unscientific Myth?
The claim that 'dinosaurs grew wings and turned into birds while trying to catch flies'
The Archaeopteryx deception
Archaeopteryx, the so-called ancestor of modern birds according to evolutionists, lived approximately 150 million years ago. The theory holds that some small dinosaurs, such as Velociraptors or Dromaeosaurs, evolved by acquiring wings and then starting to fly. Thus, Archaeopteryx is assumed to be a transitional form that branched off from its dinosaur ancestors and started to fly for the first time.
However, the latest studies of Archaeopteryx fossils indicate that this explanation lacks any scientific foundation. This is absolutely not a transitional form, but an extinct species of bird, having some insignificant differences from modern birds.
For years, evolutionists claimed that Archaeopteryx was only a half-bird that could not fly properly because it lacked a sternum, or breastbone, the bone to which the flight muscles attach. However, the seventh Archaeopteryx fossil, which was found in 1992, disproved this argument. In this recently discovered fossil, the breastbone that had long been assumed by evolutionists to be missing was found to have existed after all. This fossil was described in the journal Nature as follows:
The recently discovered seventh specimen of the Archaeopteryx preserves a partial, rectangular sternum, long suspected but never previously documented. This attests to its strong flight muscles, but its capacity for long flights is questionable. 30
This discovery invalidated the mainstay of the claims that Archaeopteryx was a half-bird that could not fly properly.
Moreover, the structure of the bird's feathers became one of the most important pieces of evidence confirming that Archaeopteryx was a flying bird in the true sense. The asymmetric feather structure of Archaeopteryx is indistinguishable from that of modern birds, and indicates that it could fly perfectly well. As the eminent paleontologist Carl O. Dunbar states, "Because of its feathers, [Archaeopteryx is] distinctly to be classed as a bird."31 Paleontologist Robert Carroll further explains the subject:
The geometry of the flight feathers of Archaeopteryx is identical with that of modern flying birds, whereas nonflying birds have symmetrical feathers. The way in which the feathers are arranged on the wing also falls within the range of modern birds… According to Van Tyne and Berger, the relative size and shape of the wing of Archaeopteryx are similar to that of birds that move through restricted openings in vegetation, such as gallinaceous birds, doves, woodcocks, woodpeckers, and most passerine birds… The flight feathers have been in stasis for at least 150 million years… 32
Another fact that was revealed by the structure of Archaeopteryx's feathers was its warm-blooded metabolism. As was discussed above, reptiles and—although there is some evolutionist wishful thinking in the opposite direction—dinosaurs are cold-blooded animals whose body heat fluctuates with the temperature of their environment, rather than being homeostatically regulated. A very important function of the feathers on birds is the maintenance of a constant body temperature. The fact that Archaeopteryx had feathers shows that it was a real, warm-blooded bird that needed to retain its body heat, in contrast to dinosaurs.
The anatomy of Archaeopteryx and the evolutionists' error
Two important points evolutionary biologists rely on when claiming Archaeopteryx was a transitional form are the claws on its wings and its teeth.
It is true that Archaeopteryx had claws on its wings and teeth in its mouth, but these traits do not imply that the creature bore any kind of relationship to reptiles. Besides, two bird species living today, the touraco and the hoatzin, have claws which allow them to hold onto branches. These creatures are fully birds, with no reptilian characteristics. That is why it is completely groundless to assert that Archaeopteryx is a transitional form just because of the claws on its wings.
Neither do the teeth in Archaeopteryx's beak imply that it is a transitional form. Evolutionists are wrong to say that these teeth are reptilian characteristics, since teeth are not a typical feature of reptiles. Today, some reptiles have teeth while others do not. Moreover, Archaeopteryx is not the only bird species to possess teeth. It is true that there are no toothed birds in existence today, but when we look at the fossil record, we see that both during the time of Archaeopteryx and afterwards, and even until fairly recently, a distinct group of birds existed that could be categorised as "birds with teeth."
Studies of Archaeopteryx's anatomy revealed that it possessed complete powers of flight, just like a modern bird has. The efforts to liken it to a reptile are totally unfounded.
The most important point is that the tooth structure of Archaeopteryx and other birds with teeth is totally different from that of their alleged ancestors, the dinosaurs. The well-known ornithologists L. D. Martin, J. D. Stewart, and K. N. Whetstone observed that Archaeopteryx and other similar birds have unserrated teeth with constricted bases and expanded roots. Yet the teeth of theropod dinosaurs, the alleged ancestors of these birds, had serrated teeth with straight roots.33 These researchers also compared the ankle bones of Archaeopteryx with those of their alleged ancestors, the dinosaurs, and observed no similarity between them. 34
Studies by anatomists such as S. Tarsitano, M.K. Hecht, and A.D. Walker have revealed that some of the similarities that John Ostrom, a leading authority on the subject who claims that Archaeopteryx evolved from dinosaurs, and others have seen between the limbs of Archaeopteryx and dinosaurs were in reality misinterpretations.35 For example, A.D. Walker has analysed the ear region of Archaeopteryx and found that it is very similar to that of modern birds. 36
In his book Icons of Evolution, American biologist Jonathan Wells remarks that Archaeopteryx has been turned into an "icon" of the theory of evolution, whereas evidence clearly shows that this creature is not the primitive ancestor of birds. According to Wells, one of the indications of this is that theropod dinosaurs—the alleged ancestors of Archaeopteryx—are actually younger than Archaeopteryx: "Two-legged reptiles that ran along the ground, and had other features one might expect in an ancestor of Archaeopteryx, appear later." 37
All these findings indicate that Archaeopteryx was not a transitional link but only a bird that fell into a category that can be called "toothed birds." Linking this creature to theropod dinosaurs is completely invalid. In an article headed "The Demise of the 'Birds Are Dinosaurs' Theory," the American biologist Richard L. Deem writes the following about Archaeopteryx and the bird-dinosaur evolution claim:
The results of the recent studies show that the hands of the theropod dinosaurs are derived from digits I, II, and III, whereas the wings of birds, although they look alike in terms of structure, are derived from digits II, III, and IV... There are other problems with the "birds are dinosaurs" theory. The theropod forelimb is much smaller (relative to body size) than that of Archaeopteryx. The small "proto-wing" of the theropod is not very convincing, especially considering the rather hefty weight of these dinosaurs. The vast majority of the theropod lack the semilunate wrist bone, and have a large number of other wrist elements which have no homology to the bones of Archaeopteryx. In addition, in almost all theropods, nerve V1 exits the braincase out the side, along with several other nerves, whereas in birds, it exits out the front of the braincase, though its own hole. There is also the minor problem that the vast majority of the theropods appeared after the appearance of Archaeopteryx. 38
These facts once more indicate for certain that neither Archaeopteryx nor other ancient birds similar to it were transitional forms. The fossils do not indicate that different bird species evolved from each other. On the contrary, the fossil record proves that today's modern birds and some archaic birds such as Archaeopteryx actually lived together at the same time. It is true that some of these bird species, such as Archaeopteryx and Confuciusornis, have become extinct, but the fact that only some of the species that once existed have been able to survive down to the present day does not in itself support the theory of evolution.
Latest Evidence: Ostrich Study Refutes The Dino-Bird Story
In the report on that ostrich study, Dr. Feduccia also made important comments on the invalidity—and the shallowness—of the "birds evolved from dinosaurs" theory:
"There are insurmountable problems with that theory," he [Dr. Feduccia] said. "Beyond what we have just reported, there is the time problem in that superficially bird-like dinosaurs occurred some 25 million to 80 million years after the earliest known bird, which is 150 million years old."
This evidence once again reveals that the "dino-bird" hype is just another "icon" of Darwinism: a myth that is supported only for the sake of a dogmatic faith in the theory.
Evolutionists' bogus dino-bird fossils
One such claim involved the Sinosauropteryx fossil, the reception of which was described in the journal Science as follows:
Exactly 1 year ago, paleontologists were abuzz about photos of a so-called "feathered dinosaur," which were passed around the halls at the annual meeting of the Society of Vertebrate Paleontology. The Sinosauropteryx specimen from the Yixian Formation in China made the front page of The New York Times, and was viewed by some as confirming the dinosaurian origins of birds. But at this year's vertebrate paleontology meeting in Chicago late last month, the verdict was a bit different: The structures are not modern feathers, say the roughly half-dozen Western paleontologists who have seen the specimens…paleontologist Larry Martin of Kansas University, Lawrence, thinks the structures are frayed collagenous fibers beneath the skin—and so have nothing to do with birds.41
So how was it that National Geographic could have presented such a huge scientific forgery, the so-called "Archaeoraptor" fossil (later exposed as a composite assembled from the remains of different creatures), to the whole world as "major evidence for evolution"? The answer to this question lay concealed in the magazine's evolutionary fantasies. Since National Geographic was blindly supportive of Darwinism and had no hesitation about using any propaganda tool it saw as being in favour of the theory, it ended up signing up to a second "Piltdown man scandal."
Even evolutionist scientists reacted against National Geographic's fanaticism. Dr. Storrs L. Olson, head of the Ornithology Department at the famous U.S. Smithsonian Institution, announced that he had previously warned that the fossil was a forgery, but that the magazine's executives had ignored him. In a letter to Peter Raven of National Geographic, Olson wrote:
The dino-bird forgery
Prior to the publication of the article "Dinosaurs Take Wing" in the July 1998 National Geographic, Lou Mazzatenta, the photographer for Sloan's article, invited me to the National Geographic Society to review his photographs of Chinese fossils and to comment on the slant being given to the story. At that time, I tried to interject the fact that strongly supported alternative viewpoints existed to what National Geographic intended to present, but it eventually became clear to me that National Geographic was not interested in anything other than the prevailing dogma that birds evolved from dinosaurs.43
In a statement in USA Today, Olson said, "The problem is, at some point the fossil was known by Geographic to be a fake, and that information was not revealed."44 In other words, he said that National Geographic maintained the deception, even though it knew that the fossil it was portraying as proof of evolution was a forgery.
We must make it clear that this was not the first forgery carried out in the name of the theory of evolution. Many such incidents have taken place since the theory was first proposed. The German biologist Ernst Haeckel drew false pictures of embryos in order to support Darwin. British evolutionists mounted an orangutan jaw on a human skull and exhibited it for some 40 years in the British Museum as "Piltdown man, the greatest evidence for evolution." American evolutionists put forward "Nebraska man" on the basis of a single pig's tooth. All over the world, false pictures called "reconstructions," depicting creatures that never actually lived, have been portrayed as "primitive creatures" or "ape-men."
In short, evolutionists once again employed the method they first tried in the Piltdown man forgery. They themselves created the intermediate form they were unable to find. This event went down in history as showing how deceptive the international propaganda on behalf of the theory of evolution is, and that evolutionists will resort to all kinds of falsehood for its sake.
29. Harun Yahya, Darwinism Refuted, pp. 207-222.
30. Nature, vol. 382, August 1, 1996, p. 401.
31. Carl O. Dunbar, Historical Geology, John Wiley and Sons, New York, 1961, p. 310.
32. Robert L. Carroll, Patterns and Processes of Vertebrate Evolution, Cambridge University Press, 1997, pp. 280-281.
33. L. D. Martin, J. D. Stewart, K. N. Whetstone, The Auk, vol. 97, 1980, p. 86.
34. L. D. Martin, J. D. Stewart, K. N. Whetstone, The Auk, vol. 97, 1980, p. 86; L. D. Martin, "Origins of the Higher Groups of Tetrapods," Ithaca, Comstock Publishing Association, New York, 1991, pp. 485-540.
35. S. Tarsitano, M. K. Hecht, Zoological Journal of the Linnaean Society, vol. 69, 1980, p. 149; A. D. Walker, Geological Magazine, vol. 117, 1980, p. 595.
36. A. D. Walker, as described in Peter Dodson, "International Archaeopteryx Conference," Journal of Vertebrate Paleontology 5(2):177, June 1985.
37. Jonathan Wells, Icons of Evolution, Regnery Publishing, 2000, p. 117.
38. Richard L. Deem, "Demise of the 'Birds are Dinosaurs' Theory,"
39. "Scientist say ostrich study confirms bird 'hands' unlike those of dinosaurs,"
41. Ann Gibbons, "Plucking the Feathered Dinosaur," Science, vol. 278, no. 5341, 14 November 1997, pp. 1229-1230.
42. "Forensic Palaeontology: The Archaeoraptor Forgery," Nature, March 29, 2001.
43. Storrs L. Olson, "Open Letter to: Dr. Peter Raven, Secretary, Committee for Research and Exploration, National Geographic Society, Washington, DC 20036," Smithsonian Institution, November 1, 1999.
44. Tim Friend, "Dinosaur-bird link smashed in fossil flap," USA Today, 25 January 2000 (emphasis added).
Effective Homemade Remedies
by HealthyFood
Whether you have a cold, pain in the stomach, rashes or you have cut yourself, you can find easy first aid in your kitchen. Some household remedies have proven to be effective through generations.
Honey
Use it for: Minor cuts and burns, cough and sore throat
How it works: Most of us put honey in tea to reduce pain in the throat, but for centuries it has also been used to treat wounds. Studies have shown that honey helps soothe cuts, and a recent Dutch study showed that the protein defensin-1 in honey has antibacterial properties.
Try: Apply warm honey to the cut or burn, then protect the injured area with gauze or a bandage and change it once a day. However, if your wound causes swelling, fever or pain, consult a doctor because you may need an antibiotic.
Salt
Use it for: Sinuses, sore throat
How it works: Water with a higher concentration of salt than your body's fluids draws fluid out of the tissue.
Try: For a sore throat, dissolve half a tablespoon of salt (sea salt is even better) in a glass of water and gargle. For sinuses, sip the water. Use only distilled water, or water that has been boiled and cooled to room temperature.
Mint tea
Use it for: Pain in the stomach and digestion problems
How it works: The oil from the leaf and stalk of mint can soothe digestion, ease the passage of gas and relieve blockages. Avoid this tea if the pain is caused by reflux.
Oat flakes
Use it for: Eczema, burns, rashes
How it works: Oats contain phytochemicals with an anti-inflammatory action that soothes itching and inflammation of the skin. Most doctors recommend using colloidal oatmeal, but any kind will be useful.
Try: Grind oat flakes into a powder, add them to a cup of warm water and let them soak for 15 minutes. If you use colloidal oatmeal, simply add it directly to a cup of water.
Pavitra Dwadashi
Pavitra Dwadashi is observed on the 12th day during the shukla paksha, or waxing phase of the moon, in the month of Shravan. Pavitra Dwadashi 2017 date is August 4. The rituals on the day are dedicated to God Vishnu and are mainly observed in parts of Gujarat and Rajasthan.
The day is also known as Damodar Dwadashi in some regions.
Another unique ritual that is followed on the day is the Pavitra Baras. This is also observed by the Vaishnava sect in Gujarat and Rajasthan.
The prayers and pujas on the day are also a continuation of Pavitra Ekadasi, which is observed on the previous day.
Lakotas: Feared Fighters of the Plains
6/12/2006 • Wild West
The fighting men discovered a large tepee village near a creek on the Great Plains. According to the reminiscences of one of those men, ‘A great dance was in progress, in the center of which a small pole from which floated an Indian flag was standing.’ The man came up with a plan. He and several of the other well-trained fighting men would break off from the main body and surprise the Indians of the village. They would charge on horseback ‘through that portion of the village farthest removed from the congregated dancers’ and do whatever was necessary to capture that offensive flag.
The charge began. As a diversion, the small party of fighting men set fire to the first lodge they came to before dashing for the flag. Although surprised by the sudden appearance of their longtime enemies, warriors in the village responded quickly. The fighting men soon faced, according to their leader’s account, ‘flying arrows and scathing bullets.’ The leader was about to cut the sapling that supported the flag when one of his men took a rifle bullet and started to fall from his horse. The leader and another man caught their wounded comrade and held him in the saddle as they galloped back to the main body, which had drawn off toward a bluff just west of the village.
Warriors from the village climbed on their horses and quickly gathered between their lodges and their attackers. Undaunted, the attackers came again, for they were fighting men and they had a job to do. What they did, their leader later recalled, was ‘maneuver for a feigned attack upon the south side of the village; then suddenly changing [our] course made a charge toward the north side with all the rapidity that the speed of [our] horses could accomplish.’ The villagers, however, were alert for just such a move and responded with a rapid maneuver of their own, flanking the charging men. The attackers, as their leader related, were driven ‘from [our] course over the bridge to the north of the village.’
For the next two days there was fighting off and on. Nobody from either side was killed, but many were wounded, according to the one surviving account. On the final afternoon, the opposing forces had a parley from a distance. The warriors from the attacked village, though, broke off the talks. They waved a blanket, which in sign language meant, ‘Come and fight us.’ The men who had so bravely charged the village two days before declined the offer. Soon, according to their leader, they were ‘again on the move.’
The 19th-century ‘battle’ described above has no name. Exactly when it happened is not known. Where it happened is somewhat less vague–along Prairie Creek, not too far from the Platte River in present-day Hall County, Neb. The names of the individuals involved, except for one, are not available. The lack of details might seem disappointing or annoying, but it can’t be helped. No man in the fight was required to make an official report. Perhaps the fight sounds a bit like one of those engagements that occurred when U.S. Army patrols or columns discovered a ‘hostile’ Plains Indian encampment. Well, not exactly. True, there was a leader with a plan; true, the main body divided instead of attacking as one; true, it was a surprise attack on an unsuspecting village; and true, a lodge was torched. But no soldiers were involved. Of course not, a history-minded cynic might suggest, for had the attackers been soldiers, they would have been after more than just a flag and there would have been a ‘massacre,’ one way or another.
The Indians in the village were members of the Omaha tribe, who usually lived in earth lodges in eastern Nebraska near the Missouri River, but who used skin tepees whenever they ventured west to hunt buffalo. The attackers, who had objected to these ‘easterners’ infringing on their hunting grounds, were among the most feared fighting men of the Plains. They were Oglalas, a subdivision of the western Teton Sioux, or Lakotas. On this occasion, the Lakotas and Omahas were of equal strength, and though the fight lasted much longer than most Indian vs. Indian engagements, it did not prove deadly. The battle is remembered today only because the Lakota leader who tried to capture the Omaha flag went on to greater military successes–against the U.S. Army in the 1860s–and then, in 1893, reminisced about his early years during visits with an old friend at the Pine Ridge Reservation of South Dakota. Those reminiscences can be found in the 1997 book Autobiography of Red Cloud: War Leader of the Oglalas, edited by R. Eli Paul.
‘Achieving great success in his younger years as a Lakota warrior, Red Cloud became arguably his people’s greatest war leader until the rise of Crazy Horse,’ Paul writes in his introduction. Even people with only a passing interest in frontier history recognize the distinctive names of those two remarkable Oglalas. Yet, Red Cloud and Crazy Horse still must take a back seat in the grand Teton tepee to Sitting Bull, the militant spiritual leader from the Hunkpapa subdivision. Together, those three Lakotas must be the most recognizable Indian trio of the 19th-century West, perhaps rivaled only by the Big Three of the Apaches–Geronimo, Cochise and Mangas Coloradas.
It might also be argued whether the adjective ‘warlike’ has appeared in print more frequently before ‘Sioux’ or ‘Apaches.’ Surely in the 19th century, the Spanish, Mexicans and Americans of the Southwest would have voted one way, while the pale-skinned folks who lived in or traveled through Minnesota, the Dakotas, Nebraska, Wyoming and Montana would have cast a different vote. No question, though, that when it came to history-making large-scale confrontations with the U.S. Army in the West, the Sioux were war bonnets above the Apaches. Such deadly engagements as the Minnesota (Sioux) Uprising, Grattan Massacre, Fetterman Massacre, Wounded Knee Massacre, Wagon Box Fight, Battle of the Rosebud, Battle of Slim Buttes, Battle of Blue Water and Battle of Wolf Mountain immediately come to mind, even while those labels–‘massacres,’ ‘fights,’ ‘battles,’ ‘uprisings’–get lost in the fog of semantics. As for the indefatigable Battle of the Little Bighorn, well, it never really leaves the mind–just stays lodged there like a spent 7th Cavalry bullet or a Lakota arrowhead.
What sometimes does slip the mind is the fact that the Sioux were a warlike people even before they began to seriously resist Euro-American expansion into western Minnesota and the northern Plains in the middle of the 19th century. The Omaha hunters attacked by a young Red Cloud were just one of many native peoples who, over the many moons, did not see eye to eye with the Sioux. In fact, the name ‘Sioux’ derives from an Ojibwa (Chippewa) word, nadowe-is-iw, meaning ‘adder’ or ‘enemy,’ that was transformed into something like nadoussioux by French voyageurs. Tribe members most often referred to themselves as Dakota (eastern group), Nakota (central group) or Lakota (western group)–all of which mean ‘alliance of friends’ in the three Siouan dialects of the same names. They also called themselves Oceti Sakowin (‘Seven Council Fires’) because of the seven major allied subgroups–Sisseton, Wahpeton, Wahpukute and Mdewakanton (the eastern group, collectively known to whites as the Santee Sioux, speakers of Dakota); Yankton and Yanktonai (central group, the Yankton Sioux, speakers of Dakota and Nakota); and Teton (western group, the Teton Sioux, speakers of Lakota). Today, the Dakota-Nakota-Lakota speakers are often collectively called Sioux, although more and more people seem to prefer ‘Dakotas’ or ‘Lakotas’ as the encompassing term.
In the early 17th century, the Sioux mainly occupied what would become Minnesota and parts of Wisconsin, but Lakota bands began to migrate from the upper Mississippi River valley onto the Great Plains because of costly warfare with the Cree Indians, who were armed with French rifles, and pressure from the Ojibwas to the east. The lure of the great buffalo herds also encouraged the westward expansion and, after horses were acquired around 1750, the moving became a whole lot easier…and so did the fighting.
The Lakotas warred against settled agricultural people such as the Pawnees and Arikaras and also against other mounted nomads such as the Cheyennes, Kiowas, Arapahos and Crows. Upon ‘discovering’ the forested slopes and lush meadows of the Black Hills (Paha Sapa) around 1776, the Lakotas, now well supplied with firearms, proceeded to displace the Cheyennes and Kiowas, who had previously enjoyed the region’s abundant game, timber and water. Defeating the Arikaras in 1792 allowed the Lakotas to expand into the middle Missouri Valley and what would become western South Dakota. In 1814 the Lakotas made peace with the Kiowas, who now formally recognized that their former enemies controlled the Black Hills. In the early 1820s, the Lakotas joined forces with another former enemy, the Cheyennes, to drive the Crows out of what would become eastern Wyoming. Historian Elliott West describes this ‘expansionist burst’ in his award-winning 1998 book The Contested Plains. ‘By the 1830s,’ he writes, ‘the Lakotas were the preeminent power of the northern plains. With the Black Hills as their spiritual and geopolitical center, they ranged west to the Continental Divide, east to the Missouri basin, south to the South Platte and Smoky Hill Rivers, and north to the lands of two powerful rivals, the Crows and the Blackfeet.’
By the 1840s the Lakotas had made peace with the Cheyennes and Arapahos, but there was no peace with those tribes to the east that ranged westward for bison (Pawnees, Osages, Omahas, Potawatomies, etc.) or with the Crows and Blackfeet to the north. Encounters with non-Indians, which had occurred infrequently in the past, now increased as Oregon-bound settlers and California-bound gold seekers began crossing the Plains. The buffalo herds were disrupted, and the Plains Indians, in turn, tried to disrupt some of the wagon trains. ‘It was only a matter of time,’ writes R. Eli Paul, ‘before Lakota expansionism came into conflict with that other great power, the United States.’
At mid-century, about 15,000 Lakotas stood in the way of ‘progress.’ This western group included seven subdivisions–Hunkpapa, Oglala, Minneconjou, Two-Kettle, Sans-Arc, Blackfoot and Brulé. Red Cloud was almost 30 at the time, Sitting Bull was not yet 20 and Crazy Horse was only about 10 and still known as Curly or Curly Hair. Even the young Crazy Horse may have already displayed bravery, generosity, wisdom and fortitude–the four great virtues of the Lakota male–by that time, and certainly Red Cloud had already made a name for himself among his Lakota peers. But the trio was unknown to the white world and would have held no interest for the white man in any case. That would only change when they became threats to that white world…or at least to that small part of the white world that passed through Teton territory.
In an attempt to head off trouble at the pass in 1851, representatives of the U.S. government negotiated the Treaty of Fort Laramie (also known as the Treaty of Horse Creek), which was signed by representatives of the Lakotas and other tribes. The treaty was designed to buy off the natives so that there would be peace on the emigrant road (the Indians were not to attack the white people just passing through) and on the Plains (the Indians were not to attack each other). It was a pipe dream. For one thing, the Indian signees did not represent all of their tribesmen. For another, a warrior culture could not be transformed overnight. Far too many Plains Indians were fighters to the bone. And far too many whites were coming.
Three years later, near Fort Laramie (in what would become Wyoming), the Lakotas had their first significant clash with the U.S. Army. In mid-August 1854, a wayward cow from an emigrant wagon train was killed by a Minneconjou man, and Brevet 2nd Lt. John L. Grattan, determined to do something about it, led an expedition of 30 men to a large Lakota camp. Negotiations with the head of the camp, Brulé Chief Conquering Bear, broke down in no time, and the impatient young lieutenant tried to force the issue despite being badly outnumbered. Who fired first is not certain, but Grattan died with his boots on, and Conquering Bear died with his moccasins on. Because all Grattan’s men were also killed, while the cow killer got nary a scratch, the clash has been labeled a ‘massacre’–the Grattan Massacre.
Red Cloud was a witness to the killings, but he and most other Lakotas paid the skirmish little mind. They went on with their lives; skirmishes, after all, were part of life. The U.S. War Department, not liking anything about that particular skirmish, eventually called upon Brevet Brig. Gen. William S. Harney to exact revenge. ‘By God, I’m for battle–no peace,’ Harney announced, and in early September 1855 he proved it by attacking the Brulé Chief Little Thunder’s village on Blue Water Creek near Ash Hollow, in Nebraska Territory. Harney’s force of more than 600 men destroyed the village and suffered relatively minor casualties (four dead, four badly wounded) while killing at least 85 inhabitants. Most history books call it the Battle of Blue Water, though ‘Harney’s Massacre’ has been suggested as an alternative by a few. Red Cloud was not a witness to General Harney’s punitive action, but legend has it that Curly (Crazy Horse) was in Little Thunder’s camp that bloody September day. Whether he was actually there or not, the future warrior was surely affected by the unprecedented Lakota losses. His uncle, Spotted Tail, had been wounded in the Blue Water fight, and Spotted Tail’s wife and baby daughter were among the 70 women and children captured by the soldiers.
The ruthlessness of Harney did not drive the Lakotas to war. In fact, they apparently became better behaved because of the possibility that the aggressive general might be back in full force the following spring. For the remainder of the 1850s, an uneasy truce existed between the Lakotas and the U.S. government. Red Cloud, for one, chose to withdraw with his Oglala band to the Powder River country (in present-day north-central Wyoming and southeastern Montana), where the hunting was still good and the whites were still few.
Things changed drastically in the 1860s, beginning to the east, where starving and discontented Dakotas (Santee Sioux) led by Mdewakanton Chief Little Crow killed some 700 whites in the Minnesota (Sioux) Uprising. Little Crow himself was killed by white settlers in July 1863, and nearly all the surviving Santees were kicked out of Minnesota into Dakota Territory. By then, the Lakotas had started their own little uprising because white men were traveling to the Montana gold fields on the Bozeman Trail, which cut right through the Powder River hunting grounds. Red Cloud, a ‘shirt wearer’ (head warrior) of the Oglalas who had counted coup some 80 times, would no longer have only skirmishes with Indian enemies on his mind. War against the whites was on the horizon.
Raids against white emigrants occurred in 1863, and the U.S. government sent Brig. Gens. Henry Hastings Sibley and Alfred Sully, who had subdued the Santees in Minnesota, to attack Lakota camps on the Little Missouri. Things grew worse in 1864, but mostly farther south. Lakotas raided with their Cheyenne and Arapaho allies along the Platte River Road (see related story, P. 32), and then Colorado militiamen slaughtered a village of Cheyennes at Sand Creek that November. Cheyenne, Lakota and Arapaho warriors responded early in 1865 by twice sacking Julesburg and generally spreading death and destruction along the South Platte. The raiders then moved north, where Red Cloud and the other Lakotas in the Powder River country seemed to have it a little better. But not for long. General Sully returned to the upper Missouri for another campaign, and even worse, Brig. Gen. Patrick Edward Connor led one of the three columns that invaded the Powder River country.
The Powder River Expedition of 1865 was a fiasco. Connor did not succeed in engaging the Lakotas in battle, but he did further stir up Red Cloud and his followers. The U.S. government now tried a different tack and gave the free-roaming Lakotas gifts, including arms and ammunition, to come down to Fort Laramie and parley in June 1866. The government’s goal was a peace treaty that would allow gold seekers and others to move freely on the Bozeman Trail. Red Cloud, Man Afraid of His Horses (who was the principal chief) and other Powder River leaders proved to be tough negotiators, especially after they learned the soldiers had already made plans to build three outposts–Forts Reno, Phil Kearny and C.F. Smith–to guard that detested trail. The council failed, and Red Cloud’s status grew in the Indian world as he denounced the way the white man had treated his people and the way the peace commissioners were now treating the Lakota leaders as if they were children.
If Red Cloud–who was not actually a chief–did not yet have a reputation in the white world, that changed in dramatic fashion on December 21, 1866, when he struck a blow that rocked the nation even more than the Grattan Massacre of ’54 and resulted in the U.S. Army’s most shocking defeat in the Indian wars until the debacle at the Little Bighorn in 1876. Lured away from Fort Phil Kearny by decoy parties, overconfident Captain William J. Fetterman and 80 men were wiped out by the main body of Indians–mostly Lakotas, but also some Cheyennes and Arapahos–in about 40 minutes. During the Indians’ victory celebration, they scalped and mutilated the dead soldiers.
Best known to whites as the Fetterman Massacre, the clash is often referred to today as the Fetterman Fight or the Fetterman Disaster. The 31-year-old captain, who once boasted that with a company of soldiers he ‘could ride through the Sioux Nation,’ certainly left the fort looking for a fight, and despite falling into a trap, he and his men did not go down easily. At least 60 warriors are said to have died on the battlefield. The Indians did not call it Fetterman anything, instead referring to it as the Battle of the Hundred in the Hands or the Battle of the Hundred Slain. It is uncertain whether Red Cloud had a hand in directing the action that cold December day. Historian Robert Utley contends that the Minneconjou High-Back-Bone was the man behind the plan. Crazy Horse, according to most accounts, led one of the decoy parties, but in his recent biography of Crazy Horse, Mike Sajna puts him with the main force, adding: ‘Crazy Horse’s leadership of the Oglala in the Fetterman Fight could be taken as an indication that by the winter of 1866 he had…become head war chief of his people.’
Whatever roles they played in Fetterman’s failure, Red Cloud, Crazy Horse and other leaders remained on the offensive, intent on driving the white soldiers out of Lakota land. On August 1, 1867, a Northern Cheyenne war party, along with some Lakota warriors, attacked a group of hay-cutting soldiers near Fort C.F. Smith. The very next day, a large war party of Lakotas, including Red Cloud and Crazy Horse, attacked the wagon camp of some wood-cutting soldiers about five miles from Fort Phil Kearny. Both attacks failed in the end because most of the troops were armed with new Springfield breechloaders and because relief columns arrived from the forts.
Although the Hayfield Fight and Wagon Box Fight were victories by the whites, the Powder River Indians were hardly defeated. They kept the soldiers bottled up in their isolated forts and continued to deny emigrants use of the Bozeman Trail. U.S. government officials became intent on reaching a settlement with the warring Lakotas and friends. But Red Cloud wouldn’t come to Fort Laramie to sign the treaty. There was one big sticking point. ‘When we see the soldiers moving away and the forts abandoned, then I will come down and talk,’ said Red Cloud. In the summer of 1868, he got his wish. The soldiers abandoned the three forts on the Bozeman Trail, and the Indians promptly burned down Forts C.F. Smith and Phil Kearny. Red Cloud finally arrived at Fort Laramie that November to sign the Fort Laramie Treaty of 1868. The Lakotas were granted a great territory that included the Black Hills and hunting privileges in the Powder River country. Red Cloud’s War (1866-68) was over, and he had won. He was the first Indian leader to win a war against the United States–and the last.
Between 1868 and 1876, the Lakotas were–at least to white Americans–not quite so warlike. While they continued to skirmish with the likes of the Shoshones and the Crows, they were at peace with the United States, in accordance with President Ulysses S. Grant’s peace policy. Relations remained strained, though, and Red Cloud did a lot of complaining in Washington and elsewhere as the spokesman not only for the Oglalas but also for the entire Lakota Nation. The Indian Bureau wanted the Lakotas to make the transition to reservation life and live like white settlers. In 1873, the government consented to build two agencies in northwestern Nebraska–the Red Cloud Agency for the Oglalas and the Spotted Tail Agency for the Brulés–outside the Great Sioux Reservation. The U.S. government’s peace with Red Cloud would last, but other Lakotas rejected the forced lifestyle changes, the dependence on annuities delivered by ineffective and corrupt administrators, and the Army’s reluctance to keep white gold seekers out of the Black Hills. Many of Red Cloud’s followers now turned to men like Sitting Bull and Crazy Horse–Lakotas who were still willing to fight the white intrusion with more than just words.
Sitting Bull, like most of the other Hunkpapas, had been living and hunting up in Yellowstone River country and was not directly involved in the Red Cloud War. But like the older Red Cloud, Sitting Bull was firmly against white intrusions into the northern Plains. In the aftermath of the Minnesota Uprising, he had skirmished with General Sibley during the summer of 1863 and had tried to defend the Little Missouri River camp that was successfully attacked by General Sully on July 28, 1864, in the Battle of Killdeer Mountain (near present-day Killdeer, N.D.). During General Connor’s three-pronged Powder River expedition the following year, Sitting Bull helped thwart the marches of both Colonel Nelson Cole’s column and Colonel Samuel Walker’s column.
After rejecting the 1868 Treaty of Fort Laramie, Sitting Bull became the recognized leader of not only the Hunkpapa bands but also all the other nontreaty Lakotas–Indians who were officially viewed as ‘hostile’ once they failed to obey the order to report to the reservations by January 31, 1876. The U.S. Army sent soldiers to find these winter roamers. The Great Sioux War of 1876-77 was about to begin.
On March 17, 1876, a cavalry force led by Colonel Joseph J. Reynolds attacked a village along the Powder River. Reynolds reportedly believed it was the village of Crazy Horse, but it turned out to be the Cheyenne camp of Two Moons. The villagers lost their horse herd but regained it, and most of them were able to escape to a small camp nearby–the camp of Crazy Horse. Next, they all pushed north, traveling another 60 miles to the larger camp of Sitting Bull. Reynolds’ attack made the free-living bands more determined than ever to resist. When the Army sent three columns from three directions to converge in the Powder River Country as part of a spring-summer campaign to force their compliance, the Lakotas and their allies were ready for them–physically and spiritually. It helped that in early June, Sitting Bull had a vision of soldiers falling upside down from the sky.
A few weeks later, in the Battle of the Rosebud, Crazy Horse and other Lakotas fought Brig. Gen. George Crook’s invading force to a standstill–but that was not the great victory Sitting Bull had envisioned. The Indians’ greatest triumph came just over a week after the Rosebud Creek fight when Lt. Col. George Armstrong Custer attacked Sitting Bull’s extensive village on the Little Bighorn River (known to the Lakotas as the Greasy Grass) in Montana Territory. Custer and all the soldiers in his immediate command did not exactly fall from the sky, but fall they did–never to rise again, except in a million books and a billion imaginations. The Battle of the Little Bighorn, June 25-26, 1876, was of course the crowning triumph for the warlike Lakotas, even if Sitting Bull did not take part in the actual fighting and even if Crazy Horse, as brave as he was, did not make a legendary charge over Custer Hill.
Custer’s Last Stand, as everyone on this side of Custer Hill (and the other side, too) knows, was almost the last stand for the Lakotas. They had won the battle, but could not be expected to win this war. In the aftermath of a fight that totally overshadowed the Fetterman and Grattan massacres (and every other Indian engagement, too), the U.S. Army pursued the hostiles. On September 9, 1876, Crook’s troops found the Lakota village of American Horse at Slim Buttes (in what today is northwestern South Dakota). They eventually torched it, but not before Crazy Horse, who had arrived with a band of warriors during the battle, gave them a scare or two.
That winter, Colonel Nelson Miles tenaciously tracked down Crazy Horse’s village near the Tongue River in Montana Territory, and on January 8, 1877, with about 3 feet of snow on the ground, the two sides clashed in what would become known as the Battle of Wolf Mountain. Blizzard conditions cut the fighting short, and casualties were light, but Crazy Horse had suffered a mighty blow. His people could run, but they could not hide. The war ended in 1877, not because Sitting Bull and Crazy Horse were defeated in battle but because the hungry Lakotas were unable to hunt or gather food. In early May, Crazy Horse rode into the Red Cloud Agency to surrender, about the same time that Miles struck Minneconjou Sioux Lame Deer’s band on Muddy Creek, a small tributary of Rosebud Creek, in Montana Territory. Lame Deer was among the casualties in that May 7, 1877, clash, and the Battle of Lame Deer (or Muddy Creek) was the last significant engagement of the Great Sioux War.
Four months later, Crazy Horse was bayoneted to death by a guardhouse sentry at Camp Robinson. Sitting Bull, insisting that he did not want to become an agency Indian, sought sanctuary in Canada and found it for a while. But he, too, surrendered–at Fort Buford, in Dakota Territory, on July 19, 1881. By then the buffalo had all but disappeared from the homestead-infested Great Plains, and there was little choice but to forsake the nomad way of life for the reservation.
Sitting Bull lived long enough on the Standing Rock Reservation in the Dakotas to see the late Crazy Horse’s cousin Kicking Bear kick up his heels in the first Sioux-style Ghost Dance, a frenzied performance that frightened the Indian agent down at Pine Ridge no end. But the great Hunkpapa spiritual leader was shot down by Indian police while ‘resisting arrest’ on December 15, 1890, two weeks before soldiers from Custer’s old regiment, the 7th Cavalry, opened up on Big Foot’s band along Wounded Knee Creek on the Pine Ridge Reservation. That shocking bloodbath, in which the old Minneconjou leader and at least 150 other Lakota men, women and children were killed, has come to be known as the Wounded Knee Massacre.
Organized Lakota resistance to the white world faded in the aftermath of Wounded Knee. Not all the old warriors were dead, however. Later, some of them would tell their stories, including Red Cloud, who did not die until 1909. By then, many of his earlier military accomplishments were forgotten. That was due, in part, to his long life and the fact he had not resisted and fought to the bitter end like that brave Oglala warrior Crazy Horse or that charismatic Hunkpapa hero Sitting Bull. But unlike the other two members of the most famous Indian trio, Red Cloud had faced an even more difficult task in the end–trying to meet the confusing demands of the white man’s world while also trying his best to keep Lakota culture alive. Lakotas had often been warlike in the past, but war, he knew, was not everything–especially when the odds against them were stacked higher than the Black Hills.
This article was written by Gregory Lalire and originally appeared in the April 2001 issue of Wild West magazine.
|
How to Calculate Velocity
Understanding how velocity is calculated is essential if you are to grasp the laws of physics. Learn the calculations necessary to figure it out.
You will need
• Change in position
• Associated time
Step 1 Understand speed and velocity Recognize the difference between speed and velocity. Speed is the rate at which distance traveled is changing. Velocity is the rate at which position is changing, together with the direction of that change.
Step 2 Calculate speed Calculate speed. For example, if someone walked 20 feet east, 40 feet south, 20 feet west, and then 40 feet north in 400 seconds, the total distance traveled would be 120 feet, so the average speed would be 120 feet divided by 400 seconds, or 0.3 feet per second.
Step 3 Compare speed and velocity Calculate velocity. In the preceding example, the individual returns to the starting point, so the net change in position is zero. The average velocity is therefore zero, even though the average speed was 0.3 feet per second, as the short program below confirms.
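The arithmetic in Steps 2 and 3 can be checked with a short program. The following C sketch is not part of the original how-to and its variable names are purely illustrative; it walks the same four legs, adding up the path length for speed and the signed displacements for velocity.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The walk from Step 2: 20 ft east, 40 ft south, 20 ft west, 40 ft north,
       taking 400 seconds in total. Each leg is stored as {east-west, north-south}. */
    double legs[4][2] = {
        {  20.0,   0.0 },   /* 20 ft east  */
        {   0.0, -40.0 },   /* 40 ft south */
        { -20.0,   0.0 },   /* 20 ft west  */
        {   0.0,  40.0 }    /* 40 ft north */
    };
    double total_time = 400.0;   /* seconds */
    double distance = 0.0;       /* path length actually walked */
    double dx = 0.0, dy = 0.0;   /* net change in position */
    int i;

    for (i = 0; i < 4; i++) {
        distance += fabs(legs[i][0]) + fabs(legs[i][1]);   /* legs are axis-aligned */
        dx += legs[i][0];
        dy += legs[i][1];
    }

    printf("average speed:    %.2f ft/s\n", distance / total_time);            /* 120/400 = 0.30 */
    printf("average velocity: %.2f ft/s\n", sqrt(dx*dx + dy*dy) / total_time); /* 0/400  = 0.00 */
    return 0;
}

Because the walker ends where he started, the displacement is zero and so is the average velocity, while the average speed is 0.3 feet per second.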
Step 4 Calculate velocity Specify the speed and change in position when calculating velocity. For example, if an airplane is traveling 300 miles per hour in a westward direction, its velocity would be 300 miles per hour, west. |
Lent: The Wilderness Within and Without
03/08/2012 08:53 am ET | Updated May 08, 2012
Please join the HuffPost community in "A Lenten Journey" for reflections throughout Lent, and join our online Lenten community here.
Lent is the time when we think of Jesus in the wilderness. The gospel of Mark is short and vivid on this topic: "He was in the wilderness 40 days, tempted by Satan; and he was with the wild beasts; and the angels waited on him." But usually we don't stop there when we think of Jesus in the wilderness. Matthew and Luke elaborate on the temptations. In their telling, Jesus was tempted by Satan with food to overcome his hunger; tempted to jump from the pinnacle of the temple, to prove he was the Son of God; and tempted to claim dominion over the world.
The story suggests that Jesus may have experienced a surge of self-doubt and a longing to overcome it by proving himself. Moreover, when Satan says, "If you're the Son of God, do this or that," he implies that turning stones into bread and jumping from the temple are the kinds of things that the Son of God would do. There is no argument; it's just assumed. The frontal attack tries to sow doubt in Jesus' mind, while Satan casually presumes that the Anointed One should perform spectacular feats.
But Jesus repeatedly rejected the expectations that his enemies and followers held for him. At the outset of his ministry Jesus identified himself with the suffering servant of Isaiah, the fellow-suffering, partisan redeemer who brings hope and deliverance to the oppressed: "The Spirit of the Lord is upon me because he has anointed me to bring good news to the poor. He has sent me to proclaim release to the captives and recovery of sight to the blind, to let the oppressed go free, to proclaim the year of the Lord's favor." Then he had to run for his life, for Luke says that the crowd, filled with wrath, tried to throw Jesus off a cliff. From the infancy narratives, to the Magnificat, to the temptation in the desert, to the Sermon on the Mount, to the confrontation in the temple, to the cross, the gospels present Jesus as a prophet of righteousness who spurned the power-worshipping ways of the world. Jesus refused to rationalize oppression, or grab for power, or appeal to national pride.
What does it profit anyone to gain the whole world but lose one's soul? This haunting question was an echo of Jesus' temptation in the wilderness. Human beings have an indefinable craving that the world cannot fill. We need the things of this world; many of them are good; but this need leads us into temptation. Our economy rests on the constant growth of the gross domestic product. The devotion to production crowds out the craving in us for something besides material goods. We are tempted and exhorted to consume far more than we need, in an extravagantly wasteful way, and encouraged to go into serious debt.
Look where that led. The financial crash started with people who were just trying to buy a house of their own; who had no concept of predatory lending; and who had no say in the securitization boondoggle that bunched up thousands of sub-prime mortgages, chopped the package into pieces, and sold them as corporate bonds. Financial professionals were caught in the terribly real pressure of the market to produce constant short-term gains. Speculators gamed the system and regulators looked the other way. Mortgage brokers made fortunes off mortgages they had no business selling; bond bundlers made fortunes packaging the loans into securities; rating agencies made fortunes giving inflated bond ratings to the loans; corporate executives made fortunes putting the bonds on their balance sheets. The rating agencies handed out triple-A ratings for toxic securities, being paid by the very issuers of the bonds they rated. The big banks got leveraged up to 50-to-1 and kept piling on debt.
The day of reckoning came, and now we are consumed with the politics and policy options of the aftermath: the crushing loss of jobs, savings, and homes. The megabanks, now bigger than ever, have rushed back to gamble in the swaps market and pay huge bonuses. And the outposts of progressive religion, long accustomed to their marginalization, have struggled to find their voices on economic justice, militarism, and other social justice issues. Social justice has been off the table for so long that many religious communities have little memory of how to stand up for it.
I spend much of my time dealing with the latter problem. But the word from the desert, a spiritual reminder, has not changed: We need consumer goods, but they do not fill the chasm. We find, like Jesus, that we are tempted into evil by things that are not evil; some are even good. Driven into the wilderness, we confront our superficiality, which conjures up the beasts of anxiety, jealousy, malice, contempt, and fear.
Meanwhile our craving for something more is still there. On first impression this desire seems too small to make a difference. Yet Jesus compared the Kingdom of God to a mustard seed, and he cautioned that we shall not live by bread alone.
These words strike us at the center of our being. They articulate a hunger that sends us in the right direction. We need to hold fast to this hunger and not be diverted from it. By attending to it we recognize that we are not in control. Our restlessness with what the world has to offer points us to God, and draws us to social justice work that allows others to share in the harvest. |
Tainan or T'ai-nan (both: tĪˈnänˈ), city (1994 pop. 706,811), W central Taiwan, on the Taiwan Strait. The fourth largest city of Taiwan, it has industries producing metals, textiles, machinery, processed foods, and handicrafts. It is also a center for the marketing and processing of sugarcane, rice, peanuts, and salt, and there is an important fishing industry. Settled in 1590, Tainan is the oldest city of Taiwan. It was taken over by the Dutch and used as their headquarters from 1624 to 1662. It then became the island's capital under Koxinga and his son. Called Taiwan or Taiwanfu, it remained the political center of the island until the transfer of government to Taipei in 1885, when the city was renamed Tainan. A cultural center, it has many temples, the shrine of Koxinga, and a modern college of engineering.
|
This Day In History Sept 19
1180 Death of King Louis VII of France (b. 1120)
1180 Philip Augustus becomes king of France.
1437 Farmer uprising in Transylvania.
1454 In the Battle of Chojnice, Polish army is defeated by Teutonic army during the Thirteen Years’ War.
1502 Christopher Columbus lands at Costa Rica on his fourth, and final, voyage.
1505 Birth of Maria of Austria, wife of Louis II of Hungary and Bohemia (d. 1558)
1544 Charles V of Germany and Francis I of France sign peace treaty.
1573 Spanish attack Alkmaar.
1587 Birth of Francesca Caccini, Italian composer (d. circa 1640)
1598 Death of Toyotomi Hideyoshi, Japanese warlord (b. 1536)
1630 Death of Melchior Klesl, Austrian cardinal and statesman (b. 1552)
1634 Anne Hutchinson arrives in the New World.
1635 Emperor Ferdinand II declares war on France.
1643 Birth of Gilbert Burnet, Scottish Bishop of Salisbury (d. 1715)
1663 Death of St Joseph of Cupertino, Italian saint (b. 1603)
1675 Death of Charles IV, Duke of Lorraine (b. 1604)
1679 New Hampshire becomes a county of the Massachusetts Bay Colony.
1684 Birth of Johann Gottfried Walther Erfurt Germany, composer/Musicographer.
1709 Birth of Dr Samuel Johnson writer (Boswell’s tour guide).
1718 Birth of Nikita Ivanovich Panin, Russian statesman (d. 1783)
1721 Death of Matthew Prior, English poet and diplomat (b. 1664)
1722 Death of André Dacier, French classical scholar (b. 1651)
1733 Birth of George Read lawyer/signed Declaration of Independence.
1739 The Treaty of Belgrade is signed; Austria cedes Belgrade to the Ottoman Empire.
1750 Birth of Tomas de Iriarte, Spanish writer (d. 1791)
1752 Birth of Adrien-Marie Legendre mathematician, worked on elliptic integrals.
1755 Fort Ticonderoga, New York is opened.
1759 British troops capture Québec City during the French and Indian War.
1760 The town (later city) of Mayagüez, Puerto Rico is founded.
1765 Birth of Pope Gregory XVI (d. 1846)
1769 The Boston Gazette reports the first American-built piano (a spinet).
1779 Birth of Joseph Story Massachusetts, US Supreme Court justice (1812-45).
1783 Death of Leonhard Euler, Swiss mathematician (b. 1707)
1786 Birth of Justinus Kerner, German poet, medical writer (d. 1862)
1789 American government takes out first ever loan, a total of $191,608.81.
1792 Death of August Gottlieb Spangenberg, German religious leader (b. 1704)
1793 George Washington lays the cornerstone of the US Capitol building.
1794 Austrian army is decisively defeated at the battle of Sprimont.
1809 Royal Opera House in London opens.
1809 The Royal Opera House in London opens.
1810 Chile declares independence from Spain.
1810 First Government Junta in Chile. Though supposed to rule only in the absence of the king, it was in fact the first step towards independence from Spain, and it is commemorated as such.
1811 English expeditionary army conquers Dutch Indies.
1812 Birth of Herschel Vespasian Johnson, American politician (d. 1880)
1812 Fire in Moscow destroys 90% of houses & 1,000 churches.
1819 Birth of Jean-Bernard-Léon Foucault his pendulum proved Earth rotates.
1827 Death of Robert Pollok, Scottish poet (b. 1789)
1830 A horse beats the first US-made locomotive in a race near Baltimore, Maryland.
1838 Anti-Corn Law League established by Richard Cobden.
1838 Birth of Anton Mauve, Dutch artist (d. 1888)
1842 The first edition of the Pittsburgh Post-Gazette is published.
1850 The U.S. Congress passes the Fugitive Slave Act.
1851 The New-York Daily Times, which will become The New York Times, begins publishing.
1857 Birth of John Hessin Clarke, U.S. Supreme Court Justice (d. 1945)
1858 Birth of Kate Booth, the oldest daughter of William and Catherine Booth (d. 1955)
1860 Death of Joseph Locke, English railway builder and civil engineer (b. 1805)
1862 General Robert E. Lee’s army pulls away from Antietam Creek.
1863 American Civil War: Battle of Chickamauga
1863 Birth of Hermann Kutter, Swiss theologian (d. 1931)
1866 The Grand Masonic Lodge of Nevada lays the cornerstone for the Carson City Mint.
1868 First convocation of the University of the South in Sewanee, Tennessee.
1870 Birth of Clark Wissler anthropologist (American Indian).
1870 Old Faithful Geyser is observed and named by Henry D. Washburn.
1872 King Oscar II accedes to the throne of Sweden-Norway.
1873 The Panic of 1873 begins.
1876 Birth of James Scullin, ninth Prime Minister of Australia (d.1953)
1881 Chicago Tribune reports on a televide experiment.
1882 Pacific Stock Exchange opens (as the Local Security Board).
1883 Birth of Lord Berners (Gerald Tyrwhitt) England, composer (first Childhood).
1885 Riots break out in Montreal to protest compulsory smallpox vaccination.
1888 Start of the Sherlock Holmes adventure “The Sign of Four”.
1889 Birth of Doris Blackburn, Australian politician (d. 1970)
1891 Death of William Ferrel, American mathematician (b. 1817)
1893 Birth of Arthur Benjamin Sydney Australia, composer (Jamaican Rumba).
1895 Birth of John G Diefenbaker Neustadt Ontario, 13th Canadian Prime Minister (Conservative) (1957-63).
1895 Booker T Washington delivers “Atlanta Compromise” address.
1895 Daniel David (D.D.) Palmer of Davenport, Iowa, makes the first chiropractic adjustment.
1896 Death of Hippolyte Fizeau, French physicist (b. 1819)
1898 Lord Kitchener’s ships reach Fashoda, Sudan.
1901 Birth of Harold Clurman producer/director (Deadline at Dawn).
1903 Phillie’s Chick Fraser no-hits Chicago Cubs, 10-0.
1904 Completion of the first crossing of the Canadian Rockies in an automobile.
1905 Birth of Agnes De Mille New York City, choreographer (Oklahoma).
1905 Death of George MacDonald, Scottish writer and minister (b. 1824)
1905 Electric tramline opens in Rotterdam.
1906 A typhoon with tsunami kills an estimated 10,000 people in Hong Kong.
1907 Birth of Leon Askin, Austrian actor (d. 2005)
1908 Cleveland Indian Bob “Dusty” Rhoades no-hits Boston, 2-1.
1910 In Amsterdam, 25,000 demonstrate for general male/female suffrage.
1911 Birth of Syd Howe, Canadian hockey player (d. 1976)
1911 Britain’s first twin-engine airplane (Short S.39) test flown.
1914 The Irish Home Rule Bill becomes law, but is delayed until after World War I.
1914 World War I: South African troops land in German South West Africa.
1914 World War I: The Battle of Aisne ends with Germans beating French.
1915 Birth of Ethel Greenglass in New York, USA; secretary for U.S. Army Signal Corps, executed for spying.
1915 Boston Braves trounce Saint Louis Cardinals 20-1.
1916 Birth of John J Rhodes (Representative-Republican-Arizona).
1917 Birth of June Foray, American voice actress
1918 Birth of John Berger, English politician
1919 Hurricane tides 16 feet above normal drown 280 along Gulf Coast.
1919 The Netherlands gives women the right to vote.
1920 Birth of Jack Warden in Newark, New Jersey, USA; actor (NYPD, Crazy Like a Fox, Norby).
1922 Birth of Ray Steadman-Allen, English composer
1922 Charles Ruijs de Beerenbrouck re-elected in the Netherlands.
1922 Hungary admitted to League of Nations.
1923 Birth of Peter Smithson, English architect (d. 2003)
1924 Death of Francis Herbert Bradley, British philosopher (b. 1846)
1925 Birth of Harvey Haddix, baseball player
1926 Birth of Bud Greenspan, American film producer and director
1926 Hurricane hits Miami and south Florida, USA, destroying hotels, piers, marinas, mansions built in preceding years. 400 killed, 50,000 made homeless.
1927 Birth of Bob Toski, American golfer
1927 Columbia Broadcasting System goes on the air in the USA (16 radio stations).
1928 Birth of Phyllis Kirk, American actress
1928 Juan de la Cierva makes the first rotorcraft crossing of the English Channel in his autogiro.
1928 Walt Disney’s “Mickey Mouse” trademark application is granted.
1930 New York Yankees’ pitcher Red Ruffing hits two home runs to beat Saint Louis Browns, 7-6.
1931 Geli Raubal is found shot dead in Adolf Hitler’s apartment.
1931 Japan stages the Mukden Incident as a pretext to occupy Manchuria.
1932 Actress Peg Entwistle commits suicide by jumping from the H in the Hollywood sign.
1932 Birth of Nikolai N Rukavishnikov; cosmonaut (Soyuz 10, 16, 33).
1933 Birth of Jimmie Rodgers in Washington, USA; country singer (“Honeycomb”).
1934 Saint Louis Browns’ player Bobo Newsom loses no-hitter to Boston Braves in 10 innings, 2-1.
1934 USSR admitted to League of Nations.
1938 Despite losing a double header, New York Yankees clinch pennant number 10.
1939 Death of Stanisław Ignacy Witkiewicz, Polish writer, painter, and photographer (b. 1885)
1939 Russian forces reach Vilna and Brest-Litovsk in Poland, and meet with German forces. A joint German-Soviet military commission meets to draft plans for partition.
1939 World War II: A German U-boat sinks the British aircraft carrier HMS Courageous.
1939 World War II: Polish government of Ignacy Mościcki flees to Romania.
1940 Birth of Frankie Avalon in Philadelphia, Pennsylvania, USA; actor (Beach movies), singer (“Venus”).
1940 Soviet Minister of Defence Marshal S.K. Timoshenko and Chief of General Staff K.A. Meretskov submit a war plan to Josef Stalin and Prime Minister Vyacheslav Molotov, proposing an attack on Germany north of the Pripet marshes, with a strong defence to the south, or vice-versa.
1940 World War II: Italian troops conquer Sidi Barrani.
1942 Canadian Broadcasting Corporation authorized for radio service.
1943 World War II: Hitler orders deportation of Danish Jews.
1943 World War II: The Jews of Minsk are massacred at Sobibór.
1944 Birth of Charles Lacy Veach in Chicago, Illinois, USA; astronaut (STS 39).
1944 World War II: British submarine HMS Tradewind torpedoes Junyo Maru, 5,600 killed.
1945 1000 whites walk out of Gary, Indiana, schools to protest integration.
1945 Gen. Douglas MacArthur moves his command headquarters to Tokyo.
1946 Birth of Rocío Jurado, Spanish singer and actress (d. 2006)
1946 Joe Louis knocks out Tami Mauriello in 1 round for the heavyweight boxing title.
1947 Country singers Ernest Tubb and Roy Acuff performed at Carnegie Hall in New York City, making it the venue’s first country performance.
1947 The United States Air Force becomes an independent service.
1947 The United States Department of Defense begins operation (formerly known as National Military Establishment).
1948 Birth of Ken Brett, baseball player (d. 2003)
1948 Communist Madiun-uprising in Dutch Indies.
1948 Margaret Chase Smith becomes the first woman elected to the Senate without completing another senator’s term when she defeats Democratic opponent Adrian Scolten.
1948 Ralph J Bunche confirmed as acting United Nations mediator in Palestine.
1949 Baseball major league record four grand slams hit.
1949 Birth of Jim McCrery, American politician
1949 Death of Frank Morgan, American actor (b. 1890)
1950 Birth of Shabana Azmi, Indian actress
1951 Birth of Benjamin Carson, American neurosurgeon
1952 Birth of Rick Pitino, American basketball coach
1954 Birth of Takao Doi, Japanese astronaut
1954 Cleveland Indians clinch American League pennant, beat Detroit Tigers (3-2).
1955 Ford produces 2,000,000th V8 engine.
1955 Toast of the Town becomes The Ed Sullivan Show.
1956 Birth of Peter Stastny, Slovak ice hockey player
1958 Birth of John Aldridge, Irish footballer
1959 Barbara Joanna Blakeley marries Zeppo Marx.
1959 Birth of Ryne Sandberg, baseball player
1959 Death of Benjamin Péret, French surrealist author
1959 Harvey Murray Glatman executed in a California gas chamber for murdering three young women in Los Angeles.
1959 Vanguard 3 launched into Earth orbit.
1961 Birth of James Gandolfini, American actor
1961 Death of Dag Hammarskjöld, Swedish United Nations Secretary-General, recipient of the Nobel Peace Prize (b. 1905)
1961 U.N. Secretary-General Dag Hammarskjöld is killed when his DC-6 crashes near Ndola, Northern Rhodesia (now Zambia), while flying to negotiate peace in the war-torn Katanga region of the Congo.
1961 USSR performs nuclear test at Novaya Zemlya
1962 Birth of Joanne Catherall, English singer
1962 Rwanda, Burundi, Jamaica, and Trinidad admitted (105th-108th) to the United Nations.
1962 The Fellowship (FGFCMI) founded in Dallas, Texas.
1963 Birth of Rob Brettle, English historian
1963 Final game at Polo Grounds, 1,752 see Philadelphia Phillies beat New York Mets 5-1.
1964 Birth of Marco Masini, Italian singer-songwriter
1964 Constantine II of Greece marries Danish princess Anne-Marie.
1964 Death of Clive Bell, English art critic (b. 1881)
1964 North Vietnamese Army begins infiltration of South Vietnam.
1965 Mickey Mantle plays in his 2000th game.
1965 The first episode of “I Dream of Jeannie” shown on NBC.
1966 Birth of Spike; vocal/guitar (Ian Spice Breathe, Flash Cadillac-R&R Forever).
1967 Birth of Tara Fitzgerald, English actress
1967 Death of John Cockcroft, British physicist, Nobel Prize laureate (b. 1897)
1967 Esporte Clube Santo André, from Brazil, is founded.
1968 Birth of Toni Kukoč, Croatian basketball player
1968 Ray Washburn (Saint Louis Cardinals) no-hits San Francisco Giants 2-0.
1970 Birth of Darren Gough, English cricketer
1970 Death of Jimi Hendrix, American rock guitarist, at age 27 in London, England (b. 1942)
1971 Birth of Lance Armstrong, American cyclist
1972 Death of Robert Faesi in Zollikon, Switzerland; playwright, poet, author, professor of German literature at the University of Zürich.
1972 First Ugandans expelled by Idi Amin arrive in the UK.
1973 Birth of James Marsden, American actor
1973 East and West Germany are admitted to the United Nations.
1974 Actress Doris Day wins a $22.8 million malpractice suit against her former lawyer.
1974 Birth of Sol Campbell, English footballer
1974 Hurricane Fifi strikes Honduras with 110 mph winds, 5,000 die.
1975 Birth of Anthony McPartlin, English television presenter
1975 FBI captures heiress/bank robber Patricia (Patty) Campbell Hearst in San Francisco, California, after a year on the FBI Most Wanted List.
1976 Dom Mintoff re-elected in Malta.
1976 Mao Tse Tung’s funeral takes place in Beijing.
1976 Rev. Sun Myung Moon holds “God Bless America” convention.
1977 Birth of Li Tie, Chinese footballer
1977 Death of Paul Bernays, Swiss mathematician (b. 1888)
1977 US Voyager I takes the first space photograph of the Earth and Moon together.
1978 Leaders of Israel and Egypt reach a settlement for the Middle East at Camp David.
1979 Birth of Alison Lohman, American actress
1979 Bolshoi Ballet dancers Leonid and Valentina Kozlov defect.
1980 Death of Katherine Anne Porter, American novelist
1980 Soyuz 38 carries two cosmonauts (one Cuban) to Salyut 6 space station.
1981 A museum honoring former U.S. President Ford is dedicated in Grand Rapids, Michigan.
1981 France abolishes capital punishment.
1981 Georgia General Assembly approved a joint resolution proposing a new constitution for the state.
1982 Birth of Lukas Reimann; Swiss politician.
1982 Christian militia begin massacre of 600 Palestinians in Lebanon.
1983 George Meegen completes 2,426-day (19,000 miles) walk across Western Hemisphere.
1983 New Orleans Saints’ first overtime victory; beating Chicago Bears 34-31.
1984 Detroit Tigers become fourth team to stay in first place from opening day.
1984 In Turkey, a magnitude 6.4 earthquake occurs. Three people killed, 38 injured, and 75,000 houses destroyed or damaged in the Olur-Senkaya area.
1984 Joe Kittinger completes the first solo balloon crossing of Atlantic.
1984 Off the east coast of Honshu, Japan, a magnitude 6.9 earthquake occurs.
1984 The 39th session of the U.N. General Assembly was opened with an appeal to the U.S. and Soviet Union to resume arms negotiations.
1985 Steve Jobs resigns from Apple Computer.
1986 Birth of Keeley Hazell, British model.
1986 Motorola announces the Motorola 68030 microprocessor. It incorporates about 300,000 transistors.
1987 Detroit Tigers’ Darrell Evans is first 40-year-old to hit 30 home runs.
1987 Ronald Reagan announces joint destruction of nuclear war heads by USA and USSR.
1988 Birth of Annette Obrestad; Norwegian poker player.
1988 Burma suspends its constitution.
1988 Death of Mohammad-Hossein Shahriar, Iranian Azari poet (born 1906).
1989 Charles Keating jailed in Los Angeles after being indicted on criminal fraud charges concerning savings and loans.
1989 Hurricane Hugo hits Puerto Rico, killing six and causing extensive damage.
1989 Ontario NDP Leader Bob Rae arrested with 15 others in Temagami Wilderness Society anti-logging blockade.
1990 A 500-pound 6-foot chocolate Hershey Kiss is displayed at 1 Times Square, New York City.
1990 In Tokyo, Japan, the International Olympic Committee chooses Atlanta, Georgia, to host the 1996 Summer Olympic Games.
1990 Liechtenstein becomes a member of the United Nations.
1992 Nine formerly pro-Albanian Marxist-Leninist parties hold a conference in Strassburg, Germany.
1992 The existence of the National Reconnaissance Office, operating since 1960, is declassified.
1992 Undaunted by his earlier withdrawal, supporters of U.S. presidential candidate H. Ross Perot succeed in getting his name on the ballot in all 50 states.
1994 Death of Vitas Gerulaitis, American tennis player (b. 1954)
1994 Haiti’s military leaders agreed to depart on October 15th.
1994 National Party of Canada collapses due to party infighting.
1996 Okinawans vote to have the USA remove its 28,000 troops stationed on their island.
1997 Death of Jimmy Witherspoon, blues singer (born 1920).
1997 Voters in Wales vote yes (50.3%) in a referendum on devolution and the formation of a National Assembly.
1998 ICANN is formed.
1998 The Food and Drug Administration approves a once-a-day easier-to-swallow medication for AIDS patients.
2001 Death of Ernie Coombs, Canadian entertainer (b. 1927)
2001 First mailing of anthrax letters from Trenton, New Jersey in the 2001 anthrax attacks.
2002 Death of Bob Hayes, American athlete (born 1942).
2003 Death of Emil Fackenheim, German Holocaust survivor and philosopher (b. 1916)
2003 Hurricane Isabel makes landfall as a Category 2 Hurricane on North Carolina’s Outer Banks. It will directly kill 16 people in the Mid-Atlantic area.
2003 The UK’s Local Government Act 2003, repealing Section 28, receives Royal Assent.
2004 Death of Norman Cantor, Canadian historian (born 1929).
2005 Death of Michael Park, British Rally co-driver (b. 1966)
2005 Federal elections in Germany. Leading Chancellor candidates are Gerhard Schröder (Social Democratic Party, 34.3 percent) and Angela Merkel (Christian Democratic Union, 27.8 percent).
2005 Swedish Church elections take place
2007 Former prime minister Benazir Bhutto returns to Pakistan after an eight-year self-imposed exile.
2007 The Federal Reserve cuts interest rates in the U.S. by half a point (0.5 percent) for the first time since 2006 to ease the ongoing panic in the financial markets due to the subprime mortgage crisis.
2008 Death of Mauricio Kagel, Argentine composer (born 1931).
2009 Death of Irving Kristol, American writer and political commentator (born 1920). |
resonator in Colours & sounds topic
From Longman Dictionary of Contemporary English: res‧o‧na‧tor /ˈrezəneɪtə $ -ər/ noun [countable] a piece of equipment that makes the sound of a musical instrument louder
Examples from the Corpus
• The mouth is used to modulate the volume, like a resonator.
• These rhythm pipes and rhythm sticks are usually hollowed out and function as resonators.
• But how could the advantages of the klystron with its enclosed resonators be combined with the more favourable geometry of the magnetron?
• The magnetic resonator tests show no damage.
• I have an old Dobro - it's one of those wooden ones with a metal resonator.
• We proceed, therefore, to a brief survey of the relevant aspects of resonator theory { 24,34 }.
• This last degeneracy does not occur in a Fabry-Perot resonator, in which the light bounces between parallel mirrors.
• The characteristic rich, booming tone of this instrument is due to the length and large diameter of the resonators. |
Dynamic Kernels: Discovery
This article, the second of five, introduces part of the actual code to create a custom module implementing a character device driver. It describes the code for module initialization and cleanup, as well as the open and close system calls.
Using Minor Numbers
In the last article I introduced the idea of minor device numbers, and it is now high time to expand on the topic.
If your driver manages multiple devices, or a single device but in different ways, you'll create several nodes in the /dev directory, each with a different minor number. When your open function gets invoked, then, you can examine the minor number of the node being opened, and take appropriate actions.
The prototypes of your open and close functions are
int skel_open (struct inode *inode,
struct file *filp);
void skel_close (struct inode *inode,
struct file *filp);
and the minor number (an unsigned value, currently 8 bits) is available as MINOR(inode->i_rdev). The MINOR macro and the relevant structures are defined within <linux/fs.h>, which in turn is included in <linux/sched.h>.
Our skel code (Listing 3) will split the minor number in order to manage both multiple boards (using four bits of the minor), and multiple modes (using the remaining four bits). To keep things simple we'll only write code for two boards and two modes. The following macros are used:
#define SKEL_BOARD(dev) (MINOR(dev)&0x0F)
#define SKEL_MODE(dev) ((MINOR(dev)>>4)&0x0F)
The nodes will be created with the following commands (within the skel_load script, see last month's article):
mknod skel0 c $major 0
mknod skel0raw c $major 1
mknod skel1 c $major 16
mknod skel1raw c $major 17
But let's turn back to the code. This skel_open() sorts out the minor number and folds any relevant information inside the filp, in order to avoid further overhead when read() or write() is invoked. This goal is achieved by using a Skel_Clientdata structure embedding any filp-specific information, and by changing the pointer to your fops within the filp; namely, filp->f_op.
Changing values within filp may appear to be bad practice, and it often is; it is, however, a smart idea where the file operations are concerned. The f_op field points to a static object anyway, so you can modify it lightheartedly, as long as it points to a valid structure; any subsequent operation on the file will be dispatched using the new jump table, thus avoiding a lot of conditionals. This technique is used within the kernel proper to implement the different memory-oriented devices using a single major device number.
The complete skeletal code for open() and close() is shown in Listing 3; the flags field in the clientdata will be used when ioctl() is introduced.
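Listing 3 itself is not reproduced in this excerpt. The fragment below is only a minimal sketch of how such an open()/close() pair might look, written against the kernel interfaces described in the text; the names Skel_Clientdata, skel_fops and skel_fops_raw are assumptions for illustration, not the actual listing, and header locations (for example <linux/malloc.h> versus <linux/slab.h>) differ between kernel versions.

#include <linux/fs.h>       /* struct inode, struct file, MINOR() */
#include <linux/errno.h>
#include <linux/malloc.h>   /* kmalloc()/kfree(); <linux/slab.h> on later kernels */

#define SKEL_BOARD(dev) (MINOR(dev)&0x0F)
#define SKEL_MODE(dev)  ((MINOR(dev)>>4)&0x0F)

struct Skel_Clientdata {    /* hypothetical per-open data, kept in filp->private_data */
    int board;
    int mode;
    int flags;              /* reserved for the ioctl() discussion */
};

extern struct file_operations skel_fops, skel_fops_raw;  /* one jump table per mode (assumed names) */

int skel_open(struct inode *inode, struct file *filp)
{
    int board = SKEL_BOARD(inode->i_rdev);
    int mode  = SKEL_MODE(inode->i_rdev);
    struct Skel_Clientdata *data;

    if (board >= 2 || mode >= 2)      /* skel only handles two boards and two modes */
        return -ENODEV;

    data = kmalloc(sizeof(*data), GFP_KERNEL);
    if (!data)
        return -ENOMEM;
    data->board = board;
    data->mode  = mode;
    data->flags = 0;
    filp->private_data = data;

    /* Dispatch every later call through the mode-specific jump table,
       so read() and write() need no conditionals. */
    filp->f_op = mode ? &skel_fops_raw : &skel_fops;
    return 0;
}

void skel_close(struct inode *inode, struct file *filp)
{
    kfree(filp->private_data);        /* release the per-open data */
    filp->private_data = NULL;
}

With the dispatch done once at open time, the normal and raw behaviours live in two ordinary fops tables and never have to test the minor number again.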
Note that the close() function shown here should be referred to by both fops structures. If different close() implementations are needed, this code must be duplicated.
Multiple- or Single-open?
A device driver should be a policy-free program, because policy choices are best left to the application. Actually, the habit of separating policy and mechanism is one of the strong points of Unix. Unfortunately, the implementation of skel_open() lends itself to policy issues: is it correct to allow multiple concurrent opens? If yes, how can I handle concurrent access in the driver?
Both single-open and multiple-open have sound advantages. The code shown for skel_open() implements a third solution, somewhat in-between.
If you choose to implement a single-open device, you'll greatly simplify your code. There's no need for dynamic structures because a static one will suffice; thus, there's no risk of memory leaks caused by your driver. In addition, you can simplify your select() and data-gathering implementation because you're always sure that a single process is collecting your data. A single-open device uses a boolean variable to know if it is busy, and returns -EBUSY when open is called the second time. You can see this simplified code in the busmouse drivers and the lp driver within the kernel proper.
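A single-open open()/close() pair reduces to little more than a busy flag; the sketch below shows the idea (skel_busy is an illustrative name, not taken from the article).

#include <linux/fs.h>
#include <linux/errno.h>

static int skel_busy = 0;            /* non-zero while the device is held open */

int skel_open(struct inode *inode, struct file *filp)
{
    if (skel_busy)
        return -EBUSY;               /* refuse a second open */
    skel_busy = 1;
    return 0;
}

void skel_close(struct inode *inode, struct file *filp)
{
    skel_busy = 0;                   /* device is available again */
}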
A multiple-open device, on the other hand, is slightly more difficult to implement, but much more powerful to use for the application writer. For example, debugging your applications is simplified by the possibility of keeping a monitor constantly running on the device, without the need to fold it in the application proper. Similarly, you can modify the behaviour of your device while the application is running, and use several simple scripts as your development tools, instead of a complex catch-all program. Since distributed computation is common nowadays, if you allow your device to be opened several times, you are ready to support a cluster of cooperating processes using your device as an input or output peripheral.
The disadvantages of using a conventional multiple-open implementation are mainly in the increased complexity of the code. In addition to the need for dynamic structures (like the private_data already shown), you'll face the tricky points of a true stream-like implementation, together with buffer management and blocking and non-blocking read and write; but those topics will be delayed until next month's column.
At the user level, a disadvantage of multiple-open is the possibility of interference between two non-cooperating processes: this is similar to cat-ing a tty from another tty—input may be delivered to the shell or to cat, and you can't tell in advance. [For a demonstration of this, try this: start two xterms or log into two virtual consoles. On one (A), run the tty command, which tells you which tty is in use. On the other (B), type cat /dev/tty_of_A. Now go to A and type normally. Depending on several things, including which shell you use, it may work fine. However, if you run vi, you will probably see what you type coming out on B, and you will have to type ^C on B to be able to recover your session on A—ED]
A multiple-open device can be accessed by several different users, but often you won't want to allow different users to access the device concurrently. A solution to this problem is to look at the uid of the first process opening the device, and allow further opens only to the same user or to root. This is not implemented in the skel code, but it's as simple as checking current->euid, and returning -EBUSY in case of mismatch. As you see, this policy is similar to the one used for ttys: login changes the owner of ttys to the user that has just logged in.
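The uid check just described amounts to a few extra lines in open(). In the sketch below, skel_count and skel_owner are assumed static variables, and the euid field of the current task is accessed as it was laid out in kernels of that era.

#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/sched.h>             /* the 'current' task pointer */

static int skel_count = 0;           /* how many times the device is currently open */
static unsigned short skel_owner;    /* euid of the first opener */

int skel_open(struct inode *inode, struct file *filp)
{
    if (skel_count &&
        current->euid != skel_owner && current->euid != 0)
        return -EBUSY;               /* a different, non-root user: refuse */
    if (!skel_count)
        skel_owner = current->euid;  /* the first opener claims the device */
    skel_count++;
    return 0;
}

void skel_close(struct inode *inode, struct file *filp)
{
    skel_count--;
}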
The skel implementation shown here is a multiple-open one, with a minor addition: it assures that the device is “brand new” when it is first opened, and it shuts the device down when it is last closed.
This implementation is particularly useful for those devices which are accessed quite rarely: if the frame grabber is used once a day, I don't want to inherit strange setting from the last time it was used. Similarly, I don't want to wear it out by continuously grabbing frames that nobody is going to use. On the other hand, startup and shutdown are lengthy tasks, especially if the IRQ has to be detected, so you might not choose this policy for your own driver. The field usecount within the hardware structure is used to turn on the device at the first open, and to turn it off on the last close. The same policy is devoted to the IRQ line: when the device is not being used, the interrupt is available to other devices (if they share this friendly behaviour).
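In outline, the usecount policy looks like this. The skel_hw structure and the skel_power_on(), skel_power_off(), skel_grab_irq() and skel_release_irq() helpers are placeholders, not names from the article (the last two would wrap request_irq()/free_irq(), whose exact prototypes changed between kernel versions); the real driver keeps one such structure per board.

#include <linux/fs.h>
#include <linux/errno.h>

struct skel_hw {
    int usecount;                    /* how many openers this board currently has */
    int irq;
};
static struct skel_hw hw;

extern int  skel_grab_irq(struct skel_hw *hw);     /* placeholder: wraps request_irq() */
extern void skel_release_irq(struct skel_hw *hw);  /* placeholder: wraps free_irq() */
extern void skel_power_on(struct skel_hw *hw);
extern void skel_power_off(struct skel_hw *hw);

int skel_open(struct inode *inode, struct file *filp)
{
    if (hw.usecount == 0) {          /* first opener: bring the board up */
        if (skel_grab_irq(&hw))
            return -EBUSY;           /* the interrupt line is in use elsewhere */
        skel_power_on(&hw);
    }
    hw.usecount++;
    return 0;
}

void skel_close(struct inode *inode, struct file *filp)
{
    if (--hw.usecount == 0) {        /* last close: shut the board down */
        skel_power_off(&hw);
        skel_release_irq(&hw);       /* give the IRQ back to other devices */
    }
}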
The disadvantages of this implementation are the overhead of the power cycles on the device (which may be lengthy) and the inability to configure the device with one program in order to use it with another program. If you need a persistent state in the device, or want to avoid the power cycles, you can simply keep the device open by means of a command as silly as this:
sleep 1000000 < /dev/skel0 &
As it should be clear from the above discussion, each possible implementation of the open() and close() semantics has its own peculiarities, and the choice of the optimum one depends on your particular device and the main use it is devoted to. Development time may be considered as well, unless the project is a major one. The skel implementation here may not be the best for your driver: it is only meant as a sample case, one amongst several different possibilities.
Alessandro Rubini ([email protected]) Programmer by chance and Linuxer by choice, Alessandro is taking his PhD course in computer science and is breeding two small Linux boxes at home. Wild by his very nature, he loves trekking, canoeing, and riding his bike.
Where are functions defined
In which file are the functions declared in struct file_operations, e.g. open() and read(), defined?
Wherever You Like
Those functions have to be supplied by you. So you can put them wherever seems most appropriate for your code.
Mitch Frazier is an Associate Editor for Linux Journal. |
The Facts
Malaria is a parasitic infection spread by Anopheles mosquitoes. The Plasmodium parasite that causes malaria is neither a virus nor a bacterium – it is a single-celled parasite that multiplies in red blood cells of humans as well as in the mosquito intestine.
When the female mosquito feeds on an infected person, male and female forms of the parasite are ingested from human blood. Subsequently, the male and female forms of the parasite meet and mate in the mosquito gut, and the infective forms are passed onto another human when the mosquito feeds again.
Malaria is a significant global problem. There are approximately 216 million cases of the disease worldwide, killing about 655,000 people every year. Malaria is prevalent in Africa, Asia, the Middle East, Central and South America, Hispaniola (Haiti and the Dominican Republic), and Oceania (Papua New Guinea, Irian Jaya, and the Solomon Islands). However, malaria is most prevalent in Africa, where 60% of all cases are reported. In Canada, malaria is most often caused by travel to and from endemic areas.
Each year, up to 1 million Canadians travel to malaria-endemic areas. This results in 350 to 1,000 annual cases of malaria in Canada and 1 to 2 deaths per year.
Although the parasite has progressively developed resistance to several older anti-malarial medications, there are still many safe and effective medications both for treatment and prevention.
There are four species of the Plasmodium parasite that can cause malaria in humans: P. falciparum, P. vivax, P. ovale, and P. malariae. The first two types are the most common. Plasmodium falciparum is the most dangerous of these parasites because the infection can kill rapidly (within several days), whereas the other species cause illness but not death. Falciparum malaria is particularly frequent in sub-Saharan Africa and Oceania.
You can only get malaria if you're bitten by an infected mosquito, or if you receive infected blood from someone during a blood transfusion. Malaria can also be transmitted from mother to child during pregnancy.
The mosquitoes that carry Plasmodium parasite get it from biting a person or animal that's already been infected. The parasite then goes through various changes that enable it to infect the next creature the mosquito bites. Once it's in you, it multiplies in the liver and changes again, getting ready to infect the next mosquito that bites you. It then enters the bloodstream and invades red blood cells. Eventually, the infected red blood cells burst. This sends the parasites throughout the body and causes symptoms of malaria.
Malaria has been with us long enough to have changed our genes. The reason why many people of African descent suffer from the blood disease sickle cell anemia is because the gene that causes it also confers some immunity to malaria. In Africa, people with a sickle cell gene are more likely to survive and have children. The same is true of thalassemia, a hereditary disease found in people of Mediterranean, Asian, or African American descent. (See the article on "Anemia" for more information.)
Symptoms and Complications
Symptoms usually appear about 12 to 14 days after infection. People with malaria have the following symptoms:
• abdominal pain
• chills and sweats
• diarrhea, nausea, and vomiting (these symptoms only appear sometimes)
• headache
• high fevers
• low blood pressure causing dizziness if moving from a lying or sitting position to a standing position (also called orthostatic hypotension)
• muscle aches
• poor appetite
In people infected with P. falciparum, the following symptoms may also occur:
• anemia caused by the destruction of infected red blood cells
• extreme tiredness, delirium, unconsciousness, convulsions, and coma
• kidney failure
• pulmonary edema (a serious condition where fluid builds up in the lungs, which can lead to severe breathing problems)
P. vivax and P. ovale can lie inactive in the liver for up to a year before causing symptoms. They can then remain dormant in the liver again and cause later relapses. P. vivax is the most common type in North America.
Making the Diagnosis
You may have malaria if you have any fever during or after travel in malarial regions. See a doctor quickly, and get your blood tested to check if the parasite is present. The doctor will also check to see if you have an enlarged spleen, which sometimes accompanies the fever of malaria. Don't wait to get home for treatment if you get malaria abroad.
Plasmodium parasites in the blood are usually visible under the microscope. There are also simple dipstick tests (done by dipping a piece of paper with chemicals on it into your blood) that can be used to identify P. falciparum. Blood tests as well as liver and kidney function tests will be done to check the effects of the parasite on your body.
Treatment and Prevention
If recognized early, malaria infection can be completely cured. You may be treated as an outpatient. The medication chosen by your doctor depends on:
• the type of malaria (knowing the species of parasite will help your doctor choose the most appropriate medication for you or determine whether hospitalization is necessary)
• the area you travelled to or visited when you contracted malaria (the doctor needs to know this because in certain geographical locations the malaria is resistant to some medications)
• the severity of the illness
• your medical history
• if you are pregnant
Treatment usually lasts for 3 to 7 days, depending on the medication type. To get rid of the parasite, it's important to take the medication for the full length of time prescribed – don't stop taking the medication even if you feel better. If you experience any side effects, your doctor can recommend ways to manage them or may choose to give you a different medication.
If you're travelling to a malarial region, you should take a course of preventive treatment. Medications similar to those used to cure malaria can prevent it if taken before, during, and after your trip. It's vital to take your medication as prescribed, even after you return home.
Before travelling, check with your doctor or travel clinic about the region's malaria status. Risk of infection also depends on:
• altitude (lower altitudes have higher risk)
• camping vs. hotel stay
• length of stay
• rural vs. urban areas (rural areas have higher risk)
• season (infection is more common during the rainy season)
• time of day (night is worse)
Since mosquitoes are night feeders, stay away from danger zones – particularly fields, forests, and swamps – from dusk to dawn to avoid being bitten. Use permethrin-treated mosquito netting when sleeping. Using mosquito coils and aerosolized insecticides containing pyrethroids may also help improve protection during this time.
Wear long sleeves and pants, and light-coloured clothing. Put mosquito repellent containing DEET on exposed skin. Use products containing up to 30% DEET for adults and children over 12 years – higher concentrations can have serious side effects, especially in children. Children 12 years old and younger should use products containing 10% DEET or less. Do not apply more than 3 times a day on children 2 to 12 years old. For children aged 6 months to 2 years, apply a product containing 10% DEET or less no more than once a day. Effects of DEET last 4 to 6 hours. DEET and sunscreen combinations are not recommended. If sunscreen is needed, apply the sunscreen first, wait 20 minutes, and then apply DEET.
CAFTA: What Could It Mean for Migration?
A number of factors can influence the connection between free trade and migration, particularly between developed and less developed countries. Yet it is believed that more trade can lead to less international migration as the less-developed country increases its exports and its economy expands.
However, trade and economic opportunities in general do not automatically spur development in poor nations where migration can mean a search for well being driven by economics as well as deeply rooted cultural, social, and psychological causes.
Indeed, in the case of Central America, the reasons for high rates of emigration to the United States began with devastating civil wars in the late 1970s and early 1980s but have become more complex over time (see Central America: Crossroads of the Americas).
The Central America Free Trade Agreement (CAFTA) between the United States, five Central American countries, and the Dominican Republic may be the most important economic event in the region in 20 years. The agreement could allow some of the poorest Central American countries to combine their existing benefit from remittances (one of the "profits" of migration) with the job creation and investment opportunities free trade can offer.
CAFTA Basics
CAFTA, after a few years of discussion, took a leap forward in August 2004 when Guatemala, El Salvador, Honduras, Nicaragua, Costa Rica, and, later, the Dominican Republic, agreed to establish with the United States stable and permanent regulations, based on reciprocity, for trading both goods and services in areas such as telecommunications, finance, insurance, and consulting, among others.
The new rules also would make it easier for U.S. companies and others from the rest of the world to invest in CAFTA countries, and CAFTA countries would see improvements in current tariff preferences, including the elimination of U.S. quotas for all products except sugar.
The CAFTA process started when Costa Rica, which signed free-trade agreements with Mexico in 1995 and Canada in 2002, proposed bilateral trade negotiations with the United States. However, the United States was only willing to negotiate with a bloc of Central American countries.
CAFTA's implementation requires every country's legislature to ratify the agreement; only Costa Rica's legislature has not done so (the U.S. Congress approved CAFTA in July 2005). However, CAFTA came into force for El Salvador on March 1, 2006, because the Office of the U.S. Trade Representative (USTR) determined that El Salvador had taken sufficient steps to complete its commitments under the agreement, including adopting new laws and regulations where necessary.
The U.S. government will put CAFTA into effect on a rolling basis for the remaining signatory countries — Costa Rica, the Dominican Republic, Guatemala, Honduras, and Nicaragua — as they meet the agreement's requirements, for which they have a maximum period of two years as of March 1, 2006.
Arguments for and Against CAFTA
With their high rates of poverty, CAFTA's Central American member countries — including leader-of-the-pack Costa Rica — are in need of foreign direct investment (see Table 1). The World Bank and others believe CAFTA would make it easier for Central America to obtain capital and that free trade would also modernize economies and societies as well as reduce poverty.
Table 1. Economic Indicators in Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua (2004)
GDP per capita (in U.S. dollars): Costa Rica 9,606; El Salvador 4,781; Guatemala 4,184; Honduras 2,665; Nicaragua 3,262
Percent below national poverty line: Costa Rica 22%; El Salvador 48.3%; Guatemala 56.2%; Honduras 53%; Nicaragua 47.9%
Human Development Index Ranking (from 1 to 177)
Source: UN Human Development Report, 2004
Also, CAFTA could strengthen an already solid trade relationship. No other market in the world is as important to Central America as the United States. In 2001, the value of trade between the United States and the six CAFTA countries totaled US$32 billion, US$15 billion in exports and US$17 billion in imports — more than the trade between the United States and Russia, India, and Indonesia together. Over 15,000 U.S. companies have operations in the region, and Central America, with over 40 million consumers, is the second-largest market in Latin America after Mexico.
Not surprisingly, the agreement has strong opponents. In Central America, some believe it would exacerbate poverty and hurt small farmers. Still others believe it would make medicines more expensive, cripple social security systems and impair public services like insurance and telecommunications for low-income consumers.
Central American Integration
Coordinated trade and integration measures date back to December 1960, when four countries — El Salvador, Nicaragua, Guatemala, and Honduras — established the Central American Common Market (CACM, Mercado Común Centroamericano) trade organization; Costa Rica joined two years later. CACM's aim was to set up a common market and customs union within five years. In the years leading up to CACM, a number of bilateral trade agreements between Central American countries had been signed.
The United States, interested in countering the threat of communist Cuba, was eager to see economic integration and development in Latin America. In early 1961, U.S. President John F. Kennedy announced the Alliance for Progress, an ambitious, 10-year plan that sought to bring to all the people of the Americas homes, work, land, health, and schools.
At this time, there was minimal migration from Central America to the United States with the exception of Honduran migration to New Orleans. This migration flow began with the close trading relationship between New Orleans-based Standard Fruit Company and banana growers in Honduras. By mid century, the relationship had led to the settlement of many Hondurans of all socioeconomic backgrounds. In 1970, Hondurans represented 12.8 percent of Louisiana's foreign-born population.
Although Central American countries saw improvement in some economic indicators during the 1960s, by the early 1970s both CACM and the Alliance for Progress, for various reasons, had fallen short of their long-term goals of creating greater economic and political cooperation (CACM) and in bringing Latin America out of poverty and establishing democratic governments (Alliance for Progress).
CACM collapsed in 1969 because of war between Honduras and El Salvador but was reestablished in 1991, once the region had become more peaceful. The Organization of American States (OAS), a regional agency whose members include all countries in the Western Hemisphere, dismantled its permanent committee on the Alliance for Progress in the 1970s.
By the 1980s, Central American countries had begun a process to open trade with the rest of the world, dismantling unilaterally the high tariffs crafted for CACM 20 years earlier.
In 1983, 24 countries around the Caribbean basin, including five in Central America and some in South America, signed the Caribbean Basin Initiative (CBI) with the United States. CBI allowed these countries to export commodities, including fresh produce, fresh and frozen seafood, specialty foods, medical and surgical supplies, and other goods to the United States without paying tariffs. The program also included U.S. government assistance for economic development.
CBI was expanded in 2000 to include the U.S.-Caribbean Basin Trade Partnership Act (CBTPA), which allows "import sensitive" articles such as apparel and footwear to enter the United States free of quota and duty. Because, in 1994, the North America Free Trade Agreement (NAFTA) gave advantages to Mexico in areas such as apparel, CBI countries experienced some disinvestment and slower growth of their exports. CBTPA is meant to level the playing field with Mexico and also to help countries devastated by Hurricanes Mitch and Georges in 1998.
CBI and the CBTPA provisions have been very successful in promoting new and more Central American exports to the U.S. market, which in turn has created many local jobs in the apparel industry and nontraditional agriculture and promoted the modernization needed in any development process, particularly in agricultural and rural areas. For example, small entrepreneurs in the region have been able to abandon subsistence agriculture because they face few hurdles in sending new products to the United States.
Table 2. Exports to the United States from Central American Countries with CBI Designation, 2004
(listed as: total exports to the United States; CBI exports; percent of CBI exports receiving CBTPA preference)
Honduras: $3.6 billion; $2.3 billion; 92 percent
Costa Rica: $3.3 billion; $1.1 billion; 35 percent
Guatemala: $3.1 billion; $1.2 billion; 76 percent
El Salvador: $2.1 billion; $1.1 billion; 97 percent
Nicaragua: $990.5 million; $330.4 million; 59 percent
Panama: $316.1 million; $32.8 million; 1 percent
Belize: $107.1 million; $44.5 million; 33 percent
Note: All countries listed here also export goods to the United States under other tariff-preference categories, not just CBI/CBTPA.
Source: Office of the U.S. Trade Representative, Sixth Annual Report to Congress on the Operation of the Caribbean Basin Economic Recovery Act (December 2005). Available online.
At the same time, CBI and CBTPA also have limitations that have hindered development. Because they are unilateral instruments of the United States, the U.S. president has the authority to "withdraw, suspend, or limit benefits if he determines that the country is not meeting designation criteria." Consequently, such uncertainty has made businesses more cautious.
CBTPA is set to operate until September 30, 2008, unless the United States signs another trade agreement with any of the beneficiary countries. While most analysts believe the Free Trade Area of the Americas (FTAA), in discussion since the mid-1990s, could replace CBI/CBTPA, CAFTA's implementation will also affect CAFTA countries that belong to CBI. As of March 1, El Salvador became ineligible for duty-free treatment under CBTPA, but some temporary adjustments will have to be made prior to other countries receiving full CAFTA status.
In the long term, once all the CAFTA-signatory countries that belong to CBI become full-fledged CAFTA countries, the process of modernization and change could be accelerated. Unlike CBTPA, CAFTA implies a set of permanent and lasting rules. Businesses that were previously hesitant to invest in Central America because of the threat of unilateral suspension could become more willing to make economic commitments.
What CAFTA Says About Migration
Unlike some of the other trade agreements the U.S. has signed (notably NAFTA and Chile), CAFTA explicitly avoids linking trade provisions with temporary migration or visas. In fact, the agreement includes an understanding that states, "No provision of the Agreement shall be construed to impose any obligation on a Party regarding its immigration measures."
In the case of services provided across borders, chapter 11 of CAFTA says that workers providing those services will be subject to the laws of the host country. The Office of the U.S. Trade Representative (USTR) gives the example of a Costa Rican musician who wants to bring his services to the United States. The musician would have to pursue a visa through existing U.S. channels; he would not be eligible for any special consideration because of CAFTA.
At the same time, no CAFTA country would be allowed to limit the number of "natural persons" (individuals) that may be employed in any kind of cross-border service. In addition, all parties have agreed to work together to set standards for education and experience requirements in certain professions; all members would recognize these standards, eliminating the basis for discrimination.
NAFTA and Migration
To understand what CAFTA may mean for migration, it's instructive to look at the effects of NAFTA.
When the U.S. Senate debated NAFTA in the mid 1990s, the argument of "more trade and less international migration" was used convincingly. Mexico also believed that NAFTA would create new jobs at home and decrease the pressure to migrate.
Ten years later, different evaluations support different results. According to USTR, migration flows from the areas in Mexico that received NAFTA-related investment have decreased. Yet, although the number of Mexican jobs in manufacturing increased, net job gains have been either modest or flat, depending on the measurement and its timing; wages in the United States and Mexico are not close to converging.
Most notably, Mexican immigration, particularly of unauthorized immigrants, increased sharply after NAFTA went into effect. As already noted, NAFTA allows for temporary migration but only of certain professionals under strict guidelines; no provisions were made for the temporary movement of low-skilled workers.
Some analysts have argued that NAFTA cannot be blamed for increased migration. Rather, Mexico's financial crises and restructuring efforts, the booming U.S. economy, and strong migration networks, among other factors, have had more powerful effects on migration.
A reasonable conclusion could be that, although international trade can create some economic conditions that eventually reduce migration trends, migration does not depend only on trade-agreement provisions. Also, reducing poverty requires a comprehensive strategy that can include trade but cannot be built on trade alone.
CAFTA's Possible Effects on Migration
Today, approximately five million people from Central America — about 50 percent of them without legal status — live in the United States, according to estimates from the International Organization for Migration (IOM). While the first large wave arrived in the 1980s after fleeing civil war at home, others came after natural disasters, such as Hurricane Mitch in 1998.
Over the last 25 years, the United States has become the top destination of those seeking to escape persecution and those wanting a better life. At the same time, many Central American immigrants maintain close ties to their home countries, sending remittances and returning frequently to visit.
The effects of CAFTA on these migration patterns will depend on the intensity with which the agreement can induce economic growth in both marginal urban areas and rural areas. It will also depend on how economic growth affects social development. Studies by the University of Michigan and the World Bank indicate that the agreement could create more than 300,000 new jobs and could increase the region's GDP by US$5.3 billion.
If the agreement does not succeed in reducing poverty, then it should not be expected to reduce migration. Additionally, if the U.S. economy continues to grow and laws regarding legal, work-based migration are not changed, then the United States will likely continue attracting workers from Central America.
In other words, the sole existence of CAFTA, as with NAFTA, will not reverse established migration patterns.
Looking Ahead
CAFTA was a key issue in Costa Rica's hotly contested presidential elections in February 2006. Pro-CAFTA candidate (and former president and Nobel Peace Prize laureate) Oscar Arias was named the winner in March, and Costa Rica is expected to make the needed legal reforms to comply with CAFTA's terms of negotiations.
Except for El Salvador, the rest of the CAFTA countries still need to institute legal changes, including reforms on local intellectual property, labor, and services laws.
At this time, only Honduras' legislature has taken the necessary steps for compliance. Nicaragua, the Dominican Republic, and Guatemala will definitely take more time.
Considering the history of Central American economic integration, the established migration patterns between the region and the United States, and the slow roll out of CAFTA, no major changes, either in economics or migration, can be expected in the near future.
This article is based on Salomon Cohen's report for the IOM office in Guatemala entitled "The Effects of the Free Trade Agreement Between Central America, the United States and the Dominican Republic in Central American Migratory Processes." The report is available here.
Brown, Drusilla, Kozo Kiyota, and Robert M. Stern (2005). "Computational Analyses of the U.S. FTAs with Central America, Australia and Morocco." Discussion paper No. 527, the University of Michigan. Revised January 31. Available online.
Papademetriou, Demetrios, John Audley, Sandra Polaski, and Scott Vaughn (2003). "NAFTA's Promise and Reality: Lessons from Mexico for the Hemisphere." Washington, DC: Carnegie Endowment for International Peace. Available online.
U.S. Department of Commerce, International Trade Administration (2000). "Guide to Caribbean Basin Initiative." November. Available online.
Office of the United States Trade Representative (2006). "Statement of USTR Rob Portman Regarding Entry Into Force of the U.S.-Central America-Dominican Republic Free Trade Agreement (CAFTA-DR) for El Salvador." February 24. Available online.
Office of the United States Trade Representative. "CAFTA Briefing Book." Available online. |
Building a great credit score from scratch
By Justin Boyle
You've likely known that credit scores are important since long before you got your first credit card. But when you're just starting out in the world of credit, learning how to create a strong credit score can be tricky.
Credit cards can offer the key to beginning your credit history on the right foot, but only if you use them smartly. The guidelines below can help you use your credit cards to begin establishing a great credit score from the start.
Using credit cards to boost your credit score
You must begin using credit to establish your credit score, which for many means applying for a credit card. Once you have that new card, these tips can help you craft good credit that will last a lifetime:
1. Create a strong credit history. If you've never had a credit card before, you'll probably be limited to card options with a fairly high APR. With high-interest cards, it's wise to stick to small purchases and pay your balance off before the end of each billing cycle. This will help you save on interest and show lending agencies that you can handle the responsibility that comes with spending on credit.
2. Consider owning a few cards. After using your first card to make some responsible purchases, lower APR offers may start coming in the mail. Use these new offers as an opportunity to expand your credit portfolio and show that you're responsible enough to maintain multiple accounts without making a late payment. Some personal finance gurus say that between four and seven cards is the "sweet spot" that can help your credit score go up the fastest, but it's wise to acquire these cards over time (as I'll explain later).
3. Stay well below your credit limit. If the balances on your credit cards start edging up toward their limits, the rise in your credit score will slow down or even reverse. If you're ready to try a big-ticket credit purchase, it's smartest to wait until you have a lower APR and your spending limit is at least twice the cost of the item you're looking to buy. That can help you continue increasing your score without risking huge amounts of interest in case you can't pay your full balance in one payment.
Most credit experts will tell you that the most important tip for a strong credit score is to always pay your bills on time. Even one late payment can cancel out the gains from several months of responsible habits.
Avoiding a pitfall of credit applications
It's important to note that one of the ways to increase your score can sometimes push it in the opposite direction. Sound complicated? It's actually pretty simple.
When you apply for new lines of credit, you're giving a lending agency permission to pull your credit report. Every report requested can ding your score slightly, and that effect compounds with each new credit application.
So how are you supposed to increase that all-important score if what you do to raise it could also bring it down? The answer is simple: Have patience.
The hit you'll take from new credit factors into your score less than the boost you get from responsible credit use. On top of that, you get an extra lift to your score from having a long credit history. As you continue to manage your credit responsibly, those occasional new credit applications will factor less and less into the overall equation.
What's more, a diverse credit portfolio also helps increase your score. If you're in a place where you can apply for a mortgage or auto loan in addition to your credit card, you may add even more positive factors to your lending history.
It comes down to your reputation
Patience is a key virtue when it comes to building a strong credit score. Your credit score is a measure of your reputation as a borrower, and it isn't going to change overnight.
So stay smart, watch your spending limits and pay all of your bills on time. By doing so, your credit could go from non-existent to outstanding in just a few short years.
Justin Boyle is a writer and journalist in Austin, Texas.
Disclaimer: Discover is a paid advertiser of this site.
@techreport{NBERw9788,
  title       = "From Cradle to Grave? The Lasting Impact of Childhood Health and Circumstance",
  author      = "Anne Case and Angela Fertig and Christina Paxson",
  institution = "National Bureau of Economic Research",
  type        = "Working Paper",
  series      = "Working Paper Series",
  number      = "9788",
  year        = "2003",
  month       = "June",
  doi         = {10.3386/w9788},
  URL         = "http://www.nber.org/papers/w9788",
  abstract    = {We quantify the lasting effects of childhood health and economic circumstances on adult health and earnings, using data from a birth cohort that has been followed from birth into middle age. We find, controlling for parents' incomes, educations and social status, that children who experience poor health have significantly lower educational attainment, and significantly poorer health and lower earnings on average as adults. Childhood factors appear to operate largely through their effects on educational attainment and initial adult health. Taken together with earlier findings that poorer children enter adulthood in worse health and with less education than wealthier children, these results indicate that a key determinant of health in adulthood is economic status in childhood rather than economic status in adulthood. Overall, our findings suggest more attention be paid to health as a potential mechanism through which intergenerational transmission of poverty takes place: cohort members born into poorer families experienced poorer childhood health, lower investments in human capital and poorer health in early adulthood, all of which are associated with lower earnings in middle age -- the years in which they themselves become parents.},
}
A Very Lonely Japan
The Japanese tend to expect diplomatic bouquets from even the most insignificant of their foreign visitors. So imagine the audience reaction when German ex-chancellor Helmut Schmidt, invited to give a lecture in Tokyo last month, treated his hosts to an exercise in bluntness. He accused the Japanese of soft-pedaling their country's responsibility for its wartime past--and came to a devastating conclusion: "Sadly, the Japanese nation doesn't have too many genuine friends in the world outside." It was a syndrome he blamed on "the ambiguity of the Japanese public when it comes to acknowledging the conquests, the start of the Pacific war and the crimes of the past history." His listeners didn't appear to find much consolation in Schmidt's concession that his own country had committed "even worse crimes within Europe."
Small wonder, perhaps, that no Japanese media picked up on the content of the speech. But the Japanese had better get used to dire verdicts on their handling of history, because there's plenty more to come. After Japanese Prime Minister Junichiro Koizumi last week paid yet another visit to Yasukuni Shrine--the Tokyo war memorial that honors 2.47 million wartime dead, including 14 class-A war criminals--his country can look forward to a deepening of the remarkable diplomatic isolation that has enveloped it in recent years. China and South Korea expressed their anger in fiercely worded statements and canceled planned diplomatic meetings in protest. Even some of Japan's erstwhile allies in Asia, Malaysia and Singapore, also registered their disapproval.
Sixty years after the end of World War II, Japan's wartime past has never mattered so much. Last week's Yasukuni visit didn't prompt ferocious street protests as previous ones had, but each such incident further cements the widespread view that Japanese expressions of regret over the war are insincere. Beyond that, Japan has territorial disputes with almost all of its neighbors, a situation unique among the leading industrialized nations; a dispute with China over drilling in the East China Sea flared just last week. And perhaps most bitter of all for Tokyo's bureaucrats is the resounding failure of Japan's recent bid to win a permanent seat on the U.N. Security Council--an ambition that no significant Asian nation supported, despite the billions in investment and aid Tokyo has spread around the region in the last half century. "To be honest I was totally surprised," says leading diplomatic commentator Yoichi Funabashi. "It was a complete disaster."
Until recently, Japan could to a certain extent ignore the suspicion and resentment it inspired across Asia. The country was an economic powerhouse, bolstered by its alliance with the United States. But now that animosity threatens Japan's further progress, at a time when the country finds its claim to regional leadership increasingly challenged by the rising might of Beijing. In effect, the country that spent most of the 20th century aspiring to a leadership role in East Asia now finds itself virtually relegated to a corner for bad behavior. And that's the last thing the region needs at a time when there is already plenty of instability to go around, thanks to a rapidly modernizing Chinese military, a nuclear-armed North Korea and a variety of potentially explosive territorial disputes. "The wounds of war remain and haven't been healed in neighboring Asian countries," says Tomiichi Murayama, the former Japanese prime minister whose 1995 apology to the victims of Japanese wartime conquest set the gold standard for all future public expressions of remorse. "They still lack confidence in Japan."
Why is that? Hasn't the country's postwar pacifism become so deeply rooted that a resurgence of Japanese militarism is unthinkable? And hasn't Japan apologized for its wartime actions over and over again? True enough. Tokyo University scholar Sven Saaler points out that public-opinion surveys consistently show that most Japanese accept the description of their country's military campaigns from 1931 to 1945 as "wars of aggression." Meanwhile, a new museum devoted to the fate of foreign women recruited as sex slaves by the Japanese Army during the war opened in Tokyo at the beginning of August. And on the anniversary of the Pacific war's end this summer, the very same Koizumi who can't stay away from Yasukuni gave a much-noted speech reaffirming his country's readiness to acknowledge its responsibility for the war.
And yet a significant portion of the Japanese population does not agree on the precise parameters of Japan's war guilt. Koizumi's visit certainly destroys any good will occasioned by his Aug. 15 speech. For every Japanese bureaucrat or politician who expresses remorse for the war, there is another who will make an inflammatory remark. During the past year, Education Minister Nariaki Nakayama has several times lauded a revisionist history textbook that minimizes the Japanese military's role in forcible wartime prostitution. Notes Jeff Kingston, a professor at Temple University in Tokyo: "The bottom line is that there is no consensus in Japan on war responsibility. If there's no consensus on memory, you can't assume responsibility--and without responsibility, you can't move to reconciliation."
The question is why all this is flaring up now. That lack of consensus, after all, has held true for decades. But two things are different. First, a new generation of Japanese without personal memories of the war are revolting against what they see as the "masochism" of institutionalized self-reproach and U.S.-imposed pacifism. Young conservatives, including Koizumi, have vowed to transform Japan into a "normal country"--a pledge that includes pursuit of a more assertive international role for Tokyo and the revision of the pacifist Constitution to acknowledge the country's considerable armed forces. Koizumi's insistence on visiting Yasukuni reflects a growing refusal among ordinary Japanese to kowtow to foreign sensitivities.
The external environment has changed, too. Back when Japan was the region's sole economic dynamo, other countries often accepted economic aid from Tokyo in return for tacitly agreeing to avoid bringing up the war. Now that years of prosperity have bred substantial and increasingly assertive middle classes in both China and South Korea, history is returning to the agenda. In September, South Korean Prime Minister Lee Hae-chan said: "We're not asking for money from the Japanese government. We have enough money. What the Korean government wants from Japan is truth and sincerity, and [a commitment] to help develop healthy relations between the two countries." What's more, both Chinese and Korean leaders have powerful domestic reasons to bash Japan--it's a surefire tool for garnering popular support.
The past few months abound in evidence that disagreements over the past can have perceptible economic and political effects. The anti-Japanese riots in China in April of this year triggered sharp falls on the Tokyo stock market. Japanese companies have been reassessing their strategies for investment in China, and many are already relocating factories to countries viewed as less politically sensitive. Japanese business leaders had lobbied Koizumi vigorously to stay away from Yasukuni, for the sake of good relations with China--a sign of how high the stakes are for them.
The continuing tensions also hobble Japan's diplomatic clout. The country's whole postwar diplomatic strategy has been about projecting soft power. Tokyo has focused much of its foreign-policy energy on issues like human rights or climate change, precisely as a way of soothing foreign qualms about Japan's economic might. And Tokyo has been a major supporter of the United Nations. Japan's campaign for a permanent spot on the U.N. Security Council was motivated partly by the fact that Tokyo contributes about 20 percent of the U.N.'s annual budget--more than four of the UNSC's five permanent members. (Japan is second only to the United States.) Yet only three Asian countries--Afghanistan, Bhutan and the Maldives--proved willing to offer official support for the Japanese bid when a formal proposal was put forward in August. (The measure, which also envisioned seats for India, Brazil and Germany, never came to an actual vote.)
Japan's failed U.N. bid was partly due to intensive lobbying by Beijing, which happily used the history issue to blacken Japan's image. Notes Funabashi: "It all leaves China looking like it has moral superiority over Japan"--a powerful edge at a time when both countries are engaged in a struggle for political and economic superiority within Asia.
So how can Japan extricate itself from the mess? Some, predictably, are arguing that the fault lies entirely with Japan's critics. A prominent group of ex-government officials and military men, including Masahiro Sakamoto, vice president of the Japan Forum for Strategic Studies, assert that Japan should respond to China by being tougher diplomatically. They point out that the Chinese Communist Party suffers from much historical amnesia itself. The Japanese Foreign Ministry, for its part, has been shifting the emphasis to public diplomacy. It recently started a new Internet offensive designed to promote a positive Japanese image. The effort will include posting copies of original Japanese Foreign Ministry documents on the site as a way of explaining policy.
Neither of those approaches seems designed to foster what is most needed: a broader spirit of reconciliation and historical awareness. Andrew Horvat of the International Center for the Study of Historical Reconciliation at Tokyo Keizai University points out that one reason why Germans succeeded so well at reconciling with their neighbors after the war was because they made lots of nongovernmental contacts with other Europeans. The cross-border contacts ranged from church and civic groups to trade unions and academic institutions. In Japan, by contrast, restrictive legislation on nonprofit organizations (including tough rules on tax-exempt status) has stunted the growth of civil organizations that might bond with counterparts abroad. "Communication is crucial," says Wang Jin, a 30-year-old Chinese woman studying for her M.B.A. at Waseda University in Tokyo. "If Chinese people have a chance to come here [to Japan], they [might] change their opinions." Wang says she spends much of her time correcting misperceptions about Japan and China to angry friends in both countries. "It's very sad," she says. "I want to have Japan and China be like Germany and France. They have a good relationship. They became stronger even though they had a bad experience."
As Saaler of Tokyo University points out, one reason that Japan hasn't come to terms with Asian countries is that it's long been a staunch ally of the United States. With a superpower as a geopolitical partner, Japan didn't really feel the need to reach out. Back in the 1950s, '60s and '70s none of the other Asian countries mattered economically--now Japan has very intimate economic relations with all of them. China recently surpassed the United States as Japan's top trade partner.
A planned east Asian summit in Kuala Lumpur this December might help forge a new spirit of cooperation. The confab is aimed at laying the groundwork for a new East Asian Community loosely modeled on the European Union. Japan has been pushing the idea of stronger Asian integration for years, and with security anxieties growing apace, the time might be ripe for a new regional alignment. It would be unfortunate if Tokyo's desire to shape that future were to be derailed by its inability to come to grips with its past.
Deep Freeze in the Great Lakes
How are the ice-covered Great Lakes impacting the environment?
Article ID: 613906
Released: 19-Feb-2014 12:05 AM EST
Source Newsroom: Michigan Technological University
A collage of icy Great Lakes conditions. Credit: NOAA Great Lakes Environmental Research Lab
• Guy Meadows, PhD, Director, Michigan Tech Great Lakes Research Center
Newswise — Lake Superior is more than 90 percent iced over, and experts say there's a possibility it will be covered completely before winter's end for the first time in nearly 20 years. Someone has proposed a hike across Lake Michigan, and Lake Huron and Lake Erie are 95 percent frozen.
But even without 100 percent ice cover, the icy lakes are having a major effect on the environment around them.
"The biggest impact we'll see is shutting down the lake effect snow," said Guy Meadows, director of Michigan Technological University's Great Lakes Research Center in Houghton, on Michigan’s snowy Upper Peninsula. Lake effect snow occurs when weather systems from the north and west pick up evaporating lake water that's warmer than the air, then drop it as snow after reaching land, he explained. An ice cover prevents that evaporation.
Ice on the Great Lakes can also contribute to more frigid temperatures, Meadows noted, because the warmer lake water won't have the chance to moderate the temperatures of those same northerly weather systems the way it usually does.
If the weather is cold and calm, the ice can grow fairly quickly, because the water temperature is near the freezing point. However, strong winds can break up ice that's already formed, pushing it into open water and piling it vertically both above and below the water line.
The Soo Locks are currently closed for the winter, and all shipping on Lake Superior has halted, but ice buildups can cause problems in the spring. Even icebreaker ships can't do much about ice buildup that can be as much as 25 or 30 feet deep.
The ice can also have positive effects though. Lake Superior's whitefish and some other fish, for example, need ice cover to protect their spawning beds from winter storms. Heavy ice, therefore, should lead to good fishing.
Meadows said invasive nuisance species have been thriving at the bottom of Lake Superior in recent years largely because of warmer temperatures, so "cooling things back down will be a good thing in that sense."
Rosalyn S. Yalow, a medical physicist who persisted in entering a field largely reserved for men to become only the second woman to earn a Nobel Prize in Medicine, died on Monday in the Bronx, where she had lived most of her life. She was 89.
Her son, Benjamin Yalow, confirmed her death.
Dr. Yalow, a product of New York City schools and the daughter of parents who never finished high school, graduated magna cum laude from Hunter College in New York at the age of 19 and was the college’s first physics major. Yet she struggled to be accepted for graduate studies. In one instance, a skeptical Midwestern university wrote: “She is from New York. She is Jewish. She is a woman.”
Undeterred, she went on to carve out a renowned career in medical research, largely at a Bronx veterans hospital, and in the 1950s became a co-discoverer of the radioimmunoassay, an extremely sensitive way to measure insulin and other hormones in the blood. The technique invigorated the field of endocrinology, making possible major advances in diabetes research and in diagnosing and treating hormonal problems related to growth, thyroid function and fertility.
The test is used, for example, to prevent mental retardation in babies with underactive thyroid glands. No symptoms are present until a baby is more than 3 months old, too late to prevent brain damage. But a few drops of blood from a pinprick on the newborn’s heel can be analyzed with radioimmunoassay to identify babies at risk.
The technique “brought a revolution in biological and medical research,” the Karolinska Institute in Sweden said in awarding Dr. Yalow the Nobel Prize in Physiology or Medicine in 1977.
“We are witnessing the birth of a new era of endocrinology, one that started with Yalow,” the institute said.
Dr. Yalow developed radioimmunoassay with her longtime collaborator, Dr. Solomon A. Berson. Their work challenged what was then accepted wisdom about the immune system; skeptical medical journals initially refused to publish their findings unless they were modified.
Dr. Berson died in 1972, before Dr. Yalow was honored with the Nobel. The institute does not make awards posthumously. Dr. Yalow was the second woman to win the Nobel Prize in Physiology or Medicine. The first, in 1947, was Gerty Theresa Cori, an American born in Prague. Dr. Yalow shared her Nobel with two other scientists for unrelated research. (Eight more women have won the medicine prize since then.)
Rosalyn Sussman was born in the South Bronx on July 19, 1921. Her father, Simon Sussman, who had moved from the Lower East Side of Manhattan to the Bronx, was a wholesaler of packaging materials; her mother, the former Clara Zipper, who was born in Germany, was a homemaker.
Dr. Yalow told interviewers that she had known from the time she was 8 years old that she wanted to be a scientist in an era when women were all but prohibited from science careers. She loved the logic of science and its ability to explain the natural world, she said.
At Walton High School in the Bronx, she wrote, a “great” teacher had excited her interest in chemistry. (She was one of two Walton graduates, both women, to earn a Nobel in medicine, the other being Gertrude Elion, in 1988. Walton was closed in 2008 as a failing school.) Her interests gravitated to physics after she read Eve Curie’s 1937 biography of her mother, Marie Curie, a two-time Nobel laureate for her research on radioactivity.
Nuclear physics “was the most exciting field in the world,” Dr. Yalow wrote in her official Nobel autobiography. “It seemed as if every major experiment brought a Nobel Prize.”
Rosalyn S. Yalow and Sol Berson in Pittsburgh with a check they won from the University of Pittsburgh.
She went on to Hunter College, becoming its first physics major and graduating with high honors at only 19. After she applied to Purdue University for a graduate assistantship to study physics, the university wrote back to her professor: “She is from New York. She is Jewish. She is a woman. If you can guarantee her a job afterward, we’ll give her an assistantship.”
No guarantee was possible, and the rejection hurt, Dr. Yalow told an interviewer. “They told me that as a woman, I’d never get into graduate school in physics,” she said, “so they got me a job as a secretary at the College of Physicians and Surgeons and promised that, if I were a good girl, I would take courses there.” The college is part of Columbia University.
World War II and the draft were creating academic opportunities for women; to her delight, Dr. Yalow was awarded a teaching assistantship at the College of Engineering at the University of Illinois. She tore up her steno books and headed to Champaign-Urbana, becoming the first woman to join the engineering school’s faculty in 24 years.
As the only woman among 400 teaching fellows and faculty members, however, she faced more than the usual pressure to prove herself. When she received an A-minus in one laboratory course, the chairman of the physics department at Illinois said the grade confirmed that women could not excel at lab work; the slight fueled her determination.
She married a fellow graduate student, Aaron Yalow, in 1943. He died in 1992. Besides her son, Benjamin, of the Bronx, she is survived by her daughter, Elanna Yalow of Larkspur, Calif., and two grandchildren.
Dr. Yalow received her doctorate in nuclear physics in 1945, and went to teach at Hunter College the following year. When she could not find a research position, she volunteered to work in a medical lab at Columbia University, where she was introduced to the new field of radiotherapy. She moved to the Bronx Veterans Administration Hospital, now the James J. Peters Veterans Affairs Medical Center, as a part-time researcher in 1947 and began working full time in 1950. That same year, she began her 22-year collaboration with Dr. Berson.
Dr. Berson was seen as the dominant partner. By virtue of his gender and medical degree, he had more contacts with journals, professional societies and in academia. Dr. Yalow was single-mindedly focused on her research; for much of her professional life, she lived in a modest house in the Bronx less than one mile from the hospital. She had no hobbies and traveled only to give lectures and attend conferences.
In their work on radioimmunoassay, Dr. Yalow and Dr. Berson used radioactive tracers to measure hormones that were otherwise difficult or impossible to detect because they occur in extremely low concentrations. They went on to use the test to measure concentrations of vitamins, viruses and other substances in the body. Today the test has been largely supplanted by a technique that does not use radioactivity.
Their early work met with resistance. Scientific journals initially refused to publish their discovery of insulin antibodies, a finding fundamental to radioimmunoassay. The discovery, in 1956, challenged the accepted understanding of the immune system; few scientists believed antibodies could recognize a molecule as small as insulin. Dr. Yalow and Dr. Berson had to delete a reference to antibodies before The Journal of Clinical Investigation accepted their paper, and Dr. Yalow did not forget the incident; she included the rejection letter as an exhibit in her Nobel lecture.
With Dr. Berson, Dr. Yalow made other discoveries. Using radioimmunoassay, she determined that people with Type 2 diabetes produced more insulin than non-diabetics, providing early evidence that an inability to use insulin caused diabetes. Researchers in her lab at the Bronx veterans hospital modified radioimmunoassay to detect other hormones, vitamin B12 and the hepatitis B virus. The latter adaptation allowed blood banks to screen donated blood for the virus.
Dr. Berson’s death affected Dr. Yalow deeply; she named her lab in his honor so his name would continue to appear on her published research.
She was elected to the National Academy of Sciences in 1975 and received the Albert Lasker Medical Research Award, often a precursor to the Nobel, in 1976. At her death, she was senior medical investigator emeritus at the Bronx veterans medical center and the Solomon A. Berson distinguished professor-at-large at Mount Sinai School of Medicine in New York.
Five years after she received the Nobel, Dr. Yalow spoke to a group of schoolchildren about the challenges and opportunities of a life in science. “Initially, new ideas are rejected,” she told the youngsters. “Later they become dogma, if you’re right. And if you’re really lucky you can publish your rejections as part of your Nobel presentation.”
Innovator Under 35: Ronaldo Tenório
A deaf person walks into a bar. That isn’t the beginning of a joke, but a potentially frustrating situation—unless the bartender happens to know sign language. That’s where Hand Talk comes in. It translates spoken words into sign language that an avatar then conveys on a smartphone screen.
For now, Hand Talk can only translate Portuguese into Libras, the sign language used in Brazil—the home of the program’s creator, Ronaldo Tenório. But Brazil alone has at least 10 million deaf people, one million of whom have downloaded Hand Talk’s mobile app.
The users hold up their smartphone to a hearing person, who sees a message on the screen that says “Speak to translate.” As soon as the person starts talking, an animated avatar named Hugo begins signing.
Turning the audio into animations of gestures requires laborious programming because everything has to be exactly right, all the way down to Hugo’s facial expressions, which also carry meaning in sign language. Tenório and his team feed their program thousands of example sentences every month and match them with 3-D animations of sign language. They constantly push these improvements out through app updates.
Tenório plans to roll out different versions of the avatar in the future so users can switch the gender or race of their Hugo in an effort to broaden the appeal and accessibility of having a virtual translator in one’s pocket.
Julia Sklar
New 'Spaser' technology to fuel future nano-technologies
Washington, Jan 13 (ANI): Tel Aviv University researchers have developed a new technology called 'Spaser', a laser that can be as small as needed to fuel nano-technologies of the future.
Prof. David Bergman had patented the idea in 2003 with Prof. Mark Stockman of Georgia State University in Atlanta.
'Spaser' stands for 'surface plasmon amplification by stimulated emission of radiation'; spasers are considered a critical component for future technologies based on nanophotonics.
A Spaser-based microscope might be so sensitive that it could see genetic base pairs in DNA.
The technology could also lead to computers and electronics that operate at speeds 100 times greater than today's devices, using light instead of electrons to communicate and compute.
"It rhymes with laser, but our Spaser is different. Based on pure physics, it's like a laser, but much, much, much smaller," said Prof. Bergman, who owns the Spaser patent with his American partner.
The physical limitations of current materials are overcome in the Spaser because it uses plasmons, and not photons. Smaller than the wavelength of light, nano-sized plasmonic devices will be fast and small.
The team is currently working on commercializing their invention, which they suggest could represent a quantum leap in the development of nano-sized devices. (ANI)
Definition of: symbolic link
symbolic link
In Unix, a file that points to another file or directory. It is used to allow a variety of sources to point to a common destination. The Windows 2000 counterpart is the "virtual directory." When URLs are redirected, it is called "URL mapping." A symbolic link is like a Windows shortcut, except that the link is an index entry in the Unix file system, whereas the shortcut is a regular Windows file. See redirection. |
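As a small illustration of the Unix side of this definition (the paths below are made-up examples, not part of the entry), a symbolic link can be created with the shell command ln -s target linkname, or from C with the POSIX symlink() and readlink() calls:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char target[4096];
    ssize_t n;

    /* create /tmp/example-link pointing at /etc/hostname (example paths only) */
    if (symlink("/etc/hostname", "/tmp/example-link") != 0)
        perror("symlink");

    /* readlink() returns the text stored in the link, not the file it points to */
    n = readlink("/tmp/example-link", target, sizeof(target) - 1);
    if (n >= 0) {
        target[n] = '\0';
        printf("/tmp/example-link -> %s\n", target);
    }
    return 0;
}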
A History Of Snow Plows & Snow Plowing
Discussion in 'Residential Snow Removal' started by soma56, Jul 23, 2009.
1. soma56
soma56 Junior Member
from Ontario
Messages: 10
The most widely accepted commercial tool for snow removal is the snow plow. This is especially useful in large capacities. In modern times, a snow plow consists of a large pick-up truck with a large plow that is permanently attached. Some plows will an electric and/or hydraulics used to raise and lower them. Even bigger plows may be affixed to a very large tractor, backhoe or loader. Some of which may contain more then one large plow and even distribute salt as they plow. Aside from pickup trucks, snow plows can also be found on other types of vehicles such as a personal SUV or even a small riding mower that is traditionally used to cut grass in the summer. Snow plows are also used to mount on rail cars to remove snow from train tracks.
Where snow blowers work by use of an impeller to draw snow into the chute a snow plow works different and uses a much simpler concept. Using the force of the vehicle the snow plow is pushed either forward or on an angle. The blade of the snow plow captures the snow and forces it towards the direction of the vehicle clearing the surface previously covered.
The earliest versions of a snow plow were powered by horses. The wedge-type blades were made of wood. Since the invention of the automobile the snow plow was logically adopted and converted for use with vehicles. Patents for snow plows were issued as far back as the early 1920’s. The first infamous plow for vehicles was created by two brothers named Hans and Even Overaasen from Norway. They constructed a plow for use on vehicles which was soon paved the way for traditional equipment used today to clear roads, railways and airports. Soon after the Overaasen Snow Removal Systems came into being. Another milestone inventor by the name of Carl Frink was also considered an early manufacturer car-mounted snow plows. His company, Frink Snowplows, which was based out of Clayton, New York, was created in 1920 and still runs today under the name Frink-America.
Trains and snow plowing go back as far as the mid-1800s. An interesting invention known as the rotary snow plow was created by a Canadian dentist named J.W. Elliot. A rotary snow plow contains a set of blades positioned in a circle; it works by rotating the blades and cutting through the snow as the train moves forward. The rotary snow plow was conceived after ongoing problems with the traditional wedge plow, which, like many plows today, simply could not move the snow aside quickly enough for trains. The rotary snow plow requires the power of an engine to rotate the blades. Usually a second engine is used to move the train while the first one in front is responsible for removing the snow. As the blades turn, the snow is lifted through a channel and forced out the chute at the top. The operator sits up top in a cab behind the chute and controls the direction of the chute and the speed of the blades. These controls eventually led back to the 'pushing' engine so that the operator of the pushing locomotive could take control. In areas with severe snowfall, 'double' or 'dual' rotary setups were put into use: the trains carried rotary plows on both ends. They were often effective in clearing snow from rail stations and in situations where the snow continued to accumulate after a pass in one direction.
The earliest rotary blades were powered by steam engines, while newer ones are powered by gas or electricity. Due to the advancement of newer technologies, rotary blades are seldom used anymore. They are also very expensive to maintain and are only used as a last resort by many railway companies.
Plows were a godsend to citizens in the late 1800s and helped ease the stresses of transportation. While the horse-drawn plow was uncommon in most North American cities in the 1860s, it soon became widely popular. However, with the clearing of roadways came a new problem that we still see today. While plowing effectively cleared roadways, it blocked the sidewalks and sideroads that pedestrians used to travel on. Piles of snow lined the sides of streets. Citizens complained and even brought lawsuits against plowing companies. Store owners complained that their store fronts were inaccessible to customers because of the mounds of snow left behind by plowing. Pedestrians had to climb over the snow while walking down sidewalks. Sleigh riders also became annoyed, as the plowed surface created ruts and uneven patches.
The citizens of major cities across North America responded in several ways. They hired people to shovel the walkways and horse-drawn carts to remove the snow. Often, they worked in conjunction with the plow companies to haul the snow away into nearby rivers. This not only resolved the issues for pedestrians and store owners but also created a small surplus of jobs for the winter season. This can still be seen today.
2. ATV Plow King
ATV Plow King Senior Member
Messages: 166
3. Kevin Kendrick
Kevin Kendrick Senior Member
Messages: 397
Must be an older article because Frink has been out of business for about 8 or 9 years. But an interesting read none the less.
4. grandview
grandview PlowSite Fanatic
Messages: 14,609
Maybe I better update my fleet .
5. soma56
soma56 Junior Member
from Ontario
Messages: 10
Very Funny 'Grand View'
Kevin, I didn't dig in deep enough into Frinks I suppose. In any event I wrote the article pretty much around the date it was posted.
6. mercer_me
mercer_me PlowSite Fanatic
Messages: 6,361
This Caterpillar Thirty was built and designed for use with a large V snow plow & wings. It was purchased new by the town of Mercer, Maine, and worked for many years plowing rural roads for the town. Now Mercer puts all 33 miles of roads out to bid. My uncle currently has the bid.
Caterpillar 30 Mercer ME.jpg
Last edited: Jul 31, 2009
7. mercer_me
mercer_me PlowSite Fanatic
Messages: 6,361
This 1939 Caterpillar D-6 3-cylinder diesel dozer was owned by the Town of West Gardiner, Maine, and used to plow snow.
1939 Cat D6.jpg |
"Self Pollinating" and Tomato Growing Mythology
Long ago, a group of monks got into a discussion of horses' teeth. How many did a horse have? All the ancient writings were consulted, and the discussion became more and more heated. Finally a young monk suggested they look in the horse's mouth. One and all turned on this impertinent, irreverent monk and they literally threw him out of the monastery.
At risk of being thrown out of tomato groups, I will try to inject some common sense into the tomato pollination discussion. It's funny how often it's said that tomatoes self pollinate, but always in the context of ways to help them self pollinate, or reasons why they didn't self pollinate. Oxymoronish isn't it?
The best pollinator for tomatoes is the original, a bee which "sonicated" at the resonant frequency of the flower. Sonication, also called buzz pollination is when the bee vibrates its wing muscles but doesn't fly; it just hangs on.
The reason is that tomato pollen is not on the exterior of the anthers as in most flowers; rather, it is produced internally and then released through pores in the anther. Motion is required to release the pollen, and the greatest quantity is released by sonication at the correct frequency. However, other bees with a different frequency, or even shaking by wind, will release some pollen.
There are a couple problems: one is that the natural pollinator (a wild bee) didn't travel with the tomato as it was spread throughout the world. The other is that the flower is not very attractive to other bees, and when bee populations are low the tomato generally gets ignored. Bumblebees are the most often seen on tomatoes, though honeybees, when hungry enough will also work them, as will some solitary bees.
Did you ever watch a bumblebee work a tomato blossom? When it does, it pulls the flower down into a vertical position, puts its fat belly against the stigma, and buzzes. The pollen that is released, now will fall by gravity (since the flower is now tilting down) directly to the bee's fuzzy (and statically charged) belly, which is rubbing against the sticky stigma as it vibrates. Tomatoes are self fertile, but the pollen can come from any other tomato that the bee has visited, a bane for seed growers who want to keep varieties pure, but lovely for the gardener who wants fruit.
The size of the fruit is dependent on the number of ovules fertilized, up to the 100% mark. In other words, the more seeds, the meatier the 'mater. So we want to get pollination as full as possible. This is the reason the bee is best, it delivers the most grains of pollen, exactly where it is needed, on the sticky surface of the stigma.
When shaking is done by hand, think about mimicking the natural resonances of sonicating bees. Shaking should not be violent, just as close to the right frequency as possible. Electric vibrators were long used in greenhouses for tomatoes, but have been replaced, as bumblebees are found to be far more efficient. Using an artist brush with tomatoes is very inefficient because the pollen is not on the surface.
Yup, tomatoes are self fertile, but self pollinating?...only when conditions are ideal...they often need help. "Self pollinating" is one of the myths of tomato growers.
Tomato Pollination Pictures |
The groundwater level readings for 2013 were different from what they have been for the past four years. The average groundwater level actually rose in central Kansas following years of drought, thanks to good moisture in 2013.
Unfortunately, the western third of the state continues to show decreases in water table levels, although the declines have been smaller over the same period.
The farther east in the state measurements are taken, the higher the water table level is likely to be.
“It’s pretty incredible how quickly it increases as you move east,” said Brownie Wilson, Kansas Geological Survey water data manager.
Wilson said that taking the average of all the wells in Pratt County revealed a change very close to zero. Some areas were up and some down so, on the average, Pratt County didn’t really move.
The northern part of the county is showing rising levels while the center and southern part are down about a half a foot.
While Pratt County as a whole has hardly changed, it only takes a short distance outside the county to see bigger increases. Just across the county line to the north in Stafford County, wells are registering increases of two feet.
Other areas of GMD 5 have shown even more dramatic changes. On the eastern edge of GMD 5 some wells showed a rise in water table level of up to four or even five feet, Wilson said.
The state sits on aquifers, large underground water-bearing formations that underlie various parts of the state. The western third sits on the Ogallala Aquifer, while Pratt and the surrounding counties sit on the Great Bend Prairie Aquifer.
Most of the reason for the increase in water level was higher than normal rainfall in July and August, when some areas got 300 to 400 percent above the average rainfall for the area.
Around Cheney Lake, the ponds had dried up and the lake was very low, but the rains filled the farm ponds and recharged the lake to normal levels.
A lot of Kansas still remains in drought conditions. While the rainfalls were not drought busters, they were a great boost to water table levels, Wilson said.
The groundwater level varies across Kansas. In the south central part of the state, the water level is around 50 to 60 feet. That number really jumps in the western part of the state like in Haskell County where the water table level is 400 feet, Wilson said.
The closer the water table is to the surface, the faster it can recharge from moisture.
The Kansas Geological Survey and the Kansas Department of Agriculture Division of Water Resources record the water levels across the state.
The state is divided into Groundwater Management Districts with Pratt County in GMD 5. Most of the observation is done through irrigation systems although some special wells are dedicated to just water level measurement.
Some wells have built in sensors that automatically record water levels. In other wells where manual readings are done, a very simple system is used.
A long steel tape measure is coated with blue chalk then lowered down into the well until it hits the water. The water washes away the chalk then the tape is brought to the surface and the distance between the washed off chalk and the top of the well is recorded. This way of measuring water levels has been used for some 60 years.
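The arithmetic behind a reading is simple subtraction: the length of washed-off chalk is subtracted from the tape graduation held at the top of the well. Here is a minimal sketch using invented readings, not actual survey data:

```python
def depth_to_water(tape_at_well_top_ft, wet_chalk_mark_ft):
    """Depth from the top of the well to the water surface, in feet.

    tape_at_well_top_ft: the tape graduation held even with the top of the well
    wet_chalk_mark_ft:   the graduation where the blue chalk stops being washed off
    """
    return tape_at_well_top_ft - wet_chalk_mark_ft

# Invented example readings, in feet
this_year = depth_to_water(62.40, 4.15)   # 58.25 ft down to water
last_year = depth_to_water(63.10, 4.30)   # 58.80 ft down to water

# A smaller depth to water means the water table rose.
print(f"change in water level: {last_year - this_year:+.2f} ft")   # +0.55 ft rise
```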
Production wells, such as for irrigation, have to be measured in the manual fashion.
The information goes into a database and gives the geological survey an accurate indication of water level change.
Several factors impact the amount of change in the groundwater level. Precipitation is a key factor. If it doesn’t rain much, like in 2011 and 2012, most of the water levels will go down.
Irrigation is also a key factor. The more irrigation demand on an aquifer, the more the level sinks. In some areas the amount of available water in the last ten years has dropped from thousands of gallons a minute to tens of gallons and some wells have gone dry.
Irrigating corn takes the most water so some farmers have switched to wheat or milo that take much less water or have reduced the number of acres they irrigate.
Watering livestock plays a factor in water table level as well.
Soil type plays a big part in recharging the system. Sandy soil allows water to sink into the aquifer much faster than clay soils, where water tends to run off rather than soak into the ground.
Jungle Theme Lesson Plans For Preschool Kids
Preschool Jungle Lesson Plans for Kindergarten & Pre K
Preschool jungle theme lesson plans for teaching preschoolers & kindergarten children
Tigers, lions & jungle animals of all sorts. Let’s make things fun and educational for the preschool children with a fun preschool jungle lesson plan & safari ride.
This is a fun educational jungle lesson plan and theme idea for the children. It will be so much fun for all the preschool children as they learn new exciting things about the jungle.
Children & preschoolers usually love some sort of animal or have favorite wild animals. This jungle theme & its activities are very interesting and appealing to many preschool and kindergarten children, as they associate the jungle with many of their favorite animals like lions, tigers, hippos, monkeys & giraffes.
Are You Ready For Your Preschool Jungle Lesson Plan Theme?
This preschool jungle theme is a popular theme for preschoolers and can be used for teaching kids at home or at school in the classroom setting with some great, simple educational ideas.
Below are some simple ideas on how to get a jungle theme & lesson plan started and some educational, fun ways to keep using it for as long as the preschoolers and kindergarten kids are interested in the learning activities.
Remember, this is simply a starting point, you can add to the theme ideas below and build out your jungle theme as much as your preschool children would like.
Starting this Jungle Lesson Plan:
• You can start off your lesson plan by introducing the word 'jungle' to the children and providing a simple description of what it is.
• Make sure that you have various books in the classroom library area, and find various books about the jungle to read to the kids to help build your theme. (These books should be age appropriate.)
• Take the time to find different jungle animal pictures and other jungle-related pictures and post them around the pre-k classroom for the children to see.
• You could have an art activity for the children where you make fun and crazy jungle tribal masks from paper bags or construction paper etc. with the children.
• Provide various types of jungle animal toys on the toy shelf in the classroom for the kids (e.g., monkeys, giraffes, lions, etc.).
• You could even provide various types of Jungle theme props like safari hats for the kids, toy binoculars etc. for a dramatic play activity for the children.
• Find different types of jungle themed puzzles and jungle activity pages.
• Play some fun jungle style music for the children to set the mood, or music that is about specific types of jungle animals, like monkeys, tigers etc.
Explain What Types of Animals Live in the Jungle & What Foods Do They Eat?
• You could talk about different jungle cat species like panthers, tigers, jaguars, leopards, lions etc. (These animals eat meat or other animals that they hunt down as prey.)
• Discuss reptiles and snakes (Many of these will eat rodents or frogs, some large snakes will eat large animals etc.)
• You can discuss bears and different types of bears like Koala bears, panda bears (Bears generally like to eat things like fish, different types of nuts, berries, honey, leaves and more.)
• Monkeys, apes and gorillas. (Monkeys & gorillas will eat fruits like bananas, coconuts, seeds and things like that.)
• Zebras are like a type of horse that is black and white that is out on jungle plains and you might see on a safari. (Zebras eat tall grass and vegetation)
• Giraffes are very tall and run very fast with long necks. (Giraffes eat leaves and berries, fruits etc. off of tall trees.)
• Elephants are wrinkly and grey or brown usually. They have long trunks and white ivory tusks. (Elephants eat things like leaves from trees, shrubs, and shoots.)
• Cheetahs are another type of wild cat that can be found on a safari or in the jungles. (Cheetahs run very fast and eat meat or other animals that they hunt down.)
• Alligators or crocodiles live in the rivers, swamps and lakes. (Crocodiles like to eat meat or other animals that enter to close to the water or fish.)
• Rhinoceroses are big, strong animals with a horn between their eyes. (They eat plants and different forms of vegetation.)
• Different Jungle Birds like parrots and vultures etc.( Some jungle birds eat insects, seeds, nuts and fruit others like vultures eat meat of dead or dying animals.)
• Hippopotamus’ swim in the water and have very large mouths with big teeth and strong jaws. (They eat grass, vegetation and fish, water life.)…etc.
You Can Talk About & Explain What a Jungle Looks Like?
• How is the jungle different than a forest?
• What types of things might you find in a jungle? (eg. Things like water falls, rivers, lakes for the animals to have drinking water)
• What type of plants, trees and grass might you find in the jungle? (eg. Palm trees, coconut trees, banana trees, mango trees, pineapples)
• Provide colorful books and photos of the jungle using real life images of jungle related plants, animals, fruits, water falls, rivers, jungle people etc.
• You can then take the opportunity during circle time or in a small group to have the preschool children explain what they see in the colorful photos of the jungle.
• Discuss the pictures and things in the pictures with the kids to encourage social skills and observations etc.
• Find coloring sheets and worksheets about the jungle, jungle games and animals for this jungle theme.
• What colors do we find in the jungle?
• What colors are the different animals that you find there?
Where Does Each Jungle Animal Live?
Each kind of animal finds a home within its natural habitat, the jungle.
What kind of a home do they have for them and their animal families?
Different animals have different types of homes.
These could be a nest, a den or a burrow. Other animals may be living under a rock, in a hollow tree trunk, up high in trees, in the water, etc.
Dramatic Play for Kids using a Jungle Theme:
• Set up a jungle themed dramatic play center for the preschool classroom for kids to use their imaginations and pretend they are in a jungle.
• You could make a simple jungle jeep, safari jeep or car out of a cardboard box for the children.
• Safari looking hats are fun as props to set the mood of the jungle theme for the kids.
• Toy children’s binoculars or you could make some out of old toilet paper rolls that you simply tape or glue together. You could paint them or color them to make them look more real.
• Get simple toy cameras or pretend cameras or even small boxes like old bar soap boxes to make pretend cameras for the children on their safari trip.
• Put up pictures of a variety of different jungle animals on the wall in the preschool, or school classroom and let your pre k kids go on a fun, educational and pretend, jungle safari right in school. Add some fun theme music to set the mood and let the children have some fun.
A Jungle lesson plan activity like this really has no set limits on how it needs to be presented to the kids. Stick to your specific educational theme and keep it fun and interesting for the children. Think outside the box. That’s what great teaching is all about and kids will love it!
Have a great time – in the preschool jungle with the children.
How do scientists know what the dinosaurs looked like? No one can say for sure, but there are some lines of evidence in the fossil record, and from studies of modern animals. Putting it all together is like the detective work in solving a difficult murder case. When you see a colour painting, or an animation, of a dinosaur as a living animal, this has been based on a series of steps in reconstruction:
• The skeleton is rebuilt from the bones that are extracted from the rock.
• The muscles can be laid on with some confidence, since each end of the muscle is fixed into the bone, and marks may be seen on the fossil bones.
• Other soft parts, like the guts, eyeballs, tongue, and so on can be added partly by guesswork, and comparison with living animals.
• The skin texture may be reconstructed precisely, since impressions of dinosaur skin have been fossilized. There are even a few rare cases of organic preservation of dinosaur skin.
• The colour is entirely guesswork. Was Tyrannosaurus blue with yellow spots, or maybe you like red stripes? Colours are based on modern animals, and a bit of inspired imagination by the scientists and artists.
Dinosaurs Teeth:
Tyrannosaur teeth were uneven, which placed most of the force of the bite on just a few teeth at a time, giving them more penetrating power. When a number of teeth penetrated the fibers, the tyrannosaur just tore along the dotted line.
Dinosaurs Skin:
Dinosaur skin is amazing. We do have some preserved skin impressions. Most of them show polygonal scales in different groupings. Duckbills had a background of small scales with patches of larger scales every now and then.
The patches were bigger and more common on the back. On the crest the impression was more like a rooster's comb. Horned dinosaurs had similar scales, but a little larger. Instead of the patches that duckbills had, the horned dinosaurs varied the pattern with large rounded scales surrounded by a rosette of polygonal scales making the change back to the basic pattern. The big round scales were more common on the back and sides. Long-necks like Seismosaurus had large scales, about 2-3 cm, with small bumps, about 2 mm, all over them. They also had a fringe down the back that stood up in tall, thin triangles. Dinosaurs could also have bony scales like the bumpy ones on alligators. These could be scattered almost anywhere, but were more common on the back and sides. In Stegosaurus, some of them formed huge plates that went down the back, and even the spikes on the tail were these bony scales. In the ankylosaurs, they formed a "shell" over the whole body. In the horned dinosaurs, they attached to the skull and formed the ornate horns of the frills. Some dinosaurs even appear to have had feathers!
Duckbill Dinosaur Skin Impression
Horned Dinosaur Skin Impression
Stegosaurus Tail Spike
Stegosaurus Back Plate
Dinosaurs Horn:
The horned dinosaurs were from North America. There were two major types: the centrosaurines, like Styracosaurus, and the chasmosaurines, like Triceratops. The centrosaurines generally had a big nose horn, although some just had a "nasal boss", and they had more ornate frills than the chasmosaurines. The chasmosaurines generally had the frontal horns (brow horns) as the major horns, and they had a hollow cup at the base of the frontal horns that must have given them a nice clacking sound when they fought with each other. Some of the skulls have holes in the bones that look like they were made by fighting with other horned dinosaurs. The horn had a bone core covered with chitin - like your fingernails. Cow horn in cross section, showing the bone core and the chitinous sheath.
Chasmosaurine Frontal Horns.
Dinosaurs Brain:
Here are castings of two dinosaur brains. The one on the right is a Maiasaura and the pictures on the left are a Tyrannosaurus. Wes cut a cow skull in half so that you can see where the brain would be. The cow brain is much bigger than any dinosaur brain. We have even bigger brains and feel that intelligence is very important. Dinosaurs did amazingly well with their little brains and never had to worry about global thermonuclear war or MAD - Mutual Assured Destruction. Of course, they couldn't know about the comet that was on a path leading to a collision with the earth. After all, their best astrophysicist had a brain the size of a walnut.
Dinosaurs Food:
Dinosaurs must have eaten something, and a lot of it. It is fairly easy to imagine that Tyrannosaurs ate other dinosaurs - and anything else that they wanted. But what did the plant-eating dinosaurs eat? Plants have been evolving for millions of years. When most of the dinosaurs lived, there were no grasses. Early in the age of dinosaurs there were no plants with flowers, but cycads seem to have been common. Cycad seeds would have been good and cycad trunks have a lot of starch in them. Some people eat cycads today. Another tree that was common was the Ginkgo. Ginkgo leaves are edible and the seeds are considered a delicacy in China. There is some evidence from gut contents and droppings that duckbills ate conifers - like Christmas trees.
Dinosaurs Claws:
Claws, like horns, have a bony core with a hard chitin sheath. Some claws allowed the predatory dinosaurs to tear into the flesh of their victims. Claws could also have been used to hold down prey while the dinosaur used its powerful jaws and serrated teeth to rip off large chunks of flesh. The foot seen at the left is from a small tyrannosaur. The claw in the middle is the killing claw from the back leg of Utah Raptor. It was used to rip a long deep gash in another animal, like a kick boxer with a switch-blade. The sharp curved claw of the Allosaurus was a meat hook. It allowed the Allosaurus, seen on the right to grab and hold on to another animal.
Dinosaurs Backbone:
The large vertebra has a hole in the side. That is the opening to the air space inside the vertebra. In many dinosaurs the backbone is hollow, and this hollow space made the bones lighter. The backbone allows the body to bend while forming a strong support for the body.
Dinosaurs Whip-Tail:
A whip-tailed Seismosaurus could possibly thrash a predator even approaching from the front. Poor Allosaur. One estimate based on Diplodocus is that the tip of the tail could exceed the speed of sound. It would have generated a sonic boom when it was whipped. So much energy would be in the whip and released as a boom that it would have been as loud as the blast of a 16-inch gun from a battleship! Seismosaur is even bigger - about 50% bigger. That is at least double the power! Besides being a weapon, the sound may have been used to communicate with other Seismosaurs.
Dinosaurs Sounds:
If you know that a big animal can make loud trumpeting sounds from its head, what does that tell you about its behavior? Why would a dinosaur need to have this peculiar feature? You can find answers by looking at modern animals that share similar features. An elephant makes loud trumpeting sounds through its trunk. Elephants use these sounds for two main reasons; to communicate with other members of its herd and to warn away its enemies. Scientists who study elephants have found that there are many different sounds they make to communicate different things. There are sounds of warning, sounds of fear, and sounds for excitement, happiness and sadness. Many paleontologists think that Parasaurolophus used its ability to make sounds in much the same way as modern elephants. |
Synonyms: forest clearance
Wikipedia definition:
Deforestation, clearance or clearing is the removal of a forest or stand of trees where the land is thereafter converted to a non-forest use. Examples of deforestation include conversion of forestland to farms, ranches, or urban use. More than half of the animal and plant species in the world live in tropical forests. The term deforestation is often misused to describe any activity where all trees in an area are removed. However in temperate climates, the removal of all trees in an area—in conformance with sustainable forestry practices—is correctly described as regeneration harvest. In temperate mesic climates, natural regeneration of forest stands often will not occur in the absence of disturbance, whether natural or anthropogenic. Furthermore, biodiversity after regeneration harvest often mimics that found after natural disturbance, including biodiversity loss after naturally occurring rainforest destruction. Deforestation occurs for many reasons: trees are cut down to be used or sold as fuel (sometimes in the form of charcoal) or timber, while cleared land is used as pasture for livestock, plantations of commodities and settlements. The removal of trees without sufficient reforestation has resulted in damage to habitat, biodiversity loss and aridity. It has adverse impacts on biosequestration of atmospheric carbon dioxide. Deforestation has also been used in war to deprive an enemy of cover for its forces and also vital resources. A modern example of this was the use of Agent Orange by the United States military in Vietnam during the Vietnam War. Deforested regions typically incur significant adverse soil erosion and frequently degrade into wasteland. Disregard or ignorance of intrinsic value, lack of ascribed value, lax forest management and deficient environmental laws are some of the factors that allow deforestation to occur on a large scale. In many countries, deforestation, both naturally occurring and human induced, is an ongoing issue. Deforestation causes extinction, changes to climatic conditions, desertification, and displacement of populations as observed by current conditions and in the past through the fossil record. Among countries with a per capita GDP of at least US$4,600, net deforestation rates have ceased to increase.
Source: dbpedia
glossary info
How to Search Terms
Please enter a search term or choose a letter to navigate the glossary and to find definitions. This glossary aims to facilitate collaboration on the development of ambitious energy efficiency measures by clarifying definitions and highlighting common terminology. Definitions have been collected from trusted sources. It is intended to expand this glossary in the future to include a wiki, ensuring a truly collaborative process. If you contribute to this glossary please contact us at:
Understanding Relations
The relation browser enables the user to see the relations between terms.
Below is an explanation of the letters that appear on the arrows in the relation browser
C = Has Concept Scheme. Points to a concept scheme in the reegle glossary. Concept schemes provide the main categories of the reegle glossary.
T = Has Top Concept. Points to a top concept in the reegle glossary. Top Concepts are the top level concepts in a concept scheme.
N = Has Narrower. Points to a narrower concept of the selected concept.
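As a rough illustration of those three arrow types, here is a small Python sketch of a concept graph. The data layout and all terms below are invented for illustration and are not the actual reegle data model:

```python
# C = concept scheme, T = top concept, N = narrower (the three arrow letters above).
glossary = {
    "schemes": {
        "Environment": {                      # C: a concept scheme (a main category)
            "top_concepts": ["land use"],     # T: top-level concepts in that scheme
        }
    },
    "narrower": {                             # N: concept -> its narrower concepts
        "land use": ["deforestation"],
        "deforestation": ["illegal logging"], # invented example of a narrower term
    },
}

def walk_narrower(term, depth=0):
    """Print a term and everything narrower than it, one indent level per hop."""
    print("  " * depth + term)
    for child in glossary["narrower"].get(term, []):
        walk_narrower(child, depth + 1)

walk_narrower("land use")   # land use -> deforestation -> illegal logging
```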
Longer passwords would be more resistant to brute-force attacks
The strength of Apple's revised encryption scheme in iOS 8 hinges on users choosing a strong passcode or password, which they rarely do, according to a Princeton University fellow.
Apple beefed up the encryption in its latest mobile operating system, protecting more sensitive data and employing more protections within hardware to make it harder to access. The new system has worried U.S. authorities, who fear it may make it more difficult to obtain data for law enforcement since Apple has no access to it.
Despite the new protections, data is still vulnerable in certain circumstances, wrote Joseph Bonneau, a fellow at the Center For Information Technology Policy at Princeton, who studies password security.
"Users with any simple passcode have no security against a serious attacker who's able to start guessing with the help of the device's cryptographic processor," he wrote.
If an iPhone is seized when it's turned off, it's unlikely that the keys can be derived from its cryptographic co-processor called the "Secure Enclave," which does the heavy lifting to enable encryption.
But if an attacker can boot the phone and get access to the Secure Enclave, it would be possible to start guessing passwords in a brute-force attack, and that's where the weakness lies.
Apple doesn't make it easy to completely copy all of the data on a device and boot it up using external firmware or another operating system, which would be an attacker's first step, Bonneau wrote.
His theory of how easy it would be to obtain the data from a device is dependent on an attacker being able to bypass the complicated "secure boot" sequence of an iOS 8 device.
"We'll assume this can be defeated by finding a security hole, stealing Apple's key to sign alternate code or coercing Apple into doing so," he wrote.
If that is possible, the attacker can begin guessing passcodes or passwords against the Secure Enclave. Apple's documentation suggests that such guesses could be conducted at a rate of either 12 guesses per second or 1 guess every five seconds.
By default, Apple asks users to set a "simple passcode," which is a four-digit numerical PIN, although users can set much longer pass phrases.
If an attacker can guess four-digit passcodes at 12 per second, the entire space of 10,000 possible PINs can be guessed in about 13 minutes, or 14 hours at the slower rate of one per five seconds, Bonneau wrote.
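The arithmetic behind those figures is straightforward. A minimal sketch, using only the two guess rates quoted from Apple's documentation above, reproduces them:

```python
def exhaust_time_hours(keyspace, guesses_per_second):
    """Hours needed to try every possible code at a given guess rate."""
    return keyspace / guesses_per_second / 3600

four_digit_pins = 10 ** 4                  # 10,000 possible simple passcodes

for rate in (12, 1 / 5):                   # the two rates suggested by Apple's documentation
    hours = exhaust_time_hours(four_digit_pins, rate)
    print(f"{rate:g} guesses/s -> {hours:.1f} h to cover every 4-digit PIN")
# 12 guesses/s  -> about 0.2 h (roughly 13 minutes)
# 0.2 guesses/s -> about 13.9 h (one guess every five seconds)
```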
Apple could slow down the rate at which passwords can be entered, but that would probably annoy users. An alternative would be to limit the number of overall incorrect guesses and erase the phone's data, but that approach would require warning users that they're at risk of blanking their phone if they continue guessing, he wrote.
Even users who opt to set a longer passcode or phrase rather than a four-digit PIN are probably still at risk.
Bonneau said it's unlikely that users choose stronger passwords to protect their devices than they do for Web services accounts, since "entering passwords on a touchscreen is painful."
The best advice is to create a password that is at least a 12-digit random number or a nine-character string of lower-case letters, he wrote. And do not use that password for any other services.
"These aren't trivial to memorize, but the vast majority of humans can do this with practice," Bonneau wrote.
If there's a fear a device may be seized, it's best to keep it off -- such as when crossing international borders -- as that offers the greatest level of encryption protection, he wrote.
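To put the passcode and password advice above in perspective, here is a small sketch comparing the keyspaces involved at the faster guess rate; the generation snippet at the end is illustrative only and not an Apple API:

```python
import math
import secrets
import string

SECONDS_PER_YEAR = 3600 * 24 * 365
RATE = 12                                   # guesses per second, the faster rate above

keyspaces = {
    "4-digit PIN":            10 ** 4,
    "12-digit random number": 10 ** 12,
    "9 random lowercase":     26 ** 9,
}
for name, size in keyspaces.items():
    years = size / RATE / SECONDS_PER_YEAR
    print(f"{name:24s} ~2^{math.log2(size):.0f} possibilities, "
          f"{years:.3g} years to exhaust at {RATE} guesses/s")

# Generating such secrets with Python's standard library (illustrative only):
random_number = "".join(secrets.choice(string.digits) for _ in range(12))
random_letters = "".join(secrets.choice(string.ascii_lowercase) for _ in range(9))
```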
By Stanley Changnon
• Grades: 6–8, 9–12
Weather refers to the state of the atmosphere and includes temperature, precipitation, humidity, cloudiness, visibility, pressure, and winds. Weather, as opposed to climate, includes the short-term variations of the atmosphere, ranging from minutes to months. Climate is typically considered the weather that characterizes a particular region over time.
The weather must be measured and records kept to gain an understanding of the forces at work and to yield the information on the averages and extremes. By studying weather records, atmospheric scientists may be able to predict the weather ahead on scales of weeks to months with greater accuracy and modify more successfully the weather to increase precipitation or ameliorate severe storms.
Causes of Weather
The five factors that determine the weather of any land area are: the amount of solar energy received because of latitude; the area's elevation or proximity to mountains; nearness to large bodies of water and relative temperatures of land and water; the number of such storm systems as cyclones, hurricanes, and thunderstorms resulting from air-mass differences; and the distribution of air pressure over the land and nearest oceans, which produces varying wind and air mass patterns.
How these five factors interact over the North American continent is an excellent example of how the weather of the United States is produced. Because the landmass of North America encompasses a greater range of latitude than longitude (10° to 80° north latitude), a great amount of differential heating occurs. This in turn creates air-mass differences. The presence of a large water area, the Gulf of Mexico, below the southern states affects the character of air masses and placement of pressure centers and storm systems over the eastern half of the continent. Warm, moist air from the south often meets cold, dry air from the north over the central United States. These contrasting air masses include: continental Arctic and polar, from cold land sources; maritime polar, from cold ocean regions; and the warmer and moist Gulf or Atlantic oceanic sources. Air from the Pacific Ocean affects the weather in the western mountains of the United States, which in turn affects the interaction of cold and warm air in the eastern United States.
The air movements resulting from these five weather-producing factors of North America provide an exceptional variety of weather among regions, including droughts, floods, and every known form of severe storm, including hail, ice storms, and tornadoes. These extremes alternate with calm periods of clouds or sunshine. Thunderstorms yield about half of the total precipitation in most of the United States — 80 percent in drier mountain climates, 65 percent in the Great Plains, 50 percent in the Midwest, and 40 percent in the East.
The weather in most places is sensitive to a few key factors. For example, severe drought in the sub-Saharan region of Africa is thought to occur when onshore winds from the Atlantic Ocean change direction by at least 60° in a relatively small area. A seasonal shift of this type is presumably related to slight differences in ocean temperatures. Such differences in turn may have resulted from changes in cloudiness related to a slight shift in hemispheric pressure patterns.
In most locales the weather changes as a result of the diurnal (night/day) cycle and the annual cycle. The latter encompasses daily, monthly, and seasonal variations. These two cycles reveal the Sun as the major factor influencing the weather. At a given time the weather differs greatly with distance. During heavy rains, for example, the differences in the amount of rainfall between regions in proximity are often great. The diurnal cycle exists everywhere but varies by climate type. Within a climatic zone it varies by season. Honolulu, with an equable oceanic weather regime dominated by the trade winds, has a night-to-day range of 6° C (11° F) in July, whereas St. Louis, with a continental climate, has an average diurnal range of 12.8° C (23° F) in July. In January, however, the St. Louis diurnal range is 9° C (16° F).
All weather information for a given area is recorded at a local weather station. The daily high and low temperature values for a month are each averaged, and these two monthly averages are then averaged to yield the "mean," or "normal," monthly temperature. The highest average values of precipitation occur in locales with strong maritime or orographic influences. These values occur in warm or cold climates. Extremely low precipitation averages occur in interior zones far from moist air or at locales where cold oceanic currents stabilize the air.
In a similar fashion, the daily wind and humidity values are averaged to yield their monthly means, and the daily rainfall and snowfall values are totaled — along with the number of days with rain, thunderstorms, freezing temperatures, and clear or cloudy skies. Certain monthly extremes are also identified on a monthly basis, including the highest and the lowest temperature, the heaviest one-day precipitation (rain or snow), and the fastest wind speed. These summaries are often combined and become the basis for describing the weather of a region. Atmospheric scientists also study weather-producing conditions, such as fronts and high- and low-pressure areas, to describe how the region's weather is produced.
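As a concrete illustration of that bookkeeping, here is a minimal sketch with invented daily values for a shortened five-day "month" (not actual station data):

```python
daily_high_f = [88, 91, 95, 84, 90]
daily_low_f  = [68, 70, 74, 66, 69]
daily_rain_in = [0.0, 0.45, 1.20, 0.0, 0.10]

mean_high = sum(daily_high_f) / len(daily_high_f)
mean_low  = sum(daily_low_f)  / len(daily_low_f)
monthly_mean = (mean_high + mean_low) / 2      # the "mean" or "normal" temperature

summary = {
    "mean temperature (F)": round(monthly_mean, 1),
    "highest temperature (F)": max(daily_high_f),
    "lowest temperature (F)": min(daily_low_f),
    "total precipitation (in)": round(sum(daily_rain_in), 2),
    "days with rain": sum(r > 0 for r in daily_rain_in),
}
print(summary)
```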
Weather variations that are excessive during a given time span are often called extremes. Records of weather are examined to define extremes. Absolute extremes are the highest and lowest values of a weather element observed over the entire period of record. The highest temperature ever recorded is 57.8° C (136° F), in the Libyan Desert. The lowest is –89.2° C ( –128.6° F), in Antarctica. The world's highest one-year precipitation for a given area was 26,461 mm (1,042 in) at Cherrapunji, India, and the lowest was 0.8 mm (0.03 in), in the Atacama Desert.
Great temperature extremes have been recorded at a number of points within the United States. For example, the record highest and lowest temperatures at Fairbanks, Alaska, are 37.2° C and –54.5° C (99° F and –66° F), respectively, a difference of 91.6° C (165° F). Those at Boise, Idaho, are 45.6° C and –42.8° C (114° F and –45° F), a range of 88.4° C (159° F). The greatest temperature change during a 24-hour period in the United States occurred in Billings, Mont., when the temperature fell 56.5° C (101° F), from 7.8° C to –48.3° C (46° F to –55° F).
by Stanley A. Changnon, Jr.
The economic mysteries of daily life.
Nov. 1 2008 7:55 AM
Money on the Brain
What can "neuroeconomics" teach us about how we shop?
This morning, I had a remarkable experience: I strolled into a delicatessen and bought some delicious Stilton. What made the shopping trip unusual was that I was wearing a brain scanner while I did it. My costume consisted of an electroencephalograph cap, which looks like a polka-dot shower cap with wires plugged into it; a pair of wraparound glasses with a tiny video camera attached; a clothes peg on one finger to measure my heart rate; two other finger monitors that function like a lie detector; a thermometer patch on a fourth finger; and a satchel to hold a computer gathering the data.
Most of these devices, or their equivalent, can be hidden under clothes or baseball caps so that the wearer looks as if they are sporting only shades and an iPod, but in my case the boffins hadn't bothered, and so I entered the deli looking like an extra from a 1970s episode of Doctor Who.
This was all part of my efforts to understand "neuroeconomics," a new, controversial, and eclectic marriage between economics, marketing, and various branches of physiology and brain science. With very different aims, economists and marketers are attempting to tap into the dramatic advances in our understanding of the brain that have taken place over the past 15 years. Their tools encompass mood-altering drugs, tests for hormone levels, animal studies, and fMRI scans (which use immobile scanners to measure blood flows deep inside the brain).
"Neuromarketing" is the simplest application, and the one in which I was participating. David Lewis, a neurophysiologist at the Mind Lab, a spinoff from the University of Sussex, showed me how the physiological readings could be viewed alongside output from my camera to provide a simple but—presumably—useful demonstration of what really grabbed my attention in the deli. Among Lewis' findings are that eating chocolate is more exciting than making out (at least, making out in an electrical shower cap while surrounded by men with clipboards) and that, subconsciously, young men are more interested in sneakers than in the wares on display in an Ann Summers sex shop.
While the possible applications for marketers are obvious enough, such trials are hardly unlocking the deepest secrets of thought. It remains to be seen whether neuroscience has much to contribute to economics itself, a subject that has long focused on the decisions people make, without relying on any particular theory of how they make them. It is also hard to point to anything terribly interesting that the neuroeconomists have discovered, although neuroeconomics may contribute more as time goes by.
Neuroeconomics may provide more shape to the older and more famous field of behavioral economics. A mixture of economics and psychology, behavioral economics has used laboratory experiments to expose a bewildering number of exceptions to the traditional economic theory of rational choice. At present, though, there is little pattern to what the behavioral economists are observing, and it's possible that a greater understanding of how the brain works might help to provide one.
Yet neuroscience might also help reinforce the traditionalists. Wolfram Schultz, a neuroscientist at Cambridge who studies how the brain processes risk and reward, says that just as the brain registers sensations such as sight, he can now see it registering rewards. There was no reason to expect that the mathematically convenient economists' fantasy of "utility" had any real analogue in the brain—but it seems that it might, after all. There's a thought. |
Lifeboat Ethics: the Case Against Helping the Poor
Garrett Hardin, a biologist from Stanford, used the metaphor of Earth as a "spaceship" to persuade other countries, industries and people to stop polluting and wasting the natural resources of the world. He illustrates that the "spaceship" is represented by the wealthy countries, and the natural resources are represented by the poorer countries of the world. The wealthy people of the world have all the resources they need to survive and more, while on the other hand the poorer countries are unfortunate. Their rations are broken up into smaller and smaller portions because of their growing populations, and this lessens the resources available to everyone in those countries. Hardin's argument is based on sharing. He proposes that the bigger countries should share what they have with the unfortunate countries of the world. He eventually reveals the meaning of his metaphors: the natural resources are revealed to be food. His argument is that there are so many countries in this world that are dying and suffering from lack of food. Hardin believes that if the wealthier countries share their wealth, then the weaker countries will have an opportunity to survive. Through the "lifeboat" metaphor, the use of logos, and the idea of a food bank, Hardin uses these key points as his argument.
With regard to the population of the poor, Hardin uses a lifeboat to help readers understand the situation. "Metaphorically each nation can be seen as a lifeboat, full of comparatively rich people. In the ocean outside each lifeboat swim the poor of the world, who would like to get in or at least share some of the wealth" (415). This metaphor explains that there are people out in the world who need help, people on the verge of dying who only need a helping hand for their survival. "For example, the weather varies from year to year, and periodic crop failures are certain. A wise and competent government saves out of the production of the good years in anticipation of years to come....
Why Marijuana Should Not Be Legalized
What is so mesmerizing about smoking marijuana? I want to know what constructive outcome you get out of smoking it. Is it curiosity, to fit in, or just for the satisfaction of becoming high? Whatever the reason, marijuana is a harmful drug, and for that reason it should not be legalized. Many people assume that marijuana was made illegal through some kind of process involving scientific, medical, and government hearings, and that it was to protect the citizens from what was determined to be a dangerous drug. The actual story shows a much different picture. Those who voted on the legal fate of this plant never had the facts, but were dependent on information supplied by those who had a specific agenda to deceive lawmakers. The very first federal vote to prohibit marijuana was based entirely on a documented lie on the floor of the Senate. The history of marijuana's criminalization is filled with racism, fear, protection of corporate profits, yellow journalism, ignorance, incompetent and corrupt legislators, personal career advancement and greed. For most of human history, marijuana has been completely legal. It is not a recently discovered plant, nor is its prohibition a long-standing law. Marijuana has been illegal for less than 1% of the time that it has been in use. Its known uses go back further than 7,000 B.C., and it was legal as recently as when Ronald Reagan was a boy. The marijuana (hemp) plant, of course, has an incredible number of uses. The earliest known woven fabric was apparently of hemp, and over the centuries the plant was used for food, incense, cloth, rope, and much more. This adds to some of the confusion over its introduction in the United States, as the plant was well known from the early 1600s, but did not reach public awareness as a recreational drug until the early 1900s. America's first marijuana law was enacted at Jamestown Colony, Virginia, in 1619. It was a law "ordering" all farmers to grow Indian hempseed. There...
Stop complaining and start coding
App developer Steve Dryall thinks technology pros should use their expertise to make solutions rather than complain about changes.
One thing I've noticed about people who work in technology is that we're never short on opinions. Whenever I feel my area of expertise covers a topic of discussion, I express my position. But when does a technology expert stop offering opinions and start creating solutions?
As app developers, we have the chance to influence people's lives by helping them use their computing devices to the fullest potential — this transforms us from problem creators to problem solvers. If you can create technology that solves problems, then you have every right to complain about the problem you solved. Actions speak louder than words and, in the case of app development, those words are usually code.
Three steps you can take
Create a new model
As technologies have progressed, so have the business models that make that technology work. The "freemium" business model is one that has solidified its place largely thanks to apps. Banner advertising and other forms of embedded ads have become standard practice thanks to the Internet. New technologies and new models can create widespread change if they work. If you can create a new model for technology to function, you can implement change.
Create a new platform
The task of creating a new platform can be overwhelming, but it can be simplified and broken down into manageable parts. You do not have to create hardware, an OS, and an ecosystem to create a platform; if you create a system that enables people and that system becomes adopted, you have a platform. Many successful technology companies were built on creating a platform for users.
Create a new channel
There are many places online where people can share their views, but there are still opportunities to discover and explore new channels or sub-categories of niches that already exist.
Venting about what does not work without providing a solution is merely complaining. I think IT pros, and especially app developers, should use their abilities to propose solutions to technology problems.
Steve is an independent technology and content developer. His experience spans decades and covers areas including rich-media production, software development, and education. Steve has contributed to the digital realm in many ways and has no plans on ...
After 400 years in the Virginia dirt, the box came out of the ground looking like it had been plucked from the ocean. A tiny silver brick, now encrusted with a green patina and rough as sandpaper. Buried beneath it was a human skeleton. The remains would later be identified as those of Captain Gabriel Archer, one of the most prominent leaders at Jamestown, the first permanent English colony in America. But it was the box, which appeared to be an ancient Catholic reliquary, that had archaeologists bewildered and astonished.
Researchers believe the box was buried with Archer after his death between 1608 and 1616—which would mean the person who buried him would have known the significance of the artifact. Archaeologists and historians announced their discovery at the Smithsonian on Tuesday, along with the identities of three other key Jamestown leaders whose remains were buried nearby. All four men were “involved in all of the major decisions that took place during the first four years of the colony's history,” Horn said in a video about the discovery. Researchers sussed out their identities from a list of several dozen high-status men who could have died in the early 1600s—a particularly chaotic period at Jamestown that included what’s known as “the starving time,” a grueling winter when three-quarters of the colonists died, and some resorted to cannibalism. Along with Archer, researchers found the remains of Reverend Robert Hunt, the first Anglican minister at Jamestown; Sir Ferdinando Wainman, a high-ranking officer who was in charge of horses and artillery for the colony; and Captain William West, a nephew of the governor of the Virginia Company that funded the establishment of Jamestown and other colonies in the New World.
“The discovery brings us back, in a very powerful way, to looking at individuals and personalities that were at Jamestown,” said William Kelso, the director of archaeology for the Jamestown Rediscovery Foundation. “The story gets personal, and therefore you can have more empathy toward what people were up against, what they succeeded in doing, and what they failed in doing.” The presence of the relic in Archer’s grave also calls into question some of what researchers previously believed—their understanding of Archer as an individual, and of Jamestown and the trajectory of Catholicism in America more broadly.
Silver Reliquary and fragments of coffin wood found in the grave of Gabriel Archer. (Jamestown Rediscovery Foundation / Preservation Virginia)
Archer, an influential secretary and magistrate, “was one of the most prominent of the first leaders at Jamestown,” Horn told me. Historians knew Archer as a rival of Captain John Smith, the explorer who, according to legend, was saved from execution by Pocahontas, the daughter of a Powhatan chief. “And Archer spent a good chunk of time trying to remove Smith from the government council of Jamestown,” Horn told me. Researchers now wonder whether there was more to the antagonistic relationship between Smith and Archer. Could Archer’s motives—as a colonial leader, as a searing critic of Smith's—have been linked to a secret religious identity?
“Gabriel Archer was a prime character, an eminent leader in this early period,” Horn told me. “He was taking on Smith, he’s involved in bringing down the first president [of the colony], he’s really at the heart of intrigue. I think historians have always considered that his motives were primarily personal, trying to elevate his own position. But was there something more going on? Was he trying to destabilize the colony's leadership from within?”
This idea is stunning for a couple of reasons, the most important of which is that Jamestown was fundamentally anti-Catholic. “This was a big ambition here on the part of the English,” Horn said. “Jamestown is not meant to be a fairly minor enterprise. It’s meant to be the beachhead for an English empire in America that will serve as a bulwark against Catholicism. That’s a lot of freight for this little object to carry.”
Catholicism was feared by the English, too. Settlers at Jamestown believed there was a very real threat that Spanish warships would one day arrive with Catholic conquistadors prepared to fight for the New World. Incidentally, this anti-Spanish, anti-Catholic attitude—which continued long after Archer and his townsmen died—is what, in 1632, situated the Province of Maryland where it is today, rather than further south where its Roman Catholic founder originally wanted it to be.
“When George Calvert was campaigning to get the charter to Maryland, he was actually looking to get territory—and he was approved to get territory—in what is now North Carolina,” said Farrelly, the Brandeis professor. “The people in Virginia were campaigning for him not to get a charter. The tactic they used is that [they said], ‘He’s going to use this charter as an excuse to bring Spanish priests and nuns over into Virginia, and they’re going to invade Virginia and take over the colony.’ That argument did prove to be contentious enough that at the last minute, it looked like Calvert was going to lose the territory.”
So there was certainly incentive for Archer, decades before Calvert’s time, to have hidden his Catholicism at Jamestown. “This person could have been from a family that was outwardly Anglican but privately Catholic,” Farrelly said. “That would explain why they would be bringing a relic over with them. It does make you wonder: What was it like for him? How secretive did he feel he needed to be, given that he’s living in a colony that is rabidly anti-Catholic. And who buried him with this relic?”
When archaeologists found the box in Archer’s grave in 2013, they could tell right away that there was something inside. It was light enough to feel hollow, and its contents rattled when researchers turned the box over in their hands. But they knew as soon as they gently scrubbed off the oxidation from its copper-alloy exterior—a conservation project that took more than 100 hours and revealed a minimalist engraving of the letter ‘M’—that they wouldn't be able to open the box without causing irreparable damage. It was through subsequent CT-scan imaging that forensic historians were able to identify shards of bone and the lead ampulla inside, clear evidence of a Catholic relic.
"It was not uncommon—I'm not going to say it was common—but there were two different words to describe somebody who was basically a secret Catholic or a crypto-Catholic in England at the time," Farrelly told me. “Meaning he attended Anglican church services regularly, and therefore was not subject to fines, but would also attend Catholic services. ‘Schismatic’ was the term that Catholic priests used, and protestants called [them] ‘papists’ ... Neither the Catholic priests nor the Anglican priests liked these people. You’re not being true Anglicans and you’re not being true Catholics.”
But there’s still a nagging question in all this: What if the box wasn’t a Catholic relic at all? Such symbols have histories that are, at times, “messy,” Horn acknowledged. “And this is a line of inquiry we find quite intriguing," he said. “Perhaps this is a former Catholic holy object that, during the reformation, was translated or repurposed for Anglican use, therefore representing the spiritual heart of the new Church of England in the New World. We know that sacred objects were repurposed for Protestant use, for Anglican use, during this period.”
But there are other hints that suggest Archer was indeed a Catholic, and possibly even an important figure to other Catholics. He was buried in a hexagonal wooden coffin with his head pointing east. “Because of the orientation of Archer in the grave, his head to the east, this is usually a sign of clerics,” Horn said. “He could have been the leader of a secret Catholic cell and even possibly a secret Catholic priest.”
Conserved silver reliquary with ‘M’ on lid. (Jamestown Rediscovery Foundation / Preservation Virginia)
Then there are the other Catholic objects, fragments found over the years at Jamestown that are taking on new meaning after the most recent discovery. “We have been finding bits and pieces of rosaries and crucifixes and other things that obviously were Catholic,” Kelso said. “One interpretation is they were brought over here to give to the Indians, even just to trade as trinkets. But now I think about it in a whole different way.”
“A new piece of archeological or historical evidence can help you better understand a whole range of previous evidence,” Horn said. Or, it can call into question much of what you thought you knew.
“When you think about the circumstances of Archer’s burial and the way this object was placed—it wasn’t just thrown in surreptitiously,” he said. “It was deliberately placed. It would have been quite public. Someone would have had to get down into the grave. These are real puzzles for us.” |
Who Killed Martin Luther King?
by Philip Melanson
Odonian Press, 1993, paper
Murder in Memphis
In March 1967, Martin Luther King, Jr. made a decision that may have cost him his life. He and his Southern Christian Leadership Conference (SCLC) denounced the war in Vietnam as "morally and politically unjust" and promised to do "everything in our power" to stop it.
The following month, King stepped up his attack. In a speech at Riverside Church in New York City, he called the US "the greatest purveyor of violence in the world today" and compared American practices in Vietnam to Nazi practices in WWII. He challenged all young men eligible for the draft to declare themselves conscientious objectors.
Before this, King had kept his civil rights work separate from the peace movement, partly on the advice of other black leaders who felt racial justice should be his first goal. But he increasingly saw that "the giant triplets of racism, materialism and militarism" couldn't be separated. The war was siphoning off money desperately needed for the poor and racially oppressed at home.
So King planned "civil disobedience on a massive scale" in order "to cripple the operations of an oppressive society." There would be sit-ins of the unemployed at factory entrances across the country, a "hungry people's sit-in" at the Department of Labor and a Poor People's March on Washington, where thousands of demonstrators of all races would pitch their tents in the nation's capital and stay until they'd been heard. There were even rumors (though King denied them) that he might run in the 1968 presidential election on an antiwar, third-party ticket with Dr. Benjamin Spock.
King's actions brought sharp criticism from all sides, black and white alike. Life magazine called the Riverside speech "demagogic slander that sounded like a script for Radio Hanoi." It charged King with "introduc[ing] matters that have nothing to do with the legitimate battle for equal rights here in America."
Even the more moderate National Association for the Advancement of Colored People (NAACP) agreed: "To attempt to merge the civil rights movement with the peace movement," they said, "will serve the cause neither of civil rights nor of peace."
From the government there wasn't just hostility; there was fear. King had already demonstrated the ability to instigate massive unrest, and his rumored presidential candidacy would appeal to those appalled by the war.
For years the FBI had wiretapped King's home and office, intercepted phone conversations and planted paid informants within the SCLC; now it stepped up its surveillance. President Lyndon Johnson is said to have admitted privately, "That goddamn nigger preacher may drive me out of the White House."
Tensions were high and King's list of enemies was long when, the following spring, he came to Memphis to support a strike by (mostly black) sanitation workers who were demanding job safety, better wages and an end to racial discrimination on the job.
The murder
King visited Memphis twice in March 1968. On the 18th, he addressed a crowd of 17,000 supporters of the strike. He promised then that he'd return on March 28 to lead a citywide demonstration of sympathy for the workers.
The March 28 event erupted in violence. As demonstrators marched through the city, rampaging black youths broke store windows and looted. King tried to curtail the escalating violence by requesting that the demonstration be cut short. But by the time it was over, police had moved on the crowd, wielding mace, nightsticks and guns. One black youth was shot and killed, and 60 persons were injured. Nevertheless, King promised to return on April 3 to plan another demonstration; this time, he hoped, Memphis would see the power of his nonviolent approach.
King spent the last day of his life, April 4, 1968, closeted inside the Lorraine Motel on Mulberry Street, in one of Memphis' seedier neighborhoods. After a long day conferring with aides about the upcoming event, he was looking forward to a prime rib and soul food dinner at Rev. Samuel B. Kyles' home that evening.
Just before 6 pm, King and Kyles stepped out onto the second-floor balcony overlooking the motel's courtyard. King exchanged greetings with several persons who stood below, waiting to join him for dinner. Kyles headed downstairs to get his car. King stood alone on the balcony.
At 6:01 a single shot from a high-powered rifle cracked through the evening air. The bullet tore into the right side of King's face, knocking him violently backward.
It wasn't until April 19 that investigators identified fingerprints on the gun thought to be the murder weapon. They knew then for the first time that the man they sought was James Earl Ray (not Eric S. Galt). Even so, Ray eluded capture until June 8, when he was caught in London trying to board a plane for Brussels.
Ray spent the next nine months preparing to go to trial. Then, unexpectedly, on March 10, 1969, he pleaded guilty and was sentenced to 99 years in prison.
Who was involved?
When the HSCA exonerated the government of any role in a King assassination conspiracy, their conclusion was based on a less-than-thorough review of only two government groups: the Memphis Police Department and the FBI. The evidence indicates that these groups shouldn't have been dismissed so readily, and that other government agencies may also have had a motive to kill King.
The Memphis Police Department
The Memphis Police Department (or MPD) had prepared for King's visit in three ways. First, several officers from the intelligence unit were stationed in the firehouse across from the Lorraine Motel to spy on King. Second, a four-man security detail was assigned to protect King. Third, tactical (TACT) units for "emergency or riot situations" were created to control any violence that might erupt as a result of King's presence.
The MPD made two changes in security arrangements in the early days of April: the four-man security detail assigned to King was withdrawn 25 hours before the assassination, and three to four TACT units were pulled back from the Lorraine Motel the morning of the assassination.
The first change probably wasn't conspiratorial. King's entourage didn't want the MPD security-they perceived it as part of the hostile white power structure and so refused to divulge the details of King's itinerary. Inspector Donald Smith claimed he got tired of "tagging along" without knowing where King was headed and asked permission to withdraw the detail.
But the shift in TACT units is more disturbing. These units, each consisting of three vehicles and twelve officers, had been formed after violence erupted during King's March 28 visit to Memphis. From King's arrival on April 3 to the morning of the assassination, the units (a total of nine to twelve vehicles) were patrolling within the five to six block area "immediately surrounding" the Lorraine. On April 4, the units were pulled back to five blocks away.
The MPD's explanation, that the units withdrew because an "unidentified" member of King's entourage "instructed" them to do so, is suspect. Unlike the security detail, these units weren't there to protect King, but rather to protect the city of Memphis from the violence that might accompany King's visit.
While it's possible that King's staff would want the TACT squads kept at a distance, it's highly improbable that the MPD would comply. If anything, such a suggestion would lead police to suspect King's group was up to something. If the TACT units were in fact responding to a request that they stay out of sight, there was no need to have moved back five blocks. A distance of, say, two blocks would have been sufficient.
If the TACT vehicles had remained in place, or at least closer to the Lorraine, it would have been extremely difficult for anyone to escape the crime scene. As it was, only one unit-TACT 10-could respond quickly to news of the shooting. That's because it was taking a break in the firehouse near the Lorraine at the time King was shot.
More important, some actions were taken too late or not at all. The dispatcher's order to seal off the two-block area around the Lorraine wasn't given until 6:06, three minutes after the shooting was reported. The dispatcher never issued a "signal Y," a code indicating that all main exits from Memphis should be blocked. He also never issued an APB, an all-points bulletin describing the suspect for the neighboring states of Arkansas, Mississippi and Alabama. As a result, Ray (and any others involved) slipped through each law-enforcement net that ordinarily would have trapped him.
Lt. Kallaher, the "shift commander of communications" on April 4, tried to explain these failures of communication as a result of the "massive confusion" after the assassination. But this doesn't explain why the dispatcher ordered certain procedures and not others, and the confusion wasn't reflected in police transcripts.
The FBI

In 1968, there wasn't any good evidence that the FBI had a motive to murder King. But subsequent revelations made clear FBI director J. Edgar Hoover's hatred of King and the Bureau's attempts to destroy "the Black Messiah" personally and politically through what it called COINTELPRO ("counterintelligence program"). Yet the HSCA's investigation of the FBI employed logic so questionable it might have been lifted from a primer issued by the Warren Commission. Here are some examples.
The HSCA reasoned that if the FBI had set up the assassination, it would need to have had control over Ray. By control, the committee seems to have meant that Ray would be checked in at a motel near the Lorraine. Since Ray stayed at a distant motel his first night in Memphis and didn't move to Brewer's boarding house until the next day, the HSCA concluded that the FBI must not have had control over Ray's movements and thus didn't mastermind the assassination.
Evidently it never occurred to the committee that in a well-planned assassination, the conspirators might elect to keep their trigger man away from the target area for as long as possible to reduce the chances that he could be identified after the shooting. The committee never defended the logic that a hit man must be dispatched to the crime scene as soon as he arrives in town.
With similarly dubious reasoning, the HSCA decided that since the FBI continued its dirty tricks against King right up to the time of the assassination, the Bureau was exonerated. After all, the committee deduced, it would hardly have been necessary to continue a nationwide program of harassment against a man soon to be killed. In a review of all COINTELPRO files on Dr. King, the committee found substantive evidence that the harassment program showed no signs of abatement as the fateful day approached. In other words, the HSCA didn't consider that the Bureau might be providing a cover for its complicity, or that the agents who ran COINTELPRO might not be the ones who plotted the assassination.
The CIA

The HSCA's failure to investigate the CIA stems in part from the impression the agency sought to project: that it had only a cursory interest in King and the SCLC, and that this interest was largely satisfied by whatever data Hoover shared with the agency. The CIA describes its own King file material as routine, oriented toward matters of foreign policy and centered on world reaction to King's death. A November 28, 1975 internal memorandum even states, "we have no indication of any Agency surveillance or letter intercept which involved King."
Not many documents are publicly available to challenge this claim, but those that are tell a different story. In January 1984, in response to a Freedom of Information Act (FOIA) request, I obtained 134 pages of heavily-deleted CIA documents on "Dr. Martin Luther King, Jr." and the "Southern Christian Leadership Conference." These documents indicate that the CIA not only received FBI data on King, but that in at least two instances, it passed data to the FBI.
The documents also indicate surveillance of King; for example, there's a July 10, 1966 dispatch containing photocopies of several scrawled notes, apparently made by King or members of his staff. There are also lists of phone calls placed from his Miami hotel during a two-day period, photocopies of receipts, a page from an appointment calendar with a message for King and an assortment of business cards. There was no indication who collected the data or how it was obtained.
It's likely that much more information exists about the CIA's interest in King. In December 1990, I interviewed an ex-CIA agent who'd been a high-ranking officer and field agent. Unfortunately, I can't describe the agent, the entire interview or even why he was willing to talk to me, since these facts could reveal his identity. I also have no way to verify his allegations, but I believe his story for two reasons: the interview was arranged by a person trusted by both of us, and the source's bona fides as a CIA agent have been validated by a non-agency source I trust, by a major corporation and by a network news organization (on a story unrelated to the King case).
This ex-agent confirmed that the CIA's publicly released King file is deceptively brief. Although there were very few cables in the file, he claimed that cable traffic on King was extensive, and went as far back as 1963. He confirmed that in the spring of 1965, CIA agents worked directly with FBI agents to bug King's Miami hotel room, but this information wasn't filed with the CIA's Office of Security (which ran domestic operations). It was filed instead with the "Western Hemisphere desk," which was responsible for the agency's vast anti-Castro operations, including the Bay of Pigs invasion.
This deceptive filing assured that the agency's politically sensitive, if not illegal, bugging of King would never pop up in domestic-surveillance files. Instead it would be cloaked by the top security of clandestine, anti-Castro operations.
Why was the CIA so interested in King? Because of its attitude toward "black power groups" and their alleged communist connections. Jay Richard Kennedy, a highly respected CIA source with close ties to the civil rights movement, warned the agency about this alleged infiltration:
The Communist left is making an all out drive to get into the Negro movement .... Communists or Negro elements who will be directed by the Communists may be in a position to, if not take over the Negro movement, completely disrupt it and cause extremely critical problems for the Government of the United States.
Kennedy believed that this wasn't simply a domestic problem, to be handled by the FBI alone, but should be considered an "international situation." So the CIA targeted black political groups with zeal.
Solving the case
In 1978, the HSCA turned its findings over to the Justice Department and suggested further investigation. A decade later, the Justice Department claimed that all known leads had been checked and that no further investigation "appears to be warranted... unless new information... becomes available."
Further investigation is warranted, for several reasons. First, the HSCA inquiry was glaringly inadequate. It's shameful that an investigation into the death of a man as important to this country's past and future as Martin Luther King, Jr., a man whom we now honor with a national holiday, was conducted so shabbily. He and his family-as well as the nation-deserve the full truth.
Second, the case has new leads, people and topics to be probed. If they're pursued, the question "who killed Martin Luther King?" may now be answerable.
* The National Security Agency, Defense Department, Air Force and CIA should be formally queried about any information they might have concerning Ray's aliases.
* The FBI and CIA should be required to produce all documents concerning their attempt to influence history or public opinion about the King case.
* The HSCA's files should be released to the public. Despite the committee's failures, their key documents and interviews could help to pursue the above leads. The film JFK evoked public pressure to release the HSCA's Kennedy files, but Congress still intends to keep its King files secret until the year 2028.
Who should conduct the investigation? It shouldn't be the FBI-even after two decades, the Bureau has at least a historical conflict of interest. Nor should the Justice Department have a primary role, due to its secrecy and inactivity during the decade following the HSCA's investigation. And another congressional effort would very likely become mired in the web of politics and personalities spawned by the previous committee.
The best alternative-although not without pitfalls-is to appoint a special prosecutor.
Archaeological evidence indicates that the atolls of Tokelau were settled around 1000 years ago. Oral history traces local traditions and genealogies back several hundred years and details the origins of the social and political order that was in place by the 19th century. According to oral sources, the three atolls functioned largely independently while maintaining social and linguistic cohesion. Tokelauan society was governed by chiefly clans, and there were occasional inter-atoll skirmishes and wars as well as inter-marriage. Historically, Fakaofo held dominance over Atafu and Nukunonu. Life on the atolls was subsistence-based, with reliance on fish and coconut. There is no soil on Tokelau, and therefore the vegetables and fruit that provided staples elsewhere in the Pacific (such as taro and bananas) were not available.
Contact with Europeans led to some significant changes in Tokelauan society. Trading ships brought new foods, cloth and materials, and exposure to new information and ways of doing things. In the 1850s, missionaries from the Roman Catholic Church and the London Missionary Society, with the assistance of Tokelauans who had been introduced to religious activities in Samoa, introduced Christianity, which was readily embraced. Currently, the majority of the Atafu population are Congregational Christians and most of the Nukunonu population are Catholic. On Fakaofo the majority of the population (around 70 percent) are Congregational Christians and most of the remainder are Catholic.
In the 1860s, Peruvian slave ships visited the three atolls and forcibly removed almost all able-bodied men (253) to work as labourers in Peru. The men died by the dozens from dysentery and smallpox, and very few ever returned to Tokelau. The impact of the slave ships was devastating, and led to major changes in governance. With the loss of chiefs and able-bodied men, Tokelau moved to a system of governance based on the Taupulega, or Councils of Elders. On each atoll, individual families were represented on the Taupulega (though the method of selection of family representatives differed among atolls). Village governance today is squarely the domain of the Taupulega.
Tokelau became a British protectorate in 1877, a status that was formalised in 1889. The British Government annexed the group (which had been renamed the Union Islands) in 1916, and included it within the boundaries of the Gilbert and Ellice Islands Colony (Kiribati and Tuvalu). In 1926 Britain passed administration of Tokelau to New Zealand. There has never been a residential administrative presence on Tokelau, and therefore administration has been ‘light-handed’ and impinged to a relatively small extent on everyday life on the atolls. Formal sovereignty was transferred to New Zealand with the enactment of the Tokelau Act 1948. While Tokelau was declared to be part of New Zealand from 1 January 1949, it has a distinctive culture and its own political, legal, social, judicial and economic systems.
Over the past three decades Tokelau has moved progressively towards its current advanced level of political self-reliance. It has its own unique political institutions, including a national legislative body and Executive Council. It runs its own judicial system and public services. It has its own shipping and telecommunications systems. It has full control over its budget. It plays an active role in regional affairs and is a member of a number of regional and international bodies. |
Chickpea Flour kcal to gr converter for culinary teaching and diet.
chickpea flour conversion
Amount: 1 kilocalorie (kcal) of chickpea flour energy
Equals: 3.99 grains (gr) in chickpea flour mass
chickpea flour from kilocalorie to grain Conversion Results:
Work out grains of chickpea flour per 1 kilocalorie unit. The chickpea flour converter for chefs and bakers, culinary arts classes, students and for home use.
Chickpea flour (besan)
Convert chickpea flour culinary measuring units between kilocalories (kcal) and grains (gr), in either direction: from kilocalories into grains, or from grains back into kilocalories.
Culinary arts school: chickpea flour conversion
One kilocalorie of chickpea flour converted to grains equals 3.99 gr
How many grains of chickpea flour are in 1 kilocalorie? The answer is: 1 kcal (kilocalorie) of chickpea flour equals 3.99 gr (grains) of the same chickpea flour type, as per the equivalent measure.
Professional cooks make sure they get the most precise unit-conversion results when measuring their ingredients, because success in fine cooking depends on it. In speciality cooking an accurate measure of chickpea flour can be crucial. If a recipe gives an exact amount of chickpea flour in kcal (kilocalories), the rule in the culinary trade is to convert that kilocalorie figure into gr (grains) of chickpea flour exactly. It is like insurance for the master chef, ensuring every dish turns out perfectly.
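For anyone who wants to do this conversion in a spreadsheet or script rather than through the on-line form, the arithmetic is a single multiplication. Below is a minimal Python sketch: the 3.99 grains-per-kilocalorie factor is the one quoted above for chickpea flour, while the energy density printed at the end (roughly 387 kcal per 100 g, using 15.4324 grains per gram) is an assumption back-calculated from that factor rather than a figure stated on this page.

```python
GRAINS_PER_GRAM = 15.4324   # mass conversion: 1 gram = 15.4324 grains
GRAINS_PER_KCAL = 3.99      # factor quoted by this converter for chickpea flour

def chickpea_kcal_to_grains(kcal: float) -> float:
    """Convert an energy amount of chickpea flour (kcal) into its mass in grains."""
    return kcal * GRAINS_PER_KCAL

def chickpea_grains_to_kcal(grains: float) -> float:
    """Convert a mass of chickpea flour (grains) back into kilocalories."""
    return grains / GRAINS_PER_KCAL

if __name__ == "__main__":
    print(round(chickpea_kcal_to_grains(1), 2))     # 3.99 grains per kilocalorie
    print(round(chickpea_kcal_to_grains(100), 1))   # 399.0 grains in 100 kcal
    print(round(chickpea_grains_to_kcal(3.99), 2))  # 1.0 kcal

    # Energy density implied by the quoted factor (an assumption, not stated here):
    grams_per_kcal = GRAINS_PER_KCAL / GRAINS_PER_GRAM
    print(round(100 / grams_per_kcal))              # about 387 kcal per 100 g
```

The same two constants can be reused for any other energy-to-mass pairing; only the grains-per-kilocalorie factor is specific to chickpea flour.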
Steve Wozniak, one of the founders of Apple Computer, answers questions at Macworld in San Francisco on January 9, 2007. (UPI Photo/Terry Schmitt)
Stephen Gary "Woz" Wozniak (born August 11, 1950) is an American computer engineer and programmer who founded Apple Computer Co. (now Apple Inc.) with co-founders Steve Jobs and Ronald Wayne. His inventions and machines are credited with contributing significantly to the personal computer revolution of the 1970s. Wozniak created the Apple I and Apple II computers in the mid-1970s.
In 1970, Wozniak became friends with Steve Jobs, when Jobs worked for the summer at a company where Wozniak was working on a mainframe computer. According to Wozniak's autobiography, iWoz, Jobs had the idea to sell the computer as a fully assembled printed circuit board. Wozniak, at first skeptical, was later convinced by Jobs that even if they were not successful they could at least say to their grandkids they had had their own company. Together they sold some of their possessions (such as Wozniak's HP scientific calculator and Jobs's Volkswagen van), raised USD $1,300, and assembled the first prototypes in Jobs's bedroom and later (when there was no space left) in Jobs's garage. Wozniak's apartment in San Jose was filled with monitors, electronic devices, and some computer games Wozniak had developed, similar to SuperPong but with voice-overs for the blips on the screen. Wozniak carried electronic devices with him often, and would entertain partygoers with novel devices.
This article is licensed under the GNU Free Documentation License.
It uses material from the Wikipedia article "Steve Wozniak." |
Utne Blogs > Science and Technology
How Botox Could Inhibit Emotions
by Bennett Gordon
Tags: Science, Technology, neurology, Botox, drugs, Discover,
Scientists think that human facial expressions have evolved over millions of years for better communication and empathy, Carl Zimmer writes for Discover. Babies instinctively mimic other people’s facial expressions, and some think this helps them understand what grownups are thinking. Some go further, postulating that facial expressions actually create emotions. “When humans mimic others’ faces,” Zimmer writes, “we don’t just go through the motions. We also go through the emotions.”
It makes sense, then, that emotional exchanges would be irrevocably altered by drugs like Botox. Plastic surgeons use Botox to make people look younger, but the drug also paralyzes facial muscles and inhibits facial expressions. Neuroscientists have tested patients using Dysport, a Botox-like drug found in Europe, by showing them images of angry faces and asking them to mimic or observe the expressions. Using brain scans, the scientists found that Dysport patients had weaker activity in the amygdala, a part of the brain that is key to experiencing emotions. This signals a change in the way that the Dysport patients experience emotions. Zimmer writes that through drugs like Botox and Dysport, “we’re tampering with the ancient lines of communication between face and brain that may change our minds in ways we don’t yet understand.”
rachel levitt
10/30/2008 9:42:46 AM
It makes sense that drugs like Botox would affect emotions, now that I think about it. Scientists have done plenty of research on it and found that mimicking positive or negative emotion actually creates it in study subjects! http://web.psych.ualberta.ca/~varn/bc/Kleinke.htm |
Sep 20
Green Laundry: Small, But Firm Steps To Save Earth
The more you care for the earth, the more you nurture it. Every tiny step counts when it comes to saving the earth and adopting eco-friendly ways of life. Laundry is no exception, and making this process as green as possible can save a lot of resources. Electricity, waste, water and chemicals: there are a lot of things involved. Minimize resource-wasting steps while laundering and you can definitely help preserve the environment. Make your laundry more eco-friendly by implementing the points given below. Try them next time you wash clothes.
1. Check Temperature Regulation. While setting the washing machine’s temperature, keep in mind that heating water consumes a great deal of electricity. A hot water wash cleans clothes most thoroughly, but using hot water for a wash cycle means you will be consuming roughly double the electricity. Reserve high temperatures for the dirtiest loads; lighter, frequently used and less dirty clothes can be washed with cold water.
2. Pre-wash Stained Clothes. Stained or heavily soiled clothes can be pre-washed before they are put into the wash cycle. This will get them cleaner while using fewer resources in the washing machine. It will also keep them from dirtying the other clothes in the load.
3. Wash Full Load Often. In terms of energy and water efficiency, a full load is recommended. People often run wash cycles for half loads, which wastes a lot of resources. Set a fixed laundry day and wait until you have a full load to wash. If you must wash a half load or less, it is wise to adjust the machine settings accordingly. Washers equipped with a water-return system can recover used water so it can be reused suitably.
4. Prefer Air & Sun Drying. Dryers should be used only when they are really needed. Use nature’s power, in the form of air and solar heat, to dry clothes. There are many benefits to air and sun drying: the sun disinfects clothes, while the air brushes through them to leave a natural aroma. Sun-dried clothes are crisp and last longer. Dryers tend to weaken fabric and form wrinkles, whereas sun drying helps clothes stay sturdy and wrinkle-free.
5. Detergents, Fabric Sanitizer & Others. Using conventional, chemical-laden detergents can be a big problem for the environment, your clothes and you. Wondering how? Well, the chemicals seep into the aquatic system through drains and spoil the ecosystem there. Harsh detergents can also harm your skin, causing allergies, skin infections and rashes. Is there a solution? Certainly; to begin with:
• Seek a solution in nature by using alternatives like soap nuts. These are seeds from a certain plant that form a soapy solution once mixed with water. Vinegar in the rinse cycle can not only remove stains but also leave clothes soft and residue-free. Tea tree oil used in the rinse cycle disinfects clothes and leaves them fresh and aromatic.
• Using concentrated detergent reduces the carbon footprint associated with detergent packaging, because a more efficient detergent can be packaged with less cost, space and material. Using eco-friendly detergents, fabric sanitizers and natural laundry disinfectants also helps keep the environment from getting polluted.
6. Hand Wash vs. Washing Machines. This comparison reinforces what we have been stressing about eco-friendly laundering. Though hand washing clothes eliminates the need for electricity, the water wastage roughly doubles. That is why washing machines turn out to be more energy and water efficient. Nowadays HE (high-efficiency) washing machines are available that make laundering significantly more environmentally friendly: they have reduced water usage to 55%-66% of conventional machines and electricity consumption to 50%, and they also cut drying time by extracting more water from clothes. By using HE washing machines and HE detergents you can get completely clean clothes, which makes modern washing machines the more eco-friendly option for laundering.
Author Bio: The author is a fitness freak whose passion for writing drives him to write articles; he has been doing so for two years. This time he is sharing some eco-friendly techniques to disinfect clothes.
Blame it on your genes if coriander tastes like soap
• 13 Sep 2012
For many people, coriander is an essential herb. Innumerable regional cuisines rely on it as a basic ingredient. Then there are those who think it tastes like soap. Until now it has been largely assumed that this was just down to those people not being exposed to coriander as kids, as with other foodstuffs people tend to dislike, but it now emerges that there could be a genetic influence at work.
In a paper published on 10 September, and as noted by Nature News, statistical geneticist Nicholas Eriksson and colleagues worked through a genetic comparison of two separate samples of over 10,000 people: one of people of European ancestry who said whether coriander tasted like soap to them, the other of people of all genetic backgrounds who had declared their like or dislike of coriander. The result was a correlation between disliking coriander and two genes -- one associated with enjoying smells, and another associated with linking smells to taste.
This isn't the first time that coriander preference has been linked to genetics, and there was even a study in May that found a distinctive divide between cultures where coriander is a common herb and those where it is less common -- dislike among South Asian, Hispanic and Middle Eastern participants ranged between three and seven percent, whereas among those of East Asian, Caucasian or African descent the range was between 14 and 21 percent.
The new study does, however, offer a hint as to what mechanism might be at play -- a large part of what we experience as taste is actually smell, so if coriander smells of soap it will override the taste of the herb in the mouth. It's similar to the study previously reported on which found a similar gene-smell link to explain why some people find the whiff of meat utterly appalling.
Hatred of coriander is a common thing -- it doesn't take much effort to find groups of people online who detest the herb. But now it looks like it's OK -- they're not misguided.
They just can't help themselves. Poor souls.
Image: Shutterstock |
What is a Healthy Body Fat Percentage?
Article Details
• Written By: John Lister
• Edited By: Bronwyn Harris
• Last Modified Date: 17 November 2016
Medical experts today often argue that body fat percentage is a better guide to a person's health than other measures such as their body mass index. Body fat is made up of two components: essential fat and storage fat. Because different people have different needs for essential fat, the healthy body fat percentage varies across age and gender.
Body fat percentage is exactly what the name suggests: the percentage of your body weight which is made up of fat. It's usually considered to be a better guide to health than the body mass index, which simply looks at your weight and height. This can be misleading as, for example, an athletic and healthy person may have a high body mass index because they have a muscular physique. Alternatively, somebody with a low body mass index may have a dangerous proportion of their weight made up by fat, which can increase the danger of obesity-related health issues.
There are several ways of measuring to see if you have a healthy body fat percentage. The most common is bioelectrical impedance. This involves sending a small electrical signal through your body. Because fat has a lower water content than other tissue, it is a less effective conductor of electricity and thus slows down the movement of the signal.
Less common methods include using calipers to see how much fat can be pulled away from particular areas of the body. Some researchers use a technique involving infrared light, though this is not particularly reliable. The most effective way to check if somebody has a healthy body fat percentage is to weigh them underwater with their lungs emptied. This works because fat floats in water while bones and muscle sink. Naturally this technique should only ever be carried out by trained professionals.
Calculations to find a healthy body fat percentage have to take into account the age and gender of the person. That's because there's a certain level of fat, known as essential fat, which people need to survive. This level varies: women need more than men as their bodies are designed for pregnancy, while older people normally need more to keep warmer. Fat beyond this level is known as storage fat, some of which is needed as fuel if the body does not have enough food. Too little or too much storage fat can cause health risks.
Exactly what body fat percentage is healthy is a disputed point. There are several online charts which vary in the figures they give, though the general trends are largely the same. To give one example, the World Health Organization lists healthy female ranges as rising from 17-31% for an 18-year-old to 24-36% for somebody aged over 60, while the male ranges rise from 10-20% for an 18-year-old to 13-25% for somebody aged over 60. Because of the differing figures from different sources, it is best to consult your own doctor or other medical professional to determine what is a healthy body fat percentage for you.
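For readers who like to see the lookup spelled out, here is a small illustrative Python sketch that checks a measured body fat percentage against the two example WHO ranges quoted above (18-year-olds and the over-60s, by sex). The article gives no figures for the ages in between, so the table below covers only those two endpoints; it is a toy example built on the numbers cited here, not a clinical tool.

```python
# Healthy body-fat ranges quoted above (WHO figures cited in this article).
# Only the two endpoint age groups are given, so only those are encoded.
# Illustrative sketch only -- not medical guidance.
HEALTHY_RANGES = {
    ("female", "18"):  (17.0, 31.0),
    ("female", "60+"): (24.0, 36.0),
    ("male", "18"):    (10.0, 20.0),
    ("male", "60+"):   (13.0, 25.0),
}

def classify(body_fat_pct: float, sex: str, age_group: str) -> str:
    """Say whether a body fat percentage falls below, within, or above the quoted range."""
    low, high = HEALTHY_RANGES[(sex, age_group)]
    if body_fat_pct < low:
        return f"below the {low}-{high}% range quoted for this group"
    if body_fat_pct > high:
        return f"above the {low}-{high}% range quoted for this group"
    return f"within the {low}-{high}% range quoted for this group"

print(classify(22.5, "female", "18"))   # within the 17.0-31.0% range quoted for this group
print(classify(27.0, "male", "60+"))    # above the 13.0-25.0% range quoted for this group
```

As the article notes, published charts differ, so the exact boundaries should come from your own doctor or other medical professional rather than from a table like this one.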
Discuss this Article
Post 3
@ocelot60- I have also found that not everyone fits the mold of what doctors, dieticians, and other professionals say is a healthy body weight based on height and age. I have known thin, active people who have had health problems, and larger people who are the picture of health.
Follow your doctor's advice, make sure your blood pressure and cholesterol levels are in check, and otherwise don't worry too much about what your healthy body fat percentage should be based on something you heard or read.
Post 2
@ocelot60- Don't listen to others, but do what feels best to you. If you are what the experts say is 20 pounds overweight but you are healthy and feel good, don't worry about body fat charts that indicate that you are too fat.
Post 1
How do you know what your ideal body weight is? Though I have the tendency to gain a few extra pounds here and there, I feel best when I am not at what statistics say my best weight and healthy body fat percentage should be. For example, if I get too thin, I am tired all the time and just don't feel well. How do I find a happy medium when it comes to my weight, and feel good about it?
Reverse Word Search Lookup
Dictionary Suite
bib a piece of cloth tied under the chin and worn esp. by babies to protect the clothing during a meal. [1/2 definitions]
bonnet a traditional women's cloth hat with a brim and fastened beneath the chin with ribbons, now worn primarily by infants. [1/6 definitions]
burnsides a mustache and side whiskers worn with the chin clean-shaven. (See sideburns.)
Canada goose a large wild goose of North America, having brownish gray feathers, a black head and neck, and a white patch under the chin and up both sides of the face.
chin in gymnastics, to pull (oneself) upward from a dangling position while grasping a horizontal bar until the chin is level with the bar. [2/3 definitions]
chinless combined form of chin.
chuck1 to playfully or affectionately touch or pat, esp. under the chin. [2/6 definitions]
double chin one or more fatty folds of flesh beneath the chin.
face the part of the head that extends from the forehead to the chin and from ear to ear. [1/11 definitions]
feature an element of the face such as the eyes, nose, or chin. [1/9 definitions]
gill1 (pl.) a fleshy, wrinkled piece of skin or flesh hanging below the beak of a bird or under the chin of a person; wattle. [1/5 definitions]
goatee a small beard on a man's chin, often trimmed to a tuft or point and resembling the beard of a goat.
imperial2 a small, pointed beard grown on the lower lip and chin.
jawline the line or contour formed by the lower edge of the jawbone and chin.
muttonchops side whiskers on either side of a clean-shaven chin, that are rounded and broad at the lower jaw and narrow at the temple.
pull-up an exercise to strengthen the arms in which one hangs by the hands from an overhead bar and gradually pulls the body up until the chin is even with the bar. [1/2 definitions]
uppercut a quick blow that is directed upward, usu. to an opponent's chin. [1/2 definitions]
violin a relatively small, high-pitched stringed instrument with an unfretted fingerboard, whose four strings are tuned in fifths, and that is played by being held horizontally out from chin and shoulder.
whisker (usu. pl.) facial hair growing on the upper lip, cheeks, and chin. [1/3 definitions] |
lord of flies
Essay by rus1234 September 2014
Anthony Lomatchinski
Ms. McManigal
English 2, Period 3
16 December, 2013
Even the Most Innocent
Can an object stand for something more than just itself? If so, this object would be called a symbol. In the book "Lord of the Flies", William Golding uses many different symbols. The book is about a group of young boys surviving on a remote island. In the story William Golding uses many different objects to show the boys' descent into savagery. Three symbols that he uses are a pair of specs, the killing of a sow, and also one of the oldest boys: Jack.
One of the symbols Golding uses to show the boys' descent into savagery is a pair of specs owned by a plumpish boy named Piggy. In the beginning of the story, when Piggy and Ralph get into a lagoon, Piggy explains how he cannot swim and "rose dripping from the water and stood naked, cleaning his glasses with a sock" (Golding 13).
In this scene the glasses, which stand for survival, are looking nice, are in good working condition, and help Piggy see and survive on the island. The boys also use the specs to make a signal fire, which is the only hope they have for survival. Later on in the story Jack and Piggy get into an argument. Jack, getting mad, "smacked Piggy's head. Piggy's glasses flew off and tinkled on the rocks" (71). Simon picks them up and states that they are broken. At this part of the story Piggy's specs are still usable but are in bad condition and hard to work with. Piggy has more trouble seeing and a harder time trying to survive due to the splintering headaches he gets from looking through only one lens. Also now the...
Essay by EssaySwap Contributor, High School, 11th grade, February 2008
Reconstruction was Abraham Lincoln's plan to reunite the country, not to reform it. After the war Lincoln sent one of his generals to tour the south and find out what years of war had done to the confederate states. During the general's tour of the south he realized that if the confederate states were left alone during the post-war years they would come up with a new form of forced labor and blacks would be in the same position as they were before the war. Blacks needed land and voting rights to defend themselves, and until southern whites abandoned their old beliefs that would not happen.
The conditions of the former slaves didn't change that drastically at first. Most slaves went back to work on farms and plantations but now they were getting wages and didn't have to worry about violence from their planters since whipping had been outlawed after the war.
Most ex-slaves weren't happy with this because they wanted their own land since they were after all free men. In 1865 the government set up the Freedman's Bureau to ease the transition of blacks from slave to free people. The bureau also got permission from congress to divide abandoned property into 40 acre plots and rent the plots to freedman with intentions of selling them the property later. To most blacks however freedom just meant that they could do everything that they had wanted to do but couldn't since they were bound to a plantation. A lot of blacks enjoyed just going for walks and seeing what was up the road or over the hill, they could sleep past sunrise and speak to whites whenever they felt like it. Most freed slaves left slavery with a desire to learn to read and write and many blacks went... |
Sophocles: The Legend of Greek Tragedy
Essay by Jenemart, University, Bachelor's, A, April 2002
Sophocles, perhaps more than any other Athenian author, typified the ancient Greek ideal that a man, no matter what his other ambitions and accomplishments, should live fully in the present (Robinson 1). Sophocles exemplified living in the present. He was involved in many different activities, from civic duties to various capacities in the state. He has been called the quintessence of the Greek, the great tragedian, and he has become immortal within the realm of Greek poets. The spirit of Sophocles is the strife to understand the irresistible movement of events, and man's helplessness as far as fate is concerned (Hamilton 258-59). This strife for knowledge is the driving force behind Sophocles' great tragedies. Aristotle wrote, "Tragedy should be a serious and complete imitation of action; it should arouse pity and fear and provide a catharsis, or purging of these emotions" (Robinson 3).
In ancient Greece, dramas were performed each year in Athens as part of the festival of Dionysus, the god of wine, vegetation, religious ecstasy, the mask, and the theater. Tragedies were introduced relatively late in the history of the Athenian theater, probably in the 430's (Segal 36-37). The entire process of Greek tragedy production was in the hands of the author.
The staging of the plays was simple. The circular performance area was backed by a long, low building with a slightly raised platform in front of it. Compared to modern plays, the stage was bare. There were few props, and only three male actors per play. The plays were performed from beginning to end without an intermission (Segal 38-39). The simplicity of the staging and the lack of props allowed the imagination of the audience to run wild. Sophocles used all aspects of Greek theater, including the... |
A CHARITY fundraiser from York will see first-hand how international aid benefits people on the ground in India.
Caroline Beavers, of Holgate, York, who works as a manager at a service partner for Yorkshire Water in Bradford, has been chosen to represent the company on a week-long trip to India along with fundraisers from 11 other water companies.
The 32-year-old will visit communities to find out what life is like without safe water and sanitation and visit WaterAid projects in both urban slums and rural villages to see how the money raised by herself and her colleagues is making a difference.
Miss Beavers said: “It’s shocking that 2,000 children die every day from diseases caused by dirty water and poor sanitation.
“Clean water is something we take for granted in the UK but some people have no choice but to drink dirty water that could make them ill, or worse.”
As part of the trip, to the Madhya Pradesh region of India, Miss Beavers will spend time with a local family living without clean water and sanitation, learning first-hand about the challenges they face without access to these vital resources.
Brought up in the Acomb area of York, where her family still lives, Miss Beavers went to Manor CE School and York Sixth Form College and then Durham University.
While out in India she will also meet children from local schools, sit in on hygiene education sessions, take part in some construction work and learn how access to clean water and sanitation has helped transform people’s lives.
Miss Beavers said: “This trip is a chance for me to see for myself the work that WaterAid is doing to change this and I hope to use my experiences to inspire even more people to get involved and raise funds for this vital cause.”
India has a population of more than one billion.
Diseases are common throughout the country due to contaminated drinking water sources and poor sanitation.
WaterAid estimates that only 31 per cent of the population has adequate sanitation and 320,000, children under five die every year as a result. |
Wednesday, March 06, 2013
Do children listen to less music? Netmums say yes.
There's a survey that's been gathered by Netmums, which is a bit like Mumsnet but with its name the other way round; they're attempting to work out when childhood ends by asking parents what they think.
The results don't actually appear to have been published, but the summary has news that might make music industry executives a little queasy:
Only 23% spend time reading compared to 41% of their parents at the same age, while half the number of modern tweens listen to music (17%) compared to their parents (39%).
The musical sky is falling in! The musical sky is falling in!
But hang on a moment. The summary of the findings isn't exactly scientifically worded. Take this bit:
Parents also slammed retailers provision for tween fashion, especially for girls, with over half (54%) angry that stores only provide 'clothes that can be too sexual, such as overtly short skirts or crop tops'.
It's a valid concern, certainly. But did the research actually ask parents if they were "angry"? Or just if they agreed it was happening? And the word "only" in there is suspicious. Stores only sell clothes that can be too sexual? The "only" seems quite definitive, but the "can be" seems more vague.
And what does this mean about the other 46%? Are they okay with the idea? Do they not believe the proposition?
Perhaps in the original research this question is a bit clearer, but without access to that data, all we've got is a presentation that has been designed to generate headlines.
So, we should approach this finding with caution. But even if you take it at face value, are less than one-in-five tweens listening to music?
Almost certainly not. It's probably more a generational difference in what constitutes "listening to music". Across the last generation, music has crept more and more inside personal devices, listened to through earbuds, with silent, gestural interfaces; music is purchased remotely and doesn't enter the house in plastic bags, making its arrival harder to spot.
And while a parent can think back and recall that at a social gathering, they listened to music, they're less able to judge if "listening to music" is part of an event that is pitched to them as "having mates over".
And there's probably a smidge of generational snobbery in there too - when I had a young, fresh face, I can recall scraggier, more wrinkly people telling me that my generation didn't really listen to music.
For their cohort, it was communal experience, rolling ciggies on the album sleeve and communing with the music, whereas us lot? We just had it on in the background and stuck photos of popstars in scrapbooks.
Are the current generation of parents any different? Kids today don't listen to music like Justin Jespen. It's not like them and The Spice Girls, where you had to really pay attention to understand the message.
So, less than one-in-five tweens listening to music? We could try asking them directly. But we'll have to wait until they've finished making their Harlem Shake video.
Calcium is a mineral that gives strength to your bones. Calcium is also necessary for many of your body’s functions, such as blood clotting and nerve and muscle function. During the teenage years (particularly ages 11-15), your bones are developing quickly and are storing calcium so that your skeleton will be strong later in life. Nearly half of all bone is formed during these years. It’s important that you get plenty of calcium in your diet because if the rest of the body doesn’t get the calcium it needs, it takes calcium from the only source that it has: your bones. This can lead to brittle bones later in life and broken bones or stress fractures at any time. Unfortunately, most teen girls actually do not get enough calcium in their diet.
What is osteoporosis?
Osteoporosis is a bone disease that develops slowly and is usually caused by a combination of genetics and too little calcium in the diet. Osteoporosis is a disease in which bones become fragile and more likely to break. Osteoporosis can also lead to shortened height because of collapsing spinal bones and can cause a hunched back.
How do I know if I’m at risk?
Several factors can put a young person at risk for developing osteoporosis. They include:
• Being white
• Being female
• Having irregular periods
• Doing little or no exercise
• Not getting enough calcium in your diet
• Being below a normal weight
• Having a family history of osteoporosis
• Smoking
• Drinking large amounts of alcohol
Osteoporosis can be prevented. There are some risk factors that you cannot change (such as your race and the fact that you’re female), but there are some you can! Eat a healthy diet, get some exercise, and don’t smoke!
How much calcium do I need?
Children and teenagers between the ages of 9 and 18 should aim for 1,300 milligrams per day, which is about 4 servings of high-calcium food or drinks. Each 8-ounce glass of milk (whether skim, 1%, 2%, or whole) and each cup of yogurt has about 300 milligrams of calcium. Adults 19 to 50 years of age should aim for 1,000 milligrams per day.
How do I know how much calcium is in the foods I eat?
For foods that contain calcium and have a nutrition facts label, there will be a % Daily Value (DV) listed next to the word calcium. To figure out how many milligrams of calcium a serving of food has, take the % DV, drop the % sign, and add a zero. Can you use the label to find out how much calcium is in one cup of skim milk? 30% means there is about 300mg of calcium per serving. The table below shows how much calcium is in some calcium-rich foods from different food groups.
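If you want to do this label arithmetic in a program (a simple meal tracker, for instance), the rule above of dropping the percent sign and adding a zero is just multiplying the %DV by 10, which assumes the 1,000-milligram daily value these labels use for calcium. The sketch below is a minimal Python example; the list of servings at the end is hypothetical and only shows how the numbers add up toward the 1,300-milligram teen target.

```python
def dv_percent_to_mg(dv_percent: float, daily_value_mg: float = 1000.0) -> float:
    """Convert a calcium % Daily Value from a nutrition label into milligrams.

    'Drop the % sign and add a zero' is the same as multiplying by 10,
    which assumes the 1,000 mg daily value used on the labels described here.
    """
    return dv_percent / 100.0 * daily_value_mg

# Example from the guide: skim milk lists 30% DV for calcium.
print(dv_percent_to_mg(30))          # 300.0 mg per serving

# Hypothetical day of calcium-containing servings, checked against the
# 1,300 mg target for 9- to 18-year-olds.
servings_dv = [30, 30, 20, 35]       # %DV values, made up for illustration
total_mg = sum(dv_percent_to_mg(p) for p in servings_dv)
print(total_mg, total_mg >= 1300)    # 1150.0 False -> still about 150 mg short
```

The same helper works for any label nutrient whose daily value you know; just pass a different daily_value_mg.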
What foods contain calcium?
You probably know that dairy foods such as milk and cheese are good sources of calcium, but do you know that tofu and beans contain calcium, too? Even if you don’t drink milk or eat cheese, you can get the calcium you need from other foods. See the list of high-calcium foods at the end of this guide.
What if I’m lactose intolerant?
If you are lactose intolerant and can’t drink milk, there are plenty of other ways to get enough calcium. These include eating foods high in calcium and drinking fortified soy milk, fortified juice, almond milk or lactose-free milk (the lactase enzyme that you are missing has been added into the milk). You may also take lactase enzyme tablets before eating dairy products to help digest the lactose sugar in the milk. Some people who are lactose intolerant can tolerate having small amounts of milk or other dairy products.
How can I get more calcium in my diet?
Here are some ideas for how you can get more calcium in your breakfast, lunch, dinner, and snacks:
Calcium tips
What if I just can’t get enough calcium in my diet?
It’s best to try to meet your calcium needs by having calcium-rich foods and drinks, but some teens find it hard to fit in 4 servings of high-calcium foods daily. If you don’t like dairy foods or calcium fortified juice or soymilk, you may need a calcium supplement. Calcium carbonate (for example, Viactiv® or a generic chewable) and calcium citrate (for example, Citracal®) are good choices. When choosing a supplement, keep the following tips in mind:
• Most calcium supplements have between 200 and 500 milligrams of calcium. Remember, your goal is 1,300 milligrams per day.
• If you have to take more than one supplement per day, it is best to take them at different times of the day because your body can only absorb about 500 milligrams of calcium at a time.
• Don’t count on getting all of your calcium from a multivitamin. Most basic multivitamin/mineral tablets have very little calcium in them.
• Look for a calcium supplement that has vitamin D added. Vitamin D helps your body absorb calcium.
• Your dietitian or health care provider can recommend the supplement that will best suit your needs.
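As a rough illustration of the tips above, this sketch splits a daily calcium shortfall into supplement doses of at most 500 mg. The food-intake figures are example assumptions for the calculation, not recommendations.

import math

def supplement_doses(goal_mg=1300, from_food_mg=600, max_dose_mg=500):
    """Return how many supplement doses are needed to cover the shortfall,
    keeping each dose at or below the ~500 mg the body can absorb at once."""
    shortfall = max(goal_mg - from_food_mg, 0)
    return math.ceil(shortfall / max_dose_mg)

print(supplement_doses())                    # 700 mg shortfall -> 2 doses
print(supplement_doses(from_food_mg=1000))   # 300 mg shortfall -> 1 dose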
Food | Serving | Milligrams of Calcium
Dairy Products:
Yogurt, low-fat | 1 cup | 338-448
Ricotta cheese, part-skim | 1/2 cup | 335
Milk (skim) | 1 cup | 299
Fortified soy and rice milks | 1 cup | 301
Milk (1%) | 1 cup | 305
Milk (whole) | 1 cup | 276
Ricotta cheese, whole | 1/2 cup | 255
Swiss cheese | 1 ounce | 224
Mozzarella cheese, part skim | 1 ounce | 222
Cheddar cheese | 1 ounce | 204
Muenster cheese | 1 ounce | 203
American cheese | 1 ounce | 296
Frozen yogurt | 1/2 cup | 103
Ice cream | 1/2 cup | 84
Pudding | 4 ounce container | 55
Protein Foods:
Canned sardines (with bones) | 3 ounces | 325
Soybeans, cooked | 1 cup | 261
Canned salmon (with bones) | 3 ounces | 212
Nasoya Tofu Plus®, firm | 3 ounces | 201
Kidney beans, canned | 1/2 cup | 44
White beans, cooked | 1/2 cup | 80
Crab, canned | 3 ounces | 90
Clams, canned and drained | 3 ounces | 55
Almonds | 1 oz (24 nuts) | 76
Sesame seeds | 1 tablespoon | 88
Vegetables and Fruits:
Collard greens, cooked | 1/2 cup | 134
Spinach, cooked | 1/2 cup | 122
Kale, cooked | 1/2 cup | 47
Broccoli, cooked | 1/2 cup | 31
Calcium-fortified orange juice | 1 cup | 349
Rhubarb, cooked | 1/2 cup | 174
Dried figs | 1/3 cup | 72
Cereals and Bars:
Total Raisin Bran® Cereal | 1/2 cup | 500
Cream of Wheat® Cereal | 1 cup | 303
Basic 4® Cereal | 1 cup | 250
Kix® Cereal | 1 1/4 cup | 171
Luna® Bar | 1 bar | 425
U.S. Department of Agriculture, Agricultural Research Service. 2013. USDA National Nutrient Database for Standard Reference, Release 26. |
Bill Gates Ruffles Feathers, Defends Genetically Modified Organisms
From an article in The Guardian LV: “The week after the New Year seems to be the week of genetically modified organisms (GMOs) taking over business headlines all over the United States, and now Bill Gates is ruffling a few feathers as he defends GMO crops. Gates has asked everyone to keep an open mind, arguing that modified seeds are a need of the hour and should not be dismissed without a comprehensive and conclusive study of their side effects.
It all started when a bill introduced on the island of Hawaii in May 2013 to ban genetically engineered crops raised a lot of questions in the minds of the general population. The resulting hype against GMOs left many companies in the processed food industry with jitters, since a majority of them use GMOs in their products. All such products in use are certified safe after scientific investigation.
Such negative reactions and demand for stringent laws against GMO crops would be detrimental to business. The discussion also seems to have found a greater push since the announcement by General Mills that it will be making its Cheerios “GMO free.”
For the uninitiated, a genetically modified organism (GMO) is a living organism, the genetic material of which has been modified. The alteration may be achieved either by mutation, deletion or insertion of genes from another species to achieve certain desirable characteristics such as bigger size or resistance to disease or bugs.
Extensively used in processed and other food items, GMO crops form a major portion of today’s soybean and corn production and are also gaining traction in the production of canola oil, alfalfa and sugar beets as well.
Another well-known name in the form of Bill Gates defending genetically modified organisms and the resulting GMO crops can only lead to discussions going forward with greater vigor. He has touched a few nerves and ruffled enough feathers to boost the topic’s importance further with his request that all keep an open mind going forward.
Gates is of the opinion that GMOs are a particularly important subject for poor countries, where the government finds it very difficult to feed everyone. Such countries should adopt safe testing and innovation practices similar to those they follow for medicines. Safety checks would mitigate the concerns of consumers and also ensure that healthy drought- and disease-resistant crops could be achieved.
With an estimated 85 percent of U.S. corn output already genetically modified, it is likely that others will soon follow suit. Since everyone has their own take on the subject, it has become important to highlight the bottom line and put forth a variety of viewpoints.
This shifts the balance in favor of the customer who can then decide what they desire and then mold business practices to meet that demand effectively.
U.S. regulators' view:
Used in U.S for 20 years and deemed safe
FDA regulates its usage (This raises the question, though, if it’s safe then why the need to regulate?)
Critics' view:
Genetic engineering alters nutritional value, creates toxic plants or allergic molecules
Need more research
GMOs are herbicide resistant thus lead to higher herbicide use
World view:
EU tests and insists on labeling GMO products
It’s banned in Switzerland, Austria and Hungary
Plants and animals with GMOs are banned in Japan, New Zealand and parts of Australia.
Counties in California and in other parts of the United States are also contemplating bans on GMOs
It is difficult to pinpoint where this storm has its origin; it could be The New York Times report by Amy Harmon or the continued discussion by other news agencies. It has certainly taken the fancy and fear of gullible consumers to another level.
Whether there will be a more mature response from business houses in terms of labeling GMO products, whether they will follow Cheerios' example and drop GMOs from their products, or whether GMOs will simply become a permanent part of everyone's lifestyle will soon become apparent.
All the people currently rallying against or defending genetically modified organism (GMO) crops should, in effect, listen to Bill Gates and discuss the entire gamut of viewpoints with an open mind. Only then will the world be able to reach a definitive and all-inclusive solution to the concerns that GMO crops have brought into the limelight and ease the ruffled feathers in the milieu.
Royce Christyn
Culture and ethnicity on childhood development
1). Describe the influence of culture, ethnicity, and socioeconomic status on childhood development, and discuss the role of acculturation in this process.
2) What impact do (the influence of culture, ethnicity, and socioeconomic status) each of these factors have on parenting styles and the overall growth and development of children?
3). For example, what do current research findings tell us about the differences between ethnic minority parents and those in a higher socioeconomic group?
Min. 250 words in length. Support your answers using reputable academic resources, and properly cite any references with APA format.
Solution Preview
1) Culture tends to affect childhood development because a child picks up a great deal of his or her tendencies and methods of interacting with the environment from the cultural norms expressed by his or her parents and others. Different ethnicities have different methods of raising and teaching their children, which influences the manner in which children develop psychologically. Socio-economic status is a strong determining ...
Does canning or freezing have a smaller footprint?
A season’s canning, ready to go home. (Photo: Lloyd Alter)
Recently Tom Oder addressed the question Is freezing or canning better? He concludes that “it depends” — on variables like preference, time and the type of food. But there are other variables that perhaps are worth mentioning too.
Which has a smaller energy footprint?
Canning involves boiling the jars of food to sterilize and seal them, a one shot burst of energy. However freezing the food requires continuous long-term consumption of electricity to keep the food frozen. The longer you store the food, the more it costs. An academic study published in the Journal of Food Science in 1980 calculated the energy use for processing and storing 50 pounds of vegetables and determined that freezing for six months used about three times as much energy as canning; for a year, it used six times as much energy.
But it’s hard to extrapolate this to today; new fridges use a third of the electricity used by 1980 fridges while electricity costs 2.5 times as much as it did then. The numbers also vary significantly according to whether the fridge is full or not. Chest freezers are twice as efficient as uprights, so a lot can be done to minimize electricity use.
A more recent analysis, done for a book that promotes dehydration and with unverifiable sources, concludes that freezing uses 15 times as much electricity and costs four times as much per pound as canning, when you factor in the cost of the equipment.
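To make the comparison concrete, here is a rough sketch that treats the figures quoted above as assumptions: the 1980 study's ratios (freezing uses about 3 times canning's energy after six months of storage, 6 times after a year) and a modern freezer using roughly one third of the 1980 electricity. The numbers are illustrative, not measurements.

def freezing_vs_canning(months, modern_efficiency=1/3):
    """Energy to freeze and store a batch of vegetables, expressed as a
    multiple of the energy needed to can the same batch. Storage energy is
    assumed to grow linearly with time, anchored to the 1980 study's ratios
    (3x at 6 months, 6x at 12 months), then scaled for a modern freezer."""
    ratio_1980 = 0.5 * months          # 3 at six months, 6 at twelve
    return ratio_1980 * modern_efficiency

for m in (6, 12):
    print(f"{m} months frozen: about {freezing_vs_canning(m):.1f}x the energy of canning")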
We store our peaches in the wall framing. (Photo: Lloyd Alter)
Which has a smaller physical footprint?
Another source, The Natural Canning Resource Book, made a couple of relevant points beside energy use.
Canning is less energy-intensive than refrigeration and long-term freezing. Frozen food stored longer than four months uses more energy than canning. Moreover, canning requires only a one-time use of energy, rather than a non-interruptible supply. Canning makes modern life with a small or non-existent refrigerator possible.
There are two important points there. If you’re worried about resilience, about dealing with power outages and other disruptions, canned food is a lot better than frozen food. If you’re living in a small space or a tiny apartment, if you’re a renter rather than an owner, it’s a lot easier to store canned food than it is to store a freezer. And as I say all the time over on TreeHugger, Small fridges make good cities.
On the other side of the table, that same study in the Journal of Food Science found that people preferred the taste of frozen food over canned by a long shot. And whatever method you use, the end result is that you want people to eat it and like it. So it’s not so simple.
From Wikipedia, the free encyclopedia
For other uses, see Carbine (disambiguation).
Not to be confused with carbyne or carbene.
Various muzzle-loading arms, to scale; numbers 1, 10, and 11 are identified as carbines. (Encyclopædia Britannica, 1910)
A carbine (/ˈkɑːrbaɪn/ or /ˈkɑːrbiːn/),[1] from French carabine,[2] is a long gun with a shorter barrel than a rifle or musket.[3] Many carbines are shortened versions of full-length rifles, shooting the same ammunition, while others fire lower-powered ammunition, including types designed for pistols.[citation needed]
The smaller size and lighter weight of carbines make them easier to handle. They are typically issued to high-mobility troops such as special-operations soldiers and paratroopers, as well as to mounted, artillery, logistics, or other non-infantry personnel whose roles do not require full-sized rifles, although there is a growing tendency for carbines to be issued to front-line soldiers to offset the increasing weight of other issued equipment. An example of this is the US Army's M4 carbine, which is standard-issue.
Word origin
Some sources derive the name of the weapon from its first users — cavalry troopers called "carabiniers", from the French carabine, from Old French carabin (soldier armed with a musket), perhaps from escarrabin, gravedigger, which derives from scarabee, scarab beetle.[4]
Early history: before the 1900s
Carbine model 1793, used by the French Army during the French Revolutionary Wars.
Left image: Jean Lepage flintlock carbine named "du Premier Consul" in honour of Napoleon, circa 1800.
Right image: Rifling of Lepage carbine.
The carbine was originally a lighter, shortened weapon developed for the cavalry. Carbines were short enough to be loaded and fired from horseback, but this was rarely done – a moving horse is a very unsteady platform, and once halted, a soldier can load and fire more easily if dismounted, which also makes him a smaller target (Napoleonic-era and earlier cavalry did fight from horseback, but they fought with sabers and large muzzle-loading horse pistols, so called because their large size meant they were most easily carried in a saddle holster, much like the later Colt-Walker revolver). After the Napoleonic Wars, cavalry began fighting dismounted, using the horses only for greater mobility, an early form of what is today known as mechanized infantry. By the American Civil War, dismounted cavalry were mostly the rule. The principal advantage of the carbine was that its length made it very portable. Troops could carry full-length muskets comfortably enough on horseback if just riding from A to B (the practice of the original dragoons and other mounted infantry). Cavalry proper (a "Regiment of Horse") had to ride with some agility and engage in sword-wielding melées with opposing cavalry or pursue running infantry, so carrying anything long would be a dangerous encumbrance. A carbine was typically no longer than a sheathed sabre, and like a sheathed sabre was carried arranged to hang clear of the rider's elbows and horse's legs.
Carbines were usually less accurate and less powerful than the longer muskets (and later rifles) of the infantry, due to a shorter sight plane and lower velocity of bullets fired from the shortened barrel. With the advent of fast-burning smokeless powder, the velocity disadvantages of a shorter barrel became less of an issue (see internal ballistics). Eventually, the use of horse-mounted cavalry would decline. But carbines continued to be issued and used by many who preferred a lighter, more compact weapon even at the cost of reduced long-range accuracy and power, such as artillery troops, who might need to defend themselves from attack but would be hindered by keeping full-sized rifles around; thus, a common title for many short rifles in the late 19th century was artillery carbine.
During the early 19th century, carbines were often developed separately from the infantry rifles and, in many cases, did not even use the same ammunition, which made for supply difficulties. A notable weapon developed towards the end of the American Civil War by the Union was the Spencer carbine, one of the very first breechloading, repeating weapons. It had a spring-powered, removable tube magazine in the buttstock which held seven rounds and could be reloaded by inserting spare tubes. It was intended to give the cavalry a replacement weapon which could be fired from horseback without the need for awkward reloading after each shot (although it saw service mostly with dismounted troopers and infantrymen, as was typical of cavalry weapons during that war). In the late 19th century, it became common for a number of nations to make bolt-action rifles in both full-length and carbine versions. One of the most popular and recognizable carbines were the lever-action Winchester carbines, with several versions available firing revolver cartridges. This made it an ideal choice for cowboys and explorers, as well as other inhabitants of the American West, who could carry a revolver and a carbine, both using the same ammunition.
Shorter rifles, shorter carbines: World War I and World War II
M1 Garand and M1 Carbine
In the decades following World War I, the standard battle rifle used by armies around the world had been growing shorter, either by redesign or by the general issue of carbine versions instead of full-length rifles. This move was initiated by the US Model 1903 Springfield, which was originally produced in 1907 with a short 24-inch barrel, providing a short rifle that was longer than a carbine but shorter than a typical rifle, so it could be issued to all troops without need for separate versions. Other nations followed suit after World War I, when they learned that their traditional long-barreled rifles provided little benefit in the trenches and merely proved a hindrance to the soldiers. Examples include the Russian Model 1891 rifle, originally with an 800 mm (31 in) barrel, later shortened to 730 mm (29 in) in 1930, and to 510 mm (20 in), and in 1938, the German Mauser Gewehr 98 rifles went from 740 mm (29 in) in 1898 to 600 mm (24 in) in 1935 as the Karabiner 98k (K98k or Kar98k), or "short carbine". The barrel lengths in rifles used by the United States did not change between the bolt-action M1903 rifle of World War I and the World War II M1 Garand rifle, because the 610 mm (24 in) barrel on the M1903 was still shorter than even the shortened versions of the Model 1891 and Gewehr 98. The US M1 carbine was more of a traditional carbine in that it was significantly shorter and lighter, with a 457.2 mm (18.00 in) barrel, than the M1 Garand rifle, and that it was intended for rear-area troops who couldn't be hindered with full-sized rifles but needed something more powerful and accurate than a Model 1911 pistol (although this didn't stop soldiers from using them on the front line). Contrary to popular belief, and even what some books claim, in spite of both being designated "M1", the M1 Carbine was not a shorter version of the .30-06 M1 Garand, as is typical for most rifles and carbines, but a wholly different design firing a smaller, less-powerful cartridge. The "M1" designates each as the first model in the new US designation system, which no longer used the year of introduction, but a sequential series of numbers starting at "1": the M1 Carbine and M1 Rifle.
The United Kingdom also developed a "Jungle Carbine" version of their Lee–Enfield service rifle, featuring a shorter barrel, flash suppressor, and manufacturing modifications designed to decrease the rifle's weight. Officially titled Rifle, No. 5 Mk I, it was introduced in the closing months of World War II, but it did not see widespread service until the Korean War, the Mau Mau Uprising, and the Malayan Emergency.
After World War II
Mauser Karabiner 98 Kurz, which translates as "Carbine 98 Short": a shortened carbine version of the Gewehr 98.
FN FAL rifle - (left) full size, (right) carbine/paratrooper variant with a folding stock and shortened barrel
A shorter weapon was more convenient when riding in a truck, armored personnel carrier, helicopter, or aircraft, and also when engaged in close-range combat. Based on the combat experience of World War II, the criteria used for selecting infantry weapons began to change. Unlike previous wars, which were often fought mainly from fixed lines and trenches, World War II was a highly mobile war, often fought in cities, forests, or other areas where mobility and visibility were restricted. In addition, improvements in artillery made moving infantry in open areas even less practical than it had been.
The majority of enemy contacts were at ranges of less than 300 metres (330 yards), and the enemy was exposed to fire for only short periods of time as they moved from cover to cover. Most rounds fired were not aimed at an enemy combatant, but instead fired in the enemy's direction to keep them from moving and firing back (see suppressive fire). These situations did not require a heavy rifle, firing full-power rifle bullets with long-range accuracy. A less-powerful weapon would still produce casualties at the shorter ranges encountered in actual combat, and the reduced recoil would allow more shots to be fired in the short amount of time an enemy was visible. The lower-powered round would also weigh less, allowing a soldier to carry more ammunition. With no need of a long barrel to fire full-power ammunition, a shorter barrel could be used. A shorter barrel made the weapon weigh less, was easier to handle in tight spaces, and was easier to shoulder quickly to fire a shot at an unexpected target. Full-automatic fire was also considered a desirable feature, allowing the soldier to fire short bursts of three to five rounds, increasing the probability of a hit on a moving target.
The Germans had experimented with selective-fire carbines firing rifle cartridges during the early years of World War II. These were determined to be less than ideal, as the recoil of full-power rifle cartridges caused the weapon to be uncontrollable in full-automatic fire. They then developed an intermediate-power cartridge round, which was accomplished by reducing the power and the length of the standard 7.92×57mm Mauser rifle cartridge to create the 7.92×33mm Kurz (Short) cartridge. A selective-fire weapon was developed to fire this shorter cartridge, eventually resulting in the Sturmgewehr 44, later translated as "assault rifle" (also frequently called "machine carbines" by Allied intelligence, a quite accurate assessment, in fact). Very shortly after World War II, the USSR would adopt a similar weapon, the ubiquitous AK-47, the first model in the famed Kalashnikov-series, which became the standard Soviet infantry weapon, and which has been produced and exported in extremely large numbers up until the present day. Although the United States had developed the M2 Carbine, a selective-fire version of the M1 Carbine during WW2, the .30 Carbine cartridge was closer to a pistol round in power, making it more of a submachine gun than an assault rifle. It was also adopted only in very small numbers and issued to few troops (the semi-automatic M1 carbine was produced in a 10-to-1 ratio to the M2), while the AK47 was produced by the millions and was standard-issue to all Soviet troops, as well as those of many other nations. The US was slow to follow suit, insisting on retaining a full-power, 7.62×51mm NATO rifle, the M14 (although this was selective fire), until too-hastily adopting the 5.56mm M16 rifle in the mid-1960s, with initially poor results due to the rapidity of its introduction (but later to become a highly successful line of rifles and carbines).
In the 1950s, the British developed the .280 British, an intermediate cartridge, and a select-fire bullpup assault rifle to fire it, the EM-2. They pressed for the US to adopt it so it could become a NATO-standard round, but the US insisted on retaining a full-power, .30 caliber round. This forced NATO to adopt the 7.62×51mm NATO round (which in reality is only slightly different ballistically from the .30-06 Springfield) to maintain commonality. The British eventually adopted the 7.62mm FN FAL, and the US adopted the 7.62mm M14. Both of these rifles are what are known as battle rifles, and they were a few inches shorter than the standard-issue rifles they replaced (22" barrel as opposed to 24" for the M1 Garand), although they were still full-powered rifles with selective-fire capability. These can be compared to the even shorter, less-powerful assault rifle, which might be considered the "carbine branch of weapons development", although indeed, there are now carbine variants of many of the assault rifles which had themselves seemed quite small and light when adopted.
Bullet drop of the M16A2 rifle (yellow) vs M4 carbine (red)
By the 1960s, after becoming involved in the war in Vietnam, the US did an abrupt about-face and decided to standardize on the intermediate 5.56×45mm round (based on the .223 Remington varmint cartridge) fired from the new, lightweight M16 rifle, leaving NATO to hurry and catch up. Many of the NATO countries couldn't afford to re-equip so soon after the recent 7.62mm standardization, leaving them armed with full-power 7.62mm battle rifles for some decades afterwards, although by this point, the 5.56mm has been adopted by almost all NATO countries and many non-NATO nations as well. This 5.56mm NATO round was even lighter and smaller than the Soviet 7.62×39mm AK-47 cartridge, but possessed higher velocity. In U.S. service, the M16 assault rifle replaced the M14 as the standard infantry weapon, although the M14 continued to be used by designated marksmen. Although at 20", the barrel of the M16 was shorter than that of the M14, it was still designated a "rifle" rather than a "carbine", and it was still longer than the AK, which used a 16" barrel. (It is interesting to note that the SKS – an interim, semi-automatic weapon adopted a few years before the AK-47 was put into service – was designated a carbine, even though its 20" barrel was significantly longer than the AK series' 16.3". This is because of the Kalashnikov's revolutionary nature, which altered the old paradigm. Compared to previous rifles, particularly the Soviets' initial attempts at semi-automatic rifles, such as the 24" SVT-40, the SKS was significantly shorter. The Kalashnikov altered traditional notions and ushered in a change in what was considered a "rifle" in military circles.)
In 1974, shortly after the introduction of the 5.56mm NATO, the USSR began to issue a new Kalashnikov variant, the AK-74, chambered in the small-bore 5.45×39mm cartridge, which was a standard 7.62×39mm necked down to take a smaller, lighter, faster bullet. It soon became standard issue in Soviet nations, although many of the nations with export Kalashnikovs retained the larger 7.62×39mm round. In 1995, the People's Republic of China adopted a new 5.8×42mm cartridge to match the modern trend in military ammunition, replacing the previous 7.62×39mm round as standard.
Later, even lighter carbine variants of many of these short-barreled assault rifles came to be adopted as the standard infantry weapon. In much modern tactical thinking, only a certain number of soldiers now need to retain longer-range weapons, these serving as designated marksmen. The rest can carry lighter, shorter-ranged weapons for close-quarters combat and suppressive fire. This is basically a more extreme extension of the idea that brought the original assault rifle. Another factor is that with the increasing weight of technology, sighting systems, ballistic armor, etc., the only way to reduce the burden on the modern soldier was to equip him/her with a smaller, lighter weapon. Also, modern soldiers rely a great deal on vehicles and helicopters to transport them around the battle area, and a longer weapon can be a serious hindrance to entering and exiting these vehicles. Development of lighter assault rifles continued, matched by developments in even lighter carbines. In spite of the short barrels of the new assault rifles, carbine variants like the 5.45×39mm AKS-74U and Colt Commando were being developed for use when mobility was essential and a submachine gun wasn't sufficiently powerful. The AKS-74U featured an extremely short 8.1" barrel which necessitated redesigning and shortening the gas-piston and integrating front sights onto the gas tube; the Colt Commando was a bit longer, at 11.5". Neither was adopted as standard issue, although the US did later adopt the somewhat-longer M4 carbine, with a 14.5" barrel.
Modern history
Contemporary military forces
Steyr AUG rifle (508 mm (20.0 in) barrel).
Steyr AUG carbine (407 mm (16.0 in) barrel). Carbine conversion is achieved by changing to a shorter barrel.
By the 1990s, the US had adopted the M4 carbine, a derivative of the M16 family which fired the same 5.56mm cartridge but was lighter and shorter (in overall length and barrel length), resulting in marginally reduced range and power, although offering better mobility and lighter weight to offset the weight of equipment and armor that a modern soldier has to carry.
However, in spite of the benefits of the modern carbine, many armies are experiencing a certain backlash against the universal equipping of soldiers with carbines and lighter rifles in general, and are equipping selected soldiers, usually called Designated Marksmen, or DM, with higher-power rifles. Another problem comes from the loss of muzzle velocity caused by the shorter barrel, which when coupled with the typical small, lightweight bullets, causes effectiveness to be diminished; a 5.56mm gets its lethality from its high velocity, and when fired from the 14.5" M4 carbine, its power, penetration, and range are diminished. Thus, there has been a move towards adopting a slightly more powerful round tailored for high performance from both long and short barrels. The US has done experiments regarding adopting a new, slightly larger and heavier caliber such as the 6.5mm Grendel or 6.8mm Remington SPC, which are heavier and thus retain more effectiveness at lower muzzle velocities, but has for the time decided to retain the 5.56mm NATO round as standard issue.
While the US Army adopted the M4 carbine in the 1990s, the US Marine Corps retained their 20" barrel M16A4 rifles long afterwards, citing the increased range and effectiveness over the carbine version; officers were required to carry an M4 carbine rather than an M9 pistol, as Army officers do. Due to the Marine Corps emphasis on being riflemen, the lighter carbine was considered a suitable compromise between a rifle and a pistol. Marines with restricted mobility such as vehicle operators, or a greater need for mobility such as squad leaders, were also issued M4 carbines. In July 2015, the Marine Corps approved the M4 carbine for standard issue to front-line Marines, replacing the M16A4 rifle. The rifles will be issued to support troops while the carbines go to the front-line Marines, in a reversal of the traditional roles of "rifles for the front line, carbines for the rear".
Special forces
Special forces need to perform fast, decisive operations, frequently airborne or boat-mounted. A pistol, though light and quick to operate, is viewed as not having enough power, firepower, or range. A submachine gun has selective fire, but firing a pistol cartridge and having a short barrel and sight radius, it is not accurate or powerful enough at longer ranges. Submachine guns also tend to have poorer armor and cover penetration than rifles and carbines firing rifle ammunition. Consequently, carbines have gained wide acceptance among SOCOM, UKSF, and other communities, having relatively light weight, large magazine capacity, selective fire, and much better range and penetration than a submachine gun.
The smaller size and lighter weight of carbines makes them easier to handle in close-quarter situations such as urban engagements, when deploying from military vehicles, or in any situation where space is confined. The disadvantages of carbines relative to rifles include inferior long-range accuracy and a shorter effective range (when referring to carbines of the same power and class as the rifle). Larger than a submachine gun, they are harder to maneuver in tight encounters where superior range and stopping power at distance are not great considerations. Firing the same ammunition as standard-issue rifles or pistols gives carbines the advantage of standardization over those personal defense weapons (PDWs) that require proprietary cartridges.[citation needed]
The modern usage of the term carbine covers much the same scope as it always had, namely lighter weapons (generally rifles) with barrels up to 20 inches in length. These weapons can be considered carbines, while rifles with barrels longer than 20 inches are generally not considered carbines unless specifically named so. Conversely, many rifles have barrels shorter than 20", yet aren't considered carbines. The AK series of rifles has an almost universal barrel length of 16.3", well within carbine territory, yet it has always been considered a rifle, perhaps because it was designed as such and not shortened from a longer weapon. Modern carbines use ammunition ranging from that used in light pistols up to powerful rifle cartridges, with the usual exception of high-velocity magnum cartridges. In the more powerful cartridges, the short barrel of a carbine has significant disadvantages in velocity, and the high residual pressure, and frequently still-burning powder and gases, when the bullet exits the barrel results in substantially greater muzzle blast. Flash suppressors are a common, partial solution to this problem, although even the best flash suppressors are hard put to deal with the excess flash from the still-burning powder leaving the short barrel (and they also add several inches to the length of the barrel, diminishing the purpose of having a short barrel in the first place). The shorter the barrel, the more difficult it is to hide the flash; the AKS-74U has a complex, effective muzzle-booster/flash suppressor, yet it still suffers from extreme muzzle flash.[citation needed]
Pistol-caliber carbines (PCC)
Marlin Model 1894C — .357 Magnum carbine
One of the more atypical classes of carbine is the pistol caliber carbine or PCC. These first appeared soon after metallic cartridges became common. These were developed as "companions" to the popular revolvers of the day, firing the same cartridge but allowing more velocity and accuracy than the revolver. These were carried by cowboys, lawmen, and others in the Old West. The classic combination would be a Winchester lever-action carbine and a Colt Single Action Army revolver in .44-40 or .38-40. During the 20th century, this trend continued with more modern and powerful revolver cartridges, in the form of Winchester and Marlin lever action carbines chambered in .38 Special/.357 Magnum and .44 Special/.44 Magnum.
Modern equivalents also exist, such as the discontinued Ruger Police Carbine, which uses the same magazine as the Ruger pistols of the same caliber, as well as the (also discontinued) Marlin Camp Carbine (which, in .45ACP, used M1911 magazines). The Ruger Model 44 and Ruger Deerfield Carbine were both carbines chambered in .44 Magnum. The Beretta Cx4 Storm shares magazines with many Beretta pistols and is designed to be complementary to the Beretta Px4 Storm pistol. The Hi-Point 995 Carbine is a cheaper yet reliable alternative to other pistol caliber carbines in the United States, and its magazines can be used in the Hi-Point C-9 pistol. Another example is the Kel-Tec SUB-2000 series chambered in either 9 mm Luger or .40S&W, which can be configured to accept Glock, Beretta, S&W, or SIG pistol magazines. The SUB-2000 also has the somewhat unusual (although not unique) ability to fold in half.
The primary advantage of a carbine over a pistol using the same ammunition is controllability. The combination of firing from the shoulder, a greatly increased sight picture, two-handed stability, and precision offer a significantly more user-friendly platform. In addition, the longer barrel can offer increased velocity and, with it, greater energy and effective range. As long guns, pistol-caliber carbines may be less legally restricted than handguns in some jurisdictions. Compared to carbines chambered in intermediate or rifle calibers, such as .223 Remington and 7.62×54mmR, pistol-caliber carbines generally experience less of an increase in external ballistic properties as a result of the propellant. The drawback is that one loses the primary benefits of a handgun, i.e. portability and concealability, resulting in a weapon almost the size of, but less accurate than, a long-gun, but not much more powerful than a pistol.
Also widely produced are semi-automatic and typically longer-barreled derivatives of select-fire submachine guns, such as the FN PS90, HK USC, KRISS Vector, Thompson carbine, and the Uzi carbine. In order to be sold legally in many countries, the barrel must meet a minimum length (16" in the USA). So the original submachine gun is given a legal-length barrel and made into a semi-automatic, transforming it into a carbine. Though less common, pistol-caliber conversions of centerfire rifles like the AR-15 are commercially available.
Shoulder-stocked handguns
Some handguns used to come from the factory with mounting lugs for a shoulder stock, notably including the "Broomhandle" Mauser C96, Luger P.08, and Browning Hi-Power. In the case of the first two, the pistol could come with a hollow wooden stock that doubled as a holster.
Carbine conversion kits are commercially available for many other pistols, including M1911 and most Glocks. These can either be simple shoulder stocks fitted to a pistol or full carbine conversion kits, which are at least 26 in (660 mm) long and replace the pistol's barrel with one at least 16 in (410 mm) long for compliance with the US law. In the US, fitting a shoulder stock to a handgun with a barrel less than 16" long legally turns it into a short-barreled rifle, which is in violation of the National Firearms Act.
Legal issues
United States
Under the National Firearms Act of 1934, firearms with shoulder stocks or originally manufactured as a rifle and barrels less than 16 in (410 mm) in length are classified as short-barreled rifles. Short-barreled rifles are restricted similarly to short-barreled shotguns, requiring a $200 tax paid prior to manufacture or transfer – a process which can take several months. Because of this, firearms with barrels of less than 16 in (410 mm) and a shoulder stock are uncommon. A list of firearms not covered by the NFA due to their antique status may be found here[5] or due to their Curio and Relic status may be found here;[6] these lists include a number of carbines with barrels less than the minimum legal length and firearms that are "primarily collector's items and are not likely to be used as weapons and, therefore, are excluded from the provisions of the National Firearms Act." Machine guns, as their own class of firearm, are not subject to the requirements of other classes of firearms.[citation needed]
Distinct from simple shoulder stock kits, full carbine conversion kits are not classified as short-barreled rifles. By replacing the pistol barrel with one at least 16 in (410 mm) in length and having an overall length of at least 26 in (660 mm), a carbine converted pistol may be treated as a standard rifle under Title I of the Gun Control Act of 1968 (GCA).[7] However, certain "Broomhandle" Mauser C96, Luger, and Browning Hi-Power Curio & Relic pistols with their originally issued stock attached only may retain their pistol classification.
Carbines without a stock and not originally manufactured as a rifle are not classified as rifles or short barreled rifles. A carbine manufactured under 26 in (660 mm) in length without a forward vertical grip will be a pistol and, state law notwithstanding, can be carried concealed without creating an unregistered Any Other Weapon. A nearly identical carbine with an overall length of 26 in (660 mm) or greater is simply an unclassified firearm under Title I of the Gun Control Act of 1968, as the Any Other Weapon catch-all only applies to firearms under 26 in (660 mm) or that have been concealed. However, a modification intending to fire from the shoulder and bypass the regulation of short-barreled rifles is considered the unlawful possession and manufacture of an unregistered short-barreled rifle.
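Purely as an illustration of how the classification rules described in the preceding paragraphs fit together, here is a simplified sketch based only on this section's wording (the 16 in barrel and 26 in overall-length thresholds); it is not a statement of the actual regulations, and the function name is made up.

def classify_firearm(barrel_in, overall_in, shoulder_stock, originally_a_rifle,
                     forward_vertical_grip=False):
    """Simplified sketch of the thresholds described in the text:
    16 in minimum barrel length and 26 in minimum overall length."""
    if shoulder_stock or originally_a_rifle:
        if barrel_in < 16:
            return "short-barreled rifle (NFA)"
        return "rifle (Title I)"
    if overall_in < 26:
        return "any other weapon (NFA)" if forward_vertical_grip else "pistol"
    return "unclassified Title I firearm"

print(classify_firearm(barrel_in=14.5, overall_in=30, shoulder_stock=True,
                       originally_a_rifle=False))   # short-barreled rifle (NFA)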
In some historical cases, the term machine carbine was the official title for submachine guns, such as the British Sten and Australian Owen guns. The semiautomatic-only version of the Sterling submachine gun was also officially called a "carbine". The original Sterling semi-auto would be classed a "short barrel rifle" under the U.S. National Firearms Act, but fully legal long-barrel versions of the Sterling have been made for the U.S. collector market.[citation needed]
Further reading
• Beard, Ross E. Carbine : the story of David Marshall Williams. Williamstown, NJ: Phillips, 1997. ISBN 0-932572-26-X OCLC 757855022
• Carbines : cal. .30 carbines M1, M1A1, M2 and M3. Washington, DC: Headquarters, Departments of the Army and the Air Force, 1953.
• McAulay, John D. Carbines of the Civil War, 1861–1865. Union City, TN: Pioneer Press, 1981. ISBN 978-0-913159-45-3 OCLC 8111324
• McAulay, John D. Carbines of the U.S. Cavalry, 1861–1905. Lincoln, RI: Andrew Mowbray Publishers, 1996. ISBN 0-917218-70-1 OCLC 36087526
1. ^ "Carbine". Retrieved October 8, 2014.
2. ^ "carbine." Merriam-Webster Online Dictionary. 2010.
3. ^ Wikisource-logo.svg Chisholm, Hugh, ed. (1911). "Carbine". Encyclopædia Britannica (11th ed.). Cambridge University Press.
4. ^ The American Heritage Dictionary of the English Language, Fourth Edition
5. ^ "Curios or Relics List —Update March 2001 through May 2005". Bureau of Alcohol, Tobacco, Firearms and Explosives. Retrieved 2015-11-13.
6. ^ "Curios or Relics List —Update January 2009 through June 2010". Bureau of Alcohol, Tobacco, Firearms and Explosives. Retrieved 2015-11-13.
7. ^ "ATF Rule 2011-4 pertaining to Carbine Conversion Units". Bureau of Alcohol, Tobacco, Firearms and Explosives. Retrieved 2015-11-16. |
From Wikipedia, the free encyclopedia
Deinterlacing is the process of converting interlaced video, such as common analog television signals or 1080i format HDTV signals, into a non-interlaced form.
An interlaced video frame consists of two sub-fields taken in sequence: one scanned at the odd lines of the image sensor and the other at the even lines. Analog television employed this technique because it allowed for less transmission bandwidth and further eliminated the perceived flicker that a similar frame rate would give using progressive scan. CRT-based displays were able to display interlaced video correctly due to their completely analogue nature. Newer displays are inherently digital, in that the display comprises discrete pixels. Consequently, the two fields need to be combined into a single frame, which leads to various visual defects. The deinterlacing process should try to minimize these.
Deinterlacing has been researched for decades and employs complex processing algorithms; however, consistent results have been very hard to achieve.[1][2]
Both video and photographic film capture a series of frames (still images) in rapid succession; however, television systems read the captured image by serially scanning the image sensor by lines (rows). In analog television, each frame is divided into two consecutive fields, one containing all even lines, another with the odd lines. The fields are captured in succession at a rate twice that of the nominal frame rate. For instance, PAL and SECAM systems have a rate of 25 frames/s or 50 fields/s, while the NTSC system delivers 29.97 frames/s or 59.94 fields/s. This process of dividing frames into half-resolution fields at double the frame rate is known as interlacing.
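As a minimal illustration of that frame-to-field split, here is a sketch using NumPy; it is illustrative only, and in a real interlaced capture the two fields would also be sampled about 1/50 or 1/60 of a second apart.

import numpy as np

def split_into_fields(frame):
    """Split one full frame (height x width) into its two interlaced fields:
    the top field takes lines 0, 2, 4, ... and the bottom field lines
    1, 3, 5, ...; each field has half the vertical resolution."""
    return frame[0::2, :], frame[1::2, :]

frame = np.arange(6 * 4).reshape(6, 4)     # toy 6-line "frame"
top, bottom = split_into_fields(frame)
print(top.shape, bottom.shape)             # (3, 4) (3, 4)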
Since the interlaced signal contains the two fields of a video frame shot at two different times, it enhances motion perception to the viewer and reduces flicker by taking advantage of the persistence of vision effect. This results in an effective doubling of time resolution as compared with non-interlaced footage (for frame rates equal to field rates). However, an interlaced signal requires a display that is natively capable of showing the individual fields in sequential order, and only traditional CRT-based TV sets can display an interlaced signal directly, due to their electronic scanning and lack of an apparent fixed resolution.
Most modern displays, such as LCD, DLP and plasma displays, are not able to work in interlaced mode, because they are fixed-resolution displays and only support progressive scanning. In order to display an interlaced signal on such displays, the two interlaced fields must be converted to one progressive frame with a process known as de-interlacing. However, when the two fields taken at different points in time are re-combined into a full frame displayed at once, visual defects called interlace artifacts or combing occur with moving objects in the image. A good deinterlacing algorithm should try to avoid interlacing artifacts as much as possible and not sacrifice image quality in the process, which is hard to achieve consistently. There are several techniques available that extrapolate the missing picture information; however, they fall into the category of intelligent frame creation and require complex algorithms and substantial processing power.
Deinterlacing techniques require complex processing and thus can introduce a delay into the video feed. While not generally noticeable, this can result in the display of older video games lagging behind controller input. Many TVs thus have a "game mode" in which minimal processing is done in order to maximize speed at the expense of image quality. Deinterlacing is only partly responsible for such lag; scaling also involves complex algorithms that take milliseconds to run.
Progressive source material
Main article: Telecine
Interlaced video can carry progressive scan signal, and deinterlacing process should consider this as well.
Typical movie material is shot on 24 frames/s film; when converting film to interlaced video using telecine, each film frame can be presented by two progressive segmented frames (PsF). This format does not require a complex deinterlacing algorithm because each field contains a part of the very same progressive frame. However, to match a 50-field interlaced PAL/SECAM or 59.94/60-field interlaced NTSC signal, frame rate conversion should be performed using various "pulldown" techniques; most advanced TV sets can restore the original 24 frame/s signal using an inverse telecine process. Another option is to speed up 24-frame film by 4% (to 25 frames/s) for PAL/SECAM conversion; this method is still widely used for DVDs, as well as television broadcasts (SD & HD) in the PAL markets.
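A small sketch of the classic 2:3 ("three-two") pulldown cadence used by NTSC telecine may help; the field parities here are illustrative, and the 1000/1001 frame-rate adjustment used in practice is ignored.

def two_three_pulldown(num_film_frames):
    """Lay 24 frame/s film frames out as 60 field/s video: successive film
    frames contribute 2, 3, 2, 3, ... fields. Returns (frame_index, parity)
    pairs, with parity alternating between top ('t') and bottom ('b')."""
    fields, parity = [], 't'
    for i in range(num_film_frames):
        for _ in range(2 if i % 2 == 0 else 3):
            fields.append((i, parity))
            parity = 'b' if parity == 't' else 't'
    return fields

print(two_three_pulldown(4))   # 4 film frames become 10 video fields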
DVDs can either encode movies using one of these methods, or store original 24 frame/s progressive video and use MPEG-2 decoder tags to instruct the video player on how to convert them to the interlaced format. Most movies on Blu-ray discs have preserved the original non-interlaced 24 frame/s motion film rate and allow output in the progressive 1080p24 format directly to display devices, with no conversion necessary.
Some 1080i HDV camcorders also offer PsF mode with cinema-like frame rates of 24 or 25 frame/s. The TV production can also use special film cameras which operate at 25 or 30 frame/s; such material does not need framerate conversion for broadcasting in the intended video system format.
Deinterlacing methods
Deinterlacing requires the display to buffer one or more fields and recombine them into full frames. In theory this would be as simple as capturing one field and combining it with the next field to be received, producing a single frame. However, the originally recorded signal was produced as a series of fields, and any motion of the subjects during the short period between the fields is encoded into the display. When combined into a single frame, the slight differences between the two fields due to this motion results in a "combing" effect where alternate lines are slightly displaced from each other.
There are various methods to deinterlace video, each producing different problems or artifacts of its own. Some methods are much cleaner in artifacts than other methods.
Most deinterlacing techniques can be broken up into three groups, each using its own approach. The first group are called field combination deinterlacers, because they take the even and odd fields and combine them into one frame which is then displayed. The second group are called field extension deinterlacers, because each field (with only half the lines) is extended to the entire screen to make a frame. The third type uses a combination of both and falls under the banner of motion compensation and a number of other names.
Modern deinterlacing systems therefore buffer several fields and use techniques like edge detection in an attempt to find the motion between the fields. This is then used to interpolate the missing lines from the original field, reducing the combing effect.[3]
Field combination deinterlacing
• Weaving is done by adding consecutive fields together. This is fine when the image hasn't changed between fields, but any change will result in artifacts known as "combing," when the pixels in one frame do not line up with the pixels in the other, forming a jagged edge. This technique retains the full vertical resolution at the expense of half the temporal resolution (motion). A minimal sketch of weaving, blending, and selective blending follows this list.
• Blending is done by blending, or averaging consecutive fields to be displayed as one frame. Combing is avoided because the images are on top of each other. This instead leaves an artifact known as ghosting. The image loses vertical resolution and temporal resolution. This is often combined with a vertical resize so that the output has no numerical loss in vertical resolution. The problem with this is that there is a quality loss, because the image has been downsized then upsized. This loss in detail makes the image look softer. Blending also loses half the temporal resolution since two motion fields are combined into one frame.
• Selective blending, or smart blending or motion adaptive blending, is a combination of weaving and blending. As areas that haven't changed from frame to frame don't need any processing, the frames are woven and only the areas that need it are blended. This retains the full vertical resolution and half the temporal resolution, and it has fewer artifacts than weaving or blending because of the selective combination of both techniques.
• Inverse Telecine: Telecine is used to convert a motion picture source at 24 frames per second to interlaced TV video in countries that use NTSC video system at 30 frames per second. Countries which use PAL at 25 frames per second do not use Telecine since motion picture sources are sped up 4% to achieve the needed 25 frames per second. If Telecine was used then it is possible to reverse the algorithm to obtain the original non-interlaced footage, which has a slower frame rate. In order for this to work, the exact telecine pattern must be known or guessed. Unlike most other deinterlacing methods, when it works, inverse telecine can perfectly recover the original progressive video stream.
• Telecide-style algorithms: If the interlaced footage was generated from progressive frames at a slower frame rate (e.g. "cartoon pulldown"), then the exact original frames can be recovered by copying the missing field from a matching previous/next frame. In cases where there is no match (e.g. brief cartoon sequences with an elevated frame rate), then the filter falls back on another deinterlacing method such as blending or line-doubling. This means that the worst case for Telecide is occasional frames with ghosting or reduced resolution. By contrast, when more sophisticated motion-detection algorithms fail, they can introduce pixel artifacts that are unfaithful to the original material. For telecine video, decimation can be applied as a post-process to reduce the frame rate, and this combination is generally more robust than a simple inverse telecine, which fails when differently interlaced footage is spliced together.
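The sketch below (referenced from the weaving entry above) gives minimal NumPy versions of weaving, blending, and motion-adaptive selective blending. It is illustrative only: it ignores chroma handling, field order, and the refinements a production deinterlacer would apply.

import numpy as np

def weave(top_field, bottom_field):
    """Weave: interleave the two fields back into one full-height frame.
    Ideal for static content; moving objects show combing."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2, :] = top_field
    frame[1::2, :] = bottom_field
    return frame

def blend(top_field, bottom_field):
    """Blend: average the fields line-for-line, then line-double to frame
    height. Avoids combing but ghosts and halves temporal resolution."""
    mixed = (top_field.astype(np.float32) + bottom_field.astype(np.float32)) / 2
    return np.repeat(mixed, 2, axis=0)

def selective_blend(top_field, bottom_field, threshold=10):
    """Motion-adaptive: weave where the two fields agree, blend where the
    per-pixel difference suggests motion."""
    woven = weave(top_field, bottom_field).astype(np.float32)
    blended = blend(top_field, bottom_field)
    moving = np.abs(top_field.astype(np.float32) - bottom_field.astype(np.float32)) > threshold
    return np.where(np.repeat(moving, 2, axis=0), blended, woven)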
Field extension deinterlacing
Half-sizing displays each interlaced field on its own, resulting in a video with half the vertical resolution of the original, unscaled. While this method discards none of the captured lines and retains all temporal resolution, it is understandably not used for regular viewing because of its false aspect ratio. However, it can be successfully used to apply video filters which expect a noninterlaced frame, such as those exploiting information from neighbouring pixels (e.g., sharpening).
Line doubling
Line doubling takes the lines of each interlaced field (consisting of only even or odd lines) and doubles them, filling the entire frame. This results in the video having a frame rate identical to the field rate, but each frame having half the vertical resolution, or resolution equal to that of each field that the frame was made from. Line doubling prevents combing artifacts but causes a noticeable reduction in picture quality, since each displayed frame is built from a single field and therefore carries only half the original vertical resolution. This is noticeable mostly on stationary objects, since they appear to bob up and down. These techniques are also called bob deinterlacing and linear deinterlacing for this reason. Line doubling retains horizontal and temporal resolution at the expense of vertical resolution and bobbing artifacts on stationary and slower moving objects. A variant of this method discards one field out of each frame, halving temporal resolution.
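A minimal sketch of line doubling, plus a slightly smoother linear variant, might look like this; it is illustrative only and assumes NumPy, as in the earlier sketches.

import numpy as np

def bob(field):
    """Line doubling ('bob'): make a full-height frame from one field by
    repeating every line. Output frame rate equals the field rate; each
    frame carries only the field's vertical resolution."""
    return np.repeat(field, 2, axis=0)

def linear_bob(field):
    """Smoother variant: fill the missing lines with the average of the
    neighbouring field lines instead of plain repetition."""
    field = field.astype(np.float32)
    frame = np.repeat(field, 2, axis=0)
    frame[1:-1:2, :] = (field[:-1] + field[1:]) / 2.0   # interpolate interior lines
    return frame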
Line doubling is sometimes confused with deinterlacing in general, or with interpolation (image scaling) which uses spatial filtering to generate extra lines and hence reduce the visibility of pixelation on any type of display.[4] The terminology 'line doubler' is used more frequently in high end consumer electronics, while 'deinterlacing' is used more frequently in the computer and digital video arena.
Motion detection
Best picture quality can be ensured by combining traditional field combination methods (weaving and blending) and frame extension methods (bob or line doubling) to create a high quality progressive video sequence; the best algorithms would also try to predict the direction and the amount of image motion between subsequent sub-fields in order to better blend the two subfields together.
One of the basic hints to the direction and amount of motion would be the direction and length of combing artifacts in the interlaced signal. More advanced implementations would employ algorithms similar to block motion compensation used in video compression; deinterlacers that use this technique are often superior because they can use information from many fields, as opposed to just one or two. This requires powerful hardware to achieve realtime operation.
For example, if two fields had a person's face moving to the left, weaving would create combing, and blending would create ghosting. Advanced motion compensation (ideally) would see that the face in several fields is the same image, just moved to a different position, and would try to detect direction and amount of such motion. The algorithm would then try to reconstruct the full detail of the face in both output frames by combining the images together, moving parts of each subfield along the detected direction by the detected amount of movement.
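As an illustration of the kind of motion estimate involved, here is a brute-force sum-of-absolute-differences block match; it is a teaching sketch, not the algorithm of any particular product.

import numpy as np

def best_motion_vector(prev_img, next_img, top, left, block=8, search=4):
    """Find the (dy, dx) shift of a block between two pictures that
    minimizes the sum of absolute differences (SAD). A motion-compensated
    deinterlacer uses estimates like this to move detail from one field
    into the frame being reconstructed."""
    ref = prev_img[top:top + block, left:left + block].astype(np.float32)
    h, w = next_img.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue
            cand = next_img[y:y + block, x:x + block].astype(np.float32)
            sad = float(np.abs(ref - cand).sum())
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad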
Motion compensation needs to be combined with scene change detection, otherwise it will attempt to find motion between two completely different scenes. A poorly implemented motion compensation algorithm would interfere with natural motion and could lead to visual artifacts which manifest as "jumping" parts in what should be a stationary or a smoothly moving image.
Where deinterlacing is performed
Deinterlacing of an interlaced video signal can be done at various points in the TV production chain.
Progressive media
Deinterlacing is required for interlaced archive programs when the broadcast format or media format is progressive, as in EDTV 576p or HDTV 720p50 broadcasting, or mobile DVB-H broadcasting; there are two ways to achieve this.
• Production – The interlaced video material is converted to progressive scan during program production. This should typically yield the best possible quality, since videographers have access to expensive and powerful deinterlacing equipment and software and can deinterlace at the best possible quality, probably manually choosing the optimal deinterlacing method for each frame.
• Broadcasting – Real-time deinterlacing hardware converts interlaced programs to progressive scan immediately prior to broadcasting. Since the processing time is constrained by the frame rate and no human input is available, the quality of conversion is most likely inferior to the pre-production method; however, expensive and high-performance deinterlacing equipment may still yield good results when properly tuned.
Interlaced media
When the broadcast format or media format is interlaced, real-time deinterlacing should be performed by embedded circuitry in a set-top box, television, external video processor, DVD or DVR player, or TV tuner card. Since consumer electronics equipment is typically far cheaper, has considerably less processing power and uses simpler algorithms compared to professional deinterlacing equipment, the quality of deinterlacing may vary broadly and typical results are often poor even on high-end equipment.[citation needed]
Using a computer for playback and/or processing potentially allows a broader choice of video players and/or editing software not limited to the quality offered by the embedded consumer electronics device, so at least theoretically higher deinterlacing quality is possible – especially if the user can pre-convert interlaced video to progressive scan before playback using advanced and time-consuming deinterlacing algorithms (i.e. employing the "production" method).
However, the quality of both free and commercial consumer-grade software may not be up to the level of professional software and equipment. Also, most users are not trained in video production; this often causes poor results as many people do not know much about deinterlacing and are unaware that the frame rate is half the field rate. Many codecs/players do not even deinterlace by themselves and rely on the graphics card and video acceleration API to do proper deinterlacing.
Concerns over effectiveness
The European Broadcasting Union has argued against the use of interlaced video in production and broadcasting, recommending 720p 50 fps (frames per second) as current production format and working with the industry to introduce 1080p50 as a future-proof production standard which offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats such as 720p50 and 1080i50.[5][6] The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames.
Yves Faroudja, the founder of Faroudja Labs and Emmy Award winner for his achievements in deinterlacing technology, has stated that "interlace to progressive does not work" and advised against using interlaced signal.[2][7]
References
1. ^ Jung, J.H.; Hong, S.H. (2011). "Deinterlacing method based on edge direction refinement using weighted maximum frequent filter". Proceedings of the 5th International Conference on Ubiquitous Information Management and Communication. ACM. ISBN 978-1-4503-0571-6.
2. ^ a b Philip Laven (January 26, 2005). "EBU Technical Review No. 301 (January 2005)". EBU.
4. ^ PC Magazine. "PCMag Definition: Deinterlace".
5. ^ "EBU R115-2005: FUTURE HIGH DEFINITION TELEVISION SYSTEMS" (PDF). EBU. May 2005. Archived from the original on 2009-05-27. Retrieved 2009-05-24.
6. ^ "10 things you need to know about... 1080p/50" (PDF). EBU. September 2009. Retrieved 2010-06-26.
7. ^ Philip Laven (January 25, 2005). "EBU Technical Review No. 300 (October 2004)". EBU.
Messier 3
NGC 5272
Mount Lemmon SkyCenter image of Messier 3 (Adam Block, Mount Lemmon SkyCenter, University of Arizona)
Observation data (J2000 epoch)
• Class: VI[1]
• Constellation: Canes Venatici
• Right ascension: 13h 42m 11.62s[2]
• Declination: +28° 22′ 38.2″[2]
• Distance: 33.9 kly (10.4 kpc)[3]
• Apparent magnitude (V): +6.2[4]
• Apparent dimensions (V): 18.0′
Physical characteristics
• Mass: 4.5×10^5 M☉[5]
• Radius: 90 ly
• Tidal radius (mean): 113 ly (30 pc)[6]
• Metallicity: [Fe/H] = –1.34 dex[7]
• Estimated age: 11.39 Gyr[7]
• Other designations: NGC 5272[4]
See also: Globular cluster, List of globular clusters
Messier 3 (also known as M3 or NGC 5272) is a globular cluster of stars in the northern constellation of Canes Venatici. It was discovered by Charles Messier on May 3, 1764,[8] and resolved into stars by William Herschel around 1784. Since then, it has become one of the best-studied globular clusters. Identification of the cluster's unusually large variable star population was begun in 1913 by American astronomer Solon Irving Bailey, and new variable members continued to be identified through 2004.[9]
Arcturus can be used to help locate M3
Messier 3 with amateur telescope
Many amateur astronomers consider it one of the finest northern globular clusters, following only Messier 13.[1] M3 has an apparent magnitude of 6.2,[4] making it a difficult naked eye target even with dark conditions. With a moderate-sized telescope, the cluster is fully defined. It can be a challenge to locate through the technique of star hopping, but can be found by looking almost exactly halfway along an imaginary line connecting the bright star Arcturus to Cor Caroli. Using a telescope with a 25 cm (9.8 in) aperture, the cluster has a bright core with a diameter of about 6 arcminutes and spans a total of 12 arcminutes.[1]
This cluster is one of the largest and brightest, and is made up of around 500,000 stars. It is estimated to be 8 billion years old. It lies about 33,900 light-years from Earth.[citation needed]
Messier 3 is located 31.6 kly (9.7 kpc) above the Galactic plane and roughly 38.8 kly (11.9 kpc) from the center of the Milky Way. It contains 274 known variable stars, by far the highest number found in any globular cluster. These include 133 RR Lyrae variables, of which about a third display the Blazhko effect of long-period modulation. The overall abundance of elements other than hydrogen and helium, what astronomers term the metallicity, is in the range of –1.34 to –1.50 dex. This value gives the logarithm of the abundance relative to the Sun; the actual proportion is 3.2–4.6% of the solar abundance. Messier 3 is the prototype for the Oosterhoff type I cluster, which is considered "metal-rich". That is, for a globular cluster, Messier 3 has a relatively high abundance of heavier elements.[10]
1. ^ a b c Thompson, Robert Bruce; Thompson, Barbara Fritchman (2007), Illustrated guide to astronomical wonders, DIY science O'Reilly Series, O'Reilly Media, Inc., p. 137, ISBN 0-596-52685-7.
2. ^ a b Goldsbury, Ryan; et al. (December 2010), "The ACS Survey of Galactic Globular Clusters. X. New Determinations of Centers for 65 Clusters", The Astronomical Journal, 140 (6): 1830–1837, arXiv:1008.2755, Bibcode:2010AJ....140.1830G, doi:10.1088/0004-6256/140/6/1830.
3. ^ Paust, Nathaniel E. Q.; et al. (February 2010), "The ACS Survey of Galactic Globular Clusters. VIII. Effects of Environment on Globular Cluster Global Mass Functions", The Astronomical Journal, 139 (2): 476–491, Bibcode:2010AJ....139..476P, doi:10.1088/0004-6256/139/2/476.
4. ^ a b c Messier 3, SIMBAD Astronomical Object Database, retrieved 2006-11-15.
5. ^ Marks, Michael; Kroupa, Pavel (August 2010), "Initial conditions for globular clusters and assembly of the old globular cluster population of the Milky Way", Monthly Notices of the Royal Astronomical Society, 406 (3): 2000–2012, arXiv:1004.2255, Bibcode:2010MNRAS.406.2000M, doi:10.1111/j.1365-2966.2010.16813.x. Mass is from MPD on Table 1.
6. ^ Brosche, P.; Odenkirchen, M.; Geffert, M. (March 1999). "Instantaneous and average tidal radii of globular clusters". New Astronomy. 4 (2): 133–139. Bibcode:1999NewA....4..133B. doi:10.1016/S1384-1076(99)00014-7. Retrieved 7 December 2014.
7. ^ a b Forbes, Duncan A.; Bridges, Terry (May 2010), "Accreted versus in situ Milky Way globular clusters", Monthly Notices of the Royal Astronomical Society, 404 (3): 1203–1214, arXiv:1001.4289, Bibcode:2010MNRAS.404.1203F, doi:10.1111/j.1365-2966.2010.16373.x.
8. ^ Machholz, Don (2002), The observing guide to the Messier marathon: a handbook and atlas, Cambridge University Press, ISBN 0-521-80386-1.
9. ^ Valcarce, A. A. R.; Catelan, M. (August 2008), "A semi-empirical study of the mass distribution of horizontal branch stars in M 3 (NGC 5272)", Astronomy and Astrophysics, 487 (1): 185–195, arXiv:0805.3161, Bibcode:2008A&A...487..185V, doi:10.1051/0004-6361:20078231.
10. ^ Cacciari, C.; Corwin, T. M.; Carney, B. W. (January 2005), "A Multicolor and Fourier Study of RR Lyrae Variables in the Globular Cluster NGC 5272 (M3)", The Astronomical Journal, 129 (1): 267–302, arXiv:astro-ph/0409567, Bibcode:2005AJ....129..267C, doi:10.1086/426325.
Coordinates: 13h 42m 11.23s, +28° 22′ 31.6″
Misnomer
A misnomer is a word or term that suggests a meaning that is known to be wrong. Misnomers often arise because the thing received its name long before its true nature was known, or because the nature of an earlier form is no longer the norm. A misnomer may also be simply a word that is used incorrectly or misleadingly.[1] "Misnomer" does not mean "misunderstanding" or "popular misconception",[1] and many misnomers remain in legitimate use (that is, being a misnomer does not always make a name incorrect).
Sources of misnomers
Some of the sources of misnomers are:
• An older name being retained after the thing named has changed (e.g. tin can, mince meat pie, steamroller, tin foil, clothes iron, digital darkroom). This is essentially a metaphorical extension with the older item standing for anything filling its role.
• Transference of a well-known product brand name into a genericized trademark (e.g., Xerox for photocopy, Kleenex for tissue or Jell-o for gelatin dessert).
• An older name being retained even in the face of newer information (e.g., Chinese checkers, Arabic numerals).
• Pars pro toto, or a name being applied to something which only covers part of a region. The name Holland is often used to refer to the Netherlands while it only designates a part of that country; sometimes people refer to the suburbs of a metropolis with the name of the biggest city in the metropolis.
• A name being based on a similarity in a particular aspect (e.g., "shooting stars" look like falling stars but are actually meteors).
• A difference between popular and technical meanings of a term. For example, a koala "bear" (see below) looks and acts much like a bear, but in actuality, it is quite distinct and unrelated. Similarly, fireflies fly like flies, and ladybugs look and act like bugs. Botanically, peanuts are not true nuts, even though they look and taste like nuts. The technical sense is often cited as the "correct" sense, but this is a matter of context.
• Ambiguity (e.g., a parkway is generally a road with park-like landscaping, not a place to park). Such a term may confuse those unfamiliar with the language, dialect and/or word.
• Association of a thing with a place other than one might assume. For example, Panama hats originate from Ecuador, but came to be associated with the building of the Panama Canal.
• Naming particular to the originator's world view.
• An unfamiliar name (generally foreign) or technical term being re-analyzed as something more familiar (see folk etymology).
• Anachronisms, terms being applied to things that belong to another time, especially much later.
Older name retained
• The "lead" in pencils is made of graphite and clay, not lead; graphite was originally believed to be lead ore, but this is now known not to be the case. The graphite and clay mix is known as plumbago, meaning "lead ore" in Latin, and is still known as "black lead" in Keswick, Cumbria and elsewhere.
• Blackboards can be black, green, red, blue, or brown. And the sticks of chalk are no longer made of chalk, but of gypsum.
• Tin foil is almost always actually aluminium, whereas "tin cans" made for the storage of food products are made from steel with a thin tin plating. In both cases, tin was the original metal.
• Telephone numbers are usually referred to as being "dialed" although rotary phones are now rare.
• When a computer program is electronically transferred from disk to memory, this is referred to as loading the program. "Load" is a holdover term from the mid-20th century, when programs were created on punched cards and then loaded into a hopper for automated processing.
• In golf, the clubs commonly referred to as woods are usually made of metal. The club heads for "woods" were formerly made predominantly of wood.
Similarity of appearance
Difference between common and technical meanings
Association with place other than one might assume
• Although dry cleaning does not involve water, it does involve the use of liquid solvents.
• The "funny bone" is not a bone—the phrase refers to the ulnar nerve.
• A quantum leap is properly an instantaneous change which may be either large or small. In physics, it is the smallest possible change that is of particular interest. In common usage, however, the term is often taken to mean a large, abrupt change.
• "Tennis elbow" (formally lateral epicondylitis) does not necessarily result from playing tennis, nor as a result of any other repetitive strain injury.
1. ^ a b Garner, Bryan (2009). Garner's Modern American Usage (3rd ed.). New York: Oxford University Press. p. 542. ISBN 978-0-19-538275-4.
2. ^ Leitner, Gerhard; Sieloff, Inke (1998). "Aboriginal words and concepts in Australian English". World Englishes. 17 (2): 153–169. doi:10.1111/1467-971X.00089.
Wolf interval
Wolf fifth on C
Pythagorean wolf fifth as eleven just perfect fifths
In music theory, the wolf fifth (sometimes also called Procrustean fifth, or imperfect fifth)[1][2] is a particularly dissonant musical interval spanning seven semitones. Strictly, the term refers to an interval produced by a specific tuning system, widely used in the sixteenth and seventeenth centuries: the quarter-comma meantone temperament.[3] More broadly, it is also used to refer to similar intervals produced by other tuning systems, including most meantone temperaments.
When the twelve notes within the octave of a chromatic scale are tuned using the quarter-comma mean-tone systems of temperament, one of the twelve intervals spanning seven semitones (classified as a diminished sixth) turns out to be much wider than the others (classified as perfect fifths). In mean-tone systems, this interval is usually from C♯ to A♭ or from G♯ to E♭ but can be moved in either direction to favor certain groups of keys.[4] The eleven perfect fifths sound almost perfectly consonant. Conversely, the diminished sixth is severely dissonant and seems to howl like a wolf, because of a phenomenon called beating. Since the diminished sixth is meant to be enharmonically equivalent to a perfect fifth, this anomalous interval has come to be called the wolf fifth.
Besides the above-mentioned quarter comma meantone, other tuning systems may produce severely dissonant diminished sixths. Conversely, in 12-tone equal temperament, which is currently the most commonly used tuning system, the diminished sixth is not a wolf fifth, as it has exactly the same size as a perfect fifth.
By extension, any interval which is perceived as severely dissonant and may be regarded as howling like a wolf may be called a wolf interval. For instance, in quarter comma meantone, the augmented second, augmented third, augmented fifth, diminished fourth and diminished seventh may be considered wolf intervals, as their size significantly deviates from the size of the corresponding justly tuned interval (see Size of 1/4-comma meantone intervals).
Temperament and the wolf
In 12-tone scales, the average value of the twelve fifths must equal the 700 cents of equal temperament. If eleven of them have a value of 700−ε cents, as in quarter-comma meantone and most other meantone temperament tuning systems, the other fifth (more properly called a diminished sixth) will equal 700+11ε cents. The value of ε changes depending on the tuning system. In other tuning systems (such as Pythagorean tuning and 1/12-comma meantone), eleven fifths may have a size of 700+ε cents, thus the diminished sixth is 700−11ε cents. If 11ε is very large, as in the quarter-comma meantone tuning system, the diminished sixth is regarded as a wolf fifth.
In terms of frequency ratios, the product of the twelve fifths must equal 128 (seven octaves), so if f is the frequency ratio of each of the eleven equal fifths, 128/f^11 will be the size of the wolf.
We likewise find varied tunings for the thirds. Major thirds must average 400 cents, and to each pair of thirds of size 400−4ε (or +4ε) cents we have a third (or diminished fourth) of 400+8ε (or −8ε) cents, leading to eight thirds 4ε cents narrower or wider, and four diminished fourths 8ε cents wider or narrower than average. Three of these diminished fourths form major triads with perfect fifths, but one of them forms a major triad with the diminished sixth. If the diminished sixth is a wolf interval, this triad is called the wolf major triad.
Similarly, we obtain nine minor thirds of 300+3ε (or −3ε) cents and three minor thirds (or augmented seconds) of 300−9ε (or +9ε) cents.
Quarter comma meantone
In quarter-comma meantone, the fifth has a frequency ratio of 5^(1/4), about 3.42157 cents (or exactly one twelfth of a diesis) flatter than 700 cents, and so the wolf is about 737.637 cents, or 35.682 cents sharper than a perfect fifth of ratio exactly 3/2, and this is the original howling wolf fifth.
The flat minor thirds are only about 2.335 cents sharper than a subminor third of size 7/6, and the sharp major thirds, of size exactly 32/25, are about 7.712 cents flatter than the supermajor third of 9/7. Meantone tunings with slightly flatter fifths produce even closer approximations to the subminor and supermajor thirds and corresponding triads. These thirds therefore hardly deserve the appellation of wolf, and in fact historically have not been given that name.
Pythagorean tuning
In Pythagorean tuning, there are eleven justly tuned fifths sharper than 700 cents by about 1.955 cents (or exactly one twelfth of a Pythagorean comma), and hence one fifth will be flatter by twelve times that, which is 23.460 cents (one Pythagorean comma) flatter than a just fifth. A fifth this flat can also be regarded as howling like a wolf. There are also now eight sharp and four flat major thirds.
Five-limit tuning
Five-limit tuning determines one diminished sixth of size 1024:675 (about 722 cents, i.e. 20 cents sharper than the 3:2 Pythagorean perfect fifth). Whether this interval should be considered dissonant enough to be called a wolf fifth is a controversial matter.
Five-limit tuning also creates two impure perfect fifths of size 40:27 (about 680 cents; less pure than the 3:2 Pythagorean perfect fifth). These are not diminished sixths, but relative to the Pythagorean perfect fifth they are less consonant (about 20 cents flatter) and hence, they might be considered to be wolf fifths. The corresponding inversion is an impure perfect fourth of size 27:20 (about 520 cents). For instance, in the C major diatonic scale, an impure perfect fifth arises between D and A, and its inversion arises between A and D.
Since the term perfect means, in this context, perfectly consonant,[5] the impure perfect fourth and perfect fifth are sometimes simply called imperfect fourth and fifth.[2] However, the widely adopted standard naming convention for musical intervals classifies them as perfect intervals, together with the octave and unison. This is also true for any perfect fourth or perfect fifth which slightly deviates from the perfectly consonant 4:3 or 3:2 ratios (for instance, those tuned using 12-tone equal or 1/4-comma meantone temperament). Conversely, the expressions imperfect fourth and imperfect fifth do not conflict with the standard naming convention when they refer to a dissonant augmented third or diminished sixth (e.g. the wolf fourth and fifth in Pythagorean tuning).
"Taming the wolf"[edit]
Wolf intervals are an artifact of mapping a two-dimensional temperament to a one-dimensional keyboard.[6] The only solution is to make the number of dimensions match. That is, either:
• Keep the (one-dimensional) piano keyboard, and shift to a one-dimensional temperament (e.g., equal temperament), or
• Keep the two-dimensional temperament, and shift to a two-dimensional keyboard.
Keep the piano keyboard
When the perfect fifth is tempered to be exactly 700 cents wide (that is, tempered by approximately 1/11 of a syntonic comma, or exactly 1/12 of a Pythagorean comma) then the tuning is identical to the familiar 12-tone equal temperament.
Because of the compromises (and wolf intervals) forced on meantone tunings by the one-dimensional piano-style keyboard, well temperaments and eventually equal temperament became more popular.
A fifth of the size Mozart favored, at or near the 55-equal fifth of 698.182 cents, will have a wolf of 720 cents, 18.045 cents sharper than a justly tuned fifth. This howls far less acutely, but still very noticeably.
The wolf can be tamed by adopting equal temperament or a well temperament. The very intrepid may simply want to treat it as a xenharmonic music interval; depending on the size of the meantone fifth it can be made to be exactly 20/13 or 17/11, or less commonly to 32/21 or 49/32.
Keep the two-dimensional tuning system
Figure 1: The Wicki isomorphic keyboard, invented by Kaspar Wicki in 1896.[7]
Figure 2: The syntonic temperament’s tuning continuum.[6]
To use a two-dimensional temperament without wolf intervals, one needs a two-dimensional keyboard that is "isomorphic" with that temperament. A keyboard and temperament are isomorphic if they are generated by the same intervals. For example, the Wicki keyboard shown in Figure 1 is generated by the same musical intervals as the syntonic temperament — that is, by the octave and tempered perfect fifth — so they are isomorphic.
On an isomorphic keyboard, any given musical interval has the same shape wherever it appears — in any octave, key, and tuning — except at the edges. For example, on Wicki's keyboard, from any given note, the note that's a tempered perfect fifth higher is always up-and-rightwardly adjacent to the given note. There are no wolf intervals within the note-span of this keyboard. The only problem is at the edge, on the note E♯. The note that's a tempered perfect fifth higher than E♯ is B♯, which is not included on the keyboard shown (although it could be included in a larger keyboard, placed just to the right of A♯, hence maintaining the keyboard's consistent note-pattern). Because there is no B♯ button, when playing an E♯ power chord, one must choose some other note that's close in pitch to B♯, such as C♮, to play instead of the missing B♯. That is, the interval from E♯ to C♮ would be a "wolf interval" on this keyboard.
However, such edge conditions produce wolf intervals only if the isomorphic keyboard has fewer buttons per octave than the tuning has enharmonically-distinct notes.[6] For example, the isomorphic keyboard in Figure 2 has 19 buttons per octave, so the above-cited edge-condition, from E♯ to C♮, is not a wolf interval in 12-TET, 17-TET, or 19-TET; however, it is a wolf interval in 26-TET, 31-TET, and 53-TET. In these latter tunings, using electronic transposition could keep the current key's notes centered on the isomorphic keyboard, in which case these wolf intervals would very rarely be encountered in tonal music, despite modulation to exotic keys.[8]
A keyboard that is isomorphic with the syntonic temperament, such as Wicki's keyboard above, retains its isomorphism in any tuning within the tuning continuum of the syntonic temperament, even when changing tuning dynamically among such tunings.[8] Figure 2 shows the valid tuning range of the syntonic temperament.
1. ^ A.L. Leigh Silver (1971), p.354
3. ^ The wolf fifth
4. ^ Duffin, Ross W. (2007). How Equal Temperament Ruined Harmony (and Why You Should Care). New York: W. W. Norton. p. 35. ISBN 978-0-393-06227-4.
5. ^ Definition of Perfect consonance in Godfrey Weber's General music teacher, by Godfrey Weber, 1841.
6. ^ a b c Milne, Andrew; Sethares, William; Plamondon, James (December 2007). "Invariant Fingerings Across a Tuning Continuum". Computer Music Journal. 31 (4): 15–32. doi:10.1162/comj.2007.31.4.15*. Retrieved 2013-07-11.
7. ^ Gaskins, Robert (September 2003). "The Wicki System—an 1896 Precursor of the Hayden System". Concertina Library: Digital Reference Collection for Concertinas. Retrieved 2013-07-11.
8. ^ a b Plamondon, Jim; Milne, A.; Sethares, W.A. (2009). "Dynamic Tonality: Extending the Framework of Tonality into the 21st Century" (PDF). Proceedings of the Annual Conference of the South Central Chapter of the College Music Society. |
From Wikisource
In the physiological view, the law that links the emotion with its exterior signs is the same that governs all the manifestations of life and force; it is the law of the equivalence of movements. At any particular moment, the quantity of nervous force corresponding to the state of consciousness called sensation has to expend itself in some way, and engender somewhere an equivalent manifestation of force. The expended force may itself follow three different courses. Sometimes the nervous excitation is transformed simply into cerebral movements corresponding with a mental agitation. This is what takes place, for example, when a child hears a story that interests and moves it. At other times the nervous excitation is transformed into movements of the viscera, and follows the ganglionic nerves. Agreeable thoughts, for example, aid digestion. Fear may paralyze the nerves of the intestine. The heart beats more rapidly under emotion, and sometimes stops, and this influence is accomplished through the means of the pneumogastric nerves. Or the nervous excitation, following the motor nerves, is transformed into movements of the muscles, which then become the exterior and visible signs of the emotion. A burn on the finger produces a contraction of the features. A lively joy or a deep disquiet throws us into a condition of agitation and purposeless talking and moving about. If the emotion is concentrated, the cerebral disturbance increases in violence as the muscular agitation diminishes. When we spend the excess of our agitation in external movements, in gestures, walking back and forth, tears, and lamentations, the cerebral agitation is correspondingly diminished. These phenomena of diversion are nothing else than particular cases of the conservation of force and the propagation of movements. Sometimes the propagation results in a real metamorphosis. Very violent emotions, producing a reaction on the central parts of the innervation, bring on a sudden paralysis of a number of muscular groups, while feeble disturbances of the sensibility produce superexcitation, which is subsequently replaced by exhaustion. This is what Wundt calls the law of the metamorphosis of nervous action. There result from it effects of balancing and compensation which, in our opinion, are still simply an application of the law of equivalence between movements.
M. Mosso's physiological explanations usually revert into Wundt's law, and with stronger reason into the general law of the equivalence of forces. He has shown that cerebral excitation makes the blood flow to the brain, and that, during intellectual labor, the afflux is sufficient to diminish the volume of the arm. He observed the circulation of the blood in three subjects whose craniums had been partially destroyed. Whenever a stranger came in, or a sudden noise was heard, the cerebral pulse rose immediately. Under the influence of fear the blood flows back to the extremities, to such an extent that a ring can not be pulled off from the finger. M. Mosso has also applied the balance to the study of the circulation. A man is laid full length in a wooden box, arranged as a balance upon a knife-edge, with apparatus for marking the trace of the pulse in the feet and hands, and the changes of volume undergone by these organs. When the balance and the man in it are in equilibrium and repose, something is said to the man. Instantly, by the effect of the excitation received and the attention responding to it, the balance inclines toward the man's head.
Mr. Warner has carefully studied the effects of the emotions in nutrition, which he calls the trophic signs. Maladies that modify nutrition also modify the nervous system, and render it more irritable. The poorly-nourished child often has what the doctors call the nervous—that is, shaky—hand; a more reduced nutrition may end in chorea. Plants also afford examples of excessive irritability, arising from imperfect nutrition. Some sensitive plants were sowed in clear sand, and others in vegetable mold mixed with sand in different proportions. The first, which had nothing but air to feed upon, languished and died; they were extremely sensitive to the lightest touch; a breath, or a slight motion of the pot made them droop. Those plants which had a third or two thirds of vegetable mold were still irritable, but in a less degree, and would not bloom. Those which had pure vegetable earth became robust and nearly insensitive; striking their leaves with a stick, would make them double up, but they would unfold again almost in an instant.
If the physiologists had considered the emotions in their psychological elements, they would have been better able to account for their manifestations, and would not have involved themselves in an inextricable confusion. In all passion there is first an intellectual element—perception or idea; next a sensible element—pleasure or pain; and, finally, a volitional element—desire or aversion. We must, then, to account completely for an expressive motion, seek first the sensitive and mental state which it expresses; second, the affective state; and, third, the corresponding attitude of the will.
Some psychologists, with Herbart, have looked for the primary origin of the emotions in the intelligence, and have sought to explain them by a simple play of ideas. Herbart has made the mistake of having seen only the intellectual effect in passion.
M. Wundt rather sees the force of the will under that of the ideas, but he places this force solely in the attention, in what he calls the apperception, or the grasp of objects by the intelligence. Emotion is, then, according to him, in its origin, only the effect produced by the feeling on the attention. He concludes that the elementary emotion is surprise, "which behaves, in regard to the more complex movements of the soul, merely as the æsthetic feeling awakened by a simple geometric form as opposed to the effect produced by a work of art." M. Wundt might have added that surprise is the intellectual analogue of the mechanical shock with its well-known elastic effects.
The study of the physical effects will also help to enlighten us as to the nature of the causes. Surprise is manifested by open eyes, elevated eyebrows, open mouth, and raised hands. The eyes are opened to gain a clearer view of the strange object, and the lifting of the eyebrows is an accompaniment to that movement. The opening of the mouth is a consequence of the relaxation of the muscles caused by the flight of nervous force to the brain, and is also a movement promoting the deeper inspiration which is a requisite to energetic effort, and which accompanies the accelerated beatings of the heart. The raising and throwing back of the hands may be regarded as a cautionary movement.
Now let us consider what states of sensibility would correspond among the rudimentary animals with the different modes of general activity, accompanied by movements of expansion and contraction. We shall then have the two following situations: first, approach of an advantageous object, followed by increase of activity beyond the normal state, with pleasure and the movement of general expansion, which is the sign of it; and, second, on the approach of the injurious object, descent of activity below the normal, pain, and the movement of general contraction. With a step further in evolution, the internal movement of contraction, perfecting itself by natural selection, has brought the living being to a massive movement of transport in space, which will take it away from the object—this is the movement of aversion and flight. The movement of expansion, on the contrary, would have provoked a transportation of the whole body of the living being toward the agreeable object—this is the movement of inclination and pursuit. Here are two new signs in the natural language. Add to them the idea of the object that causes the pain or the pleasure, and we shall have conscious repulsion and desire.
These are the primary emotions, with the general movement of the body that expresses them at the first moment. We can say, then, contrary to Mr. Spencer, that, if the intensity of an agreeable feeling is expressed by an exaltation and expansion of motive activity, the intensity of a painful feeling is expressed at once by a contraction and diminution of motive activity. In joy the different organs only reproduce and aid the general movement of expansion; the features dilate, the eyebrows turn upward, the entire physiognomy opens, the voice rises and swells, and the gestures expand in more ample and more numerous movements. We can also say correctly that the lungs dilate, and their play is rendered easier; the cerebral functions are performed with more rapidity and ease; the intelligence is more animated; the sensibility more expansive; the will more kindly. In a word, the expression of joy is a general expression of liberty, and, by that fact, of liberality.
Next, we pass to the immediate expression of pain. At the first moment the depression of activity is manifested by a general depression of the motive force. "The lips are relaxed," says Sir Charles Bell, "the lower jaw drops, the upper eyelid falls and covers half of the pupil, and the eyebrows incline like the mouth." It is true that some other muscles simultaneously become tense, and enter into play, but Mr. Bain has shown that they are the ones the contraction of which is related to the relaxation of the other muscles. "With a little force a greater one is relaxed." The expenditure in this case is made for saving, and takes place, we think, because the first motion in the face of pain being a movement of conservation and concentration on self, is also a tendency to save the force which is felt to be diminishing—we retire from the pain, and try to recover ourselves. The first stage of pain does not last long, for the reaction begins at once. While the will can consent to pleasure, it can not consent to pain. It defends itself, it struggles, against it. After the first stroke of pain that casts down, we perceive the signs of effort. Sometimes the effort is spasmodic, and involves a prodigality of force that can hardly fail to bring on quick prostration.
Suffering and joy are always accompanied by aversion and desire. The movement of concentration upon self and of the defensive, common to all personal or egotistical feelings, gives to their expression, as M. Mantegazza has remarked, a character essentially concentric or centripetal, while the expression of the benevolent affections is centrifugal and "eccentric." Fear presents the type of the concentric physiognomy pertaining to the affections which have for their center the me.
While the feelings derived from aversion are concentric, those derived from desire are expansive. The setting forth of them is expressed by the body, the arms, the head, lips, and eyes, by a tendency to enlargement and touch, the aspect of which is varied according to the nature of the objects and of the possible touch. With joy and suffering, aversion and desire, we have the four fundamental passions, the commingling of which is sufficient to account for all the others, and the expression of which in like manner engenders the most complex mimicry. Physiologists have not taken enough notice of the simplifications which could thus be effected by psychology. The whole can be definitely relegated to a general movement of the will toward the objects or their opposites; and it is the correlative movement of organic expansion or contraction that is the real generator of the language of the emotions.
We pass next to the considerations, ordinarily neglected, that can be borrowed from sociology. When the series of brain-disturbances is produced which have their origin in the appetite or the zest of life, the movement is then inevitably propagated to all the organs. There is in this case, in the first instance, a mechanical contagion, but there is, also, we think, a psychological contagion, and consequently a social phenomenon. The organism, in fact, is a compound of elementary organisms, a society of living cells, united among one another by bonds more or less strict. The cerebral cells being analogous to all the other cells, it is hardly probable that these should not also have their mental side—that is, that they should not be the seat of rudimentary sensations, of vague emotions, and of blind appetitions. In the myriapod it is the head or terminal segment that directs, sees, and smells, but all the other segments also fulfill their appropriate functions, and have their peculiar life in the midst of the collective life. If we cut the animal into several parts, the different parts will continue to move and react under external excitations; it is, therefore, improbable that the head should be the only part to possess sensibility and appetite. When a wound is inflicted upon the animal, it is felt in different degrees by all the segments, and the reaction is propagated from segment to segment. With the superior animals, which are a sort of very centralized states, the concentration of consciousness into the head only obscures the rudiment of sensibility which is still subsisting between the other parts.
Thus, we think, is explained the association of similar sensations with one another, and of sensations with emotions. Wundt has insisted upon these two psychological laws, while he has perhaps limited himself too much in establishing them. By virtue of the first law, analogous sensations are associated together; grave sounds have a relationship with somber colors; high tones with bright colors and with white. The sharp sound of the trumpet, and bright yellow and red, correspond. We say, with reason, that there are shrill colors, also that there are cold colors and warm. The reason of these existing affinities between different sensations is that they can be relegated to a fundamental unity; they are all, fundamentally, excitations and sympathetic reactions of the same primordial appetite.
This fundamental unity explains, we think, the other great psychological law of association, which connects the sensations with analogous emotions—a law which plays a very important part in expression. Wundt has shown that there is something exact in the images of vulgar language—a hard necessity, a sweet tenderness, bitter griefs, black cares, a somber destiny. These images, so far from being wholly artificial, have their natural origin in the constitution of our sensibility and in the relation of the sensitive organs to the motor muscles. Our sensitive organs are provided with muscles which have the double purpose of better disposing them to receive favorable excitations and removing harmful agents. The mouth takes a different form and expression accordingly as we are tasting a sweetened liquor or swallowing a bitter draught; in the former case, it seems to dispose itself to attract and receive, in the latter to repel and reject. Darkness, a glaring light, a clear daylight, give by turns a different figure to the physiognomy. By virtue of the association of the emotions with similar sensations and of these with their corporeal expression, agreeable or disagreeable feelings—joy, esteem, fear, grief, spite—are manifested by muscular contractions resembling either the action of pleasing tastes and smells, and of the luster of a tempered light, or of bitterness, poisonous odors, darkness, and blindness. If the expression is the same for the physical sensation and the moral feeling, it is because both have their unity, not only in the same field of consciousness, but also in the same movement of the appetite and the will. Whatever the causes and whatever the objects, we simply desire what augments our activity, and repel what diminishes it.
Reciprocally, the willful expression of an emotion which we do not feel, generates it by generating the sensations connected with it, which in their turn are associated with analogous emotions: the actor who expresses and simulates anger ends by feeling it to a certain extent. Absolute hypocrisy is an ideal; it is never complete with a man, realized in full, it would be a contradiction of the will with itself. In every case, Nature is ignorant of it; sincerity is the first law of Nature as it is the first law of morals. So it is with sympathy. Nature knows no isolation of ideal egoism; it brings together, it confounds, it unites. Like heat and light, it can not give life and sensibility to one point without making them radiate upon the other points. Even within the individual organism, it establishes a society; and he who believes himself one and solitary is already several: the I is already the we. In this way, all the organs, the heart, arteries, nerves, and muscles sympathize with the brain, and tell, each in its own language, of the suffering or enjoyment in which they are participating. In this way, too, the brain sympathizes with the organs, changes their pain into sadness, and their sensation into feeling; it sends them back its pain and receives it multiplied; a sad thought soon has a cortege of myriads of painful sensations, from the movements of the heart and chest to the most superficial parts of the organism.
To the association of analogous sensations or emotions may be referred, we think, the third of the laws of expression, which Darwin has studied without exhibiting its real meaning—the law of antithesis. Some states of mind, says Darwin, induce in the animal certain habitual acts which are useful to the support or defense of life; and when a state of mind of a directly inverse character is produced, the animal instinctively and by antithesis performs the opposite acts, even when they are useless. Physiologists have rejected the Darwinian principle of antithesis, and the examples he cites in illustration of it may generally be explained in another way. But we think the principle has a psychological value which Darwin failed to elucidate. The association of states of consciousness takes place by contrast and antithesis as well as by analogy; contraries as well as similars are subject to a law of association, which is especially manifested in the domain of the emotions. There exists a fundamental antithesis between pleasure and pain, between acceptation and repulsion by the will. An organic connection appears to be established between these opposites, in such a way as to produce a perpetual bifurcation of movements. It is not, therefore, strange that the contrary of a feeling should be expressed by contrary movements or attitudes, aside from all considerations of utility or all choice of the will. This contrast affords a means of facilitating the interpretation of signs.
The law of antithesis is thus a particular case of the law of association, which itself results from the natural concert of all the organs. This concert, or sociality, is so much the essential character of the emotion and its language, that the absence of accord and consonance between all the parts of the organism gives us the means of distinguishing feigned emotions from real. Thus, in theatrical pain, the expression is exaggerated out of all proportion to the occasion, and the real physical condition is so unlike the assumed that the sham is easily detected, and the illusion may be destroyed by a slight accident. On the other hand, when dissimulation of a real emotion is attempted, it is very hard to keep the current of feeling, which is not allowed to express itself in the natural way, from finding vent in some other way, as in mental excitement, or in movements which apparently have no relation to the suffering experienced. Passions on the point of breaking out may be revealed by rhythmical movements of the fingers, or by forced respiration.
The professions also leave their traces in the forms of the organs and in the features. "The bearing of the soldier," says M. Mantegazza, "is precise, stiff, and energetic; that of the priest, supple and unctuous. The soldier, even in civil life, shows in his movements the habit of obedience and command; while the priest in a lay dress wears the mark of the cassock and the cloth, and his fingers seem all the time to be blessing or absolving." So many other professions may be recognized by their attitudes, but there are limitations in the matter; for physiognomy, as M. Mantegazza says, "can not yet be considered an exact science, because we do not yet know all the elements of the problem. It has, nevertheless, its well-established general laws. We are not likely to confound a frank physiognomy with a tricky one, or an honest face with the face of a debauchee or rascal."
There remain a few words to be said on the interpretation of signs, in which the old psychology saw a mysterious faculty. We regard it as the simple continuation in another of the sympathetic contagion, of the solidarity which is first manifested in the interior of an organism. In the exterior as well as in the interior of our body, sympathy is the only psychological law of expression; to interpret is to sympathize. In a mechanical view, this sympathy is a real communication of movements, as when the vibrations of a bell set another bell in vibration; in the psychological and social view, it is a real solidarity of sensations, impressions, and volitions. The instinctive reaction of the will under the influence of the feeling, having been extended by contagion to our whole organism, extends by contagion to similar organisms, and, if other men comprehend what we feel, it is because they themselves feel it. The final result of this sympathetic communication is the retranslation of the emotion felt by one into similar emotions in the others. The emotion of our neighbor is returned to us by a kind of response or return shock. Seeing the movements and attitudes of others, we tend to realize them in ourselves; then, as by a counter-stroke, the movement and attitude realized by us reproduce in us the feelings that correspond to them.
1. Angelo Mosso, "La Paura," 1885.
2. Warner, "Physical Expression," 1886.
Family Planning and the 19th Century Family Tree
According to the CDC’s MMWR publication Achievements in Public Health, 1900-1999: Family Planning, discussing birth control, counseling women about family planning or distributing contraception was illegal under state and federal laws. "In 1912, the modern birth-control movement began. Margaret Sanger, a public health nurse concerned about the adverse health effects of frequent childbirth, miscarriages, and abortion, initiated efforts to circulate information about and provide access to contraception. In 1916, Sanger challenged the laws that suppressed the distribution of birth control information by opening in Brooklyn, New York, the first family planning clinic. The police closed her clinic, but the court challenges that followed established a legal precedent that allowed physicians to provide advice on contraception for health reasons. During the 1920s and 1930s, Sanger continued to promote family planning by opening more clinics and challenging legal restrictions. As a result, physicians gained the right to counsel patients and to prescribe contraceptive methods. By the 1930s, a few state health departments (e.g., North Carolina) and public hospitals had begun to provide family planning services."
Despite this, we learn from history that contraception, in one form or another, has been used for centuries. Planned Parenthood provides A History of Birth Control Methods, and describes ways our ancestors may have attempted to limit their family sizes. In China, women drank lead and mercury, which we now know will cause sterility, but unfortunately, may also result in death. Of course, there were other ineffective methods tried. From the Planned Parenthood 2006 Report: “During the Middle Ages in Europe, magicians advised women to wear the testicles of a weasel on their thighs or hang its amputated foot from around their necks (Lieberman, 1973). Other amulets of the time were wreaths of herbs, desiccated cat livers or shards of bones from cats (but only the pure black ones), flax lint tied in a cloth and soaked in menstrual blood, or the anus of a hare. It was also believed that a woman could avoid pregnancy by walking three times around the spot where a pregnant wolf had urinated. In more recent New Brunswick, Canada, women drank a potion of dried beaver testicles brewed in a strong alcohol solution.”
While many religions frowned upon family planning (birth control viewed as something that immoral women or prostitutes would use, not those who were married), husbands and wives continued to look for ways to decrease the sizes of their families. A good example is extended and complete breastfeeding, which has been used around the world to increase the time between the birth of children. While many found this to be very effective, it was not popular among the wealthy, who often utilized wet nurses. Abstinence was another method of birth control, which was promoted in the 1870s for married women attempting to limit family sizes. Planned Parenthood attributes a rise in sexually transmitted diseases to this movement, as men began to turn to prostitutes instead of their wives.
Barrier methods were also used. Surprising to many, the condom is one of the oldest forms of contraception, dating back to Egypt about 1,000 BC. Originally created from animal gut in an effort to protect against syphilis, it wasn’t until the 1700s that the contraceptive properties of the condom were recognized. By the 1840s, rubber condoms were available, and in the 1930s latex condoms became popular.
While birth control methods have certainly evolved through the years, and the 21st century woman has many options available to her today, it saddens me to think how many years it took for birth control to be accepted. While watching ABC World News last night, I was immediately reminded how much we take for granted. The segment included a Middle Eastern woman with her newborn infant, who had the opportunity to talk via Skype to an American mom. She asked, “Are you also afraid of dying in child birth?” If such fears continue to plague 21st century Middle Eastern mothers-to-be, imagine the anxiety experienced by our American ancestors. According to the CDC, in 1800 the average mother bore seven children. By 1900, the family size had decreased to 3.5 children, and six to nine of every 1000 women died in childbirth. Some statistics report that the 19th century death rate was as high as 10% for those giving birth.
I am incredibly grateful for modern medicine and birth control, and even more in awe of my many female ancestors who married and had families without the advantages available to 21st century women today.
For additional reading, here are some other really interesting web sites discussing the history of family planning:
What Makes a Great Developer?
What makes a truly great developer? Some might say a positive attitude. Some might say a high-sugar, high-caffeine, high-bacon diet. Some might say an absence of sunlight and as many monitors as a desk can support. I say pessimism and laziness are high up the list.
Certainly, everyone has anecdotes about developers they've worked with who they thought were brilliant. Unfortunately, most of the time that judgement is made not based on code quality, or hitting of deadlines, but on less relevant criteria, like whether or not the developer knew the names of their colleagues, how many lines of code they output or how confident they sounded when talking about their work.
Unfortunately, the best developers don't always come across positively. While this list may not be applicable to every development environment, here are a few of the traits to look out for to spot a great developer.
Great developers are almost always pessimistic with regard to their work. That doesn't mean they're not upbeat, lively or even cheerful - just that they will always be thinking about what can go wrong and how it can be dealt with.
They'll assume that at some point they'll need to undo work already completed, that hardware will fail, that all security will be compromised, and that your office will burn to the ground. The really brilliant ones will assume that will all happen on the same day. And they won't be happy until there is a specific, actionable, testable - and fully tested - plan for dealing with these sorts of issues. Even then they won't be completely happy.
Pessimistic developers will be the ones that find constant flaws in ideas, and the important thing to remember when working with them is that they're not doing that to tear down other people's ideas - they're doing it to ensure that the ideas that turn into projects are properly thought through and that as many problems as possible have been anticipated in advance. That neurotic, paranoid, pessimistic attitude is exactly what you should be looking for if what you want from your developers is robust, secure, reliable code.
By contrast, an optimistic developer will be more likely to simply assume code will work, or that it is secure, or give a deadline for a project without considering all the potential pitfalls.
Likely to be heard saying: "And what happens when that goes wrong?"
Laziness is not usually viewed as a desirable trait, and in this case I don't mean turns-up-late-and-pretends-to-work laziness or just-move-that-logic-to-the-view laziness - both entirely unwanted. I mean a desire to not do tasks that are repetitive, or to waste time doing things a machine can do for you, or even to avoid future work by writing better code now. A lazy developer is one that builds a reusable code library, or wants a fully automated build process rather than a manual copy-and-paste one, or wants comprehensive automated unit testing, or writes code to be scalable even though that wasn't a requirement (rather than revisit it later).
As a bonus, a lazy developer is also usually one who will try and keep a project focussed on its core goals, rather than try and cram more work into the same time, providing a buffer against feature creep.
For example, when writing a category structure, a lazy developer might be likely to assume a many-to-many relationship between parent and child categories, even though the project specification says it will be a one-to-many relationship. Why? Because it might be needed one day and it would be better to write it that way from the start than to revisit it later.
Likely to be heard saying: "We could automate that."
Good developers are often rather like Gregory House. They're very easily bored by repetitive work (see laziness) and spend most of their time ploughing through it looking for an interesting and challenging (and hopefully new) problem to solve. The less time they can spend on the repetitive, the higher the frequency of the challenges.
Curious developers will be constantly looking for new problems to solve, and better ways to solve previous problems. They'll be the ones encouraging new ways to work and constantly tweaking and trying to improve existing systems. They'll also be the ones most conscious of existing problems in the current working environment, and trying to correct those problems. Curious developers will usually have a wide breadth of knowledge, not just of their primary language(s), but of supportive, associated and alternative technologies.
Curious (or easily-bored) developers are often the least stuck in their ways - the most open to change. They may well need convincing of why a new way of working is better (and that's no bad thing) but as long as it's an improvement, and likely to release more time to spend on the interesting problems, they'll embrace it with a minimum of resistance.
Curiosity also breeds creativity, another highly desirable trait in any developer. A strong desire to work out what has caused a problem and how to solve it is highly likely to motivate someone to continue once obvious avenues are exhausted. It is that sort of mentality that fosters "outside the box" thinking and creative coding.
Possibly the most useful attribute of a curious developer is that desire to find and cure a problem rather than just paper over the crack.
Likely to be heard saying: "Maybe there's another way to do this."
Many great developers are sticklers for detail. They will demand consistency in their work and the work of their team (they're likely to care about common code standards and naming conventions, for example). They'll want unit testing and peer review of code. They'll want everyone in their team to comment on and document code. They are likely to be fussy about version control log messages.
They'll also be fussy about details in communication, and happy to ask what might seem like obvious questions, simply to be sure they have properly understood. This is especially true of things like bug reports. While they may not be terribly motivational communicators, they will usually be able to explain concepts clearly and effectively. That clarity is a tremendous advantage in any development environment, especially if teaching and learning are encouraged.
Likely to be heard saying: "I just have a couple of questions ..."
Pare in Oxford
12 Sep
I wrote this article a year ago, when I was just learning English structure for the first time. Here we go:
The first time I heard about Pare was five years ago, when I became a student at UIN Jakarta. At that time, I only knew that Pare is a small village in Kediri, East Java. I also knew that there were many course centres there, especially English courses, and camps (dormitories) where students live and study inside an English Area. I have to go there, my heart said. That dream has come true, because I am here now. It's fantastic! How happy I am!
There are a lot of opinions about the word “Pare”. I'll explain why this place was given the name “Pare”, based on what I have heard from some of the resource persons I have met. Are you ready?😀
1. In Javanese, Pare is called panglerenan, which means a place to take a rest. Why did people rest in Pare? In the past, there was a flood somewhere nearby and some places were flood-stricken. Fortunately, Pare was not a flood area, so people fled to Pare to save themselves.
2. If you check the Oxford Dictionary: pare /peə(r)/ v [T] 1 ~ (off/away) cut away the outer part, edge or skin of sth; 2 ~ (back/down) gradually reduce the size or amount of sth.
3. Most people here believe that whoever comes to Pare wants to reduce his or her ignorance. People come here for many purposes: to become a good speaker of English/Arabic/Japanese/other languages or a master of grammar, to get a TOEFL/IELTS score, to improve their pronunciation, to practice speaking, to memorize vocabulary, short expressions, and idioms, or just to fill a holiday.
4. Pare is also the name of a vegetable with a bitter taste. It suggests that people who come here must be able to endure hardship. They have to memorize lessons, wake up early to drill short expressions and idioms, and speak English all the time, not just every day, and everywhere, because if they don't, they will be punished by their camp tutor. There are many kinds of punishment, such as memorizing vocabulary or short expressions with correct pronunciation, paying money, cleaning the bathroom, and so on.
I have mentioned only some of the reasons why it is called Pare. I believe there are many more opinions about Pare. Whether you agree or not, it doesn't matter. Now it's your turn to leave a comment for me😀
Karesansui, meaning dry garden in Japanese, is better known in the West as the Zen garden. Its deep symbolism unites the age-old Japanese art of gardening and Zen philosophy. The elements, water, stones, and plants, represented by gravel and rocks, are suspended, each with its own value, inside the main element, emptiness. Ocean water is represented by rivers of pebbles; stone is the symbol of all that exists in the natural world; rocks are a mother tiger with her cubs swimming towards a dragon, and part of the kanji character used in Japanese writing for heart and mind. Wabi-sabi is the serenity and the beauty of knowing that everything in this world has life.
You can read more at my Garden Art page.
Havstad, K.M., Joyce, L., Pieper, R., Svejcar, A.J., Yao, J., Bartolome, J., Huntsinger, L., Peters, D.C., Allen-Diaz, B., Bestelmeyer, B.T., Briske, D., Brown, J., Brunson, M., Herrick, J.E., and Johnson, P. (2009). "The western United States rangelands, a major resource." In: Grassland, Quietness and Strength for a New American Agriculture. Keywords: JRN; western U.S. URL: bibliography/09-011.pdf. Record: LTER.2009-90132.
Abstract: Rangeland is a type of land found predominantly in arid and semiarid regions, and managed as a natural ecosystem supporting vegetation of grasses, grass-like plants, forbs, or shrubs. There are approximately 761 m ac of rangeland in the United States, about 31% of the total land area. This land type is characterized by 4 features: 1) limited by water and nutrients, primarily nitrogen (N), 2) annual production is characterized by tremendous temporal and spatial variability, 3) a nested landscape of public and private ownership, and 4) throughout their history of use these lands have been uniquely coupled systems of both people and nature. In the U.S. Department of Agriculture's 1948 Yearbook of Agriculture, the chapter on rangelands focused on a description of these lands occurring by region across the western United States, and the principles, developed mostly in the early 20th century, to manage these lands to provide the provisioning services of food and fiber through livestock grazing. In the last 60 years, these western rangelands have undergone a transformation as the U.S. population has grown to over 300 million and relocated to urban areas within the western and southwestern states. This population dynamic, along with tremendous changes in agricultural production and a reduction in the population involved in agriculture have resulted in significant changes in the uses and emphases placed upon these western lands. This land type is now often looked to provide a multitude of goods and services not only to rural populations, but also to tens of millions of people in large urban areas located within these rangelands. In this chapter it is our intent to reflect on the extent and nature of this transformation over the last 60 years. We start with a description of this human dynamic, and its sociological implications. We describe the major regions of the western continental U.S., the focal point of U.S. rangelands.
Milady's chapter 8 Basics of nutrition
Macronutrients
The three basic food groups (proteins, carbohydrates, and fats) that make up the largest part of the nutrition we take in.
Proteins
Chains of amino acid molecules used in all cell functions and body growth.
Amino Acid
Organic acids that form the building blocks of protein.
Deoxyribonucleic acid
The blueprint material of genetic information; contains all the information that controls the function of every living thing in the cell.
Nonessential amino acid
Amino acids that can be synthesized by the body and do not have to be obtained from the diet.
Complementary foods
Combinations of two incomplete foods; complementary proteins eaten together provide all the essential amino acids and make a complete protein.
Carbohydrates
Compounds that break down the basic chemical sugars and supply energy for the body.
Adenosine Triphosphate
The substance that provides energy to the cells and converts oxygen to carbon dioxide, a waste product we breathe out.
Mucopolysaccharides
Carbohydrate-lipid complexes that are also good water binders.
Glycosaminoglycans
A water-binding substance between the fibers of the dermis.
Monosaccharides
Carbohydrates made up of one basic sugar unit.
Disaccharides
Sugars made up of two molecular sugar units.
Polysaccharides
Carbohydrates that consist of a chain of sugar unit molecules.
Hypoglycemia
When blood sugar drops too low without adequate carbohydrates.
Fats (lipids)
Macronutrients used to produce energy in the body; the materials in the sebaceous glands that lubricate the skin.
Linoleic acid
Omega 6, an essential fatty acid used to make important hormones and the lipid barrier of the skin.
Omega 3 fatty acids
alpha linolenic acid; a type of "good" polyunsaturated fat that may decrease cardiovascular diseases. It is also an anti inflammatory and beneficial for the skin.
Calorie
A measure of heat units; measures food energy for the body.
Arteriosclerosis
Clogging and hardening of the arteries.
Enzymes
Biological catalysts made of protein and vitamins that break down complex food molecules into smaller molecules, so the body can utilize the energy extracted from food.
Micronutrients
Vitamins and substances that have no calories or nutritional value, yet are essential for body functions.
Vitamin A (retinol)
An antioxidant that aids in the functioning and repair of the skin and skin cells.
Vitamin D
Fat soluble vitamin sometimes called the sunshine vitamin because the skin synthesizes vitamin D from cholesterol when exposed to sunlight. Essential for growth and development.
Osteoporosis
A reduction in the quality of bone, or atrophy of the skeletal tissues.
Vitamin E (tocopherol)
Primarily antioxidant; helps protect the skin from the harmful effects of the sun's rays.
Vitamin K
Essential for the synthesis of proteins necessary for blood coagulation.
B vitamins
These water-soluble vitamins interact with other water soluble vitamins and act as coenzymes (catalysts) by facilitating enzymatic reactions. B vitamins include niacin, riboflavin, thiamine, pyridoxine, folacine, biotin, cobalamine, and pantothenic acid.
Vitamin C (ascorbic acid)
An antioxidant that helps protect the body from many forms of oxidation and from problems involving free radicals.
Bioflavonoids
Referred to as vitamin P; they enhance the absorption of vitamin C.
Minerals
Inorganic materials essential in many cell reactions and body functions.
Fat soluble vitamins
A, D, E, and K are generally present in fats within foods; the body stores them in the liver and in adipose (fat) tissue.
Water soluble vitamins
B and C are not stored in the body and must be replaced daily. |
symbols - their history and meaning
calreisan symbology
the old language of calreisa used symbols, and like egyptian hieroglyphs there was a root word and a secondary symbol which could change the whole word or context. but for these people, a race called "the docile" (english translated) of calreisa, symbols meant much more. these reptoids were dominated by the aggressor race of their world, and by that i mean slavery. they were an enslaved race for as long as they had history. as such they were never educated beyond the tasks given. mathematics and written language were outlawed to them. even in their language, "me" was not part of their vocabulary, and "i" was not a word they could use in the presence of their masters ("me" draws the focus on the self, and these people weren't allowed its use). anything written was sacred to them because it was all they had to recall the past of their people. this first symbol was in use 20,000 years ago, earth time (it is still used today to identify the slave quarters). it is symbolic of their social relationship: they served the dominant race. "he" in this case is a generic term, meaning male or female. it is not a symbol of the master race, but of the enslaved.
this next symbol predates that, but by how long i don't know. again it's a symbol with meaning from the perspective of the enslaved, and it points to history without giving an age, because when this symbol was in use there were three races socially dominated by an aggressive kind. over time, two were wiped out.
these first two symbols become more important later, but i'm hand drawing them with just my ruler, and the dimensions have to be very precise. you will note, however, that the triangles cross but don't 'meet'. this is because they represent a social hierarchy in which the dominant race never interbred or even interacted socially with the dominated.
these next symbols are essentially a 'break down' of the other two. it is the symbol of the dominant race, or "master race". in this case, it literally means the race is at the top of the social hierarchy.
it is the upper two sides of an equilateral triangle, with or without the circle above, or the 'feet'. this becomes important later. next is the slave component of the symbol - the upturned equilateral triangle. if it has 'feet', then the direction of these feet is important. outwards indicates slaves. inwards indicates complicity - or those that willingly serve.
so, to put the symbols into perspective: the upward-pointing triangle denotes the dominant race, its feet towards, but not touching, the upside-down one, representing both dominance and social hierarchy. the upturned triangle, when it has feet, denotes whether the dominated race is enslaved or willing to 'obey'. this is a symbol of the (unwillingly) enslaved in that social relationship -
and this is co-operative or capitulated enslavement or obedience to the "master race"-
moses and the ascended masters
if you read the book of enoch and the book of jubilees it is clear that they pointed to human experimentation. they also give the general location - in northern ethiopia. but the humans used for the eden experiments weren't local to africa - they were from what is now called turkey - specifically the ararat mountain region (remember noah and the ark? this will be elaborated further down the track) - and from other locations in the middle east and around the world, including south america and the pacific islands.
according to the books of enoch and jubilees, the calreisans transported a group of humans from what is now turkey to what is now ethiopia as base stock for their experiments, and then let the survivors loose - these are the hebrew/jewish tribes who entered egypt. however i do not believe the traditional story that they were slaves of the egyptians. what i want to look at first specifically is the part of the moses story where 'god' speaks to moses and tells him to stop his people worshiping "the bull". remember the calreisan language at that time was heavily symbolic by nature. symbols instead of letters. remember that an upside-down "V" stood for their status as the "master race".
if you look at the ancient languages which are from that part of the world, you come across three types. the egyptians used hieroglyphs - pictures to represent words or word groups; the sumerians used a non-phonetic written language using 'wedges' arranged into specific patterns to represent words and numbers; and the phoenicians used an alphabet which was non-phonetic, but based on symbols. it is the phoenician alphabet which gives us the first and most important clues to decoding the biblical story which is almost taken for fact these days. the first and last symbol of the phoenician alphabet was the same symbol - a "V" on its side. and the symbol stood for 'ox'.
when moses was told by "god" to stop the people he was leading from worshipping "the bull", it wasn't intended to mean they were literally worshipping a bull figure. because it was the first symbol of their alphabet, and therefore the most significant, he was instructed to turn the symbol upright and in doing so show reverence to them.
when a contemporary cloned member of the calreisan oligarchy (or a clone representative anyway) later approached a growing/dominant society of the ancient greeks, he must have pointed to their alphabet to identify himself. however the greek alphabet was different to the phoenician. the first symbol was still the same, but the last symbol had changed. now it was "omega". and the greeks did not make the connection. however, as latin is based on greek, the symbolism for 'god' in the catholic religion remains the same - "i am the alpha and the omega" - "the first and the last".
the six pointed star
the next thing i want to focus on is the "star of david". the kabbalah tried to pass it off as dating back to david, the second king of israel, however by tradition it is the menorah (or candelabrum) which adorned his shield. the cabal also tried to claim that it was symbolic of the metal used to create the shield - that it was hide over a six-pointed star frame because of the scarcity of the metal. but reason would dictate that a circle of metal with a band across the diameter to fix a handle is a far more logical design. besides which, it was wooden shields in those days. a leather hide would stop nothing. so the kabbalah/cabal have tried to disguise the origins of the six-pointed star.
logically, if a symbol of their "god" relationship was given to moses, it would have to be the two inverted triangles. in christ's time it was known as the "star of moses", although references linking a six-pointed star to moses are almost impossible to find - but i did find one. it was mentioned in the samaritan exegesis. samaritans are both genetically and religiously tied to the jews, but their biblical text is all but unknown.
this is from "the principles of samaritan bible exegesis", a book written by s lowey, and is a samaritan reference to moses - "if moses is described as 'the star of creation, whom god created from the six days', this does not mean that the samaritans speculated about a pre-existent moses or any logos-theories, but that from the 'primal light' which is the star (=light) of creation, originates the design which embraces the light of the prophets, and above all the star of moses, which is the zenith of the divine design." so, as it happens, in christ's time the six-pointed star was known as the star of moses, and it was given to moses by his 'god'.
the burning bush
exodus 3:2-5 "and the angel of the lord appeared unto him in a flame of fire out of the midst of a bush: and he looked, and, behold, the bush burned with fire, and the bush was not consumed. and moses said, i will now turn aside, and see this great sight, why the bush is not burnt.
and when the lord saw that he turned aside to see, god called unto him out of the midst of the bush, and said, moses, moses. and he said, here am i. and he said, draw not nigh hither: put off thy shoes from off thy feet, for the place whereon thou standest is holy ground."
the burning bush is one of the most remembered stories of the old testament, and is said to be a sign from 'god'. but this occurrence of 'holy' fire is not exclusively the province of religion. it has occurred elsewhere, far more recently: in the spring of 1872, in england. so what are we to make of this 'sign of god' now? because there definitely wasn't an exodus in england then, nor any prophetic movements, nor any great religious revolution.
but who was this man to experience the 'holy fire'? and could this give us hints as to why he was 'chosen'?
he is described as "he was of good sturdy english stock on both sides, his father was a graduate of trinity college, cambridge, and a clergyman. his mother, sister to an eminent qc, was the granddaughter of sir robert walpole, the famous author and statesman." his family having moved to canada a year after he was born, he grew up working on his family's farm, educated at home by his father, and left home aged 17 to live the life of a pioneering spirit. but in his 21st year, after a winter expedition left him with one foot amputated and the other partially removed, he used a small inheritance to put himself through mcgill medical school. having inherited a sharp and keen intellect from both parents, he not only graduated highest on the list but won first prize for his thesis.
doors opened to him worldwide to further his study - europe for his postgraduate work, london working for sir benjamin ward richardson, france, germany; until returning to canada in 1864, opening his own practice and later marrying and starting his own family. however recognition for his skills as a doctor continued. in 1876 he was appointed superintendent of the newly built provincial asylum for the insane at hamilton, ontario, and in 1877 of the london hospital, ontario. he became the foremost alienist (specialist in the treatment of mental disorders) on the american continent, and his credentials were recognised on both sides of the atlantic: professor of mental and nervous disorders at western university, ontario (1882), president of the psychological section of the british medical association (1888), and president of the american medico-psychological association.
this is the man whose reputation was so unquestionable that his account of what he experienced was published in "the proceedings and transactions of the royal society of canada", and it did not in the least affect the esteem in which he was held by colleagues, or his rise in his profession.
in case you aren't aware of what the "proceedings and transactions of the royal society of canada" is - the society still exists: "Welcome to RSC: The Academies of Arts, Humanities and Sciences of Canada, the Canadian institution devoted to recognizing excellence in learning and research, as well as recognizing accomplishments in the arts, humanities and sciences, since 1882."
and here is what was written about this person's experience -
'he and two friends had spent the evening reading wordsworth, shelley, keats, browning and especially whitman. they parted at midnight, and he had a long drive in a hansom. his mind, deeply under the influence of the ideas, images and emotions called up by the reading and talk of the evening, was calm and peaceful. he was in a state of quiet, almost passive, enjoyment.
all at once, without warning of any kind, he found himself wrapped around, as it were, by a flame-coloured cloud. for an instant he thought of fire -- some sudden conflagration in the great city [london, england]. the next (instant) he knew that the light was within himself.
directly after there came upon him a sense of exultation, of immense joyousness, accompanied or immediately followed by an intellectual illumination quite impossible to describe. into his brain streamed one momentary lightning-flash of brahmic splendor which has ever since lightened his life. upon his heart fell one drop of brahmic bliss, leaving thenceforth for always the aftertaste of heaven."
this happened to a man aged 35 years, an expert in the treatment of mental disorders, in 1872. not to a pre-christian prophet of the old testament. and this is not even the only account of experiencing such flames. BUT from such a man, with a world-wide reputation, it's not so easy to dismiss or ignore as superstition or some sort of mental aberration.
noah and the 'flood'
as mentioned, the book of enoch holds parallels with the book of jubilees, and places the history described in both as happening concurrently.
firstly, this is an excerpt from the book of enoch:
[Chapter 33] (holographic projections) "and from thence i went to the ends of the earth and saw there great beasts, and each differed from the other, and birds also differing in appearance and beauty and voice, the one differing from the other. and to the east of those beasts i saw the ends of the earth whereon the heaven rests, and the portals of the heaven open. and i saw how the stars of heaven come forth, and i counted the portals out of which they proceed, and wrote down all their outlets, of each and their times and their months, as uriel the holy angel who was with me showed me. he showed all things to me and wrote them down for me: also their names he wrote for me, and their laws and their companies.
[Chapter 34] (climate simulation) and from thence i went towards the north to the ends of the earth, and there i saw a great and glorious device at the ends of the whole earth. and here i saw three portals of heaven open in the heaven: through each proceed north winds: when they blow there is cold, hail, frost, snow, dew, and rain. and out of one portal they blow for good: but when they blow through the other two portals, it is with violence and affliction on the earth, and they blow with violence.
[Chapter 35] (holographic projections) and from thence i went towards the west to the ends of the earth, and saw there three portals of the heaven open such as i had seen in the east, and the same number of portals, and the same number of outlets.
[Chapter 36] (south, climate simulation. east, holographic projections) and from thence i went to the south to the ends of the earth, and saw there three open portals of the heaven: and thence there come dew, rain, and wind. and from hence i went to the east to the ends of the heaven, and saw here three eastern portals of heaven open and small portals above them. through each of these small portals pass the stars of heaven and run their course west on the path which was shown to them. and as often as i saw i blessed always the lord of glory, and i continued to bless the lord of glory who has wrought great and glorious wonders, to show the greatness of his work to the angels and to spirits and to men, that they might praise his work and his creation: that they might see the work of his might and praise the great work of his hands and bless him for ever."
now, over 7,000 years ago, nobody native to this world would have had an understanding of what a biodome was. 3D projectors and air conditioning were similarly not invented here on earth at that time. yet enoch wrote down a very detailed explanation of the environment he and his fellow subjects found themselves in. he carefully recorded what he observed, without understanding the technology. and while he may not have completely understood the nature of the dispute between the archangels and the oligarchy, he was keenly astute in his observations and his records. so with some conjecture and more modern understanding, what he intended to communicate to us is available in this day as it could never have been to his own people of the day. the humans of then could never have understood.
this is from the book of jubilees - "And the Lord opened seven flood-gates of heaven, And the mouths of the fountains of the great deep, seven mouths in number. And the flood-gates began to pour down water from the heaven forty days and forty nights, And the fountains of the deep also sent up waters, until the whole world was full of water." - as spoken by NOAH.
the e'hadi[ai]n (eden) experiments
this first symbol is used for genome experiments. the first time it was used was when two races (not of the same species) were 'interbred'. more likely, the fertilized egg was created and transplanted at some stage to the female. the progeny is symbolised by the triangle, with apex upwards to symbolise superiority to the gene stock, and typically with the 'feet' not touching the upturned triangles.
this second symbol is also a 'master' symbol. however the keyhole is inside the apex to symbolise 'hidden' or 'secret', and the seven 'rays' plus the two from the apex of the triangle equal nine, the number of ruling oligarchy members. in some cases the keyhole is literally a keyhole for a laser key.
to put them in their context, together they look like this - |
Gravity? Who needs it!
What are the Properties?
Helium is located in period 1, group 18 of the periodic table. It is a noble gas, and its symbol is He. Helium is a colorless, odorless, tasteless, non-toxic gas. The most common form of the element has 2 electrons, 2 protons, and 2 neutrons, and its full outer electron shell is what makes it a noble gas. It has the lowest boiling and melting points of any element in the periodic table, and at ordinary pressure it is the only element that does not become a solid when the temperature is lowered. Helium is the second lightest element, and it also has a high thermal conductivity.
What are the common uses?
The largest use for helium is in cryogenics; it is used to cool the superconducting magnets in MRI scanners. About 78% of helium is used in pressurizing and purging. Helium is also used in blimps because it is lighter than air.
Sales Pitch
Helium is great for many things. It should be used for survival because, let's say, you are running out of fuel: just take some helium and put it in the fuel tank with oxygen and it creates fuel. Also, say your oxygen tank is almost empty: just put in some helium and it should help, but do this at your own risk.
Skills Rank
Biological Need: 2.5 stars
Social Need: 4 stars
Functional Need: 3 stars
Defensive Need: 1 star
The biological need is at 2.5 stars because you really can't use helium to help your body. You can't eat or drink helium, and if you inhale too much you will get asphyxia and die of suffocation.
The social need is better than the biological need. It is rated at 4 stars because you can trade it for a good bargain.
The functional need is at 3 stars because it will help you if you get in trouble. When you start losing fuel you can just put some helium into the gas tank and it will create fuel.
The defensive need is the lowest of all of them, mainly because you really can't use helium as a weapon. The only thing you can actually do is make the aliens breathe it so they will die.
A Career as a Firefighter
Career Goal: Firefighter
Career Overview
Firefighters are people who put out fires, save lives, and rescue people who are in danger or need help.
The main goal of a fireman is to fight fires or rescue people who need help. Firefighters can be either volunteer or full time. Full-time firemen live at the firehouse and typically work about 40 hours per week. Volunteers are stand-by firemen who do the work because they are brave and save lives for nothing; they are usually in a small town, or in a bigger city and only want to work part time. There are firemen all over the U.S.; some are volunteers in small towns, and others work full time in big cities.
Firefighting is one of the most common jobs in America, because without firefighters how would we be able to control fires or save lives? We need firemen to do the job that they do because they are such an important part of the world. Some of them do it for free, with no pay at all. Maybe they get benefits for it, but they are not doing it for themselves; they are doing it because they want to be people who care about everyone.
Career skills and Interests
Firemen do a lot in the areas they serve. They hold events and activities in town, or help out in other places. Sometimes they get called out on calls that are not even in their area; it is not because they have to, but because other fire departments are friends and family. They work together, and that is how respect meets a firefighter's main goal. They help everyone. It doesn't matter what time of day it is, and it doesn't depend on the weather; whenever that bell goes off, they will be there.
Career Working Conditions
They have to be accurate with their work and get along with others. They need strong communication skills so that they can talk to victims and help out when they need to. A fire call can happen at any time; it doesn't matter when. The weather is not a factor: it could be really hot out or really cold, and firefighters will still respond to all types of calls, such as fire and rescue, search and recovery, water patrols, and car accidents. They work about 40 hours per week if they live at a firehouse and work full time, while volunteers stand by and help out in smaller towns and counties. Volunteers typically need a full-time job because they do not live at the firehouse; they stay active with the department through meetings, discussions, and monthly training to keep them ready for the next call.
Career wages and Outlooks
Firefighters are either full time or volunteer. Full-time firefighters get paid by the government or their chief. Volunteers get good benefits, and sometimes they get paid by the state. Full-time firefighters earn about $24,500 a year; it may not be much, but they are doing something they like.
Career Related Occupations
Related occupations include other firefighting roles. Parts of the job include rescue, car accidents, house fires, grass calls, and more. There are two types of firefighters: forest-fire firefighters and regular city firefighters. Forest firemen get paid better, and their hours are longer.
Program Overview
There are fire courses where you can take the testing you need in order to be on a fire department, and there is also training you need to do in order to do the job safely. There are schools that will help students who want to be in this type of occupation. The military also offers Air Force firefighter roles that involve putting out plane fires and performing rescue procedures.
Program Admission
In a fire department you need a good high school diploma or GED, and you have to take certain types of classes, for example science, speech, and math.
These are important because a fireman needs to know science to understand backdrafts, speech to be able to understand the victims you are helping, and math for water pressure and truck equipment.
College Support
There are many colleges for this job. The biggest college I would talk about is Albany Technical College in Georgia; they offer many courses and, of course, fire and rescue career programs. There are many things you can do in order to get into a fire school, but you need to know what you want to do. Fighting fires is not for everyone; there are factors like small spaces, heights, weather, weight, life-and-death situations, and more.