Try a glass of milk before you run
Drinking low-fat or skim milk before you run will provide sustained energy, because milk is a low-glycemic food; i.e., the carbohydrates are released slowly into the bloodstream.

Speedwork and pace
Speedwork teaches you the sense of pace that you need to race well. Better pacing will also allow you to do your training runs more evenly, which is easier than uneven-paced running.

Lutein and cancers
Consuming foods rich in lutein and zeaxanthin may reduce your risk of colon cancer. Lutein and zeaxanthin are carotenoids, a type of antioxidant that protects cells from the damaging effects of compounds created during metabolism. You can get lutein from spinach, broccoli and other greens, tomatoes, carrots, oranges and eggs.
Dusky Dolphins
Dusky dolphins, originally named Fitzroy's dolphins by Charles Darwin, are easily distinguished from other dolphins. The head is small and evenly sloped, and there's no beak at the end of the snout. The tail and back are bluish black in color, with a dark band that runs diagonally from the flank to the tail. The belly is white, and there's a two-pronged blaze in white or cream from the dorsal fin to the tail. Dusky dolphins are extraordinarily social, sometimes traveling in pods with as many as 1,000 members. They're also highly acrobatic—watch to see them leaping out of the water to turn somersaults in the air. Their squeals, whistles and clicks can sometimes be heard as far as three kilometers (two miles) away.
You've got family at Ancestry. Find more Huffsteller relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 50 more people named Huffsteller in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 40 people named Huffsteller in the 1930 U.S. Census. In 1940, there were 125% more people named Huffsteller in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 90 people named Huffsteller were living in the United States. In a snapshot: • 4 were disabled • 42% were children • 14 adults were unmarried Learn where they came from and where they went. As Huffsteller families continued to grow, they left more tracks on the map: • 1 was a first-generation American • Most fathers originated from South Carolina • They most commonly lived in South Carolina • Most mothers originated from South Carolina
You've got family at Ancestry. Find more Kenery relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 34 more people named Kenery in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 24 people named Kenery in the 1930 U.S. Census. In 1940, there were 142% more people named Kenery in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 58 people named Kenery were living in the United States. In a snapshot: • 3 were disabled • Most common occupation was farmer • 18 were children Learn where they came from and where they went. As Kenery families continued to grow, they left more tracks on the map: • Most immigrants originated from Ireland • They most commonly lived in New York • The most common mother tongue was Polish • 9 were first-generation Americans
You've got family at Ancestry. Find more Kokol relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 2 more people named Kokol in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 18 people named Kokol in the 1930 U.S. Census. In 1940, there were 11% more people named Kokol in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 20 people named Kokol were living in the United States. In a snapshot: • 1 was disabled • On average men worked 45 hours a week • 38% of adults were unmarried • 7 were children Learn where they came from and where they went. As Kokol families continued to grow, they left more tracks on the map: • Most immigrants originated from Poland • 8 were first-generation Americans • The most common mother tongue was Polish • They most commonly lived in New York
You've got family at Ancestry. Find more Mardula relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 2 more people named Mardula in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 26 people named Mardula in the 1930 U.S. Census. In 1940, there were 8% more people named Mardula in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 28 people named Mardula were living in the United States. In a snapshot: • The average annual income was $1,079 • 33% of women had paying jobs • 17% owned their homes, valued on average at $900 Learn where they came from and where they went. As Mardula families continued to grow, they left more tracks on the map: • 7 were born in foreign countries • 19 were first-generation Americans • They most commonly lived in Illinois
You've got family at Ancestry. Find more Posery relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 2 more people named Posery in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 7 people named Posery in the 1930 U.S. Census. In 1940, there were 29% more people named Posery in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 9 people named Posery were living in the United States. In a snapshot: • 4 were children • 20% of adults were unmarried • 9 rented out rooms to boarders Learn where they came from and where they went. As Posery families continued to grow, they left more tracks on the map: • They most commonly lived in South Carolina • 1 was a first-generation American
You've got family at Ancestry. Find more Rognlie relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 31 more people named Rognlie in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 65 people named Rognlie in the 1930 U.S. Census. In 1940, there were 48% more people named Rognlie in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 96 people named Rognlie were living in the United States. In a snapshot: • The average annual income was $787 • 19% of women had paying jobs • 3 were disabled • 29% of adults were unmarried Learn where they came from and where they went. As Rognlie families continued to grow, they left more tracks on the map: • 12% were born in foreign countries • 27% migrated within the United States from 1935 to 1940 • The most common mother tongue was Norwegian • Most fathers originated from North Dakota
You've got family at Ancestry. Find more Santusci relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 3 fewer people named Santusci in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 12 people named Santusci in the 1930 U.S. Census. In 1940, there were 25% fewer people named Santusci in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 9 people named Santusci were living in the United States. In a snapshot: • 1 woman had a paying job • 43% of adults were unmarried • 22% were children • The typical household was 3 people Learn where they came from and where they went. As Santusci families moved around the country, they left more tracks on the map: • 42% were first-generation Americans • The most common mother tongue was Italian • They most commonly lived in New Jersey • Most immigrants originated from Italy
You've got family at Ancestry. Find more Schaden relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 8 more people named Schaden in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 85 people named Schaden in the 1930 U.S. Census. In 1940, there were 9% more people named Schaden in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 93 people named Schaden were living in the United States. In a snapshot: • 22 were children • 20% of adults were unmarried • The typical household was 3 people Learn where they came from and where they went. As Schaden families continued to grow, they left more tracks on the map: • 38% were first-generation Americans • They most commonly lived in Michigan • Most immigrants originated from Austria • 22% were born in foreign countries
You've got family at Ancestry. Find more Traenkle relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 10 more people named Traenkle in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 54 people named Traenkle in the 1930 U.S. Census. In 1940, there were 19% more people named Traenkle in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 64 people named Traenkle were living in the United States. In a snapshot: • 63 rented out rooms to boarders • 2% were disabled • 13 owned their homes, valued on average at $5,423 • 43%, or 10 people, lived in homes they rented Learn where they came from and where they went. As Traenkle families continued to grow, they left more tracks on the map: • 29 were first-generation Americans • 5 migrated within the United States from 1935 to 1940 • 5 were born in foreign countries
Don’t be a Duncan Pfool: Remember to use furniture’s correct vocabulary Every area of special interest has its own vocabulary and words of common usage. The area of antiques certainly falls in this category, with some of its more obscure terms like recamier and bergere. But there are also a number of terms that are quite common in the industry, and among these common terms are a significant number that are commonly misused, misspelled or misunderstood. One I see frequently in inquiries from readers concerns that cabinetmaker with the musical name, Duncan Phyfe. In fact, his family name was Fife, but when he came to America from Scotland in the late 18th century he changed it to “Phyfe” to add a little sizzle to an otherwise mundane moniker. He was a talented cabinetmaker who worked in all the styles of his working life: Federal, English Regency and Empire. One style in which he did not work was the style “Duncan Phyfe.” There is no single style attributed to Duncan Phyfe that can truly be called “Duncan Phyfe.” In modern common usage it seems that every table with curved legs extending from a pedestal is called a “Duncan Phyfe” table. He did make some tables with legs like that, but so did every other cabinetmaker of the period. That style of leg came from mid 18th century English pedestal dining tables. And he also made tables with legs of other styles. An even worse transgression is the misuse of the name itself while describing a misnamed style. More than a few inquiries ask about their “Dunkin & Fife” furniture or similar variations. Another modern misuse of a cabinetmaker’s name involves the mid 18th century English designer Thomas Chippendale. That really was his family name. His style was an updated take-off on a basic Queen Anne base with some masculine embellishments, often topped off with French or Chinese accents. The style was not “Chip and Dale”; they are Walt Disney cartoon characters. And the style is not the style of the “Chippendales”; they are male exotic dancers. The use of the term “Victorian” as a style is also a misuse of the word. “Victorian” refers to a period of time, 1837 to 1901, when Victoria, the only daughter of Edward, Duke of Kent, the fourth son of George III, sat on the throne of England. No style in modern history has been maintained continuously for 64 years, and there was no single “Victorian” style. Within that span of years a number of prominent and very distinct styles rose and fell in favor, among them Late Classicism, Gothic, Elizabethan Revival, Rococo Revival, Renaissance Revival, the Aesthetic Movement, Colonial Revival, Arts and Crafts and even Golden Oak. All could be considered styles of the Victorian era, but none can be said to be the Victorian style. Then there are ambivalent uses of the names of pieces of furniture themselves. In common use the term secretary is often used to describe a desk with a tall bookcase on top, usually with glass panel doors. But the term secretary can actually be used as a simple synonym for a desk, with or without the top section. A more accurate way of describing the tall model is to call it a “bookcase/secretary”: literally, a bookcase on top of a slant-front secretary or desk. That’s how the form got started. Most early bookcase/secretaries consisted of two separate parts. Only in the 20th century did they become one-piece tall cabinets. The term “blanket chest” is also open to interpretation. Originally the term referred to a lift-top chest with drawers below.
This form is sometimes also called a “chest on drawers,” and smaller versions are sometimes called a “mule chest.” When properly decorated they were sometimes called a “dower” chest, a place for a young bride to store her dowry. Elaborate versions were the size of a full-size chest of drawers and had faux drawer fronts above the real drawers. Long low chests without drawers are simply called chests or storage chests. This type of chest in the late 17th and early 18th century was often a six-board chest, one single large board for each of six panels. The 20th century helped confuse the issue with the introduction of the cedar chest, essentially a storage chest made of solid cedar or lined in cedar to minimize the intrusion of moths. They came in all sizes and forms, including simple storage chests, chests on drawers and even chests on stands that looked like complete cabinets with drawers but were really a single chest compartment. They were even called “hope chests,” the modern version of the dower chest. But very few of them were truly “blanket” chests in the traditional meaning of the term. The final term in the industry that is open to the most interpretation, even to the point of initiating vigorous arguments and heated exchanges, has to do with the use of the term “antique” itself. What may or may not be an antique is certainly open to debate in many quarters and less so in others. Fortunately there is not room left here for a full discussion of that subject. It will just have to wait. Fred Taylor is an author and syndicated columnist. Send your comments, questions and pictures to P.O. Box 215, Crystal River, FL 34423.
The American Indians before 1491 A great wave of interest has been created by a new theory about the American Indians. I first learned of this theory because my Japanese wife was assigned to read a paper on it and answer questions about it as part of a reading comprehension test. The test was assigned, of course, by an American Indian who happens to be a college professor. That, of course, is a problem. When I first read it, I thought that this theory was ridiculous and unworthy of consideration. I felt that it was full of holes. However, now I find that I cannot refute it or prove that it is wrong, so I will present it in summary form: It has long been believed that when Columbus discovered America in 1492, there were about one million Indians in the Americas. This is the traditional view. However, the new theory is that there were 100 million Indians and that the population of Indians in the Americas was about the same as the population of Europe. According to this theory, within 100 years after 1492, 99 million Indians died of the White Man's Diseases, leaving only the one million that explorers encountered. The obvious questions are: why did so many die, what happened to their bodies, and, when they were alive, what did they eat? Was there enough food to sustain a population of 100 million? Here, in summary, are the answers provided: The Indians died of many diseases brought by the White Men, including smallpox. However, more importantly, De Soto, one of the first explorers, brought 300 pigs. Some of these pigs had diseases. Some of the pigs escaped, reproduced rapidly and gave their diseases to other animals. This resulted in an epidemic and the deaths of millions of animals, which the Indians depended upon for food. As to what the Indians ate, the answer comes easily. Many of the foods that we eat today are foods that the Indians gave us. Corn, tomatoes and potatoes are all foods which the Indians developed. We cannot even imagine eating a meal today without eating food provided by the Indians. Did these foods grow by accident? No. The Indians cultivated and developed them. Where did they grow these foods? Why, the same place where we grow them. The Great Plains of the Midwest, the breadbasket of the world, including such states as Iowa and Illinois, were all developed by the Indians. Perhaps the most intriguing aspect of this theory concerns the Amazon Rain Forest. It is a noteworthy fact that almost every tree or plant in the Amazon Rain Forest bears a fruit, nut or berry which people can eat. The Amazon Rain Forest is almost completely flat, dropping only 500 feet in 2000 miles. Is this an accident of nature? No, according to this theory. The Amazon Rain Forest is a garden, planted by the Indians. All of these fruit-bearing trees were cultivated and developed by the Indians. OK, so you still do not believe it. In that case, where is it wrong? If you want to learn more about this theory, try searching under 1491. Sam Sloan
Monday, January 12, 2015 Review: The Imitation Game Careful, there may be spoilers below. Shira and I got wild and crazy on Saturday night: we saw a movie in an actual movie theater! Man, I love those 25 minutes of previews! Anyway, the movie we saw was The Imitation Game, which is the story of Alan Turing. Being a Computer Science Major, I'm already somewhat familiar with Mr Turing's work. He's sort of a Charles Darwin of Computer Science. To appreciate this you have to appreciate that Computer Science isn't really about Computers or Science. It's about the study of computation, and problem solving in general. Sure, we have to use these pesky physical machines, but that's an implementation detail. Alan Turing made great headway into understanding what's computable, and naturally, to do this, he created his own computer and method of programming it. On the surface that may not be particularly impressive, but consider that he did this in 1936 or so. A decade before anyone even saw a computer, and years before computer programming would be a true pursuit, Alan Turing had already devised the most powerful computation machine on the planet, and proved that all general purpose computers were equivalent to his contraption. So before I walked into the theater I had mad respect for the man. For the next 2 hours I was fully entertained by the movie. As the previews suggest, the movie focuses on Turing's involvement with the effort to break Germany's WWII encryption technology. I had known of Turing's involvement in this effort, but the movie truly brought it to life. I suppose it's a special kind of challenge to tell a story where the outcome is known (I'm looking at you, Titanic), and the creators of The Imitation Game pulled it off well. So yes, the movie was entertaining and gave me a fresh perspective on Turing's life. The first order of business after leaving the theater was to Google Imitation Game: differences from real life. It was obvious that they injected a dose of Hollywood into the movie, but how much of it was fake? Slate, among others, answers that question, and no surprise, the results aren't pretty. Many of the moments I enjoyed in the film just plain never happened. What do I think about this? I'm not sure. It's easy to kvetch and say that they should have been more true to the story. But, at the end of the day, if the goal was to show just how amazing Turing was, perhaps they succeeded? In the end, I think the spirit of the film is in the right place. So go, watch it and enjoy. Just do your homework after the fact so you can separate fact from fiction. 1. I really liked the movie too. Yes the inaccuracies bothered me, but the way I look at it, many non-techies might have heard about this amazing man for the first time - and that has to be a good thing, right? 2. Thinking more about it, I realized that they essentially exaggerated every aspect of the story: he was quirky, so they gave him Aspergers; he was one of the signatories on a letter to Churchill, so they made him the author and *only* signatory; that sort of thing. I suppose, though, if you want to tell his entire life story in 2 hours, that sort of exaggeration is to be expected. In the end, I agree with you: as a way to introduce him to the masses, it works. Oh yeah, it also works as a fun movie to watch. There's that too ;-)
Air Pollution Quiz
1. Before the industrial revolution, there was no air pollution.
2. Air pollution is a problem only in big cities.
3. Dirty air costs each American about $100 per year.
4. All smokestack emissions pollute the air.
5. When the air is polluted, you can always see and smell it.
6. Clean air is the responsibility of industry alone.
7. Burning leaves or trash at home contributes to air pollution.
8. The only way air pollution affects the human body is by causing lung disorders.
9. Cars and buses contribute little to the air pollution problem.
10. We have a limitless amount of air to breathe.
11. Air pollution now is under control and will not be a problem in the future.
Breast cancer
Choosing between breast-conserving surgery and mastectomy
In most cases, a woman will be given a choice between breast-conserving surgery (BCS) and mastectomy. Studies done over many years have shown that women with stage I or stage II breast cancer who have BCS combined with radiation therapy have the same survival rates as women who have a mastectomy. Having a mastectomy does not provide a better outcome or improve long-term survival in most cases. Given this body of knowledge, doctors will give women a choice between the surgeries if there is no medical reason to recommend one surgery over the other. Most women with early stage breast cancer have breast-conserving surgery. For some women, the choice is easy. Other women find making this choice difficult. Some women may want the doctor or a partner to make the decision for them. For many women, the main concern is having the cancer completely removed, so having a mastectomy may give them that peace of mind and assurance. For other women, their breasts are an important part of their identity and self-image as a woman, so breast-conserving surgery may be the best choice for them. The choice between BCS and mastectomy is a very personal one. Individual preferences, priorities and lifestyle all play a part in making the decision. BCS and mastectomy both have advantages and disadvantages. It may help to talk to different women who have had each type of surgery.

Advantages and disadvantages of each type of breast surgery

Breast-conserving surgery
Advantages:
• BCS is equally effective as a mastectomy (when followed by radiation therapy) in terms of overall survival.
• There is less change to the appearance of the breast, though there still may be a scar or changes to the shape of the breast.
• BCS is less likely to affect a woman’s feelings about her body image and sexuality.
Disadvantages:
• Some women may be concerned that not all the cancer was removed.
• BCS is followed by 4–6 weeks of daily radiation treatments, which lengthens the time a woman receives treatment.
• Some women may have difficulty finding transportation to the radiation treatment centre or may have to travel for treatment.
• There are potential short- and long-term side effects of radiation therapy.
• There is a slightly higher risk of developing a recurrence of the cancer in the remaining breast tissue.

Mastectomy
Advantages:
• Mastectomy is equally effective in terms of overall survival as BCS followed by radiation therapy.
• Some women may feel assured that there is a better chance that the cancer has been cured when the breast is removed.
• In most cases, a woman who has a mastectomy does not require radiation therapy, so radiation treatment side effects can be avoided.
Disadvantages:
• Mastectomy is a longer surgery with a longer recovery time and more potential side effects than BCS.
• Surgery is longer if the woman has immediate reconstruction, or she will need more surgery if she chooses to have breast reconstruction later.
• In some situations, a mastectomy may need to be followed by radiation therapy, so the potential side effects will not be avoided.
• The loss of a breast may affect a woman’s feelings about her body image and sexuality.
Sleep Apnea Apnea is a Greek word that means "without breath." Sleep apnea is a condition in which breathing briefly stops repeatedly through the night as a person sleeps. These pauses last at least 10 seconds. They may happen hundreds of times during the night. A person with sleep apnea is only rarely aware of having difficulty breathing. It is usually recognized as a problem by others who witness the pauses in breathing. Sleep apnea disturbs a person's sleep. A person with sleep apnea may move from deep sleep to light sleep several times during the night. Levels of oxygen in the blood fall from the pauses in breathing. People with sleep apnea often snore loudly during sleep. (Not all people who snore, however, have sleep apnea.) There are three types of sleep apnea: • Central sleep apnea happens because the muscles that cause the lungs to fill with air don't move. The brain isn't sending the proper signals to the muscles that control breathing. • Obstructive sleep apnea happens because something physical blocks the airflow even as a person's body works to breathe. This is the most common form. An estimated one out of every five Americans has this type of sleep apnea. • Mixed sleep apnea is when a person moves between central sleep apnea and obstructive sleep apnea during an apnea event. A person with sleep apnea may be unaware that a problem exists. Usually a family member or sleeping partner is the first to recognize the problem. Common signs of sleep apnea include: • Loud snoring. Not all people who snore, however, have sleep apnea. People with central sleep apnea may not snore. • Choking or gasping during sleep • Sleepiness or being tired during the day. An adult or teen suffering from long-standing severe sleep apnea may fall asleep for short periods of time during the course of daily activities if given a chance to rest. A person with sleep apnea may also have: • A dry throat on waking • A hard time concentrating • Problems with memory or learning • A loss of interest in sex • A need to go to the bathroom often during the night • Acid reflux • An increased heart rate • Irritability • Mood swings or personality changes, including feeling depressed • Morning headaches • Night sweats Children who have sleep apnea may be extremely sleepy during the day. In some cases, toddlers or young children will behave as if they are hyper or overtired. Children with sleep apnea may be thin and show signs of a failure to thrive (slowed growth). This happens because the child's body burns calories at a high rate to get enough air into the lungs. If there is a blockage in the throat due to swollen tonsils or adenoids, the child may not be able to smell properly. Food doesn't taste as good to them and may even be difficult to swallow. Causes and Risk Factors Sleep apnea affects more than 12 million Americans, according to the National Institutes of Health. It affects men, women, older people and children alike. Some factors, however, make getting sleep apnea more likely. These include: • Being male. Men are twice as likely to get sleep apnea as women. • Being older. Obstructive sleep apnea is two to three times more likely in adults aged 65 or older. • Being overweight. The extra soft tissue in the throat makes it harder to keep the throat open during sleep. • Having a family member who has sleep apnea. • Having a thick neck. A person whose neck is more than 17 inches around has a greater risk of developing sleep apnea. • Smoking.
It may cause inflammation and fluid retention of the throat and upper airways. Several things can cause sleep apnea, including: • Overly relaxed muscle tone in the throat that causes the walls of the airway to collapse • Structural problems of the head, throat and nasal passages • Heart disease, which is the most common cause of central sleep apnea. People with atrial fibrillation or heart failure have a greater risk of central sleep apnea. • Neuromuscular disorders. These include amyotrophic lateral sclerosis (Lou Gehrig's disease), spinal cord injuries or muscular dystrophy. Each of these conditions can affect how the brain controls breathing. • Stroke or brain tumor. These conditions can disturb the brain's ability to regulate breathing. • Excessive drinking of alcohol or use of sedatives or tranquilizers. These may relax the muscles of the throat too much, interfering with normal breathing and sleep. • Having Down Syndrome. A little more than half the people who have Down Syndrome also have sleep apnea. A person with Down Syndrome may have a more relaxed muscle tone than other people and a relatively narrow nose and throat and large tongue. • Colds, infections or allergies that cause nasal congestion or swelling of the throat or tonsils. Some viruses such as Epstein-Barr can cause the lymph glands to swell. Sleep apnea due to these types of blockages usually only lasts a short period of time. • Enlarged tonsils and adenoids. Children with obstructive sleep apnea usually have this problem. It can be corrected with a tonsillectomy and adenoidectomy. • High altitude, if you aren't accustomed to it. This usually goes away as the body adapts to the higher altitude or if you move to a lower altitude. Many people live for years or decades with sleep apnea, unaware that they have it. Often a family member or sleeping partner brings it to their attention. To diagnose sleep apnea, a doctor will take a medical history and do a physical exam. He or she will check the mouth, nose and throat for extra or large tissues. The doctor may order several tests to be done while you sleep, including: • Measuring the oxygen in your blood (oximetry). This is done by putting a small sleeve over a finger while you sleep. • A polysomnogram. It records brain, eye and muscle activity as well as breathing and heart rates. It measures the amount of oxygen in your blood and how much air moves in and out of your lungs while you sleep. This painless test can be done in a sleep laboratory or center or at home using a home monitor. When the test is done at home, a technician comes to your house and helps you apply a monitor that you will wear overnight. The technician will return in the morning to get the monitor and send the results to your doctor. You may be referred to a specialist in lung problems (pulmonologist), the brain or nerves (neurologist), heart and blood pressure problems (cardiologist) or ear, nose and throat problems (otolaryngologist) for additional evaluation. Treatment for sleep apnea is designed to restore regular nighttime breathing, relieve loud snoring and address daytime sleepiness. Treatment also targets complications of sleep apnea such as high blood pressure and higher risks for heart attack and stroke. The interruption of normal sleep can lead to accidents at home, on the job or while driving. It can disturb healing and immune responses. In children, it can interfere with normal growth.
Treatment for sleep apnea varies depending on the cause of the problem, your medical history and how severe the condition is. Treatment generally falls into these categories: • Lifestyle changes • Devices to change the position of the jaw, tongue and soft tissues of the mouth and throat • Pressurized air machines • Surgery People with sleep apnea need to take special care when having surgery or undergoing dental procedures. Anesthesia and drugs used to relieve pain and depress consciousness stay in the body for hours or even days after surgery. Even these small amounts can make sleep apnea worse. Dental, mouth or throat surgery can cause swelling in the lining of the mouth and throat, also making sleep apnea worse. Be sure your doctors, dentist and surgeons are aware that you have sleep apnea. You will need to be closely monitored after surgery.
Organization of Computer Systems: § 4: Processors Instructor: M.S. Schmalz Reading Assignments and Exercises This section is organized as follows: Information contained herein was compiled from a variety of text- and Web-based sources, is intended as a teaching aid only (to be used in conjunction with the required text), and is not to be used for any commercial purpose. Particular thanks are given to Dr. Enrique Mafla for his permission to use selected illustrations from his course notes in these Web pages. 4.1. The Central Processor - Control and Dataflow Reading Assignments and Exercises Recall that, in Section 3, we designed an ALU based on (a) building blocks such as multiplexers for selecting an operation to produce ALU output, (b) carry lookahead adders to reduce the complexity and (in practice) the critical pathlength of arithmetic operations, and (c) components such as coprocessors to perform costly operations such as floating point arithmetic. We also showed that computer arithmetic suffers from errors due to finite precision, lack of associativity, and limitations of protocols such as the IEEE 754 floating point standard. 4.1.1. Review In previous sections, we discussed computer organization at the microarchitectural level, processor organization (in terms of datapath, control, and register file), as well as logic circuits including clocking methodologies and sequential circuits such as latches. In Figure 4.1, the typical organization of a modern von Neumann processor is illustrated. Note that the CPU, memory subsystem, and I/O subsystem are connected by address, data, and control buses. The fact that these are parallel buses is denoted by the slash through each line that signifies a bus. Figure 4.1. Schematic diagram of a modern von Neumann processor, where the CPU is denoted by a shaded box - adapted from [Maf01]. It is worthwhile to further discuss the following components in Figure 4.1: The processor represented by the shaded block in Figure 4.1 is organized as shown in Figure 4.2. Observe that the ALU performs I/O on data stored in the register file, while the Control Unit sends (receives) control signals (resp. data) in conjunction with the register file. Figure 4.2. Schematic diagram of the processor in Figure 4.1, adapted from [Maf01]. In MIPS, the ISA determines many aspects of the processor implementation. For example, implementational strategies and goals affect clock rate and CPI. These implementational constraints cause parameters of the components in Figure 4.3 to be modified throughout the design process. Figure 4.3. Schematic diagram of MIPS architecture from an implementational perspective, adapted from [Maf01]. Such implementational concerns are reflected in the use of logic elements and clocking strategies. For example, with combinational elements such as adders, multiplexers, or shifters, outputs depend only on current inputs. However, sequential elements such as memory and registers contain state information, and their output thus depends on their inputs (data values and clock) as well as on the stored state. The clock determines the order of events within a gate, and defines when signals can be converted to data to be read or written to processor components (e.g., registers or memory). For purposes of review, the following diagram of clocking is presented: Here, a signal that is held at logic high value is said to be asserted.
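To make the clocking terminology concrete, the following Python sketch (illustrative only; it is not part of the textbook or the original figures, and the names are invented for this example) models an edge-triggered storage element whose output changes only on the falling clock edge and is initially deasserted:

    class DFlipFlop:
        # Behavioral model of a falling-edge-triggered D flip-flop.
        def __init__(self):
            self.q = 0           # output Q, initially deasserted (logic low)
            self.prev_clk = 0    # last clock level observed

        def tick(self, clk, d):
            # The stored value is captured only on a 1 -> 0 (falling) edge;
            # at all other times the output simply holds its state.
            if self.prev_clk == 1 and clk == 0:
                self.q = d
            self.prev_clk = clk
            return self.q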
In Section 1, we discussed how edge-triggered clocking can support a precise state transition on the active clock pulse edge (either the rising or falling edge, depending on what the designer selects). We also reviewed the SR Latch based on nor logic, and showed how this could be converted to a clocked SR latch. From this, a clocked D Latch and the D flip-flop were derived. In particular, the D flip-flop has a falling-edge trigger, and its output is initially deasserted (i.e., the logic low value is present). 4.1.2. Register File The register file (RF) is a hardware device that has two read ports and one write port (corresponding to the two inputs and one output of the ALU). The RF and the ALU together comprise the two elements required to compute MIPS R-format ALU instructions. The RF is comprised of a set of registers that can be read or written by supplying a register number to be accessed, as well as (in the case of write operations) a write authorization bit. A block diagram of the RF is shown in Figure 4.4a. Figure 4.4. Register file (a) block diagram, (b) implementation of two read ports, and (c) implementation of write port - adapted from [Maf01]. Since reading of a register-stored value does not change the state of the register, no "safety mechanism" is needed to prevent inadvertent overwriting of stored data, and we need only supply the register number to obtain the data stored in that register. (This data is available at the Read Data output in Figure 4.4a.) However, when writing to a register, we need (1) a register number, (2) an authorization bit, for safety (because the previous contents of the register selected for writing are overwritten by the write operation), and (3) a clock pulse that controls writing of data into the register. In this discussion and throughout this section, we will assume that the register file is structured as shown in Figure 4.4a. We further assume that each register is constructed from a linear array of D flip-flops, where each flip-flop has a clock (C) and data (D) input. The read ports can be implemented using two multiplexers, each having log2N control lines, where N is the number of registers in the RF. In Figure 4.4b, note that data from all N = 32 registers flows out to the output muxes, and the data stream from the register to be read is selected using the mux's five control lines. Similar to the ALU design presented in Section 3, parallelism is exploited for speed and simplicity. In Figure 4.4c is shown an implementation of the RF write port. Here, the write enable signal is a clock pulse that activates the edge-triggered D flip-flops which comprise each register (shown as a rectangle with clock (C) and data (D) inputs). The register number is input to a log2N-to-N decoder (here, a 5-to-32 decoder), and acts as the control signal to switch the data stream input into the Register Data input. The actual data switching is done by and-ing the data stream with the decoder output: only the and gate that has a unitary (one-valued) decoder output will pass the data into the selected register (because 1 and x = x). We next discuss how to construct a datapath from a register file and an ALU, among other components.
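Before turning to datapath composition, the register file just described can be summarized behaviorally. The following Python sketch (illustrative only; it abstracts away the muxes and decoder of Figure 4.4) captures the two read ports and the RegWrite-guarded write port:

    class RegisterFile:
        # Behavioral model of the RF: 32 registers, two read ports, one write port.
        def __init__(self):
            self.regs = [0] * 32

        def read(self, reg1, reg2):
            # Two read ports: reading is non-destructive, so no enable is needed.
            return self.regs[reg1], self.regs[reg2]

        def write(self, reg, data, reg_write):
            # The write port is guarded by the RegWrite authorization bit;
            # register 0 ($zero) is hardwired to zero in MIPS and is never written.
            if reg_write and reg != 0:
                self.regs[reg] = data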
4.2. Datapath Design and Implementation Reading Assignments and Exercises The datapath is the "brawn" of a processor, since it implements the fetch-decode-execute cycle. The general discipline for datapath design is to (1) determine the instruction classes and formats in the ISA, (2) design datapath components and interconnections for each instruction class or format, and (3) compose the datapath segments designed in Step 2) to yield a composite datapath. Simple datapath components include memory (stores the current instruction), PC or program counter (stores the address of current instruction), and ALU (executes current instruction). The interconnection of these simple components to form a basic datapath is illustrated in Figure 4.5. Note that the register file is written to by the output of the ALU. As in Section 4.1, the register file shown in Figure 4.6 is clocked by the RegWrite signal. Figure 4.5. Schematic high-level diagram of MIPS datapath from an implementational perspective, adapted from [Maf01]. Implementation of the datapath for I- and J-format instructions requires two more components - a data memory and a sign extender, illustrated in Figure 4.6. The data memory stores ALU results and operands, including instructions, and has two enabling inputs (MemWrite and MemRead) that cannot both be active (have a logical high value) at the same time. The data memory accepts an address and either accepts data (WriteData port if MemWrite is enabled) or outputs data (ReadData port if MemRead is enabled), at the indicated address. The sign extender adds 16 leading bits to a 16-bit word with most significant bit b, to produce a 32-bit word. In particular, the additional 16 bits have the same value as b, thus implementing sign extension in two's complement representation. Figure 4.6. Schematic diagram of Data Memory and Sign Extender, adapted from [Maf01]. 4.2.1. R-format Datapath Implementation of the datapath for R-format instructions is fairly straightforward - the register file and the ALU are all that is required. The ALU accepts its input from the DataRead ports of the register file, and the register file is written to by the ALUresult output of the ALU, in combination with the RegWrite signal. Figure 4.7. Schematic diagram of the R-format instruction datapath, adapted from [Maf01]. 4.2.2. Load/Store Datapath The load/store datapath uses instructions such as lw $t1, offset($t2), where offset denotes a memory address offset applied to the base address in register $t2. The lw instruction reads from memory and writes into register $t1. The sw instruction reads from register $t1 and writes into memory. In order to compute the memory address, the MIPS ISA specification says that we have to sign-extend the 16-bit offset to a 32-bit signed value. This is done using the sign extender shown in Figure 4.6. The load/store datapath is illustrated in Figure 4.8, and performs the following actions in the order given: 1. Register Access takes input from the register file, to implement the instruction, data, or address fetch step of the fetch-decode-execute cycle. 2. Memory Address Calculation decodes the base address and offset, combining them to produce the actual memory address. This step uses the sign extender and ALU. 3. Read/Write from Memory takes data or instructions from the data memory, and implements the first part of the execute step of the fetch/decode/execute cycle. 4. Write into Register File puts the data retrieved from memory into the register file, implementing the second part of the execute step of the fetch/decode/execute cycle. Figure 4.8. Schematic diagram of the Load/Store instruction datapath. Note that the execute step also includes writing of data back to the register file, which is not shown in the figure, for simplicity [MK98]. The load/store datapath takes operand #1 (the base address) from the register file, and sign-extends the offset, which is obtained from the instruction input to the register file. The sign-extended offset and the base address are combined by the ALU to yield the memory address, which is input to the Address port of the data memory. The MemRead signal is then activated, and the output data obtained from the ReadData port of the data memory is then written back to the Register File using its WriteData port, with RegWrite asserted.
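The sign extension step is simple enough to state exactly. A minimal Python sketch (illustrative only, not part of the course materials):

    def sign_extend_16_to_32(halfword):
        # Replicate the most significant bit b of the 16-bit word into the
        # 16 new leading bits, preserving the value in two's complement.
        if halfword & 0x8000:              # b = 1: negative value
            return halfword | 0xFFFF0000
        return halfword                    # b = 0: leading bits remain 0

    # Example: 0xFFFC (-4 as a 16-bit word) extends to 0xFFFFFFFC (-4 as 32 bits).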
4.2.3. Branch/Jump Datapath The branch datapath (jump is an unconditional branch) uses instructions such as beq $t1, $t2, offset, where offset is a 16-bit offset for computing the branch target address via PC-relative addressing. The beq instruction reads from registers $t1 and $t2, then compares the data obtained from these registers to see if they are equal. If equal, the branch is taken. Otherwise, the branch is not taken. By taking the branch, the ISA specification means that the ALU adds a sign-extended offset to the program counter (PC). The offset is shifted left 2 bits to allow for word alignment (since 2^2 = 4, and words are comprised of 4 bytes). Thus, to jump to the target address, the lower 28 bits of the PC are replaced with the lower 26 bits of the instruction shifted left 2 bits. The branch instruction datapath is illustrated in Figure 4.9, and performs the following actions in the order given: 1. Register Access takes input from the register file, to implement the instruction fetch or data fetch step of the fetch-decode-execute cycle. 2. Calculate Branch Target - Concurrent with ALU #1's evaluation of the branch condition, ALU #2 calculates the branch target address, to be ready for the branch if it is taken. This completes the decode step of the fetch-decode-execute cycle. 3. Evaluate Branch Condition and Jump to BTA or PC+4 uses ALU #1 in Figure 4.9, to determine whether or not the branch should be taken. Jump to BTA or PC+4 uses control logic hardware to transfer control to the instruction referenced by the branch target address. This effectively changes the PC to the branch target address, and completes the execute step of the fetch-decode-execute cycle. Figure 4.9. Schematic diagram of the Branch instruction datapath. Note that, unlike the Load/Store datapath, the execute step does not include writing of results back to the register file [MK98]. The branch datapath takes operand #1 (the offset) from the instruction input to the register file, then sign-extends the offset. The sign-extended offset and the program counter (incremented by 4 bytes to reference the next instruction after the branch instruction) are combined by ALU #2 to yield the branch target address. The operands for the branch condition to evaluate are concurrently obtained from the register file via the ReadData ports, and are input to ALU #1, which outputs a one or zero value to the branch control logic. MIPS has the special feature of a delayed branch, that is, the instruction Ib which follows the branch is always fetched, decoded, and prepared for execution. If the branch is not taken, execution simply continues with Ib as usual. If the branch is taken, Ib is still executed before control transfers to the branch target; the branch is thus "delayed" by one instruction. One wonders why this extra work is performed - the answer is that the delayed branch improves the efficiency of pipeline execution, as we shall see in Section 5. Also, the use of branch-not-taken (where Ib is executed) is sometimes the common case.
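To summarize the address arithmetic of this subsection, the following Python sketch (illustrative only, not part of the course materials) computes the branch target and jump target addresses as described above:

    def branch_target(pc, offset16):
        # Sign-extend the 16-bit offset, shift left 2 bits for word alignment,
        # and add the result to PC + 4.
        offset = offset16 - 0x10000 if offset16 & 0x8000 else offset16
        return (pc + 4 + (offset << 2)) & 0xFFFFFFFF

    def jump_target(pc, imm26):
        # JTA: upper 4 bits of PC + 4, then the 26-bit immediate, then 00.
        return ((pc + 4) & 0xF0000000) | ((imm26 & 0x03FFFFFF) << 2)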
4.3. Single-Cycle and Multicycle Datapaths Reading Assignments and Exercises A single-cycle datapath executes in one cycle all instructions that the datapath is designed to implement. This clearly impacts CPI in a beneficial way, namely, CPI = 1 cycle for all instructions. In this section, we first examine the design discipline for implementing such a datapath using the hardware components and instruction-specific datapaths developed in Section 4.2. Then, we discover how the performance of a single-cycle datapath can be improved using a multi-cycle implementation. 4.3.1. Single-Cycle Datapaths Let us begin by constructing a datapath with control structures taken from the results of Section 4.2. The simplest way to connect the datapath components developed in Section 4.2 is to have them all execute an instruction concurrently, in one cycle. As a result, no datapath component can be used more than once per cycle, which implies duplication of components. To make this type of design more efficient without sacrificing speed, we can share a datapath component by allowing the component to have multiple inputs and outputs selected by a multiplexer. The key to efficient single-cycle datapath design is to find commonalities among instruction types. For example, the R-format MIPS instruction datapath of Figure 4.7 and the load/store datapath of Figure 4.8 have similar register file and ALU connections. However, the following differences can also be observed: 1. The second ALU input is a register (R-format instruction) or the sign-extended lower 16 bits of the instruction (e.g., a load/store offset). 2. The value written to the register file is obtained from the ALU (R-format instruction) or memory (load/store instruction). These two datapath designs can be combined to include separate instruction and data memory, as shown in Figure 4.10. The combination requires an adder and an ALU to respectively increment the PC and execute the R-format instruction. Figure 4.10. Schematic diagram of a composite datapath for R-format and load/store instructions [MK98]. Adding the branch datapath to the datapath illustrated in Figure 4.10 produces the augmented datapath shown in Figure 4.11. The branch instruction uses the main ALU to compare its operands and the adder computes the branch target address. Another multiplexer is required to select either the next instruction address (PC + 4) or the branch target address to be the new value for the PC. Figure 4.11. Schematic diagram of a composite datapath for R-format, load/store, and branch instructions [MK98]. ALU Control. Given the simple datapath shown in Figure 4.11, we next add the control unit. Control accepts inputs (called control signals) and generates (a) a write signal for each state element, (b) the control signals for each multiplexer, and (c) the ALU control signal. The ALU has three control signals, as shown in Table 4.1, below. The ALU is used for all instruction classes, and always performs one of the five functions in the right-hand column of Table 4.1. For branch instructions, the ALU performs a subtraction, whereas R-format instructions require one of the ALU functions. The ALU is controlled by two inputs: (1) the opcode from a MIPS instruction (six most significant bits), and (2) a two-bit control field (which Patterson and Hennessy call ALUop).
The ALUop signal denotes whether the operation should be one of the following: The output of the ALU control is one of the 3-bit control codes shown in the left-hand column of Table 4.1. In Table 4.2, we show how to set the ALU output based on the instruction opcode and the ALUop signals. Later, we will develop a circuit for generating the ALUop bits. We call this approach multi-level decoding -- main control generates the ALUop bits, which are input to the ALU control. The ALU control then generates the three-bit codes shown in Table 4.1. The advantage of a hierarchically partitioned or pipelined control scheme is realized in reduced hardware (several small control units are used instead of one large unit). This results in reduced hardware cost, and can in certain instances produce increased speed of control. Since the control unit is critical to datapath performance, this is an important implementational step. Recall that we need to map the two-bit ALUop field and the six-bit opcode to a three-bit ALU control code. Normally, this would require 2^(2 + 6) = 256 possible combinations, eventually expressed as entries in a truth table. However, only a few opcodes are to be implemented in the ALU designed herein. Also, the funct field is consulted only when ALUop = 10 (binary). Thus, we can use simple logic to implement the ALU control, as shown in terms of the truth table illustrated in Table 4.2. Table 4.2. ALU control bits as a function of ALUop bits and opcode bits [MK98]. In this table, an "X" in the input column represents a "don't-care" value, which indicates that the output does not depend on the input at the i-th bit position. The preceding truth table can be optimized and implemented in terms of gates, as shown in Section C.2 of Appendix C of the textbook.
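The multi-level decoding scheme can also be sketched in software form. The following Python fragment is illustrative only; the 3-bit codes are the usual textbook encoding and are assumed (not verified) to match Table 4.1:

    def alu_control(aluop, funct):
        # Main control supplies the 2-bit ALUop; the funct field matters
        # only for R-format instructions (ALUop = 0b10).
        if aluop == 0b00:                  # lw/sw: compute a memory address
            return 0b010                   # add
        if aluop == 0b01:                  # beq: compare via subtraction
            return 0b110                   # subtract
        r_format = {0b100000: 0b010,       # add
                    0b100010: 0b110,       # sub
                    0b100100: 0b000,       # and
                    0b100101: 0b001,       # or
                    0b101010: 0b111}       # slt
        return r_format[funct]             # other funct values not handled here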
Figure 4.13. Schematic diagram of composite datapath for R-format, load/store, and branch instructions (from Figure 4.12) with control signals illustrated in detail [MK98].

We next examine the functionality of the datapath illustrated in Figure 4.13 for the three major types of instructions, then discuss how to augment the datapath for a new type of instruction.

4.3.2. Datapath Operation

Recall that there are three MIPS instruction formats -- R, I, and J. Each instruction causes slightly different functionality to occur along the datapath, as follows.

R-format Instruction. Execution of an R-format instruction (e.g., add $t1, $t0, $t1) using the datapath developed in Section 4.3.1 involves the following steps:

1. Fetch instruction from instruction memory and increment PC

2. Input registers (e.g., $t0 and $t1) are read from the register file

3. ALU operates on data from the register file, using the funct field of the MIPS instruction (Bits 5-0) to help select the ALU operation

4. Result from ALU is written into the register file, using bits 15-11 of the instruction to select the destination register (e.g., $t1).

Note that this implementational sequence is actually combinational, because of the single-cycle assumption. Since the datapath operates within one clock cycle, the signals stabilize approximately in the order shown in Steps 1-4, above.

Load/Store Instruction. Execution of a load/store instruction (e.g., lw $t1, offset($t2)) using the datapath developed in Section 4.3.1 involves the following steps:

1. Fetch instruction from instruction memory and increment PC

2. Read register value (e.g., base address in $t2) from the register file

3. ALU adds the base address from register $t2 to the sign-extended lower 16 bits of the instruction (i.e., offset)

4. Result from ALU is applied as an address to the data memory

5. Data retrieved from the memory unit is written into the register file, where the register index is given by $t1 (Bits 20-16 of the instruction).

Branch Instruction. Execution of a branch instruction (e.g., beq $t1, $t2, offset) using the datapath developed in Section 4.3.1 involves the following steps:

1. Fetch instruction from instruction memory and increment PC

2. Read registers (e.g., $t1 and $t2) from the register file. Concurrently, the adder sums PC + 4 and the sign-extended lower 16 bits of offset shifted left by two bits, thereby producing the branch target address (BTA).

3. ALU subtracts the contents of $t2 from the contents of $t1. The Zero output of the ALU directs which result (PC + 4 or BTA) to write as the new PC.

Final Control Design. Now that we have determined the actions that the datapath must perform to compute the three types of MIPS instructions, we can use the information in Table 4.3 to describe the control logic in terms of a truth table. This truth table (Table 4.3) is optimized as shown in Section C.2 of Appendix C of the textbook to yield the datapath control circuitry.

Table 4.3. Datapath control signals as a function of the instruction opcode bits [MK98].
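The control logic just described can be sketched in software as a simple lookup. The signal settings below follow the standard single-cycle control table for the four instruction classes implemented here; 'x' marks don't-cares, and the tuple layout is an assumption made for this sketch:

# Sketch of the main control unit: opcode -> datapath control signals.
# Tuple layout (an assumption of this sketch):
# (RegDst, ALUSrc, MemtoReg, RegWrite, MemRead, MemWrite, Branch, ALUop)

MAIN_CONTROL = {
    0b000000: (1, 0, 0, 1, 0, 0, 0, 0b10),      # R-format
    0b100011: (0, 1, 1, 1, 1, 0, 0, 0b00),      # lw
    0b101011: ('x', 1, 'x', 0, 0, 1, 0, 0b00),  # sw
    0b000100: ('x', 0, 'x', 0, 0, 0, 1, 0b01),  # beq
}

def main_control(opcode: int):
    """Return the control-signal tuple for a supported opcode."""
    return MAIN_CONTROL[opcode]

# PCSrc is not set directly from the opcode: it is the AND of the
# Branch signal with the ALU's Zero output, as described above.
def pc_src(branch: int, zero: int) -> int:
    return branch & zero

As the text notes, everything except PCSrc is a pure function of the opcode, which is what makes a single truth table (Table 4.3) sufficient.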
4.3.3. Extended Control for New Instructions

The jump instruction provides a useful example of how to extend the single-cycle datapath developed in Sections 4.3.1 and 4.3.2 to support new instructions. Jump resembles branch (branch can be viewed as a conditional form of jump), but computes the PC differently and is unconditional. Identical to the branch target address, the lowest two bits of the jump target address (JTA) are always zero, to preserve word alignment. The next 26 bits are taken from a 26-bit immediate field in the jump instruction (the remaining six bits are reserved for the opcode). The upper four bits of the JTA are taken from the upper four bits of the next instruction address (PC + 4). Thus, the JTA computed by the jump instruction is formed by concatenating the upper four bits of PC + 4, the 26-bit immediate field, and the two-bit suffix 00.

The jump is implemented in hardware by adding a control circuit to Figure 4.13 that comprises a shift-left-2 unit for the 26-bit immediate field and an additional multiplexer, driven by a new Jump control signal, that selects the JTA as the value written to the PC. The resulting augmented datapath is shown in Figure 4.14.

Figure 4.14. Schematic diagram of composite datapath for R-format, load/store, branch, and jump instructions, with control signals labelled [MK98].
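As a quick check of the JTA computation just described, here is a minimal sketch; the function name and the example addresses are illustrative only:

def jump_target_address(pc: int, instruction: int) -> int:
    """Form the JTA: upper 4 bits of PC + 4, the 26-bit immediate, two zero bits."""
    pc_plus_4 = (pc + 4) & 0xFFFFFFFF
    imm26 = instruction & 0x03FFFFFF      # lower 26 bits of the instruction
    return (pc_plus_4 & 0xF0000000) | (imm26 << 2)

# Example: a jump at PC = 0x00400000 whose immediate field is 0x100000
assert jump_target_address(0x00400000, 0x00100000) == 0x00400000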
4.3.4. Limitations of the Single-Cycle Datapath

The single-cycle datapath is not used in modern processors, because it is inefficient. The critical path (longest propagation sequence through the datapath) is five components for the load instruction. The cycle time tc is limited by the settling time ts of these components. For a circuit with no feedback loops, tc > 5ts. In practice, tc = 5kts, with a large proportionality constant k, due to feedback loops, delayed settling caused by circuit noise, and so on. Additionally, as shown in the table on p. 374 of the textbook, it is possible to compute the required execution time for each instruction class from the critical path information. The result is that the load instruction takes 5 units of time, while the store and R-format instructions take 4 units of time. All the other types of instructions that the datapath is designed to execute run faster, requiring three units of time.

The problem of penalizing addition, subtraction, and comparison operations to accommodate loads and stores leads one to ask whether multiple cycles of a much faster clock could be used for each part of the fetch-decode-execute cycle. In practice, this technique is employed in CPU design and implementation, as discussed in the following sections on multicycle datapath design. In Section 5, we will show that datapath actions can be interleaved in time to yield a potentially fast implementation of the fetch-decode-execute cycle that is formalized in a technique called pipelining.

4.3.5. Multicycle Datapath Design

In Sections 4.3.1 through 4.3.4, we designed a single-cycle datapath by (1) grouping instructions into classes, (2) decomposing each instruction class into constituent operations, and (3) deriving datapath components for each instruction class that implemented these operations. In this section, we use the single-cycle datapath components to create a multicycle datapath, where each step in the fetch-decode-execute sequence takes one cycle. This approach has two advantages over the single-cycle datapath:

1. Each functional unit (e.g., Register File, Data Memory, ALU) can be used more than once in the course of executing an instruction, which saves hardware (and, thus, reduces cost); and

2. Each instruction step takes one cycle, so different instructions have different execution times. In contrast, the single-cycle datapath that we designed previously required every instruction to take one cycle, so all the instructions move at the speed of the slowest.

We next consider the basic differences between single-cycle and multicycle datapaths.

Cursory Analysis. Figure 4.15 illustrates a simple multicycle datapath.

Figure 4.15. Simple multicycle datapath with buffering registers (Instruction register, Memory data register, A, B, and ALUout) [MK98].

Note that there are two types of state elements (e.g., memory, registers), which are:

1. Programmer-Visible (register file, PC, or memory), in which data is stored that is used by subsequent instructions (in a later clock cycle); and

2. Additional State Elements (buffer registers), in which data is stored that is used in a later clock cycle of the same instruction.

Thus, the additional (buffer) registers determine (a) what functional units will fit into a given clock cycle and (b) the data required for later cycles involved in executing the current instruction. In the simple implementation presented herein, we assume for purposes of illustration that each clock cycle can accommodate one and only one of the following operations: a memory access, a register file access (two reads or one write), or an ALU operation.

New Registers. As a result of buffering, data produced by the memory, register file, or ALU is saved for use in a subsequent cycle. The following temporary registers are important to the multicycle datapath implementation discussed in this section: the Instruction Register (IR), the Memory Data Register (MDR), the register-file output buffers A and B, and the ALU output register ALUout. The IR and MDR are distinct registers because some operations require both instruction and data in the same clock cycle. Since all registers except the IR hold data only between two adjacent clock cycles, these registers do not need a write control signal. In contrast, the IR holds an instruction until it is executed (multiple clock cycles) and therefore requires a write control signal to protect the instruction from being overwritten before its execution has been completed.

New Muxes. We also need to add new multiplexers and expand existing ones, to implement sharing of functional units. For example, we need to select between the PC (for an instruction fetch) and ALUout (for a load/store) as the memory address. The muxes also route to one ALU the many inputs and outputs that were distributed among the several ALUs of the single-cycle datapath. Thus, we make the following additional changes to the single-cycle datapath: (1) a mux that selects either the PC or ALUout as the memory address; (2) an expanded mux on the first ALU input that selects either the PC or buffer register A; and (3) a four-input mux on the second ALU input that selects buffer register B, the constant 4, the sign-extended immediate, or the sign-extended immediate shifted left two bits. The details of these muxes are shown in Figure 4.16. By adding a few registers (buffers) and muxes (inexpensive widgets), we halve the number of memory units (expensive hardware) and eliminate two adders (more expensive hardware).

New Control Signals. The datapath shown in Figure 4.16 is multicycle, since it uses multiple cycles per instruction. As a result, it requires different control signals than the single-cycle datapath: write signals for the memory and the IR (IRWrite), a signal (IorD) to select the source of the memory address, and an expanded selector (PCSource) for the value written to the PC. It is advantageous that the ALU control from the single-cycle datapath can be used as-is for the multicycle datapath ALU control. However, some modifications are required to support branches and jumps. We describe these changes as follows.

Branch and Jump Instruction Support. To implement branch and jump instructions, one of three possible values is written to the PC:

1. ALU output = PC + 4, to get the next instruction during the instruction fetch step (to do this, PC + 4 is written directly to the PC)

2. Register ALUout, which stores the computed branch target address.

3. Lower 26 bits (offset) of the IR, shifted left by two bits (to preserve alignment) and concatenated with the upper four bits of PC + 4, to form the jump target address.

The PC is written unconditionally (jump instruction) or conditionally (branch), which implies two control signals - PCWrite and PCWriteCond. From these two signals and the Zero output of the ALU, we derive the actual PC write-enable signal, via the following logic equation: PCWriteControl = (ALUZero and PCWriteCond) or PCWrite, where (a) ALUZero indicates whether the two operands of the beq instruction are equal and (b) the result of (ALUZero and PCWriteCond) determines whether the PC should be written during a conditional branch. We call the latter the branch taken condition.
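The branch-taken logic is small enough to state executably; this sketch simply restates the logic equation above, with illustrative names:

def pc_write_enable(pc_write: int, pc_write_cond: int, alu_zero: int) -> int:
    """PCWriteControl = (ALUZero and PCWriteCond) or PCWrite."""
    return (alu_zero & pc_write_cond) | pc_write

assert pc_write_enable(0, 1, 1) == 1   # taken branch
assert pc_write_enable(0, 1, 0) == 0   # branch not taken
assert pc_write_enable(1, 0, 0) == 1   # unconditional write (e.g., fetch, jump)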
Figure 4.16 shows the resultant multicycle datapath and control unit with the new muxes and corresponding control signals. Table 4.4 illustrates the control signals and their functions.

4.3.6. Multicycle Datapath and Instruction Execution

Given the datapath illustrated in Figure 4.16, we examine instruction execution in each cycle of the datapath. The implementational goal is balancing of the work performed per clock cycle, to minimize the average time per cycle across all instructions. For example, each step would contain one of the following: a memory access, a register file access, or an ALU operation. Thus, the cycle time will be equal to the maximum time required for any of the preceding operations.

Note: Since (a) the datapath is designed to be edge-triggered (reference Section 4.1.1) and (b) the outputs of the ALU, register file, or memory are stored in dedicated registers (buffers), we can continue to read the value stored in a dedicated register. The new value, output from the ALU, register file, or memory, is not available in the register until the next clock cycle.

Figure 4.16. MIPS multicycle datapath [MK98].

Table 4.4. Multicycle datapath control signals and their functions [MK98].

In the multicycle datapath, all operations within a clock cycle occur in parallel, but successive steps within a given instruction operate sequentially. Several implementational issues present themselves that do not confound this view, but should be discussed. One must distinguish between (a) reading/writing the PC or one of the buffer registers, and (b) reads/writes to the register file. Namely, I/O to the PC or buffers is part of one clock cycle, i.e., we get this essentially "for free" because of the clocking scheme and hardware design. In contrast, the register file has more complex hardware (as shown in Section 4.1.2) and requires a dedicated clock cycle for its circuitry to stabilize.

We next examine multicycle datapath execution in terms of the fetch-decode-execute sequence.

Instruction Fetch. In this first cycle, which is common to all instructions, the datapath fetches an instruction from memory and computes the new PC (address of the next instruction in the program sequence), as represented by the following pseudocode:

IR = Memory[PC]    # Put contents of Memory[PC] in Instr. Register
PC = PC + 4        # Increment the PC by 4 to preserve alignment

where IR denotes the instruction register. The PC is sent (via control circuitry) as an address to memory. The memory hardware performs a read operation and control hardware transfers the instruction at Memory[PC] into the IR, where it is stored until the next instruction is fetched. Then, the ALU increments the PC by four to preserve word alignment. The incremented (new) PC value is stored back into the PC register by setting PCSource = 00 and asserting PCWrite. Fortunately, incrementing the PC and performing the memory read are concurrent operations, since the new PC is not required (at the earliest) until the next clock cycle.

Reading Assignment: The exact sequence of operations is described on p. 385 of the textbook.
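A minimal sketch of the register transfers in this fetch cycle, assuming a word-addressed dictionary as a stand-in for instruction memory (all names are illustrative):

def fetch_step(state: dict, memory: dict) -> None:
    """Cycle 1 of every instruction: IR = Memory[PC]; PC = PC + 4."""
    state["IR"] = memory[state["PC"]]              # memory read runs in parallel
    state["PC"] = (state["PC"] + 4) & 0xFFFFFFFF   # with the ALU computing PC + 4

state = {"PC": 0x00400000}
memory = {0x00400000: 0x012A4020}   # some instruction word at this address
fetch_step(state, memory)
assert state["PC"] == 0x00400004 and state["IR"] == 0x012A4020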
Instruction Decode and Data Fetch. Included in the multicycle datapath design is the assumption that the actual opcode to be executed is not known prior to the instruction decode step. This is reasonable, since the new instruction is not available until completion of instruction fetch and has thus not yet been decoded. As a result of not knowing what operation the ALU is to perform in the current instruction, the datapath must execute only actions that are either applicable to all instructions or harmless to instructions that do not need them.

Therefore, given the rs and rt fields of the MIPS instruction format (per Figure 2.7), we can suppose (harmlessly) that the next instruction will be R-format. We can thus read the operands corresponding to rs and rt from the register file. If we don't need one or both of these operands, that is not harmful. Otherwise, the register file read operation will place them in buffer registers A and B, which is also not harmful.

Another action the datapath can perform is computation of the branch target address using the ALU, since this is the instruction decode step and the ALU is not yet needed for instruction execution. If the instruction that we are decoding in this step is not a branch, then no harm is done - the BTA is stored in ALUout and nothing further happens to it.

We can perform these preparatory actions because of the regularity of the MIPS instruction formats. The result is represented in pseudocode, as follows:

A = RegFile[IR[25:21]]                     # First operand = Bits 25-21 of instruction
B = RegFile[IR[20:16]]                     # Second operand = Bits 20-16 of instruction
ALUout = PC + SignExtend(IR[15:0]) << 2    # Compute BTA

where "x << n" denotes x shifted left by n bits.

Reading Assignment: The exact sequence of low-level operations is described on p. 384 of the textbook.

Instruction Execute, Address Computation, or Branch Completion. In this cycle, we know what the instruction is, since decoding was completed in the previous cycle. The instruction opcode determines the datapath operation, as in the single-cycle datapath. The ALU operates upon the operands prepared in the decode/data-fetch step, performing one of the following actions: a memory address computation (ALUout = A + SignExtend(IR[15:0])), an R-format operation (ALUout = A op B), a branch completion (if A equals B, the BTA in ALUout is written to the PC), or a jump completion (the JTA is written to the PC).

Memory Access or R-format Instruction Completion. In this cycle, a load or store instruction accesses memory, and an R-format instruction writes its result (which appears at ALUout at the end of the previous cycle), as follows:

MDR = Memory[ALUout]       # Load
Memory[ALUout] = B         # Store
Reg[IR[15:11]] = ALUout    # R-format completion: write ALU result to register file

where MDR denotes the memory data register.

Reading Assignment: The control actions for load/store instructions are discussed on p. 388 of the textbook.

Memory Read Completion. In the final cycle of a load, the data to be loaded was stored in the MDR in the previous cycle and is thus available for this cycle; it is now written to the register file. The rt field of the MIPS instruction format (Bits 20-16) holds the destination register number, which is applied to the input of the register file, together with RegDst = 0 and an asserted RegWrite signal.

From the preceding sequences as well as their discussion in the textbook, we are prepared to design a finite-state controller, as shown in the following section.
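The "harmless speculation" of the decode step can be sketched as follows; the field positions follow the MIPS instruction layout given above, and all names are assumptions of this sketch:

def sign_extend16(x: int) -> int:
    """Sign-extend a 16-bit field to a Python int."""
    return x - 0x10000 if x & 0x8000 else x

def decode_step(state: dict, regfile: list) -> None:
    """Cycle 2: read rs/rt optimistically and compute the BTA speculatively."""
    ir = state["IR"]
    state["A"] = regfile[(ir >> 21) & 0x1F]    # rs -> buffer A
    state["B"] = regfile[(ir >> 16) & 0x1F]    # rt -> buffer B
    # PC already holds PC + 4 after the fetch step, matching the BTA formula.
    state["ALUout"] = state["PC"] + (sign_extend16(ir & 0xFFFF) << 2)

# If the instruction turns out not to be a branch, ALUout is simply ignored.
regfile = [0] * 32
state = {"IR": 0x00000000, "PC": 4}
decode_step(state, regfile)
assert state["ALUout"] == 4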
4.4. Finite State Control

Reading Assignments and Exercises

In the single-cycle datapath control, we designed control hardware using a set of truth tables based on control signals activated for each instruction class. However, this approach must be modified for the multicycle datapath, which has the additional dimension of time due to the stepwise execution of instructions. Thus, the multicycle datapath control is dependent on the current step involved in executing an instruction, as well as the next step.

There are two alternative techniques for implementing multicycle datapath control. First, a finite-state machine (FSM), or finite state control (FSC), specifies the actions appropriate for the datapath's next computational step, based on (a) the status and control information specific to the datapath's current step and (b) the actions to be performed in the next step. A second technique, called microprogramming, uses a programmatic representation to implement control, as discussed in Section 4.5. Appendix C of the textbook shows how these representations are translated into hardware.

4.4.1. Finite State Machine

An FSM consists of a set of states with directions that tell the FSM how to change states. The following features are important: a set of states, a next-state function that maps the current state and the inputs to a new state, and an output function that determines the control outputs asserted in each state. Implementationally, we assume that all outputs not explicitly asserted are deasserted. Additionally, all multiplexer controls are explicitly specified if and only if they pertain to the current and next states. A simple example of an FSM is given in Appendix B of the textbook.

4.4.2. Finite State Control

The FSC is designed for the multicycle datapath by considering the five steps of instruction execution given in Section 4.3, namely:

1. Instruction fetch

2. Instruction decode and data fetch

3. ALU operation

4. Memory access or R-format instruction completion

5. Memory read completion

Each of these steps takes one cycle, by definition of the multicycle datapath. Also, each step stores its results in temporary (buffer) registers such as the IR, MDR, A, B, and ALUout. Each state in the FSM will thus (a) occupy one cycle in time, and (b) store its results in a temporary (buffer) register. From the discussion of Section 4.3, observe that Steps 1 and 2 are identical for every instruction, but Steps 3-5 differ, depending on instruction format. Also note that after completion of an instruction, the FSC returns to its initial state (Step 1) to fetch another instruction, as shown in Figure 4.17.

Figure 4.17. High-level (abstract) representation of finite-state machine for the multicycle datapath finite-state control. Figure numbers refer to figures in the textbook [Pat98,MK98].

Let us begin our discussion of the FSC by expanding Steps 1 and 2, where State 0 (the initial state) corresponds to Step 1.

Instruction Fetch and Decode. In Figure 4.18 is shown the FSM representation for instruction fetch and decode. The control signals asserted in each state are shown within the circle that denotes a given state. The edges (lines or arrows) between states are labelled with the conditions that must be fulfilled for the illustrated transition between states to occur. Patterson and Hennessy call the process of branching to different states decoding, which depends on the instruction class after State 1 (i.e., Step 2, as listed above).

Figure 4.18. Representation of finite-state control for the instruction fetch and decode states of the multicycle datapath. Figure numbers refer to figures in the textbook [Pat98,MK98].
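The fetch/decode portion of the FSC can be sketched as a small transition table. The state numbers follow the ten-state development in this section (States 0-9), but the dispatch mapping below is an illustrative reconstruction, not the textbook figure:

# Skeleton of the finite-state control as a transition table. State 0 fetches,
# State 1 decodes; the state after State 1 is chosen by the instruction class.

DECODE_DISPATCH = {             # opcode -> next state (illustrative numbering)
    0b100011: 2,  # lw  -> memory address computation
    0b101011: 2,  # sw  -> memory address computation
    0b000000: 6,  # R-format -> execution
    0b000100: 8,  # beq -> branch completion
    0b000010: 9,  # j   -> jump completion
}

def next_state(current: int, opcode: int) -> int:
    if current == 0:            # instruction fetch always proceeds to decode
        return 1
    if current == 1:            # decode: dispatch on the instruction class
        return DECODE_DISPATCH[opcode]
    raise NotImplementedError("the remaining states are covered in the text")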
Memory Reference. The memory reference portion of the FSC is shown in Figure 4.19. Here, State 2 computes the memory address by setting the ALU input muxes to pass the A register (base address) and the sign-extended lower 16 bits of the offset to the ALU. After address computation, memory read/write requires two states: State 3, which performs the memory read for a load, and State 5, which performs the memory write for a store. In both states, the memory address is forced to equal ALUout, by setting the control signal IorD = 1.

Figure 4.19. Representation of finite-state control for the memory reference states of the multicycle datapath. Figure numbers refer to figures in the textbook [Pat98,MK98].

When State 5 completes, control is transferred to State 0. Otherwise, State 3 completes and the datapath must finish the load operation, which is accomplished by transferring control to State 4. There, MemtoReg = 1, RegDst = 0, and the MDR contents are written to the register file. The next state is State 0.

R-format Execution. To implement R-format instructions, the FSC uses two states, one for execution (Step 3) and another for R-format completion (Step 4), per Figure 4.20. State 6 asserts ALUSrcA and sets ALUSrcB = 00, which routes the contents of buffer registers A and B (the register file outputs) to the ALU. The ALUop = 10 setting causes the ALU control to use the instruction's funct field to set the ALU control signals to implement the designated ALU operation. State 7 causes (a) the register file to write (assert RegWrite), (b) the rd field of the instruction to supply the number of the destination register (assert RegDst), and (c) ALUout to be selected as the value that must be written back to the register file as the result of the ALU operation (by deasserting MemtoReg).

Figure 4.20. Representation of finite-state control for the R-format instruction execution states of the multicycle datapath. Figure numbers refer to figures in the textbook [Pat98,MK98].

Branch Control. Since branches complete during Step 3, only one new state is needed. In State 8, (a) control signals that cause the ALU to compare the contents of its A and B input registers are set (i.e., ALUSrcA = 1, ALUSrcB = 00, ALUop = 01), and (b) the PC is written conditionally (by setting PCSource = 01 and asserting PCWriteCond). Note that setting ALUop = 01 forces a subtraction, hence only the beq instruction can be implemented this way.

Figure 4.21. Representation of finite-state control for (a) branch and (b) jump instruction-specific states of the multicycle datapath. Figure numbers refer to figures in the textbook [Pat98,MK98].

Jump Instruction. Similar to branch, the jump instruction requires only one state (#9) to complete execution. Here, the PC is written by asserting PCWrite. The value written to the PC is the lower 26 bits of the IR shifted left two bits (so the lowest two bits equal 00), concatenated with the upper four bits of the PC. This is done by setting PCSource = 10.

4.4.3. FSC and Multicycle Datapath Performance

The composite FSC is shown in Figure 4.22, which was constructed by composing Figures 4.18 through 4.21.

Figure 4.22. Representation of the composite finite-state control for the MIPS multicycle datapath [MK98].

When computing the performance of the multicycle datapath, we use this FSM representation to determine the critical path (maximum number of states encountered) for each instruction type, with the following results: loads traverse five states, stores and ALU (R-format) instructions four states, and branches and jumps three states. Since each state corresponds to a clock cycle (according to the design assumption of the FSC controller in Section 4.4.2), we have the following expression for the CPI of the multicycle datapath:

CPI = [#Loads · 5 + #Stores · 4 + #ALU-instr's · 4 + #Branches · 3 + #Jumps · 3] / (Total Number of Instructions)

Reading Assignment: Know in detail the example computation of CPI for the multicycle datapath, beginning on p. 397 of the textbook.
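To make the formula concrete, here is a worked example with a hypothetical instruction mix (the fractions below are invented for illustration; they are not the textbook's benchmark data):

# Worked example of the CPI formula with a hypothetical instruction mix.
mix = {"load": 0.25, "store": 0.10, "alu": 0.45, "branch": 0.15, "jump": 0.05}
cycles = {"load": 5, "store": 4, "alu": 4, "branch": 3, "jump": 3}

cpi = sum(mix[k] * cycles[k] for k in mix)
print(f"CPI = {cpi:.2f}")   # 0.25*5 + 0.10*4 + 0.45*4 + 0.15*3 + 0.05*3 = 4.05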
The textbook example shows that the CPI for the gcc benchmark is 4.02, a savings of approximately 20 percent over the worst-case CPI (equal to 5 cycles for all instructions, based on the single-cycle datapath design constraint that all instructions run at the speed of the slowest).

4.4.4. Implementation of Finite-State Control

The FSC can be implemented in hardware using a read-only memory (ROM) or programmable logic array (PLA), as discussed in Section C.3 of the textbook. Combinational logic implements the transition function, and a state register stores the current state of the machine (e.g., States 0 through 9 in the development of Section 4.4.2). The inputs are the IR opcode bits, and the outputs are the various datapath control signals (e.g., PCSource, ALUop, etc.).

We next consider how the preceding function can be implemented using the technique of microprogramming.

4.5. Microprogrammed Control

Reading Assignments and Exercises

While the finite state control for the multicycle datapath was relatively easy to design, the graphical approach shown in Section 4.4 is limited to small control systems. We implemented only five MIPS instruction types, but the actual MIPS instruction set has over 100 different instructions. Recall that the FSC of Section 4.4 required 10 states for only five instruction types, and had CPI ranging from three to five. Now, observe that MIPS has not only over 100 instructions, but CPI ranging from one to 20 cycles. A control system for a realistic instruction set (even if it is RISC) would have hundreds or thousands of states, which could not be represented conveniently using the graphical technique of Section 4.4.

However, it is possible to develop a convenient technique of control system design and programming by using abstractions from programming language practice. This technique, called microprogramming, helps make control design more tractable and also helps improve correctness if good software engineering practice is followed. By using very low-level instructions (called microinstructions) that set the value of datapath control signals, one can write microprograms that implement a processor's control system(s). To do this, one specifies (a) the format of each microinstruction (its fields and their allowable values) and (b) how execution of the microinstructions is sequenced. We consider these issues, as follows.

4.5.1. Microinstruction Format

A microinstruction is an abstraction of low-level control that is used to program control logic hardware. The microinstruction format should be simple, and should discourage or prohibit inconsistency. (An inconsistent microinstruction requires a given control signal to be set to two different values simultaneously, which is physically impossible.) The implementation of each microinstruction should, therefore, make each field specify a set of nonoverlapping values. Signals that are never asserted concurrently can thus share the same field. Table 4.5 illustrates how this is realized in MIPS, using seven fields. The first six fields control the datapath, while the last field controls the microinstruction sequencing (deciding which microinstruction will be executed next).

Table 4.5. MIPS microinstruction format [MK98].

In hardware, microinstructions are usually stored in a ROM or PLA (per descriptions in Appendices B and C of the textbook). The microinstructions are usually referenced by sequential addresses to simplify sequencing. The sequencing process can have one of the following three modes:

1. Incrementation, by which the address of the current microinstruction is incremented to obtain the address of the next microinstruction.
This is indicated by the value Seq in the Sequencing field of Table 4.5.

2. Branching, to the microinstruction that initiates execution of the next MIPS instruction. This is implemented by the value Fetch in the Sequencing field.

3. Control-directed choice, where the next microinstruction is chosen based on control input. We call this operation a dispatch. This is implemented by one or more address tables (similar to a jump table) called dispatch tables. The hardware implementation of dispatch tables is discussed in Section C.5 (Appendix C) of the textbook. In the current subset of MIPS whose multicycle datapath we have been implementing, we need two dispatch tables, one each for State 1 and State 2. The use of a dispatch table numbered i is indicated in the microinstruction by putting Dispatch i in the Sequencing field.

Table 4.6 summarizes the allowable values for each field of the microinstruction and the effect of each value.

Table 4.6. MIPS microinstruction field values and functionality [MK98].

Field Name: Label
  Any string -- Labels control sequencing, per p. 403 of the textbook

Field Name: ALU control
  Add -- ALU performs addition operation
  Subt -- ALU performs subtraction operation
  Func code -- Instruction's funct field determines the ALU operation

Field Name: SRC1
  PC -- The PC is the first ALU input
  A -- Buffer register A is the first ALU input

Field Name: SRC2
  B -- Buffer register B is the second ALU input
  4 -- The constant 4 is the second ALU input (for PC + 4)
  Extend -- Output of the sign extension module is the second ALU input
  Extshft -- Sign-extended output of the two-bit left shifter is the second ALU input

Field Name: Register control
  Read -- Read two registers using the rs and rt fields of the current instruction, putting the data into buffers A and B
  Write ALU -- Write to the register file using the rd field of the instruction register as the register number and the contents of ALUout as the data
  Write MDR -- Write to the register file using the rt field of the instruction register as the register number and the contents of the MDR as the data

Field Name: Memory
  Read PC -- Read memory using the PC as the memory address, writing the result into the IR and the MDR [implements instruction fetch]
  Read ALU -- Read memory using ALUout as the address, writing the result into the MDR
  Write ALU -- Write to memory using the ALUout contents as the address, writing to memory the data contained in buffer register B

Field Name: PCWrite control
  ALU -- Write the output of the ALU into the PC register
  ALUout-cond -- If the ALU's Zero output is high, write the contents of ALUout into the PC register
  Jump address -- Write the PC with the jump address from the instruction

Field Name: Sequencing
  Seq -- Choose the next microinstruction sequentially
  Fetch -- Go to the first microinstruction to begin a new MIPS instruction
  Dispatch i -- Dispatch using the ROM specified by i (where i = 1 or 2)

In practice, the microinstructions are input to a microassembler, which checks for inconsistencies. Detected inconsistencies are flagged and must be corrected prior to hardware implementation.
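The nonoverlapping-values discipline is exactly what a microassembler enforces. The following sketch models a microinstruction as a record of named fields (the Label field is omitted) and performs the kind of consistency check described above; the representation itself is an assumption of this sketch:

# Sketch of a microinstruction as a record of named fields, with the kind of
# consistency check a microassembler performs. Field names follow Tables 4.5/4.6;
# "" denotes a blank (deasserted) field.

ALLOWED = {
    "ALU control": {"Add", "Subt", "Func code", ""},
    "SRC1": {"PC", "A", ""},
    "SRC2": {"B", "4", "Extend", "Extshft", ""},
    "Register control": {"Read", "Write ALU", "Write MDR", ""},
    "Memory": {"Read PC", "Read ALU", "Write ALU", ""},
    "PCWrite control": {"ALU", "ALUout-cond", "Jump address", ""},
    "Sequencing": {"Seq", "Fetch", "Dispatch 1", "Dispatch 2"},
}

def check_microinstruction(micro: dict) -> None:
    """Flag any field set to a value outside its nonoverlapping value set."""
    for field, value in micro.items():
        if value not in ALLOWED[field]:
            raise ValueError(f"inconsistent setting {value!r} for field {field!r}")

# The Fetch microinstruction from the microprogram developed below:
check_microinstruction({"ALU control": "Add", "SRC1": "PC", "SRC2": "4",
                        "Register control": "", "Memory": "Read PC",
                        "PCWrite control": "ALU", "Sequencing": "Seq"})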
4.5.2. Microprogramming the Datapath Control

In this section, we use the fetch-decode-execute sequence that we developed for the multicycle datapath to design the microprogrammed control. First, we observe that sometimes an instruction might have a blank field. This is permitted when the datapath elements governed by that field are not used by the microinstruction, so leaving the field blank (deasserted) has no effect. We can now create the microprogram in stepwise fashion.

Instruction Fetch and Decode, Data Fetch. Each instruction execution first fetches the instruction, decodes it, and computes both the sequential PC and the branch target PC (if applicable). The two microinstructions are given by:

Label   ALU control   SRC1   SRC2      Register control   Memory    PCWrite   Sequencing
Fetch   Add           PC     4         ---                Read PC   ALU       Seq
---     Add           PC     Extshft   Read               ---       ---       Dispatch 1

where "---" denotes a blank field. In the first microinstruction, the ALU adds the PC and the constant 4 while memory is read using the PC as the address; the fetched instruction is written into the IR, and the sum PC + 4 is written into the PC. In the second microinstruction, the register file reads the rs and rt registers into buffers A and B, the ALU computes the branch target address into ALUout, and sequencing dispatches (via Dispatch Table 1) on the instruction class.

Dispatch Tables. Patterson and Hennessy consider the dispatch table as a case statement that uses the opcode field and dispatch table i to select one of Ni different labels. For example, in Dispatch Table #1 (i = 1, Ni = 4) we have the label Mem1 for memory reference instructions, Rformat1 for arithmetic and logical instructions, Beq1 for conditional branches, and Jump1 for unconditional branches. Each of these labels points to a different microinstruction sequence that can be thought of as a kind of subprogram. Each microcode sequence can be thought of as comprising a small utility that implements the desired capability of specifying hardware control signals.

Memory Reference Instructions. Three microinstructions suffice to implement memory access in terms of a MIPS load instruction: (1) memory address computation, (2) memory read, and (3) register file write, as follows:

Label   ALU control   SRC1   SRC2     Register control   Memory     PCWrite   Sequencing
Mem1    Add           A      Extend   ---                ---        ---       Dispatch 2
LW2     ---           ---    ---      ---                Read ALU   ---       Seq
---     ---           ---    ---      Write MDR          ---        ---       Fetch

The details of each microinstruction are given on pp. 405-406 of the textbook.

R-format Execution. R-format instruction execution requires two microinstructions: (1) ALU operation, labelled Rformat1 for dispatching; and (2) write to register file, as follows:

Label      ALU control   SRC1   SRC2   Register control   Memory   PCWrite   Sequencing
Rformat1   Func code     A      B      ---                ---      ---       Seq
---        ---           ---    ---    Write ALU          ---      ---       Fetch

The details of each microinstruction are given on p. 406 of the textbook.

Branch and Jump Execution. Since we assume that the preceding microinstruction computed the BTA, the microprogram for a conditional branch requires only the following microinstruction:

Label   ALU control   SRC1   SRC2   Register control   Memory   PCWrite       Sequencing
Beq1    Subt          A      B      ---                ---      ALUout-cond   Fetch

Similarly, only one microinstruction is required to implement a Jump instruction:

Label   ALU control   SRC1   SRC2   Register control   Memory   PCWrite        Sequencing
Jump1   ---           ---    ---    ---                ---      Jump address   Fetch

Implementational details are given on p. 407 of the textbook.

The composite microprogram is therefore given by ten microinstructions, of which the first and last are shown here:

Label   ALU control   SRC1   SRC2   Register control   Memory      PCWrite   Sequencing
Fetch   Add           PC     4      ---                Read PC     ALU       Seq
...
SW2     ---           ---    ---    ---                Write ALU   ---       Fetch

Here, we have added the SW2 microinstruction to illustrate the final step of the store instruction. Observe that these ten microinstructions correspond directly to the ten states of the finite-state control developed in Section 4.4. In more complex machines, microprogram control can comprise tens or hundreds of thousands of microinstructions, with special-purpose registers used to store intermediate data.

4.5.3. Implementing a Microprogram

It is useful to think of a microprogram as a textual representation of a finite-state machine. Thus, a microprogram could be implemented similarly to the FSC that we developed in Section 4.4, using a PLA to encode both the sequencing function and the main control. However, it is often useful to store the control function in a ROM, and then implement the sequencing function in some other way. Typically, the sequencer uses an incrementer to choose the next control instruction. Here, the microcode storage determines the values of the datapath control lines and the technique for selecting the next state, while address select logic contains the dispatch tables (in ROMs or PLAs) and determines the next microinstruction to execute. This technique is preferred, since it substitutes a simple counter for more complex address control logic, which is especially efficient if the microinstructions have little branching.
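A minimal sketch of such a sequencer, combining the incrementer with the two dispatch ROMs (the ROM contents below mirror the state numbering used earlier in this section and are illustrative only):

# Sketch of the three sequencing modes acting on a microprogram counter.
# FETCH_ADDR and the dispatch ROM contents are illustrative placeholders.

FETCH_ADDR = 0
DISPATCH_ROM_1 = {0b100011: 2, 0b101011: 2, 0b000000: 6, 0b000100: 8, 0b000010: 9}
DISPATCH_ROM_2 = {0b100011: 3, 0b101011: 5}   # lw -> memory read, sw -> memory write

def next_micro_pc(micro_pc: int, sequencing: str, opcode: int) -> int:
    if sequencing == "Seq":          # incrementer chooses the next microinstruction
        return micro_pc + 1
    if sequencing == "Fetch":        # begin a new MIPS instruction
        return FETCH_ADDR
    if sequencing == "Dispatch 1":
        return DISPATCH_ROM_1[opcode]
    if sequencing == "Dispatch 2":
        return DISPATCH_ROM_2[opcode]
    raise ValueError(f"unknown sequencing value {sequencing!r}")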
Using a ROM, the microcode can be stored in its own memory and is addressed by the microprogram counter, similar to regular program instructions being addressed by an instruction sequencer. It is interesting to note that this is how microprogramming actually got started: by making the ROM and counter very fast, this scheme represented a great advance over using slower main memory for microprogram storage. Today, however, advances in cache technology make a separate microprogram memory an obsolete development, as it is easier to store the microprogram in main memory and page the parts of it that are needed into cache, where retrieval is fast and uses no extra hardware.

4.5.4. Exception Handling

If control design were not hard enough, we also have to deal with the very difficult problem of implementing exceptions and interrupts. In this discussion, we follow Patterson and Hennessy's convention, for simplicity: an interrupt is an externally caused event, and an exception is any other event that causes unexpected control flow in a program. An interesting comparison of this terminology for different processors and manufacturers is given on pp. 410-411 of the textbook. In this section, we discuss the control design required to handle two types of exceptions: (1) an undefined instruction, and (2) arithmetic overflow. These exceptions are germane to the small language (five instructions) whose implementation we have been exploring thus far.

Basic Exception Handling Mechanism. After an exception is detected, the processor's control circuitry must be able to (1) save the address of the instruction that caused the exception in the exception program counter (EPC), then (2) transfer control to the operating system (OS) at a prespecified address. The second step typically invokes an exception handler, which is a routine that either (a) helps the program recover from the exception or (b) issues an error message, then attempts to terminate the program in an orderly fashion. If program execution is to continue after the exception is detected and handled, then the EPC register helps determine where to restart the program. For example, the exception-causing instruction can be repeated, but in a way that does not cause an exception. Alternatively, the next instruction can be executed (in MIPS, this instruction's address is $epc + 4).

For the OS to handle the exception, one of two techniques is employed. First, the machine can have Cause and EPC registers, which contain codes that respectively represent the cause of the exception and the address of the exception-causing instruction. A second method uses vectored interrupts, where the address to which control is transferred following the exception is determined by the cause of the exception. If vectored interrupts are not employed, control is transferred to one address only, regardless of cause. Then, the cause is used to determine what action the exception handling routine should take.

Hardware Support. MIPS uses the latter method, called non-vectored exceptions.
To support this capability in the datapath that we have been developing in this section, we need to add the following two registers: the EPC, which holds the address of the offending instruction, and the Cause register, which records the reason for the exception. Two additional control signals are needed: EPCWrite and CauseWrite, which write the appropriate information to the EPC and Cause registers. Also required in this particular implementation is a 1-bit signal to set the LSB of Cause to 0 for an undefined instruction, or 1 for arithmetic overflow. Of further use is an address AE that points to the exception handling routine to which control is transferred. In MIPS, we assume that AE = C0000000 (hexadecimal).

In the previous datapath developed through Section 4.4, the PC input is taken from a four-way mux that has three inputs defined, which are: PC + 4, the BTA, and the JTA. Without adding control lines, we can add a fourth possible input to the PC, namely AE, which is written to the PC by setting PCSource = 11.

Unfortunately, we cannot simply write the PC into the EPC, since the PC is incremented at instruction fetch (Step 1 of the multicycle datapath) instead of instruction execution (Step 3), when the exception actually occurs. Thus, when an exception is detected, the ALU must subtract 4 from the PC, and the ALUout register contents must be written to the EPC. It is fortunate that this requires no additional control signals or lines in this particular datapath design, since 4 is already a selectable ALU input (used for incrementing the PC during instruction fetch, and selected via the ALUSrcB control signal).

Hardware support for the datapath modifications needed to implement exception handling in the simple case illustrated in this section is shown in Figure 4.23. In the finite-state diagrams of Figures 4.24 and 4.25, we see that each of the preceding two types of exceptions can be handled using one state each. For each exception type, the state actions are: (1) set the Cause register contents to reflect the exception type, (2) compute and save PC - 4 into the EPC to make available the return address, and (3) write the address AE to the PC so control can be transferred to the exception handler.
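The three state actions just listed translate directly into register transfers; the following sketch assumes the AE value given above, with illustrative names:

AE = 0xC0000000   # exception handler entry point assumed in the text

def take_exception(state: dict, overflow: bool) -> None:
    """State actions for the two exception types handled here."""
    state["Cause"] = 1 if overflow else 0          # 0: undefined instr., 1: overflow
    state["EPC"] = (state["PC"] - 4) & 0xFFFFFFFF  # undo the fetch-time increment
    state["PC"] = AE                               # transfer control to the handler

state = {"PC": 0x00400008}
take_exception(state, overflow=True)
assert state["EPC"] == 0x00400004 and state["PC"] == AE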
To update the finite-state control (FSC) diagram of Figure 4.22, we need to add the two states shown in Figure 4.24.

Figure 4.23. Representation of the composite datapath architecture and control for the MIPS multicycle datapath, with provision for exception handling [MK98].

Thus far, we have discussed exceptions and how to handle them, and have illustrated the requirements of hardware support in the multicycle datapath developed in this section. In the following section, we complete this discussion with an overview of the necessary steps in exception detection.

Exception Detection. Each of the two possible exception types in our example MIPS multicycle datapath is detected differently: an undefined instruction is discovered in State 1, when no next state is defined for the fetched opcode, while arithmetic overflow is signalled by the overflow output of the ALU during an arithmetic operation.

Figure 4.24. Representation of the finite-state models for two types of exceptions in the MIPS multicycle datapath [MK98].

Figure 4.25. Representation of the composite finite-state control for the MIPS multicycle datapath, including exception handling [MK98].

As a result of these modifications, Figure 4.25 represents a complete specification of control for our five-instruction MIPS datapath, including mechanisms to handle two types of exceptions. Our design goal remains keeping the control logic small, fast, and accurate.

Unfortunately, the FSC in Figure 4.25 has some flaws. For example, the overflow detection circuitry does not cause the ALU operation to be rolled back or restarted; rather, the ALU result appears in the ALUout register whether or not there is an exception. This contradicts the MIPS ISA, which specifies that an instruction should have no effect on the datapath if it causes an exception. In practice, certain types of exceptions require process rollback, which greatly increases control system complexity and decreases performance.

Reading Assignment: Study carefully Section 5.7 of the textbook (pp. 416-419) on the Pentium Pro exception handling mechanism.

4.5.5. Summary

We have developed a multicycle datapath and focused on (a) performance analysis and (b) control system design and implementation. Microprogramming was seen to be an especially useful way to design control systems. Unfortunately, there are two assumptions about microprogramming that are potentially dangerous to computer designers or engineers, which are discussed as follows.

First, it has long been assumed that microcode is a faster way to implement an instruction than a sequence of simpler instructions. This is an instance of a conflict in design philosophy that is rooted in CISC versus RISC tradeoffs. In the past (CISC practice), microcode was stored in a very fast local memory, so microcode sequences could be fetched very quickly. This made it look as though microcode was executing very fast, when in fact it used the same datapath as higher-level instructions - only the microprogram memory throughput was faster. Today, with fast caches widely available, microcode performance is about the same as that of the CPU executing simple instructions. The one exception is an architecture with few general-purpose registers (CISC-like), in which microcode might not be swapped in and out of the register file very efficiently.

Another disadvantage of microcode-intensive execution is that the microcode (and therefore the instruction set) must be selected and settled upon before a new architecture is made available. This code cannot be changed until a new model is released. In contrast, software-based approaches to control system design are much more flexible, since the (few, simple) instructions reside in fast memory (e.g., cache) and can be changed at will. At the very worst, a new compiler or assembler revision might be required, but that is common practice nowadays, and far less expensive than hardware revision.

The second misleading assumption about microcode is that if you have some extra room in the control store after a processor control system is designed, support for new instructions can be added for free. This is not true, because of the typical requirement of upward compatibility. That is, any future models of the given architecture must include the "free" instructions that were added after the initial processor design, regardless of whether or not control storage space might be at a premium in future revisions of the architecture.

This concludes our discussion of datapaths, processors, control, and exceptions. We next concentrate on another method of increasing the performance of the multicycle datapath, called pipelining.

[Maf01] Mafla, E. Course Notes, CDA3101, at URL http://www.cise.ufl.edu/~emafla/ (as-of 11 Apr 2001).

[MK98] Copyright 1998 Morgan Kaufmann Publishers, Inc. All Rights Reserved, per copyright notice request at http://www.mkp.com/books_catalog/cod2/cod2ecrt.htm (1998).

[Pat98] Patterson, D.A. and J.L. Hennessy. Computer Organization and Design: The Hardware/Software Interface, Second Edition, San Francisco, CA: Morgan Kaufmann (1998).
Krake's presentation at U of M, Crookston part of Disability Employment Awareness Month.

Everybody likes a good story about dogs and how they can inspire and even save lives. Earlier this week at the University of Minnesota, Crookston, students, faculty, staff and the public got to hear firsthand how Terri Krake's 4-year-old dog Brody helps her with everyday life.

Krake, of Minneapolis, seems like an enthusiastic and able-bodied person, but in reality, she suffers from seizures from time to time because of an injury she sustained when she was younger. Because of this she requires the assistance of a service dog.

In the 1980s, Krake was a young deputy sheriff in New Orleans who absolutely loved her job. "Every day was a new adventure," she said. But that all changed when she was called out one day to the site of a gas leak. Before she could run for cover, the gas ignited and there was an explosion that threw her into the air, only for her to land on her head. She didn't know it then, but she had received a brain stem injury.

Shortly after the accident, Krake began having seizures. For 4 1/2 years, she went through intensive physical therapy and drug experimentation and eventually was able to control her seizures. They returned, however, much stronger and longer than before. Fearful of leaving home, she stayed in most of the time except for going to doctors' appointments.

In 2008, her neurologist suggested that she get a Vagus Nerve Stimulator implant, or VNS, which sends an electrical pulse that interrupts seizure activity and limits the maximum seizure duration to 5 minutes. The device comes with a magnet that, when swiped across the implant area, shortens or even prevents the seizures. But Krake was unable to use it, since she had no sense of when one would occur. This was when it was suggested that she get a Seizure Assist dog.

Krake applied and was later paired with a 14-month-old lab, Brody. His training included bringing an emergency phone or hitting a Lifeline panic button during a seizure emergency. Brody was also trained to use the VNS magnet and now wears it on his vest.

"When I seize," Krake explained, "he will 'cuddle' with me. He lays across my chest with his nose 'snuggling' my neck. This swipes the magnet across the implant and stops the seizure. Then he barks to get someone's attention."

Krake thinks very highly of her service dog and is grateful for all he has helped her with. Since getting Brody, she has been able to go out in public and even do volunteer work. "He really is a lifesaver," Krake said. "I probably wouldn't be here today without him."

Krake's presentation in Bede Ballroom was held in conjunction with Disability Employment Awareness Month.
Software helps researchers discover new antibiotics

New York: Researchers at The Rockefeller University in New York said they discovered two promising new antibiotics by sifting through the human microbiome with the help of software.

By using computational methods to identify which genes in a microbe's genome ought to produce antibiotic compounds, and then synthesising those compounds themselves, they were able to discover the new antibiotics without having to culture a single bacterium, according to a study published in the journal Nature Chemical Biology.

Most antibiotics in use today are based on natural molecules produced by bacteria - and given the rise of antibiotic resistance, there is an urgent need to find more of them. Yet coaxing bacteria to produce new antibiotics is a tricky proposition. Most bacteria won't grow in the lab. And even when they do, most of the genes that cause them to churn out molecules with antibiotic properties never get switched on.

The Rockefeller University team led by Sean Brady offers a new way to avoid these problems. The team began by trawling publicly available databases for the genomes of bacteria that reside in the human body. They then used specialised computer software to scan hundreds of those genomes for clusters of genes that were likely to produce molecules known as non-ribosomal peptides, which form the basis of many antibiotics.

Brady and his colleagues then used a method called solid-phase peptide synthesis to manufacture 25 different chemical compounds. By testing those compounds against human pathogens, the researchers successfully identified two closely related antibiotics, which they dubbed humimycin A and humimycin B. Both are found in a family of bacteria called Rhodococcus - microbes that had never yielded anything resembling the humimycins when cultured using traditional laboratory techniques.

The humimycins proved especially effective against Staphylococcus and Streptococcus bacteria, which can cause dangerous infections in humans and tend to become resistant to various antibiotics, said the study.
Is “Pink Slime” Healthy?

The processed meat-ish byproduct known as "pink slime." Bon appétit.

In the last few weeks, you've probably heard a lot about so-called "pink slime." Otherwise known as "lean finely textured beef trimmings," pink slime is a processed meat byproduct found in 70% of packaged ground beef in the United States. Rather than being made from muscle tissue, this meat-ish byproduct is created from connective tissue and treated with ammonium hydroxide to kill salmonella and E. coli.

Doesn't sound too appetizing. And really, the publicity about pink slime was one of the rare instances where mainstream consumers peered behind the veil and saw the unpleasant reality of industrial farming. The family farms and red barns that adorn product packaging are far cries from the shocking truth about how our food is made.

Despite the unappealing process by which it's created, the USDA considers pink slime safe for human consumption. Moreover, when it is added to ground beef, current regulations do not require that it's disclosed on labels. Of course, safe and healthy are two different things. Twinkies are safe for consumption, but certainly not part of a healthy diet.

The truth is, most Americans eat far too much red meat - pink slime or otherwise. In fact, a recent study by Harvard researchers concluded that 9% of male deaths and 7% of female deaths would be prevented if people lowered red meat consumption to 1.5 ounces (or less) per day. That's a sobering statistic.

The moral of the story is to eat less red meat. Period. It's not that we need to exclude red meat entirely, but most of us would be significantly healthier with less red meat in our diets. Back in January, I made the decision to limit my red meat consumption to twice weekly. Instead of including red meat as a staple in my diet, it's more of a special treat - and, when I do eat red meat, I usually opt for healthier, grass-fed varieties.

If you hold the mindset that your body is a temple, then you'd want to fill that temple with those things that honor it. Twinkies, pink slime and the like certainly don't make the cut; make those food choices that nourish, energize and lift up your body.

1. I couldn't agree more. I was appalled when I first read about this "pink slime". I truly believe that food is one of the most precious things for your body, and that something so important should never be reduced to its lowest common denominator. So many diseases which were never such a common threat came about in the 70s/80s when processed food became plentiful.

2. Marcus(2) says:

If we were to remove this pink slime from our food, more children will go hungry every night. Using a little bit of ammonia to kill disease-causing bacteria and viruses is not going to kill you. Along with proper cooking techniques, no one need fear this "mutant" meat. That is why the USDA says it's fine to sell, especially to schools, who can then take the money they saved from not buying organic soybean-fed prime Angus beef, kosherly slaughtered, to provide a better education to the developing youths. Secondly, if we didn't eat this processed meat, meat prices would skyrocket in the eyes of the lower classes, putting more burden on their wallets. The lower classes cannot afford high-quality meat or the ability to be vegetarian or vegan, because the costs of those lifestyles are out of their reach. And no, we cannot adjust society to make these lifestyles cheaper, unless you want to burn more of the rainforest down.
The world is complex, and society makes it more complex.

3. One of the many reasons why I only eat fish :) that and in one small package of tuna there are 17g of protein and about a gram of fat :) you guys can have the other meat, I stick with fish :)

4. A few months ago I watched the film "Food, Inc." It really changed my views on what I'm putting into my body. A month ago I made the decision to have meat (fish excluded) once a week only. I'm feeling much better physically, and find that my overall attitude about daily situations is more positive too. And when I do eat meat or dairy - all organic, free range, grass fed, etc. - I can really taste and feel the difference. Sure, it's WAY more expensive... but you get what you pay for.

5. We are lucky enough to not have this pink slime in Australia, but I agree with not eating it. Anything which is processed, created in a lab or genetically modified should be kept as far away from you as possible. Our body simply doesn't process it properly. Things like this pink slime would possibly contain trans fats, which we know don't break down, but rather build up as cholesterol in the bloodstream. For weight loss, leaner meats and less red meat might be beneficial, but for iron-deficient people, red meat is still a good source of protein and iron, which is a necessary nutrient. Unless you are allergic to red meat or have other beliefs, whether vegetarian, vegan or religious, some red meat is still good.

6. Please check out this video. Jamie Oliver is a cook and very active in teaching about nutrition. Btw, we don't have that pink slime in Switzerland (very high meat prices), and people still live and get something to eat. It's really not necessary to eat meat every day:

7. Travis says: "...9% of male deaths and 7% of female deaths would be prevented if people lowered red meat consumption to 1.5 ounces (or less) per day." - isn't that from the study that also says that those who eat red meat smoke and have lower cholesterol?

8. It's really great that people are sharing this information.
Use Reflection to Validate Assembly References—Before Your Customers Do : Page 2

Building the AssemblyValidator

The first step in validating the currently executing assembly's dependencies is to retrieve a handle to the currently executing assembly, because you need a handle to the topmost assembly for the process in which the validation code is running. The System.Reflection namespace's Assembly class provides the GetEntryAssembly() function that you can use to obtain a handle to this assembly. The function returns an Assembly object, which represents an assembly that has been loaded into memory.

Dim objAssembly_Self As Assembly
objAssembly_Self = Assembly.GetEntryAssembly()

For reference, the Assembly class provides access to the metadata exposed by a particular instance of an assembly. It is important to note that the Assembly class is tied to an instance of an assembly loaded into memory because it is possible, especially with the Xcopy deployment methods promulgated by Microsoft, to have many identical assemblies that differ only by their locations. If you are interested in only the generic information about an assembly, use the AssemblyName class. The AssemblyName class stores enough information about an assembly to enable you to load an instance into memory—more specifically, it provides enough information for the .NET Framework to find and load it for you. One key detail used by .NET is the assembly's FullName property, which holds an assembly's name, version, culture, and public key. This combination of attributes ensures that .NET loads the exact assembly you intend—no two assemblies should ever have an identical FullName.

When you query the assembly metadata for referenced assemblies, the Assembly class returns a list of referenced assemblies as AssemblyName objects. So, after you get a reference to the entry assembly, you can request the list of assemblies it references, returned as an array of AssemblyName objects. You can then iterate through the array, passing each AssemblyName object to a recursive method named ValidateAssembly. The recursive nature of this function ensures that the AssemblyValidator validates all the dependencies that exist in the hierarchical assembly dependency structure.

Dim objDepAssembly As AssemblyName
For Each objDepAssembly In _
    objAssembly_Self.GetReferencedAssemblies()
    ValidateAssembly(objDepAssembly)
Next

Internally, the ValidateAssembly method uses an Assembly_List object defined in the validation tool to keep track of which assemblies have been referenced and to maintain details about each of the referenced assemblies. Other than keeping track of assembly details, the most important role of the Assembly_List object is to avoid repeatedly validating assemblies that have already been validated. Worse than the small amount of additional time required to re-verify assemblies, you could quickly cause a stack overflow if the recursive calls exhausted your application's memory resources. So, before adding another assembly to the Assembly_List object, the AssemblyValidator first checks the list to see if it's already been verified. If so (it exists in the list), the tool stops the recursion and returns from the ValidateAssembly method. The ValidateAssembly method first attempts to load the assembly using the AssemblyName object provided as a parameter.
The Assembly class provides a shared overloaded Load method; the sample application uses the overloaded version that accepts an assembly name object. The CreateAssembly method shown below demonstrates how to use the AssemblyName object to load assemblies. Note the possible common exceptions raised by the Load method.

'attempt to create the assembly using the assembly name object
Private Function CreateAssembly( _
   ByVal p_objAssemblyName As AssemblyName, _
   ByRef p_strError As String) As Assembly
   Dim objAssembly As System.Reflection.Assembly
   '---- try to create the assembly
   Try
      objAssembly = System.Reflection.Assembly.Load( _
         p_objAssemblyName)
      p_strError = ""
   Catch exSystem_BadImageFormatException As _
      System.BadImageFormatException
      p_strError = "File is not a .NET assembly"
      objAssembly = Nothing
   Catch exSystem_IO_FileNotFoundException As _
      System.IO.FileNotFoundException
      p_strError = "Could not load assembly -- " & _
         "file not found"
      objAssembly = Nothing
   Catch ex As Exception
      p_strError = "An error occurred loading the assembly"
      objAssembly = Nothing
   End Try
   Return objAssembly
End Function

If the assembly cannot be loaded, then recursion stops at this level, and the AssemblyValidator logs an error in the Assembly_List indicating why the assembly could not be loaded. When the assembly loads successfully, the AssemblyValidator adds the assembly details to the Assembly_List object, and recursively verifies each of this assembly's referenced assemblies. This process continues until all the dependencies have been verified. Listing 1 shows the complete ValidateAssembly method. At the end of the process, the Assembly_List class provides a FormatList method used to produce a string representation of the list of referenced assemblies. By default, the AssemblyValidator displays only assemblies that could not be loaded, because it's far too difficult to scroll through the lists of dependencies manually, looking for problems—even simple "Hello World" WinForms projects produce long lists of dependencies. As an exercise, I recommend that you modify the sample code to instruct the Assembly_List to display all dependencies (without duplicates), including those assemblies that loaded successfully, and observe the list that is produced. To make this modification, open the Assembly_Validator class in the AssemblyDependencyValidator project, and modify the last line of the ValidateEntryAssembly method, changing the parameter to the FormatList method to be False, as shown below:

m_strResults = strValidatorResults & vbCrLf & _
   m_objBindingInfo.FormatList(False)
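Listing 1 is not reproduced on this page, but based on the behavior described above, the recursive ValidateAssembly method might look roughly like the following sketch. The Assembly_List member used here (m_objAssemblyList) and its Contains and Add signatures are illustrative assumptions, not the article's actual code; only CreateAssembly and GetReferencedAssemblies are taken from the text.

'Sketch of the recursive validation routine described above.
'm_objAssemblyList and its Contains/Add members are hypothetical names.
Private Sub ValidateAssembly(ByVal p_objAssemblyName As AssemblyName)
   'Stop the recursion if this assembly has already been validated
   If m_objAssemblyList.Contains(p_objAssemblyName.FullName) Then
      Exit Sub
   End If
   'Attempt to load the assembly, capturing any load error
   Dim strError As String = ""
   Dim objAssembly As Assembly = _
      CreateAssembly(p_objAssemblyName, strError)
   If objAssembly Is Nothing Then
      'Load failed: log the reason and stop recursing at this level
      m_objAssemblyList.Add(p_objAssemblyName, strError)
      Exit Sub
   End If
   'Load succeeded: record the details, then validate each of this
   'assembly's own references in turn
   m_objAssemblyList.Add(p_objAssemblyName, "")
   Dim objDependency As AssemblyName
   For Each objDependency In objAssembly.GetReferencedAssemblies()
      ValidateAssembly(objDependency)
   Next
End Sub

Note how the Contains check at the top serves double duty: it avoids re-verifying shared dependencies and, because a load failure is also recorded in the list, it guarantees the recursion terminates even when assemblies reference each other.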
Planar lighting technology outshines OLED
December 10, 2012 // By Christoph Hammerschmidt
Global LighZ, a light technology company from Breitungen (Germany), has demonstrated the prototype of a new area light technology based on plasma technology. The technology could well compete with OLEDs, the company says. It generates glare-free light without shadows and aims at applications in movie, TV and photo studios.
According to Global LighZ, the e3 technology (for energy-efficient excitation) offers significantly higher luminous efficacy than comparable OLED luminaires. The technology, originally developed for display backlight applications, is based on the company's research in the area of plasma physics and allows the development of custom-made solutions for applications in the investment goods segment that cannot be realized with conventional technologies, including OLED.
e3 plasma luminaires can generate light throughout the entire color temperature range from 2,000K to 10,000K. Hence it also can be used for medical applications, for instance for the therapy of depressive moods by means of bluish light. Since it offers a high CRI, it also can be used in industrial inspection applications, for instance in coating lines or biological labs.
"Based on our activities in display applications, we have very broad experience in distributing light in areas," explained Global LighZ CEO Klaus Wammes, who also invented the e3 technology. "The e3 technology concept shows that huge achievements can be made beyond display technology." He added that the current point in time is favourable to disclose this technology since "many OLED developments fail in practice due to their poor light yield", as Wammes puts it. "With e3 powerflat we prove that we can implement customer-specific lighting solutions based on plasma technology which will remain dreams of the future for OLEDs for a long time".
For more information visit
Published in Print: August 7, 2002, as Limitations of the Market Model
Limitations of the Market Model
What's behind the travails of Edison Schools Inc.?
A decade ago, during the corporate boom of the 1990s, Christopher Whittle had an idea: Why not start a system of publicly financed for-profit schools? It was the ideal environment for such a venture. The market was hot, and so was education. The public sector was in disrepute. The time seemed ripe to replace cumbersome, overly politicized public school bureaucracies with smart, efficient corporate management and show how the market could improve the outcomes and the efficiency of public schools.
So Mr. Whittle founded the Edison Project (later renamed Edison Schools Inc.), raised tens of millions of dollars in capital, hired a qualified, well-connected team, and set it loose for three years to design an exemplary school. When the first four Edison schools opened in the fall of 1995, they implemented "the Edison design," the well-researched educational plan still used in Edison schools today. In many ways, Edison did the school management business right.
Moreover, Edison has tackled educational challenges that others either refused to accept or failed to remedy. In essence, the company has undertaken to convert some of the country's most troubled schools into educational successes. That low-performing schools became Edison's market should surprise no one. The crisis of our country's schools isn't a crisis of schools that educate middle-class or affluent children; it's a crisis of poor children's schools. And while a few school management companies have intentionally positioned their schools to attract specific segments of the middle-class market, Edison's concentration on managing existing public schools under contract to school districts, its reliance on the Success For All model, and its free computers for students' homes seem designed to appeal to struggling schools for disadvantaged children.
Edison's growth has been exceptional. Starting with four schools in 1995-96, the company grew 20-fold over its first five years, running 79 schools in 1999-2000. In spring 2000, Edison could boast that it had never lost a contract. And while the idea of for-profit school management, and Edison Schools as its vanguard, always has been controversial, the company's media coverage remained overwhelmingly favorable for many years.
From the beginning, Edison claimed that students in its schools were posting significant achievement gains. True, it didn't take long for researchers to begin collecting data and questioning the company's claims; and Edison's "Annual Report on School Performance"—with its lack of backup data and its facile school rating system—reads more like a marketing piece than a serious analysis of student achievement. Still, even the company's harshest critics allowed that the performance of Edison-run schools was "mixed," or on a par with other public schools. Given that the company was attempting to turn around some very troubled schools, this performance was nothing to be ashamed of, even if it didn't live up to Edison's claims.
Edison's business strategy of taking over regular public schools under contract to school districts (as well as running charter schools) practically ensured that staff members in many of its schools would be covered by collective bargaining agreements. From its first year of operations, many Edison teachers have been represented by the National Education Association and the American Federation of Teachers, and the company made an early decision to try to work with the two unions. Some NEA and AFT affiliates actively opposed Edison; others just as actively cooperated to bring the company to their districts and to make the Edison-run schools successful once they were there. Both of us visited a number of Edison schools during the company's first five years; we found those schools to be safe and educationally productive. So far, so good. Market advocates should be cheering. But anyone who follows either education or the stock market knows that Edison Schools recently underwent a dramatic reversal of fortunes. The company's stock has plummeted from over $36 a share in February 2001 to barely $1 at the end of this May. The persuasive Chris Whittle, widely acknowledged as a master at raising capital, was hard-pressed to find the $40 million the company needs to open the schools it's scheduled to operate this fall. The same media that for years spoke glowingly of Edison's accomplishments now tell a tale of corporate woe. ("Edison Reels Amid Flurry of Bad News," May 22, 2002.) Certainly Edison is, at least in part, the author of its own misfortune. Its effort last year to take over five New York City schools was an unmitigated disaster, characterized by reliance on high-level political deals and a failure to organize in the schools' communities. Edison repeated its New York mistakes in Pennsylvania, managing to alienate almost the entire city of Philadelphia while relying on its ties with the governor's office. By overpromising, the company managed to turn the award of 20 Philadelphia schools—the largest ever to Edison or anyone else—into a defeat. But Edison's downward spiral has been caused by more than political ineptitude and excessively optimistic business predictions. The company could probably recover from these problems. It's the troubles in Edison schools that are the real cause for concern. Over the past two years, Edison has lost a total of 27 schools, including 16 at the end of the 2001-02 school year. (By contrast, a total of 26 schools have renewed their contracts with Edison.) In each case (with the exception of three schools from which Edison initiated the pullout for primarily financial reasons), the reasons school districts and charter school boards have given for canceling or not renewing Edison contracts have been a combination of low test scores, declining student enrollment, high teacher turnover, and Edison's cost. Moreover, at least 15 additional schools that will remain with Edison next year are on their states' low-performing lists, and six districts that plan to remain with Edison, at least in the short term, have expressed dissatisfaction with the company. Far from the educational home run Edison has been claiming, the company risks an educational strikeout. Why, when it had so much going for it, does Edison today find its very survival in question? Did the leading school management company self-destruct— or was it done in by contradictions inherent in the concept of operating public education as a business? 
The answer, of course, is both. Some of Edison's problems are self-inflicted. But the root of its troubles lies in trying to operate public schools as a successful business. The rules that govern the market—that require companies to establish brand identity, attract capital, and become profitable—contradict essential requirements for creating and maintaining excellent public schools.
Establishing 'Brand' and Growing Rapidly. Successful consumer companies establish "brand," allowing what they sell to be readily identified in the marketplace. But replicating a successful school at multiple sites is not like replicating a successful restaurant or bookstore. Schools' raw materials (students) are highly individual and unpredictable, the product of forces external to the school. The central control required to create schools that look and feel and educate like all a company's other schools stands in direct contradiction to the need for every school to respond to its students and community, its "customers." School design and curricula are only the starting points of the complex and nuanced task of creating a successful school. There's the matter of finding the right leadership and faculty, and nurturing their understanding of teaching and learning and their relationships with each other, their students, their students' families, and communities. And once a thriving school climate is established, it requires cultivation and support.
The requirement for rapid growth further complicates the enormous challenge of establishing brand and controlling quality while responding to local needs. A criticism of Edison is that it has grown too fast. While this is certainly true, rapid growth wasn't simply an Edison whim. It was a demand of the market. In the absence of profitability (more on this later), Edison needed enormous growth to position itself as the market leader and produce steadily and rapidly rising revenues that would bolster its market value. (In all likelihood, this is behind the company's recent problems with the Securities and Exchange Commission. Earlier this year, the SEC found that Edison consistently reported as revenue funds, amounting to over 40 percent of reported revenues, that had never even passed through its banks, but were used by school districts to pay salaries, transportation costs, and other expenses for Edison schools.)
So Edison has been attempting to assert corporate control, maintain quality, establish brand, and respond to local conditions—all while adding over 20 new schools a year. It is a Herculean feat, and one at which Edison is failing. But if investors are going to continue putting capital into the company and keep its stock price high, they want to see significant revenue growth.
The Demand for Profitability and the Illusion of Scale. The bottom-line demand of the market is profitability. But Edison has never been profitable. It has accumulated $261 million in losses since its founding and recently took on another $40 million in debt so it can continue operations in the 2002-03 school year. Since the company's earliest days, Edison executives have said that when it gets big enough their company will become profitable.
While the number of schools said to be needed for profitability has increased over the years, the basic notion has remained: When we have enough schools over which to spread our overhead costs and negotiate discounted prices from suppliers, we'll show a profit. The problem with this scenario is that economies of scale don't apply to the business of schooling. Economies of scale work in industries with uniform products. But as noted above, schooling is not one-design-fits-all, and individual school faculties and communities want input into their schools. In addition, while one can assume that Edison with its 130 schools can bargain a better price with suppliers than a school district with 10 schools, materials and supplies are not what make schooling expensive. Schooling is highly labor-intensive, with salaries constituting 80 percent or more of school budgets. Short of hiring cheap labor (underqualified teachers) or replacing teachers with computers, neither of which is recommended as a way to create successful schools, there's simply no way to dramatically reduce labor costs.
In the end, the market metaphor does not apply to public education. What is rational for a society—investing in education—may well not constitute a viable business. The troubles of Edison, a company that began its life in a thriving economic environment with a generous supply of capital and a solid educational blueprint, attest to the difficulties inherent in creating a system of good schools that serve the diverse needs of our nation's children. Public education is a social commitment that transcends individual interest and corporate gain. It is highly probable that schools designed to meet this responsibility are inherently unprofitable. This does not mean the commitment should be abandoned. It means that, as a human service, education is grounded in a belief in human dignity that transcends the values and behaviors associated with markets. It means public education cannot be squeezed to fit the market model and still meet the needs of a just society.
Heidi Steffens is a senior policy analyst at the National Education Association in Washington. Peter W. Cookson Jr. is the president of TC Innovations and a professor at Teachers College, Columbia University, in New York City.
Vol. 21, Issue 43, Pages 48, 51
How to find a password for wifi
Written by theon weber
(To obtain a Wi-Fi network's password, you'll need access to the router.)
Many Wi-Fi networks are encrypted, meaning that they cannot be used without the correct password. If you have an encrypted Wi-Fi network at home, but have forgotten (or never knew) the password, it can be frustrating trying to connect new devices to the network or make other changes. To find the password for an encrypted Wi-Fi network, you will need access to the wireless router providing the signal. This means you will most likely need to use a computer that is plugged directly into the router via a physical cable, although a computer that is already connected to the Wi-Fi network will also work.
1. Type the router's IP address in your Internet browser's address bar. This varies depending on the make of your router, but it is often 192.168.1.1 or 192.168.0.1. If neither of these works, check the user's guide that came with your router, or try an Internet search for the brand and model name -- which should be printed on the router itself -- and the words "setup," "access" or "IP."
2. Press "Enter" to access the router's set-up page. This page might be password-protected, most likely with a different password from the one that's protecting the Wi-Fi signal. If you don't know the router password, check your user's guide or search online for instructions on resetting the router. There will likely be a physical reset button that restores the router's original factory settings, including its default password (often "admin" or even nothing at all). Be aware that this will also remove any changes you or others have made to the router's default settings, including clearing the Wi-Fi security settings.
3. Access the router's wireless settings from the set-up page. This might be on a separate page, accessed through a "Wireless Settings" link, or it might simply be part of the main page. The wireless settings will include an option to encrypt the signal using WPA, WEP or some other encryption protocol. Near this will be a text box containing the Wi-Fi password. Write down the password from this box, or simply change the password to whatever you'd like. If you reset the router in Step 2, you will have to enable encryption and set a new Wi-Fi password.
4. Click the "Save" button to save any changes you've made. The Wi-Fi signal will vanish briefly as the router restarts, and once it reappears, you should be able to use your new Wi-Fi password. If you didn't change the password but merely wrote it down, you will be able to use it to gain access to the signal right away.
Remoteness (See also Isolation.)
1. Antarctica continent surrounding South Pole. [Geography: NCE, 113–115]
2. Dan to Beersheba from one outermost extreme to another. [O.T.: Judges 20:1]
3. Darkest Africa in European and American imaginations, a faraway land of no return. [Western Folklore: Misc.]
4. end of the rainbow the unreachable end of the earth. [Western Folklore: Misc.]
5. Everest, Mt. Nepalese peak; highest elevation in world (29,028 ft.). [Geography: NCE, 907]
6. Great Divide great ridge of Rocky Mountains; once thought of as epitome of faraway place. [Am. Folklore: Misc.]
7. John O'Groat's House traditionally thought of as the northernmost, remote point of Britain. [Geography: Misc.]
8. Land's End the southwestern tip of Britain. [Geography: Misc.]
9. moon earth's satellite; unreachable until 1969. [Astronomy: NCE, 1824]
10. North and South Poles figurative ends of the earth. [Geography: Misc.]
11. Outer Mongolia desert wasteland between Russia and China; figuratively and literally remote. [Geography: Misc.]
12. Pago Pago capital of American Samoa in South Pacific; thought of as a remote spot. [Geography: Misc.]
13. Pillars of Hercules promontories at the sides of Straits of Gibraltar; once the limit of man's travel. [Gk. Myth.: Zimmerman, 110]
14. Siberia frozen land in northeastern U.S.S.R.; place of banishment and exile. [Russ. Hist.: NCE, 2510]
15. Tierra del Fuego archipelago off the extreme southern tip of South America. [Geography: Misc.]
16. Timbuktu figuratively, the end of the earth. [Am. Usage: NCE, 2749]
17. Ultima Thule to Romans, extremity of the world, identified with Iceland. [Rom. Legend: LLEI, I: 318]
18. Yukon northwestern Canadian territory touching on the Arctic Ocean. [Geography: Misc.]
Repentance (See PENITENCE.)
Reproof (See CRITICISM.)
Ercoupe - $4.95
The ERCO Ercoupe is a low wing monoplane first manufactured by the Engineering and Research Corporation (ERCO) shortly before World War II; production continued after WWII by several other manufacturers until 1967. It was designed to be the safest fixed-wing aircraft that aerospace engineering could provide at the time, and the type still enjoys a very faithful following today.
(Ercoupe downloadable cardmodel from Fiddlers Green: classic Ercoupe; Ercoupe in flight)
Arguably one of the most overlooked private planes in history. It first flew in 1938 and was built again after WWII. The brilliant engineering that went into this plane made it safe and easy to fly, but still it was a marketing failure.
What people say...
Say, I believe the original "flying milk stool" wasn't a Piper. Some folks may have referred to the Tri-Pacer as such due to the tricycle landing gear, but it wasn't the first plane to bear that nickname. I'm trying to remember the name, but it was a low wing 2 place all metal plane with a bubble cockpit and rudders out on the end of the horizontal stab, and the ailerons were linked to the rudder. The combination of the unusual rudder arrangement, the linked controls, and a tricycle gear gave it the nickname the flying milk stool.
(Photo right by Wayne White. This model is best when printed on silver inkjet paper.)
I finished a 'do' of your Ercoupe, from way back in the Mudget/Fynn days--in Red River Silver, of course. Don't know if you are aware, but there is a serious discrepancy between the upper and lower wing parts. At 1/40 the lower wing is about 3/16 shorter than the upper if you line up the landing lights. You may want to check this out--or not, in your present turmoil. There is, of course, the matter of the pretty crude nose, but if you ever redo the whole plane I'm sure this will be taken care of. John
Erco Ercoupe
The Ercoupe (E and R coming from the company's name: Engineering and Research Corporation) was one of the most unusual--and controversial--light airplanes ever built. It was designed by Fred E. Weick, one of aviation's foremost engineers, who decided to solve with one bold stroke the biggest single cause of aviation fatalities: the stall, followed by spin, at altitudes too low to permit recovery. The Ercoupe was designed to be stall proof and spin-proof. (The same idea was executed, in a slightly different form, by Professor Otto Koppen, of MIT. His design, called the Skyfarer, was also stall and spin-proof, but it never reached volume production.)
The Ercoupe could not be ignored. The wing was placed low, there were two vertical fins on a horizontal tail boom, and the third landing wheel was under the nose. This design flew in the face of all things known about proper light airplanes, which had high wings, one fin (and rudder), and a tail wheel. The Ercoupe really is a nice little plane, though some pilots (mostly those who've never flown it) don't think it's too respectable. Most owners love them.
Both the Ercoupe and Skyfarer were built in small quantities before World War II. After the war the Ercoupe came on strong, and was promoted as no airplane had ever been promoted before. It was displayed at state and county fairs, demonstrated at air shows, flown from shopping center parking lots, and even dismantled and reassembled inside department stores.
The results were satisfying, to say the least, and Engineering and Research Corporation had to expand their production facilities several times before they could catch up with the demand.
(Wayne White sends in this very nice Ercoupe model. Note the extra work he did on the landing gear. How to mystify your modeln' pals....)
Here are three Ercoupe photos you might be able to use. Printed on Wausau 90 LB Exact Index paper resulting in an eight inch wingspan. Though there is no print showing, the back surface was given a protective coating of Krylon Acrylic Clear prior to cutting out the parts. This helped keep the surface clean. Bob Penikas
(Ercoupe all white cardmodel photos, one embossed)
(Ercoupe with clear cabin submitted by Bob Martin)
All of this took place with absolute disregard of what aviation's old timers were saying about the airplane. Because of its tricycle gear, they called it "the flying milking stool." Because its ailerons and rudder were interconnected--there was only one pedal, for brakes, on the floor--the old timers spoke darkly about the problems of landing in a cross wind. (In fact, there were almost none: the landing gear was sturdy, and would accept a very high level of cross wind and a correspondingly low level of pilot skill.) The Ercoupe was noticeably faster than its contemporaries and quite comfortable and easy to fly. One nice touch was that the cockpit canopy could be opened in flight (at some speed penalty), producing much the same sensation as driving a convertible with the top down. It was a nice looking, all-aluminum machine, once one got used to its unconventional design.
It was precisely true that it would neither stall nor spin. Even so, it was soon found to have a serious fault. It would get into a high rate of descent (or "sink") which could only be stopped by full forward yoke and loss of a considerable amount of height. The usual result was a hard landing and expensive airframe damage. Injuries to the occupants rarely required medication, but the experience was unsettling enough to drive some new pilots out of aviation.
(Unmarked Ercoupe flying around)
Fred Weick's goal of eliminating the stall-spin accident sequence was achieved, but the airplane was badly oversold. The high sink rate was never mentioned. In fact many salesmen were themselves surprised by it. The major thrust of the sales effort was "anyone can fly," and cases without end were cited in which pilots who had never had a previous lesson soloed in two hours, or three, or even one. When the postwar airplane sales bubble burst, Engineering and Research Corporation was not alone in disaster, but unlike Beech, Cessna, and Piper, it did not survive. The Ercoupe itself refused to die and went through a series of revivals, with each new group of owners as starry-eyed as the last, certain that they could escape the fate which had overtaken their predecessors. Unfortunately, none of the attempts succeeded, not even the most recent revival by Mooney Aircraft, who bought all rights, tooling, and parts from Alon Aircraft, which had been building a few at a time in Kansas. This time the resurrectors took the approach that the only thing wrong with the Ercoupe was its stall-proof, spin-proof philosophy. The tail was redesigned, using one fin and rudder. Rudder pedals were made standard.
(A previous field modification had permitted adding rudder controls to the original.) All of the engineering tricks which had made the Ercoupe stall proof and spin-proof were undone. The Cadet, as the reincarnation was called, no longer looked odd: by now, low wings and tricycle gears had become commonplace, and that double fin was gone. The Cadet flew just like other airplanes, given small differences in handling. It would stall, and it would spin. The attempt failed: the Cadet didn't even show the small spark of life visible in the previous tries.
The unfortunate part of all this is that the Ercoupe is really quite a nice small airplane. The freedom from stalls and spins doesn't hurt, and anybody who wants to can have rudder pedals installed. The high sink rate can be avoided, as it is in all other airplanes, by proper pilot training and technique. The one remaining Ercoupe problem is social: it is not thought to be a respectable flying machine. Most of those who have this attitude have never flown one and have no idea of its real assets and liabilities, but that does not lessen their scorn. The Ercoupe is worth looking at, even so.
The Great Silver Hope
Masquerading under the Ercoupe, Alon, and Mooney labels, the Ercoupe design has been around much longer than most people realize. The Ercoupe was designed to a lofty concept and high level of sophistication... and did exactly what it was designed to do. Its roots go back to the early 30's, when it was popular to believe that someday there would be a mass market for "Everyman's Airplane". Further, it was believed that the great mass market awaited only the appearance of a cheap, easy-to-fly and safe airplane.
In 1936 Fred Weick was a young engineer hired by the just-formed ERCO (Engineering Research Company), and he is generally regarded as the creator of the legendary 1937 Ercoupe. All through the initial design and testing, wind tunnels were not used at all. The airplane was flown, modifications made to correct deficiencies, then flown again and again until it was certified on April 20, 1938. A placard, which was the first for any airplane, was allowed to be placed proudly on the instrument panel reading: "This aircraft characteristically incapable of spinning"
Things looked rosy for the Ercoupe, but then the Second World War came along and production was halted for lack of aluminum. Sadly, just 112 Ercoupes came off the line. After the war, it became evident that there simply wasn't an "Everyman's Airplane Market" and possibly might never be. The Ercoupe is, arguably, the best tested, best designed, and best researched light airplane ever produced. Even today it has few peers, and its only failure was that it was produced for a non-existent market. Look for one at your local airport.
ERCO is "Engineering Research Corporation", whose first product was the Ercoupe. This was the first tricycle aircraft and was designed by Fred Weick. Fred is famous for many things, including the "takeoff/landing over a 50-foot obstacle" specification. He went on to design the Piper PA-28 Cherokee and others. The first JATO (Jet Assisted Take Off) was an Ercoupe, which led to the foundation of the Jet Propulsion Laboratory. The Ercoupe, with its distinctive twin-tail design, was originally provided with "coordinated controls", i.e. the rudder was connected to the yoke and yaw correction was automatic - NO RUDDER PEDALS. The steerable nose wheel was connected directly to the yoke - you taxied exactly like you drive your car.
This, and limited elevator travel, contributed to the result that the 'Coupe is "characteristically incapable of spinning"! You can try, but the plane will fly out of an incipient spin. An entirely new category of pilot license was created for the thousands of new pilots who had never seen a rudder pedal. This plane was designed pre-WW2 and didn't get into real production till 1945, when thousands were sold through such esteemed aviation outlets as the Men's Department at Macy's!!
(Ercoupe flying nicely; Ercoupe Navy version)
"Rudder Kits" were available to convert the plane from 2-control ("coordinated") to 3-control ("conventional"). Landing a 2-control 'Coupe is an "interesting" experience!! You crab it into the wind and land that way!! The nose wheel will caster and straighten it out ON THE RUNWAY. Another historical fact: all original Boeing 707 pilots were taught to land in the 'Coupe - the 707 had a similar problem - the low hanging engines meant that you couldn't drop a wing into a crosswind - you had to land them crabbed!! (The Ercoupe's gear does not swivel, a common misconception, but the geometry causes the airplane to turn in the direction of forward motion. If you fight this tendency you can ground loop.)
Mooney built the last 59 with a "Mooney tail" instead of the distinctive twin tail of all previous production. This, and other changes, created an airplane which could stall and spin with the best but also lost a lot of performance. It was their intention that the M10 Cadet be their "trainer". "Alon" was an interesting bit of history: While Forney was building the 'Coupe, one company which came mighty close to buying the type certificate was Beech!! John Allen (Beech plant manager) and Lee Higdon (Beech accounting manager) felt strongly that Beech should take it on, but Olive Beech got cold feet and said no. So they quit and set up the Allen-Higdon (ALON) company to do it. They were so impressed with the plane that they bought the company!! Alon made a number of speed/power changes to the airplane and reverted to providing rudder pedals as standard, with the 2-control by special order only. They changed from vertically sliding window entry to a sliding canopy.
Some people dump on 'Coupes. It's unfair and ignorant criticism, but it keeps the prices down and the secret in the family!! If you ever have the opportunity to fly a 'Coupe - try it!! The Ercoupe has climb and cruise performance very similar to the performance of a Cessna 150 - but it drops like a rock when the power goes off. The best thing about a 'Coupe is you can fly it with the sliding windows down.
Construction Notes!
(2 View Ercoupe drawing)
Looking at the front view (above), notice that the Ercoupe has a very distinctive forward fuselage shape that narrows toward the bottom. Curiously, the reason for this shape was to accommodate the ERCO inverted inline engine that was custom built for the Ercoupe. The Continental A-65 was ultimately used and the fuselage remained unchanged. Refer to the typical cross-section. Yes, the nose section IS larger to permit engine cooling air to escape. Keep dihedral in mind as you glue the wing center section in place. It's hard to add it as an afterthought later. I mean bending the wings up is really dumb. Carefully curve and bend the wing fillets out BEFORE gluing the wings to the fuselage. A pencil is a good diameter over which to shape the fillets.
Rocket-Assist Takeoff
On Aug. 12, 1941, the first Air Corps rocket-assist takeoff was made by a Wright Field test pilot, Capt.
Homer Boushey, using a small civilian-type Ercoupe airplane. Subsequent refinements of this technique were made for assisting heavily-loaded airplanes in taking off from limited space. This technique is still used whenever needed.
(Takeoff of Ercoupe airplane in much less than normal distance due to firing of rockets attached under its wing. For comparison, the light plane in the foreground, although equipped with an engine of approximately the same horsepower as the Ercoupe, had just lifted off the ground at the instant the photo was taken.)
(Photos: 2 views of the Ercoupe; cockpit of the Erco Ercoupe; the Erco Ercoupe factory during its postwar heyday; Ercoupe cutaway; 3 view of the Erco Ercoupe)
Specifications for the Ercoupe
Crew: 1
Capacity: 1 passenger
Length: 20 ft 9 in
Wingspan: 30 ft
Height: 5 ft 11 in
Wing area: 142.6 ft²
Empty weight: 749 lb
Useful load: 511 lb
Max takeoff weight: 1,260 lb
Powerplant: 1× flat-4 engine, 75 hp at 2,300 rpm
Never exceed speed: 144 mph
Maximum speed: 110 mph
Cruise speed: 95 mph
Stall speed: 48 mph
Range: 300 mi
Service ceiling: 13,000 ft
Rate of climb: 550 ft/min
Wing loading: 8.83 lb/ft²
Power/mass: 0.13 hp/lb
Ercoupe Callout
A: The Ercoupe twin tail was chosen for its 'anti-spin' characteristics
B: The strong, all-aluminum fuselage was easy and inexpensive to build.
C: The full, slide-back Ercoupe canopy afforded perfect visibility over the low positioned wings
D: Very rugged landing gear made flying out of small rough fields possible.
Ercoupe Crash
On April 11, 2009, at 1450 central daylight time, an Engineering and Research 415C (Ercoupe), N87384, was destroyed by a post crash fire after it impacted terrain about one mile north of the Woodlake Airport (IS65), located in Sandwich, Illinois. The sport pilot and passenger received fatal injuries. Visual meteorological conditions prevailed at the time of the accident, and no flight plan was filed. Aircoupe (sic)
(Home furnishings retail is an important business. Source: Restoration Hardware.)
The housing industry plays a major role in driving the U.S. economy. Homebuilders provide new homes for homeowners, while building materials companies prepare the essential components of those new homes before they're built. An entire subsector of the finance industry deals with loans for home construction and mortgages for home purchases. And after homebuyers close on their purchases and move in, they typically need home furnishings, such as furniture, electronics, appliances, household gadgets, and other accessories in order to complete their house. A host of home furnishings companies seek to meet the demand for the goods that help you make your house a home. Let's take a closer look at the home furnishings industry and its opportunities for investors.
What is the home furnishings industry?
The home furnishings industry most typically refers to companies that specialize in furniture and decorative accessories. From a broader perspective, department stores often have a wide range of furniture to complement their offerings of appliances and electronics, and several big-box electronics retailers have added appliances to cater to new homebuyers. But even though many homebuyers see television home-theater systems, refrigerators, and washer/dryer sets as essential purchases, those areas are treated as separate industry groups. That leaves home furnishings companies to focus on bedding, dining room tables and chairs, living room sets, and accessories ranging from lamps to gourmet coffee makers as their staples.
Different companies focus on various segments of the home furnishings industry. Companies like Bed Bath & Beyond and Williams-Sonoma offer one-stop shopping for a large selection of household items, although their furniture selections are often somewhat limited. By contrast, specialists like Ethan Allen Interiors focus on producing furniture sets throughout the home. On the bedding side, Tempur Sealy and Select Comfort make mattresses and related bedroom furniture sets, along with pillows and other accessories.
(Image source: Tempur Sealy.)
How big is the home furnishings industry?
Home furnishings have a larger impact on the U.S. economy than you might expect. Nearly 450,000 employees in the U.S. work in the home furnishings industry, according to the latest figures from the Bureau of Labor Statistics, and almost half of them hold jobs as retail salespeople. In addition, the home furnishings industry employs managers to oversee salespeople as well as workers to stock shelves and transport goods from manufacturers to retail stores.
As you'd expect, the size of the home furnishings industry has risen and fallen with the prospects of the broader housing market. In the mid-2000s, furniture and home furnishings store revenue reached peak levels above $110 billion, according to figures from the U.S. Census Bureau. But the end of the housing boom led to a dramatic contraction in overall industry sales, and home furnishings revenue only climbed back above the $100 billion mark in 2013.
(Chart: Furniture and home furnishings store sales in the United States from 1992 to 2013, in billions of U.S. dollars. Source: Statista.)
How does the home furnishings industry work?
Like most retail businesses, the home furnishings industry involves manufacturers that make the products consumers want, as well as intermediaries to get those products into the hands of retail stores, and retailers that make the final sales to customers. Most of the major companies in the home furnishings sector are retail establishments, so they rely on homeowners and other consumer buyers to drive sales. Furniture manufacturers, on the other hand, have to cater to their direct retail customers in order to fulfill their function as suppliers, while also keeping in mind that they ultimately serve the consumers who buy their products.
Two things that distinguish parts of the home furnishings industry from other retail businesses, though, are the high ticket prices of furniture and other items as well as their large physical size. The logistical difficulties involved with those items and the financial challenge consumers face when considering purchases make the home furnishings industry a particularly competitive environment in many respects.
(Image source: Williams-Sonoma.)
What drives the home furnishings industry?
The most important driver of home furnishings sales is the housing market. When people are moving in and out of new homes, they often take the opportunity to buy new home furnishings or upgrade their existing furniture and accessories, driving sales higher. During times of economic hardship, however, more people stay put in their existing homes, and they don't have the disposable income to finance major purchases of furniture and other high-ticket items.
The rise of Internet retail has also had a major impact on home furnishings. For smaller household goods like kitchen appliances, online retailers have posed a substantial competitive threat, undercutting home furnishings specialists and forcing them to establish their own e-commerce presence in order to counter attempts to take away their market share. For furniture and other bulky items, physical stores have more of an advantage against online retailers, but innovative retailers continue to look for ways to make even sales of larger items more efficient and logistically feasible. That could threaten the high margins some manufacturers currently enjoy on those items.
The home furnishings industry is inextricably linked to the level of housing activity in the market. Investors need to consider the current state of the housing cycle before investing in the sector, especially after periods of strong performance in housing, or else they risk taking a hit in the next cyclical downturn for the industry.
Proprietary Software Is Often Malware Proprietary software, also called nonfree software, means software that doesn't respect users' freedom and community. A proprietary program puts its developer or owner in a position of power over its users. This power is in itself an injustice. Power corrupts; the proprietary program's developer is tempted to design the program to mistreat its users. (Software whose functioning mistreats the user is called malware.) Of course, the developer usually does not do this out of malice, but rather to profit more at the users' expense. That does not make it any less nasty or more legitimate. Yielding to that temptation has become ever more frequent; nowadays it is standard practice. Modern proprietary software is typically a way to be had.
Conceptual Physics (12th Edition)
Published by Addison-Wesley
ISBN 10: 0321909100
ISBN 13: 978-0-32190-910-7
Chapter 29 - Think and Explain: 29
The sun is much farther away from us, compared to the lamp.
Work Step by Step
The sun puts out spherical wavefronts just as the nearby lamp does (see Figure 29.3), but we are so far away that by the time it gets to us, the expanding spherical wave can be considered to be a plane wave. In an analogous way, a sufficiently small area of Earth's spherical surface can be considered to be flat. The lamp is close enough that the curvature of its emitted wavefronts cannot be ignored.
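As a supporting estimate (not part of the textbook's printed answer), you can quantify how flat each wavefront is. The deviation of a spherical wavefront of radius $R$ from a flat plane, measured across a window of width $a$, is its sagitta $s$:
\[
s = R - \sqrt{R^{2} - \left(\tfrac{a}{2}\right)^{2}} \approx \frac{a^{2}}{8R}
\]
\[
\text{Sun: } R \approx 1.5\times10^{11}\,\mathrm{m},\ a = 1\,\mathrm{m} \ \Rightarrow\ s \approx 8\times10^{-13}\,\mathrm{m};
\qquad
\text{lamp: } R \approx 1\,\mathrm{m} \ \Rightarrow\ s \approx 0.13\,\mathrm{m}.
\]
Across a one-meter window, the sun's wavefront deviates from a plane by far less than a wavelength of visible light, while the lamp's deviates by several centimeters. That is the precise sense in which the sun's wave is effectively plane and the lamp's curvature cannot be ignored.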
An Introduction to Mythology
but they explain, or attempt to explain, primitive scientific notions as well.[18] The desire to know the 'reason why' early creates a thirst for knowledge, an intellectual appetite. "When the attention of a man in the myth-making stage of intellect is drawn to any phenomenon or custom which has to him no obvious reason, he invents and tells a story to account for it."[19] The character of most primitive myths amply justifies this statement. They are mostly explanations of intellectual difficulties, answers to such questions as, What is the origin of or reason for this or that phenomenon or custom? How came the world and man to be formed as they are? In what manner were the heavenly bodies so placed and directed in their courses? Why is the lily white, the robin's breast splashed with red? How came into force this sacrificial custom, this especial ritualistic attitude, the detail of this rite? The early replies to these questions partake not only of the nature of myth, but of science—primitive science, but science nevertheless—for one of the first functions of science is to enlighten man concerning the nature of the objects and forces by which he finds himself surrounded, and their causes and effects. These replies are none the less scientific because they take the shape of stories. Their very existence proves that the above questions, to clear up which they were invented, were asked. They cannot be accounted for without the previous existence of these questions. Mythology is the savage's science, his manner of explaining the universe in which he lives and moves. Says Lang: "They frame their stories generally in harmony with their general theory of things, by what may be called 'savage metaphysics.'" Of course they did not think on the lines of a well-informed modern scholar. Müller remarks in an illuminating passage: "Early man not only did not think as we think, but did not think as we suppose he ought to have thought."
One of the chief differences between the outlook of the primitive savage and that of civilized man is the great extension in the mind of the former of the theory of personality, an outlook we have already called 'animism.' Everything possesses a 'soul,' or at any rate will-power, in the judgment of the savage. But not only are sun, sky, river, lightning, beast, tree, persons among primitive or backward peoples; they are savage persons. Research and travel combine to prove that earliest man and the lowest savages cannot be found without myths, which, as we have seen, are both religion and science. The first recognized stage in man's mental experience is animism, so that the earliest myths must have been 'animistic.'[20] Roughly, animism is the belief that everything has a soul or at least a personality, but no race has yet been discovered possessing purely animistic beliefs. Even the lowest races we know have developed these considerably, and so we are only acquainted with animism in its pure form theoretically,[21] as a phase of religious experience through which man must at one time have passed. It is, in fact, a fossil faith. But just as fossil animals and plants have their living representatives to-day, so do ideas and conceptions representing this petrified form of religion and science still flourish in our present-day superstitions and our present-day faiths. Animistic myths naturally show primitive ideas regarding the soul.
Animism will be dealt with more fully hereafter, but in this introductory sketch we will cite one or two examples of animistic myth to illustrate what was, so far as we know, the earliest type of myth. Stories are found telling of journeys to the spirit land, of talking animals, of men metamorphosed into animals and trees, and these are all animistic or originate in animistic belief.[22] Modern folk-tales containing such stories possess a very great antiquity, or are merely very old myths partly obscured by a veneer of modernity. Spirit stories which have obviously a primitive setting or atmosphere are almost certainly animistic. Thus tales which describe the soul as a bird or a bee, flitting about when the body is asleep, are either direct relics of an animistic age, or have been inspired by earlier animistic stories handed down from that age. The tales of spirit journeys to the Otherworld, the provision of implements, weapons, shoes, and so forth, placed in the grave to assist the soul in its progress to the Land of Shadows, invariably point to an animistic stage of belief—the belief in a separable 'soul,' in an entity entirely different and apart from the 'tenement of clay' that has perished.
There are not wanting authorities of discernment who believe that even this early phase was not the primitive phase in the religious experience of man. Of these the most clear-sighted and perspicuous in argument is Dr Marett, reader in anthropology at Oxford University. In a pregnant chapter-preface in his highly suggestive book, The Threshold of Religion, Dr Marett says: "Psychologically, religion requires more than thought, namely, feeling and will as well; and may manifest itself on its emotional side, even when ideation is vague. The question, then, is, whether apart from ideas of spirit, ghost, soul, and the like, and before such ideas have become dominant factors in the constituent experience, a rudimentary religion can exist. It will suffice to prove that supernaturalism, the attitude of mind dictated by awe of the mysterious, which provides religion with its raw material, may exist apart from animism, and, further, may provide a basis on which an animistic doctrine is subsequently constructed. Objects towards which awe is felt may be termed powers." He proceeds to say that startling manifestations of nature may be treated as 'powers' without any assumption of spiritual intervention, that certain Australian supreme beings appear to have evolved from the bull-roarer,[23] and that the dead inspire awe. This he calls 'supernaturalism,' and regards it as a phase preceding animism.
Very closely allied to and coexistent with animism, and not to be very clearly distinguished from it, is fetishism. This word is derived from the Portuguese feitiço, a charm, 'something made by art,' and is applied to any object, large or small, natural or artificial, regarded as possessing consciousness, volition, and supernatural qualities, especially magic power.[24] Briefly and roughly, the fetish is an object which the savage all over the world, in Africa, Asia, America, Australia, and, anciently, in Europe, believes to be inhabited by a spirit or supernatural being. Trees, water, stones, are in the 'animistic' phase considered as the homes of such spirits, which, the savage thinks, are often forced to quit their dwelling-places because they are under the spell or potent enchantment of a more powerful being.
The fetish may be a bone, a stone, a bundle of feathers, a fossil, a necklace of shells, or any object of peculiar shape or appearance. Into this object the medicine-man may lure the wandering or banished spirit, which henceforth becomes his servant; or, again, the spirit may of its own will take up its residence there. It is not clear whether, once in residence or imprisonment, the spirit can quit the fetish, but specific instances would point to the belief that it could do so if permitted by its 'master.'[25] We must discriminate sharply between a fetish-spirit and a god, although the fetish may develop into a godling or god. The basic difference between the fetish and the god is that whereas the god is the patron and is invoked by prayer, the fetish is a spirit subservient to an individual owner or tribe, and if it would gain the state of godhead it must do so by long or marvellous service as a luck-bringer. Offerings may be made to a fetish; it may even be invoked by prayer or spell; but on the other hand it may be severely castigated if it fail to respond to its owner's desires. Instances of the castigation of gods proper are of rare occurrence, and could scarcely happen when a deity was in the full flush of godhead, unless, indeed, the assault were directed by an alien hand.[26] We have seen that the ancient Greeks had in their temples stones representing 'nameless gods' who seem to have been of fetish origin. Thus a fetish may almost seem an idol, and the line of demarcation between the great fetish and the idol is slender, the great fetish being a link between the smaller fetish and the complete god.
Last updated: Oct 18, 2011
Wendy Foulds Mathes, PhD, is trying to teach rats to binge on Double Stuf Oreo cookies. You might think overstuffing yourself with yummy cookies would come naturally to a rodent, but it doesn't. In fact, Foulds Mathes, a research assistant professor of psychiatry at the University of North Carolina School of Medicine, in Chapel Hill, and her colleagues are working hard to create behavior in rats that comes all too easily to some humans: binge eating. They control when the rats are given cookies, and then look for changes in the brain that might indicate that foods high in fat and sugar affect the brains' reward systems in a similar way to drugs or alcohol.
It's a serious question. People with bulimia or the condition known as binge eating disorder have an overwhelming, uncontrollable urge to binge on food in a way that seems similar to people with an addiction, experts say. In addition, they often struggle to change their behavior—which can cause potentially life-threatening health problems such as diabetes, hypertension, and heart arrhythmias.
"Many people have noticed that when people with eating disorders—bulimia in general—talk about the foods they binge on, it can sound a lot like how people with substance abuse problems talk about abusing drugs," says B. Timothy Walsh, MD, an eating-disorder researcher and professor of psychiatry at Columbia University Medical Center, in New York City.
The behaviors often go hand in hand, in fact. The American Psychological Association estimates that about 5 million Americans suffer from a diagnosable eating disorder. And according to a 2007 analysis of government data, roughly one-third and one-quarter of people with bulimia and binge-eating disorder, respectively, will also have an alcohol or drug problem at some point in their lives.
"It's not uncommon to have both problems," says Richard J. Frances, MD, a clinical professor of psychiatry at the New York University Langone Medical Center, in New York City, who works with people with both types of disorders. "The way people have trouble stopping, and the addictive aspect of both kinds of disorders—and the compulsivity—are similarities."
Feel-good food?
Foulds Mathes's research in rats is paying off. She and her colleagues have seen some brain changes, such as the release of neurotransmitters, in rats that binge on high-fat sugary treats that they suspect are similar to those in rats dependent on drugs or alcohol. But you can only learn so much about binge eating from rodents, who aren't susceptible to peer pressure or other psychological and cultural factors thought to play a role in eating disorders in humans. "You can't ask a rat how it's feeling," Foulds Mathes says.
That's where the human studies come in handy. Researchers have found that, similar to what happens in rodents, chemicals such as dopamine are released in specific areas of the brain involved in reward processing when you eat something you find enjoyable. And other studies have found high-calorie foods such as chocolate milkshakes activate "pleasure center" regions of the brain.
But not everyone who encounters a chocolate milkshake feels compelled to consume 20 of them. What triggers this compulsive behavior? Dr. Walsh and his team of researchers at the New York State Psychiatric Institute of Columbia University Medical Center have been studying patients with eating disorders, such as bulimia, for about 30 years. Their research suggests these reward pathways may be under-stimulated.
In other words, people who start binging may begin a process that makes it harder for them to get the same reward from food, so they keep eating. Allegra Broft, MD, a member of Dr. Walsh's team, used a type of brain scan known as positron emission tomography (PET), and found decreased levels of dopamine receptors in the brains of people with eating disorders. These were similar to the decreased levels seen in people with drug addictions, Dr. Broft says, but on a smaller scale. Dr. Walsh says that this smaller magnitude is probably due to how the reward pathway is activated. Drugs such as cocaine, crack, and heroin "pack a whomp," he says. "That's why they're abused—they're very potent drugs. So they will have a bigger effect on changes in brain chemistry in reward areas than natural rewards like tasty food." In addition to dopamine, other neurotransmitters such as serotonin are likely to be involved in eating disorders, Dr. Walsh says. The future of eating-disorder treatment? The addiction analogy isn't perfect. The brain mechanisms associated with eating disorders and addiction don't exactly overlap, and a binge eater or bulimic can't quit food cold turkey the way an alcoholic or a drug addict can sober up. Still, greater understanding about the brain networks that underlie both addiction and eating disorders could have important implications for treatment. Experts tend to avoid the term "addiction" when talking about eating disorders because treatment approaches for the two conditions are so different, Dr. Walsh says. Although addicts try never to use or consume drugs or alcohol again, people with bulimia must learn how to have a more normal relationship with food, and to eat for nutrition. "You can get over bulimia and live comfortably with foods you used to have problems with," Dr. Walsh says. Both cognitive behavioral therapy and antidepressants like Prozac (fluoxetine) can help people with bulimia, although antidepressants are not very useful for drug problems such as cocaine abuse, he adds. Dr. Broft and Dr. Walsh hope their research ultimately finds more powerful cures for eating disorders, and perhaps one day prevents them. Not all people with eating disorders respond to treatment, and some respond only partly. "I think it's very important to continue to pursue the neurobiology of addictions to substances and the neurobiology of eating disorders, and really try to understand how the neurobiological systems are affected," Dr. Walsh says. "What's similar and what's different—that's the key. It would be very helpful in understanding and treatment if we understood those in more detail."
Hinduism Today Magazine Issues and Articles
Facing Life's Tests With Wisdom
Category: September/October 2001
Facing Life's Tests With Wisdom
Living by the ancient guidance of the yamas and niyamas can help us brave life's challenges
When we are children, we run freely, because we have no great subconscious burdens to carry. Very little has happened to us. Of course, our parents and religious institutions try to prepare us for life's tests. But because the conscious mind of a child doesn't know any better, it generally does not accept the preparation without experience, and life begins the waking up to the material world, creating situations around us that are magnificent opportunities for failing these tests. If we do not fail, we know that we have at some prior time learned the lesson inherent in the experience. Experience gives us a bit of wisdom when we really face ourselves and discover the meaning of failure and success. Failure is just education. But you shouldn't fail once you know the law.
There have been many systems and principles of ethics and morality established by various world teachers down through the ages. All of these have had only one common goal: to provide for man living on the planet Earth a guidepost for his thought and action so that his consciousness, his awareness, may evolve to the realization of life's highest goals and purposes. The ancient yoga systems provided a few simple yamas and niyamas for religious observance, defining how all people should live. The yamas, or restraints, provide a basic system of discipline for the instinctive mind. The niyamas, or positive observances, are the affirming, life-giving actions and disciplines.
Life offers you an opportunity. As the Western theologian speaks of sins of omission as well as sins of commission, so we find that life offers us an opportunity to break the law as indicated by the yamas, as well as to omit the observances of the niyamas. If we take the opportunity to live out of tune with Hindu dharma, reaction is built in the subconscious mind. This reaction stays with us and recreates the physical and astral body accordingly. Have you ever known a friend who reacted terribly to an experience in life and as a result became so changed mentally and physically that you hardly recognized him? Our external conscious mind has a habit of not being able to take the meaning out of life's most evident lessons. It is our teaching not to react to life's experiences, but to understand them, and in the understanding to free ourselves from the impact of these experiences, realizing the Self within. The true Self is only realized when you gain a subconscious control over your mind by ceasing to react to your experiences so that you can concentrate your mind fully, experience first meditation and contemplation, then samadhi, or Self Realization.
First we must face our subconscious. There are many amusing ways in which people go about facing themselves. Some sit down to think things over, turning out the light of understanding. They let their minds wander, accomplishing nothing. Let me suggest to you a better way. We carry with us in our instinctive nature basic tendencies to break these divine laws, to undergo the experiences that will create reactive conditions until we sit ourselves down and start to unravel the mess. If we are still reacting to our experiences, we are only starting on the yoga path to enlightenment. As soon as we cease to react, we have for the first time the vision of the inner light. What do we mean by this word light?
We mean light literally, not metaphysically or symbolically, but light, just as you see the light of the sun or a light emitted by a bulb. You will see light first at the top of the head, then throughout the body. An openness of mind occurs, and great peace. As a seeker gazes upon his inner light in contemplation, he continues the process of purifying the subconscious mind. As soon as that first yoga awakening comes to you, your whole nature begins to change. You have a foundation on which to continue. The yamas and the niyamas are the foundation.
Facing Life's Tests: Two feet planted firmly on the ground, the experienced devotee graciously greets the return of his own self-created karma, paving the way to its resolution rather than its ramification.
The Yamas and Niyamas
From the holy Vedas we have assembled here ten yamas and ten niyamas, a simple statement of the ancient and beautiful laws of life. The ten yamas are:
1) Noninjury, ahimsa: Not harming others by thought, word, or deed.
2) Truthfulness, satya: Refraining from lying and betraying promises.
3) Nonstealing, asteya: Neither stealing, nor coveting nor entering into debt.
4) Divine conduct, brahmacharya: Controlling lust by remaining celibate when single, leading to faithfulness in marriage.
5) Patience, kshama: Restraining intolerance with people and impatience with circumstances.
6) Steadfastness, dhriti: Overcoming nonperseverance, fear, indecision and changeableness.
7) Compassion, daya: Conquering callous, cruel and insensitive feelings toward all beings.
8) Honesty, straightforwardness, arjava: Renouncing deception and wrongdoing.
9) Moderate appetite, mitahara: Neither eating too much nor consuming meat, fish, fowl or eggs.
10) Purity, saucha: Avoiding impurity in body, mind and speech.
The ten niyamas are:
1) Remorse, hri: Being modest and showing shame for misdeeds.
2) Contentment, santosha: Seeking joy and serenity in life.
3) Giving, dana: Tithing and giving generously without thought of reward.
4) Faith, astikya: Believing firmly in God, Gods, guru and the path to enlightenment.
5) Worship of the Lord, Isvarapujana: The cultivation of devotion through daily worship and meditation.
6) Scriptural listening, siddhanta sravana: Studying the teachings and listening to the wise of one's lineage.
7) Cognition, mati: Developing a spiritual will and intellect with the guru's guidance.
8) Sacred vows, vrata: Fulfilling religious vows, rules and observances faithfully.
9) Recitation, japa: Chanting mantras daily.
10) Austerity, tapas: Performing sadhana, penance, tapas and sacrifice.
Re: RE: Hyb: Cytoplasmic inheritance was disease resistance [Walter]
That peanut phenomenon is intriguing, Walter. I haven't the foggiest of how it might be explained either. All those extra-nuclear structures are derived only from the mother. If some effect disappears over a couple of generations, you are quite right in saying it cannot be because of some condition in the mitochondria or other structures, as they have a relatively slow and fairly constant rate of mutation. Three generations wouldn't be likely to show much of any of those effects, which usually are so subtle as to defy detection without DNA analysis. If you ever hear of a likely explanation, I'd be interested to know about it.
One might ask, "What have peanuts to do with irises?"--but I would answer--most all biological processes are carried out in almost exactly the same way in phyla and genera widely separated--all the way from legumes to irids--and a long way on each side of either.
Anthocyanin production in potatoes, tobacco or tomato leaves is in response to UV radiation, and protects the rather delicate DNA in the cell from UV light--which has enough energy to break DNA bonds. Irises produce anthocyanins in the leaves too--but especially in the flowers. This appears to be a response to a parallel and reciprocal (a pair of evolving events that have a feed-back loop between them) development in insect color vision and flower color. Anthocyanin pigments attract insects and insects pollinate those flowers. Around and around the process proceeds.
The chemical chain of events all the way from acetic acid to delphinidin (or whatever anthocyanins are produced) is exactly the same in tomato leaves as it is in flowers with the same pigment. The biologists refer to this kind of parallelism as "the process is conserved across phyla" or whatever the range may be. There's only one way to make soup, and every household uses the same recipe.
Neil Mogensen z 7 western NC mountains
Imperfect Triangle
"Learning undigested by thought is labor lost. Thought unassisted by learning is perilous," reads the ever-timely Confucian message chalked onto the board of a dingy black township high school in South Africa, 1985, in Athol Fugard's searing 1989 polemic My Children! My Africa!. By play's end, when the fictional uprisings mirror the dramatic eruptions in Sharpeville and Soweto, Fugard -- the theater's most learned, thoughtful singer of apartheid's wrongs -- etches the crucial maxim indelibly into memory.
Based on a brief newspaper account of the death of a black teacher during racial unrest near Port Elizabeth, the play depicts the burgeoning friendship between a white schoolgirl and a black schoolboy brought together for academic contests by a paternalistic teacher. At first, despite their cultural differences, the teenagers get along well. Isabel Dyson, a prep-school standout who has never previously ventured into a township, is invited to debate Thami Mbikwana, prized pupil of jovial Mr. M. When Mr. M., thrilled that his precocious young scholars attend to the content of the words and not the color of the faces, proposes that they apply to be a team he'll coach in a national literary competition, Isabel and Thami readily, enthusiastically agree.
But this meeting of the minds collapses violently when racial unrest and school boycotts force the comrades in scholastic arms to choose sides. Thami, impatient and disgusted with a country "that doesn't allow the majority of its people any dreams at all," takes up the cause of active protest. What good is it to learn, the young revolutionary asks, when education doesn't lead 25 million people to their rightful shares? Mr. M., "an old-fashioned traditionalist," pleads for reason. "If the struggle needs weapons," he urges, "give it words." Thami's instincts tell him to gather in the streets with rocks at the ready; Mr. M.'s to come to school and work within the system. Isabel, her privileged white world crashing down around her, is paralyzed, caught between the polarizing opposites her new friends represent. Ridden with the guilt and good intentions of white liberalism, she doesn't know what to think or feel anymore. Three points that will never become a triangle, these characters are inevitably divergent, even in the face of death.
The schoolroom debate that becomes life-and-death gives considerable dramatic and metaphoric tension to My Children! My Africa!, a worthy play, if not Fugard's most accomplished. It lacks the intimacy of "Master Harold" ... and the Boys, A Lesson from Aloes, The Road to Mecca and Blood Knot because it has characters who are completely static; from animated opening to knelling ending, they state and restate their stances, never changing or enhancing their positions, despite their erudition. Nor do they ever talk about anything other than political immediacies, so their relationships are never allowed to deepen or complicate -- or seem real. Perhaps because they can't fully interact, Fugard attempts to realize the characters through introspective soliloquies, a technique which becomes distractingly predictable, repeatedly pulling the audience outside the action. The Houston premiere of My Children! My Africa!, at Theater LaB, takes this good but troubled play and makes it better than the text itself.
Director Alex Allen Morris (a member of the Alley and Ensemble companies) begins the evening with friendly, spirited competition, then tightens the strain gradually, choking off all the comfortable air until neither the characters nor the audience can breathe deeply in the shock of events. Though Fugard draws the battle lines by the end of the first act, Morris' firm grasp makes the social conflicts resonate deeply into the second.
The three poised performers are also superb (as are their accents, coached by Deborah Kinghorn). Adrian Cardell Porter explodes as Thami, whose polite, obedient exterior belies his pent-up rage. Rebecca Harris is utterly charming as Isabel, an engaged listener with an interested smile and direct delivery communicating a self-assurance that serves her well, until once-remote events cause her to lose her ideological bearings. Ray Anthony Walker finds energy and passion in the cheery Mr. M., an educator desperately wanting to feed young people with hope, even at the risk of alienating them. At one point, Mr. M. confides another Confucian proverb: that he can do whatever his heart prompts without transgressing what is right. Even in their single-mindedness, all the characters possess this flawed nobility, for they act out of concern for their people. The cast and crew of Theater LaB give their people a night to remember.
My Children! My Africa! runs through April 23 at Theater LaB, 1706 Alamo, 868-7516.
Archaeo-Tourists Mob Ancient Aztec, Mayan Ruins 01/30/2012 08:08 am ET | Updated Mar 31, 2012 They're checking out Chichen Itza, packing Palenque and tooling around Tulum: A whopping 10.6 million tourists explored Mexico's 183 publicly open archaeological sites last year, according to the country's National Institute of Anthropology and History. And the visitor count is expected to soar through the roof in 2012. Most of the sites are the ruins of Mayan city-states that once ruled the roost over eastern Mexico's Yucatan Peninsula. A good number of visitors to the sites are vacationers staying at the hotel-resorts of Cancun and the Riviera Maya on a 100-mile-long strip of the Yucatan's powdery Caribbean beaches. Tulum (City of the Wall) One of the big draws for archaeo-tourists is the clifftop city of Tulum, perched like a guard over an ancient seaport at the southern end of the resort strip. Tourists snap images of El Castillo, Tulum's landmark temple. Tour guides say the city must have lit up like a fireball in the morning when the first rays of the sun began bouncing off its crimson-colored temples, shrines and towers -- perhaps explaining why it was originally called Zama, or "City of the Dawn." It must have been a show-stopper at night, too. Had the Spanish sailors who first spotted the city in 1518 sailed by after sundown, they'd have been treated to the sight of its 60-plus structures shimmering in the glow of torches atop pyramids and ceremonial towers. Among eye-popping buildings along Tulum's cobbled lanes are the magnificent Temple of El Castillo ("The Castle"), the Temple of the Descending God (featuring an upside-down figure of the Mayan god of the bees) and a cliffside sanctuary called Temple of the Winds. The city was renamed Tulum when a wall was built to protect its three land sides from invasions by other tribes. For hundreds of years, Tulum thrived as a trading port for the much larger neighboring city of Coba. Coba (Waters Stirred by the Winds) About an hour's ride inland from Tulum are the ruins of the Mayan super-city of Coba. Climb the 120 steps of the Nohoch Mul pyramid there -- the tallest in the Yucatan, spiraling up nearly 140 feet -- and you'll get a dazzling view of this immense city. Pyramid of Nohoch Mul is some 140 feet high. Besides five lakes around the site (hence the "waters" in its name), archaeologists say Coba could have had as many as 20,000 structures including pyramids, palaces, government offices and homes. They believe anywhere from 20,000 to 50,000 people lived there. About 1,100 years ago, Coba's role as the Yucatan's top city-state began giving way to a tough new competitor in the north, Chichen Itza. Coba was abandoned at the time the Spanish conquered the Yucatan around 1550. Chichen Itza (Mouth of the Well of the Itza) The most-visited Mayan archaeological site, Chichen Itza is a ride of three or so hours inland from Cancun at the northern end of the Caribbean resort strip. Towering over the city is its iconic Pyramid of Kukulkan, one of the New Seven Wonders of the World. The 91-step building's grounds are particularly packed during the two equinoxes when the setting sun creates an amazing spectacle: the shadow of a feathered serpent (representing Kukulkan, the Mayan god of gods) slithering down one of the pyramid's staircases to join up with a carved snake head at the bottom. 
Make sure to check out the much-photographed Temple of the Warriors, El Caracol (a rounded observatory), the ball court -- where losing teams lost a lot more than the game -- and a huge well called a cenote (possibly the "mouth" of earlier rulers of the city, the Itzas). It's said that maidens weighed down by gold, jade and carved seashells were tossed into the well to please the gods.
Apocalypse coming in December? Sparked by the end of the last cycle of the 5,126-year Mayan calendar this year on Dec. 21, the archaeo-visitor count is expected to soar over the coming months as more and more end-of-the-world stories make it into the news media. "Doomsday predictions make great copy," says historian Jaime Capulli in Mexico City. "But most likely the old-time Mayans simply never expected anyone to be around all those years later -- and just didn't bother to add another cycle."
All images by Bob Schulman
End of extreme poverty by 2030? Not so fast
The more than 1 billion people living in extreme poverty can essentially afford one item from the McDonald's menu each day. However, they have to use that money to pay for food, school, medicine, shelter and more. Getting people above the line of extreme poverty matters both because they have more money to fall back on and because it is the threshold where things start improving. It is not a guarantee, though. The researchers reveal that some 30% to 40% of people in rural Kenya and South Africa who escaped poverty did not stay out. The findings from Ethiopia are worse: only 40% of people remained out of extreme poverty in the period between 1999 and 2009. There is variance in the rate of return, but it happens in even the more successful nations. Such a possibility is why the authors warn that as many as 1 billion people could still be living in extreme poverty in 2030.
The recommendation for education is meant to alleviate the problem. Ensuring that more people receive higher levels of education puts them in a better position to stay out of poverty. "Education and social assistance are universally relevant, and require massive public resourcing and political support in the coming years," write the authors. Ethiopia is held up as a success story for attaining near universal enrollment of boys and girls in primary schools by 2008-9. The success is due to changes in priorities from the government. Public expenditure as a percentage of the overall budget tripled over 25 years. Further targeting of underserved areas and the vulnerable by ensuring that regions had more leadership over education programs helped make sure that more people benefited.
Further suggestions in the report include universal healthcare and disaster risk management. They are in response to the many things that can keep people in extreme poverty, from natural disasters to conflict to unemployment. All of this is to say that the full report is a call for more. Staying the current course will maintain the hype about ending extreme poverty without moving the needle in any significant way. Or as the authors write, "Such investment could create a virtuous circle of poverty reduction, national economic growth and expanded individual opportunity…Put simply, it will not be possible to 'get to zero' unless development policies put those living in chronic poverty front and centre."
About Author Tom Murphy
• kenpatterson
I appreciate this article. UNESCO released its Global Monitoring Report on Education several weeks ago. The report found that 38% of all children of grade school age cannot read or do basic math. We have some very low hanging fruit when it comes to doing what is necessary to end extreme poverty--let's start with educating these kids! One way to get this moving is to help the Global Partnership for Education meet its financial needs for 2015-18, a total of $3.5 billion. The US is being asked for $250 million over the next two years. Certainly we can do that.
Writing, publishing, geekdom, and errata.
I'm the man in the box...
Society is boxes. Except that he wanted to preserve those bits of society in place. Men as the head of the household, for example, because that was the way it was done. That's the first of the big criticisms of Parsons. The other two are that his concept was a tautology, and that he gave no mechanism for the generation of either the structures or the deviant procedures.
The first is Parsons's mistake of confusing the goal with the mechanism to reach the goal. Parsons made a good argument that multiple roles and needs must be met in a family unit. His mistake is in presuming that there is a real reason that only certain individuals can take certain roles. There is no allowance that the role a confessor priest plays can be substituted by a therapist, for example. The goals he describes and the needs that are modeled - those do seem to be pretty valid. How those concepts and needs are filled... well, that's a different story.
And that brings us to the second criticism. Parsons' concept is a tautology, because it's "just a model". It has a great deal of usefulness, but it cannot be confused with reality. Models are by definition simpler than the reality they try to describe. Horribly useful - but you've got to know when to hold them, and know when to run.
The final bit might be solved by the concept of memetics - a field of study that ascribes the attributes of genes to ideas. Ironically - because Parsons was originally a biology wonk - the mechanisms of evolution serve well to describe where deviance and the institutions that fill a society's roles come from. Random chance keeps trying until something sticks. Yet it explains so much more. Natural selection only - only - removes attributes that decrease an organism's ability to continue the race. That's why the appendix might be vestigial, but isn't gone. It's not a big enough drain on our individual resources to matter. Likewise, any part of a society that doesn't actively disrupt it can persist indefinitely... even if it's not beneficial.
These tweaks to Parsons's concepts can rescue the model of modern functionalism into something useful - but in doing so, undermine the often-unspoken values of functionalists from Comte onward. (Can you tell I'm reading theory again?)
Origins of Common Words and Phrases
I love learning about unexpected starts to words and phrases I thought I knew. I've collected a few of my favorites in a random list for you. A couple fun ones first.
Skid row
Common usage: Sometimes seen as "skid road", this refers to a shabby area inhabited by lowlifes and miscreants (love that word). This is also the place to find your favorite seedy dive bar and Audrey II.
Skid road, Seattle, 1874
Origin: Several cities claim to be the source of the original 'skid road', but I, of course, believe it originated in Seattle. Logs were cut and dragged, or skidded, down a road to the waiting train cars or ships. Yesler Way (in the photo above) in downtown Seattle was the skid road for logs coming from Yesler's Mill at the top of First Hill, headed for transport elsewhere. The area around the mill was populated with cheap hotels and cookhouses for the itinerant workers. Bars and taverns also helped the workers avoid carting around too much money, hence the association of a rundown area with the term 'skid row'. Vancouver, BC also has a reasonable claim to the term, having several seedy areas associated with logging mills and skid roads for the lumber.
Bonus word! The roads used to drag the logs needed to be lubricated to make transport easier. The person in charge of this job was called the "grease monkey", which may have led to the term being used today to indicate a mechanic.
Nimrod
Common usage: Used to indicate a stupid fellow. I'm pretty sure Bugs Bunny had a lot to do with popularizing, or even creating, this usage. Since the original Nimrod was a hunter, Bugs used the word 'nimrod' to poke fun at the hunter Elmer Fudd.
Origin: The origin seems to have nothing to do with common usage today. Nimrod was a biblical king and a mighty hunter. He did have a reputation as being rebellious, so perhaps that helped towards using his name in a derogatory manner. Depending on the version of the story, Nimrod either sets himself against God, or proclaims himself God. I suppose for the very religious, Nimrod behaved in a stupid manner, and calling someone a nimrod referred to those very poor life choices he made.
A surprising (or not surprising, considering the historical importance of the navy) number of words and phrases come from nautical lingo. Some, such as "keel over" or "above board", seem obvious when you think about them, but a few others are surprising.
1778 Nautical Chart
The bitter end
Common usage: The final end of a task, no matter how arduous or unpleasant. The limit of one's efforts.
Origin: A sturdy post on the deck of a ship was called a bitt (or bit). The end of the anchor line was secured to this bitt. When the line was paid out to set the anchor, if the water was deeper than anticipated the rope would pay out 'to the bitter end' -- to the end of the rope that was attached to the bitt.
Crossing the line
Common usage: Going too far when doing something. Behaving in a way that is not socially acceptable.
Origin: This was an initiation rite performed onboard when a sailor crossed the equator for the first time. It's difficult to know if this is a tribute to King Neptune, or a sort of 'whistling in the dark' party as the sailor entered the realms of monsters. I could see the 'monster' theme leading to the current meaning of going too far, as sailing into serpent-infested waters wouldn't be seen as a good thing, but the line crossing ceremony itself has some bad history.
The rites were frequently accompanied by beatings and dragging the newbie through the surf. Sailors often ended up in sick bay, or dead. I could see these extreme hazings as also leading to the current meaning, as accidentally killing your shipmate seems to be taking things a little too far.
Bonus word! Hazing may also have a nautical origin. In the 19th century, many captains, to assert their authority, would work the crew night and day, leaving them in a mental haze. Today the word is used to mean an unpleasant ritual that you have to undergo to become one of the group.
Many words and phrases are attributable to Shakespeare. There's some discussion about whether he actually coined new words, or was the first to put commonly used words and phrases into a lasting written form. Perhaps the answer is that some of both events happened.
Laughingstock, or laughing-stock, or laughing stock
Common usage: Something funny at someone's expense. The butt of a joke.
Stocks at Keevil, Wiltshire
Origin: Popularized by Shakespeare, but predates him by at least 30 years. As a light form of punishment, people had hands and/or feet trapped in holes between two boards -- a stock. This was not the more severe form, which included having your head trapped in a board and was called a pillory. Neither of these has anything to do with the phrase, though. Rather, 'stock' in this sense seems to refer to something solid, as in a solid source of the reason for laughter, or the butt (stock) of a joke.
Bonus word! Pilloried, meaning 'to expose to public ridicule and shame', comes from being put in a pillory.
Fair play
Common usage: Giving all players a fair chance. Also used to mean fairness and justice in social contexts.
Origin: Coined by the man himself, Shakespeare used this in several of his plays. He may have simply decided that another phrase in common usage during his time, 'foul play', deserved a positive counterpart.
Eaten out of house and home
Common usage: To eat everything a person has, to consume a large amount of food.
Origin: Another Shakespeare original, this was seen in Henry IV, Part 2: "He hath eaten me out of house and home."
Too much of a good thing
Common usage: Too much of something may lead to harm. This seemed like a good phrase with which to end this list, lest you become weary of my favorite phrases and ramblings.
Origin: An older phrase that was popularized by Shakespeare. Seen in As You Like It:
ROSALIND: Why then, can one desire too much of a good thing?
Mar 11, 2013 12:40pm
Great article. I learned a lot. It's interesting how you would never guess the origin of some of the phrases we use every day, isn't it?
buck the trend
An expression that refers to a stock whose price activity moves in the opposite direction from similar companies, the industry, or the market in general. A stock is usually said to "buck the trend" if it increases in value while the stock prices of other companies are decreasing.
Use buck the trend in a sentence
You may want to try and switch things up to buck the trend in hopes of everything turning around for you.
When the market crashed, there were very few stocks that were able to buck the trend, most notably Apple Inc.
The large investment bank will buck the trend because it is sitting on a large amount of cash, whereas its industry competitors were levered with risky investments.
Tips for Mastering Hunger Cues
Successful weight management means tuning in to your hunger cues and satisfaction signals. If you have dieted a lot, you may have lost touch with your sense of physical hunger. You may have trained yourself to ignore typical signs that your body needs food, like:
● A growling stomach
● A slight headache
● An empty feeling in the pit of your stomach
● Fatigue or light dizziness
● Crankiness
Hunger is a gauge, not just an "on" or "off" switch. With a few easy tricks, you can learn to be in tune with both your hunger and satisfaction levels.
Before you eat
Before your next meal, tune in to your hunger.
1. What hunger cues are you experiencing and how often do they occur? Get familiar with the list above. If you don't have physical symptoms, it may just be in your head.
2. Distract yourself for 5 minutes and drink a glass of water. What happens? Are you truly hungry or did it pass? True hunger will let you know. Sometimes you may just be thirsty, and a glass of water will satisfy you.
3. Take a bite of food. Does it taste better than usual? Paying close attention to how your food tastes can help you know if your body needs fuel. When it's true hunger, your taste buds are stimulated and food tastes really good.
Most people experience true hunger cues every three to four hours. If you ignore your signals and wait too long to eat, your hunger may surge, your energy may plunge and you'll be more likely to overeat. Before a meal or snack, rate your hunger on a scale of 1 to 5:
1 = Not Hungry
3 = Ready to Eat
5 = Starving
Now, adjust your timing so you eat most meals when you are at a '3' so that you can respond in a more moderate way with portion control.
During and after eating
Now that you know that you are truly hungry, go ahead and eat! Just be sure to eat slowly and mindfully. It takes the body roughly 20 minutes to register feelings of fullness. Try these tricks to help you slow down:
● Put your fork down between bites.
● Pace yourself with the slowest eater at the table.
● Chew and swallow before you spoon up your next bite.
Mid-way through and as you are finishing, rate your satisfaction. Rating your satisfaction is just as important as determining your true hunger. Here's how to do it: On a scale of 1-5, are you...
1 = Still Hungry
3 = Satisfied
5 = Stuffed
Aim to stop at a level "3 - satisfied" -- that just-right feeling when you've had enough, but not too much. Try to do this for a week or two, and you will become a master at determining your hunger and satisfaction.
Get started today. Pick one meal or snack each day to rate your pre-meal hunger and post-meal satisfaction.
The popular 1980s sitcom Three's Company revolves around three single roommates: Janet, Chrissy and Jack. When Janet develops a sudden interest in having a baby, Jack and Chrissy hold hilarious interviews, looking for a "father" for hire. If this sitcom took place today, Janet, Jack, and Chrissy could forgo the interviews and team up to create a baby of their own.
We live in an age of mass customization—customized cars, customized homes, customized watches, and yes, even customized children. With the help of scientists, parents have been able to partake in inheritable genetic modification, producing what have been termed "designer babies." Through genetic screening, embryos are selected for sex and screened for genetic defects or other disease-bearing genes. Last week, the Food and Drug Administration reviewed a new technique, known as three-parent in vitro fertilization, which combines the genetic material of three individuals to make a child free of genetic defects. The procedure uses a form of mitochondrial manipulation to remove unhealthy mitochondria carrying genetic mutations from the egg of one mother and replace them with the healthy mitochondria of a second mother. One father would donate sperm containing 100% of his DNA, because it is the mother, not the father, who passes on mitochondrial DNA.
The FDA is set to determine the scientific and technological impact of this new method. A key concern is whether it is safe to begin clinical trials on humans. Although the FDA has not yet been asked to discuss the legal implications, more than 40 countries have laws banning human gene modification to create inheritable traits in offspring. However, the United Kingdom—in which in vitro fertilization is more highly regulated than in the US—does not have any evidence deeming the procedure unsafe. As a result, the UK is poised to draft new regulations with the consent of Parliament. In the United States, the FDA has banned using this technique without its explicit permission.
Although many in the scientific community tout the usefulness of this procedure, some fear that it will be abused by couples wanting to select benign traits such as eye color. In the most extreme form, it is feared that the process will be used to create a new breed of super-intelligent children. Regulatory and legal concerns will likely focus on limiting the use of this procedure and on parental rights. For now, however, safety is the primary concern. Depending on clinical results, the era of designer babies may have finally arrived.
Samara Shepherd
Red Dwarfs With Planets Have Low Metallicities
By Ken Croswell
November 6, 2006
Image of the red dwarf CHXR 73 by the Hubble Space Telescope. NASA, ESA, and K. Luhman (Penn State University).
Three of the nearest red dwarf stars with planets all have less iron than the Sun, say astronomers in Texas. The discovery is a surprise, because heavy elements like iron make up the bulk of most planets in the solar system.
Most known extrasolar planets orbit stars with spectral types of F, G, or K, stars that are about as hot and luminous as the Sun. Not surprisingly, these planet-bearing Sunlike stars tend to have high abundances of heavy, planet-forming elements--abundances that match or surpass the Sun's. But the few red dwarfs known to have planets violate this rule, say Jacob Bean, G. Fritz Benedict, and Michael Endl at the University of Texas at Austin.
Red dwarfs are smaller, cooler, and fainter than the Sun. They account for three fourths of all stars in the Galaxy, including the Sun's nearest neighbor, Proxima Centauri. Yet red dwarfs glow so feebly that not a single one is visible to the unaided eye.
Bean and his colleagues obtained high-resolution spectra of three nearby planet-bearing red dwarfs--Gliese 876 in Aquarius, Gliese 581 in Libra, and Gliese 436 in Leo--by using the Hobby-Eberly and Harlan J. Smith telescopes at McDonald Observatory in Texas. The spectra allowed the astronomers to measure metallicities, the stars' abundances of elements heavier than helium.
Measuring red dwarf metallicities is a challenge. First, the stars are dim, but spectroscopic determinations of metallicity require excellent spectra, which in turn require lots of photons from a star. Second, the stars are so cool that atoms in the stellar atmospheres join to form molecules, which complicate the analysis. Nevertheless, Bean and his colleagues succeeded in measuring the metallicity of each red dwarf. Surprisingly, in every case, the star had a lower metallicity than the Sun:

Star        | Alternate Name | Constellation | Distance (light-years) | Spectral Type | Temperature | Metallicity
Gliese 876  | Ross 780       | Aquarius      | 15                     | M4            | 3,478 K     | 76 percent solar
Gliese 581  | Wolf 562       | Libra         | 20                     | M3            | 3,480 K     | 47 percent solar
Gliese 436  | Ross 905       | Leo           | 33                     | M2.5          | 3,498 K     | 48 percent solar

"This result makes the Gliese 876 problem even worse than it already was," comments Gregory Laughlin at the University of California at Santa Cruz. Gliese 876 has two Jupiter-mass planets, and Laughlin says forming such huge planets from the disk of material orbiting a small star is troublesome--even if the metallicity were high. "If the metallicity is low," he says, "it's even harder to understand. I would really like to know what happened for Gliese 876."
In contrast, the even lower metallicities of the other two stars, Gliese 581 and Gliese 436, pose no trouble, Laughlin says, because they have only Neptune-mass planets, and Neptune is just 5 percent the mass of Jupiter. "Even in a low-metallicity disk," he says, "there's still plenty of solid material available for forming giant planets" with Neptune's mass. As a result, Laughlin predicts no correlation between a star's metallicity and the likelihood it has Neptune-mass planets. However, a strong metallicity-planet correlation should exist for Saturn-mass and especially Jupiter-mass planets.
Bean and his colleagues are measuring metallicities of other red dwarfs. The astronomers want to see whether red dwarfs in general have lower metallicities than Sunlike stars. If so, that might explain why few planets have been found around red dwarfs.
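For readers who want to connect the "percent solar" figures in the table to the logarithmic [Fe/H] notation astronomers usually quote, here is a minimal sketch in Python. The conversion is simply the standard definition (the logarithm, base 10, of the star's abundance relative to the Sun's); the star names and fractions come from the table above, and the rest is illustrative.

```python
import math

# Metallicities from the table above, as fractions of the solar value.
stars = {
    "Gliese 876": 0.76,
    "Gliese 581": 0.47,
    "Gliese 436": 0.48,
}

for name, fraction in stars.items():
    # Standard logarithmic metallicity: [Fe/H] = log10(Z_star / Z_sun).
    fe_h = math.log10(fraction)
    print(f"{name}: [Fe/H] = {fe_h:+.2f} dex")
```

Running this gives roughly -0.12 dex for Gliese 876 and about -0.32 dex for the other two stars, i.e., all three sit below solar metallicity, as the article describes.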
The new work is not the first to find lower metallicities among a group of planet-bearing stars. In 2005, astronomers reported that several giant stars with planets are metal-poor. Bean and his colleagues will publish their work in a future issue of Astrophysical Journal Letters. Update (April 20, 2009): John Johnson at the University of Hawaii and Kevin Apps in England say that astronomers have underestimated the metallicities of red dwarf stars. In a paper to appear in The Astrophysical Journal, Johnson and Apps say that when these metallicities are corrected, planet-bearing red dwarfs are in fact metal-rich--just as planet-hosting Sunlike stars tend to be. Ken Croswell is an astronomer and the author of Magnificent Universe and Ten Worlds, which describes the ten largest worlds that orbit the Sun--including Pluto and newly discovered Eris. He has previously written about the possibility that red dwarfs might have life.
How Long Does It Take to Lose 10 to 15 Pounds?
By Andrea Cespedes
The last 10 to 15 pounds takes time to lose.
A 10 to 15 pound weight loss can have a significant impact on your health -- if you're overweight, it might mean enough weight loss to help improve markers of health, including blood pressure and cholesterol levels. How long it takes to lose this weight depends a lot on your current size and your dedication to achieving the loss. The Centers for Disease Control and Prevention, as well as other health institutions, recommends losing weight at a gradual rate of 1 to 2 pounds per week. At this rate, you'll reach your goal of 10 to 15 pounds lost in as soon as five weeks or as long as 15 weeks.
How Weight Loss Works
Eating fewer calories than you burn a day leads to an energy deficit and subsequent weight loss. Make that deficit equal to 3,500 calories, and you'll lose one pound. A deficit of 500 to 1,000 calories daily adds up to a loss of 1 to 2 pounds per week. You create a deficit by eating less and moving more. For example, you can trim your diet by just 250 calories and add 250 calories of additional exercise daily to create the 500-calorie-per-day deficit required to lose a pound per week. Trim calories from your diet by staying away from refined grains, sugar and saturated fat. Make your meals consist primarily of lean protein, vegetables and whole grains. Choose plain yogurt, scant handfuls of raw nuts, low-fat cheese and fresh fruit for snacks.
Gradual Weight Loss Is Best
Experts recommend a 1- to 2-pound loss rate because most people find it feasible. Losing weight too quickly can lead to serious side effects, such as gallstones. It's also hard for most people to successfully maintain a weight loss rate of more than 2 pounds per week for any length of time. The dietary restrictions and exercise requirements are just too great. A crash or fad diet might help you lose weight faster, but, when weight is lost so quickly, it usually returns just as fast. Methods of fast weight loss often ban entire categories of nutrients, or food altogether in the case of fasts. Severe deprivation also slows your metabolism so it's harder to lose the next time you try. Fast weight loss also happens because you lose a lot of water and lean tissue, not fat. A gradual approach that takes more than two months to lose the 10 to 15 pounds is more likely to encourage fat loss.
Larger People Lose 10 to 15 Pounds Faster
If you have a lot of weight to lose, losing 10 to 15 pounds may happen relatively quickly -- even within a week or two of dedicated low-calorie dieting and mild increases in movement. If you are just 10 to 15 pounds away from your ideal weight, losing it will take considerably longer, however. When your body is larger, it takes more calories to maintain your weight. You can cut calories drastically and still get all the nutrients you need. Extremely overweight people also lose a high volume of water weight in the first few weeks of a weight-loss plan, simply because they carry more excess fluid.
Take Steps to Make Weight Loss Sustainable
The closer you are to your ideal weight, the fewer calories you need to consume for weight maintenance, so you'll need even fewer for weight loss. It's harder to create a dramatic calorie deficit so weight loss occurs more slowly.
Set your goal for a 1/2-pound weight loss per week, and you only need to create a deficit of 250 calories per day. For some people, such as the average sedentary woman over the age of 50, creating the 500- to 1,000-calorie deficit is impossible without restricting intake to fewer than 1,200 calories per day. Consuming fewer than 1,200 calories isn't recommended, as it's difficult to follow, slows your metabolism and can leave you missing certain nutrients. Slower weight loss also means you don't have to make drastic, unsustainable changes. It might take you 20 to 30 weeks to lose 10 to 15 pounds, but you're more likely to find the process manageable and be able to keep the weight off for the long run.
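The timelines quoted in this article follow directly from the 3,500-calories-per-pound rule of thumb it uses. Here is a minimal sketch of that arithmetic in Python (the rule itself is a simplification; real-world loss varies with water weight, metabolism and adherence):

```python
CALORIES_PER_POUND = 3500  # rule of thumb used throughout the article

def weeks_to_lose(pounds: float, daily_deficit: float) -> float:
    """Estimate weeks needed to lose `pounds` at a steady daily calorie deficit."""
    return (pounds * CALORIES_PER_POUND) / (daily_deficit * 7)

# 10 to 15 pounds at daily deficits of 250, 500 and 1,000 calories:
for pounds in (10, 15):
    for deficit in (250, 500, 1000):
        print(f"{pounds} lb at a {deficit}-calorie daily deficit: "
              f"about {weeks_to_lose(pounds, deficit):.0f} weeks")
```

At 500 to 1,000 calories per day this reproduces the article's five-to-15-week range, and at the gentler 250-calorie deficit it reproduces the 20 to 30 weeks mentioned above.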
Hollywood, Florida - Citrus Grove Services
Definitions of Citrus Grove Services
The following terms are commonly used with citrus grove services:
Cultivar: In a citrus grove, cultivar refers to the different types of plant species that are being grown for their fruit. Citrus trees like oranges, lemons, and grapefruits have different cultivar breeds that produce fruit of a different size and taste. For example, the so-called blood orange is known for both its bright-red fruit and highly acidic taste that is similar to grapefruit.
Soil Types: This term refers to the chemical composition of the soil. The best citrus fruit is produced from trees growing in slightly acidic soil where their roots can grow to a medium depth. Most grove services will periodically test the soil to determine its composition and add any fillers around the root system of the tree as needed.
Pruning: Pruning is a gardening method that manages a plant by trimming back its branches in order to let sunlight reach all parts of the plant. For citrus trees, timely pruning is crucial to maintaining a good fruit crop. The more sunlight the tree receives, the more fruit it will produce. The best time for pruning trees is in the early spring or late summer. Tree pruning does not need to be done every year, and it is acceptable to wait two to three years before the tree needs pruning again.
Pest Management: Tactics used to reduce the harmful effects of insects in an orchard. Pesticides are one form of pest management, and these poisonous chemicals will kill off any insect problems. However, because there is a danger of the pesticides entering the fruit supply, many grove services are starting to stress organic-friendly pest management options.
My Organic Juice 123, Boca Raton, FL 33431
100% Raw Cold Pressed Organic Juice Delivery serving the following areas: Boca Raton, Boynton Beach, Delray Beach, Lighthouse Point, and Deerfield Beach. (732) 614-0167
Maresta, Inc. P.O. Box 127, Clewiston, FL 33440 (863) 228-0714
Becker Groves Inc 5997 Sw Green Ridge St, Palm City, FL 34990 (772) 288-3537
Tanner W D Land Clearing 555 S Missouri St, Labelle, FL 33935 (863) 675-2258
Owners of citrus groves who need professionals to provide upkeep and landscaping of their grounds often turn to citrus grove services for help. Citrus grove services are professionals who care for citrus grove trees, which can include orange, lemon, grapefruit, and lime. Some trees are full size, while others are dwarf. Citrus grove services treat the soil, ship fruit, provide storage and customer service, and perform general maintenance and upkeep of the land zone or area of the crop. Many such professionals are qualified arbor specialists, trained to care for and treat citrus fruit trees. The goal is to end up with a plentiful harvest or bounty for sale during the season. The bounty of some fruit trees is destined for supermarkets and grocery stores all over the country, while others are destined for local fruit stands and farmers' markets.
Citrus grove services may also work closely with landscapers, professionals that mow lawns; plant trees and shrubs; perform hydro seeding, hardscaping, colorscaping, edging, lighting, mowing; do weeding and planting; and give advice on landscape design, architecture, and maintenance. These professionals can also install sprinkler systems, to ensure the orange crop, for instance, is well hydrated. Citrus grove services can be found in your local phone book or in online directories. Input your zip code and you will get lists of citrus grove services in your area, along with contact information. You can also ask others in the business for recommendations and referrals. Whether you need upkeep of your dwarf fruit trees, or help with storage, shipment, shop management, and sales, citrus grove services can help. Many such companies have websites where you can browse services offered, photos and testimonials from previous jobs, rates, crop care, and experience.
Deep-sea: Hydrothermal Vents
Marine Science Chapters
Deep-sea Hydrothermal Vents
First discovered in 1977, the deep-sea hydrothermal vent communities are loaded with life. Prior to this time it was thought that there were few species that could survive in the deep sea near any type of volcanic activity and the resulting hot water. However, in 1977 geologists working near the Galapagos Islands came across huge communities of six-foot-tall worms and other new species, all near the hot water of hydrothermal vents.
Black Smokers (NOAA image)
Black Smokers (NOAA image). Seawater, found in cracks in the ocean bottom, is heated by volcanic activity; it becomes less dense and rises. If this water has come into contact with newly solidified rock, it will have leached many minerals from that new rock. In many vent areas the superheated water rises quickly from the ocean bottom with so many minerals that it appears black. As it rises from the seafloor, some of the minerals precipitate out and form a 'chimney' around the water vent. These chimneys may grow to over 40 feet high while venting the black mineral-rich heated water. This is what is called a 'black smoker' area. As the chimneys continue to grow they often become clogged with their own minerals and the water vents out of a different area, so the 'black smokers' are constantly changing.
Vent Worms (NOAA image)
Vent worms (NOAA image). Large vestimentiferan worms over six feet long are one of the most visible animals at the vents. These are tube worms, secreting a thick paper-like white tube along their body. The vestimentiferans do not have a mouth or gut; instead, they rely on mutualistic symbiotic bacteria living in their tissues to produce the 'cell food' needed to keep them alive. The discovery of the vent communities was the first time anyone had seen vestimentiferan worms. At first the worms were given their own (new) phylum called Vestimentifera, but more recently they have been grouped with the segmented worms in the Phylum Annelida. Most marine annelids are in a taxonomic class called 'Polychaeta' but the vent worms are in a class called 'Pogonophora.' It is believed these worms are some of the fastest growing invertebrates known.
Vent Crab (NOAA image)
Vent crab with mussels and worm tubes (NOAA image). The chemosynthetic vent bacteria are the base of the food chain at hydrothermal vents. This is a unique community on Earth. Up until 1977 ecologists had believed almost all ecosystems needed photosynthesis as the process that allowed the producers to live and become food for the consumers. Most deep-sea areas known still depended on this photosynthetic base of the food chain (in the form of the 'rain' of organics that sink). But the vent communities are thriving areas with many species and no plants - instead it was discovered that the vent bacteria were capable of producing 'cell food' by chemosynthesizing the minerals (especially sulfur compounds) in the water. The vents were areas where the seawater had extreme concentrations of dissolved minerals, and these bacteria used them to manufacture 'cell food.' As the bacteria bloom, there are a large number of filter feeders that exist here (feather duster worms, mussels and clams) and feed on the bacteria in the water. Scavengers, like crabs and shrimp, also are found here along with fish and octopus.
The deep sea is a vast and complex area and one of the least known on our planet. Some scientists say that we know more about the moon than the deep sea. It will continue to amaze us for many years.
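To make the chemosynthesis described above a little more concrete: a commonly cited textbook form of the overall reaction for sulfur-oxidizing vent bacteria (offered here as an illustration, not taken from this chapter, and the real pathways are more varied) is CO2 + 4H2S + O2 → CH2O + 4S + 3H2O, where CH2O stands for the simple carbohydrate 'cell food' the bacteria build. The energy to fix the carbon comes from oxidizing the hydrogen sulfide dissolved in the vent water rather than from sunlight.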
Gray Whales: The Lagoons in Baja
Marine Science Chapters
The Lagoons in Baja
Whale watching skiff
Whale watching skiffs in the Baja lagoons are highly regulated by the local people. Only a limited number of boats are allowed in the whale watching areas and they must have a maximum of 6-8 passengers, a guide and a boat handler.
Three major lagoons in Baja California are the primary destination of the southbound gray whales. These are Scammon's Lagoon, San Ignacio Lagoon, and Magdalena Bay. The Mexican government strictly regulates access to these lagoons to ensure that any human activities do not affect the whales while they are in the quiet, protected lagoons of Baja California. The gray whale has used these lagoons for centuries, for both mating and birthing. The lagoons are remote and provide a protected area for the gray whale to reproduce. Past records show that the gray whale was found in bays as far north as San Diego, but whalers killed the entire herd in San Diego in the 1880s and they never returned.
Gray whale spyhopping
The power of the gray whale can be observed in its ability to jump out of the water.
Called "devil-fish" by the whalers of the 1800s, the gray whale was one of the whales most feared by the whalers, who used hand-held harpoons to kill their quarry. The gray whale was known to purposely surface under a whaleboat, breaking the boat and injuring (or killing) the whalers. They were known to actually attack the small boats.
Petting a gray whale
Especially at the end of the breeding season (March and April) the whales in the Baja lagoons may become exceptionally friendly, allowing the whale watchers to pet them.
Called "friendlies" by whale watching tourists, many whales now seek human contact in the Baja lagoons. This behavior was first reported in 1976 from San Ignacio Lagoon and has grown each year. During "friendly" encounters, a gray whale approaches a whale watching skiff, eyes the passengers, and may come alongside and surface so that the tourists can touch (or sometimes even "kiss") the friendly whale. Toward the end of the winter season the calves become quite curious about the tourist boats and some of the mother whales allow their babies to spend time with the tourists. Occasionally a mother will even encourage this behavior by lifting her calf up for tourists to touch. Many of the skiff drivers notice that the gray whales seem to be attracted to the small outboard engines of the skiffs that take 6-8 passengers to experience the gray whales in the Baja lagoons. Some believe the whales can be attracted to the skiff by a splashing noise and encourage their passengers to splash water on the sides of the boats to bring in the "friendlies."
Gray whale penis
This picture of the inflated penis of a male gray whale is an indication that the group of three gray whales in this area is a mating group.
Most mating activity of the gray whale occurs on the way to, or near, the lagoons of Baja California. Occasional sightings of mating gray whales (on their way south) have been reported from Southern California. Usually, mating occurs in groups of three whales, a female and two males. Some believe that a dominant male mates with the female and that the other male helps position the female and facilitate the coupling, whereas others believe that the two males are actually in some type of competition during this behavior. In any case, the inflated penis of the male may often be displayed at the surface of the water during this activity.
It is normally not visible, being inflated only during mating or after death (when gases from decomposition may inflate the penis of a decaying male).

Gray whale mother and calf: The mother of this calf can be seen under her baby. She is supporting her baby while the curious calf eyes the whale watchers. Notice the light-colored outline of her two blowholes, an indication of the presence of whale lice there. Gestation is estimated at 11-13 months, allowing the newly pregnant females to return to the Arctic, feed during the summer, and then migrate back to the Baja lagoons in plenty of time before their calf is born. Birthing usually occurs in the back reaches of the lagoons (sometimes up to 30 miles from the entrance). Newly pregnant females often join the birthing areas and help with the actual birthing and care of the young, much like a midwife. Females who give birth will not mate until the following year; thus, most females give birth every other year after they reach sexual maturity.

Gray whale mother: This mother gray whale has positioned herself between her calf and the whale watching boat. She may herd her baby away if she is nervous about the encounter, or she may allow her baby to explore the skiff. Twelve to seventeen feet in length at birth, the baby gray whale weighs about a ton and is generally born only in Baja. After birth, the baby nurses underwater by nudging one of the two slits on the belly of its mother. Inside these slits are the mother's nipples. As the baby nudges, the nipple pushes out and injects rich milk into the mouth of the baby. Calves often gain 50 pounds every day in the Baja nurseries by consuming up to 50 gallons of milk per day! The milk is extremely rich (up to 40 percent fat). (Compare that to the 2 percent fat milk that we drink.) During the next several months the mother will exercise her baby and watch over it as it learns about breathing, diving, interacting with other whales, currents, sand bars, and its environment. Almost all of this is done within the protection of the Baja lagoons.

Gray whale spyhopping: This gray whale may be looking around for landmarks or the presence of other gray whales. Not all the gray whales stay in the lagoons. Most of the males and juveniles spend time outside the lagoons and may be seen "surfing" and playing in the waves that break over the sand bars at the entrances to the lagoons.
Lending A Hand In Therapy

Gwen Ing, Clinic Manager at REHAB in Aiea, interviewed by Rasa Fournier

Where did you receive your schooling and training? I went to the University of Washington in Seattle.

How long have you been with REHAB? Twenty-plus years.

As an occupational therapist, is there any particular ailment you're seeing more of lately? I specialize in hand therapy and also treat neurologically impaired patients. One of the common disorders we have been seeing in our clinic is carpal tunnel syndrome. It is also one of the most common surgeries performed by hand surgeons. Data shows that it could affect up to 3 percent of the population.

Does increased computer use and technology contribute to the problem? That would be one of the causes. Carpal tunnel syndrome occurs when the median nerve becomes compressed at the wrist. The median nerve is responsible for sensation to your thumb, index and middle fingers, and sometimes part of the ring finger. There is also a motor component responsible for movement of certain muscles in the hand. The nerve can become compressed during work or leisure activities involving excessive force or repetitive movement at the wrist. Someone who uses tools or computers constantly might be more susceptible, or someone who is exposed to vibration, for instance from use of jackhammers or drills. The tunnel is made up of the wrist bones on the bottom and a thick fibrous band at the top, so the tunnel is very non-elastic. There are nine tendons in addition to the median nerve that go through the carpal tunnel, so there isn't a lot of space in the tunnel. If the tunnel becomes swollen, it will affect the nerve's ability to function normally. In surgery, they often clip the fibrous band to release some of the pressure in the tunnel. The symptoms are relieved in most cases and complications are usually minimal.

What treatment is available for carpal tunnel syndrome? The occupational therapist would first do an assessment with the patient to see what could be affecting compression of the nerve. The goal of therapy is to relieve that compression. We might perform a sensory test and look at what factors could be contributing to the mechanical pressures on the wrist. Sometimes a splint is needed to assist with keeping the wrist in a neutral position. We usually provide a home exercise program to improve flexibility at the wrist. We can also make recommendations for modifying activities and postures at home and work to minimize the stress to the tendon system, including setting up the work station or home computer to minimize carpal tunnel symptoms. After surgery, therapy goals are to decrease pain and swelling and regain hand strength and function for return to normal daily activities.

What other related hand disorders do you treat? A number of people who have carpal tunnel syndrome might have associated problems such as trigger finger or De Quervain's, which is a type of tenosynovitis. With trigger finger there might be pain, swelling or clicking at the base of the finger. In De Quervain's there would be pain or swelling on the thumb side of the wrist. Both of these are compression injuries as well, but they involve tendons instead of the nerve. In both instances, the tendon may become swollen or stuck under a fibrous sheath that it normally passes through smoothly. When the tendon is swollen it won't glide as well, so there might be some locking or sticking and therefore pain.
Like carpal tunnel syndrome, it's the result of repetitive movement, awkward movement or constant forceful gripping or pinching. Similar to carpal tunnel syndrome, we would assess what factors are causing the injury and recommend modifying activities and tools, in addition to setting up a home program, and possibly providing a splint to rest the affected area.

Does carpal tunnel necessitate surgery? For carpal tunnel, the earlier you catch it the better, because if you learn to modify your environment and activities, you can prevent the symptoms from worsening. If you wait too long, the symptoms will become more aggravated and then surgery could be necessary. In terms of surgery, studies show that 86-96 percent of patients who have these surgeries are satisfied afterward and find their symptoms relieved. Your physician or hand surgeon would know best if your condition requires surgery.

What are common symptoms of carpal tunnel syndrome and the associated disorders? The most common symptoms are numbness or tingling in the first three digits and sometimes partially in the fourth digit of the hand, because the median nerve feeds into those areas. Dropping items or difficulty grasping items is also common. The symptoms may become worse at night. Over time, there might be pain at the wrist. With trigger finger, there is pain, clicking or locking in the affected finger. With De Quervain's, there is pain on the thumb side of the wrist.

Anything else you'd like to mention? The population we're seeing is getting younger. People are more exposed to repetitive forces at home and at work with video games, computers and other technology. This forces us to use the smaller muscles of the hand more often. If you find that you're having any of the symptoms, consult your physician, because you want to catch it early. Your occupational therapist can help through education to prevent the symptoms from worsening.
Treating tennis elbow

Tennis elbow usually gets better on its own without treatment. However, it can often last for several weeks or months, because tendons heal slowly. In some cases, tennis elbow can persist for more than a year. A number of simple treatments can help alleviate the pain of tennis elbow. The most important thing you can do is rest your injured arm and stop doing the activity that caused the problem (see below). Invasive treatments, such as surgery, will usually only be considered in severe and persistent cases of tennis elbow, where non-surgical approaches have not been effective. The various treatments for tennis elbow are outlined below. You can also read a summary of the pros and cons of the treatments for tennis elbow, allowing you to compare your treatment options.

Avoiding or modifying activities

If you have tennis elbow, you should stop doing activities that strain the affected muscles and tendons. Alternatively, you may be able to modify the way you perform these types of movements so they do not place strain on your arm. Talk to your employer about avoiding or modifying activities that could aggravate your arm and make the pain worse.

Painkillers and NSAIDs

Taking painkillers, such as paracetamol, and non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen, may help ease mild pain and inflammation caused by tennis elbow. As well as tablets, NSAIDs are also available as creams and gels (topical NSAIDs). They are applied directly to a specific area of your body, such as your elbow and forearm. Topical NSAIDs are often recommended for musculoskeletal conditions, such as tennis elbow, rather than anti-inflammatory tablets. This is because they can reduce inflammation and pain without causing side effects, such as nausea and diarrhoea. Some NSAIDs are available over the counter without a prescription, while others are only available on prescription. Your GP or pharmacist will be able to recommend a suitable NSAID. Read more about non-prescription and prescription-only medicines.

Physiotherapy

Your GP may refer you to a physiotherapist if your tennis elbow is causing more severe or persistent pain. Physiotherapists are healthcare professionals who use a variety of methods to restore movement to injured areas of the body. Your physiotherapist may use manual therapy techniques, such as massage and manipulation, to relieve pain and stiffness, and encourage blood flow to your arm. They can also show you exercises you can do to keep your arm mobile and strengthen your forearm muscles. The use of an orthosis – such as a brace, strapping, support bandage or splint – may also be recommended in the short term. Read more about physiotherapy.

Corticosteroid injections

Corticosteroid injections are sometimes used to treat particularly painful musculoskeletal problems. However, there is limited clinical evidence to support their use as an effective treatment for tennis elbow. Corticosteroids are a type of medication that contain man-made versions of the hormone cortisol. Corticosteroid injections may help reduce the pain of tennis elbow in the short term, but their long-term effectiveness has been shown to be poor. The injection will be made directly into the painful area around your elbow. Before you have the injection, you may be given a local anaesthetic to numb the area and reduce the pain.

Shock wave therapy

Shock wave therapy is a non-invasive treatment, where high-energy shock waves are passed through the skin to help relieve pain and promote movement in the affected area.
How many sessions you will need depends on the severity of your pain. You may have a local anaesthetic to reduce any pain or discomfort during the procedure. The National Institute for Health and Care Excellence (NICE) states that shock wave therapy is safe, although it can cause minor side effects, including bruising and reddening of the skin in the area being treated. Research shows that shock wave therapy can help improve the pain of tennis elbow in some cases. However, it may not work in all cases, and further research is needed.

Surgery

Surgery may be recommended as a last resort treatment in cases where tennis elbow is causing severe and persistent pain. The damaged part of the tendon will be removed to relieve the painful symptoms.
Clinic: Know anything about stickered half?

Readers: What can you tell us about this stickered Kennedy half dollar featuring Chicago Cubs pitchers Kerry Wood, Mark Prior and Greg Maddux? If you know who issued it, when and why, email [email protected].

Why were early accounts in the late 1700s figured in 90ths of a dollar? It was all due to the rate of exchange. The U.S. dollar equaled the Spanish dollar (8 reales) in value, and it in turn was equal to 90 pence in English coins. There were 240 pence in the pound sterling (12 pence to a shilling and 20 shillings to the pound), so keeping accounts in 90ths of a dollar let one English penny stand for exactly one unit (see the worked conversion at the end of this column).

It seems hard to believe, but wasn't heating a coin once part of the authentication process? Hard to believe, but very true. From early descriptions of authentication prior to World War II, it was common practice to heat a coin to a level above the melting point of solder, to determine if the coin could be separated. This was a check for some of the deceptive soldered electrotypes. If not done properly, this could easily lead to the coin later being rejected as having been in a fire, and it would certainly speed up the oxidation process for copper and copper alloys. This heads the list of "don't try this at home" items.

I'm told that oak or cedar should never be used for boxes or cabinets to store coins. Can you tell me why? These two woods, and probably some others, have natural oils that in contact with coins, medals or tokens will cause them to tone an ugly dark color. Mahogany is an exception, as it is safe to use. Some years ago the U.S. Mint at San Francisco had problems with coins that contained copper turning a golden color, eventually traced to the wooden storage boxes used for the proof coins. A switch to plastic solved the problem.
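To make the arithmetic explicit (a worked restatement of the figures in the answer above; the shillings-and-pence step is added for clarity):

$$1\ \text{dollar} = 90\ \text{pence} = 7\text{s}\ 6\text{d}, \qquad 1\ \text{penny} = \tfrac{1}{90}\ \text{dollar}$$

$$1\ \text{dollar} = \tfrac{90}{240}\ \text{pound} = \tfrac{3}{8}\ \text{pound sterling}$$

An account kept in 90ths of a dollar could therefore be converted penny-for-penny into English money.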
Neuroradiology for Medical Students

Neuroradiologists primarily study the nervous system, including the brain, cranial nerves, and spinal cord. They play a major role in the diagnosis and assessment of treatment response for head & neck, brain, and spinal cord tumors. CT plays a major role in this field, especially for the assessment of the brain and spine following trauma and as a first-line imaging modality for non-traumatic abnormalities such as degenerative disease. MRI plays a major role in this subspecialty and is used to evaluate congenital and developmental abnormalities, demyelinating disorders, infections, neoplasms, and the fine detail of the spine in trauma and degenerative disc disease. Ultrasound plays a minor role in assessing congenital abnormalities such as myelomeningocele and tethered cord.

Goals and objectives
• Identify normal anatomic structures of the head and neck, brain, and spine on imaging exams and compare the degree of anatomic detail between CT and MR.
• When is MRI a more appropriate modality for spine imaging?
• What is the difference between vasogenic edema and cytotoxic edema? What are common causes of both?
• Recognize imaging signs of increased intracranial pressure and herniation. What does the "star" represent?
• Discriminate between a subdural and an epidural hematoma at CT.
• Describe imaging signs of a subarachnoid hemorrhage.
• What does it mean when the "smile" is gone?
• Construct the appropriate imaging approach for common diagnostic scenarios including: suspected stroke, suspected subarachnoid hemorrhage, head trauma, spine trauma, facial trauma, metastatic disease to the CNS, seizures, dementia, brain tumor follow-up, and sinus disease.
Life may have originated on Mars

September 13, 2013

Could life on an asteroid survive impact with Earth?

John P. Millis, PhD for - Your Universe Online

Where did life come from? It is a fundamental question that has consumed scientific inquiry for centuries. And naturally, theories abound. While some believe that life as we know it arose naturally on Earth, others believe that it may have come from outer space. It sounds almost like something out of science fiction, but panspermia – the theory that life can naturally transfer from one planet, comet, or asteroid to another – remains a serious scientific position. The challenge is that it is difficult to prove or disprove. In fact, it has been difficult to demonstrate that it is even possible, much less the solution to the question of life's proliferation here on planet Earth.

One of the more popular iterations of the theory suggests that life actually originated on Mars, and that microscopic organisms were carried to Earth after a large meteorite impact sent them hurtling across the solar system aboard a Martian rock. But could life have survived such an ordeal? First of all, there is the question of the initial collision. A meteorite slamming into the Red Planet would have likely killed much of the life around the impact site. The life would then also have to survive the trip to Earth, and then find protection from the immense heat of entry into our planet's atmosphere.

To test the feasibility of such a theory, Dina Pasini from the University of Kent conducted an experiment in which frozen samples of Nannochloropsis oculata – a species of single-celled algae – were fired from a high-velocity gas gun into a vat of water. "As you might expect, increasing the speed of impact does increase the proportion of algae that die," Pasini explained in a statement. "But even at 6.93 kilometres per second (about 4.3 miles per second), a small proportion survived. This sort of impact velocity would be what you would expect if a meteorite hit a planet similar to the Earth."

This is actually a major victory for panspermia models, as the initial impact is thought to be the most brutal. If, for instance, the life were encased in ice or rock, it would probably have little trouble making the trip from Mars to Earth. Furthermore, if the sample were also embedded in a meteorite, for example, the high temperatures of atmospheric friction would have had virtually no effect on the life within.
Magnetic field A230/0105

Caption: Magnetic field. Coloured image of iron filings aligned along the magnetic field lines surrounding a bar magnet. The magnetic field induces magnetism in the needle-shaped filings, giving each a north and south pole. The magnetised filings align in thin arcing lines due to the interactions between them, even though the magnetic field itself is continuous. Magnetism is a property of ferromagnetic metals (such as iron, nickel and cobalt) caused by the alignment of the spins of the electrons in the metal.

Keywords: bar, coloured, computer enhanced, false-colour, false-coloured, ferromagnetic, iron filings, line, lines, magnet, magnetic field, magnetism, metal, north, physical, physics, pole, poles, south
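The arcing lines traced by the filings follow the field of a magnetic dipole. As a minimal numerical sketch of that field, approximating the bar magnet as a point dipole (the moment and distance below are illustrative values, not data from the photograph):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Magnetic field (tesla) of a point dipole with moment m (A*m^2)
    at displacement r (metres) from the dipole."""
    r = np.asarray(r, dtype=float)
    rmag = np.linalg.norm(r)
    rhat = r / rmag
    return MU0 / (4 * np.pi) * (3 * rhat * np.dot(m, rhat) - m) / rmag**3

# Field 5 cm above the north pole of a small magnet (moment 1 A*m^2 along z):
print(dipole_field(np.array([0.0, 0.0, 1.0]), [0.0, 0.0, 0.05]))
```

The inverse-cube dependence is why the filing pattern fades so quickly away from the magnet: doubling the distance weakens the field by a factor of eight.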
Nirankari Movement (1850's)

After the fall of the kingdom of Maharaja Ranjit Singh, there were several attempts to restore the old glory of the Khalsa, and several movements to reform Sikhism were started. The first was the Nirankari movement, started by Baba Dyal (1783-1855), a contemporary of Ranjit Singh and a man of humble origin. He preached against the rites and rituals that were creeping into Sikhism, for he saw Sikhism being assimilated into Hinduism before his eyes. His main target was the worship of images, against which he preached vigorously. He re-emphasized the Sikh belief in Nirankar, the Formless One. From this, the movement originating from his message came to be known as the Nirankari movement.

The situation after the fall of the Sarkar Khalsa was such that, to quote Sardar Harbans Singh in The Heritage of the Sikhs: "The Sikhs were deeply galled at the fall of their kingdom, but not unduly dismayed. They attributed the outcome of their contest with the English to the chances of war. They were also aware that, despite the deceitfulness of courtiers such as Lal Singh and Tej Singh, they had fought the ferringhi squarely, and maintained their manly demeanour even in defeat. In this mood, it was easier for them to be reconciled to their lot after normalcy was restored. The peaceful spell which followed, however, produced an attitude of unwariness. Conventional and superstitious ritual which, forbidden by the Gurus, had become acceptable as an adjunct of regal pomp and ceremony during the days of Sikh power gained an increasing hold over the Sikh mind. The true teachings of the Gurus which had supplied Sikhism its potent principle of reform and regeneration were obscured by this rising tide of conservatism. The Sikh religion was losing its characteristic vigour and its votaries were relapsing into beliefs and dogmas from which the Gurus' teaching had extricated them. Absorption into ceremonial Hinduism seemed the course inevitably set for them."

Among the factors which separated the Sikhs from other Punjabis were the outward marks of their faith, especially the kesas (unshorn hair). Baba Dyal's influence was confined to the north-western districts of the Punjab. In 1851, he founded at Rawalpindi the Nirankari Darbar and gave this body the form of a sect. On his death, four years later, he was succeeded in the leadership of the community by his son, Baba Darbara Singh. The latter continued to propagate his father's teachings, prohibiting idolatrous worship, the use of alcohol and extravagant expenditure on weddings. He introduced in the Rawalpindi area the anand form of marriage rite. Anand, an austerely simple and inexpensive ceremony, became a cardinal point with leaders of subsequent Sikh reformation movements.

Sardar Harbans Singh ji further quotes: "What an unambiguous, crucial development the Nirankari movement was in Sikh life will be borne out by this excerpt from the annual report of the Ludhiana Christian Mission for 1853: Sometime in the summer we heard of a movement . . . which from the representations we received, seemed to indicate a state of mind favourable to the reception of Truth. It was deemed expedient to visit them, to ascertain the true nature of the movement and, if possible, to give it a proper direction. On investigation, however, it was found that the whole movement was the result of the efforts of an individual to establish a new panth (religious sect) of which he should be the instructor....
They professedly reject idolatry, and all reverence and respect for whatever is held sacred by Sikhs or Hindus, except Nanak and his Granth... They are called Nirankaris, from their belief in God, as a spirit without bodily form. The next great fundamental principle of their religion is that salvation is to be obtained by meditation of God. They regard Nanak as their saviour, inasmuch as he taught them the way of salvation. Of their peculiar practices only two things are learned. First, they assemble every morning for worship, which consists of bowing the head to the ground before the Granth, making offerings and in hearing the Granth read by one of their numbers, and explained also if their leader be present. Secondly, they do not burn their dead, because that would make them too much like Christians and Musalmans, but throw them into the river."

Many people at this time held the view that the British were trying to favour the Sikhs by making sure that Sikhs were building institutions. The above comment by the Ludhiana mission in 1853 discredits any such accusation, since at that time the British and the Sikhs had just fought two lengthy wars. Moreover, the Nirankari movement was started four years after the Anglo-Sikh war, when relations between the Sikhs and the British were very bad. The British favoured the Sikhs only in the early part of the twentieth century, when money and land for Khalsa College and other such institutions were granted (the British also helped create institutions like Aligarh Muslim University and Benaras Hindu University, so the Sikhs were not favoured at the expense of others).

In the late 20th century the Nirankari name was hijacked by Arya Samajis and other neo-Hindu fanatics who wanted the Sikhs to drop all their symbols and assimilate into their religion. These new "Neo Nirankaris," who believed in "Living Gurus," confronted the Sikhs at Amritsar on Baisakhi day in 1978, when their living guru, Gurbachan, attempted to create "Seven Stars" just as the Guru had created the Five Beloved Ones, evidently to prove to the Sikhs that he was more or less like Guru Gobind Singh (a very serious blasphemy for Sikhs; it is like telling Christians or Muslims "I am Christ" or "I am Muhammad"). Sikhs under the Akhand Kirtani Jatha marched from the Akal Takht to stop Gurbachan but were greeted by bullets. This whole incident was solely responsible for the turmoil in Punjab in the 1980s. These new Nirankaris have been aptly named "Naqli Nirankaris," or the "False Nirankaris."
Geology of Mount Shasta: Figure 6

Figure 6: Drawings illustrating the processes that produce the diverse lavas erupted on and around Mount Shasta. The cross section of the upper crust on the left shows a schematic magma reservoir with its "roof" at a depth of 10 to 12 kilometers beneath the mountain. The drawing of three such reservoirs on the right illustrates three end-member processes that may modify the compositions of rising basaltic magmas. During fractional crystallization (a), crystals enriched in magnesium, iron, and calcium grow from the melt along the floor and walls of the reservoir. The removal of the crystals leaves the remaining liquid depleted in these components and enriched in others -- such as silicon, sodium, and potassium -- which give the magma an andesitic or dacitic composition. During assimilation (b) the basaltic magma fractures its wallrocks and engulfs the resulting blocks. As these blocks melt and dissolve, their components are added to the surrounding magma and modify its composition. Finally, during magma mixing (c) two batches of magma with different compositions encounter one another beneath the mountain and blend together to create a new magma with an intermediate composition. These drawings depict the three processes as separate, but in nature they are likely to occur together. For example, heat released by crystallization may enable a magma to warm and dissolve blocks of cold wall rock it could not otherwise assimilate.
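The magma-mixing case (c) can be illustrated with a simple mass balance: to first order, the blended composition is the mass-weighted average of the two end members. A minimal sketch with purely illustrative oxide values, not measured Mount Shasta compositions:

```python
def mix_magmas(comp_a, comp_b, frac_a):
    """Linearly blend two magma compositions (wt% oxides) by mass fraction."""
    return {ox: frac_a * comp_a[ox] + (1 - frac_a) * comp_b[ox] for ox in comp_a}

basalt = {"SiO2": 50.0, "MgO": 8.0, "K2O": 0.5}   # illustrative wt%
dacite = {"SiO2": 65.0, "MgO": 2.0, "K2O": 2.5}   # illustrative wt%

# A 60:40 basalt:dacite blend yields an intermediate composition:
print(mix_magmas(basalt, dacite, 0.6))  # SiO2 = 56.0 wt%, etc.
```

With these numbers the blend lands at 56 wt% silica, squarely in the andesite range, which is the "intermediate composition" the caption describes.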
1 Approaching Crisis Intervention (lecture slides, ©2013, Brooks/Cole Cengage Learning)

Slide 2: Brief History of Crisis Intervention
• National Save-A-Life League (1906): the first known crisis phone line.
• Cocoanut Grove nightclub fire (1942): Dr. Erich Lindemann's clinical assessment of the survivors.
• Community Mental Health Centers Act of 1963: large state-run asylums were replaced by community mental health centers.

Slide 3: The Importance of Volunteerism
• Tasks completed by volunteer workers may range from menial administrative chores to frontline crisis intervention with clients.
• The greatest number of frontline volunteers are used to staff 24-hour suicide hotlines in major cities.
• More than 75% of all crisis centers in the United States report that volunteer workers outnumber professional staff by more than 6 to 1.

Slide 4: Crisis Intervention as a Grassroots Movement
• Crisis intervention typically remains unrecognized by the public until victims/victim advocates exert enough legal, political, or economic pressure to cause change.
• As crisis agencies become crisis organizations, they gain power, prestige, and notoriety, offering opportunities for research, clinical training sites, and employment for recent graduates.
• Three major grassroots movements helped shape crisis intervention into an emerging specialty: Alcoholics Anonymous (AA), Vietnam veterans, and the women's movement during the 1970s.

Slide 5: Transition from a Grassroots Movement to a Specialty Area
• Large influx of crisis organizations from the 1970s to the 1990s.
• Recognition that immediate intervention is essential in alleviating stress related to trauma.
• Professional recognition within the helping fields: Division 56 (Trauma Psychology) of the American Psychological Association (2006); accreditation standards set by the Council for Accreditation of Counseling & Related Educational Programs (2009) and the National Association of School Psychologists (2010).
• The media has a significant influence on public consciousness of crisis after a large-scale disaster.

Slide 6: The Case Against Too Much "Helping"
• "Trauma tourism": a burgeoning industry in post-intervention psychological trauma, replete with trade shows, trade publications, talk shows, and charitable giving.
• There is an assumption that experiencing a disaster will invariably lead to psychopathology; the reality is that in most instances, victims of disaster do not panic.
• Victims of disaster create an "altruistic or therapeutic community," characterized by the disappearance of community conflicts, heightened internal solidarity, charity, sharing, communal public works, and a positive attitude.

Slide 7: Definitions of Crisis
• There are varied definitions for both an individual and a system in crisis. For the purposes of this text, the following definitions have been selected.
• Individual crisis: the perception or experiencing of an event or situation as an intolerable difficulty that exceeds the person's current resources and coping mechanisms.
• Systemic crisis: when a traumatic event occurs such that people, institutions, communities, and ecologies are overwhelmed and response systems are unable to effectively contain and control the event in regard to both physical and psychological reactions to it.
• Metastasizing crisis: occurs when a small, isolated incident is not contained and begins to spread.

Slide 8: Characteristics of Crisis
• Presence of both danger and opportunity: a crisis is dangerous because the related stress may result in pathological behavior such as injury to self or others; it can be an opportunity because it may be the catalyst for the individual to seek help.
• Crisis can provide the seeds of growth and change: many times a person will not seek help until they can admit that they do not have control of the problem.
• No panaceas or quick fixes: it is common that the failure of a quick fix to a problem may actually lead to a crisis situation.

Slide 9: Characteristics of Crisis (cont.)
• The necessity of choice: choosing is proactive, and deciding not to choose is actually a choice that typically has negative results.
• Universality and idiosyncrasy: crises are universal because no one is immune to them; they are idiosyncratic because individuals may react differently to the same situation.
• Resiliency.
• Perception: it is the perception, not the event, that causes distress.
• Complicated symptomology: crisis is complex and defies linear causality.

Slide 10: Transcrisis States
• Historically, crises have typically been seen as lasting between 6-8 weeks.
• The current view is that the events immediately following the crisis have a large impact on the duration.
• A transcrisis state occurs when unresolved issues from a previous traumatic event resurface because of a current stressor.
• Transcrisis states are not synonymous with PTSD; the key difference is that the transcrisis state is residual and recurrent and always present to some degree.

Slide 11: Transcrisis Points
• Occur within the therapeutic intervention and are seen as necessary for progression.
• Are marked by the client gaining awareness of the various aspects of the crisis.
• May occur frequently and are not regular, predictable, or linear in progression.
• When transcrisis points occur, the therapist shifts from traditional therapeutic techniques to crisis intervention; the individual will experience affect, behavior, and cognition similar to the original crisis event.

Slide 12: Theories of Crisis Intervention
• No single theory is 100% comprehensive.
• Three major theories: Basic Crisis Theory, Expanded Crisis Theory, and Applied Crisis Theory.

Slide 13: Basic Crisis Theory
• Based on a psychoanalytic approach to crisis.
• Behavioral responses related to grief are normal, temporary, and can be relieved with short-term intervention techniques.
• Normal grief behaviors include: preoccupation with the lost one; identification with the lost one; feelings of guilt and hostility; disorganization of daily routine; somatic complaints.

Slide 14: Basic Crisis Theory (cont.)
• Crisis occurs when something impedes one's life goals.
• Equilibrium/disequilibrium paradigm: disturbed equilibrium; brief therapy or grief work; the client's working through the problem or grief; restoration of equilibrium.
• Basic Crisis Theory vs. Brief Therapy: brief therapy tends to resolve ongoing emotional issues, whereas basic crisis theory assists individuals in crisis and addresses their affective, behavioral, and cognitive distortions resulting from the traumatic event.

Slide 15: Expanded Crisis Theory
• Explores social, environmental, and situational factors of a crisis.
• Is influenced by several theories:
• Psychoanalytic Theory: early childhood experiences determine why a traumatic event becomes a crisis.
• General Systems Theory: examines the interdependence among people who experience a crisis.
• Ecosystems Theory: an extension of systems theory to include an environmental context.

Slide 16: Theories that Influence Expanded Crisis Theory (cont.)
• Adaptational Theory: crisis response is sustained through maladaptive behaviors.
• Interpersonal Theory: a state of crisis cannot be sustained if a person has an intact sense of self-worth and a healthy support system.
• Chaos Theory: a theory of evolution applied to crisis intervention.
• Developmental Theory: potential for crisis arises from developmental tasks that are not accomplished.

Slide 17: Applied Crisis Theory
• Encompasses four domains:
• Normal developmental crises: a consequence of events in typical human development that produce an abnormal response (birth of a child, graduation from college, or a career change).
• Situational crises: occur when an uncommon event, which the individual or system has no way to predict or control, causes extreme stress (terrorist attacks, automobile accidents, or sudden illness).

Slide 18: Four Domains of Applied Crisis Theory (cont.)
• Existential crises: a result of intrapersonal conflicts related to one's sense of purpose, responsibility, independence, freedom, or commitment.
• Ecosystemic crises: when a natural or human-caused disaster overtakes a person or system through no fault of their own; may be natural phenomena (hurricanes, tornadoes, forest fires), biologically derived (disease, epidemic), politically based (war), or severe economic depression (the Great Depression).

Slide 19: Crisis Intervention Models
• Traditional models of crisis intervention: equilibrium model, cognitive model, psychosocial transition model.
• Modern models based on Ecosystemic Theory: developmental-ecological model, contextual-ecological model.
• Modern models based on field practice: psychological first aid, ACT model.
• Eclectic model of crisis intervention.

Slide 20: Traditional Models
• Equilibrium model: crises are seen as a state of psychological disequilibrium; the main focus is on stabilizing the individual; most appropriately used for early intervention.
• Cognitive model: crisis is a result of distorted thinking related to an event, not the event itself; the goal is to help people change their perception of the crisis event; most appropriately used after the individual has been stabilized.
• Psychosocial transition model: assumes that people are products of their genes and their environment; the goal is for the person to gain coping mechanisms and establish a support system; most appropriately used after the client is stabilized.
Slide 21: Ecosystemic Models
• Developmental-ecological model: the crisis worker should assess the individual's developmental stage, their environment, and the relationship between the two.
• Contextual-ecological model: contextual elements are layered by physical proximity and the emotional meaning attributed to the event; reciprocal impact occurs between the individual and the system (primary vs. secondary relationships; degree of change triggered by the event); time directly influences the impact of a crisis (the amount of time that has passed; special occasions such as anniversaries and holidays).

Slide 22: Eclectic Model
• Intentionally and systematically integrates valid concepts and strategies from all available approaches.
• Operates from a task orientation with three major tasks: identify valid elements in all systems and integrate them; consider all pertinent theories, methods, and standards for evaluating and manipulating clinical data; do not identify with one specific theory.
• Fuses two pervasive themes: all people and all crises are unique and distinctive (two people may experience the same traumatic event but react to it differently), and all people and all crises are similar (there are global elements to specific crisis types).

Slide 23: Field-Based Models
• Psychological First Aid model: seeks to address immediate crisis needs; non-intrusive, because not everyone exposed to a traumatic event will experience a crisis.
• Psychological First Aid: Field Operations Guide (The National Center for PTSD) consists of 8 core actions: psychological contact and engagement; safety and comfort; stabilization (if necessary); information gathering (current needs and concerns); practical assistance; connection with social supports; information on coping; linkage with collaborative services.

Slide 24: Field-Based Models (cont.)
• ACT model: Assessment of the presenting problem; Connecting clients to support systems; Traumatic reactions and posttraumatic stress disorders.

Slide 25: Characteristics of Effective Crisis Workers
• Effective crisis intervention is a hybrid of science and art.
• Crisis workers need a mastery of technical skill, theoretical knowledge, and certain characteristics to develop this hybrid: diverse life experiences, poise, creativity, flexibility, energy, resiliency, quick mental reflexes, assertiveness, and tenacity.
Natural and Economic Factors

The economic environment is a major consideration for any business. According to Kotler et al. (2010), the economic environment is directly related to factors affecting consumer spending and buying patterns. Marketers need customers with buying power in order for any business to succeed. The most affluent group, which has the highest income, demands the highest quality products and services and is willing to pay for them. However, the world economic slow-down and recession of 2007 resulted in higher interest rates and unemployment. Differences in income create different groups which have extremely different spending powers and needs, wants and demands. At the paradoxical end of the wealth scale is the lower socio-economic class that struggles to cover the basic bills. Inequality in incomes has led marketers to create a new concept called value marketing (Kotler et al., 2010). This concept works on the basis that a customer will purchase a product or service on the condition that they feel they are receiving good value from it. This is relative to the product or service offered, but the concept of value creation covers the whole income scale. Hilton has adapted to the change in consumer spending, and also to the demand for increased perceived value creation, in the following way: 1) The hotel chain has a multi-tier level of hotels, spas, resorts, long-term suite hotels and villas that offer many different pricing options across each division. Each option within the offering also has many different pricing scales and room options, from room-only as the most basic option up to suites with all-inclusive meals, drinks and activities at one of the luxurious resorts. By creating these options Hilton offers the customer an increase in perceived customer value. Customers are able to purchase a basic room and then add on whatever other services they may or may not want (Hilton 2013).
An analysis of the play by William Shakespeare

This document was originally published in Characters of Shakespeare's Plays. William Hazlitt. London: Macmillan and Co., 1908. pp. 58-63.

THIS is a very noble play. Though not in the first class of Shakespeare's productions, it stands next to them, and is perhaps the finest of his historical plays, that is, those in which he made poetry the organ of history, and assumed a certain tone of character and sentiment, in conformity to known facts, instead of trusting to his observations of general nature or to the unlimited indulgence of his own fancy. What he has added to the actual story is upon a par with it. His genius was, as it were, a match for history as well as nature, and could grapple at will with either. The play is full of that pervading comprehensive power by which the poet could always make himself master of time and circumstances. It presents a fine picture of Roman pride and Eastern magnificence: and in the struggle between the two, the empire of the world seems suspended, "like the swan's downfeather, That stands upon the swell at full of tide, And neither way declines."

The characters breathe, move, and live. Shakespeare does not stand reasoning on what his characters would do or say, but at once becomes them, and speaks and acts for them. He does not present us with groups of stage-puppets or poetical machines making set speeches on human life, and acting from a calculation of problematical motives, but he brings living men and women on the scene, who speak and act from real feelings, according to the ebbs and flows of passion, without the least tincture of the pedantry of logic or rhetoric. Nothing is made out by inference and analogy, by climax and antithesis, but every thing takes place just as it would have done in reality, according to the occasion.

The character of Cleopatra is a masterpiece. What an extreme contrast it affords to Imogen! One would think it almost impossible for the same person to have drawn both. She is voluptuous, ostentatious, conscious, boastful of her charms, haughty, tyrannical, fickle. The luxurious pomp and gorgeous extravagance of the Egyptian queen are displayed in all their force and lustre, as well as the irregular grandeur of the soul of Mark Antony. Take only the first four lines that they speak as an example of the regal style of love-making:

CLEOPATRA: If it be love indeed, tell me how much?
ANTONY: There's beggary in the love that can be reckon'd.
CLEOPATRA: I'll set a bourn how far to be belov'd.
ANTONY: Then must thou needs find out new heav'n, new earth.

The rich and poetical description of her person beginning--

"The barge she sat in, like a burnish'd throne,
Burnt on the water; the poop was beaten gold,
Purple the sails, and so perfumed, that
The winds were love-sick"--

seems to prepare the way for, and almost to justify, the subsequent infatuation of Antony when, in the sea-fight at Actium, he leaves the battle, and "like a doating mallard" follows her flying sails. Few things in Shakespeare (and we know of nothing in any other author like them) have more of that local truth of imagination and character than the passage in which Cleopatra is represented conjecturing what were the employments of Antony in his absence--"He's speaking now, or murmuring--Where's my serpent of old Nile?"
Or again, when she says to Antony, after the defeat at Actium, and his summoning up resolution to risk another fight--"It is my birthday; I had thought to have held it poor; but since my lord is Antony again, I will be Cleopatra." Perhaps the finest burst of all is Antony's rage after his final defeat, when he comes in and surprises the messenger of Caesar kissing her hand--

"To let a fellow that will take rewards,
And say God quit you, be familiar with,
My play-fellow, your hand; this kingly seal,
And plighter of high hearts."

Cleopatra's whole character is the triumph of the voluptuous, of the love of pleasure and the power of giving over every other consideration. Octavia is a dull foil to her, and Fulvia a shrew and shrill-tongued. What a picture do those lines give of her--

"Age cannot wither her, nor custom stale
Her infinite variety. Other women cloy
The appetites they feed, but she makes me hungry
Where most she satisfies."

What a spirit and fire in her conversation with Antony's messenger who brings her the unwelcome news of his marriage with Octavia! How all the pride of beauty and of high rank breaks out in her promised reward to him--

"There's gold, and here
My bluest veins to kiss!"

She had great and unpardonable faults, but the grandeur of her death almost redeems them. She learns from the depth of despair the strength of her affections. She keeps her queen-like state in the last disgrace, and her sense of the pleasurable in the last moments of her life. She tastes luxury in death. After applying the asp, she says with fondness--

"Dost thou not see my baby at my breast,
That sucks the nurse asleep?
As sweet as balm, as soft as air, as gentle.
Oh Antony!"

It is worthwhile to observe that Shakespeare has contrasted the extreme magnificence of the descriptions in this play with pictures of extreme suffering and physical horror, not less striking--partly perhaps to place the effeminate character of Mark Antony in a more favourable light, and at the same time to preserve a certain balance of feeling in the mind. Caesar says, hearing of his rival's conduct at the court of Cleopatra,

"Leave thy lascivious wassails. When thou once
Wert beaten from Mutina, where thou slew'st
Hirtius and Pansa, consuls, at thy heel
Did famine follow, whom thou fought'st against,
Though daintily brought up, with patience more
Than savage could suffer. Thou did'st drink
The stale of horses, and the gilded puddle
Which beast would cough at. Thy palate then did deign
The roughest berry on the rudest hedge,
Yea, like the stag, when snow the pasture sheets,
The barks of trees thou browsed'st. On the Alps,
It is reported, thou didst eat strange flesh,
Which some did die to look on: and all this,
It wounds thine honour, that I speak it now,
Was borne so like a soldier, that thy cheek
So much as lank'd not."

The passage after Antony's defeat by Augustus, where he is made to say--

"Yes, yes; he at Philippi kept
His sword e'en like a dancer; while I struck
The lean and wrinkled Cassius, and 'twas I
That the mad Brutus ended"--

is one of those fine retrospections which show us the winding and eventful march of human life.
The jealous attention which has been paid to the unities both of time and place has taken away the principle of perspective in the drama, and all the interest which objects derive from distance, from contrast, from privation, from change of fortune, from long-cherished passion; and contracts our view of life from a strange and romantic dream, long, obscure, and infinite, into a smartly contested, three hours' inaugural disputation on its merits by the different candidates for theatrical applause.

The latter scenes of ANTONY AND CLEOPATRA are full of the changes of accident and passion. Success and defeat follow one another with startling rapidity. Fortune sits upon her wheel more blind and giddy than usual. This precarious state and the approaching dissolution of his greatness are strikingly displayed in the dialogue of Antony with Eros.

ANTONY: Eros, thou yet behold'st me?
EROS: Ay, noble lord.
ANTONY: Sometime we see a cloud that's dragonish,
A vapour sometime, like a bear or lion,
A towered citadel, a pendant rock,
A forked mountain, or blue promontory
With trees upon't that nod unto the world
And mock our eyes with air. Thou hast seen these signs,
They are black vesper's pageants.
EROS: Ay, my lord.
ANTONY: That which is now a horse, even with a thought
The rack dislimns, and makes it indistinct
As water is in water.
EROS: It does, my lord.
ANTONY: My good knave, Eros, now thy captain is
Even such a body...

This is, without doubt, one of the finest pieces of poetry in Shakespeare. The splendor of the imagery, the semblance of reality, the lofty range of picturesque objects hanging over the world, their evanescent nature, the total uncertainty of what is left behind, are just like the mouldering schemes of human greatness. It is finer than Cleopatra's passionate lamentation over his fallen grandeur, because it is more dim, unstable, unsubstantial. Antony's headstrong presumption and infatuated determination to yield to Cleopatra's wishes to fight by sea instead of land meet a merited punishment; and the extravagance of his resolutions, increasing with the desperateness of his circumstances, is well commented upon by Oenobarbus:

"I see men's judgments are
A parcel of their fortunes, and things outward
Do draw the inward quality after them
To suffer all alike."

The repentance of Oenobarbus after his treachery to his master is the most affecting part of the play. He cannot recover from the blow which Antony's generosity gives him, and he dies broken-hearted, "a master-leaver and a fugitive."

Shakespeare's genius has spread over the whole play a richness like the overflowing of the Nile.
Wine Words: Body

Full-bodied, medium-bodied or light-bodied are terms you hear bandied around to describe a wine. What do these descriptors mean? What determines the body of a wine? These are terms used to describe the general weight, 'fullness' or overall feel of a wine in your mouth. Full-bodied wines are big and powerful. In contrast, light-bodied wines are more delicate and lean. Medium-bodied wines fall somewhere in between. There is no legal definition of where the cut-offs occur, and many wines fall into the medium-to-full or light-to-medium body categories.

Alcohol and Extract - Key Influencing Factors

A number of factors determine the overall body or weight of a wine. Alcohol is typically the primary determinant of body. Alcohol contributes to the viscosity of a wine: the higher the alcohol in a wine, the weightier the mouthfeel, and the fuller the body. Wines with alcohol levels above 13.5% are typically considered full-bodied. Extract is another important factor that contributes to body. Extract includes all the non-volatile solids in a wine, such as the phenolics (e.g. tannins), glycerol, sugars, and acids. In general, red wines are more full-bodied than white wines. If a wine is fermented or matured in oak, that adds further weight and body. In white wines, certain winemaking techniques, such as leaving the wine on its lees (dead yeast cells) after fermentation, as well as bâtonnage (the periodic stirring of these lees), also add weight to a wine.

The Grape Variety

Certain grape varieties produce wines that are more full-bodied than others. Typically, these are varieties that have a high sugar content when ripe. Grenache and Gewürztraminer are two that immediately come to mind. Chardonnay wines in general are considered more full-bodied than Sauvignon Blanc or Riesling wines. However, not all Chardonnay wines are full-bodied. The body of a Chardonnay wine is strongly influenced by the climate where the grapes are grown. Consider the difference between a crisp, lean Chablis (cool climate) and a barrel-fermented, oak-aged Napa Chardonnay (warm climate). Regardless of grape variety, warmer regions produce riper grapes with more sugar, hence higher potential alcohol - the primary determinant of body. Thick-skinned varieties usually contain more extract than thin-skinned varieties. Red thin-skinned varieties include Gamay (think Beaujolais) and Barbera, while thick-skinned varieties include Cabernet Sauvignon, Merlot and Syrah/Shiraz.

Body and Quality

More or fuller body does not mean a higher quality wine. Quality, which I will discuss another day, has more to do with the balance of the wine's different components. For example, consider the very high quality of many light-bodied Mosel Rieslings. Examples of very light-bodied wines include German Mosel Riesling, Asti and Moscato d'Asti, with alcohol levels between 5.5% and 9%. Young Hunter Valley Semillon (Australia) and Vinho Verde (Portugal) wines, at around 11%, are also wines I consider light-bodied. Many more wines sit in the medium-bodied range, with alcohol levels between 12% and 13.5%. Finally, full-bodied wines typically come from warmer regions and would include most New World reds, but also many Italian reds (especially Barolo and Southern Italian reds), Southern Rhône wines such as Gigondas or Châteauneuf-du-Pape, as well as Spanish reds from Priorat and Toro, to name but a few.
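As a rough illustration of the alcohol cut-offs just described, here is a minimal sketch (a hypothetical helper; it applies only the article's rule of thumb and deliberately ignores extract, oak and the other factors discussed above):

```python
def body_from_abv(abv_percent: float) -> str:
    """Rough body category from alcohol by volume, per the cut-offs above."""
    if abv_percent < 12.0:
        return "light-bodied"
    elif abv_percent <= 13.5:
        return "medium-bodied"
    else:
        return "full-bodied"

for wine, abv in [("Mosel Riesling", 8.5), ("Chablis", 12.5), ("Chateauneuf-du-Pape", 14.5)]:
    print(f"{wine}: {body_from_abv(abv)}")
```

A Mosel Riesling at 8.5% comes out light-bodied and a Châteauneuf-du-Pape at 14.5% full-bodied, matching the examples given in the article.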
Genomic tools offer vision of a cleaner mining industry

Randy Shore, Vancouver Sun

A few microbes are on the verge of becoming key players in B.C.'s mining industry. Engineering professor Sue Baldwin has spent much of the past 15 years farming various combinations of anaerobic bacteria that have the ability to consume or remove heavy metals from mine tailings. Tailings are ground-up rock and chemical pollutants left over from the extraction of metals from ore. Baldwin has her toes in the water of several important cleanup projects, including the Teck Resources smelter near Trail, the Imperial Metals Mount Polley Mine, and analysis of the selenium-contaminated run-off from coal mine waste in the Elk Valley.

Imperial Metals has been operating a 450-litre-a-minute anaerobic biological reactor at Mount Polley since 2009, according to project engineer Luke Moger. The researchers are working to find the optimal environment and combination of microbes in which sulphate-reducing bacteria mitigate acid mine drainage and metal pollution by consuming sulphates in the tailings pond and in water that has come in contact with waste rock. This creates sulphides that react with metals in the water to form harmless solids (see the reaction sketch at the end of this section). The project, now in its second three-year phase, is a partnership between Imperial Metals, Baldwin's lab at the University of B.C., and Genome BC, which directs funding to research on the application of genomics in sectors such as health care, forestry and mining, including some of Baldwin's work.

Genomics — the analysis of the complete genetic blueprint of living things — makes it possible to identify individual bacteria or combinations of bacteria that have desirable characteristics, such as the ability to remove metals held in solution. The problem has been figuring out how to get bacteria to do this on an industrial scale and do it consistently, according to Baldwin. "Over time, the microbial community in these bioreactors can shift from a favourable group of microbes to different microbes that do not contribute to treatment effectiveness or may have undesirable consequences," said Baldwin in an email interview. "As we design and operate these bioreactors, we can use genomics to track the microbial community and use this information, together with other geochemical and physical information, to diagnose problems and adjust the operating conditions if needed."

Seepage water from smelter waste at Trail contained a complex mix of arsenic, zinc, sulphate and other trace metals, but the bioreactor was successful at removing the contaminants and rendered the water harmless to aquatic life, she said. Teck has a multi-faceted water quality research program, and several studies on the use of biological processes to maintain water quality have shown promise, said spokesman Chris Stannell.

"Bacterial action has long been known to work in these sorts of roles, the trick is to get it into a scalable form," said Steve Robertson, vice-president of corporate affairs for Imperial. "They work very slowly for the volume of water that we are dealing with, so the challenge is to make the bacteria as productive as possible." The payoff for creating a passive, self-sustaining biological system to purify water contaminated by extraction and processing is quite substantial. Mitigation and remediation require significant capital investment and operating funds, Robertson said. Water treatment may be required for decades after a mine closes, powered and manned at the company's expense.
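The water-cleaning chemistry described above can be summarized in two steps: the bacteria reduce sulphate to hydrogen sulphide, and the sulphide then precipitates dissolved metals as solids. A simplified sketch (here CH2O stands in for a generic organic carbon source fed to the bacteria, and Me for a divalent metal such as zinc; the article does not specify the actual substrates used at Mount Polley):

$$\mathrm{SO_4^{2-} + 2\,CH_2O \longrightarrow H_2S + 2\,HCO_3^{-}}$$

$$\mathrm{Me^{2+} + H_2S \longrightarrow MeS\downarrow + 2\,H^{+}}$$

The metal sulphides are highly insoluble, which is why the treated water can be rendered harmless to aquatic life.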
Snapshot of a system

Judy Isaac-Renton and Patrick Tang of the BC Centre for Disease Control are developing tools that will take genetic snapshots of thousands of bacteria, viruses and protozoa in both healthy and compromised watersheds in B.C. Their project, supported by Genome Canada and Genome BC, essentially takes all the creatures in a water sample, blends them together and analyses the trillions of pieces of genetic information — the meta-genome — looking for the signals that are representative of a healthy ecological system or indications of disruption due to pollution. "We look for genetic signatures for certain organisms and for signatures that are indicative of certain micro-organism function," said Tang.

The team, along with environmental microbiologist Natalie Prystajecky, is creating detection and analysis tools that can be used to assess water quality and detect and trace pollution by building a database of metagenomic fingerprints from healthy and polluted watersheds. Tools developed for drinking water protection are likely in the future to find commercial applications, charting changes in the quality of water in agricultural and industrial settings. Certain microbe populations may thrive or crash due to subtle changes in the presence of nitrogen from farms or selenium in the water near coal mines. "We need to be looking for who is there and who is not there," said Isaac-Renton. "They are sentinels."

Current assessment tools are based on growing bacterial cultures to detect the presence of potentially harmful microbes. But those tests are slow and only detect a tiny fraction of the life present in a drop of water, said Prystajecky. "When a water sample is collected, it's transported to a lab and then we wait 18 to 48 hours for that organism to grow," said Prystajecky. "We are looking for (genetic signals) that are better markers of water quality ... and we want to find approaches that are faster."

Meta-genomic watershed analysis could be a valuable tool for mining companies, said Gabe Kalmar, vice-president of sector development for Genome BC. "By knowing what is there, mining companies have an opportunity to adjust their extraction methods and to know whether they are having an effect on the environment," said Kalmar. "It could affect anything from changing how they do the extraction, how they do filtration and control what comes out of that process and how they store effluent." The object of the exercise is to describe the environment as precisely as possible so you can put it back to normal at the end of the mine's life. Without a comprehensive plan to put things right when the economic life of the mine is over, a mining company can't even break ground in B.C. Taseko Mines found out the hard way when its proposed New Prosperity gold and copper mine was turned down for a second time after an environmental review panel concluded the local environment could be irreparably harmed by waste rock and tailings. The company is now locked in a legal struggle to keep the project alive.

Bacterial mining

Genomics is also driving exciting advances in extraction, the process of removing tiny amounts of valuable metal from large amounts of rock. State-of-the-art mining still usually involves moving millions of cubic metres of earth and rock, removing millions more cubic metres of ore, grinding it into a powder and then using a variety of chemical processes to coax the valuable metal from a complex blend of minerals and potentially dangerous impurities.
Mining is an $8.3-billion, 30,000-job industry with serious esthetic challenges. "The mining industry has to become more selective in how they extract material and more efficient," said John Thompson, a director of Genome BC and president of the Canada Mining Innovation Council. "That involves a whole host of things from the way they mine through to the way they process, and bacteria offer one way to be more selective." Bacteria have the potential to extract metals from much lower grade ore or to extract metal more completely, potentially without the toxic chemicals necessary in many widely used extraction processes. "Getting more metal out of less rock is a good thing economically, but also a good thing for decreasing environmental impacts and the footprint of mines," said Thompson. "In an ideal world we would be able to leach and extract metals directly, in place, underground, without actually moving the rocks around." "Genomics could be a clever little piece of that larger puzzle," he said.

Uranium is routinely extracted from the earth in a solution of sulphuric acid or ammonium carbonate that is injected through pipes into the underground ore body and then sucked out and processed. Such in situ mining solutions are also being employed for other metals such as copper, and genomics could help identify bacteria suited to this technique or help create engineered, purpose-built bacteria, Thompson said.

Chilean biomining firm BioSigma — a collaboration between the world's largest copper mining company, Codelco, and Nippon Metals and Mining — is using bacteria to extract copper from relatively low-grade ores with vastly lower greenhouse gas emissions and water requirements than conventional processes. Biomining has the potential to dramatically reduce the amount of toxic effluent produced by the extraction process, and it may allow the company to process lower grade ores than ever before, said Kalmar. "BioSigma is the poster child for how this could work," he said. "If everything could fall into place, our goal would be to have something like that here in B.C."

Made in B.C. solutions

Scott Dunbar, a mining engineering professor at the University of B.C., is attempting to create biological copper extraction tools for complex ores using bacteriophage — viruses that infect bacteria. When metals are present in ore in several different chemical forms, recovering just the metals in the form you want is a challenge. But some phage have the ability to selectively bind to particular types of minerals such as chalcopyrite, a desirable form of copper ore. With research partners at the Centre for Blood Research and the department of biochemistry and molecular biology, Dunbar is identifying which of about two billion different phage display useful binding peptides. By exposing the entire phage library to a mineral of interest, they can identify the ones that bind most readily and wash the rest away. Researchers then let the phage infect a benign form of E. coli bacteria, so they can multiply. Repeating the process amplifies the trait, like a form of selective breeding. The resulting phage army can bind to its favourite mineral in a solution and change its electrical conductivity or cause the particles to stick together, which opens the door to new and more efficient methods of concentrating and precipitating metal into a solid from a solution.
Dunbar also has the idea of engineering colonies of bacteria to display sticky peptides that bind selectively with minerals, and then flowing ore slurries over "a bacterial lawn." "This is all beer talk at the moment, but it could work," he said. "The genomic contribution will be to alter the genetic machinery of the bacteria to cause it to display the peptide on its surface."
Jingdezhen Overview

Jingdezhen in brief

Celebrated as the world capital of porcelain, Jingdezhen was on the list of the first batch of twenty-four national-level historical and cultural cities. Located in the northeastern part of Jiangxi province, Jingdezhen lies at the junction of Zhejiang, Jiangxi and Anhui provinces, with Wuyuan county to the east, Wannian county to the south, Boyang county to the west and Qimen county to the north. Covering an area of 5,248 square kilometers, Jingdezhen is divided into Leping city, Fuliang county, Zhushan district and Changjiang district. Not only is Jingdezhen rich in mineral resources, such as manganese, china clay, coal, marble and alluvial gold, but also in plant and animal resources. Most importantly, travelers are recommended to visit Jingdezhen to explore its profound porcelain culture.

History of Jingdezhen - capital of porcelain in China

Jingdezhen is known as the porcelain capital of China. Boasting a long history, it has been famous for its abundance of porcelain since the Han and Tang dynasties; in the Song Dynasty (960-1279) it became one of the renowned towns of China, and later, in the Ming Dynasty (1368-1644) and Qing Dynasty (1636-1912), it developed into a worldwide capital of porcelain. This porcelain culture, with its thousand-year history, is the witness of Jingdezhen's development. Over the course of more than 1,700 years, Jingdezhen gradually formed its own style and epitomized the essence of porcelain making by assimilating fine skills from different parts of China. Hence, china from Jingdezhen is as white as jade, as thin as paper, as clear as a mirror, and sounds like a chime stone. As Guo Moruo, a famed Chinese scholar, once put it, "China is a nation of porcelain, while Jingdezhen is the prosperous city of the porcelain industry."

Jingdezhen nowadays - art palace and vigorous city

Invaluable china relics, consummate porcelain-making skills and brilliant ceramics make the city an art palace. Strolling around Jingdezhen, you can catch a glimpse of porcelain carving, chinaware, ceramic chips and other porcelain works here and there, and attractions related to china are available for visitors throughout the city. Ushering in the new century, Jingdezhen also attaches great importance to pillar industries beyond porcelain, such as the chemical, pharmaceutical, automobile, space and mechanical industries. Thanks to its convenient, fast transportation and excellent investment environment, Jingdezhen has become one of the areas facilitating industrial transfer from developed regions, and sees business flourishing.
Malayan sun bear (Helarctos malayanus)

Malayan sun bear: IUCN VULNERABLE (VU)

Facts about this animal

The Malayan sun bear (Helarctos malayanus) is the smallest of the eight living bear species. Its head-body length is about 1.2–1.5 m, its height at the shoulder about 70 cm, and it weighs about 25–65 kg. Sun bears do not hibernate. They are predominantly nocturnal and spend much of the day sleeping or sunbathing in trees. Mating may occur at any time of the year. Gestation lasts 90-110 days, and either one or two tiny cubs are born. The cubs remain with their mother for quite some time, learning how to find food and fend for themselves. They reach sexual maturity at between three and four years of age. Sun bears are omnivorous, meaning they feed on both plants and animals: smaller mammals, birds, fish, rodents, fruit, honey and berries. They often climb in search of food, using their long claws to tear into bee nests and termite mounds.

Did you know? That, despite threats from habitat loss and hunting, the Malayan sun bear remains one of the most neglected large mammal species in Southeast Asia, and the least known bear species in the world (Servheen 1999)?

Name (Scientific): Helarctos malayanus
Name (English): Malayan sun bear
Name (French): Ours malais ou Ours des cocotiers
Name (German): Malaienbär
Name (Spanish): Oso del sol, Oso malayo
Local names: Bahasa: Beruang madu; Malay: Bruang, Basindo nan tenggil
CITES Status: Appendix I
CMS Status: Not listed

Photo Copyright by Ryan E. Poplin

Habitat: Lowland tropical forest

Wild population: Because of its shy, secretive nature, and because it lives in dense tropical forest, little is known about this species. But numbers are declining because of habitat destruction and poaching for bear parts used in exotic foods, medicines, or aphrodisiacs (Mills and Servheen 1991).

Zoo population: 126 reported to ISIS

In the Zoo: Malayan sun bear

How this animal should be transported

Find this animal on ZooLex

Photo Copyright by Frank C. Mueller

Why do zoos keep this animal

The sun bear is rated Vulnerable by the IUCN, and its habitat is rapidly dwindling in many places, being used for subsistence farming or being replaced by oil palm plantations. Zoos therefore keep the sun bear as an ambassador species for South-East Asia's threatened lowland rainforests. Through coordinated breeding programmes, zoos aim at maintaining a self-sustaining reserve population of the species. Finally, zoos may also come into the position of having to take care of illegally traded sun bears which were confiscated by the customs or CITES authorities. As such trade affects mainly bear cubs, it is almost never possible to return confiscated animals to the wild.
Zoanthid

Photograph by Blane Perun

After observing zoanthid polyps for years, I have noticed that some grow very quickly while others grow so slowly that they are lucky to add a few button polyps each year. When they have comparable stalk and oral disc size, you would assume that they would respond similarly in the same conditions. However, with zoanthids that is not always the case, and you never know what you might get! Because of this you will want to pay attention to your zoanthids and document their growth over time to compare them.

Color Morphs

Most of the color morphs grow at completely different paces despite looking the same and being kept in the same conditions. Because of this it is hard to believe that they are the same species with exactly the same needs. When two colonies are identical on the reef and each year one appears considerably bigger than the other, it makes you wonder whether the difference is just color or whether it is an entirely different species.

"Pink Candies"

When this zoanthid color morph appeared in Julian Sprung's Invertebrates guide it garnered quite a lot of attention. The specimen had been photographed by Julian before, and it is my most recognized zoanthid besides "Perun's Purple People Eater."

The Reef Aquarium

Of all the specimens I have worked with, this is the fastest growing and most aggressive. I have watched it slowly kill the tissue of a blue Acropora abrolhensis after surrounding it and beginning to take over its base. This truly is amazing and well worth documenting through photos.

Chemical Warfare With Heliopora

The "Actinic Yellow" is the next most aggressive zoanthid I have. Notice how it is growing beside a piece of Heliopora. Heliopora is quick growing, one of nature's fastest plating corals, and can rapidly take over everything in the aquarium, yet this colony's growth was kept in check by the button polyps. It was amazing to watch the zoanthid colony defend itself from the Heliopora's aggression and begin covering the colony's perimeter. Pay close attention to a zoanthid colony, because it will truly surprise you. Remember, zoanthids may not react the same way in the same conditions even if they appear to be the same size. Because of this you truly never know what you will be getting.
Poetic essay

Essay by nikesA-, June 2014

Analytical Skills - Ability to identify IT systems, analyze, and solve problems in an optimal way.
Technical Skills - Ability to understand how computers, data networks, databases, operating systems, etc. work together.
Management Skills - Organizational management, project management, risk management, change management.
Communication Skills - Interpersonal communication (written, verbal, visual, electronic, face-to-face conversations, presentations in front of groups).

To become a systems analyst you should have: a bachelor's degree in an IT or engineering field, and a good understanding of IT architecture, IT systems, programming and development.

Education - Explained

If you want to be a systems analyst, you need a college degree. However, you don't have to study computer science or information technology. Computer science and IT degrees are the most common among systems analysts, because those programs prepare aspiring analysts with courses in network administration and management, business software applications and project management. But many employers hire analysts with a business or liberal arts degree who know how to write computer programs. You can acquire the necessary education and qualifications at basically any college that has computer science or information technology courses; all you need is a bachelor's degree in IT and you are good to go. Some schools that offer computer programs in the area are Sheridan and Mohawk College. Sheridan has a job placement rate of 85%.

Technology is always improving, and very quickly too, so in the field of IT you always have to be ready for change. It could happen at any moment. On-the-job training is necessary because of this rapid change: say a new language comes out, or the equipment we use is upgraded. It's all going to take some time for everyone to get used to and to...
Tim Berners-Lee on the W3C's Semantic Web Activity

March 21, 2001

The World Wide Web Consortium has recently embarked on a program of development on the Semantic Web, Director Tim Berners-Lee's vision of a machine processable Web. I spoke with Berners-Lee to find out the reasons behind the new Semantic Web Activity, and how he saw it relating to the rest of the XML world.

Edd Dumbill: Why has the W3C started the Semantic Web activity?

Tim Berners-Lee: The W3C operates at the cutting edge, where relatively new results of research become the foundations for products. Therefore, when it comes to interoperability these results need to become standards faster than in other areas. The W3C made the decision to take the lead -- and leading-edge -- in web architecture development. We've had the Semantic Web roadmap for a long time. As the bottom layer becomes stronger, there's at the same time a large amount falling in from above. Projects from the areas of knowledge representation and ontologies are coming together. The time feels right for W3C to be the place where the lower levels meet with the higher levels: the research results meeting with the industrial needs.

ED: Before a W3C Activity can start, the members must vote for it. Why did they vote for the Semantic Web Activity?

TBL: There's always a danger when explaining why something as broad as this is important -- it's easy to pick an example which understates the case and then undermines the value. The generality is what is devastatingly valuable and excites people. A lot of people see [the Semantic Web] as a generic solution to application integration. Those people who can remember pre-Web documentation systems saw the Web as a tool for integrating those documentation systems -- the same people see the Web as an integration platform for their diverse information applications, solving the N-squared problem.

The recent RDF Interest Group meeting was very exciting, because there was a strong feeling that things were coming together. The number of people solving problems with RDF application tools is increasing. Take calendaring for example; there were five people in the room working on such systems based on RDF. There are also a lot of Members who have serious need for ontologies. There's a clearly understood need for ontologies in a large number of industries, and a ripe need for standardization, with things like OIL and DARPA's DAML effort. We're expecting ontology work to come into W3C as a Working Group quite soon.

ED: RDF, one of the core Semantic Web technologies, has had a bad image in the past. How will you get round this?

TBL: The XML syntax has been designed to make it look like something somebody might write: this looks odd to the Knowledge Representation folks. The RDF model itself is simpler than the XML model, but the syntax which maps between them is more complex than either. My sense from the DAML work is that people who use RDF for knowledge representation are quite happy to use angle brackets. It's a myth that RDF is more complicated, coming from the fact that the XML syntax has more than one option in an effort to make it something that an XML designer would have done. The other thing the myth comes from is that some things were included in the RDF spec, such as the containers, which a lot of people don't need.
The concepts of RDF properties and RDF Schema classes have become the basic requirements for learning. There's a possibility of reorganizing the spec to present these first.

ED: But what about Perl hackers, HTML authors, etc? How will they get to grips with RDF?

TBL: I think there will come a time when the prevalence of graph manipulation tools will be more alluring than the equivalent at the XML level. Command line tools for RDF are starting to appear now, and APIs and so on... The test is "if I decide to use RDF, what do I have to do?". There are tools now where you just write down an ontology and you can use RDF tools. And there are lots of APIs coming on. One by one, individual people are being won over to RDF. I believe that will only continue. There were a huge number of Gopher sites. One by one people realized they could do more with the Web, as it's more powerful and generic. They moved from the tree model to a web model. Similarly, moving from XML to RDF is moving from a tree model to a web model. There may be specific areas in which an open source project team decides to use RDF to represent information, etc. It's clearly starting to pick up now, and nobody thinks it's about to stop.

ED: Several of the areas the Semantic Web addresses seem to overlap with areas the W3C is pursuing such as XML Schema and XML Protocol. What's the relationship of the new Activity to these existing ones?

TBL: XML Schema is an interesting example... One of the things that XML Schema does is provide a formal model both of the schema and of the XML document. Therefore if you have a process which takes in a document and represents it in terms of that model, you could then write a schema rule in any Semantic Web rules language.

XML Protocol -- I'll tell you about the way I think this'll fit together. Last year at WWW9, we heard a number of presentations on how SOAP can relate to RDF. As the XML Protocol Working Group puts together their spec, I hope they'll be able to see the opportunity for convergence. Obviously I hope they will, so that there'll be an RDF graph for every XML Protocol message. The Semantic Web can provide an underpinning for the protocols world.

ED: But aren't there areas in Working Groups where they've ignored RDF, when they could have made good use of it?

TBL: There needs to be more coordination there. It's a shame when people almost do RDF and don't quite. An example would have been WSDL. Uche Ogbuji's paper on this is excellent.

ED: I'd like to ask about the Advanced Development side of the new Activity, which aims to involve non-W3C members in the development of the Semantic Web. Isn't this openness unusual?

TBL: We always design the Activity to suit the needs of the community at the time. Examples of infrastructural work in which we did this are the HTTP, URI, and XML Signature work. We wanted the attention of the community experts, and things required wide review. More of our Activities and working groups are moving toward a more public model; XML Protocol is a perfect example. The Semantic Web needs to be really open, as many resources for its growth are from the academic world. We need people who may at some point want to give the group the benefit of their experience, without having a permanent relationship with the consortium. It's not particularly novel. It's combining the RDF Interest Group with W3C internal development stuff. We need to find what the Knowledge Representation community have got that's ripe for standardization, and what it hasn't and so on.
Coordination will be very important.
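To make the tree-versus-web contrast in the interview concrete, here is a minimal sketch of an RDF graph written in N3/Turtle-style notation. The namespace and resource URIs are invented for the example and are not from the interview.

    # Two triples describing one resource; all URIs are hypothetical.
    @prefix ex: <http://example.org/terms#> .

    <http://example.org/people/alice>
        ex:name     "Alice" ;                        # a literal property
        ex:worksFor <http://example.org/orgs/w3c> .  # a link to another node

Each statement is a subject-predicate-object triple. Because the object of a triple can itself be a URI that other documents also point at, the data forms a web of nodes rather than a single tree, which is the shift Berners-Lee describes.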
How to Program Numeric Keypad as Function Keys

By Xah Lee.

This page shows you how to set the number-pad keys as function keys, for Windows, Linux and Mac. For example, set a key to switch to the browser, close a tab, copy, paste, etc.

[photo: the numeric keypad of a standard PC keyboard]

Microsoft Windows

First, you need to install AutoHotkey. See: AutoHotkey Tutorial

Create a file with the following content.

    ; set number pad keys
    NumpadDiv::Send ^{PgUp}  ; previous tab
    NumpadMult::Send ^{PgDn} ; next tab
    NumpadSub::Send ^w       ; close window

Name the file "numpad_keys.ahk". Now, double click the file to run it. It will run in the background, and now you can press / and * to switch to the previous/next tab, and - to close a tab. For more examples, see:

Linux

For Linux, there are 2 things you need to do:

1. Set a key to run a shell command.
2. Have the shell command do the action. It can send a key combination such as 【Ctrl+c】, or switch, close or launch windows, apps or tabs.

For Linux desktops such as Gnome, KDE, Ubuntu Unity or Xfce, there's a GUI that lets you easily set a key to run a shell command. Look for it in your system control panel.

[photo: Xfce keyboard setting panel, 2013-06-01]

I recommend using the desktop GUI to set a key. Otherwise, you can use xbindkeys. See: Linux: Xbindkeys Tutorial

For switching, opening and closing windows, see: Linux: Add Key to Switch App

For sending key combinations such as 【Ctrl+c】, 【Ctrl+w】, 【Alt+Tab ↹】, use xvkbd. See: Linux: Set F2 F3 F4 to Cut Copy Paste ⌨

Mac

For Mac, by default it's impossible to make the number pad do anything other than input numbers, because the Mac does remapping at some low level. You need a tool to set keys on the number pad; see: Mac: Key Remapping, Keybinding Tools

Emacs: Bind Number Pad Keys

Buy a Numeric Keypad

If your keyboard doesn't have a number pad, you can buy one cheaply. Best Number Keypad

[photo: Logitech G13 gameboard]

Programmable Keypads

Alternatively, you can get keypads that are programmable by themselves. Programmable Keypads
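As a further illustration of the same AutoHotkey approach, here is a hedged sketch that maps a few more keypad keys. The key choices and the Firefox window class are assumptions made for the example, not part of the original page; verify the class name on your own system with AutoHotkey's Window Spy.

    ; A minimal sketch in AutoHotkey v1 syntax (assumed setup).
    Numpad1::Send ^c    ; send Ctrl+c (copy)
    Numpad2::Send ^v    ; send Ctrl+v (paste)

    ; Numpad0: switch to Firefox if it is running, else launch it.
    ; "MozillaWindowClass" is Firefox's usual window class, but this
    ; is an assumption; adjust it for your setup.
    Numpad0::
    IfWinExist, ahk_class MozillaWindowClass
        WinActivate        ; activates the last found window
    else
        Run, firefox.exe
    return

Save this in the same .ahk file as the earlier bindings and run it the same way; AutoHotkey reloads all hotkeys in the script together.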
Maths Games

Maths games are a great way to learn maths. You may not typically associate games with academia, but they're an effective vehicle for explaining a concept. There are a number of clear advantages to using games for maths.

Many games involve more than one player. This means that the children are required to interact with each other, both by listening and by talking. While playing, they'll need to discuss the rules as well as the state of play. At home, the parent or guardian interacts with the child.

Intellectual Growth and Higher-Order Reasoning

Maths games require logic and reasoning, both of which are vital tools for life. They're a great way to help children develop their intellectual stamina. Such games also require the children to manipulate information and change their ideas as the game progresses.

Problem Solving

As a maths game develops, the players need to respond to different situations as they arise. They have to be able to modify their ideas to accommodate the progress of the game. Problem solving is a big part of such games and can be transferred into other areas of life.

Fair Play

A game always ends with victory or defeat. It is important to be able both to win and to lose graciously, and to acknowledge a good game, not just its result. Whether playing alone or with others, the concepts of fairness and honesty play a big part.

Imagination

Once they have played a game for mathematics, you can go a step further and see if the children can create their own maths games. This involves devising a central theme and rules. It's a wonderful extension to game playing and allows the children not only to be creative but to apply the concepts they've learned.

Learning From Others

Playing maths games with others is an excellent way to gain a different perspective. Doing so can help children develop skills they probably wouldn't have learned independently, and be open to new ideas and concepts from their peers.

Maths games are fun and stimulating. If a child enjoys the learning process then it's more likely that the purpose of the exercise will stick with them. Also, they'll be less aware that they're learning, which reduces the pressure and stress on the child.

Independent Learning

Many children aren't so keen on homework or revising outside the classroom. Because maths games don't feel so academic, they're fantastic revision tools to motivate a child.

The Range

The range of games available is vast. Online you can find a wide variety of interesting and amusing games for different areas of maths to suit any child's learning style. Maths games can help a child both educationally and socially, and many good maths games can be found online for a multitude of levels and learning styles.

Flash games have redefined the world of multimedia. They are specifically designed for online and mobile use. Interesting and innovative, some of these games are based on popular existing animated characters and other titles.
The popularity of flash games can be attributed to the fact that they are available in a wide variety of categories: adventure, action, role-playing games, simulations, educational games, puzzle games, 2D and 3D games, single-player and multi-player games. The games are mostly free to play over the web, and the gaming scene has never been better. Among the many flash games available online, there are excellent maths games that promise to help children understand the basic concepts of this rather mind-twisting subject. Somebody once said that mathematics can be considered the world's best game: it can be more absorbing than chess, more of a gamble than poker, and can last far longer than Monopoly. Flash games have a way of simplifying this seemingly complicated subject that often makes children impatient.

Maths Flash Games

Maths games are available for various grades, such as kindergarten, first-grade maths, second-grade maths and so on. Besides encouraging children to practise their basics of addition, subtraction, division and multiplication, these games involve lots of imaginative maths activities that support indirect learning of the subject. It is important for children to pick up maths facts early in life to build a solid foundation, and such games contribute effectively to this end. It is a common observation that children relate better to learning that uses a playful approach. Maths games are a fun way to practise complex maths facts and numbers. Games covering number sense, measurement, time facts, shapes and sizes, money, fractions, multiples, mixed operations, geometry and algebra (see, for example, coolmathgames) can go a long way toward making children well versed in mathematics. Many games include worksheets that are colourful and don't look intimidating to children.

Generally speaking, children tend to develop a dislike for maths at a very early stage. Good maths games help them to begin a long-lasting friendship with a subject that has a tough reputation. Children learn different calculation tricks through games and gradually shed their fear of maths. There are maths flash games for children that involve shooting and balloon popping, all to keep children engaged. As children become familiar with the subject and start solving problems on their own, they develop the confidence to handle maths without any anxiety. Be it pupils of primary school or middle school, maths flash games benefit all. Qualified maths specialists put a great deal of thought into designing these games, making them age-appropriate and exciting at the same time. Parents are happy to see their children play games that promote positive learning and education. Who knows, their child might have the potential to be the next great scientist or mathematician! The magic of exciting maths games is certain to draw children towards them. Along with entertainment, children get a chance to overcome maths challenges without realising it. What's more, the games can be played into infinity!
Perl Question

What is the significance of an underscore in Perl ($_, @_)?

I am new to Perl, and I have used them like this:

    foreach (@list) {
        print "$_\n";
    }

    sub search {
        my ($no, @list) = @_;
        ...
    }

How exactly do these underscore variables work in Perl? What are the other constructs where they are useful?

This is the sort of question which indicates you really should be reading a book, and perhaps the rest of the perl tag faqs. Nevertheless, $_ is a context variable which is created implicitly by certain constructs and used implicitly by certain builtin functions. Here are some examples:

    while (<>) {
        next if /^#/;
        last if /^q(?:uit)?$/;
        say "Howdy!" if /^hello$/;
    }

This is doing a lot of work setting and inspecting the $_ variable and is equivalent to:

    while (defined($_ = <>)) {
        next if $_ =~ /^#/;
        last if $_ =~ /^q(?:uit)?$/;
        say "Howdy!" if $_ =~ /^hello$/;
    }

Other constructs which set $_ are: foreach loops, given blocks, map and grep operations, and catch blocks in Try::Tiny. Other constructs which use $_ are: bare print; statements, the s/// substitution operator and the tr/// transliteration operator.

I'd advise this: while you are learning Perl, don't use $_ implicitly. Write it all out in full (as in the second example above), to reinforce in your mind what is actually going on. Just like when you are learning a foreign language, you should learn to speak properly and carefully before you learn to abbrv y'language. $_ is useful for writing shorter, terser code, and can actually be clearer by focussing on the processing going on rather than the variables it is being done to, but only once you have learned as second nature which operations are using $_. Otherwise it's just a confusing mess.

As mentioned elsewhere, @_ is the argument list to a subroutine.
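To round out the answer, here is a small self-contained sketch (the subroutine name and data are made up for illustration) showing @_ receiving a subroutine's arguments and $_ being set implicitly:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use feature 'say';

    # @_ holds the arguments passed to a subroutine.
    sub total {
        my $sum = 0;
        $sum += $_ for @_;   # the postfix for sets $_ to each argument in turn
        return $sum;
    }

    say total(1, 2, 3);      # prints 6

    # foreach sets $_ implicitly when no loop variable is given.
    foreach (qw(apple banana cherry)) {
        say;                 # a bare say (like bare print) defaults to $_
    }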
George de Mestral

He was born to Albert de Mestral, a civil engineer, and Marthe de Goumoëns in Colombier, near Lausanne, Switzerland. De Mestral designed a toy airplane at age twelve and patented it. He attended the École polytechnique fédérale de Lausanne. After graduating in 1930, he worked in the machine shop of an engineering company. He worked on inventing hook and loop fasteners for ten years starting in 1948. In 1955 he successfully patented hook and loop, eventually selling 60 million yards (about 55,000 km) a year through a multimillion-dollar company. De Mestral was married three times: to Jeanne Schnyder in 1932, Monique Panchaud de Bottens in 1949, and Helen Mary Dale. On his father's death in 1966, de Mestral inherited the family home at Colombier, the château Saint-Saphorin-sur-Morges.

[photo: The grave in Commugny]

De Mestral died in Commugny, Switzerland, where he is buried. The municipality posthumously named an avenue, L'avenue Georges de Mestral, in his honor.[1] He was inducted into the National Inventors Hall of Fame in 1999 for inventing hook and loop fasteners.[2]

Invention of Velcro

[photo: Velcro, the invention for which de Mestral is famous]

For more detail, see the History section of Velcro.

De Mestral first conceptualized hook and loop after returning from a hunting trip with his dog in the Alps in 1941.[3][4][5] After removing several of the burdock burrs (seeds) that kept sticking to his clothes and his dog's fur, he became curious as to how they worked. He examined them under a microscope, and noted hundreds of "hooks" that caught on anything with a loop, such as clothing, animal fur, or hair.[6] He saw the possibility of binding two materials reversibly in a simple fashion,[4] if he could figure out how to duplicate the hooks and loops.[5]

Originally, people refused to take him and the idea seriously. He took his idea to Lyon, which was then a center of weaving, where he did manage to gain the help of one weaver, who made two cotton strips that worked. However, the cotton wore out quickly, so de Mestral turned to synthetic fibers.[6] He settled on nylon as the best synthetic after, through trial and error, he discovered that nylon forms hooks that were perfect for the hook side of the fastener when sewn under hot infrared light.[5] Though he had figured out how to make the hooks, he had yet to figure out a way to mechanize the process, or to make the looped side. Next he found that nylon thread, when woven in loops and heat-treated, retains its shape and is resilient; however, the loops had to be cut in just the right spot so that they could be fastened and unfastened many times. On the verge of giving up, he had a new idea: he bought a pair of shears and trimmed the tops off the loops, thus creating hooks that would match up perfectly with the loops.[6]

Mechanizing the weaving of the hooks took eight years, and it took another year to create the loom that trimmed the loops after weaving them. In all, it took ten years to create a mechanized process that worked.[6] He submitted his idea for patent in Switzerland in 1951, and the patent was granted in 1955.[5] De Mestral expected a high demand immediately. Within a few years, he received patents and subsequently opened shop in Germany, Switzerland, the United Kingdom, Sweden, Italy, the Netherlands, Belgium, and Canada.
In 1957 he branched out to the textile center of Manchester, New Hampshire, in the United States.[6] De Mestral gave the name Velcro, a portmanteau of the French words velours ("velvet") and crochet ("hook"), to his invention as well as his company, which continues to manufacture and market the fastening system.[7][5] However, hook and loop's integration into the textile industry took time, partly because of its appearance. Hook and loop in the early 1960s looked like it had been made from left-over bits of cheap fabric, an unappealing aspect for clothiers.[8] The first notable use for Velcro brand hook and loop came in the aerospace industry, where it helped astronauts maneuver in and out of bulky space suits. Eventually, skiers noted the similar advantages of a suit that was easier to get in and out of. Scuba and marine gear followed soon after. De Mestral unsuccessfully tried to update his patent, and it expired in 1978.

References

1. ^ Les Chemins de Commugny
2. ^ National Inventors Hall of Fame entry
3. ^ McSweeney, Thomas J.; Stephanie Raha (August 1999). Better to Light One Candle: The Christophers' Three Minutes a Day: Millennial Edition. Continuum International Publishing Group. p. 55. ISBN 0-8264-1162-2. Retrieved 2008-05-09.
4. ^ a b "About Us: History". Retrieved 2008-05-09.
7. ^ "Velcro." The Oxford English Dictionary. 2nd ed. 1989.
8. ^ Freeman, Allyn; Bob Golden (September 1997). Why Didn't I Think of That: Bizarre Origins of Ingenious Inventions We Couldn't Live Without. Wiley. pp. 99–104. ISBN 0-471-16511-5. Retrieved 2008-05-09.

External links
Sceaux, Hauts-de-Seine

[photo: Château in the Parc de Sceaux; coat of arms of Sceaux; map of Paris and the inner ring départements]

Country: France
Region: Île-de-France
Department: Hauts-de-Seine
Arrondissement: Antony
Canton: Châtenay-Malabry
Intercommunality: Hauts de Bièvre
Mayor (2014–2020): Philippe Laurent
Area: 3.60 km2 (1.39 sq mi)
Population (2006): 19,986 (density 5,600/km2 or 14,000/sq mi)
Time zone: CET (UTC+1); summer (DST) CEST (UTC+2)
INSEE/Postal code: 92071 / 92330
Elevation: 53–103 m (174–338 ft)

Wealthy city

The present château, rebuilt between 1856 and 1862 in a Louis XIII style, is now the museum of the Île-de-France, open for visits. Housing costs are extremely high, higher than in many districts of Paris, especially on streets facing the Parc de Sceaux. Sceaux is one of the richest cities in France, according to INSEE.

Sceaux is served by three stations on Paris RER line B: Sceaux, Robinson, and Parc de Sceaux. The latter station is located at the border between the commune of Sceaux and the commune of Antony, on the Antony side of the border. It is also close to Paris-Orly Airport. Sceaux is connected to the A86 motorway that circles around Paris. The commune also offers a developed network of buses which are often used by the Scéens (the name given to the residents of Sceaux).

Primary and secondary schools

The commune has the following primary schools:[1]
• Public preschools/nurseries (maternelles): des Blagis, du Centre, Clos-Saint-Marcel, du Petit-Chambord
• Public elementary schools: des Blagis, du Centre, Clos-Saint-Marcel
• One private preschool and elementary school: Écoles maternelle et élémentaire Sainte-Jeanne-d'Arc

Sceaux hosts two cités scolaires, combined junior high schools and public high schools/sixth-form colleges: the lycée Marie Curie and the lycée Lakanal.[2] The lycée Marie Curie was named after the famous scientist, who was married in, lived in, and was originally interred in Sceaux with her husband Pierre Curie. The lycée Lakanal was named after a French politician and original member of the Institut de France, Joseph Lakanal, and has remained one of the most prestigious and most demanding schools in Île-de-France. The school also offers a middle school and highly ranked "classes préparatoires" undergraduate training. Famous French scientists and writers have graduated from the lycée Lakanal, such as the Nobel Prize winners Maurice Allais and Frédéric Joliot-Curie, and the writers Jean Giraudoux and Alain-Fournier.

There is also a public vocational senior high school, the Lycée des métiers Florian.[2] There is a private junior high school, Externat Sainte-Jeanne-d'Arc.[2]

Colleges and universities

Public libraries

The Bibliothèque municipale de Sceaux is the communal library.[3]

Cultural life

The commune also has a small movie theater, the Trianon, where international movies are shown in their original language and subtitled in French. The theater is also known for showing independent films and hosting special events. In 2006, a debate revolving around ecology was organized and Al Gore's An Inconvenient Truth was shown. Various music events take place at Sceaux.
The classical Music Festival established by Alfred Loewenguth in 1969 (in 2010 entering its 41st season) takes place in the Orangery built by Jules Hardouin-Mansart for the Marquis de Seignelay in 1686, in the Park at Sceaux.[4] The Park also houses an open-air opera every summer at the end of June.

In the classic French O-Level textbook series for English-speaking pupils, Le Français d'Aujourd'hui, the Bertillon family move out to Sceaux from inner-city Paris during the course of the book's main narrative.

The Parc de Sceaux is home to a population of red squirrels estimated to number between 100 and 120.[6]

Twin towns

See also

References

1. ^ "Etablissements scolaires." Sceaux. Retrieved on September 9, 2016.
2. ^ a b c d "Etablissements d'enseignement secondaire et supérieur." Sceaux. Retrieved on September 9, 2016.
3. ^ Home. Bibliothèque municipale de Sceaux. Retrieved on September 9, 2016.
4. ^ The Orangery Festival
5. ^ "Billboard Boxscore". Billboard. New York City: Nielsen Business Media, Inc. 1987-10-31. ISSN 0006-2510.
6. ^ Rézouki, Célia; Dozières, Anne; Le Cœur, Christie; Thibault, Sophie; Pisanu, Benoît; Chapuis, Jean-Louis; Baudry, Emmanuelle (15 August 2014). "A Viable Population of the European Red Squirrel in an Urban Park". PLoS ONE 9 (8). doi:10.1371/journal.pone.0105111. Retrieved 17 August 2014.

External links
• Created by: Emma
• Created on: 15-05-11 13:21

Plants need food to have enough energy for photosynthesis, respiration, reproduction and growth. However, plants do not need to eat, as they make their own food by photosynthesis. The equation for photosynthesis is:

Carbon dioxide + Water (+ light energy) --> Glucose + Oxygen

The cells in the leaves are full of small green parts called chloroplasts. These contain a green substance called chlorophyll. Some of the glucose that is produced is used immediately by the cells of the plant, although a lot of the glucose made is converted into starch for storage. Iodine solution is a yellowy-brown liquid which turns DARK BLUE when it reacts with starch. You can use the IODINE TEST FOR STARCH to show that photosynthesis has taken place in the plant. The leaves of plants are perfectly adapted because:

• Most leaves are broad, so they have a big surface area for light to fall on
• They contain chlorophyll in the chloroplasts to absorb the light energy
• They have air spaces which allow carbon dioxide to get to the cells and oxygen to leave them
• They have veins, which bring plenty of water to the cells of the leaves

1 of 7

Limiting Factors

Plants grow better in the summer than in the winter. This is because plants need light, warmth and carbon dioxide if they are going to photosynthesise as fast as they can. If any of these things is in short supply, it may limit the amount of photosynthesis a plant can manage. This is why they are known as limiting factors.

Light - This is the most obvious factor affecting the rate of photosynthesis: if there is plenty of light, lots of photosynthesis can take place, unlike when there is not enough light (LIGHT AFFECTS RATE OF PHOTOSYNTHESIS).

Temperature - Temperature affects all chemical reactions, so as the temperature rises, so will the rate of photosynthesis. However, because a plant is a living organism, photosynthesis is controlled by enzymes, and these enzymes are destroyed as the temperature rises to about 40 to 50 degrees. This means that if the temperature gets too high, the rate of photosynthesis will fall as the enzymes controlling it are denatured.

Carbon dioxide levels - Plants need carbon dioxide to make GLUCOSE. Increasing the carbon dioxide level will increase the rate of reaction.

2 of 7

How plants use glucose

Respiration - All living cells respire all the time. They break down glucose using oxygen to provide energy for their cells. Carbon dioxide and water are the waste products of the reaction. The energy released in respiration is then used to build up smaller molecules into bigger molecules. Plants build up sugars into more complex carbohydrates like cellulose, which they use to make new plant cell walls. Plants use some energy from respiration to combine sugars with other nutrients (mineral ions) from the soil to make amino acids. Energy from respiration is also used to build up fats and oils to make a food store in seeds.

Transport and Storage - The food is needed all over the plant, so it is moved around the plant in a special transport system. There are TWO separate transport systems in a plant. The PHLOEM is made up of living tissue; it transports sugars made by photosynthesis from the leaves to the rest of the plant (carried to ALL parts of the plant). The XYLEM is the other transport tissue; it carries WATER and MINERAL IONS from the soil around the plant. Glucose is SOLUBLE (dissolves in water) and starch is INSOLUBLE (does not).

3 of 7

Why do plants need Minerals?
The problem with the products of photosynthesis is that they are all carbohydrates. Carbohydrates are very important, as plants use them for energy, for storage, and even for structural features like cell walls. But a plant also needs protein, to act as enzymes and to make up a large part of the cytoplasm and the membranes. Glucose and starch are made up of carbon, hydrogen and oxygen; proteins are made up of amino acids, which contain carbon, hydrogen, oxygen and nitrogen. Plants need nitrates from the soil to make protein. If a plant does not get enough nitrate it will not grow healthily: it becomes small and stunted. When the plant dies, the nutrients go back into the soil for other plants to use.

Plants also need magnesium to make chlorophyll. This is vital, as chlorophyll absorbs the energy from light, which makes it possible for plants to photosynthesise; without it the plant will die. Plants only need a tiny amount of magnesium, but if they don't get enough they will develop pale, yellowish areas on their leaves. If a plant does not get enough of these mineral ions, it will die, as its needs are not met.

4 of 7

Plant Problems?

Smallholder - Different crops had to be grown each year (crop rotation) and the land was rested between crops. Fields lay fallow (no crops were grown) every few years to let the land recover. Manure was the main fertiliser. Rotating crops also keeps diseases at bay. Smallholders do not earn a great deal of money.

Arable Farmer - The farm is a pretty big area, growing wheat and oilseed rape. They have to be careful about what they spend their money on, as it is not a high-profit business either. After harvesting, the farmer ploughs the stubble back into the soil; they used to burn it, although they are no longer allowed to do that. The farm supports the family with its income and also employs one local person.

Hydroponics Grower - This can limit the rate of photosynthesis, as the plants may not always get the correct level of what they need. On the other hand, growers can control the temperature and other conditions the plants need to grow faster. They do not need many employees, as a computer controls it all.

5 of 7

Chapter Two questions (2.1 - 2.4)

• What is the word equation for photosynthesis?
• Why do plants not need to eat?
• What will the iodine test for starch do?
• How does the broad shape of leaves help photosynthesis to take place?
• Why do plants grow faster in the summer compared to the winter?
• Why does temperature affect photosynthesis?
• Why do plants respire?
• What is the main storage substance in plants?
• Is glucose soluble or insoluble, and what does that mean?
• What are the products of photosynthesis?
• Why do plants need nitrate?
• Why do plants need magnesium?
6 of 7

• Carbon dioxide + Water (+ light energy) --> Glucose + Oxygen
• Plants don't need to eat, as they produce their own food
• An iodine test will show whether photosynthesis has taken place in a plant
• A big surface area for light to land on
• They grow faster in the summer because they get more light, which speeds up photosynthesis
• Temperature affects photosynthesis because if it is too high, the enzymes that control photosynthesis will denature
• Plants respire because every living organism does
• Starch
• Glucose is soluble, which means it will dissolve in water
• They are carbohydrates
• Plants need nitrate because without it the plant will be small and stunted and will not be able to grow healthily
• Plants need magnesium because without it the plant will not have chlorophyll, which means it will not be able to absorb light energy, and the plant will die

7 of 7
Mushin Mukô: no Thinking, no Form

During the last class, sensei wrote a calligraphy (see picture) saying: 無心 無光, Mushin Mukô (no thinking, no form). It reminded me of Musashi's 得光, 無光, Ukô Mukô (with form, without form). Musashi explains that the concept of Kamae is complex, as it mixes both the physical and the mental attitudes. He said that Kamae is "not only a physical stance, but varies according to situation, like the shape of water in various vessels. The physical kamae is like a castle but needs a capable Lord within". I like this image.

Sensei's Budô is formless, and this is what gives it so much power and what triggers our creativity. There is no preconceived action and no intention. But this is a very high level of expertise, and not many Bujinkan practitioners can even grasp the idea; needless to say, fewer still can do it. In order to reach this level, one must first master the basics and the various Bujinkan fighting systems. This ability of Mushin Mukô appears when your saino konki develops itself enough within the Juppô Sesshô. Juppô Sesshô by essence cannot tolerate forms or shapes, as it is a natural reaction to a mienai (invisible) situation. Forms and shapes give you away and might reveal your intentions to the opponent. If you are able to be Mushin Mukô in the fight, your body will simply react adequately. The outcome is not important. You move naturally, simply surfing the waves of intention of your opponent.

This is why I consider the study of the Tsurugi to be so interesting. The Tsurugi becomes alive because you have mastered the traditional "modern" forms of sword fighting (in the Bujinkan they are to be studied in the kukishin, the togakure, the shotô, the tachi). Strangely, we use an old weapon fighting system through the understanding of its modern evolution. Remember that the Tsurugi was in use until nearly the end of the Heian Jidai (794–1185)*. This means that Tsurugi techniques were used for at least 35 centuries (including China). Comparatively, the tachi and then the katana were in use for less than 8 centuries of actual warfare (10th to 18th c). But as sensei explained, the written techniques have disappeared (bamboo blades and paper didn't make it through time) and only the body can recreate the techniques. Now, as the Tsurugi created the tachi and the tachi created the katana, then by learning the relatively "modern" ways of the tachi and katana we can rediscover the old fighting experience of our ancestors. I write "ancestors" because the whole world (the Indians, the Romans, the Greeks and the Vikings) used the same straight type of blade.

By showing the Tsurugi this year, Hatsumi sensei has given us the best tool to get rid of the form. Fighting is about surviving, not about looking good. If you know the forms well enough, then the Tsurugi will free your taijutsu. But sensei prepared us for this. We studied the sword by going back in time: first with the Kukishin biken jutsu, then the shotô**, then the togakure sword, then the tachi***. In fact we have been studying the sword from modern times going back to the Tsurugi period. Once you have understood this concept of 無心 無光, Mushin Mukô (no thinking, no form) with the sword, you can apply it to any type of fighting, from taijutsu to any weapon. This is why the Tsurugi is so difficult to understand. And this is the beauty of the Juppô Sesshô of the Bujinkan martial arts taught by Hatsumi sensei.
** for those interested, when we studied the shotô in 2003, sensei said that these techniques came from a fighting system of the Muromachi period that specialized in small swords to fight the huge tachi in use at that time. In this Ryû, they would carry the shotô on the thigh like a tachi (cutting edge down) with the same type of mounting. I am sorry but I don't remember the name of the Ryû (it is not one of the 9 schools).

*** for those interested, I will give a one-week seminar in mid-August and we will study all the sword techniques from the Bujinkan: Kukishin biken, Togakure biken, shotô jutsu, tachi waza, and Tsurugi. You can see all the details at

4 thoughts on "Mushin Mukô: no Thinking, no Form"

1. Wonderful and true, it can be applied in all aspects of life… It means to let yourself be carried away by the feeling inside, without trying to understand it or stopping to find the words for it. Because in that moment it vanishes away…
Art 1301 Part 2 Test (Chapters 5-8)

Q: Rembrandt copied _____ but added some additional features to his own sketched version.
A: Leonardo da Vinci's Last Supper

As a sketch to record an idea

Q: The unforgiving medium of ____ was widely used for drawing from the late Middle Ages to the early 1500s, when it was largely replaced by the lead pencil.
A: silverpoint

Q: In silverpoint drawings, the drawing surface must be coated with a ground of ______.
A: bone dust or chalk mixed with gum, water, and pigment

Q: A form of charcoal was used by our primitive ancestors to create images on _____.
A: cave walls

Q: The effects of ____ when each is drawn against a paper surface are very similar.
A: charcoal, chalk, and pastel

Q: Claudio Bravo's Package is an excellently executed trompe l'oeil drawing that presents the illusion of a package wrapped in _____.
A: crumpled blue paper and string

Q: From the Latin for "blood," _____ is the name associated with an earthy red chalk color.
A: sanguine

Q: Edgar Degas was one of the masters of pastel drawing in 19th-century France. In ____, Degas depicted one of his favorite subjects.
A: Woman at her Toilette

Q: Jaune Quick-to-See Smith's The Environment: Be a Shepherd is reminiscent of a ____ due to its simple forms and sketchy manner.
A: mental sketchbook

grid transfer method

carbon black and water

Q: _____ artists are masters of the brush-and-ink medium. They have used it for centuries for all types of ________.
A: Japanese; calligraphy

Q: The medium of brush and wash is more versatile than brush and ink, as seen in Leonardo da Vinci's Study of Drapery. It is so realistic that it is almost _____.

Q: In its original meaning, a _____ was a full-scale preliminary drawing executed on paper for projects such as frescoes, stained glass, oil paintings, or tapestries.
A: cartoon

Q: Which of the following drawing materials CANNOT be smudged or rubbed for a hazy effect?

Q: Honoré Daumier's pen-and-ink drawing, The Three Lawyers, is a caricatured illustration of ____.
A: pompous, superficial lawyers

Q: The binding agent that powdered pigment is mixed with to form paint is known as the ___.

Q: True fresco, or ______, is executed on damp ______.
A: buon fresco; lime plaster

Q: In Giotto's 14th-century painting Lamentation, joints can clearly be seen that break the blue sky into numerous sections. This occurred because of the ____.
A: limitations of fresco

Q: While painting the Egypto-Roman Mummy Portrait of a Man, the artist found it necessary to keep the ____ at a constant temperature.
A: molten wax

egg, pigment, and water

Q: ____ was the exclusive painting medium of artists during the Middle Ages.
A: tempera

Q: In both tempera and oil painting, the surface of the wood or canvas is covered with a ground of powdered chalk or plaster and animal glue known as ____.
A: gesso

Q: The transition to oil paint in the 14th and 15th centuries was gradual. For many years, it was only used for ____ in order to give the paintings a high sheen.

Q: Rembrandt used _____ to create his oil-on-board painting of the Head of St. Matthew.

Q: When a painter's oil paint becomes too thick, he has to thin it with a ______.
A: medium of turpentine

Q: Pop artist Roy Lichtenstein's portrait of George Washington is an oil-on-canvas image that most resembles a(n) ______.
A: comic-book hero

Q: _____ is a mixture of pigment and a synthetic resin vehicle that can be thinned with water.
A: acrylic

Q: Helen Oji's Mount St. Helen's is an opaque impasto composition in the shape of a _____.
A: Japanese kimono

Q: Contemporary watercolor is referred to as ____, made up of pigments and a vehicle of ____.
A: aquarelle; gum arabic

Q: ____ was the principal painting medium during the Byzantine and Romanesque eras of Christian art.

Q: The fluidity and portability of watercolor has often lent itself to _____.
A: rapid sketches and preparatory studies

Q: Miriam Schapiro's Maid of Honour is a paint-and-fabric construction that she labeled ______.
A: femmage

Q: Gilbert Stuart's 18th-century traditional portrait of George Washington achieves a realistic likeness largely through ______.

Q: The oldest form of printmaking is _____, and most likely the first people to use it were the ancient _____.
A: woodcut; Chinese

Q: Zhao Xiaomo's Family by the Lotus Pond is a ____. The areas that were NOT meant to be printed were carved out ____ the surface of the wood.
A: woodcut; below

Q: Woodcuts make use of the flat surface of wooden boards, but wood engravings use the end sections of the boards, yielding a _____ surface.
A: hard, non-directional

Q: In Paul Landacre's Growing Corn, we see a good example of the ______ that can be obtained from the skillful use of wood engraving.
A: precise lines and tonal gradations

Q: Intaglio prints are made from ____ into which lines have been incised.
A: metal plates

Q: In the ___ process, the artist creates clean-cut lines on a plate of copper, zinc, or steel by forcing a sharp burin across the surface with the heel of the hand.
A: engraving

Q: In creating his Christ Crucified between Two Thieves, Rembrandt used a drypoint needle in order to create ______.
A: soft, velvety lines

Q: Etching is an intaglio process in which the matrix is covered with a waxy substance and the design is drawn into the substance. The completed matrix drawing is then put into a(n) ______.
A: acid bath that etches the exposed areas of the matrix

Q: Etching is a very versatile medium. In Henri Matisse's Loulou in a Flowered Hat, he used ____ to represent the essential features of a woman.
A: only a few uniformly etched lines

Q: The popularity of relief printing declined with the introduction of the ___ process, which did not appear until the 15th century.
A: intaglio

Q: In works such as her Untitled mixed-media print of Chinese girls, Hung Liu's purpose is to ____.
A: highlight the degradation of previous generations of Chinese women

Q: Which of the following types of printmaking is NOT an essentially linear media?

Q: In the 17th century, a Dutchman developed a technique for mezzotint, from the Italian for ___, in which the metal plate is worked over with a multi-toothed tool called a ____.
A: half-tint; hatcher

Q: Mezzotint is rarely used because ______.
A: it is a painstaking and time-consuming procedure

Q: In The Painter and His Model, Picasso was able to approximate the effects of mezzotint with a much simpler technique known as ______.
A: aquatint

Q: Aquatint is frequently used along with line etching to mimic the effects produced by _____.
A: wash drawings

Q: _____ is the type of etching that can be used to produce the effects of crayon or pencil drawings.
A: soft-ground etching

Q: The 20th-century American abstract artist Josef Albers created Solo V, an inkless intaglio technique known as _____.
A: embossing

Q: The advent of the camera replaced the age-old need of art to imitate nature as closely as possible, and this change, in turn, led to the development of 20th-century artistic ______.
A: abstraction

Q: The word photography is derived from Greek roots that mean _______.
A: to write with light

Q: In both the camera and the ____, light enters a narrow opening and is projected onto a photosensitive surface.
A: human eye

Q: When a camera shutter opens for a few thousandths of a second over and over in quick succession, _____ shots are being taken.

Q: A(n) ____ magnifies faraway objects and collapses the spaces between ordinarily distant objects.
A: telephoto lens

Q: The "active layer" of film contains a _____ of small particles of _____.
A: emulsion; silver halide

Q: With a Polaroid camera, the photograph appears before your eyes. This is an example of ____ film.
A: color reversal

Q: Higher quality photographs are said to have higher _____.
A: resolution

Q: The first photographic process to leave a permanent image was invented in 1826 and known as ______.
A: heliography

Q: William Henry Fox Talbot's first "photographic drawings" were eerie, delicate photographs of ____ produced from a ______.
A: plants; negative

Q: After the daguerreotype, the next major advance in the history of photography was the development of the ____ process, an example of which is Young Lady with an Umbrella.
A: calotype

Q: By the 1850s, photographic portrait studios became quite popular and began to serve the needs of _____.
A: a growing middle class

Q: Alexander Gardner's Home of a Rebel Sharpshooter, Gettysburg is a graphic photo taken during the ___, probably from a camera in a wagon known as a _____.
A: United States Civil War; Whatsit

Q: Dorothea Lange's Migrant Mother is a touching photograph taken during the period of _______.
A: The Great Depression

Q: Margaret Bourke-White wrote, "using the camera was almost a relief; it interposed a slight barrier between myself and the white horror in front of me...." Here Bourke-White is describing _____.
A: Buchenwald during the Holocaust

Q: Edward Steichen's _____, taken in 1906, is one of the foremost early examples of the photograph as a work of art.
A: The Flatiron Building-Evening

Q: William Wegman's Blue Period is a canine spoof on _____.
A: Picasso's Old Guitarist

Q: A flash or whirl of abruptly changing newspaper headlines meant to indicate the progression of time and events in a film is known as a _____.
A: montage

Q: Which classic early color film depicted real life in black and white and imaginary life in expressionistic color?
A: The Wizard of Oz
Can brain biology explain why men and women think and act differently?

Experts talk about the role of neuroscience in gender differences – and what is still uncertain

Are brain differences to blame for communication breakdowns between the sexes? Can they explain why men and women respond differently in stressful situations? The evidence suggests such differences might very well influence behavior — but it's too soon to tell if they really do, according to experts participating in a panel discussion on The Neuroscience of Gender at the German Center for Research and Innovation in New York City.

Anke A. Ehrhardt, PhD (Photo by Nathalie Schueller)

"We clearly are at the beginning of the story," said panel moderator Dr. Anke A. Ehrhardt, Vice Chair for Faculty Affairs and Professor of Medical Psychology (in Psychiatry) at Columbia University Medical Center and Research Division Chief of the Division of Gender, Sexuality, and Health at the New York State Psychiatric Institute. What's more, she cautioned, "acknowledging brain effects by gender does not mean these are immutable, permanent determinants of behavior, but rather they may play a part within a multitude of factors and certainly can be shaped by social and environmental influences."

The speakers, Dr. Bruce S. McEwen, a professor and Head of the Harold and Margaret Milliken Hatch Laboratory of Neuroendocrinology at The Rockefeller University, and Dr. Ute Habel, Professor of Neuropsychological Gender Research at the University Clinic for Psychiatry, Psychotherapy, and Psychosomatics at RWTH Aachen University in Germany, also stressed the need to put research findings in perspective and avoid drawing definitive conclusions at this point. Yet, despite the uncertainty — or perhaps because of it — both continue to forge ahead in their respective areas, working to tease out gender differences in the brain and determine what they may mean in the real world.

The hormonal connection

Dr. McEwen, a neuroscientist and neuroendocrinologist, works mainly in animal models, studying how sex and stress hormones affect the brain. What finding from his work has surprised him most? "How very differently female and male brains respond to stress on an anatomical level," he said in an interview before the event. "We suspect there are many other subtle differences like this that we don't know anything about as yet."

Bruce McEwen, PhD (Photo by Nathalie Schueller)

Dr. McEwen elaborated on the stress response differences during his presentation. Work in his lab and in others has demonstrated plasticity — the ability to change in response to experience, environment, injury and other factors — in adult brains, overturning previous beliefs that the adult brain is a static entity. "Even as the brain ages, there is the formation, lengthening and shrinkage of processes called dendrites, which connect nerve cells, creating synapses. Each dendrite has 'spines,' where new synaptic connections are made during the course of everyday life," he explained. In addition, "a small amount of neurogenesis, or production of new nerve cells, occurs in the adult hippocampus, and that's important for learning and memory."

What happens when stress enters the picture?
In male animals, "stress causes the dendrites in the hippocampus and the prefrontal cortex to shrink, and the spine synapses to be lost," he said. "By contrast, in the amygdala and in the orbital frontal cortex, the same stress causes dendrites to expand and new synapses to be formed." Furthermore, studies in young male animals showed resilience — an ability to rebound when stress is discontinued — evidenced by the recovery of previously shrunken dendrites. However, "a middle-aged rat shows shrinkage and only partial recovery when the stress is terminated, and an aging rat shows the same amount of shrinkage in response to stress, but no evident recovery, even at three weeks (after the stressful incident)," demonstrating that age may hinder resilience.

By contrast, in female animals, the response to stress was very different, according to Dr. McEwen. Dendrites connecting to other nerve cells in the cortex didn't shrink as they did in males, and dendrites that communicate with the amygdala, a part of the brain that's important for fear and anxiety, responded to stress by actually expanding. However, that expansion only happened in the presence of estrogens, Dr. McEwen noted. "In other words, the female rat has to have her ovaries intact or we have to give her estradiol for this effect to occur," emphasizing the role of sex hormones in the female brain's stress response.

What do these differences mean? "The bottom line is, we don't have a complete picture," Dr. McEwen acknowledged. However, he speculated that the findings might help explain why, as other studies have shown, "women tend to respond to stress with anxiety and depression, whereas men take it out in a different way, showing more substance abuse and antisocial behavior." If, as research progresses, this turns out to be the case, then therapies might eventually be targeted to the affected brain regions, he suggested.

The power of gender stereotypes

Ute Habel, PhD (Photo by Nathalie Schueller)

Dr. Habel is a psychologist and psychotherapist whose research focuses on neurobiological correlates of emotion and cognition, and the effects of psychotherapeutic interventions and hormonal influences on brain activation. "Gender might be the most influential factor in our lives, starting even before birth," she stated. "In everyday life, we continuously deal with gender differences, and sometimes we struggle with the peculiarities of the 'typical' male or the 'typical' female."

But although neuropsychological testing has shown some "subtle gender differences" — for example, women have a higher perception speed while men are better at mental rotation tasks — "we can't make any general statements that, for example, women are better in the verbal domain and men normally outperform women in non-verbal domains," Dr. Habel stressed.

In fact, any interpretations of gender differences in the brain are filtered through "a long history of female discrimination," she said. Charles Darwin taught that women were biologically inferior to men, she noted, and anthropologist Paul Broca, who gave his name to the region of the brain responsible for speech production, stated: "We are therefore permitted to suppose that the relatively small size of the female brain depends in part upon her physical inferiority and in part upon her intellectual inferiority." This thinking has permeated scientific research, Dr.
Habel said, "and remains a problem in any evaluation of structural and functional differences in the brain. Gender differences are strongly influenced by gender stereotypes, socialization and learning, as well as genes and hormones and environmental factors. When we do research, it's very difficult to disentangle the individual contributions of each of those factors; instead, we have to acknowledge and take into account that there is a complex interaction."

The impact of gender stereotyping, including self-stereotyping, was apparent in an experiment on empathy. "Although there are divergent differences in the definition of empathy, most researchers would agree there are three essential components: You have to recognize the emotion of the other person; you have to take the perspective of the other person; and you have to vicariously feel this emotion by distinguishing that it is not your own emotion, but rather the emotion of the other person," Dr. Habel explained. To test the components separately in a single experiment, her team developed an "empathy task" using photos of people with different facial expressions:

• For the emotion recognition component, study participants were asked to describe the emotion displayed on a person's face.
• For the emotional perspective component, participants were asked to identify the appropriate response (one photo or another) to seeing someone pick his teeth in public.
• Affective responsiveness was assessed by presenting participants with short descriptions of emotional situations — for example, someone lost a valuable souvenir — and asking them how they would feel by choosing between two possible facial expressions.

An Interview with Dr. Habel: In an earlier interview for the German Center for Research and Innovation, Dr. Habel provides additional insights into the extent to which gender influences the neurobiology of emotions and how female and male brains age differently. She also discusses why depression and anxiety are more prevalent in women than in men and which psychotherapeutic interventions she would like to investigate further.

In every case, there were no gender differences in behavior on any of the components. Yet, in a self-assessment questionnaire administered during the same study, females rated themselves as more empathic than males. "It was clear that, automatically, gender stereotypes came into play with those questionnaires," Dr. Habel emphasized.

When her team used functional magnetic resonance imaging to look at brain activation during the tests, they found "a marked difference" between men and women, Dr. Habel said. Overall, brain activation was much higher in women than in men on all tasks, particularly during the follicular phase, suggesting a role for estrogen in the empathy response. Although there was some overlap in the brain regions that were activated, "women tended to activate more emotion and self-related regions whereas men activated more cortical, or cognitive, areas," she said.

"But what does that mean?" Dr. Habel continued. "We have no behavioral differences even though, on a cerebral level, we do see differences." The same is true for earlier studies, which showed, for example, that, as in animals, women's brains respond differently to stress than do men's. Taken together, "these results suggest, as Dr.
McEwen indicated, that men and women use different strategies to reach the same performance outcomes," she suggested. "That's the best interpretation we have at this point."

Slide caption: Dr. Ute Habel used this slide during her presentation to illustrate the power of gender stereotypes. "Gender stereotypes — comprising traits and behavior that are considered typical for men or women — can be quite powerful in influencing our behavior and in eliciting prejudices," she explained. "Sometimes they may be based on actual gender differences; often they are not. The problem is that implicit beliefs are very common and exert their influence already at very early ages. Stereotypes that imply a negative trait or the belief in an inferior performance in one gender, also called 'stereotype threat,' may in turn negatively affect performance. Those negative stereotypes have traditionally been associated with females and may have contributed to women's discrimination."

Note from the Author

"Whenever you put gender into the title of an event, it's a full house," Dr. Ehrhardt quipped before introducing Drs. McEwen and Habel. This event did, indeed, have a large turnout, and I'll bet at least some of the other attendees shared my hope that one or both of these eminent scientists would show definitive diagrams of the male and female brains in a more evidence-based light than what we saw in Dr. Habel's slide above. But I now understand that although we may want concrete evidence supporting our perspectives on the differences between men and women — or at least something that explains these differences — for now, we need to accept that the best response science can give us is: "it's complicated."

Archived Comments

Mr. Gunn, July 12, 2013: It will be interesting to see how these stereotypes and the interesting questions change as we see gay couples starting to develop the same sort of stereotyped roles that hetero couples do now.

Marilynn Larkin, July 12, 2013: Great point! On a somewhat related note, Ute Habel has been researching transgender individuals. She has some findings for males and is now crunching data on females. This is an exciting area! Both the research and stereotypes are likely to evolve and influence each other.

Ute Habel, July 14, 2013: Indeed, an interesting aspect. Stereotypes exist universally and are shaped by our culture. Gender stereotypes are hence quite old and affect all of us more or less equally; they are not really linked to sexual orientation. The question is rather how we can efficiently get rid of them, especially those that have negative effects. This is especially difficult since stereotypes are quite powerful and exert their influence mostly on a subconscious level.

Jennifer, February 14, 2014: I have several trans friends, and it is very difficult for me to explain to them my understanding of their "lifestyle" through science. I made the comment that genitals are only cells... we don't think or feel with those cells, so why then is gender and/or preference based on them? If you were to take this research to a whole other level, imagine the possibilities.
American government is based upon English principles. Explain their evolution and how they led to America's separation from Britain.

Answer (kodasport, High School Teacher):

The United States is based on English government principles in several ways. These principles led to the creation of the United States in July 1776.

English citizens had enjoyed the freedoms included in the Magna Carta. This breakthrough gave citizens certain freedoms that the King had not previously granted, such as property rights, trial by jury and the beginnings of democratic principles. As English citizens, the Americans thought they were to enjoy these freedoms as well, but British attitudes were often to the contrary.

An English philosopher by the name of John Locke had influenced many of the leaders of the colonies. Locke advocated a very powerful electorate that should demand that the government exist only to protect the freedoms of the people. He went as far as to suggest that if the existing government didn't protect these rights, the people should get rid of that government. So when you look at the turmoil of the time between the French and Indian War and 1776, a situation had been created that was ripe for revolution. The freedoms that the American colonists thought they were entitled to were, in their opinion, nonexistent. The leaders of the Revolution, such as Adams, Jefferson and Franklin, demanded independence.

Answer (enotechris, College Teacher):

The evolution of English government (and a parallel study of English history) shows a process by which political power became consolidated and wielded by one ruler. This was actually a necessary phase for England to become a country and a nation. Indeed, claiming the sanction of heaven through the political argument of the Divine Right of Kings allowed the executive powers of the king, or monarch, to justify his or her role in the Great Chain of Being. By the time of Medieval England, the notion of "Divine Right" had been seriously questioned and, as embodied in the Magna Carta, had been curtailed. In effect, this document began the process of dispersing the absolute monarchy, and establishing judicial and legislative branches of government separate from the executive. By the time of Renaissance England, the notion that the monarch was not only not absolute, but also not above the law, reflected the shift in power toward the legislative, specifically through the institution of Parliament. Democracy, where the people have a say in the workings of government, became more prominent; Parliament became supreme over the monarchy. The notion that any "freeman" could have a vote and a political say reflected the trend toward individual freedom. America inherited this evolving British concept of government, and contributed the notion that if you didn't like the way you were governed, you could, and had a moral obligation to, change it. The "parliaments," or legislative bodies, of the colonies were sadly ignored and sidelined by the Parliament in London, and, by asserting the English notion of self-governance, the colonies broke away. In his Declaration, Thomas Jefferson refers to "our English Brethren" and, as colonists claiming the rights of Englishmen, asserts the right of the governed to alter or abolish the government should they see fit to do so.
Answer (akannan):

There will be many approaches to answering this question. Certainly, one way would be to show how American political thought derived from English notions of the good. I think one place this is present is in the American legislature. Drawing from the British bicameral system, the legislative branch, consisting of an "upper" and "lower" house (Senate and House of Representatives), draws on the British tradition of Lords and Commons. Another parallel from British law came from the Magna Carta and the English Bill of Rights, documents whose principles became an embedded part of the American political tradition. Rights and entitlements such as inalienable rights that must be safeguarded by government, the notion of equality before and within the law, and the protection of habeas corpus (the safeguard against unlawful imprisonment) are elements that have become bulwarks of the American form of government. In terms of theoretical approaches to government, John Locke's English tradition of Enlightenment teachings was absorbed by the framers in the development of both the Declaration of Independence and the Constitution. The ideas of a social contract between rulers and their citizens, as well as a sphere of freedom that cannot be vitiated, are foundational teachings that were appropriated in the composition of American government.
The area of a square is represented by `36x^2`. What is the expression for the length of each side of the square?

Answer (tjbrewer, Elementary School Teacher):

A square is a parallelogram that is both a rhombus and a rectangle. Because a square is a rectangle, its area is found by the formula `A = b xx h`. Because a square is a rhombus, each of its sides is the same length, so `b = h = l`, and the area of a square is `A = l^2`. Since we know that `A = 36x^2`, finding the length of each side is as simple as finding the square root of A, which in this case is `sqrt(36x^2) = sqrt(36) xx sqrt(x^2)`. Since `sqrt(36) = 6` and `sqrt(x^2) = x`, the length of each side is `6x`.
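For reference, the same derivation in standard notation; note the implicit assumption, not stated in the answer above, that `x` is non-negative, since a side length cannot be negative:

\[
A = s^2 = 36x^2 \;\Longrightarrow\; s = \sqrt{36x^2} = \sqrt{36}\,\sqrt{x^2} = 6x \qquad (x \ge 0)
\]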
Describe the contributions and significance of Isaac Newton to science and Europe's understanding of the natural world in the eighteenth century.

Answer (martinjmurphy, Middle School Teacher):

Sir Isaac Newton made important contributions to science in three areas: mathematics, optics and physics. In optics, Newton showed that sunlight is actually made up of colored lights; that is, sunlight is made up of the colors of the rainbow. He did this through the use of prisms. This discovery led to Newton perfecting the telescope. In mathematics, Newton developed differential calculus. This made it possible to calculate areas within a shape that had curved sides. Perhaps Newton's greatest discoveries came in the field of physics. He came up with the law of universal gravitation: every object in the universe attracts every other object. This attraction is affected by two factors, the mass of the objects and the distance between them. He also came up with three laws of motion. These discoveries completely changed how people had previously viewed the world and the universe.

Answer (mrkirschner, High School Teacher):

Sir Isaac Newton (1643-1727) was the most influential scientist of the 17th century. He virtually invented the fields of physics and calculus, two subjects that, even in modern times, are quite challenging. When Cambridge closed down because of the plague, Newton developed some of the great theories in the history of physics. Consider Newton's Laws of Motion (1666), a staple of physics even today:

• An object at rest stays at rest, and an object in motion stays in motion, unless acted upon by an outside force.
• The force on an object equals its mass times its acceleration.
• For every action there is an equal and opposite reaction.

He utilized these laws of motion to create theories on gravity that ultimately changed how scientists viewed the universe. He was able to explain the motions of the Sun and the planets in a groundbreaking fashion. Isaac Newton's most accomplished work was the book Principia. In Principia, he broke down the mechanisms of the solar system through the use of equations. These equations explained the nature of orbits and the pull of gravity between heavenly bodies. He was able to explain to the world that the Moon orbits around the Earth because the Earth is substantially heavier than the Moon. This mass allows gravity to pull the Moon around the Earth and not vice versa.
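Since both answers center on universal gravitation, it may help to state the law in modern algebraic form (Newton himself argued geometrically in the Principia). The attractive force between two bodies is proportional to the product of their masses and inversely proportional to the square of the distance between their centers:

\[
F = G\,\frac{m_1 m_2}{r^2}
\]

Here \(m_1\) and \(m_2\) are the masses, \(r\) is the separation of their centers, and \(G\) is the gravitational constant. The same equation also illuminates the Moon example above: the force on each body is equal in magnitude, but the Moon's much smaller mass gives it a much larger acceleration, so it is the Moon that is pulled around the Earth.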
How was the colony of Virginia similar to and different from those in New England?

Answer (pohnpei397, College Teacher):

There were not really all that many similarities between these colonies. The major similarity, of course, was that both were subordinate to the British. Both sets of colonies were ruled by governors appointed by the British, for example.

But there were many more differences. Two of the most important of these were:

• Most of the people going to Virginia were single people who were there to work on plantations. Many of them were not free. By contrast, most of the people who went to New England went as whole families who were there to create small farms for themselves.
• The Virginia colony was settled by people who were there mainly for economic reasons. By contrast, the New England colonies were settled more by people who wanted to run colonies according to their own religious beliefs.
The Canterbury Tales Essay
Geoffrey Chaucer, The Canterbury Tales

It is April. Thirty pilgrims have gathered at the Tabard Inn just south of London prior to departure for the shrine of St. Thomas à Becket, martyred in his cathedral at Canterbury two centuries earlier. Socially they range from the Franklin, a wealthy landowner, to the Plowman; morally from the Parson, who has taught Christ's word ("but first he followed it himself") to the Pardoner, a rascally confidence man. The proprietor of the Tabard offers to accompany them as Host and suggests that they entertain themselves on the way by telling stories in turn; the teller of the most entertaining and morally instructive tales will later receive a free meal.

The tales also vary, illustrating popular medieval genres: romance, fable, saint's life, fabliau (a coarse, comic tale), exemplum (a story designed to illustrate the theme of a sermon). Chaucer the pilgrim burlesques a type of popular romance, but his satirical purpose goes unrecognized and the Host will not allow him to finish. The Wife of Bath, on the lookout for a sixth husband, tells a tale cunningly contrived to prove that the main ingredient of domestic happiness is rule by the wife. The Miller, somehow drunk early on the first day, tells of a carpenter deceived and made the laughingstock of his neighborhood by his wife and her lover. The hot-tempered Reeve, a carpenter by profession, responds in kind. The Wife of Bath baits the Monk, who has interrupted her. The Knight averts a brawl between the Host and the Pardoner.

The Canterbury Tales is fragmentary and unfinished, but Chaucer carefully concludes with the tale (actually a sermon) of the good Parson, who reminds them all that they are on a pilgrimage not merely to Canterbury but to heaven. Several modern translations of the poem are available, but mastering Chaucer's Middle English repays the effort. Many editions and introductions summarize handily his spelling, pronunciation, and grammar.
8 Ways Your Nails Reflect the State of Your Health

Your nails can provide many clues as to the condition of your overall health.

While it's obvious your nails demonstrate your personal hygiene, grooming habits and even your individuality and flair for style, did you know they can also provide hints about the state of your overall health? Signs of underlying health issues can manifest through your nails' texture, thickness, color and shape, so it's important to check them regularly for anything suspicious. Here are some common appearances your nails can take on that correspond with health problems, so take a minute to examine your own nails and note whether they match any of the descriptions listed.

1. Short and Chewed Off
While biting your nails isn't typically a health condition on its own, this habit can cause issues. For one, biting your fingernails is unsanitary and can transfer potentially harmful bacteria from your fingers and from under your nails into your mouth, where it can easily cause illness. Consistently gnawing on your nails can damage your teeth and also contribute to jaw problems by causing pain or issues with your temporomandibular joint (TMJ), according to the University of Illinois at Urbana-Champaign McKinley Health Center. Chewing your nails repetitively and constantly may also be a sign of a psychological condition like obsessive-compulsive disorder.

2. Pale Nail Bed
Nails are normally pinkish in hue, but if they are looking closer to white or very pale, it could be a sign of the blood condition anemia. This might signify that you aren't receiving enough iron in your diet, which affects the level of red blood cells in your blood. Low iron levels can manifest through inadequate blood oxygenation, causing your skin and certain tissues, like those under your nails, to become pale, according to Shilpi Agarwal, M.D. Pale nails may also be a sign of other health concerns, like congestive heart failure, malnutrition or liver disease.

3. Blue Hued
Nails that have a blue tint can convey that your body is lacking a sufficient amount of oxygen. This can be evidence of conditions that restrict and impair your breathing, like the lung disease emphysema. In addition, blue nails may also illustrate that your heart is unable to properly pump blood into your bloodstream.

4. Yellow and Thickened
If your nails are noticeably thicker than usual and have acquired an unpleasant yellow hue, then you may have a fungal infection. Yellow nails may also be a sign of psoriasis, a skin condition.

5. Dark Brown or Black Lines
If you notice some dark stripes or spots under your nails, they could be a sign of a very serious illness, like melanoma. Many people don't realize that the sun's harmful UV rays can penetrate your fingernails, reaching your nail bed and potentially doing cancer-causing damage. Sunlight can't make it through colored nail polish, so if you're planning on being out in the sun for a significant period of time, you'll want to make sure your nails are protected before exposing them to the sun.

6. Raw, Bloody and Torn Cuticles
Many people who bite their nails also go after their cuticles and the skin around the nail bed. Tearing the skin around your nail beds can cause bleeding, which invites bacteria into the wounds to cause infection.

7. Horizontal Lines or Indentations
These kinds of depressions across your nails commonly appear due to injury.
However, these "Beau's lines" can also be a sign of an illness like uncontrolled diabetes, circulatory disease, zinc deficiency or peripheral vascular disease. Other health conditions associated with a high fever may also play a part in causing these lines on your nails, according to the Mayo Clinic.

8. Clubbed or Inverted Appearance
You can identify clubbing by checking to see if the tips of your fingers are large and bulbous with the nails curving around them. Clubbed fingers and nails can mean that your blood is poorly oxygenated, or they can be symptomatic of certain lung diseases. According to the Mayo Clinic, clubbing may also be indicative of other issues like inflammatory bowel, liver or cardiovascular diseases. In an extreme case, nail clubbing may also be a symptom of AIDS.

If you notice your nails mirroring any of these appearances or showing similarities, it's very important that you head to your local physician for an examination and an appropriate treatment plan, if needed. Your nails could show the first sign of a serious illness, so be sure that you are familiar with their typical appearance so you can easily notice drastic changes if or when they occur. Check out our great selection of health-enhancing products at eVitamins today, and start living healthier!
Causes of the Civil War

1. Antebellum
Term meaning before the Civil War. Don't forget the importance of Sectionalism, where one's loyalty rests with one's section of the country rather than the nation as a whole.
North - Favors Tariff and Bank, Most Populous, Most Industry. Daniel Webster.
South - Favors Slavery, Hates Tariff and Bank, Mostly Agricultural. John C. Calhoun.
West - Split on slave and tariff issue, least populous. Henry Clay.

2. Compromise of 1820 / Missouri Compromise
Maine is a free state. Missouri becomes a slave state, but no slavery is allowed north of the 36 degrees 30' line in the Louisiana Territory. Henry Clay, The Great Compromiser, gets credit for this legislation.

3. Slave life
Paternalism - the relationship between slave and master was that of child to parent. Only a very small percentage of people owned slaves before the Civil War. Surprised? Well, 6-7% of people, or 25% of families, had slaves in the South. Most of the Southern population was comprised of poor yeoman farmers. Slaves held onto their culture in music, but combined it with Christianity. Eli Whitney's cotton gin actually increased the need for slaves, as 2/3 of the world's cotton came from the South by 1860. Black Codes were laws denying blacks (free and slave) Constitutional rights. Slavery was different from area to area; it is impossible to make generalizations about slave life. Generally, though, gang labor persisted on Southern plantations.

4. Texas, the Alamo, etc.
Americans and Texans (Tejanos) are defeated at the Alamo. "Remember the Alamo" was yelled at the Battle of San Jacinto. The Americans win Texas, but Jackson won't take Texas into the Union because it would disrupt the free/slave state balance. In 1845, President Tyler annexed Texas as a state.

5. Manifest Destiny
The belief that the United States is destined to gain all land from "sea to shining sea," or Atlantic to Pacific. The term was coined by John L. O'Sullivan in 1845 when he wrote about it in the US Magazine and Democratic Review.

6. Mexican War
The US and President Polk are itching for a fight. They want territory, and believe that the border between Mexico and the US is the Rio Grande; Mexico believes the border to be the Nueces River. Polk and Congress believe that the Mexican Army shed American blood on American soil, so we fight a war and win. (Note: a nobody Whig named Abraham Lincoln doubted the spot where American blood was shed.) The US gets California and a lot of western territory after the Treaty of Guadalupe Hidalgo is ratified (1848). This creates an imbalance of slave to free states, and the belief that "Mexico will poison us!"

7. Compromise of 1850
California needs to become a state after the Gold Rush of 1849. It is a free state. The slave trade was abolished (the sale of slaves, not the institution of slavery) in the District of Columbia. The Territory of New Mexico (including present-day Arizona) and the Territory of Utah were organized under the rule of popular sovereignty. The Fugitive Slave Act was passed, requiring all U.S. citizens to assist in the return of runaway slaves. Texas gave up much of the western land it claimed and received compensation of $10,000,000 to pay off its debt.

8. Underground Railroad
A network of safe houses for slaves who were looking to escape to the North and Canada. This allowed them to avoid the Fugitive Slave Law.

9. Ostend Manifesto
Named for a secret meeting in Ostend, Belgium, it was a scheme for the US to purchase Cuba from Spain for $120 million. Inevitably, Cuba would have become a Southern slave state. When free-soilers in the North learned of the scheme, they greatly protested, and the plan was dropped.

10. Wilmot Proviso
A failed attempt in 1846 to prevent slavery from expanding into territories taken over in the Mexican War. David Wilmot was a young Congressman with an idea; the idea was bold enough to break Northern Democrats away from their friends in the South.

11. Kansas-Nebraska Act
Reverses the Missouri Compromise. Constructed by the "Little Giant," Stephen Douglas (Clay had passed on by then). Let Kansas and Nebraska decide if they would be slave states or not; that notion is called popular sovereignty.

12. Bleeding Kansas
Border Ruffians (pro-slavery) and Free Staters converge on Kansas to stuff the ballot box now that popular sovereignty (states choose if they want slavery or not) was the law. Millions in property damage. Dozens are killed. It proved to be a mini-Civil War in Kansas. John Brown becomes a national figure after he kills Border Ruffians at Pottawatomie, Kansas.

13. Lecompton Constitution
Kansas votes for slavery, it's added to the state Constitution, and it is approved by President Buchanan.

14. What abolitionist writings should I know about?
William Lloyd Garrison wrote The Liberator, an anti-slavery newspaper. He was not for equality, but was an abolitionist. Garrison was ahead of his time, but not well received in the North.
The Slave Narrative of Frederick Douglass. Douglass, an escaped slave, detailed his experience in bondage.
Uncle Tom's Cabin by Harriet Beecher Stowe. Stowe, the daughter of an abolitionist, wrote this book that detailed the horrors of slavery. In 1862, Abraham Lincoln said to her, "So you're the little woman who wrote the book that made this great war."
The Impending Crisis of the South by Hinton Rowan Helper. A criticism of slavery that was banned in the South.
The Grimké Sisters were early women reformers. In 1836, Angelina Grimké wrote "An Appeal to the Christian Women of the South," in which she encouraged women to join the abolitionist cause.
Not a total abolitionist society, but you need to know the American Colonization Society. It looked to return slaves to Africa, or the colony of Liberia. It was supported by people who ranged from abolitionists to Southerners nervous about the presence of free blacks.

15. John Brown / Harpers Ferry
1859 raid on the federal arsenal. Brown, and a few followers, attempted to fuel a larger rebellion. It didn't happen. Brown got holed up in a firehouse, and was later captured by the federal Army. After he was hanged, he was hailed as a martyr in the North, and a terrorist in the South. Other slave rebellions to know:

16. Denmark Vesey
1822 - failed plot in South Carolina; Vesey and others were executed.

17. Nat Turner
1831 - violent rebellion in Virginia that led to almost 200 deaths (black and white).

18. Republican Party and Bleeding Sumner
With sectionalism boiling over on the slave issue, a new political party emerged. The Republican Party combined Northern Democrats, Free-Soilers, Know-Nothings (an anti-immigrant party), and former Whigs. Their main goal was to stop the spread of slavery in the West. Some in the party were abolitionists (against slavery altogether). Thus, this completes our political party split guide. After Republican Charles Sumner of Massachusetts went after Senator Andrew Butler in a speech, "The Crime Against Kansas," Butler's cousin, Preston Brooks, beat Sumner over the head with a cane on the Senate floor.

19. Dred Scott v. Sandford, 1857
Dred Scott was a slave who was taken to live in free northern territory. Because he lived on free soil for an extended period of time, he believed he had legal recourse to sue for his freedom. Chief Justice Roger B. Taney, on behalf of the Supreme Court, stated that Scott was a slave, which meant that he was not protected by the United States Constitution and couldn't even sue in court, and that slave compromises like the Missouri Compromise were irrelevant, as according to the Fifth Amendment, people could not be deprived of their property. Slaves were property.

20. Lincoln-Douglas Debates
1858 Senatorial debates between Abraham Lincoln and Stephen Douglas. Lincoln becomes a national celebrity and a critic of slavery. NOTE: Lincoln is against the spread of slavery, but is not an abolitionist. In his Freeport Doctrine, Douglas favored popular sovereignty over the Dred Scott decision.

21. Presidential Election of 1860
Lincoln wins the election without getting a majority of the popular vote; he received 40%. Stephen Douglas, the Northern Democrat in the race, received 29%; John Breckinridge, the Southern Democrat, 18%; and John Bell of the Constitutional Union Party, 13%. Lincoln still won the Electoral College vote with 180.

22. Secession
When states leave the Union. The South left after the election of Lincoln. South Carolina was the first to leave, followed by the rest of the soon-to-be Confederacy.

23. Causes of the Civil War
Slavery; sectional differences (the Bank, the Tariff); economic differences; the forming of the Republican Party; ideological differences; failure of the slave compromises.
Epidemiology Chapter 2

1. Enumeration and Tabulation: Listing all the data and creating a frequency table.
2. Histogram: A bar chart used for continuous variables.
3. Line Graph: Used to display trends.
4. Pie Chart: A circle that shows the proportion of cases according to several categories.
5. Percentage: A proportion multiplied by 100.
6. Proportion: A type of ratio in which the numerator is part of the denominator.
7. Ratio: The value obtained by dividing one quantity by another.
8. Rate: Also a type of ratio; the denominator involves a measure of time.
9. Count: Total number of cases of a disease or other health phenomenon being studied.
10. Period Prevalence: All cases of a disease within a period of time. When expressed as a proportion, refers to the number of cases of illness during a time period divided by the average size of the population.
11. Point Prevalence: Number of cases of illness in a group or population at a point in time divided by the total number of persons in that group or population.
12. Prevalence: Number of existing cases of a disease or health condition in a population at some designated time.
13. Incidence: The occurrence of new disease or mortality within a defined period of observation in a specified population.
14. Incidence Rate: Number of new cases of a disease, or other condition, in a population divided by the total population at risk over a time period, times a multiplier.
15. Population at Risk: Those members of the population who are capable of developing a disease or condition.
16. Reference Population: Group from which cases of a disease have been taken; also refers to the group to which the results of a study may be generalized.
17. Case Fatality Rate: Number of deaths caused by a disease among those who have the disease during a time period.
18. Crude Rate: A summary rate based on the actual number of events in a population over a given time period.
19. Death Rate: Approximates the proportion of a population that dies during a time period of interest.
20. Specific Rate: Statistic referring to a particular subgroup of the population defined in terms of age, race, or sex; may also refer to the entire population but be specific to some single cause of death or illness.
21. Adjusted Rate: Rate of morbidity or mortality in a population in which statistical procedures have been applied to permit fair comparisons across populations by removing the effect of differences in the composition of the various populations.
22. Cause-Specific Rate: A measure that refers to mortality from a specific cause divided by the population size at the midpoint of a time period, times a multiplier.
23. Age-Specific Rate: The number of cases per age group of a population during a specified time period.
24. Sex-Specific Rate: The frequency of a disease in a gender group divided by the total number of persons in that gender group during a time period, times a multiplier.
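As a quick reference, the two workhorse measures above (cards 11 and 14) can be written out as formulas; the multiplier is conventionally a power of 10 (for example 1,000 or 100,000), chosen so the resulting rate is easy to read:

\[
\text{Point prevalence} = \frac{\text{number of existing cases at time } t}{\text{population size at time } t}
\]

\[
\text{Incidence rate} = \frac{\text{number of new cases during the period}}{\text{population at risk during the period}} \times 10^{n}
\]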
Marketing Chapter 3: Market Segmentation and Target Marketing

1. What is Market Segmentation? The process of dividing large, diverse markets into smaller submarkets that are more alike and need similar products or marketing mixes.
2. What is a Market Segment? A submarket or group of customers with similar needs and preferences.
3. How is market segmentation used? To find out what kinds of customer groups exist within the total market for a product, then determine which market segments to pursue and how to appeal to those segments.
4. What is Single-Variable Segmentation? Uses only one characteristic, like income, to segment a market.
5. What is Multivariable Segmentation? Uses a combination of characteristics, like income and gender, to segment a market.
6. What is a Consumer Market? A market that consists of people who buy products for personal or family use.
7. What are Organizational Markets? Markets that consist of individuals or formal organizations that buy products for business purposes.
8. Geographic Segmentation: A method of dividing the total market for a product based on the needs and desires of populations in different jurisdictions or physical locations.
9. Micro-insurance: Protection against insurable risks to the assets and lives of target populations such as micro-entrepreneurs, small farmers, the landless, women, and low-income earners through formal, semi-formal or informal institutions.
10. Psychographic Segmentation: A method of dividing the total consumer market for a product or service based on multiple characteristics that describe consumers' attitudes, beliefs, opinions, values, lifestyles, activities, and interests.
11. Persona: A model created by a company's marketing staff used to represent a particular demographic portion of a market segment to better understand the habits, needs, and motivations of consumers in that segment.
12. Behavioristic Segmentation: A method of dividing the total market for a product according to consumers' behavior toward a product or company.
13. Benefit Segmentation: A type of behavioristic segmentation in which the benefits that prospective consumers seek are used to segment markets.
14. North American Industry Classification System (NAICS): The official system used in North America to categorize businesses according to the type of economic or business activity in which they are involved.
15. Single-Employer Group: A group made up of the employees of one company.
16. Multiple-Employer Group: A group consisting of the employees of (1) two or more employers in the same industry, (2) two or more labor unions, or (3) one or more employers and one or more labor unions.
17. Debtor-Creditor Group: A group consisting of lending institutions, such as banks, credit unions, savings and loan associations, finance companies, retail merchants, and credit card companies, and their debtors.
18. Affinity Group: A group formed when individuals with common needs, interests, and characteristics communicate regularly with each other.
19. Target Marketing: The process companies use to evaluate each identified market segment and then select one or more segments as the focus for their marketing efforts.
20. Undifferentiated Marketing: A target marketing strategy that involves defining the total market for a product as the target market and designing a single marketing mix directed toward the entire market. AKA mass marketing.
21. Concentrated Marketing: A target marketing strategy that involves focusing all of a company's marketing resources on satisfying the needs of one segment of the total market for a particular type of product.
22. Niche Marketing: A form of concentrated marketing in which companies target small, narrowly defined subgroups within a segment that attract only one or a few competitors.
23. Differentiated Marketing: A target marketing strategy that aims to satisfy the needs of a large part of the total market for a particular type of product by offering a number of products and marketing mixes designed to appeal to different segments of the total market.
24. One-to-One Marketing: A type of target marketing strategy where the marketing mix is customized for each individual consumer or consumers in a specific location. AKA micromarketing.
25. One-to-Few Marketing: A type of target marketing strategy where the marketing mix is aimed at a particular group of customers with similar characteristics, needs, or past buying behavior and personalized to some extent. AKA one-to-some marketing.
26. Negotiated Trusteeship (Taft-Hartley Group): A multiple-employer group that results from a collective-bargaining agreement between one or more unions and the employers of union members.
27. Voluntary Trade Association: A multiple-employer group that consists of individual employers that work in similar industries and have common business interests.
28. Multiple-Employer Welfare Arrangement (MEWA): A multiple-employer group formed when small employers band together to offer group insurance and other benefits to their employees.
Calluses are thickened sections of skin that commonly form on the feet. They can be caused by frequent walking, walking barefoot or any other type of activity that causes friction or pressure on the feet. Dirt can embed deep into calluses, but it can be removed by proper cleaning, which involves removing some of the callused skin to expose the dirt.

1. Fill a tub with warm water, using just enough to cover your feet. Put your feet in the water and soak for a minimum of five minutes.
2. Scrub the callused area of each foot with a foot file or pumice stone while the skin is still moist. Rub gently over the skin, moving the stone or file in only one direction, until the skin on the foot feels smooth.
3. Moisten a scrub brush or loofah with water. Apply a few drops of a moisturizing exfoliating scrub to the brush or loofah. Scrub the callused area of your feet until all the dirt is gone.
4. Rinse your feet with warm water to remove the soapy residue, dry them thoroughly, then apply a moisturizer.
Diced vegetables look appealing in a finished dish, but beyond aesthetics, a proper dice serves a practical purpose: the evenly sized pieces cook at the same rate. You can dice any vegetable, including onions, potatoes, peppers, carrots and celery.

1. Wash the vegetable under running water, scrubbing lightly with a vegetable brush if necessary. Trim off the leaves and peel the vegetable if needed.
2. Cut peppers, tomatoes and other seed-filled vegetables in half and scoop out the seeds before you begin to dice. Cut large vegetables, such as onions, in half. Cut vegetables such as carrots, which don’t have a uniform shape, into pieces of similar size.
3. Place the vegetable flat on the cutting board. Cut the vegetable into horizontal strips of equal width: ¾ inch wide for a large dice, ½ inch wide for a medium dice or ¼ inch wide for a small dice.
4. Hold your knife perpendicular to the strips and cut the vegetables into cubes. Space the cuts evenly so you have large, medium or small cubes of equal size.
Learning Exercise: Periodic Elements: Activity for Discovery

Element exploration leading to a presentation on each element's unique characteristics while analyzing differences and similarities. This will lead to a better understanding of the organization of the periodic table.

Course: general chemistry
Date Last Modified: June 30, 2005

Day 1: Each student will pick three elements [1 metal, 1 non-metal and 1 transition metal] and do a write-up assignment. For each element they must include the following:
1. What are its main characteristics?
2. Where is it located on the periodic table?
3. Describe its discovery.
4. How is it useful?
5. What makes it unique?
6. Include a Lewis dot diagram and atomic structure.

Day 2: Put the class into teams of three for group presentations. Each person must contribute one of their elements to the presentation. Using whiteboards or posters, have students put down the information that they feel is most important for the whole class to know, including differences and similarities among the three elements. Presentations must include the whiteboard/poster and another visual (physical, picture, example, etc.).

Audience:
• College General Ed
• High School
• Middle School

Topics: Organization of the periodic table; familiarization with elements, their characteristics and their presence on earth
Prerequisites: ability to navigate on a computer, internet

Learning Objectives:
1. Familiarize students with the periodic table
2. Analyze the structure
3. Learn about characteristics of various elements
4. Make a connection to real life

Type of Task:
• Core activity
• Student-centered
• Team

Technical Notes: Computer is necessary.
Abraham Lincoln

Why did Lincoln change vice presidents for his second term?

Lincoln approved of Andrew Johnson as his vice president for his second term. It was an unusual choice, given that Johnson was a Democrat from Tennessee and therefore belonged to a different political party than the president. However, Johnson was a so-called “War Democrat” who showed loyalty to the Union. Lincoln and others realized that rather than running with his current vice president, Hannibal Hamlin of Maine, Lincoln would gain more political support by choosing a Southerner like Johnson.
What is the amount of sunlight in the grasslands?

Quick Answer

The amount of sunlight available to grasslands depends on the latitude at which the grassland is located. Tropical savanna lies at lower latitudes than temperate prairie, so light from the sun strikes the ground there at a more direct angle, delivering more energy per unit area than elsewhere.

Full Answer

In the tropics, each square meter of ground can receive as much as 6 kilowatt-hours of energy every day. Northern latitudes, and areas where cloud cover is common, receive less. Above 40 degrees, which is approximately the latitude of Boston, the energy available per square meter drops to only 3.6 kilowatt-hours per day.
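Those two figures are consistent with simple geometry plus weather losses. As a rough first-order sketch (a plain cosine model that ignores day length, axial tilt and atmospheric absorption, so it is only indicative), the energy per unit of horizontal area scales with the cosine of the latitude φ:

E(φ) ≈ E₀ × cos φ
E(40°) ≈ 6 kWh/m²/day × cos 40° ≈ 6 × 0.77 ≈ 4.6 kWh/m²/day

Geometry alone would take the tropical 6 kilowatt-hours down to about 4.6 at 40 degrees; the quoted 3.6 reflects the further losses the answer mentions, chiefly cloud cover and the longer path sunlight travels through the atmosphere at higher latitudes.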
What smells repel dogs?

Quick Answer

Some dogs dislike citrus smells and avoid orange, grapefruit and lemon peels scattered around an area. Dogs also avoid chili peppers because the capsaicin they contain irritates their skin, particularly their noses. Soaking cotton balls in rubbing alcohol and leaving them in an area is also effective. Dogs dislike the smell of vinegar and ammonia, but because these substances also make effective weed killers, homeowners tend to seek other measures to protect their lawns.

Full Answer

While the most effective way to keep a dog away is a fence, this is not always possible, so commercial dog repellents take different approaches. Some capitalize on dogs' natural tendency to avoid citrus smells or chili peppers. Some use black pepper as well, which dogs also dislike. Other smell-oriented repellents target dogs that urinate or defecate in yards, masking the familiar smells that attract them to the area. Others target dogs' sensitive hearing rather than their noses, emitting sonic and ultrasonic waves that irritate them.

Dogs also avoid water sprinklers because they are frightened by the spray, noise and movement. Even if they grow used to the sight, many dogs still dislike being sprayed with water unexpectedly. Motion-activated sprinklers are particularly effective at deterring them. Dogs also dislike certain plants, such as lavender and Coleus canina. Although dogs dislike garlic, the herb is also toxic to them and should not be used.
What are examples of man-made disasters?

Quick Answer

Examples of man-made disasters, which result from human negligence, error or intent, include nuclear warfare, biochemical warfare, nuclear explosions, toxic emissions, global warming, fires, civil unrest and terrorism. Man-made disasters can be catastrophic, causing extensive damage to the environment and resulting in human loss and devastation.

Full Answer

Some of the most devastating man-made disasters in history include the bombing of Chongqing, China's wartime capital, during World War II, which left over 2,500 people dead. London’s killer fog, or the Great Smog, which occurred in 1952, left nearly 12,000 people dead. The Bhopal, India disaster, a gas leak, resulted in nearly 30,000 deaths.
What is a Pick?

A maneuver in basketball in which a member of the offensive team positions his body in such a way as to allow a teammate with the ball to break free of a defender.

Sporting Charts explains Pick - basketball

The pick, also referred to as a screen, is a common maneuver employed by teams during offensive plays. The player who is executing the pick will generally take up a position on the perimeter or near the top of the key, angling towards the ball handler. The teammate who possesses the ball will then make a move toward and past the player making the pick, who then impedes any trailing defender from following the ball handler.

When setting a pick, it is important for the player doing so to remain stationary. If the player instead moves towards the defender and disrupts his movement, an offensive foul is usually called, resulting in a turnover to the opposing team. One of the most popular plays that involves the pick is known as the pick and roll, in which a player sets the pick, then "rolls" to the basket to receive a pass (typically from the player for whom he set the pick).
Getting Ahead in the Clouds

Atmospheric scientist Dan Cziczo takes his lab to the upper troposphere to solve a pressing climate puzzle.

As dusk settles over Cambridge on a midwinter evening, Dan Cziczo stops for a moment to take in a spectacular view. It’s sunset, and just above the horizon, streaks of red and orange bleed into deeper swaths of purple and blue as clouds of every type stretch across the darkening sky. Cotton-ball puffs of cumulus blend with a blanketed layer of stratus, and thin, featherlike threads of cirrus trail overhead. For anyone taking a break from work to look west along the Charles River, the sight is a stunner.

For Cziczo, a 42-year-old atmospheric scientist at MIT, the view, in a way, is his work. Cziczo studies cloud formation, and he sees clouds—cirrus in particular—as a key to answering a crucial question: exactly how much will Earth warm up in the near future? The best answer scientists have come up with so far is still uncertain—anywhere from 1 to 5 °C, depending on the amount of greenhouse gases that humans add to the atmosphere. In parts of the world such increases could mean rising seas, stronger storms, and damaging fires and floods. With every degree of warming, scientists predict up to 15 percent reductions in crop yields, 15 percent decreases in the area covered by Arctic sea ice, 10 percent increases in rainfall during the heaviest storms, and 400 percent increases in areas burned by wildfires in the western United States. That means the difference between one and five degrees of warming is quite significant.

In 2007, in a report issued by the Nobel Prize–winning Intergovernmental Panel on Climate Change, scientists from around the world concluded that much of the uncertainty in climate projections has to do with clouds. The scientists noted that while clouds may block solar radiation from entering the atmosphere, the conditions under which they form, and the extent to which they actually cool the planet by reflecting that radiation away, are very poorly understood. Further complicating matters, a warmer Earth holds more moisture, which could increase the total volume of clouds.

To reduce the uncertainty in climate projections, Cziczo and his research group at MIT are studying subjects such as aerosols, or airborne particles, which act as “seeds” that help clouds form. As particles like dust float up into the atmosphere, they provide a surface on which water vapor may condense or freeze, forming a fine mist that from a distance can appear puffy, layered, or wispy, depending on a region’s temperature and relative humidity. “Different particles and clouds act differently, and understanding this balance is really how we’re going to increase the certainty [of climate projections],” Cziczo says. “Pinning it down to say, ‘Are we getting one degree or three degrees of warming?’ That’s the kind of thing we’re trying to figure out.”

Seeing through cirrus

Tonight, Cziczo is catching the cloud display from an enviable perch: the roof of MIT’s 21-story Cecil and Ida Green Building, the tallest building in Cambridge. The roof has long been an ideal site for atmospheric study, housing instruments that measure wind speed, relative humidity, and temperature. On occasion, Cziczo, an associate professor in the Department of Earth, Atmospheric, and Planetary Sciences, will bring his students up here to take instrument readings, using the data to figure out whether and where clouds will form. This time, however, he’s just here for the view.
“If you look through the sunset, you can see the higher clouds, the sort of wispy ones,” Cziczo says as he points out cirrus clouds in the distance. “They make these cool filaments … their Greek name has to do with horsehair or mare’s tails, and those are the ones we’ve been studying lately, because of their importance in climate.”

Cirrus clouds form four to 12 kilometers above Earth’s surface in the upper portion of the troposphere, the lowest layer of the atmosphere. At such altitudes, water vapor can freeze around particles, forming ice crystals. The resulting ice clouds, as cirrus clouds are also known, are usually the first cloud layer that sunlight meets as it makes its way to the surface. The ice crystals act as tiny reflectors that scatter sunlight. It’s thought that clouds in general may reflect enough sunlight back into space to offset between half and three-quarters of the warming caused by greenhouse gases such as carbon dioxide. The net impact of cirrus clouds, however, is unclear: while they shield the planet from incoming sunlight, they also trap radiation trying to escape from its surface.

To know exactly what role cirrus clouds play, Cziczo says, it’s important to understand how they form—specifically, what particles, or aerosols, are naturally seeding them. As the sun sets, he heads down to his lab on the 13th floor, where two glass tubes, partially enclosed in a metal casing, are brewing up clouds. The setup, which he helped build, is called a cloud chamber. By adjusting the temperature and relative humidity in the chamber, researchers can create perfect conditions for the formation of cloud droplets or ice crystals. The only missing ingredient is an ideal seed on which clouds may form.

Sifting for seeds

Cziczo has been testing various aerosols to see which will most readily form clouds in the chamber. By feeding these different aerosols into the chamber as he mimics weather conditions in certain parts of the world, he hopes to determine what particles are causing cloud formation in those regions. To demonstrate, he takes out a small jar of gray powder, mineral dust collected in Wisconsin. “Let’s make a dust storm,” he says, and waves the open jar in front of two nozzles, which take air into each glass tube. The tubes are too small to generate visible clouds, so Cziczo uses a system of lasers to measure whether the water vapor has coalesced into droplets large enough to be considered cloud particles.

Next, Cziczo dries the cloud droplets by sending them through a small compartment filled with desiccants similar to what’s in the packets found in shoeboxes. He and his colleagues can then analyze them to determine the exact composition of the cloud seed. The cloud chamber is small enough to be packed up and taken to any part of the world to sample directly from a region’s atmosphere, which Cziczo says is a big advantage. Scientists may find that while a certain aerosol is excellent at seeding clouds in the lab, that aerosol isn’t found at the altitude where it might form clouds in nature. It is generally assumed that biological material is a fantastic substance for forming ice clouds, he says, noting that some types of pollen generate clouds remarkably well in his chamber. “But when you go in the field, you realize it’s just not present in the upper troposphere in large numbers, so it can’t have a large effect on clouds.
If you just sampled on the ground, you might fool yourself into thinking that it’s important.” So he has made a point of including both field studies and lab work in his group’s research.

Sitting in ice clouds

Over the past 15 years, Cziczo has visited mountaintops in search of the kinds of aerosols likely to be found throughout the upper troposphere. As a postdoc at the University of Colorado and the National Oceanic and Atmospheric Administration, he made trips to Storm Peak Laboratory, in north central Colorado, where he sampled high-altitude clouds with an early version of the cloud chamber. That experience prepared him for a research and teaching position at the Swiss Federal Institute of Technology in Zurich, and then for a literal high point in his career: a stint sampling clouds at the Sphinx Observatory, a remote research station built along the spine of the Bernese Alps. Named for its sphinx-like architecture, it is one of the highest land-based observatories in the world at more than 11,000 feet above sea level. At this altitude, mixed-phase clouds—which are similar to cirrus clouds—can blanket the peaks.

The site, which has been called “the Top of Europe,” is a tourist attraction by day, when people travel up by train—electrically powered so as not to taint scientists’ measurements with exhaust. At night, however, the tourists take off, and the researchers bunk down. “The first night, nobody sleeps,” Cziczo recalls. “You get a pounding headache, and you can feel your heart beating. It takes a couple days to acclimate, but after that, it’s amazing … at times, you’re actually sitting in ice clouds.”

After his time in Switzerland, Cziczo continued his work back in the United States at the Pacific Northwest National Laboratory. In 2011, he moved east to join the faculty at MIT. In March 2011, he and his students took the cloud chamber to the Johnson Space Center in Houston, where they mounted it to the nose of an old B-57 bomber. The plane, which was flown in the 1950s during reconnaissance missions, has since been repurposed as a research aircraft and is now used for projects such as a NASA field campaign called the Mid-latitude Airborne Cirrus Properties Experiment (MACPEX). The plane flies as high as 63,000 feet, making it perfect for sampling cirrus clouds, though it can be tricky to predict when they might appear.

In a period of six weeks, the team collected cloud samples over the Gulf of Mexico and the desert Southwest. Analyzing their composition showed that mineral dust, such as sand kicked up from a desert storm, accounted for about 60 percent of the aerosols in those clouds. The researchers also found that between 8 and 25 percent of cloud-forming dust particles contained lead. What they didn’t find was perhaps more surprising: biological material such as pollen and spores, or carbon emitted from smokestacks. While researchers have seen carbon and pollen readily form clouds in the lab, this type of aerosol accounted for less than 1 percent of cirrus cloud particles in Cziczo’s findings.

Researchers hope such experiments will help pin down exactly which aerosols form cirrus clouds and, more important, whether those aerosols are released by human activity. For example, Cziczo says that while mineral dust is a natural substance, made largely of dirt and sand blown off Earth’s surface, humans have significantly changed the amount of it in the atmosphere.
“When you change land uses, when you get rid of forests to create cropland, or you till under crops … you’re perturbing mineral dust,” he says. “So it’s a natural particle, but there’s more of it because of manmade activities. And it looks like it’s one of these things that’s forming ice clouds.”

The lead-containing cloud dust the team found probably came from human activity as well: sources like aircraft tailpipes, coal-burning power plants, and leaded gasoline that wasn’t phased out worldwide until the mid-1990s. Although he’s certainly not advocating further pollution, Cziczo acknowledges that “global warming would be much worse if it wasn’t for the human addition of particles to the atmosphere.”

“In the past, climate assessment groups have really not addressed whether anthropogenic activity might be affecting ice clouds, even though they are known to be important in the climate,” says Jon Abbatt, a professor of atmospheric chemistry at the University of Toronto. “That’s what’s special about Dan’s work. He has the capabilities to assess whether there are anthropogenic signatures in ice clouds. That’s the starting point for trying to make an assessment of ice formation as it relates to climate change.”

Measuring together

Cziczo and a growing community of atmospheric scientists hope that identifying the basics of cloud formation will wipe out any remaining uncertainty about global warming. In addition to their experimental work, they are developing climate models that incorporate cloud formation. The data they collect will help make such models much more precise, although there are significant challenges to overcome: most models simulate climate by dividing the globe into a grid, averaging weather data over squares that are, at the finest resolution, 100 square kilometers. Incorporating cloud data at the level of fine aerosols would require enormous computational power.

Chien Wang, senior research scientist in MIT’s Center for Global Change Science, is working with Cziczo to find ways to fit this fine-particulate data into large-scale climate models. “Dan’s lab and field work can obviously help us to improve our model to better simulate the linkage between aerosols and ice clouds, and their climate effects,” Wang says. “I’m very glad that we can have him in house.”

Cziczo’s work may also help overcome another major obstacle in the field. Researchers in disparate groups tend to build their own cloud chambers, and the measurements from one may not be comparable to those from another. The instrument that Cziczo helped engineer was recently licensed by a company in Colorado, which is manufacturing it as the first commercial ice-cloud chamber. Model number 001 has a place of honor on his MIT lab bench, and other researchers have placed orders for more units.

Back in his MIT office, Cziczo looks out his window, a wide view that takes in the Boston skyline and a few stray clouds above. Occasionally he takes pictures of cloud formations, or interesting contrails from passing planes, and asks his students to identify the type of cloud and where it might have formed. It’s an exercise born of plain wonder as much as scientific curiosity. “As a kid I was always sort of fascinated with clouds and flying and things like that,” he remembers. “I think I get more joy out of it now, because I understand some of it, and I still try to look outside and figure things out.”

In addition to studying clouds in Earth’s atmosphere, Dan Cziczo is investigating those that may form on Mars.
Though the Martian atmosphere is too thin to support life, recent images from NASA’s Mars Reconnaissance Orbiter have shown carbon dioxide snow, precipitated from clouds. To find out what might be forming those clouds, which looked to Cziczo like “diamond dust,” he and his students are growing clouds under Mars-like conditions in the lab. They recently made a trip to the largest cloud chamber in the world, the Aerosol Interaction and Dynamics in the Atmosphere (AIDA) facility in Karlsruhe, Germany—an old, repurposed nuclear reactor whose core has been replaced with a three-story-tall chamber. Scientists from around the world use the massive chamber to observe large-scale effects they could not see in benchtop models.

Cziczo’s team contacted NASA for samples of dust thought to be similar in composition to dust on Mars (it was actually collected from U.S. deserts) and placed them in the cloud chamber, adjusting its temperature and relative humidity to levels that have been observed on Mars. The experiment successfully formed a water-ice cloud. Cziczo hopes to continue this new extraterrestrial branch of his research, which he says was inspired partly by the atmospheric images taken by the Mars Reconnaissance Orbiter and NASA’s Phoenix lander. “You can see ice crystals falling out of the atmosphere,” he says. “And it’s funny, because the first time I saw those images, they looked like clouds in Earth’s atmosphere.”
Effects of Breakwaters in the Civil Engineering Field: Construction Essay

Problems of erosion, reduction in shorelines, disappearance of beaches, and environmental impacts have led to the recession of many economies around the world. To address them, engineers have devised man-made structures like breakwaters and piers to tackle a variety of coastal problems such as shelter, fishing, docking and coastline recession. While these problems are resolved, new ones emerge when breakwaters and jetties are constructed. Clearly, breakwater engineering and related civil engineering fields are still at a rudimentary level, despite the fact that these structures have been in use since ancient times. In the following study, the researcher investigates the hydrodynamics of breakwaters and their engineering aspects, with the view to gaining insight into their importance to civil engineering fields. The researcher aims to explore, evaluate and analyse the impact of breakwaters on engineering professions, and the ways their knowledge limits or opens up new channels for engineering innovation. The results are compiled, and the researcher concludes that breakwater engineering has great scope for contributing to civil engineering knowledge, provided that its design and applications are researched further.

Chapter 1: Introduction

Background and Rationale

A coast is a geological system that is subject to constant movement and change. Shorelines, beaches and coastal areas affect human lives, and vice versa. The diverse and complex nature of the coastal system is the result of processes involving waves, tides, currents and winds that affect the geological state of the coast in an attempt to keep a balance between land and water. However, these are not the only factors that influence and shape coastlines. Human activities for economic and social purposes contribute towards their modification. Natural processes, coupled with human intervention, contribute towards erosion, sedimentation and accretion (Hsu, Lin, and Tseng 2007). In fact, according to French (1997), human activities bring about changes that influence the environment adversely by creating new habitat and decreasing environmental stability. Though not all changes affect the environment adversely, the natural processes are nevertheless affected by the unnatural conditions. Coasts and estuaries are not indifferent to human intervention, where a range of variations in their structure and environment can alter the geological, oceanological and marine systems therein.

Added to this is the fact that coasts have become the ideal place for human population, industrialisation, commercialisation, transportation and so on. Humans have, in effect, taken over the development of coastal areas to act as shelters, ports, docks, and sites for numerous other activities. The pressure to benefit human lives has inevitably changed the environment drastically towards degradation. To compensate, a host of management strategies have been undertaken to operate, manage and sustain coastal areas, to control these activities and maintain a balance between nature and mankind (d'Angremond and van Roode 2004). One of these management control methods is the building of breakwaters and jetties. Jetties and offshore breakwaters are man-made structures designed to protect coastal areas from the natural and unnatural recession of the shoreline.
Breakwaters are usually built parallel to the shore or at an angle, to deflect serious wave action away from its destructive impact on the shoreline. Jetties, on the other hand, are built to prevent erosion of the inlet or harbour area. Offshore breakwaters provide shelter, as they are built based on wave refraction and diffraction (Putnam and Arthur, 1948). Similarly, groins are structures built facing seawards and angled to slope at the same angle as the normal beach. Groins are built at an elevation above datum to act as a stabilising structure and to increase the width of the beach by arresting the shore drift in part or as a whole (Paige 1950).

Apart from these, coastal areas are subject to geological problems arising from natural processes including coastal erosion, deposition, sedimentation, tsunamis, tidal waves and the like. These require human intervention to protect and conserve human and natural habitat. For these purposes, an engineering field called coastal engineering has been introduced in the academic arena to enhance the knowledge and skills of professionals who develop coastal areas with minimal damage to the natural and man-made environment. Coastal engineering involves developing and protecting existing coastal protection works with the view to predicting future natural coastal processes. Comprehending the nature and value of coastal processes enables engineers to devise plans and strategies to protect these processes better. Moreover, knowledge of coastal conditions helps professionals in the field to construct, facilitate and execute better breakwater construction. Breakwater construction is a field directly related to coastal engineering. However, it also has close relations with other engineering fields like geology, construction, environment and computer engineering. It is within this context that the researcher shall investigate the importance of breakwater engineering and the ways it affects the engineering field.

Aims and Objectives

The aim of this dissertation is to investigate how breakwaters and their construction affect various civil engineering fields. The objectives are to:

a. Identify the various civil engineering fields that breakwaters affect
b. Evaluate how breakwaters impact civil engineering professionals; and
c. Study how the knowledge of breakwater construction adds to the skills and knowledge of engineers

Scope and Limitations

The research, in essence, is not a pure scientific empirical study, but rather an exploratory one. The researcher is aware that in exploring the dynamics of breakwater engineering, he/she will have to link the civil engineering techniques and skills that make breakwaters successful defence structures for both humans and marine life. In this context, the study shall limit its discussion to the various fields breakwater construction entails, and shall not delve extensively into any particular field concerning its engineering perspectives, such as marine life or construction engineering. However, it will touch upon these topics in passing, to illustrate their role in and effects on the engineering field. General readers shall find the study insightful and enlightening, as it provides the numerous aspects that coastal engineering of breakwaters impacts. However, academics and scholars may find the content of the study limiting, as it shall not be exhaustively technical.
Fellow students shall find the dissertation a good stepping stone for furthering their research into areas of specialisation like geological engineering, construction engineering and so on. Nevertheless, the dissertation shall aim to address both the social and scientific aspects of breakwaters.

Outline of Dissertation

To accomplish the above objectives, the researcher shall carry out the study in the following manner: Chapter 1 introduces the background and the rationale for the study. Chapter 2 provides the theoretical background, based on an extensive literature review of the aspects of the study outlined above. Chapter 3 outlines the methodologies considered and the rationale for the chosen research approach. Chapter 4 is the analysis segment, in which the researcher evaluates the data gathered and discusses it with the aim of acquiring conclusive results. Chapter 5 is the conclusion to the research, offering insights gained from the research, summarising whether the researcher has accomplished the objectives or not, and perhaps some recommendations for future research.

Chapter 2: Literature Review

Breakwaters and similar coastal structures are human interventions that are exposed to strong waves, currents and other marine processes. The construction of such structures needs to be enduring, as well as fitting with the natural environment. The design and construction of breakwaters and interrelated structures indicate that knowledge of pure engineering alone is not sufficient. In fact, their design requires consideration of a range of empirical and theoretical knowledge. To the extent of this knowledge, the researcher is of the view that civil engineering relating to large-scale hydraulic structures has developed considerably. According to d'Angremond and van Roode (2004), coastal problems of erosion, tides and currents have existed since the beginning of civilisation. However, the management of these movements and problems has gained considerable attention today due to the commercialisation and population of coastal areas around the world. For these reasons, problems such as sea level rise, tidal asymmetry and sedimentation budgets need to be tackled. This is carried out through careful coastal defence and management practices, and engineering skills, which shall be discussed in the following sections.

Coastal Engineering

Ocean waves are generated by wind and propagate from the ocean towards the shoreline. The orbital motions of wave kinematics influence the depths and heights of the ocean bed. Nearshore ocean beds are greatly impacted by the velocities and strengths of waves. As a result, sediment beds often change in topography due to the continuous impact of the fluid forces of waves. The sedimentation response or impact may be negligible but, in effect, compounds the problem of sediment transport to and away from the local beach. The scale, depth and extent of the influence of the waves on the beach may or may not result in coastal degradation. For these reasons, detailed investigation of the continental shelves, fluid dynamics, nearshore motion and variation of ocean topography is required in order to monitor and maintain the natural barrier to land. When the problems of natural erosion and sedimentation become too great to manage, measures like the construction of barriers, submerged shoals, breakwaters and artificial headlands are undertaken to sustain the environment (Birbena et al 2006).
Construction of this nature is driven by defence planning, storm handling and flood prevention. In fact, coastal defence systems and management require the formation of frameworks within which projects are planned, investigated and implemented to meet the needs of the environment and its people. These are the civil aspects of coastal engineering (French 1997). Not only this; structures like breakwaters also require continuous monitoring and protection work to predict future performance. This is carried out through coastal engineering processes such as modelling, to estimate the changing environment and angle of repose of shorelines, and site investigation, to study the cycles of hydrographic and marine life status, as well as processing these to build a profile of the shorelines on which breakwaters are constructed.

For example, in Iskander et al's study (2007), the authors studied and developed a monitoring model for coastal structures along the El Agami area of Egypt. The study indicates that where breakwaters exist, the shoreline fluctuates, and marine life is impacted, as well as wave hydraulics. Coastal engineers need to record and study the gradual change that takes place due to the presence of breakwaters. Issues concerning wave distribution, shoreline sand composition, coastal calibration, marine surveys, and effects on the harbours' population are taken into account. Apart from these, breakwaters also affect coastal structures such as villages, ports, and other sites of human activity (Iskander et al 2007).

Furthermore, coastal engineers need to ensure that the construction of breakwaters and estuaries does not adversely affect human activities as a result of design faults in these structures. For example, in Donnell et al's article (2006), the authors indicate that the breakwaters on Tedious Creek estuary, on the shoreline of Chesapeake Bay in Dorchester County, MD, caused more damage to local vessels than the benefits they provided for shelter. The setup of breakwaters is aimed at protecting the boat dock and public piers from storms, but, in reality, the project's design faults resulted in underperformance, in both functionality and structure, to the detriment of the locals. It is in instances such as these that coastal engineers need to ascertain the need for and importance of breakwater structures.

Similarly, breakwaters can also produce beach morphology that effectively negates the protection objective when they are constructed with limited practical engineering knowledge. Accurate advance study of the shore area, through cross-shore distribution, longshore sediment transport rates and breakwater performance, as well as the use of model calibration and validation, hydrodynamic modules, wave modules and the like, can positively affect the performance of the structures. Therefore, coastal engineers are responsible for studying the wave conditions, the downdrift side, expected erosion and current patterns behind a submerged breakwater, to gauge incident waves. These mechanisms, according to Ranasinghe and Sato (2007), can significantly influence the function and utility of breakwaters. Thus, coastal engineering is greatly influenced by the type and design of breakwater structures.

Construction Engineering

Breakwaters and similar coastal structures combine design and functionality with the view to protecting the coastal area.
The design process is similar to the structural design of buildings, as it entails paying attention to functional requirements, limitations of the state of the structure, exposure, construction phases and the occurrence of natural conditions. Breakwaters also require knowledge of construction materials, including quarry stone, concrete blocks, caissons and similar materials, to apply to their construction. Equipment for both floating and rolling construction likewise needs to be studied and related to the specifics of the breakwater's site, function and design. The development of breakwaters also requires functional and structural monitoring of performance, with enduring characteristics.

According to Camfield and Holmes (1995), coastal structures like breakwaters and jetties are influenced by long-period water level changes. They need to be built parallel to the entrances, in an attempt to stabilise entrances and safeguard navigation. Construction along the shore should be carried out with the direction of the channel in mind, to prevent migration of the channel thalweg, rapid shoaling and erosion of the coastline (Morang 1992, qtd. in Camfield and Holmes 1995). This is because the construction of jetties and breakwaters often creates a new equilibrium for the tidal system. For this purpose, surveys of adjacent shorelines, natural bypassing and the material moved by ebb-tide activity need to be carried out for effective construction of structures aligned with the regional dynamic and hydraulic processes. Construction engineering parameters such as the cross-sectional relationship between inlet area and tidal prism (a standard form of this relation is sketched below), as well as the depths of the jetties and breakwaters and the water flows, are studied before finding the ideal balance between performance, flow conditions and natural marine activities.

Knowledge of construction materials, as mentioned earlier, is imperative for choosing and designing breakwaters to complement the needs of the local landscape and environment. Since breakwaters are made up of rubble mounds or caissons, or are concrete-filled, knowledge of construction materials adds to the skills required for developing structures that disperse wave currents to minimise impact, as well as conserve energy from wave hydraulics where possible (Arena and Filianoti 2007). Not only this; new construction material knowledge also provides an edge in the design and planning of the breakwater armour unit. Reedijk et al (2008), for example, indicate that the development of the Xbloc by Delta Marine Consultants in 2001 has brought innovation to armour concepts in terms of designs, tests and prototypes. Xblocs are concrete blocks designed to armour shore protection and are actively used in breakwater construction by engineers today. Muttray et al (2003), in their study of the suitability of the Xbloc in breakwater construction, indicate that Xblocs are shaped to suit the harsh environmental conditions of waves and similar hydraulic activity. When placed interlocked with each other, Xblocs not only reduce concrete volumes, but also achieve the stability required to withstand wave loads and limit damage (Muttray et al 2003; Reedijk et al 2008). Added to this is the cost of layering breakwaters with Xbloc, which is significantly reduced compared to other armour blocks.

Furthermore, coastal protection design and construction require the development and use of probabilistic design tools to gauge uncertainties, predict wave impact, and assess structural stability.
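As an aside, the relation mentioned above between an inlet's equilibrium cross-sectional area and its tidal prism is commonly summarised by a power law of the O'Brien type. This is standard coastal engineering background rather than a result from the sources cited in this essay, and the exponent and coefficient below are indicative only:

A = C · Pⁿ

where A is the inlet's minimum cross-sectional flow area below mean sea level, P is the tidal prism (the volume of water passing through the inlet between low and high tide), and C and n are empirically fitted constants, with n typically found near 0.85 to 1 and C depending on the unit system and on whether the inlet is protected by jetties.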
One of the main concerns for construction engineers is that the structures sustain their coastal protection functionality regardless of wave conditions and the transformations of water bodies. The basic premise is that wave transformation in foreshore and offshore areas cannot be relied upon through model designs alone. In fact, construction engineers require knowledge of coastal shores, using prediction models for wave transformation to study the effect of wave height, setup and distribution before designing the breakwater and jetty structures (Muttray et al 2001; Coduto 1999). Consideration of these aspects helps design structures that achieve their long-term goals, as well as protect the beach composition from longshore transport processes.

Analytical Engineering

Breakwaters are constructed based on engineering approaches and processes that exploit the nature of wave parameters and hydraulics. According to Huizinga (2003), breakwater engineering often fails after 5 to 10 years as a result of poor design. Engineers fail to grasp the concepts of breakwater design and modelling, which treat the propagation of water around a breakwater under the assumptions that the water is an ideal, incompressible fluid. Waves are assumed small in amplitude, so they can be analysed using linear wave theory. The flow is taken to be irrotational, so it can be analysed through the Laplace equation for a velocity potential. The water depth is assumed constant, and the wave field around the breakwater is determined by diffraction, refraction or reflection (Huizinga 2003); the basic relations behind these assumptions are sketched at the end of this section.

Diffraction analysis takes into account the wave height and the interaction of breakwater and waves. The wave energy is assumed to disperse as the waves come into contact with breakwater structures, which can be understood using linear diffraction theory. In this context, a rubble-mound breakwater is a structure of a certain density and diameter designed to disperse wave motion. The velocity of the waves is retarded by their action in contact with the breakwater. The change in direction of the waves affects the sediment supply and composition, wave properties, topography and breakwater properties. Therefore, the variables in the breakwater interaction change in response to the caisson. The underlying assumption is that the physical response of the breakwater is associated with the wave action, the permeability of the breakwater surface, the seabed composition and the response of the breakwater over a long period of time (Huizinga 2003; Twu and Chieu 2000).

Alternatively, wave reflection and wave run-up form the model for analysing a breakwater through its cross-section and slopes. In this method, wave reflection is determined by the three-gauge method. Wave conditions comprise relative depth, height, steepness, and breaker index. Measurement of wave conditions is accomplished by analysing their reflection in the seaward direction when the wave surface comes into contact with the structure and foreshore. The water surface meets the structure at its toe, and the superposed incident and reflected waves form nodes and antinodes along it. The wave run-up and run-down impact the breakwater's wave resistance. When engineers analyse the efficacy and effectiveness of a breakwater, they study the angle of the incident wave, as well as its reflection coefficient, to determine the impact of regular wave action. The analysis is critical for gauging the significance of wave run-up and run-down on breakwater surfaces, and inevitably the structure's longevity.
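For reference, the linearised (Airy) framework that the diffraction and reflection analyses above rest on can be stated compactly. These are textbook small-amplitude wave relations rather than results from the studies cited here:

∇²φ = 0 (the velocity potential φ of an irrotational, incompressible flow satisfies the Laplace equation)
ω² = g · k · tanh(k · h) (the linear dispersion relation linking angular frequency ω, wavenumber k, water depth h and gravitational acceleration g)
Kr = Hr / Hi (the reflection coefficient: reflected over incident wave height, near 1 for a smooth vertical wall and much lower for a porous rubble-mound slope)

The dispersion relation is implicit in k, so in practice it is solved numerically. A minimal sketch in Python follows; the function name and the fixed-point scheme are illustrative choices, not drawn from the cited literature:

import math

def wavenumber(T, h, g=9.81, iterations=200):
    """Solve the linear dispersion relation w^2 = g * k * tanh(k * h)
    for the wavenumber k, given wave period T (s) and water depth h (m),
    by fixed-point iteration starting from the deep-water value."""
    w = 2.0 * math.pi / T      # angular frequency (rad/s)
    k = w * w / g              # deep-water wavenumber as first guess
    for _ in range(iterations):
        k = w * w / (g * math.tanh(k * h))
    return k

# Example: an 8 s swell arriving in 10 m of water
k = wavenumber(8.0, 10.0)
print(f"k = {k:.4f} rad/m, wavelength = {2.0 * math.pi / k:.1f} m")

With 8 s waves in 10 m of water this gives a wavelength of roughly 71 m, noticeably shorter than the roughly 100 m deep-water value; this depth-induced transformation is exactly what the foreshore prediction models discussed above must capture.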
In practice this analysis uses higher-order wave theory to assimilate waves and horizontal seabed asymmetry. Furthermore, wave reflection measurement is determined by dynamics such as local wave height, wave pressure, wave energy dissipation and wave penetration into the structure (Muttray and Oumeraci 2002). Whether waves break or not depends on the breakwater slope and the reflection produced by the critical incident wave (Clyne and Mullarkey 2008). These analytical approaches are various forms of analytical engineering, engaged to evaluate the strength, longevity, efficacy and effectiveness of breakwater functionality. Alternatives in analytical engineering, therefore, help make the construction of breakwaters more effective, as they establish the baseline for stabilisation potential, as well as extend the life cycle of the structure (Wiegel 1962).

Environment Engineering

Breakwaters and jetties are engineering solutions to the problems of erosion and sedimentation of shorelines. They are constructed with the view to sustaining the shoreline, and in turn benefiting the local human communities. Just as breakwaters and jetties affect the hydraulic system of an area, they also produce long- and short-term impacts on marine life. Hydrodynamic conditions, sedimentation patterns, wave motion, and physical and chemical factors tend to alter the composition and nature of the habitat. Not only this; the habitat tends to change in its characteristics and life cycle due to the change induced by the presence of breakwaters. No doubt, there is an important relationship between biological life forms and breakwater structures.

Even though breakwaters are developed with the objective of providing shelter to marine life, as well as harbour for human activities, the type of alleviation, shoaling and access to aquatic flora and fauna also gets impacted when breakwaters are constructed without careful monitoring of quality, composition and the marine life cycle. In fact, construction of breakwaters for creating inlets often results in morphological change in marine flora and fauna due to the quality of sand, the chemical properties of the water and the wave action. Water temperature, varying with seasonal change, substantially affects fish populations, as well as other marine life forms. For example, the densities of components of the macrozoobenthos, algae and polychaetous worms change (increase or decrease) with increasing or decreasing water depth. Thus, construction of a breakwater tends to adversely affect the micro-constituents of marine biology ("Biological effects of breakwater construction" 1985). At times, the colonisation of fishes within the vicinity is affected due to elevated turbidities and suspended solids concentrated near the breakwater. Moreover, maintenance of the depth of the entrance to the area, and exposure of the same, can alter the sustenance level of fish populations. By streamlining the natural sand bypass, morphological performance can be improved, allowing waves, currents and sediment transport to be simulated in ways that correspond with marine life processes (Broker et al 2007). A reliable calibration process ensures that the constructed structure does not hinder marine life forms. For this purpose, marine engineering knowledge, combined with breakwater development know-how, can help local engineers to establish dynamic coastal structures that fit within the parameters of the natural environment.
Risks and Failures

While it is clear that breakwaters have their own functionality and utility, used to sustain beach-line stability and continuity, they also carry risks. The utility and functionality of breakwaters and jetties depend on the model, material and simulation upon which they have been based. Measurement of their horizontal and vertical fluid velocities, breakwater composition (porous or non-porous), energy dissipation rate and modification intensity all contribute towards their integrity. However, any variation or deviation in the design, such as surface elevation, velocity variation, calibration or structure permeability, can result in wear and breakage. According to Kobayashi et al (2007), breakwater permeability can affect a structure's situation in the beach zone, its effectiveness in eliminating serious wave impact and its structural longevity. In fact, breakwater transformation as a result of wave load, pressure and velocity can lead to shattering. This depends on the design of the breakwater and its sensitivity, tested against the breaker ratio. The steepness of the seaward slope, the wave breaking motion and the wave parameters greatly influence the structure, to the extent of predicting its durability (Kobayashi et al 2007).

In fact, Oumeraci et al (2006) are of the view that saturation due to liquefaction phenomena beneath sand-founded gravity structures tends to increase the risk of structural failure. Vertical breakwaters, especially, are vulnerable to permanent deformation of the subsoil, which leads to irreversible strains at peak stress levels. As a result, breakwater structures can give way to wave loads induced by the fluctuation in pressure along the seabed and the pore pressure in the concrete itself. Failure of such monumental nature affects the stability, composition and cyclic mobility. Failure is also a result of the nature of the breakwater structure, whether it is designed for offshore or onshore coastal defence. It is greatly influenced by the depth and nature of the sand composition beneath the seabed upon which the breakwater is constructed. The relative density of the sand, the pressure of the fluid, as well as the storm yield, all contribute towards its endurance (Oumeraci et al. 2001).

Apart from these physical risks and failures, breakwaters are also vulnerable in terms of their effect on marine life forms. Changing chemical composition due to displacement of faunal colonisation, as well as toxicity of the structures along the sediment banks, can result in fluctuations of the breakwater biota. While the human benefits of breakwaters last for 5 to 10 years, the long-term effects on the marine life cycle and fishery can alter the nature of the coast altogether if careful engineering approaches are not undertaken in the construction of breakwaters ("Biological effects of breakwater construction" 1985).

The above discussion has been carried out with the view to providing an overview of the relationship between breakwater construction and its impact on engineering fields. While engineering is a vast discipline, in this study the researcher has included the engineering fields related to the construction of breakwaters and their maintenance. The discussion indicates that breakwater structures are not merely coastal construction monuments, but have multidimensional impacts on physical, biological and human life. For this purpose, the engineering and design of these structures need to be analysed, planned and implemented with care, given their impact.
Chapter 3: Research Methodology

The nature of the research problem determines the choice of methods. Before one chooses the research method, its objectives, audience and underlying assumptions should be justified. The methodologies are then weighed and evaluated to justify the choice. The theoretical perspective of the study should provide the background reality, as well as the means of increasing the reader's knowledge. Within these dimensions, epistemology is "concerned with providing a philosophical grounding for deciding what kinds of knowledge are possible and how we can ensure that they are both adequate and legitimate" (Crotty 1998). Epistemology, therefore, allows the researcher to decide the application and the underlying academic literature required for adding knowledge to the "existing consciousness."

Generally, there are two options: objectivism and constructionism. The objectivist approach entails the investigation of existing knowledge, extending it to broaden that consciousness; the aim is to discover the objective truth. The constructionist approach, on the other hand, entails research that requires interaction with the world, finding the truth in the process. Underlying the constructionist approach is the premise that research endeavours need to explore views from multiple angles before deciding on the objective truth. This approach is grounded in qualitative methodology (Crotty 1998, qtd. in Levy 2006). Alternatively, researchers in applied fields usually conduct research based on quantitative methods that entail action research and evaluations for studying particular aspects and issues. The premise for choosing action research is the endeavour to capture reality with a certain degree of control over the phenomena under research.

The nature of the coastal engineering field would normally mandate that research activities be subject to quantitative empirical methods, whereby researchers carry out extensive action research strategies and processes. However, in this case, the researcher has opted for the qualitative approach, as it complements the nature of the topic under discussion. Whereas the study of breakwaters is pragmatic, the exploration of their connection to and impact on the engineering field is qualitative in nature. Furthermore, to understand the implications of breakwaters and their effect on the civil engineering profession, investigation into the subjective views of experts within the field is required, rather than empirical research.

Having said that, the researcher is also aware that qualitative research requires a paradigm on which to base the enquiry. According to Gummesson (2000), "a paradigm is a very general conception of the nature of scientific endeavours within which a given enquiry is undertaken" (p.18). It is a world view which allows the researcher to ground his/her research outcomes and understanding. Research paradigms can be divided into the positivist, characterised by treating the world as external, to be researched through facts and fundamental laws and by studying concepts through sampling; and the phenomenological, which involves the social construction of the subject and is characterised by understanding the totality of the situation by investigating the issue through established phenomena. For the current study, the researcher shall adopt the phenomenological paradigm for analysing the effect of breakwaters on the engineering field.
The rationale is based on the premise that even though some technical and practical aspects shall be discussed in the course of the study, the analysis shall concern the ideology, decision logic and utility behind breakwaters and their link with civil engineering fields. While the researcher is aware that the phenomenological paradigm is not typically suited to engineering and scientific research, he/she also understands that research of this qualitative nature requires interpretive understanding, rather than logical and objective conclusions derived from empirical and detached experiments. For this purpose, the research shall study the behavioural aspects, as well as the science, behind the construction of breakwaters as coastal defence systems. It will attempt to rationalise through examples how breakwaters involve multiple dimensions of human knowledge and engineering skills in their construction, as well as their maintenance.

The researcher shall rely on primary and secondary resources for studying the various aspects of the issue. Primary resources such as books, journals and official publications shall be reviewed, while secondary sources shall include magazines, notes, the Internet and other publications. Combined, these resources form the basis for the discussion in the literature review. Once the literature review is completed, it shall be evaluated from different engineering perspectives to provide conclusive results.

Chapter 4: Results and Discussion

Problems of erosion in tidal areas of the world often result in receding shorelines, rises in sea level, and negative sediment budgets. These cannot be counteracted once the damage has been done. Theoretically, however, modification of such coastal erosion can achieve the desired recovery of the shoreline. The construction of barriers, submerged shoals, breakwaters, headlands and islands are some examples of measures that can help coastal authorities to manage their coastal systems. However, these seemingly simple measures entail engineering skills and knowledge to plan and defend the coastal area without destroying the coast's aesthetic, biological and environmental composition. In fact, from the above discussion, one understands that breakwater construction requires planning, investigation and implementation of strategies that benefit both the human and natural environments. The balance between the two is critical, as it determines whether the breakwater has achieved its desired defence objective or not. To understand the critical nature of breakwaters in relation to engineering fields, the researcher outlined the study's objectives at the outset. Following the discussion above, the researcher shall now analyse these objectives in conjunction with the results of the literature reviewed.

a. Identification of the various civil engineering fields that breakwaters affect

Breakwater construction is a multidimensional engineering activity that inevitably impacts man and fauna alike. Engineering design and applications concerning breakwaters are diverse and multidimensional. As discussed in the literature review, coastal engineering is the overarching field required for the construction of a breakwater, right from its design and planning to its implementation.
Areas such as oceanography, sediment beds, sand composition and the beach environment are all taken into account in measuring the scale, depth and extent of a breakwater's influence on the environment, and the construction of barrier and headland structures can in turn influence that outcome. For these reasons, attention to a breakwater's impact on the ecosystem, on coastal defence and on project management is imperative for its success. Engineering skill in the coastal field therefore includes the analysis and evaluation of coastal need, compatibility and hindrance, all contributing towards implementation. The setup of breakwaters aims to protect people and fauna alike, so the design must also match practical needs: engineering knowledge of the shoreline profile, coastal structures, wave action and wave mechanisms greatly influences breakwater function.

Working in conjunction with coastal engineering is construction engineering, which plays a vital role in bringing a breakwater to life, from concrete to structure formation. The design process is necessary for gauging functionality requirements, but limitations, exposure and construction phases are also calculated before breakwaters are built. Moreover, breakwaters open new dimensions for monitoring and morphology engineering, to ensure the longevity and cost-effectiveness of these structures. From the discussion, one understands that breakwater engineers need material knowledge in order to compare utility and functionality against the intended construction; discovery and knowledge of material innovation, and of its usage, not only enables engineers to make better decisions but also contributes towards a structure's endurance.

When it comes to the core construction and analysis of these structures, there is no way to gauge function, utility and performance beforehand except through simulation. For this purpose, analytical engineering skills are needed to design and model the propagation of wave action and its impact on breakwaters. The dynamics of diffraction, refraction and reflection are critical for studying the physical morphology of ocean currents and their impact on the breakwater, or even the coastline, in order to determine the required depth and breadth of breakwaters.
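To make the analytical point above concrete, the following is a minimal, illustrative Python sketch of a first-pass run-up estimate using the classical Hunt (1959) relation based on the surf-similarity (Iribarren) number. It is not taken from the study itself: the slope, wave height and period inputs are hypothetical, the cap near 2.3 is only a rough rule of thumb for smooth impermeable slopes, and real breakwater design relies on far more detailed formulas, probabilistic tools (e.g. Oumeraci et al. 2001) and physical model tests.

import math

G = 9.81  # gravitational acceleration (m/s^2)

def deepwater_wavelength(period_s):
    """Deep-water wavelength: L0 = g * T^2 / (2 * pi)."""
    return G * period_s ** 2 / (2 * math.pi)

def iribarren_number(slope, wave_height_m, period_s):
    """Surf-similarity number: xi = tan(alpha) / sqrt(H / L0)."""
    return slope / math.sqrt(wave_height_m / deepwater_wavelength(period_s))

def hunt_runup(slope, wave_height_m, period_s):
    """Rough wave run-up estimate R ~ H * xi (Hunt, 1959), capped near
    xi = 2.3 where run-up stops growing on smooth impermeable slopes.
    A simplification for illustration, not a design rule."""
    xi = iribarren_number(slope, wave_height_m, period_s)
    return wave_height_m * min(xi, 2.3)

# Hypothetical inputs: a 1-in-3 structure slope, 2 m wave height, 8 s period.
print(round(hunt_runup(1 / 3, 2.0, 8.0), 2), "m estimated run-up")

Even so crude an estimate shows why slope and wave period, and not wave height alone, drive the required crest level of a coastal structure.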
b. Evaluation of how breakwaters impact civil engineering professionals

Having said that, the researcher recognises that breakwater engineering and construction add great value to civil engineering professionals. Like other civil engineering specialties such as roads, dams and waterways, breakwater engineering requires engineers to grasp not only breakwater design and modelling, but also the structure's impact on human life, on other structures and on the environment. While the discussion of breakwaters focuses holistically on sedimentation, wave motion and wave impact, it also entails the study of rubble, concrete blocks, material composition and physical movement around and surrounding breakwaters. Civil engineers with knowledge of breakwaters can readily transfer it to other engineering fields such as bridge building and offshore and onshore structures. The knowledge that engineers gain through breakwater construction, such as wave run-up and run-down impact, slope, reflection, wave incidence and penetration, can be carried forward into innovative construction of waterways, underwater monuments and the like. Hence, there is no doubt that the knowledge gained is invaluable.

Despite these facts, the researcher cannot help but notice that breakwater engineering remains rudimentary compared with other engineering fields. The risk of failure is inherent in the limited knowledge and exploration of this field. There is a great need to explore the depths of coastal engineering, particularly breakwaters and similar structures, so that they are not built with limited lifespans. Experts are of the view that breakwaters, though costly, usually last 5 to 10 years; inevitably, they succumb to wave oscillation and break down. The key to their permanence perhaps lies in research into innovative materials, construction design and engineering techniques that could harness wave impact and its destructive nature. Only then can the efficacy and effectiveness of breakwaters be fully realised.

c. Study of how the knowledge of breakwater construction adds to the skill knowledge of engineers

Regardless of the failures and risks identified above, the researcher is of the view that breakwater construction and engineering add to the engineers' skill pool. There is great significance in the fact that breakwaters represent the initial stages of harnessing hydraulic engineering, and future research and investigation could lead to more innovative approaches to construction in water. To this end, know-how in construction options, engineering applications and architectural design could lead to a better environment for both man and marine life. However, the furtherance of breakwater engineering and construction is limited by the scope and extent of its function. As long as human commercialisation and population dominate coastal areas, breakwaters and their construction shall remain in demand. That demand will give way to better options if people discover new means of survival, regardless of the consequences for marine life; evidence of deep-sea pollution, coastal degradation and industrialisation illustrates the devaluation of coastlines. Breakwaters, therefore, are only one option for harnessing the natural resources at sea. To ensure that they continue to contribute towards human development, breakwater engineering skills need to be developed and researched; only through this approach will the field enrich engineering skills for the future.

Chapter 5 Conclusions

The above study indicates that breakwaters, and their construction, have multidimensional effects on the civil engineering field. Four main engineering fields (coastal engineering, construction engineering, environmental engineering and analytical engineering) have been studied and discussed, leading to the following conclusions:

i. Breakwaters are the result of engineering from various disciplines, including coastal, environmental, construction and analytical engineering. They can add further knowledge to the engineering field if and when scaled into new skills and techniques by professionals.

ii. Breakwaters are structures that contribute towards the betterment of both the human and the marine life cycle.
However, at present the field is limited in its ability to extend breakwater lifespans, owing to the limited application of marine-life and material knowledge in construction.

iii. As a result, there is a great need for engineers to explore and exploit new dimensions of breakwaters if this technique of planning and defending coastal areas is to endure. Considering the length and depth of the world's coastlines, it is an important field to explore.

iv. Engineering knowledge contributes towards breakwater efficacy and effectiveness in construction, but it is knowledge of the functionality and impact of breakwaters in the local environment that motivates engineers to discover new avenues for their application. For example, the discovery of failures due to sand liquefaction, sediment budgets and risks to marine life leads engineers to new approaches, and perhaps new types of structure, for resolving problems of coastal erosion and sedimentation.

v. Clearly, breakwater engineering is a young field that needs to be explored more comprehensively in order to pose and resolve questions in coastal engineering. Inevitably, this improves civil engineering perspectives, as it contributes towards the betterment of civil society.

Overall, the above study demonstrates that breakwaters have been adopted by coastal authorities with engineering knowledge that is limited, to say the least. Attempts must be made to further this field, so that the future of the coastal, construction and biological engineering fields related to breakwaters is secured. Future researchers may be able to shed light on areas such as curbing sand liquefaction, resolving sediment budgets, and so on. Inevitably, knowledge gained from future research shall add value to the civil engineering fields that deal with coastal structures and their construction.

References

Allsop, N.W.H. (2002) Breakwaters, coastal structures and coastlines. Thomas Telford Publishing.
Arena, F. and Filianoti, P. (2007) Small scale field experiment on a submerged breakwater for absorbing wave energy. Journal of Waterway, Port, Coastal and Ocean Engineering, Mar/Apr issue, p. 161.
Birben, A.R. et al. (2006) Investigation of the effects of offshore breakwater parameters on sediment accumulation. Ocean Engineering, vol. 34, no. 2, pp. 284-302.
Broker, L. et al. (2007) Morphological modelling: a tool for optimisation of coastal structures. Journal of Coastal Research, 23/5, pp. 1148-1158.
Camfield, F.E. and Holmes, C.M. (1995) Monitoring completed coastal projects. Journal of Performance of Constructed Facilities, vol. 9, no. 3, p. 161.
CETN V-20 3/85. Biological effects of breakwater construction on aquatic communities in the Great Lakes. Coastal Engineering Technical Note.
CIRIA/CUR (2007) Rock Manual: The use of rock in hydraulic engineering. CIRIA/CUR.
Clyne, M.J. and Mullarky, T.P. (2008) Simulating wave reflection using radiation boundary. Journal of Coastal Research, 24/1A, pp. 40-48.
Coduto, D.P. (1999) Geotechnical Engineering. Englewood Cliffs, New Jersey: Prentice Hall, Inc.
Crotty, M. (1998) The Foundations of Social Research: Meaning and Perspective in the Research Process. Allen and Unwin.
D'Angremond, K. and Van Roode, F. (2004) Breakwaters and closure dams. Taylor & Francis, Inc.
Donnell, B. et al. (2006) Effects of Breakwater Construction at Tedious Creek Small Craft Harbour and Estuary, Maryland. Engineer Research and Development Centre, Vicksburg, MS, Coastal and Hydraulics Lab.
Gummesson, E. (2000) Qualitative Methods in Management Research, 2nd ed. Sage Publications, London.
Hsu, T., Lin, T. and Tseng, F. (2007) Human impact on coastal erosion in Taiwan. Journal of Coastal Research, 23/4, pp. 961-973.
Huizinga, L.A. (2003) A Breakwater Design for Wilson Inlet. Environmental Engineering Honours Project, Centre for Water Research, The University of Western Australia.
Iskander, M.M. et al. (2007) Investigating the value of monitoring the implemented coastal structures along El Agami Beach, Alexandria, Egypt: case study. Journal of Coastal Research, 23/6, pp. 1483-1490.
Kobayashi, N. et al. (2007) Irregular breaking wave transmission over submerged porous breakwater. Journal of Waterway, Port, Coastal and Ocean Engineering, ASCE, Mar/Apr issue, p. 104.
Levy, D. (2006) Qualitative methodology and grounded theory in property research. Pacific Rim Property Research Journal, vol. 12, no. 4, p. 369.
Morang, A. (1992) A study of geologic and hydraulic processes at East Pass, Destin, Florida. Rep. TR-CERC-92-5, vols. 1 and 2, US Army Engineers, Waterways Experiment Station, Vicksburg, Miss.
Muttray, M. et al. (2001) Uncertainties in the prediction of design waves on shallow foreshores of coastal structures. ASCE Publications [Online]. Accessed 22 April 2008 at: http://cedb.asce.org/cgi/WWWdisplay.cgi?0200817
Oumeraci, H. et al. (2006) Wave reflection and wave run-up at rubble mound breakwaters. Proc. 30th Int. Conf. on Coastal Engineering (ICCE), ASCE, San Diego.
Oumeraci, H. et al. (2006) Liquefaction phenomena underneath marine gravity structures subjected to wave loads. Journal of Waterway, Port, Coastal and Ocean Engineering, ASCE, Jul/Aug issue, p. 325.
Oumeraci, H. et al. (2001) Probabilistic design tools for vertical breakwaters. Balkema, Lisse, The Netherlands.
Paige, S. (1950) Application of Geology to Engineering Practice: Berkey Volume. Baltimore, MD, p. 219.
Putnam, J.A. and Arthur, R.S. (1948) Diffraction of water waves by breakwaters. Am. Geophys. Union Trans., vol. 29, pp. 481-490.
Ranasinghe, R.S. and Sato, S. (2007) Beach morphology behind single impermeable submerged breakwater under obliquely incident waves. Coastal Engineering Journal, vol. 49, no. 1.
Reedijk, B. et al. (n.d.) Development and application of an innovative breakwater armour unit [Online]. Delta Marine Consultants b.v., Gouda, The Netherlands. Accessed 22 April 2008 at: http://www.xbloc.com/htm/downloads.php
Twu, S. and Chieu, C. (2000) A highly wave dissipation offshore breakwater. Ocean Engineering, vol. 27, no. 3, pp. 315-330.
Wiegel, R.L. (1962) Diffraction of waves by semi-infinite breakwater. Journal, Hydraulics Division, ASCE, pp. 27-44.
Wiegel, R.L. (1964) Oceanographical Engineering. Prentice Hall, Englewood Cliffs, NJ.
Journaling: A Great Tool For Coping With Anxiety

Journaling For Anxiety And Stress Relief: How To Get Started

Journaling can be an extremely helpful tool for stress relief. (Read more about the research on journaling and stress.) One of the ways journaling relieves stress is by helping you work through your anxious feelings. Feelings of anxiety can lead to stress and rumination when left unchecked, but some of the roots of your anxiety can be minimized through a little focused examination. Journaling can be a powerful tool for examining and shifting thoughts from anxious and ruminative to empowered and action-oriented. The following plan can help you write your own ticket out of a place of stress and find relief within a few minutes. (Note: if you feel you need more help with your anxiety than an article can provide, talk with your doctor; there are other options available. You can also find help dealing with symptoms of anxiety disorders such as Generalized Anxiety Disorder, Social Anxiety Disorder, and Panic Disorder.)

Ready to get started? Grab a pen (or open a document) and here we go! Start by journaling for 5 to 15 minutes. Write about what's on your mind and what's bothering you:

1. Write about your concerns for several minutes, until you feel you have written what needs to be said but haven't delved into rumination. You may prefer a computer, a journal, or just a pad and paper; if you are using paper, skip a line or two for every line you use, as this will come in handy later.

2. Detail what is happening right now, describing the events that are currently causing difficulties. Keep in mind that, with anxiety, sometimes it isn't what is currently happening that causes stress, but rather your concerns about what could happen next. If this is the case for you, that's okay; write about what is currently happening and note that the only part that is really stressful is the possibility of what could happen next. (This realization may, in fact, bring some stress relief in itself.)

3. Next, write about your concerns and fears in chronological order: start with one of the stressors you are contending with in the present, explore what you think will happen next, and then write what you fear will happen after that.

4. Write about how this would affect you.

Journaling Your Way To A Better Frame Of Mind

Writing about your concerns and fears can be helpful in getting these thoughts out of your head and into the open. Next, re-read and re-think what you just wrote.

1. When you look at what is concerning you right now, explore your other options. Would it be possible for things to be different right now? Is there something you could do to change your circumstances, or your thoughts about your circumstances?

2. When you write about what you are concerned could happen next, think critically and try to argue with yourself. Write anything that calls into question whether this is truly a concern. How likely is it that this will happen, and how do you know? Are you sure? If what you fear actually does come to pass, is there a possibility that it could be a less negative experience than you think? Could it actually be a neutral or even positive event?
Is there a way you could use your circumstances to create a better outcome for yourself, using what you have available to you and the potential changes that could take place? Is there a change you could create that would be even better? You get the idea. Challenging your fears can often help you relieve anxiety, because you see that things are either less likely to happen than you think, or not as bad as you fear.

3. For each fear or concern, try to write at least one (but preferably more) way in which you could think about it differently. Generate a new story for yourself, a new set of possibilities, and write them on paper next to the fears that are in your head right now.

4. It can also be helpful to examine your cognitive distortions, to see how you might benefit from changing habitual stress-inducing thought patterns.

Now that you have come up with new ways of looking at things, let's examine ways to use journaling to take action to relieve stress.

Action-Focused Journaling

Processing your emotions on paper can be quite helpful. Here is how to continue processing and move into a place where you are ready to take action to face the stressful challenges of life. As you write, plan for the worst and hope for the best.

1. Look at what might happen. Now think about the biggest challenges you've faced and overcome. Looking at your strongest, wisest moments, do you think you could use that same strength and wisdom to prevail in this potential challenge as well? What do you think you could learn from it, and in what ways might you gain strength as you face these new obstacles? Thinking about your strengths and your best moments can help you remember that, while you may not enjoy the current circumstances, you have the strength to handle what comes. You may find new strengths you didn't know you had!

2. Assuming that what you fear actually does happen, what would you do? You don't have to create a full plan, but jot down the resources you would utilize and the next steps you'd take. This takes away the fear of the unknown; if you know you would have resources available should you need them, your mind is more likely to stay away from the worst-case scenarios toward which we all sometimes gravitate.

3. Come up with at least one thing you can do right now that would improve your life and prepare you for what you fear. This could be building your resources by reaching out to friends and strengthening your relationships. You could build skills that you can use now but that would also come in handy if your fears were realized. You could work on creating an effective stress management plan so you are more emotionally resilient if you face a big challenge and need to endure some extra stress. Putting your energy toward doing something can help you move out of a place of anxiety and toward a place of empowerment. Then, even if you don't need them, you have resources that can help you in your life now, and you've distracted yourself in the process. Coming up with a list of such possibilities is the first step.

4. You may also want to look at resilience-building tips. Remember that some issues require more help than an article can provide, and it is important to seek help if you need it. That said, this simple journaling technique can provide a tool that can be used in all types of situations to help manage anxiety and stress in life.
For additional stress management strategies, see these ongoing resources for stress relief and take advantage of what this site has to offer.
Garden of Eden: What Do We Know About Adam and Eve?

Part 2: The first lesson of Genesis is cold and hard: Sustaining human life is not meant to be easy. 8:49 | 12/21/12

Transcript for Garden of Eden: What Do We Know About Adam and Eve?

In the beginning, the bible tells us, god created the heavens and the earth. ♪ There's no turning back ♪ Then god's wind swept over the surface of the dark waters, and god said "let there be light." He separated the day from the night, and then he created life. And after all of this, god saw that it was good. And so it goes; the story of humanity, our story, begins. God created a man and a woman and gave them a perfect place to live, a garden called eden. The garden is depicted as an orchard. God gives them this wonderful orchard, tells them they can eat all of the fruit they want. They live in peace with the animals and with one another. It's an image of peace, completion, wholeness. When we imagine the garden of eden, most of us think of a paradise like this, the ultimate shangri-la, better than anything on earth. What does the bible say about where it all began? The biblical description is very short; it says there are four rivers, the tigris and euphrates are two of them, and the other two are unknown. That is the problem. If you could figure out where all four rivers are, then you would have the location. And it is the tantalizing mention of these two remaining rivers that has fueled a never-ending search for the garden of eden. For centuries people looked everywhere, from the depths of the persian gulf to rural missouri and even the planet mars. I have a problem with looking for the garden of eden, because how are you going to know when you have found it? There is no signpost, because writing hasn't been invented yet. That brings us back to the bible story and the two rivers we can locate, the tigris and euphrates, coming together in the fertile crescent where civilization first began. A perfect backdrop for the biblical beginning. You have the place where early men and women could live in idyllic harmony, with food readily accessible and all that; what we're talking about is an earthly paradise. We're told that adam and eve had everything they could ever need. But in order to keep all of this, they had to obey one rule. God tells adam and eve not to eat from the tree of the knowledge of good and evil. And god warned them that if they disobeyed they would die. The snake comes along and says once you have access to the tree of wisdom you can become like the gods. You can move up the ladder. In a very human moment, we're told, eve couldn't resist the temptation to take more. So, she took a bite and she passed the apple to adam. It was the snack that changed history. The man and woman hide because they're afraid, because they know they've done something wrong. When god says "did you eat?", it's adam who points the finger at eve, and not only at eve but at god. Because he says, she gave me, and you gave her to me. Now an angry god casts his creation out of paradise, and just like adam, throughout the millennia, everyone has blamed eve. Women are blamed for lots of things that perhaps they need not be. Adam could have said, that fruit? I'm not going to eat it. But he took the fruit and he ate it. Does that trouble you, the way it's been portrayed?
Of course it's troubling, but it reminds me that the bible, for all that we say it's a divine document, is written from a man's point of view. Ironically, in the muslim holy book, the koran, there is more than enough blame to go around. Both adam and eve are to blame equally for eating the forbidden fruit, so they're co-equal human beings, and I think that's lost in today's narrative about islam and muslims, and that's important to keep in mind. In the end it didn't matter whose fault it was; they both suffered the consequences of disobeying god. The first lesson of genesis is cold and hard. For humans, sustaining life on this earth is not meant to be easy. We have to go out into that cold, suffering world, where we labor by the sweat of our brow and bring forth our children in pain and have to suffer and die. We christians believe this is why jesus came, to solve that problem. To pay the penalty for sin. But maybe, when eve made the choice that christians call original sin, it was something more. Maybe it was the first act of original thought. Adam and eve? Free will? Oh, absolutely free will. That's a story of: you can make a choice. That's the most horrible thing that faces a human being. You've got to choose. On our journey, we met believers who say the bible is the literal truth straight from the mouth of god. Do you think it happened? And we met others, even those of faith, who believe that these are stories that have been passed down through the generations. What brought you to israel? A record of a people's struggle with the world and their place in it. So, how are we meant to read the book of creation? This is a wonderful myth. A myth is more than history. It's telling you the meaning of history. The meaning of events. God can communicate truth through different types of literature. It doesn't always have to be a newspaper-style account of what happened. For instance, was the earth created in seven days, or did it take millions of years of evolution? Both. Both -- you are saying that as a catholic priest? Yes, absolutely. Science gives us insight into the how. How the universe works, how particles behave. But zero insight into why. I mean, why are we here? What's the meaning of it all? For some people, religion offers some degree of insight into those very important questions. Important and difficult questions the bible forces us to think about. Like jealousy and rage, and why some people come to hate and harm each other. A lesson starkly taught in the story of the first children, the first siblings: abel is a shepherd and cain a farmer. Both offer sacrifices, but god likes abel's better. This is about life as we know it, and it's not fair; we feel the pain of those god hasn't chosen. In a fit of jealousy, cain kills his brother abel. This is the first example we ever see of murder, and the gravity with which god holds the taking of another human life. And yet, with the passage of thousands of years, humankind is still at it. As surely as cain killed abel, the slaughter of innocents continues in the very place the bible tells us this story was set: today's syria, wracked by the most brutal of wars. And in a sleepy town in connecticut, the horrifying massacre of children in their elementary school. How do we make sense of our torn world and torn personalities? And the conflict and despair we fall into when we see suffering and injustice? What I think the bible does really well is help us ask really good questions. How could we do this better? If I were in this person's shoes, how would I have acted?
If I'm judge and jury, what punishment would I assess? And, god forbid, if I committed that crime, what would I want my peers to do to me?
Tuesday, March 3, 2009

Pursuing as an end and pursuing as a means

Heath White said...
This is a great point. It corresponds to the idea in the theoretical realm that we can believe or know something without understanding the reasons for it. I've never seen the point made for the practical realm, but it works.
On a different note: don't mix up 'final' and 'intrinsic' goods. All instrumental goods are (probably) extrinsic, i.e. valuable because of their relation to something else, but it doesn't follow that a final good is intrinsic. Something might be valuable because it is rare or unique, for example, and this is an extrinsic but non-instrumental good.

Alexander R Pruss said...
A related point in the practical realm is that the alleged dichotomy between being moved by moral and by non-moral reasons (or by moral and by prudential ones) will not be exhaustive. For to act, all one needs is the belief that one has a reason--one does not need to know what sort of reason one has. (Think of an expert who tells one that one has a reason to do A, but who fails to specify whether the reason is moral or non-moral.) Similarly, the dichotomy between believing something for pragmatic reasons and believing it for epistemic reasons can fail. (Think of an expert who says in a given situation: "Be optimistic! Assume it'll all work out." You don't know if that's pragmatically or epistemically justified, but you do it anyway, trusting the expert.)
All this is grist for the mill of someone like me who thinks the realm of reasons is unified (I often put the point by saying that all reasons are moral reasons). But the point is somewhat independent of that. Though, I think, there is probably a bit of a feeling of alienation when one acts on a reason proffered by an expert and has no idea what the expert's reason actually is.
I am not sure about intrinsic vs. final. I think of knowledge as an "intrinsic good", but knowledge is relational. (x knows p only if p, after all.) I'll need to think some more on this, thanks!

Brandon said...
I'm skeptical; in the symmetry case you are obviously ensuring the existence of large symmetrical patterns on your walls for the sake of being better off. That you don't know why or how you'll be better off for it doesn't change anything; it's still obviously a means, because being better off is obviously valuable. Likewise, with your friends case: we don't know whether it should be pursued as a means or an end, but that doesn't change how we are in fact pursuing it. If you are pursuing good reputation without knowing whether it is intrinsically or instrumentally worth having, you are still pursuing it as an end. What you are doing is pursuing it as an end but leaving open the possibility of subordinating it to a greater end somewhere down the road. And this makes plenty of sense: something can be both an end and a means (e.g., by being the end of this particular action, but chosen because it is a means to a larger project).
I do agree, though, that the dichotomy between moral and non-moral reasons is a false one.

Alexander R Pruss said...
When something is an aspect of well-being, as friendship, aesthetic goods, etc. are, and it is pursued because it is an aspect of well-being, then it is being pursued for its own sake. Well-being isn't some further thing for which it is a means. If I don't know whether something is an aspect of well-being or a means to an aspect of well-being, then I pursue it neither as a means nor as an end.
Maybe, though, one can introduce the notion of constitutive, rather than causal, means, and then one may be able to say that one can pursue something as a constitutive means. If so, then I need to modify the symmetrical patterns case. The being tells you simply that it will be better if there are large symmetrical patterns on your walls. The being doesn't tell you whether it'll be better for you or for others. The being doesn't tell you whether the symmetrical patterns are what constitutes the value, or whether they are merely a means to something else (maybe the symmetrical patterns will scare off invading aliens, thereby making the aliens better off morally and the earthlings better off materially).

Brandon said...
I still don't see that that would make any difference. If the being tells you that it will be better, you are still doing it as a means in the pursuit of what is better. I do think one needs to recognize constitutive means, but since in neither this version nor the original do you actually have any notion of how it would be constitutive, or even that it would be constitutive, you can't be pursuing it as a constitutive means. But that doesn't rule out that it is still a means, with the question of whether it will turn out to be constitutive or not left open.
It seems to me that both in the post and in the comments you are conflating two different things: objective means-end relations (or perhaps more accurately, how we ideally should order things as means and ends) and the subjective use of things as means and pursuit of things as ends. Your cases set up situations where we don't know the former and can still pursue things; but that doesn't actually tell us whether we pursue those things only as ends or as means. Am I just missing some turn in your argument?

Adam said...
This is a very interesting take on the subject. It is a theme I've found particularly important (and controversial) in the loci of theological aesthetics and the practical theology of the arts.
Benefits of Solar

Modern solar technology (photovoltaics) is the result of NASA and military research and development projects. The goal was to find power sources that are reliable and maintenance free, with no moving parts and no need for refueling. Photovoltaics proved to be the answer, which is why almost all modern satellites are solar powered. Modern-day RVers have a similar desire for independent, reliable and maintenance-free power. They also want their power to be clean, quiet, and free of the vibration and pollution associated with petrochemical-fired generators. Today's solar powered systems meet those criteria.

Benefits of Solar Electric Battery Charging

1. Clean, Quiet & Easy to Use
- Solar panels consume no fuel and give off no waste.
- There are no moving parts, which means no mechanical noise.
- Simply place the solar panel in the sun and you generate electricity!

2. Solar Power Maximizes Battery Life
Solar panels generate pure DC electricity when exposed to sunlight, which is exactly what your batteries want. By saturating your batteries with electrons in a slow, steady manner every day, you prevent the repetitive deep discharges that shorten the lifespan of lead-acid batteries. In fact, a properly designed solar battery charging system can easily double the useful life of your lead-acid batteries. (Lithium batteries, on the other hand, work a little differently and aren't worn out by deep discharges the way lead-acid batteries are.)

3. Electrical Independence
With a properly sized system and the appropriate components, you will be able to park where you want, free from the concerns of finding shore power or running your generator. Go to the desert, go to the mountains, go to the beaches, go anywhere the sun shines and declare your electrical independence!

4. Low Maintenance
Since solar panels consume no fuel and have no moving parts to wear out, there are no air, oil, or fuel filters to change. Simply keep the surface of the panels clean.

5. It's Safe and Reliable
RVs typically operate at 12 volts and less than 30 amps. As a result, there is very little chance of electrocution or electrical fire; a system installed with proper wire sizes and fuses is inherently safe. As mentioned at the top of this page, NASA and the military had reliability at the top of their list of criteria. In fact, solar panels are so reliable that manufacturers now offer 25-year warranties on their products and fully expect them to last over 35 years!

System Component Overview

A solar battery charging system consists of the following components:
- Solar Panel(s) – turn sunlight into charging current
- Mounts – hold the panels to the roof
- Charge Controller System – regulates the flow of electricity to the batteries
- System Monitor (optional) – keeps track of the system's performance
- Wire Harness – carries the charge from the panels, through the charge controller, to the batteries
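As a rough illustration of how these components work together, the Python sketch below estimates the daily charge a panel returns to a battery bank and how many days the bank would last with no sun. Every number in it (the panel wattage, sun-hours, the 75% derating for controller and wiring losses, and the 50% usable depth of discharge for lead-acid) is an illustrative assumption, not a recommendation for any particular system.

def daily_amp_hours(panel_watts, sun_hours, system_voltage=12.0, derate=0.75):
    """Estimated amp-hours returned to the battery per day.
    'derate' lumps together charge-controller, wiring and heat losses."""
    return panel_watts * sun_hours * derate / system_voltage

def days_of_autonomy(battery_ah, daily_load_ah, usable_fraction=0.5):
    """Days a lead-acid bank lasts with no charging, if discharge is kept
    above roughly 50% (deep discharges shorten lead-acid battery life)."""
    return battery_ah * usable_fraction / daily_load_ah

# Hypothetical example: 200 W of panels, 5 sun-hours/day,
# a 200 Ah battery bank, and a 60 Ah/day load.
print(f"~{daily_amp_hours(200, 5):.0f} Ah returned per day")   # about 62 Ah
print(f"~{days_of_autonomy(200, 60):.1f} days of autonomy")    # about 1.7 days

The point of the calculation is the matching step: if the daily amp-hours returned are at least as large as the daily load, the bank stays topped up and avoids the deep discharges described above.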
Bush flown to hospital for heart trouble

Condition is common irregularity; drug can prevent further occurrences

May 05, 1991 | By Jonathan Bor

Doctors said yesterday that President Bush suffered a common irregularity of the heart that, given his history of vigor and good health, probably does not signal a life-threatening illness or a condition that will force him to slow his pace. Experts in heart disease cautioned that the condition, known as atrial fibrillation, is common in people with a serious underlying heart disorder such as a leaky heart valve. But they said that when it strikes people like the president, who keep an active pace and have never reported chest pains or other symptoms of a serious abnormality, it usually is a relatively harmless condition that can be managed effectively with drugs.

"People don't keel over dead from atrial fibrillation except in the rarest of circumstances when their hearts are in very bad shape to begin with, which we know he wasn't because he was out jogging," said Dr. Alan Guerci, director of the coronary care unit at Johns Hopkins Hospital.

Atrial fibrillation is a condition in which the atria, the upper chambers of the heart, suddenly pump very rapidly and in an uncoordinated fashion. "As a consequence of this, the heart doesn't work efficiently, they have some shortness of breath and they feel palpitations, meaning they are aware of their heart beating," Dr. Guerci said.

A person can simulate a normally beating heart by closing his hand into a tight fist, opening it all the way, and then closing it tightly again. In atrial fibrillation, it is as if the hand is half-open and the fingers are fluttering out of control. "It's a rapid and discoordinated activity," Dr. Guerci said. "Typically, when healthy people get atrial fibrillation, they get it in the kind of setting President Bush reported. They're jogging on a hot day, or it's after strenuous exercise on a hot day, or they're rapidly drinking large volumes of very cold fluids."

The early reports of President Bush's symptoms were hopeful, he said. Chest pains would have been a likely indication of a heart attack, an interruption of blood flow in the arteries leading to the heart that leaves the heart muscle damaged and unable to beat as efficiently as before. "If he just had atrial fibrillation with shortness of breath, the odds are that he didn't have a heart attack," Dr. Guerci said.

Tests at Bethesda Naval Hospital did not indicate any evidence of a heart attack or of a serious abnormality of the heart's structure, White House press secretary Marlin Fitzwater said. The tests included an electrocardiogram, which measures the heart's electrical activity, and ultrasound, which gives a picture of the heart. The president was placed on digoxin, a drug that increases the force of the heart muscle's contractions and slows the abnormally rapid nerve impulses passing between the upper and lower chambers of the heart.

"I think there's probably an 80 percent chance that there's going to be no explanation of why this occurred," said Dr. Myron Weisfeldt, chairman of cardiology at Johns Hopkins and a former president of the American Heart Association. "Even in the unlikely case that [there's a serious underlying problem], the chances are extremely likely that it's treatable and curable," he said.

Dr. Weisfeldt said shortness of breath is very rarely a first sign of a heart attack -- the early symptoms of which are usually chest pain, nausea, a cold clammy feeling and "a feeling of impending difficulties."
The likelihood that a person who experienced shortness of breath but no chest pains suffered a heart attack is "2 percent to 3 percent," he said. Approximately 5 percent of the U.S. population will have atrial fibrillation sometime during their lives, Dr. Weisfeldt said.

Dr. Stephen Gottlieb, a cardiologist at the University of Maryland Hospital, said treatment with digoxin or other heart-regulating drugs is common. If drugs don't work, doctors will often restore the heart's normal rhythm by administering an electric shock to the chest.

In rare instances, patients with atrial fibrillation will develop blood clots in the upper heart chambers -- clots that could enter the bloodstream and cause a stroke. To prevent this, doctors will often give patients blood-thinning drugs such as aspirin. But less than 1 percent of the patients actually develop clots. Atrial fibrillation is far less serious than ventricular fibrillation -- the rapid and out-of-control beating of the heart's lower chambers.
What's the Latest?

Here's an arms race I can get behind. Engineering students at Olin College spent three months and a $250 budget constructing a face-tracking marshmallow cannon for their Principles of Engineering course. The Confectionery Cannon, as the four young engineers call it, was programmed in Python and uses OpenCV for its face-tracking capabilities. Just check out the video to see how awesome it is:

What's the Big Idea?

Aside from its potential as the weapon for the world's most delicious firing squad, the Confectionery Cannon allowed the students to achieve course objectives and put their knowledge to work. The cannon's automated reload lets it fire six marshmallows in 10 seconds, and a pneumatic system launches the sugary ammunition with the help of 135 PSI of pressure. Their impressive website, linked below, offers a fun glimpse into the design and construction of the cannon. Engineering.com has a good write-up on the cannon's specs.

Learn more at http://confectionerycannon.com/

Photo credit: Brittny / Shutterstock
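The students' own code isn't published in the article, but since it names Python and OpenCV, here is a minimal, generic sketch of the face-detection half of such a project, using OpenCV's bundled Haar cascade. The pan/tilt aiming and pneumatic firing logic are the team's own and are not reproduced here.

import cv2

# Load OpenCV's bundled frontal-face Haar cascade (ships with opencv-python).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Center of the detected face; a turret would convert this pixel
        # offset from the frame center into pan/tilt servo commands.
        cx, cy = x + w // 2, y + h // 2
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

From there, mapping the face-center offset into servo commands is the controls problem, which is presumably where the Principles of Engineering coursework comes in.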
Fracking, Chemicals, and Our Health: EPA Considers a Hydraulic Fracturing Chemical Disclosure Rule

Research Director, Center for Science and Democracy | July 16, 2014, 10:43 am EST

What's in the water? What chemicals are being used? Will they harm me? Or my family? Or my animals? What kind of impacts will my environment experience? These questions have been asked by countless communities since hydraulic fracturing first expanded across the country a few years ago. And during this time, these questions have often gone unanswered because of a lack of laws to address them. But right now, the EPA has the opportunity to provide some answers.

With its proximity to residences, one of the largest concerns around hydraulic fracturing has been human and animal exposure to chemicals used in the process, but a lack of legal requirements has limited the public's access to this information. Photo: Trudy E Bell

From the start, many hydraulic fracturing operators have faced criticism from the public, policy makers, and the scientific community for their handling of chemical disclosure (and lack thereof). A 2012 petition to the EPA from the Environmental Integrity Project and 16 other groups said it best: "There is simply no adequate, comprehensive framework to ensure that information as to toxic chemicals used in oil and gas extraction is made available to the public."

The health consequences of no transparency on fracking chemicals

The results haven't been pretty. Citizens fear the chemicals, and the potential health effects, they are exposed to. Many residents near oil and gas facilities have reported experiencing unexplained health problems. Medical professionals and emergency responders have been left without vital information to treat their patients and protect themselves from harmful chemicals. And whole communities have been left in the dark about the identity of chemicals in their environment, let alone their quantities and known health and environmental effects.

To compensate for the lack of federal regulation, many states have passed chemical disclosure laws that require companies to disclose (either publicly or to regulators) information about the chemicals they use in hydraulic fracturing operations. These laws vary widely between states in terms of (1) what is disclosed, (2) when disclosure happens, and (3) how accessible the information is to those who need it. And importantly, many states with hydraulic fracturing still have no law on the books requiring any chemical disclosure. As a result, many communities have limited or no legal mechanisms to require companies to tell them what chemicals are in use.

The trouble with trade secrets

One major barrier to better chemical disclosure laws has been trade secrets, or confidential business information. Hydraulic fracturing operators claim that a law requiring full disclosure of chemicals would harm their business because they would have to reveal their chemical mixtures to their competitors. Because of this argument, nearly every state chemical disclosure law includes a trade secret exemption that allows companies to conceal chemical identities from the public if they consider the information proprietary.
Emergency responders and medical professionals can better react and treat patients in accidents related to hydraulic fracturing when they have quick access to information about the chemicals that may be involved. Photo: NYCDOT/Flickr

Such trade secret exemptions can have serious consequences for public health. In 2008, a nurse in Durango, Colorado, fell ill after being exposed to hydraulic fracturing fluid chemicals. With heart, respiratory and liver failure, she came very near death. But despite her dire condition, the company responsible for the chemicals refused to disclose their identity because the fluid's contents were considered confidential business information. Luckily, doctors were able to save her without knowing what chemicals she had been exposed to, but chemical information could undoubtedly speed treatment and potentially save lives in future accidents. In some states, regulators and medical professionals can still gain access to proprietary chemical information, but it isn't necessarily available within timeframes relevant for medical professionals and emergency responders. Pennsylvania's Act 13 includes what's been dubbed the "Doctors' Gag Rule," which allows doctors who suspect their patient has been exposed to hydraulic fracturing chemicals to obtain proprietary chemical information only if they sign an agreement saying they will not share it. Doctors say this law inhibits their ability to treat their patients, since they are not allowed to share this information with other doctors, nor with the patients themselves. Currently, this rule is under review in Pennsylvania's Commonwealth Court.

The EPA can shed light on hydraulic fracturing chemicals

An opportunity exists for a more comprehensive federal rule on chemical disclosure. The EPA is currently taking comments on a rule to require better transparency around chemical substances and mixtures used in hydraulic fracturing. The EPA plans to use its authority under the Toxic Substances Control Act to issue a rule that addresses many of the problems we've seen around hydraulic fracturing chemical disclosure. The rule could provide the EPA and the public not only with better information about the identity and quantity of chemicals, but also with more information about their public health and environmental effects: vital information for affected communities, emergency responders, medical professionals, and scientists.

However, the rule will not go unopposed. Claiming burdensome paperwork and the importance of confidential business information, many hydraulic fracturing operators can be expected to work to stop or limit the effectiveness of such a rule. In the end, public health should trump business interests, as it has for other regulated industries. Citizens have a right to know what chemicals they might be exposed to, and what the potential health and environmental effects are. But the EPA will need support from citizens, medical professionals, emergency responders, public health specialists, and scientists to move forward with a strong rule. Join me in asking the EPA to take full advantage of this opportunity and issue a comprehensive and mandatory chemical disclosure rule around hydraulic fracturing.
Posted in: Energy, Science and Democracy

Comments:

• TheWatchdog222: List of fracking chemicals: Acetic Acid, Ammonium Persulfate, Borate Salts, Boric Acid, Calcium Chloride, Choline Chloride, Citric Acid, Copolymer of Acrylamide and Sodium Acrylate, Ethylene Glycol (Anti-freeze), Formic Acid, Guar Gum, Hydrochloric Acid, Hydrotreated Light Petroleum Distillate, Isopropyl Alcohol, Lauryl Sulfate, Magnesium Oxide, Magnesium Peroxide, Phosphonic Acid Salt, Polysaccharide Blend, Potassium Carbonate, Potassium Hydroxide, Potassium Metaborate, Quaternary Ammonium Chloride, Sodium Carbonate, Sodium Chloride, Sodium Erythorbate, Sodium Hydroxide, Sodium Polycarboxylate, Sodium Tetraborate, Tetrakis Hydroxymethyl-Phosphonium Sulfate, Tetramethyl Ammonium Chloride, Thioglycolic Acid, Triethanolamine Zirconate, Zirconium Complex. For full details and usage go to:

• Thank you for your comment and resource. Voluntary chemical disclosure like what FracFocus provides can be helpful for obtaining some chemical information; however, such measures are insufficient for providing the public with complete transparency about the chemicals and chemical mixtures used in hydraulic fracturing and their potential health and environmental impacts. We need a mandatory chemical disclosure rule like the one the EPA could provide, to ensure that medical professionals, emergency responders, scientists, and the public have the information they need.

• TheWatchdog222: Public media disclosure of these toxic chemicals is a good first step in getting a mandatory disclosure and approval requirement. Why keep it a secret?

• Krogoth Alexander: This is disgusting… Life in prison should be handed out to the heads of all of these companies. Laws are now created to rape us and empower massive companies. Democracy has failed… We need kings and queens again; that is how pathetic and bad our situation is. Democracy is currently much worse than a monarchy would be.