Try a glass of milk before you run Drinking low-fat or skim milk before you run will provide sustained energy, because milk is a low-glycemic food; i.e., the carbohydrates are released slowly into the bloodstream. Speedwork and pace Speedwork teaches you the sense of pace that you need to race well. Better pacing will also allow you to do your training runs more evenly, which is easier than uneven-paced running. Lutein and cancers Consuming foods rich in lutein and zeaxanthin may reduce your risk of colon cancer. Lutein and zeaxanthin are carotenoids, a type of antioxidant that protects cells from the damaging effects of compounds created during metabolism. You can get lutein from spinach, broccoli and other greens, tomatoes, carrots, oranges and eggs.
Dusky Dolphins Dusky dolphins, originally named Fitzroy's dolphins by Charles Darwin, are easily distinguished from other dolphins. The head is small and evenly sloped, and there's no beak at the end of the snout. The tail and back are bluish black in color, with a dark band that runs diagonally from the flank to the tail. The belly is white, and there's a two-pronged blaze in white or cream from the dorsal fin to the tail. Dusky dolphins are extraordinarily social, sometimes traveling in pods with as many as 1,000 members. They're also highly acrobatic—watch to see them leaping out of the water to turn somersaults in the air. Their squeals, whistles and clicks can sometimes be heard as far as three kilometers (two miles) away.
You've got family at Ancestry. Find more Huffsteller relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 50 more people named Huffsteller in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 40 people named Huffsteller in the 1930 U.S. Census. In 1940, there were 125% more people named Huffsteller in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 90 people named Huffsteller were living in the United States. In a snapshot: • 4 were disabled • 42% were children • 14 adults were unmarried Learn where they came from and where they went. As Huffsteller families continued to grow, they left more tracks on the map: • 1 was a first-generation American • Most fathers originated from South Carolina • They most commonly lived in South Carolina • Most mothers originated from South Carolina
You've got family at Ancestry. Find more Kenery relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 34 more people named Kenery in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 24 people named Kenery in the 1930 U.S. Census. In 1940, there were 142% more people named Kenery in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 58 people named Kenery were living in the United States. In a snapshot: • 3 were disabled • Most common occupation was farmer • 6% were disabled • 18 were children Learn where they came from and where they went. As Kenery families continued to grow, they left more tracks on the map: • Most immigrants originated from Ireland • They most commonly lived in New York • The most common mother tongue was Polish • 9 were first-generation Americans
You've got family at Ancestry. Find more Kokol relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 2 more people named Kokol in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 18 people named Kokol in the 1930 U.S. Census. In 1940, there were 11% more people named Kokol in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 20 people named Kokol were living in the United States. In a snapshot: • 1 was disabled • On average men worked 45 hours a week • 38% of adults were unmarried • 7 were children Learn where they came from and where they went. As Kokol families continued to grow, they left more tracks on the map: • Most immigrants originated from Poland • 8 were first-generation Americans • The most common mother tongue was Polish • They most commonly lived in New York
You've got family at Ancestry. Find more Mardula relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 2 more people named Mardula in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 26 people named Mardula in the 1930 U.S. Census. In 1940, there were 8% more people named Mardula in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 28 people named Mardula were living in the United States. In a snapshot: • The average annual income was $1,079 • 33% of women had paying jobs • 17% owned their homes, valued on average at $900 Learn where they came from and where they went. As Mardula families continued to grow, they left more tracks on the map: • 27% were born in foreign countries • 19 were first-generation Americans • 7 were born in foreign countries • They most commonly lived in Illinois
You've got family at Ancestry. Find more Posery relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 2 more people named Posery in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 7 people named Posery in the 1930 U.S. Census. In 1940, there were 29% more people named Posery in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 9 people named Posery were living in the United States. In a snapshot: • 4 were children • 20% of adults were unmarried • 9 rented out rooms to boarders Learn where they came from and where they went. As Posery families continued to grow, they left more tracks on the map: • They most commonly lived in South Carolina • 14% were first-generation Americans • 1 was a first-generation American
You've got family at Ancestry. Find more Rognlie relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 31 more people named Rognlie in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 65 people named Rognlie in the 1930 U.S. Census. In 1940, there were 48% more people named Rognlie in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 96 people named Rognlie were living in the United States. In a snapshot: • The average annual income was $787 • 19% of women had paying jobs • 3 were disabled • 29% of adults were unmarried Learn where they came from and where they went. As Rognlie families continued to grow, they left more tracks on the map: • 12% were born in foreign countries • 27% migrated within the United States from 1935 to 1940 • The most common mother tongue was Norwegian • Most fathers originated from North Dakota
You've got family at Ancestry. Find more Santusci relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 3 fewer people named Santusci in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 12 people named Santusci in the 1930 U.S. Census. In 1940, there were 25% fewer people named Santusci in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 9 people named Santusci were living in the United States. In a snapshot: • 1 woman had a paying job • 43% of adults were unmarried • 22% were children • The typical household was 3 people Learn where they came from and where they went. As Santusci families continued to grow, they left more tracks on the map: • 42% were first-generation Americans • The most common mother tongue was Italian • They most commonly lived in New Jersey • Most immigrants originated from Italy
You've got family at Ancestry. Find more Schaden relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 8 more people named Schaden in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 85 people named Schaden in the 1930 U.S. Census. In 1940, there were 9% more people named Schaden in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 93 people named Schaden were living in the United States. In a snapshot: • 22 were children • 20% of adults were unmarried • The typical household was 3 people Learn where they came from and where they went. As Schaden families continued to grow, they left more tracks on the map: • 38% were first-generation Americans • They most commonly lived in Michigan • Most immigrants originated from Austria • 22% were born in foreign countries
You've got family at Ancestry. Find more Traenkle relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 10 more people named Traenkle in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 54 people named Traenkle in the 1930 U.S. Census. In 1940, there were 19% more people named Traenkle in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 64 people named Traenkle were living in the United States. In a snapshot: • 63 rented out rooms to boarders • 2% were disabled • 13 owned their homes, valued on average at $5,423 • 43%, or 10 people, lived in homes they rented Learn where they came from and where they went. As Traenkle families continued to grow, they left more tracks on the map: • 29 were first-generation Americans • 5 migrated within the United States from 1935 to 1940 • 5 were born in foreign countries • 54% were first-generation Americans
Don’t be a Duncan Pfool: Remember to use furniture’s correct vocabulary Every area of special interest has its own vocabulary and words of common usage. The area of antiques certainly falls in this category with some of its more obscure terms like recamier and bergere. But there are also a number of terms that are quite common in the industry, and among these common terms are a significant number that are commonly misused, misspelled or misunderstood. One of the ones I see frequently in inquiries from readers concerns that cabinetmaker with the musical name, Duncan Phyfe. In fact, his family name was Fife, but when he came to America from Scotland in the late 18th century he changed it to “Phyfe” to add a little sizzle to an otherwise mundane moniker. He was a talented cabinetmaker who worked in all the prevailing styles of his working life: Federal, English Regency and Empire. One style in which he did not work was the style “Duncan Phyfe.” There is no single style attributed to Duncan Phyfe that can truly be called “Duncan Phyfe.” In modern common usage it seems that every table with curved legs extending from a pedestal is called a “Duncan Phyfe” table. He did make some tables with legs like that, but so did every other cabinetmaker of the period. That style of leg came from mid-18th century English pedestal dining tables. And he also made tables with legs of other styles. An even worse transgression is the misuse of the name itself while describing a misnamed style. More than a few inquiries ask about their “Dunkin & Fife” furniture or similar variations. Another modern misuse of a cabinetmaker’s name involves the mid-18th century English designer Thomas Chippendale. That really was his family name and he was named after his father. His style was an updated take-off on a basic Queen Anne base with some masculine embellishments, often topped off with French or Chinese accents. The style was not “Chip and Dale”; they are Walt Disney cartoon characters. And the style is not the style of the “Chippendales”; they are male exotic dancers. The use of the term “Victorian” as a style is also a misuse of the word. “Victorian” refers to a period of time, 1837 to 1901, when Victoria, the only daughter of Edward, Duke of Kent, the fourth son of George III, sat on the throne of England. No style in modern history has been maintained continuously for 64 years, and “Victorian” is no exception. Within that span of years a number of prominent and very distinct styles rose and fell in favor, among them Late Classicism, Gothic, Elizabethan Revival, Rococo Revival, Renaissance Revival, the Aesthetic Movement, Colonial Revival, Arts and Crafts and even Golden Oak. All could be considered to be styles of the Victorian era, but none can be said to be the Victorian style. Then there are ambivalent uses of the names of pieces of furniture themselves. In common use the term secretary is often used to describe a desk with a tall bookcase on top, usually with glass panel doors. But the term secretary can actually be used as a simple synonym for a desk with or without the top section. A more accurate way of describing the tall model is to call it a “bookcase/secretary,” literally a bookcase on top of a slant front secretary or desk. That’s how the form got started. Most early bookcase/secretaries consisted of two separate parts. Only in the 20th century did they become one-piece, tall cabinets. The term “blanket chest” is also open to interpretation. Originally the term referred to a lift top chest with drawers below.
This form is sometimes also called a “chest on drawers” and smaller versions are sometimes called a “mule chest.” When properly decorated they were sometimes called a “dower” chest, a place for a young bride to store her dowry. Elaborate versions were the size of a full-size chest of drawers and had faux drawer fronts above the real drawers. Long low chests without drawers are simply called chests or storage chests. This type of chest in the late 17th and early 18th century was often a six-board chest, one single large board for each of six panels. The 20th century helped confuse the issue with the introduction of the cedar chest, essentially a storage chest made of either solid cedar or lined in cedar to minimize the intrusion of moths. They came in all sizes and forms, including simple storage chests, chests on drawers and even chests on stands that looked like complete cabinets with drawers but were really a single chest compartment. They were even called “hope chests,” the modern version of the dower chest. But very few of them were truly “blanket” chests in the traditional meaning of the term. The final term in the industry that is open to the most interpretation, even to the point of initiating vigorous arguments and heated exchanges, has to do with the use of the term “antique” itself. What may or may not be an antique is certainly open to debate in many quarters and less so in others. Fortunately there is not room left here for a full discussion of that subject. It will just have to wait. Fred Taylor is an author and syndicated columnist. Send your comments, questions and pictures to P.O. Box 215, Crystal River, FL 34423 or
The American Indians before 1491 A great wave of interest has been created by a new theory about the American Indians. I first learned of this theory because my Japanese wife was assigned to read a paper on this theory and answer questions about it as part of a reading comprehension test. The test was assigned, of course, by an American Indian who happens to be a college professor. That, of course, is a problem. When I first read it, I thought that this theory was ridiculous and unworthy of consideration. I felt that it was full of holes. However, now I find that I cannot refute it or prove that it is wrong, so I will present it in summary form: It has long been believed that when Columbus discovered America in 1492, there were about one million Indians in the Americas. This is the traditional view. However, the new theory is that there were 100 million Indians and that the population of Indians in the Americas was about the same as the population of Europe. According to this theory, within 100 years after 1492, 99 million Indians died of the White Man's Diseases, leaving only the one million that explorers encountered. The obvious questions are: why did so many die, what happened to their dead bodies, and, when they were alive, what did they eat? Was there enough food to sustain a population of 100 million? Here, in summary, are the answers provided: The Indians died of many diseases brought by the White Men, including smallpox. However, more importantly, De Soto, one of the first explorers, brought 300 pigs. Some of these pigs had diseases. Some of the pigs escaped, reproduced rapidly and gave their diseases to other animals. This resulted in an epidemic and the deaths of millions of animals, which the Indians depended upon for food. As to what the Indians ate, the answer comes easily. Most of the foods that we eat today are foods that the Indians gave us. Corn, tomatoes and potatoes are all foods which the Indians developed. We cannot even imagine eating a meal today without eating food provided by the Indians. Did these foods grow by accident? No. The Indians cultivated and developed them. Where did they grow these foods? Why, the same place where we grow them. The Great Plains of the Midwest, the breadbasket of the world, including such states as Iowa and Illinois, were all developed by the Indians. Perhaps the most intriguing aspect of this theory concerns the Amazon Rain Forest. It is a noteworthy fact that almost every tree or plant in the Amazon Rain Forest bears a fruit, nut or berry which people can eat. The Amazon Rain Forest is almost completely flat, dropping only 500 feet in 2,000 miles. Is this an accident of nature? No, according to this theory. The Amazon Rain Forest is a garden, planted by the Indians. All of these fruit-bearing trees were cultivated and developed by the Indians. OK, so you still do not believe it. In that case, where is it wrong? If you want to learn more about this theory, try searching under 1491. Sam Sloan
Monday, January 12, 2015 Review: The Imitation Game Careful, there may be spoilers below. Shira and I got wild and crazy on Saturday night: we saw a movie in an actual movie theater! Man, I love those 25 minutes of previews! Anyway, the movie we saw was The Imitation Game, which is the story of Alan Turing. Being a Computer Science Major, I'm already somewhat familiar with Mr. Turing's work. He's sort of a Charles Darwin of Computer Science. To appreciate this you have to appreciate that Computer Science isn't really about Computers or Science. It's about the study of computation, and problem solving in general. Sure, we have to use these pesky physical machines, but that's an implementation detail. Alan Turing made great headway into understanding what's computable, and naturally, to do this, he created his own computer and method of programming it. On the surface that may not be particularly impressive, but consider that he did this in 1936 or so. A decade before anyone even saw a computer, and years before computer programming would be a true pursuit, Alan Turing had already devised the most powerful computation machine on the planet, and proved that all general purpose computers were equivalent to his contraption. So before I walked into the theater I had mad respect for the man. For the next 2 hours I was fully entertained by the movie. As the previews suggest, the movie focuses on Turing's involvement with the effort to break Germany's WWII encryption technology. I had known of Turing's involvement in this effort, but the movie truly brought it to life. I suppose it's a special kind of challenge to tell a story where the outcome is known (I'm looking at you, Titanic), and the creators of The Imitation Game pulled it off well. So yes, the movie was entertaining and gave me a fresh perspective into Turing's life. The first order of business after leaving the theater was to Google Imitation Game: differences from real life. It was obvious that they injected a dose of Hollywood into the movie, but how much of it was fake? Slate, among others, answers that question, and no surprise, the results aren't pretty. Many of the moments I enjoyed in the film just plain never happened. What do I think about this? I'm not sure. It's easy to kvetch and say that they should have been more true to the story. But, at the end of the day, if the goal was to show just how amazing Turing was, perhaps they succeeded? In the end, I think the spirit of the film is in the right place. So go, watch it and enjoy. Just do your homework after the fact so you can separate fact from fiction. 1. I really liked the movie too. Yes, the inaccuracies bothered me, but the way I look at it, many non-techies might have heard about this amazing man for the first time - and that has to be a good thing, right? 2. Thinking more about it, I realized that they essentially exaggerated every aspect of the story: he was quirky, so they gave him Asperger's; he was one of the signatories on a letter to Churchill, so they made him the author and *only* signatory; that sort of thing. I suppose, though, if you want to tell his entire life story in 2 hours, that sort of exaggeration is to be expected. In the end, I agree with you: as a way to introduce him to the masses, it works. Oh yeah, it also works as a fun movie to watch. There's that too ;-)
Air Pollution Quiz
Click on the correct answer.
1. Before the industrial revolution, there was no air pollution.
2. Air pollution is a problem only in big cities.
3. Dirty air costs each American about $100 per year.
4. All smokestack emissions pollute the air.
5. When the air is polluted, you can always see and smell it.
6. Clean air is the responsibility of industry alone.
7. Burning leaves or trash at home contributes to air pollution.
8. The only way air pollution affects the human body is by causing lung disorders.
9. Cars and buses contribute little to the air pollution problem.
10. We have a limitless amount of air to breathe.
11. Air pollution now is under control and will not be a problem in the future.
Breast cancer: Choosing between breast-conserving surgery and mastectomy
In most cases, a woman will be given a choice between breast-conserving surgery (BCS) and mastectomy. Studies done over many years have shown that women with stage I or stage II breast cancer who have BCS combined with radiation therapy have the same survival rates as women who have a mastectomy. Having a mastectomy does not provide a better outcome or improve long-term survival in most cases. Given this body of knowledge, doctors will give women a choice between the surgeries, if there is no medical reason to recommend one surgery over the other. Most women with early stage breast cancer have breast-conserving surgery. For some women, the choice is easy. Other women find making this choice difficult. Some women may want the doctor or a partner to make the decision for them. For many women, the main concern is having the cancer completely removed, so having a mastectomy may give them that peace of mind and assurance. For other women, their breasts are an important part of their identity and self-image as a woman, so breast-conserving surgery may be the best choice for them. The choice between BCS or mastectomy is a very personal one. Individual preferences, priorities and lifestyle all play a part in making the decision. BCS and mastectomy both have advantages and disadvantages. It may help to talk to different women who have had each type of surgery.
Advantages and disadvantages of each type of breast surgery
Breast-conserving surgery
Advantages: BCS is equally effective as a mastectomy (when followed by radiation therapy), in terms of overall survival. There is less change to the appearance of the breast, though there still may be a scar or changes to the shape of the breast. BCS is less likely to affect a woman’s feelings about her body image and sexuality.
Disadvantages: Some women may be concerned that not all the cancer was removed. BCS is followed by 4–6 weeks of daily radiation treatments, which lengthens the time a woman receives treatment. Some women may have difficulty finding transportation to the radiation treatment centre or may have to travel for treatment. There are potential short- and long-term side effects of radiation therapy. There is a slightly higher risk of developing a recurrence of the cancer in the remaining breast tissue.
Mastectomy
Advantages: Mastectomy is equally effective in terms of overall survival as BCS followed by radiation therapy. Some women may feel assured that there is a better chance that the cancer has been cured when the breast is removed. In most cases, a woman who has a mastectomy does not require radiation therapy, so radiation treatment side effects can be avoided.
Disadvantages: Mastectomy is a longer surgery with a longer recovery time and more potential side effects than BCS. Surgery is longer if the woman has immediate reconstruction, or she will need more surgery if she chooses to have breast reconstruction later. In some situations, a mastectomy may need to be followed by radiation therapy, so the potential side effects will not be avoided. The loss of a breast may affect a woman’s feelings about her body image and sexuality.
Sleep Apnea Apnea is a Greek word that means "without breath." Sleep apnea is a condition in which breathing briefly stops repeatedly through the night as a person sleeps. These pauses last at least 10 seconds. They may happen hundreds of times during the night. A person with sleep apnea is only rarely aware of having difficulty breathing. It is usually recognized as a problem by others who witness the pauses in the breath. Sleep apnea disturbs a person's sleep. A person with sleep apnea may move from deep sleep to light sleep several times during the night. Levels of oxygen in the blood fall from the pauses in breathing. People with sleep apnea often snore loudly during sleep. (Not all people who snore, however, have sleep apnea.) There are three types of sleep apnea: • Central sleep apnea happens because the muscles that cause the lungs to fill with air don't move. The brain isn't sending the proper signals to the muscles that control breathing. • Obstructive sleep apnea happens because something physical blocks the airflow even as a person's body works to breathe. This is the most common form. An estimated one out of every five Americans has this type of sleep apnea. • Mixed sleep apnea is when a person moves between central sleep apnea and obstructive sleep apnea during an event of apnea. A person with sleep apnea may be unaware that a problem exists. Usually a family member or sleeping partner is the first to recognize the problem. Common signs of sleep apnea include: • Loud snoring. Not all people who snore, however, have sleep apnea. People with central sleep apnea may not snore. • Choking or gasping during sleep • Sleepiness or being tired during the day. An adult or teen suffering from long-standing severe sleep apnea may fall asleep for short periods of time during the course of daily activities if given a chance to rest. A person with sleep apnea may also have: • A dry throat on waking • A hard time concentrating • Problems with memory or learning • A loss of interest in sex • A need to go to the bathroom often during the night • Acid reflux • An increased heart rate • Irritability • Mood swings or personality changes, including feeling depressed • Morning headaches • Night sweats Children who have sleep apnea may be extremely sleepy during the day. In some cases, toddlers or young children will behave as if they are hyper or overtired. Children with sleep apnea may be thin and show signs of a failure to thrive (slowed growth). This happens because the child's body burns calories at a high rate to get enough air into the lungs. If there is a blockage in the throat due to swollen tonsils or adenoids, the child may not be able to smell properly. Food doesn't taste as good to them and may even be difficult to swallow. Causes and Risk Factors Sleep apnea affects more than 12 million Americans, according to the National Institutes of Health. It affects men, women, older people and children alike. Some factors, however, make getting sleep apnea more likely. These include: • Being male. Men are twice as likely to get sleep apnea as women. • Being older. Obstructive sleep apnea is two to three times more likely in adults aged 65 or older. • Being overweight. The extra soft tissue in the throat makes it harder to keep the throat open during sleep. • Having a family member who has sleep apnea. • Having a thick neck. A person whose neck is more than 17 inches around has a greater risk of developing sleep apnea. • Smoking.
It may cause inflammation and fluid retention of the throat and upper airways. Several things can cause sleep apnea, including: • Overly relaxed muscle tone in the throat that causes the walls of the airway to collapse • Structural problems of the head, throat and nasal passages • Heart disease, which is the most common cause of central sleep apnea. People with atrial fibrillation or heart failure have a greater risk of central sleep apnea. • Neuromuscular disorders. These include amyotrophic lateral sclerosis (Lou Gehrig's disease), spinal cord injuries or muscular dystrophy. Each of these conditions can affect how the brain controls breathing. • Stroke or brain tumor. These conditions can disturb the brain's ability to regulate breathing. • Excessive drinking of alcohol or use of sedatives or tranquilizers. These may relax the muscles of the throat too much, interfering with normal breathing and sleep. • Having Down Syndrome. A little more than half the people who have Down Syndrome also have sleep apnea. A person with Down Syndrome may have a more relaxed muscle tone than other people and a relatively narrow nose and throat and large tongue. • Colds, infections or allergies that cause nasal congestion or swelling of the throat or tonsils. Some viruses such as Epstein-Barr can cause the lymph glands to swell. Sleep apnea due to these types of blockages usually only lasts a short period of time. • Enlarged tonsils and adenoids. Children with obstructive sleep apnea usually have this problem. It can be corrected with a tonsillectomy and adenoidectomy. • High altitude, if you aren't accustomed to it. This usually goes away as the body adapts to the higher altitude or if you move to a lower altitude. Many people live for years or decades with sleep apnea, unaware that they have it. Often a family member or sleeping partner brings it to their attention. To diagnose sleep apnea, a doctor will take a medical history and do a physical exam. He or she will check the mouth, nose and throat for extra or large tissues. The doctor may order several tests to be done while you sleep, including: • Measuring the oxygen in your blood (oximetry). This is done by putting a small sleeve over a finger while you sleep. • A polysomnogram. It records brain, eye and muscle activity as well as breathing and heart rates. It measures the amount of oxygen in your blood and how much air moves in and out of your lungs while you sleep. This painless test can be done in a sleep laboratory or center or at home using a home monitor. When the test is done at home, a technician comes to your house and helps you apply a monitor that you will wear overnight. The technician will return in the morning to get the monitor and send the results to your doctor. You may be referred to a specialist in lung problems (pulmonologist), the brain or nerves (neurologist), heart and blood pressure problems (cardiologist) or ear, nose and throat problems (otolaryngologist) for additional evaluation. Treatment for sleep apnea is designed to restore regular nighttime breathing, relieve loud snoring and address daytime sleepiness. Treatment also targets complications of sleep apnea such as high blood pressure and higher risks for heart attack and stroke. The interruption of normal sleep can lead to accidents at home, on the job or while driving. It can disturb healing and immune responses. In children, it can interfere with normal growth.
Treatment for sleep apnea varies depending on the cause of the problem, your medical history and how severe the condition is. Treatment generally falls into these categories: • Lifestyle changes • Devices to change the position of the jaw, tongue and soft tissues of the mouth and throat • Pressurized air machines • Surgery People with sleep apnea need to take special care when having surgery or undergoing dental procedures. Anesthesia and drugs used to relieve pain and depress consciousness stay in the body for hours or even days after surgery. Even these small amounts can make sleep apnea worse. Dental, mouth or throat surgery can cause swelling in the lining of the mouth and throat, also making sleep apnea worse. Be sure your doctors, dentist and surgeons are aware that you have sleep apnea. You will need to be closely monitored after surgery.
Organization of Computer Systems: § 4: Processors Instructor: M.S. Schmalz Reading Assignments and Exercises This section is organized as follows: Information contained herein was compiled from a variety of text- and Web-based sources, is intended as a teaching aid only (to be used in conjunction with the required text), and is not to be used for any commercial purpose. Particular thanks is given to Dr. Enrique Mafla for his permission to use selected illustrations from his course notes in these Web pages. 4.1. The Central Processor - Control and Dataflow Reading Assignments and Exercises Recall that, in Section 3, we designed an ALU based on (a) building blocks such as multiplexers for selecting an operation to produce ALU output, (b) carry lookahead adders to reduce the complexity and (in practice) the critical pathlength of arithmetic operations, and (c) components such as coprocessors to perform costly operations such as floating point arithmetic. We also showed that computer arithmetic suffers from errors due to finite precision, lack of associativity, and limitations of protocols such as the IEEE 754 floating point standard. 4.1.1. Review In previous sections, we discussed computer organization at the microarchitectural level, processor organization (in terms of datapath, control, and register file), as well as logic circuits including clocking methodologies and sequential circuits such as latches. In Figure 4.1, the typical organization of a modern von Neumann processor is illustrated. Note that the CPU, memory subsystem, and I/O subsystem are connected by address, data, and control buses. The fact that these are parallel buses is denoted by the slash through each line that signifies a bus. Figure 4.1. Schematic diagram of a modern von Neumann processor, where the CPU is denoted by a shaded box - adapted from [Maf01]. It is worthwhile to further discuss the following components in Figure 4.1: The processor represented by the shaded block in Figure 4.1 is organized as shown in Figure 4.2. Observe that the ALU performs I/O on data stored in the register file, while the Control Unit sends (receives) control signals (resp. data) in conjunction with the register file. Figure 4.2. Schematic diagram of the processor in Figure 4.1, adapted from [Maf01]. In MIPS, the ISA determines many aspects of the processor implementation. For example, implementational strategies and goals affect clock rate and CPI. These implementational constraints cause parameters of the components in Figure 4.3 to be modified throughout the design process. Figure 4.3. Schematic diagram of MIPS architecture from an implementational perspective, adapted from [Maf01]. Such implementational concerns are reflected in the use of logic elements and clocking strategies. For example, with combinational elements such as adders, multiplexers, or shifters, outputs depend only on current inputs. However, sequential elements such as memory and registers contain state information, and their output thus depends on their inputs (data values and clock) as well as on the stored state. The clock determines the order of events within a gate, and defines when signals can be converted to data to be read or written to processor components (e.g., registers or memory). For purposes of review, the following diagram of clocking is presented: Here, a signal that is held at a logic high value is said to be asserted.
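To make the combinational-versus-sequential distinction concrete, here is a small Python sketch (an illustrative model added alongside these notes, not part of any MIPS tool): the adder's output depends only on its current inputs, while the edge-triggered register's output also depends on state captured at the rising clock edge.

```python
# Illustrative model of combinational vs. sequential elements (32-bit width assumed).

MASK32 = 0xFFFFFFFF

def adder(a, b):
    """Combinational: the output is a pure function of the current inputs."""
    return (a + b) & MASK32

class EdgeTriggeredRegister:
    """Sequential: the output depends on state captured on the active clock edge."""
    def __init__(self):
        self.q = 0        # stored state, initially deasserted (logic low)
        self._clk = 0     # last clock level seen

    def tick(self, clk, d):
        # Capture d only on a rising edge (0 -> 1); otherwise hold the old state.
        if self._clk == 0 and clk == 1:
            self.q = d & MASK32
        self._clk = clk
        return self.q

pc = EdgeTriggeredRegister()
pc.tick(1, adder(0, 4))      # rising edge: the register latches 4
pc.tick(0, adder(100, 4))    # no edge: the register still holds 4
print(pc.q)                  # -> 4
```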
In Section 1, we discussed how edge-triggered clocking can support a precise state transition on the active clock pulse edge (either the rising or falling edge, depending on what the designer selects). We also reviewed the SR Latch based on nor logic, and showed how this could be converted to a clocked SR latch. From this, a clocked D Latch and the D flip-flop were derived. In particular, the D flip-flop has a falling-edge trigger, and its output is initially deasserted (i.e., the logic low value is present). 4.1.2. Register File The register file (RF) is a hardware device that has two read ports and one write port (corresponding to the two inputs and one output of the ALU). The RF and the ALU together comprise the two elements required to compute MIPS R-format ALU instructions. The RF is comprised of a set of registers that can be read or written by supplying a register number to be accessed, as well as (in the case of write operations) a write authorization bit. A block diagram of the RF is shown in Figure 4.4a. Figure 4.4. Register file (a) block diagram, (b) implementation of two read ports, and (c) implementation of write port - adapted from [Maf01]. Since reading of a register-stored value does not change the state of the register, no "safety mechanism" is needed to prevent inadvertent overwriting of stored data, and we need only supply the register number to obtain the data stored in that register. (This data is available at the Read Data output in Figure 4.4a.) However, when writing to a register, we need (1) a register number, (2) an authorization bit, for safety (because the previous contents of the register selected for writing are overwritten by the write operation), and (3) a clock pulse that controls writing of data into the register. In this discussion and throughout this section, we will assume that the register file is structured as shown in Figure 4.4a. We further assume that each register is constructed from a linear array of D flip-flops, where each flip-flop has a clock (C) and data (D) input. The read ports can be implemented using two multiplexers, each having log2N control lines, where N is the number of registers in the RF. In Figure 4.4b, note that data from all N = 32 registers flows out to the output muxes, and the data stream from the register to be read is selected using the mux's five control lines. Similar to the ALU design presented in Section 3, parallelism is exploited for speed and simplicity. In Figure 4.4c is shown an implementation of the RF write port. Here, the write enable signal is a clock pulse that activates the edge-triggered D flip-flops which comprise each register (shown as a rectangle with clock (C) and data (D) inputs). The register number is input to a 5-to-32 decoder, and acts as the control signal to switch the data stream input into the Register Data input. The actual data switching is done by and-ing the data stream with the decoder output: only the and gate that has a unitary (one-valued) decoder output will pass the data into the selected register (because 1 and x = x).
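The read/write behaviour just described can be summarised in a short, illustrative Python sketch (a toy model under the assumptions above — 32 registers, two read ports, one write port gated by the RegWrite enable — not a description of the hardware itself):

```python
class RegisterFile:
    """Toy model of the MIPS register file: 32 registers, 2 read ports, 1 write port."""
    def __init__(self):
        self.regs = [0] * 32

    def read(self, rnum1, rnum2):
        # Reading does not change state, so no enable signal is needed.
        return self.regs[rnum1], self.regs[rnum2]

    def write(self, rnum, data, reg_write):
        # Writing is destructive, so it is gated by the RegWrite enable.
        # Register 0 is hard-wired to zero in MIPS.
        if reg_write and rnum != 0:
            self.regs[rnum] = data & 0xFFFFFFFF

rf = RegisterFile()
rf.write(8, 25, reg_write=1)    # $t0 (register 8) <- 25
a, b = rf.read(8, 9)            # read $t0 and $t1
print(a, b)                     # -> 25 0
```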
We next discuss how to construct a datapath from a register file and an ALU, among other components. 4.2. Datapath Design and Implementation Reading Assignments and Exercises The datapath is the "brawn" of a processor, since it implements the fetch-decode-execute cycle. The general discipline for datapath design is to (1) determine the instruction classes and formats in the ISA, (2) design datapath components and interconnections for each instruction class or format, and (3) compose the datapath segments designed in Step 2) to yield a composite datapath. Simple datapath components include memory (stores the current instruction), PC or program counter (stores the address of current instruction), and ALU (executes current instruction). The interconnection of these simple components to form a basic datapath is illustrated in Figure 4.5. Note that the register file is written to by the output of the ALU. As in Section 4.1, the register file shown in Figure 4.6 is clocked by the RegWrite signal. Figure 4.5. Schematic high-level diagram of MIPS datapath from an implementational perspective, adapted from [Maf01]. Implementation of the datapath for I- and J-format instructions requires two more components - a data memory and a sign extender, illustrated in Figure 4.6. The data memory stores ALU results and operands, including instructions, and has two enabling inputs (MemWrite and MemRead) that cannot both be active (have a logical high value) at the same time. The data memory accepts an address and either accepts data (WriteData port if MemWrite is enabled) or outputs data (ReadData port if MemRead is enabled), at the indicated address. The sign extender adds 16 leading bits to a 16-bit word with most significant bit b, to produce a 32-bit word. In particular, the additional 16 bits have the same value as b, thus implementing sign extension in twos complement representation. Figure 4.6. Schematic diagram of Data Memory and Sign Extender, adapted from [Maf01]. 4.2.1. R-format Datapath Implementation of the datapath for R-format instructions is fairly straightforward - the register file and the ALU are all that is required. The ALU accepts its input from the DataRead ports of the register file, and the register file is written to by the ALUresult output of the ALU, in combination with the RegWrite signal. Figure 4.7. Schematic diagram of the R-format instruction datapath, adapted from [Maf01]. 4.2.2. Load/Store Datapath The load/store datapath uses instructions such as lw $t1, offset($t2), where offset denotes a memory address offset applied to the base address in register $t2. The lw instruction reads from memory and writes into register $t1. The sw instruction reads from register $t1 and writes into memory. In order to compute the memory address, the MIPS ISA specification says that we have to sign-extend the 16-bit offset to a 32-bit signed value. This is done using the sign extender shown in Figure 4.6. The load/store datapath is illustrated in Figure 4.8, and performs the following actions in the order given: 1. Register Access takes input from the register file, to implement the instruction, data, or address fetch step of the fetch-decode-execute cycle. 2. Memory Address Calculation decodes the base address and offset, combining them to produce the actual memory address. This step uses the sign extender and ALU. 3. Read/Write from Memory takes data or instructions from the data memory, and implements the first part of the execute step of the fetch/decode/execute cycle. 4. Write into Register File puts data or instructions into the register file, implementing the second part of the execute step of the fetch/decode/execute cycle. Figure 4.8. Schematic diagram of the Load/Store instruction datapath. Note that the execute step also includes writing of data back to the register file, which is not shown in the figure, for simplicity [MK98]. The load/store datapath takes operand #1 (the base address) from the register file, and sign-extends the offset, which is obtained from the instruction input to the register file. The sign-extended offset and the base address are combined by the ALU to yield the memory address, which is input to the Address port of the data memory. The MemRead signal is then activated, and the output data obtained from the ReadData port of the data memory is then written back to the Register File using its WriteData port, with RegWrite asserted.
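As a concrete illustration of the sign extender and the memory-address calculation in Step 2, here is a minimal Python sketch (the base-address and offset values are invented for the example):

```python
def sign_extend16(value16):
    """Extend a 16-bit two's-complement value to 32 bits by replicating bit 15."""
    value16 &= 0xFFFF
    if value16 & 0x8000:                 # most significant bit b = 1
        return value16 | 0xFFFF0000      # prepend sixteen 1s
    return value16                       # prepend sixteen 0s

# lw $t1, offset($t2): address = $t2 + sign_extend(offset)
base = 0x10000040                        # assumed contents of $t2
offset = -8 & 0xFFFF                     # 16-bit encoding of -8
address = (base + sign_extend16(offset)) & 0xFFFFFFFF
print(hex(address))                      # -> 0x10000038
```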
4.2.3. Branch/Jump Datapath The branch datapath (jump is an unconditional branch) uses instructions such as beq $t1, $t2, offset, where offset is a 16-bit offset for computing the branch target address via PC-relative addressing. The beq instruction reads from registers $t1 and $t2, then compares the data obtained from these registers to see if they are equal. If equal, the branch is taken. Otherwise, the branch is not taken. By taking the branch, the ISA specification means that the ALU adds a sign-extended offset to the program counter (PC). The offset is shifted left 2 bits to allow for word alignment (since 2^2 = 4, and words are comprised of 4 bytes). Thus, to jump to the target address, the lower 28 bits of the PC are replaced with the lower 26 bits of the instruction shifted left 2 bits. The branch instruction datapath is illustrated in Figure 4.9, and performs the following actions in the order given: 1. Register Access takes input from the register file, to implement the instruction fetch or data fetch step of the fetch-decode-execute cycle. 2. Calculate Branch Target - Concurrent with ALU #1's evaluation of the branch condition, ALU #2 calculates the branch target address, to be ready for the branch if it is taken. This completes the decode step of the fetch-decode-execute cycle. 3. Evaluate Branch Condition and Jump to BTA or PC+4 uses ALU #1 in Figure 4.9, to determine whether or not the branch should be taken. Jump to BTA or PC+4 uses control logic hardware to transfer control to the instruction referenced by the branch target address. This effectively changes the PC to the branch target address, and completes the execute step of the fetch-decode-execute cycle. Figure 4.9. Schematic diagram of the Branch instruction datapath. Note that, unlike the Load/Store datapath, the execute step does not include writing of results back to the register file [MK98]. The branch datapath takes operand #1 (the offset) from the instruction input to the register file, then sign-extends the offset. The sign-extended offset and the program counter (incremented by 4 bytes to reference the next instruction after the branch instruction) are combined by ALU #1 to yield the branch target address. The operands for the branch condition to evaluate are concurrently obtained from the register file via the ReadData ports, and are input to ALU #2, which outputs a one or zero value to the branch control logic. MIPS has the special feature of a delayed branch, that is, instruction Ib, which follows the branch, is always fetched, decoded, and prepared for execution. If the branch condition is false, a normal branch occurs. If the branch condition is true, then Ib is executed. One wonders why this extra work is performed - the answer is that delayed branch improves the efficiency of pipeline execution, as we shall see in Section 5. Also, the use of branch-not-taken (where Ib is executed) is sometimes the common case.
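For concreteness, the beq address arithmetic described above can be sketched in a few lines of Python (the PC and register values are invented for the example; this is an illustrative model, not the control logic of Figure 4.9):

```python
def signed16(value16):
    """Interpret a 16-bit field as a signed two's-complement integer."""
    value16 &= 0xFFFF
    return value16 - 0x10000 if value16 & 0x8000 else value16

def branch_next_pc(pc, offset16, rs_val, rt_val):
    """Next PC for `beq rs, rt, offset` (toy model of the branch datapath)."""
    # Branch target: PC + 4 plus the sign-extended offset shifted left 2 bits.
    branch_target = (pc + 4 + (signed16(offset16) << 2)) & 0xFFFFFFFF
    taken = (rs_val == rt_val)            # corresponds to the ALU's Zero output
    return branch_target if taken else (pc + 4) & 0xFFFFFFFF

# beq with equal operands and an offset of 3 words: 0x00400000 -> 0x00400010
print(hex(branch_next_pc(0x00400000, 3, 7, 7)))    # -> 0x400010
```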
4.3. Single-Cycle and Multicycle Datapaths Reading Assignments and Exercises A single-cycle datapath executes in one cycle all instructions that the datapath is designed to implement. This clearly impacts CPI in a beneficial way, namely, CPI = 1 cycle for all instructions. In this section, we first examine the design discipline for implementing such a datapath using the hardware components and instruction-specific datapaths developed in Section 4.2. Then, we discover how the performance of a single-cycle datapath can be improved using a multi-cycle implementation. 4.3.1. Single-Cycle Datapaths Let us begin by constructing a datapath with control structures taken from the results of Section 4.2. The simplest way to connect the datapath components developed in Section 4.2 is to have them all execute an instruction concurrently, in one cycle. As a result, no datapath component can be used more than once per cycle, which implies duplication of components. To make this type of design more efficient without sacrificing speed, we can share a datapath component by allowing the component to have multiple inputs and outputs selected by a multiplexer. The key to efficient single-cycle datapath design is to find commonalities among instruction types. For example, the R-format MIPS instruction datapath of Figure 4.7 and the load/store datapath of Figure 4.8 have similar register file and ALU connections. However, the following differences can also be observed: 1. The second ALU input is a register (R-format instruction) or the sign-extended lower 16 bits of the instruction (e.g., a load/store offset). 2. The value written to the register file is obtained from the ALU (R-format instruction) or memory (load/store instruction). These two datapath designs can be combined to include separate instruction and data memory, as shown in Figure 4.10. The combination requires an adder and an ALU to respectively increment the PC and execute the R-format instruction. Figure 4.10. Schematic diagram of a composite datapath for R-format and load/store instructions [MK98]. Adding the branch datapath of Figure 4.9 to the datapath illustrated in Figure 4.10 produces the augmented datapath shown in Figure 4.11. The branch instruction uses the main ALU to compare its operands and the adder computes the branch target address. Another multiplexer is required to select either the next instruction address (PC + 4) or the branch target address to be the new value for the PC. Figure 4.11. Schematic diagram of a composite datapath for R-format, load/store, and branch instructions [MK98]. ALU Control. Given the simple datapath shown in Figure 4.11, we next add the control unit. Control accepts inputs (called control signals) and generates (a) a write signal for each state element, (b) the control signals for each multiplexer, and (c) the ALU control signal. The ALU has three control signals, as shown in Table 4.1, below. The ALU is used for all instruction classes, and always performs one of the five functions in the right-hand column of Table 4.1. For branch instructions, the ALU performs a subtraction, whereas R-format instructions require one of the ALU functions. The ALU is controlled by two inputs: (1) the opcode from a MIPS instruction (six most significant bits), and (2) a two-bit control field (which Patterson and Hennessy call ALUop).
The ALUop signal denotes whether the operation should be one of the following: The output of the ALU control is one of the 3-bit control codes shown in the left-hand column of Table 4.1. In Table 4.2, we show how to set the ALU output based on the instruction opcode and the ALUop signals. Later, we will develop a circuit for generating the ALUop bits. We call this approach multi-level decoding -- main control generates ALUop bits, which are input to ALU control. The ALU control then generates the three-bit codes shown in Table 4.1. The advantage of a hierarchically partitioned or pipelined control scheme is realized in reduced hardware (several small control units are used instead of one large unit). This results in reduced hardware cost, and can in certain instances produce increased speed of control. Since the control unit is critical to datapath performance, this is an important implementational step. Recall that we need to map the two-bit ALUop field and the six-bit opcode to a three-bit ALU control code. Normally, this would require 2^(2 + 6) = 256 possible combinations, eventually expressed as entries in a truth table. However, only a few opcodes are to be implemented in the ALU designed herein. Also, the ALU is used only when ALUop = 10 (binary). Thus, we can use simple logic to implement the ALU control, as shown in terms of the truth table illustrated in Table 4.2. Table 4.2. ALU control bits as a function of ALUop bits and opcode bits [MK98]. In this table, an "X" in the input column represents a "don't-care" value, which indicates that the output does not depend on the input at the i-th bit position. The preceding truth table can be optimized and implemented in terms of gates, as shown in Section C.2 of Appendix C of the textbook. Main Control Unit. The first step in designing the main control unit is to identify the fields of each instruction and the required control lines to implement the datapath shown in Figure 4.11. Recalling the three MIPS instruction formats (R, I, and J), shown as follows: Observe that the following always apply: Additionally, we have the following instruction-specific codes due to the regularity of the MIPS instruction format: Note that the different positions for the two destination registers imply a selector (i.e., a mux) to locate the appropriate field for each type of instruction. Given these constraints, we can add to the simple datapath thus far developed instruction labels and an extra multiplexer for the WriteReg input of the register file, as shown in Figure 4.12. Figure 4.12. Schematic diagram of composite datapath for R-format, load/store, and branch instructions (from Figure 4.11) with control signals and extra multiplexer for WriteReg signal generation [MK98]. Here, we see the seven-bit control lines (six-bit opcode with one-bit WriteReg signal) together with the two-bit ALUop control signal, whose actions when asserted or deasserted are given as follows: Given only the opcode, the control unit can thus set all the control signals except PCSrc, which is only set if the instruction is beq and the Zero output of the ALU used for comparison is true. PCSrc is generated by and-ing a Branch signal from the control unit with the Zero signal from the ALU. Thus, all control signals can be set based on the opcode bits.
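The two-level decoding idea can be illustrated with a small lookup. The sketch below follows the common textbook convention (ALUop = 00 for load/store, 01 for branch, 10 for R-format, with the R-format operation chosen by the instruction's funct field); the specific bit patterns are assumed for illustration and are not copied from Table 4.2.

```python
# Sketch of two-level ALU control decoding (encodings are assumed, for illustration).

ALU_CODES = {"and": 0b000, "or": 0b001, "add": 0b010, "sub": 0b110, "slt": 0b111}

# funct field (bits 5-0 of an R-format instruction) -> ALU operation
R_FUNCT = {0b100000: "add", 0b100010: "sub", 0b100100: "and",
           0b100101: "or",  0b101010: "slt"}

def alu_control(aluop, funct):
    """Main control supplies ALUop; ALU control expands it to a 3-bit code."""
    if aluop == 0b00:                 # lw/sw: address arithmetic
        return ALU_CODES["add"]
    if aluop == 0b01:                 # beq: compare by subtracting
        return ALU_CODES["sub"]
    if aluop == 0b10:                 # R-format: the funct field selects the operation
        return ALU_CODES[R_FUNCT[funct & 0b111111]]
    raise ValueError("unused ALUop encoding")

print(format(alu_control(0b10, 0b100010), "03b"))   # R-format sub      -> 110
print(format(alu_control(0b00, 0), "03b"))          # lw/sw address add -> 010
```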
The resultant datapath and its signals are shown in detail in Figure 4.13.

Figure 4.13. Schematic diagram of composite datapath for R-format, load/store, and branch instructions (from Figure 4.12) with control signals illustrated in detail [MK98].

We next examine the functionality of the datapath illustrated in Figure 4.13 for the three major types of instructions, then discuss how to augment the datapath for a new type of instruction.

4.3.2. Datapath Operation

Recall that there are three MIPS instruction formats -- R, I, and J. Each instruction causes slightly different functionality to occur along the datapath, as follows.

R-format Instruction. Execution of an R-format instruction (e.g., add $t1, $t0, $t1) using the datapath developed in Section 4.3.1 involves the following steps:

1. Fetch instruction from instruction memory and increment PC
2. Input registers (e.g., $t0 and $t1) are read from the register file
3. ALU operates on data from the register file, using the funct field of the MIPS instruction (Bits 5-0) to help select the ALU operation
4. Result from ALU written into register file using Bits 15-11 of the instruction to select the destination register (e.g., $t1).

Note that this implementational sequence is actually combinational, because of the single-cycle assumption. Since the datapath operates within one clock cycle, the signals stabilize approximately in the order shown in Steps 1-4, above.

Load/Store Instruction. Execution of a load/store instruction (e.g., lw $t1, offset($t2)) using the datapath developed in Section 4.3.1 involves the following steps:

1. Fetch instruction from instruction memory and increment PC
2. Read register value (e.g., base address in $t2) from the register file
3. ALU adds the base address from register $t2 to the sign-extended lower 16 bits of the instruction (i.e., offset)
4. Result from ALU is applied as an address to the data memory
5. Data retrieved from the memory unit is written into the register file, where the register index is given by $t1 (Bits 20-16 of the instruction).

Branch Instruction. Execution of a branch instruction (e.g., beq $t1, $t2, offset) using the datapath developed in Section 4.3.1 involves the following steps:

1. Fetch instruction from instruction memory and increment PC
2. Read registers (e.g., $t1 and $t2) from the register file. The adder sums PC + 4 and the sign-extended lower 16 bits of offset shifted left by two bits, thereby producing the branch target address (BTA).
3. ALU subtracts the contents of $t2 from the contents of $t1. The Zero output of the ALU directs which result (PC + 4 or BTA) to write as the new PC.

Final Control Design. Now that we have determined the actions that the datapath must perform to compute the three types of MIPS instructions, we can use the information in Table 4.3 to describe the control logic in terms of a truth table. This truth table (Table 4.3) is optimized as shown in Section C.2 of Appendix C of the textbook to yield the datapath control circuitry.

Table 4.3. Datapath control signals as a function of the instruction opcode bits [MK98].
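The main-control settings that Table 4.3 summarizes can also be sketched as a small lookup table. The following Python fragment is an illustration only; the signal values shown are the customary single-cycle settings for the four instruction classes (None marks a don't-care), and PCSrc is formed by and-ing Branch with the ALU's Zero output, as described above:

    # A sketch of the main control truth table summarized by Table 4.3.
    # Values are the customary single-cycle settings; None means "don't care".
    MAIN_CONTROL = {
        "R-format": dict(RegDst=1, ALUSrc=0, MemtoReg=0, RegWrite=1,
                         MemRead=0, MemWrite=0, Branch=0, ALUop=0b10),
        "lw":       dict(RegDst=0, ALUSrc=1, MemtoReg=1, RegWrite=1,
                         MemRead=1, MemWrite=0, Branch=0, ALUop=0b00),
        "sw":       dict(RegDst=None, ALUSrc=1, MemtoReg=None, RegWrite=0,
                         MemRead=0, MemWrite=1, Branch=0, ALUop=0b00),
        "beq":      dict(RegDst=None, ALUSrc=0, MemtoReg=None, RegWrite=0,
                         MemRead=0, MemWrite=0, Branch=1, ALUop=0b01),
    }

    def pc_src(instr_class, alu_zero):
        """PCSrc = Branch AND Zero: select the BTA only for a taken beq."""
        return bool(MAIN_CONTROL[instr_class]["Branch"] and alu_zero)

    assert pc_src("beq", alu_zero=True)            # taken branch selects the BTA
    assert not pc_src("R-format", alu_zero=True)   # other classes always select PC + 4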
4.3.3. Extended Control for New Instructions

The jump instruction provides a useful example of how to extend the single-cycle datapath developed in Sections 4.3.1 and 4.3.2 to support new instructions. Jump resembles branch (branch can be viewed as a conditional form of jump), but computes the PC differently and is unconditional. As with the branch target address, the lowest two bits of the jump target address (JTA) are always zero, to preserve word alignment. The next 26 bits are taken from the 26-bit immediate field of the jump instruction (the remaining six bits of the instruction are reserved for the opcode). The upper four bits of the JTA are taken from the upper four bits of the next instruction address (PC + 4). Thus, the JTA computed by the jump instruction is formatted as follows:

The jump is implemented in hardware by adding a control circuit to Figure 4.13, which consists of:

The resulting augmented datapath is shown in Figure 4.14.

Figure 4.14. Schematic diagram of composite datapath for R-format, load/store, branch, and jump instructions, with control signals labelled [MK98].

4.3.4. Limitations of the Single-Cycle Datapath

The single-cycle datapath is not used in modern processors, because it is inefficient. The critical path (the longest propagation sequence through the datapath) is five components for the load instruction. The cycle time tc is limited by the settling time ts of these components. For a circuit with no feedback loops, tc > 5ts. In practice, tc = 5kts, with a large proportionality constant k, due to feedback loops, delayed settling caused by circuit noise, etc.

Additionally, as shown in the table on p. 374 of the textbook, it is possible to compute the required execution time for each instruction class from the critical path information. The result is that the Load instruction takes 5 units of time, while the Store and R-format instructions take 4 units of time. All the other types of instructions that the datapath is designed to execute run faster, requiring three units of time.

The problem of penalizing addition, subtraction, and comparison operations to accommodate loads and stores leads one to ask whether multiple cycles of a much faster clock could be used for each part of the fetch-decode-execute cycle. In practice, this technique is employed in CPU design and implementation, as discussed in the following sections on multicycle datapath design. In Section 5, we will show that datapath actions can be interleaved in time to yield a potentially fast implementation of the fetch-decode-execute cycle that is formalized in a technique called pipelining.

4.3.5. Multicycle Datapath Design

In Sections 4.3.1 through 4.3.4, we designed a single-cycle datapath by (1) grouping instructions into classes, (2) decomposing each instruction class into constituent operations, and (3) deriving datapath components for each instruction class that implemented these operations. In this section, we use the single-cycle datapath components to create a multicycle datapath, where each step in the fetch-decode-execute sequence takes one cycle. This approach has two advantages over the single-cycle datapath:

1. Each functional unit (e.g., Register File, Data Memory, ALU) can be used more than once in the course of executing an instruction, which saves hardware (and, thus, reduces cost); and

2. Each instruction step takes one cycle, so different instructions have different execution times. In contrast, the single-cycle datapath that we designed previously required every instruction to take one cycle, so all the instructions move at the speed of the slowest.

We next consider the basic differences between single-cycle and multicycle datapaths.

Cursory Analysis. Figure 4.15 illustrates a simple multicycle datapath. Observe the following differences between a single-cycle and multicycle datapath:

Figure 4.15. Simple multicycle datapath with buffering registers (Instruction register, Memory data register, A, B, and ALUout) [MK98].
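The motivation for the multicycle approach can be illustrated numerically using the relative instruction times quoted in Section 4.3.4 (load = 5 units; store and R-format = 4; branch and jump = 3). The following Python sketch uses an invented instruction mix and an idealized clock, so it is an illustration of the argument above rather than a real performance model:

    # Relative instruction times from Section 4.3.4; the mix below is hypothetical.
    TIME_UNITS = {"load": 5, "store": 4, "rformat": 4, "branch": 3, "jump": 3}
    MIX        = {"load": 0.25, "store": 0.10, "rformat": 0.45,
                  "branch": 0.15, "jump": 0.05}               # invented frequencies

    single_cycle = max(TIME_UNITS.values())                   # every instruction pays the worst case
    multicycle   = sum(MIX[k] * TIME_UNITS[k] for k in MIX)   # each class pays only for its own steps

    print(f"single-cycle:         {single_cycle} units per instruction")
    print(f"multicycle (average): {multicycle:.2f} units per instruction")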
Note that there are two types of state elements (e.g., memory, registers), which are:

1. Programmer-visible state elements (the register file, PC, and memory), in which data is stored that is used by subsequent instructions (in a later clock cycle); and

2. Additional state elements (buffer registers), in which data is stored that is used in a later clock cycle of the same instruction.

Thus, the additional (buffer) registers determine (a) what functional units will fit into a given clock cycle and (b) the data required for later cycles involved in executing the current instruction. In the simple implementation presented herein, we assume for purposes of illustration that each clock cycle can accommodate one and only one of the following operations:

New Registers. As a result of buffering, data produced by the memory, register file, or ALU is saved for use in a subsequent cycle. The following temporary registers are important to the multicycle datapath implementation discussed in this section:

The IR and MDR are distinct registers because some operations require both instruction and data in the same clock cycle. Since all registers except the IR hold data only between two adjacent clock cycles, these registers do not need a write control signal. In contrast, the IR holds an instruction until it is executed (multiple clock cycles) and therefore requires a write control signal to protect the instruction from being overwritten before its execution has been completed.

New Muxes. We also need to add new multiplexers and expand existing ones, to implement sharing of functional units. For example, we need to select the memory address from either the PC (for an instruction fetch) or ALUout (for a load/store data access). The muxes also route to one ALU the many inputs and outputs that were distributed among the several ALUs of the single-cycle datapath. Thus, we make the following additional changes to the single-cycle datapath:

The details of these muxes are shown in Figure 4.16. By adding a few registers (buffers) and muxes (inexpensive widgets), we halve the number of memory units (expensive hardware) and eliminate two adders (more expensive hardware).

New Control Signals. The datapath shown in Figure 4.16 is multicycle, since it uses multiple cycles per instruction. As a result, it will require different control signals than the single-cycle datapath, as follows:

It is advantageous that the ALU control from the single-cycle datapath can be used as-is for the multicycle datapath ALU control. However, some modifications are required to support branches and jumps. We describe these changes as follows.

Branch and Jump Instruction Support. To implement branch and jump instructions, one of three possible values is written to the PC:

1. ALU output = PC + 4, to get the next instruction during the instruction fetch step (to do this, PC + 4 is written directly to the PC)

2. Register ALUout, which stores the computed branch target address.

3. The lower 26 bits of the IR (the jump target field), shifted left by two bits (to preserve alignment) and concatenated with the upper four bits of PC + 4, to form the jump target address.

The PC is written unconditionally (jump instruction) or conditionally (branch), which implies two control signals -- PCWrite and PCWriteCond.
From these two signals and the Zero output of the ALU, we derive the PCWrite control signal, via the following logic equation:

    PCWriteControl = (ALUZero and PCWriteCond) or PCWrite,

where (a) ALUZero indicates whether the two operands of the beq instruction are equal, and (b) the result of (ALUZero and PCWriteCond) determines whether the PC should be written during a conditional branch. We call the latter the branch-taken condition. Figure 4.16 shows the resultant multicycle datapath and control unit with the new muxes and corresponding control signals. Table 4.4 lists the control signals and their functions.

4.3.6. Multicycle Datapath and Instruction Execution

Given the datapath illustrated in Figure 4.16, we examine instruction execution in each cycle of the datapath. The implementational goal is balancing of the work performed per clock cycle, to minimize the average time per cycle across all instructions. For example, each step would contain one of the following:

Thus, the cycle time will be equal to the maximum time required for any of the preceding operations.

Note: Since (a) the datapath is designed to be edge-triggered (reference Section 4.1.1) and (b) the outputs of the ALU, register file, or memory are stored in dedicated registers (buffers), we can continue to read the value stored in a dedicated register. The new value, output from the ALU, register file, or memory, is not available in the register until the next clock cycle.

Figure 4.16. MIPS multicycle datapath [MK98].

Table 4.4. Multicycle datapath control signals and their functions [MK98].

In the multicycle datapath, all operations within a clock cycle occur in parallel, but successive steps within a given instruction operate sequentially. Several implementational issues are present that do not confound this view, but should be discussed. One must distinguish between (a) reading/writing the PC or one of the buffer registers, and (b) reads/writes to the register file. Namely, I/O to the PC or buffers is part of one clock cycle, i.e., we get this essentially "for free" because of the clocking scheme and hardware design. In contrast, the register file has more complex hardware (as shown in Section 4.1.2) and requires a dedicated clock cycle for its circuitry to stabilize.

We next examine multicycle datapath execution in terms of the fetch-decode-execute sequence.

Instruction Fetch. In this first cycle, which is common to all instructions, the datapath fetches an instruction from memory and computes the new PC (the address of the next instruction in the program sequence), as represented by the following pseudocode:

    IR = Memory[PC]      # Put contents of Memory[PC] in the Instruction Register
    PC = PC + 4          # Increment the PC by 4 to preserve alignment

where IR denotes the instruction register. The PC is sent (via control circuitry) as an address to memory. The memory hardware performs a read operation and control hardware transfers the instruction at Memory[PC] into the IR, where it is stored until the next instruction is fetched. Then, the ALU increments the PC by four to preserve word alignment. The incremented (new) PC value is stored back into the PC register by setting PCSource = 00₂ and asserting PCWrite. Fortunately, incrementing the PC and performing the memory read are concurrent operations, since the new PC is not required (at the earliest) until the next clock cycle.

Reading Assignment: The exact sequence of operations is described on p. 385 of the textbook.
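The fetch step and the PC write-enable logic just described can be summarized in a short Python sketch; the dict-based "state", the memory contents, and the example instruction encoding are invented for illustration:

    # Sketch of the instruction fetch step and the PC write-enable logic above.
    def fetch(state, memory):
        """IR = Memory[PC]; PC = PC + 4 (both effects visible at the next clock edge)."""
        state["IR"] = memory[state["PC"]]    # read the instruction word
        state["PC"] = state["PC"] + 4        # increment the PC to preserve word alignment

    def pc_write_control(alu_zero, pc_write_cond, pc_write):
        """PCWriteControl = (ALUZero and PCWriteCond) or PCWrite."""
        return (alu_zero and pc_write_cond) or pc_write

    state  = {"PC": 0x0040_0000, "IR": 0}
    memory = {0x0040_0000: 0x8CA9_0004}      # a hypothetical encoded lw instruction
    fetch(state, memory)
    assert state["PC"] == 0x0040_0004
    assert pc_write_control(alu_zero=False, pc_write_cond=True, pc_write=False) is False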
Instruction Decode and Data Fetch. Included in the multicycle datapath design is the assumption that the actual opcode to be executed is not known prior to the instruction decode step. This is reasonable, since the new instruction is not available until completion of instruction fetch and has thus not yet been decoded. As a result of not knowing what operation the ALU is to perform in the current instruction, the datapath must execute only actions that are:

Therefore, given the rs and rt fields of the MIPS instruction format (per Figure 2.7), we can suppose (harmlessly) that the next instruction will be R-format. We can thus read the operands corresponding to rs and rt from the register file. If we don't need one or both of these operands, that is not harmful. Otherwise, the register file read operation will place them in buffer registers A and B, which is also not harmful.

Another action the datapath can perform is computation of the branch target address using the ALU, since this is the instruction decode step and the ALU is not yet needed for instruction execution. If the instruction that we are decoding in this step is not a branch, then no harm is done -- the BTA is stored in ALUout and nothing further happens to it.

We can perform these preparatory actions because of the regularity of the MIPS instruction formats. The result is represented in pseudocode, as follows:

    A = RegFile[IR[25:21]]                       # First operand = Bits 25-21 of instruction
    B = RegFile[IR[20:16]]                       # Second operand = Bits 20-16 of instruction
    ALUout = PC + (SignExtend(IR[15:0]) << 2)    # Compute the BTA

where "x << n" denotes x shifted left by n bits.

Reading Assignment: The exact sequence of low-level operations is described on p. 384 of the textbook.

Instruction Execute, Address Computation, or Branch Completion. In this cycle, we know what the instruction is, since decoding was completed in the previous cycle. The instruction opcode determines the datapath operation, as in the single-cycle datapath. The ALU operates upon the operands prepared in the decode/data-fetch step, performing one of the following actions:

Memory Access or R-format Instruction Completion. In this cycle, a load or store instruction accesses memory and an R-format instruction writes its result (which appears at ALUout at the end of the previous cycle), as follows:

    MDR = Memory[ALUout]       # Load
    Memory[ALUout] = B         # Store

where MDR denotes the memory data register.

Reading Assignment: The control actions for load/store instructions are discussed on p. 388 of the textbook.

For an R-format completion, we have

    Reg[IR[15:11]] = ALUout    # Write ALU result to register file

Memory Read Completion. In the final step of a load instruction, the data to be loaded was stored in the MDR in the previous cycle and is thus available for this cycle. The rt field of the MIPS instruction format (Bits 20-16) supplies the destination register number, which is applied to the write-register input of the register file, together with RegDst = 0 and an asserted RegWrite signal.

From the preceding sequences as well as their discussion in the textbook, we are prepared to design a finite-state controller, as shown in the following section.

4.4. Finite State Control

Reading Assignments and Exercises

In the single-cycle datapath control, we designed control hardware using a set of truth tables based on the control signals activated for each instruction class. However, this approach must be modified for the multicycle datapath, which has the additional dimension of time due to the stepwise execution of instructions.
Thus, the multicycle datapath control is dependent on the current step involved in executing an instruction, as well as on the next step.

There are two alternative techniques for implementing multicycle datapath control. First, a finite-state machine (FSM) or finite state control (FSC) predicts the actions appropriate for the datapath's next computational step. This prediction is based on (a) the status and control information specific to the datapath's current step and (b) the actions to be performed in the next step. A second technique, called microprogramming, uses a programmatic representation to implement control, as discussed in Section 4.5. Appendix C of the textbook shows how these representations are translated into hardware.

4.4.1. Finite State Machine

An FSM consists of a set of states together with directions that tell the FSM how to change states. The following features are important:

Implementationally, we assume that all outputs not explicitly asserted are deasserted. Additionally, all multiplexer controls are explicitly specified if and only if they pertain to the current and next states. A simple example of an FSM is given in Appendix B of the textbook.

4.4.2. Finite State Control

The FSC is designed for the multicycle datapath by considering the five steps of instruction execution given in Section 4.3, namely:

1. Instruction fetch
2. Instruction decode and data fetch
3. ALU operation
4. Memory access or R-format instruction completion
5. Memory access completion

Each of these steps takes one cycle, by definition of the multicycle datapath. Also, each step stores its results in temporary (buffer) registers such as the IR, MDR, A, B, and ALUout. Each state in the FSM will thus (a) occupy one cycle in time, and (b) store its results in a temporary (buffer) register. From the discussion of Section 4.3, observe that Steps 1 and 2 are identical for every instruction, but Steps 3-5 differ, depending on the instruction format. Also note that after completion of an instruction, the FSC returns to its initial state (Step 1) to fetch another instruction, as shown in Figure 4.17.

Figure 4.17. High-level (abstract) representation of the finite-state machine for the multicycle datapath finite-state control. Figure numbers refer to figures in the textbook [Pat98,MK98].

Let us begin our discussion of the FSC by expanding Steps 1 and 2, where State 0 (the initial state) corresponds to Step 1.

Instruction Fetch and Decode. Figure 4.18 shows the FSM representation for instruction fetch and decode. The control signals asserted in each state are shown within the circle that denotes a given state. The edges (lines or arrows) between states are labelled with the conditions that must be fulfilled for the illustrated transition between states to occur. Patterson and Hennessy call the process of branching to different states decoding, which depends on the instruction class after State 1 (i.e., Step 2, as listed above).

Figure 4.18. Representation of finite-state control for the instruction fetch and decode states of the multicycle datapath. Figure numbers refer to figures in the textbook [Pat98,MK98].

Memory Reference. The memory reference portion of the FSC is shown in Figure 4.19. Here, State 2 computes the memory address by setting the ALU input muxes to pass the A register (base address) and the sign-extended lower 16 bits of the offset to the ALU.
After address computation, memory read/write requires two states:

In both states, the memory address is forced to equal ALUout, by setting the control signal IorD = 1.

Figure 4.19. Representation of finite-state control for the memory reference states of the multicycle datapath. Figure numbers refer to figures in the textbook [Pat98,MK98].

When State 5 (the store's memory access) completes, control is transferred to State 0. When State 3 (the load's memory access) completes, the datapath must finish the load operation, which is accomplished by transferring control to State 4. There, MemtoReg = 1, RegDst = 0, and the MDR contents are written to the register file. The next state is State 0.

R-format Execution. To implement R-format instructions, the FSC uses two states, one for execution (Step 3) and another for R-format completion (Step 4), per Figure 4.20. State 6 asserts ALUSrcA and sets ALUSrcB = 00₂, so that the ALU's inputs are taken from buffer registers A and B (which hold the register file outputs read during the decode step). The ALUop = 10₂ setting causes the ALU control to use the instruction's funct field to set the ALU control signals that implement the designated ALU operation. State 7 causes (a) the register file to be written (assert RegWrite), (b) the rd field of the instruction to supply the number of the destination register (assert RegDst), and (c) ALUout to be selected as the value written back to the register file as the result of the ALU operation (by deasserting MemtoReg).

Figure 4.20. Representation of finite-state control for the R-format instruction execution states of the multicycle datapath. Figure numbers refer to figures in the textbook [Pat98,MK98].

Branch Control. Since branches complete during Step 3, only one new state is needed. In State 8, (a) the control signals that cause the ALU to compare the contents of its A and B input registers are set (i.e., ALUSrcA = 1, ALUSrcB = 00₂, ALUop = 01₂), and (b) the PC is written conditionally (by setting PCSource = 01₂ and asserting PCWriteCond). Note that setting ALUop = 01₂ forces a subtraction, hence only the beq instruction can be implemented this way.

Figure 4.21. Representation of finite-state control for (a) branch and (b) jump instruction-specific states of the multicycle datapath. Figure numbers refer to figures in the textbook [Pat98,MK98].

Jump Instruction. Similar to branch, the jump instruction requires only one state (#9) to complete execution. Here, the PC is written by asserting PCWrite. The value written to the PC is the lower 26 bits of the IR shifted left by two bits (so that the lowest two bits equal 00₂), concatenated with the upper four bits of the PC. This is done by setting PCSource = 10₂.

4.4.3. FSC and Multicycle Datapath Performance

The composite FSC is shown in Figure 4.22, which was constructed by composing Figures 4.18 through 4.21.

Figure 4.22. Representation of the composite finite-state control for the MIPS multicycle datapath [MK98].

When computing the performance of the multicycle datapath, we use this FSM representation to determine the critical path (the maximum number of states encountered) for each instruction type, with the following results:

Since each state corresponds to a clock cycle (according to the design assumption of the FSC controller in Section 4.4.2), we have the following expression for the CPI of the multicycle datapath:

    CPI = [#Loads · 5 + #Stores · 4 + #ALU-instr's · 4 + #Branches · 3 + #Jumps · 3] / (Total Number of Instructions)

Reading Assignment: Know in detail the example computation of CPI for the multicycle datapath, beginning on p. 397 of the textbook.
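The CPI expression above is easy to evaluate for any instruction mix. The short Python sketch below uses an invented mix; the counts are hypothetical and are not the textbook's gcc data:

    # States (cycles) per instruction class, taken from the composite FSC above.
    STATES_PER_CLASS = {"load": 5, "store": 4, "alu": 4, "branch": 3, "jump": 3}

    def multicycle_cpi(counts):
        """CPI = sum over classes of (count * cycles per class) / total instruction count."""
        total  = sum(counts.values())
        cycles = sum(counts[c] * STATES_PER_CLASS[c] for c in counts)
        return cycles / total

    counts = {"load": 220, "store": 110, "alu": 490, "branch": 160, "jump": 20}  # hypothetical mix
    print(f"CPI = {multicycle_cpi(counts):.2f}")   # about 4.0 for this invented mix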
The textbook example shows that the CPI for the gcc benchmark is 4.02, a savings of approximately 20 percent over the worst-case CPI (equal to 5 cycles for all instructions, based on the single-cycle datapath design constraint that all instructions run at the speed of the slowest).

4.4.4. Implementation of Finite-State Control

The FSC can be implemented in hardware using a read-only memory (ROM) or a programmable logic array (PLA), as discussed in Section C.3 of the textbook. Combinational logic implements the transition function, and a state register stores the current state of the machine (e.g., States 0 through 9 in the development of Section 4.4.2). The inputs are the current state and the IR opcode bits, and the outputs are the next state and the various datapath control signals (e.g., PCSource, ALUop, etc.).

We next consider how the preceding function can be implemented using the technique of microprogramming.

4.5. Microprogrammed Control

Reading Assignments and Exercises

While the finite state control for the multicycle datapath was relatively easy to design, the graphical approach shown in Section 4.4 is limited to small control systems. We implemented only five MIPS instruction types, but the actual MIPS instruction set has over 100 different instructions. Recall that the FSC of Section 4.4 required 10 states for only five instruction types, and had CPI ranging from three to five. Now, observe that MIPS has not only over 100 instructions, but CPI ranging from one to 20 cycles. A control system for a realistic instruction set (even a RISC one) would have hundreds or thousands of states, which could not be represented conveniently using the graphical technique of Section 4.4.

However, it is possible to develop a convenient technique of control system design and programming by using abstractions from programming language practice. This technique, called microprogramming, helps make control design more tractable and also helps improve correctness if good software engineering practice is followed. By using very low-level instructions (called microinstructions) that set the values of datapath control signals, one can write microprograms that implement a processor's control system(s). To do this, one specifies:

We consider these issues, as follows.

4.5.1. Microinstruction Format

A microinstruction is an abstraction of low-level control that is used to program the control logic hardware. The microinstruction format should be simple, and should discourage or prohibit inconsistency. (An inconsistent microinstruction requires a given control signal to be set to two different values simultaneously, which is physically impossible.) The implementation of each microinstruction should, therefore, make each field specify a set of nonoverlapping values. Signals that are never asserted concurrently can thus share the same field. Table 4.5 illustrates how this is realized in MIPS, using seven fields. The first six fields control the datapath, while the last field controls the microinstruction sequencing (deciding which microinstruction will be executed next).

Table 4.5. MIPS microinstruction format [MK98].

In hardware, microinstructions are usually stored in a ROM or a PLA (per the descriptions in Appendices B and C of the textbook). The microinstructions are usually referenced by sequential addresses to simplify sequencing. The sequencing process can have one of the following three modes:

1. Incrementation, by which the address of the current microinstruction is incremented to obtain the address of the next microinstruction.
This is indicated by the value Seq in the Sequencing field of Table 4.5.

2. Branching, to the microinstruction that initiates execution of the next MIPS instruction. This is implemented by the value Fetch in the Sequencing field.

3. Control-directed choice, where the next microinstruction is chosen based on control input. We call this operation a dispatch. It is implemented by one or more address tables (similar to a jump table) called dispatch tables. The hardware implementation of dispatch tables is discussed in Section C.5 (Appendix C) of the textbook.

In the current subset of MIPS whose multicycle datapath we have been implementing, we need two dispatch tables, one each for State 1 and State 2. The use of dispatch table number i is indicated in the microinstruction by putting Dispatch i in the Sequencing field.

Table 4.6 summarizes the allowable values for each field of the microinstruction and the effect of each value.

Table 4.6. MIPS microinstruction field values and functionality [MK98].

Label
  Any string -- Labels used to control sequencing, per p. 403 of the textbook

ALU control
  Add -- ALU performs an addition operation
  Subt -- ALU performs a subtraction operation
  Func code -- The instruction's funct field determines the ALU operation

SRC1
  PC -- The PC is the first ALU input
  A -- Buffer register A is the first ALU input

SRC2
  B -- Buffer register B is the second ALU input
  4 -- The constant 4 is the second ALU input (for PC + 4)
  Extend -- The output of the sign-extension module is the second ALU input
  Extshft -- The sign-extended immediate, shifted left two bits, is the second ALU input

Register control
  Read -- Read two registers, using the rs and rt fields of the current instruction, putting the data into buffers A and B
  Write ALU -- Write to the register file, using the rd field of the instruction register as the register number and the contents of ALUout as the data
  Write MDR -- Write to the register file, using the rt field of the instruction register as the register number and the contents of the MDR as the data

Memory
  Read PC -- Read memory using the PC as the address, writing the result into the IR and the MDR (implements instruction fetch)
  Read ALU -- Read memory using ALUout as the address, writing the result into the MDR
  Write ALU -- Write to memory using the contents of ALUout as the address and the contents of buffer register B as the data

PCWrite control
  ALU -- Write the output of the ALU into the PC register
  ALUout-cond -- If the ALU's Zero output is high, write the contents of ALUout into the PC register
  Jump address -- Write the PC with the jump address from the instruction

Sequencing
  Seq -- Choose the next microinstruction sequentially
  Fetch -- Go to the first microinstruction to begin a new MIPS instruction
  Dispatch i -- Dispatch using the ROM specified by i (where i = 1 or 2)

In practice, the microinstructions are input to a microassembler, which checks for inconsistencies. Detected inconsistencies are flagged and must be corrected prior to hardware implementation.

4.5.2. Microprogramming the Datapath Control

In this section, we use the fetch-decode-execute sequence that we developed for the multicycle datapath to design the microprogrammed control. First, we observe that sometimes a microinstruction might have a blank field. This is permitted when:

We can now create the microprogram in stepwise fashion.

Instruction Fetch and Decode, Data Fetch. Each instruction execution first fetches the instruction, decodes it, and computes both the sequential PC and the branch target PC (if applicable).
The two microinstructions are given by:

    Label      ALU control   SRC1   SRC2      Register control   Memory     PCWrite   Sequencing
    Fetch      Add           PC     4         ---                Read PC    ALU       Seq
    ---        Add           PC     Extshft   Read               ---        ---       Dispatch 1

where "---" denotes a blank field. In the first microinstruction, the instruction is fetched from memory (Read PC) while the ALU computes PC + 4, which is written back into the PC. In the second microinstruction, the register file is read into buffers A and B, the ALU computes the branch target address into ALUout, and sequencing dispatches (via Dispatch Table 1) on the instruction class.

Dispatch Tables. Patterson and Hennessy treat the dispatch table as a case statement that uses the opcode field and dispatch table i to select one of Ni different labels. For example, in Dispatch Table #1 (i = 1, Ni = 4) we have the label Mem1 for memory reference instructions, Rformat1 for arithmetic and logical instructions, Beq1 for conditional branches, and Jump1 for unconditional branches. Each of these labels points to a different microinstruction sequence that can be thought of as a kind of subprogram. Each microcode sequence can be thought of as a small utility that implements the desired capability of specifying hardware control signals.

Memory Reference Instructions. Three microinstructions suffice to implement memory access in terms of a MIPS load instruction: (1) memory address computation, (2) memory read, and (3) register file write, as follows:

    Label      ALU control   SRC1   SRC2     Register control   Memory     PCWrite   Sequencing
    Mem1       Add           A      Extend   ---                ---        ---       Dispatch 2
    LW2        ---           ---    ---      ---                Read ALU   ---       Seq
    ---        ---           ---    ---      Write MDR          ---        ---       Fetch

The details of each microinstruction are given on pp. 405-406 of the textbook.

R-format Execution. R-format instruction execution requires two microinstructions: (1) the ALU operation, labelled Rformat1 for dispatching; and (2) the write to the register file, as follows:

    Label      ALU control   SRC1   SRC2   Register control   Memory   PCWrite   Sequencing
    Rformat1   Func code     A      B      ---                ---      ---       Seq
    ---        ---           ---    ---    Write ALU          ---      ---       Fetch

The details of each microinstruction are given on p. 406 of the textbook.

Branch and Jump Execution. Since we assume that the preceding microinstruction computed the BTA, the microprogram for a conditional branch requires only the following microinstruction:

    Label      ALU control   SRC1   SRC2   Register control   Memory   PCWrite       Sequencing
    Beq1       Subt          A      B      ---                ---      ALUout-cond   Fetch

Similarly, only one microinstruction is required to implement a Jump instruction:

    Label      ALU control   SRC1   SRC2   Register control   Memory   PCWrite        Sequencing
    Jump1      ---           ---    ---    ---                ---      Jump address   Fetch

Implementational details are given on p. 407 of the textbook.

The composite microprogram is therefore given by the following ten microinstructions, assembled from the sequences above:

    Label      ALU control   SRC1   SRC2      Register control   Memory      PCWrite        Sequencing
    Fetch      Add           PC     4         ---                Read PC     ALU            Seq
    ---        Add           PC     Extshft   Read               ---         ---            Dispatch 1
    Mem1       Add           A      Extend    ---                ---         ---            Dispatch 2
    LW2        ---           ---    ---       ---                Read ALU    ---            Seq
    ---        ---           ---    ---       Write MDR          ---         ---            Fetch
    SW2        ---           ---    ---       ---                Write ALU   ---            Fetch
    Rformat1   Func code     A      B         ---                ---         ---            Seq
    ---        ---           ---    ---       Write ALU          ---         ---            Fetch
    Beq1       Subt          A      B         ---                ---         ALUout-cond    Fetch
    Jump1      ---           ---    ---       ---                ---         Jump address   Fetch

Here, we have added the SW2 microinstruction to illustrate the final step of the store instruction. Observe that these ten microinstructions correspond directly to the ten states of the finite-state control developed in Section 4.4. In more complex machines, microprogram control can comprise tens or hundreds of thousands of microinstructions, with special-purpose registers used to store intermediate data.
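To see how the Sequencing field drives control flow, the following Python sketch (an illustration, not the hardware discussed in the next subsection) represents the composite microprogram as a table and steps through it using the Seq, Fetch, and Dispatch i values; the dispatch-table contents follow the Dispatch Tables discussion above.

    # The composite microprogram as data; only the Sequencing field is interpreted
    # here -- in hardware, the remaining fields would drive the datapath control lines.
    MICROPROGRAM = [
        {"label": "Fetch",    "seq": "Seq"},          # instruction fetch
        {"label": None,       "seq": "Dispatch 1"},   # decode / data fetch
        {"label": "Mem1",     "seq": "Dispatch 2"},   # memory address computation
        {"label": "LW2",      "seq": "Seq"},          # memory read
        {"label": None,       "seq": "Fetch"},        # load write-back (Write MDR)
        {"label": "SW2",      "seq": "Fetch"},        # memory write
        {"label": "Rformat1", "seq": "Seq"},          # ALU operation
        {"label": None,       "seq": "Fetch"},        # R-format write-back (Write ALU)
        {"label": "Beq1",     "seq": "Fetch"},        # conditional branch
        {"label": "Jump1",    "seq": "Fetch"},        # jump
    ]
    LABELS   = {m["label"]: i for i, m in enumerate(MICROPROGRAM) if m["label"]}
    DISPATCH = {1: {"lw": "Mem1", "sw": "Mem1", "rformat": "Rformat1",
                    "beq": "Beq1", "j": "Jump1"},
                2: {"lw": "LW2", "sw": "SW2"}}

    def next_address(addr, instr_class):
        """Apply the Sequencing field of the microinstruction at address 'addr'."""
        seq = MICROPROGRAM[addr]["seq"]
        if seq == "Seq":
            return addr + 1                       # next sequential microinstruction
        if seq == "Fetch":
            return LABELS["Fetch"]                # begin a new MIPS instruction
        table = int(seq.split()[1])               # "Dispatch i"
        return LABELS[DISPATCH[table][instr_class]]

    # A load walks Fetch -> decode -> Mem1 -> LW2 -> write-back -> Fetch.
    addr = LABELS["Fetch"]
    for _ in range(5):
        addr = next_address(addr, "lw")
    assert addr == LABELS["Fetch"]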
4.5.3. Implementing a Microprogram

It is useful to think of a microprogram as a textual representation of a finite-state machine. Thus, a microprogram could be implemented similarly to the FSC that we developed in Section 4.4, using a PLA to encode the sequencing function and the main control. However, it is often useful to store the control function in a ROM and implement the sequencing function in some other way. Typically, the sequencer uses an incrementer to choose the next control instruction. Here, the microcode storage determines the values of the datapath control lines and the technique for selecting the next state. The address select logic contains the dispatch tables (in ROMs or PLAs) and determines the next microinstruction to execute, under control of the sequencing information. This technique is preferred, since it substitutes a simple counter for more complex address control logic, which is especially efficient if the microinstructions have little branching.

Using a ROM, the microcode can be stored in its own memory and is addressed by the microprogram counter, much as regular program instructions are addressed by an instruction sequencer. It is interesting to note that this is how microprogramming actually got started, by making the ROM and counter very fast. This represented a great advance over using slower main memory for microprogram storage. Today, however, advances in cache technology make a separate microprogram memory an obsolete development, as it is easier to store the microprogram in main memory and page the parts of it that are needed into cache, where retrieval is fast and uses no extra hardware.

4.5.4. Exception Handling

If control design were not hard enough, we also have to deal with the very difficult problem of implementing exceptions and interrupts, which are defined as follows:

In this discussion, we follow Patterson and Hennessy's convention, for simplicity: an interrupt is an externally caused event, and an exception is any other event that causes unexpected control flow in a program. An interesting comparison of this terminology for different processors and manufacturers is given on pp. 410-411 of the textbook. In this section, we discuss the control design required to handle two types of exceptions: (1) an undefined instruction, and (2) arithmetic overflow. These exceptions are germane to the small language (five instructions) whose implementation we have been exploring thus far.

Basic Exception Handling Mechanism. After an exception is detected, the processor's control circuitry must be able to (1) save the address of the instruction that caused the exception in the exception program counter (EPC), and then (2) transfer control to the operating system (OS) at a prespecified address. The second step typically invokes an exception handler, which is a routine that either (a) helps the program recover from the exception or (b) issues an error message and then attempts to terminate the program in an orderly fashion. If program execution is to continue after the exception is detected and handled, then the EPC register helps determine where to restart the program. For example, the exception-causing instruction can be repeated, but in a way that does not cause an exception. Alternatively, the next instruction can be executed (in MIPS, this instruction's address is $epc + 4).

For the OS to handle the exception, one of two techniques is employed. First, the machine can have Cause and EPC registers, which respectively contain a code representing the cause of the exception and the address of the exception-causing instruction. A second method uses vectored interrupts, where the address to which control is transferred following the exception is determined by the cause of the exception. If vectored interrupts are not employed, control is transferred to one address only, regardless of cause; the cause is then used to determine what action the exception handling routine should take.
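The difference between the two dispatch techniques can be sketched as follows; this is a toy illustration with invented handler addresses and cause names, not a description of the MIPS hardware:

    # Vectored dispatch jumps to a cause-specific address; the non-vectored scheme
    # jumps to a single entry point and records the cause for the handler to examine.
    VECTOR_TABLE = {"undefined_instruction": 0xC000_0000,    # hypothetical vector addresses
                    "arithmetic_overflow":   0xC000_0020}
    SINGLE_ENTRY = 0xC000_0000                                # single handler entry point

    def vectored_dispatch(cause):
        return VECTOR_TABLE[cause]            # handler address depends on the cause

    def nonvectored_dispatch(cause, regs):
        regs["Cause"] = cause                 # handler reads the Cause register to decide
        return SINGLE_ENTRY                   # always the same entry address

    regs = {}
    assert vectored_dispatch("arithmetic_overflow") == 0xC000_0020
    assert nonvectored_dispatch("arithmetic_overflow", regs) == SINGLE_ENTRY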
Hardware Support. MIPS uses the non-vectored approach -- Cause and EPC registers together with a single handler entry address. To support this capability in the datapath that we have been developing in this section, we need to add the following two registers:

Two additional control signals are needed: EPCWrite and CauseWrite, which write the appropriate information to the EPC and Cause registers. Also required in this particular implementation is a one-bit signal that sets the LSB of Cause to 0 for an undefined instruction, or to 1 for arithmetic overflow. Of further use is an address AE that points to the exception handling routine to which control is transferred. In MIPS, we assume that AE = C0000000₁₆.

In the datapath developed through Section 4.4, the PC input is taken from a four-way mux that has three inputs defined: PC + 4, the BTA, and the JTA. Without adding control lines, we can add a fourth possible input to the PC, namely AE, which is written to the PC by setting PCSource = 11₂.

Unfortunately, we cannot simply write the PC into the EPC, since the PC is incremented at instruction fetch (Step 1 of the multicycle datapath) rather than at instruction execution (Step 3), when the exception actually occurs. Thus, when an exception is detected, the ALU must subtract 4 from the PC, and the ALUout register contents must be written to the EPC. It is fortunate that this requires no additional control signals or lines in this particular datapath design, since 4 is already a selectable ALU input (used for incrementing the PC during instruction fetch, and selected via the ALUSrcB control signal).

Hardware support for the datapath modifications needed to implement exception handling in the simple case illustrated in this section is shown in Figure 4.23. In the finite-state diagrams of Figures 4.24 and 4.25, we see that each of the preceding two types of exceptions can be handled using one state each. For each exception type, the state actions are: (1) set the Cause register contents to reflect the exception type, (2) compute and save PC - 4 into the EPC to make the return address available, and (3) write the address AE to the PC so control can be transferred to the exception handler. To update the finite-state control (FSC) diagram of Figure 4.22, we need to add the two states shown in Figure 4.24.

Figure 4.23. Representation of the composite datapath architecture and control for the MIPS multicycle datapath, with provision for exception handling [MK98].

Thus far, we have discussed exceptions and how to handle them, and have illustrated the requirements for hardware support in the multicycle datapath developed in this section. In the following section, we complete this discussion with an overview of the necessary steps in exception detection.

Exception Detection. Each of the two possible exception types in our example MIPS multicycle datapath is detected differently, as follows:

Figure 4.24. Representation of the finite-state models for two types of exceptions in the MIPS multicycle datapath [MK98].

Figure 4.25. Representation of the composite finite-state control for the MIPS multicycle datapath, including exception handling [MK98].

As a result of these modifications, Figure 4.25 represents a complete specification of control for our five-instruction MIPS datapath, including mechanisms to handle two types of exceptions. Our design goal remains keeping the control logic small, fast, and accurate. Unfortunately, the FSC in Figure 4.25 has some flaws. For example, the overflow detection circuitry does not cause the ALU operation to be rolled back or restarted.
Rather, the ALU result appears in the ALUout register whether or not there is an exception. This contradicts the MIPS ISA, which specifies that an instruction should have no effect on the datapath if it causes an exception. In practice, certain types of exceptions require process rollback, and this greatly increases the control system complexity while also decreasing performance.

Reading Assignment: Study carefully Section 5.7 of the textbook (pp. 416-419) on the Pentium Pro exception handling mechanism.

4.5.5. Summary

We have developed a multicycle datapath and focused on (a) performance analysis and (b) control system design and implementation. Microprogramming was seen to be an especially useful way to design control systems. Unfortunately, there are two assumptions about microprogramming that are potentially dangerous to computer designers or engineers, which are discussed as follows.

First, it has long been assumed that microcode is a faster way to implement an instruction than a sequence of simpler instructions. This is an instance of a conflict in design philosophy that is rooted in CISC versus RISC tradeoffs. In the past (CISC practice), microcode was stored in a very fast local memory, so microcode sequences could be fetched very quickly. This made it look as though microcode was executing very fast, when in fact it used the same datapath as higher-level instructions -- only the microprogram memory throughput was faster. Today, with fast caches widely available, microcode performance is about the same as that of the CPU executing simple instructions. The one exception is an architecture with few general-purpose registers (CISC-like), in which microcode might not be swapped in and out of the register file very efficiently.

Another disadvantage of microcode-intensive execution is that the microcode (and therefore the instruction set) must be selected and settled upon before a new architecture is made available. This code cannot be changed until a new model is released. In contrast, software-based approaches to control system design are much more flexible, since the (few, simple) instructions reside in fast memory (e.g., cache) and can be changed at will. At the very worst, a new compiler or assembler revision might be required, but that is common practice nowadays, and far less expensive than hardware revision.

The second misleading assumption about microcode is that if you have some extra room in the control store after a processor control system is designed, support for new instructions can be added for free. This is not true, because of the typical requirement of upward compatibility. That is, any future models of the given architecture must include the "free" instructions that were added after the initial processor design, regardless of whether or not the control storage space might be at a premium in future revisions of the architecture.

This concludes our discussion of datapaths, processors, control, and exceptions. We next concentrate on another method of increasing the performance of the multicycle datapath, called pipelining.

[Maf01] Mafla, E. Course Notes, CDA3101, at URL http://www.cise.ufl.edu/~emafla/ (as-of 11 Apr 2001).

[MK98] Copyright 1998 Morgan Kaufmann Publishers, Inc. All Rights Reserved, per copyright notice request at http://www.mkp.com/books_catalog/cod2/cod2ecrt.htm (1998).

[Pat98] Patterson, D.A. and J.L. Hennessy. Computer Organization and Design: The Hardware/Software Interface, Second Edition, San Francisco, CA: Morgan Kaufmann (1998).
Krake's presentation at U of M, Crookston part of Disability Employment Awareness Month.

Everybody likes a good story about dogs and how they can inspire and how they can even save lives. Earlier this week at the University of Minnesota, Crookston, students, faculty, staff and the public got to hear firsthand how Terri Krake's 4-year-old dog Brody helps her with everyday life.

Krake, of Minneapolis, seems like an enthusiastic and able-bodied person, but in reality, she suffers from seizures from time to time because of an injury she sustained when she was younger. Because of this she requires the assistance of a service dog.

In the 1980s, Krake was a young deputy sheriff in New Orleans who absolutely loved her job. "Every day was a new adventure," she said. But that all changed when she was called out one day to the site of a gas leak. Before she could run for cover, the gas ignited and there was an explosion that threw her into the air, only to land on her head. She didn't know it then, but she had received a brain stem injury.

Shortly after the accident, Krake began having seizures. For 4 1/2 years, she went through intensive physical therapy and drug experimentation and soon was able to control her seizures. They eventually returned, however, much stronger and longer than before. Fearful of leaving home, she stayed home most of the time except for going to doctor's appointments.

In 2008, her neurologist suggested that she get a Vagus Nerve Stimulator implant, or VNS, which sends an electrical pulse that interrupts seizure activity and limits the maximum seizing time to 5 minutes. The device came with a magnet so that, when swiped across the implant area, it would shorten or even prevent the seizures. But Krake was unable to use it, since she had no sense of when a seizure would occur. This was when it was suggested that she get a Seizure Assist dog.

Krake applied and was later paired with a 14-month-old lab, Brody. His training included bringing an emergency phone or hitting a Lifeline panic button during a seizure emergency. Brody was also trained to work with the VNS and now wears the magnet on his vest. "When I seize," Krake explained, "he will 'cuddle' with me. He lays across my chest with his nose 'snuggling' my neck. This swipes the magnet across the implant and stops the seizure. Then he barks to get someone's attention."

Krake thinks very highly of her service dog and is grateful for all he has helped her with. Since getting Brody, she has been able to go out in public and even do volunteer work. "He really is a lifesaver," Krake said. "I probably wouldn't be here today without him."

Krake's presentation in Bede Ballroom was held in conjunction with Disability Employment Awareness Month.
Wednesday , 7 December 2016 Breaking News Software helps researchers discover new antibiotics New York : Researchers at The Rockefeller University in New York said they discovered two promising new antibiotics by sifting through the human microbiome with the help of a software. By using computational methods to identify which genes in a microbe’s genome ought to produce antibiotic compounds and then synthesising those compounds themselves, they were able to discover the new antibiotics without having to culture a single bacterium, according to a study published in the journal Nature Chemical Biology. Most antibiotics in use today are based on natural molecules produced by bacteria – and given the rise of antibiotic resistance, there is an urgent need to find more of them. Yet coaxing bacteria to produce new antibiotics is a tricky proposition. Most bacteria won’t grow in the lab. And even when they do, most of the genes that cause them to churn out molecules with antibiotic properties never get switched on. The Rockefeller University team led by Sean Brady offers a new way to avoid these problems. The team began by trawling publicly available databases for the genomes of bacteria that reside in the human body. They then used specialised computer software to scan hundreds of those genomes for clusters of genes that were likely to produce molecules known as non-ribosomal peptides that form the basis of many antibiotics. Brady and his colleagues then used a method called solid-phase peptide synthesis to manufacture 25 different chemical compounds. By testing those compounds against human pathogens, the researchers successfully identified two closely related antibiotics, which they dubbed humimycin A and humimycin B. Both are found in a family of bacteria called Rhodococcus — microbes that had never yielded anything resembling the humimycins when cultured using traditional laboratory techniques. The humimycins proved especially effective against Staphylococcus and Streptococcus bacteria, which can cause dangerous infections in humans and tend to become resistant to various antibiotics, said the study. Leave a Reply
Is “Pink Slime” Healthy? The processed meat-ish byproduct known as "pink slime." Bon appétit. In the last few weeks, you’ve probably heard a lot about so-called “pink slime.” Otherwise known as “lean finely textured beef trimmings,” pink slime is a processed meat byproduct found in 70% of packaged ground beef in the United States. Rather than being made from muscle tissue, this meat-ish byproduct is created from connective tissue and treated with ammonia hydroxide to kill salmonella and E. coli. Doesn’t sound too appetizing. And really, the publicity about pink slime was one of the rare instances where mainstream consumers peered behind the veil and saw the unpleasant reality of industrial farming. The family farms and red barns that adorn product packaging are far cries from the shocking truth about how our food is made. Despite the unappealing process by which it’s created, the USDA considers pink slime safe for human consumption. Moreover, when it is added to ground beef, current regulations do not require that it’s disclosed on labels. Of course, safe and healthy are two different things. Twinkies are safe for consumption, but certainly not part of a healthy diet. The truth is, most Americans eat far too much red meat – pink slime or otherwise. In fact, a recent study by Harvard researchers concluded that 9% of male deaths and 7% of female deaths would be prevented if people lowered red meat consumption to 1.5 ounces (or less) per day. That’s a sobering statistic. The moral of the story is to eat less red meat. Period. It’s not that we need to exclude red meat entirely, but most of us would be significantly healthier with less red meat in our diets. Back in January, I made the decision to limit my red meat consumption to twice weekly. Instead of including red meat as a staple in my diet, it’s more of a special treat – and, when I do eat red meat, I usually opt for healthier, grass-fed varieties. If you hold the mindset that your body is a temple, then you’d want to fill that temple with those things that honor it. Twinkies, pink slime and the like certainly don’t make the cut; make those food choices that nourish, energize and lift up your body. About Davey Wavey 1. I couldn’t agree more, I was appaled when I first read about this “pink slime”. I truly believe that food is one of the most precious things for your body, and that something so important should never be reduced to its lowest common denominator. So many diseases which were never such a common threat came about in the 70/80’s when processed food became plentiful. 2. Marcus(2) says: If we were to remove this pink slime from our food, more children will go hungry every night. Using a little bit of ammonia to kill diseases bacteria and viruses is not going to kill you. Along with proper cooking techniques no one need fear this “mutant” meat. That is why the USDA says its fine to sell and especially to schools who can then take the money they saved from not buying organic soybean fed prime angus beef kosherly slaughtered, to provide a better education to the developing youths. Secondly, if we didn’t eat this processed meat, meat prices would sky rocket in the eyes of the lower classes putting more burden on their wallets. The lower classes cannot afford high quality meat or the ability to be vegetarian or vegan because the cost of those lifestyles are out of their reach. And no we cannot adjust society to make these lifestyles cheaper, unless you want to burn more of the rainforest down. 
The world is complex, and society makes it complexer. 3. One of the many reason why I only eat fish :) that and in one small package of tuna there are 17g of protein and about a gram of fat :) you guys can have the other meat I stick with fish :) 4. A few months ago I watched the film “Food.inc.”. It really changed my views on what I’m putting into my body. A month ago I made the decision to have meat (fish excluded) once a week only. I’m feeling much better physically, and find that my overall attitude about daily situations is more positive too. And when I do eat meat or dairy– all organic, free range, grass fed, ect. I can really taste and feel the difference. Sure, it’s WAY more expensive…. but you get what you pay for. 5. We are lucky enough to not have this pink slime in Australia but I agree with not eating it. Anything which is processed, created in a lab or genetically modified should be kept as far away from you as possible. Our body simply doesn’t process it properly. Things like this pink slime would possibly contain trans-fats which we know don’t break down, but rather build up as cholesterol in the bloodstream. For weight loss leaner meats and less red meat might be beneficial but for iron deficient people, red meat is still a good source of protein and iron, which is a necessary ion. Unless you are allergic to red meat or have other beliefs whether vegetarian, vegan or religious than some red meat is still good. 6. Please check out this video. Jamie Oliver is a cook and very active in teaching about nutrition. Btw. we don’t have that pink slime in Switzerland, very high meat prices and…people still live and get something to eat. It’s really not necessary to eat meat every day: 7. Travis says: “…9% of male deaths and 7% of female deaths would be prevented if people lowered red meat consumption to 1.5 ounces (or less) per day.”– isn’t that from the study that also says if you eat red meat you smoke and you have lower cholesterol? 8. It’s raelly great that people are sharing this information. 1. […] no secret that most Americans eat far too much red meat. As I recently shared, a Harvard study concluded that 9% of male deaths and 7% of female deaths would be prevented if […] Speak Your Mind
Use Reflection to Validate Assembly References—Before Your Customers Do : Page 2

Building the AssemblyValidator

The first step in validating the currently executing assembly's dependencies is to retrieve a handle to the currently executing assembly, because you need a handle to the topmost assembly for the process in which the validation code is running. The System.Reflection namespace's Assembly class provides the GetEntryAssembly() function that you can use to obtain a handle to this assembly. The function returns an Assembly object, which represents an assembly that has been loaded into memory.

    Dim objAssembly_Self As Assembly
    objAssembly_Self = Assembly.GetEntryAssembly()

For reference, the Assembly class provides access to the metadata exposed by a particular instance of an assembly. It is important to note that the Assembly class is tied to an instance of an assembly loaded into memory because it is possible, especially with the Xcopy deployment methods promulgated by Microsoft, to have many identical assemblies that differ only by their locations. If you are interested in only the generic information about an assembly, use the AssemblyName class.

The AssemblyName class stores enough information about an assembly to enable you to load an instance into memory—more specifically, it provides enough information for the .NET Framework to find and load it for you. One key detail used by .NET is the assembly's FullName property, which holds an assembly's name, version, culture, and public key. This combination of attributes ensures that .NET loads the exact assembly you intend—no two assemblies should ever have an identical FullName.

When you query the assembly metadata for referenced assemblies, the Assembly class returns the list of referenced assemblies as AssemblyName objects. So, after you get a reference to the entry assembly, you can request the list of assemblies it references, returned as an array of AssemblyName objects. You can then iterate through the array, passing each AssemblyName object to a recursive method named ValidateAssembly. The recursive nature of this function ensures that the AssemblyValidator validates all the dependencies that exist in the hierarchical assembly dependency structure.

    Dim objDepAssembly As AssemblyName
    For Each objDepAssembly In _
        objAssembly_Self.GetReferencedAssemblies()
        ValidateAssembly(objDepAssembly)
    Next

Internally, the ValidateAssembly method uses an Assembly_List object defined in the validation tool to keep track of which assemblies have been referenced and to maintain details about each of the referenced assemblies. Other than keeping track of assembly details, the most important role of the Assembly_List object is to avoid repeatedly validating assemblies that have already been validated. Worse than the small amount of additional time required to re-verify assemblies, you could quickly cause a stack overflow if the recursive calls exhausted your application's memory resources. So, before adding another assembly to the Assembly_List object, the AssemblyValidator first checks the list to see if it's already been verified. If so (it exists in the list), the tool stops the recursion and returns from the ValidateAssembly method.

The ValidateAssembly method first attempts to load the assembly using the AssemblyName object provided as a parameter.
The Assembly class provides a shared overloaded Load method; the sample application uses the overload version that accepts an assembly name object. The CreateAssembly method shown below demonstrates how to use the AssemblyName object to load assemblies. Note the possible common exceptions raised by the Load method. 'attempt to create the assembly using the assembly name object Private Function CreateAssembly( _ ByVal p_objAssemblyName As AssemblyName, _ ByRef p_strError As String) As Assembly Dim objAssembly As System.Reflection.Assembly '---- try to create the assembly Try objAssembly = System.Reflection.Assembly.Load( _ p_objAssemblyName) p_strError = "" Catch exSystem_BadImageFormatException As _ System.BadImageFormatException p_strError = "File is not a .NET assembly" objAssembly = Nothing Catch exSystem_IO_FileNotFoundException As _ System.IO.FileNotFoundException p_strError = "Could not load assembly -- " & _ "file not found" objAssembly = Nothing Catch ex As Exception p_strError = "An error occurred loading the assembly" objAssembly = Nothing End Try Return objAssembly End Function If the assembly cannot be loaded, then recursion stops at this level, and the AssemblyValidator logs an error in the Assembly_List indicating why the assembly could not be loaded. When the assembly loads successfully, the AssemblyValidator adds the assembly details to the Assembly_List object, and recursively verifies each of this assembly's referenced assemblies. This process continues until all the dependencies have been verified. Listing 1 shows the complete ValidateAssembly method. At the end of the process, the Assembly_List class provides a FormatList method used to produce a string representation of the list of referenced assemblies. By default, the AssemblyValidator displays only assemblies that could not be loaded, because it's far too difficult to scroll through the lists of dependencies manually, looking for problems—even simple "Hello World" WinForms projects produce long lists of dependencies. As an exercise, I recommend that you modify the sample code to instruct the Assembly_List to display all dependencies (without duplicates), including those assemblies that loaded successfully, and observe the list that is produced. To make this modification, open the Assembly_Validator class in the AssemblyDependencyValidator project, and modify the last line of the ValidateEntryAssembly method, changing the parameter to the FormatList method to be False, as shown below: m_strResults = strValidatorResults & vbCrLf & _ m_objBindingInfo.FormatList(False) Comment and Contribute
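Listing 1, which contains the complete ValidateAssembly method, is not reproduced in this excerpt. The following is a minimal sketch of the method's likely shape based on the description above; it assumes Imports System.Reflection, the CreateAssembly helper shown earlier, and an m_objBindingInfo field holding the Assembly_List, with hypothetical Contains and Add members standing in for whatever the real class exposes.

'Illustrative sketch only -- the article's actual Listing 1 may differ in detail
Private Sub ValidateAssembly(ByVal p_objAssemblyName As AssemblyName)

    'Stop the recursion if this assembly has already been validated
    If m_objBindingInfo.Contains(p_objAssemblyName.FullName) Then
        Return
    End If

    'Attempt to load the assembly from its AssemblyName
    Dim strError As String = ""
    Dim objAssembly As Assembly = CreateAssembly(p_objAssemblyName, strError)

    'Record the result; a failed load ends the recursion at this level
    m_objBindingInfo.Add(p_objAssemblyName.FullName, strError)
    If objAssembly Is Nothing Then
        Return
    End If

    'Recursively validate everything this assembly references
    Dim objDepAssembly As AssemblyName
    For Each objDepAssembly In objAssembly.GetReferencedAssemblies()
        ValidateAssembly(objDepAssembly)
    Next

End Sub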
Planar lighting technology outshines OLED December 10, 2012 // By Christoph Hammerschmidt Global LighZ, a lighting technology company from Breitungen (Germany), has demonstrated the prototype of a new area light source based on plasma technology. The technology could well compete with OLEDs, the company says. It generates glare-free light without shadows and aims at applications in movie, TV and photo studios. According to Global LighZ, the e3 technology (for energy-efficient excitation) offers significantly higher luminous efficacy than comparable OLED luminaires. The technology, originally developed for display backlight applications, is based on the company's research in the area of plasma physics and allows the development of custom-made solutions for applications in the investment goods segment that cannot be realized with conventional technologies, including OLED. e3 plasma luminaires can generate light throughout the entire color temperature range from 2,000 K to 10,000 K. Hence it can also be used for medical applications, for instance for the therapy of depressive moods by means of bluish light. Since it offers a high CRI, it can also be used in industrial inspection applications, for instance in coating lines or biological labs. "Based on our activities in display applications, we have very broad experience in distributing light across areas," explained Global LighZ CEO Klaus Wammes, who also invented the e3 technology. "The e3 technology concept shows that huge achievements can be made beyond display technology." He added that the current point in time is favourable to disclose this technology since "many OLED developments fail in practice due to their poor light yield", as Wammes puts it. "With e3 powerflat we prove that we can implement customer-specific lighting solutions based on plasma technology which will remain dreams of the future for OLEDs for a long time." For more information visit
Published in Print: August 7, 2002, as Limitations of the Market Model Limitations of the Market Model What's behind the travails of Edison Schools Inc.? A decade ago, during the corporate boom of the 1990s, Christopher Whittle had an idea: Why not start a system of publicly financed for-profit schools? It was the ideal environment for such a venture. The market was hot, and so was education. The public sector was in disrepute. The time seemed ripe to replace cumbersome, overly politicized public school bureaucracies with smart, efficient corporate management and show how the market could improve the outcomes and the efficiency of public schools. So Mr. Whittle founded the Edison Project (later renamed Edison Schools Inc.), raised tens of millions of dollars in capital, hired a qualified, well-connected team, and set it loose for three years to design an exemplary school. When the first four Edison schools opened in the fall of 1995, they implemented "the Edison design," the well-researched educational plan still used in Edison schools today. In many ways, Edison did the school management business right. Moreover, Edison has tackled educational challenges that others either refused to accept or failed to remedy. In essence, the company has undertaken to convert some of the country's most troubled schools into educational successes. That low-performing schools became Edison's market should surprise no one. The crisis of our country's schools isn't a crisis of schools that educate middle-class or affluent children; it's a crisis of poor children's schools. And while a few school management companies have intentionally positioned their schools to attract specific segments of the middle-class market, Edison's concentration on managing existing public schools under contract to school districts, its reliance on the Success For All model, and its free computers for students' homes seem designed to appeal to struggling schools for disadvantaged children. Edison's growth has been exceptional. Starting with four schools in 1995-96, the company grew 20-fold over its first five years, running 79 schools in 1999-2000. In spring 2000, Edison could boast that it had never lost a contract. And while the idea of for-profit school management, and Edison Schools as its vanguard, always has been controversial, the company's media coverage remained overwhelmingly favorable for many years. From the beginning, Edison claimed that students in its schools were posting significant achievement gains. True, it didn't take long for researchers to begin collecting data and questioning the company's claims; and Edison's "Annual Report on School Performance"—with its lack of backup data and its facile school rating system—reads more like a marketing piece than a serious analysis of student achievement. Still, even the company's harshest critics allowed that the performance of Edison-run schools was "mixed," or on a par with other public schools. Given that the company was attempting to turn around some very troubled schools, this performance was nothing to be ashamed of, even if it didn't live up to Edison's claims.
Edison's business strategy of taking over regular public schools under contract to school districts (as well as running charter schools) practically ensured that staff members in many of its schools would be covered by collective bargaining agreements. From its first year of operations, many Edison teachers have been represented by the National Education Association and the American Federation of Teachers, and the company made an early decision to try to work with the two unions. Some NEA and AFT affiliates actively opposed Edison; others just as actively cooperated to bring the company to their districts and to make the Edison-run schools successful once they were there. Both of us visited a number of Edison schools during the company's first five years; we found those schools to be safe and educationally productive. So far, so good. Market advocates should be cheering. But anyone who follows either education or the stock market knows that Edison Schools recently underwent a dramatic reversal of fortunes. The company's stock has plummeted from over $36 a share in February 2001 to barely $1 at the end of this May. The persuasive Chris Whittle, widely acknowledged as a master at raising capital, was hard-pressed to find the $40 million the company needs to open the schools it's scheduled to operate this fall. The same media that for years spoke glowingly of Edison's accomplishments now tell a tale of corporate woe. ("Edison Reels Amid Flurry of Bad News," May 22, 2002.) Certainly Edison is, at least in part, the author of its own misfortune. Its effort last year to take over five New York City schools was an unmitigated disaster, characterized by reliance on high-level political deals and a failure to organize in the schools' communities. Edison repeated its New York mistakes in Pennsylvania, managing to alienate almost the entire city of Philadelphia while relying on its ties with the governor's office. By overpromising, the company managed to turn the award of 20 Philadelphia schools—the largest ever to Edison or anyone else—into a defeat. But Edison's downward spiral has been caused by more than political ineptitude and excessively optimistic business predictions. The company could probably recover from these problems. It's the troubles in Edison schools that are the real cause for concern. Over the past two years, Edison has lost a total of 27 schools, including 16 at the end of the 2001-02 school year. (By contrast, a total of 26 schools have renewed their contracts with Edison.) In each case (with the exception of three schools from which Edison initiated the pullout for primarily financial reasons), the reasons school districts and charter school boards have given for canceling or not renewing Edison contracts have been a combination of low test scores, declining student enrollment, high teacher turnover, and Edison's cost. Moreover, at least 15 additional schools that will remain with Edison next year are on their states' low-performing lists, and six districts that plan to remain with Edison, at least in the short term, have expressed dissatisfaction with the company. Far from the educational home run Edison has been claiming, the company risks an educational strikeout. Why, when it had so much going for it, does Edison today find its very survival in question? Did the leading school management company self-destruct— or was it done in by contradictions inherent in the concept of operating public education as a business? 
The rules that govern the market contradict essential requirements for creating and maintaining excellent public schools. The answer, of course, is both. Some of Edison's problems are self- inflicted. But the root of its troubles lies in trying to operate public schools as a successful business. The rules that govern the market—that require companies to establish brand identity, attract capital, and become profitable— contradict essential requirements for creating and maintaining excellent public schools. Establishing 'Brand' and Growing Rapidly. Successful consumer companies establish "brand," allowing what they sell to be readily identified in the marketplace. But replicating a successful school at multiple sites is not like replicating a successful restaurant or bookstore. Schools' raw materials (students) are highly individual and unpredictable, the product of forces external to the school. The central control required to create schools that look and feel and educate like all a company's other schools stands in direct contradiction to the need for every school to respond to its students and community, its "customers." School design and curricula are only the starting points of the complex and nuanced task of creating a successful school. There's the matter of finding the right leadership and faculty, and nurturing their understanding of teaching and learning and their relationships with each other, their students, their students' families, and communities. And once a thriving school climate is established, it requires cultivation and support. The requirement for rapid growth further complicates the enormous challenge of establishing brand and controlling quality while responding to local needs. A criticism of Edison is that it has grown too fast. While this is certainly true, rapid growth wasn't simply an Edison whim. It was a demand of the market. In the absence of profitability (more on this later), Edison needed enormous growth to position itself as the market leader and produce steadily and rapidly rising revenues that would bolster its market value. (In all likelihood, this is behind the company's recent problems with the Securities and Exchange Commission. Earlier this year, the SEC found that Edison consistently reported as revenue funds, amounting to over 40 percent of reported revenues, that had never even passed through its banks, but were used by school districts to pay salaries, transportation costs, and other expenses for Edison schools.) So Edison has been attempting to assert corporate control, maintain quality, establish brand, and respond to local conditions—all while adding over 20 new schools a year. It is a Herculean feat, and one at which Edison is failing. But if investors are going to continue putting capital into the company and keep its stock price high, they want to see significant revenue growth. The Demand for Profitability and the Illusion of Scale. The bottom-line demand of the market is profitability. But Edison has never been profitable. It has accumulated $261 million in losses since its founding and recently took on another $40 million in debt so it can continue operations in the 2002-03 school year. Since the company's earliest days, Edison executives have said that when it gets big enough their company will become profitable. 
While the number of schools said to be needed for profitability has increased over the years, the basic notion has remained: When we have enough schools over which to spread our overhead costs and negotiate discounted prices from suppliers, we'll show a profit. The problem with this scenario is that economies of scale don't apply to the business of schooling. Economies of scale work in industries with uniform products. But as noted above, schooling is not one-design-fits-all, and individual school faculties and communities want input into their schools. In addition, while one can assume that Edison with its 130 schools can bargain a better price with suppliers than a school district with 10 schools, materials and supplies are not what make schooling expensive. Schooling is highly labor-intensive, with salaries constituting 80 percent or more of school budgets. Short of hiring cheap labor (underqualified teachers) or replacing teachers with computers, neither of which is recommended as a way to create successful schools, there's simply no way to dramatically reduce labor costs. In the end, the market metaphor does not apply to public education. What is rational for a society—investing in education—may well not constitute a viable business. The troubles of Edison, a company that began its life in a thriving economic environment with a generous supply of capital and a solid educational blueprint, attest to the difficulties inherent in creating a system of good schools that serve the diverse needs of our nation's children. Public education is a social commitment that transcends individual interest and corporate gain. It is highly probable that schools designed to meet this responsibility are inherently unprofitable. This does not mean the commitment should be abandoned. It means that, as a human service, education is grounded in a belief in human dignity that transcends the values and behaviors associated with markets. It means public education cannot be squeezed to fit the market model and still meet the needs of a just society. Heidi Steffens is a senior policy analyst at the National Education Association in Washington. Peter W. Cookson Jr. is the president of TC Innovations and a professor at Teachers College, Columbia University, in New York City. Vol. 21, Issue 43, Pages 48,51
How to find a password for wifi Written by Theon Weber To obtain a Wi-Fi network's password, you'll need access to the router. (ADSL Router image by Phil2048) Many Wi-Fi networks are encrypted, meaning that they cannot be used without the correct password. If you have an encrypted Wi-Fi network at home, but have forgotten (or never knew) the password, it can be frustrating trying to connect new devices to the network or make other changes. To find the password for an encrypted Wi-Fi network, you will need access to the wireless router providing the signal. This means you will most likely need to use a computer that is plugged directly into the router via a physical cable, although a computer that is already connected to the Wi-Fi network will also work. 1. Type the router's IP address in your Internet browser's address bar. This varies depending on the make of your router, but it is often 192.168.1.1 or 192.168.0.1. If neither of these works, check the user's guide that came with your router, or try an Internet search for the brand and model name -- which should be printed on the router itself -- and the words "setup," "access" or "IP." 2. Press "Enter" to access the router's set-up page. This page might be password-protected, most likely with a different password from the one that's protecting the Wi-Fi signal. If you don't know the router password, check your user's guide or search online for instructions on resetting the router. There will likely be a physical reset button that restores the router's original factory settings, including its default password (often "admin" or even nothing at all). Be aware that this will also remove any changes you or others have made to the router's default settings, including clearing the Wi-Fi security settings. 3. Access the router's wireless settings from the set-up page. This might be on a separate page, accessed through a "Wireless Settings" link, or it might simply be part of the main page. The wireless settings will include an option to encrypt the signal using WPA, WEP or some other encryption protocol. Near this will be a text box containing the Wi-Fi password. Write down the password from this box, or simply change the password to whatever you'd like. If you reset the router in Step 2, you will have to enable encryption and set a new Wi-Fi password. 4. Click the "Save" button to save any changes you've made. The Wi-Fi signal will vanish briefly as the router restarts, and once it reappears, you should be able to use your new Wi-Fi password. If you didn't change the password but merely wrote it down, you will be able to use it to gain access to the signal right away.
Remoteness (See also Isolation.) Allusions, Definition, Citation, Reference, Information - Allusion to Remoteness (See also Isolation.) 1. Antarctica continent surrounding South Pole. [Geography: NCE, 113–115] 2. Dan to Beersheba from one outermost extreme to another. [O.T.: Judges 20:1] 3. Darkest Africa in European and American imaginations, a faraway land of no return. [Western Folklore: Misc.] 4. end of the rainbow the unreachable end of the earth. [Western Folklore: Misc.] 5. Everest, Mt. Nepalese peak; highest elevation in world (29,028 ft.). [Geography: NCE, 907] 6. Great Divide great ridge of Rocky Mountains; once thought of as epitome of faraway place. [Am. Folklore: Misc.] 7. John O’Groat’s House traditionally thought of as the northern-most, remote point of Britain. [Geography: Misc.] 8. Land’s End the southwestern tip of Britain. [Geography: Misc.] 9. moon earth’s satellite; unreachable until 1969. [Astronomy: NCE, 1824] 10. North and South Poles figurative ends of the earth. [Geography: Misc.] 11. Outer Mongolia desert wasteland between Russia and China; figuratively and literally remote. [Geography: Misc.] 12. Pago Pago capital of American Samoa in South Pacific; thought of as a remote spot. [Geography: Misc.] 13. Pillars of Hercules promontories at the sides of Straits of Gibraltar; once the limit of man’s travel. [Gk. Myth.: Zimmerman, 110] 14. Siberia frozen land in northeastern U.S.S.R.; place of banishment and exile. [Russ. Hist.: NCE, 2510] 15. Tierra del Fuego archipelago off the extreme southern tip of South America. [Geography: Misc.] 16. Timbuktu figuratively, the end of the earth. [Am. Usage: NCE, 2749] 17. Ultima Thule to Romans, extremity of the world, identified with Iceland. [Rom. Legend: LLEI, I: 318] 18. Yukon northwestern Canadian territory touching on the Arctic Ocean. [Geography: Misc.] Repentance (See PENITENCE.) Reproof (See CRITICISM.)
Ercoupe - $4.95 The ERCO Ercoupe is a low wing monoplane first manufactured by the Engineering and Research Corporation (ERCO) shortly before World War II; production continued after WWII by several other manufacturers until 1967. It was designed to be the safest fixed-wing aircraft that aerospace engineering could provide at the time, and the type still enjoys a very faithful following today. Ercoupe downloadable cardmodel from Fiddlers Green. Arguably one of the most overlooked private planes in history. It first flew in 1938 and was built again after WWII. The brilliant engineering that went into this plane made it safe and easy to fly, but still it was a marketing failure. What people say... Say, I believe the original "flying milk stool" wasn't a Piper. Some folks may have referred to the Tri-Pacer as such due to the tricycle landing gear, but it wasn't the first plane to bear that nickname. (Photo right by Wayne White. This model is best when printed on silver inkjet paper.) I'm trying to remember the name, but it was a low wing 2 place all metal plane with a bubble cockpit and rudders out on the end of the horizontal stab, and the ailerons were linked to the rudder. The combination of the unusual rudder arrangement, the linked controls, and a tricycle gear gave it the nickname "the flying milk stool." I finished a 'do' of your Ercoupe, from way back in the Mudget/Fynn days--in Red River Silver, of course. Don't know if you are aware, but there is a serious discrepancy between the upper and lower wing parts. At 1/40 the lower wing is about 3/16 shorter than the upper if you line up the landing lights. You may want to check this out--or not, in your present turmoil. There is, of course, the matter of the pretty crude nose, but if you ever redo the whole plane I'm sure this will be taken care of. John Erco Ercoupe The Ercoupe (E and R coming from the company's name: Engineering and Research Corporation) was one of the most unusual, and controversial, light airplanes ever built. It was designed by Fred E. Weick, one of aviation's foremost engineers, who decided to solve with one bold stroke the biggest single cause of aviation fatalities: the stall, followed by spin, at altitudes too low to permit recovery. The Ercoupe was designed to be stall-proof and spin-proof. (The same idea was executed, in a slightly different form, by Professor Otto Koppen, of MIT. His design, called the Skyfarer, was also stall- and spin-proof, but it never reached volume production.) The Ercoupe could not be ignored. The wing was placed low, there were two vertical fins on a horizontal tail boom, and the third landing wheel was under the nose. This design flew in the face of all things known about proper light airplanes, which had high wings, one fin (and rudder), and a tail wheel. The Ercoupe really is a nice little plane, though some pilots (mostly those who've never flown it) don't think it's too respectable. Most owners love them. Both the Ercoupe and Skyfarer were built in small quantities before World War II. After the war the Ercoupe came on strong, and was promoted as no airplane had ever been promoted before. It was displayed at state and county fairs, demonstrated at air shows, flown from shopping center parking lots, and even dismantled and reassembled inside department stores.
The results were satisfying, to say the least, and Engineering and Research Corporation had to expand their production facilities several times before they could catch up with the demand. ErcoupeWayne White sends in this very nice Ercoupe model Note the extra work he did on the landing gear. How to mystify your modeln' pals.... Ercoupe all white cardmodel Ercoupe all white cardmodel Here are three Ercoupe photos you might be able to use. Printed on Wausau 90 LB Exact Index paper resulting in an eight inch wingspan. Ercoupe all white cardmodel embossed Though there is no print showing, the back surface was given a protective coating of Krylon Acrylic Clear prior to cutting out the parts. This helped keep the surface clean. Bob Penikas Ercoupe with clear cabin submitted by Bob Martin Ercoupe with clear cabin submitted by Bob Martin Ercoupe with clear cabin submitted by Bob Martin All of this took place with absolute disregard of what aviation's old timers were saying about the airplane. Because of its tricycle gear, they called it "the flying milking stool." Because its ailerons and rudder were interconnected-there was only one pedal, for brakes, on the floor-the old timers spoke darkly about the problems of landing in a cross wind. (In fact, there were almost none: the landing gear was sturdy, and would accept a very high level of cross wind and a correspondingly low level of pilot skill.) The Ercoupe was noticeably faster than its contemporaries and quite comfortable and easy to fly. One nice touch was that the cockpit canopy could be opened in flight (at some speed penalty), producing much the same sensation as driving a convertible with the top down. It was a nice looking, all-aluminum machine, once one got used to its unconventional design. It was precisely true that it would neither stall nor spin. Even so, it was soon found to have a serious fault. It would get into a high rate of descent (or "sink") which could only be stopped by full forward yoke and loss of a considerable amount of height. The usual result was a hard landing and expensive airframe damage. Injuries to the occupants rarely required medication, but the experience was unsettling enough to drive some new pilots out of aviation. Unmarked Ercoupe flying around Fred Weick's goal of eliminating the stall-spin accident sequence was achieved, but the airplane was badly oversold. The high sink rate was never mentioned. In fact many salesmen were themselves surprised by it. The major thrust of the sales effort was "anyone can fly," and cases without end were cited in which pilots who had never had a previous lesson soloed in two hours, or three, or even one. When the postwar airplane sales bubble burst, Engineering and Research Corporation was not alone in disaster, but unlike Beech, Cessna, and Piper, it did not survive. The Ercoupe itself refused to die and went through a series of revivals, with each new group of owners as starry-eyed as the last, certain that they could escape the fate which had overtaken their predecessors. Unfortunately, none of the attempts succeeded, not even the most recent revival by Mooney Aircraft, who bought all rights, tooling, and parts from Alon Aircraft, which had been building a few at a time in Kansas. This time the resurrectors took the approach that the only thing wrong with the Ercoupe was its stall-proof, spin-proof philosophy. The tail was redesigned, using one fin and rudder. Rudder pedals were made standard. 
(A previous field modification had permitted adding rudder controls to the original.) All of the engineering tricks which had made the Ercoupe stall-proof and spin-proof were undone. The Cadet, as the reincarnation was called, no longer looked odd: by now, low wings and tricycle gears had become commonplace, and that double fin was gone. The Cadet flew just like other airplanes, given small differences in handling. It would stall, and it would spin. The attempt failed: the Cadet didn't even show the small spark of life visible in the previous tries. The unfortunate part of all this is that the Ercoupe is really quite a nice small airplane. The freedom from stalls and spins doesn't hurt, and anybody who wants to can have rudder pedals installed. The high sink rate can be avoided, as it is in all other airplanes, by proper pilot training and technique. The one remaining Ercoupe problem is social: it is not thought to be a respectable flying machine. Most of those who have this attitude have never flown one and have no idea of its real assets and liabilities, but that does not lessen their scorn. The Ercoupe is worth looking at, even so. The Great Silver Hope Masquerading under the Ercoupe, Alon, and Mooney labels, the Ercoupe design has been around much longer than most people realize. The Ercoupe was designed to a lofty concept and high level of sophistication, and did exactly what it was designed to do. Its roots go back to the early '30s, when it was popular to believe that there would someday be a mass market for "Everyman's Airplane". Further, it was believed that the great mass market awaited only the appearance of a cheap, easy-to-fly and safe airplane. In 1936 Fred Weick was a young engineer hired by the just-formed ERCO (Engineering and Research Corporation); he is generally regarded as the creator of the legendary 1937 Ercoupe. All through the initial design and testing, wind tunnels were not used at all. The airplane was flown, modifications made to correct deficiencies, then flown again and again until it was certified on April 20, 1938. A placard, which was the first for any airplane, was allowed to be placed proudly on the instrument panel reading: "This aircraft characteristically incapable of spinning" Things looked rosy for the Ercoupe, but then the Second World War came along and production was halted for lack of aluminum. Sadly, just 112 Ercoupes came off the line. After the war, it became evident that there simply wasn't an "Everyman's Airplane Market" and possibly might never be. The Ercoupe is, arguably, the best tested, best designed, and best researched light airplane ever produced. Even today it has few peers, and its only failure was that it was produced for a non-existent market. Look for one at your local airport. ERCO is the "Engineering and Research Corporation", whose first product was the Ercoupe. This was the first tricycle-gear aircraft and was designed by Fred Weick. Fred is famous for many things, including the "takeoff/landing over a 50-foot obstacle" specification. He went on to design the Piper PA-28 Cherokee and others. The first JATO (Jet Assisted Take Off) flight was made with an Ercoupe, which led to the foundation of the Jet Propulsion Laboratory. The Ercoupe, with its distinctive twin-tail design, was originally provided with "coordinated controls", i.e. the rudder was connected to the yoke and yaw correction was automatic - NO RUDDER PEDALS. The steerable nose wheel was connected directly to the yoke - you taxied exactly as you drive your car.
This, and limited elevator travel, contributed to the result that the 'Coupe is "characteristically incapable of spinning"! You can try, but the plane will fly out of an incipient spin. An entirely new category of pilot license was created for the thousands of new pilots who had never seen a rudder pedal. This plane was designed pre WW2 and didn't get into real production till 1945 when thousands were sold through such esteemed aviation outlets as the Men's Department at Macy's!! Ercoupe flying nicelyErcoupe Navy version "Rudder Kits" were available to convert the plane from 2-control ("coordinated") to 3-control ("conventional"). Landing a 2-control 'Coupe is an "interesting" experience!! You crab it into the wind and land that way!! The nose wheel will caster and straighten it out ON THE RUNWAY. Another historical fact: all original Boeing 707 pilots were taught to land in the 'Coupe - the 707 had a similar problem - the low hanging engines meant that you couldn't drop a wing into a crosswind - you had to land them crabbed!! The Ercoupe's gear does not swivel, a common misconception, but the geometry causes the airplane to turn in the direction of forward motion. If you fight this tendency you can ground loop.] Mooney built the last 59 with a "Mooney tail" instead of the distinctive twin tail of all previous production. This, and other changes, created an airplane which could stall and spin with the best but also lost a lot of performance. It was their intention that the M10 Cadet be their "trainer". "Alon" was an interesting bit of history: While Forney was building the 'Coupe, one company which came mighty close to buying the type certificate was Beech!! John Allen (Beech plant manager) and Lee Higdon (Beech accounting manager) felt strongly that Beech should take it on, but Olive Beech got cold feet and said no. So they quit and setup the Allen-Higdon (ALON) company to do it. They were so impressed with the plane that they bought the company!! Alon made a number of speed/power changes to the airplane and reverted to providing rudder pedals as standard, with the 2-control by special order only. They changed from vertically sliding window entry to a sliding canopy. Some people dump on 'Coupes. It's unfair and ignorant criticism, but it keeps the prices down and the secret in the family!! If you ever have to opportunity to fly a 'Coupe - try it!! The Ercoupe has climb and cruise performance very similar to the performance of a Cessna 150 - but it drops like a rock when the power goes off. The best thing about a 'Coupe is you can fly it with the sliding windows down. Construction Notes! 2 View Ercoupe Looking at the front view (above), notice that the Ercoupe has a very distinctive forward fuselage shape that narrows toward the bottom. Curiously, the reason for this shape was to accommodate the ERCO inverted inline engine that was custom built for the Ercoupe. The Continental A-65 was ultimately used and the fuselage remained unchanged. Refer to the typical cross-section. Yes, the nose section IS larger to permit engine cooling air to escape. Keep dihedral in mind as you glue the wing center section in place. It's hard to add it as an after thought later. I mean bending the wings up is really dumb. Carefully curve and bend the wing fillets out BEFORE gluing the wings to the fuselage. A pencil is a good diameter over which to shape the fillets. Rocket-Assist Takeoff On Aug. 12, 1941, the first Air Corps rocket-assist takeoff was made by a Wright Field test pilot, Capt. 
Homer Boushey, using a small civilian-type Ercoupe airplane. Subsequent refinements of this technique were made for assisting heavily-loaded airplanes in taking off from limited space. This technique is still used whenever needed. Takeoff of Ercoupe airplane in much less than normal distance due to firing of rockets attached under its wing. For comparison, the light plane in the foreground although equipped with an engine of approximately the same horsepower as the Ercoupe, had just lifted off the ground at the instant the photo was taken. 2 Views of the Ercoupe Ercoupe Cockpit Cockpit of the Erco Ercoupe. Erco Ercoupe Factory Erco Ercoupe Factory during its post war heyday. Ercoupe Cutaway Specifications for the Ercoupe 3 View of the Erco Ercoupe Crew: 1 Capacity: 1 passenger Length: 20 ft 9 in Wingspan: 30 ft Height: 5 ft 11 in Wing area: 142.6 ft² Empty weight: 749 lb Useful load: 511 lb Max takeoff weight: 1,260 lb Powerplant: 1× flat-4 engine, 75 hp at 2,300 rpm Never exceed speed: 144 mph Maximum speed: 110 mph Cruise speed: 95 mph Stall speed: 48 mph Range: 300 mi Service ceiling: 13,000 ft Rate of climb: 550 ft/min Wing loading: 8.83 lb/ft² Power/mass: 0.13 hp/lb Ercoupe Callout A: The Ercoupe twin tail was chosen for its 'anti' Spin characteristics B: The strong, all aluminum fuselage was easy and inexpensive to build. C: The full, slide back Ercoupe canopy afforded perfect visibility over the low positioned wings D: Very rugged landing gear made flying out of small rough fields possible. Ecroupe Crash On April 11, 2009, at 1450 central daylight time, an Engineering and Research 415C (Ercoupe), N87384, was destroyed by a post crash fire after it impacted terrain about one mile north of the Woodlake Airport (IS65), located in Sandwich, Illinois. The sport pilot and passenger received fatal injuries. Meteorological conditions prevailed at the time of the accident, and no flight plan was filed. Aircoupe (sic)
Home furnishings retail is an important business. Source: Restoration Hardware. The housing industry plays a major role in driving the U.S. economy. Homebuilders provide new homes for homeowners, while building materials companies prepare the essential components of those new homes before they're built. An entire subsector of the finance industry deals with loans for home construction and mortgages for home purchases. And after homebuyers close on their purchases and move in, they typically need home furnishings, such as furniture, electronics, appliances, household gadgets, and other accessories in order to complete their house. A host of home furnishings companies seek to meet the demand for the goods that help you make your house a home. Let's take a closer look at the home furnishings industry and its opportunities for investors. What is the home furnishings industry? The home furnishings industry most typically refers to companies that specialize in furniture and decorative accessories. From a broader perspective, department stores often have a wide range of furniture to complement their offerings of appliances and electronics, and several big-box electronics retailers have added appliances to cater to new homebuyers. But even though many homebuyers see television home-theater systems, refrigerators, and washer/dryer sets as essential purchases, those areas are treated as separate industry groups. That leaves home furnishings companies to focus on bedding, dining room tables and chairs, living room sets, and accessories ranging from lamps to gourmet coffee makers as their staples. Different companies focus on various segments of the home furnishings industry. Companies like Bed Bath & Beyond and Williams-Sonoma offer one-stop shopping for a large selection of household items, although their furniture selections are often somewhat limited. By contrast, specialists like Ethan Allen Interiors focus on producing furniture sets throughout the home. On the bedding side, Tempur Sealy and Select Comfort make mattresses and related bedroom furniture sets, along with pillows and other accessories. Image source: Tempur Sealy. How big is the home furnishings industry? Home furnishings have a larger impact on the U.S. economy than you might expect. Nearly 450,000 employees in the U.S. work in the home furnishings industry, according to the latest figures from the Bureau of Labor Statistics, and almost half of them hold jobs as retail salespeople. In addition, the home furnishings industry employs managers to oversee salespeople as well as workers to stock shelves and transport goods from manufacturers to retail stores. As you'd expect, the size of the home furnishings industry has risen and fallen with the prospects of the broader housing market. In the mid-2000s, furniture and home furnishings store revenue reached peak levels above $110 billion, according to figures from the U.S. Census Bureau. But the end of the housing boom led to a dramatic contraction in overall industry sales, and home furnishings revenue only climbed back above the $100 billion mark in 2013. (Chart: furniture and home furnishings store sales in the United States from 1992 to 2013, in billions of U.S. dollars. Source: Statista.) How does the home furnishings industry work?
Like most retail businesses, the home furnishings industry involves manufacturers that make the products consumers want, as well as intermediaries to get those products into the hands of retail stores, and retailers that make the final sales to customers. Most of the major companies in the home furnishings sector are retail establishments, so they rely on homeowners and other consumer buyers to drive sales. Furniture manufacturers, on the other hand, have to cater to their direct retail customers in order to fulfill their function as suppliers, while also keeping in mind that they ultimately serve the consumers who buy their products. Two things that distinguish parts of the home furnishings industry from other retail businesses, though, are the high ticket prices of furniture and other items as well as their large physical size. The logistical difficulties involved with those items and the financial challenge consumers face when considering purchases make the home furnishings industry a particularly competitive environment in many respects. Wsm Bed Source: Williams-Sonoma. What drives the home furnishings industry? The most important driver of home furnishings sales is the housing market. When people are moving in and out of new homes, they often take the opportunity to buy new home furnishings or upgrade their existing furniture and accessories, driving sales higher. During times of economic hardship, however, more people stay put in their existing homes, and they don't have the disposable income to finance major purchases of furniture and other high-ticket items. The rise of Internet retail has also had a major impact on home furnishings. For smaller household goods like kitchen appliances, online retailers have posed a substantial competitive threat, undercutting home furnishings specialists and forcing them to establish their own e-commerce presence in order to counter attempts to take away their market share. For furniture and other bulky items, physical stores have more of an advantage against online retailers, but innovative retailers continue to look for ways to make even sales of larger items more efficient and logistically feasible. That could threaten the high margins some manufacturers currently enjoy on those items. The home furnishings industry is inexorably linked to the level of housing activity in the market. Investors need to consider the current state of the housing cycle before investing in the sector, especially after periods of strong performance in housing, or else they risk taking a hit in the next cyclical downturn for the industry.
Proprietary Software Is Often Malware Proprietary software, also called nonfree software, means software that doesn't respect users' freedom and community. A proprietary program puts its developer or owner in a position of power over its users. This power is in itself an injustice. Power corrupts; the proprietary program's developer is tempted to design the program to mistreat its users. (Software whose functioning mistreats the user is called malware.) Of course, the developer usually does not do this out of malice, but rather to profit more at the users' expense. That does not make it any less nasty or more legitimate. Yielding to that temptation has become ever more frequent; nowadays it is standard practice. Modern proprietary software is typically a way to be had.
Conceptual Physics (12th Edition) Published by Addison-Wesley ISBN 10: 0321909100 ISBN 13: 978-0-32190-910-7 Chapter 29 - Think and Explain: 29 The sun is much farther away from us, compared to the lamp. Work Step by Step The sun puts out spherical wavefronts just as the nearby lamp does (see Figure 29.3), but we are so far away that by the time it gets to us, the expanding spherical wave can be considered to be a plane wave. In an analogous way, a sufficiently small area of Earth's spherical surface can be considered to be flat. The lamp is close enough that the curvature of its emitted wavefronts cannot be ignored.
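A rough estimate (not part of the printed answer, only an illustration of the scale involved): the bulge, or sagitta, of a spherical wavefront of radius R across a span w is approximately s ≈ w²/(8R). For sunlight, R ≈ 1.5 × 10^11 m, so across a 1 m wide window the wavefront departs from a perfect plane by only about 8 × 10^-13 m, far less than a wavelength of visible light. For a lamp 1 m away, the same 1 m span gives s of roughly 0.13 m, so the curvature of its wavefronts clearly cannot be ignored.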
An Introduction to Mythology Page: 8 [Pg 21] but they explain, or attempt to explain, primitive scientific notions as well.[18] The desire to know the 'reason why' early creates a thirst for knowledge, an intellectual appetite. "When the attention of a man in the myth-making stage of intellect is drawn to any phenomenon or custom which has to him no obvious reason, he invents and tells a story to account for it."[19] The character of most primitive myths amply justifies this statement. They are mostly explanations of intellectual difficulties, answers to such questions as, What is the origin of or reason for this or that phenomenon or custom? How came the world and man to be formed as they are? In what manner were the heavenly bodies so placed and directed in their courses? Why is the lily white, the robin's breast splashed with red? How came into force this sacrificial custom, this especial ritualistic attitude, the detail of this rite? The early replies to these questions partake not only of the nature of myth, but of science—primitive science, but science nevertheless—for one of the first functions of science is to enlighten man concerning the nature of the objects and forces by which he finds himself surrounded, and their causes and effects. These replies are none the less scientific because they take the shape of stories. Their very existence proves that the above questions, to clear up which they were invented, were asked. They cannot be accounted for without the previous existence of these questions. Mythology is the savage's science, his manner of explaining the universe in which he lives and moves. Says Lang: "They frame their stories generally in harmony with their general theory of things, by what may be called 'savage metaphysics.'" Of course they did not think on the lines of a well-informed modern scholar. Müller remarks in an illuminating passage: [Pg 22] "Early man not only did not think as we think, but did not think as we suppose he ought to have thought." One of the chief differences between the outlook of the primitive savage and that of civilized man is the great extension in the mind of the former of the theory of personality, an outlook we have already called 'animism.' Everything possesses a 'soul,' or at any rate will-power, in the judgment of the savage. But not only are sun, sky, river, lightning, beast, tree, persons among primitive or backward peoples; they are savage persons. Research and travel combine to prove that earliest man and the lowest savages cannot be found without myths, which, as we have seen, are both religion and science. The first recognized stage in man's mental experience is animism, so that the earliest myths must have been 'animistic.'[20] Roughly, animism is the belief that everything has a soul or at least a personality, but no race has yet been discovered possessing purely animistic beliefs. Even the lowest races we know have developed these considerably, and so we are only acquainted with animism in its pure form theoretically,[21] as a phase of religious experience through which man must at one time have passed. It is, in fact, a fossil faith. But just as fossil animals and plants have their living representatives to-day, so do ideas and conceptions representing this petrified form of religion and science still flourish in our present-day superstitions and our present-day faiths. Animistic myths naturally show primitive ideas regarding the soul.
Animism will be dealt with more fully hereafter, but[Pg 23] in this introductory sketch we will cite one or two examples of animistic myth to illustrate what was, so far as we know, the earliest type of myth. Stories are found telling of journeys to the spirit land, of talking animals, of men metamorphosed into animals and trees, and these are all animistic or originate in animistic belief.[22] Modern folk-tales containing such stories possess a very great antiquity, or are merely very old myths partly obscured by a veneer of modernity. Spirit stories which have obviously a primitive setting or atmosphere are almost certainly animistic. Thus tales which describe the soul as a bird or a bee, flitting about when the body is asleep, are either direct relics of an animistic age, or have been inspired by earlier animistic stories handed down from that age. The tales of spirit journeys to the Otherworld, the provision of implements, weapons, shoes, and so forth, placed in the grave to assist the soul in its progress to the Land of Shadows, invariably point to an animistic stage of belief—the belief in a separable 'soul,' in an entity entirely different and apart from the 'tenement of clay' that has perished. There are not wanting authorities of discernment who believe that even this early phase was not the primitive phase in the religious experience of man. Of these the most clear-sighted and perspicuous in argument is Dr Marett, reader in anthropology at Oxford University. In a pregnant chapter-preface in his highly suggestive book, The Threshold of Religion, Dr Marett says: "Psychologically, religion requires more than thought, namely, feeling and will as well; and may manifest itself on its emotional side, even when ideation is vague. The question, then, is, whether apart from ideas of spirit, ghost, soul, and the like, and before such ideas have become dominant factors[Pg 24] in the constituent experience, a rudimentary religion can exist. It will suffice to prove that supernaturalism, the attitude of mind dictated by awe of the mysterious, which provides religion with its raw material, may exist apart from animism, and, further, may provide a basis on which an animistic doctrine is subsequently constructed. Objects towards which awe is felt may be termed powers." He proceeds to say that startling manifestations of nature may be treated as 'powers' without any assumption of spiritual intervention, that certain Australian supreme beings appear to have evolved from the bull-roarer,[23] and that the dead inspire awe. This he calls 'supernaturalism,' and regards it as a phase preceding animism. Very closely allied to and coexistent with animism, and not to be very clearly distinguished from it, is fetishism. This word is derived from the Portuguese feitiço, a charm, 'something made by art,' and is applied to any object, large or small, natural or artificial, regarded as possessing consciousness, volition, and supernatural qualities, especially magic power.[24] Briefly and roughly, the fetish is an object which the savage all over the world, in Africa, Asia, America, Australia, and, anciently, in Europe, believes to be inhabited by a spirit or supernatural being. Trees, water, stones, are in the 'animistic' phase considered as the homes of such spirits, which, the savage thinks, are often forced to quit their dwelling-places because they are under the spell or potent enchantment of a more powerful being. 
The fetish may be a bone, a stone, a bundle of[Pg 25] feathers, a fossil, a necklace of shells, or any object of peculiar shape or appearance. Into this object the medicine-man may lure the wandering or banished spirit, which henceforth becomes his servant; or, again, the spirit may of its own will take up its residence there. It is not clear whether, once in residence or imprisonment, the spirit can quit the fetish, but specific instances would point to the belief that it could do so if permitted by its 'master'[25] We must discriminate sharply between a fetish-spirit and a god, although the fetish may develop into a godling or god. The basic difference between the fetish and the god is that whereas the god is the patron and is invoked by prayer, the fetish is a spirit subservient to an individual owner or tribe, and if it would gain the state of godhead it must do so by long or marvellous service as a luck-bringer. Offerings may be made to a fetish; it may even be invoked by prayer or spell; but on the other hand it may be severely castigated if it fail to respond to its owner's desires. Instances of the castigation of gods proper are of rare occurrence, and could scarcely happen when a deity was in the full flush of godhead, unless, indeed, the assault were directed by an alien hand.[26] We have seen that the ancient Greeks had in their temples stones representing 'nameless gods' who seem to have been of fetish origin. Thus a fetish may almost seem an idol, and the line of demarcation between the great fetish and the idol is slender, the great fetish being a link between the smaller fetish and the complete god.
Last updated: Oct 18, 2011 Getty Images Wendy Foulds Mathes, PhD, is trying to teach rats to binge on Double Stuf Oreo cookies. You might think overstuffing yourself with yummy cookies would come naturally to a rodent, but it doesn't. In fact, Foulds Mathes, a research assistant professor of psychiatry at the University of North Carolina School of Medicine, in Chapel Hill, and her colleagues are working hard to create behavior in rats that comes all too easily to some humans: binge eating. They control when the rats are given cookies, and then look for changes in the brain that might indicate that foods high in fat and sugar affect the brains' reward systems in a similar way to drugs or alcohol. It's a serious question. People with bulimia or the condition known as binge eating disorder have an overwhelming, uncontrollable urge to binge on food in a way that seems similar to people with an addiction, experts say. In addition, they often struggle to change their behavior—which can cause potentially life-threatening health problems such as diabetes, hypertension, and heart arrhythmias. "Many people have noticed that when people with eating disorders—bulimia in general—talk about the foods they binge on, it can sound a lot like how people with substance abuse problems talk about abusing drugs," says B. Timothy Walsh, MD, an eating-disorder researcher and professor of psychiatry at Columbia University Medical Center, in New York City. The behaviors often go hand in hand, in fact. The American Psychological Association estimates that about 5 million Americans suffer from a diagnosable eating disorder. And according to a 2007 analysis of government data, roughly one-third and one-quarter of people with bulimia and binge-eating disorder, respectively, will also have an alcohol or drug problem at some point in their lives. "It's not uncommon to have both problems," says Richard J. Frances, MD, a clinical professor of psychiatry at the New York University Langone Medical Center, in New York City, who works with people with both types of disorders. "The way people have trouble stopping, and the addictive aspect of both kinds of disorders—and the compulsivity—are similarities." Feel-good food? Foulds Mathes's research in rats is paying off. She and her colleagues have seen some brain changes, such as the release of neurotransmitters, in rats that binge on high-fat sugary treats that they suspect are similar to those in rats dependent on drugs or alcohol. But you can only learn so much about binge eating from rodents, who aren't susceptible to peer pressure or other psychological and cultural factors thought to play a role in eating disorders in humans. "You can't ask a rat how it's feeling," Foulds Mathes says. That's where the human studies come in handy. Researchers have found that, similar to what happens in rodents, chemicals such as dopamine are released in specific areas of the brain involved in reward processing when you eat something you find enjoyable. And other studies have found high-calorie foods such as chocolate milkshakes activate "pleasure center" regions of the brain. But not everyone who encounters a chocolate milkshake feels compelled to consume 20 of them. What triggers this compulsive behavior? Dr. Walsh and his team of researchers at the New York State Psychiatric Institute of Columbia University Medical Center have been studying patients with eating disorders, such as bulimia, for about 30 years. Their research suggests these reward pathways may be under-stimulated. 
In other words, people who start binging may begin a process that makes it harder for them to get the same reward from food, so they keep eating. Allegra Broft, MD, a member of Dr. Walsh's team, used a type of brain scan known as positron emission tomography (PET), and found decreased levels of dopamine receptors in the brains of people with eating disorders. These were similar to the decreased levels seen in people with drug addictions, Dr. Broft says, but on a smaller scale. Dr. Walsh says that this smaller magnitude is probably due to how the reward pathway is activated. Drugs such as cocaine, crack, and heroin "pack a whomp," he says. "That's why they're abused—they're very potent drugs. So they will have a bigger effect on changes in brain chemistry in reward areas than natural rewards like tasty food." In addition to dopamine, other neurotransmitters such as serotonin are likely to be involved in eating disorders, Dr. Walsh says. The future of eating-disorder treatment? The addiction analogy isn't perfect. The brain mechanisms associated with eating disorders and addiction don't exactly overlap, and a binge eater or bulimic can't quit food cold turkey the way an alcoholic or a drug addict can sober up. Still, greater understanding about the brain networks that underlie both addiction and eating disorders could have important implications for treatment. Experts tend to avoid the term "addiction" when talking about eating disorders because treatment approaches for the two conditions are so different, Dr. Walsh says. Although addicts try never to use or consume drugs or alcohol again, people with bulimia must learn how to have a more normal relationship with food, and to eat for nutrition. "You can get over bulimia and live comfortably with foods you used to have problems with," Dr. Walsh says. Both cognitive behavioral therapy and antidepressants like Prozac (fluoxetine) can help people with bulimia, although antidepressants are not very useful for drug problems such as cocaine abuse, he adds. Dr. Broft and Dr. Walsh hope their research ultimately finds more powerful cures for eating disorders, and perhaps one day prevents them. Not all people with eating disorders respond to treatment, and some respond only partly. "I think it's very important to continue to pursue the neurobiology of addictions to substances and the neurobiology of eating disorders, and really try to understand how the neurobiological systems are affected," Dr. Walsh says. "What's similar and what's different—that's the key. It would be very helpful in understanding and treatment if we understood those in more detail."
Hinduism Today Magazine, September/October 2001
Facing Life's Tests With Wisdom
Living by the ancient guidance of the yamas and niyamas can help us brave life's challenges
When we are children, we run freely, because we have no great subconscious burdens to carry. Very little has happened to us. Of course, our parents and religious institutions try to prepare us for life's tests. But because the conscious mind of a child doesn't know any better, it generally does not accept the preparation without experience, and life begins the waking up to the material world, creating situations about us, magnificent opportunities for failing these tests. If we do not fail, we know that we have at some prior time learned the lesson inherent in the experience. Experience gives us a bit of wisdom when we really face ourselves and discover the meaning of failure and success. Failure is just education. But you shouldn't fail once you know the law. There have been many systems and principles of ethics and morality established by various world teachers down through the ages. All of these have had only one common goal: to provide for man living on the planet Earth a guidepost for his thought and action so that his consciousness, his awareness, may evolve to the realization of life's highest goals and purposes. The ancient yoga systems provided a few simple yamas and niyamas for religious observance, defining how all people should live. The yamas, or restraints, provide a basic system of discipline for the instinctive mind. The niyamas, or positive observances, are the affirming, life-giving actions and disciplines. Life offers you an opportunity. As the Western theologian speaks of sins of omission as well as sins of commission, so we find that life offers us an opportunity to break the law as indicated by the yamas, as well as to omit the observances of the niyamas. If we take the opportunity to live out of tune with Hindu dharma, a reaction is built in the subconscious mind. This reaction stays with us and recreates the physical and astral body accordingly. Have you ever known a friend who reacted terribly to an experience in life and as a result became so changed mentally and physically that you hardly recognized him? Our external conscious mind has a habit of not being able to take the meaning out of life's most evident lessons. It is our teaching not to react to life's experiences, but to understand them and in the understanding to free ourselves from the impact of these experiences, realizing the Self within. The true Self is only realized when you gain a subconscious control over your mind by ceasing to react to your experiences so that you can concentrate your mind fully, experience first meditation and contemplation, then samadhi, or Self Realization. First we must face our subconscious. There are many amusing ways in which people go about facing themselves. Some sit down to think things over, turning out the light of understanding. They let their minds wander, accomplishing nothing. Let me suggest to you a better way. We carry with us in our instinctive nature basic tendencies to break these divine laws, to undergo the experiences that will create reactive conditions until we sit ourselves down and start to unravel the mess. If we are still reacting to our experiences, we are only starting on the yoga path to enlightenment. As soon as we cease to react, we have for the first time the vision of the inner light. What do we mean by this word light?
We mean light literally, not metaphysically or symbolically, but light, just as you see the light of the sun or a light emitted by a bulb. You will see light first at the top of the head, then throughout the body. An openness of mind occurs, and great peace. As a seeker gazes upon his inner light in contemplation, he continues the process of purifying the subconscious mind. As soon as that first yoga awakening comes to you, your whole nature begins to change. You have a foundation on which to continue. The yamas and the niyamas are the foundation.
Facing Life's Tests: Two feet planted firmly on the ground, the experienced devotee graciously greets the return of his own self-created karma, paving the way to its resolution rather than its ramification.
The Yamas and Niyamas
From the holy Vedas we have assembled here ten yamas and ten niyamas, a simple statement of the ancient and beautiful laws of life. The ten yamas are:
1) Noninjury, ahimsa: Not harming others by thought, word, or deed.
2) Truthfulness, satya: Refraining from lying and betraying promises.
3) Nonstealing, asteya: Neither stealing, nor coveting nor entering into debt.
4) Divine conduct, brahmacharya: Controlling lust by remaining celibate when single, leading to faithfulness in marriage.
5) Patience, kshama: Restraining intolerance with people and impatience with circumstances.
6) Steadfastness, dhriti: Overcoming nonperseverance, fear, indecision and changeableness.
7) Compassion, daya: Conquering callous, cruel and insensitive feelings toward all beings.
8) Honesty, straightforwardness, arjava: Renouncing deception and wrongdoing.
9) Moderate appetite, mitahara: Neither eating too much nor consuming meat, fish, fowl or eggs.
10) Purity, saucha: Avoiding impurity in body, mind and speech.
The ten niyamas are:
1) Remorse, hri: Being modest and showing shame for misdeeds.
2) Contentment, santosha: Seeking joy and serenity in life.
3) Giving, dana: Tithing and giving generously without thought of reward.
4) Faith, astikya: Believing firmly in God, Gods, guru and the path to enlightenment.
5) Worship of the Lord, Isvarapujana: The cultivation of devotion through daily worship and meditation.
6) Scriptural listening, siddhanta sravana: Studying the teachings and listening to the wise of one's lineage.
7) Cognition, mati: Developing a spiritual will and intellect with the guru's guidance.
8) Sacred vows, vrata: Fulfilling religious vows, rules and observances faithfully.
9) Recitation, japa: Chanting mantras daily.
10) Austerity, tapas: Performing sadhana, penance, tapas and sacrifice.
Re: RE:Hyb: Cytoplasmic inheritance was disease resistance [Walter]
That peanut phenomenon is intriguing, Walter. I haven't the foggiest idea how it might be explained either. All those extra-nuclear structures are derived only from the mother. If some effect disappears over a couple of generations, you are quite right in saying it cannot be because of some condition in the mitochondria or other structures, as they have a relatively slow and fairly constant rate of mutation. Three generations wouldn't be likely to show much of any of those effects, which usually are so subtle as to defy detection without DNA analysis. If you ever hear of a likely explanation, I'd be interested to know about it. One might ask, "What have peanuts to do with irises?"--but I would answer--most all biological processes are carried out in almost exactly the same way in phyla and genera widely separated--all the way from legumes to irids--and a long way on each side of either. Anthocyanin production in potatoes, tobacco or tomato leaves is in response to UV radiation, and protects the rather delicate DNA in the cell from UV light--which has enough energy to break DNA bonds. Irises produce anthocyanins in the leaves too--but especially in the flowers. This appears to be a response to a parallel and reciprocal (a pair of evolving events that have a feed-back loop between them) development in insect color vision and flower color. Anthocyanin pigments attract insects and insects pollinate those flowers. Around and around the process proceeds. The chemical chain of events all the way from acetic acid to delphinidin (or whatever anthocyanins are produced) is exactly the same in tomato leaves as it is in flowers with the same pigment. The biologists refer to this kind of parallelism as "the process is conserved across phyla" or whatever the range may be. There's only one way to make soup, and every household uses the same recipe.
Neil Mogensen, z 7, western NC mountains
Imperfect Triangle
"Learning undigested by thought is labor lost. Thought unassisted by learning is perilous," reads the ever-timely Confucian message chalked onto the board of a dingy black township high school in South Africa, 1985, in Athol Fugard's searing 1989 polemic My Children! My Africa!. By play's end, when the fictional uprisings mirror the dramatic eruptions in Sharpeville and Soweto, Fugard -- the theater's most learned, thoughtful singer of apartheid's wrongs -- etches the crucial maxim indelibly into memory. Based on a brief newspaper account of the death of a black teacher during racial unrest near Port Elizabeth, the play depicts the burgeoning friendship between a white schoolgirl and a black schoolboy brought together for academic contests by a paternalistic teacher. At first, despite their cultural differences, the teenagers get along well. Isabel Dyson, a prep-school standout who has never previously ventured into a township, is invited to debate Thami Mbikwana, prized pupil of jovial Mr. M. When Mr. M., thrilled that his precocious young scholars attend to the content of the words and not the color of the faces, proposes that they apply to be a team he'll coach in a national literary competition, Isabel and Thami readily, enthusiastically agree. But this meeting of the minds collapses violently when racial unrest and school boycotts force the comrades in scholastic arms to choose sides. Thami, impatient and disgusted with a country "that doesn't allow the majority of its people any dreams at all," takes up the cause of active protest. What good is it to learn, the young revolutionary asks, when education doesn't lead 25 million people to their rightful shares? Mr. M., "an old-fashioned traditionalist," pleads for reason. "If the struggle needs weapons," he urges, "give it words." Thami's instincts tell him to gather in the streets with rocks at the ready; Mr. M.'s to come to school and work within the system. Isabel, her privileged white world crashing down around her, is paralyzed, caught between the polarizing opposites her new friends represent. Ridden with the guilt and good intentions of white liberalism, she doesn't know what to think or feel anymore. Three points that will never become a triangle, these characters are inevitably divergent, even in the face of death. The schoolroom debate that becomes life-and-death gives considerable dramatic and metaphoric tension to My Children! My Africa!, a worthy play, if not Fugard's most accomplished. It lacks the intimacy of "Master Harold" ... and the Boys, A Lesson from Aloes, The Road to Mecca and Blood Knot because it has characters who are completely static; from animated opening to knolling ending, they state and restate their stances, never changing or enhancing their positions, despite their erudition. Nor do they ever talk about anything other than political immediacies, so their relationships are never allowed to deepen or complicate -- or seem real. Perhaps because they can't fully interact, Fugard attempts to realize the characters through introspective soliloquies, a technique which becomes distractingly predictable, repeatedly pulling the audience outside the action. The Houston premiere of My Children! My Africa!, at Theater LaB, takes this good but troubled play and makes it better than the text itself.
Director Alex Allen Morris (a member of the Alley and Ensemble companies) begins the evening with friendly, spirited competition, then tightens the strain gradually, choking off all the comfortable air until neither the characters nor the audience can breathe deeply in the shock of events. Though Fugard draws the battle lines by the end of the first act, Morris' firm grasp makes the social conflicts resonate deeply into the second. The three poised performers are also superb (as are their accents, coached by Deborah Kinghorn). Adrian Cardell Porter explodes as Thami, whose polite, obedient exterior belies his pent-up rage. Rebecca Harris is utterly charming as Isabel, an engaged listener with an interested smile and direct delivery communicating a self-assurance that serves her well, until once-remote events cause her to lose her ideological bearings. Ray Anthony Walker finds energy and passion in the cheery Mr. M., an educator desperately wanting to feed young people with hope, even at the risk of alienating them. At one point, Mr. M. confides another Confucian proverb: that he can do whatever his heart prompts without transgressing what is right. Even in their single-mindedness, all the characters possess this flawed nobility, for they act out of concern for their people. The cast and crew of Theater LaB give their people a night to remember. My Children! My Africa! runs through April 23 at Theater LaB, 1706 Alamo, 868-7516.