Alwin Nikolais, American dancer and choreographer

Alwin Nikolais (born November 25, 1910/1912?, Southington, Connecticut, U.S.—died May 8, 1993, New York, N.Y.) was an American choreographer, composer, and designer whose abstract dances combined motion with varied technical effects and a complete freedom from established techniques and patterns.

Initially a silent-film accompanist and puppeteer, Nikolais began his study of dance in about 1935 with Truda Kaschmann, a former student of the modern dancer Mary Wigman, in order to understand Wigman's use of percussion accompaniment. In 1937 he founded a dance school and company in Hartford, Connecticut, and was director of the dance department of the Hartt School of Music (now part of the University of Hartford) from 1940 to 1942 and from 1946 to 1949. After serving in World War II, Nikolais resumed dance studies with Hanya Holm and became her assistant. In 1948 he joined the Henry Street Settlement in New York City and founded its school of modern dance; the following year he became artistic director of its playhouse. The Nikolais Dance Theater (originally called the Playhouse Dance Company) was formed in 1951. In 1953 the company presented Nikolais's first major work, Masks, Props, and Mobiles, in which the dancers were wrapped in stretch fabric to create unusual, fanciful shapes. In later works—such as Kaleidoscope (1956), Allegory (1959), Totem (1960), and Imago (1963)—Nikolais continued experiments in what he called the basic art of the theatre: an integration of motion, sound, shape, and colour, each given relatively equal emphasis. His later works include Tent (1968), Scenario (1971), Guignol (1977), Count Down (1979), and Talisman (1981). Nikolais frequently composed electronic scores for these productions.

Although Nikolais's choreography was sometimes criticized as "dehumanizing," he maintained instead that it was liberating. He asserted that depersonalizing his dancers relieved them of their own forms and hence allowed them to identify with whatever they portrayed. Nikolais was also noted for advancing the related concept of "decentralization," in which the focal point could be anywhere on the dancer's body or even outside the body. This was a departure from the traditional view that the "centre" of focus was the solar plexus. These theories were developed under Hanya Holm and were displayed in such works as Aviary, A Ceremony for Bird People (1978).

During the 1970s the Nikolais group toured widely abroad. In 1978 the French Ministry of Culture, together with the city of Angers, subsidized the new National Centre of Contemporary Dance at Angers, a Nikolais school and company that made its debut in November 1979. Nikolais also made films of his works, as well as broadcasts on American and British television.

Learn more in these related articles:

...appear compressed, even fragmented, while a clearly lighted, open space may make the movement appear unconfined. Two choreographers who had been most innovative in their use of set and lighting were Alwin Nikolais and Merce Cunningham. The former has used props, lighting, and costumes to create a world of strange, often inhuman shapes—as in his Sanctum (1964)....
In stagecraft: As modern dance evolved, its rapid rhythms and pace offered the costume designer new challenges and scope for original work. The productions of Merce Cunningham and Alwin Nikolais in New York City presented unique shapes that attempted to express the exploration of time and space. Nikolais made his costumes part of a total stage design, a theatrical abstraction of the way he saw humankind, as...

Innovative contributions to lighting and the use of projections were also made in American dance during the second half of the 20th century. Alwin Nikolais made very original use of dancers, costumes, light, and projections to form moving geometric and abstract designs. At times, the moving bodies of the dancers in his productions became the screen for the projections. Robert Joffrey's...
A Champion of Liberty

This article appeared on April 20, 2006.

On April 20, 2006, Mart Laar, the former Prime Minister of Estonia, became the third recipient of the Milton Friedman Prize for Advancing Liberty. The Friedman Prize is awarded every two years to an individual who has made a significant contribution to the advancement of human freedom. It comes with $500,000 in prize money. The Cato Institute, which awards the Friedman Prize, is a non-profit public policy research foundation headquartered in Washington, D.C. The winner of the Friedman Prize is selected by an international selection committee that this year included Anne Applebaum of the Washington Post; Fareed Zakaria of Newsweek International; Francisco Flores, former President of El Salvador; Fred Smith, chairman of the Federal Express Corporation; and Rose Friedman.

The previous winners of the Friedman Prize were Peter Bauer, a British economist, for his pioneering work in development economics, and Hernando de Soto, a Peruvian economist, for his work on the importance of property rights in helping the poor obtain access to capital. The two economists helped to create a theoretical basis for applying market principles to the fight against global poverty. They showed that a free market, characterized by trade openness, limited state intervention in the economy, and a strong emphasis on property rights and the rule of law, was the best available mechanism for alleviating global poverty.

Mart Laar put those theoretical principles into practice to the benefit of his countrymen. According to the Economic Freedom of the World: 2005 Annual Report, which is published by the Fraser Institute in Canada, Estonia is the ninth economically freest country in the world. Today, many people find it difficult to remember the days of the Soviet Union, when the Estonian economy was completely dominated by the state and marked by endless lines and shortages. Mart Laar replaced the "dead hand" of the government with Adam Smith's "invisible hand." His government eliminated import tariffs (a decision that was partly reversed by Estonia's membership of the European Union) and established a flat income tax. Corporate taxes on reinvested profits fell to zero, and a currency board was established to combat inflation. The government also undertook extensive privatization of state companies. Though Estonia experienced a sharp but short recession, one shared by all transitional economies, by 1995 the economy was roaring again. According to the World Bank, between 1995 and 2004 Estonia's per capita gross domestic product (GDP) grew at a compounded average annual rate of 6.6 percent. During that decade, Estonia's GDP per capita adjusted for purchasing power parity rose from $6,847 to $12,773 in constant 2000 dollars, an increase of 86.5 percent. Estonia's sustained, high growth rate was among the region's highest and set the country on course to join the rest of the developed world.

Mart Laar's premiership also marked Estonia's return to the democratic rule that the country had enjoyed during a brief period of independence between the two World Wars. It did not have to be that way. In Belarus, Alexander Lukashenko's assumption of power in 1994 marked that country's return to communist dictatorship. Ukraine had to wait 13 years after her declaration of independence in 1991 before becoming democratic, and Russia has slid back into autocracy under the leadership of Vladimir Putin.
Mart Laar's impact was felt beyond the influence he had on the lives of his fellow countrymen. Other post-communist countries learned from Estonia's reforms and imitated them. Estonia's successful adoption of the flat tax led the way for Russia, Slovakia, Ukraine, and others. Estonia's unilateral trade liberalization is a continued inspiration for other countries, including, most recently, Georgia. There are also those who feel that the presence of a market-liberal Estonia in the European Union will lead the EU away from her socialist policies. Though I am not convinced that Estonia's market-liberalism is safe in the EU, let alone that Estonia will be able to change the policy debate in Brussels, I certainly hope that Mart Laar's optimism about the EU's future evolution will be justified.

Mart Laar was a superb choice for the 2006 Friedman Prize. I am very pleased that my employer, the Cato Institute, is able to honor him in that way, and I hope that the fire of liberty that Mart Laar and his colleagues set alight in Estonia will continue to spread to the rest of the world.

Marian L. Tupy is assistant director of the Project on Global Economic Liberty, specializing in the study of Europe and sub-Saharan Africa.
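As an aside on the arithmetic: the growth figures quoted above are easy to verify. Here is a minimal Python sketch; the dollar figures come from the article itself, while the choice of nine versus ten compounding periods is my assumption, since the article does not state the World Bank's convention.

from math import isclose

# Figures quoted in the article (GDP per capita, PPP, constant 2000 dollars).
gdp_1995 = 6847.0
gdp_2004 = 12773.0

total_increase = gdp_2004 / gdp_1995 - 1
print(f"Total increase: {total_increase:.1%}")   # -> 86.5%, as stated

# The implied compound annual rate depends on whether one counts nine
# year-over-year intervals (1995 -> 2004) or ten calendar years.
for periods in (9, 10):
    cagr = (gdp_2004 / gdp_1995) ** (1 / periods) - 1
    print(f"CAGR over {periods} periods: {cagr:.1%}")

# Nine intervals give roughly 7.2%, ten give roughly 6.4%; the article's
# 6.6% figure falls within this range (the World Bank applies its own
# compounding convention to the underlying annual series).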
Q1: What is an electric vehicle?
Electric vehicles use batteries as their primary energy source and recharge from an external grid. For more details, please refer to the "What is it" section.

Q2: What are the benefits of electric vehicles?
Electric vehicles are generally considered to be among the most environmentally friendly types of vehicles, as they have no tailpipe emissions. They are also more energy efficient, more economical, and quieter and smoother than their gasoline counterparts. For more details, please refer to the "Benefits of using EV" section.

Q3: What are the differences between plug-in hybrid electric vehicles and pure electric vehicles?
Pure electric vehicles are propelled entirely by the electric motor, which in turn draws its power from the battery. Plug-in hybrid electric vehicles use a combination of grid electricity and power from an internal combustion engine to propel the vehicle; when the battery is depleted, the gasoline engine kicks in as backup.

Q4: How far can an electric vehicle travel on one charge?
The travelling range of an electric vehicle varies with the car design and battery capacity. A few automakers claim that their vehicles can travel up to 300 km on a single charge, but in general, many electric private cars nowadays can travel well over 100 km per charge.

Q5: Currently, how much do we need to pay to use CLP's EV charging stations?
To promote the wider adoption of EVs in Hong Kong, CLP Power is offering free charging at all CLP standard, semi-quick, and quick EV charging stations until the end of 2017.

Q6: Are there any financial incentives for purchasing electric vehicles?
Please refer to the "Government's Incentive" section.

Q7: Can I take the battery home for charging?
Most electric cars have their chargers built on board and a fixed battery, which means the battery cannot be removed from the car. In addition, for safety reasons, it is not desirable for car owners to handle the battery themselves.

Q8: Can all electric vehicles use CLP's EV charging stations?
Most of the electric vehicles currently available in Hong Kong can use the 220 V, 13 A socket outlets in CLP's standard EV charging stations; however, the six quick chargers are only compatible with EVs manufactured by Mitsubishi or Nissan, or with other vehicles that comply with the CHAdeMO standard. To facilitate the adoption of multiple types of electric vehicles in Hong Kong, CLP launched semi-quick chargers with American SAE standard and European IEC standard sockets in 2012.

Q9: How many CLP EV charging locations are there currently?
There are 31 standard/semi-quick and 14 quick charging stations in CLP's supply area as of November 2016. Please refer to the List of CLP EV Charging Stations.

Q10: How many EV charging locations are there in Hong Kong currently?
Bullion Definition

The word bullion refers to the precious metals traded in bulk on commodity markets around the globe. A metal is considered precious based on its rarity and on market demand. The main characteristics that qualify a precious metal as bullion are its purity and mass; its face value as money matters less than these attributes. Bullion metals are found mainly in the form of ingots and coins. The most common purity is 99.9%, also known as "three nines," and the purest gold coins minted are the Canadian Gold Maple Leaf and the Gold Canadian Mountie, at 99.999% ("five nines"), the highest among gold coins. We call this the highest purity because it is impossible to refine a bullion product to exactly 100%. Silver coins also have high purity and are regarded by investors and collectors as more affordable than gold or platinum.

Bullion Minted Coins

The first bullion coin minted around the concept of a measured content of pure gold was the Krugerrand. The rule for this coin is that it contains 12/11 troy ounces of alloy at 11/12 (22-karat) fineness, so that each coin holds one troy ounce of pure gold. This coin set the pattern for modern gold bullion coins. There are also coins with a long reputation and tradition that are renowned for the consistency of their pure gold content and so do not necessarily inscribe it on the coin, as is the case with the British Sovereign. Coins are minted as legal tender by many nations around the globe; their face value is smaller than their bullion value, and at the same time these coins carry numismatic value for collectors. An example of how different the bullion value is from the face value is the Gold Maple Leaf, which is issued in denominations of $50 for a weight of one troy ounce yet is worth approximately $1,500 to investors and collectors. A more unusual coin was issued by the Australian government: the one-kilogram Australian Gold Nugget coin, which has 99.9% fineness. Being among the largest gold coins in the world, it is valued at $10,000.

Bullion for Investors

Whether we are talking about silver or gold bullion, investors can be sure that there are plenty of alternatives to match their funds. Silver bullion is more affordable than gold; to see this, one can study the evolution of the silver/gold ratio. Investing in coins as a form of owning silver or gold bullion is a lot cheaper than bars, and coins are also easier to liquidate, being sought after by investors and collectors alike. Gold bullion coins are also exempt from VAT and are a form of investment that assures privacy for the investor.
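The Krugerrand rule quoted above works out exactly, which is the point of the design: the extra alloy weight compensates for the sub-pure fineness. A quick check with exact fractions, offered purely as illustration:

from fractions import Fraction

# Krugerrand: 12/11 troy ounces of 22-karat (11/12 fine) alloy per coin.
gross_weight = Fraction(12, 11)   # troy ounces of alloy
fineness = Fraction(11, 12)       # fraction of the alloy that is gold

print(gross_weight * fineness)    # -> 1: exactly one troy ounce of fine gold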
Install a Function Definition

If you are reading this inside of Info in Emacs, you can try out the multiply-by-seven function by first evaluating the function definition and then evaluating (multiply-by-seven 3). A copy of the function definition follows. Place the cursor after the last parenthesis of the function definition and type C-x C-e. When you do this, multiply-by-seven will appear in the echo area. (What this means is that when a function definition is evaluated, the value it returns is the name of the defined function.) At the same time, this action installs the function definition.

(defun multiply-by-seven (number)
  "Multiply NUMBER by seven."
  (* 7 number))

By evaluating this defun, you have just installed multiply-by-seven in Emacs. The function is now just as much a part of Emacs as forward-word or any other editing function you use. (multiply-by-seven will stay installed until you quit Emacs. To reload code automatically whenever you start Emacs, see section Install Code Permanently.)

You can see the effect of installing multiply-by-seven by evaluating the following sample. Place the cursor after the following expression and type C-x C-e. The number 21 will appear in the echo area.

(multiply-by-seven 3)

If you wish, you can read the documentation for the function by typing C-h f (describe-function) and then the name of the function, multiply-by-seven. When you do this, a `*Help*' window will appear on your screen that says:

Multiply NUMBER by seven.

(To return to a single window on your screen, type C-x 1.)
On the night of 9 July 1962, a number of beachfront hotels in Honolulu, Hawaii were throwing "rainbow bomb" parties, gathering sky gazers on the rooftops to enjoy a sight rarely seen in the tropical Pacific: the aurora. At the time there was no means to detect and track a CME (coronal mass ejection), but the technology was coming, due in part to the events that were about to transpire. This time, instead of forecasting an aurora that might occur naturally, they were creating one.

Specifically, DASA (the Defense Atomic Support Agency) and the AEC (the Atomic Energy Commission) were creating the aurora by means of a suborbital nuclear detonation. The test was codenamed Starfish Prime and was part of Operation Dominic, a series of tests designed to probe the capabilities of nuclear weapons in space. The W49 thermonuclear warhead was launched on the nose of a Thor rocket and detonated 248.5 miles above Johnston Island, an altitude that is considered outer space. The yield of the blast was 1.4 megatons, but there was no fireball: there is no air at that altitude to ignite. That is not to say, however, that there were no visual effects. Quoting Cecil R. Coale, PhD, who observed the flash from Canton Island:

Then a brilliant white flash erased the darkness like a photoflash. Then the entire sky turned light green for about a second. In several more seconds, a deep red aurora, several moon diameters in size, formed where the blast had been. A white plasma jet came slowly out of the top of the red aurora (over Johnston Island) and painted a white stripe across the sky from north to south in about one minute. A deep red aurora appeared over Samoa at the south end of the white plasma jet. This visual display lasted for perhaps ten minutes before slowly fading. There was no sound at all.

As spectacular as the sight was, it was the invisible features of the blast that were the most impressive. Not only was the sky illuminated, but in Hawaii alone 300 street lights failed, TVs and radios malfunctioned, burglar alarms went off, and several power lines fused. In low Earth orbit, three satellites were immediately disabled, and artificial radiation belts were created that eventually disabled a third of the satellites then in low orbit. Obviously the aurora effect was predicted, but, as is expected in government jobs, most of the rest came as a surprise, including the fact that after the explosion the planners realized they had no rockets or equipment to follow the bomb and study the aftereffects of the test.
How cannabis causes 'cognitive chaos' in the brain

University of Bristol

Cannabis use is associated with disturbances in concentration and memory. New research by neuroscientists at the University of Bristol, published in the Journal of Neuroscience [Oct. 25], has found that brain activity becomes uncoordinated and inaccurate during these altered states of mind, leading to neurophysiological and behavioural impairments reminiscent of those seen in schizophrenia.

The collaborative study, led by Dr Matt Jones from the University's School of Physiology and Pharmacology, tested whether the detrimental effects of cannabis on memory and cognition could be the result of 'disorchestrated' brain networks. Brain activity can be compared to the performance of a philharmonic orchestra, in which string, brass, woodwind and percussion sections are coupled together in rhythms dictated by the conductor. Similarly, specific structures in the brain tune in to one another at defined frequencies: their rhythmic activity gives rise to brain waves, and the tuning of these brain waves normally allows processing of information used to guide our behaviour.

Using state-of-the-art technology, the researchers measured electrical activity from hundreds of neurons in rats that were given a drug that mimics the psychoactive ingredient of marijuana. While the effects of the drug on individual brain regions were subtle, the drug completely disrupted co-ordinated brain waves across the hippocampus and prefrontal cortex, as though two sections of the orchestra were playing out of synch. Both of these brain structures are essential for memory and decision-making and are heavily implicated in the pathology of schizophrenia. The results from the study show that, as a consequence of this decoupling of hippocampus and prefrontal cortex, the rats became unable to make accurate decisions when navigating around a maze.

Dr Jones, lead author and MRC Senior Non-clinical Fellow at the University, said: "Marijuana abuse is common among sufferers of schizophrenia, and recent studies have shown that the psychoactive ingredient of marijuana can induce some symptoms of schizophrenia in healthy volunteers. These findings are therefore important for our understanding of psychiatric diseases, which may arise as a consequence of 'disorchestrated brains' and could be treated by re-tuning brain activity."

Michal Kucewicz, first author on the study, added: "These results are an important step forward in our understanding of how rhythmic activity in the brain underlies thought processes in health and disease."

The research is part of a Medical Research Council (MRC)-supported collaboration between the University and the Eli Lilly & Co. Centre for Cognitive Neuroscience that aims to develop new tools and targets for the treatment of brain diseases like schizophrenia and Alzheimer's disease.
Breast Milk and B. Infantis: Nature's Favorite Probiotic

Breastfeeding and probiotics

We can't help but extol the virtues of breast milk. A perfect recipe of vitamins, minerals, antibodies, lactoferrin, immune and growth factors, fatty acids, and much more—and a composition that miraculously changes based on the time of day and on your baby's unique nutritional and immune system needs—breast milk provides the ideal and ever-evolving nourishment for your infant. But did you know that what's under the microscope may be responsible for some of breast milk's most beneficial properties?

Breast milk is full of good bacteria that provide crucial functions to support your baby's growth and development; specifically, a balanced gut microbiome is one of the key factors in the proper development of your baby's budding immune system. Since 80% of our immune system resides in the gut, our bacteria play a starring role as they work with our own cells to modulate and balance our immune responses. And how your baby's immune system develops in those critical first few months can determine their immune system function for the rest of their life [1]. From training immune system cells to respond correctly and providing protection from harmful bacteria to improving digestion and nutrient absorption, friendly flora in breast milk are essential to your infant's long-term health and immunity.

You see, before birth, babies encounter small amounts of bacteria from the placenta, but it's the journey through the birth canal that inoculates them with the tremendous number of beneficial microbes they need to jump-start their microbial life outside the womb. And then comes breastfeeding, nature's slow and steady lifeline of goodness. Researchers estimate that infants ingest anywhere from 10 to 100 million bacterial cells every day through breast milk alone, representing hundreds of different species [2].

Here's where it gets fascinating: we know that breast milk is the best source of food and sustenance for children and that it supplies them with the probiotic bacteria they need, but what we've recently learned is that these friendly bacteria are so essential to your baby's health that breast milk also contains the food your baby's flora need to thrive!

Microbes and the Sugar Connection

Scientists have identified more than two hundred different sugars in breast milk, called human milk oligosaccharides (HMOs); in fact, these super sugars are the third most abundant component of breast milk [3]. HMOs are completely indigestible by humans, which raises the question: why would a mom's body expend so much energy producing HMOs when they don't offer any direct nutrition to her baby? Scientists were puzzled by this question for years—until they began to understand the microscopic world of the gut microbiome.

First, they noticed that HMOs seem to perform as decoys for the bad-guy bacteria. You see, undesirable bacteria like to latch on to sugar molecules on our intestinal cells, but HMOs bear a striking resemblance to those sugars. Bad-guy bacteria get confused and attach to HMOs in baby's gut instead, leaving vulnerable intestinal cells alone. In addition to putting on a facade, HMOs fulfill a critical role as prebiotics, or food for microbes. Prebiotics are to probiotics what fertilizer is to a garden—they help good bacteria grow and thrive.
Scientists used to believe that HMOs in breast milk nourished all of a baby's gut bacteria, but it turns out that the aptly named Bifidobacterium infantis—a subspecies of Bifidobacterium longum—is the only strain that can fully break down and utilize these sugars. So, we know that the HMOs' primary purpose is to feed beneficial bacteria, but why would breast milk evolve to selectively feed this specific strain?

5 Big Benefits of Bifidobacterium Infantis

B. infantis boasts a plethora of special functions that could explain why nature seems to have deemed it the most important, foundational strain for human health. Let's have a look at some of the crucial life-supporting services this friendly flora provides:

1. Produces short-chain fatty acids. As B. infantis digests HMOs, it releases short-chain fatty acids like acetic acid that nourish intestinal cells. Not only can acetic acid keep yeast and fungus growth under control, but it also provides a source of energy for growing bodies.

2. Supports gut integrity. Babies are born with open guts, meaning that they have spaces between their intestinal cells through which toxins and undesirable bacteria can slip into the bloodstream. B. infantis signals a baby's gut cells to produce proteins that fill the gaps, thus reducing permeability and possible health issues [4].

3. Crowds out bad guys. Because B. infantis devours HMOs—and because HMOs are so plentiful in breast milk—breastfed babies have gut microbiomes dominated by this mighty microbe. B. infantis outcompetes other bacteria and takes up space, leaving little room for the bad guys to settle in and cause problems.

4. Releases sialic acid. Sialic acid is an essential nutrient for brain development in infants. Unlike most other bacteria in the gut, B. infantis ferments and releases sialic acid as it consumes HMOs [5].

5. Produces folate. B. infantis produces folate (aka vitamin B9), necessary for the production of red blood cells and for baby's healthy growth and development [6]. Folate also supports DNA synthesis and repair.

B. infantis certainly lives up to its name when it comes to supporting your baby's health. So, how do you make sure you have plenty of this advantageous microbe to go around?

Breastfeeding Is Best for Baby's Gut Health

First and foremost, when it comes to optimal health for your baby, breastfeed for as long as you can! Nursing exclusively for the first six months sets your baby up for a lifetime of good health by giving them the perfect nutrition for their growing body, including plenty of the good-guy probiotics. And make sure to take care of your own microbial health! After all, you can only pass on beneficial bacteria to your little one if your own microbiome is healthy and balanced. Begin to live a gut-healthy life by following these three simple steps:

• Take probiotics! Support your microbiome before, during, and after pregnancy with a high-quality probiotic formula like Hyperbiotics PRO-Moms that includes B. infantis for your breastfeeding baby.

• Stay away from probiotic-killers. Processed foods, antibiotics in food and as medicine, certain medications, and even antibacterial cleaners can all wipe out your population of probiotics. And, guess what? If your beneficial microbes are in low supply, your baby's microbiome will suffer as well.

• Focus on prebiotics. Just as HMOs in breast milk provide food for the microbes in your baby's gut, you need to provide food for your own bacteria.
Focus on a diet high in plant-based foods like asparagus, Jerusalem artichoke, bananas, dandelion greens, and garlic to nourish your friendly flora.

And finally, when you're up in the wee hours of the night with a suckling little one, remember that breast milk is nature's perfect cocktail: it's the absolute best nourishment you can provide for your child. Full of nutrients, powerful probiotic bacteria, and tailor-made prebiotics, your breast milk is designed and optimized to support your baby's unique growth and development for a lifetime of vibrant health.

1. Houghteling, P. D., & Walker, W. A. (2015). From Birth to "Immunohealth," Allergies and Enterocolitis. Journal of Clinical Gastroenterology, 49, S7–S12.
2. Boix-Amorós, A., Collado, M. C., & Mira, A. (2016). Relationship between Milk Microbiota, Bacterial Load, Macronutrients, and Human Cells during Lactation. Frontiers in Microbiology, 7. doi:10.3389/fmicb.2016.00492
3. Chichlowski, M., Lartigue, G. D., German, J. B., Raybould, H. E., & Mills, D. A. (2012). Bifidobacteria Isolated From Infants and Cultured on Human Milk Oligosaccharides Affect Intestinal Epithelial Function. Journal of Pediatric Gastroenterology and Nutrition, 55(3), 321–327.
4. Ewaschuk, J. B., Diaz, H., Meddings, L., Diederichs, B., Dmytrash, A., Backer, J., . . . Madsen, K. L. (2008). Secreted bioactive factors from Bifidobacterium infantis enhance epithelial cell barrier function. AJP: Gastrointestinal and Liver Physiology, 295(5).
5. Ward, R. E., Niñonuevo, M., Mills, D. A., Lebrilla, C. B., & German, J. B. (2007). In vitro fermentability of human milk oligosaccharides by several strains of bifidobacteria. Molecular Nutrition & Food Research, 51(11), 1398–1405.
6. Rossi, M., Amaretti, A., & Raimondi, S. (2011). Folate Production by Probiotic Bacteria. Nutrients, 3(12), 118–134.

Emily Courtney is a Writer and Editor at Hyperbiotics and mom to two fun and active boys. Emily is passionate about natural wellness and helping others learn about the power of probiotics for vibrant health!
Chapter 51: Ether 6–10

Book of Mormon Teacher Manual (2009), 186–87

In Ether 6–10, Moroni gives a summary of many generations of the Jaredite people. This fast-paced overview shows the consequences of righteousness and wickedness. Moroni's observations and warnings can help us avoid the pitfalls experienced by the Jaredites. The Lord continued to call on the Jaredites to repent and come unto Him, and He continues to call on us to do the same so He can grant us peace and happiness.

Some Doctrines and Principles

Suggestions for Teaching

Ether 6:1–12. As We Trust in the Lord and Do His Will, He Directs Our Course

Invite students to read Ether 6:1–12 silently, looking for similarities between the Jaredites' journey to the promised land and our journey through mortality toward the celestial kingdom. Suggest that they make a list of words and phrases in this scripture passage that can be applied to our life. For example, they might think about how the wind blowing toward the promised land could be compared to the influence of God in their lives. They might also consider parallels to the stones, the food stored in preparation for the journey, the depths of the sea, the barges or vessels, and the Jaredites themselves. When students have had time to study and ponder, ask them to gather in small groups to share what they have found. Then ask each group to choose someone to share the group's ideas with the entire class. Invite students to share additional doctrines or principles as the discussion progresses. Suggest that students take notes on what each group shares.

• What principles can we learn from the experience of the Jaredites?
• In what ways can these principles help us receive God's direction fully in our lives?

Ether 6:9. "They Did Sing Praises unto the Lord"

Give students time to silently read Ether 6:9 and 1 Nephi 18:9.

• What was the difference between the singing described in these two verses? (For another comparison of these accounts, see page 369 in the student manual.)
• What benefits come to us and to others as we "sing praises unto the Lord"?

Ether 6:12. "They … Did Shed Tears of Joy … Because of the Multitude of [the Lord's] Tender Mercies"

Read Ether 6:12 with the students.

• What does the word mercies mean to you? (As part of this discussion, you may want to invite students to read the entry on "Mercy" on pages 102–3 in True to the Faith.)
• How does the word tender add meaning to the word mercies?
• What does the word multitude contribute to our understanding of this verse?
• What did the people do when they arrived in the promised land? In what ways can we follow their example?

Give students time to think about "the multitude of [the Lord's] tender mercies" in their lives. After sufficient time, invite some of them to share examples.

Ether 6:17. "They Were Taught to Walk Humbly before the Lord"

Ask a student to read Ether 6:17. Ask students to identify actions and attitudes they have seen in others that demonstrate "walk[ing] humbly before the Lord." You may want to list students' responses on the board.

• Why do we need to "walk humbly before the Lord" in order to be "taught from on high"?
• How can we be more humble? How can remembering our relationship to the Lord help us be humble?
• What are some challenges we face as we strive to be humble? How can we overcome these challenges?

Ether 7:23–27; 9:28–31. Prophets Condemn Wickedness and Warn of Danger

Give students time to read Ether 7:23–27 and 9:28–31 silently.
Ask them to look for the similarities and differences in the two accounts. Discuss the following questions:

• What might lead a person to accept or reject a prophet's warnings?
• What are some warnings we have received from our living prophet?
• What are some examples of people receiving blessings because they have followed the warnings of the prophet? (Encourage students to share examples from their lives or from the lives of people they know.)

Ether 8; 9:26–27; 10:33. Secret Combinations Seek to Destroy Nations and Overthrow Freedom

Ask students to review the material about Helaman 6:18–40 on pages 271–72 in the student manual, either individually or as a class. This material gives a brief explanation of secret combinations. Explain that as the prophet Moroni summarized the Jaredite history, he warned modern readers of the dangers of secret combinations. Have students read the chapter heading for Ether 8. Then, using Ether 8:20–26, discuss some or all of the following questions:

• Moroni said that secret combinations destroyed the Jaredite civilization and the Nephite civilization (see verses 20–21). Why do you think secret combinations are so destructive?
• How might individuals or nations "uphold" secret combinations? (See verse 22.)
• Why do you think Moroni wrote about the awful results of secret combinations? (See verses 23–26.)
• How are secret combinations a counterfeit of true covenants with God?

Have students read Ether 9:26–27 and 10:33.

• Why do you suppose that even after periods of righteousness the Jaredite civilization kept falling prey to secret combinations?
• What personal attributes can we develop that will help us resist secret combinations? (Students may give responses such as personal integrity, love for the Lord, and love for the Lord's commandments.)

Have students read Helaman 6:37 and 3 Nephi 5:4–6.

• What is the best way to rid a community of secret combinations?

Ether 10. Leaders Can Influence Societies to Be Wicked or Righteous

Explain that in Ether 10, Moroni summarizes several generations in just 34 verses. Some kings were righteous and led the people to prosperity and peace; others were wicked and led the people to misery. It is not likely that the society changed quickly from righteousness to wickedness or from wickedness to righteousness as its kings changed. Rather, it is likely that it changed gradually. Use the following object lesson to illustrate this point. You may want to practice it before class.

Display a clear glass that is filled halfway with clean water. Ask a student to read Ether 10:5. Then add a drop of dark food coloring to the water. Have a student read Ether 10:9–11. Then add another drop of food coloring to the water. Invite a student to read Ether 10:13. Then add another drop of food coloring. Point out that just as societies with wicked leaders can gradually become wicked, societies with righteous leaders can gradually become more righteous. Have a student read Ether 10:16. Then add some bleach to the stained water. Repeat this process with verse 17 and then with verses 18–19. (At the conclusion, the water should be clear again.)

• What principles can we learn from this object lesson? How did these principles apply to the Jaredites? In what ways do they apply to societies today?
• What are some influences in our society that can make our lives impure? What can we do to keep our lives pure?

Conclude by emphasizing that when we are living righteously, we can be happy in any circumstance.
Ancient Athens - Page Text Content

The Role of Both Women and Men in Ancient Athens
By Mellisa Chirongoma & Hailey Pewtress

Open this book and reveal the wonders of what life was like in ancient Greece. Take a peek; it wouldn't hurt. We hope you learn from this book.

Men, women, and children in ancient Greece had different roles and responsibilities. Let's look at the roles you and your friends and family would have had if you had lived in ancient Greece.

What were girls' roles in ancient Greece?

Girls grew up helping their mothers around the house. All girls were taught to cook, weave, and clean. Girls also learned ancient secret songs and dances so they could participate in the religious festivals. Some girls were taught to read and write by their mothers, but this was rare. At age 15, the girls of wealthy families were expected to throw away their toys and marry the man that their father chose for them. Peasant girls found their own husbands while working in the fields.

What were boys' roles in ancient Greece?

Boys were considered to be more important than girls and were sent to school at age 6. At school they learned to read, write the alphabet, add on an abacus, and enjoy poetry and music. Boys were expected to have a healthy mind and body. They were taught to have healthy bodies by participating in gymnastics -- this included wrestling, running, jumping, and throwing the javelin. At age 16, boys began to train for their future jobs. If they wanted to be in the army, they would have started training at age 7 and entered the army at age 20. Other popular jobs were those of businessmen and Olympic athletes.

What were women's roles in ancient Greece?

Women dressed in clothes much like those worn by the men. If you were married to a rich man, your chiton would have been made of brightly colored wool or linen. On special occasions women wore wigs and makeup. Women didn't have as many privileges as men in ancient Greece. For example, they were not allowed to eat or sleep in the same room as men, go to the Olympics, or go into the marketplace or streets of the city. Since they spent a lot of time in the house, their most important tasks, aside from having children, were running the household and managing the slaves. Women in less wealthy households did not have slaves and had to do all the housework themselves. In peasant households, the women were in charge of working the fields.

What were men's roles in ancient Greece?

Men in Greece wore special clothes. Every Greek man owned several chitons, long, rectangular pieces of cloth with holes for the head and arms. The chitons were decorated based on the man's status in society. The richest men had the fanciest chitons, made out of the most expensive cloth and with the most decorations. The man was in charge of the family and the house. Most men worked during the day as businessmen or farmers. When they were at home, they were treated with great respect. Even during dinner, the men laid on couches and were fed and entertained by the slaves while the women and children ate in another room. Men were given the most responsibility and, therefore, were considered the most important people in ancient Greece.

[Photos: boys' clothing; girls' clothing]

Women in most city-states of ancient Greece had very few rights.
They were under the control and protection of their father, husband, or a male relative for their entire lives. Women had no role in politics. Women with any wealth did not work; they stayed indoors running their households. The only public job of importance for a woman was as a religious priestess.

In Sparta, men stayed in barracks until they were thirty. Since Spartan women did not have this restriction, they had more freedoms and responsibilities in public life. They were able to go out in public unescorted, participate in athletic contests, and inherit land. In the fourth century, over two-fifths of the land in Sparta was owned by women. In Athens, the law required all inheritances to go through the male line and limited the property that could be owned by women.

It was the wives who supervised the slaves and managed the household responsibilities, such as weaving and cooking. In affluent homes, women had a completely separate area of the house where men were not permitted. In the homes of the poor, separate areas were not available. Poor women often worked outside the home, assisting their husbands at the market or at some other job, and often went to the market without a male escort.

What did girls do? They learned to read, at school or at home. They learned important household skills: spinning, weaving, sewing, cooking, and other household jobs. They learned simple facts of mythology and religion, and occasionally musical instruments. A girl spent most of her time in the household with other women, leaving the house only to perform religious duties.

What about marriage? Girls got married in their teens, often to a man in his thirties. After a woman got married, she and her husband would give offerings to the gods and share a cake. Her father would choose her husband; for most Athenians, marriage was basically living together. A marriage may have been arranged from a very early age if the daughter came from a wealthy family. An ancient Greek girl did not know or meet her husband until the dowry (the girl's portion of the father's estate) and betrothal had been agreed to. It was important that the girls were virgins.

Athenian Women Had Little Power

In 5th-century Athens, women had no political power and very little power in their personal lives as well. They were considered equal to slaves and foreigners, not given the right to vote or to leave the house without a chaperone. The reason an Athenian woman got married was to have children. These heirs would become good Athenian citizens who would also be able to provide for their parents when their parents became elderly or infirm. An interesting contrast can be drawn with the lives of Spartan women, who were able to own and control their own property. Spartan women were encouraged to fight and had little interaction with their own children after giving birth to them.

Ancient Athenians believed that women were in the world to have babies and manage the household. This ideal grates on most modern women because it seems so horrifically simple. Such a prospective future cannot be looked at as anything but monotonous. Being a mother is a beautiful thing, and no one doubts the need for a nice house. However, there must be something more. Wise women and men will continue to debate the equality of the genders. It is interesting to see that while we as a civilization have come quite far, in some ways we have barely even begun.
[Photo gallery: wealthy families; girls', mothers', boys', and fathers' clothing]

By Hailey & Mellisa
Technology news, 2 April 2014

Five ways to make sure we never lose a plane again

THE world has been transfixed by the fate of Malaysia Airlines flight MH370, which seems to have crashed in the far reaches of the southern Indian Ocean. But how can we lose a plane in an age of always-on surveillance? Some tracking measures already exist and others are under consideration.

Flight-tracking over the oceans

That flight MH370 could not be tracked with GPS is astonishing. The accident is likely to prompt the UN's International Civil Aviation Organization (ICAO) to order mandatory tracking of aircraft on ocean routes. Satellite firms already sell spare satellite bandwidth to airlines so they can provide in-flight connectivity. The lost plane had a satellite antenna but did not use it to transmit technical data. But the transmitter pinged an hourly signal to an Inmarsat satellite, and those radio pulses were used to work out a rough flight path for the missing plane. Pinging location data as well would have cost only one dollar an hour. Other providers of in-flight entertainment and seat-back connectivity could send out tracking signals as well: Panasonic, for instance, offers broadband and could provide data pings. And the Iridium satellite network is launching 66 new satellites that will supply constant aircraft location data via a service called Aireon from 2017.

Smarter black boxes

Why do aircraft flight recorders not routinely stream their data via satellite to servers on the ground? The stumbling block is a lack of affordable bandwidth to transmit the thousands of flight parameters that would be required. A better idea, called "triggered transmission," has been hatched by an industry working group led by the BEA, the French accident investigation organisation. The aim is to use avionics software to recognise conditions that suggest an accident is imminent, such as sudden rolling combined with a stall warning, and then transmit recent black box flight data via satellite. That way, only planes in trouble will send out data on their status and location. An algorithm could then narrow down the plane's last known location to within 11 kilometres. The ICAO will discuss the idea at a meeting in Montreal, Canada, in October.

Extended recording time

The Malaysia Airlines Boeing 777 flew for six hours after deviating from its planned route. With just two hours of recording time, the cockpit voice recorder is unlikely to reveal much about the cause of the incident. The European Aviation Safety Agency (EASA) is now considering moving to a 15-hour cockpit voice recorder.

Longer-lasting underwater pings

Flight recorders carry an underwater location device that emits an ultrasound pulse once a second for 30 days after it is submerged. EASA and the US Federal Aviation Administration are upping the battery lifetime to 90 days.

Ping louder for ocean flights

One problem highlighted in the 2009 search for Air France flight 447 was the difficulty a submarine had in hearing its pinger: the batteries ran out before it was found. EASA suggests that a longer-range, lower-frequency 8.8-kilohertz pinger be attached to planes that regularly fly across oceans, in addition to the pingers on black boxes. It would extend the range from 1500 metres to 10.7 kilometres.
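The "triggered transmission" scheme described above lends itself to a simple sketch. The trigger condition (sudden rolling combined with a stall warning) is the example the BEA-led working group gives; everything else here, including the threshold value and the function names, is invented for illustration and is not any real avionics interface.

# Hypothetical illustration of "triggered transmission": stream recent
# flight data only when onboard conditions suggest an accident is imminent.
ROLL_RATE_LIMIT_DEG_S = 10.0  # assumed threshold for "sudden rolling"

def should_trigger(roll_rate_deg_s: float, stall_warning: bool) -> bool:
    # The example condition from the article: sudden roll plus stall warning.
    return abs(roll_rate_deg_s) > ROLL_RATE_LIMIT_DEG_S and stall_warning

def transmit_via_satellite(recent_data: list) -> None:
    # Stand-in for the satellite uplink; a real system would send the most
    # recent black box parameters, not print them.
    print(f"uplinking {len(recent_data)} recent flight parameters")

def monitor(samples) -> None:
    # samples: iterable of (roll_rate_deg_s, stall_warning, recent_data).
    for roll_rate, stall_warning, recent_data in samples:
        if should_trigger(roll_rate, stall_warning):
            transmit_via_satellite(recent_data)

# Example: only the second sample trips the trigger and is transmitted.
monitor([(2.0, False, ["alt", "spd"]), (14.5, True, ["alt", "spd", "roll"])])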
Plasmas - The Greatest Show on Earth

Auroras are triggered by geomagnetic storms, when gusts of solar plasma wind strike the Earth's magnetic field; charged particles rain down over the north and south magnetic poles, lighting up the atmosphere and causing the air to glow. In northern latitudes this display is known as the aurora borealis, or northern lights, and in southern latitudes it is known as the aurora australis, or southern lights. Like lightning, auroras are among the few naturally occurring plasmas found here on Earth.

Plasma science, involving both natural and artificially produced plasma, encompasses a variety of disciplines ranging from plasma physics, atomic and molecular physics, and chemistry to energy security, advanced space propulsion, and materials science. Plasmas' unique behaviors and characteristics make them useful in a large and growing number of scientific and industrial applications important to our universe. DOE researchers are making significant progress in this amazing field. For example, researchers at the...
Who was John Wisdom?

Arthur John Terence Dibben Wisdom (1904–1993) was educated at Cambridge University and became a professor there in 1952. His early work was on Jeremy Bentham (1748–1832) and logical atomism, but under the influence of Ludwig Wittgenstein (1889–1951) he began a project of examining different approaches to philosophical problems. Wisdom's publications in that area include Other Minds (1952), Philosophy and Psychoanalysis (1953), and Paradox and Discovery (1964). Wisdom reflected discursively on why philosophers say and write "very strange things," and refuted skepticism about the existence of other minds. He brought the discussion of the "other minds problem" into twentieth-century analytic contexts by ruling out the possibility of direct knowledge of other minds while at the same time showing why the claim that our knowledge is restricted to momentary sensations does not hold up. Overall, he argued that philosophers have always relied on the use of language and that there are historical precedents in philosophy for deciding when language gets the main subjects of philosophy right, as well as wrong. Wisdom thought that the main subjects of philosophy were categories of being in reality and kinds of statements in language. He held that the relevant distinctions within these subjects were implicit in language. He is also the author of Philosophical Papers (1962).
Question about physical chem

May 31, 2005 #1

Hello, there are some questions about physical chemistry I'd like to ask.

1) The space vehicle Gemini started its journey in space with 322 kg of liquid fuel, CH3NHNH2. This fuel is oxidised by dinitrogen tetroxide, N2O4, to form nitrogen, carbon dioxide and water. What is the mass of N2O4, in kg, that is required to oxidise all the fuel?
(A) 258
(B) 644
(C) 805
(D) 875 *
(E) 161

The answer I got is (C), but the given answer is (D). Is there anything wrong with my calculations? Can anyone show me?

2) Which of the following contains 1 mol of the particles stated?
(A) Chlorine molecules in 35.5 g of chlorine gas
(B) Electrons in 1 g of hydrogen gas *
(C) Hydrogen ions in 1 dm3 of 1 mol dm-3 aqueous sulphuric acid
(D) Oxygen atoms in 22.4 dm3 of oxygen gas at s.t.p.

Thanks a lot to anyone who can help.

Last edited: May 31, 2005

Jun 2, 2005 #2

How many moles is this? Can you write out the balanced reaction that is taking place? Do these two steps and you should be able to get the answer. If not, post what work you have done.

Once again, calculate moles. For (A), how many moles is 35.5 g of Cl2 gas? 1 mole of Cl2 contains 2 moles of Cl atoms. For (B), how many moles is 1 g of H2 gas? There are 2 moles of electrons per mole of H2 gas....

Jun 3, 2005 #3

Hi, actually from my calculation for the first question, the mass of N2O4 I got is 805 kg, but the answer given is 875 kg. I don't think there is anything wrong with my calculations, so I need your answer. These questions have a lot of mistakes in them; maybe I'm just fooling myself. But please, can you give me your answer?

Before I proceed, I have to apologize, because I have more questions to trouble you with. So sorry about that.

1) Explain why carbon-12 replaced oxygen as the standard in the determination of relative atomic mass. (Or: why is it used as the standard in determining Ar?) Is it because carbon-12 is more readily available than oxygen and easier to transport? That is my answer, but I don't know whether it is correct. It sounds stupid, right?

2) Haemoglobin is a protein which carries oxygen in the red blood cell. Each molecule of haemoglobin contains 4 iron atoms. If 1 g of haemoglobin contains 0.00341 g of iron, calculate the RMM of haemoglobin.

3) The density of ice is 1.00 g cm-3. What volume of steam is produced when 1 cm3 of ice is heated to 323 degrees Celsius (596 K) at a pressure of 1 atm (101 kPa)? (1 mol of gas occupies 24.0 dm3 at 25 degrees Celsius and 1 atm.)

That's all for now, though I think there will be more; sorry about that. You may think I'm too lazy to solve my own problems, but I did try and in the end couldn't get them, so I need someone to help. Anyway, thanks a lot for your help.

Jun 3, 2005 #4

The balanced equation is:

CH3NHNH2 + (5/4) N2O4 -----> (9/4) N2 + CO2 + 3 H2O

I don't have a periodic table, so you have to plug in the values:

322 kg fuel x (1000 g fuel / 1 kg fuel) x (1 mol fuel / MW of fuel) x (5/4 mol N2O4 / 1 mol fuel) x (MW of N2O4 / 1 mol N2O4)

The units cancel and you get your answer in grams of N2O4; you just need to fill in the two molecular weights. The question asks for the amount of N2O4 in kg, so just divide your answer by 1000.

Jun 3, 2005 #5

These are all the same types of problems.

1) Instead of me typing it out, this website explains: http://dl.clackamas.cc.or.us/ch104-03/changes.htm

2) It says the ratio of Fe atoms to molecules of hemoglobin is 4:1. How many moles of Fe is 0.00341 g of iron?
Divide this by 4 and you have the number of moles of hemoglobin. Thus you know that 1 g is X moles of hemoglobin, and from that you should be able to find the MW of hemoglobin.

3) How many moles of water is 1 cm^3 of ice? Figure that out and use PV = nRT.

Jun 6, 2005 #6

Questions again. Thanks for the answer about the change in standard.

For the second question, 0.00341 g of iron is 0.00341/26 moles of iron atoms, right? That answer is then divided by 4, because 1 molecule of haemoglobin contains 4 atoms of iron. 1 g of haemoglobin will contain the number of moles I counted; then 1 g is divided by that answer, and the relative molecular mass of haemoglobin is known. I got 30498.5, but the answer is 65454. Why?

For the third question, 1 cm^3 of ice is 1/18 mole of ice, right? Then pV = nRT gives 2.72 x 10^-3 m3. The answer given is 2.67 dm3; when I convert my answer, it's wrong. Why?

Anyway, thanks a lot for helping me so much.
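For what it's worth, all three calculations in this thread are easy to check numerically. A minimal Python sketch (standard atomic masses assumed; this is an illustration added in editing, not part of the original thread):

Code:

# Numeric check of the three calculations discussed in this thread.
# Atomic masses (g/mol) are standard values.
M = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999, "Fe": 55.845}

# Q1: 4 CH3NHNH2 + 5 N2O4 -> 9 N2 + 4 CO2 + 12 H2O, i.e. 5/4 mol N2O4 per mol fuel.
mw_fuel = M["C"] + 6 * M["H"] + 2 * M["N"]   # CH3NHNH2, ~46.07 g/mol
mw_n2o4 = 2 * M["N"] + 4 * M["O"]            # ~92.01 g/mol
kg_n2o4 = 322e3 / mw_fuel * (5 / 4) * mw_n2o4 / 1e3
print(f"N2O4 needed: {kg_n2o4:.0f} kg")      # ~804 kg (exactly 805 with Mr = 46 and 92),
                                             # i.e. option (C), matching the poster

# Q2 (haemoglobin): use iron's molar mass (55.845 g/mol), not its atomic
# number (26) -- dividing by 26 is what produces the wrong value 30498.5.
mol_fe = 0.00341 / M["Fe"]
rmm_hb = 1.0 / (mol_fe / 4)                  # 4 Fe atoms per molecule
print(f"RMM of haemoglobin: {rmm_hb:.0f}")   # ~65,500, matching the given 65454

# Q3 (steam): the textbook answer scales the 24.0 dm^3/mol hint from 298 K
# to 596 K; the exact ideal-gas law with R = 8.314 gives a slightly larger
# volume, which explains the 2.67 vs 2.72 dm^3 discrepancy.
mol_h2o = 1.0 / (2 * M["H"] + M["O"])        # 1 g of ice
print(f"via hint:   {mol_h2o * 24.0 * 596 / 298:.2f} dm^3")           # ~2.66
print(f"via PV=nRT: {mol_h2o * 8.314 * 596 / 101e3 * 1e3:.2f} dm^3")  # ~2.72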
What is the length of the catapult?

Jul 14, 2007 #1
A jet fighter plane is launched from a catapult on an aircraft carrier. It reaches a speed of 42 m/s at the end of the catapult, and this requires 2.0 s. Assuming the acceleration is constant, what is the length of the catapult?

Am I looking at this correctly?
final velocity (Vf) is 42 m/s
initial velocity (Vi) is 0 m/s
time (t) is 2.0 s
acceleration (a) is 21 m/s^2 (I'm assuming this because if it started from rest and after 2 seconds is travelling at 42 m/s, then the acceleration must be half of that. Or am I way off?)

Here's my solution:
Vf^2 = Vi^2 + 2ad
(42 m/s)^2 = 0 + 2(21 m/s^2)d
d = 42^2 / 42 = 42 m

Am I doing this correctly? Thank you for any help in advance. - Otis

Jul 14, 2007 #2
Yep, looks right.
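For completeness, a short check of the arithmetic, both by the Vf^2 = Vi^2 + 2ad route used in the post and by the average-speed shortcut d = (1/2)(Vi + Vf)t:

```python
# Catapult problem: constant acceleration from rest to 42 m/s in 2.0 s.
v_i, v_f, t = 0.0, 42.0, 2.0                 # m/s, m/s, s

a = (v_f - v_i) / t                          # 21 m/s^2, as assumed above
d_kinematic = (v_f**2 - v_i**2) / (2 * a)    # vf^2 = vi^2 + 2ad  ->  42 m
d_avg_speed = 0.5 * (v_i + v_f) * t          # average speed * time  ->  42 m

print(a, d_kinematic, d_avg_speed)           # 21.0 42.0 42.0
```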
Why the Born Rule?

Jul 9, 2008 #1
With regard to quantum experimental phenomena, might one not think of the movement (light and sound indicators, and the subsequent recorded data streams) of a piece of macroscopic instrumentation as the result of measurements of the energy of wavelike disturbances within some volume of space during some interval of time? That is, data streams in quantum experimental setups are generated by the energy imparted from quantum systems to macroscopic instruments.

The Born rule says that the probability of a quantum system producing a change in macroscopic instrumentation is directly proportional to the square of the amplitude of the quantum system. With respect to classical systems, the energy transported by a wave (and imparted to obstacles in the wave's path) is approximately proportional to the square of the amplitude of the wave. Is this where the Born rule came from?

Jul 9, 2008 #2
My favorite justification of the Born rule is this: in a certain sense, the only really relevant property of a quantum state [itex]\psi[/itex] is that it determines an 'expected value' [itex]\langle T \rangle_\psi[/itex] for any observable [itex]T[/itex]. Then you invoke a deep theorem of functional analysis which says that if you define states this way, then for any particular state [itex]\psi[/itex] you can find a (good) Hilbert space representation in which [itex]\psi[/itex] 'factors' into a bra and a ket satisfying [itex]\langle T \rangle_\psi = \langle \psi | T | \psi \rangle[/itex] for some ket [itex]|\psi \rangle[/itex], for all observables T.

Conclusion: the Born rule is really just a mathematical artifact of the way Hilbert spaces can naturally be used to represent states. Of course, the discovery process went in the opposite direction...

Jul 9, 2008 #3
I just did a quick search for a question I had and figured I could put it in this thread, because I'm guessing it's somewhat related. I apologize ahead of time if the answer could have been unearthed with a little more effort on my part looking into my statistics or QM textbooks (this question actually popped into my head as I was reviewing Griffiths' first chapter), or if it's frankly just a silly question.

I understand that, on the Born interpretation of the wave function, the actual probability of finding a particle (the probability of a quantum system producing a change, in this case) is given by the product of the wave function with its complex conjugate. This yields a non-negative real probability for any normalizable function. However, I've been searching for the proof of this interpretation, to understand why it is true mathematically. If this is an approximation, why shouldn't the probability be proportional to some other non-trigonometric even function, say the wave function to the fourth power, or some series expansion, as opposed to the wave function squared?

I've done a cursory search on this forum and found this: https://www.physicsforums.com/showthread.php?t=237940&highlight=born+statistical+interpretation — but it looks like the last comment just answers with an experimental justification, and the post previous to that doesn't really address my question.
Even if there isn't a proof (for whatever reason), I'd greatly appreciate a reply to my question, even if it's just to point me to a helpful link or to tell me where I'm getting mixed up. I understand there is a good chance that my question is just a fundamental misunderstanding of the application of statistics to the probability equation (though I did take a second to look at other distribution functions), but I know that leaving the issue unresolved will eat at me, even if it's just an unnecessary diversion while I continue to review the book. Thanks again.

Jul 9, 2008 #4
It's a good bet that, indeed, Born was in part motivated by the content of Poynting's theorem and the general importance of wave intensity in E&M and other wave-dominated fields. As a student, I had the good fortune to talk with a few physicists who learned and worked during Born's time, and learned my basic QM from one of them, J.H. Van Vleck. I got the strong sense that Born's ideas were related to the usual interpretation of wave intensity, including the E&M approach to scattering; see the classical derivation of the Rutherford scattering cross section, for example. Lots of the early work in QM was motivated by physical reasoning. The more abstract approach with Hilbert spaces and state vectors, versus coordinate- or momentum-based wave functions, really took off with the publication of Dirac's book in the 1930s. And note that Born's interpretation is just that: it is justified by its consistent utility over nearly a century of application.
Reilly Atkinson

Jul 12, 2008 #5
Is this where the Born rule came from? >> Schroedinger first tried to interpret rho = |psi|^2 as a continuous matter density, much like the classical intensity of a wave. But that wasn't consistent with the empirical observation of particles. Born then came along and showed that a probability interpretation of rho is self-consistent if one assumes it is a probability density distribution for discrete events like "measurement" (the term was not even defined back then). In particular, one can construct an operator algebra that allows one to correctly compute the expectation values of various observables using rho as the probability measure. This is only a postulate, which nevertheless seems to work extremely well; it is not a dynamical, physical derivation of the Born rule.

However, in the de Broglie-Bohm (deBB) pilot-wave theory, one can actually derive the Born rule (rather than postulating it) in two different ways. One can show that rho is the measure of greatest typicality for point-particle trajectories in 3-space, and that any hypothetical initial particle distribution with rho ≠ |psi|^2 will converge to rho = |psi|^2 and stay there for all t, since it is the only equivariant measure. Or one can show, from a subquantum H-theorem, that if an ensemble of deBB particles is initially distributed with rho ≠ |psi|^2, the mixing of the particles in nonequilibrium will relax very quickly to the statistical equilibrium given by rho = |psi|^2, and will stay there because it is the only equivariant measure. Notice that both of these arguments parallel the justifications given by Boltzmann for the 2nd law of thermodynamics: the first is an application of Boltzmann's "statistical argument" (also known as the typicality argument), and the second is an application of Boltzmann's H-theorem. This is one reason why pilot-wave theory is a more general theory than textbook QM.
See the following papers:

"Quantum Equilibrium and the Origin of Absolute Uncertainty" — Detlef Dürr, Sheldon Goldstein, Nino Zanghí

"Dynamical Origin of Quantum Probabilities" — Antony Valentini and Hans Westman

"Hidden Variables, Statistical Mechanics and the Early Universe" — Antony Valentini

Jul 12, 2008 #6
Thanks to all for your comments, links, and references.
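For readers who want to see the rule numerically, here is a minimal NumPy sketch, independent of the interpretive positions above: the probabilities are the squared magnitudes of the normalized amplitudes, and the expectation value reduces to <psi|T|psi> as in post #2.

```python
import numpy as np

# Born rule on a toy 4-state system: P(i) = |<i|psi>|^2 for a normalized state.
psi = np.array([1 + 1j, 2, 0, 1j])        # arbitrary unnormalized amplitudes
psi = psi / np.linalg.norm(psi)           # normalize so <psi|psi> = 1

probs = np.abs(psi) ** 2                  # Born probabilities
print(probs, probs.sum())                 # non-negative, and they sum to 1.0

# Expectation value of an observable T (Hermitian): <T> = <psi|T|psi>
T = np.diag([0.0, 1.0, 2.0, 3.0])         # toy observable, already diagonal
print(np.real(np.vdot(psi, T @ psi)))     # equals sum_i probs[i] * eigenvalue_i
```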
How do you give a money tree as a gift?

Quick Answer
To give a money tree as a gift, creatively attach dollar bills to living or artificial tree branches and present the tree as a gift. Purchase a small tree at a home gardening store, or find an artificial tree at a craft store. When attaching money to a living tree, do not harm the leaves; use ribbons that can be easily cut off or removed to attach the bills.

Full Answer
Alternatively, tie the dollar bills to branches on a plant that looks like a tree. For artificial plants, tape colorful bows onto the branches. To make giving money easy, decorate small clothespins and glue them onto an artificial tree to hold the money. Add sparkly paint and glitter to an artificial tree to make a standout gift.

Attaching a poem or inspirational quote to the base of the plant is a nice way to personalize the money-tree gift. Include a personal note inside the card explaining the significance of the amount of money attached to the tree; this personal touch creates a fond memory for the recipient. This gift is always well received, since cash can be spent at any retail store. Alternatively, tie on 10 or 20 single dollar bills and attach a prepaid credit card to the base of the tree for online purchases.
What are some attributes of rustic log cabins?

Quick Answer
Traditional cabins are constructed using horizontally laid logs and vertically posted corners. Rustic cabins consist of one pen, sometimes joined with others in a continental, saddlebag or dog-trot formation; interior partitions create rooms. The Rocky Mountain style includes a covered porch that extends the roof to prevent snow slides from blocking entrances.

Full Answer
The log cabin type of construction comes from structures used by early homesteaders, who needed to build shelters quickly. The continental floor plan is three rooms surrounding a pen that includes the hearth. The saddlebag is two pens constructed side by side sharing a common chimney. The dog trot is two pens separated by an open passage sharing a common roof; the passage provides air circulation and shelter from heat.

There are several types of corner notching used in rustic cabin construction; the most popular, the crown method, extends logs beyond the joint, creating a buttress effect. Rough logs from chestnut, white oak, cedar or fir trees are common in rustic cabin construction because they are naturally long and straight with rot-resistant qualities. Additionally, these woods are easily worked using hand tools. Rustic cabin roofs are wood shingle or seamed metal, and chimneys are brick or stone. Interior walls are generally exposed logs, to highlight the beauty of hand-hewn wood.
What dogs have hair instead of fur?

Quick Answer
VetInfo lists many dog breeds alleged to have hair rather than fur, including poodles, bichons frises, Irish water spaniels, Portuguese water dogs and Lhasa apsos. The simple difference between hair and fur is that fur grows to a certain length and stops, while hair grows nearly continually.

Full Answer
Dogplay.com explains that hair and fur are composed of the same biological substance, keratin. Both come from follicles that go through four stages of life: anagen, or growth; catagen, a transition phase; telogen, a dormant phase; and exogen, when hair and fur are shed. Hair stays in anagen, the growth cycle, for the longest. Fur, however, goes through anagen quickly and spends most of its life cycle in the dormant phase, telogen. For this reason, fur seems to grow to a certain length and stop.

Hair and fur in the telogen phase tend to shed "feathers," the microscopic scales that make up the exterior layer of hair. These scales, blended with dead skin and dried pet saliva, make up pet dander, the allergen that triggers allergic reactions in people. Dogs that have hair instead of fur produce less dander, because fewer hairs are in the telogen phase at any given time. Health.com suggests that curly-coated dogs such as poodles are even better where allergens are concerned, because the curly hair tends to trap shed dander.
What is a cable-stayed bridge?

Quick Answer
A cable-stayed bridge is a spanning structure that uses cables connected to towers (also known as pylons) to support the roadway. Depending on the length of the bridge, cable-stayed spans have one or more towers.

Full Answer
Cable-stayed bridges share some similarities with suspension bridges: both use towers and cables to support the roadway. Whereas a suspension bridge's cables anchor into the ground at either end of the span and run over the tops of the towers, a cable-stayed bridge ties its cables into the towers themselves. Generally, there are two formats for cable-stayed bridges: parallel (or harp) bridges anchor the cables equidistantly from each other along the bridge and up the towers, while radial (or fan) bridges anchor the cables at one spot on the tower and spread them out along the roadway.
How many protons does polonium have?

Quick Answer
Polonium has 84 protons. It is a rare, highly radioactive element with no stable isotopes. Polonium has few applications: it can be used in heaters in space probes, in antistatic devices, and as a source of neutrons and alpha particles. Polonium occurs naturally in tobacco and some foods, particularly seafood.

Full Answer
Polonium was discovered by the Polish chemist Marie Curie and her husband, the French chemist Pierre Curie, in 1898. Marie Curie obtained the polonium from pitchblende, a material that contains uranium. She noticed that pitchblende was more radioactive than plain uranium and realized it must contain one or more other radioactive elements; this also led to her discovery of radium, another radioactive element. Marie and Pierre Curie received the Nobel Prize in Physics in 1903.
Bipolar Disorder Symptoms

Is violent behavior a symptom of bipolar disorder?

Violence and rage are unfortunate symptoms of bipolar disorder. Many people who commit violent, rage-filled acts are punished or incarcerated without getting a much-needed bipolar diagnosis: the behavior is often seen as an anger-management problem, and proper treatment is not initiated. During a mood swing, a person can think, say and do things they would never even contemplate when not ill. And though it's easy to assume that violent and rage symptoms only happen to men, this is simply not true; women experience these symptoms as well. I have had mood swings where I chased down a car when a driver flipped me off, and almost had a fight with a woman on a bicycle who went through a red light in front of me. I held myself back because I realized that bipolar was thinking for me. But I really wanted to make someone hurt.

Violence and Rage Symptoms
• Wanting to hurt someone; fighting in public
• Screaming
• Violent thoughts that are scary and out of character
• Road rage (so common!)
• Kicking and punching things so hard that bones can break
• Yelling at and hitting loved ones
• Inability to control feelings or behavior
• Out-of-proportion reactions to normal events
• Inability to see the consequences of actions
• Losing contact with reality

Please, please remember: the above actions happen when the person is in a mood swing, usually a dysphoric (agitated, upset and negative) manic episode, often with psychosis. These are not in any way their normal behaviors, nor are they part of the person's personality. When the mood swing ends, the person is often mortified, ashamed and truly repentant about what happened. I've heard so many stories where gentle and kind people do something violent while manic and have no idea what happened. A friend of mine beat up a man on a train platform in Japan; he just did it out of nowhere. The mania was that strong. People who do something violent and very out of character when ill need a lot of compassion.

How Can This Be Prevented?
Violent mania often starts with irritation. If a person knows their irritation signs and immediately gets help with medications and a management plan, these destructive mood swings can be prevented. This is good news! Even if someone is refusing treatment now, I've seen plenty of people change and accept help eventually. No one wants to be in constant fights. Ultimately, managing bipolar is better than waking up in jail.
All Junior Cert Science Revised Syllabus posts

Help! Why are magnesium and copper good electrodes???
ronanmurray
Can someone please explain to me why magnesium and copper are good electrodes? It's for my Coursework B!

1. Hi Ronan! The reason why magnesium and copper are good electrodes is that they have a large difference in reactivity. If you studied the chapter on Metals and Non-Metals, you will remember that these four elements are ordered from most reactive to least reactive: calcium, magnesium, zinc, copper. This means magnesium loses its electrons more readily than copper, and because of this, copper ions are more likely to accept electrons than magnesium ions. Therefore, electrons flow from the magnesium electrode to the copper electrode, creating an electric current. These properties apply to any two elements (or compounds) so long as they have a large enough difference in reactivity: the bigger the difference in reactivity, the larger the cell voltage (and, for a given circuit, the current) will be. This would also work with two samples of the same element, as long as one set of ions is positively charged and the other negatively charged. In fact, one of the elements/compounds doesn't even have to conduct electricity on its own, as long as a metal electrode is present so that the electrons can flow. Check out this site for the reactivity series, which ranks elements from most reactive to least reactive:

2. Thank you!!!
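To make the reactivity-gap point concrete, here is a small sketch using standard reduction potentials from textbook tables (the voltage values are assumed from such tables, and real cells will deviate from these ideal numbers):

```python
# Standard reduction potentials in volts (assumed textbook values).
E_reduction = {
    "Ca": -2.87,   # Ca2+ + 2e- -> Ca   (most reactive of the four)
    "Mg": -2.37,   # Mg2+ + 2e- -> Mg
    "Zn": -0.76,   # Zn2+ + 2e- -> Zn
    "Cu": +0.34,   # Cu2+ + 2e- -> Cu   (least reactive of the four)
}

def cell_voltage(anode: str, cathode: str) -> float:
    """E_cell = E(cathode) - E(anode); the more reactive metal is the anode."""
    return E_reduction[cathode] - E_reduction[anode]

print(cell_voltage("Mg", "Cu"))   # 2.71 V: big reactivity gap, big voltage
print(cell_voltage("Zn", "Cu"))   # 1.10 V: smaller gap, smaller voltage
```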
Eradicating Aging Cells Could Prevent Disease

Mice lacking these cells were stronger and had no cataracts.

For more than a decade, researchers have believed that aging cells damage the tissue around them, and that this damage underlies a number of age-related disorders. Now a new study in mice appears to confirm this. The study shows that selectively eliminating those aging, or "senescent," cells could help prevent the onset of everything from muscle loss to cataracts.

Age eraser: Researchers subjected mouse bone-marrow cells to a drug that induced aging (top), then selectively killed off only the cells that could no longer divide (bottom). They hope that such a technique could one day be used to delay or prevent age-related disease.

Senescent cells can no longer divide, and therefore fail to replenish aging tissue. More recently, researchers have suggested that these cells might also secrete damaging chemicals that poison the cells around them. To determine their role in the diseases of aging, scientists at the Mayo Clinic in Rochester, Minnesota, used a biomarker specific to senescent cells, called p16Ink4a, to identify them in mice that had been genetically engineered to age rapidly. For the length of the animals' lives, the mice were injected with a drug that induced only the biomarker-containing senescent cells to commit suicide, while leaving others untouched.

The results were striking: in tissues that contained the labeled cells, including everything from fat to muscle to eyes, selective removal appeared to postpone age-related damage. Treated mice had no cataracts, and showed increased muscle mass, strength, and subcutaneous fat when compared to mice that hadn't received the drug.

"We've shown there is a causal link between these senescent cells and age-related decline in tissue function," says Jan van Deursen, the Mayo Clinic cancer researcher who led the study. "It's a proof of principle that if you remove this particular cell type from an organism (we did it in a mouse, but it will probably hold true for humans) tissues and organs would function better and would be more resistant to aging."

Prior to this study, it was unclear how senescent cells contributed to aging. The cells make up a very small proportion of all tissue, somewhere between 1 and 4 percent in even the oldest animals, and many doubted that such a small number of cells could have such a toxic effect. Rather, they thought, when the cells lost their ability to divide, the resulting failure to replace lost tissue might be what caused symptoms of aging. The new research appears to validate the idea of senescent-cell toxicity.

"Now that we know the cells play a role in aging, it's worth investing in trying to find a way to eliminate them," says Felipe Sierra, the director of aging biology at the National Institute on Aging, who was not involved in the research. Not only does the study propose a biomarker for aging, levels of p16Ink4a, but it validates the idea that it might be possible to create a drug that targets senescent cells without harming healthy cells. "I'm cautiously optimistic that this is a really major advance," says Norman Sharpless, a geneticist who studies cancer and aging at the University of North Carolina-Chapel Hill.
"Their results suggest an approach that is within the reach of big pharma today."

The current study used a mouse strain that had been genetically tweaked for rapid aging in order to speed up the experiment; because these mice tend to die early from other causes, the researchers were unable to determine whether increasing the animals' "health span" would also increase their life span. Van Deursen and his colleagues are now beginning a more extensive study in normally aging mice to further investigate the effects of senescent-cell removal. Then, van Deursen says, "the challenge is to translate these findings into a way of getting rid of these cells in humans."
It’s become a running joke in the Linux world that it will be "the year of the Linux desktop," whatever year it happens to be. For years, Linux geeks have dreamed about unseating the Evil Empire of Windows, but that has never happened. Some of this could be attributed to Microsoft’s substantial clout, but part of it lies with the Linux community itself. Linux has never been a mainstream desktop operating system; it has mostly been relegated to programmers and system administrators.

By Programmers, for Programmers
One of the reasons that Linux has failed to appeal to mainstream computer users is that its user base is not made up of mainstream computer users, but of developers. This dates back to the heritage of Unix, which was also developed "by programmers, for programmers." It was developed by some very good programmers, Dennis Ritchie and Ken Thompson. When they were developing Unix at Bell Labs, there wasn’t much attention given to "user-friendliness," given that they were building a system designed for computer science research. This developer orientation has persisted to the present day. Even distros like Ubuntu, which promised to be easier for nontechnical users to install and use, still require a bit of know-how to navigate.

Miguel de Icaza, one of the principal founders of the GNOME project, agrees. "The problem with Linux on the desktop is rooted in the developer culture that was created around it," he wrote. Besides Linux being difficult to install and use, another major problem in his view is the tendency for developers to throw out interfaces and APIs that work perfectly well in favor of something more "elegant." "The attitude of our community was one of engineering excellence: we do not want deprecated code in our source trees, we do not want to keep broken designs around, we want pure and beautiful designs and we want to eliminate all traces of bad or poorly implemented ideas from our source code trees," he added. Windows, on the other hand, stresses backward compatibility to the point where some people think it has the opposite problem.

Lack of a Consistent User Interface
While Windows and Mac OS X give their interfaces a consistent look and feel and issue human-interface guidelines, Linux is much more anarchic. One reason is that the GUI, running under the X Window System, is just another program instead of being intimately tied to the system. In addition to different window managers and desktops, there are a number of different toolkits. Technical users might happily use the Emacs editor, the Midnight Commander file manager and zsh, but a novice user might find the differing interface styles jarring. This has sent novices into the arms of Windows and Mac OS X.

Ripping everything out and starting from scratch is one symptom of the elitism that can permeate the Linux community. Nearly everyone who’s been new to Linux and has asked a question on a forum or IRC channel has been told to "RTFM" (Read The Fine Manual) at least once. Linux programmers are justifiably proud of having built a completely open-source operating system from scratch, working with other programmers all over the world. Sometimes they fail to realize that not everyone is a wizard programmer.

Hardware Support
Another irritating sticking point is hardware support. While writing device drivers can be tedious, devices that have incomplete functionality, or worse, don’t work at all in Linux, seriously hamper adoption. Of course, this isn’t completely the fault of developers.
There are lots of devices out there, and it’s hard to write drivers for all of them. Some designs, such as those of graphics cards, are considered trade secrets, and manufacturers are mum about them. Wireless networking cards suffer from the same problem. Developers have to reverse engineer such hardware to implement at least some functionality, or rely on proprietary drivers.

Windows and Mac Are Good Enough for Most People
The main reason why more people haven’t moved to Linux en masse, even in the face of disasters like Windows 8 and Vista, is that Windows is simply good enough for most people. With Windows XP, ordinary desktop users finally gained full pre-emptive multitasking and, with it, much greater stability. The "Blue Screen of Death" has mostly disappeared, except in the case of serious hardware issues. Even the end of support for Windows XP didn’t prompt a mass migration to Linux. It seems the idea that Windows users would suddenly adopt Linux has been nothing more than wishful thinking. Windows XP users stuck with the system for so long because they weren’t willing to change in the first place; why would they adapt now? Windows 7 and XP users also simply avoided Windows 8. Now that Microsoft is making Windows 10 a free upgrade for Windows 8 and Windows 7 users, it makes more sense for them to upgrade to Windows 10 instead of Ubuntu. Mac OS X, meanwhile, seems to succeed where Linux has failed, offering a Unix-like desktop that’s easy to use. (Read more about the power of Unix in What IT People Can Learn from the Unix Philosophy.)

Linux Is Winning on Mobile
While Linux isn’t a force on the desktop, the world is less dependent on the traditional desktop these days. More people are using Web apps like Google Docs and shifting their computing to mobile devices. Android, based on Linux, is winning, with over 83 percent of the mobile market share. Chromebooks, lightweight laptop computers designed for use with the Web, are also muscling in on Windows from below. The Web apps that people use every day, including those from Google, mostly run on Linux as well. It seems that Linux is winning on everything but the desktop. While Linux is a great operating system, it hasn’t been, and will probably never be, a significant force on the desktop, though it will dominate the developer’s desktop for a long time to come.
Scientific Research Assignment: Hepatitis B Biology Essay

Hepatitis B is an infectious illness caused by the hepatitis B virus (HBV), which infects the liver of humans and causes an inflammation called hepatitis (World Health Organization, 2010). Formerly known as 'serum hepatitis', the disease has caused widespread outbreaks in parts of Asia and Africa, and it is endemic in regions of China. More than 2 billion people have been infected with the hepatitis B virus, including 350 million chronic carriers of the virus (Wikipedia 2010). It is a viral infection that can lead to serious illness or death.

Hepatitis is defined as inflammation of the liver, which can be caused by viruses, alcohol, drugs and other toxins, or, less commonly, by a breakdown in a person's immune system (Hepatitis Australia, 2010). There are five types of viruses that can cause infection of the liver and may produce similar symptoms: hepatitis A, B, C, D and E. The main differences between the hepatitis viruses are how they are transmitted and the effects they have on a person's health. Hepatitis is described as either an acute or a chronic illness. An acute illness lasts only a short time and, although it can be severe, most people recover within a few weeks with no lasting effects; a chronic illness lasts for a long time, often for the rest of a person's life.

Hepatitis B is the most common liver infection in the world. It is caused by the hepatitis B virus, which attacks the liver and can cause pain and swelling (Hepatitis Australia, 2010). The infection damages cells in the liver, which can stop the organ from functioning properly (Oxford Medical Dictionary, 2010).

How is HBV transmitted? HBV is a highly infectious virus. It can be transmitted by infected blood or blood products contaminating hypodermic needles, blood transfusions or tattooing needles, or by unprotected sexual contact (Better Health Channel, 2010). This can happen to people who have unsafe sex without a condom, or who get a body piercing or tattoo from someone who does not properly disinfect and sterilise the equipment. It also happens to drug users who share needles with each other, and to anyone who steps on a needle that has been used by someone taking drugs. HBV can also be transmitted within families when members share razor blades or toothbrushes (Kids Health, 2010). An infected mother can also pass the virus to her newborn baby at the time of birth. The research article 'Perinatal transmission of hepatitis B virus: an Australian experience' states that "reported rates of transmission from mothers who are positive for hepatitis B 'e' antigen vary from 7% to 28%." The article concludes that mothers who are hepatitis B 'e' antigen-positive have very high viral loads: a mother who is positive for hepatitis B surface antigen has a 20% risk of passing the infection to her offspring at the time of birth, and this risk is as high as 90% if the mother is also positive for hepatitis B 'e' antigen (Wiseman et al. 2009).
In other words, according to 'Perinatal transmission of hepatitis B virus: an Australian experience', mothers who are hepatitis B 'e' antigen-positive tend to carry very high viral loads, meaning severe viral infections.

Exposure to HBV can cause acute or chronic infection (Margaret, F (ed.), 2006). Acute infection usually lasts a short time, although it can produce uncomfortable signs and symptoms. If a person is unable to clear the hepatitis B virus from their system after a period of time, the person is said to be chronically infected (Wrong Diagnosis, 2010). Chronic infection means continuous damage to the liver, which can result in cirrhosis, liver failure, hepatocellular cancer and even death. Patients infected during childhood are at greatest risk of developing chronic hepatitis B infection (Centers for Disease Control, 2010).

HBV is a partially double-stranded DNA virus of the family Hepadnaviridae (eMedicine Medscape 2010). It is a hepatotropic virus, meaning it has a special attraction for the liver and preferentially infects it over any other part of the body (Medical Dictionary: The Free Dictionary, 2010); it therefore replicates in the liver and causes hepatic dysfunction. The virus is made up of a nucleocapsid and an outer envelope containing three main hepatitis B surface antigens (HBsAgs), which play a role in the diagnosis of HBV infection. It is thought that the virus causes inflammation of the liver by inducing apoptosis (programmed cell death), which leads to HBV-induced liver injury (Baumert et al. 2007).

The liver is an organ with many important roles in our body; we cannot live without it. The liver helps remove harmful chemicals from the blood, fights infection, helps digest food, stores energy, and stores nutrients and vitamins. It is the only organ in the body capable of regenerating itself and making new liver tissue (NDDIC, 2010).

The hepatitis viruses that cause chronic infection have the capacity to damage the liver because they reproduce there. Over time, the process of fibrosis occurs: as more liver cells are damaged and destroyed, scar tissue takes their place. Severe fibrosis can cause the liver to harden, which keeps it from working normally; this is called cirrhosis of the liver (Hepatitis Australia, 2010). In a small number of cases, serious damage to the liver can lead to liver failure and liver cancer (Hepatitis Australia, 2010). Long-standing infection with the hepatitis B virus may be associated with long-standing inflammation of the liver (chronic hepatitis), leading to cirrhosis over a period of several years. This type of infection greatly increases the incidence of hepatocellular carcinoma (Wikipedia, 2010), which is why exposure to the hepatitis B virus may lead to the development of hepatocellular carcinoma.

The hepatitis B virus can cause illness lasting several weeks, but some people do not become ill: their symptoms may be flu-like, or they may not get sick at all. Children are less likely than adults to have symptoms when infected. A person with the infection will feel tired and lose their appetite, which can last for many weeks. Their skin and eyes may look yellow (this is called jaundice), their urine may look very dark, and they may feel nauseous, which can lead to frequent vomiting.
They might have pain in their joints, pain in their liver, and a fever (Kids Health, 2010). Itchy skin is another possible symptom of all hepatitis virus types. The illness lasts for a few weeks and then gradually improves in most affected people; a few patients may have more severe liver disease and may die as a result of it (Better Health Channel, 2010).

To detect whether or not you have a hepatitis B infection, a liver function test must be carried out. First the doctor or nurse will check your blood to see if your liver is working normally. A liver function test measures the levels of enzymes found in the liver, heart and muscles; enzymes are the proteins that cause or accelerate chemical reactions in living organisms. The laboratory tests include bilirubin, AST, ALT, alkaline phosphatase, GGT and LDH (The Body, 2010):

- Bilirubin is a yellow fluid produced when red blood cells break down. High levels can indicate liver disease, but may also be caused by the antiviral drugs indinavir (Crixivan) and atazanavir (Reyataz).
- AST (aspartate aminotransferase) is used together with the ALT test to detect liver disease.
- ALT (alanine aminotransferase) is used together with the AST test to detect liver disease.
- Alkaline phosphatase: high levels indicate possible liver or bone disease.
- GGT (gamma glutamyl transpeptidase) shows whether other abnormal test results are due to liver problems or bone problems.
- LDH (lactic dehydrogenase) is a general indicator of tissue damage. (The Body, 2010)

There is also liver biopsy, in which a small piece of tissue is removed from the liver using a fine needle and examined under a microscope for inflammation or liver damage; and alpha-fetoprotein, a blood test that can sometimes detect liver cancer (The Body, 2010).

Many people with hepatitis B have no signs of illness and do not realise they have the virus in their body, so hepatitis B is diagnosed through various blood tests, which look for markers of the hepatitis B virus in the blood (Hepatitis Australia, 2010). Figure 1 summarises these tests and what each one shows (Hepatitis Australia 2010):

- Hepatitis B surface antigen (HBsAg): shows that the person is infected with hepatitis B; it can be detected during acute and chronic infection.
- Hepatitis B surface antibody (HBsAb or Anti-HBs): shows that the person has developed immunity to hepatitis B; it can be detected in people who have recovered from hepatitis B or been vaccinated against it.
- Hepatitis B e antigen (HBeAg): shows that the hepatitis B virus is multiplying.
- Hepatitis B e antibody (HBeAb or Anti-HBe): shows that the person's immune system has responded against hepatitis B and the virus is not actively reproducing.
- Hepatitis B core antibody (HBcAb or Anti-HBc): shows current or past infection with hepatitis B.
- Hepatitis B virus DNA: shows that the virus is present and replicating.

An antigen is a foreign substance in the body, such as the hepatitis B virus, and an antibody is a protein that the immune system makes in response to a foreign substance (Hepatitis Australia, 2010).

Normal reference values.
For the ALT test, if a person (age <20 years) has a result of >80 U/L (units per litre), they could have mixed hepatocellular and cholestatic disease. If the result is <80 U/L, a GGT test is needed: a GGT result of >90 U/L points to cholestatic liver disease or bone disease, while a result of <90 U/L indicates isolated elevated ALP, i.e. high serum ALP (Melbourne Pathology, 2010). For predominant hepatocellular pathology (ALT or AST >150 U/L with ALP <200 U/L), the results can be caused by an infection such as hepatitis A, B or C (Melbourne Pathology, 2010).

A study of perinatal transmission of HBV was conducted between August 2002 and May 2008. The participants were pregnant women attending Sydney South West Area Health Service antenatal clinics who tested positive for hepatitis B surface antigen (HBsAg), together with their babies. The babies underwent nine months of follow-up for further virological testing, which included HBV DNA sequencing, and the mothers had clinical and biochemical assessments, including tests for liver enzymes, HBV DNA and hepatitis B 'e' antigen (Wiseman et al. 2009). Before November 2006, HBV serology was performed using the AxSYM microparticle enzyme immunoassay (Abbott Laboratories), a technique in which the solid-phase support consists of very small microparticles in liquid suspension (Mondofacto, 2010). The study found that 213 of 313 mothers had detectable HBV DNA, of whom 91 were HBeAg-positive; of these 213 mothers, 115 had a low viral load, 29 a high viral load and 69 a very high viral load (Wiseman et al. 2009).

To conclude, hepatitis B is one of the most common infectious diseases in the world and a serious problem for those who have it, so we need to guard against it carefully. It is a virus, found in blood and body fluids, that can cause inflammation of the liver, and the resulting hepatitis may be either acute or chronic.
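As an illustration only (not clinical guidance), the ALT/GGT decision flow quoted above from Melbourne Pathology can be transcribed into a few lines; the thresholds are exactly those in the text, and the under-20 age restriction is assumed to apply throughout:

```python
def interpret_elevated_alp(alt_u_per_l: float, ggt_u_per_l: float) -> str:
    """Literal transcription of the flow quoted above (age < 20 years);
    illustrative only, not clinical guidance."""
    if alt_u_per_l > 80:
        return "Mixed hepatocellular and cholestatic disease"
    if ggt_u_per_l > 90:
        return "Cholestatic liver disease or bone disease"
    return "Isolated elevated ALP (high serum ALP)"

print(interpret_elevated_alp(120, 40))   # mixed picture
print(interpret_elevated_alp(50, 120))   # cholestatic liver or bone disease
print(interpret_elevated_alp(50, 40))    # isolated elevated ALP
```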
There are millions of people around the world who have fallen through the cracks and whose voices are not part of the conversation. Roughly 40 million people have found themselves "internally displaced" or "refugees" due to circumstances beyond their control. Some are professors, doctors, engineers, writers and artists. Some were also innovators solving their local needs and running their own businesses before they were forced to leave their homes; now they are identified by an ID number and must rely on hand-outs to survive.

This group of extraordinary, resilient people, who never sought to become internally displaced or refugees, find themselves in foreign countries where they may or may not face language barriers. While they have brought their talents and knowledge with them, they are rarely asked to be a part of the solution; they languish in refugee camps, wasting their talents with little or no opportunity to put them to productive use. NGOs are overwhelmed by the continued crises that plague our world, and, coupled with donor fatigue, this means a group of brilliant minds is being underutilized.

I have met Sudanese refugees who against all odds have become acclaimed music artists and supermodels; Congolese refugees in Tanzania who taught themselves English in order not to fall behind in school; Somalis who figured out a way to run an Internet café in the middle of the largest refugee camp in the world, creating a business in the middle of the desert; another Somali who has finished Princeton University and is currently completing his Masters in Finance at EBS; and Palestinian refugees who, without a roof over their heads, have studied hard and are about to begin pre-med at Weill Cornell. All of them have something in common: someone saw their potential and decided to invest in them. They were given a chance even though they had fallen through the gaps. But these are exceptional cases, not the norm.

Imagine if this group of people were part of the whole, if they were asked for their opinion and were invited to the problem-solving table. After all, they are the ones who want to go home to the country they were forced to leave behind. They are the ones who would work the hardest at solving their problems if they had access to finance, innovation, health and education.

Author: Lorna Solis is the Founder and Chief Executive Officer of Blue Rose Compass, and is also a World Economic Forum Young Global Leader.

Image: Refugees are seen next to their tents in a refugee camp. REUTERS/Asmaa Waguih
Laboratory Diagnosis of Ethylene Glycol Poisoning: The Cup Is Half Full?

Ishwarlal Jialal MD, PhD, FRCPath, DABCC; Sridevi Devaraj PhD, DABCC
DOI: http://dx.doi.org/10.1309/AJCPTZO0HRPKVPWM
Pages 165-166. First published online: 1 August 2011

The major toxic alcohols that pose a risk to human health, and for which measurement of levels is requested from the clinical laboratory, are ethanol, ethylene glycol, isopropanol, and methanol. Most clinical laboratories are equipped to accurately quantitate ethanol levels, mainly by an enzymatic assay on an automated chemistry analyzer platform. However, the measurement of the other 3 alcohols is more challenging [1,2]. Some relevant laboratory tests that assist in the differential diagnosis are listed in Table 1. An increase in the osmolal gap is evident in the presence of all of the alcohols. In addition, methanol, ethanol, and ethylene glycol also produce a metabolic acidosis with an increased anion gap, separating them from isopropanol. A relevant clue to isopropanol ingestion is positivity for acetone in the urine or blood without attendant hyperglycemia.

In addition to an increase in the osmolal gap and an anion gap acidosis, ethylene glycol ingestion can result in a urinalysis positive for calcium oxalate crystals (monohydrate and dihydrate), because ethylene glycol is converted to oxalate, which can chelate calcium and result in hypocalcemia. It needs to be emphasized, however, that the deposition of calcium oxalate can occur in many tissues, including the kidney, and can result in acute renal failure, pulmonary dysfunction, myocardial dysfunction, and impaired neurologic function. Furthermore, the documentation of calcium oxalate crystals is hampered by the fact that they are detected in only around 50% of patients. During metabolism, ethylene glycol is broken down to glycolic acid and then to oxalic acid, beginning with the action of alcohol dehydrogenase. Oxalate binds calcium, leading to calcium oxalate crystals, which are deposited in many tissues but are most readily detected in the urine. Glycolic acid contributes to the central nervous system manifestations, mortality, and a significant metabolic acidosis (anion gap acidosis) [3]. These metabolites also contribute to significant renal tubular necrosis. Thus, while these tests provide valuable clues for the differential diagnosis, measurement of the relevant alcohol itself is the definitive way to rule in or exclude a given alcohol poisoning or ingestion.

The preferred assays are summarized in Table 1. For methanol and isopropanol, gas chromatography (GC) is the confirmatory assay; for ethylene glycol, the preferred method is GC with flame ionization detection (GC-FID). However, these instruments and the needed expertise are not readily available in most clinical laboratories. In this issue of the Journal, Juenke et al [4] present the validation of a modified kinetic time and data analysis technique in a rapid enzymatic assay that can be adopted on open chemistry analyzer platforms for the rapid measurement of ethylene glycol levels. The Catachem assay is based on a bacterial enzyme, glycerol dehydrogenase [5-7]. This enzyme oxidizes ethylene glycol in the presence of NAD (nicotinamide adenine dinucleotide), generating NADH (the reduced form of NAD) and producing an increase in absorbance at 340 nm, which is detected spectrophotometrically by automated chemistry analyzers.
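For orientation, the two screening gaps mentioned above can be computed as follows; the formulas are the standard textbook ones in conventional US units, not anything specific to this editorial:

```python
# Standard textbook gap calculations (conventional US units assumed).

def calculated_osmolality(na_meq_l, glucose_mg_dl, bun_mg_dl, ethanol_mg_dl=0.0):
    """2*Na + glucose/18 + BUN/2.8 (+ ethanol/3.7), in mOsm/kg."""
    return 2 * na_meq_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8 + ethanol_mg_dl / 3.7

def osmolal_gap(measured_mosm_kg, na, glucose, bun, ethanol=0.0):
    """Measured minus calculated osmolality."""
    return measured_mosm_kg - calculated_osmolality(na, glucose, bun, ethanol)

def anion_gap(na_meq_l, cl_meq_l, hco3_meq_l):
    """Na - (Cl + HCO3), in mEq/L."""
    return na_meq_l - (cl_meq_l + hco3_meq_l)

# A hypothetical pattern consistent with ethylene glycol ingestion: both gaps raised.
print(osmolal_gap(320, na=140, glucose=90, bun=14))  # 30.0 mOsm/kg (normal < ~10)
print(anion_gap(140, cl_meq_l=100, hco3_meq_l=10))   # 30 mEq/L (normal ~8-16)
```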
In the original method proposed by Catachem (Bridgeport, CT), the difference between the absorbance readings at 2 time points was used to determine the ethylene glycol concentration. In this study, the modification by Juenke et al [4] was to determine the slope of the line by measuring absorbance differences at several points, starting at a later time point than the original 2-point design. For samples containing ethylene glycol, the difference between the 2 determinations is minimal; however, for samples containing compounds that mimic ethylene glycol (eg, propylene glycol), the difference between the 2 methods is substantial, because the slope of the line flattens after an initial increase in absorbance. In this way, Juenke et al [4] were able to demonstrate that the modified kinetic parameters applied to the Catachem reagent system accurately distinguished propylene glycol, 3-butanediol, and ethanol from a true measurement of ethylene glycol. The modified assay, in addition to being precise (intra-assay coefficient of variation <3.0%; interassay coefficient of variation <8.0%) and linear (up to 300 mg/dL), was also able to eliminate false-positives from a variety of interfering substances, including combinations of ethylene glycol, glycerol, propylene glycol, 2,3-butanediol, formic acid, n-propanol, isopropanol, acetone, methanol, ethanol, glycolic acid, polyethylene glycol, oxalic acid, glyoxal solution, glyoxylic acid, 1,2-butanediol, 1,4-butanediol, 1,3-propanediol, 1-butanol, 1,3-butanediol, DOT 3 brake fluid, and 1-octanol. These were generally flagged with a rate error. An important additional observation is that there was no interference from fomepizole, which is commonly used for the treatment of ethylene glycol poisoning [8]. Juenke et al [4] also performed a patient sample comparison and showed excellent correlation with the "gold standard" method, GC-FID. Furthermore, there seemed to be no interference from glycerol under the defined assay conditions.

The advantage of this automated assay over the GC method is a substantial decrease in labor costs and a significant reduction in turnaround times, by approximately 10 hours; results can be made available in 30 minutes instead of 3 hours. Thus, this assay is clearly advantageous to an emergency department physician dealing with this serious disorder. Future studies will need to confirm the performance and usefulness of this automated testing in other clinical laboratories. Assays for glycolic acid also need to become more readily available, because levels correlate with symptomatology and mortality; testing for glycolic acid remains a clear advantage of GC, which can readily assay both ethylene glycol and its toxic metabolite. The advent of this modified enzymatic assay goes a long way toward enhancing the role of the laboratory in the management of a life-threatening condition, ethylene glycol poisoning. The assay now needs to be embraced by laboratorians to confirm its validity in identifying this serious form of intoxication, which constituted 2% of poisonings reported in the United States in 1999 [9].
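Purely schematically (this is simulated data, not the Catachem protocol or the authors' results), the following sketch shows why fitting a slope over a late window discriminates a steadily reacting substrate from a mimic whose absorbance jumps early and then flattens:

```python
import numpy as np

t = np.arange(0, 10, 0.5)                    # reading times, minutes (arbitrary)
a340_true = 0.05 + 0.030 * t                 # true substrate: steady NADH production
a340_mimic = 0.05 + 0.25 * (1 - np.exp(-t))  # mimic: early jump, then a plateau

late = t >= 4                                # start the fit at a later time point
for name, trace in [("ethylene glycol", a340_true), ("mimic", a340_mimic)]:
    slope = np.polyfit(t[late], trace[late], 1)[0]
    print(f"{name}: late-window slope = {slope:.4f} absorbance/min")

# The true substrate keeps its slope (~0.030); the mimic's collapses toward
# zero, mirroring the flattening the authors use to flag a rate error.
```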
Free Will, Searle, and Determinism

Apropos determinism: I recently looked into John Searle's latest (2007) book, Freedom & Neurobiology. As usual, he gets his knickers into the traditional twist that comes from being a physical determinist and an unacknowledged romantic dualist. In this connection, the following line of reasoning occurred to me.

Searle says (p.64) that the conscious, voluntary decision-making aspects of the brain are not deterministic, in effect asserting, for our purposes, the following: if there is an algorithm that describes conscious, voluntary decision-making processes, it must be (at least perceived as) non-deterministic. Although it would be possible to extend the definition of an algorithm to include non-deterministic processes, the prospect is distasteful at best. How can we respond to this challenge?

Searle's non-determinist position in respect of free will is his response (p.57) to the proposition that, in theory, absolutely everything is and always has been determined at the level of physical laws. "If the total state of Paris's brain at t1 is causally sufficient to determine the total state of his brain at t2, in this and in other relevantly similar cases, then he has no free will." (p. 61)

By way of mitigation, however, note that quantum mechanical effects render the literal total-determinism position formally untenable, and a serious discussion requires assessing how much determinism there actually is. As Mitchell Lazarus pointed out to me, in neuro-glial systems, whether an active element fires (depolarizes) or not may be determined by precisely when a particular calcium ion arrives, a fact that ultimately depends on quantum mechanical effects. On the other hand, Edelman and Gally (2001) have observed that real-world neuro-glial systems exhibit degeneracy, which is to say that algorithmically (at some level of detail) equivalent consequences may result from a range of stimulation patterns. This would tend to iron out, at a macro level, the effects of micro-level quantum variability. Even so, macro catastrophes (in the mathematical sense) ultimately depend on micro rather than macro variations, again leaving us with not quite total determinism.

Where that leaves us is that we make decisions 1) precisely when we think (perceive) we are making them, and 2) on the basis of the reasons and principles we think we act on when making them. That the processes underlying our decision-making are as deterministic as physics will allow is, I think, reassuring. It seems to me that this is as good a description of free will as one could ask for. When we have to decide something, we do not suddenly go into mindless zombie slave mode during the gap and receive arbitrary instructions from some unknown free-will agency with which we have no causal physical connection. Nor would it be desirable for the process to be non-deterministic: to hold non-determinism to be a virtue would be to argue for randomness rather than consistency in decision-making. Rather, we simply do not have direct perceptual access to the details of its functioning.
Advantages & Disadvantages of Electronic Paycheck Bank Deposits

Your state may have pay stub laws regarding direct deposit.

Electronic paycheck bank deposits are also called direct deposit. The process requires an employer to put employees' pay into their bank accounts by payday. Most states forbid employers from making direct deposit mandatory; however, because of its advantages, many employers encourage their employees to sign up for the service. As an employee, consider the pros and cons of direct deposit before you accept it.

Time Saver
If you receive a live check rather than direct deposit, you likely cash it in person at a financial institution. Depending on the day, you may have to stand in line at the bank for a long time, which can disrupt your schedule for the rest of the day. With direct deposit, no trip to the bank is needed. Further, you do not have to make arrangements to obtain your paycheck if you are on vacation or absent from work due to an illness.

Money Saver
If you have a bank account but not direct deposit, you likely deposit your paycheck into your bank account in person. It can be tempting to make a withdrawal simply because you are at the bank making a deposit. Direct deposit limits the temptation to make cash withdrawals, and direct deposit transactions are typically safe and reliable, while cash is susceptible to being stolen or lost.

Direct deposit also allows you to set up multiple accounts and to allocate funds accordingly. For example, if you are married and split bills with your spouse, both of you can allocate a percentage or a flat amount of your paychecks to go toward your bill-payment account. This helps to ensure proper budgeting while giving you control over the amount of money that goes into the account you share with your spouse.

Early Pay
If you receive a live paycheck, you likely don't receive it until payday, and if the payday falls on a holiday or weekend, you must wait until the next business day to cash it. Direct deposit is processed ahead of the payday; therefore, if the payday falls on a weekend or holiday, you're paid early, on the prior business day.

Paperwork Completion
One of the downsides of direct deposit is the paperwork that must be completed for the service to take effect. Your employer likely has a standard form that employees must fill out; if it is improperly completed, you may not receive your first direct deposit on time. The process requires you to write your bank account and routing numbers on the form and, for checking accounts, to attach a voided check. Your employer performs a pre-notification on your account, which serves as a test prior to the actual direct deposit transaction. This pre-notification must be done days in advance of the payday; because of this, it can take one or two pay periods for your first direct deposit to occur. During this time, you must be careful about automatic bill payments from your account: if your direct deposit doesn't happen as planned, you may incur fees.

Third-Party Access
If you have lenders that you pay via automatic drafts on your account and you are unable to make your payments on time, those lenders can draft your account, because they have your bank account information. This puts you in a tough spot, because you run the risk of depleting your finances.

Third-Party Reliance
Direct deposit largely depends on the reliability of your financial institution.
Therefore, if your bank undergoes system glitches, it can affect your finances and whether you are paid on time. For this reason, use a financial institution that you are satisfied with prior to making direct deposit arrangements.
Acrylic bathtubs pros and cons - to dot the "i"
By Admin | Plumbing And Water Supply
15 April 2016

The purpose here is not to promote a particular product, but to try to give objective advice to potential owners. Let us examine what an acrylic bathtub actually is, learn its pros and cons, check the prices, and draw the right conclusions.

Acrylic bathtubs - advantages and disadvantages
1. How acrylic bathtubs are made
2. What advantages are inherent in acrylic baths
3. Acrylic baths - every material has its cons
4. Tips for choosing an acrylic bath

How acrylic bathtubs are made

Baths colloquially called "acrylic" are actually composite products with a layered structure, produced from several materials. The surface that comes into contact with water and skin is made of acrylic, an initially transparent plastic. It is given its color by a special powder added to the liquid polymethylmethacrylate (PMMA) prior to the polymerization step. Sanitary acrylic contains an additive that prevents the growth of harmful bacteria on the walls of the bath, as well as other components that give the plastic its valuable qualities:
• Plasticity: the material is easily shaped at high temperature;
• Colorfastness: acrylic products do not fade;
• Hygiene: thanks to the plastic's low porosity, dirt does not stay on the surface;
• Maintainability: small defects are easily repaired.

Production uses sheets of material that differ in thickness, size, and manufacturing technology:
1. "Pure" (cast) acrylic is produced by pouring the liquid into a sealed mold between two glass plates and then polymerizing it in a steam oven. The result is a hard, wear-resistant sheet material of high quality.
2. Two-layer plastic (co-extruded acrylic) is made by simultaneously extruding and melt-bonding PMMA with another polymer, acrylonitrile butadiene styrene (ABS). Once hardened, the plastic has a glossy surface on one side and a high-impact base on the other. Bathtubs produced from these two kinds of sheet have advantages and disadvantages that differ significantly, owing to the different characteristics of the materials.

High-quality sanitary ware is made from "pure" acrylic. In manufacturing a bath from this material, the plastic sheet is converted into a tub through four successive operations:
1. Forming: a sheet 8 mm thick is heated and vacuum-stretched to the size of the workpiece;
2. Reinforcement: the reverse side of the thin acrylic shell is covered with a composite layer (usually polyester resin mixed with glass fibers);
3. Trimming: surplus plastic is removed and the necessary holes are drilled;
4. Assembly: the workpiece is mounted on a load-bearing frame and polished.

Frame: without a load-bearing frame, an acrylic bathtub will not sustain the weight of a person.

The method of manufacturing a bath from two-component sheet is simpler, and its cost is lower. But in the end the performance of such an acrylic bath is much worse: the hardness of the coating and the reliability of the whole structure are low, even though the two-layer plastic used in it is itself expensive.

What advantages are inherent in acrylic baths

+ Light weight. An acrylic bath weighs no more than 40 kg, so it is easy to carry and does not burden the floors of a house.
+ Sufficient strength. Products made by the casting technology withstand heavy blows with minimal damage.
+ Good heat capacity. The drawn water retains its original temperature for a long time (up to an hour).
+ Variety of forms. Corner, oval, rectangular, or sinuously curved: you can pick a plastic tub for any interior.
+ High sound attenuation. The polymer structure absorbs the noise of running water well.
+ Comfortable feel. Acrylic surfaces are smooth but not slippery, and pleasant to the body.
+ Variety of colors. It is not necessary to purchase snow-white sanitary ware; when buying a bath of any shade, you need not fear that it will fade.
+ Easy care. No special chemicals are needed to clean the walls; a simple soap solution suffices.
+ Possibility of restoration. Chips, scratches, and cracks are eliminated on the spot with polish and repair compounds.
+ Wide functionality. Acrylic baths can be equipped with all kinds of options: aero and hydro massage, automatic overflow, and others.

A substantial part of these advantages applies only to first-class products made from cast acrylic; their lifespan is 10-25 years. Baths with thin ABS walls were designed from the outset for brief service, about 3-4 years.

Acrylic baths - every material has its cons

- Sensitivity to alcohol-based and powdered cleaners. Washing an acrylic tub requires a careful attitude in order to avoid damaging the surface and causing premature repairs. Cleaning compositions containing harsh chemicals and abrasive particles cannot be used.
- Mechanical fragility. Under a heavy weight the bath can bend and "play," which can cause some discomfort. If a heavy object falls into it, there is a high probability of cracks or holes. Major damage is not always repairable, and sometimes it is easier to replace the bath.
- Insufficient resistance to high temperature. The plastic melts at 160 °C, so under the influence of too-hot liquid an acrylic bathtub becomes softer and can deform. For this reason it is recommended to pour cold water first, and then hot.
- Acrylic plumbing costs more than its steel and cast-iron counterparts. Given the limited lifespan, acquiring this type of bath is hardly a sound financial investment.

Tips for choosing an acrylic bath

1. Decide on the dimensions available in your bathroom;
2. Take a flashlight to the store: it will make it easier to detect thin spots in the sides of the bath;
3. Press on the surface of the bath: you can test the rigidity of the design;
4. Check a cut edge of the product: there should be two layers, acrylic and resin;
5. Run your hand over the surface: roughness indicates poor-quality goods;
6. Pay attention to the shape: a durable bath does not have too complex a configuration;
7. Ask the sales adviser for the specifications of the selected sample.

Be aware that the cost of an acrylic bath depends directly on the thickness of its walls. In expensive, high-quality examples the sides are 4-6 mm thick, while cheap, low-grade products have walls of 2-4 mm. A simple acrylic bath costs a minimum of 6,000-10,000 rubles (Bach, Eago, Victoria); optional equipment raises the cost by about 20,000 rubles. A luxury product with an antibacterial coating, thickened walls, and chrome handrails from a European manufacturer (Teuco, Villeroy & Boch, Jacob Delafon) will cost upwards of 60,000 rubles.

Every material traditionally used for plumbing fixtures has its own disadvantages. Whether the negative qualities of acrylic bathtubs are significant shortcomings, or just features one needs to accommodate and enjoy, is an issue each buyer resolves individually.
The Arguable Attraction of Public Health Organizing

Rock Creek.  Besides fishing, reading, hiking, and sleeping, the other nice thing about vacations out here in the quiet, off-the-grid country is that one has that rare, precious, and luxurious feeling of having time to think. Not just to put two plus two together, but to arrange them in patterns, cancel out the highs and lows, stroll up the blind alleys, and try to bridge the deeper canyons.

Reading Linda Hirshman's brilliant history of the gay movement, Victory: The Triumphant Gay Revolution, I was struck by a point she makes almost off-handedly in a long discussion of the tragic AIDS epidemic and the effective organizing around the issue by ACT-UP and many, many others: that in a "liberal democratic society," as she constantly refers to the United States, it is almost impossible to deny citizens access to public health resources. I'm sure one could quibble about this, especially given the knife fight of recent years over affordable health care. Nonetheless, Hirshman's point is that in representative, constituent government it is impossible to endlessly deny the benefits of public health services, regardless of the distances in the partisan and ideological divide. Her proof is the acceleration of funding, to the tune of several billions of dollars, to find appropriate medicines and treatment programs for AIDS victims.

How might we more effectively organize around the health issues of low-and-moderate income families, despite the obvious differences of class and race, to make a similar level of life-and-death impact? Asthma, obesity, hypertension, and higher incidences of deaths from cancer, alcohol, and the like are a toxic combination, widely felt across classes, even though more concentrated among lower income families than others. Overall improvements in housing and environment would make a huge difference in lives and economic outcomes, but a couple of billion dollars would hardly fund the research and brochures, and would not dent the real issues.

Several years ago Mayor Bloomberg, from his unassailable position as not only Mayor of New York City but also richer than Croesus, made a big difference in public health by chasing cigarettes out of bars and restaurants and into the privacy of one's own home. I can remember discussions with representatives of his foundation several years ago and their common-sense argument that they wanted to save lives in cities by forcing cutbacks in smoking, reducing traffic fatalities, and improving urban environments by making cities greener. These are hardly front-page headline issues, but they make a real difference.

Constructing the campaigns would be difficult. At one level, "liberal democratic society" in the United States has rationalized an acceptance of higher mortality rates by class. No better evidence exists than the callous way that politically expedient governors in Texas, Louisiana, and other states are right now denying the expansion of healthcare to poor, uninsured families, in the crassest political expediency and immorality that we have seen in a long time. Blaming the lifestyles of the poor hearkens to racism and discrimination; that and a couple of dollars might get you on a bus (which hardly anyone but the poor rides anyway, so there goes public transit money!), but it won't buy relief here.
The only exception that comes quickly to mind, in thinking about the death sentence imposed on lower income families through unrelieved bad health conditions, is the "class exception" of veterans. The all-volunteer army has been heavily representative of lower incomes and racial minorities, and for very good reasons governments at almost all levels, particularly through the Veterans' Administration, provide first-class health care to veterans regardless of income. Risking your life for your country, especially when no one else will do so, seems to be the only ticket for a working stiff to get good, lifelong health care in modern America.

More research needs to be done. I was hearing about a food-processing additive, used by some food manufacturers, that leaves you hungry for more. Forced obesity triggered by artificial additives sounds like a drug, and a drug that for public health reasons should be banned by the Food & Drug Administration. Maybe the price of beer and alcohol should not just be doubled with taxes, as Canada does, but quadrupled, with sales restricted to even fewer locations and venues, though this has not worked on the Indian reservations. Maybe there are enough middle income families with children with asthma that an alliance could be built with lower income families?

We need a lot more than a couple of days of vacation to think all of this through.
Can you talk about the challenges Tiffany faces?

My daughter Tiffany has overcome a huge obstacle. She broke 12 vertebrae and ended up in a wheelchair and a chest brace for two years. Now she is walking and functioning like a normal teenager, and that is huge. She's been fracturing bones practically since she was born. So she's been in and out of casts, braces, walkers, crutches, you name it, through her whole life.

Could you speak a little about Tiffany's medical condition, osteogenesis imperfecta?

It's a genetic disorder. She has one of the rarest forms. She has a duplication of Chromosome 1 that is just being studied as we speak. She is being studied by the National Institutes of Health. She receives the drug Zometa every six months, which is a bone stimulator to help her grow bone, and it actually is working, because she grew 66 percent of her bone mass in the past two years. It is something she will have to deal with the rest of her life.

What successful parenting strategy can you share with other parents?

She learns by example. I'm a nurse. I'm always willing to help someone, or even some animal, that's in need. I'm the first to stop at a car accident, whether I know the person or not. So I hope she is learning by example.
Sunday, 6 March 2016

Nikolaus Harnoncourt - his true significance

Nikolaus Harnoncourt has died, weeks after posting a hand-written farewell note on his website, a personal touch which shows the measure of the man (read it here). Harnoncourt's values transformed the whole way in which music is approached. His insights into the spirit of the baroque illuminate approaches to repertoire far beyond his own areas of expertise: respect for the composer and period, respect for individuality, and well-informed experimentation. Note, well-informed and disciplined, not self-indulgence for the sake of ego. Harnoncourt connected the adventurous spirit of the baroque to modern music making, linking European performance practice to its fundamental roots. He wasn't the only one to do so, but it's no exaggeration to say that, without him, European music wouldn't be what it is. I've written quite a bit about the way baroque values apply to new music - there are connections between baroque values and Boulez!

It's highly significant that the start of his career coincided with the boom in commercial recording, when music was packaged and made available as a consumer product, reaching audiences who didn't necessarily have musical grounding but were shaped by what they bought. In principle, that's no bad thing, but consider the values of the Cold War: conformity, fear of the unknown, dependence on the trappings of success. Hence the taste for "classics," for "interventionist" interpretation, for big, flashy orchestras. Harnoncourt's interest in the baroque stemmed from reappraising the past, when live performance connected players and audience, when music was made in nice surroundings, but where the emphasis was on serious listening. Harnoncourt made his own instruments, not just because he was a sculptor but also because he wanted to understand the physicality of sound and how each instrument has an individual voice. He studied original manuscript scores to get a better idea of how music might have sounded to a composer. The idea that historically-informed performance is wimpy and weedy is nonsense. "(We) don't eat baroque food!" he said, meaning that no one can truly replicate the baroque mindset. "I'm not a warden in a museum." What mattered were principles: clarity, discipline, integrity. Maybe even the idea that we don't know everything and need to keep learning.

HERE is a link to a documentary made for Harnoncourt's 80th birthday. At first it might seem it hasn't much to do with music, but persist. Its real title is "A Journey into the Self." What we see is a portrait of the man and the motivations behind what he did. Hence, no dogmas, no formulae. See also my "Harnoncourt against the safe and bland," which has a link to a BBC interview which is still live.
photoreception: the physiological perception of light. pho·to·re·cep·tion (fō’tō-rĭ-sěp’shən); adjective pho’to·re·cep’tive (-tĭv).

Read also:
• Photorealistic
• Phreaking [freek] /frik/ noun 1. see phreak. verb (used without object), phreaked, phreaking 2. to act as a phone phreak. verb (used with object), phreaked, phreaking 3. to tamper with (telephones) as a phone phreak does. /ˈfriːkɪŋ/ noun 1. the act of gaining unauthorized access to telecommunication systems, esp. to obtain free calls. n. 1972, originally in […]
• Phreatic [free-at-ik] /friˈæt ɪk/ adjective, Geology 1. noting or pertaining to ground water. 2. noting or pertaining to explosive volcanic activity involving steam derived from ground water: a phreatic explosion. /frɪˈætɪk/ adjective 1. (geography) of or relating to ground water occurring below the water table. Compare vadose.
• Phreatic zone (frē-āt’ĭk): a subsurface zone of soil or rock in which all pores and interstices are filled with fluid. Because of the weight of the overlying groundwater, the fluid pressure in the phreatic zone is greater than the atmospheric pressure. Compare vadose zone.
Gravity Probe B: Testing Einstein's Universe
Special & General Relativity Questions and Answers

What does the equation look like that shows how gravitational radiation is lost from the binary pulsar system?

What astronomers observed in the Hulse-Taylor Pulsar was a decrease in the orbital period of the two neutron stars. From general relativity, it was possible to predict, mathematically, how the period ought to change in time as the binary system emitted gravitational energy while the orbits of the neutron stars were being 'circularized'. The predicted formula for the period change, P-dot, can be found in the excellent book by Stuart Shapiro and Saul Teukolsky, Black Holes, White Dwarfs and Neutron Stars, and it looks like this:

P-dot = -1.202 x 10^-12 M2 (2.8278 - M2)

where M2 = 1.41 solar masses, the mass of one of the neutron stars determined by observation and the application of Kepler's Laws. The result is a predicted period change of P-dot = -2.40 x 10^-12, while the observed value is -2.30 +/- 0.22 x 10^-12. Theory and observation thus agree to better than 10 percent, which strongly supports gravitational radiation leakage as the simplest explanation for the orbital decay.
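As a quick sanity check, the arithmetic above can be reproduced directly. This is a minimal sketch in Python; the constant, the mass, and the observed value are simply the figures quoted in the answer.

# Predicted orbital period decay (P-dot) of the Hulse-Taylor binary pulsar,
# using the formula quoted above from Shapiro & Teukolsky.
M2 = 1.41                        # neutron star mass in solar masses (from the text)
p_dot_predicted = -1.202e-12 * M2 * (2.8278 - M2)
print(f"Predicted P-dot: {p_dot_predicted:.3e}")        # about -2.40e-12

p_dot_observed = -2.30e-12       # observed value, +/- 0.22e-12 (from the text)
fractional_diff = abs((p_dot_predicted - p_dot_observed) / p_dot_observed)
print(f"Fractional difference: {fractional_diff:.1%}")  # about 4%, i.e. agreement to better than 10%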
Geology: kind of rock?

Dear Robert: I was picking up rocks for my husband; he farms, and rocks break the equipment. I found a rock that reminds me of a turtle's shell, but it looks like some kind of cartilage joins the pieces together. It is as hard as the rock. A piece was broken off, and the inside is black and dark green. I haven't ever seen a rock like this one. Could you possibly have an idea? I would really like to know. Thank you, Teresa Davis

Hi Teresa,

Without a picture it really is impossible to reliably identify your rock. Also, you don't mention where you found it or how large it is, though I suspect it's at least fist sized, since you mention your husband removes rocks that can damage his farm equipment. However, your description of a "turtle's shell" instantly suggests a rock called a septarian nodule or concretion.

A concretion is a hard, massive sedimentary rock. The original rocks are usually limestone or clay that, because of mineral replacement, have become much harder than the rock that surrounds them. They are usually roughly spherical in shape but can be very irregular. The word is derived from the Latin "con," meaning together, and "crescere," meaning "to grow." They appear to grow in place in sedimentary layers, and while the process is not well understood, there is usually a nucleus of some kind within the rock. A "nodule" is much the same as a concretion, except that it usually doesn't have a nucleus at its center; it also grows in place. Most times, as the rock dries out and lithifies, many cracks form inside. Mineral-laden water flows through these cracks and slowly fills them up. The result is a kind of tree pattern inside the rock and a set of polygonal cracks on the surface. They have long been misinterpreted as organic, and some have even been described as fossilized turtles. [A photograph of the outside of a septarian from Arkansas, and a link to a site with more information, including an interior image, originally appeared here.]

The problem is that you describe the interior as "black and dark green." It isn't impossible that the rock is a septarian, but they are not normally that color. It probably still is some kind of sedimentary rock, but to really identify it there is no getting around an image. If you can't send me a picture, try taking it to the geology department of your local college, or the Geological Survey (or equivalent) of your state government.

If you still prefer my help, please send me another question with a clear picture that includes something for scale (a coin works well). Also, tell me what state you are in. Sometimes you can identify a rock just by knowing where it comes from. I am still going to go with your rock being a septarian nodule, barring any additional information.

Hope this helps.
The deliberate rearrangement of the boundaries of congressional districts to influence the outcome of elections. The original gerrymander was created in 1812 by Massachusetts governor Elbridge Gerry, who proposed a district for political purposes that looked like a salamander. Gerrymandering allows the concentration of opposition votes into a few districts to gain more seats for the majority in surrounding districts, or the diffusion of minority strength across many districts.
Source: The Center for Voting and Democracy

While Webster 1913 got the definition of "gerrymander" correct, he was slightly vague on the history. Elbridge Gerry, a Democratic-Republican, was Governor of Massachusetts in 1812 and had a legislature of his own party serving with him. In order to secure increased representation in the State Senate, they redistricted the state, dividing it up so that the Federalist minority would not be able to elect a true percentage of the legislature. As a result, a district in Essex County was formed with a very irregular outline. Benjamin Russell, editor of the Columbian Centinel, hung a map of the new district in his office. Gilbert Stuart, a visiting painter, saw this map and noticed the peculiar outline of the district in Essex County; he added a head, wings, and claws to it. Stuart exclaimed, "This will do for a salamander." "No," said Russell, "a Gerrymander." Thus, to "redistribute a state to get the maximum possible representation for one party at the expense of the other" became known as "gerrymandering." The governor was quite insulted by the name, and voiced his objections. Within a few days, the editor stopped using the phrase. However, Gerry only held office for another five months, and the day after Gerry stepped down, Russell had the phrase right back on the front page.
Source: My AP English notes. Node your homework! The History Channel's website was used to check most facts, dates, and names.

Gerrymandering is the deliberate redistricting of boundaries to influence the outcomes of elections for congressional districts. Historically speaking, it's been around since 1812, when Elbridge Gerry, governor of Massachusetts, created a district in the shape of a salamander. Salamander + Gerry = Gerrymander: his last name plus the lizard theatrics are why it is called "gerrymandering." It maximizes the vote of a party's support groups while minimizing the vote of the opposition. Redrawing district lines is rather influential: states have the ability to pack, or concentrate, as many voters of one type into one district as they possibly can, to offset or reinforce a voting pattern. Congress passed a regulation on redistricting in 1967: all representatives must now be elected from single-member districts. However, even a 1982 amendment to the Voting Rights Act of 1965, which was supposed to protect minorities during redistricting, has led to states gerrymandering again, "lawfully."

Population Control Impossible - Blame the Census

Every ten years a census is taken. Districts do in fact change: people move in, people move out, and the concentration of people should change district lines. Now, if I were a politician trying to run for a congressional seat, I'd happily say gerrymandering is okay. But ethically speaking, it just seems to be a nasty tool to disallow fair voting. It can be used not only to protect an elected official, but also to push one out.
Even though the state of Utah tried to redraw its lines to prevent the re-election of a particular official, and failed, gerrymandering is still being used for unethical purposes.

Non-partisan Redistricting

An organization with no interest in the political outcome should be in charge of redistricting. That will probably never be the case on a full-scale basis; however, some states have adopted it already. That still has its own problems, too. Someone truly disinterested in the political outcome would concern themselves only with the population figures: not the people's demographics (although accounting for those can be good), not the geography, and possibly not even the city limits. It's problematic when lines are drawn for demographic reasons. Minorities can be grouped with majorities to silence them. Poor people can be grouped with the rich, or vice versa. A bad part of town might be divided in two just to shield a politician from backlash for his actions or inactions. It would almost be wiser to leave district lines alone completely than to alter them for demographics.

Some nations, such as the UK and Canada, authorize non-partisan organizations to redistrict. That's definitely a step in the right direction against gerrymandering. It prevents the ruling political party, like the Republicans in our state, from keeping themselves in control. Although Democrats in Utah may argue they are still affected by the "wasted vote effect," they could at least influence their own district if it isn't gerrymandered. Elections are already uncompetitive enough in Utah without gerrymandering. Utah is far more likely to become a battleground state if voter concentrations aren't tampered with, because then elections aren't blowouts. The closer elections are, the more likely people are to be interested in them. At least some states have started to develop non-partisan commissions; Washington, Arizona, Rhode Island, and New Jersey have all created one.

The advantage for incumbents is unbelievable, and if you don't believe gerrymandering is still going on, take a look at this statistic: "In 2002, according to political scientists Norman Ornstein and Thomas Mann, only four challengers were able to defeat incumbent members of the US Congress, the lowest number in modern American history." That being the case, it is nearly impossible to beat an incumbent where gerrymandering occurs. It also seems to me that the Supreme Court has ruled it constitutional to redraw maps in an attempt to protect one's own political party, so long as minorities are not adversely affected. As recently as the summer of 2006, the Court ruled 7-2 that Texas's redrawn maps were acceptable, even without a consensus opinion. This means you can redraw as often as you like!

Preventing Gerrymandering

Single-member districts: A single-member district is one in which each district elects one person to represent it in a legislative body. Thomas L. Brunell, in his book Redistricting and Representation, makes the argument that competitive elections are bad for America. The book's treatment of single-member districts left me wondering whether or not Congress believes in them. Since the redistricting rules change with every census anyway, the notion behind single-member districts seems moot. The idea does make sense, but it is not a compelling fix for gerrymandering. Brunell argues that single-member districts solve for the loss of votes cast for the losing candidate.
He argues gerrymandering is good because it means nearly everyone in a district ends up voting for the same winner. Why there is concern over "wasted votes," meaning votes cast for the losing candidate, is beyond me. If the districts were drawn fairly in the first place, it wouldn't matter how many people voted for the loser or the winner. It would just be fair, and justice would be "served." The winner-take-all aspect definitely seems corrupt. It might be a speedier process for determining a winner, but it is not going to determine a legitimate winner. The single-member district system is flawed in the sense of representation: if in every election the same 49% vote for the "other guy" and 51% vote for the incumbent, those 49% will never be represented in the way they would like. True, the elected representative should still represent that other side, but it doesn't mean he will. I wouldn't call the 49% wasted votes, however. They still had their say; they just never got their candidate. Every innovation and change has truth and substance behind it in one way or another, but the next change can completely overhaul the intentions of the previous one. One change may fit the ballot for a particular election, to sway voters or to protect incumbents. Either way, if all you have to do is pass a law every few years to quiet the masses, of course they're going to do it. This one fits the bill perfectly.

Equal Population: The notion of equal population in all districts is probably the best way to make voting fair. It doesn't matter if some districts cover larger geographical areas, as long as the districts are equal in terms of population. It would fix California's reported discrepancy of 15,000 people in its smallest district versus 6 million in its largest. That is not equal representation. The Supreme Court has ruled on numerous cases to help rectify this problem. Dividing the population of a state by the number of seats it gets gives you the target size of each district (see the short sketch at the end of this write-up). This would not fix gerrymandering either, however, because the discrepancy in how you draw those lines is still arbitrarily altered by the "population factor." I agree the only non-arbitrary standard is zero population difference between districts: nearly unattainable, but still the ideal. Iowa's justification for not breaking county lines, however, is acceptable. In principle, equal districts are a great ideal. But because the data is usually outdated and inaccurate, it is impossible to adhere to 100%. Still, a 95% rate of accuracy would be far better than a 400-to-1 population discrepancy.

In conclusion: no matter what system the law deems appropriate, gerrymandering can never be fully resolved. Not when the census data is outdated… not when parties have the ability to redraw the maps in their own favor… not when the checks and balances are typically slow and preposterous, and especially not in a place where the voter balance is heavily favored on one side. An election can be swayed many ways; it's just a matter of discretion. A matter of hiding it from the public eye, so that politics can prop up the good old boys in the system and keep that shiny new sheriff from cleaning the baddies out.
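Here is the short sketch promised above (a toy illustration in Python; the state population, seat count, and district figures are made-up round numbers, not census data):

# Toy illustration of the "equal population" ideal: the target size of each
# district is the state's population divided by its number of seats.
state_population = 4_000_000        # hypothetical state
seats = 8                           # hypothetical seat count
target = state_population / seats   # 500,000 people per district

districts = [498_000, 510_000, 495_000, 502_000,
             501_000, 489_000, 505_000, 500_000]   # hypothetical district sizes

for i, pop in enumerate(districts, 1):
    deviation = (pop - target) / target
    print(f"District {i}: {pop:,} people, deviation {deviation:+.2%}")
# The closer every deviation is to zero, the closer the map comes to the
# "zero population difference" standard discussed above.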
Ger`ry*man"der (?), v. t. [imp. & p. p. Gerrymandered (?); p. pr. & vb. n. Gerrymandering.] [Political Cant, U. S.]

⇒ This was done in Massachusetts at a time when Elbridge Gerry was governor, and was attributed to his influence, hence the name; though it is now known that he was opposed to the measure.

© Webster 1913.
The Imperial Line

1. The difference between the emperor and the real power (shogun).
2. There was only one Imperial line until Antoku (1183), where we say a split occurred. All of Antoku's descendants are created, and we have positioned them as the IP (The Taira). Assuming an average rule of 40 years (since many of them are unaware of reality, except in DLs, 40 may even be short…), that gives us about 20 people. Is there any point in listing them? Since nobody knows about them, including themselves most of the time, maybe nobody bothered. There may have been multiple "The Taira" as a number of people in the bloodline reached the right age. Were any of them female?
3. The historical split occurred with the Northern Dynasty (1332-1390), and resulted in the creation of the Kumazawa family line, whom we have positioned as the VP. They are apparently hiding their history and just living normal lives now.
4. Empresses: especially avatars of Amaterasu (see Amaterasu/Himiko). Or maybe these should be covered under Amaterasu? We need a good chart of emperor names, reign names, dates, and bloodlines.

The Origins of the Imperial Line

The Japanese imperial line is said to have the longest continuous lineage of any royal bloodline in the world, tracing its history back to at least AD 600, and according to legend for a considerable time before that. According to the Nihongi (The Chronicles of Japan from the Earliest Times to AD 697), which was written in AD 720, Japan was created and ruled by the gods for 1,792,470 years until 667 BCE. In 667 BCE the first Emperor, Jimmu, reached the age of 45 and launched an aggressive program of conquest, bringing the Osaka-Kyoto region of Japan under his control. This region has remained the center of Imperial Japan until the present day. The official genealogies of Japan trace all emperors (and empresses) back to this figure, although the succession has not always been direct.

For over a thousand years, the Emperor was revered by the people of Japan as a literal god, and a thousand years of tradition does not die easily, regardless of what radio broadcasts were made at the end of World War II. For most of this time, the Emperor was a political pawn used by the most powerful families of the era, and isolated by them from the general populace. Time and time again the balance of political power shifted and a new government arose, but the Imperial family remained secure, sacred, and almost invincible in the Imperial City. The Imperial City itself shifted with the centuries, but the traditions remained unchanged. The real power was almost always in the hands of the shogun, the supreme commander of the armies. Some shoguns respected the Emperor; more often they kept the Emperor imprisoned and impoverished, using him merely as a controlled rubber stamp to give "official" backing to their arbitrary decrees. Theoretically, however, all power rested in the hands of the Emperor, and the shogun was merely his hired underling.

The Fall of the Taira

The Taira, the family which held essentially all the power between 1160 and 1185 after the fall of the Fujiwara, was exceedingly corrupt. They spent the majority of their time in the Imperial City, Kyoto, and largely ignored the rest of the country except as a source of tax monies. The Minamoto clan, on the other hand, had for generations been the warriors of the Fujiwara, and had defeated their enemies, steadily expanding Japan to include the entire island of Honshu.
Yoritomo and Yoshitsune were an excellent pair, with Yoritomo one of the best strategists of his time and Yoshitsune an unparalleled tactician and warrior. They felt it was time for the Minamoto to control the power in Japan, and together moved to topple the Taira, replacing it with their own line. They were successful. A series of battles spearheaded by Minamoto Yoritomo and his younger brother Yoshitsune ended with the climactic battle of Shimonoseki, in the narrow channel separating Kyushu from Honshu, in 1185. At the battle the Taira were shattered, fleeing in disorder and vanishing from history forever. The young Emperor Antoku, only eight at the time, was also lost there, according to legend jumping into the waves with Niidono, his aunt, still gripping the sacred sword Kusanagi, one of the three Imperial regalia. An attempt was made by the wife of Lord Shigehira to toss the sacred mirror into the water. Just as she was about to jump, an arrow pinned her clothing to the boat, holding her. Several Genji soldiers snatched the casket and key, and as they were holding it they were suddenly blinded, and blood rushed from their noses. One of the Heike still alive said it contained one of the imperial treasures, and no commoner dared open it. Needless to say, Yoritomo took control of the mirror.

Yoritomo, now the effective head of power in Japan, quickly exterminated all traces of the Taira, and then moved to eliminate the only remaining possible threat to his power: Yoshitsune. Yoshitsune was finally cornered in 1189, in Koromogawa, Iwate Prefecture, and committed suicide rather than be captured. At least, that's what the history books say. The reality is a bit different.

The Emperor Antoku was a Dreamer, and was recognized at an early age as an extremely powerful one. This did not escape the notice of the shoka, of course, who were successful in awakening Amaterasu. As it happened, this fit in quite well with Yoritomo's plans, because he was already committed to using Mythos assistance in toppling the Taira. Through the assistance of the shoka, he was able to reach an agreement with Amaterasu, whereby he would receive assistance in battle in return for the young Emperor. Yoritomo jumped at the chance: he wanted power, not the Emperor. And if Amaterasu wanted the boy Antoku, she was welcome to him… there were any number of other people with Imperial blood who could be made Emperor at the "request" of the man who controlled, on a practical level, all of Japan.

Yoshitsune, however, was little interested in political power. He was a warrior (and a lover), drawn to "the noble fight" and the thrill of victory. He was also, unfortunately for Yoritomo, loyal to his Emperor. Yoshitsune discovered his brother's foul plot quite by accident, and swore to assist the Emperor. He was assisted in this by the Tainin Hodo, who did not, however, reveal themselves to him. Yoshitsune, until his death, believed that he had been assisted merely by a monk who happened to agree with him. He never discovered that Benkei, his trusted companion in his last battles and flight, was actually a Tainin Hodo monk. As a Minamoto, however, Yoshitsune also recognized that his brother did have a point: the Taira were corrupt, and in spite of Yoritomo's evil, the Minamoto as a whole represented a relatively clean and simple warrior ethic that he felt was better. It was, in fact, one of the sources of the samurai ethic that would shape Japan so much in later centuries.
He agreed with his brother that the Taira must be destroyed, and that the Minamoto were the only power bloc that could replace them. He secretly determined, however, that the Emperor Antoku would escape alive.

The war swept the entire nation, and Yoritomo's armies were almost always victorious. The final battle was at Shimonoseki, at the northern edge of Kyushu, in the narrow strait separating it from Honshu. After a massive sea battle, which Yoshitsune won handily enough, the Emperor Antoku and his nurse were seen to slip under the waves forever. What happened then, however, was a bit different from what the histories record: they were spirited away with the assistance of the Tainin Hodo. The shattered remnants of the Taira, believing their Emperor dead, fled in disarray. A large contingent of them ended up in Shiina, a deep valley hidden in the almost impenetrable mountains in the center of Kyushu. They lived in fear of retribution for generations, discovering only much later that the Minamoto themselves had also fallen to yet another contender for power.

The young Emperor travelled north to stay with the Tainin Hodo at Ryuzoji for about 20 years, and then moved yet further north to the growing trading port of Tosa, at the northeast tip of Honshu. He lived there, with a growing family, as a merchant trading with China, until his death at a ripe old age. The family remained there in secret for quite some time.

Yoshitsune, meanwhile, had decided to lead the pursuers (of which there were many; Yoritomo had put a healthy price on his head) on a wild goose chase, and raced the length and breadth of Japan. In 1189, he was supposed to have been trapped at Koromogawa, Iwate Prefecture, and to have committed suicide rather than be captured. In fact, they continued to flee north, bypassing the nearby Ryuzoji Temple, and then into Hokkaido. Benkei worked with Yoshitsune, together constructing a plan to return the Emperor Antoku to power and (more importantly) to topple Yoritomo because of the potential for Mythos evil he represented. Several dozen tons of gold sand were collected in Hokkaido, to be used for military men and materiel. His plan never came to fruition, however, as they began to be pursued by Mythos creatures. The gold was buried in "Kuma-no-sawa," a swamp in Hokkaido, and the pair of them vanished; some say they moved on to the Chinese continent, where Yoshitsune became Genghis Khan and went on to carve out his own empire there. This gold was later used by Ryuzoji Temple and the Tainin Hodo as required. In addition to being in an unknown place in relatively unpopulated Hokkaido, the entire region has since been designated as a JSDF bombardment practice range, providing an excellent reason for keeping everyone away from the area. It is located somewhere in the region around Shikotsu-Toya National Park and the JSDF Shimamatsu Range, north of Lake Shikotsuko.

What happened to Amaterasu? She was quite unhappy to lose Emperor Antoku. She was little interested in taking revenge on petty human beings, though, and merely fell back asleep. The shoka, however, suspected that Antoku still existed, and spurred on the pursuit of Yoshitsune in an effort to recapture the Emperor. The shoka continued their search for centuries, and still continue it today, convinced that the line of Emperor Antoku must still exist, and could still be the key they need to fulfill Amaterasu's plan.
Hiding, and the Ultimate Hiding Place

With the combined powers of the shoka and the shogunate after him, The Taira and the Tainin Hodo had no choice but to flee. The Tainin Hodo had never been revealed, and so were able to operate relatively freely without fear of capture; in fact, the Tainin Hodo almost always had a number of monks in the Imperial Court in Kyoto collecting information, as well as a number of information sources in the shogun's headquarters, whether Kamakura, Osaka, or Edo (Tokyo). The Tainin Hodo had no idea at this time exactly why Amaterasu wanted the Emperor back, but it was clear that she was not the "good" goddess described in so many Japanese myths and legends. Fewer and fewer people were told the truth of his identity, until only about half a dozen knew. One of them was, of course, The Taira himself; the rest were Tainin Hodo monks.

In 1333, The Taira made his move in a carefully plotted scheme designed to win back control of the Imperial Throne. He controlled Go-Daigo, who launched an effort to dislodge the existing Emperor and return to the glorious days of Imperial rule. For a period of about 60 years, there were two Imperial courts, each claiming to be the sole ruler of Japan. Ultimately, Go-Daigo failed and was exiled, leaving Amaterasu and the shoka in uncontested control, but he escaped with full knowledge of Amaterasu's plan to be reborn in a child of the Imperial bloodline, ruling Japan and eventually the world. The shoka never knew that he was, in fact, The Taira, and this proved a fortuitous circumstance… for over six centuries, the line of Go-Daigo continued to exist, living in poverty under the lay name "Kuma-no-sawa" (later shortened to merely Kumazawa), watched but largely ignored by the shoka and their minions. At last, The Taira had found a safe hiding place, directly under the watching eyes of his dearest enemies.

Even so, over the centuries The Taira was often a strong Dreamer, and Amaterasu sometimes caught his scent in the Dreamlands, although she was unable to track it back to its source on earth. The efforts of the shoka continued, varying with the changing situation in Japan, the actions of the reigning Emperor, and above all whether or not Amaterasu was awake and could be communicated with at the time. A number of times they approached the Kumazawa line, but never stumbled on the secret. It was clear, however, that it was only a matter of time before they did.

The Tainin Hodo and The Taira met sometime in the late 16th century and agreed that the best place to hide The Taira would be in the Dreamlands… while his physical body and consciousness continued to live a normal life as the merchant family Kumazawa, his Dreamlands self led quite a different existence. At times he was a monk at the Tainin Hodo temple in the Dreamlands, at times a merchant, at times a silversmith. There he retained his knowledge, and met with the monks in times of need to discuss happenings back in Japan. On one occasion he actually visited Amaterasu's Dreamlands, with a small bodyguard, but stayed only long enough to get the general atmosphere. (The Tainin Hodo have hired other explorers to keep track of goings-on there, through trading ships and other means.)

Additional information from Holtom's "Japanese Enthronement Ceremonies"

On the night before the formal enthronement of a new Emperor, the Chinkon Sai (spirit purification ceremony) is held. The primary purpose of this service is to tranquilize the spirit of the Emperor.
The ancient Japanese belief was that a person was inhabited by multiple spirits, such as a "gentle spirit, a rough spirit, a luck spirit and a wondrous spirit." These spirits might wander from the body, and thus be separated from it in a time of crisis. The Chinkon Sai is designed to ensure that the Emperor is "all there" for the enthronement. The ceremony consists of setting up a temporary shrine, with eight ancient Shinto gods enshrined on the right and Onaobi-no-Kami on the left (this god "rectifies all errors and sets all wrongs right"). A variety of offerings are placed, along with the Eight Imperial Treasures. Originally there were ten treasures, all brought by Ninigi-no-Mikoto when he descended from heaven: the Mirror of the Office, the Mirror of the Shore, the Yata Sword, the Life-Inspiring Jewel, the Jewel of Perfect Health and Strength, the Jewel for Resuscitating the Dead, the Jewel for Warding Off Evil from Roads, the Serpent-Preventing Scarf, the Bee-Preventing Scarf, and the Scarf of Various Materials and Efficacies. All were magical objects related to the protection and preservation of life. If these objects were shaken and the user counted from one to ten out loud, "such mighty power would be released from them as would recall even the dead to life." It is unclear what has happened to the original treasures; they are represented by eight replicas instead.
Welcome to Not Quite Entomology
By Ladyfisher

For the past two weeks we've had a bee as our Fly of the Week. The first one was an oldie, the McGinty, which really does work. This week it is a Steelhead Bee, done as a spey fly. Bees fall into the 'terrestrial' group of insects. Many folks see bees as a natural part of the fish's diet. Bees belong to a very large order, Hymenoptera, which includes wasps, ants, and sawflies.

Bumble Bee

Even though we don't hear much about flies for this insect group (except the ants), knowledge of bee and wasp flies for fish dates back to the third-century writing of Claudius Aelianus, who spoke of bee/wasp flies in the book De Animalium Natura. Dame Juliana's first fly was the 'waspe flye.' Chances are, if you do find a bee in a locale you fish, it will work as a fly there.

Bees live in colonies and occur in very large numbers. If a colony happens to be in a tree overhanging a stream, that guarantees large numbers of insects in a small, concentrated region. With any amount of wind, the possibility of them getting blown into the water is pretty good.

Some regions seem to 'fish' better with bee patterns than others: the White River in Arkansas, for instance (particularly after October, when presumably cold weather makes it more difficult for bees to get and stay airborne). The Yellowstone also produces great results with both bee and wasp patterns. Ernie Schwiebert in Trout Strategies notes that "on the limestone streams of Pennsylvania many anglers swear by bee imitations in hot weather." Schwiebert further states that clipped-hair patterns of honeybee, sweat bee, and bumblebee "have often proven themselves on selective trout."

Honey Bee

Perhaps hot weather is a key in the West and Midwest, where we have seen bees and wasps drinking from the stream! I have a bumblebee which appears regularly on the tiny creeklet between the little ponds in our yard. Though there is plenty of saltwater around, this bee obviously has a preference for freshwater.

There are some who believe fish do not 'take' bees for the insect they are, but instead take them for wasps. There is a group of wasps which enter the water to seek out aquatic insects as hosts for their eggs! Diving wasps attach their eggs to the larvae or the pupae, depending on the species. Some of this group crawl to the bottom, while others swim, using their wings to propel them. Either way, they are certainly available to the fish as food: trout, steelhead, and panfish!

Regardless of what the fish take bees for, they do take them. Being observant is always the key to success in fly fishing. If you see or hear bees, change to a bee pattern. Try the McGinty on still or slow waters, and the Steelhead Bee on faster water and riffles. You might be surprised!
Scientific Research

Homeopathic Arsenicum Album May Help Victims of Arsenic Poisoning Caused by Contaminated Groundwater

The dismal scenario of groundwater arsenic intoxication in third world countries: Prolonged exposure to arsenic (As), a toxic metalloid, has caused various illnesses in millions of people distributed over some 20 countries. In Bangladesh and the adjoining part of West Bengal (India) alone, about 100 million people are at risk [1] of As poisoning from drinking contaminated groundwater, with concentrations of As ranging from 60 to 560.23 µg/l (even exceeding 3000 µg/l at certain spots). This greatly exceeds the maximum permissible limits of 10-20 µg/l for some advanced countries and 50 µg/l for developing countries, as laid down by both the WHO [2] and the US Environmental Protection Agency [3]. The problem of arsenic poisoning in third world countries is compounded by poor diet as well as prevailing health and hygiene conditions, especially in the rural areas. The attempts made so far by both governmental and non-governmental organizations to provide As-free drinking water to the highly affected areas remain grossly inadequate. Various attempts are still being made to procure arsenic-free drinking water at an affordable cost. Serious effort should also be directed to removing As after it enters the body, because As also enters the body through other sources. However, such efforts through orthodox medicines (e.g. chelating agents like DMSA, DTPA, etc. and some antioxidants) have not yet been reported to be successful by and large [1,4,5]. Chronic exposure generally leads to various ailments and dysfunctions of several vital organs like the liver, kidney, and lung [6], more so when there is an accompanying nutritional/dietary deficiency [7]. Therefore, the overall situation is extremely gloomy in the arsenic-contaminated areas, particularly in the developing third world countries. Unless something can be done that is affordable by a large population of poor people, millions of them will die a slow but sure death. In such a scenario, our efforts are directed to finding a remedy that is cheap, easy to administer, effective in low doses, and has no toxic effect of its own. Potentized homeopathic remedies in general fit the bill. The encouraging results of early attempts [8,9] to mobilize As in rats through micro doses of Arsenicum Album, and of our own studies in mice [10-16] in this regard, suggested that Arsenicum Album 30C and Arsenicum Album 200C have the potential to be used for alleviating As toxicity in humans as well. With financial support from the Boiron Laboratory, Lyon, France, a human trial was conducted to study some of the important aspects of arsenic toxicity and to test the efficacy of Arsenicum Album 30C and 200C in ameliorating symptoms of arsenic toxicity (arsenicosis) in humans exposed to groundwater arsenic [17-21].

Protocols of the study

The following protocols were used:

(i) Assay of arsenic mobilization through urine from blood, and of cytotoxicity in humans exposed to groundwater arsenic, as revealed by several toxicity biomarkers in their sera, such as acid and alkaline phosphatases, alanine aminotransferase (ALT) and aspartate aminotransferase (AST), lipid peroxidation (LPO), reduced glutathione (GSH), gamma-glutamyl transferase (GGT), lactate dehydrogenase (LDH), glutathione-S-transferase (GST), and catalase (CAT) activities;

(ii) Assay of erythrocyte sedimentation rate, Hb content, blood glucose, total and differential count (T.C./D.C.), packed cell volume (PCV), cholesterol (HDL/LDL), urea/BUN, bilirubin, creatinine, albumin, triglycerides, etc. of blood samples, and lymphocyte viability tests (apoptosis);

(iii) Immunoassay through ANA, dsDNA, and Scl-70 antibody titer tests, using an ELISA reader;

(iv) Assay of the expression of matrix metalloproteinases (MMPs) in the blood sera of humans living in risk-prone zones (and also of p53 and Bcl-2 gene products), and of DNA damage through ladder/comet assays.

What exactly was under focus?

In suitable placebo-controlled animal experiments, the effects of sub-lethal doses of arsenic trioxide on an animal model were tested. Mice were studied, and the efficacy of the potentized homeopathic drug Arsenicum Album 30C (and the 200th potency) in ameliorating arsenic toxicity was assessed by analysis of data collected from some of the protocols mentioned above. These are all scientifically accepted protocols for determining the patho-physiological status of animals and humans. In the subsequent human trial, a periodic survey of the arsenic content of urine and blood samples was made, before (to obtain baseline data) and at different periods after administration of the homeopathic remedy, keeping suitable placebo controls. Similarly, several enzyme toxicity biomarkers were analyzed from blood samples periodically before and after drug administration, initially using a few placebo-fed controls for a limited period, to be sure that the drug showed a positive response as compared to controls. The "verum-fed" group of subjects showed considerable improvement in appetite, digestive ability, energy level (arsenic-affected people are extremely morose, frustrated, and devoid of motivation and energy to work), and general health condition. Similarly, periodic monitoring of other blood parameters, including ANA tests, was made before and after administration of the homeopathic remedy. These tests supported the visibly noted improvement in the general appearance of the patients taking the homeopathic remedies, and the statements they made. There was amelioration of their joint and muscle pains, and noticeable improvement in their liver functions and other sufferings.

What results were significant?

The results were encouraging. There was a clear indication of the ability of the homeopathic remedy to ameliorate arsenic toxicity, in terms of both urinary arsenic excretion and corresponding positive modulations, as revealed by the several toxicity biomarkers analyzed in randomized populations. An alarmingly high frequency of ANA-positive cases had been recorded among random populations in two arsenic-infested villages, Ghetugachhi and Dakshin Panchpota (in Chakdaha Block, Nadia, West Bengal). Even some 9-13 year old children tested ANA-positive. Many of them also showed elevated levels of blood glucose. Results showed that the potentized homeopathic remedy could reverse ANA-positive cases to ANA-negative ones. Even symptoms of arsenicosis (particularly skin lesions and liver ailments) showed signs of improvement. Blood glucose levels, which were generally found to be high in people inhabiting high-risk arsenic-contaminated areas, showed signs of amelioration, and in most cases the glucose level could be brought down to normal or near-normal levels by the administration of these remedies.
The efficacy of a millesimal potency (Arsenicum Album 0/3) has also been tested, with positive results (unpublished).

More work is needed: More such studies, preferably by other independent groups of researchers with an open mind, are necessary to verify, confirm (or refute) the findings, because we think this could considerably ameliorate the sufferings of millions of people, particularly where arsenic-free drinking water has not yet been provided. The remedies would also help ameliorate symptoms in arsenic victims who had been drinking arsenic-contaminated water for varying periods before arsenic-free water was actually made available to them. It must be emphasized that efforts to provide arsenic-free drinking water to all affected people must go on, but this remedy can give considerable interim relief before such measures are taken.

On the mechanism of action of the potentized homeopathic remedy: The potentized Arsenicum Album 30C and 200C used in the study were diluted by factors of 10^60 and 10^120, respectively. These dilutions are far beyond Avogadro's limit, and thus cannot theoretically be expected to contain even one molecule of arsenic trioxide, the initial source material from which the remedy was derived (by the homeopathic method of succussion and serial dilution). Further, 8 small sugar globules soaked with a tiny drop (approx. 0.01 ml) of this remedy served as a single dose for a human subject. Thus the question of how this can act as a medicine is quite pertinent. Homeopathy is unacceptable to many for lack of an authentic scientific explanation. The problem of explaining the mechanism of action of potentized homeopathic remedies is basically three-fold: one has to explain (i) how the medicinal property can be transferred to and retained by the "vehicle" (most often ethyl alcohol, 40-70%), (ii) how the ultra-low dose of the drug can transmit "information" to the cell receptors, and (iii) how it is thereafter able to bring about often visible and quantifiable changes in many parameters of study. A large number of hypotheses have been proposed to explain the mechanism of action of potentized remedies, and quite a few of these are appealing [22]. To explain the first part of the problem, a hypothesis has to show that the "vehicle" can retain the "memory" of the original drug molecule (either as a clathrate or some other form of "bubble") [23] for a long time, and can also produce smaller replicates of the "memory" molecules in larger numbers during the dilution and succussion procedures. The receptor(s) [24] of the cell (ion-gated channels? aquaporins?), which must have been conformationally changed by the action of the actual poison (arsenic trioxide), must then be able to recognize the "molecular signal/imprint/information" of arsenic trioxide (imprinted in the remedy) that comes in contact with them. Cell-surface receptors with different degrees of conformational change (and of different sizes as well) will pick up the signals in greater numbers if signals, even smaller ones, are available. For example, if one knows what the letter B looks like, one will identify a letter B of any size, big or small, without fail. And if a "letter" carries a specific signal for the cell to act upon, then the more signals there are, the greater the activity produced by signal transduction in an amplified manner, triggering a cascade of downstream activity through a chain of activation/inactivation of the genes necessary for correcting the "mistake in functioning" of the genes in question. Hence the dictum "like cures like", and the claim of homeopathic doctrine that the greater the dilution and the higher the potency, the stronger and longer-lasting the action of the medicine.
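The dilution arithmetic mentioned at the start of this section is easy to verify. Here is a minimal Python sketch, assuming an illustrative starting quantity of 1 g of arsenic trioxide (the article does not state the actual starting mass):

```python
# Sanity check on the dilution arithmetic above: how many source molecules
# could survive a 30C (10^60-fold) or 200C (10^120-fold) dilution?
# The 1 g starting mass of As2O3 is an illustrative assumption.

AVOGADRO = 6.022e23            # molecules per mole
MOLAR_MASS_AS2O3 = 197.84      # g/mol for arsenic trioxide

molecules_start = (1.0 / MOLAR_MASS_AS2O3) * AVOGADRO   # ~3.0e21 molecules

for potency, dilution_factor in [("30C", 1e60), ("200C", 1e120)]:
    remaining = molecules_start / dilution_factor
    print(f"{potency}: ~{remaining:.1e} molecules expected to remain")

# 30C:  ~3.0e-39 molecules, i.e. effectively zero
# 200C: ~3.0e-99 molecules, likewise zero, consistent with the text's claim
```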
And if an “alphabet” carries a specific signal for the cell to perform, the more signals, the greater will be the activity by signal transduction in an amplified manner, triggering a cascade of downstream activity through a chain of activation/inactivation of genes, necessary for correcting the “mistake in functioning” of these genes in question. Hence, the dictum “like cures like”, and greater the dilution and higher the potency, the stronger and longer will be the action of the medicine, as claimed by the homeopathic doctrine. About the author A.R. Khuda Bukhsh A.R. Khuda Bukhsh Professor Anisur Rahman Khuda-Bukhsh obtained PhD in Genetics in1976, worked Onl Cytogenetics of fish, aphids and plant mites, karyomorphological & biochemical studies in fish & aphids substantial in India. Chromosome banding of fish and aphids and antclastogenic, antimutagenic effect of some homeopathic remedies against X-irradiation, toxic chemicals and heavy metals including Arsenic, Heavy ion, toxic chemical and mutagens extensively done. Presently working on Homeopathy using cancer model and other living and cell free systems. Leave a Comment 1 Comment • Dear Doctor A.R. Khuda Bakhsh assalam o alaikum Congratulations on conducting such a marvelous research work on arsenic poisoning and its homeopathic treatment. Keep it up. May Almighty Allah be with you in your all endeavors.
Posting by Ade Tuty Anggriany | 2:33 AM

In multilingual communities, speakers switch among languages or varieties as monolinguals switch among styles. Language choice is not arbitrary, and not all speech communities are organized in the same way. Through the selection of one language over another, or of one variety of the same language over another, speakers display what may be called 'acts of identity', choosing the groups with whom they wish to identify. The first step in understanding what choices are available to speakers is to gain some idea of what languages and varieties are available to them in a particular social context. Context here means the varieties made available, officially or otherwise, within the boundaries of a nation-state.

Most studies of societal bilingualism use two methods to determine the linguistic composition of a nation-state: large-scale surveys and census statistics. A census operates under limitations of time and money, and thus many facets, such as the extent of interference between languages or switching, cannot be investigated in any detail. On the other hand, a large-scale survey can yield data on bilingualism for a population of much greater size than any individual linguist or team could hope to survey in a lifetime. A related distinction is that between two kinds of bilingualism: de facto bilingualism and de jure bilingualism. There are often fewer bilingual individuals in de jure bilingual states than in those where de facto bilingualism occurs. In the case of de jure bilingualism, knowledge about the demographic concentration of particular ethnic minorities is necessary for the implementation of language legislation.

A domain is an abstraction which refers to a sphere of activity representing a combination of specific times, settings, and role relationships. For example, in the Puerto Rican community in New York City, Spanish or English was used consistently in five domains: family, friendship, religion, employment, and education. The way in which these variables were manipulated determined the extent to which a domain configuration was likely to be perceived as congruent or incongruent. In each domain there may be pressures of various kinds, e.g. economic, administrative, cultural, political, or religious, which influence the bilingual towards use of one language rather than the other. Therefore, it is not possible to predict with absolute certainty which language an individual will use in a particular situation.

A situation in which each language or variety in a multilingual community serves a specialized function and is used for particular purposes is called diglossia. In diglossia, there are high (H) and low (L) varieties of language used by the people. The H and L varieties differ not only in grammar, phonology, and vocabulary, but also with respect to a number of social characteristics, namely function, prestige, literary heritage, acquisition, standardization, and stability. Diglossic societies are marked not only by this compartmentalization of varieties, but also by restriction of access, which can be illustrated by the importance attached by community members to using the right variety in the appropriate context. The relationship between individual bilingualism and societal diglossia is not a necessary or causal one: either phenomenon can occur without the other. Diglossia both with and without bilingualism may be a relatively stable, long-term arrangement, depending on the circumstances.
There are many bilingual situations which do not last for more than three generations. In cases of bilingualism without diglossia, the two languages compete for use in the same domains. Speakers are unable to establish the compartmentalization necessary for the survival of the L variety, and in such instances a shift to another language may be unavoidable. The loss of language diversity in Australia tells this story. The Aboriginal languages have been in decline since their speakers came into contact with Europeans in the eighteenth century. Some linguists predict that if nothing is done, almost all Aboriginal languages will be dead by the year 2000. Many smaller languages are dying out due to the spread of a few world languages, such as English, French, Chinese, Russian, and Arabic. Choices made by individuals on an everyday basis have an effect on the long-term situation of the languages concerned.

Language shift generally involves bilingualism (often with diglossia) as a stage on the way to eventual monolingualism in a new language. Typically, a community which was once monolingual becomes bilingual as a result of contact with another group, and remains transitionally bilingual until its own language is given up altogether. An illustration of this is found in an investigation of the use of German and Hungarian in the Austrian village of Oberwart. Villagers who were formerly Hungarian monolinguals have over the past few hundred years become increasingly bilingual, and now the community is in the process of a shift to German. Once the process of shift has begun in certain domains and the functions of the languages are reallocated, the prediction is that it will continue until the whole community has shifted to German. In some cases shift occurs as a result of forced or voluntary immigration to a place where it is not possible to maintain one's native language. The ultimate loss of a language is termed language death.

Many factors are responsible for language shift and death, e.g. religious and educational background, settlement patterns, ties with the homeland, extent of exogamous marriage, attitudes of majority and minority language groups, and government policies concerning language and education. The inability of minorities to maintain the home as an intact domain for the use of their language has often been decisive for language shift. In a community whose language is under threat, it is difficult for children to acquire the language fully. Languages undergoing shift often display characteristic types of change, such as simplification of complex grammatical structures. These changes are often the result of decreased use of the language in certain contexts, which may lead to a loss of stylistic options. The degree of linguistic assimilation may serve as an index of the social assimilation of a group. It depends on many factors, such as the receptiveness of the group to the other culture and language, the possibility of acceptance by the dominant group, and the degree of similarity between the two groups. Although the existence of bilingualism, diglossia, and code-switching have often been cited as factors leading to language death, in some cases code-switching and diglossia are positive forces in maintaining bilingualism. In many communities switching between languages serves important functions. Here are some utterances in which speakers switch between languages:
1. "kio ke six, seven hours te school de vic spend karde ne, they are speaking English all the time" (Panjabi/English bilingual in Britain): 'Because they spend six or seven hours a day at school, they are speaking English all the time.'
2. "Will you rubim off? Ol man will come" (Tok Pisin/English bilingual child in Papua New Guinea): 'Will you rub [that off the blackboard]? The men will come.'
3. "Sano että tulla tänne että I'm very sick" (Finnish/English bilingual): 'Tell them to come here that I'm very sick.'
4. "Kodomotachi liked it" (Japanese/English bilingual): 'The children liked it.'
5. "Have agua, please" (Spanish/English bilingual child): 'Have water, please.'
6. "Won o arrest a single person" (Yoruba/English bilingual): 'They did not arrest a single person.'
7. "This morning I hantar my baby tu dekat babysitter tu lah" (Malay/English bilingual): 'This morning I took my baby to the babysitter.'

In these cases we can see that a switch of languages can occur at the beginning, in the middle, or at the end of a sentence. Instances where a switch or mixing of languages occurs within the boundaries of a clause or sentence are termed intra-sentential switches, while switching that occurs at clause boundaries is called inter-sentential switching. It has been suggested that the ideal bilingual switches from one language to another according to appropriate changes in the speech situation (for example, a change of interlocutor or topic), but not in an unchanged speech situation. It has often been said that bilingualism is a step along the road to linguistic extinction. One approach has investigated speakers' reasons for switching, on the assumption that the motivation for switching is basically stylistic and that switching is to be treated as a discourse phenomenon which cannot be satisfactorily handled in terms of the internal structure of sentences. Various grammatical principles have been proposed for switching, such as the one called the equivalence constraint, which predicts that code switches will tend to occur at points where the juxtaposition of elements from the two languages does not violate a syntactic rule of either language. This means that a language switch ought to take place only at boundaries common to both languages, and switching should not occur between any two sentence elements unless they are normally ordered in the same way. Many linguists have stressed the point that switching is a communicative option available to a bilingual member of a speech community on much the same basis as switching between styles or dialects is an option for the monolingual speaker. A speaker may switch for a variety of reasons, for example to redefine the interaction as appropriate to a different social arena, or to avoid, through continual code-switching, defining the interaction in terms of any social arena. The latter function of avoidance is an important one, because it recognizes that code-switching often serves as a strategy of neutrality, or as a means to explore which code is most appropriate and acceptable in a particular situation.
Sample Activities

You can do math with your child every day. Situations pop up all the time. Take advantage of such opportunities to do math together, to learn about your child's math thinking and understanding, and to share your own. The following are excellent opportunities for engaging in math thinking; the math involved in each activity is noted in parentheses.* Encourage your child to solve problems mentally rather than over-relying on mathematical tools such as pencil and paper and calculators.

When shopping:
- How much do you think our groceries will cost? (Estimation, addition)
- How much money will we save if we use these coupons? (Addition, subtraction)
- How much change will we get if we give the clerk $20.00? (Addition, subtraction)
- How much money will we get for recycling a certain number of cans? How many cans were recycled if I got back $.65? (Division)
- How much do you think this bag of [apples, potatoes] weighs? (Then weigh to find out.) (Estimating, measuring weight)

At the post office:
- How many stamps on a sheet or in a book? (Counting, addition, multiplication)
- About how much will it cost to buy a certain number of stamps? (Estimation, addition, multiplication)
- How much change will we get if we give the clerk $5? How many stamps can I buy for a certain amount of money?
- How much do you think this package weighs? (Compare the estimate to the actual weight.) (Estimating, measuring weight)

When driving, taking the bus, or walking:
- About how many blocks until we get to a certain place? (Then count and find out.) (Estimating, measuring distances, counting)
- About how long will it take to get to a certain place? (Then time it to find out.) (Estimating, measuring time)
- How many dogs, stop signs, or traffic lights do you think we will see along the way? (Predicting, collecting data, counting)
- If we take the bus [5] times today, how much will we spend on bus fare? How much will be left on our bus card? (Addition, multiplication, subtraction)
- If gas costs [$3.15] per gallon, about how much will it cost to fill our [12] gallon tank? If we can drive approximately [25] miles for each gallon of gas, how far can we drive on that tank of gas? (Estimation, addition, multiplication)

When doing laundry:
- Can you sort the clothes into whites and colors to wash them? What about sorting into three categories? (Sorting and classifying data)
- How many pairs of socks can we make if there are 14 single socks? (Or, if there are 5 pairs, how many socks is that?) (Multiplication, division)
- If the washing machine costs [$1.25] per load and the dryer costs [$1.50] per load, how much will it cost to do 2 loads of laundry? How many quarters will we need? (Addition, multiplication, division)
- If one load takes [1/2 cup] of detergent, how much detergent will we use to do [3] loads? (Or, if we use 3 cups of detergent, how many loads did we do?) (Adding fractions, dividing fractions)

When cooking dinner:
- How many [plates, napkins, spoons] do we need for a certain number of people? How much [water] do we need to measure when making [orange juice]? (Measuring, fractions)
- If this recipe calls for 3/4 cup of flour and we are doubling the recipe, how much flour do we need? (Adding fractions, measuring)

When reading with your child:
- How many objects/pictures do you think are on this page? Let's count. (Estimating, counting)
- Do you think there are more birds or trees on this page? Why? How do you know? (Counting and comparing quantities)
- We're on page [27]. We're going to read the next chapter, which ends on page [42]. How many pages will we read? How many pages will we have left to read then? (Subtraction)
*Questions that involve money also focus on reading and using decimal numbers.
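For parents who want to check the sample answers, a few of the questions above work out as follows. This short Python sketch simply uses the bracketed placeholder numbers from the activities; a calculator works just as well:

```python
# Worked answers to a few of the sample questions, using the bracketed
# placeholder numbers given in the activities above.
from fractions import Fraction

# Driving: gas at $3.15/gallon, a 12-gallon tank, 25 miles per gallon
fill_cost = 3.15 * 12              # $37.80 to fill the tank
tank_range = 25 * 12               # 300 miles on a full tank

# Laundry: washer $1.25/load, dryer $1.50/load, 2 loads
laundry_cost = (1.25 + 1.50) * 2   # $5.50 total
quarters = laundry_cost / 0.25     # 22 quarters

# Cooking: doubling a recipe that calls for 3/4 cup of flour
flour_needed = Fraction(3, 4) * 2  # 3/2, i.e. 1 1/2 cups

print(fill_cost, tank_range, laundry_cost, quarters, flour_needed)
```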
Predominant Nearshore Sediment Dispersal Patterns in Manila Bay
Fernando Siringan, Cherry Ringor

Net nearshore sediment drift patterns in Manila Bay were determined by combining the coastal geomorphology depicted in 1:50,000-scale topographic maps and Synthetic Aperture Radar (SAR) images with changes in shoreline position and predominant longshore current directions derived from the interaction of locally generated waves and bay morphology. Manila Bay is fringed by a variety of coastal subenvironments that reflect changing balances of fluvial, wave, and tidal processes. Along the northern coast, a broad tidal-river delta plain stretching from Bataan to Bulacan indicates the importance of tides, where the lateral extent of tidal influence is amplified by the very gentle coastal gradients. In contrast, along the Cavite coast, sandy strandplains, spits, and wave-dominated deltas attest to the geomorphic importance of waves that enter the bay from the South China Sea. The estimates of net sediment drift derived from geomorphological, shoreline-change, and meteorological information are generally in good agreement. Sediment drift directions are predominantly to the northeast along Cavite, to the northwest along Manila and Bulacan, and to the north along Bataan. Wave refraction and eddy formation at the tip of the Cavite Spit cause southwestward sediment drift along the coast from Zapote to Kawit. Geomorphology indicates that onshore-offshore sediment transport is probably more important than alongshore transport along the coast fronting the tidal delta plain of northern Manila Bay. Disagreements between the geomorphically derived and predicted net sediment drift directions may be due to interactions of wave-generated longshore currents with wind- and tide-generated currents.
Cushing, Luther Stearns

"All language, not addressed to the house, in a parliamentary course, must be considered noise and disturbitive." —Luther Cushing

Luther Stearns Cushing achieved prominence as a legal educator, author, and jurist. He was born June 22, 1803, in Lunenberg, Massachusetts. Cushing graduated from Harvard University with a bachelor of laws degree in 1826. From 1826 to 1832, Cushing was an editor for The American Jurist and Law Magazine. For the next twelve years, he served in the state government system as clerk of the Massachusetts House of Representatives. Cushing entered the judicial phase of his career in 1844, presiding as judge of the Boston Court of Common Pleas for a four-year period. In 1848, he became a reporter for the Massachusetts Supreme Court, performing these duties until 1853. Also in 1848, Cushing returned to his alma mater, Harvard University, and presented a series of lectures on Roman law at Harvard Law School, continuing until 1851. As an author, Cushing is famous for several publications, including A Manual of Parliamentary Practice, also known as Cushing's Manual, published in 1844, and Elements of the Law and Practice of the Legislative Assemblies in the United States, published in 1856. Cushing died June 22, 1856, in Boston, Massachusetts.
Call Of The Wild Vs. Darwin Essay, Research Paper

Where did man come from? Scientists thought they had answered this simple yet complex question through Charles Darwin's theory of evolution. According to him, living organisms evolved due to constant change. Organisms which gained an edge would reign, while those without would die. Jack London's books of the late 1800s animated this theory through the use of wild animals in a struggle for survival. In fact, many prove that to survive a species "must" have an edge. In London's book The Call of the Wild, the harsh depiction of the Klondike wilderness proves that to survive, life must adapt.

London uses Buck as his first character to justify this theory, as Buck conforms well to the hostile North. While at Judge Miller's, pampered Buck never worries about his next meal or shelter; yet in the frozen Klondike he has death at his heels. Until his body adapts to the strenuous toil of the reins, Buck needs more food than the other dogs. He must steal food from his masters in order to conform. If Buck continues his stealthy work he will survive. A second example occurs when Thornton owns Buck: Spitz, the lead dog, constantly watches the team in a dominant manner. Buck, if insubordinate, runs the risk of death. He lays low, learning Spitz's every tactic. Buck adapts to circumstances until finally he strikes against Spitz in a fight for the dominant position. By killing Spitz, he gains a supreme air, and in turn an adaptation against the law of the fang. A third example surfaces during Buck's leadership. The fledgling dog, to Francois and Perrault, cannot work up to par for the lead. So Buck conducts himself as a master sled dog, reaching Francois and Perrault's goals and conforming to the team. The group plows through snow, covering at least forty miles a day. The dogs spend at most two weeks in the wild Klondike. In a way Buck heightens the safety of each person and dog. He adapts to the environment and his new position. Within The Call of the Wild, Buck plays a central part in justifying London's theory.

In the novel London uses Mercedes, Hal, and Charles, a group of very inexperienced and even less equipped city goers, to depict the probable doom of those who do not adapt. While in Skagway, the three have no idea what the Klondike holds. The well-dressed, well-fed team wants nothing but riches and fame. In their race against time they purchase the now-exhausted dog team, which Buck leads, to take them to Dawson. Even at the beginning of their journey they show their inevitable doom. Mercedes, the most hardheaded of the bunch, parks load after load on the sled. Onlookers laugh at the sight, telling the group that the sled will tip. In their arrogance the warning goes unheeded, and they soon find the now-moving sled strewn across the street. The next incident proves their stubborn refusal to adapt to the environment. After many weeks of toil, Charles, Hal, and Mercedes reach White River, where they find Thornton, a mail courier with frostbite. The team drops dead in the traces. Hal's philosophy pertains to the use of the whip. Beating after beating occurs, but the team does not get up. Buck, the lead dog, takes the brunt of the attack until Thornton steps in. He fights Hal and wins Buck. So the beaten Hal moves on, not heeding Thornton's warning of thin ice. Their doom arrives in a tumult of ice and water. All of the team dies in the cold, murky lake.
These three characters show the second side of adaptation: those who fail to adapt are doomed. Thornton and Buck reach a final adaptation in their quest for fortune, which creates the man and beast who rise above all. John Thornton asked little of man or nature. During the search for the hidden treasure mine, Thornton travels in no hurry. He ventures Indian fashion, hunting food with his hands, using his cunning to overcome. If he fails, Thornton keeps on traveling, knowing that eventually he will find food. Thornton has adapted, and now he has the power to fend off the wilderness. Buck also reaches his own acme, which creates the super being. After Thornton's death a pack of wolves attacks Buck. He holds his ground, crippling dog after dog. By using his primitive killer instincts, Buck does not fall. Rather he destroys the others until they are too tired to fight. The victory makes him the leader of the pack. He has become the super being that reigns over all. As to London's theory, Buck and Thornton's adaptation proves it without a doubt. Due to the harsh and wild depiction of the Klondike wilderness in The Call of the Wild, London's theory proves true. Through the use of wild creatures and people, London creates a visualization of how adaptation makes someone strong and well fit for their environment. He also teaches that if a great enough adaptation occurs, the organism will rise above all obstacles. In conclusion, if the average person adapts to their position in life and strives to reach their own personal best, they too, like Buck, will become the leader of the pack.
Electronic Data Interchange Essay, Research Paper

One of the more commonly accepted definitions of Electronic Data Interchange, or EDI, has been "the computer-to-computer transfer of information in a structured, pre-determined format." Traditionally, the focus of EDI activity has been on the replacement of pre-defined business forms, such as purchase orders and invoices, with similarly defined electronic forms. [1]

EDI is the electronic exchange of information between two business concerns, in a specific, predetermined format. The exchange occurs when messages that are related to standard business documents, such as purchase orders and customer invoices, are exchanged. The business community has arrived at a series of standard transaction formats to cover a wide range of business needs. "Each transaction has an extensive set of data elements required for that business document, with specified formats and sequences for each data element. The various data elements are built up into segments such as vendor address, which would be made up of data elements for street, city, state, zip code, and country." [1] All the transactions are then grouped together and "preceded by a transaction header and followed by a transaction trailer record. If the transmission contains more than one transaction (many purchase orders can be sent to one vendor, for example), the transaction groups would be preceded by another type of record, referred to as a functional group header, and would be followed by a functional group trailer." [1]

One of the first places EDI was implemented was in the purchasing operations of a business. Before EDI, a purchasing system would allow buyers to review their material requirements and then create purchase orders, which would be printed out and mailed. The supplier would receive the purchase order and manually enter it into their customer shipping system. The material would be shipped, and an invoice would be printed, which would then be mailed back to the buyer. In this example, even if the purchased materials were shipped and received on the same day the purchase order was received, the cycle time could be as much as a week, depending on the mail and the backlog in the supplier's order entry system. With the introduction of EDI, this scenario changed dramatically. Purchasing agents would still review their material requirements and create their purchase orders, but instead of printing them out and mailing them, the purchase orders would be transmitted directly to the suppliers over an electronic network. On the supplier's end, the transaction would be automatically received and posted. This new process could allow the shipment of material on the same day the purchase order was sent. Suppliers could send their shipping documentation electronically to the buyer in the form of a shipment notification, providing the buyer with accurate receiving documents prior to the actual arrival of the material. The supplier gained an additional advantage as well, since the invoice could now be sent directly to the customer's accounts payable system, speeding payment to the supplier. Speed, accuracy, and economy are the benefits of EDI.
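To make the header/segment/element structure just described concrete, here is a small Python sketch of a toy purchase-order transaction. The segment names follow the common X12 850 convention, but the values and the exact element layout are invented for illustration, not taken from any particular trading partner's specification:

```python
# A toy, X12-flavored purchase order: each "~"-terminated unit is a segment,
# "*" separates data elements, and header/trailer segments bracket the set.
RAW_850 = (
    "ST*850*0001~"                         # transaction set header (850 = PO)
    "BEG*00*SA*PO12345**20000215~"         # beginning segment: PO number, date
    "N1*ST*ACME GROCERY*92*STORE001~"      # name segment: ship-to party
    "PO1*1*48*CA*12.50**UP*012345678905~"  # line item: qty, unit, price, UPC
    "CTT*1~"                               # transaction totals: 1 line item
    "SE*6*0001~"                           # trailer: segment count, control no.
)

def parse_segments(raw, seg_term="~", elem_sep="*"):
    """Split a raw interchange into (segment_id, [elements]) pairs."""
    for seg in filter(None, raw.split(seg_term)):
        seg_id, *elements = seg.split(elem_sep)
        yield seg_id, elements

for seg_id, elements in parse_segments(RAW_850):
    print(seg_id, elements)
```

A real interchange would additionally wrap this in interchange and functional group envelopes, which is what the quoted passage calls the functional group header and trailer.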
Whether EDI was implemented for purchase orders, advance shipment notification, or automatic invoicing, several immediate advantages could be realized by exchanging documents electronically. Information moving between computers moves more rapidly, and with little or no human intervention. Sending an electronic message across the country takes minutes, or less. Mailing the same document will usually take a minimum of one day. Courier services can reduce the time, but increase the cost. Facsimile transmission works well for small documents, but for several hundred pages it is not a feasible option. When alternate means of document transfer are used, they suffer from the major drawback of requiring re-entry into the customer order system, introducing the opportunity for keying errors. Information that passes directly between computers without having to be re-entered, however, eliminates the chance of transcription error. There is almost no chance that the receiving computer will invert digits or add an extra digit, thus removing the human error element. The cost of sending an electronic document is not a great deal more than regular first class postage. Add to that the cost reductions afforded by eliminating the re-keying of data, human handling, routing, and delivery. The result is a substantial reduction in the cost of a transaction.

Expense, networking complexity, and alternatives are the drawbacks of EDI. Although these benefits are convincing, actual acceptance and implementation of EDI was far less common than might be expected. For all the benefits, the technological problems of EDI presented a number of major stumbling blocks. "Computers, especially mainframes, and their business application systems were complex and expensive. Primarily serving the 'on the edge' functions of a business, they were not regarded as being fully joined into all business activities." [2] Traditionally, mainframe computing was viewed as an information reservoir. EDI required that information technology be extended beyond core functions. So while there were substantial savings to be gained from the use of EDI, the cost of redesigning and deploying software applications to fit EDI into an existing portfolio of business applications was high enough to offset the anticipated advantages. The need for telecommunications capability posed a second major barrier to EDI implementation. Beyond the computer, a basic requirement of EDI is a means to transmit and receive information to and from a wide variety of customers or suppliers. This required a large investment in computer networks. Unlike the mail, sending electronic documents requires a specific point-to-point electronic path for the document to take. "Companies were either required to develop extensive, and expensive, networks, or rely on intermittent point-to-point modem communication." [2] Because of the technological complexity and cost of implementation, cheaper alternatives hurt the widespread use of EDI. To gain some of the advantages of EDI without the high price of computer hardware, software, and networks, many innovative alternatives were developed.
"Overnight courier service, facsimile machines, and the ability to give customers limited access to mainframes through dumb terminals provided quick and reasonably priced alternatives to inviting a major alteration of business environments." [3] The past decade has seen an enormous change in the computing environment in most businesses, with the new breed of small, inexpensive, and powerful personal computers. With the PC, computers have literally moved out of the basements and back rooms and onto the desktops, and there has been a reduction in price. "A client-based computer, or server, can be obtained today for about the same cost as a small mini-computer of 10 years ago, but the same dollars are now buying a machine that has mainframe computing capability in a PC-sized box. PCs are now economical enough that their price approaches the same cost per user as a dumb terminal attached to that same mini-computer." [3] The same improvements are found in the area of communications. It is now commonplace for computer users in retail stores to access computers many hundreds of miles away. The terminal or PC on a desk in a steel plant may actually be using data from several computers, each in a different location. "Advances in networking and client-server environments have encouraged the awareness that while information is surely one of the most valuable assets of any business, information that is shared within and between companies becomes a most powerful asset." [4]

Businesses have spent millions of dollars on computer technology to automate production processes. "Computer-assisted manufacturing systems, such as one might find in the grocery industry, have become commonplace. It is now possible for inventory consumption to be known immediately, and the impact of that consumption on purchasing requirements and master production scheduling can be recalculated continuously." [4] Computers can now be used to simulate factory production, optimizing processes and allowing engineers to determine the best utilization of equipment and personnel. If there is a sudden shift in demand, what will be the impact of major changes to production schedules? It does very little good to alter a production schedule if the supply line cannot react to the changed demand. "As the automation processes inside the four walls of the manufacturing plant reached maturity, it became apparent that the full benefits of the increased speed and flexibility could not be achieved as long as the process of receiving raw materials, and distributing finished products, remained unchanged." [5] In early applications of EDI, recalculating raw material requirements on an hourly basis offered little improvement as long as the ordering of raw materials was still based on traditional methods of placing purchase orders. A rapid shift in production frequently meant hours on the phone obtaining material. While the manufacturing floor could operate on a "just in time" basis, the purchasing department would frequently have to operate on a "just in case" basis. In an emergency, obtaining material would rely on the "whatever it takes" methodology: premiums, surcharges, and special deliveries. Businesses began to push the boundaries of EDI. The initial implementation of EDI looked at the documents used in business and replaced them with electronic documents. However, this did not address how the documents were being used. It merely automated the method.
"The need for greater speed and flexibility led business analysts to take a serious look at how the documents were being used, and this led to an overhaul in the way the documents are being used. Analysis has looked not at replacing the documents, but at eliminating them altogether. With this approach, a new partnership between the customer and supplier was born. Rather than have purchasing agents review raw material requirements and place purchase orders, purchase orders can be placed automatically, based on pre-determined inventory levels." [7] The Kroger Company has begun to make its inventory levels available to its suppliers via EDI, "allowing the supplier to adjust their own production schedules to respond more quickly to their customer's needs. With bar coding and point-of-sale data collection, replenishment of retail inventory or shipment of finished products can now be triggered by information collected right at the cash register." [7] Changes in computer and information technology, which both play a role in today's manufacturing, distribution, and service environments, "along with changes in business philosophy, [have] changed the definition of Electronic Data Interchange. The definition must now be more encompassing than merely the rapid transmission of electronic documents." [8] EDI must now be viewed as "an enabling technology that provides for the exchange of critical data between computer applications supporting the process of business partners by using agreed-to, standardized data formats." [8] EDI is no longer merely a way to transmit documents. It is a means to "move data between companies that will be used by computer systems to order materials, schedule production, schedule and track transportation, and replenish stock." [7] To remain competitive in today's economy, businesses are being forced to re-evaluate the way they do business with their customers and their vendors. The focus of these relationships has moved toward greater speed through shorter transaction cycles. With the dramatic increases in the performance of computer technology, the impact of some of the drawbacks that led to limited adoption of EDI has been reduced. What used to require mainframe power can now be handled on computers that fit conveniently on or under the desk, and can operate in the office, warehouse, or production floor, or be used by a route man. There is a revolution going on in the software industry. The elapsed time from conception to deployment of new software is being dramatically reduced. Software developers can now produce packages that run on a variety of hardware platforms, allowing them to concentrate on delivering greater functionality and flexibility rather than spending valuable development time and dollars on customization. "This revolution in hardware speed, power, and flexibility, combined with an increasing selection of high-quality software products, allows business to get a higher return for each dollar they invest in computer technology. It has allowed business to solidify the information needs of their own processes." [7] Even with improvements in their own processes, progress can come to a halt if the supply and distribution chain is not on the same page. So as internal processes have been brought under control, management has also been forced to focus on opportunities in customer and supplier relationships, the very area early EDI ignored.
The revolution in computer technology has led to another revolution: "the replacement of dictatorial or adversarial relationships between customers and suppliers with information partnerships. In fact, for some time in the vocabulary of EDI, two businesses engaged in electronic trading of information have been referred to as 'trading partners.'" [9] The problem was that it took management a long time to realize that partnership had to extend much further than just agreeing to trade electronic versions of paper documents. By breaking down the barriers between vendors and customers, another order of increase in speed and flexibility could be introduced. "The true value of EDI comes when business can begin to trade or share information. The early scenario of EDI implementation painted a picture where commonly used paper documents were replaced by electronic versions of the same documents. Purchase orders, shipment notifications, invoicing, and accounts payable began to participate in the process." [9] The business of preparing the electronic documents is easily the most demanding part of setting up EDI in any business. The methods available to generate the electronic documents are as varied as businesses and applications, and include transcribing data, enhancing existing applications, and purchasing software. The Kroger Company has several hundred suppliers, and getting them all to agree upon Kroger's definition of a purchase order format was not going to happen, "particularly since those suppliers are also dealing with hundreds of other customers. This would require a unique set of rules for each partnership. The resulting chaos would quickly drive customers and suppliers alike to return to paper forms regardless of the benefits or savings." [7] The solution has been found in the evolution, over the last two decades, of a comprehensive set of national and international standards. "These standards, typically developed by specific industry or business groups, have provided commonly agreed upon formats for use in virtually every type of business communication." [3] These standards provide a structured way of organizing information in a "transaction" format, with definitions for the format and placement of each separate piece of data. "Translation to a standard format can be accomplished by internal systems or it can be done by a separate package of software. Regardless of the means you choose for translation, the end result of the process will be an output file generated in a specific format that any subscriber to the standard can understand." [3] In a simple one-to-one EDI relationship, transmitting data can be as simple as making a modem connection and sending the file. However, this becomes impractical with more than a small number of vendors. If a manufacturer has to send out thousands of POs each week to hundreds of suppliers, it would require a small army to transmit them all. Even if the manufacturer had an extensive network available, successful transmission would require that all vendors be linked into the network. Providing a connection to the sender's computer to allow receivers to log on and collect their data would be one way to avoid these problems, but it poses a serious security problem. It will work on a limited basis, but only with controls, including separate hardware to isolate the system being accessed by third parties. Few companies would accept these approaches for any extensive use of EDI.
If these alternatives were required, chaos would reign, and once again most EDI users would quickly return to preparing printed documents, so that they could rely on the mail to distribute them. Fortunately, the EDI user doesn't have to rely on either of these alternatives. They can turn to third-party network services, commonly referred to as "Value Added Networks" or VANs. The VAN functions as a clearing house for electronic transactions, in effect serving as a private electronic mail service. The VAN routes each vendor's data to its own electronic mailbox. This process is the reverse of outbound translation. Once the POs have been placed in the electronic mailboxes by the VAN, the vendor can retrieve them at their convenience. The next step in the process is to "de-map" the file, translating it into the specific format required by the vendor's application(s). "Since a standard format has been used, the vendor will easily be able to first recognize which company the transaction is from, and then which type of transaction it is. When translation is complete it can be made usable in any desired format to the receiver's internal applications." The Kroger Company "experimented with a VAN, but quickly purchased the software package Chain-Tracks." [8]
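The "de-mapping" step just described lends itself to a short sketch. In this hedged Python example, the internal record layout is an assumption for illustration (not any vendor's actual application format), and the segment layout matches the toy purchase order shown earlier:

```python
# Minimal "de-mapping" sketch: translate parsed X12-style segments into the
# flat record an internal order-entry application might expect.

def demap_purchase_order(segments):
    """segments: iterable of (segment_id, [elements]) pairs."""
    order = {"po_number": None, "po_date": None, "ship_to": None, "lines": []}
    for seg_id, e in segments:
        if seg_id == "BEG":
            order["po_number"] = e[2]   # BEG03: purchase order number
            order["po_date"] = e[4]     # BEG05: PO date (CCYYMMDD)
        elif seg_id == "N1" and e[0] == "ST":
            order["ship_to"] = e[1]     # N102: ship-to party name
        elif seg_id == "PO1":
            order["lines"].append({
                "qty": int(e[1]),       # PO102: quantity ordered
                "unit": e[2],           # PO103: unit of measure
                "price": float(e[3]),   # PO104: unit price
            })
    return order

segments = [
    ("BEG", ["00", "SA", "PO12345", "", "20000215"]),
    ("N1",  ["ST", "ACME GROCERY", "92", "STORE001"]),
    ("PO1", ["1", "48", "CA", "12.50", "", "UP", "012345678905"]),
]
print(demap_purchase_order(segments))
```

Once the data is in this internal form, posting it to the order-entry system involves no re-keying, which is where the accuracy benefit described earlier comes from.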
Are there special tools required to implement EDI? Yes: standards, software, hardware, and service providers. The need for defined standards is absolute in assuring successful EDI. Without an agreed-upon set of standards, EDI would be unworkable from the start. There is a set of public standards that defines the requirements for a variety of EDI transaction types, so that any business concern can be addressed within the guidelines of an internationally accepted set of standards. Companies have been exchanging data electronically for over three decades. Before the existence of national and international standards, companies wishing to exchange information had to determine acceptable formats for the interchange of data themselves. This resulted in the emergence of de facto standards defined by those companies with the financial clout to impose the requirement for data interchange on their suppliers or customers. They essentially dictated the terms under which electronic trading would take place. While this did provide some standards, real problems arose when equally stubborn partners collided with each other. "The result of such conflicts was that the smaller or newer players in the EDI market place were forced to observe a variety of conventions, depending on who the recipient of the information was to be. Confusion aside, an unavoidable consequence was increased cost for EDI implementation." [11] As the various standards collided in the marketplace, industry interest groups were formed to try to reduce the chaos and confusion to manageable levels. The first was the Transportation Data Coordinating Committee, whose interest area was the standardization of the transactions required for trade and transportation. Beginning in the late 1980s, many of these standards bodies began to combine their separate standards under the aegis of the American National Standards Institute. All major American EDI transaction groups are now covered under the general umbrella of the Accredited Standards Committee (ASC), and are referred to as the X12 group of standards. The ASC X12 standards apply only in the United States. However, more and more companies are required to participate in the international exchange of electronic data, and the increasingly global extent of many business enterprises means that companies may have to at least be aware of the other major standards groups. The United Nations has provided a forum for a common set of international standards; thus the Electronic Data Interchange for Administration, Commerce and Transport (EDIFACT) group was formed.

EDI cannot be undertaken without software. "There is a broad range of options available, whether for low-cost first-time implementation or for the integration of EDI into a comprehensive portfolio of existing software." [7] "Design and development of computer software is an expensive and time-consuming process. The ready availability of commercial third-party packages or VANs will dictate against the internal development of in-house translation packages, since the annual cost of software licensing for third-party software will be less than the cost of developing and maintaining packages internally. The time required for internal software development will extend the valuable time it will take to deploy an EDI package." [4] There may be other reasons for developing a translation package internally. "If The Kroger Company owned or controlled its distribution like it controls its retail outlets, it could be cost effective to create a customized EDI package tailored specifically to the company's distribution needs. The major drawback to such an approach is that implementation of new transaction types will require additional development not only within the internal systems, but also within the EDI translation software." [7] With the continuing growth of EDI has also come the growth of a comprehensive library of EDI translation software packages, with price tags that range from very inexpensive to significant. These packages range from modest PC-based translators to large-scale systems for proprietary minicomputers and mainframes, complete with robust communications features and job and transmission scheduling capability. Basically, a package can be found for just about every budget. Third-party translation packages offer several advantages over in-house development: comprehensive standards coverage, cost effectiveness, and reduced maintenance. Before the widespread availability and acceptance of PCs and UNIX workstations, companies were pretty much bound by their existing hardware base. This dictated that EDI be implemented on whatever hardware was available, and the choice of software was severely limited by hardware options. A business seeking to start EDI for the first time probably already has a PC that can be used to run an EDI translation package and communications software. Even if such hardware is not available, or is outdated, it can be obtained at a relatively small cost. The principal requirements for installing most PC-based packages are no more demanding than today's word-processing or spreadsheet packages. For software packages designed specifically for proprietary hardware, "the price-tag is likely to be higher than for a package designed for a UNIX workstation, because of the more limited market and the more specialized technical expertise required. Also this difference can be expected to grow as Reduced Instruction Set Computing (RISC) based open systems computers have gained popularity.
Many software vendors are turning from strictly proprietary software to the development of packages that will run under the UNIX operating system on a variety of RISC platforms with only minor modifications and differences." [5] RISC computers, because of their power, have put mainframe computing in a PC-sized package. They have gained popularity for client-server applications where a local PC contains a software package that accesses remote databases. Another feature of RISC/UNIX systems is their "open architecture" design. "Open architecture for the EDI user means that the data on the system can be much more easily shared with software on other platforms through standardized file access protocols." [5] These UNIX systems are available in a wide range of performance configurations. At the low end, the platforms are comparable in power to larger PCs, with the added advantage of supporting multiple users. At the high end, they compare favorably with mainframe capability. Early pioneers in EDI were faced with a technically confusing and costly choice when it came to communicating with their trading partners. So early use of EDI tended to be "in-house" rather than between companies, and was limited to those who could afford to develop and maintain extensive internal electronic networks.

Implementing EDI in a business does not need to be difficult, and the benefits can be tremendous. It is important to understand that EDI is a tool, and not a cure-all. EDI is an undertaking requiring a partnership; with committed partners, little can stop its potential. A good example is the exchange between Kroger and its vendors of electronic invoices and automatic funds transfers for payment, which helps Kroger shorten its accounts receivable cycle. In this case, everybody goes home a winner. For start-up EDI projects, limited objectives with visible benefits should be sought. If Kroger had selected, for its first project, an ambitious plan to completely overhaul its entire retail distribution process, it might well have been attempting too large a first step. Kroger instead selected a more realistic initial project: computer-assisted ordering (CAO). CAO assists the retail store in ordering product when the purchased product is scanned at the checkstand. The CAO system keeps track of all daily product movement within a certain time frame (usually twenty-eight days) and produces a recommended order of products for store personnel to review. They verify the accuracy of on-hand product levels, account for future promotions, and adjust accordingly. The one drawback to CAO is the cashier. For instance, if the customer buys six different flavors of Kool-Aid and the cashier scans all six as cherry, CAO will recommend buying just the cherry flavor, when in fact cherry is not the only flavor needed.
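The replenishment logic behind a system like CAO can be sketched in a few lines. The formula, lead time, and safety-stock parameters below are assumptions for illustration only, not Kroger's actual algorithm:

```python
# Illustrative computer-assisted-ordering (CAO) calculation: recommend an
# order quantity from 28 days of checkout-scan movement data.

def recommend_order(daily_movement, on_hand, lead_time_days=3, safety_days=2):
    """daily_movement: units sold per day over the most recent 28 days."""
    window = daily_movement[-28:]
    avg_daily = sum(window) / len(window)
    # cover expected demand until the next delivery, plus safety stock
    target_stock = avg_daily * (lead_time_days + safety_days)
    return max(0, round(target_stock - on_hand))

# Example: an item selling about 6 units/day with 14 units on hand
movement = [5, 7, 6, 6, 8, 5, 6] * 4          # 28 days of scan data
print(recommend_order(movement, on_hand=14))  # -> 17
```

The Kool-Aid problem described above shows up directly in this sketch: if the cashier scans every flavor as cherry, the cherry item's movement history is inflated while the other flavors' histories are understated, so the recommendation faithfully inherits the scanning error.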
An important aspect of implementation planning is involving all concerned parties at all steps of the project. Good communication is essential, so that newly installed EDI capabilities will change the way business is done, not disrupt it. One of the most valuable ways of providing good communication and project management is to define an EDI coordinator's position. This position should be filled by an individual with strong knowledge of both the business requirements being addressed and the technical requirements of EDI. A critical step in implementation planning is the testing process. Before users are actually committed to depending on their new EDI function, they must be comfortable that the process works reliably, all the time. This must be proven beyond doubt by carefully constructed testing and validation procedures. Since data is being transferred to another trading partner, it will be necessary to assure both internal and external users that correct information is being traded. In any project, it is necessary to know what the true cost of implementation will be, and it is no different with EDI. Some of the costs are obvious, such as hardware and software. The unknown cost factor is training your staff.

In conclusion, EDI requires a large number of choices. What are the business objectives? What tools should be used? How large is the intent? Hopefully, the one choice that will be easy to make will be the choice to take the first step with EDI. It is important to reiterate that EDI is only a tool, and if it is adopted without carefully defined objectives, it will not live up to expectations. A key point to remember is that when tools are applied to the wrong process, they can complicate a business and frustrate its users. With correctly defined objectives, and a carefully considered plan of execution, EDI is a tool of great power. It will add speed and improve accuracy in any business.

1. Compaq Corporation. Home page. 21 January 2000.
2. Bridge Software. Home page. 22 January 2000.
3. National Institute of Standards and Technology. Home page. 22 January 2000. http://www.itl.nist.gov/div896/ipsg/eval_guide/tableofcontents3_1.html
4. TIE Corporate. Home page. 22 January 2000.
5. QRS Corporation. Home page. 22 January 2000.
6. Simplix Corporation. Home page. 23 January 2000. http://www.simplix.com/index.shtml
7. White, Bill. Personal interview. 20 January 2000.
8. Coffey, Tracey. Personal interview. 20 January 2000.
9. Bramlett, Rusty. Personal interview. 21 January 2000.
10. Bouden, Scott. Personal interview. 20 January 2000.
11. Holder, Thomas. Personal interview. 24 January 2000.
12. Winkler, Lisa. Personal interview. 25 January 2000.
13. Tegl, Eric. Personal interview. 26 January 2000.

Business and Technology
Electronic Data Interchange
Andy Schoen
Communication and Technology, Comm 106
Professor Jack Miller
15 February 2000
ABC Rural

Improving animal welfare across the globe

Robin McConchie

Establishing animal welfare standards for livestock transport: Australia is a key player in setting animal welfare standards.

The chairman of an international animal welfare working group admits that enforcing welfare standards is all but impossible. Dr Abdul Rahman is chairman of the World Organisation for Animal Health's working group on animal welfare; the organisation is known as the OIE. The working group represents nearly 180 countries and has established standards for areas such as the transport of animals by land and sea, and the killing of animals for human consumption. Dr Rahman says the group covers livestock, pets, poultry, zoo animals and even fish. "Prior to 2004, when the OIE formed the animal welfare working group, there were no international standards." Dr Rahman says that while the member countries may support the standards, the OIE has no authority to enforce them; that is up to the individual countries. "The OIE is not the enforcing agency; some countries have made a lot of progress but others have not." The chairman of the Australian Animal Welfare Strategy Advisory Group, Dr Gardner Murray, says Australia is a leader in setting animal welfare standards and supports training initiatives in overseas countries such as Indonesia, the Philippines and Vietnam.
Data Description
Arctic Snow: Daily maps of the snow depth on top of the floating sea ice. This dataset also determines the multi-year ice cover of the Arctic Ocean where, at this point, no snow depth can be retrieved.
References: Markus, T., and D.J. Cavalieri, Snow depth distribution over sea ice in the Southern Ocean from satellite passive microwave data, in Antarctic Sea Ice: Physical Processes, Interactions and Variability, Antarctic Research Series, 74, edited by M.O. Jeffries, pp. 19-40, AGU, Washington, D.C., 1998. Comiso, J.C., D.J. Cavalieri, and T. Markus, Sea ice concentration, ice temperature, and snow depth using AMSR-E data, IEEE Trans. Geoscience and Remote Sensing, 41(2), 243-252, 2003.
Data Format
Northern Hemisphere snow depth files from SMMR and SSM/I (1978-). Data are processed with a five-day history, which allows some determination of variability in the snow cover due to weather or melt events; such variability is flagged in the dataset. Files are stored in directories per winter season (WSYYYY_YYYYdata), starting 1 October and ending 30 September the following year. Files are named ssmi_n_snowdepth_5day_YYYYDDD.img, where YYYY is the year and DDD is the day-of-year. Files may be gzipped.
Int arrays, 304 x 448, little-endian (big-endian machines, such as older Macs and Unix workstations, will need to byteswap). Values are:
0-100: snow depth in cm
110: missing
120: land
130: open water
140: multi-year ice (no snow calculations done)
150: variability flag (significant changes in the snow depth over the period covered, as a result of weather effects and short-term melt events)
160: summer melt (5 consecutive variable days after 1 April; flagged as melt until 1 October)
Contacts: Thorsten Markus, 301-614-5882; Don Cavalieri, 301-614-5901; Alvaro Ivanoff, 301-614-5886.
Browse images (GIF): example of snow depth values and colorbar.
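A minimal reader sketch for these files (mine, not NSIDC's; the 16-bit integer width and the 448-row by 304-column ordering are assumptions, since the description says only "Int arrays, 304 x 448"):

import gzip
import numpy as np

# Hypothetical file name following the documented pattern.
PATH = "ssmi_n_snowdepth_5day_1997032.img.gz"

# Assumption: 16-bit little-endian integers, 448 rows x 304 columns.
raw = gzip.open(PATH, "rb").read()
grid = np.frombuffer(raw, dtype="<i2").reshape(448, 304)

# Keep 0-100 (snow depth in cm); mask the flag codes.
snow = np.where(grid <= 100, grid, np.nan)
flags = {110: "missing", 120: "land", 130: "open water",
         140: "multi-year ice", 150: "variability", 160: "summer melt"}
for code, label in flags.items():
    print(f"{label:>14}: {(grid == code).sum()} cells")
print("mean snow depth (cm):", round(float(np.nanmean(snow)), 1))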
Fuel cell lantern ditches batteries for salt water. (Image caption: One of the PL-500's fuel cell anodes is claimed to last as long as 85 AA batteries. Credit: Hydra-Light.) For many people, camping/emergency lanterns are one of those things that may sit for months without being used, only to have dead batteries when they're finally needed. While solar-powered lanterns are one alternative, they still need to sit in the sunlight for a few hours in order to charge. That's where Hydra-Light's PL-500 comes in: it's a fuel-cell-powered lantern that's ready to shine as soon as it receives some salt water. The PL-500 ("PL" stands for Personal Lantern) features 16 LEDs, along with a USB outlet for charging devices such as smartphones. It also comes with a palm-sized 3-LED Accessory Light, which can be plugged into and powered by a 2.5-mm outlet on the main lantern via a 30-ft (9-m) power cord. At the core of the lantern's EC-250 EnergyCell is a carbon-film cathode and a high-energy-density alloy anode called the PowerRod. When exposed to salt water, the PowerRod starts to oxidize, releasing an electrical current as it does so. According to the designers, one "inexpensive" rod can power the lantern for over 250 hours before shrinking to the point that it needs to be replaced, and replacement reportedly takes just a few seconds. The salt water can take the form of tap water mixed with ordinary table salt, or it can even be straight seawater. Needless to say, the fuel cell should be rinsed off and stored dry when the lantern isn't in use. It is claimed to have a shelf life of at least 25 years. Hydra-Light plans to launch a Kickstarter campaign in the middle of this month to finance production of the PL-500. A pledge of US$48 will get you one, when and if they're ready to go. Source: Hydra-Light
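As a rough, back-of-envelope sanity check of those claims (my arithmetic, not Hydra-Light's; the per-battery energy figure is an assumed typical value):

# Back-of-envelope check of the runtime claim.
AA_WATT_HOURS = 2.5   # assumption: ~2.5 Wh usable energy per alkaline AA
RODS_IN_AA = 85       # claim: one PowerRod lasts as long as 85 AA batteries
RUNTIME_HOURS = 250   # claim: 250+ hours of light per rod

total_wh = AA_WATT_HOURS * RODS_IN_AA
print(f"implied energy per rod: {total_wh:.0f} Wh")               # ~213 Wh
print(f"implied average draw: {total_wh / RUNTIME_HOURS:.2f} W")  # ~0.85 W

An average draw under one watt is at least plausible for a cluster of small LEDs, so the two claims are roughly consistent with each other.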
BBC News. Last updated: Wednesday, 12 March 2008, 14:25 GMT. Chest compressions 'save lives'. Researchers say 200 uninterrupted chest compressions work best. Paramedics should give cardiac arrest patients uninterrupted chest compressions to improve their chances of survival, a US study has suggested. Researchers in Arizona found survival rates trebled when a technique which emphasises the importance of non-stop compressions was introduced. Very few people who suffer a cardiac arrest reach hospital alive, but in the Journal of the American Medical Association, researchers insisted these rates could be improved. Minimally interrupted cardiac resuscitation (MICR) involves providing 200 uninterrupted compressions, administering adrenaline early, and waiting a little longer to insert a tube into the trachea to ventilate the lungs. The practice was taught to paramedics in two metropolitan cities. In the period examined prior to instruction, some 218 patients were attended after cardiac arrest, of whom four - or 1.8% - survived long enough to reach a hospital. After paramedics had been taught the technique, that figure increased to 5.4%: thirty-six of 668 patients reached hospital. "During resuscitation efforts, the forward blood flow produced by chest compressions is so marginal that any interruption of chest compressions is extremely harmful," wrote Dr Bentley Bobrow of the Mayo Clinic in Arizona. "Excessive interruptions of chest compressions by pre-hospital personnel are extremely common. Therefore, MICR emphasizes uninterrupted chest compressions." The UK's Resuscitation Council said guidelines had moved in this direction, but that this model placed even more emphasis on chest compressions from the outset and spent less time initially on ventilation. In the UK at present, some 30 compressions are recommended before ventilation begins. "It is a significant study, but has to be interpreted very cautiously. It is not strong enough for us immediately to change what we are doing," said Jerry Nolan, the body's chairman, arguing that a more tightly controlled study was needed. He added that one important caveat was the very low rate of survival to begin with in the Arizona study. "We would be looking at rates of five to 10% with our methods, so it is unclear whether the Arizona model would have such a dramatic impact as it has in the study." Judy O'Sullivan, cardiac nurse at the British Heart Foundation, said the study was "interesting" but also stressed the need for more investigation. "It doesn't give us enough evidence to change current practice in the UK."
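As a quick arithmetic aside (not part of the BBC report), the quoted rates and the "trebled" claim are easy to verify:

# Survival-to-hospital rates before and after MICR training.
before = 4 / 218    # 4 of 218 patients
after = 36 / 668    # 36 of 668 patients
print(f"before: {before:.1%}  after: {after:.1%}  ratio: {after / before:.1f}x")
# -> before: 1.8%  after: 5.4%  ratio: 2.9x (roughly a trebling)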
Feb 14, 2013. Daily Update 2.3. Week 3: Module 2: Date 2/14/13. Happy Valentine's Day! Christopher Columbus, Leif Ericsson! Anyone else want a shot at the title of "first to discover" the Americas? When talking of discovery by Europeans or Africans or Chinese, it must be noted that this is only a discovery in the sense that they were ignorant of the existence of this giant landmass filled with millions of people and numerous different cultures. All of these explorers are second to the indigenous people of the Americas, who had been living there and prospering for thousands of years. Today we will consider another group of explorers that haven't made it into many school textbooks of the past. The Islamic Golden Age, which lasted roughly from 750 AD to 1258 AD, was a period filled with discoveries, inventions, and expansion of the Muslim world. During this time the Muslims made many contributions to medicine, art, architecture, philosophy, science and exploration, including intricate mosaics and inlaid stonework such as this mosque in Damascus, Syria. The Muslims were great traders and travellers, and explored and established trade routes all over the Indian Ocean. (Image: Silk Road trade map.) There are a number of sources* that date early Muslim contact with the Americas as early as 889 AD. Other records document Muslim crossings throughout the Golden Age. Shortly after the Golden Age ended, there are records of a great Malian civilization in the 1300s that sent an expedition of 400 ships to explore the Atlantic, one of which returned and spoke of finding land across the ocean. For those of you who were following the location of the JRH shortly after they left Senegal, you would have noticed a dip southward in their course. It was during this time that they were being pushed by the main current and the prevailing direction of the seas. Had they continued on this path, following the main direction of the seas, they would have arrived in South America instead of Miami. The place they departed from in Senegal was part of the vast Malian empire and, being one of the westernmost parts of Africa, is quite likely very near to where the past Malian voyagers departed from. There is one more powerful piece of evidence of early contact with South America, and that is a map that was found in 1915 in Turkey. This map was drawn in 1513 and was based on numerous earlier maps, including one by Columbus. It shows, with great accuracy and detail, the coast of Brazil, where Columbus did not sail. It also includes many details that the Europeans were yet to "discover." This again shows that history is a fluid subject, and as such it can often be very difficult to assemble a true narrative. Imagine reading a book with half the pages missing and then creating a story from that. That story then becomes the history of the world. Do you think some parts of your story would be missing or untrue?
One Response to "Daily Education Update 2.3 The Golden Age":
1. Many people don't know about the contributions that Muslims have made to the world in the past. It was during the dark ages of Europe that the Islamic world flourished and carried the torch of advancing civilization. Good luck guys and thanks! Moe (Friend of Brandon Neville)
Barbecue season a recipe for carcinogenic cuisine
CORVALLIS, Ore. - As the outdoor barbecue season kicks into high gear across America, millions of backyard chefs might want to consider that the fiery ritual they're about to embark upon may be the most unhealthy possible way to prepare meat. An increasing body of scientific evidence suggests that mutagenic compounds generated during the cooking of meat - especially at the high temperatures and charring flames of an open grill - may have possible links to colorectal cancer. Scientists in the cancer chemoprotection program of the Linus Pauling Institute at Oregon State University say the debate is still ongoing about the mutagenic or carcinogenic effects of some of America's favorite methods of cooking meat - frying and grilling. But at least in laboratory tests, the data is disturbing. A number of cooked meat mutagens produce colon tumors in laboratory animals. Colon and rectal cancer together represent the number two cause of cancer death in the United States. Colon cancer alone killed about 47,700 people last year, a fact outdoor grilling aficionados might want to consider while they're arguing over the perfect barbecue sauce, said Rod Dashwood, an associate professor and toxicologist at OSU. There are clear indications that a high-fat, low-fiber diet is linked to a higher incidence of colon cancer, Dashwood said. But studies also suggest, at least with meat, that the problem may relate more specifically to how it is cooked and how much is eaten. "Research has shown that intake levels of a class of mutagens called heterocyclic amines can be increased up to 5,000 times, depending on how meat is prepared and how much you eat," Dashwood said. "The mutagens form at high temperature and with prolonged heat exposure. A popular form of cooking, over open flame on a backyard grill, is just about the worst." The good news that may help lessen the risk of this cherished summertime ritual, Dashwood said, is evidence of protective mechanisms which may be as simple as a pot of tea or a nice green salad. Research on both the causative and protective mechanisms related to colorectal cancer points to several simple dietary alterations which, at best, may have a significant protective effect, and at worst won't hurt anything. "It's clearly been shown that a poor diet is a significant contributor to many types of cancer," Dashwood said. "What we're trying to determine is how problems can be avoided with diet modification and what protective elements can be added." Potentially protective measures identified in Dashwood's own studies and those of other researchers include the use of cooking methods that involve lower temperatures and avoid charring. Protective dietary additions include green and black teas, foods high in chlorophyll and fiber, and possibly some vitamin supplements. Good advice includes:
• Eat only modest portions of meat, in a diet dominated by fruits, vegetables and grains.
• Cook fish or chicken in the skin and then remove the skin before eating, not only to avoid carcinogens but also to reduce fat and calories.
• To add moisture, marinate meat before cooking, which appears to significantly reduce mutagen levels after it is cooked.
• Heat meat briefly in a microwave before cooking.
• If meat does become charred, cut off the most blackened parts.
• Avoid making a gravy out of meat drippings, which tend to concentrate the heterocyclic amines.
• Consider adding protective foods, which can include green leafy vegetables, cruciferous vegetables such as cabbage and cauliflower, and some dairy products such as milk and cheese.
• Consider also choosing the right beverage for a healthy diet, such as green or black tea.
• Some research suggests supplements of vitamin E, selenium and long-term folate intake may play a protective role in the colon.
Mutagenic effects of cooked meat have seemed fairly profound in some laboratory studies, Dashwood said, but less so in animal and human clinical tests. Each individual may have different, genetically influenced abilities to metabolize and repair DNA damage caused by mutagens. But Dashwood said he believes that the right questions have not always been asked in human clinical studies. It's not so much whether or not meat is included in a diet, or even what type of meat is eaten, he says, but how much and how it is cooked. "Researchers have shown that the heterocyclic amine content of a single hamburger can produce measurable changes in the DNA of exposed animals and humans," Dashwood said. "While we can't yet pinpoint exactly how much of a risk factor it is, there appears to be a correlation between colorectal cancer and high dietary levels of well-done meat. As a result of this research, I can honestly say that I eat more fruit and vegetables," he said. "I still eat meat, but I avoid any charred parts or remove the skin, and I try to eat more green salad before, during and after the meat. I also prefer healthier beverages, like tea and fruit juice. These aren't huge changes, but over a lifetime they may be important. And they're especially worth remembering during barbecue season."
Summary of Constructions after Verbs of Hindering, etc. After verbs signifying (or suggesting) to hinder and the like, the infinitive admits the article τό or τοῦ (the ablatival genitive, 1392). Hence we have a variety of constructions, which are here classed under formal types. The simple infinitive is more closely connected with the leading verb than the infinitive with τὸ μή or τὸ μὴ οὐ, which often denotes the result (cp. ὥστε μή) of the action of the leading verb and is either an accusative of respect or a simple object infinitive. The genitive of the infinitive is very rare with κωλύω and its compounds. a. Some scholars regard the infinitive with the negative as an internal accusative, not as a simple object infinitive; and the infinitive without the negative as an external accusative. 1. εἴργει με μὴ γράφειν (the usual construction: examples 2739). 2. εἴργει με γράφειν (less common). Since the redundant μή is not obligatory, we have the simple infinitive as object (1989), as εἰ τοῦτό τις εἴργει δρᾶν ὄκνος if some scruple prevents us from doing this P. Soph. 242a, ὃν θανεῖν ἐρρυσάμην whom I saved from death E. Alc. 11, οἱ θεῶν ἡμᾶς ὅρκοι κωλύουσι πολεμίους εἶναι ἀλλήλοις the oaths sworn in the name of the gods prevent our being enemies to each other X. A. 2.5.7, and so usually with κωλύω (cp. 2744.7). 3. εἴργει με τὸ μὴ γράφειν (rather common; cp. 1): εἶργον τὸ μὴ . . . κακουργεῖν they prevented them from doing damage T. 3.1, οἷοί τε ἦσαν κατέχειν τὸ μὴ δακρύειν they were able to restrain their weeping P. Ph. 117c. 4. εἴργει με τὸ γράφειν (not uncommon; cp. 2): ἐπέσχον τὸ εὐθέως τοῖς Ἀθηναίοις ἐπιχειρεῖν they refrained from immediately attacking the Athenians T. 7.33, ἔστιν τις, ὅς σε κωλύσει τὸ δρᾶν there is some one who will prevent thee from the deed S. Ph. 1241. 5. εἴργει με τοῦ μὴ γράφειν, with the ablatival genitive, 1392 (not so common as 3): πᾶς γὰρ ἀσκὸς δύο ἄνδρας ἕξει τοῦ μὴ καταδῦναι for each skin-bag will prevent two men from sinking X. A. 3.5.11. Other cases are: Hdt. 1.86, T. 1.76, X. C. 2.4.13, 2.4.23, 3.3.31, I. 7.17, 12.80, 15.122, P. L. 637c, 832b, D. 23.149, 33.25. Observe that this idiom does not have the logical meaning 'from not,' which we should expect. Some write τὸ μή or μή alone. 6. εἴργει με τοῦ γράφειν (not common, and very rare with κωλύω, as X. A. 1.6.2): τοῦ δὲ δραπετεύειν δεσμοῖς ἀπείργουσι; do they prevent their slaves from running away by fetters? X. M. 2.1.16, ἐπέσχομεν τοῦ δακρύειν we desisted from weeping P. Ph. 117e (cp. 3). 7. οὐκ εἴργει με γράφειν (not very common, but more often with οὐ κωλύω; cp. 2): οὐδὲ διακωλύουσι ποιεῖν ὧν ἂν ἐπιθυμῇς; nor will they prevent you from doing what you desire? P. Lys. 207e, τί κωλύει ( = οὐδὲν κ.) καὶ τὰ ἄκρα ἡμῖν κελεύειν Κῦρον προκαταλαβεῖν; what hinders our ordering Cyrus to take also the heights in advance for us? X. A. 1.3.16, ταῦτά τινες οὐκ ἐξαρνοῦνται πράττειν certain people do not deny that they are doing these things Aes. 3.250. 8. οὐκ εἴργει με μὴ οὐ γράφειν (the regular construction): οὐκ ἀμφισβητῶ μὴ οὐχὶ σὲ σοφώτερον ἢ ἐμέ I do not dispute that you are wiser than I P. Hipp. Minor 369d, οὐδὲν ἐδύνατο ἀντέχειν μὴ οὐ χαρίζεσθαι he was not able to resist granting the favour X. C. 1.4.2, τί ἐμποδὼν ( = οὐδὲν ἐμποδών) μὴ οὐχὶ . . . ὑβριζομένους ἀποθανεῖν; what hinders our being put to death ignominiously? X. A.
3.1.13, τί δῆτα μέλλεις μὴ οὐ γεγωνίσκειν τὸ πᾶν; why pray dost thou hesitate to declare the whole? A. Pr. 627. 9. οὐκ εἴργει με τὸ μὴ γράφειν (since occasionally the sympathetic οὐ is not added; cp. 3): καὶ φημὶ δρᾶσαι κοὐκ ἀπαρνοῦμαι τὸ μή (δρᾶσαι) I both assent that I did the deed and do not deny that I did it S. Ant. 443, τίς . . . σοῦ ἀπελείφθη τὸ μή σοι ἀκολουθεῖν; who failed to follow you? X. C. 5.1.25. 10. οὐκ εἴργει με τὸ μὴ οὐ γράφειν (very common; cp. 8): οὐκ ἐναντιώσομαι τὸ μὴ οὐ γεγωνεῖν πᾶν I will not refuse to declare all A. Pr. 786, τὸ μὲν οὖν μὴ οὐχὶ ἡδέα εἶναι τὰ ἡδέα λόγος οὐδεὶς ἀμφισβητεῖ no argument disputes that sweet things are sweet P. Phil. 13a. Very unusual constructions are: 11. οὐκ εἴργει τὸ γράφειν (οὐκ ἂν ἀρνοίμην τὸ δρᾶν I will not refuse the deed S. Ph. 118). 12. οὐκ εἴργει μὴ γράφειν (οὔτ' ἠμφεσβήτησε μὴ σχεῖν neither did he deny that he had the money D. 27.15). 13. οὐκ εἴργει τοῦ μὴ οὐ γράφειν (once only: E. Hipp. 48, where τὸ μὴ οὐ is read by some). On the negative after ὥστε, see 2759.
NED GOODWIN MW digs into the question of whether older vines really do produce finer wines. WE OFTEN SEE the epithet Vieilles Vignes or "Old Vines" on wine labels. The term is supposed to connote heritage and quality. Conversely, nobody speaks of young vines. For those in the know, young vines can bring delicacy and an attractive freshness to a wine under certain conditions, yet the strength of the old-vine hegemony suggests a lack of seriousness or substance in a wine made from younger vines. Just how much does vine age really matter to a wine's quality? For all intents and purposes, young vines are younger than 10 years of age, while old vines are mostly older than 30. Most vines face a gradual senescence from the age of 20. Thus the Bordelais frequently replant their vines after 20 to 30 years due to disease pressures and declining efficiency, while the Australians cling to their gnarled old centenarians in regions such as the Barossa and McLaren Vale as stalwarts of yesteryear. The most elderly vines in Australia, many older than 150 years, are still on their own roots. In most instances, the need to graft them onto foreign rootstocks, as in almost all European vineyards, was obviated by fortuitous isolation and the absence of phylloxera at the time. Vines of that age exist in few other places, though California and Spain also boast plots of significant age. In stark contrast, vines throughout New Zealand are far younger on average due to the precocity of the country's wine culture. Both young and old vines have their advantages and disadvantages if a vineyard is in balance - in other words, when the confluence of climate, site and vine allows enough water, and with it nutrients, to be drawn into the vine for healthy, ripe grapes. The notion that old vines are intrinsically superior to young ones, therefore, is a sweeping generalisation. Indeed, despite popular opinion, young vines are capable of good balance. This is because they lack vigour and produce fewer grape clusters and less foliage; although this means less photosynthesis, it also means lower yields, less shading, better ripening of what fruit there is, and less risk of fungal disease. This vineyard equation has positive implications for quality grapes and wine. Young vines do, however, demand richer soils to flourish, because their root systems are relatively undeveloped and not yet capable of digging deep into the soil to forage for the water and nutrients intrinsic to quality grapes and wine. This dynamic is augmented by irrigation, legal throughout most of the New World and increasingly permitted in the Old, as well as the agreeably warm growing seasons that are increasingly the norm in many regions. To underscore these points, the winner of the Judgement of Paris, an event at which French wines were pitted against upstarts from California, was a Stag's Leap 1973 Cabernet Sauvignon - a wine from three-year-old vines! The Judgement, which served as the basis for a lousy film, remains the measuring stick for Californian wine's emergence on the world stage. The disadvantage of young vines, however, is that their shallow root systems also mean that they are susceptible to heat, drought and water-logging. Top Burgundy producer Dominique Lafon believes that fruit from young vines failed in 1997, a warmer-than-average year at the time.
This was, he thinks, because the root systems failed to penetrate the deeper substrata of soils to draw on the necessary water and nutrients for holistically ripe grapes. While sugar ripening was initially hastened in this particular vintage and alcohols soared, respiration was compromised by the heat. Unable to draw on the soil's deeper reservoirs due to shallow roots, young vines shut down in self-protection mode and stopped producing sugars. Overall ripeness was ultimately insufficient, yields low and acidity meagre. Flabbiness was further accentuated by phenolics in the thick grape skins. In contrast, old vines can mitigate drought and excessive alcohol to a large degree due to their developed root systems. In the case of the very hot 2003 Bordeaux vintage, for example, the best wines were those with freshness. According to Frédéric Engerer of Château Latour, the most balanced wines came from older plots able to mitigate hydric pressures. Not only can old vines readily deliver nutrients, but they can also ripen grapes faster than younger vines because of the rapidity of this delivery. This results in less alcohol in the ensuing wines. Advantageously, young vines can help nurture desirable textures and flavours in a wine. For example, Domaine de l'Arlot in Burgundy produces a declassified Nuits-Saint-Georges from young vines. It is fresh, fragrant and balanced. Moreover, Claude Dugat credits his eight-year-old plot of Pinot Noir vines in the prodigious Grand Cru vineyard Charmes-Chambertin with providing his 2005 with lift and poise in a year marked by extract and concentration. In the New World, too, Mahi in Marlborough prizes its young vines' adaptability to alternate rootstocks, clonal research and training systems - and, most importantly, the pungent flavours these impish vines impart to its Sauvignon Blanc. Research indicates that young vines can produce more isobutyl methoxypyrazine, the chief compound responsible for Marlborough Sauvignon's passion-fruit punch. Undesirable among seasoned drinkers perhaps, this aromatic trait is responsible for the tsunami of New Zealand Sauvignon that has conquered the world. Old vines nevertheless give an unparalleled depth, concentration and vinosity to wines from the Barossa and other hallowed sites. In fact, old vines seem to contribute longer-chained tannins to wine, bringing a supple smoothness and balance. Roda, the new-wave Rioja producer, has received funding from the EU to pursue this research further. The rub, however, is that the oldest vines are susceptible to disease and declining productivity, with obvious ramifications for yields and revenue. Ideally, it would seem that a good mix of vines of varying ages makes for a healthy vineyard and the platform for quality wines and flexibility in the winery. While it's unlikely that we will see a label screaming "Young Vines" any time soon, don't be conned by the ubiquity of the "Old Vines" mantra. There's more to it than meets the eye.
Information Technology 310
ITEC 310: Programming in C and Unix
Prerequisites: ITEC 220 with grade of "C" or better. Credit Hours: (3)
Introduces the C programming language, including C library routines and the system call interface to the Unix operating system. File and terminal I/O, process control, and interprocess communication are also covered.
Detailed Description of Content of Course
Topics include:
1. C language history
2. C simple data types
3. C control structures: assignment, conditional and iterative control structures
4. C functions: parameter passing
5. Structured design: structured decomposition, debugging strategies
6. C arrays, pointers and strings: out-of-bounds array access
7. C structures
8. Data structures: stacks, queues, linked lists, trees
9. C files
10. C bit operations
11. C enumerated data types, preprocessor, interacting with operating systems, inter-process communication
12. Unix history and editors: vi
13. Basic Unix commands: cd, pwd, file, date, touch, ls, chmod, cp, mv, rm -r, mkdir, rmdir, >, <, |, echo, cat -n, more, less, strings, last, head, tail, script, image, mount, df, tar (czf, xzf), ps, bg, ctrl-z, jobs, fg, kill, ctrl-c, &, wc, paste, od
14. Unix filters and utilities: grep, egrep, fgrep, sort, find
15. Unix shells: tcsh, Bourne shell commands
16. Unix shell scripts: parameter passing
17. Unix files, directories and processes: hidden files, filenames, inodes, boot block, superblock, inode table, hard links, soft links
18. Unix system administration
Detailed Description of Conduct of Course
Lecture topics will include features of the C programming language, the tools and services provided with the Unix operating system, and the use of these by C programs. Students will design and implement programming projects to explore and reinforce these concepts.
Goals and Objectives of the Course
Students who complete the course will be able to:
1. Demonstrate an ability to understand and apply mathematical concepts when writing a C program to solve a problem.
2. Describe, design and implement a C program using functions and a linked data structure.
3. Describe, design, and use a shell command and its options.
4. Describe Unix file and directory structures.
5. Describe, design, and implement a Unix shell script.
Assessment Measures
Students will be evaluated based on several programming projects and a minimum of two examinations.
Other Course Information
Review and Approvals: Nov. 2003, Updated, John P. Helm, Chair. Revised: June 1, 2012
Life's Greatest Gift: Question Preview (ID: 10040)
Below is a preview of the questions contained within the game titled LIFE'S GREATEST GIFT: Movie Test. To play games using this data set, follow the directions below. Good luck and have fun. Enjoy!
What is the advantage of sexual reproduction over cloning? a) If humans were all clones, everyone would have the exact same immune system, and one successful para… b) Cloning takes a longer period of time c) Cloning is for bacteria only d) Cloning is more difficult
What is the process of ovulation? a) Ovulation is when the egg is fertilized by the sperm b) Ovulation is when the sperm reaches the egg c) Ovulation is when the female body is at the optimum temperature for pregnancy d) Ovulation occurs when a mature egg is released from the ovary
Where is the egg when it is fertilized? a) The egg is just coming out of the ovary b) The egg is in the fallopian tube c) The egg is on the uterine lining d) The egg is in the ovary
When must the egg be fertilized? a) Within the menstrual cycle, which is between 28 and 32 days b) Within 10 to 14 days c) Within 12 to 24 hours after leaving the ovary d) Within 6 to 8 days
How many sperm are produced in a 24-hour period? a) About a hundred million sperm each day b) About a hundred billion c) About a billion d) About a million
How many chromosomes are in a human cell? a) 50 chromosomes; 25 from mom, 25 from dad b) 50 chromosomes; 20 from mom, 30 from dad c) 100 chromosomes; 50 from mom, 50 from dad d) 46 chromosomes; 23 from mom, 23 from dad
When is a baby called a fetus? a) 5 months after fertilization b) 9 months after fertilization c) 2 months after fertilization d) 4 months after fertilization
When can the fetus hear sounds? a) After 3 months b) After 1 month c) After 5 months, when the ear is fully formed d) After 2 months
How fast does a fetus's heart beat? a) Just as fast as the mother's heartbeat b) Twice as fast as the mother's heartbeat c) Three times as fast as the mother's heartbeat d) Half as fast as the mother's heartbeat
Which statement is true? a) A child contains a mixture of genes, not only from the parents but also from their ancestors b) A girl will always look like her mother and a boy will always look like his father c) Giving birth is a painless event in a woman's life d) Babies enjoy salsa music while they are in their mother's womb
To play games using the questions from the data set above, enter game ID number 10040.
Writing programs
Congratulations! Now that you have made it this far, you are ready to start writing programs.
Hello World
Grab a text editor and type in the following:
puts "Hello World"
Save the file as hello.rb and run it by typing: ruby hello.rb
puts is named so because it will put an object as a String; the line gets printed to the terminal. Here is another example:
name = "World"
puts "Hello " + name
Notice what we did. name is a string. Therefore, it can be added to other strings, like we saw earlier. Note: When you make a Ruby program, only the lines with puts will be printed to the screen.
Another example
In the last chapter we typed a small calculation into irb. Put it into one file, for example:
num2 = 20
Save it and run it. Suppose that we want the computer to say "The answer is 20". We can't type this:
puts "The answer is " + num2 # --> Error
We can only add strings to strings. Therefore, we need to convert the integer num2 to a string. We know that we can do this with the Integer#to_s method:
puts "The answer is " + num2.to_s
Making programs executable
If you are running Linux or Unix, you can make your Ruby programs executable, so they can be run like any other program. First, you need to know where Ruby is installed in your system. Type 'which ruby' on a terminal; it prints the interpreter's path, for example /usr/bin/ruby. Precede this path by '#!' (pronounced "sharp bang") and make that the very first line of your program, e.g. #!/usr/bin/ruby. Now you can type 'chmod +x prog.rb' to make the program executable ('+x' means "executable").
Warning: Make sure that you type this exactly and that it's the very first line of your program.
• If there is a blank line above this one, this won't work.
• If there is a space before the '#!', this won't work.
1. Redo the exercises of the previous section, but this time as programs instead of using irb.
2. Finish the following program:
name = "Daniel"
age = 24
so that the program prints "Daniel is 24 years old".
An introduction to hardware security
In this first part of a two-part column, we'll take a look at hardware-based protection devices and how they work. The Web Services Advisor.
It's long been known that security is the Achilles' heel of Web services. Unless some way can be found to increase security and guarantee authentication and identity management, Web services will remain a useful, if minor, technology footnote. So how best to provide the right protection? The newest twist is to use hardware-based protection: hardware firewalls and other devices that are targeted specifically at providing Web services security. In this first part of a two-part column, we'll take a look at what those devices are and how they provide protection. In the next column, we'll more closely examine the companies that make them and see whether the hardware is a long-term or short-term solution to the problem.
What is hardware-based protection? The idea of using hardware-based protection for Web services is certainly not a new one; networks have been protected by hardware solutions such as firewalls and proxy servers for quite some time. The difference here, though, is that these new hardware devices are special-purpose: they're designed specifically for Web services, not general network protection. The devices are so new that there's no general agreement about how they should work, or even what kind of services they should provide. But Randy Heffner, vice president of Forrester Research, several months ago finished an extensive report about XML-specific security devices, titled "Forrester Wave: XML Security Gateways." He notes that, generally, the devices provide some combination of these three types of security services:
• Attack protection: The hardware can be targeted to fight XML attacks. Heffner notes that it's entirely possible for valid XML to be an attack. That means applications would have no way of knowing they were under attack; if the XML is valid, it runs, even if it's malicious. However, he says, hardware devices can be built that can identify Web services attacks at the application level.
• Trust enablement: Key to Web services is being able to work with trusted partners and to securely establish identities. Think of trust enablement as the opposite of attack protection. Attack protection keeps out hackers and other "bad guys." Trust enablement lets in the people you want inside the system, by authenticating identities, authorizing requests, administration, audit/logging and security integration.
• Acceleration: Encryption is commonly used for Web services security. Encryption slows down applications, so hardware can be used to accelerate encryption and decryption, as well as accelerate the XML processing itself.
Why use hardware? Much of this work, such as attack protection, can be done via software. So the question remains: why buy a hardware-based solution when software sitting on top of a server might do the job? Eugene Kuznetsov, founder and Chief Technology Officer of DataPower, which makes hardware-based Web services security devices, says there are several reasons. First is that hardware simply does a better job, he claims. A hardware device includes its own operating system and has embedded technology specifically designed for special-purpose processing, such as cryptography. That means it's faster and more effective than software, he says.
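To make Heffner's point that valid XML can still be an attack concrete, here is a minimal, illustrative sketch (my own, not DataPower's implementation): a gateway-style pre-filter that rejects well-formed payloads with a suspicious shape before they reach the application. The specific limits are assumptions, and a real gateway would also cap things like entity expansion and attribute counts.

import io
import xml.etree.ElementTree as ET

MAX_DEPTH = 32          # illustrative policy limits, not vendor defaults
MAX_BYTES = 1_000_000

def screen_xml(payload: bytes) -> None:
    # Reject well-formed XML that exceeds the policy limits.
    if len(payload) > MAX_BYTES:
        raise ValueError("payload too large")
    depth = 0
    for event, _elem in ET.iterparse(io.BytesIO(payload), events=("start", "end")):
        if event == "start":
            depth += 1
            if depth > MAX_DEPTH:
                raise ValueError("element nesting too deep")
        else:
            depth -= 1

screen_xml(b"<order><item>widget</item></order>")  # passes silently
try:
    screen_xml(b"<a>" * 40 + b"</a>" * 40)         # valid XML, hostile shape
except ValueError as err:
    print("rejected:", err)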
And because the hardware is built from the ground up to handle Web services security, it won't be prone to attacks that can foil software, such as buffer overruns. Additionally, hardware-based devices can do double duty, so a device that does cryptography acceleration can also accelerate XML processing. An overriding concern is also that "companies have a bad history of implementing security inside applications," he contends. "Each application might protect against only from one to twenty threats, and each application was built for a specific purpose." So if security is handled inside applications, that will necessarily lead to security loopholes. "You need to handle security outside of applications," he contends. "You need corporate-wide policies and can't do it on a per-application basis. And you need a scalable model as well, as companies get more serious about Web services and increase the number that they use. You also have to give the security control to a central security department, not to the application builders, if you want to be as safe as possible." And the best way to do that, he says, is to use hardware-based Web services security. He notes that years ago, enterprises used software-based firewalls to protect their intranets, but recognized over time that a more heavy-duty hardware-based solution was required. In the same way, he says, Web services security will move to hardware-based solutions as well.
It's still early. The market for hardware devices for Web services security has not really developed yet; it's still a nascent one. Heffner notes in his Forrester report that there are no big players yet, and the market segment isn't yet well established. At the moment, according to Heffner's report, the two leading vendors are DataPower and Forum Systems, but there are others as well, including Westbridge Technology, Vordel, Sarvega, Reactivity, and Layer 7 Technologies. Next column, we'll take a closer look at them and at the future of their hardware. This was last published in July 2004.
Joe Hill (Joe Hillstrom), 7 October 1879 - 19 November 1915. Presentation transcript:
1. Joe Hill (Joe Hillstrom), 7 October 1879 - 19 November 1915.
2. Joe Hill was born in Sweden to parents who were both musicians. He came to the United States in 1902. He was originally named Joel Emmanuel Haggland. Soon after arriving in the U.S., he became a drifter and traveled around the country. He began to call himself "Joseph Hillstrom" somewhere between 1906 and 1910, perhaps due to some legal trouble.
3. During his drifting travels in the U.S., Joseph Hillstrom became disillusioned with this nation. He saw life as desperate here for most common people. He saw the working poor as economic prisoners held captive by a small wealthy elite. In San Pedro, California, apparently while working on the docks, he became associated with a group of labor agitators who called themselves the "Industrial Workers of the World" (I.W.W.). Informally, they became known as the "Wobblies." Joe Hillstrom became their secretary.
4. In 1910, in a letter written for the Industrial Worker, an I.W.W. newspaper, Joe Hillstrom identified himself as "Joe Hill," the name he would continue to use to his death. Joe Hill became a famous union organizer, often entering very risky situations. Legends have him participating in I.W.W. activities nearly everywhere, sometimes in more than one place at the same time.
5. But the legends do convey one truth. Even if Joe Hill was not physically at a particular location during an I.W.W. activity, he was nonetheless there in song. Joe Hill wrote many labor songs, and virtually everyone involved in the labor struggles at the turn of the century knew and sang these songs. The songs were sometimes completely original in music and verse, while some songs borrowed existing music. All the lyrics were always Joe's.
6. Joe Hill's songs were generally very militant. His songs were collected in the I.W.W.'s Little Red Songbook. Near the end of his life, Joe traveled to Utah. While he was there, a robbery occurred and two people died. One was John Morrison, a store owner and former police officer in Salt Lake City. There was a suspicion that one of the robbers might have been wounded in the gunfire.
7. As it happens, Joe Hill went to the doctor about that time to be treated for a gunshot wound that he said he received in a fight over a virtuous maiden. The police arrested him based on this coincidence. He was tried and executed. There was a national campaign to obtain his release, including a plea from the President of the United States, Woodrow Wilson.
8. Joe Hill became a martyr for the union cause. While awaiting execution (by firing squad), Hill urged people not to mourn for him, but to organize. Thus, his own writings suggest that he was aware that he was about to become a martyr, and that he might have consciously encouraged this. It is hard to overestimate his symbolic value to the early labor movement in the U.S.
9. Other big names in the early labor movement are: William D. Haywood, Elizabeth Gurley Flynn, Mother Jones, Samuel Gompers.
10. Joe Hill has been ranked with Beatle John Lennon, Woody Guthrie, and Bob Dylan as among the four greatest protest songwriters of the 20th century.
Some music has also been written that was inspired by Joe Hill, such as the song with his name that used the poem by Alfred Hayes, "I Dreamed I Saw Joe Hill." This song was sung (most notably) by Paul Robeson and Joan Baez (at different times).
11. Here are a few of his more famous songs: Casey Jones - The Union Scab; Mr. Block; The Preacher and the Slave; The Rebel Girl; There is Power in a Union; Where the Fraser River Flows; Workers of the World, Awaken.
Coptic Synaxarium (Coptic Orthodox Calendar). 1 Amshir (The First Day of the Blessed Month of Amshir).
1. The Commemoration of the Ecumenical Council in Constantinople.
2. The Commemoration of the Consecration of the Church of St. Peter, the Seal of Martyrs.
1. On this day of the year 381 A.D., one hundred and fifty fathers assembled upon the order of Emperor Theodosius the Great, in the city of Constantinople. They assembled to judge Macedonius, Patriarch of Constantinople, and Sabellius and Apollinaris, for their blasphemy against God the Word and the Holy Spirit. When this blasphemy became widespread, the fathers of the church were concerned about the peace of the church, and made these heresies known to Emperor Theodosius. He ordered that a council be assembled, and invited Abba Timothy, 22nd Pope of Alexandria; Abba Damasus, Pope of Rome; Abba Petros (Peter), Patriarch of Antioch; and Abba Cyril (Kyrillos), Patriarch of Jerusalem. They came to the council with their bishops, except the Pope of Rome, who delegated others to attend on his behalf. When the holy council convened in Constantinople, they called upon Macedonius. Abba Timothy, Pope of Alexandria, who was presiding over the council, asked him, "What is your belief?" Macedonius answered that the Holy Spirit was created like any other creature. Abba Timothy said, "The Holy Spirit is the Spirit of God. If we say as you claim that the Spirit of God is created, we are saying, in essence, that His Life is created, and therefore, He is 'lifeless' without it." He advised Macedonius to renounce his erroneous belief. When he refused, Macedonius was excommunicated, anathematized and stripped of his rank. Then Abba Timothy asked Sabellius, "And you, what is your belief?" He answered, "The Trinity is one being and one person." Abba Timothy said, "If the Trinity is as you claim, then the mentioning of the Trinity is groundless, and your baptism is futile, because it is in the Name of the Father, the Son and the Holy Spirit, and the Trinity would have suffered pain and died, and the saying of the gospel would be invalid, when it is said that the Son was in the Jordan River, and the Holy Spirit descended upon Him in the likeness of a dove, and the Father called upon Him from heaven." Then Abba Timothy advised him to renounce his belief. When Sabellius did not accept, Abba Timothy excommunicated, anathematized and stripped him of his rank. Then Abba Timothy asked Apollinaris, "And you, what is your belief?" Apollinaris said, "The Incarnation of the Son was by His union with the human flesh without the rational being, for His divinity replaced the soul and the mind of the human being." Abba Timothy replied, "God the Word united with our nature to save us, therefore if He only united with the animal body, then He did not save mankind but the animals. Humans will rise on the day of Resurrection with the rational and speaking soul with which there will be the communication and the judgement, and with it they will be granted the blessing or the condemnation. Accordingly, the Incarnation would be in vain. If that was the case, why did He call Himself a man if He did not unite with the rational speaking soul?" Then Abba Timothy advised him to turn away from his erroneous belief, but he also refused. He excommunicated Apollinaris as he did the other two. Ultimately, the council excommunicated these three and all those who agreed with them.
Then they completed the creed that was established by the fathers at the Council of Nicea until its saying, "Of Whose Kingdom shall be no end." The fathers of the Council of Constantinople added, "Truly we believe in the Holy Spirit, the Lord, Giver of Life... to the end." They put down many canons that are still in the hands of the believers today. The prayers of these holy fathers be with us. Amen.
2. On this day also, we celebrate the commemoration of the consecration of the Church of St. Peter, 17th Pope of Alexandria and the Seal of Martyrs. He was martyred in Alexandria during the last days of the reign of Diocletian the Infidel. When Emperor Constantine the Great reigned, and all the idol temples were destroyed, churches were built. So the believers built this church west of Alexandria in the name of St. Peter, the Seal of Martyrs. The church existed till shortly after the reign of the Arabs over Egypt, when it was destroyed. His blessings be with us and glory be to our God forever. Amen.
t.w._stan: module documentation. Part of twisted.web.
An s-expression-like syntax for expressing XML in pure Python. Stan tags allow you to build XML documents using Python. Stan is a DOM, or Document Object Model, implemented using basic Python types and functions called "flatteners". A flattener is a function that knows how to turn an object of a specific type into something that is closer to an HTML string. Stan differs from the W3C DOM by not being as cumbersome and heavyweight. Since the object model is built using simple Python types such as lists, strings, and dictionaries, the API is simpler and constructing a DOM is less cumbersome.
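A minimal usage sketch (assuming the public wrappers in twisted.web.template, which re-export the stan Tag machinery; the exact serialized output may vary by Twisted version):

from twisted.web.template import flattenString, tags

# Build a tiny document from stan Tag objects: plain Python calls, no DOM API.
document = tags.html(tags.body(tags.h1("Hello"), tags.p("from stan")))

# Flattening turns the Tag tree into bytes; it returns a Deferred.
d = flattenString(None, document)
d.addCallback(lambda xml: print(xml.decode()))
# -> <html><body><h1>Hello</h1><p>from stan</p></body></html>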
Object Oriented Programming
From APIDesign
This is my personal take on OOP. In case you are searching for an objective view, then rather see the official Wikipedia explanation.
Revolutionary view
Let me point you to an essay explaining OOP in a completely revolutionary view. I especially like statements like:
• Typical object oriented program relies on functions more than many functional programs.
• λ-calculus was the first OOP language
An excellent explanation of the differences between the OOP approach and the functional one can be summarized as:
• In classical OOP one only knows one's own identity. The identity (and implementation) of others can be inspected only by calling their methods (sending them messages).
• In the functional world (when using algebraic types) the person that defines a type can inspect the internals of all instances of the same type.
The above characteristic of OOP leads to an interesting conclusion: OOP needs tail calls! Last but not least, the essay mentions the expression problem, which I also analysed in TheAPIBook's chapter 18: Extensible Visitor Pattern Case Study.
Object Oriented Reuse
I always felt that there is some clash between the general desire for object oriented reuse and the principles of good API design. Proponents of reuse seem to advocate making every method virtual, every class subclassable, every behavior changeable. Those who have maintained the API of a framework for some time and tried to keep some degree of BackwardCompatibility know that opening up more than planned for will in the end hurt the reuse. I attribute this to the way subclassing is done in Java or C++ or similar languages. For a long time I could not formulate that feeling, but I kept a thought back in my mind about an OOP language which is not flawed this way - Beta. However, I never had enough time to learn Beta properly; I just read the specification. It felt somewhat upside down, but without practical experience it was hard to formulate exactly what was so attractive about this rotation. That is why I am thankful to the authors of Super and Inner — Together at Last! for explaining everything in Java terminology and even finding ways to make these two worlds co-exist in peace. The paper also proposes a new access modifier (which I called for in my ClarityOfAccessModifiers essay). It is named pubment and it is a perfectly fine (from the API design perspective) combination of callable and slot which allows augmentation:
Code from SuperInner.java: See the whole file.
public abstract class JavaLikeExample {
    public final int callable() {
        return 1 + theSlot();
    }
    protected abstract int theSlot();
}
Code from SuperInner.java: See the whole file.
public abstract class BetaLikeExample {
    pubment int callable() {
        int res = inner.callable();
        return res + 1;
    }
}
The paper also explains why inner is more suitable for API design. To quote: The overall philosophy of class extension in Java-like languages (using super) is: subclass implementors know better. The philosophy of class extension in Beta-like languages (with inner) is: superclass implementors know better. Regardless of using the Java or Beta style, and with the hope of supporting the cluelessness of our API users, I want to recommend: When designing an API, always make sure that you know better than users of your API!
OOP is no longer what it used to be. Somehow the original great visions have diluted, and instead we have class/object/inheritance as present in Java and other OOP languages of these days.
The daily experience we have with these languages is so strong, so defining, that we sometimes tend to forget that the roots of OOP used to be driven by visions and not technical concepts. I was reminded about that recently when I read the DCI introduction paper at the artima website. Just a few quotes: Object oriented programming grew out of a vision of the computer as an extension of the human mind. Wow! Really? Makes sense, but this is a piece of wisdom lost for a long time, am I right? Even when I learned about OOP I heard more often the explanation describing the methodology as inspired by nature. Inheritance was the most natural way to define a base class Mammal and subclasses Cat and Dog. Our brain can definitely capture more complex concepts than the mammal example, so maybe, if we want to stick with the old definition, the meaning of OOP shall be expanded. Definitely beyond the expressive capabilities of C++ and Java. Other quotes: MVC's goal was to provide the illusion of a direct connection from the end user brain to the computer "brain". A large method that represented an entire algorithm was believed not to be a "pure" object-oriented design. This use of inheritance crept out of the world of programming language into the vernacular of design.
You've got family at Ancestry. Find more Maclara relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 6 fewer people named Maclara in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 12 people named Maclara in the 1930 U.S. Census. In 1940, there were 50% fewer people named Maclara in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 6 people named Maclara were living in the United States. In a snapshot: • The youngest was 27 and the oldest was 80 • 6 rented out rooms to boarders • 6 reported their race as other than white. Learn where they came from and where they went. As Maclara families continued to grow, they left more tracks on the map: • 8% were born in foreign countries • Most immigrants originated from Germany • 25% were first-generation Americans • 1 was born in a foreign country
You've got family at Ancestry. Find more Shachelford relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 146 more people named Shachelford in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 34 people named Shachelford in the 1930 U.S. Census. In 1940, there were 429% more people named Shachelford in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 180 people named Shachelford were living in the United States. In a snapshot: • 30% of women had paying jobs • 65 were children • 21 adults were unmarried • On average men worked 43 hours a week Learn where they came from and where they went. As Shachelford families continued to grow, they left more tracks on the map: • 51 migrated within the United States from 1935 to 1940 • They most commonly lived in Mississippi • Most fathers originated from Texas • Most mothers originated from Texas
You've got family at Ancestry. Find more Witschi relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 9 more people named Witschi in the United States — and some of them are likely related to you. Start a tree and connect with your family. Create, build, and explore your family tree. What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 47 people named Witschi in the 1930 U.S. Census. In 1940, there were 19% more people named Witschi in the United States. What was life like for them? Picture the past for your ancestors. In 1940, 56 people named Witschi were living in the United States. In a snapshot: • 51 rented out rooms to boarders • 16 were children • The typical household was 2 people • 2% reported their race as other than white Learn where they came from and where they went. As Witschi families continued to grow, they left more tracks on the map: • Most immigrants originated from Switzerland • They most commonly lived in Massachusetts • The most common mother tongue was Swiss • 20 were first-generation Americans
[Neuroscience] Backpropagation in biological neurons
David Olmsted via neur-sci%40net.bio.net (by david_olmsted from sbcglobal.net)
Sun Jul 27 07:43:19 EST 2008
Fredo <fredo from hotmail.com> wrote:
I've been reading a bit about neurophysiology and neurobiology. A number of texts refer to backpropagation in real neurons, where the signal backpropagates, though significantly attenuated, up the dendrites. I can't seem to find out much detail on the matter, beyond the fact that it exists. In particular, what is the significance? The backpropagation might affect gap junction-connected neurons, but it can't back-propagate through synapses, can it? If it can, what's the mechanism? Surely not neurotransmitter release in the dendrites...
Back-propagation is the natural result of injecting any ionic current into a neuron. These ions will spread out in both directions. In neurons, three ionic currents do this: sodium ions, potassium ions, and calcium ions. Sodium ions are the "normal" neural charge; potassium ions reverse the sodium effects and thus are considered an inhibitory response; calcium ions are involved in neural modulation and adaptability. As others have mentioned, the sodium back-propagation can be actively amplified (the NMDA receptors) and modulated further via dendritic micro-circuits.
Yet you asked about the significance of all this. The answer is that the interactions of these back-propagating currents determine the response characteristics of the neuron, which in turn are governed by the purpose of the neuron in its local circuit, which is in turn governed by the behavioral needs of the animal. By response characteristics I mean control over latency, time horizon, burstiness, frequency, connective type (more sum-like or OR-like), etc., for a given set of input types.
If you are really interested in how these back-propagating currents interact and want to play with their various control parameters, I have a demo brain circuit simulation program for Windows computers at my site (softstatemagic.com) where you can create your own neuron or use a neuron in an example brain circuit that you can download.
And in answer to Stephen Wolstenholme, I hope you can see that the backpropagation technique used in artificial neural networks is nothing like the neural backpropagation in question here.
Brain Circuit Simulation Resources: http://www.softstatemagic.com
More information about the Neur-sci mailing list
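The "significantly attenuated" part of the question can be illustrated with a toy calculation. This is not Olmsted's simulator, just a minimal sketch assuming simple steady-state passive decay, V(x) = V0 * exp(-x/lambda); the spike amplitude and the dendritic length constant below are invented for illustration, not measured values.

```python
import math

# Passive attenuation of a back-propagating depolarization along a dendrite.
# V0 and lam are illustrative assumptions, not data from the post.
V0 = 100.0    # somatic spike amplitude, mV (assumed)
lam = 500.0   # dendritic length constant, micrometers (assumed)

for x in (0, 100, 250, 500, 1000):   # distance from the soma, micrometers
    v = V0 * math.exp(-x / lam)      # steady-state passive decay
    print(f"{x:5d} um from soma: ~{v:5.1f} mV")
```

Active conductances such as the NMDA amplification mentioned in the post would flatten this decay, which is exactly why the interaction of the three currents, rather than passive geometry alone, sets the neuron's response characteristics.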
[Image: Oscar Pistorius running in the Olympics. Paul Gilham/Getty Images]
The Paralympic Games start this week, but the most famous Paralympic athlete in the world has already grabbed the spotlight in London. Thanks to incredible technological advancements in prosthetics, Oscar Pistorius was able to qualify for the Olympics, and other amputee sprinters are closing the gap between themselves and able-bodied athletes.
In Paralympic running, each athlete uses custom-built prostheses that are tailored to their unique running style. The ones Pistorius uses are called the Flex-Foot Cheetah, and they're made by Ossur. Here's how they work:
First things first, it's not a bionic leg. There are no electronics, sensors, or magnets, just a simply shaped spring that stores energy and uses it to propel the runner forward. When the runner's foot hits the ground, the blade compresses like a spring, storing potential energy.
[Image: amputee running leg. LiveScience]
Then it rebounds to push the runner forward using 90% of the energy generated by the runner's stride.
[Image: potential energy diagram. LiveScience]
It's amazing. But still, able-bodied athletes can produce up to 240% of the energy generated by their strides. So while the blade successfully mimics how an able leg works, it's not nearly as efficient. In addition, amputee runners have to use a whole different set of muscles to move around the track, according to LiveScience. Runners like Pistorius have to use their core muscles to turn, whereas able-bodied runners just pivot their ankles to turn:
[Image: Oscar Pistorius using core muscles to turn. LiveScience]
So while the prostheses are made of carbon fiber and are custom-built for each runner, the mechanism they use is rather simple. These things aren't perfect. They're super expensive, and the design is such that the athlete can't stand still for long while wearing them. But the success of Pistorius shows that science and technology are slowly coming up with the innovations necessary to erase the disparity between amputee and able-bodied athletes.
Watch a full report from LiveScience on Pistorius here: [video]
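To make the blade-as-spring description concrete, here is a minimal sketch of the energy bookkeeping. Only the 90% return figure comes from the article; the stiffness k and the compression x are hypothetical placeholders chosen for illustration.

```python
# Energy stored in an ideal spring on compression: E = 1/2 * k * x^2.
# The blade returns ~90% of that energy to the stride, per the article.

k = 30_000.0             # effective blade stiffness, N/m (assumed)
x = 0.05                 # compression at foot strike, m (assumed)
RETURN_FRACTION = 0.90   # fraction returned per stride (from the article)

stored = 0.5 * k * x**2
returned = RETURN_FRACTION * stored

print(f"stored per strike:  {stored:.1f} J")
print(f"returned to runner: {returned:.1f} J")
print(f"lost per strike:    {stored - returned:.1f} J")
```

A biological leg is not a passive spring: its muscles add energy during the stride, which is how the article's "up to 240%" figure for able-bodied athletes is possible.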
Millipede
[Image: A female Illacme plenipes with 618 legs.]
The leggiest animal in the world lives in northern California, on the outskirts of Silicon Valley. The bizarre creature is the millipede Illacme plenipes. Males have around 550 legs, but females can have up to 750 legs! Most millipedes, by the way, have between 80 and 100 legs.
First spotted in 1928, Illacme plenipes was thought to be extinct until entomologist Paul Marek from the University of Arizona dug up one of the wiggly critters at the end of 2005. He reported his findings in the June 2006 issue of Nature. The rediscovery was amazing because the millipede only lives in a very small area in San Benito County, California. Its limited habitat covers just 4.5 square kilometers of mossy oak forest, or 823 football fields, writes LiveScience.
The species is so rare that only 17 specimens were found during Marek's three-year search. Researchers think there could be more, but stopped their hunt in 2007 because they didn't want to deplete the species.
Scientists have since been studying the elusive animal and published the most detailed study yet of the millipede on Wednesday, Nov. 14, in the journal ZooKeys. TGDaily's Flora Malein outlines some of the new insights: The millipede has a jagged and scaly translucent exoskeleton, covered with body hairs that produce silk. It also has no eyes, relying instead on a large pair of antennae to navigate its way through the dark. Another bizarre feature is its mouth; unlike other millipedes, which chew on food by grinding their mouthparts, Illacme plenipes's mouth is undeveloped and fused into structures that are probably used for piercing and sucking plant or fungal tissues.
The millipede is about 3 centimeters long (a little over an inch), or slightly bigger than a hairpin.
Take a look at the leggy animal in the video below:
Cancer Life Expectancy
Cancer life expectancy is a prime concern for most cancer patients worldwide. It is an indicator of a patient's minimum chances of survival after being diagnosed with cancer, and it varies from person to person. Life expectancy differs across the stages of cancer and is highest in the first, or initial, stages.
Significance of cancer life expectancy
Life expectancy is very useful for cancer research because it takes a wide range of factors into consideration and broadens the scope of research. Data used in calculating life expectancy are based on empirical research and are derived using life expectancy calculators. Cancer cases are on the rise, and people suffering from various types of cancer find it hard to tackle the disease even in its developmental stages. Cancer treatments are usually harsh, and the side effects they cause can be quite severe. Life expectancy estimates can help such patients understand the approximate status of the disease and the chances of curing the tumors. Life expectancy calculations also provide valuable data for determining cancer survival rates and prognosis. The pattern of treatment to be followed depends largely on life expectancy: there is little use in recommending a harsh treatment if the chances of survival are low. Conversely, where the chances of survival and of eliminating the tumors are quite high, an aggressive treatment can be recommended.
Dimensions of cancer life expectancy
Life expectancy calculations depend on a variety of factors and are based on ongoing cancer research and breakthroughs. Age is the foremost factor. Cancer occurs mainly in adults and is more common in people over 45 years of age, so life expectancy gradually decreases with increasing age. The chances of complete elimination of cancerous tumors are highest when the cancer cells are detected in younger people, and especially in the initial or developmental stages. Health complications increase with age, so the chances of completely eliminating malignant tumors decrease considerably, and the incidence of cancer recurrence is quite high in old age. This factor must therefore be considered when calculating life expectancy.
Human papillomavirus (HPV) has been one of the major causes of cancerous development in the body and is detected in many patients suffering from cancer, irrespective of the form and point of origin of the cancerous development. The infection is spread mainly through unsafe sex practices and weakens the overall immune system of the patient's body. The person then becomes more vulnerable to frequent infections and cancerous attack, leading to a decrease in the life expectancy rate.
A family history of cancer can put a person at risk, as a predisposition to the disease can be inherited and may transfer from one generation to another. A person's life expectancy also depends on his genetic patterns, and such a family history may seriously affect his survival rate if he suffers a cancerous attack.
Cancer stage also plays an important role in deciding life expectancy. According to studies, the rate of survival is highest in the initial stages and decreases considerably as the intensity of the cancerous cells increases.
The fourth stage is usually considered the final stage of cancer, in which the cells are capable of metastasizing and growing rapidly, and the chances of eliminating the cancer cells are almost negligible. Thus, the life expectancy rate is lowest in this stage. Cancer life expectancy is usually calculated from death rates based on age. The Gompertz function is the most common method used to calculate life expectancy, though more sophisticated methods have been designed to make the calculations more accurate and reliable. Breast cancer is the most curable form of cancer in females, while prostate cancer is the most curable type in males. Lung cancer is the most common cause of cancer deaths, followed by stomach, colorectal, liver, and breast cancer.
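The Gompertz function named above can be made concrete with a short sketch. This is the generic Gompertz mortality law, not the specific calculator the article alludes to; under it the death rate rises exponentially with age, mu(x) = a * exp(b*x), and survival to age x is S(x) = exp(-(a/b) * (exp(b*x) - 1)). The parameters a and b below are illustrative assumptions, not values fitted to any cancer data.

```python
import math

a = 0.0001   # baseline mortality rate (assumed)
b = 0.085    # exponential rate of increase with age (assumed)

def survival(x):
    """Probability of surviving from age 0 to age x under the Gompertz law."""
    return math.exp(-(a / b) * (math.exp(b * x) - 1))

for age in (45, 60, 75, 90):
    print(f"survival to age {age}: {survival(age):.2f}")
```

The exponential term is why, as the article says, calculated life expectancy falls steadily with the age at diagnosis.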
Conditions + Treatments
Treatments for Hematuria in Children
How will my child's hematuria be treated? Your child's pediatric urologist will determine which treatment is appropriate for your child based on what's causing the hematuria. • Hematuria caused by urinary stones is generally treated by removal of the stones. • Hematuria caused by urinary tract infections is treated with antibiotic therapy to eradicate the infection. The doctor will also consider the extent of the condition, your child's tolerance for specific medicines and procedures, and your preferences. In many cases, the hematuria goes away by itself and does not return; in this case, your child wouldn't require any specific therapy other than observation.
- Sandra L. Fenwick, President and CEO
Mitigating harmonics in electrical systems
Learning objectives
• Understand current and voltage harmonics in electrical systems, and their negative effects on the facility electrical system.
• Know how electronic power equipment such as VFDs creates harmonics.
• Understand characteristic and noncharacteristic harmonics.
• Understand IEEE 519 guidelines for the reduction of electrical harmonics.
• Learn design techniques for mitigating harmonics with recommended applications.
This article has been peer-reviewed.
Harmonics and detrimental effects
In North America, alternating current (ac) electrical power is generated and distributed in the form of a sinusoidal voltage waveform with a fundamental frequency of 60 cycles/sec, or 60 Hz. In the context of electrical power distribution, harmonics are voltage and current waveforms superimposed on the fundamental, with frequencies that are multiples of the fundamental. These higher frequencies distort the intended ideal sinusoid into a periodic, but very differently shaped, waveform. Many modern power electronic devices have harmonic correction integrated into the equipment, such as 12- and 18-pulse VFDs and active front-end VFDs. However, many nonlinear electronic loads, such as 6-pulse VFDs, are still in operation. These nonlinear loads generate significant magnitudes of fifth-order and seventh-order harmonics in the input current, resulting in a distorted current waveform (see Figure 1). The characteristics of the harmonic currents produced by a rectifier depend on the number of pulses, and are determined by the following equation: h = kp ± 1, where
• h is the harmonic number, an integral multiple of the fundamental
• k is any positive integer
• p is the pulse number of the rectifier
(A short computational sketch of this formula appears after the IEEE 519 limits below.)
[Figure 1: Transformers are available in a variety of sizes and distribution voltages, and can be installed indoors or outdoors. All images courtesy: TLC Engineering for Architecture]
Thus, the waveform of a typical 6-pulse VFD rectifier includes harmonics of the 5th, 7th, 11th, 13th, etc., orders, with amplitude decreasing in inverse proportion to the order number, as a rule of thumb. In a 3-phase circuit, harmonics divisible by 3 are canceled in each phase. And because the conversion equipment's current pulses are symmetrical in each half wave, the even-order harmonics are canceled.
While of concern in themselves, the harmonic currents drawn by nonlinear loads cause true systemic problems when the voltage drop they produce across electrical sources and conductors results in harmonics in the voltage delivered to potentially all of the building electrical system loads—even those not related to the nonlinear loads. These resulting harmonics in the building voltage can have several detrimental effects on connected electrical equipment, such as conductors, transformers, motors, and other VFDs.
Conductors: Conductors can overheat and experience energy losses due to the skin effect, where higher frequency currents are forced to travel through a smaller cross-sectional area of the conductor, bunched toward the surface of the conductor.
Transformers: Transformers can experience increased eddy current and hysteresis losses due to higher frequency currents circulating in the transformer core.
Motors: Motors can experience higher iron and eddy current losses. Mechanical oscillations induced by current harmonics into the motor shaft can cause premature failure and increased audible noise during operation.
Other VFDs and electronic power supplies: Distortion of the incoming voltage waveform in other VFDs and electronic (switch-mode) power supplies can cause failure of commutation circuits in dc drives and ac drives with silicon controlled rectifiers (SCRs).
Establishing mitigation criteria
The critical question is: When do harmonics in electrical systems become a significant enough problem that they must be mitigated? Operational problems from electrical harmonics tend to manifest themselves when two conditions are met:
1. Generally, facilities where the fraction of nonlinear loads to total electrical capacity exceeds 15%.
2. A finite power source at the service or within the facility power distribution system with relatively high source impedance, resulting in greater voltage distortion from the harmonic current flow.
IEEE 519-1992, Recommended Practices and Requirements for Harmonic Control in Power Systems, was written in part by the IEEE Power Engineering Society to help define the limits on what harmonics will appear in the voltage the utility supplies to its customers, and the limits on current harmonics that facility loads inject into the utility. Following this standard for power systems of 69 kV and below, the harmonic voltage distortion at the facility's electrical service connection point, or point of common coupling (PCC), is limited to 5.0% total harmonic distortion, with each individual harmonic limited to 3%. In this standard, the highest constraint is for facilities with a ratio of maximum short-circuit current (ISC) to maximum demand load current (IL) of less than 20, with the following limits placed on the individual harmonic orders (Ref. Table 10.3, IEEE Std. 519):
• For odd harmonics below the 11th order: 4.0%
• For odd harmonics of the 11th to the 17th order: 2.0%
• For odd harmonics of the 17th to the 23rd order: 1.5%
• For odd harmonics of the 23rd to the 35th order: 0.6%
• For odd harmonics of higher order: 0.3%
• For even harmonics, the limit is 25% of the next higher odd harmonic.
• The total demand distortion (TDD) is 5.0%.
There are various harmonic mitigation methods available to address harmonics in the distribution system. They are all valid solutions depending on circumstances, each with its own benefits and detriments. The primary solutions are harmonic mitigating transformers; active harmonic filters; and line reactors, dc bus chokes, and passive filters.
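The sketch below implements the two pieces of arithmetic in this article: the characteristic-harmonic formula h = kp ± 1, and a check of a current spectrum against the odd-harmonic limits quoted above for the Isc/IL < 20 case. The measured spectrum is a made-up, 6-pulse-like example, and the band boundaries in limit_for() are one reasonable reading of the quoted table (the article's "11th to 17th" style ranges leave the endpoints ambiguous); consult the standard itself for the exact bounds.

```python
def characteristic_harmonics(pulses, k_max=3):
    """Orders h = k*p +/- 1 for a p-pulse rectifier (formula above)."""
    orders = []
    for k in range(1, k_max + 1):
        orders += [k * pulses - 1, k * pulses + 1]
    return sorted(orders)

def limit_for(order):
    """Odd-harmonic current limits quoted above for Isc/IL < 20 (assumed inclusive lower bounds)."""
    if order < 11: return 4.0
    if order < 17: return 2.0
    if order < 23: return 1.5
    if order < 35: return 0.6
    return 0.3

print("6-pulse characteristic harmonics:", characteristic_harmonics(6))
# -> [5, 7, 11, 13, 17, 19]; rule-of-thumb amplitude ~ 1/h of the fundamental

# Hypothetical measured spectrum, % of maximum demand load current (assumed data)
measured = {5: 3.5, 7: 2.8, 11: 1.8, 13: 1.1, 17: 0.9, 19: 0.7}

tdd = sum(v * v for v in measured.values()) ** 0.5  # root-sum-square of components
for h, mag in sorted(measured.items()):
    flag = "OK" if mag <= limit_for(h) else "EXCEEDS"
    print(f"h={h:2d}: {mag:4.1f}% (limit {limit_for(h):3.1f}%) {flag}")
print(f"TDD = {tdd:.2f}% (limit 5.0%)")
```

In this invented example every individual order passes but the TDD comes out just over 5%, which is exactly the situation where one of the mitigation methods listed above would be applied.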
Organs that form urine: [the kidneys]
Lead urine to the bladder: [the ureters]
Leads urine to the outside of the body: [the urethra]
Roles of urinary homeostasis: 1. regulating composition of body fluids 2. removing wastes (from metabolism, etc.) 3. kidneys are minor endocrine organs
Location of kidneys: between the twelfth thoracic and third lumbar vertebrae
What enters and exits the kidney? the renal artery enters; the renal vein and ureter exit
Outer portion of kidney: renal cortex
Inner, triangular portion(s) of kidney: renal medulla (renal pyramids)
A sudden loss of kidney function, usually associated with shock or intense renal vasoconstriction, lasting from a few days to a few weeks: acute renal failure
Inflammation of the urinary bladder: [cystitis]
Blood in the urine: [hematuria]
A method of clearing waste products from the blood in which blood passes by the semipermeable membrane of the artificial kidney and waste products are removed by diffusion: [hemodialysis]
Night urination (during sleep): [nocturia]
The condition of having urinary volumes of less than 500 ml/day (normal is 1000 ml/day): [oliguria]
Excessive urine output (as with diabetes insipidus): [polyuria]
Retention of urinary constituents in the blood owing to kidney dysfunction: [uremia]
Functional unit of the kidney: [the nephron]
T or F: one collecting duct serves one nephron? F: one collecting duct serves several nephrons
How many nephrons are there in each kidney? one million
How many capillaries are there in the glomerulus? 50 capillaries
How much more permeable is the glomerulus than typical capillaries? 100 to 1000 times more permeable
Foot processes: hold the glomerular capillaries in place
What is the purpose of brush border microvilli in the proximal convoluted tubule? to increase surface area
T or F: the nephron loop and distal convoluted tubule contain as many microvilli as the proximal convoluted tubule? F: the nephron loop has NO microvilli and the distal convoluted tubule has much less
Empties into the collecting duct: distal convoluted tubule
The portion of the blood plasma that enters the capsule: glomerular filtrate
Mechanisms that cause fluid to be filtered: 1. high hydrostatic pressure of blood (45 to 60 mmHg) 2. large number of pores
How much blood plasma is filtered in a day? 180 L, or 45 gallons
What is filtered and what is too big? water, electrolytes, glucose, amino acids, urea, hormones, and vitamins ARE filtered; plasma proteins, RBCs, WBCs, and platelets are too big (albumin would be the first to filter if something were damaged)
What is the GFR, or glomerular filtration rate? 120 ml per minute
How is GFR regulated? vasoconstriction or dilation of the afferent arterioles by extrinsic and intrinsic mechanisms
What are the extrinsic and intrinsic factors? extrinsic (sympathetic nerves) and intrinsic (locally produced chemicals)
What is used to measure GFR? inulin clearance: GFR (ml/min) = (urine volume ml/min x inulin conc. in urine mg/ml) / (inulin conc. in plasma mg/ml)
Where does reabsorption take place? in ALL parts of the nephron
Transfer of fluid and solutes out of the lumen of the nephron, through the interstitial space, and into the peritubular capillaries: tubular reabsorption
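A quick worked example of the inulin-clearance formula on the card above, with invented sample values deliberately chosen to land on the normal GFR of ~120 ml/min that the cards quote:

```python
# GFR (ml/min) = (urine flow ml/min * urine inulin mg/ml) / (plasma inulin mg/ml)
urine_flow = 2.0       # ml/min (assumed sample value)
urine_inulin = 30.0    # mg/ml (assumed sample value)
plasma_inulin = 0.5    # mg/ml (assumed sample value)

gfr = urine_flow * urine_inulin / plasma_inulin
print(f"GFR = {gfr:.0f} ml/min")   # -> 120 ml/min
```

Inulin works for this measurement precisely because, as the later cards explain, it is filtered but neither reabsorbed nor secreted.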
How much filtrate is reabsorbed? 99%; we only produce 1 to 2 L of urine per day
No energy required: [passive transport]
Requires energy: active transport
Sodium (Na): 99.5% of the filtered sodium is reabsorbed (67% in the proximal tubule, 25% in the loop of Henle, 8% in the distal tubule)
What regulates sodium reabsorption? aldosterone, of the renin-angiotensin system
Under normal conditions, how much of the glucose is reabsorbed? [all of it]
What is the transport maximum (Tm) for glucose? 375 mg/minute filtered
A glucose value above 375 will not be reabsorbed and will appear in the urine as a sign of what? [diabetes mellitus]
The secretion of substances from the peritubular capillaries into the lumen of the tubule: tubular secretion (to selectively move substances into the lumen for excretion into the urine)
What are the most important substances excreted by the tubules? hydrogen ions, potassium ions, and some organic anions
Why is H+ secretion important? acid-base balance in the body
What does aldosterone stimulate? potassium secretion and sodium reabsorption
How do the kidneys regulate acid-base balance? by secretion of H+ ions into the tubules and the reabsorption of bicarbonate
[Acidosis:] the ratio of CO2 to bicarbonate in the extracellular fluid is increased because of the production of CO2 or an increase in H+ formation
Renal response to acidosis? 1) increased amounts of CO2 enter the tubular cells from the ECF 2) increased amounts of H+ are secreted into the lumen of the nephron 3) bicarbonate in the lumen of the nephron is reabsorbed into the ECF
Net result of acidosis: H+ ions are secreted into the urine and bicarbonate ions are retained
[Alkalosis:] the ratio of bicarbonate to CO2 increases as the pH rises
Renal response to alkalosis? 1) decreased amounts of CO2 enter the tubular cells from the ECF 2) decreased amounts of H+ are secreted into the lumen of the nephron 3) less bicarbonate is reabsorbed
Net result of alkalosis: H+ ions are retained and bicarbonate ions are excreted
Where are the mechanisms that the kidneys use to regulate urine concentration? the medullary interstitium, the tubules, and the vasa recta (medullary capillaries)
Which part of the nephron loop is permeable to water? the descending limb
Which part of the nephron loop is IMpermeable to water? the ascending limb
Where do sodium and chloride ions diffuse in and out of? Na+ and Cl- diffuse into the descending vasa recta and out of the ascending vasa recta
What quantity of renal blood flow passes through the vasa recta? 1 to 2 percent
Net result of regulation of urine concentration? a high osmotic concentration in the medulla
Low levels of ADH? dilute urine, because less water is reabsorbed from the tubule
High levels of ADH? concentrated urine, because ADH promotes water reabsorption from the tubule
How is it decided which things will be retained and which will be expelled in the urine? it depends on the body's need to retain or eliminate that substance
Clearance (plasma clearance) of a certain substance: Clearance (ml/min) = U x V / P
If a substance is filtered and reabsorbed but not secreted... its plasma clearance rate is always less than the GFR (less than 120 ml/min)
If a substance is filtered and secreted but not reabsorbed... its plasma clearance rate is always greater than the GFR (greater than 120 ml/min)
Length of the urethra in men vs. women: it is significantly shorter in women than in men, which means more urinary tract infections for women, because bacteria don't have to travel as far
Where renin is produced and secreted: the juxtaglomerular apparatus
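The clearance cards above reduce to a simple comparison against the GFR, sketched below. The example clearance values for urea and PAH are rough, assumed figures for illustration, not values from the cards.

```python
GFR = 120.0  # ml/min, the normal value quoted on the cards

def interpret(name, clearance):
    """Compare a substance's plasma clearance (C = U*V/P) to the GFR."""
    if clearance < GFR:
        verdict = "filtered and (net) reabsorbed"
    elif clearance > GFR:
        verdict = "filtered and (net) secreted"
    else:
        verdict = "filtered only (inulin-like)"
    print(f"{name}: {clearance:.0f} ml/min -> {verdict}")

interpret("glucose (normal)", 0)   # fully reabsorbed below its Tm
interpret("urea", 65)              # partially reabsorbed (assumed value)
interpret("inulin", 120)
interpret("PAH", 600)              # strongly secreted (assumed value)
```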
The kidneys play an important role in regulation of blood pressure via: the renin-angiotensin-aldosterone system
Effects of renal failure: water and salt retention; high plasma urea, creatinine, and uric acid; coma due to acidosis; death if pH falls below 6.8
What senses the need to release renin from the juxtaglomerular cells? the macula densa (in the wall of the distal tubule)
What causes release of renin? a decrease in blood volume/pressure or an increase in sodium chloride concentration in the distal tubule
Action of hormones vs. nerves: the action of hormones is relatively slow and the effects are prolonged; nerve impulses are fast and the effects are short
[Endocrinology:] the study of endocrine glands, the hormones they secrete, and the effects they have on their target cells, or target tissues
How do hormones influence their target cells? by chemically binding to integral membrane protein receptors that bind and recognize that hormone
Generally, how many receptors for a particular hormone does a target cell have?
[Down-regulation:] when a hormone is present in excess and the number of target cell receptors decreases
[Up-regulation:] when a hormone is deficient and the number of receptors increases (makes the target tissue more sensitive to the hormone)
All steroid hormones are lipids that are derived from: cholesterol
Characteristics of steroids: lipid soluble and enter cells rapidly. Ex: estrogens, progesterone, testosterone, aldosterone, cortisol
Biogenic amines: synthesized by modifying amino acids. Ex: T3 and T4, epinephrine, histamine, serotonin
Peptides and proteins: consist of chains of 3 to 200 amino acids, synthesized on the rough endoplasmic reticulum. Ex: oxytocin, ADH, parathyroid hormone, calcitonin, CCK, gastrin
Electrical vs. chemical communication of the brain: nervous system vs. endocrine system, respectively
General characteristics of hormones: physiological regulators; effective in minute quantities; synthesized by living cells; secreted into and carried by the blood (some exceptions); initiate specific actions
[Neuroendocrinology:] the study of the interactions between the nervous system and the endocrine system
Once called the master gland: the pituitary gland, which is controlled by the hypothalamus
Anterior pituitary secretes 7 hormones: growth hormone, ACTH, thyroid stimulating hormone, prolactin, FSH, LH, melanocyte stimulating hormone
Posterior pituitary secretes 2 hormones: ADH and oxytocin (milk ejection)
Where are the two hormones from the posterior pituitary made? in neurosecretory neurons (of the hypothalamus)
Where are the hormones from the anterior pituitary made? [in the anterior pituitary itself]
Made by the hypothalamus and transported in small blood vessels: releasing hormones or inhibitory hormones. CRH stimulates ACTH; TRH stimulates TSH and prolactin; GnRH stimulates FSH and LH; GIH, or somatostatin, inhibits growth hormone secretion
Gigantism and acromegaly (excess GH): gigantism is excess before puberty; acromegaly is excess in adults
Major source of IGF-I (insulin-like growth factor): [the liver]
Pituitary dwarfism: lack of GH or GRH before puberty; may also be a hypothalamic or pituitary tumor
Production of milk: [prolactin]
Thyroid stimulating hormone (TSH): stimulates T3 and T4
Follicle stimulating hormone (FSH): stimulates the egg to maturity, estrogens, testicular growth
Luteinizing hormone (LH): ovulation, corpus luteum, estrogen and progesterone; in males, testosterone
ACTH, or adrenocorticotropin: normal growth of the adrenal cortex and secretion of glucocorticoids; increased lipids and skin pigmentation
Milk ejection and uterine contraction: [oxytocin]
ADH, or vasopressin: released in response to rising plasma tonicity or falling blood pressure
Diabetes insipidus: lack of ADH due to damage of the pituitary or hypothalamus; symptoms: polyuria, polydipsia, dehydration, fever, dry tongue, and delirium
Tumor of the chromaffin cells in the medulla: [pheochromocytoma]; symptoms: high blood pressure
Location of adrenal gland: on top of the kidney
The adrenal gland secretes? catecholamine hormones for the sympathetic nervous system, mineral balance, energy balance, and reproductive function
Mineralocorticoids (aldosterone): sodium, potassium, and water balance; zona glomerulosa
Glucocorticoids (cortisol): anti-inflammatory; metabolism of carbs, proteins, and fats; zona fasciculata
Gonadocorticoids (sex hormones): zona reticularis
Secretion of the chromaffin cells in the adrenal medulla: catecholamines: epinephrine (80%), norepinephrine (20%), dopamine (<1%)
Inadequate secretion of glucocorticoids and mineralocorticoids (hypoglycemia, sodium and potassium imbalance, dehydration, weight loss, weakness): Addison's disease (President Kennedy had it)
Hypersecretion of corticosteroids, usually from a tumor or oversecretion of ACTH (puffy moon face, decreased antibodies, hyperglycemia): Cushing's syndrome
Alteration of the enzymes required to produce mineralocorticoids, increasing production of sex hormones and masculinizing females: adrenogenital syndrome
How many thyroid follicles do humans have? one million
What do the thyroid follicles make? a protein-rich fluid called colloid
What do T4 and T3 do? regulate metabolism, body temperature, and growth (the hypothalamus and the release of TSH regulate them)
What does calcitonin do? lowers blood calcium by inhibiting the release of calcium from bone tissue
Insufficient secretion of T4 and T3 in infants; stunted growth and thickened facial features: cretinism (hypothyroidism)
Insufficient secretion of T4 and T3 in adults: [myxedema]
Pathological enlargement of the thyroid gland due to insufficient iodine intake: [goiter]
Excessive secretion of T4 and T3, bulging eyes: Graves' disease
Calcium and... 1) bones 2) the digestive tract 3) the kidneys; vitamin D helps to absorb calcium
Calcium distribution in the body: 99% crystalline in bone, 0.9% in body cells, 0.1% in the ECF
How does calcitonin lower blood calcium? decreases bone resorption by inhibiting the activity of osteoclasts; stimulates urinary excretion of calcium and phosphate by inhibiting their reabsorption in the kidneys
T or F: PTH is essential for life [T]
PTH increases blood calcium by... stimulating the activity of osteoclasts to resorb bone; stimulating the kidneys to reabsorb calcium from the filtrate; promoting the formation of vitamin D
[Hyperparathyroidism:] usually caused by a tumor in one of the parathyroid glands,
characterized by hypercalcemia
[Hypoparathyroidism:] used to be caused by removal of the parathyroids during thyroid surgery
Pancreas exocrine function: secretion of pancreatic juice (enzymes), which goes into the intestinal tract
Pancreas endocrine function: alpha cells secrete glucagon; beta cells secrete insulin
What does glucagon do? elevates blood glucose by stimulating glycogenolysis in the liver
What does insulin do? promotes the cells to take up glucose (brain, kidney, intestinal, and red blood cells don't need insulin); lowers blood glucose levels and stimulates glycolysis
Lack of ADH from the posterior pituitary: diabetes insipidus
Insulin deficiency: diabetes mellitus
Juvenile onset: Type I, insulin-dependent
Maturity onset: Type II, non-insulin-dependent
How can juvenile diabetes be complicated? by ketoacidosis
Glucose in the urine: [glycosuria]
Increased urine volume: [polyuria]
Increased drinking: [polydipsia]
What happens to the excess blood sugar during hyperglycemia? it is shunted to the polyol pathway, or to the organs that don't need glucose
Reactive hypoglycemia: caused by an exaggerated response of the beta cells to a rise in blood glucose
Gestational diabetes: happens to 2-5% of pregnant women, then disappears after delivery
Classroom discipline
October 25, 2007
In a previous post about education, "Test Anxiety, Stand And Deliver," we got some really informative comments from an experienced teacher, who made the following point about classroom discipline:
Discipline – I am amazed that schools spend so much time on curriculum and yet spend so little time on a cohesive discipline plan. Parents rarely know that their child is in trouble until the child has been suspended or received some form of severe punishment. This despite the fact that there are more lines of communication than ever before – i.e., websites, email, phone, etc. Parents know instantly how their child is doing with grades thanks to online grades nowadays. However, there is no place for discipline…
(More than one teacher chimed in… go read!)
And today Dynamics Of Cats picks up the same theme:
I was talking to a friend recently. As with many of the volunteers she has no training in education and no experience with actual teaching of groups. She is a professional with two kids of her own. She mentioned the "two trouble boys" in the class she was helping with, and said she had tried to work with one of them. After trying to cajole him and then order him to do something, he turned to her and told her bluntly: "I'm not doing it, and you can't make me". He is seven years old. And, he is right…     Dynamics Of Cats: Children Of Our Time
DOC then carries the idea to a frightening conclusion: Someday those kids will grow up and hold responsible positions in business and government. And they may still be the same defiant, self-centered little brats at heart when they do.
Must admit I'm stumped by this one. How can schools get parents onboard with their kids' discipline? The solution must, like the problem does, extend beyond school walls. I told my kids they did not have to respect the teacher, but they had to be respectful towards the teacher. They were never to disrupt the class, among other things. But what else? How do we make it matter to the parents? I mean not just some punitive measure, but how do we convey the importance of a disciplined learning environment to the parents?
Categories: Education
1. james old guy October 26, 2007 at 07:56 | #1
What did you expect? There is no such thing as actual punishment anymore. Parents won't discipline their kids and threaten to sue the school for attempting any kind of minor punishment. I had an employee about 8 years ago who had a kid that was a major problem; she never actually punished him for anything and threatened the school when they expelled him. Of course the school folded; it had no choice, since the school board was also afraid of lawsuits. They did the time-out bullshit, and all those other half-hearted attempts to get him under control. Nothing worked. Two years ago he was shot dead in a drug deal gone bad. There are a lot of old sayings that have a basis in truth; "spare the rod, spoil the child" and "children should be seen and not heard" are just two. I am not saying kids should not be protected against abuse, but we have taken the discipline responsibilities away from parents and given them to no one. Corporal punishment has been outlawed, so what is left?
2. Lucas October 26, 2007 at 16:15 | #2
"[They] no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and are tyrants over their teachers." (See the link for discussion of attribution—usually attributed to Socrates.)
When I look around me in graduate school, I mostly see bright, intellectually curious, well-behaved people. I guess that politicians and teachers must have been totally satisfied with us when we were in school in the 80's and 90's. (DOF probably has something to say about this.) Remember, the cream rises to the top, but more importantly, people grow up. Children misbehave, it's true, and I imagine that it's incredibly frustrating to be facing a room full of ill-behaved students. Many of them *will* realize the importance of respecting authority as their lives progress. I know I'm certainly much better behaved and better able to function in responsible roles than I was in high school.
3. Mrs SEB November 7, 2007 at 14:42 | #3
Yes, all children misbehave, but it should be rare for anyone to face a full room of "ill-behaved" students. It's not about a full room; it's about the minority percentage that monopolizes everyone's time and energy. It is about that minority that puts everyone at physical, psychological, and academic risk. The issue revolves around: How do we deal with the extremes? How do we cultivate, nurture, and maintain a safe, tolerant, cooperative community in our classrooms? We need a boatload of parental support and participation to accomplish this goal.
As far as corporal punishment goes… I am not in favor of re-establishing the "spare the rod, spoil the child" or the "children should be seen and not heard" policies. Violence begets violence. What do you really teach a child when they are physically assaulted (spanked/hand-slapped) for poking, hitting, pushing, spitting on, etc., another human being? Maybe, I'll get you back? Or, don't get caught? How does this influence the positive growth of this child into a happy, functional, productive member of society?
Children need to be heard. They need to have a voice, but not necessarily the final word. How else will our children develop their logic and reasoning skills? Providing a structured environment that not only establishes classroom rules with consequences/rewards, but also teaches and utilizes conflict resolution, is vital in the cognitive development of all children/students. Simply stating that it "must be frustrating" to face these out-of-control students is the understatement of the century. It's not just "frustrating" to have severely misbehaving students in the classroom. It's dangerous for the other students and the teacher, as well as degrading to the quality of education for all.
How do you expect a teacher to teach if one or two or "a few" of the students are running rampant? Not just "interrupting" by talking out or moving about in the classroom, but actually destroying property and initiating verbal and physical confrontations with other students (and possibly the teacher themselves)? As the adult in the environment you cannot "ignore" it when Sam snatches Jill's chair out from under her. Or "overlook" Kendra's dash-and-snatch move, followed by dumping and grinding all of Bobby's crayons underfoot. Neither can you ask the classroom to "never mind" when Ian begins screaming at his seat while shredding his spelling manual and throwing the pieces in the air. And, yes, these are examples of average "misbehavior" both my parents encountered in their various K, 1st grade, 3rd grade, and 5th grade classrooms throughout their careers as professional public school elementary teachers. I also experienced a great deal of said misbehavior in my short-lived teaching career.
However, I beg to differ with the statement: No, most do not. THINK: Why are we having such a difficult time coordinating parental support? These children (if they survive) are becoming our parents. They are becoming the parents who are often too young, self-absorbed, and often underdeveloped socially and cognitively to even understand there is a problem, or to care if they do comprehend. The 80s/90s have not produced a stellar quality of parent. I believe if you talk with career educators and politicians whose careers span the 80s/90s to the present day, you will find their answer to be: There wasn't (and isn't) satisfaction.
Some of "our parents" may have been more interactive and cooperative with the schools at that time. Hence, their participation cultivated a better environment to grow their children's social skills. Hence Lucas has become better able to function in responsible roles as an adult than he believed he was capable of doing in high school.
But I tend to feel that, maybe, just maybe, this was when it all began… a trend of parental neglect or parental ignorance (too busy with myself to want to deal with my kids) that is eating away at our society's foundation. One in which suing the school, or threatening to do so, results from its disciplining my child (when I do not) because it makes me look bad. I believe that 90% of parental outcry and action at this time has nothing to do with what is best for the children.
Comments are closed.
Standard views of the messianic idea in Judaism have long held that ancient Jews expected a triumphant hero to liberate them from their enemies. Students of early Christianity have even suggested that Jesus' predictions of his own death must have been inserted into the Gospel accounts after his crucifixion, because, they say, no such ideas existed in Judaism; the Gospel writers were trying to salvage their faith following his obvious defeat. In 2008, though, news began to appear of an ancient Hebrew tablet, 3 feet tall, found somewhere near the Dead Sea, that described a messiah who would be killed and then, three days later, rise from the dead. Dated to no later than the first century before Christ, it seemed to be pre-Christian. Jewish. But its interpretation continues to be debated. One of the other problems posed by the earliest Christian movement is how it was that presumably monotheistic Jews could have come to believe in a divine Father and a divine Son. Historically, some have argued that this could only have happened when Gentiles began to enter Christianity, bringing with them pagan notions of earthly demigods like Hercules. The clear implication is that Jesus himself wasn't, and didn't claim to be, divine. Daniel Boyarin, however, argues that conventional understandings of Jesus and of the origins of Christianity are wrong. Since 1990, Boyarin has been a professor of Talmudic culture in the departments of Near Eastern studies and of rhetoric at the University of California at Berkeley. The coming of the Messiah, he says, was fully imagined, in detail, in ancient Jewish texts. Many Jews accepted Jesus as Messiah, according to Boyarin (who expressly includes Jesus among those who did), because his self-description and eventually his biography tallied exactly with expectations that they already held as Jews, and his core teachings were consistent with a particular strand of Jewish beliefs and doctrines. Jesus and his followers, Boyarin shows, were simply Jewish. The neat distinction between Judaism and Christianity, he contends, came centuries after Christ, at Nicea. He also rejects the common academic division of "the Jesus of history" from "the Christ of faith." "I suggest," he writes, "that Jesus and Christ were one from the very beginning. It won't be possible any longer to think of some ethical religious teacher who was later promoted to divinity under the influence of alien Greek notions, with his so-called original message being distorted and lost; the idea of Jesus as divine-human Messiah goes back to the very beginning of the Christian movement, to Jesus himself, and even before that." Drawing heavily on the Hebrew biblical book of Daniel, which he dates to around 161 B.C., but in which he sees earlier "revelations" that sometimes troubled even the book's relatively late Jewish author, Boyarin argues for a pre-Christian, Jewish, expectation of a figure called "the Son of Man." This is scarcely the simple Jewish monotheism that we've always been told about: "He is divine," says Boyarin. "He is in human form. He may very well be portrayed as a younger-looking divinity than the Ancient of Days. He will be enthroned on high. He is given power and dominion, even sovereignty on earth." And, Boyarin explains, this notion of a "younger" deity and an "older" one "is among the earliest ideas about God in the religion of the Israelites." Moreover, "The notion of a humiliated and suffering Messiah was not at all alien within Judaism before Jesus' advent. 
… Jews, it seems, had no difficulty whatever with understanding a Messiah who would vicariously suffer to redeem the world." The Book of Mormon prophet Nephi foresaw in a vision that "many plain and precious things" would be "taken away" from "the record of the Jews." (See 1 Nephi 13.) With that prediction in mind, listen to Professor Boyarin: It's certainly noteworthy when one of the world's leading Jewish scholars publishes a book about Jesus. And Daniel Boyarin's "The Jewish Gospels: The Story of the Jewish Christ" (The New Press, 2012) doesn't disappoint. It's extremely stimulating, and Latter-day Saints who have enjoyed the work of the British Methodist scholar Margaret Barker will find parts of it especially intriguing.
SALT LAKE CITY — A court battle over an Italian earthquake has sent shock waves rippling through scientific circles around the world, even rattling geologists in Utah. Seven Italian experts were sentenced to six years in jail for manslaughter because they failed to adequately warn the public that a big quake was coming. The outcome is especially shocking to geologists because they know the verdict is based on expectations that scientists simply cannot meet.
"I would say it's wrong," said Keith Koper, director of the University of Utah's Seismograph Station. "It's harsh and wrong, to be honest. It's a little bit unsettling that scientists are actually going to jail."
The criminal trial revolved around a devastating quake on April 6, 2009. The 6.3-magnitude temblor flattened numerous buildings and killed 308 people in the vicinity of L'Aquila, a medieval town in central Italy. An indictment accused the scientists and experts of giving "inexact, incomplete and contradictory information" about whether small tremors felt by residents in the weeks before the big quake should have constituted grounds for a warning to the general public. Even before the verdict and sentences were announced, scientists denounced the trial as ridiculous because seismic experts have no reliable way to predict earthquakes.
"It would be a real coup if we could give warning to society and to the different emergency operators," Koper said. "But the problem is, it's just a complicated system and we don't really have a good understanding of why earthquakes start and why they stop."
Koper said scientists do have the capability of making long-term projections about the likelihood of earthquakes in areas that have a history of seismic activity. But a short-term prediction, within hours or days, may never be possible. "People have tried for decades to come up with different precursors" that would indicate a big quake is coming, Koper said, "but nothing's been found yet. And most of us are pretty skeptical that we'll be able to predict earthquakes on a short time-scale."
It's easy to imagine other circumstances in which the Italy precedent could be applied. Could a TV weatherman be charged with manslaughter if someone dies in a snowstorm that wasn't predicted? Could a hydrologist be blamed for a fatal flood no one expected? If a skier dies in an avalanche, on a day when conditions are rated "moderate," could the Utah Avalanche Center take the rap?
"What happens if somebody doesn't predict the right track for a hurricane?" Koper asked. "Are they then criminally liable? That would definitely make the experts more hesitant to make predictions and to make forecasts." It could also make experts so jumpy they would predict too much, too often. "It's not a good thing if scientists are going to be overly cautious and, in a sense, cry wolf too often," Koper said. "People will get used to these alarms and then they won't react and then it could even be worse."
He argues that some responsibility rests with the public, not just with seismic experts. "We don't really know when earthquakes are going to happen," Koper said. "So the best advice is to be prepared."
The Italian case is currently on appeal, so the seven experts have not yet started serving their sentences.
Contributing: Associated Press
Lower Stress and Boost Memory with… Your Gut?
By Adrian Newman, B.A. | Category: General Health
There are a lot of stressors in life, and one of the recurring ones is memory trouble. Recalling faces, misplacing the keys, and remembering to do all the household chores can be annoying or even cause some serious problems! But when you think of mental health, memory, and stress levels, I'll bet you're not thinking about your stomach, and surely not about bacteria. Yet there's a growing body of research showing that gut bacteria are associated with less stress and improved memory (1).
Probiotics are by no means a sure thing, but there is plenty of research on them that shows a number of health benefits for your heart, weight, digestion, and brain function. And although more research must be completed to form a more definitive picture of the benefits of probiotics, I for one can say I've experienced benefits from consuming them.
Boost Your Probiotic Intake to Diversify Your Gut Bacteria
A recent study featured 22 men who were given a probiotic supplement for one month, then given a placebo for a month after that; they didn't know what they were taking during the trial. The supplement contained more than a billion Bifidobacterium longum 1714 bacteria (2), and was taken once per day. The group reported less stress and displayed better memory function for the month they were on the probiotic when compared to the placebo phase (and compared to how they felt before the trial, too). They also had less cortisol—the stress hormone—in their bloodstreams.
Although this study was small, the results mimic those found in rodent studies and complement the results of other studies showing the ties between gut bacteria and mental health (3). The population of bacteria in your intestines—your microbiome—may be one of the most important factors in your overall health. And when it comes to mental health issues such as stress reduction, anxiety, and memory, it appears to play a highly beneficial role. A study has even shown that people who eat more probiotic foods such as yogurt, pickles, kimchi, and sauerkraut report lower levels of social anxiety and neuroticism.
Probiotic supplements are not very expensive, and dietary sources of probiotics are readily available in your grocery store. Keeping cortisol low and controlling stress is central to your overall well-being, so boosting your probiotic intake to diversify your gut bacteria could pay major dividends. And if they can help improve memory, they can definitely make life less stressful!
Top 10 Daily Habits That Can Damage Your Brain
Do you know which of your daily habits can damage your brain? Yes, these 10 simple daily habits may be damaging your brain. The human brain is one of the most important parts of the human body. It is like a CPU that helps us think and do our daily tasks, and the brain also takes care of all the organs and assigns them work to do. Being the most delicate part of our body, it must be taken care of. To stay healthy and fit, you must take care not to damage the brain in any way; a damaged brain can lead to a number of health complications throughout the body. The World Health Organization has recently released a list, according to their research, of the top 10 daily habits that can damage your brain.
Top 10 Brain-Damaging Habits
Skipping Breakfast / No Breakfast
Youngsters, and girls especially, avoid breakfast, either to diet or because their busy schedules don't give them much time for it. After 8 hours of sleep your body needs sufficient nutrients. People who do not take breakfast, skipping the very first meal of the morning, usually get a low blood sugar level, which is not good for your health. This results in a lower supply of nutrients to the brain, which can cause brain degeneration. Skipping meals also means no carbohydrates, which leads to less of the energy you need for the whole day.
Overeating
When you have your favorite dish in front of you, more seems less. You enjoy eating so much that you forget your stomach is actually full. That happens to everyone, but did you know that overeating can harden the brain arteries? And hardening of the brain arteries results in a decrease of mental power.
High Sugar Consumption
Some people have a sweet tooth and love to eat sweets, consuming way too much sugar. If you like eating sweets, then stop. Consuming too much sugar interferes with the absorption of proteins and nutrients. This results in malnutrition, which interferes with the development of the brain. Make sure kids do not indulge in too much chocolate, sweets, and sugary stuff.
Air Pollution
Not exactly a habit, but the fact is that living in polluted air has become a compulsion. The irony is that our habits themselves are the reason for air pollution. We all know that the brain needs oxygen to work and to make our body work. Inhaling polluted air results in a lower supply of oxygen to the brain. Less oxygen means decreased efficiency of the brain.
Sleep Deprivation
Sleep is very important for our body's overall health and mental fitness. Proper sleep for 8 hours means rest for all our vital organs and the brain. Sleeping late and sleep deprivation have become a trend among today's youth, be it for work or friends. But long-term sleep deprivation accelerates the death of brain cells. So if you don't want your brain cells dead, get proper sleep.
Head Covered While Sleeping
Many people love to sleep with their head covered. But do you know that this habit of sleeping with the head covered increases the intake of carbon dioxide and decreases the intake of oxygen? The carbon dioxide you exhale stays around you, and fresh oxygen doesn't have an easy way to reach you. This may lead to brain-damaging effects, so if you sleep with your head covered, change the habit right now.
Working During Illness
This is very common among working people and students. Working hard or studying when you are sick is not good for your mental health. In weakness, your body and mind both need rest. Pushing through illness may decrease the effectiveness of the brain as well as damage it.
Lacking in Stimulating Thoughts
Not using your brain can cause brain shrinkage. Talking to yourself, thinking, and debating are among the best ways to train your brain and stimulate thoughts. Thinking useful and healthy thoughts is good for your mental health; dwelling on nonsense is not advisable at all.
Talking Rarely
People who are introverted and talk very little are at higher risk of decreasing the efficiency of the brain. Intellectual and healthy conversations, on the other hand, promote the efficiency of the brain.
It is easier for you to improve the health of your brain now that you know the 10 daily habits that can damage it.
source and courtesy: trendsnhealth
Digestive system
The digestive system is a group of organs responsible for the conversion of food into absorbable chemicals that are then used to provide energy for growth and repair. The digestive system is also known by a number of other names, including the gut, the digestive tube, the alimentary canal, the gastrointestinal (GI) tract, the intestinal tract, and the intestinal tube. The digestive system consists of the mouth, esophagus, stomach, and small and large intestines, along with several glands, such as the salivary glands, liver, gall bladder, and pancreas. These glands secrete digestive juices containing enzymes that break down the food chemically into smaller, more absorbable molecules. In addition to providing the body with the nutrients and energy it needs to function, the digestive system also separates and disposes of waste products ingested with the food.
Food is moved through the alimentary canal by a wavelike muscular motion known as peristalsis, which consists of the alternate contraction and relaxation of the smooth muscles lining the tract. In this way, food is passed through the gut in much the same manner as toothpaste is squeezed from a tube. Churning is another type of movement that takes place in the stomach and small intestine, which mixes the food so that the digestive enzymes can break down the food molecules.
Food in the human diet consists of carbohydrates, proteins, fats, vitamins, and minerals. The remainder of the food is fiber and water. The majority of minerals and vitamins pass through to the bloodstream without the need for further digestive changes, but other nutrient molecules must be broken down to simpler substances before they can be absorbed and used.
Food taken into the mouth is first prepared for digestion in a two-step process known as mastication. In the first stage, the teeth tear the food into smaller pieces. In the second stage, the tongue rolls these pieces into balls (boluses). Sensory receptors on the tongue (taste buds) detect taste sensations of sweet, salt, bitter, and sour, or cause the rejection of bad-tasting food. The olfactory nerves contribute to the sensation of taste by picking up the aroma of the food and passing the sensation of smell on to the brain. The sight of the food also stimulates the salivary glands. Altogether, the sensations of sight, taste, and smell cause the salivary glands, located in the mouth, to produce saliva, which then pours into the mouth to soften the food. An enzyme in the saliva called amylase begins the breakdown of carbohydrates (starch) into simple sugars, such as maltose. Ptyalin is one of the main amylase enzymes found in the mouth; ptyalin is also secreted by the pancreas.
The bolus of food, which is now a battered, moistened, and partially digested ball of food, is swallowed, moving to the throat at the back of the mouth (pharynx). In the throat, rings of muscles force the food into the esophagus, the first part of the upper digestive tube. The esophagus extends from the bottom part of the throat to the upper part of the stomach. The esophagus does not take part in digestion. Its job is to get the bolus into the stomach. There is a powerful muscle (the esophageal sphincter) at the junction of the esophagus and stomach which acts as a valve to keep food, stomach acids, and bile from flowing back into the esophagus and mouth.
Chemical digestion begins in the stomach. The stomach, a large, hollow, pouch-shaped muscular organ, is shaped like a lima bean.
When empty, the stomach becomes elongated; when filled, it balloons out. Food in the stomach is broken down by the action of gastric juice, which contains hydrochloric acid and a protein-digesting enzyme called pepsin. Gastric juice is secreted from the linings of the stomach walls, along with mucus, which helps to protect the stomach lining from the action of the acid. The three layers of powerful stomach muscles churn the food into a fine semiliquid paste called chyme. From time to time, the chyme is passed through an opening (the pyloric sphincter), which controls the passage of chyme between the stomach and the beginning of the small intestine. There are several mechanisms responsible for the secretion of gastric juice in the stomach. The stomach begins its production of gastric juice while the food is still in the mouth. Nerves from the cheeks and tongue are stimulated and send messages to the brain. The brain in turn sends messages to nerves in the stomach wall, stimulating the secretion of gastric juice before the arrival of the food. The second signal for gastric juice production occurs when the food arrives in the stomach and touches the lining. This mechanism provides for only a moderate addition to the amount of gastric juice that was secreted when the food was in the mouth. Gastric juice is needed mainly for the digestion of protein by pepsin. If a hamburger and bun reach the stomach, there is no need for extra gastric juice for the bun (carbohydrate), but the hamburger (protein) will require a much greater supply of gastric juice. The gastric juice already present will begin the breakdown of the large protein molecules of the hamburger into smaller molecules--polypeptides and peptides. These smaller molecules in turn stimulate the cells of the stomach lining to release the hormone gastrin into the bloodstream. Gastrin then circulates throughout the body and eventually reaches the stomach, where it stimulates the cells of the stomach lining to produce more gastric juice. The more protein there is in the stomach, the more gastrin will be produced, and the greater the production of gastric juice. The secretion of more gastric juice in response to the increased amount of protein in the stomach represents the third mechanism of gastric juice secretion. While digestion continues in the small intestine, it also becomes a major site for the process of absorption, that is, the passage of digested food into the bloodstream and its transport to the rest of the body. The small intestine is a long, narrow tube, about 20 ft (6 m) long, running from the stomach to the large intestine. The small intestine occupies the area of the abdomen between the diaphragm and hips, and is greatly coiled and twisted. The small intestine is lined with muscles that move the chyme toward the large intestine. The mucosa, which lines the entire small intestine, contains millions of glands that aid in the digestive and absorptive processes of the digestive system. The small intestine, or small bowel, is subdivided by anatomists into three sections: the duodenum, the jejunum, and the ileum. The duodenum is about 1 ft (0.3 m) long and connects with the lower portion of the stomach. When fluid food reaches the duodenum it undergoes further enzymatic digestion and is subjected to pancreatic juice, intestinal juice, and bile. The pancreas is a large gland located below the stomach that secretes pancreatic juice into the duodenum via the pancreatic duct.
There are three enzymes in pancreatic juice that digest carbohydrates, lipids, and proteins. Amylase (the enzyme that is also found in saliva) breaks down starch into simpler sugars such as maltose. The enzyme maltase in intestinal juice completes the breakdown of maltose into glucose. Lipases in pancreatic juice break down fats into fatty acids and glycerol, while proteinases continue the breakdown of proteins into amino acids. The gall bladder, located next to the liver, secretes bile into the duodenum. While bile does not contain enzymes, it contains salts and other substances that help to emulsify (dissolve) fats that are otherwise insoluble in water. The fats, so broken down into small globules, give the lipase enzymes a greater surface area for their action. Chyme passing from the duodenum next reaches the jejunum of the small intestine, which is about 3 ft (0.91 m) long. Here the digested breakdown products of carbohydrates, fats, proteins, and most of the vitamins, minerals, and iron are absorbed. The inner lining of the small intestine is composed of up to five million tiny, finger-like projections called villi. The villi increase the rate of absorption of the nutrients into the bloodstream by extending the surface of the small intestine to about five times that of the surface area of the skin. There are two transport systems that pick up the nutrients from the small intestine. Simple sugars, amino acids, glycerol, and some vitamins and salts are conveyed to the liver in the bloodstream. Fatty acids and vitamins are absorbed and then transported through the lymphatic system, the network of vessels that carry lymph and white blood cells throughout the body. Lymph eventually drains back into the bloodstream and so circulates throughout the body. The last section of the small intestine is the ileum. It is smaller and thinner-walled than the jejunum, and it is the preferred site for the absorption of vitamin B12 and of bile acids derived from bile. The large intestine, or colon, is wider and heavier than the small intestine, but much shorter, only about 4 ft (1.2 m) long. It rises up on one side of the body (the ascending colon), crosses over to the other side (the transverse colon), descends (the descending colon), forms an S-shape (the sigmoid colon), and reaches the rectum and anus, from which the waste products of digestion (feces or stool) are passed out, along with gas. The muscular rectum, about 5 in (13 cm) long, expels the feces through the anus, which has a large muscular sphincter that controls the passage of waste matter. The large intestine extracts water from the waste products of digestion and returns some of it to the bloodstream, along with some salts. Fecal matter contains undigested food, bacteria, and cells from the walls of the digestive tract. Certain types of bacteria of the large intestine help to synthesize the vitamins needed by the body. These vitamins find their way to the bloodstream along with the water absorbed from the colon, while excess fluids are passed out with the feces.
Residents in Sierra Leone's remaining Ebola hotspots will be confined to their houses for three days next week, officials said, as the government tries to snuff out an outbreak that has killed over 10,200 people across West Africa. The number of Ebola cases in the region has fallen in recent months, though a spike in Guinea highlights the risk of complacency, over a year into the worst outbreak on record. Sidi Yaya Tunis, an official at Sierra Leone's National Ebola Response Centre, said health officials would carry out house-to-house searches from March 27-29 to identify the sick in the north and west, where the virus is spreading fastest. Elsewhere, where transmission is lower, officials will focus on education and prevention, he said. Health officials said a previous lockdown in Sierra Leone in September was a success and helped identify more than 100 cases. "If we don't get on top of this before the rains come, it will be a horror show," said a Sierra Leone health official who asked not to be named because the details of the lockdown have not been made public. "Many people are still not following the basic rules." The rains are due to begin in May. The World Health Organization has said they could greatly complicate the fight against Ebola by washing away roads and making it harder for aid and healthcare workers to get to affected areas. The official said that residents would be allowed out to attend church on Palm Sunday for a few hours. The latest figures issued by the WHO showed that there had been 10,216 confirmed, probable and suspected deaths from Ebola in West Africa. Regional leaders have set themselves a target to completely stamp out the disease by mid-April. Ebola outbreaks in Nigeria, Mali and Senegal have been contained. Liberia has recorded the most deaths with 4,283 since the crisis began, according to the WHO. However, there are currently no confirmed cases in the country. Sierra Leone has been the next worst affected country with 3,702 dead. Guinea, where the outbreak was first identified, has recorded 2,231 confirmed and probable deaths from Ebola but has seen a recent spike, with the number of patients more than doubling since last month.
What Is a Maitake Mushroom? Maitake are one of the lesser-known edible mushrooms. They can be found in health food stores and cooperatives, but they aren't always as available as portobello or shiitake mushrooms. Maitake are considered gourmet mushrooms, and tend to be more costly than other types. Today, they are not only used for cooking, but are used in tea or other drinks, and are even taken as an alternative medicinal supplement. Maitake mushrooms derive their scientific name, Grifola frondosa, from the Griffin of Greek mythology. The common name means "dancing mushroom". In ancient Japan, people who found this mushroom often danced because of the mushroom's high value. Traditionally, it has been used both in Japan and China as an immune system enhancer. They have been used for over 3,000 years both medicinally and as a food source. Maitake mushrooms have been found to regulate glucose, insulin and blood pressure in the body, according to Disabled-World.com. The mushrooms can also help with weight loss and help to regulate liver lipids. Maitake mushrooms are high in amino acids, fiber, magnesium, niacin, potassium and vitamins B-2, C and D. Maitake mushrooms have been used in tonics, soups, teas and cooking in Asia for over 1,000 years to promote a long and healthy life, according to Cancer.org. Maitake mushrooms are sometimes called "Dancing Mushrooms" or "Hen of the Woods". They grow throughout North America, Europe and Japan in more temperate areas. In China, maitake is known medicinally as "Keisho". Japan has been the largest producer and consumer of maitake mushrooms since 1981, according to Diet-and-Health.net. Wild maitake commonly grows in northeastern Japan, where the temperature, humidity and moisture are best for its growth. In a study published in Volume 6, Issue 1 of the Alternative Medicine Review, it was found that maitake mushrooms could help to fight cancer. Maitake mushrooms could slow prostate, brain, liver, stomach and lung tumor growth, according to the study. The study also found that maitake mushrooms increased immune defense against AIDS and infections. Maitake was found to have anti-diabetic activity in another study, published in Volume 17, Issue 8 of the Biological and Pharmaceutical Bulletin. The most common maitake used is Grifola frondosa, or shiromaitake. However, there are other species, such as Grifola albicans or choreimaitake, Grifola gigantea, and Grifola umbellata or tonbimaitake. The mushroom is made up of fruit bodies that overlap one another in clumps. According to the study in the Alternative Medicine Review, they are commonly found at the base of persimmons, elms, or oak trees. The texture and taste are described in the study as being similar to that of hen or chicken.
Web Extra Friday, Apr. 15, 2005 New evidence for the earliest hominid In 2002, a team of researchers found a skull in the deserts of Central Africa. Dubbed "Toumaï," which means "hope of life" in the local language of Chad, the skull was assigned to a new hominid species, Sahelanthropus tchadensis (see Geotimes, Sept. 2002). However, the skull produced controversy over whether it was truly a hominid, and not a species of ape or chimpanzee. Now scientists say they have new evidence confirming that Toumaï is a new hominid species — the oldest known to date. The team, an international and interdisciplinary group known as the Mission Paleoanthropologique Franco-Tchadienne (MPFT), discovered new fossils of jawbones and teeth belonging to S. tchadensis at the same site in Chad that produced the skull. These finds, along with a virtual reconstruction of the Toumaï skull — two separate studies published in the April 7 Nature — help establish what scientists originally thought: Toumaï is a separate species more closely related to humans than to the apes and chimps. "The new material essentially confirms the original diagnosis of the species," says Daniel Lieberman, a professor of biological anthropology at Harvard University and co-author on both papers. "The skull has a huge number of features that resemble hominids, not apes — not only in early hominids, but also in later hominids." This skull, identified as S. tchadensis and found in Chad, is the oldest evidence of hominid evolution, according to new research. Image courtesy of MPFT. The skull is unique because of its age and geographic location, says Michel Brunet from the University of Poitiers in France, an author on both studies and head of MPFT. The bones were located more than 2,000 miles from the Rift Valley, where most ancestral human fossils have been found. The location, Brunet says, indicates that the dispersal of the earliest hominids was not localized in a particular region, such as Eastern Africa, as typically thought. Toumaï is approximately 7 million years old, 4 million years older than the famed "Lucy," Brunet says. "This is older than the molecular biologists think of the last split" in the lineage between humans and apes, he says. The split between humans and apes from a common ancestor related to both groups was previously thought to have occurred about 5 million years ago. Based on Toumaï, "now we know that the split between humans and apes is between 7 and 8 million years ago." Scientists reconstructed the skull by scanning it and affixing data points to particular regions to create a virtual replica. "This was a real technological tour de force," says John Fleagle, an anatomist at the State University of New York, Stony Brook, who was not affiliated with the research. "You can put bones together and take them apart and you don't have to actually touch the fossil." The Mission Paleoanthropologique Franco-Tchadienne (MPFT), an international, interdisciplinary team of scientists, is searching for fossil remains of ancestral humans in Central Africa. Image courtesy of MPFT. The virtual skull was manipulated to resemble the facial features of nonhominid primates to test the possibility that it may be related to some other group, but "it just didn't reconstruct that way," Fleagle says. When trying to make a chimp face out of the Toumaï skull data points, areas of the bones start to overlap and produce gaps in other places.
In the case of trying to make a gorilla out of the Toumaï bones, the two sides of the braincase overlap because of a gorilla's smaller brain size, Fleagle says; it is also impossible to elongate the face of the Toumaï skull to replicate a gorilla's snout. Other features of the skull also indicate that it was more hominid-like than ape-like, Fleagle says. For instance, the jawbones lack a sharpening mechanism notable in apes that have long, pointed canines. Also, the digital reconstruction of the skull shows Toumaï to have had a short, flat face, like that of more modern humans, Lieberman says, and unlike Lucy (an Australopithecus afarensis), who had a much longer face. The reconstruction team also says that Toumaï was a biped, walking upright on two legs, based on the position of the opening in the neck for the spinal cord. In most primates, the neck is bent backwards to hold the head upright while walking on all fours. "It's a bit of a stretch," Fleagle says, as they have only found the creature's skull. "You'd like to have legs before you start talking about how it moved." Michel Brunet of the University of Poitiers and head of MPFT searches the sands of the African desert for more evidence of Toumaï. Image courtesy of MPFT. The next step is to find more comparable specimens from the late Miocene, says Brunet, who has already started looking. He would also like the team to start digging into deposits older than those in which Toumaï was found. "What happened in the older level should reveal something more about human evolution," Brunet says, "because Toumaï is not far from the common ancestor." Until more specimens are found, however, scientists will continue to analyze Toumaï, providing more detailed descriptions of its morphological and anatomical features, Fleagle says. "It's a puzzle where this guy fits in," Lieberman says. "It's difficult to figure out evolutionary relationships from the lumps and bumps on bones." Laura Stafford MPFT homepage "Fossil find reveals evolutionary montage," Geotimes, September 2002
The Glamorgan-Gwent Archaeological Trust Ltd. Character Areas Lower Wye Valley 019 Highmeadow Woods View along one of the 'picturesque' walks through Beaulieu Wood. HLCA 019 Highmeadow Woods Ancient Woodland: woodland management features: charcoal burning hearths; relict industrial archaeology: charcoal burning at Priory Grove; and quarrying; communication: footpaths ('picturesque' walk through Beaulieu Wood); public rail (dismantled Ross-Monmouth railway); Roman road (OS 1st edition); ornamental/leisure; tourism ('picturesque' walk and viewing platforms); woodland boundaries. Back to map Historic Background The historic landscape area of Highmeadow Woods is an area of ancient woodland, which occupies the summit of a hill overlooking a large bend in the River Wye. The boundaries are defined by the extent of the ancient woodland, and by the national border with England to the east. The earliest evidence of activity in the area is a Hollow Way, which preserves the line of a track that may date to the Roman period. The manor of Hadnock in the parish of Dixton, within which this area of woodland lies, was included in a grant of land given to the Priory of Monmouth by Withenock, the second Lord of Monmouth (Bradney 1904). This association between the Priory and the woodland is demonstrated by the name of 'Priory Grove', an area immediately above the west bank of the Wye, which directly overlooks the river. The area of woodland to the south, known as 'Beaulieu Wood', is associated with Beaulieu Farm in the adjoining agricultural area, which is believed to be connected with the medieval Beaulieu Grange, a possession of the Abbey of Grace Dieu. The area of Highmeadow Woods historically fell within the parish of Dixton, in the manor of Hadnock (Bradney 1904). The area has existed as woodland throughout most of its history; evidence of woodland management of unknown date is represented by a large number of charcoal burning hearths. Large parts of the area, which was named 'Hadnock Wood' on the tithe map of the parish (1845), survive as semi-natural ancient woodland to the current day. After the dissolution of the monasteries, the area belonged firstly to the Huntley and Herbert families, then to the Duchy of Lancaster. It then fell to the family of the steward, Benedict Hall of Highmeadow, and eventually to Lord Gage. The area was then divided, Upper Hadnock being sold around 1800, and Lower Hadnock being sold to Admiral Griffin in approximately 1747. Following the death of his heir without a son, the estate was sold to Richard Blakemore of the Leys, MP for Wells (Bradney 1904, 23). The tithe map lists Blakemore as the owner of much of the land within this character area, while the area of Beaulieu Farm was part of the Beaufort estate. The development of Picturesque interest in the area in the late eighteenth/early nineteenth centuries led to Beaulieu Wood, in the south of the area, being included in the designed landscape of the Kymin Park. Following on from the construction of the Roundhouse in 1794 by the Monmouth Picnic Club (Register of Parks and Gardens), the surrounding area was developed to maximise appreciation of the landscape and views from the hill. This involved the creation of a circular footpath through Beaulieu Wood, with viewing points and platforms from which the scenery could be appreciated (Cadw 1994). Authorised by an Act of 1865, the Ross and Monmouth Railway Company constructed the Pontypool, Monmouth and Ross section of railway during the 1860s.
It was opened to passengers in 1873, although it was not heavily used, and was taken over by the GWR in 1905. Passenger services on this section of the line stopped in 1959 and the section was closed, although freight services continued to run on the remaining section further east, between Ross-on-Wye and Lydbrook Junction, until 1965, when this section also closed. The building of the railway influenced the development of the area in other ways; in addition to running through the character area, it necessitated the construction of several quarries which lie along the line of the railway and can be seen on First Edition OS maps (1882). Historic Landscape Characteristics Highmeadow Woods is characterised primarily by ancient woodland, the majority of which is replanted, although there are some areas where it remains semi-natural. It consists mainly of deciduous trees, with some stands of evergreens. This area contains two SSSIs, Fiddlers Elbow and part of the Upper Wye Gorge SSSI. The former is also a National Nature Reserve (NNR), while the latter includes the Lady Park Wood NNR. The areas of Beaulieu Wood to the south, and Priory Grove to the west, are now in the care of the Woodland Trust. There is significant evidence for woodland management in the form of charcoal burning (PRNs 07770g, 07771g, 07773g, 07775g, 07777g, 07786g, 07787g, 07790g, 07791g, and 07793g - 07801g), in a concentration on the west-facing slopes of the hill directly above the river. Industrial archaeology is mainly represented by extractive sites; two quarries (PRNs 07788g, 07792g) probably date to the 1860s, when the railway was constructed and Hadnock Road was realigned. There is a further quarry, labelled as 'old' on First Edition OS maps (1882) and possibly of a similar date, also alongside the line of the railway. A Hollow Way (PRN 07779g) defines the west boundary of the area at the border with the adjoining fieldscape (HLCA 020). This is depicted on First and Second Edition OS maps (1881, 1901) as a Roman Road, and is known locally as the Royal Road, thought to be one of the main exits from the Royal Forest of Dean. Although its origin is unknown, its depth, up to three to four metres in places, suggests considerable age. Other communication features include the Ross and Monmouth branch of the Great Western Railway (PRN 03266.0g), which ran along the north edge of the area parallel with the bank of the river; the line of the railway can still be followed as a path. Additionally, two parallel tracks are depicted on First and Second Edition OS maps (1881, 1901): Lady Grove Ride (PRN 07778g) and Priory Grove Ride (PRN 07769g); neither is sunken or metalled. There is a further ride in the area (PRN 07776g) which is not depicted on the early OS maps. Other communication links include woodland walks, tracks and paths and public rights of way. Some of these have historic associations with the picturesque movement; a picturesque walk (PRN 08966g) runs through the woodland from the Kymin Naval Temple in the adjoining area (HLCA 031) to the viewing platform (PRN 08967g), from where there are panoramic views across the Wye Valley.
Why Geopolitical Strategy Is Key To Sustaining American Power There has been a recent surge of literature on the decline of America as a global power. Some Americans are also increasingly worried about their national decline. Many in the country and across the globe believe that the US is slipping from its position as the world's most powerful nation. A Pew Poll conducted last year found that only 28% of Americans believe that their "country stands above all others." This is down 10 percentage points from just three years earlier. Some believe that the very existence of a debate about American decline calls the health of the country into question. Most recently, Joseph S. Nye, in his book Is the American Century Over?, examines Americans' long history of worrying about their country's decline. He makes the case that America will continue to play a central role as the most dominant power even into the 2040s. He corroborates his arguments with facts and figures on America's favourable geography, demographics, military power and soft power, purchasing power parity, and science and innovation. Nye's thesis is compelling and has merit, as these factors will continue to enable America to be a pivotal power in world politics. Increasingly, however, there is an asymmetry between the centres of economic power and military power. Taking cognizance of the gradual shift in the centres of geo-economics and geopolitics is critical to sustaining and maintaining America's power in global affairs. America's leadership in global affairs needs to be supplemented by economic primacy and a competitive edge in market-driven economics. Rethinking economic policies at home, pushing technology frontiers, and backing robust diplomatic missions abroad with the required military reach will be critical to sustaining American pre-eminence. Relative power matters in international politics. Indices like the largest army, the share of international trade, the most technologically advanced military, a powerful navy, the best universities and demographics portend that the US will long remain unrivalled in world politics. With a resilient political system, the US has been able to construct a global order based on liberal values and solidify it with an intricate system of alliances, which extends America's unipolar moment. With the turn of the century, the world has witnessed transnational challenges such as terrorism, global pandemic diseases, climate change and the proliferation of weapons of mass destruction. These challenges have increasingly proved that America cannot address these global threats alone and needs credible partners across the globe. In Asia, the great globalisation push of the 1990s led to the emergence of China, India and the East Asian Tigers onto the Asian economic centre stage. This shift of economic power to Asia has also created underlying currents of a power transition from a dominant US in the Asia-Pacific to a multipolar region. Since 2008, China has adopted assertive policies which have undermined Asian stability and challenged the status quo in maritime Asia along China's periphery.
In 2013, China's Ministry of National Defense announced aircraft identification rules for the East China Sea Air Defense Identification Zone of the People's Republic of China. More recently, it has been reported that China's land reclamation is creating a "great wall of sand" in the South China Sea. The "unprecedented" reclaiming of land in contested waters has raised serious questions about Chinese intentions. The steady and rapid growth of China since the 1980s in the economic sphere, and its challenge to American domination in East Asia, has engendered the debate on China's peaceful rise. Recently, two interesting discourses have emerged on how the US should engage with China. Robert D. Blackwill and Ashley J. Tellis, in "Revising U.S. Grand Strategy Towards China," argue that Washington needs a new grand strategy towards China that centres on balancing the rise of Chinese power rather than continuing to assist its ascendancy. In a competing analysis, "The Future of U.S.-China Relations Under Xi Jinping," former Australian Prime Minister Kevin Rudd argues that Washington and Beijing can avoid the "Thucydides' Trap," the historical pattern of conflict when rising powers rival ruling ones, and can forge a common narrative which is mutually beneficial. The Asia-Pacific region is by and large becoming the theatre of a brewing strategic rivalry between Washington and Beijing. Asian countries, whilst deepening economic partnerships with China, look to powers like the United States, India, Japan, Australia and Indonesia to infuse a sense of dynamic multipolarity in the region. Recently, the US Navy sent USS Fort Worth, a littoral combat ship, on its first patrol of the disputed Spratly Islands in the South China Sea, patrolling the airspace as well with a dispatched reconnaissance drone and a Seahawk helicopter. America's patrol of the South China Sea comes at a time when there are added concerns in Washington that China might impose air and sea restrictions in the Spratly Islands once it completes work on its seven artificial islands. One of the unintended consequences of this rivalry in the Asia-Pacific is China's gradually deepening ties with Russia to counter America's Asia Pivot. Russian Defence Minister Sergei Shoigu, during a visit to Beijing, stated that in 2015 Russia and China will hold joint military exercises in the Pacific Ocean, closer to the Chinese mainland, and in the Mediterranean Sea. In the face of escalating tensions in the Asia-Pacific, the US role in the region should be backed by long-term, demonstrable political commitment. In the interest of stability and unity in Asia, Beijing needs to be mindful that its long-term strategic interests lie in an approach which is not revisionist. The US Pivot to Asia policy in action in the Asia-Pacific region provides allies and partners with credible deterrence against an increasingly assertive China. However, American policy should strike a balance between avoiding a hot conflict and cooperating with China to facilitate the country's peaceful rise. The costs of a hot conflict in the region would be high, with difficult consequences that need to be avoided. The challenge for the US and its partners will be to deter China's aggressive posture without risking an escalation of conflict.
Miles Kington: A stirring life of wanderlust and adventure He invented a method of drugging rhinos using a crossbow and a syringe fired from a helicopter into the animal's backside Cadbury Castle is not a castle at all, but an enormous hill fort in Somerset which was one of the British strongholds against the invading Romans. The Romans eventually overran it, slaughtered the inhabitants and left it in ruins. Unlike most such hill forts, however, it was reoccupied and rebuilt later, after the Romans had gone, in the so-called Dark Ages. Reoccupied by who? Well, by none other than King Arthur and his knights, is the romantic theory. Archaeologists have found a lot of valuable objects of just the right period to establish that it was the headquarters of someone very important, and certainly when you stand on top of Cadbury Castle and stare across the plains to distant Glastonbury Tor, it is easy to ignore the buzz of the A303 below you and tune in to other, more stirring times. I was first taken up to Cadbury Castle 15 years ago by my father-in-law, Nick Carter, who was fascinated by all things Arthurian. Unfortunately, Nick lived in South Africa, where they don't have a lot of Arthurian remains, so when he came back to Britain in 1991 for his first return in some 40 years, we did some concentrated fort-scrambling. Cadbury Castle was his favourite of the sites we visited, though, and it was there that he said he would like his ashes to be scattered one day. Nick was British, not South African, but he had ended up in South Africa after an odyssey which took him from one end of the continent to the other. He was in tanks in North Africa during the war, when he gained the MC and rose to Major. After the war he tried to readjust to peacetime back in Britain, even at one time working on a farm in the Isle of Wight belonging to J B Priestley (a man he did not warm to), but wanderlust prevailed, and he rejoined the Army and found himself back in Africa, in Kenya. Here, after his army days were over, he became heavily involved in wild game preservation and became famous at one time for inventing a method of drugging rhinoceroses using a crossbow and a syringe fired from a helicopter into the animal's backside. He had left his family behind in England, and the first time my wife really got to know him at all was when, as a young teenager, she went out to the Kenyan bush to stay with her father for several weeks. In our kitchen there is a framed photo from that time, showing her sitting smiling on a bench beside the bearded Nick, who is looking very like Ernest Hemingway. Nick is patting an animal on the head. It is a rhinoceros. Only a baby rhino, it's true, but still a lot bigger than the average retriever or Labrador ... Nick wrote a book about his rhino adventures called The Arm'd Rhinoceros. Then after Kenya he went to Mozambique to look after more rhinos and elephants, and he ended up in South Africa trying to safeguard the last surviving herd of native South African elephants, in Knysna Forest, which is when I first met him. Like most people who have been through a lot, he had adopted a slightly sardonic tone about his adventures, so he never talked about the times he had seen, for instance, best friends die in burning tanks, but preferred to tell you about the time in the desert when his tanks were facing the Germans across a valley. "Before hostilities could start, the Luftwaffe suddenly flew over and started firing away.
Luckily, it's hard to tell one tank from another from the air, so we were relieved to see that the Germans had started bombarding their own side. We weren't so pleased when, a little later, the RAF came over and started shooting at us...." Nick died two years ago. Not a single British newspaper carried an obituary of him. Well done, Fleet Street. But now his widow, Gillian, and his son, Alex, have finally made it back to Britain carrying the casket containing his ashes, and two days ago, on Saturday afternoon, you could have seen a family procession slipping and sliding up the muddy track to the top of Cadbury Castle in order to lay his ashes to rest. More of this stirring stuff tomorrow.
Sorghum Grass for Biofuel? Researchers at Iowa State University have led a study designed to test the efficiency of cropping sorghum grass for biofuel production. Testing was performed on biofuel production yields in both single-cropping and double-cropping systems. Sorghum from a single-cropping system was determined to be more effective for ethanol production than the leading ethanol feedstock, corn. Ben Goff, author of the report, suggests that only 15%-25% of energy requirements can be fulfilled using corn or starch-based ethanol. While Goff states that sorghum is more efficient from an ethanol production standpoint, it remains to be seen whether the long-term benefits of double-cropping, such as reduced erosion potential, are an acceptable trade-off for the reduced total biomass production. Specific genotypes of sorghum in the double-cropping study yielded total biomass equal to those in the single-cropping study, but all of the sorghum varieties in the single-cropping study had consistently higher ethanol yields. Biofuels from sources like sorghum could have significant energy, sustainability, and industry impacts.
London - Goggles are being used to help diagnose the balance condition benign paroxysmal positional vertigo, also known as BPPV. A crucial component of our balance system is "ear rocks", calcium carbonate crystals that are held in a pouch in the inner ear and help stimulate nerves when we move our heads. In BPPV, these rocks fall out of position (due to a virus, head injury or ageing) and float into other parts of the ear, triggering dizziness. Specialists diagnose the condition by assessing twitching in the eyes, as the rocks interfere with signals from the ear to the part of the brain that controls eye movement. However, in daylight the eye tends to fixate on objects, which can make twitching difficult to detect. The goggles worn by the patient prevent this from happening because they allow the eyes to be examined in the dark. They use infrared imaging and a camera to send magnified pictures of the eye to a computer screen. - Daily Mail
Are your pipes too big? The problem with Long Fat Networks A few years ago, I was involved in a consulting project with a large company in the healthcare industry that was in the middle of a data center migration. After the networks and servers were stood up at the new location they needed to migrate massive amounts of data in bulk, so the company secured a pair of OC192 circuits, providing nearly 10Gbps of throughput in each direction on each circuit. Everything seemed to be in order, so they began transferring data. To their surprise, they were only seeing throughput in the tens of megabits per second, even on servers connected to the network via gigabit Ethernet switches. After exhausting all the normal troubleshooting steps, they decided to bring in a fresh set of eyes. What we discovered may seem counterintuitive: this company's pipes were just too big. The company was suffering from a Long Fat Network (LFN). The LFN problem addressed here relates to one function of one protocol in one layer of the OSI model: the Transmission Control Protocol, or TCP. Layer 4, the Transport Layer, provides numerous functions, including:

Segmentation of data. If the amount of data sent by an application exceeds the capability of the network, or of the sender or receiver's buffer, the Transport Layer can split up the data into segments and send them separately.

Ordered delivery of segments. If a piece of data is broken up into multiple segments and sent separately, there is no guarantee the segments will arrive at the destination in the correct order. The Transport Layer is responsible for receiving all the segments and, if necessary, putting the data stream back together in the correct order.

Multiplexing. If a single computer is running multiple applications, the Transport Layer differentiates between them and ensures data arriving on the network is sent to the correct application.

In addition, the Transport Layer traditionally has been responsible for reliability, or guaranteed delivery of data. Not all Transport Layer protocols provide reliability mechanisms, and which Transport Layer protocol is used by a given application depends on a number of considerations. However, the majority of data traversing networks today utilizes TCP, which does indeed provide a reliability mechanism. And it is TCP's reliability mechanism that is at the heart of the LFN problem. When data is ready to be sent, TCP performs the following sequence of events:

1. TCP on the initiating computer establishes a connection with TCP on the remote computer.

2. Each computer advertises its Window Size, which is the maximum amount of data that the other computer should send before pausing to wait for an acknowledgment. The advertised window size is typically related to the size of the computer's receive buffer.

3. TCP begins transmitting the data in intervals equal to the maximum segment size, or MSS (also negotiated by the hosts). Once the amount of data transmitted equals the window size, TCP pauses and waits for an acknowledgment. TCP will not send any more data until an acknowledgment has been received.

4. If an acknowledgment is not received in a timely manner, TCP retransmits the data and once again pauses to wait for an acknowledgment.

This "send and wait" method of reliability ensures that data has been delivered, and frees applications and their developers from having to reinvent the wheel every time they want to add reliability to their applications.
However, this method lends itself to inefficiencies based on two factors: 1) how much data a computer sends before pausing, and 2) how long the computer has to wait to receive an acknowledgment. It is these two factors that are critical to understanding, and ultimately overcoming, the LFN problem. We now have enough information to understand the LFN problem. TCP is efficient on Short Skinny Networks, but not on Long Fat Networks. The longer the network (i.e. the higher the latency), the longer TCP has to sit twiddling its thumbs waiting for an acknowledgment before it can send more data. And the fatter the network (i.e. the faster a sender can serialize data onto the wire), the greater the percentage of time TCP sits idle. When you put those two together -- Longness and Fatness, or high latency and high bandwidth -- TCP can become very inefficient.

Here is an analogy. Let's say you have a coworker who talks a lot. It's not that he has a lot to say, he just talks really slowly. When you are having a conversation face to face, he can pretty much just keep talking and talking and talking. He gets near-instant acknowledgment that you heard what he said, so he can just keep talking. There is very little dead air. This is equivalent to a Short Skinny network. Now let's say your coworker becomes an astronaut and flies to Mars. He calls you on his astronaut phone to tell you about the trip, and he is really excited so he talks really fast. But the delay is really long. Since he can't see you, he decides that every 25 words he will pause and wait for you to respond before he continues speaking. Since your friend talks really fast, let's say it only takes him five seconds to spit out 25 words before pausing to wait for a response. If the round trip delay between Earth and Mars is 10 seconds, he will only be able to speak 33% of the time. The other 67% of the time the line between you and the Martian is sitting idle. It wouldn't be such a big deal if he didn't speak so fast. If it took him two minutes to speak those same 25 words instead of blurting them out in five seconds, he'd be speaking about 92% of the time. Likewise, if the round trip latency between you were lower, let's say two seconds, the utilization percentage of the line would go up as well. In this case he would speak for five seconds and then pause for two, achieving a utilization of about 71%.

Let's look at a real-world network scenario. Two computers, Computer A and Computer B, are located at two different sites that are connected by a T-3 link. The computers are connected to Gigabit Ethernet switches. The one-way latency is 70 milliseconds. Computer A initiates a data transfer to Computer B using an FTP PUT operation. The following sequence of events occurs (for the sake of simplicity, I will leave out some of the TCP optimizations that may occur in the real world):

1. Computer A initiates a TCP connection to Computer B for the data transfer.

2. Each computer advertises a window size of 16,384 bytes, and an MSS of 1,460 bytes is negotiated.

3. Computer A starts sending data to Computer B. With an MSS of 1,460 bytes and a window size of 16,384 bytes, Computer A can send 11 segments before pausing to wait for an acknowledgment from Computer B.

So how efficient is our sample network? To figure this out, we need to calculate two numbers:

1. The maximum amount of data that could be in flight on the wire at any given point in time. This is called the bandwidth-delay product.
Think of it like an oil pipeline: how much oil is contained within a one-mile stretch of pipe if the oil is flowing at 10mph and you are pumping 10 gallons per minute? (Answer: 60 gallons.) In our example, the T-3 bandwidth is 44.736Mbps (or 5.592 megabytes per second) and the delay is 70 milliseconds. So the bandwidth-delay product is 5.592 x .07, or about 0.39MB (399.36KB). This means at any given point in time, if the T-3 link is totally saturated, there is 0.39MB of data in flight on the wire in each direction.

2. The amount of data actually transmitted by Computer A before pausing to wait for an acknowledgment. In our example, Computer A sends 11 segments, each of 1,460 bytes. So Computer A can only send 1460 x 11 = 16,060 bytes (15.68KB) before having to pause and wait for an acknowledgment from Computer B.

So, if the network link could support 399.36KB in flight at any given point, but Computer A can only put 15.68KB on the wire before pausing to wait for an acknowledgment, the efficiency is only 3.9%. That means that the link is sitting idle 96.1% of the time! Do you see the problem? In some cases, TCP sacrifices performance for the sake of reliability, particularly when latency and/or bandwidth is relatively high. But is it possible to achieve both performance and reliability? Can we have our cake and eat it too? Yes, and we'll look at the options in a moment. But first, let's look at the two obvious but unrealistic solutions:

1) Decrease latency. If we could decrease the amount of time it takes for a bit to make it from one side of the network to the other, computers wouldn't have to go get a cup of coffee every time they send a TCP window's worth of data. But until someone comes out with the Quantum Wormhole router, or figures out how to increase the speed of light or bend space, you're probably stuck with the latency you've got.

2) Decrease throughput. If we turn a Fat network into a Skinny network without changing the latency or the TCP window size, it stands to reason that link utilization would go up. But I recommend thinking twice before bringing this option up with your colleagues ("What, you want less bandwidth?").

OK, now that we've got that out of the way, let's look at the real solutions:

1) TCP window scaling. One might wonder why the TCP window size field is only 16 bits long, allowing for a maximum of a 65,535-byte window. But remember that TCP's reliability mechanism was written in a day when data link bandwidth was measured in bits. Today, 10 Gigabit links are common (and 40Gbps and 100Gbps links are becoming more common). RFC 1323, titled "TCP Extensions for High Performance," was published in 1992 to address some of the performance limitations of the original TCP specification in a world of ever-increasing bandwidth. In particular, TCP Option 3, titled "Window Scaling," addressed the 65,535-byte window size limitation. Rather than increasing the window size field in the TCP header to a number larger than 16 bits (and thus rendering it incompatible with existing implementations), Option 3 introduces a value by which the TCP window size is bitwise shifted to the left. A value of 1 shifts the 16 bits to the left by 1 bit, doubling the window size. A value of 2 shifts the 16 bits to the left by 2 bits, quadrupling the window size. The maximum value of 14 shifts the 16 bits to the left by 14, increasing the window size by a factor of 2^14. Increasing the window size has the obvious benefit of allowing TCP to send more segments before pausing to wait for a response.
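To make the shift arithmetic concrete, here is a minimal sketch in plain Python; nothing is assumed beyond the RFC 1323 values just described. It prints the maximum window for a few shift counts:

# Window scaling per RFC 1323: the shift count carried in TCP option 3
# multiplies the 16-bit window field by 2**shift.
BASE_WINDOW = 65535  # largest value the 16-bit window field can hold

for shift in (0, 1, 2, 7, 14):  # 14 is the maximum shift the RFC permits
    scaled = BASE_WINDOW << shift
    print(f"shift {shift:2d}: max window = {scaled:>13,} bytes "
          f"({scaled / 2**20:7.1f} MiB)")
# shift 14 -> 1,073,725,440 bytes, just shy of 1 GiB per session

With the maximum shift of 14, a single session can advertise a window of roughly a gigabyte, more than enough to cover the bandwidth-delay product of even very long, fat links.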
However, this performance benefit comes with some risk, such as buffer issues and larger retransmits when segments are lost. Virtually every modern operating system in use today uses TCP window scaling by default, so if you're seeing small window sizes on the network, you may need to do some troubleshooting. Are there any firewalls or IPS devices on the network stripping TCP options? Are hosts scaling back the window size due to buffers filling up or excessive packet loss?

2) Multiple TCP sessions. The problem described here applies to a single TCP session only. In the earlier example, Computer A's TCP session was only utilizing 3.9% of the link's bandwidth. If 25 computers were transmitting, each using a single TCP session, a link utilization of 97.5% could be achieved. Or, if Computer A was able to open 25 TCP sessions simultaneously, the same utilization could be achieved. This will almost never be a good solution to the problem at hand, but is included here for completeness.

3) Different transport layer protocol. TCP isn't the only transport layer protocol available. TCP's unreliable cousin, the User Datagram Protocol, does not provide any guarantee of delivery, and is therefore free to consume all available resources.

4) Caching. Content caching utilizes proxies to store data closer to the client. The first client to access the data "primes" the cache, while subsequent requests for the same data are served from the local proxy. Content caching is a band-aid solution that is becoming increasingly obsolete in an age of constantly changing and dynamically created content, but it is still worth mentioning.

5) Edge computing. Paid services like Akamai decentralize content and push it to the edges of the network, as close to the clients as possible. One of the results is lower latency between clients and servers.

6) WAN optimization and acceleration. Products from companies like Riverbed and Silver Peak, or open source alternatives like TrafficSqueezer, employ various techniques such as data deduplication, compression, and dictionary templating to increase the perceived performance of a WAN link.

Earlier, I said the company in question here had too much bandwidth. Well, that wasn't really the case. Their real problem was the behavior of the protocol they were using. One of their system admins, at the direction of a software developer, had monkeyed with the configuration of their servers in an attempt to tune application performance. The application wasn't able to pick up packets from the buffer fast enough (due to a software bug), causing TCP to scale back the window size, but not before buffers were filled, packets were dropped, and retransmits were occurring. In an attempt to eliminate the retransmits, this admin had statically set the TCP window size on the servers to a relatively low value. Since TCP was being artificially constrained, it was never able to scale up to fill the available bandwidth of the data link. Today, barring misconfiguration, most networks won't run into LFN problems because TCP window scaling is widely used by default. However, with bandwidth ever on the rise, performance will most certainly become more and more of an issue because latency is fixed (unless you figure out how to bend space-time), and we are likely to see similar symptoms.
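To tie the numbers together, here is a short worked example in plain Python that redoes the T-3 arithmetic from earlier. It mirrors the article's method of multiplying bandwidth by the 70-millisecond delay; note that the textbook bandwidth-delay product uses the full round-trip time, which would make the utilization figure even lower.

# Window-limited TCP throughput on the T-3 example above.
# All constants are taken from the example; none are new assumptions.
LINK_BPS = 44.736e6   # T-3 line rate, bits per second
DELAY_S  = 0.070      # latency figure used in the example, seconds
MSS      = 1460       # negotiated maximum segment size, bytes
WINDOW   = 16384      # advertised window size, bytes

# Bandwidth-delay product: bytes in flight on a fully saturated link.
bdp_bytes = (LINK_BPS / 8) * DELAY_S      # about 391,440 bytes

# Only whole segments fit in the window: 16384 // 1460 = 11 segments.
bytes_per_window = (WINDOW // MSS) * MSS  # 16,060 bytes

efficiency = bytes_per_window / bdp_bytes
print(f"bandwidth-delay product: {bdp_bytes / 1e6:.2f} MB")
print(f"data sent per window:    {bytes_per_window:,} bytes")
print(f"link utilization:        {efficiency:.1%}")  # about 4%

The script prints a utilization of about 4%, matching the article's 3.9% figure up to the KB rounding used in the prose. Raising WINDOW to the bandwidth-delay product or beyond, via window scaling, drives the utilization toward 100%, which is exactly why removing the artificially low static window setting let TCP fill the link.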
Heder, CCIE No. 24788, is a network architect with NES Associates in Alexandria, Va., specializing in large-scale network design. Heder holds a master's degree with a concentration in network architecture and design, and has a patent filed for an IPv6 technology. He can be reached at This story, "Are your pipes too big?" was originally published by Network World.
10 and 100 nanometers (small enough to slip through a surgical mask)—challenging researchers because of the difficulty of integrating these small spatial scales. However, once Reisner's team accounted for these "nano-droplets," they found that the previous structural mischaracterization of clouds caused a false rendering of temperature conditions. The discovery led to a vital correction to the temperatures in the hurricane's eye wall, the area in the storm where the most damaging winds and rainfall are located. Reisner's two developments—proper cloud representation and lightning predictive models—together help demystify hurricane intensity, which may lead to more accurate predictions that help the public prepare for potential devastation. —Kirsten Fox

Agricultural Alchemy

Susan Hanson wants to convert grass into gold. Stumps or stalks or weeds in a field can be burned to generate energy—emitting harmful byproducts—but Hanson and others in her field have a more elegant solution in mind: turning agricultural and forestry waste into valuable chemicals and fuel. The sources are abundant, and this form of chemical production would not compete with demands for food, fertilizer, or water, since it uses only waste products rather than requiring additional agricultural production. But the nation's energy problems are not easily solved, and where the enormous hurdle of efficiently extracting the energy from plant matter has stumped others, this Los Alamos chemist is getting to the root of the matter: the lignin.

Los Alamos scientists are improving methods to convert agricultural waste into fuels and other valuable chemicals.

Plant cellular walls are primarily composed of cellulose, hemicellulose, and lignin. Comprising nearly 30 percent of the biomass on Earth, lignin conducts water in stems, provides the mechanical support for the plant, and strengthens cell walls. One gram of lignin contains about 2.27 kilojoules of energy—comparable to coal and 30 percent more than cellulose alone. But just as this woody material "glues" the cell wall together to protect plants from pests and pathogens, it protects them from researchers as well. An efficient method is needed by which researchers can break apart the complex polymer's strong bonds and degrade the plant into energy-rich simple sugars. Most research has focused on pretreating lignin with environmentally and economically unfriendly solvents, heat, or high pressure to rupture the lignin bonds. But Hanson and Los Alamos colleagues Pete Silks and Ruilian Wu found a "green" catalyst, which enables a desired chemical reaction without being consumed by it, to break down lignin into smaller chemical components. These components could potentially be used to produce alcohols, waxes, surfactants (for detergents and other applications), and fuels. Historically, precious metals such as platinum have been the basis for most catalysts, but Hanson focuses on vanadium. Found in many minerals and marine organisms, and often collocated with iron ores and petroleum, vanadium is an earth-abundant metal that is not toxic in the small amounts used in catalysis. The reaction proceeds in air at atmospheric pressure with only mild heat, making vanadium easier to use than most other metals, which are sensitive to air and damaged by oxidation. Hanson's team designed and synthesized its catalysts by combining vanadium with other components.
The Los Alamos work demonstrates that a lignin model compound may be broken down selectively into useful components using a vanadium catalyst. The strong carbon-carbon and carbon-oxygen bonds are cleaved via oxidation that breaks the lignin into smaller, usable pieces. And the only byproduct—besides the desirable, energy-rich sugars—is water. —Kirsten Fox
Why All Calories Are Not Created Equal I get asked a lot of nutrition questions, and one of the most common topics that comes up is the subject of calories. There is so much information out there and so many misconceptions. Consequently, I wanted to bring up this very important topic and offer up the truth that all calories are NOT created equal. I know this flies in the face of some "conventional wisdom" that many nutritionists, doctors, and personal trainers have been spreading for decades. I'm sure you've heard it before... "Eat less calories to lose weight." It sounds logical, right? We've all heard this, and it makes sense that substituting lower-calorie foods for higher-calorie options is the key to losing weight. In the 1980s, the processed food industry went into a massive and unfortunate PR campaign against fat in foods, all while secretly substituting it with more and more sugar. Because fat has twice as many calories as sugar, the assumption was that cutting calories should start with cutting fat. But if this works, then why are so many people still overweight? The truth is that counting calories alone has many flaws, but the main one is that it does NOT account for the hormonal effects that certain kinds of foods have on your body. You know this instinctively from eating a hearty salad compared with eating a cupcake. You feel different, and your body reacts differently to these foods. Even if the calorie counts are the same, the math does not account for the hormonal, mental and emotional effects those calories have on you. Calories that Store Fat The major problem I usually see is that most people who try low-calorie diets end up eating foods that trigger a cascade of fat-storing hormones. And unfortunately, many of the commonly recommended "low calorie" health foods are exactly the ones that set off this cascade of fat-storing hormones. For example, imagine having to choose between a 250-calorie energy bar and a 250-calorie avocado. Yes, they both have 250 calories, but the difference is what happens to you hormonally. When you eat the high-sugar/high-carb energy bar, your blood sugar rises and your pancreas secretes insulin in response to the elevated blood sugar. Insulin is a "building" hormone, which means that it helps to move sugar from the food you've eaten into your cells to be used for energy, and it also stores extra sugar (energy) as fat. The problem is most people eat these carbs in excess and at a time of the day when their muscles do not need them, so they are stored as fat. The monounsaturated fat from the avocado, on the other hand, inhibits insulin, leading to a more sustained energy release and a feeling of fullness. But that's not all: the avocado actually sends signals to your body to burn body fat. So even though both foods have the same amount of calories, they do not have the same effect on your body. Do Calories Even Matter? Yes, in the big picture they do. You cannot over-consume calories without gaining weight, but I would dare say that the hormonal effect of the foods you choose to eat is just as important. To optimize your hormonal response to food, eat lots of vegetables and plenty of protein from organic sources, whenever possible. Include healthy fats like olive oil and avocado. Reserve "starchy" carb intake for the hour immediately after your hardest workouts, and maybe have one weekly cheat meal. Calories that Increase Cravings Your body craves whatever it eats most regularly.
It's a survival mechanism: our bodies learn to use whatever food has "worked" in the past to fuel us. A 2011 study of low-carb and low-fat dieters found that participants' cravings diminished for whatever they restricted. In the study, participants who were on a low-carb diet had fewer cravings for high-sugar and high-carbohydrate foods.

Now I know what you're thinking. You've tried carb-restriction diets and actually craved carbs, right? You craved them so badly you could smell a breadcrumb from half a mile away. I know the feeling, but that's only the withdrawal phase; I promise it gets easier afterward. Eating healthier calories will re-teach your body what's best for it. Flipping the switch may be hard initially, as you will have withdrawals, but stay focused and remember that after a week or so of craving carbs like mad, your body will adjust and start to crave them less. As you train your mind to do what you know is best, your body will stop feeling compelled to eat calories that are not positively "feeding" into its own health and well-being.

So, What Calories Should I Be Eating?

Seek calories from protein and fiber, which slow down digestion and help you stay full longer. Fats are also satisfying and help trigger the "I feel full" response. Put together, incorporating lean protein, fiber, and healthy fats should keep you feeling full throughout the day.

There are refined carbs, sometimes called fast or quick-release carbs, and there are slow carbs, or slow-release carbs. Slow carbs take a lot longer to break down into sugar, and thus don't cause the rapid rise and fall in blood sugar that quick-release carbs do. You've heard of the sugar crash, I'm sure; the carbs that increase energy are slow carbs, which come from green leafy vegetables as well as starchy veggies like sweet potatoes. Other good slow carbs are whole grains, wild rice, quinoa, and steel-cut oatmeal. Beans and legumes are another great slow carb. All of these foods help create longer-lasting energy. Because they are released slowly in the body, they give you a balanced infusion of energy over a longer period of time, so there's no crash at the end. And if you're worried about certain fruits, don't get too caught up on avoiding tropical fruits and citrus fruits that some say have a higher glycemic index. If you're eating fruit whole, its fiber helps you digest it a lot more slowly than if it were juice.

Action Steps

1. Stay full with protein-rich meals that add fiber and healthy fats.
2. Avoid cravings by making sure that healthy whole foods make up 80% of what you consume each day.
3. Incorporate hormone-activating foods like coconut oil, olive oil, and avocado, along with a healthy dose of vegetables, to optimize your hormonal response.
4. Reserve "starchy" slow-carb intake and "sugary" fast-carb intake for the hour before and the hour immediately after your hardest workouts.

Readers - What are your thoughts on calories? Do you know how many calories you consume each day? Do you track your calories on LIVESTRONG's free MyPlate calorie tracker app? Why or why not? What foods have you given up because of the effect they have on your body? Have you tried coconut oil, avocado, olive oil, or other "hormone-activating foods" in your diet? Do you find that they have a positive effect on your body?

Celebrity Fitness & Motivation expert Brett Hoebel was a trainer on NBC's The Biggest Loser, health expert on Food Network's Fat Chef, and judge on Fit or Flop: America's Next Fitness Star.
Brett is the creator of the 20 Minute Body™ and RevAbs® from Beachbody, and frequently blogs for LIVESTRONG.COM, US News, Eleven By Venus, and other prominent media outlets. He regularly appears on TV shows like Dr. Oz and The Talk to discuss topics such as weight loss, bullying, and emotional obstacles, and contributes to national publications like SELF and Fitness Magazine.
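As a back-of-the-envelope illustration of the calorie arithmetic in the post above, here is a minimal sketch using the standard Atwater factors (roughly 4 kcal per gram for protein and carbohydrate, 9 kcal per gram for fat). These factors show why fat carries more than twice the calories per gram of sugar, and why two 250-calorie foods can have very different compositions. The macro breakdowns for the two example foods are hypothetical round numbers, not figures from the post.

```python
# Back-of-the-envelope calorie math using the standard Atwater factors.
# The macro breakdowns for the two example foods are hypothetical.

ATWATER = {"protein": 4.0, "carbs": 4.0, "fat": 9.0}  # kcal per gram

def calories(protein_g: float, carbs_g: float, fat_g: float) -> float:
    """Total kcal from grams of each macronutrient."""
    return (protein_g * ATWATER["protein"]
            + carbs_g * ATWATER["carbs"]
            + fat_g * ATWATER["fat"])

# Two foods can land near 250 kcal with very different compositions:
energy_bar = calories(protein_g=8, carbs_g=45, fat_g=4)   # mostly carbohydrate
avocado    = calories(protein_g=3, carbs_g=13, fat_g=21)  # mostly fat

print(f"Energy bar: {energy_bar:.0f} kcal")  # ~248 kcal
print(f"Avocado:    {avocado:.0f} kcal")     # ~253 kcal
```

Run as-is, it prints two totals within a few kcal of 250, even though the bar is mostly carbohydrate and the avocado mostly fat, which is the post's point about equal calories not being equal foods.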
Glimpses of Soliton Theory: The Algebra and Geometry of Nonlinear PDEs

Alex Kasman
American Mathematical Society, Student Mathematical Library 54

[Reviewed by William J. Satzer]

Solitons are explicit solutions to nonlinear partial differential equations. They are waves that behave in many respects like particles. The founding story of soliton theory, repeated so often it is now almost indistinguishable from myth, tells of John Scott Russell and his observation in 1834 of a peculiar solitary wave in a canal near Edinburgh. He followed this wave on horseback as it kept its speed and shape for a mile or two until he lost it. The response to this amazing discovery was … not much, mostly scoffing, because everyone thought that such a wave would disperse and distort and could not propagate.

In 1895 Korteweg and de Vries modeled water waves in a canal, derived the KdV equation named after them, and found a number of wave-like solutions that travel and maintain their shape. But not even they seemed particularly interested in what they found. It wasn't until the twentieth century and computational work by Fermi-Pasta-Ulam and later Kruskal-Zabusky that the soliton got a name and some respect.

It's an odd thing. We can write down many explicit exact solutions of the nonlinear KdV equation. Why is this possible when we usually can't find even one explicit solution for most nonlinear partial differential equations? Moreover, an n-soliton solution to the KdV equation (with n peaks) bears an unusually close relationship to n individual one-soliton solutions: it looks almost — but not quite — like a linear combination of them. Is there a geometric structure analogous to the vector spaces we see with solutions of ordinary differential equations?

This book explores the ramifications of these questions for advanced undergraduates who have had basic calculus and linear algebra. It's very challenging material for undergraduates, but it presents an exciting opportunity too. As a capstone course, or for independent study, soliton theory ties together several important applications to science and engineering with an extraordinary range of mathematical topics from PDEs to elliptic curves, differential algebras, and Grassmannians. The author doesn't expect to bring students to the research frontiers with his book; his aim is rather to provide a "glimpse" that intrigues and engages. Partial differential equations and algebraic geometry meet in a most remarkable and unexpected way.

After an introductory review of differential equations that emphasizes the differences between linear and nonlinear equations, the author tells the story of solitons. He begins with John Scott Russell and continues to twentieth-century developments and applications. These include examples in telecommunications (where solitons travel down optical fibers) and biology (where solitons play a role in DNA transcription and energy transfer).

The book's real work begins with an examination of Korteweg and de Vries' solution to the KdV equation. We see that the general solution they find can be written in terms of a Weierstrass ℘-function. The connection to elliptic curves and algebraic geometry begins here. The author makes extensive use of Mathematica throughout the book; in particular, that program is used to introduce the Weierstrass ℘-function without requiring the background in complex analysis that would otherwise be necessary.
(It is also used throughout for a variety of straightforward, if messy, calculations and for animations of wave dynamics.) This innovative use of Mathematica works well here, where the object is to offer glimpses of a broad and subtle theory. It does, however, tie the book to the software — it would be unsatisfying to read the book without having access to Mathematica, and difficult to derive full benefit from it.

After dipping into algebraic geometry, the author goes on to discuss the n-soliton version of the KdV equation and its solutions that look asymptotically like linear combinations of solutions to one-soliton equations. The next few chapters try to explain the special nature of the KdV equation, and along the way discuss the algebra of differential operators, isospectral matrices, and the Lax form for KdV and other soliton equations. It's only with an additional spatial dimension and analysis of the corresponding generalization of the KdV equation (the KP equation) that the picture gets a little clearer. We then finally get a glimmer of the geometry of the solution space, and a way to describe it using the Grassmann cone.

This book challenges and intrigues from beginning to end. It would be a treat to use for a capstone course or senior seminar.
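For readers who want to see the object of all this attention, the KdV equation and its one-soliton solution fit on two lines. The normalization below is one common convention from the general literature, and not necessarily the one Kasman adopts:

```latex
% The KdV equation in one common normalization
\[
  u_t + 6\,u\,u_x + u_{xxx} = 0
\]
% Its one-soliton solution, travelling at speed c > 0,
% with a an arbitrary phase shift:
\[
  u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - ct - a)\right)
\]
```

Because the speed c also fixes the amplitude c/2, taller solitons travel faster; this coupling of amplitude to speed is exactly why an n-soliton solution can only look almost, but not quite, like a linear combination of one-soliton solutions.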
QWERTY keyboard

noun [countable]
Pronunciation (British English): /ˌkwɜː(r)ti ˈkiːbɔː(r)d/
Word forms: singular QWERTY keyboard; plural QWERTY keyboards

1. the normal computer keyboard that is used for typing in the English language. Its top row of letters starts on the left with q, w, e, r, t, and y.
How and with what success did the New Deal solve the problems facing the USA in the 1930s?

Extracts from this document...

The New Deal was very successful in pulling America out of the problems the Americans were facing. It seemed to start solving the problems very quickly, and the people started to gain confidence again. With all of the New Deal laws, money started to circulate in the economic system again. That is what it was like in the beginning. But then the New Deal stopped solving further problems, especially as most of the people were still being paid very little.

There were a lot of major problems in America at the time. Because of the crash, lots of people became unemployed. Up to 5,000 banks had been forced to close down because people had borrowed money but were then unable to repay it. Lots of people and companies went bankrupt, and prices of products kept dropping. This led to widespread homelessness and starvation. ...

From about 1933 until 1936 all of the policies were successful. But after a year the government started to spend less money on the projects and production fell again. Four million people were taken off the bread line and put into jobs. People started to work, although the wages were very low: only one dollar a day. Within three years, eight million Americans were on a public project. The alphabet agency called the WPA managed to build 116,000 public buildings, 78,000 bridges and 650,000 miles of road. In the short term this seemed very good and helped a lot of Americans, since they had no work in the first place. In the beginning, Americans were able to live on only one dollar a day, but getting such a low wage for more than a few years was very problematic and didn't work. By the beginning of 1933 there were roughly 14 million people unemployed. ...

Doctor Francis Townsend had his own specific ideas. His idea was to give everyone aged 60 and above a pension of 200 dollars a month, provided that they spent the money in the same month and gave up their jobs. Doctor Francis Townsend believed that this would create a lot of jobs for young people and put money back into the economic system.

The New Deal was partially very successful. Its policies helped a lot to revive the economy in America. From 1932 until 1936 it managed to stabilize the banks, put people into work, and help other specific groups of people. The number of unemployed people and the amount of homelessness dropped a lot. But the work was very badly paid, so people weren't able to live off the low wages for a long time. The New Deal didn't pull America out of the Depression, but it helped America to recover over a very long time. ...
The resistance of a piece of wire is dependent on its temperature, length, cross-sectional area and the type of metal the wire is made of.

Aashini Patel
Physics coursework

Visit summary report

For our physics coursework we visited Wembley Arena on 20th January 2009 to watch a cultural event performed by a few Bollywood film stars. Wembley Arena is a unique and special hall in London, where a great number of events are hosted quite often. On the day of our visit, once we had settled in to watch the cultural show, I was amazed by the decorations and the thousands of dazzling lights of the arena. The show had the most impressive selection of music; its sounds were heard all the way through the show. As I had it in mind to look for real applications of physics in our daily lives, I observed how the beautiful lighting and sound were being controlled. The knowledge I have gained so far suggests that the lighting and sound were being controlled by resistance in the electrical circuits involved.

What is resistance?

Electricity is conducted through a conductor, in this case a wire, by means of free electrons. The number of free electrons depends on the material, and more free electrons means a better conductor, i.e. it has less resistance. For example, gold has more free electrons than iron and, as a result, it is a better conductor. The free electrons are given energy and as a result move and collide with neighbouring free electrons. This happens across the length of the wire and thus electricity is conducted. Resistance is the result of energy loss as heat. It involves collisions between the free electrons and the fixed particles of the metal, other free electrons and impurities. These collisions convert some of the energy that the free electrons are carrying into heat.

Resistance is usually given the symbol 'R'. The unit for electrical resistance is the ohm. Ohm's law states that the voltage drop (V) across a resistor is proportional to the current (I) running through it. The resistance of a given wire can be calculated using the following equation:

R = ρL / A
where: L = length (m), A = cross-sectional area (m²), ρ = resistivity of the metal

By rearranging the equation, the resistivity of the metal can be calculated:

ρ = RA / L

The resistivity differs depending on the metal; however, it is constant at room temperature for each metal. This means that two pieces of wire made of the same metal at room temperature should give the same result when calculating resistivity, regardless of their length and cross-sectional area. The following equation can be used to calculate the resistance of a wire:

R = V / I
where: V = voltage (volts), I = current (amps), R = resistance (ohms)

The factors affecting resistance (see the code sketch at the end of this write-up):

• Temperature of the wire: The higher the temperature of the wire, the higher the resistance. This is because the ions gain more vibrational energy and vibrate faster. The electrons are then knocked off course even more because the ions interrupt their path.
• Length of wire: Resistance is directly proportional to the length of the wire. Direct proportionality is a relationship in which one variable goes up if the other goes up, and down if the other goes down.
The longer the wire, the greater the number of ions for the electrons to collide with. This means that it will take longer for the electricity to pass.

• Cross-sectional area of the wire: A thick wire presents less resistance to the flow of electrons than a thin wire of the same material. The larger the cross-section, the lower the resistance. Resistance is inversely proportional to the cross-sectional area of the wire (a relationship in which one variable goes up as the other goes down), so if the cross-sectional area is doubled, the resistance is halved.
• Material of the wire: Metals are the ideal conductors of electricity because they have free electrons that help the flow of current. The denser the material, the more atoms per unit of volume, so the number of collisions increases. Insulators like plastic or wood have such high resistance that they stop the current altogether.

Economic and societal effects

Resistance occurs whenever we use electricity, so it is important to know the effects of consuming excessive electricity. One of the societal impacts is global warming, which has been increasing in recent years due to rising emissions of carbon dioxide and other gases from the burning of fossil fuels. This also results in pollution and acid rain, which damage our environment and destroy our ecosystems and natural resources. Sea levels and rainfall can be affected as well. Moreover, in terms of health, the products released from the combustion of fossil fuels are extremely dangerous and can cause respiratory and cardiac diseases, among other conditions. As for the economy, people will start having money problems: not only will the prices for electricity increase, but salaries have been lowered, because our natural resources are running out as we use them excessively.

Variable factors

The factor I am going to vary is the length of the constantan wire; by adjusting the variable resistor to keep the voltage at a fixed value, I shall measure the corresponding current for different lengths such as 0 cm, 10 cm, 20 cm, 30 cm, 40 cm, 50 cm, 60 cm, 70 cm, 80 cm and 90 cm. The factors that I am going to keep constant are: the thickness of the wire, the wire itself, the temperature, and the set-up of the circuit.

Apparatus:
• constantan wire
• power pack
• metre ruler
• crocodile clips
• sellotape
• connection leads
• ammeter
• voltmeter
• variable resistor

Circuit diagram

I started the experiment by setting up the circuit as shown above. I had to be careful in connecting the circuit, because the voltmeter had to be placed in parallel and the ammeter in series. The constantan wire was cut to just over 100 cm so the crocodile clips could attach onto the wire, making the results more accurate. I stretched out the wire and sellotaped it to the ruler. I did this so I would not need to cut the wire every time; all I have to do is move one of the crocodile clips to another length. The power supply is then switched on. I will then record the reading of the ammeter and put the results in a table. After this I will adjust the variable resistor to 3 volts, which will show up on the voltmeter, and record the reading of the ammeter, repeating this for each length.

The safety precautions that need to be taken into consideration are: handle the power supply carefully.
Be careful when touching the wire: it may be hot, and it might even burn if the voltage is too high. Do not carry out the experiment in wet areas, as water is a very good conductor of electricity, which could be dangerous if it comes into contact with the circuit.

Fair testing

To ensure that I conduct a fair test I will make sure the experiment is done at least twice, to obtain more reliable and accurate results. Wire measurements are kept the same and the equipment is in good working order. I will also ensure that the thickness of the wire stays constant for each length and current. The current passing through the wire will not change until all the lengths of wire have been tested and the voltages recorded; then I will increase or decrease the current to the desired voltage. After each run I will let the wire cool down before I start the next experiment. To improve accuracy, I will ensure that the use of the ruler, wires, etc. remains the same, and I will record the readings from the ammeter and voltmeter to 3 decimal places. I will observe the resistance of the circuit along with the current and voltage, check whether the thickness has any effect on resistance and length, and monitor the temperature to make sure the heating effect does not change any readings.
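As promised above, here is a minimal sketch in Python of the two formulas from this write-up, R = ρL/A and R = V/I. The resistivity figure for constantan and the wire dimensions are illustrative assumptions, not measurements from the coursework:

```python
import math

# Approximate resistivity of constantan in ohm-metres (assumed textbook value).
RHO_CONSTANTAN = 4.9e-7

def resistance_from_geometry(length_m, diameter_m, resistivity=RHO_CONSTANTAN):
    """R = rho * L / A, where A is the circular cross-sectional area."""
    area = math.pi * (diameter_m / 2) ** 2
    return resistivity * length_m / area

def resistance_from_measurement(volts, amps):
    """Ohm's law rearranged: R = V / I."""
    return volts / amps

# Hypothetical example: 50 cm of 0.4 mm-diameter constantan wire.
print(f"Predicted R for 50 cm:  {resistance_from_geometry(0.50, 0.4e-3):.2f} ohms")
# Doubling the length doubles R, matching the proportionality claim above...
print(f"Predicted R for 100 cm: {resistance_from_geometry(1.00, 0.4e-3):.2f} ohms")
# ...and a measured 3.0 V at 1.5 A gives R = 2 ohms by Ohm's law.
print(f"Measured R: {resistance_from_measurement(3.0, 1.5):.2f} ohms")
```

Doubling the length doubles the predicted resistance, while doubling the cross-sectional area would halve it, which is exactly the behaviour the experiment sets out to confirm.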
Why Most People Never Bounce Back from Failure

The Surprising Difference Between Millionaires and the Middle Class

I'm currently running my fourth business. The first one failed miserably. So did the second one. And the third. So, why then is this one so successful? It's, in part, because I knew how to come back… and what to do next.

The Problem with Failure

When you fall flat on your face, instinct tells you to avoid whatever it was that led to that pain. That makes sense, of course. But we should do the exact opposite. We should try it again. Studies show us that this is exactly what successful people do.

One study, done by Lewis Schiff, author of the book Business Brilliant, shows that self-made millionaires have had at least three significant business failures in their lives. Sounds familiar to me. One-fifth of these successful people reported at least six such failures. The middle class, on the other hand, reported fewer than two. They've experienced failure in business as little as once, or possibly never. Why is this? The answer is simple: successful people fail and try again. Unsuccessful people fail and give up… or never dare to risk failure in the first place.

The Reason You're Afraid to Fail

Two weeks ago, I wrote this about failure:

The real reason people are afraid to fail is that they think they are going to do it again. They think it will become a habit.

So, our very nature is screaming in the aftermath of failure to do whatever we can to avoid it again. Ten thousand years ago, many mistakes were fatal. They meant the village didn't eat for a week. Or that you had your head bitten off by a ferocious beast. Today, at least in most parts of the world, failure is rarely fatal for us. But those instincts still reign. We're afraid that if we fail once, we will fail again, possibly in the same way. So, why then would we try the same thing again?

Why Try, Try Again Actually Works

If we fail in a certain business, why would we try that same business again? That is exactly what I have done. Each business that has failed has been similar in nature. So, why is this one suddenly successful? The answer lies in looking at how a child learns. Here's what I wrote last week:

This concept comes naturally to a child but we lose it over time. Less than a decade after learning to tie our shoes, failure suddenly becomes something to be avoided, something that hurts, something that we no longer learn from.

Think of how a child operates as he learns to do something for the first time. He tries it one way and fails, so he tries it a different way. He forgets step two, remembers step four, does it backwards, sideways, and upside down, but each time he builds on his past failures until one day, it's second nature. He doesn't get halfway through learning to tie his shoes, fail at it, and decide that isn't what he wants to do. He doesn't then decide he'll never learn to tie his shoes, but rather will learn to ride a bike instead. No, he takes what he has learned about tying shoes, and each time he starts over, he is at a new level. He doesn't start over from scratch each time. He builds on what he's already learned, mostly from failure.

The Importance of Failure

One of the key differentiators between the middle class and the wealthy, Schiff points out, is their view of the importance of failure. According to his study, 8 out of 10 millionaires say that failure is important to becoming wealthy. Only 2 out of 10 in the middle class agree.
And that belief is evident in how they respond to failure. Rather than persist, most middle-class respondents give up. They either "give up to focus on other projects" or "try again, but in a different field." The millionaires, on the other hand, start right where they left off. That might seem stubborn, but it's actually incredibly smart.

Schiff tells the story of Steve Dering, a self-made millionaire who literally failed his way to success. "If you don't go back and try the same thing after two or three failures," he explained, "then you don't get any of the benefits of learning from what went wrong."

Successful people don't just bounce back from failure. They don't just "get over it" and move on. That's a great start, but it's not the key to coming back from failure. Truly successful people use failure. They embrace failure. They lean into it. And when they do, they come out stronger, more knowledgeable, and better equipped for success.

Question: What can you learn from a recent failure to help you get to the next level?
Ask a Business Law & Ethics Expert

problem 1: Social security insures a person against economic distress resulting from various contingencies and assures him a minimum standard of living consistent with the nation's capacity to pay. Elaborate.

problem 2: What are the constitutional provisions that speak to the concept of social security?

problem 3: Trace the evolution of social security legislation in developing countries.

problem 4: Describe the need for social insurance and social assistance schemes for the well-being of industrial workers. Differentiate between social insurance and social assistance.

problem 5: Discuss International Labour Organization (ILO) conventions on the subject of social security. How far have they been implemented in developing countries?

problem 6: Describe the liability of the employer for the payment of compensation under the Workmen's Compensation Act, with the assistance of decided cases.

problem 7: Describe the salient features of the Maternity Benefit Act, 1961.

problem 8: Describe the salient features of the Employees Provident Fund Act, and describe the retirement benefits under the Act.

problem 9: What are the recommendations of the Second National Commission on Labour (NCL) for providing social security to the unorganized sector?
a) Unemployment insurance
b) Occupational disease
c) E.S.I. courts
d) Labour welfare

Category: Business Law & Ethics • Reference No.: M95426
Liberal Studies Division VI - Foundations of Visual and Performing Arts Students take a minimum of three credits from this division. Students completing these courses will be able to identify the forms of artistic expression (e.g., forms of music, dance, painting, sculpture, etc.) in relation to a historical and cultural context; they will also be able to recognize and articulate the reasons why these forms of artistic expression developed and evolved in the manner they did. Further, students will be able to demonstrate and articulate an understanding of the principles behind the evolution of judgment and taste. All courses are four credits unless otherwise noted.
Tough days when dad drove bullocks

By Christine McKay

SMOKO TIME: Taking a break.

Hardworking bullock teams were not just a feature of coastal Tararua for Hawke's Bay's Charlie Anderton - they are a proud part of his family history, as his father, Arthur, worked with the teams in the early 1900s.

Arthur Anderton was born in 1883 and died in 1955, aged 72, but his son, Charlie, can remember some of those very early days of the bullock teams. Charlie, 96, who now lives at Sommerset in Greenmeadows, was born in 1920 at Kereru, where his father's job was to load the bullock wagons and take the logs to the mill.

"The bullock teams were an important part of my father's life," he said. "Fortunately I've inherited some very precious family photos from those early days in the 1900s and they tell the story of a time not many can remember now.

"Prior to 1912, when my father was single, he was taking timber down to what was then the Pukehou Railway Station, using the bullocks. That was not long after he left school. He then went to Kereru Mill and had two or four bullocks, using them to drag the logs around the stumps of felled trees to the skids. A wire rope hauler would then drag the logs to the mill.

"My one strong memory was the sound of the hooter going for lunch. It frightened the life out of me."

An unusual occurrence was the shoeing of the hooves of the white bullocks in the teams, and Charlie believes a family photo of his father shoeing the bullocks is rare. "The hooves of white bullocks were softer than those of the black bullocks, and if they were hauling loads on the road they were shoed."

Bullocks were also part of Charlie's grandparents' early farming days on the original Te Aute Station. "In the early days at the station my grandfather worked with bullocks, which would get bogged and had to be dragged out by the rest of the team. Despite this, bullocks were outstanding working in wet, boggy conditions, but as the land was drained and became drier, it was a more appropriate environment for horses.

"Although bullocks were the original workhorses, they were too slow compared to horse teams."

After leaving Kereru Mill, Arthur Anderton purchased land at Otane, where he farmed. Charlie took over the farm after the war and purchased more land near Waipawa. In 1960 he sold both blocks and purchased Carlyon Station on Farm Rd, 10 minutes east of Waipukurau. In 1980 Charlie sold Carlyon and bought Mangatarata Station, east of Waipukurau, in partnership with his son-in-law, Don Macdonald. Eventually, daughter Judy and son Donald bought Charlie out, and he and his wife, Isa, retired to Waipukurau. The couple now live at Sommerset in Greenmeadows and in June celebrated their 73rd wedding anniversary.

- Hawkes Bay Today
A royal colour, found in a fluid derived from sea creatures on the Mediterranean coast of Palestine. Purple‐dyed cloth was used for priestly vestments (Exod. 25: 4) and a purple cloak was mockingly thrown round the ‘King of the Jews’ (John 19: 2, 5) before Jesus' crucifixion. Paul encountered a seller of purple at Thyatira (Acts 16: 14).
Definition of abrasiveness noun from the Oxford Advanced Learner's Dictionary

BrE /əˈbreɪsɪvnəs/; NAmE /əˈbreɪsɪvnəs/

1. a rough quality in a substance that can be used to clean a surface or make it smooth: The abrasiveness of the diamonds made them particularly useful for industrial use.
2. the quality of being rude and unkind in a way that may hurt other people's feelings: His abrasiveness frequently annoyed his teammates.