Chagall at DMA: Sublimity from the stage Chagall: Beyond Color, on view at the Dallas Museum of Art through May 26, opened Feb. 17. The exhibition presents works infused with such mastery of image, form and theme that the effect is a continual astonishment. And, characteristic of Marc Chagall, the art calls the viewer back to the center of one’s own spirit and heart. A major figure in 20th-century art, Chagall was born in 1887 near Vitebsk, then in the Russian Empire, today a city in Belarus. Chagall’s work is celebrated for its soft radiance, its tenderness, its gravitas evocative of myth and sacred text – all expressed in joyous color. Chagall enjoyed the golden age of Modernism in Paris and the creative ferment of post-revolutionary Russia before a lengthy stay with other wartime exiles in North America and an eventual return to France. The centerpiece of the exhibition is a display of costumes and other items designed by Chagall in 1942 for a production of the ballet Aleko. The project united Chagall with choreographer Leonide Massine. The ballet was set to the music of Pyotr Ilyich Tchaikovsky’s Piano Trio in A Minor. The story of Aleko comes from The Gypsies, a narrative poem by Aleksandr Pushkin. “A lot of people know the name Chagall and have an idea about Chagall. The idea is that Chagall is painting and he is dealing mainly with color,” said Olivier Meslay, Associate Director of Curatorial Affairs, at the beginning of one of the tours offered Sunday. “What we wanted to do with this exhibition was to look through the career of Chagall … through another lens … the idea of volume, many sorts of volume.” In addition to the pieces from Aleko, the exhibition presents ceramics, sculpture and collages – as well as paintings and drawings. 
All of Chagall’s creative periods are present: the early years dominated by Jewish and Russian themes, the artist’s exuberant first years in France, his sojourn in the Soviet avant-garde, exile in the United States and finally, Chagall’s later years in Provence. The 1942 production of Aleko premiered in Mexico City and was then performed in New York. The costumes had not been seen in the United States again until now. Pushkin’s tale is set in the Russian southern frontier, which historically occupies a space in Russian culture much like the western frontier in the United States. Russians have viewed the steppes and mountains of the south as a wild, lawless landscape of freedom and danger. For centuries, young Russian men went south to find their fortune – or doom. Aleko, the main character, is within this tradition. A typically disaffected young man from the city, Aleko journeys to the south and finds love, then betrayal, with a young gypsy woman. Tchaikovsky’s piano trio provides an appropriate narrative soundtrack of brooding, triumph, tragedy and pathos. The Dallas exhibition includes, in addition to the costumes designed by Chagall, video from the 1942 production, as well as studies for various other production elements. Particularly noteworthy are the beautiful studies for the four huge backdrops used in the ballet’s four scenes, including Fantasy of St. Petersburg. The costumes, like Chagall’s other work, suggest playfulness and movement. Many of the colors are pale or bright, typical of the Scythian-influenced style Russians associated with the south, as well as the indigenous traditions of the Mexican interior where Chagall was working. Chagall borrowed a little from indigenous Mexican graphic style as well. Other sections of the exhibition include studies for Russian Jewish theatre productions, paintings from Chagall’s much-celebrated 1930s in Paris and costume designs for a 1945 production of Igor Stravinsky’s The Firebird. 
Of all the pieces on display, the most familiar to casual observers would probably be the oil painting Between Darkness and Night, an expression of war-related foreboding begun in France in 1938 and completed in the United States in 1943. Another widely known work present is also an oil painting, The Nude Above Vitebsk. The exhibition also features a generous representation of ceramics and sculpture made by Chagall during the period when – after years of travel, exile and the nightmare of his first wife’s unexpected death – Chagall cultivated a grounded and centered sensibility in Provence. The sculpture and ceramics offer a magnificent gift – Chagall’s familiar themes and styles manifested in unfamiliar media. Viewers end their experience of the exhibition with a look at studies Chagall made for his painting of the Paris Opera ceiling, as well as collages from the 1960s.
Music review: Marino Formenti tackles the 'Diabelli' Variations After 19 years, finally a boo. Philharmonic Society president and artistic director Dean Corey reacted with delight during intermission of Marino Formenti’s recital Saturday night at the Renée and Henry Segerstrom Concert Hall. The booing was in response to the U.S. premiere of Evan Gardner’s “Variations on a Theme by John Cage.” The feisty Italian pianist chose this piece for piano and live electronics as prelude to his astonishingly visceral performance of Beethoven’s “Diabelli” Variations. Known as a rivetingly physical, virtuosic and now and then wayward specialist in new music, Formenti was Corey’s offbeat choice to participate in the society’s ongoing survey of Beethoven’s most audacious late chamber music. The “Diabelli” -- 33 formidable variations lasting nearly 45 minutes -- was not in Formenti’s repertory. Corey’s terms were that if he learned the variations, the pianist could program anything else he wanted, with the expectation of reminding us that Beethoven was once avant-garde too. Formenti began the concert with the Modernist British composer George Benjamin’s “Shadowlines,” crystalline miniatures played with beautiful delicacy and flickering immediacy. For Gardner’s new piece, Formenti put on special electronic sensor gloves, which he waved in the air to create feedback on a loudspeaker. Beethoven was reserved for the second half. “While the cars and planes drive faster, the air gets dirtier, and the stock markets are on a roller coaster,” he wrote, “we sit in a classical concert … and expect a kind of detached sublimity that has nothing much to do with real life.” There can, of course, be pitfalls involved with connecting Beethoven to real life. 
You might end up with Moisés Kaufman’s melodramatic “33 Variations.” The Broadway play, seen recently at the Ahmanson, starred Jane Fonda as a dying musicologist puzzling over why Beethoven based the “Diabelli” -- which Alfred Brendel has called the greatest of all piano pieces -- on a trivial waltz theme by a Viennese music publisher. Gardner, a young American composer living in Germany, provided the obvious answer. A function of art is to illuminate the quotidian, to draw our attention to what we miss that is all around us. His Cage theme was that of “4’33”,” namely silence. But unlike Cage, who instructed the pianist to sit at the keyboard and make no intentional sound, Gardner had the pianist control “silence” with the feedback gloves. Not everyone, we've seen, approved, but this electronically enhanced “silence” was richly textured, and nothing like the ear-splitting feedback squeal we are used to when a microphone gets too close to a speaker. Formenti's “Diabelli” then asked an interesting question. Has Beethoven been too monumentalized? The pianist’s method was to offer a tour of Beethoven’s messy mind. “We can see old Ludwig laughing like crazy,” Formenti wrote in his essay. “I am laughing myself every day like crazy.” The performance began with defiance, as Formenti wildly attacked Diabelli’s waltz even before completely sitting down on the piano bench. He embodied old Ludwig laughing like crazy -- funny, exaggerated, full of irrepressible spirit. From there, ideas gleefully and preternaturally leapt left and right as the variations became the dances of neurons firing. Formenti has a lyrical side that is the embodiment of sweetness. He can, when he wants to (and that isn’t always), articulate with great clarity. He is a highly strung pianist, zealously pedaling with his right foot while his left nervously jitters. He lunges like a rattlesnake striking the keys. Fast variations tumbled into each other, Beethoven getting away from himself. 
But Formenti also stopped to smell the roses. He took a long pause before the 20th variation, which is all slow chords, and he made those slow chords so slow, so infused with the piano’s sonorities, that harmonic motion all but ceased and we entered into a sonic glow as modern as Gardner's electronics. If Formenti’s impulsiveness meant revelation followed revelation, it also meant that every so often the pianist got into a little trouble, losing the line or muddying up textures. But the dazzling variations were truly dazzling. The cosmic ones were as cosmic as tomorrow's space exploration. And the comic ones were outrageous. The sense of adventure never wavered. Old Ludwig would not have recognized anything about this recital, not the modern instrument, not the modern hall and certainly not our impulse to institutionalize his music. But I’d like to think Old Ludwig would have, along with Formenti, laughed like crazy.
Beginning guitar lessons is an exciting thing – learning the notes, building your first chord, and of course, playing your first song. However, it’s not all sunshine and roses. Learning to play takes commitment, practice and the motivation to get over some common beginner hurdles. First, the painful process of building calluses can drive many to stop practicing. Second, there’s always that awkward stage of learning to seamlessly transition to different chords. You know the drill – practice makes perfect. But here are some additional tips from Teachstreet.com to help with switching chords:
1. Keep your fingers as close to the fretboard as possible. When that pinkie and third finger start flying out in space, it takes longer for them to come back down.
2. Build your chords from the bottom string up. For some reason a lot of us get in the habit of building our chords from the top down – like in an open C major chord, starting with the 2nd string, then 4th, then 5th. The problem with that is your pick is going to hit the bottom strings first, so get those notes placed first. That extra split second will give you a chance to get the last top bits of the chord in place. I know it seems like a negligible amount of time, but you’ll be surprised how it can improve your guitar playing.
3. When moving from one chord to the next, move the finger that has the farthest to go first. For instance, in moving from G major to C major in the open position, your first finger has to move all the way from the 5th string to the 2nd. Lead with that finger and you’ll find that your other fingers naturally pull along behind to end up close to their intended frets as well.
4. Stay relaxed and let the natural movement of your hands help you get to the chord. Believe it or not, the guitar is actually designed very well to accommodate the natural movement of the human hand. 
When you use tip #3 and lead with the farthest finger, your other fingers will follow along behind it naturally and you can get them to settle in the right place. If you tighten up they won’t move as naturally, so stay loose.
5. Keep your right hand moving. The way your brain works has a lot to do with how your hands react. As a beginner, your brain is giving you permission to stop in between chords and rationalizes it as “we’ll get it eventually.” It’s normal and happens on a subconscious level. You can easily change that by setting up a dissonance in your brain. That means presenting your brain with a problem it needs to fix. Here’s the way it works: Your brain loves it when your hands are moving together. So if you force your right hand to keep strumming, no matter what happens in your left, your brain will want to solve that dissonance by making your left hand move faster to keep up with your right. Exactly what we’re looking for.
Researchers at the Swedish medical university Karolinska Institutet (KI) and the Swedish Institute for Infectious Disease Control (SMI) have identified the biochemical mechanism behind the adhesive protein that gives rise to particularly serious malaria in children. The knowledge of how the malaria parasite makes blood vessels become sticky paves the way for a future vaccine for the disease, which currently kills some 2 million people every year. Severe anaemia, respiratory problems and cardiac dysfunction are common and life-threatening symptoms of serious malaria infection. The disease is caused when the malaria parasite Plasmodium falciparum infects the red blood cells, which then accumulate in large amounts, blocking the flow of blood in the capillaries of the brain and other organs. The reason that the blood cells conglomerate and lodge in the blood vessels is that, once inside the blood cell, the parasite produces proteins that project from the surface of the cell and bind with receptor molecules on other blood cells and on the vessel wall, thus acting like a glue. The challenge facing scientists has been to understand why certain proteins produce a stronger adhesive and thus cause more severe malaria. The research group, which is headed by Professor Mats Wahlgren at the Department of Microbiology, Tumour and Cell Biology, KI, has studied the adhesive protein PfEMP1 in children with severe malaria. The group has identified specific parts of PfEMP1 that are likely to bond more strongly to the receptors in the blood vessels, therefore producing a stronger adhesive effect. What the scientists show in their newly published study is that these protein parts are much more common in parasites that cause particularly severe malaria. If they can identify enough adhesive proteins causing severe malaria, it will be possible to design a vaccine that prepares the body’s own immune defence. 
“There are no vaccines yet that can prevent the development of malaria and cure a seriously infected person,” says Professor Wahlgren. “We’ve now discovered a structure that can be used in a vaccine that might be able to help these people.”
Ezekiel Emanuel, the older brother of Chicago Mayor Rahm Emanuel and a key figure in health policy circles, came under fire from fellow doctors this week for declaring in a magazine article that he wants to die at the age of 75. In the article, which appeared in the October issue of The Atlantic, Emanuel said he plans to refuse life-prolonging and preventive care starting in 2032 because "this manic desperation to endlessly extend life is misguided and potentially destructive." People become less creative as they age, he wrote. A deadline of 75 years "forces each of us to ask whether our consumption is worth our contribution." That sentiment angered some doctors within the American Medical Association, who said that Emanuel had defied the Chicago-based physician group's code of ethics in suggesting that a human life becomes less valuable with age. With the AMA's House of Delegates convened in Dallas in recent days, AMA delegate and New York ophthalmologist Gregory Pinto proposed a resolution that would have directed the organization to "issue a statement publicly disagreeing" with Emanuel. The resolution, first reported by Modern Healthcare, also argued that Emanuel's article was "even more disturbing because it comes from one of the architects of national health care policy." Emanuel, an oncologist and bioethicist, was a health policy adviser to the White House during the crafting of the Affordable Care Act. The resolution was ultimately voted down, an AMA spokesman said. But the outrage sparked by Emanuel's article was a reminder of the charged nature of debates over the country's health care system. Emanuel acknowledged in his article that a view like his could have policy implications. Life expectancy should not be used as a measure for the quality of health care once a country's average age has exceeded 75 years, he wrote. 
And biomedical research should be focused more on "Alzheimer's, the growing disabilities of old age, and chronic conditions — not on prolonging the dying process," he wrote. Donald Palmisano, a New Orleans surgeon and former president of the AMA, said Emanuel's article was "his opinion." "But if the government adopts such an approach (through) the IPAB (Independent Payment Advisory Board) of PPACA (Affordable Care Act) or some other scheme yet to be devised, then we have a problem!" Palmisano wrote in a response to the article. Emanuel was not available to comment for this story, a representative told the Tribune. Pinto did not respond to a request for comment.
CERNET2 is the largest next-generation Internet backbone: it is the core network of the China Next Generation Internet (CNGI) demonstration project, the country's only nationwide academic network and, so far, the world's largest native IPv6 backbone. CERNET2 uses CERNET's nationwide DWDM transport network to connect key research universities in 20 cities around China at speeds of 2.5–10 Gbps, and provides IPv6 connectivity to more than 200 universities, institutions and R&D organizations, which obtain domestic and international Internet access via the exchange point CNGI-6IX. Because the CERNET2 backbone runs the native IPv6 protocol, it provides a rich experimental environment for next-generation Internet technologies. CERNET2 will also deploy advanced IPv6 routers developed by domestic companies with independent intellectual property rights. It will become the most important infrastructure for the advanced network technologies and killer applications of the next-generation Internet, and a major boost to the development of China's next-generation Internet. The China Education and Research Network (CERNET), established in 1994, played a significant role in the development of the Chinese Internet as the country's first nationwide IPv4 backbone. Under the leadership of the Ministry of Education, CERNET began investigating and experimenting with next-generation Internet technology in 1998, resulting in the establishment of an IPv6 test bed (CERNET-IPv6). In 2000, China's first next-generation Internet, NSFCNET, and China's next-generation exchange point, DRAGONTAP, were established in Beijing; DRAGONTAP joined international next-generation Internet organizations on behalf of China and realized interconnection with next-generation networks abroad. In 2001, CERNET put forward a program to construct a nationwide next-generation Internet. 
In August 2003, the CERNET2 program was brought into CNGI, the China next-generation Internet demonstration project jointly led by eight ministries and commissions, including the National Development and Reform Commission. In October 2003, a CERNET2 trial network connecting Beijing, Shanghai and Guangzhou went into operation. On 15 January 2004, the major international research networking organizations, including Internet2, the EU's GÉANT and China's CERNET, jointly announced the opening of global IPv6 next-generation Internet service in Brussels, the capital of Belgium and seat of the European Union.
Although floating-point representations vary from machine to machine, the most commonly encountered representation is that defined by the IEEE 754 standard. An IEEE-754 format value has three components: a sign bit, an exponent e and a significand s. The value of the number is then ±s * 2^e, with the sign taken from the sign bit. The first bit of a non-zero binary significand is always one, so the significand is stored in normalized form and an IEEE-754 format records only its fractional part, leaving the leading one implicit. Three of the standard IEEE-754 types are 32-bit single precision, 64-bit double precision and 128-bit quadruple precision. The standard also specifies extended precision formats to allow greater precision and larger exponent ranges.
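As a concrete illustration of these components, here is a minimal Python sketch (the function names are ours, not part of any standard library) that splits a 64-bit double into its three fields and rebuilds the value. In the double format the sign occupies 1 bit, the biased exponent 11 bits and the fraction 52 bits, with an exponent bias of 1023.

```python
import struct

def decompose(x: float) -> tuple[int, int, int]:
    """Split a Python float (an IEEE-754 double) into its three fields:
    sign bit, 11-bit biased exponent, and 52-bit fraction."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64 bits
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF          # biased exponent
    fraction = bits & ((1 << 52) - 1)        # fractional part; leading 1 is implicit
    return sign, exponent, fraction

def recompose(sign: int, exponent: int, fraction: int) -> float:
    """Rebuild a normal number: (-1)^sign * 1.fraction * 2^(exponent - 1023)."""
    significand = 1 + fraction / 2**52       # restore the implicit leading one
    return (-1) ** sign * significand * 2.0 ** (exponent - 1023)

# -6.25 is -1.5625 * 2^2, so the stored (biased) exponent is 2 + 1023 = 1025.
s, e, f = decompose(-6.25)
assert (s, e - 1023) == (1, 2)
assert recompose(s, e, f) == -6.25
```

The round trip is exact because no rounding occurs: the fields are just reinterpreted bits. (The sketch covers normal numbers only; zeros, subnormals, infinities and NaNs use the all-zeros and all-ones exponent patterns and would need separate handling.)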
World War II magazine covers every aspect of history's greatest modern conflict with vivid, revealing, and evocative writing from top historians and journalists. Each issue provides a lively mix of stories about soldiers, leaders, tactics, weapons, and little-known incidents of the war, including riveting firsthand battle accounts and reviews of books, movies, and video games. And the most authoritative magazine on the war features a striking design that highlights rare, archival photographs and detailed battle maps to convey the drama and excitement of the most famous battles and campaigns. May 7, 1945, was an important day by any measure. For Gen. George S. Patton, it started early, with a call just after 4 a.m. from Gen. Omar Bradley, who said, “Ike just called me, George. The Germans have surrendered.” This was mixed news to Patton, who was convinced the war was ending too soon, leaving the Russians as a future threat and, in any case, leaving Patton, a man who lived to fight, without a war. “Peace is going to be hell on me,” he had complained to his wife, Beatrice, four days earlier. The commander of Patton’s 2nd Cavalry Group, Col. Charles Hancock Reed, was with his unit in western Czechoslovakia, where they were forming a defensive line southwest of the large city of Pilsen. The 2nd Cavalry had been spearheading the Third Army’s advance, the deepest American penetration of the war. But as of 8 that morning, they and the rest of Patton’s Third Army had been ordered to “cease fire and stand fast.” None of this was on the mind of Col. Alois Podhajsky as he prepared for what he regarded as the most important day of his life. Podhajsky, a tall, aristocratic Austrian of extraordinary single-mindedness, was looking for a way to guarantee the safety of the riding school and horses he supervised as the Third Reich collapsed around him. And on that sunny Monday morning, as a preoccupied General Patton strode into his exhibition arena, he thought he’d found it. Lt. 
Gen. Walton H. Walker’s XX Corps had captured the renowned Spanish Riding School of Vienna several days earlier at its temporary quarters in St. Martin im Innkreis, a small town in Upper Austria, and Walker, a protégé of Patton’s, requested a performance of its white Lipizzaner stallions especially for him. As Patton watched, the horses and riders went through the precise, balletlike maneuvers they were famous for: a demonstration of controlled power and ritualized elegance, set to music, that was beautiful to watch and incredibly difficult to execute. When it was over, Podhajsky halted his horse before Patton and removed his hat in a traditional salute. “In a little Austrian village in a decisive hour two men faced each other,” he wrote in his memoir, My Dancing White Horses, the basis for the 1963 Disney film Miracle of the White Stallions, “the one as triumphant conqueror in a war waged with such bitterness, the other as a member of a defeated nation.” He asked Patton for protection for the centuries-old school during the uncertain postwar period and for help in retrieving its breeding herd from Czechoslovakia, where the Germans had sent the horses to a Wehrmacht-controlled stud farm. Patton, an expert horseman himself, described the exhibition in his diary that day, calling it “extremely interesting and magnificently performed.” Ever the soldier, he added, “It struck me as rather strange that, in the midst of a world at war, some twenty young and middle-aged men in great physical condition…had spent their entire time teaching a group of horses to wiggle their butts and raise their feet in consonance with certain signals from the heels and reins.” More telling for Podhajsky, though, was what Patton noted next: “On the other hand, it is probably wrong to permit any highly developed art, no matter how fatuous, to perish from the earth—and which arts are fatuous depends on the point of view. 
To me the high-schooling of horses is certainly more interesting than either painting or music.” Standing to address the man on horseback before him, Patton replied that he was putting the Spanish Riding School under the special protection of the U.S. Army; he later told Podhajsky he would do what he could about the horses in Czechoslovakia. “This official declaration was far more than I had dreamed,” Podhajsky exulted. What he didn’t know, however, was that something far more dangerous and extravagant was already well under way: a top-secret mission involving not only Podhajsky’s horses, but hundreds more, as well as hundreds of Allied POWs, which would twine together Patton, Reed, and Podhajsky and leave Patton forever associated with the dancing white horses. It began 11 days earlier—with some captured secret documents. The contents of any intelligence officer’s papers are an obvious source of intrigue, especially when that officer is a general. But those that belonged to the commander of the German intelligence unit that surrendered to the 2nd Cavalry Group at a hunting lodge near the Czech border on April 26, 1945, were unexpectedly interesting. They included photos of horses—beautiful horses: Arabs, Thoroughbreds, and Lipizzaners. The general, a celebrated spy known only as Walter H., invited Colonel Reed to join him for breakfast while they waited for trucks to arrive to haul off the captured documents. They looked at the photos together, and the general told Reed that the horses were among hundreds the Germans had collected from among the finest breeding stock in Europe and sent to a large stud farm in the nearby Czech town of Hostau, where they were under the care of Czech and Polish POWs who had surrendered to the Germans. The problem was that the ruthless and ravenous Red Army was approaching; both men were concerned the animals might become army rations. 
But, as spelled out at the Yalta conference that divided up postwar Europe, Czechoslovakia fell within the Soviet zone of occupation. “We mutually agreed that these fine animals should not fall into Communist hands and the prisoners should be rescued,” Reed recalled. He sent a message to Patton at Third Army headquarters requesting permission for the operation. Patton’s swift response: “Get them. Make it fast!” By then Red Army troops were about 60 miles east of Hostau; the Americans were about 35 miles away. And although the Germans in Czechoslovakia were being rapidly overpowered, there were still die-hard Nazi snipers everywhere. Working in conjunction with the German, Reed formulated a daring plan. He dispatched the general’s courier with a message asking the Germans at the stud farm to send an officer through the lines that night to arrange terms. At about 8 p.m., his request was answered when a lean man in a Wehrmacht officer’s uniform strode out of the woods near a 2nd Cavalry outpost. The officer was Capt. Rudolf Lessing, a staff veterinarian at Hostau. Over dinner he presented Reed with a counterproposal: send an officer back with him to Hostau to confer with the local Wehrmacht commander and they could arrange a surrender. An intelligence officer with the 2nd Cavalry’s 42nd Reconnaissance Squadron, Capt. Thomas M. Stewart, was out in the field when his commanding officer relayed a message: “Colonel Reed wants to borrow you for a special assignment.” The 30-year-old captain—son of a U.S. senator from Tennessee—reported to Reed’s headquarters, where he found an assortment of American officers gathered in conversation around Lessing. Reed had just concluded a telephone conversation with General Patton, and told Stewart he was to accompany the German captain through the lines and attempt to arrange the release of the horses and prisoners. 
Reed sent him off bearing a letter written in German and English designating him as an emissary under Lessing’s protection and granting him the authority to negotiate. The two men left on foot and walked together in the darkness for about a half-mile before coming upon the motorcycle that Lessing had secreted in some bushes. They drove it several miles to the barn of a friendly Czech forester, where they exchanged the motorcycle for a pair of horses the veterinarian had hidden there to take them on the rest of the journey. Their destination lay about 18 miles ahead, through a forbiddingly dense forest. It had been around midnight when the pair set off, and the moon finally emerged from behind some clouds. Still, “the forest was so thick through there you felt like you were riding through two walls of darkness,” Stewart recalled in a recent interview. Although riding through the dark countryside in the sole company of an enemy officer seems an intimidating experience, Stewart reveled in it. An experienced rider, he delighted in his horse, a Lipizzaner stallion said to have been the favorite mount of Peter II, King of Yugoslavia. When he encountered a roadblock about three feet wide and three feet high built of logs and branches, a steep cliff on one side, a ravine on the other, the American did the only thing he thought he could do: he gathered his horse and took off for the obstacle. Too late, he heard Lessing—who knew a route around the roadblock—call out, “He doesn’t jump!” No matter; the horse took off, light as a feather. “The perfect jump,” Stewart said. “It was the highlight of the trip for me.” A more significant obstacle emerged at the stud farm. As the men made their way in darkness to Lessing’s living quarters, they found Lessing’s friend and fellow veterinarian, Capt. Wolfgang Kroll, cradling what looked like a submachine gun. “We’re in trouble,” he told Lessing. The manager of the farm, Lt. Col. 
Hubert Rudofsky, had initially given his blessing to the plan, but had had a change of heart after Lessing left. Rudofsky was a Czech national and decided he could cut a better deal with the Russians than with the Americans. He told Kroll that if he and Lessing brought in an American, Rudofsky would have the three of them shot as spies. Stewart spent the rest of the night crouched in a chair, while Lessing reconnoitered. A few hours later, on the morning of April 27, he summoned Stewart and Kroll; Rudofsky had left the farm, possibly to visit the local army commander, a General Schulze. Lessing’s plan was to find one of Schulze’s officers and have the three of them taken to see him as well—something they managed under tense circumstances later that morning. Stewart was able to understand a little German, so he could make out a smattering of what was going on. And at first it didn’t look good. The general, a small man, sat behind a bare table, surrounded by officers—including, Stewart later learned, a silent Lieutenant Colonel Rudofsky. A staff colonel, a big blond man, said something in anger to Lessing and Lessing replied, “Sir, I am no spy! I am a German officer. I am no spy.” General Schulze gestured, and Captain Stewart presented his credentials. Lessing explained their presence. As the German veterinarian recounted to the Austrian magazine Zyklus in 1982, he told the general that their primary responsibility was to the horses. “It is our duty to do everything to save them,” he argued. “It is unimportant for us to win the war here at Hostau on April 27 or 28, 1945. This we should have done four years ago. 
To do it now is too late.” Stewart heard someone in the background say, “Adolf ist kaputt.” The general finally turned to the American captain and asked, in English, “How many panzers can you bring?” Stewart understood that the general didn’t want to surrender to a lone American captain and assured him the 2nd Cavalry would return with a sizeable number of tanks and other vehicles. “He looked at me for what seemed like a long time,” Stewart recalled, “and then he took out this pad and scribbled something.” It was a note of safe passage for Stewart. “There will be no difficulties when your people come in,” the general told him. When Stewart finally set off toward his squadron later that evening, he wasn’t alone. Wolfgang Kroll, whom Lessing called “a man with an inclination to adventure and bravado,” wanted to be a part of the American advance on the farm and stayed in the jeep after Lessing departed. A German driver took Stewart and the veterinarian to the edge of the forest, but would go no further, so the two walked the last half-mile or so to Stewart’s squadron themselves. As soon as they arrived, Stewart briefed Reed via radio on the day’s events, and Reed immediately put his plans into action. By daybreak the next day, April 28, a rapidly formed task force of approximately 70 men from the 42nd Reconnaissance Squadron’s A Troop—along with two light tanks and two assault guns—was on its way. As General Schulze had promised, the task force encountered no resistance on the way to the stud farm, and the surrender was peaceful. As soon as the facility was secured, the American troops hurried off to look at the source of all the commotion: the captured horses. It was truly a treasure trove of horseflesh. 
Among them were about 100 of the best Arabs in Europe, top Thoroughbred racehorses and trotters, hundreds of Russian Cossack horses, and some 250 Lipizzaners from breeding farms across Europe—primarily the Yugoslavian royal stud and the Piber stud in Austria, which supplied the horses for the Spanish Riding School. There were also the prisoners: not only the several hundred grooms they had expected to find at the farm, but about 300 Americans and as many British troops, who had been encountered with their German guards in the vicinity. Steps were quickly taken to free and safeguard them. While the rest of the 2nd Cavalry Group prepared for an advance toward Pilsen, the task force organized its own small army to defend the farm in the event of a counterattack. In addition to the Americans and their tanks and assault guns were Lessing, Kroll, and the other Germans; some Cossack cavalrymen; and an assortment of now-former POWs who chose to stay. That proved a wise move. For five hours on April 30, the small international force held off an attack from German troops: mostly older men and boys who knew nothing of what had transpired at the farm. The defenders took hundreds of German prisoners; the rest retreated back into the woods. “The Germans did a lot of shooting, but not a lot of damage,” Stewart remembered. Two men of A Troop ultimately lost their lives during the mission in isolated incidents elsewhere, however. As the war wound down in the next few days, dramatic events continued to come hard and fast. Colonel Reed arrived at the farm on May 1 to inspect the horses. Before leaving, he cautioned Stewart that the massive German 11th Panzer Division would soon be headed in their direction. “Don’t engage them,” he warned. On May 4 the reason became evident, as the division and its more than 9,000 men surrendered, an event Reed had been instrumental in negotiating. Two days later, the Third Army liberated Pilsen. 
Germany surrendered the next day, and Reed, who shared Patton’s antipathy toward the Russians, established new headquarters at an estate near Pilsen. He was determined to hold his ground in Czechoslovakia until the U.S. Army—not the Russians or Czechs—told him to leave. He was there on May 9 when he received word from Third Army headquarters that General Patton had been in touch with Col. Alois Podhajsky, the director of the Spanish Riding School, and that Podhajsky would be flown to Reed’s headquarters as soon as possible to inspect the captured Lipizzaners. Although the horses were now in American hands, they were still in Czechoslovakia and Reed knew something had to be done to get them out of the path of the Red Army—and soon. “A day or so after the German surrender it became evident to me that the Czech and Russian Communists were showing a great interest in the captured horses,” he recalled. Word was that they’d made several stealthy trips to the stud farm; he transmitted this information to Patton’s headquarters, along with the recommendation that the Arabs and Lipizzaners be transferred as soon as possible to a large facility in Mannsbach in central Germany. The Third Army swiftly gave its assent, along with a guarantee to give the movement of the horses priority along the required roads. At dawn on May 12, the remarkable procession began. About 350 horses were herded in small groups, with American vehicles positioned before and after them and with a band of Polish, Czech, and Cossack horsemen as outriders, along with a smattering of Americans—making the name of the mission, Operation Cowboy, especially apt. Despite the prevailing chaos of the time, the evacuation was an organizational masterpiece; the Americans had closed off all major intersections and the group covered the roughly 130 miles to Mannsbach safely. The fastest groups made the journey in two days; the slower groups, those that included mares and foals, arrived a day later. 
(Lieutenant Colonel Rudofsky had materialized at the border as the horses passed, marking off the departing animals on a checklist. Czech and Russian officials later filed a protest, but nothing ever came of it.) At about the same time, on the afternoon of May 14, the U.S. Army flew Podhajsky to Colonel Reed's headquarters. He was introduced to Reed over dinner. "Our conversation soon showed how full life is of interesting coincidences," Podhajsky recalled. Reed, as it turned out, knew Podhajsky's name well. When the captain of the U.S. Army riding team, of which Reed was a member, saw Podhajsky ride in the 1936 Olympics in Berlin, he had been so impressed that he named one of the cavalry school horses after him. The next morning Reed drove his Austrian counterpart to Mannsbach in a jeep. Podhajsky easily identified the Lipizzaners belonging to the Austrian herd, and Reed assured him they would be sent to St. Martin. "Before I flew off I tried to thank Colonel Reed for his help and great understanding," the Austrian horseman said. "I have only acted as a fellow rider should," Reed replied. "And I am convinced that you would have done the same if the positions were reversed." A little over a week later, Reed proved as good as his word. Just before midnight on or about May 25, the sound of engines broke the quiet at an abandoned airfield outside St. Martin as the first of some 60 trucks pulled into view. The journey this time was too great a distance to make on foot, so Reed had amassed as many captured German vehicles as possible and had them outfitted to carry the horses. Although two mares were injured in the chaos of unloading at the airfield and had to be put down, a total of 244 Lipizzaners were successfully returned to Austria.
Podhajsky was so grateful to have this segment of culture and tradition preserved for his country and the world that he staged performances for American soldiers stationed in occupied Austria over the next few months: a second for Patton on August 21, 1945, and several more for "ordinary mortals," one or two thousand American GIs at a time. "The success of the Lipizzaner with the American Army General was repeated also with the ordinary soldiers," Podhajsky noted with his characteristic pride. "They were all captivated." But why, when there was so much destruction, so much loss and pain, so much left to be done, devote limited resources to this particular mission? A simple explanation lies with the diverse individuals central to the rescue, who all had one trait in common: they loved horses. Alois Podhajsky, the son of a cavalry officer, was one of the youngest lieutenants in the Austro-Hungarian cavalry in World War I, and won a bronze medal in dressage at the 1936 Olympics. Podhajsky devoted his life to horses, and they were rarely far from his thoughts. "I am bound to admit that I have always been what is commonly called 'horse-mad,'" he said. Charles Hancock Reed, also a former officer in the mounted cavalry, was a superb horseman: an instructor at the Cavalry School and a member of the 1930-1931 U.S. Army horse show team. After retiring from the army, Reed purchased the offspring of one of the horses he had rescued, and rode her every day for nearly 30 years. George S. Patton spent a lifetime with horses. While stationed at Fort Myer, Virginia, after his graduation from West Point, he played polo, fox-hunted, and competed in mounted steeplechases. He was a participant in the first modern pentathlon at the 1912 Olympics in Stockholm, Sweden, placing sixth out of 23 in the equestrian phase.
As a major in the cavalry in 1921, he wrote that a cavalry leader “must have a passion—not simply a liking—for horses.” And when he sought to assess his condition after the automobile accident that ultimately took his life in December 1945, Patton chose one question to ask his doctor: “What chance have I to ride a horse again?” But the rescue came at a cost—and a simple fondness for horses can’t explain the many instances of risk, bravery, and personal sacrifice that arose during its execution. For that, it was Colonel Reed, fittingly, who provided the answer: “We were so tired of death and destruction,” he said, “we wanted to do something beautiful.”
The Boom of the 1990s

In the second half of the 1990s American productivity picked itself up off the ground to which it had fallen at the early-1970s start of the productivity slowdown. Between the beginning of 1995 and the semi-official NBER business cycle peak in March 2001, U.S. nonfarm-business output per person-hour worked grew at an annual rate of 2.80 percent per year. (Extending the sample through the 2001 recession to the likely trough point of 2001:4, the late-1990s growth rate is 2.69 percent per year.) Over the same period, U.S. real GDP grew at a pace of 4.21 percent per year. The causes of the productivity slowdown of the roughly 1973-1995 period remain disappointingly mysterious. Baily (2002) calls the growth-accounting literature on the slowdown "large but inconclusive." No single factor provides a convincing and coherent explanation, and the residual position that a large number of growth-retarding factors suddenly happened to hit at once is unlikely. By contrast, nearly all agree on the causes of the productivity speed-up of 1995-2001: it is the result of the extraordinary wave of technological innovation in computer and communications equipment: solid-state electronics and photonics. Robert Gordon (2002) writes that cyclical factors account for 0.40 percentage points of the growth acceleration, and that the rest is fully accounted for by information technology: an "0.30 [percentage] point acceleration [from] MFP growth in computer and computer-related semiconductor manufacturing" and a "capital-deepening effect of faster growth in computer capital in the aggregate economy [that] accounts [for] 0.60 percentage points of the acceleration."
Kevin Stiroh (2001) writes that "all of the direct contribution to the post-1995 productivity acceleration can be traced to the industries that either produce [information technology capital goods] or use [information technology capital goods] most intensively, with no net contribution from other industries relatively isolated from the [information technology] revolution." Oliner and Sichel (2000) write that "the rapid capital deepening related to information technology capital accounted for nearly half of this increase" in labor productivity growth, with a powerful "additional growth contribution com[ing] through efficiency improvement in the production of computing equipment." Jorgenson, Ho, and Stiroh (2001) reach the same conclusions about the importance of information technology capital-deepening and increased efficiency in the production of computing and communications equipment as major drivers of the productivity growth acceleration, and they go on to forecast that labor productivity growth will be as high in the next decade as it has been in the past half-decade. Compare our use of information technology today with our predecessors' use of information technology half a century ago. The decade of the 1950s saw electronic computers largely replace mechanical and electromechanical calculators and sorters as the world's automated calculating devices. By the end of the 1950s there were roughly 2000 installed computers in the world: machines like Remington Rand UNIVACs, IBM 702s, or DEC PDP-1s. The processing power of these machines averaged perhaps 10,000 machine instructions per second. Today, talking rough orders of magnitude only, there are perhaps 300 million active computers in the world with processing power averaging several hundred million instructions per second. Two thousand computers times ten thousand instructions per second is twenty million. 
Three hundred million computers times, say, three hundred million instructions per second is ninety quadrillion: roughly a four-billion-fold increase in the world's raw automated computational power in forty years, an average annual rate of growth of roughly 74 percent per year. Such a sustained pace of improvement is unprecedented in our history. Moreover, there is every reason to believe that this pace of productivity growth in the leading sectors will continue for decades. More than a generation ago Intel Corporation co-founder Gordon Moore noticed what has become Moore's Law: improvements in semiconductor fabrication allow manufacturers to double the density of transistors on a chip every eighteen months. The scale of investment needed to make Moore's Law hold has grown exponentially along with the density of transistors and circuits, but Moore's Law has continued to hold, and engineers see no immediate barriers that will bring the process of improvement to a halt anytime soon. As the computer revolution proceeded, nominal spending on information technology capital rose from about one percent of GDP in 1960 to about two percent of GDP by 1980, to about three percent of GDP by 1990, to between five and six percent of GDP by 2000. All throughout this time, Moore's Law meant that the real price of information technology capital was falling as well. As the nominal share of GDP spent on information technology capital grew at a rate of 5 percent per year, the price of data processing (and in recent decades data communications) equipment fell at a rate of between 10 and 15 percent per year.
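The order-of-magnitude arithmetic above can be checked with a short script. This is a sketch only, using the text's own rough machine counts and speeds (not data); it also works out the annual growth rate implied by a Moore's Law doubling every eighteen months.

```python
# Rough check of the computing-power arithmetic. The machine counts and
# per-machine speeds are the text's order-of-magnitude estimates, not data.

power_1960 = 2_000 * 10_000              # ~2,000 machines at ~10,000 instructions/sec
power_2000 = 300_000_000 * 300_000_000   # ~300 million machines at ~300 MIPS

fold_increase = power_2000 / power_1960  # ~4.5 billion-fold
annual_growth = fold_increase ** (1 / 40) - 1   # implied compound rate over 40 years

# A Moore's Law doubling of transistor density every 18 months implies:
moore_annual = 2 ** (12 / 18) - 1        # annual density growth rate

print(f"fold increase: {fold_increase:.1e}")           # ~4.5e9
print(f"implied annual growth: {annual_growth:.0%}")   # ~74%
print(f"Moore's Law annual rate: {moore_annual:.0%}")  # ~59%
```

The compound rate, not the raw fold increase, is the figure that matters for comparing the leading sector with economy-wide productivity growth of a few percent per year.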
At chain-weighted real values constructed using 1996 as a base year, real investment in information technology equipment and software was an amount equal to 1.7 percent of real GDP in 1987. By 2000 it was an amount equal to 6.8 percent of real GDP. Will the decade of the 2000s be more like the late 1990s, or more like the 1980s, as far as growth in productivity and living standards is concerned? The smart way to bet is that the 2000s will be much more like the fast-growing late 1990s than like the 1980s. The extraordinary pace of invention and innovation in the information technology sector has generated real price declines of between ten and twenty percent per year in information processing and communications equipment for nearly forty years so far. There are no technological reasons for this pace of productivity increase in these leading sectors to decline over the next decade or so. In the consensus analysis, increased total factor productivity in the information technology capital goods-producing sector, coupled with extraordinary real capital deepening as the quantity of real investment in information technology capital bought by a dollar of nominal savings grows, have together driven the productivity growth acceleration of the later 1990s. It may indeed be the case that a unit of real investment in computer or communications equipment "earned the same rate of return" as any other unit of real investment, as Robert Gordon (2002) puts it. But the extraordinary cost declines have made a unit of real investment in computer or communications equipment absurdly cheap, and hence the quantity of real investment, and thus capital deepening, in information-technology capital absurdly large. Continued declines in the prices of information technology capital mean that a constant nominal flow of savings channeled to such investments will bring more and more real investment.
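That compounding effect can be sketched numerically. A minimal illustration, assuming a constant nominal outlay and a 15 percent annual price decline (a figure chosen from the middle of the ranges quoted above, for illustration only):

```python
# Sketch: with IT prices falling ~15 percent per year (illustrative figure),
# a constant nominal flow of investment buys ever more real capital.

nominal_outlay = 100.0   # constant nominal investment per year
price_index = 1.0        # quality-adjusted price of IT capital, base year
decline = 0.15           # assumed annual rate of price decline

real_bought = []
for year in range(11):
    real_bought.append(nominal_outlay / price_index)
    price_index *= (1 - decline)

print(round(real_bought[0]))    # 100 real units in year 0
print(round(real_bought[10]))   # ~508: the same dollars buy ~5x as much a decade later
```

A decade of such price declines quintuples the real capital a fixed nominal flow purchases, which is the mechanism behind the capital-deepening numbers above.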
As long as information technology capital earns the same rate of return as other capital, labor productivity growth should continue very high. The social return to information technology investment would have to suddenly and discontinuously drop to nearly zero, or the share of nominal investment spending devoted to information technology capital would have to collapse, or both, for labor productivity growth in the next decade to reverse itself and return to its late-1970s or 1980s levels. Moreover, additional considerations tend to strengthen, not weaken, forecasts of productivity growth over the next decade. It is very difficult to argue that the speculative excesses of the 1990s boom produced substantial upward distortions in the measured growth of potential output. The natural approach of modeling investment spending in detail (the approach used by Basu, Fernald, and Shapiro (2001)) tells us that times of rapid increase in real investment are times when "adjustment costs" are unusually high, and thus times when actual productivity growth undershoots the long-run sustainable trend. Both a look back at past economic revolutions driven by technologies that were in their day analogous to the computer in their effects and a deeper look forward into the likely determinants of productivity growth suggest a bright future. The pace of technological progress in the leading sectors driving the "new economy" is very rapid indeed, and will continue to be very rapid for the foreseeable future. The computers, switches, cables, and programs that are the products of today's leading sectors are what Bresnahan and Trajtenberg (1995) call "general-purpose technologies," hence demand for them is likely to be extremely elastic. Rapid technological progress brings rapidly falling prices. Rapidly falling prices in the context of extremely elastic demand will produce rapidly growing expenditure shares.
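The logic of that last step can be made concrete with a toy constant-elasticity demand curve (the elasticity values below are illustrative assumptions, not estimates): with demand q = p^(-ε), nominal spending is p·q = p^(1-ε), so spending rises as price falls exactly when ε exceeds one.

```python
# Toy constant-elasticity demand: q = p**(-eps), so spending = p*q = p**(1 - eps).
# Spending rises as price falls precisely when the elasticity eps exceeds 1.

def spending(price, eps):
    quantity = price ** (-eps)
    return price * quantity

# Effect of a halving of price on nominal spending, for three elasticities:
for eps in (0.5, 1.0, 2.0):
    ratio = spending(0.5, eps) / spending(1.0, eps)
    print(f"eps={eps}: spending ratio {ratio:.2f}")
# eps=0.5 -> 0.71 (share shrinks), eps=1.0 -> 1.00, eps=2.0 -> 2.00 (share grows)
```

With inelastic demand, falling prices shrink the sector's expenditure share; with the highly elastic demand the text argues for, each price decline swells it.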
And the economic salience of a leading sector, its contribution to productivity growth, is the product of the rate at which the cost of its output declines and the share of the products it makes in total demand. Thus unless Moore's Law ceases to hold or the marginal usefulness of computers and communications equipment rapidly declines, the economic salience of the data processing and data communications sectors will not shrink. Moreover, previous industrial revolutions driven by general purpose technologies have seen an initial wave of adoption followed by rapid total factor productivity growth in industries that use the new technologies, as businesses and workers learn by using. So far this has not been true of our current wave of growth. As Robert Gordon (2002) has pointed out at every opportunity, there has been little if any acceleration of total factor productivity growth outside of the making of high-tech equipment itself: the boosts to labor productivity look very much like what one would expect from capital deepening alone, not what one would expect if the new forms of capital allowed more efficient forms of organization. Paul David (1991) at least has argued that a very large chunk of the long-run impact of technological revolutions emerges only when people have had a chance to thoroughly learn the characteristics of the new technology and to reconfigure economic activity to take advantage of it. In David's view, it took nearly half a century before the American economy had acquired enough experience with electric motors to begin to use them to their full potential. By his reckoning, we today are only halfway through the process of economic learning needed for us even to begin to envision what computers will be truly useful for.
Moreover, as Crafts (2000) argues, the striking thing is not that there was a "Solow paradox" of slow productivity growth associated with computerization, but that people did not expect the economic impact to start slow and gather force over time. As he writes, "in the early phases of general purpose technologies their impact on growth is modest." It has to be modest: "the new varieties of capital have only a small weight relative to the economy as a whole." But if they are truly general-purpose technologies, their weight will grow.

Possible Interruptions?

Could any factors interrupt a relatively bright forecast for productivity growth over the next decade? There are three possibilities. The first is the end of the era of technological revolution: the end of the era of declining prices of information technology capital. The second is a steep fall in the share of total nominal expenditure devoted to information technology capital. And the third is a steep fall in the social marginal product of investment in information technology, or, rather, a fall in the product of the social return on investment and the capital-output ratio. The important thing to focus on in forecasting the future is that none of these has happened. In 1991-1995 semiconductor production was half a percent of nonfarm business output; in 1996-2000 it averaged 0.9 percent of nonfarm business output. Nominal spending on information technology capital rose from about one percent of GDP in 1960 to about two percent of GDP by 1980, to about three percent of GDP by 1990, to between five and six percent of GDP by 2000. Computer and semiconductor prices declined at 15-20 percent per year from 1991-1995 and at 25-35 percent per year from 1996-2000.

The Usefulness of Computers

However, whether nominal expenditure shares will continue to rise in the end hinges on how useful data processing and data communications products turn out to be.
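Combining these figures with the salience rule stated earlier (a leading sector's contribution is roughly the rate of cost decline times its expenditure share) gives a feel for the acceleration. The pairings below are illustrative only, drawn loosely from the shares and price declines just quoted.

```python
# Back-of-the-envelope salience: contribution (percentage points per year)
# ~ rate of cost decline x expenditure share. The pairings are illustrative,
# loosely matching the shares and price declines quoted in the text.

def salience(cost_decline_pct, expenditure_share):
    """Approximate productivity-growth contribution in percentage points/year."""
    return cost_decline_pct * expenditure_share

early_1990s = salience(15.0, 0.03)   # ~15%/yr price declines, ~3% of GDP
late_1990s = salience(25.0, 0.06)    # ~25%/yr price declines, ~6% of GDP

print(round(early_1990s, 2))   # ~0.45 points per year
print(round(late_1990s, 2))    # ~1.5 points per year
```

Both the rate of price decline and the expenditure share roughly doubled between the two halves of the decade, so this crude product more than triples, which is in the neighborhood of the measured productivity acceleration.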
What will be the elasticity of demand for high-technology goods as their prices continue to drop? The greater the number of different uses found for high-tech products as their prices decline, the larger will be the income and price elasticities of demand, and thus the stronger will be the forces pushing the expenditure share up, not down, as technological advance continues. All of the history of the electronics sector suggests that these elasticities are high, not low. Each successive generation of falling prices appears to produce new uses for computers and communications equipment at an astonishing rate. The first, very expensive, computers were seen as good at performing complicated and lengthy sets of arithmetic operations. The first leading-edge applications of large-scale electronic computing power were military: the burst of innovation during World War II that produced the first one-of-a-kind hand-tooled electronic computers was totally funded by the war effort. The coming of the Korean War won IBM its first contract to actually deliver a computer: the million-dollar Defense Calculator. Military demand in the 1950s and 1960s, from projects such as Whirlwind and SAGE (Semi-Automatic Ground Environment, a strategic air defense system), both filled the assembly lines of computer manufacturers and trained the generation of engineers that went on to design and build later generations of machines. The first leading-edge civilian economic applications of large (for the time, the 1950s) amounts of computer power came from government agencies like the Census and from industries like insurance and finance which performed lengthy sets of calculations as they processed large amounts of paper. The first UNIVAC computer was bought by the Census Bureau. The second and third orders came from A.C. Nielsen Market Research and the Prudential Insurance Company.
This second, slightly cheaper generation of computers was used not to make sophisticated calculations, but to make the extremely simple calculations needed by the Census, and by the human resource departments of large corporations. The Census Bureau used computers to replace its electro-mechanical tabulating machines. Businesses used computers to do the payroll, report-generating, and record-analyzing tasks that their own electro-mechanical calculators had previously performed. The next generation of computers, exemplified by the IBM 360 series, was used to stuff data into and pull data out of databases in real time: airline reservations processing systems, insurance systems, inventory control. It became clear that the computer was good for much more than performing repetitive calculations at high speed. The computer was much more than a calculator, however large and however fast. It was also an organizer. American Airlines used computers to create its SABRE automated reservations system, which cost as much as a dozen airplanes. The insurance industry automated its back-office sorting and classifying. Subsequent uses have included computer-aided product design, applied to everything from airplanes designed without wind-tunnels to pharmaceuticals designed at the molecular level for particular applications. In these and other applications, the major function of the computer is not as a calculator, a tabulator, or a database manager, but instead as a what-if machine. The computer creates models of what would happen if the airplane, the molecule, the business, or the document were to be built up in a particular way. It thus enables an amount and a degree of experimentation in the virtual world that would be prohibitively expensive in resources and time in the real world. The value of this use as a what-if machine took most computer scientists and computer manufacturers by surprise.
None of the engineers designing software for the IBM 360 series, none of the parents of Berkeley UNIX, nobody before Dan Bricklin programmed VisiCalc had any idea of the utility of a spreadsheet program. Yet the invention of the spreadsheet marked the spread of computers into the office as a what-if machine. Indeed, the computerization of America's white-collar offices in the 1980s was largely driven by the spreadsheet program's utility: first VisiCalc, then Lotus 1-2-3, and finally Microsoft Excel. For one example of the importance of the computer as a what-if machine, consider that today's complex designs for new semiconductors would be simply impossible without automated design tools. The process has come full circle: progress in computing depends upon Moore's Law, and the progress in semiconductors that makes possible the continued march of Moore's Law depends upon progress in computers and software. As increasing computer power has enabled their use in real-time control, the domain has expanded further as lead users have figured out new applications. Production and distribution processes have been and are being transformed. Moreover, it is not just robotic auto painting or assembly that has become possible, but scanner-based retail quick-turn supply chains and robot-guided hip surgery as well. In the most recent years the evolution of the computer and its uses has continued, branching along two quite different paths. First, computers have burrowed inside conventional products as they have become embedded systems. Second, computers have connected outward to create what we call the World Wide Web: a distributed global database of information all accessible through a single global network. Paralleling the revolution in data processing capacity has been a similar revolution in data communications capacity. There is no sign that the domain of potential uses has been exhausted.
One would have to be pessimistic indeed to forecast that all these trends are about to come to an end in the next few years. One way to put it is that modern semiconductor-based electronics technologies fit Bresnahan and Trajtenberg's (1995) definition of a "general purpose technology": one useful not just for one narrow class but for an extremely wide variety of production processes, one for which each decline in price appears to bring forth new uses, and one that can spark off a long-lasting major economic transformation. There is room for computerization to grow on the intensive margin, as computer use saturates potential markets like office work and email. But there is also room to grow on the extensive margin, as microprocessors are used for tasks, like controlling hotel room doors or changing the burn mix of a household furnace, that few would have thought conceivable two decades ago.
The world has become used to sets of CDs and LPs offering collections of historic (and merely historical) recordings of music by composers who lived in the 20th century. CBS/Sony have celebrated Stravinsky in impressive style. Elgar has had numerous EMI and Pearl sets devoted to his own and others' performances of his works. Bartok enjoyed a massive Hungaroton LP set of historic recordings collecting together every scrap of his own performances. Now it is the turn of Manuel de Falla. I say 'now'. In fact this box came out in 1996, presumably to mark the fiftieth anniversary of the death of Spain's most famous composer of the century. I do not recall seeing the set reviewed anywhere, though it may well have been picked up by Fanfare. I was alerted to its existence when I noticed an advert in Gramophone. Sound quality is what you would expect from a collection spanning so many years. Oddly enough, some of the older recordings come over very well while one or two live performance tapes are of disappointing quality. There are no obvious signs of artificial enhancement of the disc sound. Some 'cleaning up' has been done but nothing objectionable. I cannot however claim familiarity with the original 78s! I suspect that many of these recordings are quite rare - certainly in the USA and UK. CDs 1 and 2 are all from 78s (1923-40). CDs 3 and 4 are from Spanish Radio archive tapes. SERENATA ANDALUZA (1898-9) was written for violin and piano. This 1931 recording has the work arranged (not by de Falla) as a 'concerto for castanets and orchestra'. It is a light music novelty, and in this recording the castanets (properly 'castanuelas') are startlingly in the foreground, with the orchestra balanced very distantly. From 1900 comes a charming song, TUS OJILLOS NEGROS, in a 1926 recording by Elvira de Hidalgo.
The crumbly and occasionally distorted sound of Leopoldo Querol's CUATRO PIEZAS ESPANOLAS (1936) and the 1923 FANTASIA BAETICA at the hands of Mark Hambourg yield little pleasure to me. However the SIETE CANCIONES POPULARES ESPANOLAS with the composer at the piano and Maria Barrientos (soprano) are a very special experience: bright-eyed and eager, it is as if the two artists were inspiring each other. This was recorded in 1928, and the two artists returned two years later to record Soneto a Cordoba and Cancion del Fuego Fatuo. There is a raw purity in Barrientos's voice which I find very appealing by contrast with the more operatic voice of Conchita Supervia, whose set of the seven songs appears on disc 2. The HARPSICHORD CONCERTO (rec 1930) is a wispy piece which can appear quite episodic. The composer is at the keyboard, partnered by French celebrity soloists including Marcel Moyse (flute). The concerto comes over as a very intense piece, particularly during the ominous hammer-blows of the central Lento. Neo-classicism with ripeness and exoticism; not at all the desiccated product it was to become in some compositional hands. PSYCHE (rec 1942), ravishingly sung by soprano Leyla Ben Sedyra, is an extension of the exotic world of the concerto. Ninon Vallin's brief account of Danza del Juego del Amor from the end of El Amor Brujo makes you wish the recording (1927) did not stop just before the bell-haunted climactic delirium. Disc 2 offers two major orchestral works plus the Seven Spanish Songs (Supervia, rec 1928-30). Interestingly, the complete Supervia set (marred for me by her wide vibrato) is interrupted by other singers' performances of one of the set: Jota. The other singers are Juan Garcia (rather unfeeling and plummy-voiced), Conchita Badia (tender), Miguel Fleta (dark-hued) and Lucrecia Bori (nasal, commandingly intense, vividly accompanied). EL AMOR BRUJO is given a magical performance by Orquesta Betica de Camara de Sevilla directed by Ernesto Halffter (rec 1930).
The gusty-voiced soprano Conchita Velazquez lends the performance a wild elation in the celebrated Danza del Juego del Amor and Final. Pantomima (probably Falla's most inspired melody, and unaccountably recalled during Gosta Nystroem's Sinfonia del Mare) is played with great sweetness, although the boozy-toned violin solo is a blemish which the taste of the time probably saw as a strength. The early recording techniques could not accommodate the full clamour of bells which should dominate the closing pages. Although we catch the ringing of cow-bells a la Mahler, they are quite muted. NIGHTS IN THE GARDENS OF SPAIN is one of de Falla's most enchanted and enchanting works. A sinfonia concertante for piano and orchestra, it is taken by the same forces as the El Amor recording and was taken down in the same year. The pianist is Manuel Navarro. The performance is a good one with many poetic moments, but it is not more striking than many modern ones. Particular favourites of mine include Soriano, Rubinstein and (brace yourself) Alexander Iokheles. Iokheles is on an old Melodiya LP reissued on Classics for Pleasure with an equally inspired El Amor Brujo. Recording quality of the Melodiya was very raw, but what a performance! The piano sound on the 1930s 78s is papery, robbing the music of much of its essential richness.

Radio Tapes 1956-66

EL RETABLO DE MAESE PEDRO (1923, rec 1966, RTVESO/Ernesto Halffter) is quite a striking work and deserves to be much better known. The 30 minute opera is a counterpart of Holst's Wandering Scholar. The work blends the Moorish exotic (Muezzin calls rather like those in Delius's pagan Requiem) and a Pulcinella-like neo-classicism. The singing and performance are vivid, and here the recording is very clear and bright. THE THREE CORNERED HAT (Tricorn) is de Falla's world-famous hit: his Enigma, his Planets.
The model, and some of the musical language, is Petrushka, but this element is completely overwhelmed by the devastatingly original and coruscating Hispanic brilliance, languor and romance. This performance of suites 1 and 2 (21'20") is by the Orquesta Nacional de Espana conducted by Ataulfo Argenta. While compromised by one of the most distant recordings I have ever strained to hear, the performance is astounding. Argenta captures the subtle changes of pulse, the convulsive power of the wild march, the delicacy and the unbridled Iberian romance of this music. This must have been an extraordinary event, with Argenta delivering the music in the style of Mravinsky - incandescent. A pity about the distanced sound - poor microphone placing presumably - however if you turn up the volume you will quickly adjust. It is also a pity that we get only 21'20" of music and that there is distortion on the massive drums at the end of the suite. As it is, this is an event (a miracle) we are privileged to listen in to down the years. Just imagine if Argenta had tackled the whole ballet at this concert. The explosion of applause at the end of the suite comes as no surprise. The HOMENAJES (15'30") comprises four tribute movements in the form of a suite. The tributes are to the musicians Enrique Arbos, Debussy, Paul Dukas and Felipe Pedrell. The suite was assembled and orchestrated from compositions written between 1920 and 1939. The composer's last years were occupied by the massive cantata Atlantida (recorded by EMI in the late 70s). Homenajes seems to have been de Falla's only musical respite from the obsessive work on Atlantida, which seems to have become as much of a burden as the eighth symphony became to Sibelius. In any event the music of Homenajes is subdued, although the first movement has an Elgarian grandeur. This tape comes from the same 1956 concert as the Three Cornered Hat suites. 
The concert marked the 10th anniversary of the composer's death and was clearly a major event, although the fire does not seem to have been in the bones so far as Homenajes is concerned. Radio Tapes 1953-76 Segovia and guitar aficionados will be delighted to have a radio performance of the Master playing de Falla's only work written for guitar (LE TOMBEAU DE CLAUDE DEBUSSY), later orchestrated and included in the Homenajes suite. The radio tape is of muffled quality but Segovia's delicate and rough artistry shines through. LA VIDA BREVE (1905, rec 1972) is an opera. Here we have a suite for orchestra, but with songs from the opera vibrantly sung by Pilar Lorengar. Lorengar's reputation was international and she sang in celebrity operatic productions and in lieder recitals worldwide. Her vibrato is prominent but the raw passion in her voice is patent. The suite is satisfying musically. The choir of RTVE produce a black sound comparable with Finnish choirs in Sibelius' Kullervo and Klami's Psalmus. Lastly comes FUEGO FATUO (circa 1916) - a suite of charming though bland dance movements for full orchestra. The suite has no connection with El Amor Brujo despite sharing the title of one of its songs. In fact this is a collection of Chopin piano pieces arranged by de Falla and completed by the conductor Antoni Ros-Marba. The suite is rather conventional. Though charming, this is not another El Amor Brujo or Tricorn. One for the de Falla completist! There is an excellent trilingual (Spanish-English-French) booklet running to 50 stylish pages. The English section covers 11 pages. The booklet is made all the more appealing by 22 pictures, many from the Archivo Manuel de Falla (Granada) and the Fundacion Manuel de Falla (Madrid). There are pictures of the composer and of the many artists recorded here. Congratulations to Andalucia. I hope someone can find out what else is on offer in this series. 
This valuable anthology, featuring performers almost all of Spanish birth, belongs in the collection of all de Falla enthusiasts, students of Iberian music and libraries/sound archives. There are some remarkable musical experiences here (the Argenta Tricorn for one) but anyone buying must be prepared for sound that is often elderly and at best only respectably good. Playing these recordings reminds us of the privilege we have of hearing the composer as performer, and of performances which must have shaped and influenced the composer's attitude to later works. This is an important Dokument and a source which incidentally yields pungent musical pleasures and plunges us deep into de Falla's world. Track after track offers riches and contrast. De Falla is terribly underestimated and under-known. It may ring false for me (as a non-Hispanic) to say this, but when listening to de Falla's music you feel in touch with the country of his and of other times. This is not, thank heavens, a composer concerned with presenting folk melodies (songs and dances) in their original shape and form. De Falla melds the community music of Spain (particularly Andalucia) into the universal language of Stravinsky, and what emerges is new and fresh and in turn has, across the world, fixed a new image of what Spain and Spanish music is all about. If it has also helped consolidate a hackneyed picture (see Chabrier, Ravel etc.), the blazing vigour and honesty of this music shows that the fault lies not with de Falla. This set educates and enriches but it is not for the hi-fi enthusiast. For those with open minds and a musical inner ear, the artistic high fidelity of most of the performances compensates for the occasional surface noise and audio-technical shortcomings. Recommended.
In early autumn, Hawk Ridge in Duluth is a birding hot spot. Hawk Ridge runs along the crest of the hill at the east end of Duluth, sitting about 800 feet above Lake Superior. It's one of the best places in the world to observe fall migrations. Thousands of hawks, eagles and other birds of prey pass over this ridge while leaving Canada for wintering areas as close as southern Minnesota or as distant as South America. This route allows them to avoid crossing the vast expanse of Lake Superior while taking advantage of the updrafts that occur along the rocky Superior shore. You could say the North Shore acts as a funnel, and the ridge above Duluth is the spout. Hawk watching begins in mid-August and continues into December, with the biggest flights usually occurring in September. The best time to observe the birds seems to be from about 10 a.m. to 2 p.m., but there is almost no migration on days with an easterly wind or precipitation. Clear skies and a northwest wind provide the best conditions. On one mid-September day, only 50 hawks were seen. The next day, after skies had cleared, 19,225 birds were counted. Fourteen species, including broad-winged hawks, turkey vultures, bald eagles, ospreys and red-tailed hawks, are regular migrants over Hawk Ridge. On Sept. 15, 2003, a phenomenal 102,329 hawks were tallied as they flew and glided over Hawk Ridge.
In the first chapter we looked at the relationship of law and grace within the Old Testament, a relationship that can be summed up in the word covenant. We saw that God’s love for Abraham, Israel and David was the basis of the three most important covenants. In each case the human partner had to respond, by loving God in return. They were assured that such loving obedience would lead to a yet fuller experience of divine mercy and blessing. But how can a man or a nation love God? Without some revelation of God’s will human efforts to please God may well be misdirected. It is for this reason that law occupies such a central position in the Old Testament. It shows what love for God means in daily life: how man is to worship God in a way that is acceptable to his Creator and how he should treat his neighbour. To this end it offers a short but comprehensive statement of religious and moral principles in the Ten Commandments. Were man unfallen, the Decalogue would no doubt be a sufficient guide to living. But that is not the case. Even members of the covenant nation failed to observe the commandments from time to time. For social and theological reasons it was therefore necessary to have a penal system to punish transgressors. Any society which fails to censure wrongdoers is liable to disintegrate, and in Israel’s case to lose the blessings of the covenants as well. To maintain, or restore when broken, the relationship between God and man is the purpose of the many regulations about worship in the Old Testament; to restore relations between man and man and to bear witness to the moral principles of the Decalogue is the purpose of the penal law. Where worship or morality is neglected, Israel will start to experience the covenant curses, both at a national and at an individual level. This covenant context of Israel’s law gives a special urgency to its penal law and makes its scale of values rather different from that of its neighbours. 
Surveying the types of punishment enshrined in the Pentateuch sheds light on its scale of values. Finally, the organization of society has a material influence on the way people behave. Law must be known if it is to be obeyed, and there needs to be a means of enforcing obedience on the recalcitrant. Every society has its own set of devices for this purpose; in ancient Israel judges, law-teachers, prophets, kings and other rulers all played their part in undergirding the covenant law, and they will form the final subject of our enquiry. The Nature of the Material Before studying the laws themselves it is wise to ask some preliminary questions about the nature of the collections of law found in the Old Testament. Are they all-embracing codes of law intended to cover every aspect of Israelite life? How far do they conform to the patterns of other ancient legal collections? Can we discern any general principles running through biblical law which mark it off from other systems? Various collections of law are to be found in the books of Exodus, Leviticus and Deuteronomy, and records of legal cases are to be found in many parts of the Bible. Comparison of these laws with other collections of Near Eastern law shows that Hebrew law was heavily indebted to the tradition of cuneiform law originating in Mesopotamia.1 It is now generally agreed that these extrabiblical documents are not comprehensive codes of statute law, but collections of traditional case law occasionally introducing certain innovations and reforms. It seems likely that the biblical collections of law are to be interpreted similarly. In many cases the Old Testament introduces changes into the traditional law of the Near East, but in other cases it simply assumes it (e.g. laws about oxen Ex. 21:28ff., divorce Deut. 24:1ff.). Theory of Law In Mesopotamia the king was the author of law. He was held to have been divinely endowed with gifts of justice and wisdom which enabled him to devise good law. 
Law was therefore a basically secular institution. In Israel, however, God himself was the author and giver of law, and this divine authorship of law had several consequences. First it meant that all offences were sins. They did not merely affect relationships between men but also the relationship between God and man. As we have seen, law was a central part of the covenant. Therefore, if the nation rejected the law or connived at its non-observance, curses came into play bringing divine judgment on the whole people. Secondly, because all life is related to God, and the law came from God, moral and religious obligations are all to be found in a single law book. This is true of the Pentateuch as a whole and of the smaller collections of law within it (e.g. Lev. 21-23). Mesopotamia maintained a sharp distinction between these spheres: their collections of law consist almost entirely of civil legislation. The third implication of the Old Testament view of law is that not just the king but every Israelite was responsible for its observance. He had to keep it himself and ensure that the community of which he was a member did so too. There was thus both an individual and a national responsibility to keep the law (cf. Deut. 29:18ff.). Finally, since the law came from God, it was not to be a secret understood only by lawyers, but by everyone. There was therefore an obligation on the national leaders to teach and explain it to the people. The public character of the biblical legislation is reflected in the large number of motive clauses which give reasons why certain laws should be obeyed (e.g. ‘that your days may be long in the land’). Such reminders are more in character in a sermon addressed to the nation than in a piece of literature designed only for the edification of those administering the law. Hammurabi invited all who were oppressed to come and read his laws.2 In this limited sense he was looking for a popular knowledge of the law. 
The Bible stresses to a much greater degree the importance of everyone knowing the law. It is addressed to ‘all Israel’. Moses was appointed to explain the laws given at Sinai, and the law had to be read out every seven years at a national assembly (Deut. 5:1; Ex. 20:18ff.; Deut. 31:10f.). Thus law in the Old Testament is not simply intended to guide the judges but to create a climate of opinion that knows and respects it. This fits in with the express purpose of the law: to create ‘a kingdom of priests and a holy nation’ (Ex. 19:6). The prologue and epilogue of the Laws of Hammurabi dwell on the political and economic benefits that law brings — justice, peace, prosperity and good government. But ‘the prime purpose of biblical compilations is sanctification’.3 As has been stressed before, law-giving is integral to the covenant. The law itself is the divine means of creating a holy people. Obedience to it renews the divine image in man and enables him to fulfil the imperative to ‘Be holy, for I am holy’ (Lev. 11:44f.; 19:2; 20:7, 26, etc.). The Ten Commandments The distinctive features of Old Testament law find expression in the opening words of the Decalogue: ‘And God spoke all these words saying . . .’ Here the divine authorship of the following laws is simply stated. After a brief historical prologue reminding Israel of what God had done on their behalf, there follows a series of injunctions covering both religious and social matters. God, the author of these laws, unlike earthly kings, is concerned with the whole of life. Case law or moral principle? The Ten Commandments are rightly regarded as the quintessence of Old Testament law. It has been suggested in the previous chapter that they are to be understood as part of the stipulations of the Sinai covenant. But is it possible to be any more precise? How, for instance, are they related to the other laws in the Pentateuch? 
Should they be regarded as laws in their own right, or are they rather a set of moral principles, which could be enshrined in case and statute law? These questions have been debated intensively in recent years, and it is not possible to review the problem in depth here. The answers given depend on the view taken of the development of Israel’s law, since the interpretation of the commandments depends to some extent on the historical situation in which they were formulated. In spite of many attempts to disengage the commandments from their present context and recover earlier phraseology and meanings, no consensus of opinion has emerged.4 We can, however, be more certain how the author of Exodus understood the commandments, since he must have known them in the present form and have been responsible for the literary context in which they are found. It is the context and content of the commandments as they now stand that form the basis of the following exposition, not hypothetical reconstructions of the original form of the Decalogue.5 On this basis it becomes clear that we should not regard the commandments as case or statute law. No human penalties are specified for their transgression; rather divine curses are pronounced on those who break certain of the laws, and blessings are promised to those who keep them. These characteristics are more appropriate in a treaty text than in a collection of laws.6 The Decalogue itself does not state what punishment the community will impose on those who break the commandments. It is misleading to describe the Decalogue as Israel’s criminal law, for it is not a list of offences that the state would itself prosecute, let alone for which it would always exact the death penalty.7 Ancient law does not sharply distinguish between criminal and civil offences. 
Dishonouring parents, murder, adultery and theft were all cases in which the prime responsibility for bringing the offender before the courts was left to the injured party or his family. Since, however, the death penalty could be imposed for some of these offences, they might be called crimes.8 Religious offences could more aptly be described as crimes, since the whole community had to take action to punish the sinner. That the Decalogue cannot be classed as criminal or civil law is most clearly demonstrated by the tenth commandment, for a human court could hardly convict someone of covetousness. The Ten Commandments should therefore be looked on as a statement of basic religious and ethical principles rather than as a code of law. Commandment and law. The principles of the Decalogue are illustrated and, in the laws which follow, put into a form that human judges can handle (Ex. 20:23ff.; Deut. 6-26). To revert to the treaty analogy, the commandments constitute the basic stipulations which precede the detailed stipulations in a covenant document. In the exposition that follows I shall therefore try to illustrate the meaning of the commandments by reference to the laws in the Pentateuch, though it should always be remembered that the commandment is more fundamental and wide ranging than the corresponding laws. It should be noted that the special status of the Decalogue in both Jewish and Christian tradition is not a mere fancy of later exegetes; the Old Testament itself regards the Ten Commandments as different and more important than the other laws. They alone were written by the ‘finger of God’. The narrative in Exodus clearly emphasizes the unique significance of the Decalogue in the way it prepares for it and sets it apart from the case law which comes after.9 Similarly Deuteronomy, when harking back to the law-giving at Sinai, focuses exclusive attention on the commandments, though the other laws in Exodus are clearly presupposed in Deuteronomy. 
The Ten Commandments are thus acknowledged to be the heart of the covenant law, a special revelation of God in the fullest sense of the phrase. But though every commandment expresses the will of God, and breach of any one of them is a sin calling down on the offender the wrath of God, their order is not haphazard: the most vital demands are placed first. This is confirmed by the penal law. Flagrant disregard of the first six commandments carried a mandatory death penalty. For the seventh, death was probably optional, not compulsory. Only in exceptional cases would breach of the eighth and ninth commandments involve capital punishment. And it is most unlikely that the tenth commandment was ever the subject of judicial process. The order of the commandments thus gives some insight into Israel’s hierarchy of values and this should be borne in mind in their exegesis. First Commandment. The principal concern of every vassal treaty was to secure the sole allegiance of the vassal to his suzerain. This is the thrust of the first commandment: ‘You shall have no other gods before me.’ It is not certain whether this commandment implied absolute monotheism, i.e. the existence of only one God, but it undoubtedly was a demand for practical monotheism, worship of the Lord alone. This terse command is expanded in great detail in the book of Leviticus in particular, which gives instructions about the correct rituals in worship. But perhaps the laws for the instruction of the laity found in Exodus 20-23 and Deuteronomy give a better impression of the primacy of worship. Both collections begin their detailed stipulations section with laws about the place of worship.10 They also require the offering of tithes and the attendance of all Israelite men at the three national festivals, as well as the extermination of all pagan cults and their adherents (Ex. 23:14ff.; Deut. 12; 14:22ff.; 16). Apostasy involving the worship of foreign deities was punishable by death (Num. 25; Deut. 13). 
It has also been suggested11 that the need for whole-hearted allegiance to the Lord explains the ban on eating certain foodstuffs (Lev. 11; Deut. 14; cf. I Cor. 8). The unclean animals were either worshipped or sacrificed by the Canaanites or Egyptians and therefore Israel must shun them. But this explains too few of the regulations to be convincing. More probably, the reason for the prohibitions was that the unclean animals symbolized the unclean nations, the Gentiles, with whom Israel was forbidden to mix, whereas the clean species represented the chosen people of Israel.12 Thus, every time an Israelite ate meat he was reminded of God’s grace in choosing Israel to be his people, and that as one of God’s elect he had a duty to pursue holiness. Second Commandment. The second commandment bans all visual representation of God for use in worship. Images of gods in human and animal form are well known in Egyptian and Canaanite religion. Deuteronomy 4:15ff. justifies this prohibition by appeal to Israel’s experience at Sinai, where they heard God but did not see him (cf. Rom. 1:18ff.). The wording of this commandment shows that it is not a ban on art as such. Characteristically of biblical legislation the decisive condition or prohibition comes towards the end of the sentence, in this case: ‘You shall not bow down to them or worship them’ (verse 5). Had the commandment meant to ban all artwork and sculpture it should have ended with verse 4. This interpretation of the law is confirmed by the following chapters (25f.). The tabernacle itself was richly decorated with the likeness of many things in heaven and earth, and the ark, the earthly throne of God, was surmounted by two winged cherubim. But by making the golden calf and inviting the people to worship it, Aaron broke this commandment and threatened the whole nation with extinction (Ex. 32). 
The commandment is followed by a motive clause explaining why it should be observed: ‘For I the Lord your God am a jealous God, visiting the iniquity of the fathers upon the children to the third and the fourth generation of those who hate me, but showing steadfast love to thousands of those who love me and keep my commandments’ (Ex. 20:5f.). Motive clauses like this are a characteristic feature of Israelite legislation, showing that the commandment was supposed to be public law which had to be taught to the people. In this clause there are several reflections of basic covenant ideology, in particular the exclusive nature of the relationship, ‘I the Lord . . . am a jealous God’, and the blessing on those who keep the law and the curse on those who do not. In secular treaties ‘bowing down’, ‘serving’ and ‘loving’ are the appropriate actions of a vassal towards his lord. It is also worth noting that loving God is equated with keeping his commandments (cf. Jn. 14:15). Finally, the long-range effect of obedience and disobedience should be observed. Actions do not just affect the individual but also his descendants, up to the great-grandchildren in the case of transgression and as far as the thousandth generation in the case of obedience (Deut. 7:9). This disproportion is one of many illustrations in the law of how God’s mercy far exceeds his anger. Third Commandment. The third commandment forbids any misuse of the name of God, whether in frivolous speech or in such dark deeds as witchcraft and magic (Lev. 24:11f.; Mt. 5:33ff.; Acts 19:13ff.). In biblical thinking the name of God expresses the character of God himself. Again this commandment adds a motive clause reminding any transgressor that he will not escape the covenant curses, even if he escapes human judgment. Fourth Commandment. The fourth commandment forbids all work on the sabbath day.13 There are few indications of exactly how this was interpreted in early days. 
Ordinary, everyday work such as trading was forbidden. More lowly tasks such as collecting manna or sticks were also prohibited, on pain of death (Ex. 16:22ff.; Num. 15:32ff.). Positively, it was a day set aside for worship (Is. 1:13). Like the tithe, the setting apart of one day in the week is a token of the consecration of the whole. The reason given in Exodus for the observance of the sabbath is imitation of God, who rested from the work of creation on the seventh day. In Deuteronomy it is remembrance for God’s deliverance from Egypt; under the new covenant Sunday commemorates the resurrection. It is probable that the rules about the sabbath were not so strict in early days as in post-exilic times. For instance, the commandment does not forbid the wife to work, journeys were permitted (2 Kg. 4:23), and the temple guard was changed on the sabbath (2 Kg. 11:5-8). Therefore Jesus’ more flexible attitude to the sabbath over against the strictness of his Pharisaic opponents may really reflect the original practice in early Israel (Mk. 2:23ff.). Looking to the future, Hebrews 4 views the sabbath as a type of the rest of the saints in heaven. Fifth Commandment. ‘Honour your father and mother.’ To honour (kibbed) is most often used in the Old Testament with respect to God or his representatives such as prophets and kings.14 It may be that parents are envisaged as representing God to their children, and this would explain the very severe penalties prescribed for those who dishonour their parents (Ex. 21:15, 17; Deut. 21:18-21). But the motive clause, ‘that your days may be long in the land which the Lord your God gives you’, draws attention to the blessings of obedience. ‘The “promise” attached to this first manward command shows the family as the miniature of the nation. If the one is sound, it implies, so will be the other. To put it more accurately, unless God’s order is respected at the first level, his gifts will be forfeited at all others.’15 Sixth Commandment. 
The sixth commandment forbids murder and other actions that may result in loss of life (Deut. 22:8). It does not rule out the judicial execution of murderers and other heinous criminals, or killing in war. The death penalty is insisted on for murder. Genesis 9:5f. sets out the theological principle involved: ‘For your lifeblood I will surely require a reckoning; of every beast I will require it and of man; of every man’s brother I will require the life of man. Whoever sheds the blood of man, by man shall his blood be shed; for God made man in his own image.’ The laws in the Pentateuch show how the principle was applied in practice. A man or an animal which causes the death of another man must be put to death (Ex. 21:12, 28ff.). Where a man is responsible for someone’s death, deliberate and accidental homicide are carefully distinguished (Ex. 21:13f.; Num. 35:9ff.). It was common in other legal systems to allow composition in the case of homicide; instead of being executed the homicide could pay appropriate damages to the dead man’s family. Numbers 35 expressly excludes this arrangement in Israel. In the case of murder the murderer must be executed; if the killing was not premeditated the homicide must live in the city of refuge until the death of the high priest.16 The Pentateuch is not only concerned with the punishment of homicides, but with the prevention of accidental death. Owners of dangerous animals are warned to keep them in (Ex. 21:29, 36), and house-builders are told to put a parapet around the roof to stop people falling off (Deut. 22:8). One law (Ex. 21:22-25) specifically deals with the death of a foetus as the result of a brawl. Close parallels to this rule are known in cuneiform law (LH 209-14; HL 17; MAL A 21, 50-2) but the interpretation of the biblical law is highly complex.17 Three things are clear, however, in the present law. First, the miscarriage and the injury to the woman were caused accidentally, a by-product of a quarrel between two men. 
Secondly, this suggests that the talion formula ‘life for life . . . stripe for stripe’ which refers to the woman’s injury should be regarded as a formula insisting on a punishment proportionate to the injury, not necessarily literal retribution (cf. verses 26-27). ‘Life for life’ only applies in cases of premeditated killing. Thirdly, the loss of the foetus is compensated for by the payment of damages. Biblical law therefore does not deal with the case of deliberately induced abortion. On the basis of certain passages in Job and the Psalms18 it seems likely that the child in the womb was regarded as a human being, under the protection of its Creator (Job 10:8-12; Pss. 51:5f.; 139:13-16; cf. Lk. 1:15, 44), and that Old Testament writers would have shared the abhorrence of the Assyrians at artificially induced abortion.19 The Old Testament discourages wanton destruction and slaughter in war as well as in peace (Deut. 20:10ff.). It regards death in war, however, as one of God’s judgments (Deut. 28:25ff.), and therefore inevitable as long as men go on sinning. In the same way that ‘all Israel’ is summoned to execute judgment on criminals, so nations may be called to punish other nations (cf. Ex. 23:23ff.; Is. 10:5ff.), though when they undertake this task, they are warned not to exceed their brief. Seventh Commandment. Immediately following the prohibition of murder comes the prohibition of adultery (na’ap), i.e. sexual relations between a married woman and a man who is not her husband.20 A comparison of this commandment with various laws in the rest of the Pentateuch which deal with sexual offences is very revealing, in showing how the commandment expresses a bare moral principle, whereas the detailed laws apply the principle in various situations. If the sixth commandment seeks to uphold the sanctity of human life, the seventh seeks to preserve the purity of marriage. 
Genesis 2:24 states the positive theological principle undergirding matrimony: ‘A man leaves his father and his mother and cleaves to his wife, and they become one flesh.’ This poetic couplet expresses rather cryptically one of the fundamentals of Old Testament marriage law, that in marriage a woman becomes, as it were, her husband’s closest relative. A man could therefore call his wife, rather misleadingly in some circumstances, his sister (Gen. 12:13, 19; 20:2; Cant. 4:9; 5:2). Other implications of this verse are not developed in the Old Testament. One group of first-century Jews held that it entailed monogamy. Our Lord added that it meant that marriage should be indissoluble, even though in practice human sinfulness (‘the hardness of men’s hearts’) often led to its breakdown (Mt. 19:5f.). Similarly the commandment forbids adultery, a sin whose very nature involves breaking the marriage bond. What happens when marriages break up is the concern of the other laws in the Pentateuch. They are concerned with the situations that arise and not with theological utopia. This Old Testament legislation can, I believe, be seen to have a similar goal to the New Testament teaching on marriage, namely the creation and preservation of stable marriages. As usual the Decalogue prescribes no human penalty for breach of this commandment. Other passages make it clear that the standard penalty for adultery was death. If caught, both parties, the man and the woman, were put to death (Lev. 20:10; Deut. 22:22). The severity of the sentence is undoubtedly very shocking to modern readers. Certain observations may perhaps mitigate our sense of shock. First, the death penalty for adultery is not unique to the Old Testament; it is common to most of the legal systems of the ancient Near East. 
Secondly, the death penalty was not mandatory; if a husband wished to spare his wife, he had to spare the other man as well.21 Thirdly, where the circumstances suggest that the woman was coerced, she would be pardoned and only the man would be put to death (Deut. 22:25-27). Nevertheless, in spite of these considerations, the penalties for adultery are still striking, and reflect a much harsher condemnation of those who deliberately break up marriage, home and family than is made in modern Western society. In contrast the penalties imposed for other sexual misconduct are lighter. After betrothal, effected by the payment of a large present to the bride’s father (often equal to several years’ wages), a girl was legally as good as married, and intercourse with her by a third party was regarded as adultery and therefore liable to the death penalty (Deut. 22:23-27). But when an unbetrothed girl was caught lying with a man, both escaped more lightly. The man was made to marry the girl and give the appropriate betrothal gift to the girl’s father, which by his action he had, as it were, by-passed. In addition his right to divorce was forfeit. If the girl’s father did not want her to marry the man concerned, he could still demand the betrothal gift from the man, but that was all (Ex. 22:16f.; Deut. 22:28f.). It can be seen that running through all these laws is a concern to promote stable marriages. The financial payments associated with marriage and divorce were also very effective in stabilizing marriages.22 Marriages did break up in Old Testament times, however, and remarriage was permitted. Nowhere does the Old Testament give any instructions about divorce itself. Contemporary custom is simply presupposed. What it does do, in fact, is regulate remarriage after divorce or widowhood. This is clearest in Deuteronomy 24:1-4, which allows a divorced woman to contract a second marriage, but if her second husband dies or divorces her, she may not return to her first husband.
The thinking behind this law has puzzled commentators. A common view is that the law regards the second marriage as adulterous23 and is concerned to discourage such unions. There is no hint of this motive in the law, however, and as there were other powerful legal and financial deterrents to divorce and adultery in the Old Testament, this view seems inadequate. More plausible is Yaron’s suggestion24 that the law is designed to protect the second marriage from interference by the first husband. Perhaps jealous of her second husband, her first partner might try to woo her back without the safeguard of this law. But this idea founders on the fact that the rule also applies after the second husband’s death (verse 3). A more probable explanation25 of this law emerges from a comparison with the incest rules in Leviticus 18. These forbid sexual intercourse between brother and sister, grandfather and grand-daughter and so on. They also prohibit intercourse between brother-in-law and sister-in-law, or father-in-law and daughter-in-law. The logic of these prohibitions is as follows: in marriage a woman becomes her husband’s closest relative, his sister as it were, and therefore a sister to his brothers, a daughter to his father and so on. Therefore if it is wrong for a man to marry his sister or daughter, it is equally wrong for him to marry his sister-in-law or daughter-in-law. Now these prohibitions on intermarriage with one’s daughter-in-law only become relevant after the end of her first marriage in divorce or the death of her first husband. Up to that point such a union would be adulterous. The same logic applies in Deuteronomy 24 to remarrying one’s former wife. If one cannot marry one’s sister, one cannot marry one who has become sister through a previous marriage, i.e. one’s former wife. 
Thus while the Old Testament does not affirm the practical indissolubility of marriage, it does maintain its theoretical indissolubility, in the sense that the kinships created between the spouses and their families are not terminated by death or divorce. In certain respects, then, Old Testament marriage law is less strict than that of the New Testament. Infidelity by the husband does not count as adultery in the Old Testament. It does in the New Testament. ‘Every one who divorces his wife and marries another commits adultery’ (Lk. 16:18; parallels Mt. 19:3-12; Mk. 10:2-12). These Gospel sayings also explicitly rule out remarriage after divorce and, by implication, polygamy as well, equating them with adultery. Thus at three points — polygamy, remarriage,26 and a husband’s adultery — the Old Testament laws plainly conflict with the New Testament ideal of life-long monogamous marriage. But in practice the differences were quite slight. The great expense of marriage and divorce meant that few could afford a second marriage, while the legal restrictions placed on the choice of marriage partners for divorcees and widows bore witness, even in Old Testament times, to the permanency of the relationship established by marriage.

Eighth Commandment. Theft is prohibited by the eighth commandment. Theft in this context covers all attempts to deprive a man of his property and livelihood, whether by brute force or stealth and cunning. In the Old Testament, land and property are seen as the gift of God and essential for a man’s livelihood (Deut. 11:9ff.; I Kg. 21:3). But again the commandment only represents the negative side of the law. At various other points a positive concern to support the poor and weak members of society comes to expression (e.g. Deut. 24:10-22). For instance, every third year tithes are to be given to the Levite, the immigrant, the orphan and the widow (Deut. 14:28f.).
Every harvest-time corn is to be left ungathered round the edges of the fields for the poor to glean (Lev. 19:9f.). Most far-reaching of all are the laws of the sabbatical and jubilee years, under which a man who had become so poor that he had been forced to sell his land to someone else, or himself into slavery, recovered his property and his freedom. In this way the tendency for wealth to accumulate in fewer and fewer hands would have been checked (Ex. 21:1ff.; Lev. 25; Deut. 15:1ff.).27

Ninth Commandment. After dealing with duties toward God and actions against neighbours, the last two commandments deal with sins of speech and thought. The ninth commandment forbids false witness, primarily in a court of law, but it covers all other unfounded statements as well (Ex. 23:1ff., 7; Deut. 17:6; 19:15ff.; 22:13ff.). It should be noted that the command is in the negative; the Old Testament does not demand that the full truth has to be disclosed on every occasion (I Sa. 16:2).

Tenth Commandment. The tenth commandment forbids all desiring of another’s property.28 Though covetousness cannot have been punished by the courts, feelings are not outside the realm of biblical law in the broader sense. On the one hand the Israelite was commanded to love God; on the other not to hate his neighbour in his heart or covet his goods (e.g. Deut. 6:5; Lev. 19:17). This inward aspect of biblical morality is even more prominent in the book of Proverbs, which has much to say about motives, feelings and speech. The whole of a man’s life must be lived out in the presence of God, who weighs the heart. It may therefore be concluded that the Old Testament contains as comprehensive and demanding an ethic as is to be found anywhere in the ancient world.
Why Do We Need Science?

Humans are poor data-gathering machines. We have numerous biases, cognitive flaws, and psychological errors that prevent our unguided minds from grasping reality in any accurate way. To put it more specifically: [There are] two countervailing human tendencies of omission and commission: to neglect the logical and statistical strategies of science on the one hand, and to over-utilize intuitive or simplistic strategies on the other.

Thus, in order to deal with the deluge of information that our brains take in every second of every day, we have to structure it in a way that can accurately interpret, explain, and predict reality. Science can do this where other forms of thinking fail. Gut feelings and common sense are not enough; they may get us somewhere, but not always to the truth. As we proceed, I will outline how one thinks scientifically (and unscientifically) in order to show you how modern science obtains knowledge about the universe. More importantly, as we continue, we should know why thinking in a scientific way is the best vehicle for obtaining knowledge. To that end, let us outline the unscientific ways of thinking and why we cannot rely on them.

Unscientific Sources of Knowledge

By intuition we mean vague feelings or gut reactions about a question or phenomenon. One problem with intuition as a knowledge source is that our intuitions are often wrong. For example, we may have a gut reaction that giving children sugar will make them more hyperactive. However, if we scientifically interpret the data, we find that this is not the case. A second problem arises because intuitions are feeling-based. Mood and a host of other psychological and physiological factors influence intuition. A judgment of information may be based solely on how you rolled out of bed this morning, and not on what the data say. An “intuitive” judgment about the driving skills of another motorist will differ greatly if that motorist just cut you off.
Finally, ask five different people to make predictions based on intuition and we are likely to receive five different answers. We simply cannot place enough confidence in intuition to accept it as a source of information. The second unscientific source of information, tenacity, also known as tradition, includes unquestioned belief in superstitions, truisms, and myths. These forms of knowledge are often passed from generation to generation through cultural mechanisms such as family, media, and religious institutions. As an information source, tradition is used all the time. Like intuition, however, many tenaciously held beliefs are inaccurate. For example, at one time everyone held tenaciously to the beliefs that the Earth was flat; that the Sun revolved around the Earth; that applying leeches to the ill was good medical practice; and that Salem, Massachusetts was plagued with witches. Scientific breakthroughs in medicine, physics, and genetics, to name a few disciplines, continually expose the erroneous nature of previously held beliefs. Moreover, across and within cultures there is considerable variance in perspectives: what seems obvious to one social group is often rejected as ludicrous by another. Tenacity, like intuition, is an unacceptable way to answer scientific questions. Common sense, a third unscientific way of knowing, consists of generating what appear to be obvious answers to scientific questions. Appeals to this sort of knowledge are accompanied by prefacing or supporting remarks such as “it’s obvious that…,” “everybody knows…,” “any halfway intelligent person can see that…,” or “it’s just common sense.” Common sense, however, is often wrong, and people often disagree about what even constitutes the common-sense thing to do in a particular situation. Furthermore, scientific problems abound for which common sense provides no insight. This is especially true for complex problems. For example, what is the commonsense way to search for the Higgs boson?
What is the commonsense way to treat pancreatic cancer? There are no commonsense answers to such questions. Personal experience is often used as a knowledge source. We possess a wealth of personal experience and, while experience is an extremely valuable resource, there are three reasons to be cautious about deriving knowledge claims about science from experience. First, personal experience is both subjective and uncontrolled, leaving us susceptible to misperception and misrepresentation of events. We are limited in the amount of information we can process because the quantity of stimuli in any given situation is virtually unlimited. Because of this limitation, we attend to events and stimuli selectively: we simply do not and cannot pay attention to every sound; we don’t notice everything there is to see; many things go undetected. Rather, we attend to some stimuli and block out others, and some of what we do sense, we sense incorrectly, yielding an experience that is necessarily incomplete and inaccurate. Second, we selectively remember characteristics of experience. Anyone who has ever studied for a test realizes that some of the subject matter, although we read it and perhaps even heard it during class, was somehow lost on test day. Thus, our memories of events are usually incomplete and misrepresent events. Third, our selectivity is driven by strong preconceptions, meaning that we attend to, perceive, accept, and recall data that confirm our beliefs and attitudes, whereas we tend to ignore, distort, discount, and forget data which disconfirm our beliefs and attitudes. This is the confirmation bias at work. A fifth unscientific source of knowledge, authority, consists of appealing to experts for answers to our questions. We are surrounded by experts and authorities: professors, physicians, attorneys, journalists, economic advisers, stock brokers, mechanics, news anchors, just to name a few.
Although experts frequently provide valuable service, there is often disagreement among them, and, of course, they can be wrong. The more important issue is how the experts gained their knowledge in the first place. If their knowledge was acquired through intuition, tenacity, or experience, it is subject to many of the caveats already mentioned. Assuming that an expert’s knowledge is the product of scientific inquiry, it is the scientific inquiry, not the expert, that is the source of the knowledge. Well-informed experts can disseminate knowledge, but they are not acceptable as oracles of it. A final unscientific way of deriving answers is through rationalism, or logic, usually in the form of deduction. Accordingly, knowledge takes the form of conclusions deduced from premises. For example, suppose that (1) watching a scary movie usually makes a person fearful or anxious, and that (2) being fearful or anxious usually causes the person’s heart rate to increase. Applying logic, we would conclude that watching a scary movie raises a person’s heart rate. Two major problems are associated with rationalism as a source of knowledge. First, we must consider how the truth of the premises was determined. Logic alone cannot produce premises, and without valid premises, sound conclusions cannot be reached. Second, if we apply logic in this form to premises that are not absolutely true, erroneous conclusions will be reached even when strictly following the rules of deduction. Reconsider our example. Suppose that when we said “usually” we meant around 70 percent of the time. Thus, the probability of a scary movie making you fearful is 70 percent, and the probability of being fearful increasing your heart rate is 70 percent. What, then, is the probability that watching a scary movie increases your heart rate? The correct answer is calculated by multiplying the two probabilities together (.70 × .70), which equals only 49 percent.
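The chained-probability arithmetic above is easy to check directly. This is a minimal sketch in Python; the 0.70 values are the assumed figures from the example, and the variable names are ours:

```python
# Assumed figures from the scary-movie example:
# P(fearful | scary movie) = 0.70
# P(heart rate increases | fearful) = 0.70
p_fearful = 0.70
p_heart_rate_up = 0.70

# Probability that both links in the chain hold, i.e. that a scary
# movie raises your heart rate via fear (rounded to avoid floating-
# point noise):
p_chain = round(p_fearful * p_heart_rate_up, 2)
print(p_chain)  # 0.49
```

Chaining "usually true" premises shrinks the overall probability quickly: with three 70-percent links, the conclusion would hold only about 34 percent of the time (0.70 × 0.70 × 0.70 ≈ 0.34).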
So, more often than not, watching a scary movie would not increase your heart rate, given the probabilities we assumed. Logic itself does not provide these crucial probability estimates; nor does it produce the premises. As humans are very bad at estimating probabilities, relying on pure logic will not get us as far as science can. While logic is an essential tool used by scientists, it alone is insufficient as a knowledge source because its use requires existing knowledge in the form of premises. If the premises are incorrect, so are the conclusions. As we can see from this brief overview of the unscientific sources of knowledge, we need a more precise and objective approach to generating knowledge. This leads us to the scientific perspective of knowledge acquisition. Regardless of the field of study, those committed to a scientific approach to generating answers to questions, whether theoretical or practical in nature, can almost always be described in terms of a five-step process known as the scientific method. We will quickly review the steps with special focus on how they differ from unscientific ways of gaining knowledge. The steps are:

- Observe a phenomenon that needs to be explained
- Construct provisional explanations or pose hypotheses
- Design an adequate test of the hypotheses
- Execute the test
- Accept, reject, or modify our hypotheses based on the outcome of our test

The first step describes a basic characteristic of scientific thinking, curiosity. We notice that an object let go on Earth always falls straight down. We see that some planets are made almost entirely of gas. We wonder why some animals take care of their young and others do not. When we focus on such observations and feel compelled to explain them, we have engaged in the first step of the scientific method. It is at the second step that we begin to differentiate scientific from unscientific thinking.
Not having found an explanation for a phenomenon, a scientist looks for clues in existing research. Under a scientific framework, input from intuition, tradition, experience, common sense, experts, and logic might be incorporated during the construction of a preliminary hypothesis, but we do not at this point accept the validity of those explanations. These unscientific explanations may be correct, but we cannot be sure until we test them. Scientific thinking builds upon the work of previous scientists, thus maintaining an objectivity separate from unscientific forms of knowing.

DESIGNING AND EXECUTING THE TEST

Another feature of the scientific method is the testing of hypotheses with care taken to control for all of the variables that might confound the research. Scientists must rule out all competing explanations if they are to show that their explanation is the correct one. Unscientific thinking leaves many possible explanations up for debate, and rarely settles on the truth (or tests for the explanation’s validity). With all of the possible confounding variables controlled for, the scientist carries out the experiment. If the experiment was designed correctly and carried out in the right way, the scientist should obtain objective data about the phenomenon in question. Just how a scientist carries out a “correct” experiment is a much larger subject, and will not be discussed here. Suffice it to say that all scientific research depends on scientists following common experimental rules and procedures to make sure that their data can be replicated by others, are falsifiable, and reflect reality. The final stage of the scientific process calls for the rejection, acceptance, or modification of the explanation based on an analysis of the data. During this process the scientist takes many things into consideration: statistical significance, experimental error, false positives and negatives, and so on.
The power of science resides within this final stage. The ability of science to progressively accumulate knowledge that has been checked, tested, repeated, and verified separates it from all other ways of knowing. Science knows when it is wrong and when it has made a mistake, and internal mechanisms such as peer review then move it forward. This is what separates science from pseudoscience. Pseudoscience, like homeopathy for example, does not incorporate new evidence and indeed proceeds without it. Scientific evidence that clearly rejects the idea that homeopathy could ever work is dismissed and forgotten by the proponents of pseudoscience. This sort of thinking, without the checks and balances of science, then becomes an unscientific form of knowledge, and is not reliable.

Science as a Way of Thinking

Science, as a human enterprise, is the most successful tool ever devised for explaining our universe. It has passed the tests that other forms of thinking have not. This is why science proceeds the way that it does, and why it is so powerful. Adapted from Michael J. Beatty’s essay “Thinking Quantitatively,” published in the book An Integrated Approach to Communication Theory and Research.
Corporate Voting in HK Elections

In the Government's proposals for electoral reform, it did absolutely nothing to abolish the small-circle corporate voting system used in many of the so-called Functional Constituencies (FCs) which comprise half of Hong Kong's current 60-member Legislative Council. The same corporate electorates also elect many of the members of the Election Committee (EC) which chooses the Chief Executive of Hong Kong. Indeed, under the proposals, 100 of the 800 new seats on the expanded EC would go to the industrial, commercial and financial sectors which are dominated by corporate voting. Those FCs which allow corporate voting generally have very small 3-digit electorates, the elections are often uncontested, and the FC legislators play a crucial role in blocking the democratically elected members' motions in the Legislative Council. That is because, under Annex II of the Basic Law, in order to pass a member's motion, such as a call for a competition law, or an amendment to a Government-tabled bill, a majority is required of "both houses" - a majority in the 30 FCs, and a majority in the 30 geographically elected constituencies. So it only takes 15 votes in the FCs to block a motion. Even the proposal to add 5 new FCs to be elected by District Councillors does little to change this block. Under any realistic voting scenario, at least 1 and probably 2 of the 5 new FC legislators would be pro-Government members, and the Government would only need 18 out of 35 FCs on its side. We think it's likely that the Government will come up with some token concessions on the electorate for the proposed District Council seats to try to head off the pro-democracy march on Sunday, but the retention of corporate voting in the FCs is reason enough to march anyway.
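The split-voting arithmetic above can be sketched as a small calculation. This is our own illustrative helper, not any official formula; the seat counts are those discussed in the article:

```python
def motion_passes(fc_yes, fc_total, gc_yes, gc_total):
    # Annex II split-vote rule for members' motions: a motion needs
    # a majority in BOTH the functional constituencies (FCs) and
    # the geographically elected constituencies (GCs).
    return fc_yes > fc_total / 2 and gc_yes > gc_total / 2

# With 30 FC seats, 15 FC members voting against (or abstaining) are
# enough to block a motion, even with unanimous GC support:
print(motion_passes(fc_yes=15, fc_total=30, gc_yes=30, gc_total=30))  # False

# With 5 new FCs (35 seats), a motion needs 18 FC yes-votes, so 18
# FC members on the government's side still suffice to block it:
print(motion_passes(fc_yes=17, fc_total=35, gc_yes=30, gc_total=30))  # False
```

The point of the sketch: the threshold for blocking is not a majority of the whole 60- or 70-seat council, but merely half of the FC bloc.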
Those vested interests in Hong Kong who are calling for a bicameral (or two-house) system as a purported form of universal suffrage are in reality calling for maintenance of the existing system in which business interests carry a veto in the "upper house". Going down this road would take Hong Kong no further towards a true democratically elected legislature accountable to the people. Since the National People's Congress Standing Committee (NPCSC) published its decision of 26-Apr-04, the HK Government has taken to using the phrase "balanced participation" - a euphemism for keeping the business-dominated functional constituencies (whatever they are subsequently called), and ignoring the fact that prosperous and stable societies are positively correlated with universal suffrage. What could be more "balanced" than one vote per person? The NPCSC decision said: "Any change...shall conform to principles such as being compatible with the social, economic, political development of Hong Kong, being conducive to the balanced participation of all sectors and groups..." (emphasis added) What this "balanced participation" is really about is the fact that the mainland and HK Governments have more influence over business and special interests than they do over the population at large. After all, the profitability of businesses often depends on the terms of government licenses, regulations, permits, land leases, laws, taxes and subsidies. In return, through the FC system, Government can count on the support of business interests in the Legislative Councils and in the CE elections. 
To be sure, in almost any democracy there are strong, well-financed corporate and special-interest lobbies, and depending on the quality of campaign finance laws, they can be very influential on government policy. The difference is that they are nothing more than lobbyists without a vote of their own, and ultimately those democratically elected governments have to make policies that as a whole are acceptable to the public who elect them by universal suffrage, or they won't win re-election. By contrast, the HK Government's mandate, and its support in the Legislative Council, is dependent on just a tiny fraction of the population who control the corporate and special-interest votes. Within the 30 FCs, some "professional" FCs, such as the legal, medical and accounting sectors, are elected by thousands of individuals. It is no coincidence that these sectors, with large electorates, tend to produce pro-democracy legislators. So the Government and the business lobby rely on the FC legislators who are elected through corporate voting to counter this weight.

The NPCSC interpretation

The controversial 26-Apr-04 decision of the NPCSC and the earlier procedural decision of 6-Apr-04 in essence re-wrote the Basic Law, which says, in relation to any changes after 2007 to the method of electing the Legislative Council and to procedures for passing bills and motions, that changes only have to be reported to the NPCSC "for the record".
So here is a quick linguistic guide to the Basic Law:

"if there is a need" means: the Chief Executive must tell us there's a need, then we'll decide whether he is right, and set conditions on reform.
"for the record" means: only if it meets our conditions.
"gradual and orderly progress" means: make the minimum amount of progress that the people will tolerate.
"balanced participation" (you won't find that in the Basic Law) means: we've got more leverage over the tycoons than we have over the people.

Perhaps what Beijing fears most is not democracy in Hong Kong, but a successful democracy in Hong Kong, because it would increase public pressure in the mainland for democratic reforms of its own and an end to one-party authoritarian rule, with all the corruption, economic mismanagement and oppression of free speech that goes with it. At the same time, however, they must recognise that if hundreds of thousands of Hong Kongers repeatedly take to the streets in a peaceful demand for the right to elect their own leaders, then the path of least resistance will be to give them what they want. Attempts by the HK Government to portray the latest electoral reform proposals as "progressive", or in the words of Henry Tang "actually more progressive than the 1995 electoral arrangements", ignore the fact that the FCs had far more representative electorates in 1995, when the Patten administration tabled legislation which allowed anyone who worked in a sector to register and vote in it. That was the reason that Beijing derailed the so-called "through train" of the 1995 Legislative Council, replacing it on the 1-Jul-97 handover day with a hand-picked council and then amending the election laws to return to the old corporate voting system from 1998 onwards.
Case study: the Transport Constituency

To demonstrate just how crooked the system is, we took a look at the Transport Constituency, where the 191 eligible electors are helpfully listed on page 4 of this document, and cross-matched it with our database, annual reports and other sources to tell you who pulls the strings on these electors, that is, their owners. We also look at the extraordinary proliferation of associations, many of which must have overlapping memberships and each of which gets one vote. Of the 191 eligible electors, only 182 actually appeared in the 2004 Provisional Register for the Legislative Council elections. However, we can't tell you who did or didn't participate, because the electoral register is not a published document. It is available for public inspection, but you will break the law if you try to use it "for a purpose other than a purpose related to an election". You can't take copies, and it is a grey area, untested in court, as to whether research on a specific election or on electoral systems in general is a "purpose related to an election". Just to look at the register, you have to sign a declaration like this one. This is an example of the secrecy that surrounds the debate on electoral reform. Apart from their legislator, the electors of the Transport constituency also elect 12 of the 800-member Election Committee.

Stacking the vote

The Government's system of allocating 1 vote to each association or company in an industry naturally incentivises the creation or registration of new associations or companies in a sector, even if it is the same people behind them. The number of registered electors in the Transport constituency grew from 137 in 1998 to 152 in 2000 and 182 in 2004, and it cannot be said that the number of people involved in the transport sector grew that much in the same period.
Whether an association or company is admitted to the list is initially determined by the Government tabling an amendment bill to the Legislative Council. There is no relationship between the number of employees, turnover, net assets or any other business statistic and the number of votes a company or association has. We found that of the 191 eligible Transport electors, 36 are taxi-related associations, 19 are minibus associations and 10 are driving instructor associations. These three lobbies alone amount to 65, or over one third, of the electorate. Bear that in mind next time you hear their legislator whinging about diesel duty being too high, when it is far lower than the duty on unleaded petrol which private motorists pay, and when LPG is exempt from duty and franchised buses are exempt from diesel duty anyway. And don't forget the $1.4bn in taxpayer grants handed out to get the taxi and minibus owners to buy LPG vehicles in the first place. Yes, in Hong Kong, we don't charge the transport trade for air pollution, we pay them to reduce it. The names of some trade associations suggest overlapping membership through their geographic coverage. While some of the apparently overlapping trade associations may exist separately for historical reasons, others may have come into being, or stayed separate, simply to claim another vote for their sector. Similarly, companies under common ownership may continue to exist separately rather than undergo a full merger, and thereby avoid losing voting rights in the constituency. Our research also identified tycoons with heavy voting interests, including 1 family with stakes in 11 electors. We also found 3 electors which are controlled by the HK Government, and several which are controlled by overseas Governments, including Dubai, Singapore and of course mainland China. It's worth reminding our readers that we only looked at one sector.
If we had extended our coverage to sectors such as the Real Estate, Hotels, Hong Kong General Chamber of Commerce, Chinese General Chamber of Commerce and others, then we would have found many of the same tycoons controlling corporate electors in those sectors too. Yes, the HK Government actually has 3 votes in this sector - which sullies the separation of the Executive and Legislative branches of our government.

Walter, Thomas and Raymond Kwok control Sun Hung Kai Properties Ltd (SHKP, 0016), which controls 33% of The Kowloon Motor Bus Holdings Ltd (KMB, 0062). Combined, they have stakes in 11 electors:

KMB | The Kowloon Motor Bus (1933) Co Ltd | 100%
KMB | Long Win Bus Co Ltd | 100%
SHKP | Route 3 (CPS) Co Ltd | 70%
SHKP | Tsing Ma Management Ltd | 66.7%
KMB | Park Island Transport Co Ltd | 65%
SHKP | China Tollways Ltd | 50%
SHKP | Hoi Kong Container Services Co Ltd | 50%
SHKP | River Trade Terminal Co Ltd | 43%
SHKP | Hong Kong School of Motoring Ltd | 30%

Cheng Yu Tung

NWS Transport Services Ltd (NWSTS) is 50% owned by NWS Holdings Ltd (NWS, 0659) and 50% by privately-held Chow Tai Fook Enterprises Ltd. Both are ultimately controlled by Cheng Yu Tung. NWSTS owns 29.98% of Kwoon Chung Bus Holdings Ltd (KCB, 0306). In the case of one "New World" elector, we were unable to determine whether it is part of the group, and this is indicated by a question mark below.

NWSTS | New World First Bus Services Ltd | 100%
NWSTS | New World First Ferry Services Ltd | 100%
NWSTS | New World First Ferry Services (Macau) Ltd | 100%
? | New World Parking Management Ltd | ?
NWSH | Tate's Cairn Tunnel Co Ltd | 29.5%
KCB | New Lantao Bus Co, (1973) Ltd | 99.99%

Li Ka Shing

Mr Li controls Cheung Kong (Holdings) Ltd (0001), which controls Hutchison Whampoa Ltd (HWL, 0013).
|HWL||Mid-Stream Holdings (HK) Ltd||100%|
|HWL||Hongkong International Terminals Ltd (HIT)||87%|
|HWL||The Hong Kong Salvage & Towage Co Ltd||50%|
|HWL||Hong Kong United Dockyards Ltd||50%|
|HIT||COSCO-HIT Terminals (Hong Kong) Ltd||50%|
|HWL||River Trade Terminal Co Ltd||43%|
|HWL||Hong Kong Air Cargo Terminals Ltd||12.5%|

Peter Woo Kwong Ching
Mr Woo's family trusts control Wheelock and Co Ltd (0020), which controls The Wharf (Holdings) Ltd (Wharf, 0004).
|Wharf||The "Star" Ferry Co., Ltd.||100%|
|Wharf||Hong Kong Tramways, Ltd||100%|
|Wharf||Modern Terminals Ltd||55%|
|Wharf||Hong Kong Air Cargo Terminals Ltd||12.5%|

The Swire Family
Family trusts control unlisted John Swire & Sons Ltd, which controls 67% of the voting rights in Swire Pacific Ltd (SP, 0019, 0087), which owns 66.7% of Swire Aviation Ltd (SA). The other 33.3% is held by CITIC Pacific.
|SP||Hong Kong Salvage & Towage Co Ltd||50%|
|SP||Hong Kong United Dockyards Ltd||50%|
|SA||Hong Kong Air Cargo Terminals Ltd||30%|
|SP||Modern Terminals Ltd||17.63%|

CITIC Pacific Ltd (CP) owns 70% of Adwood Co Ltd (Adwood). The other 30% is owned by Kerry Properties Ltd. CP also owns 50% of Hong Kong Resort Co Ltd (HKRC). The other half is owned by HKR International Ltd (0480).
|CP||New Hong Kong Tunnel Co Ltd||70.8%|
|Adwood||Hong Kong Tunnels and Highways Management Co Ltd||50%|
|Adwood||Western Harbour Tunnel Co Ltd||50%|
|HKRC||Discovery Bay Road Tunnel Co Ltd||100%|
|HKRC||Discovery Bay Transportation Services Ltd||100%|
|STCTS||Turbojet Ferry Services (Guangzhou) Ltd||?|

Cheung Chung Kiu
Mr Cheung controls 38% of Yugang International Ltd (0613), which controls 34% of Y.T. Realty Group Ltd (0075), which controls 29.9% of The Cross-Harbour Holdings Ltd (CHH, 0032).
CHH owns 70% of The Autopass Co Ltd (Autopass).
|CHH||Hong Kong School of Motoring Ltd||70%|
|CHH||Western Harbour Tunnel Co Ltd||37%|
|CHH||Hong Kong Tunnels and Highways Management Ltd||37%|

The Kadoorie family controls The Hongkong and Shanghai Hotels, Ltd (HKSH, 0045).
|HKSH||Peak Tramways Co, Ltd||100%|

Lee Shau Kee
Mr Lee controls Henderson Land Development Ltd, which through subsidiary Henderson Investment Ltd controls 31.33% of Hong Kong Ferry (Holdings) Co Ltd (HKF, 0050).
|HKF||The Hongkong and Yaumati Ferry Co Ltd||100%|

Foreign governments have votes too
Corporate voting also opens the door to electors who are controlled by overseas companies and governments. For example:

CSX World Terminals Hong Kong Ltd, an elector, is 66.66% owned by Dubai Ports World, the Dubai government-owned port operator, and 33.34% owned by PSA International Pte Ltd, the Singapore government-owned port operator.

Asia Airfreight Terminal Co Ltd (AAT), an elector, is 49% owned by Singapore Airport Terminal Services Ltd, which in turn is a subsidiary of Singapore Airlines, which is controlled by the Singapore Government. Another 10% of AAT is owned by Keppel Telecommunications & Transportation Ltd, a subsidiary of Keppel Corp Ltd, which in turn is controlled by the Singapore Government. So altogether, the Singapore Government has an interest in 59% of AAT.

China Merchants Shipping & Enterprises Co Ltd, an elector, is a subsidiary of China Merchants Logistics Group Co Ltd, owned by the mainland Government. The same group also controls China Merchants Holdings (International) Co Ltd (0144), which has a 20% stake in AAT.

Chu Kong Shipping Enterprises (Holdings) Co Ltd (CKSE), an elector, is the controlling shareholder of HK-listed Chu Kong Shipping Development Co Ltd (0560). CKSE is in turn owned by the Guangdong provincial government.

Taxis, minibuses and driving instructors
There is a gaggle of electors who are associations of taxi owners, drivers, operators, servicers and so on.
The membership of these associations is unlikely to be mutually exclusive - i.e. some people, or companies, are probably members of multiple associations. It is beyond the scope of this article to investigate that.
|Tang's Taxi Companies Association Ltd|
|Taxi Associations & Federation|
|Taxi Dealers & Owners Association Ltd|
|Taxi Drivers & Operators Association Ltd|
|The Taxi Operators Association Ltd|
|Taxicom Vehicle Owners Association Ltd|
|United Friendship Taxi Owners & Drivers Association Ltd|
|Urban Taxi Drivers Association Joint Committee Co Ltd|
|Wai Fat Taxi Owners Association Ltd|
|Wai Yik H.K. & Kowloon and New Territories Taxi Owners Association|
|Wing Lee Radio Car Traders Association Ltd|
|Wing Tai Car Owners & Drivers Association Ltd|
|Yik Sun Radiocabs Operators Association Ltd|
|Rights of Taxi Owners and Drivers Association Ltd|
|Hong Kong Taxi and Public Light Bus Association Ltd (The)|
The last of the voters named above is a joint association between taxi and public minibus people. Here is the list of 19 minibus voters:
|G.M.B. Maxicab Operators General Association Ltd|
|Hon Wah Public Light Bus Association Ltd|
|Hong Kong, Kowloon & NT Public & Maxicab Light Bus Merchants' United Association|
|Hong Kong Public & Maxicab Light Bus United Associations|
|Hong Kong Scheduled (GMB) Licensee Association|
|Kowloon Fung Wong Public Light Bus Merchants & Workers' Association Ltd|
|Kowloon PLB Chiu Chow Traders & Workers Friendly Association (The)|
|Lam Tin Wai Hoi Public Light Bus Merchants Association Ltd|
|Lei Yue Mun Ko Chiu Road Public Light Bus Merchants Association Ltd|
|Lung Cheung Public Light Bus Welfare Advancement Association Ltd|
|N.T. PLB Owners Association|
|N.T. San Tin PLB(17) Owners Association|
|Public Light Bus General Association|
|Sai Kung Public Light Bus Drivers and Owners Association|
|Tsuen Wan PLB Commercial Association Ltd|
|Tuen Mun PLB Association|
|United Association of Public Lightbus Hong Kong|
|Yuen Long Tai Po PLB Merchants Association Ltd|
|HK Public-Light Bus Owner & Driver Association|
Driving instructors also feature heavily, with 10 electors:
|Articulated & Commercial Vehicle's Instructors Union|
|Driving Instructors Merchant Association Ltd|
|Hong Kong & Kowloon Goods Vehicle Omnibuses and Minibuses Instructors' Association Ltd|
|Hong Kong Commercial Vehicle Driving Instructors Association|
|Hong Kong Driving Instruction Club Ltd|
|Hong Kong Motor Car Driving Instructors Association Ltd|
|Hong Kong Society of Articulated Vehicle Driving Instructors Ltd|
|Kowoon Motor Driving Instructors' Association Ltd|
|Public and Private Light Buses Driving Instructors' Society|
|Public and Private Commercial Driving Instructors' Society|
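The ownership chains listed earlier show how little economic interest is needed to sit atop a stack of controlled electors. As an illustrative sketch (using the Cheung Chung Kiu chain reported above), the effective economic interest at the bottom of a chain is simply the product of the stakes along it, even though each link is a controlling stake:

```python
from math import prod

# Cheung Chung Kiu's control chain, as reported above:
# 38% of Yugang -> 34% of Y.T. Realty -> 29.9% of Cross-Harbour -> 70% of Autopass.
chain = [0.38, 0.34, 0.299, 0.70]

# Effective economic interest is the product of the stakes along the chain.
effective = prod(chain)
print(f"Effective economic interest at the bottom of the chain: {effective:.2%}")  # 2.70%
```

An effective interest under 3% still carries a vote at each controlled level, which is exactly how corporate voting multiplies one person's influence in the constituency.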
While housing prices are recovering slowly from the bursting of the bubble, there's one segment of the real estate market that has seen robust growth: multifamily housing (buildings with more than four residential rental units). Activity in this segment has been expanding both in terms of rising apartment rents and declining vacancy rates. In this article, we consider the period between 2006:Q1 and 2009:Q2 as the period when the bubble burst and the quarters between 2009:Q2 and 2012:Q3 as the recovery period. As shown in the figures, the nation's weighted average asking rent for apartments increased by 5.0 percent during the recovery period, while the nation's vacancy rate for apartments decreased from 7.7 percent in 2009:Q2 to 4.6 percent in 2012:Q3. All of the major urban areas of the Eighth District—Little Rock, Louisville, Memphis and St. Louis—showed very similar changes for "asking rents," and all but one of these urban areas—Memphis—did the same for "vacancy rates." In terms of housing price indexes, until 2012, the Eighth District performed better than the national average during both the housing contraction and the recovery periods. During the 13 quarters of contraction, home prices in the four zones of the District—zones based in Little Rock, Louisville, Memphis and St. Louis—fell by 10.3 percent on average (weighted by population), substantially less than the nation's contraction of 29.3 percent. Prices in the Little Rock and Louisville zones fell by only 3.3 percent and 2.0 percent, respectively, while prices in the Memphis and St. Louis zones fell by 13.3 percent and 12.5 percent, respectively.2 The Eighth District's zones show some diversity in the pace of the recovery of home prices. As of the third quarter of 2012, the available data indicate that the Little Rock and Louisville zones were experiencing positive year-over-year growth in home prices, while the Memphis and St. Louis zones were still suffering declines.
Among the four zones, the housing market in the Little Rock Zone has suffered the least. For the first three quarters of 2012, the Little Rock Zone had consecutive positive growth rates on a year-over-year basis, while the other three zones had at least one quarter with a decline or no growth at all. Meanwhile, the nation's year-over-year growth rate in house prices was, at first, lower than the growth rates in the Little Rock, Louisville and Memphis zones, but interestingly, starting at the beginning of 2012, the nation's growth rate gradually outpaced those of the four zones. In the third quarter of 2012, the year-over-year growth rate for the nation was 4.4 percent, almost twice as high as the growth rate of home prices in the Little Rock Zone. "Yesterday's buyer is today's tenant," one real estate agent in the Eighth District recently said. Multifamily rental activity has been the bright spot of the housing market since mid-2010. Both data and anecdotal evidence suggest a robust increase in apartment rents, as well as a continuous decrease in vacancy rates. During the recovery period, the "asking rent" for apartments in the MSAs of Little Rock, Louisville, Memphis and St. Louis increased by 6.6 percent, 5.9 percent, 4.7 percent and 4.3 percent, respectively, while the nation's increased by 5.0 percent. In the third quarter of 2012, vacancy rates in the Little Rock, Louisville, Memphis and St. Louis MSAs and in the nation declined to 6.0 percent, 4.3 percent, 9.1 percent, 5.9 percent and 4.6 percent, respectively, reaching their lowest levels since 2002 (Figure 2). During the recovery period, apartment vacancy rates in Little Rock, Louisville, Memphis and St. Louis dropped by 31.0 percent, 37.7 percent, 24.8 percent and 33.0 percent, while the nation's fell by 40.3 percent. Among the four MSAs, Louisville's performance is outstanding: The vacancy rate has been below the national level since the first quarter of 2009 and far below Louisville's precrisis levels. 
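The "recovery period" declines quoted above are simple percentage changes between 2009:Q2 and 2012:Q3. As a sketch, the national vacancy figure works out as follows:

```python
def pct_change(start, end):
    """Signed percentage change from start to end."""
    return (end - start) / start * 100

# National apartment vacancy rate over the recovery period (2009:Q2 -> 2012:Q3),
# using the 7.7 and 4.6 percent figures cited above.
print(f"National vacancy change: {pct_change(7.7, 4.6):.1f}%")  # -40.3%
```

The same formula reproduces the MSA-level drops quoted in the text from their start- and end-of-period vacancy rates.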
According to a real estate agent in Louisville, for the attractive projects, the average occupancy rate in the third quarter of 2012 was 96-97 percent and waiting lists have become common. As the market for apartments expands, the Louisville data also point to an increase of permits for this segment of the market, which has been recovering steadily since 2008. What explains the fast-closing gap between the demand and supply of apartments? First, access to financing has become an important barrier for potential homeowners. Despite the historically low mortgage interest rates and the high housing affordability index, many prospective buyers for new and existing homes are being rejected by mortgage providers due to underwriting standards that are stricter now than before the real estate crisis. According to several real estate agents in the Eighth District, applicants with even the slightest blemishes on their credit records are being refused mortgages, even if they are well-employed. Second, potential first-time homebuyers are facing competition from investors, who can pay cash up front. Private investors, real estate investment trusts (REITs), public and private pension funds, and venture capitalists are becoming more active in the housing market. Some of these companies, such as venture capital enterprises, pool financial resources to invest in the apartment segment, anticipating hefty profits from "buy now and sell later" strategies. One indicator of the success of this strategy is that the stock price of apartment REITs increased by roughly 325 percent from February 2009 until August 2012, while all REITs and the S&P 500 increased by 210 percent and 120 percent, respectively, during the same period.3 One homebuilder described the market as a "capital-starved" industry. Third, renting has become more appealing to younger people who, before the crisis, would have been eager homebuyers.
In part, this is a response to changes in lifestyle, as younger, more-educated households prize their mobility and flexibility; they are also discouraged by the substantial responsibilities, costs and, especially, risks attached to homeownership. In the end, they are more willing to pay rent than to own equivalent properties. Another reason for younger generations to be skittish about buying homes stems from their being scarred by the labor market outcomes of the Great Recession; they are having trouble finding jobs that match their skills—or finding any job. The robust multifamily rental market is triggering a strong response in the multifamily construction segment. Anecdotal evidence suggests that multifamily developers have intensified the search for new projects; even those companies that normally focus on offices are looking to invest in multifamily housing. As supply adjusts, the increase in rents could decelerate. While the multifamily segment is sending positive signals through most of the Eighth District, its developments have also raised some concerns. In particular, the target home size of move-up homebuyers is shrinking. Traditional real estate markets consist of first-time buyers and repeat buyers, whether move-up, move-across or move-down. Before the crisis, first-time buyers would easily and relatively quickly move up and acquire larger, pricier homes. The current residential market, however, is characterized by an increased presence of investors, far fewer first-time buyers and a declining number of move-up buyers.4 In the current environment, traditional buyers have much less room to maneuver because of difficulties in accessing mortgage financing, either through first mortgages or through refinancing. As a result, either by choice or by force, many households are currently paying rent that is substantially higher than the mortgage payment for an equivalent property.
These higher payments are probably curtailing consumer spending on other goods and services. What's more, homeownership has declined significantly during the recovery period. For example, homeownership rates in Louisville, Memphis and St. Louis have decreased by 8.0 percent, 5.3 percent and 0.6 percent, respectively (data for Little Rock are not available), while the nation's rate has decreased by 2.9 percent.
Privacy advocates are more than a little concerned about the so-called “Internet of Cars.” An offshoot of the nascent-but-expanding “Internet of Things,” the “Internet of Cars” refers to the growing number of vehicles loaded with sensors and devices capable of accessing the Internet. Carmakers, cell-network providers, mobile-telematics developers, and a long list of technology companies (including Google, Apple and Microsoft) are stuffing as much connected hardware as possible into anything with four wheels and a motor. By 2016, most buyers in the United States and Europe will judge cars as much by their capacity for a Web connection as by gas mileage or any other traditional decision point, according to Thilo Koslowski, who heads up Gartner’s analysis of the auto industry. In theory, Internet-enabled cars could support a broad range of improvements to existing services: If thousands of drivers in a particular area all turn on their windshield wipers at once, for instance, it could help a real-time weather service better predict rain. Data from shock absorbers could help identify rough patches of road. While those are future innovations, it’s clear that auto manufacturers and tech companies are already siphoning up enormous amounts of data—raising inevitable questions about what they’re doing with all that information, and whether they’re selling it to the highest bidder. Most of the big automakers put GPS systems in some car models, and most of them collect continuous streams of data from those systems that show exactly where that car has been and when, along with the name of the owner subscribing to the GPS service (if applicable) or the unique vehicle identification number of the car, according to a December report from the Government Accountability Office (GAO) that looked into the data-saving practices of Ford, GM, Chrysler, Toyota, Honda and Nissan.
Most of the GPS data that carmakers collect is meant (at least in theory) to assist quick-response customer service programs such as GM’s OnStar, which is best known for determining when a car has been in a wreck and phoning for help. Most GPS makers—Garmin, TomTom, Nuvi, Google Maps and Telenav—also track, store and use location data for their own purposes, according to the GAO report. At the moment, the manufacturers are positioning the technology as an unmitigated good. “Customers want to integrate their personalized digital lives into their daily drive to ensure they are as connected and streamlined on the move as they are the rest of the day,” Ford CTO and VP of Research and Innovation Paul Mascarenas told the U.K.’s Western Morning News March 31. Meanwhile, consumers are showing few signs of worry about the potential of Web-connected vehicles to collect and give away more about their daily habits, likes, dislikes and activities. Few buyers, especially those who don’t subscribe to OnStar or similar systems, are aware that their automaker may track their movements and store the data for purposes of its own. And even if they did, most have no way to opt out of tracking services, and have little or no legal protection to keep those companies from misusing that data, according to a statement from Sen. Al Franken (D-Minn.), quoted in a UPI story Jan. 7. For those who work with consumer data, it’s clear that cars could soon add a new wrinkle to traditional behavior analysis. And for those who build apps, the rise of in-car Internet connectivity and dashboard screens could lead to a whole new market. Although the “Internet of Cars” won’t reach full maturity for quite some time, it might be worth starting your research now.
"What we have to learn to do, we learn by doing." Aristotle, Nichomachean Ethics The mission of the PULSE Program is to educate our students about social injustice by putting them into direct contact with marginalized populations and social change organizations and by encouraging discussion on classic and contemporary works of philosophy and theology. Our goal is to foster critical consciousness and enable students to question conventional wisdom and learn how to work for a just society. We accomplish this by helping our students make relevant connections between course material and experience with community service. Throughout the years, we have found that the relationship between field work and classroom study evokes a rich conversation. The Western philosophical tradition began in wonder and inquiry about basic problems: What does it mean to be human? To enjoy freedom? To fall in love or become a friend? To participate in community? These basic questions reassert themselves when a student acts as a companion to a disabled adult, tutors an inmate, extends a sympathetic ear to a suicidal person over a telephone line, or feeds a homeless person on a cold winter night. The majority of the students enrolled in the PULSE Program take a twelve-credit, year-long core-level course in philosophy and theology entitled Person and Social Responsibility. Several PULSE elective courses are also offered. In addition to classroom reflection and discussion, carefully selected field placements in after school programs, youth work, corrections, shelters, literacy, domestic violence, health clinics, housing programs, and HIV/AIDS services among other areas become the context in which students forge a critical and compassionate perspective both on society and themselves. 
The specific learning goals for the PULSE year-long core-level course, Person and Social Responsibility, are as follows: - Students will have an understanding of the ways in which service and the study of philosophical and theological traditions inform each other. - Students will demonstrate the ability to employ an ongoing praxis methodology in which they encounter challenging social realities, critically reflect upon them in conversation with philosophical and theological traditions, and act with informed and critical agency. - Students will develop a critical understanding of intersectionality and interlocking structures of privilege and oppression, especially race, class, gender, sexuality, and ability. - Students will demonstrate moral development through a growth in compassion, a sense of responsibility and agency in response to injustice to contribute to the common good and social justice, and engagement in questions about the divine-human relationship.
In his work keeping Island Creek Elementary students on the forefront of education technology, Mark Moran has some things to teach them about nature as well. The Island Creek technology specialist combines a love of the native environment with a keen interest in high-tech advances in learning in the FCPS-wide "Study of Northern Virginia Ecology" Web site. He recently won a Best Practices Award from Fairfax County Public Schools for his redesign of the Island Creek Web site as well. A former Fairfax-area resident who now lives near Fort Belvoir, Moran is this week’s People Profile. How long have you lived in the area? My whole life. I was born here, so 35 years. I am a product of Fairfax County Public Schools. Education: I went to Olde Creek and Oak View Elementary Schools, Lanier Middle School, Robinson Secondary School and Fairfax High School. I went to Virginia Tech and George Mason for my master’s in curriculum instruction and education. I was a teacher, and taught sixth grade for nine years. I’ve been a technology specialist here for a little over a year. How did you become interested in an education career? At first, I did it just to pay the bills. I was substitute teaching and realized I really liked it. I went back to school to get a degree in education and loved it. I taught for almost a decade at Stratford Landing Elementary School [in Alexandria]. How did you come to be a technology specialist? After being that long in the classroom, I decided I wanted to do something new. I’d been doing a lot of technology stuff with the sixth-graders … every school has a technology specialist in the Fairfax County Public School system, and their main role is supporting instruction and integrating technology with the kids. I do a lot of training of teachers. It appeals to me because it allows me to reach more kids. We have an awesome staff, a good mix of veteran and younger teachers and everyone is very open to technology here. 
We have a wonderful principal and administration staff: very forward-thinking, realizing that the world these kids will enter is very technologically-oriented. It helps that Fairfax County is very forward-thinking. Family: My parents live in Fairfax, my brother in Arlington, and my wife’s family lives in Reston. Activities/interests/hobbies: I’m a naturalist at heart. Ever since I was little I have been fascinated with nature. As I’ve grown older I’ve pursued it and brought it into the classroom. The technological and natural worlds go well together; it’s a great way to teach kids about where they live, the flora and fauna and how it’s interconnected. There are great learning opportunities around here, like Huntley Meadows [Park] … FCPS now does Blackboard, and our teachers do a phenomenal job using it to post photos, digital stories, post lessons and things like that. Blogs, wikis and podcasts all filter into teaching; soon we are supposed to get podcast capability. Do you have a vision for the school, technologically-speaking? One goal is to have an interactive SmartBoard in every room, and to continue to make use of the resources available to us. Community concerns: I think it’s very important that we be active conservationists. The green movement is taking off, kids today are very interested in that, interested in what’s going on. As far as the community goes, we need to protect resources such as Huntley Meadows, Occoquan, Dyke Marsh, Hidden Pond Nature Center. When I was a sixth-grade teacher, I saw kids ending their elementary-school career knowing very little about the natural world around them. It was disappointing to see they couldn’t name trees. … As a teacher, I tried to be proactive with that. I created a Web site as a teacher, since there was no part of the curriculum that taught kids ecology at the local level. I wanted something that would motivate and teach them about where we lived. … I started [the site] in 2002, and it now has 300 species pages. 
I was very fortunate; I set it up at Stratford Landing and when I took the new position here, principal Susan Owner was eager to have the site. Favorite place to spend time in the area: Huntley Meadows Park. I volunteer there when I can; my wife and I have a 1-year-old so not as much now. But I’m taking [my daughter] there this afternoon. I love to walk on the paths, visit friends there. There’s no place like it; you can go out on the boardwalk into the wetland and get away from the bustle and traffic. It’s very serene. If you could go on a road trip, where would you go? I would really like to see Hawaii. The Florida Keys also, and the Pacific Northwest comes to mind too. My wife tells me about Hawaii, and it all comes back to ecology: looking at birds and butterflies. Hawaii is so far away it has a totally different ecosystem. What’s on your radio right now? I like all different kinds of things, so I have a lot of mixes and stuff. I’m going to a Neil Young concert in a couple of weeks. Personal goals: Right now, I’m just concentrating on trying to be a good dad, experiencing fatherhood, working with the kids here and finding great ways to get kids excited about using technology. If I can sneak nature in there too, that’s good. Down the road, I’d like to get more into environmental education.
These courses prepare the expectant mother for the childbirth experience and train her support person to be an effective coach. Various methods of relaxation and breathing are taught. The course also includes discussion of medications, anesthesia, cesarean birth, the newborn and parenting roles. This course prepares children for the upcoming birth of a sibling by helping them feel comfortable with the hospital setting and the visiting routine after the baby's birth. For expectant mothers with positive screening or test results. Explains the diabetic condition in pregnancy, possible risks, control achieved through the ADA diet, use of a glucometer for home monitoring and self-administration of insulin when necessary. This class helps moms develop confidence in learning the skills needed for breastfeeding. Topics include breast care and preparation, breastfeeding techniques, maternal nutrition, recognition and treatment of nursing problems, and breastfeeding support. Intended for women returning to work or resuming activities that make infant nursing difficult. Topics include pumping and storing breast milk, introducing a bottle, supplementing, weaning, and other tips to make a smooth transition. These sessions are offered in small groups or individually as needed. Call 419.557.7596 for an appointment. Counseling and information provided by certified lactation consultants. Services include hospital visits, follow-up telephone support calls, one-on-one prenatal counseling, follow-up clinic visits and educational opportunities for health professionals. Learn techniques to calm, soothe and bond with your infant. Mom and dad are encouraged to attend with baby for hands-on practice. This class is free of charge and includes a DVD and a Soothing Sounds CD. This course demonstrates the techniques of adult, infant and child CPR and managing a blocked airway. The course includes a booklet and practice on manikins to increase skill and comfort level.
Adolescents age 11 or older will learn important safety information and responsibility guidelines for caring for younger children. This course includes a booklet, and a certificate of attendance will be awarded to those who complete the class. Real men, real babies, real-world advice. For expectant parents who are integrating a new baby into a home with dogs. Taught by a certified dog trainer. An opportunity to jump-start admission form completion, ask questions about the upcoming delivery process and participate in a tour of the OB unit.
IBEW Green-Job Training Facilities Around the Country Open Their Doors to Public
June 8, 2009
With renewable energy looking to be the wave of the future, the International Brotherhood of Electrical Workers is letting everyone know that its members are the best-trained green work force around. During the Memorial Day break, local International Brotherhood of Electrical Workers training centers opened their doors to policy makers and members of the public to learn more about the union’s extensive green job-training programs. “I hope I saw the future and I believe that I did,” Connecticut Sen. Joseph Lieberman said after touring New Haven Local 90’s training center. Legislators were in their home districts for Congress’s Memorial Day recess and many eagerly accepted the IBEW’s and the National Electrical Contractors Association’s invitation to tour their local joint apprenticeship training facilities. More than 90 members of Congress attended open house events. In Warren, Ohio, state and local leaders got a first look at plans for a new solar photovoltaic system and wind turbines to be installed at Local 573’s Electrical Trades Institute, while in Tennessee, Rep. Jim Cooper (D) called Nashville Local 429’s apprenticeship training center and its green-skills program a “ticket to the future” after touring its facility. In San Diego, more than 120 community, local and state leaders visited Local 569’s Electrical Training Center, including representatives from Sen. Barbara Boxer’s and Rep. Susan Davis’s offices. The center focuses on solar power, which allows apprentices to earn professional certification in photovoltaic installation. Local 569 is also planning to open a new green-training facility in neighboring Imperial County to help staff its rapidly growing solar and wind market. The local’s program was featured in the San Diego Union Tribune newspaper as part of its hot-jobs list for new college grads. Rep. Ed Perlmutter (D-Colo.)
dedicated a new photovoltaic display at Denver Local 68’s training center. Perlmutter told guests that renewable energy will “rebuild the country and the middle class.” The 18-kilowatt panel was originally displayed at last year’s Democratic National Convention in Denver. The local plans to add wind turbines to the facility soon. More than 200 apprentices from Richmond, Va., Local 666 are learning specialized skills in solar and wind, which could become one of the fastest-growing job sectors in central Virginia. “We’re the best kept secret in the industry,” Business Manager Jim Underwood told WWBT-TV during the local’s open house. Portland, Ore., Local 48’s training center, which has trained more than 1,000 members in solar installation, recently started offering the North American Board of Certified Energy Practitioners’ solar certificate program. The program gives individuals looking to get into the solar field a way to show they have achieved a basic comprehension of key terms and concepts of photovoltaic operations. New opportunities are opening up in the renewable energy sector as millions of federal stimulus dollars are made available for training and investment in the new energy economy. But the expected rapid growth of green jobs – covering everything from retrofitting buildings for energy efficiency to installing and wiring solar panels and wind turbines – means our economy will require thousands of trained electricians who can safely and professionally carry out the work. It’s a demand that is already being met by the IBEW. “Renewable energy is not the wave of the future, it’s already here,” said Honolulu Local 1186 Business Manager Damien Kim. “Our members and apprentices will be going into the workplace with skills that are expected of them as we move toward a new energy economy.” Rep. Neil Abercrombie (D-Hawaii) toured Local 1186’s facility which features training in photovoltaics, wind turbines and automated building operations.
“The IBEW has the curriculum, facilities and instructors needed to lead the new energy revolution and we’ve been doing it for nearly a decade,” said International President Edwin D. Hill. “And we make sure that green-collar workers and their families get a decent wage and benefits so they can take their place in the middle class.” More than 70 IBEW training centers offer training in renewable energy, with more and more facilities incorporating green power into their curriculum.
Salmonella enteritidis Infection Egg-associated salmonellosis is an important public health problem in the United States and several European countries. A bacterium, Salmonella enteritidis, can be inside perfectly normal-appearing eggs, and if the eggs are eaten raw or undercooked, the bacterium can cause illness. During the 1980s, illness related to contaminated eggs occurred most frequently in the northeastern United States, but now illness caused by S. enteritidis is increasing in other parts of the country as well. Consumers should be aware of the disease and learn how to minimize the chances of becoming ill. A person infected with the Salmonella enteritidis bacterium usually has fever, abdominal cramps, and diarrhea beginning 12 to 72 hours after consuming a contaminated food or beverage. The illness usually lasts 4 to 7 days, and most persons recover without antibiotic treatment. However, the diarrhea can be severe, and the person may be ill enough to require hospitalization. The elderly, infants, and those with impaired immune systems may have a more severe illness. In these patients, the infection may spread from the intestines to the blood stream, and then to other body sites, and can cause death unless the person is treated promptly with antibiotics. How eggs become contaminated Most types of Salmonella live in the intestinal tracts of animals and birds and are transmitted to humans by contaminated foods of animal origin. Stringent procedures for cleaning and inspecting eggs were implemented in the 1970s and have made salmonellosis caused by external fecal contamination of egg shells extremely rare.
However, unlike eggborne salmonellosis of past decades, the current epidemic is due to intact and disinfected grade A eggs. The reason for this is that Salmonella enteritidis silently infects the ovaries of healthy-appearing hens and contaminates the eggs before the shells are formed. Although most infected hens have been found in the northeastern United States, the infection also occurs in hens in other areas of the country. In the Northeast, approximately one in 10,000 eggs may be internally contaminated. In other parts of the United States, contaminated eggs appear to be less common. Only a small number of hens seem to be infected at any given time, and an infected hen can lay many normal eggs while only occasionally laying an egg contaminated with the Salmonella bacterium. Who can be infected? Healthy adults and children are at risk for egg-associated salmonellosis, but the elderly, infants, and persons with impaired immune systems are at increased risk for serious illness. In these persons, a relatively small number of Salmonella bacteria can cause severe illness. Most of the deaths caused by Salmonella enteritidis have occurred among the elderly in nursing homes. Egg-containing dishes prepared for any of these high-risk persons in hospitals, in nursing homes, in restaurants, or at home should be thoroughly cooked and served promptly. What is the risk? In affected parts of the United States, we estimate that one in 50 average consumers could be exposed to a contaminated egg each year. If that egg is thoroughly cooked, the Salmonella organisms will be destroyed and will not make the person sick. Many dishes made in restaurants or commercial or institutional kitchens, however, are made from pooled eggs. If 500 eggs are pooled, one batch in 20 will be contaminated and everyone who eats eggs from that batch is at risk.
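The one-in-20 figure is consistent with the one-in-10,000 contamination rate quoted above; as a quick check, the chance that a pool of 500 eggs contains at least one contaminated egg is

```latex
P(\text{pool contaminated}) = 1 - \left(1 - \frac{1}{10{,}000}\right)^{500}
\approx 1 - e^{-500/10{,}000} \approx 0.049 \approx \frac{1}{20}
```

so roughly one pooled batch in 20 would be expected to contain at least one contaminated egg.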
A healthy person's risk for infection by Salmonella enteritidis is low, even in the northeastern United States, if individually prepared eggs are properly cooked, or foods are made from pasteurized eggs. What you can do to reduce risk Eggs, like meat, poultry, milk, and other foods, are safe when handled properly. Shell eggs are safest when stored in the refrigerator, individually and thoroughly cooked, and promptly consumed. The larger the number of Salmonella present in the egg, the more likely it is to cause illness. Keeping eggs adequately refrigerated prevents any Salmonella present in the eggs from growing to higher numbers, so eggs should be held refrigerated until they are needed. Cooking reduces the number of bacteria present in an egg; however, an egg with a runny yolk still poses a greater risk than a completely cooked egg. Undercooked egg whites and yolks have been associated with outbreaks of Salmonella enteritidis infections. Both should be consumed promptly and not be held in the temperature range of 40°F to 140°F for more than 2 hours. - Refrigerate unused or leftover egg-containing foods. - Avoid eating raw eggs (as in homemade ice cream or eggnog). Commercially manufactured ice cream and eggnog are made with pasteurized eggs and have not been linked with Salmonella enteritidis infections. - Avoid restaurant dishes made with raw or undercooked, unpasteurized eggs. Restaurants should use pasteurized eggs in any recipe (such as Hollandaise sauce or Caesar salad dressing) that calls for pooling of raw eggs. What else is being done? Government agencies and the egg industry have taken steps to reduce Salmonella enteritidis outbreaks. These steps include the difficult task of identifying and removing infected flocks from the egg supply and increasing quality assurance and sanitation measures.
The Centers for Disease Control has advised state health departments, hospitals, and nursing homes of specific measures to reduce Salmonella enteritidis infection. Some states now require refrigeration of eggs from the producer to the consumer. The U.S. Department of Agriculture is testing the breeder flocks that produce egg-laying chickens to ensure that they are free of Salmonella enteritidis. Eggs from known infected commercial flocks will be pasteurized instead of being sold as grade A shell eggs. The U.S. Food and Drug Administration has issued guidelines for handling eggs in retail food establishments and will be monitoring infection in laying hens.
The Saint of the Day St. Robert Bellarmine, September 17 Prof. Plinio Corrêa de Oliveira Since the founding of the Church until our days, Divine Providence has always called illustrious men, who by their knowledge and sanctity have conserved and defended the truths of the Catholic Faith against the attacks of heretics. Among these men shines St. Robert Bellarmine (1542-1621), who was celebrated for his teachings and polemic works, as well as for his virtue and zeal for the Church. In truth, it would seem that the holy Cardinal had received from God the threefold gift of teaching the people, guiding the faithful, and confounding the Protestant heretics of the 16th century, a time when Protestantism was growing and spreading. Until today, the works of St. Bellarmine constitute a wall of defense against Protestantism, Liberalism, Modernism and Progressivism. He was great as a preacher, professor and polemicist, receiving the title of “hammer of heresies” from Benedict XV. He wrote prodigiously, and to understand the worth of his books one need only read what St. Francis de Sales, his contemporary and friend, said about them: “I preached five years in Chablais with no other books than the Bible and the works of the great Bellarmine.” His most famous work is The Controversies, a collection of the lectures he delivered at the Roman College. In it he set out the teaching of the Fathers, the Councils and the Church Law to victoriously defend the dogmas attacked by the Protestants. Clear, balanced, and forceful, this work is so well done that many considered it insuperable. When it was published, it raised as much joy among Catholics as hatred among the Church’s enemies. Theodore Beza, a Protestant leader, used to say: “This is the work that defeated us.” Given the number of conversions for which it was responsible, reading it was forbidden under penalty of death in England by Queen Elizabeth. Only doctors of theology were permitted to read it.
In addition to disputing the heretics, he also wanted to prevent the faithful from falling into their errors. For this purpose he wrote his remarkable little catechism, A Summary of Christian Doctrine (Doctrinae Christianae breve, 1598), which he used to teach children and simple lay people, even when he was very busy with other pressing matters. Among his many other works, at the end of his life he wrote his spiritual notes, which form five small ascetic treatises. The last of these works is called The Art of Dying Well (De arte bene moriendi, 1620). Therefore, at the same time that he was a very busy polemicist, St. Robert Bellarmine took the time to direct souls and wrote profound spiritual treatises that earned him the title of Doctor of the Church. This capacity to move back and forth between the mêlée of a fight and the direction of souls, while maintaining a spirit of meditation to write his books, is only possible when a man has a great calmness of spirit. This calm is, in a certain sense, one of the most profound notes of the soul of St. Robert Bellarmine. Let us admire such a great saint and ask him to do with each one of us what he did with St. Aloysius Gonzaga, that is, to lead us on the road of sanctity.
Electronic editions by CELT of Irish texts edited and translated by O'Donovan The Annals of the Kingdom of Ireland by the Four Masters. Six volumes. Bibliography of John O'Donovan 1809 July 9: born at his father's farm in Atatemore, Co. Kilkenny; educated in Dublin. 1826: appointed to work in Irish Record Office. 1829: worked in historical department of the Irish Ordnance Survey: examined manuscripts and toured Ireland. 1832–1833: wrote many articles, on Irish topography and history, in the Dublin Penny Journal. 1837: volume published by Ordnance Survey which contains a long Irish text and translation from the "Dinnsenchas" by O'Donovan. 1840: married Mary Anne Broughton, with whom he had nine sons. By this marriage he became brother-in-law to Eugene O'Curry, another Celtic scholar. 1840–1841: wrote articles for the Irish Penny Journal. 1841: first volume of the Irish Archaeological Society published: The Circuit of Ireland by Muircheartach MacNeill edited by O'Donovan; this work contains the first good map of ancient Ireland. 1842: The Banquet of Dun na nGedh and the Battle of Magh Rath published. 1843: The Tribes and Customs of Hy-Many from the Book of Lecan published; prepared a text and translation of "Sanas Chormaic". 1844: The Genealogies, Tribes, and Customs of Hy-Fiachrach, from a manuscript of Duald MacFirbis, published, again accompanied by a beautiful map; entered Gray's Inn, London on 15 April. 1845: Grammar of the Irish Language published by Trinity College Dublin, the expense of printing shared by O'Donovan and TCD. 1846: The Irish Charters in the Book of Kells published. The Miscellany of the Irish Archaeological Society published, which contains the Covenant between Mageoghegan and the Fox. 1847: called to the Irish Bar; Celtic Society publishes his Leabhar na gCeart, from a manuscript of Giolla Iosa mor MacFirbis.
1848–51: transcribed, translated and edited the six volumes of the Annals of the Four Masters; for this work he is often called the "Fifth Master". The Irish type in which the text is printed was designed by George Petrie. 1849: Celtic Society published his The Genealogy of Corca Laidhe, or O'Driscoll's Country. 1850: awarded the honorary degree of LL.D. by the University of Dublin (TCD). 1852: employed by the commission for the publication of the ancient laws of Ireland; made transcripts of legal manuscripts in Irish which fill over 2,000 pages and a preliminary translation of these in twelve volumes. 1856: Journal of the Kilkenny and South-East of Ireland Archaeological Society (n.s.) published Letter of Florence Mac Carthy to the Earl of Thomond, on the ancient history of Ireland, edited with Notes. 1860: Irish Archaeological and Celtic Society (IACS) published his Three Fragments of Irish Annals, with Translation and Notes. 1861 December 9: died in Dublin; buried in Glasnevin cemetery. 1862: IACS published his Topographical Poems of O'Dubhagain and O'Huidhrin. 1864: IACS published his Martyrology of Donegal, edited by Bishop William Reeves.
For over 2000 years Reishi mushrooms (Ganoderma lucidum) have been recognized by Chinese medical professionals as a valuable remedy. Their Chinese name, Lingzhi, means "spiritual potency". Reishi mushrooms are regarded by the Chinese as the "Medicine of Kings". Li Shi-Zhen, the most famous Chinese medical doctor of the Ming Dynasty, strongly endorsed the effectiveness of Reishi in his famous book, Ben Cao Gang Mu ("Great Pharmacopoeia"). He stated that the "long-term taking of Reishi (Lingzhi) will build a strong, healthy body and assure a long life."(2) A Mushroom for the Nerves Reishi mushrooms have been traditionally recommended by Chinese and Japanese herbalists for insomnia due to their "sleep-promoting factor".(1) Long-term use causes a significant promotion of slow wave sleep.(1) Reishi mushrooms are prescribed in China for a number of psychiatric and neurological afflictions, including diseases involving the muscles, anorexia, and debility following lengthy illnesses.(3) In Japan, the dried mycelium of Reishi (the root-like body that produces the mushroom) has been found to be highly effective in the treatment of neuroses caused by "environmental stress".(1) In addition, in an eight-month study of Alzheimer’s disease, patients taking a Reishi mycelium product demonstrated significant improvement. In China, Reishi is used for its muscle-relaxing and analgesic (pain-inhibiting) effects. In one study, Reishi alleviated anxiety in 18 of 20 patients after four months’ use. It was concluded that the mushroom has an essentially "calmative function", but is neither a narcotic nor a hypnotic. Reishi as a Cardiotonic For centuries, Reishi has been known as a cardiotonic herb. It was prescribed routinely to those with a "knotted and tight chest", symptoms consistent with stress and/or coronary artery disease-related angina.
Researchers in China found that Reishi improved the blood flow and lowered oxygen consumption in the heart muscle.(3) Similar results were also found by Japanese scientists.(1,4) They found that Reishi contains ganoderic acids (which belong to a group of natural substances called "triterpenes") that lower high blood pressure, lower cholesterol, and inhibit platelet aggregation (the clumping together of blood cells), which can lead to heart attacks and other circulation problems. In fact, Reishi’s triterpenes are so important that in Japan they are used to determine Reishi’s quality and authenticity. In a six-month clinical trial performed in a university hospital in Tokyo, nearly half (47.5%) of 53 hypertensive patients lowered their blood pressure by 10-19 mmHg, and 10% of the subjects dropped their pressures 20-29 mmHg (both systolic and diastolic readings) after taking Reishi extract.(1) Similar results were observed in a Chinese clinical trial without any side-effects.(1) Another large Reishi study in China found that low density lipoprotein (LDL, the "harmful" cholesterol) levels dropped in 68% of 90 patients following only one to four months of Reishi use. Recently, Russian scientists have taken an interest in Reishi. They found that in addition to all the cardiovascular benefits mentioned above, Reishi showed a significant preventive and therapeutic action against plaque build-up. (Plaque is a fatty deposit comprised of oxidized cholesterol, calcium, and degenerated white blood cells ["foam cells"]. It builds up on the walls of arteries, narrowing the passage within them and restricting blood flow, resulting in atherosclerosis.) Reishi in Cancer Research Studies of Reishi in cancer research have been largely conducted in Japan, where Reishi was scientifically proven to have an anti-tumor effect. This research has continued in Korea, Japan, and China. An example of Reishi’s cancer-fighting potential occurred in the summer of 1986.
A 39-year-old Japanese woman approached Dr. Fukumi Morishige, M.D., Ph.D., a renowned Japanese surgeon and a member of the Linus Pauling Institute of Science and Medicine, for help in treating her lung cancer. It was a complicated case, and she had been refused an operation by several hospitals. Hopeless, she returned home, where she found her husband had collected Reishi in the forests. He boiled the mushroom and gave it to her to drink as a tea. While this was going on, she begged Dr. Morishige to do something for her cancer, regardless of its very advanced stage. Given what had been evident six months earlier, Morishige was surprised to find no increase in swelling. Then he looked at her X-rays. Something wasn’t right: her tumor showed as only a trace on the X-ray. When she told him she had been drinking Reishi tea, Morishige operated with great curiosity. He was "astonished" to find only scar tissue, and although cancerous cells remained, they were now benign. That was the impetus for Dr. Morishige to begin his studies of Reishi as a treatment for cancer, especially cases given up as hopeless. Dr. Morishige now believes that Reishi is also an effective cancer preventive. The active anti-cancer constituents in Reishi are called Beta-D-glucan. Beta-D-glucan is a polysaccharide (basically a huge sugar molecule made up of many smaller sugar molecules chained together) bound to amino acids. These intricate sugars stimulate or modulate the immune system by activating immune cells such as macrophages and helper T-cells, as well as increasing immunoglobulin levels (immunoglobulins are specific types of antibodies) to produce a heightened response to foreign cells, whether bacteria, viruses, or tumor cells. One interesting and important finding by Dr. Morishige was that the effectiveness of Reishi could be increased by combining it with high doses of vitamin C. Polysaccharides are huge molecules absorbed by the body with difficulty.
Vitamin C helps to break down these huge molecules into much smaller molecules called oligoglucan, which can be easily absorbed. Vitamin C thus increases the bioavailability of Reishi, and therefore synergistically increases Reishi’s immune-stimulating and anti-cancer effects. Anti-Allergic/Anti-Inflammatory Actions During the 1970s and 1980s, Reishi’s anti-allergy action became the subject of ongoing research in both China and Japan. Studies showed that Reishi extract significantly inhibited all four types of allergic reactions, including positive effects against asthma and contact dermatitis. In 1990, researchers at the University of Texas Health Science Center in San Antonio found that Reishi could be effectively used in treating stiff necks, stiff shoulders, conjunctivitis (inflammation of the fine membrane lining the eye and eyelids), bronchitis, rheumatism, and improving "competence" of the immune system without any significant side-effects.(6) Part of the anti-inflammatory effect of Reishi may be due to its free radical scavenging effect. Reishi extract significantly elevates the free radical scavenging ability of the blood, especially against the particularly harmful hydroxyl radicals. The hydroxyl radical scavenging effect of Reishi is so strong that even after the Reishi extract was absorbed and metabolized, the scavenging action still continued. Healing the Liver Reishi is commonly prescribed in China for the treatment of chronic hepatitis. In treatments lasting 2 to 15 weeks, the overall rate of efficiency was 70.7 to 98.0%.(4) In Japan, Reishi extract has been reported to be effective in treating patients with liver failure.(1) In animal studies of mice with carbon tetrachloride-induced hepatitis, the extent of liver damage was significantly inhibited by continuous dosing with Reishi tincture, and the regeneration of the liver was promoted.(7) As the "Medicine of Kings", Reishi is widely used for different purposes.
It is used for symptomatic relief of arthritis and of menopausal anxiety. It is also used in treating allergic asthma, hypertension, hypothyroidism, bronchitis, insomnia, general anxiety and stress, and cardiovascular problems. Reishi also is often the main ingredient in herbal formulas for immune dysfunction syndromes, such as Chronic Fatigue Syndrome.
The North Face Awards Explore Fund Grant to the Colorado Mountain Club CMC’s Youth Education Program supports a national effort to increase outdoor exploration among Colorado youth Golden, CO – The North Face has awarded a $2,500 Explore Fund grant to the Colorado Mountain Club for its Youth Education Program summer adventure programming. The Explore Fund’s mission is to inspire and enable the next generation of explorers by funding non-profit organizations that are working to connect children with nature. By encouraging an active, healthy lifestyle and protection of our natural landscapes, a stronger connection of youth to the outdoors can be nurtured. The Colorado Mountain Club was selected from more than 500 applications submitted in 2013. Since The Explore Fund was initiated in 2010, The North Face has provided more than $1 million in grants to non-profits all over the world working to connect youth to the outdoors, with more than three quarters of that going to programs in the United States. All of The North Face Explore Fund recipients were chosen based on their commitment to one of three different focus areas: access to front and back country recreation, education for personal and environmental health, and creating a connection to nature that will empower the leaders of tomorrow. The CMC’s Youth Education Program summer programming offers affordable three- to five-day camps for youth to get outdoors, learn about the environment, practice their rock climbing skills, and engage in outdoor adventure and environmental education. "As a child, I had the great fortune to hike through meadows of wildflowers, wade in crystal clear creeks, and climb Colorado's majestic peaks. Unfortunately, today fewer and fewer kids have those kinds of outdoor experiences,” said Brenda Porter, Operations Director of the CMC. “The competing forces of video games, T.V.
time, and increasing rates of childhood obesity make it more important than ever to provide youth with active outdoor adventures. Thanks to support from The North Face Explore Fund, CMC will continue to engage a broad spectrum of youth in mountain education and adventures during our summer camps and school-day field trips.” The Youth Education Program was established in 1999 to share the CMC’s mission with a wider audience, specifically youth. In addition, as CMC’s facility grew into a world-class center complete with auditorium, conference center, and library, it became an ideal destination for school and youth group field trips. Since its founding, the CMC's Youth Education Program has provided opportunities for over 7,000 kids annually to experience the natural world through active learning adventures, essential opportunities to help combat childhood obesity and prevent "nature deficit disorder." In addition, the CMC’s Youth Education Program has been able to advance its mission with the support of several important partner organizations, including the Scientific Cultural Facilities District (SCFD).
Since this is an eponymous blog, the time has come to redirect it and widen its aperture to cover a much broader range of IBM-related topics that developers will find interesting and that reflect my own broader range of pursuits and thoughts within IBM. These days I work in the Smarter Workforce segment of IBM Collaboration Solutions, which is responsible for building out cloud-based solutions for employee talent optimization. How do you attract employees? Retain them? Provide education when they are recruited, promoted or need remediation? How do you best equip employees to share information and enable one another to achieve better customer satisfaction and better business results? How do you measure the results? So, if you're not in this particular problem space, why should you care? Well, there is a remarkable dynamism in this problem space because it seeks to help human beings interact more effectively and efficiently with other human beings. As a result, many of today's most interesting topics, technologies and techniques are applicable: social computing, cloud computing, mobile computing, security, big data, business analytics and algorithms, and even psychological science and cognitive computing. Think about what it takes to give everyone a smarter edge. Think of everything that might be needed to do it, plus everything they might want to do, and everything they might want to do it with. Then, think of enabling them to do it everywhere. Now we're talking the same language. When I started on JavaServer Pages (JSP) as a topic, I had intended it to be a blog topic. But it grew quite beyond blog size, so now that the technical work is finished, I can give you the meta-level view on using JSP with Enterprise IBM Forms. The work I'm telling you about here is intended to make it easy for you to exploit the powerful, simplifying JSP technique within the XFDL+XForms markup of IBM Forms documents.
It took some work to sort it all out, but with that done, it is easy for you to replicate what I did and gain the benefits. I wrote this wiki page on the IBM Forms product wiki to help you get set up, and the page references the developerWorks article I put together to show how to use JSP in your XFDL+XForms forms. It was pretty challenging to get the JSP to talk to the Webform Server Translator module, so I was pretty happy when that started to work for me. It's one of those cases of only needing a line or two of code, but it being really hard to get exactly the right line or two. As Mark Twain once said, it's like the difference between lightning and the lightning bug. Anyway, now that we know the smidge of code, it's easy for you to copy and use it in your XFDL-based JSPs. At first I thought, OK, I have a good blog topic, but then I realized we weren't covering the full Forms information lifecycle. Put simply, a form is possibly prepopulated and then served; it collects data; then it comes back and you have to do something with the data collected. So, it was back for more work sorting out how to receive a completed form into a JSP and use its values in JSP scriptlet code that helps prepopulate the next outbound form. This was a fair bit less challenging, as it maps very closely to how you start up the IBM Forms API in a regular Java servlet. Remember, JSP is just a convenient notation that the web application server knows how to turn into a Java servlet. JSP just makes it easier for you to focus on your special sauce application code. Well, now that I could handle the whole Forms information lifecycle, I realized I hadn't covered the software development lifecycle. Back to the salt mines again. The problem was that JSP annotations are incompatible with XML. Although there is an alternative XML syntax for JSP, I devote a section in the article to explaining why it's a bit of a train wreck, and I focus instead on the normal JSP annotations.
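To make the incompatibility concrete, here is a minimal sketch of the idea (the processing-instruction target name and the field name are my own illustrative choices, not the convention from the article): raw JSP scriptlet delimiters are not well-formed XML, but the same logic carried as an XML processing instruction keeps the XFDL document well-formed, and a small XSLT template can rewrite each PI back into real JSP syntax at deploy time.

```xml
<!-- Raw JSP syntax like <% String name = request.getParameter("name"); %>
     breaks XML well-formedness, so it cannot live inside an XFDL file
     that IBM Forms Designer must open. Carried as a processing
     instruction (the target "jsp-scriptlet" is a hypothetical name),
     the same code is legal XML: -->
<?jsp-scriptlet String name = request.getParameter("name"); ?>

<!-- At deploy time, an XSLT template can turn each such PI back into
     a real scriptlet: -->
<xsl:template match="processing-instruction('jsp-scriptlet')"
              xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:text disable-output-escaping="yes">&lt;% </xsl:text>
  <xsl:value-of select="."/>
  <xsl:text disable-output-escaping="yes"> %&gt;</xsl:text>
</xsl:template>
```

An identity-copy stylesheet around this one template would leave the rest of the XFDL untouched while emitting the scriptlets, which is the general shape of the deploy-time conversion the article describes.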
By representing them as XML processing instructions, we're able to maintain the XFDL and the JSP logic together using the IBM Forms Designer, and then use an XSLT to convert to actual JSP when it's time to deploy the IBM Form. This was really important to me because, quite frankly, if a new feature does not work in the design environment for a language, then the feature essentially does not exist in the language. Now, that's a wrap! I hope you like the article and get accelerated development benefit from it. JSP is really for building quick prototypes and demos, and also for solving simpler problems much more simply than using straight Java servlet coding. It's even a really nice complement to Java servlet coding within a larger project. So don't delay, get ready to use JSP with XFDL today. How would you like to be able to construct, deploy and get results from IT solutions using only your web browser? Don't believe me? Well, how about coming to the IBM Forms wiki, where you can watch a few short videos that show you. You'll be intrigued and want to go to the next step. One of the prominently available wiki pages is a community article that gives you a starter pack of prebuilt solutions like the ones you see in the videos. You can download any one or all of them because they're just single files that describe the forms, access control, workflow stages and other resources of each solution. You can import any of them into your own IBM Forms Experience Builder server, and then deploy them, use them, get results from them, and of course edit them to see how they work or to change them and redeploy them. All from your web browser. Since you will be a builder of forms experience solutions, we will need to be able to present your solutions to you, distinguished from everyone else's solutions. So, you'll have to start by registering yourself with the system that hosts the IBM Forms Experience Builder server.
The system is called Lotus Greenhouse, so click the link and then choose "Sign up" to get your account. Once you're able to log in to Greenhouse, you'll get access to a number of software products including IBM's social business software (Connections), IBM WebSphere Portal Server, and of course IBM Forms Experience Builder. However, you don't really need to log in to Greenhouse and then menu-navigate to IBM Forms Experience Builder when you can just bookmark the direct link to IBM Forms Experience Builder on Greenhouse. Once you log in with your Greenhouse user id and password, you'll see the "Manage" solutions page, which lists all of the Forms Experience Builder (FEB) applications that you have designed. This is the page that gives you the ability to create a "New Application" or "Import" one of those starter pack applications, all at the press of a button. So, you can try out and evaluate IBM Forms Experience Builder now and see for yourself that there really is a smarter web where you can construct valuable solutions without coding. If you are building IT solutions for your organization, you owe it to yourself to see how much more effective you'll be at satisfying your organization's IT solution demands. But even more importantly, if you're competing for IT solution services contracts, you owe it to yourself to become an IBM business partner or to expand your partnership to include IBM Forms Experience Builder. And finally, if you like to build industry-specific data management products, then you should consider becoming an IBM value-added reseller (VAR) so you can build your products more efficiently with IBM Forms Experience Builder and go to market with IBM to sell the bundle. In all these cases, you now have the access you need above so you can learn more and get started today. Forms exist to collect data from web users involved in business processes. Are you a business partner who wants to build solutions more quickly in order to make a higher margin?
Then read on! What if you could use a web browser to design not only the user interfaces of the multiple pages of a form, but also the whole solution for which it collects data? Now, with IBM Forms Experience Builder, you finally can. You can define the roles of users in the business process, and you can assign users and groups to those roles. You can even set up open roles whose users are defined dynamically during the business process once the right information is collected earlier in the process. For example, only once you take in a person's name can you access an LDAP service to look up his manager and then assign that person to the manager role for an approval step. You can define the user interface of a Form, and have an automatic database created on the server side to store database records corresponding to completed instances of that Form. You can even define multiple Forms that work together within a solution that collects data according to different record schemas. You can define the stages of a business process workflow that uses the Form or Forms to create and update database records. Stage transitions can branch forward, backward or even stay on the same stage to update a database record that still needs more work. You can define access control for each workflow stage and determine which Forms, pages and UI elements are available in each stage. You can even use the database records collected with one Form as a GUI-configurable web service within the fill experience of a second Form. For example, you could have one Form of a solution that collects inventory data, and then use that data in a second Form that makes it possible to order from available inventory. You can make the Form fill experience available within a portlet of an IBM Websphere Portal website. You can create a solution with your web browser, you can save it to the server, you can hit Deploy in your web browser, and then your users can access the Forms of the solution from web links.
If you later decide it is necessary to add to or change the solution, you can edit the solution again using your web browser and hit Deploy again. The data is retained for all the remaining form UI elements, and the database tables are altered as needed to make space to store data collected by any new form UI elements. Via web links, users can access the list of database records collected by the solution. Only the records to which the user has access are presented. If you're the solution creator/administrator, you can get access to all the records. Whoever is given a link to view the records can also set up their own customized filters for the data, so a user can truly use the view as a business process task list, and even filter down to tasks of a particular type, from a particular person, having met or exceeded some value, etc. Complete agile web solution creation. Lose the custom coding, gain the market advantage, get IBM Forms Experience Builder now. Continuing with the amazing stuff you can do with the eval() function: You can use it in a user interface binding to enable your form to programmatically control what the user sees. As a demonstration of this capability, I'll give you the pertinent parts of an XML editor form that dynamically adjusts to the XML structure, lets you edit the content of any leaf nodes, and gives you link buttons to drill deeper into element subtrees as well as a "back" button to go to the parent of any subtree whose leaves you may be editing. It starts with an XForms repeat, like this: <xforms:repeat nodeset="eval(repeatexpr)" id="XMLEditor"> The repeat expression is computed by the form and is changed by user actions that drill deeper into the XML tree or go back to parent elements. The repeat expression will end with "/*" so that the controls in the repeat will show the children of whatever node the repeat expression selects before the "/*".
For simplicity, I've put the XML data to be edited as the first element of the instance that also manages the calculation of the repeat expression, but you could do this as two separate instances instead. Here's the instance structure I used in this example: <xforms:instance id="data" xmlns=""> The first element could be anything, but I used a "purchase order" data structure, so this form will magically morph into a purchase order editor. Further, it should now be clear why in the last blog I concentrated on data that carried its own formula calculations and data validation rules. If I replace the Purchase-Order element above with the loan calculation data below, then this same form will help calculate your monthly payment on a loan: <Loan-Application label="ACME Used Car Loan Application"> <Principal required="true" constraint="Principal > 0 and Principal <= 50000"></Principal> <Duration required="true" constraint="Duration > 0 and Duration <= 84"></Duration> <Interest-Rate constraint="Interest-Rate > 0"> <rate hidden="true" value="Interest-Rate div 1200.0"/> </Interest-Rate> Within the repeat, we can use different kinds of form controls to be responsive to the identified types of data and also to the issue of whether something is an input or an output based on whether it has a computed value. Here are two examples at the XFDL+XForms level: In the predicates of the form controls, "not(*)" ensures that these form controls are only relevant if the data node is a leaf that is to be filled with character content. The "value" attribute in the data provides a calculation formula, so that has been used to distinguish when to provide an input versus an output form control. The two examples above make relevant form controls for data elements annotated with a currency type attribute. Other form controls for checkboxes and dates can be created to bind to types like booleans and dates.
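The two XFDL+XForms control examples referred to above were not preserved in this copy. As a hedged reconstruction based purely on the surrounding description (the `not(*)` leaf test, a currency `type` attribute, and a `value` attribute marking computed nodes), the pair might look roughly like this; it is my sketch, not the author's original markup, and it assumes the `xforms` prefix is bound as elsewhere in the form:

```xml
<!-- Reconstruction, not the original markup. Output control: a
     currency-typed leaf whose content is computed from the formula
     carried in its own value attribute -->
<xforms:output ref="self::node()[not(*)][@type='currency'][@value]">
  <xforms:label ref="@label"/>
</xforms:output>

<!-- Input control: a currency-typed leaf with no computed value,
     so the user types the amount -->
<xforms:input ref="self::node()[not(*)][@type='currency'][not(@value)]">
  <xforms:label ref="@label"/>
</xforms:input>
```

Inside the repeat, each control binds to the context node itself, and the predicates make the control relevant only for the matching kind of leaf.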
Next, let's look at how the repeat expression is computed: <xforms:bind nodeset="firstelem" calculate="local-name(instance('data')/*)" /> OK, so how do we adjust what the XForms repeat presents to the user? Basically, we want to either add a child element name to drill down into a subtree or subtract a child element name to go up a level. First, let's cover how to add an element, i.e. add a step to the location "path". Inside the repeat, each child element that is a subtree root (has children) gets an XForms trigger in a link style button. If you activate the trigger (press the button), then you drill down into the corresponding node. Here's what that looks like: The trigger ref binds to a node that has children, as tested by the predicate "[*]". The label shows the name of the child element whose subtree you will drill into if you activate the trigger. The action sequence simply chucks a slash plus that name onto the end of the "expr" as a new step in the location path. This adds to the "path", which adds to the "repeatexpr", which updates the XForms repeat to show the children of that subtree root. The trigger to go back up to a parent from a child is something that would live outside of the repeat because you only need one "back" button. It's actually a bit trickier because you can't directly grab the last slash in order to lop off the last location step in the path. Fortunately, XPath lets you find the first occurrence of a substring, and XForms actions include a loop. So, the way I did this was to construct a new expression out of all the location steps in the old one, except the last, which was detectable by there being no more slashes. Here's what that looks like: <xforms:trigger id="GoBack" ref="expr"> The first setvalue copies the "expr", less the leading slash, into the "scratchexpr". Then, we clear out the "expr" so we can build it up anew from the parts of the scratchexpr.
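The body of the GoBack trigger was cut off above. A hedged reconstruction matching the walkthrough might look like the following; the node names `expr` and `scratchexpr` come from the text, while the `ev` XML Events prefix and the XForms 1.1 `while` attribute are my assumptions about how the loop was expressed:

```xml
<!-- Reconstruction of the GoBack trigger described in the text -->
<xforms:trigger id="GoBack" ref="expr">
  <xforms:label>Back</xforms:label>
  <xforms:action ev:event="DOMActivate">
    <!-- copy expr, minus its leading slash, into scratchexpr -->
    <xforms:setvalue ref="../scratchexpr" value="substring(../expr, 2)"/>
    <!-- clear expr so it can be rebuilt step by step -->
    <xforms:setvalue ref="."/>
    <!-- while a slash remains, move one location step back onto expr;
         the final step (no slash left) is deliberately dropped -->
    <xforms:action while="contains(../scratchexpr, '/')">
      <xforms:setvalue ref="."
        value="concat(., '/', substring-before(../scratchexpr, '/'))"/>
      <xforms:setvalue ref="../scratchexpr"
        value="substring-after(., '/')"/>
    </xforms:action>
  </xforms:action>
</xforms:trigger>
```

Because the trigger is bound to `expr`, "." inside the actions refers to `expr`, and `scratchexpr` is reached as a sibling via "../".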
Now, we execute while "scratchexpr" still contains a slash, so the loop stops short of copying the last location step from scratchexpr to expr. Once the processing is complete, then once again, the modifications made to expr reverberate to "path" and then to "repeatexpr" due to the XForms binds above, and so the XForms repeat updates to show and allow editing of the content of the parent element. And that's it! Thanks to eval() used in combination with all the other pre-existing features of XForms, you can make a form that edits any XML element data structure.
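For reference, only the firstelem bind survived in the copy above. The chain of binds implied by the walkthrough (expr feeding path feeding repeatexpr) might look like this; `path`, `expr`, `repeatexpr` and `prefix` are assumed scratch nodes, siblings of the data element in the managing instance, and this is my sketch rather than the author's original markup:

```xml
<!-- Hedged reconstruction of the calculation chain -->
<xforms:bind nodeset="firstelem"
             calculate="local-name(instance('data')/*)"/>
<!-- path = root element name plus the slash-separated steps in expr -->
<xforms:bind nodeset="path"
             calculate="concat(../firstelem, ../expr)"/>
<!-- prefix is assumed to hold the constant text instance('data')/
     so the calculate avoids awkward nested-quote escaping -->
<xforms:bind nodeset="repeatexpr"
             calculate="concat(../prefix, ../path, '/*')"/>
```

With these binds, any edit to expr automatically recomputes path and then repeatexpr, which is what makes the eval()-driven repeat refresh itself.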
However, his explanatory model, which precludes any nongeographical explanations, has gaping holes when it comes to comparing the relative success of Europe with that of China in the Middle Ages. More than anything, a society's ascension is based on the strength of its technological base. Much of what can be attributed to Europe's ascension can be numbered on a list of new inventions, including the following: The water wheel marked a huge improvement in productive efficiency, allowing workers to labor on something else, whether on more concrete appliances or on intellectual endeavors, such as reading and writing. Eyeglasses doubled the working life of a skilled craftsman. Whereas before an artisan's skill would plummet with the decline of his sight, rendering him all but impotent by the age of 40, eyeglasses allowed fine workers to continue their vocation decades longer. The importance of the printing press can hardly be exaggerated. Although printing was originally invented in China, the printing press never caught on there because of the inflexibility of Chinese block type. But when Gutenberg invented the printing press for alphabetical languages, the world would never be the same. Literacy rates shot up, people began to read and think more, and productivity increased from more reliable documentation and communication. For all the progress that Europe witnessed in the centuries preceding the Renaissance (the 1100s through the 1300s), China was actually its superior at the time. The Chinese invented the wheelbarrow, stirrup, compass, paper, printing, and gunpowder. But as Europe witnessed progress going into the Renaissance, China endured a steep decline. Therefore, the question of why Europe, as opposed to China, emerged as the world's premier superpower can be restated as why Europe was more amenable to inventing new technologies than China, and why China actually went backwards.
As for China's regress, the Hungarian sinologist Etienne Balazs attributes it to its totalitarian constraints on private initiative, where monopolies reigned, bureaucracies were all-powerful, and Chinese ingenuity was sapped by the prevailing regulations that gripped its citizens from cradle to grave, all creating an artificial plateau that the Chinese could not surmount. As for Europe's relative success, David Landes, author of The Wealth and Poverty of Nations, attributes it to several factors. - "The Judeo-Christian respect for manual labor, as summed up in a number of biblical injunctions." He gives the example of when God warns Noah of the flood, and how God doesn't just save him, but instead tells Noah to build an ark. - The Judeo-Christian conception of Man being in control of nature, contra pagan nature worshippers. - But most importantly, just as China's decline could be extrapolated from its command economic system, Europe ascended due to its relative economic freedom. The institutions of private property and free enterprise gave the Europeans more incentive to innovate and create than the Chinese. It wasn't pure, laissez-faire Dickensian capitalism, but it was sufficiently close to it.
Kingston, NY / New York, NY EChO-Mansion – noun – A repurposed McMansion whose generated carbon footprint is counter-balanced by on-site sustainable techniques. The EChO-Mansion proposes repurposing foreclosed houses to balance out the carbon footprint generated by the previously inhabited McMansion. Prior to being repurposed, all salvageable items – doors, windows, fixtures, cabinets – are removed and donated for use by local Habitat for Humanity chapters. The EChO-Mansion is then reconfigured to provide the following functions: • Power Generation: Wind turbines are situated in window and door openings to provide wind-generated power. Solar (photovoltaic) array panels are situated on the existing roof structure. The electrical energy generated by the wind turbines and solar array panels is supplied back to the electrical grid for customer use. • Reburbian Greenhouse and Laboratory: The reburbia greenhouse will be utilized to grow vegetables and plants in an effort to offset the carbon footprint generated by transporting plants. A laboratory component will be provided to educate local residents. • Water Collection System: The basement is proposed to be repurposed to collect rain water that enters the EChO-Mansion. The cistern basement will pump collected water back to a central water filtration and distribution station. EChO-Mansion is a registered trademark of Mother Earth. Unauthorized use of 'EChO-Mansion' will result in far-reaching ecological consequences.
Theodore Roosevelt (1858-1919). Theodore Roosevelt's Letters to His Children. 1919. 97. ON THE WAY TO PORTO RICO U. S. S. Louisiana, At Sea, November 20, 1906. This is the third day out from Panama. We have been steaming steadily in the teeth of the trade wind. It has blown pretty hard, and the ship has pitched a little, but not enough to make either Mother or me uncomfortable. Panama was a great sight. In the first place it was strange and beautiful with its mass of luxuriant tropic jungle, with the treacherous tropic rivers trailing here and there through it; and it was lovely to see the orchids and brilliant butterflies and the strange birds and snakes and lizards, and finally the strange old Spanish towns and the queer thatch and bamboo huts of the ordinary natives. In the next place it is a tremendous sight to see the work on the canal going on. From the chief engineer and the chief sanitary officer down to the last arrived machinist or time-keeper, the five thousand Americans at work on the Isthmus seemed to me an exceptionally able, energetic lot, some of them grumbling, of course, but on the whole a mighty good lot of men. The West Indian negroes offer a greater problem, but they are doing pretty well also. I was astonished at the progress made. We spent the three days in working from dawn until long after darkness, dear Dr. Rixey being, of course, my faithful companion. Mother would see all she liked and then would go off on a little spree by herself, and she enjoyed it to the full.
The Kalachakra description of the universe is quite different from that presented in the other major Buddhist system of metaphysics: abhidharma, or topics of special knowledge. There are, of course, common elements in both, found in non-Buddhist Indian descriptions as well. These include multiple universes each passing through, at different times from each other, beginningless four-part cycles of formation, stabilization, disintegration and being empty, and each universe having a core mountain, Mount Meru, surrounded by continents, heavens and hells. The main differences between the two Buddhist systems concern the specifics of the four-part cycles, and the shape and size of the universe, Mount Meru and the continents. It is significant that Buddhism offers two descriptions of the universe. Each is valid for a different purpose, and there is no contradiction in having multiple portraits. The description of any phenomenon, then, depends not only on the conceptual framework of the author and the audience, but also on the use to which that description is put. For instance, we would certainly explain the plans to send a manned mission to Mars in a different manner to the politicians who are deciding the budget than to the engineers who are designing the machinery. Both portrayals of the mission, however, are valid, useful and necessary. Appreciating this point helps us understand voidness. Nothing exists with inherent characteristics on its own side rendering only one correct way to conventionally perceive, apprehend or describe it. The purpose of the abhidharma picture of the universe is to help practitioners develop discriminating awareness by working with complex systems of multiple variables. The purpose of the Kalachakra version is quite different. It is to provide the Buddhist equivalent of a unified field theory that explains the structure and workings of the cosmos, atoms, the human body and the experience of rebirth in a parallel manner.
The need for this unified theory is to provide a comprehensive basis, covering as much of samsara as possible, at which to aim the meditative practices of alternative Kalachakra for gaining liberation and enlightenment. A description of the external and internal worlds in terms of their unifying parallels reveals the shared underlying basis from which both derive, namely, clear light mind. The winds of karma that provide the impulses for a particular universe to evolve come from the collective karma on the clear light minds of prior beings. These clear light minds remain present during empty eons in between universal epochs. Likewise, the winds of karma that provide the impulses for a specific rebirth to occur arise from the individual karma on the clear light mind of a particular being. That clear light mind also continues during bardo periods in between rebirths. Meditation in analogy with the cycles through which the external and internal worlds pass and, in particular, in analogy with how each of these cycles periodically returns to its clear light basis provides a means to reach that basis. This is a unique feature of the anuttarayoga tantra technique. Once clear light mind is accessed, it is possible to make the necessary changes, namely, by focusing on voidness, to eliminate the confusion and its instincts that cloud it so that this basis no longer gives rise to the problems and sufferings associated with the external and internal cycles. This is the deepest reason why the proportions and shape of the universe, human body, and the mandala and body of the Buddha-figure Kalachakra are all the same. From Introduction to the Kalachakra Initiation by Alexander Berzin
For Stephanie Yarber, who received a diagnosis of premature ovarian failure at age 14, conceiving children the old-fashioned way was a life's wish. In 2003, after several unsuccessful and costly courses of in vitro fertilization (IVF) using her identical-twin sister's donated eggs, Yarber began looking into other options. There was adoption, of course. But there was also a riskier experimental alternative: ovarian transplantation. In her research, Yarber came across a surgeon and fertility specialist in Missouri, Dr. Sherman Silber of the Infertility Center of St. Louis, who in the late 1970s had performed the first successful testicular transplant between male identical twins, allowing the once infertile brother to father five children. Yarber wondered if the same doctor could do a similar procedure between her and her sister. Yarber's sister, who had three daughters and didn't plan to have any more children, eagerly agreed to help. "She wouldn't have said no," Yarber says. "I knew that." Silber remembers the day he first spoke to Yarber. Her enthusiasm was contagious. But despite his vast experience with microsurgery and his success with male patients (he had also performed the world's first vasectomy reversal), Silber knew that all previous ovarian transplants in the U.S. had failed, as had those performed abroad. Still, he thought, in theory the procedure was possible. Yarber's surgery was scheduled for April 2004. Yarber's microsurgical procedure involved the transplantation from her sister to her of a thin strip of cortical tissue, the part of the ovary that produces eggs. (The leftover strips of egg-producing tissue from the harvested ovary were frozen and stored for future use.) Within months, Yarber began menstruating. In September 2004, just five months after the transplant, she was pregnant.
Five years and another tissue transplant later, Yarber has two daughters, ages 3½ years and 10 months, and is trying for a third child. Owing in large part to Yarber's willingness to talk about her experience, Silber has since performed the same procedure for eight other sets of identical twins. "There are lots of women who are in our position who are not able to have children and who are looking for something," says Yarber. "If we didn't speak about it, there wouldn't have been so many other twins able to do it." The battle to preserve and prolong women's fertility has become increasingly visible of late. While advances in techniques like cryopreservation (the freezing and storing of eggs and embryos, for example, and now also ovarian tissue for transplants) have increased many women's chances of pregnancy, IVF is still a time-consuming and expensive process, and one that holds no guarantees. Success rates with IVF parallel fertility rates in the general population, dramatically declining with age. After 40, success rates drop to as low as 23%, and after age 43, Silber says, pregnancy is very rare. But other fertility treatments, including experimental procedures such as harvesting immature eggs and maturing them in vitro for IVF, and the transplantation of ovarian tissue or entire intact ovaries, have gained ground in the past five years, especially for women with premature infertility or infertility resulting from cancer therapy. An article published in the Feb. 26 issue of the New England Journal of Medicine urges oncologists to consider fertility preservation, including the use of experimental techniques, more routinely with their patients, since as many as 90% of women who undergo full-body radiation become infertile. But even as fertility specialists offer hope for many women who believed they would never bear their own children, ethicists warn that doctors must tread carefully in developing the technology.
Silber, prompted by success with cortical-tissue transplants, decided to try transplanting a whole ovary. He performed the first successful such transplant between a set of 38-year-old identical twins in January 2007. A few months after surgery, the infertile twin got her period for the first time in more than two decades. Less than a year later, she was pregnant. Last November, she gave birth to a healthy baby girl. One month after performing the whole-ovary transplant, Silber tried the same procedure on a set of nonidentical twins for the first time. The recipient of the ovary, a San Francisco woman named Joy Lagos, had become infertile after cancer treatment. But the hope was that because Lagos had received a bone-marrow transplant from her older sister as part of that treatment, which transformed Lagos' immune system into a chimera, or hybrid, of her sister's and her own cells, her combination immune system would stand a far better chance of accepting her sister's ovary without the need for long-term immunosuppressant drugs. The procedure went off without a hitch. But several months later, Lagos' hormones began reverting to menopausal levels. The ovary failed. In October 2007 she tried again, with a cortical-tissue transplant from her sister, harvested during the earlier procedure. Six months later, Lagos got her period for the first time in years. "This means that the ovary is working and we can start trying to get pregnant for real!" she wrote ecstatically on the blog she shares with her husband. But by summer, Lagos learned that the second transplant had also failed. Silber concluded it was most likely an organ rejection. "I view this as an error in judgment," says Silber. "We all thought we didn't have to immunosuppress her." Yet with the use of immunosuppressant drugs, he says, the technique could work between sisters or even strangers.
"We know that's a safe thing to do," Silber says, citing the many published cases of babies born to women on long-term immunosuppressants. And because ovaries are not vital organs, he says, the immunosuppressant regimen for ovary-transplant patients would be much more modest than average. "If it doesn't work, we're not going to take a chance with their life as we would with a kidney or a liver," he says.
Dr. Hogan received her A.B. degree in Biology from Harvard University in 1993, and her Ph.D. in Microbiology from Michigan State University in 1999. After postdoctoral work at Harvard Medical School, Dr. Hogan joined the faculty of the Department of Microbiology and Immunology at Dartmouth Medical School in 2004. The interactions between different microbial species govern the activity of microbial communities, whether they be in association with a host or free-living in the environment. Microbial communities have very significant effects on human health. For example, synergistic relationships between the organisms within the human microflora confer protection against pathogens and enable the degradation of complex substrates. At the same time, many illnesses, such as respiratory and genital infections, gastroenteritis, and periodontal diseases, often involve multiple microorganisms. In the Hogan Lab, we are interested in understanding the molecular basis for such interactions by describing the mechanisms by which one microbe affects the physiology, survival, and virulence properties of another microbial species. Our lab primarily focuses on the interactions between the Gram-negative bacterium Pseudomonas aeruginosa and the dimorphic fungus Candida albicans. These two organisms co-exist within diverse opportunistic human infections, and clinical observations suggest that P. aeruginosa inhibits C. albicans growth. In our in vitro system, we observe that the bacteria physically attach to the fungal filaments, form biofilms on their surfaces, and kill the fungal cells. Many of the bacterial factors used to kill the fungus also participate in P. aeruginosa virulence towards humans. The fungus responds to the presence of the P. aeruginosa by reverting to a resistant yeast form.
We are using genetic screening methods, analysis of defined mutants, biochemical approaches and genomic profiling techniques to better understand the bacterial and fungal factors that are involved in this relationship. By studying the interactions between microbial species, we are learning about important elements relating to the physiology and pathogenesis of the individual microbes, in addition to gaining insight into how microbial communities function.
Sodium selenite alters microtubule assembly and induces apoptosis in vitro and in vivo

Previous studies demonstrated that selenite induced cancer-cell apoptosis through multiple mechanisms; however, effects of selenite on microtubules in leukemic cells have not been demonstrated. The toxic effect of selenite on leukemic HL60 cells was assessed with Cell Counting Kit-8. Selenite effects on cell cycle distribution and apoptosis induction were determined by flow cytometry. The contents of cyclin B1, Mcl-1, AIF, cytochrome C, and insoluble and soluble tubulins were detected with western blotting. Microtubules were visualized with indirect immunofluorescence microscopy. The interaction between CDK1 and Mcl-1 was assessed with immunoprecipitation. Knockdown of Mcl-1 and cyclin B1 expression was carried out through siRNA interference. The alterations of Mcl-1 and cyclin B1 in the animal model were detected with either immunohistochemical staining or western blotting. In situ detection of the apoptotic ratio was performed with a TUNEL assay. Our current results showed that selenite inhibited the growth of HL60 cells and induced mitochondrial-related apoptosis. Furthermore, we found that microtubule assembly in HL60 cells was altered, those cells were arrested at G2/M phase, and Cyclin B1 was up-regulated and interacted with CDK1, which led to down-regulation of the anti-apoptotic protein Mcl-1. Finally, in vivo experiments confirmed the in vitro microtubule disruption effect and alterations in Cyclin B1 and Mcl-1 levels by selenite. Taken together, the results from our study indicate that microtubules are novel targets of selenite in leukemic HL60 cells.

Keywords: Sodium selenite; Apoptosis; Microtubule; Cell cycle

Microtubules have important roles in many cell behaviors such as cell division, organelle positioning, vesicular transport and cell-shape determination [1–3]. Previous studies have shown that microtubule dynamics are necessary for these functions in vivo [2, 4–6].
Therefore, chemicals affecting microtubule dynamics often impact these functions in vivo. On that basis, many anti-tumor agents have been developed for their effects on microtubule dynamics and cell-cycle distribution [7–12]. Selenium (Se) is an essential trace element, and appropriate selenium intake is necessary for the body to synthesize selenoproteins. Some researchers have shown that selenite concentrations that are within the nutritional range inhibit tumor formation through antioxidant activity, the inhibition of DNA adduct formation, and the promotion of cell cycle progression and DNA repair [14–16]. However, super-nutritional levels of selenite induce endoplasmic reticulum stress, mitochondrial-related apoptosis, DNA strand breaks and cell-cycle arrest [15–19]. Therefore, many molecules, such as Akt, GADD153, P53, ERK, P38, Bad, Bim and Bax [20–24], have been reported to be involved in high-dose selenite-induced apoptosis. Additionally, super-nutritional selenite intake has been shown to be toxic to drug-resistant cancer cells and effective on tumor xenografts, which suggests that selenite has potential therapeutic effects [23–25]. In an in-depth study of selenium, selenite was reported to have strong inhibitory effects on sulfhydryl-containing proteins such as tubulins, which compose microtubules [26, 27], but the effects of selenite on microtubules in cancer cells had not been proven. Based on our proteomics study, proteins linked to microtubule dynamics were thought to have roles in selenite-triggered apoptosis. Therefore, our study aimed to investigate the role of selenite in microtubule assembly and induction of apoptosis. Leynadier D et al. first discovered that selenite could directly interact with the sulfhydryl groups of β-tubulin and could inhibit microtubule polymerization in vitro. To our knowledge, we are the first to discover that selenite also induces microtubule depolymerization in HL60 cells and in vivo.
However, because microtubules reorganized in Jurkat but not in HL60 cells, the apoptotic mechanisms of the two cell lines differed. We mainly investigated the mechanisms by which selenite induced apoptosis. Because tumor cells have a strong ability to replicate themselves, and tubulins, which compose spindles, are essential for this process, we speculate that selenite-induced apoptosis is at least partly dependent on the effects of selenite on microtubules. Therefore, the growth inhibitory effect of selenite on cultured HL60 cells was assessed, and we discovered that 20 μM of sodium selenite significantly inhibited cell growth. Furthermore, an Annexin V-FITC/PI double staining assay proved that selenite-induced apoptosis occurred, and nuclear fragmentation was observed in selenite-treated cells. Last, we discovered that cytochrome C and AIF were released from the mitochondria to the cytoplasm, which suggested that selenite-induced apoptosis in HL60 cells might be associated with the mitochondrial apoptotic pathway. Cell cycle-related proteins that were consistently altered with microtubule dynamics could regulate Bcl-2 family members, which were located in the mitochondria. Therefore, we speculated that selenite inhibited HL60 cell growth through its effects on microtubules. Several reports suggested that microtubule-interfering drugs affected cell cycle distribution by regulating the activity of CDKs and, therefore, altering protein phosphorylation at different cell cycle phases. Mcl-1, a Bcl-2 family member, is regulated by the Cyclin B1/CDK1 complex and is linked to the mitochondrial apoptotic pathway by binding and inhibiting pro-apoptotic proteins [12, 30, 31]. Our current study proved that selenite could induce cell cycle arrest and remarkable alterations of Cyclin B1 and Mcl-1 levels in HL60 cells through its effect on microtubule depolymerization.
Interestingly, a combination treatment of colchicine and selenite in Jurkat cells up-regulated Cyclin B1 and down-regulated Mcl-1. The observations in Jurkat cells thus also supported the relationship between microtubule destruction and the alterations in Cyclin B1 and Mcl-1 after selenite exposure. Cyclin B1 is necessary for the activity of CDK1, which phosphorylates and destabilizes Mcl-1 [32–36]. We observed that Cyclin B1 interacted with CDK1. Furthermore, either siRNA knockdown of Cyclin B1 or inhibition of the CDK1/Cyclin B1 complex with Roscovitine rescued the decrease in Mcl-1. Further investigation confirmed the protective role of Mcl-1 in HL60 cells and suggested that the growth-inhibitory effects of selenite might be associated with the down-regulation of Mcl-1. Finally, a combination of siRNA targeting Mcl-1 and selenite treatment caused a higher apoptotic ratio than selenite treatment alone. These results supported our conclusion that selenite altered microtubule assembly and inhibited HL60 cell growth through cell cycle arrest and a decrease in Mcl-1 levels. The above-described experiments indicated that selenite altered microtubule assembly and induced cell cycle arrest in HL60 cells. To assess the therapeutic activity of selenite in vivo, we established an HL60-cell-bearing nude mouse model. In vivo experiments showed that selenite inhibited tumor growth and induced nuclear pyknosis. Furthermore, we also found that selenite depolymerized microtubules in vivo. Additional experiments demonstrated that the alterations of Cyclin B1 and Mcl-1 levels in the nude mouse model were similar to the findings in vitro, which suggested that the mechanisms demonstrated in vitro were also active at the tissue level. In conclusion, the microtubule destruction induced by selenite stimulated the apoptotic pathway by up-regulating Cyclin B1, which interacted with CDK1 and destabilized the anti-apoptotic protein Mcl-1.
We also found that sodium selenite had therapeutic effects in an HL60-cell-bearing nude mouse model through its microtubule-destroying effects. Importantly, this investigation explored the effects of selenite on apoptosis in a distinct way.

Materials and methods

Chemicals and antibodies

Roscovitine, anti-β-Tubulin (2-28-33) and anti-β-Actin (AC-15) antibodies were obtained from Sigma-Aldrich. Anti-Cyclin B1 and anti-Mcl-1 antibodies, which were used for western blotting, were obtained from Cell Signaling Technology. For immunohistochemical staining, an anti-Cyclin B1 antibody was purchased from Excell, an anti-CD33 antibody was purchased from BIOSS and an anti-Mcl-1 antibody was purchased from Santa Cruz. The anti-Cdc2 (Cdk1/Cdc2) antibody was purchased from BD Biosciences Pharmingen. HRP-conjugated anti-mouse and anti-rabbit antibodies were purchased from ZSGB-BIO. A FITC-conjugated anti-mouse antibody was purchased from Jackson.

Cell culture

HL60 and Jurkat cells were grown in RPMI 1640 medium containing 10% advanced fetal bovine serum, 100 units/mL penicillin and 100 units/mL streptomycin, and were incubated in a humidified 5% CO2 incubator at 37°C.

Indirect immunofluorescence microscopy

HL60 cells (8 × 10⁵ total) were harvested. The cells were transferred to slides, fixed in 4% paraformaldehyde and permeabilized using 0.1% Triton X-100. After the slides were blocked with 2% BSA, the cells were incubated with β-tubulin antibody overnight at 4°C. After washing with PBS three times, the cells were incubated with FITC-conjugated secondary antibody for 60 min at room temperature. After a second round of washing, the cells were stained with DAPI for approximately 5 min, and the slides were washed three times and mounted in anti-fading medium. Images were visualized using a Zeiss microscope (Carl Zeiss, Jena, Germany).

Western blot analysis

Approximately 1 × 10⁶ cells were collected for each treatment.
After washing with ice-cold PBS, the cells were resuspended in RIPA lysis buffer (20 mM Tris, pH 7.5; 1 mM EDTA; 1 mM EGTA; 150 mM NaCl; 1% Triton X-100; 2.5 mM sodium pyrophosphate; 1 mM β-glycerophosphate; 1 mM Na3VO4; 1 mM PMSF; and 1 μg/mL leupeptin) and subjected to ultrasonication on ice. The lysates were centrifuged at 12,000 × g for 20 min at 4°C, and equal amounts of protein were separated by SDS-PAGE. The proteins were then transferred from the gel to a nitrocellulose membrane. After being blocked with 5% non-fat milk, the membranes were washed with TBST and incubated overnight with primary antibody at 4°C. After being washed three times with TBST, the membranes were incubated with an HRP-conjugated secondary antibody for approximately 1 h at room temperature. Subsequently, the membranes were washed another three times and developed with SuperSignal chemiluminescent substrate.

Co-immunoprecipitation

Cells (1 × 10⁷) were harvested and washed twice with ice-cold PBS. The pellets were resuspended in RIPA buffer and lysed on ice for 30 min. Subsequently, the lysates were centrifuged at 12,000 × g for 20 min at 4°C. A suitable amount of Cdc2 antibody was added to the protein lysate (200 μg) and rotated overnight at 4°C, while the remaining protein was used as input. Protein A+G beads were added, and the mixture was rotated for another 3 h at 4°C; the samples were then washed three times with RIPA buffer. Finally, the beads were resuspended in 3 × SDS loading buffer and boiled for 10 min. After a brief centrifugation step, the supernatant was collected.

siRNA transfection

siRNAs targeting Cyclin B1 (5′-CCAAACCTTTGTAGTGAAT-3′) and Mcl-1 (5′-GGACTGGCTAGTTAAACAA-3′), together with negative controls for each sequence, were synthesized by GenePharma. Approximately 1 × 10⁷ cells were collected and washed with Opti-MEM medium (Gibco). The cells were then transfected with 200 nM siRNA using RNAiMAX in Opti-MEM. After transfection for approximately 12 h, the cells were treated with sodium selenite for 24 h.
Detection of cell cycle distribution

Approximately 1 × 10⁶ cells were collected and fixed in 70% ethanol overnight at 4°C. Each sample was centrifuged at 1,000 × g for 10 min at room temperature and washed with ice-cold PBS. Subsequently, the cells were incubated with 50 μg/mL RNase in PBS for 30 min at 37°C. After adding PI to the cells at a final concentration of 50 μg/mL, we detected the absorption at 620 nm by flow cytometry.

Detection of apoptosis with Annexin V-FITC/PI staining

The cells (1 × 10⁶) were harvested and washed twice with ice-cold PBS. Subsequently, the cells were stained with Annexin V-FITC in binding buffer in the dark for 15 min. After being centrifuged at 1,000 × g for 10 min, the cells were resuspended in binding buffer containing PI. Finally, the apoptotic ratio was determined with an Accuri C6 flow cytometer. The apoptotic ratio was calculated as the sum of the Annexin V+/PI− and Annexin V+/PI+ cell fractions.

The effect of sodium selenite on the viability of HL60 cells

HL60 cells were seeded into a 96-well plate at 40,000 cells per well. After treatment with varying concentrations of sodium selenite for 24 h, cell viability was assessed using CCK-8 kits (Dojindo Molecular Technologies, Tokyo, Japan).

The in vivo microtubule polymerization assay

An established method was modified and used to separate insoluble tubulin from soluble tubulin. Approximately 2 × 10⁶ cells were collected and washed twice. The cells were then resuspended in hypotonic buffer at 37°C for 5 min. After centrifugation at 14,000 × g for 10 min at 25°C, the supernatants, which contained soluble tubulin, were collected, and the pellets containing insoluble tubulin were resuspended in RIPA buffer and subjected to ultrasonication on ice. The lysates were centrifuged at 12,000 × g for 10 min at 4°C, and the supernatants were collected.
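The apoptotic-ratio arithmetic described above (sum of the early Annexin V+/PI− and late Annexin V+/PI+ fractions) can be sketched as follows. This is a minimal illustration only; the quadrant counts are hypothetical, not data from the study.

```python
def apoptotic_ratio(annexin_pos_pi_neg, annexin_pos_pi_pos, total_events):
    """Apoptotic ratio as the sum of the early (Annexin V+/PI-) and
    late (Annexin V+/PI+) apoptotic fractions, as described in the text."""
    return (annexin_pos_pi_neg + annexin_pos_pi_pos) / total_events

# Hypothetical flow cytometry quadrant counts for 10,000 acquired events:
ratio = apoptotic_ratio(annexin_pos_pi_neg=1200, annexin_pos_pi_pos=800,
                        total_events=10000)
print(f"{ratio:.0%}")  # -> 20%
```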
Xenograft tumor model

At the beginning of the experiment, 4-week-old female mice were chosen and randomly divided into a control group and a selenite-treated group. Each group was marked and housed in its own box. The two groups lived in the same environment and were fed the same food and water. Leukemia HL60 cells were inoculated subcutaneously into the female nude mice. After tumors were palpable, an intraperitoneal injection of sodium selenite dissolved in PBS was given to each mouse every 2 days (1.5 mg/kg/day) for 3 weeks, and the control group was treated with PBS for the same period of time. At the end of the experiment, the mice were sacrificed, and the tumors and spleens were rapidly removed and weighed. The Declaration of Helsinki and the Guide for Laboratory Animal Care and Use were followed.

Immunohistochemistry

The slides were deparaffinized in xylene and rehydrated through decreasing concentrations of ethanol. After the slides were washed with running water for 2 min, endogenous peroxidase was blocked with 3% peroxide dissolved in methanol. Subsequently, the slides were immersed in boiling sodium citrate buffer for antigen retrieval, and after being washed three times with 0.01 M PBS for 5 min, the slides were incubated with either anti-Mcl-1 or anti-Cyclin B1 antibody overnight at 4°C. The slides were incubated with secondary antibody at room temperature for 3 h, treated with DAB, stained with Mayer's hematoxylin for 2 min and washed with running water. The slides were then dehydrated through increasing concentrations of ethanol and cleared with xylene. Finally, the slides were mounted with medium.

Hematoxylin and eosin staining

After deparaffinization and rehydration through decreasing concentrations of ethanol as described above, the slides were stained with Harris hematoxylin for 15 min and washed for 3 min. The slides were then immersed in 1% hydrochloric acid in 75% ethanol for 30 s. Before dehydration, the slides were stained with eosin for 10 min. Finally, the slides were cleared with xylene and mounted with medium.
TUNEL assay

A FragEL™ DNA Fragmentation Detection Kit was purchased from MERCK. The slides were deparaffinized in xylene and rehydrated through decreasing concentrations of ethanol. After washing with 1 × TBS for 2 min, the slides were incubated with 20 μg/mL proteinase K for 20 min at room temperature. The slides were then washed with 1 × TBS and incubated with 1 × TdT buffer for approximately 30 min at room temperature. Subsequently, the slides were incubated with 57 μL of mix buffer and 3 μL of TdT enzyme for 60 min at 37°C. After being washed three times with 1 × TBS, the slides were mounted with medium.

Statistical analysis

Values are presented as mean ± SEM. Two-tailed Student's t-tests were used for comparisons between two groups, and P < 0.05 was considered significant. Bar charts were used to show the alterations in the experimental data [18, 24, 37, 38].

References

- McIntosh JR, Hering GE: Spindle fiber action and chromosome movement. Annu Rev Cell Biol. 1991, 7: 403-426. 10.1146/annurev.cb.07.110191.002155.
- Walker RA, O'Brien ET, Pryer NK, Soboeiro MF, Voter WA, Erickson HP, Salmon ED: Dynamic instability of individual microtubules analyzed by video light microscopy: rate constants and transition frequencies. J Cell Biol. 1988, 107: 1437-1448. 10.1083/jcb.107.4.1437.
- Carlier MF: Role of nucleotide hydrolysis in the dynamics of actin filaments and microtubules. Int Rev Cytol. 1989, 115: 139-170.
- Erickson HP, O'Brien ET: Microtubule dynamic instability and GTP hydrolysis. Annu Rev Biophys Biomol Struct. 1992, 21: 145-166. 10.1146/annurev.bb.21.060192.001045.
- Meshkini A, Yazdanparast R: Involvement of oxidative stress in taxol-induced apoptosis in chronic myelogenous leukemia K562 cells. Exp Toxicol Pathol. 2012, 64: 357-365.
10.1016/j.etp.2010.09.010.
- Wilson L, Jordan MA: Microtubule dynamics: taking aim at a moving target. Chem Biol. 1995, 2: 569-573. 10.1016/1074-5521(95)90119-1.
- Hamel E, Lin CM: Reexamination of the role of nonhydrolyzable guanosine 5′-triphosphate analogues in tubulin polymerization: reaction conditions are a critical factor for effective interactions at the exchangeable nucleotide site. Biochemistry. 1990, 29: 2720-2729. 10.1021/bi00463a015.
- Tanaka E, Ho T, Kirschner MW: The role of microtubule dynamics in growth cone motility and axonal growth. J Cell Biol. 1995, 128: 139-155. 10.1083/jcb.128.1.139.
- Pietenpol JA, Stewart ZA: Cell cycle checkpoint signaling: cell cycle arrest versus apoptosis. Toxicology. 2002, 181–182: 475-481.
- Bhalla KN: Microtubule-targeted anticancer agents and apoptosis. Oncogene. 2003, 22: 9075-9086. 10.1038/sj.onc.1207233.
- Behne D, Kyriakopoulos A: Mammalian selenium-containing proteins. Annu Rev Nutr. 2001, 21: 453-473. 10.1146/annurev.nutr.21.1.453.
- Guan L, Han B, Li J, Li Z, Huang F, Yang Y, Xu C: Exposure of human leukemia NB4 cells to increasing concentrations of selenite switches the signaling from pro-survival to pro-apoptosis. Ann Hematol. 2009, 88: 733-742. 10.1007/s00277-008-0676-4.
- Brozmanova J, Manikova D, Vlckova V, Chovanec M: Selenium: a double-edged sword for defense and offence in cancer. Arch Toxicol. 2010, 84: 919-938. 10.1007/s00204-010-0595-8.
- Zeng H: Selenite and selenomethionine promote HL-60 cell cycle progression. J Nutr.
2002, 132: 674-679.
- Cao TM, Hua FY, Xu CM, Han BS, Dong H, Zuo L, Wang X, Yang Y, Pan HZ, Zhang ZN: Distinct effects of different concentrations of sodium selenite on apoptosis, cell cycle, and gene expression profile in acute promyelocytic leukemia-derived NB4 cells. Ann Hematol. 2006, 85: 434-442. 10.1007/s00277-005-0046-4.
- Li Z, Shi K, Guan L, Cao T, Jiang Q, Yang Y, Xu C: ROS leads to MnSOD upregulation through ERK2 translocation and p53 activation in selenite-induced apoptosis of NB4 cells. FEBS Lett. 2010, 584: 2291-2297. 10.1016/j.febslet.2010.03.040.
- Guan L, Han B, Li Z, Hua F, Huang F, Wei W, Yang Y, Xu C: Sodium selenite induces apoptosis by ROS-mediated endoplasmic reticulum stress and mitochondrial dysfunction in human acute promyelocytic leukemia NB4 cells. Apoptosis. 2009, 14: 218-225. 10.1007/s10495-008-0295-5.
- Han B, Wei W, Hua F, Cao T, Dong H, Yang T, Yang Y, Pan H, Xu C: Requirement for ERK activity in sodium selenite-induced apoptosis of acute promyelocytic leukemia-derived NB4 cells. J Biochem Mol Biol. 2007, 40: 196-204. 10.5483/BMBRep.2007.40.2.196.
- Zou Y, Niu P, Yang J, Yuan J, Wu T, Chen X: The JNK signaling pathway is involved in sodium-selenite-induced apoptosis mediated by reactive oxygen in HepG2 cells. Cancer Biol Ther. 2008, 7: 689-696. 10.4161/cbt.7.5.5688.
- Ren Y, Huang F, Liu Y, Yang Y, Jiang Q, Xu C: Autophagy inhibition through PI3K/Akt increases apoptosis by sodium selenite in NB4 cells. BMB Rep. 2009, 42: 599-604. 10.5483/BMBRep.2009.42.9.599.
- Yang Y, Huang F, Ren Y, Xing L, Wu Y, Li Z, Pan H, Xu C: The anticancer effects of sodium selenite and selenomethionine on human colorectal carcinoma cell lines in nude mice. Oncol Res. 2009, 18: 1-8.
10.3727/096504009789745647.
- Huang F, Nie C, Yang Y, Yue W, Ren Y, Shang Y, Wang X, Jin H, Xu C, Chen Q: Selenite induces redox-dependent Bax activation and apoptosis in colorectal cancer cells. Free Radic Biol Med. 2009, 46: 1186-1196. 10.1016/j.freeradbiomed.2009.01.026.
- Hu H, Jiang C, Schuster T, Li GX, Daniel PT, Lu J: Inorganic selenium sensitizes prostate cancer cells to TRAIL-induced apoptosis through superoxide/p53/Bax-mediated activation of mitochondrial pathway. Mol Cancer Ther. 2006, 5: 1873-1882. 10.1158/1535-7163.MCT-06-0063.
- Leynadier D, Peyrot V, Codaccioni F, Briand C: Selenium: inhibition of microtubule formation and interaction with tubulin. Chem Biol Interact. 1991, 79: 91-102. 10.1016/0009-2797(91)90055-C.
- Mi L, Xiao Z, Hood BL, Dakshanamurthy S, Wang X, Govind S, Conrads TP, Veenstra TD, Chung FL: Covalent binding to tubulin by isothiocyanates. A mechanism of cell growth arrest and apoptosis. J Biol Chem. 2008, 283: 22136-22146. 10.1074/jbc.M802330200.
- Dong H, Ying T, Li T, Cao T, Wang J, Yuan J, Feng E, Han B, Hua F, Yang Y: Comparative proteomic analysis of apoptosis induced by sodium selenite in human acute promyelocytic leukemia NB4 cells. J Cell Biochem. 2006, 98: 1495-1506. 10.1002/jcb.20755.
- Jiang Q, Wang Y, Li T, Shi K, Li Z, Ma Y, Li F, Luo H, Yang Y, Xu C: Heat shock protein 90-mediated inactivation of nuclear factor-kappaB switches autophagy to apoptosis through becn1 transcriptional inhibition in selenite-induced NB4 cells. Mol Biol Cell. 2011, 22: 1167-1180.
10.1091/mbc.E10-10-0860.
- Yang JS, Hour MJ, Huang WW, Lin KL, Kuo SC, Chung JG: MJ-29 inhibits tubulin polymerization, induces mitotic arrest, and triggers apoptosis via cyclin-dependent kinase 1-mediated Bcl-2 phosphorylation in human leukemia U937 cells. J Pharmacol Exp Ther. 2010, 334: 477-488. 10.1124/jpet.109.165415.
- Wang YF, Jiang CC, Kiejda KA, Gillespie S, Zhang XD, Hersey P: Apoptosis induction in human melanoma cells by inhibition of MEK is caspase-independent and mediated by the Bcl-2 family members PUMA, Bim, and Mcl-1. Clin Cancer Res. 2007, 13: 4934-4942. 10.1158/1078-0432.CCR-07-0665.
- Chen YC, Lu PH, Pan SL, Teng CM, Kuo SC, Lin TP, Ho YF, Huang YC, Guh JH: Quinolone analogue inhibits tubulin polymerization and induces apoptosis via Cdk1-involved signaling pathways. Biochem Pharmacol. 2007, 74: 10-19. 10.1016/j.bcp.2007.03.015.
- Doma E, Chakrabandhu K, Hueber AO: A novel role of microtubular cytoskeleton in the dynamics of caspase-dependent Fas/CD95 death receptor complexes during apoptosis. FEBS Lett. 2010, 584: 1033-1040. 10.1016/j.febslet.2010.01.059.
- Shin JW, Son JY, Kang JK, Han SH, Cho CK, Son CG: Trichosanthes kirilowii tuber extract induces G2/M phase arrest via inhibition of tubulin polymerization in HepG2 cells. J Ethnopharmacol. 2008, 115: 209-216. 10.1016/j.jep.2007.09.030.
- Harley ME, Allan LA, Sanderson HS, Clarke PR: Phosphorylation of Mcl-1 by CDK1-cyclin B1 initiates its Cdc20-dependent destruction during mitotic arrest. EMBO J. 2010, 29: 2407-2420. 10.1038/emboj.2010.112.
- Mollinedo F, Gajate C: Microtubules, microtubule-interfering agents and apoptosis. Apoptosis. 2003, 8: 413-450.
10.1023/A:1025513106330.
- Yedjou C, Tchounwou P, Jenkins J, McMurray R: Basic mechanisms of arsenic trioxide (ATO)-induced apoptosis in human leukemia (HL-60) cells. J Hematol Oncol. 2010, 3: 28. 10.1186/1756-8722-3-28.
- Zou L, Zhang H, Du C, Liu X, Zhu S, Zhang W, Li Z, Gao C, Zhao X, Mei M: Correlation of SRSF1 and PRMT1 expression with clinical status of pediatric acute lymphoblastic leukemia. J Hematol Oncol. 2012, 5: 42. 10.1186/1756-8722-5-42.
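The two-group comparison described under Statistical analysis (mean ± SEM, two-tailed Student's t-test) can be sketched as follows. This is a minimal illustration with hypothetical measurements, not data from the study; in practice the two-tailed P value would be read from a t distribution (e.g. via scipy.stats) with n1 + n2 − 2 degrees of freedom.

```python
import math
import statistics

def mean_sem(values):
    """Mean +/- SEM, the form in which values are reported in the text."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(len(values))
    return m, sem

def t_statistic(a, b):
    """Unpaired two-sample Student's t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    # Pooled variance across the two groups:
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical control vs. selenite-treated measurements:
control = [1.0, 1.1, 0.9]
treated = [0.5, 0.6, 0.4]
print(round(t_statistic(control, treated), 2))  # -> 6.12
```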
Basic Transmission Technologies

I would like to share with you some basic principles and technologies which are, or used to be, implemented on fiber optic networks. These are the most common ones, and they operate at Layers 1 and 2 of the OSI model. There are two TDM transmission systems, each with its own hierarchy, in use: the plesiochronous (nearly synchronous) digital hierarchy (PDH) and the synchronous digital hierarchy (SDH), the latter being the more recent and providing higher-speed channels, further scaled by technologies like DWDM.

PLESIOCHRONOUS DIGITAL HIERARCHY (PDH)

This hierarchy was developed nearly 40 years ago to carry digital voice channels. It structures the transmission infrastructure into several layers. Different hierarchical levels are used in North America, in Europe and in Japan. See attachment for table.

PDH is an asynchronous multiplexing scheme in the sense that the different tributary channels do not have to be clock-synchronized with one another or with the aggregate channel. A centralized and very stable network clock is unnecessary, which avoids the problems of clock stability, recovery and distribution. The trade-off is the implementation complexity of cross-connecting the aggregated channels, which requires demultiplexing and remultiplexing at each cross-connect node.

SYNCHRONOUS DIGITAL HIERARCHY (SONET/SDH)

There are two standards for the synchronous digital hierarchy: SONET (Synchronous Optical NETwork) for North America and SDH (Synchronous Digital Hierarchy), the ITU international standard. The two key benefits of this newer hierarchy versus PDH are:
- Higher transmission and aggregate rates
- Direct multiplexing and cross-connecting without intermediate multiplexing stages, thanks to its synchronous nature and to pointers in the multiplex streams that delineate the aggregated sub-streams.

The SONET or SDH transmission system is structured in several levels, each of them characterized by a transport channel called the Synchronous Transport Signal (STS) in SONET and the Synchronous Transport Module (STM) in SDH.
This channel is transmitted over an Optical Carrier (OC). The different possible SONET/SDH levels are presented in the following table. See attachment for table.

Note that the bit rate of STS-N (level N) = N × STS-1 = N × 51.84 Mbps, and that STM-N is equivalent to STS-3N, i.e. the bit rate of STM-N is three times that of STS-N. The predominantly used channels are the STM-1 (or STS-3) and the STM-4 (or STS-12). SONET/SDH is a superhighway using fiber optics that rings most major cities and provides terabits of bandwidth. It is the basic foundation, the underlying transmission network, of very high-speed networks such as ATM or SMDS, but also of terrestrial video networks. The STS or STM transport channel aggregates lower-speed channels (e.g. 4 STM-1 into 1 STM-4) and also multiplexes T1 or E1, T3 or E3. The aggregated streams are called Virtual Tributaries in SONET and Virtual Containers in SDH. There are also different levels of tributary channels, as listed in the next table.
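The rate arithmetic above can be sketched in a few lines. This is a minimal illustration using only the constants given in the text (STS-1 = 51.84 Mbps; STM-N equivalent to STS-3N):

```python
# SONET/SDH line-rate arithmetic, per the relations in the text.
STS1_MBPS = 51.84

def sts_rate(n):
    """Bit rate of SONET STS-N in Mbps: N x 51.84."""
    return n * STS1_MBPS

def stm_rate(n):
    """Bit rate of SDH STM-N in Mbps: STM-N is equivalent to STS-3N."""
    return sts_rate(3 * n)

# The predominant channels named in the text:
print(round(sts_rate(3), 2))  # STS-3 / STM-1  -> 155.52
print(round(stm_rate(4), 2))  # STM-4 / STS-12 -> 622.08
```

STS-3/STM-1 at 155.52 Mbps and STM-4/STS-12 at 622.08 Mbps match the predominant levels the text calls out.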
(Bush landscape with waterfall, New South Wales)
John Skinner Prout, 19 Dec 1805 - 29 Sep 1876
This rare oil painting by John Skinner Prout most likely depicts Willoughby Falls near Sydney. The luxuriantly forested landscape is filled with native plants and animals, appealing to an English Romantic taste for exotica. An idealised, Arcadian view, it is removed from the reality of European settlement at this time and its detrimental effects on the Indigenous population. This painting was incorrectly recorded on several occasions as depicting a scene in Tasmania, where Prout worked between 1844 and 1848. A close study of the flora and fauna in the painting, however, substantiates that it is a NSW setting.
(Cascade Falls, Hobart)
Cascade Falls, Tasmania
(Bush landscape with waterfall and an aborigine stalking native animals, New South Wales)
oil on canvas
70.5 x 91.4 cm stretcher; 89.8 x 110.8 x 6.8 cm frame
Signature & date: Not signed. Not dated.
Shown in 5 exhibitions:
Art and art treasures, National Gallery of Victoria [Swanston Street], Melbourne, Mar 1869–May 1869
Samuel Elyard (1817-1910): Landscape painter and photographer, S.H. Ervin Gallery, The Rocks, 01 Jul 1982–08 Aug 1982
John Skinner Prout in Australia (1986), Tasmanian Museum and Art Gallery, Hobart, 09 Nov 1986–01 Feb 1987
Skinner Prout in Australia 1840-48: Two hundred years of Australian painting: Nature, people and art in the southern continent:
Dandelion greens have a potent flavor that mellows when cooked. They are delicious steamed or braised and can be used in place of spinach. They also make a complex and spicy addition to your favorite grilled cheese sandwich. The name of this vegetable originates from the French phrase "dent de lion," or lion's teeth, which refers to the coarsely toothed leaves of the plant. While the leaves are not sharp to the touch, they do have a punchy, bitter taste. Dandelions are considered a beneficial weed because they don't compete for nutrients with neighboring plants, and their taproots are deep enough to bring up nutrients and actually increase soil fertility. Young dandelion leaves are less bitter than mature ones and can be used in salads. To prepare, wash the greens in cool water to remove any debris.
Holi is one of the major festivals of India and the most vibrant of all. The joy of Holi knows no bounds. The festival is celebrated across all four corners of India, and indeed across the globe. It is filled with so much joy, fun and frolic that the very mention of the name Holi draws smiles and enthusiasm from people. Holi is also celebrated to signify the arrival of spring, a season of joy and hope. The colorful festival of Holi is celebrated on Phalgun Purnima, which falls in late February or early March according to the cycle of the moon.

Reason for celebrating the Holi festival

The Holi festival is celebrated to commemorate the victory of good over evil, brought about by the burning and destruction of the demoness Holika. This was accomplished through unwavering devotion to the Hindu god of preservation, Lord Vishnu.

How Holi is celebrated

People celebrate Holi by spending the day smearing colored powder all over each other's faces and throwing colored water at each other. There are parties everywhere, and dancing under the water sprinklers. Bhang, a paste made from the cannabis plant, is also traditionally consumed during the celebrations. A bonfire is lit on the eve of Holi. There are numerous legends and stories associated with the Holi celebration that are narrated at this time, making the festival more exuberant and vivid. People rub gulal and abeer on each other's faces and cheer, saying, "Bura na maano, Holi hai." Holi also gives a wonderful chance to send blessings and love to dear ones, wrapped in a beautiful special Holi gift.

Where Holi is celebrated

You will find Holi festivities taking place in most areas of India, though they are more exuberant in some places than others; the festival is also celebrated in other parts of the world. The emphasis of the Holi rituals is on the burning of the demoness Holika.
On the eve of Holi, large bonfires are lit to mark the occasion and to burn evil spirits. This is known as Holika Dahan. Holi is a very carefree festival that is great fun to participate in, if you do not mind getting very wet and very dirty. You will end up saturated in water, with color all over your skin and clothes. Some of it does not wash out easily, so be sure to wear old clothes. It is also a very good idea to rub hair oil or coconut oil into your skin beforehand, to prevent the color from being absorbed. The beauty of the Holi celebrations will leave you amazed and enjoying the fun at the same time; Indians do not fear getting dirty during this time of celebration. Parties and dances are held everywhere during the Holi festival. It is advisable not to walk alone if you are a young girl, since those who have consumed bhang can act inappropriately. The Holi celebration marks the beauty and culture of the Indian people.
Most people probably recognize the German city Stuttgart (pronounced "Shtoot-gart," we looked it up) as the headquarters of Mercedes-Benz and Porsche (it's on the logo). Jason Bourne probably hid out there for a while. Those things are awesome, not gonna lie, but they're far from our closest tie to the city. In 1960, Raymond Tucker, St. Louis' 38th mayor, helped put the city on the Midwestern map by working with international officials to create a sister cities program between the sports car capital and the home of the Cardinals. Flash forward 51 years: St. Louis' older sister is supported locally by the nonprofit St. Louis-Stuttgart Sister Cities, an organization devoted to fostering German traditions in Missouri through art, cuisine, cultural celebrations and student exchange programs. Since the exchange program began 20 years ago, it has sent 3,000 students back and forth between the two cities. Recently, the ideals behind that relationship made it as far as Obama, who will present German Chancellor Angela Merkel with the Medal of Freedom at a White House state dinner tonight. The chancellor, who was raised in communist East Germany, is both the first East German and the first woman to hold her position. But how does this relate to St. Louis? Just trust us. Because of the city's heavy cultural ties to Germany, it was chosen as one of three cities in the US (Pittsburgh and Philadelphia also made the cut) to celebrate the event with a closed-circuit broadcast of the medal ceremony. The event, though closed to the public, will take place this evening at the mayor's office to reinforce the ties between the two countries with cocktails and German appetizers. Note: There will not be any sausage jokes in this story, sorry. "Not very many foreigners receive an award like this," says Susanne Evens, president of St. Louis-Stuttgart Sister Cities. "I hope events like ours mean there will be closer ties between the countries.
Obama is already a huge part of that: He's a hit in Germany, and he's loved by the citizens." Evens, who grew up in the Stuttgart area, took her position to develop ways for others to experience her home. Although she jokes that the group's staff, all volunteers, does most of its work between midnight and 3 a.m., she hopes to increase respect for similar programs throughout the country. Since being adopted by Stuttgart, St. Louis has also initiated relationships with cities in Africa, Asia, South America and other parts of Europe. "Stuttgart is a lot more aware of their sister cities than the people in St. Louis are, but we're trying to improve upon that," Evens says. The citizens of Stuttgart are particularly impressed by mustachioed wonder Mark Twain and the area's Mississippi River ties. In general, the sister cities program is a much larger establishment in Europe, in large part because it's not a nonprofit there. "If you want a volunteer project to be successful, it takes a lot of work that nobody really sees on the outside. There's a lot this country has to offer through exports, and Germany's a great example." Last year, Mayor Slay and the mayor of Stuttgart toured each other's hometowns to commemorate the 50-year anniversary of the relationship. Although Evens doesn't use the cliché, her time with the sister cities program backs up the idea of a "small world," after all. "I remember being at an event that Mayor Francis Slay attended in Stuttgart, and someone from St. Louis was there, ran up and was like, 'Wow, what are you doing here?'" Evens says. "It's incredible."
During World War II the United States Navy gave U.S. Rubber a contract to build an ammunition assembly plant on a tract of more than 2,260 acres in the Steele Creek area of Mecklenburg County. At its height the plant (locally called the "Shell Plant" or the "Bomb Plant") had more than 12,000 employees, over 90% of them women. Men were the mechanics, guards, janitors and warehouse workers. Only women worked on the conveyor line, filling the shell cases, 16 shells to a can. Fifteen women weighed the powder and put it in the shell case. Some of the women rolled 4-in. strips of lead foil, which acted like grease on the inside of the shell casing. Two of the lead foil rollers were grandmothers. In fact, many of the women were grandmothers. Mae Pettus Griffin was one of these. She later said that she had never before done "public work," but she had 3 sons and 2 sons-in-law in service and felt it was her duty to back them up. She was not alone. Almost all of the women had previously done only housework or field work. Buses collected the workers 7 days a week for the 3 shifts; the workers came from Lancaster, Kershaw, Rock Hill, Richburg, York and other towns in South Carolina, and from Gastonia, Concord, Albemarle, Monroe and other towns in North Carolina. Woodrow "Toby" Wilson of Indian Land in Lancaster County was one of the building foremen. He still remembers some of the rules. Smokers could smoke only in the cafeterias. No matches could be brought in, but cigarette lighters were placed at intervals for the convenience of the smokers. The cafeterias were about 200 ft from the main plant. Everyone wore insulated safety shoes. The men wore uniform coveralls with no pockets. The women wore uniform dresses. The floors were concrete and kept shiny. Every 10 feet there were big doors built for easy exit in case of explosion. None of the machinery was electrical (although there were electric lights). All machines were run by air. There weren't any major explosions and only one accident.
One of the women workers lost her left arm. Powder was so sensitive that if any were left under the fingernails, lighting a cigarette would blow away the fingers. The plant won a number of safety awards. At first, workers on an 8-hour shift were turning out 8,000 rounds of ammunition. At their peak they were producing 29,000 rounds a shift. Still, there was not enough labor to run more than two “load lines.” There was the capacity for a third line, but labor was scarce. Then something happened that would have been unthinkable in normal times. Black women were hired to “man” a shift on the third line. Toby Wilson was put in charge--the only southerner to be a foreman. The other foremen were northerners sent south from other U.S. Rubber plants. Mr. Wilson says he had one of the best, hardest-working crews in the plant. From boyhood he was used to working alongside black workers in the fields. He thought about how they always sang as they worked. He asked his crew if they would like to sing while they worked. They did, and they sang so enthusiastically and well that they attracted the attention of U.S. Rubber officials and other foremen, who would come to hear them. After VE Day (May 8, 1945) all other shell plants in the U.S. closed, but the Steele Creek plant stayed in full production until VJ Day (Aug. 15, 1945). Even then the plant did not completely close. A work force of 150-170 people stayed on until June 30, 1957, reconditioning the unused shells returned by naval ships. Today, after being in the hands of private investors for a number of years, the 2,260 acres are known as Arrowood Industrial Park, close to I-77, with Westinghouse Blvd. as a major thoroughfare. Many thousands of people go through or by Arrowood every day, few of them aware of the time and circumstances of the Bomb Plant.
CNA research project at Lord’s Cove finds new use for old fish plant Sitting on the wharf in Lord’s Cove is a building with a history not unlike many others scattered around coastal Newfoundland and Labrador. The old fish plant was once a part of Fishery Products International’s assets in the province. College of the North Atlantic researchers Leon Fiander, Keith Howse and Dr. Michael Graham are part of a research and development project in Lord’s Cove exploring shore-based aquaculture using wave-powered pumping technology. For years a solid employment provider for many residents in the community, the property changed hands when FPI was sold to Ocean Choice International in 2007. Fish are about to return to the building once more, but not for processing. The old plant is now in the midst of a transformation that College of the North Atlantic’s (CNA) Burin Campus researchers, led by Dr. Michael Graham, hope will have a major impact on shore-based aquaculture. And, if all goes to plan, it could also result in a marine research and test station at the end of five years. Run down and in a state of disrepair, with roughly a decade having passed since fish was last processed there, the plant had not been treated kindly by the intervening years, according to Dr. Graham. As fellow researcher Keith Howse put it, “It was an eyesore.” The provincial government, through the Department of Innovation, Business and Rural Development, kicked in $175,000 for renovations, and has also contributed roughly $500,000 from its Research and Development Corporation. In addition to the government funding, the Natural Sciences and Engineering Research Council has contributed over $2 million to the project. Creatively, Ocean Choice leased the property to the town in lieu of taxes; the property, along with another main building belonging to the Lord’s Cove Harbour Authority, was then leased to College of the North Atlantic for the tax amount.
Leon Fiander, another of the project’s researchers, said the town’s support – including both council and residents – has been a big plus. “Everyone here really supports what we’re at. They’re all very interested in it in the community.” A third, smaller building on the site will contain technical equipment and instrumentation for a wave-powered pump that Dr. Graham explained will hopefully provide a key component for aquaculture on land – free energy. An initial research and development project for the pump – inspired by a science project of Dr. Graham’s daughter – was launched in Lord’s Cove back in 2006. CNA has partnered with the National Research Council’s Ocean Technology Enterprise Centre in St. John’s where tank tests on components of the wave pump are underway. Dr. Graham said he expects construction on a new wave pump for the aquaculture project to begin by year’s end. But some fish will arrive before then. Electric pumps, built by Mr. Howse in the sheet metal program’s shop at the Burin Campus, will get the aquaculture process started in the coming weeks, and over the winter months, the intended farm system will start to come together. Once it does, that’s when things will really start to get interesting. According to the researchers, halibut will be grown in large tanks in the Harbour Authority’s building, which lies on a slope above the former fish plant. Down below, in one room of the old facility, organisms that eat seaweed – whelk, sea urchins and scallops for example – will clean the water coming from above. Seaweed grown in the next room will provide a food source for the adjacent species. According to Dr. Graham, the seaweed will also clean chemicals from the water before it flows back into the sea.
“Basically, after the halibut, everything else in the farm is a big filter to clean the water. So the water goes back to the ocean … clean.” The biggest expense for shore-based aquaculture is the cost of pumping water. Mr. Fiander noted that another cost, one soon to be imposed on the industry, will also be avoided. “Right now, you don’t have to pay to treat effluent, but it’s coming to a point when you’re going to have your effluent treated before you put it back in the ocean. Ours is hopefully going to be treated biologically.” Dr. Graham summed up the idea behind the research further. “You have to pump the water cheap and you have to maximize your feed conversion and you have to avoid pollution. All of those things together, that’s what this farm is designed to do. That’s what we’re testing.” If the CNA project works, Dr. Graham indicated, areas all along the province’s south coast, as well as the east coast of Nova Scotia, with the necessary exposed headlands, would provide prime locales for shore-based aquaculture ventures. He said the researchers are attempting to prove a concept with the Lord’s Cove pilot project, which will be expensive on a small scale but, when extrapolated into bigger operations, could turn a decent profit. If successful, shore-based aquaculture farms and possibly plant operations would be feasible in many locations where they are currently not possible. Dr. Graham indicated that could lead to manufacturing of wave-powered pumps. So far, the project has taken over three offices at CNA’s Burin Campus. Over the next five years, it will pay out the equivalent of 35 person-years of employment. Right now, there is one full-time employee, with a second part-time position soon to be filled. A number of students – both paid and work-term – have spent time at the site. A weather station, wave buoy and other technology at the site have laid the foundation for the potential marine research and test station down the road. Dr.
Graham is optimistic there are other opportunities like the Lord’s Cove project around the province and uses for their old, empty buildings. “I think this is the future of communities like this, finding something the community has that’s uniquely it – here it’s waves – and figuring out a way to take advantage of it.”
Louis XI of France
Louis XI (July 3, 1423 – August 30, 1483) was King of France (1461 – 1483). He was the son of Charles VII of France and Marie of Anjou. He was a member of the Valois dynasty and was one of the most successful kings of France in terms of uniting the country. His 22-year reign was marked by political machinations, resulting in his being given the nickname of the "Spider King".
Notes and Editorial Reviews The first time Brahms’s Requiem was heard in this country, in July 1871, was in a London drawing-room. Conducted by Brahms’s friend Julius Stockhausen, the work was given at the house of the surgeon Sir Henry Thompson with a smallish choir in the composer’s own piano-duet arrangement, played by Lady Thompson (Kate Loder, professor of harmony at the RAM) and the aged but enthusiastic Cipriani Potter (who had known Beethoven in his time). A performance, in fact, much like this one, though probably not as note-perfect. Domestic performance of choral works wasn’t uncommon in the 19th century, when vocal quartet parties were legion – so this finely performed and recorded disc is of great interest. This small-scale rendering naturally enhances the strong vein of intimacy that’s already palpable in the Deutsches Requiem. ‘Wie lieblich sind deine Wohnungen’ emerges as the sublimated Liebeslieder Waltz one always suspected it to be. Brahms’s piano parts are wonderfully effective in transmitting the musical substance but, shorn of orchestral colour, the emphasis is shifted to the quality and colour of the vocal writing, and the role of soprano and baritone soloists takes on an enhanced significance. Hanno Müller-Brachmann is ideally and urgently eloquent in his two solos, and Susan Gritton is refulgent and confiding in ‘Ihr habt nun Traurigkeit’, Brahms’s inspired elegy for his mother. Throughout, the choral component is superbly sung and beautifully balanced, Stephen Cleobury directing the King’s College voices with an unerring sense of long line and the sustained building of paragraphs. There’s no other recording in this version: if you want the Requiem with full orchestra, there are many competing accounts (with Abbado, for me, still the most satisfying), but the present disc represents a modest triumph.
By Ms. Gloria Montgomery (Army Medicine)June 16, 2014 West Point, New York-- Joey Gugliota, the 24-year-old former New Yorker now living in Chicago, has been confined to a wheelchair since age five. Gugliota has been hooping it up on wheels since he was first introduced to wheelchair basketball at age nine. He, along with three other professional coaches, is at the U.S. Military Academy, West Point, N.Y., teaching Soldiers and Marines the art of the "pick and roll" of wheelchair basketball in preparation for the 2014 Army Warrior Trials, June 15-19. More than 100 wounded, ill and injured service members and Veterans from across the United States are at West Point competing in the Warrior Trials where athletes from the Army, Marines and Air Force face off in archery, basketball, cycling, track and field, swimming, shooting, sitting volleyball and wheelchair basketball. Participants in the trials include athletes with spinal cord injuries, traumatic brain injuries, visual impairment, serious illnesses and amputations. Developed by World War II veterans in the mid 1940s, wheelchair basketball is one of the premier events in the Paralympic Games, which are for athletes with physical disabilities and held in conjunction with the Summer Olympics. Rules are similar to able-bodied collegiate ball, but are modified to include the wheelchair. "In essence, the chair is part of the body," said assistant coach Lee Montgomery, 57, who first started playing 37 years ago after watching a local team during his stay at a rehab hospital. "If I'm shooting and someone hits me hard, that's a foul. If he hits my arm, that's a foul, too." Other actions that are fouls include flipping someone out of the chair or backing up into an opponent. "It's all in the intent," added Rodney Williams, 63, who has been playing since his college days 41 years ago at San Jose State University in California when one of his college buddies in a wheelchair kept on bugging him to play. 
"I had never used a wheelchair because I walked with crutches and braces, so to get him off my back, I decided to go to a practice. I thought it was fun, so I've been playing ever since." Other rule modifications include no dunking, no double dribbling and 30 seconds to shoot the ball instead of 35 seconds. One question the coaches are often asked is goal height. "People think it's amazing that it's the same height," said 37-year-old head coach Jermell Pennie, who has been confined to a wheelchair since age five. In 1995 he began playing wheelchair basketball, and in 2004 was on the U.S. Paralympics Wheelchair Basketball team representing Team USA in Athens. "I never knew about wheelchair basketball, but I did know about the Paralympics." Pennie, who coaches the Dallas Junior Wheelchair Mavericks team, said he is impressed with the military teams that will compete against each other this week. "The athletes here are like a sponge," he said. "They take our information and go with it. This is the first time some of them have ever jumped in a wheelchair and played, and they look really good. They're picking it up like they've been playing for a couple of years." Shooting, Williams said, is different from able-bodied ball because "You don't have your legs." "Most of the shots from the able-body players come from the legs, and they don't use so much of their arms," he said. "For us, it's all upper body, so you have to develop proper technique to put power in your shot." Arm, hand and wrist position are also important for shooting accuracy. "You have to keep the elbow in or someone is going to go behind you and grab the ball," the Californian said, adding that a player's fingers need to be spread out over the ball, unlike able-bodied ball. "You also have to keep your wrist cocked when you follow through. It's all about practice and doing the same thing every time."
The pick and roll, said Montgomery, is big in wheelchair basketball and involves putting the chair in position to inhibit the opposing player and going for the basket. "If I set a good pick on my opponent, they aren't going to get around me like an able-body player can," the Grand Rapids, Mich., resident said. "Because the chair has this wide angle, you are able to set a great pick and leave the defender in the back. In defense, you want to be between the man and the basket. Now the defender is outside the play," he said, adding that once the pick is set, the roll is turning the chair to face the basket and looking for the ball. The pick and roll technique is new for Marine Sgt. Joel Hillner, Camp Pendleton, Calif., who has been playing just eight weeks. "I've learned a lot about it," he said, adding that it's a lot of fun, but his blistered fingers are really taking a beating. "I won't wear gloves because it affects my shooting." Army Private 1st Class Kevin Szortyka, Fort Stewart, Ga., who has been playing the sport for about eight months, also credits the pick and roll skill development as the most important technique he has learned from the coaches, whom he calls "awesome." "I didn't have a lot of team play, so this training has been really important," he said. The 25-year-old Army private who injured his back in a training exercise said the coaches are available after practice to work one-on-one with the players. "They truly have a genuine interest in us no matter what our skill level is. They just want us to grow as players whether or not we advance here." Szortyka, from Tallahassee, Fla., calls the Army's adaptive sports and reconditioning programs like wheelchair basketball a "blessing." "It's allowed me to continue to compete," he said. "A lot of us, once we get these injuries, feel like our days of competition are behind us. Adaptive sports helps us continue with that competitive nature we are born with and that we've had throughout our military career."
For Coach Gugliota, who went to college on a partial wheelchair basketball scholarship, giving back to his military pupils has been wonderful. "They've been through so much, and have done so much for us. I'm honored to be here. As far as the players, they're anxious and ready to play," he said. "I'm here to help them get better." Gugliota also knows what it's like when an injury or an issue robs one of that competitive edge. "I had just learned to ride a bike when a tree fell on our camper, so here I am adjusting to life as a little kid going from riding a bike to being in a wheelchair. I was the only disabled kid in school and had never been around anyone with a disability," he said. "Wheelchair basketball introduced me to other kids with disabilities, so it helped me get over my disability." The coaches are eager to get the ball rolling to show off their pupils' skills and introduce others to the sport of wheelchair basketball. "It's as physical as any sport out there," said Williams. "This isn't a game of people in wheelchairs playing basketball. I guarantee you that once they start playing and you see their level of enthusiasm, you won't think these are wheelchair patients. These are wheelchair athletes."
- n. In theology, one who trusts in the justice or uprightness of his own conduct. - n. Administration of justice or of criminal law; judiciary. The Edinburgh high court of justiciary heard that Taylor had "no concept" of how dangerous it was to give a child methadone. These were the officers of justice, with a warrant of justiciary to search for and apprehend Euphemia, or Effie Deans, accused of the crime of child-murder. Personally, I felt that I was responsible, but not guilty, but try to put that defense before the safos and the justiciary. From a child this Frank had been a donought that his father, a headborough, who could ill keep him to school to learn his letters and the use of the globes, matriculated at the university to study the mechanics but he took the bit between his teeth like a raw colt and was more familiar with the justiciary and the parish beadle than with his volumes. At the hotel waited a bunch of urgent matters: some death sentences, a new justiciary, a famine in barley for the morrow if the train did not work. In legislative and justiciary acts the Latin names are still retained. The inflexibility of the justiciary lords, or their known integrity, form a fine incident in history; for the Scottish nation was at this period, ridden by Court faction, and broken down by recent oppression and massacre. Limtoc the general, Lalcon the chamberlain, and Balmuff the grand justiciary have prepared articles of impeachment against you, for treason and other capital crimes. The astonished lord justiciary asked the foreman, how it was possible to find the prisoner not guilty, with such overwhelming evidence, and was answered: "Becaase, my laird, she is purty." II., was issued for arrears due to him since he was "justice and chancellor, and even lieutenant of the justiciary, as well in the late king's time as of the present king's."
If adopted, as the Board of Trustees has proposed, this reform would make the College of DuPage the first institution of higher learning in the nation to adopt the Academic Bill of Rights and only the third to recognize that students have academic freedom rights that are distinct from (but related to) those granted to faculty. “I and the other trustees thought it was important to provide for the academic freedom of students as well as faculty members,” explained Kory Atkinson, a trustee at DuPage and the principal author of the new policy manual which contains the Academic Bill of Rights. “We’ve had some anecdotal evidence from students about faculty at DuPage providing lower scores [for ideological reasons] and even in some written reports for classes where professors made comments about sources being ‘right-wing’ rather than rejecting them for scholarly reasons, mainly in the social sciences where sources tend to be more subjective,” Atkinson said, explaining some of the Board’s impetus for proposing the Academic Bill of Rights. The Academic Bill of Rights proposed at DuPage echoes the language of the original Bill authored by David Horowitz in 2005. It recognizes that the principle of academic freedom applies not only to faculty but also to students who should be protected “from the imposition of any orthodoxy of a political, religious or ideological nature.” The DuPage bill acknowledges the right of faculty members to “pursue their own findings and perspectives in presenting their views” but states that they should “consider and make their students aware of other viewpoints” and that “courses will not be used for the purpose of political, ideological, religious, or anti-religious indoctrination.” Despite such guarantees, the DuPage faculty union, a unit of the National Education Association (NEA), has declared open war over the proposed policy change. 
In an 11-page letter to the Board of Trustees addressing the Academic Bill of Rights and other proposed policy changes, the NEA chapter claims that the Bill has “political connotations.” The letter goes on to state, “ABOR supporters apparently hope that the bill will give elected officials the power to dictate, for example, whether creationism should be taught alongside evolution in college biology…. it is the responsibility of college professors, who are trained experts in their fields, to evaluate that evidence. It’s not the job of politicians.” In fact, the Academic Bill of Rights makes no mention of elected officials, nor does it refer to creationism or any particular political viewpoints. The Bill’s tenets do not require that all viewpoints be represented and further stipulate that only scholarly viewpoints need be considered at all. Writing for the website InsideHigherEd.com, a journal that generally reflects the perspectives of the teacher unions, editor Scott Jaschik summarizes the complaints of the DuPage teachers: “Faculty groups say that the measure would lead to professors constantly looking over their shoulders, make it impossible for them to express strong views, and force them to include conservative interpretations of everything or face criticism for not doing so.” No evidence is presented to justify these concerns. Individual faculty members at the College, speaking only for themselves and not the Faculty Association, strike a more moderate tone, but seem misinformed over the current protections offered to students. 
Cathy Stablein, a professor at the College who serves as faculty advisor to the student newspaper, argued that existing policies at the college provide students with academic freedom rights: “There are a lot of procedures that have been worked out over the years, based on past practice and experience.” In particular she pointed to a section of the college catalogue on Student Rights and Responsibilities which she says “seems to come from a very similar perspective.” But the section states only that students “can rightfully expect that the college will exercise with restraint its power to regulate student behavior” and states nothing about students’ academic freedom. The existing student grievance and harassment procedures also do not include academic freedom or disputes related to a student’s political beliefs. David Goldberg, an associate professor of political science at the College, also believes that existing policies are sufficient to protect students’ academic freedom, but he stated that he is open to further discussion on the issue if evidence proves him wrong. “…Before adopting the Academic Bill of Rights full cloth,” Goldberg says, “I would want more evidence that the existing policies are not meeting student needs where they’ve been practiced.” In fact, when hearings regarding student academic freedom were held before the state legislature in Pennsylvania three years ago (the only such hearings on record), no policies governing student academic freedom were found to exist at any public university in the state, and DuPage has no such policies in place. Students at the campus who oppose the Academic Bill of Rights seem to have been influenced by the Faculty Association’s uninformed objections. “I think that teachers should be able to decide the curriculum because they know the most about their field,” commented student Shannon Torii, editor-in-chief of the campus paper, The Courier. 
“[The trustees] want to control the curriculum, take it out of control of the teachers,” she continued, echoing the Faculty Association’s false claims that the Academic Bill of Rights would somehow transfer faculty’s authority over academic curricula to elected officials or to College trustees. “These are the same tired and ill-informed objections we’ve heard again and again from the teachers’ unions and the academic Left,” said David Horowitz, author of the original Bill. “Neither the DuPage bill nor the original bill propose that politicians be given the power to decide what goes on in the classroom. The fact that it is trustees of the university, not legislators, who are proposing a change in university regulations underscores the hysteria of the faculty response. “Alleging that the bill would require the teaching of creationism is an example of the dishonest tactics of the opposition. The proposed new policy at DuPage states that ‘Exposing students to the spectrum of significant scholarly viewpoints on the subjects examined in their courses is a major responsibility of faculty.’ Creationism is not a scholarly viewpoint and we have never suggested that it be taught in science classes. “The idea that the ABOR uses political standards to subvert scholarly ones is another false claim propagated by the teachers’ unions. The Academic Bill of Rights is explicitly drawn from the statements of the American Association of University Professors which urge professors not to ‘take unfair advantage of the student’s immaturity by indoctrinating him with the teacher’s own opinions before the student has had an opportunity to fairly examine other opinions.’ This is a sound educational principle, not a political statement.” Ironically, the faculty union contract at DuPage, signed by the DuPage Faculty Association, contains an academic freedom provision that is strikingly similar to the Academic Bill of Rights.
Section C-2 of the contract on Academic Freedom states that “Faculty Members shall be free to present instructional materials which are pertinent to the subject and level taught and shall be expected to present facets of controversial issues in an unbiased manner” (emphasis added). This clause echoes almost exactly the language of the Academic Bill of Rights which the DuPage Faculty Association finds so objectionable. The difference is that the faculty union contract does not apply to students. If professors ignore its provisions (which students would hardly be familiar with in any case) students would have no right to complain. The new policy would close that loophole. “Really the only thing that a student can challenge under the current policy is a grade,” explains trustee Kory Atkinson. “Creating a specific right for a student to challenge ideological discrimination really worries them [the faculty]. They will have to be accountable for what they’re doing in the classroom and they really don’t like that.” If the DuPage Trustees are successful in their bid to adopt the Academic Bill of Rights, the College will become only the third campus in the United States to recognize student-specific academic freedom protections. Pennsylvania State University and Temple University both previously adopted student-specific academic freedom protections when a series of state legislative hearings showed that no public university in the Commonwealth of Pennsylvania had academic freedom provisions that applied to students. Asked about the Bill’s chances for success, Trustee Atkinson strikes a positive note. “I feel very good about it,” he says. “Right now there appear to be a majority of trustees who are committed to seeing that our students are educated, not indoctrinated. It just seems like a common-sense thing to do to me.”
by Rudolph Henny The plant illustrated on the covers of the Quarterly Bulletin of the American Rhododendron Society has been identified as R. balfourianum var. aganniphoides. The Secretary of the Society has presented some fine notes on the Series R. Taliense in this issue of the Quarterly. In the course of her observations there was mentioned this particular and unusual plant. I had the opportunity to examine both foliage and corolla in mid August, and as the Secretary mentioned "the plant has tentatively been identified as R. balfourianum var. aganniphoides". Several other species bloom late in August viz. R. serotinum of the Fortunei series, R. ungernii, R. brachycarpum, R. kyawii, R. maximum, R. auriculatum and R. didymum. On several occasions I had heard incomplete descriptions of this particular plant, and without further investigation have always mentioned R. ungernii, a plant that blooms at a similar time and has several comparative marks of distinction, namely a heavy spongy indumentum though usually white or tan color, and a pinkish white corolla, with a few spots. One may hit a high note of perplexity upon examination to find that this plant material is without doubt of the R. Taliense series, and with all the characteristics of R. balfourianum var. aganniphoides and yet be blooming three months later than the true plant. Dr. Cowan in his latest volume The Rhododendron Leaf on page 63 mentions R. balfourianum var. aganniphoides and the form of hair types for positive identification. This trichome type e.g. ramiform hair, is illustrated on plate X and is shown at 150 diameters. By using a microscope of 250X I was able to identify what appeared to be ramiform type hair, but there also were present several of the other types found on other species. Dr. Cowan in his work does not mention if more than one type is found on this species.
WSU Juniors and Seniors, consider Senior Rule for starting your graduate studies in MEd LID! Rules are more favorable than what they used to be. Email me! [email protected]. The Master of Education in Learning and Instructional Design Program at Wichita State University is an innovative, dynamic and flexible program designed for educators and professionals alike. New approaches to learning: Best practices in education and corporate training. The Master of Education in Learning and Instructional Design is ideal for education and professional development career advancement for those who are engaged in the K-12 teaching and workplace training of adult learners. It is a 36 credit hour program. The program is offered for students who meet the admission requirements and are seeking a graduate level degree in curriculum and instructional design leadership. The core curriculum consists of 21 credit hours of work in curriculum and instruction, 3 credit hours of thesis or non-thesis work and 12 credit hours of electives.
Mission
The Master of Education in Learning and Instructional Design at Wichita State University is an innovative, dynamic, and flexible program meeting the diverse needs and goals of its candidates to become advanced instructional leaders in teaching and learning, training, and program design.
Program Goal #1: Graduates of the program will be able to identify, analyze, and explain (a) successful curricular models and instructional strategies and explore the basis for their success, and (b) curricular and instructional problems impeding the improvement of learning in instructional settings and propose effective solutions. Program Goal #2: Graduates of this program will be able to monitor, evaluate, and suggest means to improve instructional practice, including the evaluation of learning outcomes and programs.
Program Goal #3: Graduates of this program will be able to assume responsibility for the development, implementation, evaluation, and revision of curricula, training, or programs of study in particular disciplines and/or for particular populations. Program Goal #4: Graduates of this program will be able to locate, evaluate, interpret, and apply appropriate research and scholarship to the study and solution of practical educational/training problems in instructional settings. Program Goal #5: Graduates of this program will be able to plan and conduct research using appropriate theory and research designs to investigate educational/training questions related to the improvement of learning and instruction. Program Goal #6: Graduates of this program will be able to demonstrate professional leadership skills and continued growth in instructional leadership and learning. In addition to the Graduate School admission requirements, students seeking the Master of Education in Curriculum and Instruction must meet the following criteria. (1) Show potential to do graduate work by meeting one or more of the following: a. Graduate from an accredited university program with a minimum GPA of 2.750 in the last 60 credit hours; or b. Graduate from an NCATE-accredited program with a 3.000 or better GPA in the last 60 credit hours; or c. Take the Graduate Record Exam and score a minimum of 917 on any two of the sub-tests, or take the Miller Analogies Test and score a minimum of 40; or d. Provide alternative evidence that documents academic aptitude. (2) Provide evidence of involvement in teaching, training, and/or program design, or recommendation by the graduate program committee. Students complete and orally defend their thesis, working closely with their adviser and committee. Students needing an additional semester to satisfy these requirements should enroll in one hour of CI 876. Students receive credit for these courses once the thesis has been completed and defended.
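The "one or more of" admission logic above can be sketched as a small decision function. This is an illustration only: the function name, parameters, and data shapes are invented for this sketch, and only the numeric thresholds (2.750, 3.000, 917, 40) come from the program text.

```python
# Hypothetical sketch of the admission criteria checklist above.
# Only the numeric thresholds are taken from the program description;
# all names and parameter shapes are invented for illustration.

def meets_academic_criteria(gpa_last_60=None, ncate_gpa_last_60=None,
                            gre_subscores=None, mat_score=None,
                            alternative_evidence=False):
    """Return True if any one of criteria (a)-(d) is satisfied."""
    # (a) accredited university program: GPA of at least 2.750 in the last 60 hours
    if gpa_last_60 is not None and gpa_last_60 >= 2.750:
        return True
    # (b) NCATE-accredited program: GPA of 3.000 or better in the last 60 hours
    if ncate_gpa_last_60 is not None and ncate_gpa_last_60 >= 3.000:
        return True
    # (c) GRE: any two sub-test scores totalling at least 917 ...
    if gre_subscores is not None:
        top_two = sorted(gre_subscores, reverse=True)[:2]
        if len(top_two) == 2 and sum(top_two) >= 917:
            return True
    # ... or a Miller Analogies Test score of at least 40
    if mat_score is not None and mat_score >= 40:
        return True
    # (d) alternative evidence documenting academic aptitude
    return alternative_evidence

print(meets_academic_criteria(gpa_last_60=2.8))   # True  (passes criterion a)
print(meets_academic_criteria(mat_score=42))      # True  (passes the MAT path of c)
print(meets_academic_criteria(gpa_last_60=2.5))   # False (no criterion met)
```

Because the criteria are alternatives, the function returns as soon as any single path succeeds; an applicant who fails all of (a)-(c) can still qualify via alternative evidence under (d).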
Prerequisite: CI 875 or instructor’s consent. Based on personal professional interest and negotiated with your advisor. Dr. Mara Alagic, Graduate Coordinator
Born: Frankfurt-am-Main, Germany. Died: Chicago, Illinois. Occupation: Malacologist. Keeper of Invertebrate Zoology, Natur-Museum Senckenberg, Frankfurt, Germany, 1911-36 (forced removal by the Nazis 30 June 1936); Curator of Lower Invertebrates, Field Museum of Natural History, Chicago, 1938-1959. Education: Ph.D. Heidelberg. Received his training in biology through herpetologist and malacologist Oscar Böttger and malacologist Wilhelm Kobelt. Research Interests: Unionacea, freshwater and land snails. Travels: Norway 1910; Pyrenees, Spain, France 1914-19 (in exile); southern Africa 1931-32 (as a member of the Schomburgk expedition); Brazil 1937; Bermuda, Cuba, Canada. Remarks: President of AMU 1950. Married Helene Ganz 30 March 1922. Fritz Haas was one of the giants in the study of unionids worldwide, and his monumental publication "Superfamilia Unionacea. 1969. Das Tierreich (Berlin) 88:x + 663 pp." is a required reference for this group. Data from: Abbott, R.T., and M.E. Young (eds.). 1973. American Malacologists: A national register of professional and amateur malacologists and private shell collectors and biographies of early American mollusk workers born between 1618 and 1900. American Malacologists, Falls Church, Virginia. Consolidated/Drake Press, Philadelphia. 494 pp. Other References: Solem, A. 1967. The two careers of Fritz Haas. Bulletin of the Field Museum of Natural History 38(11):2-5. Solem, A. 1967. New molluscan taxa and scientific writings of Fritz Haas. Fieldiana Zoology 53(2):71-144. Solem, A. 1970. Dr. Fritz Haas, former curator dies. Bulletin of the Field Museum of Natural History 41(2):12. Solem, A. 1970. Fritz Haas, 1886-1969. Nautilus 83(4):117-120.
Sudan profile - long overview Sudan, once the largest and one of the most geographically diverse states in Africa, split into two countries in July 2011 after the people of the south voted for independence. The government of Sudan gave its blessing for an independent South Sudan, where the mainly Christian and Animist people had for decades been struggling against rule by the Arab Muslim north. However, various outstanding secession issues - especially the question of shared oil revenues and the exact border demarcation - have continued to create tensions between the two successor states. Sudan has long been beset by conflict. Two rounds of north-south civil war cost the lives of 1.5 million people, and a continuing conflict in the western region of Darfur has driven two million people from their homes and killed more than 200,000. Sudan's centuries of association with Egypt formally ended in 1956, when joint British-Egyptian rule over the country ended. Independence was rapidly overshadowed by unresolved constitutional tensions with the south, which flared up into full-scale civil war that the coup-prone central government was ill-equipped to suppress. The military-led government of President Jaafar Numeiri agreed to autonomy for the south in 1972, but fighting broke out again in 1983. After two years of bargaining, the rebels signed a comprehensive peace deal with the government to end the civil war in January 2005. The accord provided for a high degree of autonomy for the south, and an option for it to secede. South Sudan seceded in July 2011, following a vote. However, the grievances of the northern states of South Kordofan and Blue Nile remain unaddressed, as provisions laid out for them in the 2005 Comprehensive Peace Agreement were never fully implemented. In Darfur, in western Sudan, the United Nations has accused pro-government Arab militias of a campaign of ethnic cleansing against non-Arab locals. 
The conflict has strained relations between Sudan and Chad, to the west. Both countries have accused each other of cross-border incursions. There have been fears that the Darfur conflict could lead to a regional war. The economic dividends of eventual peace could be great. Sudan has large areas of cultivatable land, as well as gold and cotton. Its oil reserves are ripe for further exploitation.
I have to start with red-faced apologies for managing to send you a draft copy of last week's newsletter. I was trying to get the strange symbols out of it, and at some point managed to send it to everyone instead of just a test copy to myself. But here's the latest article from the Astronomy site at BellaOnline.com. Empire of the Stars Book Review: A fateful meeting of the Royal Astronomical Society in London adversely affected the lives of two scientists and hindered progress in the study of black holes for half a century. So says the author of Empire of the Stars. BellaOnline's astronomy editor liked the book, but wasn't convinced. On Monday of this week, the European Space Agency (ESA) released its first all-sky microwave map of the Universe. This is a bit more than just a pretty picture, but it is preliminary, as the full survey won't be completed until 2012. It was created by the Planck mission, named for the German physicist Max Planck, who was awarded the Nobel Prize early in the last century for his work on radiation. Planck was launched along with the Herschel Space Observatory, named for William and Caroline Herschel, which is dedicated to observing in the infrared and submillimeter, also wavelengths longer than visible light. Planck is imaging the cosmic microwave background (CMB) radiation, a relic from the early universe. It was emitted within four hundred thousand years of the Big Bang. This may sound old, but as we think the universe is 12-14 billion years old, this was way back in its infancy. The energy was originally of a much higher frequency--and therefore shorter wavelength--but the expansion of the universe has stretched it out to the longer microwaves. Studying this radiation will tell us about the early universe and how it has evolved. Here is the picture which ESA released. But HERE is the picture and then some, for Chromoscope is set up to show a number of cosmic features in different wavelengths.
Have a look at the picture, and then you can look at it in different wavelengths.
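The wavelength-stretching described above can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch assuming the standard relation (observed wavelength = emitted wavelength times 1 + z) and the commonly quoted CMB redshift of roughly z = 1100; both are textbook values, not figures from the article.

```python
# Sketch of cosmological redshift stretching light into the microwave band.
# Assumes lambda_observed = lambda_emitted * (1 + z), with z ~ 1100 for the
# CMB -- standard textbook values, not taken from the article.

Z_CMB = 1100  # approximate redshift of the cosmic microwave background

def observed_wavelength(emitted_nm, z=Z_CMB):
    """Stretch an emitted wavelength (in nm) by the expansion factor (1 + z)."""
    return emitted_nm * (1 + z)

# Light emitted at about 1000 nm (near-infrared) when the CMB was released...
emitted = 1000.0
obs = observed_wavelength(emitted)
print(f"{obs / 1e6:.1f} mm")  # ...arrives today at about 1.1 mm: microwaves
```

The factor (1 + z) is simply how much the universe has expanded since the light was emitted, which is why radiation released in the infrared now peaks in the millimetre, i.e. microwave, range that Planck observes.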
Bariatric Surgery is surgery for obesity. This type of surgery is in high demand in the US, where an estimated 30% of the population is overweight. Bariatric surgery in India could bring patients from the US and Europe to the country. A private hospital in Calcutta today launched a world-class Bariatric surgery clinic to treat people suffering from severe obesity, a disease that is fast attaining epidemic proportions in India. The star-studded launch of the clinic at Apollo Gleneagles Hospitals saw at least 25 people queuing up for surgery to get rid of excess body weight and, in turn, associated diseases like high blood pressure, diabetes and cardiac ailments. The new facility, the first of its kind in eastern India after sporadic attempts elsewhere in the country, is also eyeing patients from the West, where the cost of treatment is 15 per cent higher and the waiting time enormous. "The facility, being backed by a support group for obese people, is a comprehensive unit, which will benefit not only domestic patients but also thousands of patients in the Americas and Europe, who are showing interest in flying down to get operated," says advanced laparoscopic and Bariatric surgeon Dr B Ramana, who heads the clinic. Quoting the WHO, he says 17 per cent of men and 15 per cent of women in India are confirmed to be obese, and the numbers are growing by leaps and bounds with changing lifestyles and eating habits. "Globally, over 1.7 billion people are affected by the disease. In the US, over 300,000 people die of obesity while in Europe around 250,000 people are killed by the scourge," he says. Though a systematic survey of obesity mortality has not been conducted in the country, Ramana says the numbers could be high in the next five years. For Indians, the risk of obesity is greater, as the population traditionally exhibits low muscle mass, high fat content and a pot-belly syndrome.
This propensity to deposit fat around the abdomen makes Indians more prone to diabetes and heart disease, Dr. Ramana says. "Ninety per cent of adult diabetics in India are obese, and Bariatric surgery offers them hope of a better life. In India, the cases of obesity are trebling every year, much faster than in the western world," he says. The surgery, wherein the stomach is stapled and stitched to a part of the small intestine, has gained popularity throughout the world as it reduces the body weight of patients by 80 per cent. Through procedures called laparoscopic gastric bypass and sleeve gastrectomy, in which the digestive system is short-circuited to decrease the absorption of fats, Bariatric surgery reduces the intake capacity of the stomach. "This makes for lesser food intake and subsequent reduction of weight. The patient, however, is kept on essential vitamins and nutritional supplements for normal functioning of body systems," Ramana says.
- Nuclear disarmament advocate
- During the Cold War, she launched the nuclear-freeze movement, a Soviet-inspired initiative that would have frozen Soviet nuclear and military superiority in place
- Opposed U.S. plans to install a national missile-defense system
- Died in 2007

Randall Forsberg (born Randall Caroline Watson) was born in July 1943 in Huntsville, Alabama, and grew up on Long Island, New York. After graduating with an English degree from Barnard College in 1965, she became an English teacher at a private school in Pennsylvania. In 1967 she married a Swedish student named Gunnar Forsberg and moved with him to Sweden, where she worked at the Stockholm International Peace Research Institute from 1968 to 1974. In 1974 Ms. Forsberg, who by then was divorced, relocated to Boston, Massachusetts, with her five-year-old child and began studying political science (with a specialty in defense policy and arms control) at MIT, where she earned a Ph.D. in 1980. In the 1970s Forsberg conceptualized the idea of a nuclear freeze--a mutual and verifiable halt by both the United States and the Soviet Union to the testing, production, and deployment of all nuclear weapons. The idea gained increasing public support in the U.S. and Europe during the late Seventies and early Eighties. In 1980 Forsberg established the Institute for Defense and Disarmament Studies (IDDS) in Brookline, Massachusetts, and went on to serve as its executive director for the next 27 years. Also in 1980, Forsberg wrote "Call to Halt the Nuclear Arms Race," a position paper that effectively launched the Nuclear Weapons Freeze Campaign. At its root, this campaign was a Soviet-sponsored initiative that would have frozen the USSR's nuclear and military superiority in place, and would have rendered the new American President, Ronald Reagan, unable to close the gap which the Soviets had opened in the post-Vietnam era.
Representative Patricia Schroeder and Senator Ted Kennedy helped to promote the nuclear freeze movement in Congress. The movement reached its apex on June 12, 1982, staging a mass march through Manhattan and then assembling more than 700,000 people in New York's Central Park, where Forsberg was a keynote speaker. That same year, however, the movement was dealt a serious blow when a resolution urging President Reagan to negotiate a bilateral freeze with the Soviet Union failed by two votes in the House of Representatives. In 1983 Forsberg received the so-called “genius award” from the John D. and Catherine T. MacArthur Foundation. On May 24, 1983, Forsberg participated in a "US-USSR Bilateral Exchange Conference" in Minneapolis, an event sponsored by the Institute for Policy Studies. At this gathering, Forsberg advised the Soviet delegates to have their government make, for public-relations purposes, some sort of "meaningless gesture"--such as destroying some 250 obsolete missiles--that would not compromise Soviet military capabilities but would serve as a useful “bargaining chip” with which to pressure the U.S. “to delay the deployment of [its own] new missiles ... until November 1984, when we will elect a new government.” Reagan's reelection in 1984, which derailed this plan, was described by Forsberg as a “shock” that left the nuclear-freeze movement “reeling.” In 1986, Forsberg's Nuclear Weapons Freeze Campaign merged with the Committee for a SANE Nuclear Policy, which, according to the Senate Internal Security Subcommittee, had been infiltrated by Communists.
In 1988 Forsberg charged that America was “feed[ing]” Soviet “mistrust” by “deploy[ing] conventional forces” to “occup[y] the whole world outside of Eastern Europe and the Soviet Union,” and by “intervening in civil wars in Third World countries, to try to make sure the non-communist side wins.” Depicting as a “great myth” the notion that the USSR wished to establish a worldwide empire, Forsberg held that the Soviets merely “want to win friends and influence people in the Third World”--not unlike the United States. She chastised American leaders for not recognizing that “since the death of Stalin,” Soviet foreign policy had grown far “more open, more reasonable … more willing to make concessions, less reliant on military forces ...” Forsberg further disputed the claim that the Soviets had been engaged in a massive military buildup; rather, she said, their activities could be characterized as nothing more ominous than “modernization”--something which America likewise pursued “all the time.” In the fall of 1994, Forsberg was listed in a publication of the New Party (NP)--a socialist political coalition--which named more than 100 activists “who are building the NP.” Other notable names among the list of 100+ were: John Cavanagh, Noam Chomsky, Barbara Ehrenreich, Maude Hurd, Manning Marable, Frances Fox Piven, Zach Polett, Wade Rathke, Mark Ritchie, Joel Rogers, Gloria Steinem, Cornel West, Quentin Young, and Howard Zinn. In 1995 President Bill Clinton appointed Forsberg to the Director’s Advisory Committee of the U.S. Arms Control and Disarmament Agency. From the mid-1990s through the early 2000s, Forsberg criticized U.S. plans to develop and deploy a National Missile Defense system. She derided the scheme as both unnecessary (claiming that so-called "rogue states" such as Iran, Iraq, and North Korea posed no serious nuclear threat to the United States) and counterproductive (warning that Russia and China would perceive the move as an existential threat).
In October 2002 Forsberg denounced U.S. Senator John Kerry's vote to give George W. Bush pre-approved authority to attack Iraq if the President felt that Saddam Hussein posed a threat to American national security. To protest Kerry's vote, Forsberg in 2004 ran unsuccessfully as a write-in candidate for Kerry's Senate seat in Massachusetts. In 2005 Forsberg was appointed to the Anne and Bernard Spitzer chair in political science at City College of New York. In addition to her work with IDDS, Forsberg also served as a board member for the Arms Control Association, the Journal of Peace Research, and Women's Action for New Directions. Forsberg died of cancer on October 19, 2007.
Botanical name: Saurauia roxburghii Family: Actinidiaceae (Chinese Gooseberry family) Singkrang is an evergreen tree commonly found in NE India - Manipur, Mizoram, Assam, etc. The tree is distinguished by its large elliptic leaves, which are conspicuously rusty-haired beneath. Pink flowers arise in lax clusters. Flowers are very numerous, and the buds look like pink balls. Sepals are whitish, unlike the similar, better-known species Saurauia napaulensis, in which the sepals are dark pink. Petals are 5, pink, strongly overlapping, giving a cup shape to the open flower. Flowers generally hang facing down. Flowering: May-August. Medicinal uses: A gummy or gelatinous substance produced by the leaves is used for preparing hair pomade.
PLYMOUTH, N.H.— It is hot. After the cooler temperatures of the last two summers, the hazy, hot and humid weather blanketing central New Hampshire has residents and visitors looking for ways to beat the heat. “Evaporative cooling is the key to avoiding heat-related illness,” says Dr. Dawn Richardson, emergency room physician at Speare Memorial Hospital. “Squirt yourself down with a water bottle and sit in front of a box fan, or take frequent cool baths or showers.” Dr. Richardson explains that it is the airflow over wet skin that helps the body cool itself. If you don’t have air conditioning, keep your windows open and use a fan to both circulate the air and vent out the heat. Also, consider doing activities where there is air conditioning: go to the movies, visit a mall or get your grocery shopping done at your local supermarket. Heat exhaustion, dehydration and breathing problems - the heat and humidity can make asthma and COPD worse - are the most common heat-related illnesses treated in the emergency room. In addition to evaporative cooling, Dr. Richardson offers the following advice:
- Drink lots of fluids. Sports drinks, or a half-and-half mixture of water and sports drink, are good because they replace both fluids and electrolytes.
- Exercise and walk pets in the early morning, when the air temperature is cooler.
- Check on elderly relatives and neighbors.
- Dress appropriately: thin, breathable fabrics.
- Don’t leave anyone, or animals, in a parked car. Body temperatures can rise dangerously in just minutes.
- Know the signs of heat exhaustion - confusion, not sweating and lethargy - and push fluids to rehydrate the body.
- Heat stroke is a serious illness that can lead to death. If fever is present, combined with confusion, dehydration, difficulty breathing and lethargy, seek immediate medical attention.
NEW YORK, March 13 (JTA) - The buzz of Passover may be a little harder to hear in sleepy Southern towns. But it's not because Jews there are working any less furiously to prepare for the holiday. Observing Passover in a region where Jews are few and far between means struggling to secure enough resources--both people and products--for the occasion. Contrary to the perception of persecution in the South, the area has been relatively hospitable to Jews, Southern Jews say. In the Bible Belt, it often matters less which religion one belongs to, as long as one is religious. For Macy Hart, who grew up in the only Jewish family in Winona, Miss., the second seder was always reserved for ministers and leaders of the community. "People looked forward, from one year to the next, to get the invitation," says Hart, president of the Goldring/Woldenberg Institute of Southern Jewish Life in Jackson, Miss., who can't recall one incident of anti-Semitism happening to himself, his parents or his three siblings. Now a resident of Jackson, Hart has continued the tradition of using the seder for interfaith education. In the southwest corner of Mississippi, in Natchez, the once-thriving Jewish community holds a seder attended by about as many non-Jews as Jews. While a Jewish resident there used to hold model seders for area non-Jews, Natchez native Jerry Krouse described a "reawakening" of interest among non-Jews who have realized "that the Last Supper of Jesus was the seder." And in recent years, the Passover meal has become a hot ticket in Natchez. Whether or not the seder was the Last Supper of Jesus is disputed, according to Rabbi Arnold Resnicoff, national director of interreligious affairs for the American Jewish Committee. But the belief is entrenched among Christians worldwide--and the phenomenon is strong in the American South. At its peak around the turn of the 20th century, Natchez was home to several hundred Jews.
But when the boll weevil plague tore through the cotton plants, the Jewish population--most of whom were involved in the cotton business--took a hard hit, according to Krouse. Now, like so many other small towns that welcomed Jewish immigrants who came through Southern ports, Natchez's community is withering, with only 13 remaining members of its synagogue. Parents describe the bittersweet move of their children to cities with more economic and cultural lures--and more in the way of Jewish life. Jews came to the South as early as the late 17th century in Charleston, S.C., and settled in Savannah, Ga., soon after. But the "great migration" of Eastern European Jews, which lasted from 1880 until World War I, heavily shifted the balance of U.S. Jewry to the North. Now a new migration pattern has taken hold. Mirroring the rest of the country, Southern Jews have left the countryside for the city. In 1960, there were 167 Jewish communities in the South, 98 of which had Jewish populations of between 100 and 500 people. By 1997, that number had dropped to 141, with only 62 communities averaging between 100 and 500 Jews. One of the disappearing communities is Natchez, where Passover has changed its tone over the years, but "it still feels like the seder," says Krouse, who called their service "ultra-Reform." Sometimes he questions whether the best introduction to Judaism for non-Jews is the Passover seder, with its songs and drinking and games to keep children engaged. Ruth Adele Lovitt, a non-Jew who's attended the Natchez community seder for at least the last 15 years, loves the festive occasion. "I enjoy going to services that Jesus went to when he was young," Lovitt says. "To me, the Passover is very symbolic," and said the blood with which Jews marked their doorposts in the Passover story has its own meaning for her. "It is the blood of Jesus that has marked my doorpost. He is the lamb of God," she says.
As far as the children's games go, Lovitt doesn't mind because "Jesus did this as a little boy," she says. The Christian take on the traditional Jewish meal doesn't offend Krouse, whose concern has more to do with the dwindling Jewish presence. Jewish leaders agree. "When non-Jewish groups come to our community seders, we look at it as a time for sharing what we have in common. We tend not to see it as a threat," says the president of the Southern Jewish Historical Society, Hollace Ava Weiner, who says small numbers make interfaith dialogue a daily part of small-town Southern Jewish life. The seder in Natchez carries on its own traditions, like matzah balls with gravy, which Krouse says has become an "unspoken competition" among attendees, who offer their versions of the delicacy for the feast. And like any good Southern seder, the charoset--a sweet melange of nuts, wine and apples--is made with pecans, not walnuts. "Pecans just work better," Krouse says. And, of course, in these parts they're called "pecahnz." "That would be a dead giveaway if you called it a 'peecan' that you weren't from around here," says Leanne Silverblatt, a fourth-generation resident of Indianola, Miss.--located in the Delta. There, they've begun holding the second seder at the local "How Joy" Chinese restaurant. "The Jewish people in the Delta love Chinese food, too," Silverblatt joked. In truth, the aging and diminishing community felt too tired to assemble a second production for the synagogue seder. So the Jewish women handed their traditional recipes over to the restaurant owners, with whom they are friendly, and the nearly 70 attendees luxuriate in being guests at the second seder. Meanwhile, in Vidalia, Ga., home of the famous "sweet onion," the community takes great pains to preserve a purely Orthodox seder at their synagogue.
Built in the shape of a perfect Jewish star, with its turquoise sanctuary divided by a mechitzah to separate men from women, Vidalia's Orthodox synagogue boasts the title of one of the smallest in the country. The community began with 14 members when the synagogue was founded in 1969, through a donation from a visiting New York merchant. Before that, they met at the local women's club, where post-football-game dances competed with Friday night services on the other side of the wall. Now, only seven members remain to carry the heavy load of producing a kosher Passover, which draws about 30 people--many of whom travel in from the neighboring towns. One of the synagogue's founders and its president, Ben Smith, and his wife, Sarah, are among the handful of Jewish families left in Vidalia. On soil famed for growing onions with as much raw sugar as apples, they lived the Southern Jewish tradition of working in the "dry goods" business. As for so many Southern Jews with a peddler's past, retail was the natural choice. In fact, until Wal-Mart opened its doors there in the mid-1980s--and eventually shut those of the small businesses--Vidalia's few Jewish families owned nearly all of the city's major department stores. For now, the Smiths are preparing for the yearly Passover push. That means driving more than two hours to the kosher butcher in Savannah, Ga., and bringing the meat back for the community seder. "The fact is that people go all out to try to stay true to that holiday," Hart says. "That's a survival piece--something Jews don't have to do in places where they're more plentiful."
MEDIA ALERT: How Important Are Paid Sick Days to Workers and Our Nation's Health? Washington, D.C. — June 16, 2010 — Government data show that more than 40 million U.S. workers do not have paid sick days. How many have gone to work with a contagious illness? How many send sick children to school? Do they use the emergency room more frequently than workers who do have paid sick days? Does the public support paid sick days? How much support is there for legislation to let all workers earn paid sick days? The results of a new survey conducted by the National Opinion Research Center that answers these and other pressing questions will be released at an audio news conference at 1 PM Eastern Daylight Time, Monday, June 21. To join, dial 1-800-311-9403 and use the password Survey10. Speakers: Deborah Leff, President, Public Welfare Foundation; Tom W. Smith, Senior Fellow, National Opinion Research Center, University of Chicago; Debra L. Ness, President, National Partnership for Women and Families. Congress and several state and local legislative bodies are considering legislation to allow workers to earn paid sick days. On the call, Smith will detail findings from the new survey, Leff will discuss the importance of paid sick days to workers and to the public health, and Ness will discuss the status of federal, state and local initiatives to require employers to provide paid sick days. The new public opinion survey was conducted by the National Opinion Research Center (NORC) and funded by the Public Welfare Foundation. Known since its 1941 founding as the National Opinion Research Center, NORC pursues objective research that serves the public interest. NORC has offices on the University of Chicago campus, in Chicago’s downtown Loop, and in Bethesda, Maryland.
8.572173 - REGER, M.: Clarinet Sonatas, Opp. 49 and 107 (J. Hilton, J. Fichert) Max Reger (1873–1916) 1900 was a particularly productive year for Max Reger, 27 years old and keen to move from Weiden, a small town in northern Bavaria, to Munich, the capital. That year he wrote numerous works, such as the famous Fantasia and Fugue on BACH, Op. 46, and the Three Choral Fantasies, Op. 52, for organ, as well as his songs Opp. 48 and 51, the piano pieces Opp. 45 and 53, the String Quartet in G minor, Op. 54 No. 1, the Two Romances for violin and orchestra, Op. 50, and various folk-song and madrigal arrangements. Reger was deeply troubled by the provincial narrowness of the Upper Palatinate. This, however, was also to his creative advantage, since there was nothing to distract him from his work as a composer. Both Clarinet Sonatas, Op. 49, were written in the spring of 1900, inspired by Brahms's Clarinet Sonata in F minor, Op. 120 No. 1, to which Reger was introduced through a private performance by his former teacher Adalbert Lindner and the excellent clarinettist Johann Kürmeyer, who also conducted the municipal orchestra at the time. Lindner wrote in his autobiography: “…Reger entered the room during our performance, he listened and said: ‘Fine, I am also going to write two such things.’ About three weeks later he kept his promise.” Like Brahms he created a double opus--Brahms wrote his two sonatas in F minor and E flat major, Reger his in A flat major and F sharp minor. Unlike Brahms, whose Sonata in E flat major consists of only three movements, Reger followed the four-movement pattern of Brahms's F minor sonata: opening sonata movement, scherzo with a sostenuto trio, expressive slow movement and final sonata movement. Within this rather traditional structure, however, Reger developed his own individual musical language.
His treatment of the thematic material, his invention of novel and original harmonic progressions, his expressive dynamics, and an intricate way of phrasing make him very distinctly a twentieth-century composer. After he had completed the sonatas, Reger tested them in a private concert. Lindner continued: “…Kürmeyer, who had studied his part thoroughly, managed this rather difficult task in the best possible way--and to the full satisfaction of the master. In the end even the very critical father seemed highly content with each movement…” They repeated the first and last movements of the first sonata several times on account of their complexity and hidden beauties. Lindner wrote: ”…most of all we were delighted by the catchy and gracious second movement with its wonderfully sweet sostenuto episode which, appearing three times, is reminiscent of the familiar folk-song ‘Ach wie ist’s möglich dann?’ (‘Oh how is it possible?’), and by the unworldly and dreamy Larghetto with its più mosso assai middle section in B flat minor, which depicts a furious but quickly dissolving awakening. The creator of this work, which is full of longing, sang himself into everyone’s heart. The last movement (6/4, Prestissimo assai) again breathes a healthy, almost exuberant sense of humour…”. The second sonata is the equal of the first in both structure and musical invention. In contrast to the high-spirited first sonata, the second has a more melancholically introverted elegance. With regard to its effect on the audience, however, this piece is in no way inferior to its counterpart. The first sonata was published the year after it was written and was given its première by Karl Wagner and Reger at the Museumssaal of the Palais Portia in Munich on 18 April 1902. The much-feared critic Rudolf Louis regarded the slow movement as “one of the best pieces Reger has ever written”.
The second sonata, dedicated to the clarinettist of the first sonata's première, remained unpublished until early 1904. It was then given its first public performance by Anton Walch and Reger at the Kaimssaal in Munich on 29 April of that year, only a day after the world première of his String Quartet in A major, Op. 54 No. 2.

The Sonata in B flat major, Op. 107, was composed in Leipzig in the winter of 1908/09, shortly after Reger had finished writing the Symphonic Prologue to a Tragedy for orchestra, Op. 108. Reger, by then a renowned professor of composition at the Royal Conservatoire in Leipzig as well as a successful composer and performer, described the work as "a very light and friendly piece, not long at all, so that the character of the sound of the wind instrument does not tire!" The date of the sonata's first performance was fixed before its completion, so Reger asked his publisher, Bote & Bock, to delay publication until after that date, enabling him to make corrections if necessary. On 3 June 1909, six days before the première, Reger—as he reported to Bote & Bock—played "the clarinet sonata as a sonata for violin and piano with a very good violinist (possibly Pálma von Pásztory)…! If the gentlemen from the press claim that the work would be difficult to understand, then these gentlemen are perfect idiots. The piece sounds very good, it is a work of intimate chamber music and it is furthermore not difficult at all to play together." The work is dedicated to Ernst Ludwig, Grand Duke of Hesse and by Rhine, whose first chamber music festival in Darmstadt in 1908 had hosted the highly acclaimed second performance of Reger's Piano Trio in E minor, Op. 102. On that occasion the Duke, who was very receptive towards Reger's music and generously supported music and the arts in general, awarded him the Silver Medal for Arts and Science. In the following year the Clarinet Sonata, Op.
107, was first performed by Julius Winkler and Max Reger in the presence of the Duke as part of the second chamber music festival in Darmstadt. Reger wrote to his publisher: "…the audience went wild and didn't want to leave the hall. The cheering became especially loud when the Duke came on stage in order to thank me and shake hands! In short: Reger topped it all. Even Saint-Saëns, to whom the entire second evening was dedicated, was an anti-climax." Shortly afterwards Reger's equally successful and much-loved alternative versions for violin or viola and piano were given their first performances.

In his Op. 107 Reger followed the same overall structure as in his earlier sonatas, although his musical language had naturally developed significantly further. Stylistically this sonata seems to represent a continuation of his Op. 49, but its mood is more relaxed and may even mirror Reger's domestic happiness at the time: in October 1908 Max and Elsa Reger had adopted Christa, then three years old, and in November or December they also became foster parents to the one-and-a-half-year-old Lotti (the formal adoption followed in 1910). The sonata was well received by the press: it was described as a "…return to classical simplicity with regard to its form and musical content" and deemed a "beautiful and deeply felt sound-idyll". The elegant and gracious scherzo-like finale completes a composition which is full of charm and subtle humour.
Psychographics is the classification of people based upon psychological measures, including attitudes, habits, values, interests and opinions. Surveys are often used to measure the attitudes of the public. Measuring attitudes helps marketers predict behaviors and determine how to use marketing to compel people to purchase a product. While psychographics are commonly applied in many traditional marketing efforts, they can easily be incorporated into online marketing to improve the ROI from your site.

Examples of How Psychographics Are Used

Marketing messages depend upon the audience the company intends to reach. For example, a company marketing a food product may entice consumers by appealing to them on the basis of price compared to the competition. Another strategy may be to appeal to customers in terms of the benefits of the product. While we may know the benefits of the product, a better understanding of the customer gives you insight into which benefits matter most. If customers tend to prefer organic foods and are focused on their health and the environment, the marketing will emphasize attributes such as being made of all-natural ingredients. Depending upon the interests of the target market, companies will select a marketing message to appeal to the consumer.

With a website, this means that knowing what other interests your users have can have a significant impact on sales or ad revenue. If you know what customers are interested in, you can feature products from other categories that you know your customers will like in an attempt to up-sell. Similarly, you may find that many of your users like to travel, and even though you don't run a travel website, you might be able to get good click-throughs on travel ads or affiliate offers.
How to Maintain a Loyal Customer Base with Psychographic Marketing

Companies must first determine the type of people using their product and then produce a message consistent with their beliefs. Psychographic measurements can help companies determine what drives their customers to make a purchase. For instance, as in the previous example, some customers value health above cost; others value cost above health. The marketing message would vary depending upon the group being addressed. Psychographics help companies make better decisions about which benefits or features to emphasize in emails sent to customers. Or if you find that a majority of customers who buy a product really like a certain attribute of it, that attribute should be emphasized in the on-page sales copy.

Why is Psychographic Marketing Important?

While demographics are helpful for knowing who your audience physically is, psychographics allow you to really know your audience - what they like and are interested in. This allows you to create a better experience for customers and website visitors, which will result in higher sales. Knowing these interests, you can not only tailor your marketing efforts to increase product sales but also create highly targeted content for your website that users are far more likely to find interesting. Even if you can't change the overall topic, you can use examples in your content that are more relevant to your users, drawing on their interests.

Companies also save money with psychographic marketing. Marketing to the wrong customers may result in a loss for the company. Marketers find the best leads in all media formats, from newspapers to the Internet. Some companies require more than one marketing campaign, and a general campaign is often not as effective as a targeted one.

So What Does All This Mean?
Psychographic marketing can be really helpful, as it gives you insight into the interests, values, and mindset of users, allowing you to improve your up-selling and to place more relevant content and ads on your site. Unfortunately, the whole process can seem a bit overwhelming (and cost-prohibitive) if you haven't done it before. Fortunately, there are options that are both easy to implement and cost-effective. The simplest solution is to email your customers a survey, which could be created in an online survey program or in a Google Doc. If you want to take it up a level, you can go with an online market research company, which allows you to survey a wider audience and reach beyond your existing customer base - a good idea if you are looking to expand your customer base or readership. Furthermore, companies offering online market research services often have some statistical analysis baked into the product to help you best understand your results.
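For readers comfortable with a little scripting, the survey-tallying step described above can be sketched in a few lines of Python. Everything here (the response list and the category names) is invented purely for illustration; the idea is just to count which interest segment each respondent falls into and see which segment dominates:

```python
from collections import Counter

# Hypothetical survey responses: each respondent names the interest
# category they care about most (data invented for illustration).
responses = [
    "health", "travel", "health", "environment",
    "travel", "health", "cooking", "environment",
]

def segment_shares(responses):
    """Return each interest segment's share of all respondents."""
    counts = Counter(responses)
    total = len(responses)
    return {segment: count / total for segment, count in counts.items()}

shares = segment_shares(responses)
# The largest segment suggests which benefit to emphasize in your copy.
top_segment = max(shares, key=shares.get)
print(top_segment, shares[top_segment])
```

Even a toy tally like this makes the marketing decision concrete: if "health" turns out to be the largest segment, the sales copy and featured products should lean on health-related benefits first.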
Bastide towns in Lot-et-Garonne, France

A Brief History Lesson on the Bastide Towns of Lot-et-Garonne

Deep in the heart of the Middle Ages, life was pretty rough in South-West France. In an effort to bring a bit of stability to the area, 'new towns' were planned and built. These towns were laid out on a strict grid and were also usually fortified; they aimed to bring security to their inhabitants while also adding to the strength of the respective sides (English and French) in the region. Some bastides had a more specific military purpose and were built as a result of 'tensions' during the Hundred Years' War - hence many are found between the Dordogne and Garonne rivers. Many changed hands between the English and French, some several times, during this period.

The layout of a typical bastide town includes a central square, several large streets running from the square to the edge of the town, and a grid of narrow passages between these main streets. The central square historically had a sheltered hall in the middle for market days, and a series of arched passages around its edges. The towns offered their inhabitants a degree of safety, tax concessions, exemptions from military service, and a small plot of land on which to build a house. The churches in bastide towns were often also used for defensive purposes, and were designed and built with that in mind.

Of course, some 700 years or so later, these towns preserve their original form to varying degrees. Some have become sprawling large towns, others have largely disappeared, but the area has a good number of towns that have passed the centuries largely intact.
Where are they found?

I consider the 'centre' of the bastide area to be the Monflanquin - Monpazier - Villereal triangle in northern Lot-et-Garonne, since this area has a particularly high concentration of bastide towns within easy reach of each other: the three just mentioned, plus others including Eymet, Villeneuve-sur-Lot, Tournon d'Agenais, Beaumont, Castillones and Domme. Other areas of south-west France would claim a similar distinction, however...
– Ice fishing competition in Lapland

•Elite Race – In Stockholm is a horse racing track named Solvalla, the site of an international trotting competition. The Elite Race is held on the last Sunday of May. Solvalla is the largest harness racing track in the country.

•Stockholm Marathon – is rated as one of the top 10 running marathons in the world. It began in 1979 and is held every year, generally in early June. Runners set off at 11:30am on a Saturday (many other city marathons are run on a Sunday).

•Vätternrundan – is rated as the world's longest recreational bicycle race. The route is over 300km and is ridden over a weekend in June. It begins in the town of Motala, and riders circle Lake Vättern to finish back in Motala. Numerous groups take part, each consisting of 60 to 70 cyclists, with a two-minute interval between the starting times of each group. As there are so many cyclists, the first group sets off at 7:30pm on the Friday and the last group may only set off at 5:30am on the Saturday. There are nine stops along the route, where cyclists can eat, drink, and receive first aid and even massages. Each rider can measure his or her own time, as all riders wear an RF transponder around the ankle. The race has roots going back to 1966 and has grown to attract many cyclists, including from other nations.

•Öland's Harvest Festival – held over four days, from Thursday evening to Sunday night, in late September or early October. With local foods, concerts and exhibitions, it is the largest harvest festival in the country, held on the island of Öland. The pumpkin has been established as the festival's emblem. This autumn festival began in 1997.
In bygone days the end of the growing season (before winter) was marked as Michaelmas, when animals were brought into stables to protect them from the harsh cold months ahead and when the harvest was gathered for market. A magazine is published in connection with the festival. Art shows, fairs and hot air balloons are among the prime attractions.
During the Deepwater Horizon disaster three years ago, few people got as close to the action as Scott Porter. Porter, a diver with a degree in marine biology, worked in Louisiana as a contractor for oil companies and had become fascinated with the corals growing on oil rigs. He and some friends volunteered to collect samples of corals near the spill for federal officials. They were also paid to take reporters from CBS News and other outlets into the Gulf of Mexico to view the spreading slick. Federal officials "kept telling us it was safe," Porter said. So he and the other divers he worked with relied on that advice and kept plunging into the gulf. At the time, Porter was a fit, healthy guy, just 42, who had performed 6,000 dives. He competed in martial arts tournaments. He didn't expect to get sick. But soon after swimming through murky water full of oil and chemical dispersants, he said, he began suffering from a variety of ailments — a burning sensation in his chest, migraine headaches, skin rashes, nausea. Porter says he is still dealing with some of those symptoms today, as are other divers who came into contact with the mixture of oil and chemical dispersants during the 2010 disaster. "I was disoriented a lot of the time. I was dizzy a lot, and feeling sick," said Dale Englehardt, another Louisiana diver. "Now the bottoms of my feet have blisters. They pop and go away, but then they come back, and now they're on my chest and back, too." Other divers knew to avoid going near the oil spill. Paul Sammarco of the Louisiana Universities Marine Consortium said he was told by university diving experts not to allow anyone to dive who wasn't wearing a special hazardous materials suit. Documents show the federal agency Porter was dealing with, the National Oceanic and Atmospheric Administration, wouldn't even send its own divers out. Internal emails obtained by a watchdog group, the Government Accountability Project, show NOAA's divers lacked protective gear.
"Diving in water contaminated with crude oil requires specialized training, equipment and diving protocols," the head of the NOAA diving program warned in May 2010. "Please do not risk your health by attempting to dive in these contaminated waters." A NOAA spokeswoman said she could not answer any questions regarding Porter and the other divers, including any advice they may have been given by the agency or the fate of any of the samples they collected, because of the ongoing BP criminal trial in New Orleans federal court. The disaster began with a fiery explosion aboard an offshore drilling rig on April 20, 2010. Two days later, oil started spewing from the damaged wellhead 5,000 feet below the surface. For months, BP struggled to stanch the flow. It also sprayed record amounts of a dispersant called Corexit on it to try to keep it from reaching environmentally sensitive shorelines. "Use of dispersants during the Deepwater Horizon oil spill response was coordinated with and approved by federal agencies," BP spokesman Jason Ryan said. "Based on extensive monitoring conducted by BP and the federal agencies, BP is not aware of any data showing worker or public exposures to dispersants at levels that would pose a health or safety concern." Porter and some friends had been studying the corals that grow on oil rigs in the gulf and had just gotten a federal grant for their work, he said. After the oil had begun flowing from the Deepwater Horizon rig, he said, he wanted to find out if it was affecting the corals and went out to collect a few. At the time, he had no worries about the safety of diving within 30 miles of the spill source. In May, Porter and his colleagues took Jeff Corwin out to film a story for CBS News, he said, and found "a cloud of micro-droplets of dispersed oil 10 feet thick from the surface." By June, they were seeing 6-foot-long "mucus-like strands of what appeared to be oil that was not completely dispersed." 
That dive, which took Porter to 80 feet, was the first one after which he felt ill — his chest burned, his head pounded, he couldn't stray far from a restroom. Yet when he brought in samples to NOAA officials and asked them about the rashes breaking out on his body, they told him no one else was reporting a similar problem. By June, he'd switched from a wet suit, which left plenty of skin exposed, to a dry suit, which keeps a diver well insulated from the water and has vulcanized rubber at the joints. His friends, who felt fine, mocked him at first for the switch. But after a few more dives they began getting sick, too, he said. "It was crazy, the stuff that was happening," he said. He never did hear back from NOAA about the coral samples his group turned in, he said. Meanwhile, in August he took a local Fox television crew out. Fifteen minutes after surfacing, Porter was vomiting over the side — an event a friend filmed and posted on YouTube to warn other divers. Porter decided to stop diving until the gulf was cleaner. One thing that convinced him: the effect on his suit. His breathing regulator got clogged and the vulcanized rubber on the joints "basically disintegrated," he said. Porter is suing BP — but only for the damage to his equipment. Meanwhile, Englehardt was diving in much shallower water, but experiencing similar problems. The company he worked for sent him into Louisiana's Barataria Bay, where the oil washed ashore for months, to close off the valves on a barge. After he got sick, he moved to Hawaii. The National Institutes of Health has signed up 33,000 people across the Gulf Coast to follow for 10 years and see if the oil or dispersant made them ill. Porter and his fellow divers have refused to participate because the study is observational only: it offers participants no treatment. As his fellow diver Steve Kolian put it, "They just want to watch us die."
Laura N. Gasaway*

Librarians share many values with creators and publishers of copyrighted works, but their interests and values sometimes conflict. Additionally, the core values of each group sometimes conflict with the goals of copyright law. While these conflicts have existed for centuries, they are escalating in the rapidly expanding digital environment, and the debate between the two groups is becoming increasingly acrimonious. Members of both groups often misunderstand copyright law and engage in overstatement, sometimes fairly gross overstatement. Librarians and content providers share a great many core values and work symbiotically to promote common goals. Without publishers and producers, librarians would have little to offer their users because there would be no works of literature, no reference works, no videotapes and no databases. Without librarians, publishers would lack a valuable resource to make their works available, to publicize their works, to teach patrons how to use their works, and to preserve their works for posterity. Librarians, publishers, and producers share many core values about works of literature, the value of these works, and the importance of preserving them for future generations. Although librarians and authors often disagree on issues involving digital media, both parties realize the importance and value of information to people in the digital age, and both believe that information should be trustworthy and incorruptible. Both groups believe that publishers play a valuable role in making works containing information available to the public. The editorial work, the management of the peer-review process, and the distribution role played by publishers are crucial to the production of quality works, and both groups believe that publishers should be fairly compensated for these contributions. Despite sharing many common goals and values, significant disagreement exists between librarians and content producers and publishers.
Librarians tend to view information as a necessary public good, like food, shelter, and warmth, that should be made available at a reasonable cost. Commercial producers and publishers of copyrighted works, however, tend to view their works as private property that can be commercialized. Thus, the economic goals of producers and publishers often conflict with the social goals of librarians. From the perspective of the librarian community, publishers often appear to maximize profits at the expense of research and scholarship. Librarians, however, are less concerned about profit than they are about what they view as the higher-minded goal of providing high-quality information to people. Conflicting values make it difficult for these groups to compromise and negotiate with each other, and each feels that its existence is somehow threatened by the other. The core value conflicts also negatively impact the debate about what role copyright law should play in resolving competing interests between publishers and librarians. This fundamental distinction between the way creators and publishers, on one hand, and librarians, on the other, view and value information has existed as long as public libraries have, but it has not prevented the two groups from working together in the past to create and distribute information. The evolution of the digital world, however, highlights these opposing values and appears to exacerbate the differences. Popular sentiment sometimes supports the values of content providers and sometimes supports the librarians' values. For example, American society admires and supports capitalistic notions of developing products and then selling them to maximize profits for the entity or individuals who paid for the development and production of the product. Naturally, this position supports the views of the publishers and producers of copyrighted works.
On the other hand, society also believes that public access to information and the existence of free public libraries is important - or even essential - in a democratic society. This supports the librarians' values. The Constitution acknowledges the competing interests between these groups in the Copyright Clause: "The Congress shall have Power ... To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." By "Science" the Framers meant all disciplines of learning, and thus promoting learning is a goal of copyright. Content providers view the Copyright Clause as ensuring protection for their works and focus on the "exclusive Right," while librarians focus on the social good of promoting learning among the public and focus on the mandate to "promote the Progress of Science." The delicate balance between competing interests called for by the Constitution, however, may now be at risk, according to at least one commentator: "If copyright is cast too narrowly, authors may have inadequate incentives to produce and disseminate creative works or may be unduly dependent on the support of state or elite patrons. If copyright extends too broadly, copyright owners will be able to exert censorial control over critical uses of existing works or may extract monopoly rents for access, thereby chilling discourse and cultural development."

As a librarian, I certainly know much more about librarians' values than I do about publishers' and producers' values, but I am also a teacher of intellectual property law, and I recognize that both groups have important goals and missions. In discussing the values of the two groups, I do not mean at all to imply that right and wrong values exist. When one is discussing values, there is no right or wrong, only differences. Additionally, I run the risk of over-generalizing when ascribing values to each group; clearly, not all librarians have adopted the values I assign to them.

Although both groups clearly have values in addition to the ones identified here, this article is limited to a discussion of the primary values that relate to copyright. The core values of authors and publishers that relate to copyright include: (1) compensation for the creation and production of their works; (2) the ability to control their works; (3) authentication and recognition of their works; (4) broad marketing of their works; (5) promoting strong intellectual property rights; and (6) viewing the fair use doctrine as an affirmative defense to copyright infringement. Librarians' core values regarding copyright law include: (1) recognition of public libraries as educational institutions; (2) providing information to the people; (3) providing information on all sides of an issue; (4) promotion of the rights of users of copyrighted works; (5) the ability to identify and locate information; (6) recognition of the importance of the public domain as a repository of information; and (7) viewing the fair use doctrine as a right of a person to use a copyrighted work. Societal values relating to libraries, publishers and producers, and copyright include: (1) the importance of an educated population; (2) support for entrepreneurship; (3) access to public libraries; (4) the importance of the public domain; and (5) public access to information.

Value differences clearly have affected discussions and negotiations regarding copyright. Since the early 1990s, disagreements between publishers and librarians have seemed increasingly acrimonious to me, and I began to wonder why. What has changed to increase the decibel level in the debate?
There may be no easy answer, but conflicting values contribute, at least in part, to the disagreement and create a siege mentality on both sides. Likewise, there may be no simple solution that will soothe the debate between librarians and copyright holders. By recognizing the depth and strength of these values, however, perhaps the parties can avoid further schisms and the entrenchment in positions that prevents them from solving problems conjointly.

I. CONTRASTING CORE VALUES

A. PUBLISHERS AND PRODUCERS OF COPYRIGHTED WORKS

Publishers and producers are not all alike. Similarities exist among the various types of copyright holders, but their interests are not all the same. Music copyright holders differ from traditional print publishers; videogame owners differ from photographers. Moreover, differences may exist even within a class of copyright holders. For example, traditional publishers may operate as for-profit companies or as nonprofit organizations, such as societies or professional associations. A common goal of all publishers and producers is making their works available to the public, which now requires using both traditional media and newer digital media. It is evident, however, that something has changed: the growing presence of digital distribution technology makes copying much easier and cheaper, so that the core values of the copyright holder community are threatened regardless of whether copyright holders operate as commercial or nonprofit entities. This threat implicates another stated goal of publishers - to encourage and promote strong intellectual property rights worldwide. The right to control reproduction is said to be the hallmark of copyright, and copyright entrepreneurs expect to be compensated when their works are reproduced.
Reproduction is more common and easier to achieve in the digital world than in the analog world, especially if one considers the transient copies made by computers. When libraries reproduce copies and give them away, publishers believe that they should be compensated for the copies. This expectation is not new in the digital environment. Authors have been aware for many years that libraries make reproductions, but have generally tolerated the practice without objection, viewing it as a de minimis infringement. Today, in the digital age, however, instead of talking about infringing reproductions, copyright holders use terms like "theft" and "piracy" to describe the practice; such words leave little room for debate or for considerations of exemptions such as fair use. From the perspective of content providers, what has changed is not only the quantity of reproductions that may be made through electronic means, but also their quality. The ability to produce copies of a digital work with no degradation of quality is a significant matter to publishers and producers, because it permits unlimited reproduction of information, whereas previously the extent of reproduction was naturally limited by the means of reproduction. Perfect copies were not possible until the advent of computer technology, so this is a new concern. Publishers and producers value the ability to control their works in the marketplace. Fear of loss of control now appears to be much stronger on the part of copyright holders than it has been previously. Is the increased fear justified? Perhaps it is, since loss of control can easily result in total destruction of the value of the work. The volatility of the legal environment also probably contributes to this fear, since publishers and producers cannot depend on the courts to restore that control. Once control is lost, it is lost permanently.
Technology presents copyright owners with an opportunity to exert greater control over their works than ever before, since not only access to works but also their use can be controlled. Concomitantly, however, detecting infringement may be more difficult. One response to the latter problem might be to prevent all copying rather than finding ways to permit lawful reproduction. Publishers value works that they can authenticate, ensuring that the content they originally provided remains unaltered. The fluidity of the digital environment raises concerns on the part of publishers and producers that persons will gain access to their works and alter them in ways that reflect negatively on the publisher. Content providers also value the ability to market and protect their works in the digital world. The digital environment permits the sale of access to increasingly smaller bits of information, which makes new marketing strategies possible. Publishers also value the ability to market their works broadly and to explore new markets. Eventually, as publishers and producers point out, a pay-per-view distribution system may be possible in the digital world in which each consumer can purchase exactly the information he or she desires. Such a system would allegedly be inexpensive for users, who could purchase small bits of information rather than having to pay for access to the entire work. The envisioned low-cost pay-per-view distribution system, however, has clearly not yet been developed, and whether it ever will be is questionable. Another change relating to marketing in the digital world is that licensing is increasingly recognized as a way to protect intellectual property rights, and licenses are becoming broader and more encompassing. Publishers are beginning to value their control over rights and permissions as an income stream as they never have before.
Finally, a core copyright value for publishers and producers is the notion that fair use is only a defense to copyright infringement, not a right of users of copyrighted works. Based on some of these values, radical public statements made by publishers are understandable, but they raise considerable concern on the part of librarians. Examples of such statements are that in the digital world pay-per-view will be preferable to sales of copies as a way to recoup the cost of creating a copyrighted work, and that interlibrary loans should be eliminated in the digital environment.

B. LIBRARIAN VALUES

Librarians also have deeply held core values, some of which have never been questioned. Statements made by publishers and producers sometimes strike at the heart of these values. Many of these values come from what may be called the public library ethos; librarians as a group tend to share these values whether they work in the nonprofit sector or in the for-profit environment. The education and training of librarians inculcates some of these values, which have been exhibited in debates and considerations about copyright. What are some of the core values held by librarians? One core value is that public libraries are educational institutions. This core value, held for at least 125 years, potentially conflicts with the Copyright Act, under which public libraries are considered nonprofit libraries but not educational institutions. To librarians, however, libraries, archives, and museums serve such an important educational function that, even when asked today, librarians often say that public libraries are educational institutions. Throughout the Copyright Act, however, phrases like "libraries and nonprofit educational institutions" are used, making it clear that the two are distinguishable. Thus, the exemptions available only to schools and other educational institutions are not available to libraries generally.
True, libraries in nonprofit educational institutions straddle the divide, but other types of libraries do not benefit from the exemptions for educational institutions. The most important core value for librarians is "information to the people." Public libraries are a shared intellectual resource maintained at public expense to provide resources that will be shared. Libraries purchase works and make them available at no charge to any user of that library, which is permitted by the Copyright Act under the first sale doctrine. This may conflict with publishers' goals, but section 108 of the 1976 Copyright Act, called the library exemption, makes clear that the exclusive rights of copyright holders are limited to permit many standard library activities to occur without the need to seek permission from the copyright holder. Public libraries as institutions question authority by providing information on various sides of an issue. Often it would be easier and less stressful on library staff if the library provided information that supported only the dominant beliefs in a community. From earliest times, however, public libraries have steadfastly provided information on multiple sides of a matter to permit library patrons to educate themselves and make up their own minds about issues. Sometimes individual publishers have suggested that libraries purchase only their materials rather than both theirs and those of a competitor. Even publishers of legal materials have been guilty of this in private conversations with law librarians. As part of the value of information to the people, libraries promote the rights of users under copyright. Librarians also support the right to read, the right of access to ideas, etc., and they view the role of librarians as advocates for users who all too often are silent about their needs and wants, or who may not even know what they need.
Libraries are valuable to society because they provide access to information to the economically disadvantaged and to other "have nots." They provide an equalizing influence that can serve to reduce the differences between rich and poor when it comes to information. Librarians believe that users have the right to browse in the digital environment just as they do in the analog world. Browsing is a way of selecting what information is needed, but this clearly conflicts with the content providers' notion of paying for access. The Copyright Act itself recognizes some of these values in the section 108 exemption. For example, libraries may lawfully reproduce copies of portions of works, such as an article from a journal issue, to satisfy a request from a user. If the library to which the request is made does not hold the requested item, under the exemption, that library may satisfy the request by obtaining a reproduction of the article through interlibrary loan activities. Librarians also believe that section 108 and the fair use doctrine permit them to provide library service to distant learners for whom the library is their primary library. Finally, in order to facilitate teaching and learning, librarians see the library as an extension of the classroom with the creation and maintenance of reserve collections, including electronic reserves under the section 107 fair use provision. Although librarians do not necessarily believe that information should be free, they do believe that once a library has subscribed to, or licensed, access to a work, users of the library should have unfettered access to it. Some librarians would probably further argue that the location of the user is irrelevant. For example, if the user is an enrolled student of the university, whether she is physically located on campus or not should be immaterial; the critical issue is whether she is an enrolled student.
Restrictions in license agreements that limit users to physical presence on the campus or restrict the ability of a library to use the work for interlibrary loan conflict with this value. Another librarian core value is that users of libraries should be able to locate information - i.e., identify that it exists and where. The role of librarians is to assist and teach users to locate information. This depends on standard indexing and abstracting services as well as Internet search engines. Traditionally, indexing and abstracting services have been provided by third parties and not by the entities that held copyright in the work. For example, the H.W. Wilson Company began its indexing in 1901 with the first Readers' Guide to Periodical Literature, which included only 20 periodicals that were recognized as "acceptable" by the literary and academic communities. The theory of indexing changed over the years with the addition of indexes such as Index Medicus, which evolved into the online service Medline, but nonetheless, third parties conducted indexing. These organizations may have been for-profit (H.W. Wilson Company), nonprofit (Public Affairs Information Service), or even a government agency, such as the National Library of Medicine, which produces Medline. Even if the indexer was a commercial entity, it was still a third party that did not hold the copyright in the content. Today, publishers are adopting new digital object identifiers (DOIs) to attach to digital information and to link it through indexing. The DOI index would be linked to the full text of the work. In theory, this sounds great, since the DOI will stay with the object regardless of whether the publisher sells the digital work, etc., but some concerns about DOIs exist. With DOIs, content providers would not only control the indexing, but also access to the indexing and, through the index, access to the digital object itself. All of this would be available to users only through licensing arrangements.
Publishers state that this would be a great boon to researchers, but this is true only if researchers have access to that publisher's digital materials or to a consortium of publishers' systems that are licensed. If a scholar wants to cite an article using a DOI, it would become an inaccessible reference to anyone who does not have access to one of the publishers' systems in the consortium. No longer would neutral third parties provide the indexing and abstracting services one can use to determine the existence of a work even if the library does not subscribe to the journal or proceedings in which the work appears. Perhaps the most important manifestation of the core value of information to the people is the importance of the public domain. Librarians believe that having unrestricted access to non-copyrighted works is crucial for research and scholarship. Copyright law should not restrict public domain works in any way, and users of libraries should be encouraged to make wide and frequent use of public domain works. Further, U.S. government-produced works should be free of copyright and widely available. Even though librarians likely hold other core values related to copyright, the last one critical to mention is fair use. To librarians, fair use is a user's right and not just a defense to copyright infringement. Further, librarians believe that not only individuals but also libraries have fair use rights, based on the library exemption in section 108(f)(4), which states that "nothing … shall affect the right of fair use …" The word used in the statute is "right" and not "privilege;" thus, it is easy to see why librarians maintain that fair use is a right, not just a defense to infringement. Whether fair use is a defense to copyright infringement or is a right has long been debated, with librarians and publishers and producers weighing in on different sides.
Recently, the National Research Council recognized this divide in a report published in early 2000, The Digital Dilemma: Intellectual Property in the Information Age. It highlights this disagreement in relation to private copying, but does not resolve the dispute. In recent years a number of professional library organizations have developed policy statements that reflect and restate some of these core values. The American Library Association is currently drafting a statement of core values. In the latest draft of the statement, four of the eight values identified relate to copyright. The first of these concerns the connection of people to ideas. The other core values flow from that. "We guide the seeker in defining and refining the search; we foster intellectual inquiry; we nurture communication in all forms and formats." A second core value is unfettered access to recorded knowledge, information and creative works. "We recognize access to ideas across time and across cultures is fundamental to society and to civilization." Learning in all its contexts is the third value. "We aid people to become independent lifelong learners by selecting and offering materials that support the differing needs of all learners, and that entertain and delight the human spirit." The fourth value that relates to copyright concerns preservation of the human record. "The cultural memory of humankind and its many families, its stories, its expertise, its history, and its evolved wisdom must be preserved so it may illuminate the present and make the future possible." The Association of Research Libraries (ARL) has developed several documents that state core librarian values without so identifying them. Fair Use in the Electronic Age: Serving the Public Interest was adopted in the mid-1990s.
It details what the public should have a right to expect to do without incurring copyright liability: to read, listen to, or view publicly marketed copyrighted material privately, on site or remotely; to browse through publicly marketed copyrighted material; to experiment with variations of copyrighted materials for fair use purposes, while preserving the integrity of the original; to make or have made for them a first-generation copy for personal use of an article or other small part of a publicly marketed copyrighted work or a work in a library's collection for such purposes as study, scholarship or research; and to make transitory copies if ephemeral or incidental to a lawful use and if retained only temporarily. Additionally, the ARL document posits that nonprofit libraries should be able to undertake certain activities, such as using electronic means to preserve copyrighted materials and to provide copies through interlibrary loan, on behalf of their clientele without infringing copyright. Further, libraries should not be liable for the actions of their users after they post the appropriate notices on unsupervised reproduction equipment. In 1994 the ARL adopted Intellectual Property: An Association of Research Libraries Statement of Principles, a statement in response to the White Paper that affirms the rights and responsibilities of the research library community in copyright. The most important of these rights is that copyright exists for the public good, and concomitantly, fair use must be preserved in the developing information infrastructure. It also stated that licensing agreements should not be allowed to abrogate either fair use or the library exemptions provided in the Copyright Act. It recognized that librarians and educators have an obligation to educate the users of information from their collections about their rights and responsibilities under intellectual property laws.
The statement opines that federal government works should remain free of copyright restrictions. Finally, it states that the information infrastructure must be formulated to permit compensation to authors for the success of their creative works and to provide fair return on investment for copyright holders. In 1999 the ARL restated some of these values in a document called its Keystone Principles, which has some impact on core values. The most important of these is a slight change over earlier statements: now access to information is identified as a public good. This recognizes that academic authors and institutions or public institutions often create information, and the public interest is served by having this information available. The ARL believes that commercial enterprises have disrupted the public availability of such information through pricing policies, licensing restrictions, and the like.
C. SOCIETAL VALUES CONCERNING LIBRARIES AND COPYRIGHT
The core value that members of the public hold most dear in this area is the importance of an educated populace. The U.S. Constitution is a charter for self-government, and only an educated populace can self-govern. Libraries play a crucial role in creating an educated citizenry through literacy and reading programs, and simply by making information widely available to the general public. Thus, it follows that the public value of education may be furthered by the availability of free public libraries. Another value the public holds is the importance of the public domain. While copyrighted works play a key role in "promoting the progress of science and the useful arts," public domain works provide much of the intellectual commons that members of society share. Evidence of a contrary view might be found in the recent reduction of the public domain by the retroactive extension of the already long term of copyright by an additional 20 years.
Additionally, it is clear that American society recognizes the importance of public libraries and public access to information. There is considerable public support for libraries. In mid-1998, a Gallup poll showed that 67% of Americans had visited a public library within the last year, 54% had checked out a book, and 21% had checked out other materials, like a CD or a video. "In the continuing struggle to establish and maintain democratic values, free public libraries are essential for providing information and knowledge." There is also a civic aim: the free public library offers citizens a way to become informed and educated.
II. HOW DID LIBRARIANS' CORE VALUES DEVELOP?
A. HISTORY OF PUBLIC LIBRARIES
As the first major urban public library in the United States, the Boston Public Library was opened to the citizens of Boston in 1854 in rented space on the bottom floor of a school on Boylston Street. The idea for a public library actually had begun 30 years earlier with a proposal to unite some of the city's privately owned libraries. In 1839 Alexandre Vattemare proposed that the great cities of the world exchange cultural items such as books, works of art, etc. He also suggested that the cities build libraries and museums so that the public could then view the collected cultural artifacts. Vattemare visited Boston in 1841 and proposed a merger of 15 private libraries. Not surprisingly, most of these libraries were less than enthusiastic, but in 1847 he persuaded Boston officials to build a public library. The city council succeeded in negotiating a $50,000 loan from a London bank with which it created a book fund. Three years after the 1854 opening, the Boston Public Library moved into its own building with space to house 240,000 volumes. By 1875 the public library movement was well underway; 188 libraries had been established across the country, and more were planned. Public libraries were considered to be educational institutions.
The belief behind public support for public libraries is expressed in an 18th century Rationalist view of American democracy: "that every person should have an equal chance to fulfill his abilities; that every man can and will do so if given the chance; that the individual shall be free to develop his inclinations and capacities; and that society will progress as the enlightenment of its citizens advances." Librarians viewed themselves as missionaries of literature, and began to assist schools. The general view was that public libraries were poised to help the country solve various social problems. In the last decade of the 19th century librarians wanted to elevate society and raise the levels of knowledge, goodness and wisdom. Libraries then and now are considered to be a public good, and "[p]ublic libraries, like public schools, came to be accepted as public responsibilities, civic goods benefiting the entire society and thus worthy of public support." As the library historian Patrick Williams described librarians at the close of the 19th century and into the 20th century, librarians were inspired: "[t]hey idealized the library as messianic, as an agency of national salvation. They saw themselves as missionaries enlivened by the 'library spirit.'" They continued to insist that public libraries were educational institutions. The Library Spirit was compared to the spirit that built the great European cathedrals. The spirit was one of service, and brought books and learning to towns and hamlets, as well as cities. Libraries were altruistic in nature, and everything seemed to be working. By 1896 there were 971 public libraries with 1,000 volumes or more; by 1903 there were 2,283. Between 1906 and 1915 Andrew Carnegie donated money from his personal fortune to build over 639 library buildings in cooperation with local governments. Governments also became more active in library development and building. Books were distributed in traveling libraries to rural areas.
Public libraries in cities and towns placed books in public schools, firehouses and police stations. Public libraries' missionary service also included immigrants. Libraries viewed this Americanization effort as a service to the immigrant community – it was their patriotic duty and a role that librarians willingly accepted, apparently with little question. It seems to have been a goal of public libraries to work with immigrants. Libraries apparently viewed themselves as civilizing influences, and services to immigrants were viewed as a social obligation. Public libraries provided books to immigrants, sometimes in the immigrant's own language, as well as lectures on a variety of topics such as health, food and American law. They offered classes in English, and immigrants were taught manners and how to behave. What was taught was often described as the "American way of life." By 1915 the missionary era was coming to an end and was being replaced by the new rage, adult reading courses. From 1920 to 1948 public libraries were involved in the adult education movement. Adult education was not a new idea, but it was newly emphasized by government at this time. There had been earlier lecture series aimed at educating adults, like the Lyceum and Chautauqua series, in addition to local lectures held at public libraries. After the two World Wars, however, much had changed in society. Although public libraries had made significant contributions to individual lives, they had not been greatly successful in educating the masses. In the 1950s the American Library Association (ALA) and others thought that the Library Services Act would contribute significantly to creating equal education opportunity for rural America. After World War II, the Carnegie Corporation was consulted to see if it could determine why support for public libraries was so weak if they were indeed so necessary to a democracy.
After discussions with library leaders, the ALA sent a proposal to the Brookings Institution in 1946 for a study, now referred to as the Public Library Inquiry. Sociologists and others assisted librarians in this important study. The study was to be wide ranging, and to examine how well the public library served its various user populations, along with how effective it was in cultivating loyalty to the basic conceptions and ideals of the democratic way of life. The Public Library Inquiry produced a number of reports that probably were infused with bias, and it assumed certain contentious values were widely shared so that its recommendations could be accepted. One of the central findings came from the belief that the capability of learning was universal. This notion was dubbed the "Library Faith," which is expressed by the syllogism that printed works have such value that reading itself is a good; reading in order to learn is moral and useful behavior, so libraries contribute value to society. The public library is important to the democratic process because of its relationship to books and reading. It provides free access to all library users of works that have value because they enhance a user's pursuit of knowledge and happiness. Further, it provides sources of knowledge for the informed citizenry who are responsible for the very fate of democracy. One final goal and value of the public library is that it both preserves and organizes recorded knowledge worldwide and thus plays a critical role in cultural and social continuity. The Inquiry maintained that this faith was historical and was central to the ideology of public librarianship. The ideology was the lens through which librarians viewed the world and made decisions. Further, the ideology adopted a justification that saw the public library as occupying a unique place as a state institution, which contributed in very specific ways to democratic culture. The Library Faith has its roots in the 19th century.
Charles Coffin Jewett, in his 1849 report to the Regents of the Smithsonian Institution, identified two social benefits provided by public libraries: (1) a direct benefit to the ordinary American who can use the collections and services to improve practical skills, gain cultural knowledge, enhance moral strength and increase productivity, and (2) an indirect benefit to the nation in the form of greater economic productivity and a better quality of life. Jewett's view is consistent with the Library Faith and describes the public library as an intellectual commons, contributed to by everyone and available to all. A part of the value of the public library comes from the very fact that it is a shared resource. The Library Faith was resurrected in the Public Library Inquiry, but this time it was armed with a strategy for action. The central purpose of the Inquiry was to establish a link between democracy and free public libraries. The Public Library Inquiry offered an empirical examination of the public library in terms of what the institution had achieved and what could have been achieved when compared to its own objectives. Then, it attempted to place the public library in the broader context of American culture and politics. It was also an attempt to stimulate librarians to reexamine their professional values. The Public Library Inquiry resulted in seven books and five special reports. It was really a survey instrument, a study of public libraries that would tell librarians what to do to make their public libraries the instrument of popular education that they wanted them to be. It suggested that libraries should reinvent themselves to ensure users the "widest possible range of reliable information." Beginning with the Library Services Act in the 1950s, changes were inevitable; bookmobiles hit the road in the 1960s with service especially to rural areas, and adult education efforts continued into the 1960s.
From 1965 to 1980, activists began to push services for persons considered disadvantaged who traditionally made little use of the library. Renewed outreach beginning in 1967 indicated a significant difference in the types of works that public libraries needed to provide: movies, tape recorders, viewmasters, etc. But by 1970 outreach programs began to wane. A Strategy for Public Library Change was presented at the Public Library Association meeting in January 1972. The Strategy proposed a new role for public libraries, premised on the concept that information in the form of data, facts, and ideas is essential to the flourishing of the country's citizens, which redefined the role of public libraries as information providers. By the late 1970s a new emphasis on providing information to everyone had developed, which stressed that "[all] information must be available to all people in all formats purveyed through all communications channels and delivered to all levels of comprehension." This very broad role envisioned for public libraries, however, was one few would be able to meet. Nevertheless, the role of information provider is an appropriate one for libraries since "access to the accumulated information, knowledge and wisdom of humankind is essential. ... It is that function which the library performs best ..." The education and training of librarians is based on ideals such as the Library Spirit and later the Library Faith. The first librarian education program was founded at Columbia College in January 1887 under the direction of Melvil Dewey. Classes were held across the street from the main campus, because not only did the first class admit women, but the class also consisted of 17 women and three men. Faculty members were predominantly drawn from the staff of the Columbia libraries, with some other librarians brought in as guest lecturers.
Faculty members were supposed to inculcate the library spirit - an attitude about library use and access to those collections. It was this library faith that distinguished Dewey and his generation of librarians from earlier generations whose primary goals were security and preservation of collections. The Library Faith permeated early library education and continues to this day. Librarians are trained with the public library ethos, and it carries over into their work lives. Even librarians in the for-profit sector often still operate with this library faith at work. They have often had difficulty in recognizing that the for-profit environment may significantly change the relationship between copyrighted works and the user. Librarians have tended to view all users as having rights to access and use works just as members of the public have this right. The copyright law may, however, differentiate between users of public libraries and users of a corporate library, if the Texaco case is any example.
C. LIBRARIES IN THE PRESENT DAY
Libraries have developed into well-defined social institutions that hold a collection of "knowledge representing objects." Libraries play several roles: (1) they are consumers in the economic system, (2) they participate in the political system and (3) they serve as locations of social exchange in the everyday lives of people in the community. Librarians have always been concerned with information content stored in artifact form such as books, journals and film. Librarians are primarily information handlers: they acquire information, organize it, provide access to it, and "serve as intermediaries between the public and the objects of cultural and intellectual authority." Historically, librarians were also concerned with activities relating to the process of publishing and distributing copyrighted works, but to a much lesser degree.
Their primary concern was with the collection, storage, and access to static content, rather than with the process of producing and publishing content. Today, however, much information is increasingly dynamic, and it is clear that information consists of both content itself and the process of producing and distributing it. A central tenet of librarianship is that access to information should be free. Because the accumulated cultural memory is stored in libraries' collections and available for consumption by the library users, libraries facilitate information exchange among individuals and society - knowledge contained in the library is transferred to users, for example, when they read. The result of this transfer is socialization and education, but it also facilitates solutions to personal problems and the generation of new knowledge. One noted commentator says of information that "[p]ower in the information age, just like power in the bronze age, the iron age, and the ages of Enlightenment and Industrialization, flows primarily to information users and creators, and most power in the information age, just like most power in the bronze age, the iron age, and the ages of Enlightenment and Industrialization, flows to those best situated politically, economically, socially and culturally." The librarians' role as teacher is also expanding as they train people to use the Internet and attempt to screen for quality content in the vast sea of digital information. Furthermore, as technology allows information to become more proprietary in the digital environment, the role of libraries is expanding to ensure freedom of access to information. Another traditional role for libraries is preservation, but preservation of new, elusive and alterable digital knowledge is just now beginning to be addressed. Librarians also fight the dominant culture by collecting materials the dominant culture does not value or approve of.
Libraries bring a vast amount of material together and challenge power by facilitating access to works and by organizing the information regardless of textual format or content. Librarianship is essential in a capitalistic democracy because freedom of access to information is crucial to democracy, even though capitalism may not appreciate this necessity. Librarianship facilitates reading, the chief value of which is to produce educated people who, having imbibed knowledge and values, contribute to the important process of cultural and social change and to the survival of culture by virtue of the kind of people they have become. In the information age, librarianship takes on another important role, since individuals are viewed as using documents for their information content to solve particular problems, whether personal or social, in addition to generally improving society by reading. The commodification of information is a threat to the public's access to information. This is why the slogan that "information wants to be free" is so attractive to librarians. What librarians really want, however, is not that information be absolutely free, but that it be free to the user once the library has paid for access. Librarians view their role as helping and providing information to those who need it; often they do not recognize that this may conflict with publishers' and producers' values. For example, to publishers, the fact that perfect copies can be reproduced from digital copies is a critical concern in evaluating the harm to the market that copying causes. Such copies are basically viewed as "originals." To librarians, however, the fact that a copy of a textual work is a perfect copy is immaterial. Librarians view what they do as providing the copy to users, and as long as the information is legible, the quality of that copy simply does not matter.
This is one reason why publishers and librarians are often talking past each other as opposed to reaching real understandings. Publishers appear to be under the impression that most librarians believe information should be absolutely free. This is not an accurate portrayal of librarians' views. Since libraries were formed, they have purchased materials in order to make them available to users. Thus, the materials budget for many libraries is quite large, and there is an understanding of the value of these works and the importance of the information they contain. Librarians certainly recognize that copies of books, journals and the like must be purchased, since donations are not likely to provide the balance and breadth that most seek in developing library collections to reflect all sides of an issue. Additionally, for several decades libraries have been signing licenses to make information available online to their users; there has been no thought or debate that non-public domain databases should be provided free. Librarians also recognize the investment made by database developers to collect and organize the information contained in these valuable research sources, and that there must be a fair return on this investment. What librarians staunchly advocate is that individual users should not have to pay for information obtained from their public libraries. Libraries should be able to negotiate licenses to provide access for users in their companies, academic institutions, and public libraries. They fear the threat that information will become pay-per-view and that the library will no longer be able to negotiate appropriate terms and fees to make a database available to its users. So, statements such as "information wants to be free" may simply mean free to the individual, not free to the library. If database proprietors charge too much for the license fee, then the library will not be able to purchase access for its users.
This will mean that patrons with economic means will be able to afford individual access to the data while the masses will not. The idea of "information have-nots" clashes directly with the core values of librarians. Librarians deeply believe that information should be available to everyone who chooses to come to the library to use it, and that access by individuals should not be determined by their ability to pay. This likely originates with the idea of free public libraries and universal service. Pointedly, this does not mean that no one should pay for the information. Indeed, the library has long agreed that it should foot the bill for access to information. It just may not be able to do so if license fees become too high or if the pay-per-view model is adopted. Libraries must have a "sum certain" in the budget - in the early 1970s they tried simply to allow access and pay for "hits" in a database, but many libraries had to curtail this practice, or even abandon it in the middle of a fiscal year, when the charges reached the amount budgeted to cover use of that database. Negotiating for a year's access at a flat rate, rather than on a per-search basis, permits the library greater flexibility, even if it then has to narrow access to everything in the database as a way to reduce its costs. Librarians also fear the possibility that pay-per-view will mean that their role as intermediaries will be threatened. Librarians have long assisted users with their information quests, taught them how to formulate their queries and how to conduct research more efficiently and effectively, and guided them through the complex maze of available resources. The goal is to help each user get exactly the information he needs: not too much information, and not information that overwhelms the user. This intermediary role is a type of information filter, not a censor. 
The training of librarians helps them to determine which sources are age and experience appropriate, how best to search the contents and how to help users so that they can become independent and confident researchers.

III. LIBRARIES AND DIGITAL COPYING

Many of the value differences between copyright holders and librarians are highlighted by the various exemptions to the Copyright Act. Section 108, which provides certain exemptions for libraries vis-à-vis the rights of the owner of the copyright, is particularly relevant to this discussion. In addition to the library exemptions, libraries also have fair use rights.

A. ELIGIBILITY FOR THE LIBRARY EXEMPTION

Section 108(a) details the conditions under which a library or archives is eligible for the library exemption. First, the section permits making only single copies of works except for preservation purposes, when, under certain conditions, the library may make up to three copies. Second, the reproduction and distribution must be made "without direct or indirect commercial advantage." The meaning of this phrase has never been litigated, and the legislative history is not abundantly clear. This requirement makes the values conflict evident: what does "without direct or indirect commercial advantage" mean? Publishers say that if the library is in a profit-seeking entity, the library cannot meet this criterion. Librarians do not accept this interpretation of the words of the statute. They believe the statutory language means that it is the reproduction itself that may not be used for direct or indirect commercial advantage, i.e., sold for a profit. There is additional support for this position in the legislative history discussing section 108(g)(1); the House Report states that even a library in a for-profit entity may reproduce an article for a user to use in her work as long as it is an isolated and spontaneous request. 
Therefore, if the library provides document delivery services to its users, even if there is a fee charged for the service, it can argue that there is no direct or indirect commercial advantage if that fee represents only cost recovery. Instead, it is revenue neutral and there is neither a commercial advantage nor a disadvantage. However, later amendments to other sections of the Copyright Act all seem to insert the word "nonprofit" before library, which offers some evidence that legislators, at least after the passage of the 1976 Act, considered the exemptions for libraries to apply only to the nonprofit sector. The third requirement is that the library's collection must be open to the public or to non-affiliated researchers doing research in a specialized field. Certainly many libraries in nonprofit educational institutions as well as public libraries meet this criterion. For other libraries, this might be met even if the collection is not open to the public generally but only by appointment for qualified users, such as researchers. Libraries that are not open to any outside users have a more difficult time qualifying under this criterion. Librarians likely would argue that a library not open to outsiders but which lends any of its published materials through interlibrary loan also qualifies for this exemption; publishers likely disagree, but the matter has never been litigated. The fourth and final requirement concerns notice of copyright on copies the library reproduces. Notice of copyright is a term of art in copyright law, consisting of three elements: (1) the word "copyright," the abbreviation "copr." or a "C" in a circle; (2) the year of first publication; and (3) the name of the copyright holder. Under the 1909 Act, an owner lost her rights if she published a work and failed to include such notice on copies of the work and did not give actual notice of copyright. 
The more author-friendly 1976 Copyright Act softened the automatic loss provision. An author did not lose his copyright for an accidental omission of notice if, during the first five years after publication, only a small number of copies had been distributed without notice, and if he later tried to correct the mistake. When the United States joined the Berne Convention in 1988, notice of copyright was dropped as a formality, and today, placing notice on protected works is voluntary. Librarians generally regret this change in the law. Professional librarians and users depended on the notice of copyright to differentiate between works in which the owner claimed rights and those works that were in the public domain. The burden on the copyright holder was slight in comparison with what copyright notice did for the public in differentiating works in which someone claimed rights from those in the public domain. Throughout section 108, libraries that reproduce works under the exemptions are required to put a notice of copyright on the copies they make. The policy behind this requirement is to alert users that although the library was able to make a copy of a work for them, the work is not free of copyright restraints. Within the library community, there continues to be debate over the meaning of "a notice of copyright." To the copyright holder community, notice of copyright is a term of art in the law, and most copyright lawyers believe that it means the library should include the three traditional elements that comprise notice of copyright under section 401(b) mentioned above. Some librarians argued that a library should simply stamp photocopies and other reproductions with the American Library Association's recommended statement: "Notice: This work may be protected by copyright." Despite this ongoing debate, the matter has never been litigated. 
Many libraries have religiously used a stamp containing the ALA recommended wording, while others had a stamp made with "©, _____, 19__ or 20__"; library staff members would then fill in the name of the owner and the year of publication on copies the library reproduced. The Digital Millennium Copyright Act amended section 108(a)(3), which now reads: "The reproduction and distribution of the work contains a notice of copyright that appears on the copy that is reproduced, or includes a legend stating that the work may be protected by copyright if no such notice appears on the work." Thus, for libraries there no longer is any option for the content of "a notice of copyright." The library must include the notice that appears on the work. This can be done by reproducing the page that contains the notice, by writing all of the information on the copy, or by creating a rubber stamp with "©, _____ (for copyright owner), _____ (for year published)" and filling in the notice information as it appears on the work. The only instance in which the stamp or legend "Notice: This work may be protected by copyright" may now be used in lieu of the actual notice is when the copyright holder does not place a notice on the work. This is not exactly what librarians had hoped for; librarians had sought an amendment that would alleviate the burden of including a notice of copyright when the copyright holder failed to do so. It is likely that the amendment relates more to the new copyright management information provisions than it does to providing relief for librarians. Although the legislative history states that the goal was not to increase the burden on libraries, that has not been the end result. The amendment also has implications for the World Wide Web. While webpages are copyrighted, often the developer does not include a notice of copyright. Contrary to popular opinion, publishing a webpage without notice does not place the page in the public domain. 
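The amended notice rule amounts to a simple either/or decision. The sketch below models it in Python; the function name and the shape of the input are hypothetical illustrations for clarity, not statutory language, and nothing here is legal advice on compliance.

```python
# A minimal sketch of the post-DMCA rule for notice on library reproductions:
# include the notice that appears on the work; fall back to the ALA-style
# legend only when the work itself carries no notice.
from typing import Optional

LEGEND = "Notice: This work may be protected by copyright."

def notice_for_copy(notice_on_work: Optional[str]) -> str:
    """Return the text a library places on a reproduction it makes."""
    if notice_on_work:
        # The library must reproduce the actual notice, e.g. by copying the
        # page that bears it or stamping the same information on the copy.
        return notice_on_work
    # Permitted only when no notice appears on the work.
    return LEGEND
```

A library workflow built this way would capture the work's own notice at the time of copying, rather than choosing a blanket legend for every reproduction.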
When printing or reproducing webpages for users, according to the newly revised statute, librarians must either print the page containing the notice of copyright or stamp the reproduction with "Notice: This work may be protected by copyright" if there is no notice on the webpage. The Digital Millennium Copyright Act (DMCA) amends the preservation and replacement sections of the library exemptions. The Sonny Bono Term Extension Act also added a new subsection to section 108 that expands the preservation right. Both of these amendments relate to digital copying by a library, which copyright holders have long said was not permitted. There are two sections that relate to preservation: section 108(b), which is a true preservation section; and section 108(c), which is a replacement section for lost, damaged, deteriorating or stolen materials. The DMCA amended these two sections, making it clear that a library can, under certain circumstances, use digital means to preserve library materials. Further, these amendments are applicable not only to digital works but also to traditional works. The 1976 Copyright Act always permitted libraries to reproduce works for preservation or replacement purposes if certain conditions were met. The DMCA amended these provisions to provide for meeting national microfilm standards and to make clear that digital means may be used for preservation, but it also added an additional restriction for works preserved digitally. Under the original statute, section 108(b) permitted a library to reproduce one copy of an unpublished work for preservation, security or deposit for research in another library. Section 108(c) allowed a library to reproduce a published lost, damaged, stolen or deteriorating work after the library made a reasonable effort to obtain an unused copy at a fair price. The statute did not define fair price, but the legislative history does describe what a reasonable investigation might entail. 
It would require recourse to commonly known U.S. trade sources, such as retail bookstores, jobbers, and wholesalers; contacting the copyright holder or author, if known; or using an authorized document delivery service. Both sections further required that the work either currently be in the collection of the library or, if not, that it had been there at one time. Both sections stated that a library could make a "facsimile copy" when the conditions had been met. There was disagreement about whether a digital copy could qualify as a facsimile. Many librarians maintained that digital copies that scanned the page and represented an exact copy of it were facsimiles; publishers steadfastly claimed that digital copies were not facsimile copies at all. The DMCA effectively ends the disagreement. It expands the preservation and replacement exemptions in several ways. First, the library is no longer limited to making only one preservation copy of a work. Now it may make three copies, which complies with national microform standards. Second, the word "facsimile" was omitted, and third, the statute specifically permits the copy to be in digital format. While these three changes broaden the preservation exemptions for libraries, each subsection also contains a new limitation. If the copy that is reproduced is in digital format, the digital copy may not be "made available to the public in that format outside the premises of the library ..." This may narrow the library's rights even though a library now may make a digital copy for on-premises use. However, the library could also then make a printed copy from that digital copy and lend the printed one, since it is allowed to make up to three copies of a work. Prior to the amendment, a library that reproduced a work under these subsections could treat the reproduction just as it did the original work. It could lend the reproduction to users, provide it through interlibrary loan, and the like. 
This new restriction may mean that if the work is preserved in digital format, it cannot be used outside the library buildings, which is much more restrictive. Surely what Congress must have meant was that if the reproduction was digital and was available on the library's network, then it could be used only within the premises and not on a campus network or the World Wide Web. In using the term "digital copy," Congress may actually have narrowed the exemption for works that were originally in digital format. For example, if the original work was a CD-ROM, which now is lost and is not available at a fair price, a library may create another CD, which also happens to be a digital copy. But the language of the statute says that digital copies cannot be used outside the premises, even if the original was a digital copy that could have been used outside the premises of the library. This is more restrictive than the previous version of the statute, and likely is not what Congress meant to accomplish by the amendment. Did conflicting values lead to this strange result? The DMCA amended section 108(c) in an important way. In addition to applying to lost, damaged, stolen or deteriorating works, the amendment added "or if the format in which the work is stored has become obsolete." The amendment then explains when a format may be considered obsolete: "... if the machine or device necessary to render perceptible a work stored in that format is no longer manufactured or is no longer reasonably available in the commercial marketplace." This is a great help for libraries that currently are dealing with deteriorating 78 rpm recordings, Beta format tapes, and the like. Thus, if the equipment is still produced but is extremely expensive, a library might determine that it is no longer reasonably available in the commercial marketplace and thus may reproduce the work under this amendment. 
This was a change that really benefits libraries and their users, one where there appears either to have been no values conflict with publishers and producers or where the groups were able to reach agreement on the issue before Congress. Even when a work becomes lost, damaged, stolen, deteriorated or obsolete, the library may reproduce it only after it determines by reasonable investigation that an unused copy cannot be obtained at a fair price. This applies to all types of works, including audiovisual works. A library is not required to search the used book or videotape market in order to locate a replacement volume or item. The statute does not define key concepts such as "reasonable investigation" or "fair price." Nor is there any time limit placed on how long a librarian should search for an unused replacement. The legislative history does provide some guidance on what constitutes a reasonable effort to locate an unused replacement, however. According to the House Report, "The scope and nature of a reasonable investigation to determine that an unused replacement cannot be found will vary according to the circumstances of a particular situation." It goes on to state that in the ordinary course of events, a library that seeks to replace a damaged, deteriorating, lost, or stolen work would first approach U.S. trade sources such as retail bookstores, wholesalers or jobbers. If that proves unsuccessful, then the library should contact the publisher or author, if known. Lastly, it should contact an authorized reproduction service such as UMI. The House Report does not define "fair price." There are two published definitions of fair price, one from a publication of the Association of American Publishers (AAP) and another from the American Library Association. A values conflict is clear in the contrast between these two definitions. In 1978, the AAP appeared to posit that a fair price was basically whatever anyone charged the library. 
It defined as a fair price the latest suggested retail price, if the work is still available from the publisher. If the work is not so available, the fair price was defined as the prevailing retail price, or, if the library uses an authorized reproducing service, the price that the service charges. The ALA publication uses a three-part definition of fair price. First, a fair price is the latest retail price, if the work is still available from the publisher. (This squares with the first part of the AAP definition.) Second, the fair price of a reproduction is a cost as close as possible to the manufacturing costs plus royalty payments. While this is more helpful in determining whether the price of UMI or another authorized reproducing service is fair, it is not without problems either. Authorized reproducing services simply quote a flat price to the library with no division of the charges into manufacturing costs versus royalty payments, so there is no way a library can use this part of the definition to help it decide whether a price is fair. The third part of the ALA definition deals with the loss of or damage to one volume of a multi-volume set when single volumes are not available for purchase. The ALA states that it could be argued that paying a full set price in order to replace one missing volume from a set is not a fair price. What is less clear is what happens when the stolen or damaged material does not comprise an entire volume but instead is only an article or two missing from a bound periodical volume. Surely, in this situation the librarian should be able to make a reasoned judgment about how much investigation to do and could determine that there is no fair price to replace the article missing from a bound volume. Most librarians would then simply reproduce the article and insert the photocopy into the bound volume. 
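The ALA's three-part definition can be summarized as a small decision procedure. The sketch below is a hypothetical illustration of that logic only; the function and parameter names are invented here, and, as the text notes, the definition itself leaves gaps that no computation can close.

```python
# A minimal sketch of the ALA three-part "fair price" definition described
# above. Returns None where the definition gives no usable answer and the
# librarian must fall back on judgment.
from typing import Optional

def ala_fair_price(available_from_publisher: bool,
                   latest_retail_price: Optional[float],
                   manufacturing_cost: Optional[float],
                   royalty: Optional[float]) -> Optional[float]:
    """Return a fair price if one can be computed under the ALA definition."""
    if available_from_publisher:
        # Part 1: the latest retail price governs.
        return latest_retail_price
    if manufacturing_cost is not None and royalty is not None:
        # Part 2: manufacturing cost plus royalty payments.
        return manufacturing_cost + royalty
    # Part 3 and the flat-quote problem: a reproducing service that quotes
    # only a lump sum, or a single volume of a set not sold separately,
    # yields no computable fair price.
    return None
```

The `None` branch is the interesting one: it marks exactly the situations the text identifies where the librarian must make a reasoned judgment rather than apply the definition mechanically.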
Determining what constitutes a fair price is left to the judgment of the librarian since there is scant guidance in this circumstance. Thus, librarians must use their best judgment in making the determination. Concomitantly, librarians should ensure that they are being fair to the publisher, which has a legitimate right to profit from its products when they are available for purchase at a fair price. There is another very common preservation activity practiced by many librarians of all types, which reflects a values conflict. When a library purchases videotapes for its collection, especially very expensive tapes such as those from the American Management Association, it is common practice to duplicate the videotape so that the library has both a use copy and a preservation or master copy. To many librarians, this supports the core value of information to the people. The tape was purchased and it should be available to users. If it is damaged, its availability is compromised; therefore, the reproduction makes sense. To the copyright holder, and indeed likely under the statute, this is an infringement of their reproduction right, although one might be able to make a fair use argument for such reproduction. Video producers may grant permission for such duplication or they may charge for the right. The Term Extension Act added another subsection to the Act, a new section 108(h). This section permits a library or a nonprofit educational institution, during the last 20 years of a published work's term, to reproduce, distribute, display or perform in either facsimile or digital form, a copy of a work for purposes of preservation, scholarship or research. 
In order to do this, however, the library must determine by reasonable investigation that none of the following factors exists: (1) the work is subject to normal commercial exploitation; (2) a copy can be obtained at a reasonable price; or (3) the copyright owner has provided notice, under regulations promulgated by the Register of Copyrights, that either of the above conditions applies. Further, the exemption provided by this subsection does not apply to any subsequent uses by users other than that library. Finally, when a digital copy is made as the preservation copy, there is no restriction that it be used only within the premises of the library. Because of both the age of the material and the scope of the conditions that must be satisfied, this subsection is of limited value to many libraries. It is likely that the only institutions that will take advantage of this subsection are large academic research libraries. This subsection applies only to works that are already at least 50 years old and probably much older, depending on how long the author lives after producing the work, and thus demand for the work likely has already declined or ceased entirely. The purpose of this amendment was to ameliorate the effect of term extension on libraries and library preservation. Libraries and archives are grateful for the expansions of the preservation sections of the Act; however, the new limitations may make those sections unworkable for all but the largest academic and research libraries. Libraries view as one of their missions the preservation of the world's knowledge and cultural artifacts. The library exemption as amended deals fairly well with preserving materials that were not originally in digital format. Preserving electronic information is more problematic, however, and many digital works simply are not being preserved either by the publisher or by third parties such as libraries. 
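The section 108(h) conditions set out above amount to a conjunctive test: the work must be in the last 20 years of its term, and none of the three disqualifying factors may exist. The sketch below models that test in Python; all names are illustrative, not statutory, and each flag stands in for the outcome of the library's reasonable investigation.

```python
# A minimal sketch of the section 108(h) test for reproducing a published
# work during the last 20 years of its term. Hypothetical names throughout.
def section_108h_applies(in_last_20_years_of_term: bool,
                         normal_commercial_exploitation: bool,
                         copy_obtainable_at_reasonable_price: bool,
                         owner_filed_notice: bool) -> bool:
    """True if reproduction for preservation, scholarship or research may proceed."""
    if not in_last_20_years_of_term:
        return False
    # None of the three disqualifying factors may exist.
    return not (normal_commercial_exploitation
                or copy_obtainable_at_reasonable_price
                or owner_filed_notice)
```

Because any one factor defeats the exemption, the subsection's narrowness follows directly from the shape of the test, which is consistent with the text's observation that only large research libraries are likely to use it.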
Even when a library signs a license agreement that gives users access to a work, the library may not have the right to preserve it in any way. There is great concern about the impact of this on the cultural record and on what records will be available to researchers in the future. Because digital works are generally licensed rather than sold to libraries and other users, there really is no mechanism for preservation. Libraries are concerned because licensed works do not provide a permanent copy. If either party terminates the license agreement, the library is left with nothing. By contrast, when the subscription to a print journal is terminated or the journal ceases publication, the library still possesses the volumes covered by the subscription period. This is not true for licensed digital works. Libraries are beginning to negotiate for retention of the electronic product at the end of the license period, but this too may prove difficult as technology changes over time. The library may be able to retain the work in electronic format, but it may not be able to access the work and use it. Even if the library acquires the right to convert the work to newer platforms, it may simply not be worth the effort to accomplish the conversion, especially for highly technical and scholarly works with a limited audience. Law libraries have reported purchasing some CD-ROM products that included a code making them unreadable after a certain date. The purchase agreement did not mention any expiration date at all, nor was there any actual notice to anyone that the discs would expire and become unusable. Clearly, this causes conflict between publishers and libraries. The same is proving true of works distributed on the web. Some journals are available on the web for only 45 days because the publisher does not view the website as an archive. The text is simply removed from the web after a certain period. 
Thus, to ensure continuing access, the library would have to print out the journal and bind it or reproduce it as a digital file.

D. COPIES FOR USERS

The sections of the Act dealing with reproducing copies for users were not amended by the DMCA. However, many of the problems between publishers and librarians arise under sections 108(d), (e) and (g), which concern providing copies to users and relate to values conflicts. Use of the term "document delivery" to encompass all of the activities that a library performs to provide copies in response to user requests has caused some difficulties with the publishing community, primarily because of commercial document delivery services, which must pay royalties for this activity. A library, however, may refer to all activities that provide works to users as document delivery: delivering the original volume, obtaining original volumes for the patron through interlibrary loan, making single copies of articles from its collection in response to a user request, getting a reproduction via interlibrary loan, obtaining copies from an authorized document delivery service and providing copies at the request of external users. Because of differing values, the groups have very different views of these activities. Section 108(d) states that the section's exemptions from a copyright holder's rights of reproduction and distribution apply when the user requests no more than one article from a periodical issue or one chapter from a book or other collective work. The single-article restriction has been a problem for some libraries when the user requests more than one article from an issue or even a copy of an entire symposium issue of a journal. Libraries, however, have learned to deal with this either by restricting their copying to one article per periodical issue or by paying royalties for copying more than one article from the journal issue for an individual user. 
Copies made under section 108(d) must become the property of the user, and the library must have no notice that the copy will be used other than for typical fair use purposes such as private study, scholarship or research. Additionally, the library must place on the order form, and on a sign located where the orders are placed, the Register's warning:

The copyright law of the United States (Title 17, United States Code) governs the making of photocopies or other reproductions of copyrighted material. Under certain conditions specified in the law, libraries and archives are authorized to furnish a photocopy or other reproduction. One of these specific conditions is that the photocopy or reproduction is not to be "used for any purpose other than private study, scholarship, or research." If a user makes a request for, or later uses, a photocopy or reproduction for purposes in excess of "fair use," that user may be liable for copyright infringement. This institution reserves the right to refuse to accept a copying order if, in its judgment, fulfillment of the order would involve violation of copyright law.

Today, libraries need to consider modern ways to provide this warning in advance of providing copies to users. For example, if the library receives the request via email or the web, then a sign at a physical location in the library is not sufficient; instead, the library should forward an email warning before providing the copy of the article to a user, or include the warning as part of the email or web-based request system. Section 108(g) describes further conditions relating to the reproduction for users conducted under section 108(d) and places other requirements on the library that is making the copy. For example, the reproduction and distribution rights under this section extend to "isolated and unrelated reproduction and distribution of a single copy." This is in contrast to systematic copying, described below. 
The 108(d) exemption also applies to copies of the same material on separate occasions. Therefore, the library is not required to retain internal records to determine which items someone else has requested. The fact that, over time, multiple users request a copy of the same item is not a problem. In other words, each user is treated as an individual. On the other hand, the rights of reproduction and distribution granted under section 108(d) do not apply if the library or its employee is "aware or has substantial reason to believe that it is engaging in the related or concerted reproduction or distribution of multiple copies" of materials described in subsection (d). Related or concerted reproduction has never been defined by a court. For example, if a user requests multiple articles from a journal issue and the library refuses to copy more than one article due to the restriction of one article per issue for copying under section 108(d), the user could come back each day and request another article until she has received the entire issue. If the library were aware that it was doing such copying in contravention of the statute, it should refuse to make the copies, or it should not treat those copies as exempted under the library exemption and instead pay royalties for them. Similarly, section 108(g)(2) does not exempt the library if it engages in systematic reproduction of single or multiple copies of portions of works described in section 108(d). Systematic copying has not been defined by a court, but cases have held that cover-to-cover copying of commercially produced newsletters in multiple copies was systematic and was not fair use. Two for-profit entities and one nonprofit trade association have been sued for such activity. Although the earlier case settled, Pasha Publications, Inc. v. Enmark Gas Corp. involved another for-profit company. 
The library subscribed to both the print and fax editions of the newsletter Gas Daily, and when the company received each version, copies were reproduced and distributed to employees. The court held that such activity was not fair use and that the company had infringed the publisher's copyright. Television Digest, Inc. v. U.S. Telephone Association was a similar case, except that the defendant, U.S. Telephone Association, was a nonprofit trade association that copied the newsletter Communications Daily for distribution to its members. The court held that the defendant's nonprofit status was immaterial to a finding of liability for infringement, because newsletter copying is not fair use regardless of whether the copying entity is for-profit or nonprofit. Systematic single copying has not been defined by a court except in the newsletter-copying context. Many libraries reproduce tables of contents of journals and route them to users, and this could be viewed as systematic, especially if the table of contents pages contain more than bibliographic data, such as abstracts of the articles. Clearly, it is more systematic if the library then permits the user to return the reproductions of table of contents pages with items marked to be copied by the library for the user. Publishers are likely to view such copying as systematic absent a license to do so. Systematic single copying also could include selective dissemination of information, as when librarians watch for all articles, chapters, etc., dealing with a particular subject and then reproduce copies of those items without a specific request from the user. Section 108(e) provides an exemption that permits libraries to reproduce an entire work or a substantial portion thereof if certain conditions are met. First, the library must conduct a reasonable investigation to determine that a copy cannot be obtained at a fair price. The legislative history indicates that normally this would require consulting commonly known U.S. 
trade sources such as wholesalers, retailers, jobbers, etc., contacting the publisher or author, if known, or using an authorized reproducing service, i.e., one that has permission from the copyright owner to reproduce the entire work. Unlike section 108(c), however, this section even requires a library to search for a copy of the work on the used book market. If such an investigation turns up no copy of a work at a fair price, then the library may reproduce a copy of it for a user. Then the three requirements from section 108(d) also must be met: (1) the copy must become the property of the user; (2) the library must have no notice that the copy will be used for other than scholarship, research, teaching, etc.; and (3) the library must provide the user with the Register's warning in advance of providing the copy. Interlibrary loan (ILL) is a time-honored tradition in libraries, but it too reflects the values conflict. To librarians, the evolution of interlibrary loan makes a great deal of sense: from the days of lending the original work, with its accompanying mailing costs, wear and tear on the original volume, etc., to utilizing modern technology to reproduce the small portions of these works that a researcher might request through ILL. Each year there are still hundreds of thousands of loans of original works through ILL. But few libraries today lend a bound periodical volume when instead they can reproduce a copy of the one article needed and provide it through interlibrary loan. To librarians, the transaction is essentially the same. When the original volume is loaned to a user, the patron gets the work. When the work is photocopied, the patron still gets the needed work. Instead of focusing on the end use made of the work by the patron, a publisher or other copyright holder objects to the intermediate step of copying. To librarians, the jump from providing the original work to sending photocopies to satisfy ILL requests was a small one.
Publishers view it quite differently. Several years ago publishers began to be particularly concerned about the use of fax technology, rather than the photocopier, to provide copies, even though the photocopier is certainly also a technological copying method. This objection included interlibrary loan as well as classic document delivery. Somehow, using the Ariel system for providing interlibrary loan copies, even though it is based on fax technology, seemed threatening. In the broadest sense, interlibrary loan (ILL) is a type of document delivery. Traditionally, however, ILL is a library-to-library transaction, although the newer ILL systems that provide copies directly to end-users blur this distinction somewhat. Publishers have claimed that ILL using digital technologies is systematic copying, but the legislative history disagrees. Publishers have also suggested that if the library charges for an ILL transaction, the fee charged creates a commercial advantage for the library. Librarians disagree. Most libraries that charge for ILL transactions simply use the fees to cover a small portion of the copying, mailing, and staff costs. Very few libraries conduct ILL operations even to achieve cost recovery, much less to make a profit; thus, it is unlikely that a court would view ILL fees as providing any commercial advantage to the library. ILL is permitted under the statutory proviso that states that "[n]othing in this clause prevents a library or archives from participating in interlibrary arrangements that do not have, as their purpose or effect, that the library or archives receiving such copies or phonorecords for distribution does so in such aggregate quantities as to substitute for a subscription to or purchase of such work." The CONTU interlibrary loan guidelines then specify what constitutes such aggregate quantities as to substitute for a subscription to or purchase of a work. The guidelines state that each year a borrowing library may make five requests from a periodical title going back over the most recent five years (60 months), but the guidelines take no position on materials older than five years. If the library owns the title but the issue is missing from its collection, or if the title is on order, the library does not count the ILL copy in its suggestion of five.
If the work is not a periodical, the library may make five requests per year for the entire life of the copyright. The borrowing library must maintain records for three calendar years and must certify that each request conforms to the guidelines. The lending library's responsibility is to require a certification that the request conforms to the guidelines and not to fill requests that clearly violate them. There is no record-keeping responsibility on the lending library, however. As libraries have been forced to cancel expensive journal titles because of escalating costs, many are relying on both ILL and document delivery to provide access to materials for their users. In fact, today academic libraries are often members of the Copyright Clearance Center and directly pay royalties for ILL copies beyond the suggestion of five or, as an alternative, obtain the needed copies from a document delivery service. Libraries may determine that the better alternative is to order copies of articles beyond the five permitted in the guidelines from an authorized document delivery service. The decision as to which route to pursue is based most often on the need for speed, the charges of the document delivery service, and the like. Because libraries now use both commercial and noncommercial document delivery as well as traditional ILL, these concepts often are blurred since the end result for the requesting library is the same: the user receives copies of requested materials, and royalties are paid for copies in excess of the ILL guidelines.
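The CONTU bookkeeping described above reduces to a few mechanical rules for the borrowing library. The following sketch shows one way that logic could be encoded; the function name, parameters, and structure are our own illustration, not part of the guidelines themselves.

```python
from datetime import date

def request_needs_royalty(requests_this_year: int,
                          publication_date: date,
                          request_date: date,
                          owned_but_missing: bool = False,
                          on_order: bool = False) -> bool:
    """Hypothetical sketch of the CONTU "suggestion of five" for periodicals.

    The guidelines cover only articles from the most recent five years
    (60 months) and take no position on older material. Copies of titles
    the library owns (but is missing) or has on order are not counted
    against the suggestion of five at all.
    """
    months_old = ((request_date.year - publication_date.year) * 12
                  + request_date.month - publication_date.month)
    if months_old > 60:
        return False  # guidelines are silent on materials older than five years
    if owned_but_missing or on_order:
        return False  # copy is not counted in the suggestion of five
    # The sixth and later requests from the same title in a calendar year
    # exceed the suggestion, triggering royalties or document delivery.
    return requests_this_year >= 5
```

A library that tracked, per periodical title and calendar year, how many guideline-covered requests it had already made could use a check like this to decide when to pay Copyright Clearance Center royalties or route the request to an authorized document delivery service instead.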
More than once during the Conference on Fair Use, publishers' representatives stated that they were determined to eliminate ILL in the digital environment. Clearly, there is a fear of lost sales on the part of copyright entrepreneurs, which certainly reflects a core value of publishers. If ILL were eliminated, however, librarians believe that serious scholarship and research would be severely hurt, which implicates a core librarian value. Publishers then proposed that for digital works the CONTU guidelines be inapplicable or, at a minimum, turned around so that the suggestion of five would apply to the lending rather than to the borrowing library. Not surprisingly, this was not acceptable to librarians, since all of the ILL philosophy, and indeed the CONTU guidelines themselves, place the burden on the requesting, i.e., borrowing, library. Recent license agreements for electronic journals have led to this outcome, however. Some licenses for full-text journals limit the number of times that the licensee may lend to other libraries from that journal, which puts the burden for record keeping on the lending library rather than on the borrowing library. This is additional evidence of the values conflict between librarians and publishers.

E. ELECTRONIC COPIES FOR USERS

Section 108, like the remainder of the Copyright Act, was said to be technology neutral when it was enacted, and librarians believe that they may provide digital copies to users as well as photocopies if they follow the remainder of the requirements in the Act. Publishers and producers have often stated that libraries are not permitted to provide digitized copies under the library exemption. They have also said that photocopying, and not scanning, was what was intended in the 1976 Act. Certainly, in 1976, the known reproduction technology was photocopying, but Congress intentionally wanted to encompass new technology as it developed.
Some publishers' concerns about digital copying may relate back to the fear of loss of control of their works mentioned earlier. Given how important ILL is in satisfying the librarians' core value of information to the people, it is easy to understand how upsetting it was for librarians to hear from publishers during the Conference on Fair Use that they wanted to eliminate ILL in the digital environment. One could argue that adding the word "digital" to the preservation subsections means that it is also contemplated for the other subsections; or one could make exactly the opposite argument. The matter has not been litigated. Clearly, if a library is going to use digital means to provide copies for users, it may not retain the digital image for reuse. Instead, it must treat scanning just as it did photocopying, i.e., as a "pass-through" activity. Prior to enactment of the DMCA, the Association of American Publishers stated its express disapproval of scanning as a method of providing copies of documents. As stated earlier, to librarians, the move from paper photocopies to digital copies for document delivery and ILL was an insignificant technological change. To a copyright holder, the jump was significant because of both the loss of control and the fact that copies reproduced from the digital copy are perfect copies. Librarians, however, simply look at it as providing information. Copy quality for textual material is immaterial to a librarian as long as the copy is readable, i.e., usable.

F. ELECTRONIC RESERVES

Publishers and librarians have disagreed quite vigorously over electronic reserves. To date, however, there has been no litigation, nor even a reported cease and desist letter, over electronic reserves. Traditionally, library reserve collections contained materials such as restricted circulation collections of original volumes, journals, etc.
After the photocopier arrived in libraries, libraries quickly adopted photocopying to reproduce copies of articles, book chapters and the like for the reserve collection so that the original work would not be removed from the general collection. In 1982, the American Library Association developed its Model Policy with guidelines or suggestions for reserves. This policy recognized that libraries should be able to reproduce articles, chapters, etc., for reserve under conditions similar to those in the Guidelines on Multiple Copying for Classroom Use. There are also restrictions, such as use for a single semester without permission, and a requirement to balance the number of copies a library makes against the needs of the class, what else is assigned for the time period, and how quickly it must be read. There has been no litigation over the ALA Model Policy and, in fact, little comment about the policy from the publishing community, despite the fact that these were not negotiated guidelines. This began to change when libraries started implementing electronic reserve systems. Converting photocopy reserves to electronic format makes absolute sense to librarians - they simply see no difference between photocopying articles, book chapters and the like for traditional reserves and digitizing these same works for electronic reserves. The same guidelines can apply: it is simply the delivery of the material that is different. But a core value - information to the people - is still being followed. Publishers see a considerable difference and have objected to the creation of electronic reserve collections, thus following one of their own core values: loss of revenue and perhaps loss of control cannot be tolerated without reasonable compensation. The whole point of electronic reserves is to make the material available to users from outside the premises of the library.
E-reserves have the added benefits of freeing shelf space in usually cramped reserve rooms, preventing deterioration of photocopies (which often have to be recopied several times in order to keep a usable copy in the reserve collection), preventing theft of the reserve item, making record keeping easier, and facilitating the tracking of use, all of which makes them very attractive to librarians. Some libraries request permission for every item placed on electronic reserve and pay royalties when requested. Even when a library seeks permission and is willing to pay royalties, however, some publishers still refuse to allow their works to be included in an electronic reserve collection. Because the permission process for e-reserves is so cumbersome and publishers sometimes steadfastly refuse to grant permission, some libraries do not even seek permission to create e-reserves, and others that do seek permission find the process very difficult to manage. Conflicting values are readily evident in this area. During the CONFU process, a working group tried to develop electronic reserve guidelines. The draft guidelines were rejected by content providers, who found them too lenient, and by librarians, who thought them too restrictive. Based on the ALA Model Policy, the guidelines would have allowed libraries to digitize materials for electronic reserve collections under conditions similar to those permitted in the ALA Model Policy. Libraries at academic institutions could have made digitized materials for reserve available over the campus network, and students could read the materials on the screen, print a copy, or download a copy. Access to the material, however, would have been restricted to students enrolled in a course offered by the institution. The guidelines also suggested that the e-reserve system alert users that they were not to distribute the offered materials further electronically.
In order to avoid creating undue demand for the digital copies of works, the guidelines suggested that the library catalog the items only at the same level of specificity as it cataloged other items in the online catalog. For most libraries this would have meant cataloging at the journal title level and not at the individual article level. Thus, the reserve items would have to be listed under the faculty member's name, the course number, and the course name rather than directly under the author and title of the article. Finally, the guidelines would have permitted a library to retain scanned images after the first academic term of use if the items were not available to users via the system but instead were simply retained while the library and the faculty member determined whether the items were needed again; the library would then begin the permission request process. If the items were to be used in a subsequent term, the library could move them back to an accessible part of the server and avoid having to re-digitize the work. One could argue that neither group was ready to deal with guidelines for electronic reserves. The fact that both groups found the guidelines unacceptable - owners because they were too permissive, and users because they were too restrictive - could lead one to conclude that the guidelines achieved the appropriate balance. Nonetheless, the guidelines did not become part of the final CONFU Report. Nor has there been any litigation involving any of the huge number of academic libraries that have implemented electronic reserve collections.

G. PRACTICES IN PARTICULAR TYPES OF LIBRARIES

There are some library practices that are not as standard as those already discussed, but which are prevalent in specific types of libraries: digitizing slide collections in art, architecture, botany and medical libraries, and audio streaming of sound recordings in music libraries. These practices also reflect the values conflict.
Art, architecture, and other types of libraries that maintain slide collections have been engaged in projects to digitize their collections to facilitate use of the slides for teaching and research. The central question they face is whether the digitization of slides is a fair use or whether it infringes the reproduction and distribution rights of the copyright holders in the photographs. The values of copyright holders and librarians clearly conflict here. The sources of the slides in a library collection vary. For example, some of the slides in the collection are purchased, commercially produced slides, whereas others are taken by faculty members on visits to distant sites, museums and the like. Still other slides are reproduced from photographs in books. Reproducing faculty-produced slides in digital form is less likely to be problematic, since faculty members donate the slides to the library with the intention that students and faculty in the institution will use them. Further, obtaining permission from the faculty member to convert the slide to digital format should be relatively easy. If the slides are commercially produced, however, the copyright owner is not likely to agree that digitizing the slides is a fair use. Some university attorneys posit that if the slide is available for purchase in digital format, the library should acquire the digital copy from the copyright holder. If it is not available, then they believe that digitizing the slides for use in the classroom and for students to consult during the semester is a permissible fair use. Whether digitizing library slide collections is fair use or copyright infringement may actually depend on whether the digitized slides are high-quality images that can be used to reproduce the original or are merely thumbnail images used as a catalog of the slide collection. Many art and architecture librarians certainly believe the latter is a fair use. A recent case, Kelly v.
Arriba Soft Corp., held that a visual search engine's use of copyrighted images, though prima facie an infringement, may be justified as a fair use. Based on this case, it may be argued that a library in a nonprofit educational institution that digitizes its slide collection for use as a catalog is protected by fair use. Some librarians argue that digitizing any slides for use in nonprofit educational institutions should be a fair use, but case law on this issue does not exist. Often it is impossible for the library to determine the source of an individual image, since slides from all sources have been mixed together. Further, librarians have frequently removed the cardboard folder from around the slide in an effort to save space in cramped slide collections. Most librarians do agree that the slides should not generally be available on the web but should be available only to students enrolled in a class in which the slides are to be used. Slides often are available on a web server only for the semester and then are removed. Public libraries that house large slide collections are also engaged in digitization projects or are considering them. It is less clear whether their activity would be fair use, as the purpose is not to support a particular course in a nonprofit educational institution but to permit broader access to the slides. As more museums such as the Smithsonian, the Getty, and the Metropolitan Museum of Art digitize their slide collections and make them available, it may be less necessary for libraries to digitize their own collections. Instead, faculty and students may be able to rely on the publicly available collections, and faculty members may simply link to those slides for displaying them in class and making them available to their students for consultation throughout the class term. Music libraries in educational institutions have long housed reserve collections consisting of audiotapes reproduced from sound recordings.
Libraries produced the tapes from library copies of sound recordings at the request of individual instructors. Students then were assigned to come into the music library to listen to the copy of the recording. Most often the library even made multiple copies of the tapes to meet the needs of the class assigned to listen to the recordings. Whether this was infringement under the statute is uncertain, but it was never litigated. The Guidelines on the Educational Use of Music certainly do not mention this broad practice. Their only mention of reproducing sound recordings is the statement that copies of small portions of sound recordings could be made for the purpose of constructing aural exercises. Today, many of these libraries are using streaming audio to provide the same access that the tapes provided. Does streaming audio make a copy or multiple copies of works? And if so, does it constitute copyright infringement? The Music Library Association has taken the official position that it does not, since students are permitted to listen to the work but not to make copies. Copyright holders likely feel that this is an infringement of their reproduction and perhaps even performance rights, but there has been no litigation over this issue.

H. COOPERATIVE COLLECTION DEVELOPMENT

Cooperative collection development is another traditional library practice that has been influenced by the digital environment. Cooperative collection development has existed for years and, in conjunction with ILL, can benefit both libraries and publishers. To libraries, cooperative purchasing can make it possible to acquire materials that ordinarily would be outside the budget capabilities of the individual library. Thus, it furthers the value of information to the people. To publishers, cooperative collection development may mean that one or more copies of expensive sets are sold to one library in the cooperative that would not have been sold to any library on its own.
On the other hand, publishers could take the position that cooperation represents multiple lost sales that could have been made to each institution in the cooperative. For digital materials, in order to obtain better licensing terms, libraries have banded together to sign joint licenses for electronic journals and the like. The only impact on copyright is that this forces libraries either to participate in the cooperative licensing or to forego access. This would not have been true in the analog world, where ILL within the CONTU guidelines would provide some access, and members of a consortium could borrow items from each other, including photocopies of articles, when adhering to the CONTU guidelines. Now, however, some electronic journal licenses restrict the ability of the library to use the journal to satisfy ILL requests. This license restriction certainly reflects conflicting values: libraries desire to make materials widely available within the confines of the CONTU guidelines, while publishers want the license to restrict access to individuals within the institution.

IV. ACCESS TO ELECTRONIC PRODUCTS

Libraries have been signing licenses for many years - this is not new in the digital age. Many traditional printed works come with license agreements, but the use of licensed products has been increasing with the advent of works in electronic format. Traditionally, these have been negotiated licenses that benefit both content providers and users, since libraries often can obtain terms that make the works available under conditions that meet the specific needs of their users, and publishers are able to bargain for price and other conditions that benefit them. Section 108(f)(4) states that nothing affects any contractual obligations incurred by a library when it obtains a copy of a work in its collection. Thus, libraries are bound by the license agreements they sign.
Today, however, publishers are insisting on more "take it or leave it" licenses, which threaten access for users and thus conflict with a core value of librarians. Also, license agreements now often have very restrictive terms, such as limiting access to persons either physically located on the campus or who have access through the campus domain name. This does not work well for institutions that have a large number of off-campus students either in distance learning courses or in more traditional residency and internship programs. Restrictions on using licensed products for interlibrary loan have already been mentioned. The Uniform Computer Information Transactions Act (UCITA), currently under discussion for passage in the individual states, if passed, likely will make the licensing issue even more difficult for libraries as the ability to negotiate may be subsumed into standard licensing terms for many products. There are many questions regarding licenses that libraries sign, which could then be governed by UCITA. An important question is who is authorized to "sign" a click-on license for a library that will commit it to the terms of the agreement. It is easy to say that the library director and his authorized agents have signature authority, but consider law firm libraries. If a partner in the firm clicks on a license, is it not likely that the firm (and specifically the library) will be bound by that license? While book and journal purchases most often have to go through the library in a law firm, when a user is on the Internet, it is unlikely that she will remember not to click on a license for a product that she wants to use immediately. Another unanswered question is whether a library that has signed a license agreement for access to an electronic product has done anything that signs away the fair use rights or privileges of library users. Is there a fair use right of access? 
The consequences of moving to a pay-per-view system may indeed mean that although users have fair use rights of access, publishers can require waiver of these rights.

B. INDEXING AND ABSTRACTING DIGITAL WORKS

The scholarly community has long relied on bibliographic information and compilations of bibliographic citations first to identify and then to provide access to scholarly and scientific literature. Bibliographies were published by libraries, scholars and organizations. Periodical indexes were also published by third parties that often, but not always, were commercial in nature. Thus, libraries, scholars, and copyright holders could be assured that such indexing and abstracting services were being produced to serve the scholarly community, to facilitate the use of copyrighted works, and to make sure that anyone interested had some way to identify whether the information existed in a published format and where it might be found. The development of full-text journals in electronic format offered yet greater access. Not only would the work be indexed and abstracted by a third party, but the full text of the work could then be retrieved from a large database or from subscriptions to online journals. Beginning in the mid-1990s, the publishing community began to consider how to identify and track these digital works using some sort of digital identifier. The idea of unique identifiers is not new to the electronic environment. For years libraries have relied on International Standard Book Numbers (ISBNs) and International Standard Serial Numbers (ISSNs) to identify books and serials. The ISBN is a unique 10-digit number assigned to books prior to publication that helps track them through the publication and sales process. Begun in Great Britain, this system is now used worldwide for books and other monographic works. The number stays with the book or serial regardless of whether the publisher sells the copyright in the work, goes out of business, etc.
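The 10-digit ISBN mentioned above is self-validating: its final character is a check digit (the standard weighted-sum scheme, a well-known fact about ISBN-10 rather than anything stated in this article). A minimal sketch of that validation:

```python
def isbn10_is_valid(isbn: str) -> bool:
    """Validate an ISBN-10 check digit.

    Standard ISBN-10 scheme: multiply the ten digits by weights 10 down
    to 1; the total must be divisible by 11. The final character may be
    'X', which represents a check value of 10. Hyphens and spaces are
    ignored. Illustrative helper; the function name is ours.
    """
    digits = isbn.replace("-", "").replace(" ", "")
    if len(digits) != 10:
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), digits):
        if ch == "X" and weight == 1:
            value = 10          # 'X' is only valid as the check digit
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += weight * value
    return total % 11 == 0
```

This built-in redundancy is part of why ISBNs work so well for the business communications among publishers, bookstores, and libraries discussed below: a mistyped number is usually caught immediately.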
Each version of a book gets a unique ISBN so there is no confusion among the hardback version, the paperback, and foreign language translations. In the United States the numbers are assigned by R.R. Bowker, and in other countries by a national book numbering agency. The administration of the international system is handled by the International ISBN Agency in Berlin. The use of ISBNs has assisted business communications among publishers, bookstores and libraries. Further, libraries use ISBNs and ISSNs to manage their own internal processes, such as serials check-in and the like. Now content providers want to create a new identification system for the electronic world. The Association of American Publishers created a committee in 1994 to design a system that would protect copyright while enabling electronic transactions. The result was a study that recommended guidelines on an industry-wide standard identifier, which would assist in controlling electronic transactions and other operations. The goal was to design a system of persistent identifiers that would be interoperable between publishers and their clients and that would serve as the basis of a rights and permissions management system for publishers. The committee recognized problems with Internet addresses and uniform resource locators (URLs), which are not object identifiers but denote only the location of objects. There are significant problems with URLs; for example, URLs disappear, and they do not specify content but simply location on a server, which can change. Publishers propose that the "digital object identifier" (DOI) replace the URL for digital works. The goal of DOIs is to create identifiers that would stay with the object throughout its life regardless of whether the publisher sells the copyright in the work, merges with another publisher or goes out of business. Further, the DOI would stay the same regardless of the work's location on the web.
The DOI consists of two parts: a prefix that contains directory information and the registration number. The DOI both specifies the location of the publisher's website where the material resides and identifies the material to the public. An example of a typical DOI is 10.1000/123456789. The portion before the slash is the prefix, which is administered by the DOI Foundation, a not-for-profit organization. Publishers purchase these prefixes. The part after the slash, the suffix, is a unique character string that identifies the specific digital object. The number is currently limited to 128 characters, with at least 40 possible characters (letters, symbols, numbers) for each place in the DOI code. DOIs are basically dumb numbers, with any necessary intelligence carried in the metadata. The DOI Foundation manages the registration system. Publishers purchase each unique prefix from the Foundation for $1000, and eventually there will also be an annual fee for each DOI a publisher registers on the DOI server. The fee will support maintenance of the DOI server. DOIs can be used not only for textual digital objects but also for music, video, and the like. The annual cost will increase as publishers move from using DOIs to identify large works, such as books, to using DOIs to identify increasingly smaller works, such as individual images in a collection, encyclopedia entries, etc. At first blush, all of this appears to be a great benefit to everyone - simply substitute the DOI for other unique identifiers, such as URLs or ISBNs. In fact, with a persistent identifier, some of the problems with URLs disappear. Certainly, DOIs will be used to track copyright management information and transactions, but there are other uses that may threaten the neutrality of the bibliographic compilations and indexing and abstracting services that libraries and researchers enjoy today.
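The two-part structure just described can be made concrete with a short parsing sketch. The example DOI is the one used above; the "10." directory-code check reflects the standard DOI prefix convention, and the function name is our own illustration.

```python
def split_doi(doi: str) -> tuple:
    """Split a DOI into its publisher prefix and item suffix.

    Everything before the first '/' is the prefix (the directory code
    and publisher number, e.g. '10.1000', purchased from the DOI
    Foundation); everything after it is the publisher-assigned suffix
    identifying the specific digital object.
    """
    prefix, _, suffix = doi.partition("/")
    if not prefix.startswith("10.") or not suffix:
        raise ValueError("not a well-formed DOI: %r" % doi)
    return prefix, suffix
```

For example, `split_doi("10.1000/123456789")` yields the prefix `10.1000` and the suffix `123456789`. Because resolution goes through the registry rather than the string itself, the same DOI keeps working even when the object moves to a different server, which is exactly the persistence URLs lack.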
Bibliographic data are the current identifiers that most scholars use to identify resources needed for research, to locate these materials in libraries or on the web, and then to cite the works used so other scholars can find them. Bibliographic data are factual in nature, and thus each individual bibliographic record is not eligible for copyright protection. On the other hand, collections of bibliographic citations that do not represent a total universe of data may be eligible for copyright protection. Even though content often is copyrighted and publishers claim proprietary rights in the content, the abstracting and indexing of these works has tended to be separate. Indexes traditionally have been published by third-party nonprofit and for-profit entities. For example, the H.W. Wilson Company publishes most of the indexes of periodical literature, including the Index to Legal Periodicals and the Reader's Guide to Periodical Literature. These indexes are proprietary products, the originality of which may be found in the arrangement or indexing (from the creation of the subject headings), but the ownership of the copyrights in these publications is not in the hands of content owners but rather in a third-party indexer. The health sciences Index Medicus and its later electronic descendant MedLine were created by a government third party, the National Library of Medicine. Thus, control of the indexing and abstracting services, and the access to journal literature that they offer, lies not in the hands of the publishers who created the content and own the copyright in the indexed journals but rather in the hands of third parties. This is not necessarily true with the DOI system or with indexing and abstracting systems that will utilize DOIs. Publishers are licensing each other to index and use their DOIs, so that what is provided is basically hypertext.
Publishers will not allow other users into their systems, so they are providing not only the content that is licensed for use but now are creating the indexing to the content, which also will be licensed. Thus, researchers may be locked out of the tools needed even to determine whether certain content exists. In November 1999, 12 major scientific and technical publishers announced a "collaborative, innovative, market-driven reference-linking initiative" touted as having the potential to change the way scientists use the Internet to conduct online research. Initially, it was proposed, approximately three million articles from thousands of journals will be linked, followed by one-half million more linked each year thereafter. The publishers believe that this will enhance browsing and connecting between logically related articles with just a few clicks. The reference-linking service will be run from a central facility and managed by an elected Board, and will operate in cooperation with the DOI Foundation. It will contain a limited set of metadata, allowing the journal content and links to remain distributed at publishers' sites. Each publisher will set its own access standards, determining what content is available to a researcher following a link, such as access to full text or only to an abstract by subscription, document delivery or pay-per-view. This should be of great concern to librarians, since scholars will not be able even to determine whether a particular paper or article exists unless they have a license to use the publishers' system or access through a library license that provides access. There will no longer be publications of abstracts and indexes by third-party publishers, as the linking of DOIs among publishers will subsume this activity. This could potentially change the nature of scholarly research, and significantly impede research and scholarship.
Not only will licensing control access, but the same publisher will control information about the existence of the work and any summary of it. Libraries and library organizations were not involved in the development of the DOI system or in plans for its implementation. In May 1999 there was an announcement, however, that OCLC had joined the DOI Foundation. The Foundation also approached the Coalition for Networked Information to work with it to help increase the general understanding on the part of the library community and to make recommendations on how it can be more useful to the entire bibliographic community.
V. THE WEB AND INTERNET SERVICES
As librarians use the web to obtain information, they will be increasingly governed by licensing agreements, even as libraries also use the web to provide and share information. Librarians see this as a new way to make information available to people, and libraries as institutions have something important to contribute. Treating the information universe as property is problematic. Librarians see information as something that should be shared. Information is not information until it is disseminated and used. The often-repeated statement that information is power is not really true. Information is not power until that information is used. Libraries were the first consumers of books and other materials, and supplied the public with information. Now that information has become increasingly valuable in this post-industrial society, the production of information products has become a big business. Producers and vendors have more control over information in electronic form than in analog form because it is held in a central source and is instantly available upon demand. Thus, electronic technology and its corporate owners may hold consumers hostage as they never could in the pre-digital age. Concomitantly, the Internet is democratizing the distribution, publishing, and consumption of information.
The information industry fears easy access will lead to loss of control and will threaten not only their copyrights but also the financial investment in the development of these products. Librarians worry that fair use and free access are threatened by this more stringent regulation on the part of copyright holders. To librarians, the Internet holds great promise for making materials available in ways never before envisioned. No longer must a researcher visit a particular library in a remote location to use unique publications held only by that library. By putting these works on the web, scholars all over the world will have access to them from their homes and offices. This corresponds with a librarian core value of information to the people. Copyright holders, on the other hand, see this same activity as a threat to their economic health. Such a direct conflict in values explains why librarians and content providers have such a difficult time talking to one another about copyright and the availability of copyrighted works in libraries and archives. Not only do libraries provide access to the Internet, they are providing Internet content. A huge number of libraries now have a homepage. A 1996 survey of public and academic libraries showed that 62% had homepages. Some libraries also answer reference questions online. The Internet gives libraries the opportunity to expand their public relations and promote their services. Digitization permits libraries to present their content along with sound and graphics and to reach a wider audience than just the local area. The professional careers and status of librarians are based on their core values: information to the people, access to information, and the value of the public domain. Questions about whether libraries will exist in the future strike fear in the hearts of librarians because they view what they do as eminently valuable to society.
Statements such as "there is no fair use in the digital environment" also threaten these values. Because librarians' core values are altruistic in nature, they often believe that their hearts are good and true: what librarians do, they do for the public good. Therefore, any use made of a copyrighted work for a user should be a fair use. The primary goal librarians seek is to help people and provide them with the appropriate information they need and want, not to make money. Librarians also want to provide quality service to users. This makes it hard for librarians to accept that some of their activities may infringe when it comes to using copyrighted works to further these noble purposes. There simply is a disconnect between the public-spirited goals of librarians and the private ownership notions of copyright. The owners of copyrighted works understandably want to market their works in ways that maximize profits. The public good that publishers and producers of copyrighted works produce is economic growth, which many librarians view as overt commercialism. In addition to this conflict in core values, librarians, publishers, and producers do not always understand copyright. Both sides misstate the law and make overstatements about the horrors that await in the digital world. Copyright holders overstate the law with positions such as the claim that there is no fair use in digital works. To wit, warnings appearing in some printed works that "no copying of this work whatsoever is permitted" are simply wrong. Such warnings should be accurate and, if they are used at all, should include a statement about fair use. Are publishers trying to scare users into behavior that they want to control regardless of what the law permits? Or do they really misunderstand the law? Another recent example is found in the letter that the AAP sent to provosts of universities across the country.
The letter purported to explain copyright law and what universities should do to educate their communities about copyright. It pointedly failed to mention fair use, however, or that many activities in nonprofit educational institutions may be fair use. Librarians often claim that all of their activities are covered under fair use rather than recognizing that the section 108 library exemption is the proper argument. Like faculty and other users, they often say "I am a fair person, and I want to use the work, ergo, fair use." Frequently, librarians use the term "exempted under various sections of the Act" when they really are referring to the library exemption or the classroom exemption. Then, when librarians are talking to copyright lawyers and copyright holders, their misuse of terms causes considerable confusion. Additional confusion occurs when librarians fail to differentiate among the various types of copyrighted works. It is critical for publishers and producers to know whether a work is a motion picture, a nondramatic literary work, or a graphic work. To a librarian, all of these works convey information, so they are "informational works." To the extent that the Copyright Act recognizes the differences among types of works, this causes problems for librarians. In addition to misunderstanding the law, librarians as well as publishers and producers have been guilty of overstating problems and the horrors they will encounter if the law is changed. Both have used the debates over the digital environment to try to expand their rights or positions. Librarians seek to extend fair use even beyond that permitted in the analog world, while publishers attempt to reduce or eliminate fair use. The DMCA, however, continues to provide exemptions for libraries. For example, libraries are exempted from the prohibition against circumventing technological measures if the reason they do so is to determine whether to acquire the protected work. 
An unusual part of the DMCA is found in section 1204, which states that nonprofit libraries, archives, and educational institutions are exempted from the criminal provisions for anti-circumvention or removal of copyright management information. Subsection (b) states that neither the large fines nor federal prison terms will be assessed against libraries. Note that the library will not go to jail - the statute does not say “librarian” but “libraries.” So, where does this recognition of the values conflict lead us? The values of librarians and publishers/producers often conflict. That conflict shapes the debate about the proper role of fair use guidelines and about amendments to the Act to facilitate movement into the digital environment for creators, publishers and users of copyrighted works. Is there any way to compromise and reach common ground? Perhaps, but only if both groups can avoid overstatements and fear tactics. If this happens, then perhaps it will be possible to return to the days when these groups actually talked with each other about how to accomplish each side's goals without unduly hampering the other's. It will take a great deal of effort on both sides to return to this win/win strategy.
Lord Frederic Leighton (1830-1896) was an English painter and sculptor, well known for his mythological and historical paintings. He was born in Scarborough to a family in the import and export business. Because his family was well-off, he had a good education and training. At the age of 24, while he was in Florence, he painted the procession of Cimabue's Madonna through the Borgo Allegri. This historical painting was later bought by Queen Victoria. His paintings made him famous in London, and he was elected President of the Royal Academy in 1878. In 1896 he was created Baron Leighton of Stretton, the only English artist to be honored with such a title. He could not enjoy the barony for long, however: he died the day after the title was conferred, making his the shortest-lived peerage in British history. He remained unmarried throughout his life. His house in Holland Park is now the Leighton House Museum, which holds a number of his drawings, paintings, and sculptures and offers a close look at Leighton's life and work. Although he did not live long as a peer, he lived fully as a painter and sculptor.
RIO DE JANEIRO (June 18, 2012) – Ending fossil fuel subsidies would save governments nearly $1 trillion while also improving environmental and economic conditions worldwide, according to a report from the Natural Resources Defense Council and other environmental and social advocacy groups. At a press conference at the Rio+20 Earth Summit in Rio de Janeiro, NRDC international climate policy director Jake Schmidt made the following statement: “The only beneficiaries of fossil fuel subsidies are oil, gas and coal companies that are raking in record profits at the expense of the rest of us. “Instead of subsidizing well-established corporations that destroy our planet, governments ought to be doing more to help support and develop more clean, renewable energy that can actually help our planet, reduce our energy consumption and revive our economies.” Based on government data from around the world, the new report finds that ending fossil fuel subsidies would:
- Save governments and taxpayers $775 billion each year.
- Reduce global carbon dioxide emissions by 6 percent by 2020.
- Reduce global energy demand by 5 percent by 2020.
- Not hurt the poor (if the right policies are adopted), since the vast majority of subsidies mainly benefit only the richest segments of the population.
NRDC created the report with partners Oil Change International, Vasudha Foundation (India), Greenovation Hub (China), and Heinrich Böll Stiftung (Germany).
Mr. Yadav spends 60% of his salary on consumable items and 50% of the remaining on clothes and transport. He saves the remaining amount. If his savings at the end of the year were Rs.48,456, how much per month would he have spent on clothes and transport? Subhash purchased a refrigerator on the terms that he is required to pay a Rs.1,500 cash down payment, followed by Rs.1,020 at the end of the first year, Rs.1,003 at the end of the second year and Rs.990 at the end of the third year. Interest is charged at the rate of 10% per annum. Calculate the cash price: An 8-litre cylinder contains a mixture of oxygen and nitrogen, the volume of oxygen being 16% of the total volume. A few litres of the mixture are released and an equal amount of nitrogen is added. Then the same amount of the mixture as before is released and replaced by nitrogen for a second time. As a result, the oxygen content becomes 9% of the total volume. How many litres of mixture are released each time? Two equal sums of money are lent at the same time at 8% and 7% per annum simple interest. The former is recovered 6 months earlier than the latter, and the amount in each case is Rs.2,560. The sum and the times for which the sums of money are lent out are: Rs.2,000, 3.5 years and 4 years; Rs.1,500, 3.5 years and 4 years; Rs.2,000, 4 years and 5.5 years; Rs.3,000, 4 years and 4.5 years. A scooter costs Rs.25,000 when it is brand new. At the end of each year, its value is only 80% of what it was at the beginning of the year. What is the value of the scooter at the end of 3 years? The population of a city increases at a rate of 4% per annum. There is an additional increase of 1% due to the influx of job seekers. The % increase in the population after 2 years is therefore: The capital of a company is made up of 50,000 preferred shares with a dividend of 20% and 20,000 common shares, the par value of each type of share being Rs.10.
The company had a total profit of Rs.1,80,000, out of which Rs.30,000 was kept in a reserve fund and the remainder distributed to shareholders. Find the dividend per cent to the common shareholders: In what proportion must water be added to spirit to gain 20% by selling it at the cost price? 2 : 5; 1 : 5; 3 : 5; 4 : 5. An article is listed at Rs.65; a customer bought this article for Rs.56.16 and got two successive discounts, of which one is 10%. The other rate of discount allowed by the shopkeeper was: The population of a city increases at a rate of 4% per annum. There is an additional increase of 1% in the population due to the influx of job seekers. The % increase in the population after 2 years is: The population of a bacteria culture doubles every 2 minutes. Approximately how many minutes will it take for the population to grow from 1,000 to 500,000 bacteria? The length and width of a rectangular garden were each increased by 20%; what would be the per cent increase in the area of the garden? The weight of an empty bottle is 20% of the weight of the bottle when filled with some liquid. Some of the liquid has been removed. Then the bottle, along with the remaining liquid, weighed half of the original weight. What fractional part of the liquid has been removed? None of these. The difference between compound interest and simple interest on a certain amount of money at 5% per annum for 2 years is Rs.15. Find the sum: A shopkeeper sells a pair of sunglasses at a profit of 25%. If he had bought it at 25% less and sold it for Rs.10 less, then he would have gained 40%. Determine the cost price of the pair of glasses: A company blends two varieties of tea from two different tea gardens, one variety costing Rs.20 per kg and the other Rs.25 per kg, in the ratio 5 : 4. It sells the blended tea at Rs.23 per kg. Find the profit per cent: No profit, no loss. In a class, 40% of the boys is the same as ½ of the girls, and there are 20 girls.
The total number of students in the class is: A sum of money invested at compound interest amounts in 3 years to Rs.2,400 and in 4 years to Rs.2,520. The interest rate per annum is: Sumit lent some money to Mohit at 5% per annum simple interest. Mohit lent the entire amount to Birju on the same day at 8½% per annum. In this transaction, after a year Mohit earned a profit of Rs.350. Find the sum of money lent by Sumit to Mohit: None of these. 40% of the students in a college play basketball, 34% of the students play tennis, and the number of students who play both games is 234. The number of students who play neither basketball nor tennis is 52%. Determine the student population in the college: A trader marks his goods at such a price that he can deduct 15% for cash and yet make a 20% profit. Find the marked price of an item which cost him Rs.90: Rs.135 11/13; Rs.105 3/13; Rs.127 1/17; Rs.95 1/21. At what price should I buy a share, the value of which is Rs.100, paying a dividend of 8%, so that my yield is 11%? There are five boxes in a cargo hold. The weight of the first box is 200 kg, and the weight of the second box is 20% higher than the weight of the third box, whose weight is 25% higher than that of the first box. The fourth box, at 350 kg, is 30% heavier than the fifth box. The difference in average weight between the four heaviest boxes and the four lightest boxes is: If a% of x is equal to b% of y, then c% of y is what % of x? A person had deposited Rs.13,200 in a bank which pays 14% interest. He withdraws the money and invests in Rs.100 stock at Rs.110, which pays a dividend of 15%. How much does he gain or lose?
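Several of these problems reduce to chained percentage arithmetic. As a check on two of them (Mr. Yadav's savings and the scooter's depreciation), a quick sketch; the variable names are ours:

```python
# Mr. Yadav: 60% of salary goes to consumables, half of the remaining
# 40% (i.e. 20% of salary) to clothes and transport, and the other 20%
# is saved. Monthly savings therefore equal the monthly spend on
# clothes and transport.
annual_savings = 48456
saving_fraction = (1 - 0.60) * 0.50           # 0.20 of salary is saved
monthly_salary = annual_savings / 12 / saving_fraction
clothes_transport = monthly_salary * (1 - 0.60) * 0.50
print(round(clothes_transport))               # 4038, same as monthly savings

# Scooter: value falls to 80% of its start-of-year value each year,
# so after 3 years it is worth 25,000 * 0.8^3.
scooter_value = 25000 * 0.80 ** 3
print(round(scooter_value))                   # 12800
```

The symmetry in the first problem (clothes-and-transport spend equals savings) is why the monthly answer, Rs.4,038, can be read directly from the annual savings divided by 12.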
On the night of July 4, 1776, the Second Continental Congress, consisting of representatives from the 13 British colonies in North America, already engaged in conducting a war of independence from colonial bondage and creating a new independent government, completed the document formalizing their “Declaration of Independence” from Great Britain. After it was completed and signed, it was read aloud to the people who gathered in front of the Continental Congress building in Philadelphia. This document is credited as the first in history to proclaim the principle of popular sovereignty as the basis of a state system, rejecting the theory of the divine origin of authority that was fashionable for governments at that time. It claimed the right to revolt against and overthrow a despotic government, and proclaimed the basic ideas of democracy: equality and the inalienable rights of “Life, Liberty, and the pursuit of Happiness” for all people. The Declaration of Independence became a challenge and a danger to the 56 colonists who signed it: five were shot by the British for treason; nine died from wounds received during the War of Independence. Many lost their families and property. Two of the primary authors of this document, Thomas Jefferson and John Adams, coincidentally died on the Fourth of July 50 years after they signed the Declaration. John Adams, who became the second president of the U.S. and once predicted that the Declaration would be the nation’s “salvation,” felt it should be celebrated in a big way: with parades, concerts, baseball games, competitions and bonfires. He insisted that this holiday should be for everyone and that each citizen of the United States should participate in the celebration. Americans followed his advice gladly: American housewives traditionally fill their picnic tables with steaks, burgers, potato salad, hot dogs, chips, corn and beans. There are also regional differences in celebrating Independence Day.
For example, in Lititz, Pennsylvania, the whole winter is dedicated to manufacturing candles for a candle festival on Independence Day. In Seward, Alaska, celebrants hike to the top of Mount Marathon. In Tecumseh, Nebraska, two hundred flags are hung at the courthouse in honor of Tecumseh’s citizens who have served in the armed forces. At Depositphotos, Independence Day is one of our favorite holidays. We start preparing for it beforehand, just like the citizens of Lititz. Our preparation may not include the manufacture of candles or even involve buying picnic food. We carefully select the best photographs, illustrations and vector images to include in the Independence Day lightbox. Please check it out, and we’ll go get some fireworks while you browse!
HOUSE CALL WITH DR. SANJAY GUPTA Autism and Childhood Vaccines Linked?; How to Get More Sleep; Kids Coping with Parents Having Cancer; One Man Losing Weight and Making Life Changes Aired March 8, 2008 - 08:30 ET THIS IS A RUSH TRANSCRIPT. THIS COPY MAY NOT BE IN ITS FINAL FORM AND MAY BE UPDATED. SANJAY GUPTA, CNN HOST: Thanks, guys. This is HOUSE CALL. We're making the rounds this morning of some of the intriguing medical stories of the week. First up, autism and vaccines and a possible link in one child. So what does it all really mean? Sleep, we all need it, but most of us don't get enough. So what can you really do about it? And kids coping with mom or dad having cancer. How getting some tools to work with makes things a little bit easier. Plus, a man needing to lose some weight sets a goal and is making some lifelong changes. We start, though, with a girl at the center of a landmark vaccination lawsuit. In an unprecedented move, the Department of Health and Human Services says childhood vaccines may have contributed to symptoms of autism in a nine-year-old Georgia girl. Now, the report did not find a clear-cut link between autism and the vaccine, but says the vaccine possibly aggravated a preexisting and rare metabolic syndrome in the child. A lot of research indicates there is no link between vaccines and autism, but those who believe there is are seeing this case as some proof. Here's what the family had to say. (BEGIN VIDEO CLIP) JON POLING, DR., FATHER OF AUTISTIC CHILD: Hannah got very sick shortly after a series of vaccinations that she received at about 19 months old. And after that rather acute illness, she never was the same again. And the autism itself did not come on immediately. It was something that developed over months. (END VIDEO CLIP) GUPTA: Federal officials feel the family should be financially compensated. But what kind of compensation is still being negotiated.
Health officials worry this case could affect whether or not parents choose to vaccinate their children. We'll, of course, continue to bring you updates on this story as it unfolds. As many as one in every 166 children in this country is diagnosed with autism. And the number of cases continues to rise. But medical researchers still know very little about its cause. However, as doctors look into everything from genetics to the environment for a certain kind of culprit, more and more details about this complicated disease are starting to unfold. GUPTA: It is a true medical mystery, the secrets of an autistic brain. WENDY STONE, VANDERBILT KENNEDY CTR.: There is no identified single cause of autism that is universal for all children. But -- and there may never be. GUPTA: As with many mysteries of the mind, doctors point to genetics and environment as culprits. But as the mystery starts to unfold, we learned that it could be more complicated than that. The newest research shows that there is something that a child is born with that allows outside factors to wreak havoc on their little brains. More simply, these children are not necessarily born with autism, but they are born with the potential to develop it. And what exactly are those outside factors? Not sure. STONE: Before we're born, it's the mother's womb and placenta. After we're born, it's what we eat, it's what we breathe, it's what we drink. And there are so many different things out there that it's hard to pinpoint exactly what it is. GUPTA: Still, any parent of an autistic child will have theories. When Zack Couch's parents learned he had autism, his mother began to change his diet, worried he was eating something that was causing him to get worse. Some families believe that a preservative in some childhood vaccines called thimerosal is causing autism in their children. The CDC says no scientific link.
ROBERT DAVIS, DR., CDC: Now that we have the data coming in, there is no data to suggest that thimerosal or mercury (INAUDIBLE) is linked to autism. GUPTA: And what about the genetic link? Well, doctors at Vanderbilt are studying siblings of autistic children. STONE: They are at elevated risk of developing autism from birth. We can start following these children and we can identify the very earliest signs. GUPTA: Catching those early signs may help doctors get one step closer to solving the mystery. So what exactly is happening in an autistic brain? At the University of Pittsburgh, doctors are seeing what's happening inside the autistic brain. The picture here shows a normal brain on the left, an autistic brain on the right, with dramatically fewer connections lighting up. No, we still don't know what exactly causes it or even how to explain the rising rates across the United States. But every day, we're getting closer to solving the mystery of autism. (END VIDEO CLIP) GUPTA: How those sleepless nights could be putting your heart at risk. GUPTA: We're back with HOUSE CALL. Daylight Saving Time is here. And according to the National Sleep Foundation, we're tired enough without losing that extra hour. In fact, the average American worker gets only six hours and 40 minutes of sleep a night. That's about 40 minutes less than is necessary. So we asked some people about their own sleeping habits. (BEGIN VIDEO CLIP) UNIDENTIFIED MALE: Off and on, about four to six hours, maybe. UNIDENTIFIED MALE: I actually make sure I get at least seven or eight hours. UNIDENTIFIED MALE: I guess the standard people say is eight hours. And I rarely get that. UNIDENTIFIED MALE: Good night for us, I'll probably get four or five hours. UNIDENTIFIED MALE: I feel like way tired when I wake up. You know like oh, my God, you know, here we go again. UNIDENTIFIED MALE: I'm tired every day. Every night, I'm tired.
(END VIDEO CLIP) GUPTA: And the National Institutes of Health tells us about 40 million of us have sleep problems. How you sleep and how long you sleep says a lot. Let's turn to our guest, Dr. David Schulman from Emory University School of Medicine. Thanks for being here, first of all. DAVID SCHULMAN, SLEEP EXPERT: Thanks for having me. GUPTA: You know, it's interesting. So we hear that people don't get enough sleep. And I don't -- you know, full admission, I don't get enough sleep. How bad is that really? What am I at risk for? SCHULMAN: Well, there's growing evidence that chronic sleep deprivation not only affects your day-to-day functioning, interpersonal relations, how good you are at work, how efficient you are at work, your memory. But also over the long run, it has health implications. You're more likely to get into car accidents. You are probably more likely to die over the long run according to a couple of large studies. GUPTA: That's a little frightening. Now OK, let's say, I -- you know, don't sleep very well during the week, just because of schedules, but I make up for it on the weekends. SCHULMAN: Good. Well, certainly, a lot better than being chronically sleep deprived. The problem with sleep deprivation, almost like chronic alcoholism, is that over a while, you forget that you have a problem and you grow accustomed to it. So your recollection that you are deprived and catching up on weekends is a heck of a lot better than chronic deprivation every day. GUPTA: Right. What about people who snore? And I'm not saying I do or my wife, in case she's watching, but what about people who snore? SCHULMAN: Well, snoring in and of itself may or may not be a problem. But what we're concerned about is that snoring is a marker for sleep apnea. About half of folks who snore on a regular basis have some form of sleep apnea.
And we have growing evidence that sleep apnea increases the risk for high blood pressure, cholesterol problems, diabetes, heart disease, and probably death as well. GUPTA: You know, as you -- and I know this is what you do for a living, but as you look at the -- all the problems people have with sleep, are there tips that sort of come to you that really seem to work as far as people getting a better night's sleep? SCHULMAN: Well, if insomnia's the problem, that's a very common problem in America. You'd want to make sure that you have a regular dark, quiet sleeping environment. You're avoiding caffeine after about 12:00 p.m. You try to go to bed at the same time every night. You try to get rid of your stress, not that that's particularly easy. But you also want to go to a doctor if this is a problem that's persisting several days per week, or certainly several weeks per month, because it could be a sign of a more significant problem, such as sleep apnea, such as restless legs syndrome. GUPTA: Do you ever recommend medications, sleeping medications? SCHULMAN: I think it's probably reasonable to do it one or two nights a month, but if you're requiring it more often than that, I think getting an evaluation by a doctor prior to more regular use would be critical. GUPTA: This is one of the most important topics we talk about here on HOUSE CALL. So I really appreciate your time. Dr. Schulman from Emory University School of Medicine. Thank you very much. (BEGIN VIDEO CLIP) UNIDENTIFIED FEMALE: What I'm probably most afraid of is losing her, but you try not to think about that. (END VIDEO CLIP) GUPTA: The difficult choice one family faced and how their decision could help save other children with a rare birth defect. That later in the show. GUPTA: Some quick medical headlines now. First up, the manufacturers of Airborne, that's an herbal supplement that once claimed to fight off colds, plan to refund consumers who purchased the product between May 2001 and November 2007.
That's right, a class action lawsuit determined advertising for the product was deceptive. And there's no credible evidence that the ingredients help prevent colds or protect from germs. Now, people who purchased Airborne can visit airbornehealthsettlement.com for more details. Also in the news, more concern for women on hormone replacement therapy or HRT. A new study followed more than 16,000 women who took HRT for three years. The study found an increased cancer risk even after women stopped taking the hormones. However, women in the study received higher doses of HRT than currently prescribed. And experts say lower doses are safe and effective for treating menopausal symptoms like hot flashes, irritability, and vaginal dryness. Hormone therapy remains a sensitive, hot-button issue in women's health. We'll have tips to make smart decisions when it comes to HRT later on in the show. GUPTA: Welcome back to HOUSE CALL. Each year, as many as 600 babies are born with a condition called hypoplastic left heart syndrome. It's one of the most complex and rare heart defects. Without surgery, 90 percent of these babies could die before their first birthday. Now here at HOUSE CALL, we've been following one little girl and her family through a clinical trial that might bring some new hope to those tiny hearts. (BEGIN VIDEOTAPE) GUPTA (voice-over): At barely two months, Annabelle already has 60 tiny hair bows and after risky heart surgery, a better chance at survival. REBECCA BUTCHER, BABY BORN WITH HEART DEFECT: What I'm probably most afraid of is losing her, but you try not to think about that. GUPTA: Rebecca was just 16 weeks pregnant when her baby was diagnosed with a heart defect. Hypoplastic left heart syndrome. Left untreated, it would be lethal. SCOTT BRADLEY, DR., PEDIATRIC CARDIOLOGIST: The left side of the heart either forms incompletely or does not form at all.
GUPTA: The Butchers had to make heartbreaking choices: whether to end the pregnancy or commit to surgery. UNIDENTIFIED MALE: It was a big shock for us. R. BUTCHER: We decided that we would let her fight. UNIDENTIFIED MALE: Lord, we just thank you for this day. GUPTA: The Butchers would say Annabelle's birth by C-section in January was their happiest day. Within days of her birth, Annabelle is wheeled in for risky open heart surgery. With Annabelle's defect, the left side is so undeveloped, it can't pump enough blood. During the eight-hour operation, doctors insert a tube, or shunt, to reroute her circulation. It's complicated work. Her heart's the size of a walnut, her blood vessels the size of the head of a pin. The wait is agonizing for her family. R. BUTCHER: The waiting game is not fun, but I think we're all right. GUPTA: Surgery is a success. And within a week, her parents are able to hold her for the first time. Anna's next operation will be when she's four to six months old, the third when she's about 2. She'll need medications and constant care. For now, her parents dream of giving both kids as normal a life as possible. BUTCHER: To graduate from high school, to graduate from college, to see her get married, that's my hope for her, to have a completely normal life. GUPTA: And children with hypoplastic left heart syndrome often don't grow up to be competitive athletes and probably won't have the exercise capacity most people do. But after the operation, they can go on to live fairly normal lives, occasionally needing operations in their 20s or 30s if they develop arrhythmias. And we should add that at the Medical University of South Carolina, where Annabelle's operation was performed, success rates are up to 90 percent. So a lot of hope there for little Annabelle. Good luck to her and her family as well. And now a role reversal. What happens when the parent is the one who's sick?
Each year, hundreds of thousands of children under the age of 18 learn that one of their parents has cancer. Imagine that. And a new program is helping them to cope when that parent can't be around. UNIDENTIFIED MALE: My name's Dr. Jeff. JUDY FORTIN, CNN MEDICAL CORRESPONDENT (voice-over): These kids are going where children don't usually go, on a tour of the radiation department at the Erlanger Cancer Center in Chattanooga, Tennessee. UNIDENTIFIED MALE: You guys are kind of experts in this, aren't you? FORTIN: Each of these youngsters has a parent or close relative with cancer. They've joined a nationwide support group called the Children's Treehouse Foundation. This is their chance to ask questions. UNIDENTIFIED MALE: Does it hurt? UNIDENTIFIED MALE: That's a great question, does it hurt, because you sure think that something that's strong enough to stop a cancer must hurt, right? That's what you think. But it doesn't. It's sort of quiet. It's like -- almost like magic. FORTIN: If only it were magic. Today, guided by oncology nurse Janet Kramer-Mai, herself a breast cancer survivor, the group learns firsthand how tough fighting cancer can be. UNIDENTIFIED FEMALE: My mother's starting to lose her hair because she's doing chemotherapy. UNIDENTIFIED FEMALE: Does it make her sad? UNIDENTIFIED FEMALE: No. UNIDENTIFIED FEMALE: What do you think about that? UNIDENTIFIED FEMALE: It's making me kind of nervous. JANET KRAMER-MAI, RN, ERLANGER CANCER CENTER: They're given the tools that they need to cope with whatever's going on with mom, dad, grandma, grandpa, whoever's in the family. FORTIN: Patient advocate Sam Harris says cancer impacts the entire family. And he recommends parents find a support group for young children. SAM HARRIS, ERLANGER CANCER CTR.: Because it helps to relieve their fears and to know that mom and dad are being well taken care of here. FORTIN: That's what Chris Johnson was looking for when he enrolled his two sons in the program.
His wife was diagnosed with breast cancer four years ago. CHRIS JOHNSON, FATHER: It has taken a lot of pressure off of her, because it's tough enough to have cancer. FORTIN: He says sharing the news with her children was difficult, but necessary. JOHNSON: You walk a fine line. I mean, you don't want to just, you know, throw all the medical terms at them, you know, and be cold and clinical about it. But you don't want to shelter them either from the fact that, you know, this is a disease that kills people. FORTIN: The support group offers a real lesson for his 11-year-old son Hayden. HAYDEN JOHNSON, 11 YEARS OLD: I came to like what my mom goes through. FORTIN: Hayden leaves the cancer center knowing that he's not the only child with a loved one battling cancer. And that helps, at least a little. Judy Fortin, CNN, Atlanta. GUPTA: All right, Judy, thanks. You know, coming up, with all the confusion over the risks of hormone replacement therapy, Elizabeth Cohen stops by with tips to help you empower yourself when deciding on the treatment. GUPTA: Some important information for women asking about treating symptoms of menopause with hormone replacement therapy. Women are still asking: are they safe? Elizabeth Cohen is here to give us the good and bad about HRT. We've been talking about this for five years now, you and I? ELIZABETH COHEN, CNN MEDICAL CORRESPONDENT: Oh, yes, absolutely. And you know what? The "Empowered Patient" got sick of this because -- sick of the fact that there's all this information oftentimes that conflicts with one another, making it really hard for women to make the decision should they or should they not go on hormone replacement therapy. So, the "Empowered Patient" -- we called doctors and asked them to just lay it on the line. What are some good reasons for taking it and what are some bad reasons? And so, here's what they told us. They said as far as good reasons go, there are about five of them. I'm going to tell you two of them now. You have to go online to get the rest.
But severe hot flashes, if a woman is having hot flashes that are so bad she can't function, that's a good reason to consider going on hormones. Also, severe night sweats, another good reason to consider going on hormones. Sanjay, sometimes I think that women hear the news and think hormones, forget it. Absolutely not, never. That's not really the answer. The answer is to pick and choose your reasons. And are they good reasons or are they not good reasons? GUPTA: But when they came out, it made perfect sense, right? Women's hormone levels go down, and so you replace that. That makes sense. But what about the bad reasons? What have we learned now? COHEN: Right, there are some bad reasons. The reasons that we used to think were good -- we had that kind of thinking you're talking about, but that has changed over time. So here are some bad reasons for going on hormones. If you think that hormones are going to keep you forever young, that is just not true. That's been disproved over and over again. If you think that hormones are going to help you fight cancer or heart disease, and this used to be the common thinking, also not true. It actually is -- it's going to increase your risk, very likely, of getting either of those two diseases. So those are not good reasons to go on hormones. GUPTA: So a woman's watching now and she's trying to put this all together. What does she need to keep in mind before deciding whether to take them or not? COHEN: Right. If she decides to go on them, she has to do it with eyes wide open. She needs to know what all the risks are. And she needs to talk to her doctor about going on the lowest possible dose for the shortest possible time. That's really important. It's also really important to get the right doctor. You know this, Sanjay, sometimes doctors, like let's say internists (ph), might prescribe hormones occasionally. They don't necessarily know all that much about hormones.
You want to go to someone where this is the big part of their practice. GUPTA: Right. And a lot more online as you mentioned about this -- a lot of people are interested. We get so many comments on this particular topic. Elizabeth Cohen, thanks so much. GUPTA: We're back with HOUSE CALL. You know, every week, we bring you weight loss success stories, proving that whether you have 20 pounds or 120 pounds to lose, you can succeed. Today, we meet Bill McGahan, a man who turned an opportunity to connect with his teenage daughter into an amazing weight loss adventure. It's a good story. Here it is. GUPTA (voice-over): Mount Kilimanjaro, one of the highest peaks in the world. An outstanding achievement for any climber. For Bill McGahan, it was an unusual way to lose weight and also a way to spend time with his daughter. BILL MCGAHAN: Fathers who are about my age, in the mid-40s, I think sometimes struggle to find things to do with their seventh grade daughters. They're typically into things we're not interested in. GUPTA: After trolling the Internet for some ideas, dad and daughter settled on an ambitious plan. B. MCGAHAN: We decided we'd climb Mount Kilimanjaro. GUPTA: Working out three times a week, running seven miles a day and building intense lower body strength together, they scaled one of the highest peaks on the planet. Dad lost 30 pounds in the process. B. MCGAHAN: We were about 100 yards from the top. And we knew we were going to make it. And I put my arm around Sarah and I said, you know, you did it, Sarah. It even gets me a little misty now. And she said no, dad, we did it together. That's really one of -- you know, one of the great moments of my life. GUPTA: So what did it mean to Sarah? SARAH MCGAHAN, DAUGHTER OF BILL MCGAHAN: At first, I was really nervous and kind of freaked out about it. I love my dad and he's really funny. And I was glad to do it with him.
Hi Out There: Does anybody have an opinion on using software for wetland forms on handhelds using Pocket PC? I have researched Wetform, Formations, and Wetland Quickforms, but I'm not really sure which one is best or if there is anything else. Thanks... WetForm has a new Android device software product that will give you access to all 8 continental US COE Manuals and species lists. WetForm Regional Lookup will let you select the COE Region you want, then have complete access to the species lookup lists as well as the complete manual. This simple program also gives you a screen with the option of selecting 12 species, entering the cover values and instantly identifying the indicator status and the dominants based on the 50/20 rule. Cost is $18.50. This will run on your Android device, including phones, 4” screens, 7” and 10” tablets. You can get to the software by going to Google Play on your device and doing a Search on “WetForm” or “wetland delineation”. This is NOT the full version of WetForm Android, just a nice tool to use if you still prefer collecting data on paper forms. You can get to a YouTube video of the program at: I think Andrew has hit on the point that is the reality of most or a lot of the situations we consultants find ourselves in. It seems to me that as a real estate developer, you would know this more than anyone, Johnny. Sometimes it's just not worth the fight, unless you have deep pockets and are trying to impact a lot of wetlands that may or may not be jurisdictional. I'm trying to get my head around why you keep bringing up the "jurisdictional" issue anyway. A simple post about data forms becomes another one of your soap box sermons. I found a post from you in Dec. 2006 where you said that: "Though we seldom get credit for our efforts, land developers play a major role in protecting both wetlands and uplands. We choose our land purchases and plan our projects to avoid impacting high quality sensitive habitats.
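For anyone unfamiliar with the 50/20 rule mentioned above: rank the species by cover, take species in descending order until their cumulative cover exceeds 50% of the total, then add any remaining species whose individual cover is at least 20% of the total. A minimal Python sketch of that logic follows; the function name and the example species/cover values are mine for illustration, not WetForm's actual code.

```python
# Sketch of the 50/20 dominance rule used on COE data forms:
# rank species by cover, take species until cumulative cover exceeds
# 50% of the total, then add any species with >= 20% of total cover.
def dominants_50_20(cover):
    """cover: dict mapping species name -> absolute cover value."""
    total = sum(cover.values())
    ranked = sorted(cover.items(), key=lambda kv: kv[1], reverse=True)
    dominants, cumulative = [], 0.0
    for species, value in ranked:
        if cumulative < 0.5 * total:      # still building the >50% block
            dominants.append(species)
            cumulative += value
        elif value >= 0.2 * total:        # the 20% rule for the remainder
            dominants.append(species)
    return dominants

print(dominants_50_20({"Acer rubrum": 40, "Quercus alba": 30,
                       "Pinus taeda": 20, "Ilex opaca": 10}))
# -> ['Acer rubrum', 'Quercus alba', 'Pinus taeda']
```

In the example, the first two species carry the cumulative cover past 50%, and Pinus taeda qualifies separately under the 20% test; Ilex opaca, at 10% of total cover, does not.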
We design around wetlands, natural streams, and ponds to minimize and avoid impacts to the extent practicable. At the earliest stages of a project, we conduct environmental studies and begin the complex and lengthy regulatory process. We spend large sums to prepare delineations, permit applications, and mitigation." That gave me a chuckle. You talk as if you do those things out of the goodness of your heart and love for the environment. Maybe you do, but everyone else does because they have to. Also, based on your posts about jurisdiction, you seem to be like every other developer who wants to have as much land available to them as possible for development without having to go through the regulatory process. By your statement above, even if you had areas that were shown to not be jurisdictional, you would not touch them? I hope so, but I doubt it if it meant not making more money on a project. I realize you feel like it is your obligation to educate or enlighten us about the current laws regarding wetland jurisdiction. It's obvious you feel that: "Many wetland delineators are well educated in environmental science, botany, soils, etc., but often lack the legal training that is necessary to comprehend the complexities of wetlands law and protect the rights of land owners." I tend to disagree, especially in regard to the people who post on this forum. We may not be able to quote footnotes from the memorandums as you do, but we are well aware of the current state of wetland issues. Currently all we have at our disposal to determine jurisdiction is the 7-page JD form (and I've not heard anything about new forms). In my opinion these are, in the end, very subjective and, as Patrick said, the COE is the final authority as to what they consider jurisdictional or not. They are who we submit our information to for a verdict, not the courts. As long as they can support their decision then it just becomes an uphill battle trying to say otherwise.
As I said before, none of this really pertains to my work since my state has 401 certification. But as a consultant I see it as my job to protect my clients from making potentially expensive mistakes. If I worked in an area where jurisdiction was an issue and the COE rep said that all the wetlands were jurisdictional, I would advise my client to go with that verdict. Unless I could without a shadow of a doubt prove that the wetlands aren't jurisdictional (which by the way would require a full delineation with data stations on all questionable wetlands), then what is the point? I think there is a disconnect between what is legal and what is practical in your posts. For most of my clients, who own a couple of acres and want to build a house (or even a small subdivision), it doesn't matter if I think their wetland is non-jurisdictional; if I can't convince the Corps that the wetland is not jurisdictional, it is not worth it to most people to fight the Corps. The legal fees, delays and uncertainty are far too much for most people to go through. It is generally far more cost effective for people/projects to do what the Corps demands, even if the Corps is wrong. This means that the little guys will get screwed until someone with enough resources and enough at stake fights the Corps and forces them to change policy to be within the law. “...A wetland or pond that might appear to be physically isolated could be regulated as an adjacent wetland or water...” Currently, there are both judicial and regulatory definitions of “adjacency.” The Rapanos decision provides the following judicial definition: "...A wetland may not be considered “adjacent to” remote “waters of the United States” based on a mere hydrologic connection,...only those wetlands with a continuous surface connection to bodies that are ‘waters of the United States’ in their own right, so that there is no clear demarcation between the two, are “adjacent” to such waters and covered by the Act..."
The agencies have their regulatory definition, which is far more expansive but does not have the legal authority of the Rapanos definition. In the revised Memorandum on CWA Guidance to Implement the U.S. Supreme Court Decision in Rapanos, dated Dec. 2, 2008, Footnote 29, the agencies inform us that the regulatory definition of “adjacent” is different than the U.S. Supreme Court’s plurality (Rapanos) definition: "...29 While all wetlands that meet the agencies’ definitions are considered adjacent wetlands, only those adjacent wetlands that have a continuous surface connection because they directly abut the tributary (e.g., they are not separated by uplands, a berm, dike, or similar feature) are considered jurisdictional under the plurality (Rapanos) standard...." A wetland or pond that might appear to be physically isolated could be regulated as an adjacent wetland or water. You are right about the OHWM. An upland pond with no surface connection to jurisdictional wetlands or waters of the US is isolated. That rule of nots could be modified as follows: Isolated wetlands are not w/in the OHWM of a water of the US, not abutting a jurisdictional wetland or water of the US, not adjacent to a jurisdictional wetland or water of the US, and don't contribute significantly to water quality of other jurisdictional wetlands or waters of the US. I thought the Nexus test was supposed to determine if an area was isolated. Of course, the Nexus test itself seems so subjective ... any two people can draw different conclusions from the same test, it seems. In my experience, a wetland area that is found to be a wetland under the 1987 manual can be determined to be isolated if it is surrounded by non-hydric soil, with no hydrological connection (manmade ditch, natural stream, OR (get this) a saddle in the landscape in upland areas). Good luck meeting these criteria (especially with the kicker of the topo lowpoint in uplands added in there).
Said topographical lowpoint could (in high rainfall events) be used to transport a contaminant to Section 10 Waters. Now I understand how the Rocky Mountains are tied to China. I, like you, have been unable to document this in writing to supply to the client. When I have asked for said documentation, it has been referred to as "our policy". Not saying that I agree with these "policies"; it has just been my experience in the field. It does seem that the USACE does not want ANY isolated calls on the radar right now, since that draws in the EPA for final determination of isolated vs. 404. I do think an isolated wetland can certainly contain an OHWM and still be truly isolated. It was always the intention that this regulation should pass to the states, and all the states or other government entities have to do is say, "OK, we will do it". Basically, the states are chicken or find it too easy to just let the COE do it. So it isn't the COE's or the Federal government's fault that the states aren't brave enough; it is the states' fault. For example, I live in Boulder, Colorado, and the CITY of Boulder has taken primacy for wetland permitting from the COE. A CITY! So who ya gonna blame? The Feds or the unmotivated states? Sorry that this discussion is so prickly, but the COE are the ones who you should/must talk to to get a real answer. I will take a shot at it anyway in an effort to simplify. It is called the rule of nots. Isolated wetlands are not w/in the OHWM, not abutting, not adjacent, and don't contribute significantly to water quality of other wetlands or waters of the US. Now the complicated part is just a matter of discussing details with the COE. Although there is obviously a pejorative attitude about the COE from many of the contributors to this discussion, the COE personnel have always been extremely helpful and willing to explain things over and over again when I ask.
Undoubtedly others may have other experiences, but I think fear of talking to the COE is one of the biggest hurdles that delineators have to overcome. Why? Because no one likes to let their uncertainty show. That's what I think. I thank you all. I love these exchanges. Unfortunately, the only real solution to this problem is dueling pistols at 10 paces! It might be worthy of mention that this issue of regulatory limits is certainly not new. In an earlier Corps regulation from 1977, they were seriously concerned about the limits of federal jurisdiction. In fact there was an effort to avoid using the term "isolated". Some of you may remember the infamous NWP 26. This NWP authorized filling in headwaters and other waters/wetlands that were not part of a surface tributary system (in other words, isolated). They purposely avoided the term "isolated" for a reason. In the discussion of Corps jurisdiction over wetlands and waters, the reality is that the Corps' regulatory program has become a federal land use regulatory program. Just think about the many controversial permit cases that involve land uses outside of wetlands or waters. Now go back to your high school civics and remember the issues of "states' rights" and concerns to limit the powers of the federal government. Everyone wants clean water, but should it be the federal government in your backyard? You also have to remember that the government must justify itself. If the regulated public and their consultants knew what to do and had all of the answers, we wouldn't need the Corps. Well, that just won't work, and it is not just the Corps. There must always be enough confusion to require government input and answers. Give me his/her name/number or district and I will call them up and get some kind of answer - or not. I have never had a problem getting the local take on these definitions. If I could get him to tell me that, I wouldn't have to ask here. The answer may be complicated, but getting the answer is simple.
Ask your district engineer. It may indeed be different from region to region, but the one you need to ask is the one that will be dealing with your project. After you get an answer, write it down and then send an email to the engineer confirming your interpretation of what they said. The key words to get a handle on are "adjacent" and "abutting". The distance that defines "adjacent" should also be more or less defined. I have seen it vary from about 50 feet to 200 feet.
If there's one thing that can wreck a retirement faster than anything, it's huge unexpected medical expenses. According to the healthcare advocacy group the Kaiser Family Foundation, Americans spent around $253 billion on healthcare back in 1980. By the time the clock rolled over to 1990, that number had swelled to nearly $714 billion. But it gets much worse. In the latter half of this decade, Americans spent a whopping $2.3 trillion on healthcare, equating to roughly 16% of America's GDP. That's a lot of coin. But the situation is much worse for retirement investors. The average retired couple will spend around $220,000 in healthcare costs not covered by Medicare or other insurance programs. That amount of money is more than some investors have saved in their entire portfolios. But the situation doesn't have to be so dire. Given the rising costs, it makes perfect sense to include a hefty dose of healthcare stocks in your portfolio, especially for retirement investors. But how exactly should investors go about adding healthcare stocks to a portfolio? Here's one stock, one exchange-traded fund (ETF) and one mutual fund to get you started:
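As a quick sanity check of the spending figures quoted above, the jump from $253 billion in 1980 to $714 billion in 1990 works out to roughly 11% compound annual growth. A back-of-the-envelope sketch (the function name is mine; only the dollar figures come from the text):

```python
# Compound annual growth rate (CAGR) implied by the healthcare spending
# figures quoted above: $253B in 1980 growing to $714B in 1990.
def cagr(start, end, years):
    """Annualized growth rate from start value to end value over `years`."""
    return (end / start) ** (1 / years) - 1

growth_1980s = cagr(253, 714, 10)
print(f"{growth_1980s:.1%}")  # roughly 11% per year, every year, for a decade
```

At that pace, spending nearly triples every decade, which is why the later $2.3 trillion figure is no surprise.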
What was school like for you? I usually get one of two responses to this question: either general memories of intense boredom and frustration, or an enthusiastic description of one or two really good teachers who provided an inspiring, enlightening, mind-expanding experience. In the last few years, a certain number of parents and kids have come to feel that contact with inspiring and enlightening people can be achieved without kids having to put up with all the rest of it; that school is, in short, unnecessary. My husband and I have two little girls who, at ages 10 and 7, have never been to school. This naturally attracts a good deal of curiosity and comment, some positive, some not. We hear a lot of the same remarks again and again: Is that legal?? Yes, of course. In BC, homeschooled children are required to be registered as such. It's quite a simple procedure. Are you a teacher? Well, functionally, yes, but no, neither of us has a teaching certificate. Rather a high proportion of homeschool parents are, in fact, teachers or former teachers. Many report that the environment of the home school is so different that they regard their training as more of an impediment than a benefit. When you're not dealing with 25-30 learners, the enormous balancing act of helping the slow ones along and stimulating the quick ones, meanwhile engaging the reluctant and harnessing the hostile, is simply not required. Interestingly, academic studies show no significant correlation between the educational background of homeschooling parents and the test scores of their children. By the way, this is a good place to put in a word for teachers. Homeschoolers as a rule have no quarrel with teachers. My own parents are both teachers; I've seen a lot of the work that teachers do, on their own time and out of their own pockets. My feeling is that most teachers are dedicated, caring people with a very difficult job to do. I don't know any homeschoolers who would disagree with that. 
Our reservations are about the system of schooling, not the people who are doing their best within it. Aren't you afraid they'll fall behind? How do you know how they're doing? This question always makes me sad. It says a lot about the distance between people in families. My doctor knows a lot more about sickness than I do, but I don't have to take my kids to see him every day to know whether they're well or not. I am with my children every day. I see how they're doing, I know if they're struggling with an idea or a technique, I'm available to help them if they wish. "Falling behind" presupposes that learning must be done in a Proper Sequence. Schools follow a set curriculum for the same reason that suits are factory-made by a set schedule - it's the only efficient way to accomplish high-volume throughput. A suit off the rack may fit you fairly well, or if your body is non-standard in some way, it may be hopeless. A made-to-measure suit will fit you best of all. The same applies to learning. Kids like to learn. Given freedom, encouragement and access to information, they will learn as much as their minds will hold, as fast as they perceive a need to know. But they may not do it in the order you'd expect. To answer the question in a different way, homeschooled kids usually score well on standardized tests. A recent study in Washington state analyzed 2911 tests (Stanford Achievement series) written by homeschooled children. The median score for each of four years under study was at the 65th to the 68th percentile, which is significantly above the median overall (50th percentile, by definition). It must be very time-consuming. Yes and no. It nearly requires one stay-at-home parent. I am acquainted with one single mother and one two-income family who homeschool. It's working for both families, but it has required a lot from them in terms of lifestyle changes and time-juggling. 
Having said that, it's important to realize that the stay-at-home parent does not have to spend the day "teaching" in any formal sense. There are many styles of homeschooling; some families have daily lessons at a set time, others are less structured. One woman I know, who homeschooled five children (three of whom the school system had labelled "learning disabled"), had a schedule: the family did morning chores, sat down around the kitchen table for school at 10am, worked until 12 and lunch, and had the afternoon free for their own pursuits. They did this four days a week, from October through March. All her children are grown now and doing well. Our own family tends more to the unstructured end of the spectrum. We read a lot, together and individually; sometimes one child will ask to "do workbooks" - a stack of workbooks stands on the corner of my desk, and the kids quite enjoy using them from time to time. We often discuss interesting properties of numbers, or events in history, or current events, at mealtimes or in the car. We go to the library, Science World and the swimming pool. When we're at home, I am often absorbed in projects of my own and the children are busy about their own business. It is extremely rare that they ask me for something to do. An academic study, attempting to draw a correlation between the amount of structure in the home school and the test scores of the children, could find no such relationship. That is, children did equally well whether the family considered itself extremely structured, totally unstructured or in between. This is one of my favourite statistics; it shows that there are lots of "right" ways to homeschool, and families can be trusted to find what works best for them. How can you know everything they'll need to learn? You can't. And you needn't. Healthy, curious kids will take an interest in all sorts of things, many of which you will scarcely have thought about, never mind become informed about. 
This is your opportunity to learn together, and your child's opportunity to learn THE most important thing you have to teach him or her: how to find out what you need to know, how to pursue knowledge. This is more vital than any particular subject, and any subject can be used to discover it. Once your child has some experience, you'll find that she/he will forge ahead of you. Think back - many people have told me that the brightest part of their childhoods was when they got completely absorbed in some subject - dinosaurs, cars, ham radio, the middle ages, whatever - and knew more about it than any of the adults around them. My kids have delved into pirates, dinosaurs, vikings...we recently constructed a viking village from a cut-out book, quite detailed and very interesting. We followed that up with lots of reading. Sonja's interest in opera has actually led me to purchase season's tickets, something no one who knows me would ever have believed likely. In the fall when she was nine, she decided to learn her times tables, on her own, and had them all down in one month. Her Christmas present to her grandfather was to do multiplication problems for him in her head. Generally speaking, the parents' role in homeschooling is not to be the fount of all knowledge, cramming quantities of facts into blank and passive heads; we're not stuffing sausages here. Our role is to be enthusiastic and experienced learners, role models for our children, providing support and advice - and transportation to the library. Won't they have trouble going to university/getting work? This hasn't been a problem for the homeschoolers I know. A recent survey of adults who had been homeschooled showed a whopping 31% were self-employed. Many more had created jobs for themselves through their own initiative, for example apprenticing themselves to people in their fields of interest. Homeschoolers tend to be self-starters and do very well finding their own way. 
As some of us know from personal experience, university can be a terrible shock to kids right out of high school, who are used to being told what to do all the time. But homeschoolers find it much like what they've been doing all along. Some prestigious universities, such as UC Berkeley, have actively recruited homeschoolers, having found them to be excellent students. Locally (Vancouver, BC), two young men of my acquaintance, homeschooled for eight years, went to community college for one year and then transferred into UBC with no problems. As with the question of finding work, homeschoolers seem to find ways, not always the most obvious ways, to get where they want to go. You're depriving your children of necessary social contact. They're overprotected and isolated from the real world. I love this one. The only part of the "real world" that school resembles that I am aware of is prison or the military. Where else are you classified in arbitrary groups, constantly scrutinized, deprived of free speech, and subject to petty tyranny? (Anyone who says, "At work" needs to get a new job.) My kids live in the real world. They meet lots of people, are very outgoing and can talk to anyone. They also have lots of friends their own age. Our homeschool support group goes on regular outings that mix adults and children of all ages, my older girl sings with the Vancouver Bach Children's Choir and takes classes in drawing and cartooning, the younger plays piano and is on a ringette team, and both are members of a children's circus troupe. They have time for all this contact with people who share their interests because they're not in school. We tend to hear a lot of blather about "socialization". Some people seem to take it for granted that this is a process that can only occur in school, possibly requiring a critical mass of children and the supervision of a trained professional in order to take place. Poppycock.
Socialization is the means by which people learn how to behave as adults in their society. Taking virtually all the children and isolating them from adult society throughout their childhoods is just about the dumbest way of accomplishing this that anyone could have devised. There are two components to children's normal socialization: observing adult behaviour and trying out what they have observed. Today's kids have plenty of opportunity to practice their social skills on other kids, but have an extremely limited opportunity to observe adults interacting normally. Many have only one or two parents and their teachers to observe at all. They are emphatically not welcome in most places where adults spend their time. So where do they get the models to work from? Give it some thought, now - where do kids see lots and lots of adults interacting every day? Right: television! If my kids are deprived because they don't spend all day among people acting out what they've learned from The Untouchables or Days of Our Lives, amen to that. Some would advance the notion that it's necessary life training for kids to be brutalized on the schoolyard. I thoroughly disagree. No one would argue that it is OK to let a toddler wander out into traffic on the grounds that she has to learn about cars. Lots of studies show that kids who have secure environments grow into stable and secure adults; those who are brutalized often become adult victims or brutes themselves.

What are you on about? We all went to school and we're all right.

Gosh, do you really think so? I think that, as a society, we are most emphatically NOT all right, and I think that a lot of it has to do with school. We are a fragmented, narcissistic bunch, with a strong tendency to submit to the authority of "experts". We have little connection to or compassion for others, especially between generations. Old people are largely disregarded, children viewed with contempt.
It is quite fashionable to speak of children, even one's own children, even in their presence, as though they were revolting and scarcely human. The same people who talk like this often have the colossal nerve to wonder why teenagers are so bloody hostile. Hubris is like that. It's worth noting, too, that every generation of schooling takes up more and more of the child's life. Kids today spend more of their time in and about school than you did, you spent more time than your parents did, and so forth. Just over a hundred years ago, the "school year" was only a few weeks long. Now, in addition to the increasing presence of school, families are shrinking and children's access to other adults becomes less and less, while television absorbs available "free" time. If children are segregated from adult life, it is absolutely to be expected that they will become preoccupied with "peer pressure", no surprise at all that they find it difficult to take their places in adult society when the time comes, having had no experience of it, and sadly predictable that they should exclude and ignore old people, those being the very people who excluded and ignored them. Is this a healthy society? I think not.
Prostate cancer is the most commonly diagnosed malignancy in men and is thought to arise as a result of endogenous oxidative stress in the face of compromised carcinogen defenses. We tested whether carcinogen defense (phase 2) enzymes could be induced in the prostate tissues of rats after oral feeding of candidate phase 2 enzyme inducing compounds. Male F344 rats were gavage fed sulforaphane, β-naphthoflavone, curcumin, dimethyl fumarate or vehicle control over five days, and on the sixth day, prostate, liver, kidney and bladder tissues were harvested. Cytosolic enzyme activities of nicotinamide quinone oxidoreductase (NQO1), total glutathione transferase (using DCNB) and mu-class glutathione transferase (using CDNB) were determined in the treated and control animals and compared. In prostatic tissues, sulforaphane produced modest but significant increases in the enzymatic activities of NQO1, total GST and GST-mu compared to control animals. β-naphthoflavone significantly increased NQO1 and GST-mu activities, and curcumin increased total GST and GST-mu enzymatic activities. Dimethyl fumarate did not significantly increase prostatic phase 2 enzyme activity. Compared to control animals, sulforaphane also significantly induced NQO1 or total GST enzyme activity in the liver, kidney and, most significantly, in the bladder tissues. All compounds were well tolerated over the course of the gavage feedings. Orally administered compounds can modestly induce phase 2 enzyme activity in the prostate, although the significance of this degree of induction is unknown. The 4 different compounds also altered phase 2 enzyme activity to different degrees in different tissue types. Orally administered sulforaphane potently induces phase 2 enzymes in bladder tissues and should be investigated as a bladder cancer preventive agent.
The most commonly diagnosed cancer among men, prostate cancer will account for nearly 30,000 deaths in the United States in 2005 and cause countless men to suffer significant morbidity. Accumulating evidence implicates oxidative damage, possibly due to prostatic inflammation, as an important contributor to prostate carcinogenesis. Some human prostate cells appear to acquire increased susceptibility to oxidative DNA damage because they lack expression of glutathione S-transferase-π (GSTP1) due to somatically acquired methylation of deoxycytidine residues in "CpG islands" in the 5'-regulatory region of the GSTP1 gene early in prostate carcinogenesis [3-6]. GSTP1 is an important member of the class of enzymes (phase 2 enzymes) that protect cells against electrophilic compounds, including many carcinogens and oxidative species. Strategies to induce the expression and activity of phase 2 enzymes have been shown to protect against carcinogenesis in a variety of organ sites and across several species [8,9]. Since prostate cancer appears to be uniquely deficient in the phase 2 enzyme GSTP1, a rational prevention strategy might be to compensate for GSTP1 loss by global induction of phase 2 enzymes within the prostate. A number of compounds effective at inducing phase 2 enzyme activity have been identified by screening for nicotinamide quinone oxidoreductase (NQO1) enzymatic induction in the Hepa 1c1c7 cell line [10-12]. Compounds effective at inducing phase 2 enzymatic activity in Hepa 1c1c7 cells in vitro have been found to be effective at inducing the phase 2 enzyme response in vivo, and several of these compounds have also been demonstrated to protect against carcinogen-induced tumors in animal models [11,13]. However, compounds that induce NQO1 activity in liver-derived Hepa 1c1c7 cells do not always produce induction in liver cells in vivo, and can vary in their effectiveness at inducing phase 2 enzymes in different tissues [14-16].
For instance, both tert-butyl-4-hydroxyanisole (BHA) and dimethyl fumarate are effective at inducing NQO1 activity in Hepa 1c1c7 cells in vitro, but in CD-1 mice only BHA induces NQO1 activity in the liver (6-fold) and in the lung and kidney (2-fold), with no induction in the stomach and colon. Dimethyl fumarate, on the other hand, induces NQO1 enzymatic activity in the forestomach, small intestine, kidneys and lungs, but produces little change in NQO1 activity in the liver. Although we have identified compounds effective at inducing phase 2 enzymes in prostate cells in vitro, the possibility of inducing a phase 2 enzyme response in the prostate in vivo has not been tested. We selected 4 candidate phase 2 enzyme inducing agents effective in prostate cells in vitro and tested whether they could induce phase 2 enzymatic activity in the prostates of F344 rats in vivo. After 5 days of gavage feeding with each of the candidate compounds or vehicle alone, global GST activity, isozyme GST-mu and NQO1 activity were assessed in the prostate, liver, kidney and bladder tissues of male rats. Purified sulforaphane was obtained from LKT Labs (St. Paul, MN); curcumin, β-naphthoflavone (BNF), propylene glycol, dimethyl fumarate (DMF), 1,2-dichloro-4-nitrobenzene (DCNB) and 1-chloro-2,4-dinitrobenzene (CDNB) from Sigma-Aldrich Inc. (St. Louis, MO); and AIN 76A diet from Research Diets (New Brunswick, NJ). Male F344 rats were purchased from Jackson Labs (Bar Harbor, ME). Eight-week-old male F344 rats were housed in microisolator cages in groups of 2 or 3 animals per cage. Mean body weight of the rats was 188 ± 4.0 g at the start of the study, and animals had access to AIN 76A diet and water ad libitum over the duration of the study. Animals were randomly divided into 5 groups of 10 animals corresponding to each of the 4 test compounds and a control group that received vehicle alone.
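The random division of 50 animals into 5 equal treatment groups can be sketched in a few lines. This is an illustrative sketch only: the study states that animals were randomly divided but does not describe its randomization procedure, and the group labels and ID scheme below are assumptions.

```python
import random

def randomize_groups(n_animals=50,
                     groups=("sulforaphane", "curcumin",
                             "beta-naphthoflavone",
                             "dimethyl fumarate", "vehicle")):
    """Randomly assign animal IDs 1..n to equal-sized treatment groups.

    Hypothetical illustration of a simple complete randomization;
    not the study's documented procedure.
    """
    ids = list(range(1, n_animals + 1))
    random.shuffle(ids)  # random permutation of animal IDs
    size = n_animals // len(groups)
    # Slice the shuffled IDs into consecutive, equal-sized blocks.
    return {g: sorted(ids[i * size:(i + 1) * size])
            for i, g in enumerate(groups)}

assignment = randomize_groups()  # 5 groups of 10 animals each
```

Each animal lands in exactly one group, and every group receives the same number of animals, matching the 5 × 10 design described above.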
All compounds were mixed fresh each day by either dissolving or suspending them in 100 μl propylene glycol at the following doses: sulforaphane 50 mg/kg/day, curcumin 45 mg/kg/day, β-naphthoflavone 41 mg/kg/day and dimethyl fumarate 37.5 mg/kg/day. Doses were chosen that had been reported to be non-toxic and effective at inducing phase 2 enzymes in other model systems. Compounds or propylene glycol were administered in a single dose once a day by gavage at doses corrected for the body weight of each animal. The gavage feeding was carried out after the rats received isoflurane inhalation anesthesia and involved minimal trauma. Animals recovered rapidly from this agent and were observed until fully awake in their cages. All rats were monitored for infection or toxicity to prevent suffering, and there were no obvious signs of discomfort, distress, or pain over the duration of the study. One animal in the sulforaphane group died shortly after the first gavage feeding, and another died after the second feeding, both apparently due to aspiration of the dose. Necropsy did not reveal any gross abnormalities of any organs. On the morning of the sixth day, the rats were sacrificed by CO2 asphyxiation approximately 24 hours after the last dose. The rats were housed at the Animal Care Facility at the Stanford University School of Medicine in compliance with PHS Policies on Humane Care and Use of Laboratory Animals. All work was carried out under Administrative Panel on Laboratory Animal Care approved protocols. All animals were under strict veterinarian care of the Department of Comparative Medicine in compliance with all Federal and State regulations to assure proper and humane treatment. The liver, kidneys, bladder and prostate tissues were removed, weighed, snap frozen in liquid nitrogen, and stored at -80°C until processed for the enzyme assays.
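Since each dose was corrected for body weight, the per-animal amount is simple arithmetic; a minimal sketch, using the study's reported mean starting weight of 188 g and the sulforaphane dose of 50 mg/kg/day (the helper function name is ours, not from the paper):

```python
def gavage_dose_mg(body_weight_g: float, dose_mg_per_kg: float) -> float:
    """Per-animal daily dose in mg for a body-weight-corrected gavage feeding."""
    return body_weight_g / 1000.0 * dose_mg_per_kg  # g -> kg, then mg/kg

# Sulforaphane at 50 mg/kg/day for a 188 g rat:
sulforaphane_dose = gavage_dose_mg(188, 50)  # 9.4 mg, delivered in 100 ul vehicle
```

The same calculation applies to the other three compounds, with each day's dose re-computed from the animal's current body weight.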
Cytosols were prepared from the harvested tissues by homogenization in 0.25 M sucrose and centrifugation at 5000 × g for 20 minutes at 4°C. 0.2 volume of 0.1 M CaCl2 in 0.25 M sucrose was added to the supernatant and, after incubation on ice for 30 minutes, samples were centrifuged at 15,000 × g at 4°C for 20 minutes. Total glutathione transferase enzyme activity was determined using 1,2-dichloro-4-nitrobenzene (DCNB) and GST mu activity was measured using 1-chloro-2,4-dinitrobenzene (CDNB) according to the procedure of Habig et al. Cytosols (50 μl) were added to 150 μl 0.1 M phosphate buffer (pH 6.5) with 1 mM GSH, 1 mM CDNB or DCNB, and 1% BSA, mixed, and optical absorbance was read at 340 nm at 30 sec intervals over 5 minutes. Because of high specific activity, liver samples were diluted 5-fold. GST mu activity was not measured in the bladder samples. Quinone reductase activity was determined by the rate of the NADPH-dependent, menadione-coupled reduction of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide in 96-well microtiter plates as described previously [18,19,22]. Enzyme activities were normalized to total cytosolic protein measured according to the Bradford method. All assays and protein measurements were performed in triplicate. Mean enzyme specific activities for tissues from animals in each group were calculated and the fold-induction of enzyme specific activities determined by taking a ratio of log-transformed inducer-treated enzyme activities to the controls. The 95% confidence intervals for the fold-induction of enzyme specific activities of the inducer-treated animals compared to controls were calculated on log-transformed data and the results back-transformed to the fold scale. All compounds were well tolerated by the animals and there was no apparent toxicity over the duration of the study. None of the compounds affected the relative weights of the prostate, kidneys, liver or bladder after gavage feedings over 5 days.
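The fold-induction statistic described above (a ratio computed on log-transformed activities, with 95% confidence intervals back-transformed to the fold scale) can be sketched as follows. This is a simplified illustration of that kind of analysis, not the paper's exact code: the ratios are taken against the control-group mean, and the activity values are invented for the example, not data from the study.

```python
import math
import statistics

def fold_induction_ci(treated, control, z=1.96):
    """Fold-induction of mean enzyme specific activity with an
    approximate 95% CI computed on log-transformed ratios and
    back-transformed to the fold scale (a hypothetical sketch)."""
    control_mean = statistics.mean(control)
    log_ratios = [math.log(t / control_mean) for t in treated]
    mean_log = statistics.mean(log_ratios)
    se = statistics.stdev(log_ratios) / math.sqrt(len(log_ratios))
    return (math.exp(mean_log),            # fold-induction
            math.exp(mean_log - z * se),   # lower 95% bound
            math.exp(mean_log + z * se))   # upper 95% bound

# Illustrative specific activities (nmol/min/mg protein), NOT study data:
treated = [14.2, 15.1, 13.8, 16.0, 14.9]
control = [10.0, 9.6, 10.4, 10.1, 9.9]
fold, low, high = fold_induction_ci(treated, control)
```

Working on the log scale keeps the ratio symmetric (a 2-fold increase and a 2-fold decrease are equidistant from 1), which is why the interval is computed there and only then exponentiated back.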
Body weights did not differ significantly between the groups fed candidate compounds and the control animals (Table 1). However, compared to initial body weights, there was an 8% decrease in body weight in the sulforaphane treated group after 5 days (P = 0.03), and non-significant increases (1–5%) in the body weights of the other 4 groups. Sulforaphane treated animals showed significantly higher NQO1, total GST and GST-mu enzymatic activities in their prostate tissues compared to control animals (Figure 1 and Tables 2, 3 and 4), although the degree of increase was modest (1.2- to 1.8-fold). Compared to controls, β-naphthoflavone treated animals showed small but statistically significant higher levels of NQO1 activity, no differences in total GST enzymatic activity, and moderately elevated GST-mu activity in the prostate. Curcumin treated animals also displayed significantly higher total GST and GST-mu activities in prostate tissues over control levels, although, again, the differences were modest. Prostate tissues from dimethyl fumarate treated animals did not show differences in NQO1, total GST or GST-mu enzymatic activities compared to controls. The effects of sulforaphane, β-naphthoflavone, curcumin and dimethyl fumarate on phase 2 enzyme activity in the liver, kidney and bladder in many ways paralleled those observed in the prostate. Liver tissues from animals treated with sulforaphane, β-naphthoflavone, and to a lesser extent dimethyl fumarate, showed modestly higher NQO1 enzyme activity compared to control animals, while curcumin appeared to have no effect (Figure 2A and Table 2). All four compounds resulted in significantly higher total glutathione transferase enzymatic activity in the livers of treated animals compared to controls, and sulforaphane produced the greatest elevation (Figure 2B and Table 3).
Somewhat paradoxically, GST-mu activity levels in the liver did not differ significantly between animals treated with inducer compounds and controls, and were actually lower in sulforaphane-treated animals (Table 4). NQO1 enzymatic activity was also higher in the kidney tissues of the sulforaphane, β-naphthoflavone and dimethyl fumarate treated animals compared to controls, while NQO1 enzyme activities in curcumin treated animals matched those seen in controls (Figure 2A and Table 2). On the other hand, kidney levels of total GST and GST-mu enzymatic activity were no different between the 4 inducer compound treated groups and the controls, except for induction of GST-mu by curcumin (Figure 2B and Tables 3 and 4). Interestingly, NQO1 and total glutathione transferase enzymatic activities were dramatically higher in the bladder tissues of the sulforaphane treated animals compared to the controls (4.4-fold and 4.2-fold, respectively) (Figures 2A and 2B, and Tables 2 and 3). NQO1 enzyme activities in bladder tissues were also significantly increased over controls in the animals treated with β-naphthoflavone, curcumin and dimethyl fumarate, although the differences were not as marked as in the sulforaphane-treated animals. Total GST enzyme specific activities did not differ significantly from control bladder tissues for any of the three compounds. We have previously identified compounds effective at producing modest increases in NQO1 enzymatic activity in human prostate cells in vitro [18,19] and selected 4 compounds for testing whether they could produce induction of phase 2 enzyme activity in vivo. We selected sulforaphane, dimethyl fumarate and curcumin since they were among the most potent NQO1 inducing agents in prostate cells in vitro, have been reported to be monofunctional inducers (i.e.
induce phase 2 enzymes primarily), and have been administered to animals without toxicity previously [11,13,20]. β-naphthoflavone, a bifunctional (phase 1 and 2) enzyme inducing compound, was selected because of its documented ability to induce phase 2 enzyme activity in rodent tissues in vivo and for comparison to the other compounds, since it increased NQO1 activity to a lesser degree in the prostate cells in vitro. We have demonstrated that orally administered agents can produce modest increases in phase 2 enzyme activity in prostate tissues in vivo. We have shown previously that sulforaphane, curcumin, dimethyl fumarate and, to a lesser degree, β-naphthoflavone will induce modest increases in NQO1 enzymatic activity in prostate cancer cells in vitro [18,19]. Effective induction in vivo depends on candidate phase 2 enzyme inducing compounds being absorbed from the gastrointestinal tract, and those compounds or their active metabolites reaching the prostate, being absorbed from the circulation and acting in prostate cells in the context of their physiological environment. Our finding of even modest induction of phase 2 enzyme activity implies that each of these pharmacokinetic constraints can be overcome, and suggests that phase 2 enzyme induction by orally administered agents could represent a possible prostate cancer prevention strategy. However, whether the modest increases in phase 2 enzyme activity induced by sulforaphane, dimethyl fumarate, curcumin and β-naphthoflavone are sufficient to prevent prostate cancer is unknown and remains to be tested. The F344 rat will develop prostate adenocarcinoma after chronic administration of 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) and is one of the few carcinogen-induced animal models of prostate cancer.
We selected this strain of rats to test the possibility of phase 2 enzyme induction in the prostate as a prelude to future experiments designed to test whether phase 2 enzyme induction in the prostate could prevent PhIP-induced prostate cancers. The degree of increase of NQO1, total GST and GST-mu enzymatic activities in the prostate tissues we observed was modest, and lower than that reported in other model systems where phase 2 enzyme inducing compounds have been documented to prevent carcinogenesis. However, in man, prostate cancer develops over decades, raising the possibility that chronic, low-level phase 2 enzyme induction might be sufficient to prevent the disease. Furthermore, modest inductions of phase 2 enzymes (NQO1 and total GST), virtually identical to those reported in the present study, have been observed in the liver tissues of F344 rats treated with sulforaphane and sulforaphane nitrile derived from cruciferous vegetables. Cruciferous vegetables will decrease the incidence of preneoplastic lesions in the colon and liver when fed simultaneously with the carcinogen 2-amino-3-methylimidazo[4,5-f]quinoline (IQ) to F344 rats. Therefore, even relatively modest induction of phase 2 enzymatic activity can be sufficient to protect against carcinogenesis. Whether similar protection against prostatic carcinoma will occur requires further testing. The finding that consumption of cruciferous vegetables has been associated with a decreased risk of subsequent prostate cancer diagnosis, coupled with the ability of orally administered sulforaphane to induce phase 2 enzyme activity in the prostate, suggests that phase 2 enzyme induction within the prostate is a potential prostate cancer preventive strategy and sulforaphane is a candidate preventive agent [28,29]. In agreement with previous observations, we found that each compound showed differing efficacy at inducing phase 2 enzyme activity in different tissue types.
The kidney, for instance, showed little induction of the glutathione transferases, while the GSTs were readily induced in the liver, bladder and prostate. Prochaska et al. have reported that derivatives of tert-butyl-4-hydroxyanisole (BHA) varied in their efficacy of phase 2 enzyme induction, differed in the spectrum of enzymes they each induced, and differed in their effectiveness between the liver, esophagus, forestomach, colon, kidney and lung. Similarly, Spencer et al. found that in CD-1 mice, dimethyl fumarate induced NQO1 enzymatic activity in the forestomach, small intestine, kidneys and lungs, but failed to induce NQO1 activity in the liver, similar to our findings in the F344 rat. They also found that the patterns of induction of total GST, GST-mu, and NQO1 enzymatic activities differed between compounds and by tissue type. Van Lieshout et al. have also described differences in phase 2 enzyme responsiveness in the tissues of Wistar rats after treatment with oltipraz, α-tocopherol, β-carotene, and phenethyl isothiocyanate. The reasons for the differences in the responsiveness of phase 2 enzymes between tissues are currently unknown, but likely are a reflection of tissue-specific expression of transcriptional regulators or enzyme cofactors. The difference in responsiveness between tissues does have important implications for the design and interpretation of preventive intervention trials involving phase 2 enzyme induction. For instance, cancers that arise from the oral ingestion of carcinogens, such as the 9,10-dimethyl-1,2-benzanthracene rat model of breast cancer or aflatoxin-induced hepatocellular carcinomas in man, might best be prevented by oral ingestion of agents that will induce phase 2 enzymes and inactivate these carcinogens in the gut and liver [13,30,31]. However, accumulating evidence suggests that for prostate cancer, induction of phase 2 enzymes within the prostate might best protect against carcinogenesis.
No environmental carcinogens have been identified as causing human prostate cancer. Accumulating evidence implicates endogenous oxidative damage as one important contributor to prostate carcinogenesis. Prostate cancer incidence increases with age and may be related to inflammatory conditions of the prostate such as prostatitis. Androgens are a known prerequisite to prostate cancer development. Ripple et al. have demonstrated that treatment of the prostate cancer cell line LNCaP with androgens produces a burst of oxidative stress in these cells, with generation of reactive oxygen species, increased lipid peroxidation and a depletion of intracellular glutathione stores [33-35]. Furthermore, Malins et al. have described progressive alterations in DNA structure between normal, BPH and cancerous prostate tissues due to oxidative damage to the DNA template by hydroxyl free radicals [36-38]. Two genes recently identified as conferring increased risk of prostate cancer in families (RNASEL and MSR1) participate in the response to infection and inflammation [39,40]. Mice engineered to not express RNASEL, for instance, are more susceptible to overwhelming bacterial infections. Finally, most compounds thus far implicated as prostate cancer preventive agents act as potent antioxidants, including lycopene, selenium (essential to glutathione peroxidase activity), and vitamin E [42-44]. The early and near universal loss of expression of the phase 2 enzyme GSTP1 likely renders prostate cells susceptible to local oxidative damage and transformation. GSTP1 knock-out mice treated with the polycyclic aromatic hydrocarbon 7,12-dimethylbenz(a)anthracene and the tumor promoting agent 12-O-tetradecanoylphorbol-13-acetate show increased numbers and earlier onset of skin papillomas, demonstrating that loss of expression of a single GST can contribute to carcinogenesis.
Since prostate cancer arises with a long latency in the context of local oxidative damage coupled with an intrinsic defect in carcinogen defenses, local induction of phase 2 enzymatic activity, even to a modest degree, could be a promising preventive strategy. Since prostate cancer develops over decades, chronic, low-level, local induction of carcinogen defenses, possibly through diet-derived agents such as sulforaphane, could represent a modest, non-toxic intervention strategy for prevention of prostate cancer, particularly for individuals at risk for the disease. One notable finding was the significant induction of total GST and NQO1 enzymatic activities in bladder tissues of the F344 rats. Several environmental carcinogens have been linked to bladder cancer, including polyaromatic hydrocarbons in tobacco smoke and aniline dyes. Epidemiological studies have demonstrated that consumption of cruciferous vegetables is associated with a decreased risk of bladder cancer. Sulforaphane levels peak in the serum 1–2 hours after ingestion and are cleared relatively rapidly by excretion into the urine. The substantial phase 2 enzyme induction of the bladder tissues could be due to the presence of sulforaphane or its active metabolites at relatively high concentrations over prolonged time periods while they are retained in the bladder. Munday and Munday have found similar induction of NQO1 and GST activity in the bladder tissues of female Sprague-Dawley rats after oral feedings of sulforaphane and several other isothiocyanates derived from cruciferous vegetables. Together, these data strongly suggest that sulforaphane and other isothiocyanates could represent promising candidate bladder cancer preventive agents. Our study has several shortcomings. We arbitrarily selected a single daily dosing schedule based on prior studies in the literature. It is possible that other dosing schedules, perhaps different for each compound, could produce greater phase 2 enzyme induction.
In addition, measurement of phase 2 enzyme activity occurred 24 hours following the last dose of each compound. The serum half-life of sulforaphane is between 1 and 2 hours, and it is possible that measurement of phase 2 enzyme activity at times less than 24 hours would reveal greater induction of enzymatic activity. Finally, all animals were given isoflurane anesthesia at the time of gavage feeding, and the anesthesia could have altered phase 2 enzyme activity in the tissues. However, since both the inducer compound treated animals and controls were treated identically, the relative levels of phase 2 enzyme activity should not have been affected. We have demonstrated the possibility of inducing phase 2 enzymatic activity in the prostate tissues of F344 rats in vivo after oral feeding of several candidate phase 2 enzyme inducing agents. Our findings set the stage for further testing of phase 2 enzyme inducing agents in prostate cancer prevention. A first step will be to test whether phase 2 enzyme induction in the prostate will prevent prostatic cancers in animal models. If successful, additional work will be necessary to identify the phase 2 enzymes critical in cancer protection so that they can be monitored as biomarkers of effectiveness in clinical trials.
Coffee Mondays - Starting September 12th the library will be serving FREE COFFEE from 8:00 a.m. - Noon every Monday.

Matheson Library Reading Room; 11:00 a.m.; Sharing Your September 11th Story - Brent Taylor, Moderator. 9/11 was this generation's Pearl Harbor. It was truly a day of infamy in every sense of the phrase, from the initial shock, to the loss of life, to the way it changed our nation forever. Join us in the Matheson Library Reading Room to share your memories about this tragic event, and to hear the stories of others. The event is open to the public, and refreshments will be served.

Matheson Library Reading Room; 11:00 a.m.; Students and faculty/staff share their life-changing experiences from last year's service learning trip to El Salvador, then learn about the opportunities to join them as they travel to El Salvador in 2017. The event is open to the public, and refreshments will be served.

Matheson Library Research Room; 11:00 a.m.; Lessons from Abroad: Study abroad and foreign missions provide incredible learning opportunities for those who participate. Whether traveling to landmarks in great cities, visiting historic sites all over the world, serving children in orphanages, or helping build homes or clean up after disasters, going abroad leaves a lasting impact on travelers. In this year's One Book Read, Little Princes, author Conor Grennan shares how his plan to volunteer for three months in a Nepali orphanage changed the course of the rest of his life. Come to "Lessons from Abroad" to learn about the lessons local students and community members learned from their travels and share your own experiences.

Matheson Library Reading Room; 2:00 p.m.; God's Contract with Mankind. Judge J. William Howerton, a native of Paducah and a retired judge from the Kentucky Court of Appeals, explains his short course for understanding Christianity. Join us for a discussion and book signing. The event is open to the public, and refreshments will be served.
Matheson Library Reading Room; 11:00 a.m.; Human Trafficking: What Is It and How Can You Help? Human trafficking, or modern day slavery, is the second largest crime industry in the world today. According to the UN crime-fighting office, "2.4 million people across the globe are victims of human trafficking at any one time, and 80 percent of them are being exploited as sexual slaves." Zoe.org notes that 55% of human trafficking victims are female, and 26% are children. In this year's One Book Read, Little Princes by Conor Grennan, the author shares his own experiences helping to reunite trafficked youth with their families in Nepal. During this program attendees will learn about human trafficking and ways they might be able to help the victims of trafficking.

Matheson Library Reading Room; 2:00 p.m.; Native Americans of the Mississippian Culture at Wickliffe Mounds - Located in Ballard County, Kentucky, Wickliffe Mounds is the site of a historic Native American village of the Mississippian mound-building culture; it is a Kentucky Archaeological Landmark and is listed on the National Register of Historic Places. The Mounds are open to the public as a state historic site, a tourist attraction, an archaeological museum, and an educational resource. Carla Hildebrand, Park Manager, will talk about the history of Wickliffe Mounds and the opportunities it offers us today to learn about the Mound Builders who inhabited Western Kentucky. The event is open to the public, and refreshments will be served.
The EAGLE (Evolution and Assembly of Galaxies and their Environments) project is a suite of hydrodynamic simulations of the Universe. The simulations include the full range of baryonic physics, including metal-dependent gas cooling, star formation, supernovae, and black hole formation. The resolution of the simulations is sufficient to resolve the onset of the Jeans instability in galactic disks, allowing us to study the formation of galaxies in detail. At the same time, the largest calculation simulates a volume that is 100 Mpc on each side, allowing us to study galaxy formation across the full range of galaxy environments, from isolated dwarfs to rich galaxy clusters. A key philosophy of the simulations has been to use the simplest possible sub-grid models for star formation and black hole accretion, and for feedback from supernovae and AGN. Efficient feedback is achieved without hydrodynamic decoupling of particles. The small number of parameters in these models are calibrated by requiring that the simulations match key observed properties of local galaxies. Having set the parameters using the local Universe, I will show that the simulations reproduce the observed evolution of galaxy properties extremely well. The resulting universe provides us with deep insight into the formation of galaxies and black holes. In particular, we can use the simulations to understand the relationship between local galaxies and their progenitors at higher redshift, and to understand the role of interactions between galaxies and the AGN that they host. I will present an overview of some of the most important results from the project.
Swimmer's itch, also called cercarial dermatitis, appears as a skin rash caused by an allergic reaction to certain parasites that infect some birds and mammals. These microscopic parasites are released from infected snails into fresh and salt water (such as lakes, ponds, and oceans). While the parasite's preferred host is the specific bird or mammal, if the parasite comes into contact with a swimmer, it burrows into the skin, causing an allergic reaction and a rash. Swimmer's itch is found throughout the world and is more frequent during summer months. Most cases of swimmer's itch do not require medical attention.
Imagine a cyberattack that does serious damage to the U.S. power grid. The results wouldn't be pretty. The power grid is complicated, divided into sections that cover everything from a single municipal area (like New York City) to large regions (like the entire state of California). Each of those sections is controlled by a single control center, which monitors the behavior of its piece of the grid to make sure things are operating normally and makes whatever adjustments are necessary to keep the system running smoothly. But if that control center stops functioning, because of a cyberattack or for any other reason, it is no longer capable of monitoring and maintaining the grid, and that may result in severe instabilities in the system. A team of researchers from around the U.S. is working to address this and other power grid security concerns as part of the SmartAmerica Challenge, which kicked off in late 2013 to highlight U.S. research in the field of cyber-physical systems. The Smart Energy Cyber-Physical Systems (Smart Energy CPS) team is focused on using sophisticated tools to test various scenarios (and solutions) related to cybersecurity in power grids. The Smart Energy CPS team includes researchers from NC State University, Iowa State University, MITRE Corporation, National Instruments, NREL, Penn State, Scitor Corporation, UNC-Chapel Hill (UNC), and the University of Southern California (USC). Because having a single control center for each section of the grid creates a significant vulnerability, the NC State and UNC group within the Smart Energy CPS team is pursuing the idea of creating a distributed computing system that would disseminate monitoring and control functions across multiple virtual machines in a cloud computing network that overlays the grid.
"The advantage here would be that if one element of the computing system gets compromised, the other virtual machines could step in to protect the system and coordinate their efforts to keep the grid functioning," says Aranya Chakrabortty, an assistant professor of electrical engineering who is leading the project from NC State. "We are working with USC's Information Sciences Institute to test this distributed computing concept. "Our early tests indicate that the distributed computing approach would make the grid more resilient against both physical attacks and cyberattacks," Chakrabortty adds. "Our next step is to scale up the collaboration to get more detailed analyses of different types of attacks. The more we understand about our potential vulnerabilities, the better controllers we'll be able to design to protect our infrastructure." Chakrabortty says that NC State has extensive experimental resources for simulating the behavior of real-world power systems, but adds that "we're power systems experts, not cybercommunication experts. USC has a large testbed for emulating cyberattacks. By integrating our models with theirs, we can carry out more realistic scenarios." The team also includes Yufeng Xin of the Renaissance Computing Institute at UNC. The Smart Energy CPS team plans to provide preliminary findings when it participates in the SmartAmerica Expo in Washington, D.C., this summer.
The present invention relates generally to flight recording systems for aircraft and relates more specifically to a system for automatically recording engine fatigue cycles. Aircraft turbine engine manufacturers have established various service life limits for the rotating parts of an engine based primarily on the number of repeated and/or alternating fatigue-causing stress cycles undergone by the rotating parts. These fatigue or stress cycles result from transients of engine speed and temperature occurring during normal engine operation. The manufacturers have defined a cycle as a flight consisting of the usual start, takeoff, landing and shutdown. Various less usual events have been given a weight of a full cycle or a fraction of a cycle. Thus, an air start is considered to be one cycle, while a landing without engine shutdown followed by another flight, a touch-and-go landing or go-around, or an advancement of throttle beyond 65% when thrust reversing is used, is each counted as 1/6 cycle. At present, these stress cycles are tracked through log entries made by the pilot or copilot. Moreover, records are generally not kept separately for each engine of a multi-engine aircraft, resulting in unnecessary overhauls. If a system could be found that would automatically keep track of stress cycles, accuracy would improve considerably. Furthermore, if the stress cycles undergone by each engine of a multi-engine aircraft were monitored separately, unnecessary overhauls would be eliminated. It is a primary object of the present invention to provide an accurate and automatic system for monitoring and recording engine stress cycles. It is a further object of the present invention to provide a system for individually monitoring and recording the stress cycles undergone by each engine of a multi-engine aircraft.
Briefly, the aforementioned objects are achieved by providing sensors in the aircraft for sensing engine starting, engine shutdown, landing gear status, engine reversal and throttle setting. From the outputs of these sensors there are derived signal indications of when the aircraft is in flight and when the landing gear drop in flight. The occurrence of a full or fractional cycle is derived from these signal indications and from the sensor outputs. Counters for each engine are automatically incremented at the completion of a stress cycle. Other objects, features and advantages of the present invention will become apparent upon a perusal of the following detailed description of one embodiment of the present invention when taken in conjunction with the appended drawing wherein: FIG. 1 is a block diagram illustrating a display box and various sensors for the left and right engines of a two-engine aircraft. FIG. 2 is a block diagram illustrating the logic circuitry within the display box in combination with the sensors for the left engine. The circuitry for the right engine is identical. FIG. 1 depicts a display and logic box 10 responsive to various sensors for automatically counting and displaying engine stress cycles for the left and right engines of an illustrative two-engine jet aircraft. There are, for the left and right engines respectively, five-digit decimal cycle-unit displays 12 and 14 and one-digit fractional displays 16 and 18 reading in sixths of a cycle. In order to individually keep track of the stress cycles for each engine, there is a set of sensors for each. For the left engine there are fed to box 10 contacts from the starter or ignition switch 20 and the shutdown switch 22. Also a thrust reverser sensor 24 is provided to sense when the left engine is reversed. The right engine similarly has contacts from starter switch 26 and shutdown switch 28 fed to box 10 as well as the output from thrust reverser sensor 30.
Also a landing gear sensor 32 has its output, indicative of whether the landing gear are up or down, fed to box 10. Lastly, a throttle sensor 34, which may be a potentiometer with a variable resistance indicative of the throttle setting, has its output fed to box 10. Reference is now made to FIG. 2, which shows a block diagram of the contents of box 10 in combination with the various sensors, in order to explain how the sensor outputs are utilized to automatically count and display stress cycles for each engine. The circuitry for the left engine is illustrated in FIG. 2 with the understanding that the circuitry for the right engine is identical and its description would therefore involve much repetition. As previously explained, starting, takeoff, landing and shutdown count as one cycle, while starting, takeoff, landing and retakeoff without engine shutdown count only as 1/6 cycle. In the logic circuitry the occurrences of various events are stored and the occurrence of a cycle is derived from the stored signal indications. Thus the output 36 of starter switch 20 is fed to the set input 38 of a flip-flop memory 40 via one shot 42, while the output 44 of shutdown switch 22 is fed to the reset input 46 of flip-flop 40 via one shot 48. As a result the Q output 50 of flip-flop 40 provides an indication of when the engine is running, i.e., Q output 50 is digital one only when the engine has been started and hasn't subsequently been shut down. For providing an indication of when the aircraft is in flight, the output 52 of landing gear sensor 32, which is digital one when the landing gear are up, is used in connection with Q output 50. Outputs 50 and 52 are fed to AND gate 54, whose output 56 is digital one when both the engine is running and the gear are up. For providing a digital indication of when a start, takeoff and landing have sequentially occurred, another output 58 of landing gear sensor 32 is utilized in combination with output 56.
Output 58 is applied via one shot 60 to one input of AND gate 62. The other input of AND gate 62 is fed by output 56 via a delay 64. It should be understood that the pulse width outputs of each of the one shots employed are preferably of the same lengths and that the delays of elements 64 and 84 are longer than the one shot pulse widths. Thus, upon dropping of the landing gear with the plane in flight, a digital one pulse will appear at the output 66 of AND gate 62. While the output 56 of AND gate 54 will change state simultaneously with the leading edge of the aforementioned digital one pulse, this change of state is sufficiently delayed by delay 64 so as not to inhibit the appearance of the pulse at AND gate output 66. Output 66 is fed to the set input 68 of flip-flop 70. Thus upon the dropping of the landing gear in flight, flip-flop 70 will be set and its Q output 72 will be digital one. Thus, flip-flop 70 constitutes a memory element whose output 72 is digital one when there has been an engine start, a takeoff and then a landing. Output 72 is applied in parallel to a pair of AND gates 74 and 76 which respectively direct the digital one indication of output 72 to either increment units counter 12 or fractions counter 16, depending upon whether the landing is followed by a shutdown or by a re-takeoff without intermediate shutdown. In the event of engine shutdown, the other input of AND gate 74 is fed from shutdown switch output 44 via one shot 48 to gate a digital one pulse through to the output 78 of AND gate 74. For resetting flip-flop 70 subsequent to the pulse, the output 80 of one shot 48 is fed to the reset input 82 of flip-flop 70 via delay 84. In the event of retakeoff without intermediate engine shutdown, a digital one pulse appears not at the output of AND gate 74 but at the output 84 of AND gate 76. This is accomplished by feeding the landing gear sensor up-indicating output 52 to the other input of AND gate 76 via one shot 86.
AND gate 76 output 84 is applied to one input 88 of two input OR gate 90. The other input 92 of OR gate 90 is fed from circuitry yet to be described which derives a digital one pulse if the throttle exceeds 65% during thrust reverser operation. The output 94 of OR gate 90 drives a buffer amplifier 96 which in turn drives the electromechanical fractions counter 16. To mechanize a carry pulse to electromechanical units counter 12 when the fractions counter 16 goes from 5/6 to 0/6, output 94 also feeds a six digit electronic counter 98 whose overflow output 100 feeds electromechanical units counter 12 via OR gate 102. For incrementing units counter 12 in the event of a start, takeoff, landing and shutdown, AND gate 74 output 78 is fed to input 104 of three input OR gate 102. The other inputs 106 and 108 to OR gate 102 are respectively fed from circuitry yet to be described which generates a digital one pulse in the event of an air start and from the overflow output 100 of counter 98. The output 110 of OR gate 102 drives electromechanical units counter 12 via buffer amplifier 112. In order to increment units counter 12 when there is an air start, the output 114 of one shot 42, which is fed by starter switch 20, is applied to one input 116 of AND gate 118. The other input 120 of AND gate 118 is fed by the output 122 of delay 64 which provides an indication of when the aircraft is in flight. As a result, the output 122 of AND gate 118 provides a digital one pulse when an air start is attempted. Output 122 feeds input 106 of OR gate 102 for suitably incrementing counter 12. For incrementing fractions counter 16 in the event of greater than 65% throttle being used during thrust reversal, the output 124 of throttle sensor 34 is applied to an electronic comparator or limit switch 126 which provides at its output 128 a digital one indication of when 65% throttle is exceeded. Output 128 is applied via one shot 130 to one input 132 of three input AND gate 134.
Input 136 of AND gate 134 is fed from thrust reverser sensor 24 output 138 which provides a digital one indication when the engine is reversed. As an optional feature, input 140 of AND gate 134 may be fed from landing gear down-indicating output 58. Thus, AND gate 134 output 142 provides a digital one pulse when the thrust reverser is used, 65% throttle is exceeded and the landing gear are down. For incrementing the fractions counter in such event, output 142 is applied to OR gate 90 input 92. It should now be appreciated that what has been described is a completely automatic system for incrementing units counter 12 in the event of either a start, takeoff and landing cycle or an air start, and for incrementing fractions counter 16 in the event of either a retakeoff without engine shutdown or an exceeding of a predetermined throttle setting while the thrust reversers are used during landing. It should furthermore be appreciated by those skilled in the art that since the deployment of the landing gear is considered indicative of a landing, a touch and go landing or go around will be treated by the circuitry as a retakeoff without shutdown and will consequently register the proper 1/6 cycle increment. Having described one embodiment of the present invention, it should be appreciated that numerous other embodiments are possible within the spirit and scope of the invention.
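The gating described above amounts to a small per-engine state machine. As a hedged sketch (a plain-software analogue of the counting rules in FIGS. 1-2, not the patented circuitry; the class, method names, and simplified "air start = start while gear up" condition are invented for illustration), the same rules could be expressed as:

```python
from fractions import Fraction

class CycleCounter:
    """Per-engine fatigue-cycle tally: one full cycle per
    start-takeoff-landing-shutdown sequence or air start; 1/6 cycle per
    retakeoff without shutdown or >65% throttle during thrust reversal."""

    def __init__(self):
        self.cycles = Fraction(0)
        self.running = False  # analogue of flip-flop 40: started, not yet shut down
        self.landed = False   # analogue of flip-flop 70: start + takeoff + landing seen
        self.gear_up = False

    def start(self):
        if self.gear_up and not self.running:
            self.cycles += 1  # air start attempted in flight: one full cycle
        self.running = True

    def gear(self, up):
        if up and self.landed:
            self.cycles += Fraction(1, 6)  # retakeoff without intermediate shutdown
            self.landed = False
        if not up and self.gear_up and self.running:
            self.landed = True  # gear dropped in flight: a landing has occurred
        self.gear_up = up

    def reverse_throttle(self, throttle_pct):
        # called while the thrust reverser is engaged; gear must be down
        if throttle_pct > 65 and not self.gear_up:
            self.cycles += Fraction(1, 6)

    def shutdown(self):
        if self.landed:
            self.cycles += 1  # the usual start, takeoff, landing, shutdown cycle
        self.landed = False
        self.running = False

eng = CycleCounter()
eng.start(); eng.gear(up=True); eng.gear(up=False); eng.shutdown()
print(eng.cycles)  # → 1
```

Using `Fraction` makes the carry from sixths into whole cycles automatic, the role played in the patent by the six-digit counter 98 and its overflow output 100.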
Cited by:
- US7467070 (filed Oct 26, 2004; granted Dec 16, 2008), Meyer Eric S: Methods and systems for modeling stress intensity solutions for integrally stiffened panels
- US7945427 (filed Apr 18, 2008; granted May 17, 2011), The Boeing Company: Methods and systems for providing unanticipated demand predictions for maintenance
- US20060069521 * (filed Sep 30, 2004; published Mar 30, 2006), The Boeing Company: Methods and systems for analyzing structural test data
- US20060089823 * (filed Oct 26, 2004; published Apr 27, 2006), The Boeing Company: Methods and systems for modeling stress intensity solutions for integrally stiffened panels
- US20090265118 * (published Oct 22, 2009), Guenther Nicholas A: Methods and systems for providing unanticipated demand predictions for maintenance
- CN1038708C * (filed Jul 11, 1991; granted Jun 10, 1998), 空军第一研究所: Method of recording engine cycle and its apparatus
U.S. Classification: 702/34, 377/16, 244/194
Christianity in Mongolia traces its origins back to Nestorian Christians, who converted several Mongolian tribes in the 7th century. Even in the reign of the Great Khans during the height of the Mongolian Empire in the 13th century, religious tolerance was widespread and Christianity was influential. With the breakup of the Mongolian Empire in the 14th century, Christianity lost its influence and the territory of Mongolia became primarily Buddhist and Shamanist. Protestantism’s influence in Mongolia began with missionaries who came from Europe in the mid-19th century. Their influence was short-lived as communism triumphed in Mongolia in 1924. Since the fall of communism in 1990, Christianity has been expanding rapidly, from a recorded number of four believers in 1989 to somewhere in the neighborhood of 60,000 believers today. In 1994, the Norwegian Lutheran Mission (NLM) established a presence and began to work among the Mongolians. In 1997, the Finnish mission organization FLOM (Finnish Lutheran Overseas Mission) began its work in Mongolia. They have created a number of projects focusing on the economic growth and capacity building within the social and health sectors of Mongolia. Both the Norwegian and Finnish missions also assist local Lutheran congregations as part of their overall work. Two LCMS representatives came to Mongolia in 2005 in order to observe the work of the local Lutheran church congregations in Mongolia. In 2009 and 2010, an LCMS worker assisted the emerging Lutheran church in providing theological education to local church leaders. Later, assistance was also provided to the local Lutheran Church in establishing a specifically Lutheran Bible School in the northern city of Darkhan.
Diversity in the Enteric Viruses Detected in Outbreaks of Gastroenteritis from Mumbai, Western India
Abstract: Faecal specimens collected from two outbreaks of acute gastroenteritis that occurred in southern Mumbai, India in March and October 2006 were tested for seven different enteric viruses. Among the 218 specimens tested, 95 (43.6%) were positive, 73 (76.8%) for a single virus and 22 (23.2%) for multiple viruses. Single viral infections in both March and October showed a predominance of enterovirus (EV, 33.3% and 40%) and rotavirus A (RVA, 33.3% and 25%). The other viruses detected in these months were norovirus (NoV, 12.1% and 10%), rotavirus B (RVB, 12.1% and 10%), enteric adenovirus (AdV, 6.1% and 7.5%), Aichivirus (AiV, 3% and 7.5%) and human astrovirus (HAstV, 3% and 0%). Mixed viral infections were largely represented by two viruses (84.6% and 88.9%), while a small proportion showed the presence of three (7.7% and 11%) or four (7.7% and 0%) viruses in the two outbreaks. Genotyping of the viruses revealed a predominance of RVA G2P, RVB G2 (Indian Bangladeshi lineage), NoV GII.4, AdV-40, HAstV-8 and AiV B types. VP1/2A junction region based genotyping showed the presence of 11 different serotypes of EVs. Although no virus was detected in the tested water samples, examination of both water and sewage pipelines in gastroenteritis-affected localities indicated leakages and the possibility of contamination of drinking water with sewage. The coexistence of multiple enteric viruses during the two outbreaks of gastroenteritis emphasizes the need to expand such investigations to other parts of India.
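The abstract's headline proportions follow directly from the raw counts it reports; a trivial check (counts taken from the text above) confirms the rounding:

```python
# Raw counts reported in the abstract
tested = 218   # faecal specimens tested
positives = 95 # specimens positive for at least one virus
single = 73    # positive for a single virus
mixed = 22     # positive for multiple viruses

assert single + mixed == positives  # the two subgroups partition the positives

print(f"{100 * positives / tested:.1f}% positive")          # 43.6% positive
print(f"{100 * single / positives:.1f}% single infections")  # 76.8% single infections
print(f"{100 * mixed / positives:.1f}% mixed infections")    # 23.2% mixed infections
```

Note that the single- and mixed-infection percentages are fractions of the positives, not of all 218 specimens, which is why they sum to 100%.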
“The heart is like Pandora’s box, with just a crack it’s opened up to beat anew when all is lost, to run, crawl, come home…” —from HEART, “Here & There” You’ve seen the image in a hundred different places: Leonardo Da Vinci’s Vitruvian Man, the perfectly proportional specimen of mankind, his arms outstretched in both a circle—symbol of the divine—and a square—representing the physical world. Philosophers, theologians and scholars have long devoured Da Vinci’s copious notes—along with those of Vitruvius, the architect on whose ideas Da Vinci built his drawing—to unpack the measure of a man and what it means to be fully human. For Christians, the question requires a more crystallized focus: what does it mean to be fully human in light of what Christ—the only human to literally square the circle—has done? Set for release on September 3, Juno Award-winning band The City Harmonic reveals its eagerly anticipated full-length sophomore release, HEART, a full-circle progression of its critically acclaimed debut, I Have A Dream (It Feels Like Home). This album—underscored by the cinematic and communal aesthetic so prominent in all The City Harmonic’s music—shifts from the dream of what could and should be to the complexities of how fragile, broken humans follow the true humanity of Christ’s example in the world. It makes imperfect sense that these young men—vocalist/songwriter and pianist Elias Dummer, bassist Eric Fusilier, guitarist Aaron Powell and drummer Josh Vanderlaan—would look inward for this effort. After all, in the past two years, life has hit them square on. They went from working day jobs to recording and touring full time. Aaron had his first child. Josh got married. Elias had his fourth child. And on top of all the usual struggles that might accompany such huge life changes, Eric was diagnosed with cancer. In a sense, life set the context for them. 
“With all this sort of ‘real life’ stuff going on, we set out to write an album ‘on being, and becoming, human,’” Elias explains. “But in a way it’s about image bearing. By that I mean that yes, we’re human… and the Bible tells us that we’re made in the image of God and each and every human has an inherited dignity as a result. But there’s more to it than that … God made this universe around us and often refers to the cosmos as a temple. I mean, if you stop and think about the role an ‘image’ or ‘statue’ might play in a temple, you begin to see that we humans have quite a role to play. Whatever our present circumstances, the biggest challenge before us is to recognize that in Christ we’ve been given the responsibility and capacity to become like Him in a way, and as we do, we are becoming exactly the kind of humans we were meant to be from the beginning.” Co-produced by The City Harmonic and Jared Fox, HEART begins where I Have A Dream’s “Holy Wedding Day” ends, with “Here & There,” a sweeping, theatrical metaphor for the entire album. “Here I am, a finite being, juxtaposed against the eternality of Christ, who talks about living in our dying,” Elias says, “and here we are dying every day, a little further along than the day before…Whatever comes, we are not called to a life free from suffering, we are called to something bigger than ourselves. We are called to Christ.” The tracks that follow beg the question, “What does redemption of the human heart really look like?” “Praise the Lord” serves as the call to worship, an entrance to grace in the context of powerlessness. “In our culture, it can feel sometimes as though we’ve reduced grace to just some transaction between us and God,” he says. “But grace is much more than that. It’s the air we breathe — it’s by God’s grace that we awoke at all this morning. 
And sometimes, like with Paul in 2 Corinthians 12, grace is a thorn.” “If I can’t see the light through the pain, tell me how a thorn could ever be grace…” –from HEART, “Strong” “Strong,” written by Elias and Eric over Skype with Elias at a show and Eric at home, grew from a place of vulnerability and not knowing in the middle of Eric’s battle with cancer. “I wasn’t really in a place to write anything,” Eric says. “Emotionally, I felt like a child who had been let down by his parents for the first time… I just couldn’t see [my illness] as a blessing. I couldn’t connect with God. To confront that stuff and write ‘I am strong in my weakness,’ was really difficult because I couldn’t deny how I really felt…” Near the one-year anniversary of the stem cell transplant that saved his life, Eric’s cancer is in remission. He talks openly of the faith of his family and friends who, like the four friends in Luke 5 who lower their paralyzed brother through the roof to be healed by Jesus—their faith carried him when he didn’t have it in him to believe. “That’s what got me through, had no faith of my own, [when I] couldn’t process that on my own. The raw faith of my family, the guys in the band, my sister, my wife and her family… I had to rely on the faith of others…” Now, being on the other side of it, he says, “my emotions line up with what I believe. God really has been there the whole time, using this for the good. It’s a clear picture of God’s people coming together, working together… a profound experience.” “The Son of God, you calm the seas that rage inside the heart of me, the heart of God is what I need You’ve overcome the world: take heart.” – from HEART, “Take Heart” “Alive, Alive,” a percussion-driven anthem based on Psalm 23, connects HEART’s processional to the reality of what Christ has done. The song opens with a familiar self-soothing mantra, “It’s alright, it’s okay, it’s all gonna be okay” and reveals why the lie we tell ourselves is really true. 
“One of my favorite lines in that song is “live or die I’ll be alive…” Elias says. “For us, these past two years, we’ve been through the valley of the shadow of death in Psalm 23, and the reality is that apart from Christ, we are all powerless and dead. Maybe it’s time to believe the truth that there’s more to life than just what we see in front of us. We’re living resurrection.” Perhaps the most personal moment on the album is “Love Heal Me,” a song Eric brought to the table when the band met in Nashville to record. “When it came time to track the song, my voice was shaky and hard to control,” says Eric, who had lost his voice to muscle atrophy while in the hospital and hadn’t sung or played in over a year. “I was a little embarrassed about how it might sound. You can hear the weakness… but it captures an honest moment,” he says. Grittier and yet sonically proportional, HEART progresses with “Songs of Longing, Joy and Peace,” a poetic introduction to “Glory,” a straight up, liturgically-bent picture of Jesus, the savior of the world who is also the center of it. The intention here is anything but neat and tidy. “I’ve been challenged by how much we’ve been willing to reduce Christ to the cross only,” Elias says of “Glory.” “He behaves there. We don’t have to deal with what he had to say or the implications… but it’s not just a nice picture, a nice baby Jesus. He’s the center of the universe!” In addition to “Glory,” at the core of the new album is the first radio single, “City on a Hill,” which is steeped in the Beatitudes found in Matthew 5. “’A City On A Hill’ is a great partner to ‘Glory’ in that it takes the high concept view of Christ and brings it down to earth in the form of the very things he said in the Sermon on the Mount,” says Elias. 
“It also forms a natural transition into the rest of the album, calling the Church to live differently.” “Live Love,” written specifically for the ICTHUS Festival, fleshes out this ‘how then shall we live’ question in real time. “It’s a popular Christian thing to say that the Bible is God’s love letter to the world,” Elias adds, “but Paul paints us as the love letter (2 Corinthians 3:2). We can’t miss this.” “Discipleship isn’t simply adding knowledge to our lives,” Elias adds, “but it’s to become increasingly more like the human example we’ve been given – to become “little Christs,” and carry our cross through the mud and mire of a broken world in the knowledge that by the grace of God we arise brand new.” In these and all the songs on HEART, the constant juxtaposition of grace and humanity sets the strong and steady rhythm. From “1 + 1,” the waltz-like love song Elias wrote for his wife Meaghan in celebration of their 10th anniversary, to the Beatles-esque discipleship metaphor “Long Walk Home,” to “Brand New,” written on the fly in front of three thousand people at a youth conference, the album builds on the idea of living out lives as image bearers of Christ, becoming more like him. “My Jesus, I Love Thee,” completes the circle; a hymn treasure handpicked by Eric and set in a minor key that encapsulates the bittersweet of our lives here, and, simultaneously, the joy of Jesus as the center of our everything. “We switch over to a major arrangement for the final verse about heaven,” Elias adds. “It’s a fitting and familiar end, a musically symbolic way of summarizing the album as a whole.” For Eric, HEART represents something bigger than just another album. Something much bigger. “It’s always been a dream of mine to be in a band, to write/say something that matters to people in a spiritual sense,” he says. 
“But this time it was hugely important to me to get it right,” he says, “to express the raw, guttural frustration, pain and suffering I’ve experienced, and that people around the globe have experienced even more than me, and to give some kind of hope in the midst of it. I love that it’s called HEART and that it celebrates the joys and sorrows of life. It’s exactly what we needed to write… a beautiful, raw expression of life.”
Global warming could reduce how many hurricanes hit the United States, according to a new federal study that clashes with other research. The new study is the latest in a contentious scientific debate over how man-made global warming may affect the intensity and number of hurricanes. In it, researchers link warming waters, especially in the Indian and Pacific oceans, to increased vertical wind shear in the Atlantic Ocean near the United States. Wind shear — a change in wind speed or direction — makes it hard for hurricanes to form, strengthen and stay alive. So that means "global warming may decrease the likelihood of hurricanes making landfall in the United States," according to researchers at the National Oceanic and Atmospheric Administration's Miami Lab and the University of Miami. With every degree Celsius that the oceans warm, the wind shear increases by up to 10 mph, weakening storm formation, said study author Chunzai Wang, a research oceanographer at NOAA. Winds forming over the Pacific and Indian oceans have global effects, much like El Nino does, he said. Wang said he based his study on observations instead of computer models and records of landfall hurricanes through more than 100 years. His study is to be published Wednesday in Geophysical Research Letters. Critics say Wang's study is based on poor data that was rejected by scientists on the Nobel Prize-winning Intergovernmental Panel on Climate Change. They said that at times only one in 10 North Atlantic hurricanes hit the U.S. coast and the data reflect only a small percentage of storms around the globe. Hurricanes hitting land "are not a reliable record" for how hurricanes have changed, said Kevin Trenberth, climate analysis chief for the National Center for Atmospheric Research in Boulder, Colo. Trenberth is among those on the other side of a growing debate over global warming and hurricanes. Each side uses different sets of data and focuses on different details. 
One group of climate scientists has linked increases in the strongest hurricanes — just those with winds greater than 130 mph — in the past 35 years to global warming. The Intergovernmental Panel on Climate Change has said "more likely than not," manmade global warming has already increased the frequency of the most intense storms. But hurricane researchers, especially scientists at NOAA's Miami Lab, have argued that the long-term data for all hurricanes show no such trend. And Wang's new research suggests just the opposite of the view that more intense hurricanes result from global warming. The Miami faction points to a statement by an international workshop on tropical cyclones that says "no firm conclusion can be made on this point." Former National Hurricane Center Director Max Mayfield said regardless of which side turns out to be right, it only takes one storm to be deadly. So the key for residents of hurricane-prone areas, he said, is to be prepared for a storm "no matter what."
It is evident that the launch of Intel Core 2 Duo and Intel Core 2 Extreme processors has a very serious effect on the entire computer market. As we could already see, these processors set new performance records for high-end and mainstream PC systems. As a result, Intel earns the prestigious title of today's fastest x86 processor developer. Unfortunately, the AMD processors that used to be so popular among computer enthusiasts for quite some time are being pushed into the background, turning into just a good solution for inexpensive systems. To retain its sales volumes, AMD undertook an unprecedented reduction of the prices on its solutions. In other words, the new CPUs based on the Intel Core microarchitecture stimulated rapid changes in the computer market. Today we are going to track down all the changes that took place lately in order to get a clear vision of what is happening with contemporary dual-core processors. Therefore, we will not only look at CPU performance, but will also analyze other characteristics of the available solutions and try to estimate how attractive the new and old offerings from AMD and Intel are in the current situation. Before we pass on to the actual results, we suggest that you take a look at our previous articles devoted to the new Intel Core microarchitecture that led to such dramatic changes in the computer market. These articles are: We decided to include only those processor models in our today's test session that are up-to-date and available, i.e. those that are currently shipping into the market. It means that our today's result charts and diagrams will not contain any data for Socket 939 systems from AMD that have now been replaced with faster solutions for the Socket AM2 platform with DDR2 SDRAM support. Moreover, we also didn't test the Athlon 64 X2 processors with the 2MB L2 cache that have already been discontinued. 
You will also not see results for almost the entire Intel Pentium D 8XX processor line-up, which has been discontinued; at this time only the Pentium D 820 model is still shipping. The Pentium D 9XX family will be represented by only two models – the 945 and 915, which do not support Virtualization technology – because the rest of the family will stop shipping in the near future. Note that even though Intel has already stopped offering the Pentium Extreme Edition 965, we still included it in today's test session, because it is the fastest CPU with the previous-generation microarchitecture. So we ended up with systems built from the following hardware components: The tests were performed with the mainboard BIOS set up for maximum performance. Every CPU performance analysis conducted in our lab involves testing in SYSMark 2004 SE, because this benchmark is very good at revealing overall system performance under various types of typical workload. It emulates a user's work in a number of widespread applications, including multi-threaded processing, and generates several numeric indexes illustrating system performance under different work scenarios. The best results in digital content creation and processing tasks belong to the Core 2 Duo and Core 2 Extreme processors. As you can see from the diagrams above, even the mainstream Core 2 Duo E6600 model, running at 2.4GHz, outperforms all Athlon 64 X2 and Pentium D processors, including their "extreme" modifications such as the Athlon 64 FX-62 and Pentium Extreme Edition 965. As for the top Core 2 Extreme X6800, it outpaces the Pentium Extreme Edition 965 by about 45%, and the Athlon 64 FX-62 by 25%. Our test systems demonstrate similar results in typical office applications as well: the Core 2 Duo family is again faster than all competitors.
Any processor with the Core microarchitecture and a clock speed of 2.4GHz or higher is faster than any K8- or NetBurst-based solution under all types of workload. Intel Core 2 Duo processors also demonstrate impressive performance in older but still popular single-threaded synthetic benchmarks. Here we should note that the shared L2 cache allows Core-based processors to use the entire L2 cache in single-threaded applications, while Athlon 64 X2 and Pentium D processors have only half of it at their disposal. The new 3DMark06 benchmark does support multi-threading; however, the previous-generation CPUs cannot stand up to Core microarchitecture based solutions here either. The Core 2 Extreme X6800 comes out 5.4% and 7% faster than the Athlon 64 FX-62 and Pentium Extreme Edition 965 respectively. Note, however, that this relatively small performance difference is mostly determined by the graphics subsystem, which affects the overall result in this test. If we look at the CPU performance index alone, the picture is totally different. I would like to stress that the Athlon 64 FX-62 performs quite well here compared with its results in the other benchmarks: running at 2.8GHz, it is slightly faster than the Core 2 Duo E6600 at 2.4GHz. However, it cannot compete with the top-of-the-line Core 2 Extreme X6800, which turns out to be almost 20% faster. ScienceMark 2.0 makes the most of the AMD K8 microarchitecture's advantages thanks to its heavy FPU usage, as we have pointed out in previous reviews. As a result, AMD processors look quite competitive in this test, falling just a little behind Core 2 Duo models from the same price range. As for the NetBurst-based processors, their ScienceMark 2.0 results are simply poor, as you can clearly see from the diagram.
You may remember that we have always recommended the Athlon 64 processor family as the best choice for gaming, a conclusion fully justified by its significant performance advantage over Pentium 4 and Pentium D CPUs. Now the situation has changed dramatically. The new-generation Core 2 Duo and Core 2 Extreme processors top the charts in most contemporary games. The top solution from AMD, the Athlon 64 FX-62, is defeated not only by the Core 2 Extreme X6800, but even by less expensive models such as the Core 2 Duo E6700 and E6600. As for the Pentium Extreme Edition 965, this previous-generation CPU designed specifically for top gaming systems turns out to be slower than even the most junior Core 2 Duo model, the E6300. Audio and video encoding tasks are a great illustration of pure processor performance: the CPU is practically the only component that affects codec performance, as the other computer subsystems have hardly any influence on these tasks. However, despite the different type of workload we are looking at in this chapter, the situation is exactly the same. Core 2 Duo processors are far ahead of all their rivals, leaving them not a single chance. In particular, the Core 2 Extreme X6800 is about 22% faster than the Athlon 64 FX-62 in digital content encoding. The advantage of Intel's new top processor over the previous-generation Pentium Extreme Edition 965 based on the NetBurst architecture (which was actually specifically optimized for work with streaming data) is even greater, at 49%. We can hardly draw any new conclusions from the Photoshop and Premiere results: the new microarchitecture once again proves the most efficient, with the Core 2 Duo E6600 and E6700 and the Core 2 Extreme X6800 taking the top three places. The situation in WinRAR is hardly any different from what the other applications have already revealed.
The only unexpected result is the relatively high performance of the Pentium Extreme Edition 965, which can process up to four streams of data at the same time thanks to its dual-core architecture and Hyper-Threading support – a technology that will soon sink into oblivion. The new Core 2 Duo processors get a steady A+ for final rendering and professional OpenGL tasks: in both types of applications the top processor models finish far ahead of their competitors. We have already discussed overclocking in great detail in our article Intel Core 2 Duo E6300 + ASUS P5W DH Deluxe: Ideal Mainstream Platform?. Therefore, today we will mostly focus on the frequency potential of the top Core microarchitecture model – the Core 2 Extreme X6800. This processor features an unlocked clock frequency multiplier, so it can be overclocked as far as the frequency potential of the new Conroe core will allow. Today we will finally be able to find out the maximum frequency this CPU can run at stably without hitting limitations imposed by the mainboard or the chipset. Note that the Core 2 Extreme X6800 we had at our disposal features B1 core stepping; since mass-production processors have moved to B2 stepping, we would expect retail CPUs to have even higher overclocking potential. Nevertheless, the results we obtain today will give us a great starting point for further analysis. During our overclocking experiments we didn't use any special cooling solutions: all tests were run with a popular Zalman CNPS9500 LED air cooler. First of all, we decided to see how far we could go without raising the processor Vcore. The nominal Vcore for our CPU was 1.3V. Without any problems we got our CPU working stably with a 12x clock frequency multiplier, one point over the nominal.
With a higher multiplier the system lost stable operation, so further overclocking was done by raising the FSB frequency. The maximum rate our CPU worked stably at is shown in the screenshot below. So, a Core 2 Extreme processor with a nominal frequency of 2.93GHz managed to hit a 3.4GHz clock speed without any increase in core voltage. This 16% gain over the nominal speed is a fairly good result for the top model in the family, but it is definitely not the maximum. Numerous experiments suggest that the 65nm Conroe core of Core-based processors is very responsive to voltage increases. Therefore, all further tests were conducted with the processor Vcore raised to 1.475V. In this case we managed to increase the clock frequency multiplier to 13x, with the FSB frequency going as high as 277MHz. The processor frequency thus reached 3.6GHz, which is 23% above the nominal rate. We can conclude that not only the junior processor models but also the top Core 2 Duo family member offers very good overclocking potential. Note that there are reports of even more impressive Core 2 Duo overclocking results achieved with air cooling. Of course, a lot depends on the actual CPU sample; however, the performance level of an overclocked chip will be very hard for competitors to match. The great overclocking potential of Core-based processors shouldn't puzzle you. It is not only about Intel's desire to leave extra room for future processor announcements within the new family. The peculiarities of the new Core microarchitecture allow Intel to design CPUs with different peak frequencies and different thermal requirements. So, if you set aside the maximum TDP of 65/75W and use a high-quality cooling solution, Core 2 Duo overclocking may turn out to be more than fruitful.
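The arithmetic behind these figures is simple: the effective clock is the multiplier times the FSB frequency, and the gain is measured against the 2.93GHz nominal. A minimal sketch follows; note that the 283MHz FSB value for the stock-voltage run is inferred from the reported 3.4GHz result, not stated in the article:

```python
def clock_ghz(multiplier, fsb_mhz):
    """Effective core clock: multiplier x FSB frequency."""
    return multiplier * fsb_mhz / 1000.0

NOMINAL = 2.93  # Core 2 Extreme X6800 stock clock, GHz

runs = {
    "1.300 V (stock Vcore)": clock_ghz(12, 283),   # FSB inferred; ~3.4 GHz
    "1.475 V (raised Vcore)": clock_ghz(13, 277),  # ~3.6 GHz
}
for label, ghz in runs.items():
    gain = (ghz / NOMINAL - 1) * 100
    print(f"{label}: {ghz:.2f} GHz, +{gain:.0f}% over nominal")
```

Running this reproduces the 16% and 23% gains quoted above.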
It is no secret that Intel's engineers set out to design not only fast but also highly economical CPUs when they were working on the new Core microarchitecture. They began actively promoting the "performance-per-watt" concept long before the processor launch, and they expect it to become a major criterion for evaluating a processor's consumer qualities very soon. That is why it is extremely interesting to look at the practical power consumption of the new processors and compare it with that of the previous-generation CPUs based on older microarchitectures. As always, we used the S&M utility to measure maximum power consumption (you can download this utility here). We measured the current that goes through the CPU power circuitry, so the numbers given below do not take into account the efficiency of the CPU voltage regulator on the mainboard. First, we measured processor power consumption in idle mode, with the Cool'n'Quiet and Intel Enhanced SpeedStep power-saving technologies disabled. The results are quite diverse, which is probably explained by the very different processor models participating. Generally speaking, however, Core 2 Duo processors can boast the lowest power consumption in idle mode. Now let's take a look at the much more interesting results obtained when our CPUs were loaded to the full extent. The Core 2 Duo and Core 2 Extreme processors impressed us with their low power consumption; they are truly ahead of their competitors in this respect. The top Core 2 Extreme X6800, with its 2.93GHz clock speed, consumes even less power than the Pentium D 915 and Athlon 64 X2 3800+. And if we compare this CPU's power consumption with that of its direct counterparts, the Athlon 64 FX-62 and Pentium Extreme Edition 965, the difference is almost twofold.
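As a reminder of what such a measurement captures: tapping the current on the CPU supply rail gives P = V × I at the processor side of the voltage regulator, so VRM losses on the mainboard are excluded from the figures. A toy illustration with made-up values (the article does not publish its raw voltage/current readings):

```python
def cpu_power_w(vcore_v, current_a):
    """Power drawn by the CPU itself: P = V x I (VRM efficiency excluded)."""
    return vcore_v * current_a

# Hypothetical full-load reading, not a measured figure from the article
print(f"{cpu_power_w(1.3, 34.0):.1f} W under full load")
```

A mainboard-side measurement would read higher by whatever the voltage regulator dissipates.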
In other words, Intel processors on the Core microarchitecture are not only unprecedentedly fast but also impressively economical. So far they have no real competitors here. However, we have to stress that we haven't yet tested the Energy Efficient AMD processors that should go on sale fairly soon; hopefully, they will come close to the Core 2 Duo products in power consumption. We would like to conclude our analysis of the new Intel processors and their comparison with other currently available dual-core CPUs with a detailed discussion of features and parameters that are only indirectly connected with performance, but still influence the attractiveness of a product for the end user. First of all, we decided to put together an "average" performance chart for our testing participants. We calculated this parameter as the geometric mean of all normalized results obtained during this test session. Note that we have seen pretty much the same performance correlation between CPUs based on the different architectures – Intel Core, Intel NetBurst and AMD K8 – throughout, so the integral value given in this chart describes very well the average performance of our testing participants in the majority of applications. The chart once again indicates the superior performance of the new CPUs, which are far ahead of their competitors. The Athlon 64 FX-62, for instance, can only compete with the Core 2 Duo E6600, while the Pentium Extreme Edition 965 cannot catch up even with the Core 2 Duo E6400. From the performance perspective, the Core 2 Extreme X6800, Core 2 Duo E6700 and Core 2 Duo E6600 on the Intel Core microarchitecture take the first three places. However, performance is not the only thing that makes a CPU an attractive purchase for the end user. Another important consumer characteristic is the price.
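The "average performance" metric described above can be sketched as follows: normalize each benchmark score to a baseline CPU, then take the geometric mean of the ratios. All scores below are invented for illustration; the article does not publish the underlying table:

```python
from math import prod

def average_performance(scores, baseline):
    """Geometric mean of benchmark scores normalized to a baseline CPU.

    Both arguments map benchmark name -> score, where higher is better.
    The geometric mean keeps a single outlier benchmark from dominating
    the combined index, unlike a plain arithmetic mean of ratios.
    """
    ratios = [scores[name] / baseline[name] for name in baseline]
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical scores for illustration only
e6300 = {"sysmark": 200, "3dmark_cpu": 2500, "encoding": 40}
x6800 = {"sysmark": 260, "3dmark_cpu": 3000, "encoding": 50}
print(f"X6800 vs E6300: {average_performance(x6800, e6300):.2f}x")
```

With these made-up numbers the three per-benchmark ratios (1.30x, 1.20x, 1.25x) combine into a single index of about 1.25x.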
The official Core 2 Duo launch provoked serious changes in the market: prices for CPUs with other microarchitectures dropped sharply. Both Intel and AMD announced massive price reductions so that their existing products could remain in demand against the background of the remarkable Core 2 Duo and Core 2 Extreme. The new prices that take effect at the end of this month are given in the chart below. As we can see from this chart, AMD shifted its Athlon 64 X2 into the mainstream segment, and Intel repositioned the Pentium D as a value dual-core solution. To better illustrate the new pricing policies, we offer a chart showing processor prices alongside their average performance levels. As we can see, AMD and Intel did a very good job of reworking their price policies. If we disregard the image (halo) products – the Pentium Extreme Edition 965 and Athlon 64 FX-62 – all the dots on this chart fall along almost the same curve. This means that, as of today, any of the dual-core processors has a justified price-to-performance ratio: the price of a processor corresponds very well to its actual performance, no matter which dual-core CPU we consider. However, a more in-depth look at the data suggests that AMD processors are still slightly overpriced: for each AMD Athlon 64 X2 starting with the 4200+ model, there is a more expensive Core 2 Duo processor with a much higher level of performance. On the other hand, the fact that contemporary Core microarchitecture CPUs require the more expensive LGA775 platform may make up for AMD's pricing. Now that we have paid due attention to the performance and pricing of our testing participants, let's check out the "performance per watt" ratio, especially since Intel has been so enthusiastic about this particular concept. I don't think you need any additional comments here: Core 2 Duo processors combine high performance and low power consumption.
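The price-to-performance comparison amounts to dividing each CPU's price by its average performance index; the halo models then stand out immediately as points off the common curve. A sketch with hypothetical prices and indices, not the article's actual chart data:

```python
# Hypothetical (price_usd, performance_index) pairs for illustration only
cpus = {
    "Core 2 Duo E6600":   (316, 1.00),
    "Athlon 64 X2 4600+": (240, 0.80),
    "Athlon 64 FX-62":    (827, 1.00),  # "image" (halo) flagship
    "Pentium XE 965":     (999, 0.85),  # "image" (halo) flagship
}

# Sort by dollars per unit of performance: the halo parts fall off the curve
for name, (price, perf) in sorted(cpus.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:20s} ${price:4d}  perf {perf:.2f}  $/perf {price / perf:7.0f}")
```

With these invented figures the two flagships show dollar-per-performance ratios two to four times those of the mainstream parts, which is exactly the pattern the chart illustrates.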
Pentium D processors, on the contrary, offer low performance and relatively high power consumption. The Athlon 64 X2 CPUs sit in an intermediate position, although we wouldn't regard this as a final verdict just yet: the picture will most likely change when the Energy Efficient AMD processors reach the mass market. In conclusion, I would like to offer one more chart showing the performance-per-GHz ratio. It is not just a pretty picture: it shows the empirical correlation between the frequencies of CPUs from different processor families that provide similar levels of performance. Thus, to achieve the performance level of a Core 2 Duo processor, an AMD Athlon 64 has to work at about a 20% higher clock speed, and a Pentium D has to run at about a 90% higher clock speed. This ratio allows us not only to estimate the approximate relative performance of contemporary CPUs, but also to get a better idea of what new models will eventually appear in the Core 2 Duo and Athlon 64 X2 processor families. In fact, we have already drawn the most important conclusions about the performance, pricing and power consumption of contemporary dual-core CPUs in the previous section. I would only like to say once again that Intel really did a great job with these Core microarchitecture processors. They offer remarkable performance and hence take over the leadership in the high-end processor market. However, the Conroe launch doesn't at all mean that AMD has lost this battle. The company managed to restructure its line-up so that it still fits the market in these circumstances. Yes, AMD let Intel take the high-end market, but it adjusted the prices on its products in such a way that they remain very attractive mainstream offers.
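The per-GHz ratio can be turned into a quick rule-of-thumb calculator: multiply a Core 2 Duo clock by the article's empirical factors (roughly 1.2 for Athlon 64 X2, 1.9 for Pentium D) to estimate the clock a competitor would need for similar performance. A sketch under those approximate factors:

```python
# Approximate clock multipliers read off the performance-per-GHz chart
PER_GHZ_FACTOR = {"Athlon 64 X2": 1.2, "Pentium D": 1.9}

def equivalent_clock_ghz(core2_ghz, family):
    """Clock `family` would need to roughly match a Core 2 Duo at core2_ghz."""
    return core2_ghz * PER_GHZ_FACTOR[family]

for family in PER_GHZ_FACTOR:
    ghz = equivalent_clock_ghz(2.4, family)  # vs. a 2.4 GHz Core 2 Duo E6600
    print(f"{family}: ~{ghz:.2f} GHz to match a 2.4 GHz Core 2 Duo")
```

The roughly 4.6GHz a Pentium D would need is far beyond any clock speed NetBurst parts actually shipped at, which is precisely the point the chart makes.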
Keeping in mind the upcoming transition of all Athlon 64 X2 processors into the Energy Efficient category and the reduction of their TDP to 65W, AMD's solutions may also prove competitive from the power consumption standpoint. However, this needs to be verified, which we will do in our upcoming tests. I also have to say that the Pentium D processor family, which has lost quite a few of its members, will still remain in demand: despite the high heat dissipation and power consumption of these models, they remain a good choice for inexpensive systems, so you shouldn't give up on them. In other words, although the launch of Core 2 Duo and Core 2 Extreme certainly marks a new stage in the evolution of x86 processors, it is still too early to proclaim Intel's complete victory over AMD. Both companies will continue to coexist in the market, although AMD will have to temporarily cede the high-performance segment and focus mostly on mainstream and value solutions.
You’ve seen many posts in the past about the value of certification, so I won’t get into it here. Bottom line, as far as I am concerned, certifications are one of the best ways to differentiate yourself from other developers. If you think about how much time you invest in the technologies you work with and learn about, wouldn’t you want a way to show others that investment? I would. But learning is one thing and taking exams is another. There’s always that fear of what exactly will be on the exam, what areas it will cover, or whether you know enough to be able to write the exam. To help you feel more comfortable, Microsoft Virtual Academy has recently added a whole slew of Jump Start courses that roughly follow the criteria of the different certification exams. Though there’s no guarantee that these Jump Starts will ensure you pass your exam, they are an excellent study companion to help you go through the content and make sure you know and understand the key concepts. Here’s what’s available today (Note: some are not yet available on demand. If you register and tune in on the scheduled dates, you can take the opportunity to interact with the trainers and ask questions!): Windows Store – C# - Programming in C# Jump Start (Exam 70-483: Programming in C#) This developer training course covers C#, Microsoft’s managed C-style language for the .NET Framework. In typical Jump Start fashion, this session will be engaging and demo-rich, providing sample after sample to show simple and complex techniques you can take back to your workplace. This course loosely follows the criteria for exam 70-483 and is tailored for intermediate to seasoned developers looking to bulk up on C# or get a refresher on core concepts and features. - Essentials of Developing Windows Store Apps Using C# Jump Start (Exam 70-484: Essentials of Developing Windows Store Apps using C#) This Jump Start covers developing Windows Store apps using C#.
In typical Jump Start fashion it will be filled with lots of demos and fun, providing sample after sample to show simple and complex techniques you can take back to your workplace. This course loosely follows the criteria for exam 70-484 and is tailored for intermediate to seasoned developers looking to create Windows Store apps. It will provide an overview of creating the user interface layout and structure using XAML, how to implement the AppBar and layout controls, how to deploy a Windows Store app to the Windows Store or an enterprise store, and much more… May 23, 2013 | 9:00 AM – 5:00 PM PT This Jump Start is an accelerated overview of Advanced Windows Store App Development Using C#. It is an intermediate to advanced event to help prepare learners for Microsoft exam 70-485. Windows Store – HTML5/CSS Looking to create Windows 8 apps? This fast-paced Jump Start dives deeper into the advanced programming skills and techniques required to optimize Windows Store apps, so that your apps can stand out from others in the Windows Store. We’ll combine both design and development skills, and you’ll learn about supporting the apps you’ve published to the Windows Store. - Building Web Apps with ASP.NET Jump Start (Exam 70-486: Developing ASP.NET MVC 4 Web Applications) This Jump Start is tailored for experienced application developers interested in leveraging ASP.NET and Visual Studio 2012 to offer modern apps that target modern browsers. Three of Microsoft’s most seasoned ASP.NET speakers provide an accelerated introduction to building web applications with ASP.NET 4.5 and ASP.NET MVC 4, targeting key scenarios like building mobile-ready websites, social web applications, and much more. - Building Apps for Windows Phone 8 Jump Start (Exam 70-599: Pro: Designing and Developing Windows Phone Applications) This Windows Phone app development course is tailored for developers looking to leverage C#/XAML to build cool apps and games for Windows Phone 8.
This platform is another leap forward in Microsoft’s overall mobile strategy, and the developer community has taken notice. Now is the time to embrace your opportunity and start building Windows Phone apps. If you’re a developer or architect who needs to move beyond the hype and come face-to-face with what’s real, you will love this experience. - Build Apps for Both Windows 8 and Windows Phone 8 Jump Start (Exam 70-599: Pro: Designing and Developing Windows Phone Applications) This session compares and contrasts Windows 8 and Windows Phone 8 with a focus on understanding how developers can maximize code reuse when building for both platforms ("code sharing"). This Windows 8 Jump Start training targets developers who have some experience developing for Windows Phone and want to develop apps for both Windows Phone 8 and Windows 8. Through engaging demos, Ben dives into guidance, best practices, patterns and techniques that will help developers deliver apps for both Windows 8 and Windows Phone 8 with maximum code reuse. May 28, 2013 | 9:00 AM – 5:00 PM PT Improve how your team manages test coverage to better mitigate issues throughout your organization’s development process. You can leverage the tools built into Visual Studio 2012 to trace work items and test cases back to business goals and measurable requirements, making testing a valuable part of your application lifecycle management (ALM). This course follows the criteria for exam 70-497 and will provide accelerated preparation for this important exam in the MCSD: ALM track. - Administering Visual Studio Team Foundation Server 2012 (exam 70-496) Jump Start May 29, 2013 | 9:00 AM – 5:00 PM PT Learn how to make Team Foundation Server (TFS) better serve your team processes and the ways you communicate.
If you are the person who sets up and customizes TFS infrastructure, you will learn how to better define the types of work items available and their attributes to fully leverage the TFS platform for application lifecycle management (ALM). This full day of training will provide numerous examples, scenarios and demos. This course follows the criteria for exam 70-496 and will provide accelerated preparation for this important exam in the MCSD: ALM track.
Classic cars provide a unique investment opportunity for the long-term investor, but to really make the most of it, a bit of trend-watching can help increase the return on investment. It’s an old saying that everything in life goes in cycles, and it is no different with the classic car market, though the cycles may be longer than the average investor expects. A Special Type of Investment First, though, one thing that makes buying old cars a unique investment opportunity is that these stand-out vehicles are eye-catching and fun to drive. Owning one is more than just owning a valuable car; it is – or can be – a statement, and often part of a fond memory of a time that has passed in one’s life. Bought It Because You Loved It… If purchased as part of a fond memory or because of a special affinity for a certain car, it may be hard to let go of when it’s time to sell. This is not an ideal situation when buying these machines for investment value, but that doesn’t mean it doesn’t work. It just makes it a bit harder to let go, and at least owning it for a time is enjoyable. Buying Purely as an Investment This is where trend-watching comes into play as a valuable tool for an investor. Classic cars are only going to increase in value as they become more and more scarce, but there are still going to be ups and downs in prices. Adding seasonal trends and long-term trends to your understanding of this market will let you earn the highest return on your investment dollars. Watching seasonal trends will give you an idea of the best time to buy or sell for short-term investing, and it’s fairly basic. Warm weather means summer vacation, car shows, and road trips for many people, so warm months are when demand is highest – and prices are highest then, too. While there are always exceptions to every rule, you are most likely to get the lowest prices during the cold months.
So, typically, you would want to buy when it’s cold and unpleasant outside and sell when demand is high in summer months. Long-term trends are harder to identify when it comes to cars, but you can use a web tool, like Google Trends, or another analytic tool to use Internet searches as a guideline. If you set the tool to show searches for a specific type of classic car, for example, you can see if it is presenting as a downtrend, an uptrend, or if it has flatlined. Nothing Is Written in Stone, but… Ideally, if you see a downtrend of about fifteen or twenty years when you look at the long-term history for a specific type of vehicle, it should be due to begin an upward trend, so buying at a low point in the trend gives you the greatest likelihood of making a profit when you are ready to turn over your long-term investment in a piece of vintage iron.
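One simple way to reduce the eyeballing of a long-term chart to a number is to fit a least-squares line to an exported interest series and read the sign of the slope. A sketch with made-up yearly values (tools like Google Trends let you export such series for a given search term):

```python
def trend_slope(series):
    """Least-squares slope of an evenly spaced series (one value per period)."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical yearly search-interest index for one classic-car model
interest = [80, 74, 69, 66, 60, 55, 52, 50]
slope = trend_slope(interest)
print("downtrend" if slope < 0 else "uptrend or flat")
```

A sustained negative slope over many years is the kind of long downtrend the paragraph above suggests may be due to reverse; a slope near zero corresponds to a flatlined market.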
In today’s Wall Street Journal, the President’s budget director, Peter Orszag, explains that the fiscal challenges facing the U.S. government are almost entirely driven by the rise in health care costs, so that the best way to get the math right (to bring government spending more in line with government revenues) is to pursue major health-care reform. But Peter’s not talking about reform that would reduce spending by reducing access to subsidized health care; he’s talking about expanding coverage and saving money at the same time. How does this work? As Peter explains: The good news is that there appear to be significant opportunities to reduce health-care costs over time without impairing the quality of care or outcomes. In health care, unlike in other sectors, higher quality currently seems to be associated with lower cost — not the opposite… How can we move toward a high-quality, lower-cost system? There are four key steps: 1) health information technology, because we can’t improve what we don’t measure; 2) more research into what works and what doesn’t, so doctors don’t recommend treatments that don’t improve health; 3) prevention and wellness, so that people do the things that keep them healthy and avoid costs associated with health risks such as smoking and obesity; and 4) changes in financial incentives for providers so that they are incentivized rather than penalized for delivering high-quality care. Already, the administration has taken important steps in all four of these areas… But more must be done. To transform our health-care system so that it improves efficiency and increases value, we need to undertake comprehensive health-care reform, and the president is committed to getting that done this year. Once we do, we will put the nation on a sustainable fiscal path and build a new foundation for our economy for generations to come. But wait–more, for less? 
There are lots of skeptical fiscal policy experts out there, as New York Times columnist David Brooks points out: [W]hat exactly is the president proposing to help him realize hundreds of billions of dollars a year in savings? Obama aides talk about “game-changers.” These include improving health information technology, expanding wellness programs, expanding preventive medicine, changing reimbursement policies so hospitals are penalized for poor outcomes and instituting comparative effectiveness measures. Nearly everybody believes these are good ideas. The first problem is that most experts, with a notable exception of David Cutler of Harvard, don’t believe they will produce much in the way of cost savings over the next 10 years… The second problem is that nobody is sure that they will ever produce significant savings. The Congressional Budget Office can’t really project savings because there’s no hard evidence they will produce any and no way to measure how much. Some experts believe they will work, but John Sheils of the Lewin Group, a health care policy research company, speaks for many others. He likes the ideas but adds, “There’s nothing that does much to control costs.”… …and there’s even some nay-saying, or at least back-pedaling, among the very same health care leaders who the President claimed pledged to cut $2 trillion in health care spending, according to a report by Robert Pear which also appeared in today’s New York Times (emphasis added): After meeting with six major health care organizations, Mr. Obama hailed their cost-cutting promise as historic. “These groups are voluntarily coming together to make an unprecedented commitment,” Mr. Obama said. “Over the next 10 years, from 2010 to 2019, they are pledging to cut the rate of growth of national health care spending by 1.5 percentage points each year — an amount that’s equal to over $2 trillion.” Health care leaders who attended the meeting have a different interpretation. 
They say they agreed to slow health spending in a more gradual way and did not pledge specific year-by-year cuts. The Washington office of the American Hospital Association sent a bulletin to its state and local affiliates to “clarify several points” about the White House meeting. In the bulletin, Richard J. Pollack, the executive vice president of the hospital association, said: “The A.H.A. did not commit to support the ‘Obama health plan’ or budget. No such reform plan exists at this time.” Moreover, Mr. Pollack wrote, “The groups did not support reducing the rate of health spending by 1.5 percentage points annually.” He and other health care executives said they had agreed to squeeze health spending so the annual rate of growth would eventually be 1.5 percentage points lower.

The promise of cost savings through major health care reform which includes expanded coverage is oddly (or EconomistMom-ly) similar to the promise of saving money on the family budget by getting a membership to Costco. How so?

- A membership to Costco requires an up-front investment of the annual membership fee for the privilege of shopping there and the potential to reap savings in the future so that your investment (membership) will ideally pay for itself.

- What exactly you are buying the right to in terms of future shopping options is uncertain at the time you pay the membership fee; you don’t know exactly what goods will be available for purchase at Costco over the next year, how stable the selection will be once you find some things you indeed like to purchase, or how great the prices will be compared to the prices at other stores (which don’t require a membership fee).

- Whether you actually save money from your Costco membership depends on how you view/use your Costco option. Will you buy things you would have bought at a more expensive store anyway? Will you buy only what you need and not have to waste any, given the humongous sizes of the things one must put up with at Costco? Or might you end up buying things you would not otherwise have bought? In other words, will the Costco membership actually expand your consumption possibilities, rather than help to constrain, restrain, or ration them? (When one “has to spend money to save money,” which side wins?)

- If you prove disciplined enough with your family budget and that Costco card (buying from Costco only those things you would have bought anyway from other stores), how much of your family budget is actually able to benefit from the Costco savings? What are the tradeoffs in terms of selection/product variety and convenience? Do I often choose the more expensive retail option anyway, because I’d rather pay more and get it more quickly and easily, and get bundles of goods better tailored to my short-term (as opposed to multi-year) consumption needs? (Yes, and I’m sure there are other moms out there who’ve been stuck with a hundred bags of fruit snacks or dozens of boxes of mac and cheese that their kids have tired of long before you’ve gotten through consuming them…) And would I be willing to have the Costco membership work to help me save more money if it required that I give up the option of shopping at the more convenient but more expensive stores? (No.)

- My family’s had our Costco membership for about 15 years now (it began as a “Price Club” membership); why, we’re even “Executive” members now. If you asked me now whether I’ve saved money on net for our family budget by having had the membership and spent the thousands of dollars each year there, I’m not sure what the answer would be (yes or no), and I’m not sure how I’d begin to quantify that even if I had kept track of all my Costco purchases.

So the promise of health care cost savings from health care reform is quite a bit like the promise of family budget savings from a Costco membership. It’s certainly good to have lower-cost options available to us.
But we don’t know exactly what we’ll be able to buy with those options in the future, we don’t yet understand what tradeoffs we’ll face, and we certainly have no guarantees we’ll end up making better choices just because we’re faced with better options. I certainly wouldn’t take any presumed future savings from my Costco membership and go use it to buy a new car–even through Costco.
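The membership arithmetic above can be made concrete with a toy break-even calculation. All the numbers here are made up for illustration (a $55 fee and a 10% average discount are placeholders, not actual Costco figures): the membership pays for itself only if the discount on purchases you would have made anyway exceeds the fee plus whatever extra spending the membership tempts you into.

```python
# Illustrative only, with made-up numbers: does a membership "pay for itself"?
def net_savings(fee, disciplined_spend, discount_rate, induced_spend=0.0):
    """Savings on would-have-bought-anyway spending, minus the fee and
    any extra purchases the membership induced."""
    return disciplined_spend * discount_rate - fee - induced_spend

# $55 fee, 10% average discount: break-even at $550 of disciplined spending.
print(net_savings(55, 550, 0.10))          # 0.0 — exactly break-even
print(net_savings(55, 550, 0.10, 100))     # negative once induced buying creeps in
```

The point of the sketch is that two of the three terms (the discount actually realized, and the induced spending) are exactly the quantities the post says nobody can observe or project in advance.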
The flag of the European Union

The European Parliament is just one cog in the complex machine of the wider European Union. So what are the other institutions? What do they do, and how do they work together?

Although there is no EU "government" as such, the nearest you get to this is the European Commission. Each member state appoints one commissioner. They are each given a policy area to deal with, such as transport. The commissioners have to be approved by the European Parliament before they can take their seats at the round table. And if MEPs think the Commission has been misbehaving, they can table a vote of censure, similar to a no-confidence vote. The Commission's job is to propose laws that are then passed by the Council and Parliament. Commissioners are supposed to "think European" and leave their national allegiances back in London, Paris, Riga or Ljubljana. Whether this actually happens in practice is debatable.

Along with the Parliament, the Council of Ministers is the other organisation that is allowed to pass laws. Unlike Parliament, which is theoretically supposed to represent the citizens of Europe, the Council of Ministers represents national governments. The Council of Ministers (usually shortened to "the Council") is made up of relevant government ministers from each member state, for example environment ministers to debate laws on air quality. The Council votes using a complicated system called qualified majority voting. Basically this means that different countries have different voting weights, depending on their population. It also ensures, however, that EU minnows like Malta or Estonia do not get continually outvoted by the big beasts of Germany, France, the UK and Poland. Unlike the Parliament, the Council also has powers to discuss and vote on certain issues relating to foreign policy. These, however, are done on an "intergovernmental" basis, meaning that unanimous agreement is usually required.
The presidency of the Council of Ministers rotates on a six-monthly basis around the member states.

Often confused with the Council of Ministers, the European Council is the meeting of all the heads of government of EU member states, usually held every six months. Previously an informal part of the EU, it became an official institution following the passing of the Lisbon Treaty in 2009. The Treaty created the position of President of the European Council, lasting for two and a half years, and currently filled by former Belgian Prime Minister Herman Van Rompuy. The European Council can be seen as a "guiding hand" on the workings of the EU, looking at grand plans relating to the Union's future direction, rather than the nitty-gritty of making or passing laws.

And the rest

There are also a number of other organisations that make up the EU family. The Court of Justice makes sure member states are applying EU laws properly. The European Central Bank looks after matters relating to the euro. The Court of Auditors ensures that the EU spends its money correctly. Finally, other institutions include the Committee of the Regions, the anti-fraud office and the EU's criminal intelligence agency.
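The qualified majority voting described above can be illustrated with a toy example. The thresholds used here are the post-Lisbon "double majority" rule (a proposal passes with at least 55% of member states representing at least 65% of the EU population); the five states and their populations below are made-up placeholders, not real figures.

```python
# Sketch of a "double majority" check in the spirit of qualified majority
# voting. Thresholds follow the Lisbon rules; state data is hypothetical.
def qualified_majority(votes_for, all_states, populations,
                       state_share=0.55, pop_share=0.65):
    """votes_for: set of states voting yes; populations: dict state -> pop."""
    total_pop = sum(populations[s] for s in all_states)
    yes_pop = sum(populations[s] for s in votes_for)
    return (len(votes_for) / len(all_states) >= state_share
            and yes_pop / total_pop >= pop_share)

# A hypothetical five-state union: the two biggest states alone hold most
# of the population but fail the member-state count, so the "minnows"
# cannot simply be steamrolled.
pops = {"A": 80, "B": 60, "C": 10, "D": 5, "E": 2}
states = set(pops)
print(qualified_majority({"A", "B"}, states, pops))             # False
print(qualified_majority({"A", "B", "C", "D"}, states, pops))   # True
```

The two-threshold design is what the article alludes to: weight by population, but also require broad support across countries so small states retain leverage.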
I joined the February 14th Women’s Memorial March in Downtown Eastside Vancouver in 1998. At the time, I had just immigrated to Canada. I came escaping from injustice and looking for a safe place to live for me and my family. However, sooner rather than later, I learned about the real Canadian history, and it was very different from the official story that I had been told. I learned about the impact of colonization on the Indigenous people of this land. I witnessed and experienced racism and discrimination. I realized that the history of colonization and its impacts on Indigenous people in Latin America was similar to the impact on Indigenous people in Canada. I learned that colonization has been the most important form of oppression all over the world, as well as the root cause of violence against women. At that time, I also learned that here in Vancouver, many women were going missing and being murdered in the Downtown Eastside area. I did not know who the women were. I only knew what I heard in the news, where women were objectified and judged. However, as I connected with women’s groups in Vancouver and joined my first march, I learned that the majority of women who were missing and murdered were Indigenous women, racialized women, poor women, sex trade workers, and vulnerable women. Women who became vulnerable because of their social location within a hierarchical colonial society. I was shocked to find out the similarities with the missing and murdered women in my home country, Mexico. In Mexico, I used to fight against systemic exploitation, abuse and violence against women. At the time, in the 1990s, many girls and women were going missing and being murdered in Ciudad Juarez, which is located on the Mexican border with the United States. Many of the women who disappeared were young racialized women, working class and poor women, and the majority of these women were factory workers with the "maquiladoras."
The "maquiladoras" were factories that had been established in Ciudad Juarez, Mexico as a part of the North American Free Trade Agreement (NAFTA) and as a part of the Mexican border’s industrialization. In 1990s, there was a wave of attacks that left hundreds of girls and women dead over the course of a decade. At the time, I attended demonstrations and forums; I joined international networks to demand the Mexican authorities to investigate and solve this tragedy. However, this issue was never dealt with and rather a second wave of violence against girls and women came up and a higher number of girls and women went missing and murdered. In 1996, I left my country feeling despair and guilt for abandoning my sisters in the struggle. Back then, I thought that I could not continue witnessing the injustice and that no matter how hard I fought I could not defeat a patriarchal capitalist system that fosters gendered violence. Even thought I left my country, my commitment and my ideals of building a better world have never changed. Since I joined the February 14th Memorial March in 1998, I have been marching every year and every year I march with all my strength and with a deep sadness for every girl and woman who has disappeared, every girl and woman who has experienced sexual violence, every girl and woman who has been murdered and every girl and woman who has resisted. I march because I refuse to be silent. I march for every woman I have worked with, and all of the women who came before me. I march to make sure that I do my part to honor women’s suffering, struggles and strengths.
We see fire engines all the time, but have you ever stopped to think about all of the things that these machines do? Fire engines are amazing pieces of equipment that allow firefighters to perform their jobs and get to fire scenes quickly. The important thing to know about a fire engine is that it is a combination of a personnel carrier, tool box and water tanker. All three components are essential to fighting fires. With different fire departments having varying needs, fire engines come in all shapes, sizes and colors. In this article, we will take a close look at an Emergency One (E-One) pumper/tanker engine and a Pierce ladder truck. We'll also open up all the doors and compartments on these trucks and see what's inside!
4: Teach Text Structure & Reread the Selection

- Gain an awareness and general understanding of what text structures are
- Learn what clues they can use to identify the text structure of a piece of writing

Step 1: Use the Five Text Structures chart to explain what text structures are and what clues students can use to identify text structures.

Step 2: Help students understand the importance of understanding text structure by explaining that a reader who is aware of the patterns that are being used can anticipate the kind of information that will be presented. Example: If we know a selection follows a “compare and contrast” organization, we can expect to read about likenesses and differences between people or things. This will help us connect ideas and remember them.

Step 3: Have students reread “Stopping a Toppling Tower.”

Step 4: Ask students to identify what type of text structure this selection uses (problem and solution). Ask them, “How does the reader know?” They should be able to identify that the first paragraph states that there is a “problem.” The second paragraph states that engineers have found a “solution.” What headings offered clues?
An MP3 player loaded with specially developed audio tracks, b-Calm is an “audio sedation” system designed to help kids with attention deficit hyperactivity disorder (ADD/ADHD) and/or autism screen out sounds that can cause distraction, induce stress, and adversely affect social and academic performance. To do so, the tracks combine two types of sounds: live recordings of nature sounds and white noise. When listening to b-Calm at low volumes in the classroom, students can converse and interact with teachers and other students. Higher volumes help cover up voices and noise, reduce distractions in the classroom, and diminish a child's likelihood of experiencing sensory overload in a loud setting, such as a school gym. Originally invented by a dentist trying to soothe an autistic patient bothered by the loud noises in his office, it was later suggested that with modifications the device could help ADD/ADHD and/or autistic children in non-dental settings. According to the b-Calm website, common results of using the product include a reduction in outbursts from kids on the autism spectrum, a reduction in distraction for students with ADD/ADHD, and improvements in classroom focus, writing, and math comprehension. Teachers who used the product in initial trials supported these claims.
by Sir Arthur Foulkes Sir John Compton was sworn in as prime minister of St. Lucia last week at the age of 81. He had come out of 10 years of retirement to confront the incumbent, 56-year-old Dr. Kenny Anthony. His triumphant return generated speculation in the Caribbean about his intentions as well as a lively discussion on the relevance of age in the political arena. In politics, as in other fields, there are early bloomers and late bloomers, some who never bloom and some, like Sir John, who seem to bloom for a lifetime. Among the spectacular early bloomers in America in the last century was the charismatic but ill-fated John F. Kennedy who reached the very top in 1960 when at 43 he became the youngest person to be elected president. Theodore Roosevelt was only 42 when he became president in 1901, but as vice president he was sworn in to complete the term of President William McKinley who had been assassinated. Both of these relatively youthful leaders left indelible marks on the US and the world. President Kennedy had a shaky start when he approved the 1961 Bay of Pigs invasion of Cuba planned during the administration of his predecessor, Dwight D. Eisenhower. That adventure failed but Mr. Kennedy redeemed himself brilliantly in 1962 when he confronted the Soviet Union’s Nikita Khrushchev in the Cuban missile crisis. The two leaders talked and agreed to pull back from the brink of nuclear war. It was later reported that before his assassination President Kennedy had initiated backdoor diplomatic feelers to normalize US relations with Cuba. Might the world have been a different and better place had he lived? Theodore Roosevelt became famous for his “speak softly and carry a big stick” dictum. He poisoned the fresh stream of self-determination by asserting US hegemony over Latin America, a policy that yielded bloody consequences for many years and plagues relations between North and South up to this day. But there were also positive aspects of his presidency. 
Another relatively young American politician who aspired to the top spot eventually became a rather ridiculous footnote to history after a famous encounter with an older politician. It was in the 1988 debate between vice presidential candidates Dan Quayle and Lloyd Bentsen that the 41-year-old Mr. Quayle responded to the nagging question about his readiness to be president in the event of the death of the president. His response was: “I have more experience than many others that sought the office of vice president of this country. I have as much experience in the Congress as Jack Kennedy did when he sought the presidency.” It was a fatal mistake for Mr. Quayle, who was sadly lacking in charisma and who came to be regarded as rather shallow, to compare himself with the late President Kennedy who was not only charismatic but highly intelligent, articulate and witty. Mr. Bentsen pounced: “Senator, I served with Jack Kennedy; I knew Jack Kennedy; Jack Kennedy was a friend of mine. Senator, you are no Jack Kennedy.” Although Mr. Quayle served as vice president to George H. W. Bush, his dreams of becoming a presidential candidate ended and he became the butt of many jokes because of his apparent intellectual vacancy. Most US presidents in recent times took office in their 50s and 60s with Ronald Reagan being the oldest at 69. Bill Clinton was relatively young at 46. Morarji Desai was at the far end of the age range in 1977 when he became prime minister of the world’s most populous democracy at the age of 81. He is said to be the oldest person ever to become prime minister of any country for the first time. Mr. Desai had fought in the nonviolent struggle for the independence of India from imperial Britain and had become familiar with the inside of more than one British jail. Unlike left-leaning Prime Minister Jawaharlal Nehru, Mr. Desai was a conservative and served for only two years as prime minister. 
He once said, “Life at any time can become difficult; life at any time can become easy. It all depends upon how one adjusts oneself to life.” Morarji Desai obviously adjusted well to the triumphs and defeats of a turbulent life and died in 1995 at the ripe old age of 99. The history of cabinet government in The Bahamas is very short, having started in 1964 when the colony got its first written constitution and first Bahamian head of government, Sir Roland Symonette. Born in December 1898, he became premier at 65. Sir Roland was succeeded in 1967 by Sir Lynden Pindling, who was not quite 37 when he became premier in January 1967. Sir Lynden was born in March 1930. He served as head of government for 25 years and was the first to be styled prime minister. Hubert Ingraham, born in August 1947, became the third Bahamian head of government in 1992 at the age of 45. Perry Christie succeeded him in 2002 at the age of 59. Mr. Christie was born in August 1943. If his party wins the next election, Mr. Ingraham will be the first former Bahamian prime minister to return to office. Sir John Compton has returned to the top post in St. Lucia for the second time. He became chief minister in 1964 at the age of 39, then premier and prime minister. He served for 15 years until 1979 but returned to office in 1982 for another 14 years. He resigned in 1996 and handed over the United Workers Party government to Vaughan Lewis, but Mr. Lewis lost to Dr. Kenny Anthony and his St. Lucia Labour Party in 1997 and again in 2001. It was “at the behest of the people”, said Sir John, that he came back to remove the SLP from power. It appears that Sir John read the people of St. Lucia correctly as they gave the UWP 11 of the 17 House of Assembly seats. They were worried about crime and unemployment while Dr. Anthony and his party were accused of incompetence, arrogance, making unrealistic promises, vilifying opponents and rushing legislation to catch votes.
In his column in The Tribune yesterday, Caribbean expert Sir Ronald Sanders commented on the effect Sir John’s return is likely to have on regional issues including the Petro Caribe oil deal, CSME and the Economic Partnership Agreement being negotiated between Caricom and the European Union. Sir Ronald also mentioned that two Caribbean prime ministers had gone to St. Lucia to campaign for Dr. Anthony. It is going to be interesting when Sir John confronts these two at the next Caricom meeting and lectures them about interfering in the political affairs of a sister Caribbean state. But back to the question of old age and political leadership. This is what Sir John told the Caribbean Media Corporation: “Age is not a factor here; I am not here running for the Olympics. Age is really in the state of mind. I am giving my experience and my intelligence that God gave to me. I am not going for a marathon; I am not going for the Olympics.” There is something in that for other politicians to bear in mind, especially those who like to impress the voters with their physical vitality. The great US World War II leader, Franklin D. Roosevelt, spent the most challenging years of his presidency in a wheelchair, not because of old age but because of the effects of a crippling disease. So political leadership is not necessarily about physical prowess, youth or age but about what one has in one’s head and one’s heart, and about competence, good judgment and integrity.
How important are precision and completeness in learning definitions when expanding your vocabulary?

As an English enthusiast, I’ve taken it upon myself to expand my vocabulary further by using the spaced-repetition software Anki. Some words have more than one definition and I sometimes miss one definition and/or stumble upon the exact wording of a definition when remembering it. For example, the word “fastidious” has the following definitions:

1. Difficult to please
2. Showing or demanding <del>great</del> excessive delicacy or care
3. <del>Having</del> Reflecting a meticulous, sensitive or demanding attitude

As you can see above, I crossed out the words that I mistakenly used in recalling certain definitions. Or sometimes I’ll forget definition #3 entirely. When I make this type of mistake in my studying, I move the flash card back to the bottom of the pile to review again. Am I being too fastidious in my studying? How important is precision in studying vocabulary? Is the general sense of the word enough? Could it be that you have to diligently study the precise meaning of a word in order to absorb a general idea of it? These questions are of particular interest to me as a writer. When asked the definition of a word by a child, I usually cannot give him/her a precise definition of the word, though I can explain it well enough so he/she understands the nuance of the word. I find there are many words that I know intimately, including nuance, without having ever looked up the word in a dictionary. And yet, I don’t want to take the chance of not learning nuance. So I worry: if I don’t memorize all the definitions of words, and precisely as well, will I miss their nuance? Is having a general definition of a word enough? Re-reading this question has made it agonizingly clear how anal-retentive I can be about certain things.
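The back-of-the-pile habit described in the question can be sketched in a few lines. To be clear, this is not Anki's actual scheduling algorithm (Anki uses graduated review intervals); it is only the simple requeue-on-miss behavior described above, and the card names and the `recall` map are made-up placeholders.

```python
# Illustrative sketch of "missed card goes back to the bottom of the pile".
from collections import deque

def review_session(pile, recall):
    """pile: list of card fronts; recall: dict card -> whether it is
    recalled correctly on first sight. Returns the order in which cards
    were finally recalled."""
    done = []
    queue = deque(pile)
    while queue:
        card = queue.popleft()
        if recall[card]:
            done.append(card)
        else:
            recall[card] = True      # assume the review makes it stick
            queue.append(card)       # back to the bottom of the pile
    return done

print(review_session(["fastidious", "laconic"],
                     {"fastidious": False, "laconic": True}))
# → ['laconic', 'fastidious']
```

The sketch makes the questioner's trade-off visible: a stricter definition of "recalled" (exact wording, all senses) simply sends more cards back through the queue, at the cost of longer sessions.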
Something caught my eye a few months ago. International standardized tests in mathematics now put U.S. students at or near the bottom, in the entire developed world. Which is really odd, when you think about it. After all, 2+2=4 in New York, Tokyo, and everywhere else I know. One recent exam (results here) put U.S. students 31st in the world in math -- just a few points higher than the very bottom among the OECD (i.e., developed) countries. One earlier exam, called TIMSS, had U.S. twelfth-graders at the very bottom of the OECD in math. So the government stopped giving that test to U.S. twelfth-graders. Yes, that certainly solves that problem. So why is this? Whatever the reason is, it certainly isn’t spending. In education, as in health, the United States spends more per capita than any other country. The Economist, a conservative British news magazine, offered this illuminating explanation, from Oxford University: “Despite rising attainment levels,” [the Oxford study] concludes, “there has been little narrowing of longstanding and sizeable attainment gaps. Those from disadvantaged backgrounds remain at higher risks of poor outcomes.” American studies confirm the point; Dan Goldhaber of the University of Washington claims that “non-school factors,” such as family income, account for as much as 60% of a child’s performance in school. So because America has the fifth most unequal distribution of wealth in the entire world, America also has some of the worst math scores in the entire world. It’s as simple as 2+2=4. No wonder the 99% is angry; it’s getting to the point where a lot of us don’t even know what “99%” means. Why should we be surprised that the poor can’t count? In Mitt Romney’s America, they don’t count. Honestly, we can’t go on like this. P.S. The highest math scores in the world were in China. Communist China.
The International Federation of Red Cross and Red Crescent Societies (IFRC) today increased its emergency appeal for the floods in Pakistan – calling for 75,852,261 Swiss francs (73.6 million US dollars, 57.2 million euros) in donor support to meet the growing humanitarian needs. The IFRC appeal will help the Pakistan Red Crescent Society to reach over 900,000 people. This assistance includes emergency relief, tents and shelter kits, medical care, clean water and improved sanitation, as well as help to restore livelihoods in the coming months. “The world cannot ignore the crisis in Pakistan,” said Bekele Geleta, IFRC Secretary General, who was recently in Pakistan. “This is one of the worst disasters I have ever witnessed. Millions of people need help, we must all work together to meet the huge challenges that lie before us.” While the floods have claimed nearly 1,500 lives, and forced millions from their homes, huge areas of land in the provinces of Punjab and Sindh remain underwater. Fears of waterborne diseases are growing and Red Crescent mobile medical teams are playing a vital role in disease surveillance, tracking any spikes in communicable diseases such as malaria and cholera. “We have to confront the acute health issues and prevention methods are vital,” says Pakistan Red Crescent Secretary General, Muhammad Ilyas Khan. “Even cholera doesn’t have to be fatal; the key is to spot it early and intervene rapidly.” The Red Crescent’s 27 medical teams are working in the worst affected districts and have treated 44,000 people since the flooding first began. The Red Crescent has also reached more than 300,000 people with relief items that include food, tents and household items. The IFRC’s response to the floods includes several Emergency Response Units (ERU) focused on relief, logistics, water and sanitation, and healthcare. Both the IFRC and the International Committee of the Red Cross (ICRC) are supporting the Red Crescent relief operation. 
The Red Crescent has some 130,000 volunteers in Pakistan and their disaster response teams have been deployed to each affected district.
8.223731 - BRIAN: Symphonies Nos. 20 and 25

Havergal Brian (1876-1972)
Fantastic Variations on an Old Rhyme
Symphonies Nos. 20 and 25

Fantastic Variations on an Old Rhyme, which Havergal Brian completed in August 1907, is one of the surviving portions of an early multi-movement work which occupied him in 1907-08, A Fantastic Symphony. This was a satirical, quasi-programmatical symphony, probably in four movements, erecting a large-scale virtuosic orchestral structure on the basis of the tune and tale of the well-known nursery rhyme, 'Three Blind Mice'. By July 1909 he had recast it into a three-movement work, Humorous Legend on Three Blind Mice. Later he dropped the central scherzo and decided to publish the first and last movements as separate works: the former became Fantastic Variations on an Old Rhyme and the finale became Festal Dance (Marco Polo 8.223481). In late 1912, preparing these pieces for publication, Brian wrote to his friend Granville Bantock that he had 'purged the variations on "mice" of its worst crudities'. This suggests that some material was cut out of the Fantastic Variations; and a letter Brian wrote in 1909 to Herbert Thompson, music critic of the Yorkshire Post, outlines a 'programme' for the Variations (still at that stage the first movement of the Humorous Legend) which in at least one place is at variance with the score as now known. The work was published in 1914, but not heard in public until 28th April 1921, when Henry Lyell-Tayler conducted what was later described as a 'condensed version' with the Brighton Symphony Orchestra at the West Pier, Brighton. This was so successful that it was repeated five times during the following week - and Brian, who was then living in the Brighton area, conducted some or all of these later performances himself. Two years later Sir Dan Godfrey gave the score uncut for the first time at Bournemouth, and in 1934 Sir Donald Tovey conducted it with the Reid Orchestra in Edinburgh.
In his witty and appreciative programme-note (reprinted in Vol. 6 of his classic Essays in Musical Analysis), Tovey correctly identified 'the human feminine element in the saga' as the Farmer's Wife - but commented that he had 'not succeeded in identifying the Agriculturalist as an actor in this music-drama'. This is hardly surprising, as the Farmer does not figure in Brian's scheme, any more than in the nursery rhyme. But what Tovey also failed to note, and did not even suspect, was the presence of the Policeman - for Brian revealed to Herbert Thompson that he had introduced a policeman and the farmer's wife 'to carry on the dramatic idea', and this is indicated in his outline programme. Like Festal Dance, the Fantastic Variations is in E major, with a substantial role for the submediant, C. The bare bones of the tune of 'Three Blind Mice' are stated at the outset in simple orchestration, and then a chuckle from solo oboe (which sounds suspiciously like a quotation from Strauss's Ein Heldenleben) is the signal for the fun to begin. Almost all of Brian's variation-works, as their basic strategy, subject a tune of near-banal simplicity to the most sophisticated panoply that modern harmony and orchestration can provide (in this sense Tovey was right to compare the Fantastic Variations to Dohnanyi's Variations on a Nursery-Rhyme). But, whereas Brian's only previous substantial variation-set, the Burlesque Variations on an Original Theme (1903), is organized into a formal series of separate large-scale character-variations, the Fantastic Variations are essentially symphonic: although it is possible to identify a structure of eight variations and a finale, development is continuous and contrasted material is brought into relation with the main theme. It is perhaps worth observing that this is the only early work of Brian's to reflect an apparent influence of Sibelius.
He may have imbibed this from Bantock, who was one of the Finnish master's most enthusiastic champions in England - but on the other hand some of the most 'Sibelian' passages actually presage works which Sibelius had yet to write! This, the first of two 'chase' sequences, is comparatively short-lived, for it is interrupted by pompous and magniloquent E major fanfares which (if we follow Brian's programme) announce the 'Entry of the Policeman'. This initiates a new section (Con moto e espressione), where this new character 'makes Love to Farmer's Wife (all Caruso)'. The 'feminine element' melody reappears now in E, in an extended and increasingly passionate romantic interlude. The tune soon acquires a florid quintuplet turn (perhaps this was what reminded Brian of the celebrated Italian tenor), and he proceeds to develop this figure in close imitation, spurring the full orchestra to ever greater heights of ardour. Suddenly horns and side-drum strike in, and the chase is abruptly resumed in C major, Allegro vivace. There are affinities here with Sibelius's wintry, saga-style allegros, but as the music develops Brian maintains a headlong momentum while splitting up the orchestration in mosaic fashion, intercutting groups of instruments in a way he was to refine in his mature symphonies. Finally the Nemesis of capture intervenes: the allegro collides with a massive Largamente augmentation of the 'Three Blind Mice' figure on full orchestra (Brian even adds an ad lib organ part, not used in this recording), starting on E flat and moving bodily back into E for a wrathful climax. Out of this, trombones, tuba, and side-drum precipitate the catastrophe: a diminished-7th chord on C sharp, with a downward-slashing descent in woodwind and strings, covering almost the full register of the orchestra in a single bar. 
According to a programme-note for the 1923 Bournemouth performances, this violent gesture represents 'the penalty of execution' - it is unclear whether the mice are losing their tails or their heads. In his 1909 programme Brian had also mentioned a 'march to the scaffold' of which there is now no sign. Instead tremolo basses and soft timpani strokes lead to the solemn finale, the nursery-rhyme tune appearing for the last time in the manner of a regretful chorale before the resplendent final bars, which start in C major but punctually find their way back to E before the double-bar. Fifty-five years separate the Fantastic Variations from Brian's 20th Symphony. This work seems to have been begun in January or February 1962, although Brian laid it aside for a while in April to write his overture The Jolly Miller (Marco Polo 8.223479). Like the overture, the symphony, completed at the end of May, is dedicated to his daughter Elfreda and her husband, who had been staying with the composer and his wife for several months at their home in Shoreham-by-Sea, Sussex. For a while Brian felt he might have written his last work: 'Considering the diversity of Bach's Church Cantatas', he wrote to Robert Simpson towards the end of the year, 'twenty symphonies is a long way behind - but not a bad number'. Indeed, though twelve more symphonies actually remained to be written up to 1968, No. 20 would not have been an inappropriate final work. It is the last of a group of three symphonies which Brian had begun the previous year with No. 18 (also on Marco Polo 8.223479), all of which are in three movements and display a decidedly more 'classical' sense of form and motion than the six one-movement symphonies that had preceded them. 
No. 20, as befits the last member of this group, is the most expansive and fully developed, even though its developmental processes, especially in the first movement, are decidedly more fluid and allusive than the resemblance to orthodox sonata architecture might lead us to expect. It is also for the largest orchestra of the three (triple woodwind plus E-flat clarinet, full brass including tenor as well as bass tuba, harp, a large percussion section including bells, and strings); and though it lacks neither drama nor profundity, its athletic vigour and serene grandeur are very different from the angry and abrasive moods of No. 18. A short but immediately impressive slow introduction establishes the main key as C sharp minor and leads directly to the main Allegro agitato first movement, which is laid out on a plan loosely resembling sonata-form. The muscular, energetic principal subject, with its generally ascending motion, is almost immediately played off against a smoother, more expressive 'second subject' foil, but this plays a comparatively small role in the proceedings: animated and somewhat spiky development of the main subject proceeds throughout the 'expository' opening, before the formal development section arrives. This begins mysteriously, with still, long-held chords and a stealthy, stalking motion in the bass instruments. A lyrical Lento section, with a brief violin solo, intervenes, only to be interrupted by vigorous, highly allusive development of the first theme which does duty for a formal recapitulation. It leads to another meditative slow episode, begun by solo horn. Out of this the opening phrases of the 'second subject' appear on woodwind in calm augmentation, and then distant, evocative horn-calls build up an accelerating fanfare that propels us into a coda where the opening subject is very freely developed, into a jubilant fast march. 
A sudden majestic Allargando finishes off by alluding to the movement's introduction and wrenching the tonality back to C sharp (now major). The expansive and lyrical slow movement ranks among the finest of Brian's later years. Fundamentally it consists of three large spans, all concerned with an initial theme first heard on 'cellos, its rests punctuated by timpani and pizzicato basses. This gives rise to an extended paragraph of flowing polyphony, renewed mid-way by the reappearance of the main theme on the brass with trumpet counterpoint. A bridge passage, beginning with violin and flute solos against low bass sonorities and continuing on strings only, brings round a development of the original polyphonic complex, beginning on solo horn. The pace increases and the ardour takes on a sense of pain and effort, but calm is soon restored and then the third span begins with a musing clarinet solo. Here the main theme is less obviously present except by allusion, until after a dramatic accelerando it reappears in the bass instruments to form a peaceful slow coda. A rapid timpani figure sparks off the finale, which is a highly inventive rondo. Its main subject is stated at once in the lively Allegro tempo, but when a Lento tempo intervenes for the first episode it is the same tune we hear, equally at home at the much slower speed. The episode develops its own material as it proceeds, taking on the character of a kind of slow waltz, with an expressive violin solo. Trumpet fanfares bring back the Allegro tempo, with vigorous and fiery development in compound time before the rondo-theme (fast version) briefly makes its reappearance on trombones. Restoration of the 6/4 metre now brings back the theme in an intervening moderate tempo, with further gentler development in strings and woodwind before a fast coda begins over a variant of the opening timpani figure and rapidly rises to a triumphant climax. Symphony No. 
25, which Brian began at Shoreham in the late autumn of 1965 and completed on 10th January 1966 - just nineteen days before his 90th birthday - has many external similarities to No. 20. Again there are three movements, the outer ones referring to classical sonata and rondo shape; the finale even begins over a timpani ostinato (though there are two timpanists this time). But the character of this later symphony is very different: harder and more martial in the first movement, leaner in the slow movement. This has something to do with key-feeling: the work is described on the title-page as being 'in A minor'. Like the comparably dark No. 18 (also essentially in A minor, though not so designated) it initiates a second group of classically-shaped symphonies, Nos. 25-29, in which the odd-numbered works allude more closely to sonata architecture and proportion, while the even-numbered (Nos. 26 and 28) are freer and more exploratory in form. The first movement, after a chilly chord of A minor on wind instruments and an important reiterated four-note motto in the bass, leaps into life with a wiry, agile, risoluto first theme, scored initially for violas only against a heavy, sinister undertow of bass instruments, but this soon grows through the orchestra with force and determination. A contrasting idea, hardly more than a smoothly ascending scale, initiates a gentler 'second-subject' area; the opening four-note motto briefly interrupts but the ascending scale idea continues to offer lyric expansion until the motto returns again to initiate an eventful development. Basically this proceeds in three waves, subjecting the first theme to increasingly fierce and martial transformation; the first two of these subside into contrasting interludes of calm. Thus the first wave happens upon a new, elegantly lyrical tune for oboe with flute and harp accompaniment. 
The martial struggle around the first subject is then intensified, but this time issues in sinister reiterations of a three-note figure, first in low and then in high registers. The third wave resumes the aggressively contrapuntal development, which this time carries straight into the recapitulation, signalled by the return of the four-note motto. This at first resembles a more massively-scored version of the exposition, but the ascending-scale 'second subject' has virtually disappeared. Instead fragments of it are subordinated to a mysterious episode of nagging triplet rhythms. Out of this the lyrical tune from the development unexpectedly emerges in full flower, in E flat, against a chiming accompaniment of timpani, harp, and percussion. A brief but grandiose coda steers the music back to the home A. The lyricism of the central Andante cantabile, largely in and around E minor, is elegiac in nature, with more than a hint of bleakness. The expressive opening violin melody is eventually to make several returns, like a rondo subject, but at first the music explores other areas, influenced by a short snatch of march-like music in dactylic rhythm (two semiquavers plus quaver). A poignant new idea, first heard on solo oboe, becomes a focus for further restless wanderings. The dactylic rhythm, always present in small motifs, gives rise to brief fanfare-figures, and a climax that subsides before it is fully formed. The opening theme now returns on 'cellos, and is tenderly but sadly developed in the strings. A more defined march-music takes over; a solo violin introduces another appearance of the opening theme, now on solo flute. Violin, then strings, work up to another brief climax; then woodwind subside to the final return of the main theme, initiating a brief but impressive coda in which some measure of serenity seems to be achieved. 
The comparatively short finale is a scampering, scherzo-like rondo, touched off by ostinatos in bassoons and two sets of timpani, with the main theme in solo clarinet. This rondo subject is reshaped at each appearance, while the episodes are more lyrical. First comes a cross-rhythm development of the opening theme itself, before the timpani return in more recognizable rondo shape. Then a Lento episode brings a quiet hint of folksong, with a tender interlude for oboe and divided cellos. A little woodwind cadenza leads back to the rondo music, but not for long. At the same tempo, we hear a different and irreverent snatch of folksong from a bassoon; this carries straight into the final appearance of the rondo music in a superbly unbuttoned coda.
For an undisputed artistic triumph, “Graceland” is a hard album to get comfortable with. Even those who admire Paul Simon’s artistic bravery and foresight acknowledge that there was something troubling about a wealthy American star appropriating African and Latin American pop. Questions about the nature of authorship and the relationship between wealthy nations and impoverished ones raised by “Graceland” in 1986 still bother us. Was it ethical, for instance, for a famous foreigner to record South African musicians mid-jam, without their knowing that the tape was rolling, and then build songs of his own from those tracks? What if those musicians later insist they’re thrilled about what happened? Does it complicate matters to realize that these musicians were second-class citizens in their own country, one groaning under the weight of apartheid? How could Simon approach them as equal partners when their own government demanded that they treat him as a superior? Simon believes his collaboration with the artists who provided the raw materials that became “Graceland” was genuine, and that the joy of making great music erased predispositions and cultural boundaries. His critics have pointed out the disturbing similarities between the making of “Graceland” and the basic dynamics of colonialism: Get the good stuff from Africa, bring it back to the New World and polish it up for presentation in the global market. Listening to “Graceland — 25th Anniversary Edition,” a two-CD, two-DVD boxed set to be released by Legacy Recordings on Tuesday, it is tempting to think that the ends justified the means. Few albums of any era have aged better. Every song is redolent with mystery: The dusty township slink of “The Boy in the Bubble,” the bottom-end rumble of “Diamonds on the Soles of Her Shoes,” the prairie-fire crackle of “Under African Skies” and the jeweled twinkle of “Crazy Love, Vol. II” are all as evocative in 2012 as they were in 1986. 
“Graceland” is a masterpiece of arrangement, but it’s also a masterpiece of tone and timbre — consider, for example, Bakithi Kumalo’s bass on “You Can Call Me Al,” splashing from note to note like a bullfrog on a pond, or the unearthly whooping of General MD Shirinda’s possessed Shangaan backing vocalists, the Gaza Sisters, on “I Know What I Know,” or Simon’s angelic lead vocal on “Crazy Love.” Not only did nothing sound like “Graceland” when it was made, nothing — including Simon’s subsequent recordings — has sounded like “Graceland” since. No American or African musicians have been able to fully reconstruct its peculiar magic. This is a profound record, but it’s also a party disc, made for barbecues, dances and long summer drives. And debates. Those haven’t gone away. “Graceland — 25th Anniversary Edition,” packaged with a documentary about Simon’s South African trip and a DVD of an epochal 1987 concert shot in Zimbabwe as apartheid was beginning to crumble, feels very much like Simon’s intervention in the controversy that always has surrounded the record. The documentary — “Under African Skies,” directed by Joe Berlinger — is a fascinating look at an artist who didn’t exactly know what he was doing when he journeyed to Africa to try to capture a sound he’d heard on a compilation cassette. Simon’s gradual arrival at global consciousness and his dawning awareness of his role in world history makes him a sympathetic character. His prickliness and arrogance are apparent, but so are his intelligence and his love of popular music, and the glory of pure sound. Footage of him jamming in the studio with African artists — a bit tentative and wary, but caught up in the excitement of the groove — captures the thrill of Simon’s encounter with music that felt at once exotic and deeply familiar. The documentary shrewdly reframes the debate around “Graceland,” focusing on Simon’s violation of the African National Congress’ cultural embargo. 
Anti-apartheid activists had, for two decades, asked foreign artists to show solidarity with their struggle by refusing to play in South Africa; the making of “Graceland” violated the embargo. While this transgression was a consequential one in the era of “Sun City,” it is now an easy argument for Simon to win. Simon vs. the ANC pits an artist with a vision against ideologues telling musicians what to do, and with P.W. Botha a distant memory, history has justified the artist’s act of defiance. Berlinger interviews the musicians extensively and they all gush about “Graceland” and Simon, and the opportunities opened to them by the record’s success. “Under African Skies” casts Simon and his musicians as a bunch of renegades, thumbing their noses at the United Nations and other bureaucrats, writing their own rules in the extraterritorial areas that pop music affords. Even South African politician Dali Tambo, an embargo supporter and critic of “Graceland,” ends the film by acknowledging Simon’s greatness and giving him a bear hug. Yet “Under African Skies” does not even attempt to address one of the most damning charges leveled against “Graceland”: that Simon stole the music to “All Around the World or the Myth of Fingerprints” from Mexican-American folk-rock band Los Lobos. “Myth,” the closing track on the album, came from a different session than the ones chronicled in the movie. But according to Los Lobos saxophonist Steve Berlin, Simon’s strategies weren’t much different than they had been in South Africa. He heard something he liked, pressed play and wrote lyrics to accompany it. Later, when “Graceland” came out, the members of Los Lobos were dismayed to read “Words and Music by Paul Simon” in the liner notes. Although Simon vociferously disputes Los Lobos’ version of the recording of “The Myth of Fingerprints,” he has never been able to suppress or dispel this story. 
It must irk the image-conscious singer-songwriter that the “Graceland” Wikipedia page discusses the controversy and the competing claims of authorship. The spare second disc of “Graceland — 25th Anniversary Edition” contains a version of “The Myth of Fingerprints” that clarifies nothing. It’s a bit harsher than the mix that made the album, but none of the basic components are different. This version of “Myth” was initially released on the 2004 “Graceland” reissue. Two other tracks — a nifty bass and voice version of “Diamonds on the Soles of Her Shoes” and a brief, repetitive demo for the a cappella “Homeless” that screams out for the inventiveness of Ladysmith Black Mambazo — were also on that collection. The only new tracks in the boxed set are intriguing but inessential instrumental sketches of the songs that became “You Can Call Me Al” and “Crazy Love” and a nine-minute oration by Simon about the making of “Graceland” (the song) and the cultural exchange that underpins its composition. As always, it is mesmerizing to listen to Simon discuss his process and draw connections between musical styles. As always, his discussion of his collaborators has a slight whiff of condescension. If ever a classic album did not need remastering, “Graceland” is that album: The original 1986 version is pillowy and welcoming, with every lyric as clear as the Caribbean Sea. That remastering has been done anyway. I played the “Graceland — 25th Anniversary Edition” version and my initial-pressing CD back to back on my computer, and heard very few differences besides loudness. The boxed set also appends the videos for “The Boy in the Bubble” and “You Can Call Me Al,” which you’ve probably seen hundreds of times, and a delightful “Saturday Night Live” performance of “Diamonds,” complete with a song and dance routine by Ladysmith Black Mambazo, that is also covered in the documentary. 
In short, this is nothing like Bruce Springsteen’s “The Promise,” a revisitation of an old album complete with illuminating unreleased tracks and outtakes. Simon is either being parsimonious with what he has in the vault, or the other demo recordings for “Graceland” have vanished into the mists of time, like that initial compilation cassette that inspired the project. It is worth noting that Simon does give full writing credit to many African musicians whose songs and music he used as the basis of “Graceland”: accordionist Forere Motloheloa, singer Shirinda, the Boyoyo Boys. “Under African Skies” does not attempt to downplay their contributions to “Graceland,” which were, at the very least, substantial. This was not the first time Simon went globetrotting to find material: “El Cóndor Pasa,” a 1970 Simon and Garfunkel hit that foreshadowed the experiments on “Graceland,” was sung over an instrumental borrowed from a Peruvian folk group. Since 1986, Simon has returned often to the strategies he used when making “Graceland”: “The Rhythm of the Saints,” his follow-up, gave the “Graceland” treatment to Brazilian music. In a sense, Simon was ahead of his time: The curatorial approach he took to assembling full tracks from scraps of songs and pre-existing recordings is closer in execution to that of Kanye West than it is to any of his contemporaries. Assembling and recontextualizing bits and pieces of gathered material, Simon is tacitly arguing, is as legitimate a form of authorship as banging out songs on an acoustic guitar. (That Simon is an outstanding songwriter has never been in doubt — this is the man who came up with “Mrs. Robinson.” It is impossible to dismiss him as a dilettante or a pilferer.) Simon’s search for the roots of rhythm forced his listeners to reassess the boundaries of songwriting and interrogate the relationship between the author and his source. Those controversies aren’t yet close to being settled; perhaps they never will be. 
But two decades after the release of Nelson Mandela, that accidental provocation may stand as Simon’s most revolutionary act.
Mapping the Global Muslim Population Middle East-North Africa Overview The Middle East-North Africa region, which includes 20 countries and territories, is home to an estimated 315 million Muslims, or about 20% of the world’s Muslim population. Of these, approximately 79 million live in Egypt, meaning that about one-in-four (25%) Muslims in the region live in Egypt. More than half the countries in the Middle East-North Africa region have populations that are approximately 95% Muslim or greater. These include Algeria, Egypt, Iraq, Jordan, Kuwait, Libya, Morocco, Palestinian territories, Saudi Arabia, Tunisia, Western Sahara and Yemen. Other countries in the region also have populations with a high percentage of Muslims, including Syria (92%), Oman (88%), Bahrain (81%), Qatar (78%), United Arab Emirates (76%) and Sudan (71%). Although most of the citizens of the Persian Gulf countries of Oman, Bahrain, Qatar and United Arab Emirates are Muslim, these countries have a substantial number of non-Muslim workers who are not citizens; this brings down the total percentage of their populations that is Muslim. North Africa is home to the three largest Muslim populations in the Middle East-North Africa region: Egypt (79 million), Algeria (34 million) and Morocco (32 million). Other countries in the region with large Muslim populations include: Iraq (30 million), Sudan (30 million), Saudi Arabia (25 million), Yemen (23 million), Syria (20 million) and Tunisia (10 million). The population of the remaining 11 countries and territories in the region – Libya, Jordan, Palestinian territories, United Arab Emirates, Kuwait, Lebanon, Oman, Israel, Qatar, Bahrain and Western Sahara – totals about 31 million. The Palestinian territories are home to about 4 million Muslims. In addition, Israel is home to roughly 1 million Muslims, slightly more than Qatar. 
Although Israel has a Muslim population similar in size to those of some western European countries, Muslims constitute a much larger portion (about 17%) of its population. By comparison, the United Kingdom is home to between 1 million and 2 million Muslims, about 3% of its total population.
Most SEOs are constantly in search of effective tools to monitor their website rankings in search engines for target keywords. Keyword rank tracking is a common SEO practice among internet marketers and bloggers who want to find out how their web pages are appearing in search engine results for the queries they target. A good rank checker gives you an idea of how your website rankings change over time and how your optimization efforts are progressing against your competitors. Everything starts with rankings! There is no traffic and there are no conversions without them; the higher your website ranks, the more traffic you can get. The SEO market provides a great number of tools to check your keyword rankings and handle your daily tasks. In this post we will show you the smartest rank tracking tools on the market. #1. SE Ranking. SE Ranking is my favorite SEO tool from this list; we use it every day and recommend including it in every search marketer’s tool chest. It lets you check local and mobile rankings, get extra data on your selected keywords (search volume, AdWords suggested bid, KEI parameter and the number of results returned by Google), spot the top 10 competitors in order to track their results, and add extra users to your account with access to only the data you want them to see. With its simple, easy-to-use interface, SE Ranking lets you check site positions in the Google, Bing and Yahoo search engines. After getting ranking results, you can export them to White Label reports in .pdf, .csv, .html and .xls formats. My favorite features of SE Ranking are the ability to check an unlimited number of websites and a great number of keywords at once, and the convenient overview of which search queries are at the top, which have left the top 10 and which have entered it. You can check out the software with a free 14-day trial. #2. 
Web CEO. Web CEO is a great rank checker that checks not only your website’s keyword rankings but also those of your competitors. The tool tracks all kinds of rankings – organic search results, vertical listings and ads – and covers your website on both desktop and mobile devices. I love Web CEO for the option to check YouTube rankings, as YouTube is now the second-largest search engine in the world and can help promote your company. All results can be provided in PDF format with your logo and brand name, which adds a touch of professionalism and elegance in the eyes of your clients. The tool is free to use, but with some restrictions. #3. Rank Watch. Rank Watch is one of the most powerful and accurate tools for calculating your keyword rankings. It not only shows your current, highest and initial rankings with search volumes, but also keeps your historical rankings – a huge benefit for SEO experts who need historical and competitive intelligence for certain search queries. With Rank Watch, you can track site rankings as precisely as possible, down to the specific city you want to track. The tool offers a wide range of fully customizable, branded reports with your company logo and brand name, and the ability to schedule reports for delivery on a daily, weekly or monthly basis. To check out the best features, you can start a free 14-day trial. #4. RankTrackr. RankTrackr has one of the best layouts you will find in a rank tracking tool. It uses advanced approaches to retrieve local ranking results with precise accuracy for a given region, city or zip code. It also gives you a great ability to filter for the keywords that are making the most progress for your website. I love RankTrackr for its ability to track every form of ranking, such as organic search results, local map results and images. You can try out a 10-day free trial without obligations. #5. 
Rank Tracker. Rank Tracker is a free tool with one of the smartest interfaces of any rank tracking tool I have used. It has a simple, easy-to-use dashboard that displays how all your search queries rank in search engines and how they have changed over time. If you need to optimize your site, it also includes a great keyword suggestion tool that helps you uncover the most profitable keywords. One of its greatest features is that Rank Tracker monitors site rankings on all localized versions of each search engine, and even on additional ones like Yandex, Ask.com and Alexa. Over to you. Now you have a clear idea of which rank tracking tools to put to work. The quicker you work, the quicker you get results. I can’t deny that there are other SEO tools that can help you monitor site rankings in search engines; you can look through the suggested tools and check out their best features. Debunking CDN SEO Myths. Let’s start by eliminating some common SEO myths. None of these are true: - Many sites on a single IP are bad for SEO; in this Google Webmasters official forum discussion the official Google rep stated “We generally do not treat sites hosted on CDNs any differently”. - CDNs create duplicate content; each copy of your content has exactly the same URL, so Google’s crawlers will not see multiple copies regardless of which location serves the content when they crawl it. - Bot blocking will stop Google’s crawlers; bot blockers only block bad bots; they never block crawlers from recognized search engines. - CDNs will hurt my ability to rank for my geographic location; the IP address is not the primary signal Google uses to determine the location of the server that hosts your site; Google first looks at your Webmaster Tools setting and TLD country code. CDNs also whitelist their server locations to prevent any localization errors. CDN Effect on Page Speed. We all know the importance of reducing page load times and increasing page speed. 
Moz has been very clear about how website speed impacts search ranking. What many do not realize is that what really matters is “Time to First Byte” (TTFB). Using a CDN will not improve your SEO unless you optimize not only how long it takes to load the first byte, but also what you load. Ilya Grigorik, developer advocate on Google’s “Make the Web Fast” team, rejected a study claiming TTFB does not matter, explaining: “It’s not only the time that matters, but also what’s in those first few bytes… Of course it doesn’t matter if you flush your ‘200 OK’ after 1ms vs 1s…. But if you know what you’re doing and craft the response right, then it can make a huge difference”. The primary cause of slow TTFB is the processor time required to dynamically generate HTML pages. Sites using any database-driven CMS (WordPress, for example) dynamically generate your home page for every new visitor. An excellent solution is to classify the HTML components that are static and have them delivered directly from a CDN, with no processing and from the nearest possible location. Some CDNs use advanced caching algorithms to identify and cache more HTML statically, thereby reducing the time and amount of HTML that must be dynamically generated. For example, Incapsula wrote in CDN SEO Benefits and Advantages: At Incapsula we see a double (and even triple) digit site speed improvement among the websites using our service. This improvement is achieved not only by CDN content Caching and Geo Distribution capabilities, but also by clever uses of existing assets. Namely, we will automatically Minify your source code, GZip your files and even Cache your “Un-Cacheable” dynamically generated content (especially useful for websites with dynamic catalogs, on-site search features and etc). As a result your website will load faster, achieve higher SERP rankings and provide better overall User Experience, thus also improving Bounce and Conversion rates. 
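To make the TTFB discussion concrete, here is a minimal sketch of how you could measure time to first byte yourself, using only the Python standard library. It measures against a throwaway local server purely for illustration; in practice you would point the request at your own site before and after enabling a CDN (the handler, host and port here are assumptions, not part of any tool mentioned above).

```python
import http.server
import socket
import threading
import time

def start_server():
    # Throwaway local HTTP server on a random free port, just so the
    # measurement below has something to talk to.
    server = http.server.HTTPServer(("127.0.0.1", 0),
                                    http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

def measure_ttfb(host, port, path="/"):
    # TTFB = time from sending the request until the FIRST byte of the
    # response arrives; it includes connect time plus server "think" time.
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # block until the first response byte shows up
    return time.perf_counter() - start

port = start_server()
ttfb = measure_ttfb("127.0.0.1", port)
print(f"TTFB: {ttfb * 1000:.1f} ms")
```

For a quick check against a live site, the same idea is available from the command line as `curl -o /dev/null -s -w "%{time_starttransfer}\n" https://example.com/`.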
More advanced CDNs use various methods of compression to automatically Minify your source code and GZip your files for additional reductions in load time. CDNs can be used to improve on-page SEO and search rankings, but only if you choose the right CDN and take advantage of faster TTFB, reduced dynamic HTML, and increased compression. Throughout your journey to get an ever higher conversion rate, you run a lot of tests. Everything from the color of your CTA button to the font and size of your headline can impact the conversion rate of a landing page. But what should you be testing to get the best results the fastest? IMHO I would recommend testing things that have already worked for other marketers’ landing pages. I always find this to be a much better place to start than my initial ‘gut reaction’ – as, inevitably, there is a test somebody has already run that proves that feeling to be incorrect. In this infographic from Wishpond I’ll show you 14 stats-backed techniques you can test to improve the conversion rate of your landing pages. As they’re already proven techniques, it will be easy for you to use them to get some quick wins.
Jaylin Walker got in fights. But he didn't usually start them. Someone would flick his neck. Then fists were thrown. Then Jaylin would get suspended. After four suspensions, Benito Middle School asked Jaylin, then 14, to transfer. Inches shy of 6 feet tall, he was called too big, too violent. His mother, Latwaska Hamilton, said her son suffers from anger issues and has special learning needs. He needs extra attention and patient teachers. But Jaylin was asked to transfer, she said, because he is black. "Young African-American males are already assumed to be a threat," Hamilton, 37, said of the punishments, which happened during the 2012-13 school year. "Jaylin just looked the part, and he was treated that way." Hamilton's concern that her son's punishments were harsher than necessary echoes a complaint filed this month with the United States Department of Education's Office for Civil Rights. Marilyn Williams, a retired teacher and Tampa activist, alleges that the Hillsborough County School District discriminates against black students by subjecting them to harsher penalties than white students. She also claims students in lower-income schools, which are predominantly black, are denied access to experienced teachers. Her complaint triggered a response from the federal agency, which asked the district to answer 42 questions related to the allegations that it shortchanges minority students. Among other things, the investigators wanted copies of disciplinary policies and detailed information on how they are carried out, criteria for referring a student for discipline, examples of positive behavior programs and procedures for staff training. Asked to answer all the questions in 15 days, the district responded that it needed 60 days. This issue isn't unique to Hillsborough County. For years, advocates have called for an end to the so-called "school-to-prison pipeline," a nationwide trend that funnels children into the criminal justice system for minor offenses.
The issue has cropped up in New York, Chicago and parts of Florida. Last year in Broward County, more students were arrested on campus than in any other district in the state, largely for misdemeanors like marijuana possession or spray painting, according to a Dec. 2, 2013, New York Times article. "Children need education, not incarceration," said Dr. Jennifer Morley, a nonprofit consultant in Tampa and an American Civil Liberties Union of Florida volunteer. Morley argued that children are more likely to drop out and never pursue profitable careers if they're saddled with harsh punishments that remove them from the classroom. Minority students suffer under "zero-tolerance policies" and people shouldn't be penalized "for stupid stuff they did as kids," she said. "The data's there," Morley said of the complaint, "and the district's going to have to respond to it." Frustration. Disappointment. Anger. These words swing through Marilyn Williams' mind whenever she thinks about how black students are treated in the district. That's why she filed the complaint. After earning a master's degree in conflict resolution and teaching in different schools outside the state, Williams moved to Florida in 1999. She spent a few years working with the local NAACP, which she said allowed her to gather the information necessary to challenge the system. After a while, Williams started noticing small things. Black children who had to earn a teacher's trust. Counselors who weren't as patient. An increase in school resource officers and violent incidents. Then she looked at the Florida Comprehensive Assessment Test scores in Hillsborough County broken down by race. And she was astounded. In 2013, 37 percent of black students in the third grade scored at or above the minimum achievement level for reading. That number dropped to 34 percent for eighth-graders and 29 percent for 10th-graders. "Unless one is willing to accept the belief that black students are intellectually inferior ...
then one must question why the district has consistently had poor academic performance outcomes for black students," Williams wrote in her complaint. Williams also included a report from the Advancement Project that suggested harsh policies disproportionately affect students of color. For example, black students comprised 21 percent of the Hillsborough County school population but accounted for 50 percent of out-of-school suspensions during the 2011-12 school year. "That kind of blew my mind," Williams said. Morley, the nonprofit consultant, said the over-disciplining aspect of Williams' complaint is a valid issue. But she doesn't believe as much can be done about black children getting more effective teachers. "Teachers can do what they want," she said. That doesn't mean the School Board isn't trying to steer more educators to high-needs schools, said Stephen Hegarty, district spokesman. For the past two years, Superintendent MaryEllen Elia has written to teachers, asking them to follow their "sense of idealism" and help fill vacancies at high-needs schools. "I am asking teachers to consider getting out of their comfort zone to teach at a school where the challenges are great," Elia wrote. "There are children and families at those schools who need you." Teachers who relocate get a 2 percent raise the first year and 5 percent after that, Hegarty said. Those who move to the highest-poverty schools get a $1,000 recruitment bonus and a $2,000 retention bonus after the first year. "There's no way of telling why they transferred," Hegarty wrote in an email, referring to the teachers. "However, we gave out 65 recruitment bonuses last year." The Hillsborough Classroom Teachers Association declined to comment on the issue. The Office for Civil Rights also had no comment on the broad scope of the investigation. In its own response, the district questioned "the standing" of complainant Marilyn Williams.
But, according to the federal agency's website, anyone can file a discrimination complaint. No matter what happens, Latwaska Hamilton, the mother concerned about her son's discipline issues, sees the complaint as progress. "Because the situation is God awful," she said, "something different has to happen."